Does the BRK command set the I flag, as an actual IRQ would?
All interrupts set the 'I' flag - even NMI and RESET.
What's your source on the RESET interrupt setting the flag?
WedNESday wrote:
What's your source on the RESET interrupt setting the flag?
Tests performed on the real hardware, though I forget who did them (it was somebody in here, though).
Quietust wrote:
Tests performed on the real hardware, though I forget who did them (it was somebody in here, though).
I witnessed the same behaviour on real hardware with respect to the I flag being set on RESET. My devcart is currently toast, so I can't test another piece of unverified info: that RESET takes 6-7 cycles. Plus I haven't thought of a way to test that yet.
Someone who has a working devcart should try this out. On reset:
- Disable the APU frame IRQ.
- PHP PLA and if the I flag is set, play a tone.
- CLI and loop forever.
If the reset button produces a tone every time, then reset turns on the I flag.
Reset takes longer than 7 cycles because it takes a long time for the player to press and release the reset button. Or are you talking about something more subtle?
tepples wrote:
Reset takes longer than 7 cycles because it takes a long time for the player to press and release the reset button. Or are you talking about something more subtle?
I think they mean it takes 7 cycles, starting from when the RESET signal is de-asserted. When a user holds the reset button, the RESET signal is asserted, and the CPU can be considered "in reset", a condition that persists until the RESET signal becomes de-asserted. Though I don't know how to test that reset lasts 7 cycles without manually clocking your CPU.
I ran a test like tepples suggested. My devcart has the vectors in battery-backed RAM, so running code at reset requires temporarily patching the vector. The test code first clears the I flag, then patches the reset vector and waits. On reset, it prints the status of the I flag.
To verify that the test itself is doing what is expected, before waiting for reset it also sets/clears the D flag and later prints this along with the I flag. The D flag's value is preserved, showing that the setup and test code are working properly.
Code:
main:
lda #$40 ; disable frame irq
sta $4017
nop
cli ; clear I flag
cld ; clear/set D flag
;sed
jsr patch_reset_then_wait
reset:
lda #$40 ; disable frame irq
sta $4017
;cli ; uncomment to be sure test is correct
php ; print I and D flags
pla
and #$0c
jsr print_a ; always prints $04
jmp forever
irq: lda #$ff ; prints $FF $FF if irq unexpectedly occurs
jsr print_a
lda #$ff
jsr print_a
jmp forever
One thing I remember hearing was that on reset, the APU frame counter begins running 9 to 12 cycles before program execution starts. Since the APU is part of the CPU, I would suspect that it would start in sync with the CPU starting.
I cannot think of a software solution that could be used to measure RESET length, unless the lockout chip's exact timing is known (which I doubt). A hardware solution could be done by manually controlling RESET and, upon releasing it, watching M2 and seeing how many times it is clocked before $FFFC and $FFFD are read.
teaguecl wrote:
tepples wrote:
Reset takes longer than 7 cycles because it takes a long time for the player to press and release the reset button. Or are you talking about something more subtle?
I think they mean it takes 7 cycles, starting from when the RESET signal is de-asserted. When a user holds the reset button, the RESET signal is asserted, and the CPU can be considered "in reset", a condition that persists until the RESET signal becomes de-asserted. Though I don't know how to test that reset lasts 7 cycles without manually clocking your CPU.
All interrupts function pretty much the same on the 6502... reset, NMI, IRQ, and BRK are all interrupts. Judging from the 6502 chip schematic, the opcode 00h is forced into the opcode latch for ANY interrupt. Think of reset as a high priority level sensitive NMI. One small quirk of reset is that it still tries to push the return address onto the stack, but R/W is forced high, preventing the write. The SP still decrements and all that just like it does on any other interrupt.
The reset thus should take as many cycles as any other "real" interrupt. The only difference between NMI, reset, and IRQ is the enabling circuitry and edge detection (on NMI only) and the I flag of course... all of which are done before the interrupt signals are fed to a priority encoder which selects the desired vector. fffeh is the "default" vector, which is why IRQ and BRK share the vector. The B flag is not a flag at all, but a signal from the opcode fetch circuitry.
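Since bit 4 is a signal rather than a stored flag, the pushed byte is easy to model: bit 5 always reads back as 1, and bit 4 is set only when the push comes from BRK or PHP. A toy sketch in Python (names are mine, not from any real emulator):

```python
def pushed_status(flags, from_brk_or_php):
    """Compose the byte the 6502 pushes for P.

    flags holds the six real flags (N V . . D I Z C).
    Bit 5 always reads as 1; bit 4 ("B") is not stored
    anywhere -- it is 1 only when the push comes from
    BRK or PHP, and 0 for IRQ/NMI.
    """
    b = 0x10 if from_brk_or_php else 0x00
    return (flags & 0xCF) | 0x20 | b

# BRK pushes bit 4 set; a hardware interrupt pushes it clear.
assert pushed_status(0x00, True)  & 0x10 == 0x10
assert pushed_status(0x00, False) & 0x10 == 0x00
```

This is why an IRQ/BRK handler must look at the copy of P on the stack: there is no register it could read bit 4 from.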
So, getting back to the APU thinger, the seemingly random number of cycles that it delays for the APU is most likely caused by when the reset signal is de-asserted during the interrupt sequence... theoretically you could get a lag of up to 6-7 cycles. The APU's counters are probably just connected to /RST like everything else so when reset is deasserted there, it happens "instantly" with respect to the 6502 core.
So if I understand that right, when RESET is asserted, the CPU handles the interrupt repeatedly, over and over, without executing any other instructions, until the RESET line is released, at which point it finishes the current interrupt sequence, executes one more interrupt sequence (since RESET was low for part of the previous sequence), then finally starts executing instructions. This would result in a delay of anywhere from 8 to 14 clock cycles (11 on average).
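The arithmetic above checks out: if RESET is released during cycle k of the repeating 7-cycle sequence, the CPU finishes the remaining 7 - k cycles and then runs one more full sequence. A quick check in Python:

```python
# RESET released during cycle k (0-6) of the 7-cycle interrupt
# sequence: finish the current sequence (7 - k cycles), then run
# one full 7-cycle sequence before fetching the first instruction.
delays = [(7 - k) + 7 for k in range(7)]

assert min(delays) == 8
assert max(delays) == 14
assert sum(delays) / len(delays) == 11
```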
There are still a couple of things that don't make sense to me:
1. It was measured that if the system was reset, the stack pointer was decremented by 3, and that on powerup it was initialized to $FD (with other registers initialized to $00). If RESET were processed multiple times, would that not cause SP to change repeatedly, ending in an unpredictable state?
2. Interrupts can only be serviced after an instruction completes (i.e. IRQs and NMIs won't occur at all if a bad opcode (HLT) is executed). Why, then, can RESET be serviced when the CPU is locked up?
Testing #1 would be easy: On reset, skip 'ldx #$ff; txs'; instead, play a tone with $4002=SP and $4003=1.
I think RESET can be serviced after 'stp' because unlike 'brk', IRQ, and NMI, RESET loads all the control unit state registers with their initial values.
I have tested reset behavior and reported the result that the stack pointer is decremented by 3, but that the stack is not modified. Maybe the CPU clock is disabled as long as reset is asserted.
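Putting the schematic-based description above together with these measurements: the three suppressed pushes still decrement SP, and then the vector is fetched. A toy Python model (the function name and the $8000 vector are mine, for illustration only):

```python
def reset_sequence(sp, memory):
    # The reset sequence mirrors BRK/IRQ, but R/W is forced
    # high: the three "pushes" (PCH, PCL, P) become reads, so
    # the stack is untouched while SP still decrements.
    for _ in range(3):
        sp = (sp - 1) & 0xFF
    # Fetch the reset vector from $FFFC/$FFFD.
    return sp, (memory[0xFFFD] << 8) | memory[0xFFFC]

mem = {0xFFFC: 0x00, 0xFFFD: 0x80}  # hypothetical vector $8000
sp, pc = reset_sequence(0x00, mem)
assert sp == 0xFD      # $00 - 3: matches the observed power-up SP
assert pc == 0x8000
```

Starting from an SP of $00, three decrements land on $FD, which matches the observed power-up value; a warm reset from any SP likewise lands 3 lower.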
Does anyone know of any games that actually use the BRK opcode? I can't imagine so, as the vector is the same as the sound's, unless you wanted to prematurely process sound code...
BRK could be useful as a space-saver; using the second byte as a selector allows calling one of 256 subroutines using two bytes, or one subroutine using only one byte.
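Rough arithmetic on that space saving (the ~30-byte handler size is a guess on my part, not a measured figure):

```python
# JSR abs costs 3 bytes per call site; BRK plus its signature
# byte costs 2. The shared dispatch handler is a one-time cost
# (assumed ~30 bytes here -- an estimate, not a measurement).
def bytes_saved(n_calls, handler_size=30):
    return n_calls * (3 - 2) - handler_size

assert bytes_saved(30) == 0    # break-even at 30 call sites
assert bytes_saved(100) == 70  # net saving with 100 call sites
```

So the trick only pays off in code with many such calls, quite apart from the cycle cost of the dispatch.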
BRK uses the same vector as IRQ, and the IRQ vector itself is used for more than just the APU interrupts. The scanline interrupts of various mappers (MMC3 for one) also use this vector. Most sound code runs on NMI anyway, so the IRQ vector handler doesn't usually have to decode the source.
The way an IRQ handler determines whether the cause was an IRQ or BRK is by examining bit 4 of the byte at the top of the stack (which also contains the saved status flags in its other bits). If set, BRK caused the invocation; otherwise, IRQ. Be sure to ignore all the incorrect documentation that claims the existence of a B status flag; there is no such thing, only the confusion created around its non-existence.
blargg wrote:
The way an IRQ handler determines whether the cause was an IRQ or BRK is by examining bit 4 of the byte at the top of the stack (which also contains the saved status flags in its other bits).
Like this?
Code:
irqvector:
pha
txa
pha ; stack state after: SP | X | A | P | LR_low | LR_high | ...
tsx
lda #$10
and $103,x
beq irqhandler
lda $104,x ; BRK pushes PC+2, so the signature byte
sec        ; is at (return address) - 1
sbc #1
sta zp_lr
lda $105,x
sbc #0
sta zp_lr+1
tya
pha ; stack state after: SP | Y | X | A | P | LR_low | LR_high | ...
ldy #0
lda (zp_lr),y
asl a
tax
jmp (brktable,x)
; Each entry in brktable is responsible for pulling the arguments
; that were originally passed in Y, X, and A, then returning with
; RTI (which also restores P; a plain RTS would resume one byte
; too late, since BRK pushes PC+2 and RTS adds 1).
irqhandler:
; [omitted IRQ handler code]
pla
tax
pla
rti
But then I don't see the point of BRK if it means wasting all this time to save one byte of code.
Ugh, I get your point. I guess BRK isn't much use beyond invoking a debugger by changing a single byte.
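For reference, the pointer arithmetic that any BRK dispatcher has to get right: BRK pushes PC + 2, so the signature byte (the operand at PC + 1) sits one below the pushed return address. A toy Python check (addresses and names are mine):

```python
def brk_signature(memory, brk_addr):
    # BRK at brk_addr pushes brk_addr + 2 as the return
    # address, so the signature byte at brk_addr + 1 is
    # found at (pushed return address) - 1.
    pushed = brk_addr + 2
    return memory[pushed - 1]

mem = {0x8000: 0x00, 0x8001: 0x69}  # BRK $69
assert brk_signature(mem, 0x8000) == 0x69
```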
WedNESday wrote:
Does anyone know of any games that actually use the BRK opcode? I can't imagine so, as the vector is the same as the sound's, unless you wanted to prematurely process sound code...
Dragon Warrior 1 uses it. I don't know of any other games that do, but I'm sure there are a few.
If you don't implement IRQs in any way (as is the case with most pre-MMC3 games), then BRK can be useful as a subroutine call. The BRK handler wouldn't have to check the B flag, nor would it necessarily need to preserve registers (particularly if they are parameters).
I noticed that some games that don't use IRQs set the IRQ vector to match the RESET vector. In this scenario, should a bug cause the PC to get corrupted, there's a chance that the game would reset, particularly if the PC ends up in RAM that was cleared to zero and has gone unused since. I know that some newer architectures (such as PowerPC) always treat opcode 0 as illegal and trigger an exception, since a PC pointing into unused memory often causes a zero to be read as the next instruction.
Quote:
I know that some newer architectures (such as PowerPC) will always consider opcode 0 as illegal and trigger an exception, since a PC pointed to unused memory often causes a zero to be read for the next instruction.
Modern architectures offer both an MMU to mark unmapped pages as invalid and bus signals to flag an error on the memory transaction. Taking an exception for undefined instructions allows detection of erroneous execution, as well as emulation of unsupported instructions and modes (for example, unaligned access support and older complex instructions are sometimes removed from the silicon and implemented in the operating system instead).
Anonymous wrote:
BRK can be useful as a subroutine call. The BRK handler wouldn't have to check the B flag, nor would it necessarily need to preserve registers (particularly if they are parameters).
Wouldn't it still need to use A, X, and Y in order to get the syscall number from the byte after the BRK opcode?
Same code with IRQ support deleted and with an optimization to use the rare (d,x) mode:
Code:
brkvector:
; First copy the return address to the zero page, giving a
; pointer to the syscall number (e.g. $69 in BRK $69).
pha
txa
pha ; stack state after: SP | X | A | P | LR_low | LR_high | ...
tsx
lda $104,x ; BRK pushes PC+2, so the syscall number
sec        ; is at (return address) - 1
sbc #1
sta zp_lr
lda $105,x
sbc #0
sta zp_lr+1
; Now read the syscall number.
ldx #0
lda (zp_lr,x)
; Look up the syscall in the jump table.
asl a
tax
jmp (brktable,x)
; Each entry in brktable is responsible for pulling the arguments
; that were originally passed in X and A, then returning with RTI
; (which also restores P; a plain RTS would resume one byte too
; late, since BRK pushes PC+2 and RTS adds 1).
; This new BRK handler does not modify the Y register.
; This code is NOT reentrant because there's a race condition on
; zp_lr.
There's just one problem: there is no "JMP (addr,X)" instruction on the 6502, so you'll have to load the address manually.
diff:
Code:
- jmp (brktable,x)
+ lda brktable,x
+ sta zp_lr
+ lda brktable+1,x
+ sta zp_lr+1
+ jmp (zp_lr)
Of course it becomes an order of magnitude simpler in the degenerate case where the syscall number does not matter (BRK $00 is the same as BRK $01 is the same as BRK $FF), but how is that useful?