I know most of us are getting sick of FCE Ultra, but Parasyte has just released his own personal version of FCE Ultra with the sexiest NES/Fami debugger ever.
Give Parasyte some love.
Except for the arrows I don't see any difference. NO$NES' is much better since you can actually assemble to memory.
And do any debuggers implement a "step backwards" command yet? That'd be extremely useful.
Hi there.
kyuusaku: FCEUXD has always been able to assemble directly into memory. This build does not, because I removed the unintuitive sidebar which provides that functionality.
blargg: Not that I am aware of. But it's something I have been pondering lately, myself. I was recently inspired by a quote on SourceForge, "What if your debugger could go back in time?"
To anyone else: Sorry about the hosting problems. My host sucks. When it works, it will work. Also, sorry that I won't maintain that debugger or fix its bugs; it's already two-and-a-half years old, anyway. I have a new project going that will make up for it.
I've always wanted the stepping back feature, but it would mean that you'd have to cache at least 5 steps or so per step -- so you could go back.
Quote:
I've always wanted the stepping back feature, but it would mean that you'd have to cache at least 5 steps or so per step -- so you could go back.
Eh? When debugger has emulator stopped, some number of clocks have run since the last save state. To step backwards, simply restore from last save state, then stop CPU just before it's reached the current time. For faster step backwards, make save states more often. With this kind of implementation, you can step backwards as far as you want, from wherever you stopped emulation.
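The scheme blargg describes can be sketched in a few lines. This is an illustrative toy, not any real emulator's API: the `Emulator` class and its hooks are invented, and a real save state would snapshot the whole machine rather than a single counter. The key idea is that stepping backwards is just "restore the most recent save state, then deterministically re-run to one step before now."

```python
class Emulator:
    """Toy 'emulator' whose entire state is a cycle counter (hypothetical)."""
    def __init__(self):
        self.cycles = 0

    def step(self):
        self.cycles += 1          # stand-in for executing one instruction

    def save_state(self):
        return self.cycles        # a real save state is a full machine snapshot

    def restore_state(self, state):
        self.cycles = state


class RewindDebugger:
    """Step backwards by restoring a save state and re-running forward."""
    def __init__(self, emu, snapshot_interval=100):
        self.emu = emu
        self.interval = snapshot_interval
        self.snapshots = [emu.save_state()]   # more frequent = faster rewind

    def step_forward(self):
        self.emu.step()
        if self.emu.cycles % self.interval == 0:
            self.snapshots.append(self.emu.save_state())

    def step_backward(self):
        target = self.emu.cycles - 1
        if target < 0:
            return
        # Drop snapshots taken after the target, keeping the initial one...
        while len(self.snapshots) > 1 and self.snapshots[-1] > target:
            self.snapshots.pop()
        self.emu.restore_state(self.snapshots[-1])
        # ...then re-run deterministically up to just before the current time.
        while self.emu.cycles < target:
            self.emu.step()
```

Note this only works because emulation is deterministic: re-running from the same state always reproduces the same path, so you can rewind as far back as you keep snapshots.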
or by adding a rewind feature
I've used save states before. I'm gonna side with hap on this one. Either way it's already possible.
The methods are out there... but it's an issue of laziness.
Are the debuggers too lazy to use save states or are the coders too lazy to implement new methods?
It's both: the users of a debugger are too lazy to make save states, and the coders of a debugger are too lazy to provide tools to work with save states at a cycle level. A little work on both parts would result in the illusion of reversible computing. Here, I describe a possible user interface:
Ideally, pausing for debug should make an automatic save state, or pull one from the emulator's rewind buffer. Then the emulator would have buttons for back/forward 80 scanlines, back/forward 8 scanlines, back/forward 1 scanline, and back/forward 1 CPU instruction. Going back would subtract that many cycles from the current value of cycles since last save state, and then run the CPU and PPU for that many cycles. Then the user would step forward by however many lines to get into a subroutine, and then back and forward an instruction at a time.
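Translating those buttons into a cycle delta could look something like this. The helper function is hypothetical; the only hard numbers are NTSC NES timing facts (341 PPU dots per scanline, 3 PPU dots per CPU cycle, so roughly 113.67 CPU cycles per scanline). The per-instruction figure is a rough average, since real instruction stepping would decode opcodes rather than guess a cycle count.

```python
# NTSC NES: 341 PPU dots per scanline, 3 PPU dots per CPU cycle.
CPU_CYCLES_PER_SCANLINE = 341 / 3   # ~113.67

def cycles_for_button(unit, count):
    """Hypothetical mapping from a back/forward button to a CPU-cycle delta.

    unit is 'scanline' or 'instruction'; count would be 1, 8, or 80
    for the buttons described above.
    """
    if unit == 'scanline':
        return round(CPU_CYCLES_PER_SCANLINE * count)
    if unit == 'instruction':
        # Rough average only; a real debugger decodes the opcode instead.
        return count * 3
    raise ValueError(unit)
```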
It could even have something like "rewind to JSR that called this routine", with detection of the case where the top two bytes on the stack aren't a return address (it'd detect this by backing up several frames, setting a breakpoint, and finding that the PC never hits that point). Probably another reason these kinds of things aren't implemented is that many programmers haven't embraced modularity to much of a degree, making the task very complex. With proper modularity, you might have the emulator, with the ability to run for a frame, run until a certain time, save and restore state, and set a breakpoint. Then the debugger implements single stepping on top of that, and the advanced functions are implemented on top of the debugger interface.
None of these modules mucks around with another's internal data; it all goes through well-defined function calls.
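The layering described above can be sketched like so. Both classes and all their names are invented for illustration: the point is that `Debugger` builds single-stepping purely out of the core's public calls (run until, save/restore, breakpoints), never touching the core's internals.

```python
class EmulatorCore:
    """Hypothetical emulator module exposing only a few well-defined calls."""
    def __init__(self):
        self.pc = 0
        self.breakpoints = set()

    def run_until(self, target_pc=None, max_steps=1_000_000):
        for _ in range(max_steps):
            self.pc += 1                      # stand-in for executing one opcode
            if self.pc in self.breakpoints or self.pc == target_pc:
                return

    def save_state(self):
        return {'pc': self.pc}

    def restore_state(self, state):
        self.pc = state['pc']

    def set_breakpoint(self, addr):
        self.breakpoints.add(addr)

    def clear_breakpoint(self, addr):
        self.breakpoints.discard(addr)


class Debugger:
    """Implements stepping purely in terms of the core's public interface."""
    def __init__(self, core):
        self.core = core

    def single_step(self):
        # One-shot run to the next address; a real 6502 debugger would
        # decode the current opcode to find the next PC instead.
        self.core.run_until(target_pc=self.core.pc + 1)
```

Advanced features like "rewind to the calling JSR" would then be another layer on top of `Debugger`, using only its interface in turn.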
Ah, modularity! That's my cue.
How does one solve the problems with current debuggers? First, by identifying those problems. Next, by addressing them. Finally, by implementing the solutions.
So what are the problems with debuggers integrated in today's emulators? Well, for one thing, they are integrated. This can cause portability problems in many cases (I am ashamed to admit my guilt in perpetuating this problem, by writing debuggers that lock users into the Windows operating system). It can also cause undue stress for debugger developers. We are a lazy species, and we do not like rewriting the same debugger multiple times, attempting to port our work to a newer, better emulator, or porting it to a completely new emulated architecture. And then there is the problem of features, or the lack thereof. Some hackers and homebrewers need specialized features in their debuggers.
Modularity is one possible solution to these problems. The first thing to do is segregate the low-level debug primitives (functions and whatnot) from the user interface; make the interface modular, interchangeable with any interface. Then you define how the debug primitives interact with the interface via a communications link; make the communications link modular, able to establish communication using any number of interchangeable modules for TCP/IP sockets, operating system pipes, RS232, USB, etc. Next, you define the protocol; make the protocol modular, a 'universal language' that describes generic debug primitives, and allow it to be extensible as necessary. Finally, you define those debug primitives and provide a base implementation that can be expanded if required. However, a well-defined set of primitives is unlikely to need expansion for anything but the most exotic architecture configurations.
What does all of this mean? Where does it leave us, the debugger developers? And where does it place the users, the hackers, and the homebrew developers?
It means that the debugger developers can implement an accepted standard (accepted being the keyword) for debugger support within not only emulators, but any kind of virtual machine or interpreted byte code in any kind of program. It could be a simple set of debug primitives (in a static or linked library, for example) added by an emulator author (or emulator extender) that connects to a debugger interface of the user's choice. The interface might be highly specialized for a particular architecture, or it might be very complex and advanced with universal support for many architectures. This would put a large number of options into the hands of users.
Now let me try to get a more solid description of this idea out there. The number one underlying technology to be assessed to make any of this work is simply the protocol. That means a formal description of how a target (an emulator, or other program wishing to use debugger functionality) talks to an interface (a separate program designed to give the user direct access to the debug primitives and link them together in ways that provide many very advanced features ... such as stepping backwards in architecture-time). This would probably be a command reference which supplies things like:
1) A description of the architecture (the emulated system, like NES). This description would include the number of CPUs available, the type of the CPUs, endianness, memory maps as accessible by the CPU, memory maps not accessible to the CPU, etc. Basically a complete virtual model of the architecture.
2) Debug primitives: breakpoints and stepping functionality; read/write access to the memory maps, CPU registers and statuses, and access to internal hardware registers; interrupt and exception handling; scripted macros with callback functions; essentially all of the basic functions which the interface can use to procedurally build high-level features.
3) Extensibility; able to provide expansions to architecture descriptions, debug primitives, and other specialty features.
With such a protocol in place, the interface can do the rest of the high-level work; disassembling, video memory viewing and modification, hex editing, cheat searching and management, etc.
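As a sketch, the "architecture description" from point 1 might serialize to something like the structure below. Every field name here is invented for illustration; a real standard would nail down the exact fields and encoding.

```python
# Hypothetical architecture description a target would send to an interface.
NES_DESCRIPTION = {
    'name': 'NES',
    'cpus': [{
        'type': '6502',
        'endianness': 'little',
        'register_names': ['A', 'X', 'Y', 'S', 'P', 'PC'],
    }],
    # Memory maps both visible and invisible to the CPU, per the list above.
    'memory_maps': [
        {'id': 'cpu',     'size': 0x10000, 'cpu_visible': True},
        {'id': 'ppu',     'size': 0x4000,  'cpu_visible': False},
        {'id': 'oam',     'size': 0x100,   'cpu_visible': False},
        {'id': 'prg-rom', 'size': None,    'cpu_visible': False},  # cart-dependent
    ],
}
```

With a description like this in hand, a generic interface can build its memory viewers and register panes without hard-coding any one console.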
I'm hoping this has been verbose enough that you all understand where I am coming from, but not so verbose that I've created confusion or gone completely in the wrong direction with the discussion.
Bottom line is, I think we only need to agree on one thing: the protocol. If you refuse to believe that, and only want to do your own thing with your own emulator, that's quite alright. But if you want to reap the benefits of interchangeable debugger interfaces [pick your favorite, or just choose the right one for the job at hand] that are platform-independent [can run on any host operating system, even a completely different machine from the target emulator; not at all bound to the target emulator] and potentially architecture-independent [capable of debugging NES, Genesis, PS2, Wii, Java, brainf**k, the custom scripting language in your new game, you name it!] then I say let's work some crazy Voodoo and invent ourselves a standard for modern debugging!
P.S. I think this "new project" might make up for the shortcomings in FCE<insert arbitrary acronym here>, don't you?
Actually, I suggested modularity simply to reduce complexity to a manageable level, and nothing as grand as you describe (which tends to increase complexity). I'm talking really simple stuff, where one examines their architecture and reduces it to a minimum of functions, and no public (global) data. And I am saying that without modularity, complexity is simply too great to implement these features well and bug-free. Re-use is a possible side benefit, and it only comes when different emulators have the same module interfaces, something that can be difficult if the emulator designs differ greatly. I don't think changes should be too drastic; they should be small steps that each do something useful.
I hate to play 'Mom' here, and it's cool seeing you kids playing in this thread, but for 'searchability' and contextual relevance, we might wanna continue this discussion over where hap pointed out.
This topic about the 'rewind feature' seems to be the best area to discuss the subject.
I'd really like to see the feature worked on, but we should try to keep our notes organized, concise, and easy to find.
Not that I'm a moderator of any sort, but it would be nice to have things in their own place.
Quote:
I'd really like to see the feature worked on, but we should try to keep our notes organized, concise, and easy to find.
You must be new here. People don't care much about that, so it's wasted effort worrying about it. Use the wiki if you want something organized.
Totally would, blargg... totally would. The Wiki is what I prefer to use whenever I do extensive research. There are a lot of holes in it, though, that could be filled. I do not yet consider myself qualified to write articles for the Wiki, because even though I have been lurking around the whole NESdev area for quite some time now, I'm only just beginning to take the time to understand the workings aside from the audio portion.
In summary: The Wiki needs more articles. I'm a moron; but an assertive moron, so I'll push others to include the information that it needs.
Also, if this is an environment that needs to be disheveled to be conducive to learning, then so be it. I relate to being an artist -- or a struggling artist, at the least.
Parasyte wrote:
Modularity is one possible solution to these problems. The first thing to do is segregate the low-level debug primitives (functions and whatnot) from the user interface; make the interface modular, interchangeable with any interface. Then you define how the debug primitives interact with the interface via a communications link; make the communications link modular, able to establish communication using any number of interchangeable modules for TCP/IP sockets, operating system pipes, RS232, USB, etc. Next, you define the protocol; make the protocol modular, a 'universal language' that describes generic debug primitives, and allow it to be extensible as necessary. Finally, you define those debug primitives and provide a base implementation that can be expanded if required. However, a well-defined set of primitives is unlikely to need expansion for anything but the most exotic architecture configurations.
Could we start with the GDB protocol? Or would that work only for the 32-bit machines that the GNU operating system targets?
The GDB protocol is established, which is nice, but there is a slight problem with it: the protocol does not allow raw binary data to be sent. In an interface which might want to update a PPU viewer in real time, you would have to encode the PPU data as ASCII hex characters (0x94 becomes "94", for example), which has the side effect of doubling the bandwidth requirements.
Its use of a specific start/end sequence and a checksum indicates that it is really designed for a raw communications link which does not provide synchronization or cyclic redundancy checking. So it is great for very simple serial communication, but poor for TCP/IP or pipes/sockets.
Another problem is that it does not provide access to multiple memory maps. So you would have to establish a standard of some sort for accessing CPU, PPU, PRG, Sprite memory, etc. using the GDB commands for reading/writing memory (m/M respectively). [Not to mention related problems dealing with a lack of a well-defined command to describe the target architecture.]
And it would be very cool to use GDB with an NES target. Unfortunately, GDB hasn't had 65816 support since 2001, and has never supported the 6502. So if you can't use GDB, why restrict yourself to the GDB protocol?
There's also DBGp, which uses XML over TCP/IP but obviously adds a LOT of overhead. And the Ladebug Remote Debugger Protocol, using simple UDP/IP packets with a 16-byte header. Finally, RFC-909, the Loader Debugger Protocol, which looks a bit closer to what I had in mind.
Parasyte, I'm on board. Years ago I implemented GDB debugging over TCP/IP into my emulator. I ran into all the issues you mentioned (specifically having 2 address spaces), and modified the protocol to suit my needs. That meant no more using gdb/ddd, so I started writing my own debugger interface. I modeled off of the best debugger I've ever used, SoftIce.
I was never really happy with any of it, though I love the approach. If we can decide on a good protocol and get some buy-in from a few emulator authors to support it, then I'd love to write a new debugger. Maybe someone would want to do it as a plugin for Eclipse?
Would it be possible to pretend to have a 24-bit address space, where the upper 4 bits select which chip we're looking at?
- $000000-$00FFFF: CPU address space
- $600000-$602FFF: PPU address space
- $603F00-$603F1F: Palette
- $700000-$7000FF: OAM
- $800000-$BFFFFF: All PRG ROM banks
- $C00000-$DFFFFF: All CHR ROM or CHR RAM banks
- $E00000-$FFFFFF: All PRG RAM banks
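A debugger could decode such addresses back into a (space, offset) pair with a simple range table mirroring the list above. This is a sketch of tepples' proposal, not an endorsed format; the space names are made up.

```python
# (start, end-exclusive, name) ranges from the proposed 24-bit map above.
SPACES = [
    (0x000000, 0x010000,  'cpu'),
    (0x600000, 0x603000,  'ppu'),
    (0x603F00, 0x603F20,  'palette'),
    (0x700000, 0x700100,  'oam'),
    (0x800000, 0xC00000,  'prg-rom'),
    (0xC00000, 0xE00000,  'chr'),
    (0xE00000, 0x1000000, 'prg-ram'),
]

def decode(addr):
    """Map a flat 24-bit debugger address to (space name, offset)."""
    for start, end, name in SPACES:
        if start <= addr < end:
            return name, addr - start
    raise ValueError(f'unmapped address {addr:#08x}')
```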
teaguecl: That's about what I would expect. Thanks for confirming. Did you look over LDP (RFC 909) yet? I think it might be a very good starting point, since it's already designed to support multiple architectures and has a lot of room for extension. It's also well suited for TCP/IP. I'm thinking that's the one to build from, or at least borrow [a lot of] inspiration from.
The last time I tried Eclipse, it was too bloated for my tastes so I wouldn't be the one to write a plugin for it. But I imagine it could be done without much trouble.
tepples: Yes and no. It's certainly possible to define a generic 24-bit address space for NES, but what happens when you want to add support for SNES? Do you shift it to 32-bit address space? And when you want to add DS support later? 40-bit address space? What about 64-bit architectures like ia64/x64? Virtualizing the address space would compromise the idea of using an architecture-independent protocol.
LDP's proposed solution to this problem is fairly simple:
Code:
4.3.1 Long Address Format
The long address format is five words long and consists of a
three-word address descriptor and a two-word offset (see Figure
9). The descriptor specifies an address space to which the offset
is applied. The descriptor is subdivided into several fields, as
described below. The structuring of the descriptor is designed
to support complex addressing modes. For example, on targets
with multiple processes, descriptors may reference virtual
addresses, registers, and other entities within a particular
process.
Address spaces would be accessed using the "ID" field in the descriptor, which separates it from the address. Seems like an elegant way to handle it.
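In code, the long-address idea reduces to carrying a (space ID, offset) pair instead of one flat number. The representation below is invented for illustration; RFC 909 defines the actual word-level layout of the descriptor and offset.

```python
from collections import namedtuple

# A long address in the spirit of RFC 909: a descriptor naming an address
# space, plus an offset applied within that space.
LongAddress = namedtuple('LongAddress', ['space_id', 'offset'])

# Hypothetical ID assignments a target might advertise for the NES.
ADDRESS_SPACES = {0: 'cpu', 1: 'ppu', 2: 'oam', 3: 'prg-rom'}

def describe(addr: LongAddress) -> str:
    """Render a long address for display, e.g. in a debugger interface."""
    return f'{ADDRESS_SPACES[addr.space_id]}:{addr.offset:#06x}'
```

Unlike the flat 24-bit scheme, this scales to any number of spaces and any offset width without renumbering anything.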
Parasyte wrote:
And when you want to add DS support later? 40-bit address space?
The DS has working gdb.
And GDB still has negligible support for multiple address spaces, let alone multiple CPUs.
I like the disassembler in FCEUXD, but it lacks PPU debugging features (think no$gba-style viewers for nametables, attribute tables, palettes, sprites, etc.).