Why does the NES have such a strange and ugly architecture?
strange and ugly, compared to what system?
FrankenGraphics wrote:
strange and ugly, compared to what system?
GBA
Well it was Nintendo's first real home console that could play arcade-style games.
I think the 6502 architecture is simple and beautiful, though Ricoh butchered it a bit when they cut off the decimal mode and made other things not work in the R2A03.
And then there are hardware features in the console that don't seem to work as they should, like the sprite overflow flag.
But other than that I think it's quite a simple (which can't be said about the SNES or GBA) and beautiful console. Its biggest problem is the lack of color, though.
Pokun wrote:
Well it was Nintendo's first real home console that could play arcade-style games.
[...]
I'm trying PyNES.
But how do I paint my graphics and save them as CHR?
Any tutorial?
Easiest way might be to use
NES Screen Tool. You can download a BMP here:
https://forums.nesdev.com/viewtopic.php?p=115203 to use as a template, so you get an indexed BMP with 4 colors (don't use more colors than that). NES Screen Tool can convert a BMP image to CHR pattern data for you, and it can also make edits if you don't have a good image editor.
Pokun wrote:
Easiest way might be to use
NES Screen Tool. [...]
Is Photoshop bad for the NES?
monobogdan wrote:
FrankenGraphics wrote:
strange and ugly, compared to what system?
GBA
The GBA came out in 2001, while the NES came out in 1983. One tends to learn quite a lot of things about system design over the course of
18 years, so of course the NES is going to seem "strange and ugly" to you in comparison.
Pointing it out explicitly isn't going to make you many friends here, though - on the contrary, it's likely to make a lot of people here
dislike you for insulting one of their favorite video game consoles...
monobogdan wrote:
Is Photoshop bad for the NES?
Photoshop won't be able to save graphics in the format that the NES needs, which is an array of 2-bit planar 8x8 tiles - you could use a normal editor, but you'd then need another tool to convert the resulting images.
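For the curious, the 2-bit planar tile format is easy to sketch: each 8x8 tile is 16 bytes, the first 8 holding every pixel's low bit and the last 8 the high bit, one byte per row with the leftmost pixel in bit 7. A minimal Python encoder (a sketch of the format, not any particular tool's implementation):

```python
def tile_to_chr(tile):
    """Pack one 8x8 tile of 2-bit pixel values (0-3) into the NES's
    16-byte planar CHR format: 8 bytes of low bit-planes, then 8 bytes
    of high bit-planes, one byte per row, leftmost pixel in bit 7."""
    low, high = [], []
    for row in tile:
        lo = hi = 0
        for px in row:
            lo = (lo << 1) | (px & 1)
            hi = (hi << 1) | ((px >> 1) & 1)
        low.append(lo)
        high.append(hi)
    return bytes(low + high)

# A tile whose top row cycles through all four colors, rest blank:
tile = [[0, 1, 2, 3, 0, 1, 2, 3]] + [[0] * 8] * 7
chr_data = tile_to_chr(tile)  # row 0: low plane 0x55, high plane 0x33
```

A full CHR bank is just 256 (or 512) of these 16-byte tiles concatenated, which is what converters like NES Screen Tool produce.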
Even the first ARM processor wouldn't appear until 1987, and that would have been way too expensive to put into a console at the time.
Quietust wrote:
monobogdan wrote:
FrankenGraphics wrote:
strange and ugly, compared to what system?
GBA
The GBA came out in 2001, while the NES came out in 1983. One tends to learn quite a lot of things about system design over the course of
18 years, so of course the NES is going to seem "strange and ugly" to you in comparison.
Pointing it out explicitly isn't going to make you many friends here, though - on the contrary, it's likely to make a lot of people here
dislike you for insulting one of their favorite video game consoles...
[...]
I don't hate the NES, I love the NES, but its architecture is strange in some respects.
For example, why can't the console itself read the controller state and store it in one byte, with every bit being a button state?
A loop with 8 passes is just a waste of CPU time.
monobogdan wrote:
But how do I paint my graphics and save them as CHR?
I use GIMP and a conversion tool I wrote in Python. I draw things, save in 4-color indexed mode, and have my makefile run
pilbmp2nes.py. Lack of indexed mode is why Pyxel Edit alone is not sufficient for retro game dev, as I discovered in a recent paid project where the artists used Pyxel Edit.
Quote:
Is Photoshop bad for the NES?
Tools to create graphics for retro consoles need to export in indexed mode because these consoles expect transparency to be color 0.
Quote:
For example, why can't the console itself read the controller state and store it in one byte, with every bit being a button state?
Unless you're on an Atari 2600 or a Neo Geo, there isn't a wire for every button. If you're referring to some sort of Super NES-style autoreading, some controllers don't have exactly eight buttons.
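For reference, the "loop with 8 passes" exists because the pad presents its buttons one at a time over a single data line: the CPU strobes a latch, then clocks out one bit per read. A minimal Python model of that latch-and-shift protocol (the `Controller` class and its names are illustrative, not a real hardware API):

```python
class Controller:
    """Simplified model of the NES pad's shift register: one data line,
    buttons reported serially in a fixed order."""
    ORDER = ["A", "B", "Select", "Start", "Up", "Down", "Left", "Right"]

    def __init__(self, pressed):
        self.pressed = set(pressed)
        self.bits = []

    def strobe(self):
        # Writing 1 then 0 to $4016 latches the current button states.
        self.bits = [int(b in self.pressed) for b in self.ORDER]

    def read(self):
        # Each CPU read of $4016 returns the next button in bit 0;
        # official pads return 1 once the register is exhausted.
        return self.bits.pop(0) if self.bits else 1

def read_pad(pad):
    # The classic 8-pass loop: shift each serial bit into one byte,
    # leaving A in bit 7 and Right in bit 0.
    pad.strobe()
    state = 0
    for _ in range(8):
        state = (state << 1) | (pad.read() & 1)
    return state

state = read_pad(Controller({"A", "Right"}))  # 0x81: A (bit 7) + Right (bit 0)
```

The payoff of the serial design is cheap hardware and a flexible bit count: the same two lines also carry Zapper and Famicom expansion data.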
In Photoshop, you can reduce the image to 128x128 pixels (or smaller), change the color mode to indexed with a custom palette, and choose 4 colors...
...some people use the posterize filter to reduce to 4 colors...
Open YY-CHR.
Photoshop, select all, cut. Paste to YY-CHR.
Edit: nobody uses PyNES because it has no documentation or example code.
The time it takes to read a game pad is negligible. It shouldn't be the cause of slowdowns.
Quote:
I try PyNes.
I suggest taking the time (a couple of hours) reading through one of the many excellent 6502 instruction set manuals. Whether you'll end up using assembly language or not, it will probably help you a lot. Currently, viable options are assembly or C if you want to make a game.
Or if, on the other hand, you think the NES is just a little too limited, you might want to try coding for the PC Engine/TurboGrafx or Sega Mega Drive/Genesis.
If you don't want to mess with indexed mode graphics, my tool
makechr is another option. It figures out the palette automatically, and can even compile directly to a sample rom.
FrankenGraphics wrote:
The time it takes to read a game pad is negligible. It shouldn't be the cause of slowdowns.
Quote:
I try PyNes.
[...]
PyNES is simply a Python API -> NES assembly translator.
What specific part of the NES architecture do you think is ugly? Without specifying that, this thread is a non-discussion.
Quote:
Python API -> NES assembler translator.
That is, it targets a specific NES assembler (NESASM-based, according to what little documentation there is). Eventually, you might want to switch to a more versatile assembler, like asm6 or ca65/cc65, or perhaps some other assembler.
Bregalad wrote:
What specific part of the NES architecture do you think is ugly?
These quirks of the NES architecture cause problems:
- No write FIFO for modifying VRAM while the display is on. The TMS9918 in the ColecoVision and MSX, and its descendants in the Master System and Genesis, have one, as does the TG16 VDC. And there's plenty of downtime in the scanline to execute queued writes, such as during the repeated fetches of the same attribute byte.
- NTSC picture height extends into overscan and is not adjustable. Competing consoles could instead blank the top and bottom 8 or 24 lines to free up more VRAM transfer time.
- PAL NES CPU divides the master clock by 16 instead of 15. This breaks raster effects.
- 8x8 or 8x16 pixel sprite size choice is global, not per sprite.
- Audio path doesn't pass through the 72-pin cartridge, and there is no FIFO for PCM writes.
- Limited selection of noise and DMC periods.
The NES has other quirks.
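For the first quirk in the list, the usual workaround is a software FIFO: gameplay code queues VRAM writes in RAM and the NMI handler drains as much as the vblank window allows. A rough Python sketch of the idea (the class name, budget figure, and addresses are illustrative, not from any real engine):

```python
class VramQueue:
    """Software stand-in for the missing write FIFO: gameplay code
    queues (address, data) updates, and the vblank handler flushes as
    many as fit in the time budget."""
    def __init__(self, budget=32):
        self.pending = []
        self.budget = budget   # rough number of writes vblank allows

    def queue(self, addr, data):
        self.pending.append((addr, data))

    def flush(self, vram):
        # Called once per frame during vblank; leftovers wait a frame.
        done = 0
        while self.pending and done < self.budget:
            addr, data = self.pending.pop(0)
            vram[addr] = data
            done += 1
        return done

vram = {}
q = VramQueue(budget=2)
q.queue(0x2041, 0x01)
q.queue(0x2042, 0x02)
q.queue(0x2043, 0x03)
flushed = q.flush(vram)   # only the vblank budget's worth goes through
```

On real hardware the flush is an unrolled loop of $2006/$2007 writes; the point is the same: updates that miss the budget slip to the next frame.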
If it can be called a quirk: to the best of my knowledge, the reversed duty cycle on the square channels serves no practical purpose. We'd gain more timbral variation if there were a fourth duty setting of an audible quality.
I don't think the NES' architecture is ugly or weird. It's a fairly simple console and if you use the basic features as intended, everything is pretty straightforward. IMO, the quirks only start showing when you try to deviate from the norm, doing timed writes to registers when they shouldn't normally be written to or exploiting other undocumented behaviors.
The way the controllers are read may seem awkward, but if you research a bit about how the hardware works, it makes sense. Being an old design, there's a lot of focus on simplifying the hardware itself, which might cause the way it's interfaced with to be less intuitive than ideal, but IMO the NES has a fairly good balance between the simplicity of the hardware and the simplicity of the interface. The Atari 2600 for example is much weirder, look at how sprites are positioned and you'll go "WTF?" like never before. Again, when you study how the hardware works, the interface makes sense, but it's much more awkward than most things you'll see on the NES.
tepples wrote:
- 8x8 or 8x16 pixel sprite size choice is global, not per sprite.
Curious how you would propose changing this. Use one of the spare attribute bits? It seems to me like this would really complicate the PPU code to detect when more than 8 sprites are on a scanline.
I think it's quite ugly, personally. The errata list kind of covers it; they tried to add several features that were ultimately broken and not terribly useful. A lot of the NES' design was clearly experimental. Here's a few of its "dead ends" that bug me:
- Pretty much everything about DPCM is broken. It's got terrible sound quality, interferes with the controller, and was tuned to an A-440 scale, except that an off-by-one error on the length control completely breaks all of that tuning; it has an IRQ, but it's very unwieldy. (Despite this, it's still pretty usable just for playing samples, and at least has 6:1 compression.)
- APU envelopes offer only one shape and it has to go from full volume to silent. Pretty much useless compared to the per-frame volume control, so it goes unused too.
- APU length counter is another annoying feature. It has a very bizarre arbitrary table of note lengths, again completely useless because of the per-frame volume control. It constantly gets in the way too, because if you don't remember to re-initialize this useless feature it will silence the channel, and there are a lot of conditions where it will need to be re-initialized.
- One of the sweep units for the two square channels has an off-by-one error and sweeps at a different rate than the other.
- Sprite overflow also doesn't work as intended, and is unusable unless you know the exact conditions of how it's broken.
- The "ZIF" connector on the front loading NES was a complete disaster. The worst cartridge connector I've ever seen. (Not fixed until a very late revision.)
- The original Famicom had square buttons which were prone to mechanical failure. (Fixed in an early revision.)
It's all forgivable, because you don't
need to depend on any of these features, and the system as-is is quite usable and versatile, but each one of these features is clearly a failure in my eyes. Dead weight that probably cost them a lot to develop.
Every system has things that turn out to be broken in production, but NES has some really strange ones to me. It's understandable that they didn't have the kind of years of trial and experience in console design that they do now, and they still make mistakes now too. What's incredible to me is that despite all these bad decisions, none of them were critical to its success, and it survived and thrived! Amazing!
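On the DPCM quality point above: the channel plays 1-bit deltas, each bit nudging a 7-bit output level by +/-2 and clamping at the rails instead of wrapping, which is part of why fidelity is so limited. A simplified Python decode (ignores timing, sample-address wrap, and the IRQ; the clamped +/-2 behavior follows public NES documentation):

```python
def dmc_decode(data, level=64):
    """Decode DMC sample bytes to output levels: bits are consumed LSB
    first, each one nudging the 7-bit level by +2 (bit set) or -2 (bit
    clear), clamped so it never leaves 0..127."""
    out = []
    for byte in data:
        for bit in range(8):
            if (byte >> bit) & 1:
                if level <= 125:
                    level += 2
            elif level >= 2:
                level -= 2
            out.append(level)
    return out

# All-ones input ramps the level up by 2 per bit, then pins at the rail:
levels = dmc_decode(bytes([0xFF]), level=120)
```

The clamp means loud waveforms simply flatten against 0 or 126/127 rather than distorting gracefully, so encoders have to pre-scale samples to fit the narrow delta.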
dustmop wrote:
tepples wrote:
- 8x8 or 8x16 pixel sprite size choice is global, not per sprite.
Curious how you would propose changing this. Use one of the spare attribute bits?
Yes. It would have been attribute 2 bit 4:
vhps--cc
Quote:
It seems to me like this would really complicate the PPU code to detect when more than 8 sprites are on a scanline.
You're correct that the OAM would have needed slightly different physical organization to allow fetching the size bit and Y at once. But because it already fetches an entire 8-byte word line at once, it could probably have been done.
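The in-range test at the heart of sprite evaluation is simple, and a Python sketch shows exactly where a per-sprite size bit would have to be consulted (OAM layout simplified to tuples; names illustrative):

```python
def sprites_in_range(oam, scanline, height=8):
    """Mimic the PPU's per-scanline sprite evaluation: a sprite is in
    range when scanline - y falls inside its height. On real hardware
    the height is global (8 or 16, from PPUCTRL); a per-sprite size bit
    would have to be fetched alongside Y during this same pass."""
    hits = []
    for index, (y, tile, attr, x) in enumerate(oam):
        if 0 <= scanline - y < height:
            hits.append(index)
    return hits, len(hits) > 8   # more than 8 trips the overflow logic

# Ten 8x8 sprites stacked at y=40 plus one far away at y=200:
oam = [(40, 0, 0, i * 8) for i in range(10)] + [(200, 0, 0, 0)]
hits, overflow = sprites_in_range(oam, scanline=44)
```

In software the extra bit is one more tuple field; in silicon it means widening what each evaluation cycle fetches, which is the organizational change discussed above.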
tepples wrote:
These quirks of the NES architecture cause problems:
rainwarrior wrote:
I think it's quite ugly, personally. The errata list kind of covers it; they tried to add several features that were ultimately broken and not terribly useful.
I was aiming the question towards the OP. He's the one who said the architecture was ugly and wanted to discuss it explicitly in this thread, but he didn't even care to mention what part exactly he had in mind.
And if I had to answer that question myself, I agree with you guys. The APU is broken, especially the part where one duty cycle is wasted, and where one extra period bit for the square channels is missing (since you only need 8 steps to produce the duty cycles that are used, but they use 16 steps instead). Also, the length counter and envelope decay are almost completely useless - their only purpose would be to slightly simplify music playback software and save some simple NROM games from having to handle volume changes by themselves, which costs some ROM.
The most ridiculous part of the APU is the fact that the triangle channel has two separate timers, while other features on that channel leave something to be desired. For instance, sweep on the triangle channel would be useful - but no, they used a second timer instead! And, like Rainwarrior said, DPCM is broken. Raw PCM is rendered very difficult to use by the lack of a simple CPU cycle IRQ timer, and this also causes problems for scanline effects, where we have to use the weird sprite zero hit polling because there's no IRQ timer. Even a sprite zero hit IRQ would already be much better!
I also agree the PAL PPU is basically broken and that the top/bottom 8 scanlines should have been hidable by hardware.
However, some other points of the architecture are beautiful. I especially like how the graphics are made, and I don't mind global 8x8/8x16 sprite switch, in most cases a single game will only use one mode and stick to it.
tepples wrote:
dustmop wrote:
tepples wrote:
- 8x8 or 8x16 pixel sprite size choice is global, not per sprite.
[...]
The PC Engine allows any combination of sprite size from 16x16 to 32x64 for any sprite by setting certain bits in the Sprite Attribute Table (PC Engine's OAM buffer) for each sprite entry. But on the other hand it can't use 8x8 sprites at all, and when counting if more than 16 sprites are sharing a scanline, every 16 dot part counts as a sprite.
I think Sega systems can also mix sprite sizes; only Nintendo systems have a global sprite size, and that doesn't change with the SNES either.
monobogdan wrote:
But how do I paint my graphics and save them as CHR?
Any tutorial?
monobogdan wrote:
Pokun wrote:
Easiest way might be to use
NES Screen Tool. [...]
Is Photoshop bad for the NES?
I really think before you go on with anything you have in mind right now, you should work through one of the NES development tutorials to get a general understanding of how this works.
Your questions like the above one where you ask whether you can save NES graphics with Photoshop or the question "How to draw sprites?" show that you're still lacking the very basics of NES development.
So, before bothering with how to convert graphics or discussing philosophical questions about the architecture, you should really make sure that you finish one of those tutorials.
http://nintendoage.com/pub/faq/NA/nerdy_nights_out.html
https://nesdoug.com/
If you have any questions regarding specific things from those tutorials, the people here will surely be able to help you.
tepples wrote:
8x8 or 8x16 pixel sprite size choice is global, not per sprite.
I don't mind the globalness of this, but you know what I would have liked better? 16 x 8 sprites instead of 8 x 16, to double the size of objects that can be put on the screen without flickering.
SNES has 2 global sprite sizes, and sprites can individually select between those 2 sizes.
GBA has a per-sprite size setting, however it's much more recent so comparison isn't fair.
There's nothing wrong with using Photoshop for pixel art if you know the program well enough to disable interpolation, opacity, and anything else that has no place in retro consoles. As long as you know how to export an image with the appropriate dimensions and color count for conversion, any tool you're comfortable with will do.
What I really don't recommend is drawing your graphics directly into restrictive tile editors where you have to draw everything inside little boxes (which often causes graphics to look blockier than they would if you had more room) and there are no layer or onion skinning features to help with animation (often resulting in stiffer animations).
As long as you know the limitations of the system you're making sprites for, I find any good editor will do. When making backgrounds for the NES, for example, it's easy just to use a 16x16 grid while drawing and just remember the 4-colour limitations. You need to learn the rules for the system before you begin, though.
I strongly recommend aseprite for animated pixel art. Its free version is quite good, and its paid version is very good as well as inexpensive.
I tend to use Gimp for non-animated pixel art. It might not seem pixel-oriented at first glance, but I find it works very well for it once you get yourself accustomed to it (tutorial).
Personally, I would just say monobogdan is trolling, and ignorant of how the passage of time works.
"Why didn't you invest in Apple stock in 1994? They're worth billions now!"
"Why did you marry Mom and have kids? Your next girlfriend could have been much nicer."
"Why did you design the space shuttle, NASA, when it would explode in 1986?"
"Why did you invent the TV, Mr. Farnsworth, when you knew it wouldn't be in colour for a couple more decades?"
etc.
Learn about the computers and game systems that preceded and were contemporary to the Famicom / NES and thank Ricoh that you're not designing 2-colour bitmaps and single-colour sprites, of which only 2 to 4 can appear on one scanline (if you're lucky).
All I'm saying is if this guy thinks the NES is poorly designed, he needs to take a look at the SNES.
ccovell wrote:
"Why didn't you invest in Apple stock in 1994? They're worth billions now!"
I don't think many were predicting that... If Apple's rise taught anyone anything, it's that I guess we should never overestimate how smart the average consumer is.
The NES is hokey to the max, and it's not just because it was 1983; it was hokey by 1983 standards. Nintendo were as cheap as cheap gets, and it shows in every aspect.
No BCD because then it would cost more; but even then they technically broke the law, and if Commodore had found out in time they would have pulled in a hefty fee. That would probably have been enough to finish the LCD and the C900, in which case we could probably be using Commodores and not Apples today.
2K of RAM; a maskable non-maskable interrupt, when the IRQ basically doesn't do very much and audio, particularly samples, is very time-critical, so having it on the NMI would make more sense. 8x8 sprites, with a limit of 8 on a line - ugh... Timers? Who needs timers. Raster counter? Meh... Drawing sprites up to the edge of the screen on the right? Boring. The "write to this register and it sets an internal latch so then you can write to it again to set the other half" scheme... look, spend the extra 0.001c and add the gates for a second register...
So, 1983: what do we have?
Well, to be fair, a NES is better than all the Apple ][s bar the GS in terms of video game capabilities.
Compared to the 2600 VCS, it is a fancy drink on a beach with a dedicated fanboy.
It's better than the PET of '77 and the TRS-80 of '77.
Atari 800s - 79 - bitmaps, timers, decent sound chip, 48K ram, built in software, raster counters, expansion ports, higher resolution
VIC-20 of 80~1 with 5K of ram, NES has more colours and sprites and audio channels, but VIC-20 has keyboard, timers and more RAM. And Vic-20 was 1/3 the price.
ColecoVision of 82 getting about square
C64 of 82 - bitmaps, larger sprites, timers, decent sound chip, 64K ram, built in software, raster counters, expansion ports, higher resolution
But the C64 cost more in '83 than the NES did. However, by '85 when the NES hit stateside, it was about even if not cheaper. And in '85 we had C128s and A1000s. In '83 the A800s were $165.
I feel that Nintendo should have "refreshed" the NES in '85 for the US launch and added more things to bring it up to date, having proven the model in Japan for 2 years first. Not that it seems to have mattered; for some reason the US jumped onto it. My guess is piracy issues.
But to answer the original post: because Japan was a backwater in '83 that was still crippled from the war, recovering but still gut-punched, there was no money, so every single yen counted. So they made it as cheap as possible, to sell it as cheap as possible, to survive.
Oziphantom wrote:
Atari 800s - 79 - bitmaps, timers, decent sound chip, 48K ram, built in software, raster counters, expansion ports, higher resolution
VIC-20 of 80~1 with 5K of ram, NES has more colours and sprites and audio channels, but VIC-20 has keyboard, timers and more RAM. And Vic-20 was 1/3 the price.
ColecoVision of 82 getting about square
C64 of 82 - bitmaps, larger sprites, timers, decent sound chip, 64K ram, built in software, raster counters, expansion ports, higher resolution
This is a better comparison, as we can see that the Famicom was often better but sometimes worse due to year/technology/cost tradeoffs. But it must be mentioned that the A800, C64, and CV all were monochrome / 1bpp (per tile) in their high(er) resolution modes. Their resolution/colour settings meant that games were primarily between 160-176 pixels wide on the computers. Colourful, brick-shaped graphics.
8x8 sprites make perfect sense to me since that's one of the more useful sizes. The PC Engine allows 16x16 as its smallest, which I believe wastes a lot of VRAM on pattern data for small things like bullets. Not to mention that sprite patterns and background character patterns use two quite different formats (background characters are always 8x8 on the PC Engine).
And a max of 8 sprites on a scanline was standard for sprite-based systems, no? What systems of the time allowed more than this? The Game Boy did, but it was a later system.
According to interviews, they had no idea how to design the system at first, but they decided to make a system that could run Donkey Kong which was the hottest game at the time. Donkey Kong was using the same board as Radar Scope which in turn was inspired by Namco's Galaxian I think.
The launch price was ¥14800 (according to wikipedia) which was very expensive at the time (about the same as the Nintendo Switch I think). It was already more expensive than what Yamauchi wanted, adding more colors and stuff would have made it too expensive to sell I think (all the newbie mistakes and non-functional features aside).
Quote:
I feel that Nintendo should have "refreshed" the NES in '85 for the US launch and added more things to bring it up to date, having proven the model in Japan for 2 years first.
Updating the NES and possibly breaking compatibility with existing games? Even if it were backwards compatible with Famicom games, that would mean developers would have to make two versions of their games to take advantage of the updated NES as well as the Famicom that people had already bought.
Quote:
Not that it seems to have mattered for some reason the US jumped on to it. My guess is piracy issues.
The US was under the effects of the "American video game crash" so games didn't sell at all. Nintendo somehow convinced people that they had high quality stuff (which they did), and it sold.
Nintendo was even reluctant to sell in Europe at first, even though we weren't affected by the crash as much. Bergsala (which is now Scandinavian Nintendo) had been successfully selling Game & Watch systems in Sweden and wanted to import the Famicom, but Nintendo wouldn't let them at first because they were afraid of the effects of the crash.
Quote:
But to answer the original post - because Japan was a backwater in 83, that was still crippled from the war - recovering but still gut punched, there was no money, so every single yen counted. So they made it as cheap as possible to sell it as cheap as possible to survive.
Maybe the yen-pinching mentality from the war was still in effect, but they had definitely recovered from the wounds of the war quite well by the eighties. The bubble economy started in 1986, so they were on the rise.
But yeah Nintendo was still a small company that had already failed in several business areas, so of course they wanted to make it as cheap as possible.
Yeah, the higher-resolution modes of those machines are practically text-only, since they eat up the RAM that could provide higher color resolution at lower pixel resolutions (and are still expensive RAM-wise). You'd need that setting for word processing and programming, while most games stayed at lower resolution. The Commodore 64 gets its extra colourfulness from resorting to 2:1-wide pixels in the lower-res setting. That would count as ugly in my book, although a bit charming in a certain sense.
If the NES had a high-res mode, we'd maybe see more text adventures, I guess. Not much else to do with it.
I think the most straightforward consideration for it would be to have 4K of built-in RAM, but that might have been too expensive at the time. I don't know.
Another factor to count in is that the NES not having software built in is really a feature. You don't need as much RAM if there's no OS to eat it. On the C64, you would have much more restricted access to zero page for your software. On the NES, it's all yours. That's, in other words, both more efficient and more straightforward.
Quote:
thank Ricoh that you're not designing 2-colour bitmaps and single-colour sprites, of which only 2 to 4 can appear on one scanline (if you're lucky).
Incidentally, I sort of did that for a Ming Mecca user.
Background is 4 colours per palette; 2 palettes can be shown simultaneously. Sprites are 2-colour (one solid, one see-through). There can only be 2 sprites at any one time. If you want "more", you have to resort to position jumping synced to the screen refresh (so with a bit of glitch-prone trickery, it can be 2 per scanline). A complete system costs ~10k USD, no cables or housing included. For being able to patch up more complex games, you'd need to add third-party or DIY logic gate modules. The price of gaming modular...
Quote:
On C64, you would have a lot more restricted access to zero page for your software.
As far as I know, most games/demos disable BASIC, never return to the OS again, and just "take over" the system. Some keep using the Kernal ROM; some don't even bother doing that. This leaves all of zero page usable except $00 and $01, which are in fact hardware registers.
As for this thread, I think the OP is simply a troll. He came up calling the NES "ugly architecture" but never mentioned what exactly he thought was ugly, and now he has left the discussion altogether, so he just wanted us to argue about what was ugly or whatever.
Quote:
I feel that Nintendo should have "refreshed" the NES in '85 for the US launch and added more things to bring it up to date, having proven the model in Japan for 2 years first.
That's actually exactly what they did - the system looks completely different and the cartridges aren't even physically compatible. They just made sure the software was (mostly) compatible. The big failure was that software stopped being compatible with the PAL NES - even the Soviet pirates were better than Nintendo at porting the console to PAL; the Dendy does it just fine!
Sorry, my bad - I didn't know commercial games didn't use the Kernal that much.
FrankenGraphics wrote:
Sorry, my bad - I didn't know commercial games didn't use the Kernal that much.
Well, I don't know exactly what percentage of games use the Kernal, but it's possible to ignore it entirely. Also, on the C64 the barrier between "commercial" and "homebrew" games isn't as evident as with the NES, since the platform was open to homebrew from its very start.
The NES's main advantage over the other machines of the era is hardware char-level scrolling. The C64 has pixel-level scrolling (some other machines don't even have that much), but you then have to manually shift the screen one char over, which eats a bunch of clocks. Then if you want colour RAM too... We found ways around it, though.
Being 40 chars wide causes pain when trying to do it in hardware... Other machines don't have a char mode, and, well, slow is the word...
So it's easier to get more colours on the screen on a NES, but the C64 can get more colours on the screen; it's a trade-off.
While the C64 does have the 8-sprites-on-a-line limit, the sprites are 24 pixels wide, so in NES terms it gets 24 sprites per line. The A800 only has 4 (or 5), but as tall as you like. The 8x8 is great for a bullet hell, but annoying for most other things, I would think. To make a normal-sized player character, Mario for example, you have to bolt a bunch together, then you have to store data to point out which sprites are offset from the "anchor" position, then you have to move and update all the sprites to move the character. And with a character being 2 or 3 sprites wide you hit flicker fast. First-level-of-Mario fast. Most games on the C64, for example, will use 1 or 2 sprites to make a character. So to move a player one pixel to the right it's:
INC $D000
and if 2 sprites
INC $D002
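The NES-side anchor-plus-offsets bookkeeping described above can be sketched in Python (the tile numbers and part layout are made up for illustration; real games keep this as a byte table in ROM):

```python
def build_oam(anchor_x, anchor_y, parts):
    """Build OAM entries (y, tile, attr, x) for a metasprite: every
    part is stored as an offset from the anchor, so moving the
    character means re-emitting all entries with a new anchor."""
    return [(anchor_y + dy, tile, attr, anchor_x + dx)
            for dx, dy, tile, attr in parts]

# A hypothetical 16x16 character made of four 8x8 tiles:
HERO = [(0, 0, 0x10, 0), (8, 0, 0x11, 0),
        (0, 8, 0x12, 0), (8, 8, 0x13, 0)]

oam = build_oam(100, 120, HERO)
# Moving one pixel right is a whole rebuild, not a single INC:
oam_moved = build_oam(101, 120, HERO)
```

That rebuild-everything cost is exactly the contrast with the C64's single `INC $D000`, though the flip side is that NES metasprites can be any shape and any count, not capped at 8 hardware objects.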
But the NES has a faster CPU clock speed. There are points where the smaller sprites help and places where they hinder. Then the C64 can access 16K of RAM for the graphics at one time, allowing it to store more sprite images. Said 16K is directly accessible by the CPU so you can modify it and race the beam or chase the beam as you need.
But to put the architecture into perspective: a C64 has 46 control registers on its graphics chip, although to be fair this does include the sprite settings; discounting them gives you 21 registers. The sound chip has 28 registers, the CIAs (of which there are 2) have 16 registers each, and the CPU has a 5-bit register as well. The NES Programmers Reference Guide is 2 stapled pages made on a photocopier.
Remember, we are comparing bare metal to bare metal, so none of the mappers that let you get away from what is just in the box; both the NES and the other machines can do that as well.
I don't think massive upgrades would have been done. Just maybe bump it to 8K RAM and add a couple of timers to make the machine more in line with the computers western devs were used to. Japanese devs would be able to ignore them for a local machine, and then those features would work on a US/EU machine without any changes. The US and Euro devs mostly made things for the western market, so not being able to throw a cart into Japan probably wasn't a big issue; and if you did, you could easily add a CIA chip or some extra RAM onto the cart to bring it up to the extra spec.
The C64 has banking, so you can kick the internal software to the kerb and take all 64K of RAM for yourself, and use all of the ZP for yourself as well, as most games do. And then some games are just written in BASIC or partly use BASIC, Sid Meier's Pirates! being the typical example.
2 pixels wide on a 402-pixel scanline makes the pixels 1.27x as wide as an NES pixel, so not that much chunkier. But then you can overlay hi-res sprites on the double-wide multicolour pixels, and those hi-res pixels are about 60% the width of an NES pixel.
PAL vs NTSC is just that they don't work together; it's not Nintendo's fault they're incompatible, they just are. You can't keep the same number of clocks per line, the same number of lines, or the same refresh rate. These days in the HD world, where all TVs support 60 Hz HD images, it's not an issue any more, but back then there was nothing you could do.
Oziphantom wrote:
PAL vs NTSC is just that they don't work together; it's not Nintendo's fault they're incompatible, they just are. You can't keep the same number of clocks per line, the same number of lines, or the same refresh rate. These days in the HD world, where all TVs support 60 Hz HD images, it's not an issue any more, but back then there was nothing you could do.
Well, about the refresh rate nothing can be done, obviously, but the ratio of clocks per line should have been kept the same so that timed code works; the Dendy does that just fine.
The issue is there needs to be a master clock. This is usually the colour carrier clock: on PAL this is 4.43361875 MHz and on NTSC it is 3.58 MHz. If you have looked at an NTSC Z80 machine (a lot of arcades, for example, even the SNES), the 3.58 MHz clock might look familiar.
The PPU has to run on a clock based on the output image, otherwise the image will roll and the pixels will jump around. If the PPU runs at one speed and the CPU at another that is not a nice division, then the PPU clock and the CPU clock are out of phase, which complicates sending data to the PPU registers or OAM. An example of this case is the C128's VDC, which uses a different time base than the VIC-IIe chip. Bil Herd had to fudge it with some buffer logic, and the chip is a total pain in the butt to use: you have to ask the chip if it is ready to receive data, poll it to find out when it is, write the data, then wait a few clocks to make sure it got through the buffer, and start again. Compare the VIC and the CPU, where the CPU runs at 1/8 of the VIC's clock: you just write what you want, and the VIC's address setup times, hold times, and data latch times are all in perfect sync with the CPU, and it just works(tm). So sliding the CPU clock on the PAL version would cause more pain than it's worth; sometimes a write would land on the first clock, sometimes the second, and you would have jitter.
The Dendy is a kooky special PAL machine that uses a 3.58Mhz colour clock but with PAL frame timings; the divides then line up with NTSC timings, so the clocks can match NTSC-ish, which after the dividers gives you the same number of clocks per line.
C64 sprites can be most usefully thought of as four 24x21-pixel sprites with 4 colors plus transparency. Each consists of a 1bpp outline plane on top of a chunkier multicolor plane. Thus you have four colors: the two shared multicolor sprite colors and two particular to a single sprite (the outline plane's color and the first multicolor color). Mayhem demonstrates this well.
One serious problem with C64 is that it takes unbearably long to load games from cassette tape. Unlike with NES, few games came on cartridge, and as I understand it, few people outside the USA got a 1541 disk drive because it was so expensive.
Dendy doesn't use "a 3.58Mhz colour clock". It uses the same color subcarrier and master clock frequency as the PAL NES, just with a /15 in the CPU instead of a /16 and a later NMI in the PPU. Use of /15 causes the number of CPU clocks per scanline, which is the important part, to match NTSC.
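The divider arithmetic lidnariq describes checks out; a quick worked comparison using the published master clock frequencies and the 341 PPU clocks per scanline:

Code:
NTSC NES: master 21.477272 MHz, PPU = master/4, CPU = master/12
          CPU cycles per scanline = 341 / (12/4) = 113.67
PAL NES:  master 26.601712 MHz, PPU = master/5, CPU = master/16
          CPU cycles per scanline = 341 / (16/5) = 106.56
Dendy:    same PAL master,      PPU = master/5, CPU = master/15
          CPU cycles per scanline = 341 / (15/5) = 113.67

So swapping the /16 for a /15 is exactly what restores the NTSC cycles-per-line count on PAL video.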
ccovell wrote:
Personally, I would just say monobogdan is trolling, and ignorant of how the passage of time works.
"Why didn't you invest in Apple stock in 1994? They're worth billions now!"
"Why did you marry Mom and have kids? Your next girlfriend could have been much nicer."
"Why did you design the space shuttle, NASA, when it would explode in 1986?"
"Why did you invent the TV, Mr. Farnsworth, when you knew it wouldn't be in colour for a couple more decades?"
etc.
Learn about the computers and game systems that preceded and were contemporary to the Famicom / NES and thank Ricoh that you're not designing 2-colour bitmaps and single-colour sprites, of which only 2 to 4 can appear on one scanline (if you're lucky).
No, it's not trolling.
I'm talking about why nes can't self read input every frame
A serial read is no issue as long as the packet is small enough, and a gamepad report is a very small packet.
You can, theoretically, also use these ports for external device communication, provided you have a device. Rachel Simone Weil has used it for posting and reading tweets, as well as TASbotting via wi-fi. Not that it's limited to the NES, but the point is you can do it. Heck, you could probably connect a couple of NES units for true separate-screen multiplayer, had you the time and means.
It takes a minuscule amount of time to read in the grand scheme of things (even with an active DMC), and even then some games opted for an unrolled loop with the same code repeated 8 times. It works, and honestly my question would rather be why the designers of the Mega Drive felt that a controller port interrupt was necessary. Does such a thing improve responsiveness noticeably at all?
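For reference, the looped read being discussed is only about ten instructions. A sketch using the common ring-counter idiom (buttons is an assumed zero-page variable; as noted above, a naive read like this can still be glitched by DMC DMA):

Code:
read_pad:
        lda #$01
        sta $4016      ; strobe: continuously reload the shift register
        sta buttons    ; seed the ring counter with the 1 we just wrote
        lsr a          ; A = 0
        sta $4016      ; end strobe; controller now shifts out bits
loop:   lda $4016
        lsr a          ; serial bit -> carry (A, B, Select, Start, U, D, L, R)
        rol buttons    ; carry -> buttons; after 8 shifts the seed bit exits
        bcc loop
        rts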
za909 wrote:
It takes a minuscule amount of time to read in the grand scheme of things (even with an active DMC), and even then some games opted for an unrolled loop with the same code repeated 8 times. It works, and honestly my question would rather be why the designers of the Mega Drive felt that a controller port interrupt was necessary. Does such a thing improve responsiveness noticeably at all?
SMD have more powerful CPU
Reading controllers in software using what amounts to bit-banged SPI has three advantages:
- It makes the hardware cheaper to manufacture.
- It doesn't really burden anything. I counted 228 cycles to read both controllers, which is two scanlines and less than 1% of CPU time.
- It makes the system more flexible, as manufacturers can make specialized controllers that send more detailed reports than the standard controller. Thwaite using the mouse is an example of this.
za909 wrote:
my question would rather be why the designers of the Mega Drive felt that a controller port interrupt was necessary.
An interrupt is more necessary for a pin devoted to a 2D light gun's photodiode or serial communication between two machines. Polling is good enough for returning a Y coordinate from a light gun, based on the time between the start of a frame and when its sensor starts to receive light, in order to narrow down which targets are close. But retrieving both X and Y coordinates, as in the case of the Menacer, Justifier, or Super Scope, needs more precise circuitry.
monobogdan wrote:
I'm talking about why nes can't self read input every frame
Is that all? A problem you can easily solve with a tiny routine called from the NMI handler, one you'll very likely write only once in your life and never think about again? That hardly sounds like a significant design flaw to me.
As for something else: I've thought more than once, though without any real reference, about how a chunk of the address range is wasted on mirrored features, but I don't know if this is normal for comparable systems.
The ColecoVision memory map has RAM from $6000 to $63FF, but it's mirrored all the way up to $7FFF. Its I/O map is full of mirroring as well.
For hardware external to the CPU die, incomplete decoding is cheaper. The NES APU is completely decoded (and thus not mirrored) because it's on the same die as the CPU.
FrankenGraphics wrote:
As for something else: I've thought more than once, though without any real reference, about how a chunk of the address range is wasted on mirrored features, but I don't know if this is normal for comparable systems.
It's very normal, and it's the opposite of a waste. Every mirror doubling is the result of one bit of the address being ignored (no circuitry, no cost). To restrict something to just one memory address means you need logic to deal with every single bit of the address.
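A quick illustration of what those ignored address bits look like from the CPU side: within $2000-$3FFF the PPU decodes only the low 3 bits, so its 8 registers repeat every 8 bytes.

Code:
        lda $2002      ; PPUSTATUS
        lda $200A      ; same register: bit 3 of the address is ignored
        lda $3FFA      ; still PPUSTATUS, near the top of the mirror range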
monobogdan wrote:
SMD have more powerful CPU
We've been over this before.
Thousands of times across the internet.
In short: no, the Z80 at X MHz is roughly as capable as a 6502 at X/2 MHz.
rainwarrior wrote:
It's very normal, and it's the opposite of a waste. Every mirror doubling is the result of one bit of the address being ignored (no circuitry, no cost). To restrict something to just one memory address means you need logic to deal with every single bit of the address.
Which I think the NES does for the $4000-$4017 registers, so logic is sort of wasted on this. (Sort of, because this address range was used by the FDS and other mappers.)
lidnariq wrote:
monobogdan wrote:
SMD have more powerful CPU
We've been over this before.
Thousands of times across the internet.
In short: no, the Z80 at X MHz is roughly as capable as a 6502 at X/2 MHz.
Mega Drive, not Master System.
But as for the Z80, I don't think the ratio is even as low as 2:1. It's probably 3:1,
particularly if you're randomly accessing members of an actor data structure.
Oh, pff, I keep on forgetting that this entire thread started with comparing the NES to things that are 5-20 years newer.
The Mega Drive needs the interrupts for the blast processing to work properly.
Just kidding!
Bregalad wrote:
Quote:
I feel that Nintendo should have "refreshed" the NES in 85 for the US launch and added more things to bring it up to date, having proven the model in Japan for 2 years first.
That's actually exactly what they did - the system looks completely different and cartridges aren't even compatible. They just made sure the software was (mostly) compatible.
Refreshed in design, but not in the sense Oziphantom meant. And I hardly think removing the microphone (which broke compatibility with some games) and the 15-pin expansion port counts as an upgrade. The new expansion port and controller ports are upgrades, though.
If the NES was released a little bit later, maybe 1987, it could've had the following specs:
- 8 palettes of 4 colors for BG tiles, and 8 palettes of 3 colors for sprites
- 16kB of CHR-ROM, 512 tiles for sprites and 512 tiles for BG
- 16 sprites per scanline
- 8x8 or 8x16 sprites (selected per sprite)
- CHR patterns are stored with each pair of bytes representing an 8x1 sliver
- 6 byte FIFO
That interrupt on the controller port in the MD is for light guns and RS-232 communication. When the TH line is configured as an input and port interrupts are enabled, a high-to-low transition on TH raises a port interrupt on the 68K; in addition, you can configure the VDP to freeze the HV counter and read the pixel position where the port interrupt happened. Changing the TH state in software will not cause port interrupts; the change must come from external hardware (and to read normal controllers TH must be set as an output, so there can never be port interrupts).
tepples wrote:
C64 sprites can be most usefully thought of as four 24x21-pixel sprites with 4 colors plus transparency. Each consists of a 1bpp outline plane on top of a chunkier multicolor plane. Thus you have four colors: the two shared multicolor sprite colors and two particular to a single sprite (the outline plane's color and the first multicolor color). Mayhem demonstrates this well.
One serious problem with C64 is that it takes unbearably long to load games from cassette tape. Unlike with NES, few games came on cartridge, and as I understand it, few people outside the USA got a 1541 disk drive because it was so expensive.
Dendy doesn't use "a 3.58Mhz colour clock". It uses the same color subcarrier and master clock frequency as the PAL NES, just with a /15 in the CPU instead of a /16 and a later NMI in the PPU. Use of /15 causes the number of CPU clocks per scanline, which is the important part, to match NTSC.
Sorry, I meant the chroma clock. The Dendy is the special hacky Russian/Argentinian NES that uses PAL-M/N, right? So it kinda works on NTSC-M and SECAM displays, because those countries couldn't really get their TV standards straight?
https://en.wikipedia.org/wiki/PAL#PAL-N
The UA6538 PPU in the Dendy outputs PAL video at the standard chroma subcarrier frequency (4.43 MHz). It is not PAL M or N.
It was really nice how PAL and NTSC both have almost the exact same line frequency, and that PAL has a color carrier almost exactly 5/4 of NTSC.
I might as well say the 6502 instruction set is awkward. It should've given both index registers the same addressing modes, and included "stx abs,y" and "sty abs,x". It should have had a non-indexed indirect mode (though later revisions added it), add and subtract without carry, register swap instructions, and the ability to add or subtract the index registers.
psycopathicteen wrote:
included "stx abs,y" and "sty abs,x"
The weirdest thing is that those two instructions are just barely not present... I have to assume it just didn't occur to the designing team.
Given how it ended up (with SHY abx/SHX aby), I expect it's more an error that the 6502 designers never got around to fixing, even in most descendants. Those instructions are on the 65ce02, though, five? designs later…albeit not at those opcodes.
psycopathicteen wrote:
Being able to add or subtract index registers.
Code:
ByteTable:
.db $00, $01, $02, $03, $04, (...) $fe, $ff
adc ByteTable, x ;ADX
adc ByteTable, y ;ADY
sbc ByteTable, x ;SBX
sbc ByteTable, y ;SBY
What tokumaru said is that by having an identity table, you can emulate certain instructions like doing arithmetic or bitwise operations with the index registers;
it's in this thread.
They're not the accumulator, though. The accumulator is for math. Index registers aren't.
This exchange seems relevant. Nobody tell them about the PowerPC.
(And that thread you linked calls the ZP 256 extra registers.)
The Dendy sounds like a fancy piece of engineering then.
Does the NES have a DRAM refresh circuit, or does it force SRAM? It would have been nice if it had a DRAM refresh circuit on board, so you could put cheaper DRAMs in the carts instead of making us shell out for SRAM each time.
The 6502 had one goal: to be $5. There was nothing else to compare it to, so Chuck and Mensch weren't forced into adding feature X or Y to compete. Personally I think getting SEI and CLI backwards was their biggest misstep on the 6502. Also "Non-Maskable"... sigh... For me, Mensch nailed it with the 65816; it has all the goodies, although the 4510 might be a bit better in the end with its built-in bank-switching opcode.
PHY/X PLX/Y would have really helped though.
TXY,TYX also would have been useful.
Not having LDA (zp) has been a thorn in my side many a time.
LDA (zp,x) is a total waste and would have been better served as LDA (zp,x),y which would be pure gold.
JSR (XXXX) would save a lot of pain too.
Remember if you really need speed, S is a register too
Oziphantom wrote:
The Dendy sounds like a fancy piece of engineering then.
Yes, I'm still impressed that pirates did a better job of adapting the Famicom architecture to PAL than Nintendo themselves.
Quote:
Personally I think getting SEI and CLI backwards was their biggest misstep on the 6502.
What do you mean backwards? Do you think the mnemonics would better match their function if they were switched? The CPU only understands opcodes, not mnemonics, so if this bothers you so much you can always create your own assembler (or modify someone else's) with these two switched. I mean, look at the source code for the NES version of
Magic Floor... It's written in 80XX syntax! You can write source code any way you want, as long as the resulting binary is comprised of valid 6502 opcodes. Unintuitive mnemonics are NOT hardware design flaws by any stretch of the imagination.
Quote:
PHY/X PLX/Y would have really helped though.
Would they? Maybe it's my style of coding, but I hardly ever use the stack to back up values. Even in the NMI handler, where I need to back up all 3 registers, I use 3 ZP locations I have reserved exclusively for this purpose, instead of using the stack.
Quote:
TXY,TYX also would have been useful.
ByteTable to the rescue:
Code:
ByteTable:
.db $00, $01, $02, $03, $04, (...) $fe, $ff
ldy ByteTable, x ;TXY
ldx ByteTable, y ;TYX
Quote:
Not having LDA(zp) has been thorn in my side many a time.
That I actually miss sometimes, but I often end up finding a way to make Y useful instead of having to load it with 0.
Quote:
LDA (zp,x) is a total waste
I don't know, LDA (ZP, X) can be useful for accessing collections of streams, such as the different channels of a song.
Quote:
and would have been better served as LDA (zp,x),y which would be pure gold.
But then you'd be using all your registers for reading, meaning you could have trouble indexing the destination if not reusing Y or an auto-increment register such as $2007. Not to mention that this would be a pretty slow instruction.
Quote:
JSR (XXXX) would save a lot of pain too.
Working with the attribute tables in an 8-way scroller is a pain. Changing the palette mid-frame is a pain. Doing raster effects without IRQs is a pain. Animating patterns in such a short vblank time is a pain. JSR'ing to a JMP (XXXX) is definitely not a pain.
Quote:
Remember if you really need speed, S is a register too
Especially on the Atari 2600, which doesn't have any interrupts, so S can safely be used to load data faster in display kernels.
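A sketch of that S-as-data-pointer trick, safe only while no interrupt can fire, which is why it suits the 2600 or a carefully scheduled NES kernel (save_s is an assumed RAM byte, and the table location is illustrative):

Code:
        tsx
        stx save_s     ; park the real stack pointer
        ldx #$3F
        txs            ; S now "points" just below a data table at $0140
        pla            ; 4 cycles: reads $0140, and S increments for free
        sta $2007
        pla            ; reads $0141
        sta $2007
        ; ...repeat as needed...
        ldx save_s
        txs            ; restore the real stack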
As far as instruction sets go, sometimes I look at the Z80 set with quite some envy. It may be slow as all hell but it sure has some clever instructions that I could really use in NES programs. For example the bit tests that can test a single bit in a register or in RAM. With the 6502 you can only check the top two bits with relative ease (and even the available addressing modes of BIT leave a lot to be desired) while the rest require you to load from the address and then AND to discard the bits you aren't interested in testing, or LSR to carry if you want to test bit 0.
I could say the same about the individual bit-setting and clearing instructions. No need to load from RAM and then use AND/ORA, and then storing the result. The Z80 even has those DMA-type instructions to work with a large chunk of data (moving it, finding a certain value in a table, etc.)
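For comparison, here is what those tests cost on the 6502: bits 7 and 6 are cheap thanks to BIT, bit 0 costs a shift, and anything else needs the load-and-AND dance (flags and the branch targets are hypothetical names):

Code:
        bit flags        ; N <- bit 7, V <- bit 6, A untouched
        bmi bit7_is_set
        bvs bit6_is_set
        lda flags
        lsr a            ; bit 0 -> carry
        bcs bit0_is_set
        lda flags
        and #%00001000   ; any middle bit: the long way
        bne bit3_is_set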
tokumaru wrote:
Oziphantom wrote:
The Dendy sounds like a fancy piece of engineering then.
Yes, I'm still impressed that pirates did a better job of adapting the Famicom architecture to PAL than Nintendo themselves.
Quote:
Personally I think getting SEI and CLI backwards was their biggest misstep on the 6502.
What do you mean backwards? Do you think the mnemonics would better match their function if they were switched? The CPU only understands opcodes, not mnemonics, so if this bothers you so much you can always create your own assembler (or modify someone else's) with these two switched. I mean, look at the source code for the NES version of
Magic Floor... It's written in 80XX syntax! You can write source code any way you want, as long as the resulting binary is comprised of valid 6502 opcodes. Unintuitive mnemonics are NOT hardware design flaws by any stretch of the imagination.
Sure, I could easily make my own standard, but the standard set down by Chuck and Bill is the one we all know and use. SEI, "Set Enable Interrupt", actually disables interrupts. Not a 6502 design flaw, nothing wrong with the die, but still Chuck and Bill's misstep.
tokumaru wrote:
Quote:
PHY/X PLX/Y would have really helped though.
Would they? Maybe it's my style of coding, but I hardly ever use the stack to back up values. Even in the NMI handler, where I need to back up all 3 registers, I use 3 ZP locations I have reserved exclusively for this purpose, instead of using the stack.
If you have a single NMI source and a 100% guarantee you won't re-enter, then sure, use the ZP; it's faster. But if you have 4 IRQ sources plus an NMI source, or might re-enter, use the stack or you could get lots of pain, the kind where adding 1 opcode to an unrelated function turns the whole screen into a mess. Or make 5x3 store areas, if you have the RAM to spare. If you want to make light threads or do "peel off" threading on a 6502, the stack helps. Also, when you just need to preserve a register to recall it 3 lines down or so, having the option of the stack over the ZP (which might get trashed by some other function, or by an interrupt if it's a shared "general ZP store") would be nice sometimes. Rather than an STX somewhere and LDX somewhere else, which adds a dependency to your code or lib that could be avoided with a stack operation. You could even use PHX PLY to get around not having a TXY, for when you need what is in X to now be in Y but really want to preserve A, a little do-si-do. It would also be handy for parameter passing, for values you use in the function but don't need right now. For example something like
Code:
myFunc
STY ZPY
STX ZPX
LDY #4
STA (ptr),y
JSR functionThatModifiesA ; I hope this doesn't trash ZPY or ZPX one day
LDX ZPX
ADC Table,x
LDY ZPY
AND Table,y
LDY #4
STA (ptr),y
RTS
Could become
Code:
myFunc
PHY
PHX
LDY #4
STA (ptr),y
JSR functionThatModifiesA ; can do whatever it wants with things
PLX
ADC Table,x
PLY
AND Table,y
LDY #4
STA (ptr),y
RTS
tokumaru wrote:
Quote:
TXY,TYX also would have been useful.
ByteTable to the rescue:
Code:
ByteTable:
.db $00, $01, $02, $03, $04, (...) $fe, $ff
ldy ByteTable, x ;TXY
ldx ByteTable, y ;TYX
Yeah byte tables are nice things to have around.
tokumaru wrote:
Quote:
Not having LDA(zp) has been thorn in my side many a time.
That I actually miss sometimes, but I often end up finding a way to make Y useful instead of having to load it with 0.
Quote:
LDA (zp,x) is a total waste
I don't know, LDA (ZP, X) can be useful for accessing collections of streams, such as the different channels of a song.
Quote:
and would have been better served as LDA (zp,x),y which would be pure gold.
But then you'd be using all your registers for reading, meaning you could have trouble indexing the destination if not reusing Y or an auto-increment register such as $2007. Not to mention that this would be a pretty slow instruction.
If X is your entity number and Y is the field you want in the entity, then ZP can hold an entity pointer table which you can easily index into. It gives you the ability to easily make two-dimensional arrays. It would add 1 clock to the instruction, which is a lot faster than doing it the long way as you must now.
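On a real 6502, the closest you get to that hypothetical lda (zp,x),y is copying the selected pointer to a fixed zero-page pair first. A sketch, with entity_ptrs as a zero-page table of 16-bit pointers, X = entity index * 2, Y = field offset, and ptr an assumed zero-page pair:

Code:
        lda entity_ptrs,x
        sta ptr
        lda entity_ptrs+1,x
        sta ptr+1
        lda (ptr),y      ; effectively entity[X/2].field[Y]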
tokumaru wrote:
Quote:
JSR (XXXX) would save a lot of pain too.
Working with the attribute tables in an 8-way scroller is a pain. Changing the palette mid-frame is a pain. Doing raster effects without IRQs is a pain. Animating patterns in such a short vblank time is a pain. JSR'ing to a JMP (XXXX) is definitely not a pain.
Doing a push of the address minus 1 onto the stack and then jumping is cumbersome, and I come from a machine where I can actually just change the JSR parameters. Still, it would be nice if I didn't have to. Also, those other things are NES problems, not 6502 problems.
tokumaru wrote:
Quote:
Remember if you really need speed, S is a register too
Especially on the Atari 2600, which doesn't have any interrupts, so S can safely be used to load data faster in display kernels.
Oziphantom wrote:
SEI Set Enable Interrupt - actually disables Interrupts.
The word "enable" isn't what the E stands for. "SE" and "CL" are just short for "set" and "clear"; SED is the only one of those that's actually intended to enable anything.
For me these were always SEt I, CLear I, SEt D, CLear D, with I and D being the "IRQ inhibit" and "Decimal mode" flags. Pretty straightforward if you ask me.
In fact, the actual 6502 datasheet from 1976 (page 5) refers to SEI as "Set Interrupt Disable", as well.
That works too, if you call the I flag "interrupt disable".
Oziphantom wrote:
Sure, I could easily make my own standard, but the standard set down by Chuck and Bill is the one we all know and use. SEI, "Set Enable Interrupt", actually disables interrupts. Not a 6502 design flaw, nothing wrong with the die, but still Chuck and Bill's misstep.
The early MOS datasheets for 6502 call them "Set Interrupt Disable Status" and "Clear Interrupt Disable Bit". I dunno where you got "Set Enable Interrupt" from, but if you were confused about the meaning of this mnemonic because of it, it wasn't from "Chuck and Bill".
(Edit: Okay apparently 3 other people said this already while I was typing.)
Oziphantom wrote:
tokumaru wrote:
Quote:
JSR (XXXX) would save a lot of pain too.
Working with the attribute tables in an 8-way scroller is a pain. Changing the palette mid-frame is a pain. Doing raster effects without IRQs is a pain. Animating patterns in such a short vblank time is a pain. JSR'ing to a JMP (XXXX) is definitely not a pain.
Doing a push with the address -1 on the stack then doing a jump is cumbersome, and I come from a machine where I can actually just change the JSR params
Still it would be nice if I didn't have to. Also those other things are NES problems not a 6502 problem.
A substitute wrapper for JSR (XXXX) takes only one extra line of code.
Code:
jsr jsr_xxxx
...
jsr_xxxx:
jmp ($XXXX)
The -1 problem is self-inflicted complexity. You can use RTS for a jump-table implementation, but there's not a whole lot of need to do so. It's only really "necessary" in some very specific situations (e.g. you want the pointer on the stack).
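For completeness, the RTS jump-table trick looks like this (handler names are hypothetical; the tables hold each address minus 1 because RTS adds 1 when it "returns"):

Code:
        lda handlers_hi,x  ; X = handler index
        pha
        lda handlers_lo,x
        pha
        rts                ; "returns" into the selected handler

handlers_lo: .db <(handler0-1), <(handler1-1)
handlers_hi: .db >(handler0-1), >(handler1-1)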
Oziphantom wrote:
Personally I think getting SEI and CLI backwards was their biggest misstep on the 6502. Also Non-Maskable ..
sigh...
Think of it this way and it'll click.
The I bit is the minimum priority level an interrupt needs to get through.
NMI and RESET are level 1, and IRQ is level 0.
SEI sets the minimum level to 1, and only NMI and RESET have level at least 1.
CLI sets the minimum level to 0, and IRQ, NMI, and RESET all have level at least 0.
tokumaru wrote:
What do you mean backwards? Do you think the mnemonics would better match their function if they were switched?
I think Oziphantom is referring to the fact that it's an IRQ
inhibit bit in the first place. On some other architectures, the sense of the I bit is such that 1 means allow IRQ and 0 means suppress IRQ, such as the 8086 and the otherwise very 65C02-like SPC700. Blargg's SPC700 macro pack for ca65 switches the mnemonics back around to the 6502 way, but the underlying 8086 way is still visible if you PHA PLP or PHP PLA.
tokumaru wrote:
Even in the NMI handler, where I need to backup all 3 registers, I use 3 ZP locations I have reserved exclusively for this purpose, instead of using the stack.
And then you formally prove that NMI-in-NMI can never happen, right?
tokumaru wrote:
LDA (ZP, X) can be useful for accessing collections of streams, such as the different channels of a song.
As seen in, for example, the Pently music engine. It stores pointers to note attack data, sound effect data, and note pattern data for each of the four APU channels, plus one more pointer to note pattern data for the attack track. But it might not appear so useful if your only 6502 experience is on 6502-based home computers such as Apple IIe and Commodore 64, where BASIC uses up most of zero page.
tokumaru wrote:
Especially on the Atari 2600, which doesn't have any interrupts, so S can safely be used to load data faster in display kernels.
Or in my Popslide VRAM update kernel for NES, because there's enough space in page $01 to leave room for the interrupt handler before the update data.
za909 wrote:
As far as instruction sets go, sometimes I look at the Z80 set with quite some envy.
I sure don't, particularly when it comes to random access to the properties of an entity. IX/IY stuff is slow on the Z80 and nonexistent on the LR35902, which lacks a couple of the Z80 prefixes. On the LR35902, or on the Z80 without the (slow) IX/IY stuff, you need to make successively accessed properties either adjacent (so you can INC L to get to the next, and/or use the (HL+) and (HL-) modes) or one bit apart in address (so you can SET/RES bits in L).
Oziphantom wrote:
If X is your entity number, and Y is the field you want in the entity then ZP can hold an entity pointer table of which you can easily index into.
The typical way to do this on the 6502 is: X is your entity number, with a separate array for each property. This is the "structure of arrays" approach:
Code:
.bss
actor_xhi: .res NUM_ACTORS
actor_x: .res NUM_ACTORS
actor_xlo: .res NUM_ACTORS
actor_dx: .res NUM_ACTORS
actor_dxlo: .res NUM_ACTORS
actor_y: .res NUM_ACTORS
actor_ylo: .res NUM_ACTORS
actor_dy: .res NUM_ACTORS
actor_dylo: .res NUM_ACTORS
actor_class: .res NUM_ACTORS ; index into move vtable and sprite sheet pointers
actor_frame: .res NUM_ACTORS
actor_frame_sub: .res NUM_ACTORS
actor_health: .res NUM_ACTORS
actor_hurt_ht: .res NUM_ACTORS
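With that layout, per-field access is a plain absolute,X with no pointer setup at all. A movement-update sketch over the arrays above (treating class 0 as an empty slot is my assumption, not part of the original listing):

Code:
        ldx #NUM_ACTORS-1
move_loop:
        lda actor_class,x
        beq next_actor     ; assumed: class 0 = unused slot
        clc
        lda actor_xlo,x
        adc actor_dxlo,x
        sta actor_xlo,x
        lda actor_x,x
        adc actor_dx,x
        sta actor_x,x      ; 16-bit X position += 16-bit velocity
next_actor:
        dex
        bpl move_loop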
za909 wrote:
As far as instruction sets go, sometimes I look at the Z80 set with quite some envy. It may be slow as all hell but it sure has some clever instructions that I could really use in NES programs. For example the bit tests that can test a single bit in a register or in RAM. With the 6502 you can only check the top two bits with relative ease (and even the available addressing modes of BIT leave a lot to be desired) while the rest require you to load from the address and then AND to discard the bits you aren't interested in testing, or LSR to carry if you want to test bit 0.
I could say the same about the individual bit-setting and clearing instructions. No need to load from RAM and then use AND/ORA, and then storing the result. The Z80 even has those DMA-type instructions to work with a large chunk of data (moving it, finding a certain value in a table, etc.)
The DMA block-transfer instructions are also, unlike the NES's, beatable by fairly unremarkable programming.
Data tables for set/clear/toggle/test bit are also easy: OR masks $01 $02 $04 $08 $10 $20 $40 $80, and AND masks $FE $FD $FB $F7 $EF $DF $BF $7F.
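Spelled out as a sketch, using those mask tables (flags is an assumed RAM byte, X = bit number 0-7):

Code:
OrMask:  .db $01,$02,$04,$08,$10,$20,$40,$80
AndMask: .db $FE,$FD,$FB,$F7,$EF,$DF,$BF,$7F

        lda flags
        ora OrMask,x   ; set bit X
        sta flags

        lda flags
        and AndMask,x  ; clear bit X
        sta flags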
rainwarrior wrote:
Oziphantom wrote:
Sure I could easily make my own standard but the standard set down by Chuck and Bill is the standard we all know and use. SEI Set Enable Interrupt - actually disables Interrupts. Not a 6502 design flaw, nothing wrong with the die, but still Chuck and Bill's misstep
The
early MOS datasheets for 6502 call them "Set Interrupt Disable Status" and "Clear Interrupt Disable Bit". I dunno where you got "Set Enable Interrupt" from, but if you were confused about the meaning of this mnemonic because of it, it wasn't from "Chuck and Bill".
(Edit: Okay apparently 3 other people said this already while I was typing.)
I stand corrected, so it does... I guess I never really learnt the official names by heart, and over the last 28 years just derived them again: Branch Not Equal, STore A, etc. The irony is that "Set Enable Interrupt" and "SEt Interrupt" are both still valid, in that that is what the instruction does: you do set the interrupt-enable line, it's just an active-low signal. I feel it would have been clearer as DII and ENI, DIsable Interrupts and ENable Interrupts. While I say it's the biggest misstep, that doesn't mean I think it is a big misstep.
rainwarrior wrote:
A substitute wrapper for JSR (XXXX) takes only one extra line of code.
Code:
jsr jsr_xxxx
...
jsr_xxxx:
jmp ($XXXX)
The -1 problem is self inflicted complexity. You
can use RTS for a jump table implementation, but there's not a whole lot of need to do so. It's only really "necessary" in some very specific situations (e.g. you want the pointer on the stack).
(/)_. JSR (XXXX,x) is what I meant, sorry; I forgot the ,x and didn't notice until I saw your code. You can store the address into a location and jump to it; personally I use the RTS trick as it's smaller. It seems most of the NES code I have read puts the table after a JSR, then looks up the stack to read beyond the caller; fair statement?
Oziphantom wrote:
Seems most of the NES code I have read does the table after a JSR then looks up the stack to read beyond the caller method, fair statement?
I've seen that particular technique in one or two NES games, yes.
tepples wrote:
And then you
formally prove that NMI-in-NMI can never happen, right?
Code:
Save registers
Do the critical thing
Ack the interrupt
Do something long that you don't care about being interrupted: prepare animation data, shift memory, etc
restore registers
rti
Once you do the acknowledge, your interrupt handler can fire again, and hence you can NMI in your NMI, if your hardware requires acknowledgement for NMIs. Yes, in most cases you can guarantee to yourself that you will not NMI in your NMI. I would think on a NES, if you got bogged down doing something long that only happens every now and then (for example too many entities trying to update their animation data), allowing the NMI to trigger again would be beneficial, as you then don't drop a frame and get marked down for "slow-down".
You'd add the necessary safeguards to stop the update-animations function being re-entered, so things at least move, and maybe make sure the player still updates.
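One sketch of such a safeguard: a busy flag tested around the long, interruptible section, so a nested NMI does the critical work but skips re-entering the long part (flag and routine names here are made up for illustration).
Code:
nmi:
	pha                   ; save registers
	txa
	pha
	tya
	pha
	; ... time-critical PPU work here, then acknowledge the interrupt ...
	lda updating          ; did this NMI interrupt the long section?
	bne done              ; yes: skip it so it isn't re-entered
	inc updating
	jsr update_animations ; long, interruptible work
	dec updating
done:
	pla                   ; restore registers
	tay
	pla
	tax
	pla
	rti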
tepples wrote:
tokumaru wrote:
LDA (ZP, X) can be useful for accessing collections of streams, such as the different channels of a song.
As seen in, for example, the Pently music engine. It stores pointers to note attack data, sound effect data, and note pattern data for each of the four APU channels, plus one more pointer to note pattern data for the attack track. But it might not appear so useful if your only 6502 experience is on 6502-based home computers such as Apple IIe and Commodore 64, where BASIC uses up most of zero page.
Sure, it's not useless as in having absolutely no use whatsoever, just one of the much lesser-used instructions (but on a NES they could have re-purposed CLD and SED, though, right?
). The audio-stream example is basically
the use for it. Of which I would argue that the ,x,y double lookup is more useful, and for the audio case you can do a LDY #0 and pay the 3-clock penalty to get just a ,x version again. Having said that, you probably want to read a value out of the audio stream, for which loading Y with the current music-data index and doing a LDA (MusicTable,Channel),DataIndex would be handy in that case anyway, right?
Its other
top use is: I want to do LDA (ZP), but Y has something useful in it, so I do LDX #0 and LDA (ZP,X) instead.
Also good for burning 6 clocks in 2 bytes
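The audio-stream case mentioned above can be sketched as follows (labels are illustrative): one two-byte zero-page pointer per channel, with X selecting the channel and (zp,x) fetching the next stream byte.
Code:
; channel_ptr: four 2-byte stream pointers in zero page, one per APU channel
; X = channel number * 2 (0, 2, 4, 6)
	lda (channel_ptr,x)   ; read the next byte of this channel's stream
	inc channel_ptr,x     ; advance the pointer's low byte
	bne no_carry
	inc channel_ptr+1,x   ; carry into the high byte
no_carry: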
BASIC has 3 jobs in this world: one, to do a LOAD; two, to do a RUN; three, to do a SYS; and then it can get its fat arse out of the memory map.
tepples wrote:
Oziphantom wrote:
If X is your entity number, and Y is the field you want in the entity then ZP can hold an entity pointer table of which you can easily index into.
The typical way to do this on 6502 is X is your entity number, and a separate array of properties for each entity. This is the "structure of arrays" approach:
Code:
.bss
actor_xhi: .res NUM_ACTORS
actor_x: .res NUM_ACTORS
actor_xlo: .res NUM_ACTORS
actor_dx: .res NUM_ACTORS
actor_dxlo: .res NUM_ACTORS
actor_y: .res NUM_ACTORS
actor_ylo: .res NUM_ACTORS
actor_dy: .res NUM_ACTORS
actor_dylo: .res NUM_ACTORS
actor_class: .res NUM_ACTORS ; index into move vtable and sprite sheet pointers
actor_frame: .res NUM_ACTORS
actor_frame_sub: .res NUM_ACTORS
actor_health: .res NUM_ACTORS
actor_hurt_ht: .res NUM_ACTORS
Struct of Arrays is the way to do it, sure, but sometimes you need a double lookup, more for entity map data than live entity state, I would think. You want to compress the data used to position and set up entities, where, say, a trigger has very different data needs to a moving enemy, or a spawner, etc.
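That double lookup might be sketched like this (names are illustrative): a table of pointers to variable-length entity records, X picking the entity and Y picking the field within its record.
Code:
; entity_ptrs: 2-byte pointers to map-entity records, one per entity
; X = entity number * 2, Y = field offset within that entity's record
	lda entity_ptrs,x     ; low byte of this entity's record address
	sta temp_ptr
	lda entity_ptrs+1,x   ; high byte
	sta temp_ptr+1
	lda (temp_ptr),y      ; fetch field Y of entity X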