So I was recently watching a YouTube video explaining how the SNES works, and one part covered how the system processes data while the screen is drawing an image.
I had never really thought about it in such depth before, but the way I had always pictured these consoles processing the game data and rendering the image was wrong. For some reason I thought they worked similarly to how modern computers render an image, which basically "store" the complete image in memory. Of course the NES doesn't do that, because a full frame would require far more video RAM than the couple of kilobytes it actually has. Instead it renders the image at literally the same moment it sends the signal to the screen, and thus only ever processes a tiny part of it at once. This makes so much sense, and I'm surprised I never really thought about it before. All of the tricks where a system changes properties between scanlines make much more sense this way.
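Here's a toy sketch of what I mean (made-up names and a tiny fake resolution, not real NES code): each scanline is composed from whatever the video state is *at the moment that line goes out*, so no full frame ever exists in memory, and changing state mid-frame visibly splits the picture at that scanline.

```python
# Toy scanline renderer: the frame is never stored up front; every line
# is derived from state sampled when that line is "scanned out".

WIDTH, HEIGHT = 8, 6  # tiny toy resolution

def render_frame(get_scroll_x):
    """Emit the frame one scanline at a time, sampling state per line."""
    frame = []
    for y in range(HEIGHT):
        scroll = get_scroll_x(y)  # state *as of this scanline*
        line = [(x + scroll) % WIDTH for x in range(WIDTH)]
        frame.append(line)
    return frame

# Constant scroll: every scanline agrees, a normal stable picture.
static = render_frame(lambda y: 0)

# Scroll changed between scanlines 2 and 3: the image "splits" at that
# line, the same idea behind split-screen status bars and raster tricks.
split = render_frame(lambda y: 0 if y < 3 else 4)
```

The point of the sketch is that the "trick" costs nothing extra: the hardware was going to re-derive every line anyway, so changing a register between lines just changes what the later lines derive from.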
But there are a couple of things about this that I'm a little unsure of, so I thought I'd ask for some clarity.
The biggest thing confusing me is how the NES (and other consoles) updated the game logic. The video stated that this was done at the same time everything was rendering on the screen. As I understand it, that means everything like updating where the player and enemies are, processing an attack command, changing the nametables to change the background, and whatever else needs to be done in a given frame.
But if this is being done at the same time the image is being drawn onto the screen, wouldn't that cause problems? Say an object moves on a given frame: part of it could be drawn on one scanline, then the code updates the object, and on the next scanline that same object is in a different position, so its appearance ends up split across that frame.
I would assume all of this logic has to be done during the V-Blank so that the image never gets screwed up. But that doesn't quite add up either. It would leave only a relatively small window in which the system can process all the game logic. And furthermore, if the system didn't finish all of this before the next frame started drawing, it would end up drawing an incomplete frame where some objects have changed but not all, instead of just repeating an exact copy of the last frame.
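The mental model I've landed on (which might be wrong, and all names here are my own invention) is that the logic does *not* have to finish inside V-Blank. It runs during the visible frame but only writes to a staging buffer in work RAM; the short V-Blank window is used solely to copy finished results into video memory:

```python
# Hedged sketch of a typical 8/16-bit frame loop. The CPU updates a
# work-RAM buffer at its leisure; V-Blank only commits completed
# results, so the picture being scanned out is never half-updated.

class Console:
    def __init__(self):
        self.vram = {"sprite_x": 0}      # what the video chip draws from
        self.work_ram = {"sprite_x": 0}  # staging area the CPU edits freely
        self.frame_ready = False

    def game_logic(self):
        # Runs while the screen is being drawn; touches work RAM only.
        self.work_ram["sprite_x"] += 2
        self.frame_ready = True

    def vblank(self):
        # Short window: just copy finished results into video memory.
        if self.frame_ready:
            self.vram.update(self.work_ram)
            self.frame_ready = False
        # else: video memory untouched, so the chip redraws the
        # previous frame exactly.

c = Console()
c.game_logic()  # happens during the visible frame
c.vblank()      # happens during the blanking interval
```

Under this scheme a slow frame isn't torn, it's just late: if `game_logic` hasn't set `frame_ready` by the next V-Blank, nothing is committed and the old frame repeats.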
And yet the system DOES just repeat the last frame. So how does it manage that if it isn't storing a complete image of the screen in a memory buffer? And how do these systems keep applying the effects that switch data between scanlines if the console is still "busy" processing the game logic?
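One possible answer I keep circling back to (again, just my guess): the picture is a pure function of the tile and sprite tables, so "repeating the last frame" needs no stored image at all. If those tables weren't touched, re-scanning them re-derives an identical frame:

```python
# Toy illustration: frames are regenerated from table state every time,
# nothing is cached. Identical state in, identical frame out.

def scan_out(nametable):
    """Re-derive every pixel of a tiny 4x4 'screen' from the table."""
    return [[nametable[y % len(nametable)] for _ in range(4)]
            for y in range(4)]

tables = [1, 2, 3, 4]
frame_a = scan_out(tables)  # frame N
frame_b = scan_out(tables)  # frame N+1, tables never updated
# frame_a equals frame_b without any framebuffer ever existing
```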
Plus, as I said, this leaves a rather short window for processing game logic, with all the rest of the time spent on rendering. I suppose it's possible, but it just doesn't sound right to me.
I think I have the wrong idea about what is happening.
Also, I have a few other things I'm curious about. For one, just how much "final render data" is the NES actually "storing" at any moment? And what about later consoles like the SNES, Mega Drive, and PC Engine? Do they "store" a complete scanline before it is sent to the TV, or do they really render each pixel right as it gets pushed out of the console? It seems reasonable to me that the hardware should know at least a few pixels ahead before sending them down the cable.
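My hunch about "knowing a little ahead" sketched out (simplified, and details surely differ per chip): while line N streams out, the hardware is already evaluating which few sprites will overlap line N+1 and fetching their pixels into a tiny per-line buffer, roughly one scanline of lookahead rather than a whole frame:

```python
# Hedged sketch of per-scanline sprite evaluation. Real NES hardware
# caps this at 8 sprites per line (which is why extra sprites flicker).

def find_sprites_for_line(sprites, line, limit=8):
    """Pick at most `limit` 8-pixel-tall sprites overlapping a scanline."""
    hits = [s for s in sprites if s["y"] <= line < s["y"] + 8]
    return hits[:limit]  # hardware drops the rest for that line

sprites = [{"y": 0}, {"y": 4}, {"y": 40}]
# While line 5 is being output, line 6's sprites are already evaluated:
next_line_buffer = find_sprites_for_line(sprites, 6)
```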
But again, maybe I have a misconception about how these systems work.
Another thing I'm curious about: how do emulators manage to handle all this? Sure, even back when I was running NESticle my computer could outperform the NES a hundred times over, but to function properly the emulator still needs to keep all the timing correct, all while drawing its image in a pre-computed fashion before sending it to the screen. I'm rather curious how a PC handles this difference in how the game gets rendered.
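From what I can tell (a sketch of a common strategy, not any specific emulator's code), the emulator doesn't race a real beam at all. It interleaves CPU and video-chip steps in the hardware's ratio (on the NTSC NES, 3 PPU dots per CPU cycle, 341 dots by 262 lines per frame), paints the emulated pixels into an ordinary framebuffer, and only presents that buffer to the host once per emulated frame. Scanline tricks still work because the *emulated* chips stay in lockstep, even though the host only ever sees whole frames:

```python
# Hedged sketch of lockstep emulator timing. The host displays complete
# frames, but internally CPU and PPU advance in the correct ratio, so
# mid-frame register writes land on the right emulated scanline.

DOTS_PER_CPU_CYCLE = 3        # NTSC NES: PPU runs 3x the CPU clock
DOTS_PER_FRAME = 341 * 262    # 341 dots per line x 262 lines

def run_one_frame(step_cpu, step_ppu_dot):
    """Interleave one CPU cycle with its 3 PPU dots until the frame ends."""
    dots = 0
    while dots < DOTS_PER_FRAME:
        step_cpu()                          # one CPU cycle...
        for _ in range(DOTS_PER_CPU_CYCLE):
            step_ppu_dot()                  # ...then its 3 PPU dots
            dots += 1
    # now present the completed framebuffer to the host display

counts = {"cpu": 0, "ppu": 0}
run_one_frame(lambda: counts.__setitem__("cpu", counts["cpu"] + 1),
              lambda: counts.__setitem__("ppu", counts["ppu"] + 1))
```

The remaining piece is pacing: the loop above finishes far faster than 1/60th of a second on a modern PC, so the emulator then sleeps or waits on the host's vsync before starting the next emulated frame.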