I'm looking to play some old Atari games that are in a top-down style similar to the following:
Adventure
Berzerk
Dark Chambers
Haunted House
Swordquest
Venture
Front Line
Indiana Jones
Basically any games that are like 4-bit and in this perspective style. I just like these types of games and want to get into more of them.
Thanks.
Erockbrox wrote:
4-bit
It's a common misconception that the 2600 is a 4-bit machine because it predates the most famous 8-bit console, but it's 8-bit as well. In fact, it uses almost the same CPU as the NES, only slower.
The only 4-bit machines I'm aware of are calculators. If a system can play games, you can be pretty sure it's at least 8-bit.
BTW, the Dreamcast, PlayStation 2, Xbox and GameCube are not 128-bit, the PlayStation 3, Xbox 360 and Wii are not 256-bit, and the PlayStation 4, Xbox One and Wii U are not 512-bit. This bit-counting system was a silly thing invented during the 16-bit era to show everyone how superior the new consoles were. It remained a selling point for a few years, but CPUs generally maxed out at 64 bits. Look at the Nintendo 64, for example: being 64-bit was its novelty, but games hardly ever used the 64-bit instructions.
tokumaru wrote:
The only 4-bit machines I'm aware of are calculators. If a system can play games, you can be pretty sure it's at least 8-bit.
True. But don't handheld LCD games, such as Game & Watch or Tiger titles, run on 4-bit MCUs?
Quote:
BTW, the Dreamcast, PlayStation 2, Xbox and GameCube are not 128-bit
How wide is their data bus? That's what's 64-bit about the Atari Jaguar. (The AMD Jaguar in the PlayStation 4 and Xbox One is 64-bit for a different reason.)
Quote:
Look at the Nintendo 64, for example: being 64-bit was its novelty, but games hardly ever used the 64-bit instructions.
The N64 also has a fast but narrow bus due to its RDRAM architecture, which undermines the Jaguar data-bus theory.
Now to the original question:
Do you also mean to include games for the Odyssey2, Intellivision, and the TMS9918 platforms (CreatiVision, ColecoVision, SG-1000, Adam, and MSX)? Together with the 2600, these are generally considered the "second-generation" video game consoles: early microprocessors, usually 8-bit, paired with simple, largely off-the-shelf video chips. But the capabilities of the TMS9918 video chip put systems using it half a generation above the 2600, O2, and INTV, with the same relationship to the NES that the Dreamcast has to the PS2, Xbox, and GameCube. The NES can be thought of as a CreatiVision with hardware background scrolling, 2-bit tiles on external memory, and more audio waveforms.
tokumaru wrote:
BTW, the Dreamcast, PlayStation 2, Xbox and GameCube are not 128-bit, the PlayStation 3, Xbox 360 and Wii are not 256-bit, and the PlayStation 4, Xbox One and Wii U are not 512-bit. This bit-counting system was a silly thing invented during the 16-bit era to show everyone how superior the new consoles were. It remained a selling point for a few years, but CPUs generally maxed out at 64 bits. Look at the Nintendo 64, for example: being 64-bit was its novelty, but games hardly ever used the 64-bit instructions.
hahaha I used to think this exact thing about consoles, that the bit count of the processors just doubled each generation.
I also thought, once I learned about color depth and what "bits" were, that since the NES is always described as having "8-bit graphics" it had 256 colors.
Most people had no idea what those bits were... Normally, the number referred to the CPU's word size, but a lot of people thought it had to do with graphics. That theory would quickly go down the drain if you realized that the Intellivision was 16-bit and the Sega Master System 8-bit, and that consoles with the "same number of bits" often had radically different graphical capabilities.
Companies weren't exactly consistent with the numbers they used either... The PC Engine/TG-16 was marketed as a 16-bit console, but its CPU was 8-bit (the GPU was 16-bit though). The Jaguar was marketed as being 64-bit, apparently because it had two 32-bit CPUs. Those were weird times. These days it's all about monster GPUs, more RAM and more CPU cores.
tokumaru wrote:
That theory would quickly go down the drain if you realized that the Intellivision was 16-bit and the Sega Master System 8-bit
Wasn't the CPU in the Intellivision not even clocked at 1 MHz?
tokumaru wrote:
The PC Engine/TG-16 was marketed as a 16-bit console, but its CPU was 8-bit (the GPU was 16-bit though).
Even though the GPU is generally what defines a video game console (as long as it isn't the Sega Genesis).
tokumaru wrote:
These days it's all about monster GPUs
I think this is the most justified of the three you mentioned. I still refuse to believe that the XBone or the PS4 needs 8GB of RAM. It's not a PC. (Or at least people aren't supposed to think of it as one...)
Actually, you know what, after seeing how unbelievably terrible this looks, I can totally see why 8GB of RAM is necessary. I'm not even sure how people lived without 8GB of RAM.
In 1985, 8GB of RAM would have been the ravings of a lunatic.
Erockbrox wrote:
I'm looking to play some old Atari games that are in a top-down style similar to the following:
How about Quest Forge? I'm not super up on all the 2600 stuff; I only know about that one because people were talking about it on here.
tokumaru wrote:
Most people had no idea what those bits were...
I didn't really get it until I started doing assembly. I'm curious, though: if a processor has, say, a 64-bit word length, does that mean that every data word it processes is processed as 64 bits? I know some processors can run in a 32-bit mode, but I don't know if that's just emulated. My buddy was talking about some kind of 128-bit processor the other day, and I was saying that unless the workload was entirely encryption or something like that, a 128-bit word length would only drastically slow down your processing. I'm also slightly curious whether something like a modern 16-bit processor would be faster than a 64-bit one if you were only working with 16-bit data, or mostly 16-bit data. Not that I plan to actually do anything with the information; I'm just curious.
(Edit, I don't know)
I would imagine that 16-bit processing would be faster than 64-bit, but I have no proof, so...empty words
(I mean a hypothetical 16-bit processor with the same frequency and same dimensions as present day 64-bit processors)
I imagine the difference could be made up or reversed by 64-bit SIMD instructions designed for 16-bit bulk processing, though you might have trouble with granularity and branch handling. Not sure any modern processor bothers... and I'm a bit behind on processor architectures anyway; I understand the 65C816 (and by extension the 65xx family) pretty well, I have some theoretical knowledge of the Super FX and SPC700, and I have some idea of how the 68000 family works, but all this superscalar VLIW OoO nonsense is a little beyond me...
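For what it's worth, x86 has had exactly this sort of instruction for a while: MMX was literally 64-bit SIMD, and SSE2 widened it to 128 bits, so a single instruction adds eight 16-bit values at once. A minimal C sketch using the SSE2 intrinsics (the function and array names are just illustrative, and n is assumed to be a multiple of 8):
[code]
#include <emmintrin.h>  /* SSE2 intrinsics */
#include <stddef.h>
#include <stdint.h>

/* Add two arrays of 16-bit values, eight lanes per instruction. */
void add16_bulk(int16_t *dst, const int16_t *a, const int16_t *b, size_t n)
{
    for (size_t i = 0; i < n; i += 8) {
        __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
        __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
        _mm_storeu_si128((__m128i *)(dst + i), _mm_add_epi16(va, vb));
    }
}
[/code]
Granularity and branching are still per-register, as you say, which is why the speedup only shows up in bulk processing.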
Sega could have easily claimed the Genesis as 32-bit. From a software engineering perspective, the 68k is 32-bit. From a hardware engineering perspective, it's 16-bit, because the ALU is 16-bit. What does that make the SNES, with its 8-bit CPU data bus?
This is the way I always thought of it. If the SNES is 8-bit, then the Genesis is 16-bit. If the SNES is 16-bit, then the Genesis is 32-bit. However, I don't think using 32-bit operations will help too much in a 2D game, save for moving objects with sub-pixel precision. Using only 8 bits for a scrolling 2D game does seem pretty difficult, though.
You know though, I felt like doing some math to see how many textures you could store in 8GB of RAM, and this is what I got:
1024 x 1024, 24 bit color + 8 bit alpha channel: 2048 textures
1024 x 1024, 24 bit color: 2730 and 2/3 textures
512 x 512, 24 bit color + 8 bit alpha channel: 8192 textures
512 x 512, 24 bit color: 10922 and 2/3 textures
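A throwaway C program to reproduce those numbers (assuming 8 GiB of RAM, tightly packed textures, and no mipmaps):
[code]
#include <stdio.h>

int main(void)
{
    const double ram = 8.0 * 1024 * 1024 * 1024;  /* 8 GiB in bytes */
    const struct { int size, bpp; } fmt[] = {
        { 1024, 32 }, { 1024, 24 }, { 512, 32 }, { 512, 24 },
    };
    for (int i = 0; i < 4; i++) {
        double bytes = (double)fmt[i].size * fmt[i].size * fmt[i].bpp / 8;
        printf("%4d x %-4d at %2d bpp: %8.2f textures\n",
               fmt[i].size, fmt[i].size, fmt[i].bpp, ram / bytes);
    }
    return 0;
}
[/code]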
I feel like people nowadays just completely dismiss how large a gigabyte really is.
I think the idea of comparing bits can be blamed on Sega's marketing team, along with things like "Blast Processing". When I was a kid, I remember no one really knowing why "16-bit" was better, but it was double the NES, so it must be better! Welcome to the next level!
Espozo wrote:
This is the way I always thought of it. If the SNES is 8-bit, then the Genesis is 16-bit. If the SNES is 16-bit, then the Genesis is 32-bit. However, I don't think using 32-bit operations will help too much in a 2D game, save for moving objects with sub-pixel precision. Using only 8 bits for a scrolling 2D game does seem pretty difficult, though.
Not necessary for 2D games, but sometimes it's faster to do 32-bit operations on the 68k. All address registers are 32 bits in length, unless you're accessing 32k of the 64k of RAM with the short addressing mode (plus the first 32k of ROM, because the 16-bit address is signed). 24-bit fixed-point math (16:8) is faster as a single 32-bit operation. And of course, moving data is faster 32 bits at a time. So you tend to use more 32-bit operations on the 68k because of its design. Indexing (offsetting) an address register on the 68k is slow, so it's often faster to add the offset to the base address, and that's a 32-bit operation. Stuff like that.
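To make the 16:8 fixed-point case concrete, here's a rough C sketch of the idea (the 68k would do this in assembly; the type and macro names are made up):
[code]
#include <stdint.h>

/* 16:8 fixed point in one 32-bit word: integer part in the upper bits,
   8 fractional bits at the bottom. */
typedef int32_t fix16_8;

#define TO_FIX(n)  ((fix16_8)(n) * 256)    /* integer -> 16:8 */
#define FIX_INT(f) ((int16_t)((f) / 256))  /* 16:8 -> integer */

/* One movement update is a single 32-bit add (one ADD.L on the 68k)
   instead of separate integer and fraction updates. */
static inline fix16_8 step(fix16_8 pos, fix16_8 vel)
{
    return pos + vel;
}
[/code]
A sprite moving at 1.5 pixels per frame is just vel = TO_FIX(1) + 128, and the whole position update stays one 32-bit operation.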
Movax12: Yeah. Bits are totally misleading, but getting something the public would understand, at the time, was probably difficult. How does the human brain understand 1.79 million cycles per second vs. 7.6 million cycles per second? They're such large numbers. People can more easily relate to smaller numbers that are closer to everyday life. Numbers like 8 and 16 are relatable to the human experience. It's why we have unit conversions of measurements (1 light year, etc.).
The Super NES S-CPU's data bus is narrower but faster. In fast ROM mode, it runs at 3.58 MHz, accessing memory on every cycle. The 68000 in the Genesis runs at 7.67 MHz (15/7 of Super NES fast ROM speed), but it accesses memory only once every four cycles, giving it just 15/28 = 53.6% of the Super NES data bus clock speed. The N64 also took the narrow-but-fast approach with its 9-bit RDRAM, simplifying board design and contributing to the legendary Tonka Tough reliability of Nintendo hardware.
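Spelling out that arithmetic as a worked step:

\[
\frac{7.67\,\mathrm{MHz}}{4} \approx 1.92\,\mathrm{MHz},
\qquad
\frac{1.92\,\mathrm{MHz}}{3.58\,\mathrm{MHz}} = \frac{15/7}{4} = \frac{15}{28} \approx 53.6\%.
\]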
Ms. Pac-Man and Hack 'Em/Hangly-Man are top-down. So are Asteroids, Centipede, and Pong, for that matter. But more toward the style of games you're talking about are E.T. and Halo 2600.
tepples wrote:
The Super NES S-CPU's data bus is narrower but faster. In fast ROM mode, it runs at 3.58 MHz, accessing memory on every cycle. The 68000 in the Genesis runs at 7.67 MHz (15/7 of Super NES fast ROM speed), but it accesses memory only once every four cycles, giving it just 15/28 = 53.6% of the Super NES data bus clock speed.
That just means it's a bus hog - lol. All 65x's are bus hogs.
Is that bad? I would have thought it would have been good to read off the data bus as frequently as possible.
Sounds like the opposite of the Amiga, which was designed to relinquish the bus from the CPU every other cycle so the graphics and sound chips had full access to memory.
Well, I don't know about the Amiga, but everything in the SNES pretty much has its own ram, so I don't think it would matter if the main CPU was "hogging" main ram.
You know though, about the Atari 2600: doesn't it have next to no video hardware? I've just never understood how it works. From what I've heard, it's mostly dependent on timing, I guess like trying to do a mid-scanline scrolling split on the SNES. I always love how people like to mention that it doesn't have a framebuffer, when really, from what I've heard, framebuffers only started being used in consoles once 3D games became possible. (I'm not counting line buffers. Maybe that's what they should say about the Atari 2600, but I don't think even the NES has a line buffer for sprites, unlike the SNES, so I don't see why it's that important.)

I know framebuffers had been used in home computers for a long time, though, I guess because graphically you're really only limited by CPU time and the framebuffer size. (It seems like PCs had next to no dedicated video hardware back then.) I just remember hearing about the beginning of id Software and how they made an SMB3 demo on PC, and how impressive it was that they pulled off the scrolling, considering they couldn't just change one value to push the background; it had to be completely redrawn.
For storing pixel data, there are like 5 bytes of 'video RAM' (three bytes for the playfield, and two bytes for the sprites).
Also, the EGA hardware trick used by Commander Keen was basically hardware scrolling within a wrapping 384x264 area.
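A rough C sketch of how that kind of hardware scrolling works; the register variables here are made-up stand-ins for the real display-start and fine-panning register writes:
[code]
#include <stdint.h>

#define VIRT_WIDTH_BYTES 48  /* 384 pixels wide / 8 pixels per byte (planar) */

static uint16_t start_address;  /* stand-in for the display start address */
static uint8_t  fine_pan;       /* stand-in for the 0-7 pixel panning */

/* Scroll the whole screen by repointing where the hardware begins
   fetching the frame: a coarse byte offset plus a fine pixel pan,
   instead of redrawing every tile in software. */
static void scroll_to(int x, int y)
{
    start_address = (uint16_t)(y * VIRT_WIDTH_BYTES + x / 8);
    fine_pan = (uint8_t)(x % 8);
}
[/code]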
Given how limited the TIA is, we may as well call it out: 23 bits for the playfield, two 8-bit values for the two graphic sprites, and four 7-bit values to specify color for the backdrop, playfield, and each sprite.
Total state in the TIA is about 200 bits, many of which are a pain to get to.
lidnariq wrote:
Total state in the TIA is about 200 bits
Within reach of CPLDs?
Espozo wrote:
about the Atari 2600: doesn't it have next to no video hardware?
While most video chips are able to draw an entire screen, the TIA can only draw one scanline. In this scanline, there are 40 wide playfield pixels (20 of which are unique), two 8-pixel players, two 1-pixel missiles, and a 1-pixel ball. Players, missiles, and the ball can be horizontally stretched. Non-stretched players and missiles can be cloned with fixed spacing between copies. There's the background color, the playfield color (also used by the ball), and each player has its own color (shared with its corresponding missile).
Quote:
From what I've heard, it's mostly dependent on timing
Everything about the TIA can be changed at any time, so it's important that games make the changes at the appropriate times. Simpler games just change the background and player patterns every few scanlines, along with the visibility of missiles and the ball, but there are many complex games that modify the playfield mid-scanline (for asymmetric backgrounds), reposition the objects to draw new objects, change colors almost every scanline, and so on.
Quote:
I guess like trying to do a mid-scanline scrolling split on the SNES.
The whole screen is a series of raster effects on the Atari 2600.
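To give a flavor of what that means in practice, here's a loose C model of the TIA programming model; this is not real 2600 code, and only the register names (GRP0, COLUBK, WSYNC) correspond to the real chip:
[code]
#include <stdint.h>

/* Stand-ins for two real TIA registers (write-only I/O addresses on
   hardware; plain variables here). */
static uint8_t GRP0;    /* player 0 graphics: 8 pixels for this line */
static uint8_t COLUBK;  /* background color */

/* Stub: on hardware this is a store to WSYNC, which halts the CPU
   until the start of the next scanline. */
static void wait_for_next_scanline(void) { }

/* The TIA has no framebuffer: it only knows about the current line.
   The CPU "races the beam," rewriting registers every scanline so the
   same handful of bytes describes a different slice of the screen. */
void draw_frame(const uint8_t sprite_rows[192], const uint8_t sky[192])
{
    for (int line = 0; line < 192; line++) {
        wait_for_next_scanline();    /* sync with the beam */
        GRP0   = sprite_rows[line];  /* next 8-pixel sprite slice */
        COLUBK = sky[line];          /* e.g. a per-line color gradient */
        /* the TIA now draws this line from the current register state */
    }
}
[/code]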
tepples wrote:
lidnariq wrote:
Total state in the TIA is about 200 bits
Within reach of CPLDs?
A large CPLD, yes.
Here's someone who wrote (and GPL'd) VHDL to reimplement the TIA, 6507, and RIOT:
https://retromaster.wordpress.com/a2601/ (the source code link there is broken; it's been copied here: https://github.com/GadgetFactory/Papili ... iginal.zip)
Was the Atari 2600 the weakest system, other than the Magnavox Odyssey? Heck, even the Odyssey has a higher resolution than the Atari 2600.
I think the Atari 2600 has some competition other than the Magnavox Odyssey:
https://www.youtube.com/watch?v=wBeokwN_5jg#t=4m03s

I don't imagine the Atari 2600 was weak for its time. Ranking the consoles of that era from weakest to strongest, the order seems mostly the same as the order in which they were released.
The second generation of consoles was the first generation that used a microprocessor at all; the first generation was done entirely with dedicated digital and analog electronics.
The Pong-on-a-chip ICs (and the Odyssey!) typically had a resolution of 128x192, but comparing the 2600 to the AY-3-8500 is awkward, at best.
You know, this is kind of off topic (surprise!), but thinking about what I said about how many textures you can fit in RAM: are indexed textures still used, or even supported? I know the N64 had the option to use 8bpp textures with a palette for each texture, and I imagine you don't need 16,777,216 different colors for a mostly gray rock.
No. You can fake it with shaders, but that's it.
You can compress textures though (in fact it's faster for GPUs, since compressed textures cause fewer misses in the texture cache). Usually it goes down to 1/4 the size, although there's the option of going down to 1/8 the size too.
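For reference, "indexed" just means each texel stores a small palette index instead of a full color, so a 512x512 texture costs 256 KiB plus a 1 KiB palette instead of 1 MiB at 32bpp. The shader trick is doing this lookup per pixel; a CPU-side C sketch of the idea (names made up):
[code]
#include <stdint.h>

/* Indexed sampling: fetch the one-byte palette index for a texel, then
   use it to look up the full 32-bit RGBA color. A shader fakes this
   with two texture fetches: index first, then palette. */
uint32_t sample_indexed(const uint8_t *indices, const uint32_t *palette,
                        int width, int x, int y)
{
    return palette[indices[y * width + x]];
}
[/code]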
Espozo wrote:
I think the Atari 2600 has some competition other than the Magnavox Odyssey:
https://www.youtube.com/watch?v=wBeokwN_5jg#t=4m03s

But the Channel F had video memory at least (it holds a bitmap). The Atari 2600 didn't even have that.
Sik wrote:
No. You can fake it with shaders, but that's it.
Really? Wow... Let's see, I can either have about a 296x296 rock texture using 24 bits, or a 512 x 512 rock texture at 8 bits. I wonder which one looks better...
Attachments: 512 x 512 8bpp.png; 296 x 296 stretched to 512 x 512 24bpp.png; 296 x 296 24bpp.png
I think the answer is pretty obvious...
Sik wrote:
Usually it goes down to 1/4 the size
Really? I didn't think it was possible to compress textures even nearly that much. That excuses it to a degree. However, I believe in making the best of whatever space you have. The fact that the Xbone and PS4 have 8GB of RAM isn't as big a problem as the fact that they could have managed their resources better and put the money toward something that would have mattered more. (Just like how I think the SNES should have had 128KB of VRAM and 64KB of main RAM.) I mean, 8GB of RAM can't be too cheap, can it?

Also, just thinking: I've heard that the more RAM you have, the slower it generally is (although I'm not sure how much it affects things). Could this mean that it's generally best to have the perfect amount of RAM, where you have enough but not too much? You could always get faster RAM of the same size, but it would cost more, and of course there's also the cost of just having more RAM in general.

Anyway, it could be a result of poor game design to a degree, but I've seen some pretty bad draw distance problems in some games, where you can see things like trees deteriorate in quality the farther away they are. I'm guessing this is to save on the polygon budget, but the point is, if more money had been spent on the GPU to increase the rate at which polygons are drawn, they could draw more and push the draw distance back further. I've also stated that billboarding should not still be used in 2015. I mean, that was a common technique on the N64. It wasn't convincing then, and it still isn't convincing now.
Sik wrote:
But the Channel F had video memory at least (it holds a bitmap). The Atari 2600 didn't even have that.
Does that mean we could see Doom on the Fairchild Channel F?
Espozo wrote:
Does that mean we could see Doom on the Fairchild Channel F?
You can maybe see Doom on the Atari 2600 with an ARM co-processor:
http://atariage.com/forums/topic/229083 ... gine-demo/
Espozo wrote:
Really? Wow... Let's see, I can either have about a 296x296 rock texture using 24 bits, or a 512 x 512 rock texture at 8 bits. I wonder which one looks better...
More like 256×256 at 32-bit (if you try to use 24-bit, the driver will add padding to make it 32-bit... don't use uncompressed 24-bit, seriously)
Espozo wrote:
Really? I didn't think it was possible to compress textures even nearly that much. That excuses it to a degree.
There's a catch: it's lossy. Like, awfully lossy. It's the same deal as JPEG: it's not a problem for photorealistic images (and given that GPU architectures get optimized for the kind of stuff you see in AAA games, this is not surprising), but it sticks out like a sore thumb otherwise. At high resolutions it's not that bad, though, since there usually isn't much of a transition in the colors anyway.
In case you're wondering: they split textures into 4×4 blocks, then for each block they take two colors, and all pixels are a gradient between them (and there are only four steps, or three if you use one of the formats with an alpha mask).
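For the curious, that scheme is S3TC/DXT1 (a.k.a. BC1), and decoding one block is tiny. A hedged C sketch, ignoring the variant where one gradient step becomes transparent:
[code]
#include <stdint.h>

/* Expand a 5:6:5 color to 8:8:8. */
static void rgb565_to_rgb888(uint16_t c, uint8_t out[3])
{
    out[0] = (uint8_t)(((c >> 11) & 0x1F) << 3);  /* red   */
    out[1] = (uint8_t)(((c >>  5) & 0x3F) << 2);  /* green */
    out[2] = (uint8_t)( (c        & 0x1F) << 3);  /* blue  */
}

/* Decode one 8-byte DXT1 block into 16 RGB texels: two endpoint colors
   plus two interpolated steps form the four-entry gradient, and each
   texel picks an entry with a 2-bit index. */
void decode_dxt1_block(const uint8_t blk[8], uint8_t rgb[16][3])
{
    uint16_t c0 = (uint16_t)(blk[0] | (blk[1] << 8));
    uint16_t c1 = (uint16_t)(blk[2] | (blk[3] << 8));
    uint8_t pal[4][3];

    rgb565_to_rgb888(c0, pal[0]);
    rgb565_to_rgb888(c1, pal[1]);
    for (int ch = 0; ch < 3; ch++) {  /* the two gradient steps */
        pal[2][ch] = (uint8_t)((2 * pal[0][ch] + pal[1][ch]) / 3);
        pal[3][ch] = (uint8_t)((pal[0][ch] + 2 * pal[1][ch]) / 3);
    }

    uint32_t bits = (uint32_t)blk[4] | ((uint32_t)blk[5] << 8)
                  | ((uint32_t)blk[6] << 16) | ((uint32_t)blk[7] << 24);
    for (int i = 0; i < 16; i++)      /* 2-bit index per texel */
        for (int ch = 0; ch < 3; ch++)
            rgb[i][ch] = pal[(bits >> (2 * i)) & 3][ch];
}
[/code]
At 64 bits per 16 texels, that's 4 bits per pixel, or 1/8 of uncompressed 32-bit color; the alpha-capable formats like DXT5 use 8 bits per pixel, which is the 1/4 figure.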
Espozo wrote:
Does that mean we could see Doom on the Fairchild Channel F?
Given it has 64 bytes of work RAM...? =/ (although precisely for that reason it was common for games to use video memory as RAM as well)
Sik wrote:
In case you're wondering: [S3 texture compression] splits textures into 4×4 blocks, then for each block they take two colors, and all pixels are a gradient between them (and there are only four steps, or three if you use one of the formats with an alpha mask).
This exact principle is also what QuickTime 1.0 video compression ("RPZA") used before Apple licensed Cinepak from Radius.
Sik wrote:
It's the same deal as JPEG [which uses 8x8, 8x16, or 16x16 blocks]: [...] [S3TC] split[s] textures into 4×4 blocks
For both, as long as you align features to the block size, you can get sharp edges. It's just that the notion of specifically designing textures to look good when blocked is about as alien to a developer of modern games as obeying the NES's attribute tables.
lidnariq wrote:
...the notion of specifically designing textures to look good when blocked is about as alien to a developer of modern games as obeying the NES's attribute tables.
I don't think that's true. It came up in the development of several games that I worked on. In particular, we advised that striped patterns like pipes or regular ridges fall on a power-of-two boundary (for S3TC behaviour as well as mip-map generation), and so on. An artist would come to me and ask "why does my blood spatter texture look so blocky when I put it in the game?", and we'd look at why and devise a better solution for compressing it; frequently that solution was just to redraw the texture with some principle in mind that would avoid compression artifacts.
Smaller developers just using other people's engines might not notice or be able to speculate on it, but if your game costs $10+ million to make, you've probably got a few people on the team who actually understand texture compression; maybe a programmer to implement solutions around it, and/or a technical art director who can review the work of other artists on the team, and notice when there are problems.
S3TC compression artifacts have a real impact on the quality of the art, and a keen eye will notice when they're not handled well. They are still relevant, and sometimes worth working around.