93143 wrote:
Looks like it was even more apples-to-oranges than I thought...
The only consoles of that generation with reasonably comparable graphics hardware are the Gamecube and Xbox (and Xbox soundly wins, aside from the occasional pixel shading situation where TEV is better, and the sometimes faster blending on the eSRAM). PS2's Graphic Synthesizer is still unique to this day because of its focus on massively high overdraw (particularly useful for alpha blending, because you can't avoid overdraw there), while the Dreamcast's PowerVR2 has a mostly unique (on a home device, at least) focus on massively low overdraw. I think this particular contrast added to the PS2 hype back in the day, when the console was touted as an absolute monster that would make even the Dreamcast look last-gen. When the Dreamcast's VRAM bandwidth is 0.8 GB/s and the PS2's is 48 GB/s, to the layman it made the Dreamcast look extremely weak. Of course, because of the overdraw design difference, the PS2 has to read/write VRAM all the time while the Dreamcast only has to do it infrequently, so the VRAM comparison is stripped of almost all meaning in context.
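To put a rough number on why the overdraw-focused design eats so much bandwidth, here's a back-of-the-envelope sketch. The resolution, frame rate, overdraw factor and read/write factor are purely illustrative assumptions, not measured figures for any game:

[code]
#include <stdio.h>

/* Rough, illustrative estimate of color-buffer traffic per second.
   Every parameter here is an assumption for the sake of the example. */
int main(void)
{
    const double width     = 640.0;
    const double height    = 448.0;   /* a common PS2 framebuffer height */
    const double bpp       = 4.0;     /* 32-bit color, in bytes per pixel */
    const double fps       = 60.0;
    const double overdraw  = 8.0;     /* each pixel touched ~8 times */
    const double rw_factor = 2.0;     /* read-modify-write for blending */

    double bytes_per_sec = width * height * bpp * fps * overdraw * rw_factor;
    printf("~%.1f GB/s of color-buffer traffic\n", bytes_per_sec / 1e9);

    /* With these made-up numbers, color traffic alone is ~1.1 GB/s, and
       Z-buffer accesses and texture fetches come on top of that.  A
       tile-based deferred design like PowerVR2 resolves most of that
       per-pixel traffic on-chip, which is why 0.8 GB/s of external VRAM
       bandwidth isn't the same kind of bottleneck for the Dreamcast. */
    return 0;
}
[/code]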
93143 wrote:
Draw the additive graphic in a blank secondary framebuffer with the appropriate transforms and interpolation
It's a 'neater' way of doing things, but to me it sounds like a mostly unnecessary waste of the console's processing and memory time.
93143 wrote:
(Can you use transforms and interpolation and such when drawing in 8bpp?)
Are you trying to ask here if RDP supports 8bpp output? I'm fairly sure the answer is no.
93143 wrote:
Or is it actually possible to combine a transformed additive texture with an untransformed framebuffer texture in a single pass?
I still think the only way to do one-pass additive blending with textures is just using the hardware additive blender.
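For what it's worth, the blend itself is trivial arithmetic; the question is only which hardware stage gets to do it. Conceptually (this is just the operation, not the RDP blender's actual register-level formula):

[code]
#include <stdint.h>

/* Conceptual per-channel additive blend: the source is added on top of
   the framebuffer value and clamped to the channel maximum.  This is
   what a one-pass additive blend has to compute, whichever unit does it. */
static inline uint8_t add_blend(uint8_t dst, uint8_t src)
{
    uint16_t sum = (uint16_t)dst + (uint16_t)src;
    return (sum > 255) ? 255 : (uint8_t)sum;
}
[/code]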
93143 wrote:
the color combiner has registers, that can be used as sources for its operations
I can't see them being specifically useful here (though they could be useful for the additional color combiner pass). If you wanted to do additive blending without a texture against the framebuffer, it would be better just to use the vertex shade rather than the registers.
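To make that concrete: the color combiner evaluates one fixed equation per cycle, (A - B) * C + D, and each slot is selectable from sources like the texel, the shade (vertex) color, the combiner's color registers, and the constants 0 and 1. A rough per-channel sketch of that equation, treating C as a 0..1 fraction (the slot assignment in the comment is only an illustration, not a specific mode):

[code]
#include <stdint.h>

/* (A - B) * C + D is the combiner's fixed per-cycle equation.  To feed a
   flat, per-vertex-interpolated color toward the blender you can simply
   route the shade color through it (e.g. A = B = 0, D = shade) and let
   the blender add the result to the framebuffer -- no texture or
   register needed. */
static inline uint8_t combiner_cycle(uint8_t a, uint8_t b, uint8_t c, uint8_t d)
{
    int32_t out = ((int32_t)a - (int32_t)b) * c / 255 + d;
    if (out < 0)   out = 0;
    if (out > 255) out = 255;
    return (uint8_t)out;
}
[/code]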
calima wrote:
This is exactly what I was curious about. Did anybody manage to do it, and if so, how.
It's not really a practical possibility, because you can only sync the RDP pipeline per-primitive. There's an enormous, virtually unavoidable risk that swapping the TMEM data in the middle of a primitive will either load garbage texels into the pipeline (from an incomplete texture transfer) or simply get the timing on the swap-over point wrong (meaning the point on the primitive's surface where the old texture should stop and the new texture should start won't be where you expect).
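The only pattern I'd call safe is the boring one: split the geometry so each primitive only needs what fits in TMEM at once, and fence the texture load against the previous primitive. In hypothetical-helper form (these function names are placeholders for the real display-list commands, not an actual API; only the ordering matters):

[code]
#define NUM_PIECES 4

/* Placeholder declarations -- stand-ins for the real RDP display-list
   commands, used here only to show the ordering. */
void        rdp_sync_load(void);               /* fence previous primitive's TMEM reads  */
void        rdp_load_texture(const void *tex); /* upload one tile's texels into TMEM     */
void        rdp_draw_primitive(int piece);     /* primitive sized to fit one TMEM load   */
const void *piece_texture(int piece);

void draw_large_textured_surface(void)
{
    /* Safe pattern: each primitive uses only the texels currently in TMEM,
       with a sync before every TMEM overwrite.  Swapping TMEM contents
       *inside* a primitive has no safe sync point, hence the split. */
    for (int piece = 0; piece < NUM_PIECES; piece++) {
        rdp_sync_load();
        rdp_load_texture(piece_texture(piece));
        rdp_draw_primitive(piece);
    }
}
[/code]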
Espozo wrote:
What now? I was always under the impression many PS2 games were blurrier because the PS2 didn't have enough graphics processing power to fill a 640x480 framebuffer, not because of some other limitation.
No, the problem was that, due to the PS2 Graphic Synthesizer's reliance on extremely high drawing speed, maximizing performance meant keeping as many buffers (front, back, Z) in VRAM as possible (and main RAM was invisible to the Graphic Synthesizer). Unfortunately, with the VRAM being only 4 MB, that didn't leave a lot of room (plus the texture cache had to share that space). That left developers with a range of choices, none of them too good. For the framebuffers to all fit, they either had to decrease the framebuffer size (makes jaggies and/or blur), decrease color depth (makes banding), or decrease the texture cache (which usually resulted in lower texture resolution and/or memory thrashing). Remember also that the PS2 has no hardware texture compression except for CLUT (if you can call that compression), putting even more pressure on that limited space, though 'software' (i.e. vector-unit-driven) techniques were developed later in the console's life.
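The arithmetic of that squeeze is easy to sketch. The sizes below are just the textbook example (double-buffered 640x448 at 32 bits plus a 32-bit Z buffer), not any particular game's setup:

[code]
#include <stdio.h>

/* Illustrative VRAM budget for the GS's 4 MB of eDRAM.  Figures are
   rounded; real games also fight alignment and page-allocation rules,
   which only makes it tighter. */
int main(void)
{
    const double vram_kb  = 4.0 * 1024.0;
    const double color_kb = 640.0 * 448.0 * 4.0 / 1024.0;  /* one 32-bit color buffer */
    const double z_kb     = 640.0 * 448.0 * 4.0 / 1024.0;  /* one 32-bit Z buffer      */
    const double buffers  = 2.0 * color_kb + z_kb;          /* front + back + Z         */

    printf("buffers: %.0f KB, left for textures: %.0f KB\n",
           buffers, vram_kb - buffers);

    /* ~3360 KB of buffers leaves only ~736 KB of texture window, which is
       exactly the squeeze that pushed developers toward smaller or 16-bit
       framebuffers, or a tinier texture cache with more re-uploading. */
    return 0;
}
[/code]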
Also, because the Graphic Synthesizer's anti-aliasing unit didn't work (a broken design in silicon), developers had to come up with buffer tricks to smooth out jaggies. One way to do that was to have different sized front and back buffers and try to 'supersample' the output. Of course, while this had some success in removing jaggies, it also created a lot of blur, since it wasn't proper supersampling.
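As a rough illustration of why that trades jaggies for blur: scaling the larger back buffer down to the smaller front buffer makes every output pixel a filtered mix of neighbouring samples, so stair-stepped edges soften, but so does everything else. A minimal sketch of such a downscale (a plain box average, not any console's actual filter):

[code]
#include <stdint.h>

/* Downscale a single-channel buffer horizontally by averaging pairs of
   neighbours (e.g. a 1280-wide back buffer into a 640-wide front buffer).
   Any averaging like this smooths jaggies but also blurs texture detail. */
void downscale_2x1(const uint8_t *src, uint8_t *dst, int dst_w, int h)
{
    for (int y = 0; y < h; y++)
        for (int x = 0; x < dst_w; x++)
            dst[y * dst_w + x] =
                (uint8_t)((src[y * dst_w * 2 + 2 * x] +
                           src[y * dst_w * 2 + 2 * x + 1] + 1) / 2);
}
[/code]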
There are more complicated factors at play, but I can tell you that while the PS2 had a lot of problems, pixel fill speed was the least of them. At just filling any given resolution with pixels it was much faster than the other consoles of its generation (well, it did have 4 times more pixel pipelines than the Xbox and Gamecube).
EDIT: The Gamecube only stored one back buffer in VRAM (the embedded eSRAM). The front buffers (and any other buffers) all had to be copied out to main RAM. While this meant the Gamecube didn't regularly run into size limitations, it did put a significant damper on memory bandwidth. It also meant that with MSAA on, the back buffer had to be smaller (the extra samples have to fit in that same small embedded memory), which is probably why MSAA was rarely used. A neat bit of trivia: Flipper is capable of z-buffer compression, but only when MSAA is enabled (whereas Xbox's NV2A does it all the time).
Espozo wrote:
Isn't this pretty much designed for audio, where the very low bandwidth wouldn't be an issue? I can't imagine the GameCube was really at a ram disadvantage from the PS2 with most games.
Sure, for audio the speed is not a big problem (the PS2 also has dedicated audio RAM, which is much smaller but actually faster), but the fact is that the Gamecube's main RAM is only 24 MB while the PS2's is 32 MB. It was an annoyance for Gamecube developers to have to constantly shuffle things from the slow auxiliary RAM into the much faster main RAM. I guess you could argue that the bandwidth difference between the main RAM on the two consoles wasn't that significant anyway, because in practice the PS2's RDRAM, with its high latency, would have a lower effective bandwidth, while the Gamecube's 1T-SRAM would actually get close to its peak.
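The shuffling I mean is the usual double-buffered streaming pattern: kick off the next copy out of the slow auxiliary RAM while the previously fetched chunk is being used, so the slow link hides behind real work. In hypothetical-helper form (the DMA function names are placeholders, not the actual SDK calls, and the total size is assumed to be a multiple of the chunk size):

[code]
#define CHUNK_SIZE (64 * 1024)

/* Placeholder declarations standing in for the real aux-RAM DMA calls. */
void aram_dma_start(void *main_ram_dst, unsigned aram_src, unsigned size);
void aram_dma_wait(void);
void process_chunk(const void *data, unsigned size);

/* Stream a large aux-RAM resource through two small main-RAM buffers,
   overlapping the slow aux-RAM copy with processing of the previous chunk. */
void stream_from_aram(unsigned aram_base, unsigned total, void *buf[2])
{
    int cur = 0;
    aram_dma_start(buf[cur], aram_base, CHUNK_SIZE);
    for (unsigned off = 0; off < total; off += CHUNK_SIZE) {
        aram_dma_wait();                         /* chunk 'cur' is now in main RAM   */
        unsigned next = off + CHUNK_SIZE;
        if (next < total)                        /* start fetching the next chunk... */
            aram_dma_start(buf[cur ^ 1], aram_base + next, CHUNK_SIZE);
        process_chunk(buf[cur], CHUNK_SIZE);     /* ...while this one gets consumed  */
        cur ^= 1;
    }
}
[/code]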
Curiously, the Gamecube received a significant downclock prior to its release. Flipper originally ran at 200 MHz (later 162 MHz) and main RAM had 3.2 GB/s of bandwidth (same as the PS2; later downgraded to 2.6 GB/s). I think the Gekko CPU was made faster, though (it probably needed changing anyway due to the different system bus multiplier).
Espozo wrote:
Resident Evil 4 is a famous example of the GameCube looking very noticeably better than the PS2; it was my impression this game actually stopped most of the debate.
I wouldn't put too much stock in this comparison. Resident Evil 4 was a Gamecube exclusive for almost all of its development lifespan. Given the major differences in graphics hardware architecture between the two consoles, I would say that any game that was not developed as multiplatform from the start could not be properly ported across the two in a way that would maximize their power (at least, not without a lot of extra development time).
Espozo wrote:
which can probably be attributed to a lack of effort on the developer anyway seeing the huge difference in sales between the PS2 and the GameCube/Xbox.
Sure, but the PS2 was also way harder to develop for than the Gamecube, so it kind of balanced out. IMO the real reason the graphics between the two (excluding the worst efforts on each) looked fairly equal overall is because their hardware power was pretty evenly matched despite different strengths.
In my experience checking out multiplatform releases on the two consoles, I generally noticed that the Gamecube versions almost always had a higher framebuffer resolution and better texture quality, while the PS2 versions broadly had better vertex-related effects, like higher quality reflections and nicer lighting. Just an observation of mine, but I think it lines up with a reasonably informed view of their hardware capabilities.