I wrote a blog post describing my Verilog implementation of HQ2X for my FPGA NES, in case anyone is interested:
http://fpganes.blogspot.se/2013/02/the- ... rilog.html
The technical aspect of it is interesting and all, but it surprises me how someone could find the botched mess these filters create better than clean, sharp pixels.
I don't know if all pixel artists are like me, but I always leave a certain amount of ambiguity in the graphics I make. Even I am not always sure what some of the details represent, so how could a mindless algorithm be? IMO the filtered graphics just look... rounder, not more detailed.
Rounder is sometimes desired by itself. Real bodies are rounded to minimize skin surface area. Real clothes are rounded to fit real bodies. Perhaps what the roundness does is make 8-bit game graphics look more like an illustration (or a modern Flash game based on the principles of traditional illustration) than like an obsolete video game. Consider that the first of these algorithms (EPX, which produces results identical to Scale2x) was originally created as an aid in porting games to a platform with more pixels, providing a starting point for hand-tweaks that add detail. If you want more detail in particular games, feel free to contribute high-resolution texture packs, such as CHR banks with 16x16 pixel tiles.
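For reference, EPX/Scale2x is simple enough to sketch in a few lines. The following Python is an illustrative sketch only (not taken from the Verilog implementation or any emulator discussed here); it doubles a grid of palette indices using the standard EPX neighbor rules:

```python
def scale2x(src):
    """EPX/Scale2x: double a 2D grid of palette indices.

    src is a list of equal-length rows; out-of-bounds neighbors
    clamp to the nearest border pixel.
    """
    h, w = len(src), len(src[0])
    get = lambda y, x: src[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]
    out = [[0] * (w * 2) for _ in range(h * 2)]
    for y in range(h):
        for x in range(w):
            p = src[y][x]
            a, b = get(y - 1, x), get(y, x + 1)  # above, right
            c, d = get(y, x - 1), get(y + 1, x)  # left, below
            # Each source pixel expands to a 2x2 block; a corner copies
            # a neighbor only when exactly one diagonal pair matches.
            out[2*y][2*x]     = a if c == a and c != d and a != b else p
            out[2*y][2*x+1]   = b if a == b and a != c and b != d else p
            out[2*y+1][2*x]   = c if d == c and d != b and c != a else p
            out[2*y+1][2*x+1] = d if b == d and b != a and d != c else p
    return out
```

Note that the rules only ever copy existing colors, never blend them, which is why the result stays within the original palette (and why it suits indexed-color console graphics).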
tepples wrote:
If you want more detail in particular games, feel free to contribute high-resolution texture packs, such as CHR banks with 16x16 pixel tiles.
This is a more valid approach to increasing the resolution of classic games IMO, especially considering that each game has a different "feel", so you can't properly upscale them without taking that into account. For example, cartoon-like games will benefit more from the kind of round look that these filters produce.
Then here's a plan that can be properly waterfalled without any time interdependency between coders and artists:
- Make an emulator (whether for PC or for FPGA) that performs Scale2x on each 8x8 pixel tile of CHR ROM when loading it and then uses the resulting 16x16 pixel tiles (and 16x2 pixel slivers) as the rendering primitive.
- Have it output the tilewise Scale2x texture to a bitmap image file (BMP, PNG, etc.) and load a modified high-resolution texture file.
- Make a tool that rearranges a game's CHR dump to a recognizable sprite sheet and back.
- Tweak the rearranged tilewise-enlarged SMB1 CHR ROM to make it look better.
Games using CHR RAM can be handled later. I have some ideas about how to handle them using a hash table, which I'll explain in more detail once CHR ROM games are working as in step 1.
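As a concrete illustration of step 1, a CHR ROM dump would be decoded tile by tile before the filter pass. Here's a hedged sketch in Python (the name `decode_chr_tile` is hypothetical, but the 2bpp planar layout it decodes is the actual NES CHR tile format: 16 bytes per 8x8 tile, low bitplane first, then high bitplane):

```python
def decode_chr_tile(data):
    """Decode one 16-byte NES CHR tile (2bpp planar) into an
    8x8 grid of palette indices 0-3.

    Bytes 0-7 hold the low bitplane, bytes 8-15 the high bitplane;
    bit 7 of each byte is the leftmost pixel of that row.
    """
    tile = []
    for y in range(8):
        lo, hi = data[y], data[y + 8]
        row = [((lo >> (7 - x)) & 1) | (((hi >> (7 - x)) & 1) << 1)
               for x in range(8)]
        tile.append(row)
    return tile
```

Each decoded 8x8 grid would then be run through a Scale2x pass independently, yielding the 16x16 pixel tiles the plan calls for; because the filter never sees pixels outside the tile, the result is deterministic per tile, which is what makes dumping and re-importing a texture file feasible.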
tepples wrote:
Rounder is sometimes desired by itself. Real bodies are rounded to minimize skin surface area. Real clothes are rounded to fit real bodies. Perhaps what the roundness does is make 8-bit game graphics look more like an illustration (or a modern Flash game based on the principles of traditional illustration) than like an obsolete video game.
No video games so far have become obsolete. Most 2D graphics and games are not about realism. The graphics communicate a particular mood which this filter significantly alters. That's the point here. The problem as I understand it is that upscaling graphics requires
some kind of filling-in-the-blanks to avoid them looking blurry. This filter is an alternative to big pixels, which also are not what the games looked like on TVs back then.
I must say this is great work. Most games look better with HQ2x than with simple 2x or bilinear. Of course there will be rare cases where it degrades the graphics, but overall, it's great. And it's much simpler than NTSC, and looks better too.
Bregalad wrote:
Most games look better with HQ2x than with simple 2x or bilinear.
To me it's the exact opposite: very few games look better with HQ2X. I agree that bilinear scaling sucks big time for old games though.
Quote:
And much simpler than NTSC, and looks better too.
That's subjective. I'm quite fond of the effect NTSC has on pixelated graphics.
I like NTSC because it makes it look like the real system, but in all objectivity, you can't call parasitic hues at transitions and dot crawl appealing to the eye, can you?
Quote:
To me it's the exact opposite: very few games look better with HQ2X
Screenshots to back up your claims?
It's not possible to take a screenshot of one's subjective experience of how something looks. The claim is all the evidence there is (and all that's needed; we can trust that tokumaru really does prefer the NTSC filter to HQ2X).
Filters can make certain 3D games look better by eliminating the blurry textures (see below). A SNES or GBA game might benefit, but it just kills the aesthetic of those Mario 1 and 2 shots.
http://www.youtube.com/watch?v=qPiuN7nkbxI
Unfortunately, the only difference I'm noticing in that video is "rendered at PS1-native resolution" versus "rendered at higher resolution". Try as I might, I can't see any places where the upscaled textures are noticeable, at least not in the 480p encoding on YouTube.
strat wrote:
Filters can make certain 3D games look better by eliminating the blurry textures (see below).
The difference is very subtle on the textures... The increased rendering resolution really helps the 3D objects, though, and that's a very different case: 3D objects are not constrained by the resolution of the display, so rendering them at higher resolutions is simple and effective.
Is there an algorithm like HQ2X that works on photos?
psycopathicteen wrote:
Is there an algorithm like HQ2X that works on photos?
I've read about some magic filters for upscaling photos... Something like this.
tokumaru wrote:
psycopathicteen wrote:
Is there an algorithm like HQ2X that works on photos?
I've read about some magic filters for upscaling photos... Something like this.
Those tests would have worked out much better if they had used a larger image as the source, for starters; the amount of JPEG artifacting is just too much.
Or perhaps it was intended as a test of how the upscaler handles real-world images from cameras not made for professional use, which tend to be overcompressed. Even the RAW files that a pro or prosumer camera saves show artifacts from demosaicing the Bayer pattern at the sensor.