Zepper wrote:
How so? A color fade-in effect starting at the left (or right?) most pixel!?
The specific type of interpolation doesn't matter as much as the resolution at which it's applied. *ANY* stretching will be obvious when applied at the low resolution; it will *NEVER* look good. At high resolutions, though, after blowing up the pixels by an integer factor, you can try different scaling algorithms (nearest neighbor, linear, whatever) and see what works best.
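For reference, the kind of resampler we're talking about is basically just this (a minimal sketch; the function name and the 8-bit pixel format are my assumptions, not your actual code):
Code:
#include <stdint.h>

/* Nearest-neighbor horizontal resample: every destination pixel is a
 * copy of the closest source pixel. Hypothetical helper for
 * illustration; the real emulator code may differ. */
static void resample_nearest(const uint8_t *src, int src_w,
                             uint8_t *dst, int dst_w)
{
    for (int x = 0; x < dst_w; x++) {
        int sx = x * src_w / dst_w; /* floor of the source position */
        dst[x] = src[sx];           /* duplicated pixels = the error */
    }
}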
Quote:
No, you didn't see the code.
From the images, it's obvious that the stretching is working over 256 unique pixels of the scanline. Even if those pixels are actually 3x3 hardware pixels, you're probably still working with only 256 units, which is the same as working on the original 256-pixel scanline.
Quote:
Annoying of "stretching".
By "stretching", I mean scaling by a non-integer value, a scaling step that changes the aspect ratio.
Quote:
For my best, it's called error difusion.
The error diffuses better at higher resolutions, because it gets spread more evenly across a longer scanline.
Here's an example... say you have these 3 pixels that have to be stretched into 5:
█▓░
A naive nearest-pixel approach at the native resolution will give you something like this:
██▓▓░
Any pixel that needs to be doubled creates an error of 100%: it doubles the native pixel! When you scale this up by 3x, THE ERROR IS SCALED UP AS WELL!!! So you end up with the following sequence (this is what you appear to be doing):
█▓░ (original)
██▓▓░ (stretched by 1.666)
██████▓▓▓▓▓▓░░░ (scaled by 3x)
Now, if instead you scale by 3x first, the error will NOT be scaled up and will be way more subtle (the error is distributed more evenly):
█▓░ (original)
███▓▓▓░░░ (scaled by 3x)
█████▓▓▓▓▓░░░░░ (stretched by 1.666)
In this particular case, the error was divided so evenly that all final pixels ended up the same size, with absolutely no distortion at all.
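If it helps, here's a tiny self-contained program that reproduces both orderings with plain nearest-neighbor sampling, so you can see the difference yourself (the letters A/B/C just stand for the three pixel shades; all the names are illustrative):
Code:
#include <stdio.h>

/* Nearest-neighbor horizontal resample, using chars so the
 * resulting scanline can be printed. */
static void resample(const char *src, int src_w, char *dst, int dst_w)
{
    for (int x = 0; x < dst_w; x++)
        dst[x] = src[x * src_w / dst_w];
}

int main(void)
{
    const char *original = "ABC";       /* the 3 source pixels */
    char tmp5[6], tmp9[10], bad[16], good[16];

    /* Wrong order: fractional stretch at native resolution, then 3x. */
    resample(original, 3, tmp5, 5);
    resample(tmp5, 5, bad, 15);
    bad[15] = '\0';

    /* Right order: 3x integer scale first, then fractional stretch. */
    resample(original, 3, tmp9, 9);
    resample(tmp9, 9, good, 15);
    good[15] = '\0';

    printf("stretch then scale: %s\n", bad);  /* AAAAAABBBBBBCCC (6/6/3) */
    printf("scale then stretch: %s\n", good); /* AAAAABBBBBCCCCC (5/5/5) */
    return 0;
}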
The bottom line is, you need to keep the error at the size of the target pixels, which are smaller, not at the size of the source pixels, which are big. At that point you're free to try different interpolations for the optimal result, but no matter what interpolation you use, it should be performed at the higher resolution, not the native resolution.
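Just to illustrate that last point, once you're at the higher resolution the final fractional step could use something smarter than nearest neighbor, e.g. a linear filter along the lines of this sketch (the 16.16 fixed-point layout and the assumption of blendable 8-bit intensity values are mine; NES palette indices would have to be converted to RGB before any blending):
Code:
#include <stdint.h>

/* Linear-interpolation horizontal resample, intended to run on the
 * already integer-scaled buffer. Sketch only: assumes 8-bit
 * intensity values that can be blended (palette indices can't). */
static void resample_linear(const uint8_t *src, int src_w,
                            uint8_t *dst, int dst_w)
{
    for (int x = 0; x < dst_w; x++) {
        /* 16.16 fixed-point source position for this destination pixel. */
        uint32_t pos  = (uint32_t)(((uint64_t)x * src_w << 16) / dst_w);
        uint32_t i    = pos >> 16;      /* left source pixel */
        uint32_t frac = pos & 0xFFFF;   /* blend weight of the right one */
        uint32_t j    = (i + 1 < (uint32_t)src_w) ? i + 1 : i;
        dst[x] = (uint8_t)((src[i] * (0x10000 - frac) + src[j] * frac) >> 16);
    }
}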