I see... If you have a specific ratio for which the code can be optimized, great. But most of the time, that's not the case... Take your idea of having emulators provide user-adjustable stretching of the screen: better to just let the hardware do it. Unless you have a very specific reason to do it in software, it seems like a waste of processing time.
Now that I think of it... Your example seems a bit too simple, and might look wrong. When expanding 2 pixels into 5, you have something like this (according to your example):
Code:
+-+-+-+-+-+
| |0| |1| | (ORIGINAL)
+-+-+-+-+-+
|0|1|2|3|4| (INTERPOLATED)
+-+-+-+-+-+
Where only interpolated pixel #2 is a mix of both original pixels. But if you look at a larger view of that:
Code:
+-+-+-+-+-+-+-+-+-+-+
| |0| |1| | |2| |3| |
+-+-+-+-+-+-+-+-+-+-+
|0|1|2|3|4|5|6|7|8|9|
+-+-+-+-+-+-+-+-+-+-+
You'll see that the original pixels are not evenly distributed, while with mathematically correct interpolation they should be, right? Maybe this will look close enough, but we can only tell by seeing some horizontal scrolling scaled up with that method.
I'm pretty sure real interpolation needs floating point math... unless something like Bresenham's algorithm can be used for this, but I haven't given it much thought.