For the C++ on the Wiki, what is that 9-bit pixel color? Is there a way to convert the 6-bit PPU palette index into that?
It's the 6-bit raw color number (i.e. the output of the palette) appended to the preemphasis bits.
Thanks. I plugged in that code without any optimizations to see what it looks like. Here's a screenshot (3x magnification):
That looks pretty horrible to me. And, I'm pretty sure that effect can be simulated by applying a filter that emphasizes R, then G, then B cyclically over every 3 pixels.
Anyway, the NTSC filters in other emulators don't look that bad. What is missing here?
zeroone wrote:
Anyway, the NTSC filters in other emulators don't look that bad. What is missing here?
I think that looks more or less correct, but you've rendered to low resolution and then upscaled.
If you want better looking quality, you should render the NTSC output directly to a higher resolution output. Vertically you can just double or triple lines (darkening some if you like "scanline" simulation), but horizontally you should account for the NTSC signal across the whole line (possibly while stretching to a different aspect ratio).
e.g.
Blargg's NTSC filter demo upscales 256 x 240 to 602 x 480.
For a quick comparison, Blargg's filter demo output compared to decimating it to half the resolution:
Attachment:
smb3_blargg.png [ 91.55 KiB ]
Attachment:
smb3_blargg_decimated.png [ 45.36 KiB ]
Note the better quality in the first image because it is able to subsample the analog NTSC signal during horizontal upscaling.
It looks like every other line is slightly darker. I'll try to figure out how that code works.
You're seeing a combination of three differences:
- As rainwarrior mentioned, TVs and emulators run the NTSC decoder at full output resolution, as opposed to one sample per NES pixel.
- The PPU alternates the color subcarrier's phase by one-third of a cycle between one frame and the next. When played at 60 fps, frames with different phases blend together in a way that a single-field still screenshot doesn't capture.
- The last thing you noticed is "scanlines", a technique that makes every other line darker to simulate the beam shape on a CRT that's focused for 480i but displaying 240p. But this is independent of the NTSC encoding and decoding; PlayChoice-style emulation with scanlines but without an NTSC filter is still valid.
zeroone wrote:
It looks like every other line is slightly darker. I'll try to figure out how that code works.
That part isn't important, simulating scanlines is merely a stylistic choice / personal preference. It's a vague approximation of the gaps between lines in 240p on a CRT.
The vertical stretching of the image is simply done by doubling every line (no interpolation or oversampling, etc.). The scanline effect is literally just darkening every second line. I would have removed it for clarity if the filter demo had an option to do so, since I was only trying to demonstrate how the horizontal scaling is different.
For a better comparison, here's the Blargg filter example with the scanlines removed. (Compare this against the second "decimated" example above.)
Attachment:
smb3_blargg_scanlines_removed.png [ 51.53 KiB ]
Comparing the two you can see the decimated version has the same problem as your example, though maybe slightly milder (the problem is a little bit obscured by the aspect ratio change). The point is that quality improves with better sampling. You should see that the decimated version has more spurious colour fringes, etc. than the normal version, similar to your example.
Does following along with my
manual demodulation explain things at all?
One of the functions on the wiki takes a Width parameter. Here's the result with a Width of 512:
@rainwarrior It's a lot closer to the last Blargg image that you posted, but Blargg's still doesn't contain that much color fringing.
lidnariq wrote:
Does following along with my
manual demodulation explain things at all?
That looks like a really interesting post. I'll study it. I'm clueless about how NTSC does its thing. I just copied and pasted the wiki code.
Below is another with Width set to 768. It also smoothly stretches it to the TV pixel aspect ratio.
I don't recall any CRT looking quite like these images.
The red-green-blue artifacts are way too pronounced, but the image looks fairly accurate otherwise, IMO.
tokumaru wrote:
The red-green-blue artifacts are way too pronounced, but the image looks fairly accurate otherwise, IMO.
I agree. But, this was generated with the code straight off the wiki. Any tuning suggestions?
The C++ on the wiki "should" generate a 2048x240 image.
Oh, I see, the "width" parameter imposes a simple decimation of the output image. At a width of 768, it ought to be ... moderately representative of the actual bandwidth of the system, I'd think...?
If you're starting with a phase of 0 (out of 12), try rendering with phase 0 in even frames and 4 in odd frames. To take a still screenshot, render with phase 0 and 4 and blend them.
If you're rendering at double height (480), take 584 samples out of 2048. This roughly means samples 0, 3, 7, 10, 14, 17, 21, ...
If you're rendering at triple height (720), take 878 samples out of 2048. This roughly means samples 0, 2, 4, 7, 9, 11, 14, 16, 18, 21, ...
lidnariq wrote:
The C++ on the wiki "should" generate a 2048x240 image.
Oh, I see, the "width" parameter imposes a simple decimation of the output image. At a width of 768, it ought to be ... moderately representative of the actual bandwidth of the system, I'd think...?
See above for Width set to 768. Where does the number 2048 come from anyway?
I really need to study more about how NTSC works. But, from staring at these images for a while, there appears to be an easy way to simulate the color artifacts. Scanning from left to right, any time there is a transition from one color to another, one of the RGB channels lags while the other two transition immediately. For instance, a transition from white to black induces either a red, green or blue pixel at about 50% illumination, while a transition from black to white induces a cyan, magenta or yellow pixel (the respective inverses). The color channel that lags is a function of the PPU cycle count modulo 3. That trick will produce an almost identical result, including the changes between frames. That said, I'm not even sure if this is really worth it!
The colour fringes are an authentic NTSC NES artifact, but on most hardware they shouldn't appear as strong as they do in your example. In particular, black and white stripes normally wouldn't have such a pronounced problem; usually they look pretty clean with only very subtle fringing.
Looking at the code on the Wiki, what happens is that the NTSC signal is generated at the NES' output rate (256 * 8 samples per line?) and it takes the average of 12 samples centred on any given pixel. That "average of 12 samples" is a "box filter", described briefly on the wiki page. This is not an ideal filter for this purpose: the shape and width of this filter emphasize some unpleasant sidebands, especially if changing the aspect ratio, and since the filter does not change size with the output Width parameter, its behaviour will change as you adjust the Width too.
Blargg's NTSC filter seems to use a combination of sinc and gaussian FIR filters instead. (I think the sinc filter is the ideal cutoff for the signal, and the gaussian applies an additional blur to control sharpness?) These FIR filters need to be generated with parameters to fit the output image size as well.
In real hardware it would generally be done with an analog IIR filter, I presume.
Designing and implementing appropriate filter for this signal is maybe a more advanced topic, but basically if you want a "cut and paste" example, I suggest you use Blargg's NTSC library instead. The one made by Bisqwit for the Wiki seems to be more of a teaching example for how the signal is generated, rather than a polished usable product.
(Blargg's code, on the other hand, produces a very nice result, but I wouldn't expect anyone to understand the code unless they already knew enough to write it themselves.)
Bisqwit's code assumes that the signal is at twice the NES master clock, or about 43 MHz. As rainwarrior mentioned, this means that each NES pixel is 8 samples wide. But it's also general enough to work with Apple II, CoCo, or Atari 7800 video where each pixel is 6 samples wide, or 3 samples for double hires or 80 column text on Apple II.
I translated Blargg's code to Java with these results:
Blargg's code outputs a 602x240 image. Above, the image was cropped (top and bottom 8 scanlines) and smoothly scaled to TV pixel aspect ratio (8:7).
The color fringes/artifacts are hardly noticeable. Look between the U and the P in the title for an example. And, this is a stationary screenshot; they are even less visible during emulation because the color burst phase is changing, which visually averages out except during horizontal scrolling. Blargg's code provides a bunch of constants to tune the artifacts, fringing, bleed, resolution and many other image aspects. I have yet to play around with that.
Each scanline is dark on the left and right because the filter does not compensate for fewer samples at the edges.
Blargg's code can generate 15, 16 and 24-bit RGB output. Above, the 24-bit RGB setting was used with color emphasis enabled.
The scanline effect mentioned early in the thread is artificially introduced simply by darkening alternate rows by 12%. I did not apply that technique above.
Bisqwit's code was known to not do the YIQ filtering right; he used an N-point "boxcar" lowpass filter (which behaves roughly like a 1st-order IIR lowpass whose corner frequency is set by the number of taps) instead of a proper, sharper lowpass.
(Additionally, he used boxcar lowpass filters of length 12·n because that puts one of the filter's nulls at exactly the colorburst frequency, knocking out some chroma-into-luma crosstalk.)
I'm not certain whether "using the wrong filter" would cause that much crosstalk. Not implausible, I guess.
By the way, did CRTs back in the day really have dark separations between the scanlines? None of the various ways that the phosphors were packed suggest a line of separation. In addition, the NES double struck each row; so, simulating the effect, if it really existed, would apply twice to each NES scanline.
Is the scanline effect actually a consequence of CRTs being mounted at 90 degree angles in some arcade games? For instance, a region of red would leave the green and blue phosphors unlit. And, if the phosphors are arranged in columns, such a region would appear as red strips between rows of black on the rotated CRT.
The scanline effect is a function of the size of the electron beam that's focused on the CRT grille.
How the manufacturer specified the focus beam size is mostly a function of the size of the CRT itself—tiny 4" screens in old luggable sets rarely could even display 200 lines, and no scanlines were visible there (and some blurring was). But larger sets could display more (e.g. the manual for the 19" CRT TV in my house says "330 lines") and that makes a dark bit in-between scanlines.
CRTs with larger beams last longer, because there's a larger surface for electrons (and tiny amounts of metal) to escape/evaporate from. The older the electrode, the worse the focus (because the pointy electron emitter slowly erodes with use).
The "focus" control on old sets controlled the exact voltage difference between the front anode and the electron guns. Higher voltage means a stronger attractive effect, pulling electrons off a larger area of the electron gun.
This twitter account regularly takes close-up photographs of CRTs:
https://twitter.com/MyLifeInGaming/stat ... 9248866304
rainwarrior wrote:
This twitter account regularly takes close-up photographs of CRTs:
https://twitter.com/MyLifeInGaming/stat ... 9248866304
I'm not sure if that is real or a simulation. Following a few links leads to emulation consoles.
zeroone wrote:
By the way, did CRTs back in the day really have dark separations between the scanlines?
Yes (and not just "back in the day" either -- all CRTs will have them). But there's no universal definition of what "dark" means; it varied wildly (visually) per CRT or model. There's a great write-up about all of this, including pictures. The article borders on obsessive detail, but it should suffice.
Oh, and don't forget about "Trinitron CRT lines", which is due to Sony's use of aperture grilles, requiring very thin horizontal wires to hold the vertical wiring in place, resulting in a "shadow" on the phosphor layer. There were commonly 2 visible lines, but sometimes 3 (depended on monitor size). If you split the entire display into 3 horizontal "sections", you'd find them at the bottoms of the 1st and 2nd "sections" (for those with 3 lines, split the display into 4 horizontal "sections"). They were visible, especially when displaying bright backgrounds, and were significantly more noticeable than scanlines. You just learned to accept it, and it was generally worth it -- Trinitrons were absolutely wonderful. Best image I could find (depicting a single line):
http://computer.howstuffworks.com/question406.htm
For what it's worth -- and this is pure opinion material, so take it with a grain of salt -- I absolutely hate scanlines in emulators. To me, it looks tacky and awkward the majority of the time; even those which do it well (my reaction is usually "Oh hey, that looks about right!") still don't sit well with me. I feel the same way about "NTSC video emulation", as well as "pixel aspect ratio". If an emulator offers these features, as long as there's a way to turn them off, I'm content.
If you want me to take close-ups of my Sony Wega CRT proving the existence of scanlines -- because you doubt the authenticity of, oh, everything on the Internet -- just ask.
zeroone wrote:
I'm not sure if that is real or a simulation. Following a few links leads to emulation consoles.
They're obsessed with CRTs, it's a real picture. (They say it's a Sony PVM specifically in one of the replies.)
They are interested in emulation too, but I don't think they would ever take a picture of an emulator in that manner (i.e. a camera pointed at an LCD running an emulator? why?); they would just take a screenshot.
Of course, a photograph can't tell the whole story (e.g. the wide colour gamut of CRTs, extra brightness, etc.), but it does show you what scanlines look like.
lidnariq wrote:
I'm not certain whether "using the wrong filter" would cause that much crosstalk. Not implausible, I guess.
Hmm, I could be wrong about it, but it was my best guess based on the code, and comparing it to Blargg's.
The problem could be elsewhere, but I don't have any leads... we could try taking Blargg's FIR and applying it to Bisqwit's output to test?
koitsu wrote:
I absolutely hate scanlines in emulators. To me, it looks tacky and awkward the majority of the time; even those which do it well (my reaction is usually "Oh hey, that looks about right!") still don't sit well with me.
Same here. Personally, I've never experienced scanlines when playing games on a CRT; that's something I have absolutely no recollection of. Maybe they were in fact there and I just didn't notice them... but either way, the common technique used to simulate scanlines looks completely artificial to me and nothing like a CRT TV at all.
Quote:
I feel the same way about "NTSC video emulation", as well as "pixel aspect ratio".
I absolutely love NTSC artifacts, and that's definitely something I remember (even though I grew up with PAL-M, so I have no idea why the end result was similar, it just is), because I'm always impressed by the texture and depth they bring to the otherwise flat, dull, low-color images. It almost seems like some artists COUNTED ON the graphics being mangled the way they were by the NTSC encoding (quick example: how Blaster Master uses the same gray to highlight both green and brown, creating convincing dirt and grass from a single palette - an effect that doesn't look nearly as good with crispy pixels). I can understand some people not liking NTSC simulation though, as some of the artifacts (flickering, dot crawl, etc.) can be distracting.
Aspect ratio is pretty important though. I never understood people who bought new HD TVs and configured them (or simply didn't change the factory settings) to display old 4:3 video stretched to 16:9 to fill the entire screen, even though that made circles look like ovals and people look short and fat. Playing console games using the wrong aspect ratio is the same thing, everything is distorted, and has different proportions from those intended by the artists. It's weird to play Sonic Chaos on a Master System emulator and run around egg-shaped loops, after running around the rounder versions of the loops for years - and it's also weird to see that Sonic gets shorter/chubbier when rotated by 90 degrees.
The thing I most dislike about fake scanlines is they're a net loss in picture brightness.
On a CRT, the 240p picture is actually double bright, compensating for the scanline gaps.
An LCD is much dimmer than a CRT even in the best of conditions, and adding these fake scanlines makes that worse.
The other problem is that you need an integer scaling factor to do it without a horrible moire pattern, so in a lot of cases this forces a black border on you, depending on the resolution of the monitor vs. the best-fit integer scaling size. I usually prefer integer scaling anyway, but if you want the picture to fill your screen, the scanline idea more or less takes that option away.
Some emulators go even further, warping the image to simulate the curve of a CRT screen, bloom filter to simulate brightness bleeding into neighbouring pixels, or even a glass glare effect, or a 3D rendering of a cabinet around it, or any number of weird textural things like this...
Anyhow... I don't actually like using these in general, but as long as they're optional I've got no problem with 'em.
I mean, I like the look of a CRT, but I don't really appreciate these simulations of it much. (Would rather play it on an actual CRT than simulate a CRT on an LCD.)
Here, just to follow up, a video by "My Life in Gaming" about CRTs for gaming. Maybe a bit heavy-handed, but they do have a lot of good video examples and I think they explain it well.
https://www.youtube.com/watch?v=RAi8AVj9GV8
Here's a list of things I wish HDTVs would stop doing.
-Deinterlacing everything into 30fps
-Gaussian blurring the entire screen
-Input lag
-Sampling everything at 640 pixels across, and upscaling it (with the dumb gaussian blur filter)
Also, is it just me, or did CRTs usually have better Y/C separation than HDTVs do?
Let me guess the excuses that TV makers will use:
TVs don't attempt to deinterlace video that meets a progressive video standard. This means 480p, 720p, 768p (RGB), or 1080p. The double-struck 240p mode used by classic video game consoles was never published as a formal standard.
It's not Gaussian blur. In many cases, it appears to be either linear or cubic interpolation. This is visually preferable to nearest-neighbor for live action, photorealistic CGI, and cel animation, just not for the nonstandard output of the original PlayStation and earlier video game consoles.
Input lag is not a problem for noninteractive video from broadcast, cable, or satellite television or Internet streaming. Better TVs tend to include a "game mode" that reduces lag from a standard progressive source at the expense of somewhat reduced picture quality.
The only composite picture source I'm aware of that requires a sampling rate greater than 13.5 MHz is an Apple II in 80-column text, double hi-res, or IIGS double super hi-res mode.
For NTSC filtering there is also the possibility of steep notch filters, like the one below. If the dot-crawl is too noticeable, it probably could use a Gaussian filter on top of it.
http://www.dspguide.com/ch19/3.htm
This is totally not how analog TVs work, but I just thought of a neat way to separate luma and chroma. Demodulate chroma. Limit the slope of the I and Q signals. Remodulate the chroma signal and subtract it from the composite signal to form the luma signal.
I'm pretty sure some analog TVs do subtract the remodulated IQ or UV from the composite signal.
Do any of them fake filtering by limiting the rate the I and Q signals change from high to low?
It'll only work if done in the digital domain; I can't see it working with any decent performance entirely in the analog domain, mostly because it requires time-delaying the original signal, demodulation isn't instant, and you have to line up the original and processed signals perfectly; any desync or linearity error results in exaggerated artifacts.
"Slope limiting" sounds mostly like another way of describing "lowpass filter"...
"Slope limiting" or "maximum
slew rate" has a low-pass characteristic, but unlike the common IIR filter designs, it's not linear.
I've been experimenting with different filters using a calculator and a painting program, and I think a 49-point Hamming-windowed sinc filter with the picture scaled to 512x448 looks best. If the original poster wants to use this type of filter, the equation is this:
y = sin(7*x*pi/12)/(x*pi) * (25 + 21*cos(x*pi/24))/46 * 256
... and the convolution matrix itself:
{0,0,0,0,-1,0,1,0,-1,1,1,-3,0,4,-3,-5,7,2,-12,4,17,-19,-20,78,153,78,-20,-19,17,4,-12,2,7,-5,-3,4,0,-3,1,1,-1,0,1,0,-1,0,0,0,0}
edit:
Fudged the center point from 149 to 153 so that every 3rd sample adds up to 85, and the whole thing adds up to 255.