All this NTSC signal and dot crawl stuff has always been Greek to me, but today I thought I HAD to understand it.
If I understand correctly, the color signal is anything above 3.5 MHz; its phase determines the color while its amplitude determines the saturation.
The luma signal would be anything below 3.5 MHz, and simply put, the voltage directly represents the luminous intensity.
The problem is that the NES renders pixels at about 5 MHz, which is higher than the 3.5 MHz color subcarrier.
So for example, if I put a single colored pixel in a completely dark area, it can't look the right color, because the color signal isn't active long enough for the TV to determine its phase, so it will be some "random" color.
Did I get it right?
If not, someone please explain; this aspect of the NES is what's truly lacking in my knowledge...
You've got the right idea.
On NTSC or PAL/M, the luma (brightness signal) is nominally 0-3.0 MHz and the chroma (color signal) is 3.0-4.2 MHz. Each pixel on a ColecoVision, NES, SMS, or Super NES is 2/3 of an NTSC color subcarrier cycle wide, which isn't necessarily enough to communicate the color of an isolated pixel. Atari 2600, on the other hand, uses pixels that are a whole cycle wide, so they don't suffer from quite the same problem. In theory, it would be possible to synchronize to the NES's pixel clock and recover the phase exactly, but standard TVs aren't designed for such synchronization.
European PAL uses higher frequencies such that each NES pixel takes 5/6 of a color subcarrier cycle.
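For concreteness, the pixel-width arithmetic above can be checked with exact fractions (the clock values here are my own, taken from the usual published master clock figures, not from this post):

```python
# Sketch: how wide each console's pixel is in NTSC color subcarrier cycles,
# derived from the master clocks. Clock figures are assumptions of mine.
from fractions import Fraction

FSC = Fraction(315, 88)  # NTSC color subcarrier, MHz (~3.579545)

# Pixel clocks in MHz. NES: master clock 236.25/11 MHz divided by 4;
# Atari 2600 pixels are one subcarrier cycle wide by design.
pixel_clocks = {
    "NES":        Fraction(23625, 1100) / 4,  # ~5.369 MHz
    "Atari 2600": FSC,                        # ~3.580 MHz
}

for name, clk in pixel_clocks.items():
    cycles_per_pixel = FSC / clk
    print(f"{name}: {cycles_per_pixel} subcarrier cycle(s) per pixel")
```

Run it and the NES comes out at exactly 2/3 of a cycle per pixel, matching the figure quoted above.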
So for systems with bigger pixels, such as the Atari 2600, it's possible to get the image on a TV without any artifacts?
On a NES, if you somehow restricted yourself to graphics where every pair of horizontally neighboring pixels is the same color (something common on the C64, known as "multicolor mode"), would you get no artifacts/dot crawl on screen?
What about systems with an even higher horizontal resolution than the NES, such as the PS1 in high res, the PS2, or the SNES in modes 5 & 6? Do they need 4 or so pixels in order to be shown with the correct color?!?
As I understand it, fifth generation (PS1/N64) and newer consoles generate luma and chroma signals separately (RGB converted to YUV) and filter them in such a way that the luma and chroma are less likely to bleed into each other.
Bregalad wrote:
So for systems with bigger pixels, such as the Atari 2600, it's possible to get the image on a TV without any artifacts?
Technically no; color bandwidth in NTSC is only supposed to go up to ~1.5 MHz. On the 2600 the narrowest pixels, if alternating between two not-too-similar colors, will exceed that. This is even true with one that's had the s-video mod, like in the image I found at the bottom of http://www.vintagegamingandmore.com/
Quote:
What about systems with an even higher horizontal resolution than the NES, such as the PS1 in high res, the PS2, or the SNES in modes 5 & 6? Do they need 4 or so pixels in order to be shown with the correct color?!?
Well, it's more that they will become an almost uniform color once bandlimited in NTSC. You can see examples at http://www.chrismcovell.com/gotRGB/screenshots.html, and that's only for the 256 or 320 pixel wide modes.
The wikipedia page on NTSC talks about the bandwidth considerations, and the discussion I had here talks about how the television can do other tricks to try to recover a better separation between luma and chroma.
OK, so basically an NTSC video signal is a normal (monochrome) video signal with a 3.5 MHz sine wave added to it, right?
The amplitude of this sine would be the saturation, and its phase the color.
Then the TV applies a very high order band-stop filter at 3.5 MHz to remove the color information from the luminance signal?
Also, if the color changes during image rendering without any grey area in between, the phase will shift quickly, and harmonics above 3.5 MHz will appear. Is that what causes the infamous "dot crawl"? I think so, because it appears at color changes.
So in an image rendered by a NES where ALL the palettes are the same hue (from the same "column" of the palette), no dot crawl or artifacts would appear, right?
Also, because the NES uses a square wave instead of a sine wave for the color subcarrier, even more harmonics appear. The first extra harmonic would be at 3 × 3.5 = 10.5 MHz, right? That would be about the width of a pixel in a 480p/480i image, so does this cause problems?
Bregalad wrote:
So in an image rendered by a NES where ALL the palettes are the same hue (from the same "column" of the palette), no dot crawl or artifacts would appear, right?
$0x and $3x are less saturated (sinusoid with smaller amplitude) than $1x and $2x. The artifacts from this are very subtle.
Quote:
Also because the NES uses square wave instead of sine wave for the color-burst, this makes even more harmonics appear.
TVs that I've used appear to have a low-pass filter on their analog inputs to block details that are way out of the SDTV nominal bandwidth. An Apple IIe in 80-column text mode, for example, looks a lot sharper through an AppleColor Composite Monitor IIe than through a television (where 80-column text is nearly unreadable).
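Bregalad's harmonic question above can be sanity-checked numerically: a square wave at the subcarrier frequency carries energy only at odd multiples (3x, 5x, ...) of it, falling off as 1/n. A small numpy sketch of my own (synthetic signal, not a real capture):

```python
# Spectrum of a square wave at the NTSC subcarrier frequency: the extra
# content sits at 3x (~10.7 MHz), 5x (~17.9 MHz), and so on.
import numpy as np

fsc = 315 / 88          # NTSC subcarrier, MHz (~3.5795)
fs = 256 * fsc          # sample rate: 256 samples per subcarrier cycle
n = 4096                # whole number of cycles -> no spectral leakage
t = np.arange(n) / fs
square = np.sign(np.sin(2 * np.pi * fsc * t))

spectrum = np.abs(np.fft.rfft(square)) / n
freqs = np.fft.rfftfreq(n, d=1 / fs)

# Report the three strongest spectral lines
peaks = sorted(freqs[np.argsort(spectrum)[-3:]])
for f in peaks:
    print(f"peak near {f:.2f} MHz ({f / fsc:.1f}x subcarrier)")
```

The strongest lines land at 1x, 3x, and 5x the subcarrier, i.e. the first harmonic beyond the fundamental is near 10.7 MHz rather than 10.5 MHz, since the real subcarrier is ~3.58 MHz.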
I don't think of the NTSC signal in terms of "frequencies." I imagine it as a B&W image with checkered patterned noise.
Really they're the same thing. How big are the checkers, and how sharply can one pattern of checkers transition into another pattern of checkers? That could be considered a spatial frequency of sorts.
Factoid: The checkers themselves can be used to recover color from black-and-white film recordings of TV series such as Doctor Who.
tepples wrote:
TVs that I've used appear to have a low-pass filter on their analog inputs to block details that are way out of the SDTV nominal bandwidth.
The last TV I sat down and tested this with (a 1990s RCA Colorscan) had a very sharp lowpass at ~4.3MHz on luma in video over composite, and a 7MHz limit on luma in s-video. This is on a test signal without the colorburst, so there was no chroma crosstalk (and over composite instead of RF switch so no audio crosstalk).
Bregalad wrote:
Also, if the color changes during image rendering without any grey area in between, the phase will shift quickly, and harmonics above 3.5 MHz will appear. Is that what causes the infamous "dot crawl"? I think so, because it appears at color changes.
I think dot crawl comes from two things: 1- the number of color cycles per scanline isn't an integer (so yes for the NES at 227⅓, or proper NTSC at 227½, but not the Apple ][, IBM CGA, or Atari 2600, all at 228), so you get different behavior on each scanline (which is necessary for it to seem to move); and 2- chroma-into-luma crosstalk (any location where sudden changes of chroma exceed the 1.5 MHz nominal bandwidth) or luma-into-chroma crosstalk (any location where sudden changes of brightness approach the 3.6 MHz chroma carrier).
The reason why dot crawl happens on quick chroma transitions is because:
1) the dark part of the color carrier is too fat resulting in a dark dot
2) the dark part of the color carrier is too skinny resulting in a light dot
3) the light part of the color carrier is too fat resulting in a light dot
4) the light part of the color carrier is too skinny resulting in a dark dot
tepples wrote:
$0x and $3x are less saturated (sinusoid with smaller amplitude) than $1x and $2x. The artifacts from this are very subtle.
Is this information stated anywhere on the NESDEV main page or the wiki?
Quote:
TVs that I've used appear to have a low-pass filter on their analog inputs to block details that are way out of the SDTV nominal bandwidth. An Apple IIe in 80-column text mode, for example, looks a lot sharper through an AppleColor Composite Monitor IIe than through a television (where 80-column text is nearly unreadable).
Well, in fact my TV doesn't have this filter by default, and things look really, really terrible. There are very bright dots appearing between pixels. You can enable filtering in "software", which makes it look alright, but even then, with some color combinations it still looks worse than Nestopia with the NTSC filter.
Should I add a cap in my toploader so that it filters out frequencies higher than 6 MHz?
Quote:
I think dot crawl comes from two things: 1- the number of color cycles per scanline isn't an integer
I thought this trick was supposed to make the NES look *better* (according to this).
Finally, there is another thing I don't understand. Look at the red rectangle around the subweapon in Castlevania 1 (or 3, doesn't matter).
Normally all the red pixels are directly adjacent to black pixels.
Black is a low DC voltage, while red would be an oscillating signal at 3.5 MHz, with a phase that makes it red and a DC offset that makes it bright, right?
Then there is absolutely no confusion possible, no other colors or luminosities around, but the rectangle still looks like it has "sawtooth" edges. Why?
Bregalad wrote:
Quote:
I think dot crawl comes from two things: 1- the number of color cycles per scanline isn't an integer
I thought this trick was supposed to make the NES look *better* (according to this).
It does make it look better. Dots that crawl (e.g. NES, Super NES) look OK. Dots that don't crawl (e.g. Neo Geo) and vertical lines that don't crawl (e.g. SMS, Genesis) don't look nearly as good.
Quote:
Finally, there is another thing I don't understand. Look at the red rectangle around the subweapon in Castlevania 1 (or 3, doesn't matter).
Color $16 in CV1.
Quote:
the rectangle still looks like it has "sawtooth" on it. Why ?
Colors $1x alternate level $10 (light gray) with $1D (black). So we have a bunch of tiny light gray dots in the "red" spaces and a bunch of black dots in the "cyan" spaces, which get decoded to a red hue. But the shape of the rectangle cuts through the middle of some of these light gray dots, and you get sawtooth shapes where the vertical line of the rectangle divides a dot on one line and a space on the next.
Quote:
Colors $1x alternate level $10 (light gray) with $1D (black). So we have a bunch of tiny light gray dots in the "red" spaces and a bunch of black dots in the "cyan" spaces, which get decoded to a red hue.
OK, that's pretty much what I thought. It should oscillate at 2 × 3.5 = 7 MHz, right?
This is only a little faster (read: "narrower") than the PPU's pixels, which are about 5 MHz. So on a "perfect" monochrome monitor, one should see some very narrow vertical gray lines instead of a bold gray line (which would be preferred)?
Quote:
But the shape of the rectangle cuts through the middle of some of these light gray dots, and you get sawtooth shapes where the vertical line of the rectangle divides a dot on one line and a space on the next.
I'm sorry, but I really don't understand this.
Because the area is ONLY black and red, there is no confusion possible with other hues or luminosity values, so I see no reason not to have a straight line, no matter how the "tiny" lines are arranged. What might change is the exact position where the line starts or ends, but it would not start or end "abruptly", rather "gradually", as the luminance is low-pass filtered at 3.5 MHz, right?
Bregalad wrote:
So on a "perfect" monochrome monitor
Like an old black-and-green computer monitor that I used to own several years ago, which lacked this 4.2 MHz low-pass because it was designed for 80 column text.
Quote:
one should see some very narrow vertical gray lines instead of a bold gray line (which would be preferred) ?
On the Atari 2600, Apple II, SMS, or Genesis, a solid color appears as vertical gray lines. On the NES or Super NES, they're diagonal green lines because each line is offset by one-third of a subcarrier cycle. And on a more standard-conforming system such as anything fifth gen or newer, they're green dots in a checkerboard pattern because each line is offset by one-half of a subcarrier cycle.
Quote:
Because the area is ONLY black and red, there is no confusion possible with other hues or luminosity values, so I see no reason not to have a straight line, no matter how the "tiny" lines are arranged. What might change is the exact position where the line starts or ends, but it would not start or end "abruptly", rather "gradually", as the luminance is low-pass filtered at 3.5 MHz, right?
Luma and chroma are not filtered within the NES before being combined. Want a diagram?
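A small sketch of the geometry tepples describes, using the cycles-per-scanline counts quoted earlier in the thread: the fractional part of that count is the per-line phase offset, which sets the shape of the residual chroma pattern (the pattern mapping is my own paraphrase of the posts above):

```python
# Per-line subcarrier phase offset -> shape of the chroma artifact pattern.
from fractions import Fraction

cycles_per_line = {
    "Apple II / CGA / Atari 2600": Fraction(228),       # integer
    "NES / Super NES":             Fraction(682, 3),    # 227 1/3
    "Broadcast NTSC (fifth gen+)": Fraction(455, 2),    # 227 1/2
}

for name, cycles in cycles_per_line.items():
    offset = cycles - int(cycles)  # fractional cycles carried to the next line
    if offset == 0:
        pattern = "stationary vertical lines"
    elif offset == Fraction(1, 2):
        pattern = "checkerboard dots"
    else:
        pattern = "diagonal lines"
    print(f"{name}: offset {offset} cycle/line -> {pattern}")
```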
tepples wrote:
I can draw a diagram of where $00/$16 artifacts come from if you want.
I'm working on this, I'm probably just being a little slow and overdetailed about it.
Here's my attempt at a diagram, which I've added to the wiki article:
Here's a complete description of how I've demodulated the NES's video output by hand:
We tell the NES to display this [test image].
It generates the following signal [waveform image].
Off the left hand of the screen, we get a cue of what phase 8 is (red here is phase 6).
We demodulate I and Q from that. GIMP doesn't support negative colors, so I have to demodulate all four quadrants separately (I+, I-, Q+, Q-) and then add them back together into I and Q [images].
Note that I has a bandwidth of 1.5 MHz (24 pixels) and Q has a bandwidth of <700 kHz (>48 pixels).
We could just lowpass our input at 4.3 MHz, which basically won't get rid of much of the chroma-into-luma crosstalk; it looks like this [image], and that's what you end up with at the end [image].
One of the earlier techniques invented to reduce this crosstalk is to subtract I and Q back from the input signal. We remodulate our calculated I and Q (I+, Q+, I-, Q-) [images]
*EDIT: I forgot 4 of the 8 products; they're not shown here, but the two results below now include them
and subtract the result from the input. Because the NES produces too sharp an edge on the sides, this color trap isn't particularly effective on verticals, and the result only looks like [image].
We then lowpass at 4.3 MHz (we still know per the spec that there's nothing valid above) and do the colorspace transform to get [final image].
you did ALL that by hand?
Anyway, how do you calculate a lowpass filter?
psycopathicteen wrote:
Anyway, how do you calculate a lowpass filter?
I knew the spatial frequency of the images I have there is 1 pixel = 12×NTSC ≈ 43 MHz, so when I wanted a 1.5 MHz lowpass I used a 43 MHz / 1.5 MHz = 29 pixel wide gaussian. (Actually, I eyeballed it and used a 24 pixel wide gaussian.) The gaussian isn't quite right -- a sinc or high-order Chebyshev is probably more authentic, but the gaussian has the advantage that it's symmetrically noncausal (unlike the Chebyshev or boxcar "motion blur"), so I don't need to worry about group delay, and it's natively supported in GIMP.
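A minimal numpy version of that gaussian lowpass, assuming the ~43 MHz sample rate from the post; the choice of sigma as half the stated width is my own guess at what GIMP's blur width corresponds to:

```python
# Gaussian lowpass sketch: ~29-sample kernel at a ~43 MHz sample rate
# approximates a 1.5 MHz cutoff. Kernel is symmetric -> zero group delay.
import numpy as np

sample_rate_mhz = 12 * 315 / 88        # 12 samples per NES pixel, ~42.95 MHz
cutoff_mhz = 1.5
width = sample_rate_mhz / cutoff_mhz   # ~28.6 samples, the "29 pixel" figure

# Symmetric gaussian kernel, normalized to unity gain at DC
x = np.arange(-3 * width, 3 * width + 1)
kernel = np.exp(-0.5 * (x / (width / 2)) ** 2)
kernel /= kernel.sum()

line = np.random.rand(1024)            # stand-in for one line of video samples
filtered = np.convolve(line, kernel, mode="same")
print(f"kernel width ~{width:.1f} samples, DC gain = {kernel.sum():.3f}")
```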
I really appreciate the diagrams, thanks guys.
Apparently TVs use a band-stop filter (as opposed to a lowpass filter) for luma, which means resolution higher than 3.5 MHz can happen as long as the hue (I and Q) stays the same. However, luma harmonics close to 3.5 MHz will be cut off by the filter and interpreted as color information, which can be undesirable.
One thing I still don't understand is how colors are decoded. One can filter a 3.5 MHz signal easily and detect whether or not there is color information, but how can you tell WHICH color it is? I'm pretty sure amplitude modulation is done for saturation (unused on the NES) and phase modulation for the color, but it should be really hard to know the phase of the signal to tell if it's blue, red, or whatever color?
If the phase changes multiple times in less than a full period (which would happen if you tried to get a higher resolution than 3.5 MHz with color, which is what the NES is in fact doing), then the TV can't detect this properly, right?
Even if it could, it would be filtered out by the bandpass filter, so the changes in phase could not be detected. Filters do affect the phase of signals.
Bregalad wrote:
One can filter a 3.5 MHz signal easily and detect whether or not there is color information, but how can you tell WHICH color it is? I'm pretty sure amplitude modulation is done for saturation (unused on the NES) and phase modulation for the color, but it should be really hard to know the phase of the signal to tell if it's blue, red, or whatever color?
Representation of hue and saturation in a color picker
Quote:
If the phase changes multiple times in less than a full period (which would happen if you tried to get a higher resolution than 3.5 MHz with color, which is what the NES is in fact doing), then the TV can't detect this properly, right?
Correct. Too rapid changes in phase will get confused with luma.
encoding chroma:
luma = .299*red + .587*green + .114*blue
U = .492*(blue - luma)
V = .877*(red - luma)
chroma(x) = U*cos(x) + V*sin(x)
decoding chroma:
U = chroma(x)*cos(x) + chroma(x-pi/2)*sin(x)
V = chroma(x)*sin(x) - chroma(x-pi/2)*cos(x)
blue = U/.492 + luma
red = V/.877 + luma
green = (luma - .114*blue - .299*red)/.587
At the beginning of every scanline (during H-blank) there are 8-10 cycles of "colorburst" that analog TVs use to stay locked to the color subcarrier. U is 180 degrees from the colorburst phase; V is 90 degrees from the colorburst phase.
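Here's a runnable version of the encode/decode equations in this post, round-trip checked; note that recovering blue and red works out to luma + U/.492 and luma + V/.877:

```python
# Encode RGB into luma + a chroma wave, then recover RGB by sampling the
# chroma wave at two points 90 degrees apart (quadrature demodulation).
import math

def encode(red, green, blue):
    luma = .299 * red + .587 * green + .114 * blue
    u = .492 * (blue - luma)
    v = .877 * (red - luma)
    def chroma(ph):
        return u * math.cos(ph) + v * math.sin(ph)
    return luma, chroma

def decode(luma, chroma, x):
    # chroma(x) and chroma(x - pi/2) give two independent equations in U, V
    u = chroma(x) * math.cos(x) + chroma(x - math.pi / 2) * math.sin(x)
    v = chroma(x) * math.sin(x) - chroma(x - math.pi / 2) * math.cos(x)
    blue = luma + u / .492
    red = luma + v / .877
    green = (luma - .114 * blue - .299 * red) / .587
    return red, green, blue

luma, chroma = encode(0.8, 0.2, 0.4)
print(decode(luma, chroma, x=1.0))  # recovers ~(0.8, 0.2, 0.4)
```

The phase x drops out entirely because cos² + sin² = 1, which is why the TV only needs the colorburst to know where x = 0 is.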
Bregalad wrote:
One thing I still don't understand is how colors are decoded. One can filter a 3.5 MHz signal easily and detect whether or not there is color information, but how can you tell WHICH color it is? I'm pretty sure amplitude modulation is done for saturation (unused on the NES) and phase modulation for the color, but it should be really hard to know the phase of the signal to tell if it's blue, red, or whatever color?
We are told a reference phase at the beginning of every scanline. Doing this QAM demodulation gets us the I and Q parts of YIQ. (PAL uses YUV, where U is blueness and V is redness.)
*edit: removed lie -- NTSC's reference phase is neither pure I nor Q. (wtf?)
I always thought NTSC uses YUV, and YIQ is only used for filtering.
psycopathicteen wrote:
I always thought NTSC uses YUV, and YIQ is only used for filtering.
I'm fairly certain PAL uses YUV and NTSC uses YIQ.
It doesn't really matter, it's the same exact colorspace anyway, except YIQ is rotated a little bit from YUV.
The only difference is that U and V are allocated the same bandwidth, but I and Q aren't.
What is the reference phase? I thought it was green-yellow but lidnariq said it was orange-red.
What color does NES send out for colorburst?
psycopathicteen wrote:
What is the reference phase? I thought it was green-yellow but lidnariq said it was orange-red.
What color does NES send out for colorburst?
I was wrong. I assumed (oops) that clearly the colorburst was the in-phase component, and so I, but it's not; instead it's some other random angle, neither I nor Q. The NES sends color $x8, which is mostly yellow.
lidnariq wrote:
psycopathicteen wrote:
What color does NES send out for colorburst?
I was wrong. I assumed (oops) that clearly the colorburst was the in-phase component, and so I, but it's not; instead it's some other random angle, neither I nor Q.
But because I and Q are merely U and V rotated by a phase offset, they appear to be two different ways to decode the same thing.
Quote:
The NES sends color $x8, which is mostly yellow.
In fact, as I understand it, $x8 is yellow by definition because it's pure -U.
This stuff reminds me of how CGA displays colors in composite video mode, it just uses the 640x200 monochrome mode, and certain dot patterns become colors.
Now I'm curious; how different would the NES's video signal look if it output actual sine waves, instead of square waves?
dwedit: Those colors look like they're just decoded using a lookup table or something. Is there something more to it than that?
Edit: I just looked it up on Wikipedia, those are all NTSC artifacts? Pretty crazy!
My god, I can't believe how complicated all of this is. It must be one of the most complicated things ever, right behind second order differential equations with multiple variables.
But I can definitely draw a conclusion: people who claim NES graphics were designed with the NTSC filter / composite artifacts in mind ARE WRONG, because it's simply IMPOSSIBLE to keep all this in mind when you are making sprites or graphics in general.
At best people corrected things that appeared wrong on screen with the composite artifacts, but they would NEVER design graphics with all of this in mind.
Or they could have used graphics editors that ran on the NES using a TV, so you could see what it looks like, then just tweak it until it looks better.
U and V can be thought of as the amplitude of the sine wave at different instances of time.
Dwedit wrote:
This stuff reminds me of how CGA displays colors in composite video mode, it just uses the 640x200 monochrome mode, and certain dot patterns become colors.
That's luma interference in the chroma domain. CGA outputs at exact multiples of the NTSC chroma frequency, and there's no phase alternation per scanline like the NES/SNES/TG16 do (and the Genesis doesn't). The CoCo 3 can do the exact same effect as CGA composite output tricks (you can actually turn off the subcarrier output but leave the colorburst on, so the signal is B&W over a color composite signal; that's important). Newer TV sets have better filters and actually look for the alternating chroma phase to pull out higher-res luma and cleaner chroma (some use a whole frame-to-frame difference, some use just 3 scanlines).
If you have a good TV with component input, you can plug the composite output of the NES into just the Y/green input. This'll let you see the chroma dot pattern due to the per-line phase alternation. It's pretty interesting to plug in a Genesis by comparison (no phase shifting of the colorburst and following-line subcarrier phase, thus no dots - just long vertical lines).
My standard definition CRT TV can see 7.16 MHz and up to 10.74 MHz resolution on the Y channel of a composite output and actually pull the detail out of it - something my old PCI TV capture card can't even do (7.16 MHz Y is twice the chroma subcarrier and chokes my capture card). I was really surprised (though it really relies on the alternating chroma dot pattern from line to line to help get back that resolution).
Bregalad wrote:
At best people will correct things if they appear wrong on the screen with the composite artifacts...
That's what I meant
qbradq wrote:
Bregalad wrote:
At best people will correct things if they appear wrong on the screen with the composite artifacts...
That's what I meant
I also mean that when I say that developers considered NTSC artifacts when drawing graphics for games. I don't think people calculated the exact effects on their minds, but they probably learned to predict some of the effects from experience, and that after seeing how things looked they tweaked them to make everything look better.
Case in point: The Sega Genesis (also called Mega Drive in some markets) has 228 subcarrier cycles per line, and each pixel in 320px mode is 8/15 of a subcarrier cycle wide. Vertical lines alternating 1px light and dark, which are incredibly common in Genesis games (look in any waterfall in any Sonic game), show up as rainbow stripes. It'd be dead simple to hide the rainbow banding by using checkerboard dithering instead of vertical stripes. So I imagine that artists added rainbow banding on purpose in order to increase the number of apparent colors.
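tepples' Genesis numbers can be checked in a couple of lines (the arithmetic is mine): a 1px light/dark stripe pattern in 320px mode repeats every 2 pixels, which puts its fundamental just below the subcarrier, squarely inside the chroma band, so the TV decodes it as color.

```python
# Why 1px vertical stripes on the Genesis turn into rainbows: their
# fundamental frequency lands next to the 3.58 MHz color subcarrier.
from fractions import Fraction

fsc = Fraction(315, 88)        # NTSC subcarrier, MHz
pixel = Fraction(8, 15)        # Genesis 320px pixel width, subcarrier cycles
pixel_clock = fsc / pixel      # ~6.71 MHz
stripe_freq = pixel_clock / 2  # 1px on/off pattern has a 2-pixel period

print(f"stripe fundamental: {float(stripe_freq):.3f} MHz "
      f"vs subcarrier {float(fsc):.3f} MHz")
```

The stripe fundamental comes out to 15/16 of the subcarrier (~3.36 MHz), close enough that it demodulates as a slowly shifting hue.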
No need to imagine; the MD is rife with games that have vertical band-only dithering: Socket, Vectorman, etc, etc.