Was it to improve composite picture quality? If it was, they should've had a missing half cycle every frame instead, so that artifacts canceled out entirely.
To improve it further, each scanline would have been 341.25 pixels (1365 master clocks) long, with the extra master clock not inserted during each of the three lines of vsync. But that would probably have required more logic (therefore more expensive die area and/or more circuit golf) than what they ended up doing.
It's definitely there to counteract composite artifacts; I can't think of any other valid reason.
There's a good thread on the subject here, though it's mostly centered around how to disable the "feature", so I'm not sure whether it adds anything of value:
https://shmups.system11.org/viewtopic.php?f=6&t=61285
tepples wrote:
To improve it further, each scanline would have been 341.25 pixels (1365 master clocks) long, with the extra master clock not inserted during each of the three lines of vsync. But that would probably have required more logic (therefore more expensive die area and/or more circuit golf) than what they ended up doing.
Well, I think frame cancellation is more important than line cancellation because you don't lose any information for static screens.
The NES PPU's composite output could have been improved far more by separating chroma from luma signal generation like on the Commodore 64. Not only would this have allowed for separated video output, it would have allowed filtering the chroma signal to the appropriate narrow bandwidth before combining it with the luma signal. The reason why the NES' dot crawl is so bad is mostly not the non-standard line rate, but the fact that there are chroma components above 0.6 MHz that should not be there. I can post simulated images later of how the NES would look at its usual line rate but with proper chroma signal filtering during signal generation.
Really it should just have had native RGB output like almost every other console :\
NewRisingSun wrote:
there are chroma components above 0.6 MHz that should not be there. I can post simulated images later of how the NES would look at its usual line rate but with proper chroma signal filtering during signal generation.
Trouble is that would have been more expensive in 1983.
A bit perhaps, but not much. Commodore did it with their VIC-II chip in the Commodore 64, and Commodore certainly were no-frills minded. In any case, here are the simulated pictures I mentioned earlier:
Chroma signal not filtered before combining with luma signal (i.e. normal NES), single field:
Attachment: UnfilteredChroma-SingleField.png
Chroma signal not filtered before combining with luma signal (i.e. normal NES), two merged fields:
Attachment: UnfilteredChroma-MergedFields.png
Chroma signal low-pass filtered to a 0.6 MHz (later edited to 1.5 MHz) bandwidth before combining with luma signal (i.e. hypothetically improved NES), single field:
Attachment: FilteredChroma-SingleField.png
Chroma signal low-pass filtered to a 0.6 MHz (later edited to 1.5 MHz) bandwidth before combining with luma signal (i.e. hypothetically improved NES), two merged fields:
Attachment: FilteredChroma-MergedFields.png
How would the four look for a hypothetical NES with a 341.25-dot line? No different from a Wii Virtual Console?
Like this:
Chroma signal not filtered before combining with luma signal (i.e. normal NES), single field:
Attachment: UnfilteredChroma-SingleField-180.png
Chroma signal not filtered before combining with luma signal (i.e. normal NES), two merged fields:
Attachment: UnfilteredChroma-MergedFields-180.png
Chroma signal low-pass filtered to a 0.6 MHz (later edited to 1.5 MHz) bandwidth before combining with luma signal (i.e. hypothetically improved NES), single field:
Attachment: FilteredChroma-SingleField-180.png
Chroma signal low-pass filtered to a 0.6 MHz (later edited to 1.5 MHz) bandwidth before combining with luma signal (i.e. hypothetically improved NES), two merged fields:
Attachment: FilteredChroma-MergedFields-180.png
As you can see, the merged fields would be almost identical between the filtered and unfiltered chroma scenarios in the 341.25 pixels-per-line case, with the unfiltered chroma picture being slightly softer. The filtered chroma signal would still be preferable because the individual field is cleaner, and there would be fewer crawling pixels during scrolling (which I personally find to be the most annoying aspect of the NES' composite output).
Thanks. Halting the PPU and CPU for one master clock on all lines except the 3 sync lines looks like a winner. Famiclone engineers, please take note.
But the chrominance bandwidth isn't supposed to be 0.6 MHz, it's supposed to be 1.5 MHz...
No, not in NTSC. You must be thinking of SMPTE-170M, which came out in 1994 (if I remember correctly), and only applies to "studio applications". 1953 NTSC calls for 1.5 MHz I and 0.6 MHz Q, and since no receiver after 1953's RCA CT-100 goes the extra mile to recover the wideband I signal, chrominance is effectively reduced to 0.6 MHz for both in-phase and quadrature signals. Re-checking my source, given my Gaussian kernel size, I'm actually filtering to 1.78 MHz and not 0.6 MHz as stated, so real decoders will look a bit worse.
Either way, by generating the signal the way it does, the NES generates luminance and chrominance both at 5.37 MHz, which is not as it should be, and is the reason for most of the visible dot crawl.
I don't think I believe that it could possibly be as low as 600kHz.
That's worse than VHS chrominance bandwidth.
Unfortunately, I don't have access to any TVs more than 20 years old to do tests on. :/
But that's what the 1953 NTSC standard calls for:
Code:
Q-channel bandwidth
at 400 kc less than 2 db down
at 500 kc less than 6 db down
at 600 kc at least 6 db down
I-channel bandwidth
at 1.3 mc less than 2 db down
at 3.6 mc at least 20 db down
In a 4.2 MHz television channel, only one of the sidebands of the wideband I channel is kept, so unless you do single-sideband demodulation for the high-frequency components of the I signal, there is no way around the 0.6 MHz limit. VHS has 0.4 MHz of chrominance bandwidth, by comparison.
Note that these limitations only apply to 4.2 MHz bandlimited composite signals, and they provide the mathematical basis for "RF quality".
A baseband composite signal on the other hand, where the whole signal can go up as high as 14 MHz or more, can of course retain a chrominance bandwidth up to the subcarrier frequency itself, though no TV receiver will demodulate chrominance components above 1.5 MHz.
Everything I'd read stated that demodulators most often demodulated too much into Q, not failed to demodulate content in I.
If you are demodulating Q at 1.5 MHz for an RF signal, you are in fact demodulating too much Q, according to the numbers I quoted.
As I said, the difference is mostly relevant for RF signals, which is all the original 1953 NTSC standard mentions. For baseband composite signals, you can do everything at 1.5 MHz according to SMPTE 170M (as I have done in my test pictures) as you stated, and everything should be fine.
Right, that was my point.
My understanding was that there was a brief time at the dawn of NTSC when true separately bandlimited YIQ decoding was the only thing done, but it was one of the first things to be removed in the name of cost reduction. By the 1980s, my understanding is that more-or-less everyone had switched to equal-bandwidth YUV decoding instead.
And that at no point had they done YUV or YIQ demodulation with only 600kHz for the entire chrominance signal altogether.
Mind, if we can find a sufficiently old TV showing such abominable chrominance bandwidth, I will go find a hat to eat.
Let me simulate a few images demonstrating the various decoding bandwidths...
RF video, overall 4.2 MHz bandwidth, equiband YUV decoding at 0.6 MHz.
Attachment: RF-YUV0.6MHz.png
RF video, overall 4.2 MHz bandwidth, equiband YUV decoding at 1.5 MHz.
Attachment: RF-YUV1.5MHz.png
RF video, overall 4.2 MHz bandwidth, 1953 NTSC YIQ decoding with I at 1.5 MHz and Q at 0.6 MHz. This one looks really weird in the yellow-green bushes, because the unequal bandwidths result in funky transitory colors that don't appear with equal bandwidths, even when they're low.
Attachment: RF-YIQ.png
Baseband composite video, unrestricted overall bandwidth, equiband YUV decoding at 1.5 MHz:
Attachment: Composite-YUV1.5MHz.png
Baseband composite video, unrestricted overall bandwidth, equiband YUV decoding at 3.5 MHz:
Attachment: Composite-YUV3.58MHz.png
The last one is never used, because as you can see, the high chroma bandwidth basically steals all the luma detail, and so is pointless.
Stealthy Mario!
I have to say, the way that the coin disappears in the status line, the way the title box blurs off the sides, and the extent to which it's hard to see Mario in front of a green bush make me skeptical that 600kHz bandwidth was ever used for I.
Did you know that in the 1950s, network transmissions to local TV stations were limited by Bell System's L1 coaxial cable to 2.7 MHz bandwidth for the entire video signal? "By heterodyning the subcarrier and its adjacent side-bands to a lower frequency, within the cable passband, before transmission, and heterodyning them back to the standard values after transmission, and by cutting off the luminance signal at a frequency low enough to avoid interference with the subcarrier and its side-bands while the signal is on the cable", the picture must have looked like stew in the sun. I don't know enough about the details of that process, otherwise I'd simulate it as well...
Wikipedia says that color NTSC was only adopted in 1953; I wouldn't be surprised if, even though the remodulation to transmit it over an L1 line could support chrominance, almost no-one saw it.
Wikipedia seems to have an article about L1? Looks like it's "just" baseband?
How many pixels wide are the 0.6 MHz and 1.5 MHz filters?
I don't know if any composite system ever used this, but I've been experimenting with filters and I discovered if you scale the image to 512x448, make a non-bandlimited chroma signal, then lowpass filter it with a {.333, .333, .333} filter, then sharpen it with a {-.25, 0, 1.5, 0, -.25} filter, then subtract the image from the original chroma signal, it comes out surprisingly clear.
I don't think there is any way they could've made a 320x224 Sega Genesis game on composite or RF look clear without a frame delay comb filter.
0.6 MHz is close to 1/6 subcarrier, or 9 pixels per period.
1.5 MHz is close to 5/12 subcarrier, or 3.6 pixels per period.
But are the "0.6 MHz" pictures above actually using 0.6 MHz bandwidth (3.3-3.9 MHz), or 0.6 MHz deviation from the subcarrier's nominal frequency (3.0-4.2 MHz)? The difference matters when using Nyquist's theorem to translate bandwidth of a QAM carrier to usable pixels.
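Putting numbers on the pixels-per-period arithmetic above (a quick Python check; "pixels" here means NES dots at the ~5.37 MHz dot clock):

```python
fsc = 315e6 / 88     # NTSC color subcarrier, ~3.579545 MHz
dot = 1.5 * fsc      # NES dot clock: 1.5 dots per subcarrier cycle

for bw in (0.6e6, 1.5e6):
    print(f"{bw/1e6} MHz = {bw/fsc:.3f} of subcarrier, "
          f"{dot/bw:.1f} dots per period")
```

This reproduces the approximations above: 0.6 MHz is about 1/6 of the subcarrier (~9 dots per period), and 1.5 MHz is about 5/12 (~3.6 dots per period).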
tepples wrote:
0.6 MHz deviation from the subcarrier's nominal frequency (3.0-4.2 MHz)?
This. The 0.6 MHz limitation comes from limiting the signal at 4.2 MHz, or 3.6+0.6 MHz. Which is why I actually don't understand how a TV, even if it tried to, could get 1.5 MHz chroma bandwidth from an RF-modulated NES signal, because the upper sideband will be partially lost. Unless the NES' RF modulator does not actually limit the signal to 4.2 MHz, that is...
Basically, I use a product detector for demodulation: multiply the composite signal with two carriers (inverted burst and inverted burst plus 90 degrees), Gaussian filter the multiplied signal to the assumed chroma bandwidths, then subtract the filtered chroma signals from the composite signal to get luma. Working at four times subcarrier (14.32 MHz), I use a Gaussian width of 4.78 for 1.5 MHz (14.32/1.5/2) and a width of 12 for 0.6 MHz (14.32/0.6/2). You tell me if that's wrong. For the overall bandwidth, I am filtering the recovered luma signal either not at all for unrestricted baseband, or with a Gaussian width of 1.70 for 4.2 MHz (14.32/4.2/2). I should be doing this at the very beginning, since the 4.2 MHz cap is done by the RF modulator. But if I do that, I remove not only fine chroma detail, but a lot of chroma amplitude as well, so this may hint at an error in my method.
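The product-detector recipe above can be sketched in Python. This is an interpretation, not NewRisingSun's actual code: it treats "width" as the Gaussian standard deviation in samples, folds the burst inversion into the carrier sign convention, and uses a plain cos/sin quadrature pair.

```python
import numpy as np

FS = 4 * 315e6 / 88      # working rate: four times the NTSC subcarrier
FSC = 315e6 / 88         # color subcarrier, ~3.579545 MHz

def gaussian_kernel(sigma):
    """Gaussian lowpass kernel, truncated at 4 sigma and normalized."""
    n = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    k = np.exp(-0.5 * (n / sigma) ** 2)
    return k / k.sum()

def product_detect(composite, sigma=4.77):
    """Product detector: mix with quadrature carriers, lowpass, subtract."""
    ph = 2 * np.pi * FSC / FS * np.arange(len(composite))
    k = gaussian_kernel(sigma)
    # mix with the two carriers; the factor 2 recovers unit amplitude
    i = np.convolve(composite * 2 * np.cos(ph), k, mode="same")
    q = np.convolve(composite * 2 * np.sin(ph), k, mode="same")
    # remodulate the filtered chroma and subtract it from the composite
    luma = composite - (i * np.cos(ph) + q * np.sin(ph))
    return luma, i, q
```

With sigma ≈ 4.77, the Gaussian's response at the subcarrier and at twice the subcarrier is negligible, so the double-frequency mixing products are removed and a test signal Y + I·cos + Q·sin comes back as its three components (away from the array edges).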
psycopathicteen wrote:
{.333, .333, .333} filter, then sharpen it with a {-.25, 0, 1.5, 0, -.25} filter
Because those are both FIR filters, you can just convolve the two together:
[-1/12 -1/12 5/12 1/2 5/12 -1/12 -1/12]
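The combined kernel above is easy to verify with numpy (a quick sanity check, nothing more):

```python
import numpy as np

box = np.array([1.0, 1.0, 1.0]) / 3                # the {.333, .333, .333} lowpass
sharpen = np.array([-0.25, 0.0, 1.5, 0.0, -0.25])  # the sharpening kernel
combined = np.convolve(box, sharpen)

expected = np.array([-1, -1, 5, 6, 5, -1, -1]) / 12  # the stated 7-tap result
print(np.allclose(combined, expected))  # True
```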
Performance of this one is ... kinda dubious.
The original boxcar filter puts two poles at the origin, and two zeroes at ±(Sample rate÷3) Hz. (In other words, exactly at the chroma modulation frequency)
Adding the extra sharpening filter puts a bunch of zeroes on the real axis (and more poles at the origin), causing a bit of gain just below the zero, probably there to compensate for the too-slow rolloff of the original one-zero notch filter.
Attachment: nn565nn-log.png
File comment (Octave):
% [mag,w]=bodemag(filt(conv([1/3 1/3 1/3],[-1/4 0 3/2 0 -1/4]),[1],11/3/39375000))
% loglog(w/2/pi,mag)
Constraining yourself to linear filtering makes this harder than it has to be: the actual TV isn't working purely on baseband luminance and modulated chrominance components. By working on demodulated chrominance, you get a "non-LTI" system that supports sharper filtering for cheaper.
—
NewRisingSun wrote:
Which is why I actually don't understand how a TV, even if it tried to, could get 1.5 MHz chroma bandwidth from an RF-modulated NES signal, because the upper sideband will be partially lost.
The upper sideband *is* partly lost. But the lower sideband is still present.
NTSC itself is modulated using a "vestigial" sideband, and it's not clear to me why the lower sideband is still broadcast. (But they kept it for ATSC, so there must be a good engineering reason. Makes it easier to synchronize to the carrier, I suppose?)
Quote:
Unless the NES' RF modulator does not actually limit the signal to 4.2 MHz, that is...
That's *also* true. (You can see this by looking at adjacent TV channels when the RF switch is on. Lots will have a dirty copy of the NES's video output from all the harmonics it's generating.)
I think the demodulator in the TV does limit what's decoded, even though the modulator doesn't.
I should remember to plug the RF output of my SNES into my oscilloscope and get a spectrum plot. It's no spectrum analyzer, but I bet even with crappy resolution I'll see a few interesting things.
(Or maybe someone else has a newer nicer 'scope and wants to beat me to it)
Quote:
You tell me if that's wrong.
Gaussians aren't "realistic" because they're noncausal, but I'm not certain what kind of actual filters were used.
I used a Gaussian filter because it was explicitly recommended in Section 7.3 of SMPTE-170M.
I still need a different filter to simulate the 4.2 MHz channel bandwidth cap (the one that the NES' RF modulator does not enforce). My normal Gaussian filter method won't work, because it's too gradual, taking almost all of the subcarrier with it, hence the desaturation. I would need basically a flat response from 0 to 4.2 MHz, then a narrow transition band from 4.2 to 4.5 MHz, with basically infinite attenuation at 4.5 MHz, all for a sampling frequency of 14.32 MHz.
The set of constraints you've lined up is going to be difficult. I think you're either going to need a quite long sinc FIR filter, a very high order IIR filter, or accepting some passband & stop-band ripple. Even a 10th order discrete-time butterworth lowpass (-3dB at 4.2MHz) only gets down to -12dB at 4.5MHz. Sharper filters (chebyshev, elliptic) help a little, but there's going to be a lot of group delay during the transition band.
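For what it's worth, the "quite long sinc FIR" option does meet the numbers. Here's a sketch under assumptions of my own choosing (801 taps, Blackman window, cutoff centered in the 4.2–4.5 MHz transition band; none of these parameters come from the thread):

```python
import numpy as np

FS = 14.32e6              # working sample rate, 4x subcarrier
N = 801                   # lots of taps: the 0.3 MHz transition band demands it
FC = 4.35e6               # cutoff centered between 4.2 and 4.5 MHz

n = np.arange(N) - (N - 1) / 2
h = 2 * FC / FS * np.sinc(2 * FC / FS * n)  # ideal lowpass impulse response
h *= np.blackman(N)                         # Blackman window: ~-74 dB stopband
h /= h.sum()                                # normalize DC gain to 1

def gain(f):
    """Magnitude response at frequency f, by direct DTFT evaluation."""
    return abs(np.sum(h * np.exp(-2j * np.pi * f / FS * np.arange(N))))

print(gain(0.0), gain(4.2e6), gain(4.5e6))
```

The catch is the 400-sample group delay; an IIR design (Chebyshev, elliptic) reaches the same magnitude spec far cheaper, at the cost of the nonlinear phase in the transition band mentioned above.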
What filter is in an actual TV's IF section to separate the AM video from the FM audio?
It looks like you stumped everybody. I wish there were an easy way of calculating Butterworth and Chebyshev filters in the digital domain.
I have literature on both RF transmitters and television receivers from the mid-1950s, when color NTSC was introduced. I don't think that's pertinent though, because it's too old.
As for filters, most materials on that subject, on-line or in print, expect way too many things to be known by the reader, are gratuitously formalistic, at the same time offensively imprecise in terminology and prose, and generally are not suited for people who just want to skim trying to find what they need. Of course, I say that about pretty much any mathematical subject, so who am I to complain.
Unfortunately, the vast majority of the service manuals and schematics I can find are either from the very early days of TV, when everything was valve-based and monochrome, or from after the VLSI revolution, when TVs do all the interesting signal processing inside a monolithic encapsulated silicon die.
Doing my best to read the valve-based schematic for the Predicta 10L4x chassis, I think it just completely ignores the modulated audio signal altogether and relies on the temporal and spatial bandwidth to hide any crosstalk.
I think the easiest way would be to make a quantized version of the frequency gain spectrum you want, and then do a discreet cosine transform.
In English, discrete (easily distinguished; separate in time or space) and discreet (inconspicuous) mean different things. Even though they come from the same root.
Anyway, you've basically just described generic FIR filter design. The DCT isn't particularly good for analysis either (instead of for storage, where it is).
tepples wrote:
What filter is in an actual TV's IF section to separate the AM video from the FM audio?
The vast majority of the designs I have seen use surface acoustic wave filters tuned to the particular IF frequencies used.
lidnariq wrote:
Anyway, you've basically just described generic FIR filter design. The DCT isn't particularly good for analysis either (instead of for storage, where it is).
I think the DCT would work because, for symmetrical FIR filters, the sine wave part of the Fourier transform can be ignored, and also every frequency higher than the Nyquist frequency would be ignored.
lidnariq wrote:
In English, discrete (easily distinguished; separate in time or space) and discreet (inconspicuous) mean different things. Even though they come from the same root.
psycopathicteen wrote:
I think the DCT would work because,
The DCT causes spectral smearing, relative to the Fourier transform. As such, it's inappropriate for analysis or filtering.
It's implemented in terms of the Fourier transform, by taking the original input and duplicating it once to produce a symmetric input for analysis. When you take a copy of the original input, reverse it, and append that to the original, that changes the spectral content.
The phase information that was present in the FFT still has to be stored somewhere, and it changes the magnitudes of the specific bins of the result of the DCT.
Quote:
for symmetrical FIR filters the sine wave part of the Fourier transform can be ignored
A symmetric (specifically "even") FIR filter does have a purely real Fourier transform, and that does mean that the real and imaginary components of the filter input remain in the same parts after multiplication, but the imaginary part is still affected by the filtering.
Quote:
and also every frequency higher than the nyquist frequency would be ignored.
That's true regardless. The Fourier transform of a purely-real input is necessarily symmetric¹, so everything above half the window width is going to be a duplicate of the bottom half. There is nothing actually "above" the Nyquist frequency there; it's just the result incorrectly plotted over k∈[0,length) instead of k∈(-length÷2,+length÷2].
¹pedantic note: the fourier transform of a real input has an even (mirror symmetry) real component and an odd (180° symmetry) imaginary component.
psycopathicteen wrote:
What's the reason for the missing PPU cycle on even frames?
Was it to improve composite picture quality?
Mathematically, they seem to have aimed at making every two frames an exact multiple of the NTSC 3.579545 MHz color clock (this seems to have been common practice on older pre-PSX systems like the C64, NES, and SNES).
I am not aware of a good reason for doing that, but they probably believed it would improve the picture quality, although I guess it's actually making things worse. And even if there were some reason for the short scanline: the thread at https://shmups.system11.org/viewtopic.php?f=6&t=61285 says that the short scanline is located right before the first visible scanline, which is really bad (to avoid jittering, it should be located after the last visible line, so the TV has enough time to resync itself to hsync).
psycopathicteen wrote:
If it was, they should've had a missing half cycle every frame instead, so that artifacts canceled out entirely.
No! That would highlight the artifacts (they would always be drawn at the same static locations in all frames, which would make them more visible). Probably Nintendo actually wanted to do exactly that, and (fortunately) considered it too difficult to implement "half cycles".
Here are some maths from everynes.htm on the color clock:
Code:
Clock Speeds
Type NTSC PAL Dendy
Master Clock (X1) 21.47727MHz 26.6017125MHz Like PAL
Color Clock 3.579545MHz=X1/6 4.43361875MHz=X1/6 Like PAL
Dot Clock 5.3693175MHz=X1/4 5.3203425MHz=X1/5 Like PAL
CPU Clock 1.7897725MHz=X1/12 1.66260703MHz=X1/16 1.773448MHz=X1/15
Frame Rate 60.09914261Hz 50.00697891Hz Like PAL
Modulator vs PPU Timings
Type NTSC PAL Dendy
Dots per Color Clk 1.5 (6/4) 1.2 (6/5) 1.2 (6/5)
Color Clks per Scanline 227.3333 284.1666
Color Clks per One Frame 59561.33 88660.0
Color Clks per Other Frame 59560.66 88660.0
Color Clks per Two Frames 119122.0 177320.0
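The NTSC figures in the table above can be reproduced exactly with rational arithmetic (a quick check; this assumes 341 dots x 262 lines, with one dot dropped every other frame):

```python
from fractions import Fraction as F

master = F(945, 44)                 # NTSC master clock in MHz (21.477272...)
color = master / 6                  # 3.579545... MHz color clock
dot = master / 4                    # 5.369317... MHz dot clock

clks_per_line = 341 * color / dot   # 341 dots/line, 1.5 dots per color clock
one_frame = clks_per_line * 262     # 59561.33... color clocks
other_frame = one_frame - F(2, 3)   # the dropped dot is 2/3 of a color clock
two_frames = one_frame + other_frame
print(float(clks_per_line), float(two_frames))
```

Two frames come out to exactly 119122 color clocks, matching the table, while a single frame is a non-integer 59561⅓.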
For PAL/NES they've successfully achieved perfect ugliness with exactly 88660.0 color clocks per frame.
For NTSC/NES it's 119122.0 color clocks per TWO frames, which appears better than PAL; without the short scanline it would be 3x59561.33 = 178684.0 clocks per THREE frames, which might look even better (with slight flickering, but nearly invisible artifacts, i.e. picture quality about as good as on PSX consoles).
As far as I know, one could disable the short scanline effect by disabling BG and OBJ (somewhere during vblank, and enable them when the first scanline gets drawn). I don't have a NTSC/NES console myself... but it might be interesting to patch some commercial/homebrew game to see if it looks better/worse with/without short scanlines (might be a bit difficult since NES doesn't have too much support for scanline IRQs, and stuff like Sprite0 Hit likewise won't work when having BG and OBJ disabled).
I hear that Zelda 2 and Dragon Warrior 2 are heavily affected by the artifacts, so I'd think those would be two games to especially compare on hardware.
nocash wrote:
As far as I know, one could disable the short scanline effect by disabling BG and OBJ (somewhere during vblank, and enable them when the first scanline gets drawn).
IIRC, Battletoads is a game that keeps rendering disabled at the beginning of the frame, so that's one commercial game you can check out without any modifications (although there will be nothing to compare it to).
Quote:
I don't have a NTSC/NES console myself... but it might be interesting to patch some commercial/homebrew game to see if it looks better/worse with/without short scanlines (might be a bit difficult since NES doesn't have too much support for scanline IRQs, and stuff like Sprite0 Hit likewise won't work when having BG and OBJ disabled).
If you can consistently cause a sprite 0 hit or a sprite overflow *anywhere* on the screen, you can detect the end of vblank by waiting for the flag to be cleared, which happens at the beginning of the pre-render scanline. Then you just wait a little to enable rendering after the frame starts rendering.
Quote:
No! That would highlight the artifacts (the artifacts would be always drawn at the same static locations in all frames, which would make the artifacts more visible). Probably Nintendo actually wanted to do right that - and (fortunately) considered it too difficult to implement "half cycles".
Actually, you're right. In order for there to be 180-degree phase cancellation, it would need an extra 1/4 cycle each frame.
On a well-formed 480i NTSC signal, there are exactly 227.5 × 262.5 = 59718.75 chroma cycles per vertical refresh, so it actually takes 4 fields for the chroma phases to cancel out again.
With a 240p signal, it's a bit harder. The most frequently used and most problematic approach was 228 chroma cycles per scanline (CGA, Apple II, Master System, Genesis). Crosstalk artifacts occur at fixed X positions on every scanline, but aren't a function of which scanline. This produces a stable image, but very visible artifacts if there's any movement. As nocash says, "perfect ugliness".
The simplest thing to do would be to have the right hsync rate – 227.5 chroma cycles per scanline – with some odd number of scanlines.
In fact, the NTSC VIC-20 does. It uses a pixel clock of 260÷227.5 times the chroma carrier, and generates 261 total scanlines, for a total of 59377.5 chroma cycles per vertical refresh, and thus both every scanline and every field is exactly opposite the phase of the previous.
The NTSC C64 apparently came with two revisions of the VIC-II. Both used an 8 2/11 MHz pixel clock, but one apparently used 224 chroma cycles per scanline and 262 scanlines (...?), generating stable but highly-visible artifacts; the later revision generated 227.5 chroma cycles per scanline and 263 scanlines, achieving the same objectives as the VIC-20.
(see also) The NTSC NES and SNES both start with 227⅓ chroma cycles per scanline. The video generated by the NES dramatically exceeds the chrominance bandwidth that would be transmitted over-the-air, and they decided that having the chroma interference pattern repeat every 3 fields wasn't desirable. By removing one pixel = 2/3 of a chroma cycle every other vsync, they got something close to the desired 30Hz stability.
How does that compare to my previous suggestion of 227.5x262, but drop one-sixth of a chroma cycle (thus 227 1/3) during each of the three lines of vsync?
I'd say it's easier to just do the same as the VIC-20 or later C64s and have an odd total number of scanlines...
I mean, either way, you get a 180° phase shift from scanline to scanline and also from field to field, which is the only metric I've thought of for evaluating this...
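The per-field phase metric in the last few posts can be checked with exact fractions. The scheme list is my summary of what's been discussed; a ½-cycle offset means every field inverts the artifact pattern, while ⅓ or ¾ means a 3- or 4-field repeat.

```python
from fractions import Fraction as F

def field_phase(chroma_cycles_per_field):
    """Fractional chroma phase (in cycles) by which the next field is offset."""
    return chroma_cycles_per_field % 1

schemes = {
    "480i NTSC (227.5 x 262.5)":            F(455, 2) * F(525, 2),
    "CGA/SMS/Genesis (228 x 262)":          F(228) * 262,
    "VIC-20 (227.5 x 261)":                 F(455, 2) * 261,
    "NES without short dot (227 1/3 x 262)": F(682, 3) * 262,
    "341.25-dot line (1365*262-3 clocks /6)": F(1365 * 262 - 3, 6),
}
for name, cycles in schemes.items():
    print(name, field_phase(cycles))
```

Both the VIC-20 scheme and the 341.25-dot proposal land on exactly a half-cycle offset per field, the 228-cycle machines land on zero (stable artifacts), and the unmodified NES line rate lands on ⅓ (3-field repeat).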
Well, at least the NES' designers weren't as lazy as the Apple II's, which uses an integer number of color cycles per scanline. This makes cross-color artifacts consistent from line to line and field to field. In fact, since the pixel clock is also a multiple of the color subcarrier frequency, the cross-color artifacts are all the color there is; the Apple II itself does not generate any color at all, other than the color burst in the back porch!
I would not call Steve Wozniak lazy when his design for the Apple II's mainboard (almost totally a one-man effort) was the first practical, consumer-affordable solution for displaying color graphics on regular TVs without specialized hardware. It was a cool hack of the NTSC decoding system that was sufficiently reliable and cheap to use for the first color consumer computer.
However, the Tandy CoCo doesn't have the excuse of coming first. The CoCo's composite artifact colors are random when you boot the computer: "pixel 0" may be orange or blue depending on where in the clock cycle the system starts up, and you would have to reset the system until you got the "right" colors. That doesn't happen on an Apple II: you know even lines will be purple/blue dominant and odd lines green/orange dominant.
You can think of the four bits in a chroma period of the Apple II's (or CGA's) output as literally being the four samples [Y+U] [Y+V] [Y-U] [Y-V] (or maybe the reverse). Doing so even makes it easy to see why you get two identical greys out of the Apple II's or CGA's artifact colors: they're the patterns where +U and -U are the same and +V and -V are the same.
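That reading of the bit positions as quadrature samples can be enumerated directly. Under this (assumed) model, exactly two of the sixteen 4-bit patterns decode to the same mid-grey:

```python
from itertools import product

greys = []
for a, b, c, d in product((0, 1), repeat=4):   # one chroma period, 4 bits
    y = (a + b + c + d) / 4                     # average of the samples = luma
    u = (a - c) / 2                             # opposite samples = chroma axes
    v = (b - d) / 2
    if u == 0 and v == 0:                       # no chroma -> a grey
        greys.append(((a, b, c, d), y))
print(greys)
```

Four patterns come out grey (0000, 0101, 1010, 1111), and the middle two (0101 and 1010) both decode to the same Y = 0.5, the "two identical greys" noted above.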
lidnariq wrote:
On a well-formed 480i NTSC signal, there are exactly 227.5 × 262.5 = 59718.75 chroma cycles per vertical refresh, so it actually takes 4 fields for the chroma phases to cancel out again.
With a 240p signal, it's a bit harder. The most frequently used and most problematic approach was 228 chroma cycles per scanline (CGA, Apple II, Master System, Genesis). Crosstalk artifacts occur at fixed X positions on every scanline, but aren't a function of which scanline. This produces a stable image, but very visible artifacts if there's any movement. As nocash says, "perfect ugliness".
The simplest thing to do would be to have the right hsync rate – 227.5 chroma cycles per scanline – with some odd number of scanlines.
In fact, the NTSC VIC-20 does. It uses a pixel clock of 260÷227.5 times the chroma carrier, and generates 261 total scanlines, for a total of 59377.5 chroma cycles per vertical refresh, and thus both every scanline and every field is exactly opposite the phase of the previous.
The NTSC C64 apparently came with two revisions of the VIC-II. Both used an 8 2/11 MHz pixel clock, but one apparently used 224 chroma cycles per scanline and 262 scanlines (...?), generating stable but highly-visible artifacts; the later revision generated 227.5 chroma cycles per scanline and 263 scanlines, achieving the same objectives as the VIC-20.
(see also) The NTSC NES and SNES both start with 227⅓ chroma cycles per scanline. The video generated by the NES dramatically exceeds the chrominance bandwidth that would be transmitted over-the-air, and they decided that having the chroma interference pattern repeat every 3 fields wasn't desirable. By removing one pixel = 2/3 of a chroma cycle every other vsync, they got something close to the desired 30Hz stability.
The 64-clock NTSC VIC-II was a mistake, and only the very early revision NTSC 64s had the chip. Most of those were the "sparkle" C64s, so most of those chips are gone (the VIC-II was replaced when the CHARGEN was replaced). 65 clocks is the standard.
Did people use to use TVs as computer monitors a lot? It's weird how many old computers have composite video.
VGA connectors didn't exist until 1987. Plenty of actual computer monitors used composite prior to that.
Yeah, before the VGA card, in the US, there wasn't really a widespread standard video connector other than baseband CVBS via RCA jack. Even S-Video on the MiniDIN-4 was introduced at the same time as VGA - 1987.
I find that strange because composite requires more circuitry.
So does modulating to RF. It's not about complexity of the generator; it's about being able to take advantage of the monitors (televisions) that people already had rather than requiring that they buy a new monitor.
Correct. A composite output for use with the composite monitor that one already owns is less circuitry than an RGB output and a bespoke monitor.
Ninja'd twice while trying to post the following:
Before 1987: Most home computers came with composite output of some sort.
1987 through 2007: The dark ages of most people not knowing to connect a personal computer to a television. Because processing, storage, and network communication weren't yet up to the task of SD full-motion video, computers were seen as tools to display large amounts of text. It was possible to use scan converters to downscale VGA to composite or S-Video, but these were expensive and hard to find, and few PC applications were designed to work with standard definition. This gave the console makers a monopoly on the living room, apart from a minority of geeks, and large game publishers a cartel on single-screen multiplayer.
2007 to present: Most TVs come with HDMI input, which can display HDMI or DVI-D output. Many also come with VGA input. It became even easier in the fourth quarter of 2015, when Valve introduced the Steam Link thin client: the PC no longer has to be in the same room as the TV.
2012: The successful crowdfunding of the OUYA console shows demand for indie games on a TV, even if the end product underwhelms. Over the next few years, this causes console makers to open their developer programs to smaller studios in order to keep living room PCs from taking over.
psycopathicteen wrote:
Did people use to use TVs as computer monitors a lot? It's weird how many old computers have composite video.
Yes, almost as "the standard"; if you had a monitor, you were a rich kid. You are used to the modern "TVs cost nothing" world. Back in the late 70s and 80s, TVs were expensive, furniture-like items.
For example, a VIC-20 cost you $99, but a monitor for it would cost you $395. So if you had to have a monitor with the VIC-20, the computer cost you $494; if you could use the TV you already owned, it cost just $99.
By '88, it seems you could get a 1084 for $289, while a C128 would cost you $244. Meanwhile, you could get a C64 for $154 and use the TV you already had. You could add a 1802 to the 64 for a better picture for $164. That is basically the tipping point; by the time you get to the IIgs, the machine is $999 and a monitor is still $499.
A disk drive for the C64 was $164. So you could have a C64 + monitor, or a C64 + disk drive + game, for the same price. If you already had a TV, you would use it and take the disk drive.
I find it pretty funny how a lot of Europeans online say that skin colors on NTSC appear as either green or purple, yet I've never seen it happen on any of the TVs I had. If these Europeans actually saw North American TV sets, they must be either way older than me, or they watched TV with one of those people who just leave the tint control in a funny place.
Saturation and black levels were always my pet peeve with NTSC. They were always inconsistent from one channel to the next.
psycopathicteen wrote:
I find it pretty funny how a lot of Europeans online say that skin colors on NTSC appear as either green or purple, yet I've never seen it happen on any of the TVs I had.
I think this was common with VHS.
It might've happened with rabbit-ear antennas. Those always had lousy reception.
NTSC does have far worse color fidelity than PAL because...
- ... multipath reception in antennas and signal reflections in impedance-unmatched electrical connections will cause differential (brightness-dependent) phase shift. Differential phase errors cannot be removed by the viewer using the Hue control, as that only corrects linear phase errors; the beloved NES is a serious offender here. The PAL patent, on the other hand, explicitly mentions its ability to remove even differential phase errors, as my comparison of the PAL NES' output with Simple PAL versus Full PAL demonstrates.
- ... the very presence of a Hue knob, not only in monitors but also in video production equipment, meaning one more setting that can and will be improperly adjusted.
- ... a black level of 7.5 percent that was almost impossible to get right without an oscilloscope in the pre-digital days. Basically everyone else, including all PAL countries, uses a black level of zero.
- ... RGB primaries that were obsolete almost immediately after the standard was adopted yet never formally changed, leading to confusion among everyone about what the correct colorimetry is, and strange correction matrices in production and consumer equipment. For PAL, the European Broadcasting Union simply said in 1970: here are new primaries based on current technology, and here are permissible deviations: use them and shut up. Worked so well that they were used with minimal changes in HDTV (as Rec. 709).
- ... generally much more lax broadcast standards, and somewhat more lax equipment production standards, in NTSC regions (although that's not the fault of the system itself).
I would say though that the "green and purple faces" is mostly a stereotype, as none of the above points would distort the signal to such a great degree. And even if it did, the viewer would simply adjust his set.
Then again, if I look at some YouTube videos...
Quote:
... RGB primaries that were obsolete almost immediately after the standard was adopted yet never formally changed, leading to confusion among everyone about what the correct colorimetry is, and strange correction matrices in production and consumer equipment.
I wonder how many people would notice if the colorimetry was different without the correction matrices. If you can get a wider color gamut, you might as well show it off.
What makes it worse is that NTSC 4.43 is valid for a direct signal, and hence the NES and SNES could have given you NTSC 4.43 out of the back, which would solve a lot of the problems XD Still won't get you a proper red that doesn't bleed everywhere, but you can't have everything.
Would NTSC 4.43 work with every NTSC TV set?
Don't know. I mean, it's in the spec, and these are analogue devices, so if you change the chroma burst frequency it should "just work"; it's not digital, so it doesn't have a yes/no response to everything. I would think at worst, if the internal switching can only handle 3.58 MHz, you would still get the same picture you get now, just with the NES outputting more data. Might end up looking worse, though.
As I understand it, NTSC-only SDTVs are sensitive only to 3.58 MHz chroma.
NewRisingSun wrote:
RF video, overall 4.2 MHz bandwidth, equiband YUV decoding at 0.6 MHz.
Attachment:
RF-YUV0.6MHz.png
RF video, overall 4.2 MHz bandwidth, equiband YUV decoding at 1.5 MHz.
Attachment:
RF-YUV1.5MHz.png
RF video, overall 4.2 MHz bandwidth, 1953 NTSC YIQ decoding with I at 1.5 MHz and Q at 0.6 MHz. This one looks really weird in the yellow-green bushes, because the unequal bandwidths result in funky transitory colors that don't appear with equal bandwidths, even when they're low.
Attachment:
RF-YIQ.png
Baseband composite video, unrestricted overall bandwidth, equiband YUV decoding at 1.5 MHz:
Attachment:
Composite-YUV1.5MHz.png
Baseband composite video, unrestricted overall bandwidth, equiband YUV decoding at 3.58 MHz:
Attachment:
Composite-YUV3.58MHz.png
The last one is never used, because as you can see, the high chroma bandwidth basically steals all the luma detail, and so is pointless.
Does the first one have a 0.6 MHz filter on both the encoder and decoder sides? Also, the last one DOES look familiar; that's how my old TV looked.
Does anybody know the formula to convert the original NTSC primaries to sRGB? I remember someone showing a screenshot of the difference between the two standards, with the old format having boosted reds.
The formula depends on whether you just want to convert RGB primaries or also from the original Illuminant "C" white point to D65. Use the attached Excel file to calculate from any color space to any other color space. Remarks:
- "Linear light signals" are the signals you get when you remove gamma-pre-correction, i.e. LinearR = R^2.2, 0.0 <= R <= 1.0.
- "Gamma pre-corrected signals" are the normal RGB values you use. As the article referenced in the file explains, proper conversion requires linear light signals. When you use normal gamma pre-corrected signals, you must specify two chromaticity points for which the inevitably resulting errors will be minimized.
- The "Parker (original)" and "Parker (modified)" sheets differ in how white point differences are treated.
- The "Common settings" sheet contains a number of color spaces that the literature lists as being "common" for television sets of a particular era, as well as the color spaces defined by the various standards documents. "sRGB" uses "CCIR Rec. 709" primaries and white point D65.
- The "Gain/Angle" are the result of folding the NTSC YUV-to-RGB* and "Correction for gamma pre-corrected signals" matrices into one matrix and are the values you would enter for example into Nestopia's NTSC palette generator under "Advanced" (Nestopia for some reason requires Gain to be entered divided by two).
* NTSC is usually described as using YIQ rather than YUV, but it's just the same color space rotated by 33 degrees, and it's easier expressing things in terms of YUV because at least one value will at least nominally be at zero degrees.
Is this what I'm looking for?
Code:
Correction for gamma pre-corrected signals
R's G's B's V U Gain Angle
R't= 1.2903 -0.2705 -0.0198 R'-Y'= 1.628 0.066 1.629 87.7 °
G't= -0.0037 0.9516 0.0521 G'-Y'= -0.557 -0.269 0.618 244.2 °
B't= 0.0317 -0.1982 1.1665 B'-Y'= 0.151 2.445 2.450 3.5 °
No, you are converting approximately and not exactly. To convert exactly, convert to linear light signals and use the formula for linear signals:
Code:
// Convert from 1953 NTSC (with D65 white point) to sRGB color space
float R = (r>=0.0812)? pow((r+0.099)/1.099, 1.0/0.45): r/4.500;
float G = (g>=0.0812)? pow((g+0.099)/1.099, 1.0/0.45): g/4.500;
float B = (b>=0.0812)? pow((b+0.099)/1.099, 1.0/0.45): b/4.500;
float newR = 1.4607*R -0.3845*G -0.0761*B;
float newG =-0.0266*R +0.9654*G +0.0612*B;
float newB =-0.0264*R -0.0414*G +1.0678*B;
if (newR<0.0) newR=0.0; if (newR>1.0) newR=1.0;
if (newG<0.0) newG=0.0; if (newG>1.0) newG=1.0;
if (newB<0.0) newB=0.0; if (newB>1.0) newB=1.0;
r = (newR>=0.018)? 1.099*pow(newR, 0.45)-0.099: 4.5*newR;
g = (newG>=0.018)? 1.099*pow(newG, 0.45)-0.099: 4.5*newG;
b = (newB>=0.018)? 1.099*pow(newB, 0.45)-0.099: 4.5*newB;
Notice the gamma conversion formula from SMPTE-170M in the first and last three lines. It is more involved than a standard power function.
I was thinking today about what would be the easiest digital bandpass filter for a system like the NES to implement, and I thought that a {-1/4, 0, 1/2, 0, -1/4} BPF sampled at 14.32 MHz would be one of the easiest digital filters, so I calculated the -3 dB "cutoff" frequency with my calculator and got:
1.3031574 MHz
It CAN'T be a coincidence that such a simple digital filter hits the 1.3 MHz bandwidth so closely.
Is it a coincidence that 1.3 MHz at -3 dB is the standard chroma bandwidth for NTSC, and it just so happens that a basic {-1/4, 0, 1/2, 0, -1/4} FIR filter with a sample rate of 14.32 MHz gives a -3 dB level at 3.58 ± 1.3 MHz?
Yes.
You can painstakingly work out the math longform if you'd rather. Nothing magical will show up about a number that works out to very approximately roughly Fs/11.
Is there possibly some special relationship between the sampling rate (4x the colorburst frequency) and 1.3 MHz? Yes, that I think might be deliberate. But your choice of discrete-time filter? Absolutely not; discrete-time stuff was basically not a practical option during NTSC's genesis.
This thread has turned into a "ask questions about analog TVs" thread.
Is there any research about the SNES's RF modulator circuit? I know that most composite encoder chips have a max of 133 IRE, and RF has a smaller range of 120 IRE. Does the SNES in RF have a smaller chroma amplitude, or does the SNES clamp yellows and cyans?
lidnariq wrote:
I should remember to plug the RF output of my SNES into my oscilloscope and get a spectrum plot. It's no spectrum analyzer, but I bet even with crappy resolution I'll see a few interesting things.
Connecting my oscilloscope directly to the RF output of various things. No cart, so solid screen and no sound, scope set to 250MS/s and 50mV/div, the lowest setting that's not additionally bandwidth limited.
NES-001, channel 4:
Primary video carrier image at 67.25MHz; ≈-47dB during vsync/hsync/blanking and ≈-54dB during whatever color it randomly chose.
Audio carrier visible at 71.75MHz, ≈-65dB
Extra modulated signal visible at 62.25MHz, comparable in magnitude to the audio carrier.
Main 21.47MHz system clock visible; ≈ -65dB
NES-001, channel 3:
Primary video carrier image at 61.25MHz, ≈-45dB and ≈-53dB as above
Audio carrier visible at 65.65MHz (should be 65.75), ≈-61dB
Extra modulated signal visible at 66.65 ≈-68dB
Extra modulated signal visible at 56.70MHz, also ≈-61dB
Main 21.47MHz system clock visible; ≈ -65dB
SNS-001:
Weird ringing at 66.75MHz even with no power adapter plugged in, regardless of RF channel setting; ≈-64dB. The image disappears if connected via the RF switch while the console is off.
SNS-001, channel 4:
Primary video carrier image at 67.25MHz, ≈-46dB
Audio carrier visible at 71.75MHz, ≈-64dB
Extra modulated signal visible at 62.8MHz, ≈-61dB
SNS-001, channel 3:
Primary video carrier image at 61.30MHz, ≈-44dB
Audio carrier visible at ≈65.80MHz, ≈-60dB
Extra modulated signal visible at 56.80MHz, ≈-59dB
Unfortunately, the SNR of my scope isn't good enough to see anything quieter, and the whole FM radio band is coupling into everything.
I've been looking at docs for NTSC encoder and decoder chips, and something I noticed is that most of the "1.3 MHz" chroma filters have zeros at ~2 MHz, which is a much faster roll-off than either a Gaussian or a Hamming filter. Does this mean there's chroma ringing with these filters?
Depends on the specific transfer function, whether it's continuous time or discrete, and what you mean by "2MHz".
Oh, of course. And zeroes never cause ringing, only poles do.
Is there a limit to how fast the roll-off can be without ringing?
Not really.
Just increased group delay.