I want to ask the nesdev people:
Is there an emulator or filter capable of simulating smooth phosphor fading like a CRT?
As you can see, the stars have comet tails:
I assume that playing with the "motion blur" effect of something like VisualBoyAdvance might provide a similar effect.
ObNES: To run NES games in VisualBoyAdvance, use PocketNES.
Just tested the Stars-Field demo on VBA-M (+ PocketNES) with motion blur.
Absolutely no effect; the stars don't have tails.
Googling "crt simulator" provides some links:
http://www.bogost.com/games/a_televisio ... ator.shtml (definitely seems to provide some ghosting emulation)
http://ascii.textfiles.com/archives/3786

The current CRT TV I have has a half-life noticeably shorter than 1/60th of a second. I'd arbitrarily guess somewhere around 5 ms.
If you're okay with an exponential fade, you can implement something like this fairly cheaply by storing the previous frame buffer and blending it with the new frame buffer at some % blend (e.g. 50% blend). Basically like an audio delay with feedback.
Simply blending the current frame with the previous one doesn't produce the right result, since it blurs the front of moving objects and doesn't leave tails. What's needed is a blending mode that blends in more of the color of the current frame pixel if it is brighter than the previous frame pixel.
Well, not y[n]=k·y[n-1]+(1-k)·x[n] but y[n]=max(k·y[n-1],x[n]) then.
I guess it's not simply doable with a couple of transparent opengl textures.
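That max rule could be sketched per pixel like this (a minimal sketch in plain Python; the nested-list grayscale frame format is just for illustration):

```python
def phosphor_blend(prev_shown, curr, k=0.5):
    """Apply y[n] = max(k * y[n-1], x[n]) per pixel.

    prev_shown, curr: 2-D lists of grayscale intensities (0-255).
    Bright new pixels are drawn as-is, while pixels that went dark
    keep a decaying tail from the previously displayed frame.
    """
    return [
        [max(k * p, c) for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_shown, curr)
    ]

# A bright dot moving one pixel to the right leaves a tail behind:
frame0 = [[0, 255, 0]]
frame1 = [[0, 0, 255]]
shown = phosphor_blend(frame0, frame1)
# shown[0] == [0.0, 127.5, 255]: the old position still glows at half
# brightness, and unlike a 50/50 blend, the new position is not dimmed.
```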
The impression I got when I experimented with moving things around in a BASIC program on an Apple IIe with a monochrome monitor back in middle school is that a CRT phosphor is like a piano string: it decays quickly over the short term and more slowly over the long term. So you'd need to feed the video signal into two different motion-blur processes with two different time constants and then blend the state of both processes to the screen.
As a first approximation, you could try implementing it as 75% current frame, 25% state of feedback motion blur.
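A sketch of that two-time-constant idea for a single pixel (the decay constants and the 75/25 mix below are illustrative guesses, not measurements):

```python
def dual_decay(fast, slow, x, k_fast=0.2, k_slow=0.9):
    """Two feedback decay processes with different time constants,
    loosely modeling a phosphor that dims quickly at first and then
    lingers. Returns the updated states and the displayed value."""
    fast = max(k_fast * fast, x)
    slow = max(k_slow * slow, x)
    shown = 0.75 * fast + 0.25 * slow  # mostly fast decay, faint long tail
    return fast, slow, shown

# One bright flash followed by darkness:
fast = slow = 0.0
levels = []
for x in [255] + [0] * 4:
    fast, slow, shown = dual_decay(fast, slow, x)
    levels.append(shown)
# levels drops steeply on the first dark frame, then tails off slowly.
```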
Funny effect:
http://youtu.be/SKUHMEgfY_Q

Stars change their size/color and pulsate in motion.
I haven't seen anything like this in software CRT filters.
This could be one of two things. If it also happens in an emulator, the game is changing CHR banks based on the horizontal scroll position. MetalStorm is known to do this as a way of faking parallax scroll. Otherwise, you're seeing the fact that each pixel covers only 5/6 of a PAL color subcarrier cycle, and an isolated colored pixel may end up darker or lighter depending on its momentary alignment with the subcarrier grid.
I haven't seen size pulsations in emulators.
When using blargg's NTSC filter, you can see color pulsations, but no size pulsations.
I think it's a CRT effect.
Are you referring to the effect where the size of the image is proportionate to the brightness of the image? (i.e. crosstalk between the deflection circuit and the electron beam drive)
Yes, different colors have different brightnesses.
A dark blue star looks smaller than a white star on a CRT, but in reality they are the same size.
Oh, you're referring to electron beam size being a function of brightness, especially in older tubes where the electron emitter has eroded. I think MAME's CRT simulator does handle that.
You can get a similar effect by rendering the previous frame at 15% or so luminosity, but with additive blending over the new frame, so that it doesn't darken the new frame at all. A few iterations of this should look similar.
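A sketch of that additive approach (plain Python; float pixel values clamped to 255, with the 15% gain suggested above):

```python
def additive_ghost(prev_shown, curr, gain=0.15):
    """Add the previously displayed frame back in at ~15% brightness.

    Because the blend is purely additive, the new frame is never
    darkened; iterating this every frame leaves a geometric series
    of progressively fainter ghosts behind moving objects."""
    return [
        [min(255.0, c + gain * p) for p, c in zip(prow, crow)]
        for prow, crow in zip(prev_shown, curr)
    ]

shown = [[255.0, 0.0]]                         # dot on the left
shown = additive_ghost(shown, [[0.0, 255.0]])  # dot moved right
# shown[0][0] is now about 38 (a faint ghost at the old position),
# while the dot's new position stays at full brightness.
```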
Googling for "CRT phosphor emulation", I found this video:
Emulation of CRT Phosphor + Curved Screen + Fade

But I can't test this GPU shader on my old GMA 4500.
Can anyone test this filter on Stars - Field demo (PD).nes?
Do you have a Csound program that can emulate phosphor fading for a single pixel? If it were treated as an audio signal, I wonder what it would sound like?
CRT phosphors emit light as either a first or overdamped second order exponential decay¹ after the incoming electron beam radiation has stopped. There will be some complication for the electron beam size, such that a "single" phosphor may actually be struck multiple times in quick succession as the screen is redrawn. Vectorscopes have a different retrace pattern, but are subject to the same caveats.
¹ 'Properties of Fast-Decay CRT Phosphors' by Pfahnl, ftp.helpedia.com
Looks like motion blur instead of light trails...
Quote:
Looks like motion blur instead of light trails...
Yes.
In my first video you can see that the white stars have greenish phosphor tails.
This effect is similar to afterglow, but with no blur.
I need to make a better-quality video.
Seen live on a real CRT, the traces look much longer, brighter, and more phosphorescent.
I see. Anyway, even a single pixel is brighter and "bigger" because of the phosphor's properties on a CRT monitor.
Zepper wrote:
Anyway, even a single pixel is brighter and "bigger" because of the phosphor's properties on a CRT monitor.
Yes... this will be pretty hard to simulate in emulators.
Having done a very simple simulation assuming a half-life of 33ms, without any of the electron beam simulations, the effect is fairly subtle:
Attachment:
f029.png [ 7.85 KiB | Viewed 4389 times ]
This is just "current displayed frame" = maximum("previous displayed frame" * .7,"current notional frame")
(This is a still from the loop that bisqwit made to demonstrate animmerger. Any obvious errors are from it.)
I'm shifting the previous RGB pixel right (dividing by 2) to simulate an exponential decay, then adding the new pixel. Well, it produces an effect similar to the previous screenshot...
You're missing a few points:
1. If the new point is brighter than the old one, the new one is drawn as-is.
2. If the new point is darker than the old one, the strength of the effect depends on the difference in brightness between the old point and the new one. The effect is therefore strongest when the new point is black.
3. Each color phosphor has a different decay rate. I have already given a link to a video showing it. Blue phosphor fades first, then green, and red lasts the longest. Therefore the subpixels must be processed separately.
HardWareMan wrote:
3. Each color phosphor has a different decay rate. I have already given a link to a video showing it. Blue phosphor fades first, then green, and red lasts the longest. Therefore the subpixels must be processed separately.
It looks to me like the half-lives depicted in that recording are on the order of 100 µs for blue, 300 µs for green, and 2 ms for red. With modern phosphors having those half-lives, there will be no visible interframe blurring from the phosphors, only from your eyes.
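Those rough half-lives translate into per-frame decay factors as k = 0.5^(t_frame / t_half); a quick check (the half-life figures are the estimates above, not measurements):

```python
def decay_per_frame(halflife_s, frame_s=1 / 60):
    """Fraction of a phosphor's light left after one frame,
    given its half-life: k = 0.5 ** (frame_time / half_life)."""
    return 0.5 ** (frame_s / halflife_s)

for name, halflife in [("blue", 100e-6), ("green", 300e-6), ("red", 2e-3)]:
    print(name, decay_per_frame(halflife))
# Even red, the slowest of the three, keeps only about 0.3% of its
# light after one 60 Hz frame, so no interframe blur is visible.
```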
So, it's not a matter of interpolating frames or blending pixels every X ms. Should I say "outputting pixels whiter than white"? Funny things ^_^;;
Zepper wrote:
So, it's not a matter of interpolating frames or blending pixels every X ms. Should I say "outputting pixels whiter than white"? Funny things ^_^;;
This looks nasty, but it's closer than the other approximations above, which are more like motion blur.
Most of what makes the fading phosphor look the way it does is that the phosphors don't go dark instantly.
Eugene.S wrote:
Googling for "CRT phosphor emulation", I found this video:
Emulation of CRT Phosphor + Curved Screen + Fade

But I can't test this GPU shader on my old GMA 4500.
Can anyone test this filter on Stars - Field demo (PD).nes?
This looks more like an LCD response time emulator, ironically, from the video.
Oh, mein Gott. It's the LSD emulator, with psilocybin mushrooms! I want to see it in motion.
Eugene.S wrote:
It's the LSD emulator
What does that stand for?
Lovely sweet dream?
I played with pixels and colors a bit here... and it looks like 60 Hz wouldn't allow that glowing tail. A star that moves on screen would be seen as a trace with varying brightness, like .ooOO, not something really blended.
EDIT: an example of what I mean. The brighter the palette, the longer the trail.
Can you show this filter in motion?
I made this a while back:
https://www.youtube.com/watch?v=VSZH0zGz7ho

It's not a good realtime effect, though; it's a linear fade with hue shifting, layering the last few seconds' worth of frames under the current one.
These trails are the right idea, especially since they are spaced and not blended, but the intensity of even the first brightest trail should be much dimmer than the image that produced it.
rainwarrior wrote:
I made this a while back:
https://www.youtube.com/watch?v=VSZH0zGz7ho

It's not a good realtime effect, though; it's a linear fade with hue shifting, layering the last few seconds' worth of frames under the current one.
Any chance you wanna share that filter? It's pretty neat.
Eugene.S wrote:
Can you show this filter in motion?
Do you know any software that captures a windowed region of the screen?
Zepper wrote:
Eugene.S wrote:
Can you show this filter in motion?
Do you know any software that captures a windowed region of the screen?
Camstudio can do an okay job sometimes. Microsoft's Expression Encoder is free for ten minutes at a time and does a somewhat better job than Camstudio.
I have implemented something like phosphors into my game:
It needs a lot of tweaking, but it is not far off from the right idea. It's a little more like a slow LCD simulator, but its blending is additive so a moving black box would not have the effect a moving light box would.
This is the same effect, tweaked further and with some scanlines. I think it looks pretty decent; I'd say the phosphor trails are generally not very perceptible to the human eye on a real CRT unless the person is using the CRT in the dark, where the lack of external light makes them much more apparent. The above effect looks much like that scenario.
You guys keep posting images, but to actually appreciate these effects we need video!
tokumaru wrote:
You guys keep posting images, but to actually appreciate these effects we need video!
Screen recording software is kind of shit, since the stuttering breaks the effect... my game doesn't have video logging, so the best I can do is an actual camera.
mikejmoffitt wrote:
These trails are the right idea, especially since they are spaced and not blended, but the intensity of even the first brightest trail should be much dimmer than the image that produced it.
Any chance you wanna share that filter? It's pretty neat.
Well, it's not supposed to be a phosphor fade or anything; it was just supposed to be some trippy trails.
As for sharing, there's really nothing to share other than the description of what it is. The implementation is something specific to my own video making program (which I will not share), but it's fairly trivial to write if you already have some sort of video processing framework: keep the last X frames, for each output frame just draw each of them in turn (faded and hue shifted) wherever the transparent key colour remains in the image.
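A rough sketch of that description (plain Python; the one-row grayscale "frames", the key colour 0, and the fade step are all illustrative, and there is no hue shifting here):

```python
from collections import deque

def trails(history, curr, key=0, max_frames=8, fade=0.12):
    """Keep the last few frames; wherever the current frame still shows
    the key colour, show the newest older pixel instead, faded by age."""
    history.appendleft(list(curr))
    while len(history) > max_frames:
        history.pop()
    out = list(curr)
    for i, px in enumerate(out):
        if px != key:
            continue  # current frame already covers this pixel
        for age, frame in enumerate(history):
            if frame[i] != key:
                out[i] = frame[i] * max(0.0, 1.0 - fade * age)
                break
    return out

hist = deque()
trails(hist, [0, 255, 0])        # dot in the middle
out = trails(hist, [0, 0, 255])  # dot moved right
# out[1] now holds a slightly faded copy of the old dot.
```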
If you save every frame of raw RGB pixel data to a file as you generate it, you can pipe it through FFmpeg with zero dropped frames.
tepples wrote:
If you save every frame of raw RGB pixel data to a file as you generate it, you can pipe it through FFmpeg with zero dropped frames.
This sounds tedious, unless scripts that run on Windows are readily available with little to no setup. I am not booting Linux for this.
Plus, I'm not aware of a function Allegro has that will let me easily save the backbuffer as a file. This sounds like filesystem havoc, too...
rainwarrior wrote:
mikejmoffitt wrote:
These trails are the right idea, especially since they are spaced and not blended, but the intensity of even the first brightest trail should be much dimmer than the image that produced it.
Any chance you wanna share that filter? It's pretty neat.
Well, it's not supposed to be a phosphor fade or anything; it was just supposed to be some trippy trails.
As for sharing, there's really nothing to share other than the description of what it is. The implementation is something specific to my own video making program (which I will not share), but it's fairly trivial to write if you already have some sort of video processing framework: keep the last X frames, for each output frame just draw each of them in turn (faded and hue shifted) wherever the transparent key colour remains in the image.
I see; I don't have anything like that. I thought it was something you did to FCEUX.
Gosh this looks awful:
http://www.youtube.com/watch?v=rK64l4hA ... e=youtu.be
Is it possible with ImageMagick to make something like this?
zzo38 wrote:
Is it possible with ImageMagick to make something like this?
I don't know; I could just write down in order what I did to produce the effect, but I don't know a thing about shaders or what ImageMagick is. Anyone could reproduce this with a batch script for an image editor.
I've tweaked it a little, and it's better. Now the lines do look a bit thicker in brighter areas than in dimmer ones:
mikejmoffitt wrote:
zzo38 wrote:
Is it possible with ImageMagick to make something like this?
I don't know; I could just write down in order what I did to produce the effect ...
If you would please provide that information, it would help.
zzo38 wrote:
mikejmoffitt wrote:
zzo38 wrote:
Is it possible with ImageMagick to make something like this?
I don't know; I could just write down in order what I did to produce the effect ...
If you would please provide that information, it would help.
Sure, I'll take a little while to write it up.
mikejmoffitt: In many of my projects after writing a screenshot command, I often write a "perfect 60hz" capture mode that dumps frames and audio (i.e. 44100/60=735 samples of sound per frame) with a fixed internal framerate (disregarding actual display rate), which makes it very easy to make nice clean video (which is also helpful for debugging sometimes). VirtualDub has a way to auto-concatenate a stream of sequentially named images, so you don't even have to funnel it into an AVI stream yourself or anything (though that itself can be pretty easy too, e.g. Windows' crusty VFW API).
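The bookkeeping rainwarrior describes is straightforward; here is a sketch of such a fixed-rate dump loop (render_frame and render_audio are hypothetical callbacks standing in for the game's renderer and mixer):

```python
SAMPLE_RATE = 44100
FPS = 60
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # 44100 / 60 = 735, exactly

def capture(render_frame, render_audio, n_frames, prefix="cap"):
    """Dump frames and matching audio at a fixed internal rate,
    ignoring the real display clock, so the result never stutters.
    render_frame(i) -> raw RGB bytes; render_audio(i, n) -> n samples."""
    names = []
    for i in range(n_frames):
        name = f"{prefix}{i:05d}.raw"
        pixels = render_frame(i)                     # one full frame
        audio = render_audio(i, SAMPLES_PER_FRAME)   # exactly 735 samples
        # ... write `pixels` to `name`, append `audio` to a PCM file ...
        names.append(name)
    return names
```

Sequentially numbered names like these are the kind of thing VirtualDub's image-sequence import expects.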
rainwarrior wrote:
mikejmoffitt: In many of my projects after writing a screenshot command, I often write a "perfect 60hz" capture mode that dumps frames and audio (i.e. 44100/60=735 samples of sound per frame) with a fixed internal framerate (disregarding actual display rate), which makes it very easy to make nice clean video (which is also helpful for debugging sometimes). VirtualDub has a way to auto-concatenate a stream of sequentially named images, so you don't even have to funnel it into an AVI stream yourself or anything (though that itself can be pretty easy too, e.g. Windows' crusty VFW API).
I probably should implement some form of screenshot into my game, I just have to see if Allegro has an easy way of doing this.
It continues to improve substantially; a friend's comment and suggestion made it much more efficient as well as more authentic-looking:
mikejmoffitt wrote:
tepples wrote:
If you save every frame of raw RGB pixel data to a file as you generate it, you can pipe it through FFmpeg with zero dropped frames.
This sounds tedious, unless scripts are readily available with little to no setup that will run on Windows. I am not booting linux for this.
My own game engine written in Pygame does this saving as easily on Windows as it does on Linux. FFmpeg does not distribute official binaries for Windows because of patent problems, but they are available if you know where to look.
Quote:
Plus, I am not aware of a function Allegro has that will let me easily save the backbuffer as a file.
I guess I'm spoiled by Pygame, which has the utility module pygame.image that translates a memory buffer into a string. But what you can do in Allegro is finish rendering and then call save_bitmap (or however it was renamed in recent Allegro) to write to a single file. Or you can blit the render surface to a 24-bit bitmap and write each scanline.
Quote:
This sounds like filesystem havoc as too...
If you write all the individual 24-bit frames to a single file one after another, you can then pass the filename of that file, the width, the height, and whether the file is RGB24 or BGR24 to FFmpeg.
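Putting that together, the FFmpeg command line for such a file of concatenated raw frames might be built like this (a sketch using FFmpeg's rawvideo demuxer options; the file and output names are placeholders):

```python
def ffmpeg_args(path, width, height, fps=60, pix_fmt="rgb24"):
    """Build an FFmpeg command for a file of concatenated raw 24-bit
    frames. pix_fmt is "rgb24" or "bgr24" depending on byte order."""
    return [
        "ffmpeg",
        "-f", "rawvideo",            # no container, just packed pixels
        "-pix_fmt", pix_fmt,
        "-s", f"{width}x{height}",   # frame size must be given explicitly
        "-r", str(fps),
        "-i", path,
        "out.mp4",
    ]

# subprocess.run(ffmpeg_args("frames.rgb", 256, 240)) would then encode it.
```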
tepples wrote:
mikejmoffitt wrote:
tepples wrote:
If you save every frame of raw RGB pixel data to a file as you generate it, you can pipe it through FFmpeg with zero dropped frames.
This sounds tedious, unless scripts are readily available with little to no setup that will run on Windows. I am not booting linux for this.
My own game engine written in Pygame does this saving as easily on Windows as it does on Linux. FFmpeg does not distribute official binaries for Windows because of patent problems, but they are available if you know where to look.
Quote:
Plus, I am not aware of a function Allegro has that will let me easily save the backbuffer as a file.
I guess I'm spoiled by Pygame, which has the utility module pygame.image that translates a memory buffer into a string. But what you can do in Allegro is finish rendering and then call save_bitmap (or however it was renamed in recent Allegro) to write to a single file. Or you can blit the render surface to a 24-bit bitmap and write each scanline.
Quote:
This sounds like filesystem havoc as too...
If you write all the individual 24-bit frames to a single file one after another, you can then pass the filename of that file, the width, the height, and whether the file is RGB24 or BGR24 to FFmpeg.
al_save_bitmap looks to be in Allegro 5, so I think I will go ahead and put that in. Good call.
I guess I really should have called it solid-state write-cycle hell; I was referring to writing a new file to the drive every 1/60th of a second.
As for Pygame, I used it for a while but stopped because I could not reliably get it to wait for vertical blank before flipping to the display buffer. As a result I could not time the game to vblank, which is what I had wanted to do. Do you know of a way to get Pygame to do this?
mikejmoffitt wrote:
I guess I really should have called it solid-state write cycles hell; I was referring to writing a new file every 1/60th of a second to the drive.
You don't have to run the game in real time though, you can slow it down for video recording.
tokumaru wrote:
You guys keep posting images, but to actually appreciate these effects we need video!
And YouTube won't be enough, because once the effect is done right it will span too few frames to be appreciated (we need the full 60 FPS; 30 FPS won't do), and even then, YouTube will most likely ruin the quality completely (since the effect relies on subtle details).
I recall once (long ago) doing a test and concluding that a white dot against a black background stays on screen at most 1/12th of a second (possibly less), and I don't know how much of that was CRT lag and how much was eye persistence. Moreover, this was a PC monitor; I have no idea how different TVs are in that sense.
Make of that what you will, but basically trails shouldn't last longer than a handful of frames at most.
When I've recorded video of my CRT TV using a medium-high-speed (240fps) camera, I see no significant ghosting.
So I took a CR2 of my screen: 1/2000th exposure, f/2, ISO-equivalent 80 (i.e. minimal denoising). I played around with it in CinePaint for a bit: asked for a linear decode, and pushed the contrast until I could manually calculate the half-life over several periods. Starting from an exposure that shows the active region at about 75% FS, I had to push the exposure up by a factor of about 20 (or 4.5 stops, i.e. 4.5 half-lives) to compensate.
With the final version of phosphors used in CRTs in the US before the transition to LCD sets, ghosting simply does not happen. The halflife is FAR too short.
This is not to say that there aren't CRTs that do this! Just that the CRTs that most of us in the USA have seen didn't.
lidnariq wrote:
When I've recorded video of my CRT TV using a medium-high-speed (240fps) camera, I see no significant ghosting.
So I took a CR2 of my screen. 1/2000th exposure, F/2, ISO-equiv 80 (i.e. minimal denoising). I played around with it in cinepaint for a bit. Asked for a linear decode, and pushed the contrast until I could manually calculate the halflife over several periods. By pushing the exposure (which currently shows the active region at about 75%FS), I had to push the exposure up by a factor of about 20 (or 4.5 stops, or 4.5 halflives) to compensate.
With the final version of phosphors used in CRTs in the US before the transition to LCD sets, ghosting simply does not happen. The halflife is FAR too short.
This is not to say that there aren't CRTs that do this! Just that the CRTs that most of us in the USA have seen didn't.
Try playing on your CRT in pitch dark with a game like Gradius: you will see the trails!
Fair enough! I tested with Galaxian, and see 3.7 halflives per 1/60th of a second, or a halflife of 4.5ms. This means that the correct constant (with this TV) for the expression I gave above is k=1/13. This is low enough that on the NES, the effect will only ever be visible when transitioning to black pixels. So here's an animated gif, simulating same:
Attachment:
anim.gif [ 15.52 KiB | Viewed 4625 times ]
It's really subtle, especially without any of the phosphor size blur. You'll also want to enlarge it; I can't see anything when it's at 100 dpi.
And after manually capturing 17 frames from FCEUX, here's how I processed it:
Code:
for i in `seq 0 16`; do
pnmarith -maximum $i.ppm previous.ppm > n$i.ppm;
ppmtogif n$i.ppm > n$i.gif;
pnmgamma -ungamma 2.2 n$i.ppm | ppmdim .077 | pnmgamma 2.2 > previous.ppm;
done
pnmcat -tb n*.ppm | ppmtogif > all.gif
gifsicle --use-colormap all.gif -O3 -V -o anim.gif -d2 n{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}.gif
Note that I'm fixing up the gamma; if I hadn't, it'd be even harder to see.
I suspect that standard 24-bit displays are not actually deep enough to show this in a compelling way; after two refreshes (1/13)² even full scale content is just 1 LSB.
See something different this time?
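Those figures are easy to sanity-check; note that 0.5^3.7 is exactly where the `ppmdim .077` constant in the script above comes from:

```python
# 3.7 half-lives per 1/60 s frame -> fraction of light left per frame:
k = 0.5 ** 3.7
print(round(k, 3))            # ~0.077, i.e. roughly 1/13

# Implied phosphor half-life in milliseconds:
halflife_ms = (1 / 60) / 3.7 * 1000
print(round(halflife_ms, 1))  # ~4.5 ms

# After two refreshes, even full-scale content is near the 8-bit floor:
print(255 * k * k)            # ~1.5, about one LSB out of 255
```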
lidnariq wrote:
Fair enough! I tested with Galaxian, and see 3.7 halflives per 1/60th of a second, or a halflife of 4.5ms. This means that the correct constant (with this TV) for the expression I gave above is k=1/13. This is low enough that on the NES, the effect will only ever be visible when transitioning to black pixels. So here's an animated gif, simulating same:
Attachment:
anim.gif
It's really subtle, especially without any of the phosphor size blur. You'll also want to enlarge it; I can't see anything when it's at 100 dpi.
And after manually capturing 17 frames from FCEUX, here's how I processed it:
Code:
for i in `seq 0 16`; do
pnmarith -maximum $i.ppm previous.ppm > n$i.ppm;
ppmtogif n$i.ppm > n$i.gif;
pnmgamma -ungamma 2.2 n$i.ppm | ppmdim .077 | pnmgamma 2.2 > previous.ppm;
done
pnmcat -tb n*.ppm | ppmtogif > all.gif
gifsicle --use-colormap all.gif -O3 -V -o anim.gif -d2 n{0,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16}.gif
Note that I'm fixing up the gamma; if I hadn't, it'd be even harder to see.
I suspect that standard 24-bit displays are not actually deep enough to show this in a compelling way; after two refreshes (1/13)² even full scale content is just 1 LSB.
I should have mentioned that indeed, I only expect something like this to be visible on a totally unlit (black) section.
As the phosphor is no longer being struck by the electron gun, the "Starting brightness" of the fade is very dim, but the "decay" of the dim trail is very slow.
On mine, I can see the trails last for almost two seconds, but the curve is something like this (please excuse the graph quality):
Sure, exponential curve. In a dark room, with an almost entirely dark CRT, a human eye can see light over something like 10 factors of ten difference in brightness (not simultaneously; just "dimmest possible" to "brightest possible without being painful"). But we've only got 2-3 factors of ten on a standard monitor, so that's not going to be visible unless you're willing to horrifically overexpose any visible pixels. (log₂(10¹⁰) ≈ 33; we'd need 96-bit displays to represent this.)
Found via link from this post. Topic age acknowledged.
lidnariq wrote:
But we've only got 2-3 factors of ten on a standard monitor
Double that to about 4-5 because of the gamma characteristic of sRGB. Signal values represent voltage, while light output is closer to proportional to power, which is the square of voltage.
lidnariq wrote:
so that's not going to be visible unless you're willing to horrifically overexpose any visible pixels. (log₂(10¹⁰) ≈33; we'd need 96-bit displays to represent this.)
That or high dynamic range rendering, which uses a floating-point frame buffer and then a bloom effect during post-processing to indicate to the eyes which pixels were overexposed.
I think a bloom effect emphasizes the wrong thing, though. You're trying to capture the fading trail of motion. The "overexposed" part is the non-moving image which should be bright and clean and clear, not fuzzed up with a bloom filter.
The simulation of "HDR" in LDR space with bloom serves a very different purpose: it communicates that bright lights like the sun are "brighter" than the area their shape covers on the image by bleeding them outward. That can correspond to real physical effects, such as light scattering by particles in the atmosphere (e.g. fog), focal depth effects, or lens defects, which is why it gets that feeling across in the simulation. But it's not an actual representation of high dynamic range, and the physical effects being represented don't really apply here.
tepples wrote:
Double that to about 4-5 because of the gamma characteristic of sRGB. Signal values represent voltage, while light output is closer to proportional to power, which is the square of voltage.
Looking at quantization error at the low end, there's only really 3.5 orders of magnitude with good coverage. The last range, between 10⁻⁴ and 10⁻⁵, is values of 1, 2, 3 out of 255 in sRGB.
The difference between 4/255 and 5/255 in gamma 2.2, converted to linear light, is a factor of about 1.6, most of a photographic "stop".