Re:I get it. (Score: 1)
I know exactly what he’s talking about. That’s why I got an old-school Trinitron (which someone was giving away for free) and now treat it with the greatest care.
Here’s a great example: https://ancillary-proxy.atarimworker.io?url=https%3A%2F%2Fwww.reddit.com%2Fr%2Finter...
That is indeed an excellent illustration of why “perfect rectangles” are not a desirable representation of pixels when simulating computer graphics originally intended for display on a 15 kHz CRT TV or video monitor.
In some of the comments on this article, though, there seems to be some confusion over what you'd actually want to see in a CRT simulation. Granted, there were many different types and generations of CRT displays. They varied in terms of their accepted horizontal and vertical timings, the dot pitch of their shadow mask or the density of their aperture grille, their dynamic range, their effective resolution, the persistence of their phosphors, and the details of their electron gun/beam control. There were also many different signal types and color encoding standards implemented for driving the display, some of which generated their own unique artifacts.
But if you do not fetishize the artifacts as such (RF noise, hum bars slowly traveling across the screen, adjacent colors affecting each other in unintended ways) but just want the nicest-looking, most authentic and crisp picture you could have hoped to achieve back in the day, it is very easy to specify the gold standard that the emulation/simulation should strive to target:
It is something akin to a Commodore Amiga (any model) driving a TV-compatible 15 kHz computer/video monitor — such as the Philips CM8833 or the Commodore 1084S — using RGB signaling. No more, no less. Make it look the same and you’re golden.
What this means in practice is:
1. Forget about EGA, VGA, and SVGA. They’re not relevant to the types of signal sources we’re talking about (8-bit and 16/32-bit home computers and game consoles of the 1980s and early 1990s). The monitor whose display characteristics you want to simulate should be a CGA computer/video monitor, suitable for both computer and video work (that is, also capable of being used as a CCTV monitor, or for video editing using VCRs).
The monitor needs to be compatible with the standard-definition 625/50 and 525/59.94 interlaced video rasters, and with their half-vertical-resolution, non-interlaced (progressive) variants, all with a 15 kHz horizontal scan rate. These are the signal types conventionally produced by the hobbyist/home computer systems and video game consoles of the era.
Such monitors are generally close to TVs in their design (e.g. they feature a PAL or NTSC color decoder for decoding CVBS and Y/C signal types) but lack an RF tuner. They have more user-accessible picture controls, support more (professional) input signal types (RGB or Y'PbPr) along with the consumer video signals, and are capable of displaying a higher effective resolution (discernible “TV lines” — that is, horizontal resolution along a scanline) than your typical (portable) CRT TV of similar size.
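To put rough numbers on this, here is a minimal Python sketch of the nominal timing figures for these raster variants. The non-interlaced refresh rates simply follow from dividing the line rate by the line count; real machines deviated slightly from all of these figures, so treat them as nominal:

    RASTERS = {
        # (line_rate_hz, refresh_hz, total_lines, nominal_active_lines)
        "625/50 interlaced":        (15_625, 50.00, 625, 576),
        "625/50 non-interlaced":    (15_625, 50.08, 312, 288),  # 15625 / 312
        "525/59.94 interlaced":     (15_734, 59.94, 525, 480),
        "525/59.94 non-interlaced": (15_734, 60.05, 262, 240),  # 15734 / 262
    }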
2. Make sure you simulate scanlines, and align the input video raster (the scanline raster) to the pixel matrix of the target display. This immediately imposes a technical restriction on the output side: the vertical resolution of the simulated CRT image must not be allowed to scale freely on an LCD or OLED display, but only in discrete steps that are integer multiples of the number of active scanlines in the original input signal (e.g. 288, 576, 1152, or 1728 for a 625-line signal and 240, 480, 960, or 1440 for a 525-line signal). Otherwise you will run into aliasing artifacts.
For instance, your starting point would be the original vertical (active) resolution as-is. Or, more accurately, half the nominal vertical resolution of a standard interlaced video signal, since the home computers and game consoles of the 1980s commonly output a slightly non-standard, non-interlaced signal in which the TV draws the consecutive fields effectively as progressive frames, with no half-line displacement between the “even” and “odd” fields. At this stage there can be no scanline emulation, as there is no room for it: each scanline occupies a single pixel row on the target display.
A step up from this is doubling the original vertical resolution and adding a dimmer, interpolated line between the original scanlines. (Just leaving every other line blank is too drastic, and not really how the image looked on a CRT. Typically, the scanlines are thicker than the inter-scanline gap.)
Another step up would be reserving two horizontal pixel rows per original scanline, plus a dimmer row in between them to mark the inter-scanline gap. Now we’re at three times the original vertical resolution.
Thereafter — as the resolution allotted to each scanline increases in discrete steps — you may start simulating the actual beam shape with increasing accuracy, as in the sketch below. This involves the rounded/dimmed/tapering edges and the blooming at points where the beam switches on or off, or increases or decreases in brightness. Such things help recreate the smooth edges of text and pixel art and the overall scanline-based “video raster look” of CRT graphics.
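Here is a minimal numpy sketch of the two ideas in this point: picking an integer vertical scale, and shaping each scanline with a vertical beam profile. The Gaussian profile is an illustrative assumption rather than a measured beam spot, and the frame is assumed to be in linear light:

    import numpy as np

    def allowed_output_heights(active_lines, display_height):
        """All vertical sizes the simulation may use: integer multiples of
        the source's active scanline count that fit on the display."""
        return [active_lines * n
                for n in range(1, display_height // active_lines + 1)]

    def render_scanlines(frame, scale):
        """Expand each source scanline into `scale` output rows, weighted by
        a vertical beam profile. `frame` is a (lines, width, 3) float array
        in linear light. At scale=1 this degenerates to a plain copy; at
        scale=2 or 3 it approximates the bright-line-plus-dimmer-gap schemes
        described above."""
        lines = frame.shape[0]
        # Position of each output row within its scanline, from -0.5 to 0.5:
        y = (np.arange(scale) + 0.5) / scale - 0.5
        profile = np.exp(-(y / 0.35) ** 2)     # bright center, dim inter-line gap
        out = np.repeat(frame, scale, axis=0)  # each line becomes `scale` rows
        out *= np.tile(profile, lines)[:, None, None]
        return out

For example, allowed_output_heights(288, 2160) yields [288, 576, ..., 2016]: on a 4K panel, a 625-line source may only be scaled to one of those heights.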
3. Once your scanlines are in order, make sure you maintain the correct aspect ratio by interpolating the horizontal resolution in lock-step with the vertical resolution. Note that the correct aspect ratio of a video raster (as seen on a properly adjusted TV, or on a TV-signals-compatible video monitor) is dictated solely by the EBU and SMPTE video/TV signal timing standards. That is, the shape of the image and of its pixels is fully determined by the pixel clock producing the signal, and by how it relates to the standard TV timings.
The nominal picture shape of the era was 4:3, but the shape of the actual picture you see on the screen depends on the device generating the signal, and on how it paints its pixels relative to the so-called “active picture” area and sync pulses defined by the TV standards. Often the image produced by a computer or video game console of the era does not fully cover the entire standards-defined “active picture”, and therefore does not necessarily represent an exact 4:3 shape, but may have some other rectangular shape centered within the 4:3-shaped “active picture”.
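As a sketch of how the pixel shape falls out of the timings alone, the following derives a pixel aspect ratio from a given pixel clock and the nominal 625/50 figures. The ~7.09 MHz example value for a low-resolution PAL Amiga is an assumption for illustration:

    def pixel_aspect_ratio(pixel_clock_hz,
                           active_line_s=52e-6,  # nominal 625/50 active line time
                           raster_lines=288,     # non-interlaced PAL-family raster
                           line_height=2):       # one line = 2/576 of picture height
        """Width of one source pixel relative to its height, derived purely
        from the pixel clock and the TV timing standard."""
        # The 4:3 active picture is this many *square* pixels wide:
        square_width = (raster_lines * line_height) * 4 / 3
        # The pixel clock that would paint exactly that many pixels per line:
        square_clock = square_width / active_line_s
        # A slower clock paints proportionally wider pixels; dividing by the
        # line height turns this into a width:height ratio for one pixel.
        return (square_clock / pixel_clock_hz) / line_height

    # e.g. pixel_aspect_ratio(7.09e6) is roughly 1.04: nearly square pixels
    # on a 288-line non-interlaced raster.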
4. Also note that standard-definition TVs and video monitors are assumed to overscan the active picture. Some of the picture will go over the edges of the CRT screen and be cut off.
This was originally because early CRTs were rather round in shape, so the corners of the signal were cut off “by design” to let the image fill the entire surface area of the tube. Early CRT electronics also could not hold the image all that stable: its size often fluctuated with the overall image brightness, and drifted over time as components aged, unless you readjusted the geometry potentiometers. This made it impossible to keep the edges of the active picture accurately aligned with the edges of the CRT.
Yet another reason for overscan was that analog production technology (analog TV/video cameras, analog VCRs/VTRs, etc.) could not guarantee a clean signal out to the very edges of the nominal active picture. Video head switching noise and other artifacts hid behind the edges, so it was desirable to let the CRT “cut” the edges and shape the visible image cleanly.
Early TV-connectable home computers and video gaming systems countered overscan by generating rather big borders around the picture, keeping the action centered within an area that is actually visible on the tube. Later systems (such as the original Xbox, and optionally the Amiga) embraced overscan instead: they could fill the active picture area of the signal to the very edges, with no borders. But in doing so, they also had to follow the same safe-area guidelines that broadcasters adhered to when positioning text, graphics, or important action.
A proper simulation should both simulate overscan and include an option to switch it off (an “underscan mode”), with the overscanned mode being the default, as it was on a normal TV or a properly adjusted video monitor.
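A minimal sketch of such an overscan crop, assuming a ballpark figure of 5% of the picture lost at each edge (real sets varied, and drifted):

    def visible_region(active_w, active_h, overscan_frac=0.05):
        """(x, y, w, h) of the part of the active picture that survives
        overscan, assuming the tube crops `overscan_frac` of the picture at
        each edge. Underscan mode is simply overscan_frac=0.0."""
        dx = round(active_w * overscan_frac)
        dy = round(active_h * overscan_frac)
        return dx, dy, active_w - 2 * dx, active_h - 2 * dy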
5. There are some more interesting and esoteric things one could attempt when simulating the display of interlaced signals (in a truly interlaced fashion), or the persistence of phosphors on a CRT — especially given a high enough output framerate (preferably an even multiple of the original vertical refresh rate). Doing this accurately would likely require profiling an actual 15 kHz CRT monitor with a special high-speed camera.
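Pending such profiling data, a crude persistence model might look like the sketch below; the decay constant `tau` is a placeholder assumption, and real phosphors decay non-exponentially and differently per color:

    import numpy as np

    def phosphor_step(prev_emission, excitation, dt, tau=2e-3):
        """One output-refresh step of a crude phosphor-persistence model:
        the previous frame's emission decays exponentially and the newly
        excited pixels are added on top."""
        return prev_emission * np.exp(-dt / tau) + excitation

You would run this at the output refresh rate, with `excitation` holding only the lines the simulated beam draws during that slice of the frame (e.g. one field at a time for interlaced material).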
6. Given a very high-resolution display, it might become possible to simulate aperture grille or shadow mask patterning, at least to some artificial extent: you would align the simulated mask exactly to the pixels of the output device, so as not to introduce interference patterns, but not align it exactly to the simulated electron beam.
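A minimal sketch of the aperture-grille variant, with the stripe pattern tiled in exact 3-pixel steps so it stays locked to the output grid (the `strength` parameter is an assumption controlling how much each stripe suppresses the other two primaries):

    import numpy as np

    def apply_aperture_grille(image, strength=0.5):
        """Multiply the image by an R/G/B vertical stripe pattern locked to
        the output pixel grid, so it cannot beat against the display. It is
        deliberately *not* locked to the simulated beam."""
        h, w, _ = image.shape
        mask = np.full((1, w, 3), 1.0 - strength)
        mask[0, np.arange(w), np.arange(w) % 3] = 1.0  # each column passes one primary
        return image * mask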
7. Gamma and the other characteristics of the CRT color primaries/phosphors could also be modeled and simulated to the degree possible. Some emulators, for instance, used to ship rather bad palettes that in no way matched what you would have seen on a CRT.
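At its simplest, this means mapping palette values through an assumed CRT transfer curve into linear light and re-encoding for the modern display; the gamma of 2.35 below is a plausible guess for a consumer tube, not a measured value:

    def crt_to_srgb(level, crt_gamma=2.35):
        """Map one normalized 0..1 palette channel through an assumed CRT
        transfer curve into linear light, then encode it for an sRGB display."""
        linear = level ** crt_gamma            # light the phosphor would emit
        if linear <= 0.0031308:                # standard sRGB encoding
            return 12.92 * linear
        return 1.055 * linear ** (1 / 2.4) - 0.055

A fuller treatment would also convert between the phosphor primaries (e.g. EBU or SMPTE-C) and the display's sRGB primaries.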
8. There are likely to be analog overshoots and ringing in the driving video signal and in its interaction with the electron beam control, as well as filtering and electromagnetic effects, that produce some of the subtler characteristics of a CRT image. Modeling these would require a good understanding of the inner workings of a CRT-based display and of the signals driving its circuitry.
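Absent that understanding, one loose approximation is a small FIR filter applied per scanline, whose negative outer taps overshoot at sharp transitions. The taps below are invented for illustration; real behavior depends on the bandwidth of the particular signal path and monitor:

    import numpy as np

    def ring_scanline(line, taps=(-0.10, 0.05, 1.10, 0.05, -0.10)):
        """Convolve one scanline with a small FIR filter that over- and
        undershoots at sharp transitions, loosely mimicking analog ringing."""
        k = np.asarray(taps, dtype=float)
        k /= k.sum()                           # leave flat areas untouched
        return np.convolve(line, k, mode="same")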
All of this should lead to an image that makes e.g. the text of a command-line shell look as smooth as it did on a CRT, instead of a jaggy collection of Lego blocks. But the look of a video raster is a very different thing from simply blurring or interpolating the original pixel data — so if one thinks mere blurring should do the trick, that assumption is very much mistaken.
Then again — and to get back to where we started — I do not think simulating artifacts such as RF noise or interference, shadowy ghost images, or PAL or NTSC color artifacting gains you much, except as a gimmick that you try out once and then switch off forever. Such artifacts were undesirable back then, and they are (mostly) undesirable now.
What you want instead is a simulation of a clean signal driving the CRT. Back in the day, everyone would have wanted as clean a signal on their CRT as possible, with no extra artifacting caused by the inherent limitations of some signal path, but of course displayed in the technical manner a video raster was supposed to be displayed on a CRT. An RGB signal, where natively produced by the computer or game console, and where available as a supported input option on your monitor, was the holy grail of picture quality, for TV-signals-compatible devices too. Some cheaper systems simply were not designed to output their video as RGB, since they primarily targeted users who were assumed to use domestic TVs as their displays rather than the more professional and more expensive CGA computer/video monitors. And, in many markets, domestic TVs did not feature an RGB input either.
Europe was lucky in this regard, though, since the French insisted on having their SCART connector on the TVs sold here, and SCART specified analog RGB input pins alongside composite video. So if your signal source supported RGB, as the Amiga and many fourth- and later-generation home video game consoles did, you could get a very clean picture even on a domestic CRT TV.
(Later on, the ubiquitous RGB support on European TVs also helped with DVD players and digital set-top boxes, whose European versions commonly included a SCART connector and output an RGB signal, producing as crisp an image as the CRT was technically capable of displaying.)
(OK, NTSC color artifacting was used by some early systems as an indirect, cheap way of creating a color signal without including “actual” color support, so it is desirable to simulate it optionally in some cases — but its usefulness is limited to the specific systems and software titles that made use of this approach. For any other titles, you'd rather switch such color processing off and treat the input as a monochrome or RGB signal, getting a clean image free of accidental color artifacts.)
Yet another consideration is, of course, the physical size of the image. In the 1980s and early 1990s, the CRTs of RGB video monitors and portable TV sets were around 14 inches in size; domestic CRT TVs came in at maybe 25 to 32 inches. Factor in the typical viewing distances and you'll see the viewing angle was quite narrow compared to the modern, gigantic screens filling your entire field of vision.
There are reasonable practical limits to how thicc a scanline could be (or should be in its simulated reproduction), to its proper relation to the size of the entire video raster, and to how big a part of your field of vision the original images would have covered. If you go much beyond those limits — blowing the image up to cover too large a part of your field of vision just because your screen is so big — the pixel art, with its now fist-sized pixels, will no longer appear the way it was originally meant to be displayed.