
Why CRY?


kool kitty89


All modern encoders seem to use the checkerboarding mechanism, so there aren't any similar-quality encoders without that filtering to compare against.

 

Yes!

 

I have come to learn there are basically a few choices on this (I should do some captures...):

 

1. Jailbars, where the phase of the color burst is not alternated and the pixel clock is some multiple of the color burst. The Atari, Apple ][, and CoCo are great examples.

 

There are variations on this, where the scan line timing works out the same, but the pixel clock isn't aligned with the colorburst. TI Computers did this, as I recall. 256 pixels in the same region of the screen that would otherwise be 160 or 320, if using Atari computers as a reference.

 

It is possible to do an interlaced display with jailbars too, but I don't think I ever saw a production device do that. There probably is one, and I've done one myself, though.

 

When this composite signal is viewed through the S-video luma channel, vertical bars are seen.

 

2. Checkerboard that is static. Color phase changes happen every scan line, but they are done so that a given scan line has a given phase. I think this is what you are referring to above with the Genny. Not sure, but I think I recall seeing that. I don't have a Genny type machine right now.

 

On this one, a static checkerboard can be seen on the luma-only channel. Honestly, this one works kind of great for text and static images, mostly because the color artifact resolution is improved and some artifacts cancel on some displays. Once an object moves though, dot crawl city...

 

I was unaware any devices did this until you mentioned the Genny and other machines.

 

Interlaced and non-interlaced displays are possible.

 

3. Checkerboard that is dynamic, where the color phase alternates. On the luma channel, a "grey" pattern is seen, which is the rapidly shifting checkerboard. On text, the edges of things dot crawl, and moving objects vary, depending on their motion, sometimes looking like the static method above. (NES, C64, many others up through modern units today)

 

On a non-interlaced display, dot crawl is 30 Hz NTSC. On an interlaced one, it's 15 Hz, decidedly annoying, unless the device has a good luma resolution, such that the edges can be blended to minimize this. Modern machines do, older ones do not. That's probably why this was not done early on. Text is hard to view this way, unless one has a long-persistence monitor.

 

4. As mentioned in #1, variations in the pixel clock may improve or degrade the impact of the various color encoding means used. Pixels are either aligned with the colorburst or not.

 

As for resolution, yeah. If the art direction is good, a 640x400 composite display can be useful, particularly with the larger color spaces we have today. I'm consistently impressed with modern console composite outputs. Back in the day, the only source I could see doing that well was a better-equipped TV station's weather map or infographic. Luma drives this mostly.

 

So, on that note, I've still not digested your comment, but now understand the context, and would just blanket say having the better luma control would be the most preferable option on composite displays. Edge blending and transparency can do some serious good on the fringing and dot crawl artifacts, often implying much greater resolution than what is actually presented. We've just now gotten some 8-bit (ish) luma capability on the Prop, and just 16 hues is so much more useful! Didn't realize the impact of that until recently, where before I would have always said "more hues!". Not anymore. (and there is some wisdom in the C64 palette, often overlooked, it seems --and let's not battle that, just recognize it for what it is today)

 

Re: two-channel hue control. Always wanted that. I've seen it on a service menu, and I've seen it on a pot on some displays I've tinkered with. Nice option, but again, one that ordinary people will probably get more bad than good results from, IMHO. I really like the "picture" control found on many sets. It's mostly a gamma, when found in sets that also have contrast. Just having a good grey balance improves things considerably.

 

I run my saturation low on most displays. Was always sensitive to that myself. Newer LCD displays really blow out on saturation! I suspect some cost cutting there, or something, because the response curves are kind of out of control. My newer laptop has this issue. Orange, in particular, just stands out, like "there it is!" Ugly.


Re: dot clock relative to color burst.

 

I've always had trouble communicating that well. Pixels are either aligned with the color burst or not, and the size of them is just a percentage of the color burst clock period as seen on the display. The Atari 160-pixel color mode is basically 1:1, one color burst cycle = 1 pixel. Terms like 8 MHz never mean much until I can sort it out that way, though it's getting better. Reading one of the Apple ][ hardware books that explained double high res helped a lot! That ~7.16 MHz clock now makes a lot of sense.

 

Anyway, I still find that all very wordy and sometimes unclear.


2. Checkerboard that is static. Color phase changes happen every scan line, but they are done so that a given scan line has a given phase. I think this is what you are referring to above with the Genny. Not sure, but I think I recall seeing that. I don't have a Genny type machine right now.

 

On this one, a static checkerboard can be seen on the luma-only channel. Honestly, this one works kind of great for text and static images, mostly because the color artifact resolution is improved and some artifacts cancel on some displays. Once an object moves though, dot crawl city...

 

I was unaware any devices did this until you mentioned the Genny and other machines.

 

The PAL version of the NES also does this.

 

3. Checkerboard that is dynamic, where the color phase alternates. On the luma channel, a "grey" pattern is seen, which is the rapidly shifting checkerboard. On text, the edges of things dot crawl, and moving objects vary, depending on their motion, sometimes looking like the static method above. (NES, C64, many others up through modern units today)

 

That is how NTSC and PAL are intended to work. Originally it was chosen to improve compatibility with black-and-white TVs (the grey pattern is less objectionable than the checkerboard or "jailbars" of the other methods), but it also provides the additional effect of higher-quality separation on modern TVs, since adjacent frames can be mixed together when there's no movement, eliminating color artifacts.


When this composite signal is viewed through the S-video luma channel, vertical bars are seen.

Yes . . . or the same for a TV with the comb filter totally disabled. (which is basically what feeding composite through the s-video luma channel does)

And for other examples, you get full-screen checkerboarding, which looks even worse than the jailbars IMO (especially when interlaced or scrolling) . . . and that's exactly what you get from some crappy 3rd party "s-video" cables for SNES/N64/GC. (as they connect the composite signal to the Y/C pins rather than the console's separate Y/C lines . . . I can only guess that was to cater to the lack of S-video on the SNES2 -and cheap-o manufacturers rebranded the same cables for all of SNES/N64/GC -it also only seems to be done on cables that have both composite and s-video plugs on them)

 

With composite (through a good comb filter), you only see dot crawl (be it checkerboard or jailbars) on the fringes of objects (places of strong luminance change), but without filtering you get full-screen bars/dots.

 

2. Checkerboard that is static. Color phase changes happen every scan line, but they are done so that a given scan line has a given phase. I think this is what you are referring to above with the Genny. Not sure, but I think I recall seeing that. I don't have a Genny type machine right now.

Hmm, OK, though when displayed without comb filtering, I see very strong full-screen jailbars . . . albeit that's as strong as the full-screen checkerboards of other encoders when composite is run through s-video luma. (so it may still have similar encoding otherwise, unlike the really early systems)

 

On this one, a static checkerboard can be seen on the luma-only channel. Honestly, this one works kind of great for text and static images, mostly because the color artifact resolution is improved and some artifacts cancel on some displays. Once an object moves though, dot crawl city...

For moving/scrolling (or interlaced) images you also get the advantage of no animated dot crawl (after all, dot crawl gets its name from the checkerboard pattern swarming on the screen . . . and while column/bar luma artifacts are technically the same type of artifacting, they don't literally show "crawling dots" like checkerboarding)

 

I was unaware any devices did this until you mentioned the Genny and other machines.

Not the Genesis itself (which uses analog RGB natively), but the off-the-shelf Sony/Samsung/Fujitsu video encoders Sega used. So you could technically attach those to any RGB device and output the same sort of composite signal.

 

The Master System used the CXA1145 too, IIRC, but the Genesis definitely used the Sony CXA1145 (on almost all NTSC model 1s, some PAL model 1s, and some NTSC/PAL model 2s), the Fujitsu MB3514 (mainly on PAL model 1s and 2s), the Samsung KA2195D (PAL and NTSC model 2s -and the CD-X), and the CXA1645 on late model 2s and all model 3s.

 

All of those seem to exhibit "picket fence" type luminance artifacts (ie the fringing dot crawl is vertical lines rather than checkerboarding), and all but the KA2195D have very heavy chroma moire/rainbow artifacts on fine dithering/detail of high-luma contrast colors (in 320-wide H40 mode -6.71 MHz dot). The KA2195D only has very faint moire artifacts, but is blurrier and has much stronger dot crawl (picket fencing) on some TVs. (though luma artifacts are about the same on all good TVs I've tried myself, and also tended to look similar with composite fed through s-video -aside from the Samsung one being generally blurrier, followed by the CXA1145, with the MB3514 being darker but sharper and the CXA1645 sharper but not dark)

 

3. Checkerboard that is dynamic, where the color phase alternates. On the luma channel, a "grey" pattern is seen, which is the rapidly shifting checkerboard. On text, the edges of things dot crawl, and moving objects vary, depending on their motion, sometimes looking like the static method above. (NES, C64, many others up through modern units today)

It should be noted that the PCE/TG16's internal video encoder could be toggled in software between checkerboard and non-checkerboard arrangements.

 

And the NES has the added issue of really nasty luma artifacts in general. (much worse than the SNES, Genesis, or Master System running at the same dot resolutions)

 

So, on that note, I've still not digested your comment, but now understand the context, and would just blanket say having the better luma control would be the most preferable option on composite displays. Edge blending and transparency can do some serious good on the fringing and dot crawl artifacts, often implying much greater resolution than what is actually presented. We've just now gotten some 8-bit (ish) luma capability on the Prop, and just 16 hues is so much more useful! Didn't realize the impact of that until recently, where before I would have always said "more hues!". Not anymore. (and there is some wisdom in the C64 palette, often overlooked, it seems --and let's not battle that, just recognize it for what it is today)

Aside from the comments on feasibility of using analog circuitry to convert a custom colorspace to a standard one (rather than doing so digitally -as the Jaguar does with CRY), most of the comments on CRY weren't really directed at you. (more musing in general . . . or if anyone else from earlier in the discussion is still watching it . . . and I made some mistakes/incorrect assumptions in that latter part of my comments too, so I'd need to address that if/when that part of the discussion starts again -and I answered some of my own questions already)

 

However, on that note, here's the actual color component of the CRY colorspace:

http://www.atariage....83#entry1946583

(showed both how CRY would look if 16 bits were used for chroma, and the true limited 256 color values used in the Jaguar's 16-bit CRY -8 bits for intensity, 8 for color, 256 levels of each)

 

The actual colors used are somewhat similar to YIQ in distribution, but are actually directly derived from RGB (a color cube flattened to a square, with white at the center and a bias towards blue- and red-tinted hues over green-tinted hues)

as described here:

http://www.atariage....26#entry1923826

 

That also means that the base colors are not all of the same luminance (at least in terms of human perception -or YIQ or YUV).

 

Somewhat like YCbCr and YIQ (and others), chroma is also separated into 2 elements/axes . . . but they're only approximate in that sense. (and it only matters in terms of actual color blending/shading effects anyway -since all video is output as RGB in the end)

 

I really like the "picture" control found on many sets. It's mostly a gamma, when found in sets that also have contrast. Just having a good grey balance improves things considerably.

The "picture" on an old (late 80s) Zenith I have seems to be brightness control (turning it up makes the whole screen brighter -not paler, brighter- and turning it too high causes luma bleeding and a bulge in the screen -bright scanlines tend to distort wider), and while there's no "contrast" or "brightness" control, there is a designated "black level" control (which seems to be gama/contrast -ie making the image on-screen screen blacker or paler)

 

 

 

 

 

Re: dot clock relative to color burst.

 

I've always had trouble communicating that well. Pixels are either aligned with the color burst or not, and the size of them is just a percentage of the color burst clock period as seen on the display. The Atari 160-pixel color mode is basically 1:1, one color burst cycle = 1 pixel. Terms like 8 MHz never mean much until I can sort it out that way, though it's getting better. Reading one of the Apple ][ hardware books that explained double high res helped a lot! That ~7.16 MHz clock now makes a lot of sense.

What I was talking about was several separate issues: one is the dot clock relative to the colorburst, another is higher resolutions in general degrading/artifacting more in composite (with different problems for chroma and luma), and the third was visible screen size/resolution and pixel shape on-screen.

 

The latter issue of screen/pixel size/shape/aspect ratio is independent of the composite video issue in general, but I brought it up to address the "320 pixel" comment, since that actually doesn't define the pixel size/width per se (you could technically have 320 pixels per line at a variety of different resolutions -and as such some at direct multiples of the NTSC colorburst and others higher or lower -Genesis is less than 2x, Amiga and A8 are exactly 2x, and ST/C64 are more than 2x).

 

Comparing dot clock relative to colorburst is pretty simple . . . NTSC colorburst is 3.58 MHz, so any resolution that's a direct multiple of that will tend to have solid artifacts in general, regardless of encoder (or little/no artifacts for 3.58 MHz dots).
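
To put numbers on that, here's a quick sketch (Python, just for illustration) of the ratio test, using clocks mentioned in this thread:

```python
NTSC_BURST_MHZ = 3.579545  # NTSC color subcarrier frequency

def pixels_per_burst_cycle(dot_clock_mhz):
    # Ratios locked to the subcarrier (1.0x, 1.5x, 2.0x...) give stationary
    # artifact patterns; unrelated clocks make the patterns drift/shimmer.
    return dot_clock_mhz / NTSC_BURST_MHZ

for clock in (3.58, 5.37, 6.71, 7.16, 8.0):
    print(f"{clock:5.2f} MHz -> {pixels_per_burst_cycle(clock):.2f} pixels per cycle")
```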

 

However, for actual screen width and pixel shape in NTSC (or any 60 Hz 15.7 kHz 4:3 monitor with NTSC-like calibration), it's a different story.

For that, you need to know the sync rate of the display, general calibration of the display (and most TVs have that very close to the same), and the dot clock of the device generating the video signal. For TVs, the first area is fixed, the 2nd area is also usually fixed (aside from manual re-calibration of overscan), and the 3rd area will vary by game/computer/etc device.

To get the total number of dot cycles per line, simply divide the dot clock by the H-sync rate: ie a 7.16 MHz clock would be 7160/15.7 = 456. (that's including hblank)

For actual visible screen area on TVs with average NTSC overscan calibration (ie 224/448 lines), you can fairly simply calculate that as 3/4 of the total H-time of a scan line . . . so for 7.16 MHz, that's 456x3/4=342 pixels. (so, in general, that would mean there's roughly a 22-pixel-wide border for Amiga/A8/etc games in "320" wide mode -and a much bigger one for the Apple II or CoCo . . . or for 160 width on the A8, there's an 11-pixel-wide border -in both cases ~6.4% of the overall screen width -that 3/4 width value is rounded though . . . and not all TVs will show exactly 224/448 lines either -some more and some less, some older/cheaper TVs doing less in particular, and some newer/well-calibrated sets will have no overscan at all)

 

Or for TVs calibrated with zero overscan for the full normal NTSC resolution (ie 480i lines -or 240p, with proportionally reduced horizontal overscan), that 3/4 width figure would be closer to 80% width. (so an Amiga would have roughly 366 pixels visible to the edge of the screen -or 46 pixels of border at 320 pixels wide)
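
That arithmetic is simple enough to script; a small sketch (Python, using the same rounded 15.7 kHz H-sync figure as above):

```python
H_SYNC_KHZ = 15.7  # NTSC horizontal rate, rounded as in the text

def total_dots_per_line(dot_clock_mhz):
    # dot clock divided by the H-sync rate = dot cycles per line (hblank included)
    return dot_clock_mhz * 1000 / H_SYNC_KHZ

def visible_dots(dot_clock_mhz, visible_fraction=0.75):
    # ~3/4 of the line is visible with average overscan calibration;
    # pass 0.8 for the zero-overscan case described above
    return total_dots_per_line(dot_clock_mhz) * visible_fraction

print(round(total_dots_per_line(7.16)))  # 456 dot cycles per line
print(round(visible_dots(7.16)))         # 342 visible -> ~22 px of border at 320 wide
print(round(visible_dots(3.58)))         # 171 visible -> ~11 px of border at 160 wide
print(round(visible_dots(7.16, 0.8)))    # ~365 visible with zero overscan
```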

 

Then there's the actual pixel aspect ratio, and that's directly related to dot clock as well. A higher clock will mean narrower pixels and a lower one will mean wider. For NTSC (or any 60 Hz/15.7 kHz monitor with similar calibration) without interlacing, 6.25 MHz is very near to perfectly square (which, on a ~224 line set, will display ~298 visible pixels per line . . . or 320x240 with zero overscan, ie corresponding directly to 4:3), or 12.5 MHz for interlaced displays.

However, almost no systems support true square pixels in NTSC, though some come fairly close. The common "256 pixel" resolution of TMS9918, NES, SMS, Genesis (low res), TG-16 (low res), Playstation (low res), etc gives pixels that are moderately wider than square (roughly 1.16:1), the Neo Geo's 6 MHz clock is slightly wider than square (1.04:1), the Genesis (and Saturn) have "320" wide modes at 6.71 MHz that are narrower than square (roughly 0.93:1), the common 7.16 MHz dot modes (A8, Amiga, TG16, Saturn, PSX, etc) are much narrower than square (at roughly 0.87:1), and the ST/C64's 8 MHz is even narrower (0.78:1).
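
All of those aspect figures fall out of one division against the ~6.25 MHz square-pixel clock; a quick sketch that reproduces them:

```python
SQUARE_CLOCK_MHZ = 6.25  # ~square pixels for non-interlaced NTSC, per the text

def pixel_aspect(dot_clock_mhz):
    # pixel width relative to height: >1 is wider than square, <1 narrower
    return SQUARE_CLOCK_MHZ / dot_clock_mhz

for name, clock_mhz in [("256-wide (NES/SMS/etc)", 5.37), ("Neo Geo", 6.0),
                        ("Genesis H40", 6.71), ("A8/Amiga/TG16", 7.16),
                        ("ST/C64", 8.0)]:
    print(f"{name}: {pixel_aspect(clock_mhz):.2f}:1")
```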

 

 

Of course, custom calibration will allow virtually any pixel/screen aspect ratio, 50 Hz PAL has a higher vertical resolution compressed into the same 4:3 display, and anamorphic modes on modern SDTVs (and HDTVs) compress that even further, such that (for non-interlaced) 50 Hz has square pixels at slightly less than 7.5 MHz, and anamorphic NTSC at roughly 8.33 MHz. (so ST and Amiga on PAL TVs look fairly close to square -Amiga slightly wide, ST moderately narrow, and ST is only slightly wide in anamorphic NTSC displays)

 

Varying pixel shapes lead to major problems in graphics design for computers/consoles . . . either you assume square pixels and accept moderate (or heavy) distortion, or specially design the art for those pixel shapes. (but in the latter case you'd still have problems for PAL vs NTSC . . . though if you optimized 256/5.37 MHz for NTSC, it wouldn't look quite as stretched in PAL as if square pixels were assumed) One nice thing about the 6.7 MHz mode of the Genesis (etc) is that it's almost exactly between PAL and NTSC square pixels. So both will look wrong (too narrow in NTSC, and too wide in PAL), but not horribly far off in either case.

And obviously anything lower than 5.37 MHz (like 3.58 or 4 MHz of A8/C64) will be far wider than square, so you'd need good art design to avoid super distorted graphics.

 

There's also the common 320x200 4:3 non-square pixel VGA resolution that was common to games in the early 90s, and there too you ended up with either distorted graphics or special art design (or non-square pixel rasterization for 3D/pseudo 3D stuff). Some games even had issues with some graphics compensating and others not. (both on consoles and computers)

 

 

Non-square pixels were/are also a big complaint for digital video editing/encoding/etc at SD resolutions . . . that could have been avoided if resolutions catered to square pixels for TV resolutions, but they ended up catering to NTSC color clock multiples instead, which has some advantages for composite/s-video output (though not component/RGB) . . . albeit DVDs running at 320x480i would still be non-square. (though that's not a very common resolution used, and it could have exactly 2:1 pixels if the dot clock was right -and 2:1 is pretty simple to deal with . . . and that's what many VCS/C64 emulators also end up doing, stretching 160x192 to 320x192 square pixels -which makes them much wider than NTSC would show)


most of the comments on CRY weren't really directed at you.

 

Oh, likewise! Was just musing here too.

 

I had a Zenith like that. Strange nomenclature. Liked the older Zenith "Chroma Color" TVs. If one spent time with the many adjustments, they actually delivered a great picture off of composite. Required regular fiddling though.

 

I may well be mistaken on the Genny, thinking you had described a fixed checkerboard. In any case, it's an interesting overall compromise.

 

Re: NES. Yeah, they stand out don't they? One of the consoles I always rolled the sharpness off when viewing. IMHO, the coarse selection of luma in the palette is in part responsible for that. The larger steps tend to introduce harmonics, which just stand out big. That's been my observation anyway.

 

Re: How it was intended to work.

 

Yep. Good, broadcast color programming looked reasonable on a monochrome TV. Some loss of detail though. I still remember high resolution monochrome broadcasts as seen on a sharp TV as a kid. Color programming wasn't always being done in the early 70's, meaning once in a while one could see the original sharp resolution.


most of the comments on CRY weren't really directed at you.

 

Oh, likewise! Was just musing here too.

 

I had a Zenith like that. Strange nomenclature. Liked the older Zenith "Chroma Color" TVs. If one spent time with the many adjustments, they actually delivered a great picture off of composite. Required regular fiddling though.

Mine is an Advanced System 3, one of the higher-end variants of that line too (it's a 1988 set with S-video, stereo, and composite/stereo loop-out/passthrough, though the line went from RF only to composite to s-video -I saw one on Craigslist recently that was nearly identical, but RF only)

And the picture is really nice aside from some luminance ghosting (it's in s-video too, and I'd guess related to aging capacitors). Composite looks good, and RF decoding is good too. (I can't seem to find manual fine-tuning RF controls like some other sets of the time, but it has "fixed" and AFT search modes -and the AFT works really well . . . including for the often-problematic Sega RF modulators)

 

I may well be mistaken on the Genny, thinking you had described a fixed checkerboard. In any case, it's an interesting overall compromise.

I know there's some good pictures of CXA1145/1645 dot crawl on Sega-16 (and of the chroma moire), but I'll have to find them again.

 

Re: NES. Yeah, they stand out don't they? One of the consoles I always rolled the sharpness off when viewing. IMHO, the coarse selection of luma in the palette is in part responsible for that. The larger steps tend to introduce harmonics, which just stand out big. That's been my observation anyway.

The RGB based consoles/computers have pretty high contrast too though.

 

But, yeah, looking at the old Atari palettes in particular, they have lots of shades, but not a massive luma range from high to low (except for black/white). The fact that all the Atari colors are somewhat desaturated would limit color artifacts to some degree too. (or at least bleeding/oversaturation issues -probably not chroma moire/artifacting issues though)

Granted, the only systems to use that palette worked only in resolutions of direct multiples of the NTSC color carrier, so such artifacting would be different anyway.

 

Also, the NES has no external luminance line, so unlike the Atari systems, there's no possibility of external S-video modifications. (the only option for improved video would be obtaining one of the RGB PPUs made for PlayChoice-10 arcade boards . . . I don't think those were used for French NESs, but that would be interesting to know -it would make more practical sense than external PAL composite to RGB conversion as suggested earlier in this thread, since SECAM color encoding is problematic in general -at least for devices intended for NTSC or PAL output)

But back on the CRY topic in particular:

Why have the colorspace designed around only shading towards black rather than making intensity 0 black and 255 white for all hues, with 127 or 128 at "normal" saturation and intensity for any given hue? (128 shades is still pretty damn smooth, and allows a lot more flexibility for lighting effects)

This would be a cool feature, but it's not obvious to me how to implement it.

 

I don't want this thread to spiral out into a discussion of 3D lighting, but your approach seems to mix two unrelated kinds of blending: Additive and multiplicative. These are different effects used at different times.

 

Multiplicative blending is for standard lighting, including gouraud shading. If you have a pure blue floor (say 0,0,150), no matter how brightly you light it, it saturates at pure blue (0,0,255), not white.

 

Additive blending is for overlaying objects. If I want to place white fog over distant pixels, I might add white (150,150,150) to the blue floor (0,0,150) yielding a washed out whitish blue (150,150,255). Same trick is used with overlaying explosions, rain (subtraction), and other particle effects that are semi-transparent.
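
In code, the two operations look like this (a minimal RGB sketch using the same example values):

```python
def mul_light(color, k):
    # multiplicative lighting: scale each channel, saturating at 255
    return tuple(min(255, int(c * k)) for c in color)

def add_blend(color, overlay):
    # additive blending: per-channel saturating add
    return tuple(min(255, c + o) for c, o in zip(color, overlay))

blue_floor = (0, 0, 150)
print(mul_light(blue_floor, 2.0))              # (0, 0, 255): saturates at pure blue
print(add_blend(blue_floor, (150, 150, 150)))  # (150, 150, 255): whitish-blue fog
```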

 

In CRY, the blitter does your additive blending, and Y does your multiplicative. In the PS1, the blitter can do both. The PS1's approach uses a lot more hardware and is slower than the Jaguar at shading pixels, but it's more versatile.

 

So, before we extend the range of Y, realize that this is only good for simulating VERY bright lighting conditions, not for fog, explosions, and other overlay effects. In other words, a lot of what we'll need to do is similar to the high dynamic range capabilities that appeared in 21st century consoles.

 

The problem with extending the range of Y comes down to how gouraud shading and multiplicative lighting work. Y must have a linear relationship to perceived intensity, or shaded polygons will look blotchy and 'uneven'. CRY does this already. If I have red-purple (255,0,127) and I cut Y in half, I have half the perceived brightness and the exact same red-purple color (127,0,63).

 

Now how can we maintain this while increasing the range of Y? First, let me try Y=2.0 with my red-purple. On my first try, I don't have red-purple anymore, I have pure purple (255,0,255).

I was reading through this again, and it got me thinking . . . the Jaguar already has hardware to interpolate the shades/intensities (towards black) of the 256 base CRY colors. (which themselves are defined in a 256x24-bit table in ROM)

 

And, as it is, it derives all 256 shades through 8-bit multiplication (each base color scaled by an intensity from 1 to 255 to achieve the 256 shades)?

 

I was thinking more in terms of linear addition/subtraction based shades rather than multiplicative . . . ie you had the 256 indexed CRY colors in 24-bit RGB and subtracted 1, 1, 1 to achieve the next shade darker (and clamp each RGB byte at zero, obviously), and thus achieve a very nearly linear shading system (since most, if not all of the CRY colors are very close to 255 in at least 1 RGB channel) at least compared to unweighted RGB shading.

 

Except that wouldn't really work properly as far as allowing darker shades of the same color at all levels (or as close as possible), and instead, you'd gradually desaturate towards black instead. (you'd lose chroma intensity as well as luma intensity as it darkened -which doesn't happen with the multiplicative method . . . well, the 6 primary colors -and white- would shade basically the same, and a few other colors close to those wouldn't be too far off, but others would be much different from CRY's multiplicative shading)

 

However, the way light is actually perceived by the human eye tends to desaturate colors at lower light levels in general (and decrease the color temperature -but that's a separate issue), so that desaturation "problem" could actually make lighting effects more realistic in general. (that same property can be taken advantage of to reduce some of the limitations of paletted 256-color look-up based shading -a la Doom, etc- since you could design the palette -and lighting tables- around desaturating towards black/gray, and thus have fewer trade-offs for colors vs shades -and less posterized shading- within a limited palette)

And, actually, YUV (and similar colorspaces) generally desaturate towards white/black at high/low luminance.

 

And, as such, if an additive lighting model were used, you'd only need 3 8-bit adders to handle CRY to RGB conversion (rather than multipliers), and a mode allowing shading towards white should also be possible.

For a black to white shading mode, rather than subtracting or adding 1, 1, 1, you'd set normal intensity as 128, and add or subtract 2, 2, 2 for each successive shade, thus desaturating towards black in 1/2 the number of shades, but allowing additional shading/lighting effects that desaturated towards white as well as black. (clamping at 7 bits for "normal" shadow-type lighting effects, and reserving the upper 127 shades for fading towards white and additive/subtractive based blending effects, or also allowing a mode that didn't clamp at 7 bits for any lighting effects and thus freed up the use of super bright/pale/desaturated colors for normal use -at the expense of having no cap on bright lighting)

 

Albeit, since many of the base 256 CRY colors are already fairly near to max brightness on all 3 RGB channels, you'd get many colors that only get a few more shades before capping at 255, 255, 255. (albeit the same would be the case for shading those same colors in 24-bit RGB)

 

 

 

On the other hand, for the existing CRY scheme in the Jaguar, you could potentially also design a game to work with additive/subtractive blending effects rather well if the colors (and intensities) used for the textures/models were carefully chosen, particularly that none were too close to full intensity. And thus you could perform additive blending effects by simply averaging the X/Y CR values and adding the Y values. (which you can technically do in any case, but colors near/at max intensity won't brighten any further -albeit that's the same for RGB shading too)

 

Or in the extreme, if you limited all colors to be shaded no brighter than the 128th intensity level (and designed a game around 50% intensity being max/normal brightness) and thus reserved the upper 127 shades for additive lighting effects. (or smoothly shading towards white for that matter)

Such games would have relatively dark colors in general, and either need to be stylized as such, or require the user to adjust the TV's brightness/contrast. ;) (which you have to do with some games as it is . . . like the oddly dark Saturn version of Tomb Raider -well, technically not, since you can change the gamma in the game option settings)

That would be even more useful if the blitter (and OPL) could clamp at 7 bits in hardware, to prevent shading too bright when you didn't want to. (or can the Jag already do that . . . maybe to support the 15-bit RGB/CRY "variable mode")

 

Or is the problem that the blitter can't do multiplicative blending at all, even 50/50 averaging? (which would be needed for averaging . . . so all translucency effects, be it 50/50 blending, or saturation based with intensity added and color averaged) In which case, it wouldn't be a limitation of CRY at all, but of the blitter itself. (in which case, you'd have to resort to using the GPU instead)

Which would also imply the blitter is incapable of averaging Y elements as well. (requiring the GPU to handle that too -for framebuffer rendering, not overlaid OPL objects)

Another thing that should already be possible is limited colored shading effects with addition on the X/Y color nybbles. (adding increments/multiples of 1,1 would shade towards yellow; -1, 1 towards cyan; 1, -1 towards red, and -1, -1 towards blue) So shading towards any of the colors at the corners of the CRY square.


I was thinking more in terms of linear addition/subtraction based shades rather than multiplicative . . . ie you had the 256 indexed CRY colors in 24-bit RGB and subtracted 1, 1, 1 to achieve the next shade darker (and clamp each RGB byte at zero, obviously), and thus achieve a very nearly linear shading system (since most, if not all of the CRY colors are very close to 255 in at least 1 RGB channel) at least compared to unweighted RGB shading.

This doesn't work for gouraud shading. You correctly identify the problem with color shifting later in your post, but the more fundamental problem is that additive lighting causes intensity to be non-linear.

 

3D shading requires a linear intensity function. Imagine a shaded polygon zooming toward you. If the shading is linear, your eye will accept that the polygon is growing in size, because the various shades will spread across the surface in an even way. If the shading is non-linear, it will appear that the lighting is changing as it moves toward you, as the polygon grows disproportionately brighter on one side. The same thing happens as any 3D object moves in any way, and even in static frames. Non-linear intensity creates lumpy blobs of light that do not flow naturally from object to object or frame to frame.

 

I'll give you an example: Let's say I have a blue-green polygon (0,127,255). The human perceived luminance, Y, is 111. Now let's cut the brightness to 0.5 using subtraction instead of multiplication: (0,0,127). Obviously this is a totally different color, but worse, the Y is now 14, or 0.125 brightness! This is hugely non-linear. With CRY, or any shading approach that uses multiplication, cutting lighting in half will result in a color of (0,63,127) - and the Y will be 55.5, exactly 0.5x.
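
A quick numerical check of that argument (using standard Rec.601 luma weights, which differ slightly from the figures quoted above, though the linearity conclusion is identical):

```python
def luma(rgb):
    # Rec.601 approximation of perceived brightness
    r, g, b = rgb
    return 0.299 * r + 0.587 * g + 0.114 * b

c = (0, 127, 255)
half_mul = tuple(v // 2 for v in c)           # multiplicative halving: (0, 63, 127)
half_sub = tuple(max(0, v - 128) for v in c)  # subtracting instead: (0, 0, 127)

print(luma(c))         # ~103.6
print(luma(half_mul))  # ~51.5 -> almost exactly half: linear
print(luma(half_sub))  # ~14.5 -> ~0.14x instead of 0.5x: badly non-linear
```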

 

However, the way light is actually perceived by the human eye tends to desaturate colors at lower light levels in general...

Yes, but you have to generate the physically correct color, and leave the human perception to the human. That's the only way to get a result that humans won't find distracting. (Of course, some cheats are way less distracting than others, but for lighting, you have to get intensity right.)

 

That would be even more useful if the blitter (and OPL) could clamp at 7 bits in hardware

It only clamps at 8-bits or 4. This doesn't hurt RGB/CRY mode, since the LSB is used for selection.

Or is the problem that the blitter can't do multiplicative blending at all, even 50/50 averaging?

The blitter can't do multiplicative blending, but it can average by adding 2 half-bright images. So can the OPL.

 

The reason CRY exists is because the multiplication is crucial to make shading work, and the designers wanted to shade 4 pixels per cycle. Other systems like the Playstation do multiplication in the blitter, and thus can shade at most one pixel at a time, usually less! The Jaguar wasn't content with being 'that slow'.

 

To achieve 4 pixels per cycle would use 4 times the multipliers, and multipliers are large and expensive. So, the Jaguar does just one multiply as each pixel is sent out to the TV, instead of 4 when they are shaded. A huge benefit is that they get "8-bit smooth" gouraud shaded polygons with a 16-bit framebuffer, while all those that put multiply in the blitter are stuck with inferior 5-bit shading at 16-bits.
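
Functionally, that output stage amounts to something like the sketch below (the 256-entry base table is filled with a placeholder here, and whether the hardware scales by 255 or just shifts by 8 is an assumption):

```python
CRY_BASE = [(255, 255, 255)] * 256  # placeholder for the 256x24-bit ROM table

def cry_pixel_to_rgb(pixel16):
    # high byte picks one of 256 base colors; the low (Y) byte scales it --
    # one multiply per channel as the pixel heads out to the DACs
    color_index, y = pixel16 >> 8, pixel16 & 0xFF
    r, g, b = CRY_BASE[color_index]
    return (r * y // 255, g * y // 255, b * y // 255)
```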

 

In the end fast shading was pointless, in light of what 3D games actually wanted to do. And 8-bit shading isn't as impressive as texture mapping. But the Jaguar design is completely based on fast high-quality shading. If the designers didn't want that, we'd have a different console!

 

- KS


I was thinking more in terms of linear addition/subtraction based shades rather than multiplicative . . . ie you had the 256 indexed CRY colors in 24-bit RGB and subtracted 1, 1, 1 to achieve the next shade darker (and clamp each RGB byte at zero, obviously), and thus achieve a very nearly linear shading system (since most, if not all of the CRY colors are very close to 255 in at least 1 RGB channel) at least compared to unweighted RGB shading.

This doesn't work for gouraud shading. You correctly identify the problem with color shifting later in your post, but the more fundamental problem is that additive lighting causes intensity to be non-linear.

Would that also mean that additive blending using linear intensity would also look wrong? (ie does that screw up the possibility of adding Y and averaging chroma for blending effects in CRY)

 

Or, if that sort of blending does already work OK (within the limits of CRY), then it might be possibly useful to support a mode with the upper 127 shades available only for additive blending/shading effects. (ie, smooth shading effects are limited to 7-bits -shading from black up to "normal" intensity, but additive blending/shading uses the entire 8-bit range, with the upper section extended with simple 1,1,1 adding used for each "brighter" color beyond "normal" intensity)

That would only work for blending in white, so not as flexible as RGB, but it should still be useful for some effects.

 

 

Plus, if worst came to worst (and you couldn't afford GPU overhead for added effects), you could always make some more limited compromises for some effects. (like for fading towards white, you could just do flat shaded blending towards white -like Checkered Flag does) And limit blending to simple averaging.

 

3D shading requires a linear intensity function. Imagine a shaded polygon zooming toward you. If the shading is linear, your eye will accept that the polygon is growing in size, because the various shades will spread across the surface in an even way. If the shading is non-linear, it will appear that the lighting is changing as it moves toward you, as the polygon grows disproportionately brighter on one side. The same thing happens as any 3D object moves in any way, and even in static frames. Non-linear intensity creates lumpy blobs of light that do not flow naturally from object to object or frame to frame.

 

 

 

I'll give you an example: Let's say I have a blue-green polygon (0,127,255). The human perceived luminance, Y, is 111. Now let's cut the brightness to 0.5 using subtraction instead of multiplication: (0,0,127). Obviously this is a totally different color, but worse, the Y is now 14, or 0.125 brightness! This is hugely non-linear. With CRY, or any shading approach that uses multiplication, cutting lighting in half will result in a color of (0,63,127) - and the Y will be 55.5, exactly 0.5x.

Ah, that makes sense . . . and I guess the case of 256 color LUT based shading/lighting could still work around that (while exploiting the desaturating color for darker shades) by using optimized color selections. (since all values would be defined by the palette and shading tables)

 

And you technically could have more optimized pre-calculated colors for a 16-bit direct/interpolated (to RGB) colorspace like CRY, but that would require a much larger table in ROM. (65,536x24-bit, or 192 kB of ROM vs 768 bytes for 256x24-bit) . . . You could do that in software too, to some extent (not limited to the Jag obviously), rendering in a custom colorspace and using look-up as the final step in rendering to convert to the real colorspace -be it direct RGB or otherwise. You could also do that with simple dithering too (like for indexed 8bpp, use a 16-bit table rather than 8-bit to define pairs of pixels of different colors as single 1/2-res pixels, or the same for a direct colorspace, but using words larger than 8 bits for higher depths).

 

That takes up some memory (as do all table-based systems) and CPU/coprocessor resource . . . bad for a system with very limited memory and/or more dedicated hardware to accomplish the same tasks, though more attractive for systems that have to render everything in software anyway -and would take a lot of overhead to software-shade in RGB. (so, for the time, it might have been attractive for computers or consoles with no hardware shading support) And actually, if you were using 15/16-bit RGB as the output, and especially if you worked within a more limited overall color count (like 256 colors with 32 shades -so an 8kx16-bit table) it would even be practical for systems with pretty limited memory (like the GBA or 32X -assuming you didn't want to use ROM -especially the slow ROM common on the 32X) . . . and potentially much faster than manipulating 5-5-5 (or 5-6-5) RGB directly with the CPU. (there are some alternate options to speed that up, like the soft-SIMD used for Yeti3D, but that case limits color to effectively 12-bit RGB . . . a pretty big trade-off -and still the overhead of multiplicative shading in software, just less overhead for manipulating 5-bit data, and that game/demo doesn't actually do gouraud shading -it does lighting effects somewhat closer to Doom in that it's not per-polygon, but also not gouraud shaded)
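
For reference, the 256-color LUT shading described here boils down to a precomputed table along these lines (a sketch with a hypothetical palette and a naive nearest-color match):

```python
def nearest_index(rgb, palette):
    # naive nearest match; a real tool would weight channels perceptually
    return min(range(len(palette)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(palette[i], rgb)))

def build_shade_table(palette, shades=32):
    # one row per light level, mapping every palette index to its shaded
    # stand-in -- a la Doom's COLORMAP; the darkening/desaturation curve
    # is up to the palette designer
    table = []
    for level in range(shades):
        k = level / (shades - 1)
        table.append(bytes(nearest_index(tuple(int(c * k) for c in rgb), palette)
                           for rgb in palette))
    return table  # shaded_pixel = table[light_level][color_index], a plain lookup
```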

 

Albeit that's still rather moot in the context of the Jaguar, since you do have hardware support for that already.

 

Or is the problem that the blitter can't do multiplicative blending at all, even 50/50 averaging?

The blitter can't do multiplicative blending, but it can average by adding 2 half-bright images. So can the OPL.

Is it possible to average only the 4-bit X/Y color elements/channels and add the intensity bytes of the pixels? (thus allowing some degree of additive blending without added overhead)

 

The reason CRY exists is because the multiplication is crucial to make shading work, and the designers wanted to shade 4 pixels per cycle. Other systems like the Playstation do multiplication in the blitter, and thus can shade at most one pixel at a time, usually less! The Jaguar wasn't content with being 'that slow'.

 

To achieve 4 pixels per cycle would use 4 times the multipliers, and multipliers are large and expensive. So, the Jaguar does just one multiply as each pixel is sent out to the TV, instead of 4 when they are shaded. A huge benefit is that they get "8-bit smooth" gouraud shaded polygons with a 16-bit framebuffer, while all those that put multiply in the blitter are stuck with inferior 5-bit shading at 16-bits.

 

In the end fast shading was pointless, in light of what 3D games actually wanted to do. And 8-bit shading isn't as impressive as texture mapping. But the Jaguar design is completely based on fast high-quality shading. If the designers didn't want that, we'd have a different console!

Yes, so nice, fast, smooth shading with considerably less logical complexity than coarser shading at similar bit-depth . . . and it probably would have been pretty impressive if used in conjunction with other mainstay features of that era (ie texture-mapped games with silky smooth shading) . . . CRY should also do better filtered/interpolated rendering than 15/16-bit RGB (or possibly 4-4-4-4 RGBA -as the N64 supported). That's something the Jaguar II could have shown off much more, though the Jag itself could have too, with more Doom-type games . . . and/or voxel engines. (CRY seemed to facilitate the smooth interpolation used for Phase Zero, though more primitive voxel engines wouldn't show that off as much -still probably would have been a lot more impressive than the limited all-polygon engines in use for many 3D Jag games)

 

Plus, low-res rendering in general could have been a better trade-off over choppy and/or low-detail/untextured games. (a 160x100 game at ~15 FPS with full shading and significant use of textures would probably tend to be a lot more impressive than a 320x200 game at ~10 FPS with no textures and possibly flat shading) I'm talking stretched to full-screen . . . tiny screen sizes tend to get far more complaints than blocky pixels. (big mistake on 3DO Doom not running at low detail . . . or AvP on the Jag for that matter -especially since halving horizontal resolution helps even more for ray-casting column/height maps)

 

 

And, of course, the PSX (and many PC video cards -and a few 256-color software renderers) worked around limited color with realtime dithering to smooth out shading at the expense of looking grainy. (much better at higher resolutions . . . and probably would have been a lot nicer if it was defined to be used on a per-polygon basis and/or if the dithering threshold could be set -given how coarse some dithering ended up, a better compromise of posterization and dithering would have been nice -especially since most systems used simple ordered/pattern-block dithering -though X-Wing and TIE Fighter look more like error-diffusion dithering)

It wasn't forced either, but many games did it . . . the N64 standard microcodes didn't seem to support that though. (or at least no games used it AFAIK -then again, they had the option for RGBA)

Then there's the issue that the Jag's biggest limitations on the market had relatively little to do with technical design aspects or capabilities, and far more to do with Atari's financial/management/PR problems of the time, with massive competition on the market on top of that (something that really hit Sega and NEC as well, with their other/internal problems -in some respects similar to Atari's, but not in others- and Nintendo's own set of problems -arguably arrogance/stubbornness more than anything else).

 

Atari Corp really made their big mistakes years before the Jaguar (especially between '89-91 as the ST market stagnated and no new home console was released -let alone a good/well-supported one . . . and the Lynx had its share of problems on the market too). Albeit, if they were to have any chance with the Jaguar at all on the mass market (even in a budget niche position somewhat like the 7800), they'd have had to risk far more investment spending to be remotely competitive. (more risk for potential reward)

Granted, taking the conservative route ended up allowing the Tramiels to liquidate Atari Corp for profit in '96 vs potential bankruptcy of the company with failure after heavy investments. (even worse if many of those investments came from the Tramiels' private funds rather than 3rd-party loans)

 

Taking that sort of risk would have made far more sense ~1989-91, when the company was in better shape. (financially and PR wise -with users, investors, and shareholders)

 

Well, that and they really needed good (sustained) management . . . which seemed pretty good (overall) from '84-88, but went into decline ~1989 (around the time Jack and Mike Katz left), and the financial problems obviously exacerbated management even further. (feeding into that decline)


Multiplicative blending is for standard lighting, including gouraud shading. If you have a pure blue floor (say 0,0,150), no matter how brightly you light it, it saturates at pure blue (0,0,255), not white.

 

Additive blending is for overlaying objects. If I want to place white fog over distant pixels, I might add white (150,150,150) to the blue floor (0,0,150) yielding a washed out whitish blue (150,150,255). Same trick is used with overlaying explosions, rain (subtraction), and other particle effects that are semi-transparent.

 

In CRY, the blitter does your additive blending, and Y does your multiplicative. In the PS1, the blitter can do both. The PS1's approach uses a lot more hardware and is slower than the Jaguar at shading pixels, but it's more versatile.

 

The main reason I don't get 4R4G4B4Y is that it seems to have nearly the same color expressiveness as CRY. That is, the main downside of CRY is that 4C4R makes big blocky color bands. But the color bands with 4R4G4B are just as blocky and chunky.

 

Also, CRY does a pretty good job of maximizing unique colors and spreading them out evenly, where RGBY distributes colors less evenly -- try plotting them both if you want to see what I mean. With people already complaining about CRY being chunky, this wouldn't help.

 

If you're more excited about RGBY because it allows additive effects to be more like they were on the Playstation, saturating toward white, it would get you this, but the tradeoffs seem kind of lousy.

 

It would be less hardware to just do proper saturating on 15/16-bit RGB. Of course you wouldn't get that smooth CRY lighting trick, but if you're resorting to RGBY you probably didn't want that anyway!

I was thinking about this again, and realized I made some wrong arguments for 4-4-4-4 RGBI/RGBY in the context of the Jaguar (and obviously went off topic with more generalized comments on RGBA/RGBI).

 

The Jaguar uses CRY for several different reasons already pointed out: smooth intensity shading, significantly decreased logical complexity of the blitter while allowing very fast gouraud shading, and custom optimized 256 color selection that spread color values reasonably well and allowed for decent logical color blending via simple averaging.

 

It might have been practical to add support for blitter addition on 5/6-bit boundaries (to cater to 5-5-5/5-6-5 RGB), but that would only have been good for additive blending/shading effects (no multiplicative lighting/shading).

As it is, 15/16-bit RGB modes are really limited on the Jaguar . . . no hardware support for any effects. (and software effects on the GPU being pretty intensive as well) The 24-bit RGB modes are pretty limited too, non-functioning OPL color look-up and no intensity channel for the blitter to use . . . but 24-bit RGBY would have been really nice (and use no more memory/bandwidth than the existing 24-bit modes), let alone if the CLUT was functioning for 32-bit entries. ;)

 

 

So, with 4-4-4-4 RGBY, you not only have a system that can be manipulated on 4-bit boundaries (already supported by the blitter) and functioning for RGB-style additive/subtractive blending/shading effects, but you have 4-bits of multiplicative lighting/shading retained as well. (granted, it's coarse at 16 shades, but a lot better than nothing at all -which is what you get in 15/16-bit RGB on the Jaguar . . . or 24-bit RGB for that matter -though the latter could have been addressed by supporting 8-8-8-8 RGBY . . . basically behaving like CRY, but directly specifying the RGB values rather than using a fixed 8-bit indexed color set)

So obviously a much more useful format to complement CRY than 16-bit RGB . . .

 

Just use the existing multipliers in the pixel path, but use 4-4-4 RGB values multiplied by the 4-bit intensity value and output the lower 8 bits rather than clipping to the upper 8-bits. (since you'd never overflow 8-bits from a 4x4 bit multiply) You also wouldn't get rounding/truncation errors like CRY (or 8-8-8-8 RGBY for that matter), though that's obviously due to using a low res color space interpolated to a high-res output colorspace. (8-8-8-8 RGBY would do the same if using 16-16-16 RGB color DACs ;) -albeit it would probably be a lot cheaper to just use color DACs specifically catering to RGBY in that case ;))

 

Intensity has to be linear and have 0 as black . . . so 0-15 is the only option there, but for the RGB values, it could make more sense to use 1-16 or 2-17 instead (both would still avoid overflow in 8-bits) . . . 1-16 would probably be the best compromise, with bright white lighting to F0F0F0 at max intensity (pretty close to true white) and the darkest shade of gray lighting to 0E0E0E at max intensity. (0-15 would be unattractive as you'd end up with redundant colors and a bias towards black, while 2-17 would be biased towards white instead -though certainly more attractive than 0-15 and allows saturation blending to FFFFFF, but F0F0F0 of 1-16 should be close enough that that shouldn't be a big issue -all 3 cases have 4096 colors with 16 linear shades towards black -except pure black in the 0-15 example, since all shades will be 000000- so the main issue is what selection of overall colors would be preferable . . . granted, you could support all 3 cases too -to cater to games/levels with generally brighter or darker colors)
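
A sketch of how that proposed (entirely hypothetical) 4-4-4-4 RGBY decode could work, with a bias parameter covering the 0-15/1-16/2-17 variants:

```python
def rgby4444_to_rgb(pixel16, bias=1):
    # nibble order (R, G, B, Y) is an assumption; bias=1 is the 1-16 mapping
    # suggested above, so full white tops out at 0xF0 per channel
    r4 = (pixel16 >> 12) & 0xF
    g4 = (pixel16 >> 8) & 0xF
    b4 = (pixel16 >> 4) & 0xF
    y4 = pixel16 & 0xF
    # a 5-bit by 4-bit multiply never overflows 8 bits (max 17 * 15 = 255)
    return tuple((c + bias) * y4 for c in (r4, g4, b4))

print(rgby4444_to_rgb(0xFFFF))          # (240, 240, 240): near-white at max Y
print(rgby4444_to_rgb(0xFFFF, bias=2))  # (255, 255, 255): the 2-17 variant
```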

 

 

Also, 4-4-4-4 RGBY would have the added advantage for multiplicative shading of having hard stops at max intensity . . . so compared with RGB lighting or CRY, you wouldn't just shade up until the max R-G-B levels were hit (or the max intensity shades of the 256 CRY colors), but you'd instead stop at the maximum/normal shade of that color in 12-bit RGB. (without needing additional logic to clamp at a certain value) Ie, CRY and RGB shading would both shade a black object until it hit white, but there's many cases where you wouldn't want that object to shade beyond a dark black-ish gray. (and the only way to do that in CRY or RGB would be to explicitly limit the maximum intensity value used on that object/texture -and for a truly black/dark object/texture, the only time you'd really want to shade brighter would be for reflection effects -to simulate a glossy or diffuse sheen on the surface)

 

This last feature is something that would also be present on any RGBY/RGBA based color scheme . . . the jaguar's grayscale CRY test mode (albeit limited to shades of gray).

 

 

 

So, compared to CRY, 4-4-4-4 RGBY would have blitter support for logical additive/subtractive saturation based color/shading/blending effects (via adding the RGBY channels) as well as simple averaging (probably with moderately more accurate blending than CRY averaging), closer to a "standard" color format, textures could be formatted to specify colors only from the RGB channels, independent from Y (vs CRY needing Y+CR for most useful colors/shades), and likewise being able to shade textures/objects with darker colors than CRY can (without having to explicitly limit the max intensity in software -especially on a per-pixel basis).

But the obvious disadvantage of much coarser multiplicative shading than CRY. (basically the same advantages as 8-8-8-8 RGBY would have, but with lower color and intensity resolution -and likewise 1/2 the framebuffer size and bandwidth . . . and 1/2 the size for non-indexed/uncompressed textures)

 

And compared to 16-bit RGB, there's the obvious massive advantages of being able to shade and blend at all on the Jaguar with the same restrictions on logic used in the existing system. (and the same fast gouraud shading -but not nearly as smooth . . . though not that far off from 15-bit RGB) Plus there's the advantage of the separate RGB and intensity definition allowing more flexible selection of colors to actually shade (similar advantage over CRY in that case).

And compared to 24-bit RGB, it uses half the memory/bandwidth and can use proper multiplicative shading with the existing jaguar hardware.

 

Hell, I'd go as far as to say dropping the 16/15-bit and 24-bit RGB modes in general wouldn't have been that bad (given how limited those modes are) . . . and having only 16-bit CRY, 4-4-4-4 RGBY, and perhaps 8-8-8-8 RGBY (especially if the 32-bit CLUT was working). Albeit having "normal" RGB modes on top of that would have been a nice bonus. (though rather unnecessary aside from facilitating really basic/sloppy ports . . . and even then it shouldn't be as big an issue since 4-4-4-4 RGBY would allow relatively simple conversion of graphics done in 12-bit RGB or lower, and relatively decent conversion for 15-bit RGB stuff, and 2D games relying on indexed textures/sprites could more often rely on the CLUT with 32-bit RGBY -at least if they got 32-bit look-up functioning)



I was thinking about this example again (from the old Doom highcolor thread)

Isn't pure white one of the colors available in the CRY palette? (by the description, I'd have assumed that there would at least be 256 shades of black/gray/white -which are indeed available in 24-bit RGB)

It's the math behind the color scheme that's the problem. Let's say you want to add a sunny yellow (lens flare) to a bright daytime blue. What do most games end up with? Blinding white.

 

In RGB, you add #FFFF80 (sunny yellow) to #80C0FF (sky blue). The total? #ffffff (white). Note that color channels stop at their limits (ff and 00).

 

Let's try that in CRY. #BAF0 (sunny yellow) plus #F (sky blue). The total? #ffff (neon yellow). The major problem is that the neon yellow has no blue in it at all, so the sky has been erased! It's not even a warm sunny yellow like we had before; now it's a freakish electric yellow. This is not how real light is perceived by humans. The various wavelengths of light are "summed" across 3 color "channels" in your eyes, just like in my RGB example.

 

To get to white, you have to go to #88ff in CRY. In RGB, to shade from some arbitrary color toward white, just add how much white you want -- #808080 for some white, #c0c0c0 for more white, etc. In CRY, to shade from some arbitrary color toward white, you must first compute the distance between that pixel and white on a per-pixel basis. That's expensive and there's no support for it -- you need to do it in the GPU with math or lookup tables.

Shouldn't that sort of problem be avoided in CRY if you only ever used simple averaging for color (CR) blending and limited additive blending to the Y channel alone? (in that case, sky blue and sunny yellow should indeed map to bright white -the colors blend to the center of the CRY color square while intensity is stopped at 255)

 

And blending colors near to those might result in other near-white colors that are similarly accurate approximations of RGB blending.

 

You obviously can't use additive blending on the 4-bit CR channels since they can't logically work that way at all. (they define points for 2 axes in a 2D array rather than linear color channels as in RGB, so you could never use RGB-style math for those color channels)
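
For what it's worth, that averaging rule would look something like this (a sketch; the second operand is a hypothetical bright blue, since the sky-blue value didn't survive in the quote above):

```python
def cry_blend(p, q):
    # average the two 4-bit chroma axes (drifting toward the white center of
    # the color square) and saturating-add the 8-bit Y channels
    c = (((p >> 12) & 0xF) + ((q >> 12) & 0xF)) // 2
    r = (((p >> 8) & 0xF) + ((q >> 8) & 0xF)) // 2
    y = min(0xFF, (p & 0xFF) + (q & 0xFF))
    return (c << 12) | (r << 8) | y

# sunny yellow (#BAF0) + a hypothetical bright blue from the opposite corner
print(hex(cry_blend(0xBAF0, 0x45F0)))  # 0x77ff: near the white center, Y maxed
```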

 

 

The grayscale CRY mode is 16-bits. It can shade toward white more easily and the shading behavior is more like 24-bit RGB, except with no color. This is literally because the same color is output on R, G, and B.

 

Zerosquare's analysis makes perfect sense. This is a test mode that lets you try every possible combination of 65536 "shades" on each CRY multiplier. The only big problem is that all CRY multipliers run with the same shade, which is fine for grayscale but equals monochrome for games. ;)

 

It probably would have been just as cheap to do an RGBA mode compared to this test mode. It would be just as useful for testing but allow amazing color effects. Sadly, it would also use so much bandwidth that it wouldn't help the Jag's framerate problems much.

It might have been even more interesting if they'd supported a 4-4-4-4 CRYA mode. (limiting shading to 16 levels, but adding the flexibility of a 4-bit alpha channel)

On a related note to this discussion:

Kskunk, you mentioned earlier that using simple linear additive/subtractive RGB manipulation doesn't work properly for lighting (colors all desaturate towards white/black as you add/subtract shades, so you have saturation/hue/contrast issues, more so as colors get further away from gray -or from gray/red/green/blue/cyan/yellow/magenta if only shading towards black).

And multiplicative lighting is required for proper linear intensity shading (without hue/saturation/contrast issues), hence part of the reason for CRY being designed the way it was in the first place (and 24-bit shading being practically limited in the Jaguar too since you're limited to additive RGB shading).

 

It was recently mentioned in a discussion on Sega-16 that the Saturn's lighting/shading effects (including g-shading) are all done through additive shading and that the Saturn has no support for multiplicative lighting at all.

So that would be a good real-world example of what sort of quality (and problems) can result from use of simple additive/subtractive lighting. (and indeed, that would explain the odd-looking lighting/shading in some Saturn games -Tomb Raider comes to mind)

 

 

It was also mentioned that early model PS1s only had support for 16 levels of lighting/shading in the GPU . . . which would make the 4-4-4-4 RGBY comparison pretty competitive in that regard (and also make the PSX's dithering relatively limited in practical use on those models) . . . and would also mean a 4-4-4-4 CRYA mode would compare favorably too.

Finally, thinking on the analog-related discussion earlier on, I realized that CRY should have been able to be implemented without multipliers at all if they'd used 4 video DACs to provide true 8-8-8-8 RGBY (so you'd just need the LUT to convert the CRY color and input the Y value directly to the DAC). That would also technically improve image quality capabilities too since you'd have 256 colors with 256 truly unique shades rather than redundancy due to digital granularity limitations of 24-bit RGB, plus you could have true direct 32-bit RGBY to work with. (not only allowing linear shading in truecolor, but better overall colorspace than normal 24-bit RGB).

 

I wonder why they didn't take that route over use of multipliers and 3 DACs . . . does TOM rely on an external video DAC? If so, would 3 8-bit multipliers really take less chip space than 4 8-bit DACs? (I realize these would be high-speed, high-precision DACs -not like simpler/slower audio DACs)

 

Another thing about internal DACs would be a significant reduction in TOM's pin count. (24 digital RGB lines would be replaced by 3 analog lines -and a similar reduction in traces on the board and space saved by the lack of an external color DAC)

