
Antic Mode 4 communicating the high bit status to GTIA for color picking


robus


I'm working on support for Antic Mode 4 and I'm confused: how is GTIA able to pick an alternate color when the high bit of the character name is set?

 

From the hardware manual:

Quote

In IR modes 4 and 5, each character is only four pixels wide instead of eight (as in the other modes). Two bits per pixel of data are used to select one of three playfield colors, or background. Seven name bits are used to select the character. If the most significant name bit is a zero then data of 10 (binary) selects PF1. If the name bit 7 is one, then data bits of 10 select PF2. This makes it possible to display two characters with different colors, using the same data but different name bytes.

Given that the colors are in GTIA and the character names (codes) are known to ANTIC, what mechanism allows that switching to happen in GTIA?

Edited by robus

Playfield generation is mainly done by communication over the AN0-AN2 lines from Antic to GTIA.

There are 5 possible playfield values + Vsync + HSync/multicolour + HSync/hires monochrome commands.

 

So it's Antic sending either the PF2 or PF3 value to get one or the other.

GTIA modes change the way GTIA interprets the data.  Pairs of values are combined to give the 16 pixel values.

But there are quirks there, e.g. Mode 10 paletted has that 1 colour clock display delay.


Thanks, but I still don’t get how GTIA decides which color to pick. It says it sends a 10 in both cases but bit 7 determines the color - yet GTIA doesn’t know about character names? I guess, as it’s a stream of bits, there must be some other flag read by GTIA that indicates the toggle for that particular pair?

 

I don’t have a stream of bits, my GTIA hands ANTIC a buffer to fill with color indexes and then GTIA expands those into actual pixels. So it’s too late at that point. I’ll probably have to rework that to more closely emulate it.

Edited by robus

Bit 7 of the character is irrelevant to GTIA.  It only receives the pixel data via the AN bus at ~3.6 MHz.

 

Antic decides whether to send the command for PF2 or PF3 when bitpair 11 occurs in a character, depending on the bit 7 setting.

GTIA doesn't know or care what graphics mode is in use - it just gets that raw data and generates the display.  Stuff like priority and GTIA mode or not is about all that matters.


9 hours ago, Rybags said:

Bit 7 of the character is irrelevant to GTIA.  It only receives the pixel data via the AN bus at ~3.6 MHz.

 

Antic decides whether to send the command for PF2 or PF3 when bitpair 11 occurs in a character, depending on the bit 7 setting.

GTIA doesn't know or care what graphics mode is in use - it just gets that raw data and generates the display.  Stuff like priority and GTIA mode or not is about all that matters.

Yep I’m realizing my implementation has lost that in translation


If you're doing emulation in sw or hw you need to treat the graphics as such.

Separation between graphics mode and the data that Antic feeds to GTIA.

Then PMs get mixed in depending on priority settings.

All this stuff done properly would be at the sub-cycle level.   Colour register changes take place half a cycle after the store occurs.

PM objects can activate more than once on a scanline.  PRIOR and DMACTL changes can alter things mid scanline also.


I thought the AN2 line would be the indicator for a different color when AN1 and AN0 are both 1, but apparently not. So the ability for GTIA to tell that a different color should be picked when given the same data is still a mystery. I'll keep searching and reading, but I definitely haven't found anything in the pinouts for ANTIC!

Edited by robus

The key is that GTIA _isn't_ being given the same data. GTIA has no visibility into the raw playfield data or the IR mode; it only sees playfield data that has already been translated by ANTIC into color codes, including the PF2/PF3 switch based on name bit 7. Basically, it's just a stream of color indices at color clock rate.
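To make that concrete, here is a minimal sketch of the lores decode as an emulator might express it; the function name and register arguments are hypothetical, not GTIA's actual internals:

```c
/* Minimal sketch (hypothetical helper, assumed register layout - not GTIA's
 * real structure): map one lores AN code, i.e. one color clock, to a color. */
#include <stdint.h>

static uint8_t gtia_lores_color(uint8_t an, const uint8_t colpf[4], uint8_t colbk)
{
    if (an & 0x4)                /* %1xx: AN1-AN0 select PF0..PF3           */
        return colpf[an & 0x3];
    return colbk;                /* %000: background (%001-%011 are syncs)  */
}
```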

 

Only in a hires mode (IR modes 2/3/F) does GTIA see the raw data instead of color codes, because in that case ANTIC sends the hires data in two bits at a time.

 


The functions are documented in the GTIA document which can be had online http://blog.3b2.sk/igi/file.axd?file=2012%2F12%2Fgtia.pdf

 

AN2 active (1) means the other bits supply the PF value.   In hires modes the other 2 bits just supply 2 pixels worth of graphic data.

With AN2 inactive (0) the other bit values are:

  • 00 background
  • 01 vertical sync
  • 10 horizontal sync and clear 40 character mode
  • 11 horizontal sync and set 40 character mode
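As a minimal sketch (the identifier names are mine, not official ANTIC/GTIA terms), those command codes could be captured as:

```c
/* AN2 inactive (0): AN1-AN0 carry a command rather than a playfield color. */
enum an_command {
    AN_BACKGROUND  = 0x0,  /* %000: background                          */
    AN_VSYNC       = 0x1,  /* %001: vertical sync                       */
    AN_HSYNC_LORES = 0x2,  /* %010: horizontal sync, clear 40-char mode */
    AN_HSYNC_HIRES = 0x3   /* %011: horizontal sync, set 40-char mode   */
};
```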

 

If you're doing emulation, there's also the GTIA "bug" to consider: in hires, if you switch GTIA mode off mid-line, it doesn't return to 40 character mode but instead interprets the bit pairs as multicolour pixels at 160 resolution.

Edited by Rybags

Here's how the mode selection works:

 

The two horizontal sync codes that Rybags has listed above are the key. The %010 and %011 codes both trigger horizontal sync and tell GTIA whether the next mode line is going to be a lores or a hires line. GTIA stores this flag and uses it until the next horizontal sync. If it is a lores line, then the color encodings look like this:

  • %000 background
  • %100 PF0
  • %101 PF1
  • %110 PF2
  • %111 PF3

This allows ANTIC to send 5 different colors per color clock. For IR mode 4, if bit 7 is 0, then the four two-bit encodings are translated to %000 (BAK), %100 (PF0), %101 (PF1), and %110 (PF2); if it's set, then that last one is encoded as %111 (PF3) instead. Notice that ANTIC could mix all five colors completely freely, if it had a playfield mode that could drive it.
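As a minimal sketch of that translation on the ANTIC side (a hypothetical helper, not actual ANTIC logic), one IR mode 4 bit pair plus name bit 7 maps to an AN code like this:

```c
#include <stdint.h>

/* Translate one IR mode 4 bit pair into the 3-bit code driven on AN2-AN0. */
static uint8_t mode4_an_code(uint8_t bitpair, int name_bit7)
{
    switch (bitpair & 0x03) {
    case 0x00: return 0x0;                    /* %000: background (BAK)          */
    case 0x01: return 0x4;                    /* %100: PF0                       */
    case 0x02: return 0x5;                    /* %101: PF1                       */
    default:   return name_bit7 ? 0x7 : 0x6;  /* %11: PF3 if bit 7 set, else PF2 */
    }
}
```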

 

If hires mode is selected, then the encodings are interpreted differently by GTIA:

  • %000 background
  • %100 two PF2 hires pixels
  • %101 two PF2 hires pixels, PF1 luma replaced on the right pixel
  • %110 two PF2 hires pixels, PF1 luma replaced on the left pixel
  • %111 two PF2 hires pixels, PF1 luma replaced on both pixels

In this case, the playfield will be sent with %1xx encodings where the AN1-0 bits contain pairs of playfield data. But notice that AN2 is still in play, because ANTIC needs to switch between %000 and %1xx so GTIA knows where the background border is and where the playfield is. It's not just held constant.
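A minimal sketch of that expansion from the GTIA side (hypothetical helper; it assumes the usual hue-in-high-nibble, luma-in-low-nibble color register layout):

```c
#include <stdint.h>

typedef struct { uint8_t left, right; } HiresPair;

/* Expand one hires AN code into two half-color-clock pixels per the table above. */
static HiresPair hires_pixels(uint8_t an, uint8_t colpf2, uint8_t colpf1, uint8_t colbk)
{
    if ((an & 0x4) == 0)                               /* %000: background        */
        return (HiresPair){ colbk, colbk };

    uint8_t base = colpf2;                             /* PF2 hue and luma        */
    uint8_t lit  = (colpf2 & 0xF0) | (colpf1 & 0x0F);  /* PF1 luma replaces PF2's */

    HiresPair p = { base, base };
    if (an & 0x2) p.left  = lit;                       /* %110 and %111           */
    if (an & 0x1) p.right = lit;                       /* %101 and %111           */
    return p;
}
```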

 

GTIA modes change the interpretation of AN0-2 in a third way, by having GTIA assemble 4-bit pixels by combining the ANx encodings from pairs of color cycles. They are intended to be used with hires modes so that each pair of ANx encodings gives 4 bits from the original playfield data. The various bugs occur when this assumption is invalidated, either by using a non-hires mode or by changing the GTIA mode mid-scanline. Then you get ANTIC sending encodings that don't match what GTIA is expecting, and unusual graphics result.
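As a minimal sketch of that pairing (the high/low bit ordering is my assumption from the mode F byte layout, not something stated above):

```c
#include <stdint.h>

/* In a GTIA mode, the AN1-AN0 bits of two successive color cycles are combined:
 * the first cycle supplies the high two bits of the 4-bit pixel, the second the low two. */
static uint8_t gtia_mode_pixel(uint8_t an_first, uint8_t an_second)
{
    return (uint8_t)(((an_first & 0x3) << 2) | (an_second & 0x3));
}
```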

 


Thanks everyone, I appreciate the help!

 

I've reworked the GTIA -> ANTIC interface and realize now, of course, that GTIA is purely a translator of color index to pixel color. In that it always needs all the pixels all the time. (In an earlier incarnation it was aware of which ANTIC mode was being rendered - number of bits per pixel etc). But now it just runs down the scanlines asking ANTIC for color indexes for each pixel. The smarts now lie entirely within ANTIC (which makes total sense).

 

One thing I'm not entirely sure on is the color info for Mode 2 (and Mode F) . It's "1.5" colors in that PF2 sets the background and PF1 sets the luminance value. My question is why does GTIA even know or care? I understand that 320 pixels across is higher res than the TV can support. But how does GTIA know to ignore the color info as it's just running through the pixels for each mode? 


It doesn't know to ignore it - the same colour, different luma thing is an annoying limitation of the time.

Maybe because it's only running at 3.6 MHz?  Or some other reason.

But the way it operates is that PF1 just alters the luma, and PF2 the colour (and the luma too if the pixel is 0).

 

Of course that can be altered if PM graphics are in use; the difference there is that in some cases a lower priority object can override the colour of hires pixels, which otherwise wouldn't have happened in a lores mode.


On 9/22/2022 at 8:48 PM, phaeron said:

If hires mode is selected, then the encodings are interpreted differently by GTIA:

  • %000 background
  • %100 two PF2 hires pixels
  • %101 two PF2 hires pixels, PF1 luma replaced on the right pixel
  • %110 two PF2 hires pixels, PF1 luma replaced on the left pixel
  • %111 two PF2 hires pixels, PF1 luma replaced on both pixels

In this case, the playfield will be sent with %1xx encodings where the AN1-0 bits contain pairs of playfield data. But notice that AN2 is still in play, because ANTIC needs to switch between %000 and %1xx so GTIA knows where the background border is and where the playfield is. It's not just held constant.

Looking back with the clarity of hindsight I see @phaeron provided the answer and actually my implementation is still lacking! And also that GTIA does somehow know to do something special in high-res mode!

Edited by robus

3 hours ago, Rybags said:

Maybe because it's only running at 3.6 MHz?  Or some other reason.

The reason is quite simple - the bandwidth of the TV system cannot carry high-resolution color information. NTSC uses 422 color subsampling, i.e. the resolution of the chroma signal is half of that of the luma signal. The smallest period or time span at which color information can be encoded is a "color clock", and a color clock is two hi-res pixels (or one mode-4/mode-E pixel) wide.


1 hour ago, Rybags said:

C64 allows independent colour per pixel and their pixels are even smaller than ours.

Although on a TV it does get unwanted artifacting effects.

It can also scroll in 320 pixel resolution, while the A8 scrolls in colour clock (160 pixel) resolution.


No, NTSC doesn't use 4:2:2 color subsampling. It's analog and doesn't have a sampling pattern, and the specified chroma bandwidth is not half luma bandwidth. Furthermore, the standard systems for sampling NTSC for digital video that do use a 4:2:2 pattern don't use a pixel clock of 7MHz.

 

The reason for this more has to do with GTIA's internal design. The hires path actually bypasses the majority of the color processing logic, including all of the priority logic, connecting from the beginning of the GTIA pipeline where the AN0-2 values are decoded, to the very end where the luminance values are output. This bypass is the only part of the chip that runs at hires pixel rate (7MHz); the rest of the chip runs at lores pixel rate (3.6MHz). Although it's not documented, this may have been a compromise to get 40 column text and hires graphics without running into limitations on how fast the main data path could run.

 

This design causes some odd visual behaviors, too. Because the hires data path bypasses all priority logic, it stamps PF1 on anything -- including P/M graphics, if they happen to have priority over or blend with PF2.

 


7 hours ago, phaeron said:

No, NTSC doesn't use 4:2:2 color subsampling. It's analog and doesn't have a sampling pattern, and the specified chroma bandwidth is not half luma bandwidth. Furthermore, the standard systems for sampling NTSC for digital video that do use a 4:2:2 pattern don't use a pixel clock of 7MHz.

cough. The chroma bandwidth of an NTSC signal is limited to ~3.5 MHz, which is half of the bandwidth of the luma signal. This limits the horizontal resolution of the chroma signal to half of the resolution of the luma signal, and the closest analog of this is a 422 sampling of a digital signal, which is pretty much what GTIA implements. It is no coincidence that contemporary designs made use of this design "coincidence" by modulating the luma signal to generate chroma, e.g. the Apple II worked exactly like this, and artefacting works like this on the Atari as well.

 

For PAL, the chroma subcarrier is 5/4 of the NTSC subcarrier, which has the consequence that this trick no longer works, and is why GTIA requires a separate chroma carrier crystal on such systems.

 

On the C64, you can also create artifacting patterns, even though the pattern is not quite obvious as the relation between the clock and the color carrier is not as simple as in the Atari design. If you check there, you will find that VIC II cannot do magic either and will not create "color" on its hires pixels - instead you will see false colors. That's due to the bandwidth limitation of the signal, which is (from a signal processing perspective) pretty much equivalent to chroma subsampling (relative to the luma signal).

 

Edited by thorfdbg

16 hours ago, thorfdbg said:

cough. The chroma bandwidth of an NTSC signal is limited to ~3.5 MHz, which is half of the bandwidth of the luma signal. This limits the horizontal resolution of the chroma signal to half of the resolution of the luma signal, and the closest analog of this is a 422 sampling of a digital signal, which is pretty much what GTIA implements.

Sorry, but I still don't think this is accurate. Chroma bandwidth is not 3.5MHz, that's the color subcarrier frequency. It's significantly lower than that if you are talking about the bandwidth of the encoded I/Q signals, and significantly higher than that if you are talking about the modulated chroma signal. Nor is the luma bandwidth twice that at 7MHz, which is the pixel clock used by the computer, not from the NTSC standard. It's valid to say that GTIA's high-resolution rendering for NTSC is like 4:2:2 sampling, but not NTSC itself.

16 hours ago, thorfdbg said:

It is no coincidence that contemporary designs made use of this design "coincidence" by modulating the luma signal to generate chroma, e.g. the Apple II worked exactly like this, and artefacting works like this on the Atari as well.

Yes, but this behavior is by design of the Apple II and Atari hardware, not inherent to NTSC. As has already been pointed out, there are plenty of other contemporary systems that allow pixels to contribute to chroma at full resolution rather than only half resolution; that restriction is uncommon if not unique to CTIA/GTIA.

 


7 hours ago, phaeron said:

Yes, but this behavior is by design of the Apple II and Atari hardware, not inherent to NTSC. As has already been pointed out, there are plenty of other contemporary systems that allow pixels to contribute to chroma at full resolution rather than only half resolution; that restriction is uncommon if not unique to CTIA/GTIA.

 

Which ones, for example? The C64 is not one of them - it gives you the illusion that it can, but it's physically impossible. Its designers also say so and sell it as a mechanism to generate more colors, if you want to read up on it.

 

https://spectrum.ieee.org/commodore-64

 

That's of course pretty much the same trick as Atari/Apple used for artefacting.

 

You cannot carry a higher resolution signal on a low-resolution color carrier - you can either describe this as subsampling in the spatial domain, or as aliasing in the frequency domain, which is pretty much equivalent. No matter what you call it, the horizontal color resolution of NTSC (and PAL too) is limited to approximately 200 pixels (or 228 color clocks, as they are called in the Atari world - same thing).

 

The C64 cannot do magic either. It attempts to generate a "color signal" at hi-res resolution, but clearly it cannot, and instead generates false colors if the spatial detail is finer than 2 hi-res pixels.

 

The relation between the color carrier and hi-res pixels is not quite as simple on the C64 as on the Atari (7/16 instead of 1/2), but the problem is much the same.


On 9/21/2022 at 11:47 PM, robus said:

Thanks, but I still don’t get how GTIA decides which color to pick. It says it sends a 10 in both cases but bit 7 determines the color - yet GTIA doesn’t know about character names? I guess, as it’s a stream of bits, there must be some other flag read by GTIA that indicates the toggle for that particular pair?

 

I don’t have a stream of bits, my GTIA hands ANTIC a buffer to fill with color indexes and then GTIA expands those into actual pixels. So it’s too late at that point. I’ll probably have to rework that to more closely emulate it.

It works because the color "3" (both bits are 1) is different depending on whether the 7th bit is set or not. If it isn't, it will be COLOR2, and if it is, it will be COLOR3.

