
Atari v Commodore


stevelanc


I can't find any circuit on Atari-8bit that FORCES monochrome in 320*200 even if not using PMs. So without PMs, color is do-able in 320*200 on Atari.

You can select two different lumas but only one chroma in hires modes. That's pretty much enforcing monochrome.

 

The fact that you can keep modifying 53272 (COLPF2) in a DLI and affect the chroma of text/pixels across the line means they could have done color better, at least in text mode. So the A8 has to rely more on PMs to change character colors horizontally in 320*200.


Oh, and SIO is fast also; the default is 19,200 bps. I am driving it at 59,659 bps w/o problems. That rate is not the upper limit; it just happens to come out of an even division of both the PC and Atari timer clocks (1789790/30 = 1193180/20 = 59659 bps). Externally clocked SIO easily gets >300,000 bps.
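(For reference, a quick check of that divisor arithmetic -- a minimal Python sketch, using the 1,789,790 Hz Atari and 1,193,180 Hz PC timer clock figures quoted above:)

# Both timer clocks divide down to (roughly) the same SIO bit rate.
atari_clock = 1_789_790   # Atari machine clock in Hz, as quoted above
pc_clock    = 1_193_180   # PC timer (PIT) clock in Hz, as quoted above
print(atari_clock / 30)   # ~59659.7 bps
print(pc_clock / 20)      # 59659.0 bps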

Those are pure bus speeds, which are pretty much academic. Let's do a real-world test.

 

For this test I took an empty disk, wrote a 50k file (51,200 bytes) to it and loaded it with different loaders. Before each load the directory was read, so the drive head started on the directory track.

 

Plain CBM loader: 128 seconds (0.39 kB/s)

Final Cartridge 3 (multifunction cart from 1987): 13 seconds (3.85 kB/s)

Action Replay 6 (multifunction cart from 1989): 8.5 seconds (5.88 kB/s)

 

EDIT: Another test with Warpcopy (a disk-image transfer tool reading the disk from a real 1541): 22.09 seconds for a 174,848-byte disk image -> 7.73 kB/s
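(The kB/s figures follow directly from size over time; a minimal Python sketch of the arithmetic, with the sizes and times taken from the tests above and 1 kB taken as 1024 bytes:)

tests = {
    "Plain CBM loader":      (51200, 128.0),
    "Final Cartridge 3":     (51200, 13.0),
    "Action Replay 6":       (51200, 8.5),
    "Warpcopy (disk image)": (174848, 22.09),
}
for name, (size_bytes, seconds) in tests.items():
    print(f"{name}: {size_bytes / seconds / 1024:.2f} kB/s")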

Edited by Fröhn

The c64's lowercase letters are identical to the Atari's, no doubt about it.

 

[screenshots comparing the Atari and C64 character sets]

 

hehheh... nicking the Atari's font. That's kinda funny.

 

 

It is not identical... it is not the same color by default!! :D

 

And the C64 ones look better!

 

:)


THERE'S A LITTLE PIECE OF ATARI IN EVERY COMMODORE 64

 

 

THERE'S A BIG PIECE OF COMMODORE IN EVERY ATARI

 

==> the 6502 . :) the heart of your beloved machine is Commodore! ;)

 

Also don't forget about the PIA chips, which are an older revision of the CIA chips. The newer CIA chips are used in the C64 for the same purpose as the PIAs are in the A8.

 

BUT.

 

atariksi comes along and proves that the older-revision PIA has better "joystick" (I/O!) ports than the newer CIAs.

 

 

the fact is:

 

both chips have 2x8-bit parallel I/O ports with programmable direction for each bit.

 

 

another fact is:

 

the CIA is a newer revision of the PIA, thus it's better.

 

 

conclusion:

 

 

the joystick ports are the same, but the C64's chip providing the joystick port is better.

Edited by Wolfram

Better chip, but poorer implementation.

 

They didn't even get the Serial I/O sorted properly, so had to resort to bit-banging.

 

Commodore might have had some industry leading hardware designers, but were never renowned for good firmware or software.

Even Amiga's OS was outsourced, although you could probably say the same for TOS/GEM for the ST.


Better chip, but poorer implementation.

 

They didn't even get the Serial I/O sorted properly, so had to resort to bit-banging.

 

Commodore might have had some industry leading hardware designers, but were never renowned for good firmware or software.

Even Amiga's OS was outsourced, although you could probably say the same for TOS/GEM for the ST.

 

You're mixing things up: the chip with the Serial I/O bug was the VIA, not the CIA.

 

The C64 suffered from slow loading because they decided to keep "compatibility" with the VIC-20, which had the faulty VIA.

 

 

 

conclusion:

 

the CIA is both better and better implemented than the PIA, which atariksi claims has a "better joystick port"

 

edit:

 

After further research it looks like the CIA and PIA are pretty much the SAME. The only plus the CIA has is a TOD clock, hmm, and maybe that the CIA timers can count each other's underflows.

 

:)

Edited by Wolfram

Oh my goodness... very long post. :)

 

That's basically hysteresis. I'm aware of that and big changes stress things. Again, good quality displays will exhibit very little of this, while poor quality ones exhibit far more. Modern displays are quite fast and can be treated accordingly. We've come a long way since the TV's of the 80's.

 

Yes, modern TV chipsets are more sophisticated. The JVC pro monitor I use with my A8 looks almost RGB quality.

 

And I do understand about the frequency domain bit too. What people get confused about is the frequency component of a pixel. If you plot a wave out on paper, its period has a length, right? Well, one color clock = a length on the screen equal to an Atari 160 mode pixel. It's that simple. If a pixel exists that is smaller than that, some component of its frequency will be greater than the color reference, and will be seen as color information.

 

Right, Atari tied the pixel clock to the color clock on NTSC machines. It isn't exactly true that something smaller than 1 clock will be seen as color. Modern TV's use a band-pass filter to extract only a notch at 3.58MHz and to ignore everything below and above that frequency. A modern TV should ignore a 5MHz pattern, for example. Whether or not the 5MHz pattern could be seen on the screen depends on the overall resolution of the tube and electronics. Most sets allow more bandwidth in the S-Video mode.
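(To put numbers on that, here is a minimal Python sketch; the +/-0.6 MHz half-width assumed for the chroma band-pass is purely illustrative, real sets vary:)

# Fundamental frequency of an alternating on/off pixel pattern at various
# pixel rates, compared against the NTSC chroma subcarrier notch.
FSC = 3.579545e6             # NTSC color subcarrier, Hz
CHROMA_BW = 0.6e6            # assumed half-width of the chroma band-pass

pixel_rates = {
    "160 mode (1 pixel per color clock)":  FSC,        # pixels per second
    "320 mode (2 pixels per color clock)": 2 * FSC,
    "640 rate (4 pixels per color clock)": 4 * FSC,
}
for name, rate in pixel_rates.items():
    fundamental = rate / 2   # an on/off pattern repeats every 2 pixels
    in_band = abs(fundamental - FSC) < CHROMA_BW
    print(f"{name}: {fundamental / 1e6:.2f} MHz, decoded as color: {in_band}")

Only the 320-rate on/off pattern lands on the 3.58 MHz notch; the 160-rate pattern sits well below it and the 640-rate pattern well above it.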

 

With computers and pixels, the size of the pixel is related to the frequencies used to describe it. The position of the pixel on the screen is simply a visual confirmation of its phase with respect to the reference signal. Two ways of thinking of the same thing.

 

If you make a really small pixel, it is composed of higher frequencies. Where you put it impacts the phase.

Errr... this is where it sounds like you're confusing Chroma and Luma. Composite video is simply Chroma + Luma. In the luminance channel, a pixel goes where it belongs in time. This is the b&w portion of the picture and it knows nothing of the color carrier. Pixels should not be shifted around in this base image. As an aside, the Apple II achieved color by shifting pixels around and thus generated all color by luminance artifact, but this isn't the correct way to do it. The chroma channel contains the base color burst frequency, adjusted in time for the hue. Where there is color, this signal is present in the chroma channel, with an amplitude proportional to the current luminance level * the saturation level. Another aside, since the chroma signal should vary with brightness and since the Atari has no mechanism for varying the chroma level, bright colors on the A8 generally have poor saturation and look washed out.

 

Now, mix chroma and luma together and you have your pixel in the correct place in the luma channel, plus a sine wave riding on top of it. The pixel is not shifted to achieve color, but the phase of the wave running through it is. This is proper color and this is how the A8 and 64 generate color.

 

One colored pixel @ 160 resolution:

                      _   _   _
Chroma component ->  / \_/ \_/ \   <- phase of wave = color
                    |           |  <- amplitude of wave = saturation
                    |           |
Luma component ->   |           |
                    |           |  <- H position of pixel =
                    |           |     H position on screen
Black level ________|           |____________________

 

As long as the saturation is not maxed out, the color carrier will be filtered out by the TV and dot-crawl will be minimized. Of course, there is no dot crawl in the A8's picture because the phase of the colorburst is never changed.
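(A minimal Python sketch of that layering; the sample rate, the 0-to-1 signal levels and the hue-to-phase mapping are illustrative assumptions, not the actual GTIA numbers:)

import math

FSC = 3.579545e6                  # NTSC color subcarrier, Hz
SAMPLE_RATE = 8 * FSC             # assumed sampling rate for the sketch

def composite_pixel(luma, saturation, hue_deg, n_clocks=1):
    # One colored pixel: a luma step with a chroma sine riding on top of it.
    n = int(SAMPLE_RATE / FSC * n_clocks)        # samples in n color clocks
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        chroma = saturation * luma * math.sin(2 * math.pi * FSC * t
                                              + math.radians(hue_deg))
        samples.append(luma + chroma)            # chroma rides on the luma level
    return samples

# A fairly bright, half-saturated pixel at a 90-degree hue angle:
print([round(s, 2) for s in composite_pixel(0.7, 0.5, 90.0)])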

 

Everything boils down to sine waves. If you make something nice and square, like a pixel, that's not a single wave. It is the sum of a fundamental wave, plus a lot of harmonics. The smaller the pixel, the higher the set of frequencies required to describe it, right? At some point, if the pixel is small enough, the frequencies used to describe it enter the range of frequencies the display device will see as color information.

 

The sharp edges of computer-generated images can mess with color decoding, but most modern sets will filter these high-frequency transitions.

 

You said: "Any content in the b&w base image that has a frequency close to that of the color signal will degrade it and this is the basis of artifact coloring." Exactly!! And those artifacts generate colors that lie at angular displacements on the color wheel, depending on their position on the screen. That position can be translated to their phase with respect to the reference. These are the same things!

 

Two things are important: 1. that the luminance pattern have strong 3.58MHz content (like an A8 high-res on-off pattern), and then 2. the phase will = color. When the TV goes to filter out the 3.58MHz content to remove dot-crawl, it will end up generating an average luminance where the on-off pattern used to be, making the area look solid. This shows how effective artifacting can be, but it is not quite the same as the normal pixel+color carrier combination, since it 1. locks the absolute position of the pixel to the phase of its color and 2. does not allow the saturation control you get by letting the color signal ride atop the pixel's luminance component.

 

Artifact colors can be quite complex and create very good displays. Your Atari creates artifact colors every day of the week. It does it when you ask for colored pixels on the screen, and this is how:

 

Let's look at the Apple ][ computer display. It applies what I just said perfectly. It's the simplest and best-known case.

 

It's got monochrome pixels. No color generator exists in that computer at all. The only thing that does exist is a phase-shift bit in each high-resolution screen byte, which shifts things over a small amount. You get to turn the color burst on or off, that's it.

 

On a monochrome display, the Apple ][ is a two color device. Either set or reset the pixels. Right?

 

On a color device, the Apple can produce black, white, orange, blue, cyan and magenta. Where do the colors come from?

 

They come from the fact that pixels smaller than one period of the NTSC color reference signal have frequency components that are seen as color information by an NTSC composite or RF display. So then, one can look at it in the frequency domain, or one can look at it in the pixel over time domain.

 

If the reference signal is stable, then the position of a smaller pixel determines where that pixel's color will be on the color wheel. Its relative size determines whether it's mostly color info, half and half, or just intensity info (frequency components). The Apple has two pixels per color clock, and each one has a distinct color, based on its position on the screen. Even pixels are one color, odd ones are another, and two together do not produce a color, only intensity. The small shift in pixel position (or phase, if you want to think of it that way) brings two additional colors to the display.

 

This is where I learned this first. The Apple.
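(A minimal Python sketch of those phase relationships, assuming the usual Apple II numbers: two hi-res pixels per color clock and a quarter-color-clock delay from the per-byte shift bit; the hue names are deliberately left out:)

def apple_phase(pixel_parity, shift_bit):
    # Phase angle (degrees) of a lone lit hi-res pixel vs. the color burst.
    # There are two displayed pixels per color clock, so pixel parity alone is
    # worth half a cycle; the per-byte shift bit delays the output by one
    # 14.318 MHz tick, i.e. a quarter of a color clock.
    quarter_clocks = 2 * pixel_parity + (1 if shift_bit else 0)
    return quarter_clocks * 90.0

for shift in (0, 1):
    for parity in (0, 1):
        print(f"pixel parity {parity}, shift bit {shift}: "
              f"{apple_phase(parity, shift):.0f} degrees")

That gives four distinct phase angles, plus black and white, which lines up with the six colors mentioned above.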

Right, but you need to know that the Apple method is a hack and not a good representation of how NTSC video works because it makes no distinction between the base image and the added color component. Many of your descriptions seem Apple-centric. You need to divorce the base b&w image from the added color component. In the world of video, they are 2 layers.

Now, let's advance the tech a bit. The Atari display has 15 hues, plus intensities. Setting aside the 320 mode for a moment, one intensity only pixel will occupy a space on the screen, equal to the distance the electron beam travels for ONE color clock. If that pixel is a color pixel, additional information is added to the signal, at a higher frequency, to tell the TV what color it is.

This is correct. It's the pixel + the color carrier.

If you go ahead and connect your Atari to a monochrome display, this higher frequency information manifests as small pixels! Monochrome displays have a wider frequency range, because they don't have filters for the color detector. They display EVERYTHING. And everything appears as pixels!

A monochrome display has no 3.58MHz filter, so you see what a color TV doesn't want you to see: a b&w image with a 3.58MHz signal added to it. They aren't really meant to be considered as pixels, but as something that will be filtered out and dealt with separately. This is why only the luma channel should be connected to b&w monitors.

(This is for computer graphics -- fully analog color signals do not manifest as pixels, except in special high-detail cases.)

A picture from an analog source generally contains a wide range of frequencies in the luminance, which obscures observation of the color carrier.

Each Atari 160 mode pixel will then present on a monochrome screen as a smaller pixel that takes one of 16 possible positions with respect to the color reference. If you color-cycle on an Atari, you can literally see this motion on the screen with a monochrome display.

correct.

My point being that the frequency domain can be seen as pixels on the screen, and color generation can be thought of in terms of those pixels, if we ignore saturation.

Sounds like the artifacting method.

The Atari is actually quite simple. There is a counter somewhere that indexes each 160 mode pixel. When a hue is asked for, the color information is output at the right phase, according to that counter, one unique position for each hue so displayed. That's 2400 little pixels actually, if you want to think about it as pixels! And on the Atari, one of those 15 positions is output for each 160 mode pixel, and that's the hue value encoded for display.

 

|00000000000000|000000000000....

|x0000000000000|000000000...

|0x000000000000|00000000...

|00x00000000000|0000

GTIA has a 15-tap delay line so the 3.58MHz clock is available at 15 phases. They each have a 50/50 duty cycle so they appear to be like 320 mode pixels (although their sine-like shape may make them appear thinner).

In the little ASCII art I put here, the top one is a monochrome pixel, the next one is hue 1, the next hue 2, the next hue three. Think of it how you want to think of it. That is what is seen if you factor out the color info.
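(Putting numbers on those 15 positions -- a minimal Python sketch; the even 1/15th-of-a-color-clock spacing, starting from zero at hue 1, is an assumption based on the 15-tap delay line described above:)

FSC = 3.579545e6                 # NTSC color subcarrier, Hz
COLOR_CLOCK_NS = 1e9 / FSC       # about 279 ns per color clock

for hue in range(1, 16):         # GTIA hues 1..15 (0 = no chroma)
    delay_ns  = (hue - 1) * COLOR_CLOCK_NS / 15
    phase_deg = (hue - 1) * 360.0 / 15
    print(f"hue {hue:2d}: delay {delay_ns:5.1f} ns, phase {phase_deg:5.1f} deg")

print("little pixel positions per line:", 160 * 15)   # the 2400 mentioned above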

 

One thing I've learned is that you can set multiple little pixels for blends of color as well. The Atari does not do this, nor does the C64. The Color Computer 3 can, because it has a 640x200x4-color display. On that machine, that's 4 little pixels per 160 mode pixel, each with 4 possible intensities, for a total of 256 possible combinations that all fit within one color clock.

What you're doing in that case is mixing other frequencies with the 3.58MHz by making the waveform irregular. This is a dirty method but it will affect the phase. You can do this on the Atari by adding artifact patterns to an already colored 320 screen.

Because that machine uses fixed color timing, like the Apple and Atari do, it does artifacting the way the Apple and the Atari do. Pixels smaller than one color clock are seen as color info.

This will only work well if the base 280ns period is maintained. An on-off pattern at 640 resolution won't result in much color.

Put those combinations on the screen and you get 256 colors. (Actually about 225, because a few of the patterns do not make colors that are all that unique.)
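(One way to see how a 4-sub-pixel pattern becomes a hue and a saturation is to take its component at the color subcarrier. A minimal Python sketch; the 0-3 intensity levels and the idea of reading bin 1 of a 4-point DFT as the chroma phasor are an illustrative model, not the CoCo 3's actual encoder:)

import cmath

def pattern_to_color(p):
    # p: four sub-pixel intensities (0..3) within one color clock.
    luma = sum(p) / 4                                  # average level
    # Component at the color subcarrier = bin 1 of a 4-point DFT.
    phasor = sum(v * cmath.exp(-2j * cmath.pi * k / 4) for k, v in enumerate(p))
    hue_deg = cmath.phase(phasor) * 180 / cmath.pi
    saturation = abs(phasor)
    return round(luma, 2), round(hue_deg, 1), round(saturation, 2)

for pattern in [(3, 3, 0, 0), (0, 3, 3, 3), (3, 0, 3, 0), (2, 2, 2, 2)]:
    print(pattern, "->", pattern_to_color(pattern))

Note that the (3, 0, 3, 0) pattern comes out with zero saturation; that is the 640-rate on/off case mentioned above, which produces no color.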

 

Again, this occurs because the frequency components required to describe the pixels electrically exceed those permitted for intensity only, and are picked up by the color circuits in the NTSC television.

Close. They are used to distort the color waveform, but the TV is still looking at the 3.58MHz notch.

In my blog here: http://www.atariage.com/forums/index.php?a...;showentry=5974

 

You can see that play out for a high color display.

 

If the video device has more pixel options, then that device can describe more color, because it offers more frequencies.

 

You can see that here: http://propeller.wikispaces.com/file/view/...07_10_03_02.jpg

 

In that screenshot, the colors running down along the bottom are the native colors generated by the device. There are 89 of them, and they are stable up to 160-pixel resolution in the safe area. The ~400 or so color pattern shown above is with a pixel clock of 320 pixels, which permits two sets of color information to be seen in one color clock, and the result is a mixture of the base colors. That's a Propeller, BTW. Only some of the 89 colors were used. Truth is, with this information, the device will do well over 1000 colors. Those are the best ones, however.

 

This all occurs because fast pixel clocks generate frequencies that are seen as color. I prefer to think in terms of pixels, because it makes things quite easy. Frequency / time really is the same thing as pixel size and position relative to the color reference. The TV is just a simple O-scope, in this regard.

 

BTW, the reason big changes take time is that the color wheel, rolled out as a line, has a length!

 

Ask for a blue pixel and a blue-green pixel and those are closely aligned. No worries. All displays will resolve those. Ask for a red one, then a blue one, or some other combination that has a large angular displacement on the color wheel, and it's literally not possible to encode that data because of the color frequency itself! So that transition takes time. The larger the angle, the larger the time. The largest time is ONE color clock for encoding. What the display renders depends on how fast its color circuits are, and how sensitive to harmonics it is. (Harmonics are the color shadows seen on lesser displays after high-contrast color and intensity transitions.)

 

A studio quality display will, in fact, display ANY color combination correctly, so long as it's one color clock or longer. So then, in effect, you get the whole color wheel every color clock, just like I said.

 

Final example:

 

Let's say I make a monochrome display capable of 1280 pixels in the safe area: 8 * 160. If those pixels have only two states, on and off, that display will generate 8 unique colors, if only one of the pixels were set within a given color clock. If the states of all the pixels were addressable, then that display would render 256 colors, give or take a few, like in the case of the Color Computer. Some patterns don't differentiate enough for our eyes to see, and that's how it is. I've done this on micros, just like it was done on the Apple. Works great.

 

In the simpler case, of only one being lit, sequentially setting those pixels would produce colors at angles of 360/8 = 45 degrees around the color wheel. In the more complex case, the actual color seen is the sum of the angles present.
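(The same phasor idea applies to this final example -- a minimal Python sketch; note that "sum of the angles" really behaves like a vector sum of the lit positions' phasors:)

import cmath

def lit_positions_to_color(lit, n=8):
    # lit: indices of the lit sub-pixels within one color clock (n slots total).
    phasor = sum(cmath.exp(-2j * cmath.pi * k / n) for k in lit)
    return round(cmath.phase(phasor) * 180 / cmath.pi, 1), round(abs(phasor), 2)

# One lit sub-pixel at a time: hues step around the wheel in 360/8 = 45 degree
# increments.
for k in range(8):
    print([k], "->", lit_positions_to_color([k]))

# Two lit sub-pixels: the resulting hue lands between them, with more amplitude.
print([0, 1], "->", lit_positions_to_color([0, 1]))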

 

We are talking about the same thing. I'm just doing it in terms of discrete pixels over time, and you are talking about it in terms of frequency over time. BTW, if you want to consider what color a display will produce, you will do the math on your pixel clock, and arrive at the same place I did. Just takes longer...

 

All of that, BTW perfectly describes why Atari colors have a fairly consistent saturation. No information of that kind is being sent to the display. It's artifact only, which is why it looks the way it does.

 

I've written code to do these things, posted up the screenies, and there it is. Most, if not all, computer color in the 80's was done this way, with many devices to this day still doing it this way.

Just remember that all colors are ideally represented by the phase of a sine. Generating irregular waveforms with 280ns periods will affect the color angle and produce more color, but will also add noise to the process (which may or may not be visible) and may cause some sets to produce the colors somewhat differently if their color filters have different properties.

 

The Propeller stuff is cool, BTW.


Better chip, but poorer implementation.

 

They didn't even get the Serial I/O sorted properly, so had to resort to bit-banging.

As Wolfram said: You mixed up the VIA 6522 in the 1541 with the CIA 6526 in the C64. The C64 serial shift register works fine; the C128 uses CIAs for its faster I/O (the 1571 has a CIA inside) and it's also used for RS232 transfers.

 

Commodore might have had some industry leading hardware designers, but were never renowned for good firmware or software.

Tramiel had the attitude that he was selling hardware, not software.


Better chip, but poorer implementation.

 

They didn't even get the Serial I/O sorted properly, so had to resort to bit-banging.

As Wolfram said: You mixed up the VIA 6522 in the 1541 with the CIA 6526 in the C64. The C64 serial shift register works fine; the C128 uses CIAs for its faster I/O (the 1571 has a CIA inside) and it's also used for RS232 transfers.

 

Commodore might have had some industry leading hardware designers, but were never renowned for good firmware or software.

Tramiel had the attitude that he was selling hardware, not software.

 

the faulty VIA sat in the 1540 ;)

 

also firmware did not exist back then.

 

And last: C= had its own software division; most of the very earliest games were done by them, and they wisely helped the machine launch with software. After 1-2 years it wasn't needed anyway; third parties created software like crazy.


[The entire color/NTSC post above is quoted here in full.]

 

 

hahaha... I do not understand anything here... ;)


I would go with Atari here, on a personal/technical basis. You can't, AFAIK, hook up a C64 to a TV set; certainly not with AV cables. And that's where I'd want to play it. Also, there are tons of cartridge-based games for Atari computers; I don't think there are as many for the C64.

 

Now Amigas, you can hook up to a TV. But, again, most of the games are disk-based, and are hard to find in NTSC format -- as are Amigas themselves.

 

 

You can hook up a C64 to a TV set; it was designed to be connected to TVs from the very start. AV cables, no problem. But AFAIK the A8 has no AV (composite) out built in. I might be as wrong as you were with TV sets, though.

 

Cartridge-based games became outdated in the early '80s. They're inferior to disk-based ones anyway, as far as C64 games go.

 

 

OK, thanks for that information! But you'd still have to get a disk drive for the C64, at least one, I suppose. AFAIK, the Atari 800XL and the XEGS can use AV cables.


THERE'S A LITTLE PIECE OF ATARI IN EVERY COMMODORE 64

 

 

THERE'S A BIG PIECE OF COMMODORE IN EVERY ATARI

 

==> the 6502 . :) the heart of your beloved machine is Commodore! ;)

 

Also don't forget about the PIA chips, which are an older revision of the CIA chips. The newer CIA chips are used in the C64 for the same purpose as the PIAs are in the A8.

 

BUT.

 

atariksi comes along and proves that the older-revision PIA has better "joystick" (I/O!) ports than the newer CIAs.

 

...

 

You SHOULD really read the thread before joining AtariAge and blurting out whatever comes to the top of your head. You are DEAD wrong, and this point was discussed at least twice already in this thread. I already gave you a link to where I sell cables interfacing to various machines through joystick ports -- Ataris, Amigas, Atari STs, C64s, etc. I have done the research, timed things, and optimized transfer times, etc. etc. You are talking bullcrap. You are misleading people (whoever cares to listen to you) and are misinformed. Then you claim I am insulting you. As I stated, TMR at least knew his Atari stuff, although he was biased toward the C64. You don't even know your Atari stuff and speak out against it.

 

>the fact is:

 

>both chips have 2x8-bit parallel I/O ports with programmable direction for each bit.

 

Wrong. We are talking about implementation as Rybags pointed out. Atari has both joystick ports tied to one 8-bit port. Atari even reads nibbles at higher frequency.

 

>another fact is:

 

>the CIA is a newer revision of the PIA, thus it's better.

 

Wrong. You can have better chips be implemented in an inferior way. You can have a FAT AGNUS in your machine where you have to access each palette entry through nibble access versus 16-bit access. Etc. Etc. Invalid argument.

 

>conclusion:

 

the joystick ports are the same, but the C64's chip providing the joystick port is better.

 

Joystick ports are inferior on C64. Add in the keyboard interference and you are set.


atariksi,

 

>Atari has both joystick ports tied to one 8-bit port. Atari even reads nibbles at higher frequency.

 

Same as on the C64. The higher read speed comes from the higher CPU speed and has nothing to do with the I/O chip itself; it's just another way of saying the A8's CPU is faster. It has nothing to do with the "joystick port".

 

 

>Joystick ports are inferior on C64. Add in the keyboard interference and you are set.

 

It's just the same, and the C64's chip doing the "joyport" offers slightly more. That's all.


Wrong. We are talking about implementation as Rybags pointed out. Atari has both joystick ports tied to one 8-bit port. Atari even reads nibbles at higher frequency.

Both joysticks on one port... except for the fire buttons. Following your way of argumentation again:

 

Atari needs 3 LDAs to fully read 2 joys where C64 only needs 2 LDAs.

Atari needs 2 LDAs to fully read 1 joy where C64 only needs 1 LDA.

 

But I don't really see the point in this.
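(For what it's worth, a minimal Python sketch of what those reads decode. The register addresses and active-low bit layouts are the commonly documented ones; the memory snapshot is hypothetical, with everything idle:)

mem = {
    0xD300: 0xFF,   # Atari PIA PORTA: stick 0 in low nibble, stick 1 in high
    0xD010: 0x01,   # Atari GTIA TRIG0 (fire button 0, active low)
    0xD011: 0x01,   # Atari GTIA TRIG1 (fire button 1, active low)
    0xDC00: 0xFF,   # C64 CIA1 port A: joystick 2 directions + fire
    0xDC01: 0xFF,   # C64 CIA1 port B: joystick 1 directions + fire
}

def decode(nibble, fire_raw):
    # Direction bits and fire are active low (0 = pressed).
    return dict(up=not nibble & 1, down=not nibble & 2,
                left=not nibble & 4, right=not nibble & 8,
                fire=not fire_raw)

# Atari: three reads to cover two sticks (PORTA + TRIG0 + TRIG1).
porta = mem[0xD300]
atari = [decode(porta & 0x0F, mem[0xD010] & 1),
         decode(porta >> 4,   mem[0xD011] & 1)]

# C64: two reads, since fire is bit 4 of the same CIA register.
c64 = [decode(mem[0xDC01] & 0x0F, mem[0xDC01] & 0x10),
       decode(mem[0xDC00] & 0x0F, mem[0xDC00] & 0x10)]

print(atari)
print(c64)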


THERE'S A LITTLE PIECE OF ATARI IN EVERY COMMODORE 64

 

 

THERE'S A BIG PIECE OF COMMODORE IN EVERY ATARI

 

==> the 6502 . :) the heart of your beloved machine is Commodore! ;)

 

I guess. Commodore bought MOS in 1976, after the 65XX line was developed. :) Atari used standard 6502's in the 400/800 but switched to their custom 6502C (Sally) after that.


Better chip, but poorer implementation.

 

They didn't even get the Serial I/O sorted properly, so had to resort to bit-banging.

 

Commodore might have had some industry leading hardware designers, but were never renowned for good firmware or software.

Even Amiga's OS was outsourced, although you could probably say the same for TOS/GEM for the ST.

 

you mix things up. the chip with the Serial I/O bug was the VIA and not the CIA.

 

c64 suffered of loading speed because they decided to keep "compatibility" with the vic-20 which had the faulty VIA.

 

 

 

conclusion:

 

the CIA is both better and better implemented than the PIA, which atariksi claims has a "better joystick port"

 

edit:

 

After further research it looks like the CIA and PIA are pretty much the SAME. The only plus the CIA has is a TOD clock, hmm, and maybe that the CIA timers can count each other's underflows.

 

:)

 

After further speculation (not research), you drew another biased conclusion. There are three things -- speculation based on little or no information or on misinformation, experimental knowledge, and logical deduction. I don't accept anyone's mental speculation, especially yours, since you have been proven over and over again to be wrong and still keep raising the same point. Why don't you at least experiment -- the cable is already here:

 

http://cgi.ebay.com/ws/eBayISAPI.dll?ViewI...em=320359212377

 

I'll ship it to you for free to help you out. Given a CPU at 1.79 MHz, even if the joystick ports on both systems were nibble-accessed, the Atari wins (logical deduction). The Atari uses the POKEY for the timers, not the PIA, and the POKEY timers have a resolution of 558 ns; the CIA is fed with a 1 MHz clock, so its timing is also inferior to that of the Atari.
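(A quick check of those timer-resolution numbers in Python; note the NTSC C64's CIA actually runs at about 1.0227 MHz rather than a flat 1 MHz, included here for comparison:)

clocks_hz = {
    "Atari POKEY timer (machine clock)": 1_789_790,
    "C64 CIA timer (1 MHz, as quoted)":  1_000_000,
    "C64 CIA timer (NTSC, actual)":      1_022_727,
    "PC 8253 PIT":                       1_193_182,
}
for name, hz in clocks_hz.items():
    print(f"{name}: {1e9 / hz:.0f} ns per tick")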


THERE'S A LITTLE PIECE OF ATARI IN EVERY COMMODORE 64

 

 

THERE'S A BIG PIECE OF COMMODORE IN EVERY ATARI

 

==> the 6502 . :) the heart of your beloved machine is Commodore! ;)

 

Not from the start. But surely MOS took much benefit from being an Atari manufacturer.

 

In particular, the C64 was built upon the achievements MOS reached by creating Atari hardware.

 

 

... just use what is there... put in the stuff the Tramiels thought was needed... but they made design flaws with the C64 by not implementing several features. The biggest mistakes they made were NOT to use a better palette and to have this weak CPU.

 

The mistakes ATARI made were to create computers just for the US market, leaving no space for further enhancements. The best examples are the GTIA/ANTIC chips, which never used PAL features; only some clockings were set differently.

Well, they thought a TV set could not display colours in hires. I really wonder who decided to limit the Atari to luma only in hires. Every other TV standard was easily able to show colours in "hires".


atariksi,

 

>Atari has both joystick ports tied to one 8-bit port. Atari even reads nibbles at higher frequency.

 

Same as on the C64. The higher read speed comes from the higher CPU speed and has nothing to do with the I/O chip itself; it's just another way of saying the A8's CPU is faster. It has nothing to do with the "joystick port".

 

 

>Joystick ports are inferior on C64. Add in the keyboard interference and you are set.

 

It's just the same, and the C64's chip doing the "joyport" offers slightly more. That's all.

 

No, you can have a CPU setup which inserts wait states on I/O accesses, like the PC does. So the fact that they timed the I/O ports at the same frequency is another gain for the Atari, since net throughput is higher for I/O. We are talking I/O speed. It's the same thing with timers -- the PC, originally running at 4.77 MHz, divided its master clock down to feed the timer chip, the 8253 (PIT), so its timing accuracy is 1/1.19318 MHz, about 840 ns. I agree the CIA is a superior chip to the PIA, but the way it's implemented, its functionality can be performed better on the Atari with its chipset.

