
Blessing and Curse of Atari and Amiga computer designs


calimero

Recommended Posts

Maybe the topic title is too dramatic, but I am writing this under the impression of playing the Ambermoon remake :) 

 

(This just came to my mind while reading post #11.)

 

---

 

Both the Atari ST and the Amiga were designed around 1983 using a CPU from 1979 (the 68000), i.e. 3-5 years later.

 

Why this is important: RAM was faster than the CPU in 1983, so both companies could design a computer where the RAM (250 ns) serves the CPU (using half of the available memory time) and also the rest of the processors; in the Atari ST: the video Shifter, ACSI, floppy; later on the STE's digital sound and (almost) the Blitter (in the special "non-hog" mode?)… and the rest of the ST's free memory slots are used for memory refresh (Atari ran the RAM "over" the manufacturer's specification in the first STs to be able to achieve such a design). On the Amiga it is the same story, but its additional processors can halt the 68000 CPU (on the ST this cannot happen (?); in the worst case the ST CPU only needs to wait for an even memory cycle?). I know that the 5th bitplane on the Amiga will steal (halt) CPU cycles, but there are some other cases where the CPU will be halted, since the memory gives priority to the Amiga chipset (the additional processors) over the CPU.

 

More technical details: the Atari's 8 MHz 68000 CPU can read 16-bit data from memory only once every four cycles, so it needs just "2 MHz" (= 500 ns) memory, right?
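
As a quick sanity check of those numbers, here is a small C sketch (all figures are taken from the post above; nothing here is measured hardware):

```c
/* Back-of-the-envelope check of the ST bus timing described above. */
#include <stdio.h>

int main(void) {
    double cpu_clock_hz = 8e6;     /* 68000 clock in the ST            */
    int cycles_per_access = 4;     /* one 16-bit bus access per 4 cyc  */

    double accesses_per_sec = cpu_clock_hz / cycles_per_access;
    double access_period_ns = 1e9 / accesses_per_sec;

    printf("CPU bus accesses/sec: %.0f (one every %.0f ns)\n",
           accesses_per_sec, access_period_ns);

    /* 250 ns DRAM fits two full cycles into that 500 ns window, so the
       "other half" of the bandwidth can feed the Shifter, DMA, etc. */
    double ram_cycle_ns = 250.0;
    printf("RAM cycles per CPU access window: %.1f\n",
           access_period_ns / ram_cycle_ns);
    return 0;
}
```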

 

So this is the reason why it was not simple to make an equally efficient computer later on: you would need RAM that is twice as fast as the CPU. And we saw quite the opposite happen: the 68020 already got a few bytes of on-die cache to compensate for slower RAM, the 68030 added a data cache, and the 68040 added quite a big chunk of cache, kilobytes in size!

 

Memory speed increased in modest steps compared to the speed at which CPUs fetch and compute data.

 

To overcome this problem and keep the same computer design, both Atari and Amiga (Commodore) came to similar solutions: e.g. the Atari TT uses quadruple-width memory (64-bit, more on that later), but the 68030 can only read 32 bits at a time. The speed of the memory did not change, but they added an even faster CPU (32 MHz), so they also added Fast RAM to the design (CPU-exclusive RAM, just like in the Amiga).

If we look at the TT Shifter, it uses almost 5x more memory than the ST Shifter (154 KB vs 32 KB), so they compensated by using 64-bit memory in the TT and reworking the TT Shifter to access memory 64 bits at a time (the ST Shifter accesses RAM 16 bits at a time).
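
A rough fetch-count comparison using the sizes quoted above (a sketch assuming whole-screen fetch and ignoring refresh and overscan details):

```c
/* Video fetches per frame: ST (16-bit bus) vs TT (64-bit bus). */
#include <stdio.h>

int main(void) {
    long st_screen_bytes = 32L * 1024;   /* ST framebuffer, from the post */
    long tt_screen_bytes = 154L * 1024;  /* TT framebuffer, from the post */

    long st_fetches = st_screen_bytes / 2;  /* 16-bit Shifter fetches */
    long tt_fetches = tt_screen_bytes / 8;  /* 64-bit Shifter fetches */

    printf("ST: %ld fetches/frame\n", st_fetches);  /* 16384 */
    printf("TT: %ld fetches/frame\n", tt_fetches);  /* 19712 */
    /* Widening the bus keeps the number of memory slots eaten by video
       roughly flat despite ~5x more screen data. */
    return 0;
}
```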

 

This was the end of the road for such a computer design. It could probably have gone further with dual-ported memory (like VRAM), but that would defeat the purpose of using cheap RAM for video in these computers…

 

Even today, the gap between memory and CPU speed is getting wider! One crucial thing that makes the Apple M1 so fast is fast memory, which is, like in the ST, shared among all the "custom" chips inside the M1!

 

 

Links: https://raw.githubusercontent.com/wiki/mist-devel/mist-board/TG68SDRAM.md

 

http://www.bitsavers.org/pdf/atari/ST/Atari_ST_GEM_Programming_1986/GEM_0904.pdf

 

https://www.synacktiv.com/ressources/Atari-ST-Internals.pdf

 

http://web.archive.org/web/20140715005452/http://www.sarnau.info/atari:atari_tt030_hardware_reference_manual


Some things to add here:

Modern computers do not share main RAM between CPU and video. Video RAM is separate, usually on the video card, and used only for video. That's faster than video in main RAM on the mainboard - and of course it costs more. And here is the crucial factor in all this: the price/performance ratio. Even around 1980 very fast RAM existed - static RAM - but manufacturers used the much cheaper and slower dynamic RAM.

And dynamic RAM is what is still used for main RAM and video RAM. Caches use static RAM - so it was in the Mega STE too.

CPU speeds increased a lot, and dynamic RAM was not able to follow. The solution is to use bigger and bigger caches - best if they are in the CPU itself.

And as we can see, it is not a new concept - the 68020/30 had it, in small sizes, surely because of price.

It would be possible to make a computer with, say, 16 GB of static, very fast RAM - but it would likely cost 10x more, and the speed gain would be what?

Maybe 30% in the best case.

It is much better to spend the money on a more efficient, faster CPU. And that's exactly what has happened in the last 20 years.

 

"In Amiga is same story but her additional processors can halt 68000 CPU (in ST this can not happen (?)"

Actually there are some stalls even when only video and the CPU are active. That happens with instructions whose cycle count is not divisible by 4 - for instance dbf - where the CPU will wait 2 cycles to be synced with the video's RAM access. Such instructions are not frequent, so the overall speed loss is small, only a few %.
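
A small C sketch of that rounding effect, assuming the usual 68000 cycle counts (a taken dbf is 10 cycles) and a rigid 4-cycle bus grid:

```c
/* If an instruction ends off the 4-cycle bus grid, the CPU waits
   until the next slot before its next memory access. */
#include <stdio.h>

/* Round a cycle count up to the next multiple of 4. */
static int with_sync_penalty(int cycles) {
    return (cycles + 3) & ~3;
}

int main(void) {
    int examples[] = { 10, 12, 14, 8 };  /* 10 = dbf, loop taken */
    for (int i = 0; i < 4; i++) {
        int c = examples[i];
        printf("%2d cycles -> %2d effective (%d wait)\n",
               c, with_sync_penalty(c), with_sync_penalty(c) - c);
    }
    return 0;
}
```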

On the other side, the Blitter and ACSI (disk DMA) may have high transfer rates, and then the CPU must wait. At the max DMA transfer rate (2 MB/sec) on the ST(E), the CPU will run at approximately half speed. Audio DMA on the STE is what does not slow the CPU down - it has lower bandwidth needs, about 100 KB/sec max at the highest quality, and that fits well into the video blanking periods. I made some tests on this in the past.
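
Rough share-of-bus arithmetic for those figures (a sketch assuming the CPU's half of the bus is ~2M word accesses/s, i.e. ~4 MB/s):

```c
/* How much of the CPU's memory slots different DMA rates consume. */
#include <stdio.h>

int main(void) {
    double cpu_half_bw = 2e6 * 2;  /* ~4 MB/s available to the CPU  */
    double acsi_dma    = 2e6;      /* 2 MB/s disk DMA (from post)   */
    double audio_dma   = 100e3;    /* ~100 KB/s STE audio, max rate */

    printf("ACSI at max rate: ~%.0f%% of CPU slots -> ~half speed\n",
           100.0 * acsi_dma / cpu_half_bw);
    printf("Audio: ~%.1f%% at most, and it fits into blanking anyway\n",
           100.0 * audio_dma / cpu_half_bw);
    return 0;
}
```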


19 hours ago, calimero said:

So this is the reason why it was not simple to make an equally efficient computer later on: you would need RAM that is twice as fast as the CPU. And we saw quite the opposite happen: the 68020 already got a few bytes of on-die cache to compensate for slower RAM, the 68030 added a data cache, and the 68040 added quite a big chunk of cache, kilobytes in size!

 

Memory speed increased in modest steps compared to the speed at which CPUs fetch and compute data.

This is true for the PC as well, and probably any other system with its origins in the 80s. There's no "special" design here.

 

The PC/XT and PC/AT had memory faster than what the CPU could utilize. With the 386, they had to add external cache on later, faster models to compensate for slow RAM. The 486 finally got internal cache. With the Pentium, the bus was made 64 bits wide, even though the CPU was still 32-bit, etc. All the same as what you describe.


Hi calimero,

 

Apologies, I'm not sure I have parsed your original post correctly, so I might have misunderstood some of the points you have raised.

 

The entire Amiga architecture was designed around minimising memory use and maximising memory bandwidth, both of which were very expensive resources in the early 1980s.

 

Both the Atari ST and the Amiga used planar graphics as a fairly reasonable engineering compromise for reducing memory usage. Each pixel only needs to use as many bits as needed to address the maximum number of colours. Very memory efficient, but it makes addressing individual pixels for read/write very slow, and it requires shifting when copying graphical images to x positions which don't fall exactly on byte boundaries (this is where a blitter, which usually includes a shift operation for free, is very useful).
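
A minimal C sketch of the read side of this, assuming a simplified layout of four separate linear bitplanes (the ST actually interleaves planes word by word, the Amiga stores them separately); the point is that one pixel's colour index costs one memory touch per plane, plus shifts and masks:

```c
/* Reading one pixel from a 4-bitplane (16-colour) planar bitmap. */
#include <stdint.h>
#include <stdio.h>

#define PLANES 4           /* 16 colours -> 4 bits per pixel     */
#define WIDTH_BYTES 40     /* 320-pixel line, 8 pixels per byte  */
#define HEIGHT 200

static unsigned get_pixel(uint8_t *plane[PLANES], int x, int y) {
    int offset = y * WIDTH_BYTES + x / 8;
    int bit = 7 - (x & 7);
    unsigned colour = 0;
    for (int p = 0; p < PLANES; p++)          /* one access per plane */
        colour |= ((plane[p][offset] >> bit) & 1u) << p;
    return colour;
}

int main(void) {
    static uint8_t mem[PLANES][WIDTH_BYTES * HEIGHT];
    uint8_t *plane[PLANES] = { mem[0], mem[1], mem[2], mem[3] };

    plane[0][0] = 0x80;  /* pixel (0,0): bit set in planes 0 and 2 */
    plane[2][0] = 0x80;
    printf("pixel(0,0) = colour %u\n", get_pixel(plane, 0, 0)); /* 5 */
    return 0;
}
```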

 

On the Atari ST architecture, the Blitter (from the technical documents I have recently read; I must confess to not having used the Atari Blitter when I did some ST programming back in the day) runs in competition with the CPU. This makes sense, as it is an independent device sitting on the memory bus which was added to the architecture after its initial design and needed to be included in such a way as to ensure backwards compatibility with pre-Blitter software. The usefulness of the Blitter comes from its ability to perform a memory-to-memory copy at least 4 times faster than the CPU could (which needs to go through a fetch-decode-execute process for every single byte/word/long of the copy loop... though we did use movem tricks back in the day to speed this up), and you get the logic and shifts for "free" during the copy.

 

The Amiga's real trick was to use video-beam-synchronised DMA to maximise memory bandwidth. The 68000 could only access memory during the even memory cycles, so the Amiga's DMA did all its time-critical work in the odd cycles, where it wouldn't stall the CPU: DRAM refresh, disk, audio, sprites, and video fetch (a special case, as during the visible parts of the display the video hardware - the Denise chip - can steal even cycles if needed). These time-critical DMA functions were sequenced to occur at specific positions of the video display. Please see the diagram from the Amiga Hardware Reference Manual below:

 

[Image: DMA time-slot allocation diagram from the Amiga Hardware Reference Manual]
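
A toy model of that even/odd split (purely illustrative and not cycle-exact; the real per-scanline slot layout is what the diagram above shows):

```c
/* DMA owns the odd bus slots, the CPU owns the even ones, so outside
   the special video cases neither stalls the other. */
#include <stdio.h>

int main(void) {
    const char *dma_jobs[] = { "refresh", "disk", "audio", "sprites" };

    for (int slot = 0; slot < 8; slot++) {
        if (slot % 2 == 0)
            printf("slot %d: CPU\n", slot);
        else
            printf("slot %d: DMA (%s)\n", slot, dma_jobs[(slot / 2) % 4]);
    }
    return 0;
}
```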

 

The Blitter and the Copper (a beam-synchronised coprocessor, which should be familiar to 8-bit Atari coders) can use both the odd memory cycles and any even cycles the CPU isn't using (if the CPU is working in its registers, cache, or Fast RAM). The Blitter can be set to "hog mode", where it will be given priority over the CPU for the even memory cycles... I never used this mode.

 

Hardware sprites are an interesting feature (and not one I used a great deal), as they are a way of using the memory cycles available during the non-visible portions of the display to generate graphics for the visible portions.

 

But all of these clever features became redundant by the late 1980s, when DRAM chips were much bigger, faster, and, importantly, cheaper. The above timing diagram is only relevant for the original (1985) Amiga chipset; the ECS/AGA chipsets used much faster DRAM chips and were able to use the extra memory bandwidth for more colour depth, greater resolutions, bigger sprites, and faster blitting operations... but what was really needed was a new architecture, to take advantage of the faster DRAM rather than just trying to make the old features better.

 

This is why, in many ways, I prefer the Atari Falcon's hardware design over the Amiga 1200. Of course the Atari Falcon had its own problems trying to shoehorn ST compatibility into what should have been a totally new architecture.


2 hours ago, ParanoidLittleMan said:

Modern computers do not share main RAM between CPU and video. Video RAM is separate, usually on the video card, and used only for video. That's faster than video in main RAM on the mainboard - and of course it costs more. And here is the crucial factor in all this: the price/performance ratio.

What I find fascinating is that the "most modern computer", Apple's M1, uses memory just like the ST: every processor (CPU, GPU, Neural Engine, ISP...) can access any data in one unified RAM. (E.g. back in the day HP called this UMA, "unified memory architecture".)

 

Thanks for the clarification about CPU cycle stealing by DMA devices! :) 

Edited by calimero

2 hours ago, derSammler said:

This is true for the PC as well, and probably any other system with its origins in the 80s. There's no "special" design here.

 

The PC/XT and PC/AT had memory faster than what the CPU could utilize.

It seems that you missed the point: Atari and Amiga, unlike the PC XT/AT, use one RAM to serve data to the CPU and to the other processors in the computer (video, sound...);

 

This is quite the opposite of the PC design, where RAM serves only the CPU.

 

The Amiga and Atari approach could not continue (it was not financially feasible) as soon as CPUs became faster than RAM. But back in the mid 80s, they used this fact (RAM faster than CPU) to design great computers! If you look at the Macintosh from that era, the original Macintosh needs to stall its 68000 CPU while graphics are being displayed on screen! The Mac was the worst hardware design among the Mac, ST and Amiga :) (this can be nicely seen by running a Macintosh emulator on an Amiga or ST - they are faster than the original Mac with the same CPU!)


4 hours ago, h5n1xp said:

[Image: DMA time-slot allocation diagram from the Amiga Hardware Reference Manual]

 

But all of these clever features became redundant by the late 1980s, when DRAM chips were much bigger, faster, and, importantly, cheaper. The above timing diagram is only relevant for the original (1985) Amiga chipset; the ECS/AGA chipsets used much faster DRAM chips and were able to use the extra memory bandwidth for more colour depth, greater resolutions, bigger sprites, and faster blitting operations... but what was really needed was a new architecture, to take advantage of the faster DRAM rather than just trying to make the old features better.

Actually, that diagram is valid for all Amiga computers. Nothing has changed in that respect from the A1000 to the A4000; all of them have the same old '83-timing memory access.

The only change is that the CPU in the A3000/A4000 and A1200 has 32-bit access instead of 16-bit. This means the CPU in the A1200 needs 8 or more cycles to access CHIP RAM, and the A3000/A4000 much more.

 

Edited by Cyprian

3 minutes ago, Cyprian said:

Actually, that diagram is valid for all Amiga computers. Nothing has changed in that respect from the A1000 to the A4000; all of them have the same old '83-timing memory access.

The only change is that the CPU in the A3000/A4000 and A1200 has 32-bit access instead of 16-bit. This means the CPU in the A1200 needs 8 or more cycles to access CHIP RAM, and the A3000/A4000 much more.

 


I was trying to be brief to remain as on-topic as possible, but you are correct that the timings are valid for the most part; the fetch widths, however, are different (x4). So sprites can be 64 pixels wide and the bitplane fetch can pick up 8 bitplanes at horizontal resolutions up to 1280 pixels. These things sound great, but they weren't much use in practice, and they just add weight to my argument that the Amiga's architectural design choices did not really offer much advantage once DRAM chips became fast and cheap.

 

An 8-bit planar display, for example, is frankly insane; it's the least efficient way to address 8-bit pixels.
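
To make that concrete, a C sketch (simplified linear bitplanes, a hypothetical layout rather than the chipset's real fetch order) of what one pixel write costs in 8-bitplane planar mode versus a chunky mode:

```c
/* One pixel write: 8 read-modify-write byte accesses in 8-bitplane
   planar mode vs a single byte store in chunky mode. */
#include <stdint.h>

#define PLANES 8           /* 256 colours -> 8 bitplanes         */
#define WIDTH_BYTES 40     /* 320-pixel line, 8 pixels per byte  */
#define HEIGHT 200

/* Planar: touch one byte in every plane. */
static void put_pixel_planar(uint8_t *plane[PLANES], int x, int y,
                             uint8_t colour) {
    int offset = y * WIDTH_BYTES + x / 8;
    uint8_t mask = (uint8_t)(0x80 >> (x & 7));
    for (int p = 0; p < PLANES; p++) {
        if (colour & (1u << p)) plane[p][offset] |= mask;
        else                    plane[p][offset] &= (uint8_t)~mask;
    }
}

/* Chunky: the pixel is a single byte, one store. */
static void put_pixel_chunky(uint8_t *fb, int x, int y, uint8_t colour) {
    fb[y * (WIDTH_BYTES * 8) + x] = colour;
}

int main(void) {
    static uint8_t planes_mem[PLANES][WIDTH_BYTES * HEIGHT];
    static uint8_t chunky_fb[WIDTH_BYTES * 8 * HEIGHT];
    uint8_t *plane[PLANES];
    for (int p = 0; p < PLANES; p++) plane[p] = planes_mem[p];

    put_pixel_planar(plane, 100, 50, 0xAB);     /* 16 memory touches */
    put_pixel_chunky(chunky_fb, 100, 50, 0xAB); /*  1 memory touch   */
    return 0;
}
```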


What would have been interesting would have been to use a T212 transputer instead of the Blitter in the Blitter socket. Given that the transputer is NUMA-like, incorporating additional transputers in the next iteration (say the TT) would have produced a very different architecture from the designs of the time, more like a Xeon Phi.


I am a bit fuzzy these days on exactly how the Amiga 1000 chipset worked - my memory is not what it was when reading about it in Byte and PCW previews from 1985 onward - but the CPU and AGNUS only halt each other if they both need read/write access to the same 16-bit word(?). This also only occurs if the memory in question is CHIP[set] RAM and not a FAST (CPU-exclusive) RAM address.

 

The fifth bitplane and higher modes do need more cycles to complete their tasks, which means you get a 40% loss of free DMA cycles in total, not the 20% loss you would expect from going from a 4- to a 5-bitplane screen mode. I think that's how it is, and the problem gets worse in 6-bitplane modes; I don't think it is a CPU issue, it's chipset bandwidth vs system bus related. There is also something weird about using all 8 hardware sprites and doing smooth scrolling at the same time being tricky, so some game coders used 7 sprites per scanline.

 

The Amiga 1000 had a 14 MHz memory bus vs the 7 MHz CPU and 7 MHz chipset buses, so it was interleaved memory access at the very least - very cool, and decades before the Pentium M (a.k.a. Centrino) did it for off-the-shelf computers. This is also the reason why it is a problem on the Amiga 1200: 14 MHz memory with a 14 MHz CPU and the 14 MHz AGA chipset, so there you do get a lot more contention between the CPU and the chipset using memory. Of course the AGA chipset was never designed to work in a 2 MB 14 MHz computer; it just shows how much Commodore had really lost the plot and how cut-down and cash-strapped they were (but Irving Gould still wrote his annual $1M bonus checks to himself up to 1994!!)

 

The problem, as perceived by lame coders, is that the custom chips can only see a fraction of memory in some 1 MB or greater A500-class machines. Hence the improvement from a 512 KB to a 1 MB address range for the chipset bus to control. You should, however, be writing your code for games like an arcade machine or Megadrive did, and only ever use the CPU in CHIP RAM if it is 100% required. Lotus II uses about 800 KB of RAM on the Amiga (it does load in each level too), but crucially it's not coded like some muppet using AMOS to do Zaxxon by having a 600 MB IFF image of the background to scroll through lol.


20 hours ago, oky2000 said:

The Amiga 1000 had a 14 MHz memory bus vs the 7 MHz CPU and 7 MHz chipset buses

The A1000 (like every other Amiga - A500/2000/3000/4000 and A1200) has a 3.5 MHz CHIP RAM bus, where every second memory slot is available to the CPU.

Edited by Cyprian

10 hours ago, Cyprian said:

The A1000 (like every other Amiga - A500/2000/3000/4000 and A1200) has a 3.5 MHz CHIP RAM bus, where every second memory slot is available to the CPU.

Like I said, I've forgotten more things than most people ever knew lol. It still doesn't sound right, but I can only find millennial-quality results from Google, and I no longer have any technical docs from the time - or the neurons to go through them :) 

 

The point was that the CPU on the A1000 is rarely blocked; on the A1200 it's blocked most of the time, so adding extra RAM to an A1200 makes Doom-style engines that need both CPU and custom-chip DMA time run nearly twice as fast as you would expect. Ditto for polygon-based games.


On 10/1/2021 at 11:54 AM, h5n1xp said:

Hi calimero,

 

Apologies, I'm not sure I have parsed your original post correctly, so I might have misunderstood some of the points you have raised.

 

My thought was to clarify how the Atari and Amiga leapfrogged the PC - by utilising the fact that memory was faster than the CPU at the time.

And how the Amiga and Atari could not continue with this design philosophy, because eventually the CPU became faster than memory - much, much faster :)

 

It would be interesting to include the Acorn Archimedes, with its ARM, in this story. The motivation for Acorn to create their own CPU was that they very much liked the MOS 6502 and its fast memory access, but there was no roadmap for future 6502 development.

 

It is especially interesting because today we see an ARM CPU, the Apple M1, that leapfrogs Intel's CPUs.

 

 

On 10/1/2021 at 11:54 AM, h5n1xp said:

Both the Atari ST and the Amiga used planar graphics as a fairly reasonable engineering compromise for reducing memory usage. Each pixel only needs to use as many bits as needed to address the maximum number of colours...

Yes, the ST and Amiga are very memory conservative, probably even more than the PC at the time, but my point is that they leapfrogged the PC through clever use of the fact that memory was faster than the CPU.

 

Like I said, my intention is not to compare the ST to the Amiga, although it is fine to get some facts clarified :) - my intention was to point out how the ST and Amiga computer designs leapfrogged the other designs (PC and Mac) of the time. And also how this design was not "sustainable" in the long run, because eventually CPU speed leapfrogged RAM speed, and that was the end of the road for the ST and Amiga design (they needed to become more PC-like; e.g. the Atari Falcon has the particular problem that higher graphics modes "eat" CPU cycles).

 

On 10/1/2021 at 11:54 AM, h5n1xp said:

The Amiga's real trick was to use video-beam-synchronised DMA to maximise memory bandwidth. The 68000 could only access memory during the even memory cycles...

Yes, like I stated in the first post :)

Same as the ST.

By the way, thanks for the picture of the Amiga's cycle diagram! Very informative!

 

On 10/1/2021 at 11:54 AM, h5n1xp said:

Original (1985) Amiga chipset; the ECS/AGA chipsets used much faster DRAM chips and were able to use the extra memory bandwidth for more colour depth, greater resolutions, bigger sprites, and faster blitting operations...

I had the impression that AGA, just like the TT, only has "wider" access to memory, and that the memory speed is the same as on the Amiga 1000/500 OCS? (EDIT: I see that Cyprian also mentions this… but like I said, my intention was not to compare the ST to the Amiga design.)

On 10/1/2021 at 11:54 AM, h5n1xp said:

 

but what was really needed was a new architecture, to take advantage of the faster DRAM rather than just trying to make the old features better.

Exactly my point. The ST and Amiga design was great when RAM was faster than the CPU.

When the CPU became faster, a new design was needed.

On 10/1/2021 at 11:54 AM, h5n1xp said:

 

This is why, in many ways, I prefer the Atari Falcon's hardware design over the Amiga 1200. Of course the Atari Falcon had its own problems trying to shoehorn ST compatibility into what should have been a totally new architecture.

The Atari Falcon started its life as an addition to the STE: faster CPU, DSP… People have dug up many prototypes of the STE with additional boards that eventually became the Falcon (Sparrow was the codename of this design, and Falcon was the codename for the new computer); people have also dug up some Atari Corp. documentation and PCB designs… you can find it all on atari-forum.com.


The Amiga 1200 was only a couple of years ahead of PC gaming (Super Stardust on a DOS PC needs at minimum a 120 MHz Pentium to play like the crippled 68020 in the A1200 as sold in all shops with no Fast RAM) and was a stopgap at best. By the time ESCOM and Amiga Technologies came back with Amiga 1200s in shops in 1996, it was not worth it. Manufacturing them in France also killed any sensible price being possible once Commodore was finally sold.

 

The only real improvement in the A1200 was the hardware sprites, which are shit on the A500/1000 etc., but even then only for games like Mortal Kombat/SF2. You can have 4 lightning-fast 16x4-palette sprites, 64 pixels wide by screen height, meaning there is no reason why A1200 SF2 isn't as good as the Megadrive. Other than that, it was exactly what I expected from the clueless, useless twats left at Commodore in 1991/92: many problems with AGA performance, and AGA was never ever supposed to be used with just Chip RAM, as it would hog the CPU in game engines too much. Perhaps that financial vampire scumbag Irving Gould shouldn't have pushed out the actual Amiga design talent in 1986? Twat! 

 

I never used a Falcon; I wish I had bought a boxed one when they were like 400 bucks, because I don't know what it's like to use one, either for creative/serious stuff or for games. The only commercial game from the time I knew was LlamaZZAP!


There was a lot of talk here about Fast RAM, HW sprites, the Blitter... And surely it was interesting in those years when the ST and Amiga were designed.

Let's look at 3D support: clearly the only support for it was the CPU. And it was pretty good for that time. So we had good and fast 3D games by 1987, with polygon graphics. Half a meg or more of RAM helped a lot too - larger precalculated trigonometric tables were placed in it, because computing them used a lot of CPU time. And the ST was faster at polygon 3D - faster CPU, fewer slowdowns...

And this leads us to the FPU. It was rarely used and expensive in the 80s. Of the ST(E) machines, only the Mega STE has a socket for it.

And of course, after some years it became part of every modern CPU, which helped 3D gaming and CAD a lot.

In the same years, 3D graphics chips and cards started selling to the masses, and prices became affordable for almost everyone.

And they were capable of much more than some fast sprite drawing. Memory speed in 3D cards was crucial, so special DRAM versions were used in them - SGRAM, for instance, in the Nvidia Riva. It was a really big success, in large part because it was a combined 2D/3D chip.


10 hours ago, oky2000 said:

The Amiga 1200 was only a couple of years ahead of PC gaming (Super Stardust on a DOS PC needs at minimum a 120 MHz Pentium to play like the crippled 68020 in the A1200 as sold in all shops with no Fast RAM) and was a stopgap at best

The A1200 was never really ahead of PCs. The fact that it could push some sprites well was irrelevant, since in the years it was released PCs could already play the next gen of gaming: the likes of Wolfenstein 3D and Ultima Underworld. And in 2D the SNES was already king.


  • 3 weeks later...
On 10/5/2021 at 10:43 AM, youxia said:

The A1200 was never really ahead of PCs. The fact that it could push some sprites well was irrelevant, since in the years it was released PCs could already play the next gen of gaming: the likes of Wolfenstein 3D and Ultima Underworld. And in 2D the SNES was already king.

I believe you have misinterpreted what oky2000 was trying to say.

 

When released in 1992, the AGA chipset was ideal for the popular games that you could buy at that time. But as you have pointed out, Wolfenstein had been released, and it hinted at the future graphical requirements for games (the byte-per-pixel chunky graphics mode of VGA allowed for far more efficient rendering in these types of games), and with the release of Doom in 1993, the Amiga architecture was very obviously entirely inappropriate. We can blame Commodore (or rather their engineering management, who had a "No New Chips" mantra at the time, which is why the AGA chipset is just a revision of the ECS, which is just a revision of the OCS) for the lack of vision.


5 hours ago, calimero said:

A small addition: Commodore did add a chip for chunky-to-planar conversion in the CD32 console, released after the Amiga 1200.

Yep, the CD-ROM chip - AKIKO - had an additional C2P function.

The conversion process looks like this: the CPU has to read 16 bytes of chunky data from RAM, write them to AKIKO, read 16 bitplane bytes back from AKIKO, and write them to CHIP RAM. Really a smart idea...

Why would you add a fast native 8-bit chunky mode to AGA (just to avoid the internal bitplane shifters) when you can add an additional slow chip with a slow C2P method?
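
For illustration, a hedged C sketch of what a plain software chunky-to-planar pass does: eight 8-bit chunky pixels become one byte in each of eight bitplanes. (AKIKO's exact register interface is as described above; the routine below is a generic software equivalent, not the chip's actual implementation.)

```c
/* Software C2P for one group of 8 pixels. */
#include <stdint.h>
#include <stdio.h>

/* Convert 8 chunky pixels into 8 bitplane bytes (plane p = bit p). */
static void c2p_8px(const uint8_t chunky[8], uint8_t planes[8]) {
    for (int p = 0; p < 8; p++) {
        uint8_t b = 0;
        for (int px = 0; px < 8; px++)
            b |= (uint8_t)(((chunky[px] >> p) & 1u) << (7 - px));
        planes[p] = b;
    }
}

int main(void) {
    uint8_t chunky[8] = { 0, 1, 2, 3, 255, 128, 64, 5 };
    uint8_t planes[8];
    c2p_8px(chunky, planes);
    for (int p = 0; p < 8; p++)
        printf("plane %d: 0x%02X\n", p, planes[p]);
    return 0;
}
```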


On 10/1/2021 at 3:07 AM, calimero said:

It seems that you missed the point: Atari and Amiga, unlike the PC XT/AT, use one RAM to serve data to the CPU and to the other processors in the computer (video, sound...);

 

This is quite the opposite of the PC design, where RAM serves only the CPU.

 

The Amiga and Atari approach could not continue (it was not financially feasible) as soon as CPUs became faster than RAM. But back in the mid 80s, they used this fact (RAM faster than CPU) to design great computers! If you look at the Macintosh from that era, the original Macintosh needs to stall its 68000 CPU while graphics are being displayed on screen! The Mac was the worst hardware design among the Mac, ST and Amiga :) (this can be nicely seen by running a Macintosh emulator on an Amiga or ST - they are faster than the original Mac with the same CPU!)

 

Don't bash the Mac hardware too much. The ST with its weak-a$$ YM2149 - retroactively, thanks for that, Yamaha and Sight+Sound, ya fargin' iceholes! - could've definitely used the Mac's DAC to supplement it. The Mac was also easier to upgrade - if you could afford to do so - thanks to it using SIMMs, which didn't come standard on our ST computers until the long-delayed STE (and then only up to 4 MB). The Mac's software platform embraced the 68881/2 FPU better than the ST platform did. When SCSI was finalized, Apple fully embraced it, whereas Atari Corp clung to ACSI. Atari Corp failed to incorporate GDOS into the TOS ROMs - which they originally promised - so we continuously got to hear the Cult of Mac adherents gloat over their prettier standard fonts (as if it wasn't bad enough already having to listen to the Amigans). And they had higher-capacity 3.5" discs than the ST too, whether single or double density. The small things add up.

 

As for graphics, Amiga bla bla bla. What Atari Corp should've done - since the OMNI chipset walked out Atari Inc's door with permission in early 1984 - was strike up negotiations with Atari Games, especially while they were sharing a building, and license whatever custom graphics chip was used in "Atari System 1" (AS1). For the record, games like the arcade Gauntlet's graphics - not to mention Marble Madness, which was out in early 1984 - certainly went beyond the Amiga's capabilities, even if the color palette was only 1,024 colors compared to the Amiga's 4,096. AS1 could display 256 colors on-screen at once without resorting to tricks, and at resolutions higher than the ST and Amiga were capable of. Had Atari Corp thought to slap that sucker into the ST to supplement the SHIFTER, it would've been advantageous to Atari Games, since they could've pooled their manufacturing orders. The ST portion of the order would've been far larger than Atari Games' order, but it would've reduced both of their manufacturing costs. Plus, it would've ultimately exposed a larger number of programmers to AS1 - and AS2 - which could've had other benefits by expanding the competent programmer pool. Even if such an arrangement had dragged into 1986 - about the time Atari Corp and Atari Games settled their legal differences over IP ownership from the former unified Atari Inc - the chip could still have made a major difference long before the STE arrived. Separate that custom graphics chip from the rest of AS1 and it's a 68010-based platform with a YM2151 sound chip, a POKEY, a 6502 to boss around the sound chips and monitor the coin slots, and a TI speech-synthesis chip thrown in for good measure. A speech chip that the Toshiba speech chip in the earlier Commodore 64 Speech Synthesis Expansion Module sounds suspiciously like (and was not very expensive to purchase in volume).

 

On 10/4/2021 at 4:19 PM, oky2000 said:

I never used a Falcon; I wish I had bought a boxed one when they were like 400 bucks, because I don't know what it's like to use one, either for creative/serious stuff or for games. The only commercial game from the time I knew was LlamaZZAP!

 

Steel Talons, Road Riot 4WD...

 

 


I don't think it is fair to blame Atari for not using a DAC audio system/chip in the Atari ST in 1985, comparing it with the 3x more expensive Mac...

Power without the price has its price ? Additionally, a DAC needs much more RAM for audio. And the lack of expansion ports was part of the cost saving too. Things were much better with the Mega STE, although it came a little too late.

Then, no GDOS in TOS? Of course there wasn't - they already had problems fitting everything into the 192 KB ROM space. And GDOS is not something most users, or most software, needed. It was available as loadable software, so actually no worse than on the Mac, which loaded parts of its OS from disk and was way slower than the ST.

The real problem with the ST series was slow development, in my opinion. It started great, then... ehm...


On 10/24/2021 at 1:44 PM, Cyprian said:

Yep, the CD-ROM chip - AKIKO - had an additional C2P function.

The conversion process looks like this: the CPU has to read 16 bytes of chunky data from RAM, write them to AKIKO, read 16 bitplane bytes back from AKIKO, and write them to CHIP RAM. Really a smart idea...

Why would you add a fast native 8-bit chunky mode to AGA (just to avoid the internal bitplane shifters) when you can add an additional slow chip with a slow C2P method?

As I understand it, AKIKO was built using a third-party (i.e. not Commodore) ASIC development kit, so it had a set size, and there was space left on the die after the CD-ROM controller was developed, so the designers just added a C2P function to try to cover that glaring deficiency in the AGA design. I think there were nearly two years between the design of Lisa (the AGA display-generator chip, which was built around the concept of bitplanes, but with 8 bits per pixel should have had a chunky pixel mode) and AKIKO (I would have to confirm that with Dave Haynie).

 

You are correct in your scorn for the feature, though. The need to repeatedly read and write the (already bandwidth-limited) Chip RAM meant that AKIKO C2P was no faster, and sometimes slower, than just doing the C2P with the CPU (with Fast RAM). If one had an '030, then AKIKO C2P was useless.

