
Atari's legal restrictions on using the Lynx Chipset?


kool kitty89


Does anyone know if (or how, specifically) Atari Corp was restricted and/or obligated to use the Lynx chipset?

 

I'd imagine that their agreement with Epyx had some obligation to release a handheld system in general, but were there restrictions or provisions on using the hardware for other products?

 

Did Atari have free use of the IP of the design (and did they have exclusivity or could Epyx license it to another company if they chose to)?

 

 

And, if Atari did have free/flexible use of the chipset, why didn't they use it (or parts of it) for other projects? In particular, the Lynx design seems to make a much better basis for a cost-effective/flexible/attractive (to developers and consumers) mass-market home game console than the Panther or an ST-derived console. (An ST console would actually be more attractive than the Panther too, at least in terms of being more "normal" and developer/port friendly, but the Lynx was far better all around from a cost, performance, and programmer/developer standpoint.)

 

For that matter, it seems like parts of the Lynx hardware (at least the blitter) would have been nice in an updated ST . . . albeit such an update would also mean a SHIFTER with packed-pixel support (a feature they should have been aiming at anyway by the late 80s). I can see wanting backwards compatibility with the older ST BLiTTER, but given it had been used pretty much exclusively by high-level applications (MEGA ST specific) prior to the STe, it wouldn't have been that big a jump to drop the old blitter and switch to a new one without hardware backwards compatibility. (But perhaps with OS-level backwards compatibility and facilities for patching some other software that had supported the old BLiTTER.)

 

 

It seems like a waste for Atari to have had a bunch of different projects and designs when they could have consolidated resources to be used (to different extents) across multiple products. (That's a criticism you could level at Atari Inc in general, but Atari Corp could still have done it moving forward -albeit not so much with older chipsets that would be difficult to re-engineer for added features while maintaining compatibility, something Atari Inc had already partially compromised by losing some of the key engineers from the A8 design teams.)

 

Atari Corp's actual sequence went something like this:

- the ST line adding the MEGA BLiTTER
- the STe (same blitter, a moderate update to the SHIFTER, and DMA sound), with the Lynx chipset arriving somewhat before the STe entered production, iirc
- the Panther continuing throughout 1989 to the point of completing the LSI design, with preproduction dev systems using Panther chips released late that year
- the TT with the TT SHIFTER (I think still without packed-pixel support, but I'm not sure)
- the Falcon with VIDEL in the works (plus 8-channel 8/16-bit DMA audio, an off-the-shelf 56K DSP, etc.)
- the Jaguar starting in 1990 and being solidified in mid/late 1991 (TOM's first silicon revision taped out late that year)
- the MEGA STE arriving in 1991 with a 16 MHz derivative of the old BLiTTER (plus a 16 MHz 68k, the STe SHIFTER, DMA sound, etc.)
- the Falcon arriving at market in 1992 (early Jag dev systems released early that year as well)
- late 1992 seeing the final revision of TOM arrive and JERRY completed as well, iirc
- then the Jaguar's release/test market in fall of 1993

It seems like there's a lot of waste there, and also that some of the video-game-specific hardware would have served the computers a good deal better than the computer-specific hardware they actually used. (Like a packed-pixel SHIFTER -especially with both 4 and 8bpp modes- coupled with the Lynx blitter, something that would have been pretty good even into the early 90s; and the Jaguar chipset was particularly impressive in general, with tons of possible uses in a computer -especially with more investment in tools and bug fixes . . . and perhaps aiming the Jag blitter's design to include backwards compatibility.)

Hell, even the Panther (more so the Jaguar object processor) has some interesting potential as a 2D accelerator, especially for windowing. (Probably more realistic to focus on blitter-type hardware though -the original Panther was especially impractical due to requiring fast 32-bit SRAM to operate, but the revised version in the Jaguar was optimized to use 64-bit DRAM efficiently- albeit a fully blitter-oriented design with the Jaguar chipset could have had other advantages.)


The Lynx design seems to make a much better basis for a cost-effective/flexible/attractive (to developers and consumers) mass-market home game console than the Panther.

While I agree that a console inspired by the Lynx is interesting, I don't think the actual Lynx chipset is usable. It would be a do-over.

 

You probably can't use Mikey, since a 6502 is pretty slow compared to a Genesis/Panther.

 

Suzy is also slow. Slow is fast enough on the Lynx, since you only have 160x102 pixels. But that's one quarter of the pixels used by the Genesis (or Panther). If your Lynx game hits 30FPS (not all can), your Lynx console is good for 7FPS. Not exactly Sonic-speed...
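A quick back-of-envelope check of that claim (assuming a 320x224 active display for the Genesis/Panther, and taking Suzy's throughput as fixed):

```python
# Back-of-envelope check of the pixel-count argument above.
# Resolutions are the commonly cited active display sizes (assumption:
# Genesis/Panther at 320x224; the Lynx is 160x102).
lynx_pixels = 160 * 102          # 16,320 pixels per frame
tv_pixels   = 320 * 224          # 71,680 pixels per frame

ratio = tv_pixels / lynx_pixels  # ~4.4x more pixels to push
lynx_fps = 30
scaled_fps = lynx_fps / ratio    # same blitter throughput, bigger screen

print(f"{ratio:.1f}x the pixels -> ~{scaled_fps:.0f} FPS")  # ~7 FPS
```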

 

I know how these threads go: "Just speed up Suzy!" But Suzy is a clever chip and it pushes everything to the limit. It pushes the DRAMs to their max speed, and it uses FIFOs to pack pixels and get optimal page-mode access.

 

First you'd have to increase the bus width. 8-bits just isn't enough width for a fast blitter. Whoops, that's a complete datapath redesign. But maybe we can keep parts of the Suzy control logic? Hm, now that's the bottleneck, since Suzy is shifting 4-bit pixels per Suzy cycle. This was fast enough on an 8-bit bus, but we need a new controller now.

 

And you still have only 16 colors, which compares poorly to Genesis and even worse to SNES. Sure, you could use palette swapping tricks to catch up, but so can Genesis to stay even further ahead.

 

At this point we're looking at a complete Suzy redesign: Let's say 8-bit color at 320x200. This won't quite beat the SNES but it will put the Genesis on notice. Let's see, we have 64KB per frame buffer (up from 8KB), and Suzy is double-buffered, so we need 128KB just for frame buffers.

 

We now need more memory than the Genesis just to show the title screen.

 

How much bandwidth do we need? Our framebuffer is 8x the size, so to achieve Lynx levels of performance, we just need a 64-bit memory bus...
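Roughly where the 64-bit figure comes from -the framebuffer grew about 8x, so matching Lynx-level redraw at the same memory clock needs about 8x the bus width:

```python
# Rough bandwidth scaling behind the "64-bit bus" quip: the proposed
# framebuffer is ~8x the Lynx's, so matching Lynx-level redraw rates
# at the same memory clock needs roughly 8x the bus width.
lynx_fb_bytes = 160 * 102 // 2      # 4bpp -> ~8 KB per buffer
new_fb_bytes  = 320 * 200           # 8bpp -> 64 KB per buffer

scale = new_fb_bytes / lynx_fb_bytes   # ~7.8x
bus_width = 8 * round(scale)           # 8-bit Lynx bus scaled up
print(f"{scale:.1f}x the framebuffer -> ~{bus_width}-bit bus")
```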

 

Okay, okay, I'll stop. I'll just summarize: To reach Panther levels of performance using a Suzy (framebuffer/blitter) style console, the console would cost 2, maybe 3, times as much to build.

 

I'm no fan of Panther, but at least it was cost-efficient. And this is Atari we're talking about here!

 

There's a reason that consoles did not have blitters in the Genesis/SNES generation. Their memory usage and memory bandwidth requirements make them far too expensive. Lynx seemed ahead of its time when it used a blitter, but it could get away with it only because its resolution and color count were very modest compared to contemporary TV consoles.

 

For that matter, it seems like parts of the Lynx hardware (at least the blitter) would have been nice in an updated ST

See performance problems above. Also, Suzy's blitter is hard-coded to work with 4-bit chunky pixels. You'd have to redesign it to use anything else. You can't just 'tack on' 1bpp support.

 

Who cares about 1bpp? Anybody who makes computers in the 80s! A good 1bpp blitter can render text at light-speed, which is what a computer like the ST really needs.

 

Atari's technical decisions weren't perfect, but they were better than their business decisions.

 

One thing I've learned about engineering: Modern design ideas are well-known at least 10 years before they start showing up in mass market products. It's not like the designers of the Amiga/ST hadn't thought of chunky blitters. And because we all use chunky blitters today, it's obvious (to us) they're 'better'. But at the time, there were cost constraints that made that 'better' design quite a bit worse.

 

Designers were talking about networked multitasking desktop computers with mouses and bitmapped displays in the 70s. It's rarely a lack of ideas that holds back progress!

 

- KS

Edited by kskunk

The Lynx design seems to make a much better basis for a cost-effective/flexible/attractive (to developers and consumers) mass-market home game console than the Panther.

While I agree that a console inspired by the Lynx is interesting, I don't think the actual Lynx chipset is usable. It would be a do-over.

 

You probably can't use Mikey, since a 6502 is pretty slow compared to a Genesis/Panther.

Perhaps an 8 MHz 65C02 would be OK; after all, the PCE/TG-16 ended up faster at a number of tasks (many common game-related operations -especially collision detection- hence much less slowdown in shooters) than the Genesis, let alone the SNES. The 68k is faster per clock at some operations, but not by enough to overcome the clock-speed disparity with the 7.16 MHz CPU in the PCE. Unlike the PCE, you wouldn't be using incredibly fast (140 ns) ROMs, but rather cheap/slow ROM (not even mapped to the CPU address space) intended to be loaded into RAM as needed.

 

Perhaps add an 8Kx8-bit SRAM for the CPU to work in at full speed (no wait states for page breaks -probably with zero page mapped within that memory as well), sort of like the "caches" in later Apple II derivatives. (More so if it were mapped to a separate bus to run in parallel with Suzy, but that's adding to cost again.)

Such a "cache" would also be a way to actually have a 6502 going faster than 8 MHz (the limit for the DRAM in page mode) with useful performance gains. (albiet, I'm not sure when the 10-16 MHz rated 65C02s became available)

 

Then there's the possibility of boosting performance with a 65816 rather than a C02, which would still have been a relatively cheap option for the time. (An 8 MHz 65816 would be much better than the SNES or PC Engine CPUs, with trade-offs against the MD's 68k -but more cases of advantage than the SNES or PCE, the latter already being faster than the MD in quite a few areas because of its CPU.)

 

Suzy is also slow. Slow is fast enough on the Lynx, since you only have 160x102 pixels. But that's one quarter of the pixels used by the Genesis (or Panther). If your Lynx game hits 30FPS (not all can), your Lynx console is good for 7FPS. Not exactly Sonic-speed...

Separating source and destination (dedicated DRAM for the framebuffer) would help that considerably, wouldn't it? (get the fill rate far closer to peak, albeit that would still be 4Mpix/s max, so you might be able to manage 30 FPS in the 256x200-320x200 16 color range)
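A rough budget under that assumed 4 Mpix/s peak -how many times over you could cover the screen per frame at 30 FPS:

```python
# Quick budget under the 4 Mpix/s peak-fill assumption above: how much
# screen coverage (full redraw + overdraw) is left per frame at 30 FPS?
PEAK = 4_000_000  # pixels/second (assumed peak, separate src/dst banks)
FPS = 30

for w, h in [(256, 200), (320, 200)]:
    per_frame = PEAK / FPS            # pixel writes available per frame
    screen = w * h
    print(f"{w}x{h}: {per_frame / screen:.1f}x screen coverage per frame")
```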

 

First you'd have to increase the bus width. 8-bits just isn't enough width for a fast blitter. Whoops, that's a complete datapath redesign. But maybe we can keep parts of the Suzy control logic? Hm, now that's the bottleneck, since Suzy is shifting 4-bit pixels per Suzy cycle. This was fast enough on an 8-bit bus, but we need a new controller now.

Yes, so building directly onto the existing 8-bit design might make more sense, and a 2-bus design might actually be preferable in that respect: let the CPU and blitter run in full parallel, and with 8-bit data paths it stays relatively inexpensive in terms of trace count (and overall complexity) on the PCB. If they wanted to compromise further for cost, the CPU could use shared DRAM for less critical data and some code, plus a smaller amount of fast SRAM on its own bus to work in; in that case you'd probably need at least a 32Kx8-bit SRAM to have a decent amount of freedom, but that may have been doable at reasonable cost -nowhere near as expensive as the 32 MHz 32-bit SRAM used in the Panther (more chips, more traces, a higher speed rating, etc.). After all, Atari actually put 32Kx8-bit SRAMs in two Epyx games on the 7800 back in '87/88 . . . and only used 16K of each; apparently the cost of using two 8K SRAMs was too high -more likely an issue of the larger PCB required. The other trade-off would be adding the interface logic for the CPU to have its own bus of pure DRAM: some hits from page breaks, but more freedom than a mix of fast SRAM and shared DRAM, and effectively much more RAM at lower cost than that 32K SRAM -more so if the added interface logic were embedded in a small ASIC, which would pay off far more in the long run. There'd definitely be no point going beyond 8 MHz in that case: unlike with the 68k, you wouldn't really gain anything from a faster clock plus wait states, given how memory-bound the 650x architecture generally is.

 

If they did use DRAM, you could probably give the CPU a full 64K (a single bank if using the '816, otherwise perhaps requiring the blitter to update code/data as needed . . . or maybe do that for the '816 too and only use 16-bit addressing, avoiding the latch to demultiplex the added address lines), another 64K for the framebuffer, and perhaps a 128Kx8-bit DRAM for textures and added storage -or more than that, given it's relatively inexpensive DRAM, and the more RAM used, the more practical it is to keep all data compressed in ROM (and more heavily compressed, in ways not at all realistic to stream on the fly). Perhaps 256 kB for textures/added storage would have been practical for the time. (Thinking of a late-1989 release -RAM prices obviously dropped a lot in the early 90s . . . until they stagnated from '93-96.)
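Summing that up as a hypothetical RAM budget (these sizes are just this post's proposal, not anything Atari ever specced):

```python
# Hypothetical RAM budget for the layout sketched above (all figures
# are this post's proposal, not a real Atari design).
budget_kb = {
    "CPU work RAM (64Kx8 DRAM)":      64,
    "Framebuffers (double-buffered)": 64,
    "Texture/asset DRAM":             256,
}
total = sum(budget_kb.values())
for name, kb in budget_kb.items():
    print(f"{name:34s} {kb:4d} KB")
print(f"{'Total':34s} {total:4d} KB of commodity DRAM")
```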

 

Staying all-DRAM would probably be the most cost-effective all around.

 

And you still have only 16 colors, which compares poorly to Genesis and even worse to SNES. Sure, you could use palette swapping tricks to catch up, but so can Genesis to stay even further ahead.

Yes, and without modification, the only way you could do that would be 2-pass rendering. (the framebuffer reading 8-bit packed pixels, but the blitter moving around 4-bit data) And if you used only 64k for the framebuffers, that would limit the 8-bit color mode to 160x200 or 160x204. (so probably used more for specific games where color is more important than resolution)
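Checking that figure: two 8bpp buffers have to share 64 KB, so each gets 32 KB:

```python
# Checking the 160x200/160x204 limit: with 64 KB total for two 8bpp
# buffers (double-buffered), each buffer gets 32 KB.
TOTAL = 64 * 1024          # bytes for both buffers
per_buffer = TOTAL // 2

for w, h in [(160, 200), (160, 204)]:
    need = w * h           # 8bpp = 1 byte/pixel
    print(f"{w}x{h}: {need} bytes, fits: {need <= per_buffer}")
```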

 

And if Atari was aiming at a lower-cost platform, that might still have made it attractive in general. (There are a LOT of games on the MD that could have been matched pretty well in 16 colors alone, especially with some palette swapping -which is rarely done on the MD, can only be done there for up to 4 colors per line without nasty artifacts, and carries a fair amount of CPU overhead given the slower interrupts of the 68k, especially compared to a 650x.)

 

They also didn't need more powerful hardware in every respect; they needed a decent-performing machine with a reasonable overall cost-to-performance ratio (enough capability to be marketable -and even at 16 colors, it would have had some interesting advantages over the MD) and, more importantly, hardware that was relatively easy for developers to work with (the Lynx had the tools to match). That reduces the risk of development investment (even with Atari's weaker reputation) and also means better average results and a shallower learning curve: better quality for less time, cost, and talent invested. The same reason an STe-based console would have been more realistic for 1989 than the Panther -though that would have been less cost-effective than the Lynx. (Plus, for any decent performance with dual BG layers, you'd probably need something like Crazyace suggested: a doubled SHIFTER with dual framebuffers -like Amiga dual playfield, but with two 4bpp layers rather than 3bpp- probably much less cost-effective than a single 16-color framebuffer on the Lynx, even on an 8-bit bus.)

 

You yourself mentioned before that technical capability is generally not one of the most important things for a system, and in those other areas Atari was FAR, FAR better off in 1989/90 than in 1993/94. (And the technical areas that ARE important aren't raw performance so much as real-world performance and how easily it's achieved -plus how much incentive you can give developers to push a difficult architecture versus one requiring less investment. That's part of why Sony had a perfect storm with the PSX: hardware and ease-of-development advantages on top of so many non-technical ones, with competitors -Sega, Nintendo, 3DO, and NEC- making some very bad decisions. Atari was pretty much screwed in '93-96 even with perfect management, due to their pre-existing problems.)

 

At this point we're looking at a complete Suzy redesign: Let's say 8-bit color at 320x200. This won't quite beat the SNES but it will put the Genesis on notice. Let's see, we have 64KB per frame buffer (up from 8KB), and Suzy is double-buffered, so we need 128KB just for frame buffers.

Well, if you did go to 8-bit color, you'd be a lot better off than the SNES in some areas, since it's a real 256-color framebuffer rather than eight 15-color subpalettes (plus eight more for sprites), which generally results in under 100 colors on screen, with further practical limits from per-tile/sprite palette assignment -so not even as good as a 100-color framebuffer . . . actually, a 32-color framebuffer would still have some advantages. (Of course, the SNES has a larger master palette, alpha-blending support, etc.) And to really make use of that palette, you'd need 8bpp textures too, so double the RAM used. (At least you could pack more into ROM -some things indexed at 16 colors, others as 8bpp, both losslessly compressed, then unpacked into RAM as 8bpp graphics.)

 

But, again, they probably could have competed with a 16-color-optimized system without modifying Suzy much (maybe extending the addressing, if not just resorting to bank switching). It would have been a matter of marketing and market positioning too . . . but more an issue of software support in general. (They couldn't hope for much in Japan, maybe a moderate amount in the US, but probably best in Europe.) Atari Corp's image and confidence (as well as their financial standing) steadily declined after 1989, so that would definitely have been the time to act. (The 7800 was also in steep decline that year -from over 1.2 million US hardware sales in '87 and again in '88, to some 600K in '89 and under 100K in 1990.) The computers declined from that point on too, so they lost pretty much everything. (Losing Jack Tramiel in late 1988 and Michael Katz in early 1989 may have gone a long way toward weaker management and that decline . . . and as that weakened their finances, the situation compounded and things really started falling apart on both the computer and video game sides.)

The Lynx was doing somewhat OK (mainly in Europe) and fairly consistently from 1990-92, but not enough to support the company. (The handheld market in general was a bit shaky -not nearly as profitable or stable as the home console market, especially if you weren't marketable in Japan.)

 

The US market also requires hefty investment in advertising to dig in and sustain a market position (much more so than Europe; it's far more hype-driven), so that would have been an important factor as well -another thing Atari was pretty much screwed on after 1990. '89 was probably their best financial position, with the internal revenue and credibility with investors to actually have a chance at a reasonably successful game console -far better than '85/86, when they were still steeped in debt and had no possibility of matching Sega's or Nintendo's budgets. (Even so, good management and brand recognition let the 7800 considerably outsell the SMS in the US in spite of weaker hardware and software -and 2600 sales were the real boost that put their market share at 2-3:1 over Sega from '86-89. Of course, Sega of America got a dramatic shift in management in late 1989, with Michael Katz launching their first really successful ad campaign -Genesis Does- and making a number of other decisions critical to the foundation for their later massive success under Kalinske. It would have been interesting if Katz had come back to Atari in 1991. ;))

 

We now need more memory than the Genesis just to show the title screen.

Yes, but MUCH cheaper memory. The Genesis used exotic dual-port VRAM for the VDP (64K of 8-bit dual-port VRAM) and pseudo-static RAM for the CPU, rather than investing in the R&D to interface cheap DRAM. (They probably could have had 256K of DRAM and still spent less than on that PSRAM.)

The TG-16 used 64K of SRAM for video and another 8K of SRAM for work RAM, plus 140 ns ROM even for 1987 releases. (Probably only feasible due to NEC's vertical integration.)

The SNES used 64 kB of SRAM for video, 128K of DRAM for work RAM, and 64K of SRAM or PSRAM for audio. (Not to mention a very expensive audio system that was largely overkill -basically used as an 8-channel MOD player with compression and reverb.)

 

Okay, okay, I'll stop. I'll just summarize: To reach Panther levels of performance using a Suzy (framebuffer/blitter) style console, the console would cost 2, maybe 3, times as much to build.

Yes, that's if you wanted to push what you said above rather than a more moderate reworking of the Lynx chipset . . . and still probably not 2-3x the cost, since you'd be removing the huge overhead of the color LCD screen and backlight.

 

I'm no fan of Panther, but at least it was cost-efficient. And this is Atari we're talking about here!

Yes, but a more conservative extension of the Lynx chipset (even one leaving it a single bus design, but at least with a separate framebuffer bank) would probably have been far more cost-effective all around. (Less powerful for raw 2D, but far more realistic for programmers at the time: cheap DRAM, effective use of slow ROM with enough RAM to keep most data compressed -plus the Lynx's support for hardware RLE decoding on the fly.)

Plus more potential for some simple 3D stuff, and decent 2D scaling support -nothing like the Panther could push, probably about as good as the Sega CD, but without the hardware rotation/texture-mapping effects. (The Sega CD's ASIC is basically a 4-bit blitter that reads and writes single pixels using 16x16 or 32x32 texture stamps that can be scaled and rotated -or used to texture-map if done line by line, somewhat like the Jag blitter's texture-mapping feature but limited to 4-bit pixels. There's no separate destination bank either, so it's doing fully random accesses in 80 ns DRAM with a 12.5 MHz blitter, meaning 3-cycle/240 ns read and write times, plus the further bottleneck of having to DMA the result to MD VRAM during vblank at 2.6 or 3.2 MB/s depending on the resolution mode -during which the ASIC can't render- and double-buffering in VRAM for larger spans. Of course there's the advantage of having tiles/sprites to work with for normal 3D, but a ton of trade-offs versus what the Lynx could do.)
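As a sanity check on those Sega CD numbers (taking the quoted 12.5 MHz clock and 3 cycles per single-pixel access, with each rendered pixel costing one read plus one write):

```python
# Timing check on the Sega CD ASIC figures quoted above (12.5 MHz,
# 3 cycles per single-pixel access; pixel throughput derived assuming
# one read plus one write per rendered pixel).
CLOCK = 12.5e6                 # Hz
cycles_per_access = 3

access_ns = cycles_per_access / CLOCK * 1e9
pix_per_sec = CLOCK / (2 * cycles_per_access)   # read + write per pixel
print(f"{access_ns:.0f} ns per access, ~{pix_per_sec/1e6:.2f} Mpix/s")
```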

 

There's a reason that consoles did not have blitters in the Genesis/SNES generation. Their memory usage and memory bandwidth requirements make them far too expensive. Lynx seemed ahead of its time when it used a blitter, but it could get away with it only because its resolution and color count were very modest compared to contemporary TV consoles.

Yes, but they'd be using FAR cheaper RAM. If you had an Amiga-based console in '89 with a full 512K of DRAM, you'd probably spend less on that DRAM than the MD did on VRAM, PSRAM, and SRAM (8K for the Z80), let alone the savings of a shared-bus design and the feasibility of compressing more data in ROM for cheaper games (and using slow ROM . . . though the MD generally used slow ROM too, at least for early games).

Same for an STe-based console, except that's a good deal less competitive than the Amiga. (Again, you'd probably at least need dual-playfield support, and maybe a low-end FM synth chip to add to the PCM -more memory savings and some good capabilities with the right programming, which a lot of European developers seemed to have, judging by MD music. Atari probably should have done that for the STe anyway: dual playfield and a YM2203 replacing the YM2149 . . . especially with a full 8-bitplane, 256-color mode -less realistic for games unless it were packed-pixel, but an excellent feature for graphics applications at the time . . . features that would have made it more attractive against low-end PCs and the Amiga, the latter mattering most in their primary European market.)

But the ST's market is another topic. ;)

 

Besides, I still doubt such an STe-derived console would be more cost-effective than a Lynx-based one. (Even with a separate bank for the framebuffer and a separate bus for the CPU to work in, there are still the general performance advantages of 4-bit packed pixels and the low cost of 8-bit data paths, even with two buses and a second bank on the graphics bus.)

 

The only other issue would be that porting 68k code to 650x wouldn't always be attractive (or x86 for PC ports), though if the dev tools were like the Lynx's, it would still have been extremely friendly to develop for at the time. (Plus, you DID have two major 650x-based consoles at the time too -three with the NES and SNES both included, albeit only one with any major support in the US or Europe.)

 

 

 

 

OTOH, if reworking the Lynx really wasn't cost-effective at the time, there was another really interesting alternative in 1989 -something I'd have thought Martin Brennan would have brought up while working on (and criticizing) the Panther design. That's Flare's Slipstream ASIC, previously completed for Konix (who couldn't afford to license it exclusively): a ready-made system-on-a-chip that Atari could have licensed. Unlike the Lynx it already supported a 16-bit bus, and the blitter could move 4, 8, and 16-bit data with a 256x200 8bpp framebuffer (indexed from 12-bit RGB), with a peak fill rate of 12 Mpix/s in 8bpp mode (for line fills, I assume, not copying/moving). Unless a separate framebuffer bank were added, that would probably limit "sprite" and multi-layer BG rendering to only 3 Mpix/s (assuming 2 pixels per read and per write), though up to 6M with a separate framebuffer bank -and by mid/late 1989 it probably would have been cost-effective to use 256 kB of main RAM plus 128K for the framebuffer bank (at 256x200, that leaves 14K spare in the framebuffer bank for extra storage).
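A toy model reproducing those throughput figures under the same assumptions (2 packed 8bpp pixels per 16-bit bus access; on a shared bank I'm assuming copies pay a read plus a write, with page breaks between source and destination rows roughly halving throughput again):

```python
# Rough throughput model for the Slipstream blitter figures above.
# Assumptions: writes move 2 packed 8-bit pixels per bus cycle; copies
# on a single bank pay a read and a write, with page breaks between
# source and destination rows roughly halving throughput again (my
# assumption); a split framebuffer bank avoids the page thrash.
fill_rate = 12e6                 # pixels/s, pure line fill (writes only)

copy_shared = fill_rate / 4      # read+write plus page-break penalty
copy_split  = fill_rate / 2      # separate framebuffer bank
print(f"fill {fill_rate/1e6:.0f}M, copy {copy_shared/1e6:.0f}M "
      f"(shared) to {copy_split/1e6:.0f}M (split) pix/s")
```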

The Slipstream also added a fast multiplier unit and DSP, giving some really interesting potential for 3D at the time. (Adding an FM synth chip would have freed up the DSP for 3D work and lessened the load on RAM/ROM in general, since most DSP sound drivers being developed seem to have been sample-based.)

 

It only supported Z80 and x86-architecture CPUs; a 6 MHz 8086 was intended for the Multisystem, but a 12 MHz 80186 or NEC V30 would have been the better options (the V30 probably the best cost/performance choice in general, though the 186 might be preferable depending on cost and whether its embedded features were really useful -and any of those probably would have been cheaper than the 16 MHz 68k planned for the Panther). x86 would have been easier to work with in some areas than 650x, and certainly easier to port to from PC (and perhaps from Z80-based platforms -the V30 actually has a Z80 compatibility mode, possibly useful in some extreme cases), but would still have been inconvenient for ports from 68k-based platforms (albeit probably no worse than 650x).

As time went on, x86 would have become more and more attractive, though. (And with the 3D capabilities -or resources to assist with 2D scaling/rotation- there would be potential for ports of PC games difficult or impossible on other platforms: a nice version of Wing Commander or Wolf3D, maybe even a cut-down conversion of Doom and various polygon games -depending on if and when Atari launched a successor and which games moved to that machine.)

 

It would be interesting to ask Brennan whether he ever suggested that to Atari or never considered it. (It probably would have been faster and easier than reworking the Lynx chipset at the time too . . . and depending on the licensing/royalties Flare asked, it could even have been cheaper than investing in the completion of the Panther project -even compared to the simplest repurposing of the Lynx chipset: single bus, perhaps an 8 MHz CPU, a framebuffer bank and more RAM added, probably an off-the-shelf FM synth chip on top of the simple sound chip -with software PCM- and a new ASIC to control the framebuffer and convert to analog RGB at TV sync rates, i.e. a RAMDAC and CRT controller.)

Plus, several companies had already started work on games for that chipset but never released anything, with the Multisystem not coming to market. (Aside from a variety of UK/EU developers, LucasArts had shown interest in the platform and had even considered licensing the design to distribute in the US. ;))

And I believe Texas Instruments had already been manufacturing the ASIC, so perhaps Atari could have piggybacked on that directly as well. ;)

 

So there were actually a lot of practical advantages to using that design over the Lynx or Panther. (The Panther's problem was never cost or raw performance potential, but the practical use of the system: the limited RAM, the bus-hungry Panther starving the 68k, a powerful but almost totally alien graphics architecture that would be difficult to port to from conventional blitter/CPU/tile+sprite architectures, etc.)

 

For that matter, it seems like parts of the Lynx hardware (at least the blitter) would have been nice in an updated ST

See performance problems above. Also, Suzy's blitter is hard-coded to work with 4-bit chunky pixels. You'd have to redesign it to use anything else. You can't just 'tack on' 1bpp support.

Right, and no 8-bit support either (which I hadn't realized), so not really a good option at all. The Flare 1 blitter would be better in some areas, but it also lacks sub-4bpp support (and was designed to interface with Z80/x86 CPUs). Though it probably wouldn't have been a bad idea to outsource some ST design work to the Flare guys, given how capable they were. ;) That probably would have been a much better investment than the Panther project in general: an updated BLiTTER developed in time for the STe's release, and perhaps some really useful additions to the STe SHIFTER as well, like dual playfield and/or 8bpp modes . . . let alone packed-pixel modes -perhaps chaining the bitplanes like VGA does- and maybe even tighter integration/consolidation of the chipset than the STe got. (A separate issue would have been introducing faster 68k options much sooner, and probably support for a separate CPU bus more like Amiga FASTRAM -especially important on higher-end machines . . . instead they went way too far with the TT and had no middle ground for mid-range workstations/"serious" machines.)

 

Hell, investing in that could have made the ST line better on the home/game/business/graphics side of things (especially at a critical time for maintaining their European market position), and such developments could also have made the new ST chipset itself more attractive as a console basis. (Albeit the Slipstream may still have been more attractive in general for the time.)

 

 

Who cares about 1bpp? Anybody who makes computers in the 80s! A good 1bpp blitter can render text at light-speed, which is what a computer like the ST really needs.

Yes, and more so for dealing with planar graphics (moving 2, 4, 8, or 16 bits of data around would only be good for moving groups of pixels or filling lines).

Albeit, on the specific point of text performance: fast text was never a huge selling point for the machine, though it might have been (in 1989 it was probably the best desktop publishing solution on the market, let alone the business applications where fast text would matter). Its biggest market ended up being as a semi-casual home/games computer, mainly in Europe (though it was well supported for business applications there too, and had initially established itself on its graphics/business capabilities).

 

Hmm, actually, the more "serious" application side might have remained a huge chunk of their European market as well if they'd managed things differently. (Getting the BLiTTER into lower-end machines sooner would have been an important part of that, and releasing something like the MEGA STE alongside the STe back in '89 probably would have been pretty significant as well -even without other hardware enhancements.)

 

 

Atari's technical decisions weren't perfect, but they were better than their business decisions.

At least from 1989 on that was true. ;)

This quote summarized that nicely:

And then Sam Tramiel had a heart attack - Jack stepped back in and wound down operations. I truly think if Sam hadn't had his heart attack, Atari would've continued to fight to the last $$$ - but Jack and Leonard were not interested anymore.

 

Truthfully, Leonard didn't have much to do with the daily operations; he was more involved with the products themselves. And I'm not sure that Sam would have been able to change things if he hadn't had the heart attack. Ever since he had taken over, the company was on a downward spiral. When Jack turned the company over to him, he had managed to bring the company out of the red and into the black - shedding all the debt they took on from Warner in the purchase. That was his dream after all: to be able to hand something solid over to his sons and retire. Sam managed to take it from a multi-division, multi-product company to a single-product company by the time Jack came back in. If they had fought to the last $$$, there would have been nothing left of a legacy for his kids, hence the reverse merger to get out while they still could. Truthfully, I would rather have had Jack not retire back in the late 80's and have him stick around for the oncoming Wintel onslaught to see how he would have dealt with that. I can't picture just turning tail and closing down the computer division like that.

 

Again, Katz leaving in early 1989 probably played a role in that decline too. (Ironic that he left just a few months before Atari would launch a new game system -and one from a company Katz had formerly been president of, no less.)

 

 

One thing I've learned about engineering: Modern design ideas are well-known at least 10 years before they start showing up in mass market products. It's not like the designers of the Amiga/ST hadn't thought of chunky blitters. And because we all use chunky blitters today, it's obvious (to us) they're 'better'. But at the time, there were cost constraints that made that 'better' design quite a bit worse.

Oh, yes, obviously planar graphics made a lot of sense for the time . . . well, not so much on the ST without a blitter. (Bit manipulation on the 68k is pretty inefficient, though block moves of pixels aren't too bad -which is what usually ended up being done, hence pre-shifted sprites becoming so important.)

 

In general though, by the end of the 1980s, chunky-pixel support was becoming more important (and had always been important for software renderers, since CPUs are limited to byte addressing, and 1/2/4-bit manipulation tends to be slow where it's natively supported at all -hell, the 16-bit chunky mode on the Falcon is probably faster to render to than the planar modes for most things, though using the BLiTTER makes the planar modes more usable). A 160x200 8-bit packed-pixel mode on the ST probably would have been heavily used for games and certain graphics applications.

That could have been another route for the ST's evolution to take: a PC-like progression with no blitter as standard, but an updated SHIFTER with some VGA-like features (hardware scrolling, extended planar modes and new packed-pixel modes, simple acceleration for line fills and the like to aid software blits) and faster CPUs introduced to boost performance all around. (PCs did get common blitter support in the early 90s, but it was mainly limited to Windows acceleration and some Windows-specific applications -pretty much none for games or DOS applications until around 1995, when 3D acceleration appeared with some DOS games.)

Albeit, VGA also had true text modes, so that would be a sticking point. (Given the ST's bitmap-oriented design, it probably would have made more sense to invest in blitter functionality on top of the other VGA-like features rather than shift to text modes.)

 

In the ST's case, its blitter was rather rudimentary and could be useful for packed or planar graphics either way (though some of its planar-specific features would be useless for packed pixels; block moves are useful in either case, but you'd need packed-pixel-specific line/block fill support to be really useful -a planar-oriented fill could still clear a packed-pixel framebuffer, just not do block color fills).


Yes, but a more conservative extension of the Lynx chipset (even one leaving it a single bus design, but at least with a separate framebuffer bank) would probably have been far more cost-effective all around.

I guess the point is that you can't extend a chip this way.

 

This is due to the way chips are designed: Adding a new bus or widening a finished chip is like adding 20 floors to the middle of a finished skyscraper. It's way cheaper to build a completely new skyscraper.

 

And once you're starting a new chip, make it good at what it needs to be good at. The ST blitter had to be good at text, lines, and boxes. The Panther had to be good at rapidly moving sprites and scrolling backgrounds.

 

These days, transistors are so cheap that we're all spoiled on fully general purpose architectures. We don't have to choose -- we can have 3D and video and sprites and text and boxes. And there's plenty of bandwidth and RAM for everything.

 

In the early 90s, engineers had to make hard choices. Chips could only be good at one or two things, and transistors and bandwidth were too limited to deliver 60 FPS animation in a blitter architecture. When Sega made the 16-bit market all about speed, even some of the SNES's choices looked 'wrong'.

 

With that said, I'd definitely encourage you to read more on the 3DO. It was the very next chip designed by Suzy's designer. It bears striking resemblance to your proposal: Two banks of memory, source and destination, bridged by a Suzy-like blitter.

 

In any case, it's better to start with a blank piece of paper, unless the chip in question is already a perfect fit.

 

And I hope this creative streak helps lead you into designing real systems of your own. Dissecting these ancient designs taught me a lot about the design process, lessons I use to design systems in my day job.

 

- KS

Edited by kskunk

Never press the back button after posting. :x

Yeah . . . though doing that in Firefox isn't usually a problem. (With a few exceptions where you'll lose everything you just typed -AFAIK never a problem with separate reply windows/tabs, but a problem on some forums if you type in the quick-reply box at the bottom of a thread.)

 

 

 

 

Yes, but a more conservative extension of the Lynx chipset (even one leaving it a single bus design, but at least with a separate framebuffer bank) would probably have been far more cost-effective all around.

I guess the point is that you can't extend a chip this way.

 

This is due to the way chips are designed: Adding a new bus or widening a finished chip is like adding 20 floors to the middle of a finished skyscraper. It's way cheaper to build a completely new skyscraper.

Yes, but I wasn't really thinking of messing with the Lynx custom chips at all (except perhaps removing/disabling unnecessary areas like the LCD driver and power-management functionality -if not in silicon, at least to reduce pin count).

 

I was thinking more of leaving the shared/video bus alone (aside from adding additional banks of RAM -not a separate bus- both to extend the address space and to allow the separate framebuffer bank). Any additional CPU bus would be a new addition separate from the shared bus . . . in that case, something the existing chipset couldn't "see" in the memory map and either couldn't access at all, or accessed through a middleman (added interface logic, preferably in an embedded ASIC) to switch it onto the shared bus and then back to the dedicated CPU bus. (The cheaper option would probably be limiting that bus to CPU access only and having the CPU manually update it by digging into shared memory.)

That added CPU bus could range from a small chunk of local memory (namely SRAM) to a full DRAM interface . . . or a larger chunk of SRAM, but then you're back to higher cost. (8K of SRAM as local CPU memory, with shared main memory for everything else, might have been a good option for boosting CPU performance -especially if bumped to 8 MHz. A 32Kx8-bit 100-120 ns SRAM might have pushed the cost constraints but could have been really useful, and might have been feasible too -leave the remaining 32K of address space for accessing shared memory, either as a fixed range, requiring the blitter to assist with updates, or with bank switching.)

 

Even then, you'd need some added interfacing logic and (as mentioned earlier), added hardware to support analog video output and higher resolution framebuffers.

 

 

Then there's the Slipstream ASIC, ready-made, that Atari could have licensed in 1989 . . . with one of the Flare engineers assisting with the Panther at that time to make that a real possibility. ;)

 

 

 

 

 

And once you're starting a new chip, make it good at what it needs to be good at. The ST blitter had to be good at text, lines, and boxes. The Panther had to be good at rapidly moving sprites and scrolling backgrounds.

Yes, I was sort of going off on a tangent and thinking about multiple things at once. (Part of that being thoughts on how they could/should have updated the ST blitter for the added needs of the late 80s and early 90s -or the whole idea of a more PC-like, CPU-oriented approach with limited graphics acceleration closer to VGA. But then there's the issue of VGA having real character modes too, so you'd need more blitter resource again to make up for that, or accept a simpler/cheaper but more limited system -which is what the ST was from the start: very fast and flexible in some areas, limited in others.)

 

In the early 90s, engineers had to make hard choices. Chips could only be good at one or two things, and transistors and bandwidth were too limited to deliver 60 FPS animation in a blitter architecture. When Sega made the 16-bit market all about speed, even some of the SNES's choices looked 'wrong'.

Sega made it about speed on the marketing end, not the hardware design end. ;)

And yes, there's actually a lot that's odd in the SNES and some that's odd/wasteful in the Genesis -some mainly in hindsight, but other things were just really odd decisions. (Using PSRAM rather than DRAM was an obvious R&D time/resource vs manufacturing cost decision; the way they implemented backwards compatibility was a bit odd -a lot of built-in hardware tacked on, to the extent of weakening the new hardware, yet still requiring an adapter- and then some really simple mistakes, like not connecting the interrupt line from the YM2612 to the Z80.)


using PSRAM rather than DRAM was an obvious R&D time/resource vs manufacturing cost decision

I doubt it was a design time issue. Interfacing DRAM is not difficult or expensive. Especially since Sega already designed one DRAM controller into the Mega Drive!

 

They could not have possibly lost money on this decision. They sold 40 million Mega Drive consoles. Even if switching to DRAM saved 1 penny, that would pay for 5 man years of engineering (in 1990 dollars). You can engineer a second DRAM controller in that time.

 

I have a more likely suggestion:

 

In the 80s, some chip companies thought PSRAM would be popular. To bootstrap the market for the new RAM, they went to big customers like Sega, and offered them sweet deals if they would design in a particular chip. Sega has nothing to lose, and the chip vendor is (fairly) safe because they can bet Sega will buy a few million chips.

 

As you pointed out, it saves a few cents on board area and buffers, so it was a net win for Sega. Of course a few years later, PSRAM turned out to be a bust, and the Sega engineers never used it again. But if at any point they started losing money, they could switch.

 

Another example: The N64 used 9-bit RDRAM. Rambus thought this chip would be huge. It went nowhere, and in fact the only real use was in N64s. Nintendo got RDRAMs cheaper than SDRAMs. Good for Nintendo, too bad for Rambus.

 

Later on, Rambus tried again with 16-bit RDRAM. Chip salespeople gave Sony a killer deal and so they put it in the PS2. Once again, went nowhere.

 

Later still, XDR DRAM in the PS3. XDR DRAM was a total market failure. Does Sony care? Heck no. They're getting a discount!

 

To bring this back on topic a little, companies like Atari didn't have the opportunity to use exotic technologies, because nobody expected them to sell a million of anything.

 

I have friends in the toy industry who do sell millions of things, and they're always getting deals on these weirdo chips, as desperate chip makers try to bootstrap their market. Toys are full of bizarre eccentric things you've never heard of, and never hear of again.

 

- KS

Edited by kskunk

using PSRAM rather than DRAM was an obvious R&D time/resource vs manufacturing cost decision

I doubt it was a design time issue. Interfacing DRAM is not difficult or expensive. Especially since Sega already designed one DRAM controller into the Mega Drive!

 

They could not have possibly lost money on this decision. They sold 40 million Mega Drive consoles. Even if switching to DRAM saved 1 penny, that would pay for 5 man years of engineering (in 1990 dollars). You can engineer a second DRAM controller in that time.

They had an embedded DRAM controller in the SMS too, but opted for SRAM for the Z80. (Albeit maybe using DRAM would have risked compatibility issues with the SRAM-based SG-1000 -an off-the-shelf design with relatively limited R&D investment, almost identical to the ColecoVision, which also used SRAM for main memory.)

 

In the 80s, some chip companies thought PSRAM would be popular. To bootstrap the market for the new RAM, they went to big customers like Sega, and offered them sweet deals if they would design in a particular chip. Sega has nothing to lose, and the chip vendor is (fairly) safe because they can bet Sega will buy a few million chips.

That's an interesting possibility too. I wonder if using VRAM was a cost-effective decision in the end as well; the dual-ported nature didn't help updates at all, since the VDP ended up needing access to both ports during active display. (If they could have made it work effectively in normal DRAM, it probably would have been much cheaper in the long run -perhaps part of the issue was being limited to an 8-bit data bus. Even the PC Engine's video controller used 16-bit memory -in that case probably SRAM, mainly to allow fast interleaved accesses with the CPU. I assume the SNES used SRAM for video purely for speed, since it was limited to vblank DMA like the MD, albeit with tighter bandwidth constraints.)

 

Again, the simplest/strangest single mistake on the MD seems to be leaving the interrupt line on the YM2612 disconnected. (Useful for beat timing or fast interrupt-driven PCM playback schemes -not very efficient, but rather foolproof for sloppy coders with uneven timing, which seemed to be a huge problem throughout the MD's life. With a decent interrupt scheme, 16 kHz should have been possible too, well beyond what most engines pushed anyway.)

 

 

I was going to comment more on the SNES in the previous post too. The SNES really seems wasteful in some areas . . . the SRAM for video makes some sense, while the overbuilt sound system is a huge waste -but that's a mistake seen clearly mostly in hindsight (and one several others would make through the rest of the decade -and that Nintendo would not make again with the N64 ;)).

It took quite a while for companies to realize that "plain" sample-based synth support was pretty much all anyone was going to use, no matter how feature-rich a DSP-based sound system was. (Eventually you saw surround-sound panning, and before that some occasional use of reverb, but that was about it; the rest was just plain DMA sound or somewhat more feature-rich PCM hardware, often with ADPCM support. Back in the late 80s/early 90s it was pretty much just "MOD"-style PCM synth and various FM synth methods in home/arcade video games -and FM took more skill and experience to use really well, falling out of favor as sample-based sound got more and more competitive.) That elaborate SPC system in the SNES was basically a glorified 8-channel MOD player with some added effects. (Some things, like interpolation and heavy filtering -both forced and unchangeable- could also be detrimental.)

 

I find it a bit ironic that Ricoh, Nintendo's primary chip vendor, had a pretty good 8-channel PCM chip out by '89 (or earlier, but appearing on prominent commercial products in '89) which should have been far cheaper than the Sony system (especially with Nintendo's ties to Ricoh), and probably about as good -if not better in some areas- from the average consumer's perspective. (It's a much simpler 8-bit PCM-specific sound chip with 32 kHz output and pitch control via hardware sample scaling -with oversampling and digital filtering but no interpolation- and 8 channels with 4-bit stereo panning per channel. Ideally it would be coupled with an external interface to allow DRAM to be used, if not able to share main RAM in the SNES's case -Sega just used 64 kB of PSRAM in the MCD.)

 

 

 

 

 

OTOH, the slow CPU (presumably due to the memory interface used) really seems wasteful. I can't imagine Ricoh having trouble getting the CPU to at least 7.2 MHz at reasonable yields, and the fact that the CPU runs at only 2.68 MHz from DRAM (and is capable of 3.58 MHz from ROM -when fast ROM is used) is a real telltale sign.

 

One explanation is that the 650x bus requires data to be on the bus within half a cycle, so memory needs to respond twice as fast as the CPU clock; but even so (assuming commodity 120 ns FPM DRAM), that should have meant at least 3.58 MHz (if not ~4.3 MHz), unless fast-page accesses weren't supported either. Other 650x platforms got around that problem with additional buffering (namely an 8-bit latch, iirc), including Hudson implementing it on-chip in the PCE's CPU; with that, plus page-mode accesses (and wait states for page breaks), the SNES easily should have run at 7.16 MHz in DRAM using memory of similar speed to the Lynx's.
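The half-cycle constraint as arithmetic (a rough model that assumes no latch/pipeline buffering, with memory access time as the only limit):

```python
# The half-cycle constraint above, as arithmetic: a 650x expects data
# roughly half a cycle after the address, so memory access time caps
# the clock at 1 / (2 * t_access). (Assumes no latch/pipeline buffering.)
def max_clock_mhz(access_ns):
    return 1e3 / (2 * access_ns)   # MHz

for t in (120, 140):               # FPM DRAM random access, fast ROM
    print(f"{t} ns memory -> ~{max_clock_mhz(t):.2f} MHz max 650x clock")
```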

 

As you pointed out, it saves a few cents on board area and buffers, so it was a net win for Sega. Of course a few years later, PSRAM turned out to be a bust, and the Sega engineers never used it again. But if at any point they started losing money, they could switch.

Sega and Nintendo continued using PSRAM to a limited extent in newer products from the early 90s onward, though. (It WAS cheaper than SRAM, so it was preferable in cases where SRAM would otherwise have been the simplest alternative . . . except pretty much all of those cases should have favored an embedded DRAM interface instead -including cases where you'd need added logic/buffering to interface DRAM to a device intended for SRAM or ROM only, like the sound chip in the Sega CD . . . or the SNES's sound system -though some revisions may have used true SRAM; I'm not sure on that.)

 

Sega continued to use PSRAM in the Sega CD (for sound RAM and the CD-ROM cache, iirc), Nintendo used it for sound in some revisions, and Sega even invested in a special embedded 32Kx16-bit PSRAM chip for later Genesis models. (AFAIK it wasn't a 128K chip but an actual 64K -odd-sized- PSRAM, though maybe PSRAMs more commonly support odd sizes.)

 

Another example: The N64 used 9-bit RDRAM. Rambus thought this chip would be huge. It went nowhere, and in fact the only real use was in N64s. Nintendo got RDRAMs cheaper than SDRAMs. Good for Nintendo, too bad for Rambus.

The 9th bit was only used for parity and for 1 to 4-bit alpha-channel mode selection with the RSP (toggling 4-4-4-4 and 5-5-5-1 RGBA per pixel).

 

Oh, and while Nintendo may have gotten a good deal on one end, I'm not sure there weren't other strings attached (like forcing Nintendo to allow only expensive SGI workstations as dev units -albeit third parties quickly started offering alternatives in the $500-700 range, like the Doctor 64).

Nothing to do with Nintendo's far more fateful (and stubborn) decision to drop optical media though. ;)

 

Later on, Rambus tried again with 16-bit RDRAM. Chip salespeople gave Sony a killer deal and so they put it in the PS2. Once again, went nowhere.

It seems the licensing was too expensive (the same issue as MCA, and somewhat like Motorola's misstep in licensing later 68k-architecture chips), though it did gain some market share for a while. (A fair number of late-90s and early-2000s PCs had RDRAM, but it never reached critical mass.)

 

To bring this back on topic a little, companies like Atari didn't have the opportunity to use exotic technologies, because nobody expected them to sell a million of anything.

They might have been close in '88/89 (near their peak, when they were in the Fortune 500), albeit their business model might have compromised that too. (The Transputer was pretty exotic and contrasted sharply with their general business -though that probably wasn't a good thing, and the TT didn't end up being one either, though it was far less extreme in that sense.)


A very interesting discussion. My question, though, is whether Atari would have been better off continuing to develop the Panther until '94, with well-matured, easy-to-use development tools, sticking with 32 bits. That's all the PlayStation was. Plus it did well because it was easy and cost-effective to develop for; the Saturn was a pain and the N64 wasn't even out yet.


A very interesting discussion.

Yeah, it kind of branched off from the Lynx-specific topic into "what sort of system could/should Atari have had out for the 4th/16-bit generation?" . . . which is a separate topic in general that I've thought about. (Not really sure what forum it would make sense to post that in -maybe classic gaming general.)

 

My question, though, is whether Atari would have been better off continuing to develop the Panther until '94, with well-matured, easy-to-use development tools, sticking with 32 bits.

It sounds like you're talking about a 32-bit-optimized Jaguar, not the Panther. The Panther was done in 1989 (I'm not sure if there were bugs that would have required another silicon revision, but otherwise it was production-ready by late that year -and pre-production dev systems were already out by then). Of course, they'd have wanted more time to build up software support for a proper launch.

 

The problem was that the Panther was generally weak for the market at the time -1990. (Mainly due to heavy bus contention and 32K of shared RAM, limited by using fast, expensive SRAM.) It would have been a 2D-only Jaguar with less color and very limited RAM. (Sort of a 16/32-bit 7800. ;))

 

Perhaps you're thinking of the implication that the Jaguar was chosen over the Panther because it was being developed faster, but that's not the case. (The Jaguar progressed faster than expected; the Panther was already pretty much complete for 1990.)

 

That's all the PlayStation was. Plus it did well because it was easy and cost-effective to develop for; the Saturn was a pain and the N64 wasn't even out yet.

No, the PSX was a very powerful and advanced design, newer and more advanced than the Jaguar in some respects (much more so for being bug-free, well supported, and having a much more expensive -less conservative- RAM/CPU configuration), but it DID have the tools to match, and tons of money and good management to back all that up too. (including insanely aggressive pricing)

 

 

 

 

Edit:

 

Kskunk addressed the Panther before:

 

See this thread for previous discussion on it:

http://www.atariage.com/forums/index.php?s...68#entry1750868

 

Despite the "32-bit" label, the Panther was not at all like an ST or Amiga or any other console or home computer. It was most similar to the 7800, which was notoriously difficult to port to. It didn't have a frame buffer OR character modes (the two dominant ways to handle backgrounds), sprite X/Y position registers, collision detection, or a blitter. It could reproduce most of those effects but they were coded very differently. It also had almost no memory, just a few kilobytes available to the game. Most ST/Amiga/home computer games of the day relied on frame buffers and generous amounts of memory.

 

The Jaguar on the other hand was a better fit for Amiga and ST ports, since it had a nice 2MB chunk of RAM, a frame buffer, and a very fast and friendly blitter (at least for 2D stuff).

The Panther's graphics chip was the predecessor of the Jaguar's Object Processor. The two are very similar in how they operate, although the Panther had a couple of interesting features not in the Jaguar and of course the Jaguar's OP was far more flexible and advanced, and capable of producing many more colors.

 

Probably the best way to think of the Panther is as a slightly weaker Sega Genesis that can display larger, zoomable sprites. The number of on-screen colors is limited like the Genesis. Unlike the SNES, there is no rotation (or "Mode 7"), no 256-color palettes, no 15-bit RGB, and no color blending/semi-transparency effects.

 

The graphical tour de force for the Panther would be a game like Afterburner.

 

In terms of CPU power, it should have been somewhat weaker than a Genesis, since the CPU could only run while the (bus-hungry) graphics chip was idle. Additionally, the Genesis had 128KB of RAM split 50/50 between the CPU and graphics, while the Panther had only 32KB, shared between graphics and CPU.

Atari was very focused on making things cheap since the Atari ST (i.e., since the Tramiels). The 32KB limit was part of keeping it cheap. To get into gory details, the object processor required a ton of bandwidth to do its job, so Atari used a 32-bit wide static RAM buffer. More than 32KB of static RAM is too expensive. To supply 512KB would have required dynamic RAM. However, dynamic RAM could not have achieved the needed bandwidth with only 32-bits.

 

The Jaguar solved the Object Processor bandwidth issue using 64-bit dynamic RAM instead of 32-bit static RAM. Just one more reason the Jaguar was a better thought-out design.

 

 

 

 

At the very least, they really needed to re-think the Panther design to work in DRAM. Maybe something closer to the Jag object processor, but targeting a late-1991 release date, would have made sense. (if they limited it to 32-bit, it would probably be in the 40-66 MB/s range depending on the DRAM used) That still would have left the issues of it being a "different" architecture and purely 2D. (with fast 2D sprite scaling support)
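To put rough numbers on that bandwidth issue, here's a quick C sketch (the cycle times are illustrative assumptions, not official specs -peak bandwidth is just bus width divided by cycle time):

#include <stdio.h>

int main(void) {
    /* peak bandwidth = bus width (bytes) / cycle time; cycle times assumed */
    struct { const char *cfg; double bytes, cycle_ns; } c[] = {
        { "32-bit SRAM, ~35 ns cycle (Panther-like)",     4,  35 },
        { "32-bit DRAM, ~100 ns random cycle",            4, 100 },
        { "32-bit DRAM, ~60 ns page-mode cycle",          4,  60 },
        { "64-bit DRAM, ~60 ns page-mode (Jaguar-like)",  8,  60 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-45s %4.0f MB/s\n", c[i].cfg,
               c[i].bytes / (c[i].cycle_ns * 1e-9) / 1e6);
    return 0;
}

That lands right around the 40-66 MB/s range for 32-bit DRAM, and shows why fast SRAM (or the Jaguar's 64-bit DRAM bus) was needed to feed the object processor.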


I've just remembered why I rarely frequent the Lynx boards... Nerds!

 

:lol:

I rarely frequent the Lynx boards too; most of these sorts of discussions have been in the Jag, A8, ST, 7800, or 5200 forums (at least the ones I've been involved in). :P

 

 

Oh, and this one in the prototypes forum:

http://www.atariage.com/forums/topic/143762-atari-panther/page__p__1745910

 

. . . except that went off the Panther topic a few pages in anyway. ;)

 

 

 

 

 

Kskunk,

I meant to post this last night, but forgot to:

 

With that said, I'd definitely encourage you to read more on the 3DO. It was the very next chip designed by Suzy's designer. It bears striking resemblance to your proposal: Two banks of memory, source and destination, bridged by a Suzy-like blitter.

Yes, but the 3DO was hardly tempered or focused as a tight, low-cost (or high performance-to-cost) design for its time the way the Lynx had been. (or even the Amiga back in '83/84)

Plus, the 3DO sidesteps some of the things I was saying. It doesn't just have separate source and destination BANKs, but the destination on a separate bus entirely, with the source shared with the CPU. (I was thinking more in terms of a shared bus with 2 banks for source and destination . . . but to actually keep both pages open, you'd probably need 2-bank interleaving like the Jag supports and the Lynx doesn't . . . or perhaps a simpler scheme based on bank switching with some help from external logic -though again, effectively needing 2-bank interleaving or 2 buses with bus-steering logic to bank between the 2 -which is exactly how the word RAM in the Sega CD works, or the framebuffers in the 32X and Saturn ;))

 

So even in the best case, the Lynx having 2 banks on a shared bus would still mean some added complexity and added logic if Susie wasn't to be changed internally. (and perhaps a snag with using the Slipstream as well; AFAIK it doesn't support 2-bank interleaving as such, so it would be limited to 333 ns blitter accesses for reading/writing textures; flat-shaded 3D -or rasterized 2D- would be another matter though -it definitely supports 16-bit fast-page fill functionality at 12 Mpix/s in 256 color mode, also significant for clearing areas for normal 2D blits) I don't know enough about all the specifics of the Slipstream design, so maybe there's some additional buffering to make better use of page mode. (it at least supports 16-bit reads/writes, so possibly 6 Mpix/s in 16 color mode, even with full random accesses -but that's an ideal case where you always move blocks of 4 pixels at a time)
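Back-of-the-envelope in C, using the figures above (the ~166 ns page-mode write is inferred from the 12 Mpix/s fill figure; blits assume one read plus one write per 16-bit word -all of this is my own rough math, not documented Slipstream specs):

#include <stdio.h>

int main(void) {
    /* 16-bit bus: 2 pixels/word at 8bpp, 4 pixels/word at 4bpp */
    double fill_8bpp = 2.0 / 166e-9;        /* page-mode solid fill       */
    double blit_4bpp = 4.0 / (2 * 333e-9);  /* random read+write per word */
    double blit_8bpp = 2.0 / (2 * 333e-9);
    printf("8bpp page-mode fill: %4.1f Mpix/s\n", fill_8bpp / 1e6);
    printf("4bpp random blit:    %4.1f Mpix/s\n", blit_4bpp / 1e6);
    printf("8bpp random blit:    %4.1f Mpix/s\n", blit_8bpp / 1e6);
    return 0;
}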

 

 

 

 

 

Aside from something engineered as extremely tight/aggressive and innovative as the Jaguar, the 3DO chipset (and end configuration) should still have been able to be far more cost effective, or at least cheaper (while maintaining good overall performance). Like pushing for greater consolidation and pushing a little harder on the chip design end (perhaps at least .8 micron), and tempering the use of VRAM more (or perhaps switching to plain DRAM entirely, either with some performance hits for a line buffer to scan the framebuffer, or with the added cost of dual hardware framebuffers more like the 32X -albeit that might not be much cheaper than VRAM for the same purpose, especially since you'd need at least 2 DRAM chips per buffer -like 2 64kx16-bit DRAMs for 2 32-bit wide 256k framebuffers -and 256k was fine for pretty much any realistic resolution/depth of the time). Hmm, does the 3DO even use the 32-bit bus width much for graphics? (does the blitter have a 32-bit destination buffer?)
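On the 256k point, the framebuffer arithmetic is easy to check in C (the mode list is just illustrative of typical resolutions/depths of the era):

#include <stdio.h>

int main(void) {
    /* framebuffer size = width * height * bpp / 8, compared to a 256KB bank */
    struct { int w, h, bpp; } m[] = { {320, 224, 8}, {320, 240, 16}, {384, 240, 16} };
    for (int i = 0; i < 3; i++) {
        int bytes = m[i].w * m[i].h * m[i].bpp / 8;
        printf("%dx%d @ %2d bpp = %6d bytes (%s 256KB)\n",
               m[i].w, m[i].h, m[i].bpp, bytes,
               bytes <= 256 * 1024 ? "fits in" : "exceeds");
    }
    return 0;
}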

Then, a CPU with a cache would be preferable as well. (another option might be dedicated texture memory, like a single 512kx16-bit DRAM plus the separate framebuffer banks -or a shared framebuffer bank with some contention, or VRAM) A CPU with a cache (even one not especially powerful compared to the ARM60 -like a 25 MHz 68EC020, let alone an ARM610 -or even a 25 MHz ARM3) probably would have been cheaper overall, with just the 512k of framebuffer RAM and 2MB of shared main RAM. (a chunk of SRAM as a texture buffer -a software-managed cache- might have been an interesting performance-boosting feature as well) Or maybe even just the ARM60, but with an added SRAM texture "cache" to boost overall performance somewhat. (single-cycle blitter reads from that buffer, and no hits to main RAM, though some hits for updating that buffer and for working outside the buffer -assuming you never used slow direct read/writes within the framebuffer space)

 

 

The 3DO wasn't aimed at being an aggressively priced game console at all though . . . some of the features (like that 1MB of VRAM) were purely for unnecessary multimedia support (like 640x480 truecolor images), though maybe the 640x480 interpolation support relied on that as well. (still rather unnecessary for the time, especially given the cost/performance issues) . . . Actually, is that 1 MB useful for anything but framebuffer space? If one wanted to, could VRAM be used as both source and destination for textures? (a huge hit to performance for sure, but it seems like it could be a pretty big advantage compared to heavy bus contention stalling the CPU -it should be about as fast as Jaguar texture mapping in DRAM, or somewhat faster if there's actually a 32-bit destination buffer, except unlike the Jaguar there's no bus contention with the CPU)

 

 

OTOH, the 3DO's market success/failure really had relatively little to do with the actual hardware's cost effectiveness or even its general power/performance. The real issue was simply the market model used; it wasn't even marketing or management/funding/etc (like the many other problems Atari had), but an experimental business model that ended up proving impractical due to the necessarily high hardware costs and software prices that didn't drop nearly enough by comparison.

Had 3DO partnered with a single, heavily invested company for manufacturing and investment capital (probably Panasonic), and pushed the 3DO at as low a price as practical with their resources (including sustaining losses) with a normal "razor and blade" model for profiting from software sales and royalties, it might have managed to be a mass-market success.

In spite of the large chunk of VRAM and 1 micron chips, the 3DO was a relatively simple (and in some ways "cheap") design compared to the PSX or (especially) the Saturn, and could have been considerably consolidated and cost-reduced to keep up (especially against Sony's greater resources in general). Those 1 micron chips could have allowed a huge amount of board-space reduction through consolidation onto newer processes in the mid 90s (.5 micron revisions in 1995 wouldn't have been especially risky), plus it used cheap, commodity 80 ns FPM DRAM for main memory and (relatively) lower-end, slow VRAM on top of that. (I believe the PSX used 33 MHz VRAM for video in its case -it might have used EDO DRAM, in which case "VRAM" would be a confusing abbreviation for "RAM used in the video subsystem" -like how SNES or PCE "VRAM" is actually single-port SRAM, or single-port DRAM in the SMS or TMS99x8-based systems)

 

 

 

In any case, it's better to start with a blank piece of paper, unless the chip in question is already a perfect fit.

Or an alternate existing design. ;) (even without any added buffering or a separate destination, it seems like the Slipstream could have been a much better overall design than the Panther, in the context of Atari having something for 1989 and not waiting any longer -thinking back to the business side of things- maybe they could have waited until 1991 -in that case, maybe they could have had the Jaguar OPL ready, maybe even the blitter- but even a wait of that amount of time could be worse than releasing a more limited 1989 design -Atari's market position steadily declined from '89 onward for both games and computers, though it was more gradual in Europe; plus, the 64-bit bus in general would have had inherent cost constraints for the time that may have made it difficult to be price competitive with Nintendo or Sega -more RAM to decompress into and potentially cheaper carts would be a mitigating factor, but a cheaper DRAM-based system could also feature that)

 

The 1989 Panther really wasn't a practical long-term design with that RAM limitation, "different" architecture, heavy bus contention, etc.

(Brennan himself mentioned not really liking the concept that much, and that being a big reason for pushing the Jaguar; that also makes me wonder even more whether he ever brought up the Flare 1/Slipstream design to Atari as an alternative to the Panther. Maybe he thought pushing for a fully next-generation system made more sense, but leaving the market totally open really didn't. In hindsight, the Jaguar ended up being released under even worse circumstances than just the 7-year gap since the 7800's release: horrible cash flow/credit/debt issues across the board, deeper management problems, and a market-wide slump in the US, followed by an insane amount of competition as Sony pushed onto the market. Granted, Europe was far more marketable in 1993 all around, and much cheaper to market to, aside from Atari's other persisting problems, but they ended up starving their European market when they had a remote chance of becoming established there. Hyping in the US in '93 for investment made sense, but allowing the massive shortage problems and generally ignoring Europe probably killed the last chance Atari had at a stable long-term product. Maintaining the Lynx would have been necessary as well, though the computers were pretty close to shot, with a small hope in Europe for the Falcon line before Atari pulled out of that market in late '92 -they'd screwed up so many things by that point, there wasn't much they could do to recover . . . and some bad management decisions pretty much compromised that as well -the Jaguar was lucky to get as far as it did in the US, but it was artificially crippled in Europe by most accounts I've come across)

 

 

 

And I hope this creative streak helps lead you into designing real systems of your own. Dissecting these ancient designs taught me a lot about the design process, lessons I use to design systems in my day job.

Yes, it's certainly fascinating, though I'm still not sure what direction I'll end up in with computer science. (so far I'm leaning more toward the software end; the hardware side of things -and historical stuff in general- is really interesting too, but there's a lot of heavy electrical engineering involved in real hardware design -I'm OK with some of the simpler areas of that, and on a conceptual level, but I'm not sure I could really get into the deep low-level end of those things on a professional level) Actually, it's the historical part that's fascinating in general too, and I did consider a history major at one point, but there's not a lot on the professional end of that path that really makes sense for me. (either jobs that don't interest me, are in very specific/limited demand, or are hard to make good money at -and a job you love that doesn't end up paying very well can be a bad conflict, as I've learned secondhand from a few people -a teaching career can end up that way, among other things; granted, something you don't like that you're good at and that pays well isn't a good option either, so I'm aiming at some sort of compromise on that end ;) and there seems to be a good chunk of different professions within the computer science field where I should be able to find something I fit into well, and even that specific field tends to encompass a fair range of professions as well)

 

I haven't ruled out a position in the game end of the market either, but that may end up being limited to hobby in the long run. (might eventually extend to some actual programming for some old systems)

 

Hell, it took a while to realize I really wanted to go toward computer science (I came out of high school most interested in engineering and physical science -a mix of chemistry and mechanical/materials engineering interests), but I ended up realizing I only enjoyed those subjects so far, and more on a hobby level. (there was a brief time I considered engineering in the aeronautics industry as well, but I rather quickly realized that was also more of a hobby interest -pretty much all of those technical/hobby interests are also rooted heavily in the history of their development)

 

Again, I'm pretty sure the history interest is going to be a pure hobby interest as well. (unless that hobby end of things actually ends up extending to actual research and archiving, but that's technically still hobby stuff, just a lot more serious -what Curt and Marty have ended up doing is really amazing, and that largely grew out of a hobby interest as well . . . technically it still is hobby/non-profit in general, though I'd imagine they'll make some money off the books eventually -except I doubt money is remotely close to the driving factor in writing those books, but obviously some monetary compensation for their work -on top of the obvious costs of publishing and distribution- makes sense too)

 

 

 

I have friends in the toy industry who do sell millions of things, and they're always getting deals on these weirdo chips, as desperate chip makers try to bootstrap their market. Toys are full of bizarre eccentric things you've never heard of, and never hear of again.

And sometimes they do end up being successful too, but more often not, or it takes far longer than the lifespan of said bootstrapped product. (PSRAM is a fair bit more common today -the "1T-SRAM" of the GC/Wii is sort of an outgrowth of that as well, but even more conventional PSRAM is in far more common use now than in the 80s or 90s)

Sega also must have gotten a really good deal on early FeRAM supplies, given it's used in a few games despite its extremely exotic nature in the early 90s. (or maybe even a special deal on mass-market testing -especially given the reliability issues seen in the few Sega games to use FRAM rather than battery+SRAM backup)

 

Still, the commodity option will generally be better in the long run than an exotic one, though an exotic option with an extremely attractive price can end up more attractive than a high-end alternative -like SDRAM vs RDRAM in the mid 90s. Taking that comparison, Nintendo might have had better overall cost savings with EDO DRAM on a 64-bit bus (the fastest EDO ran at 33 MHz, so even at 64 bits it would be 266 vs 500 MB/s peak bandwidth -but there could be significant advantages in the proportionally much smaller latency penalties -especially page-change penalties, as 64-bit random EDO accesses would still deliver much higher bandwidth than random 8-bit RDRAM hits). Though, in that specific case, there could have been bigger advantages in not being tied to SGI's royalty/licensing overhead, but I'm not sure of any specifics of that arrangement, so it's hard to comment further. (there are other trade-offs of licensing vs internal R&D or other forms of outsourcing -be it outsourced development of custom chips, or selective licensing of off-the-shelf parts, ie shopping around for the most cost-effective options . . . though in the ~1995 period, the only overall graphics/multimedia chipset available was probably ATi's RAGE -OK 3D support for the time, probably better than any other consumer video card in 1995, let alone any all-in-one graphics solution with 3D support for a few years after, plus MPEG-1 support and relatively competent performance with EDO DRAM; no idea what sort of partnership/licensing/high-volume purchase options might have been possible at the time, plus you'd still need a CPU and some added I/O logic -let alone the software/tools support end of things, though ATi's own documentation and API support would be very useful)
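Rough numbers on that EDO-vs-RDRAM comparison in C (the peak figures follow from bus width times clock; the random-access latencies are purely assumed for illustration, so treat the second pair as hand-waving):

#include <stdio.h>

int main(void) {
    printf("64-bit EDO @ 33.3 MHz:  %4.0f MB/s peak\n", 8.0 * 33.3e6 / 1e6);
    printf("9-bit RDRAM @ 500 MHz:  %4.0f MB/s peak\n", 1.0 * 500e6 / 1e6);
    /* Random 8-byte read: the wide bus fetches it in one assumed ~90 ns
       cycle, while serial RDRAM pays an assumed ~200 ns protocol latency
       before streaming the same 8 bytes at 500 MB/s. */
    printf("EDO, random 8-byte read:   %3.0f MB/s\n", 8.0 / 90e-9 / 1e6);
    printf("RDRAM, random 8-byte read: %3.0f MB/s\n",
           8.0 / (200e-9 + 8.0 / 500e6) / 1e6);
    return 0;
}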


A very interesting discussion. My question, though, is whether Atari would have been better off continuing to develop the Atari Panther until '94, with well-matured development tools that were easy to use, sticking with 32 bits.

Thinking on this again, a 32-bit Jaguar of sorts (again, the Panther is a totally different story) might have been interesting. It would have meant lower bandwidth in general, but the lower pin/trace count and reduced number of RAM chips might have favored a dual-bus design to curb some of those problems. (like 1 MB of 32-bit DRAM for video and perhaps 512k of 16-bit DRAM for sound/CPU work RAM) That also might have been easier to get working satisfactorily slightly earlier on. (though an earlier release would have meant an older chip process -like .8 micron- and thus far more limited use of silicon, probably meaning the GPU wouldn't be possible; though cutting the GPU out of the design and aiming at an earlier date may have been more foolproof in general -the 68k alone wouldn't be enough for decent 3D, so they'd at least need some coprocessing support to assist with that, perhaps something closer to the fast ALU and DSP of the Flare 1 design -much more primitive, much less silicon, but still a massive help for 3D rendering -and some pseudo-3D stuff, and audio in the DSP's case; dropping JERRY also would have significantly sped up development -perhaps just a simple I/O ASIC with DMA sound built in, sharing the CPU bus -or embed that into the main system/graphics ASIC and give that chip interfaces for both the 32-bit video bus and the 16-bit CPU/sound bus -so the blitter, the OPL, a rudimentary DSP-like coprocessor -maybe directly derived from Ben Cheese's design from the Flare 1, but die-shrunk and at a higher clock speed- a fast multiplier/ALU coprocessor, general-purpose I/O logic -including a serial UART if that was a desired feature- and simple DMA stereo sound with on-chip DACs)

 

Having something ready with preliminary documentation out by late 1990, early-revision preproduction dev systems out by mid/late 1991, and the actual release in late 1992 would still have left a wide gap for Atari, but it would have been a year less than the Jaguar (2 years, given the full national -not to mention European- release wasn't until mid 1994) and should have been a lot better than what they were stuck with in 1993 (they were in bad shape in '92, but truly desperate by early/mid 1993). Plus, in pure hindsight (ie not trying to think from the perspective of the time), in 1992 the Lynx was doing OK, their computers still had a small chance of maintaining some position in Europe at least (not to mention possibilities of an alternate Falcon incorporating part of the new console chipset), and the US video game market was extremely healthy at the time. (1993 saw the beginning of a major market-wide slump that wouldn't fully recover until late 1996, so a 1992 launch would have been all the more important in getting established -again, strong European support would also mitigate some of those problems, as the slump didn't occur there and Atari had much better PR in general)

 

The Jaguar itself is sort of a fusion of the Flare 1 and Panther design concepts (a Panther-like object processor, plus a powerful blitter, a custom RISC GPU coprocessor, a single shared bus with a generic host processor, etc). Though one could argue the Jag would have ended up better off for the mid-90s market (albeit partially by coincidence) if it had skipped the object processor and instead focused purely on a powerful, feature-rich blitter. (one of the biggest "lucky" side effects would probably have been a far more powerful texture mapping feature -for fast scaled sprites, since you'd lack the OP, also making rotated/scaled/warped 2D objects far faster to render, and making texture-mapped 3D extremely practical -not just useful for polygon/line renderers, but a 2-pass scheme could make it useful for rendering texture-mapped columns as well -Wolf3D, Doom, Duke3D, etc; AFAIK, Doom on the Jag uses the GPU to render columns in software, not the blitter's texture mapping feature -it's probably too slow to make 2-pass rendering worth it compared to software GPU rendering, though it would probably be faster if just used to draw the floor and ceiling -which use more polygon-like line rendering)
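For reference, the kind of column inner loop Jag Doom runs in software on the GPU looks roughly like this in C (a generic Doom-style sketch, not actual Jaguar code -the names and the 64-texel column height are my assumptions):

#include <stdint.h>
#include <stddef.h>

/* Draw one vertical column of a wall texture into an 8bpp framebuffer,
   stepping through the texture in 16.16 fixed point so the column can
   be scaled to any on-screen height. */
static void draw_column(uint8_t *fb, int fb_pitch, int x, int y_top, int y_bot,
                        const uint8_t *texcol, uint32_t step /* 16.16 */)
{
    uint32_t frac = 0;
    for (int y = y_top; y < y_bot; y++) {
        fb[(size_t)y * fb_pitch + x] = texcol[(frac >> 16) & 63]; /* wrap at 64 */
        frac += step;
    }
}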

 

 

 

But for a 1989/1990 console, comparing the Panther and the Slipstream, the Slipstream really seems like the better option overall. (not as much raw 2D power, but using cheap DRAM -so very realistically having a good chunk of RAM, perhaps 512k- and a lot of provisions for 3D; color would be less limited than the Panther when using 256 color mode, but 2D rendering would be slower in general -aside from solid-color line/block fill operations, which are pretty fast, and actually fastest in 256 color mode- it would also have been much closer to the conventional "standards" of the time rather than a fairly radical departure like the Panther)

 

 

 

That's all the PlayStation was. Plus it did well because it was easy and cost-effective to develop for; the Saturn was a pain, and the N64 was not even out yet.

And one more note on this: as I touched on before, the PSX's success wasn't just tied to its hardware performance, or to an architecture and toolset that made it particularly attractive to work with at the time (albeit those tools mainly helped with the learning curve and made things more foolproof for ports and early or lower-investment games -the later stuff pushing the system tended to go low-level for better optimization, or use custom APIs rather than Sony's libraries, or a mix of those -some developers ported their own APIs to make cross-platform development easier in general), but that was certainly part of it.

 

The really big issue that made the PSX a success was Sony's massive funds and the good management backing them up (NEC had massive funds too, but that didn't keep them from screwing up marketing/management/distribution with the TG-16). They had massive ad campaigns, huge investment in getting 3rd parties interested, and further investment in buying up exclusive games (and buying out some developers entirely), plus selling pretty expensive hardware at an unprecedented loss. (Sony did have some fairly tight hardware, but it was still using some fairly fast RAM for the time, 3 buses, dual-port VRAM for video -I think- etc, and it was certainly cheaper than the somewhat sloppy Saturn design; their massive volumes and vertical integration -especially for CD-ROM- gave a further advantage, but Sony was still selling at a loss at $299, then $199, then $150 in '97 -though by that point it may have been close to breaking even, and further cost drops possibly meant the $100 1998 price was at a small profit)

 

That and you had Sega and Nintendo screwing up in several areas (NEC too, in Japan -the PC Engine had been huge in Japan, but the PCFX was a massive flop), Nintendo with their still relatively unattractive licensing terms and dropping CD-ROM (the latter cost them Square which acted as a catalyst to draw a ton of support from Nintendo to Sony), and Sega had their own marketing/PR/management mess. (the Saturn's architecture -and lack of good tools- or the separate issue of it being relatively cost ineffective -expensive relative to the performance- were problems too, but the big issues were on the business/management/marketing end of things)

 

It wasn't even until late 1996 that the US market really went mainstream with the 5th generation (32/64-bit consoles), and that was one of Sega's problems too. (not milking the Genesis enough late in the generation, and not positioning it to smoothly transition into the budget market in 1996; Nintendo managed that expertly with the SNES)

That early push meant Sega was spending too much marketing on the next-gen end of things when the profits were all in the older machines. It also meant they were in much worse shape for a big marketing push when push really came to shove in late 1996.

Then there's the 32X (which wasn't a huge problem on its own, but not great in hindsight . . . the real problem was effectively dumping it in mid 1995 with the launch of the Saturn -even though that was months before the official discontinuation, it ruined Sega's PR and their chance to make a profit on the 32X). The early launch of the Saturn in the US probably marks the point where Sega's problems began in earnest. (it ended up doing nothing to help the Saturn -it gave it bad PR due to the price, shortages, and limited launch lineup, pissed off the retailers and developers left out of the loop, frustrated the rest of the retailers with chronic shortages, made a huge mess of the 32X, further compromised the Genesis's late-gen profitability, and opened the door wide for Sony to steal their thunder right at E3)

This is a totally different topic though, and one that pops up on and off on Sega-16. ;)


You two are incredible (kskunk/koolkitty). I always enjoy reading the 'behind the scenes' and technical stuff, even if I don't always fully understand all of it. Great reads all the same, and I always end up learning *something* thanks to the way you write. Same with Marty and Curt's and others' insight here at AA. There was so much more going on with the design and marketing of our beloved systems than many of us simply take for granted. Like reading or watching a documentary about your favorite artist or band, you usually end up with a greater appreciation for their work afterwards. Just wanted to express some gratitude and appreciation for the kind of banter that enlightens :)


I always enjoy reading the 'behind the scenes' and technical stuff, even if I don't always fully understand all of it. Just wanted to express some gratitude and appreciation for the kind of banter that enlightens :)

I'm glad you enjoy it, despite how verbose we can be. KK89 has a lot of interesting ideas. Sadly, I never have enough time to address everything he posts. I'm afraid the posts would grow to be 10 pages... each. ;)

 

Thinking on this again, a 32-bit Jaguar of sorts (again, the Panther is a totally different story) might have been interesting.

I know you enjoy diving into technical details, but I always wonder about the product side. Atari had no problem making neat technology, but the "whole product" (not just the tech) usually fell flat.

 

To be on-topic (for the Lynx board), the Gameboy was inferior to the Lynx technically, but most consumers buy products, not technology. And people buy for the games, not the tech.

 

By the time of the Lynx, Atari had completely lost their ability to make games. By 1984, they had no game programmers left in-house, and the arcade division was split off solely to save its profits from incompetent management. It's hard to be good at something you have no in-house experience with.

 

Sega benefited from its game development teams, and its arcade division. The Mega Drive design was based on Sega's System 16 arcade. And naturally they leveraged their own in-house talent and arcade properties to sell systems early on.

 

The Lynx was in a similar situation, thanks to Epyx. Epyx understood games, and developers, and leveraged their in-house talent and properties. They even helped recruit 3rd parties.

 

Because of Epyx, the Lynx was the last Atari console to really have a chance. It outsold Atari's homegrown Jaguar 33-to-1.

 

The Jaguar was impressive technology for 1993, but Atari had no hope making it into a product. Thank god there are people who are oblivious to business and just want to make games using cool tech. (Like us!) Otherwise there'd be no games at all.

 

It's a lot easier to design console technology than it is to make a successful console product. You're right that Atari was awash in technology, but no technology could change the fact that Atari lost its gaming soul in 1984.

 

- KS


I always enjoy reading the 'behind the scenes' and technical stuff, even if I don't always fully understand all of it. Just wanted to express some gratitude and appreciation for the kind of banter that enlightens :)

I'm glad you enjoy it, despite how verbose we can be. KK89 has a lot of interesting ideas. Sadly, I never have enough time to address everything he posts. I'm afraid the posts would grow to be 10 pages... each. ;)

Plus, I often end up answering some of my own questions while writing this stuff . . . or musing more than actually responding to any specific point. ;) (but then also not going back to take the time to remove anything redundant/unnecessary or difficult to read, as I already spent too much time typing it the first time)

 

I know you enjoy diving into technical details, but I always wonder about the product side. Atari had no problem making neat technology, but the "whole product" (not just the tech) usually fell flat.

Yes and no, but in Atari Corp's case, their real big success was the ST, albeit mainly in Europe and mainly in the late 80s. (it wasn't perfect, but prior to Sam taking over it was the dominant 16-bit computer in Europe and had a significant, if niche, place in the US . . . makes you wonder how Jack might have handled things going forward -on the business end, of course; the tech end was somewhat separate, though obviously tempered by business decisions)

 

To be on-topic (for the Lynx board), the Gameboy was inferior to the Lynx technically, but most consumers buy products, not technology. And people buy for the games, not the tech.

I wouldn't say inferior, but different. The Game Boy had a significant number of technical advantages from an overall product standpoint (looking at far more than just game-generating capabilities). It was simple and cheap to manufacture (so much so that, had Nintendo chosen to, they could have dropped the price far lower than they actually did for much of its life -which in turn means they were making some nice profits off the hardware itself), it was oriented around grayscale graphics, which made it feasible to use a reflective screen (at a time when color screens were impractical without backlighting . . . or only practical if very low contrast ratios were acceptable -probably barely 4-bit RGBI quality with poor viewing angles at that, and still more expensive than B/W screens), it had a compact form factor, used 4 AAs rather than the competition's 6, had reasonable overall sound and graphics performance (sound more or less equal to the Lynx -aside from software PCM- and decidedly better than the basic PSG in the Game Gear), and (possibly most important of all) it had exceptional battery life, in the range of 4-5x that of the competition.

 

Of course, then there's all the business/market advantages Nintendo had on top of that (and obviously a position in Japan where Atari could never even hope to get any real support), but one could argue that if Nintendo had tried to launch the Lynx (exactly as it was in 1989), it would not have done nearly as well as the GB due to the inherent disadvantages. (battery life probably number 1, bulk being a big factor, cost notable too)

 

Look at the Game Gear: similar overall marketing/support/branding to Nintendo's (albeit released 2 years later in the US), but it never came close to the GB's market share either, even at its peak. Actually, it did surprisingly poorly in Europe (in spite of Sega's general advantage there), while the Lynx actually did best in that market and beat the GG by a good margin (especially comparing the Lynx and GG while both were active -the Lynx obviously was dropped first).

Some European gamers mentioned that the GG was criticized for "only" being a handheld Master System with mostly the same games . . . though I'd honestly have thought that would be more of an advantage. (albeit there is the issue of Sega coming off as trying to sell the same games twice, and of GG carts being incompatible with the SMS -3rd parties had SMS adapters for the GG though . . . maybe if Sega had made a more direct conversion of the SMS, that wouldn't have been a problem -the 12-bit RGB wasn't a big advantage on screens of the time anyway, so literally using 100% GG-compatible games might have been better -with a cheap adapter to play GG games on the SMS . . . unless they simply used the card format already supported, but that had more restricted addressing than the GG cart slot)

 

 

By the time of the Lynx, Atari had completely lost their ability to make games. By 1984, they had no game programmers left in-house, and the arcade division was split off solely to save its profits from incompetent management. It's hard to be good at something you have no in-house experience with.

Oh, you don't have to tell me; there's a ton to that situation to process . . . and some (actually a lot of) details only came to light in the last couple of years (thanks largely to Curt and Marty).

 

But, staying away from the specifics of the mess with Warner/Atari Inc/TTL, I understand the general situation already. Atari Corp was not Atari Inc; Atari Inc had been liquidated, and Atari Corp was TTL plus Atari Inc's consumer division properties. They also ended up a lot worse off than Atari Inc had been immediately prior to that (largely due to Warner's mismanagement of the transition . . . though one could argue that even in the best case, TTL never could have done as well with Atari as Morgan could have if his NATCO plans had been completed).

 

Atari Corp had a steep uphill climb to work through thanks to Warner . . . much steeper than Morgan would have had to deal with under Atari Inc. (one of those massive snags was Warner contending that the GCC contract was not part of the sale to TTL; that went back and forth until early 1985, when Tramiel gave in to Warner and paid for the contract separately, and the 7800 project got rolling again -Katz was brought in shortly after and went to work building up for a launch)

The tight resources of Atari Corp severely weakened the 7800's position on the market on top of the delay . . . one could argue they shouldn't have bothered with the 7800 at all in that context. (one more separate platform to manufacture and support with software and advertising . . . it probably would have made a lot more sense to focus just on the A8 and ST at that point -with the A8 as the mass-market games machine; something like the XEGS in 1985 could have made a lot of sense, or maybe even by late 1984 -namely a direct repackaging of the 600XL, followed by a more significant low-cost case revision with a cheap membrane keyboard and a built-in game rather than BASIC, or something like that; on top of it being an established product, Atari Corp had tons of stockpiled components and software for the A8 to work with -this is off topic too, and something I already brought up in several other threads ;) )

 

 

But on the issue of Atari Corp in general, they were really just getting started in 1984 and had to work their way up. They were a very different company than Atari Inc (for better or worse) and also had a massive debt to work through. However, by the late 1980s, things had improved dramatically and Atari Corp had become a highly profitable international company in the Fortune 500. That was their peak, unfortunately, and it was pretty much downhill from there. (that was also the time to strike if they were ever going to get into the highly competitive and money-intensive North American mass market -Europe had more flexible options, but the US needed a decisive push with lots of resources backing it up; 1989 was probably the best opening Atari ever got for that, maybe 1988)

 

Atari largely outsourced its software, though with the Lynx they had begun to amass their own group of development teams in the Chicago division, iirc. Strong 3rd party support is what drove the games on their computers though, and that sort of support might have made a console as well. (the 7800 was crippled far more by the lack of any real 3rd party support than by not having compelling 1st party exclusives -if the thing had had absolutely no exclusives but a fair number of reasonable-quality versions of popular 3rd party games of the time, it probably would have been far more popular -marketing would obviously play a huge role there too . . . good marketing and consumer interest are also integral to good 3rd party support -granted, low licensing fees and a low-risk, easy-to-develop-for system can also mitigate an otherwise weak market position)

 

Sega benefited from its game development teams, and its arcade division. The Mega Drive design was based on Sega's System 16 arcade. And naturally they leveraged their own in-house talent and arcade properties to sell systems early on.

Yes, that's true, but good software alone isn't really going to do anything, at least in the US market. The Master System had lots of great software for its time, but it sold considerably worse than the 7800 in spite of Sega's marketing budget being around 6 times bigger. (they seem to have really squandered that budget . . . Michael Katz brought a massive change to that in late 1989 and pushed Sega's first truly successful marketing campaign through 1990)

Software is part of the recipe, but (in the US especially) hype and good marketing are critical. (that, and it's a path to better software support on the 3rd party end in general -part of Sega's success with the Genesis was a massive build-up in western publishing, both 1st/2nd party and licensed 3rd parties)

 

Without Sega of America's management under Katz and Kalinske, even Sonic couldn't have made the Genesis what it was in the US. (Europe is another story)

 

Of course, there's a rather extreme example with NEC as well. With the PC Engine, they had an established, highly popular platform in Japan for 2 years before they brought it to the US; they had a strong back library of games to draw from, relatively inexpensive hardware with vertical integration on top of that, and massive resources to build a truly awesome marketing campaign for the US launch. (so, in some areas, even better off than Sony with the PSX) However, bad management decisions managed to tear all that down with the TG-16. NEC could have been the Sony of the 16-bit era (so to speak), though perhaps it's better that they weren't. (if they'd truly managed what Sony did, it would have meant far less competitive variety . . . albeit Nintendo and Sega were in far more competitive positions in general, with their own management, compared to the following generation, so that might have actually been really interesting to see play out)

 

 

In 1988, Atari Corp was actually in a significantly better position overall than Sega to bring a new console to the US (in Europe they were both pretty competitive at that point across console/computer, so it's hard to say). In fact, Sega recognized that and attempted to negotiate a distribution deal for Atari to exclusively manage marketing/sales/distribution of the Mega Drive in North America, with plans to roll the American Sega marketing division in with Atari (or at least set up direct collaboration) . . . long story short, Katz was all for it, but Rosen and Tramiel couldn't agree on terms (a conflict of interests in the European market, and Atari's own console plans probably prompting more 2nd thoughts on the matter -plus the MD was totally unproven and unreleased). Again, Atari Corp (alone or partnered) was in its best position to launch a new game console in 1989, with the right management (which Jack and Katz seemed to be capable of) and at least reasonable hardware for the time. (if they really had managed to pull off a successful launch -let alone in Europe- that could have led to a real sustained period of growth for the entertainment division -marketing, in-house/outsourced software R&D, etc, etc) Instead, Atari entered a downward spiral of declining management and stagnating products. (with the exception of the Lynx, which wasn't so much stagnating as weak . . . and then the Jaguar)

 

Of course, as things turned out, Katz ended up leaving Atari in early 1989 (shortly after Jack retired) for what he planned to be an extended vacation from the industry, but as fate would have it, he got called in by Sega and joined as president in late 1989 (a couple of months after the launch of the Genesis). He ended up laying much of the groundwork for the Genesis's success with the first really competitive ad campaign ("Genesis Does!"), expanded emphasis on western software development, use of celebrity licensing/tie-ins, and a solid relationship secured with EA after they threatened to go unlicensed. (Sega Japan wanted to take legal action, iirc, but Katz talked EA into an especially favorable licensing agreement as well as contracting Joe Montana Football in 1990 -Sega had contracted Mediagenic for that purpose back in fall 1989, but that development went nowhere and dragged on without progress until Sega dropped them in 1990 in favor of EA)

 

The Lynx was in a similar situation, thanks to Epyx. Epyx understood games, and developers, and leveraged their in-house talent and properties. They even helped recruit 3rd parties.

Yes, and so did Katz (who'd managed all of Atari's game/entertainment-related business from 1985 to early 1989 ;)) . . . he was actually president of Epyx prior to joining Atari, and he's part of the reason Epyx was one of the more notable developers on the 7800. (in light of Nintendo locking in most/all Japanese arcade developers by mid 1985, Katz decided to focus on US computer developers for the 7800 instead . . . I do wonder why they never dug into the vast European development resources of the time; there was a lot of good material to pull from on that end in the late 80s)

 

Had Atari launched a new console to directly follow the 7800, especially if Katz hadn't left, Epyx almost certainly would have been a major 3rd party developer for that machine.

 

Because of Epyx, the Lynx was the last Atari console to really have a chance. It outsold Atari's homegrown Jaguar 33-to-1.

Yes, but the 7800 outsold the Jaguar by close to that amount too. (I haven't seen any worldwide figures, but it was a solid 3.77 million in the US market alone -not North America, just the US- from 1986-1990, with most sales tightly concentrated in 1987 and 1988 -almost 3 million sold in those 2 years alone; 1989 dropped below 700k and 1990 was under 100k)

The 7800 outsold the Jaguar in 1986 alone. ;)

 

 

Good point on Epyx though; with the relationship Atari had with them in the late 80s/early 90s, Epyx made a very strong addition as a (de facto) 2nd party to Atari. (that could have extended to development on a new Atari console at the time too . . . though by the time of the Jaguar, it was a different matter ;))

 

The Jaguar was impressive technology for 1993, but Atari had no hope making it into a product. Thank god there are people who are oblivious to business and just want to make games using cool tech. (Like us!) Otherwise there'd be no games at all.

 

It's a lot easier to design console technology than it is to make a successful console product. You're right that Atari was awash in technology, but no technology could change the fact that Atari lost its gaming soul in 1984.

Again, Atari was a totally different company after the liquidation . . . it was TTL renamed, with added stuff put into it. ;) It had its own soul, but was certainly far less into the software end of things (that may have eventually changed, but throughout Atari Corp's life it was mainly a hardware company and a software publisher, not a developer -with few exceptions).

 

I'm sure if you asked someone from Europe who grew up in the late 80s, they'd say the ST had a lot of gaming "soul" :P.

 

Though, really, Mike Katz was the gaming "soul" of that company as it was in the late 80s: he created the entertainment division (one of his conditions for joining Atari), he built that division up from himself in a single office working with extremely tight budgets, and he managed to do quite well with the 7800 and 2600 in spite of those limited software/marketing budgets and Nintendo's exclusivity contracts. ;)

I really wonder how he might have handled things had he stayed. (a bit ironic that he left just a few months before Atari took on a new game system . . . from his old company ;) -and that he joined Sega around that same time)

 

 

I have no delusions of Atari Corp "reclaiming" Atari Inc's former glory (so to speak); I was thinking more of Atari Corp growing on its own merits, both as a computer and an entertainment company -namely, what might have been if things had kept moving up from '89 onward rather than falling apart. (granted, things were hardly perfect up to that point either, but the net outcome had been pretty positive, and they did have a dominant position in the European computer market . . . except they were also facing newly founded direct competition from the Amiga at the low end -especially given their strategic disadvantage back in '88, when they couldn't hold the $300 price on the 520 due to the DRAM shortages while CBM managed to pull the A500 from $500 to $400 at the same time Atari had to push that $300 point back to $400)

The business and technical history of the ST is another topic entirely though. ;) (among those things would be the questionable decision of a fully closed-box design -not even a basic low-cost/flexible expansion port like the Spectrum had- and then the lack of "professional"-styled desktop/tower form-factor models until 1987 -the same time the Amiga was coming out with its console form-factor models ;))

 

 

Oh, and aside from the business side of things in general, there's musing on what technical options they had to work with at the time. (and what sort of merits would cater most to their business/market position)

 

 

 

 

That's all well beyond the initial scope of this topic though, and I think the Lynx-console question in general has been addressed. (Atari using Flare's Slipstream is another topic though ;) -and the whole "what if" of Atari launching any console in '89/90 and how it might have turned out)

Plus, if they HAD used the Slipstream, there are some areas where it wouldn't have made a bad fit for porting Lynx games (let alone computer games of the time), but the game code would obviously have needed major reworking to go from 650x to x86. (there's the even more interesting factor of the Slipstream already having software development started for the Multisystem, so that would have shortened Atari's practical time to launch)


  • 4 weeks later...

 

With that said, I'd definitely encourage you to read more on the 3DO. It was the very next chip designed by Suzy's designer. It bears striking resemblance to your proposal: Two banks of memory, source and destination, bridged by a Suzy-like blitter.

 

KS

 

Do you have any pointers to the 3DO design? I've found references to the 3DO patents, which give some information on the design of the hardware, but nothing that digs into the various components and capabilities.

 

Thanks


It was a shame that Atari didn't do a 'Lynx 3', this time squeezing an '816 instead of the 6502 into Mikey and extending some of Suzy's features, to take on the likes of the GBA

 

The reason I suggested the '816 is simply backward compatibility with the '02, and also remembering that the original designers of the Lynx were dead against using a 68k (according to an article I saw written by Robert Jung)


  • 6 years later...

I think that if Atari had decided to make a 16-bit home console version of the Lynx instead, it would not have made sense either, even with those arcade ports on it; the SNES would still have won the console war in the end, and there was also the Genesis and the SuperGrafx to compete with. It would have been the same story as the TurboGrafx anyway. Remember those different 8-bit machines Atari released over the years -aside from the Atari 2600, they were unsuccessful.

 

So I don't mind that Atari went for a portable color 16-bit(!) handheld instead; in fact, I find it very innovative and unique, and it was a great and daring move by Atari to go that way.

 

My only complaint about the Atari Lynx is the uncommon cartridge and link system, which set it apart from other systems and make it trickier to program for -you have to find clever programming tricks to get around those limitations- and the DRAM it used doesn't speed things up either.

 

So if Atari had only decided to use a normal, ordinary cartridge and a two-way digital link system, as well as SRAM instead, it would have sped things up drastically and made it easier to program. Atari could also have cut costs by removing the UART modules and the metal plate, using aluminum instead.

 

But I still like the Atari Lynx for what it is. It's an amazing system with mind-blowing features such as sprite scaling, polygon rendering, and data decompression -things the SNES needs a Super FX chip for, as in Yoshi's Island.

 

And of course we can get around those limitations of the Lynx, such as the scanline trick to get more colors on screen, alternating send & receive signals to fake two-way communication via ComLynx, and isolating the RAM bus to fake a hard reset to allow multi-ROM carts.
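For anyone curious, the scanline color trick boils down to reloading Mikey's 16-entry palette between lines. A rough C sketch (the palette addresses are Mikey's green and blue/red register blocks, but the interrupt plumbing is simplified and partly hypothetical):

#include <stdint.h>

#define PAL_GREEN   ((volatile uint8_t *)0xFDA0) /* 16 green values    */
#define PAL_BLUERED ((volatile uint8_t *)0xFDB0) /* 16 blue/red values */

/* Called from the horizontal line timer's IRQ: reload all 16 palette
   entries before the next scanline, so each line can pick its own 16
   colors out of the 4096 available. */
void hbl_handler(const uint8_t green[16], const uint8_t bluered[16])
{
    for (int i = 0; i < 16; i++) {
        PAL_GREEN[i]   = green[i];
        PAL_BLUERED[i] = bluered[i];
    }
}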

 

That's why I am so proud to own a Lynx.

