Atari Panther and the controversy of the bits of your processor



The keyboard on the O2 was not really meant to make it a computer, but to be useful for alphanumeric entry and game variations. It was a cheap mylar type and probably added little cost. It did help with the educational programs that parents wanted. One thing I always noticed about used units: they almost always have the box, and the gatefold cartridge boxes were used more regularly to store the carts.


16 hours ago, Video said:

Yeah, the Adam was the most successful of the computer add-ons, and it was still really rare compared to the already not-common Coleco. The main reason it was a success, if you can call it that, was that it was sold as a standalone or as a console add-on.

I think it was the only one that actually made it to market, so it's "most successful" by default :)


On 8/15/2023 at 2:04 PM, Video said:

Yeah, the Adam was the most successful of the computer add-ons, and it was still really rare compared to the already not-common Coleco. The main reason it was a success, if you can call it that, was that it was sold as a standalone or as a console add-on.

The ColecoVision platform itself was also based on a home computer chipset: the same TMS9918 graphics chip as the TI-99/4, which went on to power the Sega SG-1000 Mk. I and II (which did get a keyboard/computer expansion), but more notably the MSX, which was probably the most successful use of that chipset, albeit almost entirely in Japan with a limited presence in Europe. (And a possible absence in the US due to Jack Tramiel's own efforts aimed specifically at preventing Japanese entry into the North American computer market and the potential for predatory business practices that might come with it, albeit that's a whole other topic.)

The Adam itself might've lasted longer if it hadn't been for the fatal flaw of its tape drives, combined with misprinted manuals, all on top of the video game crash, which meant they didn't have the ColecoVision to hold them over while they worked out the kinks in the Adam.

Granted, the real thing that got me thinking about a MARIA-based computer system wasn't a 7800 add-on, but Leonard Tramiel's interview mentioning the missed potential of the TED chip at Commodore as a $50, 40-column, full-color text capable home computer (presumably what the C16 was supposed to have been in 1984). While MARIA wouldn't be a good basis for something that cheap and that early, it got me thinking in terms of Jack Tramiel being interested in continuing to push hard into the low end of the computer market with aggressively priced, new, competitive products. And on top of that, the concept of compatibility achieved at the OS (or BASIC interpreter) level, as well as full compatibility with existing peripherals (for the Atari 8-bit this would mean all SIO devices), but not full internal hardware architecture compatibility.

With engineering resources strained and all going toward completing the ST, they would initially have had only Atari's existing (production-ready) designs to work with, albeit MARIA wasn't even Atari Inc's property; it was a separate Warner Communications contract, which is the main thing that led to delays and the protracted negotiations for payment that eventually got sorted out in 1985.
MARIA also supported character modes and only needed SRAM for the lists, so even just 2 kB of SRAM plus a character set (or sets) in ROM would do, re-using the bare minimum of Atari Inc's existing inventory (and portfolio) of custom chips and off-the-shelf 650x-compatible I/O components to complete the system, presumably in parallel with the industrial design work going into the new production of the XE line. (For example: FREDDIE for interfacing 16 kB of DRAM; POKEY for keyboard, POT, and joystick I/O, re-using keyboard I/O lines and possibly using the 2 row-strobe lines to differentiate between 2 joystick ports by running them to what normally maps to ground.) It also should've been simple to let MARIA use DRAM at ROM speeds with the 250 or 200 ns DRAMs already used in the XL and early XEs: MARIA needs 3 cycles at 7.15909 MHz per ROM read, so it cycles at 2.3864 MHz vs 1.79 MHz in the A8. FREDDIE would need to be clocked a bit higher, and the CPU could potentially also run at 2.3864 MHz, though it would need to drop to 1.79 MHz when accessing POKEY, or they'd have to select POKEY chips that worked reliably at 2.3864 MHz.
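A quick sanity check of those clock figures (a minimal Python sketch; the constants are just the rates quoted above):

```python
# Clock arithmetic from the post: MARIA reads ROM in 3 cycles of its
# 7.15909 MHz master clock, vs the stock A8 memory/CPU rate of ~1.79 MHz.
MARIA_CLOCK = 7.15909e6              # Hz (2x NTSC colorburst)

rom_access_rate = MARIA_CLOCK / 3    # one ROM read per 3 ticks
print(f"MARIA ROM/DRAM access rate: {rom_access_rate / 1e6:.4f} MHz")  # ~2.3864

a8_rate = MARIA_CLOCK / 4            # stock Atari 8-bit rate
print(f"Stock A8 rate:              {a8_rate / 1e6:.4f} MHz")          # ~1.7898
```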

Including the full Atari 8-bit chipset plus MARIA would likely have been too expensive to meet the sort of competitive/aggressive market price point desired, and it wasn't until around the time of the XEGS that they seemed to be seriously considering a new, fully integrated A8-on-a-chip with enhancements as a replacement (the Super XE chipset project). And that was a considerably more advanced system, with graphics and sound more in the range of (or better than) the STe, coming just in time to run into the DRAM shortage of 1988/89. Granted, there are several other potential options Atari Corp could've gone for in the 1985-1989 period with existing or slightly modified versions of chips they already had on hand.


On the issue of the XEGS and 5200, and computer/console overlapping hardware or repurposing (one way or the other) that came up a few times earlier in this thread:

Selling a computer that doubled as a game console was generally more successful, whether that was the main marketing point of the parent company or the platform was simply well suited to it and got lots of 3rd-party support: the Atari 400 and ZX Spectrum, for example.

The 5200 failed to cost-reduce the system sufficiently to really merit its incompatibility with the 400/800 (compared to just cost-reducing the 16k 400 model, re-using the case and low-cost membrane keyboard), while also trying to position itself as a premium system, and it ended up priced higher than the 400 itself during, if not before, the 1983 computer price war. (And the 600XL/800XL redesign delayed things further, even without the added delay of the management changeover from Ray Kassar to James Morgan.)
The 5200 also simply changed its memory map and expanded the cartridge address space to 32 kB, but did nothing to actually ensure licensed 3rd-party development/publishing, so aside from a very short period required for reverse engineering or leaked documentation, it failed at the biggest practical reason for introducing the system at all. (The 7800 addressed this with its copyright/trademark encryption signature check at start-up, which displays the Atari logo.) The 400 itself was already a fine successor to the 2600, and short of the specific set of advantages the 7800 itself brought, nothing was needed but cost reduction via further internal board integration and conforming to the less aggressive FCC Class C electronics standard ... well, that and re-branding the 400 as a game machine in name, to make it clear to consumers (something that was at the core of the later XEGS). Producing the 400 case in game-console black plastic with a matching keyboard color scheme wouldn't have hurt either.

They really needed a cost-reduced 400 replacement to go with the 1200XL's release, instead of the 5200. (The 5200's analog joysticks, or any other sort of analog joystick unrelated to the 5200 controllers themselves, would still have been possible via standard VCS or 400/800 compatible joystick ports, and they could've easily omitted the keypad and used the onboard membrane keyboard for games demanding that complexity.) Then they just needed a 32 kB model with a mechanical keyboard to fill the intermediate gap (like a 32 kB 400 with an aftermarket keyboard installed), either re-using the 400's case or the 600/600XL form factor. (They could also have designed the motherboard to take either just 16kx4-bit DRAM chips or "half bad" 64kx1-bit chips, with provision to allow the full 48 or 64 kB later on when normal 64kx1-bit chips got cheap enough.) There are a number of other things Atari could've done with the A8 computers around 1982, and other directions something closer to the 5200 could've taken as well, but most of those aren't very valid unless they implemented authentication/lockout or genuinely made the system cheaper and more price-competitive. (Omitting PIA and using POKEY for I/O was still an option for the latter, using POKEY lines to read just 2 normal joystick ports, among other things.)



But this is all way, way outside the topic of the Panther, or Atari Corp in 1989-91 (except maybe the mention of the Super XE chipset) and worth moving to other threads to continue if anyone's interested.
 


On 7/29/2023 at 11:15 AM, Crazyace said:

I think SIPPs were still only 30-pin - and the TT was 64-bit - Atari could have just had extra sockets (like the 1040ST having 32 RAM chips for 2 banks) to allow expansion.

Having the 64-bit DRAM soldered to the board would make sense (possibly as dual-bank 32-bit feeding a 64-bit SHIFTER bus; the Falcon uses dual 16-bit banks and a 32-bit SHIFTER bus), with a separate bank or separate bus of 16-bit or 32-bit DRAM for expansion, possibly with a feature allowing that other DRAM to be re-mapped as ST-RAM for 1/2/4 MB ST software compatibility. (Though much of that software was written for TOS, so it potentially could be handled at the OS level without needing full hardware compatibility, or at least SHIFTER access to the full >1 MB area.) A simpler but faster DRAM controller using page-mode bursts on 16-bit DRAM, with a 32-bit data bus latch for a 68020 or '030, would've been more cost-effective and flexible, and could thus have used 8-bit (non-parity) 30-pin SIMMs in pairs, the same as the STe uses. Granted, the TT SHIFTER itself should've been able to be fed via 16-bit DRAM using page mode and/or bank interleave, and more so if limited to half the bandwidth of the actual TT modes (i.e. 640x200x4 and 320x200x8 at TV resolutions), had Atari implemented a memory controller suitable for that later on (sort of like the 16-bit TT bus, but earlier). There's a variety of configurations that could've used a 16 MHz 68020 or EC020 on a single, shared 16-bit bus with wait states and still been similar to or faster than the Amiga 1200, at least. (If using bursts rather than interleaved DMA, the lower-bandwidth video modes should've been faster, including the likely popular use case of running existing ST productivity software, just faster.)

Use of DIP sockets would be OK for final assembly overseas, but not for assembly in the US or for end-user (or licensed service center) expansion, as it would run into the mandatory price floor for DRAM chips imported from Japan into the US, whereas no such limitations applied to assembled memory modules on circuit boards.


72-pin 36-bit (32-bit without parity) SIMMs came out with later IBM PS/2 models some time in the late 1980s (after 1987), but were definitely available from Japanese and Korean DRAM manufacturers by 1989 (see the 1989 Toshiba databook below). OKI had them by 1990 as well.
https://archive.org/details/bitsavers_toshibadatSMemory_66723306

30-pin SIMMs were available earlier than that, but I'm not sure when they became cheap/available from most manufacturers; notably, Samsung had 30-pin 8/9-bit SIMMs available by 1988, which would mean both Korean and Japanese sources were available.
 

On 7/29/2023 at 11:33 AM, Crazyace said:

The Super XE doc is interesting - it isn't a cheap console (the 512x512 4-bit rotation buffer needs 128k of RAM just for itself), and just getting rotation working with the 250 ns RAM needs both banks to be fetching separate 4-bit nibbles per cycle (hence the 100% bus usage). By having a line scroll (and column scroll) it's much more like the SNES, and the faster SNES RAM (along with the 256-pixel display) allows 2 banks to fetch character and pixel - making Mode 7 practical.

I'm not sure how they were actually going to implement that mode, but a 512x512 bitmap would seem inefficient compared to using a 320x200 screen in a virtual 512x512 space. Or perhaps a fixed 512x512 pixel map would be used but not all of it allocated; you'd just have to restrict what part of the screen was visible when rotating, and more so if scaling was supported and you zoomed out. If using it for a rotated 3D floor/ground texture filling only 1/4 of the visible screen area (with a normal bitmap or tilemap background above that), memory usage could be more conservative.

That said, the fact they were interested in a rotating bitmap effect is interesting as far as the potential incentive to add that feature to another design with a blitter for drawing scaled/rotated bitmaps, as an alternative to the Panther's sprite-scaling feature or to complement it.

On that note, one way they could've made the Panther (and subsequent Jaguar) object processor logic more useful for a composite 3D or blitter-rendered 2D scene would be to render objects to a framebuffer rather than a line buffer, thus working more like the Lynx's blitter, the 3DO, or the Sega Saturn VDP1, except without the 2-point rotate or 4-point quad distortion feature; that would be the simplest way to expand the Object Processor into a polygon architecture. However, you'd have the same disadvantage of forward texture mapping that the Saturn, 3DO, and Nvidia NV1 architectures had (distortion on both Gouraud shading and translucency effects, plus heavy overdraw on heavily warped quads or zoomed-out objects) vs the more typical reverse texture mapping for rasterization that the Jaguar's rotate mode uses. (Which, oddly enough, the scaling/rotation gate array in the Mega CD also uses, along with a 16x16x4-bit stamp cache.) Though for Wolfenstein 3D style ray-casting engines, a Panther-style object processor rendering to a framebuffer would probably work pretty well (one object per column and per sprite). The Jaguar's large line buffers also wouldn't have been so wasteful if they could be disabled for simple framebuffer scanning, or for synchronous object scanning for composite framebuffer screen modes using much shorter line buffers swapped multiple times per line, with multiple small object lists per line segment.

Though in terms of cancelling the Panther itself but coming up with something related to the Panther and/or Slipstream, quicker/simpler than the Jaguar: modify the Panther Object Processor to render to a framebuffer in 8-bit-wide DRAM (using the same 8-bit-wide pixel write port), using 2 banks of 64kx8-bit DRAM for 320x200 (or 288x224, or 256x256) 8bpp 256-color framebuffers, writing to the back buffer while the active buffer is read by a simple VDC (possibly directly derived from the Slipstream display controller). You'd then have a fairly flexible extension of the Panther architecture which could also have software-rendered (or blitter-rendered) 3D or pseudo-3D composited in with the objects, though you'd need object list interrupts to draw polygonal stuff between consecutive objects, or render polygon animation to object tiles fetched from shared DRAM, or just do multiple rendering passes using different object lists. You could even use 16-bit DRAM with fast-page-mode fetches in 16-byte bursts for object list reads almost as fast as 32-bit SRAM (with a 16 or 20 MHz Panther using 100 or 80 ns FPM DRAM, 3-tick RC cycles, and 1-tick page-mode cycles), assuming the Flare II team could manage a DRAM controller fast enough for those sorts of cycle times. (There's also the option of using DRAM with the shorter precharge rating specific to most 64kx4-bit and especially 64kx16-bit DRAM chips available, doing 2-tick RC cycle times at 16 MHz, or more conservatively closer to 15 MHz, and using similar timing to the ST and STe MMU, but with page-mode burst functionality.)
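For what it's worth, all three framebuffer sizes mentioned above fit in one 64kx8-bit bank per buffer; a quick check (Python, using the figures from the paragraph above):

```python
# Do the proposed 8bpp framebuffers fit one 64Kx8 DRAM bank (65,536 bytes) each?
BANK_BYTES = 64 * 1024
for w, h in [(320, 200), (288, 224), (256, 256)]:
    size = w * h                     # 8 bpp -> 1 byte per pixel
    print(f"{w}x{h}: {size:5d} bytes, fits: {size <= BANK_BYTES}")
```

320x200 leaves about 1.5 kB spare per bank, and 256x256 fills a bank exactly.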

 

Including the Flare DSP would probably be more important for assisting with 3D than the blitter, same for software texture mapping or rotation effects (if they didn't add those in hardware). Plus, having FIFOs feeding the PWM DAC registers (which the 1993 Slipstream doc notes were added) would allow more realistic dual-purpose sound + co-processing on the DSP, and would allow lower sample rates for DAC writes without inducing PWM squeal. (Granted, better still would be a DMA sound circuit to just fill the FIFOs automatically at a set timer frequency, to further free up the DSP while still doing simple software-mixed sound effects and/or music ... though the Ensoniq chips, even the older DOC chip, would be great if Atari could get them cheap enough, and the DOC just needs 8-bit-wide DRAM, so a single 64kx8-bit bank would be fine and a lot easier to get good/impressive sound out of for composers not on the super-talented chiptune-composer-with-custom-sound-engine and/or demoscene end of things.)

Atari might also have reconsidered using an embedded 65816 core and removing the cost of the 68000 entirely, especially seeing as Nintendo went that direction, though that would mean switching to a different chip vendor, since Toshiba lacked a 65816 license. (Sticking with VLSI, as they had with the Lynx chips, would've been an option.) The 65816 is crappy for running compiled code, but if it were a good deal faster than the SNES's, it should've been pretty decent, and much better still for 650x assembly language programmers. (The planned 8 MHz on-chip and 4 MHz external speed of the Super XE should've been plenty.) Switching to cheaper 8-bit ROMs for bulk storage and working mostly from code/data in RAM should also have cut costs further. As far as manipulating Panther lists goes, there doesn't seem to be a big disadvantage for a fast processor on an 8-bit bus with short instruction cycles vs a 16 MHz 68000 on a 16-bit bus. Plus, if you did want to add an external, separate local CPU RAM, it would only need to be 8 bits wide, not 16. And single byte-wise software-rendered pixels (as for software-rendered 3D) would be just fine on an 8-bit bus.
 

 

 


But aside from that, on the Panther architecture itself: I'd forgotten some things I'd made notes on years back and never discussed, which would be more relevant to something Atari could've quickly added in 1991 while trying to fix the (apparently) non-functional or poor-yield Panther chips of early 1991.

 

Run-length objects have to use 16 bits per run, with 8 bits for color and 8 bits for run length (width), and so must use 16-bit memory or use only 16 bits of each 32-bit longword in 32-bit SRAM (very inefficient).

Those limitations of the run-length object mode, and the inefficient use of RAM/ROM for just 32 colors plus the lack of a compact 8-bits-per-run format, could be worked around by including a memory mapper chip that lets 8-bit-wide RAM or ROM be seen as 16-bit RAM or ROM, with the 8 bits unpacked to 5 bits of color and 3 bits of run length per run: 32 colors with 1-to-8-pixel runs (really useful for a wide array of graphics). You could potentially also include an option for 4+4-bit mapping, for 16 colors and 1-to-16-pixel run lengths, mapping to bits 0-3 or 1-4 to use the upper or lower 16 colors of the palette. 16-bit-wide ROM could be mapped as 2 banks of 8-bit ROM for this function when reading directly from ROM, or separate 8 and 16-bit ROM chips could be used on the cart. Plus, with only 8 (or 16) pixels max per run, you don't have to worry nearly as much about overdraw or clipping as you would with the larger 1-to-256-pixel run format. (You could also just restrict max run length when encoding, but that wastes the unused bits, whereas here you only have the 3 or 4 bits to work with in total.)
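Here's a minimal sketch of that hypothetical 5+3-bit packing (Python; the exact field order and the run-splitting encoder are my own illustration, not anything from Atari documentation):

```python
def pack_run(color: int, length: int) -> int:
    """Pack one run: bits 7..3 = color (0-31), bits 2..0 = run length - 1 (1-8 px)."""
    assert 0 <= color < 32 and 1 <= length <= 8
    return (color << 3) | (length - 1)

def unpack_run(byte: int) -> tuple[int, int]:
    """Return (color, run_length) from one packed byte."""
    return byte >> 3, (byte & 0x07) + 1

def rle_encode(pixels: list[int]) -> bytes:
    """Encode a row of 5-bit pixels, splitting any run longer than 8."""
    out = bytearray()
    i = 0
    while i < len(pixels):
        run = 1
        while i + run < len(pixels) and pixels[i + run] == pixels[i] and run < 8:
            run += 1
        out.append(pack_run(pixels[i], run))
        i += run
    return bytes(out)

# Round-trip check: 20 pixels of color 7 (splits into runs of 8+8+4), then 5 of color 3.
row = [7] * 20 + [3] * 5
decoded = []
for b in rle_encode(row):
    c, n = unpack_run(b)
    decoded += [c] * n
assert decoded == row
```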

The Panther memory map includes areas reserved for both 32-bit-wide and 16-bit-wide SRAM, so presumably the Object Processor would access both regions with the 2-clock-tick, ~124 ns cycle time (at 16.108 MHz). As such, adding just a single 32kx8-bit 120 ns SRAM chip to the existing 32 kB of 32-bit SRAM could be mapped as 16-bit SRAM, cycled as such, and used for the above 5+3-bit run-length format, or for normal 1/2/4-bit packed-pixel objects if using the 4+4-bit mapping, just limited to 16 colors rather than the full 32. This would be the bare minimum, cheapest option to upgrade the Panther while keeping it very low cost. (An 80 or 85 ns PSRAM chip, with the Panther down-clocked closer to 15 MHz to cycle at closer to the rated 135 ns of 85 ns PSRAM, plus basic refresh logic, could be used as an alternative to SRAM; refresh might only be needed during vblank or inactive screen lines, as PSRAM rows could be mapped to auto-refresh from Panther list and data reads.)
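The cycle-time figures check out (a trivial Python check; the 135 ns PSRAM cycle rating is the one quoted above):

```python
# 2 master-clock ticks per SRAM/PSRAM access at the quoted clocks:
for clk in (16.108e6, 15.0e6):
    print(f"{clk / 1e6:.3f} MHz -> {2 / clk * 1e9:.1f} ns per access")
# 16.108 MHz -> ~124.2 ns (fine for 120 ns SRAM)
# 15.000 MHz -> ~133.3 ns (close to the ~135 ns rated cycle of 85 ns PSRAM)
```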

Better would be 2x 32kx8-bit SRAMs for 16 bits, usable both for the special case of the above 5+3-bit run-length data and as normal 16-bit-wide SRAM for all other purposes (16-bit object data for 1/2/4-bit objects, or even for 32-color 8-bit objects, mainly if you wanted a software-rendered framebuffer window). Unlike 16-bit ROM, 16-bit SRAM would scan 8-bit pixel data at the max Panther pixel output rate (32-bit SRAM isn't needed for that, just for lists). You'd then have an additional 64 kB of 0ws RAM for the 68000 to work in as well.

Now, hypothetically, they could even have switched to just 8 kB of 32-bit SRAM via four 2kx8-bit chips, but I only see that as useful if they were planning ahead to embed 8 kB of SRAM inside a later Panther ASIC revision, since several major suppliers didn't even offer SRAMs smaller than 8 kB by 1990, or only offered them in special high-speed grades (55 ns or faster) which the Panther probably wasn't going to make use of. It would also mean re-using existing supplies of 7800 SRAMs (if they had any) and the suppliers providing 2kx8-bit SRAMs for the 7800. I've never seen a 7800 revision that switched to a single 8kx8-bit SRAM chip, so either Atari had an oversupply of 2 kB chips or they were still legitimately cheaper to use (even if only marginally so).
In the worst case, Atari might have used a mix of 2k and 8k SRAMs (with only 8kB of 32-bit SRAM used in either case) until they could switch to embedded 8kB SRAM or possibly a custom, single 2kx32-bit SRAM chip.
8kB of 32-bit SRAM just for Panther lists would still allow a good deal: practically speaking, a maximum of 512 objects if every single byte of that 8 kB were used for lists, and probably many fewer than that actually used in typical games, and even in most atypical cases. You just wouldn't have the "2000 sprites" figure the Panther touted, but 512 sprites is still way more than anyone else offered at the time (including the Neo Geo), without stretching reality as far as you'd need to in order to actually get 2000 sprites on screen. Even if you double-buffer object lists and/or reserve a good chunk of space for block-mode command lists, there should be more than enough to work within realistic limitations for actual games.
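The 512-object ceiling implies 16 bytes per object list entry; that entry size is an inference from the post's own arithmetic rather than a documented figure:

```python
# 8 kB of 32-bit SRAM dedicated entirely to Panther object lists:
sram_bytes = 8 * 1024
bytes_per_entry = 16                       # inferred from the 512-object figure
print(sram_bytes // bytes_per_entry)       # 512 objects
print(sram_bytes // bytes_per_entry // 2)  # 256 if lists are double-buffered
```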

As such, 8 kB of 32-bit SRAM + 64 kB of 16-bit SRAM would seem considerably more useful overall, and the 16-bit area could potentially be replaced with PSRAM or even DRAM later on as the chips were upgraded. (Granted, you'd most likely have to use 128 kB of DRAM with half of it wasted, unless re-implemented in a new console, like the Jaguar with backwards compatibility.)

Plus, with RLE object data loaded into RAM, it can still be further compressed on cart, since RLE alone isn't that efficient a compression format.
 

 


Also, the Panther SHIFTER chip seems to be the only part Atari could make with their in-house Styra semiconductor company, so the 2-chip solution might've been lower cost for that reason. The transistor count of that SHIFTER was likely very close to that of a VGA RAMDAC as it was, though the pin count was higher than a basic 28-pin DIP-style RAMDAC, and bulk volumes may have made the latter cheaper. The 32-color limit isn't so bad if you can actually take advantage of it and make efficient use of ROM and RAM, as you would with a packed 8-bit RLE format with 5-bit color + 3-bit run length. Plus, with 32 global colors usable for most background and sprite objects, it shouldn't have been too hard to get better color usage than many Mega Drive games even without palette swaps, with a more limited number of cases where the 18-bit RGB shows advantages over the SNES's 15-bit RGB. (You wouldn't have translucent blending effects like the SNES could do, though; flicker or dithering would be needed instead.)

Plus, Atari potentially could've pitched the Panther object processor for arcade use by Atari Games, coupled with a more expensive 256-color SHIFTER mated to the Panther chip, which already outputs 8-bit pixel data (with just 5 bits being valid in the home console version).

 

That said, with a similar transistor budget, it should've been reasonably possible to use 6-bit line buffers rather than 5-bit ones, plus a 64-color CLUT, especially if you consider that dual-port SRAM can take up close to double the chip space of single-port SRAM. So the existing 32x18-bit SRAM might have worked as 64x18 bits, but without the ability to update the CLUT during active display (though it should still be possible strictly within hblank and vblank).
If you assume all that, you'd have enough line buffer space for 266x6 bits instead of 320x5.
Or 2x 288x6-bit lines with 64x14-bit CLUT.

If you assume the CLUT SRAM takes exactly the same space as the line buffer SRAM cells, then you'd have enough for 2x 256x6-bit line buffers and 64x11 bits of CLUT. (That could be 4-4-3 RGB, or probably better as 9-bit RGB colors with 4 shades each, or potentially 64 6-bit RGB colors x 32 shades output into 18-bit RGB space. Those sorts of color spaces are difficult if you're using entirely external resistors for the DACs, though fairly simple if you at least do the intensity step internally and output the same intensity to the external RGB outputs, in which case the 4-shade implementation would be much easier, albeit a simple 1-bit intensity that halves the output is simplest.)
Or, again, just use 11 bit RGB and still have better than the Mega Drive's 9-bit RGB.
You could also do 2x 240x6-bit lines + a 64x14-bit CLUT (and not lose too much visible screen area compared to standard 5.369 MHz 256-pixel displays).
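All three alternatives balance against the existing arrangement if you take the assumptions above at face value (dual-port CLUT cells costing roughly twice the area of single-port ones); a quick bit-budget check in Python:

```python
# Existing arrangement: 2x 320x5-bit line buffers + 32x18-bit dual-port CLUT.
buffers     = 2 * 320 * 5              # 3200 bits of line buffer
clut_dual   = 32 * 18 * 2              # dual-port cells counted at ~2x area
clut_single = 32 * 18                  # same CLUT at single-port cell size

print(2 * 288 * 6 + 64 * 14, "vs", buffers + clut_dual)    # 4352 vs 4352
print(2 * 256 * 6 + 64 * 11, "vs", buffers + clut_single)  # 3776 vs 3776
print(2 * 240 * 6 + 64 * 14, "vs", buffers + clut_single)  # 3776 vs 3776
```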

I'm also not convinced the 6.44 MHz pixel clock for NTSC mode (cited/suggested in the Panther documentation) would've been that great an output rate in terms of composite video quality.
Ideally, you'd want an integer number of chroma clocks and pixel clocks per line to minimize artifacting (and to avoid the pixel offset/jitter issues you'd get with a non-integer number of pixel clocks per line). Even better is when the pixel clock is a relatively simple fraction or multiple of the chroma clock.
7.15909 MHz would be an obvious easy choice (454 clocks per line = 227 chroma clocks = 15.7689 kHz h-sync; 456 clocks per line = 228 chroma clocks = 15.69976 kHz). This would leave a horizontal border, but at least all 320 pixels would be shown, vs 6.4432 MHz showing only about 307 pixels on a typical TV. For 256 pixels, 5.3693175 MHz (342 clocks for 15.69976 kHz) is just about perfect, and 240 pixels shows the same border space as 320 pixels at 7.15909 MHz. The 5.3693 MHz clock is also simply 1/6 of the Panther's 32.215905 MHz system clock.
6.2642 MHz gives ~299 visible pixels and makes for perfectly square pixels on typical NTSC TVs, with 399 clocks per line = 15.69976 kHz (again, 228 chroma clocks). The latter would likely use a 25.056815 MHz crystal (7x NTSC colorburst) feeding the SHIFTER, with the pixel clock and chroma clock derived from it, and if they continued to have chip yield problems, it wouldn't be that bad a clock speed to drop to from the planned 32.215905 MHz. (It would also now allow 150 ns SRAM, 100 ns PSRAM, or 80 ns DRAM in place of 120 ns SRAM.) That, or just 6.2642 x 5 = 31.321 MHz as the Panther clock source.

288 pixels would be well matched to 3/16 of the Panther clock, or 6.040482 MHz, but you'd have to make do with the same fractional color clock count the NTSC STe uses: 384 pixel clocks per line gives a 15.730422 kHz h-sync rate with 227.55555 chroma clocks (at 3.579545 MHz) per line. A 32.1531 MHz source clock would allow this, but generating the chroma clock now becomes more complicated, at 228/2048. And at that point you might as well use a cheaper 6.02871 MHz crystal and 1/512 to synthesize h-sync, then x228 for the chroma clock. (x5 = 30.14355 MHz would be OK for the Panther clock, too.)
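The line-rate arithmetic in the last few paragraphs can be verified directly (Python; all figures are the ones quoted above):

```python
# h-sync = pixel clock / pixel clocks per line;
# chroma clocks per line = colorburst / h-sync.
COLORBURST = 3.579545e6
for pclk, n in [(7.15909e6, 454), (7.15909e6, 456), (5.3693175e6, 342),
                (6.2642e6, 399), (6.040482e6, 384)]:
    hsync = pclk / n
    print(f"{pclk / 1e6:9.6f} MHz / {n} clocks: {hsync / 1e3:8.5f} kHz, "
          f"{COLORBURST / hsync:8.4f} chroma clocks/line")
```

The first four land on 227 or 228 chroma clocks per line (the 6.2642 MHz figure is rounded, so it comes out a hair off exact); the 6.040482 MHz case gives the fractional 227.5556 noted above.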
 


On 8/18/2023 at 5:03 AM, kool kitty89 said:

Granted, the real thing that got me thinking about a MARIA-based computer system wasn't a 7800 add-on, but Leonard Tramiel's interview mentioning the missed potential of the TED chip at Commodore as a $50, 40-column, full-color text capable home computer (presumably what the C16 was supposed to have been in 1984). While MARIA wouldn't be a good basis for something that cheap and that early, it got me thinking in terms of Jack Tramiel being interested in continuing to push hard into the low end of the computer market with aggressively priced, new, competitive products. And on top of that, the concept of compatibility achieved at the OS (or BASIC interpreter) level, as well as full compatibility with existing peripherals (for the Atari 8-bit this would mean all SIO devices), but not full internal hardware architecture compatibility.

 

The C16 has a great colour attribute system that could have been used to enhance ANTIC/GTIA on the 8-bits. By fetching colour bytes on a 2nd character line you would have the enhanced attributes, and by taking 3 lines (charset plus 2 colour bytes) you could have independent fg/background colours per character cell, or even 3 independent colours per 8x4 multicolour cell.

 

On 8/18/2023 at 5:03 AM, kool kitty89 said:

MARIA also supported character modes and only needed SRAM for the lists, so even just 2 kB of SRAM plus a character set (or sets) in ROM would do, re-using the bare minimum of Atari Inc's existing inventory (and portfolio) of custom chips and off-the-shelf 650x-compatible I/O components to complete the system, presumably in parallel with the industrial design work going into the new production of the XE line. (For example: FREDDIE for interfacing 16 kB of DRAM; POKEY for keyboard, POT, and joystick I/O, re-using keyboard I/O lines and possibly using the 2 row-strobe lines to differentiate between 2 joystick ports by running them to what normally maps to ground.) It also should've been simple to let MARIA use DRAM at ROM speeds with the 250 or 200 ns DRAMs already used in the XL and early XEs: MARIA needs 3 cycles at 7.15909 MHz per ROM read, so it cycles at 2.3864 MHz vs 1.79 MHz in the A8. FREDDIE would need to be clocked a bit higher, and the CPU could potentially also run at 2.3864 MHz, though it would need to drop to 1.79 MHz when accessing POKEY, or they'd have to select POKEY chips that worked reliably at 2.3864 MHz.

Although it's easy to think of MARIA as a computer chip, it has a few real limitations; in particular, the character mode limits all of the characters to the same palette, making it difficult to have a 40-column 8-colour text mode. A tweak to ANTIC/GTIA would have been better. Or even faster memory - on the BBC Micro the memory runs at 4 MHz, so even in high-res (80 bytes/line) graphics modes the 6502 runs at full speed. With the 8-bit allowing 6502 stalls, you could support 160 bytes (for 80-column characters, or high-res and better player/missiles).

 


On 8/18/2023 at 8:27 AM, kool kitty89 said:

Having the 64-bit DRAM soldered to the board would make sense (possibly as dual-bank 32-bit feeding a 64-bit SHIFTER bus; the Falcon uses dual 16-bit banks and a 32-bit SHIFTER bus), with a separate bank or separate bus of 16-bit or 32-bit DRAM for expansion, possibly with a feature allowing that other DRAM to be re-mapped as ST-RAM for 1/2/4 MB ST software compatibility. (Though much of that software was written for TOS, so it potentially could be handled at the OS level without needing full hardware compatibility, or at least SHIFTER access to the full >1 MB area.) A simpler but faster DRAM controller using page-mode bursts on 16-bit DRAM, with a 32-bit data bus latch for a 68020 or '030, would've been more cost-effective and flexible, and could thus have used 8-bit (non-parity) 30-pin SIMMs in pairs, the same as the STe uses. Granted, the TT SHIFTER itself should've been able to be fed via 16-bit DRAM using page mode and/or bank interleave, and more so if limited to half the bandwidth of the actual TT modes (i.e. 640x200x4 and 320x200x8 at TV resolutions), had Atari implemented a memory controller suitable for that later on (sort of like the 16-bit TT bus, but earlier). There's a variety of configurations that could've used a 16 MHz 68020 or EC020 on a single, shared 16-bit bus with wait states and still been similar to or faster than the Amiga 1200, at least. (If using bursts rather than interleaved DMA, the lower-bandwidth video modes should've been faster, including the likely popular use case of running existing ST productivity software, just faster.)

Yes, a fast page mode would have been better (the Archimedes VIDC used burst mode as well to fetch video data). With that (as the Falcon proved) you wouldn't need a 64-bit memory bus - in a perfect world there wouldn't be a TT in 1990, but instead a 1040TT in '89 with VGA-compatible output (640x480 16-colour and 320x240 256-colour on monitor or interlaced TV), no blitter, just a 16 MHz 68020 - all priced at STe prices (or the $999 magic number).

 

On 8/18/2023 at 8:27 AM, kool kitty89 said:

I'm not sure how they were actually going to implement that mode, but a 512x512 bitmap would seem inefficient compared to using a 320x200 screen in a virtual 512x512 space. Or perhaps a fixed 512x512 pixel map would be used but not all of it allocated; you'd just have to restrict what part of the screen was visible when rotating, and more so if scaling was supported and you zoomed out. If using it for a rotated 3D floor/ground texture filling only 1/4 of the visible screen area (with a normal bitmap or tilemap background above that), memory usage could be more conservative.

That said, the fact they were interested in a rotating bitmap effect is interesting as far as the potential incentive to add that feature to another design with a blitter for drawing scaled/rotated bitmaps, as an alternative to the Panther's sprite-scaling feature or to complement it.

Zooming out could show the whole 512x512 bitmap on screen at once. It did seem strange, and I wonder if there was some overlap from Ricoh with what they eventually offered to Nintendo for SNES Mode 7.

On 8/18/2023 at 8:27 AM, kool kitty89 said:

On that note, one way they could've made the Panther (and subsequent Jaguar) object processor logic more useful for a composite 3D or blitter-rendered 2D scene would be to render objects to a framebuffer rather than a line buffer, thus working more like the Lynx's blitter, the 3DO, or the Sega Saturn VDP1, except without the 2-point rotate or 4-point quad distortion feature; that would be the simplest way to expand the Object Processor into a polygon architecture. However, you'd have the same disadvantage of forward texture mapping that the Saturn, 3DO, and Nvidia NV1 architectures had (distortion on both Gouraud shading and translucency effects, plus heavy overdraw on heavily warped quads or zoomed-out objects) vs the more typical reverse texture mapping for rasterization that the Jaguar's rotate mode uses. (Which, oddly enough, the scaling/rotation gate array in the Mega CD also uses, along with a 16x16x4-bit stamp cache.) Though for Wolfenstein 3D style ray-casting engines, a Panther-style object processor rendering to a framebuffer would probably work pretty well (one object per column and per sprite). The Jaguar's large line buffers also wouldn't have been so wasteful if they could be disabled for simple framebuffer scanning, or for synchronous object scanning for composite framebuffer screen modes using much shorter line buffers swapped multiple times per line, with multiple small object lists per line segment.

For the Panther, I think rendering to a framebuffer would complicate things a lot - especially with the sprite scaling to the line buffer. (The Saturn was a mess in that way, and it was a lot newer.) You basically end up with the Slipstream/Jaguar blitter, and your pixel cost is 3x at worst (read pixel, write to framebuffer, read framebuffer for scanout). At least with the Jaguar, the blitter was getting fast enough to compete.

Rotated sprites would have been nice - but you would lose the linear memory read. (It was mitigated on the PS1 by having a texture cache.)

 

Link to comment
Share on other sites
