
Article: TI's Biggest Blunder, the TMS9900


dhe


4 hours ago, dhe said:

The second part of this article can now be found here:

 

https://spectrum.ieee.org/the-texas-instruments-994-worlds-first-16bit-computer

 

The link in the first article is broken.

 

The key problem was the smaller address space (Intel's was 20-bit). The other problem was the lack of a CP/M-type OS to port over. Thirdly, and not mentioned in the article, the actual IBM PC design team decided to go with the early design layouts published in Popular Mechanics magazine, which cut down development time, and those all used the Intel 8088 and 8080 of the time, so Intel always had the edge over the 9900 or 68K. IBM at the time were hardware people: they needed software that could be easily ported over, plus an OS, and using the 9900 or 68K would have increased development time for both the hardware design and the software. Intel also had more peripheral support chips available at the time, and second-source manufacturers as well.

Edited by Gary from OPA

The IBM PC design was taken from the System/23, with changes such as switching the CPU from the 8085 to the 8088. Unless one of the other suppliers was offering chips at an extreme loss, the IBM PC was going to be using Intel.

 

The big mistake was TI using the expensive 9900, then adding software that made it simulate a cheap, albeit slow, CPU.


It wasn't a blunder. It was nothing more than a logical progression. The TMS9900 was simply a silicon implementation of something that already existed in discrete form. It made perfect sense to make the 9900 and reduce the chip count and chip density of the mini-computers that utilised the discrete version of the CPU.

 

Yeah, the 9900 has a weird architecture with off-chip registers etc. But at the time it all made sense. I don't see any problem with the 9900 as a piece of hardware. It was of its time and was a very innovative design. I'd say the issues came more from the marketing angle than the technical one.

 

If they had produced a 32-bit version right after the 9900, with a small amount of on-chip memory for register workspaces (a couple of kB?) and the rest off-board, they would have had a winner - at least from a technical standpoint. But they would already have been competing with the 68000 by then.

 

It is what it is, I suppose. But I would never describe it as a blunder. Just a logical progression. The later versions of the chip for the more powerful minis have some great features. 


51 minutes ago, Willsy said:

It was of its time and was a very innovative design. I'd say the issues came more from the marketing angle than the technical one.

I sometimes think they missed an opportunity by not having something like a C compiler [the language was designed years earlier] with a workspace that followed the stack: a fairly nice low-level language that wouldn't need push and pop as you got deeper into calls. You'd just get a fresh set of registers as it shifted along. But as we've seen with the GCC port, there are useful instructions missing from the architecture that make it a bit cumbersome.


11 minutes ago, JasonACT said:

I sometimes think they missed an opportunity by not having something like a C compiler [the language was designed years earlier] with a workspace that followed the stack: a fairly nice low-level language that wouldn't need push and pop as you got deeper into calls. You'd just get a fresh set of registers as it shifted along. But as we've seen with the GCC port, there are useful instructions missing from the architecture that make it a bit cumbersome.

 

Yes. Good point. A page of memory could have been reserved as workspace for the C subroutines. As you call a C function, it gets its own workspace to work with, and BLWP/RTWP takes care of the nesting and unwinding for you. If 1K was reserved, we'd have space for 1024 bytes / 32 bytes = 32 levels of nesting. If anything nested deeper than that, the underlying C runtime could manually push a used workspace page to RAM (a software stack), reuse that page, and then restore it from the software stack.
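
Something like this, perhaps - a minimal sketch in 9900 assembly, where the pool address and all the labels (WSPOOL, FUNCV, FUNC) are invented for illustration, not from any real compiler runtime:

* One 32-byte workspace per call depth, carved from a 1K pool
WSPOOL EQU  >A000           assumed address of the 1K workspace pool

* BLWP vector for a C function: new WP first, then new PC
FUNCV  DATA WSPOOL+32       callee's private 16-register workspace
       DATA FUNC            callee's entry point

MAIN   LWPI WSPOOL          top level runs in the first workspace
       BLWP @FUNCV          call: loads the new WP and PC, and saves
*                           the caller's WP, PC and ST into the
*                           callee's R13, R14 and R15 automatically
LOOP   JMP  LOOP            idle after the call returns

FUNC                        body gets a fresh R0-R12, nothing to push
       RTWP                 return: restores caller's WP, PC and ST

Nesting deeper just means the next function's vector points another 32 bytes into the pool.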

 

Would have been neat.


5 hours ago, JasonACT said:

I sometimes think they missed an opportunity by not having something like a C compiler [the language was designed years earlier] with a workspace that followed the stack: a fairly nice low-level language that wouldn't need push and pop as you got deeper into calls

TI did that with Microprocessor Pascal. There were application notes like "Stacks on the 9900"; @Acadiel scanned one of those notes, but the material is also in several books. See the TI Software Development Guide 2nd Edition, the RX Real-time Executive manual, and others.
 

Pascal (MPP) used both a stack for arguments and workspace registers R8 and up for system variables. A $CALL "macro" was actually BLWP R8, where R8 held the next workspace and R9 held the routine $$CALL. That bit of glue set up the new workspace, including the RTWP vector. And much more.
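
If I'm reading that right, the register-addressed BLWP works out to something like this - my reconstruction from the description above, not TI's actual MPP source. The labels (NEXTWS, CALLGLU, PROC) are made up, and the inline DATA word is just one plausible way to pass the target:

* Caller primes R8/R9, then uses them as the BLWP vector
       LI   R8,NEXTWS       R8 -> the next free 32-byte workspace
       LI   R9,CALLGLU      R9 -> the shared glue (the $$CALL routine)
       BLWP R8              register addressing: new WP from R8, new
*                           PC from R9; the old WP, PC and ST land in
*                           the new workspace's R13, R14 and R15
       DATA PROC            inline word: address of the real procedure

CALLGLU                     glue runs in the fresh workspace
       MOV  *R14+,R1        grab the inline word, stepping R14 past it
       BL   *R1             call the real procedure
       RTWP                 unwind: R13-R15 are exactly the vector
*                           RTWP needs, so control returns to the
*                           caller just after the DATA word

PROC   INC  R0              (stands in for the real procedure body)
       B    *R11            back to the glue

NEXTWS BSS  32              the next free workspace

So each call both switches workspaces and leaves the return linkage sitting in R13-R15, with no explicit stack push.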
 

The SPARC CPU had a very nice workspace-register stack, the register window. R8-R15 (the "out" registers) were used to pass arguments, which the callee would find in its R24-R31 (its "in" registers) after the window slid. R16-R23 were private to each subroutine.
 

 


8 hours ago, Willsy said:

It made perfect sense to make the 9900 and reduce the chip count and chip density of the mini-computers that utilised the discrete version of the CPU

I agree 100%. 
 

The 9900 was used in the 990/4 and 990/1 (I think). It also went into TI disk controllers, though I saw a 9981 on the 990/1 disk controller.
 

John Purvis oversaw the logical progression of the 990/10 to the 990/10A around 1979-1981. The discrete CPU was consolidated onto the Alpha, or 99000, chip.

 

But it still had a 16-bit address space. As the catalog part 99105, it soon had to compete with the 10 MHz, 32-bit 68000.

 

Where Wally Rhines sees a blunder is TI's insistence on "9900 First Family" for software compatibility. In a Jan 1983 memo, Frank Spitznogle pounds the table, saying DSG should have moved on five years earlier.

 

(I just read this in the TI Records Archive)

 

In Jan 1983, Wally was pushing TI to go all-in on the 68000 with NuMachine, creating an engineering workstation. (As far as I know, that didn't happen, and TI stuck with 68000-based Apollo workstations.)
 

Around then, TI produced the last industrial controllers to use the 9995, the 525. The 68010 rapidly took over in TI's System 505 and later PLCs; the 545 is one example.
 

Apologies if I got some facts wrong; I'll review and correct.
 

 

Edited by FarmerPotato
Frank Spitznogle, not Wally Rhines
