
TMS-9900 CP/M?


OLD CS1


I was thinking the other night about potential operating environments for the TI. My mind has always wandered around the idea of GEOS (a disassembly of which is available from here) on the TI, but most times I fall across CP/M, as I know it was/is available for various CPUs in the form of CP/M-86 (8086), CP/M-80 (Z80, 8080), and CP/M-68K (68000).

 

From this thread I understand there was a CP/M card for the TI which incorporated a Z80, as was the way back then to partner a Z80 with the existing computer's CPU.

 

Then my mind wandered around the Atari ST and how Atari was going to use CP/M originally but instead went to GEMDOS, the source for which was apparently released some time ago.

 

Now when considering disk operating systems for the 4A I think of:

 

1) GEOS (disassembly available)

2) CP/M (source available)

3) GEMDOS (source maybe available)

 

What about CP/M-9900? Has anyone ever ventured into the realm of compiling CP/M for the 9900? The source code is available at the Unofficial CP/M Website. A bit beyond my experience but I would love to give it a shot sometime.

 

The question, of course, will be, what is the value of such projects? Really, what would GEOS or GEMDOS on the TI provide? How would CP/M be useful on the TI? None of these would have readily-available software, which was at least the core idea with CP/M-80 and a Z80 CPU.

 

Well, why do we climb the mountain?


CP/M, FLEX, MS-DOS, and DOS/65 are all similar. You have a set of function calls to output to the screen, input, perform disk I/O, etc...
You won't find any graphics APIs built in but you find the basic functions needed to run a lot of software.
One of the best uses is for a development environment.
Command line tools like compilers and assemblers don't need GUIs. For that matter, neither do word processors, spreadsheets, database programs or accounting packages.

GEM originally ran on top of MS-DOS and I'm sure the same type of thing could be added to these other operating systems.


For an older machine like the TI, such an OS would make a lot of sense. It would let you add a lot of power that the normal TI system lacks.
The problem is that, to have enough memory to be really useful, you'd probably need to hack the hardware.
While some of these other operating systems worked with as little as 16K or so, very little ran with such a configuration.

If someone knows how to mod the hardware to accommodate such an OS I think it would be a worthy effort.


Of course, you could also say that this has already been done. MDOS for the Geneve is a command-line operating system for the TI that also bears a great similarity to CP/M and MS-DOS. . .and it has a 640K memory space in the base configuration, expandable to 2MB using a MEMEX card. More than 2,000 Geneve machines were sold (about 2,500, based on the terms of the MDOS buyout I participated in), but even when the machines were new, not much software was written to take advantage of it. That is unfortunate. . .but it happens a lot with really neat and useful TI hardware.

 

That doesn't make this a bad idea, though--some TI programmer might have a lot of fun doing it.


I think your last sentence sort of sums it up - you might be able to get CP/M to 'boot' as it were on the 9900, but there are no programs that will actually run on it as they're all written in Z80 or 68K machine code - or have I got it wrong somewhere?

 

I don't know about CP/M specifically, but a cross-platform OS would likely include a way of creating relatively cross-platform applications, typically in C. So although one would need to recompile the applications to run on the 9900, if the source is available that should be a fairly painless endeavor. The problem with MDOS was most likely that it required developers to target its APIs specifically; if it had been a CP/M-compliant OS, CP/M application developers might have gone to the effort of compiling a binary for the platform?


On 9/14/2015 at 8:52 AM, Stuart said:

I think your last sentence sort of sums it up - you might be able to get CP/M to 'boot' as it were on the 9900, but there are no programs that will actually run on it...

 

On 9/14/2015 at 9:12 AM, TheMole said:

...if it was a CP/M compliant OS, CP/M application developers might have gone through the effort of compiling a binary for the platform?

 

This was my thought. Whether it be GEOS, GEMDOS, or CP/M, so long as the result on the TI is compliant, the source for the original program could be adapted with little fuss, similar to how ixemul.library and ixnet.library on the Amiga provide a POSIX environment (albeit running within AmigaDOS), allowing easier cross-compiling of POSIX applications.


Developing cross-platform applications is not without its hurdles. Well, depending on your definition, anyway.
If you mean across CP/M-9900 machines it's not that difficult. Just target the OS APIs and don't assume anything more exists.
If you mean across CP/M-9900, CP/M, FLEX, CP/M 86, MS-DOS, DOS/65, etc... environments then you have some issues to deal with.

Most CP/M, FLEX, etc... machines support 80-column displays. CP/M for the Adam required patched applications for its 40-column screen, and FLEX for the CoCo required support for the 51-column graphics text screen.
The 6502, Z80, etc... are little-endian. The 680x and 9900 CPUs are big-endian. Some ARM chips are configurable for either.

etc...

Some sort of cross platform build environment would need to be created that tells the application what the screen size is, whether the CPU is big or little endian, etc... at build time.
And some sort of environment isolation layer might be in order to isolate differences in I/O.

*edit*
And don't forget compiler differences.

The TI and CoCo have the benefit of GCC compiler support, and it's pretty full-featured.
The Z80 has several compilers, but SDCC seems to be the best option. It is close to, but not fully, ANSI compatible.
The 6502 has cc65, which is an expanded Small-C compiler, but it's nowhere near ANSI compatible.

 



FWIW, this has been attempted before, perhaps a bit to the extreme end of the spectrum.
UCSD Pascal attempted to create a common development/target environment.
The biggest problem with UCSD Pascal is that it used a virtual machine.
That in itself isn't a horrible idea, but the virtual CPU is stack-based and most processors did not include stack-relative instructions; on top of that, the compiler wasn't a modern optimizing design.
I think Apple Pascal runs about 30% faster than Applesoft II BASIC which is about what you get from Applesoft BASIC compilers.

Apple added support for native code segments to add speed and to allow native hardware support, but that isn't portable, and a better design from the start would have made it faster without having to write native code.

Edited by JamesD

The weird thing with all of the older OS types is the minimal interfacing they provided. CP/M (and MS-DOS, for that matter) was primarily a command-line environment; it didn't really care what the underlying hardware was from an execution standpoint. If the batch file was in the appropriate format, it would execute it. If the software it executed as part of that batch wasn't suitable for that system's microprocessor, it would fail. A lot of MS-DOS batch files worked perfectly well on the Geneve--but software to run on the machine still had to be hard-coded for that processor and for the constraints of the Geneve memory map. Some things were easy, as you could tie into the MDOS (or CP/M) routines to format a disk, for example, but CP/M was primarily just an interface that allowed you to easily invoke other software--it didn't do much else on its own. The real power came with the library of CP/M software designed to run on a specific processor family. Yes, there were flavors for chips other than the Z80, but the lion's share of all CP/M software needed the Z80 to run. Much of that was never ported to the other chip families--and the porting would not have been any more trivial than porting assembly programs between platforms (not a trivial task at all).


Just cherry-picking while I am at the office: I can see where the 80-column mode would be a problem. While some CP/M software will run in 40 columns, the majority indeed needs 80 columns. If we want CP/M on the TI, we could do 40 columns on a low-end expanded machine, and 80 columns on an F18A-enhanced machine. I am not considering an 80-column card, as I do not own one.


I guess CP/M grew a little too organically across systems to be considered a cross-platform OS, more something that got started on a family of processors and was ported to others once it became popular (as a matter of fact, I thought it was an 8080/z80-only OS until OLD CS1 started this thread). Contiki might be a better example of a real cross-platform OS with real cross-platform apps? Not sure if any of the older OS's would qualify?


UCSD p-System was definitely cross-platform. The p-System virtual p-machine processor (also done in silicon, as Western Digital's Pascal MicroEngine) would run pretty much any program written for any machine running it without modification (if the writer was careful not to include any functionality specific to one hardware platform). Each platform had platform-specific extensions (or limitations on program size/screen size--like the TI), but the core compiled software would work on all of them without modification.


I guess CP/M grew a little too organically across systems to be considered a cross-platform OS... Contiki might be a better example of a real cross-platform OS with real cross-platform apps?

CP/M is just a core set of function calls.

One of the key problems with it is that there were no standards for disk formats or much of anything at the hardware level.

This kept people from swapping disks and certainly software as well. You still had to have the software delivered on a disk compatible with your system.

I think 80 columns was a bit of an unofficial standard imposed by the software developers themselves.

CP/M itself didn't even require a terminal; I think it could initially be run using a teletype printer/keyboard for user I/O.

Many applications depend on specific terminal emulation in order to function properly.

I think the same can be said for many of the other OSs I mentioned.

As long as you can create a cross platform library to isolate system specific functionality then that really isn't an issue.

There have been APIs for cross platform implementations of features in the past. Curses for example.

Contiki takes a more modern approach to an OS. It adds task switching, memory management, optional networking, etc...

But much of what has been added to Contiki in recent years goes beyond what is really practical on the original 8 bit ports.

It has outgrown its original intent, and I'm not sure it's as suited to really small systems as it used to be.

The definition of small is a bit different on modern systems.

Plus, exactly how many cross platform targets exist?

I don't know of any applications other than the core functionality provided with the OS.

 

Which is better? That depends on what you want to do.

Do you need multitasking or task switching of any kind? CP/M isn't thread-safe: you can't lock system resources for exclusive access. You can perform time slicing within an application itself if you tie into some sort of timed interrupt, but you have to put your own semaphores around any I/O that may have conflicts.

How much of your RAM do you want the OS to eat up? CP/M is very lightweight and leaves more RAM for your application than an OS like Contiki.

 

I think a CP/M like OS and some sort of OS isolation layer for a high level language would provide the broadest range of cross platform compatibility short of a virtual environment like UCSD.

The OS isolation layer could function on a single tasking OS or a multi-tasking OS.


Why not just create a PEB card with a Z80 CPU and 64K of RAM, a la the Apple II, and use that instead? There were somewhat similar cards made for the TI in the past, but from what I understand they never fully exploited the TI's resources. Besides, they are rarer than hen's teeth anyway... Of course, it's easy for me to say that, as I have no clue as to the technical implications underlying such a project.

The bigger question, of course, is why even consider CP/M nowadays? I doubt many of us still use a physical TI for any real day-to-day productivity, or even development for that matter, so what would we be gaining from all this other than bragging rights?


 

What about CP/M-9900? Has anyone ever ventured into the realm of compiling CP/M for the 9900?...

 

 

I think that is not so easy. Originally, CP/M was written in a higher-level language, PL/M. This is '76-'77, CP/M 1.3. Only a small part was written in 8080 assembler. PL/M is a simple compiler, written in FORTRAN-66. It should not be too difficult to modify the PL/M compiler to output 9900 code (running the Fortran on a PC). This could then be used to compile CP/M 1.3 to 9900 code, thus creating CP/M-9900.

 

However, this is an early CP/M. For performance reasons, later versions of CP/M were coded in 8080 assembler. It is these later versions (let's say 2.2, from 1979 and beyond) that conquered the microcomputing world back then. Porting these would be a major effort, beyond hobby time budgets I think.

 

Of the many thousands of CP/M software packages, many were coded in assembler, often using 8080 instructions only to maximize compatibility. I think that only after 1980 did that change, and of course by 1982 all CP/M software companies were shifting to PC-DOS and the 8086 en masse, often using semi-automatic conversion of their 8080 CP/M code. Porting those CP/M apps (even if the source code were available) would again be a major effort.

 

A possible exception could be all sorts of business applications that existed for CP/M and were written in CBASIC. CBASIC compiled to an intermediate byte code that was then interpreted. Software makers liked this because it meant they could protect their source (unlike MS BASIC at the time). Once CBASIC was ported to CP/M-9900, all those programs would run. As an aside: this CBASIC market disappeared once dBase got traction.

 

However, not all is gloom. There already exists something that is very CP/M-9900-like (besides Geneve MDOS): an OS called "MDEX", written by John Walker (who later was the driving force behind Autodesk and AutoCAD). MDEX is similar to CP/M and comes with a suite of application software:

http://www.powertrancortex.com/documentation.html

MDEX was originally written for a 9900 based S100 system:

http://www.s100computers.com/Hardware%20Folder/Marinchip/9900%20CPU%20board/9900%20CPU.htm

Sufficient source has survived to port MDEX to other 9900 systems:

http://ruizendp.websites.xs4all.nl/screenshot.png


 

I meant it as such. A quick search for "z80 emulator in c" returns several hits.

There's a CP/M emulator written in C for that matter.

The TI version of GCC generates pretty good code but I think it's going to be pretty short of RAM for that, not to mention the resulting speed.

The disassembler could be dumped, and the BIOS code could just call the CP/M-9900 equivalent, so there's some room for reduction in code size.

I just think the Z80 emulator code could be much smaller and faster in assembly.


  • 1 month later...

Does anyone have copies of the Foundation CP/M card floppies as disk images? Will the card run the same programs as the Kaypro II CP/M machine? I have the Foundation card and a Kaypro II board and some disks, but am afraid of trying the disks because of their age and my 360K floppy drives not being in the best of shape after 17 years of storage.


RickyDean--you may have the only copies of those Foundation disks out there--I have the card and have been trying to get a good set of disks for it for over 10 years now. . .the manual for the Foundation card is also missing in action, although I think Ciro may have a copy of that. I only know of five or six surviving copies of the card, so it is not common (but it is much more common than the Morning Star CP/M card--I only know of one of those out there). On the disks--they are actually a CP/M workalike called RP/M. The disk format is supposed to be identical to the one for the Kaypro. They also sold an 80-column monochrome card that worked with the CP/M card. I have one of those as well--but I've never seen another one. . .getting an image of those disks of yours done with a KryoFlux or with a SuperCard Pro would be a VERY good idea.


Ok, Ksarul, I do not think I have the Foundation disks, only a set of Kaypro disks, but I could be wrong, because I don't fully know the extent of what I have left as of yet. I will try to spend some time looking through my disks to see if the Foundation ones are there, but I'm not very hopeful, to be sure. Like my documents, a lot of my disks were damaged in the flooding in Indiana years ago, and in subsequent trailer leakage over the last 9 years. I have some good-looking disks, but have had some trouble getting my CorComp disk controller to work with a TI. It seems to work fine with a Geneve, but my Geneves are down right now. I had transferred most of my disks to DSDD format on 5.25" disks years ago, and with the CorComp and HFDC I had converted many of my 3.5" disks to 720K and 1.44M. When my Geneve was booting a couple of months ago, I was able to read some of them, but not all, and even with the PC TI disk copiers out there, I am getting many with sector 0 errors when trying to copy to a DSK image, and gobbledygook or blanks in the image when I open it, not TI file format.


  • 2 months later...

RickyDean--you may have the only copies of those Foundation disks out there... getting an image of those disks of yours done with a KryoFlux or with a SuperCard Pro would be a VERY good idea.

Are these floppies in double density format or single density? Can they be read on a PC with a 1.2 MB floppy drive, or do they need to be read with a 360K drive? I have found some of my CP/M stuff, but can't read single density on my PC machines, and I have not been able (time-wise) to build a single-density machine to read TI-type disks, little else any other type. So if a regular 1.2 MB floppy drive should be able to read them with Omniflop or Anadisk or something, I can chance reading them. Don't want to destroy them. Maybe I will find RP/M for the Foundation card, possibly.

