
A800 vs IBM/PC 5150: Because now is the time...


Faicuai


I will start by saying that I have a soft spot for the legendary IBM PC 5150... a true "maverick" product, born out of true mavericks inside what was otherwise a rigid and gelid corporate culture... Conceived and designed right here, in Boca Raton, Florida, from where the Personal Computer revolution was catapulted to multi-billion-dollar industry stardom!

 

Yes, the PC 5150 carries a more powerful Intel 8088 CPU running at 4.77 MHz, with 16-bit registers and 1 MB of addressable memory, has a more capable, "open" architecture, and also costs a whole lot more than the Atari series... yet its CPU handles only 8 bits at a time on its data bus!

 

I always perceived the 5150 as being significantly faster, especially after seeing those dreary benchmarks of the time (almost all in Basic), in which our A8s got deeply buried most of the time. But with the flurry of Basic interpreters and compilers recently optimized and/or developed for the A8 series, maybe now is the time to revisit history.

 

For the IBM side of the tests, we will use:

  • The PCjs IBM PC 5150 machine (MDA video) at https://www.pcjs.org, equipped with 256K of RAM.
  • IBM PC-Basic (the 16K disk + 32K ROM package).

As for the Atari, tests will be run on:

  • A real 800 (with Incognito), running in both OS-B and XL modes.
  • The Basic of choice will be Altirra Basic 1.55 (an 8K interpreter-in-ROM package that replaces Atari rev. C 1:1).
  • Altirra OS (higher-precision) FPP in XL/XE mode.
  • Newell FPP for OS/B.
  • SDX as the facilities environment. Any other DOS is also possible.

 

Test #1: Ahl's benchmark:

 

Atari (34.2 secs):


 

IBM (23.56 sec):


 

NOTES:

  • A800 results are about 100× (one hundred times) more precise than IBM's, thanks to Altirra's FPP package.
  • With the Newell FPP on OS/B, Atari timing is 24.36 secs, with precision similar to IBM's.
  • With the high-performance FPP package for the XL/XE OS, A800 execution time drops to 19.95 secs, with precision similar to IBM's.
  • All tests with Antic OFF.

 

 

Test #2: integer-looped / cumulative & carry-over FPP Basic operations:

 

 

Atari: (72.53 sec, Error=0)


 

IBM: (74.6 sec, Error=0)

NOTES:

  • Microsoft Basic II for the A800 completes this test in 74.5 secs.
  • With the Newell FPP on OS/B, Atari timing is 78.4 secs, with lower precision.
  • With the high-performance FPP package for the XL/XE OS, A800 execution time drops to 50.90 secs, with lower precision.
  • All tests with Antic OFF.

 

 

Test #3: Prime Number Generator (size = 1000)

 

Atari: (7.72 secs)


 

IBM: (13.00 secs)


 

NOTES:

  • The executed Basic code (IBM and Atari) for this test is essentially IDENTICAL.
  • The A800 was switched to 80-column mode in SDX, and Antic was left ON during the execution of the test.
  • The IBM code was optimized to handle integer variables directly (A800 Basic does not support such direct declarations).
  • When deliberately suppressing screen output (but with ANTIC=ON), the A800 runs in 4.x secs and the IBM in 9.x secs.
  • It is clear that the A800 is handling HIGHER screen-output overhead than the IBM, even with Antic=ON.
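
For reference, the tested listing survives only in the screenshots above; a trial-division generator in this general style (all line numbers and variable names here are hypothetical, not the tested code; the IBM variant would add "%" suffixes to the variables) would look something like:

100 REM HYPOTHETICAL TRIAL-DIVISION SKETCH (NOT THE TESTED LISTING)
110 N=1000: C=1: PRINT 2;
120 FOR I=3 TO N STEP 2
130 P=1
140 FOR J=3 TO SQR(I) STEP 2
150 IF J<I AND I/J=INT(I/J) THEN P=0
160 NEXT J
170 IF P=1 THEN PRINT I;: C=C+1
180 NEXT I
190 PRINT : PRINT C;" PRIMES"

The J<I guard on line 150 keeps the sketch correct under Atari Basic's execute-at-least-once FOR loops as well as Microsoft-style check-first loops.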

 

Well, there you have some food for thought... With just a more efficient / intelligent Basic interpreter and a more optimized FPP package, the A800 shows fairly competitive performance with respect to an IBM 5150 on some past and modern Basic tests (the testing venue of choice back in the day).

 

Forget about the Apple II and (much less) the C64. Those will never match these results.

Edited by Faicuai

Ahl's actual benchmark doesn't use A*A, it uses A^2
Not sure if that will make much difference.

10 REM Ahl_s simple benchmark
20 FOR N = 1 TO 100: A = N
30 FOR I = 1 TO 10
40 A = SQR(A): R = R + RND(0)
50 NEXT I
60 FOR I = 1 TO 10
70 A = A^2: R = R + RND(0)
80 NEXT I
90 S = S + A: NEXT N
100 PRINT "Accuracy ";ABS (1010-S/5)
110 PRINT "Random ";ABS (1000-R)

 

Ahl's actual benchmark doesn't use A*A, it uses A^2

Not sure if that will make much difference. (...)

 

 

Already tested, NO difference on PC-Basic (16K + 32K package)... it already knows NOT to use transcendental functions for integer exponents.

Atari Basic (a MUCH smaller and more limited 8K package) does not.
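
A quick way to see this for yourself on the Atari side (a rough sketch using the OS's RTCLOK jiffy counter at locations 18-20; the loop count and variable names are arbitrary):

10 REM TIME A^2 VS A*A IN JIFFIES (RTCLOK AT 18,19,20, MSB FIRST)
20 A=1.2345
30 POKE 18,0: POKE 19,0: POKE 20,0
40 FOR I=1 TO 1000: B=A^2: NEXT I
50 T1=PEEK(18)*65536+PEEK(19)*256+PEEK(20)
60 POKE 18,0: POKE 19,0: POKE 20,0
70 FOR I=1 TO 1000: B=A*A: NEXT I
80 T2=PEEK(18)*65536+PEEK(19)*256+PEEK(20)
90 PRINT "A^2:";T1;" A*A:";T2

Timing the same two loops on PC-Basic (e.g., against TIME$) should show the two variants running neck and neck, which is exactly the point.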

Edited by Faicuai

The IBM appears to be printing more characters in the prime number program, so there will be some difference due to that.

I'm guessing the MDA is responsible for some of the speed difference.

 

Also tested, and no, there will not be a difference. Check the test notes.

 

The IBM version has one "." removed from one of its output lines. The Atari version does not. This is because PC-Basic prints numeric variables with trailing and/or preceding blank spaces. Not to mention that the IBM version (actually ALL tests here) has been optimized to declare / use integer variables as such, with a trailing "%" everywhere possible.

 

In light of the above, I arbitrarily REM'd out all print statements in both versions, just to verify "core" execution time. The Atari dropped from 7.5s to 4.33s, and the IBM dropped to 9.1s. That is, NO characters printed out (just core logic). It is clear that, on this test, the Atari is being MUCH more efficient (in spite of IBM's % variables), and that it is also being taxed A LOT more for on-screen output at 80 cols (it spends 35%-40% of its time drawing the output, whereas the IBM version spends about 20%-25%). ALL these numbers are with ANTIC ON, all the time (even when suppressing character output). Change that, and you will see even FASTER execution on the Atari...

Edited by Faicuai

Just a couple points... I don't see any results for the actual (unadulterated) Ahl's benchmark on the Atari.
IBM software has moved on as well. For that matter, even back in the day they had BASIC compilers, making the speed of one interpreter not too big of a deal even in the 80s.
And then there is the whole math coprocessor and faster CPU thing.
At the very least an NEC V20 would yield somewhere around a 30% speedup at the same MHz.
I'm not sure there are a lot of bragging rights to be had here, though the Atari has clearly come a long way.


Just a couple points... I don't see any results for the actual (unadulterated) Ahl's benchmark on the Atari. (...)

 

A*A (instead of A^2) was used IDENTICALLY on both machines, as you can see in the attached screenshots. The playing field is pretty level for both machines, and actually favored / tilted towards the IBM wherever possible.

 

SW changes or optimizations (ONLY within the SAME RAM or ROM memory-space footprint, with full backward compatibility with past Basic-test source code) are OK. However, NO HW changes of any kind, for anyone, other than RAM space (the 5150 test VM was equipped with 256K for this test). NO COMPILERS, NO HW accelerators, NO CO-PROCESSORS, just bare-metal CPU vs. CPU plus operating-environment overhead.

 

From the perspective of the Atari, the IBM PC-Basic Ahl's benchmark is RIGGED. That is, it detects integer-based exponents and then DOES NOT invoke transcendental functions to compute results (whereas the Atari ALWAYS does!). From the IBM perspective, the Atari interpretation is inefficient and ineffective at computing an integer power just as we learned it in high school (we NEVER needed transcendentals for it!).

 

Therefore, it is clear now (for all of us) that the Atari (with Atari Basic) NEVER REALLY had a chance to stand up against the 5150 on this very popular benchmark, and we were NEVER comparing apples-to-apples back then (including on MANY other machines of that time!). All we were comparing is whether there was enough space in the 8K-crammed Atari Basic implementation to detect such exponents, vs. a 16K (floppy) + 32K (ROM) Basic package on the IBM that left you with 10K of free RAM on a 64K-equipped 5150, and was intelligent enough to see those cases (well, with such a large expense of RAM/ROM, I would not be surprised).

Edited by Faicuai

 

A*A (instead of A^2) was used IDENTICALLY on both machines, as you can see in the attached screenshots. (...)

From the perspective of the Atari, the IBM PC-Basic Ahl's benchmark is RIGGED. (...)

The PC interpreter isn't rigged. That's a smart optimization. How are new optimizations okay when they are done on the Atari but unfair on 35 year old IBM BASIC?

You are running an optimized BASIC on the Atari that has benefited from around 40 years of additional knowledge over the original.

I'm pretty sure the IBM code could be optimized as well, but then there's no point on the IBM because compilers generate faster code.

While it is certainly interesting that the Atari has come so far, you are still comparing an optimized Atari BASIC vs a 35+ year old unoptimized IBM PC BASIC, and you are excluding every hardware advantage the IBM has available to it.

 

And I still say it's not Ahl's benchmark if you change it. It gives different results. It's a logical optimization for sure, but that's not the point of the benchmark.

Since you have decided logical optimizations are allowed in Ahl's Benchmark, I did some logical optimizations (algebra) and changed test #2 with the following lines.

It generates the same results, but by multiplying by the reciprocals instead of dividing, so it can take advantage of the hardware multiply optimization in my new MC-10 ROM, which completes it in under 50 seconds.

140 D=1/A : E=1/B

220 C=C*A:C=C*B:C=C*D:C=C*E

Sure, it doesn't really measure the speed of the divide function much anymore, so it doesn't accurately show what to expect in code that can't take advantage of such an optimization, but hey, it's okay since it works here, right?
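
To make the algebra concrete (a minimal sketch; the values are arbitrary): the divide is hoisted out of the loop as a one-time reciprocal, after which only multiplies remain.

10 REM RECIPROCAL TRICK: C/A BECOMES C*D, WITH D=1/A COMPUTED ONCE
20 A=3: B=7: C=100
30 D=1/A: E=1/B
40 PRINT C/A/B, C*D*E: REM SAME VALUE (UP TO ROUNDING), NO DIVIDES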

The point is that by changing Ahl's benchmark as above, you aren't measuring the speed of the ^ function anymore, and you may not always be able to make that optimization in some real code.

I've actually taken advantage of this optimization inside the new ROM and with a few BASIC programs, but when divide has to be used, it's going to be slower.

 

I bring the MC-10 up for another reason. Microsoft didn't take advantage of the hardware multiply in the MC-10, or the Color Computer.

The 8088 has hardware multiply and divide instructions. If PC-BASIC doesn't take advantage of them either, then optimizing two functions in the PC BASIC floating point library would change the results significantly with no hardware changes at all. Every PC benchmark would be over 25% faster if my 6803 and 6809 optimizations are any indication.

You might want to keep that in mind when you look at these results.


The PC interpreter isn't rigged. That's a smart optimization. How are new optimizations okay when they are done on the Atari but unfair on 35 year old IBM BASIC? (...)

 

And I still say it's not Ahl's benchmark if you change it. It gives different results. (...)
It's a logical optimization for sure, but that's not the point of the benchmark. (...)

(...)

The point is that by changing Ahl's benchmark as above, you aren't measuring the speed of the ^ function anymore, and you may not always be able to make that optimization in some real code. (...)

I've actually taken advantage of this optimization inside the new ROM and with a few BASIC programs, but when divide has to be used, it's going to be slower.

 

(...)

The 8088 has hardware multiply and divide instructions. If PC-BASIC doesn't take advantage of them either, then optimizing two functions in the PC BASIC floating point library would change the results significantly (...).

 

Let's put the above to the test, then:

  1. As for it being a smart optimization: that is already acknowledged in my comments ("...From the IBM perspective, the Atari interpretation is inefficient and ineffective at computing an integer power just as we learned it in high school (we NEVER needed transcendentals for it!)"). It seems pointless to me to build an argument out of a partially chopped statement.
  2. Doing our homework with PC-Basic disproves your point: using A^2 or A*A yields essentially the exact same result, with a negligible difference in execution time. The reason is that MATHEMATICALLY, X^N has DIFFERENT ways of being computed. Which way do you choose?
  3. Since we know that both implementations CAN equally compute X^N in (at least) two different ways (and one of them ALWAYS chooses the shortest one, while the other ALWAYS chooses the longest one), let's direct both machines down ONE COMMON way, and check how they each do in the ***SAME*** way (see the sketch after this list). This is an ABSOLUTELY and COMPLETELY valid way to proceed (and the essence of any benchmark: an apples-to-apples comparison). It can also be done with NO NEED to change HW or SW (or anything else) to enable such a common path (which shows why your counter-argument and suggested optimizations are mostly out of context).
  4. There is NO SUCH thing as "40 years" of improvement loaded into the (tiny) 8K Altirra Basic 1.55. And more so when compared to the CONSTANTLY changing PC-Basic versions (16K to 17K on disk and 32K in ROM), which ALL come bundled with their own FP package!!! Altirra Basic 1.55 is just about getting stuff OUT of the way of interpreting the language, and cutting some interrupts that burn tons of CPU time (that's the essence)... and it does not even touch the FPP package, yet!
  5. Proof or evidence of the above stems (again) from simply doing our homework. Let's take MICROSOFT BASIC II for the Atari (~14K ROM, the last version available, loaded with its OWN FP package, and from the same company PC-Basic stems from). Let's run BOTH computation paths:
    • Atari MSB-II => A*A: 42.56 secs, Error: 0.111523 (Antic OFF supported at OS level)
    • Atari MSB-II => A^2: 65.50 secs, Error: 0.150879 (Antic OFF supported at OS level)
    • PC-Basic => (A*A = A^2): 23.5 secs, Error: 0.01159
    • CONCLUSION:
      • MSB-II is not just ~10× less precise than IBM's version, but also completely UNAWARE of the A^N vs. A*A computational paths.
      • That implies (with 100% certainty) that the PC-Basic version AND/OR its FPP package were optimized for this, first hand!
      • The Atari 8-bit NEVER stood a chance against the 5150's implementation, even coming from the SAME company that made its Basic!!!
      • And all of this was already happening in 1983-1985.
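
For reference, the two computational paths for an integer power boil down to something like this (a minimal sketch; LOG is the natural log in both Atari Basic and PC-Basic):

10 A=7
20 PRINT A*A: REM SHORT PATH: ONE MULTIPLY
30 PRINT EXP(2*LOG(A)): REM LONG PATH: TRANSCENDENTALS, AS ATARI BASIC'S A^2 DOES

The tiny discrepancy between the two printed values is precisely the precision loss these tests keep measuring.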

 

If you would like to keep testing the mathematical assumptions and choices made (or NOT) by each manufacturer and Basic / FPP SW writer, well, you are free to do so at will. As for me (and I suspect many more people out there), I am interested in the truth, in what is REALLY happening behind the curtains, and in actual / solid evidence of it, so we can level the playing field just enough to draw some meaningful or, at least, relevant conclusions.

 

I do welcome, on the other hand, ANY and ALL ATTEMPTS to make the PC version run FASTER! I already did my part with the "%" declaration of integer variables! We have also just started with the Atari versions... they can already BLOW all of the results posted here out of the water, still within the original 8K + 16K ROM space, and passing ALL of Avery's ACID-OS tests!!! :-)

Edited by Faicuai

To further exemplify the point (and seal it), let's now turn the "acid" lever all the way up on Ahl's test, for BOTH platforms (mind you, this will break a ton of Apple, C64, and many other hearts out there):

  1. On line 222, we drop the SQR function by stating A=A^(0.4) (meaning the 5/2 root of A, because the inverse of 5/2 is 0.4).
  2. On line 241, we drop the A*A multiply by stating A=A^(2.5) (meaning the 5/2 power of A, because 5/2 = 2.5). See the sketch after this list.
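
A sketch of the same substitution (line numbers 222/241 refer to the listing in the screenshots; mapping them onto lines 40 and 70 of the 10-line listing quoted earlier in the thread is my assumption):

40 A = A^(0.4): R = R + RND(0)
70 A = A^(2.5): R = R + RND(0)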

 

We now switch the 800-i to XL/XE mode and run OS XE03-FP, which offers a level of precision much LOWER than Altirra's 2K FPP, and, lo and behold, watch the results:

  1. Atari: 74.3666 secs, ERROR = 0.331239
  2. IBM: 51.80 secs, ERROR = 1.285156

When you come up with results like these, you need to evaluate them in the context of BOTH speed AND precision, to get an idea of how much work you are getting done per unit of "quality"... If we define the ratio R = Er (error ratio) / Tr (execution-time ratio), we can then state:

  • For any Tr (denominator) >= 1, a resulting R > 1 means you reached HIGHER precision per unit of time spent.
  • For any Tr (denominator) >= 1, a resulting R < 1 means you reached LOWER precision per unit of time spent.
  • NOTICE the TOTAL / COMPLETE absence of this fundamental notion from ALL results reported for this particular test, 30+ years ago (so stupid!)

 

Here's what we've got:

  1. Er = [1.285156 / 0.331239] = 3.87984506 (that is, Atari's error is ~3.88× LOWER (more precise) than IBM's output).
  2. Tr = [74.3666 / 51.80] = 1.43564864 (that is, Atari's execution time is ~1.44× HIGHER (slower) than IBM's).
  3. Dividing #1 by #2 above yields R = 2.7025 (!)
  4. Since Tr > 1, this clearly indicates that the Atari is outputting HIGHER precision (quality) per equivalent unit of time, compared to IBM's output.
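
The same arithmetic as a runnable snippet (numbers copied from the results above; the variable names are arbitrary):

10 REM FIGURE OF MERIT R = ER/TR, FROM THE RESULTS ABOVE
20 EI=1.285156: EA=0.331239: REM IBM AND ATARI ERRORS
30 TA=74.3666: TI=51.80: REM ATARI AND IBM TIMES (SECS)
40 ER=EI/EA: TR=TA/TI: R=ER/TR
50 PRINT "ER=";ER,"TR=";TR,"R=";R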

 

Once again, here we directed BOTH platforms to take the exact SAME (and most challenging) route, for a vis-à-vis comparison, which is what a REAL benchmark is all about. The question that remains on the table is: what would IBM's execution time be if it aimed at (say) 3.5× MORE precise results?

 

:-)

Edited by Faicuai

This is like comparing a Cessna to a Boeing 747, and limiting testing to taxiway maneuverability.

 

Well, we just loaded them both with 100 tons' worth of angry elephants... and went to the runway at full power...

 

It seems the 747 crew opened the cargo bay, and threw out 3/4 of the elephants before reaching vMax...

 

A real case of animal cruelty !!! 8-)

Edited by Faicuai

...

We now switch the 800-i to XL/XE mode and run OS XE03-FP, which offers a level of precision much LOWER than Altirra's 2K FPP, and, lo and behold, watch the results:

  1. Atari: 74.3666 secs, ERROR = 0.331239
  2. IBM: 51.80 secs, ERROR = 1.285156

...

:-)

 

(taking the bait)

 

Now that is an excellent modification to Ahl's benchmark! Bravo! I approve of that change!

Torture tests make for great benchmarks!

 

Accuracy (error) on the MC-10 is 4.33468819E-3, or 0.00433468819, and it finishes in around 62 seconds.

I know the Microsoft 6502 versions are less accurate than 6803 and 6809 BASIC even though they store numbers the same.

I'm guessing the multiply, divide, and/or normalization functions don't use the extended precision byte even though it's there.

No matter, it's an implementation difference in the FP library rather than a limitation in the CPU.

With the 0.89MHz 6803 beating the Atari in both speed (~16%?) and accuracy (a couple decimal places)...

I think it's safe to say that could be fixed without killing performance.

But hey... what do I know?


 

Now that is an excellent modification to Ahl's benchmark! Bravo! I approve of that change!

Accuracy (error) on the MC-10 is 4.33468819E-3, or 0.00433468819, and it finishes in around 62 seconds. (...)

 

 

The ONLY way to reach that precision on the Atari is to go back to Altirra's (2K) high-precision package (which I have been testing as part of the XEGS-r04 load for Incognito, my daily driver)... but it comes at a GREAT cost... Here are the results:

  1. Atari: 100.3 secs, Error: 0.005254. The figure of merit with respect to IBM's result is [ (1.285156 / 0.005254) / (100.3 / 51.8) ] = 126.07 (!)
  2. MC-10: 62 secs, Error: 0.004334. The figure of merit with respect to IBM's result is [ (1.285156 / 0.004334) / (62 / 51.8) ] = 247.74 (!!!)

The above means that both machines are outputting a HELLUVA lot more precision per unit of time than IBM's output, with the MC-10 the absolute winner here (literally creaming the IBM PC's results, precision-wise), and doing almost double, per unit of time, what could be extracted from the A800 with this setup.

Edited by Faicuai

You might want to double check the MC-10 numbers.

May I have the software to run on it? I am not seeing such results; perhaps you have written something special?

I dusted the thing off, but I can't find the 16K RAM pack, so I am stuck at 4K... Hopefully it's in one of these boxes somewhere... :(

If you remember the catalog number or whatever, I might try an auction-site search in the meantime...

Edited by _The Doctor__

You might want to double check the MC-10 numbers. May I have the software to run on it? (...)

You might want to read what I said.

 

...Since you have decided logical optimizations are allowed in Ahl's Benchmark, I did some logical optimizations (algebra) and changed test #2 with the following lines.

It generates the same results, but by multiplying by the reciprocals instead of dividing, so it can take advantage of the hardware multiply optimization in my new MC-10 ROM, which completes it in under 50 seconds.

140 D=1/A : E=1/B

220 C=C*A:C=C*B:C=C*D:C=C*E

...

ON... MY... NEW... MC-10... ROM.

 

I've been going through the MC-10 ROM optimizing it, and I'm going to add new commands. I also posted a speedup for the CoCo3.

On the left, the CoCo 3 in 1.77MHz mode (6809, no wait states). On the right, the MC-10 running at 0.89MHz (6803).

Notice that I start the MC-10 running the program a little after the CoCo 3. Sorry it's not hi-res, but I haven't added those commands yet. I did add ELSE already.

 

There are a bunch of other videos on my youtube channel, some details on my blog, and I have a laundry list of additional optimizations to make.

 

*edit*

Du-Oh! I was messing around with the clock speed setting on the emulator and forgot to reset it before running that.

It's about 69 seconds. Still faster than the Atari though.

And like I said... I'm still optimizing. The SIN, COS, TAN now use the hardware multiply, but I haven't finished the new SQR that only uses multiply.

I'm looking at options for LOG.

 

The factory ROM used pretty much straight 6800 8-bit code for the math library and about everything else.

Searching for line numbers used 16-bit code, but not much else did. I don't think it even used 16-bit code to clear the screen.

Some of the direct-page accesses didn't even use the faster direct-page instructions.

The math stuff in that video is around... 38%(?) faster, if I remember right.

Non-math stuff... 6-9% faster... but array indexing, converting numbers from ASCII constants to float, etc. are a lot faster.

Comparing long strings and memory moves are 16-bit now.

 

I'm hunting down bugs at the moment so I can release an early version of it.

 

Edited by JamesD

The Apple II code uses hi-res, and the MC-10 doesn't, but these are performing the same number of calculations and trying to set the same number of pixels.
The MC-10 is just setting a lot of them over and over again.
Basically, they are doing the same amount of work.


*edit*
Sorry the video is a little choppy. Edited by JamesD

Well, now is the time to move on to the next level of performance, and check COMPILERS / FAST-parser results (I am not sure that a product that generates true machine code right from Basic statements actually exists for Atari, though).

 

Has ANYONE here tried to get the IBM compiler working (whether IBM's or MS's compiler disks) at https://www.pcjs.org...achine/5150/mda ?

 

There is no way I can make them work, but if anyone here can, that would be wonderful, because then we can make some comparisons between FastBasic / TurboBasic XL and IBM's actual compilers!!!


DOSBox is the only thing that will run that stuff that I know of, but I haven't personally tried to use that compiler with it.

 

The thing with DOSBox is how to ensure 4.77 MHz speed equivalence... I have it installed here... and, on the other hand, it does not recognize the geometry of the .IMG files you can download from PCjs.



 

The thing with DOSBox is how to ensure 4.77 MHz speed equivalence... I have it installed here... and, on the other hand, it does not recognize the geometry of the .IMG files you can download from PCjs.

https://www.dosbox.com/wiki/4.77_MHz
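
For what it's worth, DOSBox takes a fixed cycle count in the [cpu] section of its .conf file; something in this vein (the 315 figure is a commonly quoted ballpark for a 4.77 MHz 8088, not a calibrated value, so treat it as an assumption and verify with the MIPS tool mentioned below):

[cpu]
core=normal
cycles=fixed 315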

 

MESS emulates some PCs, so that might be better since it tracks cycles.

 

*edit*

The article uses a link to MIPS.ZIP on simtel.net, which went away.

It might be in the SIMTEL DOS CD.

https://archive.org/details/SIMTEL_0692

Edited by JamesD
