
Interface Age Prime Benchmark


dhe


Code and Really Good Commentary:

 

https://docs.google.com/document/d/1u4iZYkWHVluzqbpyV-CPJxZLFkchP95xDryEYVpESD8/mobilebasic?mibextid=Zxz2cZ

 

How it started:

 

[screenshot]

 

When it ended:

 

[screenshot]

 

Just the code:

100 REM  INTERFACE AGE's benchmark program to
110 REM  'discover' the first 1000 Prime numbers
120 REM
130 PRINT "Starting:"
140 FOR N = 1 TO 1000
150 FOR K = 2 TO 500
160 LET M = N/K
170 LET L = INT(M)
180 IF L = 0 THEN 230
190 IF L = 1 THEN 220
200 IF M > L THEN 220
210 IF M = L THEN 240
220 NEXT K
230 PRINT ;N;" ";
240 NEXT N
250 PRINT CHR$(7)
260 PRINT "Finished."
270 END
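For anyone who would rather trace the logic in a modern language, here is a rough Python translation of the listing's trial division (my own illustration, not from the article; variable names mirror the BASIC listing). Note that, taken literally, the listing also reports 1 as "prime", and primes above 500 are only printed because BASIC falls through to line 230 when the K loop runs out:

```python
# Illustrative Python translation of the Interface Age benchmark's
# trial-division logic. Faithful to the BASIC control flow above.

def interface_age_primes(limit=1000):
    found = []
    for n in range(1, limit + 1):
        is_prime = True
        for k in range(2, 501):
            m = n / k
            l = int(m)          # lines 160-170: quotient, truncated
            if l == 0:          # line 180: k > n, no divisor can follow
                break
            if l == 1:          # line 190: n/k in [1,2), skip this k
                continue
            if m == l:          # line 210: k divides n evenly
                is_prime = False
                break
            # line 200: m > l, so k is not a divisor; try the next k
        if is_prime:
            found.append(n)     # line 230
    return found

print(interface_age_primes(100))
```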


  • 1 month later...

@dhe  Nice article. 

 

I thought "trial division" was the sieve of Eratosthenes. Nope.

 

 

In 5th grade, the Sieve would keep me busy with pencil and paper for quite a while: writing out 100 numbers and crossing out the multiples of each prime. I never got as far as 1000 on paper.

I don't remember if I ever did it in TI BASIC. Of course, my brother and I only had 99'er magazine, not Compute!

Hmm, but which one is more efficient in BASIC? The Sieve seems like it takes much more "bookkeeping", but I need paper and pencil to work that out.
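For what it's worth, the Sieve's "bookkeeping" fits in a few lines; here is a minimal sketch (my own illustration, in Python for brevity). The trade-off is that the sieve does no division at all, but it needs a flag array as long as the whole range:

```python
def sieve(limit):
    """Sieve of Eratosthenes: mark multiples, keep the survivors."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False        # 0 and 1 are not prime
    for p in range(2, int(limit ** 0.5) + 1):
        if is_prime[p]:
            # cross out every multiple of p, starting at p*p
            for multiple in range(p * p, limit + 1, p):
                is_prime[multiple] = False
    return [n for n in range(limit + 1) if is_prime[n]]

print(sieve(100))
```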
 

I appreciate the mathematical formalism that the author uses.  
 

But the proof of Claim 2 seems overly long: "if an integer has factor k, then it has (at least one) prime factor k'". Unless this precedes the definition of composite; if so, then it's not overly long.

"Definition: a composite is a product of prime factors" (1 is not a prime). Maybe that proof is used as the foundation of that definition.

 

 

 

* We all learned how to spell Eratosthenes from Cosmos (Carl Sagan).

 

 


 

 


Little nuggets like this are always an interesting challenge to translate. Thanks @dhe.

 

You know me. I had to try this in Forth.

At first I tried to copy the BASIC code, but it really does not work in a structured language to try to jump around the way BASIC can.

For the curious, here is what it took to reorganize things for Forth.

 

These lines compensate for the missing modulo-divide function in BASIC (modulo returns the remainder of a division):

 

160 LET M = N/K
170 LET L = INT(M)
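The connection can be spelled out: K divides N exactly when the quotient has no fractional part, which is the same as N MOD K being zero. A quick check (illustrative, in Python):

```python
# INT(N/K) equals N/K  <=>  N - K*INT(N/K) == 0  <=>  N % K == 0
for n, k in [(91, 7), (91, 6), (1000, 8), (997, 3)]:
    m = n / k
    l = int(m)
    basic_test = (m == l)        # what BASIC lines 160-210 are probing
    mod_test = (n % k == 0)      # the one-step MOD version
    assert basic_test == mod_test
    print(n, k, mod_test)
```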

 

Since Forth has a MOD function, I used it. :) 

We factor out a test for a prime number. 

: PRIME? ( n -- ?) 
  TRUE                        \ initial output flag on stack
  OVER 3 U< IF NIP EXIT THEN  \ if <3 remove n, & exit 
  OVER 2      \ loop: n to 2       
  ?DO                         
    OVER I MOD 0=             \ modulo divide, test for zero  
    IF DROP FALSE             \ if remainder=0, remove True, put false 
      LEAVE                   \ get out of the loop  
    THEN 
  LOOP 
  NIP                         \ remove 2nd item, n , from data stack
;

 

With that test, we just put PRIME? in a loop:

: PRIMES ( -- )
    CR ." Starting:"
    1001 1 
    DO  
       I PRIME? 
       IF I . SPACE THEN   
    LOOP
    BEEP  \ chr$(7) is the BELL on terminals 
    CR ." Finished."  
;

[screenshot of the output]


On 5/10/2023 at 5:10 PM, TheBF said:

We factor out a test for a prime number. 

: PRIME? ( n -- ?) 
  TRUE                        \ initial output flag on stack
  OVER 3 U< IF NIP EXIT THEN  \ if <3 remove n, & exit 
  OVER 2      \ loop: n to 2       
  ?DO                         
    OVER I MOD 0=             \ modulo divide, test for zero  
    IF DROP FALSE             \ if remainder=0, remove True, put false 
      LEAVE                   \ get out of the loop  
    THEN 
  LOOP 
  NIP                         \ remove 2nd item, n , from data stack
;

 

I am not sure this saves time, but another way to write PRIME? is

 

: PRIME? ( n -- ?)
   DUP 3 U< IF             \ if n<3,
      DROP TRUE            \ exit with TRUE flag
   ELSE
      DUP 2 ?DO             \ loop: 2 to n-1       
         DUP I MOD         \ modulo divide, test for zero  
         IF       \ not 0
            \ do nothing
         ELSE     \ 0
            FALSE AND      \ if remainder=0, convert n to FALSE 
            LEAVE          \ get out of the loop  
         THEN 
      LOOP           \ if loop completes, flag will be n<>0, which is "true"
   THEN
;

 

...lee


Cortex BASIC executes this teeny benchmark as-is in 1013 seconds on a console with 16-bit RAM. Which compares reasonably favorably to many of the figures for other BASICs reported in the original Interface Age article (since which time my interface has aged).

 

Moving lines 140, 150, 160 and 170 to a single line drops execution to 904 seconds (15 minutes, 4 seconds). Not bad. 

 

Which is really important given that Chipmunk Basic running natively on an Apple M2 executes the benchmark as-is 1000 times in 6 seconds - merely 168,272 times faster than the 9900. 

 

(Interestingly, an Intel version of Chipmunk Basic running 1000 iterations in emulation on Rosetta executes in ~8 seconds.) 

 

 


1 hour ago, Reciprocating Bill said:

Cortex BASIC executes this teeny benchmark as-is in 1013 seconds on a console with 16-bit RAM. Which compares reasonably favorably to many of the figures for other BASICs reported in the original Interface Age article (since which time my interface has aged).

 

Moving lines 140, 150, 160 and 170 to a single line drops execution to 904 seconds (15 minutes, 4 seconds). Not bad. 

 

Which is really important given that Chipmunk Basic running natively on an Apple M2 executes the benchmark as-is 1000 times in 6 seconds - merely 168,272 times faster than the 9900. 

 

(Interestingly, an Intel version of Chipmunk Basic running 1000 iterations in emulation on Rosetta executes in ~8 seconds.) 

 

 

When I first read this I thought you were talking about an Apple ][. :)

 

Yeah, the M2 is kind of fast.

 


The Eratosthenes sieve is about an order of magnitude faster (I think) than this "brute force" method using division, from what I can see, so we can't compare your number to the Interface Age code.

 

But if we look over in the Development forum, the TI entry searched (819 x 2) = 1638 numbers in 270 seconds.

(I multiplied the result by 10 to post the 2700, but it's probably not accurate for the larger search space, if TI BASIC could even DIM an 8K array.)

 

Your array is 2047, so you are searching 4094 numbers for primes. This is 1/4 of the original Byte benchmark space (8192 x 2).

UCSD Pascal on the TI-99 came in at 512 seconds un-optimized. So, divided by 4 to scale it to your code, it would be 128 seconds.
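Spelled out, the scaling above is just ratio arithmetic, and it assumes run time grows roughly linearly with the search space, which the sieve may not obey exactly (figures taken from this thread):

```python
byte_space = 8192 * 2          # original Byte benchmark search space
zx_space = 2047 * 2            # array of 2047 flags -> 4094 numbers
ucsd_seconds = 512             # UCSD Pascal on the TI-99, full space

scale = byte_space / zx_space          # roughly 4
print(round(scale, 2))
print(ucsd_seconds / round(scale))     # 512 / 4 = 128 seconds, scaled
```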

 

Does this make sense? It seems too different.  Is the ZX-81 running a slow clock?

Is the ZX-81 PASCAL compiler generating native code or a virtual machine code like UCSD?

 

 

 


21 hours ago, TheBF said:

The Eratosthenes sieve is about an order of magnitude faster (I think) than this "brute force" method using division, from what I can see, so we can't compare your number to the Interface Age code.

But if we look over in the Development forum, the TI entry searched (819 x 2) = 1638 numbers in 270 seconds.

(I multiplied the result by 10 to post the 2700, but it's probably not accurate for the larger search space, if TI BASIC could even DIM an 8K array.)

Your array is 2047, so you are searching 4094 numbers for primes. This is 1/4 of the original Byte benchmark space (8192 x 2).

UCSD Pascal on the TI-99 came in at 512 seconds un-optimized. So, divided by 4 to scale it to your code, it would be 128 seconds.

 

Does this make sense? It seems too different.  Is the ZX-81 running a slow clock?

Is the ZX-81 PASCAL compiler generating native code or a virtual machine code like UCSD?

 

 

 

No, this makes sense. I believe the ZX-81 Partial Pascal compiles to pseudocode (like UCSD Pascal) rather than Z80 code, and requires a loader to run, which probably explains the relatively slow execution. Looking at the benchmark results table in the article, Partial Pascal beats all the interpreted BASICs of the time (Microsoft MBasic for Z80 comes very close), but comes in dead last by a significant margin compared to the compiled languages.

I should really try to convert it to machine code...


2 hours ago, Vorticon said:

I should really try to convert it to machine code...

That would be interesting and probably a challenge.

Compilers for little machines often generate Forth-like code to save space.

For programs where you have the memory space, you can inline primitives and get about a 1.5x to 2x speedup.

(Assuming this compiler is not doing this already.)

 


2 hours ago, TheBF said:

That would be interesting and probably a challenge.

Compilers for little machines often generate Forth-like code to save space.

For programs where you have the memory space, you can inline primitives and get about a 1.5x to 2x speedup.

(Assuming this compiler is not doing this already.)

 

I like this idea. Some time last week, while implementing parsers to evaluate an expression language, I realized that I could do some more work to assign Forth words to each node. It already builds an abstract syntax tree (AST). If each node of the tree had a Forth word assigned, the tree would execute as either interpreted or compiled code! I get a lot of semantics for free from the Forth dictionary.

I could go beyond "3 2 * 4 + 10 =", adding more keywords like WHILE. A C-to-Forth compiler?
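A toy Python sketch of that idea (all names here are hypothetical; a real system would bind actual Forth dictionary entries to the nodes): each AST node carries an executable "word", and evaluating the tree is just running those words against a stack in post-order, which is exactly Forth's postfix order.

```python
def lit(n):           # a word that pushes a literal
    return lambda stack: stack.append(n)

def binop(fn):        # a word that pops two operands, pushes the result
    def word(stack):
        b, a = stack.pop(), stack.pop()
        stack.append(fn(a, b))
    return word

class Node:
    def __init__(self, word, children=()):
        self.word, self.children = word, children
    def execute(self, stack):
        for child in self.children:   # post-order = postfix = Forth order
            child.execute(stack)
        self.word(stack)

# (3 * 2) + 4  ==  "3 2 * 4 +" in Forth
tree = Node(binop(lambda a, b: a + b), (
    Node(binop(lambda a, b: a * b), (Node(lit(3)), Node(lit(2)))),
    Node(lit(4)),
))
stack = []
tree.execute(stack)
print(stack)   # [10]
```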
 

Microsoft implemented their pcode ("compact code") backend for C and BASIC back in the 80s, with your choice of pcode or assembler output.
 

 


8 minutes ago, FarmerPotato said:

I like this idea. Some time last week, while implementing parsers to evaluate an expression language, I realized that I could do some more work to assign Forth words to each node. It already builds an abstract syntax tree (AST). If each node of the tree had a Forth word assigned, the tree would execute as either interpreted or compiled code! I get a lot of semantics for free from the Forth dictionary.

I could go beyond "3 2 * 4 + 10 =", adding more keywords like WHILE. A C-to-Forth compiler?

Microsoft implemented their pcode ("compact code") backend for C and BASIC back in the 80s, with your choice of pcode or assembler output.
 

 

You are on to something there. I know Jeff Fox built binary trees where each node contained Forth code to do the right thing when traversing the tree, plus the data.

The result is that the tree becomes code executed by the F21 CPU, rather than data that is parsed. :)

 


14 hours ago, TheBF said:

That would be interesting and probably a challenge.

Compilers for little machines often generate Forth-like code to save space.

For programs where you have the memory space, you can inline primitives and get about a 1.5x to 2x speedup.

(Assuming this compiler is not doing this already.)

 

There is actually a great Forth for the Zeddy called Toddy Forth, which I have not played with yet. More distractions predicted in my future. Damn it!

Incidentally, here's a little demo of Pascal on the Zeddy.

 


On 5/15/2023 at 9:48 AM, TheBF said:

The Eratosthenes sieve is about an order of magnitude faster (I think) than this "brute force" method using division, from what I can see, so we can't compare your number to the Interface Age code.

At least, and probably much faster the more primes you do. Of course, the sieve is limited by array space. Here is the Interface Age Prime Benchmark compared to the Sieve. Both compiled. Both programs go to 1000.

[screenshot: timing comparison]

 


9 hours ago, Vorticon said:

There is actually a great Forth for the Zeddy called Toddy Forth, which I have not played with yet. More distractions predicted in my future. Damn it!

Incidentally, here's a little demo of Pascal on the Zeddy.

 

It will be interesting to compare the Pascal system to Forth in development effort and program performance.

Back in the 80s, before the age of megabyte-sized compilers, Forth compared very well to other compiled languages.


On 5/17/2023 at 8:32 AM, TheBF said:

It will be interesting to compare the Pascal system to Forth in development effort and program performance.

Back in the 80s, before the age of megabyte-sized compilers, Forth compared very well to other compiled languages.


In "How Not to Write a State Machine" I found another argument for smaller, human-readable, extensible compilers (or at least lexers). Of four compiler books, it seems to favor Tremblay's over the famous "Dragon" book by Aho and Ullman. (No Forth in Tremblay, and so far it's the same math.)
 

I'm on a quest back to 1970s computer science, sparked by @Lee Stewart's and @InsaneMultitasker's mention of the Top Down Operator Precedence paper by Pratt (1973). Pratt laments the neglected approaches to parsers. I find many arguments against "portable" code (one in Jeff Fox; thanks @TheBF!).

 

I finished two lexers (in C) this week. Next up is a Pratt parser for C, to see how much of the language it works for. Then I want to hook it up to Forth for interpreting or code generation. Maybe rewrite the whole thing in Forth?
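For anyone following along, the core of a Pratt parser is small enough to show whole. Here is a minimal sketch (my own toy, in Python) for numbers, +, *, and parentheses; the binding-power table is the "operator precedence" Pratt formalizes:

```python
import re

def tokenize(src):
    return re.findall(r"\d+|[+*()]", src)

BP = {"+": 10, "*": 20}   # binding powers: higher binds tighter

def parse(tokens, min_bp=0):
    tok = tokens.pop(0)
    if tok == "(":
        left = parse(tokens, 0)
        tokens.pop(0)                 # discard the closing ")"
    else:
        left = int(tok)               # "nud": a number denotes itself
    # "led" loop: keep consuming operators that bind tightly enough
    while tokens and tokens[0] in BP and BP[tokens[0]] >= min_bp:
        op = tokens.pop(0)
        right = parse(tokens, BP[op] + 1)
        left = left + right if op == "+" else left * right
    return left

print(parse(tokenize("3*2+4")))       # * binds tighter than +
```

The whole precedence mechanism is the single `BP[op] >= min_bp` comparison; adding an operator means adding one table entry.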
 

 


2 hours ago, FarmerPotato said:


In "How Not to Write a State Machine" I found another argument for smaller, human-readable, extensible compilers (or at least lexers). Of four compiler books, it seems to favor Tremblay's over the famous "Dragon" book by Aho and Ullman. (No Forth in Tremblay, and so far it's the same math.)

I'm on a quest back to 1970s computer science, sparked by @Lee Stewart's and @InsaneMultitasker's mention of the Top Down Operator Precedence paper by Pratt (1973). Pratt laments the neglected approaches to parsers. I find many arguments against "portable" code (one in Jeff Fox; thanks @TheBF!).

I finished two lexers (in C) this week. Next up is a Pratt parser for C, to see how much of the language it works for. Then I want to hook it up to Forth for interpreting or code generation. Maybe rewrite the whole thing in Forth?

Thanks for the reference; I will look for the Pratt paper.

(Of course, the way to eliminate operator precedence is to explicitly tell the machine the order in which you want things done. :)

Typical Forth thinking: don't solve the problem, remove it.)
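That "remove the problem" approach is easy to demonstrate: in postfix notation an evaluator needs no precedence table and no parentheses at all (illustrative sketch, Python standing in for Forth):

```python
def eval_postfix(src):
    """Evaluate a postfix expression: the token order IS the precedence."""
    stack = []
    for tok in src.split():
        if tok.isdigit():
            stack.append(int(tok))     # operand: push it
        else:                          # operator: pop two, push result
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if tok == "+" else a * b)
    return stack.pop()

print(eval_postfix("3 2 * 4 +"))   # 10
```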

 

I accidentally found this rant by Matthias Koch of Mecrisp Forth, on C libraries used for embedded microcontrollers.

https://mecrisp-stellaris-folkdoc.sourceforge.io/rants.html#the-c-library-myth

 

He is actually ranting here. :)

 

 

