
# Sorting: looking for a better algorithm

## Recommended Posts

This is a fascinating topic.

So I looked at my algorithm again, and I was actually mistaken: it's not really a gnome sort either, because the latter goes through the entire list again once the lowest swap in the current loop is made, whereas my algorithm jumps directly back to the location of the first swap of that loop iteration after making the lowest swap for that iteration. Here's an example of a run for a worst-case scenario:

5 4 3 2 1

4 5 3 2 1

4 3 5 2 1

3 4 5 2 1

3 4 2 5 1

3 2 4 5 1

2 3 4 5 1

2 3 4 1 5

2 3 1 4 5

2 1 3 4 5

1 2 3 4 5

That's 10 swaps for that run. Here are the swap counts for each n from 2 to 7. To clarify: each time a swap occurs, 3 additional operations take place in the algorithm; otherwise no additional operations occur. So the swap count can stand in for the operation count, with ops = swaps * 3.

| n | swaps | swaps(n) - swaps(n-1) |
|---|-------|-----------------------|
| 2 | 1     | 0                     |
| 3 | 3     | 2                     |
| 4 | 6     | 3                     |
| 5 | 10    | 4                     |
| 6 | 15    | 5                     |
| 7 | 21    | 6                     |

Clearly it's a linear progression, so in the worst-case scenario the algorithm has a growth of O(n), which is much better than even combsort or gnome sort, which have a worst case of O(n²)! Am I missing something here?

The number of swaps for any n here (n > 2) is: ( SUM[i = 3 to n] (i - 1) ) + 1
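The trace above can be reproduced with a short simulation. This is my own Python sketch of the algorithm as described (back up with adjacent swaps until the pair is in order, then resume at the saved position); the function name and structure are mine, not the original TI BASIC:

```python
def vortsort_swaps(a):
    """Sort a in place with adjacent swaps; return the swap count."""
    swaps = 0
    for i in range(len(a) - 1):            # outer loop over positions
        p = i
        # back up while the adjacent pair is out of order
        while p >= 0 and a[p] > a[p + 1]:
            a[p], a[p + 1] = a[p + 1], a[p]
            swaps += 1
            p -= 1
    return swaps

for n in range(2, 8):
    a = list(range(n, 0, -1))              # worst case: descending order
    print(n, vortsort_swaps(a))
```

For descending input this reproduces the table above: 1, 3, 6, 10, 15, 21. (These are the triangular numbers n(n-1)/2: the differences between consecutive totals grow linearly, while the totals themselves do not.)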

##### Share on other sites

99'er magazine had an article on sorting methods in TI BASIC. My older brother studied it and implemented them, but it was beyond me back then.

Last summer I coded Heapsort in Forth.

Two interesting properties of Heapsort. First, it doesn't need recursion. It iterates through an array, skipping forward by powers of 2. Imagine adding up the series 1, 2, 4, 8. How many steps will get you up to N? The answer is log2(N). (More precisely, the sum is N-1.)

It does this loop on each of the N elements. To move X to its destination, it has to compare-and-swap X with log2(N) other elements.

Maybe that helps reinforce why we say N log N complexity: outer loop N times, inner loop log2(N) times.
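The two passes can be sketched without any recursion. This is a minimal Python illustration of the idea (my own sketch, not the Forth version mentioned above); sift_down just follows child links down the heap, which takes at most about log2(N) steps:

```python
def sift_down(a, start, end):
    """Restore the max-heap property for the subtree rooted at start (end inclusive)."""
    root = start
    while 2 * root + 1 <= end:                    # while root has a child
        child = 2 * root + 1
        if child + 1 <= end and a[child] < a[child + 1]:
            child += 1                            # pick the larger child
        if a[root] >= a[child]:
            return
        a[root], a[child] = a[child], a[root]
        root = child

def heapsort(a):
    n = len(a)
    # pass 1: build a max-heap (N sift-downs)
    for start in range(n // 2 - 1, -1, -1):
        sift_down(a, start, n - 1)
    # pass 2: repeatedly move the max to the end and re-heapify
    for end in range(n - 1, 0, -1):
        a[0], a[end] = a[end], a[0]
        sift_down(a, 0, end - 1)
    return a

print(heapsort([5, 4, 3, 2, 1]))  # → [1, 2, 3, 4, 5]
```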

And you can see why N log N is faster than Bubble Sort (which may compare every element against every other: N*N).

Second interesting property: Heapsort always takes N log N steps, no more, no less. I should mention that it takes two passes. Those are its disadvantages.

In contrast, Quicksort has many opportunities to exit early, avoiding a lot of compare-and-swap work. It too is O(N log N), but imagine that, on average, a lot of elements are left alone after one or two swaps.

Consider that there are special situations where one type of sort is advantageous for your kind of data, or for what you want to get out of it. The fastest sort might not be the right choice.

One situation is that you need a sorted list AND the ability to quickly insert new elements or remove some, while still keeping the list sorted.

And not by shoving a giant block of memory one position to the right!

Heapsort, after the first pass, gives you a "heap", which is ideal for further insertion/removal. (Guaranteed.) Sometimes that is the special property you need.
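A quick illustration of that property using Python's standard heapq module (my example, not tied to the Forth code): once the heap exists, inserting and removing the minimum are both O(log n), with no block of memory being shoved over:

```python
import heapq

data = [9, 4, 7, 1, 8]
heapq.heapify(data)             # O(n) build, like heapsort's first pass
heapq.heappush(data, 3)         # insert: O(log n)
smallest = heapq.heappop(data)  # remove min: O(log n)
print(smallest)                 # → 1
```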

Binary trees, in particular red-black trees, are another great choice for insertion/removal.

##### Share on other sites

1 hour ago, Vorticon said:

This is a fascinating topic.

So I looked at my algorithm again, and I was actually mistaken: it's not really a gnome sort either, because the latter goes through the entire list again once the lowest swap in the current loop is made, whereas my algorithm jumps directly back to the location of the first swap of that loop iteration after making the lowest swap for that iteration. Here's an example of a run for a worst-case scenario:

5 4 3 2 1

4 5 3 2 1

4 3 5 2 1

3 4 5 2 1

3 4 2 5 1

3 2 4 5 1

2 3 4 5 1

2 3 4 1 5

2 3 1 4 5

2 1 3 4 5

1 2 3 4 5

That's 10 swaps for that run. Here are the swap counts for each n from 2 to 7. To clarify: each time a swap occurs, 3 additional operations take place in the algorithm; otherwise no additional operations occur. So the swap count can stand in for the operation count, with ops = swaps * 3.

| n | swaps | swaps(n) - swaps(n-1) |
|---|-------|-----------------------|
| 2 | 1     | 0                     |
| 3 | 3     | 2                     |
| 4 | 6     | 3                     |
| 5 | 10    | 4                     |
| 6 | 15    | 5                     |
| 7 | 21    | 6                     |

Clearly it's a linear progression, so in the worst-case scenario the algorithm has a growth of O(n), which is much better than even combsort or gnome sort, which have a worst case of O(n²)! Am I missing something here?

How many times do you touch each element in the array?

That will determine the 'n' factor.

If your method is O(n) then you only have to scan the array once.

In other words: a 5 element array would need a 1 to 5 FOR loop run only once to complete the sort.

That's why my test example had a "passes" variable to count how many times I had to run through the array.

##### Share on other sites

1 minute ago, TheBF said:

How many times do you touch each element in the array?

That will determine the 'n' factor.

If your method is O(n) then you only have to scan the array once.

In other words: a 5 element array would need a 1 to 5 FOR loop run only once to complete the sort.

That's why my test example had a "passes" variable to count how many times I had to run through the array.

And that's exactly what happens. I have only one loop that runs from 1 to n. Inside each iteration, it backtracks until the previous number is smaller than the current one, then picks back up where it was initially in the array. Here's the code:

```5000 REM ARRAY SORTING ROUTINE
5010 FOR I=1 TO R
5020 IF I=R THEN RETURN
5030 POINT=I
5080 V1=DLIST(POINT,1)::V2=DLIST(POINT+1,1)
5090 IF V1>V2 THEN DLIST(POINT,1)=V2::DLIST(POINT+1,1)=V1 ELSE 5140
5100 POINT=POINT-1::IF POINT>=1 THEN 5080 ELSE 5140
5140 NEXT I```

##### Share on other sites

18 minutes ago, Vorticon said:

And that's exactly what happens. I have only one loop that runs from 1 to n. Inside each iteration, it backtracks until the previous number is smaller than the current one, then picks back up where it was initially in the array. Here's the code:

```5000 REM ARRAY SORTING ROUTINE
5010 FOR I=1 TO R
5020 IF I=R THEN RETURN
5030 POINT=I
5080 V1=DLIST(POINT,1)::V2=DLIST(POINT+1,1)
5090 IF V1>V2 THEN DLIST(POINT,1)=V2::DLIST(POINT+1,1)=V1 ELSE 5140
5100 POINT=POINT-1::IF POINT>=1 THEN 5080 ELSE 5140
5140 NEXT I```

Ok.

So I guess the only thing to do is time it for an array of size 50 and then an array of size 100, and see whether the time increase is double or what the factor actually turns out to be. You may have a new sort algorithm.

##### Share on other sites

OK, so I tested 2 arrays, each initialized with the worst-case scenario, i.e. in descending order. One array is 50 elements and the second is 100 elements.

Sort times:

50 elements: 57 seconds

100 elements: 229 seconds

Definitely not quadratic! I wonder if @TheBF can compare this with his other sorts.

Here's the test code. Timing is manual.

```10 CALL CLEAR::OPTION BASE 1
20 DIM A(50),B(100)
30 REM FILL ARRAYS
35 PRINT "INITIALIZING ARRAYS...."::PRINT
40 FOR I=1 TO 100
60 B(I)=101-I::IF I<51 THEN A(I)=101-I
80 NEXT I
85 PRINT "READY! PRESS A KEY"::PRINT
86 CALL KEY(0,K,S)::IF S=0 THEN 86
90 CALL SORT(A(),50)
95 PRINT::PRINT "PRESS A KEY FOR NEXT ARRAY"::PRINT
100 CALL KEY(0,K,S)::IF S=0 THEN 100
110 CALL SORT(B(),100)
120 CALL KEY(0,K,S)::IF S<>0 THEN STOP

1000 SUB SORT(C(),N)
1005 PRINT "SORTING FOR ";N;" ELEMENTS"
1010 FOR I=1 TO N
1020 IF I=N THEN 1075
1030 P=I
1040 V1=C(P)::V2=C(P+1)
1050 IF V1>V2 THEN C(P)=V2::C(P+1)=V1 ELSE 1070
1060 P=P-1::IF P>=1 THEN 1040
1070 NEXT I
1075 PRINT "DONE SORTING!"
1080 SUBEND```

##### Share on other sites

2 hours ago, TheBF said:

How many times do you touch each element in the array?

This is the relevant count: the number of compares, regardless of whether each one leads to a swap.

Consider the problem as "how many steps does it take to sort the list and prove that it is sorted?" The proof requires comparing a lot more numbers along the way.

Also, O() notation is an upper bound. Some algorithms require exactly that number of steps; better algorithms in the same O() class might avoid some of the compares or swaps, getting an element to its final spot with fewer of them.

Quicksort and Heapsort are both O(N log N), but Quicksort avoids many swaps on average, while Heapsort always does the upper-limit number of swaps!

We also worry about computer implementations: which takes longer, compares or swaps? Does the computer stall when the numbers are not near neighbors? (The computer waits on slow RAM.)
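The compare-count point can be made concrete with a sketch (mine, for illustration): the same O(n²) insertion sort does far fewer compares on already-sorted input than in the worst case, even though its O() class is unchanged.

```python
def insertion_sort_compares(a):
    """Sort a in place by adjacent swaps; return the number of compares."""
    compares = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            compares += 1
            if a[j - 1] <= a[j]:       # pair in order: stop backing up
                break
            a[j - 1], a[j] = a[j], a[j - 1]
            j -= 1
    return compares

print(insertion_sort_compares(list(range(10))))         # sorted input: 9 compares
print(insertion_sort_compares(list(range(10, 0, -1))))  # reversed input: 45 compares
```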

##### Share on other sites

57 minutes ago, Vorticon said:

50 elements: 57 seconds

100 elements: 229 seconds

Definitely not quadratic! I wonder if @TheBF can compare this with his other sorts.

50^2/57 = 43.859

100^2/229 = 43.668

Which fits t ≈ n²/44.
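The arithmetic can be checked directly: doubling n from 50 to 100 multiplied the time by about 4, which is exactly the quadratic signature, and n²/t stays roughly constant.

```python
t50, t100 = 57.0, 229.0   # measured sort times in seconds

print(50**2 / t50)        # ≈ 43.86
print(100**2 / t100)      # ≈ 43.67  (same constant → t grows as n²)
print(t100 / t50)         # ≈ 4.02, close to (100/50)² = 4
```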

##### Share on other sites

52 minutes ago, FarmerPotato said:

50^2/57 = 43.859

100^2/229 = 43.668

Which fits t ≈ n²/44.

1 hour ago, Vorticon said:

OK, so I tested 2 arrays, each initialized with the worst-case scenario, i.e. in descending order. One array is 50 elements and the second is 100 elements.

Sort times:

50 elements: 57 seconds

100 elements: 229 seconds

Definitely not quadratic! I wonder if @TheBF can compare this with his other sorts.

Here's the test code. Timing is manual.

```10 CALL CLEAR::OPTION BASE 1
20 DIM A(50),B(100)
30 REM FILL ARRAYS
35 PRINT "INITIALIZING ARRAYS...."::PRINT
40 FOR I=1 TO 100
60 B(I)=101-I::IF I<51 THEN A(I)=101-I
80 NEXT I
85 PRINT "READY! PRESS A KEY"::PRINT
86 CALL KEY(0,K,S)::IF S=0 THEN 86
90 CALL SORT(A(),50)
95 PRINT::PRINT "PRESS A KEY FOR NEXT ARRAY"::PRINT
100 CALL KEY(0,K,S)::IF S=0 THEN 100
110 CALL SORT(B(),100)
120 CALL KEY(0,K,S)::IF S<>0 THEN STOP

1000 SUB SORT(C(),N)
1005 PRINT "SORTING FOR ";N;" ELEMENTS"
1010 FOR I=1 TO N
1020 IF I=N THEN 1075
1030 P=I
1040 V1=C(P)::V2=C(P+1)
1050 IF V1>V2 THEN C(P)=V2::C(P+1)=V1 ELSE 1070
1060 P=P-1::IF P>=1 THEN 1040
1070 NEXT I
1075 PRINT "DONE SORTING!"
1080 SUBEND```

I had to go out to a rehearsal, but I can try comparing this to combsort which I have in BASIC.

-or- I have a sort workbench in Forth so maybe I can translate your algorithm to Forth and then do comparisons on much bigger data.

##### Share on other sites

1 hour ago, FarmerPotato said:

50^2/57 = 43.859

100^2/229 = 43.668

Which fits t ≈ n²/44.

But processing time is not. In the context of sort performance, if a sort algorithm is O(n²), shouldn't processing time increase quadratically? That is not the case here.

##### Share on other sites

For comparison, here is COMBSORT in BASIC using the VORTSORT chassis (I named it). For a small amount of extra code, COMBSORT is a lot of bang for the buck.

```1 ! VORTSORT chassis with COMBSORT
2 ! 50 elements 9.5 seconds
3 ! 100 elements 23 seconds
10 CALL CLEAR::OPTION BASE 1
20 DIM A(50),B(100)
30 REM FILL ARRAYS
35 PRINT "INITIALIZING ARRAYS...."::PRINT
40 FOR I=1 TO 100
60 B(I)=101-I::IF I<51 THEN A(I)=101-I
80 NEXT I
85 PRINT "READY! PRESS A KEY"::PRINT
86 CALL KEY(0,K,S)::IF S=0 THEN 86
90 CALL COMBSORT(A(),50)
95 PRINT::PRINT "PRESS A KEY FOR NEXT ARRAY"::PRINT
100 CALL KEY(0,K,S)::IF S=0 THEN 100
110 CALL COMBSORT(B(),100)
120 CALL KEY(0,K,S)::IF S<>0 THEN STOP

2000 SUB COMBSORT(C(),N)
2100 PRINT "SORTING FOR ";N;" ELEMENTS"
2420 PASSES=0
2424 GAP=N
2440 SORTED=1
2442 GAP=INT(GAP/1.3)
2450 FOR I=1 TO N-GAP
2460   IF C(I)<C(I+GAP) THEN 2530
2470 rem swap the array items
2480   TEMP=C(I)
2490   C(I)=C(I+GAP)
2500   C(I+GAP)=TEMP
2520   SORTED=0
2530 NEXT I
2540 PASSES=PASSES+1
2550 REM test if sort is finished
2570 IF GAP=1 THEN 2600
2580 GOTO 2440
2600 SUBEND
```
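For reference, the same comb sort transliterated to Python as a sketch (my translation, not TheBF's code). One deliberate difference: this version keeps doing gap-1 passes until a pass makes no swaps, which is the usual termination rule, whereas the BASIC above stops after its first gap-1 pass.

```python
def combsort(a):
    """Comb sort: bubble sort with a shrinking gap (shrink factor 1.3)."""
    gap = len(a)
    swapped = True
    while gap > 1 or swapped:
        gap = max(1, int(gap / 1.3))      # shrink the gap, floor at 1
        swapped = False
        for i in range(len(a) - gap):
            if a[i] > a[i + gap]:
                a[i], a[i + gap] = a[i + gap], a[i]
                swapped = True
    return a

print(combsort(list(range(100, 0, -1)))[:5])  # → [1, 2, 3, 4, 5]
```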

##### Share on other sites

Wow. The difference is impressive... So much for Vortsort. I did track the number of comparisons instead of just swaps for arrays of 50, 100, 150 and 200 elements, and I got the following:

Clearly not linear. And as @FarmerPotato noted, n²/#comparisons is a constant = 2. So yeah, this sort is an O(n²) type under the worst-case scenario.

Oh well, I guess fame will have to wait... 😝

##### Share on other sites

19 hours ago, SteveB said:

...still some Quicksort implementations totally freak out when handed an already sorted list ... The language for the implementation should support recursion; I wouldn't do it in TI BASIC. When implementing it in assembly you need to implement the recursion manually. A whole new science is sorting datasets larger than your RAM, which was a big issue in the early mainframe world... tape-to-tape sorting.

The Quicksort I wrote once, when I learned about the algorithm, doesn't "freak out", but it does run for a longer time when the list is already sorted.
You can "unwrap" the recursion even when writing it in e.g. Pascal. The article where the algorithm was described claimed it would be more efficient. It may of course depend on how recursion is implemented in the language.

Since I managed to sort a thousand integers in about half a second on the TI 99/4A, I was happy. Due to the limited memory capacity of such a small computer there's normally not the need to sort vast amounts of numbers either.

##### Share on other sites

7 hours ago, Vorticon said:

Oh well, I guess fame will have to wait... 😝

There is a reason that there is no "THE ONE[tm]" sorting algorithm. It depends very much on the data, the hardware, the programming language ... you just need to find the niche where VORTSORT outperforms the others.

The O() logic also looks toward infinity ... most data we are dealing with is finite. A simple O(n^3) may outperform an O(n*log(n)) on a small dataset on a real machine. And coding time is also important 🙃
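A toy illustration of that point (my sketch; the cost constants 2 and 30 are made up purely for illustration): with a cheap inner loop, the "worse" quadratic algorithm can win on small inputs and only lose once n gets large.

```python
from math import log2

def cost_quadratic(n):
    return 2 * n * n            # simple sort, tiny constant factor

def cost_nlogn(n):
    return 30 * n * log2(n)     # asymptotically better, heavier bookkeeping

for n in (8, 64, 1024):
    winner = "n^2" if cost_quadratic(n) < cost_nlogn(n) else "n log n"
    print(n, winner)
```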

##### Share on other sites

Since I managed to sort a thousand integers in about half a second on the TI 99/4A, I was happy. Due to the limited memory capacity of such a small computer there's normally not the need to sort vast amounts of numbers either.

Would you happen to have the code for it? I assume this was done using UCSD Pascal which supports recursion.

##### Share on other sites

16 hours ago, FarmerPotato said:

Two books that I learned from are Algorithms, by Sedgewick, and Numerical Recipes in C.

Sedgewick comes in Pascal, C and other flavors.

Numerical Recipes covers just enough sorting, and is a reference on many other mathematical computing topics.

I have used many references for this and other topics of scientific computing (more for data analysis than sorting) over the years—including Numerical Recipes: The Art of Scientific Computing by Press, Teukolsky, Vetterling, and Flannery and various volumes of The Art of Computer Programming by D. E. Knuth, but probably none more than Numerical Recipes.

...lee

##### Share on other sites

On 4/18/2023 at 4:22 AM, SteveB said:

And coding time is also important

Indeed. Vortsort can be compressed into just 4 lines of code in XB, and performance is likely comparable to insertion or gnome sort, or perhaps even a little better.

```1000 SUB VORTSORT(C(),N)
1010 FOR I=1 TO N-1::P=I
1040 V1=C(P)::V2=C(P+1)::IF V1<V2 THEN 1070 ELSE C(P)=V2::C(P+1)=V1::P=P-1::IF P>=1 THEN 1040
1070 NEXT I::SUBEND```

I'm keeping it for future use.

##### Share on other sites

<pedant>

I think the ELSE 1070 at the end of line 1040 can be omitted - it's redundant.

</pedant>
##### Share on other sites

The Quicksort I wrote once, when I learned about the algorithm, doesn't "freak out", but it does run for a longer time when the list is already sorted.
You can "unwrap" the recursion even when writing it in e.g. Pascal. They claimed where the article was described that it would be more efficient. It may of course depend on how recursion is implemented in the language.

Since I managed to sort a thousand integers in about half a second on the TI 99/4A, I was happy. Due to the limited memory capacity of such a small computer there's normally not the need to sort vast amounts of numbers either.

It would be very interesting to see how a non-recursive Quicksort performs in TI BASIC. The amazing speed of the sort would be offset by the extra BASIC instructions, which are very slow.

Below some data size there would be no benefit versus a simpler algorithm, I suspect.

I think I have a textbook with the non-recursive version in Pascal. I might take a run at it.
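For anyone curious what a non-recursive Quicksort looks like, here is a sketch in Python rather than Pascal (my own illustration, not the textbook version): recursion is replaced by an explicit stack of (lo, hi) subfiles, and subfiles shorter than a cutoff are left for a single final insertion-sort pass. The LIMIT value of 16 is an assumed cutoff, not anything prescribed.

```python
LIMIT = 16  # assumed cutoff below which subfiles are left for insertion sort

def quicksort(a):
    """Iterative quicksort with an explicit subfile stack and insertion-sort finish."""
    stack = [(0, len(a) - 1)]
    while stack:
        lo, hi = stack.pop()
        while hi - lo > LIMIT:
            pivot = a[(lo + hi) // 2]
            i, j = lo, hi
            while i <= j:                       # Hoare-style partition
                while a[i] < pivot:
                    i += 1
                while a[j] > pivot:
                    j -= 1
                if i <= j:
                    a[i], a[j] = a[j], a[i]
                    i += 1
                    j -= 1
            # push the larger subfile, keep looping on the smaller one
            if j - lo > hi - i:
                stack.append((lo, j))
                lo = i
            else:
                stack.append((i, hi))
                hi = j
    # array is now nearly sorted; one insertion-sort pass finishes the short runs
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a
```

Pushing only the larger subfile keeps the stack depth at O(log n), which matters when you have to manage the stack by hand, as in assembly.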

##### Share on other sites

18 minutes ago, Willsy said:

<pedant>

I think the ELSE 1070 at the end of line 1040 can be omitted - it's redundant.

</pedant>

Yup. Forgot to remove it when I reformatted the code.

##### Share on other sites

3 hours ago, Vorticon said:

Would you happen to have the code for it? I assume this was done using UCSD Pascal which supports recursion.

Here it is. The assembly version, that is. I did start with a Pascal version, but don't have that handy now.

Quicksort
It's still kind of recursive in principle, but not with a full-scale recursive routine call.

By mistake I wrote somewhere that it uses bubble sort to finish up, but it doesn't. It's insertion sort I use to finalize short lists.

The program is based on a program originally written in Algol.

##### Share on other sites

7 hours ago, Vorticon said:

```1000 SUB VORTSORT(C(),N)
1010 FOR I=1 TO N::IF N=I THEN SUBEXIT ELSE P=I
1040 V1=C(P)::V2=C(P+1)::IF V1<V2 THEN 1070 ELSE C(P)=V2::C(P+1)=V1::P=P-1::IF P>=1 THEN 1040 ELSE 1070
1070 NEXT I::SUBEND```

I'm keeping it for future use.

Agree this is valuable because it's so short.

you can eliminate IF I=N

by saying FOR I=1 TO N-1

##### Share on other sites

Here it is. The assembly version, that is. I did start with a Pascal version, but don't have that handy now.

Quicksort
It's still kind of recursive in principle, but not with a full-scale recursive routine call.

By mistake I wrote somewhere that it uses bubble sort to finish up, but it doesn't. It's insertion sort I use to finalize short lists.

The program is based on a program originally written in Algol.

And here is a version that assembles with the default TI Assembler. You will have to un-comment the BLWP line and the two below it.

Readme from my time working on it:

This code is from a demo in UCSD Pascal by @apperson850 on atariage.com.

The original has been modified to assemble with the stock TI-99 Assembler. It sorts DATA in Forth's memory, taking the address and size of an array from Forth's data stack and jumping into QUICKSORT.
data stack and jumping into QUICKSORT.

1000 items can be sorted in under 2 seconds on a stock console.

```*        TITL "QUICKSORT FOR INTEGERS"

* Procedure sorting integers in ascending order
* Called from Pascal host by QSORT(A,N)
* Declared as PROCEDURE QSORT(VAR A:VECTOR; N:INTEGER); EXTERNAL;
* Vector should be an array [..] of integer

* Author: @APERSON850 on Atariage.com

* Modified for TI Assembler, to be called from Camel99 Forth
* 2022 Brian Fox

*--------------
* Workspace registers for subroutine at START
*
* R0  Array base pointer
* R1  Array end pointer
* R2  L
* R3  R
* R4  I
* R5  J
* R6  KEY
* R7  Temporary
* R8  Temporary
* R9  Subfile stack pointer
* R10 Main stackpointer
* R11 Pascal return address (P-system WS)
* R12 Not used
* R13 Calling program's Workspace Pointer
* R14 Calling program's Program Counter
* R15 Calling program's Status Register
*-----------------

* The actual vector in the Pascal-program could be indexed [1..n]
* This routine assumes only that n indicates the number of elements, not the
* last array index.

DEF  QSORT

LIMIT    EQU    16               *Quick- or Insertionsort limit

*  * Don't need this for Forth **
*        BLWP @QSORT
*        AI   R10,4              *Simulate pop of two words
*         B    *R11              *Back to Pascal host

QSORT    DATA SORTWS,START       *Transfer vector for Quicksort

START    MOV  @12(R13),R10       *Get stackpointer (R6) from Forth's WP->R10
LI   R9,ENDSTK          *SUBFILE STACKPOINTER
MOV  *R10+,R1           * POP PARAMETER N
MOV  *R10+,R0           * POP ARRAY POINTER
DEC  R1
SLA  R1,1
A    R0,R1              *CALCULATE ARRAY ENDPOINT

MOV  R0,R2              *L:=1
MOV  R1,R3              *R:=N
MOV  R1,R7
S    R0,R7
CI   R7,LIMIT
JLE  INSERT             *FIGURE OUT IF QUICKSORT IS NEEDED

MNLOOP   MOV  R2,R7
SRL  R7,1
MOV  R3,R8
SRL  R8,1
A    R8,R7
ANDI R7,>FFFE           *R7:=INT((L+R)/2)
MOV  *R7,R8
MOV  @2(R2),*R7
MOV  R8,@2(R2)          *A[(L+R)/2]:=:A[L+1]

C    @2(R2),*R3
JLT  NOSWP1
MOV  @2(R2),R8
MOV  *R3,@2(R2)
MOV  R8,*R3             *A[L+1]:=:A[R]

NOSWP1  C    *R2,*R3
JLT  NOSWP2
MOV  *R2,R8
MOV  *R3,*R2
MOV  R8,*R3             *A[L]:=:A[R]

NOSWP2   C    @2(R2),*R2
JLT  NOSWP3
MOV  @2(R2),R8
MOV  *R2,@2(R2)
MOV  R8,*R2             *A[L+1]:=:A[L]

NOSWP3   MOV  R2,R4
INCT R4                 *I:=L+1
MOV  R3,R5              *J:=R
MOV  *R2,R6             *KEY:=A[L]
JMP  INCLOP

INRLOP   MOV  *R4,R8             *LOOP UNWRAPPING
MOV  *R5,*R4
MOV  R8,*R5             *A[I]:=:A[J]

INCLOP  INCT R4                 *I:=I+1
C    *R4,R6
JLT  INCLOP            *A[I]<KEY

DECLOP   DECT R5                 *J:=J-1
C    *R5,R6
JGT  DECLOP            *A[J]>KEY

C    R4,R5
JLE  INRLOP           *IF I<=J THEN CONTINUE

OUT      MOV  *R2,R8
MOV  *R5,*R2
MOV  R8,*R5             *A[L]:=:A[J]

DEL1     MOV  R5,R7              *Quicksort subfiles?
S    R2,R7              *R7:=J-L
MOV  R3,R8
S    R4,R8
INCT R8                 *R8:=R-I+1
CI   R7,LIMIT
JH   DEL2
CI   R8,LIMIT
JH   DEL2

CI   R9,ENDSTK          *LVL=0?
JEQ  INSERT             *No more Quicksorting at all?

MOV  *R9+,R2            *POP L
MOV  *R9+,R3            *POP R
JMP  MNLOOP

DEL2     C    R7,R8              *Determine what is small and large subfile
JL   ELSE2

MOV  R2,@LSFL
MOV  R5,@LSFR
DECT @LSFR
MOV  R4,@SSFL
MOV  R3,@SSFR
JMP  DEL3

ELSE2    MOV  R4,@LSFL
MOV  R3,@LSFR
MOV  R2,@SSFL
MOV  R5,@SSFR
DECT @SSFR

DEL3     CI   R7,LIMIT           *Is small subfile big enough to be sorted by
JLE  THEN3              *Quicksort?
CI   R8,LIMIT
JH   ELSE3

THEN3    MOV  @LSFL,R2           *Don't Quicksort small subfile, only large
MOV  @LSFR,R3
JMP  MNLOOP

ELSE3    DECT R9                 *Stack large subfile
MOV  @LSFR,*R9          *PUSH R
DECT R9
MOV  @LSFL,*R9          *PUSH L
MOV  @SSFL,R2           *Sort small subfile
MOV  @SSFR,R3
JMP  MNLOOP

*
* Insertion Sort finishing up
*
INSERT   MOV  R1,R4
DECT R4                 *I:=N-1
C    R4,R0
JL   LEAVE              *Check if any looping at all

FORI     C    @2(R4),*R4
JGT  NEXTI              *If next is greater than this, it's OK

MOV  *R4,R6             *KEY:=A[I]
MOV  R4,R5
INCT R5                 *J:=I+1

WHILE    MOV  *R5,@-2(R5)        *A[J-1]:=A[J]
INCT R5                 *J:=J+1
C    R5,R1
JH   ENDWHL             *J>N?
C    *R5,R6             *A[J]<KEY?
JLT  WHILE
ENDWHL MOV  R6,@-2(R5)           *A[J-1]:=KEY
NEXTI    DECT R4
C    R4,R0              *Check if passed array base point
JHE  FORI

LEAVE    RTWP                    * Return Forth workspace
*        AI   R6,4               * remove parameters from Forth stack
B    @R10               * Branch to Forth interpreter

*--------------
* DATA AREA

SORTWS   BSS  >20             *Workspace for sorting routine
SUBSTK   BSS  >40             *Internal subfile stack
ENDSTK   EQU  SUBSTK+>40      *End of that stack

* variable used by quicksort
LSFL     DATA   0             *Large SubFile Left pointer
LSFR     DATA   0             *Large SubFile Right pointer
SSFL     DATA   0             *Small SubFile Left pointer
SSFR     DATA   0             *Small SubFile Right pointer

END
```

##### Share on other sites

1 hour ago, FarmerPotato said:

Agree this is valuable because it's so short.

you can eliminate IF I=N

by saying FOR I=1 TO N-1

Yes! Every little trim helps.

