
14 hours ago, Willsy said:

Interrupts are off most of the time in TurboForth. A legacy of a naive design decision by yours truly. At the time, though, I was going for speed.

This is not naive, it's how TI intended the computer to be operated. ;)

 

11 hours ago, Tursi said:

This is not naive, it's how TI intended the computer to be operated. ;)

I find it exciting to think about working with a TMS99xxx where LIMI is a privileged instruction that you can only use in kernel mode, i.e. one your programs do not have access to. That would be a reversal of how the machine is operated.

9 minutes ago, mizapf said:

I find it exciting to think about working with a TMS99xxx where LIMI is a privileged instruction that you can only use in kernel mode, i.e. one your programs do not have access to. That would be a reversal of how the machine is operated.

Wasn't that the plan on the later ones? I never followed the line any further.

 

58 minutes ago, TheBF said:

Polled interrupts... :)

And people say Forth is weird.

Experience shows that most people don't know how to write interrupt-safe code, so it's probably better that way. But for a 60 Hz interrupt, my opinion is "who cares?" You aren't going for precision timing there. ;) It's treated more as a flag than an interrupt.
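
A rough sketch of what "treating it as a flag" looks like in Forth, with made-up names (the handler only records that the tick happened; the main loop decides when to act on it):

VARIABLE TICKED?                        \ set by the 60 Hz handler

: TICK-ISR ( -- )   TRUE TICKED? ! ;    \ note the tick and get out

: MAIN-STEP ( -- )
    TICKED? @ IF  FALSE TICKED? !       \ acknowledge the tick
                  DO-FRAME  THEN ;      \ hypothetical per-frame work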

 

Look at the ColecoVision where they have no choice - it's on the non-maskable interrupt. Code breakage left and right as people write their VDP access code without considering that it might be interrupted at any time by a vertical blank handler that ALSO accesses the VDP. But at least there you can control what the interrupt handler actually does...

 

6 hours ago, Tursi said:

Experience shows that most people don't know how to write interrupt-safe code, so it's probably better that way. But for a 60 Hz interrupt, my opinion is "who cares?" You aren't going for precision timing there. ;) It's treated more as a flag than an interrupt.

 

Look at the ColecoVision where they have no choice - it's on the non-maskable interrupt. Code breakage left and right as people write their VDP access code without considering that it might be interrupted at any time by a vertical blank handler that ALSO accesses the VDP. But at least there you can control what the interrupt handler actually does...

 

I guess precision timing is a relative term depending on your needs, but a guaranteed 16.6 ms resolution gives you something to work with.

 

On a machine like the TMS9900 it doesn't get much easier to stay interrupt-safe, as long as there is some agreement on which scratchpad locations users must NEVER use.

I guess the problem with TI-99 is that everybody wants the same workspace. :) 

 

But for sure having no way to mask interrupts is not fun either.

For better or worse I took the approach of only masking interrupts around critical code. I needed your input to understand what that really was, but now that I've been educated it works great.

 

On 12/23/2020 at 3:11 PM, TheBF said:

I guess precision timing is a relative term depending on your needs, but a guaranteed 16.6 ms resolution gives you something to work with.

 

On a machine like the TMS9900 it doesn't get much easier to stay interrupt-safe, as long as there is some agreement on which scratchpad locations users must NEVER use.

I guess the problem with TI-99 is that everybody wants the same workspace. :) 

 

But for sure having no way to mask interrupts is not fun either.

For better or worse I took the approach of only masking interrupts around critical code. I needed your input to understand what that really was, but now that I've been educated it works great.

Well, the problem is that most vertical interrupt handler functions want to manipulate the VDP. Since the VDP has only one address register, any interrupt handler that reads, writes, or sets registers can corrupt any non-interrupt code that does the same. On the TI, you have to assume that the interrupt handler is going to do that, and so you need to lock it out during any VDP access. TI's suggestion was to just leave it locked out and poll at safe points in your code.

 

As long as you poll more often than once every 16 ms, you aren't missing anything at all. That's a huge window, even on an old machine like this. Short of raster effects or page flipping, there's little reason to be concerned about exactly when it happens - you'll still get 60 fps.

 

Better than that, most software that uses this approach isn't going to use the full 16 ms timeslice, so what /really/ happens is that the code finishes its work and then /waits/ for the vertical blank to provide synchronization. This means the new frame starts as soon as the flag is set, without risk, and if a frame accidentally runs long, it is not corrupted by the interrupt firing mid-update.
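
A minimal sketch of that loop in Forth, assuming a hypothetical VBLANK? word that returns true once the vertical blank has arrived (reading the VDP status byte would do it) plus made-up UPDATE-WORLD / DRAW-FRAME words:

: WAIT-VSYNC ( -- )   BEGIN VBLANK? UNTIL ;   \ spin until the flag shows up

: GAME-LOOP ( -- )
    BEGIN
      UPDATE-WORLD      \ all the game logic for this frame
      DRAW-FRAME        \ every VDP write happens here, uninterrupted
      WAIT-VSYNC        \ then idle until the next vertical blank
    AGAIN ;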

 

The concept of protecting blocks of code like you do is basically defining "critical sections"... which does work well enough but puts more onus on you, the programmer, to keep track of those cases and whether you are currently in a critical section or not. This is most useful when you have high speed interrupts or interrupts that require low latency - and in those cases you tightly control what data is shared so as to minimize the locks.

 

Free and wild interrupts are a lovely concept, but in the real world you still tightly control them. In fact they really represent the first instance of multi-threaded programming and the resource contention that comes with it. People tend to start with a carefree "it will be fine" attitude, and the longer the software exists and the more often data gets corrupted by concurrent access, the tighter the controls get. It seems like you're losing something, but you really aren't, and after decades of chasing that sort of bug, I just write my code defensively the first time around now. The performance hit is less than you think it is, and the debugging time just isn't worth it. Hell, the hours I spent on YOUR debugging alone should illustrate that, but the reason I did it is that education is the most valuable use of time there is. ;)

 

 

8 hours ago, Tursi said:

Well, the problem is that most vertical interrupt handler functions want to manipulate the VDP. Since the VDP has only one address register, any interrupt handler that reads, writes, or sets registers can corrupt any non-interrupt code that does the same. On the TI, you have to assume that the interrupt handler is going to do that, and so you need to lock it out during any VDP access. TI's suggestion was to just leave it locked out and poll at safe points in your code.

 

As long as you poll more often than once every 16 ms, you aren't missing anything at all. That's a huge window, even on an old machine like this. Short of raster effects or page flipping, there's little reason to be concerned about exactly when it happens - you'll still get 60 fps.

 

Better than that, most software that uses this approach isn't going to use the full 16 ms timeslice, so what /really/ happens is that the code finishes its work and then /waits/ for the vertical blank to provide synchronization. This means the new frame starts as soon as the flag is set, without risk, and if a frame accidentally runs long, it is not corrupted by the interrupt firing mid-update.

 

The concept of protecting blocks of code like you do is basically defining "critical sections"... which does work well enough but puts more onus on you, the programmer, to keep track of those cases and whether you are currently in a critical section or not. This is most useful when you have high speed interrupts or interrupts that require low latency - and in those cases you tightly control what data is shared so as to minimize the locks.

 

Free and wild interrupts are a lovely concept, but in the real world you still tightly control them. In fact they really represent the first instance of multi-threaded programming and the resource contention that comes with it. People tend to start with a carefree "it will be fine" attitude, and the longer the software exists and the more often data gets corrupted by concurrent access, the tighter the controls get. It seems like you're losing something, but you really aren't, and after decades of chasing that sort of bug, I just write my code defensively the first time around now. The performance hit is less than you think it is, and the debugging time just isn't worth it. Hell, the hours I spent on YOUR debugging alone should illustrate that, but the reason I did it is that education is the most valuable use of time there is. ;)

 

 

Wise words from an experienced hand.

 

My critical sections are pretty simple so far: the VDP utilities, KSCAN, and the DSR link.

Maybe there should be more, but I probably don't have code that exercises any more than that yet.

 

There is an alternative, but it runs counter to modern thinking, I believe.

The Forth systems I have worked with took the approach that interrupts were ONLY for things that were unpredictable and required immediate attention.

And every ISR in these systems was dead simple. It did something, recorded a byte to a queue or whatever, and exited.
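
In Forth terms the whole handler is one short word, something like this sketch (UART-RX@ and >RXQ are hypothetical words for reading the receive register and appending one byte to a software queue):

: RX-ISR ( -- )   UART-RX@  >RXQ ;   \ grab the byte, queue it, get out

Everything else - parsing, buffering decisions, whatever - happens later in an ordinary foreground thread.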

 

The rest of the system consisted of Forth threads that were inherently re-entrant, or at least the coder was supposed to keep them that way. :)

The multi-tasking on such a system is cooperative, so the problems of interrupt-based taskers never happen. Small pieces of code run to completion and give way to another thread when they do.

It worked remarkably well because the interrupts could stop any of these tasks for a small period of time without issue. Like the 9900, a context switch only takes three registers on a Forth VM: the data stack pointer, the return stack pointer, and the interpreter pointer. Of course the ISRs, and in fact all the code, were written to fit this model.

This is only possible when you control everything, as in the case of PolyForth or the similar systems I built back in the steam-driven age.

Most people today don't have that luxury, living on top of an O/S, from what little I understand of modern O/Ses.

Some of the people I worked with in the 90s refused to believe it was possible to make serious systems like this, but at least at some scale it is.

 

Way off topic, but a refreshing Christmas beverage can do that to one. :)

 

 

10 minutes ago, TheBF said:

There is an alternative, but it runs counter to modern thinking, I believe.

The Forth systems I have worked with took the approach that interrupts were ONLY for things that were unpredictable and required immediate attention.

And every ISR in these systems was dead simple. It did something, recorded a byte to a queue or whatever, and exited.

Well, "modern" programming has grown into a terrible thing, and the quality of modern code is my proof of that statement. When even the embedded code in an airliner is crashing planes due to sloppy practices (as opposed to honest mistakes), I think it's time to step back and re-evaluate what you're doing. Eventually I'll write a book or something. ;)

 

But in my experience, again, the ISR that sets a flag and exits leads to very stable code. However, it DOES turn all your interrupts into polled interrupts. ;) But it also means your code is never caught off guard, and you do what you need to do exactly when you mean to. Embedded code still has a place for this sort of predictability. My work on the Neato robot probably had the most interrupts to deal with - I needed to deal with interrupts from four different motor encoders in order to accurately measure their speed (in software!), and these couldn't be delayed by much, as it's very much a real-time problem. Fortunately, measuring speed is just distance (or in this case, encoder ticks) over time, so it was just a quick increment with the actual measurement happening at fixed points (to provide the time). There were interrupts every time a serial port received a byte (just move it to a buffer) or finished sending a byte (send the next one), and there were three or four of those. And then we still had to do a dozen actual tasks as rapidly as possible. ;)
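
Not the actual Neato code, just the shape of the encoder idea as a Forth sketch (all names invented): the ISR does nothing but count, and a task that runs at a known interval turns the count into a speed. Since only the ISR writes TICKS, only the foreground task writes LAST-TICKS, and a single-cell read is atomic on a 16-bit CPU, no locking is needed.

VARIABLE TICKS           \ written only by the interrupt handler
VARIABLE LAST-TICKS      \ written only by the foreground task

: ENCODER-ISR ( -- )   1 TICKS +! ;       \ just count; nothing else

: TICKS-SINCE ( -- n )                    \ call this at a fixed interval
    TICKS @  DUP  LAST-TICKS @ -          \ ticks since the last reading
    SWAP LAST-TICKS ! ;                   \ remember where we got to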

 

Dealing with a buffer used in multiple threads can be tricky. In multi-threaded programming, you can use a mutex or a lock - when two threads want the object, the operating system will hold one up and let the other one finish. That's pretty handy. You can't do that in an embedded system's raw interrupt context, because there's no operating system to speak of, and anyway you don't want to hold up the interrupt! In this case you have to design the system rather carefully... one simple way is to have a separate read pointer and write pointer, and only allow one side to control either (for instance, only the main app can write and only the interrupt can read, for a send queue). This is to help ensure that two systems never try to write the same variable at the same time.
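
A sketch of that split-pointer queue in Forth (names invented, size a power of two so the wrap is a cheap AND, and the "full" check left out to keep it short). For a send queue, only the main app ever touches WR and only the interrupt ever touches RD, so neither side can stomp on the other's pointer:

64 CONSTANT QSIZE                 \ must be a power of two
CREATE QBUF  QSIZE ALLOT
VARIABLE RD   VARIABLE WR
0 RD !  0 WR !

: WRAP ( n -- n' )   QSIZE 1- AND ;
: Q-EMPTY? ( -- f )  RD @ WR @ = ;
: >Q ( c -- )   WR @ QBUF + C!   WR @ 1+ WRAP  WR ! ;   \ main app only
: Q> ( -- c )   RD @ QBUF + C@   RD @ 1+ WRAP  RD ! ;   \ interrupt only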

 

Coming around, this is the main issue with the TMS9918A interrupt handlers - if the interrupt handler and the main code both want to access VDP RAM, then they are both contesting that single address register. Simplest way around it - and what a lot of game systems did - just do all your VDP access in the interrupt handler. It can't fire again until you clear it, so you can take as long as you need. (In fact, many Nintendo games run the entire game that way - it's clever in its simplicity). The opposite works well too, and although I think in the back of my head that the NES way is better, I still tend to do NO VDP access in my interrupt handler - I just turn it off till I'm ready to check for it. (I actually have wrapper code for the Coleco that emulates the LIMI 0/2 sequence). If you don't want to do either of those, then you need to protect your VDP access -- and anything else your interrupt handler accesses. ;)
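
For the "protect it" option, the wrapper amounts to something like this sketch, with hypothetical DINT / EINT words standing in for LIMI 0 / LIMI 2 (or whatever your system provides) and made-up VDP words:

: VC! ( c vdp-addr -- )        \ write one byte to VDP RAM safely
    DINT                       \ interrupts off: nothing can move the
    VDP-ADDRESS!  VDP-DATA!    \   VDP address register under our feet
    EINT ;                     \ interrupts back on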

 

There was an original topic? ;)

 


"Dealing with a buffer used in multiple threads can be tricky. In multi-threaded programming, you can use a mutex or a lock - when two threads want the object, the operating system will hold one up and let the other one finish. "

 

This is the kind of problem I was referring to, the kind that disappears in a PolyForth-style O/S.

All the I/O operations, and any operations that read or write a buffer, queue, or whatever data structure, are prefaced with a routine traditionally called PAUSE.

PAUSE is the context switch. :)  (In my version I patch PAUSE with the address of the code below, called 'YIELD.)

The context switch is fast on these systems. For example, my version on the 9900 is an RTWP plus a local variable read to test a sleep/wake flag.

This is a bit cryptic but here is my machine code with comments. (HERE is Forth for the memory location at that point in the code)

CREATE 'YIELD \ *this is the entire context switcher
      0380 ,         \ RTWP,                \ change tasks! :-)
HERE  02A1 ,         \ R1 STWP,             \ HERE on stack for later
      C021 , 20 ,    \ HEX 20 (R1) R0 MOV,  \ fetch TFLAG->R0
      13FB ,         \ -8 $$+ JEQ,          \ if tflag=0 jmp to RTWP
      NEXT,          \ run Forth interpreter (3 instructions)

On an x86 machine it was three pushes, change the stack register, and three pops, or something like that.

This is also considered heresy in today's world, from what I understand.

 

Because the context switch is so compact, putting it in each I/O primitive doesn't compromise performance much, and the programmer is free to write special cases that "hog" the system for an extra microsecond or two if really needed. But you control that, not a timer in a kernel somewhere that you can't get at.
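
The shape of it, as a sketch: any word that might have to wait calls PAUSE inside its wait loop, so the waiting time goes to the other tasks. KEY? is the usual "key available?" word; (KEY) is a made-up raw read:

: KEY ( -- c )
    BEGIN  PAUSE  KEY? UNTIL   \ give the CPU away while we wait
    (KEY) ;                    \ then fetch the character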

  

For embedded programming it allows you to tune the time allocations for different threads really well. Meanwhile the interrupts are firing as needed but don't interfere with the foreground threads, because you make them work the same way. In and out. Nothing fancy.

 

You do need to get involved for mission critical threads. It's not automated like a modern O/S. I would think many people would not want that level of detail today.

But for embedded work it is far less confusing because you are in complete control of the entire system.

 

I don't know if anybody even does this anymore. I know the commercial O/S based Forth systems now just use the O/S threading system and therefore are forced to use mutexes, locks and all the rest. 

 

Complexity breeds complexity.

 

 

 

2 hours ago, TheBF said:

I don't know if anybody even does this anymore. I know the commercial O/S based Forth systems now just use the O/S threading system and therefore are forced to use mutexes, locks and all the rest. 

Even in co-operative multi-tasking you still want some form of mutex or lock. It's not always possible to finish your use of a resource before you yield to another thread (or even another process). Especially when dealing with real-world hardware, you might want or even need to hold a resource for longer than you want to block multitasking; in that case you need a way to indicate to other threads that a resource is in use. Admittedly, it's a lot easier when you don't need to guarantee atomic operations - a simple flag is probably enough in most cases, but that's all a mutex really is. Admittedly, I'm straining to think of a case where I wouldn't just organize the code differently to avoid it, but... ;)

 

Locks and mutexes, at the basic level, are not terribly complicated. They're just a reference counter (or some are just boolean flags, but that's dumb because it puts all the onus to track it back on the programmer) - all that's necessary is a way to increment and test in a guaranteed atomic step (that is, one that cannot be interrupted). Modern CPUs do that in a single instruction; older ones may require a little more work, like disabling interrupts, but it's still fast. The worst case is actually multi-CPU systems, since you need a way to tell all the CPUs about the change. Intels have enormously complicated hardware logic to do this and make sure everyone's cache is up to date and so on, but somehow it mostly works most of the time. ;)
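
On an older CPU with no atomic test-and-set instruction, the "little more work" version is roughly this sketch: mask interrupts just long enough to test and claim a flag. DINT / EINT and the other names are hypothetical, and a counting mutex would add an owner/count cell on top of this:

VARIABLE LOCKED   0 LOCKED !

: TRY-LOCK ( -- f )                    \ true if we claimed it
    DINT
    LOCKED @ 0=  DUP IF  TRUE LOCKED !  THEN
    EINT ;

: UNLOCK ( -- )   FALSE LOCKED ! ;

: LOCK ( -- )   BEGIN TRY-LOCK 0= WHILE PAUSE REPEAT ;   \ yield while we wait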

 

QNX is the most modern cooperative multitasking system I know of... but I haven't done very much with it. But it's definitely still done today. ;)

 

(Edit: Ah! I thought of the case - multi-CPU. When you have more than one real CPU running, you have two or more actual threads running at the same time, even in a cooperative system. Locks or mutexes are necessary there. ;) )

 

 


10 minutes ago, Tursi said:

 

(Edit: Ah! I thought of the case - multi-CPU. When you have more than one real CPU running, you have two or more actual threads running at the same time, even in a cooperative system. Locks or mutexes are necessary there. ;) )

 

 

Yes. You got me there. This is something I have never done and it of course is how the world works now.

 

I know that Brad Rodriguez did his PhD thesis on controlling a particle accelerator using multiple CPUs. His solution was to just put a Forth system on each board and create a communication protocol between them. I guess he kind of punted by not really having shared resources between CPUs, but I did not read the paper, so I'm not sure.

The funny thing about that setup was that the high voltages at some control points meant his communication had to be over fibre to prevent current flow and scorched computer boards across the network. :)

 

I guess we are kind of far from VBLANK on TurboForth but it was fun.

