
New GUI for the Atari 8-bit


flashjazzcat


Because everything is IPC based, and supervisor calls (since they're interrupts) can't be pre-empted: only one process can be in the kernel context at any one time, and everything else is just message passing. SymbOS's kernel calling mechanism is a bit different, but Prodatron said that the need for re-entrant code was rare. I've identified a couple of frequently used routines (such as inter-bank jumps) which could be made re-entrant, but for now they're non-pre-emptable and that will work.

All the complex functionality is done in separate processes, which means that it is IPC based. So re-entrant code is not needed here.

The few remaining functions, which can be called directly from an application, have to be re-entrant. Here we have three possibilities:

- make the function genuinely re-entrant. Not a problem for small and simple functions.

- lock the interrupts during execution. This only makes sense for very small and short pieces of code, otherwise it will disrupt the system timing etc. Not a good solution.

- for more complex stuff: add a "park" mechanism in front of the function for additional calls. Using a special flag, the function first checks whether it is already in use. If so, it performs one idle cycle and then checks the flag again. This is done, for example, in the kernel's memory copy routine, which can also handle inter-bank copies but only has one copy buffer (a sketch of the idea follows below).
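
A minimal sketch of that park idea on the 6502, written in the MADS style used elsewhere in this thread; Busy, Idle and DoInterBankCopy are hypothetical names invented for the illustration (the actual SymbOS routine is Z80 and is not shown here):

	.local CopyViaSharedBuffer
	; Busy is an assumed flag: $FF = buffer free, $00 = buffer in use
Acquire
	inc Busy ; a single read-modify-write, so an interrupt cannot split it
	beq GotIt ; $FF -> $00 means we now own the copy buffer
	dec Busy ; already in use: undo our increment...
	jsr Idle ; ...spend one idle cycle (placeholder for a yield call)
	jmp Acquire ; ...then check the flag again
GotIt
	jsr DoInterBankCopy ; the lengthy copy using the single buffer (placeholder)
	dec Busy ; release the buffer: $00 -> $FF
	rts
	.endl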


I'll be quite interested to see what - if any - impact the software interrupt kernel calls have on interrupt latency in general on the Atari. It's only other IRQs which will feel the squeeze (i.e. the scheduler and the mouse sampler, the latter running at 800Hz). Worst case scenario is that the mouse starts stuttering, but most kernel calls are short enough that I don't think this will happen. Having the shorter kernel functions running entirely in the IRQ makes things appealingly uncomplicated (and thanks to Andy Mucho for suggesting the idea). For block moves and memory allocation, of course, we need to implement a "park" mechanism along the lines you have described. The system clock and mouse pointer run in the NMI, so short atomic sections should not cause unsightly disruption.

 

Does SymbOS handle drag-and-drop of objects, BTW? I was coding up the desktop manager event messages (including mouse click, double-click, etc), and added a "drag" event which will tell the application that a drag has been instigated on a given object. The application will then track the mouse until it gets a drop event on another object.

Edited by flashjazzcat

Does SymbOS handle drag-and-drop of objects, BTW? I was coding up the desktop manager event messages (including mouse click, double-click, etc), and added a "drag" event which will tell the application that a drag has been instigated on a given object. The application will then track the mouse until it gets a drop event on another object.

Only rearrangement of icons is supported right now. I am planning to implement drag'n'drop with the same subroutines for the next major version as well. This would be done for icons and list elements, if a special flag is set for such objects. I don't know whether it would make sense for more control types, too?

My plan was to send a message from the desktop manager to the source process. This message contains the source object, the destination process, and the destination position/object. Then it is the turn of the source process to communicate with the destination process. This communication should of course be standardized.

Now the question is how to handle such drag'n'drops. Is it usually (only) a list of one or more files (plus paths) which is dragged and dropped? Or could it also be real data directly?
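
Purely as an illustration, such a notification could be laid out as a small message buffer along these lines; the event code, field names and field order are invented for the sketch and are not the actual SymbOS message format:

DESK_DRAGDROP = $20 ; assumed event code, not a real SymbOS constant
DragDropMsg
	dta DESK_DRAGDROP ; event/function code
	dta 0 ; source object ID (icon or list element)
	dta 0 ; destination process ID
	dta 0 ; destination object ID (0 = none / dropped on the desktop)
	dta a(0),a(0) ; destination position: x and y as 16-bit words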


Sounds good. Initiating drag and drop is straightforward enough (and a "draggable" object flag makes sense), but I have only a vague concept of how it would be handled at the other end. File browser windows (where I reckon most of the drag and drop will be happening) are all owned by the same process anyway, but it would be nice if the same communication protocol could be applied in both situations (dragging objects between windows of a single process, and between windows of different processes). So: object is dropped, desktop manager sends a message to the source process asking for the package of information, and specifying the target process. Source process builds the package (complete path and filename of icon, for example), and sends a message to the target process saying "here's everything you need to handle the dropped object"? Anyway: I'm a ways off implementing this, but it's probably worth thinking about now.

 

Quite interesting to see the IPC implemented in 6502. It's pretty economical, since of course applications spend most of their time communicating with the desktop manager, so the message passing routine becomes quite generic:

; ----------------------------------------------------------------------------
; Worker routine to open a window at a,x in bank y and wait for a response
; ----------------------------------------------------------------------------
	
	.local OpenWindow
	sty MessageBuffer+1 ; bank
	stax MessageBuffer+2 ; address
	mva #Desk.WindOpen MessageBuffer ; function
	jsr SendMsgDeskMgr ; send message
	jsr SleepMsgDeskMgr ; sleep till we get a response
	lda MessageBuffer+4 ; get window ID (only valid if status is OK)
	ldy MessageBuffer ; get status (bit 7 set if there was an error)
	rts
	.endl

; ----------------------------------------------------------------------------
; Send message to the desktop manager
; ----------------------------------------------------------------------------

	.local SendMsgDeskMgr
	lda #ProcessID.DesktopManager ; receiver ID
	ldxy #MessageBuffer
	SysCall Kernel.MessageSend ; send message		
	rts
	.endl

; ----------------------------------------------------------------------------
; Sleep on message from desktop manager
; ----------------------------------------------------------------------------

	.local SleepMsgDeskMgr
	lda #ProcessID.DesktopManager ; sender ID
	ldxy #MessageBuffer
	SysCall Kernel.MessageSleepReceive ; sleep on response from desktop manager only
	rts
	.endl

SysCall is a macro which just writes a BRK instruction with the function code inline behind it. This frees up the 6502's registers for arguments. There are only three registers, so we have to be quite creative. Kernel functions which require more than three 8-bit arguments can have them inline after the SysCall. "#ProcessID.DesktopManager" and similar are enumerated values: a nice feature of the MADS assembler which means hard-coded numeric values can be completely avoided. Of course you can do this in any assembler, but the syntax here is highly readable. Slowly getting there...
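
A minimal sketch of what such a macro could look like in MADS syntax (the dispatch side is not shown; the handler can locate the inline byte through the return address that BRK pushes on the stack):

	.macro SysCall
	brk ; software interrupt into the kernel's BRK handler
	dta :1 ; kernel function code placed inline after the BRK
	.endm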

Edited by flashjazzcat

It shouldn't take long to customise the application skeleton code. As for modularisation: with some kind of resource editor on the PC, and a bunch of library code, a bit of cut and paste will go a long way. I already discovered this when writing applications which use the text mode UI (see: FDISK and UFlash). Without the need to write a UI for every new project, things get a lot easier. Not quite in the same league as application bundles, though. :)


You should replace .LOCAL/.ENDL with .PROC/.ENDP for things that are considered subroutines/procedures. As opposed to LOCALs, PROCs are exclusive in the same scope. LOCALs, by contrast, are cumulative, so the code below is not a compile-time error! But it is something you really don't want... believe me, it took days to find it :-)

.local test
lda #1
rts
.endl


.local test
lda #2
rts
.endl
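
For comparison, the same duplicate written with .PROC/.ENDP is rejected at assembly time (per the advice above), so the mistake cannot slip through silently:

.proc test
lda #1
rts
.endp

.proc test ; MADS stops here with a duplicate label error
lda #2
rts
.endp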

Edited by JAC!

You should replace .LOCAL/.ENDL with .PROC/.ENDP for things that are considered subroutines/procedures. As opposed to LOCALs, PROCs are exclusive in the same scope. LOCALs, by contrast, are cumulative, so the code below is not a compile-time error! But it is something you really don't want... believe me, it took days to find it :-)

.local test
lda #1
rts
.endl


.local test
lda #2
rts
.endl

Another nice compiler feature. :) Thanks for the warning. Unfortunately I already converted all my PROCs to LOCALs some time ago: PROCs don't work with SDX relocatables, and someone told me LOCALs were a lot faster for the compiler and/or WUDSN to process. I'll just have to be careful, then. ;)


For WUDSN it does not make any difference, and for MADS it should actually not make a difference either (I have no reasonably large code example for testing that - maybe you have :-) ), as long as you don't use formal parameters with PROC. The formal parameters are probably also the only reason why I'd expect relocation not to work: when calling the procedure, MADS generates stub code with absolute addresses to pass the values.


For WUDSN it does not make any difference, and for MADS it should actually not make a difference either (I have no reasonably large code example for testing that - maybe you have :-) ), as long as you don't use formal parameters with PROC. The formal parameters are probably also the only reason why I'd expect relocation not to work: when calling the procedure, MADS generates stub code with absolute addresses to pass the values.

 

Not sure if it was you who explained the problem with PROC and SDX relocatables, but they simply didn't work for me regardless of whether there were any formal parameters. Since I don't (currently) pass parameters via PROC I simply abandoned procedure blocks in every project and replaced them with LOCALs. It's relevant here because it looks like I'll have to write a loader to handle SDX relocatables in the GUI (MADS' native relocatable files being unsuitable for various reasons), so unless the PROC problem is fixed, LOCAL it shall be. Of course in the ROM-based (non-relocatable) code, we're talking about nothing more than an extensive search and replace operation to substitute one for another. But if the bug only arises because of cumulative definitions in the same scope, I can probably live with it. I noticed the issue before, IIRC, and posted in the MADS thread some time ago. I can remember being told at the time that the LOCAL behaviour was "as intended". :o


@Prodatron: Curious to know how or if you avoid linear searching of the message queue to find the desired addressee:

 

http://atariage.com/forums/topic/227822-bitmap-allocation/?do=findComment&comment=3034655

For sending messages I just do a linear search through the message queue to find a free entry. The core loop takes 9 microseconds for each entry. So in the worst case, when the queue is nearly full, it will take 9*64 = 576 microseconds to find the (last) free entry.

Receiving messages is always done at the same quick speed: there is a pointer for each process which is either NULL or points to the process's first message in the queue. Each queue entry itself contains a pointer to the next message; this is NULL if there is no next message. So you only need to check the pointer, grab the message, and update the "first" pointer, the "next" pointer and the empty flag.

 

I chose the linear search for empty entries because it is very quick as long as there are not too many entries in the queue. Even if there are already 10 entries, it only takes 90 microseconds, which is probably still faster than a more complex algorithm which could also search a full queue quickly. But in most cases the message queue isn't that full (in general that only happens if something "hangs" a little bit - in that case the system is slow anyway, so who cares about the message queue :) ), so that was the reason for choosing the simple way, which is faster in the common case.
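
For illustration, such a scan might look like this on the 6502, assuming a simple per-entry "in use" byte (QueueInUse is an invented name; SymbOS itself is Z80 and organises its entries differently):

	.local FindFreeEntry
	ldx #0
Loop
	lda QueueInUse,x ; zero = this queue slot is free
	beq Found
	inx
	cpx #64 ; 64 entries in the queue
	bne Loop
	sec ; carry set = queue is full
	rts
Found
	clc ; carry clear = index of a free slot returned in X
	rts
	.endl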


For sending messages I just do a linear search through the message queue to find a free entry. The core loop takes 9 microseconds for each entry. So in the worst case, when the queue is nearly full, it will take 9*64 = 576 microseconds to find the (last) free entry.

Yeah: seems reasonable in all but extreme cases, as you say. It's about ten machine cycles per loop iteration on the Atari, so 640 cycles for a full list. As it happens, there's a much faster way, which I'll write about in the other thread.

 

Receiving messages is always done at the same quick speed: there is a pointer for each process which is either NULL or points to the process's first message in the queue. Each queue entry itself contains a pointer to the next message; this is NULL if there is no next message. So you only need to check the pointer, grab the message, and update the "first" pointer, the "next" pointer and the empty flag.

I'm still slightly unclear on a couple of points, although I may have misunderstood your original emailed description of the queue. The same queue stores messages between all different processes, so (in FIFO order), you might have:

 

[0] Message for process 1

[1] Message for process 2

[2] Message for process 3

[3] Message for process 2

[4] Message for process 3

 

So even if process 3's pointer points to [2], there'll still be a linear search, or some list walking, to get to process 3's next message at [4]? Unless the queue is maintained so that messages for a given process are kept together in the queue?

 

I'm also interested in a mechanism to handle messages intended for all processes. Such a message would have to be kept in the queue until all addressees have received it.

Edited by flashjazzcat

Sorry I am currently out but will answer tomorrow!

The cool thing is that you have now brought me to an idea for making the search for an empty entry fast and constant :) (so MessageSend will always have the same constant speed as MessageReceive). It is just another linked list in one direction: there is a base pointer which points to the first entry in a list of free entries, where each entry points to the next free one. Reserving and releasing an entry are both quite easy. Really cool!


You could try what the Amiga does - the messages don't all go in one queue; every task creates one or more ports that serve as the queue for messages to that task. You allocate one or more signals for each port, so when a signal occurs the system knows which ports to process.


This is the way it works in SymbOS:

- each process has one origin pointer to its first message in the queue. If this pointer is NULL there is no message available

- each message within the queue has a pointer to the next message in the queue for the same process. If this is NULL there is no other message available for the process.

- so if a process looks for a new message, it first checks its origin pointer. If it's not NULL it grabs that message, then updates the origin pointer with the "next" pointer of the grabbed message (see the sketch below).
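
A minimal 6502 sketch of that receive path, in the style of the listings in this thread; the names (GetMessage, Process.MsgHead, MsgNext) are assumptions, and copying the message out and releasing the queue entry are omitted:

	.local GetMessage
	; process ID in Y on entry; entries are numbered from 1 so that 0 can mean NULL
	ldx Process.MsgHead,y ; origin pointer for this process
	beq NoMessage ; NULL: nothing pending for this process
	lda MsgNext-1,x ; the grabbed message's "next" pointer...
	sta Process.MsgHead,y ; ...becomes the new origin pointer
NoMessage
	rts ; X = message entry, or 0 if there was none
	.endl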

 

The new linked list for free entries works in the following way:

- there are 64 pointers, one for each message queue entry, placed in the same order as the messages in the queue

- each points to the next free entry (so at the beginning, entry 1 points to 2, entry 2 points to 3, etc.)

- an origin pointer outside this list points to the first empty entry.

- if you allocate a new entry, you take the one the origin pointer links to. The origin pointer is then set to the entry which the newly occupied one pointed to before.

- if you free an entry, the origin pointer is set to the newly freed one, and the newly freed one points to the entry the origin pointer was linked to before (see the sketch below).
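
A small 6502 sketch of those two operations, again with assumed names (FreeHead, FreeNext) and the same "entries numbered from 1, 0 = NULL" convention used by the MessageSend listing later in the thread:

	.local AllocEntry
	ldx FreeHead ; origin pointer to the first free entry (0 = NULL)
	beq Done ; NULL: no free entry, the queue is full
	lda FreeNext-1,x ; unlink: the next free entry...
	sta FreeHead ; ...becomes the new head of the free list
Done
	rts ; free entry number returned in X (0 = none)
	.endl

	.local FreeEntry
	lda FreeHead ; entry number to release arrives in X
	sta FreeNext-1,x ; the released entry points at the old head
	stx FreeHead ; the origin pointer now links to the newly freed entry
	rts
	.endl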


Thanks for the explanation. So it seems to me that as long as the messages for a given process are consecutive in the main queue, everything will work perfectly using a head, tail and pending message counter in each process's pointer table. I wrote (yet) more linked list code last night which inserts a new node after node N (i.e. a process's "last message" pointer). So - when sending a message:

 

* If there are currently no pending messages for the target process, place the message at the end of the queue and point the process's "first" and "last" message pointers at the new message. Set pending message count to 1.

* If there are already pending messages for the target process, place the message immediately after the node pointed to by the process's "last message" pointer, update the pointer, and bump the pending message count.

 

When pulling a message off the queue:

 

* Get the process's "first message" pointer, grab the message, point "first message" at the "next" pointer, and decrement the pending message count.

 

This keeps things nice and simple, with just a couple of extra pointers in the process descriptions. I suppose messages intended for more than one recipient will have to be instanced multiple times, because messages without a specific addressee just don't fit this scheme at all. Still don't know what SymbOS does with those.

 

Anyway: we've banished linear searching from almost every area now. :D


Here's the revised "Send Message" routine, using the free node list and the per-process head and tail pointers:

 

; ----------------------------------------------------------------------------
; SendMessage
; XY = message buffer
; A = Receiver ID (-1 = send to any process)
; Returns: Y = 1 = success, 2 = receiver doesn't exist, 0 = queue full
; ----------------------------------------------------------------------------

	.local MessageSend
	sty KernelPtr1+1 ; save message address MSB
	pla ; get X (buffer LSB)
	sta KernelPtr1
	pla ; get receiver ID (A)
	sta KernelTmp1
	lda QueueEntries
	cmp #MaxQueueEntries
	bcs QueueFull
	ldy KernelTmp1 ; we know we have space for the message, so see if addressee exists
	lda Process.State,y
	bpl @+
	ldy #KernelStatus.NonexistentProcess ; if receiver doesn't exist
	bne Fail
@
	ldx QueueFreeNodeHead ; get a free message node
	lda QueueFreeNodeNext-1,x
	sta QueueFreeNodeHead ; free node is in X

	lda Process.PendingMessages,y ; does process already have pending messages?
	beq NonePending ; if not, just add message to end of queue and initialise process head and tail pointers
	lda Process.MsgQueueTail,y ; get process's last queued message
	bne AddMsg
NonePending ; addressee has no pending messages, so add message at end of queue
	txa
	sta Process.MsgQueueHead,y ; the new node becomes the process's first pending message
	lda QueueTailNode
	beq IsEmpty
AddMsg
	tay
	sta QueuePrev-1,x ; our prev pointer
	
	lda QueueNext-1,y
	sta QueueNext-1,x ; our next pointer
	
	txa ; new node pointer
	sta QueueNext-1,y ; prev node's next pointer

	ldy QueueNext-1,x
	beq AtEnd
	txa
	sta QueuePrev-1,y ; next node's prev pointer
	ldy KernelTmp1 ; get IPID
	bne UpdateTail

IsEmpty ; come here if we're starting with a completely empty message queue
	stx QueueHeadNode ; our newly allocated node
	lda #0
	sta QueuePrev-1,x ;init prev and next pointers
	sta QueueNext-1,x
	txa
	ldy KernelTmp1 ; get IPID
	sta Process.MsgQueueHead,y ; point head at this, the process's only pending message
AtEnd
	stx QueueTailNode ; update tail pointer if we're at end of list
	ldy KernelTmp1 ; get IPID (Y may hold the NULL "next" pointer when we arrive here)
UpdateTail
	txa
	sta Process.MsgQueueTail,y ; point process's tail pointer at the newly added message

CopyMessage
	inc QueueEntries ; say we added a message
	inc Process.PendingMessages,y ; bump process's pending message count
	
	lda ActiveTask ; KernelSenderID
	sta QueueSenderID-1,x
	lda KernelTmp1 ; KernelReceiverID
	sta QueueReceiverID-1,x

	lda QueueNodeLo-1,x ; now make pointer to the new node
	sta KernelPtr2
	lda QueueNodeHi-1,x
	sta KernelPtr2+1
	ldy #MessageSize-1
@
	lda (KernelPtr1),y
	sta (KernelPtr2),y
	dey
	bpl @-

	ldy KernelTmp1 ; get IPID
	lda Process.State,y ; is receiving process sleeping until it gets a message?
	bne NotSleeping ; if process is ready or already waking up, just leave message in queue
	lda Process.WakeOnSender,y ; is process sleeping on message from this sender?
;	cmp #ProcessID.Any
;	beq StartWakeUp ; if receiver is waiting on message from any process, wake it up
	cmp ActiveTask ; otherwise, see if it's waiting on a message from us
	bne NotSleeping ; if not, leave it sleeping
StartWakeUp
	lda #ProcessState.WakingUp ; say process has incoming message and should be woken up by the scheduler
	sta Process.State,y
NotSleeping
	ldy #KernelStatus.OK ; say OK
	jmp ReturnWithContextSwitch
QueueFull
	ldy #KernelStatus.QueueFull
Fail
	jmp KernelReturn
	.endl
Note "cmp #ProcessID.Any" is currently commented out, since we have no way to deal with messages for multiple recipients yet using this data structure. The receive message code is now getting the same treatment. The only loop is now the message buffer copy code, so we've made lots of great cycle savings here. Edited by flashjazzcat
