
New GUI for the Atari 8-bit


flashjazzcat


Did someone say "polluting the thread"?

You stewed for 10 minutes, then edited your response from "Ho!" to this? I guess I got to you. Bravo! But please, just stop trashing this [assembly-based] thread that is technically light years beyond editing someone else's high-level language code or copying someone else's proven SIO2PC design. You said you would stay out of this thread. Once again, here (rather than "blah-blah-blah") is the DIRECT LINK and DIRECT QUOTE where you said you were...

 

 

In any case the best approach for me from now on would be just to keep quiet, keep my opinion to myself, and wait and see. That's what I should have done in the first place anyway...

 

Is that clear enough for you? Any "blah blah blah" involved? JUST KEEP YOUR OPINION TO YOURSELF, LIKE YOU SAID YOU WOULD. Simple enough for ya?


Just yesterday I read an article in Retro Gamer about construction kits in the '80s... and Pinball Construction Set was influenced by the Xerox experiments Bill saw at Apple as an employee? The same ones Steve Jobs saw?

 

So a GUI was already in the heads of the industry.

 

- Pinball Construction Set

- SEUCK

- Designers Pencil

- Racing Destruction Set

- Loderunner

- etc.

 

But just my 2 cents. I love watching this, but it's definitely beyond my skills to do this kind of scheduling, interrupt handling, window handling, etc. ;) I am just heading into the world of Fractalis and 3D operations and 3D engines... a completely different cup of tea. :)


I said it could have been done but Atari software engineers were busy creating salable and usable products instead of spending their time on ego boosters.

And there we go: the transition from stage 1) to stage 2) as described by wood_jl, amply demonstrated.

 

Yes: by definition, it could have been done if someone had been able to sit down and do it. The hardware hasn't changed in thirty years, so obviously something about development has changed. We have a different view of what's possible with the benefit of hindsight. In any case, we've had from you:

 

1) It can't be done.

2) OK - maybe it can be done, but it can't be done well.

3) Maybe it can be done, and done well, but it's taking so long that I'm a fraud and a liar.

4) Maybe it can even be done well, but it's academic/pointless, and it's taking too long.

5) OK - maybe it can be done well and might be finished in a year or two, but so what: it could have been done thirty years ago. So it's no big achievement.

 

We get it. Your point of view is crystal clear. There's nothing especially exemplary about this project: indeed, it's overdue, over budget (that fiver's been spent), and doesn't really prove anything. Message received and understood. I think I'll go ahead and plod on with it anyway, if that's OK with you.

 

In fact, your opinion on why "it could have been done" wasn't especially objectionable, but unless everyone just nods their heads in blind agreement, the metamorphosis begins afresh. I've already explained to admin how tiresome this is...

 

Hey guys, let's focus on the project. If you want to have a philosophical discussion on whether or not Atari could have written a similar GUI in the 80s, that should probably take place in another thread.

^^This.

 

Just yesterday I read an article in Retro Gamer about construction kits in the '80s... [...]

Yep - there's nothing at all unprecedented about GUIs on 8-bit computers. Been plenty of 'em, and some of them were quite usable. Been a few multi-tasking OS's as well (Lunix springs to mind), but a decent GUI on top of a multi-tasking OS is slightly less common. That's why it's doubly interesting when both goals are undertaken together. If there's anything to be proved, it's that the 6502 is a little more capable than we might have assumed; also that there are a lot of skilled and knowledgeable people around here who are really keen to help. It should be obvious that if no-one else wanted to see the thing realized, motivation would be next to nil. :)

 

Now - if we can't stay on topic, I'll stop posting updates - it's as simple as that.

Edited by flashjazzcat

I prefer "windows" library which does not use the CIO, works in text mode on standard atari with 64K. Do you have something to offer?

 

 

===

 

Of course, I find the current project (windows in graphical mode) very interesting.

Edited by xxl

Yes, something like this.

Drop me a PM or email, will you? I don't think I'll have time to document the library any time soon, so I could use some help with it to make it "publishable". I can send the library "as is", but it'll be up to you to make sense of it. If this sounds appealing, just let me know. ;)

 

Do external devices need to be optimised for multitasking to gain full support?

Something I realized when abandoning the Atari OS is that the DCB at page three will have to stay put, along with the ZPSIO stuff starting at $30, otherwise external devices won't work at all. Of course the PBI supports CIOV, so we'll have to preserve that as well. As for multi-tasking: this would work great with PIO devices, but I don't think the busy interrupt load will sit well with serial IO, so we'll probably be better off shutting everything down when we call the SIO. The file system level can be pre-empted no problem, though: just not individual block transfers to and from devices.

 

I want external devices to work without alteration, so we'll have to be careful what goes where.
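Just to sketch that division of labour in rough C (purely a hypothetical model: the real kernel is 6502 assembly, and none of these names exist in it), pre-emption gets masked only for the duration of a single block transfer, while the file system layer above it stays pre-emptible between blocks:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical kernel hooks: mask/unmask the scheduler's timer IRQ. */
static volatile bool scheduler_enabled = true;

static void scheduler_pause(void)  { scheduler_enabled = false; /* SEI on the 6502 */ }
static void scheduler_resume(void) { scheduler_enabled = true;  /* CLI on the 6502 */ }

/* Stand-in for one SIO block transfer: serial IO is timing-critical,
   so it runs with multitasking shut down. */
static int sio_transfer_block(uint8_t device, uint16_t sector, uint8_t *buf)
{
    scheduler_pause();
    /* ...POKEY-timed serial transfer would happen here... */
    scheduler_resume();
    return 0; /* 0 = OK */
}

/* The file system level, by contrast, may be pre-empted between blocks. */
static int fs_read_sectors(uint8_t device, uint16_t first, uint16_t count,
                           uint8_t *dest)
{
    for (uint16_t i = 0; i < count; i++) {
        int err = sio_transfer_block(device, first + i, dest + 128u * i);
        if (err) return err;
        /* The scheduler is free to switch tasks here. */
    }
    return 0;
}

int main(void)
{
    uint8_t buf[512];
    return fs_read_sectors(1, 100, 4, buf);
}
```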

 

On a more positive note, I just told MrFish the window manager appeared to have been comprehensively munged during the transition to cartridge: I discovered this last night when attempts to render the desktop and icons resulted in either nothing at all or a huge crash. Given that the entire window manager was rewritten last November, I expected days of painful debugging. Fortunately, after about an hour, I found a bug in the ROM-based jump table, and the desktop - backdrop, icons, labels and all - suddenly sprang into view when the cart booted up. Big sigh of relief.

Edited by flashjazzcat

Jon, I think it has been a wonderful thing that you have been documenting, with the intention of explaining the development process as you have, in a conversational manner; not just for us to draw on, but for your own recollection as well. And I hope you continue, regardless of where this ends up, for your own sake. It not only provides an incentive to document in a manner one wouldn't if left to one's own thoughts, often with haphazard notes (though you probably do that as well for your own reference), but it helps organize the code construct in a more human-relatable way, through natural language. It is easy to get lost in there among the logic trees, and you sometimes need another grouping hierarchy. You're commenting your code, just in a more elaborate way. ;)


Only when we exceed four processes do we need to start copying stacks...

 

Here's another thought...

 

You assign a fixed part of the stack to the UI/Desktop/kernel and divide the rest into slots, similar to the ZP slots, and hand them out to processes based on how much stack space they need. Old UNIX (and I mean OLD old) had the process request a certain amount of stack and heap space (starting a new process could actually fail if there wasn't enough space). BTW even Minix 3 (which is not that old, actually) had similar limitations before they implemented virtual memory.

 

There's a problem, though: after a while the stack might get fragmented, similar to heap fragmentation with lots of mallocs/frees.

 

A solution might be to have fixed size stack slots, like:

 

64, 32, 32, 16, 16, 16, 16, 8, 8, 8, 8, 8, 8, 8, 8

 

A process, even the desktop/UI, requests a certain amount of stack space and gets assigned the smallest slot that satisfies it. A lot of processes probably won't need much stack space beyond a few pha/pla combos and a few jsr's (depending on how deep the call graph can get).

Also, it is not a crime to limit the number of processes to, say, 12 or 16 and have the stack and page-zero pool reflect that. Depending on the available resources (which is quite limited on an Atari 8-bit) even on modern Unices fork() or exec() can fail.
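For what it's worth, here's a toy C model of the best-fit idea (slot layout as listed above; every name here is invented): a process asks for so many bytes and receives the smallest free slot that satisfies the request, or fails outright, just as fork() could on old UNIX.

```c
#include <stdio.h>
#include <stdint.h>

/* Hypothetical fixed slot layout, as suggested above (sizes in bytes). */
static const uint8_t slot_size[] = { 64, 32, 32, 16, 16, 16, 16,
                                     8, 8, 8, 8, 8, 8, 8, 8 };
enum { NSLOTS = sizeof slot_size / sizeof slot_size[0] };
static uint8_t slot_owner[NSLOTS]; /* 0 = free, otherwise the owning PID */

/* Best fit: the smallest free slot that satisfies the request.
   Returns a slot index, or -1 if the request can't be met. */
int stack_alloc(uint8_t pid, uint8_t bytes_needed)
{
    int best = -1;
    for (int i = 0; i < NSLOTS; i++) {
        if (slot_owner[i] == 0 && slot_size[i] >= bytes_needed &&
            (best < 0 || slot_size[i] < slot_size[best]))
            best = i;
    }
    if (best >= 0) slot_owner[best] = pid;
    return best;
}

void stack_free(int slot) { slot_owner[slot] = 0; }

int main(void)
{
    printf("PID 3 -> slot %d\n", stack_alloc(3, 20)); /* lands in a 32-byte slot */
    printf("PID 4 -> slot %d\n", stack_alloc(4, 6));  /* lands in an 8-byte slot */
    return 0;
}
```

Because the slot sizes are fixed, a freed slot is always reusable as-is, which sidesteps the fragmentation problem described above.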


And there we go: the transition from stage 1) to stage 2) as described by wood_jl, amply demonstrated. [...] Now - if we can't stay on topic, I'll stop posting updates - it's as simple as that.

 

You're becoming a troll on your own thread, get over with this already, if you think it's worthwhile then it's worthwhile to you. Don't try to prove it to me with endless tirades just do it and get your bravos from the usual crew. Have you ever seen me cry over the AspeQt project despite all of your attempts to make me look like an ignorant twit, no, because I know who I am and don't need anybody's approval nor care about their disapproval, grow up!!!.


Jon, I think it has been a wonderful thing that you have been documenting [...] You're commenting your code, just in a more elaborate way. ;)

Yeah - thanks. I think it's a valuable aid, just talking things through in a manner which leaves a permanent record. And believe me, the code is commented in the conventional way too. :)

 

You assign a fixed part of the stack to the UI/Desktop/kernel and divide the rest into slots, similar to the ZP slots, and hand them out to processes based on how much stack space they need. [...]

That's a pretty good idea. The various slot sizes sort of remind me of buddy allocation, in fact. The matters of page zero and the stack are really closely related and it makes sense to tackle them in a similar way. I think Jac! and I were chatting a while back about optimal stack slot size, and 32 bytes (instead of the 64 in use at the moment) was mentioned, which would probably be sufficient 99 per cent of the time (as an aside, the inter-bank JSR mechanism pushes the current bank onto the hardware stack; I coded up two versions: one which uses the stack, and another which uses a dedicated stack for the bank numbers, and funnily enough there was absolutely nothing to choose between them in terms of execution time, although obviously the version which uses a dedicated stack uses less hardware stack space).
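For illustration, the dedicated-stack variant of the inter-bank JSR might be modelled in C like this (a sketch only: the bank numbers, the stack depth and all names are made up):

```c
#include <stdio.h>
#include <stdint.h>

/* A small dedicated stack for cartridge bank numbers, so inter-bank
   calls consume no hardware stack space. Depth is arbitrary here. */
static uint8_t bank_stack[16];
static uint8_t bank_sp;
static uint8_t current_bank;

static void set_bank(uint8_t b)
{
    current_bank = b; /* on real hardware: write the cart's bank register */
}

/* Call fn in bank 'target', then restore the caller's bank. */
void far_call(uint8_t target, void (*fn)(void))
{
    bank_stack[bank_sp++] = current_bank; /* push the caller's bank */
    set_bank(target);
    fn();
    set_bank(bank_stack[--bank_sp]);      /* pop and restore it */
}

static void routine_in_bank_3(void) { puts("running in bank 3"); }

int main(void)
{
    far_call(3, routine_in_bank_3);
    printf("back in bank %d\n", current_bank); /* prints 0 again */
    return 0;
}
```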

 

But yep: allowing a process to choose between a number of stack sizes is quite appealing, and a nice way of making best use of the hardware stack. Eight bytes is probably too small to be useful (the exec mechanism loads up the new process's stack cache with a six-byte frame [PC high, PC low, A, X, Y and P] which the scheduler PLAs when the task gets its first CPU slot, and the same thing is pushed every IRQ), but 16, 32, and 64 sound like good numbers. One saving grace about a segmented stack is that - as I've said before - even if we have eight processes running, a lot of them will be asleep most of the time, so their cached stacks will stay cached until such time as the user brings the application to the front and starts interacting with it: at that point, one of the stack slots gets swapped over, and we continue until another sleeping task with a cached stack is woken up. The UI process doesn't sleep, of course, and nor will any background process which is sitting there doing a long calculation or generally not waiting around for an event from the UI manager. So - the segmented stack isn't much of a bottleneck in typical operational situations, but I really like your idea of giving applications a choice regarding slot size, as long as having different slot sizes doesn't add undue complexity to the slot swapping mechanism.
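A rough C model of what exec might preload into a new task's stack cache, so that the scheduler's ordinary IRQ-exit path (pull Y, X and A, then RTI) "returns" straight into the task's entry point. The byte order and initial flag value here are my assumptions, not the project's actual layout:

```c
#include <stdint.h>

/* Hypothetical per-process stack cache. */
typedef struct {
    uint8_t cache[64]; /* cached stack contents */
    uint8_t sp;        /* saved 6502-style stack pointer within the slot */
} task_stack;

void exec_init_frame(task_stack *t, uint16_t entry)
{
    uint8_t *top = &t->cache[sizeof t->cache]; /* the stack grows downward */
    *--top = (uint8_t)(entry >> 8);   /* PC high (pulled last, by RTI)   */
    *--top = (uint8_t)(entry & 0xFF); /* PC low                          */
    *--top = 0x20;                    /* P: initial flags are a guess    */
    *--top = 0x00;                    /* A                               */
    *--top = 0x00;                    /* X                               */
    *--top = 0x00;                    /* Y (pulled first)                */
    t->sp = (uint8_t)(top - t->cache - 1); /* 6502 SP points below the top item */
}

int main(void)
{
    task_stack t;
    exec_init_frame(&t, 0x4000); /* 0x4000 is an arbitrary entry address */
    return 0;
}
```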

 

And yes - sixteen tasks is PLENTY (and is the value of MaxTasks in GUIDEF.S). :)

Edited by flashjazzcat

You're becoming a troll on your own thread, get over with this already, if you think it's worthwhile then it's worthwhile to you. Don't try to prove it to me with endless tirades just do it and get your bravos from the usual crew. Have you ever seen me cry over the AspeQt project despite all of your attempts to make me look like an ignorant twit, no, because I know who I am and don't need anybody's approval nor care about their disapproval, grow up!!!.

 

This post seems to invite grammar correction. I'm no English major, but I'll take a stab at this one, and I'm open to correction, as usual.

 

 

You're becoming a troll on your own thread. Get over with this already. If you think it's worthwhile, then it's worthwhile to you. Don't try to prove it to me with endless tirades. Just do it, and get your bravos from the usual crew. Have you ever seen me cry over the AspeQt project, despite all of your attempts to make me look like an ignorant twit? No, because I know who I am and I don't need anybody's approval, nor do I care about their disapproval. Grow up!!!.

 

 

I now formally "bravo" the GUI project. :)


Can we all make a resolution not to acknowledge the fool when the next inevitable retort comes along? Seriously: the admin line is "use the ignore facility", but what is the point when everything is quoted verbatim even if I choose to ignore it? I give up...

 

If YOU stop replying to everything I write then no resolution is needed, and you know it. Or were you really making a call to the other guy, who desperately tries to get attention with colorful letters as he otherwise cannot show any signs of intelligence?

Edited by atari8warez

Speak roughly to your little boy,
And beat him when he sneezes:
He only does it to annoy,
Because he knows it teases.

 

Ignoring the troll is the only way to disarm them; long diatribes, analyses of grammar, and point-by-point rebuttals just feed them with delicious tears.

 

And Jon -- I bet you never expected to get a Lewis Carroll quote in this thread :)


Can we all make a resolution not to acknowledge the fool when the next inevitable retort comes along? Seriously: the admin line is "use the ignore facility", but what is the point when everything is quoted verbatim even if I choose to ignore it? I give up...

Oops, saw this after I made that other post. Yeah, not the place, I guess - I'll stop here.


Can we all make a resolution not to acknowledge the fool when the next inevitable retort comes along? Seriously: the admin line is "use the ignore facility", but what is the point when everything is quoted verbatim even if I choose to ignore it? I give up...

 

Apologies. I agree. A unified front of ignoring the troll is the best course of action. Ignore the attention-whore.


For a somewhat coarse measurement you could keep a running tally of the difference between VCOUNT at entry and exit for each process. Then every so often report the sum over the total for each process and reset the counters. Problem is you'd probably need 16-bit tallies to even things out over many frames. And you'd need a 16-bit divide to compute the percentage. Though you could probably just use a look-up table on the high byte if you read the counters at precise intervals, like say every 50 VBIs.

Finally digested this properly and it sounds like the perfect solution. Since the quanta aren't tied to the VBL, we'd just need to cater for the possibility of VCOUNT being lower when the process gave up the CPU than it was when it started, which is no problem (in any case, a quantum is never longer than 1/50 or 1/60 second anyway, so VBLANK is just about made for the job).

 

The percentage calculation seems beautifully simple if we spread the sample time across 10,000 VCOUNT ticks, which is equivalent to roughly 64 PAL frames (so we get an updated sample just under once a second). Divide each process's 16-bit VCOUNT tally by 100 and we have the percentage CPU usage. Subtract the idle process's count and we also get the percentage of time spent in the kernel. Or we could sample every 6,400 ticks and do the division by 64 with two left shifts followed by taking the high byte (or using the big bit-shifting LUT we already have). Or sample every 25,600 ticks, and then the MSB of the 16-bit counter IS the percentage value (in other words, divide by 256). Nice. :)
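As a sanity check on that arithmetic, a toy C model of the 25,600-tick scheme (the names, and 156 VCOUNT values per PAL frame, are my assumptions):

```c
#include <stdio.h>
#include <stdint.h>

#define MAX_TASKS   16
#define VCOUNT_WRAP 156 /* distinct VCOUNT values per PAL frame */

static uint16_t tally[MAX_TASKS]; /* per-process tick counters */

/* Called at every context switch with VCOUNT at the start and end of
   the outgoing task's quantum. A quantum never exceeds one frame, so
   VCOUNT can have wrapped at most once. */
void account(int pid, uint8_t vc_in, uint8_t vc_out)
{
    uint8_t delta = (vc_out >= vc_in) ? (uint8_t)(vc_out - vc_in)
                                      : (uint8_t)(VCOUNT_WRAP - vc_in + vc_out);
    tally[pid] += delta;
}

/* After 25,600 ticks (~164 PAL frames), the MSB of each 16-bit tally
   IS the percentage: tally / 25600 * 100 == tally / 256. */
void report(void)
{
    for (int i = 0; i < MAX_TASKS; i++) {
        if (tally[i]) printf("task %d: %u%%\n", i, tally[i] >> 8);
        tally[i] = 0;
    }
}

int main(void)
{
    account(3, 20, 120); /* one quantum lasting 100 ticks */
    tally[3] += 12700;   /* pretend more quanta accumulated: 12,800 total */
    report();            /* prints "task 3: 50%" */
    return 0;
}
```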


I implemented per-process CPU usage logging, although there's no task manager to display the stats yet. However, each process has a 16-bit counter which is continually updated with the number of VCOUNT ticks which elapsed during its quantum. I figure we could just have a "sample" kernel function which resets all the counters, then sends the caller the tick counts for every process once the sample time has elapsed (say, 12,800 VCOUNT ticks). I also implemented the idle process as a task (ID 1 - task IDs happen to begin at 1 here).

http://youtu.be/Y5OwmZGpLGg

In this rough and ready demonstration, we have four tasks:

Task     PID   Description
Idle      1    Infinite loop
System    2    Filesystem, services (currently dummy: sleeping waiting for a message)
UI        3    The UI manager
Test      4    Test application (desktop "finder" app)

In the video, the idle process is copying VCOUNT to $D01A in a closed loop (creating the horizontal colour bands). Idle has priority 7 (the lowest), while system and test have priority 4, and the UI priority 1 (the highest). The scheduling scheme (following SymbOS exactly) only gives CPU time to the lower-priority tasks if those with a higher priority did not exhaust their CPU allocation. If, on the other hand, a task is pre-empted by the IRQ, we finish the rest of the tasks with the same priority as the interrupted one, before resuming at the top of the order. Conversely, if all higher priority tasks are asleep or idle, the lower priority tasks (in this case, System Idle) get CPU time. So we can see that when the menus are being drawn and erased, System Idle isn't running.

 

The scheduling scheme used in SymbOS seems a good one to follow, since it gives absolute priority to the user interface, which is effectively never interrupted unless one creates another process with the same special top priority (which wouldn't be advisable if the UI is to be kept responsive).
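A toy C model of that behaviour (a sketch of the scheme as described, not the kernel's actual data structures): the highest-priority awake task always wins, and tasks at the same level rotate.

```c
#include <stdbool.h>
#include <stdio.h>

#define MAX_TASKS 16

typedef struct {
    bool used;
    bool asleep;   /* waiting on a message or event */
    int  priority; /* 1 = highest (UI) ... 7 = lowest (idle) */
} task;

static task tasks[MAX_TASKS];
static int last_run;

/* Pick the next task: scan starting just after the last task run, so
   ties between equal priorities resolve round-robin. */
int schedule_next(void)
{
    int best = -1, best_prio = 8;
    for (int n = 1; n <= MAX_TASKS; n++) {
        int i = (last_run + n) % MAX_TASKS;
        if (tasks[i].used && !tasks[i].asleep && tasks[i].priority < best_prio) {
            best = i;
            best_prio = tasks[i].priority;
        }
    }
    if (best >= 0) last_run = best;
    return best; /* -1 never happens in practice: idle never sleeps */
}

int main(void)
{
    tasks[1] = (task){ true, false, 7 }; /* idle             */
    tasks[3] = (task){ true, false, 1 }; /* UI               */
    tasks[4] = (task){ true, true,  4 }; /* test app, asleep */
    printf("next: task %d\n", schedule_next()); /* the UI (task 3) wins */
    return 0;
}
```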

 

Something of note here is that the UI process - as designed - doesn't go to sleep waiting for messages, since it has to periodically snapshot the mouse to see if it's in the menu bar, which becomes highlighted without mouse clicks. Once the UI has checked the mouse, it immediately idles. I wondered if the UI could go to sleep completely and be woken by messages from the mouse and keyboard drivers (which buffer everything using queues already anyway), but this would be a little more complex and it seems to work for now with the UI just idling when it's done what it has to do. MrFish and I have ideas for tooltips and such (which are really simple to implement), and the mouse cursor will need to change shape contextually when it's over a text box, for example, so the UI process can't get away with sleeping until something is pressed or it receives a message from an application.

 

In any case: four processes, three of which use the messaging system, and no serious bugs encountered yet, to my surprise.

 

@Prodatron: the A8 kernel uses a timer IRQ for the scheduler, but I was wondering about the pros and cons of using the NMI VBLANK interrupt at some point in the future. One of the nice things about the timer IRQ, however, (aside from the fact it's masked with SEI) is that the timer can be restarted every context switch, so we don't - say - yield to the next task just a few cycles before yet another IRQ hits. I wondered how important this really is, though, or if SymbOS uses a fixed-frequency interrupt like the Atari's vertical blank NMI.
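For comparison's sake, a crude C model of why the restartable timer is attractive (every value here is invented): reloading the countdown at each context switch guarantees the incoming task a full quantum, which a fixed-frequency interrupt can't.

```c
#define QUANTUM_TICKS 20 /* hypothetical timer ticks per quantum */

static volatile int countdown = QUANTUM_TICKS;

void context_switch(void)
{
    /* ...save the outgoing task, pick the next one... */
    countdown = QUANTUM_TICKS; /* restart the timer: a fresh, full quantum */
}

/* With a fixed free-running source (like the VBLANK NMI) instead, a task
   switched in just before the interrupt would lose most of its quantum. */
void timer_irq(void)
{
    if (--countdown <= 0)
        context_switch();
}

int main(void)
{
    for (int t = 0; t < 100; t++)
        timer_irq(); /* 100 ticks -> five context switches */
    return 0;
}
```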

 

Anyway: keeping track of CPU usage is really economical, and the idle process is just another task with a running percentage counter. I was quite wrong to speculate that we could calculate time spent in the kernel using this method, though: obviously this would involve expensive manipulation of the counters every time we called the kernel, but I think per-process and overall CPU usage are good enough. ;)

Edited by flashjazzcat
