
New GUI for the Atari 8-bit


flashjazzcat


As a Boss-X user, I was wondering if you had any plans for an icon editor that would let you pick the icons you want. Also, will the GUI support artifacting colors? Thanks, this thing looks awesome, can't wait!!

Surely. MrFish wants to write the resource editor, which will let you create your own icons in 16x16 and 12x12. As for artifacting: I'm not a fan of it personally (in fact I consider it the scourge of the Atari display on PAL machines, where the effect is generally "unpredictable"), but obviously the dithering used in various parts of the UI will likely create colours on NTSC systems as a side-effect. I suppose you'll be free to tweak the desktop patterns, etc., to create the kind of effect you want.

 

@James: I'll get back to your posts when I settle back into coding this evening. I'm fairly happy that we have pretty much all the info we need on the banked memory manager now (and plenty of options to choose from), but my most pressing concern now is finalizing the resource format and figuring out where to put the code. The memory manager won't "gel" until these two considerations are looked into properly. The demo library is compiling to 17KB at the moment, which includes the in-line menu resource in the demo app, 2.5KB of look-up tables, two fonts and a dozen icons. Clearly this isn't going to fit under the OS - even without the fonts and icons (when they move to their rightful banks) - so I'd rather find a home for the code now and then build the memory manager around that.

 

If the GUI uses the cart space at $A000-$BFFF, the screen will sit between $8000-$9FFF, and the display list and interrupt service routines will have to go below $4000. This doesn't leave much room for apps, so a natural place to put them is under the banking region and under the OS. App developers could also get clever and put code into extended banks.

 

The flip side of the coin is the GUI sitting in the Shadow RAM, releasing 8KB of conventional memory, but likely using $C000-$D000 as a code buffer, swapping segments in and out of that region. App developers would still have the option of putting code into extended RAM, but RealDOS, DOS-XE, and Sparta 3.x would be no-go.


Clearly this isn't going to fit under the OS - even without the fonts and icons (when they move to their rightful banks) - so I'd rather find a home for the code now and then build the memory manager around that.

Well, the allocation routines I posted really don't care where you put the code, but you are better off having FREE RAM in one large block. It's easier to manage and allows larger allocations to take place.

 

If the GUI uses the cart space at $A000-$BFFF, the screen will sit between $8000-$9FFF, and the display list and interrupt service routines will have to go below $4000. This doesn't leave much room for apps, so a natural place to put them is under the banking region and under the OS. App developers could also get clever and put code into extended banks.

Where do the DOSes normally sit? *If* I didn't have to worry about backwards compatibility, I'd fill the areas above I/O and below bank switching first. That gives you a larger block of memory for apps.

 

The flip side of the coin is the GUI sitting in the Shadow RAM, releasing 8KB of conventional memory, but likely using $C000-$D000 as a code buffer, swapping segments in and out of that region. App developers would still have the option of putting code into extended RAM, but RealDOS, DOS-XE, and Sparta 3.x would be no-go.

Swapping code that way would be painfully slow. You couldn't guarantee a task could respond within a small amount of time.

 

You are trying to write this like GEM Desktop for the PC. It was a GUI on top of DOS. If you do that, you are stuck with the limitations of that approach. If you go more Mac-like, you say the hell with compatibility and try to arrange the memory map as best you can.

 

If you want to run multiple tasks, you could dedicate a bank to each task. Then you can switch tasks easily just by switching banks, and programs don't have to be coded to specific addresses to play nice with each other. You just code them for a specific address in the bank area. Nice for small apps but not good for ones that want to use multiple banks for data.

An alternative is to support two different kinds of apps: one that fits in a bank and one that fits in main memory. Multiple bank apps could be loaded at once, but only one main-memory app could be loaded. Then you'd need a flag on each bank to indicate it has been allocated to a bank app. Not the most elegant approach, but probably doable.
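
Just to make that concrete, the per-bank bookkeeping could be as simple as this (a C sketch purely for illustration - the real thing would be 6502 assembly, and every name here is made up):

#include <stdint.h>

#define NUM_BANKS 4                /* a stock 130XE has four 16KB extended banks */

enum bank_use { BANK_FREE, BANK_APP, BANK_DATA };

struct bank_entry {
    uint8_t use;                   /* BANK_FREE, BANK_APP or BANK_DATA    */
    uint8_t app_id;                /* which bank app owns it, if BANK_APP */
};

static struct bank_entry bank_table[NUM_BANKS];

/* find a free bank for a new bank-resident app; returns -1 if none left */
int alloc_app_bank(uint8_t app_id)
{
    for (int i = 0; i < NUM_BANKS; i++) {
        if (bank_table[i].use == BANK_FREE) {
            bank_table[i].use = BANK_APP;
            bank_table[i].app_id = app_id;
            return i;              /* caller switches the bank in and loads the app at a fixed address */
        }
    }
    return -1;
}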

 

OS-9 had the advantage of having a CPU supporting re-entrant/relocatable code. It could just allocate RAM for a program, stick it there and start it running. You aren't so lucky. If you used a P-code compiler for apps you could do the same thing, but then you lose speed. I suppose you could make a super-smart loader that basically patched addresses into the code when you start a program, but that doesn't sound very practical to me.

 

It looks to me like you have some tough choices, and there isn't going to be a perfect solution.

There is a reason the IIgs uses the 65816.


As you say, it's more hardware compatibility issues than a particular DOS he's dealing with, although there are some overlaps. This is just a heavily constrained environment.

 

I think a banked memory manager with a non-banked memory window is the way I would go. Apps request some memory and get back a compound pointer that can be used to dereference the storage. All memory access must go through system routines, including individual byte access. Behind the scenes, the manager swaps in the correct bank and copies the necessary page into the non-banked cache. As long as the requested memory is still within the cached page, no copies are done. When a new page is loaded, the dirty flag on the current cached page is checked and it's written back out if necessary before loading the new page. Slow? Of course. It could be mitigated somewhat by adding special-case routines, and by also providing app writers with a 'fast', local heap in their own bank with their code. They would use the fast heap for critical stuff, and use the 'slow' heap for less speed-critical/larger items. The advantage is that it would open up vast amounts of storage, and you wouldn't have to worry about banked code/memory conflicts.

 

So, for instance :

App code cannot span banks.

Each bank starts off with a heap segment ( say 1k, for instance ).

Apps can use system routines to allocate their local ( fast ) or banked ( slow ) memory.

Accesses to fast heap are direct, accesses to slow heap are indirect through system routines.

Slow heap routines are provided to allow copying of an entire page of slow heap data into the fast local heap.
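
To illustrate the slow-heap access path described above (a C sketch only, with invented names and a one-page cache - the real routines would be 6502 assembly):

#include <stdint.h>

#define PAGE_SIZE 256

/* a "compound pointer": which bank the byte lives in, and its offset within that bank */
struct far_ptr {
    uint8_t  bank;
    uint16_t offset;
};

static uint8_t  cache[PAGE_SIZE];     /* the non-banked window                 */
static uint8_t  cached_bank = 0xFF;   /* which bank/page is currently cached   */
static uint16_t cached_page = 0xFFFF;
static uint8_t  cache_dirty = 0;

/* provided elsewhere: raw copies with the right bank switched in */
extern void bank_read_page (uint8_t bank, uint16_t page, uint8_t *dst);
extern void bank_write_page(uint8_t bank, uint16_t page, const uint8_t *src);

static void load_page(uint8_t bank, uint16_t page)
{
    if (bank == cached_bank && page == cached_page)
        return;                                           /* still within the cached page: no copy */
    if (cache_dirty)
        bank_write_page(cached_bank, cached_page, cache); /* write back before replacing           */
    bank_read_page(bank, page, cache);
    cached_bank = bank;
    cached_page = page;
    cache_dirty = 0;
}

uint8_t slow_peek(struct far_ptr p)
{
    load_page(p.bank, p.offset / PAGE_SIZE);
    return cache[p.offset % PAGE_SIZE];
}

void slow_poke(struct far_ptr p, uint8_t value)
{
    load_page(p.bank, p.offset / PAGE_SIZE);
    cache[p.offset % PAGE_SIZE] = value;
    cache_dirty = 1;                                      /* mark the page for write-back */
}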

 

Well, any way you slice it, it's going to be really tough to do apps. I think something like the above is the best compromise. I don't like that an app would be limited to a single bank (because it needs to stay attached directly to its 'fast' heap), but something's got to give somewhere. This is why most of the 'GUI OS' applications for Atari have stumbled... the OS part looks good but there aren't any apps.

 

As far as tasking goes, I think it's unlikely that an actual preemptive threads implementation would be of any use, and it would likely add a lot of useless complications. I would have the GUI drive the apps occasionally with an 'idle time' event if it's been x milliseconds since their last event, rather than really trying to swap them in and out as threads.
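
Roughly, that 'idle time' dispatch amounts to something like this (C sketch, invented names; a real version would hang off the GUI's event loop):

#define IDLE_INTERVAL 10        /* jiffies between idle events; an arbitrary figure */
#define EVENT_IDLE 0

extern unsigned int jiffies;    /* e.g. incremented each vertical blank */

struct app {
    unsigned int last_event;    /* jiffy count of the last event delivered */
    void (*handle_event)(int event);
};

void pump_idle(struct app *apps, int count)
{
    for (int i = 0; i < count; i++) {
        if (jiffies - apps[i].last_event >= IDLE_INTERVAL) {
            apps[i].handle_event(EVENT_IDLE);   /* give the app some background time */
            apps[i].last_event = jiffies;
        }
    }
}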


Well, the allocation routines I posted really don't care where you put the code, but you are better off having FREE RAM in one large block. It's easier to manage and allows larger allocations to take place.

Note I was referring to the actual executable code. GUIs are quite code-heavy as well as data-heavy, and the prime concern right now is finding room to stretch my legs with the program code. Given the thing must run on a 130XE, putting GUI OS code in the banked RAM is hardly a solution, so a banked cartridge is looking increasingly attractive. Writing banked cart code will be no picnic, but it will immediately solve one space problem. As for data storage, I think we're just about on top of that now (see later).

 

Where do the DOSes normally sit? *If* I didn't have to worry about backwards compatibility, I'd fill the areas above I/O and below bank switching first. That gives you a larger block of memory for apps.

DOS is usually below $2000 (not counting those which partly reside under the OS or on a cart). Bearing in mind we need a clean 8KB for the 200 line screen display, the plan was to put the display RAM either immediately above or below the banking region (we have no choice, really). Then, depending on whether we use a cart, we have whatever is left above or below $4000-$7FFF, and the Shadow RAM (optionally), plus code space under the banking region itself.

 

Swapping code that way would be painfully slow. You couldn't guarantee a task could respond within a small amount of time.

Agreed. I've already abandoned the idea of using $C000-$D000 as a code buffer.

 

You are trying to write this like GEM Desktop for the PC. It was a GUI on top of DOS. If you do that, you are stuck with the limitations of that approach. If you go more Mac-like, you say the hell with compatibility and try to arrange the memory map as best you can.

Indeed I am. With the cooperation of an author/curator of SpartaDOS or one of its variants, I would happily write a complete operating system. I always fancied writing a DOS, but I feel that right now is not the ideal time to start. Unfortunately, it's rather like reinventing the wheel, and it would have to be backwards compatible with existing FMSes, since we want to be able to run legacy applications. Somewhere down the line, though, yes - I would like the GUI to encapsulate the FMS as well.

 

If you want to run multiple tasks, you could dedicate a bank to each task. Then you can switch tasks easily just by switching banks, and programs don't have to be coded to specific addresses to play nice with each other. You just code them for a specific address in the bank area. Nice for small apps but not good for ones that want to use multiple banks for data.

 

An alternative is to support two different kinds of apps: one that fits in a bank and one that fits in main memory. Multiple bank apps could be loaded at once, but only one main-memory app could be loaded. Then you'd need a flag on each bank to indicate it has been allocated to a bank app. Not the most elegant approach, but probably doable.

This is an attractive idea. Even if it didn't lend itself to some kind of cooperative multi-tasking, it would at least make task switching quite easy to implement.

 

OS-9 had the advantage of having a CPU supporting re-entrant/relocatable code. It could just allocate RAM for a program, stick it there and start it running. You aren't so lucky. If you used a P-code compiler for apps you could do the same thing, but then you lose speed. I suppose you could make a super-smart loader that basically patched addresses into the code when you start a program, but that doesn't sound very practical to me.

I have even considered making all the code relocatable (via a custom loader), although the best way to accomplish that would be to implement a symbol chain similar to that of SpartaDOS X. The fact is, SDX handles dynamic relocation automatically and is a dream to program for. You really don't feel like going back to writing DOS 2.5 apps after writing relocatable native SDX apps using MADS.
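
For anyone unfamiliar with how that kind of relocation usually works: the loader walks a fix-up table emitted alongside the binary and patches each 16-bit address operand by the load offset. A C sketch (invented names; not a description of the actual SDX format):

#include <stdint.h>
#include <stddef.h>

/* one entry per 16-bit address operand that needs adjusting once the
   code has been copied to its real load address */
struct fixup {
    uint16_t offset;                       /* position of the operand within the image */
};

void relocate(uint8_t *image, size_t image_len,
              const struct fixup *fixups, size_t nfixups,
              uint16_t assembled_base, uint16_t load_base)
{
    uint16_t delta = load_base - assembled_base;             /* wraps correctly mod 64K       */
    for (size_t i = 0; i < nfixups; i++) {
        uint16_t off = fixups[i].offset;
        if ((size_t)off + 1 >= image_len)
            continue;                                        /* skip malformed entries        */
        uint16_t addr = image[off] | (image[off + 1] << 8);  /* little-endian 6502 operand    */
        addr += delta;
        image[off]     = addr & 0xFF;
        image[off + 1] = addr >> 8;
    }
}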

 

It looks to me like you have some tough choices, and there isn't going to be a perfect solution.

There is a reason the IIgs uses the 65816.

No doubt about it, but this is part of the fun. With a faster processor and the ability to easily smash the 64KB linear memory barrier, this thing would be far less of a challenge. I understand the limited memory addressing capabilities are a potential stumbling block, but I'm glad I've structured the project in this way. I already have a tangible system which is really going to take off when we overcome these issues. I wouldn't like to spend a month talking about this stuff, then be faced with finding out whether I can sample the mouse quickly enough. Once the banking is dealt with... instant fun! :D

 

As you say, it's more hardware compatibility issues than a particular DOS he's dealing with, although there are some overlaps. This is just a heavily constrained environment.

Not only hardware, but most DOSes are a nightmare. Because there was never any agreed policy on extended memory access, it's hard to use extended banks at an "executive" level and not expect them to be corrupted by RAMdisk access or some application which uses extended memory. The Last Word would be a potential offender: under DOS 2.5, it would quite happily obliterate any data that was already in extended banks not already occupied by a RAMdisk. SDX is a different kettle of fish, however (you may see why I heap such praise on this DOS, because it is well designed). LW queries SDX to find out how many free banks there are, and uses only those banks. The GUI - running under SDX - could tell DOS it has used some extended memory banks, and DOS would report fewer free banks to LW. Of course, I deliberately wrote the word processor to work with a variety of DOSes. Unfortunately, most other DOSes take no such pains to manage extended memory, so even the best written non-GUI apps are likely to require a complete reboot of the GUI when they exit.

 

I think a banked memory manager with a non-banked memory window is the way I would go. Apps request some memory and get back a compound pointer that can be used to dereference the storage. All memory access must go through system routines, including individual byte access. Behind the scenes, the manager swaps in the correct bank and copies the necessary page into the non-banked cache. As long as the requested memory is still within the cached page, no copies are done. When a new page is loaded, the dirty flag on the current cached page is checked and it's written back out if necessary before loading the new page. Slow? Of course. It could be mitigated somewhat by adding special-case routines, and by also providing app writers with a 'fast', local heap in their own bank with their code. They would use the fast heap for critical stuff, and use the 'slow' heap for less speed-critical/larger items. The advantage is that it would open up vast amounts of storage, and you wouldn't have to worry about banked code/memory conflicts.

Right. This is what I had in mind when we started discussing this a week or so back. Yes - there are overheads, but a caching system using indirect access is easier for the human brain to handle. It need not be that slow, either. I have a developmental version of The Last Word here which can load 48KB text files into extended banks and scroll through the text as if it was a contiguous memory block. All direct LDA/STA operations are replaced with indirect calls. I adopted a 64KB addressing scheme which transparently maps onto four extended banks. You provide an address, and you get the byte back. It's possible - also - to circumvent the indirection via time-critical code placed outside the banking region. The screen refresh works this way. It took about five seconds to scroll - byte-by-byte, calling cursor-right 48,000 times - from the top to the bottom of a 48KB document. This can be dramatically improved by using a different approach to scrolling (which is what the GUI text editor will do, by using an array of line pointers).
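
In other words (a C sketch of my reading of that addressing scheme - names invented, and the RAM pointer is just a stand-in so the fragment is self-contained): a 16-bit "virtual" address splits into a bank number and an offset inside the $4000-$7FFF window.

#include <stdint.h>

#define BANK_WINDOW 0x4000         /* the 130XE banking window at $4000-$7FFF */
#define BANK_SIZE   0x4000         /* 16KB per extended bank                  */

extern void select_bank(uint8_t bank);   /* sets the PORTB bank bits (defined elsewhere) */
extern uint8_t *atari_ram;               /* stand-in for the 6502 address space          */

/* fetch one byte from a 16-bit "virtual" address spread across four banks */
uint8_t ext_peek(uint16_t vaddr)
{
    uint8_t  bank   = vaddr / BANK_SIZE;   /* 0..3: which extended bank          */
    uint16_t offset = vaddr % BANK_SIZE;   /* position inside the banking window */
    select_bank(bank);
    return atari_ram[BANK_WINDOW + offset];
}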

 

So, for instance :

App code cannot span banks.

Each bank starts off with a heap segment ( say 1k, for instance ).

Apps can use system routines to allocate their local ( fast ) or banked ( slow ) memory.

Accesses to fast heap are direct, accesses to slow heap are indirect through system routines.

Slow heap routines are provided to allow copying of an entire page of slow heap data into the fast local heap.

Makes sense. I don't want to burden application developers (and that includes myself at the head of the queue) with worries about keeping portions of code outside the banked region. A BIG gift from the indirect memory access approach is that code running out of extended banks is easily able to access data in extended banks. This in itself is a compelling supportive argument.

 

Well, any way you slice it, it's going to be really tough to do apps. I think something like the above is the best compromise. I don't like that an app would be limited to a single bank (because it needs to stay attached directly to its 'fast' heap), but something's got to give somewhere. This is why most of the 'GUI OS' applications for Atari have stumbled... the OS part looks good but there aren't any apps.

With clever coding, I don't think an app has to confine itself to one bank. What goes for code running out of a banked cartridge will probably go just as well for code in banked RAM. I think that's enough responsibility for the software developer, who is already going to have to get used to structuring their programs in a whole new way: not only the OOP way, but the tiny-OOP way.

 

As far as tasking goes, I think it's unlikely that an actual preemptive threads implementation would be of any use, and it would likely add a lot of useless complications. I would have the GUI drive the apps occasionally with an 'idle time' event if it's been x milliseconds since their last event, rather than really trying to swap them in and out as threads.

If provision was made to copy page zero (and possibly pages four, five and six) to and from the bottom (or top) of an app's bank, some kind of judicious task switching could work really well here. It's still a lot more complex than it sounds, however. At least by adopting the code-in-banks approach, we leave the door open for more complex stuff later on. I think the "idle time" facility would be useful, though.
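
For what it's worth, the context save/restore that implies is only a handful of page copies (sketched in C with invented names; on the real machine these would be tight copy loops in assembly):

#include <stdint.h>
#include <string.h>

#define PAGE 256

/* stashed at a fixed spot at the bottom (or top) of the task's own bank */
struct task_context {
    uint8_t zp[PAGE];           /* page zero                */
    uint8_t p4[PAGE];           /* pages four, five and six */
    uint8_t p5[PAGE];
    uint8_t p6[PAGE];
};

extern uint8_t *atari_ram;      /* stand-in for the 6502 address space */

void save_context(struct task_context *ctx)
{
    memcpy(ctx->zp, atari_ram + 0x0000, PAGE);
    memcpy(ctx->p4, atari_ram + 0x0400, PAGE);
    memcpy(ctx->p5, atari_ram + 0x0500, PAGE);
    memcpy(ctx->p6, atari_ram + 0x0600, PAGE);
}

void restore_context(const struct task_context *ctx)
{
    memcpy(atari_ram + 0x0000, ctx->zp, PAGE);
    memcpy(atari_ram + 0x0400, ctx->p4, PAGE);
    memcpy(atari_ram + 0x0500, ctx->p5, PAGE);
    memcpy(atari_ram + 0x0600, ctx->p6, PAGE);
}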


Yeah, pseudo threads are a way to access the semantic simplicities of threading without as much of the nightmare under the hood.

 

But let me ask you this:

 

What would we get from a full tasking implementation that we don't have already?

Erm... full tasking. :D

 

Seriously, I guess nothing more than quick cut and paste between apps. It's not essential by any means. But it would be interesting if the system was designed with sufficient forward thinking to implement this in "version 2". :)


Reading through some of the discussion on memory optimization, I was reminded of a really good book I read some years back as part of my studies, "Small Memory Software". It identified design patterns for limited memory systems. Maybe it can help. You can read it online at http://www.cix.co.uk/~smallmemory/book.html


Reading the discussion, I am wondering what the scope of this project is. Are you writing a graphical file manager and program launcher, a GUI library for inclusion in other software projects, or a shared-tasking system wherein multiple programs can run simultaneously, each with its own "viewport"?


Reading through some of the discussion on memory optimization, I was reminded of a really good book I read some years back as part of my studies, "Small Memory Software". It identified design patterns for limited memory systems. Maybe it can help. You can read it online at http://www.cix.co.uk/~smallmemory/book.html

Thanks for the link. Very interesting reading indeed. :)

 

Reading the discussion, I am wondering what the scope of this project is. Are you writing a graphical file manager and program launcher, a GUI library for inclusion in other software projects, or a shared-tasking system wherein multiple programs can run simultaneously, each with its own "viewport"?

It's probably become somewhat blurred over time... It's a GUI which sits on top of DOS in the first instance - just like Diamond GOS. This means apps don't have to carry the burden of the whole GUI library "on board", although a key feature is the ability to run legacy apps from the desktop. Shared tasking will come later, if it's implemented at all. You'll be able to have MDI applications, though, of which the default file manager/browser will be an example.


Yeah, pseudo threads are a way to access the semantic simplicities of threading without as much of the nightmare under the hood.

 

But let me ask you this:

 

What would we get from a full tasking implementation that we don't have already?

Erm... full tasking. :D

 

Seriously, I guess nothing more than quick cut and paste between apps. It's not essential by any means. But it would be interesting if the system was designed with sufficient forward thinking to implement this in "version 2". :)

 

hehe, yep. That's my point. There is really no reason to implement real task threading on this machine other than just to say it's been done. The usual pressing reason would be a requirement for guaranteed time slices and/or deterministic real-time software interrupts, neither of which apply in our situation... yet(?). Even if they do apply at some future point, you can drive such functions off a guaranteed hardware interrupt like the VBLANK.

 

I don't understand the connection between 'quick cut and paste' and tasking. When a cut happens, the info is stored in a system buffer. The destination app is focused by a mouse event, and then when the paste happens it gets a 'hey, you got pasted to' event. Drag and drop works much the same way: when the drag commences the data is buffered, and when the drop happens the target gets a 'hey, you got dropped on' event.
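
In code terms the whole mechanism is tiny (a C sketch, names invented):

#include <string.h>

#define CLIP_MAX 1024
#define EVENT_PASTE 1

static char clipboard[CLIP_MAX];
static unsigned int clip_len;

extern void send_event(int target_app, int event);   /* GUI event queue (elsewhere) */
extern int  focused_app;                              /* whichever app has focus     */

/* a cut/copy just fills the system buffer */
void clipboard_cut(const char *data, unsigned int len)
{
    if (len > CLIP_MAX) len = CLIP_MAX;
    memcpy(clipboard, data, len);
    clip_len = len;
}

/* on paste, the focused app gets a "you got pasted to" event and reads the buffer itself */
void clipboard_paste(void)
{
    send_event(focused_app, EVENT_PASTE);
}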

 

And now I'm going to hush up about tasking! Better late than never.


I don't understand the connection between 'quick cut and paste' and tasking. When a cut happens, the info is stored in a system buffer. The destination app is focused by a mouse event, and then when the paste happens it gets a 'hey, you got pasted to' event. Drag and drop works much the same way: when the drag commences the data is buffered, and when the drop happens the target gets a 'hey, you got dropped on' event.

Of course. All I meant is that you could cut and paste from your running notepad to your running word processor and see instantaneous results, since the switch in focus from one app to the other would (hopefully) be more or less instant. But admittedly you can achieve more or less the same functionality with simple brute-force task switching (i.e. "waking up" a sleeping app), or, simpler still, shutting down the notepad, going back to the desktop, and starting up the WP with the text still on the clipboard. Most probably one would write a simple notepad which functioned as a desk accessory (something which will fit entirely in a single extended bank) which you could access on systems with sufficient memory without terminating the main running app. Diamond allowed for something similar on a much smaller scale, I believe (I know the accessories were severely limited in size).

 

Funny you should mention the drop event, since I was just about to code that up before we got embroiled in all the tasking and memory management talk. :)

 

Anyway - I think folks know where I'm heading with this now, and my forum posting and emailing have easily outstripped my GUI coding over the past couple of weeks by a factor of 100 to one. Time to get back to it. :D


OK: Dumb question. If we have a banked cartridge at $A000-$BFFF (which is capable of being completely banked out), can application code safely live in the RAM underneath (providing - of course - the cart banks itself out when running the code, and the code itself makes no direct calls to the cart "on top of it", obviously)?


OK: Dumb question. If we have a banked cartridge at $A000-$BFFF (which is capable of being completely banked out), can application code safely live in the RAM underneath (providing - of course - the cart banks itself out when running the code, and the code itself makes no direct calls to the cart "on top of it", obviously)?

As long as it makes no calls to the cart, or there are stub routines outside the cart area to bank it in and out... then yes, it's OK.

<edit>

BTW, you might make an OS call that sits outside the cart area just to bank this area in and out.


Thanks James: I guess it was pretty obvious, but you rarely see apps manipulating carts in this way. :)

 

Outside the realms of actually banking out the cartridge, writing banked cart code using MADS is pretty interesting. Of course, there are a couple of ways to handle the bank switching from inside the cart: you can either delegate it to code outside the cart space, or have an identical routine in the same place in each bank which will allow an "inline" switch.

 

I'm playing with something along these lines:

 

ljsr .macro ; do a long JSR to a label in a different bank
sta asave
sty ysave
stx xsave
lda #= :1 ; get bank number of target label
sta target_bank
?1	ldy #= ?1 ; get current bank
lda #< :1
ldx #> :1
jsr :long_jump
.endm
;



long_jump ; execute "far" subroutine
sta jmp_vec
stx jmp_vec+1
tya ; push return bank on stack
pha
lda #> [return-1] ; push address of return routine on the stack
pha
lda #< [return-1]
pha
ldy target_bank ; get bank number
lda #$FF
sta cart_banks,y ; switch in the target bank
lda asave
ldy ysave
ldx xsave
jmp (jmp_vec) ; execute the routine
;

return ; handle return from banked routine
sta asave
sty ysave
pla ; get return bank
tay
lda #$FF
sta cart_banks,y ; switch in originating bank
lda asave
ldy ysave
rts

The advantage here is that the target subroutine doesn't have to explicitly invoke any special returning mechanism, since RTS first pulls the address of the "return" routine, whose RTS then returns to the originating point in the originating bank. It's not yet tested but the mechanism compiles correctly. The target routine (from what I can gather from the translated MADS docs) must be called as follows:

 

ljsr :target

The reason for the initial colon in the label name is to inform the compiler that the label is outside of the current bank range (otherwise it generates a compile error).


OK, we're back on it now. This is a very basic demo of overlapping windows. I'm currently writing the code to handle the ordering of the objects, so that the back window can be brought to the front (this doesn't happen in the video). Still, it gives a rough idea of how quickly multiple non-client areas redraw:

 

http://youtu.be/Z4FVVdMIVgw

 

Of course, until the extended banks are brought into play, I can't use a front window background buffer, which will dramatically speed up moving and resizing of the top window.

 

I should add that as soon as you can bring the back window to the front, I'll release this program for download as is, since it might be rather a long time before we get to play with icons and such like, given the enormous task at hand with the memory banking and other back-end stuff.


Of course, there are a couple of ways to handle the bank switching from inside the cart: you can either delegate it to code outside the cart space, or have an identical routine in the same place in each bank which will allow an "inline" switch.

I've seen both approaches and it really makes no difference unless you are pressed for space on the cart or something else uses the RAM bank under the cart. Then you could just copy that section of code to the RAM bank and call that area reserved memory. But if you are going to take some RAM anyway... you might as well reserve it somewhere else and free up some ROM.

Once you get further along you'll know if you need the space or not.


I've seen both approaches and it really makes no difference unless you are pressed for space on the cart or something else uses the RAM bank under the cart. Then you could just copy that section of code to the RAM bank and call that area reserved memory. But if you are going to take some RAM anyway... you might as well reserve it somewhere else and free up some ROM.

Once you get further along you'll know if you need the space or not.

Absolutely. Fortunately the switching code is almost insignificantly small.

 

Things are coming together quite nicely now. I'm busy handling the z-coordinates of windows, which means scanning the object lists in reverse order to make sure the last things drawn on the same level are the first things "hit". I definitely need a way of referring directly to an object's last child. I still think making the "prev" pointer of the first child point to the last one makes sense. Then, instead of checking for a null pointer when walking backwards through the list, you just compare the prev pointer to the address of the first object you processed (i.e. the last one). This saves having to initially walk the whole list to find the last child as a starting point.
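
Put another way, the list shape being described is: every child's prev is valid, the first child's prev points at the last child, and only the last child's next is NULL. A C sketch (invented names) of why that makes the backward walk cheap:

struct widget {
    struct widget *parent;
    struct widget *child;       /* first child, or NULL                  */
    struct widget *next;        /* NULL only on the last child           */
    struct widget *prev;        /* on the first child: the LAST child    */
};

/* last child in O(1): no walk required */
struct widget *last_child(struct widget *parent)
{
    return parent->child ? parent->child->prev : 0;
}

/* walk the children back-to-front (so the last thing drawn is hit-tested first) */
void walk_backwards(struct widget *parent, void (*visit)(struct widget *))
{
    struct widget *last = last_child(parent);
    if (!last)
        return;
    struct widget *w = last;            /* start at the last child               */
    for (;;) {
        visit(w);
        if (w->prev == last)            /* prev has wrapped round to the start,  */
            break;                      /* so this was the first child: stop     */
        w = w->prev;
    }
}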


Reversi is done.

 

[screenshot: Reversi]

 

Now I'm waiting for the API to use your GUI as the interface :)

The AI algorithm's weights are set by a genetic algorithm.

To get a stronger player, the learning process should be run for a few days (the current one took a few hours).

Some bugs in the main program are still possible.

SmallReversi.zip


Object rearrangement and initial placement code:

 

object_unlink ; unlink object from linked list
ldy #widget.parent
lda (object),y ; get parent object
sta parent
iny
lda (object),y
sta parent+1
ldy #widget.child ; get parent's first child
lda (parent),y
sta ptr1
iny
lda (parent),y
sta ptr1+1
ldy #widget.prev ; get parent's last child
lda (ptr1),y
sta ptr2
iny
lda (ptr1),y
sta ptr2+1

lda ptr1 ; are we unlinking the first item in the list?
cmp object ; if so, object will be the same address as ptr1, which is the first object in this list
bne not_first_unlink
lda ptr1+1
cmp object+1
bne not_first_unlink

ldy #widget.next ; this is the first item, so get next one
lda (object),y
sta ptr3
iny
lda (object),y
sta ptr3+1
ora ptr3
beq no_next_obj1

ldy #widget.child ; we're removing the first child in a list of more than one object, so amend the parent's child pointer
lda ptr3 ; to point to the next child along
sta (parent),y
iny
lda ptr3+1
sta (parent),y

ldy #widget.prev ; point this object's prev pointer to the last item in the list
lda ptr2
sta (ptr3),y
iny
lda ptr2+1
sta (ptr3),y

rts ; done	

no_next_obj1 ; there's no next object, so nullify parent's child pointer
lda #0  ; because the parent will have no children when we're done
ldy #widget.child
sta (parent),y
iny
sta (parent),y
rts ; and we're done

not_first_unlink ; we're not unlinking the first item in the list, so prev items's next link should point to object's prev link
ldy #widget.prev
lda (object),y
sta ptr3
iny
lda (object),y
sta ptr3+1
ldy #widget.next
lda (object),y
sta (ptr3),y
iny
lda (object),y
sta (ptr3),y

lda object
cmp ptr2 ; is this the last object in the list?
bne not_last_unlink
lda object+1
cmp ptr2+1
bne not_last_unlink

ldy #widget.prev ; if we're unlinking the last object in the list, we must point first object's next pointer at previous object in list
lda ptr3
sta (ptr1),y
iny
lda ptr3+1
sta (ptr1),y
rts ; and we're done


not_last_unlink ; if we're not unlinking last object in list, point next item's prev link to object's next link
ldy #widget.next
lda (object),y
sta ptr3
iny
lda (object),y
sta ptr3+1
ldy #widget.prev
lda (object),y
sta (ptr3),y
iny
lda (object),y
sta (ptr3),y
rts ; done
;


object_order ; move "object" to new position in list
; "object" should point to object
; acc should contain position: 0 = last, 1 = first, 2 = second, etc.
sta tmp1 ; save position
sta tmp2
jsr object_unlink ; unlink object from the list
lda ptr1 ; start at first object in list
sta ptr3
lda ptr1+1
sta ptr3+1
lda tmp1 ; do we want the last position?
bne find_new_position
lda ptr2 ; if last pos, we already have the pointer
sta ptr3
lda ptr2+1
sta ptr3+1
bne got_obj_pos1

find_new_position ; get a pointer to the target position
dec tmp2
beq got_obj_pos1 ; we've found the spot
ldy #widget.next+1 ; otherwise, get next item
lda (ptr3),y
tax
dey
lda (ptr3),y
sta ptr3
stx ptr3+1
bne find_new_position
got_obj_pos1
lda tmp1
beq ins_object_tail ; place at end of list
cmp #1
beq ins_object_head ; place at front of list

ldy #widget.prev ; neither front nor end: so, some other position in the list
lda (ptr3),y ; first we need pointers to the neighbours
sta ptr1
iny
lda (ptr3),y
sta ptr1+1

ldy #widget.next
lda (ptr3),y
sta ptr2
iny
lda (ptr3),y
sta ptr2+1

lda object+1
sta (ptr1),y ; set previous item's next pointer
dey
lda object
sta (ptr1),y

ldy #widget.prev
sta (ptr2),y ; and next item's prev pointer
iny
lda object+1
sta (ptr2),y

ldy #widget.parent
lda parent
sta (object),y
iny
lda parent+1
sta (object),y

rts
;




ins_object_head ; add object to head of list
ldy #widget.child
lda (parent),y
pha
lda object
sta (parent),y
iny
lda (parent),y
pha
lda object+1
sta (parent),y
ldy #[widget.next]+1
pla
sta (object),y
pla
dey
sta (object),y
ldy #widget.parent ; point child to parent
lda parent
sta (object),y
iny
lda parent+1
sta (object),y
ldy #widget.prev ; set this item's prev pointer to point to the last object in the list
lda ptr2
sta (object),y
iny
lda ptr2+1
sta (object),y
rts
;

ins_object_tail ; insert object at end of list
ldy #widget.next
lda object ; point last item in list to new object
sta (ptr2),y
iny
lda object+1
sta (ptr2),y

ldy #widget.prev ; point new object's prev pointer to previous item
lda ptr2
sta (object),y
iny
lda ptr2+1
sta (object),y

ldy #widget.prev ; set first item's prev pointer to point to this object
lda object
sta (ptr1),y
iny
lda object+1
sta (ptr1),y

ldy #widget.parent
lda parent
sta (object),y ; point child to parent
iny
lda parent+1
sta (object),y

lda #0
ldy #widget.next
sta (object),y ; set new object's next pointer to NULL
iny
sta (object),y
rts
;


find_list_last ; find last node in list
ldy #widget.prev+1
lda (ptr1),y
tax
dey
lda (ptr1),y
sta ptr2
stx ptr2+1
rts
;


init_object ; add child at a,x to object at parent
sta object
stx object+1
ldy #widget.child ; does parent have any children already?
lda (parent),y
sta ptr1 ; first item in list
iny
lda (parent),y
sta ptr1+1
ora ptr1
beq no_children
jsr find_list_last ; get pointer to last child in ptr2 (will point to itself if this is the only child)
bit head_flag
bmi at_head
jsr ins_object_tail
bne object2
at_head
jsr ins_object_head
bne object2
no_children
ldy #widget.child
lda object
sta (parent),y ; point parent to child
iny
lda object+1
sta (parent),y
ldy #widget.prev ; point new object's prev pointer to itself
lda object
sta (object),y
iny
lda object+1
sta (object),y

ldy #widget.parent
lda parent
sta (object),y ; point child to parent
iny
lda parent+1
sta (object),y
lda #0
ldy #widget.next
sta (object),y ; set new object's next pointer to NULL
iny
sta (object),y

object2
ldy #widget.child
sta (object),y ; set new object's child pointer to NULL
iny
sta (object),y
lda #0
ldy #widget.visible
sta (object),y ; make sure this object can't be clicked on until it's been rendered
lsr head_flag
rts
;

 

I wonder what this would have looked like in C? There's a lot of repetition in there which can be replaced by subroutine calls later on. I found it much easier - when moving an object in a list from one position to another - to first unlink it from the list, then insert it in the desired position. "object_order" ends up arriving at one of three conclusions: the object needs to go at the head of the list, the tail, or somewhere in the middle.


Yes, it would probably be factored out somewhat into function calls. Actually, I'd expect to see some recursion too; it's popular when working with trees and lists. It's not always the best way to do things, though.

 

From a certain perspective, C can be looked at as kind of a very advanced macro assembler. You don't have to use the stack heavily or use the C library.
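
For the curious, the unlink step alone comes out something like this in C - a sketch against the same list conventions (parent's child pointer holds the first sibling, the first sibling's prev holds the last, only the last sibling's next is NULL), not a line-for-line translation of the assembly:

struct widget {
    struct widget *parent, *child, *next, *prev;
};

/* detach w from its parent's child list */
void object_unlink(struct widget *w)
{
    struct widget *parent = w->parent;
    struct widget *first  = parent->child;
    struct widget *last   = first->prev;

    if (w == first) {
        parent->child = w->next;           /* next sibling (or NULL) becomes the head... */
        if (w->next)
            w->next->prev = last;          /* ...and inherits the pointer to the tail    */
    } else {
        w->prev->next = w->next;           /* bridge the previous sibling past w         */
        if (w == last)
            first->prev = w->prev;         /* head's prev must now track the new tail    */
        else
            w->next->prev = w->prev;
    }
}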

