
New GUI for the Atari 8-bit


flashjazzcat


This thread makes me feel horribly inadequate. And I'm Irish, for Christ's sake!

So you feel normal then?? :P

Oooh, a joke as original as your name. Think of that name all by yourself-y? =) What would be hilarious is if your name was actually Dave or something. =) Look at us, hijacking the thread with petty bantering. We're like a couple of rats scrapping it out under Einstein's desk. =) Enough from me. Let the genius commence.


Jon, IMHO I think you should postpone the bank spanning idea for now. You've got plenty to do already :) So for your basic object data you'll be able to get current on a bank automatically.

 

When I was thinking about paged stuff, I worried about the overhead for a read/write cycle to different banks... it's easy enough to do, but you can get several LDn/STn just to write a single byte:

look up bank for dest

look up bank for src

swap correct bank in

read byte

swap correct bank in

write byte

 

The only mitigation I could think of would be to have a page cache where every time I went to get a byte or write a byte I could copy the whole page in which that byte resides into non-banked memory, on the theory that most accesses would be within the same page as the last one was. So to read you'd just check if that bank was in the cache already and use it if so, or load the correct one if not. The problem is writes... when do you flush (copy the page back)? If you flush after every read it's obviously dumb... but you'd have to do it sometime. You'd have to flush on a new write page needed... but also on a read page that matches the one you currently have in cache... etc., etc. A real pain.
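A minimal C model of such a write-back page cache might look like the following. All names and sizes here are my own illustration; banked RAM is simulated with plain arrays, and the flush is deferred until a different page is needed, exactly the "when do you flush?" tradeoff described above:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical model: 4 banks of 16KB simulated as arrays, with one
   256-byte page cached in non-banked memory. */
#define PAGE_SIZE 256
#define BANK_SIZE 16384
#define NUM_BANKS 4

static uint8_t banks[NUM_BANKS][BANK_SIZE]; /* stands in for banked RAM */
static uint8_t cache[PAGE_SIZE];            /* non-banked cache page */
static int32_t cached_page = -1;            /* which page is cached (-1 = none) */
static int     dirty = 0;                   /* cache modified since load? */

static void flush(void)
{
    if (cached_page >= 0 && dirty) {        /* write-back only if modified */
        uint32_t addr = (uint32_t)cached_page * PAGE_SIZE;
        memcpy(&banks[addr / BANK_SIZE][addr % BANK_SIZE], cache, PAGE_SIZE);
        dirty = 0;
    }
}

static void load_page(uint32_t page)
{
    flush();                                /* flush before eviction */
    uint32_t addr = page * PAGE_SIZE;
    memcpy(cache, &banks[addr / BANK_SIZE][addr % BANK_SIZE], PAGE_SIZE);
    cached_page = (int32_t)page;
}

uint8_t vread(uint32_t addr)
{
    if ((int32_t)(addr / PAGE_SIZE) != cached_page)
        load_page(addr / PAGE_SIZE);
    return cache[addr % PAGE_SIZE];
}

void vwrite(uint32_t addr, uint8_t val)
{
    if ((int32_t)(addr / PAGE_SIZE) != cached_page)
        load_page(addr / PAGE_SIZE);
    cache[addr % PAGE_SIZE] = val;
    dirty = 1;                              /* flush deferred until eviction */
}
```

As long as accesses cluster within a page, most reads and writes touch only the cache; the cost surfaces on each page switch.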


How did 3 bytes turn into 4?

Size word and "next" pointer.

 

Sorry James: I had missed your earlier "FYI" on memory management, and I'll cover that in my response here.

 

Explain basic object structure please. Is this code or data you are referring to?

OK. Here's the basic object structure:

 

WIDGET		.struct	; generic widget object
class	.byte	; object type
parent	.word	; pointer to parent object
child	.word 	; pointer to first child
childlast	.word	; pointer to last child
selected	.word	; pointer to selected child
prev		.word	; previous peer (NULL = first child)
next		.word	; next peer (NULL = last child)
x		.word
y		.word	; 16 bits for virtual coordinate system
width	.word
height	.byte
visible	.byte	; set if object has been rendered (cleared when object is hidden)
render	.word	; pointer to render code
.ends
;

Now, this isn't set in stone but it works well. I've chosen to use 16 bit coordinates for good reasons I won't go into here. "Selected" can probably be jettisoned: it would certainly be better placed in the specific structures based on this simple object.

 

Other control structures (because of MADS's marvellous capabilities) can inherit the basic object structure. So, for example, here's a horizontal scroll bar slider's structure:

 

HSCRL_SLIDER	.struct
widget
travel	.word ; range of travel in pixels (internal)
scale	.word ; scaling factor for tracking (internal)
extent	.word ; document size
range	.word ; range of travel in document units
pos		.word ; position
.ends

 

The slider structure simply adds extra, context-specific fields to the widget structure.
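For readers more at home in C, the MADS structure inheritance is analogous to embedding the base struct as the first member of the derived one. This C rendering is my own illustration, not the project's source; field widths follow the MADS listings above:

```c
#include <stdint.h>
#include <stddef.h>
#include <assert.h>

/* C analogue of the generic widget object */
typedef struct widget {
    uint8_t  class_;          /* object type */
    uint16_t parent;          /* pointer to parent object */
    uint16_t child, childlast, selected;
    uint16_t prev, next;      /* peer links (0 = first/last child) */
    uint16_t x, y;            /* 16 bits for virtual coordinate system */
    uint16_t width;
    uint8_t  height;
    uint8_t  visible;         /* set if object has been rendered */
    uint16_t render;          /* pointer to render code */
} widget;

/* the slider inherits by embedding the base struct first */
typedef struct hscrl_slider {
    widget   base;            /* inherited widget fields */
    uint16_t travel, scale;   /* internal tracking values */
    uint16_t extent, range, pos;
} hscrl_slider;
```

Because the base struct sits at offset 0, a pointer to a slider is also a valid pointer to a widget, which is what lets generic tree-walking code handle every control type uniformly.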

 

Now, when an application (currently only the GUI "executive" itself) wants to create an object, it allocates memory for it and sets the parent's child pointer to the memory pointer (or a peer's "next" pointer). Naturally, the object create routine also returns a "handle", which is generally only relevant for top-level containers such as windows. There's also a desktop handle, a screen handle, a menu bar handle, etc. Currently the GUI enjoys the direct manipulation of objects in the tree, since they're in main memory. In any case, it may continue to do so, since the library code will never be in the $4000-$7FFF (RAM bank switching) region. This is not the case, however, with applications, which will necessarily occupy the banked region.

 

I'll come back to this problem in a minute...

 

Well, it depends on how much free mem is in the system. With a small app that doesn't use a lot of memory you would be correct. If you have an app that is pushing memory limits your garbage collection routine would have to run more often. Especially if the large app allocates and frees different sized blocks often. So... for a very simple text editor things might be quick... but for a full blown word processor, cuts and pastes might be costly.

If a given memory heap is never more than 16KB in size, I don't think linear garbage collection will ever be especially noticeable. However, I'll now explain how the current heap manager works:

 

As I've mentioned before, the current system only keeps free memory in the linked list. This means any kind of linear analysis of the memory is impossible. Anyway, "alloc" does the usual business of shrinking or removing free blocks in the list. The "free" routine does a complete walk of the list, trying to find a free block which is either in front of or behind the block being freed. If a free block is found ahead of the target block, its size is expanded to cover the target block. If a free block is found after the target block, it's merged with the target block. The exit condition is either: free blocks have been found on both sides of the target block and coalesced, or the end of the list is reached.
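A C sketch of that free routine follows; the real code is 6502 assembly, so names and layout here are illustrative. Since allocated blocks aren't in the list in this model, the caller passes the block's size, and a single walk suffices because free blocks are never adjacent to each other (any adjacency is coalesced on the spot):

```c
#include <stdint.h>
#include <stddef.h>

/* only free blocks live in the list (illustrative layout) */
typedef struct fblock {
    size_t         size;   /* size of the free region, header included */
    struct fblock *next;
} fblock;

static fblock *freelist = NULL;

/* free a block of `size` bytes at `mem` (size >= sizeof(fblock)) */
void heap_free(void *mem, size_t size)
{
    uint8_t *lo = (uint8_t *)mem, *hi = lo + size;
    fblock **pp = &freelist;
    while (*pp) {
        fblock *f = *pp;
        if ((uint8_t *)f + f->size == lo) {   /* free block just ahead */
            *pp = f->next;                    /* unlink it, grow downward */
            lo = (uint8_t *)f;
            size = (size_t)(hi - lo);
            continue;
        }
        if ((uint8_t *)f == hi) {             /* free block just behind */
            *pp = f->next;                    /* unlink it, grow upward */
            hi += f->size;
            size = (size_t)(hi - lo);
            continue;
        }
        pp = &f->next;
    }
    fblock *nb = (fblock *)lo;                /* push the coalesced block */
    nb->size = size;
    nb->next = freelist;
    freelist = nb;
}
```

Freeing three adjacent blocks in any order leaves a single coalesced node, mirroring the "both sides coalesced" exit condition described above.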

 

This memory manager has been tested and works. My concerns over performance centred around the following scenario: a file browser reads a disk directory and iteratively creates icon objects. It may create 256 such objects (a likely practical limitation). During the course of the window's life, the user may delete some files, which will cause some object memory to be released, or they may create new files, causing new objects to be allocated. At the end of this process, the user closes the window. At that point, the window and all its children (including the window controls and client area and all its child objects) need to have their memory released. It goes without saying that the "free" routine then has to walk the memory list several hundred times as each icon object is deleted.

 

Of course, we might amend things so that the application can allocate a single large buffer designed to contain a whole directory. It can then split this up as it sees fit, or create objects using pointers to sections of this single large buffer. This would be preferable, but would mean we'd either have to know in advance how many files are in the directory, or just take a best guess at how much RAM we're likely to need. We could also allocate in terms of "another eight directory entries", which would be slightly wasteful, but would cut down on the number of memory blocks.

 

When a program allocates a piece of memory, it can split it up however it wants, so you no longer have your bank info to transparently switch banks because a pointer no longer points at the memory block structure. Which is why the app must do the switching.

The key question here is "does the application directly access the memory?" On an 8-bit system, it's highly desirable to do so for reasons of speed. However, this harks back to what I was (and still am) intending to do with The Last Word, in order to implement contiguous text buffers of 48 or 64KB in size. I was going to implement "virtual" memory reads and writes to extended RAM. You'd pass - for example - a 16 bit address to the fetch/write routine, which would then check bits 14 and 15 and bank in the relevant 16KB bank, while masking out those upper bits to make any 64KB address an offset into the appropriate 16KB bank (kind of like a RAMdisk driver). Of course, the problem is that it's slower. But it does mean that you can address a full 64KB buffer from an application which might happily sit under the bank switching region (as long as the fetch/write routine is in safe memory, of course).
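The bank/offset arithmetic described here is compact enough to sketch directly. The window base and function names below are my own; the substance is exactly the scheme above: bits 14-15 select one of four 16KB banks, and the remaining bits become an offset into the $4000-$7FFF window:

```c
#include <stdint.h>
#include <assert.h>

#define BANK_WINDOW 0x4000u   /* base of the bank-switch window */

/* bits 14-15 of the virtual address select one of four 16KB banks */
uint8_t bank_of(uint16_t vaddr)
{
    return (uint8_t)(vaddr >> 14);
}

/* mask off the bank bits and rebase into the $4000-$7FFF window */
uint16_t window_addr(uint16_t vaddr)
{
    return (uint16_t)((vaddr & 0x3FFFu) + BANK_WINDOW);
}
```

A fetch routine in safe memory would call `bank_of` to page in the right bank, then read or write at `window_addr`, much like a RAMdisk driver translating sector numbers.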

 

The point being that "transparent" bank switching would only work with such a method. Otherwise - as you point out - an app would have to do the switching. But if the app does the switching, the switching code must be very carefully positioned outside of the $4000-$7FFF region.

 

The current memory manager design might actually be perfectly serviceable as is. I might just delay changes to it while we explore different ideas. If I could get my head around a way of doing the ahead/behind free block cleanup on the fly (which actively prevents fragmentation from occurring in the first place, as you advised), without doing a list walk, it would be perfect.

 

Regarding the FYI: Your experience with that Z80 system is surely invaluable, and I'm listening carefully to your suggestions.

 

Re: keeping all memory in the linked list: OK. I was basically heading that way. How might having allocated blocks aid in efficient ahead/behind free block coalescing?

 

Re: allocating different types of memory. Depending very much on the ultimate design, the memory manager might work on conventional and extended memory in the same way (on a "behind the scenes" level: it all depends on whether any kind of indirection is implemented). But at the app level: two different kinds of call make total sense.

 

Re: different boards: we're lucky that there are only a couple of different hardware registers out there for paging RAM, and I think 99% of the upgrades out there use $d301 (portb). We're also lucky that we can deduce the amount of memory and the bits used to page it via software interrogation. Allowing the user to specify a different hardware paging address should cover the rest of the bases.


Jon, IMHO I think you should postpone the bank spanning idea for now. You've got plenty to do already :) So for your basic object data you'll be able to get current on a bank automatically.

 

When I was thinking about paged stuff, I worried about the overhead for a read/write cycle to different banks... it's easy enough to do, but you can get several LDn/STn just to write a single byte:

look up bank for dest

look up bank for src

swap correct bank in

read byte

swap correct bank in

write byte

 

The only mitigation I could think of would be to have a page cache where every time I went to get a byte or write a byte I could copy the whole page in which that byte resides into non-banked memory, on the theory that most accesses would be within the same page as the last one was. So to read you'd just check if that bank was in the cache already and use it if so, or load the correct one if not. The problem is writes... when do you flush (copy the page back)? If you flush after every read it's obviously dumb... but you'd have to do it sometime. You'd have to flush on a new write page needed... but also on a read page that matches the one you currently have in cache... etc., etc. A real pain.

Dan: I tend to agree. I'd rather spend time optimising the object structures so that we can pack everything into a couple of different heaps, each in their own 16KB bank. :) The fact is, that's tricky enough in itself.

 

I know you've done a lot of work on running stuff out of extended RAM, so I sense you can see what I'm up against. I mentioned my design for indirect extended memory access in The Last Word in the previous reply. I had it working pretty well: it used a 16 bit address to cover the whole 64KB address space and bank-switched transparently. But of course it was slower, and I thought really hard about caching ideas. This is especially relevant when you need to update the screen quickly, but fortunately I also came up with a way to make the screen redraw totally avoid the indirection. It works well: all I have to do is finish it.

 

Having a page cache is great in theory, but you still have extra code checking whether you've exceeded the limit of the cache, and a lot of simple computation has to be completely redesigned. However, on the other side of the coin, GUIs - with their profusion of small chunks of memory - would probably work quite well with the page cache.

 

This brings us almost full circle: what might be best is to have "get" and "put" object/data routines, which take care of getting the information you want from extended memory, and dump it somewhere easy to access. The application gets or changes the info it wants, and then sends the block back from whence it came.

 

That'll work fine for apps, but I rather think the GUI library itself had better persist with low-level, fast and dirty bank switching at the relevant places. As I've said, the GUI library has the advantage of not worrying about banking itself out. :)

Edited by flashjazzcat

The different approaches to memory management have their own compromises and I'm not going to claim there is one best solution.

 

If you keep separate lists for allocated and freed memory, you have overhead in maintaining two lists and cleaning them up.

 

If you have one list and an allocated flag on each block, allocating takes longer as memory fills up but freeing memory is faster.

It can also take more or less memory than separate lists depending on implementation.

You can reduce the size of the structure for a single memory list if you calculate the location of the next block by adding the current block size to the current block pointer, plus the data structure size (nextnode = currentnode + currentnode->size + 3). The tradeoff is speed in traversing the list.
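That implicit-next arithmetic can be sketched with a hypothetical 3-byte header (a 16-bit size plus a flag byte; the layout is my own illustration of the idea, not the project's format):

```c
#include <stdint.h>

#define HDR_SIZE 3            /* 16-bit size + allocated flag byte */

typedef struct {
    uint8_t *p;               /* points at a block header */
} node;

/* little-endian 16-bit size stored in the first two header bytes */
static uint16_t node_size(node n)
{
    return (uint16_t)(n.p[0] | (n.p[1] << 8));
}

/* next block = current block + header + payload; no stored pointer */
static node node_next(node n)
{
    node r = { n.p + HDR_SIZE + node_size(n) };
    return r;
}
```

The saving is two bytes per block (no stored next pointer); the cost is an addition per step when walking the list.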

 

If you keep the memory in one list, you don't have to traverse the entire list when you free a block. Just mark it free.

To keep memory from fragmenting, just check to see if the block that follows is free and merge with it if it is. You could loop and keep merging as long as you keep finding free blocks but the majority of frees are in reverse order of how they are alloced.

 

As I said before, the normal pattern for freeing memory is in reverse order and that works in your favor. It sounds like you are already doing the merge but since you have separate lists it's less efficient.

 

You may want to include an OS call that does a cleanup. That way an app can call it before a speed critical routine or after one that is likely to cause fragmentation.

 

I think if you create a benchmark that creates and destroys windows over and over, the complexity of two lists will be slower overall. However, the two list approach might pop up objects a tiny bit faster... at the expense of destroying them slower. Whether *I* would choose the two list route or not would depend on what is more noticeable.


Side note on memory management: For the example of the application creating/deleting a huge number of (small) objects (icons/dir entries)

Do you know the concept of ObjectPools/AutoReleasePools in Objective-C? The idea is that an application can create a (named) pool for objects which are small. Upon creation the objects are allocated from the pool (alloc is passed the id of the pool); upon destruction the memory is released to the pool again. The point of having the pool is that it acts as a cache for allocation. It can allocate larger chunks in advance and handle alloc/free locally without searching the global free list for every object. It can also defer real deallocation to a later point in time (for example, closing the window).

 

This greatly reduces the search space and overhead if you have many small objects. In particular, if you know the type/size of the objects in advance, you can use simple multiplication and index operations instead of keeping lists. It is a kind of locality exploit/caching based on the fact that you know more about the instances.

 

WidgetPool.allocWidget { return baseAddress + (freeIndexCount++) * size; }
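A minimal fixed-size object pool in C along these lines might look as follows (all names and sizes are illustrative). The pool grabs one large block up front; allocation is an index calculation, and a free list threaded through released slots recycles them without touching the global heap:

```c
#include <stdint.h>
#include <stddef.h>

#define POOL_SLOTS 64
#define SLOT_SIZE  16          /* bytes per object; >= sizeof(void *) */

typedef struct pool {
    uint8_t  mem[POOL_SLOTS * SLOT_SIZE];
    size_t   high;             /* slots handed out so far */
    void    *free;             /* head of the recycled-slot list */
} pool;

void *pool_alloc(pool *p)
{
    if (p->free) {                      /* reuse a released slot first */
        void *obj = p->free;
        p->free = *(void **)obj;
        return obj;
    }
    if (p->high == POOL_SLOTS)
        return NULL;                    /* pool exhausted */
    return &p->mem[SLOT_SIZE * p->high++];  /* pure index arithmetic */
}

void pool_free(pool *p, void *obj)
{
    *(void **)obj = p->free;            /* push onto the recycled list */
    p->free = obj;
}
```

Allocation and release are both constant-time, and the pool's whole block can be handed back to the system heap in one call when, say, the window closes.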


I certainly wasn't intending to have separate lists for allocated and free blocks and the current model doesn't use them: it maintains only one list - that of free memory blocks. The plan was to add free/allocated flags to the headers, but keep allocated blocks out of the list, and only use the allocated flag when doing a linear cleanup sweep of the memory bank, rebuilding the free list on the fly.

 

You mention some useful optimisations there, though. As you say: everything involves a tradeoff. Calculating the location of the next block requires that we keep both free and allocated blocks as part of the list, etc, etc. There are probably a million ways to do it.

 

I still don't see how we can avoid fragmentation when freeing blocks by only checking the block after the target block. Surely it's possible (and this is how I've coded it up) that there'll also be a free block in front of the target block (perhaps several steps away in either direction in the list, though) which needs to be coalesced. Perhaps I've over coded it - I don't know. Certainly the current routine is completely self-cleaning. I think your suggestion that I just brute-force benchmark it is very sensible.

 

Anyway: I think we've had a lot of very valuable input here. Time to test these theories out. :)

 

@Jac: nearly missed that - but thanks! Here my lack of knowledge of OOP C is plainly apparent. What you describe sounds like a hybrid version of the file browser allocation of a large buffer for file icons (which I mentioned earlier), and then creating objects by dividing up that allocated block. We still have the issue of having to "best guess" the memory required to hold a directory of indeterminate size, but I really like the pool approach, and it will make very useful space savings when dealing with large directories (since each object does not have the additional overhead of a memory block header).

 

Any views on the fetch/edit/send object idea, with regard to dealing with banked data from an application residing in the banking space?

Edited by flashjazzcat

How did 3 bytes turn into 4?

Size word and "next" pointer.

Actually, it would be 3 or 5 depending on implementation.

 

Sorry James: I had missed your earlier "FYI" on memory management, and I'll cover that in my response here.

No prob.

 

Explain basic object structure please. Is this code or data you are referring to?

OK. Here's the basic object structure:

Thanks, I think you are thinking about the GUI and I'm just focusing on the memory manager and some things get crossed in the conversation. FWIW, it sounds like what you are doing works like other GUIs I've worked with.

 

If a given memory heap is never more than 16KB in size, I don't think linear garbage collection will ever be especially noticeable. However, I'll now explain how the current heap manager works:

 

As I've mentioned before, the current system only keeps free memory in the linked list. This means any kind of linear analysis of the memory is impossible. Anyway, "alloc" does the usual business of shrinking or removing free blocks in the list. The "free" routine does a complete walk of the list, trying to find a free block which is either in front of or behind the block being freed. If a free block is found ahead of the target block, its size is expanded to cover the target block. If a free block is found after the target block, it's merged with the target block. The exit condition is either: free blocks have been found on both sides of the target block and coalesced, or the end of the list is reached.

Your free routine is doing an expanded version of what I was suggesting on free. However, with a single list you just use the pointer to mark it as free and see if you can merge the next block... possibly repeating until you reach the end of the list or until you hit an allocated block. Usually just merging with the next item keeps you from having to do garbage collection for long periods even if you do a lot of allocation and freeing.

Memory allocation is normally first allocated, last freed (or last allocated, first freed if you want to look at it from another view). Your GUI will not always do that but it shouldn't fragment things too badly.

 

pseudo C code just off the top of my head... may be syntactically hosed.

void freemem(memstruct *memptr)
{
  memptr->free = 1;                  /* mark the block as free */
  if (memptr->next != NULL && memptr->next->free == 1)  /* is the next block free too? */
  {
    /* if so, merge the blocks */
    memptr->size += (memptr->next->size + sizeof(memstruct));  /* combine size */
    memptr->next = memptr->next->next;                         /* point past the old next block, to the new one */
  }
}

You could just use a do while or for type of loop in place of the if to merge all blocks that follow.
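That loop variant might look like this. It's a sketch using the same illustrative struct, with the free flag set to 1 meaning "free", a NULL check on the next pointer, and the single merge turned into a while loop that absorbs every free block that follows:

```c
#include <stddef.h>

/* illustrative block header, matching the pseudo code above */
typedef struct memstruct {
    size_t            size;
    int               free;    /* 1 = free, 0 = allocated */
    struct memstruct *next;
} memstruct;

void freemem(memstruct *memptr)
{
    memptr->free = 1;                            /* mark the block as free */
    while (memptr->next != NULL && memptr->next->free) {
        /* absorb the free neighbour: combine sizes and skip its header */
        memptr->size += memptr->next->size + sizeof(memstruct);
        memptr->next  = memptr->next->next;
    }
}
```

When frees arrive in roughly reverse allocation order, this loop keeps the tail of the heap collapsed into one block with no list traversal at all.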

 

 

This memory manager has been tested and works. My concerns over performance centred around the following scenario: A file browser reads a disk directory and iteratively creates icon objects. It may create 256 such objects (a likely practical limitation). During the course of the window's life, the user may delete some files, which will cause some object memory to be released, or they may created new files, causing new objects to be allocated. At the end of this process, the user closes the window. At that point, the window and all its children (including the window controls and client area and all its child objects) need to have their memory released. It goes without saying that the "free" routine then has to walk the memory list several hundred times as each icon object is deleted.

This is a case where having a heap cleanup function call would be useful. If you have a situation like that, you have a catch all in case you can't allocate a block (like you said) and cleaning up on exit certainly wouldn't hurt.

If you clean up on exit, you should never need to clean up on entry. During... that's what the catch all and block merge in the free handle.

 

Of course, we might amend things so that the application can allocate a single large buffer designed to contain a whole directory. It can then split this up as it sees fit, or create objects using pointers to sections of this single large buffer. This would be preferable, but would mean we'd either have to know in advance how many files are in the directory, or just take a best guess at how much RAM we're likely to need. We could also allocate in terms of "another eight directory entries", which would be slightly wasteful, but would cut down on the number of memory blocks.

Logically it's more difficult to do that, but it is most desirable since it requires fewer calls to the memory manager, and memory is then managed where it can be done the quickest. The same idea *may* be usable for other window structures where you always allocate certain ones together. Those just need to be freed together, since the structures are at a fixed location throughout their life.

 

 

When a program allocates a piece of memory, it can split it up however it wants, so you no longer have your bank info to transparently switch banks because a pointer no longer points at the memory block structure. Which is why the app must do the switching.

The key question here is "does the application directly access the memory?" On an 8-bit system, it's highly desirable to do so for reasons of speed. However, this harks back to what I was (and still am) intending to do with The Last Word, in order to implement contiguous text buffers of 48 or 64KB in size. I was going to implement "virtual" memory reads and writes to extended RAM. You'd pass - for example - a 16 bit address to the fetch/write routine, which would then check bits 14 and 15 and bank in the relevant 16KB bank, while masking out those upper bits to make any 64KB address an offset into the appropriate 16KB bank (kind of like a RAMdisk driver). Of course, the problem is that it's slower. But it does mean that you can address a full 64KB buffer from an application which might happily sit under the bank switching region (as long as the fetch/write routine is in safe memory, of course).

16 bits is just 64K... which is why I talked about separate allocation routines, 24 bit and all that.

If the bank switching is in low RAM you can get away with reusing the high bits for page info as long as you have enough bits.

But if banks are in high RAM you have to be able to fix the bits. Either way, using those bits is slow: you always have to grab the bits and mask them off to use the pointers. If you use 24 bits, there's no masking and no messing with the pointer to use it. The design should handle any reasonable memory expansion in the future as well.
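The 24-bit pointer idea can be sketched as a struct keeping the bank and window offset in separate fields, so no masking is needed at the point of use (layout and names are my own illustration):

```c
#include <stdint.h>

/* 24-bit "far" pointer: bank byte + ready-to-use window address */
typedef struct {
    uint8_t  bank;     /* bank number, written straight to the paging register */
    uint16_t offset;   /* address within the $4000-$7FFF window */
} ptr24;

/* build a far pointer from a linear offset into expanded RAM */
ptr24 make_ptr24(uint32_t linear)
{
    ptr24 p;
    p.bank   = (uint8_t)(linear >> 14);                 /* 16KB banks */
    p.offset = (uint16_t)((linear & 0x3FFFu) + 0x4000u);
    return p;
}
```

The masking happens once, when the pointer is built; every subsequent dereference just pages in `bank` and reads at `offset`.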

 

Allocate banked memory into large memory blocks. Then you just tell the OS to set the block you want to access before you actually try to access it. I'm afraid you are trying too hard to conserve memory in the OS/GUI where it's best done in the application.

 

If you treat banked RAM more like a RAM disk with sectors and cache a block you are working on locally, that won't work too badly, but it's still fastest to move data as little as possible. Directly accessing a bank is the ideal; use a local buffer for moving data between banks.

 

The point being that "transparent" bank switching would only work with such a method. Otherwise - as you point out - an app would have to do the switching. But if the app does the switching, the switching code must be very carefully positioned outside of the $4000-$7FFF region.

And you are now having to bank switch at the byte level. If you want to do this, put that in the app and leave just OS calls in the GUI/OS. Make a wrapper, if you will.

I find it odd you want the fastest memory allocation (something done less often) but the slowest actual data access (something done the most often).

 

The easiest method to do what you want may be the RAM disk approach.

Then everything is optimized for the local buffers and you just swap in and out of the RAM disk.

 

The current memory manager design might actually be perfectly serviceable as is. I might just delay changes to it while we explore different ideas. If I could get my head around a way of doing the ahead/behind free block cleanup on the fly (which actively prevents fragmentation from occurring in the first place, as you advised), without doing a list walk, it would be perfect.

Implementing that code sample should be easy, the code should be small and fast. When combined with a fail safe heap cleanup and heap cleanup API call (actually what the failsafe would call) there should never be a problem.

 

Regarding the FYI: Your experience with that Z80 system is surely invaluable, and I'm listening carefully to your suggestions.

I appreciate that and will try not to abuse the privilege.

I have quite a bit more embedded experience but that was most applicable to the memory manager part of the discussion.

 

Re: keeping all memory in the linked list: OK. I was basically heading that way. How might having allocated blocks aid in efficient ahead/behind free block coalescing?

I think I've already answered that but I'll try to summarize some things.

 

1. If you keep one list you reduce complexity. Smaller code is the result. Yes, the data structures will probably be larger for the most speed, though you can calculate the next pointer from the size and current pointer if you don't mind dropping some clock cycles. The speed *you* are gaining on allocation is lost on free, and maintaining two lists probably requires more code and is slower overall if you repeatedly allocate/free memory.

 

<edit> changed tree to list

2. You only have one list to traverse so there is no merging blocks back into the free memory list. In a minimal implementation you can just mark it free and you are done. It's very fast since you have the pointer to the block itself and go directly to it. It already sits where it belongs in the list.

 

3. Merging a freed block with the block that follows if it is also free is sort of a quick n dirty heap cleanup. Since you have the pointer or can calculate the pointer to the next block easily, there are no large traversals on free. Since memory allocation starts at one end and goes to the other and usually goes in the reverse direction on free, the heap is mostly cleaned up with no effort at all and your list should have fewer nodes if you do decide to do a cleanup.

<edit>

Your list will also have fewer nodes if you do perform a cleanup so a full traversal is faster.

 

4. Something I forgot. Memory can be allocated from low to high or high to low, whichever results in the fastest code. The largest free block can be first, and you just allocate smaller blocks off the end of the first block that's large enough. You always find a free block right away unless you have weird code that allocates a lot of data and then frees stuff out of order. This is VERY fast, but it fragments, so you should have no qualms about calling a heap cleanup. Even if some blocks end up early in the memory list, you should have fewer allocated blocks to traverse than a regular combined list. The drawback is that auto-merging blocks back into the preceding block on free requires a two-way linked list or traversing the list.

Edited by JamesD

I certainly wasn't intending to have separate lists for allocated and free blocks and the current model doesn't use them: it maintains only one list - that of free memory blocks. The plan was to add free/allocated flags to the headers, but keep allocated blocks out of the list, and only use the allocated flag when doing a linear cleanup sweep of the memory bank, rebuilding the free list on the fly.

That makes a whole lot more sense than what I thought you were doing.

 

I still don't see how we can avoid fragmentation when freeing blocks by only checking the block after the target block.

Write up some simple pseudo code and step through the memory allocation and freeing on paper with the merge.

Yes, you can get some fragmentation... but it's usually minimal, and a call to a cleanup routine when you exit something should do the trick; when you do call it, there are fewer nodes to traverse.

 

@Jac: nearly missed that - but thanks!

Jac is right on target. If you allocate all those little things in one large block you reduce OS calls and replace them with pointer calculations. Which do you think is faster?

And when you add/delete these little objects, you aren't fragmenting memory.

 

This is why I suggested allocating memory for window structures as a group.

 

Any views on the fetch/edit/send object idea, with regard to dealing with banked data from an application residing in the banking space?

I know that probably wasn't aimed for me but...

 

Look at how layers of different operating systems are structured. Unix is a prime candidate with layer upon layer.

Separate the OS parts from the GUI. The key parts of the OS sit in regular RAM and the GUI sits in a bank.

The app has to know enough to set the GUI bank before making a GUI call or things will get very messy.

So part of the app sits in regular memory.

 

An alternate approach commonly used in small memory environments is to use stub routines. I've seen it done on a machine (which shall remain nameless) where the code was spread over several banks. The build links the calls to different functions; each bank of the EPROM code is generated separately and merged into one image to program the EPROM, and one block copies the stub routines to RAM on startup before anything else runs. The stub routines just select the bank, make the call, and handle the return to the calling memory bank.
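A toy model of that stub scheme is below. Bank selection is simulated with a plain variable here; on the real machine it would be a write to the paging register, and the stub itself would live in non-banked RAM. All names are my own illustration:

```c
#include <assert.h>

static int current_bank = 0;        /* stands in for the paging register */

/* imagine this routine lives in bank 2; it is only valid to call it
   while its bank is paged in */
static int service_in_bank2(int x)
{
    assert(current_bank == 2);
    return x * 2;
}

/* the stub resides in non-banked RAM: select the callee's bank,
   make the call, then restore the caller's bank */
int stub_service(int x)
{
    int saved = current_bank;
    current_bank = 2;               /* swap the callee's bank in */
    int r = service_in_bank2(x);
    current_bank = saved;           /* restore on the way out */
    return r;
}
```

Callers anywhere in memory just call `stub_service` and never touch the paging register themselves, which is what keeps the banking knowledge out of the application proper.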

The key here is knowing how to build the application. In assembly it's pretty simple but in C... you have to know your tools.

 

<edit>

Actually, I made that sound mutually exclusive and it isn't.

Edited by JamesD

The Boink and Super Boink demos have that characteristic.

Boink.zip

But can you click on the ball and throw it around with a mouse?

Does the ball accelerate as it falls and slow as it rises? (AKA physics)

I wasn't aware that you could do that with the original Amiga demo.

 

The Boink demo does allow you to change the speed of the ball and the direction of the spin, via the console keys. The FujiBoink demo has similar functionality activated by the console keys.

 

The SuperBoink demo allows various scrolling options simultaneously (push up on joystick to see options menu).

Edited by MrFish

That makes a whole lot more sense than what I thought you were doing.

Heh - these things are, I think, notoriously difficult to explain. Despite that, I think we're doing OK. :)

 

Thank you for your copious explanation, advice, and examples. I'll take time to absorb all that and try out a few ideas here, rather than jabber on and pontificate further. :) It's clear, anyway, that with allocated/free flags in all the blocks, there's great flexibility available as to how we clean up or stop fragmentation occurring in the first place. While we can't easily (without list walking or linear analysis) check if the previous block is free, we can sure easily check if the next one is free, as you say, and coalesce as you describe. If that was coupled with periodic linear defragmentation, it's a good solution. :)

 

As for over-working the memory handler as far as apps go: this is probably true. I need to remember that anyone who wants to code for this thing is going to know their beans to start with. I like the "pool" idea, with direct access to memory. I'm sure an application writer, though, can be relied upon to appreciate that code which bank switches RAM must be outside of the bank-switched region.

 

In any case: I appreciate the time you've taken to explain and the interest you're showing. Let me digest it all, and I'm sure something good will come out the other end... what an awful analogy :).

Edited by flashjazzcat

That makes a whole lot more sense than what I thought you were doing.

Heh - these things are, I think, notoriously difficult to explain. Despite that, I think we're doing OK. :)

Kinda difficult to discuss when you aren't in the same room with a whiteboard to write on.

 

 

Thank you for your copious explanation, advice, and examples. I'll take time to absorb all that and try out a few ideas here, rather than jabber on and pontificate further. :)

The more you jabber and pontificate, the better picture we have of what you are trying to do so no worries there. :D

 

It's clear, anyway, that with allocated/free flags in all the blocks, there's great flexibility available as to how we clean up or stop fragmentation occurring in the first place. While we can't easily (without list walking or linear analysis) check if the previous block is free, we can sure easily check if the next one is free, as you say, and coalesce as you describe. If that was coupled with periodic linear defragmentation, it's a good solution. :)

My examples would have been better with diagrams but I didn't really think of doing that at the time.

 

As for the implementation, I think if you benchmark different approaches you'll find that flag has the best overall performance and it won't cost a lot of memory.

 

 

I made a mistake on #4 above. Allocating from the end of the first block with a two-way linked list is fast and means never having to do a cleanup, since you can merge in both directions. But it does suck more RAM.

void freemem(memstruct *memptr)
{
 memptr->free = 0;                  /* mark the block as free */
 /* try to merge with next block */
 if (memptr->next->free == 0)       /* check to see if the next block is free */
 {
   /* if so, merge the blocks */
   memptr->size += (memptr->next->size + sizeof(memstruct));  /* combine size */
   memptr->next = memptr->next->next;                         /* point past the old next block, to the new one */
 }

 /* try to merge with previous block */
 if (memptr->prev->free == 0)       /* check to see if the previous block is free */
 {
   /* if so, merge the blocks */
   memptr = memptr->prev;           /* point to previous block structure */
   memptr->size += (memptr->next->size + sizeof(memstruct));  /* combine size */
   memptr->next = memptr->next->next;                         /* point past the old next block, to the new one */
 }
}

 

 

I usually follow the K.I.S.S. (Keep It Simple Stupid) approach... simple is usually small, fast, and easy to maintain.

Now if I could just get some architects I work with to follow that approach.

 

As for over-working the memory handler as far as apps go: this is probably true. I need to remember that anyone who wants to code for this thing is going to know their beans to start with.

I wouldn't assume that if I were you. Some of the people I've worked with do really stupid things in their code.

Hell, I've done really stupid things in my code! :roll:

 

I used some code one of the original Amiga devs wrote as a starting point for some code, and found he wasn't closing files when he was done with them. Oops!

It happens.

 

The bottom line is, make your code efficient and if theirs sucks it's not your fault their program crashed or ran out of memory.

 

In any case: I appreciate the time you've taken to explain and the interest you're showing. Let me digest it all, and I'm sure something good will come out the other end... what an awful analogy :).

I don't think there is any shortage of interest here. :D

 

Taking time for digestion is good... it usually prevents indigestion later. (fits with your analogy?)


Heh... thanks James. I'm currently tinkering with the MyIDE driver (by way of a quick breather), but I'm keeping a close eye on your contributions (and contributions they are; I might be authoring this project, but the end product is the sum of all the expertise of others which I've been lucky enough to share).

 

Regarding the code snippet: that's quite clear and makes total sense. But (you knew there'd be a "but", right?) it also throws up what might simply be a misconception on my part: "next" and "prev" - surely they don't necessarily point to what's physically adjacent to the target block. This is the understanding on which I based the current single-pointer, free-only self-cleaning algorithm. The reason it walks the whole list to find ahead and behind free blocks is that they could conceivably be several steps away in the linked list. Does that make sense?

 

For example, here's a possible state of the (free only, forward pointer) list after a few random allocations and deallocations (offsets into buffer, with size in brackets):

 

root: -> 2048 (512), 4096 (256), 1024 (768)

 

The first free block we see is at 2048 and is 512 bytes long. The next free block is at 4096, etc. It's clear from this that memory was allocated and freed in a non-symmetrical fashion - but bear with me...

 

We now attempt to free a 256 byte block at 1792. Clearly, it has a free block directly ahead of (1024, 768 bytes) and behind it (2048, 512 bytes). Of course, with only free blocks in the list, my routine doesn't check prev and next pointers of the target block (since there are none). Instead, it walks the free list, checking each entry to see if it's either in front of or behind the target block. In this example, it would find the block at 2048 first, delete the node, and enlarge the target block by 512 bytes. Then - on the third iteration - it would discover that the block at 1024 is right in front of the target block. It would then enlarge that free node by the size of the (already enlarged) target block, without creating a new free block.
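That walk can be sketched in C using the numbers from the example above. Parallel arrays of (offset, size) pairs stand in for the real list nodes, and block headers are ignored so the arithmetic stays simple; all of this is illustrative, not the project's actual code:

```c
#include <assert.h>

#define MAXFREE 8
static unsigned off[MAXFREE], len[MAXFREE];  /* free-only list, any order */
static int nfree;

void free_block(unsigned o, unsigned s)
{
    /* walk the whole list: absorb any entry physically touching the
     * target, on either side, then rescan since the grown span may
     * now touch another entry */
    int i = 0;
    while (i < nfree) {
        if (off[i] + len[i] == o) {        /* entry ends where target starts */
            o = off[i];
            s += len[i];
        } else if (o + s == off[i]) {      /* target ends where entry starts */
            s += len[i];
        } else {
            i++;
            continue;
        }
        off[i] = off[--nfree];             /* delete the merged entry */
        len[i] = len[nfree];
        i = 0;                             /* rescan from the top */
    }
    off[nfree] = o;                        /* insert the coalesced block */
    len[nfree++] = s;
}
```

Running the example (free 256 bytes at 1792 against the list 2048/512, 4096/256, 1024/768) coalesces everything into one 1536-byte block at 1024, exactly as described, even though the list entries sit in arbitrary order.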

 

Now, I haven't quite gotten my head around whether this situation can occur with a two-way linked list which includes the free blocks as well. Surely, though, the order in which the blocks are represented in the list (i.e. the order in which the list would be walked), need not necessarily bear a relationship to the physical, linear arrangement of the blocks in memory. So, what I'm saying is that even if memptr->next isn't free, it doesn't necessarily mean there isn't actually a free block immediately - physically - behind this one.

 

Phew... I suppose it all depends on the order in which things are allocated and deallocated. Usually, there'll be a symmetrical relationship between allocations and deallocations - I realize this. But I didn't assume this to be the case when I wrote my first dynamic memory manager. It kind of assumes that the list nodes could be pointing anywhere, in spaghetti fashion, and it walks the list on each free operation in order to make absolutely sure it's coalesced any free blocks surrounding the target block.

 

This is probably what I've been trying to express for the past two pages. :) It makes sense to me, although I understand it may not be the most efficient method, and I've already taken on board a lot of alternative approaches. But this next/prev stuff still bothers me. I hope I've expressed myself well enough...

 

As for making assumptions on the competence of developers: well, you have it in one there. We ourselves are as fallible as the next coder. I think I'm trying to straddle two viewpoints here: someone who knows the source code inside out, and someone who doesn't but wants to write apps for it. I guess that's where the third skill comes into play: documenting the thing. :D

 

Regarding analogies: perfect! But I now have a headache... :)

Edited by flashjazzcat

Regarding the code snippet: that's quite clear and makes total sense. But (you knew there'd be a "but", right?) it also throws up what might simply be a misconception on my part: "next" and "prev" - surely they don't necessarily point to what's physically adjacent to the target block. This is the understanding on which I based the current single-pointer, free-only self-cleaning algorithm. The reason it walks the whole list to find ahead and behind free blocks is that they could conceivably be several steps away in the linked list. Does that make sense?

#4 was a two way linked list and allocating off the end of the first large enough memory block.

It works a little differently than the one-way linked list, where we just merge with the block that follows.

This approach eats more ram for the structure.

 

The list is always in order through memory. Nothing is ever removed from the list, we may insert new nodes at the end of a free block or combine blocks but everything is always in order.

 

I only included it to demonstrate another approach.

It is very fast to find a free block of RAM since the largest block should stay first in most cases... at least until a program butchers up the memory map heavily.

Adjacent free RAM is always merged on free.

If you don't find a big enough memory block with alloc... there isn't one.

But the trade off is that the size of the linked list structure is larger... unless you calculate the next pointer to save 2 bytes. Normally that would be slow but since we are usually finding a free block immediately, we only do it once unless memory gets really full and fragmented... which is the application's fault for being weird.
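A sketch of the "calculate the next pointer" trade-off, with a hypothetical header layout: since the list is kept in address order, the following header always sits right after this block's data, so the next pointer can be computed from the size and dropped from the structure:

```c
#include <assert.h>
#include <stddef.h>

/* header with the next pointer dropped; names are illustrative */
typedef struct hdr {
    unsigned short size;  /* bytes of data after this header */
    unsigned char  alloc; /* allocated flag */
    struct hdr    *prev;  /* kept: the previous header can't be computed */
} hdr;

/* the physically next block starts right after this block's data */
hdr *next_block(hdr *h)
{
    return (hdr *)((char *)h + sizeof(hdr) + h->size);
}
```

The cost is a few arithmetic operations per hop instead of a single pointer load, which only matters once allocation has to traverse many nodes.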

 

Depending on what an application is doing, allocated memory may gradually walk toward the front of the list and once the first block isn't large enough for the requested mem, it starts traversing nodes looking for a free block. At that point calculating the next pointer may slow things down slightly... just depends on how fragmented memory has become.

 

The structure:

/* data structure for memory manager using a two way linked list */
struct memstruct {
 WORD size;     /* size of current memory block */
 BYTE alloc;    /* memory allocated flag */
 struct memstruct *prev;    /* pointer to previous block */
 struct memstruct *next;    /* pointer to next block */
};

 

 

Allocation goes something like this (off the top of my head so no promises)

/* memory allocation routine using a two way linked list to maintain free/allocated memory */
/* memory allocation routine using a two way linked list to maintain free/allocated memory */
memstruct *alloc(WORD getsize)
{
 memstruct *current_node = memlist_start;  /* memlist_start is the start of the memory list and global for this example */
 memstruct *tempmem;   /* temporary pointer holder */

 getsize += sizeof(memstruct);  /* make sure we are looking for a block large enough for the memory and the struct */

 /* find the first large enough mem block to use
  * If current_node points to null, the list has ended without finding a big enough block
  */
 while (current_node != NULL)
 {
   /* check to see if the current node has been allocated before we do anything else */
   if(current_node->alloc == 0)
   {
     /* If current_node->size < getsize, move to next block and try again */
     if(current_node->size >= getsize)
     {
       /* if the block is too big, grab mem off of the end and shrink the current block */
       if(current_node->size > getsize)
       {
         tempmem = current_node->next;   /* don't lose the current next memory block */

         current_node->size -= getsize;  /* adjust the size of the current block */
         current_node->next = (memstruct *)((char *)current_node + sizeof(memstruct) + current_node->size);  /* point to the new block in the list */

         /* init the structure of the new block */
         current_node = current_node->next;                 /* point to the new node */
         current_node->next = tempmem;                      /* point next to the next node */
         current_node->prev = current_node->next->prev;     /* point back to previous node */
         current_node->size = getsize - sizeof(memstruct);  /* set up the size of the memory block */

         /* fix the next block */
         current_node->next->prev = current_node;           /* point back here */
       }

       current_node->alloc = 1;       /* mark the block as allocated (could use task #) */

       return(current_node);
     }
   }

   current_node = current_node->next;  /* no good, move to next node */
 }
 return (NULL);  /* didn't find any suitable memory */
}

 

 

 

 

For example, here's a possible state of the (free only, forward pointer) list after a few random allocations and deallocations (offsets into buffer, with size in brackets):

You are referring to the free only memory list approach with the forward pointer only.

This is what you are currently doing. Understood.

 

root: -> 2048 (512), 4096 (256), 1024 (768)

 

The first free block we see is at 2048 and is 512 bytes long. The next free block is at 4096, etc. It's clear from this that memory was allocated and freed in a non-symmetrical fashion - but bear with me...

And that will happen.

 

We now attempt to free a 256 byte block at 1792. Clearly, it has a free block directly ahead of (1024, 768 bytes) and behind it (2048, 512 bytes). Of course, with only free blocks in the list, my routine doesn't check prev and next pointers of the target block (since there are none). Instead, it walks the free list, checking each entry to see if it's either in front of or behind the target block. In this example, it would find the block at 2048 first, delete the node, and enlarge the target block by 512 bytes. Then - on the third iteration - it would discover that the block at 1024 is right in front of the target block. It would then enlarge that free node by the size of the (already enlarged) target block, without creating a new free block.

And you are doing many of the same steps... but you are having to deal with the limitation of the free only list approach.

 

Now, I haven't quite gotten my head around whether this situation can occur with a two-way linked list which includes the free blocks as well. Surely, though, the order in which the blocks are represented in the list (i.e. the order in which the list would be walked), need not necessarily bear a relationship to the physical, linear arrangement of the blocks in memory. So, what I'm saying is that even if memptr->next isn't free, it doesn't necessarily mean there isn't actually a free block immediately - physically - behind this one.

The list goes through memory in order. If the block at memptr->next isn't free, then the next physical block of memory is not free. So, yes it does mean there isn't a free block immediately - physically - behind this one.

 

 

Phew... I suppose it all depends on the order in which things are allocated and deallocated. Usually, there'll be a symmetrical relationship between allocations and deallocations - I realize this. But I didn't assume this to be the case when I wrote my first dynamic memory manager. It kind of assumes that the list nodes could be pointing anywhere, in spaghetti fashion, and it walks the list on each free operation in order to make absolutely sure it's coalesced any free blocks surrounding the target block.

In this approach, memory is maintained in order, blocks are joined when you do a free *if possible*. And no walk of the memory is ever required to clean up. But it comes at the price of using a two way linked list. You are stuck with larger structures or you have to calculate the next pointer using the size of the block.

 

This is probably what I've been trying to express for the past two pages. :) It makes sense to me, although I understand it may not be the most efficient method, and I've already taken on board a lot of alternative approaches. But this next/prev stuff still bothers me. I hope I've expressed myself well enough...

Once you wrap your head around it, it should be pretty easy to follow.

 

As for making assumptions on the competence of developers: well, you have it in one there. We ourselves are as fallible as the next coder. I think I'm trying to straddle two viewpoints here: someone who knows the source code inside out, and someone who doesn't but wants to write apps for it. I guess that's where the third skill comes into play: documenting the thing. :D

 

Regarding analogies: perfect! But I now have a headache... :)

I am recovering from a bad sinus infection... I already had a headache and popping out that code hasn't helped.


BTW, one other little trick on the alloc. You can require a minimum block size so you don't end up with lots of tiny blocks in the list that only hold a few bytes. So if the requested block size is less than the minimum... just set it to the minimum size early in the alloc.
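That clamp is a one-liner at the top of the alloc (`MIN_BLOCK` is a made-up threshold, not anything from the actual source):

```c
#include <assert.h>

#define MIN_BLOCK 8   /* made-up threshold; tune to taste */

/* bump tiny requests up to the minimum before searching the list */
unsigned clamp_request(unsigned getsize)
{
    if (getsize < MIN_BLOCK)
        getsize = MIN_BLOCK;
    return getsize;
}
```

The same threshold could also guard the split decision, so a leftover sliver smaller than `MIN_BLOCK` is handed out with the block rather than left as a useless node on the list.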


This should be faster than the equivalent in the alloc routine above.

         /* init the structure of the new block */
         current_node = current node->next;                 /* point to the new node */
         current_node->next = tempmem;                      /* point next to the next node */
         current_node->prev = tempmem->prev;                /* point back to previous node */
         current_node->size = getsize - sizeof(memstruct);  /* set up the size of the memory block */

         /* fix the next block */
         tempmem->prev = current_node;                      /* point back here */

 

FWIW, the alloc & free in C actually return void * or void according to an example I looked up later. That really won't matter for assembly but it might matter for the C compiler.


This version compiles.

//#include <ctype.h>

/* machine specific custom data types */
typedef long  WORD;
typedef char  BYTE;
#define NULL 0

/* definitions for memory allocated flag */
#define FREE 0
#define ALLOCATED -1

/* data structure for memory manager using a two way linked list */
typedef struct  NODE {
WORD size;						/* size of current memory block */
BYTE allocated;					/* memory allocated flag */
struct NODE *prev;				/* pointer to previous block */
struct NODE *next;				/* pointer to next block */
} memlist_node;

memlist_node *memlist_start;

void freemem(memlist_node *current_node)
{
current_node->allocated = FREE;						/* mark the block as free */
if (current_node->next->allocated == FREE)		/* check to see if the next block is free */
{
	/* if so, merge the blocks */
	current_node->size += (current_node->next->size + sizeof(memlist_node));  /* combine size */
	current_node->next = current_node->next->next;                         /* point past the old next block, to the new one */
}

/* try to merge with previous block */
if (current_node->prev->allocated == FREE)       /* check to see if the previous block is free */
{
	/* if so, merge the blocks */
	current_node = current_node->prev;           /* point to previous block structure */
	current_node->size += (current_node->next->size + sizeof(memlist_node));  /* combine size */
	current_node->next = current_node->next->next;                         /* point past the old next block, to the new one */
}
}

/* memory allocation routine using a two way linked list to maintain free/allocated memory */
void *alloc(WORD getsize) 
{
memlist_node *current_node = memlist_start;  /* memlist_start is the start of the memory list and global for this example */
memlist_node *tempmem;   /* temporary pointer holder */

getsize += sizeof(memlist_node);  /* make sure we are looking for a block large enough for the memory and the struct */

/* find the first large enough mem block to use
 * If current_node points to null, the list has ended without finding a big enough block
 */
while (current_node != NULL)
{
	/* check to see if the current node has been allocated before we do anything else */
	if(current_node->allocated == FREE)
	{
		/* If current_node.size < getsize, move to next block and try again */
		if(current_node->size >= getsize)
		{
			/* if the block is too big, grab mem off of the end and shrink the current block */
			if(current_node->size > getsize)
			{
				tempmem = current_node->next;   /* don't lose the current next memory block */

				current_node->size -= getsize;  /* adjust the size of the current block */
				current_node->next = (memlist_node *)((char *)current_node + sizeof(memlist_node) + current_node->size);   /* point to the new block in the list */

				/* init the structure of the new block */
				current_node = current_node->next;                 /* point to the new node */
				current_node->next = tempmem;                     /* point next block pointer to the next node */
				current_node->prev = tempmem->prev;                /* point previous block pointer to previous node */
				tempmem->prev = current_node;                      /* point the next block previous block pointer back here */
				current_node->size = getsize - sizeof(memlist_node);  /* set up the size of the memory block */
			}
			current_node->allocated = ALLOCATED;       /* mark the block as allocated (could use task #) */

			return(current_node);
		}
	}
	current_node = current_node->next;  /* no good, move to next node */
}
return (NULL);  /* didn't find any suitable memory */
}

 

By having a memory list on each bank of RAM, another routine could reuse the allocation routine and allocate RAM off of different banks. You'd need a wrapper bankalloc routine that sets the bank, sets the memlist_start for the bank (should always be at the start of the bank), and then calls the alloc.

If the call to alloc from the bankalloc succeeds, the routine returns the address & bank, if not, it steps to the next bank and tries again until it finds some memory or it gets to the end of memory.

 

You might be able to skip banks if you keep a variable with the max block size at the base of each bank. As you allocate/free, you adjust the max size. Since you don't always free in order, it will be wrong at times. That's OK. If an alloc fails on that bank, you adjust the max size down. This way you can skip banks where you know an alloc won't succeed by checking the max size against the size you are trying to allocate. It's quick and dirty, and will save *some* time.

You will also need a cleanup that walks the banks and adjusts the sizes to be accurate if you ever fail to find RAM.

There may be a better way but I don't know of one that could reuse the alloc/free code. You'd need custom versions that adjust the size and that would slow down the alloc/free all the time.

 

Trade offs.

Edited by JamesD

FWIW, the alloc & free in C actually return void * or void according to an example I looked up later. That really won't matter for assembly but it might matter for the C compiler.

 

Right, that allows you to cast it back to whatever kind of pointer you need.

Edited by danwinslow

A very rough implementation of the bank memory allocation routine.

The return code may be a problem if a compiler doesn't support over 16 bits and I wasn't sure if cc65 or any other 6502 C compiler supported 32 bit longs, so I left that unfinished.

 

typedef struct bank {
  WORD max_size;
  BOOL skipped;  /* so we can say if skipped == TRUE */
} bankdata;

/* 24 bit address pointer structure */
typedef struct baddress {
WORD address;	/* 16 bit memory address */
BYTE bank;		/* 8 bit bank number */
} bankaddress;

/* bank memory allocation routine 
* the return code is a structure, so probably a union of 32 bits and the struct?
* I'm not aware of any 24 bit data types but on little endian we just add a byte at the end
* to do a union.  Maybe pass in a pointer to the struct in the call and put values in it.
* ------------incomplete---------
*/
bankaddress bankalloc(WORD getsize)
{
BYTE bank = FIRSTBANK;  /* start at first usable memory bank */
char *memptr = NULL;		/* pointer for return code from alloc */
bankaddress bmemaddress;	/* our memory pointer for the return code */

bankdata *membankinfo = bankbaseaddress;	/* memory structure is always at the start of the bank memory */
memlist_start = bankbaseaddress + sizeof(bankdata);  /* all banks start at the same address so this is only set once */

/* clear all membank skipped flags */
for(bank = FIRSTBANK; bank <= LASTBANK; bank++)
{
	setbank(bank);
	membankinfo->skipped = FALSE;	/* remember we skipped this bank */
}
	
/* try to allocate from each bank until we succeed or it fails on all */
for(bank = FIRSTBANK; (bank <= LASTBANK) && (memptr == NULL); bank++)
{
	setbank(bank);  /* start a new bank */
	bmemaddress.bank = bank;	/* for return */
	
	if(membankinfo->max_size >= getsize)
	{
		/* run alloc on this memory list */
		memptr = alloc(getsize);
			
		/* if malloc fails, reduce the max size to less than how much we attempted to allocate */
		if(memptr == NULL)
		{
			membankinfo->max_size = getsize + sizeof(memlist_node) - 1;  /* adjust the max size to less than the alloc that failed */
		}
	}
	else
	{
		membankinfo->skipped = TRUE;	/* remember we skipped this bank */
	}
}

/* if alloc failed on all blocks, cleanup max_size and rescan
 * doesn't rescan any further than required
 */
if(memptr == NULL)
{
	/* check skipped banks */
	for(bank = FIRSTBANK; (bank <= LASTBANK) && (memptr == NULL); bank++)
	{
		setbank(bank);	/* bank switch */
		bmemaddress.bank = bank;	/* for return */
		
		/* check if it's a skipped memory bank */
		if(membankinfo->skipped == TRUE)
		{
			setmaxsize();	/* adjust the max_size for this bank to actual */
			
			/* check again to see if alloc will succeed */
			if(membankinfo->max_size > getsize + sizeof(memlist_node))
			{
				memptr = alloc(getsize);
			}
		}
	}
}
bmemaddress.address = (WORD)memptr;	/* for return */

return(bmemaddress);
}


If you have a custom free routine for the bank memory it shouldn't add too much to the code size and it can update the max_size on the bank memory structure. So a failed alloc will reduce it from the initial value and the free can increase it. No need for the cleanup in the 2nd half of the bank memory allocation routine and it stays pretty simple. Well... about as simple as you are going to get anyway.

<edit>

Assembly could reuse quite a bit of code between the alloc routines for banks or regular memory.

The skipped flag and all references to it can be removed but I didn't totally do it here.

typedef struct bank {
  WORD max_size;
//   BOOL skipped;  /* so we can say if skipped == TRUE */
} bankdata;

/* 24 bit address pointer structure */
typedef struct baddress {
WORD address;	/* 16 bit memory address */
BYTE bank;		/* 8 bit bank number */
} bankaddress;

/* bank memory allocation routine 
* the return code is a structure, so probably a union of 32 bits and the struct?
* Note that multiple memory allocations may fail on a block before the max_size is adjusted to what the actual maximum is
*/
bankaddress bankalloc(WORD getsize)
{
BYTE bank = FIRSTBANK;  /* start at first usable memory bank */
char *memptr = NULL;		/* pointer for return code from alloc */
bankaddress bmemaddress;	/* our memory pointer for the return code */

bankdata *membankinfo = bankbaseaddress;	/* memory structure is always at the start of the bank memory */
memlist_start = bankbaseaddress + sizeof(bankdata);  /* all banks start at the same address so this is only set once */

/* clear all membank skipped flags */
for(bank = FIRSTBANK; bank <= LASTBANK; bank++)
{
	setbank(bank);
	membankinfo->skipped = FALSE;	/* remember we skipped this bank */
}
	
/* try to allocate from each bank until we succeed or it fails on all */
for(bank = FIRSTBANK; (bank <= LASTBANK) && (memptr == NULL); bank++)
{
	setbank(bank);  /* start a new bank */
	bmemaddress.bank = bank;	/* for return */
	
	if(membankinfo->max_size >= getsize)
	{
		/* run alloc on this memory list */
		memptr = alloc(getsize);
			
		/* if malloc fails, reduce the max size to less than how much we attempted to allocate */
		if(memptr == NULL)
		{
			membankinfo->max_size = getsize + sizeof(memlist_node) - 1;  /* adjust the max size to less than alloc that failed */
		}
	}
}
bmemaddress.address = (WORD)memptr;	/* for return */

return(bmemaddress);
}


/* memory bank free mem routine */
void bfree(bankaddress *bmemaddress)
{
bankdata *membankinfo = bankbaseaddress;	/* memory structure is always at the start of the bank memory */
memlist_node *current_node = (memlist_node *)bmemaddress->address;	/* extract the memory block address */
setbank(bmemaddress->bank);					/* set the bank */

current_node->allocated = FREE;						/* mark the block as free */
if (current_node->next->allocated == FREE)		/* check to see if the next block is free */
{
	/* if so, merge the blocks */
	current_node->size += (current_node->next->size + sizeof(memlist_node));  /* combine size */
	current_node->next = current_node->next->next;                         /* point past the old next block, to the new one */
}
/* try to merge with previous block */
if (current_node->prev->allocated == FREE)       /* check to see if the previous block is free */
{
	/* if so, merge the blocks */
	current_node = current_node->prev;           /* point to previous block structure */
	current_node->size += (current_node->next->size + sizeof(memlist_node));  /* combine size */
	current_node->next = current_node->next->next;                         /* point past the old next block, to the new one */
}
/* adjust the max block size on the bankdata structure */
if(current_node->size > membankinfo->max_size) membankinfo->max_size = current_node->size;
}

Edited by JamesD
