+MarkB Posted May 30 (edited)

Branching this topic from the Lua thread. I've seen several examples where devs are rolling their own memory banking scheme (Ghostbusters, Force Command, etc). It would be nice to have a generic solution that works for existing and future code ports. I can't remember if we just discussed this or if I posted this before, so posting my latest thoughts here.

A memory board with logic that combines a page and offset using a 4-bit shift and add, a la 8086, to generate a 20-bit address would be nice. But even if we just restrict ourselves to the existing SAMS page registers it should be possible to do something in software. I've been musing about how we could do automatic bank segmentation of code and data with gcc to allow builds for SAMS. SAMS supports 8k banks (or can it do 4k?) so imagine we have 4 switchable banks for code, data, stack and extra. We could modify gcc and gas and ld to be aware of this. The addresses for the segments could be, for example:

>2000 - stack, SS
>a000 - data, DS
>c000 - code, CS
>e000 - extra, ES (so @Jason can do memcpy :-) )

Now let's say every compiled module was compiled so that it had its own CS and DS page/segment values. Static functions and static data would be referenced by "near" pointers, which are 16-bit only. Referencing external functions or global data would force a reference to a "far" (32-bit) pointer. Calling a far function would be changed to be a call to a short static trampoline function, preferably located somewhere in >83XX. So a near call would be:

bl   @function

and a far call would be (edited to use data to avoid using regs):

bl   @trampoline
data PAGE(function)
data OFFSET(function)

trampoline is then:

push @cs
mov  *r11+,@cs
mov  *r11+,r0
push r11
bl   *r0
pop  r11
pop  @cs
rt

Accessing far data would cause the assembler to extend

movb r1,@y

to:

mov  @ds,r0
mov  @PAGE(y),@ds
movb r1,@OFFSET(y)
mov  r0,@ds

Pointers would have to be 32-bit (PAGE and OFFSET) by default but could be cast to near (16-bit, cast to uint16_t) if it is known the pointer is near only. A NEARPTR macro would do the cast and we could have a NEARDATA() macro to dereference the pointer. The compiler treats pointers as ints, so changing Psize (pointer size) from HImode (16-bit) to SImode (32-bit) would tell the compiler to generate 32-bit values (two consecutive registers) for any pointers passed as parameters. We could create insns for mem(SImode) to emit the above sequence, or a new pseudo op-code.

Stack is a little tricky if we want more than 8k, but it could be handled by creating stackenter() and stackleave() functions to create and tear down stack frames. If a new stack frame doesn't fit in the current stack segment, increment the segment by 1 and zero the offset to allocate a new stack page. It would be nicer still if the stack was a sliding window to keep parameters easily accessible, but if pages are 8k this would mean allocating 16K of CPU space for the stack. But if a sliding window was possible then stackenter() would just need to add 8k to the SP and increment the upper and lower bank page numbers.

Heap values would be far pointers by default. Then we have unlimited heap but any one structure bigger than 8KB would not be allowed. The compiler would issue an error if the code and data in any one module exceeded 8K. Or we could get clever and have separate DS and CS page numbers per module to give each module 8KB static data and 8KB code.
Either way devs would have to manually split up large modules and, ideally, also manually merge smaller modules together to avoid too many far references. This could just be done using #includes of source files into a bank accumulator file. The assembler and linker would have to be told how to resolve PAGE and OFFSET references. Or maybe we just introduce new pseudo-opcodes for far data and code accesses and have the assembler or a pre-assembly step expand these. A pre-assembly pass could also eliminate redundant DS setups. I'm not sure gcc knows if a function is local or not, but the assembler must.

By default there will be lots of far pointers and that will hurt performance, but at least almost ANY C/C++/Pascal/Java code would compile and run out of the box, and it would give a working starting point from which to profile and optimise. It doesn't sound impossible, does it?

Edited May 30 by khanivore avoid regs in trampoline
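As a C-level illustration of the far-pointer idea above, a minimal sketch only: the FARPTR type, the FAR_PAGE/FAR_OFFSET/NEARPTR names and the >4014 DS mapping register address are assumptions for the example, and the CRU enable needed to reach the SAMS mapper is ignored.

#include <stdint.h>

/* A far pointer packs the SAMS page in the high word and the 16-bit
   CPU address in the low word, e.g. 0x0005E000 = page 5, address >E000. */
typedef uint32_t FARPTR;

#define FAR_PAGE(p)    ((uint16_t)((p) >> 16))
#define FAR_OFFSET(p)  ((uint16_t)((p) & 0xFFFFu))

/* Cast down to a near (16-bit) pointer when the object is known to be in
   the currently mapped data segment - the NEARPTR/NEARDATA idea above. */
#define NEARPTR(p)        ((void *)(uintptr_t)FAR_OFFSET(p))
#define NEARDATA(type, p) (*(type *)NEARPTR(p))

/* What the compiler-emitted far store would amount to: save DS, map the
   target page into the data window, store, restore DS - mirroring the
   mov @ds,r0 / mov @PAGE(y),@ds / movb r1,@OFFSET(y) / mov r0,@ds sequence. */
#define SAMS_DS (*(volatile uint16_t *)0x4014)   /* assumed DS mapping register */

static inline void far_write_byte(FARPTR p, uint8_t value)
{
    uint16_t saved = SAMS_DS;                    /* save current DS page   */
    SAMS_DS = FAR_PAGE(p);                       /* map the target page    */
    *(volatile uint8_t *)(uintptr_t)FAR_OFFSET(p) = value;
    SAMS_DS = saved;                             /* restore the old page   */
}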
HOME AUTOMATION Posted May 30 (edited)

17 hours ago, khanivore said: SAMS supports 8k banks (or can it do 4k?)

4K... AMS Schematic and Notes.pdf

Sounds interesting! Parts of your idea look familiar... I came up with this in a desperate attempt to utilize the FinalGROM 99's 512K RAM... I was just beginning to grasp how paged memory works (by poking at it using EASYBUG). Hmm ...I haven't come any farther yet.

P.S. DS is the entry point in the code example; I meant to change it to RUN.

Edited May 31 by HOME AUTOMATION ...corrected the link.
+TheBF Posted May 30

2 hours ago, khanivore said: Branching this topic from the Lua thread. [...]
I had recently come to the same conclusion about how best to make a SAMS-based Forth system. I have been using a DOS-based segmented Forth system for almost 40 years, so I have a template to follow. The alternative is to go all in and use 32-bit addresses where the page number is simply the high end of a double. Segmented architectures are a neat hack to handle bigger memory spaces, but if there is a way to create a virtual linear address space I think in the long run it would be easier for folks to grok. So my logic is: if we have to handle a 32-bit address with a page number, it comes down to how we interpret it - segment + address, or a 32-bit virtual address. I am not sure which computation is going to be slower at this time.
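To make that trade-off concrete, here is a small sketch of the linear interpretation (the names, the 4K page size and the >A000 window base are assumptions for illustration): a linear 32-bit virtual address needs a shift and a mask to recover the SAMS page and the window offset, whereas the segment:offset form already keeps the page in the high word, so no arithmetic is needed before loading the mapper.

#include <stdint.h>

#define PAGE_BITS  12u        /* assumed 4K SAMS pages                   */
#define DS_WINDOW  0xA000u    /* assumed CPU address of the data window  */

/* Linear virtual address -> SAMS page number. */
static inline uint16_t vaddr_page(uint32_t vaddr)
{
    return (uint16_t)(vaddr >> PAGE_BITS);
}

/* Linear virtual address -> CPU address inside the data window. */
static inline uint16_t vaddr_cpu(uint32_t vaddr)
{
    return (uint16_t)(DS_WINDOW + (vaddr & ((1u << PAGE_BITS) - 1u)));
}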
+jedimatt42 Posted May 30

Linking constant data so that code and literals are not paged together would be nice. I jump through a lot of hoops to pass display text out to the banked display routines from other banks. I got lazy and allocate a global 256-byte buffer and copy string parameters into that buffer before passing to a banked routine.

There is also 8k banking in cartridges, which is popular. And then there is banked code in SAMS making banked calls into cartridge space... Maybe that problem is unique to me. But maybe a 32-bit virtual address space would have room for cartridge banks and SAMS banks. Once the concept is proven for SAMS, a bit could indicate cartridge space, but that would complicate code generation...

Porting code usually starts to require a heap concept too. I could imagine a virtual memory manager that, before the real function call, ensures that the parameters are all paged into real addresses... but I haven't thought through nested calls... Or maybe the virtual address is always passed through but the compiler maps it to a real address as needed when reading and writing. I could also imagine a slower, but paging, stack. With many of the same concerns...
RXB Posted May 30

I take it this is only backwards compatible with your own apps? The lower 8K has assembly support for anything you do, and without it how would you load that support unless it is also loaded - but how? EA? XB? Forth? Pascal? If you jailbreak you still need assembly support, as it is not built into the TI-99/4A?
+TheBF Posted May 30 (edited)

6 hours ago, RXB said: I take it this is only backwards compatible with your own apps? The lower 8K has assembly support for anything you do, and without it how would you load that support unless it is also loaded - but how? EA? XB? Forth? Pascal? If you jailbreak you still need assembly support, as it is not built into the TI-99/4A?

Since this is GCC making complete program images, RAM can be a blank slate. The run-time block for the C program can contain everything needed (and even stuff you don't need). If it was in a cartridge then it would need to provide the loader for the programs. But it would be possible to use E/A's loader and bootstrap another loader that can load into SAMS. I suppose it could even be done with an XB-based loader: load the big loader and then load the rest. It's blue sky at this point in time.

Edited May 31 by TheBF typo
Asmusr Posted May 31

18 hours ago, khanivore said: Heap values would be far pointers by default. Then we have unlimited heap but any one structure bigger than 8KB would not be allowed.

Can you say a bit more about how you imagine memory management would work?
+MarkB Posted May 31 Author

1 hour ago, Asmusr said: Can you say a bit more about how you imagine memory management would work?

Dynamic memory? OK, let's say the loader starts loading text into bank/page 0 and finishes at page n. The heap could begin at page n+1. Blocks on the heap would have a header with a used/free marker and a pointer to the next free block. The heap could be initialised as a list of free blocks of 8K in size. malloc() would allocate space by walking the free list to find a large enough free block and return a far (32-bit) pointer to a contiguous area contained within one 8k page. If there wasn't space in a page to satisfy the allocation request it would increase the page number by 1 and try again. free() would just change a block from used to free and merge it with adjacent free blocks within the same page.

Access to heap data would be through a far pointer only, so any dereferencing would cause the compiler to emit a load of the high part of the 32-bit address into the data segment (DS) register. Loading the DS reg will add overhead but should be amortised over the number of bytes in whatever structure is being accessed, e.g. populating a 100-byte array will mean the 1st byte access has to load the DS reg but the other 99 accesses don't.
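A minimal sketch of that kind of allocator, for illustration only: the block header layout, the sams_map_ds() helper and the page limits are assumptions, and block splitting plus free-list initialisation are left out.

#include <stdint.h>

#define PAGE_SIZE   0x2000u      /* 8K banks for this sketch                */
#define HEAP_FIRST  8u           /* assumed first heap page (page n+1)      */
#define HEAP_LAST   255u         /* assumed last page of the SAMS card      */

typedef struct {
    uint16_t size;               /* payload size in bytes                   */
    uint16_t used;               /* 1 = allocated, 0 = free                 */
    uint16_t next;               /* offset of the next block, 0 = end       */
} block_t;

typedef uint32_t farptr_t;       /* page in high word, CPU address in low   */

/* Assumed helper: map 'page' into the DS window and return its CPU base. */
extern uint8_t *sams_map_ds(uint16_t page);

farptr_t far_malloc(uint16_t size)
{
    for (uint16_t page = HEAP_FIRST; page <= HEAP_LAST; page++) {
        uint8_t *base = sams_map_ds(page);          /* bring the page into view   */
        uint16_t off = 0;
        for (;;) {
            block_t *b = (block_t *)(base + off);
            if (!b->used && b->size >= size) {      /* first fit within this page */
                b->used = 1;
                return ((farptr_t)page << 16)
                     | (uint16_t)(uintptr_t)(base + off + sizeof(block_t));
            }
            if (b->next == 0)                       /* end of this page's list,   */
                break;                              /* so try the next page       */
            off = b->next;
        }
    }
    return 0;                                       /* out of heap                */
}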
+MarkB Posted May 31 Author

12 hours ago, RXB said: I take it this is only backwards compatible with your own apps? The lower 8K has assembly support for anything you do, and without it how would you load that support unless it is also loaded - but how? EA? XB? Forth? Pascal? If you jailbreak you still need assembly support, as it is not built into the TI-99/4A?

Well, it would be source-code compatible, not binary compatible, so all source code would need to be rebuilt, which it generally is anyway. I'm not sure how it would coexist with XB etc., but no reason why it wouldn't, since you can only run one of those at a time, so the SAMS card just looks like RAM to XB when not being used by a compiled image.
+MarkB Posted May 31 Author

14 hours ago, jedimatt42 said: Linking constant data so that code and literals are not paged together would be nice. I jump through a lot of hoops to pass display text out to the banked display routines from other banks. I got lazy and allocate a global 256-byte buffer and copy string parameters into that buffer before passing to a banked routine.

The far pointers would fix that. If your text messages are global and if a module references them then it will generate a far pointer reference to the message.

14 hours ago, jedimatt42 said: And then there is banked code in SAMS making banked calls into cartridge space... Maybe that problem is unique to me. But maybe a 32-bit virtual address space would have room for cartridge banks and SAMS banks. Once the concept is proven for SAMS, a bit could indicate cartridge space, but that would complicate code generation...

I hadn't thought about cartridge space, but we could just change the CS page to be >6000 instead of >C000? Given that the object code will contain 16-bit near addresses (e.g. BL @6012) it wouldn't be possible to allow pages to be loaded at either >C000 or >6000; they would have to be linked for one or the other.

14 hours ago, jedimatt42 said: I could imagine a virtual memory manager that, before the real function call, ensures that the parameters are all paged into real addresses... but I haven't thought through nested calls... Or maybe the virtual address is always passed through but the compiler maps it to a real address as needed when reading and writing.

If the parameters are passed by value then they are on the stack, so no paging needed. If they are by reference then the default will be a 32-bit (virtual) pointer and a DS load will occur before access. Hmmm. What happens if we pass a reference to a stack var? I'll have to think about that.
+MarkB Posted May 31 Author

18 hours ago, HOME AUTOMATION said: 4K... AMS Schematic and Notes.pdf

cool! that makes sliding windows possible for the stack.
+MarkB Posted May 31 Author

7 hours ago, khanivore said: The far pointers would fix that. If your text messages are global and if a module references them then it will generate a far pointer reference to the message.

Oh wait, I see the problem. If literal text is in a code seg and you call a function in another bank (e.g. printf ("hello world")), then you lose access. We would have to ensure nothing but executable code goes in CS; literals need to go into DS (and use far pointers as well).
Asmusr Posted May 31

9 hours ago, khanivore said: Blocks on the heap would have a header with a used/free marker and a pointer to the next free block.

Thanks for the explanation - that sounds fine. But I think we would often like to allocate an entire page of 8192 (or 4096) bytes, so maybe it's better to reserve one page for the allocation table instead of having a header in each block? I often run into this issue with ROM banks where you're supposed to have a header in each bank, but that doesn't work if your data fit precisely into 8192 bytes.
RXB Posted May 31

27 minutes ago, Asmusr said: Thanks for the explanation - that sounds fine. But I think we would often like to allocate an entire page of 8192 (or 4096) bytes, so maybe it's better to reserve one page for the allocation table instead of having a header in each block? I often run into this issue with ROM banks where you're supposed to have a header in each bank, but that doesn't work if your data fit precisely into 8192 bytes.

You mean like XB3 or XB or SXB or RXB or XB GEM or XB 2.9?
Asmusr Posted May 31

42 minutes ago, RXB said: You mean like XB3 or XB or SXB or RXB or XB GEM or XB 2.9?

No bells are ringing here. 🙂
+MarkB Posted May 31 Author

3 hours ago, Asmusr said: Thanks for the explanation - that sounds fine. But I think we would often like to allocate an entire page of 8192 (or 4096) bytes, so maybe it's better to reserve one page for the allocation table instead of having a header in each block? I often run into this issue with ROM banks where you're supposed to have a header in each bank, but that doesn't work if your data fit precisely into 8192 bytes.

Yes, true, it is annoying to have to lose a few bytes to the header. A separate list of pointers in another page would work too.
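One possible shape for that, sketched below (the table page number, the entry format and the sams_map_ds() helper are assumptions for illustration): keep one word per heap page in a dedicated table page, so the data pages themselves stay headerless and can be handed out as a full 8192 (or 4096) bytes.

#include <stdint.h>

#define TABLE_PAGE  7u           /* assumed: page reserved for the table    */
#define MAX_PAGES   256u         /* assumed: number of pages on the card    */

extern uint8_t *sams_map_ds(uint16_t page);   /* assumed mapper helper      */

/* Claim a whole page: scan the table page for a free entry and mark it.
   0 = free, anything else = an owner/size tag chosen by the caller. */
static uint16_t alloc_whole_page(uint16_t owner_tag)
{
    uint16_t *table = (uint16_t *)sams_map_ds(TABLE_PAGE);
    for (uint16_t p = 0; p < MAX_PAGES; p++) {
        if (table[p] == 0) {
            table[p] = owner_tag;
            return p;            /* page number; no in-page header needed   */
        }
    }
    return 0xFFFF;               /* no free page left                       */
}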
RXB Posted May 31 (edited)

5 hours ago, Asmusr said: No bells are ringing here. 🙂

All of these have the first 4K of a ROM page almost all the same, except for the XML tables in the upper 4K of each 8K page. Same as the XB module or Mini Memory.

Edited May 31 by RXB missing text
RXB Posted May 31

16 hours ago, khanivore said: Well, it would be source-code compatible, not binary compatible, so all source code would need to be rebuilt, which it generally is anyway. I'm not sure how it would coexist with XB etc., but no reason why it wouldn't, since you can only run one of those at a time, so the SAMS card just looks like RAM to XB when not being used by a compiled image.

Hmm, my RXB has supported AMS and SAMS since 1999.
TheMole Posted June 3

I like the sound of all of this. I'm a bit concerned about the overhead a fully automatic system might introduce, but if functions and global variables could be decorated to allow the programmer to optimize certain calls/data accesses, it would be a great way of making larger C programs portable to the TI.

Having done the bankswitching manually in everything I've done so far, the lowest hanging fruit for me would be if we could get rid of the need to implement explicit trampoline functions. Imagine the following code running in bank 1, calling a function in bank 2, it would be awesome if the bank switching could be made part of the function prologue/epilogue:

// In bank 1
__bank(1)__ void some_function()
{
    int i = 5;
    int retval = some_other_function(i);
}

// In bank 2
__bank(2)__ int some_other_function(int param_by_value)
{
    return param_by_value * 2;
}

If the function prologue for some_other_function() would include the switch to bank 2 and leaves the stack alone, and the function epilogue ensures we switch back to whatever the calling bank was (perhaps the bank needs to end up on the stack?), the single most annoying thing about coding bankswitched cartridges would be solved. It would be especially neat if gcc knew whether or not a bank switch was necessary and avoided the overhead if possible.

By default, the compiler could pick the banks automatically, but decorations could give the programmer control over which functions reside in which banks based on their knowledge of program flow (I think it'll be hard to optimize automatically, unless the compiler has access to things like the maximum call stack depth, loop organization, etc... at compile time).

None of this solves the issue of the heap, of course... How do you see things like memcpy from one heap segment to another heap segment working?
+MarkB Posted June 4 Author

17 hours ago, TheMole said: I like the sound of all of this. I'm a bit concerned about the overhead a fully automatic system might introduce, but if functions and global variables could be decorated to allow the programmer to optimize certain calls/data accesses, it would be a great way of making larger C programs portable to the TI.

Yes, fully automatic could be very inefficient, but some hints / decorations to force near ptrs etc. could be used to optimise the default.

17 hours ago, TheMole said: If the function prologue for some_other_function() would include the switch to bank 2 and leaves the stack alone, and the function epilogue ensures we switch back to whatever the calling bank was (perhaps the bank needs to end up on the stack?),

Yes, the caller bank number (CS) would have to be stored on the stack. I think the bank switch has to be done through a trampoline and not through the prologue/epilogue, as these will exist in the callee bank, so you have to bank switch before you execute them. The caller can't do the bank switch as it is executing code in the current bank, so it can't swap itself out. But a call to a static, non-bankable trampoline could be emitted automatically when a call is made to a function in another bank.

17 hours ago, TheMole said: It would be especially neat if gcc knew whether or not a bank switch was necessary and avoided the overhead if possible.

I was thinking the easiest way to do this is to do a bank switch whenever a function is called that is not in the current module. If a called function is static or inline, then it is assumed to be in the current bank; if not, then assume a trampoline call is needed.

17 hours ago, TheMole said: (I think it'll be hard to optimize automatically, unless the compiler has access to things like the maximum call stack depth, loop organization, etc... at compile time)

Well, if the stack is in a separate bank then the same stack can continue to be used across text segment banks, so I think the deciding factor is just code size. If the code fits into 8K then it can all go in one code bank.

17 hours ago, TheMole said: How do you see things like memcpy from one heap segment to another heap segment working?

I was thinking that if heap structures are not allowed to span 8K banks, then by having two data banks (DS and ES at >a000 and >e000) we could copy data from one bank to another. Pointers are the big complication. They will have to be assumed to be 32-bit by default. But a pointer could be to data, stack or text, so the address will also have to be taken into account. E.g. copying data from ptr >0004a000 to ptr >00062000 means copying data in bank 4 to bank 6, but if that data also contains pointers then we have to be sure to map bank 4 to address >a000 and bank 6 to >2000 before dereferencing them.
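A rough sketch of how such a cross-bank copy could look, purely for illustration: the sams_map() helper, the window constants and the 13-bit offset mask are assumptions, and copies that straddle a bank boundary are not handled.

#include <stdint.h>
#include <string.h>

typedef uint32_t farptr_t;                 /* page in high word, address in low word */

#define FAR_PAGE(p)    ((uint16_t)((p) >> 16))
#define FAR_OFFSET(p)  ((uint16_t)((p) & 0x1FFFu))   /* offset within an 8K bank */

#define DS_WINDOW  0xA000u
#define ES_WINDOW  0xE000u

/* Assumed helper: write 'page' into the SAMS mapping register(s) for the
   8K window starting at 'window' (>A000 or >E000). */
extern void sams_map(uint16_t window, uint16_t page);

void far_memcpy(farptr_t dst, farptr_t src, uint16_t len)
{
    sams_map(DS_WINDOW, FAR_PAGE(src));    /* source bank visible at >A000      */
    sams_map(ES_WINDOW, FAR_PAGE(dst));    /* destination bank visible at >E000 */
    memcpy((void *)(uintptr_t)(ES_WINDOW + FAR_OFFSET(dst)),
           (void *)(uintptr_t)(DS_WINDOW + FAR_OFFSET(src)),
           len);                           /* plain near copy between the windows */
}

Note that this re-bases both sides into fixed windows rather than using the CPU addresses stored in the pointers, which is exactly the remapping issue described above when the copied data itself contains pointers.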
TheMole Posted June 4

3 hours ago, khanivore said: I think the bank switch has to be done through a trampoline and not through the prologue/epilogue, as these will exist in the callee bank, so you have to bank switch before you execute them. The caller can't do the bank switch as it is executing code in the current bank, so it can't swap itself out. But a call to a static, non-bankable trampoline could be emitted automatically when a call is made to a function in another bank.

Yeah, you're right. I tried to accomplish something akin to this in the past by using the __naked__ attribute, but it became very messy very fast. Automatically generating trampoline functions would probably be the way to go, indeed (or perhaps better yet, having a single trampoline function pre-defined somewhere that can be re-used for all cross-bank calls?).

3 hours ago, khanivore said: Well, if the stack is in a separate bank then the same stack can continue to be used across text segment banks, so I think the deciding factor is just code size. If the code fits into 8K then...

I'm not sure I have it all straight in my head... would you mind explaining what you have in mind in terms of memory map, including what is bankable and when, etc...?
+MarkB Posted June 5 Author (edited)

21 hours ago, TheMole said: I'm not sure I have it all straight in my head

Me neither 🙂 It's just a brainstorm at this stage really.

21 hours ago, TheMole said: would you mind explaining what you have in mind in terms of memory map, including what is bankable and when, etc...?

What I was thinking was that if each of the pages at >2000, >a000, >c000 and >e000 were bankable, then the registers that control bank switching for them (>4004/>4006, >4014/>4016, >4018/>401a and >401c/>401e) can be considered equivalent to segment registers as in segmented 8086 memory, so that is SS, DS, CS and ES respectively.

The code bank (CS / >c000) switches whenever a far call is made to another module. The trampoline must be in a memory location that isn't banked, preferably somewhere in >8300 if it will fit. If each module also has its own data segment, then access to static data in a module will load that module's data bank number into DS. Calls will push the caller bank number and address to the stack.

Far data pointers are 32-bit. The high 16 bits are the bank number and the low 16 bits are the address. The highest 3 bits of the address select the segment register to load (right shift 11 bits and add >4000). So for example a ptr to >0005e000 will load the value 5 into >401c and then access address >e000.

The stack could be limited to 8K and could be non-bankable, but ideally it would not be limited to 8K and so must be bankable. Switching stack segments is a bit tricky because offsets are used for access (e.g. function parameter 1 might be @>2[sp]) and we don't want them to wrap around a page boundary. So I was thinking the prologue and epilogue functions could switch stack banks at a pre-determined threshold (e.g. less than >100 bytes of stack space left), or we could get clever and have a sliding window where the stack moves up or down by a 4K page when a limit is approached and then 4096 is added or subtracted from the SP. This would mess up pointers, though, if the address of a stack var is used, so the sliding window might be a bad idea.

Edited June 5 by khanivore Correct registers for 4K pages not 8K
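For what it's worth, the address-to-mapper-register computation above is just a shift and an add; a small sketch, assuming 4K pages (the function name is made up, and the CRU enable needed before touching the mapper registers is omitted):

#include <stdint.h>

/* SAMS mapping register for a CPU address: >4000 + 2 * (addr >> 12),
   i.e. the address shifted right 11 bits, low bit cleared, plus >4000. */
static inline uint16_t sams_reg_for(uint16_t cpu_addr)
{
    return (uint16_t)(0x4000u + ((cpu_addr >> 11) & ~1u));
}

/* Examples: sams_reg_for(0x2000) == 0x4004, sams_reg_for(0xA000) == 0x4014,
   sams_reg_for(0xE000) == 0x401C. Dereferencing a far ptr like >0005e000
   would then conceptually do:
       *(volatile uint16_t *)sams_reg_for(0xE000) = 5;   // map page 5 at >E000
   before accessing address >E000 itself. */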
Asmusr Posted June 5

6 hours ago, khanivore said: The stack could be limited to 8K and could be non-bankable, but ideally it would not be limited to 8K and so must be bankable.

How much stack space do you need in C? In assembly I usually get by with 16-32 bytes. 🙂
+MarkB Posted June 5 Author

5 minutes ago, Asmusr said: How much stack space do you need in C? In assembly I usually get by with 16-32 bytes. 🙂

Impossible to say really. Potentially a lot. The Linux default is like 10MB per thread! But it totally depends on what you are building. If you have lots of local structs and arrays, then lots. If all your data is static, then not so much.
Tursi Posted June 5

8 hours ago, Asmusr said: How much stack space do you need in C? In assembly I usually get by with 16-32 bytes. 🙂

My C projects generally need 200-300 bytes of stack by the time all is said and done. Depends entirely on how much you nest your function calls and how much you use local variables.