
Assembly language for the Atari vs. the x86



I've been away from the Atari world for a good few months and am trying to inch my way back given so many interesting programmes have been released over the last few weeks; the latest and greatest Altirra and Joey Z's excellent update to RespeQT are only two. However programming is still my focus and lately I have been getting quite into DOS assembler for the PC using VMWare, DOS 6.22 and MASM. Oddly it has been quite a synergistic process as many of the concepts I mostly-grasped while dabbling with P/MG in A8 assembly language helped no end to understand ASM on the x86. One thing that really hit me was how superior the segment:offset addressing mechanism of the PC was in comparison to the cramped indirect page zero addressing on the Atari! This feature alone must have been very persuasive to the first pioneers who jumped ship from Atari to PC in the middle eighties! It also made me smile no end while reading a book on x86 assembly to hear the author complain about DOS real-mode! 'Just try the A8' I thought!
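The real-mode arithmetic behind segment:offset is simple enough to sketch in a few lines of Python; the B800h video segment below is just a familiar example:

```python
def phys(segment: int, offset: int) -> int:
    """Real-mode 8086 address translation: physical = segment * 16 + offset."""
    return ((segment << 4) + offset) & 0xFFFFF  # 20-bit bus, wraps at 1 MiB

# B800:0000 is the usual colour text-mode video segment
assert phys(0xB800, 0x0000) == 0xB8000
# Many different segment:offset pairs alias the same physical byte
assert phys(0x1234, 0x0010) == phys(0x1235, 0x0000)
```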

 

Did any of you chaps move from Atari to PC when the latter was first gaining traction? Did you enjoy the improved features for the programmer on those new 16-bit machines? For that matter do you think there are any things the Atari does better? I would be fascinated to hear your thoughts.

Edited by morelenmir

For myself, I went from the Atari 8-bit to the ST. I found the 68000 much easier to work with, but that was also in my early 20s; my 8-bit assembly time was more in my late teens, so maybe age gave me some advantages in grasping concepts. By the time I gave x86 assembly a look, there were so many instructions that C/C++ was just a much better option. I have recently started to look into the Z80 and it seems very easy to comprehend.


The only thing the Atari does better is the price point.

 

Me being a cheapskate, that pretty much is the end of the discussion. Seriously, those PCs were $3,000 back in the day.

I didn't even do Atari until you could get a complete system with drive for $40 at the flea market.


I have programmed 65xxx and Z80. I can't stand Intel mnemonics. Zilog mnemonics are so much easier for me.

 

I forget the difference between an LXI and a MVI. On the other hand, LD C,9 / LD DE,ADDRESS / CALL BDOS seems natural.

 

Some of the extended 65802/816 stuff seems a little difficult, but nowhere near as cryptic as Intel.

 

I have used DEBUG to patch MS-DOS stuff, but it isn't fun. Not for me anyway.

 

Edit: By the way, Bill, Exactly WHY does the string end with a dollar sign $? He DOESN'T KNOW, only Gary did!

Edited by Kyle22

I think that the 8086 instruction set is OK. The PC has more complex hardware, so when programming really low-level you need lots of technical documentation - DMA controller, PIC, SoundBlaster, AdLib, UART, VGA.

 

The 80386 and its protected mode brought this to another level.

I didn't bother to learn 80386 protected mode and changed track to C/C++, Pascal, Java, and C#. No regrets here.

 

I still program in assembler at work (IBM z Systems - convenient and vast instruction set, 16 registers to use), but when something new is to be developed, I write it in C/C++.

 

When it comes to the good old 6502, I miss "only a few" things:

 

An instruction to copy a memory block (even though it would be implemented in "millicode")

Shifting bits by more than 1 (Damned ASL,ASL,ASL...)

Multiplication

 

All of those can be replaced with macros or pre-calculated tables when needed.
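Those workarounds are easy to sketch. Below is a hypothetical Python illustration (not 6502 code) of the two classic tricks: a one-bit-at-a-time shift chain standing in for ASL,ASL,ASL, and the quarter-square lookup table that substitutes for the missing multiply instruction:

```python
def asl_n(value: int, n: int) -> int:
    """Shift left n times, one bit per step - the ASL,ASL,ASL... chain."""
    for _ in range(n):
        value = (value << 1) & 0xFF  # 8-bit register, top bit falls into carry
    return value

# Quarter-square table: a*b = qsq(a+b) - qsq(|a-b|), a classic 6502
# table-driven substitute for hardware multiplication
QSQ = [(i * i) // 4 for i in range(512)]

def mul(a: int, b: int) -> int:
    """8-bit * 8-bit multiply via two table lookups and a subtraction."""
    return QSQ[a + b] - QSQ[abs(a - b)]

assert asl_n(3, 3) == 24   # 3 * 8
assert mul(7, 10) == 70
```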

Edited by baktra

Absolutely fascinating replies as always guys!

 

For myself, I went from the Atari 8-bit to the ST. I found the 68000 much easier to work with, but that was also in my early 20s; my 8-bit assembly time was more in my late teens, so maybe age gave me some advantages in grasping concepts. By the time I gave x86 assembly a look, there were so many instructions that C/C++ was just a much better option. I have recently started to look into the Z80 and it seems very easy to comprehend.

 

I think there could be an aspect of getting older and being able to grasp things more easily--but I also think the only way to properly appreciate improvements in equipment or software (and some steps backwards!) is to have at least some experience with the previous generation. This is what I think I found with the memory addressing. At first I would probably have agreed that segment:offset was tedious and intricate, but now it just feels like such a huge breath of fresh air! Atari also had some very weird oversights--like the fact you can move a player horizontally just by incrementing or decrementing a memory location, but to move it up or down you must literally move the entire bit pattern through the stripe of memory locations. I believe the C64 improved on this, and certainly the ST?

 

The only thing the Atari does better is the price point.

 

Me being a cheapskate, that pretty much is the end of the discussion. Seriously, those PCs were $3,000 back in the day.

I didn't even do Atari until you could get a complete system with drive for $40 at the flea market.

 

I hear you dude!!! I was effectively shut out of assembly and indeed any language other than the built-in BASIC as a child by the sheer cost of the cartridges/disks. That did me a huge disservice while growing up. I did not even have the chance to programme on the PC until I got a 'deals for students' package of Visual C++ and Visual Basic for £80 once I was in university. Amusingly, at that point I was a 'Celtic Studies' undergraduate and had nothing to do with IT!!!

 

I have programmed 65xxx and Z80. I can't stand Intel mnemonics. Zilog mnemonics are so much easier for me.

 

I forget the difference between an LXI and a MVI. On the other hand, LD C,9 / LD DE,ADDRESS / CALL BDOS seems natural.

 

Some of the extended 65802/816 stuff seems a little difficult, but nowhere near as cryptic as Intel.

 

I have used DEBUG to patch MS-DOS stuff, but it isn't fun. Not for me anyway.

 

Edit: By the way, Bill, Exactly WHY does the string end with a dollar sign $? He DOESN'T KNOW, only Gary did!

 

That is quite a fundamental point Kyle, and also very subtle. Honestly the mnemonics for all assembly were gibberish to me!!! I suppose I have learned 'Load Accumulator', 'Jump' and 'Branch if Not Equal' and so on, yet after a while--it was the oddest thing--I stopped reading the mnemonics as abbreviations and began to appreciate them as words in themselves. I suppose it is a bit like learning a foreign language, not that I have ever managed that task! One interesting place where that cropped up for me was in two books I was given as a child, 'Subroutines for the 6502' and 'Learning Assembler', or some such very close titles. I recall flipping through them at the time and it was literally... mind-boggling, like opening a grimoire. The source code was as good as hieroglyphics and just looking at it was a weird and slightly unnerving experience. Yet now I can look at a piece of 6502 or x86 assembly and actually find myself understanding what is going on--not perhaps properly following the programme, but at least understanding the various manipulations of the processor and registers. I actually find that new experience very good for my confidence--I am not immediately alienated and dissuaded from pushing onwards.

 

I think that the 8086 instruction set is OK. The PC has more complex hardware, so when programming really low-level you need lots of technical documentation - DMA controller, PIC, SoundBlaster, AdLib, UART, VGA.

 

The 80386 and its protected mode brought this to another level.

I didn't bother to learn 80386 protected mode and changed track to C/C++, Pascal, Java, and C#. No regrets here.

 

I still program in assembler at work (IBM z Systems - convenient and vast instruction set, 16 registers to use), but when something new is to be developed, I write it in C/C++.

 

When it comes to the good old 6502, I miss "only a few" things:

 

An instruction to copy a memory block (even though it would be implemented in "millicode")

Shifting bits by more than 1 (Damned ASL,ASL,ASL...)

Multiplication

 

All of those can be replaced with macros or pre-calculated tables when needed.

 

You bring up a very interesting point there baktra--the interfacing of hardware external to the processor. I am trying to get into VGA graphics at the moment as a learning experience and also a goal to keep me interested. Somehow it actually feels a little easier than, say, player-missile graphics on the Atari. I feel the PC was just a little better planned ahead than the A8. For instance, control of the IOCBs and the display list always seemed somewhat bodged together to me. That said, on the Atari everything feels like it is right there at your fingertips, whereas the hardware abstraction on the PC by its nature causes a 'black box' mentality from the very start, even at the very lowest level of programming.

 

At the end of the day I am primarily a C++ and Perl programmer. I have always found those two languages--even to the exclusion of Python--can do everything I want. However, the current moves in Windows 10 towards every programme that runs on your own machine having to come from the Windows Store are starting to make me feel very worried indeed. Since 2000 I have been concerned about the spectre of '100% signed code', and it seems like this is getting very close to reality indeed. Just the fact that copying data onto your own hard drive is now considered a unique enough experience to get its own facetious name in 'side loading', as if it is an unusual and 'edgy' thing to do... Hopefully going way back to DOS and Atari assembler lets me escape that. I suppose I can always run a Windows XP virtual machine if things get too bad!!!


I've been away from the Atari world for a good few months and am trying to inch my way back given so many interesting programmes have been released over the last few weeks; the latest and greatest Altirra and Joey Z's excellent update to RespeQT are only two. However programming is still my focus and lately I have been getting quite into DOS assembler for the PC using VMWare, DOS 6.22 and MASM. Oddly it has been quite a synergistic process as many of the concepts I mostly-grasped while dabbling with P/MG in A8 assembly language helped no end to understand ASM on the x86. One thing that really hit me was how superior the segment:offset addressing mechanism of the PC was in comparison to the cramped indirect page zero addressing on the Atari! This feature alone must have been very persuasive to the first pioneers who jumped ship from Atari to PC in the middle eighties! It also made me smile no end while reading a book on x86 assembly to hear the author complain about DOS real-mode! 'Just try the A8' I thought!

 

Did any of you chaps move from Atari to PC when the latter was first gaining traction? Did you enjoy the improved features for the programmer on those new 16-bit machines? For that matter do you think there are any things the Atari does better? I would be fascinated to hear your thoughts.

I taught myself 6502 assembly on the Atari. Then when I went to college, I had a course in assembly language on the PC. At that time real mode was still common, and that's what I learned. It was a pain in the ass! I know even then it had more features, but coding in PC real mode was still more painful. I don't remember exactly why, but it had to do with the memory scheme.

 

I'm sure it's much better without the real-mode limitations

Edited by zzip

The low end of the market in the 80s had 6502 against Z80 - clone vs clone since 6502 was based on the 6800 and the Z80 was a clone/fork of Intel's 8080.

 

The mid/upper by the mid 1980s was mostly 68K vs x86 which is Motorola vs Intel, the sort of ironic thing is that the low end was a proxy of that.

 

I went from 6502 to 68000, and was doing useful programming in about a week or less. The beauty of the 68000 is that its instruction set reads like a 6502 programmer's wish-list. More registers, more addressing modes, and the heritage is apparent in many ways.

 

Next for me a couple of years later was IBM mainframe's System-370/XA architecture which is different to all of the above but previous experience made it easier to learn.

 

I've looked at programs and even done a little bit of hand disassembly of the likes of Z80 and x86, but really for me they're confusing systems with an assortment of register names that at first don't seem to make a lot of sense.

Supposedly next to nobody does Asm programming on x86 any more, and given the number of instruction set extensions and addition of AMD-64 it's not really surprising.


Atari also had some very weird oversights--like the fact you can move a player horizontally just by incrementing or decrementing a memory location, but to move it up or down you must literally move the entire bit pattern through the stripe of memory locations. I believe the C64 improved on this, and certainly the ST?

 

I'm pretty sure this is discussed in "De Re Atari". The big problem the hardware designers were trying to overcome was the code to move 2D objects in linear addressable memory. Player-Missiles save you from having to write the (slow and complicated) stride and offset code to do 2D animation. By contrast, moving a bit pattern up and down is a relatively simple (and fast) block copy in 6502 assembler, especially since moves are all byte-aligned and page-aligned.
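As a sketch of what that vertical move looks like (in Python rather than 6502, with hypothetical names): the player's shape has to be erased and redrawn at a new offset within its memory stripe, whereas a horizontal move is a single store to the position register.

```python
def move_player_vertical(stripe: bytearray, top: int, height: int, dy: int) -> int:
    """Move a player's bit-pattern up/down its stripe: erase, then redraw.
    This block copy is what the A8 makes you do in software."""
    shape = bytes(stripe[top:top + height])
    stripe[top:top + height] = b"\x00" * height   # erase old position
    new_top = top + dy
    stripe[new_top:new_top + height] = shape      # redraw shifted by dy
    return new_top

stripe = bytearray(256)                 # one player's 256-byte stripe
stripe[100:104] = b"\x18\x3c\x3c\x18"   # a small blob at scan line 100
top = move_player_vertical(stripe, 100, 4, -2)
assert top == 98 and stripe[98:102] == bytearray(b"\x18\x3c\x3c\x18")
```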

 

The C64 definitely improved on this by providing each sprite with a separately addressable X and Y location, so you can move it vertically and horizontally simply by writing to a memory location. The Atari ST doesn't have dedicated sprite hardware, which is both good and bad depending on how you look at it. Finally, the Amiga also improved on the A8's Player-missiles by providing 8 sprites that can be positioned in X,Y similar to the C64.

Edited by FifthPlayer

The low end of the market in the 80s had 6502 against Z80 - clone vs clone since 6502 was based on the 6800 and the Z80 was a clone/fork of Intel's 8080.

 

The mid/upper by the mid 1980s was mostly 68K vs x86 which is Motorola vs Intel, the sort of ironic thing is that the low end was a proxy of that.

 

I went from 6502 to 68000, and was doing useful programming in about a week or less. The beauty of the 68000 is that its instruction set reads like a 6502 programmer's wish-list. More registers, more addressing modes, and the heritage is apparent in many ways.

 

Next for me a couple of years later was IBM mainframe's System-370/XA architecture which is different to all of the above but previous experience made it easier to learn.

 

I've looked at programs and even done a little bit of hand disassembly of the likes of Z80 and x86, but really for me they're confusing systems with an assortment of register names that at first don't seem to make a lot of sense.

Supposedly next to nobody does Asm programming on x86 any more, and given the number of instruction set extensions and addition of AMD-64 it's not really surprising.

 

I can well imagine that is true Rybags! With modern processors running so fast, I am sure the advantage of an ASM programme over a (non-MFC/.NET) C++ programme is academic. To be honest, that factor stopped me going forward with assembler on the PC for a long time. By and large, if you are programming in Windows with ASM then all you are doing is calling Win32 API functions just as you would in C/C++, except with all the added enjoyment of setting up the stack and registers in conformity with the C calling convention... Totally pointless in my opinion.

However, if you are programming device drivers then I think you still have pretty much a Hobson's choice to do it in MASM. Although of course writing drivers is not considered a fitting thing for hobbyists to do, as MS have locked down 64-bit versions of Windows from 7 onwards to prevent them booting with unsigned drivers... You either have to hobble your system and set it in a special mode, or pay for each build to get a signature from MS, which is of course totally impossible for the amateur.

This business with calling Win32 from ASM is why I have gone back to DOS to try to pick up x86 assembler. In that environment the DOS interrupts--which to my amateur mind seem basically the API of that environment--were intended to be used via ASM. Although there are of course editions of C for DOS as well, and if you want to use CodeView as a debugger then you end up having to install Visual C++ v1.52 anyway. I was going to use TASM, but then found it didn't come with an IDE or even a dedicated editor (unlike Borland's Pascal and even Quick Basic!) whereas MASM does. I actually considered using DEBUG.COM, but that really does feel like repairing a tricorder with stone knives and bearskins!

 

On the whole I find that period absolutely fascinating--the mid-80s and the handover from 8 to 16 bits. One thing I really like is if you look at the internet archive you can find tons of magazine scans from then which are pitched at the home user. They contain very helpful tutorials and tips for assembler. I have actually discovered that those magazine clippings are better to learn from than the text books of the time!

 

I did not realize the Z80 and 6502 were so closely related to Motorola and Intel processors Rybags! It is very illuminating to consider that rivalry in the setting of ST and PCs. I guess the Amiga lasted longer than the ST, but how serious a contender was its processor to the x86 and what was programming it like?


 

I'm pretty sure this is discussed in "De Re Atari". The big problem the hardware designers were trying to overcome was the code to move 2D objects in linear addressable memory. Player-Missiles save you from having to write the (slow and complicated) stride and offset code to do 2D animation. By contrast, moving a bit pattern up and down is a relatively simple (and fast) block copy in 6502 assembler, especially since moves are all byte-aligned and page-aligned.

 

The C64 definitely improved on this by providing each sprite with a separately addressable X and Y location, so you can move it vertically and horizontally simply by writing to a memory location. The Atari ST doesn't have dedicated sprite hardware, which is both good and bad depending on how you look at it. Finally, the Amiga also improved on the A8's Player-missiles by providing 8 sprites that can be positioned in X,Y similar to the C64.

 

I think you are right about 'De Re Atari'. It certainly isn't hard to move the players up and down--even I managed it!--but it does feel... unfortunate. Especially when the C64 used the separate-registers approach. However, when you think about it, for all that P/MG was a very big feather in Atari's cap and potentially a 'killer' feature at the time, they really went out of their way to obscure it! There was no native access via BASIC, and even in ASM there were potholes to overcome as well.


 

I think you are right about 'De Re Atari'. It certainly isn't hard to move the players up and down--even I managed it!--but it does feel... unfortunate. Especially when the C64 used the separate-registers approach. However, when you think about it, for all that P/MG was a very big feather in Atari's cap and potentially a 'killer' feature at the time, they really went out of their way to obscure it! There was no native access via BASIC, and even in ASM there were potholes to overcome as well.

 

The Atari designers were first, the C64 designers had the benefit of an example to improve upon. In the book "Commodore: A Company on the Edge" they quote the C64 chip designers as having looked at all the existing sprite hardware architectures (Atari, Intellivision, TI) and identifying their strengths and weaknesses before they designed the C64 sprites.

 

I agree not supporting PM/G in Atari BASIC was a huge mistake, but partly understandable given the 8K limit and the development timetable.

Edited by FifthPlayer

I've been away from the Atari world for a good few months and am trying to inch my way back given so many interesting programmes have been released over the last few weeks; the latest and greatest Altirra and Joey Z's excellent update to RespeQT are only two. However programming is still my focus and lately I have been getting quite into DOS assembler for the PC using VMWare, DOS 6.22 and MASM. Oddly it has been quite a synergistic process as many of the concepts I mostly-grasped while dabbling with P/MG in A8 assembly language helped no end to understand ASM on the x86. One thing that really hit me was how superior the segment:offset addressing mechanism of the PC was in comparison to the cramped indirect page zero addressing on the Atari! This feature alone must have been very persuasive to the first pioneers who jumped ship from Atari to PC in the middle eighties! It also made me smile no end while reading a book on x86 assembly to hear the author complain about DOS real-mode! 'Just try the A8' I thought!

...

First of all, the Page 0 vs segmented memory comparison is apples & oranges. They have a slightly different purpose.

 

Page 0 is just a way of saving code space and execution time, nothing more.

Opcodes using direct addressing don't need a full 16 bit address, the MSB of the address is assumed to be zero.

So instead of STA $0055, it's just STA $55.

Since the MSB doesn't have to be read from memory, it also saves a clock cycle. That is its only purpose.
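The saving is easy to put numbers on. A quick sketch of the two encodings, with byte and cycle counts taken from the 6502 datasheet:

```python
# STA absolute:  8D 55 00  -> 3 bytes, 4 cycles  (STA $0055)
# STA zero page: 85 55     -> 2 bytes, 3 cycles  (STA $55)
sta_abs = {"encoding": [0x8D, 0x55, 0x00], "cycles": 4}
sta_zp  = {"encoding": [0x85, 0x55], "cycles": 3}

# One byte of code and one clock cycle saved per access
assert len(sta_abs["encoding"]) - len(sta_zp["encoding"]) == 1
assert sta_abs["cycles"] - sta_zp["cycles"] == 1
```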

 

Segmented memory is more of a way of tacking more RAM onto an existing architecture and separating data into independent memory sections.

Yeah, it doesn't require as many bytes in the code vs a flat memory model, but a 68000 has 8 data and 8 address registers that are fairly orthogonal instruction-wise.

It's not like you are saving space or speed over the competition there.

 

The 8088 was sort of a bridge between the 8080 and where Intel wanted to go with their processors.

If you look what followed the 8088, the 286 had memory protection more like a mainframe.

The iAPX 432 had features supporting object oriented languages and has been referred to as a micromainframe.

If you look backwards from those to the 8088, it wasn't so much about speed or saving memory as moving towards mainframe style isolation and protection.

 

Did any of you chaps move from Atari to PC when the latter was first gaining traction? Did you enjoy the improved features for the programmer on those new 16-bit machines? For that matter do you think there are any things the Atari does better? I would be fascinated to hear your thoughts.

 

And my response got deleted here. Weird.

 

It's certainly an improvement in that you aren't having to program around the lack of any 16 bit registers and only having one accumulator.

Edited by JamesD

The low end of the market in the 80s had 6502 against Z80 - clone vs clone since 6502 was based on the 6800 and the Z80 was a clone/fork of Intel's 8080.

 

The mid/upper by the mid 1980s was mostly 68K vs x86 which is Motorola vs Intel, the sort of ironic thing is that the low end was a proxy of that.

 

I went from 6502 to 68000, and was doing useful programming in about a week or less. The beauty of the 68000 is that its instruction set reads like a 6502 programmer's wish-list. More registers, more addressing modes, and the heritage is apparent in many ways.

 

Next for me a couple of years later was IBM mainframe's System-370/XA architecture which is different to all of the above but previous experience made it easier to learn.

 

I've looked at programs and even done a little bit of hand disassembly of the likes of Z80 and x86, but really for me they're confusing systems with an assortment of register names that at first don't seem to make a lot of sense.

Supposedly next to nobody does Asm programming on x86 any more, and given the number of instruction set extensions and addition of AMD-64 it's not really surprising.

I've had a similar experience going from 8 bit to 68000. It might have been even easier for me after programming on the 6809.

But after college I spent a lot of time programming embedded systems and Unix.

 

I programmed the HD64180 which was derived from the Z80 and I've ported some 8080 code to Z80 mnemonics.

The Z80 mnemonics are much better than the 8080.

 

The problem with the Z80 is that the non-orthogonal nature of the Z80 registers/instructions makes it difficult to find an optimal approach quickly.

I think I rewrote portions of the 64 column graphics text stuff over 20 times, saving a few clock cycles each time.

Every time I think I've run out of clock cycles to find... nope... 10000 more in a single screen of text, and that's not an exaggeration.

The number of clock cycles per instruction means a tiny optimization can result in saving a lot of clock cycles.

On the 6502 or 680X, if you dump an instruction it's probably 3 or 4 clock cycles. On the Z80 it could be 11+!

 

The one thing you leave out is the intro of the MIPS, SPARC, Alpha, and other RISC CPUs starting around 1985.

Those were in mini-computers or workstations until the intro of the ARM.

Programming model wise they have more in common with 68000 assembly than 8088, just eliminate a lot of instructions.

The instruction sets are smaller but having a lot of registers and being able to use any register for any instruction makes things easy.

Compilers can generate really good code for RISC processors, so about the only need to use assembly is for drivers.

The only time I've had to deal with RISC assembly is debugging.

 

I would not want to program in assembly on a modern intel processor other than maybe to make a device driver or something like that.

And for that you shouldn't have to worry about all the add on stuff.

 


 

...

I did not realize the Z80 and 6502 were so closely related to Motorola and Intel processors Rybags! It is very illuminating to consider that rivalry in the setting of ST and PCs. I guess the Amiga lasted longer than the ST, but how serious a contender was its processor to the x86 and what was programming it like?

Look at the 6800 and 6502 register model and instructions... the 6502 is pretty much a ripoff but with some register changes.

Two index registers instead of one, one accumulator instead of two...

And look at the 8080 and Z80. The Z80 uses the same opcodes but uses different mnemonics. Then it adds new opcodes and the index registers.

That's why Z80 CP/M systems can run the same code as the original Altair which used the 8080.

 

But they all borrow ideas from older processors.

Accumulators and index registers existed before 8 bit CPUs.


First of all, the Page 0 vs segmented memory comparison is apples & oranges. They have a slightly different purpose.

 

Page 0 is just a way of saving code space and execution time, nothing more.

Opcodes using direct addressing don't need a full 16 bit address, the MSB of the address is assumed to be zero.

So instead of STA $0055, it's just STA $55.

Since the MSB doesn't have to be read from memory, it also saves a clock cycle. That is its only purpose.

 

Segmented memory is more of a way of tacking more RAM onto an existing architecture and separating data into independent memory sections.

Yeah, it doesn't require as many bytes in the code vs a flat memory model, but a 68000 has 8 data and 8 address registers that are fairly orthogonal instruction-wise.

It's not like you are saving space or speed over the competition there.

 

The 8088 was sort of a bridge between the 8080 and where Intel wanted to go with their processors.

If you look what followed the 8088, the 286 had memory protection more like a mainframe.

The iAPX 432 had features supporting object oriented languages and has been referred to as a micromainframe.

If you look backwards from those to the 8088, it wasn't so much about speed or saving memory as moving towards mainframe style isolation and protection.

 

And my response got deleted here. Weird.

 

It's certainly an improvement in that you aren't having to program around the lack of any 16 bit registers and only having one accumulator.

 

Bad expression on my part regarding page zero. I meant to say the segment:offset addressing of the x86 is superior to the Atari's page-zero-based indexed indirect addressing... or is it indirect indexed addressing... The one where you feed the machine the page zero address where the first part of another address is stored, and it automatically combines this value with the one in the byte immediately following to give the actual address you are really interested in. In my (heavily limited) experience this is the only way to specify a memory address at runtime with A8 assembler. It basically gives you the facility of a pointer, and that technique just seems so totally vital to all programming to me (possibly because I started with C), yet the A8 implementation felt... bodged together. On the x86 you just directly assign, increment or decrement a machine register to change the address you are interested in at run time, which feels so much more straightforward. Worse yet, you have a very small number of bytes in page zero which the user can programme with, although FJC did explain to me how to free up a good amount more with some further work.
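That mode (indirect indexed, LDA (zp),Y) can be sketched in a few lines of Python; the names and addresses here are illustrative, not real A8 code:

```python
def lda_indirect_indexed(mem: bytearray, zp: int, y: int) -> int:
    """LDA (zp),Y: the two page-zero bytes at zp hold a little-endian
    pointer; Y is added to that pointer to form the final address."""
    base = mem[zp] | (mem[(zp + 1) & 0xFF] << 8)  # pointer lives in page zero
    return mem[(base + y) & 0xFFFF]

mem = bytearray(0x10000)
mem[0xCB], mem[0xCC] = 0x00, 0x40   # page-zero pointer -> $4000
mem[0x4005] = 0x7F                  # target byte
assert lda_indirect_indexed(mem, 0xCB, 5) == 0x7F
```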


 

Bad expression on my part regarding page zero. I meant to say the segment:offset addressing of the x86 is superior to the Atari's page-zero-based indexed indirect addressing... or is it indirect indexed addressing... The one where you feed the machine the page zero address where the first part of another address is stored, and it automatically combines this value with the one in the byte immediately following to give the actual address you are really interested in. In my (heavily limited) experience this is the only way to specify a memory address at runtime with A8 assembler. It basically gives you the facility of a pointer, and that technique just seems so totally vital to all programming to me (possibly because I started with C), yet the A8 implementation felt... bodged together. On the x86 you just directly assign, increment or decrement a machine register to change the address you are interested in at run time, which feels so much more straightforward. Worse yet, you have a very small number of bytes in page zero which the user can programme with, although FJC did explain to me how to free up a good amount more with some further work.

That's just to compensate for the lack of 16 bit address registers. It was a design choice to keep the die small and the chip inexpensive to produce.

If you look at the Motorola 680X series which has 16 bit index registers, it's absent and you don't have to add code to handle indexing across 256 byte "pages".

As long as your code is in RAM you can use self-modifying code with LDA address,Y to eliminate the need to use page zero.

 

 

 


That's just to compensate for the lack of 16 bit address registers. It was a design choice to keep the die small and the chip inexpensive to produce.

If you look at the Motorola 680X series which has 16 bit index registers, it's absent and you don't have to add code to handle indexing across 256 byte "pages".

As long as your code is in RAM you can use self modifying code with LDA (address),y to eliminate the need to use page zero.

 

 

 

 

An interesting little dodge there JamesD!!! Many thanks - I will bear that in mind in the future!

 

I have a whole bucket-load of similar questions I could ask, but this is of course primarily an A8 site and I have already dragged things way off topic, so shall not indulge. I wonder if you or any of the other chaps could suggest a good forum where I could pose questions explicitly about x86 assembly language as a beginner? I should say ahead of time that I am not a fan of Stack Exchange at all and in the past have suffered a great deal of verbal abuse while attempting to post there and join in the 'community'.



- I was coding in Atari ASM till summer 1990. My last project was an editor & compiler & disassembler (a full Monitor) for Assembler that allowed dynamic relocation of the source code, tokenizing of the source code (to save on source-code RAM consumption) and a syntax check (when feasible) upon hitting Return. It was way faster than Atmas II.

- Towards the end, I had memorized the full opcode map of A8, so I was coding straight in hexa (no source code)

- then we got 80286 PCs at school with Turbo Pascal, so Atari, frankly, did not stand a chance :)

- a couple of months later I got a book on 80286 Assembler, discovered mode 13h (320x200x256) and Atari 8-bit coding was gone forever :)

 

- I have never since experienced such an exciting paradigm/complexity shift as going from 6502 Assembler to 80286 Assembler. All other languages and APIs I learnt later (from Pascal, through C++, to DirectX, XNA, .NET, Java, Python, Perl...) were merely incremental in comparison (when you consider the feature set and complexity of the DOS interrupts)

- Doing 13h graphics programming in 80286 was very exciting and productive, as the 80286 instruction set is so rich (loops, many registers, stack, ...)
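The productivity of mode 13h comes largely from its dead-simple memory layout: one byte per pixel in a flat 64,000-byte buffer at segment A000h, so the pixel at (x, y) sits at offset y*320 + x. A toy Python sketch of that arithmetic, with a bytearray standing in for video memory:

```python
# Mode 13h (320x200, 256 colours): one byte per pixel, linear layout.
# A bytearray stands in for the VGA framebuffer at segment A000h.
WIDTH, HEIGHT = 320, 200
framebuffer = bytearray(WIDTH * HEIGHT)   # 64,000 bytes, like the real thing

def put_pixel(x, y, colour):
    """Plot one pixel, as MOV ES:[y*320 + x], AL would in mode 13h."""
    framebuffer[y * WIDTH + x] = colour

put_pixel(0, 0, 15)        # top-left corner, offset 0
put_pixel(319, 199, 4)     # bottom-right corner, offset 63999
```

No display lists, no colour clocks, no split palettes: a single multiply and add names any pixel on the screen, which is a big part of why the jump felt so liberating.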

- I never got to memorizing the hexa opcodes for 80286, as I did on Atari, though :)

 

- Right now I'm doing 68000 Assembler + RISC Assembler (GPU/DSP) for Atari Jaguar, and while 68000 is obviously very rich compared to the simplistic RISC Assembler, I don't really find it as comfortable or compelling as the x86 instruction set. 68000 is close in some places, but I still find it limited in many areas compared to 80286 (then again, it's not a completely fair comparison, I admit. probably 68020/30 vs 80286 would be more fair)


 


Actually, I cut the wrong line of code, that version uses page zero.

I meant LDA ADDRESS,X or LDA ADDRESS,Y

 

The code is a byte larger but it's a clock cycle faster than using Page 0.

 

Mode         Example        Opcode  Bytes  Cycles
Absolute,X   LDA $4400,X    $BD     3      4+
Absolute,Y   LDA $4400,Y    $B9     3      4+
Indirect,X   LDA ($44,X)    $A1     2      6
Indirect,Y   LDA ($44),Y    $B1     2      5+
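The self-modifying variant JamesD describes (patching the 16-bit operand of an absolute,Y load instead of keeping a pointer in page zero) can be mimicked in a few lines of Python. Everything here -- the 'RAM' image, the addresses, the one-instruction interpreter -- is invented purely to illustrate the idea:

```python
# Toy model of self-modifying 6502 code: patch the operand of LDA $xxxx,Y
# in place instead of keeping a pointer in page zero. Addresses are invented.
ram = bytearray(65536)

CODE = 0x0600                                    # where our instruction "lives"
ram[CODE:CODE + 3] = bytes([0xB9, 0x00, 0x40])   # LDA $4000,Y (operand low byte first)

ram[0x4000:0x4003] = b"ABC"
ram[0x5000:0x5003] = b"XYZ"

def run_lda_abs_y(pc, y):
    """Execute the LDA absolute,Y instruction at pc and return the loaded byte."""
    assert ram[pc] == 0xB9, "expected LDA absolute,Y opcode"
    base = ram[pc + 1] | (ram[pc + 2] << 8)
    return ram[(base + y) & 0xFFFF]

first = run_lda_abs_y(CODE, 2)      # reads $4002

# "Self-modify": rewrite the operand bytes so the very same instruction
# now points at $5000, with no zero-page pointer involved at all.
ram[CODE + 1] = 0x00
ram[CODE + 2] = 0x50
second = run_lda_abs_y(CODE, 0)     # reads $5000
```

The catch, of course, is that this only works when the instruction sits in RAM, exactly as JamesD says.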

 



 

A lot of the chaps have already said that to them the x86 was maybe not their first choice even now, but I have to say I agree with you VladR--the shift from (totally amateur!) Atari programming to having a crack at x86 for me is almost like... I don't know--being given a new playground! It's probably because I haven't run into any roadblocks yet, but I feel like with the PC, Intel's designers did take careful note of what was going on in other computers and deliberately tried to improve upon them. The increase in registers alone is very nice, just so you don't have to shuttle data into and out of the CPU so often. Being able to write data-copy routines where the source and destination locations can both be 'remembered' by the CPU's registers and only need a simple increment, load and then store command is something in particular that has hit me just this morning. Not of course that such things were hard on the A8, just a little more involved.

 

Another aspect of PC assembler that has really made things easier in my mind is that you do not have to struggle to find somewhere to put the programme in memory and specify that location at compile-time. Perhaps MASM is taking care of that for me behind the scenes and it's not an intrinsic feature of the x86, but it seems like all addresses inside a programme are always relative to where the programme is currently loaded in memory. What I mean is you have to know ahead of time on the Atari exactly where the programme will live and also where the labels go in the address space. Those values are then hard-coded into the programme. Is this what--more knowledgeable programmers than I!--refer to as 'relocatable code'? Although if I understand properly, I get the sense that you can do relocatable code on the Atari using that 'self-modifying' trick that JamesD mentioned.

 

Right now I cannot even imagine programming in raw hex code! That is true class!!!

 


 

 

No worries JamesD!!! I'll copy and paste that snippet to keep on hand and try out later!


--the shift from (totally amateur!) Atari programming to having a crack at x86 for me is almost like... I don't know--being given a new playground!

Yep :) Same here :)

Arguably, even the context switch to 3D graphics APIs I did a long time ago (OpenGL, MESA, DirectX, XNA, HLSL, ...) did not bring the "Wow" factor of the switch from 6502 -> 80286 for me.

 

It literally happened only once for me :)

 

To be completely fair, there was one other API that almost did it for me: CUDA. However, CUDA, as much as it is a paradigm shift in thinking, did not really deliver the complexity factor for me. It's cool, but unfortunately too high-level.

 

Being able to write data copy routines where the source and destination locations can be both 'rememebered' by the CPU's registers and only need a simple increment, load and then store command is something in particular that has hit me just this morning. Not of course that such things were hard on the A8, just a little more involved.

Well, that's the thing exactly. You are just so much more productive with way less instructions. You don't have to scroll 20 pages to browse one function.

This is exactly what I'm experiencing right now with the RISC ASM on Jaguar. It kinda feels like coding on the A8, as the instruction set is so basic (load, store, add/subtract/multiply/divide, compare, jump, ...)

 

I always felt that if the x86 instruction set became one notch more verbose, it would actually become C :)

 

 

Another aspect of PC assembler that has really made things easier in my mind is you do not have to struggle to find somewhere to put the programme in memory and specify that location at compile-time. Perhaps MASM is taking care of that for me behind the scenes and its not an intrinsic feature of the x86 but it seems like all addresses inside a programme are always relative to where the programme is currently loaded in memory. What I mean is you have to know ahead of time on the Atari exactly where the programme will live and also where the labels go in the address space. Those values are then hard-coded in to the programme.

It's been a few decades, but if I recall correctly, you have several logical address spaces (CS, SS, DS, ES), each paired with an appropriate offset register (IP, SP, SI, DI, ...). While the assembler made things easy for you by calculating how much memory each segment should occupy, before the concept of protected memory appeared this all lived in the same physical address space (just carefully divided by the linker).

To me, this was quite a killer feature, honestly. It just made life so much easier !
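In real mode the "same physical address space" VladR mentions falls out of simple arithmetic: the CPU forms the physical address as segment * 16 + offset, so different segment:offset pairs can alias the very same byte. A minimal Python sketch of that calculation (the example pairs are made up):

```python
# Real-mode 8086/80286 address translation: physical = segment*16 + offset.
def physical(segment, offset):
    """Combine a 16-bit segment and 16-bit offset into a 20-bit address."""
    return ((segment << 4) + offset) & 0xFFFFF   # 20-bit address bus

# Two different segment:offset pairs can name the same physical byte:
a = physical(0xA000, 0x0010)   # A000:0010
b = physical(0xA001, 0x0000)   # A001:0000 -- the very same location
```

Compared with threading every runtime address through a pair of zero-page bytes, having whole registers dedicated to "where the code/data/stack lives" really was a killer feature.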

 

 

 

Is this what--more knowledgeable programmers than I!--refer to as 'relocatable code'? Although if I understand properly I get the sense that you can do relocatable code on the Atari using that 'self-modifying' trick that JamesD mentioned.

Self-modifying code is very useful [amongst other things] for performance, where you just change a few parameters and create a separate routine (say, to draw a different bitmap). It can save a whole lot of memory, since you create the routine only on demand.

As for relocatable code, after some time you learn to write with as few absolute addresses as possible, since the comfort you get greatly outweighs the discomfort of writing code like that (well, at least for me).

 

 

Right now I cannot even imagine programming in raw hex code! That is true class!!!

Actually, it became a necessity :)

Let me explain:

- I was using ATMAS II, which was linked somewhere around the middle of the RAM, which made breaking the code up into chunks quite a hassle

- I did not have floppy drive, only cassette

- If you ever tried saving chunks of code onto tape at 600 Baud and compiling like that, you'd understand why I rather chose to learn all hexa opcodes :)

- this way, your source code is literally 0 Bytes

- I wrote a "Monitor"-type tool that had common functionality like Disassembler, HexDump and Run, thus I had almost all RAM available for code (compared to the memory footprint of ATMAS II). Eventually, I added an edit feature to the disassembler that allowed changing hexa code directly in the middle of the disassembler output (effectively turning it into an "editor")

- also, another huge advantage is there are no compile times, you just hit Run :) I can't stress enough how important this is for small changes. In ATMAS II, I was routinely waiting a few minutes (for a recompile), but in pure hexa, you can do literally 10 code changes in 5 minutes
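At its heart, the Monitor VladR describes rests on a table mapping each opcode byte to a mnemonic and an instruction length. A toy sketch of such a disassembler, covering only a handful of 6502 opcodes (a real monitor would carry the full opcode map):

```python
# Toy 6502 disassembler: opcode byte -> (mnemonic template, total bytes).
# Only a handful of opcodes; a real monitor tool would carry the full map.
OPCODES = {
    0xA9: ("LDA #${0:02X}", 2),          # immediate
    0x8D: ("STA ${1:02X}{0:02X}", 3),    # absolute: operand is low byte first
    0xE8: ("INX", 1),
    0x60: ("RTS", 1),
}

def disassemble(code, addr=0x0600):
    """Walk a byte string, emitting one 'address  mnemonic' line per opcode."""
    lines, i = [], 0
    while i < len(code):
        template, size = OPCODES[code[i]]
        operands = code[i + 1:i + size]
        lines.append(f"${addr + i:04X}  " + template.format(*operands))
        i += size
    return lines

# LDA #$41 / STA $4000 / INX / RTS, hand-assembled straight into hex:
listing = disassemble(bytes([0xA9, 0x41, 0x8D, 0x00, 0x40, 0xE8, 0x60]))
```

Run in reverse, the same table is what lets you type raw hex with confidence: once the opcode-to-length mapping is in your head, the "source code" really can be zero bytes.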


 


Relocatable code means it isn't address dependent. Jumps are all relative to the Program Counter.

This is the main feature of the 6809 that made the multi-tasking operating system OS-9 possible. There are others though.

 

Self-modifying code is not really about being relocatable so much as performance... it would take a lot more lines of code or clock cycles not to use it.

I use it for jump tables too.

 

You could relocate code on a 6502 but to do so would be... not so much difficult as it would be annoying.

You have to make sure code is aligned to 256 byte boundaries so any data indexed within the code doesn't cross a boundary.

The assembler needs to generate all 16-bit address references so they are relative to the start of the program, and to generate a patch table at the end with the offsets to those addresses in the code.

Then you need a loader that loads in the code section and steps through the patch table, adding the start address of the code to every relative offset pointed to in the patch table.

Even if that works, what do you do with page 0? If you are only loading one program, why bother with any of this? If you are multitasking between several, how do you prevent page 0 conflicts? More patching!

There's a reason I've never seen someone do this.
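For what it's worth, the loader scheme outlined above can be sketched in miniature: the binary is assembled as if loaded at address zero, a patch table lists the offset of every absolute 16-bit reference, and the loader adds the real load address at each spot. The blob layout below is invented purely to illustrate the idea:

```python
# Toy relocation: code assembled as if loaded at $0000, plus a patch table
# listing the offsets of every 16-bit absolute address inside it.
# The code blob and its layout are invented for illustration.

def relocate(code, patch_table, load_addr):
    """Return the code image fixed up to run at load_addr (little-endian)."""
    image = bytearray(code)
    for off in patch_table:
        absolute = image[off] | (image[off + 1] << 8)
        absolute = (absolute + load_addr) & 0xFFFF
        image[off] = absolute & 0xFF
        image[off + 1] = absolute >> 8
    return image

# JMP $0005 / NOP / NOP / RTS, encoded relative to a zero origin
# ($4C = 6502 JMP absolute, $EA = NOP, $60 = RTS):
code = bytes([0x4C, 0x05, 0x00, 0xEA, 0xEA, 0x60])
patch_table = [1]                 # offset of the one address needing a fix-up

image = relocate(code, patch_table, 0x2000)
# The jump target is now $2005, matching the new load address.
```

The sketch also makes the objection concrete: every absolute reference needs a table entry, anything zero-page escapes the scheme entirely, and indexed data inside the code can still wander across a 256-byte page boundary after relocation.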


Many thanks guys!!! These are all topics that have niggled with me and never sat quite right. Your fascinating answers--even in the context of a comparison of processor features--have really helped to clarify things. Actually, I think sometimes a 'compare and contrast' exercise is the most useful way to learn!

 

VladR mentioned the speed of compilation was a big plus for his custom assembler and hex-code editing. I totally get that; for quite a few years I ran Visual C++ v4.2 standalone on a Pentium 100 with 16MB of RAM... I could literally set a project to compile and build and then go away to make a cup of tea. I'd be gone five or ten minutes, finish my tea and the thing would still be plodding away mid-build and thrashing the hard drive like crazy!!!

