
Z80 vs. 6502


BillyHW


The fact of the matter though is that when they were actually relevant in the computer industry, there were much better 6502 machines than Z80 offerings.

 

Atari, C64, Apple IIgs - top of the pile. NES, SNES - top of the pile in their console generations.

 

The best Z80 offerings were probably the Sam Coupe and the MSX2. The Sam Coupe arrived about 5 years after the 16-bit generation kicked off, so it was too late for the party. MSX never really made much impact outside Japan.

 

There are modern-day incarnations of many of the older CPUs - 6502 and 68000 included. Intel even made an embedded version of the '386 until not so long ago.

  • Like 1
Link to comment
Share on other sites

I have coded in both Z80 and 6502, but my favorite is the Z80, maybe because I have become more familiar with it over the years.

 

Yet there are some things I loved about the 6502 too. People say it doesn't have enough registers. True, but everything works through direct memory addresses; there is nothing similar to LD A,(HL). You use LDA $nnnn:STA $nnnn all the time, or indexed addressing. Why is this interesting? We also have this on the Z80, LD A,(&nnnn) or even LD HL,(&nnnn), but those are too costly on the Z80 (maybe 16 to 20 cycles, while LD A,(HL) is only 7-8 cycles on the CPC), so it's better to cleverly use LD A,(HL) or LDI or the like. So one thing I liked about the 6502 (and this might be weird) was that the pseudocode of well-optimized routines was more obvious to read. You always used direct addresses, which made nice labels. You do everything with direct addresses anyway (and you even have the zero page), without worrying whether there is a faster way to read or write memory; it's simpler and nicely readable.

 

You'd have something like

LDA Sine1,X
ADC Sine2,Y
STA Vram

 

oh a plasma

 

while on the Z80:

ld b,(hl)
ld a,(de)
add a,b
ld (ix+0),a
inc ix

 

Just an example; you can't see at a glance what is being read or written, you have to remember what each register holds.

 

The 6502: simpler, easier-to-understand code, as long as people use good labels.

 

Another nice thing about the 6502 is that it has no OUT, so hardware I/O is always memory-mapped. OUT is tedious and weird on the Z80.
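To make the contrast concrete, here is a tiny sketch; the register address and port number below are arbitrary examples, not any specific machine's I/O map:

; 6502: a hardware register is just another memory address
        LDA #$1F
        STA $D400        ; memory-mapped I/O: an ordinary store hits the chip register

; Z80: peripherals sit in a separate I/O space reached with OUT
        ld a,$1F
        ld bc,$7F00
        out (c),a        ; the value in A goes out to port BC, not to a memory address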

 

And of course there's the simple fact that similar opcodes take fewer cycles: a 6502 at 2 MHz could be like a Z80 at 4 MHz, depending on what you do.

 

However, one thing that is interesting about the Z80: I think the 16-bit registers are much better for some fixed-point math for 3D. Even though I have seen very good 3D on the C64 that we never matched on the CPC, I believe a properly programmed engine would be faster on a Z80 computer.

 

Other things I like on the Z80: the alternate registers (accessed with EXX, for example) are really, really useful. IX and IY are sometimes handy when you have no registers left, even if they cost more cycles. I like that you can move the stack anywhere, for example pointing it at video RAM and using PUSH for faster single-colour filling of the screen. LDI/LDIR are nice too sometimes. And there is one more thing I like: because the Z80 is more complicated than the 6502, you need more time to master it. There are many ways you can think of optimizing; you could fill VRAM with just LD (HL),A, or with a series of LDIs, or with the PUSH trick, and more depending on what you want to do. And the fiddling with registers, while confusing on a listing, is bliss once you master it, always gaining a cycle or a free register here and there. So there are more unique ways to optimize code, which is maybe why I love my best Z80 optimizations so much.
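A minimal sketch of the PUSH trick mentioned above; the fill address, pattern and byte count are placeholders rather than any particular machine's video layout:

        di                  ; no interrupts while SP points into video RAM
        ld (save_sp),sp     ; remember the real stack pointer
        ld sp,$C800         ; one past the end of the region to fill (placeholder address)
        ld hl,$AAAA         ; fill pattern, written two bytes per PUSH
        ld b,128            ; 128 iterations * 8 PUSHes * 2 bytes = 2048 bytes
fill:   push hl             ; each PUSH writes 2 bytes and moves SP down for us
        push hl
        push hl
        push hl
        push hl
        push hl
        push hl
        push hl
        djnz fill
        ld sp,(save_sp)     ; put the real stack back and re-enable interrupts
        ei
        ret
save_sp: defw 0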

Edited by Optimus
  • Like 2
Link to comment
Share on other sites

The Z80 is by far the better processor. As a matter of fact, there's a whole line of them still in production. They're used in a lot of microcontroller applications (washing machines, etc.), in variants from the original 2.3 MHz all the way up to 50 MHz. It's one of, if not THE, best 8-bit processors ever made.

I'm not sure I'd rate either processor as far better than the other.

 

FWIW, it was 10 years between the release of the Z80 ('76) and the introduction of the Z180, which was basically a licensed Hitachi HD64180. I don't think anything faster than the 64180/Z180 was released until the R800 in the MSX Turbo R in 1990. The longer-lived 8-bit machines were discontinued around 1991, so the 8-bit era was pretty much over by the time modern versions of the Z80 started to hit the market, and the eZ80 didn't come out until 2001.

 

Given all the instructions that only work on certain registers and other oddities with the Z80, I'd never rate the Z80 the best 8 bit CPU ever made but then I wouldn't call the 6502 the best 8 bit CPU ever made either.

  • Like 1
Link to comment
Share on other sites

Given all the instructions that only work on certain registers and other oddities with the Z80, I'd never rate the Z80 the best 8 bit CPU ever made but then I wouldn't call the 6502 the best 8 bit CPU ever made either.

 

Well that's a good thing, because the answer is... neither is the best! The best 8-bit CPU ever made was the 6809!

 

I got paid to program it in assembly language for 3 years, and even got to use position-independent code.

 

It's really a nice instruction set to program for, but after doing it for so long, I found myself consistently wishing for one more register. (Sure, the 6809 makes it easy to throw stuff on the stack, but that's slower, ya know.) I think I discovered pretty much every undocumented instruction including SWI variants that went through the FIRQ (pushing all registers!) and reset vectors.

 

14H/15H test mode was the worst because it killed my ICE that used one CPU for both target and emulator. And the watchdog timer on the hardware was kept happy by memory READS to its address as well as writes, so test mode would keep it from resetting. The hardware was used at unattended sites, so this was a bit of a problem.

 

Also, it has a lot of memory move bandwidth if you turn off the interrupts and PUSH/POP as many registers as possible at one time. And for the Vectrex, that 8-by-8 bit multiply can help with math.

Link to comment
Share on other sites

  • 1 month later...

The fact of the matter though is that when they were actually relevant in the computer industry, there were much better 6502 machines than Z80 offerings.

 

Atari, C64, Apple IIgs - top of the pile. NES, SNES - top of the pile in their console generations.

 

The best Z80 offerings were probably the Sam Coupe and the MSX2. The Sam Coupe arrived about 5 years after the 16-bit generation kicked off, so it was too late for the party. MSX never really made much impact outside Japan.

 

Your statement is simply nonsense. What makes one machine better than another is not only the CPU.
Better in what respect? Computationally, graphically, sound?
Graphically and sound-wise the C64 is probably better than a ZX81, but that is not due to the CPU; it's down to the VIC-II and the SID.
The reality is that the machines you've just mentioned were better simply because they had better custom hardware (graphics, sound) than the others, not because of the CPU itself.
Those machines were 'better' for one simple reason:
the 6502 is a very simplified CPU, designed to keep the price down, while other CPUs are not. That allowed the computer makers to spend the savings on other components (graphics, sound).
Obviously this only works where most of the computational work can be carried out by external dedicated hardware rather than by the CPU itself.
So the equation "computer with a 6502 is better than computer with a Z80" is somewhat nonsensical. It's only a matter of production cost.
Link to comment
Share on other sites

The legendary Don Lancaster calls the 6502 "The world's first RISC microprocessor...." That's good enough for me.

 

http://www.tinaja.com/glib/waywere.pdf

This is also a legendary piece of bullshit that most 6502 fans are proud to repeat.

It is based on the assumption that fewer instructions in the set -> RISC CPU.

Unfortunately, RISC does not mean an instruction set reduced in the number of instructions. It means an instruction set reduced in the complexity of each instruction.

Nor does the number of registers enter into the definition of RISC. Modern RISC processors are register-based (lots of general-purpose registers) and have a lot of instructions; each instruction is just somewhat simpler than on a CISC design.

 

So, a big disappointment for all 6502 fans: neither the Z80 nor the 6502 is a RISC processor. They are CISC. Period.

But if one wants to look for RISC design in either of these ancient beasts, I would say the 6502 is the more anti-RISC of the two.

The 6502 has pre/post-indexed operations, and those are *VERY* complex instructions. On a true RISC, that kind of instruction does not exist; it is implemented as multiple, simpler instructions.
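As an illustration (a sketch; the zero-page location is an arbitrary example), a single indirect-indexed 6502 instruction bundles work that a true RISC would split into separate, simpler steps:

; one 6502 instruction:
        LDA ($10),Y     ; read the pointer low byte from $10 and the high byte from $11,
                        ; add Y to that 16-bit pointer, then load A from the result

; roughly what a load/store RISC would spell out explicitly:
;   load r1, [$10]              ; pointer low byte
;   load r2, [$11]              ; pointer high byte
;   r3 = ((r2 << 8) | r1) + Y   ; form the effective address
;   load A, [r3]                ; the actual data access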

So from this perspective a CISC CPU like the Z80 is more "RISC" (read: less CISC) than the 6502.

Link to comment
Share on other sites

The Z80 is by far the better processor. As a matter of fact, there's a whole line of them still in production. They're used in a lot of microcontroller applications (washing machines, etc.), in variants from the original 2.3 MHz all the way up to 50 MHz. It's one of, if not THE, best 8-bit processors ever made.

"Better processor" is somewhat inexact. The main limits of the Z80 are its crappy, non-orthogonal instruction set and some design choices that were needed to expand the instruction set while maintaining compatibility with the 8080.

Link to comment
Share on other sites

 

Your statement is simply nonsense. What makes one machine better than another is not only the CPU.
Better in what respect? Computationally, graphically, sound?
Graphically and sound-wise the C64 is probably better than a ZX81, but that is not due to the CPU; it's down to the VIC-II and the SID.
The reality is that the machines you've just mentioned were better simply because they had better custom hardware (graphics, sound) than the others, not because of the CPU itself.
Those machines were 'better' for one simple reason:
the 6502 is a very simplified CPU, designed to keep the price down, while other CPUs are not. That allowed the computer makers to spend the savings on other components (graphics, sound).
Obviously this only works where most of the computational work can be carried out by external dedicated hardware rather than by the CPU itself.
So the equation "computer with a 6502 is better than computer with a Z80" is somewhat nonsensical. It's only a matter of production cost.

 

I think there was more to it than price.

The 6502's bus timing was easier to interleave clock cycles with than the Z80's.

Most successful Z80 designs either interrupt the screen refresh when the CPU accesses video RAM, so you get sparkles on the screen (TRS-80 Model I, VZ200, etc.), or accessing video RAM creates wait states for the CPU if the screen is being refreshed (Spectrum). The wait states are bad enough that some speccy games draw to a buffer and then copy that to the display.

 

I realize that 6502 machines can and do have some wait states but it's not as bad.

The NEC Trek didn't isolate video RAM from the CPU, and the CPU suffers a horrible number of wait states during screen refresh.

 

Link to comment
Share on other sites

I think there was more to it than price.

The 6502's bus timing was easier to interleave clock cycles with than the Z80's.

Most successful Z80 designs either interrupt the screen refresh when the CPU accesses video RAM, so you get sparkles on the screen (TRS-80 Model I, VZ200, etc.), or accessing video RAM creates wait states for the CPU if the screen is being refreshed (Spectrum). The wait states are bad enough that some speccy games draw to a buffer and then copy that to the display.

 

I realize that 6502 machines can and do have some wait states but it's not as bad.

The NEC Trek didn't isolate video RAM from the CPU, and the CPU suffers a horrible number of wait states during screen refresh.

 

 

 

>> The 6502's bus timing was easier to interleave clock cycles with than the Z80's.

 

Probably. This is a false advantage, however. By tying the CPU speed to the video circuitry's speed you 'gain' only one thing: you lose the freedom to allocate a different bandwidth "quota" to each function.

As for the lack of RAM contention, you are probably referring to the Apple II, which can manage it thanks to its low video bandwidth requirements. On more featured systems that take the same approach (the C64, for example), the CPU has to be stopped by this kind of contention just like the Z80 in the ZX.

 

As a rule, with standard DRAM chips the contention could not be avoided. And honestly, the real solution is to have some kind of arbiter that allows 100% bandwidth usage, as in more sophisticated systems.

Reserving 50% of the time for the CPU and 50% for video is simply the approach you take to cut the price down, and it is far from ideal.

 

For example, this scheme does not work when you need to raise the CPU speed while keeping the video hardware the same. On the C128, Commodore was forced to DROP the VIC-II display when running the 8502 at 2 MHz; it simply does not work. Even using faster DRAM is not a solution, because of the fixed amount of bandwidth allocated. (Doubling the bandwidth would have made the time-slot allocation more complex, because the video bandwidth stays the same and 75% of the time would have had to go to the CPU. They would probably have had to resort to a non-regular time-slot distribution with some kind of extra arbitration.)

The correct approach is to have some logic that arbitrates between accesses, or to do DMA accesses. That way each part works at its own speed without compromises (like the 6502 and VIC-II do in the C64). And with this kind of design it matters relatively little whether the CPU allows regular interleaved accesses, because the arbitration is already done in a flexible way.

 

>> Most successful Z80 designs either interrupt the screen refresh when the CPU accesses video RAM, so you get sparkles on the screen (TRS-80 Model I, VZ200, etc.)

This is a bad design choice; on the ZX there is no such problem (*there is a similar problem, but it is due to a hardware bug), nor is there on the Amstrad CPC. Both are Z80-based.

The same thing happened with the early PC CGA video card: a stupid design choice, not a requirement imposed by the CPU.

The term "successful" says nothing about the quality of the electronic engineering...

 

>> The wait states are bad enough that some speccy games draw to a buffer and then copy that to the display.

If you want flicker-free animation with software-generated sprites, this is practically the only thing you can do: write to a back buffer, then upload the contents to video RAM in a single operation.

It has nothing to do with the "bad" wait states.

It only has to do with the possibility of being caught by the raster beam during VRAM manipulation, which results in flickering.

The problem with software sprites is that you need to perform these operations to get animation:

 

a) save the background

b) plot the sprite

c) restore the background when the object moves, then go to (a)

But these operations are not performed atomically, so you will probably be caught by the screen refresh and the user will see a flicker. One solution is to synchronize with the raster beam, but that is often not practicable, because (1) most hardware cannot tell you where the raster beam is (the scanline position) and (2) you have very little time to do the steps above.

The other approach, the one you described, is to do the steps in a back buffer (or, better, use a full double-buffer approach if possible) and then upload ALL the modifications in a single operation. That way each area of the screen is modified only once and the human eye cannot see any flicker.
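A minimal Z80 sketch of that single upload step; the buffer label, screen address and size are placeholders rather than any specific machine's layout:

; all the save/plot/restore work happens in the off-screen buffer, then:
upload: ld hl,backbuf      ; source: the back buffer
        ld de,$4000        ; destination: video RAM (placeholder address)
        ld bc,6144         ; number of bytes to copy (placeholder size)
        ldir               ; one block copy, so the changes appear on screen all at once
        ret

backbuf: defs 6144         ; the off-screen work buffer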

 

 

>> I realize that 6502 machines can and do have some wait states but it's not as bad.

 

This has nothing to do with whether the CPU is an 8088, a Pentium, a Z80, a 6502 or a Motorola 68000.

Whether or not we have wait states is determined only by:

- the available memory bandwidth

- the amount of bandwidth each concurrent device requires to perform its function.

 

As said, maybe the Apple II does not have to wait, thanks to its simpler video hardware, while others do. The C64 suffers from a badline issue that occurs every 8 scanlines, and it is heavy enough to stall the processor for about 40 us of the 64 us available on a scanline. That is bad, but the VIC-II does some beautiful things that the Apple hardware does not, so it is worth the cost.

 

In conclusion, the simple 50/50 approach is useful only when you can live with that hard, inflexible time-slot allocation. When you need to use the available memory bandwidth efficiently (for example 20% to video, 80% to the CPU), this approach cannot satisfy you. Add the fact that different video resolutions require different bandwidth and it's easy to see that the 50/50 approach is not a good solution.

Again, this is the result of an over-simplified CPU architecture that has its roots in cost-cutting.

Link to comment
Share on other sites

Atari probably had the opportunity to use a 6809 or 6509 if it had been released. But the 6509 seemed to be vaporware and there was no existing code base for the 6809 to port programs over to the new Atari. The existing 2600 game console had already used a stripped down 6502. Atari had a group of experienced game developers that could easily migrate to the new machine. I'm sure the 6809 was attractive but there were a lot of reasons to stick with the 6502. If the 6509 had been available, Atari probably would have used it and I'm guessing other manufacturers would have started shipping new models with the 6509 as well, after all it would have still supported the 6502 code base. Talk of the 6509 may also have helped convince Atari to stick with the 6502. After all, their computer had a separate board with the CPU and it would be relatively easy to swap in a new CPU that still runs the old software.

 

 

I want to reply to this before going through 9 pages of threads. According to Al Alcorn's speech at the Sunnyvale Atari Party, Atari did heavily consider switching to the Motorola 6809. This was based upon the desire to not have to deal with Jack Tramiel. Even Manny Gerard at Warner didn't want Atari having to deal with Tramiel back then. Suffice to say, if MOS hadn't given Atari the second-sourcing contract [buy 50k 6502s from MOS and receive the right to second-source 6502s with Synertek in perpetuity], they would most likely have switched even if the 6809 was more expensive. MOS gave Atari that sweetheart deal while Commodore was in the process of acquiring the company [which Tramiel had allegedly driven into bankruptcy on purpose, by not paying monies owed to MOS, so he could acquire them cheaply - Al Alcorn even said so] and after MOS had begged Atari to purchase them for $1 million [Manny Gerard said "no"].

 

Even to this day, Al Alcorn believes it was the right decision, because Atari didn't have the resources to run MOS on the other side of the country, not to mention the investment necessary to create new chips [and not mentioning the contamination-site EPA Superfund issue]. Commodore's essentially hostile takeover [or acquisition under distress] of MOS led Bill Mensch to leave [he didn't like Tramiel and believed he wouldn't continue investing in R&D] with a 6502 license, and he set up the Western Design Center (WDC) after that [I believe with his sister]. The later success of WDC led the Acorn folks to famously found ARM.

 

Other key MOS people stayed on and contributed to the success of the later Commodore 64, but some, as in the case of Bob Yannes [father of the SID], felt cheated by Tramiel and left to do their own stuff. Yannes and others created the "My First Computer" keyboard for the Atari 2600, which Steve Ross of Warner apparently thought was great and which triggered several lawsuits brought by Tramiel. Yannes later founded Ensoniq.

 

 

Now, having written all of this: which is the better CPU for sound generation or sound management? I ask because so many arcade games used either the 6502 or the Z80 as a sound CPU or to manage a bank of sound chips [and speech synthesizers]. Or, apparently in the case of the Epyx Handy/Atari Lynx, the sound chip had a 6502 core embedded in it, separate from the main 6502 CPU.

Edited by Lynxpro
Link to comment
Share on other sites

There is a rumour that the Z800 was so powerful that the US military took it for its missiles.

 

Anyway, Zilog finally released it in 1989 as the Z280; it provided protected memory, bank switching and a lot of other things. It could have been very good for a 16-bit UNIX like the one on the PDP-11, or even Minix, but unfortunately it was too late. The Z280 was finally discontinued by the mid-1990s and completely erased from Zilog's history (now it's very hard to find any info on it).

 

I've also seen a Z380 that had 32-bit registers; it didn't have any success and was also wiped out.

 

Uhm, the Motorola 68000 was in the Tomahawk cruise missile. So the original Gulf War was essentially the US Military blowing up Saddam's military assets with Atari STs and Amigas. :)

  • Like 1
Link to comment
Share on other sites

The shitty thing about the 6502 is that practically nothing that used it in its prime ever ran over 2 MHz. At least with the Z80 there was a bit of upscaling: you had the TRS-80 at 1.78 MHz, early-'80s machines typically double that, and later machines going higher again.

 

Choice of machine - I have a bias towards the 6502, mainly because I used it so much more and the "better" machines IMO used it, the exceptions being the Amstrad, MSX2 and Sam Coupe. But honestly, both CPUs were on their last legs by the time the ST and Amiga were released.

 

Cost - really, I think the 6502 had the advantage for the most part there. C= owning MOS probably helped; they produced a reasonable suite of support chips to go with it at good prices, and the sheer number of machines that used the 6502 and its derivatives helped too.

You can't argue with the economies of scale from 25 million+ C64/128s and peripherals, 30 million Atari 2600s, and probably 10+ million more chips that went into Atari 8-bitters and peripherals.

 

Sure, the Z80 had the low-end market sewn up with the likes of the ZX80/81 and later the Spectrum, but look under the hood - there's not exactly much there.

 

Millions. You had 6502s/6507s in Atari and Commodore disk drives [and probably in other manufacturers' drives], 1 million units for the Atari 5200, 10 million [?] for Atari 8-bit computers, 4 million-plus units of the Atari 7800 in North America [unknown how many for Europe and the rest of the world], around 2 million for the Atari Lynx, 30+ million for the 6502 derivative in the NES, plus the Apple // clones such as the Laser128, the NEC PC Engine, etc.

 

That's not counting the 65816. Millions were sold when you count the SNES. How many Apple //gs machines did Apple sell?

Link to comment
Share on other sites

The Apple IIgs runs the 65816 at 2.8 MHz. If you compare that to the 1 MHz of the earlier Apple II then maybe you could claim a 400% speedup for some tasks.

 

Realistically, for a 65C02 vs a standard 6502 with the OS and BASIC interpreter reworked to use the new instructions, you'd get maybe a 4% speedup.

 

Where or how anyone could come up with an 800% figure, who knows? For an original Mac vs an old Apple II it would be a realistic claim.

 

My last post for a while here.

 

Let's remember that Apple supposedly gimped the //gs so it wouldn't cost Macintosh sales. They could've included a much faster 65816 in it.

 

If the 6509/6510 could address 1MB of RAM more easily than the 6502 [via bank switching], how come there seem to be so few Commodore 64s upgraded to 1MB of RAM these days? That seems to be a very popular upgrade for the 6502-equipped Atari 8-bits these days, and apparently the Apple // line was capable of being upgraded to 2MB [not counting the //gs].

 

Other points not mentioned in the thread… the Macintosh apparently was originally meant to have a 6809 CPU, but Steve Jobs insisted on the 68000 when he took over the project. And there's been chatter on Facebook that IBM originally considered the 6809 for the PC, but Motorola couldn't deliver the mass quantities of CPUs to IBM and/or wouldn't permit IBM to second-source the 6809 in their own fab plants. IBM could also have gone with one of their own CPUs, but apparently that wasn't in the cards due to the ongoing antitrust lawsuit against them that was originally initiated in the 1970s. They also considered purchasing Atari to design their "PC" [Steve Ross was apparently interested in selling a 50% stake in Atari to IBM at the time].

Edited by Lynxpro
Link to comment
Share on other sites

The Apple video system worked on one phase of the clock and was transparent to the CPU. The Tandy Color Computer did the same thing with the 6809 (all models, even the 3, which can run at the higher clock no problem). That, doubling as DRAM refresh, made the 1 MHz Apple fairly quick for its clock. Normally there would be waits for refresh cycles and additional waits for video system access. The only oddity in the Apple is a stretched 64th clock, to make the video timing match up. Maybe it's every 63rd clock; I can't recall, but it's clearly documented in the "Inside The Apple ][" series of books.

 

The Apple got accelerator chips thanks to this simplicity. 4 MHz 6502 operation was done with the Zip Chip and some other devices. I never got to own one, but I did run one for a time (a IIc+). Wicked fast. IMHO, that simplicity also made the other CPU add-on cards fairly easy to do. A default system simply doesn't use interrupts or DMA.

 

DMA gets seen on some storage and data capture devices, and there was a circuit documented in BYTE that allowed for "transparent" DMA due to how the 6502 typically used memory. Essentially, a specific pattern fits into the times when the 6502 isn't requiring the RAM. Neat.

 

Interrupts are seen on a variety of add-ons, like the Mockingboard, and I think the mouse. (need to read up on that one)

 

Re: 6809 - Man, I'm sort of just drooling over what could have been in Atari land had they picked that CPU. What an opportunity sort of wasted. I was tempted to mod a machine to run the hardware Boisy made, but I just don't have the time. Still... I would love to see a title like "DEFENDER" slinging way more stuff around with the arcade collisions, not the shortcut "one bullet takes everything out on its line" that we did get. Not that the game isn't loads of fun. It totally is, but it's not at the arcade's level of fun and difficulty. Hell, I'm rambling. Maybe the 6502 could do it and they just chose not to.

 

In any case, having that chip would have ruled, IMHO. But what I don't know about is fast response to events. That's one area where the 6502 really does well: one can get in and out of an interrupt very quickly, if done carefully. I'm not sure the other two, the Z80 and 6809, can do it that fast.

 

Re: Z80 I need to spend some time on one. I see a lot of nifty things I would like to try...

Edited by potatohead
  • Like 1
Link to comment
Share on other sites

...

In any case, having that chip would have ruled, IMHO. But what I don't know about is fast response to events. That's one area where the 6502 really does well: one can get in and out of an interrupt very quickly, if done carefully. I'm not sure the other two, the Z80 and 6809, can do it that fast.

...

If you consider that the original intent of the 6502 was to be an I/O controller, the low-latency interrupts make a lot of sense.

It's quick to push a few bytes around.

 

The 6502's low-latency interrupts are great if you don't have to do a lot, but part of the reason other CPUs have higher latency is that they automatically save registers. If you have to use all the registers, saving them manually is slower than having an interrupt save them automatically. The 6502 also doesn't let you directly push or pull the X and Y registers, adding 2 more cycles each to transfer through A. Seems to me it's 13 cycles to save all the registers and 16 cycles to restore them. The good news is that you can pick and choose when to save them.

PHA ; save A
TXA ; save X
PHA
TYA ; save Y
PHA

PLA
TAY ; restore Y
PLA
TAX ; restore X
PLA ; restore A

 

 

The 6809 has two interrupt modes: one saves the registers automatically and the fast interrupt (FIRQ) doesn't. The best of both worlds.

 

Z80 latency is complex.

Just remember to take clock rate into account.

I've seen a lot of comparisons that focus on clock cycles but they have a habit of skipping the part where you adjust for clock differences.

http://z80.info/interrup.htm

  • Like 2
Link to comment
Share on other sites

>> The 6502's bus timing was easier to interleave clock cycles with than the Z80's.

 

Probably. This is a false advantage, however. By tying the CPU speed to the video circuitry's speed you 'gain' only one thing: you lose the freedom to allocate a different bandwidth "quota" to each function.

As for the lack of RAM contention, you are probably referring to the Apple II, which can manage it thanks to its low video bandwidth requirements. On more featured systems that take the same approach (the C64, for example), the CPU has to be stopped by this kind of contention just like the Z80 in the ZX.

...

This is partially a response to this post and some other people's comments.

 

I was trying to keep it simple when I was talking about wait states on the 6502 and I wasn't just referring to 50/50 systems.

Most 6502 systems were based on the idea of alternating between CPU and video cycles.

Some machines were more complex in that they could halt the CPU, and some doubled the number of CPU clocks when the video circuit didn't need access to RAM, but they are all alternating cycles between CPU and video. The C64's badlines use CPU clocks to prepare for the next row of characters. The Atari would be the most flexible, since ANTIC can interrupt the CPU at different times and for different lengths of time. The faster-clocked machines like the Atari and Plus/4 took the extra step of giving the CPU access to any clock cycles the graphics chip didn't use. The faster machines that don't have separate video RAM never run at full speed all the time unless you turn off the graphics display, which I know the Atari can do and I think the Plus/4 can as well. Ultimately, cycles alternate between the graphics chip and CPU during screen refresh.

 

Two 6502 machines can run at full speed during screen refresh.

The C128 can't use the VIC-II at high speed but it has separate video RAM for the C128 graphics chip.

The Salora Manager uses the 9918 and has separate video RAM.

 

Part of the reason you didn't see 4 MHz 6502 systems is that RAM wasn't fast enough to allow an access every cycle. At least not until the mid-'80s, and it was expensive then.

The Z80 got away with it because it has several internal cycles between memory cycles.

The MSX Engine actually inserted wait states on certain memory cycles to keep the Z80 from accessing RAM too fast and it was released in '82.

People who have disconnected the wait output from the state machine say the mod improves execution speed by 10% on machines that can handle it. Also keep in mind that video has its own separate RAM on that system.

The Zip Chip and Apple IIc+ got away with a faster clock because they had a small amount of expensive high speed RAM and they were released much later. It would have been too costly to make all the RAM that high speed.

 

Z80 systems usually had separate graphics RAM that was isolated. As long as the CPU doesn't access video RAM it is never halted. If the graphics chip isn't accessing RAM, or the CPU blocks the graphics chip's access to RAM (TRS-80 Model I, VZ200, etc.), then the CPU isn't halted.

On a machine like the Spectrum, if the graphics chip needs access to RAM you get a different number of wait states depending on which clock cycle you access the video RAM on. The wait states count down from 6 to 0 and then the pattern starts over, so the CPU gets access once every 7 cycles. At least, that's if I read the doc on contended RAM correctly. I don't know how many of those cycles the graphics chip uses, but some wait states may be due to RAM speed.

 

The program I had in mind when I mentioned using a buffer was Carrier Command.

It isn't about flicker; erasing and drawing a frame takes a lot of accesses.

With the Spectrum's wait-state system, the game would be unplayable if it accessed video RAM directly.

 

 

 

The term "successful" says nothing about the quality of the electronic engineering...

I'm not sure I'd use the word quality for a lot of 8-bit machines.

Edited by JamesD
Link to comment
Share on other sites

Millions. You had 6502s/6507s in Atari and Commodore disk drives [and probably in other manufacturers' drives], 1 million units for the Atari 5200, 10 million [?] for Atari 8-bit computers, 4 million-plus units of the Atari 7800 in North America [unknown how many for Europe and the rest of the world], around 2 million for the Atari Lynx, 30+ million for the 6502 derivative in the NES, plus the Apple // clones such as the Laser128, the NEC PC Engine, etc.

 

That's not counting the 65816. Millions were sold when you count the SNES. How many Apple //gs machines did Apple sell?

I think the NES alone sold over 60 million units.

 

The SNES worldwide sales were around 49 million systems.

 

Based on earlier discussions I'd say Atari would have been ecstatic to have sold 10 million 8 bit computers.

For whatever reason, people seem to like to use 5 million or 10 million when they estimate number of Atari 8 bits sold.

Nice round numbers, I suppose. I think 5 million would be possible, but Atari pretty much blew their chance to sell more.

 

Based on collected serial numbers, Apple sold around 3 million IIgs systems.

 

Don't forget embedded systems, but Motorola and Intel were dominant there.

Link to comment
Share on other sites

  • 2 weeks later...

Isn't LDIR capable of copying rather large blocks of memory, far more than 256 bytes? If so, the 6502 equivalent would either need to be self-modifying or use the zero-page indirect indexed addressing mode. Something like this:

 

 

 lda #<source
  sta zp0 ; two byte pointer which must be a memory address < $100
  lda #>source
  sta zp0 + 1
  lda #<destination
  sta zp1 ; ditto
  lda #>destination
  sta zp1 + 1
  ldx #num_blocks ; for simplicity, copy multiples of 256 bytes
  ldy #0
; 24 cycles to this point
 
.loop
  lda (zp0),y
  sta (zp1),y
  iny
  bne .loop
; 16-17 cycles per loop, total 4096-4352 cycles for a block of 256 bytes
 
  inc zp0+1
  inc zp1+1
  dex
  bne .loop ; another 15 cycles

 

So to copy e.g. 8 kilobytes of data, I believe this routine would take a little less than 140,000 cycles. I'm not sure how many cycles LDIR takes, but as it is a single CPU instruction it ought to be much faster than rolling your own routine. It might be possible to improve the routine above a bit, but I'm unsure whether self-modifying code is faster than going through zero-page addressing?
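For comparison, a rough sketch of the Z80 side being asked about, reusing the same source/destination names; the 8 KB count just mirrors the example above:

        ld hl,source       ; source address
        ld de,destination  ; destination address
        ld bc,8192         ; byte count - BC is a 16-bit counter, so well over 256 bytes
        ldir               ; copy (HL) to (DE), inc HL and DE, dec BC, repeat until BC = 0

As a later post notes, LDIR actually costs 21 T-states per byte copied (16 only on the final iteration), so per clock the 6502 loop above holds up better than you might expect.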

Link to comment
Share on other sites

Yes: ABS,y or ABS,x is faster than (ZP),y. You could have something like:

Loop
	lda $FFFF,x	; 4-5
Dest
	sta $FFFF,x	; 5
	inx		; 2
	bne Loop	; 3
	inc Loop+2
	inc Dest+2
	dey
	bne Loop

I've omitted the setup code (Y is used as a page counter). Assuming the code's all on the same page, you're looking at 14-15 cycles per byte for the lion's share of the data (each page worth). I understand LDIR takes 16 cycles per byte, but I suppose the Z80 is generally clocked faster anyway.

Link to comment
Share on other sites

From what I could find, LDIR takes 21 cycles per byte (some docs say 16 but that's only the case on the last byte where the instruction falls through to the next one).

 

The 6502 best case is 15 cycles per byte in the <abs,x> form, although I would think the setup time for the 6502 would be somewhat longer in that and probably most other cases. But move operations as such tend to operate on fair-sized chunks, so the ground is made up pretty quickly.

 

For the <(ind),y> form the best case would probably be if you unrolled the loop somewhat, e.g. four iterations of load/store/iny and then a branch (see the sketch below), but that relies on guaranteed prerequisites; for a large move it might average out to under 14 cycles per byte.
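A rough sketch of that unrolled inner loop, reusing the hypothetical zp0/zp1 pointers from the earlier routine and omitting the setup and page-count handling:

.uloop
  lda (zp0),y   ; 5 cycles (+1 if the read crosses a page)
  sta (zp1),y   ; 6
  iny           ; 2
  lda (zp0),y
  sta (zp1),y
  iny
  lda (zp0),y
  sta (zp1),y
  iny
  lda (zp0),y
  sta (zp1),y
  iny
  bne .uloop    ; 3 when taken, amortized over 4 bytes
; about 13.75 cycles per byte before page-cross penalties; relies on Y starting at 0
; so the wrap to zero always lands on a 4th INY (the "guaranteed prerequisites" above)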

 

So I would think that in block-move situations involving more than several bytes, the 6502 would have the advantage in terms of cycles burnt, with the Z80 probably having a slight edge in code size, but the generally faster clock speeds used on Z80 machines would mean the actual elapsed times would be more favourable on the Z80. Also, the 6502 suffers when ideal conditions can't be guaranteed, e.g. when page boundaries are crossed or in loops with only a single move operation per iteration.

Edited by Rybags
Link to comment
Share on other sites
