
Are 800XL's less stable than 65/130 XE's?


Larry


I seem to have more stability problems with 800XL's than 130XE's. I really don't think this has anything to do with socketed chips versus soldered, but has to do with how the two computers behave with PBI devices attached or react to multi-eprom OS's. It seems that with 800XL's I always have to do the Bob Puff stability mod #1 just to use a PBI drive (or even an internal MyIDE). Or I have to do the mod #2 (which is more complicated) if I use a multi-eprom OS or 32-in-1. With 130XE's, I virtually never have to do any stability mods. Has anyone else noted this, and/or would there be a logical reason to see this behavior?

 

-Larry


 

Interestingly enough, I've had the complete opposite experience. I have multiple XL's and XE's and I can never get the XE's to be stable, but get rock solid performance from the XL's (No mods done at all)


Reason? One could be that the XE has Freddie, which puts a bunch of stuff on one IC rather than having several chips doing the same job, with the propagation delays that go along with that.

 

It also runs on a 14 MHz master clock rather than 3.6, so in theory it could supply some signals on a fractional basis rather than relying on other components to supply a delay (not sure if Freddie takes advantage of this or not).



It does. RAS and CAS timing are derived from the 14MHz clock on Freddie machines rather than from a delay line.
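
For anyone who wants the rough numbers behind that, here is a quick back-of-envelope sketch (Python; NTSC clock figures assumed, PAL is slightly different):

# Back-of-envelope only -- NTSC figures assumed.
MASTER_HZ = 14.31818e6      # XE master clock, 4x the ~3.58 MHz colour carrier
CPU_HZ = MASTER_HZ / 8      # ~1.79 MHz machine/DRAM cycle rate

cycle_ns = 1e9 / CPU_HZ     # one machine cycle, ~559 ns
tick_ns = 1e9 / MASTER_HZ   # one master-clock step, ~70 ns

print(f"machine cycle: {cycle_ns:.1f} ns")
print(f"14 MHz step:   {tick_ns:.1f} ns")
print(f"steps per cycle: {cycle_ns / tick_ns:.0f}")

So Freddie can place RAS/CAS edges on roughly 70 ns steps within the cycle (finer still if both clock edges are used), instead of depending on an external delay line and its part-to-part tolerance.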


Here is a quote from Bob Puff's stabilizing mods (mod #2):

http://www.nleaudio.com/css/stabiliz.htm

 

> This modification deals with a timing problem with the OS ROM. It seems that especially with multiple EPROM OSes, the output buffers of the ROM chip stay on even into the start of the next cycle. This causes RAM corruption, easily seen by bad bytes randomly appearing on the screen. ...<

 

Is this because many (especially older) EPROMs are slower than the original ROMs? If that is the case, would newer, faster EPROMs be likely to perform better -- say 100 ns or faster? In my experience, multi-EPROM OSes (e.g. Ramrod XL) seem to work fine unless there is a PBI device hooked up. In that case, doing mod #1 and #2 always seems to clear things up, but obviously it would be nicer if no mod was necessary.

 

-Larry
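
To make the mechanism described in that Bob Puff quote a bit more concrete, here is a rough sketch (Python); every figure in it is an assumption chosen just to illustrate the overlap, not a measurement from a real machine:

# Illustration only -- all figures below are assumed, not measured.
cycle_ns = 558          # one machine cycle at ~1.79 MHz
deselect_delay_ns = 30  # assumed: decoder/MMU delay before the (E)PROM's
                        # chip select actually goes away after the cycle ends
t_df_ns = {             # assumed output-disable (tDF) times: how long the chip
    "fast EPROM": 40,   # keeps driving the data bus after being deselected
    "slow EPROM": 100,
}

for part, t_df in t_df_ns.items():
    overlap = deselect_delay_ns + t_df
    print(f"{part}: still driving the bus ~{overlap} ns into the next cycle "
          f"({overlap / cycle_ns:.0%} of a cycle)")

The slower the EPROM's output-disable time, the further its buffers hang on into the next cycle, which is exactly the RAM-corruption window the mod is meant to close.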


> Has anyone else noted this, and/or would there be a logical reason to see this behavior? (-Larry)

Yes. My systems exhibit the exact same behavior. Remember all of those MIO/Hard Drive tests I ran last year? I had so many stability problems with my XL's (which I like more than XE's), that I eventually ran the lion's share of the tests on a stock 130XE attempting to minimize the variables. This netted me the best results.

 

I would like to stabilize my XL's... maybe this fall.



No stability problems with internal MyIDE, 4-in-1 OS, and Hias'/Bernd's SRAM upgrade. I don't own any true PBI devices.

 

My homebrewed SRAM cart is another matter.

 

I plan to upgrade to a 16-in-1 OS soon. I'm anxious to dump MyIDE OS 4.2i in favor of 4.3i and 4.4i beta (maybe 5.x if I can convince Sijmen to produce an internal version. I would like to play with the LBA mode).

 

I definitely hope I don't run into any stability issues.

 

-Steve Sheppard


You guys are so full of crap.. (Larry & BF2k)

 

The only way you can ever say that 65/130XEs are MORE STABLE than 800XLs on average is if ALL of your 800XLs have massive crappy mods done to them, and all of your 130XEs are stock, and you happened to get lucky enough to get decently built XEs from one of the later runs..

 

I have fixed more flakiness issues on 130XEs than I can even remember... I mean LOTS.. Like over 50 machines.. maybe over 100 machines.. It's been a lot of years.. 800XLs are generally pretty damn stable, especially in stock configuration.. You go tacking a bunch of slow-assed TTL logic and old slow EPROMs onto the bus of any machine, it's gonna get 'sloppy'.. Whether or not it pushes various timings over the threshold of what could cause errors... Well, that varies by individual situation.. But if you are saying that stock 130XEs (on average) are anywhere near as stable as stock 800XLs, I got some serious news for you: you got it completely opposite of the truth..


Thanks for the "serious news," Ken.

 

My sample is not so large, but that has been my experience. Hence the original question.

 

-Larry

 



I totally apologize for saying "you guys are full of crap"..

 

I should have said "Your experiences do not all coincide with my experience over a very long period and a large number of machines."

 

I must've been in a mega hurry and not thinking when I posted that.. I work (and have for my whole career) in and around an automotive shop full of arrogant, abrasive people.. So when I make posts from work, sometimes it reflects that.. This is why they have "Service Advisors." Mechanics would never convince customers of anything, even the most needed repairs, if it was left up to them and their personal relation skills. I'm kinda immersed in this world during the day.. And I am not one of the people who is allowed to deal with customers. heheh..

Edited by MEtalGuy66

Hi Steve-

 

I haven't had any issues with the earlier "hardwired" MyIDE devices, nor the flash cart version, but when I recently installed the internal flash version in a stock (Taiwan) 800XL MB, it started locking up. I did fix #1 which is very simple, and all has been well. Strange.

 

BTW, what kind of switch(es) are you going to use for your 16-in-1 -- rotary? Is there a wiring schematic somewhere? I don't recall ever seeing anything beyond a 4-in-1 using two switches.

 

-Larry

 

 



I've got the same experience as Ken, XEs seem to cause a lot more troubles than XLs.

 

Back when I developed the Turbo Freezer 2005 we did quite extensive field tests. It already worked fine on all my XLs plus all the XLs and XEs of the testers. One day Bernhard Engl (the developer of the original Turbo Freezer) visited me and brought his worst-case 600XL.

 

Whenever people had problems with their Freezer that weren't caused by the Freezer but by the machine, he offered to fix the Atari for them. He then replaced the (half-)faulty chips with good ones and put the faulty chips into the 600XL. So this was a good reference machine for testing. If something worked with this 600XL, chances were really good it would work with all other machines in the world.

 

So I plugged my prototype into this 600XL and - no problems at all, it ran rock solid!

 

I thought I was finished with development, but then I got an email from one of the testers that he had problems on his 130XE. We did some more testing (I even bought a 130XE just for this purpose, although I really hate the XEs) and finally ended up with the 74HCT123 circuit and the input-latches in the CPLD.

 

so long,

 

Hias


One thing's for sure, there was a lot of timing slop in these old digital circuits. It's not like today where transition times are known down to the ns. When you start introducing new IC's onto the bus, all kinds of things can happen.


I don't agree. The design should not require tight tolerances, and the ICs should not vary widely in their operation. There are basically two activities during each cycle - read or write. The address is valid at the rise of PHI2 and the data is valid at the fall of PHI2 on a read, and in a similar fashion on a write. There are just two signals involved - PHI2 and R/W.

 

Can anyone demonstrate a proper design that fails between machines, and why?

 

These reports that 'xx works in two of my yys but not the third one' only indicate intolerant designs. (It doesn't prove bad design, either!) The Fte 65816 hack will not work with one version of ANTIC - is that badly built ANTICs or bad design?

 

Bob
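
A very rough, not-to-scale sketch of the read cycle Bob describes (address good by the rise of PHI2, data latched at the fall):

           rise of PHI2                   fall of PHI2
                |                              |
PHI2   _________/¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯\_________
ADDR   =====X<------- address & R/W valid -------->X=====
DATA   ---------------------------<-- data valid -->X----   (read: CPU latches data at the fall)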

 

 

 



> The design should not require tight tolerances and the ICs should not vary widely in their operation.

I totally agree.

 

> These reports that 'xx works in two of my yys but not the third one' only indicate intolerant designs. (it doesn't prove bad design, either!)

I would go as far as saying the Atari design is bad because it leads to possible intolerance:

 

If Atari had used bus drivers/transceivers on all the lines (including address/data bus, RW) and not only the LS08 to buffer PHI2, there would be (almost) no skew between PHI2 and the rest of the signals. IMHO this would have helped a lot and we all could design hardware "by the book".

 

Then, of course, using low quality chips didn't really improve the (already critical) situation...

 

so long,

 

Hias
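
Rough arithmetic for the skew Hias is describing; both figures below are ballpark, datasheet-style assumptions rather than measurements:

# All numbers assumed -- just to show why the unbuffered-address / buffered-PHI2
# combination can go wrong.
cpu_addr_hold_ns = 10   # assumed: how long the CPU holds the old address after
                        # its own PHI2 output falls (a weak part, worst case)
ls08_delay_ns = 15      # assumed: propagation delay of the LS08 gate that
                        # external hardware sees PHI2 through

# Effective hold time as seen at the cartridge/PBI connector:
effective_hold = cpu_addr_hold_ns - ls08_delay_ns
print(f"effective address hold at the connector: {effective_hold} ns")
# A negative result means the address can change before the buffered PHI2
# falling edge even arrives -- hardware designed "by the book" then fails.

With buffers on the address/data lines as well (as on the Apple II), the same delay would appear on both sides and mostly cancel out.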


I don't see anything on the bus that would have a problem with signal timing or loading. The Atari has 280 ns of address setup time and 280 ns of data access time. If it is designed correctly, no buffers or distributed clocks should be necessary. You assume the problem is PHI2, even though no comprehensive fix is available.

 

I'm thinking about building a clock/PHI2/R/W generator with a 28 MHz oscillator. Then, we can put the signal edges anywhere we like.

 

Bob
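
A quick sanity check on those numbers, plus what a 28 MHz oscillator would buy in edge-placement granularity (simple arithmetic only; how such a clock would be locked to the existing system clock is a separate question):

cpu_hz = 1.79e6
cycle_ns = 1e9 / cpu_hz        # ~559 ns machine cycle
half_ns = cycle_ns / 2         # ~280 ns -- matches the setup/access figures above

osc_hz = 28e6
step_ns = 1e9 / osc_hz         # ~35.7 ns per oscillator period
print(f"cycle: {cycle_ns:.0f} ns, half-cycle: {half_ns:.0f} ns")
print(f"28 MHz step: {step_ns:.1f} ns -> about {cycle_ns / step_ns:.0f} possible edge positions per cycle")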

 

 



I'm interested in the suggestion made by Hias on "chip quality".. This makes me wonder if at least part of the issues may lie with the output timing of the ATARI chips themselves, possibly being a little "sloppy"..

 

And by this, I mean in comparison to other machines of the era.. Look at the Apple II.. It runs its display chip on a clock-interleave with the CPU, which you'd think would make timing twice as critical, yet those machines are notoriously ROCK-SOLID-STABLE.

 

Looking at the Apple II motherboard, I don't see that they are using higher quality components or TTL logic than is employed in the ATARI.. And if you look at the schematic, the design is relatively free of "embellishments" as far as redundant buffering of bus signals, etc..

 

Now, it's a given that the Apple II has a lot of fundamental design differences when compared to the ATARI.. But these machines are so freakin stable and consistently predictable (in my experience) that if you get one to crash, or flake out, you can usually determine the cause immediately, and 99% of the time, you can even exactly reproduce the event by doing the exact same thing again..



I think it would be good to make it clear that these unstable experiences are from those who are expanding/modifying their machines. I haven't experienced any instability in my 10+ Atari machines because only one was expanded to 256K RAM (all others are stock). Given all the exact video/audio timing the A8 uses in its software, I would think they tested their hardware similarly. The Apple II doesn't have as many custom chips, nor ones as advanced as the A8 has, so it's a simpler design, and its software doesn't depend on timing the way A8 software does.

 

I am writing this to make it clear in case someone searches the web and ends up claiming stock A8 timing is off leading to an unstable system; nothing can be farther from the truth.


A lot of it might come down to the lack of expertise once the original team left.

 

They never got the video circuit right; there are "fixes" that in most cases made things worse.

 

The expansion concept as it originated was ditched in favour of the PBI, and it's obvious that there was little forethought there too, as it lacks vital signals such as /HALT. Additionally, having CSYNC available could have made it much easier for any expansion that relied on video/refresh timing, i.e. anything that wanted to do DMA.


> I'm interested in the suggestion made by Hias on "chip quality".. This makes me wonder if at least part of the issues may lie with the output timing of the ATARI chips themselves, possibly being a little "sloppy"..

I'm interested in some more information about this, too. Unfortunately I don't have proper test equipment, no scope at all, just a crappy 100MHz logic analyzer which just gave me some headaches again...

 

So far I've seen some scope screenshots, made by other users, which showed really bad edges (extremely long rise and fall times, more like sine waves than square waves). But not on all Ataris; the ones with "good" CPUs had quite normal signals.

 

I just ran several simple tests with my logic analyzer, but since it only has 10ns resolution it's hard to tell what the exact timing really is:

 

I sampled PHI2, A0 and D0 at the cartridge port and measured the address- and data- hold times after the falling edge of PHI2.

 

The data hold time was something larger than 60-80ns in all cases, so it shouldn't matter. This leaves us with the address hold time:

 

With my "good 800XL" it was 30-40ns. This is what one would expect.

 

With my "bad 800XL" it was only 10-20ns. This just isn't enough.

 

Another 800XL and a 130XE, both with Bob Puff's stabilizing mod, had 40-50ns. That's also what I expected.

 

It would be great if someone with a good >1 GS/s scope could capture some samples of a "good" and a "bad" Atari so we could compare the signal quality and especially the edges. It would also be interesting to check if the slopes (and therefore the high/low transition times at the inputs) vary over time, and if the (possible) skew between the address slopes and the PHI2 slope varies, too.

 

> And by this, I mean in comparison to other machines of the era.. Look at the Apple II.. It runs its display chip on a clock-interleave with the CPU, which you'd think would make timing twice as critical, yet those machines are notoriously ROCK-SOLID-STABLE.

According to the Apple II "redbook", buffers were used on all CPU lines. This means the clock (PHI1 in the case of the Apple II) was skewed by the same amount as the address/data lines, so the relative skew between clock and address was (approximately) zero again.

 

I still suspect that the combination of (A) the LS08 used as PHI2 buffer and (B) crappy CPUs with bad slopes is the cause of all this trouble. Each thing alone shouldn't be too big a problem, but the combination of both together is what makes the Atari XL/XE much more flaky than the Apple.

 

so long,

 

Hias
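
Pulling those figures together and checking them against an assumed ~30 ns "comfortable" address-hold threshold (the threshold is an assumption taken from the "this is what one would expect" remark above; the analyzer's 10 ns resolution also adds that much uncertainty to every figure):

SAFE_HOLD_NS = 30                      # assumed threshold, see note above

measured_hold_ns = {                   # address hold after PHI2 fall, from the post
    '"good" 800XL': (30, 40),
    '"bad" 800XL': (10, 20),
    '800XL / 130XE with Puff mod': (40, 50),
}

for machine, (lo, hi) in measured_hold_ns.items():
    verdict = "looks fine" if lo >= SAFE_HOLD_NS else "marginal / failing"
    print(f"{machine:28s} {lo}-{hi} ns  -> {verdict}")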


Actually, I look at those timings in a little different light...

 

I am using a fast (14 MHz) 65816 running at 1.79 MHz, but the 6502 is similar.

 

The data/address/R/W hold time is spec'd at 10 ns. The setup time for these signals is 30 ns, which means that the new values have to be stable 30 ns after PHI2 falls. I would contend that any CPU that does not meet setup times is bad, not good.

 

These 'fast' chips do run in a 1200XL, at either 1.79 MHz or 7.16 MHz.

 

A totally unmodified 1200XL will not run some cartridge games, regardless of CPU, ANTIC, memory, or otherwise. Yet, the 65816 machines do much better with these 'problem' carts.

 

Bob

 

 

 

 

 



> I think it would be good to make it clear that these unstable experiences are from those who are expanding/modifying their machines. [...] The Apple II doesn't have as many custom chips, nor ones as advanced as the A8 has, so it's a simpler design, and its software doesn't depend on timing the way A8 software does.

 

The only respect where the ATARI is "much more advanced" than the Apple II is in video.. The POKEY implementation is rather straightforward and doesn't require a bunch of system-level architectural accommodations.. You can stick a POKEY chip on an Apple II relatively easily compared to the mess you'd have trying to "graft in" an ANTIC & GTIA.. Same for the PIA..

 

If you look at Apple's design, I'll say again: the display chip is run on a clock interleave with the CPU. This should in theory make system bus timings twice as critical, where skew & overlap are concerned.. The Apple's floppy controller system is way the hell better than the ATARI's.. Having the controller on the motherboard, and then a rudimentary interface to "dumb" drives, not only makes its native data rate fast as hell compared to the ATARI's disk system, but makes it MORE programmable, and on a lower level than even Happy-modified drives.. Right up until the end of the Apple II's era of popularity, there were software-only backup programs that could copy virtually any disk.. The software companies would develop a new type of copy protection, and a week later, the guys who did "Copy II+" would just modify their code and release a new version that could copy all the latest stuff out there.. Also, on the note of the display being on a clock-interleave with the CPU, I'll say this: yes, it's true that the Apple's video hardware was nowhere near as cool or advanced as the ATARI's.. But it didn't suffer any performance hits due to ANTIC DMA either.. Don't Ask's S.A.M. speech synthesizer program on the Apple worked great, no matter what graphics mode you were in, or whatever was going on with the display.. Not so on the ATARI.. Also, later on, Apples got Double Hi-Res, nice native 80 columns, etc.. Stuff we'd have loved on the ATARI.. So, yeah, the Apple II design was an earlier design.. But I would not say that in all respects, the ATARI was "more advanced"...

 

I'm just being realistic here.. I own ONE Apple II... I don't even know how many ATARIs I own.. I'd have to go organize my stuff and count them all.. I'm an ATARI guy to the core.. But I also value the Apple II for what it is..

