
SIO connector alternative



* the CLOCK OUT is needed for disk drives (and devices alike), if they want to determine _reliably_ the baudrate requested by the computer. Accommodating to the baudrate can be done without that, by trial and error, but the "error" part of this method is a bit risky:

 

Couldn't agree more. IMHO it is a rather important signal.

 

As drac030 says, it is possible to get along without it, but it is quite complicated and not very reliable. As an example, it is important to understand how USD/Happy hi-speed transfer works.

 

Many people believe that the '?' command switches to hi-speed, but this is not true. It simply queries the drive for the Pokey divisor, and some software never issues this command at all. What the drives do is detect that they are receiving at the "wrong" speed and then try the other one. In turn, the computer tries one speed and, if that doesn't work, tries the other.
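
Just to illustrate the arithmetic behind that query (a minimal sketch assuming the NTSC machine clock; the divisor values shown are the commonly cited ones, not taken from any particular firmware):

```c
#include <stdio.h>

/* POKEY serial bitrate for a given divisor, using the linked 16-bit timer:
 * rate = machine_clock / (2 * (divisor + 7)).  The '?' command merely
 * returns such a divisor; it does not itself switch the drive to hi-speed. */
static double pokey_bitrate(int divisor)
{
    const double pokey_clk = 1789773.0;   /* NTSC; a PAL machine runs at ~1773447 Hz */
    return pokey_clk / (2.0 * (divisor + 7));
}

int main(void)
{
    printf("divisor 40 -> %.0f bit/s (the standard \"19200\" rate)\n", pokey_bitrate(40));
    printf("divisor 10 -> %.0f bit/s (US Doubler hi-speed)\n", pokey_bitrate(10));
    return 0;
}
```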

 

As you can see, if this trial and error is not done reliably, you can end up in cyclic behavior. The 1050 can easily detect that it is trying to read at the wrong rate. Surprisingly enough, it is easier for the 1050 because it doesn't have a UART and performs serialization purely in software. It is much more difficult to do this with a true UART, especially with the PC one, which doesn't offer the exact baud-rate values that Pokey uses.
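
A minimal sketch of the host-side half of that trial and error (the helper below is a hypothetical placeholder, not actual XL OS or US Doubler code, and the order in which the speeds are tried differs between implementations):

```c
/* Hypothetical helper: send a 5-byte command frame with POKEY programmed
 * for the given divisor and wait for the drive's ACK/NAK. */
typedef enum { SIO_OK, SIO_TIMEOUT, SIO_ERROR } sio_status;

sio_status send_command_frame(int pokey_divisor, const unsigned char cmd[5]);

sio_status send_command_trial_and_error(const unsigned char cmd[5])
{
    /* Try the hi-speed rate first; if the drive doesn't answer sensibly,
     * fall back to the standard rate.  The drive does the mirror image:
     * it notices it is receiving at the "wrong" speed and switches. */
    if (send_command_frame(10, cmd) == SIO_OK)    /* ~52.6 kbit/s */
        return SIO_OK;
    return send_command_frame(40, cmd);           /* ~19.2 kbit/s */
}
```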

 

And again, more importantly, it seems the signal is already being used by the Indus. As I said above, I was not aware of this until somebody mentioned it here, so I'm not 100% sure about it. The Indus schematics are not very useful; all the SIO signals are connected and multiplexed. A ROM listing or a disassembly is required to check this.

 

* the CLOCK IN signal can be used in a disk drive (and devices alike) to change the baud rate of the computer's Pokey without exchanging any commands between the computer and the drive.

 

This could be very interesting, but it seems it doesn't work. Somebody mentioned here some time ago that he tried it and it didn't work. Maybe he didn't try it correctly, but it is quite possible that this Pokey feature simply doesn't work, or at least doesn't work reliably.

 

On one hand it is hard to believe that something in the chipset doesn't work as documented; the chipset is so well designed, after all. But on the other hand it is also hard to believe that, if this feature works, it was never ever used before.


* the CLOCK OUT is needed for disk drives (and devices alike), if they want to determine _reliably_ the baudrate requested by the computer. Accommodating to the baudrate can be done without that, by trial and error, but the "error" part of this method is a bit risky: trial and error means that the drive is quite often receiving garbage instead of a valid command, and may act unexpectedly. Actually, I know of a type of disk drive which, when it receives a certain command at the wrong baudrate, interprets it as FORMAT DISK and - due to an independent bug in the ROM - proceeds with execution (_not_ funny). Determining the baudrate from the CLOCK OUT signal is much more elegant and safe.

Argument taken. But: this is an existing, old drive and if any new drives are to be manufactured I'd hope this bug will be fixed. And I also hope that none of the new devices will accept a command frame if any of the bytes contain a framing error (i.e. the state of the data line changes within a single bit) or if the command frame checksum doesn't match.
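
For what it's worth, a minimal sketch of the kind of command-frame validation meant here; the SIO checksum is the 8-bit sum of the bytes with any carry added back in (this is an illustration, not any particular drive's firmware):

```c
#include <stdint.h>
#include <stdbool.h>

/* SIO checksum: 8-bit sum of the bytes with the carry wrapped back in. */
static uint8_t sio_checksum(const uint8_t *buf, int len)
{
    uint16_t sum = 0;
    for (int i = 0; i < len; i++) {
        sum += buf[i];
        sum = (sum & 0xFF) + (sum >> 8);   /* end-around carry */
    }
    return (uint8_t)sum;
}

/* Accept a 5-byte command frame only if no framing error was seen and
 * the checksum (5th byte) matches the first four bytes. */
static bool command_frame_valid(const uint8_t frame[5], bool framing_error)
{
    if (framing_error)
        return false;
    return sio_checksum(frame, 4) == frame[4];
}
```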

 

* the CLOCK IN signal can be used in a disk drive (and devices alike) to change the baud rate of the computer's Pokey without exchanging any commands between the computer and the drive. For example, the computer sends the normal READ SECTOR command at 19200, and the drive can, just by supplying an appropriate CLOCK IN signal, change the baudrate and send responses and data at the speed it likes.

 

That last possibility of course requires the XL OS SIO code to be slightly changed so that one could enable the external sync mode when needed. But this is not a problem these days.

Nice feature, but does anyone really need that? It's incompatible with standard computers, incompatible with 1050-2-PC interfaces, and not necessary if one designs the software SIO protocol appropriately (e.g. Happy/Speedy ultra speed, Happy warp speed, XF551, ...).

 

so long,

 

Hias


But: this is an existing, old drive and if any new drives are to be manufactured I'd hope this bug will be fixed. And I also hope that none of the new devices will accept a command frame if any of the bytes contain a framing error (i.e. the state of the data line changes within a single bit) or if the command frame checksum doesn't match.

 

... unless the code is buggy. The author of the firmware for the drive I was speaking about had _not_ introduced the bug intentionally. Feeding disk drives garbage instead of commands may simply have such unexpected results.

 

Despite that, please read the post above, where ijor describes how the US protocol works in _some_ implementations, i.e. the ones not using the CLOCK OUT signal. Most of the time it works only thanks to the fact that the XL OS SIO repeats every command 14 times (or so) before it falls back to returning an error. The trial-and-error method of determining serial speed is really an ugly hack - and cutting off the CLOCK OUT line means that nothing except such an ugly hack will be possible for future devices.

 

Nice feature, but does anyone really need that?

 

Anyone who wants to use it, IMHO.

 

It's incompatible with standard computers

 

On the contrary, it is present in all standard computers :)

 

incompatible with 1050-2-PC interfaces, and not necessary if one designs the software SIO protocol appropriately (e.g. Happy/Speedy ultra speed, Happy warp speed, XF551, ...)

 

Repeating your argument: these are "existing, old" devices/protocols, and so they don't count - what matters is that there can be future devices using that feature. And it is rather easy to guess why it hasn't been used very often so far: the standard SIO code doesn't take it into account, so to make wider use of it you'd have to replace the ROMs in all computers, which was by no means easy or cheap. But now we have flash ROMs and the situation is different - a SIO upgrade is mainly a question of some coding.

 

And even if someone doesn't want to mess with the ROM, there are soft-loaded SIO drivers, e.g. in SpartaDOS. An implementation is a question of agreeing on a standard command (enable/disable external sync) and a bit of free time.


* the CLOCK OUT is needed for disk drives (and devices alike), if they want to determine _reliably_ the baudrate requested by the computer. Accommodating to the baudrate can be done without that, by trial and error, but the "error" part of this method is a bit risky:

 

Couldn't agree more. IMHO it is a rather important signal.

 

As drac030 says, it is possible to get along without it, but it is quite complicated and not very reliable. As an example, it is important to understand how USD/Happy hi-speed transfer works.

Sorry, but I completely disagree. Automatically adapting to an unknown bitrate is a standard technique in asynchronous communications, and modems have been able to do this for some 20 years now. There's only a single situation where it will fail:

- the incoming data is a continuous bitstream without any pauses between the bytes, and

- the bitstream is formed in such a way that each byte has both a valid start bit and a valid stop bit.

Under these conditions you will get the right bitrate but might read a malformed bitstream. Please note that both conditions are strictly necessary.
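
To make the technique concrete, here is a very rough sketch of the usual approach (measure pulse widths on the data line and take the shortest one as the bit time); the edge-capture helper is a hypothetical interface, not a real API:

```c
#include <stdint.h>

/* Hypothetical edge-capture helper: blocks until the next edge on the
 * serial data line and returns its timestamp in timer ticks, or 0 if the
 * line has gone idle (timeout). */
uint32_t wait_for_edge(void);

/* Estimate the bit time as the shortest pulse seen over a number of edges;
 * the bitrate is then timer_hz / shortest_ticks.  A pause in the data
 * simply resynchronizes the receiver, which is why both conditions above
 * are needed for the method to misbehave. */
uint32_t estimate_bit_ticks(int max_edges)
{
    uint32_t shortest = UINT32_MAX;
    uint32_t prev = wait_for_edge();

    for (int i = 0; i < max_edges; i++) {
        uint32_t now = wait_for_edge();
        if (now == 0)
            break;                         /* line idle again, stop */
        if (now - prev < shortest)
            shortest = now - prev;
        prev = now;
    }
    return shortest;
}
```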

 

In the case of Atari SIO communication it's actually extremely easy: a command frame starts with the Command line going low, then 5 bytes are transmitted, and then the Command line goes high again. Before and after the command frame bytes there has to be a pause of defined minimum and maximum length.
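
A device-side sketch of that framing, with hypothetical helpers standing in for the real hardware access (not taken from any existing firmware):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical hardware helpers. */
bool command_line_asserted(void);                    /* Command line is low     */
bool uart_read_byte(uint8_t *b, bool *framing_err);  /* false if no byte ready  */

/* Receive one SIO command frame: collect bytes while the Command line is
 * low.  A well-formed frame is exactly 5 bytes: device ID, command,
 * AUX1, AUX2 and checksum. */
int receive_command_frame(uint8_t frame[5], int *framing_errors)
{
    int count = 0;
    *framing_errors = 0;

    while (!command_line_asserted())
        ;                                            /* wait for frame start */

    while (command_line_asserted()) {
        uint8_t b;
        bool ferr;
        if (!uart_read_byte(&b, &ferr))
            continue;                                /* no byte ready yet */
        if (ferr)
            (*framing_errors)++;
        if (count < 5)
            frame[count] = b;
        count++;                                     /* count good and bad bytes */
    }
    return count;                                    /* 5 for a good frame */
}
```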

 

It's even easier if you use a UART (like a standard 16x50 used in PCs) that reports framing errors. Depending on how many framing errors you got within a command frame and on how many (correct and incorrect) bytes you received during the Command=low period, you can decide whether this was likely just a standard transmission error (interference etc.) or whether the bitrate was wrong.
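
A sketch of that decision, using the byte count and framing-error count from the previous sketch; the thresholds are illustrative guesses, not values from any real implementation:

```c
typedef enum { FRAME_OK, FRAME_RETRY_SAME_SPEED, FRAME_TOGGLE_SPEED } frame_verdict;

/* Heuristic only: a clean 5-byte frame is accepted (checksum still to be
 * verified); many framing errors or a wrong byte count during the
 * Command=low period suggests the bitrate itself was wrong; a single
 * framing error in an otherwise plausible frame is treated as ordinary
 * line noise. */
frame_verdict judge_command_frame(int bytes_received, int framing_errors)
{
    if (bytes_received == 5 && framing_errors == 0)
        return FRAME_OK;
    if (framing_errors >= 2 || bytes_received != 5)
        return FRAME_TOGGLE_SPEED;
    return FRAME_RETRY_SAME_SPEED;
}
```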

 

Without a UART, detecting framing errors is a little bit more complicated since you have to do the oversampling "by hand", but you can still always check how many bytes you received and whether the Command line went back high before the stop bit of a byte was received.

 

Next thing: with Atari SIO devices you usually have only to switch between 2 bitrates (19200 bit/sec and some high-speed bitrate) whereas a modem has to adapt to an arbitrary bitrate ranging from 1200 (or even 300) to 115200 bit/sec, and the modem doesn't have the Command line as an additional synchronization source.

 

so long,

 

Hias


... unless the code is buggy. The author of the firmware for the drive I was speaking about had _not_ introduced the bug intentionally. Feeding disk drives garbage instead of commands may simply have such unexpected results.

 

Despite that, please read the post above, where ijor describes how the US protocol works in _some_ implementations, i.e. the ones not using the CLOCK OUT signal. Most of the time it works only thanks to the fact that the XL OS SIO repeats every command 14 times (or so) before it falls back to returning an error. The trial-and-error method of determining serial speed is really an ugly hack - and cutting off the CLOCK OUT line means that nothing except such an ugly hack will be possible for future devices.

Buggy code may have all kinds of unexpected results and should be fixed ASAP...

 

The point is: it is possible to build a robust, reliable implementation quite easily. This has already been done, and it has nothing to do with ugly hacks, just standard techniques that have been in use for quite some time now. In modern computers (i.e. anything that's a little bit newer than the Atari) you won't find any synchronous serial communication ports, just asynchronous ones, and none of these computers suffer from unreliable communications and/or ugly hacks.

 

Repeating your argument: these are "existing, old" devices/protocols, and so they don't count - what matters is that there can be future devices using that feature. And it is rather easy to guess why it hasn't been used very often so far: the standard SIO code doesn't take it into account, so to make wider use of it you'd have to replace the ROMs in all computers, which was by no means easy or cheap. But now we have flash ROMs and the situation is different - a SIO upgrade is mainly a question of some coding.

 

And even if someone doesn't want to mess with the ROM, there are soft-loaded SIO drivers, e.g. in SpartaDOS. An implementation is a question of agreeing on a standard command (enable/disable external sync) and a bit of free time.

Now we are mixing some points of view. I was just talking about changing the SIO connector, so that almost all devices (except a few) will still work fine. Exchanging the OS of an Atari usually means you have to do quite some soldering, whereas the SIO connector change just means plugging an adapter into the SIO port. From my experience: a lot of Atari users are willing to just plug in a new extension, but only very few freaks will want to do some soldering inside their Ataris.

 

Extending the Atari hardware/OS/... is a really interesting area, but I think we should just stick to discussing the SIO protocol here.

 

so long,

 

Hias


Howdy folks

 

In my opinion, the way to go is trying to find a way to get new SIO connectors (read: exact duplicates of the existing ones).

 

But if a different standard for the connectors has to be developed, then why not multiplex some or all of the signals? That way you need five pins: power, ground, all signals leaving the device, all signals entering the device, and all bi-directional signals (if we have those). Or four twisted pairs (power/ground, in/ground, out/ground and bi/ground).

 

It also means you have to multiplex and demultiplex all signals inside the connector. But you could of course opt to use only one connector per device. Let me explain:

 

Let's say you have one computer and four devices. All connected via the daisy chain.

 

At the moment, the signal travels:

From the SIO connector of the computer to one of the SIO connectors of device number one.

From the second SIO connector on the first device to one of the SIO connectors of device number two.

From the second SIO connector on the second device to one of the SIO connectors of device number three.

From the second SIO connector on the third device to one of the SIO connectors of device number four.

 

If you use multiplexors and demultiplexors for each connection, a signal is multiplexed and demultiplexed four times before it reaches the last device.

 

That's the first idea.

 

Now for the second:

 

Since you have to have electronics where the female SIO connectors used to be anyway, why not add some extra electronics? For every device, all signals going into it are demultiplexed, but the multiplexed signal is also sent along to all the other devices. Meaning: when a signal reaches a device, it has only been demultiplexed once.

 

Since, apart from the interrupt line, only one device will talk to the computer at any moment, multiplexing a signal coming from a device shouldn't be a problem.

 

So you get:

 

When a signal coming from the computer travels to the last device in the chain:

 

The signal from the computer is multiplexed in the female SIO connector in the back of the computer.

The signal travels to the female SIO connector plugged into the first device.

There, it's:

a) sent directly to the female SIO connector plugged into the second device.

b) demultiplexed and sent to the male SIO connector of the first device.

In the female SIO connector plugged into the second device, the signal is:

a) sent directly to the female SIO connector plugged into the third device.

b) demultiplexed and sent to the male SIO connector of the second device.

etc., etc.

 

Of course, "female SIO connector" in the description of the second idea means "whatever you plug into the male SIO connector".

 

To keep this explanation simple :roll: :D :cool: :ponder: , I'll "forget" the signal coming from the devices in my examples. I don't want to fill a complete page by myself, I just want to get my idea across.

 

Greetings

 

Mathy


Buggy code may have all kinds of unexpected results and should be fixed ASAP...

 

Obviously, but that wasn't the point. The point was that this bug could have been minor if the drive hadn't received complete garbage commands and had used the CLOCK IN line to determine the speed, as it should.

 

Now we are mixing some points of view. I was just talking about changing the SIO connector, so that almost all devices (except a few) will still work fine.

 

Aye, sir. And serial protocol details are not the real point either, despite the discussion above. My real point from the beginning, as you certainly remember, is: missing signals = crippled SIO.

 

It is not very important that some signals have only a few uses _at_the_moment_. Similar argumentation (namely: "nobody uses it - nobody understands it - nobody needs it") led some people to throw the PBI routines out of the OS, which I think was a great mistake, because their very presence creates a facility on which a device can be built. Such logic caused QMEG 3.2 not to work with hard drives (certainly because "there are no hard drives available" <-- and THIS is WRONG, because a hard drive interface can be built rather quickly).

 

Back to SIO: we have two chips inside the Atari, Pokey and PIA. These provide some services which may not look useful at the moment to _some_ people, but _other_ people may want to use them beyond your (or my) knowledge and imagination. Cutting the lines renders part of the hardware and its services obsolete, not because it really is obsolete, but because the available connector has too few lines to carry all the signals that Pokey/PIA provide. And in light of this fact, all the other argumentation looks really secondary.

 

But - this way a crippled pseudo-standard will be created, which is what you said you were afraid of creating, and I told you that I doubt it could be warmly accepted, being crippled. Well, I still doubt that, then :)


Automatically adapting to an unknown bitrate is a standard technique in asynchronous communications, and modems have been able to do this for some 20 years now.

 

The old modems you are talking about performed autobaud detection using a completely different technique. Older modems required a given known sequence. That's why you had to type "AT" to change the baud rate. Otherwise they usually couldn't detect your rate and sent you garbage. Even AT wasn't foolproof and they still sometimes replied with garbage.

 

And if you remember, that's the same reason you had to press <RETURN> (or something else) several times when logging in to a BBS at that time.

 

Later modems had hardware autobaud. Many UARTs have had this feature for quite some time.

 

Internal modems, except the older ones (which had a true UART), don't require autobaud at all, because there is no actual serialization before the modulation phase.

 

In the case of Atari SIO communication it's actually extremely easy: a command frame starts with the Command line going low

 

The SIO bus is one thing and the SIO protocol is another. Not all SIO communications follow the SIO standard or the SIO framing. Yes, most do, but not all.

 

It's even easier if you use a UART (like a standard 16x50 used in PCs) that reports framing errors. Depending on how many framing errors you got within a command frame and on how many (correct and incorrect) bytes you received during the Command=low period, you can decide whether this was likely just a standard transmission error (interference etc.) or whether the bitrate was wrong.

 

That doesn't sound like a very reliable method. In the first place, the number of framing errors is not always available. This depends on the platform, UART, FIFO, driver, etc. Sometimes all you get is that there was at least one framing error...

 

with Atari SIO devices you usually have only to switch between 2 bitrates (19200 bit/sec and some high-speed bitrate)

 

That's not always the case. It is true for cases like the USD: you could use only two possible bitrates, and if you used something else, it was your problem. But with SIO2PC applications things are more complicated. You aren't emulating a specific device; you might want to emulate multiple ones at the same time. And ideally you might want to achieve smart emulation and detect what the other side is trying to do.

 

And again, what makes things even more complicated (and unlike any of the other cases you mention) is that some SIO bitrates don't correspond exactly to any PC UART divisor. Some are just on the limit of being readable by the PC UART.
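
To put rough numbers on that (a sketch only: the 16x50 divides a 115200 bit/s base rate by an integer, while POKEY divides the ~1.79 MHz machine clock; the POKEY divisors shown are the commonly cited ones):

```c
#include <stdio.h>

int main(void)
{
    const double pokey_clk = 1789773.0;            /* NTSC machine clock */
    const double pc_base   = 115200.0;             /* 1.8432 MHz / 16    */

    double rates[] = {
        pokey_clk / (2.0 * (40 + 7)),              /* standard: ~19040 bit/s   */
        pokey_clk / (2.0 * (10 + 7)),              /* US Doubler: ~52640 bit/s */
    };

    for (int i = 0; i < 2; i++) {
        double want = rates[i];
        int div = (int)(pc_base / want + 0.5);     /* nearest PC UART divisor */
        if (div < 1) div = 1;
        double got = pc_base / div;
        printf("POKEY %.0f bit/s -> PC divisor %d = %.0f bit/s (%.1f%% off)\n",
               want, div, got, 100.0 * (got - want) / want);
    }
    return 0;
}
```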

 

Lastly, and once more, you are ignoring the fact that Indus drives might already use this signal.

 

Now, I'm not saying that you can't get along without this signal. You can. I'm not saying that you must implement it. You might decide that you don't care about extreme cases, or the Indus case. And you might be right.

 

All I'm saying is that this is something to consider.


I might as well throw in my two cents on the debate :) First off, I agree that just getting rid of signals that aren't used now isn't a good idea, as that limits future hardware. Multiplexing is a neat idea, but remember that you then actually need a circuit board and mux/demux logic as part of your adaptor. My first preference would be to find a way to make original-style SIO cables. I have to admit, though, that if I were to go with some other type of connector, the RJ45 style would be neat, if for no other reason than that the adaptor is simple and small and it doesn't look home-made (it fits right up to the original SIO connector).

Now, when this connector was originally suggested, the CLOCK OUT wasn't included; only 7 signals were. Why not just make it number 8? Also, the fact that I'm using an 8-wire cable doesn't change the fact that my XE has a 13-pin SIO connector, my XL has a 13-pin SIO connector, my Indus has a 13-pin SIO connector, my 810 has a 13-pin SIO connector, etc. The point being: if the extra signals are useful to me, they're still there, and I can still use an original cable. If they're not needed, then I can choose to use the adaptor or not. I only have one SIO cable, and let's face it, they're not becoming any more common. If 8 wires is less than ideal, at least I would still have the choice of a 13-wire cable as well. One final thought: I know I've seen a 10-wire RJ-type connector. While probably not very common, it would accept 8-wire plugs and would give 2 more conductors if/when needed.



Hello,

 

My scheme is simple: pins 1 to 13 on the SIO connector map to pins 1-13 on the DB25. Pins 14-25 have not been determined; most ideas add complexity.

 

Rick D.

 

 

 

There is already a SIO alternative made by a8maestro.com.

It uses a DB25 connector. I have not used it, so I do not know if it's what you are looking for. Also, DB25 switches and parts are probably going to be easier to find than DB15 (in the US, for sure). I do not know about your parts availability in the Czech Republic.

Thanks. I wanted to explore it, but I can't find the wiring scheme for the "EASIO-S SIO adapter cable" on a8maestro's website. :sad:

BTW - Cannon D-sub15 connector availability is good in the Czech Republic. It's a "normal" connector here, just like D-sub9 and D-sub25.

