New project - Pico based SIO device...
4 hours ago, woj said:

One particular parameter was also a bit of a mystery, the CRC checksum calculation constant was defined at 2ms in the SDrive implementation, regardless of the emulated drive. Altirra sources suggest (if I found the right thing) 5136us for 810 and 270us for 1050. Especially the last one seems suspiciously low, and depending on "how" I execute this delay (never mind about the "how") one particular timing result in the tests is straying a bit off from the Altirra reference results. Probably need to recheck that again.

This would have to be computing the SIO additive checksum, not a CRC. The floppy drive controller computes the CRC-16 for free. The only drive that has to compute the CRC-16 manually is the 815.

 

One reason for the discrepancy between the 810 and the 1050 is that the 810 computes the SIO checksum of the sector buffer after reading the sector and before sending the data frame, while the 1050 does it while sending the data frame. Thus, the checksum computation is free on the 1050. This is only possible at standard speeds, thus drives have to switch back to precomputing the checksum for high speed SIO.
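For reference, the SIO additive checksum being discussed here is just an 8-bit sum with the carry folded back in (end-around carry). A minimal C sketch; the function name is mine, not taken from either firmware:

```c
#include <stdint.h>
#include <stddef.h>

/* SIO additive checksum: 8-bit sum with the carry added back in
   (end-around carry), as used on SIO command and data frames. */
static uint8_t sio_checksum(const uint8_t *buf, size_t len)
{
    uint16_t sum = 0;
    for (size_t i = 0; i < len; i++) {
        sum += buf[i];
        sum = (sum & 0xFF) + (sum >> 8);  /* fold the carry back in */
    }
    return (uint8_t)sum;
}
```

The 810 runs this loop over the whole sector buffer before sending the data frame; the 1050 interleaves it with the transmission, as described above.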

 

The other thing to watch out for is precisely what parameter is being specified. In Altirra's case, 5136us isn't just for computing the SIO checksum -- it's for everything between the end of the sector read and the start of the Complete byte. Besides the checksum, the other thing in this time period is that the 810 does a Force Interrupt on the FDC that the 1050 doesn't. The 5136us figure comes from the ~2568 cycles it takes for the 810 rev. C firmware to execute the code path between the two events, which is pretty deterministic given the computation loop for the checksum and the delay loop for the FDC reset.
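As a quick sanity check on that figure, the cycle count lines up if one assumes the 810's MCU runs at 500 kHz, i.e. 2 us per cycle (an assumption on my part, not stated above):

```c
#include <stdint.h>

/* Convert a CPU cycle count to microseconds at a given clock rate.
   2568 cycles at an assumed 500 kHz gives exactly the 5136us figure. */
static uint32_t cycles_to_us(uint32_t cycles, uint32_t clock_hz)
{
    return (uint32_t)((uint64_t)cycles * 1000000u / clock_hz);
}
```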

 


So I have been sorting out the ATX implementation further and got it more or less where I wanted it (though I am not 100% sure it reflects "the reality", however the reality is defined). The most important things are that (a) the relevant ATX tests work as expected, and (b) I learned a great deal about ATX and floppies. Which brings me to the next point: it seems that write support for ATX is low-hanging fruit, at least to a certain extent. My assumption is that I would only allow "in situ" writing, which means no reconstructing of ATX files. But even with this I can take a couple of approaches:

 

1. Allow writing only to clean sectors: status byte == 0 and no duplicates of the sector.

 

2. Keep the one-sector-instance requirement, but assume "bad data, clean overwrite" semantics for sectors with extended data (long or weak), deleted sectors, lost data, or CRC failure: reset the status byte to 0 and rewrite the sector.

 

3. As in 2., but allow for sector duplicates and do a "cleaning overwrite" of all of them. This, however, (a) makes the implementation code a tiny bit more difficult, and (b) with my limited knowledge of floppy controllers, I believe this is not what an actual drive would do. At best it would write to the first sector found, or to the first fully healthy sector found. In fact, even for alternative 2. I am not sure if an actual drive would allow it and whether I should just stick to 1.

 

Opinions? Advice?


2. is correct. If you write to a sector that is bad in any way, the result is a good one.

 

Some disks implement "hardware" protection by physically damaging the disk and checking that exactly this behaviour does not (and cannot) happen.

Apart from this type of application, I have not seen anything try to overwrite a bad sector.

 

I have no idea what happens if the overwritten sector has a duplicate.


When the FDC writes a sector, it reads the matching address field and then switches the head from read to write mode in the middle of gap II before laying down a new data field. This includes a new Data Address Mark. Because the data field is replaced, that means that any error condition stemming from the data field is cleared when the sector is overwritten. This therefore provides guidance for the various error conditions:

  • Missing sector entirely: ignore and time out, as no address field will be matched, and therefore no write will occur
  • Missing data field (sector present in image with RNF bit): replace with clean sector
  • Data CRC error (CRC bit): clear status, since rewritten sector will have a clean CRC
  • Address CRC error (CRC + RNF bits): ignore sector instance during sector scan, and write to the next sector without an address CRC error. Time out with CRC + RNF error if all sector instances have address CRC errors.
  • Deleted sector (record type bit(s)): clear status, since rewritten sector will have a replaced regular Data Address Mark (DAM)
  • Long sector (Lost Data or Lost Data + DRQ): complete write clearing record type / CRC / RNF bits, but keep Lost Data/DRQ bits and zero remaining sector on disk ($FF in decoded data). This still happens because the sector size expected by the FDC is determined by the sector size in the address field, which is not changed by the write. Therefore, the FDC will be expecting more data than the firmware provides, and will start writing $00s when it doesn't get data in time.
  • Phantom sectors: use the first sector instance with valid address CRC after the rotational point where the write command starts, similarly to reads. Other sector instances on the track with the same sector ID are unaffected.

Overlapping sectors should also be invalidated, but that is difficult to practically detect with the ATX format.
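The decision list above can be collapsed into a small dispatch function. The following is a hypothetical sketch: the bit masks follow the WD177x read-sector status layout (which the ATX status byte mirrors, but the exact positions here are my assumption), and the type and function names are invented, not taken from any actual ATX code. The missing-sector case never reaches this function, since no address field is matched at all:

```c
#include <stdint.h>
#include <stdbool.h>

/* Assumed status bits, WD177x read-sector style. */
#define ST_LOST_DATA 0x04  /* lost data (long sector marker)   */
#define ST_CRC       0x08  /* CRC error                        */
#define ST_RNF       0x10  /* record not found                 */
#define ST_DELETED   0x20  /* deleted Data Address Mark        */

typedef enum {
    WR_SKIP_INSTANCE,  /* address CRC error: try the next instance   */
    WR_WRITE_CLEAN,    /* overwrite, clear data-field error bits     */
    WR_WRITE_LONG      /* overwrite, keep lost-data bits, pad rest   */
} write_action_t;

/* Decide what a write does to one sector instance, per the rules above. */
static write_action_t atx_write_action(uint8_t status, bool is_long)
{
    /* CRC + RNF together mean the *address* field CRC is bad, so the
       FDC would never match this instance for writing. */
    if ((status & (ST_CRC | ST_RNF)) == (ST_CRC | ST_RNF))
        return WR_SKIP_INSTANCE;
    if (is_long)
        return WR_WRITE_LONG;
    /* RNF-only, CRC-only, and deleted sectors all come out clean. */
    return WR_WRITE_CLEAN;
}
```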

 

Note that there is a corner case in the ATX format to watch for. Overwriting a sector that had an address field but no data field (RNF) can be problematic because there is no space reserved in the image for the data. Altirra handles this by having two write paths, an incremental update path and a full write path, and when it detects that a write can't be handled incrementally, it forces the entire image to be rewritten on the next flush. You may choose to just punt on this case since writes to protected disks are already rare and writes to sectors with errors are even rarer.

 


22 minutes ago, phaeron said:

Note that there is a corner case in the ATX format to watch for. Overwriting a sector that had an address field but no data field (RNF) can be problematic because there is no space reserved in the image for the data. Altirra handles this by having two write paths, an incremental update path and a full write path, and when it detects that a write can't be handled incrementally, it forces the entire image to be rewritten on the next flush. You may choose to just punt on this case since writes to protected disks are already rare and writes to sectors with errors are even rarer.

By this you mean it relates to your second bullet point / case, correct?

 

Super, thanks! Looks very much doable, especially since I have already implemented my "clean" case 1. and it seems to work. (The one exception: I hit some corner case in the Flash-writing code on the Pico that froze the device, one time only; I think I have resolved that problem by now, and it has nothing to do with ATX.) It just requires a bit more case splitting in the code, it seems. (Well, apart from zeroing out the remaining part of a long sector; my code as it stands is not ready to go beyond the predefined sector size when writing.)


So after carefully reading @phaeron's recipe, and figuring out that anything involving the RNF bit will give me a headache and that I cannot have true 100% write support for the RNF cases, I took a meet-in-the-middle approach. Namely: skip / ignore any sectors with the RNF bit during write attempts; otherwise, find the first angularly matching sector, write the sector data, clean up the CRC and record type bits, and if the sector happens to be long, $FF the data behind the regular sector contents up to the length of the extended sector (checking first that it is in fact longer than the actual sector). Sadly, apart from totally clean ATX sectors I do not really have a test case to check that it all works, but it looks obvious enough to just work. It does work for "clean" ATX images (and I think I identified the problem with the Flash lock-up: I forgot to disable interrupts).
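The long-sector part of this scheme (write the normal sector data, then $FF-fill up to the extended length) can be sketched as follows. Names and buffer layout are illustrative only, not the actual implementation:

```c
#include <stdint.h>
#include <string.h>

/* Store 'sector_len' bytes of written data into the decoded sector
   image and pad the remainder of an extended (long) sector with $FF,
   mirroring what ends up in the decoded data when the FDC keeps
   writing past the data the firmware supplied. */
static void atx_store_long_write(uint8_t *stored, size_t stored_len,
                                 const uint8_t *data, size_t sector_len)
{
    if (sector_len > stored_len)
        sector_len = stored_len;          /* never run past the image */
    memcpy(stored, data, sector_len);
    if (stored_len > sector_len)          /* only if actually longer  */
        memset(stored + sector_len, 0xFF, stored_len - sector_len);
}
```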

 

The last remaining bit of this (an extremely easy fix in the code for either alternative, so I just need a confirmation here) is how the failure should manifest itself. You say "ignore and time out", which I interpret as: no 'C' or 'E' coming from the drive at all. So far I have chosen to send 'E' so as not to stall the process; again, I have no experience to know what a real drive would do. There is more: if the write is with verification, and the write succeeds but the verification fails (extremely unlikely when emulating, but still possible), then it should definitely be 'E' rather than a time-out?

 

 


7 hours ago, woj said:

The last remaining bit of this (an extremely easy fix in the code for either alternative, so I just need a confirmation here) is how the failure should manifest itself. You say "ignore and time out", which I interpret as: no 'C' or 'E' coming from the drive at all. So far I have chosen to send 'E' so as not to stall the process; again, I have no experience to know what a real drive would do. There is more: if the write is with verification, and the write succeeds but the verification fails (extremely unlikely when emulating, but still possible), then it should definitely be 'E' rather than a time-out?

Yes, it should fail out with an Error code after the FDC times out, somewhere between 2-5 revolutions depending on the drive model and firmware (note that this is NOT always the count given in the FDC documentation). It's doubtful that any programs are sensitive to either that or write verification timing; the cases where you would notice it are either in how fast BOOT ERROR appears for an 810 with no disk, or how fast stock DOS 2 is at writing files.

 


Hi @woj,

 

   Have you thought of rolling this project into yours? Edit: also, maybe use a Pico W, so you could support FujiNet-style TNFS server access?

 

I'm assuming you could handle SIO with header pins and an SIO cable, as used in one of the multi-cartridges (I think SUB or AVG, but I don't have either):

 

 

Edited by E474
Added suggestion of using a Pico W for FujiNet style operations.

2 hours ago, E474 said:

Have you thought of rolling this project into yours? Edit: also, maybe use a Pico W, so you could support FujiNet-style TNFS server access?

Well, there are two problems with the XEP80 development. One is that I'd be out of GPIO pins (the device needs to drive the LCD display, full SIO (not only the tx/rx lines), a couple of joystick pins, and an SD card; that pretty much exhausts the available pins). The second problem (though here I have no expertise yet) is the remaining resources to drive the HDMI. For example, writing to Flash requires locking out both cores for a substantial time, which already makes the LCD display a bit unresponsive during write operations.

 

As for TNFS / FujiNet, so far I have managed to stay away from this technology (a bit sceptical towards it, TBH). I may want to look into it, but at least on the GUI side it would flip things upside down a bit. In any case, the initial intent of this project was the fullest possible support for CAS files, and the disk SIO part was added to make it less boring ;) By the time I added ATX write support I realised I was pushing a bit far. My actual plan for now is to add SD card support and then think about packaging it / some carrier board / power protection circuitry.


Hi @woj,

 

   Ah, OK. I made a couple of SDrive-Max units, but usually used the SDrive menu program that loads on the 8-bit rather than the touch screen. I also set up a Raspberry Pi with FujiNet-PC, which just uses a web page for configuration, so no screen there (but I think it's a pretty good setup). Good luck with the project; is it going to be open-sourced at some point, or will it be a commercial release?


17 hours ago, E474 said:

but usually used the SDrive menu program that loads on the 8-bit

Yeah, Atari-based GUIs for some things are another thing I cannot care much for, for some reason... (it's predominantly the multitude of key shortcuts one needs to remember, but also the need to double-boot the Atari).

 

In any case, the source code for this has been available from day 1; I just did not publish the link because it is heavily work-in-progress stuff, something really to be ashamed of so far (one big monolithic main.cpp that badly needs modularising into functions and separate files) and an ugly mixture of C and C++. In the process I realised how rusty and bad I am with C++; trying to refresh my memory did not really go that well. So until I fix all this and complete the implementation (as of today, SD card module support is the only actual thing left to do in terms of functionality) I will not post the link, but you are free to find it yourself ;)


Hi @woj,

 

   I don't mind the 8-bit screen interface for booting things, though I think I had to fiddle with the SDrive-Max's screen sometimes; I haven't done much Atari'ing for a while. I usually hide my development code behind a private repo setting on GitHub, then make it public once it works (?) and is fully commented. I'm quite interested in what your Pico PIO code looks like, but haven't found any repos of yours (though only with a quick Google search).


Right, so I managed to get the SD card interface to work after not too much of a fight, though still not without problems, some of them unsolved. First, the Pimoroni display library is not happy sharing the SPI0 interface with anything, and to use SPI1 I had to do a bit of rewiring: all SPI1 pins are occupied by this or that display function. This is nothing that a carrier board will not solve later on, but it was a bit annoying and some header pin bodging was required.

But even with this, something is holding back the SD card interface, either something in my program or some internal API interaction; it is unbearably slow. I know that an SD card cannot be expected to be as fast as the internal Flash, but it is orders of magnitude slower than the SDrive-MAX, which has to deal with all of this on a small AVR while driving the LCD display (on the same SPI bus, I presume) at the same time. What's worse, if I browse the files while the SIO part is reading them for the Atari transfer, things actually speed up! (Another remote possibility is that it has something to do with the Pico 2 GPIO pin-holding bug; I have to check that too. Should that be the case, the Pico 2 will probably end up in the trash.)


OK, made it better: I had to bring the SPI baud rate down to 2-2.5 Mbit, which stops the thing from choking and gives more decent read speeds. But it is still slow, and in particular it messes up the whole ATX timing due to delays caused by SD card reading. This is still not right, something is off; the baud rates in the library that I used are in the range of 12.5 Mbit. I wish I knew more about SPI and Pico clock intricacies, I am probably missing something obvious...


I got on top of things, software-wise :D The issue I described was not with SPI speeds; there were some weird unwanted DMA interrupt interactions and semaphore lock-ups in the SD card library I used. After some investigation it turned out the interrupt was entirely unnecessary (which the original author also figured out in his beefed-up spin-off library). With that I got the SPI speed up to 32.5 Mbps, not that this much is needed; everything works perfectly at 12.5. What did become an issue is that the ATX code is sensitive to transfer speeds, and one part of the routine had to be rewritten. Essentially, when reading the disk data from Flash (or RAM) the read time can be neglected in calculating the necessary delays; with SD cards, not anymore. Reading one sector's worth of data from Flash takes 100-200us, from an SD card 1000-3000us depending on the baud rate. But that has been taken care of too. (Interestingly, but not really surprisingly, the write times are at the other end: writing to Flash takes substantially longer than to an SD card.)
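The delay fix amounts to subtracting the measured media read time from the target rotational delay instead of treating the read as free. A minimal sketch with invented names, using the timings quoted above:

```c
#include <stdint.h>

/* ATX rotational delay with the media read time taken out: when the
   sector comes from an SD card (1000-3000 us here) rather than Flash
   (100-200 us), the read time can no longer be neglected. */
static uint32_t atx_delay_us(uint32_t target_delay_us, uint32_t read_time_us)
{
    if (read_time_us >= target_delay_us)
        return 0;                   /* the read already overshot the slot */
    return target_delay_us - read_time_us;
}
```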

 

What is really nice now is that the FatFs library is flexible enough that I can have both the internal Flash drive and the SD card mounted at the same time and use Atari disk or cassette files from both simultaneously.

 

The next steps are on the hardware side. I need to measure the power consumption of the whole device; the figures for the Pico on the net suggest that I may get results below 50mA (though I am sure the display is taking a looot), which would mean I could power this directly from the Atari, we will see. Then a carrier board. And a lot of testing: today I think I saw a concurrency-related lock-up, but I did not pay too much attention to it as I was trying to solve another problem (apparently not all XEX files I have are correctly formed; some have garbage data behind the last proper load block, and that did not go well with my XEX loader).


18 hours ago, woj said:

(apparently not all XEX files I have are correctly formed, some have some garbage data behind the last proper load block and that did not go well with my XEX loader). 

Absolutely; I did a small analysis of the files in one of the atarionline.pl libraries: https://atarionline.pl/forum/comments.php?DiscussionID=6097&page=1#Item_47

the file invalid_xexes.txt lists bad xexes I found. Unnecessary data after a RUN statement is the most common offender.

Edited by pirx

So I am after some electrical advice; I am really not that good on this front. I measured the power draw of the complete device with everything hooked up and the display at full brightness, and read 75mA. At first I thought, OK, I read somewhere (presumably De Re SDrive) that the +5V on the SIO socket is rated at 50mA, so I should probably skip the option to power the device directly from the Atari. At least powering the SDrive from the Atari did not work for me, the LCD usually crapped itself, though I am not sure what the power draw was there. So I connected the whole thing to Ready+5V and unplugged the USB from the Pico. It all ran very happily, so apparently I should attempt to support powering it from the Atari on the carrier board. (I actually find it strange that the +5V SIO line is rated so low: for one, it does let the cassette player operate, and second, it is connected directly to the power rail inside the Atari. This all really confuses me.)

 

So, to the first point: I need a protection circuit so the USB and Atari power sources do not collide. @mytek has proposed a circuit in the Pico Cart thread; the snapshot of interest is attached below. Question 1: would that (just this snapshot) be sufficient protection in this scenario? (See also below.) Question 2: considering the power is now from the SIO socket, not the cart port, is the BAT48 diode still a valid choice? (The spec says 40V, 350mA; especially the amperage sounds low to me.)

 

The second point: I was wondering how safe the USB-powered device is, connected to all sorts of pins on the Atari SIO and joystick 2 port, when the Atari is not powered on (something that was discussed at length in the Pico Cart thread, but here we are not talking about connections directly to data or address lines). Is there anything I need to worry about there? In fact, I very often have it powered up with the Atari off, and so far both my Ataris are fine. Just to know what I am dealing with, I measured the voltages on all pins that are connected to the Atari after the device sets up its GPIO pin configuration and then goes idle (and it will always stay idle on the pins until the Atari wants something from it). I got these:

 

SIO motor - 0V (this one is pulled down externally due to the silicon bug in the Pico 2; I had it pulled down in software before).

SIO command - 0.30V (even though it is pulled up by Pico).

SIO data in - 0.03V (this one is connected through the 1N5819 diode like the suggestion for SDrive)

SIO data out - 0.02V

SIO proceed - 0.22V

SIO interrupt - 0.02V

 

Joystick pin 1 - 0V

Joystick pin 3 - 3.24V (this one is also pulled up by Pico)

Joystick pin 4 - 0V

 

Apart from pull-ups / pull-downs, all these are connected directly to the Pico's GPIO pins. (BTW, I would attach schematics for all this, but I still work on them...)

 

Any input is highly appreciated. 

 

Screenshot from 2024-09-23 20-47-10.png


Good developments. First, I came across the section on externally powering the Pico in the datasheet (who would have guessed to look in there 😉). One method is indeed the Schottky diode (no pull-downs necessary; that is already on the board from what I understand). The other is a P-channel MOSFET, with the added value in this particular situation that the USB power source has priority over the other one, so plugging it in always takes over, rather than choosing the source based on voltage levels as in the first solution. MOSFET it will be, then.

 

Also, drawing the carrier board schematics I realized I still have one GPIO pin available that I was missing so badly for the SD card detection signal. Tested it right away and the immediate added value is that booting does not hesitate for even one split millisecond when the card is not there. In the longer run it would also allow for SD card hot-swapping, but this has some deeper implications on the software, will have to see about that.


Oh for ... sake, I have to state this for the record: Pico2 is effectively unusable! I am not sure what is going on with the gpio pin states on this thing, but I just spent 3 hours chasing a bug that was again not there, I changed to Pico 1 and suddenly all my headaches were gone. And yes, I am quite sure it has something to do with a "lagging" pin state. 


Got the Pico 2 back to working order. It was actually something other than I thought, but I think still related in a way to the Pico 2 GPIO pin problem. In short, the problems started when I hooked everything up to it: all the SIO connections, the joystick 2 port connections, the SD card, etc. At first I thought I only had problems with one particular type of turbo CAS files when read from the SD card and that it was a mistake in my coding, but then I realized that all SD accesses had got flaky. It turned out that the default pin drive strength was too high and (probably) interacted with the leaking GPIO current (which is apparently the essence of the Pico 2 problem); when I brought it down to 2mA for the SD card interface, it all got back to normal, uff...

While at it, I made a small transistor-based NOT gate to fix the problem with the one signal (SIO motor) that was problematic for the Pico 2, and based on that I designed the carrier board that is now on its way to me (it will still be at least a week). Even though this solves the problem for the Pico 2, one small glitch remains: the motor line gets read as active for the blink of an eye when the Atari is powered down. At first I thought this was a missing pull-down resistor, but experimenting with one did not solve the problem; it is some other electrical issue / characteristic of the Atari PIA that I will probably never grasp. Functionally it does not break anything, it's more of an itch that I could not scratch. Related to this, the joystick pins also get pulled down to read 0 when the Atari is powered off, even though they are pulled up internally on the Pico (as I have read, the internal pull resistors are rather weak). I may have to look into this a bit more later on.

 

Otherwise, I also coded in the hot swapping of the SD cards :) seems to work but needs a bit more testing.

Screenshot from 2024-09-29 21-49-32.png


I have been polishing the code a bit more. One of the last things to check was how my program deals with non-basic-ASCII characters in file names and, easily guessed, that opened another can of worms that I have now managed to put the lid back on. It took me a bit of reading about how short and long file names are treated in FAT32, the code page encodings, etc. The FatFs implementation widely used on Picos puts some hindrance here, with all its support for going back and forth between the FAT32 "native" UTF-16 encoding, the code page encoding, and the application's internal string representation. All in all, the "to" and "from" methods do not entirely match, in combination with what some OSes (Linux most notably) do with file names when recording them in the file system, and I had to devise a small workaround. That was only possible because I do not create arbitrarily named new files on the drive, only new DISKxxxx.ATR images, so some shortcuts could be taken. As much as my program now works nicely, it ended up with a slightly butchered FatFs library 😕 In the meantime the carrier boards arrived from JLC; they just need to be picked up from the post office and soldered up, and that is the plan for the evening and the weekend.


Ha, so Włóczykij was the game that broke it and triggered my workarounds :) Concretely, it is the letter ó that my Linux translates to hex $A2, which happens to be ó on code page 437. It ends up in the short name, which is the problem: when FatFs is given that name back to read the file, it goes through some conversions and creates something that exists neither as a short name nor as a long one, and file opening fails. More interestingly, it is the only Polish letter out of 18 (including lower and upper case) that trips this up. Even more interestingly, when I copied the file using Windows all worked fine; this time the OS decided to convert ó to a capital O, which did not trigger any weird back-translations.
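For the curious, the byte in question decodes like this under code page 437. This is a toy lookup for illustration only, covering just a few code points; in a real build FatFs does the conversion itself via its OEM-to-Unicode table, assuming the configured OEM code page is 437:

```c
#include <stdint.h>

/* Decode a CP437 byte to a Unicode code point; only a handful of the
   accented-vowel entries are mapped here, everything else unknown
   above ASCII becomes '?' (as the display code below also does). */
static uint16_t cp437_to_unicode(uint8_t c)
{
    if (c < 0x80) return c;           /* ASCII passes through */
    switch (c) {
    case 0xA0: return 0x00E1;         /* á */
    case 0xA1: return 0x00ED;         /* í */
    case 0xA2: return 0x00F3;         /* ó -- the byte that tripped it */
    case 0xA3: return 0x00FA;         /* ú */
    default:   return 0x003F;         /* '?' */
    }
}
```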

 

Internally for file opening everything now works with short file names, externally (user display) long file names are used, but since I can only display ASCII really, all non ASCII characters are displayed as ?. Without testing (but I will of course) I can already tell you that the Arabic examples will display as ???????~.atx, but they will display and open, which was not that certain before I fixed things.


Got the first board to work, but not without problems: the SD card detection signal does not work and does weird things, electrically I mean. When I turn it off in the program, the card is alive and working. I haven't been able to trace down the problem; it could be a routing problem, a soldering problem, or (I know this can happen too) a board manufacturing problem.

