Posts posted by mrvan
-
5 hours ago, mizapf said:
Maybe also consider checking the CHD after using TIImageTool. And as always, backups are never a bad idea.
That's actually a great idea. I already have a script for it, so will do. Thanks!
-
@mizapf Thanks for the code update suggestions. I've added them and verified everything works. I'm calling the script just before starting MAME and right after MAME concludes. I'm testing the return value and playing the sports airhorn sound if the return is non-zero. I wonder how long it'll be until I hear the horn...hopefully sooner rather than later. I'll let you know the details once/if it happens.
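For reference, here's roughly what my wrapper looks like, as a minimal sketch; the paths, the checker name, the sound file, and the MAME invocation are placeholders for my setup, and afplay is the stock macOS player:
#!/usr/bin/env python3
# Hypothetical wrapper: verify the CHD before and after a MAME session.
import subprocess
import sys

IMAGE = "/Users/marko/Documents/ti99/dsk/hd/hd1.hd"      # placeholder path
CHECKER = "/Users/marko/Documents/ti99/bin/chd_ckhunks"  # placeholder path

def check(label):
    # Run the hunk checker; sound the airhorn on a non-zero exit status.
    result = subprocess.run([CHECKER, IMAGE])
    if result.returncode != 0:
        print("CHD check failed (" + label + ")!", file=sys.stderr)
        subprocess.run(["afplay", "airhorn.aiff"])       # placeholder sound

check("before MAME")
subprocess.run(["mame", "ti99_4a"])                      # placeholder invocation
check("after MAME")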
-
6 hours ago, mizapf said:
One hunk is 4096 bytes long by default; see also the output of chdman. Note that hunks are not tracks; in our case, they are half a track, but they are not related to the geometry. They are simply an allocation unit inside the CHD file.
I checked my complete stock of HD image files with the "find" line from above, and it is only that one image from you which has this error.
As for the return value, you should simply add exit(n) at the end (indented like the "if len(sys.argv)<2"). I'd set n=0 as the OK case at the start, and n=1 once we have an error.
Concerning the byte order, the CHD files use big endian. Interesting though that your Python demands the argument; it should assume big by default (see https://docs.python.org/3/library/stdtypes.html#int.from_bytes).
Cool, thank you. I'll make the mods and integrate them into my tool chain. I look forward to the final solution. At least it seems this is rare.
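For the record, this is the kind of result I have in mind after those mods (my sketch based on mizapf's script quoted below, not his final code): explicit big-endian reads for Pythons before 3.11, an OK message, and an exit status a wrapper can test:
#!/usr/bin/env python3
# Sketch of the adapted checker (my adaptation, not the final version).
import sys

def be(b):
    # CHD header fields and map entries are big-endian
    return int.from_bytes(b, byteorder="big")

status = 1                                   # pessimistic default
if len(sys.argv) < 2:
    print("Syntax: checkhunks.py <file>")
else:
    print("Checking", sys.argv[1])
    with open(sys.argv[1], "rb") as f:
        f.seek(12)
        if be(f.read(4)) == 5:
            f.seek(32)
            length = be(f.read(8))           # logical size of the image
            mapoff = be(f.read(8))           # offset of the hunk map
            f.seek(56)
            hunkbytes = be(f.read(4))
            hunkcount = length // hunkbytes
            f.seek(mapoff)
            used = [0] * 65536 * 8
            for _ in range(hunkcount):
                hunkadd = be(f.read(4))
                if hunkadd != 0:
                    used[hunkadd] += 1
            status = 0
            for i in range(hunkcount):
                if used[i] > 1:
                    print("- Hunk", hex(i), "multiply linked")
                    status = 1
            if status == 0:
                print("OK")
        else:
            print("- Not a version 5 CHD")
sys.exit(status)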
-
16 hours ago, mizapf said:
Still trying to get some practice in Python...
Maybe this can be helpful.
#!/usr/bin/env python3
#import ...
if __name__ == "__main__":
    import sys
    if len(sys.argv) < 2:
        print("Syntax: checkhunks.py <file>")
    else:
        print("Checking", sys.argv[1])
        f = open(sys.argv[1], mode="rb")
        f.seek(12)
        if (int.from_bytes(f.read(4))==5):
            f.seek(32,0)
            length = int.from_bytes(f.read(8))
            mapoff = int.from_bytes(f.read(8))
            f.seek(56,0)
            hunkbytes = int.from_bytes(f.read(4))
            hunkcount = int(length / hunkbytes)
            # print("Hunk count:", hunkcount)
            f.seek(mapoff, 0)
            used = [0] * 65536 * 8
            for i in range(hunkcount):
                hunkadd = int.from_bytes(f.read(4))
                if (hunkadd != 0):
                    # print("Hunk", i, "at", hex(hunkadd*hunkbytes))
                    used[hunkadd] = used[hunkadd]+1
            f.close()
            for i in range(hunkcount):
                if (used[i] > 1):
                    print("- Hunk", hex(i), "multiply linked")
        else:
            print("- Not a version 5 CHD")
[Edit: A bit refined; checks for the CHD version (please upgrade any v4 CHD to v5). For now, I set the maximum hunk address to 65536*8 (for 8 partitions), but this needs some more thought, since the hunk number is not necessarily equal to the hunk offset / 4096 inside the CHD file.]
This is my bulk check:
find ~/mame/disks -name "*.hd" -exec ~/src/python/checkhunks.py {} \;
OK, that is some awesome work--I very much appreciate it. I feel confident now we'll be able to pinpoint the root cause.
I ran this script, slightly modified, and received the following, with the hdone.hd file showing the multiple links error.
[marko@napili ~]$ cd Documents/ti99/dsk/hd/
[marko@napili hd]$ ls
hd1.hd hd1test.hd hdone.hd hdtwo.hd
[marko@napili hd]$ chd_ckhunks hd1.hd
Checking hd1.hd
[marko@napili hd]$ chd_ckhunks hd1test.hd
Checking hd1test.hd
[marko@napili hd]$ chd_ckhunks hdone.hd
Checking hdone.hd
- Hunk 0x157 multiply linked
- Hunk 0x158 multiply linked
- Hunk 0x159 multiply linked
- Hunk 0x15a multiply linked
[marko@napili hd]$ chd_ckhunks hdtwo.hd
Checking hdtwo.hd
Considering the analysis I did last week where 8 KB was damaged, I take it that a hunk is 2 KB long?
I don't know Python well at all, but when I ran your unmodified program, I received the following error, for which I found a quick hack on Stack Exchange:
Checking /Users/marko/Documents/ti99/dsk/hd/hdone.hd
Traceback (most recent call last):
File "/Users/marko/Documents/ti99/bin/chd_ckhunks", line 13, in <module>
if (int.from_bytes(f.read(4),little)==5):
TypeError: from_bytes() missing required argument 'byteorder' (pos 2)
Adding 'big' to the int.from_bytes() call took care of it. I wasn't sure whether big, little, or something else was the right choice, although 'big' worked. (The byte order here describes the file's data rather than the host; my M2 MacBook Pro is actually little-endian.)
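A quick illustration of what the argument means (the CHD header fields are big-endian, as mizapf notes above):
# byteorder describes the bytes being decoded, not the host CPU.
# Python 3.11+ defaults to byteorder="big"; earlier versions require it.
raw = b"\x00\x00\x00\x05"                        # e.g. a CHD version field
print(int.from_bytes(raw, byteorder="big"))      # 5
print(int.from_bytes(raw, byteorder="little"))   # 83886080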
A couple of suggested improvements would help, at least the first one:
- Return a number indicating whether the file is OK or damaged. That way I can integrate the script cleanly and have it grab my attention on failure, maybe by playing an airhorn sound 🙂. I have a funny feeling it may take a while before I see this error again.
- Output an OK message when the file is fine.
Again, excellent and thank you!
-
13 hours ago, mizapf said:
OK, this is almost certainly a defect in the CHD file (hdone.hd). Take TIImageTool, use the sector editor in Tools, open sector 262176. This is record 1024 in the b2 file. Apply some change, e.g. change the first byte to FF. Save changes, reopen the image in the sector editor. Check the contents of sector 262144 (which is the above sector - 32). You see that it exactly mirrors the change.
CHD files contain a map of "hunks" (CHD = Compressed Hunks of Data) which contain 16 sectors each (4096 bytes). The map is a list of 32-bit integers which are the offset in the file, divided by the hunk size. Hunks that are filled with 0 are not allocated at all, hence the small size of an empty hard disk image (containing 00000000 as a pointer).
For a documentation, see the file MameCHDFormat.java in TIImageTool: https://github.com/mizapf/tiimagetool/blob/b43fce75aa4cf55fe0b62d5fe0c085a7cad18262/src/de/mizapf/timt/files/MameCHDFormat.java
It seems to me that the hunk pointers in the map point to the same location in the data region of the CHD file. When you change a sector in one of those hunks, the change will also be visible in the other hunk because both share their data.
The question is how this could have happened. Is it a flaw in MAME or in TIImageTool?
This seems like it's going to be a difficult question to answer. An integrity check of the CHD files at MAME initialization and just prior to termination could help. I suppose it could even be an external check program. If there were such a program, I'd happily add it to my MAME script and could report the answer at the first occurrence of the issue.
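To make sure I understand the layout you quoted, here is my own illustration of the lookup (assuming an uncompressed v5 map; mapoff is the map offset from the header). If two map entries held the same value, the two hunks would mirror each other exactly as you describe:
# Locate the file offset of a sector's data through the CHD hunk map
# (illustration only; assumes an uncompressed v5 CHD with 4096-byte hunks
# and 256-byte sectors, i.e. 16 sectors per hunk).
HUNK_BYTES = 4096
SECTOR_BYTES = 256

def sector_data_offset(f, mapoff, sector):
    # Which hunk holds the sector, and where inside that hunk it lies
    hunk, within = divmod(sector * SECTOR_BYTES, HUNK_BYTES)
    f.seek(mapoff + 4 * hunk)                # one 32-bit map entry per hunk
    entry = int.from_bytes(f.read(4), byteorder="big")
    if entry == 0:
        return None                          # hunk was never allocated
    return entry * HUNK_BYTES + within       # entry = data offset / hunk size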
-
The image was initially created with TI Image Tool, and most writing to it was done with the same tool. Most of my work is developing an OS and related libraries. The most common writes in MAME are short text files that describe a process (path, parameters, redirection for stdin/out/err), so usually fewer than 10 lines. The others are redirected stdout/stdin and pipes. So it is a mix.
I agree about the image; I've already created a new HD image that I'm using (did so with TI Image Tool). Luckily, 99% of my files are already on my Mac, stored in an image directory that I can copy over to an HD image. Eventually I hope to auto-generate the HD image.
-
@mizapf, it looks like you are hot on the trail.
If it matters, I'm currently using mame0261 but I suppose it's possible the root cause could have been introduced by an older version, if MAME at all. I generally keep up with the versions.
-
4 hours ago, mizapf said:
I think this works. In fact, I wrote an assembly language program that creates a file on WDS1 with a given record count. Records are structured as "AAAA RecNo 00 ... 00 5555". RecNo is a 16-bit number starting at 0000.
The program creates the records on the hard disk, then reads them again, checking whether they look as expected and contain the correct number.
This is a somewhat crude design and may certainly be refined as desired.
* Record write/read test (TI-99/4A)
       DEF  START
       REF  DSRLNK,VMBW
LOOPS  DATA 100
PABLOC EQU  >1000
PAB    DATA >0009
BUFFER DATA >1100
       DATA >FFFF
RECNO  DATA >0000
       DATA >000A
       TEXT 'WDS1.FILE1'
       EVEN
OPENC  EQU  0
CLOSEC EQU  1
READC  EQU  2
WRITEC EQU  3
WRECT  TEXT 'WRITE REC >'
       TEXT 'xxxx'
       EVEN
CRECT  TEXT 'CHECK REC >'
       TEXT 'xxxx: '
OKT    TEXT 'OK   '
ERRT   TEXT 'ERROR'
HEX    TEXT '0123456789ABCDEF'
START  BL   @DSR
       DATA OPENC
       CLR  R10
WRLOOP MOV  R10,@RECNO
       BL   @DEFREC
       BL   @SHOW
       DATA WRECT
       BL   @DSR
       DATA WRITEC
       INC  R10
       C    R10,@LOOPS
       JL   WRLOOP
       BL   @DSR
       DATA 1
       BL   @DSR
       DATA OPENC
       CLR  R5
       CLR  R10
RDLOOP MOV  R10,@RECNO
       BL   @SHOW
       DATA CRECT
       BL   @DSR
       DATA READC
       BL   @CHECK
       MOV  R5,R5
       JNE  STOP
       INC  R10
       C    R10,@LOOPS
       JL   RDLOOP
STOP   BL   @DSR
       DATA 1
       LIMI 2
       JMP  $
DSR    MOV  *R11+,R0
       SLA  R0,8
       MOVB R0,@PAB
       LI   R0,PABLOC
       LI   R1,PAB
       LI   R2,20
       BLWP @VMBW
       LI   R0,PABLOC+9
       MOV  R0,@>8356
       BLWP @DSRLNK
       DATA 8
* JEQ PROB
       RT
SHOW   MOV  *R11,R1
       MOV  R10,R8
       AI   R1,14
       LI   R2,4
HEXL   MOV  R8,R7
       ANDI R7,>000F
       MOVB @HEX(R7),*R1
       DEC  R1
       SRL  R8,4
       DEC  R2
       JNE  HEXL
       LI   R0,72
       MOV  *R11+,R1
       LI   R2,15
       BLWP @VMBW
       RT
* Check the read record
CHECK  MOV  @BUFFER,R0
       SWPB R0
       MOVB R0,@>8C02
       SWPB R0
       MOVB R0,@>8C02
       NOP
       MOVB @>8800,R1
       SRC  R1,8
       MOVB @>8800,R1
       SRC  R1,8
       CI   R1,>AAAA
       JNE  FAIL
       MOVB @>8800,R1
       SRC  R1,8
       MOVB @>8800,R1
       SRC  R1,8
       C    R1,R10
       JNE  FAIL
       LI   R2,249
C1     MOVB @>8800,R1
       JNE  FAIL
       DEC  R2
       JNE  C1
       MOVB @>8800,R1
       SRC  R1,8
       MOVB @>8800,R1
       SRC  R1,8
       CI   R1,>5555
       JNE  FAIL
* All is OK
       LI   R1,OKT
       JMP  CHECKP
FAIL   SETO R5
       LI   R1,ERRT
CHECKP LI   R2,5
       LI   R0,89
       BLWP @VMBW
       RT
DEFREC MOV  @BUFFER,R0
       ORI  R0,>4000
       SWPB R0
       MOVB R0,@>8C02
       SWPB R0
       MOVB R0,@>8C02
       NOP
       LI   R1,>AA00
       MOVB R1,@>8C00
       MOVB R1,@>8C00
       MOV  @RECNO,R1
       MOVB R1,@>8C00
       SWPB R1
       MOVB R1,@>8C00
       LI   R2,249
       CLR  R1
DR1    MOVB R1,@>8C00
       DEC  R2
       JNE  DR1
       LI   R1,>5500
       MOVB R1,@>8C00
       MOVB R1,@>8C00
       RT
       END
If you delete the b2 file and then run your program in my image, does that work right?
-
7 hours ago, InsaneMultitasker said:
Using MAME and your bad hard drive image, I ran a test with the HFDC DSR. In XB, I wrote to "b2old" records 991 to 1056, read them back, and everything looked correct. I also tested a similar approach with the Geneve ABASIC and SECTOR ONE (sector editor) to no avail. I was not able to reproduce the problem. MAME logging with a system/setup that can reproduce the problem seems necessary at this point?
I'm thinking the recorded allocations, etc. were correct, so if I'm right, overwriting should have worked.
-
4 hours ago, mizapf said:
It should make it easy to change the iteration count and to rebuild.
Unfortunately, my 9900 ASM skills are insufficient for this task. But yes, I'm sure XB is much slower. When I ran it within MAME, I ran at full speed, which was 8-10x real time. It still takes a good hour or so.
-
3 hours ago, mizapf said:
This literally takes hours in Extended Basic ... would one of you perhaps like to port that to assembly language? I'm currently a bit busy, otherwise I'd have done it on my own.
Is your desire for compiled code specifically an ASM version, or would a GCC-built version work?
-
20 minutes ago, mizapf said:
In a first run of 1100 iterations, no issue occurred, so this happens later. I'll let it run for the 16384 loops now.
Hmmm…well, I hope you see the behavior.
-
2 hours ago, mizapf said:
BTW, I noticed you are using all lower-case file names. I guess you know that uppercase is the default in TI systems for virtually everything; just a short word of caution, so you don't stumble over some freak case issue.
Well, yes, I do. I'm not fond of uppercase names. I'll bend if you or others know of gotchas. One could probably argue similarly about the use of directories.
-
3 hours ago, mizapf said:
I'd like to recreate this. Would you like to share your program that you used, or could this be done with a simple BASIC program, writing those 1024 records to a D/V 255 file?
Within the hdone image in my last post is the TI XB program 'test' that does this. It actually writes 16K records and reads them back, failing at the read of the 1024th.
Thanks for being willing to continue looking into this.
-
I've been analyzing the defective hard drive image, having extracted the raw data using the chdman tool.
The file in question, b2 in the root directory, has two fragments covering sectors 261136-261279 and 261296-277551. Nothing wrong there.
The file is corrupted at the 1024th record, which is 1024 * 256 bytes into the file's image (256 = 255 bytes of record data + 1 length byte). Since each record contains the record number and some filler data, I can see that it holds the data for record #992. The subsequent 31 records contain #993, ..., #1023. So 32 records are wrong, but they are an EXACT copy of the records in their rightful positions, records 992-1023. 32 records * 256 bytes = 8 KB. The missing records, #1024-1055, are not contained in the file b2 at all.
As a spot check, I also looked at records 2047-2048, but they are fine.
The damaged data at 0x4002000 is an exact copy of the data at 0x4000000; both addresses are multiples of 8K, if that matters.
I don't know whether the emulated hardware has 8 KB of RAM for caching. If it doesn't, then it seems neither it nor the DSR could cause this "perfect" 8 KB copy. Perhaps this points to a MAME bug?
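For anyone who wants to reproduce the check, a small sketch against the extracted raw image (disk_extract.bin from the attachments below, after gunzip); the fragment layout is taken from above, and "one record per sector" is my assumption from the 256-byte record size:
# Verify that records 1024-1055 exactly mirror records 992-1023 in the
# extracted image. Assumes 256-byte sectors, one record per sector, and
# the two fragments listed above.
SEC = 256

def rec_offset(n):
    # records 0-143 live in sectors 261136-261279, the rest from sector 261296
    sector = 261136 + n if n < 144 else 261296 + (n - 144)
    return sector * SEC

with open("disk_extract.bin", "rb") as f:
    f.seek(rec_offset(992))
    original = f.read(32 * SEC)              # records 992..1023
    f.seek(rec_offset(1024))
    copy = f.read(32 * SEC)                  # records 1024..1055
print("exact 8 KB copy" if original == copy else "blocks differ")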
I've attached the disk image itself, my analysis to find addresses and data for the records in the image (excel file), the extracted disk image, and the hex dump of the disk image.
hdone.hd.gz disk extract analysis.xlsx disk_extract_hexdump.txt.gz disk_extract.bin.gz
-
2 hours ago, mizapf said:
Can you reproduce the above effect, that is, copy a file to a hard disk image where it is allocated at AU 6? (That is, it would ignore the reserved space.)
Sure, please find the attached HD image.
-
21 minutes ago, mizapf said:
But we still don't know how this happened.
Sorry, mizapf, I assume you meant what caused the corruption for which I opened the thread, not the specific test I described, right? If so, I/we don't know.
-
On 1/19/2024 at 8:55 PM, Ksarul said:
Yes--but that required the Video Interface, which is one of the rarest of the original Hexbus peripherals out there. I know of about five of them in the wild (and yes, I do have one of them).
That video interface would sure be nice to have. Lucky you.
-
On 1/19/2024 at 8:56 AM, FarmerPotato said:
Back in the day, I took the CC-40 on car trips. I was always writing tiny game programs, what you would now call rogue-like except starring the Yen symbol vs the other chars.
David Ahl's "More BASIC Computer Games" was a good source for character-based programs, like Camel, Hammurabi, Deepspace. One in particular, Horse I think, needed only one line of display in between entering your move toward the Horse; that kind of thing could translate well.
I have a lot to learn about such games.
-
On 1/19/2024 at 7:51 AM, pixelpedant said:
The 18K isn't a custom mod. It's just a rarer version with more RAM. It's original hardware.
Here's a fun little demo I wrote up a while back to run on your new CC-40, if you feel like it:
10 FOR C=0 TO 6
20 CALL CHAR(C,RPT$("1F",C)&"0E04")
30 NEXT C
40 FOR C=6 TO -5 STEP -1
50 F$=F$&CHR$(ABS(C))
60 NEXT C
70 FOR P=1 TO 12
80 DISPLAY AT(1),SEG$(RPT$(F$,4),P,31)
90 NEXT P
100 GOTO 70
Thank you for this, that's neat. I admit disappointment that CALL CHAR is limited to just a small number of characters. I had imagined being able to redefine any of them.
-
On 1/22/2024 at 8:59 PM, InsaneMultitasker said:
This may be normal or it may be a symptom of what looked/s like a problem with emulation and/or DSR support beyond 8 heads. The HFDC DSR reserves some space for FDRs (file descriptor records) at the start of the disk, I don't know the calculation offhand. If I understand what you have stated, there are files written to the disk well before the large file and before the observed gap. You'd have to check whether there is a smaller, reserved space between the first FDR and the first file written. This is easy enough to test with a blank image: create a file, look for its FDR, and look for the first used AU of that file. Determine how many AUs are in between. I think the maximum is >FF (256) reserved AUs, though the HFDC may use a multiplier to reserve more.
For new files (and re-created files), file allocation starts with the first available AU beyond the reserved space. However, in the case of an existing file that is being updated, the DSR allocation process should scan for open AUs beyond the first currently-allocated clusters for the file, to avoid fracturing files in a reverse cluster order. I don't remember what happens if space is open between clusters - do the fragments fill in the gap, or does the DSR scan to the last cluster? I haven't deep-dived that in a long time. Anyway, writing 'deep' into the disk is what I observed on your 'bad' image, with a skip of approximately 1/8th of total allocation. This suggests a bug in the DSR or MAME that only more in-depth debugging and logging can confirm. You could create a disk with 8 or fewer heads to see if the large gap diminishes significantly; that might be a clue. We know your 'bad' image reproduces the problem, and that may be a factor of how much space was already used on the image by the time you wrote your 'bad' file.
Note: Not that it matters here but I seem to recall that TI ImageTool ignores the reserved space when copying files to an image.
I should have read your last statement, that TI ImageTool ignores the reserved space when copying files to an image. That's what was confusing me...I was copying my test program with TI Image Tool and then running it, and the resulting large file was being written much further down the disk, exactly beyond the reserved space, as it should be.
My regular workflow is to copy files via TI Image Tool, so most of the files written to my disk are IN the reserved area. What is the reserved area used for? Obviously for something...maybe something very important?
From a blank disk, using TI Image Tool to copy my test program, I end up with this allocation:
Sector 80 = FIB of file test, occupying interval [96..111]
The AU used is 96/16 = 6, which is very early in the disk, near the beginning of the reserved area.
I don't know if any of this matters...but I'm interested in the reserved area purpose.
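For my own notes, the arithmetic behind that (16 sectors per AU is my assumption for this image size):
# AU arithmetic for the allocation above (assumes 16 sectors per AU):
SECTORS_PER_AU = 16
first, last = 96, 111                 # sector interval of the file 'test'
print(first // SECTORS_PER_AU)        # -> 6, the first AU used
print(last // SECTORS_PER_AU)         # -> 6, same AU: the file fits in one AU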
-
I've been chasing this problem for a while now and still don't have a truly definitive answer.
I've created three different hard drive files, one being the same geometry as the one that was leading to corrupted files.
None of these produce corrupted files. In the smaller two hard drive files, I've been able to generate multiple files and then delete some to create fragmented free regions, to ultimately force the creation of fragmented files during writing. This has behaved as expected, resulting in fragmented files, but no corruption.
With the hard drive file of the same geometry, the behavior is different: I can generate the same conditions, but rather than being written into the fragmented free regions, the large file gets written deep into the disk, beyond the last used AUs. It seems clear the allocation algorithm is not as simplistic as I had surmised (first unallocated AU used, and so on). What is the algorithm?
-
FYI, I'm still working on this as I can. Work has been hectic, and the weekends (last and this) involve a lot of travel and sightseeing.
I've not found the same problem in a freshly generated HD image with the same geometry. I plan to try many more times, permuting what I do. But it could be that something was/is defective in the HD image I was having trouble with.
-
I've not yet found any configuration options other than cartridges. Are there any to support the 18 KB upgrade done within the unit, or external storage? I've preordered the HEX-TI-r HEXBUS disk drive from ArcadeShopper, so I'll eventually have a storage device. I'm hoping to write software first on my Mac and then transfer it into MAME, followed by the hardware.
-
Keyboard stopped working! Ideas on a fix?
in TI-99/4A Computers
Posted
I feel for you. My console did the same thing a few months ago. I replaced the 9901 and all is well. It seems they may be easy to kill. I touched my console during a period of low humidity = static. The sad part is that I knew the conditions, the same ones called out at the lab at work, and I went home and touched her anyway.