Everything posted by mrvan
-
I feel for you. My console did the same thing a few months ago. I replaced the 9901 and all is well. It seems they may be easy to kill: I touched my console during a period of low humidity, i.e., static. The sad part is that I know the static precautions we call out at the lab at work, and then I went home and touched her anyway.
-
That's actually a great idea. I already have a script for it, so will do. Thanks!
-
@mizapf Thanks for the code update suggestions. I've added them and verified everything works. I'm calling the script just before starting MAME and right after MAME concludes, testing the return value and playing the sports airhorn sound if the return is non-zero. I wonder how long it'll be until I hear the horn...hopefully sooner rather than later. I'll let you know the details if/when it happens.
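For anyone who wants to do something similar, here's roughly what my wrapper looks like. This is a sketch: the paths, the airhorn file, and the MAME arguments are placeholders for my setup, and afplay is the stock macOS audio player.

```python
#!/usr/bin/env python3
# Sketch of my MAME wrapper: run chd_ckhunks before and after the session
# and sound the airhorn if it reports damage (non-zero exit code).
import subprocess
import sys

HD = "/Users/marko/Documents/ti99/dsk/hd/hd1.hd"      # image to watch
HORN = "/Users/marko/Documents/ti99/airhorn.aiff"     # placeholder path

def check(label):
    # chd_ckhunks exits non-zero when the CHD is damaged
    result = subprocess.run(["chd_ckhunks", HD])
    if result.returncode != 0:
        print(f"*** CHD damaged ({label}) ***")
        subprocess.run(["afplay", HORN])              # macOS audio player
    return result.returncode

check("before MAME")
subprocess.run(["mame", "ti99_4a"] + sys.argv[1:])    # my usual MAME launch
check("after MAME")
```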
-
Cool. Thank you. I’ll make the mods and integrate them into my toolchain. I look forward to the final solution. At least it seems this is rare.
-
OK, that is some awesome work, and I very much appreciate it. I feel confident now that we'll be able to pinpoint the root cause. I ran this script, slightly modified, and received the following, with the hdone.hd file showing the multiple-links error:

[marko@napili ~]$ cd Documents/ti99/dsk/hd/
[marko@napili hd]$ ls
hd1.hd  hd1test.hd  hdone.hd  hdtwo.hd
[marko@napili hd]$ chd_ckhunks hd1.hd
Checking hd1.hd
[marko@napili hd]$ chd_ckhunks hd1test.hd
Checking hd1test.hd
[marko@napili hd]$ chd_ckhunks hdone.hd
Checking hdone.hd
- Hunk 0x157 multiply linked
- Hunk 0x158 multiply linked
- Hunk 0x159 multiply linked
- Hunk 0x15a multiply linked
[marko@napili hd]$ chd_ckhunks hdtwo.hd
Checking hdtwo.hd

Considering the analysis I did last week where 8 KB was damaged, I take it that a hunk is 2 KB long?

I don't know Python well at all, but when run, your unmodified program gave me the following error, for which I at least found a quick fix on Stack Exchange:

Checking /Users/marko/Documents/ti99/dsk/hd/hdone.hd
Traceback (most recent call last):
  File "/Users/marko/Documents/ti99/bin/chd_ckhunks", line 13, in <module>
    if (int.from_bytes(f.read(4),little)==5):
TypeError: from_bytes() missing required argument 'byteorder' (pos 2)

Adding 'big' to the int.from_bytes() call took care of it. I wasn't sure whether big, little, or something else was the right choice, but 'big' worked; my M2 MacBook Pro is little-endian, so I take it the CHD header fields are stored big-endian regardless of host.

A couple of suggested improvements would help, at least the first here:
- Return a value that indicates whether the file is OK or damaged. That way I can integrate the script into my workflow and have it capture my attention on failure, maybe by playing an airhorn sound 🙂. I have a funny feeling it may take a while before I see this error again.
- Output an OK message when the file is OK.

Again, excellent, and thank you!
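For completeness, here's roughly what my modified copy now looks like. This is a sketch, not mizapf's original: it assumes an uncompressed CHD v5, where the header fields are big-endian and the map is a flat array of 4-byte big-endian hunk indices (0 = not yet allocated), and it adds the two improvements suggested above.

```python
#!/usr/bin/env python3
# chd_ckhunks -- flag hunks referenced by more than one map entry.
# Assumes an uncompressed CHD v5; header offsets per the CHD v5 layout.
import sys

def check(path):
    damaged = False
    with open(path, "rb") as f:
        if f.read(8) != b"MComprHD":
            print(f"{path}: not a CHD file")
            return True
        f.read(4)                                   # header length
        if int.from_bytes(f.read(4), "big") != 5:   # the fix discussed above
            print(f"{path}: not a v5 CHD")
            return True
        f.seek(32)
        logicalbytes = int.from_bytes(f.read(8), "big")
        mapoffset = int.from_bytes(f.read(8), "big")
        f.seek(56)
        hunkbytes = int.from_bytes(f.read(4), "big")
        hunkcount = (logicalbytes + hunkbytes - 1) // hunkbytes
        f.seek(mapoffset)
        seen = set()
        for _ in range(hunkcount):
            entry = int.from_bytes(f.read(4), "big")
            if entry == 0:                          # hunk never written
                continue
            if entry in seen:
                print(f"- Hunk {entry:#x} multiply linked")
                damaged = True
            seen.add(entry)
    if not damaged:
        print(f"{path} is OK")                      # improvement #2
    return damaged

if __name__ == "__main__":
    print(f"Checking {sys.argv[1]}")
    sys.exit(1 if check(sys.argv[1]) else 0)        # improvement #1
```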
-
This seems like it's going to be a difficult question to answer. If an integrity check were performed on the CHD files at MAME initialization and prior to termination, that could help. I suppose it could even be an external check program. If there were such a program, I'd happily add it to my MAME script and would be able to report the answer at the first occurrence of the issue.
-
The image was initially created with TI Image Tool, and most writing to it was done with the same tool. Most of my work is developing an OS and related libraries. The most common writes in MAME are short text files that describe a process (path, parameters, redirection for stdin/out/err), so usually fewer than 10 lines. The others are redirected stdout/stdin and pipes, so it's a mix. I agree about the image; I've already created a new HD image that I'm using (again with TI Image Tool). Luckily 99% of my files are already on my Mac, stored in an image directory that I can copy over to an HD image. Eventually I hope to auto-generate the HD image.
-
@mizapf, it looks like you are hot on the trail. If it matters, I'm currently using mame0261, but I suppose the root cause could have been introduced by an older version, if it's MAME at all. I generally keep up with the versions.
-
I'm thinking that the recorded allocations, etc., were correct, so if I'm right, overwriting should have worked.
-
Unfortunately my 9900 ASM skills are insufficient for this task. But yes, I'm sure XB is much slower. When I ran it within MAME, I ran it at full speed, which was 8-10× real time. It still takes a good hour or so.
-
Is your desire for compiled code specifically an ASM version, or would a GCC-built version work?
-
Hmmm…well, I hope you see the behavior.
-
Well, yes I do. I’m not fond of upper-case names. I’ll bend if you or others know of gotchas. One could probably argue similarly about the use of directories.
-
Within the hdone image in my last post is the TI XB program ‘test’ that does this. It actually writes 16K records and reads them back, failing at the read of the 1024th. Thanks for being willing to continue looking into this.
-
I've been analyzing the defective hard drive image, having extracted the raw data using the chdman tool. The file in question, b2 in the root directory, has two fragments with sectors 261136-261279 and 261296-277551. Nothing wrong there.

The file is corrupted at the 1024th record, which is 1024 * 256 bytes into the file's image (256 = 255 bytes of record data + 1 length byte). Since each record contains the record # and some filler data, I can see that it contains the info for record #992. The subsequent 31 records contain #993, ... #1023. So 32 records are wrong, but they are an EXACT copy of the records contained in their rightful positions, records 992-1023. 32 records * 256 bytes = 8 KB. The missing records, #1024-1055, are not contained in the file b2 at all. As a spot check, I examined records 2047-2048, but they are fine.

The damaged data at 0x4002000 was copied from 0x4000000; both addresses are multiples of 8K, if that matters. I don't know whether the underlying hardware being emulated has 8 KB of RAM for caching. If it doesn't, then it seems the hardware and the DSR could not cause this "perfect" 8 KB copy. Perhaps this points to a MAME bug?

I've attached the disk image itself, my analysis to find addresses and data for the records in the image (Excel file), the extracted disk image, and a hex dump of the disk image.
hdone.hd.gz
disk extract analysis.xlsx
disk_extract_hexdump.txt.gz
disk_extract.bin.gz
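To sanity-check those addresses: accounting for the fragment boundary, record #1024's position in the extracted image works out to 0x4002000, and record #992's rightful home to 0x4000000, exactly 8 KB apart. A quick sketch of that arithmetic:

```python
SECTOR = 256            # bytes per sector = bytes per record
FRAG1_START = 261136    # first fragment: sectors 261136-261279, records 0-143
FRAG1_LEN = 144
FRAG2_START = 261296    # second fragment holds records 144 and up

def offset(rec):
    # Byte offset of a record in the extracted image, fragment-aware.
    if rec < FRAG1_LEN:
        return (FRAG1_START + rec) * SECTOR
    return (FRAG2_START + rec - FRAG1_LEN) * SECTOR

print(hex(offset(1024)))           # 0x4002000 -- where the bad copy sits
print(hex(offset(992)))            # 0x4000000 -- rightful home of record #992
print(offset(1024) - offset(992))  # 8192 -- an exact 8K-aligned copy
```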
-
Sure, please find the attached hd image. hd1test.zip
-
Sorry, mizapf, I assume you meant what caused the corruption for which I opened the thread, not the specific test I described, right? If so, I/we don't know.
-
MAME with CC-40 - options
mrvan replied to mrvan's topic in Tomy Tutor, CC40, 99/2, 99/8, Cortex, 990 mini
That video interface would sure be nice to have. Lucky you.
-
I have a lot to learn about such games.
-
Thank you for this; that's neat. I admit disappointment with CALL CHAR being limited to just a small number of characters. I had imagined being able to redefine any character.
-
I should have read your last statement, that TI Image Tool ignores the reserved space when copying files to an image. That's what was confusing me: I was copying my test program with TI Image Tool and then running it, and the resulting large file was being written much further down the disk, exactly beyond the reserved space, as it should be. My regular workflow is to copy files via TI Image Tool, so most of the files written to my disk are IN the reserved area.

What is the reserved area used for? Obviously for something...maybe something very important?

From a blank disk, using TI Image Tool to copy my test program, I end up with this allocation:
Sector 80 = FIB of file test, occupying interval [96..111]
The AU used is 96/16 = 6, which is very early into the disk, near the beginning of the reserved area.

I don't know if any of this matters...but I'm interested in the purpose of the reserved area.
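In case the AU arithmetic helps anyone else following along, here's the mapping, assuming 16 sectors per AU as in this geometry:

```python
SECTORS_PER_AU = 16                 # per the geometry discussed above

fib_sector = 80                     # FIB of file 'test'
data_sectors = range(96, 112)       # occupied interval [96..111]

print(fib_sector // SECTORS_PER_AU)                 # 5  -> AU holding the FIB
print({s // SECTORS_PER_AU for s in data_sectors})  # {6} -> AU holding the data
```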
-
I've been chasing this problem for a while now and still don't have a truly definitive answer. I've created three different hard drive files, one having the same geometry as the one that was producing corrupted files. None of them produce corrupted files.

With the two smaller hard drive files, I've been able to generate multiple files and then delete some to create fragmented free regions, to ultimately force the creation of fragmented files during writing. This behaved as expected, resulting in fragmented files but no corruption.

With the hard drive file of the same geometry, the behavior is different: I can generate the same conditions, but the large file, rather than being written in the fragmented free regions, gets written deep into the disk beyond the last used AUs. It seems clear the allocation algorithm is not as simple as I had surmised, where the first unallocated AU would be used, etc. What is the algorithm?
-
FYI, I'm still working on this as I can. Work has been hectic, and the weekends (last and this) have involved a lot of travel and sightseeing. I've not found the same problem in a freshly generated HD image with the same geometry. I plan to try many more times, permuting what I do. But it could be that something was/is defective in the HD image I was having trouble with.
-
I’ve not yet found any specific options for the configuration other than cartridges. Are there any to support the 18 KB upgrade done within the unit, or external storage? I’ve preordered the HEX-TI-r HEXBUS disk drive from ArcadeShopper, so I’ll eventually have a storage device. I’m hoping to be able to write software first on my Mac, then transfer it into MAME, followed by the hardware.
