Posts posted by mrvan

  1. I've been reading a bit about the CC-40 for a while and decided to pull the eBay trigger. Found one that seems to be in pretty good shape; no bargain, but maybe a fair price.

     

     It has the 18 KB expansion--I think that's a custom mod made to the unit; verified with FRE(3)--along with the Hexbus cable and the word processor cartridge.

     

     I've not really decided what to do with it, but I'm comfortable with TI BASIC and XB and understand the BASIC in this unit is quite similar. It will probably beat the heck out of programming my HP 15C :-).

     

     I really appreciate that the prices of the 1980s TI computers, this one and the TI-99s, are reasonable enough that one can own functioning history. I'll be watching for interesting upgrades, including storage, connectivity, and display.

    • Like 1
  2. 10 hours ago, mizapf said:

    For a more thorough check, if you can build MAME yourself, add the logging flags LOG_READ and LOG_COMMAND in file src/devices/machine/hdc92x4.cpp, line 115, and rebuild.

     

    #define VERBOSE (LOG_GENERAL | LOG_WARN | LOG_COMMAND | LOG_READ)

     

    This file emulates the controller chip on the HFDC, and when you run with -oslog or -log, you will then get output that tells you about the operations (like sector read, sector write etc.). oslog goes to the current terminal, log goes to the file error.log.

     

     Don't be surprised that LOG_WRITE is defined but not used. It seems I have some leftovers in my code.

     

     I saw this late, so I will do this in the next few days.

  3. On 1/14/2024 at 12:50 PM, InsaneMultitasker said:

    Load MDM5 and press "V" at startup.  Or, you can inspect the DSR ROM and you should see >0B as the version byte. 

     

    For the image, it is helpful to see the corrupt image so that the bitmap, sector 0, and FDR can be inspected, as well as look at what else is there that could contribute to the issue.  ZIPping might reduce image size enough.

     

     Sector 1024 is definitely a boundary condition, though your previous example was not, so this isn't conclusive.

    The EPROM version is H11.

  4. 6 hours ago, mrvan said:
      10 hours ago, InsaneMultitasker said:

    It would still be worthwhile, in my opinion, to rerun the test as I described in my earlier "thought".  That might tell us more about the problem 32 sectors, and whether the DSR or MAME is accessing them correctly.  I am cautiously optimistic to hear that your recent test succeeded.

    The test fails identically at record 1024.

  5. 5 hours ago, mizapf said:

    If your hard disk image is a CHD file (*.hd), you can use MAME's "chdman" tool.

     

    chdman info -i yourhdimage.hd

     

    I find a directory with the chdman name but no executables with that name. Is there something I need to do to build that executable?

  6. 3 hours ago, InsaneMultitasker said:

    It would still be worthwhile, in my opinion, to rerun the test as I described in my earlier "thought".  That might tell us more about the problem 32 sectors, and whether the DSR or MAME is accessing them correctly.  I am cautiously optimistic to hear that your recent test succeeded.

     I'm running this test now. The code only needed one line updated, setting R$ differently. I'll post findings later today.

  7. I generated a hard drive image using the default TI Image Tool settings (615 cylinders, 4 heads, 32 sectors, 256-byte sector length) and ran the test. The test passed, with the resulting file spanning many more regions of the disk (fragments), not just the two that the failed case had.

     

     To populate this new image, I first copied over all files from the source disk that had the issues, then deleted the large binary file and the copies of the test program. Next I made 6 new copies of the test program and deleted every other copy, which should have resulted in multiple smaller free regions. When the test ran, each of those regions was clearly used.

     

     I plan to recreate my original disk tonight with the original geometry, just to make sure there wasn't something else wrong with the disk itself. I don't see anything that reports the geometry in either MAME or TI Image Tool--may I bother one of you to tell me where to find it?

     

    I very much appreciate you folks helping me work through this problem. I don't think I have a definitive answer yet but suspicion is pointing to 16 heads, or that I don't have the right DSR ROM to support that.

  8. I can fairly easily regenerate a new drive with different geometry, including less than 16 heads.

     

     When I made it, a year or more ago, I made it as large as allowed.

     

     I can transfer the programs to the new image and retest.

     

     It seems unlikely MAME would have the modified Myarc card, if that's what was needed.

  9. 3 hours ago, Lee Stewart said:

     

    I know little about the HFDC DSR, so I may be off base here, but if records are set up the same way as for the TI DSR, a 255-byte fixed record takes up 256 bytes (1 sector) on disk. That would mean that record 1024 should be at file location 1024 x 256 = 0x40000.

     

    ...lee

     Hi Lee, the dump is from the file contents, not the raw sectors. I apologize that I didn't make that clear.

  10. 21 hours ago, InsaneMultitasker said:

    If the renamed file still exists and is still 'bad', it would be nice to see the test program, the corrupted file, and the resulting drive image.  I am curious about the record sequence throughout the bad file and how the file was written.  The Geneve DSR is built upon the HFDC dsr code, so I could also run the program on the Geneve for comparison, if my last MFM drive will power up. 

     

    Thinking back to when I used the HFDC with large fixed files (thousands of records) for BBS/message base activity, my hard drives were typically 20MB or 40'ish MB.  What size is your test image?  Maybe the bitmap or cluster allocation routine is flawed at higher disk capacities. 

     

    Lastly, I assume you are using HFDC DSR version H11.

    How can I find the version of the DSR I have?

  11. 20 hours ago, InsaneMultitasker said:

    If the renamed file still exists and is still 'bad', it would be nice to see the test program, the corrupted file, and the resulting drive image.  I am curious about the record sequence throughout the bad file and how the file was written.  The Geneve DSR is built upon the HFDC dsr code, so I could also run the program on the Geneve for comparison, if my last MFM drive will power up. 

     

    Thinking back to when I used the HFDC with large fixed files (thousands of records) for BBS/message base activity, my hard drives were typically 20MB or 40'ish MB.  What size is your test image?  Maybe the bitmap or cluster allocation routine is flawed at higher disk capacities. 

     

    Lastly, I assume you are using HFDC DSR version H11.

     The first defect found in the file is at record 1024. The program stores the values as radix 100, so the value found at 1024 x 255 = 0x3FC00 should be 41 0A 18. Just before that, 1023 (41 0A 17) should be found, and it is. The record found at 0x3FC00 instead has 41 09 5C; that value is also found in its correct spot earlier in the file.

     I've attached the XB program, a hexdump of the file, and the file itself. Note that the file is also fragmented on the disk. I'll look into how to construct a fresh hard disk image of a suitable size for uploading--it's been a while--and provide that image.

    Here's a partial hexdump of the file where it goes jelly side down.

     03fb00: 21 08 41 0a 17 00 00 00 00 00 08 41 0a 17 00 00     !.A........A....
     03fb10: 00 00 00 08 41 0a 17 00 00 00 00 00 08 41 0a 17     ....A........A..
     03fb20: 00 00 00 00 00 08 41 0a 17 00 00 00 00 00 08 41     ......A........A
     03fb30: 0a 17 00 00 00 00 00 08 41 0a 17 00 00 00 00 00     ........A.......
     03fb40: 08 41 0a 17 00 00 00 00 00 08 41 0a 17 00 00 00     .A........A.....
     03fb50: 00 00 08 41 0a 17 00 00 00 00 00 08 41 0a 17 00     ...A........A...
     03fb60: 00 00 00 00 08 41 0a 17 00 00 00 00 00 08 41 0a     .....A........A.
     03fb70: 17 00 00 00 00 00 08 41 0a 17 00 00 00 00 00 08     .......A........
     03fb80: 41 0a 17 00 00 00 00 00 08 41 0a 17 00 00 00 00     A........A......
     03fb90: 00 08 41 0a 17 00 00 00 00 00 08 41 0a 17 00 00     ..A........A....
     03fba0: 00 00 00 08 41 0a 17 00 00 00 00 00 53 30 31 32     ....A.......S012
     03fbb0: 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38     3456789012345678
     03fbc0: 39 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34     9012345678901234
     03fbd0: 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30     5678901234567890
     03fbe0: 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36     1234567890123456
     03fbf0: 37 38 39 30 31 32 33 34 35 36 37 38 39 5f 40 21     7890123456789_@!
     03fc00: 08 41 09 5c 00 00 00 00 00 08 41 09 5c 00 00 00     .A.\......A.\...
     03fc10: 00 00 08 41 09 5c 00 00 00 00 00 08 41 09 5c 00     ...A.\......A.\.
     03fc20: 00 00 00 00 08 41 09 5c 00 00 00 00 00 08 41 09     .....A.\......A.
     03fc30: 5c 00 00 00 00 00 08 41 09 5c 00 00 00 00 00 08     \......A.\......
     03fc40: 41 09 5c 00 00 00 00 00 08 41 09 5c 00 00 00 00     A.\......A.\....
     03fc50: 00 08 41 09 5c 00 00 00 00 00 08 41 09 5c 00 00     ..A.\......A.\..
     03fc60: 00 00 00 08 41 09 5c 00 00 00 00 00 08 41 09 5c     ....A.\......A.\
     03fc70: 00 00 00 00 00 08 41 09 5c 00 00 00 00 00 08 41     ......A.\......A
     03fc80: 09 5c 00 00 00 00 00 08 41 09 5c 00 00 00 00 00     .\......A.\.....
     03fc90: 08 41 09 5c 00 00 00 00 00 08 41 09 5c 00 00 00     .A.\......A.\...
     03fca0: 00 00 08 41 09 5c 00 00 00 00 00 53 30 31 32 33     ...A.\.....S0123
     03fcb0: 34 35 36 37 38 39 30 31 32 33 34 35 36 37 38 39     4567890123456789
     03fcc0: 30 31 32 33 34 35 36 37 38 39 30 31 32 33 34 35     0123456789012345
     03fcd0: 36 37 38 39 30 31 32 33 34 35 36 37 38 39 30 31     6789012345678901
     03fce0: 32 33 34 35 36 37 38 39 30 31 32 33 34 35 36 37     2345678901234567
     03fcf0: 38 39 30 31 32 33 34 35 36 37 38 39 5f 40 21 08     890123456789_@!.

     

     

    Screenshot 2024-01-14 at 8.31.54 AM.png

    test.tfi b2.txt b2.tfi

  12. 15 hours ago, InsaneMultitasker said:

    Can you pin down what is generating the fragmentation?  Both the level 3 record write (opcode >03) and binary direct write IO (subroutine >25) should add allocation units to the File Descriptor Record (FDR) in a contiguous fashion until/unless there is an already-used AU at the next position, in which case the DSR will skip to the next available AU. The same operation is called whether adding sequential records or random records.  If you are interleaving files, fragmentation is expected; if you are using a "clean" disk without interleaving, I find it curious you are experiencing this problem. 

     

    I often run sanity tests from XB in cases like this, though I don't know that your routines can be easily adapted to strict level 3 record IO.

     I took your advice and wrote an XB program to approximate the same type of scenario, albeit simplistically. I created an INT/FIX 255 file, wrote 16384 records to it (encoding the record number in the values within each record), closed the file, and then reopened it to read back. On read back it failed at the 664th record, with the values read back not matching, and exited. Checking the file, it was fragmented. I renamed the generated file and tried again. It ran successfully, and the resulting file was not fragmented.

     

    So the good news obviously is that the behavior between XB and C is the same, as should be expected. 

     

    I'm not done chasing this, though. I'm planning on testing in the TIPI environment shortly. I suspect that there's no possibility of fragmentation there.

    • Like 1
  13. 2 minutes ago, InsaneMultitasker said:

    Can you pin down what is generating the fragmentation?  Both the level 3 record write (opcode >03) and binary direct write IO (subroutine >25) should add allocation units to the File Descriptor Record (FDR) in a contiguous fashion until/unless there is an already-used AU at the next position, in which case the DSR will skip to the next available AU. The same operation is called whether adding sequential records or random records.  If you are interleaving files, fragmentation is expected; if you are using a "clean" disk without interleaving, I find it curious you are experiencing this problem. 

     

    I often run sanity tests from XB in cases like this, though I don't know that your routines can be easily adapted to strict level 3 record IO.

     I'm certain that when there's no fragmentation, all works well. Your idea of using XB to check this out is a good one. I can probably write, in XB, the critical aspects that would raise this issue, if it indeed exists.

  14. 7 hours ago, FarmerPotato said:

    Then perhaps the HFDC DSR has an overflow bug when the file is fragmented. 
     

    I may have missed it, but how big was your failing file? Is it no more than 4 Meg? (I guess 32767 records) 

     


     

     

    One file that failed was just 256 KB. It is fragmented. Just renaming that file and rerunning the program works and the new file is fine (and not fragmented). I'm suspicious of the HFDC DSR as well. 

     

    The largest file I've generated is slightly larger than 4 MB and uses 16449 records. This is using Int/Fix 255.

  15. 2 hours ago, FarmerPotato said:

     Can you post the test code for fwrite() fread()?

     Is it one pass writing, fclose fopen, one pass reading--or something more complex?

     

     I have two unit tests. One just writes sequentially through the file, writing known sequences that correspond to the position within the file. The second is similar but more complex and hits the edge cases: the file is written by skipping well past the logical end of the file (as fwrite permits) using an fseek call. It starts by writing the first record, then skips ahead 32 and writes the next, and so on to the end. This loops until the skip is 0 and all records are written. A record in this case is not the DSR record, but simply a struct of a given size written to a specific position. These implement true random file access.

     

    I'm mapping the DSR file records as blocks of fixed length. The first record is used to store the file attributes not available elsewhere, in particular length.

     

     int fseek (FILE *stream, long offset, int whence) {
        int r = UNDEFINED;
        long pos;

        switch (whence) {
           case SEEK_SET:
              pos = offset;
              if (pos >= 0) {
                 stream->bstate.pos = pos;
                 r                  = 0;
              }
              break;
           case SEEK_CUR:
              pos = stream->bstate.pos + offset;
              if (pos >= 0) {
                 stream->bstate.pos = pos;
                 r                  = 0;
              }
              break;
           case SEEK_END:
              pos = stream->bstate.len + offset;
              if (pos >= 0) {
                 stream->bstate.pos = pos;
                 r                  = 0;
              }
              break;
           default:
              break;
        }

        return r;
     }

     

     int fwrite (const void *restrict ptr, int size, int nitems, FILE * restrict f) {

        int r = 0;

        // determine if this is a binary or text file that is open
        if (f->is_binary) {

           int bytes_remaining, block_num, block_pos, block_bytes;

           // capture the initial address of data to be written
           char *p = (char *) ptr;

           // calculate the total number of bytes to be written
           bytes_remaining = size * nitems;

           // loop until all bytes are written
           while (bytes_remaining) {

              // calculate where to start writing
              block_num   = __divdi3 (f->bstate.pos, FILE_BINARY_BLOCK_SIZE) + 1; // add one since the first block contains the binary file stats
              block_pos   = __moddi3 (f->bstate.pos, FILE_BINARY_BLOCK_SIZE);
              block_bytes = min (FILE_BINARY_BLOCK_SIZE - block_pos, bytes_remaining);

              // handle creation of blocks after a seek operation has moved beyond the current physical end of the file. This will write blank
              // blocks between the span. The TI file system doesn't support sparse files so this will indeed use space on the storage device
              if (f->bstate.last_block + 1 < block_num) {
                 vdpmemset (f->dsr->vdp_data_buffer_addr, 0x00, FILE_BINARY_BLOCK_SIZE);
                 f->dsr->pab.OpCode    = DSR_WRITE;
                 for (int eb = f->bstate.last_block + 1; eb < block_num; eb++) {
                    f->dsr->pab.RecordNumber = eb;
                    f->dsr->pab.CharCount    = FILE_BINARY_BLOCK_SIZE;
                    r = dsrlnk (&f->dsr->pab, f->dsr->vdp_pab_buffer_addr);
                    if (r) {
                       r = UNDEFINED;
                       break; // breaks out of the for loop, need one more break to get out of the while loop (a few lines later)
                    }
                    f->bstate.last_block++;
                 }
                 if (r) {
                    break;    // break out of the while loop now since we needed a double break from within the for loop
                 }
              }

              // load the existing block if the entire block isn't being overwritten
              if (block_bytes != FILE_BINARY_BLOCK_SIZE && block_num <= f->bstate.last_block) {
                 f->dsr->pab.OpCode       = DSR_READ;
                 f->dsr->pab.RecordNumber = block_num;
                 f->dsr->pab.CharCount    = FILE_BINARY_BLOCK_SIZE;
                 r = dsrlnk (&f->dsr->pab, f->dsr->vdp_pab_buffer_addr);
                 if (r) {
                    r = UNDEFINED;
                    break;
                 }
              } else {
                 // this will be a new block so initialize it unless the entire block is going to be written
                 if (block_bytes < FILE_BINARY_BLOCK_SIZE) {
                    vdpmemset (f->dsr->vdp_data_buffer_addr, 0x00, FILE_BINARY_BLOCK_SIZE);
                 }
              }

              // write the data to the vdp
              vdpmemcpy (f->dsr->vdp_data_buffer_addr + block_pos, (unsigned char *) p, block_bytes);
              p += block_bytes;

              // write the block to storage
              f->dsr->pab.OpCode       = DSR_WRITE;
              f->dsr->pab.RecordNumber = block_num;
              f->dsr->pab.CharCount    = FILE_BINARY_BLOCK_SIZE;
              r = dsrlnk (&f->dsr->pab, f->dsr->vdp_pab_buffer_addr);
              if (r) {
                 r = UNDEFINED;
                 break;
              }

              // update values
              f->bstate.pos       += block_bytes;
              f->bstate.len        = max (f->bstate.len, f->bstate.pos);
              f->bstate.last_block = max (f->bstate.last_block, block_num);
              bytes_remaining     -= block_bytes;
           }

           // set the return value
           if (r != UNDEFINED) {
              r = nitems;
           }

        } else {

           // process text
           char s[258];
           char *p = (char *) ptr;
           int i, j;
           for (i = 0; i < nitems; i++) {
              for (j = 0; j < size; j++) {
                 s[j] = *p;
                 p++;
              }
              s[size] = 0x00;
              r = min (r, fputs (s, f));
           }
        }

        return r;
     }

     

     

     int fread (void *restrict ptr, int size, int nitems, FILE *f) {

        int r = 0;

        // ensure this is a binary file
        if (f->is_binary) {

           int total_bytes, bytes_remaining, block_num, block_pos, block_bytes;
           char *p = (char *) ptr;

           // calculate the total number of bytes to be read:
           // nominally the number of bytes to be read is size * nitems, but that may be truncated by overrunning the end of the file;
           // the total number of bytes calculated can be negative if fseek was used to move beyond the end of file (a legal operation),
           // so ensure the final tally is non-negative.
           total_bytes = max (min (size * nitems, f->bstate.len - f->bstate.pos), 0);

           // loop until all bytes are read
           bytes_remaining = total_bytes;
           while (bytes_remaining) {

              // calculate where to start reading
              block_num   = __divdi3 (f->bstate.pos, FILE_BINARY_BLOCK_SIZE) + 1; // add one since the first block contains the binary file stats
              block_pos   = __moddi3 (f->bstate.pos, FILE_BINARY_BLOCK_SIZE);
              block_bytes = min (FILE_BINARY_BLOCK_SIZE - block_pos, bytes_remaining);

              // read the data from storage
              f->dsr->pab.OpCode       = DSR_READ;
              f->dsr->pab.RecordNumber = block_num;
              f->dsr->pab.CharCount    = FILE_BINARY_BLOCK_SIZE;
              r = dsrlnk (&f->dsr->pab, f->dsr->vdp_pab_buffer_addr);
              if (r) {
                 break;
              }

              // copy the data from vdp to the target object
              vdpmemread (f->dsr->vdp_data_buffer_addr + block_pos, (unsigned char *) p, block_bytes);
              p += block_bytes;

              // update stats
              f->bstate.pos   += block_bytes;
              bytes_remaining -= block_bytes;
           }

           // set the return value
           if (!r) {
              r = total_bytes / size;
           }

        }

        return r;
     }

  16. On 1/5/2024 at 9:12 AM, mizapf said:

    I'll have to work over all those features in CommandShell before finishing the v3, anyway.

     

    Wondering about the error message, do you have those files (file_bm, pio, file_bl) in your temp folder?

    So sorry, I just saw this. I'll have to perform the check again tonight after work. For some reason I get very late notifications...

  17. 2 hours ago, mizapf said:

    Yes, the CommandShell class inside TIImageTool has a command line argument parser.

     

    java -classpath tiimagetool.jar de.mizapf.timt.CommandShell -h

     

    lists some options. Still missing in the description are "export" and "import".

     

    As for the context menu, I could probably add it to the top menu line, so instead of right-clicking on a file, you would mark the file with a left click, and pick the function from the drop-down menu. I could add that to TIMT 3.

    Cool. Thanks for this. I was able to ls and list files. Both of these are nice to have.

     

     I see text from the command shell's -h option about import and export, so I gave import a shot. It just errored out; I'm not sure why.

     

    java -classpath ~/Documents/ti99/tools/tiimagetool/tiimagetool.jar de.mizapf.timt.CommandShell import ~/Documents/ti99/dsk/hd/hdone.hd temp

     file_bm: invalid name
     pio: invalid name
     file_bl: invalid name

     java.io.IOException: Bad file descriptor
     at java.base/java.io.RandomAccessFile.writeBytes0(Native Method)
     at java.base/java.io.RandomAccessFile.writeBytes(RandomAccessFile.java:562)
     at java.base/java.io.RandomAccessFile.write(RandomAccessFile.java:578)
     at de.mizapf.timt.files.MameCHDFormat.writeCurrentHunk(MameCHDFormat.java:560)
     at de.mizapf.timt.files.MameCHDFormat.writeSector(MameCHDFormat.java:478)
     at de.mizapf.timt.files.Volume.writeSector(Volume.java:196)
     at de.mizapf.timt.files.TFile.writeFIB(TFile.java:442)
     at de.mizapf.timt.files.Directory.insertFile(Directory.java:484)
     at de.mizapf.timt.CommandShell.importFile(CommandShell.java:449)
     at de.mizapf.timt.CommandShell.main(CommandShell.java:139)

     

     If you're looking for v3 features, I'd love to see a command-line function like a simple rsync: update a directory on the image with the contents of a directory [recursively] on the host. But if I can get the command above working, I could fairly easily script it in bash.

     

    BTW, thanks for making this tool. It's very useful.

  18. 12 hours ago, InsaneMultitasker said:

     

    Is the file stored on the hard drive image or the floppy image?  Since you ran a 4MB test, I am guessing hard drive only?  Just a few thoughts though little that is likely to solve the issue:

     

    The hard drive DSR is pretty solid, though files that fracture beyond the first FDR (70 or so different clusters) are subject to corruption, worst case is overwriting sector 0 (VIB) or bitmap sectors, which in turn corrupts other files. For fixed files, it is often good practice to allocate the anticipated maximum file size, or some portion thereof, upon file creation and/or when you know the file needs to expand by a considerable record count.  Use caution with appended files, especially in "logging" scenarios and when interleaving writes between two files. 

     

    The floppy drive DSR is also pretty solid but it will not properly allocate sectors if the drive capacity exceeds 800K, i.e., use 720K or smaller capacities. 

     Yes, I'm using the hard drive. I find it hard to believe that I'm exceeding the limit of 70 or so clusters, but I might be. I was thinking about the same thing: how to prevent later corruption with these files. The pre-allocation idea is a good one. I'm not getting any dsrlnk errors, so I've not been able to detect corrupted files except by writing known values to known locations. I suppose the pre-allocation could perform the write-and-check, then reinitialize all the values after proving the file is in good shape. The degree of corruption possible would be a bit scary if this were my daily machine circa 1985. If it happens now, though, I can just regenerate a virtual HD and reload the files.

    • Confused 1
  19. 12 hours ago, mizapf said:

    Oh ... no one ever reported that the right-click does not work on macOS.  😕

     

    I just browsed through some reports on the web; this seems to be a general problem with creating a platform-independent user interface for Windows, Linux, and macOS together. The context menu contains a lot of stuff that is not reachable by the menus; see the screenshot. I wonder how I could possibly enable that, since the alternative of CTRL-click is also already defined (additive marking).

     

     

    Screenshot_20240104_175143.png

    Yes, lots of goodness there. Some of it is in the main menus but not all. TI Image Tool is my go-to for transferring files.

     

     Any thoughts on supporting some type of command-line actions? My workflow requires a number of manual actions, including using TI Image Tool. Most everything else I have is automated via make, including generating the directory structures and files. I just keep a Finder window open and transfer files from it to TI Image Tool.

    • Like 1