Harry Potter Posted September 25, 2021: I found a bug. I was getting the unsigned value of a signed number and had forgotten to cast it back to a signed char. BTW, the value in question is -1 if a block can't be compressed by a particular technique. It's now doing one block worse.
xxl Posted September 26, 2021: 13 hours ago, Harry Potter said: "I just implemented one of my compression techniques and cut the size of the compressed file down to 17 blocks, but I first need to debug it. Again, how do I implement the bit stream the right way?" An absolute revelation; it looks like your compressor is the most efficient of any I know.
Harry Potter Posted September 26, 2021: Again, I found a bug. It's nowhere near that good.
ivop Posted September 27, 2021: I know that feeling. You think you have found the holy grail of compression, the file is so small, and then you realize you made a mistake and cannot decompress that small file because your compressor was not right.
Harry Potter Posted September 29, 2021: Okay. I added a technique to the compressor that gives me one block on my test file. Should I post it here? Or should I try to do better first?
Harry Potter Posted October 1, 2021: In case you're wondering, I don't mean the size is one block but rather one block less than before.
Harry Potter Posted October 2, 2021: I'm still looking for ways to improve the ZX0 compression technique. I tried everything but Adaptive Huffman Codes, but only one idea worked. I don't want to try Adaptive Huffman, as my current version is buggy, and the compressor requires 8k for the Huffman data, and the decompressor 12k.
Harry Potter Posted October 2, 2021: I was good at math in high school but never formally attended college classes. Now, the Shannon technique shows promise. Can somebody explain it using less technical terms?
ivop Posted October 2, 2021: 2 hours ago, Harry Potter said: "I was good at math in high school but never formally attended college classes. Now, the Shannon technique shows promise. Can somebody explain it using less technical terms?" Shannon-Fano coding is sub-optimal compared to Huffman coding. But here you go: https://www.geeksforgeeks.org/shannon-fano-algorithm-for-data-compression/
Harry Potter Posted October 2, 2021: I have a version of Adaptive Huffman Codes, but it's buggy and costs a lot of memory, so I'm not willing to use it in the program. Is there another way to shorten individual bytes? BTW, I tried Wikipedia. It was very useful to me, but the ideas there don't help here.
ivop Posted October 2, 2021: I think what you are looking for is entropy encoding. Huffman coding is part of that, but the optimum is Arithmetic Encoding. That's pretty computationally expensive, but Range Coding is almost as good and is integer-based.
Harry Potter Posted October 2, 2021: I'm willing to sacrifice 2k for the technique. Should it work there?
ivop Posted October 2, 2021: If you mean (adaptive) range coding, it needs 32-bit ranges for each symbol, so that's at least 1kB. It's less computationally expensive than arithmetic encoding, but more than Huffman.
Harry Potter Posted October 2, 2021: 32-bit ranges wouldn't work well on an 8-bit computer.
+David_P Posted October 2, 2021: 16 minutes ago, Harry Potter said: "32-bit ranges wouldn't work well on an 8-bit computer." Clearly you need to upgrade your 800XL with a 65816.
Harry Potter Posted October 2, 2021: I have the Altirra64 emulator, and it has the functionality. However, I want to code for the 6502, as most of the hardware has that. Speaking of such, I have to use the 1200XL model setting to get MemXAtari to work. How do I access aux RAM on a 130XE model?
+David_P Posted October 2, 2021: Memory is accessed in 16K banks at $4000-$7FFF, controlled through PORTB. The 130XE banking permits separate CPU/ANTIC bank access; some third-party upgrades do not. If you program for current versions of SpartaDOS X, you can make system calls within your program to request expanded RAM and be less concerned about banking. ...of course, the original 800 uses an entirely different banking scheme for expanded RAM...
Harry Potter Posted October 2, 2021: I already read and used the docs, but the library only works on the 1200XL system.
Harry Potter Posted October 2, 2021: I am looking for an entropy-coding technique. I want it to be fast enough on an 8-bit system to be worth it and to take no more than 2k of extra RAM.
Harry Potter Posted October 19, 2021: I found an article on Shannon Coding on Wikipedia at Shannon coding - Wikipedia. However, I don't have a college education in Math. Can somebody please explain the method to a person with only a high school education?
ivop Posted October 19, 2021: 24 minutes ago, Harry Potter said: "I found an article on Shannon Coding on Wikipedia at Shannon coding - Wikipedia. However, I don't have a college education in Math. Can somebody please explain the method to a person with only a high school education?" You'll have similar problems as with Huffman and Range Coding: you need large tables to keep track of the probability of each symbol. And Shannon Coding is old, and sub-optimal. Anyway, if ceil(x) and the sum of p_k for k = 1 to i-1 are abracadabra to you, you might be aiming for something that is out of your reach. For now, at least.
Harry Potter Posted October 19, 2021: I know it is old and sub-optimal. I thought it would take less memory than Adaptive Huffman. I guess not. I have some methods but am only willing to reveal one, and it didn't help noticeably.
ivop Posted October 19, 2021: Have a look at universal prefix codes: https://en.wikipedia.org/wiki/Universal_code_(data_compression) Which code works best depends on the distribution of your input data. They take somewhere between 512 and 1024 bytes of fixed tables.
Harry Potter Posted October 19, 2021: I found Fibonacci codes. Thank you.
Harry Potter Posted November 9, 2021: I find that universal codes may cost me on larger numbers. Many of the documents to compress are likely to be just text, but I'm looking for a technique that compresses bytes in both text and binary data well.