matthew180

Members
  • Posts: 3,216
  • Joined
  • Last visited
  • Days Won: 4

matthew180 last won the day on January 21 2022

matthew180 had the most liked content!

Profile Information

  • Gender
    Male
  • Location
    Central Florida
  • Interests
    My family, FPGA, electronics, retro/vintage computers, programming, coin-op games, reading, outdoor activities.
  • Currently Playing
    Divinity2

Recent Profile Visitors

22,183 profile views

matthew180's Achievements

River Patroller (8/9)

Reputation: 3k

  1. You should really use SR1 for this, since you don't know if you have an F18A yet. I think I know why I used bits >E0 of SR1 for the F18A, so those will remain constant. For any releases going forward, the three zero bits >1C will contain the F18A type, i.e. MK0, MK1, MK2. I cannot promise I won't go over V1F, but once the MK2 is ready the version for all F18As will change to V2.0. For the sprite number in SR0 bits >1F, the real 9918A (and F18A) will retain whatever was the highest processed sprite on a line, which will depend on: 1. whether the >D0 byte is set in the sprite table to stop sprite processing at a certain value; 2. whether five or more sprites are on a line during the frame. Keep in mind that the sprite number is only valid if the 5S flag is set. IIRC the sprite number has been characterized to follow the sprite counter unless the 5S flag gets set, at which point it always retains the fifth sprite number until SR0 is read. So, it is possible that the value is always >1F depending on what happens during the frame, but I don't remember if it gets cleared when reading SR0 (I don't think it does).
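For what it is worth, here is a minimal 9900 sketch of the "only trust the sprite number when 5S is set" point above. It is just an illustration: VDPSTA, CHK5S, and NO5S are names I made up, and it assumes you are reading SR0 yourself rather than relying on the console ISR's copy.

VDPSTA EQU  >8802             VDP status register read address on the 99/4A
CHK5S  MOVB @VDPSTA,R0        read SR0 (this also clears the F/5S/C flags)
       MOV  R0,R1
       ANDI R1,>4000          bit >40 of the status byte is the 5S flag
       JEQ  NO5S              5S clear: the sprite number is not meaningful
       ANDI R0,>1F00          bits >1F hold the fifth-sprite number
       SRL  R0,8              move it to the low byte as a value 0-31
NO5S   RT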
  2. GPU detection will not be affected, and neither will not checking at all. It is the SR1 tests that would be a problem. But since the number of programs that use SR1 detection is n > 1, I'll probably have to put the >E0 bits back and just call it a day. Yeah, I should probably re-read the early posts in this thread, since I might answer my own questions. It probably has more to do with detecting the 9918A vs F18A, and what you would get back on a 9918A if you read the status register. For 9918A detection, if you wait for an interrupt and read SR0 twice, then bits >E0 in SR0 will never be set on the 2nd read. Maybe that's why I used those bits...
  3. Yes, SR1 (the 2nd status register... ). Thanks, I fixed the post. I also checked the Mario Bros code, which was the software that brought the MK1 problem to my attention, and it does mask with >F0 and compare with >E0. I'm testing (well, Arcadeshopper is testing) a version that uses >20 for the mask and compare, but I think I might have to just put back the >C0 bits in SR1.
  4. With the release of the MK1, and the future MK2, I have inadvertently created a problem for myself related to firmware updates. Specifically, the in-system updater now needs to be able to tell the difference between the original F18A (back-designated the MK0), the MK1, and the future MK2. For the last year the MK1 boards have shipped without any way to tell them apart from the MK0 from a host computer, so this is going to be a hurdle when the first firmware update is released. I was looking at SR1 (Status Register 1), which holds the "VDP type" information, and I'm a little confused why I chose bits >E0 to indicate the F18A. SR1 is used by the 9938/58 to indicate the VDP type, but there are also light-pen and various other bits in the register, and bits >C0 (used by the F18A ident) overlap with some of those.

     L=light pen flag  F=light pen switch  V=version bits
     B=blanking period (horz or vert)  H=horz interrupt flag

     LFVVVVVH  LF00000H  9938
     LFVVVVVH  LF00010H  9958
     VVV000BH  111000BH  F18A

Nothing like throwing common sense out the window on that one. I don't recall WTF I was thinking, but I certainly did not consider future versions. Technically the zeros in bits >1C are "don't care" for identifying the F18A, but in the register use spreadsheet they are shown as "0". Thus it is possible existing software might use a mask of >FC on the SR1 data and expect to find >E0 for the F18A. The 9938/58 could also report something close if the light-pen inputs are active, with only bit >20 being clear in that case. I don't know why I didn't stick to the five version bits >3E in the middle of the register... I *could* change to that, but risk some compatibility problems with software written to detect the F18A. Alternatively I could leave SR1 as-is and use SR12 to indicate MK0, MK1, or MK2, but that is yet another register to implement, and more code to detect the version. I think I would like to use something like this:

     MK0  v19  1110000H  Original
     MK1  v19  1110000H  Can't distinguish from MK0
     MK0  v1a  xx10000H  Starting with v1a
     MK1  v1a  xx10010H  Starting with v1a
     MK2  v20  xx10100H  Initial release

This removes the conflict with the light-pen flags of the 9938/58, and removes the combined (horz + vert) blanking flag (which was probably useless anyway, since horizontal blanking happens much too fast for reading a status register). A mask of >20 would indicate "F18A", and a mask of >1C would yield 0, 4, or 8 for MK0, MK1, MK2 respectively (or 0, 1, 2 if you only consider the three masked bits). But this would break any existing software looking for at least >E0 in SR1 for "F18A". Alternatively I could leave bits >C0 set, and the new scheme would still work, provided existing software is not expecting bits >1C to be zero. If anyone has written F18A software that reads SR1 to check for the F18A, I would be interested in feedback. I know @Tursi and @Asmusr have written such software, and any insight would be appreciated!
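If it helps anyone looking at the proposed scheme, here is a rough 9900 sketch of just the mask-and-compare step. It assumes the SR1 byte has already been read into the high byte of R1 (however you select and read SR1 on your setup, which is not shown), and CHKMK/NOF18A are names I made up, not anything from the real updater.

CHKMK  MOV  R1,R2
       ANDI R2,>2000          bit >20 set would mean "this is an F18A"
       JEQ  NOF18A
       ANDI R1,>1C00          isolate the proposed MK field (bits >1C)
       SRL  R1,10             >00 -> 0 (MK0), >04 -> 1 (MK1), >08 -> 2 (MK2)
       RT
NOF18A SETO R1                R1 = -1 means "not an F18A"
       RT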
  5. Heh. In recent days, hit-or-miss. People are probably curious, but quickly realize it takes a real amount of time and effort to learn enough to make a program, and probably decide they would rather do something else. You can speculate about that all day long. However, back in the day you didn't just buy the E/A Module (cartridge); that was mostly useless on its own. You needed a 32K memory expansion, a disk controller, and a disk drive, and that usually meant a PEB, but could also have been a collection of side-cars. That is a pretty significant monetary investment, so people were probably a lot more careful about the decision and maybe knew more about what they were getting into? However, I knew nothing in 1983 (13yo) other than a vague idea of what assembly language was, and apparently my dad had more money than sense that day (or really believed in me?) when he bought the PEB package (TI was blowing them out by then, but it still cost about $400). I had many, many frustrating days and nights, sometimes ending in tears, trying to figure out assembly from the E/A manual and the Tombstone City source code[1]. But I am stubborn as hell, infinitely curious, I must know how things work, I pay attention to detail, and most importantly I knew all coin-op video games were written in assembly. BASIC is sooo slow on the 99/4A (as we all know), and that was also a very strong motivator. So, the choices were: 1. play marginal games, 2. program in slow BASIC / XB, 3. learn assembly and own the machine with all the power and speed in the world! Muhaaa!! I furiously typed in the E/A examples; some worked, some did not (as we now know, there are errors). The first assembly program I tried to write was clearing the screen, trying to start simple, you know, like CALL CLEAR. I wrote this: `CLR R1`. I did not know what this "register" thing was that the "CLR" instruction needed, but that was the closest thing to "CLS" that I could find, since that must be assembly for clear screen... (other BASIC dialects have CLS, so...) It assembled! It loaded! It RAN! It did *not* clear the screen. It was not until I got the Lottrup book, which does start with simple programs like clearing the screen and animating the `@` symbol, that things really started to click for me. Having to write a space character to every location on the screen was probably the first algorithm I was ever directly exposed to, and it made so much sense. I don't really recommend the book these days (for reasons explained in the early pages of this thread), but it absolutely opened the door to assembly and low-level computer programming for me BITD. I still remember being so giddy about moving the `@` symbol across the top of the screen, and that I had to SLOW IT DOWN with a delay loop just to see it! Oh assembly language, you are soooo cool! I had to "slow it down!", something you would *never* have to do in BASIC, where you spent your time screaming for it to go FASTER! I bragged about that program, to anyone who would listen, for weeks. I had many firsts on the 99/4A, and this was one of them. Giving up was never an option that entered my consciousness. I do question my choice sometimes though, these days, as I scream at modern computers for entirely different reasons... [1] (The Tombstone City source came with the E/A manual, and was a really cool move on TI's part, IMO. Giving away source code to one of their commercial titles?!
It really contradicts TI's stinginess in wanting all developers to pay them for a license to publish software for the 99/4A.)
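For anyone curious, the "write a space character to every location on the screen" idea looks roughly like this in 9900 assembly. This is a sketch of my own (not the Lottrup listing), and it assumes graphics mode 1 with the screen image table at VRAM >0000 and interrupts already disabled (LIMI 0).

VDPWA  EQU  >8C02             VDP write-address port
VDPWD  EQU  >8C00             VDP write-data port
CLS    LI   R0,>4000          VRAM address >0000 with the write bit (>4000) set
       SWPB R0
       MOVB R0,@VDPWA         send the low byte of the address first
       SWPB R0
       MOVB R0,@VDPWA         then the high byte (with the write bit)
       LI   R1,>2000          space character (>20) in the high byte
       LI   R2,768            32 columns x 24 rows
CLSLP  MOVB R1,@VDPWD         write one space; the VDP address auto-increments
       DEC  R2
       JNE  CLSLP
       RT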
  6. Don't use assembly... ? Just kidding, you should absolutely learn assembly; it is worth the effort and can be very rewarding. However, assembly language is only a mnemonic representation of machine code, there to keep the programmer from having to write programs in hex or binary, so there are not many ways to keep you from treating the data in an unexpected way. My recommendations would be:

     1. Use xdt99's ability to use longer names for labels to its full extent. Picking good names for routines and variables is hard, but it is very important for reminding you what the data is and how you are supposed to use it.
     2. Adopt simple prefixes or suffixes to name what the data is (e.g. integers start with `s16_` or `u16_`, bytes with `s8_` or `u8_`, addresses with `adr_`, etc.). These days, with xdt99, I have no problem writing 9900 assembly that does not work with TI's assembler (it was good BITD, but we have better now).
     3. Use equates to name "magic numbers" and memory addresses.
     4. Use larger block comments before code to explain what it is doing, which gives context to the code that follows. I find commenting each line of code less useful, since it does not tell you "why" or "what" is going on, only "how".

The CPU's view of memory is simply an address that holds a value. The CPU does not know if the data is a signed number, unsigned, an address, part of a larger value, or anything else. It is up to the programmer to keep all that straight, know what any particular data value is supposed to be, and select the proper assembly instructions to work with the values as intended. This flexibility comes at a cost though, and assembly programmers need to be very meticulous and detail-oriented. You have to build a mental model of your data as you design and write your code. With a small retro computer a human can keep all this detail in their head at once, which is one of the main aspects of retro computing that separates it from modern computing. This is also why higher-level languages were among the first programs written for computers, with their abstractions and simplifications of what you can do with data. Such abstractions are needed to make writing much larger programs even possible, but also to make computers usable and approachable for people who do not need or want to know how the computer works, but rather use it as a tool for doing other things (which may not even be computer related, e.g. writing a book, or plans for building something).
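To make recommendations 2 and 3 concrete, here is a tiny made-up fragment in xdt99-style 9900 assembly. Every name in it (vdp_write_data, max_lives, u8_lives, adr_player_tbl, reset_player) is hypothetical; it only shows the prefix and equate ideas, not code from any real program.

vdp_write_data  equ  >8C00        VDP write-data port, named instead of left as a magic number
max_lives       equ  5            game constant given a name
u8_lives        equ  >8300        one unsigned byte in scratch-pad RAM
adr_player_tbl  equ  >8302        a word that holds an address
* Reset the player: restore the life counter and fetch the address of the
* player table so the caller can reinitialize it.
reset_player:
        li   r0,max_lives*256     constant in the high byte for the byte move
        movb r0,@u8_lives
        mov  @adr_player_tbl,r1
        rt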
  7. I'm sure I cannot take credit for that font, but I may have tweaked it a little. With the Classic99 smoothing filter turned on it is hard to know for sure which font it is. It was probably taken from a coin-op game though; I like the way video games did their fonts. Yeah, but when you think of it like a 40-page book, suddenly it seems rather short. Although content density is probably way higher in the thread than in a book. The best opportunity for helping is when a learner is engaged, asking questions, and following through. Opry99 did pretty well and got what he needed from assembly, and AirShack (RIP) finished a game in assembly. IIRC, it was a bucket-list item of his that he achieved. It was very rewarding to see their "ah ha!" moments.
  8. I hope that includes the first 40 pages of this thread. They pretty much cover everything in detail, no hand waving or mumbling around topics. The second 40 pages are mostly a rehash of the first 40 pages. Sometimes it is hard to give an answer without slipping into some lower-level detail. There is a lot of nuance at the hardware layer that leaks over into the assembly layer. The first thing people try to do is put abstractions on top of these details in an attempt to make the system easier to use. But there are trade-offs, as there always are. The hardest part about assembly language is not assembly language; it is understanding the system you are trying to write code for. If the console ISR is allowed to run at all, for any reason, *every time* you want to do *anything* with the VDP you have to disable interrupts, set up the VDP address, read / write your data, and re-enable interrupts. And if the console ISR runs and you need the VDP status in your own program, you have to get it from the dedicated location in scratch-pad (I don't remember the exact address) where the ISR stores a copy of the VDP status. @dhe The VDP can only receive one byte at a time. It takes two writes to the VDP to set up its internal address register, and reading the VDP status will reset that sequence. The 9900 (and most CPUs) will check for interrupts between instructions. This means that if CPU interrupts are enabled, your code can be interrupted between any two instructions in your program. If you are in the middle of the sequence to set up the VDP's address register, and the interrupt fires and does any communication with the VDP, then it will wreck your sequence and the VDP address register will not be set. Also, if you have set the VDP address register and are now in the process of reading / writing VRAM when the interrupt fires, and if the ISR is set to auto-play sound or process sprite motion, then it will change the VDP address register to what it needs and read / write VRAM. When your code resumes, you are now reading / writing to the wrong VRAM location. In the 9918A Datasheet, pg 2-1, section 2.1.2, along with a big NOTE. The 9918A Datasheet was not laid out very well, but the info is in there; it just takes some focused reading and taking lots of notes. What you don't get from the datasheet is the interplay between your code, the console ISR, and the VDP. This is why I really like to turn off the console ISR, and recommend that people who are learning do the same. The VDP is very straightforward, and reinventing VSBW, VMBW, etc. on your own is part of learning. All the console ROM and GROM abstraction routines are usually thought of as helpers; however, if you don't know the details of what they are doing, and all the details about the ISR and such, they can be foot-guns when you are trying to get started and just do some simple things like putting graphics on the screen. IMO, the simple way to co-exist with the ISR:

START    LIMI 0
         ...
         Init code
         ...
MAINLOOP
         ...
         All your code
         ...
         LIMI 2
         LIMI 0
         B    @MAINLOOP

Allow interrupts once in your main loop, in a very controlled place.
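Along the lines of "reinventing VSBW, VMBW, etc. on your own," here is a rough sketch of a VMBW-style block write. To be clear, this is not the console routine (the real VMBW is a BLWP vector with its own workspace); it is a plain BL-callable sketch that assumes interrupts are already off, R0 = VRAM destination, R1 = CPU RAM source, and R2 = byte count.

VDPWA  EQU  >8C02             VDP write-address port
VDPWD  EQU  >8C00             VDP write-data port
MYVMBW SWPB R0
       MOVB R0,@VDPWA         send the low byte of the VRAM address first
       SWPB R0
       ORI  R0,>4000          set the "write to VRAM" bit
       MOVB R0,@VDPWA         then the high byte with the write bit
MVLOOP MOVB *R1+,@VDPWD       copy a byte; the VDP address auto-increments
       DEC  R2
       JNE  MVLOOP
       RT

Call it with BL @MYVMBW inside the LIMI 0 window shown above.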
  9. It is never safe to write to the VDP with interrupts enabled. You need to have a `LIMI 0` sometime before you call `BLWP @VMBW`.
  10. *nods* Sorry if I came across harsh. It always sounds better and kinder in person.
  11. Use emulation then? Or stick with your PEB. Or sell the PEB to pay for the TIPI... This is a hobby, so spending lots of money is pretty much in the definition. These devices are made by regular people here in the community who put a lot of their own time, money, and effort into creating them. And then even more time and effort to make them into something people can just buy and plug in. This is no small effort and it is not cheap. People selling and supporting their devices deserve to recoup the time and life they gave up to make them available for others. I'm sure the designs for the TIPI and SAMS are out there for anyone who wants to make their own PCB and assemble the devices. Don't know, I bought them both.
  12. No, not really. I would like to, but the chips I'm using are a PITA to put on a breadboard, sometimes the parts are only available in SMD, and the frequencies are too high (100MHz access to an SRAM or SPI flash is not really going to work on a breadboard). Digital logic is pretty straightforward; the hardest parts are the analog bits, and of course noise (which is only worse on a breadboard). But it depends on what you are doing, so sometimes it makes sense to prototype on a breadboard, sometimes not. For the F18A, I developed initially using an FPGA devboard that had the FPGA I was going to use, and I made a cable to plug into the 9918A socket on the host computer (a 99/4A in this case, see photos). Once it was mostly working, I went directly to a custom PCB, and it took three revisions to work out the electronic problems. You can use simulation these days too, to great effect, especially for digital work. You can also use HDLs (VHDL, Verilog, etc.) to write simulations, test them, see the timing diagrams, and use that to prove out the circuits you build with discrete logic. HDL is not just for programmable logic. The photos are from the early days of F18A development.
  13. No need to justify anything, I'm just giving my thoughts as I look at the board. Apologies if it came across any differently. Paying attention to your design rules (based heavily on which PCB house you use), and getting them set up first, will make your life much easier in the long run. Trace/space is a big one; keep-out and distance between components, and via drill to annular-ring size, are also important. Having a clear and detailed silk screen will make your future self very happy when you go to do assembly and troubleshooting.
  14. Why is U8 so close to the edge? You have a ton of unused space on the board; I would keep things well back from the edges. Consider spending some time working on the silk screen to make all the labels big, clear, and easily visible, and add any information you need to configure the card. Every component should have a designator, and make sure all pin-1 designators are clear. It is hard to tell, but the input power trace to the regulator looks like any other trace; you are paying for the layer, so use the copper. Also, the regulator is close to the edge, which I realize is typical, but I never understood why they were done that way. IIRC, the regulators would also short out to the metal case if assembly was done incorrectly. No need to perpetuate a problematic design. What trace/space are you using for signals, and what are the specs for the vias? Edit: These are just my thoughts as I look at the board, they are not intended as criticism. Just things you might want to consider.
  15. https://github.com/hneemann/Digital "Direct export of JEDEC files which you can flash to a GAL16v8 or a GAL22v10." GALs are getting harder to find support for, and the best options seem to be some form of PALASM, CUPL, or ABEL, and various open-source tools. I was recently introduced to Renesas "GreenPak" devices, which come in small packages and can cost as little as $0.50 (a 20-pin device that would easily replace a GAL is about $1.32). There is current software support for Linux, Mac, and Windows, and the screenshots look very schematic-based and drag-n-drop. The specs of the devices are nice, and it appears they can go to 5V Vcc and therefore can support 5V TTL directly. I have not used any yet, but they look nice. https://www.renesas.com/us/en/products/programmable-mixed-signal-asic-ip-products/greenpak-programmable-mixed-signal-products As for validating your PALASM, I hope someone who knows the language and has some spare cycles will chime in. It looks easy enough, and if your chip works as expected then that is probably the best validation you can get.