
Fixing Intellivision Music Tracker Drums


DZ-Jay


Hi, guys,

 

I thought it would be useful to start a new thread for this topic. An issue was recently brought up by carlsson on the way the Intellivision Music Tracker handles drum definitions. It seems that, although the tracker supports up to 15 different instrument definitions, it will expect all drum definitions to start at instrument index #5. That is, drum instrument definitions are hard-coded to start on the fifth record of the instruments table.

 

This is only an issue if you intend to use the built-in drums feature. All "normal" instruments are still read as expected.

 

An additional consequence of defining drums is that it makes the instruments table a little weird. You see, an instrument definition is three words long, but a drum is defined with a single word. Therefore, if you intend to define more than the first 4 "normal" instruments, you will have to pad your drum definitions so that the "normal" instruments that follow them still land on 3-word record boundaries.

 

In other words, consider the following instruments table with 6 "normal" and two "drums":

;; ------------------------------------------------------------------------ ;;
;;  Standard instruments (pitch effect, vibrato, envelope)                  ;;
;; ------------------------------------------------------------------------ ;;
@@instr     DECLE   MUSIC.pitch01, 1, MUSIC.env01   ; Instrument #1
            DECLE   MUSIC.pitch01, 2, MUSIC.env02   ; Instrument #2
            DECLE   MUSIC.pitch02, 2, MUSIC.env01   ; Instrument #3
            DECLE   MUSIC.pitch01, 2, MUSIC.env01   ; Instrument #4

;; ------------------------------------------------------------------------ ;;
;;  Drums                                                                   ;;
;; ------------------------------------------------------------------------ ;;
            DECLE   MUSIC.drum1                     ; Drum #1 (but also instrument #5)
            DECLE   MUSIC.drum2                     ; Drum #2
            DECLE   $00                             ; (padding to a three-word record)

;; ------------------------------------------------------------------------ ;;
;; More Standard instruments (pitch effect, vibrato, envelope)              ;;
;; ------------------------------------------------------------------------ ;;

            DECLE   MUSIC.pitch02, 2, MUSIC.env01   ; Instrument #6
            DECLE   MUSIC.pitch01, 2, MUSIC.env01   ; Instrument #7

 

 

 

When addressing the normal instruments, the tracker still counts three fields per record, including the drums record, so you need to bear that in mind.
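To keep the bookkeeping straight, here is a quick illustration of the record arithmetic (my own annotation, not part of the tracker source): each record is three words, so instrument #N is found 3*(N-1) words past the top of the table.

;; Illustration only (not tracker code): word offsets from @@instr above
;;
;;   Instrument #1  ->  @@instr + 0
;;   Instrument #4  ->  @@instr + 9
;;   Drums record   ->  @@instr + 12    (drum1, drum2, padding)
;;   Instrument #6  ->  @@instr + 15    (first record after the drums)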

 

 

So... how do we solve this? There are a few things I'd like to take into consideration prior to addressing this issue.

  • I don't intend to re-write the music tracker, so the solution should be as easy as possible.
  • There are existing tracks already out there, so we want to be as careful as possible to avoid (much) breakage.
  • I want to avoid increasing the RAM variable requirements (my games are already tight in that regard).
  • I want to avoid requiring the programmer to keep too much of the complexity in his head, so let's use and abuse Macros as much as we can.

With that in mind, it occurs to me that our best options are either to keep a pointer directly to the drums table, separate from the regular instruments, or to derive the drums table from an existing pointer.

 

The first option will require one more variable, so I want to avoid that. The second option is what is done right now, except that the drums are derived from the top of the instruments table, which necessarily limits the number of instruments or the length of the table.

 

So, my idea is to add a pointer to the music header, and derive the drums table from there. Since the address to the drums definition is only acquired at a single place in the code, this narrows down the impact.

 

The relevant code is shown below:

;; ------------------------------------------------------------------------ ;;
;;  Drum                                                                    ;;
;; ------------------------------------------------------------------------ ;;
;;  NOTE: drums are currently supported on channel A only                   ;;
;; ------------------------------------------------------------------------ ;;
@@drum      CMPI    #8,     R2          ; end of drum ?
            BGE     @@end_drum

            SUBI    #85-3*4,R1          ; get pointer to drum data
            ADD     INS_PTR,R1
            MVI@    R1,     R4

The instrument index is compared to 85 as a threshold between "normal" and "drum" instruments: values below 85 are "normal" instruments; values from 85 up are drums.

 

Notice then that the starting address of the drums table is computed as the 5th record of the instruments table (3 words * 4 records = 12 words past its start). This is what we want to patch.
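To spell out the arithmetic (my own annotation, not part of the source): for a drum note value N (with N >= 85), the two instructions above compute the address of drum entry (N - 85), located right after the fourth instrument record.

;; Illustration only: effect of the original code on a drum note value N (N >= 85)
;;
;;   SUBI #85-3*4, R1    ; R1 = N - (85 - 12)
;;   ADD  INS_PTR, R1    ; R1 = INS_PTR + 12 + (N - 85)
;;                       ;    = 5th record of the table + drum index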

 

A song header currently has three fields:

;; ------------------------------------------------------------------------ ;;
;;  SONG HEADER                                                             ;;
;; ------------------------------------------------------------------------ ;;
    DECLE   <SPEED>, <POINTER TO PATTERNS>, <POINTER TO INSTRUMENTS>

What we want to do is change the SONG header to include a pointer to the drums, and modify the code above to compute the pointer to the drums table from there.

;; ------------------------------------------------------------------------ ;;
;;  SONG HEADER                                                             ;;
;; ------------------------------------------------------------------------ ;;
    DECLE   <SPEED>, <POINTER TO PATTERNS>, <POINTER TO INSTRUMENTS>, <POINTER TO DRUMS>

We must be aware that changing the header will alter the position of the song definition, which currently starts right after the header as the 4th field from the top of the SONG record; with the new field it becomes the 5th.
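As a concrete example, a song record with the extended header could look like the sketch below. The labels are placeholders of my own, not names from the tracker source.

@@song      DECLE   <SPEED>             ; SPEED
            DECLE   @@patterns          ; POINTER TO PATTERNS
            DECLE   @@instr             ; POINTER TO INSTRUMENTS
            DECLE   @@drums             ; POINTER TO DRUMS (new field)
            ; the pattern order list follows here, one word later than before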

 

The pattern sequence is read at the top of the TRKPATINIT() routine and apparently nowhere else.

;; ======================================================================== ;;
;;  TRKPATINIT    Pattern initialization                                    ;;
;; ======================================================================== ;;
TRKPATINIT  PROC

            MVI     SONG,   R4
            INCR    R4
            MVI@    R4,     R2          ; R2 = address of 1st pattern
            INCR    R4
            MVI     PAT,    R0          ; R0 = position in patterns order table
            ADDR    R0,     R4

            MVI@    R4,     R1          ; R1 = pattern number
            TSTR    R1
            BPL     @@pat_ok            ; end of patterns ? ...

So, once we make our header changes, the second "INCR R4" above will need to be changed to an "ADDI #2, R4" to compensate for the extra field in the header.
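For clarity, here is how the top of the routine would read after that change (a sketch of my reading, assuming the extended header above):

TRKPATINIT  PROC

            MVI     SONG,   R4
            INCR    R4
            MVI@    R4,     R2          ; R2 = address of 1st pattern
            ADDI    #2,     R4          ; skip instruments and drums pointers
            MVI     PAT,    R0          ; R0 = position in patterns order table
            ADDR    R0,     R4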

 

And now, finally, with all that in place, our pointer to the drums table can be derived like this:

;; ------------------------------------------------------------------------ ;;
;;  NOTE: drums are currently supported on channel A only                   ;;
;; ------------------------------------------------------------------------ ;;
@@drum      CMPI    #8,     R2          ; end of drum ?
            BGE     @@end_drum

            SUBI    #85,    R1          ; get drum index

            ; DRUM FIX:
            ; --------------------------
            MVI     SONG,   R4          ; \_ Get pointer to the base of the drums table
            ADDI    #3,     R4          ; /
            ADD@    R4,     R1          ; Add "note" index to point to the actual drums entry
            MVI@    R1,     R4          ; Get pointer to drums instrument
            ; --------------------------

            SLL     R2,     2
            ADDR    R2,     R4

            MVI@    R4,     R0          ; tone period

Thoughts? Ideas? Comments?

 

-dZ.

 

 

EDIT: Updated the last patch to correctly compute the drums table pointer.


I think any solution is good. I understand adding another pointer definition would introduce a minimum of breakage. It leads to the question of whether Arnauld (or anyone else) is planning additional changes at the same time. The latest official version dates from 2008, so it isn't too soon for a new version.

 

As it currently stands, I would be fine with using the hack way of doing things. Since drums seem to be something most other users have avoided, perhaps it isn't a pressing matter; the few people who like to use them could live with hack solutions for the time being, rather than shipping a slightly fixed version with minimal breakage now and then a later version with bigger breakage over time.


  • 2 months later...

This is kind of a tangential point: using drums in many in-game background music cases is not useful, because the single noise channel in the AY chip is needed for sound effects. I am working on background tracks that use just two sound channels, one of them with a very low pitch to give something (very) roughly similar to the noise channel; that frees up the 3rd voice and the noise channel for effects.



If you check out my sound engine, which is included in the Ms. Pac-Man source distribution, this would not be necessary.
You could easily write three-channel compositions, and have higher priority sound effects temporarily interrupt those channels, if you don't mind that kind of effect. Temporarily interrupting the percussion channel does not seem very intrusive to me.

  • 4 weeks later...

 


 

 

Hi, Carl, thanks for the suggestion. Are you releasing the source code to your sound engine into the public domain or under some sort of open source license?

 

Also, do you happen to have documentation on the data format and usage of your sound engine? I spent considerable time and effort trying to figure out Arnauld's tracker format and functionality, which I tried to document in another thread. I don't wish to spend all that effort again reverse-engineering or figuring out an entirely new platform from scratch.

 

Perhaps a simple demo with a sample song and a description of how to use it would help. :)

 

-dZ.


I just noticed a stupid bug in my patch code when decoding the drums instrument: instead of adding the "note" index to the drums table base pointer, I was adding the offset into the SONG header where I defined that pointer. :dunce:

 

The correct patch code should be:

;; ------------------------------------------------------------------------ ;;
;;  NOTE: drums are currently supported on channel A only                   ;;
;; ------------------------------------------------------------------------ ;;
@@drum      CMPI    #8,     R2          ; end of drum ?
            BGE     @@end_drum

            SUBI    #85,    R1          ; get drum index

            ; DRUM FIX:
            ; --------------------------
            MVI     (SONG + 3),   R4    ; Get pointer to the base of the drums table
            ADD@    R1,     R4          ; Add "note" index to point to the actual drums entry
            ; --------------------------

            SLL     R2,     2
            ADDR    R2,     R4

            MVI@    R4,     R0          ; tone period

I've updated the first post. Thanks to user shazz for trying it and finding out that it did not work. :)

 

-dZ.


 


It's been released -- it's part of the Ms. Pac-Man distribution, in the file MSSOUND.SRC.
There's not much in the way of general documentation, but all of the commands do have their opcodes listed and their bit fields mapped out, including examples of usage. However, since my assembler does not support macros at all, the macros that you are seeing have been baked in.
It would be necessary to write the macros from scratch in Joe's assembler, but since the opcodes are provided to you, it is definitely doable for a determined person.
As for examples, there is Ms. Pac-Man, though it is not a very good example, as it hardly uses any features of the sound engine. The reason for this is that I mainly took output values I calculated from reverse-engineering the arcade sound engine, and then just pushed them out to the PSG in a long series of set-tone-period commands. (Lazy but accurate.)
The music is there though, in the proper format.
Another thing that may be difficult to grasp is that the sound engine has to live at certain addresses. I'm not sure Joe's assembler is capable of that.
Either way, I think it's worth a study. In the future, I hope to update it with a few new features and eventually document it and convert it over to Joe's assembler as well, but it may be some time before that happens.
Carl

Carl, how is your music format organized? As noted, Arnauld's tracker has something of a SoundTracker-style, modern chip-music structure, with playlists encompassing all channels and individual blocks of music that can be reused on any channel. By contrast, something like the IntyBASIC music format inlines all the music note by note, all channels in parallel, into a long list with a few options for repeat and jump. Decle's MIDI playback routine may be rather simple and straightforward, not intended for composing in by hand.

 

I realize there are additional variants in between those, like a format where all channels are in parallel but divided into music blocks that can be repeated in any order (actually the classic SoundTracker format). Some might want to think in terms of traditional stave notation, where you'd have indicators for opening and closing a repeat, various Dal Segnos, Codas, omitting a repeat on the segno, etc.

 

The more the merrier, but if there are more than three openly available formats, it might be good to know how they differ and which is suitable for which occasion. A piece of music that is constantly changing, with few or no repeated sections, will only incur overhead from adding playlists and block structures. A pop song that has verses and a chorus, on the other hand, saves a lot of space with that structure, even if only a few patterns really get reused and the others are different. Some music formats also have commands in the playlist for repeating a pattern N times instead of writing out every pattern each time. Some also allow patterns to be transposed -N to +N semitones on the fly, which saves space but may make things more complex.

 

The fact that your routine has prioritized sound effects that mute any music going on and then let the music continue once the effect is done sounds cool, and I understand it would have lots of use in games. For title screen or even demo music, it is not that important compared to being able to use every supported channel.

 

By the way, how many of the available routines support the extra voices in the ECS?


By the way, if changes to the music format are inevitable, I would recommend yet one more change while we're at it:

 

@@drum      CMPI    #16,     R2          ; end of drum ?
            BGE     @@end_drum
This will make each drum pattern twice as long. I made some rudimentary checks so that a song at a high tempo wouldn't crash just because the drum pattern counter never reaches the last step before a new note is played.

 

Why I want extra-long drum patterns is to enable falling tom drums and other types of effects for which the resolution of 8 frames is a bit too short. The difference is greater than you'd expect.


Please understand this is code I wrote well over 10 years ago, so I can't possibly answer this as intelligently as I could back then, but I'll try to explain a few things.


For one, there really is no structure, like with patterns and such (as in Arnauld's tracker routines). It's just a bunch of commands, PLAY being one of many. If you want to organize them into patterns that can be repeated at whim, you can: just use a bunch of CALL statements, with the patterns being your subroutines. There are also 3 sound registers, which you can use as loop counters. You can also load them with random values, and choose to branch or not depending on the contents. The neat thing is that all the same commands are available for sound effects as well.


Channels can be in one of three modes: sound effect, drums, or music. The drums function in exactly the same way as Arnauld's -- I pretty much took them as is (not in code, but in design). The sound effects work almost exactly like they do in the EXEC. You can pretty much use EXEC sound effects as is… The only thing is that my code does not cap the volume levels (prevent them from going below zero or above 15); they just roll around (which is neat for sound effects). So you would have to modify the code to make sure that they don't, but that's the only difference. Of course the code is totally different and much more efficient.


The encoded format is extremely compact. Most commands don't take more than one opcode (one program location), even the PLAY command, which allows you to specify volume, duration, and note. The lengths are encoded into a lookup table, and you can change the base address with the SPEED command, so it's possible to play compositions at different speeds (but not sound effects). It has a variable opcode width, so, for example, I don't just have four bits for distinguishing between commands; it depends on the command. The PLAY command has an opcode width of one bit, for example. I really can't explain it; you will have to look at the code in this case.


As to your questions about the ECS, yes it does support the second PSG, in fact in a very interesting way. I mentioned that lower priority sound effects are temporarily "muted" so that higher ones can play (like having a sound effect temporarily interrupt the channel of music, and then having it resume when the sound effect is over). The fact of the matter is, it just pushes lower priority sound effects onto the second PSG. The effect is that when the ECS is plugged in, you don't hear any interruptions. But it's totally optional… You don't have to directly support the second PSG and it works either way.


The record format and memory use are also very compact. It uses both 16-bit and 8-bit RAM… The process records use both; the channel records use only 8-bit RAM. You can decide to use as many processes as you want… The most powerful option is to use a separate process for each channel, but each process is actually capable of reserving and commanding up to three channels at once. Using fewer processes of course uses less memory. You could really just set aside memory for two processes and fully command both PSGs at once if you want.


One limitation it has is that it can only command one envelope generator, even if you have the two PSGs.


Anyway, now that I've probably made it about as clear as mud, I'll just stop. Someday I hope to go back and add a couple of features to it, document it at least a bit better, and get it out there. Eventually I'll have to convert it over to Joe's assembler, although that would not be too hard; it would take time to write all the macros out to support each command, though.


Carl


 


 

 

Hi, Carl,

 

In the interest of keeping on topic, may I suggest you create a new thread for your music tracker support? There are several questions here about Arnauld's tracker, which is the subject of this thread, and I think it would just cause confusion to discuss both.

 

With a dedicated thread, you can offer all sorts of details on your tracker and people can provide feedback and discuss it at length without confusing the topic.

 

-dZ.


 


Thanks for the suggestion, but I have nothing further to say on my sound code at this time. I may never get around to documenting it "properly" (although all the information is there, I just don't handhold), so I've released the code for others to study and hopefully take up the "cause".
Have fun. ;-) Carl

By the way, if changes to the music format are inevitable, I would recommend yet one more change while we're at it:

 

@@drum      CMPI    #16,     R2          ; end of drum ?
            BGE     @@end_drum
This will make each drum pattern twice as long. I made some rudimentary checks so that a song at a high tempo wouldn't crash just because the drum pattern counter never reaches the last step before a new note is played.

 

Why I want extra-long drum patterns is to enable falling tom drums and other types of effects for which the resolution of 8 frames is a bit too short. The difference is greater than you'd expect.

 

 

Sounds like a good use case. Do you happen to have a demo to illustrate it? :)

 

-dZ.


  • 11 months later...

For those paying attention (*crickets* *crickets*), please find attached a modified version of Arnauld's Intellivision Tracker source. This version includes the following changes:

  • Drums fix discussed above to support drums and tone on the same channel.
  • 16-step drum patterns as suggested by carlsson.
  • Support for secondary PSG in ECS module.
  • Fix to enable "global fade" (it didn't work before).
  • Lots of source code annotations (comments) describing how the tracker functions.
  • Various code optimisations to improve performance (and compensate for the extra processing incurred by the second PSG).

This version of the tracker was used in a project that required the ECS, so the code expects the ECS to be available. It shouldn't be too hard to make it optional, but I just haven't gotten around to doing it. I am providing the source code as is, taken directly from our project's code base.

 

Downloads:

 

Please let me know if anybody has any questions or if you encounter any problems.

 

-dZ.

 

 

EDIT: Updated with drums+tone fix mentioned below.


Nice. One of these days I'll download it to determine how we got "drums and tone on the same channel" working. Perhaps every second note being one or the other, or did we go as far as creating arpeggio/instrument effects that have a combination of fixed and relative frequencies, so you really could play e.g. a bass note with a percussive effect at the start?

 

Of course in a project utilizing the ECS with another 3 channels of sound that is not important as it would give you plenty of channels for everything.



 

Yeah, I think that's what we did: just play tones first and never go back after activating drums on channel A.

 

I seem to recall claiming I had fixed the bug, yet it still manifested; and then we just ran out of time, so we didn't worry about it.

 

Now someone else was asking me about it, so I took a fresh look and found a very stupid bug. I can see why I didn't notice it before; it's very subtle: a pointer was off by one. So, if you don't carefully analyse the address, you can just totally miss it and assume the problem is somewhere else.

 

Of course in a project utilizing the ECS with another 3 channels of sound that is not important as it would give you plenty of channels for everything.

 

I think that's what we did: focus on ECS support so that you could avoid the problem completely. :)

 

-dZ.

