
Music-generation code strategies


Propane13


Hello!

 

I've written TIA music before, and it's always a pain for me to get started.

There are so many variables, and I always bang my head against a wall before I start writing music.

 

Here's why.

 

Let's say the simplest algorithm for music generation is:

set channels once, never set again.

Read frequencies from a list at certain intervals -- if a frequency is $FF, set that channel's volume to zero; otherwise set the volume to a pre-specified amount (say, 6).
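
For concreteness, here's roughly what that minimal scheme could look like in 6502 assembly (dasm-style). The names musicTimer, musicIndex and FreqTable are made up for illustration, AUDC0 is assumed to have been set once at startup, and end-of-song handling is left out:

UpdateMusic               ; call once per frame; musicTimer must start non-zero
 dec musicTimer           ; count down frames until the next note read
 bne UMdone
 lda #10                  ; e.g. read a new frequency every 10 frames
 sta musicTimer
 ldx musicIndex
 inc musicIndex           ; (no wrap/loop handling shown)
 lda FreqTable,x          ; fetch the next entry from the song data
 cmp #$FF                 ; $FF marks a rest
 bne UMplay
 lda #0
 sta AUDV0                ; rest: silence the channel
 rts
UMplay
 sta AUDF0                ; otherwise it's a frequency
 lda #6                   ; fixed, pre-specified volume
 sta AUDV0
UMdone
 rts

FreqTable
 .byte 17,17,23,$FF,17,23,28,$FF   ; example data only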

 

The above is a very simple practice, but there is so much more that people do with the 2600's sound capabilities.

I can see that some people may want to keep a table for volume toggling, and some may want a table for sound channel swapping.

 

If you take those into consideration, you may have more than 255 bytes of notes to deal with, since you have to do things in-between notes, and this could take up a lot of space (by this, I mean you may be toggling notes every 10 frames, but volume every frame). Plus, if you want to "fake out" that you have more sound channels, you may want AUDC to switch between 2 different sounds on alternating frames. It may sound a little funny, but this may allow you to feel like you put more sound into the mix.
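
As a rough illustration of the AUDC-flipping trick (hypothetical code -- frameCounter and the two tone values are just placeholders, and the channel 0 frequency/volume are assumed to be set elsewhere):

 lda frameCounter         ; assumed to be incremented once per frame elsewhere
 lsr                      ; put bit 0 (odd/even frame) into the carry
 bcc EvenFrame
 lda #$04                 ; odd frames: one tone (pure tone)
 bne SetTone              ; A is non-zero, so this always branches
EvenFrame
 lda #$0C                 ; even frames: a different tone (lower pure tone)
SetTone
 sta AUDC0                ; the same channel now suggests two "instruments"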

 

And, to add one final headache to the mix, let's say that you have actual sounds in a game, and you want the "second track" of music to go quiet while a sound plays on screen-- you can see this happen in Pitfall 2; the music doesn't stop, but one channel seamlessly mutes whenever you get a gold bar.
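
A rough sketch of how that kind of priority/ducking might be coded (purely illustrative -- sfxTimer, sfxFreq, sfxTone, music1Freq, music1Tone and music1Vol are invented names; the real engines differ):

; While a sound effect is active it takes over channel 1 and the second
; music voice is muted; the music position keeps advancing elsewhere, so
; the track picks up seamlessly when the effect ends.
 lda sfxTimer
 beq PlayMusic1           ; no effect active: let the music through
 dec sfxTimer
 lda sfxFreq
 sta AUDF1
 lda sfxTone
 sta AUDC1
 lda #8                   ; effect volume
 sta AUDV1
 jmp Chan1Done
PlayMusic1
 lda music1Freq           ; second music voice, computed as usual
 sta AUDF1
 lda music1Tone
 sta AUDC1
 lda music1Vol
 sta AUDV1
Chan1Done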

 

Of course, all of this comes at a price, and I was wondering what some of the best coding algorithms are that people use to make their music better than the simplest method of "just notes, and no volume tweaks". Any recommendations?

 

Thanks!

-John


The Nerdy Nights Sound tutorial, on the Nintendo Age forums, is a pretty good introduction to sound engine design:

http://www.nintendoage.com/forum/messageview.cfm?catid=22&threadid=7155

 

Obviously the NES audio hardware specs are different than the 2600's, but the basic principles are the same: a sound engine works by writing data about pitch and volume (and sometimes tone quality) to each channel of the audio hardware at just the right times.


Trashmania: Remix uses two different music engines, one for the in-game music and one for the title screen music.

 

The in-game music engine is simple, and made to take up little processor time. It has a list of frequencies (115 for each song), the channel's tone (AUDC) is set to 6, and the volume is set to 8 at the same time it reads the next frequency; otherwise it's set to 0. It's set up to read the next frequency once every certain number of frames. The number of frames before the next frequency read shortens more and more throughout the game, speeding up the music.
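
In 6502 terms (the original is batari Basic), the speed-up idea might look something like this -- tempoTimer, framesPerNote, noteIndex and SongFreqs are invented names:

 dec tempoTimer
 bne NoNewNote
 lda framesPerNote        ; reload the countdown with the current tempo
 sta tempoTimer
 ldx noteIndex
 inc noteIndex
 lda SongFreqs,x          ; next frequency from the note list
 sta AUDF0
 lda #8                   ; volume 8 on the frame a note is read...
 sta AUDV0
 jmp TempoDone
NoNewNote
 lda #0                   ; ...and 0 on every other frame
 sta AUDV0
TempoDone

 ; elsewhere, the game occasionally decrements framesPerNote (down to some
 ; minimum) so the same note list plays back faster and faster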

 

The title screen engine is similar, but instead of reading from a list of frequencies, it uses the RAND function (this is batari Basic, by the way) to determine the next note. It alternates between 2 different sets of notes (one set has four notes, and the second set removes the first set's lowest note and replaces it with a higher note) every four notes played to make it sound a little more varied.

 

For example, if the rand number is between 0 and 64, it plays the lowest note. If the rand number is between 64 and 128, it plays a slightly higher note, and so on. The title screen music also uses the second audio channel as a metronome/beat in the background. This was mainly used for testing, but the music sounded weird without it.
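
The threshold test itself is simple; something like this (a sketch only -- "rand" stands in for batari Basic's random number, and the AUDF values are arbitrary examples, remembering that a smaller AUDF value means a higher pitch):

 lda rand                 ; fresh 0-255 random value
 cmp #64
 bcs Not1st
 lda #31                  ; 0-63: lowest note of the current set
 jmp GotNote
Not1st
 cmp #128
 bcs Not2nd
 lda #27                  ; 64-127: slightly higher note
 jmp GotNote
Not2nd
 cmp #192
 bcs Note4
 lda #23                  ; 128-191: higher still
 jmp GotNote
Note4
 lda #19                  ; 192-255: highest of the four
GotNote
 sta AUDF0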


I tend to keep it simple. For the engine I used in 21 Blue and The Byte Before Christmas, I set AUDC+AUDF and the initial volume for each note, and every frame I decrease the volume if the note is still playing and its volume is above 4.

 

The result is a simple envelope that resembles a struck instrument with some sustain. It's not as grating on the ears as the constant tone you get from simpler engines, and the envelope allows for both quick note changes and longer sustained notes (with some dampening).

 

For an understated echoey beat, I use a note with a silent AUDC value and let the envelope volume changes make the noise.

 

My engine isn't as fancy as some of the techno-type engines with automatic beats in every pause and various instruments to pick from, but it suits my simple composition style. And as you said, John, that complexity comes at a price.


I've been using Paul Slocum's player so far, but it doesn't do envelopes except for that emphasis bit. Hence I'm considering something like defining a set of eight "instruments", where each one has an explicit envelope and AUDC value(s?).

I'd add some kind of priority/arpeggio system to this.

Finally, being able to layer and loop various patterns would be good for ROM compactness.
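
For the "instruments" idea above, one way such a table might be laid out (purely illustrative -- these AUDC values and envelopes are made up, and this is not Paul Slocum's format):

; Eight records, each 9 bytes: an AUDC value followed by an 8-step volume
; envelope. X = instrument number * 9, envStep = current step 0-7 (both
; hypothetical).
InstrumentTable
 .byte $04, 15,12,10,8,7,6,5,4      ; pure tone, plucked decay
 .byte $0C, 12,12,11,10,9,8,8,8     ; lower pure tone, soft sustain
 .byte $08,  8, 6, 4, 2, 0,0,0,0    ; noise, short percussive hit
 ; ...five more records...

 lda InstrumentTable,x              ; byte 0 of the record is the AUDC value
 sta AUDC0
 txa
 clc
 adc envStep                        ; index to the current envelope byte
 tax
 lda InstrumentTable+1,x            ; bytes 1-8 are the per-step volumes
 sta AUDV0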


I should add that for actual composition I use a MIDI keyboard and Rosegarden. To convert the song to assembly, I quantize the hell out of it, export it in csnd format, plug the notes used into webTune2600, and then plug the optimised frequency+channel info into an ad-hoc C program that does the csnd->asm conversion.

 

Ideally I'd bypass webTune and do the tuning optimisation directly within my converter, but my current method works so my motivation to improve things is pretty low.


Sure, I can delve into the code a bit... In the engine I use 6 variables to track the song progress. Most of these could be nibbles, though in my case I wasn't tight on memory...

  • songindex ; the note position within the current music passage.
  • songtick ; used to count off time between notes
  • note0duration ; how many frames the note in channel 0 has left
  • note1duration ; how many frames the note in channel 1 has left
  • note0env ; the present volume for channel 0
  • note1env ; the present volume for channel 1

The first thing I do when I enter the routine is check if we're in-between note-positions. If so, I skip the music routine...

 

; if songtick isn't 0 then skip this frame for our delay between notes...
 lda songtick
 beq carryon1
 dec songtick
 rts
carryon1

 

...otherwise I do the envelope adjustments. I do these adjustments only when a note could be played, rather than every frame, to slow down the envelope changes...

 

 ; decrease the volume for the note0 envelope if it's above our sustain level
 lda note0env
 cmp #4
 bcc carryon2
 ;sec
 sbc #2
 sta note0env
 sta AUDV0
carryon2

; decrease the volume for the note1 envelope if it's above our sustain level
 lda note1env
 cmp #4
 bcc carryon3
 ;sec
 sbc #2
 sta note1env
 sta AUDV1
carryon3

 

...from there it's pretty straightforward. For any notes whose start positions match the current song index, I set up the first voice that isn't already playing a note.
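
As a rough guess at what that dispatch could look like (this is not the actual engine -- SongFreq, SongCtrl and SongLen are invented table names indexed by songindex, and the start-position matching is glossed over):

 ldx songindex
 lda note0duration
 bne TryVoice1            ; voice 0 busy? try the other one
 lda SongFreq,x
 sta AUDF0
 lda SongCtrl,x
 sta AUDC0
 lda #15                  ; start the envelope loud (initial volume assumed)
 sta note0env
 sta AUDV0
 lda SongLen,x
 sta note0duration        ; decremented once per frame elsewhere
 jmp NoteDone
TryVoice1
 lda note1duration
 bne NoteDone             ; both voices busy: the note gets dropped
 lda SongFreq,x
 sta AUDF1
 lda SongCtrl,x
 sta AUDC1
 lda #15
 sta note1env
 sta AUDV1
 lda SongLen,x
 sta note1duration
NoteDone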

 

If you're curious about the rest I can polish it up a bit and post it, though it's far from optimal.

