
What do you guys use to create Pokey Music?



Hi:

 

Question, if I may:

 

I have to create at least 6 sets of background music (as well as other incidentals) for my "Bentley Bear" game. Not wanting to code music by hand, I attempted to write an app that takes (musical) keyboard input and translates it to POKEY AUDFx data. I got pretty far with it, too, but unfortunately I ran into an issue that caused me to abandon it.

 

Anyway, I am using 4 channels, one of which is reserved for sound effects, which leaves me three for music (just FYI).

 

Is there *anything* I can use to create music and output POKEY data, or do I have to hand-code the music?

 

Thanks for your help, guys,

Bob


There are several trackers that you can compose your music in and that provide an assembly language player routine that you can incorporate into your program.

 

Raster Music Tracker is a Windows program:

 

http://raster.infos....ari/rmt/rmt.htm

 

Theta Music Composer runs on the Atari or an emulator:

 

http://jaskier.atari8.info/#

 

(For English, click the American flag -> Old Work -> TMC2.)

 

Those are my favorites, but there are others as well.


If you're not happy having resident RMT in your program (it uses about 16 zero-page locations, and the player itself takes a few KB), I had the idea some time back to modify the player to just dump out the raw register data.

 

From there, some sort of optimisation and compression could be applied. The final result would be a CPU saving within the game; the memory saving, if any, would be highly variable depending on how much music you have in the first place and on the complexity of the tunes.

 

Best usage would be in cases where there are only 2-3 short, reasonably simple tunes.

 

What went wrong with your Plan A? I'd have thought such an application could be really useful and present another way for music to be created.


Thank you --- now I just have to figure out how RMT works ;)

 

Thanks again,

Bob

 

EDIT - Sorry Rybags - saw your response after I posted:

It's not that I'm not happy with RMT; I don't fully understand how it works and how it would integrate with my game. :(

 

As far as the 'Instrument Input' application goes, there were too many issues: I couldn't get the metronome, quantization, and MIDI input synchronized, and there was a (quite) noticeable delay between hitting the piano keys and actually hearing the sound, which made timing almost impossible. :(


  • 1 month later...
  • 1 year later...

> If you're not happy having resident RMT in your program ... I had the idea some time back to modify the player to just dump out the raw data.

I know this is an older topic, but it's exactly the problem that I have right now...

 

I have only a couple of sound effects (done as instruments in RMT) and would like to use the least memory possible to play them.

The RMT player routine plus its reserved space is almost 2 KB, if I remember correctly...

Is it possible to play RMT instruments without the RMT player routine?

 

I know instrument playing is most of what RMT actually does, but I hope the song/track-playing part takes memory too, so removing it would be a large saving...

 

Any experiences with stuff like this would be much appreciated :)


  • 4 weeks later...

Out of curiosity, does anyone know how much CPU time the RMT player takes per frame? Also, how often does the RMT player routine loop?

Pretty sure it's variable, based on the instruments used, song complexity, etc. I know many of the song demos done with it use "timing bars" to show the usage.


Pretty sure it's variable, based on the instruments used, song complexity, etc. I know many of the song demos done with it use "timing bars" to show the usage.

 

There are a couple of reasons why I ask. One is that I was previously using Torsten Karwoth's sound monitor, and that didn't take more than 2 scan lines per frame to process, whatever the tune; I can't remember whether it supported 8 channels or just the 4. The other reason is that I don't know how frequently the audio volume/channel registers need to be updated in order to get vocal frequency patterns sounding pure. Does anyone know? Is updating once a frame good enough for vocal sounds, or are we talking updates as fast and continuous as the Atari allows?

Visually, the picture only needs to be updated every 50th of a second, i.e. once per frame (UK/PAL), in order to appear solid, but what update speed is required for music, sound, or vocals?


Visually, the picture only needs to be updated every 50th of a second, i.e. once per frame (UK/PAL), in order to appear solid, but what update speed is required for music, sound, or vocals?

Probably the majority of music composed on the A8 only updates the POKEY registers at 50/60Hz. That's typically enough for sufficient ADSR envelopes and fast enough to offer various lengths for sixteenth notes so you can choose a suitable tempo. There are some tunes that update two or more times per frame taking them up to 100/120Hz or greater. That provides more detail for ADSR envelopes and gives more options for the length of sixteenth notes. Of course, then there are "digi" tunes that update anywhere from ~4kHz to 15kHz+ in order to play digital samples.

 

Usually vocals would have to be done with digital samples. However, it might be interesting to try to adapt the techniques in a recent set of C64 demos by Algorithm that only update the SID registers at 50 Hz but still manage to produce fairly recognizable vocal sounds. See here:

 

Frodigi 2 - CSDB

Frodigi 2 - Pouet

Frodigi - CSDB

Frodigi - Pouet

 

Though POKEY doesn't have the variety of filters that SID has nor an easy way to select waveform duty cycles, so it might not translate so well.

Edited by Xuel

>Is it possible to play RMT instruments without rmtplayer routine ?

If you export a song from RMT, you also get a source include file with defines containing the subset of RMT features your song actually uses.

The RMT replayer uses the defines for conditional assembly; hence the player for a simple song that uses fewer of the advanced features will be correspondingly smaller.

Raster did an awesome job of optimizing this with regard to both speed and memory. I wouldn't expect there to be much room for further optimization; it is already great.

 

    IFT FEAT_COMMAND2
frqaddcmd2          org *+1
    EIF
    IFT TRACKS>4
    org PLAYER-$400+$40
    ELS
    org PLAYER-$400+$e0
    EIF
    IFT FEAT_PORTAMENTO
trackn_portafrqc    org *+TRACKS
trackn_portafrqa    org *+TRACKS
trackn_portaspeed   org *+TRACKS
trackn_portaspeeda  org *+TRACKS
trackn_portadepth   org *+TRACKS
    EIF

> If you export a song from RMT, you also get a source include file with defines containing the subset of RMT features your song actually uses.

That's just what I was looking for! Thanks for the info!



 

> Probably the majority of music composed on the A8 only updates the POKEY registers at 50/60Hz. ...

 

Am I the only one baffled here, or what? The visual picture needs to be updated at least 50-ish times a second, yes? Then why do sounds need to be produced at a higher frequency? I've always been led to believe that light travels faster than sound. Is it just me being stupid, or what? Someone please explain.


Am I the only one baffled here, or what? The visual picture needs to be updated at least 50-ish times a second, yes? Then why do sounds need to be produced at a higher frequency? I've always been led to believe that light travels faster than sound. Is it just me being stupid, or what? Someone please explain.

 

It's not related to the speed of light or sound but to how quickly the human brain can discern changes in either. Persistence of vision means you only need around 25fps to make a movie that is pleasing to a human observer. However, the ear can discern much more rapid changes and indeed these rapid changes in frequency and amplitude are what makes one voice/instrument/noise sound distinct from another.

 

Of course visible light is really electromagnetic radiation which has frequencies in the ~400THz to ~800THz range (that's terahertz) so technically pictures are being updated very quickly indeed by the phosphors in your monitor.


Another thing to consider is that a typical music composition might have a tempo of 120 beats per minute. That means that a sixteenth note occurs eight times per second or at 8Hz. If you want those sixteenth notes to sound staccato, i.e. short, then you have to tell POKEY to play them initially with a certain volume and then decrease the volume rapidly before starting the next note. Say we want to update the volume 6 times before the next sixteenth note to form a nice ramp, well then you need to update POKEY at 48Hz.

 

EDIT: Grammar and clarity.

Edited by Xuel

 

> It's not related to the speed of light or sound but to how quickly the human brain can discern changes in either. ...

 

That helps (a bit). I did realise it's the brain that discerns the difference, but my (somewhat small) knowledge of frequencies still makes it difficult to put things in perspective. Audible frequencies are lower than visual frequencies, right? Then how come the ear, as you say, discerns much more rapid alterations?


  • 3 weeks later...

One thing to bear in mind is that even though the Frodigi method updates once per frame (or even once per two frames), the oscillators in the SID update every cycle (just below 1 MHz!). Hence the sample rate is not 50 Hz; it's only the update rate of the parameters that is 50 Hz, and the SID does the rest without CPU intervention.


SID has the advantage of playing the ADSR envelope without CPU usage.

But the sound generation in both chips is of course running faster. POKEY is able to create signals at approx. 3.5 MHz.

Also, several "features" of POKEY act without CPU usage.

 

 

Have a listen to the complexity of the available sounds and the small CPU-usage bars.

I usually play the tunes in Altirra at 60 Hz, because the sound emulation is more stable there. The tune itself uses 50 Hz programming.

Edited by emkay

There are a few disadvantages in the SID (when taking into account the FRODIGI methods)

 

First of all, holding onto any sustain value per frame is an issue when the new sustain value is higher than the previous one. It is possible to have the same sustain value as, or lower than, the previous, but not higher. (A workaround is to update the sustain and then turn the gate off and on for that channel, but this results in noticeable clicking.)

 

The method I am using changes the master volume for all three channels. This makes it less accurate (but more compact in data). The issue again is that on the old SID (6581) there will be noticeable clicking. To reduce this, the master volume can be updated in between with the value interpolated from the previous and next values (which halves the volume of the click).

 

Another idea is using a fast attack and slow release and setting gate-off before the sustain phase is reached (at a desired interval, until the attack reaches its amplitude). Unfortunately this may still have some quality issues due to the curve in amplitude at the beginning of the update, as well as the amplitude level before the next update.

 

Changing duty cycles can somewhat mimic lower/higher volume, but this is linked to the frequency. The same goes for filters.


The "filter" of POKEY isn't just a filter. The "modulation" it can build can be used for many kinds of wave generation. It doesn't really cut the low frequencies; it "hacks" the 1st channel with the frequency and wave of the 2nd channel.

You can adjust that "hacking" by programming the timing of the generators, for special things like "frequency range enhancement" or "wave shaping"; thus you get a higher resolution when using filters. Well, it also helps with the volume control.

As all Ataris have the same CPU/POKEY timing dependency in common, you can do this timing programming, and it works very well for better-sounding music.

I'm pretty sure some people would think the filter sounds were built in the 15 kHz mode. It's actually using the 64 kHz mode and modulations.

But it sounds too different in RMT to have a "perfect" demonstration of the waveshaping for a better "sweep over the correct note"...

 

Still 50Hz programming though...

 

[embedded video]

A last one ;)

Originally converted by miker; I was threatened ;) into putting the tune into one POKEY and 50 Hz programming.

 

It's the only tune that uses 3-channel modulations for the chorus synth, and of course it uses modulated sawtooth waves in the arpeggios.

 

 

The start of the main synth tends to cancel out; that's due to differences in the emulations and timings.

 

 

[embedded video]

> One thing to bear in mind is that even though the Frodigi method updates once per frame, the sid oscillators in the SID update every cycle ...
> SID has the advantage to play the ADSR without cpu usage. ...

 

That's very helpful. Though, what is SID? Also, what is the Frodigi method?


The SID (Sound Interface Device) is the sound chip in the C64 and C128.

 

The Frodigi method (Free Running Oscillator Digi) recreates audio by placing frequency and waveform values into the three channels of the SID once per frame to approximate the original source audio. The decoder is nothing more than reading some bit-packed data and placing it into a few registers.

 

The encoder is a different thing altogether, of course.


> The Frodigi method (Free Running Oscillator Digi) recreates audio by placing frequency and waveforms into the three channels of the SID once per frame ...

In other words, it is a synthesis method. You can clearly hear the difference between the original and the produced sound, but the clarity of the result is good.

 

And the question is why this kind of synthesis with POKEY is almost never used, and even today isn't on anyone's agenda.

 

We have already shown that the 1.79 MHz filter (resulting in ~3.5 MHz waveshaping) is able to produce various types of waves: sawtooth in backward and forward forms, triangle...

 

Waveshaping already works in RMT, with restrictions.

 

Just two videos with the same tune and the same number of instruments. Just some timing changes, some volume changes, and for the "broad" sounds the pitch gets an offset...

 

 

 

 

Well, if the results can be handled and reproduced, you could put them into a formula.... for synthesizing digis... for example ;)


I'm curious. By the sounds of it, once or twice per frame seems ample for a program as good as RMT to create numerous instruments, but digi and synthesized sounds require much quicker updating. Could (or does) RMT, or any music program in general, allow digi instruments? That is, combine your regular instruments in the same package with digi instruments that run faster than the rest of the piece: all the usual instruments update once or twice per frame, but when digi instruments are detected, they get updated more often, perhaps five or more times per frame.

