thomzane,

@inversephase What is the workflow like to add voices to NES music like in the track Duke NES'em 3D? I have always wondered that. https://inversephase.bandcamp.com/track/duke-nesem-3d

inversephase,

@thomzane samples on oldskool systems are a mess. Some hardware, like the NES, has dedicated sample hardware called DPCM (delta PCM), or what some people call the DMC: you get these 1-bit bitstreams that tell the waveform whether to go up or down over time, and then the hardware can play that back without much resource drain. In a modern tracker you can just load a WAV into a slot and, for the most part (barring memory limitations), it automagically prepares it for playback.

Two other ways to do samples on the NES (both closer to what modern hardware does) are: (1) play the lowest note on the hardware, where the square wave "sticks" high or low, and vary the channel volume to follow the level of the samples; or (2) use PWM, where rather than varying the amplitude of the wave (which is how you'd visualize it on a scope), you use another 1-bit bitstream, and the length of time spent on a high or a low pulls the speaker in that direction for that amount of time. That amount of time is so incredibly small that the speaker might not make it all the way to one extreme before the opposite bit sends it back in the other direction.
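
The PWM idea in (2) can be sketched as a toy model, and this is my own illustration, not code for the console: each sample's amplitude becomes the fraction of a tiny frame spent "high", and the "speaker" here is just an average over each frame. The frame size of 15 is an arbitrary choice made so the 0–15 volume levels map exactly:

```python
def pwm_encode(samples, frame=15, max_amp=15):
    """Turn amplitudes (0..max_amp) into a 1-bit stream, `frame` bits per sample."""
    bits = []
    for s in samples:
        high = round(s / max_amp * frame)         # time spent high this frame
        bits.extend([1] * high + [0] * (frame - high))
    return bits

def pwm_decode(bits, frame=15, max_amp=15):
    """Average each frame of bits to recover the amplitude, like a speaker cone
    that can't fully respond to each individual bit."""
    out = []
    for i in range(0, len(bits), frame):
        chunk = bits[i:i + frame]
        out.append(round(sum(chunk) / frame * max_amp))
    return out

voiced = [0, 7, 15, 3]
assert pwm_decode(pwm_encode(voiced)) == voiced   # roundtrips at these settings
```

A real speaker does the averaging physically: its inertia low-passes the bitstream, which is the same trick class-D amplifiers and 1-bit DACs use.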

Duke NES'em 3D was one of the first NES tracks I ever wrote, so the song was written in soundtracker, and I used a bunch of shell scripts and compilers to compile it into an NSF.

For the voices themselves, first I got Jon St. John (Duke Nukem himself) to record the lines. Then, to be honest, I did a whole lot of bumping around in the dark. I didn't really know what I was doing, other than that I knew the hardware wouldn't push out a full-quality WAV. I somehow found that using a 3-bit amplitude worked out really well. Reflecting on it, the octal thing was almost definitely a mistake, because that's 3-bit and the amplitude on the square waves can be 4-bit. Maybe I just mis-entered the bit depth when converting the WAV I got from Jon? There was some PWM employed as well, because I then converted the sample data to a macro that would get pushed out of a square wave channel rather than the sample channel. It..... kinda worked out? If you think it sounds good, I probably just threw my hands in the air and said "good enough", because the voice was one of the last things I added to the track.
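
The 3-bit amplitude crunch might look something like this sketch. To be clear, this is my reconstruction of the general technique, not inversephase's actual tooling; it assumes samples arrive as floats in [-1.0, 1.0], as you'd get from a decoded WAV:

```python
def quantize(samples, bits=3):
    """Quantize floats in [-1.0, 1.0] down to 2**bits evenly spaced levels."""
    levels = 2 ** bits                     # 3 bits -> 8 volume steps
    out = []
    for s in samples:
        # map [-1, 1] onto [0, levels - 1], clamping at the edges
        idx = int((s + 1.0) / 2.0 * levels)
        out.append(min(max(idx, 0), levels - 1))
    return out

# a full-scale swing collapses to just 8 distinct values
crushed = quantize([-1.0, -0.5, 0.0, 0.5, 1.0])   # [0, 2, 4, 6, 7]
```

Each quantized value could then be emitted as a per-frame volume macro on a square channel, which matches the "pushed out of a square wave channel" approach described above. With 4-bit volume available, `bits=4` would halve the quantization noise.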

So there you go! My method is NOT the way I would recommend doing it, or the way most people do it these days, but that's how you get that exact result.

Tagging this with a few extra tags so that other people can find this weird piece of history.
