It's been a while, so here's some chiptune music I wrote (and finished on my last Twitch stream). It's for my in-progress Mega Man fan game thingy I'm doing. Say hello to Blacksmith Woman's stage! I hope you enjoy!
Hey guess what, videos are finally being released from Chipspace's showcases for 2024 starting with
nmlstyl! https://youtu.be/Yb-kN5iq9Fo
Bonus MCing from Sam Mulligan
(Special thanks to GeekBeat Radio for hosting!) #chiptunes #chiptune #magfest #chipspace
(I'll be threading these as they're released)
Been a while since I did a #chiptune upload, so here, have another #megaman cover. If I were smart I'd have gotten this out in time for the Switch collection release, but I wasn't. Oh well!
Pretty happy with how this one turned out all things considered. The pulse drums are a bit unusual but they pack quite a punch.
@thomzane on oldskool systems, #samples are a mess. Hardware like the #NES has a dedicated sample channel, the DMC (delta modulation channel), which plays DPCM (delta PCM): a 1-bit bitstream where each bit tells the output level whether to step up or down over time, and the hardware can play that back without much resource drain. When you use #famitracker, you can just load a WAV into a slot and, for the most part (barring memory limitations), it automagically prepares it for playback.
Two other ways to do samples on #retro hardware (which are closer to what modern hardware does): (1) play the lowest note the hardware allows, where the square wave "sticks" high or low, and rapidly vary the volume register to match the sample levels; or (2) use PWM, where instead of varying the amplitude of the wave (which is how you'd visualize it on a scope), you output another 1-bit bitstream. The length of time spent high or low pulls the speaker in that direction for that long, and those times are so incredibly small that the speaker might not make it all the way to one extreme before the opposite bit sends it back the other way.
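Both of those can be sketched in a few lines (again an illustration, not any particular chip's behavior; the function names and the 4-bit volume / 16-step PWM period are assumptions):

```python
def pcm_to_volume_writes(samples_u8):
    """Method (1): map unsigned 8-bit samples down to 4-bit volume
    register values (0-15), one write per sample tick."""
    return [s >> 4 for s in samples_u8]

def pwm_stream(samples_u8, period=16):
    """Method (2): naive PWM. Each sample becomes `period` output bits,
    held high for a duration proportional to the sample's amplitude."""
    bits = []
    for s in samples_u8:
        width = (s * period) // 256
        bits.extend([1] * width + [0] * (period - width))
    return bits
```

With PWM, the average of each `period`-bit chunk approximates the sample, which is why the speaker's inertia (it can't snap between extremes that fast) effectively does the low-pass filtering for you.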
Duke NES'em 3D was one of the first NES #chiptunes I ever wrote, so the song was written in soundtracker, and I used a bunch of shell scripts and compilers to compile it into an NSF.
For the voices themselves, first I got Jon St. John (Duke Nukem himself) to record the lines. Then, to be honest, I did a whole lot of bumping around in the dark. I didn't really know what I was doing, other than that I knew the hardware wouldn't push out a full-quality WAV. I somehow found that using a 3-bit amplitude worked out really well. Reflecting on it, the octal thing was almost certainly a mistake, because that's 3-bit, and the amplitude on the square waves can be 4-bit. Maybe I just mis-entered the bit depth when converting the WAV I got from Jon? There was some PWM employed as well, because I then converted the sample data to a macro that would get pushed out of a square wave channel rather than the sample channel. It... kinda worked out? If you think it sounds good, I probably just threw my hands in the air and said "good enough," because the voice was one of the last things I added to the track.
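My best guess at what that 3-bit conversion amounted to, written out as a sketch (this is hypothetical; I don't have the original scripts, and the scaling into the 4-bit volume range is an assumption):

```python
def to_3bit_volume_macro(samples_u8):
    """Quantize unsigned 8-bit samples down to 3 bits (8 levels),
    then scale into a square channel's 4-bit volume range (0-15) so
    the values can be played back as a volume macro."""
    return [(s >> 5) * 15 // 7 for s in samples_u8]
```

Only 8 of the 16 available volume levels get used, which matches my suspicion above that the 3-bit depth was a mistake: a straight 4-bit quantize (`s >> 4`) would have kept twice the resolution for free.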
So there you go! My method is NOT the way I would recommend doing it, or the way most people do it these days, but that's how you get that exact result.