Technical: #MorseCode was the first lossless static stream compression for natural language.
Theoretical: #MorseCode makes it possible to transmit more text in the same amount of time, at the cost of sacrificing compliance with the Fano condition, which was postulated decades later.
Practical: You can - at least in theory - morse text faster than if you were to send ASCII characters in binary, and it is easier and faster to learn to decode and/or encode the signals than raw ASCII bits.
@kkarhan@jupiter Interesting assertion, but do you have any math to back that statement up? The Morse code alphabet of supported characters is not that large and Morse does not optimise for packing density AFAIK.
@jupiter@kkarhan I understand the idea, but you don't need 7 bits to cover all the characters represented in the Morse alphabet and a dah is the length of 3 dits. Also the binary encoding isn't sparse, but AFAIK Morse code is. I haven't done the math (yet), but I suspect that binary encoding is more efficient.
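A rough back-of-the-envelope sketch of that math, under the usual Morse timing conventions (dit = 1 unit, dah = 3 units, 1-unit gap between elements, 3-unit gap between letters) and assuming binary costs 1 unit per bit; the abbreviated letter table here is just for illustration:

```python
# Duration comparison: Morse timing vs fixed-length binary encoding.
# Assumptions: dit = 1 unit, dah = 3 units, 1-unit gap between elements
# of a letter, 3-unit gap between letters; binary sends 1 unit per bit.
MORSE = {
    'E': '.', 'T': '-', 'A': '.-', 'O': '---', 'I': '..', 'N': '-.',
    'S': '...', 'H': '....', 'R': '.-.', 'P': '.--.',
}

def morse_units(word):
    total = 0
    for i, ch in enumerate(word):
        code = MORSE[ch]
        total += sum(3 if e == '-' else 1 for e in code)  # dits and dahs
        total += len(code) - 1                            # intra-letter gaps
        if i < len(word) - 1:
            total += 3                                    # inter-letter gap
    return total

def binary_units(word, bits_per_char=7):
    # 7-bit fixed-length encoding, 1 time unit per bit
    return len(word) * bits_per_char

word = "PARIS"  # the standard reference word for Morse speed (WPM)
print(morse_units(word), binary_units(word))  # → 43 35
```

On this toy comparison the fixed 7-bit encoding (35 units) actually beats Morse (43 units) for "PARIS", though Morse does better on dit-heavy, high-frequency letters like E and T - which is exactly the variable-length trade-off being debated above.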
OFC it won't be even remotely efficient when compared to modern compression...
Even #LZ77 will run circles around it, not to mention #bzip2, #lzma or high-efficiency vocoders like #Codec2....
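A quick illustration of that gap using Python's standard-library compressors on deliberately redundant text (zlib's DEFLATE is LZ77 plus Huffman coding, so it stands in for the LZ77 family here; the input string is an arbitrary example):

```python
import bz2
import lzma
import zlib

# Compare compressed sizes of highly redundant text across three
# stdlib compressors; all should shrink it dramatically.
text = b"the quick brown fox jumps over the lazy dog " * 100
sizes = {
    "raw": len(text),
    "zlib": len(zlib.compress(text, 9)),
    "bz2": len(bz2.compress(text, 9)),
    "lzma": len(lzma.compress(text)),
}
print(sizes)
```

All three should come in at a small fraction of the 4400-byte input, far beyond anything a static per-character code can achieve on repetitive data.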
@vk6flab@jupiter so yeah it's the "first lossless static compression" for "streaming data" (or rather text), but that doesn't mean it's efficient.
Its advantage is that it's static and can encode and decode messages without a specified header, which is why it was used in pre-#TTY telegraphy systems.
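A minimal sketch of what "static and header-free" means in practice: both sides share a fixed table, so any message fragment can be decoded without per-message metadata (abbreviated table for illustration only):

```python
# Static, header-free Morse codec sketch: the code table is fixed and
# known to both sides, so no header travels with the message.
MORSE = {'S': '...', 'O': '---', 'E': '.', 'T': '-'}  # abbreviated table
REVERSE = {v: k for k, v in MORSE.items()}

def encode(text):
    # one space separates letters, mirroring the inter-letter gap
    return ' '.join(MORSE[c] for c in text)

def decode(signal):
    return ''.join(REVERSE[sym] for sym in signal.split(' '))

msg = encode("SOS")
print(msg, '->', decode(msg))  # → ... --- ... -> SOS
```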
Modern data compression will be more efficient, but only once your message is long enough to amortise the header data it needs.
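This header overhead is easy to demonstrate: the gzip container alone costs a fixed number of bytes (a 10-byte header plus an 8-byte trailer), so "compressing" a tiny payload actually grows it:

```python
import gzip

# For very short messages the container overhead dominates:
# gzip's fixed header and trailer make the output larger than
# the 3-byte input.
short = b"SOS"
compressed = gzip.compress(short)
print(len(short), len(compressed))
```

The same 3-character message in Morse needs no header at all, which is the point being made above.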
@vk6flab@jupiter another problem is that most compressed files are compressed in multiple passes/ways, so unlike #plaintext and especially #MorseCode, a disrupted transmission can't be partially recovered.
Some streaming codecs - especially for audio and video - as used in broadcasting and PTT radio applications are lossy and don't require a continuous stream of data, at the expense of latency (due to buffering) and quality under poor signal conditions.
@vk6flab@jupiter another form of "lossless compression" - because that bar is very low - is found in the specific #SignLanguage|s, which differ from one another in order to optimise for culturally significant terms and for communication speed, based on the language they're designed to translate into.
This allows for synchronous sign language interpretation even though the "symbol rate" (vowels and consonants vs. hand signs) is far lower.
Same with Stenography.
In short, "compression" is relative to the data or time needed.