arc, (edited )

The problem is that most languages have no native number support other than 32- or 64-bit floats, and some on-the-wire representations don’t either. Most underlying processors lack arbitrary-precision support too.

So you either choose speed and sacrifice precision, or choose precision and sacrifice speed. The architecture might not support arbitrary precision, but most languages have a bignum/bigdecimal library that will do it more slowly. It may be necessary to marshal or store those values in databases or over the wire in whatever hacky way works (e.g. encapsulating values in a string).
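A minimal sketch of that string-encapsulation trick, using Python’s `decimal` and `json` (the values here are just illustrative):

```python
from decimal import Decimal
import json

# exact decimal arithmetic via the language's bignum/bigdecimal library
price = Decimal("19.99") * 3              # exactly 59.97, no binary rounding

# JSON has no decimal type, so encapsulate the value in a string on the wire
payload = json.dumps({"total": str(price)})

# the receiver parses it back without the value ever touching a binary float
total = Decimal(json.loads(payload)["total"])
print(total)    # 59.97
```

Slower than native doubles, but the value survives the round trip exactly.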

mlg, avatar

uses 64 bit double instead

LinearArray, avatar

Stop using floats

dylanTheDeveloper, avatar

Integers have fallen billions must use long float


While we’re at it, what the hell is -0 and how does it differ from 0?


It’s the negative version


So it’s just like 0 but with an evil goatee?


Look at the graph of y=tan(x)+π/2

-0 and +0 are completely different.
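They do compare equal, but the sign bit is real and some functions respect it; a quick check in Python:

```python
import math

print(-0.0 == 0.0)               # True: comparison treats them as equal
print(math.copysign(1.0, -0.0))  # -1.0: the sign bit really is there
# atan2 uses the sign of zero to pick the branch, so the results differ:
print(math.atan2(0.0, 0.0))      # 0.0
print(math.atan2(0.0, -0.0))     # 3.141592653589793 (pi)
```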


For integers it really doesn’t exist. The algorithm for multiplying an integer by -1 is: invert all bits, then add 1. You can do that to 0, of course; it won’t hurt.
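A sketch of that negation on 8-bit values (the width and helper name are just for illustration):

```python
def negate_8bit(x: int) -> int:
    """Two's-complement negation: invert all bits, then add 1 (mod 2^8)."""
    return (~x + 1) & 0xFF

print(negate_8bit(5))   # 251, the 8-bit pattern for -5
print(negate_8bit(0))   # 0: negating zero yields zero, so there is no integer -0
```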


I know this is in jest, but if 0.1+0.2!=0.3 hasn’t caught you out at least once, then you haven’t even done any programming.
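The classic demonstration, runnable in any Python shell:

```python
print(0.1 + 0.2)                # 0.30000000000000004
print(0.1 + 0.2 == 0.3)         # False
# neither 0.1 nor 0.2 is exactly representable in binary, so their closest
# doubles don't sum to the double closest to 0.3:
print(f"{0.1:.20f}")            # 0.10000000000000000555
```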


Me making my first calculator in c.

dylanTheDeveloper, avatar

what if i add more =


That should really be written as the gamma function, because factorial is only defined for non-negative integers. /s


IMO they should just remove the equality operator on floats.
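Most languages offer a tolerance-based comparison for exactly this reason; in Python that’s `math.isclose`:

```python
import math

a = 0.1 + 0.2
print(a == 0.3)                             # False: exact equality is the trap
print(math.isclose(a, 0.3))                 # True: default relative tolerance 1e-09
print(math.isclose(a, 0.3, abs_tol=1e-12))  # tolerances are adjustable per use case
```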


There are probably a lot of scientific applications (e.g. statistics, audio, 3D graphics) where exponential notation is the norm and there’s an understanding about precision and significant digits/bits. It’s a space where fixed point would absolutely destroy performance, because you’d need as many bits as your largest terms require. Yes, NaN and negative zero are utter disasters in the corners of the IEEE spec, but so is trying to do math with 256-bit integers.

For a practical explanation about how stark a difference this is, the PlayStation (one) uses an integer z-buffer (“fixed point”). This is responsible for the vertex popping/warping that the platform is known for. Floating-point z-buffers became the norm almost immediately after the console’s launch, and we’ve used them ever since.


While it’s true the PS1 couldn’t do floating point math, it did NOT have a z-buffer at all.


What’s the problem with -0?
It conceptually makes sense for negative values very close to 0 to be represented as -0.
In practice I have never seen a problem with -0.

On NaN: while its use cases can nowadays be replaced with language constructs like result types, it was created before exceptions or sum types existed. The way it propagates kind of mirrors Haskell’s monadic Maybe.
We should be demanding more and better wrapper types from our language/standard-library designers.
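The propagation is easy to see: like Nothing in Maybe, one NaN poisons the whole chain until something finally inspects it (Python sketch):

```python
import math

nan = float("nan")
result = (nan + 1.0) * 2.0 - 5.0   # NaN silently flows through every step
print(math.isnan(result))          # True
# unlike a proper sum type, nothing forces you to check; worse, the obvious
# test doesn't even work, because NaN is not equal to itself:
print(nan == nan)                  # False
```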

33550336, avatar

From time to time I see this pattern in memes, but what is the original meme / situation?


It’s my favourite format. I think the original was ‘stop doing math’

33550336, avatar

Thank you 😁



    ripcord, avatar

    That doesn’t really answer the question, which is about the origins of the meme template.


    Yikes, I placed this in the wrong spot. Thank you.


    Off topic, but how does one get a profile pic on Lemmy? Also, love you ken.

    33550336, avatar

    Thank you!

    Go to “Settings” (cog wheel) and then “Avatar”:


    you can configure it in the web interface. just go to your profile

    TotalSonic, avatar

    Obviously floating point is of huge benefit for many audio DSP calculations, from my observations (non-programmer, just a long-time DAW user, from back in the day when fixed point with relatively low-resolution accumulators was often what we had to work with, versus now, when 64-bit floating point for processing is more the rule). E.g. fixed-point equalizers can potentially lead to DC offset in the results. I don’t think peeps would be getting as close to modeling the non-linear behavior of analog processors with just fixed-point math either.


    Audio, like a lot of physical systems, involves logarithmic scales, which is where floating point shines. Problem is, all the other physical systems, which are not logarithmic, only get to eat the scraps left over by IEEE 754. Floating point is a scam!

    Buttons, avatar

    No real use you say? How would they engineer boats without floats?


    Just build submarines, smh my head.


    Just invert a sink.


    Float is bloat!


    Call me when you found a way to encode transcendental numbers.


    Do we even have a good way of encoding them in real life without computers?


    Just think about them real hard




    Here you go


    Perhaps you can encode them as computation (i.e. a function of arbitrary precision)


    Hard to do, as those functions are often limits and need infinitely many function applications. I’m telling you, math.PI is a finite lie!
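Encoding pi as a computation is actually workable, though: a function that takes the desired precision and runs a convergent series at that precision. A sketch using Python’s `decimal` and Machin’s formula, 16·arctan(1/5) − 4·arctan(1/239) (the function names here are my own):

```python
from decimal import Decimal, getcontext

def pi(digits: int) -> Decimal:
    """pi 'encoded as a computation': request n significant digits."""
    getcontext().prec = digits + 5          # a few guard digits

    def arctan_inv(x: int) -> Decimal:      # Taylor series for arctan(1/x)
        total, prev = Decimal(0), None
        term, n, sign = Decimal(1) / x, 1, 1
        while total != prev:                # stop once new terms no longer change the sum
            prev = total
            total += sign * term / n
            term /= x * x
            n += 2
            sign = -sign
        return total

    result = 16 * arctan_inv(5) - 4 * arctan_inv(239)
    getcontext().prec = digits
    return +result                          # unary plus rounds to `digits` digits

print(pi(30))
```

It’s never the “whole” number, but any finite prefix you care to ask for.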

    Chadus_Maximus, (edited )

    May I propose a dedicated circuit (analog because you can only ever approximate their value) that stores and returns transcendental/irrational numbers exclusively? We can just assume they’re going to be whatever value we need whenever we need them.


    Wouldn’t noise in the circuit mean it’d only be reliable to a certain level of precision, anyway?

    Chadus_Maximus, (edited )

    I mean, every irrational number used in computation is reliable to a certain level of precision. Just because the current (heh) methods aren’t precise enough doesn’t mean they’ll never be.


    You can always increase the precision of a computation; analog signals are limited by quantum physics.

    qevlarr, avatar

    I’m like, is that code on the right what I think it is? And it is! I’m so happy now.


    Floats are only great if you deal with numbers that have no need for precision and accuracy. Want to calculate the F cost of an A* node? Floats are good enough.

    But every time I need any kind of accuracy, I go straight for actual decimal numbers. Unless you are in extreme scenarios, you can afford the extra 64 to 256 bits in your memory.
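One stdlib route to that kind of accuracy in Python is exact rational arithmetic via `fractions`, trading speed and fixed-size storage for exactness:

```python
from fractions import Fraction

a = Fraction(1, 10) + Fraction(2, 10)  # exact: no binary representation involved
print(a == Fraction(3, 10))            # True, unlike 0.1 + 0.2 == 0.3
print(a)                               # 3/10
print(float(a))                        # 0.3, rounding happens only at the very end
```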

    jabjoe, avatar

    As a programmer who grew up without a FPU (Archimedes/Acorn), I have never liked float. But I thought this war had been lost a long time ago. Floats are everywhere. I’ve not done graphics for a bit, but I never saw a graphics card that took any form of fixed point. All geometry you load in is in floats. The shaders all work in floats.

    Briefly ARM MCU work was non-float, but loads of those have float support now.

    I mean, you can tell good low-level programmers by how they feel about floats. But the battle does seem lost. There are lots of bits of technology that have taken turns I don’t like. Sometimes the market/bazaar has spoken and it’s wrong, but you still have to grudgingly go with it or everything is too difficult.


    all work in floats

    We even have float16 / float8 now for low-accuracy hi-throughput work.


    Even float4. You get +/- 0, 0.5, 1, 1.5, 2, 3, Inf, and two values for NaN.

    Come to think of it, the idea of -NaN tickles me a bit. “It’s not a number, but it’s a negative not a number”.


    I think you got that wrong: you get +Inf, -Inf and two NaNs, but they’re both just NaN. As you wrote, signed NaN makes no sense, though technically speaking they still have a sign bit.


    Right, there’s no -NaN. There are two different values of NaN. Which is why I tried to separate that clause, but maybe it wasn’t clear enough.
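You can poke at the same thing in float64 from Python; the two bit patterns below are standard quiet-NaN encodings with the sign bit clear and set, and while `copysign` can see the sign bit, comparisons treat both as plain NaN:

```python
import math, struct

quiet_nan = struct.unpack(">d", bytes.fromhex("7ff8000000000000"))[0]
sign_nan  = struct.unpack(">d", bytes.fromhex("fff8000000000000"))[0]

print(math.isnan(quiet_nan), math.isnan(sign_nan))  # True True
print(math.copysign(1.0, sign_nan))                 # -1.0: the sign bit exists...
print(quiet_nan == sign_nan)                        # False: ...but comparisons
print(quiet_nan == quiet_nan)                       # False: never see NaN as equal
```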

    AnUnusualRelic, avatar

    But if you throw an FPU in water, does it not sink?

    It’s all lies.


    I’d have to double-check, but I think old handheld consoles like the Game Boy or the DS use fixed point.

    jabjoe, avatar

    I’m pretty sure they do, but the key word there is “old”.


    IMO, floats model real observations.

    And since there is no precision in nature, there shouldn’t be precision in floats either.

    So their odd behavior is actually entirely justified. This is why I can accept them.

    jabjoe, avatar

    I just gave up fighting. There is no system that is going to be both fast and infinitely precise.

    So, long ago I worked at a game middleware company. One of the most common problems was skinning in local space vs global space. We kept having customers try to have global skinning and massive worlds, and then get upset by geometry distortion when miles away from the origin.


    How do y’all solve that, out of curiosity?

    I’m a hobbyist game dev and when I was playing with large map generation I ended up breaking the world into a hierarchy of map sections. Tiles in a chunk were locally mapped using floats within comfortable boundaries. But when addressing portions of the map, my global coordinates included the chunk coords as an extra pair.

    So an object’s location in the 2D world map might be ((122, 45), (12.522, 66.992)), where the first elements are the map chunk location and the last two are the precise “offset” coordinates within that chunk.

    It wasn’t the most elegant to work with, but I was still able to generate an essentially limitless map without floating point errors poking holes in my tiling.
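A minimal sketch of that chunk + offset scheme (CHUNK_SIZE and the function names are made up for illustration, not from any engine):

```python
CHUNK_SIZE = 128.0   # side length of one map chunk, in local units

def to_world(chunk_x: int, chunk_y: int, off_x: float, off_y: float):
    """Global position = integer chunk index * chunk size + small local offset."""
    return (chunk_x * CHUNK_SIZE + off_x, chunk_y * CHUNK_SIZE + off_y)

def normalize(chunk_x, chunk_y, off_x, off_y):
    """Re-home the offset after movement so the floats stay small and precise."""
    dx, off_x = divmod(off_x, CHUNK_SIZE)
    dy, off_y = divmod(off_y, CHUNK_SIZE)
    return chunk_x + int(dx), chunk_y + int(dy), off_x, off_y

# an object drifts past the chunk edges; normalize keeps offsets in [0, 128)
print(normalize(122, 45, 130.5, -3.25))   # (123, 44, 2.5, 124.75)
```

Because the chunk index is an integer, precision of the offset floats never degrades no matter how far from the world origin the object is.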

    I’ve always been curious how that gets done in real game dev though. if you don’t mind sharing, I’d love to learn!

    jabjoe, avatar

    That’s pretty neat. Game streaming isn’t that different. It basically loads the adjacent scene blocks ready for you to wander in that direction. Some engines load in LOD (Level of Detail) versions of the scene blocks so you can see into the distance; the further away, the lower the LOD, of course. Also, you shouldn’t really keep the same origin, or you will hit the distorted-geometry issue. Have the origin be the centre of the current block.


    Floats make a lot of math way simpler, especially for audio, but then you run into the occasional NaN error.

    jabjoe, avatar

    On the PS3 Cell processor vector units, any NaN meant zero. Makes life easier if there are errors in the data.
