arc, (edited )

The problem is that most languages have no native numeric support beyond 32- or 64-bit floats, and some wire representations don’t either. Most underlying processors don’t have arbitrary-precision support either.

So either you choose speed and sacrifice precision, or you choose precision and sacrifice speed. The architecture might not support arbitrary precision, but most languages have a bignum/bigdecimal library that will do it more slowly. It might be necessary to marshal or store those values in databases or over the wire in whatever hacky way is necessary (e.g. encapsulating values in a string).
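A minimal sketch of that trade-off, using Python’s decimal module as a stand-in for whatever bignum/bigdecimal library your language offers; the string round-trip at the end is the “hacky” wire encoding mentioned above:

```python
from decimal import Decimal

# Exact decimal arithmetic: slower than hardware floats, but no rounding surprises.
price = Decimal("19.99")
tax = price * Decimal("0.0825")
total = (price + tax).quantize(Decimal("0.01"))  # round to cents explicitly

# Hacky but lossless wire/database encoding: serialize as a string and parse it back.
encoded = str(total)          # e.g. "21.64"
decoded = Decimal(encoded)
assert decoded == total       # the round trip preserves every digit
```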

mlg,
@mlg@lemmy.world avatar

uses 64 bit double instead

LinearArray,
@LinearArray@programming.dev avatar

Stop using floats

dylanTheDeveloper,
@dylanTheDeveloper@lemmy.world avatar

Integers have fallen, billions must use long float

Psythik,

While we’re at it, what the hell is -0 and how does it differ from 0?

Reddfugee42,

It’s the negative version

ShepherdPie,

So it’s just like 0 but with an evil goatee?

Knock_Knock_Lemmy_In,

Look at the graph of y=tan(x)+π/2

-0 and +0 are completely different.
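For the curious, a quick sketch (Python) of where the two zeros actually diverge: they compare equal, but the sign bit survives and shows up in functions that care about the direction of approach:

```python
import math

print(-0.0 == 0.0)                   # True: they compare equal
print(math.copysign(1.0, -0.0))      # -1.0: the sign bit is still there
print(math.atan2(0.0, -1.0))         #  pi: approaching the negative axis from above
print(math.atan2(-0.0, -1.0))        # -pi: approaching it from below
```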

computerscientistI,

For integers it really doesn’t exist. The two’s-complement algorithm for multiplying an integer by -1 is: invert all bits, then add 1. You can do that to 0 as well, of course; it just gives you 0 back.
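A minimal sketch of that algorithm, assuming 8-bit two’s-complement values purely for illustration:

```python
def negate_8bit(x: int) -> int:
    """Two's-complement negation: invert all bits, add 1, keep 8 bits."""
    return (~x + 1) & 0xFF

print(negate_8bit(5))   # 251, i.e. the 8-bit two's-complement pattern for -5
print(negate_8bit(0))   # 0: inverting 0 gives 0xFF, adding 1 wraps back to 0
```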

Blackmist,

I know this is in jest, but if 0.1+0.2!=0.3 hasn’t caught you out at least once, then you haven’t even done any programming.
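If it hasn’t caught you yet, here it is (Python shown, but any language using IEEE 754 doubles has the same underlying values):

```python
print(0.1 + 0.2)                       # 0.30000000000000004
print(0.1 + 0.2 == 0.3)                # False
print(abs((0.1 + 0.2) - 0.3) < 1e-9)   # True: compare with a tolerance instead
```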

nexussapphire,

Me making my first calculator in C.

dylanTheDeveloper,
@dylanTheDeveloper@lemmy.world avatar

What if I add more =

CanadaPlus,

That should really be written as the gamma function, because factorial is only defined for the non-negative integers. /s

labsin,

IMO they should just remove the equality operator on floats.

dejected_warp_core,

There are probably a lot of scientific applications (e.g. statistics, audio, 3D graphics) where exponential notation is the norm and there’s an understanding about precision and significant digits/bits. It’s a space where fixed-point would absolutely destroy performance, because you’d need as many bits as required to store your largest terms. Yes, NaN and negative zero are utter disasters in the corners of the IEEE 754 spec, but so is trying to do math with 256-bit integers.

For a practical illustration of how stark a difference this is, the PlayStation (one) used an integer z-buffer (“fixed point”). This is responsible for the vertex popping/warping that the platform is known for. Floating-point z-buffers became the norm almost immediately after the console’s launch, and we’ve used them ever since.
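A rough sketch of the dynamic-range point in Python (the 16.16 split below is an arbitrary choice for illustration): a 32-bit float covers wildly different magnitudes with roughly seven significant digits each, while a 32-bit fixed-point format has to spend its bits on either range or resolution, not both:

```python
import struct

def float32(x: float) -> float:
    """Round-trip through a 32-bit float to see what single precision keeps."""
    return struct.unpack("<f", struct.pack("<f", x))[0]

# float32 handles tiny and huge magnitudes, keeping ~7 significant digits of each.
print(float32(1.5e-30))   # ~1.5e-30
print(float32(1.5e+30))   # ~1.5e+30

# 16.16 fixed point: 16 integer bits, 16 fractional bits in a 32-bit word.
SCALE = 1 << 16
print(32767 + 65535 / 65536)   # largest representable value: just under 32768
print(1 / SCALE)               # smallest step: ~0.0000153, the same everywhere
```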

CrayonRosary,

While it’s true the PS1 couldn’t do floating point math, it did NOT have a z-buffer at all.

www.ncesc.com/gaming-faq/does-ps1-have-z-buffer/

anton,

What’s the problem with -0?
It conceptually makes sense for negative values close to 0 to be represented as -0.
In practice I have never seen a problem with -0.

On NaN: While its use cases can nowadays be replaced with language constructs like result types, it was created before exceptions or sum types. The way it propagates kind of mirrors Haskell’s monadic Maybe.
We should be demanding more and better wrapper types from our language/standard library designers.
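A small sketch of that propagation (Python), next to the Maybe-ish pattern it resembles; `safe_sqrt` is a hypothetical helper name used only for illustration:

```python
import math

# NaN propagates through arithmetic instead of raising, like a built-in "Nothing".
x = float("nan")
print(x + 1, x * 0, math.sin(x))   # nan nan nan
print(x == x)                      # False: NaN isn't even equal to itself
print(math.isnan(x))               # True: the only reliable check

# The modern equivalent: an explicit optional/result type instead of a magic value.
def safe_sqrt(v: float) -> float | None:
    return math.sqrt(v) if v >= 0 else None

result = safe_sqrt(-1.0)
print(result + 1 if result is not None else "no result")
```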

33550336,
@33550336@lemmy.world avatar

From time to time I see this pattern in memes, but what is the original meme / situation?

Sadbutdru,

It’s my favourite format. I think the original was ‘stop doing math’ https://sopuli.xyz/pictrs/image/05be123f-6888-40c5-b759-4296afbfebef.webp

33550336,
@33550336@lemmy.world avatar

Thank you 😁

dejected_warp_core,

deleted_by_author

    ripcord,
    @ripcord@lemmy.world avatar

    That doesn’t really answer the question, which is about the origins of the meme template

    dejected_warp_core,

    Yikes. I placed this in the wrong spot. Thank you.

    Eyck_of_denesle,

    Off topic, but how does one get a profile pic on Lemmy? Also love you ken.

    33550336,
    @33550336@lemmy.world avatar

    Thank you!

    Go to “Settings” (cog wheel) and then “Avatar”:

    https://lemmy.world/pictrs/image/86dde646-0fac-4f0a-aedc-b08fba34a53f.png

    gandalf_der_12te,

    You can configure it in the web interface. Just go to your profile.

    TotalSonic,
    @TotalSonic@lemmy.world avatar

    Obviously floating point is of huge benefit for many audio DSP calculations, from my observations (non-programmer, just a long-time DAW user, from back in the day when fixed point with relatively low-precision accumulators was often what we had to work with, versus now, when 64-bit floating-point processing is more the rule). For example, fixed-point equalizers can potentially lead to DC offset in the results. I don’t think peeps would be getting as close to modeling the non-linear behavior of analog processors with just fixed-point math either.

    ExFed,

    Audio, like a lot of physical systems, involves logarithmic scales, which is where floating-point shines. Problem is, all the other physical systems, which are not logarithmic, only get to eat the scraps left over by IEEE 754. Floating point is a scam!

    Buttons,
    @Buttons@programming.dev avatar

    No real use you say? How would they engineer boats without floats?

    anton,

    Just build submarines, smh my head.

    WhiskyTangoFoxtrot,

    Just invert a sink.

    kekwa,

    Float is bloat!

    Magnetar,

    Call me when you’ve found a way to encode transcendental numbers.

    smeg,

    Do we even have a good way of encoding them in real life without computers?

    fossphi,

    Just think about them real hard

    Magnetar,

    \pi

    Knock_Knock_Lemmy_In,

    Here you go

    ytg,

    Perhaps you can encode them as a computation (i.e. a function of arbitrary precision)
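That is roughly how arbitrary-precision libraries handle them. A minimal sketch using Python’s decimal module and Machin’s formula; the function names are mine, purely for illustration:

```python
from decimal import Decimal, getcontext

def arctan_recip(x: int, digits: int) -> Decimal:
    """arctan(1/x) by its Taylor series, to about `digits` decimal places."""
    getcontext().prec = digits + 10              # keep some guard digits
    term = Decimal(1) / x
    total = Decimal(0)
    k, sign = 0, 1
    threshold = Decimal(10) ** -(digits + 5)
    while term > threshold:
        total += sign * term / (2 * k + 1)
        term /= x * x
        sign, k = -sign, k + 1
    return total

def pi(digits: int) -> Decimal:
    """Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)."""
    value = 16 * arctan_recip(5, digits) - 4 * arctan_recip(239, digits)
    getcontext().prec = digits
    return +value                                # unary plus rounds to `digits` digits

print(pi(50))   # 3.1415926535897932384626433832795028841971693993751
```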

    Magnetar,

    Hard to do, as those are often defined as limits and would need infinitely many function applications. I’m telling you, math.PI is a finite lie!

    Chadus_Maximus, (edited )

    May I propose a dedicated circuit (analog because you can only ever approximate their value) that stores and returns transcendental/irrational numbers exclusively? We can just assume they’re going to be whatever value we need whenever we need them.

    frezik,

    Wouldn’t noise in the circuit mean it’d only be reliable to a certain level of precision, anyway?

    Chadus_Maximus, (edited )

    I mean, every irrational number used in computation is reliable to a certain level of precision. Just because the current (heh) methods aren’t precise enough doesn’t mean they’ll never be.

    anton,

    You can always increase the precision of a computation; analog signals, on the other hand, are limited by quantum physics.

    qevlarr,
    @qevlarr@lemmy.world avatar

    I’m like, is that code on the right what I think it is? And it is! I’m so happy now.

    en.wikipedia.org/wiki/Fast_inverse_square_root

    RustyNova,

    Floats are only great if you deal with numbers that have no need for precision or accuracy. Want to calculate the F cost of an A* node? Floats are good enough.

    But every time I need to get any kind of accuracy, I go straight for actual decimal numbers. Unless you are in extreme scenarios, you can afford the extra 64 to 256 bits in your memory.

    jabjoe,
    @jabjoe@feddit.uk avatar

    As a programmer who grew up without an FPU (Archimedes/Acorn), I have never liked floats. But I thought this war had been lost a long time ago. Floats are everywhere. I’ve not done graphics for a bit, but I never saw a graphics card that took any form of fixed point. All geometry you load in is in floats. The shaders all work in floats.

    Briefly, ARM MCU work was float-free, but loads of those chips have float support now.

    I mean, you can tell good low-level programmers by how they feel about floats. But the battle does seem lost. There are lots of bits of technology that have taken turns I don’t like. Sometimes the market/bazaar has spoken and it’s wrong, but you still have to grudgingly go with it or everything is too difficult.

    GroteStreet,

    all work in floats

    We even have float16 / float8 now for low-accuracy, high-throughput work.

    frezik,

    Even float4. You get +/- 0, 0.5, 1, 1.5, 2, 3, Inf, and two values for NaN.

    Come to think of it, the idea of -NaN tickles me a bit. “It’s not a number, but it’s a negative not a number”.
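For the curious, a little sketch decoding all 16 bit patterns of an IEEE-style 4-bit layout (1 sign bit, 2 exponent bits, 1 mantissa bit, bias 1); this exact layout is an assumption for illustration, as real FP4 formats vary:

```python
def decode_fp4(bits: int) -> float:
    """Decode a hypothetical IEEE-style 4-bit float: 1 sign, 2 exponent, 1 mantissa bit."""
    sign = -1.0 if bits & 0b1000 else 1.0
    exp = (bits >> 1) & 0b11
    man = bits & 0b1
    if exp == 0b11:                          # all-ones exponent is reserved
        return sign * float("inf") if man == 0 else float("nan")
    if exp == 0:                             # subnormals: 0 and 0.5
        return sign * man * 0.5
    return sign * (1 + 0.5 * man) * 2.0 ** (exp - 1)

for b in range(16):
    print(f"{b:04b} -> {decode_fp4(b)}")
# yields +/- 0.0, 0.5, 1.0, 1.5, 2.0, 3.0, inf, plus two nan patterns
```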

    zaphod,

    I think you got that wrong: you get +Inf, -Inf and two NaNs, but they’re both just NaN. As you wrote, a signed NaN makes no sense, though technically speaking they still have a sign bit.

    frezik,

    Right, there’s no -NaN. There are two different values of NaN. Which is why I tried to separate that clause, but maybe it wasn’t clear enough.

    AnUnusualRelic,
    @AnUnusualRelic@lemmy.world avatar

    But if you throw an FPU in water, does it not sink?

    It’s all lies.

    calcopiritus,

    I’d have to double-check, but I think old handheld consoles like the Game Boy or the DS used fixed-point.
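For anyone who hasn’t met it, a tiny sketch of that fixed-point style: fractions stored in plain integers with an agreed scale (a 16.16 split chosen here purely for illustration; the exact splits varied by platform):

```python
FRAC_BITS = 16
ONE = 1 << FRAC_BITS            # 1.0 in 16.16 fixed point

def to_fixed(x: float) -> int:
    return int(round(x * ONE))

def fixed_mul(a: int, b: int) -> int:
    return (a * b) >> FRAC_BITS  # renormalise after multiplying two scaled values

def to_float(x: int) -> float:
    return x / ONE

a = to_fixed(3.25)
b = to_fixed(0.5)
print(to_float(fixed_mul(a, b)))  # 1.625, computed entirely with integer ops
```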

    jabjoe,
    @jabjoe@feddit.uk avatar

    I’m pretty sure they do, but the key word there is “old”.

    gandalf_der_12te,

    IMO, floats model real observations.

    And since there is no infinite precision in nature, there shouldn’t be infinite precision in floats either.

    So their odd behavior is actually entirely justified. This is why I can accept them.

    jabjoe,
    @jabjoe@feddit.uk avatar

    I just gave up fighting. There is no system that is going to be both fast and infinitely precise.

    So, long ago, I worked at a game middleware company. One of the most common problems was skinning in local space vs. global space. We kept having customers try to do global-space skinning in massive worlds, then get upset by geometry distortion when miles away from the origin.

    swordsmanluke,

    How do y’all solve that, out of curiosity?

    I’m a hobbyist game dev and when I was playing with large map generation I ended up breaking the world into a hierarchy of map sections. Tiles in a chunk were locally mapped using floats within comfortable boundaries. But when addressing portions of the map, my global coordinates included the chunk coords as an extra pair.

    So an object’s location in the 2D world map might be ((122, 45), (12.522, 66.992)), where the first elements are the map chunk location and the last two are the precise “offset” coordinates within that chunk.

    It wasn’t the most elegant to work with, but I was still able to generate an essentially limitless map without floating point errors poking holes in my tiling.

    I’ve always been curious how that gets done in real game dev, though. If you don’t mind sharing, I’d love to learn!

    jabjoe,
    @jabjoe@feddit.uk avatar

    That’s pretty neat. Game streaming isn’t that different. It basically loads the adjacent scene blocks ready for you to wander in that direction. Some load in LOD (Level of Detail) versions of the scene blocks so you can see into the distance. The further away, the lower the LOD, of course. Also, you shouldn’t really keep the same origin, or you will hit the distorted-geometry issue. Have the origin at the centre of the current block.
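A minimal sketch of that chunk-plus-offset idea with origin rebasing before rendering; the names and the 512-unit chunk size are made up for illustration:

```python
CHUNK_SIZE = 512.0   # world units per chunk; an arbitrary choice for this sketch

def split(global_x: float) -> tuple[int, float]:
    """Global coordinate -> (chunk index, small local offset kept near the origin)."""
    chunk = int(global_x // CHUNK_SIZE)
    return chunk, global_x - chunk * CHUNK_SIZE

def to_render_space(obj_chunk: int, obj_local: float,
                    cam_chunk: int, cam_local: float) -> float:
    """Rebase onto the camera's chunk so the renderer only ever sees small floats."""
    return (obj_chunk - cam_chunk) * CHUNK_SIZE + (obj_local - cam_local)

# An object a million units out stays precise because the renderer works near zero.
obj = split(1_000_000.25)
cam = split(999_800.0)
print(to_render_space(*obj, *cam))   # 200.25
```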

    ZILtoid1991,

    Floats make a lot of math way simpler, especially for audio, but then you run into the occasional NaN error.

    jabjoe,
    @jabjoe@feddit.uk avatar

    On the PS3 Cell processor’s vector units, any NaN meant zero. Makes life easier if there are errors in the data.
