From Friday, April 19th (11:00 PM CDT) through Saturday, April 20th (2:00 PM CDT), 2024, ni.com will undergo system upgrades that may result in temporary service interruption.
We appreciate your patience as we improve our online experience.
02-08-2022 05:50 AM
Hi guys,
I was wondering how people deal with floating-point errors. Do you always round your answers and numbers before spitting them out? Is there a "conventional way" of dealing with this issue? When precision matters, after enough computations you inevitably end up with an answer that is off by a small amount. For example, if I subtract 0.2 from 2.0 ten times in a row, the answer is not zero. It's close to zero, but not zero, and that's a big issue when you compare two numbers. It's a big issue if you scale numbers as well. Any thoughts?
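(Editor's illustration, not from the thread: the thread is about LabVIEW, but the same IEEE 754 behavior is easy to reproduce in Python, since 0.2 has no exact binary representation.)

```python
# Repeatedly subtracting 0.2 from 2.0 does not land exactly on zero,
# because the double closest to 0.2 is slightly larger than 0.2 and
# each subtraction rounds to the nearest representable double.
x = 2.0
for _ in range(10):
    x -= 0.2

print(x)         # a tiny nonzero residue, not 0.0
print(x == 0.0)  # False
```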
02-08-2022 06:00 AM
Hi NDT,
@datatechNDT wrote:
I was wondering how do people deal with floating-point errors?
Ignore them! 😄
@datatechNDT wrote:
Do you guys always round your answers and numbers before spitting them out? Is there a "conventional way" of dealing with this issue?
What's the datasource?
When using DAQ data you end up with 16–24 bits of information per sample: that gives you at most ~7.5 decimal digits. Why care about small deviations in the 12th digit and beyond? (I guess you are talking about DBL/EXT values when you call them "floats".)
For anything other than data storage (like TDMS files), such as CSV files or any kind of report, I round the output values…
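(Editor's aside, a Python sketch rather than anything from the thread: the ~7-digit limit mentioned above can be seen by round-tripping a value through 32-bit precision, which is what LabVIEW's SGL corresponds to.)

```python
import struct

x = 0.123456789
# Pack as IEEE 754 binary32 (SGL) and unpack again: the value is
# rounded to ~7 significant decimal digits in the process.
sgl = struct.unpack('<f', struct.pack('<f', x))[0]

print(sgl)  # diverges from x around the 8th significant digit
```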
02-08-2022 06:13 AM
I meant single precision (SGL) when I was talking about a "float". And the error becomes an issue only when I compare two numbers or scale a number. If I get a number that's 0.000000002980232239 (due to floating-point error) and I drop the "is equal to zero" function, my answer is false even though it should be true. The particular case I encountered was when I subtracted 0.2 from 2.0 ten times in a row, where you expect the answer to be zero... it wasn't zero... took me ages to find out what was wrong :D. So I guess I need to write helper functions to check whether two numbers are equal, where I compare the difference between the numbers and just never use the equality operator?
The particular example in which I got the error was when I took the FFT of a wave saved as an array of single-precision values, then calculated the Total Harmonic Distortion. I knew the answer should be zero, as there are no harmonics... but the answer was not zero, it was 0.0145.
02-08-2022 06:21 AM - edited 02-08-2022 06:22 AM
Hi NDT,
@datatechNDT wrote:
If I get a number thats 0.000000002980232239 (due to floating point error) and I drop the "is equal to zero", my answer is false
Simple rule: NEVER compare floats for equality! REALLY!!!
@datatechNDT wrote:
I meant Single Precision Point when I was talking about a "float".
Why do you use SGL data?
Is there a specific reason? (The only reason I know of is LabVIEW-FPGA - and even then you can replace most of that math by using FXP data.)
02-08-2022 06:44 AM
I was using SGL because I was concerned about memory. I am processing a wave with 10 million points in it, and I thought that a double (64 bits) might be too extreme.
02-08-2022 07:15 AM
You should determine a tolerance that will meet your accuracy requirements and use that. Don't compare for equality but rather for being in range. For your subtraction example, check whether your two values are within 0.0001 (or whatever tolerance is appropriate) of each other.
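(Editor's sketch in Python, since the thread's LabVIEW diagrams can't be shown here: the tolerance-based comparison described above, with the 0.0001 tolerance used purely as an example.)

```python
import math

def approx_equal(a: float, b: float, tol: float = 1e-4) -> bool:
    """Compare two floats within an absolute tolerance instead of exact equality."""
    return abs(a - b) <= tol

# The subtraction example from the question:
x = 2.0
for _ in range(10):
    x -= 0.2

print(x == 0.0)              # False: exact comparison fails
print(approx_equal(x, 0.0))  # True: in-range comparison succeeds
# Python's standard library also offers math.isclose() with
# relative and absolute tolerances:
print(math.isclose(x, 0.0, abs_tol=1e-4))  # True
```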
02-08-2022 07:40 AM - edited 02-08-2022 07:41 AM
@Mark_Yedinak wrote:
You should determine a tolerance for that will meet your accuracy requirements and use that. Don't compare for equality but rather for in range. For you subtraction example, check to see if your two values are within 0.0001 (or whatever tolerance is appropriate) of each other.
Yes, this is the way I do it. You can multiply both sides of the comparison by 10^x, where x is the number of decimal places, convert them to integers, and do your comparison. (It also works for places to the left of the decimal point: to compare to the nearest ten, for example, use -1. Useful when you want to compare to the nearest MHz, for example.)
02-08-2022 11:08 AM
Rounding Numbers to Various Decimal Places in LabVIEW - NI Community
Works nicely
02-08-2022 02:48 PM
@James_W wrote:
Rounding Numbers to Various Decimal Places in LabVIEW - NI Community
Works nicely
Yes, that's exactly what I described, except that after rounding to integer I also coerce to an integer type, even though it's probably not needed, because I believe whole numbers (within range) are always represented exactly in floating point.
02-08-2022 04:33 PM
@datatechNDT wrote:
Hi guys,
I was wondering how do people deal with floating-point errors? Do you guys always round your answers and numbers before spitting them out? Is there a "conventional way" of dealing with this issue? When precision matters, after X amount of computations you inevitably end up with an answer that is off by a certain amount. For example, if I subtract 0.2 from 2.0 10 times in a row, the answer is not zero. It's close to zero but not zero and when you compare two numbers that's a big issue. If you scale numbers, that's a big issue as well. Any thoughts?
Over here, at message 14, is a zipped-up project with polymorphic instances for floating-point comparison with a tolerance based on ULP (Units in the Last Place).
The polymorphic instances could be improved by converting them to *.vim (malleable VIs), but they work very well. The ULP method has several advantages over tolerance by percentage, by log10 precision, or by significant digits.
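(Editor's sketch: the linked project is LabVIEW, so here is a hypothetical Python version of the same ULP idea, using the standard trick of reinterpreting a double's bits as an ordered integer so that adjacent representable values differ by exactly 1.)

```python
import math
import struct

def float_to_ordered_int(x: float) -> int:
    """Map a double to an integer that preserves ordering, so that
    adjacent representable doubles differ by exactly 1."""
    i = struct.unpack('<q', struct.pack('<d', x))[0]
    # Negative floats: convert sign-magnitude bits to a negative ordinal.
    return i if i >= 0 else -(i & 0x7FFFFFFFFFFFFFFF)

def ulp_equal(a: float, b: float, max_ulps: int = 4) -> bool:
    """True if a and b are within max_ulps representable doubles of each other."""
    if math.isnan(a) or math.isnan(b):
        return False
    return abs(float_to_ordered_int(a) - float_to_ordered_int(b)) <= max_ulps

print(ulp_equal(0.1 + 0.2, 0.3))  # True: the two results are 1 ULP apart
print(ulp_equal(0.1, 0.10001))    # False: many ULPs apart
```

Unlike a fixed absolute tolerance, a ULP tolerance automatically scales with the magnitude of the values being compared, which is one of the advantages mentioned above.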