Why Do My Floating Point Numbers Lose Precision in LabVIEW?

Updated Jun 30, 2022

Reported In

Software

  • LabVIEW

Issue Details

I am using a floating point or double precision number in my programming environment and I notice that the number's value changes slightly. For example, I enter 2.4 and I now see 2.3999999999999999. Why is my floating point number not quite accurate?

Solution

There are two primary causes of floating-point inaccuracies:

1. The binary representation of the decimal number may not be exact
 
It is not unusual for a specific floating-point or double-precision number to have no exact binary representation. Most decimal fractions cannot be represented exactly in binary, which is how the CPU stores floating-point data. For this reason, you may experience a loss of precision, and some floating-point operations may produce unexpected results. In the example above, the binary representation of 2.4 is not exactly 2.4; the closest representable double-precision value displays as 2.3999999999999999.

This happens because a floating-point number is made up of three parts: a sign bit, an exponent, and a mantissa. The value of the floating-point number is calculated from these parts using a specific mathematical formula; for a normalized number, the value is (-1)^sign × 1.mantissa × 2^(exponent - bias). For more details on this formula, or on the mantissa, reference the link below, titled "What is a Mantissa or a Significand?"
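As a quick illustration, the C sketch below (C is used here only because LabVIEW code is graphical and cannot be shown as text) stores 2.4 in both a single- and a double-precision variable and prints each with more digits than the type can actually hold. The exact digits may vary by compiler and platform:

    #include <stdio.h>

    int main(void)
    {
        double d = 2.4;   /* stored as the nearest representable double */
        float  f = 2.4f;  /* stored as the nearest representable float  */

        /* Printing more digits than the type can hold reveals the
           rounding that happened at the moment of storage. */
        printf("double 2.4 -> %.17g\n", d); /* typically 2.3999999999999999 */
        printf("float  2.4 -> %.9g\n", f);  /* typically 2.4000001 */
        return 0;
    }

In LabVIEW you can observe the same effect by increasing the digits of precision in a numeric indicator's display format.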

The loss of precision you are seeing will occur on any operating system and in any programming environment.

Note: You can use a Binary-Coded Decimal (BCD) library to maintain precision. BCD is a method of encoding numbers in which each decimal digit is encoded separately; for example, the digits 2 and 4 are stored as the separate 4-bit patterns 0010 and 0100, as in the sketch below.
For more information on why this loss of precision occurs, you may want to read ANSI/IEEE Standard 754-1985, the IEEE Standard for Binary Floating-Point Arithmetic.
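The following minimal sketch shows only the encoding idea behind BCD (it is not a full BCD arithmetic library): each decimal digit is packed into its own 4-bit nibble, so every digit is kept exactly rather than being rounded to a binary fraction.

    #include <stdio.h>

    int main(void)
    {
        /* Pack the decimal digits of "24" into one byte, one 4-bit
           nibble per digit: 2 -> 0010, 4 -> 0100. Each digit is kept
           exactly; nothing is rounded to a binary fraction. */
        const char *digits = "24";
        unsigned char bcd = 0;

        for (const char *p = digits; *p != '\0'; ++p)
            bcd = (unsigned char)((bcd << 4) | (unsigned char)(*p - '0'));

        printf("BCD encoding of %s: 0x%02X\n", digits, bcd); /* prints 0x24 */
        return 0;
    }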

2. There is a type mismatch
 
You may have mixed the float and double types, which in LabVIEW correspond to the single-precision (SGL) and double-precision (DBL) numeric representations. Ensure that when you perform arithmetic operations between numbers, all of the numbers are of the same type.

Note: A variable of type float only has about 7 digits of precision, whereas a variable of type double has about 15 digits of precision.
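As an illustration of how a type mismatch shows up, the following sketch compares the same literal stored in a float and in a double; the analogous situation in LabVIEW is wiring SGL and DBL values into the same comparison. The printed digits may vary by platform:

    #include <stdio.h>

    int main(void)
    {
        float  f = 2.4f; /* about 7 significant decimal digits  */
        double d = 2.4;  /* about 15 significant decimal digits */

        /* For the comparison, f is implicitly promoted to double,
           but it carries its single-precision rounding error along. */
        if (f == d)
            printf("equal\n");
        else
            printf("not equal: f = %.17g, d = %.17g\n", (double)f, d);

        return 0;
    }

Converting both operands to the same type before comparing, or comparing against a small tolerance instead of testing exact equality, avoids this surprise.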