
The Floating Point Precision Error


If you are familiar with JavaScript, or indeed most other languages, you may have run into an issue like this:

console.log(0.1 * 0.2) // Returns 0.020000000000000004 // Should return 0.02

Despite 0.1 * 0.2 being equal to 0.02, we somehow get this long number with a stray digit at the end. What's going on? Is JavaScript broken? Is our understanding of math wrong? Actually, this is known as the floating point precision error.
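The same kind of error shows up in other innocent-looking expressions. A few more examples you can paste straight into the browser console:

console.log(0.1 + 0.2)          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3)  // false
console.log(0.3 - 0.1)          // 0.19999999999999998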

How binary handles numbers differently

In base 10, the base we use in day-to-day life, we can represent a number such as 0.1 exactly - that is to say, it can be written with a finite number of decimal places (here, just one). If we try to write the same number in binary, however, we get something weird:

0.000110011001100110011... (the pattern 0011 repeats to infinity)

So 0.1 in binary is a repeating fraction. That means if we round this number to, say, 4 binary places, we get 0.0001, which when we convert back to base 10 is 1/16 (0.0625) - quite a bit different from our original value of 1/10.
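You can see this directly in JavaScript: Number.prototype.toString(2) prints the binary expansion of a number, and because 0.1 cannot be stored exactly, the repeating pattern gets cut off where the stored bits run out. A quick sketch:

// 0.1 printed in binary - the 0011 pattern repeats until the stored bits run out
console.log((0.1).toString(2))
// 0.000110011001100110011...1101 (cut off where the 64-bit double runs out of bits)

// Truncating to just 4 binary places gives 0.0001, which is 2^-4 = 1/16
console.log(0b0001 / 2 ** 4) // 0.0625 - noticeably different from 0.1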

What a floating point is

In computing, there is a trade-off between accuracy and performance. To strike a balance, a standard type called the floating point number was created, defined by the IEEE 754 standard.

This takes a number, like 0.1, and represents it in binary. As you might've guessed, since 0.1 is a repeating fraction in binary, the floating point representation has to limit 0.1 to a certain number of binary places. Its representation is shown below:

SIGN | EXPONENT | MANTISSA
  0  | 01111011 | 10011001100110011001101

The 32 bit floating point type represents our number in scientific notation, which you may have seen written as 1 x 10^-4 in the past. The difference for binary is that we multiply by powers of 2, since it is base 2, so an example would be 1 x 2^-4. The floating point binary representation is therefore split into a few pieces:

  • The first digit is the sign: it is 1 if the number is negative and 0 if it is positive.
  • The next 8 digits are the exponent, i.e. the power we raise the 2 to. It is stored with a bias of 127, so the bits 01111011 (123) mean an exponent of 123 - 127 = -4.
  • The final 23 digits are the mantissa, the significant digits that get multiplied by that power of 2.

This gives us the balance of a large range of numbers we can represent, while still keeping good performance.
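As a sketch, you can pull these three fields out of the 32-bit representation yourself in JavaScript by writing 0.1 into a Float32Array and reading the same bytes back as an integer (the variable names here are just for illustration):

// Store 0.1 as a 32-bit float, then reinterpret the same 4 bytes as an integer
const buffer = new ArrayBuffer(4)
new Float32Array(buffer)[0] = 0.1
const bits = new Uint32Array(buffer)[0]

const sign = (bits >>> 31) & 0b1       // 1 bit
const exponent = (bits >>> 23) & 0xff  // 8 bits, stored with a bias of 127
const mantissa = bits & 0x7fffff       // 23 bits

console.log(sign)                                    // 0
console.log(exponent.toString(2).padStart(8, '0'))   // 01111011 (123, i.e. 123 - 127 = -4)
console.log(mantissa.toString(2).padStart(23, '0'))  // 10011001100110011001101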

Back to 0.1

In our floating point notation, we limit 0.1 in binary to a certain number of binary places. The representation, together with its equivalent in mathematical notation, is shown below:

SIGN | EXPONENT | MANTISSA
  0  | 01111011 | 10011001100110011001101
# equivalent in mathematical notation: 1.600000023841858 * 2^-4

If you try to put 1.600000023841858 x 2^-4 into Google, you'll get a value of about 0.10000000149. As we've said, binary represents numbers differently: some values that are exact in base 10 become repeating fractions in binary, so when we store them we have to round them to a certain number of bits.
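You can also check that arithmetic in JavaScript itself, multiplying the mantissa value by 2 raised to the decoded exponent:

console.log(1.600000023841858 * 2 ** -4) // roughly 0.10000000149011612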

This rounding then shows up when we go back to base 10, meaning our once exact base 10 number is now 0.10000000149. If we then use this in a calculation, we end up with a rounding error, which is why console.log(0.1 * 0.2) returns 0.020000000000000004.
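One caveat: the 32 bit format above is just easier to read. JavaScript numbers are actually 64-bit doubles, so the rounding happens much further out, but it is still there. toFixed lets you peek at the extra digits the engine really stores (a quick sketch):

console.log((0.1).toFixed(20))        // 0.10000000000000000555
console.log((0.2).toFixed(20))        // 0.20000000000000001110
console.log((0.1 * 0.2).toFixed(20))  // 0.02000000000000000416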

Can I avoid it?

Some languages have a decimal datatype, which avoids this issue entirely. However, you don't necessarily need it. For simple errors like this, you can round the result to the number of decimal places you need, as shown below. Overall, this is an interesting example of how computers attempt to balance speed with accuracy.
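A minimal sketch of two common workarounds - rounding the result, or comparing with a small tolerance instead of strict equality (approxEqual is just an illustrative helper, not a built-in):

const result = 0.1 * 0.2  // 0.020000000000000004

// Round to the number of decimal places you actually need
console.log(Number(result.toFixed(2)))       // 0.02
console.log(Math.round(result * 100) / 100)  // 0.02

// Or compare with a tiny tolerance instead of ===
// (Number.EPSILON only makes sense as a tolerance for values around 1 or smaller)
const approxEqual = (a, b) => Math.abs(a - b) < Number.EPSILON
console.log(approxEqual(0.1 * 0.2, 0.02))    // true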

If you focus on JavaScript, there is a new standard being proposed to try to solve this problem, which can be found on GitHub. Let us know what you think on Twitter.
