This post is a more carefully thought out and peer reviewed version of a floating-point comparison article I wrote many years ago. This one gives solid advice and some surprising observations about the tricky subject of comparing floating-point numbers. A compilable source file with license is available.

We’ve finally reached the point in this series that I’ve been waiting for. In this post I am going to share the most crucial piece of floating-point math knowledge that I have. Here it is:

[Floating-point] math is hard. You just won’t believe how vastly, hugely, mind-bogglingly hard it is. I mean, you may think it’s difficult to calculate when trains from Chicago and Los Angeles will collide, but that’s just peanuts to floating-point math.

Seriously. Each time I think that I’ve wrapped my head around the subtleties and implications of floating-point math I find that I’m wrong and that there is some extra confounding factor that I had failed to consider. So, the lesson to remember is that floating-point math is always more complex than you think it is. Keep that in mind through the rest of the post where we talk about the promised topic of comparing floats, and understand that this post gives some suggestions on techniques, but no silver bullets.

Previously on this channel…

This is the fifth chapter in a long series. The first couple of posts in the series are particularly important for understanding this one. A (mostly) complete list of the other posts includes:
Comparing for equality

Floating point math is not exact. Simple values like 0.1 cannot be precisely represented using binary floating point numbers, and the limited precision of floating point numbers means that slight changes in the order of operations or the precision of intermediates can change the result. That means that comparing two floats to see if they are equal is usually not what you want. GCC even has a (well intentioned but misguided) warning for this: “warning: comparing floating point with == or != is unsafe”.

Here’s one example of the inexactness that can creep in:

    float f = 0.1f;
    float sum;
    sum = 0;

    for (int i = 0; i < 10; ++i)
        sum += f;
    float product = f * 10;
    printf("sum = %1.15f, mul = %1.15f, mul2 = %1.15f\n",
           sum, product, f * 10);

This code tries to calculate ‘one’ in three different ways: repeated adding, and two slight variants of multiplication. Naturally we get three different results, and only one of them is 1.0:
Disclaimer: the results you get will depend on your compiler, your CPU, and your compiler settings, which actually helps make the point. So what happened, and which one is correct?

What do you mean ‘correct’?

Before we can continue I need to make clear the difference between 0.1, float(0.1), and double(0.1). In C/C++ the numbers 0.1 and double(0.1) are the same thing, but when I say “0.1” in text I mean the exact base-10 number, whereas float(0.1) and double(0.1) are rounded versions of 0.1. And, to be clear, float(0.1) and double(0.1) don’t have the same value, because float(0.1) has fewer binary digits, and therefore has more error. Here are the exact values for 0.1, float(0.1), and double(0.1):
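One way to see those rounded values for yourself is to print the constants at high precision. This is a quick C/C++ sketch, with the caveat that how many digits are printed correctly depends on your C runtime’s float-to-decimal conversion:

    #include <stdio.h>

    int main(void)
    {
        float f = 0.1f;
        double d = 0.1;
        // Request far more digits than the types can hold so that the
        // stored (rounded) values become visible.
        printf("float(0.1)  = %.40f\n", f);
        printf("double(0.1) = %.60f\n", d);
        return 0;
    }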
With that settled, let’s look at the results of the code above:
So, answer one is incorrect, but it’s only one float away. Answer two is correct (but inexact), whereas answer three is completely correct (but appears wrong).

Now what?

Now we have a couple of different answers (I’m going to ignore the double-precision answer), so what do we do? What if we are looking for results that are equal to one, but we also want to count any that are plausibly equal to one – results that are “close enough”?

Epsilon comparisons

If comparing floats for equality is a bad idea then how about checking whether their difference is within some error bounds or epsilon value, like this:
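For instance, a minimal sketch (the function and variable names here are purely illustrative):

    #include <math.h>

    // Naive absolute-epsilon comparison: treat the two floats as equal
    // if their difference is within some fixed tolerance.
    bool NaiveAlmostEqual(float f1, float f2, float epsilon)
    {
        return fabsf(f1 - f2) <= epsilon;
    }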
With this calculation we can express the concept of two floats being close enough that we want to consider them to be equal. But what value should we use for epsilon? Given our experimentation above we might be tempted to use the error in our sum, which was about 1.19e-7f. In fact, there’s even a define in float.h with that exact value, and it’s called FLT_EPSILON. Clearly that’s it. The header file gods have spoken and FLT_EPSILON is the one true epsilon!

Except that that is rubbish. For numbers between 1.0 and 2.0 FLT_EPSILON represents the difference between adjacent floats. For numbers smaller than 1.0 an epsilon of FLT_EPSILON quickly becomes too large, and with small enough numbers FLT_EPSILON may be bigger than the numbers you are comparing (a variant of this led to a flaky Chromium test)! For numbers larger than 2.0 the gap between floats grows larger, and if you compare floats using FLT_EPSILON then you are just doing a more-expensive and less-obvious equality check. That is, if two floats greater than 2.0 are not the same then their difference is guaranteed to be greater than FLT_EPSILON. For numbers above 16777216 the appropriate epsilon to use for floats is actually greater than one, and a comparison using FLT_EPSILON just makes you look foolish. We don’t want that.

Relative epsilon comparisons

The idea of a relative epsilon comparison is to find the difference between the two numbers, and see how big it is compared to their magnitudes. In order to get consistent results you should always compare the difference to the larger of the two numbers. In English: calculate diff = fabs(f1 - f2), and if diff is smaller than some small fraction of max(fabs(f1), fabs(f2)) then treat f1 and f2 as equal.
In code:
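Here is a C++ sketch of that comparison. The parameter name maxRelDiff is used to match the discussion below; treat the details as illustrative rather than canonical:

    #include <math.h>
    #include <float.h>

    // Relative comparison: the allowed difference scales with the
    // larger of the two input magnitudes.
    bool AlmostEqualRelative(float A, float B, float maxRelDiff = FLT_EPSILON)
    {
        float diff = fabsf(A - B);
        A = fabsf(A);
        B = fabsf(B);
        float largest = (B > A) ? B : A;

        return diff <= largest * maxRelDiff;
    }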
This function is not bad. It works. Mostly. I’ll talk about the limitations later, but first I want to get to the point of this article – the technique that I first suggested many years ago. When doing a relative comparison of floats it works pretty well to set maxRelDiff to FLT_EPSILON, or some small multiple of FLT_EPSILON. Anything smaller than that and it risks being equivalent to no epsilon. You can certainly make it larger, if greater error is expected, but don’t go crazy. However selecting the correct value for maxRelDiff is a bit tweaky and non-obvious, and the lack of a direct relationship to the floating-point format being used makes me sad.

ULP, he said nervously

We already know that adjacent floats have integer representations that are adjacent. This means that if we subtract the integer representations of two numbers then the difference tells us how far apart the numbers are in float space. That brings us to:

Dawson’s obvious-in-hindsight theorem: if the integer representations of two same-sign floats are subtracted then the absolute value of their difference is one plus the number of representable floats between them.
In other words, if you subtract the integer representations and get one, then the two floats are as close as they can be without being equal. If you get two then they are still really close, with just one float between them. The difference between the integer representations tells us how many Units in the Last Place the numbers differ by. This is usually shortened to ULP, as in “these two floats differ by two ULPs.” So let’s try that concept:
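Here is a C++ sketch of an ULPs based comparison. FloatToInt is a helper name chosen here for illustration; it reinterprets the bits of a float as a 32-bit signed integer:

    #include <stdint.h>
    #include <string.h>
    #include <stdlib.h>

    // Reinterpret the bits of a float as a 32-bit signed integer.
    static int32_t FloatToInt(float f)
    {
        int32_t i;
        memcpy(&i, &f, sizeof f);
        return i;
    }

    bool AlmostEqualUlps(float A, float B, int maxUlpsDiff)
    {
        int32_t iA = FloatToInt(A);
        int32_t iB = FloatToInt(B);

        // Different signs means they do not match, except that +0 and -0
        // have different sign bits, so fall back to an exact comparison.
        if ((iA < 0) != (iB < 0))
            return A == B;

        // Find the difference in ULPs.
        int ulpsDiff = abs(iA - iB);
        return ulpsDiff <= maxUlpsDiff;
    }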
This is tricky and perhaps profound. The check for different signs is necessary for several reasons. Subtracting the sign-and-magnitude representations of floats using two’s-complement math isn’t particularly meaningful, and the subtraction would produce a 33-bit result and overflow. Even if we deal with these technical issues it turns out that an ULPs based comparison of floats with different signs doesn’t even make sense.
After the special cases are dealt with we simply subtract the integer representations, get the absolute value, and now we know how different the numbers are. The ‘ulpsDiff’ value gives us the number of floats between the two numbers (plus one), which is a wonderfully intuitive way of dealing with floating-point error. One ULPs difference good (adjacent floats). One million ULPs difference bad (kinda different).

Comparing numbers with ULPs is really just a way of doing relative comparisons. It has different characteristics at the extremes, but in the range of normal numbers it is quite well behaved. The concept is sufficiently ‘normal’ that boost has a function for calculating the difference in ULPs between two numbers. A one ULP difference is the smallest possible difference between two numbers. One ULP between two floats is far larger than one ULP between two doubles, but the nomenclature remains terse and convenient. I like it.

ULP versus FLT_EPSILON

It turns out checking for adjacent floats using the ULPs based comparison is quite similar to using AlmostEqualRelative with epsilon set to FLT_EPSILON. For numbers that are slightly above a power of two the results are generally the same. For numbers that are slightly below a power of two the FLT_EPSILON technique is twice as lenient. In other words, if we compare 4.0 to 4.0 plus two ULPs then a one ULP comparison and a FLT_EPSILON relative comparison will both say they are not equal. However if we compare 4.0 to 4.0 minus two ULPs then a one ULP comparison will say they are not equal (of course) but a FLT_EPSILON relative comparison will say that they are equal. This makes sense. Adding two ULPs to 4.0 changes its magnitude twice as much as subtracting two ULPs, because of the exponent change. Neither technique is better or worse because of this, but they are different. If my explanation doesn’t make sense then perhaps my programmer art will:

ULP based comparisons also have different performance characteristics. ULP based comparisons are more likely to be efficient on architectures such as SSE which encourage the reinterpreting of floats as integers. However ULPs based comparisons can cause horrible stalls on other architectures, due to the cost of moving float values to integer registers. These stalls generally happen when doing float calculations and then immediately doing ULP based comparisons on the results, so keep that in mind when benchmarking.

Normally a difference of one ULP means that the two numbers being compared have similar magnitudes – the larger one is usually no more than 1.000000119 times larger than the smaller. But not always. Some notable exceptions are:

- FLT_MAX to infinity – one ULP apart, but an infinite ratio between them.
- Zero to the smallest denormal – one ULP apart, and again an infinite ratio.
- The smallest denormal to the next-smallest denormal – one ULP apart, but a ratio of two.
- NaNs – two NaNs can have very different integer representations, and they never compare equal anyway.
- Positive and negative zero – their integer representations are two billion ULPs apart, yet they compare equal.
That’s a lot of notable exceptions. For many purposes you can ignore NaNs (you should be enabling illegal operation exceptions so that you find out when you generate them) and infinities (ditto for overflow exceptions), so that leaves denormals and zeros as the biggest possible problems. In other words, numbers at or near zero.

Infernal zero

It turns out that the entire idea of relative epsilons breaks down near zero. The reason is fairly straightforward. If you are expecting a result of zero then you are probably getting it by subtracting two numbers. In order to hit exactly zero the numbers you are subtracting need to be identical. If the numbers differ by one ULP then you will get an answer that is small compared to the numbers you are subtracting, but enormous compared to zero.

Consider the sample code at the very beginning. If we add float(0.1) ten times then we get a number that is obviously close to 1.0, and either of our relative comparisons will tell us that. However if we subtract 1.0 from the result then we get an answer of FLT_EPSILON, where we were hoping for zero. If we do a relative comparison between zero and FLT_EPSILON, or pretty much any number really, then the comparison will fail. In fact, FLT_EPSILON is 872,415,232 ULPs away from zero, despite being a number that most people would consider to be pretty small.

For another example, consider this calculation:
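Here is a sketch. The starting value is arbitrary, nextafterf is used to get the float exactly one ULP above it, and AlmostEqualUlps is the sketch from above:

    #include <math.h>
    #include <float.h>

    float someFloat = 67329.234f;                      // arbitrarily chosen value
    float nextFloat = nextafterf(someFloat, FLT_MAX);  // exactly one ULP above it

    // Neighboring floats, so an ULPs based comparison says they are equal.
    bool equal = AlmostEqualUlps(someFloat, nextFloat, 1);   // true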
Our test shows that someFloat and nextFloat are very close – they are neighbors. All is good. But consider what happens if we subtract them:
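Continuing the same sketch:

    // The ULP size at this magnitude, roughly 0.0078, which is tiny relative
    // to someFloat but enormous relative to zero.
    float diff = nextFloat - someFloat;

    // Hugely different from zero when measured in ULPs (or relative terms).
    bool equalToZero = AlmostEqualUlps(diff, 0.0f, 1);   // false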
While someFloat and nextFloat are very close, and their difference is small by many standards, ‘diff’ is a vast distance away from zero, and will dramatically and emphatically fail any ULPs or relative based test that compares it to zero.

There is no easy answer to this problem. If you think you have found an easy and generic answer to this problem then you should reread the material above. The most generic answer to this quandary is to use a mixture of absolute and relative epsilons. If the two numbers being compared are extremely close – whatever that means – then treat them as equal, regardless of their relative values. This technique is necessary any time you are expecting an answer of zero due to subtraction. The value of the absolute epsilon should be based on the magnitude of the numbers being subtracted – it should be something like maxInput * FLT_EPSILON. Unfortunately this means that it is dependent on the algorithm and the inputs. Charming.

The ULPs based technique also breaks down near zero for the technical reasons discussed just below the definition of AlmostEqualUlps. Doing a floating-point absolute epsilon check first, and then treating all other different-signed numbers as being non-equal is the simpler and safer thing to do. Here is some possible code for doing this, both for relative epsilon and for ULPs based comparison, with an absolute epsilon ‘safety net’ to handle the near-zero case:
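One possible shape for such functions – a sketch rather than a definitive implementation, using the same bit-reinterpretation helper as before and illustrative parameter names:

    #include <math.h>
    #include <stdint.h>
    #include <string.h>
    #include <stdlib.h>

    // Reinterpret the bits of a float as a 32-bit signed integer.
    static int32_t FloatToInt(float f)
    {
        int32_t i;
        memcpy(&i, &f, sizeof f);
        return i;
    }

    // Absolute-epsilon check first (handles the near-zero case), then ULPs.
    bool AlmostEqualUlpsAndAbs(float A, float B, float maxDiff, int maxUlpsDiff)
    {
        // Check if the numbers are really close, needed when comparing
        // numbers near zero.
        float absDiff = fabsf(A - B);
        if (absDiff <= maxDiff)
            return true;

        // Different signs means they do not match.
        int32_t iA = FloatToInt(A);
        int32_t iB = FloatToInt(B);
        if ((iA < 0) != (iB < 0))
            return false;

        // Find the difference in ULPs.
        int ulpsDiff = abs(iA - iB);
        return ulpsDiff <= maxUlpsDiff;
    }

    // Absolute-epsilon check first, then a relative comparison.
    bool AlmostEqualRelativeAndAbs(float A, float B, float maxDiff, float maxRelDiff)
    {
        // Check if the numbers are really close, needed when comparing
        // numbers near zero.
        float diff = fabsf(A - B);
        if (diff <= maxDiff)
            return true;

        A = fabsf(A);
        B = fabsf(B);
        float largest = (B > A) ? B : A;

        return diff <= largest * maxRelDiff;
    }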
Catastrophic cancellation, hiding in plain sight

If we calculate f1 - f2 and then compare the result to zero then we know that we are dealing with catastrophic cancellation, and that we will only get zero if f1 and f2 are equal. However, sometimes the subtraction is not so obvious. Consider this code:
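Something like this (a sketch; the constants are double- and float-precision approximations of pi, and the exact printing format is incidental):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double pi_d = 3.14159265358979323846;
        const float  pi_f = (float)pi_d;

        // Trigonometry says sin(pi) is zero, but we are not passing in pi.
        double sin_d = sin(pi_d);           // sin of the double-precision pi
        float  sin_f = (float)sin(pi_f);    // sin of the float-precision pi, rounded to float

        printf("sin(double(pi)) = %.30g\n", sin_d);
        printf("sin(float(pi))  = %.30g\n", (double)sin_f);
        return 0;
    }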
It’s straightforward enough. Trigonometry teaches us that the result should be zero. But that is not the answer you will get. For double-precision and float-precision values of pi the answers I get are:
If you do an ULPs or relative epsilon comparison to the ‘correct’ value of zero then this looks pretty bad. It’s a long way from zero. So what’s going on? Is the calculation of sin() really that inaccurate? Nope. The calculation of sin() is pretty close to perfect – in recent glibc versions it is perfect. Even Intel’s fsin instruction, which gives an answer with just a few bits of precision in this case, still gets the correct order of magnitude. The problem is not with sin()/fsin – the problem lies elsewhere. But to understand what’s going on we have to invoke… calculus!
But first we have to acknowledge that we aren’t asking the sin function to calculate sin(pi). Instead we are asking it to calculate sin(double(pi)) or sin(float(pi)). Since pi is transcendental and irrational it should be no surprise that pi cannot be exactly represented in a float, or even in a double. Therefore, what we are really calculating is sin(pi - theta), where theta is a small number representing the difference between ‘pi’ and float(pi) or double(pi). Calculus teaches us (since the derivative of sin(x) at x = pi is negative one) that, for sufficiently small values of theta, sin(pi - theta) == theta. Therefore, if our sin function is sufficiently accurate we would expect sin(double(pi)) to be roughly equal to pi - double(pi). In other words, sin(double(pi)) actually calculates the error in double(pi)! This is best shown for sin(float(pi)), because then we can easily add float(pi) to sin(float(pi)) using double precision:
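A sketch of that demonstration (the variable names are mine, and the digit counts in the format strings are arbitrary):

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double pi_d = 3.14159265358979323846;
        const float  pi_f = (float)pi_d;

        // Evaluate sin(float(pi)) in double precision, then add float(pi) back in.
        double sin_of_float_pi = sin((double)pi_f);
        double recovered_pi    = (double)pi_f + sin_of_float_pi;

        printf("float(pi)                  = %.17f\n", (double)pi_f);
        printf("sin(float(pi))             = %.17f\n", sin_of_float_pi);
        printf("float(pi) + sin(float(pi)) = %.17f\n", recovered_pi);
        return 0;
    }

The last number printed should match pi to far more digits than float(pi) does, which is the whole point.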
If you haven’t memorized pi to 15+ digits then this may be lost on you, but the salient point is that float(pi) + sin(float(pi)) is a more accurate value of pi than float(pi). Or, alternately, sin(float(pi)) tells you nothing more than the error in float(pi).

Woah. Dude.

Again: sin(float(pi)) equals the error in float(pi). I’m such a geek that I think that is the coolest thing I’ve discovered in quite a while. Think about this. Because this is profound. Here are the results of comparing sin(‘pi’) to the error in the value of ‘pi’ passed in:

sin(double(pi)) = +0.0000000000000001224646799147353207

Wow. Our predictions were correct. sin(double(pi)) is accurate to sixteen to seventeen significant figures as a measure of the error in double(pi), and sin(float(pi)) is accurate to six to seven significant figures. The main reason the sin(float(pi)) results are less accurate is because operator overloading translates this to (float)sin(float(pi)), which rounds the result to float precision. I think it is perversely wonderful that we can use double-precision math to precisely measure the error in a double-precision constant. Visual C++ 2015 can print the value of double(pi) to arbitrarily many digits, and glibc now calculates sin() accurately, so we can use this code and some hand adding to calculate pi to over 30 digits of accuracy!
You can also see this trick in action with a calculator. Just put it in radians mode, try these calculations, and note how the input plus the result add up to pi. Nerdiest bar trick ever: for example, sin(3.14) is about 0.00159265, and 3.14 + 0.00159265 = 3.14159265 – pi to nine digits.
The point is that sin(float(pi)) is actually calculating pi - float(pi), which means that it is classic catastrophic cancellation. We should expect an absolute error (from zero) of up to about 3.14*FLT_EPSILON/2, and in fact we get a bit less than that.

Know what you’re doing

There is no silver bullet. You have to choose wisely:

- If you are comparing against zero, then relative epsilons and ULPs based comparisons are usually meaningless. You need an absolute epsilon, whose value should be based on the magnitude of the inputs to your calculation.
- If you are comparing against a non-zero number, then relative epsilons or ULPs based comparisons are probably what you want – typically some small multiple of FLT_EPSILON, or a small number of ULPs.
- If you are comparing two arbitrary numbers that could be zero or non-zero, then you need the mixed absolute/relative technique shown above.
Above all you need to understand what you are calculating, how stable the algorithms are, and what you should do if the error is larger than expected. Floating-point math can be stunningly accurate, but you also need to understand what it is that you are actually calculating. If you want to learn more about algorithm stability you should read up on condition numbers and consider getting Michael L. Overton’s excellent book Numerical Computing with IEEE Floating Point Arithmetic. If your code is giving large errors then, rather than increasing your epsilon values, you should try restructuring your code to make it more stable – this can sometimes give stunning results.

The true value of a float

In order to get the results shown above I used the same infinite precision math library I created for Fractal eXtreme to check all the math and to print numbers to high precision. I also wrote some simple code to print the true values of floats and doubles. Next time I’ll share the techniques and code used for this extended precision printing – unlocking one more piece of the floating-point puzzle, and explaining why, depending on how you define it, floats have a decimal precision of anywhere from one to over a hundred digits.
Credits

Thanks to Florian Wobbe for his excellent error finding and many insights into this topic.
