Those of us working with relatively high-level languages spend most of our time able to blissfully ignore the mucky little details that go into making a program actually run. C# allows us to not think about allocating and freeing memory, for example, or how our strings are terminated. It’s all too easy to extend this willful blindness to other details, like how numeric types actually work. Can this variable ever have a fractional part? Yes? Double! No? Int! Can it be really big? Long! Done, sorted.
Recently I was writing some functions to do unit conversions. Specifically, I was converting between points (1/72 of an inch) and EMUs (1/914400 of an inch). Part of the point of EMUs is that you can use them for high-precision values without needing floating point arithmetic, so I was storing them in a long. The application in question deals with fractional points, so those went into a double. Now, what’s the largest positive point value that we can convert into EMUs without any overflow? If we do a little math we see that there are 12,700 EMUs per point, so the obvious answer should be long.MaxValue / 12700.0, which comes out to a bit over 7.2E14. Excellent. However, if I turn that around and convert that many points back into EMUs, I get -9.2E18. Oh dear.
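Here’s a minimal repro of that round trip (the program scaffolding is mine; only the 12,700-EMUs-per-point constant and long.MaxValue come from the scenario above):

```csharp
using System;

class RoundTripDemo
{
    const double EmusPerPoint = 12700.0;   // 914,400 EMUs per inch / 72 points per inch

    static void Main()
    {
        // The largest point value that "should" convert back into a long of EMUs.
        double maxPoints = long.MaxValue / EmusPerPoint;
        Console.WriteLine(maxPoints);                  // ~7.26E14

        // Converting it back. The spec leaves an out-of-range double-to-long cast
        // unspecified in an unchecked context; in practice it typically wraps to
        // long.MinValue -- the -9.2E18 above -- though newer runtimes may saturate.
        long emus = unchecked((long)(maxPoints * EmusPerPoint));
        Console.WriteLine(emus);
    }
}
```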
Those of you familiar with IEEE 754 saw this coming half a paragraph ago, but I’ll spell it out for the rest of us. C#’s double type (as defined by section 4.1.1 of the C# spec) is a 64-bit floating point value. Its magnitude goes up to roughly 1.8E308, but it only has about 15-16 digits of precision. Above 2^53 (roughly 9.0E15) it can no longer represent every integer, and the gaps between representable values keep growing. In the case of long.MaxValue, which has 19 digits, simply converting it to a double gives us a value that’s actually greater than long.MaxValue: the nearest double is 2^63, one higher than a long can hold. Naturally, when we do some double arithmetic (namely, (long.MaxValue / 12700.0) * 12700.0) that should wind up at long.MaxValue, we actually end up at that same too-large value, so when we convert it back into a long, it overflows and becomes negative.
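If you want to see the rounding for yourself, decimal can hold these 19-digit values exactly, so it makes a convenient lens for printing what a double actually contains (a quick sketch):

```csharp
using System;

class WhereDidMyDigitsGo
{
    static void Main()
    {
        double asDouble = long.MaxValue;

        Console.WriteLine(long.MaxValue);       // 9223372036854775807
        Console.WriteLine((decimal)asDouble);   // 9223372036854775808 -- the nearest double, 2^63

        // The round trip lands on that same too-big double, so casting it
        // back to long has nowhere legal to go.
        Console.WriteLine((decimal)(long.MaxValue / 12700.0 * 12700.0));   // 9223372036854775808
    }
}
```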
So what is the largest point value that we can convert into EMUs? If we stick with storing points in a double, we should restrict ourselves to point values small enough that the resulting EMU counts stay within the range where a double can still represent every integer exactly. This leaves us with a ceiling somewhere between 1.0E15 and 1.0E16 EMUs. There are 914,400 EMUs in an inch, so this gives us a maximum magnitude of rather more than seventeen thousand miles, which happens to be enough for our purposes. So that’s a viable solution.
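Here’s a sketch of what that cap could look like in code; the class, method, and constant names are my own invention rather than anything from a real API:

```csharp
using System;

static class PointConverter
{
    const double EmusPerPoint = 12700.0;

    // Up to 2^53 (about 9.0E15) a double can represent every integer exactly,
    // which falls inside the 1.0E15-1.0E16 window discussed above.
    const double MaxExactEmus = 9007199254740992.0;   // 2^53

    public static long PointsToEmus(double points)
    {
        double emus = points * EmusPerPoint;
        if (double.IsNaN(emus) || Math.Abs(emus) >= MaxExactEmus)
            throw new ArgumentOutOfRangeException(nameof(points),
                "Too large to convert to EMUs without losing integer precision.");
        return (long)Math.Round(emus);
    }
}
```

Anything the guard lets through fits comfortably in a long; anything bigger throws instead of silently going negative.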
We could also use C#’s decimal type. Intended for use with currency values, decimal has a smaller maximum magnitude than double, but a full 28 digits of precision, at the cost of 8 extra bytes per value. We don’t happen to need 13 more orders of magnitude in our lengths, and 8 bytes can add up quickly when we start storing tens of millions of objects, which isn’t uncommon.
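For comparison, a decimal version might look something like this (again, the names are mine); it trades those 8 extra bytes per value for exact base-10 arithmetic and an exception instead of a silent wraparound:

```csharp
using System;

static class DecimalPointConverter
{
    const decimal EmusPerPoint = 12700m;

    public static long PointsToEmus(decimal points) =>
        // An out-of-range decimal-to-long conversion throws OverflowException
        // rather than silently wrapping around.
        (long)decimal.Round(points * EmusPerPoint);

    public static decimal EmusToPoints(long emus) => emus / EmusPerPoint;
}
```

Since 0.1 is exact in decimal (it isn’t in double), PointsToEmus(0.1m) comes back as exactly 1270 EMUs and EmusToPoints(1270) as exactly 0.1 points.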
C# has a reasonably good set of abstractions that let us programmers focus on higher-level tasks, but there are still some boundaries where our high-level expectations can collide with the low-level reality. These liminal states are ripe for unexpected errors, when our convenient abstractions (in this case, that a double is equivalent to a real number) run up hard against reality.