Distance from a point to a line
The Endeavour 2024-03-16
Eric Lengyel’s new book Projective Geometric Algebra Illuminated arrived yesterday and I’m enjoying reading it. Imagine if someone started with ideas like dot products, cross products, and determinants that you might see in your first year of college, then thought deeply about those things for years. That’s kinda what the book is.
Early in the book is the example of finding the distance from a point q to a line of the form p + tv.
If you define u = q − p, then a straightforward derivation shows that the distance d from q to the line is given by

d = √(u·u − (u·v)²/(v·v))
But as the author explains, it is better to calculate d by

d = |u × v| / |v|
Why is that? The two expressions are algebraically equal, but the latter is better suited for numerical calculation.
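To make the comparison concrete, here is a minimal sketch of both formulas in Python with NumPy. The function names dist_naive and dist_cross are mine, not from the book.

```python
import numpy as np

def dist_naive(q, p, v):
    # d = sqrt(u·u − (u·v)²/(v·v)): the subtraction happens under the square root
    u = q - p
    return np.sqrt(u @ u - (u @ v)**2 / (v @ v))

def dist_cross(q, p, v):
    # d = |u × v| / |v|: the only subtractions are q − p and those inside the cross product
    u = q - p
    return np.linalg.norm(np.cross(u, v)) / np.linalg.norm(v)
```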
The cardinal rule of numerical calculation is to avoid subtracting nearly equal floating point numbers. If two numbers agree to b bits, you may lose up to b bits of significance when computing their difference.
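For example, in double precision (a toy illustration, not from the post):

```python
a = 1.2345678901234567   # about 16 significant digits, the limit of a double
b = 1.2345678901230000
print(a - b)             # the exact decimal answer is 4.567e-13, but the inputs
                         # agree in their first ~12 digits, so only a few digits
                         # of the printed result are meaningful
```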
If u and v are vectors with large magnitude, but q is close to the line, then the first equation subtracts two large, nearly equal numbers under the square root.
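Here is that failure mode, using the two functions above with a deliberately extreme offset. The specific numbers are mine, chosen to force the cancellation.

```python
p = np.array([1e8, 0.0, 0.0])    # point on the line, far from the origin
v = np.array([1.0, 0.0, 0.0])    # direction of the line
q = np.array([2e8, 1e-3, 0.0])   # true distance from the line is 1e-3

print(dist_naive(q, p, v))   # 0.0 on IEEE doubles: u·u and (u·v)²/(v·v) both round to 1e16
print(dist_cross(q, p, v))   # 0.001
```

The naive formula computes d² as the difference of two numbers near 10¹⁶ that agree beyond the 15–16 significant digits a double can hold, so the distance is lost before the square root is ever taken.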
The second equation involves subtraction too, but it’s less obvious. This is a common theme in numerical computing. Imagine this dialog.
[Student produces first equation.]
Mentor: Avoid subtracting nearly equal numbers.
[Student produces second equation.]
Student: OK, did it.
Mentor: That’s much better, though it could still have problems.
Where is there a subtraction in the second equation? We started with a subtraction in defining u. More subtly, the definition of cross product involves subtractions. But these subtractions involve smaller numbers than the first formula, because the first formula subtracts squared values. Eric Lengyel points this out in his book.
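To spell that out, the cross product in coordinates is

u × v = (u₂v₃ − u₃v₂, u₃v₁ − u₁v₃, u₁v₂ − u₂v₁)

Each subtraction here acts on products of coordinates, on the order of |u||v|, and yields a result on the order of d|v|. The first formula instead subtracts quantities on the order of |u|² to produce d², so it cancels roughly twice as many bits.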
None of this may matter in practice, until it does matter, which is a common pattern in numerical computing. You implement something like the first formula, something that can be derived directly. You implicitly have in mind vectors whose magnitude is comparable to d and this guides your choice of unit tests, which all pass.
Some time goes by and a colleague tells you your code is failing. Impossible! You checked your derivation by hand and in Mathematica. Your unit tests all pass. Must be your colleague’s fault. But it’s not. Your code would be correct in infinite precision, but in an actual computer it fails on inputs that violate your implicit assumptions.
This can be frustrating, but it can also be fun. Implementing equations from a freshman textbook accurately, efficiently, and robustly is not a freshman-level exercise.
Related posts
- Math library functions that seem unnecessary
- Don’t invert that matrix
- Avoiding overflow in Bayesian calculations