Numerical computing surprises
If you fit a polynomial to a function at more points, you'd expect a better approximation. It’s fairly well known that this intuition breaks down for some functions, the most famous being Runge’s example f(x) = 1/(1 + 25x²): as you increase the number of evenly spaced interpolation points, the fit gets worse between the points.
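For concreteness, here is a minimal sketch of the phenomenon. The NumPy/SciPy approach is my choice for illustration, not code from the post: it interpolates Runge’s function at evenly spaced points and prints the worst-case error over the interval.

    import numpy as np
    from scipy.interpolate import BarycentricInterpolator

    def runge(x):
        return 1 / (1 + 25 * x**2)

    xs = np.linspace(-1, 1, 2001)  # fine grid for measuring the error
    for n in [5, 11, 21, 41]:
        nodes = np.linspace(-1, 1, n)  # evenly spaced interpolation points
        p = BarycentricInterpolator(nodes, runge(nodes))
        err = np.max(np.abs(p(xs) - runge(xs)))
        print(f"{n:3d} points: max error = {err:.2e}")

More points make the fit worse, not better, with the error concentrated near the ends of the interval.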
What’s less well known is that even functions immune to this pathology in theory may still be vulnerable in practice: the interpolation process can amplify floating point rounding error, so the approximation error can grow exponentially with the number of points.
I give an example here of a function that interpolates nicely in theory, and in practice for up to around 50 interpolation points. But somewhere between 50 and 100 points the interpolation goes badly wrong.
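The function from that post isn’t reproduced here, but the effect is easy to see with a stand-in. The sketch below is my assumption, not the post’s code: it uses f(x) = exp(x), which is entire, so equispaced interpolation converges in exact arithmetic. In floating point, rounding error in the stored function values is amplified by a factor that grows roughly like 2^n.

    import numpy as np
    from scipy.interpolate import BarycentricInterpolator

    xs = np.linspace(-1, 1, 2001)
    for n in [20, 50, 100]:
        nodes = np.linspace(-1, 1, n)  # evenly spaced points
        p = BarycentricInterpolator(nodes, np.exp(nodes))
        err = np.max(np.abs(p(xs) - np.exp(xs)))
        print(f"{n:3d} points: max error = {err:.2e}")

Exact numbers vary by machine, but the error is tiny at 20 points and then jumps by many orders of magnitude somewhere between 50 and 100 points, even though in exact arithmetic it would keep shrinking.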
Writing the interpolation post reminded me of Wilkinson’s polynomial. I’ve referred to this example often, but I didn’t have a blog post to point to until now.
James Wilkinson discovered in 1963 that a tiny change to one coefficient of a polynomial can produce an enormous change in its roots.
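Here is a minimal sketch of the classic demonstration with NumPy; the code is my illustration, though the 2^-23 perturbation to the x^19 coefficient is the one Wilkinson used.

    import numpy as np

    # Wilkinson's polynomial has roots 1, 2, ..., 20.
    # Build coefficients from float roots so np.poly doesn't overflow
    # 64-bit integers: some coefficients are as large as ~1.4e19.
    coeffs = np.poly(np.arange(1, 21, dtype=float))

    # Perturb the x^19 coefficient (which is -210) by 2^-23,
    # the perturbation Wilkinson used.
    perturbed = coeffs.copy()
    perturbed[1] -= 2.0 ** -23

    print(np.sort_complex(np.roots(perturbed)))

The computed roots scatter: about half of them leave the real line entirely, with imaginary parts approaching ±3. (Coefficients this large are already rounded just by being stored as doubles, so np.roots contributes some error of its own, but the qualitative picture is robust.)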
The two posts are related. The problems with the interpolation example stem from the interpolation points being evenly spaced. And the sensitivity of the roots of Wilkinson’s polynomial to changes in the coefficients stems from the roots being evenly spaced as well.
Enjoy!