The mathematically correct way of doing a fit with fixed points is to use Lagrange multipliers. Basically, you modify the objective function you want to minimize, which is normally the sum of squared residuals, by adding an extra parameter (a multiplier) for every fixed point. I have not succeeded in feeding a modified objective function to one of scipy's minimizers. But for a polynomial fit you can work out the details with pen and paper and convert your problem into the solution of a linear system of equations (the algebra is sketched right after the function below):
import numpy as np

def polyfit_with_fixed_points(n, x, y, xf, yf):
    # Square system of size n + 1 + len(xf): the polynomial coefficients
    # followed by one Lagrange multiplier per fixed point.
    mat = np.empty((n + 1 + len(xf),) * 2)
    vec = np.empty((n + 1 + len(xf),))
    # Powers of the data points, x_i**k for k = 0 .. 2n.
    x_n = x**np.arange(2 * n + 1)[:, None]
    # Right-hand side of the normal equations: sum_i y_i * x_i**k.
    yx_n = np.sum(x_n[:n + 1] * y, axis=1)
    # Power sums S_k = sum_i x_i**k for k = 0 .. 2n.
    x_n = np.sum(x_n, axis=1)
    # Normal-equations block: entry (k, m) is S_{k + m}.
    idx = np.arange(n + 1) + np.arange(n + 1)[:, None]
    mat[:n + 1, :n + 1] = np.take(x_n, idx)
    # Powers of the fixed points, xf_j**k.
    xf_n = xf**np.arange(n + 1)[:, None]
    # Multiplier columns; the 1/2 absorbs the 2 from differentiating the squared residuals.
    mat[:n + 1, n + 1:] = xf_n / 2
    # Constraint rows: force p(xf_j) = yf_j.
    mat[n + 1:, :n + 1] = xf_n.T
    mat[n + 1:, n + 1:] = 0
    vec[:n + 1] = yx_n
    vec[n + 1:] = yf
    params = np.linalg.solve(mat, vec)
    # Discard the multipliers, return only the polynomial coefficients.
    return params[:n + 1]
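For reference, this is the system the function assembles (a sketch of the pen-and-paper step; here p(x) = \sum_{k=0}^{n} c_k x^k is the fit polynomial and the \lambda_j are the multipliers, one per fixed point):

L(c, \lambda) = \sum_i \left( p(x_i) - y_i \right)^2 + \sum_j \lambda_j \left( p(xf_j) - yf_j \right)

Setting all partial derivatives to zero gives n + 1 + len(xf) linear equations:

\frac{\partial L}{\partial c_k} = 0 \;\Rightarrow\; \sum_{m=0}^{n} \left( \sum_i x_i^{k+m} \right) c_m + \frac{1}{2} \sum_j xf_j^{\,k}\, \lambda_j = \sum_i y_i x_i^{k}, \qquad \frac{\partial L}{\partial \lambda_j} = 0 \;\Rightarrow\; \sum_{k=0}^{n} xf_j^{\,k}\, c_k = yf_j

The first group is the usual normal equations with extra multiplier columns (the 1/2 comes from dividing out the 2 produced by differentiating the squared residuals, which is why the code divides xf_n by 2), and the second group forces the polynomial through the fixed points.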
To test that it works, try the following, where n is the number of points, d the degree of the polynomial, and f the number of fixed points:
import matplotlib.pyplot as plt

n, d, f = 50, 8, 3

# Noisy data around a random degree-d polynomial, plus f random fixed points.
x = np.random.rand(n)
xf = np.random.rand(f)
poly = np.polynomial.Polynomial(np.random.rand(d + 1))
y = poly(x) + np.random.rand(n) - 0.5
yf = np.random.uniform(np.min(y), np.max(y), size=(f,))

params = polyfit_with_fixed_points(d, x, y, xf, yf)
poly = np.polynomial.Polynomial(params)

xx = np.linspace(0, 1, 1000)
plt.plot(x, y, 'bo')         # data points
plt.plot(xf, yf, 'ro')       # fixed points
plt.plot(xx, poly(xx), '-')  # fitted polynomial
plt.show()
And of course the fitted polynomial passes exactly through the fixed points:
>>> yf
array([ 1.03101335, 2.94879161, 2.87288739])
>>> poly(xf)
array([ 1.03101335, 2.94879161, 2.87288739])