I don't think you can use nnls directly, as the Fortran code it calls doesn't allow extra constraints. However, the constraint that the solution sums to one can be introduced as a third equation, so your example system becomes:
60 x1 + 90 x2 + 120 x3 = 67.5
30 x1 + 120 x2 + 90 x3 = 60
x1 + x2 + x3 = 1
As this is now a square set of linear equations, the exact solution can be obtained from x=np.dot(np.linalg.inv(A),b), giving x=[0.6875, 0.375, -0.0625]. This requires x3 to be negative, so there is no exact solution to this problem with x constrained to be non-negative.
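To make that concrete, here is a minimal check of the exact solution (using np.linalg.solve, which is equivalent to multiplying by the inverse but numerically preferable):

import numpy as np

A = np.array([[60., 90., 120.],
              [30., 120., 90.],
              [ 1.,  1.,   1.]])
b = np.array([67.5, 60., 1.])

# Exact solution of the square system; note the negative third component
x = np.linalg.solve(A, b)
print(x)  # [ 0.6875  0.375  -0.0625]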
An approximate solution with x constrained to be non-negative can then be obtained using:
import numpy as np
from scipy.optimize import nnls

# Residual of the linear system, for checking the fit
def fn(x, A, b):
    return A.dot(x) - b

# Define problem, with the sum constraint appended as a third row
A = np.array([[60., 90., 120.],
              [30., 120., 90.],
              [ 1.,  1.,   1.]])
b = np.array([67.5, 60., 1.])

x, rnorm = nnls(A, b)
print(x, x.sum(), fn(x, A, b))
which gives x=[0.60003332, 0.34998889, 0.] with x.sum()=0.95. The sum constraint is only met approximately, because nnls treats it as just another equation in the least-squares fit.
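One way to pull the sum closer to one while staying within nnls (a sketch, not part of the original call above) is to weight the constraint row more heavily, so violations of the sum are penalised more than misfit in the other equations; the weight w here is an arbitrary illustrative choice:

import numpy as np
from scipy.optimize import nnls

w = 1000.  # arbitrary large weight on the sum constraint (assumption)
A = np.array([[60., 90., 120.],
              [30., 120., 90.],
              [ w,   w,    w]])
b = np.array([67.5, 60., w])

x, rnorm = nnls(A, b)
print(x, x.sum())  # sum is now much closer to 1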
I think if you wanted a more general solution, including sum constraints that are enforced exactly, you'd need to use minimize with explicit constraints and bounds, in the following form:
import numpy as np
from scipy.optimize import minimize, nnls

# Define problem
A = np.array([[60., 90., 120.],
              [30., 120., 90.]])
b = np.array([67.5, 60.])

# Use nnls to get an initial guess
x0, rnorm = nnls(A, b)

# Minimisation objective: norm of the residual
def fn(x, A, b):
    return np.linalg.norm(A.dot(x) - b)

# Equality constraint (sum to one) and non-negativity bounds
cons = {'type': 'eq', 'fun': lambda x: np.sum(x) - 1}
bounds = [(0., None)] * 3

# Call minimisation subject to these values
minout = minimize(fn, x0, args=(A, b), method='SLSQP',
                  bounds=bounds, constraints=cons)
x = minout.x
print(x, x.sum(), fn(x, A, b))
which gives x=[0.674999366, 0.325000634, 0.] with x.sum()=1. From minimize, the sum constraint is now satisfied exactly, but the fit to the original equations is only approximate, with np.dot(A,x)=[69.75001902, 59.25005706] rather than [67.5, 60]. As shown above, this residual is unavoidable: no non-negative x satisfies both equations and the sum constraint exactly.
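As a quick sanity check (assuming, as both solvers suggest, that the optimum has x3=0), substituting x2 = 1 - x1 reduces the problem to one dimension, and the closed-form least-squares solution reproduces the SLSQP answer, so the residual above is the best achievable under these constraints:

import numpy as np

A = np.array([[60., 90., 120.],
              [30., 120., 90.]])
b = np.array([67.5, 60.])

# With x3 = 0 and x2 = 1 - x1, the residual is c*x1 + d
c = A[:, 0] - A[:, 1]
d = A[:, 1] - b

# Closed-form 1-D least-squares minimiser of ||c*x1 + d||
x1 = -np.dot(c, d) / np.dot(c, c)
print(x1, 1. - x1)  # 0.675 0.325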