Get the count of non-zeros in each row and divide each row's sum by that count, so only the non-zero entries are averaged. The implementation would look something like this -
np.true_divide(matrix.sum(1),(matrix!=0).sum(1))
If you are on an older version of NumPy (or on Python 2, where / floor-divides integer arrays), you can convert the counts to float instead of using np.true_divide, like so -
matrix.sum(1)/(matrix!=0).sum(1).astype(float)
Sample run -
In [160]: matrix
Out[160]:
array([[0, 0, 1, 0, 2],
       [1, 0, 0, 2, 0],
       [0, 1, 1, 0, 0],
       [0, 2, 2, 2, 2]])
In [161]: np.true_divide(matrix.sum(1),(matrix!=0).sum(1))
Out[161]: array([ 1.5, 1.5, 1. , 2. ])
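For reference, here is a self-contained version of that run; the intermediate names are only for illustration:

import numpy as np

matrix = np.array([[0, 0, 1, 0, 2],
                   [1, 0, 0, 2, 0],
                   [0, 1, 1, 0, 0],
                   [0, 2, 2, 2, 2]])

row_sums = matrix.sum(1)               # [3, 3, 2, 8]
nonzero_counts = (matrix != 0).sum(1)  # [2, 2, 2, 4]
print(np.true_divide(row_sums, nonzero_counts))  # -> 1.5, 1.5, 1.0, 2.0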
Another way to solve the problem would be to replace the zeros with NaNs and then use np.nanmean, which ignores those NaNs (and, in effect, the original zeros), like so -
np.nanmean(np.where(matrix!=0,matrix,np.nan),1)
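On the sample matrix above this gives the same values; note that np.where upcasts the integer input to float when inserting NaN, so the original matrix is left untouched:

print(np.nanmean(np.where(matrix != 0, matrix, np.nan), 1))  # -> 1.5, 1.5, 1.0, 2.0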
From a performance point of view, I would recommend the first approach.
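If you want to verify that on your own data, here is a rough timing sketch; the test array big, its shape, and the repetition count are arbitrary placeholders:

import numpy as np
import timeit

big = np.random.randint(0, 5, (1000, 1000))  # hypothetical test input

# Approach 1: row sums divided by non-zero counts
t1 = timeit.timeit(lambda: np.true_divide(big.sum(1), (big != 0).sum(1)), number=100)
# Approach 2: replace zeros with NaN, then nanmean
t2 = timeit.timeit(lambda: np.nanmean(np.where(big != 0, big, np.nan), 1), number=100)
print(t1, t2)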