Following @ChrisFonnesbeck's advice, I wrote a small tutorial notebook about incremental prior updating. It can be found here:
https://github.com/pymc-devs/pymc3/blob/master/docs/source/notebooks/updating_priors.ipynb
Basically, you need to wrap your posterior samples in a custom `Continuous` class that computes the KDE from them. The following code does just that:
```python
import numpy as np
import theano.tensor as tt
from scipy import stats
from theano.compile.ops import as_op
from pymc3 import Continuous


def from_posterior(param, samples):

    class FromPosterior(Continuous):
        def __init__(self, *args, **kwargs):
            self.logp = logp
            super(FromPosterior, self).__init__(*args, **kwargs)

    # Evaluate a Gaussian KDE of the posterior samples on a grid
    smin, smax = np.min(samples), np.max(samples)
    x = np.linspace(smin, smax, 100)
    y = stats.gaussian_kde(samples)(x)
    y0 = np.min(y) / 10  # what was never sampled should have a small probability but not 0

    @as_op(itypes=[tt.dscalar], otypes=[tt.dscalar])
    def logp(value):
        # Interpolates from observed values
        return np.array(np.log(np.interp(value, x, y, left=y0, right=y0)))

    return FromPosterior(param, testval=np.median(samples))
```
Then you define the prior of your model parameter (say `alpha`) by calling the `from_posterior` function with the parameter name and the trace samples from the posterior of the previous iteration:
```python
alpha = from_posterior('alpha', trace['alpha'])
```
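To make the iteration concrete, here is a minimal sketch of how the full update loop might look, assuming a toy model with a single parameter `alpha` (a Normal prior on the mean of Gaussian data). The model structure, data, and batch sizes are illustrative, not taken from the notebook:

```python
import numpy as np
import pymc3 as pm

trace = None
for batch in range(3):
    # Hypothetical incoming data batch with true mean 1.0
    data = np.random.randn(100) + 1.0

    with pm.Model():
        if trace is None:
            # First iteration: start from a weakly informative prior
            alpha = pm.Normal('alpha', mu=0, sd=10)
        else:
            # Later iterations: prior is the KDE of the previous posterior
            alpha = from_posterior('alpha', trace['alpha'])
        pm.Normal('obs', mu=alpha, sd=1, observed=data)
        trace = pm.sample(1000)
```

Note that the `as_op`-wrapped `logp` exposes no gradient, so PyMC3 should fall back to a gradient-free step method (e.g. Slice or Metropolis) when sampling models that use `from_posterior`.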