python - SciPy optimization with grouped bounds

I am trying to perform a portfolio optimization that returns the weights which maximize my utility function. I can do this portion just fine, including the constraint that the weights sum to one and the constraint that the weights hit a target risk. I have also included bounds of [0 <= weight <= 1]. The code looks as follows:

import numpy as np
import scipy.optimize as opt
from numpy import sqrt
from pandas import DataFrame, Series


def rebalance(PortValue, port_rets, risk_tgt):
    # convert continuously compounded returns to simple returns
    Rt = np.exp(port_rets) - 1
    covar = Rt.cov()

    def fitness(W):
        # portfolio simple returns, converted back to log returns
        port_Rt = np.dot(Rt, W)
        port_rt = np.log(1 + port_Rt)
        # 5% tail of the log-return distribution, scaled to a 20-day horizon
        q95 = Series(port_rt).quantile(.05)
        cVaR = (port_rt[port_rt < q95] * sqrt(20)).mean() * PortValue
        # ratio of expected 20-day P&L to conditional VaR, negated for minimization
        mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
        return -1 * mean_cVaR

    def solve_weights(W):
        b_ = [(0.0, 1.0) for _ in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W))
                                                   * 252) - risk_tgt})
        optimized = opt.minimize(fitness, W, method='SLSQP',
                                 constraints=c_, bounds=b_)

        if not optimized.success:
            raise ValueError(optimized.message)
        return optimized.x  # Return optimized weights

    # start from an equally weighted portfolio, keeping the column labels
    init_weights = Rt.iloc[1].copy()
    init_weights.iloc[:] = np.ones(len(Rt.columns)) / len(Rt.columns)

    return solve_weights(init_weights)

Now I can delve into the problem: I have my weights stored in a MultiIndex pandas Series such that each asset is an ETF corresponding to an asset class. When an equally weighted portfolio is printed out, it looks like this:

Out[263]:
equity       CZA     0.045455
             IWM     0.045455
             SPY     0.045455
intl_equity  EWA     0.045455
             EWO     0.045455
             IEV     0.045455
bond         IEF     0.045455
             SHY     0.045455
             TLT     0.045455
intl_bond    BWX     0.045455
             BWZ     0.045455
             IGOV    0.045455
commodity    DBA     0.045455
             DBB     0.045455
             DBE     0.045455
pe           ARCC    0.045455
             BX      0.045455
             PSP     0.045455
hf           DXJ     0.045455
             SRV     0.045455
cash         BIL     0.045455
             GSY     0.045455
Name: 2009-05-15 00:00:00, dtype: float64

How can I include an additional bounds requirement such that, when I group this data together, the sum of the weights falls within the allocation range I have predetermined for each asset class?

So concretely, I want to include an additional boundary such that

init_weights.groupby(level=0, axis=0).sum()
Out[264]:
equity         0.136364
intl_equity    0.136364
bond           0.136364
intl_bond      0.136364
commodity      0.136364
pe             0.136364
hf             0.090909
cash           0.090909
dtype: float64

is within these bounds:

[(.08,.51), (.05,.21), (.05,.41), (.05,.41), (.2,.66), (0,.16), (0,.76), (0,.11)]
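
For what it's worth, one standard way to express this kind of grouped-sum requirement with SLSQP is to turn each asset-class range into a pair of 'ineq' constraints over the column positions belonging to that class. The sketch below is only an illustration; group_bound_constraints is a helper name of my own, not part of the code above:

import numpy as np

def group_bound_constraints(columns, class_bounds):
    """Build SLSQP 'ineq' constraints so that the summed weight of each
    asset class stays inside its (lower, upper) range.

    columns      -- the MultiIndex of Rt.columns (level 0 = asset class)
    class_bounds -- dict mapping asset-class name -> (lower, upper)
    """
    classes = columns.get_level_values(0)
    cons = []
    for name, (lo, hi) in class_bounds.items():
        idx = np.where(classes == name)[0]   # column positions of this class
        # default arguments freeze idx/lo/hi for each lambda
        cons.append({'type': 'ineq',
                     'fun': lambda W, idx=idx, lo=lo: W[idx].sum() - lo})
        cons.append({'type': 'ineq',
                     'fun': lambda W, idx=idx, hi=hi: hi - W[idx].sum()})
    return cons

These could be appended to the existing c_ tuple inside solve_weights, with class_bounds built from the tactical ranges above, so a single optimization still sees the full covariance matrix.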

[UPDATE] I figured I would show my progress with a clumsy pseudo-solution that I am not too happy with, mainly because it doesn't solve the weights using the entire data set but rather asset class by asset class. The other issue is that it returns the series rather than the weights. But I am sure someone more apt than myself could offer some insight regarding the groupby function.

So with a mild tweak to my initial code, I have:

PortValue = 100000
# port_idx: index of the eight asset-class labels (equity, intl_equity, ...)
model = DataFrame(np.array([.08, .12, .05, .05, .65, 0, 0, .05]),
                  index=port_idx, columns=['strategic'])
model['tactical'] = [(.08, .51), (.05, .21), (.05, .41), (.05, .41),
                     (.2, .66), (0, .16), (0, .76), (0, .11)]


def fitness(W, Rt):
    # same utility as before: expected 20-day P&L scaled by conditional VaR
    port_Rt = np.dot(Rt, W)
    port_rt = np.log(1 + port_Rt)
    q95 = Series(port_rt).quantile(.05)
    cVaR = (port_rt[port_rt < q95] * sqrt(20)).mean() * PortValue
    mean_cVaR = (PortValue * (port_rt.mean() * 20)) / cVaR
    return -1 * mean_cVaR

def solve_weights(Rt, b_=None):
    if b_ is None:
        b_ = [(0.0, 1.0) for _ in Rt.columns]
    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    optimized = opt.minimize(fitness, W, args=(Rt,), method='SLSQP',
                             constraints=c_, bounds=b_)

    if not optimized.success:
        raise ValueError(optimized.message)
    return optimized.x  # Return optimized weights

The following one-liner will return the somewhat optimized series

port = np.dot(port_rets.groupby(level=0, axis=1).agg(lambda x: np.dot(x, solve_weights(x))),
              solve_weights(port_rets.groupby(level=0, axis=1).agg(lambda x: np.dot(x, solve_weights(x))),
                            list(model['tactical'].values)))

Series(port, name='portfolio').cumsum().plot()

[Plot: cumulative returns of the resulting portfolio]
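
For readability, the same one-liner can be unrolled into named steps (same logic; class_rets and class_weights are names introduced here, not from the code above):

# collapse each asset class into a single return series using its
# intra-class optimal weights
class_rets = port_rets.groupby(level=0, axis=1).agg(
    lambda x: np.dot(x, solve_weights(x)))

# solve for the asset-class weights subject to the tactical ranges
class_weights = solve_weights(class_rets, list(model['tactical'].values))

# combine class-level returns with class weights to get the portfolio series
port = np.dot(class_rets, class_weights)
Series(port, name='portfolio').cumsum().plot()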

[Update 2]

The following changes will return the constrained weights, though they are still not optimal: the problem is broken down and optimized asset class by asset class, so when the target-risk constraint is applied, only a collapsed version of the initial covariance matrix is available.

def solve_weights(Rt, b_=None):

    W = np.ones(len(Rt.columns)) / len(Rt.columns)
    if b_ is None:
        # intra-class pass: only require the weights to sum to one
        b_ = [(0.01, 1.0) for _ in Rt.columns]
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1})
    else:
        # class-level pass: add the annualized target-risk constraint
        covar = Rt.cov()
        c_ = ({'type': 'eq', 'fun': lambda W: sum(W) - 1},
              {'type': 'eq', 'fun': lambda W: sqrt(np.dot(W, np.dot(covar, W)) * 252) - risk_tgt})

    optimized = opt.minimize(fitness, W, args=(Rt,), method='SLSQP',
                             constraints=c_, bounds=b_)

    if not optimized.success:
        raise ValueError(optimized.message)

    return optimized.x  # Return optimized weights

# note: risk_tgt and port_rets come from the enclosing rebalance() scope
class_cont = Rt.iloc[0].copy()
class_cont.iloc[:] = np.around(np.hstack(Rt.groupby(axis=1, level=0).apply(solve_weights).values), 3)
scalars = class_cont.groupby(level=0).sum()
scalars.iloc[:] = np.around(solve_weights((class_cont * port_rets).groupby(level=0, axis=1).sum(),
                                          list(model['tactical'].values)), 3)

return class_cont.groupby(level=0).transform(lambda x: x * scalars[x.name])


1 Reply


Not totally sure I understand, but I think you can add the following as another constraint:

def w_opt(W):
    def filterer(x):
        # x is one asset-class group; its 'range' column holds that
        # class's (lower, upper) tuple, repeated for every member
        v = x.range.values
        tp = v[0]
        lower, upper = tp
        return lower <= x[column_name].sum() <= upper
    # keep only the groups whose summed weight lies inside its range
    return not W.groupby(level=0, axis=0).filter(filterer).empty

c_ = {'type': 'eq', 'fun': w_opt}  # add this to your other constraints

where x.range is the interval (tuple) repeated K[i] times, K[i] being the number of members of the ith level (asset class). column_name in your case happens to be a date.

This constrains the weights so that the sum of the weights in the ith group falls within that group's interval.

To map each of the level names to an interval, do this:

intervals = [(.08, .51), (.05, .21), (.05, .41), (.05, .41), (.2, .66), (0, .16), (0, .76), (0, .11)]
names = ['equity', 'intl_equity', 'bond', 'intl_bond', 'commodity', 'pe', 'hf', 'cash']

# one interval per asset-class name
mapper = Series(intervals, index=names)
# broadcast each class's interval across its constituent assets
fully_mapped = mapper[init_weights.index.get_level_values(0)]
original_dataset['range'] = fully_mapped.values
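
One practical caveat (my observation, not part of the answer above): scipy.optimize.minimize passes the weight vector to constraint functions as a plain NumPy array, so it has to be wrapped back into the MultiIndex Series before groupby(level=0) will work. A minimal sketch under that assumption, reusing names, intervals and init_weights from above:

def w_opt_array(W_array):
    # SLSQP hands the constraint a bare ndarray; rebuild the labelled
    # Series so the class-level grouping still works
    W = Series(W_array, index=init_weights.index)
    class_sums = W.groupby(level=0).sum()
    # number of class totals falling outside their (lower, upper) range;
    # the 'eq' constraint is satisfied when this is zero
    return sum(0 if lo <= class_sums[name] <= hi else 1
               for name, (lo, hi) in zip(names, intervals))

c_group = {'type': 'eq', 'fun': w_opt_array}

Like the filter-based version, this is a non-smooth constraint, so SLSQP may have trouble converging on it; the pair of per-class 'ineq' constraints sketched under the question is the smoother alternative.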
