
python - How to access weighting of individual decision trees in xgboost?

I'm using xgboost for ranking with

param = {'objective':'rank:pairwise', 'booster':'gbtree'}

As I understand it, gradient boosting works by calculating a weighted sum of the learned decision trees. How can I access the weights that are assigned to each learned booster? I'd like to post-process the weights after training to speed up the prediction step, but I don't know how to get the individual weights. When using dump_model(), the individual decision trees can be seen in the created file, but no weighting is stored there. I haven't found a suitable function in the API. Or can I calculate the weights by hand from the shrinkage parameter eta?



1 Reply


Each tree is given the same weight eta and the overall prediction is the sum of the predictions of each tree, as you say.
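
In other words, there is no separately stored per-tree weight to extract: the leaf values you see in dump_model() already have eta folded into them, so the final prediction is just base_score plus the sum of the leaf values across all trees. Here's a minimal sketch that checks this numerically (using the default regression objective rather than the question's ranking setup, for brevity, and assuming xgboost >= 1.4 for iteration_range; older versions use ntree_limit instead):

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.random((100, 5))
    y = rng.random(100)
    dtrain = xgb.DMatrix(X, label=y)

    n_rounds = 10
    bst = xgb.train({'eta': 0.1, 'base_score': 0.0}, dtrain,
                    num_boost_round=n_rounds)

    # Cumulative raw margin after the first k trees, for k = 0..n_rounds.
    margins = [np.zeros(X.shape[0])]  # base_score = 0: the empty ensemble predicts 0
    for k in range(1, n_rounds + 1):
        margins.append(bst.predict(dtrain, output_margin=True,
                                   iteration_range=(0, k)))

    # The contribution of tree k is the difference between successive margins;
    # these contributions (leaf values, already scaled by eta) sum to the final margin.
    per_tree = [margins[k] - margins[k - 1] for k in range(1, n_rounds + 1)]
    assert np.allclose(sum(per_tree), bst.predict(dtrain, output_margin=True))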

You might perhaps expect the earlier trees to be given more weight than the later trees, but that isn't needed, because of the way the response is updated after every tree. Here's a toy example:

Suppose we have 5 observations, with responses 10, 20, 30, 40, 50. The first tree is built and gives predictions of 12, 18, 27, 39, 54.

Now, if eta = 1, the response values passed to the next tree will be -2, 2, 3, 1, -4 (i.e. the true response minus the prediction). The next tree will then try to learn the 'noise' that wasn't captured by the first tree. If nrounds = 2, then the sum of the predictions from the two trees gives the final prediction of the model.

If instead eta = 0.1, all trees will have their predictions scaled down by eta, so the first tree will instead 'predict' 1.2, 1.8, 2.7, 3.9, 5.4. The response values passed to the next tree will then be 8.8, 18.2, 27.3, 36.1, 44.6 (the true response minus the scaled prediction). The second round then uses these response values to build another tree, and again the predictions are scaled by eta. So say tree 2 predicts 7, 18, 25, 40, 40, which, once scaled, become 0.7, 1.8, 2.5, 4.0, 4.0. As before, the third tree is passed the difference between these scaled predictions and the response values tree 2 was fitted to (so 8.1, 16.4, 24.8, 32.1, 40.6). Again, the sum of the scaled predictions from all trees gives the final prediction.
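
The arithmetic above is easy to replay; here's the same toy example in numpy (the 'tree outputs' are the made-up numbers from the text, not fitted trees):

    import numpy as np

    y = np.array([10., 20., 30., 40., 50.])
    tree1 = np.array([12., 18., 27., 39., 54.])  # raw output of tree 1

    # eta = 1: the next tree is fitted to the plain residuals.
    print(y - tree1)            # [-2.  2.  3.  1. -4.]

    # eta = 0.1: tree 1's contribution is scaled down before residuals are taken.
    eta = 0.1
    resid1 = y - eta * tree1
    print(eta * tree1)          # [1.2 1.8 2.7 3.9 5.4]
    print(resid1)               # [ 8.8 18.2 27.3 36.1 44.6]

    tree2 = np.array([7., 18., 25., 40., 40.])   # raw output of tree 2
    resid2 = resid1 - eta * tree2
    print(eta * tree2)          # [0.7 1.8 2.5 4.  4. ]
    print(resid2)               # [ 8.1 16.4 24.8 32.1 40.6]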

Clearly, when eta = 0.1 and base_score is 0, you'll need at least 10 rounds to get a prediction that's anywhere near sensible. In general, you need an absolute minimum of 1/eta rounds, and typically many more.
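
One way to see why roughly 1/eta rounds is a floor: even if every tree predicted the remaining residual perfectly, each round only closes a fraction eta of the remaining gap, so after n rounds you've captured a fraction 1 - (1 - eta)^n of the target. A quick illustration:

    eta, target = 0.1, 50.0
    pred = 0.0  # base_score = 0
    for n in range(1, 31):
        pred += eta * (target - pred)  # a 'perfect' tree predicts the full residual
        if n in (5, 10, 20, 30):
            print(n, round(pred, 2))
    # 5 20.48 / 10 32.57 / 20 43.92 / 30 47.88 -- still short of 50 after 30 rounds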

The rationale for using a small eta is that the model benefits from taking small steps towards the prediction rather than making tree 1 do the majority of the work. It's a bit like crystallisation - cool slowly and you get bigger, better crystals. The downside is you need to increase nrounds, thus increasing the runtime of the algorithm.
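
In practice that trade-off just means scaling num_boost_round up as you scale eta down. A rough sketch with the question's ranking objective (the data, group sizes, and parameter values are all illustrative):

    import numpy as np
    import xgboost as xgb

    rng = np.random.default_rng(0)
    X = rng.random((100, 5))
    y = rng.integers(0, 5, size=100)   # relevance labels
    dtrain = xgb.DMatrix(X, label=y)
    dtrain.set_group([50, 50])         # two query groups of 50 documents each

    # Same idea, two training budgets: a smaller eta needs proportionally more rounds.
    fast = xgb.train({'objective': 'rank:pairwise', 'eta': 0.3}, dtrain,
                     num_boost_round=100)
    slow = xgb.train({'objective': 'rank:pairwise', 'eta': 0.03}, dtrain,
                     num_boost_round=1000)  # ~10x the rounds, ~10x the runtime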

