Well, you are right that the documentation is rather obscure on this point (and, to be honest, I am not sure how useful the attribute is either).
Let's replicate the example from the documentation with the iris data:
from sklearn.datasets import load_iris
from sklearn import tree
iris = load_iris()
clf = tree.DecisionTreeClassifier()
clf = clf.fit(iris.data, iris.target)
Asking for clf.tree_.value, we get:
array([[[ 50., 50., 50.]],
       [[ 50.,  0.,  0.]],
       [[  0., 50., 50.]],
       [[  0., 49.,  5.]],
       [[  0., 47.,  1.]],
       [[  0., 47.,  0.]],
       [[  0.,  0.,  1.]],
       [[  0.,  2.,  4.]],
       [[  0.,  0.,  3.]],
       [[  0.,  2.,  1.]],
       [[  0.,  2.,  0.]],
       [[  0.,  0.,  1.]],
       [[  0.,  1., 45.]],
       [[  0.,  1.,  2.]],
       [[  0.,  1.,  0.]],
       [[  0.,  0.,  2.]],
       [[  0.,  0., 43.]]])
and
len(clf.tree_.value)
# 17
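A quick way to check this correspondence programmatically (a sketch; `random_state=0` is my own addition, to keep the fitted tree deterministic):

```python
from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
clf = tree.DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# value has shape (n_nodes, n_outputs, n_classes); for iris n_outputs is 1
# and n_classes is 3, so the first dimension equals the node count.
print(clf.tree_.value.shape)
print(clf.tree_.node_count)
```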
To realize what exactly this array represents it is useful to look at the tree visualization (also available in the docs, reproduced here for convenience):
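If you cannot see the image, sklearn.tree.export_text gives a plain-text rendering of the same tree (a sketch; export_text is available in scikit-learn 0.21+, and `random_state=0` is my own addition for determinism):

```python
from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
clf = tree.DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# Text rendering of the fitted tree; each leaf is shown as "class: k"
print(tree.export_text(clf, feature_names=iris.feature_names))
```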
As we can see, the tree has 17 nodes; looking closer, we see that the value of each node is actually an element of our clf.tree_.value array.
So, to make a long story short:
- clf.tree_.value is an array of arrays, with length equal to the number of nodes in the tree.
- Each of its element arrays (one per tree node) has length equal to the number of classes (here 3).
- Each of these 3-element arrays holds the number of training samples of each class that end up in the respective node.
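For instance, the first element (the root) collects all 150 training samples, 50 per class. Note that recent scikit-learn versions may store per-node class *fractions* in value rather than raw counts, so the sketch below only checks the class distribution at the root, which is balanced either way (`random_state=0` is again my own addition):

```python
from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
clf = tree.DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

# One entry per class; at the root the three iris classes are equally
# represented (50 samples each), whether value holds counts or fractions.
root = clf.tree_.value[0].ravel()
print(root)
```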
To clarify the last point with an example, consider the second element of the array, [[ 50., 0., 0.]] (which corresponds to the orange-colored node in the visualization): it says that 50 samples from class #0 end up in this node, and zero samples from the other two classes (#1 and #2).
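Tying this back to prediction: for any sample, the leaf it falls into can be found with clf.apply, and the class with the largest value at that leaf is exactly what clf.predict returns (a sketch; this holds for unweighted training data like the example above):

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn import tree

iris = load_iris()
clf = tree.DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

leaves = clf.apply(iris.data)                # leaf node index for each sample
# Pick the class with the largest value at each sample's leaf:
majority = clf.tree_.value[leaves].argmax(axis=2).ravel()
print(np.array_equal(clf.classes_[majority], clf.predict(iris.data)))  # True
```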
Hope this helps...