
The PCT framework views a decision tree as a hierarchy of clusters. The top node corresponds to one cluster containing all examples from the dataset, which is recursively partitioned into smaller sub-clusters using the standard top-down induction of decision trees algorithm. The partitioning of the dataset into smaller sub-clusters can be viewed as a heuristic that reduces the variance of the examples in each sub-cluster, making the sub-clusters more homogeneous. In other words, cluster homogeneity is maximized by maximizing the variance reduction. Since the examples in a sub-cluster are homogeneous, better predictive performance is expected \cite{kocev2013tree}.

The PCT algorithm, therefore, uses the variance function, which guides the partitioning of the dataset, and the prototype function, which computes a label for each leaf, as parameters that can be instantiated for a given learning task. In this work, we instantiated PCTs for both multi-label classification \cite{madjarov2012extensive, kocev2013tree} and hierarchical multi-label classification \cite{vens2008decision}.
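To make the role of these two parameters concrete, the following is a minimal sketch of top-down PCT induction in Python. It is not taken from any particular PCT implementation: the dict-based tree representation, the restriction to binary splits on numeric features, and the `min_size` stopping parameter are illustrative assumptions.

```python
import numpy as np

def induce_pct(X, Y, variance, prototype, min_size=5):
    """Top-down induction: pick the split that maximizes variance reduction.
    X: (n, d) numeric feature matrix; Y: (n, T) target matrix.
    `variance` and `prototype` are the two instantiation parameters.
    Returns a nested dict representing the tree (illustrative format)."""
    n = len(X)
    best = None
    for j in range(X.shape[1]):                      # candidate tests: X[:, j] <= t
        for t in np.unique(X[:, j])[:-1]:
            mask = X[:, j] <= t
            nl, nr = mask.sum(), n - mask.sum()
            if nl < min_size or nr < min_size:
                continue
            # Variance reduction: Var(E) - sum_k |E_k|/|E| * Var(E_k)
            red = variance(Y) - (nl / n) * variance(Y[mask]) \
                              - (nr / n) * variance(Y[~mask])
            if best is None or red > best[0]:
                best = (red, j, t, mask)
    if best is None or best[0] <= 0:                 # no useful split: make a leaf
        return {"prototype": prototype(Y)}
    _, j, t, mask = best
    return {"feature": j, "threshold": t,
            "left": induce_pct(X[mask], Y[mask], variance, prototype, min_size),
            "right": induce_pct(X[~mask], Y[~mask], variance, prototype, min_size)}
```

Instantiating `variance` and `prototype` with task-specific functions, as shown in the sketches below, yields the multi-label and hierarchical multi-label variants.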

Multi-label classification uses the Gini index instantiation of PCTs, which is able to predict multiple binary targets simultaneously. Namely, the variance function for PCTs for the task of multi-label classification is computed as the sum of the Gini indices of the target variables, i.e., $\mathit{Var}(E) = \sum_{i=1}^{T} \mathit{Gini}(E, Y_i)$. The prototype function returns a vector of probabilities that an instance is labeled with a given label (binary target variable). Finally, the labels are obtained by applying a threshold to this vector of probabilities.
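A minimal sketch of this instantiation, again assuming numpy, 0/1 label matrices, and an illustrative threshold of 0.5:

```python
import numpy as np

def gini(y):
    """Gini index of one binary target: 1 - p^2 - (1-p)^2 = 2p(1-p)."""
    p = np.mean(y)
    return 2.0 * p * (1.0 - p)

def mlc_variance(Y):
    """Var(E) = sum_{i=1}^{T} Gini(E, Y_i) over the T binary targets.
    Y is an (n_examples, T) 0/1 matrix."""
    return sum(gini(Y[:, i]) for i in range(Y.shape[1]))

def mlc_prototype(Y):
    """Per-label probability: the proportion of positive examples in the leaf."""
    return Y.mean(axis=0)

def predict_labels(prototype, threshold=0.5):
    """Threshold the probability vector to obtain the predicted label set."""
    return (prototype >= threshold).astype(int)
```

A multi-label PCT would then be grown as `induce_pct(X, Y, mlc_variance, mlc_prototype)` with the induction sketch above.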

For the task of hierarchical multi-label classification, the variance of a set of examples $E$ is defined as the average squared distance between each example's label vector $L_i$ and the set's mean label vector $\overline{L}$: $\mathit{Var}(E) = \frac{1}{\vert E \vert} \sum_{E_i \in E} d(L_i, \overline{L})^2$, where the label vector is defined as a vector with binary components. The $i$-th component of the vector is $1$ if the example is labeled with the label $l_i$ and $0$ otherwise. In the hierarchical multi-label setting, the similarity at higher levels of the hierarchy is more important than the similarity at lower levels. This is reflected by using a weighted Euclidean distance: $d(L_1, L_2) = \sqrt{\sum_{l=1}^{\vert L \vert} w(l_l) (L_{1,l} - L_{2,l})^2}$, where $L_{i,l}$ is the $l$-th component of the label vector $L_i$ of an instance $E_i$, $\vert L \vert$ is the size of the label vector, and the label weights $w(l)$ decrease with the depth of the label in the hierarchy ($w(l) = w_0 \cdot w(p(l))$, where $p(l)$ denotes the parent of label $l$ and $0 < w_0 < 1$).
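As an illustration, the sketch below computes the recursive label weights and the resulting weighted distance and variance in Python. The parent mapping, the default $w_0 = 0.75$ (the definition only requires $0 < w_0 < 1$), and the convention that top-level labels receive weight $w_0$ are assumptions made for the example.

```python
import numpy as np

def label_weight(label, parent, w0=0.75, cache=None):
    """w(l) = w0 * w(p(l)); top-level labels get weight w0 (assumed convention).
    `parent` maps each label to its parent, or None for top-level labels.
    The value of w0 is illustrative; any 0 < w0 < 1 is admissible."""
    if cache is None:
        cache = {}
    if label not in cache:
        p = parent.get(label)
        cache[label] = w0 if p is None else w0 * label_weight(p, parent, w0, cache)
    return cache[label]

def weighted_distance(L1, L2, weights):
    """Weighted Euclidean distance between two binary label vectors."""
    return np.sqrt(np.sum(weights * (L1 - L2) ** 2))

def hmc_variance(Y, weights):
    """Var(E): average squared weighted distance to the mean label vector.
    Y is an (n_examples, |L|) 0/1 matrix; `weights` has one entry per label."""
    mean = Y.mean(axis=0)
    return np.mean([weighted_distance(y, mean, weights) ** 2 for y in Y])
```

Given a list `labels` ordered consistently with the columns of `Y`, the weight vector would be built as `weights = np.array([label_weight(l, parent) for l in labels])`, and the hierarchical variant grown with `induce_pct(X, Y, lambda Y: hmc_variance(Y, weights), mlc_prototype)`.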