Divisive Hierarchical Clustering is also known as DIANA, which stands for DIvisive ANAlysis. It was introduced by Kaufman and Rousseeuw in 1990.
Divisive Hierarchical Clustering addresses the same problem as Agglomerative Clustering but follows a top-down strategy. It is implemented in several statistical packages, for example as the diana() function in R's cluster package.
It is the reverse of AGNES (Agglomerative Nesting). The process starts at the root, where all points belong to a single cluster. The most heterogeneous cluster at the current level is then recursively split into two, which is why this is called a top-down approach. Starting from a single cluster containing all objects, each step divides a cluster into the two least similar subclusters based on the objects' characteristics, and the process continues until there is one cluster for each observation.
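The top-down recursion can be sketched in a few lines. This is a simplification, not full DIANA: here each cluster is split by using its two most distant points as seeds, and every object joins the nearer seed. All names (split_once, divisive) are illustrative.

```python
# Minimal top-down (divisive) clustering sketch.
# Assumption: Euclidean points; each split seeds two groups with the
# cluster's most distant pair (a simplification of DIANA's procedure).
import math

def split_once(points):
    """Split a cluster into two using its most distant pair as seeds."""
    n = len(points)
    # find the most distant pair of points (the cluster "diameter")
    a, b = max(((i, j) for i in range(n) for j in range(i + 1, n)),
               key=lambda ij: math.dist(points[ij[0]], points[ij[1]]))
    left, right = [], []
    for p in points:
        # assign each object to the nearer of the two seed points
        (left if math.dist(p, points[a]) <= math.dist(p, points[b])
              else right).append(p)
    return left, right

def divisive(points):
    """Recursively split until every cluster is a single object."""
    if len(points) <= 1:
        return [points]
    left, right = split_once(points)
    return divisive(left) + divisive(right)

data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
first = split_once(data)   # the first (top-level) split
leaves = divisive(data)    # recursion down to singleton clusters
```

On this toy data the first split already separates the two visible groups; continuing the recursion yields one singleton cluster per observation, as described above.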
These techniques are either monothetic or polythetic.
Monothetic methods consider only one variable at a time, while polythetic methods consider all variables simultaneously. The underlying clustering criterion involves some kind of distance or dissimilarity measure, and since many such measures are possible, the resulting clusters are generally not unique across methods and measures. Divisive techniques are usually described in terms of a generic "dissimilarity" unless a specific distance measure is under discussion. Several other aspects of constructing a divisive hierarchy tree also need attention. A different but related family is the class of classification and regression trees, in which each stage of the tree is constructed by a classification method or by regression analysis.
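The monothetic/polythetic distinction can be made concrete with a small sketch. The threshold and reference points below are hypothetical choices for illustration, not part of any standard method.

```python
# Monothetic vs. polythetic splits on two-variable records.
# Monothetic: split on ONE variable; polythetic: use ALL variables at once.
import math

records = [(1.0, 5.0), (1.2, 4.8), (8.0, 0.5), (8.3, 0.4)]

# Monothetic: threshold the first variable only
# (the threshold 5.0 is chosen by eye for this toy data)
mono_a = [r for r in records if r[0] < 5.0]
mono_b = [r for r in records if r[0] >= 5.0]

# Polythetic: assign each record to the nearer of two reference points,
# using a Euclidean distance that involves every variable simultaneously
ref_a, ref_b = records[0], records[2]
poly_a = [r for r in records if math.dist(r, ref_a) <= math.dist(r, ref_b)]
poly_b = [r for r in records if math.dist(r, ref_a) > math.dist(r, ref_b)]
```

On this data the two criteria happen to agree, but in general a monothetic split and a polythetic split of the same data need not coincide, which is one reason the resulting clusters are not unique across methods.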
It can also be considered a wrapper approach for creating cluster hierarchies: repeatedly applying a flat clustering algorithm (such as k-means with k = 2) turns it into a hierarchical clustering algorithm.
Hierarchical clustering is most commonly presented as a means of discovering and presenting the similarity structure of the training set and of the domain from which it comes.
Divisive Hierarchical Clustering is thus the clustering technique that works in inverse order: it first places all objects in a single large cluster, then at each step divides a cluster into two, and iterates until every object is in its own cluster.
It is the reverse of the Agglomerative Hierarchical Clustering algorithm. Suppose we start with one large cluster consisting of all n objects. At each step a cluster is divided into two, until every cluster contains a single object, so the technique completes in n − 1 steps. At the first step of an agglomerative method, all possible fusions of two objects are considered, giving n(n − 1)/2 combinations; for a divisive method working on the same principle, there are 2^(n−1) − 1 possible ways to split the data into two clusters. This number is far larger than for the agglomerative method, so the following steps are used to avoid exhaustive calculation.
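The gap between the two counts can be checked numerically with a quick sketch (function names are illustrative):

```python
# Candidate operations at the first step for n objects:
# agglomerative considers n(n-1)/2 pairwise fusions, while a naive
# divisive method faces 2**(n-1) - 1 possible two-way splits.
def fusions(n):
    return n * (n - 1) // 2

def splits(n):
    return 2 ** (n - 1) - 1

for n in (5, 10, 20):
    print(n, fusions(n), splits(n))
```

Already at n = 20 there are 190 fusions but 524,287 splits, which is why practical divisive algorithms never enumerate all splits.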
Step-1: Divisive Hierarchical Clustering starts where Agglomerative Hierarchical Clustering ends: with all objects combined, here into a single cluster containing all n objects. Following the top-down strategy, this single cluster is taken as level L(0) = n with sequence number m = 0.
Step-2: The most dissimilar pair of clusters that the current cluster can be split into is found; call it (r) and (s), with d[(r), (s)] = max d[(i), (j)], where the maximum runs over all candidate two-way splits of the current clusters.
Step-3: The sequence number is incremented, m = m + 1. The selected cluster is then broken into (r) and (s), which become clusters at the next level, and the level of this split is recorded as L(m) = d[(r), (s)].
Step-4: The distance matrix is updated by deleting the row and column of the broken cluster (r, s) and adding rows and columns for the two new clusters (r) and (s). The dissimilarity between a new cluster, say (r), and an old cluster (k) cannot be read off the old matrix; it is recomputed from the pairwise dissimilarities of their members — for example, under single linkage, d[(k), (r)] = min { d(p, q) : p in (k), q in (r) }.
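In practice, the split in Steps 2–3 is usually found with DIANA's splinter procedure rather than by enumerating all 2^(n−1) − 1 splits. A minimal sketch, assuming Euclidean dissimilarities (Kaufman and Rousseeuw define the procedure for any dissimilarity matrix; the function names here are illustrative):

```python
# DIANA-style splinter split of a single cluster (sketch).
import math

def avg_diss(p, group):
    """Average dissimilarity from object p to every object in group."""
    return sum(math.dist(p, q) for q in group) / len(group)

def diana_split(cluster):
    """Split a cluster into (splinter, remainder), DIANA-style."""
    rest = list(cluster)
    # the object with the largest average dissimilarity to the others
    # starts the splinter group
    seed = max(rest, key=lambda p: avg_diss(p, [q for q in rest if q != p]))
    splinter = [seed]
    rest.remove(seed)
    moved = True
    while moved and len(rest) > 1:
        moved = False
        # move the object whose average dissimilarity to the remainder
        # exceeds its average dissimilarity to the splinter group by the
        # largest positive margin
        gains = [(avg_diss(p, [q for q in rest if q != p])
                  - avg_diss(p, splinter), p) for p in rest]
        gain, cand = max(gains)
        if gain > 0:
            splinter.append(cand)
            rest.remove(cand)
            moved = True
    return splinter, rest

data = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11)]
splinter, rest = diana_split(data)
```

On this toy data the splinter group collects the two distant points, and the procedure stops as soon as no object is, on average, closer to the splinter group than to the remainder.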
Importance of Divisive clustering:
- Divisive clustering is computationally more complex than agglomerative clustering, since the number of possible splits grows exponentially with cluster size.
- In practice it can nevertheless be more efficient, because each split can be delegated to a fast flat algorithm (such as 2-means) instead of enumerating all possibilities.
- A divisive algorithm can also be more accurate at the top levels of the hierarchy: its splitting decisions see the complete data distribution, whereas agglomerative methods commit early to local merge decisions that cannot be undone.