Hierarchical Cluster Analysis

(Statistical Analysis > Cluster Analysis Hierarchical)

Description

Hierarchical Cluster Analysis is performed using a set of dissimilarities for all objects being clustered. Initially, each object is assigned to its own cluster; the algorithm then proceeds iteratively, at each stage joining the two most similar clusters, and continues until only one cluster remains. Hierarchical cluster analysis is an exploratory technique: it can reveal structure in a data set, but it does not by itself explain why that structure exists.
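The outputs listed below match the components returned by the base-R functions dist() and hclust(); assuming that correspondence, the whole procedure can be sketched in a few lines of R. The USArrests data set and the scaling step are illustrative choices, not part of the dialog.

   # Minimal sketch of the procedure described above (illustrative data).
   x  <- scale(USArrests)               # standardise the variables of interest
   d  <- dist(x, method = "euclidean")  # dissimilarities for all pairs of objects
   hc <- hclust(d, method = "complete") # join the two most similar clusters until one remains
   hc                                   # prints the call, agglomeration method and number of objects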

Inputs

Name – Cluster Analysis Hierarchical
Cluster Analysis Hierarchical Dataset Input – select a dataset that contains the variables of interest.
Cluster Analysis Variable List – a set of independent variables.
Cluster Analysis Distance Metric – the distance measure to be used. This must be one of “euclidean”, “maximum”, “manhattan”, “canberra”, “binary” or “minkowski”.
Cluster Analysis Cluster Metric – the agglomeration method to be used. This should be one of “ward”, “single”, “complete”, “average”, “mcquitty”, “median” or “centroid”.
Cluster Analysis Labels – a character vector of labels for the leaves of the tree. By default the row names or row numbers of the original data are used. If labels = FALSE, no labels at all are plotted. This argument is not shown in the dialog and is left at its default value (TRUE). A sketch of how these inputs map onto the underlying computation follows this list.
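Assuming the dialog inputs map onto the arguments of R's dist() and hclust() in the obvious way, the mapping can be sketched as below; the mtcars data, the selected columns and the metric choices are placeholders, not recommendations.

   # Illustrative mapping of the dialog inputs to R arguments.
   vars <- mtcars[, c("mpg", "hp", "wt")]           # Cluster Analysis Variable List
   d    <- dist(scale(vars), method = "manhattan")  # Cluster Analysis Distance Metric
   hc   <- hclust(d, method = "average")            # Cluster Analysis Cluster Metric
   plot(hc, labels = rownames(vars))                # Cluster Analysis Labels (row names by default)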

Outputs

Output includes the following (a sketch showing how to inspect these components follows the list):
Distance Matrix : the matrix of dissimilarities between all pairs of data records.
Merge : an n-1 by 2 matrix. Row i of merge describes the merging of clusters at step i of the clustering. If an element j in the row is negative, then observation -j was merged at this stage. If j is positive then the merge was with the cluster formed at the (earlier) stage j of the algorithm. Thus negative entries in merge indicate agglomerations of singletons, and positive entries indicate agglomerations of non-singletons.
Height : a set of n-1 real values (non-decreasing for ultrametric trees). The clustering height: that is, the value of the criterion associated with the clustering method for the particular agglomeration.
Order : a vector giving the permutation of the original observations suitable for plotting, in the sense that a cluster plot using this ordering and matrix merge will not have crossings of the branches.
Labels : labels for each of the objects being clustered.
Method : the agglomeration method that has been used.
Distance Method : the distance measure that has been used to create the dissimilarity structure.
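Assuming the outputs correspond to the dissimilarity matrix together with the components of an R hclust object, they can be inspected as sketched below; the data and metric choices are again illustrative.

   # Inspecting the components behind the outputs listed above (illustrative data).
   d  <- dist(scale(USArrests), method = "euclidean")
   hc <- hclust(d, method = "complete")
   as.matrix(d)[1:4, 1:4]   # Distance Matrix (first few pairs of records)
   head(hc$merge)           # Merge: n-1 by 2 merge history
   head(hc$height)          # Height: criterion value at each agglomeration
   head(hc$order)           # Order: permutation of observations used for plotting
   head(hc$labels)          # Labels: one label per clustered object
   hc$method                # Method: agglomeration method used
   hc$dist.method           # Distance Method: dissimilarity measure used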

Note : The results of a hierarchical cluster analysis can be displayed using the following graphical tools (an illustrative plotting sketch follows the list):
1) Tools > Charts > Hierarchical Clustering Dendrogram
2) Tools > Charts > Hierarchical Clustering Tree Chart
3) Tools > Charts > Hierarchical Clustering Heat Map
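The chart tools above are the product's own; as an illustrative aside, broadly similar views can be produced in plain R from an hclust result, as sketched here.

   # Rough equivalents of the three chart tools (illustrative only).
   d  <- dist(scale(mtcars))
   hc <- hclust(d, method = "average")
   plot(hc)                                    # 1) dendrogram
   plot(as.dendrogram(hc), type = "triangle")  # 2) tree-style chart
   heatmap(as.matrix(scale(mtcars)))           # 3) heat map with row and column dendrograms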

Advanced
The following algorithm is used in the implementation of hierarchical clustering (an illustrative sketch of these steps follows the list).

• Compute the distance matrix.
• Assign each object to its own cluster.
• Repeat:
   o Merge the two most similar clusters.
   o Update the distance matrix.
• Until there is only one cluster.
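The clustering itself is carried out by an optimised routine (the outputs above suggest base R's hclust()); the sketch below merely restates the listed steps in plain R, using single linkage as the "most similar clusters" rule, and is written for clarity rather than speed.

   # Naive restatement of the steps above (single linkage); illustration only.
   naive_agglomerate <- function(x) {
     D <- as.matrix(dist(x))                # compute the distance matrix
     clusters <- as.list(seq_len(nrow(D)))  # each object is assigned to its own cluster
     merges <- list()
     while (length(clusters) > 1) {         # repeat until there is only one cluster
       best <- c(Inf, NA, NA)
       for (i in seq_along(clusters)) {     # find the two most similar clusters
         for (j in seq_along(clusters)) {
           if (i < j) {
             d_ij <- min(D[clusters[[i]], clusters[[j]]])  # single-linkage distance
             if (d_ij < best[1]) best <- c(d_ij, i, j)
           }
         }
       }
       i <- best[2]; j <- best[3]
       merges[[length(merges) + 1]] <- list(height = best[1],
                                            members = c(clusters[[i]], clusters[[j]]))
       clusters[[i]] <- c(clusters[[i]], clusters[[j]])  # merge the two clusters; the distance
       clusters[[j]] <- NULL                             # "update" happens implicitly via the
     }                                                   # min over cluster members above
     merges
   }

   m <- naive_agglomerate(scale(mtcars[1:8, ]))
   sapply(m, `[[`, "height")   # merge height at each step, cf. hclust()$height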

References
(1) An Introduction to R, version 3.1.0 (2014-04-10).
(2) Aldenderfer, M. S. and Blashfield, R. K. (1984) Cluster Analysis, SAGE Publications, Newbury Park.
(3) Gordon, A. D. (1999) Classification (Second Edition), Chapman & Hall/CRC, Boca Raton.
(4) http://www.public.asu.edu/~jye02/CLASSES/Fall-2007/NOTES/Basic-cluster.ppt.