Expectation Step : calculate the probability that each object belongs to each cluster
Maximization Step : find the new estimates of the parameters that maximize the expected likelihood
end until the parameter estimates converge or the likelihood stops improving
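A minimal sketch of EM-based clustering using scikit-learn's GaussianMixture, which runs the E-step and M-step internally (the toy two-blob dataset and hyperparameters are illustrative assumptions, not from the notes):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Toy 2-D data: two blobs (illustrative only)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(100, 2)),
               rng.normal(5.0, 1.0, size=(100, 2))])

# GaussianMixture alternates the E-step (computing responsibilities) and the
# M-step (re-estimating means, covariances, mixing weights) until convergence.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

resp = gmm.predict_proba(X)   # E-step output: P(cluster | point)
labels = gmm.predict(X)       # hard assignment: most probable cluster
print(gmm.means_)             # estimated cluster means after convergence
```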
Kernel K-Means
Transform the data points into a feature space and then apply basic K-Means there
Homogeneous polynomial kernel: K(x, y) = (xᵀy)^p
Inhomogeneous polynomial kernel (c >= 0): K(x, y) = (xᵀy + c)^p
Gaussian Kernel: K(x, y) = exp(−‖x − y‖² / (2σ²))
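A minimal kernel K-Means sketch in NumPy using the Gaussian kernel above; distances to cluster centres are computed entirely through kernel values (the dataset, sigma, and number of clusters are illustrative assumptions):

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    # K[i, j] = exp(-||x_i - x_j||^2 / (2 * sigma^2))
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma**2))

def kernel_kmeans(K, k, n_iter=50, seed=0):
    n = K.shape[0]
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=n)          # random initial assignment
    for _ in range(n_iter):
        dist = np.zeros((n, k))
        for c in range(k):
            mask = labels == c
            nc = mask.sum()
            if nc == 0:
                dist[:, c] = np.inf
                continue
            # ||phi(x_i) - mu_c||^2 = K_ii - 2/|c| sum_j K_ij + 1/|c|^2 sum_jl K_jl
            dist[:, c] = (np.diag(K)
                          - 2 * K[:, mask].sum(axis=1) / nc
                          + K[np.ix_(mask, mask)].sum() / nc**2)
        new_labels = dist.argmin(axis=1)
        if np.array_equal(new_labels, labels):   # stop when assignments settle
            break
        labels = new_labels
    return labels

X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
print(kernel_kmeans(gaussian_kernel(X, sigma=2.0), k=2))
```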
Self-Organizing Feature Maps
Unlike sequential K-Means, the centres (called units or neurons) have a pre-determined topographic ordering relationship
make initial guesses for centroids
repeat
acquire the next example x
if centre m_i is closest to x, then update m_i by m_i ← m_i + η(x − m_i); update each m_j (neighbor of m_i) by m_j ← m_j + η_n(x − m_j), with η_n < η
end until centroids do not change or a threshold is exceeded
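A minimal sketch of the sequential update with units ordered on a 1-D chain (the grid size, learning rates, and nearest-neighbour topology are illustrative assumptions):

```python
import numpy as np

def train_sofm(X, n_units=5, eta=0.1, eta_nb=0.05, n_epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    # Initial guesses for the centres (units), arranged on a 1-D chain
    units = X[rng.choice(len(X), size=n_units, replace=False)].copy()
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            i = np.argmin(np.linalg.norm(units - x, axis=1))  # winning unit
            units[i] += eta * (x - units[i])                   # update winner
            for j in (i - 1, i + 1):                           # chain neighbours
                if 0 <= j < n_units:
                    units[j] += eta_nb * (x - units[j])        # smaller update
    return units

X = np.random.rand(200, 2)
print(train_sofm(X))
```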
Agglomerative Hierarchical Clustering
Start out with each data point forming its own cluster and gradually merge clusters until all points have been gathered together in one big cluster
compute the proximity matrix
repeat
merge the two closest clusters
update the proximity matrix to reflect the proximity between the new cluster and the original clusters
until only one cluster remains
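A minimal sketch of this procedure using SciPy's hierarchical clustering routines (the random data and the choice of 3 clusters are illustrative assumptions):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

X = np.random.rand(30, 2)

# pdist gives the initial proximity matrix (condensed form);
# linkage repeatedly merges the two closest clusters and records each merge.
Z = linkage(pdist(X), method='single')

# Cut the resulting tree to obtain a flat clustering with 3 clusters.
labels = fcluster(Z, t=3, criterion='maxclust')
print(labels)
```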
Different methods to define the distance between clusters (see the sketch after this list):
Single Linkage Clustering: the distance between groups is defined as that of the closest pair of data points
Pros: can handle non-elliptical shapes. Cons: sensitive to noise and outliers
Complete Linkage Clustering: the distance between two clusters is given by the distance between their most distant members
Pros: less susceptible to noise and outliers. Cons: tends to break large clusters; biased towards globular clusters
Group Average Clustering: the distance between two clusters is defined as the average of the distances between all pairs of records
Pros: less susceptible to noise and outliers. Cons: biased towards globular clusters
Centroid Clustering: the distance between two clusters is defined as the distance between the mean vectors of the two clusters
Median Clustering: the distance between two clusters is defined as the distance between the prototypes of the two clusters
The prototype is the data point itself when there is only one point in the cluster; when we merge two clusters, the new prototype is the midpoint of the two prototypes
Ward's Method: the similarity of two clusters is based on the increase in squared error when the two clusters are merged
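A sketch comparing the linkage criteria above via SciPy (the dataset is an illustrative assumption; each method changes only how the inter-cluster distance is defined):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

X = np.random.rand(40, 2)

for method in ['single', 'complete', 'average', 'centroid', 'median', 'ward']:
    Z = linkage(X, method=method)                    # merge history for this criterion
    labels = fcluster(Z, t=2, criterion='maxclust')  # cut into 2 clusters
    print(method, np.bincount(labels))               # cluster sizes per method
```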
DBSCAN
Core point: has more than a specified number of points (MinPts) within Eps
Border point: has fewer than MinPts within Eps, but is in the neighborhood of a core point
Noise point: neither a core point nor a border point
Label all points as core, border, or noise points
Eliminate noise points
Put an edge between all core points that are within Eps of each other
Make each group of connected core points into a separate cluster
Assign each border point to one of the clusters containing a core point from which the border point is directly density-reachable
Pros: resistant to noise, can handle clusters of different shapes and sizes
Cons: sensitive to varying densities, cannot handle high-dimensional data well
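A minimal sketch using scikit-learn's DBSCAN (the toy data and the eps and min_samples values are illustrative assumptions):

```python
import numpy as np
from sklearn.cluster import DBSCAN

X = np.vstack([np.random.randn(100, 2),               # dense blob
               np.random.randn(100, 2) + 6,           # second blob
               np.random.uniform(-4, 10, (20, 2))])   # scattered noise

# eps plays the role of Eps, min_samples the role of MinPts.
db = DBSCAN(eps=0.8, min_samples=5).fit(X)

labels = db.labels_                    # -1 marks noise points
print(np.unique(labels))
print(db.core_sample_indices_[:10])    # indices of some core points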
Cluster Evaluation
Silhouette Coefficient:
a: average distance of x_i to the other points in its own cluster
b: min (average distance of x_i to the points in another cluster), taken over all other clusters
s = (b − a) / max(a, b)
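A sketch computing the silhouette coefficient with scikit-learn (K-Means with k=3 on random data is an illustrative assumption):

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

X = np.random.rand(150, 2)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

print(silhouette_samples(X, labels)[:5])  # per-point s = (b - a) / max(a, b)
print(silhouette_score(X, labels))        # mean silhouette over all points
```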
Classification
Minimum Distance Classifier
The goal is to minimize the squared distance D_j(x) = ‖x − m_j‖² between the input x and the mean vector m_j of class j,
which is equivalent to maximizing the decision function for class j: d_j(x) = xᵀm_j − ½ m_jᵀm_j (the xᵀx term is the same for every class and can be dropped)
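A minimal minimum-distance classifier sketch in NumPy using the decision function above (the toy training data and labels are illustrative assumptions):

```python
import numpy as np

def fit_means(X, y):
    # One prototype per class: the mean vector of its training samples
    classes = np.unique(y)
    return classes, np.array([X[y == c].mean(axis=0) for c in classes])

def predict(X, classes, means):
    # Decision function d_j(x) = x^T m_j - 0.5 * m_j^T m_j; pick the largest
    scores = X @ means.T - 0.5 * np.sum(means**2, axis=1)
    return classes[np.argmax(scores, axis=1)]

X_train = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 4])
y_train = np.array([0] * 50 + [1] * 50)
classes, means = fit_means(X_train, y_train)
print(predict(np.array([[0.0, 0.0], [4.0, 4.0]]), classes, means))
```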