
Data Mining Cheat Sheet

April 16, 2021

Introduction

Definition 1: The nontrivial extraction of implicit, previously unknown, and potentially useful information from data.

Definition 2: Data mining is the process of automatically discovering useful information in large data repositories.

Directed Knowledge Discovery: Goal-oriented. Example: classification

Undirected Knowledge Discovery: Bottom-up approach, no target field. Example: clustering

Data Mining Tasks:

Classification, Regression, Clustering, Association Rule Discovery, Anomaly Detection

Data Preprocessing

4 types of attributes:

Nominal, Ordinal, Interval and Ratio

Data quality examples:

Missing values, duplication and inconsistent data, noise and outliers

Methods for improving data quality:

Sampling, aggregation, dimensionality reduction, feature subset selection, feature creation, discretization and binarization, attribute transformation

Clustering

Similarity: often falls in the range [0,1] or [-1,1]

Dissimilarity: upper bound can be 1 or $\infty$

Different types of similarities

Euclidean Distance ($L_2$ norm): $d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{k=1}^{n} (x_k - y_k)^2}$

City Block ($L_1$ norm): $d(\mathbf{x}, \mathbf{y}) = \sum_{k=1}^{n} |x_k - y_k|$

Minkowski Distance ($L_r$ norm): $d(\mathbf{x}, \mathbf{y}) = \left( \sum_{k=1}^{n} |x_k - y_k|^r \right)^{1/r}$

Supremum Distance ($L_\infty$ norm): $d(\mathbf{x}, \mathbf{y}) = \max_k |x_k - y_k|$

Cosine Similarity: $\cos(\mathbf{x}, \mathbf{y}) = \dfrac{\mathbf{x} \cdot \mathbf{y}}{\|\mathbf{x}\| \, \|\mathbf{y}\|}$

Pearson's Correlation: $\rho(\mathbf{x}, \mathbf{y}) = \dfrac{\mathrm{cov}(\mathbf{x}, \mathbf{y})}{\sigma_{\mathbf{x}} \, \sigma_{\mathbf{y}}}$

Mahalanobis Distance: $d(\mathbf{x}, \mathbf{y}) = \sqrt{(\mathbf{x} - \mathbf{y})^{T} \Sigma^{-1} (\mathbf{x} - \mathbf{y})}$
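As a quick reference, here is a minimal NumPy sketch of the measures above; the example vectors, and the small sample used to estimate the covariance for the Mahalanobis distance, are made up purely for illustration.

```python
import numpy as np

# Two example vectors (hypothetical data, just for illustration)
x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 0.0, 4.0])

euclidean = np.sqrt(np.sum((x - y) ** 2))            # L2 norm
city_block = np.sum(np.abs(x - y))                   # L1 norm
r = 3
minkowski = np.sum(np.abs(x - y) ** r) ** (1.0 / r)  # general Lr norm
supremum = np.max(np.abs(x - y))                     # L-infinity norm

cosine = np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
pearson = np.corrcoef(x, y)[0, 1]                    # Pearson's correlation

# Mahalanobis distance needs a covariance matrix estimated from a data set;
# here we assume a small sample X purely for illustration.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 0.0, 4.0],
              [0.5, 1.5, 2.5],
              [3.0, 1.0, 0.0]])
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = x - y
mahalanobis = np.sqrt(diff @ cov_inv @ diff)
```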

Sequential K-Means

  1. make the initial guess for the means $m_1, \dots, m_k$

  2. set the counts $n_1, \dots, n_k$ to 0

  3. until interrupted

    1. acquire the next example $x$
    2. if $m_i$ is closest to $x$, then increase $n_i$ and update $m_i$ by $m_i \leftarrow m_i + \frac{1}{n_i}(x - m_i)$
  4. end

Forgetful Sequential K-Means

Replace the counts with a constant $a$ and update $m_i$ by $m_i \leftarrow m_i + a(x - m_i)$ (a code sketch covering both the standard and the forgetful update follows below)
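A minimal NumPy sketch of both updates, assuming Euclidean distance and a finite stream of points standing in for "until interrupted"; the function and parameter names are my own.

```python
import numpy as np

def sequential_kmeans(stream, init_means, forget=None):
    """Sequential K-Means over a stream of points.

    If `forget` is None, the standard 1/n_i update is used;
    otherwise `forget` is the constant a of the forgetful variant.
    """
    means = np.array(init_means, dtype=float)  # initial guesses for the means
    counts = np.zeros(len(means))              # counts n_i, all set to 0

    for x in stream:                           # until interrupted / stream ends
        x = np.asarray(x, dtype=float)
        i = np.argmin(np.linalg.norm(means - x, axis=1))  # closest mean m_i
        if forget is None:
            counts[i] += 1                                # increase n_i
            means[i] += (x - means[i]) / counts[i]        # m_i <- m_i + (1/n_i)(x - m_i)
        else:
            means[i] += forget * (x - means[i])           # m_i <- m_i + a(x - m_i)
    return means

# Example usage with a hypothetical 2-D stream of two blobs
rng = np.random.default_rng(0)
data = rng.normal(size=(200, 2)) + np.array([[3, 3]] * 100 + [[-3, -3]] * 100)
print(sequential_kmeans(data, init_means=[[1, 1], [-1, -1]]))
```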

Expectation-Maximization Algorithm (Gaussian Mixture Models)

Initialize the model parameters

Until there is no change in any parameters

Expectation Step: calculate the probability that each object belongs to each cluster

Maximization Step: find the new estimates of the parameters that maximize the expected likelihood

end until
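A short sketch using scikit-learn's GaussianMixture (one possible implementation, not prescribed by the cheat sheet); it runs the E and M steps internally until the parameters stop changing.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical 2-D data drawn from two Gaussian blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 1.0, size=(100, 2)),
               rng.normal([5, 5], 1.0, size=(100, 2))])

# GaussianMixture runs the EM loop: the E-step computes cluster membership
# probabilities, the M-step re-estimates means, covariances and weights.
gmm = GaussianMixture(n_components=2, random_state=0).fit(X)

labels = gmm.predict(X)        # hard cluster assignments
probs = gmm.predict_proba(X)   # E-step style soft memberships
print(gmm.means_, gmm.weights_)
```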

Kernel K-Means

Transform each data point into a feature space and then apply the basic K-Means

Homogeneous polynomial kernel: $K(\mathbf{x}, \mathbf{y}) = (\mathbf{x} \cdot \mathbf{y})^{d}$

Inhomogeneous polynomial kernel ($c \geq 0$): $K(\mathbf{x}, \mathbf{y}) = (\mathbf{x} \cdot \mathbf{y} + c)^{d}$

Gaussian Kernel: $K(\mathbf{x}, \mathbf{y}) = \exp\left( -\dfrac{\|\mathbf{x} - \mathbf{y}\|^2}{2\sigma^2} \right)$
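A minimal sketch of the three kernels, with the degree d, offset c and width sigma chosen arbitrarily for illustration. Kernel K-Means evaluates distances in the feature space through these kernel values (the kernel trick) rather than computing the feature mapping explicitly.

```python
import numpy as np

def homogeneous_poly_kernel(x, y, d=2):
    # K(x, y) = (x . y)^d
    return np.dot(x, y) ** d

def inhomogeneous_poly_kernel(x, y, d=2, c=1.0):
    # K(x, y) = (x . y + c)^d, with c >= 0
    return (np.dot(x, y) + c) ** d

def gaussian_kernel(x, y, sigma=1.0):
    # K(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
print(homogeneous_poly_kernel(x, y),
      inhomogeneous_poly_kernel(x, y),
      gaussian_kernel(x, y))
```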

Self-Organizing Feature Maps

Unlike sequential K-Means, the centres (called units or neurons) have a pre-determined topographic ordering relationship

  1. make initial guesses for the centroids $m_1, \dots, m_k$

  2. repeat

    1. acquire the next example $x$
    2. if $m_i$ is closest to $x$, then update $m_i$ by $m_i \leftarrow m_i + \alpha(x - m_i)$; update $m_j$ (a neighbor of $m_i$) by $m_j \leftarrow m_j + \beta(x - m_j)$
  3. end until centroids do not change or a threshold is exceeded
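A sketch of a 1-D map where the units' topographic ordering is simply their index and only the immediate neighbors of the winner are updated; the learning rates alpha and beta are assumptions.

```python
import numpy as np

def sofm_1d(stream, init_units, alpha=0.1, beta=0.05, epochs=5):
    """1-D self-organizing feature map: units have a fixed linear ordering,
    and the winner's immediate neighbors are also pulled towards each input."""
    units = np.array(init_units, dtype=float)
    for _ in range(epochs):                    # repeat
        for x in stream:                       # acquire the next example
            x = np.asarray(x, dtype=float)
            i = np.argmin(np.linalg.norm(units - x, axis=1))  # winning unit
            units[i] += alpha * (x - units[i])                # update the winner
            for j in (i - 1, i + 1):                          # topographic neighbors
                if 0 <= j < len(units):
                    units[j] += beta * (x - units[j])
    return units

rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, size=(100, 2))
print(sofm_1d(data, init_units=rng.uniform(-1, 1, size=(5, 2))))
```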

Agglomerative Hierarchical Clustering

Start out with each data point forming its own cluster and gradually merge clusters until all points have been gathered together in one big cluster

  1. compute the proximity matrix

  2. repeat

    1. merge the two closest clusters
    2. update the proximity matrix to reflect the proximity between the new cluster and the original clusters
  3. until only one cluster remains

Different methods to define the distances between clusters:

Single Linkage Clustering: distance between groups is defined as that of the closest pair of data

Pros: can handle non-elliptical shapes. Cons: sensitive to noise and outliers

Complete Linkage Clustering: distance between two clusters is given by the distance between their most distant members

Pros: less susceptible to noise and outliers. Cons: tends to break large clusters; biased towards globular clusters

Group Average Clustering: distance between two clusters is defined as the average of the distances between all pairs of records

Pros: less susceptible to noise and outliers. Cons: biased towards globular clusters

Centroid Clustering: distance between two clusters is defined as the distance between the mean vectors of the two clusters

Median Clustering: distance between two clusters is defined as the distance between the prototypes of the two clusters

the prototype is the data point itself when there is only one point in the cluster; when we merge two clusters, the new prototype is the midpoint of the two prototypes

Ward's Method: Similarity of two clusters is based on the increase in squared error when two clusters are merged
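The linkage definitions above map onto the `method` argument of SciPy's hierarchical clustering; below is a minimal sketch (the sample data and the 2-cluster cut are assumptions).

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical 2-D points forming two blobs
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.3, size=(20, 2)),
               rng.normal([3, 3], 0.3, size=(20, 2))])

for method in ["single", "complete", "average", "centroid", "median", "ward"]:
    # linkage computes the proximity matrix and repeatedly merges the two
    # closest clusters until only one remains, recording the merge tree
    Z = linkage(X, method=method)
    labels = fcluster(Z, t=2, criterion="maxclust")  # cut the tree into 2 clusters
    print(method, np.bincount(labels))
```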

DBSCAN

Core point: has more than a specified number of points (MinPts) within Eps

Border point: has fewer than MinPts within Eps, but is in the neighborhood of a core point

Noise point: neither a core point nor a border point

  1. Label all points as core, border, or noise points
  2. Eliminate noise points
  3. Put an edge between all core points that are within Eps of each other
  4. Make each group of connected core points into a separate cluster
  5. Assign each border point to one of the clusters containing a core point from which the border point is directly density-reachable

Pros: resistant to noise, can handle clusters of different shapes and sizes

Cons: sensitive to varying densities, cannot handle high-dimensional data well
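A minimal sketch with scikit-learn's DBSCAN, where `eps` plays the role of Eps and `min_samples` the role of MinPts; the data and parameter values are made up.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Two dense blobs plus a few scattered noise points (hypothetical data)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.2, size=(50, 2)),
               rng.normal([3, 3], 0.2, size=(50, 2)),
               rng.uniform(-2, 5, size=(5, 2))])

# eps plays the role of Eps, min_samples the role of MinPts
labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
print("clusters:", set(labels))  # noise points are labelled -1
```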

Cluster Evaluation

Silhouette Coefficient: $s_i = \dfrac{b_i - a_i}{\max(a_i, b_i)}$

$a_i$: average distance of point $i$ to the points in its own cluster

$b_i$: $\min$ (average distance of point $i$ to points in another cluster)
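A short sketch computing the coefficient with scikit-learn for a K-Means clustering; the data and the choice of K-Means are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, silhouette_samples

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([0, 0], 0.5, size=(50, 2)),
               rng.normal([4, 4], 0.5, size=(50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(silhouette_score(X, labels))        # mean of (b_i - a_i) / max(a_i, b_i)
print(silhouette_samples(X, labels)[:5])  # per-point coefficients s_i
```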

Classification

Minimum Distance Classifier

The goal is to minimize $\|\mathbf{x} - \mathbf{m}_j\|$

which is equivalent to maximizing the decision function for class $j$: $d_j(\mathbf{x}) = \mathbf{x}^{T}\mathbf{m}_j - \frac{1}{2}\mathbf{m}_j^{T}\mathbf{m}_j$
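A minimal NumPy sketch of this decision rule; the class means are assumed to be given (in practice they are estimated from labelled training data).

```python
import numpy as np

def min_distance_classify(x, means):
    """Assign x to the class maximizing d_j(x) = x . m_j - 0.5 * m_j . m_j,
    which is equivalent to choosing the closest class mean."""
    means = np.asarray(means, dtype=float)
    scores = means @ x - 0.5 * np.sum(means ** 2, axis=1)
    return int(np.argmax(scores))

# Hypothetical class means for two classes
means = [[0.0, 0.0], [3.0, 3.0]]
print(min_distance_classify(np.array([2.5, 2.0]), means))  # -> 1
```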

Linear Discriminant Classifier

$\mathbf{x}$ is assigned to one class if $d(\mathbf{x}) = \mathbf{w}^{T}\mathbf{x} + w_0 > 0$ and to the other class if $d(\mathbf{x}) < 0$

k Nearest Neighbor

lazy learner: does not build a model explicitly

can have arbitrary decision boundaries
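A short scikit-learn sketch (an assumed implementation): `fit` only stores the training data, and the neighbor search happens at prediction time.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical 2-D training data for two classes
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal([0, 0], 1.0, size=(50, 2)),
                     rng.normal([4, 4], 1.0, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# fit() only stores the data (lazy learning); neighbors are searched at predict time
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(knn.predict([[3.5, 3.0], [-1.0, 0.5]]))
```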

Bayes Classifier

A point $\mathbf{x}$ belongs to class $\omega_i$ if $P(\omega_i \mid \mathbf{x})$ is the largest

Normally we are given $P(\mathbf{x} \mid \omega_i)$ and $P(\omega_i)$ to compute $P(\omega_i \mid \mathbf{x})$

the decision function is $d_j(\mathbf{x}) = P(\mathbf{x} \mid \omega_j)\, P(\omega_j)$

How to estimate $P(\mathbf{x} \mid \omega_j)$:
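One common choice, used here purely as an illustration, is to assume per-feature Gaussian class-conditional densities (scikit-learn's GaussianNB); the classifier then picks the class maximizing $P(\mathbf{x} \mid \omega_j)\, P(\omega_j)$.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data for two classes
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal([0, 0], 1.0, size=(50, 2)),
                     rng.normal([4, 4], 1.0, size=(50, 2))])
y_train = np.array([0] * 50 + [1] * 50)

# GaussianNB estimates P(x | w_j) with per-feature Gaussians and P(w_j) from
# class frequencies, then predicts the class maximizing P(x | w_j) P(w_j).
clf = GaussianNB().fit(X_train, y_train)
print(clf.predict([[3.0, 3.5]]))        # predicted class
print(clf.predict_proba([[3.0, 3.5]]))  # posterior P(w_j | x)
```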

 
