G06V 10/762 using clustering, e.g. of similar faces in social networks
Introduced: January 2022
Full Title
Arrangements for image or video recognition or understanding > using pattern recognition or machine learning > using clustering, e.g. of similar faces in social networks
Classification Context
- Section: PHYSICS
- Class: COMPUTING; CALCULATING OR COUNTING
- Subclass: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
Scope Notes
Glossary:
- AFC: adaptive fuzzy clustering.
- ACE (alternating cluster estimation): when a partitioning with a specific shape is to be obtained, the user can define membership functions U(V,X) and prototype functions V(U,X). The clustering is then estimated by alternating between the two functions until convergence: $U_t = U(V_{t-1}, X)$, followed by $V_t = V(U_t, X)$.
- AO: alternating optimisation.
- CCM: compatible cluster merging.
- Clustering by graph partitioning: a weighted graph is partitioned into disjoint subgraphs by removing a set of edges (the cut). The basic objective function is to minimise the size of the cut, calculated as the sum of the weights of all edges belonging to the cut.
- Compatible cluster merging (CCM): starts with a sufficiently large number of clusters and successively reduces that number by merging similar (compatible) clusters with respect to criteria on the cluster parameters, e.g. requiring that the set of eigenvectors of the $i$-th cluster's covariance matrix be nearly parallel to that of the $j$-th cluster and that the cluster centres be close.
- DBSCAN: density-based spatial clustering of applications with noise; a non-parametric clustering algorithm which does not require the number of clusters to be specified in advance.
- FACE: fast-ACE.
- FCQS (fuzzy C-quadric shells): in the case of quadric-shaped clusters, FCQS can be employed to recover them. The clusters are estimated by minimising a cost function of the form $J(B, U) = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^m \, d^2(x_k, \beta_i)$, where $d(x_k, \beta_i)$ is the distance of point $x_k$ from the $i$-th quadric shell with parameters $\beta_i$.
- FCSS: fuzzy C-spherical shells.
- FCV: fuzzy C-varieties.
- FHV: fuzzy hypervolume.
- Fuzzy c-means clustering (see the sketch after this glossary):
  - Choose a number of clusters.
  - Randomly assign to each point coefficients (memberships) for being in the clusters.
  - Repeat until the algorithm has converged: compute the centroid of each cluster as the membership-weighted mean, $v_j = \sum_k u_{jk}^m x_k / \sum_k u_{jk}^m$; then, for each point $x_k$, recompute its coefficients of being in the clusters, $u_{jk} = \left( \sum_l \left( \|x_k - v_j\| / \|x_k - v_l\| \right)^{2/(m-1)} \right)^{-1}$.
- Gustafson-Kessel (GK): the GK algorithm associates each cluster with the cluster centre and its covariance. The main feature of GK clustering is the local adaptation of the distance matrix in order to identify ellipsoidal clusters. The objective function of GK is $J(U, V) = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^m \, d_{ik}^2$, where $d_{ik}^2 = (x_k - v_i)^\top A_i (x_k - v_i)$, $u_{ik}$ is the membership of point $x_k$ in cluster $i$, $v_i$ is the cluster centre, and the norm-inducing matrix $A_i$ is derived from the fuzzy covariance matrix of the $i$-th cluster.
- HCM: hard c-means.
- K-means clustering: partitions the data into k clusters by iteratively assigning each point to the nearest centroid and recomputing each centroid as the mean of its assigned points.
- KNN: K-nearest neighbour; a classification algorithm which, for a given data sample, chooses the k most similar samples from a training set, retrieves their respective class labels, and assigns a class label to the data sample by majority decision (see the sketch after this glossary). Variant: 1NN, which is KNN for k = 1.
- LVQ: learning vector quantisation.
- PAM (partitioning around medoids): the most common realisation of k-medoid type algorithms (see the sketch after this glossary):
  1. Initialise: randomly select k of the n data points as the medoids.
  2. Associate each data point with the closest medoid ("closest" usually in the Euclidean or Manhattan distance sense).
  3. For each medoid m and each non-medoid data point x, swap m and x and compute the total cost of the configuration.
  4. Select the configuration with the lowest cost.
  5. Repeat steps 2 to 4 until the medoids no longer change.
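
The fuzzy c-means steps listed in the glossary can be illustrated in code. The following is a minimal sketch, assuming the standard update formulas quoted above; the function name fuzzy_c_means, the fuzziness exponent m and the convergence tolerance are illustrative choices, not part of the classification scheme.

```python
# Minimal sketch of fuzzy c-means using the membership-weighted centroid
# and inverse-distance membership updates quoted in the glossary.
import numpy as np

def fuzzy_c_means(X, c, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """X: (n, d) data; c: number of clusters; m > 1: fuzziness exponent."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Steps 1-2: choose c clusters, randomly assign membership coefficients
    # (columns sum to 1 over the clusters).
    U = rng.random((c, n))
    U /= U.sum(axis=0)
    for _ in range(n_iter):
        # Centroids: membership-weighted means of the data points.
        W = U ** m
        V = (W @ X) / W.sum(axis=1, keepdims=True)
        # Memberships: inversely proportional to relative distances.
        d = np.linalg.norm(X[None, :, :] - V[:, None, :], axis=2)  # (c, n)
        d = np.fmax(d, 1e-12)  # guard against division by zero
        U_new = 1.0 / (d ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=0)
        converged = np.max(np.abs(U_new - U)) < tol
        U = U_new
        if converged:
            break
    return V, U  # cluster centres and membership matrix
```

For hard c-means (HCM, i.e. k-means), the memberships degenerate to 0/1 assignments of each point to its nearest centroid.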
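
Similarly, a minimal sketch of the KNN majority decision described in the glossary; knn_classify and its parameters are illustrative names rather than any particular library's API.

```python
# Minimal sketch of KNN classification: find the k nearest training
# samples and assign the majority class label.
from collections import Counter
import numpy as np

def knn_classify(X_train, y_train, x, k=5):
    """Assign x the majority label among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest
    votes = Counter(y_train[i] for i in nearest)  # label frequencies
    return votes.most_common(1)[0][0]             # majority decision
```

The 1NN variant corresponds to calling knn_classify(X_train, y_train, x, k=1).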
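
Finally, a sketch of PAM steps 1 to 5 as a greedy swap search over a precomputed pairwise distance matrix. The names pam and total_cost are illustrative, and production implementations typically use the more efficient BUILD/SWAP phases rather than this brute-force search.

```python
# Minimal sketch of partitioning around medoids (PAM), steps 1-5.
import numpy as np

def pam(D, k, seed=0):
    """D: (n, n) pairwise distances; returns medoid indices and labels."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    medoids = list(rng.choice(n, size=k, replace=False))  # step 1

    def total_cost(meds):
        # Step 2: associate each point with its closest medoid and sum
        # the resulting point-to-medoid distances.
        return D[:, meds].min(axis=1).sum()

    cost = total_cost(medoids)
    improved = True
    while improved:                                # step 5: repeat
        improved = False
        for i in range(k):                         # step 3: each medoid m
            for x in range(n):                     # each non-medoid x
                if x in medoids:
                    continue
                candidate = medoids.copy()
                candidate[i] = x                   # swap m and x
                c = total_cost(candidate)
                if c < cost:                       # step 4: keep lowest cost
                    medoids, cost = candidate, c
                    improved = True
    labels = D[:, medoids].argmin(axis=1)
    return medoids, labels
```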