Clustering - different size clusters

I have a fairly large data matrix that I want to cluster on its first column, which can be separated into six clusters / categories of different sizes. I know the k-means clustering algorithm takes the number of clusters as input and then determines the clusters iteratively. Is there anything in MATLAB that would be suitable for my task?

 Accepted Answer

Yes. silhouette() lets you graphically judge the quality of the clustering produced by kmeans(), and evalclusters() lets you evaluate the quality of the clustering achieved over a range of k values, so you can pick the right k if you don't know it for certain.
% Try values of k 2 through 5
clustev = evalclusters(X, 'kmeans', 'silhouette', 'KList', 2:5);
% Get the best value for k:
kBest = clustev.OptimalK

6 Comments

Thank you, Image Analyst. I also wanted to ask whether you have any experience validating data that has already been clustered. I am reading a lot of conflicting material about how this should be approached. I was hoping to produce p-values for the clusters to say whether they are real or not, but I am not sure if that would be a sensible approach.
An observation's silhouette value is a normalized (between -1 and 1) measure of how close the observation is to others in the same cluster, compared to observations in other clusters. Looking at the shape of the curves it generates can tell you how good the clusters are.
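A minimal sketch of how silhouette() can be applied to kmeans() output (the variable names here are illustrative; X stands in for your data matrix):

```matlab
% Cluster the data into k groups, then inspect the silhouette plot.
% X is assumed to be an n-by-p data matrix; k is the chosen cluster count.
k = 6;
idx = kmeans(X, k);           % cluster assignment, one per row of X
figure;
[s, h] = silhouette(X, idx);  % s holds the per-observation silhouette values
meanSil = mean(s)             % rough overall score; closer to 1 is better
```

Clusters whose silhouette values are mostly high and positive are well separated; clusters with many values near zero or negative overlap with their neighbors.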
You can also use hierarchical clustering with linkage(), dendrogram(), and cluster() to see how close the various clusters are to each other.
Z = linkage(X);
dendrogram(Z);
You can divide the observations into groups according to the linkage distances Z:
grp = cluster(Z, 'maxclust', 6);
With the 'maxclust' criterion, the observations are assigned to no more than the given number of groups.
To examine the quality of the hierarchical structure, you can compute the cophenetic correlation coefficient, which quantifies how accurately the tree represents the distances (dissimilarities) between the observations. The cophenet() function requires the linkage() distances and the pairwise distances between the points as input arguments:
Y = pdist(X);
C = cophenet(Z, Y);
Values of C close to 1 indicate a high quality solution (similar to a linear correlation coefficient). I'm guessing this is what you would like.
Bran on 4 Nov 2015
Edited: Bran on 4 Nov 2015
Hi,
Thank you for the suggestions. I just wanted to note that the data has already been separated into groups of different sizes, and in some cases the groups were assigned rather than produced by a clustering algorithm. As a result, I was thinking hypothesis testing might be appropriate. I am currently looking at the linkage values etc. for my clusters. Also, since in some cases it is unclear whether there is a cluster at all even though the points have been grouped together, I was wondering whether it would be OK to do a ttest(). For example, I was considering testing whether the values in a group are simply random, or whether they really do differ from the normally distributed data, and producing a p-value that way. The other method I have worked with is generating the p-value via Monte Carlo sampling.
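For reference, the Monte Carlo idea mentioned above could be sketched as a permutation-style test: compare a group's statistic against the same statistic computed on random subsets of the pooled data. This is only a sketch under that assumption; groupVals and allVals are illustrative names, not from this thread:

```matlab
% Monte Carlo / permutation test: is the mean of one group more extreme
% than expected if its members were drawn at random from the pooled data?
nPerm    = 10000;
obsStat  = mean(groupVals);           % groupVals: values in the group of interest
n        = numel(groupVals);
permStat = zeros(nPerm, 1);
for i = 1:nPerm
    samp = allVals(randperm(numel(allVals), n));  % random subset of pooled data
    permStat(i) = mean(samp);
end
% Two-sided p-value: fraction of resampled statistics at least as extreme
p = mean(abs(permStat - mean(allVals)) >= abs(obsStat - mean(allVals)));
```

A small p suggests the group's mean is unlikely under random assignment, which is one way to formalize "is this cluster real?".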
No - I don't believe so. I'm not a Ph.D. statistician, but I'm pretty sure you would not use ttest2() to create your model. If your scattered points are normally distributed, the function you want is fitcnb(), which creates a Naive Bayes classification model. Naive Bayes was one of the first formal classification algorithms and remains one of the most popular, primarily because the classifier is easy to construct and its output is interpretable. Naive Bayes classification models are based on Bayes' rule of conditional probability. During the training step, the model estimates the parameters of a normal probability distribution, assuming the features are independent of one another within each class.
nbModel = fitcnb(xTrain, yTrain);
To estimate the class of some non-training data:
yPredicted = predict(nbModel, xTest);
To compare data with a standard probability distribution, a probability plot can be used as a simple visual check:
probplot('normal', xTrain);
If the points fall close to the reference line, the data are approximately normal; if not, they are not.
Also look up jbtest(), lillietest(), and kstest() - they all deal with testing data for normality.
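For example, each of those tests returns a decision flag and a p-value (a sketch; xTrain stands in for your data vector, and note that kstest() compares against a standard normal, so the data should be standardized first):

```matlab
% Each test returns h = 1 if the null hypothesis of normality is rejected
% at the 5% significance level, along with an approximate p-value.
[hL, pL] = lillietest(xTrain);      % Lilliefors test
[hJ, pJ] = jbtest(xTrain);          % Jarque-Bera test
[hK, pK] = kstest(zscore(xTrain));  % one-sample K-S test vs. standard normal
```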
Thank you very much Image Analyst for all your help and advice. I've been looking at the various features offered by MATLAB and it is very useful. Just a final quick question, does MATLAB have a Mann-Whitney test that also accounts for clusters? For example comparing the distribution of two groups that may have several clusters within them?
This is all I could find:
p = ranksum(x,y) returns the p-value of a two-sided Wilcoxon rank sum test. ranksum tests the null hypothesis that data in x and y are samples from continuous distributions with equal medians, against the alternative that they are not. The test assumes that the two samples are independent. x and y can have different lengths. This test is equivalent to a Mann-Whitney U-test.
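A minimal usage sketch, where x and y are the two independent samples:

```matlab
% Mann-Whitney / Wilcoxon rank sum test on two independent samples
[p, h, stats] = ranksum(x, y);  % h = 1: equal-medians null rejected at 5%
```

Note that ranksum() itself treats the observations as independent; it does not account for clustering within each group, so sub-clusters would need to be handled separately (for example, by testing summary statistics per cluster).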


More Answers (0)
