This article continues our previous topic, ‘Unsupervised Machine Learning’. Today I’m giving you another powerful tool on this topic: ‘k-means Clustering‘. The work in this article builds on the WHO data set featured in ‘Machine Learning: Unsupervised – Hierarchical Clustering and Bootstrapping’. It demonstrates implementing k-means clustering and bootstrapping to make sure that the algorithm and the clusters it forms hold up. The bootstrapping will also evaluate whether we have the right number of clusters and, if not, what ‘k’ (the number of clusters) should be, based on the ‘Calinski-Harabasz Index’ and ‘Average Silhouette Width (ASW)’. Further, this report looks at how closely the k-means results match the hierarchical clusters formed in the previous post. This is important because, no matter which algorithm I use to compute clusters, the relationships within the data remain the same. This indicates that the clusters should come out more or less the same (with minimal trade-offs) irrespective of the algorithm used, since the data and the relationships within it do not change. Excited?
2. The Datum Promise
Before we dive straight into business, ‘The Datum’ is overwhelmed by the response received for its first blog, ‘Machine Learning: Unsupervised – Hierarchical Clustering and Bootstrapping’. The blog was viewed in 13 countries: the United States, India, Germany, the United Kingdom, Mexico, Jordan, the Netherlands, Georgia, Nigeria, Brazil, Chile, Poland, and France. ‘The Datum’ extends its gratitude for such a welcome to its very first article. Datum promises to continue its work with the same energy and sincerity, with a single motivation: giving you practical Data Science and Machine Learning tools, with algorithms, code, and data, so that you can build upon the strong base that Datum provides. Data science is booming, knocking hard, and set for a bright future, and The Datum is striving to make your data science experience a cakewalk. In return, to keep us going, we highly encourage you to spend a few minutes to like, share, and subscribe to ‘The Datum’ blog space so that we can better understand your needs and come back better each time. Datum is ready to provide support if any errors occur during algorithm implementation/execution; please feel free to write back. You keep us going, and we hope for two-way communication so that we can give you more in Data Science.
From the feedback on the previous article, it was unclear which tools I have been using to execute the algorithms. I execute my algorithms in R, a statistical language used in Data Science for programming and implementing algorithms. R-Studio is an IDE that runs on the R language, and I use it mainly for its better interface and GUI. To set up R-Studio you first need to download R, and then you can install R-Studio on top of it. This link can help get you through (ping me if you have any further difficulties): https://www.andrewheiss.com/blog/2012/04/17/install-r-rstudio-r-commander-windows-osx/
3. The Audience
For our audience, we expect only beginner-level knowledge and use of the R language. We try to elaborate and explain all the listings clearly and as simply as possible, so do not worry about the listings used to implement the algorithms. The Datum will try to cover the major practical algorithms used in Data Science and Machine Learning to empower you in this field. We just need your interest, your willingness to learn, and your continuous support.
4. Content Briefing
Following are the contents covered in this article on k-means clustering:
- k-means theoretical implementation
- Implementing k-means algorithm in R
- Addressing optimal ‘k’ value using Calinski-Harabasz Index and Average Silhouette Width Index
- Bootstrapping k-means clusters for its validation and confidence
5. The Model
I have used the R language to code the clustering on World Health Organization (WHO) data containing 35 countries, limited to the region of ‘America’. Following are the data fields:
- $Country: contains names of the countries of region America (datatype: string)
- $Population: contains population count of respective countries (datatype: integer)
- $Under15: count of population under 15 years of age (datatype: number)
- $Over60: count of population over 60 years of age (datatype: number)
- $FertilityRate: fertility rate of the respective countries (datatype: number)
- $LifeExpectancy: Life expectancy of people of respective countries (datatype: integer)
- $CellularSubscribers: Count of people of respective countries possessing cellular subscriptions (datatype: number)
- $LiteracyRate: Rate of Literacy of respective countries (datatype: number)
- $GNI: Gross National Income of respective countries (datatype: number)
5.2. K Means Clustering
K-means is another popular clustering algorithm, in which we categorize data by forming clusters. Like all clustering algorithms, k-means finds wide application in document classification, delivery and store optimization, identifying crime localities, customer segmentation, etc. So the question arises: how is k-means different, and why do we need it? K-means is one of the simplest algorithms to implement when your data is all numeric and the distance metric used is squared Euclidean. It is a highly ad hoc algorithm for clustering, with only one major disadvantage. Any guess?
5.3. Theoretical Implementation – K Means Clustering
- Select a random set of ‘k’ cluster centers for your data. (This is the disadvantage of k-means clustering: we need to pre-define the number of clusters, which can lead to unwanted results.)
- Assign every data point to its nearest center, forming our clusters
- Now, for the clusters formed, we compute the new centers of the clusters
- Further, we reassign the data points according to the new cluster centers we just computed in step 3.
- Steps 3 and 4 are iterative and are repeated until no data points trade between clusters any more, or until the maximum number of iterations that may be set is reached.
All of the above steps for building a k-means model are illustrated clearly in the figure below:
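The steps above can also be sketched directly in base R. The following is a small illustrative toy version of the loop (not the production kmeans() function used later in this article); it picks random centers, assigns points, recomputes centers, and repeats until the centers stop moving. It does not handle empty clusters, which real implementations must.

```r
# Toy k-means sketch of steps 1-5 above (illustrative only; does not
# handle empty clusters). 'x' is a numeric matrix, 'k' the cluster count.
simple_kmeans <- function(x, k, max_iter = 100) {
  x <- as.matrix(x)
  # Step 1: pick k random rows of the data as initial centers
  centers <- x[sample(nrow(x), k), , drop = FALSE]
  cl <- rep(1L, nrow(x))
  for (i in seq_len(max_iter)) {
    # Step 2: assign each point to its nearest center
    # (Euclidean distance; the argmin is the same as for squared Euclidean)
    d <- as.matrix(dist(rbind(centers, x)))[-(1:k), 1:k]
    cl <- max.col(-d)
    # Step 3: recompute each cluster's center as the mean of its points
    new_centers <- t(sapply(1:k, function(j)
      colMeans(x[cl == j, , drop = FALSE])))
    # Steps 4-5: iterate until no center moves any more
    if (all(abs(new_centers - centers) < 1e-9)) break
    centers <- new_centers
  }
  list(cluster = cl, centers = centers)
}
```

Running it on two well-separated blobs of points will recover the two groups, which is all this sketch is meant to show.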
5.4. Practical Implementation
It is finally time to implement k-means clustering using the R language. The function to run k-means clustering in R is kmeans(). The function returns cluster attributes that include the cluster labels, the cluster centers, the total sum of squares, the total WSS (within-cluster sum of squares) and the total BSS (between-cluster sum of squares). k-means does not have a unique stopping point, so it can be fairly unstable: the final clusters depend on the initial cluster centers. To nullify this effect we can run k-means several times with different random starts and select the result with the least WSS. And the best part? The kmeans() function can manage this on its own; unless otherwise stated it defaults to a single random start.
My model for k-means clustering is built on the ‘WHO’ health data limited to the American region, containing 35 countries. This is the same data I used for implementing Hierarchical Clustering. This will also help us see whether using different algorithms on the same data produces the same clusters or different ones. Thinking logically, if the data is the same, the relationships between the data points remain the same, indicating that the weights and distances of the data points in the plane will also be the same. So irrespective of the algorithm we use, we should get the same set of clusters. Will we? We’ll find out very soon. So get ready with your R-Studio and make sure you have the data downloaded from my website. Here is the data download link: https://talktomedata.files.wordpress.com/2019/05/who3.xlsx
5.5. R Algorithm for k-means
5.5.1 Reading and Scaling Data
First things first, I load the WHO3.csv file into the object ‘data’ using the function read.csv(). The arguments are ‘header = T’, telling R that the file contains headers, and, since it is a CSV (comma-separated values) file, ‘sep = “,”‘.
Moving ahead, I scale the data, which is a really important step for getting good, flawless clusters. I scale the data to mean 0 and standard deviation 1 for all data columns except the first one, ‘Country’. First, I create an object ‘scaling.variables’ holding all the columns except the first, as mentioned. Then I use the scale() function to scale these isolated columns. The scale function attaches two attributes, ‘scaled:center’ and ‘scaled:scale’, which I save into separate objects; these come in handy if we ever need to undo the scaling.
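A minimal sketch of this reading-and-scaling step might look like the following. It assumes the downloaded data has been saved locally as ‘WHO3.csv’ (the download link serves an .xlsx, so export it to CSV first), with the column names listed in section 5.

```r
# Load the WHO data; the file has a header row and comma separators
data <- read.csv("WHO3.csv", header = TRUE, sep = ",")

# Isolate every column except the first ('Country') for scaling
scaling.variables <- data[, -1]

# Scale to mean 0, standard deviation 1
scaled <- scale(scaling.variables)

# Keep the centering/scaling attributes so the transform can be undone later
data.center <- attr(scaled, "scaled:center")
data.scale  <- attr(scaled, "scaled:scale")
```

Undoing the scaling later is then just a matter of multiplying by `data.scale` and adding back `data.center`.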
Printing clusters is crucial, and it is not a good idea to print them with all the data columns, as this can make it confusing to find the cluster relations. You have to try combinations of data columns to make sure you are printing the right amount of columns so that you can find relations between the clusters. To print them I define a function named ‘clusters_print’. This will help me print the clusters formed by kmeans(), limiting the columns to “Country”, “CellularSubscribers”, “LiteracyRate” and “GNI”. Below is the listing for the clusters_print function.
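A hedged reconstruction of that helper (the exact listing in the original post may differ slightly) could be:

```r
# Print each cluster's rows, restricted to a readable subset of columns.
# 'labels' is a vector of cluster labels, one per row of 'data'.
clusters_print <- function(labels, data) {
  cols <- c("Country", "CellularSubscribers", "LiteracyRate", "GNI")
  for (i in sort(unique(labels))) {
    cat("Cluster", i, "\n")
    print(data[labels == i, cols])
    cat("\n")
  }
}
```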
5.5.2 Clustering using kmeans()
In the previous hierarchical clustering, the algorithm used k = 5 clusters. I am using the same k = 5 for k-means as well, which will let me compare whether the clusters formed are the same or different. Moving ahead, I create the object ‘kclust’, in which I save the output of the kmeans() function. The k-means model is fitted on the ‘scaled’ data for k clusters, along with ‘nstart=100’ and ‘iter.max=100’. As I mentioned earlier, the final k-means output depends on the initial centers selected. Thus ‘nstart=100’ starts the clustering from 100 different sets of initial centers, with up to 100 iterations for each (‘iter.max=100’). This very complex computation is carried out with ease by R’s kmeans() function. From the output stored in ‘kclust’, the cluster labels are extracted as kclust$cluster and saved in the object ‘grouped’, which is used to print our clusters with the clusters_print function we formed earlier. Below are the listing and the output showing the k-means clusters.
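A sketch of this step, assuming the ‘scaled’ data and the clusters_print() helper from the earlier listings:

```r
k <- 5
set.seed(9898)  # fix the RNG so the 100 random starts are reproducible

# 100 random starts, up to 100 iterations each; kmeans() keeps the
# solution with the lowest total within-cluster sum of squares
kclust <- kmeans(scaled, centers = k, nstart = 100, iter.max = 100)

grouped <- kclust$cluster   # one cluster label per country
clusters_print(grouped, data)
```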
Following can be inferred from the clusters formed above:
- Here, the clusters formed relate somewhat to the geographic mapping of the countries
- Cluster 1 contains the countries with the lowest average GNI, all in 4 figures, with literacy rates averaging 80.89
- Cluster 2 is a good mix of countries with 4- and 5-digit GNI, with literacy rates averaging 92.18
- Cluster 3 has the countries with the highest literacy rates, the minimum being 99%
- Cluster 4 also has countries with good literacy rates, the minimum being 90%, and the GNI of every country in cluster 4 is in 5 figures
And guess what: these are more or less the same clusters, with small trade-offs, that were formed by Hierarchical Clustering in my previous blog: Machine Learning: Unsupervised – Hierarchical Clustering and Bootstrapping. Below is the clusters image from Hierarchical Clustering, for comparison.
5.5.3. Optimal ‘k’ for Clusters
Again, the question arises: how are we sure about having 5 clusters and not 4 or 3, or maybe just 2? How do we know we have selected the optimal number of clusters k? This can be figured out with the ‘Calinski-Harabasz Index’ and the ‘Average Silhouette Width’. Both concepts help us figure out what our optimal number of clusters k should be. I am going to give you a brief theoretical idea of both concepts; an in-depth, detailed explanation is out of scope for this article.
5.5.3.1. Calinski-Harabasz Index (ch)
The Calinski-Harabasz index of a clustering is the ratio of the between-cluster variance (which is essentially the variance of all the cluster centroids from the dataset’s grand centroid) to the total within-cluster variance (basically, the average WSS of the clusters in the clustering).
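That ratio can be computed directly from a kmeans() fit, which makes the definition concrete. The sketch below uses the BSS and WSS attributes mentioned in section 5.4; for k clusters on n points, CH = (BSS / (k − 1)) / (WSS / (n − k)).

```r
# Calinski-Harabasz index from a kmeans fit 'km' on 'n' data points:
# between-cluster variance over within-cluster variance, each on its
# own degrees of freedom
ch_index <- function(km, n) {
  k <- nrow(km$centers)
  (km$betweenss / (k - 1)) / (km$tot.withinss / (n - k))
}
```

A larger CH value means tighter, better-separated clusters, which is why the best k is the one that maximizes it.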
5.5.3.2. Average Silhouette Width (asw)
Measures how well an individual data point is clustered and estimates the average distance between clusters. The silhouette plot displays a measure of how close each point in one cluster is to the points in the neighboring clusters.
5.5.3.3. Computing ch and asw
kmeansruns(), another function in R’s ‘fpc’ package, can be used to compute the Calinski-Harabasz Index (ch) and the Average Silhouette Width (asw) automatically. It runs k-means over a range of k values and estimates the best k. It returns the best value of k along with the kmeans output for that k. I use kmeansruns() on the scaled data with ‘krange=1:10’ (i.e. all values of k from 1 to 10) and criterion = ‘ch’ and ‘asw’ respectively. kmeansruns() gives the best k value as one of its outputs, extracted as $bestk. Below are the R listing and the best values of k obtained.
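A sketch of this step, assuming the ‘scaled’ data from earlier and the ‘fpc’ package installed:

```r
library(fpc)

# Run k-means for k = 1..10 under each criterion
clustering.ch  <- kmeansruns(scaled, krange = 1:10, criterion = "ch")
clustering.asw <- kmeansruns(scaled, krange = 1:10, criterion = "asw")

clustering.ch$bestk    # best k by Calinski-Harabasz
clustering.asw$bestk   # best k by Average Silhouette Width
```

Besides $bestk, each result carries $crit, the per-k criterion scores, which we will plot in the next section.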
The Calinski-Harabasz index gives the best k value as 2, and the Average Silhouette Width gives the best k value as 3. Thus 2 or 3 is the optimal ‘k’ for this data set. Now, you may ask why I used k = 5. Simply because in my last blog on Hierarchical Clustering k was 5, so I used the same k here too, to illustrate how close the clusters come out irrespective of the clustering algorithm we deploy.
5.5.4. Visual Examination of ‘ch’ and ‘asw’ indexes
As per the listing for computing ‘asw’ and ‘ch’, I used krange = 1:10. I am now going to plot both indexes over the range 1 to 10 against each index’s ‘score’; the k with the highest score is returned as the best ‘k’. I build a data frame combining the criterion values, one of the outputs of the kmeansruns() function. I then reshape this data frame using the melt() function from R’s ‘reshape2’ library, and plot it using the ‘ggplot2’ library. The listing and output are shown below.
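A sketch of that plot, assuming the clustering.ch and clustering.asw objects from the previous listing (the scores are standardized so the two criteria share one axis):

```r
library(reshape2)
library(ggplot2)

# Combine the per-k criterion scores from both kmeansruns() results
critframe <- data.frame(k   = 1:10,
                        ch  = scale(clustering.ch$crit),
                        asw = scale(clustering.asw$crit))

# Reshape to long form: one row per (k, measure) pair
critframe <- melt(critframe, id.vars = "k",
                  variable.name = "measure", value.name = "score")

# One curve per measure; the peak of each curve is its best k
ggplot(critframe, aes(x = k, y = score, color = measure)) +
  geom_point(aes(shape = measure)) +
  geom_line(aes(linetype = measure)) +
  scale_x_continuous(breaks = 1:10)
```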
The plot of both ch and asw can be seen in the figure above for k values from 1 to 10. ch (red) peaks at k = 2, giving a best k of 2, and asw peaks at k = 3, giving a best k of 3.
5.5.5 Bootstrapping of k-means
Bootstrapping is the litmus test our model needs to pass so that we can rely on it, bet on its findings, and, if required, use it to predict clusters for unknown data points. The clusterboot() function in the ‘fpc’ package does the bootstrapping by re-sampling, to evaluate how stable our clusters are. It works on the Jaccard coefficient, a similarity measure between sets. The Jaccard coefficient should be greater than 0.5 for all our clusters to make sure they are well formed. For an in-depth explanation and understanding of bootstrapping theory, refer to my blog on Hierarchical Clusters.
For this test I again select k = 5 as the best k, to evaluate the previously formed clusters. I use clusterboot() on the scaled data with the argument clustermethod = kmeansCBI to indicate a k-means clusterboot, running (booting) it 100 times with a maximum of 100 iterations per run, all for a krange of my chosen best k, i.e. 5 clusters. The seed is set to the arbitrary number 9898 for result reproducibility. The clusters formed by the bootstrap can be extracted from $result$partition of the bootstrap model. These clusters are stored in the object ‘groups’ and printed using the clusters_print function we formed earlier. Below are the listings and the clusters formed over the evaluation of 100 runs and 100 iterations respectively.
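A sketch of the bootstrap step, assuming the ‘scaled’ data, the clusters_print() helper, and the ‘fpc’ package from earlier:

```r
library(fpc)

kbest <- 5
# Resample and re-cluster with k-means (kmeansCBI interface):
# 100 runs, up to 100 iterations each, seeded for reproducibility
cboot <- clusterboot(scaled,
                     clustermethod = kmeansCBI,
                     runs = 100, iter.max = 100,
                     krange = kbest, seed = 9898)

groups <- cboot$result$partition   # cluster labels after bootstrapping
clusters_print(groups, data)
```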
As we can see above, the clusters formed initially by k-means are the same as the clusters formed here by bootstrapping. This shows we have fairly stable clusters that can be trusted and deployed in practical use, as they pass the test of 100 bootstrap runs with 100 iterations each.
Further, our bootstrapping results can be verified by checking each cluster’s mean Jaccard coefficient, which should be greater than 0.5 to indicate stability, and by counting how many times out of 100 each cluster dissolved, which also speaks to its stability. Below is the combined listing and output for the Jaccard coefficients and the number of times each cluster dissolved.
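These two stability measures come straight out of the clusterboot() result, assuming the ‘cboot’ object from the previous listing:

```r
# Per-cluster stability from the clusterboot() result
cboot$bootmean   # mean Jaccard coefficient per cluster (want > 0.5)
cboot$bootbrd    # times each cluster was dissolved across the 100 runs
```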
Here, the Jaccard coefficients of all the clusters are above 0.6, and the dissolution counts out of 100 runs are low as well. These values give us another layer of confidence in our clusters via bootstrapping.
6. k-means Takeaways:
- k-means is one of the best and simplest ways to perform unsupervised machine learning with clustering
- Always remember to scale your data, preferably to mean 0 and standard deviation 1 for all numeric data columns, to get clean results
- If you are not sure what your optimal ‘k’ value should be, perform ch and asw indexing before computing k-means
- Always bootstrap your model, trying to reproduce results similar to the original, to be confident in your model and findings
7. Clustering Takeaways:
- The goal is simply to draw out similarities by forming subsets of your data
- Good clusters indicate that the data points in one cluster are similar to each other and dissimilar to data points in other clusters
- Units of the data fields matter: different units in different fields can produce different results, so it is always a good idea to have a clear sense of the data and its units before clustering
- You want a unit change in one field to represent the same magnitude of change as a unit change in another (consistency). Thus it becomes important that you scale your data, as I have done, to mean 0 and standard deviation 1.
- Clustering is always deployed as a data exploration algorithm and can act as a precursor to supervised learning
- Clustering can be considered similar to visualization: with more iterations and interaction you understand your data better, unlike automated machine learning processes such as supervised learning
- There are many theories for attaining the best k value for your clusters; the best ones are indicated here, and others can be explored easily, but it is always wise to choose what best suits the situation
8. End Note
It feels great to produce something that is accepted all across the globe. We here at The Datum will keep maintaining our standards so that we can give you the best each time. And to do this repeatedly and consistently we need your support, so that we can always come back strong. You can support us by liking, sharing, and subscribing to ‘The Datum’, and by helping this space reach more and more people who can make the best use of it. You can also always write back to my personal email, firstname.lastname@example.org, for any queries/ambiguities.
Ever wondered how you get recommendations whenever you check out from online shopping? I am sure each one of you has read something like “Buyers who bought this also bought this…” somewhere. Curious to learn the data science behind it? Stay tuned. Coming up next: “Machine Learning: Association Rule Mining”.
- Data used here is available on my site: https://talktomedata.files.wordpress.com/2019/05/who3.xlsx
- For in depth asw refer this: http://www.sthda.com/english/wiki/print.php?id=241
- Original Data Source: World Health Organization (WHO)
- Link to ‘Machine Learning: Unsupervised – Hierarchical Clustering’: https://thedatum.data.blog/2019/05/26/machine-learning-unsupervised-hierarchical-clustering-and-bootstrapping/