Speeding up k-Means algorithm by GPUs

Authors:

Highlights:

Abstract

Cluster analysis plays a critical role in a wide variety of applications, but it now faces a computational challenge due to continuously increasing data volumes. Parallel computing is one of the most promising solutions for overcoming this challenge. In this paper, we parallelize k-Means, one of the most popular clustering algorithms, on widely available Graphics Processing Units (GPUs). Unlike existing GPU-based k-Means algorithms, we observe that data dimensionality is an important factor to consider when parallelizing k-Means on GPUs. In particular, we use two different strategies, one for low-dimensional data sets and one for high-dimensional data sets, to make the best use of GPU computing horsepower. For low-dimensional data sets, we design an algorithm that exploits GPU on-chip registers to significantly reduce data access latency. For high-dimensional data sets, we design another algorithm that simulates matrix multiplication and exploits GPU on-chip shared memory to achieve a high compute-to-memory-access ratio. Our experimental results show that our GPU-based k-Means algorithms are three to eight times faster than the best previously reported GPU-based algorithms.
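The two strategies above map naturally onto CUDA kernels. Below is a minimal sketch of the low-dimensional assignment step, assuming a compile-time dimensionality DIM and cluster count K small enough for each point to sit in registers and the full centroid table in shared memory; the kernel name and parameter values are illustrative, not the authors' code.

```cuda
#include <cfloat>
#include <cuda_runtime.h>

#define DIM 4    // assumed low dimensionality
#define K   16   // assumed number of clusters

// One thread per point: the point's coordinates stay in registers,
// and the centroid table is staged once into shared memory for reuse.
__global__ void assign_low_dim(const float* __restrict__ points,
                               const float* __restrict__ centroids,
                               int* __restrict__ labels, int n)
{
    __shared__ float c[K * DIM];
    for (int i = threadIdx.x; i < K * DIM; i += blockDim.x)
        c[i] = centroids[i];
    __syncthreads();

    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= n) return;

    float x[DIM];                        // point cached in registers
    for (int d = 0; d < DIM; ++d)
        x[d] = points[p * DIM + d];

    float best = FLT_MAX;
    int arg = 0;
    for (int j = 0; j < K; ++j) {        // nearest-centroid search
        float dist = 0.f;
        for (int d = 0; d < DIM; ++d) {
            float t = x[d] - c[j * DIM + d];
            dist += t * t;
        }
        if (dist < best) { best = dist; arg = j; }
    }
    labels[p] = arg;
}
```

For high-dimensional data, the dominant cost is the n-by-k table of point-centroid distances. Since ||x - c||^2 = ||x||^2 + ||c||^2 - 2 x·c, that table reduces to the product of the point matrix with the transposed centroid matrix, plus cheap per-row and per-column corrections. The sketch below (same file as the kernel above) computes the cross terms with a classic shared-memory-tiled kernel in the spirit of the abstract's simulated matrix multiplication; the tile size and kernel name are again assumptions.

```cuda
#define TILE 16  // assumed tile width

// dots[i*k + j] = dot(points[i], centroids[j]); i.e. (n x d) * (k x d)^T.
__global__ void cross_terms(const float* __restrict__ points,
                            const float* __restrict__ centroids,
                            float* __restrict__ dots,
                            int n, int k, int d)
{
    __shared__ float xs[TILE][TILE];     // tile of points
    __shared__ float cs[TILE][TILE];     // tile of centroids

    int row = blockIdx.y * TILE + threadIdx.y;   // point index
    int col = blockIdx.x * TILE + threadIdx.x;   // centroid index
    float acc = 0.f;

    for (int t = 0; t < d; t += TILE) {  // march along the dimension axis
        xs[threadIdx.y][threadIdx.x] = (row < n && t + threadIdx.x < d)
            ? points[row * d + t + threadIdx.x] : 0.f;
        cs[threadIdx.y][threadIdx.x] = (col < k && t + threadIdx.y < d)
            ? centroids[col * d + t + threadIdx.y] : 0.f;
        __syncthreads();
        for (int m = 0; m < TILE; ++m)
            acc += xs[threadIdx.y][m] * cs[m][threadIdx.x];
        __syncthreads();
    }
    if (row < n && col < k)
        dots[row * k + col] = acc;
}
```

Each input tile is read from global memory once but reused TILE times from shared memory, which is one way to obtain the high compute-to-memory-access ratio the abstract refers to.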

Keywords: Clustering, k-Means, GPU computing, CUDA

Article history: Received 7 January 2011, Revised 23 July 2011, Accepted 1 May 2012, Available online 8 May 2012.

DOI: https://doi.org/10.1016/j.jcss.2012.05.004