000218590 001__ 218590
000218590 005__ 20181114182522.0
000218590 037__ $$aCONF
000218590 245__ $$aCompressive Spectral Clustering
000218590 269__ $$a2016
000218590 260__ $$c2016
000218590 336__ $$aConference Papers
000218590 520__ $$aSpectral clustering has become a popular technique due to its high performance in many contexts. It comprises three main steps: create a similarity graph between the N objects to cluster, compute the first k eigenvectors of its Laplacian matrix to define a feature vector for each object, and run k-means on these features to separate the objects into k classes. Each of these three steps becomes computationally intensive for large N and/or k. We propose to speed up the last two steps based on recent results in the emerging field of graph signal processing: graph filtering of random signals, and random sampling of bandlimited graph signals. We prove that our method, with a gain in computation time that can reach several orders of magnitude, is in fact an approximation of spectral clustering, for which we are able to control the error. We test the performance of our method on artificial and real-world network data.
000218590 6531_ $$agraph signal processing
000218590 6531_ $$aspectral clustering
000218590 6531_ $$amachine learning
000218590 700__ $$aTremblay, Nicolas
000218590 700__ $$0242927$$aPuy, Gilles$$g179918
000218590 700__ $$aGribonval, Rémi
000218590 700__ $$0240428$$aVandergheynst, Pierre$$g120906
000218590 7112_ $$a33rd International Conference on Machine Learning (ICML)$$cNew York, USA$$dJune 19-24
000218590 8564_ $$uhttps://arxiv.org/abs/1602.02018$$zURL
000218590 909C0 $$0252392$$pLTS2$$xU10380
000218590 909CO $$ooai:infoscience.tind.io:218590$$pconf$$pSTI$$pGLOBAL_SET
000218590 917Z8 $$x120906
000218590 937__ $$aEPFL-CONF-218590
000218590 973__ $$aEPFL$$rREVIEWED$$sACCEPTED
000218590 980__ $$aCONF