2023, Volume 7, Issue 2
FRACTIONATION USING K MEANS CLUSTERING
AUTHOR(S)
Kannammal A, Sindhu P, Santhiya R, Sujitha S, Yuvetha S
DOI: https://doi.org/10.46647/ijetms.2023.v07i02.079
ABSTRACT
The k-means algorithm is often used in clustering applications, but it requires a complete data matrix. Missing data, however, are common in many applications. Mainstream approaches to clustering with missing data reduce the problem to a complete-data formulation through either deletion or imputation, but these solutions may incur significant costs. Our k-POD method presents a simple extension of k-means clustering for missing data that works even when the missingness mechanism is unknown, when external information is unavailable, and when there is significant missingness in the data.
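The k-POD idea described above can be read as an alternating scheme: fill the missing cells using the current clustering, run k-means on the completed matrix, and repeat until the objective stabilizes. The Python sketch below illustrates that scheme only; it is an assumed, minimal rendering, not the authors' published implementation. The function name k_pod, the column-mean initial fill, the use of scikit-learn's KMeans, and the convergence tolerance are all illustrative choices.

import numpy as np
from sklearn.cluster import KMeans

def k_pod(X, n_clusters, max_iter=100, tol=1e-6, random_state=0):
    # Hypothetical sketch of the alternating k-POD-style scheme:
    # (1) run k-means on a completed copy of X,
    # (2) refill missing cells with the assigned cluster's centroid values,
    # and repeat until the k-means objective stops improving.
    X = np.asarray(X, dtype=float)
    missing = np.isnan(X)

    # Initial fill: column means of the observed entries (an illustrative choice).
    col_means = np.nanmean(X, axis=0)
    X_filled = np.where(missing, col_means, X)

    prev_obj = np.inf
    for _ in range(max_iter):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=random_state)
        labels = km.fit_predict(X_filled)

        # Re-impute each missing cell with the corresponding coordinate of
        # the centroid of the row's current cluster.
        X_filled[missing] = km.cluster_centers_[labels][missing]

        obj = km.inertia_
        if abs(prev_obj - obj) < tol * max(prev_obj, 1.0):
            break
        prev_obj = obj

    return labels, km.cluster_centers_, X_filled

Because missing entries are refilled from the centroids of each observation's current cluster, the sketch never deletes partially observed rows and never needs a model of the missingness mechanism, which is the behaviour the abstract attributes to k-POD.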
Page No: 740 - 748
How to Cite This Article:
Kannammal A, Sindhu P, Santhiya R, Sujitha S, Yuvetha S. FRACTIONATION USING K MEANS CLUSTERING. ijetms;7(2):740-748. DOI: 10.46647/ijetms.2023.v07i02.079