On Subsampling Procedures for Support Vector Machines

Roberto Bárcenas, Maria Gonzalez-Lima, Joaquin Ortega, Adolfo Quiroz

Research output: Contribution to journal › Article › peer-review


Abstract

Herein, theoretical results are presented to provide insights into the effectiveness of subsampling methods in reducing the number of instances required in the training stage when applying support vector machines (SVMs) for classification in big data scenarios. Our main theorem states that under some conditions, there exists, with high probability, a feasible solution to the SVM problem for a randomly chosen training subsample, with the corresponding classifier as close as desired (in terms of classification error) to the classifier obtained from training with the complete dataset. The main theorem also reflects the curse of dimensionality in that the assumptions made for the results are much more restrictive in large dimensions; thus, subsampling methods will perform better in lower dimensions. Additionally, we propose an importance sampling and bagging subsampling method that expands the nearest-neighbors ideas presented in previous work. Using different benchmark examples, the method proposed herein yields a faster solution to the SVM problem (without significant loss in accuracy) compared with the available state-of-the-art techniques.
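The subsampling-and-bagging idea described in the abstract can be illustrated with a minimal sketch: train several SVMs, each on a small random subsample of the training data, and combine them by majority vote. This sketch uses plain uniform subsampling with scikit-learn's `SVC`; the paper's specific importance-sampling weights and nearest-neighbor construction are not reproduced here, and all function names and parameter values below are illustrative assumptions.

```python
# Minimal sketch (not the paper's method): bagged SVMs trained on
# uniform random subsamples, combined by majority vote.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic binary classification data standing in for a "big data" set.
X, y = make_classification(n_samples=5000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def bagged_subsample_svm(X, y, n_bags=5, m=300):
    """Train n_bags SVMs, each on a uniform subsample of size m."""
    models = []
    for _ in range(n_bags):
        idx = rng.choice(len(X), size=m, replace=False)
        models.append(SVC(kernel="rbf").fit(X[idx], y[idx]))
    return models

def predict_vote(models, X):
    """Majority vote over the bagged classifiers (labels in {0, 1})."""
    votes = np.stack([clf.predict(X) for clf in models])
    return (votes.mean(axis=0) > 0.5).astype(int)

models = bagged_subsample_svm(X_tr, y_tr)
acc = (predict_vote(models, X_te) == y_te).mean()
```

Each bagged model sees only 300 of the 3750 training points, so the total fitting cost is far below training a single SVM on the full set, while the voted classifier typically stays close in accuracy; this is the trade-off the paper's theorem quantifies.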
Original language: English (US)
Pages (from-to): 3776
Journal: Mathematics
Volume: 10
Issue number: 20
DOIs
State: Published - Oct 13 2022
