Counterexample to theorems of Cox and Fine
Joseph Y. Halpern
AAAI 1996
The least squares support vector machine (LS-SVM), like the SVM, is based on margin maximization, performs structural risk minimization, and has excellent generalization power. In this paper, we consider its use in semisupervised learning. We propose two algorithms for this task, derived from the transductive SVM idea. Algorithm 1 is based on a combinatorial search guided by certain heuristics, while Algorithm 2 iteratively builds the decision function by adding one unlabeled sample at a time. In terms of complexity, Algorithm 1 is faster, but Algorithm 2 yields a classifier with better generalization capacity when only a few labeled samples are available. Our proposed algorithms are tested on several benchmarks and give encouraging results, confirming our approach. © 2009 IEEE.
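The abstract does not give the algorithms' details, but the incremental scheme it describes (Algorithm 2) can be sketched as a self-training loop around a standard LS-SVM: fit on the labeled set, then repeatedly move the unlabeled sample with the most confident decision value into the labeled set under its predicted label. The sketch below is a hypothetical reconstruction under those assumptions, not the authors' implementation; the LS-SVM fit itself solves the usual bordered linear system in the dual variables.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise RBF kernel matrix between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def lssvm_fit(X, y, C=10.0, gamma=0.5):
    # Standard LS-SVM training: solve the bordered linear system
    #   [ 0   1^T      ] [b]   [0]
    #   [ 1   K + I/C  ] [a] = [y]
    n = len(y)
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return X, alpha, b, gamma

def lssvm_decision(model, Xq):
    # Decision values f(x) = sum_i a_i k(x, x_i) + b.
    X, alpha, b, gamma = model
    return rbf_kernel(Xq, X, gamma) @ alpha + b

def self_training_lssvm(Xl, yl, Xu, C=10.0, gamma=0.5):
    # Self-training loop in the spirit of Algorithm 2 (assumed form):
    # refit, then absorb the single most confidently classified
    # unlabeled sample with its predicted label, until none remain.
    Xl, yl, Xu = Xl.copy(), yl.astype(float).copy(), Xu.copy()
    while len(Xu):
        model = lssvm_fit(Xl, yl, C, gamma)
        scores = lssvm_decision(model, Xu)
        i = int(np.argmax(np.abs(scores)))  # most confident sample
        Xl = np.vstack([Xl, Xu[i:i + 1]])
        yl = np.append(yl, np.sign(scores[i]))
        Xu = np.delete(Xu, i, axis=0)
    return lssvm_fit(Xl, yl, C, gamma)
```

On two well-separated clusters with only two labeled points per class, the loop above typically labels the remaining points correctly; the confidence-ordered absorption is what distinguishes it from refitting on all pseudo-labels at once.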
Shai Fine, Yishay Mansour
Machine Learning
Rei Odaira, Jose G. Castanos, et al.
IISWC 2013
Yehuda Naveh, Michal Rimon, et al.
AAAI/IAAI 2006