Genetic Outlier Detection for a Robust Support Vector Machine
  • CC BY-NC (non-commercial license)
KEYWORDS
SVM, Robust SVM, Genetic algorithm, Support vectors, Outlier
  • 1. Introduction

    Support vector machine (SVM) was proposed by Vapnik et al. [1, 2]; it implements structural risk minimization [3]. Beginning with its early success with optical character recognition [1], SVM has been widely applied to a range of areas [4-6]. SVM possesses a strong theoretical foundation and enjoys excellent empirical success in pattern recognition problems and industrial applications [7]. However, SVM also has the drawback of sensitivity to outliers, and its performance can be degraded by their presence. Even though slack variables are introduced to suppress outliers [8, 9], outliers continue to influence the determination of the decision hyperplane because they have a relatively high margin loss compared to that of the other data points [10]. Further, when a quadratic margin loss is employed, the influence of outliers increases [11]. Previous research has considered this problem [8-10, 12-14].

    In [12], an adaptive margin was proposed to reduce the margin losses (hence the influence) of data far from their class centroids. The margin loss was scaled based on the distance between each data point and the center of its class. In [13, 14], a robust loss function was employed to limit the maximal margin loss of outliers. Further, a robust SVM based on a smooth ramp loss was proposed in [8]; it suppresses the influence of outliers by employing the Huber loss function. Most works have aimed at reducing the effect of outliers by changing the margin loss function; only a small number have aimed at identifying the outliers and removing them from the training set. For example, Xu et al. proposed an outlier detection method using convex optimization in [10]. However, their method is complex, and a relaxation is employed to approximate the optimization.

    In this paper, a new robust SVM based on a genetic algorithm (GA) [15] is proposed. The proposed method locates the outliers among the samples and removes them from the training set. The basic idea parallels that of genetic feature selection, wherein GAs locate the irrelevant or redundant features and remove them by mimicking natural evolution. In the proposed method, the GA detects and removes outliers that would otherwise be treated as support vectors by a conventional soft margin SVM.

    The remainder of this paper is organized as follows. In Section 2, we offer preliminary information on GAs. In Section 3, we describe the proposed method. Section 4 details the experimental results that demonstrate the performance of the method, and our conclusions are presented in Section 5.

    2. Genetic Algorithms

    Genetic Algorithms (GAs) are engineering models obtained from the natural mechanisms of genetics and evolution and are applicable to a wide range of problems. GAs typically maintain and manipulate a population of individuals that represents a set of candidate solutions for a given problem. The viability of each candidate solution is evaluated based on its fitness and the population evolves better solutions via selection, crossover, and mutation. In the selection process, some individuals are copied to produce a tentative offspring population. The number of copies of an individual in the next generation is proportional to the individual’s relative fitness value. Promising individuals are therefore more likely to be present in the next generation. The selected individuals are modified to search for a global optimal solution using crossover and mutation. GAs provide a simple yet robust optimization methodology [16].
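
    To make the loop above concrete, the following is a minimal sketch of such a GA in Python (not the implementation used in this paper): fitness-proportional selection, one-point crossover, and bit-flip mutation over binary strings. The toy fitness function, population size, and operator rates are illustrative assumptions.

```python
# Minimal GA sketch: roulette-wheel selection, one-point crossover, bit-flip mutation.
# All parameter values and the toy fitness are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def evolve(fitness_fn, n_bits, pop_size=30, n_generations=50,
           crossover_rate=0.9, mutation_rate=0.01):
    # Initialize a random binary population.
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_generations):
        fitness = np.array([fitness_fn(ind) for ind in pop], dtype=float)
        # Fitness-proportional (roulette-wheel) selection; assumes nonnegative fitness.
        probs = fitness / fitness.sum() if fitness.sum() > 0 else None
        parents = pop[rng.choice(pop_size, size=pop_size, p=probs)]
        # One-point crossover between consecutive parent pairs.
        children = parents.copy()
        for i in range(0, pop_size - 1, 2):
            if rng.random() < crossover_rate:
                point = rng.integers(1, n_bits)
                children[i, point:], children[i + 1, point:] = (
                    parents[i + 1, point:].copy(), parents[i, point:].copy())
        # Bit-flip mutation.
        flips = rng.random(children.shape) < mutation_rate
        pop = np.where(flips, 1 - children, children)
    fitness = np.array([fitness_fn(ind) for ind in pop], dtype=float)
    return pop[np.argmax(fitness)]

# Toy usage: maximize the number of ones in a 20-bit string.
best = evolve(lambda ind: ind.sum(), n_bits=20)
print(best)
```

    In the proposed method of Section 3, the binary string instead encodes which support vector candidates to retain, and the fitness is the bi-criteria measure defined there.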

    3. Genetic Outlier Selection For Support Vector Machines

    In this section, a new outlier detection method based on a genetic algorithm is proposed. First, dual quadratic optimization is formulated in a soft margin SVM and support vector candidates are selected from the training set based on the Lagrange multiplier. Then, the candidates are divided into either support vectors or outliers using GA. Figure 1 presents the overall procedure for the proposed method.

    Suppose that M data points {x1, x2, ..., xM} (x_i ∈ ℝ^n) are given, each of which is labeled with a binary class yi ∈ {−1, 1}. The goal of the SVM is to design a decision hyperplane

    $W^T \mathbf{x} + w_0 = 0$

    that maximally separates two classes

    $\{\mathbf{x}_i \mid y_i = +1\}$

    and

    $\{\mathbf{x}_i \mid y_i = -1\}$

    where W and w0 are the weight and bias of the decision function, respectively. The SVM is trained by solving

    $\min_{W,\, w_0,\, \Xi} \;\; \tfrac{1}{2} W^T W + C\, \mathbf{1}^T \Xi$

    subject to

    $Y (X W + w_0 \mathbf{1}) \geq \mathbf{1} - \Xi$
    $\Xi \geq \mathbf{0}$

    where X = [x1, x2, ..., xM]^T, Y = diag(y1, y2, ..., yM), 1 = [1, 1, ..., 1]^T, and 0 = [0, 0, ..., 0]^T. Ξ = [ξ1, ..., ξM]^T is the vector of slack variables, each of which represents the margin loss at the corresponding data point. C is a constant that denotes the penalty for misclassification. The above formulation can be recast into the dual problem

    $\max_{\Lambda} \;\; \mathbf{1}^T \Lambda - \tfrac{1}{2} \Lambda^T Y X X^T Y \Lambda$

    subject to

    $\mathbf{0} \leq \Lambda \leq C \mathbf{1}$
    $\Lambda^T Y \mathbf{1} = 0$

    where Λ = [λ1, λ2, ..., λM]^T is the Lagrange multiplier vector and the nonnegative number λi is the Lagrange multiplier associated with xi. In a standard SVM, the data points with positive λi are support vectors and contribute to the decision hyperplane according to

    $f(\mathbf{x}) = \sum_{\lambda_i > 0} \lambda_i y_i \mathbf{x}_i^T \mathbf{x} + w_0$
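
    The following is a minimal sketch (not the paper's code) of how the training points with positive Lagrange multipliers can be collected after solving the dual problem. It uses scikit-learn's SVC, whose support_ and dual_coef_ attributes expose the support vectors and their signed multipliers; the synthetic data and the value of C are assumptions for illustration only.

```python
# Collect the points with lambda_i > 0 after training a linear soft margin SVM.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

candidate_idx = clf.support_              # indices i with lambda_i > 0
lambdas = np.abs(clf.dual_coef_).ravel()  # lambda_i for each such point (dual_coef_ holds y_i * lambda_i)

S = X[candidate_idx]                      # these points form the candidate set S used below
print(f"{len(candidate_idx)} of {len(X)} training points have positive multipliers")
```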

    The interesting point is that if outliers are included in the training set, the outliers are likely to have positive margin loss and thus contribute to the hyperplane. Further, the outliers tend to have relatively large margin loss and significantly influence the determination of the hyperplane, thereby making the SVM sensitive to the presence of outliers. In this paper, a robust SVM design scheme based on a GA is proposed. First, a set of support vector candidates

    $S = \{\mathbf{x}_i \mid \lambda_i > 0\}$

    is prepared by collecting the data points with positive Lagrange multipliers. As stated, not only the support vectors but also some outliers may be included in S. The goal of support vector selection is to determine a subset Sv ⊆ S that includes only support vectors, such that the classification accuracy of the SVM is maximized while the number of data points in the subset, card(Sv), is minimized, where card(·) denotes the cardinality. This is a bi-criteria combinatorial optimization problem and is generally intractable because the search space grows exponentially with card(S).

    The implementation of the support vector selection parallels that of feature selection. The use of a GA is a promising solution to this bi-criteria optimization problem because GA-based feature selection methods outperform non-GA feature selection methods [16-18]. To retain the support vectors and discard the outliers in the subset Sv, the GA chromosome is represented as a binary string of ones and zeros, as illustrated in Figure 2. In this figure, “1” and “0” indicate whether the associated data point should be retained in or discarded from the set of support vectors, respectively.

    Genetic operators are applied to generate new chromosomes in the next generation. There are two types of genetic operators: crossover and mutation. The purpose of crossover is to exchange information among different potential solutions. Mutation introduces genetic material that may have been missing from the initial population or lost during crossover operations [19]. In this paper, one-point crossover and bit-flip mutation [20] are employed as the genetic operators. When a validation set V is denoted as V = {v1, v2, ..., vm}, the fitness function of a chromosome is computed using

    image

    where

    image

    In this equation, m is the number of validation data points and α is a design coefficient. The fitness function encodes the bi-criteria objective: the classification accuracy of the SVM should be maximized while the number of data points in the subset, card(Sv), should be minimized. The first term is aimed at improving the classification performance and the second term at the compactness of the SVM. The coefficient α strikes a balance between the classification performance and the classification cost. The parameters of the GA and SVM are given in Table 1.
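
    A hedged sketch of a fitness evaluation of this kind is given below: it retrains the SVM on the candidates retained by a chromosome, measures accuracy on the validation set, and rewards compactness. The exact functional form and weighting used in the paper are not reproduced here; the convex combination and the variable names (S_X, S_y, V_X, V_y) are illustrative assumptions.

```python
# Sketch of a bi-criteria GA fitness: validation accuracy vs. number of kept candidates.
import numpy as np
from sklearn.svm import SVC

def fitness(chromosome, S_X, S_y, V_X, V_y, alpha=0.1):
    """chromosome: binary mask over the candidate set S (1 = keep, 0 = discard)."""
    keep = chromosome.astype(bool)
    if keep.sum() == 0 or len(np.unique(S_y[keep])) < 2:
        return 0.0  # degenerate subset: cannot train a two-class SVM
    # Retrain the SVM on the retained candidates only.
    clf = SVC(kernel="linear", C=1.0).fit(S_X[keep], S_y[keep])
    accuracy = clf.score(V_X, V_y)        # first term: classification performance
    compactness = 1.0 - keep.mean()       # second term: fewer retained points
    return (1.0 - alpha) * accuracy + alpha * compactness
```

    Such a function can be plugged directly into the GA loop sketched in Section 2, with one bit per support vector candidate.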

    [Table 1.] Experiment parameters

    In Table 1, α is set to 0.1 to emphasize the classification accuracy over the classification cost.

    4. Experimental Results

    In this section, the validity of the proposed scheme is demonstrated by applying it to five databases from the UCI repository [21]. The UCI repository has been widely used within the pattern recognition community as a benchmark for machine learning algorithms. The five databases are the Wine, Haberman, Transfusion, German, and Pima sets. All the sets except the Wine set are binary; only the first and second classes of the Wine set are used. The databases used in the experiments are summarized in Table 2.

    [Table 2.] Datasets used in the experiments

    In this experiment, the databases are randomly divided into four equal-sized subsets. Two subsets are used for training and the remaining two are used for validation and testing. The training and validation sets are used to design the robust SVM, and the test sets are used to evaluate the performance of the algorithms. To demonstrate the robustness of the proposed method against outliers, approximately 5% and 10% of the training samples were randomly selected from each class and their labels were reversed. Five independent runs were performed for statistical verification, and the linear kernel was used for the SVM; a sketch of this label-flipping protocol appears at the end of this section.

    The performances of the proposed method and the conventional soft margin SVM were compared in terms of average testing accuracy and the number of support vectors. The results are summarized in Tables 3 and 4, where the proposed robust SVM is denoted as GASVM. The standard SVM exhibits marginally better performance than the proposed method only for the Australian database in the non-outlier case. In the majority of cases, the proposed method achieves superior classification accuracy using a smaller number of support vectors than the standard SVM. That is, the proposed method is less sensitive to outliers and requires fewer support vectors. Further, by comparing the cases with 5% and 10% outliers, as indicated in Figures 3 and 4, it can be observed that for the standard SVM, the more outliers are included in the training set, the more support vectors are generated and, hence, the more the performance is degraded. The proposed method, in contrast, is less sensitive to the outliers and the increase in the number of support vectors is limited. The reason for the improved performance is that only useful and discriminatory support vectors are selected, and the brunt of the outlier influence on the SVM training is removed.

    To highlight the robustness of the proposed method, the test accuracy of the GASVM was normalized with respect to that of the standard SVM and the relative performances of the two SVMs are presented in Figure 5. In this figure, the length of the bar l denotes

    $l = \frac{C_{GASVM}}{C_{SVM}}$

    where C_GASVM and C_SVM are the correct classification rates of the GASVM and the standard SVM, respectively. From this figure, it is clear that the more outliers are included, the greater the relative advantage of the proposed method over the standard method.

    [Table 3.] Comparing the results of the proposed method (GASVM) with those of a previous method (SVM) in terms of testing accuracy

    [Table 4.] Comparing the results of the proposed method (GASVM) with those of a previous method (SVM) in terms of the number of support vectors
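
    For reference, the label-flipping protocol described above can be sketched as follows, under stated assumptions: a synthetic dataset stands in for the UCI sets, only the standard soft margin SVM baseline is shown, and the GA-based selection step is omitted.

```python
# Sketch of the evaluation protocol: 4 equal splits (2 train, 1 validation, 1 test)
# with 0%, 5%, and 10% of training labels flipped to simulate outliers.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def flip_labels(y, fraction):
    """Reverse the labels of a random `fraction` of samples in each class (labels in {0, 1})."""
    y_noisy = y.copy()
    for cls in np.unique(y):
        idx = np.flatnonzero(y == cls)          # indices taken from the clean labels
        n_flip = int(round(fraction * len(idx)))
        flip = rng.choice(idx, size=n_flip, replace=False)
        y_noisy[flip] = 1 - y_noisy[flip]
    return y_noisy

X, y = make_classification(n_samples=400, n_features=8, random_state=0)

# Four equal parts: 50% training, 25% validation, 25% testing.
X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)
# X_val, y_val would drive the GA fitness in the full GASVM procedure (omitted here).

for fraction in (0.0, 0.05, 0.10):
    y_noisy = flip_labels(y_train, fraction)
    clf = SVC(kernel="linear", C=1.0).fit(X_train, y_noisy)
    print(f"{fraction:.0%} outliers: accuracy={clf.score(X_test, y_test):.3f}, "
          f"support vectors={len(clf.support_)}")
```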

    5. Conclusions

    In this paper, we presented a new outlier detection method for improving the robustness of the SVM. The proposed method uses a GA to detect outliers among the support vector candidates assigned by the soft margin SVM, and it achieves better recognition performance with fewer support vectors than the standard soft margin SVM. Using the proposed method, the robustness of the SVM is improved and the classifier is simplified by outlier deletion. The validity of the suggested method was demonstrated through experiments with five databases from the UCI repository.

References
  • 1. Cortes C., Vapnik V. 1995 “Support-vector networks,” [Machine Learning] Vol.20 P.273-297
  • 2. Vapnik V. N. 1998 Statistical Learning Theory
  • 3. Jun S. 2008 “An outlier data analysis using support vector regression,” [Journal of The Korean Institute of Intelligent Systems] Vol.18 P.876-880
  • 4. Hoang V., Le M., Jo K. 2014 “Hybrid cascade boosting machine using variant scale blocks based HOG features for pedestrian detection,” [Neurocomputing] Vol.135 P.357-366
  • 5. Seo S., Yang H., Sim K. 2008 “Behavior learning and evolution of swarm robot system using support vector machine,” [Journal of The Korean Institute of Intelligent Systems] Vol.18 P.712-717
  • 6. Shin H., Jung H., Cho K., Lee J. 2012 “A prediction method of learning outcomes based on regression model for effective peer review learning,” [Journal of The Korean Institute of Intelligent Systems] Vol.22 P.624-630
  • 7. Kumar S. 2005 Neural Networks: A Classroom Approach
  • 8. Wang L., Jia H., Li J. 2008 “Training robust support vector machine with smooth ramp loss in primal space,” [Neurocomputing] Vol.71 P.3020-3025
  • 9. Lee H., Hong S., Lee B., Kim E. 2010 “Design of robust support vector machine using genetic algorithm,” [Journal of The Korean Institute of Intelligent Systems] Vol.20 P.375-379
  • 10. Xu L., Crammer K., Schuurmans D. 2006 “Robust support vector machine training via convex outlier ablation,” [Proc. the 21st National Conference on Artificial Intelligence] P.536-542
  • 11. Suykens J. A. K., Vandewalle J. 1999 “Least squares support vector machine classifiers,” [Neural Processing Letters] Vol.9 P.293-300
  • 12. Song Q., Hu W., Xie W. 2002 “Robust support vector machine with bullet hole image classification,” [IEEE Trans. Systems, Man, and Cybernetics-Part C: Applications and Reviews] Vol.32 P.440-448
  • 13. Krause N., Singer Y. 2004 “Leveraging the margin more carefully,” [Proc. the 21st International Conference on Machine Learning] Vol.69
  • 14. Bartlett P., Mendelson S. 2002 “Rademacher and Gaussian complexities: risk bounds and structural results,” [Journal of Machine Learning Research] Vol.3 P.463-482
  • 15. Davis L. 1991 Handbook of Genetic Algorithms
  • 16. Lee H., Kim E., Park M. 2007 “A genetic feature weighting scheme for pattern recognition,” [Integrated Computer-Aided Engineering] Vol.14 P.161-171
  • 17. Kuncheva L., Jain L. 1999 “Nearest neighbor classifier: simultaneous editing and feature selection,” [Pattern Recognition Letters] Vol.20 P.1149-1156
  • 18. Oh I., Lee J., Moon B. 2004 “Hybrid genetic algorithms for feature selection,” [IEEE Trans. Pattern Analysis and Machine Intelligence] Vol.26 P.1424-1437
  • 19. Juo H., Chang H. 2004 “A new symbiotic evolution-based fuzzy-neural approach to fault diagnosis of marine propulsion systems,” [Artificial Intelligence] Vol.17 P.919-930
  • 20. Michalewicz Z. 1996 Genetic Algorithms + Data Structures = Evolution Programs
  • 21. Murphy P. M., Aha D. W. 1994 “UCI Repository for Machine Learning Databases,” Technical report
Images / Tables
  • [Figure 1.] Procedure of the proposed method.
  • [Figure 2.] Chromosome used in the support vector selection.
  • [Table 1.] Experiment parameters
  • [Table 2.] Datasets used in the experiments
  • [Table 3.] Comparing the results of the proposed method (GASVM) with those of a previous method (SVM) in terms of testing accuracy
  • [Table 4.] Comparing the results of the proposed method (GASVM) with those of a previous method (SVM) in terms of the number of support vectors
  • [Figure 3.] Correct classification ratio of the SVM and GASVM.
  • [Figure 4.] Number of support vectors of the SVM and GASVM.
  • [Figure 5.] Relative performance of the proposed method compared to a general SVM.