On a Novel Way of Processing Data that Uses Fuzzy Sets for Later Use in Rule-Based Regression and Pattern Classification
  • CC BY-NC (Non-commercial)
ABSTRACT
KEYWORDS
Rule-based regression, Pattern recognition, Fuzzy set
    1. Introduction

    Regression and pattern classification are very widely used in many fields and applications. Both face four major challenges: 1) choosing the variables/features; 2) choosing the nonlinear structure of the regressors/discriminant functions; 3) choosing how many terms to include in the regression model/pattern classifier; and 4) optimizing the parameters that complete the description of the regression model/pattern classifier.

    For Challenge 1, how to choose the variables/features is crucial to the success of any regression model/pattern classifier. In this paper we assume that the user has already established the variables that affect the outcome, using methods that are available for doing this. For Challenge 4, there are many methods for optimizing parameters, ranging from classical steepest descent (back-propagation) to a plethora of evolutionary computing methods (e.g., simulated annealing, GA, PSO, QPSO, ant colony, etc. [4]), and we assume that the user has decided which one of these to use. Our focus in this paper is on Challenges 2 and 3.

    For Challenge 2, in real-world applications the nonlinear structures of the regressors/discriminant functions are usually not known ahead of time, and are therefore chosen either as products of the variables (e.g., two at a time, three at a time, etc.) or in other, more complicated, ways (e.g., trigonometric, exponential, or logarithmic functions). Sometimes knowledge about the application justifies the choices made for the nonlinear terms; often, however, one does not have such knowledge, and a lot of time is spent, by trial and error, trying to establish the nonlinear dependencies. For Challenge 3, determining how many terms to include in the regression model/pattern classifier is also usually done by trial and error, and this can be very tedious. In this paper we present a novel method that chooses the nonlinear structure of the regressors/discriminant functions as well as the number of terms to include in the regression model/pattern classifier simultaneously and automatically. This is accomplished using a novel way of pre-processing the given data.

    The rest of this paper is organized as follows: Section 2 explains how data can be treated as cases; Section 3 explains how each variable must be granulated; Section 4 describes the Takagi-Sugeno-Kang (TSK) rules that are used for regression/pattern classification; Section 5 presents the main results of this paper, a novel way to simultaneously determine the nonlinear structure of the regressors/discriminant functions and the number of terms to include in the regression model/pattern classifier; Section 6 provides some discussion; and Section 7 draws conclusions and indicates some directions for further research.

    2. Data Treated as Cases

    A data pair is denoted (x(t), y(t)), where x(t) = col(x1(t), x2(t), ..., xp(t)), xi(t) is the ith variable/feature, and y(t) is the output for that x(t). As is commonly done in the social sciences [5, 6], each data pair is treated as a “case,” and the index t denotes a data case. Treating data as cases is motivated by a method called fuzzy set qualitative comparative analysis (fsQCA), which was developed by the prominent social scientist Ragin [5, 6], and has been thoroughly quantified by Mendel and Korjani [7, 8].

    Note that there may or may not be a natural ordering of the cases over t. In multi-variable function approximation or pattern classification applications the data have no natural ordering, but in time-series forecasting applications the data cases have a natural temporal ordering. We assume that N data pairs are available, and refer to the collection of these data pairs as SCases, where SCases = {(x(t), y(t)) | t = 1, …, N}.
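    As a purely illustrative picture of this setup (not taken from the paper), the short Python sketch below builds SCases from made-up numbers; the array names X and y are our own.

```python
# A minimal sketch: data pairs treated as cases. S_Cases is the collection
# {(x(t), y(t)), t = 1, ..., N}; no ordering among the cases is assumed.
# The numbers are made-up placeholders.
import numpy as np

X = np.array([[0.2, 1.1, 0.7],
              [0.9, 0.4, 0.3],
              [0.5, 0.8, 1.0],
              [0.1, 0.2, 0.6]])            # row t is x(t) = col(x1(t), ..., xp(t)), p = 3
y = np.array([0.35, 0.60, 0.80, 0.15])     # y(t), the output for each case
S_Cases = list(zip(X, y))                  # N = 4 cases, each a pair (x(t), y(t))
```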

    3. Preprocessing

    To begin, each of the p variables is granulated [9] into a fixed number of terms. Our suggested approach is to begin with only two terms per variable, design the regression model/pattern classifier, and determine whether acceptable performance is obtained. If it has not been, then increase the number of terms to three (then four, etc.) and repeat this process, stopping when acceptable performance has been obtained. Due to space limitations, we explain our preprocessing only for two terms per variable. Its extension to more than two terms is straightforward.

    For illustrative purposes, we shall call the two terms high (H) and low (L). Each variable xi (i = 1, …, p), xi ∈ R+ (or xi ∈ R), is mapped into the membership functions (MFs) of two type-1 fuzzy sets, one each for high and low. There are many different ways to do this, e.g., choose each MF as a prescribed two-parameter sigmoidal or piecewise-linear function.

    In order to use the construction that is described in Section 5, it is required that the two MFs be complements of one another. This is easily achieved by using fuzzy c-means (FCM) for two clusters [10] (or linguistically modified FCM [LM-FCM] [11]), because it is well known that the MFs for the two FCM clusters are constrained so that one is the complement of the other.

    As a result of this preprocessing step, the MFs μLi(xi) and μHi(xi) = 1 − μLi(xi) (i = 1, ..., p) will have been obtained. Note that, if independent MFs are used for Li and Hi (for which μHi(xi) ≠ 1 − μLi(xi)), then our Section 5 method will use Li and Hi as well as their complements, i.e., four quantities. When, however, μHi(xi) = 1 − μLi(xi), the four quantities reduce to two.
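    The following sketch (our own code, not the paper's) illustrates this preprocessing for a single variable: a small two-cluster fuzzy c-means produces μLi and μHi that sum to one, so the complement property needed in Section 5 holds by construction. The fuzzifier m = 2 and the toy data are arbitrary choices, and the LM-FCM refinement of [11] is not reproduced.

```python
# A minimal sketch of granulating one variable into two complementary terms
# (low and high) with fuzzy c-means; because FCM memberships over the two
# clusters sum to 1, mu_H(x) = 1 - mu_L(x) holds automatically.
import numpy as np

def fcm_two_terms(x, m=2.0, iters=100, tol=1e-6):
    """1-D fuzzy c-means with c = 2; returns (mu_L, mu_H, cluster centers)."""
    x = np.asarray(x, dtype=float)
    rng = np.random.default_rng(0)
    u = rng.random((2, x.size))
    u /= u.sum(axis=0)                       # memberships sum to 1 for each case
    for _ in range(iters):
        um = u ** m
        centers = um @ x / um.sum(axis=1)    # weighted cluster centers
        d = np.abs(x[None, :] - centers[:, None]) + 1e-12
        u_new = 1.0 / d ** (2.0 / (m - 1.0))
        u_new /= u_new.sum(axis=0)           # standard FCM membership update
        if np.max(np.abs(u_new - u)) < tol:
            u = u_new
            break
        u = u_new
    lo, hi = np.argsort(centers)             # cluster with the smaller center is "low"
    return u[lo], u[hi], centers

x1 = np.array([0.1, 0.3, 0.35, 0.7, 0.9, 1.2])   # toy samples of one variable
mu_L, mu_H, _ = fcm_two_terms(x1)
assert np.allclose(mu_L + mu_H, 1.0)              # the complement property
```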

    4. Rules

    Our rules for a rule-based regression model or classifier have the following TSK structure [12]:

    R^υ: IF x1 is A1^υ and x2 is A2^υ and ⋯ and xp is Ap^υ, THEN y = βυ   (υ = 1, …, RS)   (1)

    For the rule-based regression model, the βυ play the role of the regression coefficients, which are determined by optimizing a regression objective function (e.g., minimizing a root-mean-square error), whereas for a binary pattern classifier the βυ are either +1 or −1, depending upon which class the rule is for. Regardless of whether Eq. (1) is used for regression or pattern classification, observe that the antecedent structure (x1 is A1^υ and … and xp is Ap^υ) and the number of rules (RS) must be specified, after which it is straightforward to convert Eq. (1) into a so-called fuzzy basis function expansion [12, 13]. It is the mathematics of this conversion that establishes the nonlinear natures of the regressors/discriminant functions (Challenge 2); but this requires determining RS and the antecedent structure. We show how to determine these simultaneously and automatically in the next section.
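    As an illustration of this conversion (not the paper's own implementation), the sketch below evaluates a toy fuzzy basis function expansion: each rule's firing level is the minimum of its antecedent MFs, the firing levels are normalized into basis functions, and the output is their β-weighted sum. The sigmoidal MFs, the two rules and the β values are invented placeholders; the exact formulas are in [12, 13].

```python
# A minimal sketch of a fuzzy basis function expansion built from rules like
# Eq. (1). "and" is modeled with minimum, as in Section 5; product is another
# common choice. All MFs, rules and coefficients below are placeholders.
import numpy as np

def rule_output(x, rules, betas):
    """y(x) = sum_v beta_v * phi_v(x), phi_v = normalized firing level of rule v.

    rules[v][i] is the MF of the term for variable i in rule v; for regression
    the betas are real coefficients, for a binary classifier they are +1 or -1.
    """
    firing = np.array([min(mf(xi) for mf, xi in zip(rule, x)) for rule in rules])
    phi = firing / firing.sum()              # fuzzy basis functions
    return float(np.dot(betas, phi))

low = lambda x: 1.0 / (1.0 + np.exp(4.0 * (x - 0.5)))   # toy "low" MF
high = lambda x: 1.0 - low(x)                            # complementary "high" MF
rules = [(low, low), (high, high)]           # (x1 is L and x2 is L), (x1 is H and x2 is H)
betas = [0.2, 0.9]                           # placeholder regression coefficients
print(rule_output([0.3, 0.4], rules, betas))
```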

    5. Establish Antecedents of Rules and the Number of Rules

    The (compound) antecedent of each rule contains one linguistic term or its complement for each of the p variables, and each of these linguistic terms is combined with the others by using the word “and” (e.g., A1 and A2 and … and Ap). We refer to this interconnection as a causal combination. Note that in a traditional if-then rule the antecedents use only the terms and not their complements. In our approach (as in fsQCA), protection against postulating the wrong term is achieved by considering each term as well as its complement.

    To begin, 2^p candidate causal combinations (the 2 is due to using both a term and its complement) are conceptually postulated (we will show below that these causal combinations do not actually have to be enumerated). If, e.g., p = 6 there would be 64 candidate causal combinations, or, if p = 10, there would be 1,024 candidate causal combinations.

    One does not know ahead of time which of the 2^p candidate causal combinations should actually be used as a compound antecedent in a rule. Our approach prunes this large collection by using the MFs that were determined in Section 3, as well as the MF for “A1 and A2 and … and Ap” (obtained using fuzzy set mathematics) and a simple test. The results of doing this are called the RS surviving causal combinations.

    Let SF be the collection of the following 2^p candidate causal combinations Fj (j = 1, …, 2^p and i = 1, …, p):

    Fj = A1^j ∧ A2^j ∧ ⋯ ∧ Ap^j,   where Ai^j = Ci or ci   (2)

    where ∧ denotes conjunction (the “and” operator) and is modeled using minimum, and (using Ragin's [5] notation) ci denotes the complement of Ci. The RS surviving causal combinations are found from all of the 2^p candidate causal combinations by keeping only those causal combinations whose MF > 0.5 for at least f cases, where f is a threshold that has to be specified ahead of time. A brute-force way to do this is to create a table in which there are N rows, one for each case, and 2^p columns, one for each of the causal combinations. The entries in this table are μFj(t), and there will be N × 2^p such entries. Such a table is called a truth table by Ragin [5, 6]. One then searches through this very large table and keeps only those causal combinations whose MF entries are > 0.5. If f = 1 then all such causal combinations, with duplications removed, become the set of RS surviving causal combinations. It is very easy for N × 2^p to become very large, and so this brute-force way of carrying out the procedure is impractical.
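    For a small p the truth table can be written down directly, which makes the later speedup concrete. The sketch below (our own, purely illustrative) builds the N × 2^p table of μFj(t) values using the minimum for “and,” taking Ci as the high term and ci as the low term only for illustration; the MF matrix mu_L is assumed to come from the Section 3 preprocessing.

```python
# A minimal sketch of the brute-force truth table: N rows (cases) by 2^p
# columns (candidate causal combinations). Feasible only for small p; the
# min-max theorem below avoids ever building it.
import numpy as np
from itertools import product

def truth_table(mu_L):
    """mu_L: (N, p) low-term memberships; the high term is 1 - low."""
    N, p = mu_L.shape
    mu_H = 1.0 - mu_L
    table = np.empty((N, 2 ** p))
    for j, pattern in enumerate(product((0, 1), repeat=p)):
        # pattern[i] = 1 -> use C_i (here: high), 0 -> use its complement c_i (low)
        terms = np.where(np.array(pattern, bool), mu_H, mu_L)   # (N, p)
        table[:, j] = terms.min(axis=1)       # "and" modeled with minimum
    return table

mu_L = np.array([[0.8, 0.3],
                 [0.2, 0.9],
                 [0.6, 0.4]])                 # toy N = 3 cases, p = 2 variables
tt = truth_table(mu_L)
cases_per_combo = (tt > 0.5).sum(axis=0)      # how many cases give each column MF > 0.5
```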

    Ragin [5] observed the following in an example with four causal conditions: “... each case can have (at most) only a single membership score greater than 0.5 in the logical possible combinations from a given set of causal conditions (i.e., in the candidate causal combinations).” This somewhat surprising result is true in general, and in [8] the following theorem, which locates for each case the one causal combination whose MF > 0.5, was presented:

    Theorem 5.1 (min-max theorem) [8]: Given p causal conditions, C1, C2, …, Cp, and their respective complements, c1, c2, …, cp, consider the 2^p candidate causal combinations Fj (j = 1, …, 2^p) in Eq. (2), where Ai^j = Ci or ci and i = 1, …, p, and let

    μFj(t) = min{ μA1^j(x1(t)), μA2^j(x2(t)), …, μAp^j(xp(t)) }.   (3)

    Then for each t (case) there is only one j, j*(t), for which μFj*(t)(t) > 0.5, and μFj*(t)(t) can be computed as:

    μFj*(t)(t) = min_{i=1,…,p} max{ μCi(xi(t)), μci(xi(t)) }   (4)

    where Ai^{j*(t)} is determined from the right-hand side of Eq. (4), as:

    Ai^{j*(t)} = arg max{ μCi(xi(t)), μci(xi(t)) }.   (5)

    In Eq. (5), arg max denotes the winner of the max, namely Ci or ci.

    A proof of this theorem is in [8]. When nc independent terms are used for each variable, replace p by ncp. A numerical example that illustrates the computations can be found in [7].
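    The sketch below (ours) shows the per-case computation that the theorem licenses, under the Section 3 assumption that μci = 1 − μCi: for each case the winning term of each variable is the larger of μCi and μci (Eq. (5)), and the winning causal combination's MF is the minimum of those maxima (Eq. (4)); the 2^p combinations are never enumerated.

```python
# A minimal sketch of the min-max theorem for one data set. mu_C is assumed to
# be the (N, p) matrix of memberships in the causal conditions C_1, ..., C_p
# produced by the Section 3 preprocessing; complements are 1 - mu_C.
import numpy as np

def per_case_winner(mu_C):
    mu_c = 1.0 - mu_C                         # complements c_1, ..., c_p
    pick_C = mu_C >= mu_c                     # Eq. (5): arg max per variable (True -> C_i)
    winner_mf = np.where(pick_C, mu_C, mu_c).min(axis=1)   # Eq. (4): min of the maxima
    return pick_C, winner_mf                  # winner_mf > 0.5 unless some MF equals 0.5

mu_C = np.array([[0.8, 0.3, 0.6],
                 [0.2, 0.9, 0.7]])            # toy N = 2 cases, p = 3 variables
patterns, mfs = per_case_winner(mu_C)         # one winning causal combination per case
```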

    This min-max theorem leads to the following procedure for computing the RS surviving causal combinations:

    1. Compute Fj*(t) (t = 1, …, N) using Eq. (5).

    2. Find the J uniquely different Fj*(t) and re-label them Fj (j = 1, …, J).

    3. Compute tFj(t) (t = 1, …, N), where

    tFj(t) = 1 if Fj*(t) = Fj, and tFj(t) = 0 otherwise.   (6)

    4. Compute NFj, where

    NFj = Σ_{t=1}^{N} tFj(t).   (7)

    5. Establish the RS surviving causal combinations Fυ^S (υ = 1, …, RS), as:

    Fυ^S = Fj (j → υ)   if NFj ≥ f   (8)

    where Fj (j → υ) means that Fj is added to the set of surviving causal combinations as Fυ^S, and υ is the index of the surviving set.

    Numerical examples that illustrate this five-step procedure can be found in [8].
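    Putting the five steps together, the following sketch (ours, assuming the complementary MFs of Section 3 and random toy data) computes the winning causal combination for every case, collapses duplicates, counts the supporting cases, and keeps the combinations backed by at least f cases.

```python
# A minimal sketch of the five-step procedure for the surviving causal
# combinations. mu_C is the (N, p) membership matrix from Section 3; each
# surviving pattern row encodes one compound antecedent (True -> C_i, False -> c_i).
import numpy as np

def surviving_causal_combinations(mu_C, f=1):
    mu_c = 1.0 - mu_C
    winners = mu_C >= mu_c                                   # step 1: per-case winner, Eq. (5)
    patterns, counts = np.unique(winners, axis=0,
                                 return_counts=True)         # steps 2-4: unique F_j and N_Fj
    keep = counts >= f                                       # step 5: threshold test, Eq. (8)
    return patterns[keep], counts[keep]

mu_C = np.random.default_rng(1).random((200, 4))             # toy data: N = 200, p = 4
survivors, n_cases = surviving_causal_combinations(mu_C, f=3)
print(len(survivors), "surviving causal combinations (RS)")
```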

    In order to implement Eq. (8), the threshold f has to be chosen. In our work we often choose f = 1. This choice is arbitrary and depends on the application and on how many cases are available. Discussions on how to choose f are given in [5-7, 14]. One popular way to choose f is as the integer such that at least 80% of all cases are covered by the set of surviving causal combinations.
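    One possible reading of this 80% heuristic is sketched below with invented inputs: starting from the per-combination case counts of the previous sketch, pick the largest f for which the surviving causal combinations still cover at least 80% of the N cases.

```python
# A minimal sketch of choosing f by case coverage; "counts" is the number of
# cases supporting each uniquely different winning causal combination, as in
# the previous sketch. The 0.8 target is the figure quoted in the text.
import numpy as np

def choose_f(counts, coverage=0.8):
    counts = np.asarray(counts)
    total = counts.sum()                      # total number of cases N
    for f in range(int(counts.max()), 0, -1):
        if counts[counts >= f].sum() >= coverage * total:
            return f                          # largest f that still covers >= 80% of cases
    return 1

print(choose_f([40, 25, 10, 3, 1, 1]))        # made-up counts for illustration
```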

    From Fj in Eq. (2) and Fυ^S in Eq. (8), it follows that (υ = 1, …, RS):

    Fυ^S = A1^υ ∧ A2^υ ∧ ⋯ ∧ Ap^υ,   where Ai^υ = Ci or ci.   (9)

    In [8] it is shown that the speedup of our method for determining the surviving causal combinations over the brute-force approach is ≈ O(2^(nc·p)), where nc is the number of terms used for each variable (assumed to be the same for all variables).

    Example: This example illustrates the number of surviving causal combinations for eight readily available data sets: abalone [15], concrete compressive strength [15], concrete slump test [15], wave force [16], chemical process concentration readings [17], chemical process temperature readings [17], gas furnace [17], and Mackey-Glass chaotic time series [18]. Our results are summarized in Table 1. For each problem a two-cluster FCM was applied to all of its cases. The five-step procedure described above was then used to determine RS.

    Observe that: (1) for three variables (as occurs for wave force, chemical process concentration readings and chemical process temperature readings), the number of surviving causal combinations is either the same as, or close to, the number of candidate causal combinations, which suggests that one should use more than two terms per variable; and (2) in all other situations the number of surviving causal combinations is considerably smaller than the number of candidate causal combinations. Although not shown here, this difference increases when more terms per variable are used; e.g., using three terms per variable, the number of candidate causal combinations for the concrete slump test data set is 134,217,728, whereas the number of surviving causal combinations is only 97 [19].

    [Table 1.] Number of surviving causal combinations for eight problems

    Observe, from the last column in Table 1, that for four of the problems RS ≥ 25. We seriously doubt that a human designer could postulate the non-linear structures for so many regressors/discriminant functions. Our method not only shows that so many of them are necessary, it also finds their nonlinear structures.

    6. Discussion

    Korjani and Mendel [19] have shown how the surviving causal combinations can be used in a new regression model, called variable structure regression (VSR). Using the surviving causal combinations, one can simultaneously determine the number of terms in the (nonlinear) regression model as well as the exact mathematical structure of each of the terms (basis functions). VSR has been tested on the eight small-to-moderate-size data sets listed in Table 1 (four for multi-variable function approximation and four for forecasting), using only two terms per variable whose MFs are complements of one another; it has been compared against five other methods and ranked first among all of them for all eight data sets.

    Specific formulas for fuzzy basis function expansions can be found in [12, 13]. Similar formulas for rule-based binary classification can be found in [12].

    Surviving causal combinations have also been used to obtain linguistic summarizations using fsQCA [7, 8].

    7. Conclusions

    This paper presents a novel method for simultaneously and automatically choosing the nonlinear structures of regressors or discriminant functions, as well as the number of terms to include in a rule-based regression model or pattern classifier. The domain of each variable is first partitioned into subsets, each of which has a linguistic term (called a causal condition) associated with it; fuzzy sets are used to model the terms. Candidate interconnections (causal combinations) of either a term or its complement are formed, where the connecting word is AND, modeled using the minimum operation. The data establish which of the candidate causal combinations survive. A novel theoretical result leads to an exponential speedup in establishing this. For specific applications, see [7, 8, 19].

    Much work remains to be done in using surviving causal combinations in real-world applications. The extension of the min-max Theorem to interval type-2 fuzzy sets is currently being researched.

    Conflict of Interest

    No potential conflict of interest relevant to this article was reported.

References
  • 1. Pushpa B., Vasuki R. (2013). "A least absolute approach to multiple fuzzy regression using Tw-norm based operations," International Journal of Fuzzy Logic Systems, vol. 3, pp. 73-84.
  • 2. Ritz C., Streibig J. C. (2008). Nonlinear Regression with R.
  • 3. Duda R. O., Hart P. E., Stork D. G. (2001). Pattern Classification.
  • 4. Simon D. (2013). Evolutionary Optimization Algorithms.
  • 5. Ragin C. C. (2008). Redesigning Social Inquiry: Fuzzy Sets and Beyond.
  • 6. Rihoux B., Ragin C. C. (2009). Configurational Comparative Methods: Qualitative Comparative Analysis (QCA) and Related Techniques.
  • 7. Mendel J. M., Korjani M. M. (2012). "Charles Ragin's fuzzy set qualitative comparative analysis (fsQCA) used for linguistic summarizations," Information Sciences, vol. 202, pp. 1-23.
  • 8. Mendel J. M., Korjani M. M. (2013). "Theoretical aspects of fuzzy set qualitative comparative analysis (fsQCA)," Information Sciences, vol. 237, pp. 137-161.
  • 9. Bargiela A., Pedrycz W. (2003). Granular Computing: An Introduction.
  • 10. Bezdek J. C. (1981). Pattern Recognition with Fuzzy Objective Function Algorithms.
  • 11. Korjani M. M., Mendel J. M. (2013). "Challenges to using fuzzy set qualitative comparative analyses (fsQCA) and their solutions: modified-fsQCA," IEEE Transactions on Fuzzy Systems.
  • 12. Mendel J. M. (2001). Uncertain Rule-Based Fuzzy Logic Systems: Introduction and New Directions.
  • 13. Wang L. X., Mendel J. M. (1992). "Fuzzy basis functions, universal approximation, and orthogonal least-squares learning," IEEE Transactions on Neural Networks, vol. 3, pp. 807-814.
  • 14. Fiss P. C. (2011). "Building better causal theories: a fuzzy set approach to typologies in organization research," Academy of Management Journal, vol. 54, pp. 393-420.
  • 15. Frank A., Asuncion A. "UCI Machine Learning Repository."
  • 16. Hyndman R. J. "Time Series Data Library."
  • 17. Box G. E. P., Jenkins G. M., Reinsel G. C. (1994). Time Series Analysis: Forecasting and Control.
  • 18. Crowder R. S., Touretzky D. S. (1990). "Predicting the Mackey-Glass time series with cascade correlation learning," in Connectionist Models: Proceedings of the 1990 Summer School, pp. 117-123.
  • 19. Korjani M. M., Mendel J. M. (2014). "Non-linear variable structure regression (VSR) and its application in time-series forecasting," in Proceedings of FUZZ-IEEE 2014.