Document Clustering Using Semantic Features and Fuzzy Relations
- Author: Kim Chul-Won, Park Sun
- Published: Journal of Information and Communication Convergence Engineering, Volume 11, Issue 3, pp. 179-184, 30 Sep 2013
-
ABSTRACT
Traditional clustering methods are usually based on the bag-of-words (BOW) model. A disadvantage of the BOW model is that it ignores the semantic relationships among terms in the data set. To resolve this problem, ontology or matrix factorization approaches are usually used. However, a major problem of the ontology approach is that it is usually difficult to find a comprehensive ontology that can cover all the concepts mentioned in a collection. This paper proposes a new document clustering method using semantic features and fuzzy relations that addresses the problems of the ontology and matrix factorization approaches. The proposed method improves the quality of document clustering because it uses fuzzy relation values between semantic features and terms to clearly distinguish dissimilar documents within clusters. The selected cluster label terms can better represent the inherent structure of a document set because they are derived from the semantic features obtained by non-negative matrix factorization. The experimental results demonstrate that the proposed method achieves better performance than other document clustering methods.
-
KEYWORD
Cluster label, Document clustering, Non-negative matrix factorization, Semantic features, WordNet
-
I. INTRODUCTION

The rapidly growing availability of a large quantity of textual data, such as online news, blogs, emails, and Internet bulletin boards, has created the need for effective document clustering methods. In addition, document clustering has been receiving increased attention as an important method for unsupervised document organization, automatic summarization, topic extraction, and information filtering or retrieval [1-7].
Recent studies on document clustering use machine learning techniques [2,5,8-10], graph-based methods [7,11], and matrix factorization-based methods [12-14]. Machine learning-based methods use a semi-supervised clustering model with respect to prior knowledge and documents’ membership [2,5,8-10]. Graph-based methods model the given document set as an undirected graph in which each node represents a document [7,11]. Matrix factorization-based methods use the semantic features of the document set for document clustering [12-15].
To incorporate the semantic relationships among terms that the bag-of-words model ignores, ontology and matrix factorization approaches are usually used. However, a major problem of the ontology approach is that it is usually difficult to find a comprehensive ontology that can cover all the concepts mentioned in a collection [5]. In addition, a matrix factorization approach may fail to successfully decompose the semantic features of a data set when the data objects are viewed from extremely different viewpoints or are highly articulated [13,16,17].
In this paper, we propose a document clustering method that uses semantic features obtained by non-negative matrix factorization (NMF) together with fuzzy relations. The proposed method uses fuzzy relations between semantic features and terms in a document set to resolve the problem of the matrix factorization approach. NMF can represent an individual object as a non-negative linear combination of partial information extracted from a large volume of objects [13,16,17], and it can easily extract semantic features that represent the inherent structure of the data objects. The factorization result of NMF has a better semantic interpretation, and the clustering result can be easily derived from it [16]. Fuzzy relations [18] use the concepts of fuzzy set theory [19] to model vagueness in information retrieval. The basic use of fuzzy relations involves constructing relationships among the index terms of a set of documents [18].
The proposed method has the following advantages. First, it can extract important cluster label terms from a document set using the semantic features obtained by NMF. By this means, it can identify the major topics and subtopics of clusters with respect to their semantic features. Second, it can remove dissimilar documents from clusters using fuzzy relations between semantic features and document terms, thereby improving the quality of document clustering.
The rest of the paper is organized as follows: Section II describes works related to document clustering methods. In Section III, we review NMF and fuzzy relations in detail. In Section IV, the proposed document clustering method is introduced. Section V shows the evaluation and experimental results. Finally, we conclude in Section VI.
II. RELATED WORKS

Traditional clustering methods can be classified into partitioning, hierarchical, density-based, and grid-based methods. Most of these methods use distance functions as objective criteria and are not effective in high-dimensional spaces [1,3,4,6,20].
Li et al. [20] proposed a document clustering algorithm called adaptive subspace iteration (ASI) using explicit modeling of the subspace structure associated with each cluster. Wang et al. [21] proposed a clustering approach for clustering multi-type interrelated data objects. It fully explores the relationship between data objects for clustering analysis. Park et al. [12] proposed a document clustering method using NMF and cluster refinement. Park et al. [15] proposed a document clustering method using latent semantic analysis (LSA) and fuzzy association. Xu et al. [14] proposed a document partitioning method based on the NMF of the given document corpus. Xu and Gong [13] proposed a data clustering method that models each cluster as a linear combination of the data points, and each data point as a linear combination of the cluster centers. Li and Ding [22] presented an overview and summary of various matrix factorization algorithms for clustering and theoretically analyzed the relationships among them.
Wang et al. [7] proposed document clustering with local and global regularization (CLGR). It uses local label predictors and a global label smoothness regularizer. Liu et al. [9] proposed a document clustering method using cluster refinement and model selection. It uses a Gaussian mixture model and expectation maximization algorithm to conduct initial document clustering. It also refines the initially obtained document clusters by voting on the cluster label of each document.
Ji and Xu [8] proposed a semi-supervised clustering model that incorporates prior knowledge about documents’ membership for document clustering analysis. Zhang et al. [10] adopted a relaxation labeling-based cluster algorithm to evaluate the effectiveness of the aforementioned types of links for document clustering. It uses both content and linkage information in the dataset [11]. Hu et al. [5] proposed a document clustering method exploiting Wikipedia as external knowledge. They used Wikipedia for resolving the external ontology of document clustering problem. Fodeh et al. [2] proposed a document clustering method using an ensemble model combining statistics and semantics [7].
III. NMF AND FUZZY RELATION THEORY
In this paper, we define the matrix notation as follows: let $X_{*j}$ be the $j$'th column vector of matrix $X$, $X_{i*}$ be the $i$'th row vector, and $X_{ij}$ be the element in the $i$'th row and $j$'th column. NMF decomposes a given $m \times n$ matrix $A$ into a non-negative semantic feature matrix $W$ and a non-negative semantic variable matrix $H$, as shown in Eq. (1):

$$A \approx WH \quad (1)$$

where $W$ is an $m \times r$ non-negative matrix and $H$ is an $r \times n$ non-negative matrix. Usually $r$ is chosen to be smaller than $m$ or $n$, so that the total size of $W$ and $H$ is smaller than that of the original matrix $A$.

We use the objective function that minimizes the Euclidean distance between each column of $A$ and its approximation $\hat{A} = WH$, which was proposed by Lee and Seung [16,17]. The Frobenius norm is used as the objective function:

$$\Theta_E(W, H) = \|A - WH\|_F^2 = \sum_{i=1}^{m}\sum_{j=1}^{n}\big(A_{ij} - (WH)_{ij}\big)^2 \quad (2)$$

We keep updating $W$ and $H$ until $\Theta_E(W, H)$ converges below a predefined threshold or the number of repetitions is exceeded. The update rules are as follows:

$$H_{\alpha\beta} \leftarrow H_{\alpha\beta}\frac{(W^{T}A)_{\alpha\beta}}{(W^{T}WH)_{\alpha\beta}} \quad (3)$$

$$W_{\alpha\beta} \leftarrow W_{\alpha\beta}\frac{(AH^{T})_{\alpha\beta}}{(WHH^{T})_{\alpha\beta}} \quad (4)$$

Example 1. We illustrate the NMF algorithm with an example. Let $r$ be 3, the number of repetitions be 50, and the tolerance be 0.001. When the initial elements of the $W$ and $H$ matrices are 0.5, the algorithm decomposes the matrix $A$ into the $W$ and $H$ matrices shown in Fig. 1.

Fig. 2 shows an example of sentence representation using NMF. The column vector $A_{*3}$ corresponding to the third sentence is represented as a linear combination of the semantic feature vectors $W_{*l}$ and the semantic variable column vector $H_{*3}$.

The strength of the two non-negative matrices $W$ and $H$ can be described as follows: each sentence is represented as an additive combination of the semantic variables ($H_{lj}$), while $W$ and $H$ are sparsely represented. Intuitively, it makes more sense for each sentence to be associated with some small subset of a large array of topics ($W_{*l}$), rather than with just one topic or all of the topics. In each semantic feature ($W_{*l}$), NMF groups together semantically related terms [3].
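To make the update procedure concrete, the following is a minimal NumPy sketch of the decomposition in Eq. (1) with the multiplicative update rules of Eqs. (3) and (4). The function name, the random positive initialization, and the small epsilon guard are illustrative choices rather than details from the paper (Example 1 initializes all elements of W and H to 0.5), and the sample matrix is not the one in Table 1.

```python
import numpy as np

def nmf(A, r=3, max_iter=50, tol=1e-3, seed=0):
    """Decompose a non-negative m x n matrix A into W (m x r) and H (r x n)
    using the multiplicative update rules of Eqs. (3) and (4)."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    # Random positive initialization (illustrative; the paper's Example 1 uses 0.5).
    W = rng.random((m, r)) + 1e-4
    H = rng.random((r, n)) + 1e-4
    eps = 1e-9            # guard against division by zero
    prev_err = np.inf
    for _ in range(max_iter):
        H *= (W.T @ A) / (W.T @ W @ H + eps)        # Eq. (3)
        W *= (A @ H.T) / (W @ H @ H.T + eps)        # Eq. (4)
        err = np.linalg.norm(A - W @ H, 'fro') ** 2  # objective of Eq. (2)
        if abs(prev_err - err) < tol:                # change in objective below tolerance
            break
        prev_err = err
    return W, H

# Usage with r = 3, 50 repetitions, and tolerance 0.001, as in Example 1
# (the matrix values here are illustrative, not those of Table 1).
A = np.array([[2., 0., 1., 0., 0., 1., 0.],
              [0., 3., 0., 1., 0., 0., 2.],
              [1., 0., 2., 0., 1., 0., 0.],
              [0., 1., 0., 2., 0., 0., 1.],
              [0., 0., 1., 0., 2., 1., 0.],
              [1., 0., 0., 1., 0., 2., 0.]])
W, H = nmf(A, r=3, max_iter=50, tol=1e-3)
```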
Next, we give a brief review of fuzzy relation theory [1,18,19], which is used in the proposed document clustering. The relevant fuzzy sets and relations are defined as follows.

Definition 1. A fuzzy relation between two finite sets $X = \{x_1, \ldots, x_u\}$ and $Y = \{y_1, \ldots, y_v\}$ is formally defined as a binary fuzzy relation $f: X \times Y \rightarrow [0,1]$, where $u$ and $v$ are the numbers of elements in $X$ and $Y$, respectively.

Definition 2. Given a set of index terms, $T = \{t_1, \ldots, t_u\}$, and a set of documents, $D = \{d_1, \ldots, d_v\}$, each $t_i$ is represented by a fuzzy set $h(t_i)$ of documents, $h(t_i) = \{F(t_i, d_j) \mid \forall d_j \in D\}$, where $F(t_i, d_j)$ is the significance (or membership) degree of $t_i$ in $d_j$.

Definition 3. The fuzzy related terms (RT) relation is based on the evaluation of the co-occurrences of $t_i$ and $t_j$ in the set $D$. A simplification of the fuzzy RT relation based on the co-occurrence of terms is given as follows:

$$r_{i,j} = \frac{n_{i,j}}{n_i + n_j - n_{i,j}} \quad (5)$$

where $r_{i,j}$ represents the fuzzy RT relation between terms $i$ and $j$, $n_{i,j}$ is the number of documents containing both the $i$'th and $j$'th terms, $n_i$ is the number of documents including the $i$'th term, and $n_j$ is the number of documents including the $j$'th term.

Definition 4. The membership degree of each document to each of the cluster sets can be defined as follows:

$$\mu_{i,j} = 1 - \prod_{t_a \in d_i}\prod_{t_b \in CT_j}\big(1 - r_{a,b}\big) \quad (6)$$

where $\mu_{i,j}$ is the membership degree of $d_i$ belonging to $CT_j$, and $r_{a,b}$ is the fuzzy relation between term $t_a \in d_i$ and term $t_b \in CT_j$. $CT_j$ is the term set representing the topic of cluster $j$.
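A minimal NumPy sketch of Eqs. (5) and (6) is given below, assuming a term-document frequency matrix as input; the helper names (`fuzzy_rt_matrix`, `membership`) and the handling of empty denominators are illustrative and not from the paper.

```python
import numpy as np

def fuzzy_rt_matrix(td):
    """Fuzzy RT relation of Eq. (5) for a term-document frequency matrix td
    (m terms x n documents): r_ij = n_ij / (n_i + n_j - n_ij)."""
    inc = (td > 0).astype(float)            # binary term-document incidence
    n_ij = inc @ inc.T                      # documents containing both terms i and j
    n_i = inc.sum(axis=1)                   # documents containing term i
    denom = n_i[:, None] + n_i[None, :] - n_ij
    r = np.divide(n_ij, denom, out=np.zeros_like(n_ij), where=denom > 0)
    np.fill_diagonal(r, 1.0)                # a term has the strongest relation to itself
    return r

def membership(doc_terms, cluster_terms, r):
    """Membership degree of Eq. (6): mu = 1 - prod(1 - r_ab) over term indices
    t_a in the document and t_b in the cluster label term set CT_j."""
    prod = 1.0
    for a in doc_terms:
        for b in cluster_terms:
            prod *= (1.0 - r[a, b])
    return 1.0 - prod
```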
IV. PROPOSED DOCUMENT CLUSTERING METHOD

In this section, we propose a method that clusters documents using the semantic features obtained by NMF and fuzzy relations. The proposed method consists of a preprocessing phase, a cluster label extraction phase, and a document clustering phase. We next give a full explanation of the three phases, which are shown in Fig. 3.
A. Preprocessing

In the preprocessing phase, we remove all stop-words by using Rijsbergen’s stop-words list and perform word stemming using Porter’s stemming algorithm [3,6]. Then we construct the term-frequency vector for each document in the document set [1,3,6].

Let $T_i = [t_{1i}, t_{2i}, \ldots, t_{mi}]^T$ be the term-frequency vector of document $i$, where element $t_{ji}$ denotes the frequency with which term $j$ occurs in document $i$. Let $A$ be an $m \times n$ term-by-document matrix, where $m$ is the number of terms and $n$ is the number of documents in the document set.
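A minimal sketch of the preprocessing phase is shown below; the short stop-word set and NLTK's PorterStemmer stand in for Rijsbergen's stop-words list and Porter's original implementation, and the simple tokenizer is an assumption for illustration.

```python
import re
import numpy as np
from nltk.stem import PorterStemmer   # stands in for Porter's stemming algorithm

# Placeholder for Rijsbergen's stop-words list.
STOP_WORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "for"}

def term_document_matrix(documents):
    """Build the m x n term-frequency matrix A (terms x documents)."""
    stemmer = PorterStemmer()
    docs_terms = []
    for text in documents:
        tokens = re.findall(r"[a-z]+", text.lower())
        docs_terms.append([stemmer.stem(t) for t in tokens if t not in STOP_WORDS])
    vocab = sorted({t for terms in docs_terms for t in terms})
    index = {t: i for i, t in enumerate(vocab)}
    A = np.zeros((len(vocab), len(documents)))
    for d, terms in enumerate(docs_terms):
        for t in terms:
            A[index[t], d] += 1   # accumulate the frequency of each term in document d
    return A, vocab
```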
B. Cluster Label Term Extraction by NMF

In the cluster label term extraction phase, we use the semantic features obtained by NMF [15-17] to extract cluster label terms. The proposed cluster label term extraction method is as follows. First, the preprocessing phase is performed and the term-document frequency matrix is constructed; Table 1 shows the term-document frequency matrix for 7 documents and 6 terms. Table 2 shows the semantic feature matrix $W$ obtained by NMF from Table 1. The terms having the top semantic feature values in each column of Table 2 are selected as the cluster label terms. Table 3 shows the cluster label terms extracted from Table 2.
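The selection of cluster label terms from the semantic feature matrix W can be sketched as follows; the number of label terms kept per cluster (top_k) is an assumed parameter, since the text only states that the terms with the top semantic feature values in each column are selected.

```python
import numpy as np

def cluster_label_terms(W, vocab, top_k=2):
    """For each semantic feature (column of W), pick the terms with the largest
    semantic feature values as the cluster label terms (cf. Tables 2 and 3)."""
    labels = []
    for l in range(W.shape[1]):
        top = np.argsort(W[:, l])[::-1][:top_k]   # indices of the top-valued terms
        labels.append([vocab[i] for i in top])
    return labels
```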
C. Clustering Documents by Fuzzy Relations

The document clustering phase is described as follows. First, we construct the term correlation matrix $M$, which captures the relationship between the cluster label terms and the terms of the document set, using the fuzzy RT relation of Eq. (5). The term correlation matrix is a symmetric term-by-term matrix whose element $m_{ij}$ takes a value in the interval [0,1], where 0 indicates no relationship and 1 indicates a full relationship between the terms $t_i$ and $t_j$. Therefore, $m_{ij}$ is equal to 1 for all $i = j$, since a term has the strongest relationship with itself [18]. Table 4 shows the term correlation matrix obtained using Table 1 and Eq. (5).

Second, a document $d_i$ is clustered into the cluster $C_j$ for which the membership degree $\mu_{i,j}$ of Eq. (6) is the maximum. A term $t_a$ in $d_i$ is associated with cluster $C_j$ if the terms $t_b$ in $CT_j$ (the label term set of cluster $C_j$) are related to the term $t_a$ [18]. Table 5 shows the result of document clustering obtained using Table 4 and Eq. (6).
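Combining the term correlation matrix of Eq. (5) with the membership degree of Eq. (6), the clustering step reduces to assigning each document to the cluster with the maximum membership degree. The sketch below is a self-contained illustration under that reading; the function name and argument layout are assumptions, not the paper's notation.

```python
import numpy as np

def cluster_documents(A, cluster_term_sets, r):
    """Assign each document (a column of the term-document matrix A) to the
    cluster C_j that maximizes the membership degree mu_ij of Eq. (6).
    cluster_term_sets holds, for each cluster, the indices of its label terms CT_j;
    r is the term correlation matrix built with the fuzzy RT relation of Eq. (5)."""
    assignments = []
    for d in range(A.shape[1]):
        doc_terms = np.nonzero(A[:, d])[0]        # indices of terms occurring in the document
        degrees = []
        for ct in cluster_term_sets:
            prod = 1.0
            for a in doc_terms:                   # mu = 1 - prod(1 - r_ab), Eq. (6)
                for b in ct:
                    prod *= (1.0 - r[a, b])
            degrees.append(1.0 - prod)
        assignments.append(int(np.argmax(degrees)))  # cluster with maximum membership
    return assignments
```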
V. EXPERIMENTS AND EVALUATION

We use the Reuters document corpus (http://kdd.ics.uci.edu/databases/reuters21578/reuters21578.html) to evaluate the proposed method. The Reuters corpus has 21,578 documents, which are grouped into 135 clusters [7]. We use the normalized mutual information (NMI) metric to measure the document clustering performance [7,13,14].

We conducted a performance evaluation by testing the proposed method and comparing it with 6 other representative document clustering methods on the same corpus. We implemented 7 document clustering methods: FNMF, FLSA, RNMF, KM, NMF, ASI, and CLGR. In Fig. 4, Fisher NMF (FNMF) denotes the proposed method. Feature LSA (FLSA) denotes our previous method using LSA and fuzzy relations [15]. Robust NMF (RNMF) denotes our previous method using NMF and cluster refinement [12]. KM denotes the partitioning method using traditional k-means [1,3,4,6]. NMF denotes Xu's method using non-negative matrix factorization [14]. ASI denotes Li's method using adaptive subspace iteration [20]. CLGR denotes Wang's method using local and global regularization [7].

The evaluation results are shown in Fig. 4. The evaluations were conducted for cluster numbers ranging from 2 to 10. For each given cluster number k, 50 experiment runs were conducted on different randomly chosen clusters, and the final performance values were obtained by averaging the values over the 50 runs.
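The NMI-based evaluation can be reproduced, for example, with scikit-learn's implementation; the label arrays below are illustrative, and the normalization convention of normalized_mutual_info_score may differ slightly from the MI metric defined in [7,13,14].

```python
from sklearn.metrics import normalized_mutual_info_score

# True topic labels and predicted cluster assignments for one run (illustrative values).
true_labels = [0, 0, 1, 1, 2, 2, 2]
pred_labels = [0, 0, 1, 2, 2, 2, 1]

nmi = normalized_mutual_info_score(true_labels, pred_labels)
print(f"NMI = {nmi:.3f}")
```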
VI. CONCLUSIONS

In this paper, we have presented a document clustering method that uses the semantic features obtained by NMF together with fuzzy relations. The proposed method has the following advantages. First, it can identify dissimilar documents between clusters through fuzzy relations between cluster label terms and documents, thereby improving the quality of document clustering. Second, it can easily extract cluster label terms that cover the major topics of a document set well using the semantic features obtained by NMF. Experimental results show that the proposed method outperforms the 6 other document clustering methods.
-
[Fig. 1.] Result of the non-negative matrix factorization algorithm.
-
[Fig. 2.] Example of sentence representation using semantic features and semantic variables.
-
[Fig. 3.] Document clustering method using semantic features and fuzzy relations. NMF: non-negative matrix factorization.
-
[Table 1.] Term-document frequency matrix
-
[Table 2.] Semantic features matrix W by non-negative matrix factorization from Table 1
-
[Table 3.] Result of cluster label terms extraction from Table 2
-
[Table 4.] Term correlation matrix from Table 2 by the fuzzy related terms relation
-
[Table 5.] Result of document clustering from Table 4
-
[Fig. 4.] Evaluation results of performance comparison. NMI: normalized mutual information, KM: k-means, NMF: non-negative matrix factorization, ASI: adaptive subspace iteration, CLGR: clustering with local and global regularization, RNMF: robust NMF, FLSA: feature latent semantic analysis, FNMF: Fisher NMF.