Locality-Sensitive Hashing for Data with Categorical and Numerical Attributes Using Dual Hashing
  • CC BY-NC (non-commercial)
ABSTRACT
KEYWORDS
Locality-sensitive hashing, Data analysis, Search, Hashing
  • 1. Introduction

    Nearest neighbor search is a fundamental operation in data processing: it determines a specified number of data objects nearest to a given query object from a collection of data objects. As the amount and dimensionality of data increase, this simple operation places a considerable burden on search time and space. A related problem to which nearest neighbor search techniques are applicable is the similar pair identification problem, in which all pairs of mutually similar data objects must be found. Conventional tree-based indexing techniques, such as the kd-tree and spatial tree, work well and guarantee exact solutions for low-dimensional data, but break down and almost deteriorate to an exhaustive search for high-dimensional data.

    As an enabling technique for nearest neighbor search and similar pair identification over large amounts of high-dimensional data, locality-sensitive hashing (LSH) techniques use hashing to conduct rapid searches at the cost of the exactness of the results. LSH techniques create indexing structures by extracting short signatures for data objects using hash functions and maintaining the data objects in buckets according to their signatures. Since there is no guarantee that all similar objects are mapped to the same bucket, LSH techniques are approximate. For nearest neighbor search or similar pair identification, LSH techniques first extract the signature for a given query and find the candidates to be examined by retrieving the data objects with the same signature as the query. The volume of the resulting candidates is much smaller than that of the original data set.

    When measurements are recorded from different perspectives, the corresponding data come to have as many attributes as there are measurements. Some attributes are numerical, and others can be categorical. Attributes are sometimes specified by a set, e.g., text values in addresses, memos, and messages. Most LSH techniques have been developed for numerical data, and a few LSH techniques have been proposed for categorical data and set-valued data [1]. Nevertheless, much real-life data contains numerical and categorical attributes together. Little attention has been paid to LSH techniques that deal with data containing such mixed attributes.

    This paper is concerned with an LSH technique applicable to large collections of data with both numerical and categorical attributes. The remainder of the paper is organized as follows: Section 2 contains a description of LSH and existing techniques for numerical and categorical data, respectively. Section 3 details the proposed LSH technique to incorporate dual hash functions to handle both numerical and categorical attributes. Some experimental results of the proposed method are presented in Section 4, and conclusions are offered in Section 5.

    2. Related Works

       2.1 Locality-Sensitive Hashing

    To efficiently locate a specific data item in a collection of data, a number of indexing structures have been developed in the literature [2-8]. Most of these adopt either tree-based or hashing-based approaches. Tree-based indexing methods encode the space partitioning information into a tree structure in which the data of interest can be found by scanning down the tree. Examples of tree-based indexing methods are the kd-tree, R-tree, hierarchical k-means, ball tree, spatial tree, and spill tree [5, 9]. They are efficient at handling search tasks in low-dimensional space, but their performance degrades drastically as the dimensionality of the space increases. Hashing is an approach for directly locating data by computing its location from its value using a special function called a hash function. A hash function is a computable map that projects a large data set, called keys, to a smaller index set of a fixed length. Each index of the index set has its own bucket to which corresponding keys are mapped. When two or more data objects fall into a bucket together, they are said to collide. Since the index set is usually much smaller than the data set, buckets come to have multiple colliding data objects [1].

    In nearest neighbor search and similar pair identification, it is desirable to find data similar to the given data rather than to locate exactly that data. LSH is an approximate search technique for data items similar to the one being queried. It uses special hash functions that map similar data objects to the same bucket with high probability, whereas conventional hash functions do not make any assumptions about the similarity or homogeneity of the data within buckets.

    The notion of locality-sensitive hashing was first introduced by Gionis et al. [10] and Indyk and Motwani [11], who defined locality-sensitive hash functions as follows:

    A family of functions $\mathcal{H} = \{h : S \to U\}$ is an LSH family when, for any two points $p, q \in S$ and any function $h \in \mathcal{H}$, the following conditions hold:

$$\text{if } d(p, q) \le r_1, \text{ then } \Pr[h(p) = h(q)] \ge P_1;$$
$$\text{if } d(p, q) \ge r_2, \text{ then } \Pr[h(p) = h(q)] \le P_2.$$

    Here, $d(p, q)$ denotes the distance between $p$ and $q$, $\Pr[h(p) = h(q)]$ indicates the probability of $h(p) = h(q)$, $r_1$ and $r_2$ are constants for distances ($r_1 < r_2$), and $P_1$ and $P_2$ are constants for probabilities ($P_1 > P_2$). A family of functions satisfying the above conditions is called $(r_1, r_2, P_1, P_2)$-sensitive.
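
    As a concrete instance, consider the classical bit-sampling family for the Hamming distance from [11]: for binary vectors $p, q \in \{0, 1\}^n$, let $h_i(p) = p_i$ for $i = 1, \ldots, n$. A uniformly sampled $h_i$ collides on $p$ and $q$ exactly when the sampled bit agrees, so

$$\Pr[h(p) = h(q)] = 1 - \frac{d_H(p, q)}{n},$$

    which makes the family $(r_1, r_2, 1 - r_1/n, 1 - r_2/n)$-sensitive for any $r_1 < r_2$.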

    Most LSH techniques encode data objects into short binary strings that play the role of signatures. Data objects with the same binary strings are maintained in the same buckets. When a query is made, it is also encoded into a binary string using the same LSH technique. Following this, the data objects from the bucket with that binary string are examined in a nearest neighbor search, because the bucket is likely to contain data objects similar to the one being queried.

       2.2 LSH for Numerical Data

    Most LSH techniques make use of multiple binary hash functions, each of which produces a single bit; the bits are subsequently concatenated into a binary string. The binary strings play the role of bucket identifiers for the corresponding data objects. Many LSH techniques have been developed for data with only numerical attributes [12-18]. The following describes a few of them.

    LSH for binary codes by Andoni and Indyk [12] and Datar et al. [13] is a method that uses several hyperplanes defined by randomly selected projection vectors. Each hyperplane plays the role of a hash function that produces a single bit: 1 for one side of the hyperplane and 0 for the other side.
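
    As an illustration, the following is a minimal sketch of this scheme in Python. Drawing the projection vectors from a standard Gaussian is a common choice; details such as the quantization step of [13] are omitted, and all names are ours.

```python
import numpy as np

class RandomHyperplaneLSH:
    """Minimal sketch: each of n_bits random projection vectors defines
    a hyperplane through the origin, and each hyperplane contributes
    one signature bit (1 for one side, 0 for the other)."""

    def __init__(self, dim, n_bits, seed=0):
        rng = np.random.default_rng(seed)
        self.planes = rng.standard_normal((n_bits, dim))  # one vector per bit

    def signature(self, x):
        bits = (self.planes @ x) >= 0  # which side of each hyperplane
        return ''.join('1' if b else '0' for b in bits)

# Nearby vectors tend to receive identical signatures:
lsh = RandomHyperplaneLSH(dim=10, n_bits=8)
x = np.random.rand(10)
print(lsh.signature(x), lsh.signature(x + 0.01))
```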

    Boosted similarity-sensitive hashing [19] is a supervised method that first samples a subset of the data to determine similar and dissimilar pairs, regarding similar pairs as positive examples and dissimilar pairs as negative ones. It then learns weak classifiers to separate the positives from the negatives using the AdaBoost algorithm [20].

    The restricted Boltzmann machine (RBM) is a kind of Markov random field with a layered network that allows connections only between the visible layer and the hidden layer, where the nodes of the network are stochastic binary units. An RBM can learn to maximize the probability of reconstructing the original input at the visible layer by activating the hidden layer, using a contrastive divergence sampling-based update rule. The RBM-based LSH method [16] uses a stacked RBM with multiple layers, where the upper layers have a gradually decreasing number of nodes. The outputs of the topmost layer are the binary hash codes for the data fed into the bottommost layer as input.

    The spectral hashing method [18, 21] first finds the principal axes of the data set and aligns the data along these axes. It then chooses the k one-dimensional eigenfunctions with the smallest eigenvalues and produces binary codes for the data by thresholding the eigenfunction values at level zero.
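
    A greatly simplified sketch follows, under the assumption that projecting onto the principal axes and thresholding at zero stands in for evaluating the analytical one-dimensional eigenfunctions; the true method of [18] additionally selects eigenfunctions by eigenvalue.

```python
import numpy as np

def pca_threshold_codes(X, n_bits):
    """Center the data, project onto the top principal axes found by
    SVD, and threshold each projection at zero to obtain one bit per
    axis. This is a simplification of spectral hashing [18]."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)  # principal axes
    return (Xc @ Vt[:n_bits].T >= 0).astype(int)
```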

    In semi-supervised hashing [17, 22], both labeled and unlabeled data pairs are used to determine projection vectors to minimize the empirical error in the labeled data while maximizing the entropy of the generated hash bits over the unlabeled data. The projection vectors are obtained by eigendecomposing a K × K matrix, where K is the number of hash functions.

    In unsupervised sequential projection learning for hashing [17], hash functions are learned sequentially as pseudo-labels are generated at each iteration. Each subsequent hash function is determined so that it corrects the errors made by the preceding hash functions. The pseudo-labels are assigned to pairs of close data points residing on opposite sides of the hyperplane and pairs of distant data points on the same side of the hyperplane. Once the pseudo-labels are assigned, the method works in a manner similar to semi-supervised hashing [22].

    In density-sensitive hashing [15], hash functions are determined by taking into account the distribution of the data set. The LSH method first applies a k-means algorithm to the data set to generate small groups and determines pairs of adjacent groups. It then finds median planes to separate adjacent groups and evaluates them according to their entropy. It chooses as hash functions the top-ranked median planes in order of descending entropy score.
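
    A hedged sketch of this procedure follows; treating each group's nearest neighboring group as its "adjacent" group is a simplifying assumption of ours, and the function names are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def density_sensitive_planes(X, n_groups=20, n_bits=10, seed=0):
    """Sketch of density-sensitive hashing [15]: cluster the data with
    k-means, form the median plane between each group and its nearest
    neighboring group, and keep the n_bits planes whose induced bit
    assignment has the highest entropy."""
    km = KMeans(n_clusters=n_groups, n_init=10, random_state=seed).fit(X)
    centers = km.cluster_centers_
    planes = []
    for i in range(n_groups):
        d = np.linalg.norm(centers - centers[i], axis=1)
        d[i] = np.inf
        j = int(np.argmin(d))                     # nearest "adjacent" group
        w = centers[j] - centers[i]               # normal of the median plane
        b = -w @ (centers[i] + centers[j]) / 2.0  # plane through the midpoint
        p = float(np.mean(X @ w + b >= 0))        # fraction on positive side
        h = 0.0 if p in (0.0, 1.0) else -(p * np.log2(p) + (1 - p) * np.log2(1 - p))
        planes.append((h, w, b))
    planes.sort(key=lambda t: -t[0])              # descending entropy
    return [(w, b) for _, w, b in planes[:n_bits]]

def hash_code(x, planes):
    # One bit per retained median plane.
    return ''.join('1' if w @ x + b >= 0 else '0' for w, b in planes)
```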

       2.3 LSH for Categorical Data

    In LSH for categorical data [1], a similarity matrix for the categorical values is first determined for each categorical attribute using a data-driven distance [23]. Hierarchical clustering is then carried out on the categorical values with respect to this similarity matrix. Following this, clusters of categorical values are selected at an appropriate level of the dendrogram of the hierarchical clustering. Each cluster then contains a set of categorical values and is assigned a binary code. A hash function is defined for each categorical attribute that produces the binary code corresponding to the value of the categorical attribute. When there are multiple attributes in the data, an LSH function is defined by combining the hash functions for the categorical attributes. The LSH function produces a binary string for the categorical values and associates a bucket with each binary string in the range of the LSH function.
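
    The per-attribute part of this pipeline can be sketched as follows, assuming a precomputed similarity matrix over the attribute's values (e.g., from one of the data-driven measures of [23]); the average-linkage choice and the cut into n_groups clusters are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def categorical_hash_fn(values, sim_matrix, n_groups=3):
    """Hierarchically cluster the categorical values of one attribute
    using a similarity matrix, cut the dendrogram into n_groups
    clusters, and map each value to its cluster's code."""
    dist = 1.0 - np.asarray(sim_matrix, dtype=float)  # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method='average')
    labels = fcluster(Z, t=n_groups, criterion='maxclust')
    code = {v: int(c) - 1 for v, c in zip(values, labels)}
    return lambda v: code[v]  # the hash function for this attribute
```

    A full LSH function for q categorical attributes then concatenates the q per-attribute codes into a single string.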

    3. Locality-Sensitive Hashing for Data with Numerical and Categorical Attributes

    There is no natural way to place data with categorical attributes in a coordinate system, because there is no inherent ordering relationship among categorical values. When numerical attributes occur together with categorical attributes, data objects therefore cannot be projected into a single coordinate system. LSH techniques for numerical data basically partition the (Euclidean) data space into subspaces, each of which corresponds to a bucket of a hash function. Since no convenient distance measure is available for data having both numerical and categorical attributes, it is better to consider the numerical attributes and the categorical attributes separately. The similarity between a query and a data object is regarded as high when the similarity of the numerical attributes and that of the categorical attributes are simultaneously high. When collecting the candidate data for a query, it is therefore effective to take the intersection of the set of data objects whose numerical attributes are similar to those of the query and the set of data objects whose categorical attributes are similar to those of the query. We thus propose an LSH technique that takes this approach.

       3.1 The Proposed Dual Hashing Structure

    To support LSH for data with numerical and categorical attributes, our proposed method uses two kinds of hashing: one for the numerical attributes and the other for the categorical attributes. For convenience of description, we use the following notation: $A = \{A_{N_1}, \ldots, A_{N_p}, A_{C_1}, \ldots, A_{C_q}\}$ denotes the set of attributes, with $p$ numerical attributes $A_{N_1}, \ldots, A_{N_p}$ and $q$ categorical attributes $A_{C_1}, \ldots, A_{C_q}$. $H_N(A_{N_1}, \ldots, A_{N_p}; Q)$ indicates the hashing result for the numerical attributes of a query $Q$ using hashing function $H_N$, and $H_C(A_{C_1}, \ldots, A_{C_q}; Q)$ is the hashing result for the categorical attributes of a query $Q$ using hashing function $H_C$. $B_{N_1}, \ldots, B_{N_m}$ and $B_{C_1}, \ldots, B_{C_n}$ are the buckets for the corresponding hash values of hash functions $H_N$ and $H_C$, respectively. Let $D = \{D_1, \ldots, D_s\}$ be the given data set, where $D_i$ is the $i$-th data item.

    Figure 1 shows the proposed dual hashing structure for data with both numerical and categorical attributes. For the given data set $D$, the proposed method maps each data object $D_i$ into a numerical bucket $B_{N_j}$ and a categorical bucket $B_{C_k}$. The buckets maintain only the IDs of the data objects belonging to them, so that duplicate copies of data objects are not stored.

    When a query $Q$ is given, its numerical hash code $h_N$ and categorical hash code $h_C$ are computed as follows:

$$h_N = H_N(A_{N_1}, \ldots, A_{N_p}; Q)$$
$$h_C = H_C(A_{C_1}, \ldots, A_{C_q}; Q)$$

    The candidate data set $D_Q$ similar to query $Q$ is the intersection of the corresponding buckets $B_{N_{h_N}}$ and $B_{C_{h_C}}$:

$$D_Q = B_{N_{h_N}} \cap B_{C_{h_C}}$$
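
    The structure can be summarized with a short sketch, assuming two already-constructed hash functions h_num and h_cat (our names) standing in for $H_N$ and $H_C$:

```python
from collections import defaultdict

class DualLSHIndex:
    """Sketch of the dual hashing structure: each data object is
    indexed once by the numerical hash and once by the categorical
    hash; buckets hold object IDs only, so no data is duplicated."""

    def __init__(self, h_num, h_cat):
        self.h_num, self.h_cat = h_num, h_cat
        self.num_buckets = defaultdict(set)  # B_N: code -> object IDs
        self.cat_buckets = defaultdict(set)  # B_C: code -> object IDs

    def insert(self, obj_id, num_attrs, cat_attrs):
        self.num_buckets[self.h_num(num_attrs)].add(obj_id)
        self.cat_buckets[self.h_cat(cat_attrs)].add(obj_id)

    def candidates(self, num_attrs, cat_attrs):
        # D_Q = B_N(h_N) ∩ B_C(h_C): IDs similar in both views.
        return (self.num_buckets[self.h_num(num_attrs)]
                & self.cat_buckets[self.h_cat(cat_attrs)])
```

    Because each bucket stores only IDs, the two hash tables together hold 2s IDs rather than two copies of the data set.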

       3.2 LSH for Numerical Attributes

    To get the specified number $k$ of nearest neighbors to the query $Q$, the numerical similarity $S_N(D_i, Q)$ and the categorical similarity $S_C(D_i, Q)$ are computed for each data item $D_i \in D_Q$. When $S_N(D_i, Q)$ is computed, only the numerical attribute values of $D_i$ and the corresponding numerical attribute values of query $Q$ are considered. Most LSH techniques for numerical data assume that similarities are computed using the Euclidean distance after the data have been standardized to mean 0 and standard deviation 1.
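
    As a small illustration, the following assumes standardization statistics computed over D; converting the distance d to a similarity via 1/(1 + d) is our assumption, since the text fixes only the standardized Euclidean distance.

```python
import numpy as np

def numerical_similarity(x, q, mean, std):
    """Standardize with the statistics of D, then compare by Euclidean
    distance; the 1/(1 + d) conversion to a similarity is illustrative."""
    zx = (np.asarray(x) - mean) / std
    zq = (np.asarray(q) - mean) / std
    return 1.0 / (1.0 + np.linalg.norm(zx - zq))
```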

       3.3 LSH for Categorical Attributes

    A typical distance measure $d_k(a, b)$ for categorical values $a$ and $b$ is defined as $d_k(a, b) = 0$ if $a = b$, and $d_k(a, b) = 1$ otherwise. That is, it outputs 1 as the distance when the two values are different and 0 when they are the same. The corresponding similarity measure $S_k(a, b)$ is defined as follows:

$$S_k(a, b) = \begin{cases} 1 & \text{if } a = b \\ 0 & \text{otherwise.} \end{cases}$$

    Hence, the measure does not convey any distance information when the compared categorical values are different. To handle this problem, data-driven distance measures have been studied for data with categorical attributes [23]. They make use of the attribute value distributions in a given data set to determine the distances between attribute values, i.e., between categorical values. Boriah et al. [23] surveyed data-driven similarity measures on categorical values. Three of these measures are presented in the following, where $S_k(a, b)$ denotes the similarity of categorical values $a$ and $b$ for the $k$-th categorical attribute $A_{C_k}$, $f_k(a)$ is the number of times attribute $A_{C_k}$ takes the value $a$ in data set $D$, and $p_k(a)$ is the sample probability of $A_{C_k}$ taking $a$, i.e., $p_k(a) = f_k(a)/N$, where $N$ is the number of data items in $D$. $\hat{p}_k(a)$ is another probability estimate of $A_{C_k}$ taking $a$, defined as $\hat{p}_k(a) = f_k(a)(f_k(a) - 1)/(N(N - 1))$.

    (IOF measure)

$$S_k(a, b) = \begin{cases} 1 & \text{if } a = b \\ \dfrac{1}{1 + \log f_k(a) \cdot \log f_k(b)} & \text{otherwise} \end{cases}$$

    (OF measure)

$$S_k(a, b) = \begin{cases} 1 & \text{if } a = b \\ \dfrac{1}{1 + \log\frac{N}{f_k(a)} \cdot \log\frac{N}{f_k(b)}} & \text{otherwise} \end{cases}$$

    (Goodall measure)

$$S_k(a, b) = \begin{cases} 1 - \hat{p}_k(a) & \text{if } a = b \\ 0 & \text{otherwise} \end{cases}$$
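
    The three measures can be implemented directly from a frequency table of the attribute's values in D; the following is a straightforward sketch of the formulas above.

```python
import math
from collections import Counter

def make_measures(column):
    """Return the per-attribute IOF, OF, and Goodall similarities,
    given the list of values the attribute takes in data set D."""
    f = Counter(column)   # f_k(a): frequency of each value
    N = len(column)

    def iof(a, b):
        if a == b:
            return 1.0
        return 1.0 / (1.0 + math.log(f[a]) * math.log(f[b]))

    def of(a, b):
        if a == b:
            return 1.0
        return 1.0 / (1.0 + math.log(N / f[a]) * math.log(N / f[b]))

    def goodall(a, b):
        if a != b:
            return 0.0
        return 1.0 - f[a] * (f[a] - 1) / (N * (N - 1))  # 1 - p_hat_k(a)

    return iof, of, goodall
```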

    When there are multiple categorical attributes $A_{C_1}, A_{C_2}, \ldots, A_{C_q}$, the categorical similarity $S_C(D_i, D_j)$ is computed as the weighted sum of the similarities of the individual attributes, where $\omega_k$ is the weighting factor for attribute $A_{C_k}$ and $av_k(D_i)$ is the $k$-th categorical attribute value of the $i$-th data item $D_i$:

$$S_C(D_i, D_j) = \sum_{k=1}^{q} \omega_k\, S_k(av_k(D_i), av_k(D_j))$$

    The numerical and categorical similarities are combined into a single value $S(D_i, Q)$ using a weighted sum, where $w \in (0, 1)$:

$$S(D_i, Q) = w\, S_N(D_i, Q) + (1 - w)\, S_C(D_i, Q)$$
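
    A minimal sketch of these two weighted sums follows, with illustrative names; sims[k] would be one of the per-attribute measures above and weights[k] the corresponding ω_k.

```python
def categorical_similarity(di_vals, q_vals, sims, weights):
    """S_C: weighted sum of per-attribute similarities over the q
    categorical attributes."""
    return sum(w * s(a, b)
               for w, s, a, b in zip(weights, sims, di_vals, q_vals))

def overall_similarity(s_n, s_c, w=0.5):
    """S(D_i, Q) = w * S_N + (1 - w) * S_C with w in (0, 1);
    w = 0.5 is an arbitrary illustrative choice."""
    return w * s_n + (1.0 - w) * s_c
```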

    4. Experiments

    To test the performance of the proposed method, we conducted a few experiments under the following conditions. For LSH on the numerical attributes, we used density-sensitive hashing [15], which exploits the geometric structure of the data to generate hash codes. For LSH on the categorical attributes, we used data-driven categorical LSH [1], which makes use of data-driven similarity measures for categorical values.

    We conducted experiments on two synthetically generated data sets. Each synthetic data set consists of 1,000,000 data objects with 10 numerical attributes and five categorical attributes, each of which has 10 categorical values. The first data set D1 was generated from uniform distributions: the numerical attributes were drawn from the space $[0, 10]^{10}$, and the categorical attributes had uniform probability distributions over the 10 categorical values. The second data set D2 was generated to contain 40 clusters, whose mean vectors were randomly selected and whose covariance matrices were set to diag(3, 3, ..., 3). Each categorical attribute was set to follow a multinomial distribution whose probabilities were drawn from a uniform Dirichlet distribution Dir(0.1, 0.1, ..., 0.1). A total of 25,000 data objects were generated for each cluster.
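
    For concreteness, the following sketches how data in the spirit of D2 can be generated; the range from which the cluster means are drawn is our assumption, since the text only states that they were randomly selected.

```python
import numpy as np

rng = np.random.default_rng(42)
n_clusters, per_cluster, n_num, n_cat, n_vals = 40, 25_000, 10, 5, 10

num_parts, cat_parts = [], []
for _ in range(n_clusters):
    mean = rng.uniform(0, 10, n_num)  # assumed range for the cluster means
    num_parts.append(
        rng.multivariate_normal(mean, np.diag([3.0] * n_num), per_cluster))
    # Per-cluster multinomials over the 10 values, drawn from Dir(0.1, ..., 0.1).
    probs = rng.dirichlet([0.1] * n_vals, n_cat)
    cat_parts.append(
        np.stack([rng.choice(n_vals, per_cluster, p=p) for p in probs], axis=1))

X_num = np.vstack(num_parts)  # 1,000,000 x 10 numerical attributes
X_cat = np.vstack(cat_parts)  # 1,000,000 x 5 categorical attributes
```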

    For the numerical attributes, 10-bit LSH codes were generated using density-sensitive hashing. For the categorical attributes, the categorical values of each attribute were encoded into three disjoint groups, so $3^5$ indices were created altogether. For both data sets D1 and D2, 30 randomly generated queries were processed against the candidates suggested by the proposed method, and the five nearest neighbors were searched for. The evaluation was carried out against the ground truth created through an exhaustive search of the entire data set for each query. When the candidate subset was selected using the LSH method for numerical attributes, only the data objects having the same hash code as the query were chosen, without considering neighboring buckets; for the hash codes of the categorical LSH, the Hamming distance between hash codes does not carry any neighborhood information. Table 1 shows the experimental results in terms of recall, where Num. LSH indicates the LSH method that considers only the numerical attributes, Cat. LSH is the LSH method that takes into account only the categorical attributes, and Mixed LSH denotes the proposed dual hashing technique that uses both LSH methods together.

    [Table 1.] Experimental results of the proposed method (recall)

    5. Conclusions

    For big data applications, nearest neighbor search and similar pair identification can be burdensome tasks even though they are rather fundamental operations. Various LSH techniques have been developed to handle such tasks in an efficient but approximate manner. Most LSH techniques are applicable only to numerical data, and little attention has thus far been paid to how LSH can be engineered for data sets containing numerical and categorical attributes together. In this paper, we proposed a method that combines numerical and categorical LSH techniques to handle this problem. From the experiments, we concluded that the proposed approach provides improved performance, as expected.

References
  • 1. Lee K. M. 2012 “Locality-Sensitive Hashing Techniques for Nearest Neighbor Search,” [International Journal of Fuzzy Logic and Intelligent Systems] Vol.12 P.300-307
  • 2. Bentley J. L. 1975 “Multidimensional Binary Search Trees used for Associative Searching,” [Communications of the ACM] Vol.18 P.509-517
  • 3. Cormen T. H., Leiserson C. E., Rivest R. L. 2001 Introduction to Algorithms
  • 4. Kim D. W., Lee K. H. 2003 “A fuzzy clustering algorithm for clustering categorical data,” [Journal of Korean Institute of Intelligent Systems] Vol.13 P.661-666
  • 5. Omohundro S. M. 1989 Five balltree construction algorithms, International Computer Science Institute Technical Report
  • 6. Pandey S., Broder A., Chierichetti F. April 20-24, 2009 “Nearest-Neighbor Caching for Content-Match Applications,” [Proceedings of the 18th International Conference on World Wide Web] P.441-450
  • 7. Potthast M., Stein B., Preisach C., Burkhardt H., Schmidt-Thieme L., Decker R. 2008 “New Issues in Near-Duplicate Detection,” P.601-609
  • 8. Uhlmann J. K. 1991 “Satisfying general proximity/similarity queries with metric trees,” [Information Processing Letters] Vol.4 P.175-179
  • 9. Nister D., Stewenius H. June 17-22, 2006 “Scalable Recognition with a Vocabulary Tree,” [Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition] P.2161-2168
  • 10. Gionis A., Indyk P., Motwani R. 1999 “Similarity Search in High Dimensions via Hashing,” [Proceedings of the 25th International Conference on Very Large Data Bases] P.518-529
  • 11. Indyk P., Motwani R. May 24-26, 1998 “Approximate Nearest Neighbors: Towards Removing the Curse of Dimensionality,” [Proceedings of the 30th Annual ACM Symposium on Theory of Computing] P.604-613
  • 12. Andoni A., Indyk P. 2008 “Near-optimal Hashing Algorithms for Approximate Nearest Neighbor in High Dimensions,” [Communications of the ACM] Vol.51 P.117-122
  • 13. Datar M., Immorlica N., Indyk P., Mirrokni V. S. June 8-11, 2004 “Locality-sensitive Hashing Scheme based on p-stable Distributions,” [Proceedings of the 20th Annual Symposium on Computational Geometry] P.253-262
  • 14. Lee K. M., Lee K. M. 2012 “A Locality Sensitive Hashing Technique for Categorical Data,” [Applied Mechanics and Materials] Vol.241-244 P.3159-3164
  • 15. Lin Y., Cai D. “Density Sensitive Hashing,”
  • 16. Salakhutdinov R. R., Hinton G. E. 2009 “Semantic hashing,” [International Journal of Approximate Reasoning] Vol.50 P.969-978
  • 17. Wang J., Kumar S., Chang S.-F. June 21-24, 2010 “Sequential Projection Learning for Hashing with Compact Codes,” [Proceedings of the 27th International Conference on Machine Learning]
  • 18. Weiss Y., Torralba A., Fergus R. December 8-10, 2008 “Spectral hashing,” [Proceedings of the 22nd Annual Conference on Neural Information Processing Systems] P.1753-1760
  • 19. Shakhnarovich G., Viola P., Darrell T. October 13-16, 2003 “Fast Pose Estimation with Parameter Sensitive Hashing,” [Proceedings of the 9th IEEE International Conference on Computer Vision] P.750-757
  • 20. Schapire R. E., Denison D., Hansen M., Holmes C., Mallick B., Yu B. 2003 “The Boosting Approach to Machine Learning: An Overview,” P.149-171
  • 21. von Luxburg U. 2007 “A Tutorial on Spectral Clustering,” [Statistics and Computing] Vol.17 P.395-416
  • 22. Wang J., Kumar S., Chang S.-F. 2012 “Semi-Supervised Hashing for Large Scale Search,” [IEEE Transactions on Pattern Analysis and Machine Intelligence] Vol.34 P.2393-2406
  • 23. Boriah S., Chandola V., Kumar V. April 24-26, 2008 “Similarity Measures for Categorical Data: A Comparative Evaluation,” [Proceedings of the 8th SIAM International Conference on Data Mining] P.243-254
[Figure 1.] Dual hashing for numerical and categorical attributes.