An Object-Level Feature Representation Model for the Multi-target Retrieval of Remote Sensing Images
  • CC BY-NC (Non-commercial)
ABSTRACT
KEYWORD
Remote sensing, Image processing, Spatial representation, 9DLT, Content-based remote sensing image retrieval
    Ⅰ. INTRODUCTION

    Along with the rapid progress of satellite sensor technology and its application to high-resolution remote sensing images in Earth observation systems, a large amount of remote sensing data has become readily available. In terms of spatial information, terrain geometry, and texture, high-resolution remote sensing images carry more features than medium- or low-resolution images. To use the image database fully and to retrieve information of interest automatically and intelligently, an efficient technology for multi-target retrieval (MTR) within an image, particularly within a specified region, needs to be developed.

    The number of image-processing applications for target retrieval is increasing, such as Query by Image Content (QBIC) from IBM [1]. Most studies in this area have focused on content-based image retrieval (CBIR) and content-based remote sensing image retrieval (CBRSIR), and have achieved significant results. In these approaches, the contents of an image, specified by several low-level features, such as color, texture, shape, longitude and latitude, and the spatial relationships among objects, form the basis of multidimensional image feature vectors. Given the differences in imaging conditions across various forms of remote sensing images, image contents cannot be expressed exactly by a single feature. Therefore, constructing comprehensive features of an image is the key to improving retrieval performance [2]. However, if the combined features cannot be unified into a single model, then the accuracy of similarity computation and the efficiency of image retrieval will suffer. For example, if we focus more on spatial relationships, then the detail retained for each target will be minimal; in exchange, MTR becomes more efficient than comparing the features of each object individually. To effectively support the information retrieval process for remote sensing images, an object-level model is proposed, which can represent the contents of an image with good overall accuracy. By using this model, we can retrieve and operate on information pre-stored in a symbolic image database (SID) with high efficiency, while neglecting intrinsic information, such as color, texture, and shape. To date, research on feature representation models of image data for MTR remains limited. To build feature indices and realize rapid retrieval, we propose an object-level feature representation model, based on previous research on CBRSIR and the MPEG-7 standards, starting from representing the contents of an image at the object level, particularly the spatial relationships among targets.

    The rest of this paper is organized as follows. Section Ⅱ discusses related literature on representation techniques of image contents for CBIR and CBRSIR. Section Ⅲ introduces the calculation and representation of feature values, and mainly describes the spatial representation of the extended nine-direction lower-triangular matrix (9DLT). Section Ⅳ presents the image content feature representation model. Section Ⅴ proposes an MTR model and the similarity calculation. The last section presents several experiments to validate the accuracy of the content-based feature representation model and the efficiency of image target extraction, and concludes the study.

    Ⅱ. RELATED STUDIES

    Over the past three decades, researchers have produced a large body of results on CBIR and CBRSIR. At present, CBIR has many successful applications in fields such as facial recognition, medical diagnosis, and trademark registration. Most of these systems adopt a single feature or combined features as image indices [3-10]. CBRSIR is similar to CBIR, because both involve visual and geographic information. Several systems have focused on the issue of spectral retrieval, such as texture representation and different combinations with spectral bands [11]. A special feedback approach has been employed to precisely describe the desired search characteristic in a remote sensing image [12]. Some researchers presented a code stream of images for remote sensing image retrieval [13]. Others combined a scheme with an automatic classifier and proposed a new 'texton histogram' feature to capture the weak-textured characteristic of remote sensing images for image retrieval [14]. Meanwhile, others applied a texture analysis approach, called the local binary pattern operator, to implement image retrieval [15]. Some studies applied independent component analysis to extract independent components of feature values via linear combinations, to realize multi-spectral image retrieval [16]; or adopted principal component analysis and a clustering technique to index remote sensing images for image retrieval [17]. Considering various features, such as color, texture, and spectra, a prototype model for CBRSIR based on color moments and gray-level co-occurrence matrix features was proposed [18]. A number of researchers combined several properties (color, texture, and points of interest) that were automatically extracted, and immediately indexed the images [19]. In addition, some researchers proposed a framework based on a domain-dependent ontology to perform semantic retrieval in image archives [20]. Other scholars presented a universal semantic data model for image retrieval [21].

    Regardless of how a feature vector is established, it still depends upon the representation of the contents of images, which can be represented in numerous ways. Some approaches adopt a quad-tree structure or a quin-tree method that splits large-scale remote sensing images into sub-images, to extract multiple features, such as color and texture [22, 23]. Others use the 2D C-string to represent spatial knowledge in an image database [24]; or the spanning representation of an object to realize spatial inference and similarity retrieval in an image database, through directional relation reference frames [25]. Others depict the relationships among spatial objects by using the nine-direction spanning area [26] or 9DLT [27]; represent image colors by using pyramid technology [28]; or express an image by employing a symbolic index established through image space stratification [29]. All the aforementioned representations, covering color, space, and sub-images, belong to feature representation methods of image contents. Implementing rapid and accurate retrieval over massive remote sensing imagery is difficult, because its features include various data types, resolution scales, and data sources. In our investigation, we analyze the contents of an image based on the MPEG-7 standard to organize the features of the image, build an SID, and index the SID to accelerate target retrieval.

    Ⅲ. CALCULATING FEATURE VALUE

    The key to improving image retrieval efficiency is the indexing technique, which involves obtaining objects after image segmentation and building an SID for the image database. In the present study, we adopt a mature algorithm called object-oriented multiscale image segmentation. The object-oriented image processing algorithm is a synthetic algorithm that fuses spectral characteristics, geometric information, and structural information. This algorithm regards an object as the minimal processing unit, retrieving its multiple characteristics to form a logical relationship between images and objects. Then, we analyze the image from the local to the global level and ultimately arrive at its interpretation. In general, multiscale image segmentation begins from arbitrary pixels and uses a region-merging method, from the bottom up, to form objects. Small objects can be merged to form large objects, and the size of each object must satisfy the requirement that the heterogeneity of a merged object be less than a given threshold. In this case, heterogeneity is determined by differences in the spectra and shapes of objects. However, different features correspond to different observation scales, and each feature can be extracted and accurately analyzed in an image layer at a proper scale [30]. In particular, we use the threshold value method at multiple scales to segment an image.

    After processing the calibration, segmentation, and raster-to-vector conversion of the image based on a specified region of latitude and longitude, the basic unit of the image is no longer a single pixel, but a polygon composed of homogeneous pixels. For each polygon, we can calculate the spectral information of its pixels, including shape, texture, and color information, as well as the topological relationships among the polygons. Next, we introduce the methods for calculating the feature vectors that implement the representation model.

      >  A. Shape

    Shape is a key feature used to differentiate two objects. It is also the basis for characteristic retrieval and the classification process described later in this paper. In general, in the field of object-level content retrieval, shape remains the most basic feature for distinguishing objects. At present, two approaches are used to describe shapes: parametric and geometric. In the present investigation, we adopt a geometric approach to characterize the shapes of different objects, namely, the centroid-radii representation model [31].

    For an arbitrary polygon, such as the one shown in Fig. 1(a), the result of resampling its boundary at angular intervals of θ, measured counterclockwise from the y-axis, is shown in Fig. 1(b). Let lk be the distance between the centroid of the polygon and the k-th boundary sampling point. The shape descriptor of the polygon can then be expressed by the centroid-radius model, as follows:
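    A plausible form of Eq. (1), consistent with the definitions of θ and lk given above (offered as a reconstruction rather than the paper's exact notation), is the vector of sampled centroid radii:

    $$ f_{\mathrm{shape}} = (l_0,\, l_1,\, \ldots,\, l_{N-1}), \qquad N = 360^{\circ}/\theta , $$

    where lk denotes the centroid-to-boundary distance in the direction kθ.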

    The condition for measuring the similarity between two polygons based on shape is as follows: two polygons are considered similar if and only if the difference between their centroid radii in every direction is less than a given small threshold ε. That is, when two polygons are similar, their shape descriptors must satisfy the following regulation:
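    A form of regulation (2) consistent with this description (again a reconstruction) is:

    $$ \bigl| l_k^{A} - l_k^{B} \bigr| < \varepsilon \quad \text{for all } k = 0, 1, \ldots, N-1 , $$

    where $l_k^{A}$ and $l_k^{B}$ are the normalized centroid radii of polygons A and B in direction k.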

    To ensure the scale invariance of shape under regulation (2), we normalize the Euclidean distance between the centroid and each boundary point to the range [0, 1]. In this study, we consider most of the possible transformations between two feature vectors. One of these transformations involves the possible rotations between two shapes, and distances that are independent of rotation, including the choice of starting and ending points.

    After transforming image shapes into matrix space, we store the data using the antipole tree structure [32].
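    As a concrete illustration of Section Ⅲ-A, the following minimal Python sketch computes a normalized centroid-radius descriptor and tests regulation (2) under cyclic shifts to absorb rotation; the angular binning strategy and the use of the farthest boundary point per bin are simplifying assumptions for illustration, not prescriptions of the paper.

```python
import numpy as np

def centroid_radii(polygon, theta_deg=10):
    """Centroid-radius shape descriptor sketch (Section III-A).

    `polygon` is an (M, 2) array of boundary points (x, y). The centroid is
    approximated by the mean of the boundary points, and the radius of each
    angular bin (counterclockwise from the y-axis) is the distance to the
    farthest boundary point falling in that bin. Radii are normalized so the
    descriptor is scale invariant.
    """
    pts = np.asarray(polygon, dtype=float)
    centroid = pts.mean(axis=0)
    d = pts - centroid
    # Angle measured counterclockwise from the y-axis, as in Fig. 1.
    ang = np.degrees(np.arctan2(-d[:, 0], d[:, 1])) % 360.0
    dist = np.hypot(d[:, 0], d[:, 1])

    n_bins = int(360 // theta_deg)
    radii = np.zeros(n_bins)
    for k in range(n_bins):
        in_bin = (ang >= k * theta_deg) & (ang < (k + 1) * theta_deg)
        if in_bin.any():
            radii[k] = dist[in_bin].max()
    return radii / (radii.max() + 1e-12)   # normalize to [0, 1]

def shape_similar(r_a, r_b, eps=0.1):
    """Regulation (2): similar iff every per-direction difference is below eps.
    Rotation is handled by testing all cyclic shifts of one descriptor."""
    return any(np.all(np.abs(np.roll(r_a, s) - r_b) < eps)
               for s in range(len(r_a)))
```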

      >  B. Texture

    Different ground features in high-resolution images often appear spectrally similar to human vision. Drift in the mean value of a spectral feature may also cause similar spectra among different homogeneous samples to converge into similar modes in a feature space, thus resulting in spectrally similar features. This phenomenon is attributed to the human eye being insensitive to some portion of visible light. Therefore, we can improve the reliability of retrieval results by using features such as shape, texture, and spatial relationships as references. Texture is a significant geometric (spatial) feature that can be used to distinguish among different objects and regions, as it reflects the variation pattern of gray levels. A 2D Gabor filter is suitable for narrow-band coding of texture, because of its adjustable filtering direction, bandwidth, and band-center frequency, and its optimal joint spatial-domain and frequency-domain analysis ability. After gray-scaling and normalizing the segmented image, we apply a Gabor filter to extract the texture features of the objects. The Gabor filter function g(x, y) is a 2D Gaussian modulated by a complex sinusoid. Its Fourier transform G(u, v) can be expressed by the following equations:
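    A widely used form of the Gabor filter pair, consistent with the parameters defined below (a reconstruction rather than the paper's exact Eq. (3)), is:

    $$ g(x, y) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\!\left[-\frac{1}{2}\left(\frac{x^{2}}{\sigma_x^{2}}+\frac{y^{2}}{\sigma_y^{2}}\right)+2\pi j f x\right], \qquad G(u, v) = \exp\!\left\{-\frac{1}{2}\left[\frac{(u-f)^{2}}{\sigma_u^{2}}+\frac{v^{2}}{\sigma_v^{2}}\right]\right\}. $$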

    where σu = 1/(2πσx) and σv = 1/(2πσy); σx, σy characterize the spatial extent of the Gabor filter, and σu, σv its bandwidth in the frequency domain. In this case, (f, 0) is the central frequency of the filter in the orthogonal coordinates of the frequency domain. Let g(x, y) be the mother function that generates the Gabor filter family. The set of functions gm,n(x, y), which is a complete but non-orthogonal set, can be generated through rotation and scaling, according to Eq. (4).
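    Eq. (4), reconstructed from the scale and rotation parameters defined below, plausibly takes the standard generating form:

    $$ g_{m,n}(x, y) = a^{-m}\, g(x', y'), \qquad a > 1 , $$

    with x' and y' as given in the next paragraph.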

    where x' = a−m(x cosθn + y sinθn), y' = a−m(−x sinθn + y cosθn), a > 1, θn = nπ/K, m = 0, 1, ..., S − 1, and n = 0, 1, ..., K − 1. The parameter θn is the counterclockwise rotation angle of the filter axis. S and K are the total numbers of scales and orientations, respectively. After obtaining the convolution of the image with each filter and the resulting energy values, we calculate the mean value and the mean square deviation of the filtered energy for each object. Finally, we record the texture feature vector of the object, as shown in Eq. (5).
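    A plausible form of the texture feature vector in Eq. (5), assembled from the per-filter statistics defined below, is:

    $$ f_{\mathrm{texture}} = \bigl(\mu_{0,0},\, \sigma_{0,0},\, \mu_{0,1},\, \sigma_{0,1},\, \ldots,\, \mu_{K-1,L-1},\, \sigma_{K-1,L-1}\bigr). $$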

    where K is the number of central frequencies, L is the number of directional angles, k = 0, 1, ..., K − 1, l = 0, 1, ..., L − 1, and Ek,l(x, y) is the filtering energy value of filter (k, l). Normalization of Ek,l(x, y) is required to ensure that the energy value of each element is not affected by the actual object size. The energy value Ek,l(x, y) is calculated from the gray value p(x, y) at location (x, y) filtered by filter (k, l). Finally, the mean value μ of the energy and the mean square deviation σ over the target object (n × n pixels) can be obtained as Eqs. (6) and (7), respectively.
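    Concretely, over an n × n object window these statistics presumably take the form (a reconstruction consistent with the definitions above):

    $$ \mu = \frac{1}{n^{2}}\sum_{x=1}^{n}\sum_{y=1}^{n} E_{k,l}(x, y), \qquad \sigma = \sqrt{\frac{1}{n^{2}}\sum_{x=1}^{n}\sum_{y=1}^{n}\bigl(E_{k,l}(x, y)-\mu\bigr)^{2}} . $$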

      >  C. Spatial Representation of the Extended 9DLT

    The spatial representation of an image describes the spatial relationships among objects, making it easy to distinguish images that contain multiple targets. The spatial relationships in an image can be classified into two categories: positional and directional. The former can be represented by a 2D string, whereas the latter can be represented by 9DLT methods [25]. For a calibrated remote sensing image within a region of given latitude and longitude, the directional relationships among the objects, referenced to the four corners, are fixed. In this section, we introduce the problem definitions and preliminary concepts through formal methods.

    DEFINITION 1. Let α = (α1, α2, ..., αk) be a set of objects in the same image. Hence, each αi is an element of α.

    DEFINITION 2. The spatial relationship between two objects can be defined as one of the codes in nine directions, which is called 9DLT.

    DEFINITION 3 (The 9DLT matrix). Let V = {v1, v2, v3, ..., vm} be a set of m distinct objects, and let Z be composed of z1, z2, z3, ..., zs in order, where i = 1, 2, ..., s and zi ∈ V. Suppose C is the collection of nine-direction codes, as shown in Fig. 2(a). Each direction code can then be used to specify the spatial relationship between two objects. Thus, a 9DLT matrix T is an s × s matrix composed of entries tij, each belonging to the collection of direction codes C. The entry tij at row i and column j represents the direction code from zj to zi, and is defined only when i and j satisfy the condition j < i, with i, j ∈ {1, ..., s}.

    As shown in Fig. 2(a), let R be the referenced object, expressed by 0; the direction codes 1, ..., 8 are then defined at 45° intervals counterclockwise from north. Each object in the source image is represented by its centroid in the 9DLT expression. Fig. 2(b) shows a feature image that contains four objects. Fig. 3(a) exhibits the direction map in the grid between the objects, whereas the direction codes of the LT matrix in Fig. 3(b) record the spatial relationships among the objects. The 9DLT string, read in column order, is (A, B, C, D, 6, 6, 6, 7, 5, 4). Each entry of the matrix records the relationship between one pair of objects.
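    The following Python sketch shows how a 9DLT matrix and its column-order string could be assembled from object centroids; the exact mapping of angles to codes 1–8 (north first, then counterclockwise) and the use of Cartesian y-up coordinates are illustrative assumptions, since the paper fixes the codes only through Fig. 2(a).

```python
import numpy as np

def direction_code(ref, obj):
    """Direction code from reference centroid `ref` to centroid `obj`.
    Code 0 means the same location; 1..8 cover 45-degree sectors
    counterclockwise starting from north (an assumed assignment)."""
    dx, dy = obj[0] - ref[0], obj[1] - ref[1]
    if dx == 0 and dy == 0:
        return 0
    ang = np.degrees(np.arctan2(-dx, dy)) % 360.0   # 0 deg = north, CCW
    return int(((ang + 22.5) % 360) // 45) + 1

def build_9dlt(objects):
    """Build the 9DLT matrix and column-order string for a symbolic image.

    `objects` maps object labels to centroid coordinates, e.g.
    {'A': (10, 40), 'B': (30, 40), ...}. T[i, j] (j < i) holds the code from
    object j to object i, as in Definition 3.
    """
    labels = sorted(objects)
    s = len(labels)
    T = np.zeros((s, s), dtype=int)
    for i in range(s):
        for j in range(i):
            T[i, j] = direction_code(objects[labels[j]], objects[labels[i]])
    # Column-order string, as in the (A, B, C, D, 6, 6, 6, 7, 5, 4) example.
    string = list(labels)
    for j in range(s):
        for i in range(j + 1, s):
            string.append(int(T[i, j]))
    return T, string
```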

    DEFINITION 4. A pattern consists of a set of objects and the spatial relationships among these objects. For example, α = (α1, α2, ..., αk, αr1, αr2, ..., αrm) is a pattern, where αi is an object and αrj is the corresponding spatial relationship, with 1 ≤ i ≤ k, m = k(k−1)/2, 1 ≤ j ≤ m, and k ≥ 2. That is, the spatial relationship between every pair of objects in this pattern is recorded. The length of a pattern is equal to the number of objects. A pattern with a length equal to k is called a k-pattern.

    Constraints: (1) the items (objects) in a pattern are stored in alphabetical order; (2) no spatial relationship exists if the length of a pattern is equal to 1.

    The 9DLT expression is in accordance with the definition of the pattern.

    DEFINITION 5. Pattern α = (α1, α2, ..., αi, αr1, αr2, ..., αrm) is a sub-pattern of pattern β = (β1, β2, ..., βj, βr1, βr2, ..., βrn) if (α1, α2, ..., αi) is a subset of (β1, β2, ..., βj), where j ≥ i ≥ 2, and the spatial relationship between any two items in α is the same as in β. Pattern β is then said to contain pattern α, denoted β ⊇ α. The number of sub-patterns is N = C(i, i) + C(i, i−1) + ··· + C(i, 2). For example, pattern α = (A, B, C, 6, 6, 7) is a sub-pattern of pattern β = (A, B, C, D, 6, 6, 6, 7, 5, 4), because (A, B, C) is a subset of (A, B, C, D), and the code values of the spatial relationships among objects A, B, and C are the same as the corresponding code values in pattern β.

    DEFINITION 6. The minimum support is the number of objects that must satisfy the spatial relationships, which is equal to the required number of search objects.

    Inference 1. Two k-patterns can be joined only if they share k−1 objects and the corresponding relationships among those objects are the same, and k ≥ 2.

    Inference 2. Suppose a pattern does not contain a given (k−1)-pattern; then, this pattern cannot be contained in the corresponding k-pattern.

    Inference 3. The pattern of a feature image and its specific sub-patterns can be obtained from its 9DLT string. Conversely, if a (k−1)-pattern, that is, k−1 objects and the spatial relationships among them, is given, then the relative candidate sets of the k-pattern can be acquired.

    Generating candidate sets can significantly help object retrieval. To extract the images containing the objects (A, B, C) with minimum support 3 and spatial relationships αr from the image database, two 2-patterns are required, namely, (A, B, 4) and (A, C, 5), in which the same object A belongs to both patterns, thereby satisfying the joining condition. Then, we can generate the candidate 3-pattern (A, B, C, 4, 5, Δ). As shown in Fig. 4, the possible results are (A, B, C, 4, 5, 7), (A, B, C, 4, 5, 8), and (A, B, C, 4, 5, 6). The direction codes of the possible relationship between B and C are 7, 8, and 6; therefore, the spatial representation model is ABC(4, 5, X : {7, 8, 6}).
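    As a sketch of how two 2-patterns sharing one object could be joined into candidate 3-patterns (Inference 1), the following Python fragment mirrors the example above; the data layout and the exhaustive enumeration of the unknown B–C code are illustrative assumptions, and the geometric pruning that restricts the result to {6, 7, 8} in Fig. 4 is not implemented.

```python
def join_2patterns(p1, p2, allowed=range(1, 9)):
    """Candidate 3-pattern generation sketch (Inference 1 and Fig. 4).

    A 2-pattern is written as (objects, relations), e.g.
    (('A', 'B'), {('A', 'B'): 4}). Two 2-patterns sharing exactly one object
    are joined; the unknown relation between the two non-shared objects is
    enumerated over `allowed` direction codes.
    """
    objs1, rel1 = p1
    objs2, rel2 = p2
    shared = set(objs1) & set(objs2)
    if len(shared) != 1:
        return []                               # joining condition not met
    objs = tuple(sorted(set(objs1) | set(objs2)))
    new_pair = tuple(sorted(set(objs) - shared))
    return [(objs, {**rel1, **rel2, new_pair: code}) for code in allowed]

# Worked example from the text: (A, B, 4) joined with (A, C, 5)
# yields candidates (A, B, C, 4, 5, x) for x = 1, ..., 8.
cands = join_2patterns((('A', 'B'), {('A', 'B'): 4}),
                       (('A', 'C'), {('A', 'C'): 5}))
```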

    Similarly, the 9DLT string of each image in the image database is known. That is, the spatial relationships between objects have been confirmed, and finding all images that satisfy the minimum support becomes a pattern-matching process. In fact, this process can be converted into searching the LT matrix for an inclusion relationship. As shown in Fig. 5, depending on the given objects and the minimum support, the matching matrix P may occupy only part of the direction codes in the LT matrix, and the candidate matrix C to which it maps is of size k × k.

    The description of the match algorithm is as follows.
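    As a minimal sketch of the matching idea described above, namely checking whether a query pattern's direction codes are included in an image's LT matrix, and assuming each SID image is stored as a dictionary of object-pair codes, one could write:

```python
def matches(image_pattern, query_pattern):
    """Pattern-matching sketch (Definition 5 and Fig. 5).

    Both arguments map an alphabetically ordered object pair to its 9DLT
    direction code, e.g. {('A', 'B'): 6, ('A', 'C'): 6, ...}. An image
    satisfies the query when every query pair appears in the image with the
    same code, i.e. the query matrix P is contained in the image's LT matrix.
    This is a reconstruction of the matching idea, not the authors' exact
    algorithm.
    """
    return all(image_pattern.get(pair) == code
               for pair, code in query_pattern.items())

def retrieve(sid, query_pattern):
    """Scan the symbolic image database (SID) and return matching image IDs."""
    return [img_id for img_id, pattern in sid.items()
            if matches(pattern, query_pattern)]
```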

    Ⅳ. THE OBJECT-LEVEL FEATURE REPRESENTATION MODEL

    Typically, a data model is a framework that provides a representation of information and an operation method for a database system. The object-level feature representation model is one component of such a data model. For remote sensing images, the data model also includes metadata, such as location, resolution, and light intensity. However, the measurement standard of a content retrieval system determines the efficiency and accuracy of extraction. Hence, each image needs a good model with an efficient content-based representation. Moreover, selecting a suitable formula for similarity calculation is also vital. Based on this concept, we present the object-level feature representation model for image data below.

    According to MPEG-7 standards and the object-oriented concept, the object-level feature representation model for image data is described as a structural tree via layers [33]. As Fig. 6 shows, the first layer is the object name, while the second layer is the feature name of the feature information that the object contains. Further down are the layers for sub-features, feature attributes, attribute values, etc. Constructing this structural tree is convenient for indexing feature information.

    The overall model of the feature image can be represented by a formal method, as follows:
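    Judging from the 'where' clause below and the features listed in the next paragraph, the overall model plausibly groups the object identifier, the four feature descriptors, and the two algorithm descriptions, for example:

    $$ \mathrm{Model}_{\mathrm{obj}} = \bigl(\mathrm{objID},\ F_{\mathrm{shape}},\ F_{\mathrm{texture}},\ F_{\mathrm{color}},\ F_{\mathrm{space}},\ EA,\ MA\bigr). $$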

    where EA stands for the description of the object extraction algorithm, and MA stands for the description of the object matching algorithm.

    We adopt the centroid-radii model Fshape = (objID, Centroid, Radii) from Section Ⅲ-A, the extended 9DLT model Fspace = (objID, Flocal, Fdirec) from Section Ⅲ-C, and the energy values calculated at the different direction angles as Ftexture from Section Ⅲ-B. We choose parameters by referring to the methods in the literature [34]. The color feature of an object is expressed as Fcolor = {μcolor, σcolor}, through the mean value and the mean square deviation of its color.
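    For illustration, the per-object record implied by these feature definitions could be organized as the following Python structure; the field types and any names other than the F-components are assumptions made here, not part of the paper.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class ObjectFeatureRecord:
    """One SID record in the object-level feature representation model."""
    obj_id: str
    shape: List[float]                      # normalized centroid radii (III-A)
    texture: List[float]                    # Gabor (mu, sigma) pairs (III-B)
    color: Tuple[float, float]              # (mu_color, sigma_color)
    local: Tuple[float, float]              # F_local: centroid / georeference
    direc: Dict[str, int] = field(default_factory=dict)  # F_direc: 9DLT codes
```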

    Through this model, we can express the contents of multiple targets in an image by using multiple records, each representing the color, shape, and texture features of a single object, with a logical expression implemented through the spatial relationships among the objects. Thus, we transform the MTR problem into a record-querying problem, which enables image indexing technology to further accelerate target retrieval in CBRSIR.

REFERENCES
  • 1. Flickner M., Sawhney H., Niblack W., Ashley J., Huang Q., Dom B., Yanker P. 1995 “Query by image and video content: the QBIC system,” [Computer] Vol.28 P.23-32
  • 2. Lu L. Z., Ren R. Y., Liu N. 2004 “Remote sensing image retrieval using color and texture fused features,” [China Journal of Image and Graphics] Vol.9 P.74-78
  • 3. Guldogan E., Gabbouj M. 2008 “Feature selection for content-based image retrieval,” [Signal, Image and Video Processing] Vol.2 P.241-250
  • 4. Kapela R., Sniatala P., Rybarczyk A. 2011 “Real-time visual content description system based on MPEG-7 descriptors,” [Multimedia Tools and Applications] Vol.53 P.119-150
  • 5. Capar A., Kurt B., Gokmen M. 2009 “Gradient-based shape descriptors,” [Machine Vision and Applications] Vol.20 P.365-378
  • 6. Pun C. M., Wong C. F. 2011 “Fast and robust color feature extraction for content-based image retrieval,” [International Journal of Advancements in Computing Technology] Vol.3 P.75-83
  • 7. Rao M. B., Rao B. P., Govardhan A. 2011 “Content based image retrieval using dominant color, texture and shape,” [International Journal on Engineering Science and Technology] Vol.3 P.2887-2896
  • 8. Yue J., Li Z., Liu L., Fu Z. 2011 “Content-based image retrieval using color and texture fused features,” [Mathematical and Computer Modelling] Vol.54 P.1121-1127
  • 9. Kavitha C., Rao B. P., Govardhan A. 2011 “Image retrieval based on combined features of image sub-blocks,” [International Journal on Computer Science and Engineering] Vol.3 P.1429-1438
  • 10. Yeh W. H., Chang Y. I. 2008 “An efficient iconic indexing strategy for image rotation and reflection in image databases,” [Journal of Systems and Software] Vol.87 P.1184-1195
  • 11. Newsam S. D., Kamath C. 2004 “Retrieval using texture features in high resolution multi-spectral satellite imagery,” [in Data Mining and Knowledge Discovery: Theory, Tools, and Technology VI (Proceedings of SPIE)] P.21-32
  • 12. Li Y., Bretschneider T. 2003 “Supervised content-based satellite image retrieval using piecewise defined signature similarities,” [in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium] P.734-736
  • 13. Niu L., Ni L., Lu W., Yuan M. 2005 “A method of remote sensing image retrieval based on ROI,” [in Proceedings of the 3rd International Conference on Information Technology and Applications] P.226-229
  • 14. Sawant N., Chandran S., Krishna Mohan B. 2006 “Retrieving images for remote sensing applications,” [in Proceedings of the 5th Indian Conference on Computer Vision, Graphics and Image Processing] P.849-860
  • 15. Wang A. P., Wang S. G. 2006 “Content-based high-resolution remote sensing image retrieval with local binary patterns,” [in Geoinformatics 2006: Remotely Sensed Data and Information (Proceedings of SPIE)]
  • 16. Shahbazi H., Kabiri P., Soryani M. 2008 “Content based multispectral image retrieval using independent component analysis,” [in Proceedings of the 1st International Congress on Image and Signal Processing] P.485-489
  • 17. Peijun D., Yunhao C., Hong T., Tao F. 2005 “Study on content-based remote sensing image retrieval,” [in Proceedings of the IEEE International Geoscience and Remote Sensing Symposium]
  • 18. Maheshwary P., Srivastava N. 2010 “Retrieval of remote sensing images using color, texture, and spectral feature,” [International Journal of Engineering Science and Technology] Vol.2 P.4306-4311
  • 19. Ait-Aoudia S., Mahiou R., Benzaid B. 2010 “YACBIR: yet another content based image retrieval system,” [in Proceedings of the 14th International Conference on Information Visualisation] P.570-575
  • 20. Ruan N., Huang N., Hong W. 2006 “Semantic-based image retrieval in remote sensing archive: an ontology approach,” [in Proceedings of the IEEE International Conference on Geoscience and Remote Sensing Symposium] P.2903-2906
  • 21. Wei L., Weihong W., Feng L. 2009 “Research on remote sensing image retrieval based on geographical and semantic features,” [in Proceedings of the International Conference on Image Analysis and Signal Processing] P.162-165
  • 22. El-Qawasmeh E. 2003 “A quadtree-based representation technique for indexing and retrieval of image databases,” [Journal of Visual Communication and Image Representation] Vol.14 P.340-357
  • 23. Wan Q., Wang M., Zhang X., Jiang S., Xie Y. 2010 “High resolution remote sensing image retrieval using quin-tree and multi-feature histogram,” [Journal of Geo-Information Science] Vol.12 P.275-280
  • 24. Lee S. Y., Hsu F. J. 1990 “2D C-string: a new spatial knowledge representation for image database systems,” [Pattern Recognition] Vol.23 P.1077-1087
  • 25. Huang P. W., Hsu L., Su Y. W., Lin P. L. 2008 “Spatial inference and similarity retrieval of an intelligent image database system based on object's spanning representation,” [Journal of Visual Languages and Computing] Vol.19 P.637-651
  • 26. Huang P. W., Lee C. H. 2004 “Image database design based on 9D-SPA representation for spatial relations,” [IEEE Transactions on Knowledge and Data Engineering] Vol.16 P.1486-1496
  • 27. Lee A. J. T., Hong R. W., Ko W. M., Tsao W. K., Lin H. H. 2007 “Mining spatial association rules in image databases,” [Information Sciences] Vol.177 P.1593-1608
  • 28. Urdiales C., Dominguez M., de Trazegnies C., Sandoval F. 2010 “A new pyramid-based color image representation for visual localization,” [Image and Vision Computing] Vol.28 P.78-91
  • 29. Khan N. M., Ahmad I. S. 2012 “An efficient signature representation for retrieval of spatially similar images,” [Signal, Image and Video Processing] Vol.6 P.55-70
  • 30. Tan Q., Liu Z., Shen W. 2007 “An algorithm for object-oriented multi-scale remote sensing image segmentation,” [Journal of Beijing Jiaotong University] Vol.31 P.111-119
  • 31. Tan K. L., Ooi B. C., Thiang L. F. 2000 “Indexing shapes in image databases using the centroid-radii model,” [Data & Knowledge Engineering] Vol.32 P.271-289
  • 32. Cantone D., Ferro A., Pulvirenti A., Recupero D. R., Shasha D. 2005 “Antipole tree indexing to support range search and k-nearest neighbor search in metric spaces,” [IEEE Transactions on Knowledge and Data Engineering] Vol.17 P.535-550
  • 33. Vetro A. 2001 “MPEG-7 applications document v.10,” ISO/IEC JTC1/SC29/WG11/N3934
  • 34. Grigorescu S. E., Petkov N., Kruizinga P. 2002 “Comparison of texture features based on Gabor filters,” [IEEE Transactions on Image Processing] Vol.11 P.1160-1167
  • 35. Nastar C., Mitschke M., Meilhac C. 1998 “Efficient query refinement for image retrieval,” [in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition] P.547-552
  • 36. Liu T., Du Q., Yan H. 2011 “Spatial similarity assessment of point cluster,” [Geomatics and Information Science of Wuhan University] Vol.36 P.1149-1152
  • 37. Kruizinga P., Petkov N., Grigorescu S. E. 1999 “Comparison of texture features based on Gabor filters,” [in Proceedings of the 10th International Conference on Image Analysis and Processing] P.142-147
  • 38. Salton G., McGill M. J. 1983 Introduction to Modern Information Retrieval
Images / Tables
  • [ Fig. 1. ]  Model of the centroid radii. (a) Resampled polygon with θ interval around, and counterclockwise to, the y-axis; and (b) expression of the resampling result.
  • [ Fig. 2. ]  Representation of the nine-direction lower-triangular matrix. (a) Nine-direction code and (b) symbolic figure of the object.
  • [ Fig. 3. ]  Map of the matrix expression in nine-direction lower-triangular (9DLT) form. (a) Direction map in grids between objects and (b) matrix expression of four objects in 9DLT.
  • [ Fig. 4. ]  Generation of the candidate 3-pattern from 2-patterns.
  • [ Fig. 5. ]  Map of the candidate matrix to match the threshold.
  • [ Fig. 6. ]  Model of representation of feature objects.