Cloud-Type Classification by Two-Layered Fuzzy Logic
  • Noncommercial (CC BY-NC)
KEYWORD
Cloud-type classification, Infrared, Near-infrared, Fuzzy logic, False positive
  • 1. Introduction

    Cloud analysis is a challenging but critically important task for many practical applications, such as weather forecasting and air traffic control. One of the main difficulties is that the texture of clouds varies under different atmospheric conditions. Although infrared, near-infrared, and water-vapor images are available from satellites [1], they are not very helpful in quantifying small and/or low-altitude cloud features, owing to their limited spatial resolution and the unknown influence of the surface on the measured radiances [2].

    Clouds are classified into several types. When the updraft is strong, cumuliform clouds with a vertical shape appear; when the updraft is weak, horizontally shaped stratiform clouds appear. Cloud types also differ according to altitude. At high altitudes, we find cirrus, cirrostratus, and cirrocumulus clouds; at low altitudes, stratocumulus, nimbostratus, and stratus. Altostratus and altocumulus clouds are found at middle altitudes, and cumulus and cumulonimbus clouds are known to extend from low to high altitudes [3]. Many researchers have taken traditional and theoretical approaches to cloud-type classification, and many have taken intelligent approaches based on machine learning, with different purposes and paradigms. These approaches include maximum likelihood [4], instance-based learning [5], neural networks [6], fuzzy logic [7,8], and the k-nearest neighbor approach [9]. Some researchers have also tried different feature extraction methods to improve classification quality [2,9-12].

    While no single methodology prevails over the others in terms of accuracy, the choice of algorithm largely depends on the purpose of the classification and the research environment. For example, a fuzzy logic approach discriminates among single-layered clouds, multi-layered clouds, and clear skies [8], while a k-nearest neighbor approach classifies cumulus, towering cumulus, cumulonimbus, and other clouds and skies [9,13].

    In this paper, we revisit the fuzzy logic approach for the traditional cloud-type classification problem. We use six cloud-type classes (including cirrus, cirrostratus, cumulus, cumulonimbus, and stratus) that are frequently used in weather forecasting analysis. Our approach differs from previous studies using fuzzy logic in that the noise removal process used to extract the cloud area from the image is improved to avoid recognizing fog as cloud, and the fuzzy logic used in classification is designed to exploit the different characteristics of the three possible source images through a two-layered fuzzy reasoning structure.

    Because we primarily use color information from three different source images, the noise removal process is designed to exclude areas other than the cloud, and this preprocessing phase is further explained in the next section. The main classifier based on fuzzy logic is discussed in Section 3, followed by the experimental results and discussion.

    2. Extraction of Cloud Area

    From satellites, we can obtain the thickness of the cloud and other related information from visible-light images, and the height information from infrared images. Near-infrared images contain features of both infrared and visible-light images.

    A previous study [7] used only information from infrared and visible-light images; therefore, it could not reliably distinguish clouds from fog. Fog is typically recognized as part of the cloud, because its brightness is not sufficiently different in the available data. Fog must therefore be removed during the noise removal process to reduce this type of false positive.

    In this study, we remove noise using additional color information from near-infrared images to determine the region of interest (ROI), which includes only the land and cloud areas. We use the threshold-cut method, as in the previous study, but the final color information of a pixel is determined as the average of the same pixel across the different images: in the visible-light image, the pixel is averaged with the corresponding near-infrared pixel; in the infrared image, the infrared and near-infrared pixels are averaged; and in the near-infrared image, the pixels from all three images are averaged.

    Figure 1 shows the original source images and the resultant ROIs after noise removal.
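
    To make the averaging-and-threshold step concrete, the following is a minimal sketch in Python/NumPy. The array names, the single shared threshold value, and the helper name extract_roi are assumptions for illustration; the paper's empirical threshold values are not reproduced here.

        import numpy as np

        def extract_roi(vis, ir, nir, threshold=90):
            """Rough sketch of the ROI extraction described above.

            vis, ir, nir: 2-D brightness arrays from the visible-light, infrared,
            and near-infrared source images (same size). The threshold value is
            illustrative, not the paper's empirical setting.
            """
            vis = vis.astype(np.float32)
            ir = ir.astype(np.float32)
            nir = nir.astype(np.float32)

            # Per-pixel averaging as described above: visible-light is averaged
            # with near-infrared, infrared with near-infrared, and near-infrared
            # with all three sources.
            vis_avg = (vis + nir) / 2.0
            ir_avg = (ir + nir) / 2.0
            nir_avg = (vis + ir + nir) / 3.0

            # Threshold cut: keep only pixels bright enough to belong to the ROI
            # (land and cloud); low-brightness areas such as fog fall below the cut.
            return {
                "visible": vis_avg >= threshold,
                "infrared": ir_avg >= threshold,
                "near_infrared": nir_avg >= threshold,
            }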

    3. Cloud Type Classification by Fuzzy Logic

    If the cloud exists in only one of the two main source images (visible-light or infrared), it is relatively easy to classify, because the height and thickness information is sufficient to classify the cloud type. Therefore, we need only one fuzzy membership function for classification.

    If the cloud appears in both source images, we need the near-infrared source image, which contains both the cloud thickness and height information. In such a case, we need two different inference rule sets to characterize the target; however, the output of this inference is only qualitative.

    Therefore, we need a second fuzzy membership function and another inference rule set that takes the outputs of the first rule set as inputs. This makes our fuzzy inference logic two-layered.

    [Table 1.] Interval points of the membership function for the R channel

    First, the R channel information from the three source images (visible-light [V], near-infrared [N], and infrared [I]) is classified into three qualitative levels (low, medium, and high) and used as input to our fuzzy membership function, which is designed as shown in Figure 2. For these three qualitative levels, five interval points are assigned for each source of the R channel; their threshold values are determined empirically, as specified in Table 1.
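
    As a hypothetical sketch of how piecewise-linear low/medium/high membership functions can be built from five interval points, consider the following Python code. The interval values and the function shapes below are placeholders; the empirical interval points are those of Table 1 and the actual shape is that of Figure 2.

        def make_membership(points):
            """Build low/medium/high membership functions from five interval
            points (p0 <= p1 <= p2 <= p3 <= p4) on the R-channel axis."""
            p0, p1, p2, p3, p4 = points

            def low(r):
                # full membership below p0, falling linearly to zero at p1
                return max(0.0, min(1.0, (p1 - r) / (p1 - p0)))

            def medium(r):
                # triangular: rises from p1 to p2, falls from p2 to p3
                if r <= p1 or r >= p3:
                    return 0.0
                return (r - p1) / (p2 - p1) if r <= p2 else (p3 - r) / (p3 - p2)

            def high(r):
                # zero below p3, rising linearly to full membership at p4
                return max(0.0, min(1.0, (r - p3) / (p4 - p3)))

            return {"low": low, "medium": medium, "high": high}

        # Placeholder interval points for one source image (not Table 1's values)
        mf_visible = make_membership((40, 80, 130, 180, 220))
        degrees_v = {label: f(150) for label, f in mf_visible.items()}
        # degrees_v -> {'low': 0.0, 'medium': 0.6, 'high': 0.0}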

    In order to classify cloud types into our six classes, the membership degree is determined by two-layered qualitative fuzzy reasoning. Since we include the channel information from the near-infrared source image, the firsthand membership degrees are represented as two numbers: 1) a membership degree that combines the near-infrared and visible-light source images, and 2) a membership degree that combines the infrared and near-infrared source images. These firsthand membership degrees are determined by the fuzzy inference rules shown in Tables 2 and 3, and the resulting qualitative levels are later used in our secondhand inference rules.

    The inference is performed by the well-known max-min method [3].
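
    The following sketch illustrates max-min inference over such a qualitative rule table. The rule consequents here are invented for illustration only; they are not the entries of Tables 2 and 3.

        # Illustrative (partial) rule table: (near-infrared level, visible-light
        # level) -> qualitative output level. The actual entries are in Table 2.
        RULES_NIR_VIS = {
            ("low", "low"): "low",       ("low", "medium"): "low",
            ("low", "high"): "medium",   ("medium", "medium"): "medium",
            ("high", "low"): "medium",   ("medium", "high"): "high",
            ("high", "high"): "high",
        }

        def max_min_inference(deg_a, deg_b, rules):
            """Max-min composition: each rule fires with min(antecedent degrees);
            rules sharing a consequent label are aggregated with max."""
            out = {}
            for (label_a, label_b), label_out in rules.items():
                strength = min(deg_a.get(label_a, 0.0), deg_b.get(label_b, 0.0))
                out[label_out] = max(out.get(label_out, 0.0), strength)
            return out

        # Example using membership degrees produced by the first-layer functions
        nir_deg = {"low": 0.2, "medium": 0.7, "high": 0.1}
        vis_deg = {"low": 0.0, "medium": 0.4, "high": 0.6}
        firsthand_1 = max_min_inference(nir_deg, vis_deg, RULES_NIR_VIS)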

    [Table 2.] Firsthand inference rules 1 (near-infrared and visible-light)

    [Table 3.] Firsthand inference rules 2 (near-infrared and infrared)

    If the cloud exists in only one of the two main source images (visible-light or infrared), the computed membership degree is defuzzified by the center-of-gravity rule, and classification is performed by the interval rules shown in Table 4, where Wz denotes the defuzzified value.
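
    A minimal sketch of center-of-gravity defuzzification followed by interval-based classification is shown below. The representative output values and the class intervals are hypothetical stand-ins; the actual intervals are those of Table 4.

        def defuzzify_cog(output_degrees, representative_values):
            """Center of gravity over the aggregated outputs; each qualitative
            label is represented by a crisp value on the output axis (e.g. the
            peak of its output membership function)."""
            den = sum(output_degrees.values())
            if den == 0.0:
                return 0.0
            num = sum(mu * representative_values[label]
                      for label, mu in output_degrees.items())
            return num / den

        def classify_by_interval(wz, intervals):
            """intervals: list of (upper_bound, class_name) pairs, ascending."""
            for upper, name in intervals:
                if wz <= upper:
                    return name
            return intervals[-1][1]

        # Hypothetical stand-ins for the firsthand classification (Table 4)
        representative = {"low": 25.0, "medium": 50.0, "high": 75.0}
        firsthand_intervals = [(35.0, "cirrus"), (65.0, "stratus"), (100.0, "cumulonimbus")]
        wz = defuzzify_cog({"low": 0.1, "medium": 0.6, "high": 0.3}, representative)
        cloud_type = classify_by_interval(wz, firsthand_intervals)  # wz = 55.0 -> "stratus" here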

    If the cloud exists in both the infrared and visible-light images, we need the second fuzzy inference rule set, shown in Table 5, which uses the firsthand membership degrees from Tables 2 and 3 as input. Again, the max-min method and center-of-gravity defuzzification are used in the process.

    Therefore, the shape of the fuzzy membership function is different, as shown in Figure 3.

    Then the cloud-type classification rules are determined as shown in Table 6, with the defuzzified value Wz.

    With this two-layered fuzzy inference, we can produce the final cloud-classified image from the three input images, as shown in Figure 4(d).
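
    Putting the pieces together, the sketch below shows the two-layered decision flow for a single pixel. It reuses the illustrative helpers from the earlier sketches (make_membership, max_min_inference, defuzzify_cog, classify_by_interval, RULES_NIR_VIS, representative, firsthand_intervals); the additional rule tables, membership functions, and intervals introduced here are placeholders, not the contents of Tables 3, 5, and 6.

        # Placeholder parameters for the remaining pieces (illustration only)
        mf_infrared = make_membership((30, 70, 120, 170, 210))
        mf_near_ir = make_membership((35, 75, 125, 175, 215))
        RULES_NIR_IR = RULES_NIR_VIS          # stand-in for Table 3
        RULES_SECONDHAND = RULES_NIR_VIS      # stand-in for Table 5
        secondhand_intervals = firsthand_intervals  # stand-in for Table 6

        def grade(mf, r):
            return {label: f(r) for label, f in mf.items()}

        def classify_pixel(vis_r, ir_r, nir_r, in_visible, in_infrared):
            vis_deg = grade(mf_visible, vis_r)
            ir_deg = grade(mf_infrared, ir_r)
            nir_deg = grade(mf_near_ir, nir_r)

            if not in_visible and not in_infrared:
                return "clear"  # no cloud detected in the main sources

            if in_visible != in_infrared:
                # Cloud found in only one main source: a single inference layer
                # (firsthand rules, then Table 4-style intervals) is enough.
                first = max_min_inference(
                    nir_deg, vis_deg if in_visible else ir_deg,
                    RULES_NIR_VIS if in_visible else RULES_NIR_IR)
                return classify_by_interval(defuzzify_cog(first, representative),
                                            firsthand_intervals)

            # Cloud found in both main sources: the two firsthand results feed the
            # secondhand rules (second layer), then defuzzification and Table 6-style
            # intervals determine the final class.
            first_vis = max_min_inference(nir_deg, vis_deg, RULES_NIR_VIS)
            first_ir = max_min_inference(nir_deg, ir_deg, RULES_NIR_IR)
            second = max_min_inference(first_vis, first_ir, RULES_SECONDHAND)
            return classify_by_interval(defuzzify_cog(second, representative),
                                        secondhand_intervals)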

    4. Experimental Results

    We collected 50 visible-light, infrared, and near-infrared images of 532×512 pixels, provided by the Korean National Weather Service [1]. The software was written in VC++ on an IBM-compatible PC with an Intel Core 2 Duo E7300 CPU and 2.0 GB of RAM.

    The previous study [7] used the reflection and release characteristics of the near-infrared image to classify salient cloud types first, and then applied fuzzy logic for further classification. Therefore, during the noise removal process, the fog area tended to be wrongly classified as stratus.

    [Table 4.] Firsthand classification

    [Table 5.] Secondhand inference rules

    [Table 6.] Secondhand classification

    In this study, we used all three source images, with the brightness of each pixel computed as the average of the brightness values from two or three source images. This change successfully avoids false positives and reduces sensitivity to the threshold value. Figure 5 compares the ROIs produced by the two approaches.

    As expected, the new approach succeeded in removing the fog area from the cloud area.

    Table 7 summarizes the experimental result of the proposed method, as compared with the previous fuzzy-logic-based method [7].

    In the proposed method, we do not use the reflection/release characteristics as the classifying criteria. Instead, we observe whether the cloud exists in both the infrared and near-infrared images.

    [Table 7.] Cloud area extraction rate

    If the cloud appears in only one of them, we apply the first set of fuzzy inference rules; if it appears in both source images, the second set of membership functions and inference rules is also applied for further classification. The results show that the proposed method is more accurate and effective (Figure 6).

    5. Conclusion

    While many different methods are available for cloud-type classification from satellite images, no single algorithm has outscored the others in accuracy, nor is any single algorithm widely used in practice. Fuzzy logic is one of several possible methodologies, and it is well suited here because the nature of the problem involves a high level of dynamic uncertainty.

    In this paper, we proposed a new method using a two-layered fuzzy inference system that exploits all possible source images to strengthen classification accuracy. In addition, our new ROI-producing logic successfully avoids the false positives caused by fog that occurred in a previous study.

    The primary goal of this study was to avoid the false classification of fog as part of a cloud. While using all three possible source images in classification, our study further exploited the characteristics of clouds depending on the source images in which the extracted cloud is found. We converted the traditional process into a two-layered fuzzy logic process: if the cloud exists in only one of the two main source images (visible-light or infrared), the classification follows only one fuzzy decision rule; if it exists in both source images, the data is processed first through our qualitative fuzzy inference rules and then through another fuzzy membership function and a new inference rule set, i.e., a two-layered fuzzy logic process.

    Cloud-type classification has many practical applications, and different applications use different features of the source images, depending on the target application and classification purpose. Our first concern in this paper was to contribute to weather forecasting, but the same paradigm may be extended to other applications, such as air traffic control, using different satellite image databases, which is our next goal.

    While many different paradigms and algorithms have been developed for different purposes in this research domain, the classification accuracy we obtained (90%) is one of the better results reported, if not the best. Considering that we used only primitive attributes, such as color information, further improvement should be achievable by employing more informative features, as others have tried [11,12], and we could also compare our approach with others in the same environment [13].

References
  • 1.
  • 2. Calbom J., Sabburg J. (2008). "Feature extraction from whole-sky ground-based images for cloud-type recognition," Journal of Atmospheric and Oceanic Technology, Vol. 25, pp. 3-14.
  • 3. Kandel A., Langholz G. (1994). Fuzzy Control Systems.
  • 4. Jun L., Paul Menzel W., Yang Z., Frey R. A., Ackerman S. A. (2003). "High-spatial-resolution surface and cloud-type classification from MODIS multispectral band measurements," Journal of Applied Meteorology and Climatology, Vol. 42, pp. 204-226.
  • 5. Bankert R. L., Wade R. H. (2007). "Optimization of an instance-based GOES cloud classification algorithm," Journal of Applied Meteorology and Climatology, Vol. 46, pp. 36-49.
  • 6. Hong Y., Hsu K. L., Sooroshian S., Gao X. (2004). "Precipitation estimation from remotely sensed imagery using an artificial neural network cloud classification system," Journal of Applied Meteorology and Climatology, Vol. 43, pp. 1834-1853.
  • 7. Kim K. B., Woo Y. W. (2009). "Cloud analysis using a fuzzy reasoning method," Journal of The Korea Institute of Maritime Information & Communication Systems, Vol. 13, pp. 1181-1187.
  • 8. Baum B. A., Tovinkere V., Titlow J., Welch R. M. (1997). "Automated cloud classification of global AVHRR data using a fuzzy logic approach," Journal of Applied Meteorology and Climatology, Vol. 36, pp. 1519-1540.
  • 9. Singh M., Glennen M. (2005). "Automated ground-based cloud recognition," Pattern Analysis and Applications, Vol. 8, pp. 258-271.
  • 10. Liu L., Sun X., Chen F., Zhao S., Gao T. (2011). "Cloud classification based on structure features of infrared images," Journal of Atmospheric and Oceanic Technology, Vol. 28, pp. 410-417.
  • 11. Wang Z., Sassen K. (2001). "Cloud type and microphysical property retrieval using multiple remote sensors," Journal of Applied Meteorology and Climatology, Vol. 40, pp. 1665-1682.
  • 12. Kuril S., Saini I., Saini B. S. (2013). "Cloud classification for weather information by artificial neural network," International Journal of Applied Physics and Mathematics, Vol. 3, pp. 28-30.
  • 13. Bankert R. L., Mitrescu C., Miller S. D., Wade R. H. (2009). "Comparison of GOES cloud classification algorithms employing explicit and implicit physics," Journal of Applied Meteorology and Climatology, Vol. 48, pp. 1411-1421.
Images / Tables
  • [Figure 1.] Noise removal from different source images. (a) Original visible-light image, (b) after noise removal, (c) original infrared image, (d) after noise removal, (e) original near-infrared image, (f) after noise removal.
  • [Figure 2.] Shape of the membership function.
  • [Table 1.] Interval points of the membership function for the R channel
  • [Table 2.] Firsthand inference rules 1 (near-infrared and visible-light)
  • [Table 3.] Firsthand inference rules 2 (near-infrared and infrared)
  • [Table 4.] Firsthand classification
  • [Table 5.] Secondhand inference rules
  • [Table 6.] Secondhand classification
  • [Figure 3.] Membership function for the second classification.
  • [Figure 4.] Cloud-type classification by fuzzy logic. (a) Original visible-light, (b) original infrared, (c) original near-infrared, (d) cloud-type classified.
  • [Table 7.] Cloud area extraction rate
  • [Figure 5.] Region of interest (ROI) comparison. ROI from (a) the previous study: visible-light, (b) the proposed method: visible-light, (c) the previous study: infrared, (d) the proposed method: infrared, (e) the previous study: near-infrared, (f) the proposed method: near-infrared.
  • [Figure 6.] Comparison of classified result images. Result image (a) by the previous method, (b) by the proposed method.