Robust Motion Compensated Frame Interpolation Using Weight-Overlapped Block Motion Compensation with Variable Block Sizes to Reduce LCD Motion Blurs
  • License: CC BY-NC (Non-Commercial)
ABSTRACT
KEYWORDS
LCD motion blurs, Variable block sizes, Weight-overlapped block motion compensation, Motion compensated frame interpolation
    I. INTRODUCTION

    Liquid crystal displays (LCDs) have slower responses than cathode ray tube (CRT) displays because of the LCD hold time [1-3], so ghosting or motion blurs are often perceived around fast-moving objects on LCDs. To reduce the motion blurs, backlight flashing [4], signal overdrive [5], or frame rate up-conversion (FRUC) [6-12] can be used. However, luminance fluctuation may be seen when the flashing rate is not high enough in backlight flashing, and signal overdrive does not consider image contents [1].

    To increase video frame rates for high temporal resolution, frame repetition, black/gray frame insertion, and linear interpolation may be used; however, these methods cannot remove motion blurs and judder because they do not use motion information [13]. To produce interpolated frames without block artifacts and motion blurs, motion compensated frame interpolation (MCFI) must be used, where motion vectors (MVs) must be correctly estimated by various motion estimation (ME) methods.

    The conventional MCFI utilizes unidirectional ME (UME): forward ME, backward ME, and bilateral ME [14-17]. The critical drawback of UME is shown in Fig. 1. Forward and backward MEs cannot estimate MVs for all pixels, so holes (pixels with no MV) and overlaps (pixels with multiple MVs) result. Meanwhile, bilateral ME may not estimate correct MVs when the matching score of a block containing a moving object is lower than the matching scores of other blocks. In the MCFI result using bilateral ME in Fig. 1, the block containing the moving ball is not compensated [18].

    To obtain better interpolation results, bidirectional ME (BME) and overlapped block motion compensation (OBMC) are used [19-22]. The results of BME + OBMC are very sensitive to the sizes of the blocks and search regions used for block matching, as shown in Fig. 2. When the sizes are small, large motions are not estimated; when large sizes are used, detailed motions are not compensated and a huge processing time is needed.

    Various methods have been proposed to improve the performance or reduce the computation of FRUC [23-25]. However, regions overlapped with large or complex motions are still not correctly compensated. To solve this problem, variable block sizes and overlapping ranges are needed. A method using variable block sizes was presented in [26]; however, it splits blocks only over quad scales (from a coarse block size of 16×16 down to a fine block size of 4×4), and refined motion estimation is always performed for all blocks whether or not a block is located at an object boundary.

    This paper aims to develop a novel FRUC method that robustly compensates regions overlapped with large or complex motions. Accordingly, the proposed method uses variable block sizes and overlapping ranges. The block sizes are enlarged by checking a matching score, and the weighted overlapping ranges (WOBMC, weight-OBMC) are enlarged by the same ratio as the block enlargement. Therefore, the block sizes and the weighted overlapping ranges can be determined suitably according to the movement and size of objects, so the proposed method can interpolate frames with high visual and objective quality.

    II. MCFI USING BME AND WOBMC WITH VARIABLE BLOCK SIZES

    To estimate forward MVs, the proposed method uses a block matching algorithm (BMA) based on the mean of absolute differences (MAD), which, unlike the sum of absolute differences (SAD) [27], allows matching scores of blocks with different sizes to be compared directly, in the direction from the current frame (It) to the previous frame (It−1) as follows:

    [Equation image not extracted: MAD between the b×b block of It centered at (x, y) and the matching block of It−1]

    where the block size is b×b, b is an odd positive integer, i and j are integer offsets within the block, and ⌊b/2⌋ denotes the largest integer less than or equal to b/2. Block matching normally uses even-sized windows; we use odd-sized windows instead so that an MV is estimated per pixel rather than per block. Therefore, the MV of It(x, y) can be determined by

    [Equation image not extracted: MV of It(x, y) obtained by minimizing the MAD over the search range]

    where u ∈ [x − r/2, x + r/2] and v ∈ [y − r/2, y + r/2], so the search range is r×r and r is an odd integer.
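
    A minimal sketch of this per-pixel matching is given below, assuming grayscale frames stored as NumPy float arrays; the function names and the omitted boundary handling are illustrative simplifications rather than the authors' implementation.

      import numpy as np

      def mad(curr, prev, x, y, dx, dy, b):
          # Mean of absolute differences between the b-by-b block of the current frame
          # centered at (x, y) and the block of the previous frame displaced by (dx, dy).
          h = b // 2  # largest integer <= b/2 (b is odd)
          c = curr[y - h:y + h + 1, x - h:x + h + 1]
          p = prev[y + dy - h:y + dy + h + 1, x + dx - h:x + dx + h + 1]
          return np.mean(np.abs(c - p))

      def forward_mv(curr, prev, x, y, b, r):
          # Exhaustive search over an r-by-r range (r odd): the displacement with the
          # smallest MAD becomes the forward MV of the pixel It(x, y).
          hr = r // 2
          best_mad, best_mv = np.inf, (0, 0)
          for dy in range(-hr, hr + 1):
              for dx in range(-hr, hr + 1):
                  score = mad(curr, prev, x, y, dx, dy, b)
                  if score < best_mad:
                      best_mad, best_mv = score, (dx, dy)
          return best_mv, best_mad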

    To estimate the MV at pixel It(x, y) with variable block sizes, an initial block size b0×b0 is used, and its matching block is found within the initial search range r0×r0. If the matching score S0, which is the reciprocal of the MAD, is less than τm (the matching-score threshold for block-size expansion), the block size and the search range are enlarged to α^1 b0 × α^1 b0 and α^1 r0 × α^1 r0, respectively (α > 1); otherwise, the MV computed with the initial size and the initial expansion rate (α^0 = 1) are stored without expansion. If the matching score S1 using the enlarged block and search range is still less than τm, the block size and the search region are enlarged by α^2 to capture larger MVs; otherwise, the MV and the expansion rate are stored and the iterative expansion is terminated. When the expansion rate α^k exceeds τα (the expansion limit), the iterative adjustment is terminated and the MV and α^k with the greatest matching score are selected. The ME using variable block sizes is illustrated in Fig. 3. Backward MVs are estimated in the same manner as forward MVs, using the direction from the previous frame to the current frame.
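
    Building on the forward_mv sketch above, the iterative enlargement could look roughly as follows; the rounding that keeps the sizes odd and the small constant guarding the division are assumptions added for robustness, not details given in the paper.

      def mv_variable_block(curr, prev, x, y, b0, r0, alpha, tau_m, tau_alpha):
          # Enlarge the block and the search range by alpha**k until the matching score
          # S_k = 1 / MAD reaches tau_m or the expansion rate exceeds tau_alpha; keep the
          # MV and expansion rate with the greatest matching score seen so far.
          best_mv, best_score, best_k = (0, 0), -np.inf, 0
          k = 0
          while alpha ** k <= tau_alpha:
              b = int(round(alpha ** k * b0)) | 1   # block size, forced odd
              r = int(round(alpha ** k * r0)) | 1   # search range, forced odd
              mv, mad_k = forward_mv(curr, prev, x, y, b, r)
              score = 1.0 / (mad_k + 1e-12)         # matching score S_k
              if score > best_score:
                  best_mv, best_score, best_k = mv, score, k
              if score >= tau_m:                    # good enough: stop expanding
                  break
              k += 1
          return best_mv, best_k                    # MV and its expansion rate alpha**best_k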

    Using the estimated forward and backward MVs, vf and vb, WOBMC is performed to interpolate the intermediate frame It−0.5, as shown in Fig. 4. In conventional OBMC, motion blocks are enlarged by predetermined sizes and the overlapped regions are then estimated using weights. In the proposed WOBMC, the weight for motion compensation (MC) is a two-dimensional Gaussian kernel, and all pixels within the overlapping range (i.e., all pixels within the enlarged block) are compensated. The overlapping range is set to α^k O0, where O0 is the initial overlapping range for b0.
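
    The Gaussian-weighted accumulation behind WOBMC can be sketched as below; the accumulate-and-normalize scheme and the sigma parameter are assumptions, since this extraction does not spell out how the forward- and backward-compensated blocks are finally combined.

      def gaussian_kernel(size, sigma):
          # 2-D Gaussian weight over a size-by-size overlapping range (size odd).
          h = size // 2
          yy, xx = np.mgrid[-h:h + 1, -h:h + 1]
          return np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))

      def wobmc_accumulate(acc, wsum, block, weight, x, y):
          # Add one motion-compensated block into the intermediate frame with its Gaussian
          # weight; after all blocks are accumulated, each pixel of It-0.5 is acc / wsum.
          h = weight.shape[0] // 2
          acc[y - h:y + h + 1, x - h:x + h + 1] += weight * block
          wsum[y - h:y + h + 1, x - h:x + h + 1] += weight

    For a pixel whose MV was found with expansion rate α^k, the kernel size passed to gaussian_kernel would be the enlarged overlapping range α^k O0, so the weighting grows with the same ratio as the block.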

    Finally, the brightness of the interpolated frame may be reduced owing to the OBMC; the proposed method compensates for this by using the difference from the frame obtained by simple averaging, Mt−0.5 = (It + It−1) / 2. The compensated brightness is

    [Equation image not extracted: brightness-compensated frame]

    where the low-pass filtered version of f is used, and the window size of the median filter is large for large-motion regions. In our method, the median filter performs this low-pass filtering because it can ignore irregular variations.
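
    Because the exact compensation equation is not reproduced here, the following is only a plausible sketch of the idea: the median-filtered difference between the averaged frame Mt−0.5 and the WOBMC result is added back, with the window size win (fixed here for simplicity) chosen larger for large-motion regions in the actual method.

      from scipy.ndimage import median_filter

      def compensate_brightness(interp, curr, prev, win=9):
          # Correct the brightness of the WOBMC result using the median-filtered
          # difference with the averaged frame Mt-0.5 = (It + It-1) / 2 (sketch only).
          avg = 0.5 * (curr + prev)
          diff = avg - interp                             # brightness loss caused by OBMC
          return interp + median_filter(diff, size=win)   # median ignores irregular variations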

    III. EXPERIMENTAL RESULTS

    To evaluate the proposed method, we tested 12 image sequences: CIF (common intermediate format, 352×288) at 30 Hz and full HD (high definition, 1920×1080) at 50 Hz. From each sequence, 51 odd frames were used to produce 50 intermediate frames, and the 50 interpolated images were compared with the 50 real even frames in terms of peak signal-to-noise ratio (PSNR), as shown in Fig. 5. The initial sizes of the block, search range, and overlapping range were set to 5×5, 9×9, and 7×7, respectively; the other parameters were selected as follows: α was set to the value typically used for image upscaling, and τα = 7 and τm = 0.05 were selected experimentally to reduce computational complexity while finding suitable block sizes. Test results for several parameter values are shown in Table 1. Although the PSNR results with these settings (τα = 7, τm = 0.05) are not the highest, we used them to limit the computational complexity. The parameter values can therefore be adjusted to the operating environment of FRUC, such as the characteristics of the image sequences, hardware performance, and network conditions; however, we fixed them in these experiments for an objective comparison of the proposed method. In addition, one MV is estimated for every 2×2 pixels to reduce complexity.
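
    The evaluation protocol (interpolate between consecutive odd frames and compare against the real even frames) can be summarized as below, continuing with NumPy as in the earlier sketches; interpolate stands for the whole proposed pipeline, and the peak value of 255 assumes 8-bit frames.

      def psnr(ref, test, peak=255.0):
          # Peak signal-to-noise ratio in dB between a reference and a test frame.
          mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
          return 10.0 * np.log10(peak ** 2 / mse)

      def evaluate(frames, interpolate):
          # frames[0], frames[2], ... are the odd frames used as input; frames[1],
          # frames[3], ... are the real even frames used only as references.
          scores = []
          for k in range(0, len(frames) - 2, 2):
              est = interpolate(frames[k], frames[k + 2])   # estimated intermediate frame
              scores.append(psnr(frames[k + 1], est))
          return float(np.mean(scores))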

    [TABLE 1.] Comparison of variable parameter values (PSNR [dB] and processing time [ms])

    A visual comparison of the results of OBMC, WOBMC, and the proposed method is shown in Fig. 6. The images interpolated by WOBMC have fewer block artifacts than those interpolated by OBMC in almost all regions, because WOBMC weights all pixels. The proposed method, moreover, removed most block artifacts and motion blurs and was able to produce clean intermediate frames owing to the variable block sizes.

    Some result images are shown in Fig. 7. Many motion-blurred regions are observed in the averaged images (Fig. 7(a)); small-motion regions are compensated in the results with a static block size (Fig. 7(b), green boxes); and most motion regions are interpolated more clearly by the proposed method (Fig. 7(c), red boxes). In the results of our method, the face region is preserved in the ‘foreman’ sequence, and the shapes of the boat and the river bank are clearly interpolated in the ‘coastguard’ sequence. The spectators are clearly separated, the text on the billboard is distinctly shown, and the motion blur of the player is reduced in the ‘stefan’ sequence. In particular, the table tennis ball appears as a single object in the ‘tennis’ sequence.

    For an objective comparison, the proposed method was compared with other recent methods [23-26], as shown in Table 2. In most test sequences, its PSNRs are higher than those of the other methods. In particular, in sequences with large motions, such as ‘football’, ‘tennis’, and ‘stefan’, the performance of the proposed method is improved. In Fig. 8, result images of the proposed method and the conventional methods [23-26] are shown (significant motion blurs and block artifacts are marked as red regions). Even though the variable block-size method [26] can reduce some motion blurs, large motion blurs are not corrected; the proposed method, in contrast, reduces most motion blurs and block artifacts.

    [TABLE 2.] PSNR comparison with other methods (PSNR [dB])

    In the tests with full HD image sequences, the results of our method are very similar to the original frames, as shown in Figs. 9 and 10, where the difference values between the results of the proposed method and the original images are very small.

    To evaluate the complexity of the proposed method, we compared it with a symmetric motion estimation method with variable search ranges [28], implemented in software. Lim’s method [28] uses variable search ranges only, while our method uses variable block sizes and variable search ranges. The average PSNR of the proposed method is slightly higher than that of Lim’s method, while our method runs about 144 times faster on average. In addition, we compared our method with a replica of Lee’s method [26], as shown in Table 3, where the proposed method was tested with two parameter sets: one is the same as in the experiments above, and the other (the speed-up parameters) uses 7×7 and 5×5 for the initial search range and overlapping range, respectively, with one MV estimated for every 5×5 pixels. With the original parameters the proposed method has the highest complexity; with the speed-up parameters it has the lowest complexity while still achieving a higher PSNR than the replica of Lee’s method [26].

    [TABLE 3.] Comparison with the replica of [26] (PSNR [dB] and processing time [ms])

    The average processing time of the proposed method for the 10 CIF sequences is 4,746 ms; however, this is the processing time of a software implementation. Variable block-size ME realized in dedicated hardware is fast enough for real-time processing of HD images [29, 30], so the proposed method can be used for real-time FRUC with a hardware implementation.

    IV. CONCLUSIONS

    An FRUC method using WOBMC with variable block sizes has been proposed in this paper. In the proposed method, bidirectional MVs are estimated with variable block sizes and search ranges by comparing matching scores, and the overlapping range for WOBMC is also enlarged using the ME expansion rate. In addition, brightness compensation is applied by comparison with the median-filtered image of the averaging interpolation. Therefore, the proposed method can provide high-performance FRUC regardless of the magnitude and complexity of motions, reducing LCD motion blurs.

References
  • 1. Chan S. H., Nguyen T. Q. 2011 “LCD motion blur: modeling, analysis, and algorithm,” [IEEE Trans. Imag. Proc.] Vol.20 P.2352-2365
  • 2. Liu Y., Tian F., Liu R. 2011 “A novel approach to frame rate up-conversion based on morphing,” [J. Comput. Inf. Syst.] Vol.7 P.5219-5226
  • 3. Hong S. J., Kwon O. K. 2014 “An RGB to RGBY color conversion algorithm for liquid crystal display using RGW pixel with two-field sequential driving method,” [J. Opt. Soc. Korea] Vol.18 P.777-782
  • 4. Becker M. 2008 “LCD response time evaluation in the presence of backlight modulations,” [SID Symp. Tech. Dig. Papers] Vol.39 P.24-27
  • 5. Zhao H. X., Chao M. L., Ni F. C. 2008 “Overdrive LUT optimization for LCD by box motion blur measurement and gamma-based thresholding method,” [SID Symp. Tech. Dig. Papers] Vol.39 P.117-120
  • 6. Wang T. S., Choi K. S., Jang H. S., Morales A. W., Ko S. J. 2010 “Enhanced frame rate up-conversion method for UHD video,” [IEEE Trans. Consum. Electron.] Vol.56 P.1108-1114
  • 7. Liu H., Xiong R., Zhao D., Ma S., Gao W. 2012 “Multiple hypotheses Bayesian frame rate up-conversion by adaptive fusion of motion-compensated interpolations,” [IEEE Trans. Circ. Syst. Video Technol.] Vol.22 P.1188-1198
  • 8. Haavisto P., Juhola J., Neuvo Y. 1989 “Fractional frame rate up conversion using weighted median filter,” [IEEE Trans. Consum. Electron.] Vol.35 P.272-278
  • 9. Wang D., Lauzon D. 2000 “Hybrid algorithm for estimating true motion fields,” [Opt. Eng.] Vol.39 P.2876-2881
  • 10. Dane G., Nguyen T. Q. 2006 “Optimal temporal interpolation filter for motion-compensated frame rate up conversion,” [IEEE Trans. Imag. Proc.] Vol.15 P.978-991
  • 11. Choi B. D., Han J. W., Kim C. S., Ko S. J. 2006 “Frame rate up-conversion using perspective transform,” [IEEE Trans. Consum. Electron.] Vol.52 P.975-982
  • 12. Lee S. H., Yang S. 2002 “Adaptive motion-compensated frame rate up-conversion,” [Electron. Lett.] Vol.38 P.451-452
  • 13. Han R., Men A. 2013 “Frame rate up-conversion for high-definition video applications,” [IEEE Trans. Consum. Electron.] Vol.59 P.229-236
  • 14. Wang D., Vincent A., Blanchfield P., Klepko R. 2010 “Motion-compensated frame rate up-conversion-part II: new algorithms for frame interpolation,” [IEEE Trans. Broadc.] Vol.56 P.142-149
  • 15. Juang C. L., Chai T. T. 1996 “Motion-compensated interpolation for scan rate up-conversion,” [Opt. Eng.] Vol.35 P.166-176
  • 16. Liu S., Kuo C. C. J., Kim J. W. 2003 “Hybrid global-local motion compensated frame interpolation for low bit rate video coding,” [J. Vis. Com. Imag. Repres.] Vol.14 P.58-76
  • 17. Kang S. J., Cho K. R., Kim Y. H. 2007 “Motion compensated frame rate up-conversion using extended bilateral motion estimation,” [IEEE Trans. Consum. Electron.] Vol.53 P.1759-1767
  • 18. Jung H. S., Kim U. S., Sunwoo M. H. 2012 “Simplified frame rate up-conversion algorithm with low computational complexity,” [Proc. 20th European Signal Processing Conference] P.385-389
  • 19. Orchard M. T., Sullivan G. J. 1994 “Overlapped block motion compensation: an estimation theoretic approach,” [IEEE Trans. Imag. Proc.] Vol.3 P.693-699
  • 20. Zhai J., Yu K., Ki J., Li S. 2005 “A low complexity motion compensated frame interpolation method,” [Proc. Int. Symp. Circuits and Systems] P.4927-4930
  • 21. Girod B. 2000 “Efficiency analysis of multihypothesis motion-compensated prediction for video coding,” [IEEE Trans. Imag. Proc.] Vol.9 P.173-183
  • 22. Choi B. D., Han J. W., Kim C. S., Ko S. J. 2007 “Motion-compensated frame interpolation using bilateral motion estimation and adaptive overlapped block motion compensation,” [IEEE Trans. Circ. Syst. Video Technol.] Vol.17 P.407-416
  • 23. Wang C., Zhang L., He Y., Tan Y. P. 2010 “Frame rate up conversion using trilateral filtering,” [IEEE Trans. Circ. Syst. Video Technol.] Vol.20 P.886-893
  • 24. Kang S. J., Yoo S., Kim Y. H. 2010 “Dual motion estimation for frame rate up-conversion,” [IEEE Trans. Circ. Syst. Video Technol.] Vol.20 P.1909-1914
  • 25. Kim U. S., Sunwoo M. H. 2014 “New frame rate up-conversion algorithm with low computational complexity,” [IEEE Trans. Circ. Syst. Video Technol.] Vol.24 P.384-393
  • 26. Lee G. G., Chen C. F., Hsiao C. J., Wu J. C. 2014 “Bi-directional trajectory tracking with variable block-size motion estimation for frame rate up-conversion,” [IEEE J. Emerg. Select. Top. Circ. Syst.] Vol.4 P.29-42
  • 27. Choi I. Y., Kang Y. J., Hong K. M., Kim S. J., Lee G. D. 2014 “Study on the improvement of the image analysis speed in the digital image correlation measurement system for the 3-point bend test,” [J. Opt. Soc. Korea] Vol.18 P.523-530
  • 28. Lim H., Park H. W. 2011 “A symmetric motion estimation method for motion-compensated frame interpolation,” [IEEE Trans. Imag. Proc.] Vol.20 P.3653-3658
  • 29. Deng L., Gao W., Hu M. Z., Ji Z. Z. 2005 “An efficient hardware implementation for motion estimation of AVC standard,” [IEEE Trans. Consum. Electron.] Vol.51 P.1360-1366
  • 30. Atitallah A. B., Arous S., Loukil H., Masmoudi N. 2012 “Hardware implementation and validation of the fast variable block size motion estimation architecture for H.264/AVC,” [Int. J. Electr. Commun.] Vol.66 P.701-710
Figures and Tables
  • [ FIG. 1. ]  Various UME methods for MCFI.
  • [ FIG. 2. ]  BME + OBMC using various block sizes and search regions for FRUC.
  • [ FIG. 3. ]  Concept of motion estimation using variable block sizes.
  • [ FIG. 4. ]  WOBMC with variable block sizes.
  • [ FIG. 5. ]  Quality comparison.
  • [ TABLE 1. ]  Comparison of variable parameter values (PSNR [dB] and processing time [ms]).
  • [ FIG. 6. ]  Visual comparison of result images (‘foreman’ sequence); (a) results of OBMC, (b) results of WOBMC, and (c) results of the proposed method.
  • [ FIG. 7. ]  Sample result images (from top to bottom: ‘foreman’, ‘coastguard’, ‘stefan’, and ‘tennis’ sequences); (a) images interpolated by averaging, (b) images interpolated with a static block size, and (c) results of the proposed method.
  • [ TABLE 2. ]  PSNR comparison with other methods (PSNR [dB]).
  • [ FIG. 8. ]  Sample result images (from top to bottom: replica [23], replica [24], replica [25], replica [26], and the proposed method); (a) ‘flower’, (b) ‘football’, and (c) ‘stefan’ sequences.
  • [ FIG. 9. ]  Sample result images (‘into tree’, full HD; from top to bottom: interpolated images and difference images with the original image); (a) averaging (PSNR: 30.95 dB) and (b) proposed method (PSNR: 35.83 dB).
  • [ FIG. 10. ]  Sample result images (‘crowd run’, full HD; from top to bottom: interpolated images and difference images with the original image); (a) averaging (PSNR: 25.21 dB) and (b) proposed method (PSNR: 29.82 dB).
  • [ TABLE 3. ]  Comparison with the replica of [26] (PSNR [dB] and processing time [ms]).