Robust Video-Based Barcode Recognition via Online Sequential Filtering
 Author: Kim Minyoung
 Organization: Kim Minyoung
 Published: International Journal of Fuzzy Logic and Intelligent Systems, Volume 14, Issue 1, pp. 8-16, 25 March 2014

ABSTRACT
We consider the visual barcode recognition problem in a noisy video data setup. Unlike most existing single-frame recognizers that require considerable user effort to acquire clean, motionless, and blur-free barcode signals, we eliminate such extra human effort by proposing a robust video-based barcode recognition algorithm. We deal with a sequence of noisy, blurred barcode image frames by posing recognition as an online filtering problem. In the proposed dynamic recognition model, at each frame we infer the blur level of the frame as well as the digit class label. In contrast to a frame-by-frame approach with a heuristic majority voting scheme, the class labels and frame-wise noise levels are propagated along the frame sequence in our model, and hence we exploit all cues from noisy frames that are potentially useful for predicting the barcode label in a probabilistically reasonable sense. We also suggest a visual barcode tracking approach that efficiently localizes barcode areas in video frames. The effectiveness of the proposed approaches is demonstrated empirically on both synthetic and real data setups.

KEYWORD
Hidden Markov models, Online sequential filtering, Barcode recognition

1. Introduction
In recent years, barcodes have become ubiquitous in many diverse areas such as logistics, postal services, warehouses, libraries, and stores. The success of barcodes is mainly attributed to their easily machine-readable representation of data, which automates and accelerates the tedious task of identifying and managing large numbers of products/items.
In barcodes, data are typically represented as images of certain unique and repeating one-dimensional (1D) or two-dimensional (2D) patterns. The task of retrieving data from these signal patterns is referred to as barcode recognition, and designing fast, accurate, and robust recognizers is of significant importance. Although traditional optical laser barcode readers are highly accurate and reliable, these devices are often expensive and not very mobile. Because of the availability of inexpensive camera devices (e.g., in mobile phones or smartphones), image-based barcode recognition has become considerably attractive, and considerable research has recently been conducted on this subject. Here we briefly list some recent image-based barcode localization/recognition methods. In [1], block-wise angle distributions were considered for localizing barcode areas. A region-based analysis was employed in [2], whereas the discrete cosine transform was exploited in [3]. Image processing techniques including morphology, filtering, and template matching were serially applied in [4]. Computational efficiency was recently addressed by focusing on both turning points and inclination angles [5].
However, most existing image-based barcode recognition systems assume a single barcode image as input, and require it to be of relatively high quality with little blur or noise for accurate recognition. The drawbacks, also shared by optical readers, are clearly the extra human effort needed to capture a good-quality still picture and the consequently delayed reading time.

In this paper, we consider a completely different yet realistic setup: the input to the recognizer is a video of a barcode (i.e., a sequence of image frames), where the frames are assumed to be considerably blurred and noisy. With this setup, we eliminate the extra effort needed to acquire motionless, clean, and blur-free image shots. Our main motivation is that although it is ambiguous and difficult to recognize a barcode from each single, possibly corrupted frame, we can leverage the information or cues extracted from the entire set of frames to make a considerably accurate decision about the barcode labels.

Perhaps the most straightforward approach to recognizing barcodes in video frames is a key-frame selector, which selects the so-called key frames with the least blur/noise and then applies an off-the-shelf single-image recognizer to each in series. The potential conflict of differently predicted codes across the key frames can be addressed by majority voting. However, such approaches raise several technical issues: i) how should one judge rigorously which frames should be preferably selected, and ii) which voting schemes are the most appropriate with theoretical underpinnings? Moreover, the approach may fail to exploit many important frames that contain discriminative hints about the codes.
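For concreteness, the heuristic baseline just described can be sketched as follows. This is our own minimal sketch, not the paper's implementation; `recognize_frame` stands in for any off-the-shelf single-image recognizer, and `blur_score` for any blur estimator:

```python
from collections import Counter

def majority_vote_decode(frames, recognize_frame, blur_score, n_keep=5):
    """Key-frame baseline: keep the n_keep least-blurred frames, run a
    single-image recognizer on each, and resolve conflicting predictions
    per digit position by majority voting. Both helpers are placeholders."""
    key_frames = sorted(frames, key=blur_score)[:n_keep]
    codes = [recognize_frame(f) for f in key_frames]  # each a digit string
    # per-digit majority vote across the selected key frames
    return "".join(Counter(code[i] for code in codes).most_common(1)[0][0]
                   for i in range(len(codes[0])))
```

Note that the selection threshold `n_keep` and the blur criterion are exactly the kinds of ad hoc choices that the principled filtering approach below avoids.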
Instead, we tackle the problem in a more principled manner. We consider a temporal model for a sequence of barcode image frames, similar to the popular hidden Markov model (HMM); HMMs have previously been applied to visual pattern recognition problems [6-10]. In our model, we introduce hidden state variables that encode the blur/noise levels of the corresponding frames, and impose smooth dynamics over those hidden states. The observation process, contingent on the barcode label class, is modeled as a Gaussian density centered at the image/signal pattern of the represented class, with the perturbation parameter (i.e., the variance) determined by the hidden state noise level.
Within this model, we perform (online) sequential filtering to infer the most probable barcode label. As this inference procedure essentially propagates the belief on the class label as well as the frame noise levels along the video frames, we can exploit all cues from noisy frames that are useful for predicting the barcode label in a probabilistically reasonable sense. Thus the issues and drawbacks of existing approaches can be addressed in a principled manner.
The paper is organized as follows. We briefly review barcode specifications (focusing on the 1D EAN-13 format) and describe conventional procedures for barcode recognition in Section 2. In our video-based data setup, the task of barcode localization (detection) must be performed for a series of frames; we pose it as a tracking problem solved by a new adaptive visual tracking method in Section 3. Then, in Section 4, we describe the proposed online sequential filtering approach for barcode recognition. The experimental results follow in Section 5.
2. OneDimensional Barcodes
Although 2D barcode technologies are now emerging, in this paper we deal with 1D barcodes for simplicity; as it is straightforward to extend the approach to 2D signals, we leave this for future work. In 1D barcodes, information is encoded as a set of digits, where each digit is represented as parallel black and white lines of different widths. Although there are several different coding schemes, we focus on the widely used EAN-13 format, whose detailed specifications are described below.
An example EAN-13 barcode image is shown in the top panel of Figure 1, where the black and white stripes encode the 12-digit barcode (a_1, ..., a_12) together with the checksum digit c, as shown at the bottom. In fact, the barcode stripes encode the 12 digits a_2, ..., a_12 and c, while the first code a_1 can easily be retrieved from the checksum equation, namely c = S_1 − S_0, where S_0 = a_1 + a_3 + ... + a_11 + 3(a_2 + a_4 + ... + a_12) and S_1 is the nearest multiple of 10 not less than S_0. The barcode stripe image is composed of: 1) the leftmost BWB (left boundary indicator), 2) six WBWB patterns (one for each of the left 6 digits), 3) the middle WBWBW (left/right separator), 4) six BWBW patterns (one for each of the right 6 digits), and 5) the rightmost BWB (right boundary). How each digit is encoded is determined by the widths of its alternating four B/W bar stripes (red boxes in the figure). The specific encoding scheme is summarized in the bottom panel; for instance, the digit 7 in the right part is encoded by BWBW with widths proportional to (1, 3, 1, 2). Note that each digit in the left part can be encoded by either of two different width patterns (e.g., digit 1 can be depicted as WBWB with either (2, 2, 2, 1) or (1, 2, 2, 2) widths).
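As a small illustration of the checksum relation above, the following sketch computes the check digit c from the 12 data digits (the function name is ours, not from the paper):

```python
def ean13_check_digit(a):
    """Compute the EAN-13 checksum digit c = S1 - S0, where S0 sums the
    odd-position digits (a1, a3, ..., a11) plus 3 times the even-position
    digits (a2, a4, ..., a12), and S1 rounds S0 up to a multiple of 10."""
    assert len(a) == 12
    s0 = sum(a[0::2]) + 3 * sum(a[1::2])  # a1+a3+...+a11 + 3*(a2+...+a12)
    s1 = -(-s0 // 10) * 10                # nearest multiple of 10 >= S0
    return s1 - s0
```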
Hence, barcode recognition mainly amounts to estimating the bar widths accurately from the image/signal. Of course, in an image-based setup, one first needs to localize the barcode region, a task often referred to as barcode detection. Barcode recognition commonly takes the tightly cropped barcode image from a barcode detector and then decodes the barcode signal. In our video-based framework, however, barcode detection has to be performed for each and every frame in real time. This can be posed as an object tracking problem, so we propose a fast adaptive visual tracking method in the next section.

3. Barcode Tracking
In this section we deal with the visual barcode tracking problem. Although one can invoke a barcode detector from scratch for each frame, one crucial benefit of tracking is the computational speedup that is critical for real-time applications. A similar idea has arisen in the field of computer vision, known as the object tracking problem. Numerous object tracking approaches have been proposed, and we briefly list a few important ones here. The main issue is modeling the target representation: view-based low-dimensional appearance models [11], contour models [12], 3D models [13], mixture models [14], and kernel representations [15, 16] are among the most popular.
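As a concrete illustration of the sampling-based Bayesian tracking formulated below, here is one minimal particle-filter step. This is a sketch under our own simplifying assumptions: the helper `read_signal` stands in for the signal read-out z = ω(u, F), and the noise scale is an arbitrary placeholder:

```python
import numpy as np

def particle_filter_step(particles, frame, read_signal, z_prev, v0_std=2.0):
    """One sampling-based tracking step: propagate particles with Gaussian
    white-noise dynamics u_t ~ N(u_{t-1}, V0), weight each candidate by the
    energy-based appearance model exp(-||z_t - z_{t-1}||^2), and resample.
    `particles` is an (n, 4) array of [ax, ay, bx, by] end-point states."""
    n = len(particles)
    # dynamics: perturb each state with small Gaussian noise
    particles = particles + np.random.normal(0.0, v0_std, particles.shape)
    # appearance energy E(z_t) = ||z_t - z_{t-1}||^2 for each candidate
    energies = np.array([np.sum((read_signal(u, frame) - z_prev) ** 2)
                         for u in particles])
    weights = np.exp(-(energies - energies.min()))  # shift for stability
    weights /= weights.sum()
    # resample particles according to the weights
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx]
```

In the paper's setting, a step of this kind would be repeated per frame with around 100 particles, with the previously read signal serving as the adaptive target model.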
Motivated by these computer vision approaches, we suggest a fairly reasonable and efficient barcode tracking method here. Tracking can be seen as an online estimation problem: given a sequence of image frames up to the current time t, denoted by F_0, ..., F_t, we estimate the location and pose of the target object (the barcode in this case) in the frame F_t. One way to formalize the problem is the online Bayesian formulation [12], where we estimate P(u_t | F_0, ..., F_t) at each time t = 1, 2, .... Here u_t is the tracking state specifying the two end points (with respect to the coordinate system of the current frame F_t) whose line segment tightly covers the target barcode area. Typically, we define u_t = [a_x, a_y, b_x, b_y]^T, where the first (last) two entries indicate the starting (end) position. It is straightforward to read the barcode signal z_t from u_t and F_t by a simple algebraic transformation z_t = ω(u_t, F_t).

We consider a temporal probabilistic model for tracking states and target appearances. We set up a Gaussian-noise Markov chain over the tracking states, and let each state be related to an appearance model that measures the goodness of the track (how much z_t = ω(u_t, F_t) looks like a barcode). Formally, we set the state dynamics as P(u_t | u_{t−1}) = N(u_t; u_{t−1}, V_0) with some small covariance V_0 for white-noise modeling. The essential part is the appearance model, for which we consider a generic energy model,

P(F_t | u_t) ∝ exp(−E(z_t; θ)),     (1)

where E(z_t; θ) is the energy function that assigns a lower (higher) value when z_t looks more (less) like the target model θ. The online Bayesian tracking can then be written as the following recursion:

P(u_t | F_0, ..., F_t) ∝ P(F_t | u_t) ∫ P(u_t | u_{t−1}) P(u_{t−1} | F_0, ..., F_{t−1}) du_{t−1},     (2)

with the initial track P(u_0 | F_0) set as a delta function determined from any conventional barcode detector. The recursion (2) can typically be solved by a sampling-based method (e.g., particle filtering [12]).

Since the target barcode appearance can change over time (mainly due to changes in illumination, camera angles/distances, or slight hand shaking), it is a good strategy to adapt the target model over time. Among various possible adaptive models, we simply use the previous track z_{t−1} for the purpose of computational efficiency. Specifically, we define the energy model as E(z_t; θ) = ||z_t − z_{t−1}||^2. The number of samples (particles) trades off tracking accuracy against tracking time; we fix it at 100, which performs favorably fast with accurate results.

4. Barcode Recognition
Once tracking is performed for the current frame, we have the tracked end points that tightly cover the barcode (an example is shown in the left panel of Figure 2). We read the grayscale intensity values along the line segment, which are regarded as the barcode signal to analyze. Based on prior knowledge of the relative lengths of the left/rightmost boundary stripes as well as the middle separator, we can roughly segment the tracked barcode signal into 12 subsegments, each corresponding to one digit's four-stripe B/W pattern. This can be done simply by equal division of the line segment.
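A minimal sketch of this segmentation, using the fixed EAN-13 module layout (3 guard + 42 left-digit + 5 separator + 42 right-digit + 3 guard = 95 modules); the function name and the exact rounding are our own choices:

```python
import numpy as np

def split_into_digits(signal):
    """Split a tightly cropped 1-D barcode intensity profile into 12
    per-digit subsegments by equal division in module units: each digit
    spans 7 of the 95 modules, with left digits at modules 3..45 and
    right digits at modules 50..92."""
    n = len(signal)
    module = n / 95.0
    digits = []
    for k in range(6):  # left digits: modules 3..45
        a = int(round((3 + 7 * k) * module))
        b = int(round((3 + 7 * (k + 1)) * module))
        digits.append(signal[a:b])
    for k in range(6):  # right digits: modules 50..92
        a = int(round((50 + 7 * k) * module))
        b = int(round((50 + 7 * (k + 1)) * module))
        digits.append(signal[a:b])
    return digits
```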
As a result, we get 12 digit patterns, one for each digit. For instance, the intensity patterns of the two left-part digits 1 and 7 are shown in the middle panel of Figure 2. Then, for each digit, we normalize its intensity pattern with respect to: 1) a fixed x-axis length (say, 30) and 2) the intensity scale, so that the intensity values range from 0 to 1. The former can be done by interpolation/extrapolation, while we use simple linear scaling for the latter. The right panel of Figure 2 depicts the normalized signals for the example digits. In this way, the digit signals have the same length and scale across different digits, and hence they can be contrasted meaningfully with each other. We denote by x_t^k the digit pattern signal of the k-th (k = 1, ..., 12) digit at frame t.

From the tracking and feature extraction procedures, we obtain a sequence of digit pattern signals, X = x_1 x_2 ... x_T, for each digit over T video frames. Here we drop the superscript index k for notational simplicity. We then classify the sequence into a digit class label y ∈ {0, 1, 2, ..., 9}. This sequence classification task is performed individually and independently for each of the 12 digits in the barcode.

Instead of a heuristic approach such as key-frame selection, we tackle the problem in a more principled manner. We consider a temporal model augmented with the class variable y, which essentially builds a class-wise HMM. The graphical model representation of our model is depicted in Figure 3.

In our model, we introduce hidden state variables s_t (one associated with each x_t), where s_t encodes the blur/noise level of the observation signal x_t. Specifically, we consider three different noise levels, i.e., s_t ∈ {1, 2, 3}, with s_t = 1 indicating a relatively clear signal, s_t = 3 severe noise, and s_t = 2 in between. To reflect the motion/scene smoothness of real videos, we also impose smooth dynamics over adjacent hidden state variables. In particular, we use the simple table shown in Figure 4 for the transition probabilities P(s_t | s_{t−1}). We place higher (lower) values on the diagonal (off-diagonal) entries to enforce smooth noise level transitions.

The crucial part is the observation model, for which we employ a Gaussian density whose mean and variance are determined by the digit class y and the current noise level s_t. More specifically, we set the mean of the Gaussian to the signal code (shown in the bottom tables of Figure 1) corresponding to the digit class y, where the WBWB width codes are duration/intensity-normalized in the same way as the observation signal x_t. The perturbation parameter (i.e., the variance) of the isotropic Gaussian is determined from the hidden state noise level s_t; in our experiments, we fix the variances at three increasing values σ_1^2 < σ_2^2 < σ_3^2 corresponding to s_t = 1, 2, 3, respectively. Note that for the digits in the left part, since there are two possible width codes, we model them as mixtures of two Gaussians with equal proportions and the same variance. In summary, the observation model for x_t can be written as follows:

P(x_t | y, s_t) = N(x_t; μ_y, σ_{s_t}^2 I)   (right-part digits),
P(x_t | y, s_t) = (1/2) Σ_{d=1,2} N(x_t; μ_{y,d}, σ_{s_t}^2 I)   (left-part digits),

where μ_y is the normalized signal code vector for the digit y in the right table, and μ_{y,d} that for the digit y in column d of the left table (d = 1, 2). Also, I is the identity matrix. The class prior P(y) is simply set as the uniform distribution (i.e., P(y) = 1/10).

Within this model, we perform online sequential filtering to infer the most probable barcode label y for a given sequence X = x_1 ⋯ x_T. That is, the most probable digit label up to time t can be obtained from

y* = arg max_y P(y | x_1 ⋯ x_t),

where the posterior of y can be computed recursively:

P(y | x_1 ⋯ x_t) ∝ P(y | x_1 ⋯ x_{t−1}) Σ_{s_t} P(x_t | y, s_t) P(s_t | x_1 ⋯ x_{t−1}, y).

The predictive quantity P(s_t | x_1 ⋯ x_{t−1}, y), along with the corresponding filtered distribution P(s_t | x_1 ⋯ x_t, y), is given by the well-known forward inference for the HMM corresponding to the class y (detailed derivations can be found in [17]). In our model, it is also possible to infer the noise level at time t, i.e., P(s_t | x_1 ⋯ x_t), from a similar recursion:

P(s_t | x_1 ⋯ x_t) = Σ_y P(s_t | x_1 ⋯ x_t, y) P(y | x_1 ⋯ x_t).

In essence, the inference procedures in our model propagate beliefs about the class label as well as the frame-wise noise levels along the video frame sequence. In turn, we are able to exploit all the cues from noisy signal frames potentially useful for predicting the barcode label in a probabilistically reasonable sense. Thus, the drawbacks of existing heuristic approaches such as key-frame selection can be addressed in a principled way.
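The recursions above can be sketched as follows. This is our own minimal implementation sketch: the template vectors, variances, and transition table passed in are placeholder values, not the paper's settings:

```python
import numpy as np

def online_digit_filter(X, templates, sigmas, A):
    """Online sequential filtering sketch for one digit-signal sequence.
    X: (T, D) array of normalized digit signals x_1..x_T.
    templates[y]: list of template (signal-code) vectors for class y
                  (two entries for left-part digits, one for right-part).
    sigmas: per-noise-level standard deviations (one per state s_t).
    A: (n_s, n_s) noise-level transition table (as in Figure 4).
    Returns the class posterior P(y | x_1..x_T)."""
    n_y, n_s = len(templates), len(sigmas)
    log_py = np.full(n_y, -np.log(n_y))        # uniform class prior P(y)
    alpha = np.full((n_y, n_s), 1.0 / n_s)     # per-class P(s_t | x_{1:t}, y)
    for x in X:
        for y in range(n_y):
            pred = alpha[y] @ A                # predict P(s_t | x_{1:t-1}, y)
            # emission: equal-weight mixture of isotropic Gaussians per level
            log_e = np.array([
                np.logaddexp.reduce([-0.5 * np.sum((x - m) ** 2) / s ** 2
                                     - x.size * np.log(s)
                                     for m in templates[y]])
                - np.log(len(templates[y]))
                for s in sigmas])
            m = log_e.max()
            joint = pred * np.exp(log_e - m)   # scaled for numerical stability
            log_py[y] += m + np.log(joint.sum())
            alpha[y] = joint / joint.sum()     # filtered P(s_t | x_{1:t}, y)
    log_py -= np.logaddexp.reduce(log_py)      # normalize P(y | x_{1:T})
    return np.exp(log_py)
```

Note how each frame multiplicatively updates the class belief through the state-marginalized emission likelihood, so even heavily blurred frames contribute (weakly) rather than being discarded.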
5. Evaluation
In this section we empirically demonstrate the effectiveness of the proposed videobased barcode recognition approach.
5.1 Synthetically Blurred Video Data
To illustrate the benefits of the proposed method, we consider a simple synthetic experiment that simulates the noise/blur observed in real video acquisition. First, a clean unblurred barcode image is prepared as shown in the top-left corner of Figure 5. We then apply Gaussian filters, G(x, y) = (1/(2πσ^2)) exp(−(x^2 + y^2)/(2σ^2)), with different scales σ. The resulting noisy barcode video frames (of length 8) are shown in the top panel of Figure 5. One can visually check that the frame at t = 1 is corrupted the most severely, while the frame at t = 5 has the lowest noise level, and so on.

As described in the previous section, we split each barcode frame into 12 areas, one for each digit, normalize the regions, and obtain sequences of observation signal vectors x_1 ⋯ x_T. We show the observation vectors for three digits (the first, the third in the left part, and the sixth in the right) in the top rows of the three bottom panels of Figure 5.

The online class filtering (i.e., P(y | x_1 ⋯ x_t)) as well as the state (noise level) filtering (i.e., P(s_t | x_1 ⋯ x_t)) is performed for each frame t. The results are depicted in the second and third rows of each panel. For the 3rd/left digit case, the class prediction at the early stages was highly uncertain and incorrect (i.e., y = 3 instead of the true 1); however, the model corrects itself to the right class when it observes the cleaner frame (t = 5). Although further uncertainty was introduced after that, the final class prediction remains correct.

In addition, in the middle panel (for the 6th/right digit case), as the model observes more frames, the belief in the correct class, that is, P(y = 7 | x_1 ⋯ x_t), increases, indicating that our online filtering approach can effectively leverage and accumulate potentially useful cues from noisy data. The state filtering information is also useful. For instance, looking at the changes of P(s_t | x_1 ⋯ x_t) along t in the 1st/left digit case, the noise level prediction visually appears to be accurate.

5.2 Real Barcode Videos
We next apply our online temporal filtering approach to the real barcode video recognition task. We collect 10 barcode videos by recording real product labels with a smartphone camera. The video frames are of relatively low quality with considerable illumination variations and pose changes, and most are defocused and blurred. Some sample frames are shown in Figure 6. The videos are around 20 frames in length on average.
After running the visual barcode tracker (initialized with a conventional detector), we obtain the cropped barcode areas. We simply split each area into 12 digit signals according to the relative boundary/digit width proportions. The proposed online filtering is then applied to each digit signal sequence individually. For performance comparison, we also test a baseline method, a fairly standard frame-by-frame barcode recognizer, where the overall decoding for a whole video is done by majority voting.
In Table 1 we summarize the recognition accuracies of the proposed approach and the baseline method over all videos. As there are 12 digits in each video, we report the proportion of accurately predicted digits out of 12. The proposed approach significantly outperforms the baseline, achieving nearly 80% accuracy on average, which signifies the impact of principled information accumulation via sequential filtering. The heuristic majority voting scheme, on the other hand, fails to decode the correct digits most of the time, mainly due to its independent treatment of video frames with potentially different noise/blur levels.
5.2.1 More Severe Pose/Illumination Changes
We next test the proposed barcode tracking/recognition approaches on video frames with more severe illumination and pose variations. We collected an additional seven videos (some sample frames are illustrated in Figure 7) containing significant shadows and local light reflections, in conjunction with deformed objects (barcodes on plastic or aluminum-foil bag containers) and out-of-plane rotations.
The recognition results are summarized in Table 2. The results indicate that the proposed sequential filtering approach is viable even under these severe appearance conditions. In particular, partial lighting variations (e.g., shadows) can be effectively handled by the intensity normalization in the measurement processing, while pose changes (to some degree) can also be effectively handled by the noisy emission modeling in the HMM.
6. Conclusion
In this paper we proposed a novel video-based barcode recognition algorithm. Unlike single-frame recognizers, the proposed method eliminates the extra human effort needed to acquire clean, blur-free image frames by directly treating a sequence of noisy, blurred barcode image frames as an online filtering problem. Compared to a frame-by-frame approach with a heuristic majority voting scheme, the belief propagation of the class label and frame-wise noise levels in our model can exploit all cues from the noisy frames. The proposed approach was empirically shown to be effective for accurate prediction of barcode labels, achieving significant improvement over conventional single-frame approaches. Although in practice the current algorithm needs frames of higher quality to achieve 100% accuracy, the results show that the proposed approach can significantly improve recognition accuracy for blurred and unfocused video data.
Conflict of Interest
No potential conflict of interest relevant to this article was reported.

[Figure 1.] (Top) Example EAN-13 barcode image that encodes the 12-digit barcode (a_1..12) with the checksum digit c. (Bottom) Encoding scheme for each digit.


[Figure 2.] (Left) End points (red) of the line that tightly covers the tracked barcode. (Middle) Unnormalized intensity values for digits 1 and 7 in the left part. (Right) Signals normalized to a fixed length of 30 and scaled to the range 0 to 1.

[Figure 3.] Model of barcode signal sequence recognition.

[Figure 4.] Transition probabilities for hidden state variables


[Figure 5.] (Top) Unblurred clean barcode image with the sequence of noisy video frames generated from it using Gaussian blur with different scales. (Bottom) Each of three panels depicts the sequences of normalized signal vectors, online class filtering, and state filtering results for three digits (the first, the third in the left part, and the sixth in the right) whose true digit classes are shown in the parentheses.

[Figure 6.] Sample frames from barcode videos.

[Table 1.] Barcode recognition accuracy on real videos

[Figure 7.] Sample frames from additional barcode videos with severe illumination and pose changes.

[Table 2.] Barcode recognition accuracy on real videos with severe illumination and pose variations.