Robust Motion-Based Image Segmentation Using Fusion (WP-P6)
Author(s):
Michael Farmer (Eaton Corporation, USA)
Xiaoguang Lu (Michigan State University, USA)
Hong Chen (Michigan State University, USA)
Anil Jain (Michigan State University, USA)
Abstract: Accurate and robust tracking of humans is of growing interest in a variety of image processing and computer vision applications. To support real-time tracking of objects in video sequences, considerable effort has been directed at developing optical flow and general image motion estimation algorithms for resolving various types of motion. The goal is to estimate motion parameters when there are multiple moving objects in the image in the presence of lighting variations. Simple illumination effects, such as global light level changes, are relatively easy to correct. Complex lighting effects involving rapidly moving light bands or shadow bands are much more difficult to resolve. The combination of multiple motions and complex lighting effects can lead to dramatic image variations that may not be adequately accounted for by any single motion estimation algorithm. We propose to fuse the results of multiple motion estimation algorithms to improve the robustness of the system. Our approach uses the Expectation-Maximization (EM) algorithm as a fusion engine, and applies Principal Components Analysis (PCA) for dimensionality reduction to improve the performance of the EM algorithm and reduce the processing time. The performance of the proposed fusion algorithm has been demonstrated in the application of monitoring occupants in a moving automobile to determine whether they are too close to the instrument panel (airbag).
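The PCA-then-EM pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the motion features are synthetic stand-ins for the outputs of multiple motion estimators, the mixture is restricted to two spherical Gaussian components, and the PCA-seeded initialization is an assumption made here for stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pixel feature vectors formed by stacking the outputs of two
# hypothetical motion estimators (6-D total). The two clusters stand in for
# "occupant motion" vs. "background/lighting-induced motion".
n = 200
cluster_a = rng.normal(loc=0.0, scale=0.5, size=(n, 6))
cluster_b = rng.normal(loc=3.0, scale=0.5, size=(n, 6))
X = np.vstack([cluster_a, cluster_b])

# --- PCA: project the fused 6-D features onto the top 2 principal components ---
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                      # (400, 2) reduced features

# --- EM for a 2-component spherical Gaussian mixture on the reduced features ---
k, d = 2, Z.shape[1]
# Seed the means at the extremes along the first principal component.
means = Z[[Z[:, 0].argmin(), Z[:, 0].argmax()]]
var = np.ones(k)                       # per-component spherical variance
weights = np.full(k, 1.0 / k)

for _ in range(50):
    # E-step: responsibilities under spherical Gaussians (log-domain for stability)
    dist2 = ((Z[:, None, :] - means[None]) ** 2).sum(-1)          # (N, k)
    log_p = -0.5 * (d * np.log(2 * np.pi * var) + dist2 / var) + np.log(weights)
    log_p -= log_p.max(axis=1, keepdims=True)
    resp = np.exp(log_p)
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixture weights, means, and variances
    Nk = resp.sum(axis=0)
    means = (resp.T @ Z) / Nk[:, None]
    var = (resp * ((Z[:, None, :] - means[None]) ** 2).sum(-1)).sum(0) / (d * Nk)
    weights = Nk / len(Z)

# Hard segmentation: assign each pixel to its most responsible component.
labels = resp.argmax(axis=1)
```

In the paper's setting the feature vectors would come from the actual motion estimators run on video frames, and the component labels would drive the occupant-position decision; the sketch only shows how PCA shrinks the fused feature space before EM clusters it.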