Tracking as Repeated Figure/Ground Segmentation
   Xiaofeng Ren and Jitendra Malik, to appear in CVPR '07, Minneapolis 2007.



Abstract

Tracking over a long period of time is challenging because the appearance, shape and scale of the object in question may vary. We propose a paradigm of tracking by repeatedly segmenting figure from background. The accurate spatial support obtained through segmentation provides rich information about the track and enables reliable tracking of non-rigid objects without drift.

Figure/ground segmentation operates sequentially, frame by frame, utilizing both static image cues and temporal coherence cues: an appearance model of brightness (or color) and a spatial model that propagates figure/ground masks through low-level region correspondence. A superpixel-based conditional random field linearly combines these cues, and loopy belief propagation estimates the marginal posteriors of figure versus background. We demonstrate our approach on long sequences of sports video, including figure skating and football.
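To make the inference step concrete, the following is a minimal sketch of sum-product loopy belief propagation on a toy pairwise CRF over superpixels with binary labels (0 = ground, 1 = figure). The unary potentials, the three-node chain graph, and the smoothness matrix are all hypothetical stand-ins for the paper's linearly combined appearance and spatial cues, not the authors' actual model.

```python
import numpy as np

def loopy_bp(unary, edges, pairwise, n_iters=50):
    """Sum-product loopy BP for a binary pairwise CRF.

    unary:    (N, 2) array of per-superpixel potentials
    edges:    list of (i, j) adjacency pairs between superpixels
    pairwise: (2, 2) compatibility matrix shared by all edges
    Returns (N, 2) approximate marginal posteriors.
    """
    # Messages are kept for both directions of every edge.
    msgs = {(i, j): np.ones(2) for (a, b) in edges for (i, j) in [(a, b), (b, a)]}
    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            # Belief at i: unary times all incoming messages except the one from j.
            belief = unary[i].copy()
            for (k, l) in msgs:
                if l == i and k != j:
                    belief *= msgs[(k, l)]
            m = pairwise.T @ belief    # marginalize out node i's label
            new[(i, j)] = m / m.sum()  # normalize for numerical stability
        msgs = new                     # synchronous message update
    # Final beliefs: unary times all incoming messages, normalized per node.
    beliefs = unary.copy()
    for (k, l) in msgs:
        beliefs[l] *= msgs[(k, l)]
    return beliefs / beliefs.sum(axis=1, keepdims=True)

# Toy example: three superpixels in a chain. The middle one has ambiguous
# appearance, but its neighbors pull it toward the figure label.
unary = np.array([[0.2, 0.8], [0.5, 0.5], [0.1, 0.9]])
pairwise = np.array([[0.9, 0.1], [0.1, 0.9]])  # smoothness prior
marginals = loopy_bp(unary, [(0, 1), (1, 2)], pairwise)
print(marginals[1])  # middle superpixel's posterior leans strongly toward figure
```

On this chain the graph is a tree, so BP is exact; on a real superpixel adjacency graph with loops, the same message-passing scheme yields approximate marginals, which is how it is used in the paper.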