Introduction
Many empirical studies, including our work on boundary
contours, have confirmed the intuition that natural images are
scale-invariant. We believe that scale-invariance will become increasingly
important as we move to tackle vision problems in more realistic settings,
where we cannot assume knowledge of object/scene scales.
How can we cope with scale-invariance in natural images? One possible approach,
top-down and brute-force, is to explicitly search over all possible
scales. This becomes expensive if one has to cover a large range of scales and a
large number of object categories.
An alternative approach, the one we choose here, is to develop a scale-invariant
representation bottom-up, and to build later stages of
visual processing on this representation.

Piecewise Linear Approximation of Contours
Suppose that we are given a boundary contour, some parts of which are straight
and some of which are curved. How can we represent this contour? A common way
is to parametrize it by arc length, i.e. to sample it uniformly. Such a
representation is rather inefficient: we would like to put enough sample points
at fine-scale details, whereas only a few suffice for straight lines.
Attneave theorized that information along contours is concentrated mostly
at high-curvature locations. Why not, then, break up a contour at high-curvature
locations, as done in our test of the
Markov assumption? Such a decomposition has the desired property that
straight line segments remain undivided, while fine-scale details are
sufficiently sampled.
Of course, curvature is not a scale-independent measure. We can easily fix this
problem by using a scale-invariant measure, namely angle, as the criterion of
decomposition. The result is a piecewise linear approximation of the input contour.

Constrained Delaunay Triangulation
Empirical Validation
The CDT graph can be viewed as a boundary-based, scale-invariant superpixel map. Before using it, we need to ask
two questions:
 As a discrete image representation, how much structure is lost when we move from the image to the CDT graph?
 How good is the CDT graph at completing gradientless gaps?
These questions can only be answered by empirical validation on large datasets of natural images.


 Figure 3: empirical validation of CDT graphs on
the Berkeley Segmentation Dataset (BSDS). The blue curve is obtained by
averaging Pb values on each CDT edge, then projecting the averages back to the
pixel grid and benchmarking.
There is little loss from using CDT edges instead of pixels. The green curve
shows the upper bound (generated by matching to ground-truth boundaries) using
CDT graphs. The precision is close to 100% (again, little loss of structure),
and the asymptotic recall rate is significantly increased (completions at gradientless
locations).
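The averaging step behind the blue curve can be sketched in a few lines (a minimal illustration; `edge_pixels`, a list of (row, column) coordinates traversed by each CDT edge, is a hypothetical input, and the actual precision-recall benchmarking step is omitted):

```python
import numpy as np

def project_edge_averages(pb, edge_pixels):
    """For each CDT edge, average the per-pixel Pb values along its pixels,
    then write that average back onto the same pixels of an output map."""
    out = np.zeros_like(pb, dtype=float)
    for pixels in edge_pixels:
        rows, cols = zip(*pixels)
        out[rows, cols] = pb[rows, cols].mean()
    return out

# Toy 2x2 Pb map with a single CDT edge covering the top row.
pb = np.array([[0.2, 0.8],
               [0.4, 0.0]])
proj = project_edge_averages(pb, [[(0, 0), (0, 1)]])
```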




 Figure 4: relative merits of the CDT versus an alternative
completion scheme based on connecting each vertex to the k nearest
visible vertices, for k={1,3,5,7}. The plot shows the asymptotic
recall rate (i.e. the fraction of illusory contours found) versus the number
of potential completions that need to be considered. An ideal algorithm
would achieve asymptotic recall of 1 with very few potential completions.
The single filled marker shows the performance of the CDT-based completion,
while the curve shows performance over a range of choices of k.
For each dataset, we find that the CDT-based completion gives a better
recall rate at a given number of potential completions than the k-nearest
visible neighbor algorithm.
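For concreteness, the baseline scheme can be sketched as follows (an illustrative re-implementation, assuming 2D vertex coordinates and contour edges given as index pairs; "visible" here means the candidate segment properly crosses no existing contour edge):

```python
import numpy as np

def _orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))

def _visible(V, i, j, edges):
    """True if segment (i, j) properly crosses no contour edge
    (segments sharing an endpoint with an edge are allowed)."""
    for a, b in edges:
        if i in (a, b) or j in (a, b):
            continue
        if (_orient(V[i], V[j], V[a]) != _orient(V[i], V[j], V[b]) and
                _orient(V[a], V[b], V[i]) != _orient(V[a], V[b], V[j])):
            return False
    return True

def knn_visible_completions(vertices, edges, k=3):
    """Candidate gap completions: link each contour vertex to its k
    nearest visible vertices, skipping existing contour edges."""
    V = np.asarray(vertices, dtype=float)
    existing = {frozenset(e) for e in edges}
    completions = set()
    for i in range(len(V)):
        found = 0
        # Visit the other vertices in order of increasing distance.
        for j in np.argsort(np.linalg.norm(V - V[i], axis=1)):
            j = int(j)
            if j == i or frozenset((i, j)) in existing:
                continue
            if _visible(V, i, j, edges):
                completions.add(frozenset((i, j)))
                found += 1
                if found == k:
                    break
    return completions

# Two collinear contour fragments with a gap between vertices 1 and 2:
# the gap-bridging segment appears among the k=1 candidates.
vertices = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (3.0, 0.0)]
edges = [(0, 1), (2, 3)]
comps = knn_visible_completions(vertices, edges, k=1)
```

The number of candidates grows with k, which is exactly the trade-off plotted in Figure 4: larger k raises recall but multiplies the completions that later stages must consider.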


Applications
References
 Xiaofeng Ren, Charless Fowlkes and Jitendra Malik. Scale-Invariant Contour Completion using Conditional Random Fields. To appear in ICCV '05, Beijing, 2005.
 Xiaofeng Ren, Charless Fowlkes and Jitendra Malik. Cue Integration in Figure/Ground Labeling. To appear in NIPS '05, Vancouver, 2005.
 Xiaofeng Ren, Alex Berg and Jitendra Malik. Recovering Human Body Configurations using Pairwise Constraints between Parts. To appear in ICCV '05, Beijing, 2005.
 Xiaofeng Ren. Learning and Matching Line Aspects for Articulated Objects. CVPR '07, Minneapolis, 2007.
 Xiaofeng Ren, Charless Fowlkes and Jitendra Malik. Mid-level Cues Improve Boundary Detection. Berkeley Technical Report 05-1382, CSD, 2005.
