Nov 2010: This page is a bit out of date; I am hoping to update it soon.
My research interests are briefly summarized below, followed by more
detailed descriptions of specific research projects I have been involved in.
- Machine Learning
I am interested in both core machine learning (theoretical analysis of
computation involved in statistical learning and design of new
algorithms and model classes) and applied machine learning, where
the primary goal is to provide a practitioner in a particular problem
domain with suitable tools for the problem at hand. I believe that
theory and applications of learning greatly benefit from each
other. In particular, a challenging practical problem can often
provide inspiration for a new way of thinking which in turn leads to
conceptually new models or algorithms.
- Learning similarity, as a task-specific concept (this was the subject of
my PhD thesis).
- Non-parametric methods for classification, density estimation and regression.
- Modeling structured time series in unsupervised and semi-supervised settings.
- Brain-machine interfaces
My current work at Brown with Michael Black focuses on developing mathematical methods
for decoding neural (cortical) code for movement and using it for
direct cortical control of motor activity in artificial systems. The
primary application for this is in neuro-motor prosthetics for patients
whose motor cortex is intact but who have lost control of motor
function due to injury or disease. We would like to bypass the damaged
neural pathways by means of computation: decode the
"commands" issued by the brain and translate them into commands to,
say, a robotic manipulator. Of course, such understanding of the neural
code would also have great scientific implications. We are working in
collaboration with the Donoghue Lab and Cyberkinetics, Inc.
- Computational Vision
Much of my past work has been on problems in computer vision. I remain
very interested in this area, in particular the following topics.
- Articulated body modeling and pose estimation. I have worked on
new, example-based approaches to these problems; details below.
- Visual object categorization and recognition.
- Neural decoding of dexterous hand manipulation. This is the main project
of my postdoctoral work, done in collaboration with Michael Black, Carlos
Vargas-Irwin and John Donoghue. We are working on learning structure
(motor primitives) in natural hand movements and, at the same time,
trying to decode the movements from simultaneously recorded neural
signals. An important context for this work is brain-machine interfaces;
however, it is of great scientific interest in its own right.
- Risk of approximate nearest-neighbor
classification. In many practical applications, the empirical performance
of nearest-neighbor methods is excellent, given enough data. However, this
comes at an often infeasible computational expense of searching the
database for neighbors. One remedy is to use fast approximate search
algorithms. With John Fisher, we have started investigating the tradeoff
between the speedup gained by approximation and the potential loss in
accuracy. Preliminary results were presented at the 2006 Learning
workshop at Snowbird; see our poster.
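To give a flavor of the tradeoff, here is a toy sketch (all data and parameters are illustrative, not from our experiments) comparing exact 1-NN classification with an approximate version that uses random-hyperplane hashing and searches only the query's hash bucket:

```python
# Toy sketch of the speedup/accuracy tradeoff in approximate nearest-neighbor
# classification: only the query's hash bucket is searched.
import numpy as np

rng = np.random.default_rng(0)

# Two Gaussian classes in 10-D: a labeled database and a query set.
d, n = 10, 2000
X = np.vstack([rng.normal(0.0, 1.0, (n // 2, d)),
               rng.normal(1.5, 1.0, (n // 2, d))])
y = np.array([0] * (n // 2) + [1] * (n // 2))
Q = np.vstack([rng.normal(0.0, 1.0, (100, d)),
               rng.normal(1.5, 1.0, (100, d))])
qy = np.array([0] * 100 + [1] * 100)

def exact_1nn(q):
    return y[np.argmin(((X - q) ** 2).sum(axis=1))]

# k random hyperplanes through the origin give a k-bit hash key per point.
k = 8
H = rng.normal(size=(k, d))
pows = 1 << np.arange(k)
keys = (X @ H.T > 0) @ pows            # integer bucket id per database point

def approx_1nn(q):
    key = (H @ q > 0) @ pows
    idx = np.flatnonzero(keys == key)
    if idx.size == 0:                  # empty bucket: fall back to exact search
        return exact_1nn(q), n
    best = idx[np.argmin(((X[idx] - q) ** 2).sum(axis=1))]
    return y[best], idx.size

exact_acc = np.mean([exact_1nn(q) == t for q, t in zip(Q, qy)])
preds, costs = zip(*[approx_1nn(q) for q in Q])
approx_acc = np.mean(np.array(preds) == qy)
print(f"exact acc {exact_acc:.2f}, approx acc {approx_acc:.2f}, "
      f"avg candidates {np.mean(costs):.0f} of {n}")
```

The approximate search examines a small fraction of the database per query; quantifying the resulting risk of misclassification, as a function of the speedup, is the question we are after.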
- Example-based articulated pose
estimation and tracking. This is an ongoing project, in collaboration with
L. Taycher, D. Demirdjian and T. Darrell. We have developed a framework
that allows us to accurately and very efficiently search a large database
of images of people for examples with pose similar to that of the person
in a single input image. The approach, Parameter-Sensitive Hashing, is
based on learning hash functions that are sensitive to similarity in pose
space. The key contributions of this work are the connection between
hashing and classification of pairs, and the notion of learning embeddings
that reflect distance in the parameter space for the regression problem
involved in pose estimation. We have explored a number
of methods for integrating such example-based mechanisms into
motion-based articulated tracking systems.
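The core idea can be sketched in a toy form (my own illustrative simplification, not the published algorithm: a 1-D pose angle, hand-made features, and hash bits chosen by a simple agreement score rather than the paired-classification formulation):

```python
# Toy sketch of parameter-sensitive hashing: hash bits are thresholded
# features, selected so that examples with similar *pose* tend to collide.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "pose": an angle theta; "image features" are noisy functions
# of theta, standing in for real image descriptors.
n = 1000
theta = rng.uniform(0, 2 * np.pi, n)

def feat(t):
    t = np.atleast_1d(t)
    f = np.column_stack([np.sin(t), np.cos(t), np.sin(2 * t), np.cos(2 * t)])
    return f + rng.normal(0, 0.05, f.shape)

X = feat(theta)

def pose_close(a, b, eps=0.3):          # similarity in parameter (pose) space
    d = np.abs(a - b) % (2 * np.pi)
    return np.minimum(d, 2 * np.pi - d) < eps

# Random pairs labeled similar/dissimilar by their pose distance.
i = rng.integers(0, n, 2000)
j = rng.integers(0, n, 2000)
sim = pose_close(theta[i], theta[j])

# Candidate bits are (feature, threshold); score each by how much more often
# it agrees on pose-similar pairs than on pose-dissimilar ones.
def score(f, t):
    b = X[:, f] > t
    return np.mean(b[i][sim] == b[j][sim]) - np.mean(b[i][~sim] == b[j][~sim])

cands = [(f, t) for f in range(4) for t in rng.uniform(-1, 1, 20)]
bits = sorted(cands, key=lambda ft: -score(*ft))[:8]

codes = np.column_stack([X[:, f] > t for f, t in bits])   # 8-bit code per example

def query(pose):
    x = feat(pose)[0]
    q = np.array([x[f] > t for f, t in bits])
    hits = np.flatnonzero((codes == q).all(axis=1))       # same hash bucket
    return theta[hits]                                    # poses of colliding examples
```

Examples retrieved from the query's bucket then serve as candidates for pose estimation; in the actual system the sensitive hash functions are learned from labeled pairs of examples, and several hash tables are combined.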
- Learning task-specific similarity - subject of my PhD thesis. I am interested in the retrieval setup, whereby a "user"
provides the algorithm with a set of pairs of examples known to be
similar under some (hidden) notion of similarity, and possibly another
set of pairs of dissimilar examples. Then, given a set of reference
examples, the goal is, for a previously unseen query, to retrieve the
reference examples similar to it - that is, the ones that the user would
deem similar - with high
precision/recall. The core of this problem is how to learn
task-specific similarity, and how to do the search with respect to the
learned similarity in an efficient way that would allow dealing with
very large data sets. Example applications of this approach include
example-based pose estimation, motion-based animation and visual
fragment-based recognition.
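To make the retrieval setup concrete, here is a toy illustration. The actual work learns similarity classifiers; here I substitute a deliberately simple learned per-feature weighting on synthetic data, purely to show the pipeline from labeled pairs to retrieval:

```python
# Toy version of the retrieval setup: learn task-specific similarity from
# labeled pairs, then retrieve reference examples for unseen queries.
import numpy as np

rng = np.random.default_rng(2)

# Hidden notion of similarity: only the first 2 of 6 features matter.
d, n = 6, 500
X = rng.normal(size=(n, d))

def similar(a, b):
    return np.linalg.norm(X[a, :2] - X[b, :2]) < 0.5

# "User"-provided pairs, labeled by the hidden similarity.
i = rng.integers(0, n, 4000)
j = rng.integers(0, n, 4000)
lab = np.array([similar(a, b) for a, b in zip(i, j)])

# Weight each feature by how much larger its squared difference is on
# dissimilar pairs than on similar pairs.
sq = (X[i] - X[j]) ** 2
w = sq[~lab].mean(axis=0) / (sq[lab].mean(axis=0) + 1e-9)
w /= w.sum()

def precision_at_k(weights, k=10, n_q=50):
    hits = 0
    for q in range(n_q):
        dist = ((X - X[q]) ** 2 * weights).sum(axis=1)
        dist[q] = np.inf                  # exclude the query itself
        hits += sum(similar(q, r) for r in np.argsort(dist)[:k])
    return hits / (n_q * k)

p_learned = precision_at_k(w)
p_uniform = precision_at_k(np.ones(d) / d)
print(f"precision@10: learned {p_learned:.2f}, uniform {p_uniform:.2f}")
```

The learned weighting recovers the hidden relevant features and retrieves with much higher precision than the uniform (task-agnostic) distance; the efficiency question, i.e., avoiding a linear scan, is what the hashing work above addresses.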
- Face recognition from sets of
images. This is an ongoing project; I have worked on this with John Fisher and Trevor Darrell, and
recently we have been collaborating with Ognjen Arandjelovic and Roberto Cipolla from
the University of Cambridge. We are interested in a scenario in which
a classification algorithm gets to see a set of observations that are
known to all belong to the same (unknown) class - for instance, if a
surveillance camera collects a set of face snapshots from a
person. Our goal is to develop an approach to classification that
would use the information contained in a set.
- Motion-based animation interface, a collaboration with Liu Ren, Jessica
Hodgins and Hanspeter Pfister. In this work we extended the approach taken
in PSH and introduced a number of important improvements, namely the
greedy learning of similarity classifiers (which improves the performance
significantly by properly dealing with dependencies), and the integration
with temporal context by using the similarity classifiers in conjunction
with the motion graph.
- 3D Structure with a Statistical Image-Based Shape Model. Joint work with
Kristen Grauman and Trevor Darrell.
- Hypercuts -
boosted dyadic discriminants. A new ensemble classifier, which can be
seen as an SVM constructed one pair of opposing support vectors at a
time. Joint work with Baback Moghaddam.
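The idea can be sketched as follows (a toy rendition under my own assumptions, not the actual Hypercuts code): each weak learner is the hyperplane bisecting one pair of opposite-class training points, and the learners are combined by AdaBoost.

```python
# Toy boosted dyadic classifier: weak learners are hyperplanes bisecting
# opposite-class pairs, combined with standard AdaBoost.
import numpy as np

rng = np.random.default_rng(3)

# Two-class 2-D data.
n = 200
X = np.vstack([rng.normal(-1, 1, (n // 2, 2)), rng.normal(1, 1, (n // 2, 2))])
y = np.array([-1] * (n // 2) + [1] * (n // 2))

w = np.full(n, 1 / n)            # AdaBoost example weights
ensemble = []                    # list of (alpha, normal, offset)

for _ in range(10):
    # Sample candidate opposite-class pairs; keep the pair whose bisecting
    # hyperplane has the lowest weighted error.
    best = None
    for _ in range(50):
        p = rng.integers(0, n // 2)            # a negative example
        q = n // 2 + rng.integers(0, n // 2)   # a positive example
        normal = X[q] - X[p]
        offset = normal @ (X[p] + X[q]) / 2
        h = np.sign(X @ normal - offset)
        err = w[h != y].sum()
        if best is None or err < best[0]:
            best = (err, normal, offset, h)
    err, normal, offset, h = best
    err = np.clip(err, 1e-9, 1 - 1e-9)
    alpha = 0.5 * np.log((1 - err) / err)
    ensemble.append((alpha, normal, offset))
    w *= np.exp(-alpha * y * h)                # AdaBoost reweighting
    w /= w.sum()

def predict(Z):
    score = sum(a * np.sign(Z @ nrm - off) for a, nrm, off in ensemble)
    return np.sign(score)

train_acc = np.mean(predict(X) == y)
```

Each selected pair plays the role of two opposing support vectors; boosting then weights and combines these one-pair "cuts" into the final discriminant.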
- Integrated face and gait recognition with multiple views. Joint work
with Lily Lee and Trevor Darrell.