(bio) I am a Research Assistant Professor at TTI-C. I maintain sparse-meshing.com. In December 2007 I completed my Ph.D. in the computer science department at CMU, working with Gary Miller on computational geometry and scientific computing. For my first year and a half there I worked with Tuomas Sandholm on auction design.
Prior to this I worked at NASA Ames Research Center in the model-based autonomy group, most notably working on the L2 diagnosis engine, now in orbit aboard the EO-1 spacecraft; and on hcc, a hybrid dynamics simulator now busily bitrotting. Prior to that, I did my undergrad at Brown.
A quality mesh is necessarily well-spaced, and a well-spaced point set has a quality mesh. So as a first step to dynamic meshing, we dynamize the generation of a well-spaced point set over an arbitrary input set of points. We produce a linear-size structure (i.e. constant factor overhead) that can maintain a minimal well-spaced superset in logarithmic time per insertion or deletion, and can compute the Voronoi cell of an arbitrary vertex in constant time. Our bounds are tight in the worst case. Experiments demonstrate the approach is practical. Presented at FWCG 2009; full version coming soon at SoCG 2010.
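As a rough illustration (not the paper's algorithm or data structure), well-spacedness can be sanity-checked by a brute-force proxy: points that lie near one another should have similar nearest-neighbor distances. The function names and the `ratio` threshold below are hypothetical:

```python
from math import dist

def nn_dists(pts):
    """Brute-force nearest-neighbor distance for each point."""
    n = len(pts)
    return [min(dist(pts[i], pts[j]) for j in range(n) if j != i)
            for i in range(n)]

def roughly_well_spaced(pts, ratio=2.0):
    """Rough proxy for well-spacedness: points that are near one
    another should have nearest-neighbor distances within a factor
    of `ratio` of each other (a Lipschitz-like spacing condition)."""
    nn = nn_dists(pts)
    n = len(pts)
    for i in range(n):
        for j in range(i + 1, n):
            if dist(pts[i], pts[j]) <= nn[i] + nn[j]:
                if max(nn[i], nn[j]) > ratio * min(nn[i], nn[j]):
                    return False
    return True
```

A uniform grid passes this check; a tight cluster next to a far-away point fails it, which is exactly the situation the Steiner superset repairs.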
We compute the full persistent homology in low dimensions (at best, maybe 10; in implementation up to 4 so far). Previous solutions had runtimes and space usage of n^O(d); we get 2^O(d^2) n. That is, we reduce runtime from polynomial in n (with big constants) to linear in n (with even bigger constants). The trick is to perform mesh refinement, and use the resulting mesh as the simplicial complex in the filtration. Presented at FWCG 2009; full version coming soon at SoCG 2010.
My FWCG paper from 2008, with the details worked out. It reduces surface reconstruction to integer sorting: linear work with words of length O(log n), and logarithmic parallel depth.
If a point cloud is well-spaced, we can perturb it just a bit and store it in O(n) bits regardless of the machine word size. We can do quadtree queries on the sorted point cloud in logarithmic time if the word size is logarithmic. So we can do surface reconstruction and mesh refinement only a little slower than linear time, while using a third as many bits in practice. Presented at FWCG 2009.
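The flavor of the approach can be sketched in a few lines: sort the (integer-rounded) points by their Morton (z-order) codes, and then answer quadtree-cell counting queries with two binary searches on the sorted codes. This is a toy sketch, not the paper's representation; `BITS`, `build`, and `cell_count` are illustrative names:

```python
from bisect import bisect_left

BITS = 16  # coordinate precision; 2*BITS bits per Morton code

def morton(x, y):
    """Interleave the bits of integer coordinates x and y (z-order)."""
    code = 0
    for i in range(BITS):
        code |= ((x >> i) & 1) << (2 * i) | ((y >> i) & 1) << (2 * i + 1)
    return code

def build(points):
    """Sort points into z-order; the sorted codes are the structure."""
    return sorted(morton(x, y) for x, y in points)

def cell_count(codes, prefix, level):
    """Count points in the quadtree cell whose Morton codes start
    with `prefix` at depth `level`, via two binary searches."""
    shift = 2 * (BITS - level)
    lo = prefix << shift
    hi = (prefix + 1) << shift
    return bisect_left(codes, hi) - bisect_left(codes, lo)
```

Each binary search costs O(log n) comparisons, matching the claim of logarithmic-time quadtree queries on the sorted cloud.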
A generalization of Steiner point choice for incremental Delaunay mesh refinement algorithms. I prove that any algorithm that chooses points according to my rules will terminate with a mesh that is not too large. The rules encompass the techniques that most Delaunay refinement algorithms use to mesh point clouds, piecewise linear complexes, or piecewise smooth complexes in arbitrary (fixed) dimension; the user can also give explicit direction to refine certain areas of interest. I give examples showing how to apply my result to prove in a paragraph what has traditionally taken two pages to show.
Anecdotally, if we have a surface mesh and from it generate a graded volume mesh, we only need to add a small number of vertices (maybe a factor of two or ten). Indeed, intuition would say that the grading condition means that every ring of vertices around the surface has half as many vertices as the previous ring, so the total number should be linear. But there are many pathological cases where this does not hold, so giving a proper proof is non-trivial. We prove that under a quite reasonable condition, the volume mesh has only linearly more vertices than the surface mesh.
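The halving-rings intuition is just a geometric series: if ring i around the surface carries about n/2^i vertices, the total stays below 2n. A toy calculation (function name hypothetical):

```python
def graded_volume_size(n_surface, rings=50):
    """Sum the halving-rings intuition: ring i contributes about
    n_surface / 2**i vertices, a geometric series bounded by
    2 * n_surface."""
    return sum(n_surface / 2**i for i in range(rings))
```

For a surface mesh of 1000 vertices this sums to just under 2000, i.e. only a constant factor more; the paper's contribution is showing when this intuition actually holds.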
The main efficiency question in mesh refinement is how to update the mesh after each insertion. SVR has a top-down refinement strategy that guarantees that updates are always cheap. Har-Peled and Ungor use a bottom-up strategy and build a point location structure on the side. Their result was only in two dimensions; we extend it to any fixed dimension. We also clarify (and, I claim, simplify) the interface between the mesher and the point location structure.
This collects the SVR results -- dynamism by recomputing from scratch, really fast -- along with a few other ideas that I've since expanded on: quadtree-based operation (see the CCCG'08 paper), a dynamization thereof, and a more general way of choosing Steiner points. See also my defense talk (Keynote pdf).
An implementation of SVR. For point clouds, SVR is the fastest meshing code available. The code is downloadable cost-free for researchers at sparse-meshing.com. See also the talk (tarred Keynote file).
We show how to produce a balanced quad-tree over a point set in any dimension, and then to update the quad-tree as points are adversarially added to or removed from the point set. Our solution is the first proven to maintain an optimal-size quality mesh in the optimal O(lg L/s) time bound. In two dimensions, the mesh we maintain is as small as any known practical method produces: we dynamize a quad-tree post-processing algorithm of Har-Peled and Ungor (and we conjecture the same holds in three and higher dimensions). Furthermore, the algorithm is easy to implement because we use self-adjusting computation to automatically dynamize it: just the static algorithm plus a few annotations suffice to get working dynamic code.
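For contrast with the dynamic result, here is a minimal static build in two dimensions (hypothetical names; it does neither the balancing nor the dynamic updates that are the point of the paper):

```python
def build_quadtree(points, x, y, size, depth=0, max_depth=16):
    """Split a square cell until each leaf holds at most one point.
    A static sketch only: leaves are dicts with a "points" list,
    internal nodes hold four "children"."""
    if len(points) <= 1 or depth == max_depth:
        return {"cell": (x, y, size), "points": points}
    half = size / 2.0
    children = []
    for dx in (0.0, half):
        for dy in (0.0, half):
            sub = [p for p in points
                   if x + dx <= p[0] < x + dx + half
                   and y + dy <= p[1] < y + dy + half]
            children.append(build_quadtree(sub, x + dx, y + dy, half,
                                           depth + 1, max_depth))
    return {"cell": (x, y, size), "children": children}
```

Self-adjusting computation takes a static recursion of exactly this shape and, with a few annotations, produces code that updates the tree in response to point insertions and deletions.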
See also an earlier experimental result: Optimal-time dynamic mesh refinement: preliminary results. Umut A. Acar, Benoît Hudson. Fall workshop on Computational Geometry, 2006. (pdf bib)
SVR, in parallel. We get essentially O(log^2 m) parallel depth and optimal O(n log n + m) work bounds for meshing PLC input in any fixed dimension. We produce good aspect-ratio meshes rather than merely good radius-edge meshes by using the Li-Teng technique, which we show how to parallelize.
Mesh refinement in any fixed dimension, producing a quality radius-edge mesh that is within a constant factor of optimal in size, and does so in optimal time on a broad class of inputs. In two dimensions, we match two prior results. In three and higher dimensions, we have the first subquadratic runtimes. See also the technical report CMU-CS-06-132 for a longer version with full algorithms and proofs.
We show how to do rotations in Kirkpatrick's 1982 planar point location structure. In more recent work (not yet published) we extend this to Delaunay triangulations in arbitrary dimension.
Given n agents bidding on k items, where the agents have complicated preferences over the items, how can the auctioneer get all the information it needs to clear the auction, without requiring a full set of prices from each agent? This is an experimental study of a few policies. Presented at:
Lead article of the April 2001 Dr. Dobb's Journal. Describes a library of fundamental data structures, among the first implemented for Java. Largely obsoleted by the Java Collections framework, but still in use in the popular textbook by Goodrich and Tamassia.
A report stemming from my work with Preparata and Upfal on testing their algorithms for fast shotgun sequencing of short genomes such as H. influenzae. The theoretical results were described in Preparata, Frieze, and Upfal 1999.
Other interests, in no particular order:
Rock climbing, cross-country skiing, backpacking and hiking, swing dance, civ2 and lookalikes. When I have time I try to read Science and NYRB.