Lecture Notes in Computer Science 5416
Commenced Publication in 1973
Founding and Former Series Editors:
Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison
Lancaster University, UK
Takeo Kanade
Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler
University of Surrey, Guildford, UK
Jon M. Kleinberg
Cornell University, Ithaca, NY, USA
Alfred Kobsa
University of California, Irvine, CA, USA
Friedemann Mattern
ETH Zurich, Switzerland
John C. Mitchell
Stanford University, CA, USA
Moni Naor
Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz
University of Bern, Switzerland
C. Pandu Rangan
Indian Institute of Technology, Madras, India
Bernhard Steffen
University of Dortmund, Germany
Madhu Sudan
Massachusetts Institute of Technology, MA, USA
Demetri Terzopoulos
University of California, Los Angeles, CA, USA
Doug Tygar
University of California, Berkeley, CA, USA
Gerhard Weikum
Max-Planck Institute of Computer Science, Saarbruecken, Germany
Frank Nielsen (Ed.)
Emerging Trends in Visual Computing
LIX Fall Colloquium, ETVC 2008
Palaiseau, France, November 18-20, 2008
Revised Invited Papers
Preface
ETVC 2008, the fall colloquium of the computer science department (LIX) of the
École Polytechnique, held in Palaiseau, France, November 18-20, 2008, focused
on the Emerging Trends in Visual Computing. The colloquium gave scientists the
opportunity to sketch a state-of-the-art picture of the mathematical foundations
of visual computing.
We were delighted to invite and welcome the following distinguished speakers
to ETVC 2008 (listed in alphabetical order):
– Shun-ichi AMARI (Mathematical Neuroscience Laboratory, Brain Science
Institute, RIKEN, Wako-Shi, Japan): Information Geometry and Its
Applications
– Tetsuo ASANO (School of Information Science, Japan Advanced Institute
of Science and Technology, JAIST, Japan): Constant-Working-Space Algo-
rithms for Image Processing
– Francis BACH (INRIA/ENS, France): Machine Learning and Kernel Meth-
ods for Computer Vision
– Frédéric BARBARESCO (Thales Air Systems, France): Applications of In-
formation Geometry to Radar Signal Processing
– Michel BARLAUD (I3S CNRS, University of Nice-Sophia-Antipolis, Poly-
tech’Nice & Institut Universitaire de France, France): Image Retrieval via
Kullback Divergence of Patches of Wavelets Coefficients in the k-NN
Framework
– Jean-Daniel BOISSONNAT (GEOMETRICA, INRIA Sophia-Antipolis,
France): Certified Mesh Generation
– Pascal FUA (EPFL, CVLAB, Switzerland): Recovering Shape and Motion
from Video Sequences
– Markus GROSS (Department of Computer Science, Institute of Scientific
Computing, Swiss Federal Institute of Technology Zurich, ETHZ, Switzer-
land): 3D Video: A Fusion of Graphics and Vision
– Xianfeng David GU (State University of New York at Stony Brook, USA):
Discrete Curvature Flow for Surfaces and 3-Manifolds
– Leonidas GUIBAS (Computer Science Department, Stanford University,
USA): Detection of Symmetries and Repeated Patterns in 3D Point Cloud
Data
– Sylvain LAZARD (VEGAS, INRIA LORIA Nancy, France): 3D Visibility
and Lines in Space
– Stéphane MALLAT (École Polytechnique, Centre de Mathématiques Ap-
pliquées (CMAP), France): Sparse Geometric Super-Resolution
– Hiroshi MATSUZOE (Department of Computer Science and Engineering,
Graduate School of Engineering, Nagoya Institute of Technology, NITECH,
Japan): Computational Geometry from the Viewpoint of Affine Differential
Geometry
– Dimitris METAXAS (Computational Biomedicine Imaging and Modeling
Center, CBMI, Rutgers University, USA): Unifying Subspace and Distance
Metric Learning with Bhattacharyya Coefficient for Image Classification
– Frank NIELSEN (LIX, École Polytechnique, Paris, France & Sony Com-
puter Science Laboratories Inc., Tokyo, Japan): Computational Geometry in
Dually Flat Spaces: Theory, Applications and Perspectives
– Richard NOCK (CEREGMIA, University of Antilles-Guyane, France): The
Intrinsic Geometries of Learning
– Nikos PARAGIOS (École Centrale de Paris, ECP, Paris, France): Procedural
Modeling of Architectures: Towards Large Scale Visual Reconstruction
– Xavier PENNEC (ASCLEPIOS, INRIA Sophia-Antipolis, France): Statis-
tical Computing on Manifolds for Computational Anatomy
– Ramesh RASKAR (MIT Media Lab, USA): Computational Photography:
Epsilon to Coded Imaging
– Cordelia SCHMID (LEAR, INRIA Grenoble, France): Large-Scale Object
Recognition Systems
– Gabriel TAUBIN (Division of Engineering, Brown University, USA): Shape
from Depth Discontinuities
– Baba VEMURI (CISE Dept., University of Florida, USA): Information-
Theoretic Algorithms for Diffusion Tensor Imaging
– Suresh VENKATASUBRAMANIAN (School of Computing, University of
Utah, USA): Non-standard Geometries and Data Analysis
– Martin VETTERLI (School of Computer and Communication Sciences,
EPFL, Switzerland): Sparse Sampling: Variations on a Theme by Shannon
– Jun ZHANG (Department of Psychology, University of Michigan, USA):
Information Geometry: Duality, Convexity and Divergences
Invited speakers were encouraged to submit a state-of-the-art chapter on their
research area. The review process was carried out by members of the Program
Committee and other reviewers. We would like to sincerely thank the contribut-
ing authors and thank the reviewers for the careful feedback that helped the
authors prepare their camera-ready papers.
Videos of the lectures synchronized with slides are available from
www.videolectures.net
We were very pleased to welcome all the 150+ participants to ETVC 2008.
For those who did not attend, we hope the chapters of this publication provide
a good snapshot of the current research status in visual computing.
December 2008 Frank Nielsen
Group picture of the participants at ETVC 2008 (November 19, 2008)
Organization
Frank Nielsen (Program Chair)
Evelyne Rayssac (Secretary)
Corinne Poulain (Secretary)
Philippe Baptiste (Financial Advisor)
Jean-Marc Steyaert (Scientific Advisor)
Luca Castelli Aleardi (Photographer)
Referees
S. Boltz
F. Chazal
B. Lévy
A. André
F. Hetroy
R. Keriven
F. Nielsen
R. Nock
T. Nakamura
S. Oudot
S. Owada
M. Pauly
A. Vigneron
Sponsoring Institutions
We gratefully acknowledge the following institutions for their generous support:
– CNRS
– DIGITEO
– École Polytechnique
– Groupe de Recherche Informatique & Mathématique (GdR IM)
– University of Antilles-Guyane, CEREGMIA Department
Table of Contents
Unifying Subspace and Distance Metric Learning with Bhattacharyya
Coefficient for Image Classification . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Qingshan Liu and Dimitris N. Metaxas
Information Retrieval
Constant-Working-Space Algorithms for Image Processing. . . . . . . . . . . . . 268
Tetsuo Asano
Sparse Multiscale Patches for Image Processing . . . . . . . . . . . . . . . . . . . . . . 284
Paolo Piro, Sandrine Anthoine, Eric Debreuve, and Michel Barlaud
Recent Advances in Large Scale Image Search . . . . . . . . . . . . . . . . . . . . . . . 305
Herve Jegou, Matthijs Douze, and Cordelia Schmid
Medical Imaging and Computational Anatomy
Information Theoretic Methods for Diffusion-Weighted MRI Analysis . . . 327
Angelos Barmpoutis and Baba C. Vemuri
Statistical Computing on Manifolds: From Riemannian Geometry to
Computational Anatomy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 347
Xavier Pennec
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
Abstracts of the LIX Fall Colloquium 2008:
Emerging Trends in Visual Computing
Frank Nielsen
École Polytechnique, Palaiseau, France
Sony CSL, Tokyo, Japan
Abstract. We list the abstracts of the distinguished speakers who par-
ticipated in the 2008 LIX fall colloquium.
Leonidas GUIBAS
Computer Science Department, Stanford University, USA
Detection of Symmetries and Repeated Patterns in 3D Point Cloud Data
Digital models of physical shapes are becoming ubiquitous in our economy and
life. Such models are sometimes designed ab initio using CAD tools, but more
and more often they are based on existing real objects whose shape is acquired
using various 3D scanning technologies. In most instances, the original scanner
data is just a set, but a very large set, of points sampled from the surface of
the object. We are interested in tools for understanding the local and global
structure of such large-scale scanned geometry for a variety of tasks, including
model completion, reverse engineering, shape comparison and retrieval, shape
editing, inclusion in virtual worlds and simulations, etc. This talk will present a
number of point-based techniques for discovering global structure in 3D data sets,
including partial and approximate symmetries, shared parts, repeated patterns,
etc. It is also of interest to perform such structure discovery across multiple data
sets distributed in a network, without ever actually bringing them all to the same
host.
Xianfeng David GU
State University of New York at Stony Brook, USA
Discrete Curvature Flow for Surfaces and 3-Manifolds
This talk introduces the concepts, theories and algorithms for discrete curvature
flows for surfaces with arbitrary topologies. Discrete curvature flow for hyperbolic
3-manifolds with geodesic boundaries is also explained. The curvature flow method
can be used to design Riemannian metrics with prescribed curvatures, and applied
to parameterization in graphics, shape registration and comparison in vision
and brain mapping in medical imaging, spline construction in computer aided
geometric design, and many other engineering fields.
Jean-Daniel BOISSONNAT
GEOMETRICA, INRIA Sophia-Antipolis, France
Certified Mesh Generation
Given a domain D, the problem of mesh generation is to construct a simplicial
complex that approximates D in both a topological and a geometrical sense and
whose elements satisfy various constraints such as size, aspect ratio or anisotropy.
The talk will cover some recent results on triangulating surfaces and volumes by
Delaunay refinement, anisotropic mesh generation and surface reconstruction.
Applications in medical imaging, computer vision and geology will be discussed.
Baba VEMURI
CISE Dept., University of Florida, USA
Information-Theoretic Algorithms for Diffusion Tensor Imaging
Concepts from Information Theory have been used quite widely in Image
Processing, Computer Vision and Medical Image Analysis for several decades
now. The most widely used concepts are those of KL-divergence, minimum descrip-
tion length (MDL), etc. These concepts have been popularly employed for image
registration, segmentation, classification, etc. In this chapter we review several
methods, mostly developed by our group at the Center for Vision, Graphics and
Medical Imaging at the University of Florida, that glean concepts from Informa-
tion Theory and apply them to achieve analysis of Diffusion-Weighted Magnetic
Resonance (DW-MRI) data. This relatively new MRI modality allows one to
non-invasively infer axonal connectivity patterns in the central nervous system.
The focus of this chapter is to review automated image analysis techniques that
allow us to automatically segment the region of interest in the DW-MRI im-
age wherein one might want to track the axonal pathways and also methods
to reconstruct complex local tissue geometries containing axonal fiber crossings.
Implementation results illustrating the algorithm application to real DW-MRI
data sets are depicted to demonstrate the effectiveness of the methods reviewed.
Xavier PENNEC
ASCLEPIOS, INRIA Sophia-Antipolis, France
Statistical Computing on Manifolds for Computational Anatomy
Computational anatomy is an emerging discipline that aims at analyzing and
modeling the individual anatomy of organs and their biological variability across a
population. The goal is not only to model the normal variations among a popula-
tion, but also to discover morphological differences between normal and pathological
populations, and possibly to detect, model and classify the pathologies from struc-
tural abnormalities. Applications are very important both in neuroscience, to min-
imize the influence of the anatomical variability in functional group analysis, and in
medical imaging, to better drive the adaptation of generic models of the anatomy
(atlas) into patient-specific data (personalization). However, understanding and
modeling the shape of organs is made difficult by the absence of physical models
for comparing different subjects, the complexity of shapes, and the high number
of degrees of freedom implied. Moreover, the geometric nature of the anatomical
features usually extracted raises the need for statistics and computational meth-
ods on objects that do not belong to standard Euclidean spaces. We investigate in
this chapter the Riemannian metric as a basis for developing generic algorithms
to compute on manifolds. We show that few computational tools derived from this
structure can be used in practice as the atoms to build more complex generic algo-
rithms such as mean computation, Mahalanobis distance, interpolation, filtering
and anisotropic diffusion on fields of geometric features. This computational frame-
work is illustrated with the joint estimation and anisotropic smoothing of diffusion
tensor images and with the modeling of the brain variability from sulcal lines.
Cordelia SCHMID
LEAR, INRIA Grenoble, France
Large-Scale Object Recognition Systems
This paper introduces recent methods for large scale image search. State-of-the-
art methods build on the bag-of-features image representation. We first analyze
bag-of-features in the framework of approximate nearest neighbor search. This
shows the sub-optimality of such a representation for matching descriptors and
leads us to derive a more precise representation based on 1) Hamming embedding
(HE) and 2) weak geometric consistency constraints (WGC). HE provides binary
signatures that refine the matching based on visual words. WGC filters matching
descriptors that are not consistent in terms of angle and scale. HE and WGC are
integrated within the inverted file and are efficiently exploited for all images, even
in the case of very large datasets. Experiments performed on a dataset of one
million images show a significant improvement due to the binary signature and
the weak geometric consistency constraints, as well as their efficiency. Estimation
of the full geometric transformation, i.e., a re-ranking step on a short list of
images, is complementary to our weak geometric consistency constraints and
makes it possible to further improve the accuracy.
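To make the Hamming-embedding step concrete, here is a minimal sketch (ours, not the authors' code; the learned projections and thresholds and the 24-bit radius are illustrative assumptions):

```python
import numpy as np

def binary_signature(descriptor, projections, thresholds):
    """Project a local descriptor and threshold each component to get one bit;
    `projections` (n_bits x d) and the per-visual-word `thresholds` would be
    learned offline (hypothetical inputs for this sketch)."""
    bits = (projections @ descriptor > thresholds).astype(np.uint8)
    return np.packbits(bits)

def hamming_distance(sig_a, sig_b):
    """Hamming distance between two packed binary signatures (uint8 arrays)."""
    return int(np.unpackbits(sig_a ^ sig_b).sum())

def he_filter(query_sig, candidate_sigs, max_dist=24):
    """Keep only candidates (already matched on the visual word) whose
    signature lies within `max_dist` bits of the query signature."""
    return [i for i, s in enumerate(candidate_sigs)
            if hamming_distance(query_sig, s) <= max_dist]
```

Within the inverted file, such a test prunes most false visual-word matches before the WGC angle/scale consistency filtering.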
Pascal FUA
EPFL, CVLAB, Switzerland
Recovering Shape and Motion from Video Sequences
In recent years, because cameras have become inexpensive and ever more preva-
lent, there has been increasing interest in video-based modeling of shape and
motion. This has many potential applications in areas such as electronic pub-
lishing, entertainment, sports medicine and athletic training. It, however, is an
inherently difficult task because the image data is often incomplete, noisy, and
ambiguous. In our work, we focus on the recovery of deformable and articulated
3D motion from single video sequences. In this talk, I will present the models we
16. 4 F. Nielsen
have developed for this purpose and demonstrate the applicability of our tech-
nology for Augmented Reality and human body tracking purposes. Finally, I will
present some open research issues and discuss our plans for future developments.
Ramesh RASKAR
MIT Media Lab, USA
Computational Photography: Epsilon to Coded Imaging
Computational photography combines plentiful computing, digital sensors, mod-
ern optics, actuators, and smart lights to escape the limitations of traditional
cameras, enabling novel imaging applications and simplifying many computer vision
tasks. However, a majority of current Computational Photography methods in-
volve taking multiple sequential photos by changing scene parameters and fusing
the photos to create a richer representation. The goal of Coded Computational
Photography is to modify the optics, illumination or sensors at the time of cap-
ture so that the scene properties are encoded in a single (or a few) photographs.
We describe several applications of coding exposure, aperture, illumination and
sensing and describe emerging techniques to recover scene parameters from coded
photographs.
Dimitris METAXAS
Computational Biomedicine Imaging and Modeling Center, CBMI, Rutgers Uni-
versity, USA
Unifying Subspace and Distance Metric Learning with Bhattacharyya Coefficient
for Image Classification
In this talk, we propose a unified scheme of subspace and distance metric learning
under the Bayesian framework for image classification. According to the local
distribution of data, we divide the k-nearest neighbors of each sample into the
intra-class set and the inter-class set, and we aim to learn a distance metric in
the embedding subspace, which can make the distances between the sample and
its intra-class set smaller than the distances between it and its inter-class set. To
reach this goal, we consider the intra-class distances and the inter-class distances
to be from two different probability distributions respectively, and we model the
goal with minimizing the overlap between two distributions. Inspired by the
Bayesian classification error estimation, we formulate the objective function by
minimizing the Bhattacharyya coefficient between the two distributions. We further
extend it with the kernel trick to learn a nonlinear distance metric. The power and
generality of the proposed approach are demonstrated by a series of experiments
on the CMU-PIE face database, the extended YALE face database, and the
COREL-5000 nature image database.
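For reference, the Bhattacharyya coefficient between densities p and q is BC(p, q) = ∫ sqrt(p(x) q(x)) dx. A minimal sketch of its standard closed form when both distributions are modeled as Gaussians (our illustration, not the authors' implementation):

```python
import numpy as np

def bhattacharyya_coefficient(mu1, cov1, mu2, cov2):
    """Closed-form BC = exp(-D_B) for two multivariate Gaussians; BC = 1 for
    identical distributions and decreases as their overlap shrinks."""
    cov = 0.5 * (cov1 + cov2)
    diff = mu1 - mu2
    d_b = (0.125 * diff @ np.linalg.solve(cov, diff)
           + 0.5 * np.log(np.linalg.det(cov)
                          / np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2))))
    return np.exp(-d_b)

# Example: well-separated intra-/inter-class distance distributions -> small BC.
bc = bhattacharyya_coefficient(np.zeros(2), np.eye(2),
                               np.full(2, 3.0), np.eye(2))
```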
Nikos PARAGIOS
École Centrale de Paris, ECP, Paris, France
Procedural Modeling of Architectures: Towards Large Scale Visual Reconstruction
Three-dimensional content is a novel modality used in numerous domains like
navigation, post production cinematography, architectural modeling and ur-
ban planning. These domains have benefited from the enormous progress that has
been made on 3D reconstruction from images. Such a problem consists of build-
ing geometric models of the observed environment. State of the art methods can
deliver excellent results at a small scale but suffer from being local and cannot be
considered in a large-scale reconstruction process, since the assumption of recov-
ering images from multiple views for a large number of buildings is rather
unrealistic. On the other hand, several efforts have been made in the graphics
community towards content creation with city engines. Such models are purely
graphics-based and, given a set of rules (grammars) as well as a dictionary of ar-
chitectures (buildings), can produce virtual cities. Such engines could become far
more realistic through the use of actual city models as well as knowledge of build-
ing architectures. Developing 3D models/rules/grammars that are image-based
and coupling these models with actual observations is the greatest challenge of
urban modeling. Solving the large-scale geometric modeling problem from min-
imal content could create novel means of world representation as well as novel
markets and applications. In this talk, we will present some preliminary results
on large scale modeling and reconstruction through architectural grammars.
Gabriel TAUBIN
Division of Engineering, Brown University, USA
Shape from Depth Discontinuities
We propose a new primal-dual framework for representation, capture, processing,
and display of piecewise smooth surfaces, where the dual space is the space of
oriented 3D lines, or rays, as opposed to the traditional dual space of planes.
An image capture process detects points on a depth discontinuity sweep from a
camera moving with respect to an object, or from a static camera and a moving
object. A depth discontinuity sweep is a surface in dual space composed of the
time-dependent family of depth discontinuity curves spanned as the camera pose
describes a curved path in 3D space. Only part of this surface, which includes
silhouettes, is visible and measurable from the camera. Locally convex points
deep inside concavities can be estimated from the visible non-silhouette depth
discontinuity points. Locally concave points lying at the bottom of concavities,
which do not correspond to visible depth discontinuities, cannot be estimated,
resulting in holes in the reconstructed surface. A first variational approach to
fill the holes, based on fitting an implicit function to a reconstructed oriented
point cloud, produces watertight models. We describe a first complete end-to-end
system for acquiring models of shape and appearance. We use a single multi-flash
camera and turntable for the data acquisition and represent the scanned objects
as point clouds, with each point being described by a 3-D location, a surface
normal, and a Phong appearance model.
Shun-ichi AMARI
Mathematical Neuroscience Laboratory, Brain Science Institute, RIKEN, Wako-
Shi, Japan
Information Geometry and Its Applications
Information geometry emerged from studies on invariant properties of a manifold
of probability distributions. It includes convex analysis and its duality as a spe-
cial but important part. Here, we begin with a convex function, and construct a
dually flat manifold. The manifold possesses a Riemannian metric, two types of
geodesics, and a divergence function. The generalized Pythagorean theorem and
dual projection theorem are derived therefrom. We construct alpha-geometry,
extending this convex analysis. In this review, the geometry of a manifold of proba-
bility distributions is then given, and plenty of applications are touched upon.
The Appendix presents an easily understandable introduction to differential geometry
and its duality.
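For concreteness, the textbook construction sketched above can be summarized as follows (a standard summary of dually flat geometry, not a transcript of the talk). A smooth strictly convex potential F(θ) induces:

```latex
% Riemannian metric, dual coordinates, and Legendre dual potential:
g_{ij}(\theta) = \partial_i \partial_j F(\theta), \qquad
\eta = \nabla F(\theta), \qquad
F^*(\eta) = \langle \theta, \eta \rangle - F(\theta).

% Canonical (Bregman) divergence of the dually flat manifold:
D(\theta \,\|\, \theta') = F(\theta) + F^*(\eta') - \langle \theta, \eta' \rangle \;\ge\; 0.

% Three-point identity; the generalized Pythagorean theorem is the
% case where the final inner-product term vanishes:
D(\theta_p \,\|\, \theta_r) = D(\theta_p \,\|\, \theta_q) + D(\theta_q \,\|\, \theta_r)
  + \langle \theta_p - \theta_q, \; \eta_q - \eta_r \rangle.
```

Taking F(θ) = Σ θ_i log θ_i on the probability simplex recovers the Kullback-Leibler divergence, while F(θ) = ‖θ‖²/2 gives half the squared Euclidean distance.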
Jun ZHANG
Department of Psychology, University of Michigan, USA
Information Geometry: Duality, Convexity and Divergences
In this talk, I explore the mathematical relationships between duality in in-
formation geometry, convex analysis, and divergence functions. First, from the
fundamental inequality of a convex function, a family of divergence measures
can be constructed, which specializes to the familiar Bregman divergence, Jensen
difference, beta-divergence, and alpha-divergence, etc. Second, the mixture
parameter turns out to correspond to the alpha <-> -alpha duality in informa-
tion geometry (which I call “referential duality”, since it is related to the choice
of a reference point for computing divergence). Third, convex conjugate oper-
ation induces another kind of duality in information geometry, namely, that
of biorthogonal coordinates and their transformation (which I call “representa-
tional duality”, since it is related to the expression of geometric quantities, such
as metric, affine connection, curvature, etc of the underlying manifold). Under
this analysis, what is traditionally called “+1/-1 duality” and “e/m duality”
in information geometry reflect two very different meanings of duality that are
nevertheless intimately intertwined for dually flat spaces.
Hiroshi MATSUZOE
Department of Computer Science and Engineering Graduate School of Engineer-
ing, Nagoya Institute of Technology, NITECH, Japan
Computational Geometry from the Viewpoint of Affine Differential Geometry
Incidence relations (configurations of vertices, edges, etc.) are important in
computational geometry. Incidence relations are invariant under the group of
affine transformations. On the other hand, affine differential geometry is the
study of hypersurfaces in an affine space that are invariant under the group of
affine transformations. Therefore affine differential geometry gives a new
perspective on computational geometry. From the viewpoint of affine differential geometry,
algorithms of geometric transformation and dual transformation are discussed.
The Euclidean distance function is generalized by a divergence function in affine
differential geometry. A divergence function is an asymmetric distance-like func-
tion on a manifold, and it is an important object in information geometry.
For divergence functions, the upper envelope type theorems on statistical mani-
folds are given. Voronoi diagrams determined from divergence functions are also
discussed.
Richard NOCK
CEREGMIA, University of Antilles-Guyane, France
The Intrinsic Geometries of Learning
In a seminal paper, Amari (1998) proved that learning can be made more effi-
cient when one uses the intrinsic Riemannian structure of the algorithms’ spaces
of parameters to point the gradient towards better solutions. In this paper, we
show that many learning algorithms, including various boosting algorithms for
linear separators, the most popular top-down decision-tree induction algorithms,
and some on-line learning algorithms, are spawns of a generalization of Amari’s
natural gradient to some particular non-Riemannian spaces. These algorithms
exploit an intrinsic dual geometric structure of the space of parameters in rela-
tionship with particular integral losses that are to be minimized. We unite some
of them, such as AdaBoost, additive regression with the square loss, the logistic
loss, the top-down induction performed in CART and C4.5, as a single algorithm
on which we show general convergence to the optimum and explicit convergence
rates under very weak assumptions. As a consequence, many of the classifica-
tion calibrated surrogates of Bartlett et al. (2006) admit efficient minimization
algorithms.
Frédéric BARBARESCO
Thales Air Systems, France
Applications of Information Geometry to Radar Signal Processing
The main issue in high-resolution Doppler imagery is the robust statistical
estimation of Toeplitz Hermitian positive definite covariance matrices of sensor
data time series (e.g. in Doppler echography, underwater acoustics, electromagnetic
radar, pulsed lidar). We consider this problem jointly in the framework of
Riemannian symmetric spaces and the framework of Information Geometry. Both
approaches lead to the same metric, which had initially been considered in other
mathematical domains (study of the Bruhat-Tits complete metric space and the
upper-half Siegel space in symplectic geometry). Based on the Fréchet-Karcher
barycenter definition and geodesics in Bruhat-Tits space, we address the problem
of estimating the mean of N covariance matrices. Our main contribution lies in
the development of this theory for complex autoregressive models (the maximum
entropy solution of Doppler spectral analysis). The specific block structure of the
Toeplitz Hermitian covariance matrix is used to define an iterative parallel
algorithm for Siegel metric computation. Based on affine information geometry
theory, we introduce, for the complex autoregressive model, a Kähler metric on
reflection coefficients based on the Kähler potential function given by the Doppler
signal entropy. The metric is closely related to Kähler-Einstein manifolds and the
complex Monge-Ampère equation. Finally, we study geodesics in the space of
Kähler potentials and the action of Calabi and Kähler-Ricci geometric flows for
this complex autoregressive metric. We conclude with different results obtained on
real Doppler radar data in HF and X bands: X-band radar monitoring of wake
vortex turbulence, and detection for coastal X-band and HF surface wave radars.
Frank NIELSEN
LIX, Ecole Polytechnique, Paris, France Sony Computer Science Laboratories
Inc., Tokyo, Japan
Computational Geometry in Dually Flat Spaces: Theory, Applications and Per-
spectives
Computational information geometry emerged from the fruitful interactions of
geometric computing with information geometry. In this talk, we survey the re-
cent results obtained in that direction by first describing generalizations of core
algorithms of computational geometry and machine learning to broad and ver-
satile classes of distortion measures. Namely, we introduce the generic classes
of Bregman, Csiszár and Burbea-Rao parametric divergences and explain their
relationships and properties with respect to algorithmic design. We then present a
few applications of these meta-algorithms to the field of statistics and data anal-
ysis and conclude with perspectives.
Tetsuo ASANO
School of Information Science, Japan Advanced Institute of Science and Tech-
nology, JAIST, Japan
Constant-Working-Space Algorithms for Image Processing
This talk surveys recent progress in constant-working-space algorithms for prob-
lems related to image processing. An extreme case is when an input image is
given as read-only memory in which reading an array element is allowed but
writing any value at any array element is prohibited, and also the number of
working storage cells available for algorithms is at most some constant. This
chapter shows how a number of important fundamental problems can be solved
in such a highly constrained situation.
Stéphane MALLAT
École Polytechnique, Centre de Mathématiques Appliquées (CMAP), France
Sparse Geometric Super-Resolution
What is the maximum signal resolution that can be recovered from partial noisy
or degraded data? This inverse problem is a central issue, from medical to satel-
lite imaging, from geophysical seismic to HDTV visualization of Internet videos.
Increasing an image resolution is possible by taking advantage of “geometric
regularities”, whatever it means. Super-resolution can indeed be achieved for
signals having a sparse representation which is “incoherent” relatively to the
measurement system. For images and videos, it requires constructing sparse rep-
resentations in redundant dictionaries of waveforms, which are adapted to geo-
metric image structures. Signal recovery in redundant dictionaries is discussed,
and applications are shown in dictionaries of bandlets for image super-resolution.
Martin VETTERLI
School of Computer and Communication Sciences, EPFL, Switzerland
Sparse Sampling: Variations on a Theme by Shannon
Sampling is not only a beautiful topic in harmonic analysis, with an interesting
history, but also a subject with high practical impact, at the heart of signal pro-
cessing and communications and their applications. The question is very simple:
when is there a one-to-one relationship between a continuous-time function and
adequately acquired samples of this function? A cornerstone result is of course
Shannon’s sampling theorem, which gives a sufficient condition for reconstruct-
ing the projection of a signal onto the subspace of bandlimited functions, and
this by taking inner products with a sinc function and its shifts. Many variations
of this basic framework exist, and they are all related to a subspace structure of
the classes of objects that can be sampled. Recently, this framework has been
extended to classes of non-bandlimited sparse signals, which do not have a sub-
space structure. Perfect reconstruction is possible based on a suitable projection
measurement. This gives a sharp result on the sampling and reconstruction of
sparse continuous-time signals, namely that 2K measurements are necessary and
sufficient to perfectly reconstruct a K-sparse continuous-time signal. In accor-
dance with the principle of parsimony, we call this sampling at Occam’s rate. We
first review this result and show that it relies on structured Vandermonde mea-
surement matrices, of which the Fourier matrix is a particular case. It also uses
a separation into location and value estimation, the first being non-linear, while
the second is linear. Because of this structure, fast, O(K³) methods exist, and
are related to classic algorithms used in spectral estimation and error correction
coding. We then generalize these results to a number of cases where sparsity is
present, including piecewise polynomial signals, as well as to broad classes of
sampling or measurement kernels, including Gaussians and splines. Of course,
real cases always involve noise, and thus, retrieval of sparse signals in noise is
considered. That is, is there a stable recovery mechanism, and robust practical
algorithms to achieve it? Lower bounds by Cramér-Rao are given, which can
also be used to derive uncertainty relations with respect to position and value
of sparse signal estimation. Then, a concrete estimation method is given using
an iterative algorithm due to Cadzow, and is shown to perform close to opti-
mal over a wide range of signal to noise ratios. This indicates the robustness
of such methods, as well as their practicality. Next, we consider the connection
to compressed sensing and compressive sampling, a recent approach involving
random measurement matrices, a discrete setup, and retrieval based on convex
optimization. These methods have the advantage of unstructured measurement
matrices (actually, typically random ones) and therefore a certain universality,
at the cost of some redundancy. We compare the two approaches, highlighting
differences, similarities, and respective advantages. Finally, we move to appli-
cations of these results, which cover wideband communications, noise removal,
and superresolution imaging, to name a few. We conclude by indicating that
sampling is alive and well, with new perspectives and many interesting recent
results and developments. Joint work with Thierry Blu (CUHK), Lionel Coulot,
Ali Hormati (EPFL), Pier-Luigi Dragotti (ICL) and Pina Marziliano (NTU).
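As a concrete illustration of the "2K measurements for K spikes" claim, here is a minimal numpy sketch of the annihilating-filter (Prony) step in the noiseless case; the names are ours, and a practical version would add the Cadzow denoising and robustness machinery mentioned above:

```python
import numpy as np

def recover_spikes(x, K):
    """Recover K spike locations t_k in [0,1) and amplitudes a_k from the
    2K noiseless samples x[n] = sum_k a_k * exp(2j*pi*t_k)**n, n = 0..2K-1,
    via the annihilating-filter (Prony) method."""
    # 1) Annihilating filter h (h[0] = 1): sum_m h[m] * x[n-m] = 0.
    A = np.array([[x[n - m] for m in range(1, K + 1)] for n in range(K, 2 * K)])
    b = -np.array([x[n] for n in range(K, 2 * K)])
    h = np.concatenate(([1.0], np.linalg.solve(A, b)))
    # 2) The roots of the filter are u_k = exp(2j*pi*t_k): the locations.
    u = np.roots(h)
    t = np.mod(np.angle(u) / (2 * np.pi), 1.0)
    # 3) Amplitudes from the structured Vandermonde system mentioned above.
    V = np.vander(u, N=2 * K, increasing=True).T   # V[n, k] = u_k**n
    a = np.linalg.lstsq(V, x, rcond=None)[0]
    return t, a
```

For K = 3 spikes, six samples x[0..5] already determine the locations and amplitudes exactly, up to the numerical conditioning of the Vandermonde system.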
Michel BARLAUD
I3S CNRS, University of Nice-Sophia-Antipolis, Polytech’Nice & Institut Uni-
versitaire de France, France
Image Retrieval via Kullback Divergence of Patches of Wavelets Coefficients in
the k-NN Framework
This talk presents a framework to define an objective measure of the similar-
ity (or dissimilarity) between two images for image processing. The problem is
twofold: 1) define a set of features that capture the information contained in the
image relevant for the given task and 2) define a similarity measure in this feature
space. In this paper, we propose a feature space as well as a statistical measure
on this space. Our feature space is based on a global description of the image
in a multiscale transformed domain. After decomposition into a Laplacian pyra-
mid, the coefficients are arranged in intrascale/interscale/interchannel patches
which reflect the dependencies of neighboring coefficients in the presence of spe-
cific structures or textures. At each scale, the probability density function (pdf)
of these patches is used as a description of the relevant information. Because
of the sparsity of the multiscale transform, the most significant patches, called
Sparse Multiscale Patches (SMP), describe efficiently these pdfs. We propose a
statistical measure (the Kullback-Leibler divergence) based on the comparison
of these probability density functions. Interestingly, this measure is estimated via
the nonparametric, k-th nearest neighbor framework without explicitly build-
ing the pdfs. This framework is applied to a query-by-example image retrieval
method. Experiments on two publicly available databases showed the potential
of our SMP approach for this task. In particular, it performed comparably to
a SIFT-based retrieval method and two versions of a fuzzy segmentation-based
method (the UFM and CLUE methods), and it exhibited some robustness to
different geometric and radiometric deformations of the images.
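For concreteness, one common k-th nearest neighbor divergence estimator of this kind (the Wang-Kulkarni-Verdú form; a sketch of the general principle, not the authors' SMP-specific implementation) is:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_kl_divergence(X, Y, k=5):
    """k-NN estimate of KL(P || Q) from samples X ~ P (n x d) and Y ~ Q (m x d),
    without explicitly building pdfs (assumes k >= 2 for the array shapes)."""
    n, d = X.shape
    m = Y.shape[0]
    # rho: distance from each x_i to its k-th neighbor within X (skip itself).
    rho = cKDTree(X).query(X, k=k + 1)[0][:, -1]
    # nu: distance from each x_i to its k-th neighbor within Y.
    nu = cKDTree(Y).query(X, k=k)[0][:, -1]
    return d * np.mean(np.log(nu / rho)) + np.log(m / (n - 1.0))
```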
Francis BACH
INRIA/ENS, France
Machine Learning and Kernel Methods for Computer Vision
Kernel methods are a new theoretical and algorithmic framework for machine
learning. By representing data through well-defined dot-products, referred to as
kernels, they make it possible to apply classical linear supervised machine learning
algorithms to nonlinear settings and to non-vectorial data. A major issue when applying
these methods to image processing or computer vision is the choice of the kernel.
I will present recent advances in the design of kernels for images that take into
account the natural structure of images.
Sylvain LAZARD
VEGAS, INRIA LORIA Nancy, France
3D Visibility and Lines in Space
Computing visibility information in a 3D environment is crucial to many applica-
tions such as computer graphics, vision and robotics. Typical visibility problems
include computing the view from a given point, determining whether two objects
partially see each other, and computing the umbra and penumbra cast by a light
source. In a given scene, two points are visible if the segment joining them does
not properly intersect any obstacle in the scene. The study of visibility is thus
intimately related to the study of the set of free line segments in a scene. In this
talk, I will review some recent combinatorial and algorithmic results related to
non-occluded segments tangent to up to four objects in three dimensional scenes.
Suresh VENKATASUBRAMANIAN
School of Computing, University of Utah, USA
Non-standard Geometries and Data Analysis
Traditional data mining starts with the mapping from entities to points in a
Euclidean space. The search for patterns and structure is then framed as a geo-
metric search in this space. Concepts like principal component analysis, regres-
sion, clustering, and centrality estimation have natural geometric formulations,
and we now understand a great deal about manipulating such (typically high
dimensional) spaces. For many domains of interest however, the most natural
space to embed data in is not Euclidean. Data might lie on curved manifolds, or
even inhabit spaces endowed with distance structures different from those of lp spaces.
How does one do data analysis in such domains? In this talk, I’ll discuss two
specific domains of interest that pose challenges for traditional data mining and
geometric methods. One space consists of collections of distributions, and the
other is the space of shapes. In both cases, I’ll present ongoing work that at-
tempts to interpret and understand clustering in such spaces, driven by different
applications.
Markus GROSS
Department of Computer Science, Institute of Scientific Computing, Swiss Fed-
eral Institute of Technology Zurich, ETHZ, Switzerland
3D Video: A Fusion of Graphics and Vision
In recent years 3-dimensional video has received significant attention both in
research and in industry. Applications range from special effects in feature films
to the analysis of sports events. 3D video is concerned with the computation
of virtual camera positions and fly-throughs of a scene given multiple, conven-
tional 2D video streams. The high-quality synthesis of such view-independent
video representations poses a variety of technical challenges including acquisi-
tion, reconstruction, processing, compression, and rendering. In this talk I will
outline the research in this area carried out at ETH over the past years. I will
discuss various concepts for passive and active acquisition of 3D video using
combinations of multiple cameras and projectors. Furthermore, I will address
topics related to the representation and processing of the massive amount of data
arising from such multiple video streams. I will highlight the underlying techni-
cal concepts and algorithms that draw upon knowledge both from graphics and
from vision. Finally I will demonstrate some commercial applications targeting
virtual replays for sports broadcasts.
From Segmented Images to Good Quality
Meshes Using Delaunay Refinement
Jean-Daniel Boissonnat(1), Jean-Philippe Pons(2), and Mariette Yvinec(1)
(1) INRIA Sophia-Antipolis, 2004 route des Lucioles, BP 93,
06902 Sophia-Antipolis Cedex, France
Jean-Daniel.Boissonnat@sophia.inria.fr
(2) CSTB, 290 route des Lucioles, BP 209, 06904 Sophia-Antipolis Cedex, France
Jean-Philippe.Pons@cstb.fr
Abstract. This paper surveys Delaunay-based meshing techniques for
curved objects, and their application in medical imaging and in computer
vision to the extraction of geometric models from segmented images. We
show that the so-called Delaunay refinement technique makes it possible to mesh
surfaces and volumes bounded by surfaces, with theoretical guarantees
on the quality of the approximation, from a geometrical and a topologi-
cal point of view. Moreover, it offers extensive control over the size and
shape of mesh elements, for instance through a (possibly non-uniform)
sizing field. We show how this general paradigm can be adapted to pro-
duce anisotropic meshes, i.e. meshes elongated along prescribed direc-
tions. Lastly, we discuss extensions to higher dimensions, and especially
to space-time for producing time-varying 3D models. This is also of inter-
est when input images are transformed into data points in some higher
dimensional space as is common practice in machine learning.
1 Introduction
Motivation. The ubiquity of digital imaging in scientific research and in indus-
try calls for automated tools to extract high-level information from raster rep-
resentations (2D, 3D, or higher-dimensional rectilinearly-sampled scalar/vector
fields), the latter often being not directly suitable for analysis and interpreta-
tion. Notably, the computerized creation of geometric models from digital images
plays a crucial role in many medical imaging applications.
A precondition for extracting geometry from images is usually to partition
image pixels (voxels) into multiple regions of interest. This task, known as im-
age segmentation, is a central long-standing problem in image processing and
computer vision. A review of this area is beyond the scope of this paper.
Let us only mention that it is a highly ill-posed problem due to various per-
turbing factors such as noise, occlusions, missing parts, cluttered data, etc. The
interested reader may refer to e.g. [1] for a specific survey on segmentation of
medical images.
This paper focuses on a step posterior to image segmentation: the automatic
generation of discrete geometric representations from segmented images, such
as surface meshes representing boundaries between different regions of inter-
est, or volume meshes of their interior. This step is determinant in numer-
ous applications. For instance, in medicine, an increasing number of numerical
simulations of physical or physiological processes call for geometric models of
anatomical structures: electroencephalography (EEG) and magnetoencephalog-
raphy (MEG), image-guided neurosurgery, electromagnetic modeling, blood flow
simulation, etc.
However, this topic has attracted less interest than image segmentation so far.
As a result, reliable fully-automated tools for the unstructured discretization of
segmented images, and in particular of medical datasets, are still lacking.
Consequently, simplistic or low-quality geometric models are still in wide use in
some applications. For example, in electromagnetic modeling, such as specific absorption
rate studies, finite element methods (FEM) on unstructured grids conforming
to anatomical structures would be desirable; but due to the difficulty of produc-
ing such models, most numerical simulations so far have been conducted using
finite difference methods on rectilinear grids, although the poor definition of tis-
sue boundaries (stair-casing effect) strongly limits their accuracy. Similarly, in
the EEG/MEG source localization problem using the boundary element method
(BEM), simplistic head models consisting of a few nested tissue layers remain
more popular than realistic models featuring multiple junctions.
The generation of geometric models from segmented images presents many
challenges. The output must fulfill many requirements in terms of geometric ac-
curacy and topological correctness, smoothness, number, type, size and shape of
mesh elements, in order to obtain acceptable results and make useful predictions,
avoid instabilities in the simulations, or reduce the overall processing time. No-
tably, the conditioning of stiffness matrices in FEM directly depends on the sizes
and shapes of the elements. Another example is image-guided neurosurgery, for
which real-time constraints impose strong limitations on the complexity of the
geometric brain model being dynamically registered onto the patient anatomy.
Grid-based methods. Commonly used techniques do not meet the aforemen-
tioned specifications. The most popular technique for producing surface meshes
from raster data is undoubtedly the marching cubes algorithm, introduced by
Lorensen and Cline [2]. Given a scalar field sampled on a rectilinear grid, the
marching cubes algorithm efficiently generates a triangular mesh of an isosurface
by tessellating each cubic cell of the domain according to a case table constructed
off-line.
Unfortunately, this technique, as well as its many subsequent variants, typ-
ically produces unnecessarily large meshes (at least one triangle per boundary
voxel) of very low quality (lots of skinny triangles). This may be acceptable for
visualization purposes, but not for further numerical simulations. In order to ob-
tain suitable representations, the resulting meshes often have to be regularized,
optimized and decimated, while simultaneously controlling the approximation
accuracy and preserving some topological properties, such as the absence of self-
intersections. Sometimes, good tetrahedral meshes of the domains bounded by
the marching cubes surfaces also have to be generated. Most of the time, these
tasks are overconstrained.
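To make the scale problem tangible, here is a small scikit-image experiment (a sketch of the classic Lorensen-Cline style extraction; the synthetic volume is ours, not part of the paper):

```python
import numpy as np
from skimage import measure

# Synthetic volume: squared distance to the center of a 64^3 grid.
z, y, x = np.mgrid[:64, :64, :64]
volume = (x - 32.0) ** 2 + (y - 32.0) ** 2 + (z - 32.0) ** 2

# Extract the isosurface of a ball of radius 20 (level = 20**2).
verts, faces, normals, values = measure.marching_cubes(volume, level=400.0)

# Roughly one triangle per boundary cell crossing: the counts illustrate why
# the raw output is large and often needs decimation before simulation use.
print(len(verts), "vertices,", len(faces), "triangles")
```

Even on this tiny synthetic ball the raw mesh contains thousands of triangles, many of them skinny, which is precisely the quality issue motivating the Delaunay-based alternatives surveyed below.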
Recently, the interest in grid-based techniques has been renewed by a few
methods with theoretical guarantees. Plantinga and Vegter [3] propose an algo-
rithm to mesh implicit surfaces with guaranteed topology, based on an adaptive
octree subdivision controlled by interval arithmetic. But in its current form, this
algorithm is relevant to closed-form expressions, not to sampled data.
The recent algorithm of Labelle and Shewchuk [4] fills an isosurface with a
uniformly sized tetrahedral mesh whose dihedral angles are bounded between
10.7° and 164.8°. The algorithm is very fast, numerically robust, and easy to
implement because, like the marching cubes algorithm, it generates tetrahedra
from a small set of precomputed stencils. Moreover, if the isosurface is a smooth
2-manifold with bounded curvature, and the tetrahedra are sufficiently small,
then the boundary of the mesh is guaranteed to be a geometrically and topo-
logically accurate approximation of the isosurface. However, this algorithm lacks
flexibility: notably, it is limited to uniform surface meshes, and isotropic surface
and volume meshes.
Delaunay-based methods. This paper surveys Delaunay-based meshing tech-
niques for curved objects. Delaunay-based meshing is recognized as one of the most
powerful techniques for generating surface and volume meshes with theoretical
guarantees on the quality of the approximation, from a geometrical and topological
point of view. Moreover, it offers extensive control over the size and shape of mesh
elements, for instance through a (possibly non-uniform) sizing field. It also makes it
possible to mesh several domains simultaneously. Recent extensions show that this general paradigm
can be adapted to produce anisotropic meshes, i.e. meshes elongated along pre-
scribed directions, as well as meshes in higher dimensions.
In this paper, we show how Delaunay-based meshing can be applied in medi-
cal imaging and in computer vision to the extraction of meshes from segmented
images, with all the desired specifications. The rest of the paper is organized
as follows. We first introduce the notion of restricted Delaunay triangulation
in Section 2. We then show how to mesh surfaces (Section 3) and volumes
bounded by surfaces (Section 4) using the so-called Delaunay refinement tech-
nique. Anisotropic meshes are discussed in Section 5. Lastly, we tackle extensions
of Delaunay refinement to higher dimensions (Section 6), and especially to space-
time for producing time-varying 3D models. This is also of interest when input
images are transformed into data points in some higher dimensional space as is
common practice in machine learning.
2 Restricted Delaunay Triangulations
In this section, we recall the definitions of Voronoi diagrams and Delaunay tri-
angulations, and their generalization known as power (or Laguerre) diagrams and
weighted Delaunay (or regular) triangulations. We then introduce the concept
of restricted Delaunay triangulation which is central in this paper.
2.1 Voronoi Diagrams and Delaunay Triangulations
Voronoi diagrams are versatile structures which encode proximity relationships
between objects. They are particularly relevant to perform nearest neighbor
search and motion planning (e.g. in robotics), and to model growth processes
(e.g. crystal growth in materials science). Delaunay triangulations, which are
geometrically dual to Voronoi diagrams, are a classical tool in the field of mesh
generation and mesh processing due to their optimality properties.
In the sequel, we call k-simplex the convex hull of k + 1 affinely independent
points. For example, a 0-simplex is a point, a 1-simplex is a line segment, a
2-simplex is a triangle and a 3-simplex is a tetrahedron.
Let E = {p1, . . . , pn} be a set of points in R^d, called sites. Note that in this
paper, we are mainly interested in d = 3, except in Section 6, where the case d > 3
is studied. The Voronoi region, or Voronoi cell, denoted by V (pi), associated to
a point pi ∈ E is the region formed by points that are closer to pi than to all
other sites in E:
V (pi) = {x ∈ R^d : ∀j, ‖x − pi‖ ≤ ‖x − pj‖}.
V (pi) is the intersection of n − 1 half-spaces bounded by the bisector planes
of segments [pipj], j
= i. V (pi) is therefore a convex polyhedron, possibly un-
bounded. The Voronoi diagram of E, denoted by Vor(E), is the subdivision of
space induced by the Voronoi cells V (p1), . . . , V (pn).
See Fig. 1 for a two-dimensional example of a Voronoi diagram. In two di-
mensions, the edges shared by two Voronoi cells are called Voronoi edges and
the points shared by three Voronoi cells are called Voronoi vertices. Similarly,
in three dimensions, we term Voronoi facets, edges and vertices the geometric
objects shared by two, three and four Voronoi cells, respectively.
The Voronoi diagram is the collection of all these k-dimensional objects, with
0 ≤ k ≤ d, which we call Voronoi faces. In particular, note that Voronoi cells
V (pi) correspond to d-dimensional Voronoi faces.
To simplify the presentation and without real loss of generality, we will assume
in the sequel that E does not contain any subset of d + 2 points that lie on the
same hypersphere. We say that the points of E are then in general position.
The Delaunay triangulation of E, noted Del(E), is the geometric dual of
Vor(E), and can be described as an embedding of the nerve(1) of Vor(E). The
nerve of Vor(E) is the abstract simplicial complex that contains a simplex
σ = (pi0 , . . . , pik ) iff V (pi0 ) ∩ . . . ∩ V (pik ) ≠ ∅. Specifically, if k + 1 Voronoi cells
have a non-empty intersection, this intersection constitutes a (d−k)-dimensional
face f of Vor(E). The convex hull of the associated k + 1 sites constitutes a k-
dimensional simplex in the Delaunay triangulation and this simplex is the dual
of face f. In 3D, the dual of a Delaunay tetrahedron is the Voronoi vertex that
coincides with the circumcenter of the tetrahedron, the dual of a Delaunay facet
is a Voronoi edge, the dual of a Delaunay edge is a Voronoi facet, and the dual
of a Delaunay vertex pi is the Voronoi cell V (pi). See Fig. 1.
(1) The notion of nerve of a covering is a basic concept in algebraic topology [5].
Fig. 1. The Voronoi diagram of a set of points (left). Its dual Delaunay triangulation
(right).
The Voronoi vertex v that is the dual of a d-dimensional simplex σ of Del(E)
is the circumcenter of σ and, since v is closer to the vertices of σ than to all other
points of E, the interior of the ball centered at v that circumscribes σ does not
contain any point of E. We say that such a ball is empty. This property turns
out to characterize Delaunay triangulations. Hence, Del(E) can be equivalently
defined as the unique (under the general position assumption) triangulation of
E such that each simplex in the triangulation can be circumscribed by an empty
ball.
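This empty-ball characterization is easy to check empirically; a minimal 2D sketch using scipy (our illustration, not part of the paper):

```python
import numpy as np
from scipy.spatial import Delaunay

def circumcenter(p):
    """Circumcenter of a triangle given as a 3 x 2 array of vertices."""
    a, b, c = p
    A = 2.0 * np.array([b - a, c - a])            # equidistance conditions
    rhs = np.array([b @ b - a @ a, c @ c - a @ a])
    return np.linalg.solve(A, rhs)

rng = np.random.default_rng(0)
E = rng.random((40, 2))
tri = Delaunay(E)

# Empty-ball property: no site lies strictly inside any circumscribing ball.
for s in tri.simplices:
    c = circumcenter(E[s])
    r = np.linalg.norm(E[s[0]] - c)
    assert not np.any(np.linalg.norm(E - c, axis=1) < r - 1e-9)
```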
2.2 Power Diagrams and Weighted Delaunay Triangulations
In this section, we introduce an extension of Voronoi diagrams that will be
useful in the sequel. Point sites p_1, . . . , p_n are replaced by hyperspheres Σ = {σ_1, . . . , σ_n}, and the Euclidean distance from a point x to a point site p_i is replaced by the power distance to the hypersphere σ_i, i.e. the quantity

σ_i(x) = ‖x − c_i‖² − r_i²,

where c_i and r_i denote the center and radius of σ_i. One can then define the power cell of site σ_i as

V(σ_i) = { x ∈ ℝ^d : ∀j, σ_i(x) ≤ σ_j(x) }.
Like Voronoi cells, power cells are convex polyhedra. The subdivision of space induced by the power cells V(σ_1), . . . , V(σ_n) constitutes the power diagram of Σ. As in the case of Voronoi diagrams, we define the geometric dual of the power diagram as an embedding of its nerve, where the dual of a face f = ⋂_{i=1,...,k} V(σ_i) is the convex hull of the centers c_1, . . . , c_k. If the spheres of Σ are in general position, the geometric dual of the power diagram is a triangulation. This triangulation is called the weighted Delaunay (or regular)
triangulation of Σ. Note that because some spheres of Σ may have an empty
power cell, the set of vertices in the weighted Delaunay triangulation is only a
subset of the centers of the σi.
It is a remarkable fact that weighted Delaunay triangulations can be computed almost as fast as unweighted Delaunay triangulations. An efficient implemen-
tation of both types of triangulations can be found in the Cgal library [6,7]. It
is robust to degenerate configurations and floating-point errors through the use
of exact geometric predicates.
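To make this concrete, here is a minimal sketch of building both kinds of triangulations with Cgal. It is an illustration only, assuming a recent Cgal release in which Regular_triangulation_3 is instantiated directly with a kernel; the coordinates and weights are arbitrary sample values.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <CGAL/Regular_triangulation_3.h>
#include <iostream>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_3<K> Delaunay;
typedef CGAL::Regular_triangulation_3<K> Regular;  // weighted Delaunay
typedef K::Point_3 Point;
typedef K::Weighted_point_3 WPoint;

int main() {
  // Unweighted sites: ordinary Delaunay triangulation.
  std::vector<Point> pts = { Point(0,0,0), Point(1,0,0), Point(0,1,0),
                             Point(0,0,1), Point(0.3,0.3,0.3) };
  Delaunay dt(pts.begin(), pts.end());
  std::cout << dt.number_of_vertices() << " Delaunay vertices\n";

  // Sites are now spheres (center, squared radius). A dominated site may
  // have an empty power cell and then does not appear as a vertex.
  std::vector<WPoint> wpts = { WPoint(Point(0,0,0), 2.0),
                               WPoint(Point(0.1,0,0), 0.0),  // likely hidden
                               WPoint(Point(1,0,0), 0.1),
                               WPoint(Point(0,1,0), 0.1),
                               WPoint(Point(0,0,1), 0.1) };
  Regular rt(wpts.begin(), wpts.end());
  std::cout << rt.number_of_vertices() << " of " << wpts.size()
            << " weighted sites appear as vertices\n";
  return 0;
}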
2.3 Restricted Delaunay Triangulations
We introduce the concept of restricted Delaunay triangulation which, like the concept of Delaunay triangulation, is related to the notion of nerve. Given a subset
Ω ⊂ ℝ^d and a set E of points, we call Delaunay triangulation of E restricted to Ω, denoted Del|Ω(E), the subcomplex of Del(E) composed of the Delaunay
simplices whose dual Voronoi faces intersect Ω. We refer to Fig. 2 to illustrate
this concept in 2D. Fig. 2 (left) shows a Delaunay triangulation restricted to a
curve C, which is composed of the Delaunay edges whose dual Voronoi edges
intersect C. Fig. 2 (right) shows the Delaunay triangulation of the same set of
points restricted to the region R bounded by the curve C. The restricted triangu-
lation is composed of the Delaunay triangles whose circumcenters are contained
in R. For an illustration in ℝ³, consider a region O bounded by a surface S and a sample E: the Delaunay triangulation restricted to S, Del|S(E), is composed of the Delaunay facets in Del(E) whose dual Voronoi edges intersect S, while the Delaunay triangulation restricted to O, Del|O(E), is made of those tetrahedra in Del(E) whose circumcenters belong to O.
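As a small concrete illustration of the volume case, the following sketch collects the tetrahedra of a Cgal Delaunay triangulation whose circumcenters lie inside a region O; the implicit unit-ball test inside_O is a hypothetical stand-in for whatever oracle describes the region.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_3.h>
#include <iostream>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_3<K> Delaunay;
typedef K::Point_3 Point;

// Hypothetical region O: the open unit ball.
bool inside_O(const Point& p) {
  return p.x()*p.x() + p.y()*p.y() + p.z()*p.z() < 1.0;
}

// Del|O(E): the tetrahedra of Del(E) whose circumcenters belong to O.
std::vector<Delaunay::Cell_handle> restricted_cells(const Delaunay& dt) {
  std::vector<Delaunay::Cell_handle> cells;
  for (auto c = dt.finite_cells_begin(); c != dt.finite_cells_end(); ++c)
    if (inside_O(dt.dual(c)))  // the dual of a cell is its circumcenter
      cells.push_back(c);
  return cells;
}

int main() {
  std::vector<Point> pts = { Point(0,0,0), Point(0.5,0,0), Point(0,0.5,0),
                             Point(0,0,0.5), Point(2,2,2) };
  Delaunay dt(pts.begin(), pts.end());
  std::cout << restricted_cells(dt).size() << " tetrahedra in Del|O(E)\n";
  return 0;
}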
The attentive reader may have noticed that, in both cases of Fig. 2, the restricted Delaunay triangulation forms a good approximation of the object. This is in fact a general property of restricted Delaunay triangulations. It can be shown that, under some assumptions, and especially if E is a sufficiently dense sample of a smooth surface S, Del|S(E) is a good approximation of S, both in a topological and in a geometric sense. Specifically, Del|S(E) is a triangulated surface that is isotopic to S; the isotopy moves the points by a quantity that becomes arbitrarily small as the density increases; in addition, the normals of S can be consistently approximated from Del|S(E).
Before stating precise results, we define what “sufficiently dense” means. The
definition is based on the notion of medial axis. In the rest of the paper, S will
denote a closed smooth surface of ℝ³.
Definition 1 (Medial axis). The medial axis of a surface S is the closure of
the set of points with at least two closest points on S.
Definition 2 (lfs). The local feature size at a point x on a surface S, denoted lfs(x), is the distance from x to the medial axis of S. We write lfs(S) = inf_{x∈S} lfs(x).
It can be shown that lfs(x) does not exceed the reach of S at x, denoted by
rch(x). The reach at x is defined as the radius of the largest open ball tangent
Fig. 2. The Voronoi diagram (in red) and the Delaunay triangulation (in blue) of a
sample of red points on a planar closed curve C (in black). On the left: the edges of
the Voronoi diagram and of the Delaunay triangulation that are restricted to the curve
are in bold lines. On the right: the triangles belonging to the Delaunay triangulation
of the sample restricted to the domain bounded by C are in blue.
Fig. 3. The medial axis of a planar curve (only the portion inside the domain bounded
by the curve is shown). The thin curves are parallel to the boundary of the domain.
to S at x whose interior does not contain any point of S. Plainly, rch(x) cannot exceed the smallest radius of curvature of S at x, and it can be strictly smaller at points where the thickness of the object bounded by S is small. As shown by Federer [8], the local feature size of a smooth surface is bounded away from 0.²
The following notion of ε-sample was proposed by Amenta and Bern in their seminal paper on surface reconstruction [9].
² In fact, Federer proved the stronger result that the local feature size is bounded away from 0 as soon as S belongs to the class C^{1,1} of surfaces that admit a normal at each point and whose normal field is Lipschitz. This class is larger than the class of C² surfaces and includes surfaces whose curvature may be discontinuous at some points. An example of a surface that is C^{1,1} but not C² is the offset of a cube.
Fig. 4. A surface Delaunay ball whose center is a candidate for being inserted in E
Definition 3 (ε-sample). Let ε < 1 and let S be a smooth surface. We say that a finite point set E ⊂ S is an ε-sample of S if any point x of S is at distance at most ε lfs(x) from a point of E.
The notion of ε-sample is not very handy since it requires that any point of the
surface is close to a sample point. A more convenient notion of sample, called
loose ε-sample, only requires a finite set of points of S to be close to the sample
set [10]. More precisely, consider the Voronoi edges of Vor(E) that intersect
S. We require that each such intersection point is close to the sample set. By
definition, these Voronoi edges are dual to the facets of Del|S(E). An intersection
point of such an edge with S is thus the center of a so-called surface Delaunay
ball, i.e. a ball circumscribing a facet of Del|S(E) and centered on the surface S
(see Fig. 4).
Definition 4 (Loose ε-sample). Let ε < 1 be a constant and let S be a smooth surface. A point set E ⊂ S is a loose ε-sample of S if Del|S(E) has a vertex on each connected component of S and if, in addition, any surface Delaunay ball B(c_f, r_f) circumscribing a facet f of Del|S(E) is such that r_f ≤ ε lfs(c_f).
The following theorem states that, for sufficiently dense samples, Del|S(E) is a
good approximation of S.
Theorem 1. If E is a loose ε-sample of a smooth compact surface S, with ε ≤ 0.12, then the restriction to Del|S(E) of the orthogonal projection π_S : ℝ³ \ M(S) → S induces an isotopy that maps Del|S(E) onto S. The isotopy does not move the points of Del|S(E) by more than O(ε²). The angle between the normal to a facet f of Del|S(E) and the normals to S at the vertices of f is O(ε).
Weaker variants of this theorem have been proved by Amenta and Bern [9] and
Boissonnat and Oudot [10]. Cohen-Steiner and Morvan have further shown that
one can estimate the tensor of curvatures from Del|S(E) [11].
3 Surface Sampling and Meshing
In this section, we show how the concept of restricted Delaunay triangulation
can be used to mesh smooth surfaces. The algorithm is proven to terminate
and to construct good-quality meshes, while offering bounds on the accuracy of
the original boundary approximation and on the size of the output mesh. The
refinement process is controlled by highly customizable quality and size criteria
on triangular facets. A notable feature of this algorithm is that the surface needs
only to be known through an oracle that, given a line segment, detects whether
the segment intersects the surface and, in the affirmative, returns an intersection
point. This makes the algorithm useful in a wide variety of contexts and for a
large class of surfaces.
The Delaunay refinement paradigm was first proposed by Ruppert for meshing planar domains [12]. The meshing algorithm presented in this section is due to Chew [13,14].
3.1 Delaunay Refinement for Meshing Surfaces
Let S be a surface of ℝ³. If we know a loose ε-sample E of S, with ε ≤ 0.12,
then, according to Theorem 1, the restricted Delaunay triangulation Del|S(E)
is a good approximation of S. In this section, we present an algorithm that can
construct such a sample and the associated restricted Delaunay triangulation.
We restrict the presentation to the case of smooth, compact and closed surfaces. Hence, lfs(S) = inf_{x∈S} lfs(x) > 0.
The algorithm is greedy. It inserts points one by one and maintains the current
set E, the Delaunay triangulation Del(E) and its restriction Del|S(E) to S.
Let ψ be a function defined over S such that

∀x ∈ S,  0 < ψ_inf ≤ ψ(x) ≤ ε lfs(x),

where ψ_inf = inf_{x∈S} ψ(x). The function ψ controls the sampling density and is called the sizing field.
The shape quality of the mesh facets is controlled through their radius-edge ratio, where the radius-edge ratio of a facet is the ratio between the circumradius of the facet and the length of its shortest edge. We define a bad facet as a facet f of Del|S(E) that:
– either has a too big surface Delaunay ball B_f = B(c_f, r_f), meaning that r_f > ψ(c_f),
– or is badly shaped, meaning that its radius-edge ratio ρ is such that ρ > β for a constant β ≥ 1 (a minimal version of this test is sketched below).
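The test itself is elementary; the sketch below spells it out, assuming the surface Delaunay ball radius, the sizing value at its center, and the facet's circumradius and shortest edge length have already been computed by the mesher.

// Bad-facet test of Section 3.1 (a free-standing sketch, not Cgal API):
// a facet is bad if its surface Delaunay ball is too big for the sizing
// field, or if its radius-edge ratio exceeds the shape bound beta >= 1.
bool is_bad_facet(double r_f,           // radius of surface Delaunay ball
                  double psi_at_cf,     // sizing field at the ball center
                  double circumradius,  // circumradius of the facet
                  double shortest_edge, // length of its shortest edge
                  double beta = 1.0) {
  const double rho = circumradius / shortest_edge;  // radius-edge ratio
  return (r_f > psi_at_cf) || (rho > beta);
}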
Bad facets are removed from the mesh by inserting the centers of their surface Delaunay balls. The algorithm is initialized with a (usually small) set of points E_0 ⊂ S; three points per connected component of S are sufficient. Then the algorithm maintains, in addition to Del(E) and Del|S(E), a list of bad facets and, as long as bad facets remain, applies the following procedure:
refine facet(f)
1. insert in E the center c_f of a surface Delaunay ball circumscribing f,
2. update Del(E), Del|S(E) and the list of bad facets.
An easy recurrence proves that the distance between any two points inserted in the sample is at least ψ_inf > 0. Since S is compact, the algorithm terminates after a finite number of steps. It can be shown that the number of inserted points is O(∫_S dx / ψ²(x)).
Upon termination, any facet f of Del|S(E) has a circumscribing surface Delaunay ball B_f of center c_f and radius r_f ≤ ψ(c_f). To be able to apply Theorem 1, we need to take ψ ≤ 0.12 lfs and to ensure that Del|S(E) has at least one vertex on each connected component of S. This can be done by putting in E_0 three points per component of S that are sufficiently close together.
We sum up the results in the following theorem.
Theorem 2. Given a compact, smooth and closed surface S, and a positive Lipschitz function ψ ≤ ε lfs on S, one can compute a loose ε-sample E of S of size O(∫_S dx / ψ²(x)). If ε ≤ 0.12, the restricted Delaunay triangulation Del|S(E) is a triangulated surface isotopic and close to S.
3.2 Implementation
Note that the surface is queried only through an oracle that, given a line segment f* (meant to be the edge of Vor(E) dual to a facet f of Del|S(E)), determines whether f* intersects S and, in the affirmative, returns an intersection point and the value of ψ at this point.
Still, deciding whether a line segment intersects the surface may be a costly
operation. However, a close examination of the proof of correctness of the algo-
rithm shows that Theorems 1 and 2 still hold if we replace the previous oracle by
a weaker one that checks if a given line segment s intersects S an odd number of
times and, in the affirmative, computes an intersection point. Consider the case
where S is an implicit surface f(x) = 0, e.g. an isosurface defined by interpola-
tion in a 3D image. To know if s intersects S an odd number of times, we just
have to evaluate the sign of f at the two endpoints of the segment. It is only in
the case where the two signs are different that we will compute an intersection
point (usually by binary search). This results in a dramatic reduction of the
computing time.
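For instance, with the surface mesh generation package of Cgal, the oracle for an implicit surface reduces to supplying the function f. The following minimal sketch, with arbitrary criteria values (30° minimum facet angle, 0.1 bounds on surface Delaunay ball radii and on center-center distances), meshes the unit sphere known only through its implicit function.

#include <CGAL/Surface_mesh_default_triangulation_3.h>
#include <CGAL/Surface_mesh_default_criteria_3.h>
#include <CGAL/Complex_2_in_triangulation_3.h>
#include <CGAL/make_surface_mesh.h>
#include <CGAL/Implicit_surface_3.h>
#include <iostream>

typedef CGAL::Surface_mesh_default_triangulation_3 Tr;  // Del(E)
typedef CGAL::Complex_2_in_triangulation_3<Tr> C2t3;    // Del|S(E)
typedef Tr::Geom_traits GT;
typedef GT::Sphere_3 Sphere_3;
typedef GT::Point_3 Point_3;
typedef GT::FT FT;
typedef FT (*Function)(Point_3);
typedef CGAL::Implicit_surface_3<GT, Function> Surface_3;

// The only access to S: the sign of f tells on which side a point lies.
FT sphere_function(Point_3 p) {
  return p.x()*p.x() + p.y()*p.y() + p.z()*p.z() - 1;
}

int main() {
  Tr tr;
  C2t3 c2t3(tr);
  // The bounding sphere tells the oracle where to search for S.
  Surface_3 surface(sphere_function, Sphere_3(CGAL::ORIGIN, 2.));
  CGAL::Surface_mesh_default_criteria_3<Tr> criteria(30., 0.1, 0.1);
  CGAL::make_surface_mesh(c2t3, surface, criteria, CGAL::Manifold_tag());
  std::cout << "Final number of points: " << tr.number_of_vertices() << "\n";
  return 0;
}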
Although the algorithm is quite simple, it is in general not easy to know lfs, or even to bound lfs from below, which is required by the oracle. In practice, good results have been obtained using the following simple heuristic. We redefine bad facets so as to also control the distance ‖c_f − c'_f‖ between the center c_f of the surface Delaunay ball circumscribing a facet f of Del|S(E) and the center c'_f of the smallest ball circumscribing f. This strategy nicely adapts the mesh density to the local curvature of S. The local feature size lfs(x) also depends on the thickness of S at x, which is a global parameter and therefore difficult to estimate. However, if the sample is too sparse with respect to the object thickness, the restricted Delaunay triangulation is likely to be non-manifold and/or to have boundaries.

Fig. 5. Meshing an isosurface in a 3D image of the brain
The algorithm can check on the fly that Del|S(E) is a triangulated surface with
no boundary by checking that each edge in the restricted triangulation is incident
to two facets, and that the link of each vertex (i.e. the boundary of the union of
the facets incident to the vertex) is a simple polygon.
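As a toy illustration of the first half of this test, the following sketch represents facets simply as triples of vertex indices (an assumption made for illustration) and checks that every edge is incident to exactly two facets.

#include <algorithm>
#include <array>
#include <iostream>
#include <map>
#include <vector>

using Facet = std::array<int, 3>;   // a facet of Del|S(E), as vertex indices
using Edge  = std::pair<int, int>;  // an edge, with endpoints sorted

// A triangulated surface has no boundary iff each edge bounds two facets.
bool closed_surface(const std::vector<Facet>& facets) {
  std::map<Edge, int> incidence;
  for (const Facet& f : facets)
    for (int i = 0; i < 3; ++i) {
      const int a = f[i], b = f[(i + 1) % 3];
      ++incidence[{std::min(a, b), std::max(a, b)}];
    }
  for (const auto& kv : incidence)
    if (kv.second != 2) return false;
  return true;
}

int main() {
  // The boundary of a tetrahedron: four facets, every edge shared twice.
  std::vector<Facet> tet = { {0,1,2}, {0,1,3}, {0,2,3}, {1,2,3} };
  std::cout << std::boolalpha << closed_surface(tet) << "\n";  // true
  return 0;
}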
The issue of estimating lfs can also be circumvented by using a multiscale approach first proposed in the context of manifold reconstruction [15,16]. We slightly modify the algorithm so as to insert at each step the candidate point that is furthest from the current sample. This guarantees that the sample remains roughly uniform throughout the process. As the algorithm keeps inserting points, the topology of the triangulated surface it maintains may well change. Consider, for instance, the case of an isosurface in
a noisy image, say the brain in Fig. 5. Depending on the sampling density, the reconstructed surface may be a topological sphere (which the brain indeed is) or a sphere with additional handles due to noise. Accordingly, the algorithm will produce intermediate meshes of different topologies, approximating surfaces of various lfs. Since the changes of topology can be detected by computing at each step the Betti numbers of the current triangulated surface, we can output the successive surfaces and let the user decide which one is best.
The surface meshing algorithm is available in the open source library Cgal
[7]. Fig. 5 shows a result on a medical image. A thorough discussion of the
implementation of the algorithm and other experimental results can be found
in [14].
4 Meshing Volumes with Curved Boundaries
Let O be an object of ℝ³ bounded by a surface S. The meshing algorithm of the
previous section constructs the 3D Delaunay triangulation of the sample E and
extracts from Del(E) the restricted Delaunay triangulation Del|S(E). Hence,
the algorithm constructs a 3D triangulation T of O as well as a polyhedral
surface approximating S. However, since the algorithm does not insert points
inside O, the aspect ratio of the tetrahedra of T cannot be controlled. If further
computations are to be performed, it is then mandatory to improve the shape
of the tetrahedra by sampling also the interior of O.
We present in this section a modification of the Delaunay-based surface mesher
of the previous section due to Oudot et al. [17]. This algorithm samples the inte-
rior and the boundary of the object at the same time so as to obtain a Delaunay
refinement volume mesher. Delaunay refinement removes all badly shaped tetrahedra except the so-called slivers; a special postprocessing step is required to remove those slivers.
4.1 3D Mesh Refinement Algorithm
The algorithm is still a greedy algorithm that builds a sample E while main-
taining the Delaunay triangulations Del(E) and its restrictions Del|O(E) and
Del|S(E) to the object O and its bounding surface S.
The sampling density is controlled by a function ψ(x) defined over O, called the sizing field. Using constants α, β and γ, we define two types of bad elements.
As above, a facet f of Del|S(E) is considered bad if
– either it has a too big surface Delaunay ball B_f = B(c_f, r_f), i.e. r_f > α ψ(c_f),
– or it is badly shaped, meaning that its radius-edge ratio ρ_f is such that ρ_f > β.
A tetrahedron t of Del|O(E) is considered bad if
– either its circumradius r_t is too big, i.e. r_t > ψ(c_t),
– or it is badly shaped, meaning that its radius-edge ratio ρ_t is such that ρ_t > γ, where the radius-edge ratio ρ_t of a tetrahedron t is the ratio between its circumradius and the length of its shortest edge.
The algorithm uses two basic procedures, refine facet(f), which has been
defined in Section 3 and the following procedure refine tet(t).
refine tet(t)
1. insert in E the center c_t of the ball circumscribing t,
2. update Del(E), Del|S(E), Del|O(E) and the lists of bad elements.
The algorithm is initialized in the same way as the surface meshing algorithm. Then it applies the following refinement rules in order, Rule 2 being applied only when Rule 1 can no longer be applied.
Rule 1. If Del|S(E) contains a facet f which has a vertex in O \ S or is bad, refine facet(f).
Rule 2. If there is a bad tetrahedron t ∈ Del|O(E):
1. compute the center c_t of the circumscribing ball;
2. if c_t is included in the surface Delaunay ball of some facet f ∈ Del|S(E), refine facet(f);
3. else refine tet(t).
It is proved in [17] that, for appropriate choices of the parameters α, β and γ, the algorithm terminates. Upon termination, Del|S(E) = Del|S(E ∩ S) and Del|O(E) is a 3D triangulation isotopic to O.
4.2 Sliver Removal
While Delaunay refinement techniques can be proven to generate tetrahedra
with a good radius-edge ratio, they may create flat tetrahedra of a special type
called slivers. A sliver is a tetrahedron whose four vertices lie close to a plane
and whose projection to that plane is a quadrilateral with no short edge. Slivers
have a good radius-edge ratio but a poor radius-radius ratio (ratio between the
circumradius and the radius of the largest contained sphere). Unfortunately, the
latter measure typically influences the numerical conditioning of finite element
methods. Slivers occur, for example, if one computes the Delaunay triangulation of points on a regular grid (slightly perturbed to avoid degeneracies). Each square in the grid can be circumscribed by an empty ball, and its vertices therefore form a sliver of the triangulation.
Fig. 6. A sliver
Two techniques are known to remove slivers from volume meshes. One consists of a post-processing step called sliver exudation [18]. This step neither inserts any new vertex in the mesh nor moves any of them. Each vertex is assigned a weight, and the Delaunay triangulation is turned into a weighted Delaunay triangulation. The weights are carefully chosen so that no vertex disappears from the mesh, and no change occurs in the boundary facets (i.e. the facets of Del|S(E)). Within these constraints, the weight of each vertex is optimized in turn so as to maximize the minimum dihedral angle of the tetrahedra incident to that vertex. Although the guaranteed theoretical bound on dihedral angles is known
Fig. 7. The new point to be inserted is taken from the grey disk centered at the
circumcenter of the bad element τ but not in the black annulus to prevent the creation
of slivers
to be miserably low, this algorithm is quite efficient in practice at removing
slivers.
Another technique, due to Li [19], avoids the appearance of small slivers in the mesh by relaxing the choice of the refinement point of a bad element (tetrahedron or boundary facet). The new points are no longer inserted at the circumcenters of Delaunay balls or surface Delaunay balls, but in small picking regions around those circumcenters. Within such a picking region, we further avoid inserting points that would create slivers (see Fig. 7).
4.3 Implementation
We present in Fig. 8 two results, with uniform and non-uniform sizing fields. The uniform model is an approximation of an isosurface in a 3D medical image. The initial mesh of the surface had 33,012 vertices, while the second step of the algorithm added 53,762 new vertices in the interior of the object and 2,471 new vertices on its boundary. The total CPU time was 20 s on a Pentium IV (1.7 GHz). A thorough discussion of the implementation of the algorithm and other experimental results can be found in [17,20]. The algorithm will soon be available in the open source library Cgal [7].
4.4 Meshing of Multi-label Datasets
The above method seamlessly extends to the case of non-binary partitions, so
that it can be applied to the generation of high quality geometric models with
multiple junctions from multi-label datasets, frequently encountered in medical
applications.
Fig. 8. Non-uniform and uniform meshes obtained by the algorithm of Section 4. The
lower left corners show histograms of the radius-radius ratios of tetrahedra in the mesh,
where the radius-radius ratio of a tetrahedron is the ratio between the circumradius
and the radius of the maximum inscribed ball.
To that end, we define a partition of Delaunay tetrahedra induced by a space
subdivision. It is closely related to the concept of restricted Delaunay triangula-
tion. Let us consider a partition of space P = {Ω_0, . . . , Ω_n} into the background Ω_0 and n different regions, i.e.

ℝ³ = ⋃_{i∈{0,...,n}} Ω_i,

and let Γ denote the boundaries of the partition:

Γ = ⋃_i ∂Ω_i.
This continuous space subdivision is approximated by a discrete partition of a Delaunay triangulation: given a set of points E in ℝ³, we define the partition of Delaunay tetrahedra induced by P, denoted Del|P(E), as the partition of the tetrahedra of Del(E) according to the region containing their circumcenter. In other words, Del|P(E) = {Del|Ω_0(E), . . . , Del|Ω_n(E)}, where Del|Ω_i(E) is the set of tetrahedra of Del(E) whose circumcenters are contained in Ω_i.
By construction, Del|P (E) induces watertight surface meshes free of self-
intersections, and volume meshes associated to the different regions. All meshes
are mutually consistent, including at multiple junctions. In particular, the sur-
face meshes are composed of the triangular facets adjacent to two tetrahedra
assigned to different regions (i.e. belonging to different parts of Del|P (E)) and
of the convex hull facets adjacent to non-background tetrahedra. These facets
are called boundary facets in the sequel.
By the results of Sections 3 and 4, the surface and volume meshes form a good
approximation of the original partition P as soon as E is a sufficiently dense
sample. Hence, our meshing algorithm again consists in iteratively refining the
point set until it forms a good sample of the boundaries between the different
regions, and, if a quality volume mesh is desired, a good sample of their interior.
A notable feature of this approach is that the continuous partition need not
be represented explicitly. It is known only through a labeling oracle that, given a
Fig. 9. Left: Surface and volume meshes of head tissues generated from a segmented
magnetic resonance image. Cross-sections along different planes are displayed to make
apparent the high quality of both boundary facets and tetrahedra. Right: Angle and
radius-radius ratio distributions of surface meshes and volume meshes, respectively.
point in space, answers which region it belongs to. This oracle can be formulated as a labeling function L_P : ℝ³ → {0, . . . , n} associated to the partition P, such that L_P(p) = i if and only if p ∈ Ω_i. Intersections of a segment or a line with Γ can be computed to the desired accuracy using a dichotomic search on L_P.
This makes the approach applicable to virtually any combination of data
sources, including segmented 3D images, polyhedral surfaces, unstructured vol-
ume meshes, fuzzy membership functions, possibly having different resolutions
and different coordinate systems. The different data sources may even be in-
consistent with each other due to noise or discretization artefacts. In this case,
the labeling oracle has the responsibility of resolving the conflicts using some
user defined rules. As a result, our meshing algorithm is not affected by the
heterogeneity and possible inconsistency of the input datasets.
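In today's Cgal, this pipeline corresponds to the 3D mesh generation package. The following sketch, assuming a recent Cgal release and a hypothetical segmented image file, meshes a multi-label image whose voxel labels play the role of the labeling function L_P; the criteria values are arbitrary.

#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Mesh_triangulation_3.h>
#include <CGAL/Mesh_complex_3_in_triangulation_3.h>
#include <CGAL/Mesh_criteria_3.h>
#include <CGAL/Labeled_mesh_domain_3.h>
#include <CGAL/make_mesh_3.h>
#include <CGAL/Image_3.h>
#include <fstream>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Labeled_mesh_domain_3<K> Mesh_domain;
typedef CGAL::Mesh_triangulation_3<Mesh_domain>::type Tr;
typedef CGAL::Mesh_complex_3_in_triangulation_3<Tr> C3t3;
typedef CGAL::Mesh_criteria_3<Tr> Mesh_criteria;

int main() {
  CGAL::Image_3 image;
  if (!image.read("segmented_head.inr"))  // hypothetical multi-label image
    return 1;
  // Each voxel label is a region Omega_i; the domain is the labeling oracle.
  Mesh_domain domain = Mesh_domain::create_labeled_image_mesh_domain(image);
  using namespace CGAL::parameters;
  // Quality and size criteria for boundary facets and for tetrahedra.
  Mesh_criteria criteria(facet_angle = 30, facet_size = 3,
                         facet_distance = 1.5,
                         cell_radius_edge_ratio = 2, cell_size = 4);
  // Delaunay refinement, followed by sliver removal postprocessing.
  C3t3 c3t3 = CGAL::make_mesh_3<C3t3>(domain, criteria);
  std::ofstream out("out.mesh");
  c3t3.output_to_medit(out);  // consistent surface and volume meshes
  return 0;
}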
Another important source of flexibility of our approach is that the customiz-
able quality criteria on boundary facets and/or on tetrahedra mentioned in Sec-
tions 3 and 4 can be tuned independently for the different regions. Thus, a
boundary facet must be tested against the criteria of its two adjacent regions. It
is classified as a good facet if it fulfills both criteria.
An experimental result on real medical data is shown in Fig. 9. For further
discussion and experimental results, please refer to [21].
5 Anisotropic Meshes
Anisotropic meshes are triangulations of a given domain in the plane or in higher
dimensions, with elements elongated along prescribed directions. Anisotropic tri-
angulations have been shown to be particularly well suited for the interpolation of functions and for numerical modeling [22]. They make it possible to minimize the number of triangles in the mesh while retaining good accuracy in computations. For such
applications, the directions along which the elements should be elongated are
usually given as quadratic forms at each point. These directions may be related
to the curvature of the function to be interpolated, or to some specific directions
taken into account in the equations to be solved.
Anisotropy represented in the form of metric tensors is widely used in image
processing, in two main contexts. First, for a general scalar or vector-valued im-
age, a structure tensor [23,24] which characterizes the local image structure can
be defined. It is a classical tool for edge and corner detection. Second, a recent
medical imaging modality called diffusion tensor magnetic resonance imaging
(DT-MRI) produces a field of symmetric positive definite matrices which lo-
cally quantify the anisotropic diffusion of water molecules in the tissues. In both
cases, these tensors induce Riemannian metrics on the image domain, which have
frequently been embedded in segmentation and motion estimation algorithms,
making the latter more faithful to image structure. Since intensity variations
are typically correlated with the underlying geometry, these metrics can also
enhance the extraction of geometric models from segmented images.
Various heuristic solutions for the generation of anisotropic meshes have been
proposed. Li et al. [25] and Shimada et al. [26] use packing methods. Bossen and Heckbert [27] use a method consisting of centroidal smoothing, retriangulation, and insertion or removal of sites. Borouchaki et al. [28] adapt the classical Delaunay refinement algorithm to the case of an anisotropic metric. A related topic
is anisotropic mesh adaptation, a popular technique used to improve numerical
simulations. The mesh is iteratively improved by using a metric field computed
from an error analysis until the mesh and the solution converge [29].
Recently, Labelle and Shewchuk [30] have laid the foundations of a rigorous approach to anisotropic mesh generation based on the so-called anisotropic Voronoi diagram. The framework is the following. We consider a domain Ω ⊂ ℝ^d and assume that each point p ∈ Ω is given a symmetric positive definite quadratic form represented by a d × d matrix M_p, called the metric at p. The distance between two points a and b, as measured by the metric M_p, is defined as

d_{M_p}(a, b) = √((a − b)^t M_p (a − b)).
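As a minimal illustration in plain C++ (no mesh data structures assumed), the following helper evaluates d_{M_p}(a, b) for a 3 × 3 symmetric positive definite metric.

#include <array>
#include <cmath>
#include <iostream>

using Vec3 = std::array<double, 3>;
using Mat3 = std::array<std::array<double, 3>, 3>;

// d_{M_p}(a, b) = sqrt((a - b)^t M_p (a - b)).
double anisotropic_distance(const Vec3& a, const Vec3& b, const Mat3& M) {
  const Vec3 d{ a[0] - b[0], a[1] - b[1], a[2] - b[2] };
  double q = 0.;
  for (int i = 0; i < 3; ++i)
    for (int j = 0; j < 3; ++j)
      q += d[i] * M[i][j] * d[j];
  return std::sqrt(q);
}

int main() {
  const Mat3 M{{ {4, 0, 0}, {0, 1, 0}, {0, 0, 1} }};  // stretched x-axis
  std::cout << anisotropic_distance({0,0,0}, {1,0,0}, M) << "\n";  // prints 2
  return 0;
}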
In the sequel, E = {p_1, . . . , p_n} denotes a set of points. The points, associated with their metrics, are called sites. We can associate to each point p_i of E its Voronoi cell

V(p_i) = { x ∈ ℝ^d : ∀j, d_{M_{p_i}}(p_i, x) ≤ d_{M_{p_j}}(p_j, x) }.
The subdivision induced by these Voronoi cells is called the anisotropic diagram of E (Fig. 10). In the special case where the metric is the same at each site, i.e. M_{p_i} = M for 1 ≤ i ≤ n, the anisotropic diagram is a power diagram. Indeed,

(x − p_i)^t M (x − p_i) ≤ (x − p_j)^t M (x − p_j)  ⇔  −2 p_i^t M x + p_i^t M p_i ≤ −2 p_j^t M x + p_j^t M p_j.
Fig. 10. An anisotropic diagram (from [30])
Write σ_i for the hypersphere of equation ‖x‖² − 2 p_i^t M x + p_i^t M p_i = 0. It follows from the inequality above that x belongs to V(p_i) iff x belongs to the power cell of σ_i in the power diagram of σ_1, . . . , σ_n. In addition, the anisotropic diagram has a dual triangulation, namely the regular triangulation dual to the power diagram of the σ_i.
In contrast, in the general case where the M_{p_i} are distinct, the dual of the anisotropic diagram may not be a triangulation. Nevertheless, Labelle and Shewchuk have shown that, in the 2-dimensional case, the dual of the anisotropic diagram is an embedded triangulation when the density of the sample E is sufficiently high [30]. Based on this observation, they proposed a method to construct anisotropic triangulations of planar domains: the anisotropic diagram of a set of sample points is refined by insertion of new sites until the geometric dual is an embedded triangulation. A simpler variant of the algorithm that provides a direct computation of the dual mesh, without computing the anisotropic diagram, can be found in [31].
The main limitation of Labelle and Shewchuk’s approach is that it is restricted
to the 2D case or the case of surfaces embedded in 3D [32]. The presence of slivers
in higher dimensions, and especially in 3D, have prevented the extension of the
method.
An alternative approach is presented in [33]. This approach, based on the
notion of locally uniform anisotropic triangulation, still follows the Delaunay
refinement paradigm. A locally uniform anisotropic triangulation of a set of
points E is a triangulation T of E in which the star T(v) of any vertex v coincides exactly with the star of this vertex in the Delaunay triangulation Del_v(E) of the set E computed for the metric M_v of v. Given a set of sites E and a site v ∈ E, computing the Delaunay triangulation Del_v(E) for the metric M_v is simple, since this triangulation is just the regular triangulation dual to a power diagram, as explained above.
The algorithm maintains a sample E and a set of local triangulations, one for each site in E, where the local triangulation of site v is reduced to the star of v in Del_v(E). (An analogous data structure was suggested by Shewchuk in [34]
Fig. 11. Anisotropic mesh of the surface of a torus
to handle triangulations of moving points.) At the beginning, the local stars are in general inconsistent, meaning that a tetrahedron appearing in one local star may not appear in the local stars of its four vertices. The algorithm refines the sample E until there are no more inconsistencies, so that the local stars can be merged together to form a locally uniform anisotropic triangulation.
Inconsistencies among the local stars arise either because the metric is highly distorted over the domain covered by a single tetrahedron, or in the presence of quasi-cospherical configurations. The first situation is not problematic, since introducing new points in the sample reduces the distortion. The case of quasi-cospherical configurations is more serious and may prevent the refinement process from terminating. This problem is strongly related to the presence of slivers and can be solved by avoiding the creation of quasi-cospherical configurations, in a way similar to the way Li and Teng suggested to avoid slivers (see Section 4.2). New points are inserted in a so-called picking region, which consists of a small ball around the circumcenter of a bad tetrahedron minus the locus of points yielding quasi-cospherical configurations.
This algorithm is simple and straightforward, since it relies on the usual Delaunay predicates (applied in suitably stretched spaces). It may be extended to ℝ³: there, the new points are inserted in picking regions chosen so as to avoid both slivers and quasi-cospherical configurations. An anisotropic mesh of the surface of a torus, obtained with this method, is shown in Fig. 11.
6 Higher Dimensions
Remarkably, the framework described in the previous sections extends to Euclidean spaces of dimension higher than three. In particular, it makes it possible to address spatio-temporal problems frequently occurring in science and engineering. Let us mention temporal sequences of MR (magnetic resonance) images of the beating heart in medical imaging, and the spatio-temporal reconstruction of moving scenes from videos in computer vision. The latter application is detailed in Section 6.2.
Spatio-temporal models are not the only motivation to construct meshes in
higher dimensions. Although we will not discuss such applications in this pa-
per, let us mention the study of physical dynamical systems which is naturally
expressed in 6D phase-space. Also, many machine learning problems, notably
in computer vision, are tackled by mapping input data into higher dimensional
parametric spaces.
To extend our meshing algorithms, we first need to extend the construction
of the Delaunay triangulation in higher dimensional spaces.
6.1 Computing Delaunay Triangulations in Spaces of Medium
Dimensions
While very efficient and robust codes nowadays exist for constructing Delaunay triangulations in two and three dimensions [7], the situation is less satisfactory in higher dimensions: the few existing higher-dimensional Delaunay codes are either non-robust, or very slow and space demanding, which makes them of little use in practice. This situation is partially explained by the fact that the size of the Delaunay triangulation of a point set grows very fast (exponentially in the worst case) with the dimension. However, a careful implementation can lead to dramatic improvements, and quite big input sets can be triangulated in spaces of dimension up to 6.
In [35], we propose a new C++ implementation of the well-known incremen-
tal construction. The algorithm maintains, at each step, the complete set of
d-simplices together with their adjacency graph. Two main ingredients are used
to speed up the algorithm. First, the input points are pre-sorted along a d-dimensional Hilbert curve to accelerate point location. Second, the dimension of the embedding space is a C++ template parameter instantiated at compile time; we thus avoid a lot of dynamic memory management. The code is fully robust and
computes the exact Delaunay triangulation of the input data set. Following the
central philosophy of the Cgal library, predicates are evaluated exactly using
arithmetic filters.
Fig. 12 presents a benchmark of the speed and memory usage of our imple-
mentation, with dimension ranging from 2 to 6 and input size ranging from 1K
to 1024K points. We have run our code on input sets consisting of uniformly
distributed random points in a unit cube with floating point (double) input co-
ordinate type. In each case, the input points are provided at once, which permits
the use of spatial sorting prior to inserting the points in the triangulation.
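For readers who want to experiment, the following sketch uses the dD triangulation package that grew out of this line of work, under the assumption of a modern Cgal release where CGAL::Delaunay_triangulation takes a kernel with a compile-time dimension tag; insert() applies the spatial sort internally.

#include <CGAL/Epick_d.h>
#include <CGAL/Delaunay_triangulation.h>
#include <CGAL/Random.h>
#include <iostream>
#include <vector>

typedef CGAL::Epick_d<CGAL::Dimension_tag<4>> K;  // dimension fixed at compile time
typedef CGAL::Delaunay_triangulation<K> DT;
typedef K::Point_d Point;

int main() {
  CGAL::Random rng(0);
  std::vector<Point> pts;
  for (int i = 0; i < 1000; ++i) {
    double c[4];  // uniformly distributed random points in the unit cube
    for (double& x : c) x = rng.get_double(0., 1.);
    pts.push_back(Point(c, c + 4));
  }
  DT dt(4);
  dt.insert(pts.begin(), pts.end());  // spatial sorting happens inside
  std::cout << dt.number_of_vertices() << " vertices in dimension 4\n";
  return 0;
}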
Fig. 12. Timings (left) and space usage (right) of our novel Delaunay triangulation
implementation. All axes are logarithmic.
Similar results are observed when the points lie on a manifold of co-dimension
1, which is the case for points on a deforming surface. Further experiments are
discussed in [35].
6.2 Application to Spatio-temporal Scene Modeling from Video
Sequences
In [36], we have used a higher-dimensional extension of the meshing algorithm of
Section 3 to compute 4D spatio-temporal representations of non-rigid dynamic
scenes from multiple video sequences.
By considering time as an additional dimension, we could exploit seamlessly
the time coherence between different video frames to produce a compact and
high-quality 4D representation of the scene. The 3D model at a given time
instant can easily be obtained by intersecting this 4D mesh with a temporal plane.

Fig. 13. A few 3D slices of a 4D space-time reconstruction of a moving scene from real video data

Compared to independent frame-by-frame computations, this point of
view has several significant advantages. First, it exploits time redundancy to
limit the size of the output representation. For example, parts of the scene that
are immobile or have a uniform motion can be approximated by a piecewise-
linear 4D mesh with few elements elongated in the time direction. In contrast,
in the same configuration, a frame-by-frame approach would repeat 3D elements
at each frame. Second, such an approach yields a temporally continuous repre-
sentation, which is defined at any time, thus enabling interpolation of objects’
shape between consecutive frames. This also makes a spatio-temporal smooth-
ing possible, in order to recover from occasional outliers in the data. Third, a
byproduct of the two first advantages is the reduction of flickering artefacts in
synthesized views, as consecutive 3D slices have a similar geometry and connec-
tivity by construction. Finally, changes in 3D topology over time are handled naturally by our spatio-temporal embedding formulation.
A sample result is displayed in Fig. 13, in the form of several 3D slices of
the output 4D mesh. More generally, this application demonstrates the feasibil-
ity of 4D hypersurface representations in image processing. It is likely to inspire
progress in other applications, such as the spatio-temporal modeling of the beat-
ing heart from temporal sequences of MR images.
7 Conclusion
We have presented algorithms for mesh generation by Delaunay refinement and
some of their applications in Computer Vision and Medical Imaging. As re-
ported in this paper as well as in other recent survey papers [37,38], Delaunay-
based meshing algorithms have advantages over grid-based algorithms like the
marching cubes algorithm. Most notably, they offer theoretical guarantees on
the quality of the approximation and also on the shape of the elements (facets
or tetrahedra) of the mesh. Moreover, the paradigm of Delaunay refinement is
quite flexible and can be adapted to mesh surfaces, 3D domains, or even higher
dimensional manifolds, and to take into account anisotropic metric fields.
Due to limited space, we have assumed throughout the paper that the objects
to be meshed were smooth. Extensions of the Delaunay refinement paradigm to
non-smooth objects can be found in [10,20,39].
The algorithms discussed in this paper are based on the experience of the Geometrica group at INRIA Sophia-Antipolis (http://www-sop.inria.fr/geometrica). They are or will soon be available from the Cgal library [7]. They have been used for image segmentation [40], data assimilation for cardiac electromechanical modeling [41], and surface reconstruction from unorganized data points [42,43]. We hope that they will find further applications in Computer Vision and Medical Imaging, Computer Aided Design, Computer Graphics and Numerical Simulation.
Acknowledgments
Research reported in this paper has been conducted in collaboration with Steve
Oudot, Laurent Rineau and Camille Wormser who are gratefully acknowledged.
References
1. Pham, D.L., Xu, C., Prince, J.L.: A survey of current methods in medical image
segmentation. Annual Review of Biomedical Engineering 2, 315–338 (2000)
2. Lorensen, W.E., Cline, H.E.: Marching Cubes: A high resolution 3D surface con-
struction algorithm. Computer Graphics 21(4), 163–169 (1987)
3. Plantinga, S., Vegter, G.: Isotopic meshing of implicit surfaces. The Visual Com-
puter 23, 45–58 (2007)
4. Labelle, F., Shewchuk, J.: Isosurface stuffing: fast tetrahedral meshes with good
dihedral angles. ACM Transactions on Graphics 26(3) (2007)
5. Edelsbrunner, H.: Geometry and Topology for Mesh Generation. Cambridge University Press (2001)
6. Boissonnat, J.D., Devillers, O., Pion, S., Teillaud, M., Yvinec, M.: Triangulations
in CGAL. Computational Geometry: Theory and Applications 22, 5–19 (2002)
7. Cgal, Computational Geometry Algorithms Library, http://www.cgal.org
8. Federer, H.: Curvature measures. Transactions of the American Mathematical So-
ciety 93(3), 418–491 (1959)
9. Amenta, N., Bern, M.: Surface reconstruction by Voronoi filtering. Discrete Com-
put. Geom. 22(4), 481–504 (1999)
10. Boissonnat, J.D., Oudot, S.: Provably good sampling and meshing of Lipschitz surfaces. In: Proc. 22nd Annual ACM Symposium on Computational Geometry (2006)
11. Cohen-Steiner, D., Morvan, J.M.: Restricted Delaunay triangulations and normal
cycle. In: Proc. 19th Annual ACM Symposium on Computational Geometry, pp.
237–246 (2003)
12. Ruppert, J.: A Delaunay refinement algorithm for quality 2-dimensional mesh gen-
eration. J. Algorithms 18, 548–585 (1995)
13. Chew, L.P.: Guaranteed-quality Delaunay meshing in 3D (short version). In: SCG 1997: Proceedings of the Thirteenth Annual Symposium on Computational Geometry, pp. 391–393. ACM, New York (1997)
14. Boissonnat, J.D., Oudot, S.: Provably good sampling and meshing of surfaces.
Graphical Models 67, 405–451 (2005)
15. Guibas, L.J., Oudot, S.Y.: Reconstruction using witness complexes. In: Proc. 18th
ACM-SIAM Sympos. on Discrete Algorithms (SODA), pp. 1076–1085 (2007)
16. Boissonnat, J.D., Guibas, L., Oudot, S.: Learning smooth shapes by probing. Com-
put. Geom. Theory and Appl. 37, 38–58 (2007)
17. Oudot, S., Rineau, L., Yvinec, M.: Meshing volumes bounded by smooth surfaces.
In: Proc. 14th International Meshing Roundtable, pp. 203–219 (2005)
18. Cheng, S.W., Dey, T.K., Edelsbrunner, H., Facello, M.A., Teng, S.H.: Sliver exudation. J. ACM 47(5), 883–904 (2000)
19. Li, X.Y.: Sliver-free Three Dimensional Delaunay Mesh Generation. PhD thesis,
University of Illinois at Urbana-Champaign (2000)
20. Rineau, L., Yvinec, M.: Meshing 3D domains bounded by piecewise smooth surfaces. In: Meshing Roundtable Conference Proceedings, pp. 443–460 (2007)
21. Pons, J.P., Ségonne, F., Boissonnat, J.D., Rineau, L., Yvinec, M., Keriven, R.:
High-quality consistent meshing of multi-label datasets. In: Karssemeijer, N.,
Lelieveldt, B. (eds.) IPMI 2007. LNCS, vol. 4584, pp. 198–210. Springer, Heidelberg
(2007)
22. Shewchuk, J.R.: What is a good linear finite element? Interpolation, conditioning, anisotropy, and quality measures (manuscript, 2002), http://www.cs.cmu.edu/~jrs/jrspapers.html
23. Bigun, J., Granlund, G.: Optimal orientation detection of linear symmetry. In:
International Conference on Computer Vision, pp. 433–438 (1987)
24. Harris, C., Stephens, M.: A combined corner and edge detector. In: Proc. 4th Alvey
Vision Conference, pp. 147–151 (1988)
25. Li, X.Y., Teng, S.H., Üngör, A.: Biting ellipses to generate anisotropic mesh. In:
8th International Meshing Roundtable (October 1999)
26. Shimada, K., Yamada, A., Itoh, T.: Anisotropic Triangulation of Parametric Sur-
faces via Close Packing of Ellipsoids. Int. J. Comput. Geometry Appl. 10(4), 417–
440 (2000)
27. Bossen, F., Heckbert, P.: A pliant method for anisotropic mesh generation. In: 5th
International Meshing Roundtable (October 1996)
28. Borouchaki, H., George, P.L., Hecht, F., Laug, P., Saltel, E.: Delaunay mesh gen-
eration governed by metric specifications. part I algorithms. Finite Elem. Anal.
Des. 25(1-2), 61–83 (1997)
29. Dobrzynski, C., Frey, P.: Anisotropic Delaunay mesh adaptation for unsteady simulations. In: Proc. 17th International Meshing Roundtable (2008)
30. Labelle, F., Shewchuk, J.R.: Anisotropic Voronoi diagrams and guaranteed-quality anisotropic mesh generation. In: SCG 2003: Proceedings of the Nineteenth Annual Symposium on Computational Geometry, pp. 191–200. ACM Press, New York (2003)
31. Boissonnat, J.D., Wormser, C., Yvinec, M.: Anisotropic diagrams: the Labelle–Shewchuk approach revisited. Theoretical Computer Science (to appear)
32. Cheng, S.W., Dey, T.K., Ramos, E.A., Wenger, R.: Anisotropic surface meshing.
In: SODA 2006: Proceedings of the seventeenth annual ACM-SIAM symposium on
Discrete algorithm, pp. 202–211. ACM, New York (2006)
33. Boissonnat, J.D., Wormser, C., Yvinec, M.: Locally uniform anisotropic meshing.
In: Proceedings of the 24th Annu. ACM Sympos. Comput. Geom., pp. 270–277
(2008)
34. Shewchuk, J.R.: Star splaying: an algorithm for repairing Delaunay triangulations and convex hulls. In: SCG 2005: Proceedings of the Twenty-first Annual Symposium on Computational Geometry, pp. 237–246. ACM, New York (2005)
35. Hornus, S., Boissonnat, J.D.: An efficient implementation of Delaunay triangulations in medium dimensions. Technical report, INRIA (2008)
36. Aganj, E., Pons, J.P., Ségonne, F., Keriven, R.: Spatio-temporal shape from silhouette using four-dimensional Delaunay meshing. In: IEEE International Conference on Computer Vision (2007)
37. Boissonnat, J.D., Cohen-Steiner, D., Mourrain, B., Rote, G., Vegter, G.: Mesh-
ing of surfaces. In: Boissonnat, J.D., Teillaud, M. (eds.) Effective Computational
Geometry for Curves and Surfaces. Mathematics and Visualization, pp. 181–229.
Springer, Heidelberg (2006)
38. Dey, T.: Delaunay mesh generation of three dimensional domains. Technical report,
Ohio State University (2007)
39. Cheng, S.W., Dey, T.K., Levine, J.: A practical Delaunay meshing algorithm for a large class of domains. In: Proc. 16th International Meshing Roundtable (2007)
51. Rt'petitionstheodolit öl. niircli-jciingsinstrunuMit .'»2. Hü
selben Betni^' vlli(K' aber, da man wieder nur z,vei Lesungren
macht, bei der Kepetition nunmehr der ganvA- Winkel JOl'
t'ehlerh:iff, da Ja die jedesmalige Einstellunir auf eines der beiden
Ol)Jekte als viel scliärter, etwa nur mit einem Fehler von 1 behaftet,
anzusehen ist. Ist aber der Winkel lOV um 10 fehlerhaft, so wird der
viert«' Tfil TOTl nur um 2-5 uiiriehtig, also wesentlich genauer.
IlierzAi beihirf es nur einer einfachen, besctnderen Hinrichtung,
durch welche sich der Kepetitionstheodolit on den anderen
unterscheidet. Die Büchse I) (Fig. 28), in welcher die vertikale
Umdrehungsachse des Instruments liegt, ist selbst wieder eine
(hohle) vertikale Umdrehungsachse, welche in einer zwehen Biiehse
7' liegt und in ihr durch Klemme und Feinbewegung gedreht
werden kann (in Fig. 138 die untere unmittelbar über dem DreifuH
befindliche I. Beim Gebrauche wird zunäehst unten geklemmt, d. h.
der Horizontal kreis mit dem Dreifuß verbunden und der Theodolit in
gewöhnlicher Weise gebraucht: das linke Objekt anvisiert, der Kreis
gelesen, dann das rechte Objekt anvisiert. Dann wird oben
geklemmt, aber nicht gelesen,* sondern nunmehr unten gelüftet
und der ganze Oberteil mit dem Horizontalkreis wieder auf das linke
Objekt zurückgeführt und dieser scharf eingestellt. Jetzt wird wieder
unten geklemmt** (der Kreis fixiert), oben gelüftet und auf das
rechtsseitige Objekt scharf eingestellt. Für jede folgende Kepetition
wiederholt sich der Vorgang: Es wird oben geklemmt (der Kreis mit
dem Fernrohr verbunden), unten gelüftet und das linke 01)jekt Ä
eingestellt; dann unten geklemmt, der Kreis mit dem Dreifuß
verbunden, oben gelüftet und auf das rechtsseitige Objekt B
eingestellt. Hat man die gewünschte Anzahl (n) Repetitionen, so
wird nach der letzten Einstellung auf das rechtsseitige Objekt der
Kreis gelesen. Bei Mikroskopen wird diese Einrichtung nicht mehr
verwendet, weil die Ablesung an den Mikroskopen dieselbe
Genauigkeit liefert wie die Einstellung des Fernrohrs auf die Objekte.
Neuerdings wird wieder den Repetitionsbeobachtungen gegenüber
den Satzbeobachtungen (s. Nr. 101) das Wort geredet. 52. Das
Durchgangsinstrument. Ist das Instrument im Horizonte nicht
52. drehbar, sondern nur in einem einzigen Vertikalkreise, so entsteht
das Durchgangs- oder Passageninstrument. Auf zwei festen,
vertikalen Säulen (Fig. 139) sind die Lager für die Horizontalachse
AB, welche Fernrohr CD und Kreis E trägt. Das Fernrohr kann
zentrisch mit prismatischem Okular oder exzentrisch oder wie in Fig.
140 ein gebrochenes Fernrohr sein. Bei den kleinen gebräuchlichen
Durchgangsinstrumenten findet man durchwegs die Form Fig. 140.
Der Kreis wird dabei nur als Einstellkreis benutzt und wird mit
Nonien und Lupe abgelesen. Hat das Instrument einen fein geteilten
Kreis, so * Zur Erhöhung der Genauigkeit wird allerdings jedesmal
oder nach je zwei Repetitionen gelesen und zur Ableitung des
Resultates werden alle Lesungen berücksichtigt (vgl. Nr. 101). **
Kleuimung lindet eigentlich vor der Einstellung statt, da diese durch
die Feinbewegungsschraube bewirkt wird.
53. 120 Durchgangsinstrument 52. entstellt der Meridiankreis.
Rechts ist eine Beleuchtun^slampe, die ihre Strahk'n (wie aucli bei
dem Universalinstrumente Fig. 18oj durch die holile Achse
hindurchsendet. Das Instrument Fig. 140 hat ein umsetzbares
Hängeniveau, eine Umlcgevorriclitung, ganz analog der oben (Nr.
oOj beschriebenen, bei welcher nur der untere Teil der Stützen /'
ausgebaucht ist, um für das Hängeniveau Platz zu lassen. Fig. 139.
Neigung der Achse, Kollimationsfehler des Fernrohrs werden auf die
bereits beschriebene Art ermittelt. 5U. Höhenmessung. Die Messung
des lotrechten Abstands der Punkte von einer als Grundebene
angenommenen Horizontalebene erfordert entweder die Kenntnis
des Höhenwinkels und der Entfernung des Punktes oder aber die
Messung des lotrechten Abstands über einer mit genügender
Genauigkeit hergestellten horizontalen Visur. Die erstgenannte
Metliode heilk die trigonometrische Höheumessung; die zweite
Methode das Nivellieren. Für die trigonometrische lliWienmessung
kann jeder Theodolit oder jedes Universalinstrument verwendet
werden, indem man mittels desselben die Höhen- oder Tiefenvinkel
bestimmt, wenn die Entfernung der Punkte eiitAved(M- ebenfalls auf
trigonometrischem Wege durch die Berechnung der Dreiecke oder
aber durch Distanzmesser l)ekannt wird (s. hierzu auch die
Tachymetrie Nr. 50). Doch hat man aueh. wenn es sich nur um die
Höhenmessung handelt, eigene Instrumente, die bereits erwähnten
llithen
55. The text on this page is estimated to be only 28.86%
accurate
Ilölicnincssun;^ '*'i121 Fig. 140. Fig. 141.
58. The text on this page is estimated to be only 29.67%
accurate
122 Nivellieren ö4. kreise; ein Beispiel eines solchen ist in
Fig. 141 dargestellt,* ein mittels eines Nußgelenkes auf ein
Zapfenstativ aufstellbares Instrument; isonienträger des
Hölienkreises mit fester AUiidadenlibelle; Instrument um eine
vertikale Achse drehbar, jedoch ohne Lesung der Horizontalwinkel.
54. Nivellierinstrumente. Für das Nivellieren ist es erforderlich, eine
horizontale Visur an einem Beobachtungsorte P (Fig. 142)
herzustellen und zu bestimmen, um v^ie viel ein anderer Punkt l
über Pj, d. i. über der horizontalen Visur PP^, liegt. Zu diesem
Zwecke wird in J. eine geteilte Fig. 142. Latte vertikal aufgestellt**
und beobachtet, welcher Teil derselben zwischen dem Punkte l und
dem Punkte p, in welchem die horizontale Visur trifft, enthalten ist.
Ist dieser Abschnitt l und ist h die Höhe des Instrumentes J, die
sogenannte Instrumentenhöhe, so ist der Höhenunterschied X = h
— I. Ergibt sich x positiv, so ist der anvisierte Punkt höher, für
negative ./; ist er tiefer. Beispiel: Sei die Instrumentenhöhe h=l-bm
und der Lattenabschnitt l = 8cni, so ist x^-{-{)icm. Mau nennt
diese Methode das Nivellieren aus dem Ende. Es erfordert die
Kenntnis der Instrumentenhöhe, welche allerdings nahe konstant ist,
aber immerhin kleinen Schwankungen durch die Art der Aufstellung
unterworfen sein kann. Von dieser ist man unabhängig beim
Nivellieren aus der Mitte. Auf den Instrumentenstand / (Fig. 143)
wird die horizontale Visur l her2 r / 1 T n ff .: ^ i ^,. ^ i L-^ i 5
Fiir. 143. * Das zur Ennittlung von Bauiiiliölien xorwciulcte
sogonamite Pomlronioter ist ein mit einem Lot verseliene.s Diopter.
Bringt man an dem Hiinirebogen (,Fig. 59a umi 60) bei M und N
statt der Anfliängeringe ein OIvular- und ein (Hijektivdiopter an, so
gibt die Abweieliung des Lotes OIj wwx der Vertikalen die Neigung
der Absehenlinie MX. •*■* Die Metlu)de des Nivellierens mit geteilter
Latte soll sehon bei den Griechen geübt worden sein (s. A.
Laussedat, Keelierebes. Tom. 1. Paris 1S98, S. 16).
59. Nivellieren ö4. 123 gestellt und die Lattenabsdiiiitte auf
eimn- in dem Punkte .1 und auf einer in dem Punkte B aufirestellten
Latte bestinunt; seien diese Abschnitte ), üj, SU ist der
Höhenunterschied zwischen A und B f — i. Von cimin zweiten
Aufstellungspunkte II wird wieder zurück und vor visiert und es
werden die Lattenlesungeu n, v^ ermittelt; dann ist die
Höhendifferenz zwischen C und B: r, — v., usw. Der
Gesamthiihenuuterschied bei den sukzessiven Aufstellungen in den
Punkten /. //. /// . . . wird daher H = {r, — i) + U-2 — V2) + (rg —
r,, ) . . . .Selbstverständlich sind diese Ditferenzen mit den
entsprechenden Zeichen zu nehmen; ist eine Ditlereuz negativ, so
bedeutet dieses, daß der folgende Punkt tiefer als der
vorhergehende ist. Beispiel: Es seien die Lattenlesungen bei der
Msur so wird zurück (/•) vor (v) r — V in Standpunkt / 75-3 46-5' +
28-8 II 68-7 35-2 + 33-5 III 59-8 62-4 — 2-6 IV 37-6 51-8 — 14-2 V
114-7 zwischen dem 92-5 ersten und letzten + 22-2 daher der
Höhenunterschied anvisierten Punkte . . . H = : +67-7 cm. Die
Formel für H läßt sich übrigens auch schreiben H = ( + /-o + ^3 .
. .) — {i + Vo + '-3 . . .) = 3/- — 2i; wo Ir die Summe aller
Lesungen für „Visur zurück und 2v die Summe aller Lesungen für
„Visur vor bedeutet; in dieser Form wird die Rechnung einfacher.
Bildet man dies für das vorige Beispiel, so erhält mau ^;-= 356-1;
2^ = 288-4; i? = H-67-7cw, d. h. der Endpunkt ist um 67*7 cm
höher als der Anfangspunkt. Die Entfernung, in welcher man sich
aufstellt, ist dabei ganz gleichgültig und läßt man sich nur von
praktischen Erwägungen leiten. In der Ebene können die
Standpunkte /,//... weiter voneinander sein, in stärker ansteigendem
Terrain wird man sie näher wählen müssen, um nicht zu lange Latten
verwenden zu müssen. 80 — 100 m Distanz in ebenem Terrain -wird
dabei das passendste sein, weil man auf diese Entfernung noch die
Teilstriche der Latte im Fernrohr genügend genau bei nicht allzu
großen Instrumenten lesen kann. Beim Nivellieren aus der Mitte wird
man die Zielweiten möglichst nahe gleich nehmen, d. h. den
Standpunkt des Nivellierinstruments in der Mitte zwischen den
beiden Lattenaufstellungen. Auf große Entfernungen wird auch auf
60. die Krümmung der Erdoberfläche Rücksicht genommen werden
müssen, da die Horizontale im Aufstellungspunkte immer eine
Tangente zur Oberfläche ist (Fig. 144).* Ist die Länge der zu
messenden Strecke mehrere Kilometer, so bedarf es dazu sehr vieler
V^:!. Nr. 109.
61. 124 Nivellierinstrumente 54. Aufstellungen, im Durchschnitt
10 — 15 pro ]:w sollen sich daher die Fehler nicht summieren, so
muß auf jede einzelne Beobachtung die äußerste Sorgfalt verwendet
werden, wie dies beim Präzisionsnivellement der Fall ist. Zu den
einfachsten Instrumenten zur Herstellung der horizontalen ^'isur
gehört die Kanalw^age. Man visiert dabei direkt über den
Flüssigkeitsspiegel bei mn (Fig. 61) hinüber. Ebenso einfach ist die
von Hart mann angegebene Pendel wage, welche neuerer Zeit
wieder als Pendelnivellierinstrument konstruiert wird; Fig. 145 zeigt
ein solches von Starke und Kammer er, wobei die Visur durch ein um
eine horizontale Achse drehbares kleines Fernrohr hergestellt wird,
das durch ein daran befestigtes Pendel horizontal irestellt wird. ....
Somewhat more accurate is the levelling diopter fitted with a level vial (Fig. 146). By means of a screw and counter-spring the upper part is adjusted until the rectified vial reads centred; the sight across the diopters must then be horizontal. To this class also belong the pocket levelling diopters constructed by Starke and Kämmerer, which carry, in place of the simple diopters, a telescope without magnification with a cross-hair in the middle, for sighting both forwards and backwards (Fig. 147). For accurate observations, however, the combination of a moderately magnifying telescope with a good level vial is required. These instruments, designated by preference as "levelling instruments",* may be arranged in very different ways; the essential point is that the theoretical requirement made of the instrument, namely a horizontal line of sight in every position of the telescope when the vial reads centred (only vials with the zero point in the middle are used here), can be attained easily and reliably by rectification of the instrument.
(Fig. 148.)

The parts present in every instrument, namely stand, tribrach, clamp and slow motion, will not be considered further; after what has gone before they are understood without more ado.

* This method of levelling, too, is already quite old. Liesganig had an instrument with a fixed telescope and a fixed (adjustable) vial, turning about a horizontal axis and movable in height along a graduated arc (Joh. Tob. Mayer, „Gründlicher und ausführlicher Unterricht zur praktischen Geometrie", Göttingen 1792, II. Teil, S. 574). An equally simple instrument was Sisson's (ibid. S. 579). Mayer also describes the less accurate pendulum instruments of Picard (S. 586) and Huygens (S. 589).
The division into "small" and "large" levelling instruments is theoretically quite immaterial, even if in practice it is not indifferent for the attainable accuracy. The levelling instruments may be divided into the following main groups, whose types are each represented by one of the figures:

a) telescope and vial firmly connected with the tribrach (Fig. 148);
b) telescope and vial firmly connected with one another, but rotatable together about the telescope axis (hence mostly with reversion vials), and removable from the bearings for reversal (Fig. 149);*
c) vial firmly connected with the tribrach, telescope removable and reversible (Fig. 150);
d) telescope reversible, vial to be set on, mostly provided with a horizontal circle and a vertical graduated arc (Fig. 151).

For groups b), c) and d), however, it is an essential requirement that the rings by which the telescope rests in the bearings, and upon which the vial is set, have exactly equal diameters, so that the inclination of the telescope remains unchanged when it is reversed. Besides the main vial, which serves for levelling, many of these instruments carry, for rapid rough levelling, a second vial fixed to the stand, often only a circular vial, sometimes two crossed vials, whereby the repeated turning about the vertical axis for making that axis vertical is saved.
* In the instrument figured, the whole upper part is moreover rotatable within moderate limits about a horizontal axis by a screw and counter-spring (tilting telescope), but the amount of the tilt cannot be read on the screw (it is not a micrometer screw).
(Fig. 150.)

A special method of levelling was introduced by Stampfer with his measuring screw, the so-called tangent screw (or drum tilting screw).*
If the horizontal sight strikes the ground, or passes above the rod, the telescope is raised or lowered by the tangent screw until a suitable part of the rod comes into the field of view, and the reading can then be taken there; the reduction of such a reading is sketched after the footnote below. (Figs. 152 and 153.)
* Theoretische und praktische Anleitung zum Nivellieren von S. Stampfer, 10. Auflage von Doležal. The introduction of several horizontal threads into the telescope goes back to Ferd. Ritter v. Mittis, „Das Nivellement mit einem neu erfundenen Instrument", Wien 1831. In its original form he mounted three vials side by side instead of one, inclined to one another at fixed angles; the telescope was to be set horizontal with the middle one, to a fixed angle of elevation with one, and to a fixed angle of depression with the other (l. c. S. 8). These he then replaced by three horizontal threads in the telescope, of which the middle one gave the horizontal sight and the two others fixed angles of elevation and depression.
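The text does not spell out the reduction of such a screw-assisted reading at this point, but it follows from the function of the screw: if one drum division tilts the telescope by a known small angle, then a tilt of n divisions at sight distance d displaces the sight line on the rod by about n times that angle times d, which must be removed from the reading again. A sketch under this assumption; the screw value and all numbers are hypothetical:

```python
import math

# Hypothetical screw constant: one drum division tilts the telescope
# by 20 arc-seconds (an assumed value, not from the text).
ALPHA_PER_DIV = math.radians(20.0 / 3600.0)

def horizontal_equivalent_cm(rod_reading_cm, divisions, sight_m):
    # The telescope was raised by `divisions` drum divisions to bring the
    # rod into view; the horizontal sight would have struck the rod lower
    # by divisions * alpha * distance, so subtract that shift again.
    shift_cm = divisions * ALPHA_PER_DIV * sight_m * 100.0
    return rod_reading_cm - shift_cm

# Hypothetical example: reading 152.0 cm after raising the telescope
# by 30 divisions at a sight distance of 60 m.
print(round(horizontal_equivalent_cm(152.0, 30, 60.0), 1))  # 134.5
```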
The screw can be attached to any of the instrument types listed above. Fig. 152 shows an instrument of type a (fixed telescope and fixed vial), Fig. 153 one of type b (telescope with reversion vial, reversible together), Fig. 154 an instrument of group c
(vial fixed, telescope reversible), and Fig. 155 one of group d (telescope reversible, vial to be set on). The axis about which the telescope tilts lies, in the Starke and in the Breithaupt instruments, at the end opposite the screw; in the Hildebrand and Ott instruments it is centred over the vertical axis of rotation. (Fig. 156.)

Many of these instruments are provided with horizontal circles, vertical circles or sectors; the vertical circles in particular are in many cases welcome additions, since levelling instruments without vertical arcs are of only limited use, namely on terrain of little slope. On steeply inclined terrain the stations would have to lie very close together. Naturally any universal instrument can also be used as a levelling instrument if a vial is set upon the telescope (cf. Figs. 129 and 132). Special universal levelling instruments are built as well, in which horizontal and vertical circle are finely divided and read by verniers, the chief weight, however, being laid on levelling and distance measurement; for this reason it is not the telescope that is connected with the vertical circle, but a . . .