High Performance Computing
in Remote Sensing
Chapman & Hall/CRC
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2008 by Taylor & Francis Group, LLC
Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1
International Standard Book Number-13: 978-1-58488-662-4 (Hardcover)
This book contains information obtained from authentic and highly regarded sources. Reprinted
material is quoted with permission, and sources are indicated. A wide variety of references are
listed. Reasonable efforts have been made to publish reliable data and information, but the author
and the publisher cannot assume responsibility for the validity of all materials or for the
consequences of their use.
No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any
electronic, mechanical, or other means, now known or hereafter invented, including photocopying,
microfilming, and recording, or in any information storage or retrieval system, without written
permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.
copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC)
222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that
provides licenses and registration for a variety of users. For organizations that have been granted a
photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and
are used only for identification and explanation without intent to infringe.
Library of Congress Cataloging-in-Publication Data
High performance computing in remote sensing / Antonio J. Plaza and Chein-I
Chang, editors.
p. cm. -- (Chapman & Hall/CRC computer & information science series)
Includes bibliographical references and index.
ISBN 978-1-58488-662-4 (alk. paper)
1. High performance computing. 2. Remote sensing. I. Plaza, Antonio J. II.
Chang, Chein-I. III. Title. IV. Series.
QA76.88.H5277 2007
621.36’78028543--dc22 2007020736
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Contents
1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Antonio Plaza and Chein-I Chang
2 High-Performance Computer Architectures for Remote Sensing
Data Analysis: Overview and Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
Antonio Plaza and Chein-I Chang
3 Computer Architectures for Multimedia and Video Analysis. . . . . . . . . . . . . . . .43
Edmundo Sáez, José González-Mora, Nicolás Guil, José I. Benavides,
and Emilio L. Zapata
4 Parallel Implementation of the ORASIS Algorithm for Remote
Sensing Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
David Gillis and Jeffrey H. Bowles
5 Parallel Implementation of the Recursive Approximation of an
Unsupervised Hierarchical Segmentation Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 97
James C. Tilton
6 Computing for Analysis and Modeling of Hyperspectral Imagery . . . . . . . . . . 109
Gregory P. Asner, Robert S. Haxo, and David E. Knapp
7 Parallel Implementation of Morphological Neural Networks for
Hyperspectral Image Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
Javier Plaza, Rosa Pérez, Antonio Plaza, Pablo Martínez, and David Valencia
8 Parallel Wildland Fire Monitoring and Tracking Using Remotely
Sensed Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
David Valencia, Pablo Martínez, Antonio Plaza, and Javier Plaza
9 An Introduction to Grids for Remote Sensing Applications. . . . . . . . . . . . . . . .183
Craig A. Lee
10 Remote Sensing Grids: Architecture and Implementation . . . . . . . . . . . . . . . . 203
Samuel D. Gasster, Craig A. Lee, and James W. Palko
11 Open Grid Services for Envisat and Earth Observation Applications . . . . . . 237
Luigi Fusco, Roberto Cossu, and Christian Retscher
12 Design and Implementation of a Grid Computing Environment
for Remote Sensing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 281
Massimo Cafaro, Italo Epicoco, Gianvito Quarta, Sandro Fiore,
and Giovanni Aloisio
13 A Solutionware for Hyperspectral Image Processing and Analysis . . . . . . . . 309
Miguel Vélez-Reyes, Wilson Rivera-Gallego, and Luis O. Jiménez-Rodríguez
14 AVIRIS and Related 21st Century Imaging Spectrometers for Earth
and Space Science. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .335
Robert O. Green
15 Remote Sensing and High-Performance Reconfigurable
Computing Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 359
Esam El-Araby, Mohamed Taher, Tarek El-Ghazawi, and Jacqueline Le Moigne
16 FPGA Design for Real-Time Implementation of Constrained Energy
Minimization for Hyperspectral Target Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . 379
Jianwei Wang and Chein-I Chang
17 Real-Time Online Processing of Hyperspectral Imagery
for Target Detection and Discrimination . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 397
Qian Du
18 Real-Time Onboard Hyperspectral Image Processing Using
Programmable Graphics Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 411
Javier Setoain, Manuel Prieto, Christian Tenllado, and Francisco Tirado
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 453
List of Tables
2.1 Specifications of Heterogeneous Computing Nodes in a Fully
Heterogeneous Network of Distributed Workstations . . . . . . . . . . . . . . . . . . . 28
2.2 Capacity of Communication Links (Time in Milliseconds to Transfer
a 1-MB Message) in a Fully Heterogeneous Network . . . . . . . . . . . . . . . . . . . 28
2.3 SAD-Based Spectral Similarity Scores Between Endmembers Extracted
by Different Parallel Implementations of the PPI Algorithm and the
USGS Reference Signatures Collected in the WTC Area . . . . . . . . . . . . . . . . 30
2.4 Processing Times (Seconds) Achieved by the Cluster-Based and
Heterogeneous Parallel Implementations of PPI on Thunderhead. . . . . . . . .32
2.5 Execution Times (Measured in Seconds) of the Heterogeneous PPI and
its Homogeneous Version on the Four Considered NOWs
(16 Processors) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
2.6 Communication (com), Sequential Computation (Ap), and Parallel
Computation (Bp) Times Obtained on the Four Considered NOWs . . . . . . . 33
2.7 Load Balancing Rates for the Heterogeneous PPI and its Homogeneous
Version on the Four Considered NOWs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .34
2.8 Summary of Resource Utilization for the FPGA-Based Implementation
of the PPI Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
3.1 Clock Cycles and Speedups for the Sequential/Optimized Kernel
Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
3.2 Percentage of Computation Time Spent by the Temporal Video
Segmentation Algorithm in Different Tasks, Before and After
the Optimization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
4.1 Summary of HPC Platforms. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .90
4.2 Summary of Data Cubes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
4.3 Timing Results for the Longview Machine (in seconds) . . . . . . . . . . . . . . . . . 91
4.4 Timing Results for the Huinalu Machine (in seconds) . . . . . . . . . . . . . . . . . . . 91
4.5 Timing Results for the Shelton Machine (in seconds) . . . . . . . . . . . . . . . . . . . 92
4.6 Statistical Tests used for Compression. X = Original Spectrum, Y =
Reconstructed Spectrum, n = Number of Bands . . . . . . . . . . . . . . . . . . . . . . . 92
4.7 Compression Results for the Longview Machine . . . . . . . . . . . . . . . . . . . . . . . 92
4.8 Compression Results for the Huinalu Machine . . . . . . . . . . . . . . . . . . . . . . . . . 93
4.9 Compression Results for the Shelton Machine . . . . . . . . . . . . . . . . . . . . . . . . . 93
5.1 The Number of CPUs Required for a Naive Parallelization of RHSEG
with one CPU per 4096 Pixel Data Section for Various
Dimensionalities. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .103
5.2 RHSEG Processing Time Results for a Six-Band Landsat Thematic
Mapper Image with 2048 Columns and 2048 Rows. (For the 1 CPU
case, the processing time shown is for the values of Li and Lo that
produce the smallest processing time.) Processing Time Shown as
hours:minutes:seconds . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
5.3 The Percentage of Time Task 0 of the Parallel Implementation
of RHSEG Spent in the Activities of Set-up, Computation, Data Transfer,
Waiting for Other Tasks, and Other Activities for the 2048 × 2048
Landsat TM Test Scene. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .105
6.1 The Basic Characteristics of Several Well-Known Imaging
Spectrometers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
7.1 Classification Accuracies (in percentage) Achieved by the Parallel Neural
Classifier for the AVIRIS Salinas Scene Using Morphological Features,
PCT-Based Features, and the Original Spectral Information (processing
times in a single Thunderhead node are given in the parentheses) . . . . . . . 143
7.2 Execution Times (in seconds) and Performance Ratios Reported for the
Homogeneous Algorithms Versus the Heterogeneous Ones on the Two
Considered Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
7.3 Communication (COM), Sequential Computation (SEQ), and Parallel
Computation (PAR) Times for the Homogeneous Algorithms Versus
the Heterogeneous Ones on the Two Considered Networks After
Processing the AVIRIS Salinas Hyperspectral Image . . . . . . . . . . . . . . . . . . 145
7.4 Load-Balancing Rates for the Parallel Algorithms on the Homogeneous
and Heterogeneous Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146
8.1 Classification Accuracy Obtained by the Proposed Parallel AMC
Algorithm for Each Ground-Truth Class in the AVIRIS Indian
Pines Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
8.2 Execution Times (seconds) of the HeteroMPI-Based Parallel Version
of AMC Algorithm on the Different Heterogeneous Processors
of the HCL Cluster. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
8.3 Minima (tmin) and Maxima (tmax) Processor Run-Times (in seconds)
and Load Imbalance (R) of the HeteroMPI-Based Implementation
of AMC Algorithm on the HCL Cluster . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
13.1 Function Replace Using BLAS Routines . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
13.2 Algorithms Benchmarks Before and After BLAS Library
Replacements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 323
13.3 Results of C-Means Method with Euclidean Distance. . . . . . . . . . . . . . . . . .330
13.4 Results Principal Components Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 331
14.1 Spectral, Radiometric, Spatial, Temporal, and Uniformity
Specifications of the AVIRIS Instrument.. . . . . . . . . . . . . . . . . . . . . . . . . . . . .340
14.2 Diversity of Scientific Research and Applications Pursued
with AVIRIS.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .346
14.3 Spectral, Radiometric, Spatial, Temporal, and Uniformity
Specifications of the M³ Imaging Spectrometer for the Moon . . . . . . . . . 350
14.4 Earth Imaging Spectrometer Products for Terrestrial and Aquatic
Ecosystems Understanding. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 353
14.5 Nominal Characteristics of an Earth Imaging Spectrometer for
Terrestrial and Aquatic Ecosystems’ Health, Composition and
Productivity at a Seasonal Time Scale. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 354
14.6 Earth Ecosystem Imaging Spectrometer Data Volumes. . . . . . . . . . . . . . . . . 356
17.1 Classification Accuracy ND Using the CLDA Algorithm (in all cases,
the number of false alarm pixels NF = 0). . . . . . . . . . . . . . . . . . . . 404
18.1 GPGPU Class Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .431
18.2 GPUStream Class Methods . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
18.3 GPUKernel Class Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .433
18.4 Experimental GPU Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 442
18.5 Experimental CPU Features. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .442
18.6 SAM-Based Spectral Similarity Scores Among USGS Mineral Spectra
and Endmembers Produced by Different Algorithms. . . . . . . . . . . . . . . . . . . 445
18.7 SAM-Based Spectral Similarity Scores Among USGS Mineral Spectra
and Endmembers Produced by the AMEE Algorithm (implemented
using both SAM and SID, and considering different numbers
of algorithm iterations). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 445
18.8 Execution Time (in milliseconds) for the CPU Implementations . . . . . . . . 445
18.9 Execution Time (in milliseconds) for the GPU Implementations . . . . . . . . 446
List of Figures
2.1 The concept of hyperspectral imaging in remote sensing.. . . . . . . . . . . . . . . .11
2.2 Thunderhead Beowulf cluster (512 processors) at NASA’s Goddard Space
Flight Center in Maryland. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.3 Toy example illustrating the performance of the PPI algorithm
in a 2-dimensional space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.4 Domain decomposition adopted in the parallel implementation
of the PPI algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.5 Systolic array design for the proposed FPGA implementation
of the PPI algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.6 AVIRIS hyperspectral image collected by NASA’s Jet Propulsion
Laboratory over lower Manhattan on Sept. 16, 2001 (left),
and location of thermal hot spots in the fires observed in the World
Trade Center area (right). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
2.7 Scalability of the cluster-based and heterogeneous parallel
implementations of PPI on Thunderhead. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
3.1 Typical SIMD operation using multimedia extensions. . . . . . . . . . . . . . . . . . . 47
3.2 GPU pipeline. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
3.3 Temporal video segmentation algorithm to optimize.. . . . . . . . . . . . . . . . . . . .51
3.4 Implementation of the horizontal 1-D convolution. . . . . . . . . . . . . . . . . . . . . . 54
3.5 Extension to the arctangent calculation to operate in the
interval [0°, 360°]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
3.6 Tracking process: A warping function is applied to the template, T (x),
to match its occurrence in an image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
3.7 Steps of the tracking algorithm.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60
3.8 Efficient computation of a Hessian matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
3.9 Time employed by a tracking iteration in several platforms. . . . . . . . . . . . . . 63
3.10 Time comparison for several stages.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64
4.1 Data from AP Hill. (a) Single band of the original data. (b) (c) Fraction
planes from ORASIS processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
4.2 The number of exemplars as a function of the error angle for various
hyperspectral images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
4.3 Three-dimensional histogram of the exemplars projected onto the first
two reference vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
4.4 Abundance coefficient histograms. (a) The histogram of a background
endmember. (b) The histogram of a target endmember.. . . . . . . . . . . . . . . . . .85
4.5 HYDICE data from Forest Radiance. (a) A single band of the raw data.
(b) Overlay with the results of the OAD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
5.1 Graphical representation of the recursive task distribution for RHSEG
on a parallel computer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
6.1 Imaging spectrometers collect hyperspectral data such that each pixel
contains a spectral radiance signature comprised of contiguous, narrow
wavelength bands spanning a broad wavelength range (e.g., 400–
2500 nm). Top shows a typical hyperspectral image cube; each pixel
contains a detailed hyperspectral signature such as those shown
at the bottom.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111
6.2 Change in data volume with flight distance for two common imaging
spectrometers, AVIRIS and CASI-1500, flown at 2 m and 10 m GIFOV. . 114
6.3 Major processing steps used to derive calibrated, geo-referenced surface
reflectance spectra for subsequent analysis of hyperspectral images.. . . . .115
6.4 A per-pixel, Monte Carlo mixture analysis model used for automated,
large-scale quantification of fractional material cover in terrestrial
ecosystems [18, 21]. A spectral endmember database of (A) live, green
vegetation; (B) non-photosynthetic vegetation; and (C) bare soil is
used to iteratively decompose each pixel spectrum in an image into
constituent surface cover fractions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
6.5 Example forward canopy radiative transfer model simulations of how a
plant canopy hyperspectral reflectance signature changes with increasing
quantities of dead leaf material. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
6.6 Schematic of a typical canopy radiative transfer inverse modeling
environment, with Monte Carlo simulation over a set of ecologically-constrained
variables. This example mentions AVIRIS as the hyperspectral image
data source. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
6.7 Schematic of a small HPC cluster showing 20 compute nodes, front-end
server, InfiniBand high-speed/low-latency network, Gigabit Ethernet
management network, and storage in parallel and conventional file
systems on SCSI RAID-5 drive arrays. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
6.8 Effect of storage RAID-5 subsystem on independent simultaneous
calculations, with storage systems accessed via NFS. With multiple simultaneous
accesses, the SCSI array outperforms the SATA array. . . . . . . . . . . . . . . . . . 124
6.9 Performance comparison of multiple computer node access to a data
storage system using the traditional NFS or newer IBRIX parallel
file system.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124
7.1 Communication framework for the morphological feature extraction
algorithm.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136
7.2 MLP neural network topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
7.3 AVIRIS scene of Salinas Valley, California (a), and land-cover ground
classes (b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
7.4 Scalability of parallel morphological feature extraction algorithms
on Thunderhead.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .147
7.5 Scalability of parallel neural classifier on Thunderhead. . . . . . . . . . . . . . . . . 148
8.1 MERIS hyperspectral image of the fires that took place in the summer
of 2005 in Spain and Portugal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
8.2 Classification of fire spread models.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .155
8.3 Concept of parallelizable spatial/spectral pattern (PSSP) and proposed
partitioning scheme.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .164
8.4 Problem of accessing pixels outside the image domain. . . . . . . . . . . . . . . . . 164
8.5 Additional communications required when the SE is located around
a pixel in the border of a PSSP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
8.6 Border-handling strategy relative to pixels in the border of a PSSP.. . . . . .165
8.7 Partitioning options for the considered neural algorithm. . . . . . . . . . . . . . . . 168
8.8 Functional diagram of the system design model. . . . . . . . . . . . . . . . . . . . . . . 170
8.9 (Left) Spectral band at 587 nm wavelength of an AVIRIS scene
comprising agricultural and forest features at Indian Pines, Indiana. (Right)
Ground-truth map with 30 mutually exclusive land-cover classes. . . . . . . . 174
8.10 Speedups achieved by the parallel AMC algorithm using a limited
number of processors on Thunderhead.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176
8.11 Speedups achieved by the parallel SOM-based classification algorithm
(using endmembers produced by the first three steps of the AMC
algorithm) using a large number of processors on Thunderhead. . . . . . . . . 177
8.12 Speedups achieved by the parallel ATGP algorithm using a limited
number of processors on Thunderhead.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178
9.1 The service architecture concept. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
9.2 The OGSA framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192
10.1 High level architectural view of a remote sensing system. . . . . . . . . . . . . . . 206
10.2 WFCS Grid services architecture.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218
10.3 LEAD software architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
11.1 The BEST Toolbox. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243
11.2 The BEAM toolbox with VISAT visualization.. . . . . . . . . . . . . . . . . . . . . . . .244
11.3 The BEAT toolbox with VISAN visualization. . . . . . . . . . . . . . . . . . . . . . . . . 246
11.4 The architecture model for EO Grid on-Demand Services. . . . . . . . . . . . . . 257
11.5 Web portal Ozone Profile Result Visualization.. . . . . . . . . . . . . . . . . . . . . . . .258
11.6 MERIS mosaic at 1.3 km resolution obtained in G-POD from the
entire May to December 2004 data set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260
11.7 The ASAR G-POD environment. The user browses for and selects
products of interest (upper left panel). The system automatically
identifies the subtasks required by the application and distributes
them to the different computing elements in the grid (upper right
panel). Results are presented to the user (lower panel).. . . . . . . . . . . . . . . . .262
11.8 Three arcsec (∼ 90 m) pixel size orthorectified Envisat ASAR mosaic
obtained using G-POD. Political boundaries have been manually
overlaid. The full resolution result can be seen at [34]. . . . . . . . . . . . . . . . . . 263
11.9 ASAR mosaic obtained using G-POD considering GM products
acquired from March 8 to 14, 2006 (400 m resolution). . . . . . . . . . . . . . . . . 264
11.10 Global monthly mean near surface temperature profile for June 2005,
time layer 0 h. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267
11.11 YAGOP ozone profiles compared with corresponding operational
GOMOS products for two selected stars (first and second panels from
the left). Distribution and comparison of coincidences for GOMOS
and MIPAS profiles for Sep. 2002 are shown in the first and second
panels from the right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269
11.12 Zonal mean of Na profiles of 14–31 August 2003.. . . . . . . . . . . . . . . . . . . .270
12.1 Multi-tier grid system architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286
12.2 System architecture.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .290
12.3 Distributed data management architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . 295
12.4 Workflow management stack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297
12.5 a) Task sequence showing interferogram processing; b) task
sequence mapped on grid resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303
13.1 Levels in solving a computing problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311
13.2 HIAT graphical user interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312
13.3 Data processing schema for hyperspectral image analysis toolbox. . . . . 313
13.4 Spectrum of a signal sampled at (a) its Nyquist frequency, and
(b) twice its Nyquist frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314
13.5 (a) The spectrum of grass and (b) its power spectral density.. . . . . . . . . . .314
13.6 Sample spectra before and after lowpass filtering. . . . . . . . . . . . . . . . . . . . . 315
13.7 HYPERION data of Enrique Reef (band 8 at 427 nm) before
(a) and after (b) oversampling filtering. . . . . . . . . . . . . . . . . . . . . 315
13.8 Principal component algorithm block components. . . . . . . . . . . . . . . . . . . . 324
13.9 Performance results for Euclidean distance classifier. . . . . . . . . . . . . . . . . . 325
13.10 Performance results for maximum likelihood.. . . . . . . . . . . . . . . . . . . . . . . .326
13.11 Grid-HSI architecture.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .327
13.12 Grid-HSI portal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
13.13 Graphical output at node 04. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330
14.1 A limited set of rock forming minerals and vegetation reflectance
spectra measured from 400 to 2500 nm in the solar reflected light
spectrum. NPV corresponds to non-photosynthetic vegetation.
A wide diversity of composition related absorption and scattering
signatures in nature are illustrated by these materials.. . . . . . . . . . . . . . . . .337
14.2 The spectral signatures of a limited set of mineral and vegetation
spectra convolved to the six solar reflected range band passes of the
multispectral LandSat Thematic Mapper. When mixtures and
illumination factors are included, the six multispectral measurements
are insufficient to unambiguously identify the wide range of possible
materials present on the surface of the Earth. . . . . . . . . . . . . . . . . . . . . . . . . 338
14.3 AVIRIS spectral range and sampling with a transmittance spectrum
of the atmosphere and the six LandSat TM multi-spectral bands
in the solar reflected spectrum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
14.4 AVIRIS image cube representation of a data set measured of the
southern San Francisco Bay, California. The top panel shows the
spatial content for a 20 m spatial resolution data set. The vertical
panels depict the spectral measurement from 380 to 2510 nm that is
recorded for every spatial element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340
14.5 The 2006 AVIRIS signal-to-noise ratio and corresponding benchmark
reference radiance.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341
14.6 Depiction of the spectral cross-track and spectral-IFOV uniformity
for a uniform imaging spectrometer. The grids represent the detectors,
the gray scale represents the wavelengths, and the dots represent the
centers of the IFOVs. This is a uniform imaging spectrometer where
each cross-track spectrum has the same calibration and all the
wavelengths measured for a given spectrum are from the same IFOV. . . 342
14.7 Vegetation reflectance spectrum showing the molecular absorption
and constituent scattering signatures present across the solar reflected
spectral range.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .343
14.8 Modeled upwelling radiance incident at the AVIRIS aperture from a
well-illuminated vegetation canopy. This spectrum includes the
combined effects of the solar irradiance, two-way transmittance,
and scattering of the atmosphere, as well as the vegetation canopy
reflectance.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .343
14.9 AVIRIS measured signal for the upwelling radiance from a vegetation
covered surface. The instrument optical and electronic characteristics
dominate the recorded signal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 344
14.10 Spectrally and radiometrically calibrated spectrum for the vegetation
canopy target. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
14.11 Atmospherically corrected spectrum from AVIRIS measurement of a
vegetation canopy. The 1400 and 1900 nm spectral regions are ignored
due to the strong absorption of atmospheric water vapor. In this
reflectance spectrum the molecular absorption and constituent scattering
properties of the canopy are clearly expressed and available for
spectroscopic analysis. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 345
14.12 Spectra of samples returned by the NASA Apollo missions showing
the composition-based spectral diversity of surface materials on the
Moon. This spectral diversity provides the basis for pursuing the
objectives of the M3 mission with an imaging spectrometer. Upon
arrival on Earth the ultradry lunar samples have absorbed water,
resulting in the absorption feature beyond 2700 nm. These spectra
were measured by the NASA RELAB facility at Brown University. . . . . 349
14.13 Mechanical drawing of the M3 imaging spectrometer that has been
built for mapping the composition of the Moon via spectroscopy.
The M3 instrument has the following mass, power, and volume
characteristics: 8 kg, 15 Watts, 25 × 18 × 12 cm. The M3 instrument
was built in 24 months.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .350
14.14 Depiction of the spectral, spatial, and pushbroom imaging approach
of the M3 high uniformity and high precision imaging spectrometer. . . . 351
14.15 Benchmark reference radiance for an Earth imaging spectrometer
focused on terrestrial and aquatic ecosystem objectives. . . . . . . . . . . . . . . 355
14.16 The signal-to-noise ratio requirements for each of the benchmark
reference radiances. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355
15.1 Onboard processing example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 361
15.2 Trade-off between flexibility and performance [5]. . . . . . . . . . . . . . . . . . . . 362
15.3 FPGA structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 362
15.4 CLB structure.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .363
15.5 Early reconfigurable architecture [7]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 364
15.6 Automatic wavelet spectral dimension reduction algorithm. . . . . . . . . . . . 366
15.7 Top hierarchical architecture of the automatic wavelet dimension
reduction algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
15.8 DWT IDWT pipeline implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 367
15.9 Correlator module. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 368
15.10 Speedup of wavelet-based hyperspectral dimension reduction
algorithm.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .369
15.11 Generalized classification rules for Pass-One. . . . . . . . . . . . . . . . . . . . . . . . . 370
15.12 Top-level architecture of the ACCA algorithm.. . . . . . . . . . . . . . . . . . . . . . .370
15.13 ACCA normalization module architecture: exact normalization
operations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371
15.14 ACCA normalization module architecture: approximated
normalization operations.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .372
15.15 ACCA Pass-One architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 373
15.16 Detection accuracy (based on the absolute error): image bands
and cloud masks (software/reference mask, hardware masks). . . . . . . . . . 374
15.17 Detection accuracy (based on the absolute error): approximate
normalization and quantization errors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 374
15.18 ACCA hardware-to-software performance. . . . . . . . . . . . . . . . . . . . . . . . . . . 375
16.1 Systolic array for QR-decomposition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 383
16.2 Systolic array for backsubstitution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 385
16.3 Boundary cell (left) and internal cell (right). . . . . . . . . . . . . . . . . . . . . . . . . . 385
16.4 Shift-adder DA architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 387
16.5 Computation of ck.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .387
16.6 FIR filter for abundance estimation.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .388
16.7 Block diagram of the auto-correlator. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 389
16.8 QR-decomposition by CORDIC circuit. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 390
16.9 Systolic array for backsubstitution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 391
16.10 Boundary cell (left) and internal cell (right) implementations. . . . . . . . . . 391
16.11 Real-time updated triangular matrix via CORDIC circuit. . . . . . . . . . . . . . 392
16.12 Real-time updated weights. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 393
16.13 Real-time detection results. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 394
16.14 Block diagrams of Methods 1 (left) and 2 (right) to be used
for FPGA designs of CEM. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 395
17.1 (a) A HYDICE image scene that contains 30 panels. (b) Spatial
locations of 30 panels provided by ground truth. (c) Spectra
from P1 to P10.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .403
18.1 A hyperspectral image as a cube made up of spatially arranged
pixel vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 413
18.2 3D graphics pipeline.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .415
18.3 Fourth generation of GPUs block diagram. These GPUs incorporate
fully programmable vertex and fragment processors. . . . . . . . . . . . . . . . . . 416
18.4 NVIDIA G70 (a) and ATI-RADEON R520 (b) block diagrams. . . . . . . . 418
18.5 Block diagram of NVIDIA's GeForce 8800 GTX. . . . . . . . . . . . . . . . . . . . . 419
18.6 Stream graphs of the GPU-based (a) filter-bank (FBS) and (b) lifting
(LS) implementations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 426
18.7 2D texture layout.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .427
18.8 Mapping one lifting step onto the GPU. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 429
18.9 Implementation of the GPGPU Framework. . . . . . . . . . . . . . . . . . . . . . . . . . 430
18.10 We allocate a stream S of dimension 8 × 4 and initialize its content
to a sequence of numbers (from 0 to 31). Then, we ask for four
substreams dividing the original stream into four quadrants (A, B, C,
and D). Finally, we add quadrants A and D and store the result in B,
and we subtract D from A and store the result in C. . . . . . . . . . . . . . . . . . . 432
18.11 Mapping of a hyperspectral image onto the GPU memory. . . . . . . . . . . . . 437
18.12 Flowchart of the proposed stream-based GPU implementation
of the AMEE algorithm using SAM as pointwise distance. . . . . . . . . . . . . 438
18.13 Kernels involved in the computation of the inner products/norms
and definition of a region of influence (RI) for a given pixel defined
by an SE with t = 3. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
18.14 Computation of the partial inner products for distance 5: each
pixel-vector with its south-east nearest neighbor. Notice that the
elements in the GPUStreams are four-element vectors, i.e., A, B, C . . .
contain four floating-point values each, and vector operations
are element-wise. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 440
18.15 Flowchart of the proposed stream-based GPU implementation
of the AMEE algorithm using SID as pointwise distance. . . . . . . . . . . . . . 441
18.16 Subscene of the full AVIRIS hyperspectral data cube collected
over the Cuprite mining district in Nevada. . . . . . . . . . . . . . . . . . . . . . . . . . . 443
18.17 Ground USGS spectra for ten minerals of interest in the AVIRIS
Cuprite scene. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 444
18.18 Performance of the CPU- and GPU-based AMEE (SAM)
implementations for different image sizes (Imax = 5). . . . . . . . . . . . . . . . . 446
18.19 Performance of the CPU- and GPU-based AMEE (SID)
implementations for different image sizes (Imax = 5). . . . . . . . . . . . . . . . . 447
18.20 Speedups of the GPU-based AMEE implementations for different
numbers of iterations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 447
18.21 Speedup comparison between the two different implementations of
AMEE (SID and SAM) in the different execution platforms
(Imax = 5).. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .448
18.22 Speedup comparison between the two generations of CPUs,
P4 Northwood (2003) and Prescott (2005), and the two generations
of GPUs, 5950 Ultra (2003) and 7800 GTX (2005). . . . . . . . . . . . . . . . . . . 448
Acknowledgments
The editors would like to thank all the contributors for all their help and support
during the production of this book, and for sharing their vast knowledge with readers.
In particular, Profs. Javier Plaza and David Valencia are gratefully acknowledged for
their help in the preparation of some of the chapters of this text. Last but not least,
the editors gratefully thank their families for their support on this project.
High Performance Computing in Remote Sensing Antonio J. Plaza
About the Editors
Antonio Plaza received the M.S. degree and the Ph.D. degree in computer engi-
neering from the University of Extremadura, Spain, where he was awarded the out-
standing Ph.D. dissertation award in 2002. Dr. Plaza is an associate professor with
the Department of Technology of Computers and Communications at University of
Extremadura. He has authored or co-authored more than 140 scientific publications
including journal papers, book chapters, and peer-reviewed conference proceedings.
His main research interests comprise remote sensing, image and signal processing,
and efficient implementations of large-scale scientific problems on high-performance
computing architectures, including commodity Beowulf clusters, heterogeneous net-
works of workstations, grid computing facilities, and hardware-based computer archi-
tectures such as field-programmable gate arrays (FPGAs) and graphics processing
units (GPUs).
He has held visiting researcher positions at several institutions, including
the Computational and Information Sciences and Technology Office (CISTO) at
NASA/Goddard Space Flight Center, Greenbelt, Maryland; the Remote Sensing,
Signal and Image Processing Laboratory (RSSIPL) at the Department of Computer
Science and Electrical Engineering, University of Maryland, Baltimore County; the
Microsystems Laboratory at the Department of Electrical & Computer Engineering,
University of Maryland, College Park; and the AVIRIS group at NASA/Jet Propulsion
Laboratory, Pasadena, California.
Dr. Plaza is a senior member of the IEEE. He is active in the IEEE Computer
Society and the IEEE Geoscience and Remote Sensing Society, and has served as
proposal evaluator for the European Commission, the European Space Agency, and
the Spanish Ministry of Science and Education. He is also a frequent manuscript re-
viewer for more than 15 highly-cited journals (including several IEEE Transactions)
in the areas of computer architecture, parallel/distributed systems, remote sensing,
neural networks, image/signal processing, aerospace and engineering systems, and
pattern analysis. He is also a member of the program committee of several inter-
national conferences, such as the European Conference on Parallel and Distributed
Computing; the International Workshop on Algorithms, Models and Tools for Parallel
Computing on Heterogeneous Networks; the Euromicro Workshop on Parallel and
Distributed Image Processing, Video Processing, and Multimedia; the Workshop on
Grid Computing Applications Development; the IEEE GRSS/ASPRS Joint Workshop
on Remote Sensing and Data Fusion over Urban Areas; and the IEEE International
Geoscience and Remote Sensing Symposium.
Dr. Plaza is the project coordinator of HYPER-I-NET (Hyperspectral Imag-
ing Network), a four-year Marie Curie Research Training Network (see
http://www.hyperinet.eu) designed to build an interdisciplinary European research
community focused on remotely sensed hyperspectral imaging. He is guest ed-
itor (with Prof. Chein-I Chang) of a special issue on high performance com-
puting for hyperspectral imaging for the International Journal of High Per-
formance Computing Applications. He is associate editor for the IEEE Trans-
actions on Geoscience and Remote Sensing journal in the areas of Hyperspec-
tral Image Analysis and Signal Processing. Additional information is available at
http://www.umbc.edu/rssipl/people/aplaza.
Chein-I Chang received his B.S. degree from Soochow University, Taipei, Taiwan;
the M.S. degree from the Institute of Mathematics at National Tsing Hua University,
Hsinchu, Taiwan; and the M.A. degree from the State University of New York at
Stony Brook, all in mathematics. He also received his M.S., and M.S.E.E. degrees
from the University of Illinois at Urbana-Champaign and the Ph.D. degree in electrical
engineering from the University of Maryland, College Park.
Dr. Chang has been with the University of Maryland, Baltimore County (UMBC)
since 1987 and is currently professor in the Department of Computer Science and
Electrical Engineering. He was a visiting research specialist in the Institute of Infor-
mation Engineering at the National Cheng Kung University, Tainan, Taiwan, from
1994 to 1995. He received an NRC (National Research Council) senior research
associateship award from 2002 to 2003 sponsored by the U.S. Army Soldier and Bio-
logical Chemical Command, Edgewood Chemical and Biological Center, Aberdeen
Proving Ground, Maryland. Additionally, Dr. Chang was a distinguished lecturer
chair at the National Chung Hsing University sponsored by the Ministry of Education
in Taiwan from 2005 to 2006 and is currently holding a chair professorship of disaster
reduction technology from 2006 to 2009 with the Environmental Restoration and
Disaster Reduction Research Center, National Chung Hsing University, Taichung,
Taiwan, ROC.
He has three patents and several pending on hyperspectral image processing. He is
on the editorial board of the Journal of High Speed Networks and was an associate
editor in the area of hyperspectral signal processing for IEEE Transactions on Geo-
science and Remote Sensing. He was the guest editor of a special issue of the Journal
of High Speed Networks on telemedicine and applications and co-guest edited three
special issues on Broadband Multimedia Sensor Networks in Healthcare Applications
for the Journal of High Speed Networks, 2007 and on high-performance comput-
ing for hyperspectral imaging for the International Journal of High Performance
Computing Applications.
Dr. Chang is the author of Hyperspectral Imaging: Techniques for Spectral Detec-
tion and Classification published by Kluwer Academic Publishers in 2003 and the
editor of two books, Recent Advances in Hyperspectral Signal and Image Processing,
Trivandrum, Kerala: Research Signpost, Transworld Research Network, India, 2006,
and Hyperspectral Data Exploitation: Theory and Applications, John Wiley & Sons,
2007. Dr. Chang is currently working on his second book, Hyperspectral Imaging:
Algorithm Design and Analysis, John Wiley & Sons, due in 2007. He is a Fellow of the
SPIE and a member of Phi Kappa Phi and Eta Kappa Nu. Additional information is
available at http://www.umbc.edu/rssipl.
Contributors
Giovanni Aloisio, Euromediterranean Center for Climate Change & University of Salento, Italy
Gregory P. Asner, Carnegie Institution of Washington, Stanford, California
José I. Benavides, University of Córdoba, Spain
Jeffrey H. Bowles, Naval Research Laboratory, Washington, DC
Massimo Cafaro, Euromediterranean Center for Climate Change & University of Salento, Italy
Chein-I Chang, University of Maryland Baltimore County, Baltimore, Maryland
Roberto Cossu, European Space Agency, ESA-Esrin, Italy
Qian Du, Mississippi State University, Mississippi
Esam El-Araby, George Washington University, Washington, DC
Tarek El-Ghazawi, George Washington University, Washington, DC
Italo Epicoco, Euromediterranean Center for Climate Change & University of Salento, Italy
Sandro Fiore, Euromediterranean Center for Climate Change & University of Salento, Italy
Luigi Fusco, European Space Agency, ESA-Esrin, Italy
Samuel D. Gasster, The Aerospace Corporation, El Segundo, California
David Gillis, Naval Research Laboratory, Washington, DC
José González-Mora, University of Málaga, Spain
Robert O. Green, Jet Propulsion Laboratory & California Institute of Technology, California
Nicolás Guil, University of Málaga, Spain
Robert S. Haxo, Carnegie Institution of Washington, Stanford, California
Luis O. Jiménez-Rodríguez, University of Puerto Rico at Mayaguez, Puerto Rico
David E. Knapp, Carnegie Institution of Washington, Stanford, California
Craig A. Lee, The Aerospace Corporation, El Segundo, California
Jacqueline Le Moigne, NASA’s Goddard Space Flight Center, Greenbelt, Maryland
Pablo Martínez, University of Extremadura, Cáceres, Spain
James W. Palko, The Aerospace Corporation, El Segundo, California
Rosa Pérez, University of Extremadura, Cáceres, Spain
Antonio Plaza, University of Extremadura, Cáceres, Spain
Javier Plaza, University of Extremadura, Cáceres, Spain
Manuel Prieto, Complutense University of Madrid, Spain
Gianvito Quarta, Institute of Atmospheric Sciences and Climate, CNR, Bologna, Italy
Christian Retscher, European Space Agency, ESA-Esrin, Italy
Wilson Rivera-Gallego, University of Puerto Rico at Mayaguez, Puerto Rico
Edmundo Sáez, University of Córdoba, Spain
Javier Setoain, Complutense University of Madrid, Spain
Mohamed Taher, George Washington University, Washington, DC
Christian Tenllado, Complutense University of Madrid, Spain
James C. Tilton, NASA Goddard Space Flight Center, Greenbelt, Maryland
Francisco Tirado, Complutense University of Madrid, Spain
David Valencia, University of Extremadura, Cáceres, Spain
Miguel Vélez-Reyes, University of Puerto Rico at Mayaguez, Puerto Rico
Jianwei Wang, University of Maryland Baltimore County, Baltimore, Maryland
Emilio L. Zapata, University of Málaga, Spain
Chapter 1
Introduction
Antonio Plaza
University of Extremadura, Spain
Chein-I Chang
University of Maryland, Baltimore County
Contents
1.1 Preface ...................................................................1
1.2 Contents ..................................................................2
1.2.1 Organization of Chapters in This Volume ...........................3
1.2.2 Brief Description of Chapters in This Volume .......................3
1.3 Distinguishing Features of the Book .......................................6
1.4 Summary .................................................................7
1.1 Preface
Advances in sensor technology are revolutionizing the way remotely sensed data are
collected, managed, and analyzed. The incorporation of latest-generation sensors to
airborne and satellite platforms is currently producing a nearly continual stream of
high-dimensional data, and this explosion in the amount of collected information
has rapidly created new processing challenges. In particular, many current and future
applications of remote sensing in Earth science, space science, and soon in exploration
science require real- or near-real-time processing capabilities. Relevant examples in-
clude environmental studies, military applications, tracking and monitoring of hazards
such as wild land and forest fires, oil spills, and other types of chemical/biological
contamination.
To address the computational requirements introduced by many time-critical appli-
cations, several research efforts have been recently directed towards the incorporation
of high-performance computing (HPC) models in remote sensing missions. HPC is
an integrated computing environment for solving large-scale computational demand-
ing problems such as those involved in many remote sensing studies. With the aim
of providing a cross-disciplinary forum that will foster collaboration and develop-
ment in those areas, this book has been designed to serve as one of the first available
references specifically focused on describing recent advances in the field of HPC
applied to remote sensing problems. As a result, the content of the book has been
organized to appeal to both remote sensing scientists and computer engineers alike.
On the one hand, remote sensing scientists will benefit by becoming aware of the
extremely high computational requirements introduced by most application areas in
Earth and space observation. On the other hand, computer engineers will benefit from
the wide range of parallel processing strategies discussed in the book. However, the
material presented in this book will also be of great interest to researchers and prac-
titioners working in many other scientific and engineering applications, in particular,
those related with the development of systems and techniques for collecting, storing,
and analyzing extremely high-dimensional collections of data.
1.2 Contents
The contents of this book have been organized as follows. First, an introductory part
addressing some key concepts in the field of computing applied to remote sensing,
along with an extensive review of available and future developments in this area, is
provided. This part also covers other application areas not necessarily related to remote
sensing, such as multimedia and video processing, chemical/biological standoff de-
tection, and medical imaging. Then, three main application-oriented parts follow, each
of which illustrates a specific parallel computing paradigm. In particular, the HPC-
based techniques comprised in these parts include multiprocessor (cluster-based) sys-
tems, large-scale and heterogeneous networks of computers, and specialized hardware
architectures for remotely sensed data analysis and interpretation. Combined, the four
parts deliver an excellent snapshot of the state-of-the-art in those areas, and offer a
thoughtful perspective of the potential and emerging challenges of applying HPC
paradigms to remote sensing problems:
r Part I: General. This part, comprising Chapters 2 and 3, develops basic concepts
about HPC in remote sensing and provides a detailed review of existing and
planned HPC systems in this area. Other areas that share common aspects with
remote sensing data processing are also covered, including multimedia and
video processing.
r Part II: Multiprocessor systems. This part, comprising Chapters 4–8, includes
a compendium of algorithms and techniques for HPC-based remote sensing
data analysis using multiprocessor systems such as clusters and networks of
computers, including massively parallel facilities.
r Part III: Large-scale and heterogeneous distributed computing. The focus of
this part, which comprises Chapters 9–13, is on parallel techniques for re-
mote sensing data analysis using large-scale distributed platforms, with special
emphasis on grid computing environments and fully heterogeneous networks
of workstations.
r Part IV: Specialized architectures. The last part of this book comprises Chapters
14–18 and is devoted to systems and architectures for at-sensor and real-time
collection and analysis of remote sensing data using specialized hardware and
embedded systems. The part also includes specific aspects about current trends
in remote sensing sensor design and operation.
1.2.1 Organization of Chapters in This Volume
The first part of the book (General) consists of two chapters that include basic concepts
that will appeal to both students and practitioners who have not had a formal education
in remote sensing and/or computer engineering. This part will also be of interest to
remote sensing and general-purpose HPC specialists, who can greatly benefit from
the exhaustive review of techniques and discussion on future data processing per-
spectives in this area. Also, general-purpose specialists will become aware of other
application areas of HPC (e.g., multimedia and video processing) in which the design
of techniques and parallel processing strategies to deal with extremely large com-
putational requirements follows a similar pattern as that used to deal with remotely
sensed data sets. On the other hand, the three application-oriented parts that fol-
low (Multiprocessor systems, Large-scale and heterogeneous distributed computing,
and Specialized architectures) are each composed of five selected chapters that will
appeal to the vast scientific community devoted to designing and developing efficient
techniques for remote sensing data analysis. This includes commercial companies
working on intelligence and defense applications, Earth and space administrations
such as NASA or the European Space Agency (ESA) – both of them represented in
the book via several contributions – and universities with programs in remote sens-
ing, Earth and space sciences, computer architecture, and computer engineering. Also,
the growing interest in some emerging areas of remote sensing such as hyperspectral
imaging (which will receive special attention in this volume) should make this book
a timely reference.
1.2.2 Brief Description of Chapters in This Volume
We provide below a description of the chapters contributed by different authors.
It should be noted that all the techniques and methods presented in those chapters
are well consolidated and cover almost entirely the spectrum of current and future
data processing techniques in remote sensing applications. We specifically avoided
repetition of topics in order to complete a timely compilation of realistic and suc-
cessful efforts in the field. Each chapter was contributed by a reputed expert or a
group of experts in the designated specialty areas. A brief outline of each contribution
follows:
r Chapter 1. Introduction. The present chapter provides an introduction to the
book and describes the main innovative contributions covered by this volume
and its individual chapters.
r Chapter 2. High-Performance Computer Architectures for Remote Sens-
ing Data Analysis: Overview and Case Study. This chapter provides a re-
view of the state-of-the-art in the design of HPC systems for remote sensing.
The chapter also includes an application case study in which the pixel purity
index (PPI), a well-known remote sensing data processing algorithm included
in Kodak’s Research Systems ENVI (a very popular remote sensing-oriented
commercial software package), is implemented using different types of HPC
platforms such as a massively parallel multiprocessor, a heterogeneous network
of distributed computers, and a specialized hardware architecture.
r Chapter 3. Computer Architectures for Multimedia and Video Analysis.
This chapter focuses on multimedia processing as another example application
with a high demanding computational power and similar aspects as those in-
volved in many remote sensing problems. In particular, the chapter discusses
new computer architectures such as graphic processing units (GPUs) and mul-
timedia extensions in the context of real applications.
r Chapter 4. Parallel Implementation of the ORASIS Algorithm for Re-
mote Sensing Data Analysis. This chapter presents a parallel version of ORA-
SIS (the Optical Real-Time Adaptive Spectral Identification System) that was
recently developed as part of a U.S. Department of Defense program. The
ORASIS system comprises a series of algorithms developed at the Naval Re-
search Laboratory for the analysis of remotely sensed hyperspectral image
data.
r Chapter 5. Parallel Implementation of the Recursive Approximation of an
Unsupervised Hierarchical Segmentation Algorithm. This chapter describes
a parallel implementation of a recursive approximation of the hierarchical image
segmentation algorithm developed at NASA. The chapter also demonstrates the
computational efficiency of the algorithm using remotely sensed data collected
by the Landsat Thematic Mapper (a multispectral instrument).
• Chapter 6. Computing for Analysis and Modeling of Hyperspectral Imagery. In this chapter, several analytical methods employed in vegetation and ecosystem studies using remote sensing instruments are developed. The chapter also summarizes the most common HPC-based approaches used to meet these analytical demands, and provides examples with computing clusters. Finally, the chapter discusses the emerging use of other HPC-based techniques for the above purpose, including data processing onboard aircraft and spacecraft platforms, and distributed Internet computing.
• Chapter 7. Parallel Implementation of Morphological Neural Networks for Hyperspectral Image Analysis. This chapter explores in detail the utilization of parallel neural network architectures for solving remote sensing problems. The chapter further develops a new morphological/neural parallel algorithm for the analysis of remotely sensed data, which is implemented using both massively parallel (homogeneous) clusters and fully heterogeneous networks of distributed workstations.
• Chapter 8. Parallel Wildland Fire Monitoring and Tracking Using Remotely Sensed Data. This chapter focuses on the use of HPC-based remote sensing techniques to address natural disasters, emphasizing the (near) real-time computational requirements introduced by time-critical applications. The chapter also develops several innovative algorithms, including morphological and target detection approaches, to monitor and track one particular type of hazard, wildland fires, using remotely sensed data.
• Chapter 9. An Introduction to Grids for Remote Sensing Applications. This chapter introduces grid computing technology in preparation for the chapters to follow. The chapter first reviews previous approaches to distributed computing and then introduces current Web and grid service standards, along with some end-user tools for building grid applications. This is followed by a survey of current grid infrastructure and science projects relevant to remote sensing.
• Chapter 10. Remote Sensing Grids: Architecture and Implementation. This chapter applies the grid computing paradigm to the domain of Earth remote sensing systems by combining the concepts of remote sensing or sensor Web systems with those of grid computing. In order to provide a specific example and context for discussing remote sensing grids, the design of a weather forecasting and climate science grid is presented and discussed.
• Chapter 11. Open Grid Services for Envisat and Earth Observation Applications. This chapter first provides an overview of some ESA Earth Observation missions, and of the software tools that ESA currently provides for facilitating data handling and analysis. Then, the chapter describes a dedicated Earth-science grid infrastructure, developed by the European Space Research Institute (ESRIN) at ESA in the context of DATAGRID, the first large European Commission-funded grid project. Different examples of remote sensing applications integrated in this system are also given.
• Chapter 12. Design and Implementation of a Grid Computing Environment for Remote Sensing. This chapter develops a new dynamic Earth Observation system specifically tuned to manage huge quantities of data coming from space missions. The system combines recent grid computing technologies, concepts related to problem solving environments, and other HPC-based technologies. A comparison of the system to other classic approaches is also provided.
• Chapter 13. A Solutionware for Hyperspectral Image Processing and Analysis. This chapter describes the concept of an integrated process for hyperspectral image analysis, based on a solutionware (i.e., a set of catalogued tools that allow for the rapid construction of data processing algorithms and applications). Parallel processing implementations of some of the tools on the Itanium architecture are presented, and a prototype version of a hyperspectral image processing toolbox over the grid, called Grid-HSI, is also described.
• Chapter 14. AVIRIS and Related 21st Century Imaging Spectrometers for Earth and Space Science. This chapter uses the NASA Jet Propulsion Laboratory's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), one of the most advanced hyperspectral remote sensing instruments currently available, to review the critical characteristics of an imaging spectrometer instrument and the corresponding characteristics of the measured spectra. The wide range of scientific research as well as application objectives pursued with AVIRIS is briefly presented. Roles for the application of high-performance computing methods to AVIRIS data sets are discussed.
• Chapter 15. Remote Sensing and High-Performance Reconfigurable Computing Systems. This chapter discusses the role of reconfigurable computing using field programmable gate arrays (FPGAs) for onboard processing of remotely sensed data. The chapter also describes several case studies of remote sensing applications in which reconfigurable computing has played an important role, including cloud detection and dimensionality reduction of hyperspectral imagery.
• Chapter 16. FPGA Design for Real-Time Implementation of Constrained Energy Minimization for Hyperspectral Target Detection. This chapter describes an FPGA implementation of the constrained energy minimization (CEM) algorithm, which has been widely used for hyperspectral detection and classification. The main feature of the FPGA design provided in this chapter is the use of the COordinate Rotation DIgital Computer (CORDIC) algorithm to convert a Givens rotation of a vector into a set of shift-add operations, which allows for efficient implementation in specialized hardware architectures.
• Chapter 17. Real-Time Online Processing of Hyperspectral Imagery for Target Detection and Discrimination. This chapter describes a real-time online processing technique for fast and accurate exploitation of hyperspectral imagery. The system has been specifically developed to satisfy the extremely high computational requirements of many practical remote sensing applications, such as target detection and discrimination, in which an immediate data analysis result is required for (near) real-time decision-making.
• Chapter 18. Real-Time Onboard Hyperspectral Image Processing Using Programmable Graphics Hardware. Finally, this chapter addresses the emerging use of graphics processing units (GPUs) for onboard remote sensing data processing. Driven by the ever-growing demands of the video-game industry, GPUs have evolved from expensive application-specific units into highly parallel programmable systems. In this chapter, GPU-based implementations of remote sensing data processing algorithms are presented and discussed.
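Of the hardware techniques surveyed above, the CORDIC scheme highlighted in the Chapter 16 summary is compact enough to sketch: a rotation by an angle theta is decomposed into a fixed sequence of micro-rotations by the angles atan(2^-i), so that each step needs only a shift and an add, plus one final scaling by the known CORDIC gain. The floating-point rendering below is only for intuition; an actual FPGA design would use fixed-point arithmetic with true shift operations.

```python
import math

def cordic_rotate(x, y, theta, iterations=24):
    """Rotate (x, y) by theta using CORDIC micro-rotations.
    Multiplications by 2**-i stand in for hardware shifts; the
    per-step angles atan(2**-i) are precomputed constants."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # Each micro-rotation lengthens the vector; accumulate the gain
    # once so it can be divided out at the end.
    gain = 1.0
    for i in range(iterations):
        gain *= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = theta
    for i in range(iterations):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x / gain, y / gain

# Rotating (1, 0) by 30 degrees should give (cos 30°, sin 30°).
cx, cy = cordic_rotate(1.0, 0.0, math.radians(30))
```

With 24 iterations the angular resolution is about atan(2^-23), so the result matches the trigonometric values to roughly seven decimal digits.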
1.3 Distinguishing Features of the Book
Before concluding this introduction, the editors would like to stress several distin-
guishing features of this book. First and foremost, this book is the first volume that is
entirely devoted to providing a perspective on the state-of-the-art of HPC techniques
in the context of remote sensing problems. In order to address the need for a con-
solidated reference in this area, the editors have made significant efforts to invite
highly recognized experts in academia, institutions, and commercial companies to
write relevant chapters focused on their vast expertise in this area, and share their
knowledge with the community. Second, this book provides a compilation of several
well-established techniques covering most aspects of the current spectrum of process-
ing techniques in remote sensing, including supervised and unsupervised techniques
for data acquisition, calibration, correction, classification, segmentation, model inver-
sion and visualization. Further, many of the application areas addressed in this book
are of great social relevance and impact, including chemical/biological standoff de-
tection, forest fire monitoring and tracking, etc. Finally, the variety and heterogeneity
of parallel computing techniques and architectures discussed in the book are not to
be found in any other similar textbook.
1.4 Summary
The wide range of computer architectures (including homogeneous and heteroge-
neous clusters and groups of clusters, large-scale distributed platforms and grid com-
puting environments, specialized architectures based on reconfigurable computing,
and commodity graphic hardware) and data processing techniques covered by this
book exemplifies a subject area that has drawn together an eclectic collection of par-
ticipants, but increasingly this is the nature of many endeavors at the cutting edge of
science and technology.
In this regard, one of the main purposes of this book is to reflect the increasing
sophistication of a field that is rapidly maturing at the intersection of many different
disciplines, including not only remote sensing or computer architecture/engineering,
but also signal and image processing, optics, electronics, and aerospace engineering.
The ultimate goal of this book is to provide readers with a peek at the cutting-edge
research in the use of HPC-based techniques and practices in the context of remote
sensing applications. The editors hope that this volume will serve as a useful reference
for practitioners and engineers working in the above and related areas. Last but not
least, the editors gratefully thank all the contributors for sharing their vast expertise
with the readers. Without their outstanding contributions, this book could not have
been completed.
Chapter 2
High-Performance Computer Architectures
for Remote Sensing Data Analysis: Overview
and Case Study
Antonio Plaza,
University of Extremadura, Spain
Chein-I Chang,
University of Maryland, Baltimore
Contents
2.1 Introduction ............................................................ 10
2.2 Related Work ........................................................... 13
2.2.1 Evolution of Cluster Computing in Remote Sensing ............... 14
2.2.2 Heterogeneous Computing in Remote Sensing .................... 15
2.2.3 Specialized Hardware for Onboard Data Processing ............... 16
2.3 Case Study: Pixel Purity Index (PPI) Algorithm .......................... 17
2.3.1 Algorithm Description ........................................... 17
2.3.2 Parallel Implementations ......................................... 20
2.3.2.1 Cluster-Based Implementation of the PPI Algorithm ..... 20
2.3.2.2 Heterogeneous Implementation of the PPI Algorithm .... 22
2.3.2.3 FPGA-Based Implementation of the PPI Algorithm ...... 23
2.4 Experimental Results ................................................... 27
2.4.1 High-Performance Computer Architectures ....................... 27
2.4.2 Hyperspectral Data .............................................. 29
2.4.3 Performance Evaluation .......................................... 31
2.4.4 Discussion ....................................................... 35
2.5 Conclusions and Future Research ........................................ 36
2.6 Acknowledgments ...................................................... 37
References ................................................................... 38
Advances in sensor technology are revolutionizing the way remotely sensed data are
collected, managed, and analyzed. In particular, many current and future applications
of remote sensing in earth science, space science, and soon in exploration science
require real- or near-real-time processing capabilities. In recent years, several efforts
have been directed towards the incorporation of high-performance computing (HPC)
models to remote sensing missions. In this chapter, an overview of recent efforts in
the design of HPC systems for remote sensing is provided. The chapter also includes
an application case study in which the pixel purity index (PPI), a well-known remote
sensing data processing algorithm, is implemented in different types of HPC platforms
such as a massively parallel multiprocessor, a heterogeneous network of distributed
computers, and a specialized field programmable gate array (FPGA) hardware ar-
chitecture. Analytical and experimental results are presented in the context of a real
application, using hyperspectral data collected by NASA’s Jet Propulsion Laboratory
over the World Trade Center area in New York City, right after the terrorist attacks of
September 11th. Combined, these parts deliver an excellent snapshot of the state-of-
the-art of HPC in remote sensing, and offer a thoughtful perspective of the potential
and emerging challenges of adapting HPC paradigms to remote sensing problems.
2.1 Introduction
The development of computationally efficient techniques for transforming the mas-
sive amount of remote sensing data into scientific understanding is critical for
space-based earth science and planetary exploration [1]. The wealth of informa-
tion provided by latest-generation remote sensing instruments has opened ground-
breaking perspectives in many applications, including environmental modeling and
assessment for Earth-based and atmospheric studies, risk/hazard prevention and re-
sponse including wild land fire tracking, biological threat detection, monitoring of
oil spills and other types of chemical contamination, target detection for military and
defense/security purposes, urban planning and management studies, etc. [2]. Most of
the above-mentioned applications require analysis algorithms able to provide a re-
sponse in real- or near-real-time. This is quite an ambitious goal in most current remote
sensing missions, mainly because the price paid for the rich information available from latest-generation sensors is the enormous amounts of data that they generate [3, 4, 5].
A relevant example of a remote sensing application in which the use of HPC
technologies such as parallel and distributed computing are highly desirable is hy-
perspectral imaging [6], in which an image spectrometer collects hundreds or even
thousands of measurements (at multiple wavelength channels) for the same area
on the surface of the Earth (see Figure 2.1). The scenes provided by such sen-
sors are often called “data cubes,” to denote the extremely high dimensionality
of the data. For instance, the NASA Jet Propulsion Laboratory’s Airborne Visi-
ble Infra-Red Imaging Spectrometer (AVIRIS) [7] is now able to record the vis-
ible and near-infrared spectrum (wavelength region from 0.4 to 2.5 micrometers)
of the reflected light of an area 2 to 12 kilometers wide and several kilometers
long using 224 spectral bands (see Figure 3.8). The resulting cube is a stack of
images in which each pixel (vector) has an associated spectral signature or ‘fin-
gerprint’ that uniquely characterizes the underlying objects, and the resulting data
volume typically comprises several GBs per flight. Although hyperspectral imaging
Figure 2.1 The concept of hyperspectral imaging in remote sensing. [The figure shows a pure pixel (water) and two mixed pixels (vegetation + soil, soil + rocks), each with its associated reflectance spectrum plotted against wavelength over the 300–2400 nm range.]
is a good example of the computational requirements introduced by remote sensing
applications, there are many other remote sensing areas in which high-dimensional
data sets are also produced (several of them are covered in detail in this book). How-
ever, the extremely high computational requirements already introduced by hyper-
spectral imaging applications (and the fact that these systems will continue increasing
their spatial and spectral resolutions in the near future) make them an excellent case
study to illustrate the need for HPC systems in remote sensing and will be used in
this chapter for demonstration purposes.
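A quick back-of-the-envelope calculation makes these data volumes concrete. The figures used below (614 samples per line and 2 bytes per stored measurement, nominal AVIRIS values) are illustrative assumptions rather than numbers taken from the text:

```python
# Rough size estimate for an uncompressed hyperspectral data cube.
# Samples-per-line and bytes-per-sample are nominal AVIRIS values,
# used here only for illustration.
def cube_size_bytes(lines, samples, bands, bytes_per_sample=2):
    """Uncompressed size of a lines x samples x bands cube."""
    return lines * samples * bands * bytes_per_sample

# A single 512-line AVIRIS scene: 512 x 614 samples x 224 bands.
scene = cube_size_bytes(512, 614, 224)
print(f"One scene: {scene / 2**20:.0f} MiB")  # ~134 MiB

# A flight line several kilometers long spans dozens of such
# scenes, i.e., several gigabytes of raw data per flight.
flight = 40 * scene
print(f"40-scene flight line: {flight / 2**30:.1f} GiB")
```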
Specifically, the utilization of HPC systems in hyperspectral imaging applications
has become more and more widespread in recent years. The idea developed by the
computer science community of using COTS (commercial off-the-shelf) computer
equipment, clustered together to work as a computational “team,” is a very attractive
solution [8]. This strategy is often referred to as Beowulf-class cluster computing [9]
and has already offered access to greatly increased computational power, but at a low
cost (commensurate with falling commercial PC costs) in a number of remote sensing
applications [10, 11, 12, 13, 14, 15]. In theory, the combination of commercial forces
driving down cost and positive hardware trends (e.g., CPU peak power doubling
every 18–24 months, storage capacity doubling every 12–18 months, and networking
bandwidth doubling every 9–12 months) offers supercomputing performance that can now be applied to a much wider range of remote sensing problems.
Although most parallel techniques and systems for image information processing
employed by NASA and other institutions during the last decade have chiefly been
homogeneous in nature (i.e., they are made up of identical processing units, thus sim-
plifying the design of parallel solutions adapted to those systems), a recent trend in the
design of HPC systems for data-intensive problems is to utilize highly heterogeneous
computing resources [16]. This heterogeneity is seldom planned, arising mainly as
a result of technology evolution over time and computer market sales and trends.
In this regard, networks of heterogeneous COTS resources can realize a very high
level of aggregate performance in remote sensing applications [17], and the pervasive
availability of these resources has resulted in the current notion of grid computing
[18], which endeavors to make such distributed computing platforms easy to utilize
in different application domains, much like the World Wide Web has made it easy to
distribute Web content. It is expected that grid-based HPC systems will soon represent
the tool of choice for the scientific community devoted to very high-dimensional data
analysis in remote sensing and other fields.
Finally, although remote sensing data processing algorithms generally map quite
nicely to parallel systems made up of commodity CPUs, these systems are generally
expensive and difficult to adapt to onboard remote sensing data processing scenarios,
in which low-weight and low-power integrated components are essential to reduce
mission payload and obtain analysis results in real time, i.e., at the same time as the
data are collected by the sensor. In this regard, an exciting new development in the
field of commodity computing is the emergence of programmable hardware devices
such as field programmable gate arrays (FPGAs) [19, 20, 21] and graphic processing
units (GPUs) [22], which can bridge the gap towards onboard and real-time analysis
of remote sensing data. FPGAs are now fully reconfigurable, which allows one to
adaptively select a data processing algorithm (out of a pool of available ones) to be
applied onboard the sensor from a control station on Earth.
On the other hand, the emergence of GPUs (driven by the ever-growing demands
of the video-game industry) has allowed these systems to evolve from expensive
application-specific units into highly parallel and programmable commodity compo-
nents. Current GPUs can deliver a peak performance in the order of 360 Gigaflops
(Gflops), more than seven times the performance of the fastest x86 dual-core processor (around 50 Gflops). The ever-growing computational demands of remote sensing
applications can fully benefit from compact hardware components and take advan-
tage of the small size and relatively low cost of these units as compared to clusters or
networks of computers.
The main purpose of this chapter is to provide an overview of different HPC
paradigms in the context of remote sensing applications. The chapter is organized as
follows:
• Section 2.2 describes relevant previous efforts in the field, such as the evolution of cluster computing in remote sensing applications, the emergence of distributed networks of computers as a cost-effective means to solve remote sensing problems, and the exploitation of specialized hardware architectures in remote sensing missions.
• Section 2.3 provides an application case study: the well-known Pixel Purity Index (PPI) algorithm [23], which has been widely used to analyze hyperspectral images and is available in commercial software. The algorithm is first briefly described and several issues encountered in its implementation are discussed. Then, we provide HPC implementations of the algorithm, including a cluster-based parallel version, a variation of this version specifically tuned for heterogeneous computing environments, and an FPGA-based implementation.
• Section 2.4 also provides an experimental comparison of the proposed implementations of PPI using several high-performance computing architectures. Specifically, we use Thunderhead, a massively parallel Beowulf cluster at NASA's Goddard Space Flight Center, a heterogeneous network of distributed workstations, and a Xilinx Virtex-II FPGA device. The considered application is based on the analysis of hyperspectral data collected by the AVIRIS instrument over the World Trade Center area in New York City right after the terrorist attacks of September 11th.
• Finally, Section 2.5 concludes with some remarks and plausible future research lines.
2.2 Related Work
This section first provides an overview of the evolution of cluster computing architec-
tures in the context of remote sensing applications, from the initial developments in
Beowulf systems at NASA centers to the current systems being employed for remote
sensing data processing. Then, an overview of recent advances in heterogeneous
computing systems is given. These systems can be applied for the sake of distributed
processing of remotely sensed data sets. The section concludes with an overview of
hardware-based implementations for onboard processing of remote sensing data sets.
2.2.1 Evolution of Cluster Computing in Remote Sensing
Beowulf clusters were originally developed with the purpose of creating a cost-
effective parallel computing system able to satisfy specific computational require-
ments in the earth and space sciences communities. Initially, the need for large
amounts of computation was identified for processing multispectral imagery with
only a few bands [24]. As sensor instruments incorporated hyperspectral capabilities,
it was soon recognized that computer mainframes and mini-computers could not provide sufficient power for processing these kinds of data. The Linux operating system offered the potential of high reliability, owing to its large community of developers and users. Later it became apparent that a large number of developers could be a disadvantage as well as an advantage.
In 1994, a team was put together at NASA’s Goddard Space Flight Center (GSFC)
to build a cluster consisting only of commodity hardware (PCs) running Linux, which
resulted in the first Beowulf cluster [25]. It consisted of 16 100-MHz 486DX4-based
PCs connected with two hub-based Ethernet networks tied together with channel
bonding software so that the two networks acted like one network running at twice
the speed. The next year Beowulf-II, a 16-PC cluster based on 100-MHz Pentium
PCs, was built and performed about 3 times faster, but also demonstrated a much
higher reliability. In 1996, a Pentium-Pro cluster at Caltech demonstrated a sustained
Gigaflop on a remote sensing-based application. This was the first time a commodity
cluster had shown high-performance potential.
Up until 1997, Beowulf clusters were in essence engineering prototypes, that is,
they were built by those who were going to use them. However, in 1997, a project was
started at GSFC to build a commodity cluster that was intended to be used by those
who had not built it, the HIVE (highly parallel virtual environment) project. The idea
was to have workstations distributed among different locations and a large number
of compute nodes (the compute core) concentrated in one area. The workstations
would share the computer core as though it were a part of each. Although the original
HIVE only had one workstation, many users were able to access it from their own
workstations over the Internet. The HIVE was also the first commodity cluster to
exceed a sustained 10 Gigaflop on a remote sensing algorithm.
Currently, an evolution of the HIVE is being used at GSFC for remote sensing data
processing calculations. The system, called Thunderhead (see Figure 2.2), is a 512-processor homogeneous Beowulf cluster composed of 256 dual 2.4-GHz Intel Xeon nodes, each with 1 GB of memory and 80 GB of local disk space. The total peak performance of the system is 2457.6 Gflops. Along with the 512-processor computer core, Thunderhead has several nodes attached to the core with 2-Gbps optical fibre Myrinet.
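The quoted peak figure is easy to reconstruct. Assuming two floating-point operations per cycle per processor (a typical figure for Xeon processors of that generation, not stated in the text), 512 processors at 2.4 GHz give exactly the stated number:

```python
# Reconstruct Thunderhead's quoted peak performance.
# The 2 flops/cycle figure is an assumption typical of the Xeon
# processors of that era, not a number taken from the chapter.
processors = 512
clock_ghz = 2.4
flops_per_cycle = 2
peak_gflops = processors * clock_ghz * flops_per_cycle
print(peak_gflops)  # 2457.6
```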
NASA is currently supporting additional massively parallel clusters for remote
sensing applications, such as the Columbia supercomputer at NASA Ames Research
Figure 2.2 Thunderhead Beowulf cluster (512 processors) at NASA’s Goddard
Space Flight Center in Maryland.
Center, a 10,240-CPU SGI Altix supercluster, with Intel Itanium 2 processors,
20 terabytes of total memory, and heterogeneous interconnects including an InfiniBand network and 10-gigabit Ethernet. This system is listed as #8 in the November 2006 version
of the Top500 list of supercomputer sites available online at http://www.top500.org.
Among many other examples of HPC systems included in the list that are currently
being exploited for remote sensing and earth science-based applications, we cite
three relevant systems for illustrative purposes. The first one is MareNostrum, an IBM cluster with 10,240 2.3-GHz processors, Myrinet connectivity, and 20,480 GB of main memory, available at the Barcelona Supercomputing Center (#5 in Top500). Another example is Jaws, a Dell PowerEdge cluster with 5,200 3-GHz processors, InfiniBand connectivity, and 5,200 GB of main memory, available at the Maui High-Performance
Computing Center (MHPCC) in Hawaii (#11 in Top500). A final example is NEC’s
Earth Simulator Center, a 5,120-processor system developed by Japan’s Aerospace
Exploration Agency and the Agency for Marine-Earth Science and Technology (#14
in Top500). It is highly anticipated that many new supercomputer systems will be
specifically developed in forthcoming years to support remote sensing applications.
2.2.2 Heterogeneous Computing in Remote Sensing
In the previous subsection, we discussed the use of cluster technologies based on
multiprocessor systems as a high-performance and economically viable tool for
efficient processing of remotely sensed data sets. With the commercial availability
of networking hardware, it soon became obvious that networked groups of machines
distributed among different locations could be used together by one single parallel
remote sensing code as a distributed-memory machine [26]. Of course, such networks
were originally designed and built to connect heterogeneous sets of machines. As a
result, heterogeneous networks of workstations (NOWs) soon became a very popular
tool for distributed computing with essentially unbounded sets of machines, in which
the number and locations of machines may not be explicitly known [16], as opposed
to cluster computing, in which the number and locations of nodes are known and
relatively fixed.
An evolution of the concept of distributed computing described above resulted
in the current notion of grid computing [18], in which the number and locations of
nodes are relatively dynamic and have to be discovered at run-time. It should be noted
that this section specifically focuses on distributed computing environments without
meta-computing or grid computing, which aims at providing users access to services
distributed over wide-area networks. Several chapters of this volume provide detailed
analyses of the use of grids for remote sensing applications, and this issue is not
further discussed here.
There are currently several ongoing research efforts aimed at efficient distributed
processing of remote sensing data. Perhaps the most simple example is the use of
heterogeneous versions of data processing algorithms developed for Beowulf clus-
ters, for instance, by resorting to heterogeneous-aware variations of homogeneous
algorithms, able to capture the inherent heterogeneity of a NOW and to load-balance
the computation among the available resources [27]. This framework allows one to
easily port an existing parallel code developed for a homogeneous system to a fully
heterogeneous environment, as will be shown in the following subsection.
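The load-balancing idea behind such heterogeneous-aware variants can be sketched as a proportional partitioning of the image rows among nodes according to their measured relative speeds. The node count and speed values below are illustrative assumptions, not figures from the chapter:

```python
def partition_rows(num_rows, relative_speeds):
    """Split num_rows among workers in proportion to their relative
    speeds, handing any leftover rows to the fastest workers."""
    total = sum(relative_speeds)
    shares = [num_rows * s // total for s in relative_speeds]
    leftover = num_rows - sum(shares)
    # Distribute the integer-division remainder to the fastest nodes.
    order = sorted(range(len(relative_speeds)),
                   key=lambda i: relative_speeds[i], reverse=True)
    for i in order[:leftover]:
        shares[i] += 1
    return shares

# A hypothetical NOW: two fast nodes, one mid-range, one slow.
speeds = [4, 4, 2, 1]
print(partition_rows(1000, speeds))  # [364, 364, 182, 90]
```

In a real heterogeneous implementation the speeds would be estimated at run-time (e.g., by timing a small benchmark portion of the image on each node) rather than fixed in advance.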
Another example is the Common Component Architecture (CCA) [28], which has
been used as a plug-and-play environment for the construction of climate, weather,
and ocean applications through a set of software components that conform to stan-
dardized interfaces. Such components encapsulate much of the complexity of the
data processing algorithms inside a black box and expose only well-defined inter-
faces to other components. Among several other available efforts, another distributed
application framework specifically developed for earth science data processing is the
Java Distributed Application Framework (JDAF) [29]. Although the two main goals of
JDAF are flexibility and performance, we believe that the Java programming language
is not mature enough for high-performance computing of large amounts of data.
2.2.3 Specialized Hardware for Onboard Data Processing
Over the last few years, several research efforts have been directed towards the incor-
poration of specialized hardware for accelerating remote sensing-related calculations
aboard airborne and satellite sensor platforms. Enabling onboard data processing
introduces many advantages, such as the possibility to reduce the data down-link
bandwidth requirements at the sensor by both preprocessing data and selecting data
to be transmitted based upon predetermined content-based criteria [19, 20]. Onboard
processing also reduces the cost and the complexity of ground processing systems so
that they can be affordable to a larger community. Other remote sensing applications
that will soon greatly benefit from onboard processing are future web sensor mis-
sions as well as future Mars and planetary exploration missions, for which onboard
processing would enable autonomous decisions to be made onboard.
Despite the appealing prospects opened up by specialized data processing components, current hardware architectures, including FPGAs (on-the-fly reconfigurability) and GPUs (very high performance at low cost), still present limitations that need to be analyzed carefully before incorporating them into remote sensing missions [30]. In particular, the very fine granularity of FPGAs is still inefficient, with extreme cases in which only about 1% of the chip is available for logic while 99% is used for interconnect and configuration, usually at a penalty in speed and power. Moreover, both FPGAs and GPUs remain difficult to radiation-harden (currently available radiation-tolerant FPGA devices have two orders of magnitude fewer equivalent gates than commercial FPGAs).
2.3 Case Study: Pixel Purity Index (PPI) Algorithm
This section provides an application case study that is used in this chapter to illustrate
different approaches for efficient implementation of remote sensing data processing
algorithms. The algorithm selected as a case study is the PPI [23], one of the most
widely used algorithms in the remote sensing community. First, the serial version of
the algorithm available in commercial software is described. Then, several parallel
implementations are given.
2.3.1 Algorithm Description
The PPI algorithm was originally developed by Boardman et al. [23] and was soon incorporated into Kodak's Research Systems ENVI, one of the commercial software packages most widely used by remote sensing scientists around the world. The underlying assumption of the PPI algorithm is that the spectral signature associated with each pixel vector measures the response of multiple underlying materials at each site. For instance, it is very likely that the pixel vectors shown in Figure 3.8 would actually contain a mixture of different substances (e.g., different minerals, different types of soils, etc.). This situation, often referred to as the "mixture problem" in hyperspectral analysis terminology [31], is one of the most crucial and distinguishing properties of spectroscopic analysis.
Mixed pixels exist for one of two reasons [32]. Firstly, if the spatial resolution of
the sensor is not fine enough to separate different materials, these can jointly occupy
a single pixel, and the resulting spectral measurement will be a composite of the
individual spectra. Secondly, mixed pixels can also result when distinct materials
are combined into a homogeneous mixture. This circumstance occurs independently of the spatial resolution of the sensor. A hyperspectral image is often a combination of the two situations, where a few sites in a scene are pure materials, but many others are mixtures of materials.

Figure 2.3 Toy example illustrating the performance of the PPI algorithm in a 2-dimensional space (the figure labels three random skewers and the extreme pixels found along each).
To deal with the mixture problem in hyperspectral imaging, spectral unmixing techniques have been proposed as an inversion approach in which the measured spectrum of a mixed pixel is decomposed into a collection of spectrally pure constituent spectra, called endmembers in the literature, and a set of corresponding fractions, or abundances, that indicate the proportion of each endmember present in the mixed pixel [6].
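The linear mixture model underlying this decomposition can be illustrated with a small numeric sketch; the signatures and fractions below are made up purely for illustration:

```python
import numpy as np

# Two hypothetical endmember signatures (rows) over four spectral bands.
endmembers = np.array([
    [0.10, 0.40, 0.55, 0.60],   # e.g., a soil-like spectrum
    [0.05, 0.30, 0.08, 0.50],   # e.g., a vegetation-like spectrum
])

# Abundance fractions for one mixed pixel; they sum to one.
abundances = np.array([0.7, 0.3])

# Linear mixture model: the observed pixel spectrum is the
# abundance-weighted sum of the pure constituent spectra.
mixed_pixel = abundances @ endmembers
```

Spectral unmixing inverts this relation: given the mixed pixel spectrum and the endmember signatures, it estimates the abundances.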
The PPI algorithm is a tool to automatically search for endmembers, which are assumed to be the vertices of a convex hull [23]. The algorithm proceeds by generating a large number of random, N-dimensional unit vectors called "skewers" through the data set. Every data point is projected onto each skewer, and the data points that correspond to extrema in the direction of a skewer are identified and placed on a list (see Figure 2.3). As more skewers are generated, the list grows, and the number of times a given pixel is placed on this list is tallied. The pixels with the highest tallies are considered the final endmembers.
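The projection-and-tally procedure just described can be sketched in a few lines of NumPy; this is an illustrative sketch rather than the ENVI implementation, and the names (ppi_tally, n_skewers) are ours:

```python
import numpy as np

def ppi_tally(pixels, n_skewers=200, seed=0):
    """Project every pixel onto random unit 'skewers' and tally how
    often each pixel is an extreme projection (a PPI-style sketch).

    pixels: (num_pixels, N) array of N-dimensional spectral vectors.
    Returns an integer tally per pixel; high tallies suggest endmembers.
    """
    rng = np.random.default_rng(seed)
    num_pixels, n_bands = pixels.shape
    tallies = np.zeros(num_pixels, dtype=int)
    for _ in range(n_skewers):
        skewer = rng.standard_normal(n_bands)
        skewer /= np.linalg.norm(skewer)      # random N-dimensional unit vector
        proj = pixels @ skewer                # project every data point
        tallies[proj.argmax()] += 1           # extreme in the +skewer direction
        tallies[proj.argmin()] += 1           # extreme in the -skewer direction
    return tallies
```

On a toy 2-dimensional data set whose convex hull has three vertices, only those three pixels ever receive tallies; interior (mixed) pixels do not, which is exactly the behavior depicted in Figure 2.3.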
The inputs to the algorithm are a hyperspectral data cube F with N dimensions; a
maximum number of endmembers to be extracted, E; the number of random skewers
to be generated during the process, k; a cut-off threshold value, tv, used to select
as final endmembers only those pixels that have been selected as extreme pixels at
least tv times throughout the PPI process; and a threshold angle, ta, used to discard
redundant endmembers during the process. The output of the algorithm is a set of E final endmembers, {e_e} for e = 1, ..., E. The algorithm can be summarized by the following steps:
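The enumerated steps themselves are not preserved in this excerpt, but the roles of the cut-off threshold tv and the angle threshold ta described above can be sketched as follows (an illustrative sketch under our own naming; `tallies` is the per-pixel count of extreme selections from the skewer projections):

```python
import numpy as np

def select_endmembers(pixels, tallies, E, tv, ta):
    """Pick up to E endmembers from PPI tallies: keep only pixels selected
    as extreme at least tv times, and discard a candidate whose spectral
    angle to an already accepted endmember is below ta (i.e., redundant)."""
    order = np.argsort(tallies)[::-1]          # most-tallied pixels first
    selected = []
    for idx in order:
        if tallies[idx] < tv or len(selected) == E:
            break
        candidate = pixels[idx]
        redundant = False
        for e in selected:
            # Spectral angle between the candidate and a kept endmember.
            cos_angle = candidate @ e / (np.linalg.norm(candidate) * np.linalg.norm(e))
            if np.arccos(np.clip(cos_angle, -1.0, 1.0)) < ta:
                redundant = True
                break
        if not redundant:
            selected.append(candidate)
    return selected
```

With tv = 1 and a small ta, a pixel nearly collinear with an already selected endmember is dropped as redundant, while spectrally distinct extreme pixels are kept.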
Another Random Scribd Document
with Unrelated Content
Saguaro fruit.
Early growth is extremely slow. A 2-year-old saguaro may be
only one-quarter of an inch in diameter, and a 9-year-old plant
may be 6 inches high. These years are the most hazardous. Insect
larvae devour the tiny cactuses. Woodrats and other rodents chew
the succulent tissue for its water, and ground squirrels uproot the
young plants with their digging. In later life, the saguaro must
contend with uprooting wind and human vandalism, as well as the
earlier foes—drought, frost, erosion, and animals.
Gila woodpecker at its nesting hole.
In a century of maturity, a saguaro may produce 50 million seeds;
replacement of the parent plant would require only that one of these
germinate and grow. But in the cactus forest of the Rincon Mountain
Section, the rate of survival has been even lower, so that over the last
few decades the stand has been dwindling. What is wrong?
Many answers to this question have been advanced, but like all
interrelationships in nature, the saguaro’s role in the desert web of
life is very complex, and involves past events as well as present ones;
a partial answer to the problem may be all we can hope for. The
following reasons for the decline of the saguaros have been
suggested by researchers.
Saguaro, 1 foot high, in a rocky habitat.
A typical 4-foot saguaro.
There is some evidence to suggest that the Southwest has been
getting drier since at least the late 19th century, and while the
saguaro is adapted to extreme aridity, some of the “nurse” plants that
shelter it during infancy are not. If such shrubs as paloverdes and
mesquites dwindle, it is argued, so must the saguaro, which in its
early years depends on them for shade.
Other culprits in the saguaro problem are man himself and his
livestock. Around 1880, soon after the first railroad reached Tucson, a
cattle boom began in southern Arizona. The valleys were soon
overstocked, and cattle scoured the mountainsides in search of food.
By 1893, when drought and starvation decimated the herds, the land
had been severely overgrazed. Though the monument was
established in 1933, grazing in the Rincon Mountain Section’s main
cactus forest continued until 1958. (Elsewhere in the monument, it
still goes on.) Compounding the problem, woodcutters removed acres
of mesquite and other trees. In the center of the present Cactus
Forest Loop Drive, lime kilns devoured quantities of woody fuel.
Further upsetting the desert’s natural balance, ranchers and
Government agents poisoned coyotes and shot hawks and
other predators—in the belief that this would benefit the
owners of livestock.
This unrestrained assault on the environment had unfortunate effects
on saguaros as well as on the human economy. Overgrazing may
have resulted in an increase in kangaroo rats (which benefit from
bare ground on which to hunt seeds) and certain other rodents
adapted to an open sort of ground cover. Man’s killing of predators,
their natural enemies, further encouraged proliferation of these
rodents, which some people say are especially destructive of saguaro
seeds and young plants. Whatever the effect these rodents have on
the saguaros, the removal of ground cover intensified erosion and
reduced the chances for the seeds to germinate and grow. And
certainly the cutting of desert trees removed shade that would have
benefited young saguaros. In the Tucson Mountain Section, which is
near the northeastern edge of the Sonoran Desert, freezing
temperatures are perhaps the most important environmental factor in
saguaro mortality.
Looking toward the Santa Catalina Mountains from Cactus
Forest Drive in September 1942.
Although the causes of decline of the cactus forest lying northwest of
Tanque Verde Ridge are still something of a puzzle, several facts are
clear: the saguaro is not becoming extinct; in rocky habitats many
young saguaros are surviving, promising continued stands for the
future; in non-rocky habitats, some young saguaros are surviving,
ensuring that at least thin stands will endure in these areas.
Furthermore, since grazing was stopped here, ground cover
has improved—a plus factor for the saguaro’s welfare. On the
negative side, it is possible that, in addition to suffering from climatic,
biotic, and human pressures, the once-dense mature stands of the
monument are in the down-phase of a natural fluctuation. It is
possible, too, that these stands owed their exceptional richness to an
unusually favorable past environment which may not occur again. We
can hope, however, that sometime in the not-distant future the total
environmental balance will shift once again in favor of the giant
cactus.
A photograph taken from the same spot in January 1970.
Other Common Cactuses
Many other cactuses share the saguaro’s environment. The BARREL
CACTUS is sometimes mistaken for a young saguaro, but can easily be
distinguished by its curved red spines. Stocky and unbranching, this
cactus rarely attains a height of more than 5 or 6 feet. It bears
clusters of sharp spines at small bumps called “areoles,” with the stout central spine
flattened and curved like a fishhook. In bloom, in late summer or
early autumn, this succulent plant produces clusters of yellow or
orange flowers on its crown. The widely circulated story that water
can be obtained by tapping the barrel cactus has little basis in fact,
although it is possible that the thick, bitter juice squeezed from the
plant’s moist tissues might, under extreme conditions, prevent death
from thirst. Desert rats, mice, and rabbits, carefully avoiding
the spines, sometimes gnaw into the plant’s tissues to obtain
moisture.
The group of cactuses called opuntias (oh-POON-cha) have jointed
stems and branches. They are common and widespread throughout
the desert and are well represented in the monument.
Those having cylindrical joints are known as chollas (CHO-yah), while
those with flat or padlike joints are called pricklypears.
Chollas range in size and form from low mats to small trees, but most
of those in the monument are shrublike. TEDDY BEAR CHOLLA, infamous
for its barbed, hard-to-remove-from-your-skin spines, forms thick
stands on warm south- or west-facing slopes. Its dense armor of
straw-colored spines and its black trunk identify it. Because its joints
break off easily when in contact with man or animal, this uncuddly
customer is popularly called “jumping cactus.” A similar species is
CHAIN FRUIT CHOLLA, notable for its long, branched chains of fruit,
which sometimes extend to the ground. Each year, the new flowers
blossom from the persistent fruit of the previous year. There is a
common variety of this species that is almost spineless. STAGHORN
CHOLLA, an inhabitant primarily of washes and other damp places, is
named from its antler-shaped stems. This cactus’ scientific name—
Opuntia versicolor—refers to the fact that its flowers, which appear in
April and May, may be yellow, red, green, or brown. (Each plant
sticks with one color through its lifetime.) Among the smaller chollas,
thin-stemmed PENCIL CHOLLA grows from 2 to 4 feet high on plains
and sandy washes. DESERT CHRISTMAS CACTUS, almost mat-like in form,
blooms in late spring but develops brilliant red fruits which last
through the winter.
Barrel cactus blossoms.
Barrel cactus spines.
Chain fruit cholla at Tucson Mountain Section
headquarters.
PRICKLYPEARS, like many of the chollas, produce large blossoms
in late spring. Those on the monument are principally the
yellow-flowered species. The reddish brown-to-mahogany colored
edible fruits, called tunas, attain the size of large strawberries. When
mature in autumn, they are consumed by many desert animals.
Some of the smaller cactuses are so tiny as to be unnoticeable except
when in bloom; examples are the HEDGEHOGS, the FISHHOOKS, and the
PINCUSHIONS. Blossoms of some of these ground-hugging species are
large, in some cases larger than the rest of the plant, and spectacular
in form and color. All add to the monument’s spring and early
summer display of floral beauty.
Non-Succulents
For the diversity of devices for adaptation to an inhospitable
environment, the many species making up the non-succulent desert
vegetation provide an absorbing field for study. As we have seen,
there are two ways to survive the harsh desert climate; one is to
avoid the periods of excessive heat and drought (“escapers”); the
other is to adopt various protective devices (“evaders” and
“resisters”). Short-lived plants follow the first method; perennials, the
second.
Perennials
Chief among the requirements for year-round survival in the desert is
a plant’s ability to control transpiration and thus maintain a balance
between water loss and water supply. In this struggle, the hours of
darkness are a great aid because in the cool of the night the air is
unable to take up as much moisture as it does under the influence of
the sun’s evaporating heat. Therefore, less exhaling and evaporating
of water occurs from plants, and both the rate and the amount of
water loss are reduced. This reduction in transpiration at night allows
the plants to recover from the severe drying effects of the day. One
biologist may have been close to the truth when he stated, “If the
celestial machinery should break down so that just one night were
omitted in the midst of a dry season, it would spell the doom of half
the nonsucculent plants in the desert.”
One of the common trees in the desert part of the monument is the
MESQUITE (mess-KEET). In general appearance it resembles a small,
spiny apple or peach tree with finely divided leaves. Its roots
sometimes penetrate to a depth of 40 or more feet, thus securing
moisture at the deeper, cooler soil levels, from a supply that remains
nearly constant throughout the year. This enables the tree to expose
a rather large expanse of leaf surface without losing more water than
it can replace. A number of mechanical devices help the tree reduce
its water loss during the driest part of the day (10 a.m. to 4 p.m.).
Among these are its ability to fold its leaves and close the stomata
(breathing pores), thereby greatly reducing the surface area
exposed to exhaling and evaporating influences. In April and
May, mesquite trees are covered with pale-yellow, catkinlike flowers
which attract swarms of insects. These flowers develop to
stringbeanlike pods rich in sugar and important as food for deer and
other animals. In earlier days, the mesquite was also a valuable
source of food and firewood for Indians and pioneers.
Pricklypear blossom.
Claret cup hedgehog.
Fishhook cactus.
Cholla in bloom.
Staghorn cholla.
Another desert tree abundant in the monument is the YELLOW
PALOVERDE. It is somewhat similar in size and general shape to the
mesquite. Lacking the deeply penetrating root system of the
mesquite, the paloverde (Spanish word meaning “green stick”) has
no dependable moisture source; but it has made unusual adaptations
that enable it to retain as much as possible of the water collected by
its roots. In early spring the tree leafs out in dense foliage, which is
followed closely by a blanket of yellow blossoms. At this season the
paloverdes provide one of the most spectacular displays of the
desert, particularly along washes, where they grow especially
well. Blue paloverde, growing in the arroyos, blooms well every
year. Yellow, or foothill, paloverde, a separate species, blooms only if
the soil moisture is high following winter rains.
With the coming of the hot, drying weather of late spring, the trees
need to reduce their moisture losses. They gradually drop their leaves
until, by early summer, each tree has become practically bare. The
trees do not enter a period of dormancy, but are able to remain
active because their green bark contains chlorophyll. Thus, the bark
takes over some of the food-manufacturing function normally
performed by leaves, but without the high rate of water loss.
Carrying the drought-evasion habits of the paloverde a step further,
the OCOTILLO (oh-koh-TEE-yoh) comes into full leaf following each
rainy spell during the warmer months. During the intervening dry
periods it sheds its foliage. The ocotillo, a common and conspicuous
desert dweller, is a shrub of striking appearance, with thorny,
whiplike, unbranching stems 8 to 12 feet long growing upward in a
funnel-shaped cluster. In spring, showy scarlet flower clusters appear
at the tips of the stems, making each plant a glowing splash of color.
Mesquite in bloom.
A number of desert shrubs fail to display as much ingenuity as the
paloverde. Some of these evade the dry season simply by going into
a state of dormancy. The WOLFBERRY bursts into full leaf soon after
the first winter rains and blossoms as early as January. Its small,
tomato-red, juicy fruits are sought by birds, which also find protective
cover for their nests and for overnight perches in the stiff, thorny
shrubs. In the past, the berrylike fruits were important to the
Indians, who ate them raw or made them into a sauce.
Yellow paloverde, Tucson Mountain Section.
Commonest of the conspicuous desert non-succulent shrubs is the
wispy-looking but tough CREOSOTEBUSH, found principally on poor soils
and on the desert flats between mountain ranges. It is also sprinkled
throughout the paloverde-saguaro community in the monument. A
new crop of wax-coated, musty-smelling leaves, giving the plant the
local (but mistaken) name “grease-wood,” appears as early as
January. The leaves are followed by a profuse blooming of small
yellow flowers and cottony seed balls. During abnormally moist
summers or in damp locations, the leaves and flowers persist the
year round; but usually the coming of dry weather brings an
end to the blossoming period. If the dry spell is exceptionally
long, the leaves turn brown, and the plants remain dormant until
awakened by the next winter’s rainfall. Pima Indians formerly
gathered a resinous material, known as lac, which accumulates on
the bark of its branches, and used it to mend pottery and fasten
arrow points. They also steeped the leaves to obtain an antiseptic
medicine. Ground squirrels commonly feed on the seeds.
Ironwood blossoms.
Parry’s penstemon.
A large shrub of open, sprawling growth usually found along desert
washes in company with mesquite is CATCLAW. Its name refers to the
small curved thorns that hide on its branches. In April and May, the
small trees are covered with fragrant, pale-yellow, catkinlike flower
clusters that attract swarms of insects. The seed pods were ground
into meal by the Indians and eaten as mush and cakes.
In lower elevations of the Tucson Mountain Section, the gray-blue
foliage of IRONWOOD is a common sight, but the species does not
range farther eastward. Its wisterialike lavender-and-white flowers
blossom in May and June. The nutritious seeds are harvested by
rodents and formerly were parched and eaten by Indians. The wood
is so dense that it sinks in water; Indians used it for making
arrowheads and tool handles.
Ferns—commonly, plants of dank woods and other moist habitats—
seem entirely out of place in the desert; nevertheless, some
members of the fern family have overcome drought conditions. The
GOLDFERN is common on rocky ledges, where it persists by means of
special drought-resistant cells.
Among the smaller perennials are many that add to spring flower
displays when conditions of moisture and temperature are
favorable. Perennials do not need to mature their seeds before
the coming of summer as do the ephemerals; a majority start
blossoming somewhat later in the spring, and gaily flaunt their
flowers long after the annuals have faded and died. When the heat
and drought of early summer begin to bear down, they gradually die
back, surviving the “long dry” by their persistent roots and larger
stems. One of the most noticeable and beautiful of this group of
small perennials fairly common in the monument is PARRY’S
PENSTEMON. It occurs in scattered clumps on well-drained slopes
along the base of Tanque Verde Ridge. The showy rose-magenta
flowers and the glossy-green leaves arise from erect stems that may
grow 4 feet tall in favorable seasons.
Among the first of the shrubby perennials to cover the rocky hillsides
with a blanket of winter and springtime bloom is the BRITTLEBUSH.
Masses of yellow sunflowerlike blossoms are borne on long stems
that exude a gum which was chewed by the Indians and was also
burned as incense in early mission churches.
A conspicuous perennial that survives the dry season as an
underground bulb is BLUEDICKS. Although it doesn’t occur in massed
bloom, it does add spots of color to the desert scene. Usually
appearing from February to May, bluedicks has violet flower clusters
on long, slender, erect stems. The bulbs were dug and eaten by Pima
and Papago Indians.
Although neither conspicuous nor attractive, the common TRIANGLE
BURSAGE is an important part of the paloverde-saguaro community in
the Tucson Mountains. A low, rounded, white-barked shrub, bursage
has small, colorless flowers without petals. (Being wind-pollinated,
the flowers do not need to attract insects.)
One of the handsome shrubs abundant in the high desert along the
base of Tanque Verde Ridge is the JOJOBA (ho-HOH-bah), or deernut.
Its thick, leathery, evergreen leaves are especially noticeable in
winter and furnish excellent browse for deer. The flowers are small
and yellowish, but the nutlike fruits are large and edible, although
bitter. They were eaten raw or parched by the Indians, and were
pulverized by early-day settlers for use as a coffee substitute.
Among the attractive flowering shrubs are the INDIGOBUSHES, of which
there are several species adapted to the desert environment. The
local, low-growing indigobushes are especially ornamental when
covered with masses of deep-blue flowers in spring.
Another small shrub, noticeable from February to May because of its
large, tassel-like pink-to-red blossoms and its fernlike leaves is FAIRY-
DUSTER. Deer browse on its delicate foliage.
The PAPER FLOWER, growing in dome-shaped clumps covered with
yellow flowers, sometimes blooms throughout the entire year. The
petals bleach and dry and may remain on the plant weeks after the
blossoms have faded.
Quick to attract attention because of their apparent lack of
foliage, the JOINTFIRS, of which there are several desert
species, grow in clumps of harsh, stringy, yellow-green, erect stems.
The skin or outer bark of the stems performs the usual functions of
leaves, which on these plants have been reduced to scales. Small,
fragrant, yellow blossom clusters, appearing at the stem joints in
spring, are visited by insects attracted to their nectar.
Ephemerals
Every spring, after a winter of normal rainfall, parts of the
southwestern deserts are carpeted with a lush blanket of fast-
growing annual herbs and wildflowers—the early spring ephemerals.
The monument does not get massive displays, however, since it is
lacking in the species that make the best show. But it does have
many annuals that are beautiful individually or in small groups. Many
of these “quickies” do not have the characteristics of desert plants;
some of them, in fact, are part of the common vegetation of other
climes where moisture is plentiful and summer temperatures are
much less severe.
What are these “foreign” plants doing in the desert, and how do they
survive? With its often frostfree winter climate and its normal
December-to-March rains, the desert presents in early spring ideal
growing weather for annuals that are able to compress a generation
into several months. Several hundred species of plants have taken
advantage of this situation.
There is WILD CARROT, which is a summer plant in South Carolina and
a winter annual in California (where it is called “rattlesnake weed”).
In the desert, its seeds lie dormant in the soil through the long, hot
summer and the drying weather of autumn. Then, under the
influence of winter rains and the soil-warming effects of early spring
sunshine, they burst into rapid growth. One of a host of species, this
early spring ephemeral is enabled by these favorable conditions to
flower and mature its seed before the pall of summer heat and
drought descends upon the desert. With their task complete, the
parents wither and die. Their ripened seeds are scattered over the
desert until winter rains enable them to cover the desert with another
multicolored but short-lived carpet of foliage and bloom.
The one-season ephemerals do not limit themselves to the winter
growing period. From July to September, local thundershowers deluge
parts of the desert while other areas, not so fortunate, remain dry.
High Performance Computing in Remote Sensing Antonio J. Plaza

  • 5. High Performance Computing in Remote Sensing
Author(s): Antonio J. Plaza, Chein-I Chang
ISBN(s): 9781420011616, 1420011618
Edition: Kindle
File Details: PDF, 6.89 MB
Year: 2007
Language: English
  • 10. Chapman & Hall/CRC
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742

© 2008 by Taylor & Francis Group, LLC
Chapman & Hall/CRC is an imprint of Taylor & Francis Group, an Informa business

No claim to original U.S. Government works
Printed in the United States of America on acid-free paper
10 9 8 7 6 5 4 3 2 1

International Standard Book Number-13: 978-1-58488-662-4 (Hardcover)

This book contains information obtained from authentic and highly regarded sources. Reprinted material is quoted with permission, and sources are indicated. A wide variety of references are listed. Reasonable efforts have been made to publish reliable data and information, but the author and the publisher cannot assume responsibility for the validity of all materials or for the consequences of their use.

No part of this book may be reprinted, reproduced, transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.

For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222 Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.

Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are used only for identification and explanation without intent to infringe.

Library of Congress Cataloging-in-Publication Data
High performance computing in remote sensing / Antonio J. Plaza and Chein-I Chang, editors.
p. cm. -- (Chapman & Hall/CRC computer & information science series)
Includes bibliographical references and index.
ISBN 978-1-58488-662-4 (alk. paper)
1. High performance computing. 2. Remote sensing. I. Plaza, Antonio J. II. Chang, Chein-I. III. Title. IV. Series.
QA76.88.H5277 2007
621.36'78028543--dc22
2007020736

Visit the Taylor & Francis Web site at http://www.taylorandfrancis.com and the CRC Press Web site at http://www.crcpress.com
  • 11. Contents
1 Introduction (p. 1) — Antonio Plaza and Chein-I Chang
2 High-Performance Computer Architectures for Remote Sensing Data Analysis: Overview and Case Study (p. 9) — Antonio Plaza and Chein-I Chang
3 Computer Architectures for Multimedia and Video Analysis (p. 43) — Edmundo Sáez, José González-Mora, Nicolás Guil, José I. Benavides, and Emilio L. Zapata
4 Parallel Implementation of the ORASIS Algorithm for Remote Sensing Data Analysis (p. 69) — David Gillis and Jeffrey H. Bowles
5 Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm (p. 97) — James C. Tilton
6 Computing for Analysis and Modeling of Hyperspectral Imagery (p. 109) — Gregory P. Asner, Robert S. Haxo, and David E. Knapp
7 Parallel Implementation of Morphological Neural Networks for Hyperspectral Image Analysis (p. 131) — Javier Plaza, Rosa Pérez, Antonio Plaza, Pablo Martínez, and David Valencia
8 Parallel Wildland Fire Monitoring and Tracking Using Remotely Sensed Data (p. 151) — David Valencia, Pablo Martínez, Antonio Plaza, and Javier Plaza
9 An Introduction to Grids for Remote Sensing Applications (p. 183) — Craig A. Lee
10 Remote Sensing Grids: Architecture and Implementation (p. 203) — Samuel D. Gasster, Craig A. Lee, and James W. Palko
11 Open Grid Services for Envisat and Earth Observation Applications (p. 237) — Luigi Fusco, Roberto Cossu, and Christian Retscher
12 Design and Implementation of a Grid Computing Environment for Remote Sensing (p. 281) — Massimo Cafaro, Italo Epicoco, Gianvito Quarta, Sandro Fiore, and Giovanni Aloisio
13 A Solutionware for Hyperspectral Image Processing and Analysis (p. 309) — Miguel Vélez-Reyes, Wilson Rivera-Gallego, and Luis O. Jiménez-Rodríguez
14 AVIRIS and Related 21st Century Imaging Spectrometers for Earth and Space Science (p. 335) — Robert O. Green
15 Remote Sensing and High-Performance Reconfigurable Computing Systems (p. 359) — Esam El-Araby, Mohamed Taher, Tarek El-Ghazawi, and Jacqueline Le Moigne
16 FPGA Design for Real-Time Implementation of Constrained Energy Minimization for Hyperspectral Target Detection (p. 379) — Jianwei Wang and Chein-I Chang
17 Real-Time Online Processing of Hyperspectral Imagery for Target Detection and Discrimination (p. 397) — Qian Du
18 Real-Time Onboard Hyperspectral Image Processing Using Programmable Graphics Hardware (p. 411) — Javier Setoain, Manuel Prieto, Christian Tenllado, and Francisco Tirado
Index (p. 453)
  • 13. List of Tables
2.1 Specifications of Heterogeneous Computing Nodes in a Fully Heterogeneous Network of Distributed Workstations (p. 28)
2.2 Capacity of Communication Links (Time in Milliseconds to Transfer a 1-MB Message) in a Fully Heterogeneous Network (p. 28)
2.3 SAD-Based Spectral Similarity Scores Between Endmembers Extracted by Different Parallel Implementations of the PPI Algorithm and the USGS Reference Signatures Collected in the WTC Area (p. 30)
2.4 Processing Times (Seconds) Achieved by the Cluster-Based and Heterogeneous Parallel Implementations of PPI on Thunderhead (p. 32)
2.5 Execution Times (Measured in Seconds) of the Heterogeneous PPI and its Homogeneous Version on the Four Considered NOWs (16 Processors) (p. 32)
2.6 Communication (com), Sequential Computation (Ap), and Parallel Computation (Bp) Times Obtained on the Four Considered NOWs (p. 33)
2.7 Load Balancing Rates for the Heterogeneous PPI and its Homogeneous Version on the Four Considered NOWs (p. 34)
2.8 Summary of Resource Utilization for the FPGA-Based Implementation of the PPI Algorithm (p. 35)
3.1 Clock Cycles and Speedups for the Sequential/Optimized Kernel Implementations (p. 56)
3.2 Percentage of Computation Time Spent by the Temporal Video Segmentation Algorithm in Different Tasks, Before and After the Optimization (p. 57)
4.1 Summary of HPC Platforms (p. 90)
4.2 Summary of Data Cubes (p. 91)
4.3 Timing Results for the Longview Machine (in seconds) (p. 91)
4.4 Timing Results for the Huinalu Machine (in seconds) (p. 91)
4.5 Timing Results for the Shelton Machine (in seconds) (p. 92)
4.6 Statistical Tests Used for Compression. X = Original Spectrum, Y = Reconstructed Spectrum, n = Number of Bands (p. 92)
4.7 Compression Results for the Longview Machine (p. 92)
4.8 Compression Results for the Huinalu Machine (p. 93)
4.9 Compression Results for the Shelton Machine (p. 93)
5.1 The Number of CPUs Required for a Naive Parallelization of RHSEG with One CPU per 4096-Pixel Data Section for Various Dimensionalities (p. 103)
5.2 RHSEG Processing Time Results for a Six-Band Landsat Thematic Mapper Image with 2048 Columns and 2048 Rows. (For the 1-CPU case, the processing time shown is for the values of Li and Lo that produce the smallest processing time.) Processing time shown as hours:minutes:seconds (p. 105)
5.3 The Percentage of Time Task 0 of the Parallel Implementation of RHSEG Spent in the Activities of Set-up, Computation, Data Transfer, Waiting for Other Tasks, and Other Activities for the 2048 × 2048 Landsat TM Test Scene (p. 105)
6.1 The Basic Characteristics of Several Well-Known Imaging Spectrometers (p. 112)
7.1 Classification Accuracies (in percentage) Achieved by the Parallel Neural Classifier for the AVIRIS Salinas Scene Using Morphological Features, PCT-Based Features, and the Original Spectral Information (processing times in a single Thunderhead node are given in parentheses) (p. 143)
7.2 Execution Times (in seconds) and Performance Ratios Reported for the Homogeneous Algorithms Versus the Heterogeneous Ones on the Two Considered Networks (p. 144)
7.3 Communication (COM), Sequential Computation (SEQ), and Parallel Computation (PAR) Times for the Homogeneous Algorithms Versus the Heterogeneous Ones on the Two Considered Networks After Processing the AVIRIS Salinas Hyperspectral Image (p. 145)
7.4 Load-Balancing Rates for the Parallel Algorithms on the Homogeneous and Heterogeneous Networks (p. 146)
8.1 Classification Accuracy Obtained by the Proposed Parallel AMC Algorithm for Each Ground-Truth Class in the AVIRIS Indian Pines Data (p. 175)
8.2 Execution Times (seconds) of the HeteroMPI-Based Parallel Version of the AMC Algorithm on the Different Heterogeneous Processors of the HCL Cluster (p. 178)
8.3 Minima (tmin) and Maxima (tmax) Processor Run-Times (in seconds) and Load Imbalance (R) of the HeteroMPI-Based Implementation of the AMC Algorithm on the HCL Cluster (p. 179)
13.1 Function Replace Using BLAS Routines (p. 322)
13.2 Algorithm Benchmarks Before and After BLAS Library Replacements (p. 323)
13.3 Results of C-Means Method with Euclidean Distance (p. 330)
13.4 Results of Principal Components Analysis (p. 331)
14.1 Spectral, Radiometric, Spatial, Temporal, and Uniformity Specifications of the AVIRIS Instrument (p. 340)
14.2 Diversity of Scientific Research and Applications Pursued with AVIRIS (p. 346)
14.3 Spectral, Radiometric, Spatial, Temporal, and Uniformity Specifications of the M3 Imaging Spectrometer for the Moon (p. 350)
14.4 Earth Imaging Spectrometer Products for Terrestrial and Aquatic Ecosystems Understanding (p. 353)
14.5 Nominal Characteristics of an Earth Imaging Spectrometer for Terrestrial and Aquatic Ecosystems' Health, Composition, and Productivity at a Seasonal Time Scale (p. 354)
14.6 Earth Ecosystem Imaging Spectrometer Data Volumes (p. 356)
17.1 Classification Accuracy ND Using the CLDA Algorithm (in all cases, the number of false-alarm pixels NF = 0) (p. 404)
18.1 GPGPU Class Methods (p. 431)
18.2 GPUStream Class Methods (p. 432)
18.3 GPUKernel Class Methods (p. 433)
18.4 Experimental GPU Features (p. 442)
18.5 Experimental CPU Features (p. 442)
18.6 SAM-Based Spectral Similarity Scores Among USGS Mineral Spectra and Endmembers Produced by Different Algorithms (p. 445)
18.7 SAM-Based Spectral Similarity Scores Among USGS Mineral Spectra and Endmembers Produced by the AMEE Algorithm (implemented using both SAM and SID, and considering different numbers of algorithm iterations) (p. 445)
18.8 Execution Time (in milliseconds) for the CPU Implementations (p. 445)
18.9 Execution Time (in milliseconds) for the GPU Implementations (p. 446)
  • 17. List of Figures 2.1 The concept of hyperspectral imaging in remote sensing.. . . . . . . . . . . . . . . .11 2.2 Thunderhead Beowulf cluster (512 processors) at NASA’s Goddard Space Flight Center in Maryland. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.3 Toy example illustrating the performance of the PPI algorithm in a 2-dimensional space. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.4 Domain decomposition adopted in the parallel implementation of the PPI algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.5 Systolic array design for the proposed FPGA implementation of the PPI algorithm. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24 2.6 AVIRIS hyperspectral image collected by NASA’s Jet Propulsion Laboratory over lower Manhattan on Sept. 16, 2001 (left), and location of thermal hot spots in the fires observed in the World Trade Center area (right). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30 2.7 Scalability of the cluster-based and heterogeneous parallel implementations of PPI on Thunderhead. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31 3.1 Typical SIMD operation using multimedia extensions. . . . . . . . . . . . . . . . . . . 47 3.2 GPU pipeline. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48 3.3 Temporal video segmentation algorithm to optimize.. . . . . . . . . . . . . . . . . . . .51 3.4 Implementation of the horizontal 1-D convolution. . . . . . . . . . . . . . . . . . . . . . 54 3.5 Extension to the arctangent calculation to operate in the interval [0◦ , 360◦ ]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
55 3.6 Tracking process: A warping function is applied to the template, T (x), to match its occurrence in an image. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58 3.7 Steps of the tracking algorithm.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .60 3.8 Efficient computation of a Hessian matrix. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62 xi
  • 18. xii List of Figures 3.9 Time employed by a tracking iteration in several platforms. . . . . . . . . . . . . . 63 3.10 Time comparison for several stages.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .64 4.1 Data from AP Hill. (a) Single band of the original data. (b) (c) Fraction planes from ORASIS processing. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 4.2 The number of exemplars as a function of the error angle for various hyperspectral images. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75 4.3 Three-dimensional histogram of the exemplars projected onto the first two reference vectors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78 4.4 Abundance coefficient histograms. (a) The histogram of a background endmember. (b) The histogram of a target endmember.. . . . . . . . . . . . . . . . . .85 4.5 HYDICE data from Forest Radiance. (a) A single band of the raw data. (b) Overlay with the results of the OAD. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85 5.1 Graphical representation of the recursive task distribution for RHSEG on a parallel computer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103 6.1 Imaging spectrometers collect hyperspectral data such that each pixel contains a spectral radiance signature comprised of contiguous, narrow wavelength bands spanning a broad wavelength range (e.g., 400– 2500 nm). Top shows a typical hyperspectral image cube; each pixel contains a detailed hyperspectral signature such as those shown at the bottom.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .111 6.2 Change in data volume with flight distance for two common imaging spectrometers, AVIRIS and CASI-1500, flown at 2 m and 10 m GIFOV. . 
114 6.3 Major processing steps used to derive calibrated, geo-referenced surface reflectance spectra for subsequent analysis of hyperspectral images.. . . . .115 6.4 A per-pixel, Monte Carlo mixture analysis model used for automated, large-scale quantification of fractional material cover in terrestrial ecosystems [18, 21]. A spectral endmember database of (A) live, green vegetation; (B) non-photosynthetic vegetation; and (C) bare soil is used to iteratively decompose each pixel spectrum in an image into constituent surface cover fractions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118 6.5 Example forward canopy radiative transfer model simulations of how a plant canopy hyperspectral reflectance signature changes with increasing quantities of dead leaf material. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
  • 19. List of Figures xiii 6.6 Schematic of a typical canopy radiative transfer inverse modeling environ- ment, with Monte Carlo simulation over a set of ecologically-constrained variables. This example mentions AVIRIS as the hyperspectral image data source. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 6.7 Schematic of a small HPC cluster showing 20 compute nodes, front-end server, InfiniBand high-speed/low-latency network, Gigabit Ethernet management network, and storage in parallel and conventional file systems on SCSI RAID-5 drive arrays. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123 6.8 Effect of storage RAID-5 subsystem on independent simultaneous calcula- tions, with storage systems accessed via NFS. With multiple simultaneous accesses, the SCSI array outperforms the SATA array. . . . . . . . . . . . . . . . . . 124 6.9 Performance comparison of multiple computer node access to a data storage system using the traditional NFS or newer IBRIX parallel file system.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .124 7.1 Communication framework for the morphological feature extraction algorithm.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .136 7.2 MLP neural network topology. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138 7.3 AVIRIS scene of Salinas Valley, California (a), and land-cover ground classes (b). . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142 7.4 Scalability of parallel morphological feature extraction algorithms on Thunderhead.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .147 7.5 Scalability of parallel neural classifier on Thunderhead. . . . . . . . . . . . . . . . . 
148 8.1 MERIS hyperspectral image of the fires that took place in the summer of 2005 in Spain and Portugal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154 8.2 Classification of fire spread models.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .155 8.3 Concept of parallelizable spatial/spectral pattern (PSSP) and proposed partitioning scheme.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .164 8.4 Problem of accessing pixels outside the image domain. . . . . . . . . . . . . . . . . 164 8.5 Additional communications required when the SE is located around a pixel in the border of a PSSP. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165 8.6 Border-handling strategy relative to pixels in the border of a PSSP.. . . . . .165
  • 20. xiv List of Figures 8.7 Partitioning options for the considered neural algorithm. . . . . . . . . . . . . . . . 168 8.8 Functional diagram of the system design model. . . . . . . . . . . . . . . . . . . . . . . 170 8.9 (Left) Spectral band at 587 nm wavelength of an AVIRIS scene com- prising agricultural and forest features at Indian Pines, Indiana. (Right) Ground-truth map with 30 mutually exclusive land-cover classes. . . . . . . . 174 8.10 Speedups achieved by the parallel AMC algorithm using a limited number of processors on Thunderhead.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .176 8.11 Speedups achieved by the parallel SOM-based classification algorithm (using endmembers produced by the first three steps of the AMC algorithm) using a large number of processors on Thunderhead. . . . . . . . . 177 8.12 Speedups achieved by the parallel ATGP algorithm using a limited number of processors on Thunderhead.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .178 9.1 The service architecture concept. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 9.2 The OGSA framework. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 192 10.1 High level architectural view of a remote sensing system. . . . . . . . . . . . . . . 206 10.2 WFCS Grid services architecture.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .218 10.3 LEAD software architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228 11.1 The BEST Toolbox. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 243 11.2 The BEAM toolbox with VISAT visualization.. . . . . . . . . . . . . . . . . . . . . . . .244 11.3 The BEAT toolbox with VISAN visualization. . . . . . . . . . . . . . . . . . . . . . . . . 246 11.4 The architecture model for EO Grid on-Demand Services. . . . . . . . . . . . . . 
257 11.5 Web portal Ozone Profile Result Visualization.. . . . . . . . . . . . . . . . . . . . . . . .258 11.6 MERIS mosaic at 1.3 km resolution obtained in G-POD from the entire May to December 2004 data set. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 260 11.7 The ASAR G-POD environment. The user browses for and selects products of interest (upper left panel). The system automatically identifies the subtasks required by the application and distributes them to the different computing elements in the grid (upper right panel). Results are presented to the user (lower panel).. . . . . . . . . . . . . . . . .262
  • 21. List of Figures xv 11.8 Three arcsec (∼ 90 m) pixel size orthorectified Envisat ASAR mosaic obtained using G-POD. Political boundaries have been manually overlaid. The full resolution result can be seen at [34]. . . . . . . . . . . . . . . . . . 263 11.9 ASAR mosaic obtained using G-POD considering GM products acquired from March 8 to 14, 2006 (400 m resolution). . . . . . . . . . . . . . . . . 264 11.10 Global monthly mean near surface temperature profile for June 2005, time layer 0 h. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 267 11.11 YAGOP ozone profiles compared with corresponding operational GOMOS products for two selected stars (first and second panels from the left). Distribution and comparison of coincidences for GOMOS and MIPAS profiles for Sep. 2002 are shown in the first and second panels from the right. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 269 11.12 Zonal mean of Na profiles of 14–31 August 2003.. . . . . . . . . . . . . . . . . . . .270 12.1 Multi-tier grid system architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 286 12.2 System architecture.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .290 12.3 Distributed data management architecture. . . . . . . . . . . . . . . . . . . . . . . . . . . 295 12.4 Workflow management stack. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 297 12.5 a) Task sequence showing interferogram processing; b) task sequence mapped on grid resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 303 13.1 Levels in solving a computing problem. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 311 13.2 HIAT graphical user interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 312 13.3 Data processing schema for hyperspectral image analyis toolbox. . . . 
. . . 313 13.4 Spectrum of a signal sampled at (a) its Nyquist frequency, and (b) twice its Nyquist frequency. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 314 13.5 (a) The spectrum of grass and (b) its power spectral density.. . . . . . . . . . .314 13.6 Sample spectra before and after lowpass filtering. . . . . . . . . . . . . . . . . . . . . 315 13.7 HYPERION data of Enrique Reef (band 8 at 427 nm) before (a) and after (b) oversamplig filtering.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .315 13.8 Principal component algorithm block components. . . . . . . . . . . . . . . . . . . . 324
  • 22. xvi List of Figures 13.9 Performance results for Euclidean distance classifier. . . . . . . . . . . . . . . . . . 325 13.10 Performance results for maximum likelihood.. . . . . . . . . . . . . . . . . . . . . . . .326 13.11 Grid-HSI architecture.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .327 13.12 Grid-HSI portal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328 13.13 Graphical output at node 04. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 330 14.1 A limited set of rock forming minerals and vegetation reflectance spectra measured from 400 to 2500 nm in the solar reflected light spectrum. NPV corresponds to non-photosynthetic vegetation. A wide diversity of composition related absorption and scattering signatures in nature are illustrated by these materials.. . . . . . . . . . . . . . . . .337 14.2 The spectral signatures of a limited set of mineral and vegetation spectra convolved to the six solar reflected range band passes of the multispectral LandSat Thematic Mapper. When mixtures and illumination factors are included, the six multispectral measurements are insufficient to unambiguously identify the wide range of possible materials present on the surface of the Earth. . . . . . . . . . . . . . . . . . . . . . . . . 338 14.3 AVIRIS spectral range and sampling with a transmittance spectrum of the atmosphere and the six LandSat TM multi-spectral bands in the solar reflected spectrum. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339 14.4 AVIRIS image cube representation of a data set measured of the southern San Francisco Bay, California. The top panel shows the spatial content for a 20 m spatial resolution data set. The vertical panels depict the spectral measurement from 380 to 2510 nm that is recorded for every spatial element. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 
340 14.5 The 2006 AVIRIS signal-to-noise ratio and corresponding benchmark reference radiance.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .341 14.6 Depiction of the spectral cross-track and spectral-IFOV uniformity for a uniform imaging spectrometer. The grids represent the detectors, the gray scale represents the wavelengths, and the dots represent the centers of the IFOVs. This is a uniform imaging spectrometer where each cross-track spectrum has the same calibration and all the wavelengths measured for a given spectrum are from the same IFOV. . . 342 14.7 Vegetation reflectance spectrum showing the molecular absorption and constituent scattering signatures present across the solar reflected spectral range.. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .343
14.8 Modeled upwelling radiance incident at the AVIRIS aperture from a well-illuminated vegetation canopy. This spectrum includes the combined effects of the solar irradiance, two-way transmittance, and scattering of the atmosphere, as well as the vegetation canopy reflectance. 343
14.9 AVIRIS measured signal for the upwelling radiance from a vegetation-covered surface. The instrument optical and electronic characteristics dominate the recorded signal. 344
14.10 Spectrally and radiometrically calibrated spectrum for the vegetation canopy target. 345
14.11 Atmospherically corrected spectrum from an AVIRIS measurement of a vegetation canopy. The 1400 and 1900 nm spectral regions are ignored due to the strong absorption of atmospheric water vapor. In this reflectance spectrum the molecular absorption and constituent scattering properties of the canopy are clearly expressed and available for spectroscopic analysis. 345
14.12 Spectra of samples returned by the NASA Apollo missions showing the composition-based spectral diversity of surface materials on the Moon. This spectral diversity provides the basis for pursuing the objectives of the M3 mission with an imaging spectrometer. Upon arrival on Earth the ultradry lunar samples have absorbed water, resulting in the absorption feature beyond 2700 nm. These spectra were measured by the NASA RELAB facility at Brown University. 349
14.13 Mechanical drawing of the M3 imaging spectrometer that has been built for mapping the composition of the Moon via spectroscopy. The M3 instrument has the following mass, power, and volume characteristics: 8 kg, 15 Watts, 25 × 18 × 12 cm. The M3 instrument was built in 24 months. 350
14.14 Depiction of the spectral-spatial and pushbroom imaging approach of the M3 high-uniformity and high-precision imaging spectrometer. 351
14.15 Benchmark reference radiance for an Earth imaging spectrometer focused on terrestrial and aquatic ecosystem objectives. 355
14.16 The signal-to-noise ratio requirements for each of the benchmark reference radiances. 355
15.1 Onboard processing example. 361
15.2 Trade-off between flexibility and performance [5]. 362
15.3 FPGA structure. 362
15.4 CLB structure. 363
15.5 Early reconfigurable architecture [7]. 364
15.6 Automatic wavelet spectral dimension reduction algorithm. 366
15.7 Top hierarchical architecture of the automatic wavelet dimension reduction algorithm. 367
15.8 DWT/IDWT pipeline implementation. 367
15.9 Correlator module. 368
15.10 Speedup of the wavelet-based hyperspectral dimension reduction algorithm. 369
15.11 Generalized classification rules for Pass-One. 370
15.12 Top-level architecture of the ACCA algorithm. 370
15.13 ACCA normalization module architecture: exact normalization operations. 371
15.14 ACCA normalization module architecture: approximated normalization operations. 372
15.15 ACCA Pass-One architecture. 373
15.16 Detection accuracy (based on the absolute error): image bands and cloud masks (software/reference mask, hardware masks). 374
15.17 Detection accuracy (based on the absolute error): approximate normalization and quantization errors. 374
15.18 ACCA hardware-to-software performance. 375
16.1 Systolic array for QR-decomposition. 383
16.2 Systolic array for backsubstitution. 385
16.3 Boundary cell (left) and internal cell (right). 385
16.4 Shift-adder DA architecture. 387
16.5 Computation of c_k. 387
16.6 FIR filter for abundance estimation. 388
16.7 Block diagram of the auto-correlator. 389
16.8 QR-decomposition by CORDIC circuit. 390
16.9 Systolic array for backsubstitution. 391
16.10 Boundary cell (left) and internal cell (right) implementations. 391
16.11 Real-time updated triangular matrix via CORDIC circuit. 392
16.12 Real-time updated weights. 393
16.13 Real-time detection results. 394
16.14 Block diagrams of Methods 1 (left) and 2 (right) to be used for FPGA designs of CEM. 395
17.1 (a) A HYDICE image scene that contains 30 panels. (b) Spatial locations of the 30 panels provided by ground truth. (c) Spectra from P1 to P10. 403
18.1 A hyperspectral image as a cube made up of spatially arranged pixel vectors. 413
18.2 3D graphics pipeline. 415
18.3 Fourth-generation GPU block diagram. These GPUs incorporate fully programmable vertex and fragment processors. 416
18.4 NVIDIA G70 (a) and ATI-RADEON R520 (b) block diagrams. 418
18.5 Block diagram of NVIDIA's GeForce 8800 GTX. 419
18.6 Stream graphs of the GPU-based (a) filter-bank (FBS) and (b) lifting (LS) implementations. 426
18.7 2D texture layout. 427
18.8 Mapping one lifting step onto the GPU. 429
18.9 Implementation of the GPGPU framework. 430
18.10 We allocate a stream S of dimension 8 × 4 and initialize its content to a sequence of numbers (from 0 to 31). Then, we request four substreams dividing the original stream into four quadrants (A, B, C, and D). Finally, we add quadrants A and D and store the result in B, and we subtract D from A and store the result in C. 432
18.11 Mapping of a hyperspectral image onto the GPU memory. 437
18.12 Flowchart of the proposed stream-based GPU implementation of the AMEE algorithm using SAM as pointwise distance. 438
18.13 Kernels involved in the computation of the inner products/norms and definition of a region of influence (RI) for a given pixel defined by an SE with t = 3. 439
18.14 Computation of the partial inner products for distance 5: each pixel-vector with its south-east nearest neighbor. Notice that the elements in the GPUStreams are four-element vectors, i.e., A, B, C... contain four floating-point values each, and vector operations are element-wise. 440
18.15 Flowchart of the proposed stream-based GPU implementation of the AMEE algorithm using SID as pointwise distance. 441
18.16 Subscene of the full AVIRIS hyperspectral data cube collected over the Cuprite mining district in Nevada. 443
18.17 Ground USGS spectra for ten minerals of interest in the AVIRIS Cuprite scene. 444
18.18 Performance of the CPU- and GPU-based AMEE (SAM) implementations for different image sizes (Imax = 5). 446
18.19 Performance of the CPU- and GPU-based AMEE (SID) implementations for different image sizes (Imax = 5). 447
18.20 Speedups of the GPU-based AMEE implementations for different numbers of iterations. 447
18.21 Speedup comparison between the two different implementations of AMEE (SID and SAM) in the different execution platforms (Imax = 5). 448
18.22 Speedup comparison between the two generations of CPUs, P4 Northwood (2003) and Prescott (2005), and the two generations of GPUs, 5950 Ultra (2003) and 7800 GTX (2005). 448
Acknowledgments

The editors would like to thank all the contributors for all their help and support during the production of this book, and for sharing their vast knowledge with readers. In particular, Profs. Javier Plaza and David Valencia are gratefully acknowledged for their help in the preparation of some of the chapters of this text. Last but not least, the editors gratefully thank their families for their support on this project.
About the Editors

Antonio Plaza received the M.S. degree and the Ph.D. degree in computer engineering from the University of Extremadura, Spain, where he was awarded the outstanding Ph.D. dissertation award in 2002. Dr. Plaza is an associate professor with the Department of Technology of Computers and Communications at the University of Extremadura. He has authored or co-authored more than 140 scientific publications, including journal papers, book chapters, and peer-reviewed conference proceedings. His main research interests comprise remote sensing, image and signal processing, and efficient implementations of large-scale scientific problems on high-performance computing architectures, including commodity Beowulf clusters, heterogeneous networks of workstations, grid computing facilities, and hardware-based computer architectures such as field-programmable gate arrays (FPGAs) and graphics processing units (GPUs). He has held visiting researcher positions at several institutions, including the Computational and Information Sciences and Technology Office (CISTO) at NASA/Goddard Space Flight Center, Greenbelt, Maryland; the Remote Sensing, Signal and Image Processing Laboratory (RSSIPL) at the Department of Computer Science and Electrical Engineering, University of Maryland, Baltimore County; the Microsystems Laboratory at the Department of Electrical & Computer Engineering, University of Maryland, College Park; and the AVIRIS group at NASA/Jet Propulsion Laboratory, Pasadena, California. Dr. Plaza is a senior member of the IEEE. He is active in the IEEE Computer Society and the IEEE Geoscience and Remote Sensing Society, and has served as a proposal evaluator for the European Commission, the European Space Agency, and the Spanish Ministry of Science and Education. He is also a frequent manuscript reviewer for more than 15 highly cited journals (including several IEEE Transactions) in the areas of computer architecture, parallel/distributed systems, remote sensing, neural networks, image/signal processing, aerospace and engineering systems, and pattern analysis. He is also a member of the program committee of several international conferences, such as the European Conference on Parallel and Distributed Computing; the International Workshop on Algorithms, Models and Tools for Parallel Computing on Heterogeneous Networks; the Euromicro Workshop on Parallel and Distributed Image Processing, Video Processing, and Multimedia; the Workshop on Grid Computing Applications Development; the IEEE GRSS/ASPRS Joint Workshop on Remote Sensing and Data Fusion over Urban Areas; and the IEEE International Geoscience and Remote Sensing Symposium. Dr. Plaza is the project coordinator of HYPER-I-NET (Hyperspectral Imaging Network), a four-year Marie Curie Research Training Network (see http://www.hyperinet.eu) designed to build an interdisciplinary European research
community focused on remotely sensed hyperspectral imaging. He is guest editor (with Prof. Chein-I Chang) of a special issue on high-performance computing for hyperspectral imaging for the International Journal of High Performance Computing Applications. He is associate editor for the IEEE Transactions on Geoscience and Remote Sensing journal in the areas of Hyperspectral Image Analysis and Signal Processing. Additional information is available at http://www.umbc.edu/rssipl/people/aplaza.

Chein-I Chang received his B.S. degree from Soochow University, Taipei, Taiwan; the M.S. degree from the Institute of Mathematics at National Tsing Hua University, Hsinchu, Taiwan; and the M.A. degree from the State University of New York at Stony Brook, all in mathematics. He also received his M.S. and M.S.E.E. degrees from the University of Illinois at Urbana-Champaign and the Ph.D. degree in electrical engineering from the University of Maryland, College Park. Dr. Chang has been with the University of Maryland, Baltimore County (UMBC) since 1987 and is currently a professor in the Department of Computer Science and Electrical Engineering. He was a visiting research specialist in the Institute of Information Engineering at the National Cheng Kung University, Tainan, Taiwan, from 1994 to 1995. He received an NRC (National Research Council) senior research associateship award from 2002 to 2003 sponsored by the U.S. Army Soldier and Biological Chemical Command, Edgewood Chemical and Biological Center, Aberdeen Proving Ground, Maryland. Additionally, Dr. Chang was a distinguished lecturer chair at the National Chung Hsing University sponsored by the Ministry of Education in Taiwan from 2005 to 2006 and is currently holding a chair professorship of disaster reduction technology from 2006 to 2009 with the Environmental Restoration and Disaster Reduction Research Center, National Chung Hsing University, Taichung, Taiwan, ROC. He has three patents and several pending on hyperspectral image processing. He is on the editorial board of the Journal of High Speed Networks and was an associate editor in the area of hyperspectral signal processing for the IEEE Transactions on Geoscience and Remote Sensing. He was the guest editor of a special issue of the Journal of High Speed Networks on telemedicine and applications, and co-guest edited three special issues on Broadband Multimedia Sensor Networks in Healthcare Applications for the Journal of High Speed Networks, 2007, and on high-performance computing for hyperspectral imaging for the International Journal of High Performance Computing Applications. Dr. Chang is the author of Hyperspectral Imaging: Techniques for Spectral Detection and Classification, published by Kluwer Academic Publishers in 2003, and the editor of two books, Recent Advances in Hyperspectral Signal and Image Processing (Trivandrum, Kerala: Research Signpost, Transworld Research Network, India, 2006) and Hyperspectral Data Exploitation: Theory and Applications (John Wiley & Sons, 2007). Dr. Chang is currently working on his second book, Hyperspectral Imaging: Algorithm Design and Analysis (John Wiley & Sons, due 2007). He is a Fellow of the SPIE and a member of Phi Kappa Phi and Eta Kappa Nu. Additional information is available at http://www.umbc.edu/rssipl.
Contributors

Giovanni Aloisio, Euromediterranean Center for Climate Change & University of Salento, Italy
Gregory P. Asner, Carnegie Institution of Washington, Stanford, California
José I. Benavides, University of Córdoba, Spain
Jeffrey H. Bowles, Naval Research Laboratory, Washington, DC
Massimo Cafaro, Euromediterranean Center for Climate Change & University of Salento, Italy
Chein-I Chang, University of Maryland Baltimore County, Baltimore, Maryland
Roberto Cossu, European Space Agency, ESA-Esrin, Italy
Qian Du, Mississippi State University, Mississippi
Esam El-Araby, George Washington University, Washington, DC
Tarek El-Ghazawi, George Washington University, Washington, DC
Italo Epicoco, Euromediterranean Center for Climate Change & University of Salento, Italy
Sandro Fiore, Euromediterranean Center for Climate Change & University of Salento, Italy
Luigi Fusco, European Space Agency, ESA-Esrin, Italy
Samuel D. Gasster, The Aerospace Corporation, El Segundo, California
David Gillis, Naval Research Laboratory, Washington, DC
José González-Mora, University of Málaga, Spain
Robert O. Green, Jet Propulsion Laboratory & California Institute of Technology, California
Nicolás Guil, University of Málaga, Spain
Robert S. Haxo, Carnegie Institution of Washington, Stanford, California
Luis O. Jiménez-Rodríguez, University of Puerto Rico at Mayaguez
David E. Knapp, Carnegie Institution of Washington, Stanford, California
Craig A. Lee, The Aerospace Corporation, El Segundo, California
Jacqueline Le Moigne, NASA Goddard Space Flight Center, Greenbelt, Maryland
Pablo Martínez, University of Extremadura, Cáceres, Spain
James W. Palko, The Aerospace Corporation, El Segundo, California
Rosa Pérez, University of Extremadura, Cáceres, Spain
Antonio Plaza, University of Extremadura, Cáceres, Spain
Javier Plaza, University of Extremadura, Cáceres, Spain
Manuel Prieto, Complutense University of Madrid, Spain
Gianvito Quarta, Institute of Atmospheric Sciences and Climate, CNR, Bologna, Italy
Christian Retscher, European Space Agency, ESA-Esrin, Italy
Wilson Rivera-Gallego, University of Puerto Rico at Mayaguez, Puerto Rico
Edmundo Sáez, University of Córdoba, Spain
Javier Setoain, Complutense University of Madrid, Spain
Mohamed Taher, George Washington University, Washington, DC
Christian Tenllado, Complutense University of Madrid, Spain
James C. Tilton, NASA Goddard Space Flight Center, Greenbelt, Maryland
Francisco Tirado, Complutense University of Madrid, Spain
David Valencia, University of Extremadura, Cáceres, Spain
Miguel Vélez-Reyes, University of Puerto Rico at Mayaguez, Puerto Rico
Jianwei Wang, University of Maryland Baltimore County, Baltimore, Maryland
Emilio L. Zapata, University of Málaga, Spain
Chapter 1
Introduction

Antonio Plaza
University of Extremadura, Spain

Chein-I Chang
University of Maryland, Baltimore County

Contents
1.1 Preface 1
1.2 Contents 2
  1.2.1 Organization of Chapters in This Volume 3
  1.2.2 Brief Description of Chapters in This Volume 3
1.3 Distinguishing Features of the Book 6
1.4 Summary 7

1.1 Preface

Advances in sensor technology are revolutionizing the way remotely sensed data are collected, managed, and analyzed. The incorporation of latest-generation sensors into airborne and satellite platforms is currently producing a nearly continual stream of high-dimensional data, and this explosion in the amount of collected information has rapidly created new processing challenges. In particular, many current and future applications of remote sensing in Earth science, space science, and soon in exploration science require real- or near-real-time processing capabilities. Relevant examples include environmental studies, military applications, and the tracking and monitoring of hazards such as wildland and forest fires, oil spills, and other types of chemical/biological contamination.

To address the computational requirements introduced by many time-critical applications, several research efforts have recently been directed towards the incorporation of high-performance computing (HPC) models in remote sensing missions. HPC is an integrated computing environment for solving large-scale computationally demanding problems such as those involved in many remote sensing studies.
With the aim of providing a cross-disciplinary forum that will foster collaboration and development in those areas, this book has been designed to serve as one of the first available references specifically focused on describing recent advances in the field of HPC
applied to remote sensing problems. As a result, the content of the book has been organized to appeal to both remote sensing scientists and computer engineers alike. On the one hand, remote sensing scientists will benefit by becoming aware of the extremely high computational requirements introduced by most application areas in Earth and space observation. On the other hand, computer engineers will benefit from the wide range of parallel processing strategies discussed in the book. However, the material presented in this book will also be of great interest to researchers and practitioners working in many other scientific and engineering applications, in particular, those related to the development of systems and techniques for collecting, storing, and analyzing extremely high-dimensional collections of data.

1.2 Contents

The contents of this book have been organized as follows. First, an introductory part addressing some key concepts in the field of computing applied to remote sensing, along with an extensive review of available and future developments in this area, is provided. This part also covers other application areas not necessarily related to remote sensing, such as multimedia and video processing, chemical/biological standoff detection, and medical imaging. Then, three main application-oriented parts follow, each of which illustrates a specific parallel computing paradigm. In particular, the HPC-based techniques comprised in these parts include multiprocessor (cluster-based) systems, large-scale and heterogeneous networks of computers, and specialized hardware architectures for remotely sensed data analysis and interpretation. Combined, the four parts deliver an excellent snapshot of the state-of-the-art in those areas, and offer a thoughtful perspective on the potential and emerging challenges of applying HPC paradigms to remote sensing problems:

- Part I: General.
This part, comprising Chapters 2 and 3, develops basic concepts about HPC in remote sensing and provides a detailed review of existing and planned HPC systems in this area. Other areas that share common aspects with remote sensing data processing are also covered, including multimedia and video processing.

- Part II: Multiprocessor systems. This part, comprising Chapters 4–8, includes a compendium of algorithms and techniques for HPC-based remote sensing data analysis using multiprocessor systems such as clusters and networks of computers, including massively parallel facilities.

- Part III: Large-scale and heterogeneous distributed computing. The focus of this part, which comprises Chapters 9–13, is on parallel techniques for remote sensing data analysis using large-scale distributed platforms, with special emphasis on grid computing environments and fully heterogeneous networks of workstations.
- Part IV: Specialized architectures. The last part of this book comprises Chapters 14–18 and is devoted to systems and architectures for at-sensor and real-time collection and analysis of remote sensing data using specialized hardware and embedded systems. The part also includes specific aspects about current trends in remote sensing sensor design and operation.

1.2.1 Organization of Chapters in This Volume

The first part of the book (General) consists of two chapters that include basic concepts that will appeal to both students and practitioners who have not had a formal education in remote sensing and/or computer engineering. This part will also be of interest to remote sensing and general-purpose HPC specialists, who can greatly benefit from the exhaustive review of techniques and discussion on future data processing perspectives in this area. Also, general-purpose specialists will become aware of other application areas of HPC (e.g., multimedia and video processing) in which the design of techniques and parallel processing strategies to deal with extremely large computational requirements follows a similar pattern to that used to deal with remotely sensed data sets. On the other hand, the three application-oriented parts that follow (Multiprocessor systems, Large-scale and heterogeneous distributed computing, and Specialized architectures) are each composed of five selected chapters that will appeal to the vast scientific community devoted to designing and developing efficient techniques for remote sensing data analysis. This includes commercial companies working on intelligence and defense applications, Earth and space administrations such as NASA or the European Space Agency (ESA) – both of them represented in the book via several contributions – and universities with programs in remote sensing, Earth and space sciences, computer architecture, and computer engineering.
Also, the growing interest in some emerging areas of remote sensing, such as hyperspectral imaging (which will receive special attention in this volume), should make this book a timely reference.

1.2.2 Brief Description of Chapters in This Volume

We provide below a description of the chapters contributed by different authors. It should be noted that all the techniques and methods presented in those chapters are well consolidated and cover almost the entire spectrum of current and future data processing techniques in remote sensing applications. We specifically avoided repetition of topics in order to complete a timely compilation of realistic and successful efforts in the field. Each chapter was contributed by a reputed expert or a group of experts in the designated specialty areas. A brief outline of each contribution follows:

- Chapter 1. Introduction. The present chapter provides an introduction to the book and describes the main innovative contributions covered by this volume and its individual chapters.
- Chapter 2. High-Performance Computer Architectures for Remote Sensing Data Analysis: Overview and Case Study. This chapter provides a review of the state-of-the-art in the design of HPC systems for remote sensing. The chapter also includes an application case study in which the pixel purity index (PPI), a well-known remote sensing data processing algorithm included in Kodak's Research Systems ENVI (a very popular remote sensing-oriented commercial software package), is implemented using different types of HPC platforms, such as a massively parallel multiprocessor, a heterogeneous network of distributed computers, and a specialized hardware architecture.

- Chapter 3. Computer Architectures for Multimedia and Video Analysis. This chapter focuses on multimedia processing as another example application with highly demanding computational power and aspects similar to those involved in many remote sensing problems. In particular, the chapter discusses new computer architectures such as graphics processing units (GPUs) and multimedia extensions in the context of real applications.

- Chapter 4. Parallel Implementation of the ORASIS Algorithm for Remote Sensing Data Analysis. This chapter presents a parallel version of ORASIS (the Optical Real-Time Adaptive Spectral Identification System) that was recently developed as part of a U.S. Department of Defense program. The ORASIS system comprises a series of algorithms developed at the Naval Research Laboratory for the analysis of remotely sensed hyperspectral image data.

- Chapter 5. Parallel Implementation of the Recursive Approximation of an Unsupervised Hierarchical Segmentation Algorithm. This chapter describes a parallel implementation of a recursive approximation of the hierarchical image segmentation algorithm developed at NASA.
The chapter also demonstrates the computational efficiency of the algorithm using remotely sensed data collected by the Landsat Thematic Mapper (a multispectral instrument).

- Chapter 6. Computing for Analysis and Modeling of Hyperspectral Imagery. In this chapter, several analytical methods employed in vegetation and ecosystem studies using remote sensing instruments are developed. The chapter also summarizes the most common HPC-based approaches used to meet these analytical demands, and provides examples with computing clusters. Finally, the chapter discusses the emerging use of other HPC-based techniques for the above purpose, including data processing onboard aircraft and spacecraft platforms, and distributed Internet computing.

- Chapter 7. Parallel Implementation of Morphological Neural Networks for Hyperspectral Image Analysis. This chapter explores in detail the utilization of parallel neural network architectures for solving remote sensing problems. The chapter further develops a new morphological/neural parallel algorithm for the analysis of remotely sensed data, which is implemented using both massively parallel (homogeneous) clusters and fully heterogeneous networks of distributed workstations.
- Chapter 8. Parallel Wildland Fire Monitoring and Tracking Using Remotely Sensed Data. This chapter focuses on the use of HPC-based remote sensing techniques to address natural disasters, emphasizing the (near) real-time computational requirements introduced by time-critical applications. The chapter also develops several innovative algorithms, including morphological and target detection approaches, to monitor and track one particular type of hazard, wildland fires, using remotely sensed data.

- Chapter 9. An Introduction to Grids for Remote Sensing Applications. This chapter introduces grid computing technology in preparation for the chapters to follow. The chapter first reviews previous approaches to distributed computing and then introduces current Web and grid service standards, along with some end-user tools for building grid applications. This is followed by a survey of current grid infrastructure and science projects relevant to remote sensing.

- Chapter 10. Remote Sensing Grids: Architecture and Implementation. This chapter applies the grid computing paradigm to the domain of Earth remote sensing systems by combining the concepts of remote sensing or sensor Web systems with those of grid computing. In order to provide a specific example and context for discussing remote sensing grids, the design of a weather forecasting and climate science grid is presented and discussed.

- Chapter 11. Open Grid Services for Envisat and Earth Observation Applications. This chapter first provides an overview of some ESA Earth Observation missions, and of the software tools that ESA currently provides for facilitating data handling and analysis. Then, the chapter describes a dedicated Earth-science grid infrastructure, developed by the European Space Research Institute (ESRIN) at ESA in the context of DATAGRID, the first large European Commission-funded grid project.
Different examples of remote sensing applications integrated in this system are also given. r Chapter 12. Design and Implementation of a Grid Computing Environment for Remote Sensing. This chapter develops a new dynamic Earth Observation system specifically tuned to manage huge quantities of data coming from space missions. The system combines recent grid computing technologies, concepts related to problem solving environments, and other HPC-based technologies. A comparison of the system to other classic approaches is also provided. r Chapter 13. A Solutionware for Hyperspectral Image Processing and Analysis. This chapter describes the concept of an integrated process for hyperspectral image analysis, based on a solutionware (i.e., a set of catalogued tools that allow for the rapid construction of data processing algorithms and applications). Parallel processing implementations of some of the tools in the Itanium architecture are presented, and a prototype version of a hyperspectral image processing toolbox over the grid, called Grid-HSI, is also described. r Chapter 14. AVIRIS and Related 21st Century Imaging Spectrometers for Earth and Space Science. This chapter uses the NASA Jet Propulsion
Laboratory’s Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), one of the most advanced hyperspectral remote sensing instruments currently available, to review the critical characteristics of an imaging spectrometer instrument and the corresponding characteristics of the measured spectra. The wide range of scientific research as well as application objectives pursued with AVIRIS is briefly presented. Roles for the application of high-performance computing methods to AVIRIS data sets are discussed. r Chapter 15. Remote Sensing and High-Performance Reconfigurable Computing Systems. This chapter discusses the role of reconfigurable computing using field programmable gate arrays (FPGAs) for onboard processing of remotely sensed data. The chapter also describes several case studies of remote sensing applications in which reconfigurable computing has played an important role, including cloud detection and dimensionality reduction of hyperspectral imagery. r Chapter 16. FPGA Design for Real-Time Implementation of Constrained Energy Minimization for Hyperspectral Target Detection. This chapter describes an FPGA implementation of the constrained energy minimization (CEM) algorithm, which has been widely used for hyperspectral detection and classification. The main feature of the FPGA design provided in this chapter is the use of the COordinate Rotation DIgital Computer (CORDIC) algorithm to convert a Givens rotation of a vector into a set of shift-add operations, which allows for efficient implementation in specialized hardware architectures. r Chapter 17. Real-Time Online Processing of Hyperspectral Imagery for Target Detection and Discrimination. This chapter describes a real-time online processing technique for fast and accurate exploitation of hyperspectral imagery.
The system has been specifically developed to satisfy the extremely high computational requirements of many practical remote sensing applications, such as target detection and discrimination, in which an immediate data analysis result is required for (near) real-time decision-making. r Chapter 18. Real-Time Onboard Hyperspectral Image Processing Using Programmable Graphics Hardware. Finally, this chapter addresses the emerging use of graphic processing units (GPUs) for onboard remote sensing data processing. Driven by the ever-growing demands of the video-game industry, GPUs have evolved from expensive application-specific units into highly parallel programmable systems. In this chapter, GPU-based implementations of remote sensing data processing algorithms are presented and discussed. 1.3 Distinguishing Features of the Book Before concluding this introduction, the editors would like to stress several distinguishing features of this book. First and foremost, this book is the first volume that is entirely devoted to providing a perspective on the state-of-the-art of HPC techniques
in the context of remote sensing problems. In order to address the need for a consolidated reference in this area, the editors have made significant efforts to invite highly recognized experts in academia, institutions, and commercial companies to write relevant chapters focused on their vast expertise in this area, and share their knowledge with the community. Second, this book provides a compilation of several well-established techniques covering most aspects of the current spectrum of processing techniques in remote sensing, including supervised and unsupervised techniques for data acquisition, calibration, correction, classification, segmentation, model inversion and visualization. Further, many of the application areas addressed in this book are of great social relevance and impact, including chemical/biological standoff detection, forest fire monitoring and tracking, etc. Finally, the variety and heterogeneity of parallel computing techniques and architectures discussed in the book are not to be found in any other similar textbook. 1.4 Summary The wide range of computer architectures (including homogeneous and heterogeneous clusters and groups of clusters, large-scale distributed platforms and grid computing environments, specialized architectures based on reconfigurable computing, and commodity graphic hardware) and data processing techniques covered by this book exemplifies a subject area that has drawn together an eclectic collection of participants, but increasingly this is the nature of many endeavors at the cutting edge of science and technology. In this regard, one of the main purposes of this book is to reflect the increasing sophistication of a field that is rapidly maturing at the intersection of many different disciplines, including not only remote sensing or computer architecture/engineering, but also signal and image processing, optics, electronics, and aerospace engineering.
The ultimate goal of this book is to provide readers with a peek at the cutting-edge research in the use of HPC-based techniques and practices in the context of remote sensing applications. The editors hope that this volume will serve as a useful reference for practitioners and engineers working in the above and related areas. Last but not least, the editors gratefully thank all the contributors for sharing their vast expertise with the readers. Without their outstanding contributions, this book could not have been completed.
Chapter 2 High-Performance Computer Architectures for Remote Sensing Data Analysis: Overview and Case Study Antonio Plaza, University of Extremadura, Spain Chein-I Chang, University of Maryland, Baltimore Contents 2.1 Introduction ............ 10 2.2 Related Work ............ 13 2.2.1 Evolution of Cluster Computing in Remote Sensing ............ 14 2.2.2 Heterogeneous Computing in Remote Sensing ............ 15 2.2.3 Specialized Hardware for Onboard Data Processing ............ 16 2.3 Case Study: Pixel Purity Index (PPI) Algorithm ............ 17 2.3.1 Algorithm Description ............ 17 2.3.2 Parallel Implementations ............ 20 2.3.2.1 Cluster-Based Implementation of the PPI Algorithm ............ 20 2.3.2.2 Heterogeneous Implementation of the PPI Algorithm ............ 22 2.3.2.3 FPGA-Based Implementation of the PPI Algorithm ............ 23 2.4 Experimental Results ............ 27 2.4.1 High-Performance Computer Architectures ............ 27 2.4.2 Hyperspectral Data ............ 29 2.4.3 Performance Evaluation ............ 31 2.4.4 Discussion ............ 35 2.5 Conclusions and Future Research ............ 36 2.6 Acknowledgments ............ 37 References ............ 38 Advances in sensor technology are revolutionizing the way remotely sensed data are collected, managed, and analyzed. In particular, many current and future applications of remote sensing in earth science, space science, and soon in exploration science require real- or near-real-time processing capabilities. In recent years, several efforts
have been directed towards the incorporation of high-performance computing (HPC) models to remote sensing missions. In this chapter, an overview of recent efforts in the design of HPC systems for remote sensing is provided. The chapter also includes an application case study in which the pixel purity index (PPI), a well-known remote sensing data processing algorithm, is implemented in different types of HPC platforms such as a massively parallel multiprocessor, a heterogeneous network of distributed computers, and a specialized field programmable gate array (FPGA) hardware architecture. Analytical and experimental results are presented in the context of a real application, using hyperspectral data collected by NASA’s Jet Propulsion Laboratory over the World Trade Center area in New York City, right after the terrorist attacks of September 11th. Combined, these parts deliver an excellent snapshot of the state-of-the-art of HPC in remote sensing, and offer a thoughtful perspective of the potential and emerging challenges of adapting HPC paradigms to remote sensing problems. 2.1 Introduction The development of computationally efficient techniques for transforming the massive amount of remote sensing data into scientific understanding is critical for space-based earth science and planetary exploration [1]. The wealth of information provided by latest-generation remote sensing instruments has opened groundbreaking perspectives in many applications, including environmental modeling and assessment for Earth-based and atmospheric studies, risk/hazard prevention and response including wild land fire tracking, biological threat detection, monitoring of oil spills and other types of chemical contamination, target detection for military and defense/security purposes, urban planning and management studies, etc. [2].
Most of the above-mentioned applications require analysis algorithms able to provide a response in real- or near-real-time. This is quite an ambitious goal in most current remote sensing missions, mainly because the price paid for the rich information available from latest-generation sensors is the enormous amounts of data that they generate [3, 4, 5]. A relevant example of a remote sensing application in which the use of HPC technologies such as parallel and distributed computing is highly desirable is hyperspectral imaging [6], in which an image spectrometer collects hundreds or even thousands of measurements (at multiple wavelength channels) for the same area on the surface of the Earth (see Figure 2.1). The scenes provided by such sensors are often called “data cubes,” to denote the extremely high dimensionality of the data. For instance, the NASA Jet Propulsion Laboratory’s Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS) [7] is now able to record the visible and near-infrared spectrum (wavelength region from 0.4 to 2.5 micrometers) of the reflected light of an area 2 to 12 kilometers wide and several kilometers long using 224 spectral bands (see Figure 3.8). The resulting cube is a stack of images in which each pixel (vector) has an associated spectral signature or ‘fingerprint’ that uniquely characterizes the underlying objects, and the resulting data volume typically comprises several GBs per flight. Although hyperspectral imaging
Figure 2.1 The concept of hyperspectral imaging in remote sensing. (The figure shows a pure pixel of water and mixed pixels of vegetation + soil and soil + rocks, each with its reflectance spectrum plotted against wavelength in nanometers.)
is a good example of the computational requirements introduced by remote sensing applications, there are many other remote sensing areas in which high-dimensional data sets are also produced (several of them are covered in detail in this book). However, the extremely high computational requirements already introduced by hyperspectral imaging applications (and the fact that these systems will continue increasing their spatial and spectral resolutions in the near future) make them an excellent case study to illustrate the need for HPC systems in remote sensing and will be used in this chapter for demonstration purposes. Specifically, the utilization of HPC systems in hyperspectral imaging applications has become more and more widespread in recent years. The idea developed by the computer science community of using COTS (commercial off-the-shelf) computer equipment, clustered together to work as a computational “team,” is a very attractive solution [8]. This strategy is often referred to as Beowulf-class cluster computing [9] and has already offered access to greatly increased computational power, but at a low cost (commensurate with falling commercial PC costs) in a number of remote sensing applications [10, 11, 12, 13, 14, 15]. In theory, the combination of commercial forces driving down cost and positive hardware trends (e.g., CPU peak power doubling every 18–24 months, storage capacity doubling every 12–18 months, and networking bandwidth doubling every 9–12 months) offers supercomputing performance that can now be applied to a much wider range of remote sensing problems.
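A quick calculation shows what such doubling times imply over a multi-year horizon; the six-year span and the use of range midpoints below are illustrative assumptions, not figures from the text:

```python
# Back-of-the-envelope growth factors implied by the doubling times quoted
# above, evaluated over an illustrative six-year span.

def growth_factor(doubling_months: float, span_months: float) -> float:
    """Multiplicative growth after span_months, given a doubling time."""
    return 2.0 ** (span_months / doubling_months)

SPAN = 72.0  # six years, in months

# Midpoint of each quoted doubling-time range (an assumption for illustration).
trends = {
    "CPU peak power (18-24 mo)": 21.0,
    "storage capacity (12-18 mo)": 15.0,
    "network bandwidth (9-12 mo)": 10.5,
}

for name, months in trends.items():
    print(f"{name}: x{growth_factor(months, SPAN):.0f} in 6 years")
```

Even the slowest of the three trends compounds to roughly an order of magnitude in six years, which is why commodity hardware tracking these curves can keep pace with growing remote sensing data volumes.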
Although most parallel techniques and systems for image information processing employed by NASA and other institutions during the last decade have chiefly been homogeneous in nature (i.e., they are made up of identical processing units, thus simplifying the design of parallel solutions adapted to those systems), a recent trend in the design of HPC systems for data-intensive problems is to utilize highly heterogeneous computing resources [16]. This heterogeneity is seldom planned, arising mainly as a result of technology evolution over time and computer market sales and trends. In this regard, networks of heterogeneous COTS resources can realize a very high level of aggregate performance in remote sensing applications [17], and the pervasive availability of these resources has resulted in the current notion of grid computing [18], which endeavors to make such distributed computing platforms easy to utilize in different application domains, much like the World Wide Web has made it easy to distribute Web content. It is expected that grid-based HPC systems will soon represent the tool of choice for the scientific community devoted to very high-dimensional data analysis in remote sensing and other fields. Finally, although remote sensing data processing algorithms generally map quite nicely to parallel systems made up of commodity CPUs, these systems are generally expensive and difficult to adapt to onboard remote sensing data processing scenarios, in which low-weight and low-power integrated components are essential to reduce mission payload and obtain analysis results in real time, i.e., at the same time as the data are collected by the sensor. In this regard, an exciting new development in the field of commodity computing is the emergence of programmable hardware devices such as field programmable gate arrays (FPGAs) [19, 20, 21] and graphic processing units (GPUs) [22], which can bridge the gap towards onboard and real-time analysis of remote sensing data.
FPGAs are now fully reconfigurable, which allows one to
adaptively select a data processing algorithm (out of a pool of available ones) to be applied onboard the sensor from a control station on Earth. On the other hand, the emergence of GPUs (driven by the ever-growing demands of the video-game industry) has allowed these systems to evolve from expensive application-specific units into highly parallel and programmable commodity components. Current GPUs can deliver a peak performance in the order of 360 Gigaflops (Gflops), more than seven times the performance of the fastest x86 dual-core processor (around 50 Gflops). The ever-growing computational demands of remote sensing applications can fully benefit from compact hardware components and take advantage of the small size and relatively low cost of these units as compared to clusters or networks of computers. The main purpose of this chapter is to provide an overview of different HPC paradigms in the context of remote sensing applications. The chapter is organized as follows: r Section 2.2 describes relevant previous efforts in the field, such as the evolution of cluster computing in remote sensing applications, the emergence of distributed networks of computers as a cost-effective means to solve remote sensing problems, and the exploitation of specialized hardware architectures in remote sensing missions. r Section 2.3 provides an application case study: the well-known Pixel Purity Index (PPI) algorithm [23], which has been widely used to analyze hyperspectral images and is available in commercial software. The algorithm is first briefly described and several issues encountered in its implementation are discussed. Then, we provide HPC implementations of the algorithm, including a cluster-based parallel version, a variation of this version specifically tuned for heterogeneous computing environments, and an FPGA-based implementation.
r Section 2.4 also provides an experimental comparison of the proposed implementations of PPI using several high-performance computing architectures. Specifically, we use Thunderhead, a massively parallel Beowulf cluster at NASA’s Goddard Space Flight Center, a heterogeneous network of distributed workstations, and a Xilinx Virtex-II FPGA device. The considered application is based on the analysis of hyperspectral data collected by the AVIRIS instrument over the World Trade Center area in New York City right after the terrorist attacks of September 11th. r Finally, Section 2.5 concludes with some remarks and plausible future research lines. 2.2 Related Work This section first provides an overview of the evolution of cluster computing architectures in the context of remote sensing applications, from the initial developments in Beowulf systems at NASA centers to the current systems being employed for remote
sensing data processing. Then, an overview of recent advances in heterogeneous computing systems is given. These systems can be applied for the sake of distributed processing of remotely sensed data sets. The section concludes with an overview of hardware-based implementations for onboard processing of remote sensing data sets. 2.2.1 Evolution of Cluster Computing in Remote Sensing Beowulf clusters were originally developed with the purpose of creating a cost-effective parallel computing system able to satisfy specific computational requirements in the earth and space sciences communities. Initially, the need for large amounts of computation was identified for processing multispectral imagery with only a few bands [24]. As sensor instruments incorporated hyperspectral capabilities, it was soon recognized that computer mainframes and mini-computers could not provide sufficient power for processing these kinds of data. The Linux operating system offered the potential of being quite reliable due to the large number of developers and users. Later it became apparent that large numbers of developers could be a disadvantage as well as an advantage. In 1994, a team was put together at NASA’s Goddard Space Flight Center (GSFC) to build a cluster consisting only of commodity hardware (PCs) running Linux, which resulted in the first Beowulf cluster [25]. It consisted of sixteen 100 MHz 486DX4-based PCs connected with two hub-based Ethernet networks tied together with channel bonding software so that the two networks acted like one network running at twice the speed. The next year Beowulf-II, a 16-PC cluster based on 100 MHz Pentium PCs, was built and performed about 3 times faster, but also demonstrated a much higher reliability. In 1996, a Pentium-Pro cluster at Caltech demonstrated a sustained Gigaflop on a remote sensing-based application.
This was the first time a commodity cluster had shown high-performance potential. Up until 1997, Beowulf clusters were in essence engineering prototypes, that is, they were built by those who were going to use them. However, in 1997, a project was started at GSFC to build a commodity cluster that was intended to be used by those who had not built it, the HIVE (Highly-parallel Integrated Virtual Environment) project. The idea was to have workstations distributed among different locations and a large number of compute nodes (the compute core) concentrated in one area. The workstations would share the compute core as though it were a part of each. Although the original HIVE only had one workstation, many users were able to access it from their own workstations over the Internet. The HIVE was also the first commodity cluster to exceed a sustained 10 Gigaflops on a remote sensing algorithm. Currently, an evolution of the HIVE is being used at GSFC for remote sensing data processing calculations. The system, called Thunderhead (see Figure 2.2), is a 512-processor homogeneous Beowulf cluster composed of 256 dual 2.4 GHz Intel Xeon nodes, each with 1 GB of memory and 80 GB of hard disk. The total peak performance of the system is 2457.6 GFlops. Along with the 512-processor computer core, Thunderhead has several nodes attached to the core with a 2 GHz optical fibre Myrinet. NASA is currently supporting additional massively parallel clusters for remote sensing applications, such as the Columbia supercomputer at NASA Ames Research
Center, a 10,240-CPU SGI Altix supercluster, with Intel Itanium 2 processors, 20 terabytes of total memory, and heterogeneous interconnects including an InfiniBand network and 10 Gigabit Ethernet. This system is listed as #8 in the November 2006 version of the Top500 list of supercomputer sites available online at http://www.top500.org. Figure 2.2 Thunderhead Beowulf cluster (512 processors) at NASA’s Goddard Space Flight Center in Maryland. Among many other examples of HPC systems included in the list that are currently being exploited for remote sensing and earth science-based applications, we cite three relevant systems for illustrative purposes. The first one is MareNostrum, an IBM cluster with 10,240 processors, 2.3 GHz Myrinet connectivity, and 20,480 GB of main memory available at Barcelona Supercomputing Center (#5 in Top500). Another example is Jaws, a Dell PowerEdge cluster with 3 GHz Infiniband connectivity, 5,200 GB of main memory, and 5,200 processors available at Maui High-Performance Computing Center (MHPCC) in Hawaii (#11 in Top500). A final example is NEC’s Earth Simulator Center, a 5,120-processor system developed by Japan’s Aerospace Exploration Agency and the Agency for Marine-Earth Science and Technology (#14 in Top500). It is highly anticipated that many new supercomputer systems will be specifically developed in forthcoming years to support remote sensing applications. 2.2.2 Heterogeneous Computing in Remote Sensing In the previous subsection, we discussed the use of cluster technologies based on multiprocessor systems as a high-performance and economically viable tool for efficient processing of remotely sensed data sets. With the commercial availability
of networking hardware, it soon became obvious that networked groups of machines distributed among different locations could be used together by one single parallel remote sensing code as a distributed-memory machine [26]. Of course, such networks were originally designed and built to connect heterogeneous sets of machines. As a result, heterogeneous networks of workstations (NOWs) soon became a very popular tool for distributed computing with essentially unbounded sets of machines, in which the number and locations of machines may not be explicitly known [16], as opposed to cluster computing, in which the number and locations of nodes are known and relatively fixed. An evolution of the concept of distributed computing described above resulted in the current notion of grid computing [18], in which the number and locations of nodes are relatively dynamic and have to be discovered at run-time. It should be noted that this section specifically focuses on distributed computing environments without meta-computing or grid computing, which aims at providing users access to services distributed over wide-area networks. Several chapters of this volume provide detailed analyses of the use of grids for remote sensing applications, and this issue is not further discussed here. There are currently several ongoing research efforts aimed at efficient distributed processing of remote sensing data. Perhaps the simplest example is the use of heterogeneous versions of data processing algorithms developed for Beowulf clusters, for instance, by resorting to heterogeneous-aware variations of homogeneous algorithms, able to capture the inherent heterogeneity of a NOW and to load-balance the computation among the available resources [27]. This framework allows one to easily port an existing parallel code developed for a homogeneous system to a fully heterogeneous environment, as will be shown in the following subsection.
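As a minimal sketch of such a heterogeneity-aware variation, the workload (for instance, the rows of a hyperspectral data cube) can be partitioned in proportion to the relative speed of each workstation, so faster nodes receive larger chunks. The node names and speed scores below are hypothetical placeholders:

```python
# Heterogeneity-aware workload partitioning sketch: image rows are assigned
# to nodes in proportion to their measured relative speeds. Node names and
# speed scores are hypothetical, for illustration only.

def partition_rows(n_rows: int, speeds: dict[str, float]) -> dict[str, range]:
    """Split row indices 0..n_rows-1 proportionally to node speeds."""
    total = sum(speeds.values())
    shares, acc = {}, 0
    nodes = list(speeds)
    for i, node in enumerate(nodes):
        if i == len(nodes) - 1:
            count = n_rows - acc  # last node absorbs rounding leftovers
        else:
            count = round(n_rows * speeds[node] / total)
        shares[node] = range(acc, acc + count)
        acc += count
    return shares

# Example: a 4-node NOW with unequal speeds, 1000 image rows to distribute.
shares = partition_rows(1000, {"node-a": 1.0, "node-b": 2.0,
                               "node-c": 0.5, "node-d": 1.5})
assert sum(len(r) for r in shares.values()) == 1000
```

In a real system the speed scores would be measured at run-time (e.g., by timing a small benchmark on each workstation), since NOW performance varies with load; a homogeneous cluster is simply the special case in which all speeds are equal.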
Another example is the Common Component Architecture (CCA) [28], which has been used as a plug-and-play environment for the construction of climate, weather, and ocean applications through a set of software components that conform to standardized interfaces. Such components encapsulate much of the complexity of the data processing algorithms inside a black box and expose only well-defined interfaces to other components. Among several other available efforts, another distributed application framework specifically developed for earth science data processing is the Java Distributed Application Framework (JDAF) [29]. Although the two main goals of JDAF are flexibility and performance, we believe that the Java programming language is not mature enough for high-performance computing of large amounts of data. 2.2.3 Specialized Hardware for Onboard Data Processing Over the last few years, several research efforts have been directed towards the incorporation of specialized hardware for accelerating remote sensing-related calculations aboard airborne and satellite sensor platforms. Enabling onboard data processing introduces many advantages, such as the possibility to reduce the data down-link bandwidth requirements at the sensor by both preprocessing data and selecting data to be transmitted based upon predetermined content-based criteria [19, 20]. Onboard processing also reduces the cost and the complexity of ground processing systems so
that they can be affordable to a larger community. Other remote sensing applications that will soon greatly benefit from onboard processing are future web sensor missions as well as future Mars and planetary exploration missions, for which onboard processing would enable autonomous decisions to be made onboard. Despite the appealing perspectives introduced by specialized data processing components, current hardware architectures including FPGAs (on-the-fly reconfigurability) and GPUs (very high performance at low cost) still present some limitations that need to be carefully analyzed when considering their incorporation to remote sensing missions [30]. In particular, the very fine granularity of FPGAs is still not efficient, with extreme situations in which only about 1% of the chip is available for logic while 99% is used for interconnect and configuration. This usually results in a penalty in terms of speed and power. On the other hand, both FPGAs and GPUs are still difficult to radiation-harden (currently-available radiation-tolerant FPGA devices have two orders of magnitude fewer equivalent gates than commercial FPGAs). 2.3 Case Study: Pixel Purity Index (PPI) Algorithm This section provides an application case study that is used in this chapter to illustrate different approaches for efficient implementation of remote sensing data processing algorithms. The algorithm selected as a case study is the PPI [23], one of the most widely used algorithms in the remote sensing community. First, the serial version of the algorithm available in commercial software is described. Then, several parallel implementations are given. 2.3.1 Algorithm Description The PPI algorithm was originally developed by Boardman et al. [23] and was soon incorporated into Kodak’s Research Systems ENVI, one of the most widely used commercial software packages by remote sensing scientists around the world.
The underlying assumption of the PPI algorithm is that the spectral signature associated with each pixel vector measures the response of multiple underlying materials at each site. For instance, it is very likely that the pixel vectors shown in Figure 3.8 would actually contain a mixture of different substances (e.g., different minerals, different types of soils, etc.). This situation, often referred to as the “mixture problem” in hyperspectral analysis terminology [31], is one of the most crucial and distinguishing properties of spectroscopic analysis. Mixed pixels exist for one of two reasons [32]. Firstly, if the spatial resolution of the sensor is not fine enough to separate different materials, these can jointly occupy a single pixel, and the resulting spectral measurement will be a composite of the individual spectra. Secondly, mixed pixels can also result when distinct materials are combined into a homogeneous mixture. This circumstance occurs independent of
the spatial resolution of the sensor. A hyperspectral image is often a combination of the two situations, where a few sites in a scene are pure materials, but many others are mixtures of materials. Figure 2.3 Toy example illustrating the performance of the PPI algorithm in a 2-dimensional space. (The figure shows three random skewers and the extreme data points identified along each of them.) To deal with the mixture problem in hyperspectral imaging, spectral unmixing techniques have been proposed as an inversion technique in which the measured spectrum of a mixed pixel is decomposed into a collection of spectrally pure constituent spectra, called endmembers in the literature, and a set of correspondent fractions, or abundances, that indicate the proportion of each endmember present in the mixed pixel [6]. The PPI algorithm is a tool to automatically search for endmembers that are assumed to be the vertices of a convex hull [23]. The algorithm proceeds by generating a large number of random, N-dimensional unit vectors called “skewers” through the data set. Every data point is projected onto each skewer, and the data points that correspond to extrema in the direction of a skewer are identified and placed on a list (see Figure 2.3). As more skewers are generated, the list grows, and the number of times a given pixel is placed on this list is also tallied. The pixels with the highest tallies are considered the final endmembers. The inputs to the algorithm are a hyperspectral data cube F with N dimensions; a maximum number of endmembers to be extracted, E; the number of random skewers to be generated during the process, k; a cut-off threshold value, tv, used to select as final endmembers only those pixels that have been selected as extreme pixels at least tv times throughout the PPI process; and a threshold angle, ta, used to discard redundant endmembers during the process. The output of the algorithm is a set of E final endmembers {e_e}, e = 1, ..., E.
The algorithm can be summarized by the following steps:
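As a rough illustration of the projection-and-tally procedure described above, here is a minimal NumPy sketch. The function name `ppi`, the array shapes, and the synthetic two-endmember data are assumptions for illustration, not the authors' reference implementation; the redundancy-pruning step based on the threshold angle ta is omitted for brevity.

```python
import numpy as np

def ppi(pixels, k, tv, rng=None):
    """Pixel Purity Index sketch.
    pixels: (num_pixels, n_bands) array of spectra (assumed layout).
    k: number of random skewers; tv: cut-off tally threshold.
    Returns indices of pixels found extreme at least tv times."""
    rng = np.random.default_rng(rng)
    num_pixels, n_bands = pixels.shape
    tally = np.zeros(num_pixels, dtype=int)
    for _ in range(k):
        skewer = rng.normal(size=n_bands)
        skewer /= np.linalg.norm(skewer)   # random unit vector ("skewer")
        proj = pixels @ skewer             # project every pixel onto it
        tally[np.argmin(proj)] += 1        # tally the extremum at each end
        tally[np.argmax(proj)] += 1
    return np.flatnonzero(tally >= tv)

# Synthetic data following the linear mixture model: each mixed pixel is a
# convex combination of two pure endmember spectra (3 bands, hypothetical).
rng = np.random.default_rng(0)
endmembers = np.array([[1.0, 0.0, 0.2], [0.1, 1.0, 0.8]])
a = rng.uniform(0.05, 0.95, size=(500, 1))        # abundance of endmember 0
mixed = a * endmembers[0] + (1 - a) * endmembers[1]
data = np.vstack([endmembers, mixed])             # pure pixels at rows 0, 1
print(ppi(data, k=200, tv=10))                    # → [0 1]
```

Because every mixed pixel lies strictly inside the segment between the two endmembers, each skewer's extrema fall on the pure pixels, so rows 0 and 1 accumulate all the tallies regardless of which skewers are drawn.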