Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt
                                 Copyright © 2001 John Wiley & Sons, Inc.
              ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic)




DIGITAL IMAGE
PROCESSING
PIKS Inside


Third Edition



WILLIAM K. PRATT
PixelSoft, Inc.
Los Altos, California




A Wiley-Interscience Publication
JOHN WILEY & SONS, INC.
New York • Chichester • Weinheim • Brisbane • Singapore • Toronto
Designations used by companies to distinguish their products are often
claimed as trademarks. In all instances where John Wiley & Sons, Inc., is
aware of a claim, the product names appear in initial capital or all capital
letters. Readers, however, should contact the appropriate companies for
more complete information regarding trademarks and registration.

Copyright © 2001 by John Wiley & Sons, Inc., New York. All rights
reserved.

No part of this publication may be reproduced, stored in a retrieval system
or transmitted in any form or by any means, electronic or mechanical,
including uploading, downloading, printing, decompiling, recording or
otherwise, except as permitted under Sections 107 or 108 of the 1976
United States Copyright Act, without the prior written permission of the
Publisher. Requests to the Publisher for permission should be addressed to
the Permissions Department, John Wiley & Sons, Inc., 605 Third Avenue,
New York, NY 10158-0012, (212) 850-6011, fax (212) 850-6008, E-Mail:
PERMREQ@WILEY.COM.

This publication is designed to provide accurate and authoritative
information in regard to the subject matter covered. It is sold with the
understanding that the publisher is not engaged in rendering professional
services. If professional advice or other expert assistance is required, the
services of a competent professional person should be sought.

ISBN 0-471-22132-5

This title is also available in print as ISBN 0-471-37407-5.

For more information about Wiley products, visit our web site at
www.Wiley.com.
To my wife, Shelly
whose image needs no enhancement
CONTENTS




Preface                                              xiii

Acknowledgments                                      xvii


PART 1 CONTINUOUS IMAGE CHARACTERIZATION               1

1   Continuous Image Mathematical Characterization     3
    1.1   Image Representation, 3
    1.2   Two-Dimensional Systems, 5
    1.3   Two-Dimensional Fourier Transform, 10
    1.4   Image Stochastic Characterization, 15

2   Psychophysical Vision Properties                  23
    2.1   Light Perception, 23
    2.2   Eye Physiology, 26
    2.3   Visual Phenomena, 29
    2.4   Monochrome Vision Model, 33
    2.5   Color Vision Model, 39

3   Photometry and Colorimetry                        45
    3.1 Photometry, 45
    3.2 Color Matching, 49

       3.3 Colorimetry Concepts, 54
       3.4 Tristimulus Value Transformation, 61
       3.5 Color Spaces, 63


PART 2        DIGITAL IMAGE CHARACTERIZATION                             89

4      Image Sampling and Reconstruction                                 91
       4.1 Image Sampling and Reconstruction Concepts, 91
       4.2 Image Sampling Systems, 99
       4.3 Image Reconstruction Systems, 110

5      Discrete Image Mathematical Representation                        121
       5.1   Vector-Space Image Representation, 121
       5.2   Generalized Two-Dimensional Linear Operator, 123
       5.3   Image Statistical Characterization, 127
       5.4   Image Probability Density Models, 132
       5.5   Linear Operator Statistical Representation, 136

6      Image Quantization                                                141
       6.1 Scalar Quantization, 141
       6.2 Processing Quantized Variables, 147
       6.3 Monochrome and Color Image Quantization, 150

PART 3        DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING                 159

7      Superposition and Convolution                                     161
       7.1   Finite-Area Superposition and Convolution, 161
       7.2   Sampled Image Superposition and Convolution, 170
       7.3   Circulant Superposition and Convolution, 177
       7.4   Superposition and Convolution Operator Relationships, 180

8      Unitary Transforms                                                185
       8.1   General Unitary Transforms, 185
       8.2   Fourier Transform, 189
       8.3   Cosine, Sine, and Hartley Transforms, 195
       8.4   Hadamard, Haar, and Daubechies Transforms, 200
       8.5   Karhunen–Loeve Transform, 207

9      Linear Processing Techniques                                      213
       9.1 Transform Domain Processing, 213
       9.2 Transform Domain Superposition, 216
     9.3 Fast Fourier Transform Convolution, 221
     9.4 Fourier Transform Filtering, 229
     9.5 Small Generating Kernel Convolution, 236

PART 4      IMAGE IMPROVEMENT                                                  241

10   Image Enhancement                                                         243
     10.1   Contrast Manipulation, 243
     10.2   Histogram Modification, 253
     10.3   Noise Cleaning, 261
     10.4   Edge Crispening, 278
     10.5   Color Image Enhancement, 284
     10.6   Multispectral Image Enhancement, 289

11   Image Restoration Models                                                  297
     11.1   General Image Restoration Models, 297
     11.2   Optical Systems Models, 300
     11.3   Photographic Process Models, 304
     11.4   Discrete Image Restoration Models, 312

12   Point and Spatial Image Restoration Techniques                            319
     12.1   Sensor and Display Point Nonlinearity Correction, 319
     12.2   Continuous Image Spatial Filtering Restoration, 325
     12.3   Pseudoinverse Spatial Image Restoration, 335
     12.4   SVD Pseudoinverse Spatial Image Restoration, 349
     12.5   Statistical Estimation Spatial Image Restoration, 355
     12.6   Constrained Image Restoration, 358
     12.7   Blind Image Restoration, 363

13   Geometrical Image Modification                                            371
     13.1   Translation, Minification, Magnification, and Rotation, 371
     13.2   Spatial Warping, 382
     13.3   Perspective Transformation, 386
     13.4   Camera Imaging Model, 389
     13.5   Geometrical Image Resampling, 393

PART 5      IMAGE ANALYSIS                                                     399

14   Morphological Image Processing                                            401
     14.1 Binary Image Connectivity, 401
     14.2 Binary Image Hit or Miss Transformations, 404
     14.3 Binary Image Shrinking, Thinning, Skeletonizing, and Thickening, 411
     14.4 Binary Image Generalized Dilation and Erosion, 422
     14.5 Binary Image Close and Open Operations, 433
     14.6 Gray Scale Image Morphological Operations, 435

15   Edge Detection                                            443
     15.1   Edge, Line, and Spot Models, 443
     15.2   First-Order Derivative Edge Detection, 448
     15.3   Second-Order Derivative Edge Detection, 469
     15.4   Edge-Fitting Edge Detection, 482
     15.5   Luminance Edge Detector Performance, 485
     15.6   Color Edge Detection, 499
     15.7   Line and Spot Detection, 499

16   Image Feature Extraction                                  509
     16.1   Image Feature Evaluation, 509
     16.2   Amplitude Features, 511
     16.3   Transform Coefficient Features, 516
     16.4   Texture Definition, 519
     16.5   Visual Texture Discrimination, 521
     16.6   Texture Features, 529

17   Image Segmentation                                        551
     17.1   Amplitude Segmentation Methods, 552
     17.2   Clustering Segmentation Methods, 560
     17.3   Region Segmentation Methods, 562
     17.4   Boundary Detection, 566
     17.5   Texture Segmentation, 580
     17.6   Segment Labeling, 581

18   Shape Analysis                                            589
     18.1   Topological Attributes, 589
     18.2   Distance, Perimeter, and Area Measurements, 591
     18.3   Spatial Moments, 597
     18.4   Shape Orientation Descriptors, 607
     18.5   Fourier Descriptors, 609

19   Image Detection and Registration                          613
     19.1   Template Matching, 613
     19.2   Matched Filtering of Continuous Images, 616
     19.3   Matched Filtering of Discrete Images, 623
     19.4   Image Registration, 625

PART 6    IMAGE PROCESSING SOFTWARE                                     641

20   PIKS Image Processing Software                                     643
     20.1 PIKS Functional Overview, 643
     20.2 PIKS Core Overview, 663

21   PIKS Image Processing Programming Exercises                        673
     21.1 Program Generation Exercises, 674
     21.2 Image Manipulation Exercises, 675
     21.3 Color Space Exercises, 676
     21.4 Region-of-Interest Exercises, 678
     21.5 Image Measurement Exercises, 679
     21.6 Quantization Exercises, 680
     21.7 Convolution Exercises, 681
     21.8 Unitary Transform Exercises, 682
     21.9 Linear Processing Exercises, 682
     21.10 Image Enhancement Exercises, 683
     21.11 Image Restoration Models Exercises, 685
     21.12 Image Restoration Exercises, 686
     21.13 Geometrical Image Modification Exercises, 687
     21.14 Morphological Image Processing Exercises, 687
     21.15 Edge Detection Exercises, 689
     21.16 Image Feature Extraction Exercises, 690
     21.17 Image Segmentation Exercises, 691
     21.18 Shape Analysis Exercises, 691
     21.19 Image Detection and Registration Exercises, 692


Appendix 1     Vector-Space Algebra Concepts                            693

Appendix 2     Color Coordinate Conversion                              709

Appendix 3     Image Error Measures                                     715

Bibliography                                                            717

Index                                                                   723
PREFACE




In January 1978, I began the preface to the first edition of Digital Image Processing
with the following statement:
   The field of image processing has grown considerably during the past decade
with the increased utilization of imagery in myriad applications coupled with
improvements in the size, speed, and cost effectiveness of digital computers and
related signal processing technologies. Image processing has found a significant role
in scientific, industrial, space, and government applications.
   In January 1991, in the preface to the second edition, I stated:
   Thirteen years later as I write this preface to the second edition, I find the quoted
statement still to be valid. The 1980s have been a decade of significant growth and
maturity in this field. At the beginning of that decade, many image processing tech-
niques were of academic interest only; their execution was too slow and too costly.
Today, thanks to algorithmic and implementation advances, image processing has
become a vital cost-effective technology in a host of applications.
   Now, in this beginning of the twenty-first century, image processing has become
a mature engineering discipline. But advances in the theoretical basis of image pro-
cessing continue. Some of the reasons for this third edition of the book are to correct
defects in the second edition, delete content of marginal interest, and add discussion
of new, important topics. Another motivating factor is the inclusion of interactive,
computer display imaging examples to illustrate image processing concepts. Finally,
this third edition includes computer programming exercises to bolster its theoretical
content. These exercises can be implemented using the Programmer’s Imaging Ker-
nel System (PIKS) application program interface (API). PIKS is an International
Standards Organization (ISO) standard library of image processing operators and
associated utilities. The PIKS Core version is included on a CD affixed to the back
cover of this book.
    The book is intended to be an “industrial strength” introduction to digital image
processing to be used as a text for an electrical engineering or computer science
course in the subject. Also, it can be used as a reference manual for scientists who
are engaged in image processing research, developers of image processing hardware
and software systems, and practicing engineers and scientists who use image pro-
cessing as a tool in their applications. Mathematical derivations are provided for
most algorithms. The reader is assumed to have a basic background in linear system
theory, vector space algebra, and random processes. Proficiency in C language pro-
gramming is necessary for execution of the image processing programming exer-
cises using PIKS.
    The book is divided into six parts. The first three parts cover the basic technolo-
gies that are needed to support image processing applications. Part 1 contains three
chapters concerned with the characterization of continuous images. Topics include
the mathematical representation of continuous images, the psychophysical proper-
ties of human vision, and photometry and colorimetry. In Part 2, image sampling
and quantization techniques are explored along with the mathematical representa-
tion of discrete images. Part 3 discusses two-dimensional signal processing tech-
niques, including general linear operators and unitary transforms such as the
Fourier, Hadamard, and Karhunen–Loeve transforms. The final chapter in Part 3
analyzes and compares linear processing techniques implemented by direct convolu-
tion and Fourier domain filtering.
    The next two parts of the book cover the two principal application areas of image
processing. Part 4 presents a discussion of image enhancement and restoration tech-
niques, including restoration models, point and spatial restoration, and geometrical
image modification. Part 5, entitled “Image Analysis,” concentrates on the extrac-
tion of information from an image. Specific topics include morphological image
processing, edge detection, image feature extraction, image segmentation, object
shape analysis, and object detection.
    Part 6 discusses the software implementation of image processing applications.
This part describes the PIKS API and explains its use as a means of implementing
image processing algorithms. Image processing programming exercises are included
in Part 6.
    This third edition represents a major revision of the second edition. In addition to
Part 6, new topics include an expanded description of color spaces, the Hartley and
Daubechies transforms, wavelet filtering, watershed and snake image segmentation,
and Mellin transform matched filtering. Many of the photographic examples in the
book are supplemented by executable programs for which readers can adjust algo-
rithm parameters and even substitute their own source images.
    Although readers should find this book reasonably comprehensive, many impor-
tant topics allied to the field of digital image processing have been omitted to limit
the size and cost of the book. Among the most prominent omissions are the topics of
pattern recognition, image reconstruction from projections, image understanding,
image coding, scientific visualization, and computer graphics. References to some
of these topics are provided in the bibliography.

                                                              WILLIAM K. PRATT

Los Altos, California
August 2000
ACKNOWLEDGMENTS




The first edition of this book was written while I was a professor of electrical
engineering at the University of Southern California (USC). Image processing
research at USC began in 1962 on a very modest scale, but the program increased in
size and scope with the attendant international interest in the field. In 1971, Dr.
Zohrab Kaprielian, then dean of engineering and vice president of academic
research and administration, announced the establishment of the USC Image
Processing Institute. This environment contributed significantly to the preparation of
the first edition. I am deeply grateful to Professor Kaprielian for his role in
providing university support of image processing and for his personal interest in my
career.
   Also, I wish to thank the following past and present members of the Institute’s
scientific staff who rendered invaluable assistance in the preparation of the first-
edition manuscript: Jean-François Abramatic, Harry C. Andrews, Lee D. Davisson,
Olivier Faugeras, Werner Frei, Ali Habibi, Anil K. Jain, Richard P. Kruger, Nasser
E. Nahi, Ramakant Nevatia, Keith Price, Guner S. Robinson, Alexander
A. Sawchuk, and Lloyd R. Welsh.
   In addition, I sincerely acknowledge the technical help of my graduate students at
USC during preparation of the first edition: Ikram Abdou, Behnam Ashjari,
Wen-Hsiung Chen, Faramarz Davarian, Michael N. Huhns, Kenneth I. Laws, Sang
Uk Lee, Clanton Mancill, Nelson Mascarenhas, Clifford Reader, John Roese, and
Robert H. Wallis.
   The first edition was the outgrowth of notes developed for the USC course
“Image Processing.” I wish to thank the many students who suffered through the
early versions of the notes for their valuable comments. Also, I appreciate the
reviews of the notes provided by Harry C. Andrews, Werner Frei, Ali Habibi, and
Ernest L. Hall, who taught the course.
   With regard to the first edition, I wish to offer words of appreciation to the
Information Processing Techniques Office of the Advanced Research Projects
Agency, directed by Larry G. Roberts, which provided partial financial support of
my research at USC.
   During the academic year 1977–1978, I performed sabbatical research at the
Institut de Recherche d’Informatique et Automatique in LeChesney, France and at
the Université de Paris. My research was partially supported by these institutions,
USC, and a Guggenheim Foundation fellowship. For this support, I am indebted.
   I left USC in 1979 with the intention of forming a company that would put some
of my research ideas into practice. Toward that end, I joined a startup company,
Compression Labs, Inc., of San Jose, California. There I worked on the development
of facsimile and video coding products with Dr. Wen-Hsiung Chen and Dr. Robert
H. Wallis. Concurrently, I directed a design team that developed a digital image
processor called VICOM. The early contributors to its hardware and software design
were William Bryant, Howard Halverson, Stephen K. Howell, Jeffrey Shaw, and
William Zech. In 1981, I formed Vicom Systems, Inc., of San Jose, California, to
manufacture and market the VICOM image processor. Many of the photographic
examples in this book were processed on a VICOM.
   Work on the second edition began in 1986. In 1988, I joined Sun Microsystems,
of Mountain View, California. At Sun, I collaborated with Stephen A. Howell and
Ihtisham Kabir on the development of image processing software. During my time
at Sun, I participated in the specification of the Programmer's Imaging Kernel
System (PIKS) application program interface, which was made an International
Standards Organization standard in 1994. Much of the PIKS content is present in this book.
Some of the principal contributors to PIKS include Timothy Butler, Adrian Clark,
Patrick Krolak, and Gerard A. Paquette.
   In 1993, I formed PixelSoft, Inc., of Los Altos, California, to commercialize the
PIKS standard. The PIKS Core version of the PixelSoft implementation is affixed to
the back cover of this edition. Contributors to its development include Timothy
Butler, Larry R. Hubble, and Gerard A. Paquette.
   In 1996, I joined Photon Dynamics, Inc., of San Jose, California, a manufacturer
of machine vision equipment for the inspection of electronics displays and printed
circuit boards. There, I collaborated with Larry R. Hubble, Sunil S. Sawkar, and
Gerard A. Paquette on the development of several hardware and software products
based on PIKS.
   I wish to thank all those previously cited, and many others too numerous to
mention, for their assistance in this industrial phase of my career. Having
participated in the design of hardware and software products has been an arduous
but intellectually rewarding task. This industrial experience, I believe, has
significantly enriched this third edition.

   I offer my appreciation to Ray Schmidt, who was responsible for many photo-
graphic reproductions in the book, and to Kris Pendelton, who created much of the
line art. Also, thanks are given to readers of the first two editions who reported
errors both typographical and mental.
   Most of all, I wish to thank my wife, Shelly, for her support in the writing of the
third edition.

                                                                             W. K. P.




PART 1

CONTINUOUS IMAGE
CHARACTERIZATION

Although this book is concerned primarily with digital, as opposed to analog, image
processing techniques, it should be remembered that most digital images represent
continuous natural images. Exceptions are artificial digital images, such as test
patterns that are numerically created in the computer, and images constructed by
tomographic systems. Thus, it is important to understand the "physics" of image
formation by sensors and optical systems, including human visual perception.
Another important consideration is the measurement of light in order to describe
images quantitatively. Finally, it is useful to establish the spatial and temporal
characteristics of continuous image fields, which provide the basis for the
interrelationship of digital image samples. These topics are covered in the following
chapters.








1
CONTINUOUS IMAGE MATHEMATICAL
CHARACTERIZATION




In the design and analysis of image processing systems, it is convenient and often
necessary mathematically to characterize the image to be processed. There are two
basic mathematical characterizations of interest: deterministic and statistical. In
deterministic image representation, a mathematical image function is defined and
point properties of the image are considered. For a statistical image representation,
the image is specified by average properties. The following sections develop the
deterministic and statistical characterization of continuous images. Although the
analysis is presented in the context of visual images, many of the results can be
extended to general two-dimensional time-varying signals and fields.


1.1. IMAGE REPRESENTATION

Let C ( x, y, t, λ ) represent the spatial energy distribution of an image source of radi-
ant energy at spatial coordinates (x, y), at time t and wavelength λ . Because light
intensity is a real positive quantity, that is, because intensity is proportional to the
modulus squared of the electric field, the image light function is real and nonnega-
tive. Furthermore, in all practical imaging systems, a small amount of background
light is always present. The physical imaging system also imposes some restriction
on the maximum intensity of an image, for example, film saturation and cathode ray
tube (CRT) phosphor heating. Hence it is assumed that


                                   0 < C ( x, y, t, λ ) ≤ A                       (1.1-1)




where A is the maximum image intensity. A physical image is necessarily limited in
extent by the imaging system and image recording media. For mathematical sim-
plicity, all images are assumed to be nonzero only over a rectangular region
for which


                               $-L_x \le x \le L_x$                                      (1.1-2a)

                               $-L_y \le y \le L_y$                                      (1.1-2b)


The physical image is, of course, observable only over some finite time interval.
Thus let


                               $-T \le t \le T$                                          (1.1-2c)


The image light function C ( x, y, t, λ ) is, therefore, a bounded four-dimensional
function with bounded independent variables. As a final restriction, it is assumed
that the image function is continuous over its domain of definition.
   The intensity response of a standard human observer to an image light function is
commonly measured in terms of the instantaneous luminance of the light field as
defined by

                   $Y(x, y, t) = \int_0^\infty C(x, y, t, \lambda)\, V(\lambda)\, d\lambda$                         (1.1-3)


where V ( λ ) represents the relative luminous efficiency function, that is, the spectral
response of human vision. Similarly, the color response of a standard observer is
commonly measured in terms of a set of tristimulus values that are linearly propor-
tional to the amounts of red, green, and blue light needed to match a colored light.
For an arbitrary red–green–blue coordinate system, the instantaneous tristimulus
values are

                   $R(x, y, t) = \int_0^\infty C(x, y, t, \lambda)\, R_S(\lambda)\, d\lambda$                       (1.1-4a)

                   $G(x, y, t) = \int_0^\infty C(x, y, t, \lambda)\, G_S(\lambda)\, d\lambda$                       (1.1-4b)

                   $B(x, y, t) = \int_0^\infty C(x, y, t, \lambda)\, B_S(\lambda)\, d\lambda$                       (1.1-4c)


where R S ( λ ) , G S ( λ ) , BS ( λ ) are spectral tristimulus values for the set of red, green,
and blue primaries. The spectral tristimulus values are, in effect, the tristimulus
values required to match a unit amount of narrowband light at wavelength λ . In a
multispectral imaging system, the image field observed is modeled as a spectrally
weighted integral of the image light function. The ith spectral image field is then
given as

                   $F_i(x, y, t) = \int_0^\infty C(x, y, t, \lambda)\, S_i(\lambda)\, d\lambda$                     (1.1-5)


where S i ( λ ) is the spectral response of the ith sensor.
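
In practice, the spectral integrals of Eqs. 1.1-3 to 1.1-5 are evaluated numerically
from sampled spectral data. The following C sketch approximates Eq. 1.1-3 by
trapezoidal integration of the product C(λ)V(λ) over a sampled wavelength range;
the sample values and 10 nm spacing are illustrative assumptions, not standardized
data.

```c
/* Trapezoidal approximation of Eq. 1.1-3 at a fixed (x, y, t):
 *   Y = integral over lambda of C(lambda) * V(lambda)
 * The sample arrays below are illustrative, not standardized data. */
#include <stdio.h>

/* integrate samples f[0..n-1] taken at uniform spacing dlambda */
static double trapezoid(const double *f, int n, double dlambda)
{
    double sum = 0.5 * (f[0] + f[n - 1]);
    for (int k = 1; k < n - 1; k++)
        sum += f[k];
    return sum * dlambda;
}

int main(void)
{
    /* hypothetical radiance samples C(lambda) and relative luminous
       efficiency samples V(lambda) at 10 nm spacing */
    const double C[5] = { 0.20, 0.55, 0.90, 0.60, 0.25 };
    const double V[5] = { 0.05, 0.40, 1.00, 0.45, 0.10 };
    double product[5];

    for (int k = 0; k < 5; k++)
        product[k] = C[k] * V[k];

    printf("approximate luminance Y = %f\n", trapezoid(product, 5, 10.0));
    return 0;
}
```
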
   For notational simplicity, a single image function F ( x, y, t ) is selected to repre-
sent an image field in a physical imaging system. For a monochrome imaging sys-
tem, the image function F ( x, y, t ) nominally denotes the image luminance, or some
converted or corrupted physical representation of the luminance, whereas in a color
imaging system, F ( x, y, t ) signifies one of the tristimulus values, or some function
of the tristimulus value. The image function F ( x, y, t ) is also used to denote general
three-dimensional fields, such as the time-varying noise of an image scanner.
   In correspondence with the standard definition for one-dimensional time signals,
the time average of an image function at a given point (x, y) is defined as

             $\langle F(x, y, t)\rangle_T = \lim_{T \to \infty} \frac{1}{2T} \int_{-T}^{T} F(x, y, t)\, L(t)\, dt$                  (1.1-6)

where L(t) is a time-weighting function. Similarly, the average image brightness at a
given time is given by the spatial average,

             $\langle F(x, y, t)\rangle_S = \lim_{\substack{L_x \to \infty \\ L_y \to \infty}} \frac{1}{4 L_x L_y} \int_{-L_x}^{L_x} \int_{-L_y}^{L_y} F(x, y, t)\, dx\, dy$          (1.1-7)

   In many imaging systems, such as image projection devices, the image does not
change with time, and the time variable may be dropped from the image function.
For other types of systems, such as movie pictures, the image function is time sam-
pled. It is also possible to convert the spatial variation into time variation, as in tele-
vision, by an image scanning process. In the subsequent discussion, the time
variable is dropped from the image field notation unless specifically required.


1.2. TWO-DIMENSIONAL SYSTEMS

A two-dimensional system, in its most general form, is simply a mapping of some
input set of two-dimensional functions F1(x, y), F2(x, y),..., FN(x, y) to a set of out-
put two-dimensional functions G1(x, y), G2(x, y),..., GM(x, y), where ( – ∞ < x, y < ∞ )
denotes the independent, continuous spatial variables of the functions. This mapping
may be represented by the operators $O_m\{\cdot\}$ for m = 1, 2,..., M, which relate the input
set of functions to the output set by the equations


                   $G_1(x, y) = O_1\{F_1(x, y), F_2(x, y), \ldots, F_N(x, y)\}$

                                 $\vdots$

                   $G_m(x, y) = O_m\{F_1(x, y), F_2(x, y), \ldots, F_N(x, y)\}$                 (1.2-1)

                                 $\vdots$

                   $G_M(x, y) = O_M\{F_1(x, y), F_2(x, y), \ldots, F_N(x, y)\}$

In specific cases, the mapping may be many-to-few, few-to-many, or one-to-one.
The one-to-one mapping is defined as

                               $G(x, y) = O\{F(x, y)\}$                                        (1.2-2)

To proceed further with a discussion of the properties of two-dimensional systems, it
is necessary to direct the discourse toward specific types of operators.


1.2.1. Singularity Operators

Singularity operators are widely employed in the analysis of two-dimensional
systems, especially systems that involve sampling of continuous functions. The
two-dimensional Dirac delta function is a singularity operator that possesses the
following properties:

                   $\int_{-\varepsilon}^{\varepsilon} \int_{-\varepsilon}^{\varepsilon} \delta(x, y)\, dx\, dy = 1 \qquad \text{for } \varepsilon > 0$                   (1.2-3a)

                   $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta)\, \delta(x - \xi, y - \eta)\, d\xi\, d\eta = F(x, y)$               (1.2-3b)


In Eq. 1.2-3a, ε is an infinitesimally small limit of integration; Eq. 1.2-3b is called
the sifting property of the Dirac delta function.
   The two-dimensional delta function can be decomposed into the product of two
one-dimensional delta functions defined along orthonormal coordinates. Thus

                                       δ ( x, y ) = δ ( x )δ ( y )                        (1.2-4)

where the one-dimensional delta function satisfies one-dimensional versions of Eq.
1.2-3. The delta function also can be defined as a limit on a family of functions.
General examples are given in References 1 and 2.


1.2.2. Additive Linear Operators

A two-dimensional system is said to be an additive linear system if the system obeys
the law of additive superposition. In the special case of one-to-one mappings, the
additive superposition property requires that

             $O\{a_1 F_1(x, y) + a_2 F_2(x, y)\} = a_1 O\{F_1(x, y)\} + a_2 O\{F_2(x, y)\}$                (1.2-5)


where a1 and a2 are constants that are possibly complex numbers. This additive
superposition property can easily be extended to the general mapping of Eq. 1.2-1.
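
As a quick numerical illustration of Eq. 1.2-5, the C sketch below applies a simple
linear operator (a horizontal first difference, chosen here only for illustration) to
a1 F1 + a2 F2 and to F1 and F2 separately, and confirms that the two results agree;
the arrays and constants are arbitrary test values.

```c
/* Numerical check of additive superposition, Eq. 1.2-5, using a horizontal
 * first-difference operator D{F}(x, y) = F(x, y) - F(x-1, y) as an example
 * of a linear operator. The data are arbitrary test values. */
#include <stdio.h>
#include <math.h>

#define ROWS 3
#define COLS 4

/* horizontal first difference; the x = 0 column is set to zero */
static void diff_x(double in[ROWS][COLS], double out[ROWS][COLS])
{
    for (int y = 0; y < ROWS; y++) {
        out[y][0] = 0.0;
        for (int x = 1; x < COLS; x++)
            out[y][x] = in[y][x] - in[y][x - 1];
    }
}

int main(void)
{
    double F1[ROWS][COLS] = { { 1, 2, 3, 4 }, { 2, 3, 4, 5 }, { 3, 4, 5, 6 } };
    double F2[ROWS][COLS] = { { 4, 1, 0, 2 }, { 3, 3, 1, 0 }, { 5, 2, 2, 1 } };
    double a1 = 2.0, a2 = -0.5;
    double sum[ROWS][COLS], lhs[ROWS][COLS], o1[ROWS][COLS], o2[ROWS][COLS];
    double maxerr = 0.0;

    for (int y = 0; y < ROWS; y++)
        for (int x = 0; x < COLS; x++)
            sum[y][x] = a1 * F1[y][x] + a2 * F2[y][x];

    diff_x(sum, lhs);                 /* O{ a1 F1 + a2 F2 } */
    diff_x(F1, o1);                   /* O{ F1 }            */
    diff_x(F2, o2);                   /* O{ F2 }            */

    for (int y = 0; y < ROWS; y++)
        for (int x = 0; x < COLS; x++) {
            double err = fabs(lhs[y][x] - (a1 * o1[y][x] + a2 * o2[y][x]));
            if (err > maxerr) maxerr = err;
        }

    printf("maximum superposition error = %g\n", maxerr);
    return 0;
}
```
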
   A system input function F(x, y) can be represented as a sum of amplitude-
weighted Dirac delta functions by the sifting integral,

                   $F(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta)\, \delta(x - \xi, y - \eta)\, d\xi\, d\eta$                    (1.2-6)

where F ( ξ, η ) is the weighting factor of the impulse located at coordinates ( ξ, η ) in
the x–y plane, as shown in Figure 1.2-1. If the output of a general linear one-to-one
system is defined to be

                                            G ( x, y ) = O { F ( x, y ) }                                   (1.2-7)

then

                                      ∞ ∞                                    
                      G ( x, y ) = O  ∫ ∫ F ( ξ, η )δ ( x – ξ, y – η ) d ξ dη                            (1.2-8a)
                                      –∞ –∞                                  
or

                                        ∞       ∞
                      G ( x, y ) =   ∫–∞ ∫–∞ F ( ξ, η )O { δ ( x – ξ, y – η ) } d ξ dη                     (1.2-8b)

   In moving from Eq. 1.2-8a to Eq. 1.2-8b, the application order of the general lin-
ear operator $O\{\cdot\}$ and the integral operator has been reversed. Also, the linear
operator has been applied only to the term in the integrand that is dependent on the




                       FIGURE 1.2-1. Decomposition of image function.


spatial variables (x, y). The second term in the integrand of Eq. 1.2-8b, which is
redefined as


                            H ( x, y ; ξ, η) ≡ O { δ ( x – ξ, y – η ) }            (1.2-9)


is called the impulse response of the two-dimensional system. In optical systems, the
impulse response is often called the point spread function of the system. Substitu-
tion of the impulse response function into Eq. 1.2-8b yields the additive superposi-
tion integral

                   $G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta)\, H(x, y; \xi, \eta)\, d\xi\, d\eta$                  (1.2-10)


An additive linear two-dimensional system is called space invariant (isoplanatic) if
its impulse response depends only on the factors x – ξ and y – η . In an optical sys-
tem, as shown in Figure 1.2-2, this implies that the image of a point source in the
focal plane will change only in location, not in functional form, as the placement of
the point source moves in the object plane. For a space-invariant system


                             H ( x, y ; ξ, η ) = H ( x – ξ, y – η )               (1.2-11)


and the superposition integral reduces to the special case called the convolution inte-
gral, given by

                   $G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\xi, \eta)\, H(x - \xi, y - \eta)\, d\xi\, d\eta$               (1.2-12a)

Symbolically,

                   $G(x, y) = F(x, y) \circledast H(x, y)$                                      (1.2-12b)




                     FIGURE 1.2-2. Point-source imaging system.




           FIGURE 1.2-3. Graphical example of two-dimensional convolution.



where $\circledast$ denotes the convolution operation. The convolution integral is symmetric in the
sense that

                   $G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x - \xi, y - \eta)\, H(\xi, \eta)\, d\xi\, d\eta$               (1.2-13)

Figure 1.2-3 provides a visualization of the convolution process. In Figure 1.2-3a
and b, the input function F(x, y) and impulse response are plotted in the dummy
coordinate system ( ξ, η ) . Next, in Figures 1.2-3c and d the coordinates of the
impulse response are reversed, and the impulse response is offset by the spatial val-
ues (x, y). In Figure 1.2-3e, the integrand product of the convolution integral of
Eq. 1.2-12 is shown as a crosshatched region. The integral over this region is the
value of G(x, y) at the offset coordinate (x, y). The complete function G(x, y) could,
in effect, be computed by sequentially scanning the reversed, offset impulse
response across the input function and simultaneously integrating the overlapped
region.
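
For sampled images, the convolution integral of Eq. 1.2-12 becomes a double
summation, and the graphical procedure of Figure 1.2-3 maps directly into code: the
impulse response array is reversed, offset, multiplied by the input array and
summed. The C sketch below computes the full linear convolution of two small
arrays; the array sizes and contents are illustrative choices, not taken from the text.

```c
/* Discrete analog of the convolution integral, Eq. 1.2-12:
 *   G(j, k) = sum over (m, n) of F(m, n) * H(j - m, k - n)
 * producing the full (FR + HR - 1) x (FC + HC - 1) output array. */
#include <stdio.h>

#define FR 4   /* input rows               */
#define FC 4   /* input columns            */
#define HR 3   /* impulse response rows    */
#define HC 3   /* impulse response columns */

int main(void)
{
    /* illustrative input image and impulse response */
    const double F[FR][FC] = {
        { 1, 2, 3, 4 }, { 5, 6, 7, 8 }, { 9, 8, 7, 6 }, { 5, 4, 3, 2 }
    };
    const double H[HR][HC] = {
        { 0, 1, 0 }, { 1, -4, 1 }, { 0, 1, 0 }
    };
    double G[FR + HR - 1][FC + HC - 1];

    for (int j = 0; j < FR + HR - 1; j++) {
        for (int k = 0; k < FC + HC - 1; k++) {
            double sum = 0.0;
            for (int m = 0; m < FR; m++) {
                for (int n = 0; n < FC; n++) {
                    int jm = j - m, kn = k - n;   /* reversed, offset indices */
                    if (jm >= 0 && jm < HR && kn >= 0 && kn < HC)
                        sum += F[m][n] * H[jm][kn];
                }
            }
            G[j][k] = sum;
        }
    }

    for (int j = 0; j < FR + HR - 1; j++) {
        for (int k = 0; k < FC + HC - 1; k++)
            printf("%7.1f ", G[j][k]);
        printf("\n");
    }
    return 0;
}
```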


1.2.3. Differential Operators

Edge detection in images is commonly accomplished by performing a spatial differ-
entiation of the image field followed by a thresholding operation to determine points
of steep amplitude change. Horizontal and vertical spatial derivatives are defined as


                               $d_x = \frac{\partial F(x, y)}{\partial x}$                                     (1.2-14a)

                               $d_y = \frac{\partial F(x, y)}{\partial y}$                                     (1.2-14b)


The directional derivative of the image field along a vector direction z subtending an
angle φ with respect to the horizontal axis is given by (3, p. 106)

                   $\nabla\{F(x, y)\} = \frac{\partial F(x, y)}{\partial z} = d_x \cos\phi + d_y \sin\phi$                    (1.2-15)

The gradient magnitude is then

                               $\nabla\{F(x, y)\} = \sqrt{d_x^2 + d_y^2}$                                      (1.2-16)


Spatial second derivatives in the horizontal and vertical directions are defined as


                               $d_{xx} = \frac{\partial^2 F(x, y)}{\partial x^2}$                                   (1.2-17a)

                               $d_{yy} = \frac{\partial^2 F(x, y)}{\partial y^2}$                                   (1.2-17b)


The sum of these two spatial derivatives is called the Laplacian operator:

                   $\nabla^2\{F(x, y)\} = \frac{\partial^2 F(x, y)}{\partial x^2} + \frac{\partial^2 F(x, y)}{\partial y^2}$                      (1.2-18)

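On a sampled image the derivatives of Eqs. 1.2-14 through 1.2-18 are approximated
by finite differences. The C sketch below evaluates central-difference derivatives,
the gradient magnitude and the Laplacian at the interior pixels of a small
illustrative array; boundary pixels are skipped for brevity.

```c
/* Finite-difference approximations of the horizontal and vertical derivatives
 * (Eq. 1.2-14), the gradient magnitude (Eq. 1.2-16) and the Laplacian
 * (Eq. 1.2-18) at interior pixels of a small sample image. */
#include <stdio.h>
#include <math.h>

#define ROWS 4
#define COLS 5

int main(void)
{
    /* illustrative image samples */
    const double F[ROWS][COLS] = {
        { 1, 1, 2, 4, 4 },
        { 1, 2, 4, 6, 6 },
        { 2, 4, 6, 8, 8 },
        { 2, 4, 7, 9, 9 }
    };

    for (int y = 1; y < ROWS - 1; y++) {
        for (int x = 1; x < COLS - 1; x++) {
            double dx  = 0.5 * (F[y][x + 1] - F[y][x - 1]);   /* dF/dx      */
            double dy  = 0.5 * (F[y + 1][x] - F[y - 1][x]);   /* dF/dy      */
            double mag = sqrt(dx * dx + dy * dy);             /* Eq. 1.2-16 */
            double lap = F[y][x + 1] + F[y][x - 1]            /* Eq. 1.2-18 */
                       + F[y + 1][x] + F[y - 1][x] - 4.0 * F[y][x];
            printf("(%d,%d): gradient = %5.2f  laplacian = %5.2f\n",
                   x, y, mag, lap);
        }
    }
    return 0;
}
```
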
1.3. TWO-DIMENSIONAL FOURIER TRANSFORM

The two-dimensional Fourier transform of the image function F(x, y) is defined as
(1,2)


             $F(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x, y) \exp\{-i(\omega_x x + \omega_y y)\}\, dx\, dy$               (1.3-1)


where $\omega_x$ and $\omega_y$ are spatial frequencies and $i = \sqrt{-1}$. Notationally, the Fourier
transform is written as

                               $F(\omega_x, \omega_y) = O_F\{F(x, y)\}$                                        (1.3-2)


In general, the Fourier coefficient F ( ω x, ω y ) is a complex number that may be rep-
resented in real and imaginary form,


                   $F(\omega_x, \omega_y) = R(\omega_x, \omega_y) + iI(\omega_x, \omega_y)$                              (1.3-3a)

or in magnitude and phase-angle form,

                   $F(\omega_x, \omega_y) = M(\omega_x, \omega_y) \exp\{i\phi(\omega_x, \omega_y)\}$                         (1.3-3b)

where

                   $M(\omega_x, \omega_y) = \left[ R^2(\omega_x, \omega_y) + I^2(\omega_x, \omega_y) \right]^{1/2}$                        (1.3-4a)

                   $\phi(\omega_x, \omega_y) = \arctan\left\{ \frac{I(\omega_x, \omega_y)}{R(\omega_x, \omega_y)} \right\}$                              (1.3-4b)


A sufficient condition for the existence of the Fourier transform of F(x, y) is that the
function be absolutely integrable. That is,

                               $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| F(x, y) \right| dx\, dy < \infty$                          (1.3-5)


The input function F(x, y) can be recovered from its Fourier transform by the inver-
sion formula


             $F(x, y) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\omega_x, \omega_y) \exp\{i(\omega_x x + \omega_y y)\}\, d\omega_x\, d\omega_y$        (1.3-6a)

or in operator form

                               $F(x, y) = O_F^{-1}\{F(\omega_x, \omega_y)\}$                                    (1.3-6b)


The functions F(x, y) and F ( ω x, ω y ) are called Fourier transform pairs.


   The two-dimensional Fourier transform can be computed in two steps as a result
of the separability of the kernel. Thus, let

                   $F_y(\omega_x, y) = \int_{-\infty}^{\infty} F(x, y) \exp\{-i\omega_x x\}\, dx$                         (1.3-7)

then

                   $F(\omega_x, \omega_y) = \int_{-\infty}^{\infty} F_y(\omega_x, y) \exp\{-i\omega_y y\}\, dy$                      (1.3-8)

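The separability of the Fourier kernel expressed by Eqs. 1.3-7 and 1.3-8 carries
over to sampled images: a two-dimensional transform can be computed as
one-dimensional transforms of the rows followed by one-dimensional transforms of
the resulting columns. The C sketch below performs a naive (non-fast) discrete
Fourier transform of a small illustrative array in this two-step fashion; a fast
transform would simply replace the dft1d() routine.

```c
/* Row-column evaluation of a discrete 2-D Fourier transform, the discrete
 * counterpart of the two-step computation of Eqs. 1.3-7 and 1.3-8.
 * Naive O(N^2) one-dimensional DFTs are used for clarity. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#define N 4                          /* square array for simplicity */
static const double PI = 3.14159265358979323846;

/* one-dimensional DFT of in[0..N-1] into out[0..N-1] */
static void dft1d(const double complex *in, double complex *out)
{
    for (int u = 0; u < N; u++) {
        double complex sum = 0.0;
        for (int x = 0; x < N; x++)
            sum += in[x] * cexp(-I * 2.0 * PI * u * x / N);
        out[u] = sum;
    }
}

int main(void)
{
    /* illustrative real-valued image samples */
    const double f[N][N] = {
        { 1, 2, 1, 0 }, { 2, 4, 2, 0 }, { 1, 2, 1, 0 }, { 0, 0, 0, 0 }
    };
    double complex row[N], tmp[N][N], col[N], out[N], Fout[N][N];

    /* step 1: transform each row (discrete analog of Eq. 1.3-7) */
    for (int y = 0; y < N; y++) {
        for (int x = 0; x < N; x++)
            row[x] = f[y][x];
        dft1d(row, tmp[y]);
    }
    /* step 2: transform each resulting column (discrete analog of Eq. 1.3-8) */
    for (int u = 0; u < N; u++) {
        for (int y = 0; y < N; y++)
            col[y] = tmp[y][u];
        dft1d(col, out);
        for (int v = 0; v < N; v++)
            Fout[v][u] = out[v];
    }

    for (int v = 0; v < N; v++) {
        for (int u = 0; u < N; u++)
            printf("(%6.2f, %6.2f) ", creal(Fout[v][u]), cimag(Fout[v][u]));
        printf("\n");
    }
    return 0;
}
```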

   Several useful properties of the two-dimensional Fourier transform are stated
below. Proofs are given in References 1 and 2.

Separability. If the image function is spatially separable such that

                               $F(x, y) = f_x(x)\, f_y(y)$                                     (1.3-9)

then

                               $F(\omega_x, \omega_y) = f_x(\omega_x)\, f_y(\omega_y)$                             (1.3-10)

where f x ( ω x ) and f y ( ω y ) are one-dimensional Fourier transforms of f x ( x ) and
 f y ( y ), respectively. Also, if F ( x, y ) and F ( ω x, ω y ) are two-dimensional Fourier
transform pairs, the Fourier transform of F ∗ ( x, y ) is F ∗ ( – ω x, – ω y ) . An asterisk∗
used as a superscript denotes complex conjugation of a variable (i.e. if F = A + iB,
then F ∗ = A – iB ). Finally, if F ( x, y ) is symmetric such that F ( x, y ) = F ( – x, – y ),
then F ( ω x, ω y ) = F ( – ω x, – ω y ).

Linearity. The Fourier transform is a linear operator. Thus


             $O_F\{aF_1(x, y) + bF_2(x, y)\} = aF_1(\omega_x, \omega_y) + bF_2(\omega_x, \omega_y)$              (1.3-11)


where a and b are constants.

Scaling. A linear scaling of the spatial variables results in an inverse scaling of the
spatial frequencies as given by


                   $O_F\{F(ax, by)\} = \frac{1}{|ab|}\, F\!\left( \frac{\omega_x}{a}, \frac{\omega_y}{b} \right)$                             (1.3-12)

Hence, stretching of an axis in one domain results in a contraction of the corre-
sponding axis in the other domain plus an amplitude change.

Shift. A positional shift in the input plane results in a phase shift in the output
plane:


             $O_F\{F(x - a, y - b)\} = F(\omega_x, \omega_y) \exp\{-i(\omega_x a + \omega_y b)\}$              (1.3-13a)

Alternatively, a frequency shift in the Fourier plane results in the equivalence

             $O_F^{-1}\{F(\omega_x - a, \omega_y - b)\} = F(x, y) \exp\{i(ax + by)\}$                    (1.3-13b)


Convolution. The two-dimensional Fourier transform of two convolved functions
is equal to the product of the transforms of the functions. Thus

             $O_F\{F(x, y) \circledast H(x, y)\} = F(\omega_x, \omega_y)\, H(\omega_x, \omega_y)$                    (1.3-14)


The inverse theorem states that

             $O_F\{F(x, y)\, H(x, y)\} = \frac{1}{4\pi^2}\, F(\omega_x, \omega_y) \circledast H(\omega_x, \omega_y)$               (1.3-15)

Parseval's Theorem. The energy in the spatial and Fourier transform domains is
related by

             $\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| F(x, y) \right|^2 dx\, dy = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \left| F(\omega_x, \omega_y) \right|^2 d\omega_x\, d\omega_y$        (1.3-16)


Autocorrelation Theorem. The Fourier transform of the spatial autocorrelation of a
function is equal to the magnitude squared of its Fourier transform. Hence

                   ∞ ∞                                                      2
               OF  ∫ ∫ F ( α, β )F∗ ( α – x, β – y ) dα dβ = F ( ω x, ω y )                  (1.3-17)
                    –∞ –∞                                 

Spatial Differentials. The Fourier transform of the directional derivative of an
image function is related to the Fourier transform by


             $O_F\left\{ \frac{\partial F(x, y)}{\partial x} \right\} = -i\omega_x F(\omega_x, \omega_y)$                                 (1.3-18a)

             $O_F\left\{ \frac{\partial F(x, y)}{\partial y} \right\} = -i\omega_y F(\omega_x, \omega_y)$                                 (1.3-18b)


Consequently, the Fourier transform of the Laplacian of an image function is equal
to

             $O_F\left\{ \frac{\partial^2 F(x, y)}{\partial x^2} + \frac{\partial^2 F(x, y)}{\partial y^2} \right\} = -(\omega_x^2 + \omega_y^2)\, F(\omega_x, \omega_y)$                  (1.3-19)

The Fourier transform convolution theorem stated by Eq. 1.3-14 is an extremely
useful tool for the analysis of additive linear systems. Consider an image function
F ( x, y ) that is the input to an additive linear system with an impulse response
H ( x, y ) . The output image function is given by the convolution integral


                   $G(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\alpha, \beta)\, H(x - \alpha, y - \beta)\, d\alpha\, d\beta$                 (1.3-20)


Taking the Fourier transform of both sides of Eq. 1.3-20 and reversing the order of
integration on the right-hand side results in

  $G(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\alpha, \beta) \left[ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(x - \alpha, y - \beta) \exp\{-i(\omega_x x + \omega_y y)\}\, dx\, dy \right] d\alpha\, d\beta$     (1.3-21)


By the Fourier transform shift theorem of Eq. 1.3-13, the inner integral is equal to
the Fourier transform of H ( x, y ) multiplied by an exponential phase-shift factor.
Thus

             $G(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(\alpha, \beta)\, H(\omega_x, \omega_y) \exp\{-i(\omega_x \alpha + \omega_y \beta)\}\, d\alpha\, d\beta$        (1.3-22)


Performing the indicated Fourier transformation gives


                               $G(\omega_x, \omega_y) = H(\omega_x, \omega_y)\, F(\omega_x, \omega_y)$                            (1.3-23)


Then an inverse transformation of Eq. 1.3-23 provides the output image function

       $G(x, y) = \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(\omega_x, \omega_y)\, F(\omega_x, \omega_y) \exp\{i(\omega_x x + \omega_y y)\}\, d\omega_x\, d\omega_y$         (1.3-24)

Equations 1.3-20 and 1.3-24 represent two alternative means of determining the out-
put image response of an additive, linear, space-invariant system. The analytic or
operational choice between the two approaches, convolution or Fourier processing,
is usually problem dependent.
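
In the discrete domain the counterpart of Eq. 1.3-14 holds exactly for circular
(periodic) convolution: the discrete Fourier transform of the circular convolution
of two arrays equals the product of their transforms. The C sketch below computes a
small circular convolution both directly and by transform-domain multiplication
followed by an inverse transform, so that the two results can be compared; the data
are arbitrary test values and naive transforms are used.

```c
/* Discrete check of the convolution theorem, Eq. 1.3-14: for periodic
 * (circularly extended) arrays, DFT{f conv h} = DFT{f} * DFT{h}.
 * Naive O(N^4) transforms are used; the arrays are arbitrary test data. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

#define N 4
static const double PI = 3.14159265358979323846;

/* forward (sign = -1) or inverse (sign = +1, scaled by 1/N^2) 2-D DFT */
static void dft2(double complex in[N][N], double complex out[N][N], int sign)
{
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++) {
            double complex s = 0.0;
            for (int x = 0; x < N; x++)
                for (int y = 0; y < N; y++)
                    s += in[x][y] * cexp(sign * I * 2.0 * PI * (u * x + v * y) / N);
            out[u][v] = (sign > 0) ? s / (N * N) : s;
        }
}

int main(void)
{
    const double f[N][N] = { {1,2,0,1}, {0,3,1,0}, {2,1,0,0}, {1,0,0,2} };
    const double h[N][N] = { {0,1,0,0}, {1,-4,1,0}, {0,1,0,0}, {0,0,0,0} };
    double complex fc[N][N], hc[N][N], Fd[N][N], Hd[N][N], P[N][N], g2[N][N];
    double g1[N][N];

    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++) {
            fc[x][y] = f[x][y];
            hc[x][y] = h[x][y];
        }

    /* direct circular convolution */
    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++) {
            double s = 0.0;
            for (int m = 0; m < N; m++)
                for (int n = 0; n < N; n++)
                    s += f[m][n] * h[(x - m + N) % N][(y - n + N) % N];
            g1[x][y] = s;
        }

    /* transform-domain product, then inverse transform */
    dft2(fc, Fd, -1);
    dft2(hc, Hd, -1);
    for (int u = 0; u < N; u++)
        for (int v = 0; v < N; v++)
            P[u][v] = Fd[u][v] * Hd[u][v];
    dft2(P, g2, +1);

    for (int x = 0; x < N; x++)
        for (int y = 0; y < N; y++)
            printf("direct %8.3f   via DFT %8.3f\n", g1[x][y], creal(g2[x][y]));
    return 0;
}
```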


1.4. IMAGE STOCHASTIC CHARACTERIZATION

The following presentation on the statistical characterization of images assumes
general familiarity with probability theory, random variables, and stochastic
processes. References 2 and 4 to 7 can provide suitable background. The primary purpose
of the discussion here is to introduce notation and develop stochastic image models.
   It is often convenient to regard an image as a sample of a stochastic process. For
continuous images, the image function F(x, y, t) is assumed to be a member of a con-
tinuous three-dimensional stochastic process with space variables (x, y) and time
variable (t).
   The stochastic process F(x, y, t) can be described completely by knowledge of its
joint probability density


                        $p\{F_1, F_2, \ldots, F_J;\, x_1, y_1, t_1, x_2, y_2, t_2, \ldots, x_J, y_J, t_J\}$


for all sample points J, where (xj, yj, tj) represent space and time samples of image
function Fj(xj, yj, tj). In general, high-order joint probability densities of images are
usually not known, nor are they easily modeled. The first-order probability density
p(F; x, y, t) can sometimes be modeled successfully on the basis of the physics of
the process or histogram measurements. For example, the first-order probability
density of random noise from an electronic sensor is usually well modeled by a
Gaussian density of the form

             $p\{F; x, y, t\} = \left[ 2\pi\sigma_F^2(x, y, t) \right]^{-1/2} \exp\left\{ -\,\frac{\left[ F(x, y, t) - \eta_F(x, y, t) \right]^2}{2\sigma_F^2(x, y, t)} \right\}$           (1.4-1)

where the parameters $\eta_F(x, y, t)$ and $\sigma_F^2(x, y, t)$ denote the mean and variance of the
process. The Gaussian density is also a reasonably accurate model for the probabil-
ity density of the amplitude of unitary transform coefficients of an image. The
probability density of the luminance function must be a one-sided density because
the luminance measure is positive. Models that have found application include the
Rayleigh density,

                   $p\{F; x, y, t\} = \frac{F(x, y, t)}{\alpha^2} \exp\left\{ -\,\frac{\left[ F(x, y, t) \right]^2}{2\alpha^2} \right\}$                        (1.4-2a)

the log-normal density,


       $p\{F; x, y, t\} = \left[ 2\pi F^2(x, y, t)\, \sigma_F^2(x, y, t) \right]^{-1/2} \exp\left\{ -\,\frac{\left[ \log\{F(x, y, t)\} - \eta_F(x, y, t) \right]^2}{2\sigma_F^2(x, y, t)} \right\}$      (1.4-2b)

and the exponential density,


                               $p\{F; x, y, t\} = \alpha \exp\{-\alpha F(x, y, t)\}$                           (1.4-2c)


all defined for F ≥ 0, where α is a constant. The two-sided, or Laplacian density,

                               $p\{F; x, y, t\} = \frac{\alpha}{2} \exp\{-\alpha |F(x, y, t)|\}$                         (1.4-3)

where α is a constant, is often selected as a model for the probability density of the
difference of image samples. Finally, the uniform density

                               $p\{F; x, y, t\} = \frac{1}{2\pi}$                                              (1.4-4)

for – π ≤ F ≤ π is a common model for phase fluctuations of a random process. Con-
ditional probability densities are also useful in characterizing a stochastic process.
The conditional density of an image function evaluated at ( x 1, y 1, t 1 ) given knowl-
edge of the image function at ( x 2, y 2, t 2 ) is defined as

             $p\{F_1; x_1, y_1, t_1 \mid F_2; x_2, y_2, t_2\} = \frac{p\{F_1, F_2; x_1, y_1, t_1, x_2, y_2, t_2\}}{p\{F_2; x_2, y_2, t_2\}}$                 (1.4-5)

Higher-order conditional densities are defined in a similar manner.
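
As noted above, the first-order density p(F; x, y, t) is often estimated from
histogram measurements. The C sketch below forms a normalized amplitude histogram
whose bin heights approximate the density; the number of bins, the amplitude range
and the sample values are illustrative assumptions.

```c
/* Histogram estimate of the first-order amplitude probability density:
 * counts of samples falling in each bin, normalized so that the estimated
 * density sums to one when multiplied by the bin width. */
#include <stdio.h>

#define NSAMPLES 12
#define NBINS    4

int main(void)
{
    /* illustrative amplitude samples in the range [0, 1) */
    const double F[NSAMPLES] = {
        0.10, 0.22, 0.35, 0.41, 0.48, 0.52,
        0.55, 0.61, 0.64, 0.72, 0.81, 0.93
    };
    const double lo = 0.0, hi = 1.0;
    const double width = (hi - lo) / NBINS;
    int count[NBINS] = { 0 };

    for (int k = 0; k < NSAMPLES; k++) {
        int b = (int)((F[k] - lo) / width);
        if (b < 0) b = 0;
        if (b >= NBINS) b = NBINS - 1;          /* clamp the upper edge */
        count[b]++;
    }

    for (int b = 0; b < NBINS; b++) {
        double density = count[b] / (NSAMPLES * width);   /* estimated p in bin b */
        printf("bin %d  [%4.2f, %4.2f):  p = %5.3f\n",
               b, lo + b * width, lo + (b + 1) * width, density);
    }
    return 0;
}
```
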
   Another means of describing a stochastic process is through computation of its
ensemble averages. The first moment or mean of the image function is defined as

             $\eta_F(x, y, t) = E\{F(x, y, t)\} = \int_{-\infty}^{\infty} F(x, y, t)\, p\{F; x, y, t\}\, dF$                    (1.4-6)


where E { · } is the expectation operator, as defined by the right-hand side of Eq.
1.4-6.
   The second moment or autocorrelation function is given by


             $R(x_1, y_1, t_1; x_2, y_2, t_2) = E\{F(x_1, y_1, t_1)\, F^*(x_2, y_2, t_2)\}$                 (1.4-7a)


or in explicit form

  $R(x_1, y_1, t_1; x_2, y_2, t_2) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x_1, y_1, t_1)\, F^*(x_2, y_2, t_2)\, p\{F_1, F_2; x_1, y_1, t_1, x_2, y_2, t_2\}\, dF_1\, dF_2$     (1.4-7b)


The autocovariance of the image process is the autocorrelation about the mean,
defined as



K(x1, y1, t1; x2, y2, t2) = E{ [F(x1, y1, t1) – ηF(x1, y1, t1)] [F∗(x2, y2, t2) – ηF∗(x2, y2, t2)] }                      (1.4-8a)


or


     K(x1, y1, t1; x2, y2, t2) = R(x1, y1, t1; x2, y2, t2) – ηF(x1, y1, t1) ηF∗(x2, y2, t2)                               (1.4-8b)


Finally, the variance of an image process is


                                             σF²(x, y, t) = K(x, y, t; x, y, t)                                           (1.4-9)


   An image process is called stationary in the strict sense if its moments are unaf-
fected by shifts in the space and time origins. The image process is said to be sta-
tionary in the wide sense if its mean is constant and its autocorrelation is dependent
on the differences in the image coordinates, x1 – x2, y1 – y2, t1 – t2, and not on their
individual values. In other words, the image autocorrelation does not depend on absolute
position or time, only on coordinate differences. For stationary image processes,


                                                         E { F ( x, y, t ) } = η F                                     (1.4-10a)


                              R ( x 1, y 1, t 1 ; x 2, y 2, t 2) = R ( x1 – x 2, y 1 – y 2, t1 – t 2 )                 (1.4-10b)


The autocorrelation expression may then be written as


                            R ( τx, τy, τt ) = E { F ( x + τ x, y + τy, t + τ t )F∗ ( x, y, t ) }                       (1.4-11)


Because

                                        R ( – τx, – τ y, – τ t ) = R∗ ( τx, τy, τt )                                  (1.4-12)

then for an image function with F real, the autocorrelation is real and an even func-
tion of τ x, τ y, τ t . The power spectral density, also called the power spectrum, of a
stationary image process is defined as the three-dimensional Fourier transform of its
autocorrelation function as given by

     W(ωx, ωy, ωt) = ∫_{–∞}^{∞} ∫_{–∞}^{∞} ∫_{–∞}^{∞} R(τx, τy, τt) exp{ –i(ωx τx + ωy τy + ωt τt) } dτx dτy dτt
                                                                                                                      (1.4-13)
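
For sampled, wide-sense stationary image data, Eq. 1.4-13 suggests a direct numerical estimator of the power spectrum. The following Python sketch (all function names are chosen here for illustration) computes a periodogram, the discrete counterpart of transforming the sample autocorrelation, and recovers that autocorrelation by an inverse transform.

```python
import numpy as np

def estimate_power_spectrum(field):
    """Periodogram estimate of the power spectrum of a 2-D stationary field.

    By the discrete Wiener-Khinchin relation, the DFT of the circular sample
    autocorrelation equals |DFT(field)|^2 / N, the discrete analog of Eq. 1.4-13.
    """
    field = field - field.mean()            # work with the zero-mean fluctuations
    spectrum = np.fft.fft2(field)
    return np.abs(spectrum) ** 2 / field.size

def sample_autocorrelation(field):
    """Circular sample autocorrelation, recovered by inverting the spectrum."""
    return np.real(np.fft.ifft2(estimate_power_spectrum(field) * field.size)) / field.size

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # White noise field: its power spectrum should be approximately flat.
    noise = rng.normal(size=(256, 256))
    W = estimate_power_spectrum(noise)
    R = sample_autocorrelation(noise)
    print("mean spectral level:", W.mean())          # approximately the field variance
    print("R(0,0) (sample variance):", R[0, 0])
```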

   In many imaging systems, the spatial and time image processes are separable so
that the stationary correlation function may be written as


                                        R ( τx, τy, τt ) = R xy ( τx, τy )Rt ( τ t )                                  (1.4-14)


Furthermore, the spatial autocorrelation function is often considered as the product
of x and y axis autocorrelation functions,

                                            R xy ( τ x, τ y ) = Rx ( τ x )R y ( τ y )                                 (1.4-15)

for computational simplicity. For scenes of manufactured objects, there is often a
large amount of horizontal and vertical image structure, and the spatial separation
approximation may be quite good. In natural scenes, there usually is no preferential
direction of correlation; the spatial autocorrelation function tends to be rotationally
symmetric and not separable.
   An image field is often modeled as a sample of a first-order Markov process for
which the correlation between points on the image field is proportional to their geo-
metric separation. The autocovariance function for the two-dimensional Markov
process is


                                                                2 2       2 2
                                  R xy ( τ x, τ y ) = C exp – α x τ x + α y τ y                                     (1.4-16)
                                                                                

where C is an energy scaling constant and α x and α y are spatial scaling constants.
The corresponding power spectrum is

                             W(ωx, ωy) = 2C ⁄ { αx αy [ 1 + ωx² ⁄ αx² + ωy² ⁄ αy² ]^{3/2} }                               (1.4-17)

As a simplifying assumption, the Markov process is often assumed to be of separa-
ble form with an autocovariance function


                         Kxy(τx, τy) = C exp{ –αx|τx| – αy|τy| }                                                          (1.4-18)


The power spectrum of this process is


                              W(ωx, ωy) = 4 αx αy C ⁄ [ (αx² + ωx²)(αy² + ωy²) ]                                          (1.4-19)
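
The separable Markov model of Eqs. 1.4-18 and 1.4-19 is easy to exercise numerically. The sketch below (Python; the parameter values and function names are arbitrary illustrations) evaluates the autocovariance and its closed-form power spectrum and checks the Fourier-transform relationship at one frequency pair.

```python
import numpy as np

def markov_covariance(tau_x, tau_y, C=1.0, alpha_x=0.1, alpha_y=0.1):
    """Separable first-order Markov autocovariance, Eq. 1.4-18."""
    return C * np.exp(-alpha_x * np.abs(tau_x) - alpha_y * np.abs(tau_y))

def markov_power_spectrum(omega_x, omega_y, C=1.0, alpha_x=0.1, alpha_y=0.1):
    """Closed-form power spectrum of the separable Markov model, Eq. 1.4-19."""
    return (4.0 * alpha_x * alpha_y * C
            / ((alpha_x ** 2 + omega_x ** 2) * (alpha_y ** 2 + omega_y ** 2)))

if __name__ == "__main__":
    # Numerical check: the 2-D Fourier integral of Eq. 1.4-18 should match Eq. 1.4-19.
    tau = np.linspace(-100.0, 100.0, 2001)           # lag grid, step 0.1
    d_tau = tau[1] - tau[0]
    tx, ty = np.meshgrid(tau, tau, sparse=True)
    K = markov_covariance(tx, ty)
    wx, wy = 0.05, 0.02
    numeric = np.sum(K * np.exp(-1j * (wx * tx + wy * ty))).real * d_tau ** 2
    print("numeric Fourier integral:", numeric)
    print("closed form, Eq. 1.4-19 :", markov_power_spectrum(wx, wy))
```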

    In the discussion of the deterministic characteristics of an image, both time and
space averages of the image function have been defined. An ensemble average has
also been defined for the statistical image characterization. A question of interest is:
What is the relationship between the spatial-time averages and the ensemble aver-
ages? The answer is that for certain stochastic processes, which are called ergodic
processes, the spatial-time averages and the ensemble averages are equal. Proof of
the ergodicity of a process in the general case is often difficult; it usually suffices to
determine second-order ergodicity in which the first- and second-order space-time
averages are equal to the first- and second-order ensemble averages.
    Often, the probability density or moments of a stochastic image field are known
at the input to a system, and it is desired to determine the corresponding information
at the system output. If the system transfer function is algebraic in nature, the output
probability density can be determined in terms of the input probability density by a
probability density transformation. For example, let the system output be related to
the system input by


                                   G ( x, y, t ) = O F { F ( x, y, t ) }                            (1.4-20)

where O F { · } is a monotonic operator on F(x, y). The probability density of the out-
put field is then


                           p{G; x, y, t} = p{F; x, y, t} ⁄ | dOF{F(x, y, t)} ⁄ dF |                                       (1.4-21)


The extension to higher-order probability densities is straightforward, but often
cumbersome.
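
Equation 1.4-21 can be verified by simulation for any monotonic point operator. In the Python sketch below, the input is drawn from the exponential density of Eq. 1.4-2c and the operator is a square root; both choices, and all names, are illustrative only.

```python
import numpy as np

ALPHA = 1.5          # parameter of the exponential input density, Eq. 1.4-2c

def operator(f):
    """A monotonic point operator O_F{.}; the square root is an illustrative choice."""
    return np.sqrt(f)

def output_density(g):
    """Eq. 1.4-21 for the choices above: p_F(f)/|dO_F/dF| with f = g**2,
    which reduces to 2*ALPHA*g*exp(-ALPHA*g**2)."""
    return 2.0 * ALPHA * g * np.exp(-ALPHA * g ** 2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    F = rng.exponential(scale=1.0 / ALPHA, size=200_000)   # samples of the input field
    G = operator(F)                                         # transformed samples
    hist, edges = np.histogram(G, bins=60, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    max_err = np.max(np.abs(hist - output_density(centers)))
    print("largest deviation between histogram and Eq. 1.4-21 prediction:", max_err)
```
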
    The moments of the output of a system can be obtained directly from knowledge
of the output probability density, or in certain cases, indirectly in terms of the system
operator. For example, if the system operator is additive linear, the mean of the sys-
tem output is


                  E { G ( x, y, t ) } = E { O F { F ( x, y, t ) } } = O F { E { F ( x, y, t ) } }           (1.4-22)

It can be shown that if a system operator is additive linear, and if the system input
image field is stationary in the strict sense, the system output is also stationary in the
strict sense. Furthermore, if the input is stationary in the wide sense, the output is
also wide-sense stationary.
    Consider an additive linear space-invariant system whose output is described by
the three-dimensional convolution integral

               G(x, y, t) = ∫_{–∞}^{∞} ∫_{–∞}^{∞} ∫_{–∞}^{∞} F(x – α, y – β, t – γ) H(α, β, γ) dα dβ dγ                   (1.4-23)

where H(x, y, t) is the system impulse response. The mean of the output is then

      E{G(x, y, t)} = ∫_{–∞}^{∞} ∫_{–∞}^{∞} ∫_{–∞}^{∞} E{F(x – α, y – β, t – γ)} H(α, β, γ) dα dβ dγ                      (1.4-24)


If the input image field is stationary, its mean η F is a constant that may be brought
outside the integral. As a result,

             E{G(x, y, t)} = ηF ∫_{–∞}^{∞} ∫_{–∞}^{∞} ∫_{–∞}^{∞} H(α, β, γ) dα dβ dγ = ηF H(0, 0, 0)                      (1.4-25)


where H ( 0, 0, 0 ) is the transfer function of the linear system evaluated at the origin
in the spatial-time frequency domain. Following the same techniques, it can easily
be shown that the autocorrelation functions of the system input and output are
related by

              RG(τx, τy, τt) = RF(τx, τy, τt) ⊛ H(τx, τy, τt) ⊛ H∗(–τx, –τy, –τt)                                         (1.4-26)

Taking Fourier transforms on both sides of Eq. 1.4-26 and invoking the Fourier
transform convolution theorem, one obtains the relationship between the power
spectra of the input and output image,

              W G ( ω x, ω y, ω t ) = W F ( ω x, ω y, ω t )H ( ω x, ω y, ω t )H ∗ ( ω x, ω y, ω t )        (1.4-27a)

or

                       WG(ωx, ωy, ωt) = WF(ωx, ωy, ωt) |H(ωx, ωy, ωt)|²                                                   (1.4-27b)


This result is found useful in analyzing the effect of noise in imaging systems.
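
Equation 1.4-27b also holds exactly, realization by realization, for periodogram estimates when the convolution is performed circularly with the DFT. The Python sketch below (the 3 × 3 averaging impulse response is an arbitrary illustrative choice) filters a white noise field and compares the measured output spectrum with the input spectrum weighted by |H|².

```python
import numpy as np

def periodogram(x):
    """Simple power-spectrum estimate, |DFT|^2 / N (see Eq. 1.4-13)."""
    return np.abs(np.fft.fft2(x)) ** 2 / x.size

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    N = 256
    F = rng.normal(size=(N, N))                     # white, stationary input field

    # 3x3 smoothing impulse response, zero-padded to the field size.
    h = np.zeros((N, N))
    h[:3, :3] = np.ones((3, 3)) / 9.0
    H = np.fft.fft2(h)                              # transfer function of the system

    G = np.real(np.fft.ifft2(np.fft.fft2(F) * H))   # circular analog of Eq. 1.4-23

    # Eq. 1.4-27b: the output spectrum equals the input spectrum times |H|^2.
    predicted = periodogram(F) * np.abs(H) ** 2
    measured = periodogram(G)
    print("max |measured - predicted|:", np.max(np.abs(measured - predicted)))
```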

REFERENCES


1.   J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York,
     1996.
2.   A. Papoulis, Systems and Transforms with Applications in Optics, McGraw-Hill, New
     York, 1968.
3.   J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psy-
     chopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
4.   A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed.,
     McGraw-Hill, New York, 1991.
5.   J. B. Thomas, An Introduction to Applied Probability Theory and Random Processes,
     Wiley, New York, 1971.
6.   J. W. Goodman, Statistical Optics, Wiley, New York, 1985.
7.   E. R. Dougherty, Random Processes for Image and Signal Processing, Vol. PM44, SPIE
     Press, Bellingham, Wash., 1998.




2
PSYCHOPHYSICAL VISION PROPERTIES




For efficient design of imaging systems for which the output is a photograph or dis-
play to be viewed by a human observer, it is obviously beneficial to have an under-
standing of the mechanism of human vision. Such knowledge can be utilized to
develop conceptual models of the human visual process. These models are vital in
the design of image processing systems and in the construction of measures of
image fidelity and intelligibility.


2.1. LIGHT PERCEPTION

Light, according to Webster's Dictionary (1), is “radiant energy which, by its action
on the organs of vision, enables them to perform their function of sight.” Much is
known about the physical properties of light, but the mechanisms by which light
interacts with the organs of vision are not as well understood. Light is known to be a
form of electromagnetic radiation lying in a relatively narrow region of the electro-
magnetic spectrum over a wavelength band of about 350 to 780 nanometers (nm). A
physical light source may be characterized by the rate of radiant energy (radiant
intensity) that it emits at a particular spectral wavelength. Light entering the human
visual system originates either from a self-luminous source or from light reflected
from some object or from light transmitted through some translucent object. Let
E ( λ ) represent the spectral energy distribution of light emitted from some primary
light source, and also let t ( λ ) and r ( λ ) denote the wavelength-dependent transmis-
sivity and reflectivity, respectively, of an object. Then, for a transmissive object, the
observed light spectral energy distribution is

                                   C ( λ ) = t ( λ )E ( λ )                      (2.1-1)





     FIGURE 2.1-1. Spectral energy distributions of common physical light sources.



and for a reflective object
                                  C ( λ ) = r ( λ )E ( λ )                      (2.1-2)
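
Equations 2.1-1 and 2.1-2 amount to a pointwise product of spectral curves and are simple to compute once the curves are sampled. The Python fragment below uses purely hypothetical illuminant and reflectivity curves, not measured data, to form the observed distribution of Eq. 2.1-2.

```python
import numpy as np

# Wavelength grid over the visible band (nm); the curves below are illustrative only.
wavelengths = np.arange(400, 701, 10).astype(float)

# Hypothetical illuminant E(lambda) and object reflectivity r(lambda).
E = np.ones_like(wavelengths)                                    # flat, "equal-energy" source
r = np.exp(-0.5 * ((wavelengths - 620.0) / 40.0) ** 2)           # reflects mostly long wavelengths

# Eq. 2.1-2: observed spectral energy distribution of the reflective object.
C = r * E

print("peak of C(lambda) at", wavelengths[np.argmax(C)], "nm")
```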

   Figure 2.1-1 shows plots of the spectral energy distribution of several common
sources of light encountered in imaging systems: sunlight, a tungsten lamp, a

light-emitting diode, a mercury arc lamp, and a helium–neon laser (2). A human
being viewing each of the light sources will perceive the sources differently. Sun-
light appears as an extremely bright yellowish-white light, while the tungsten light
bulb appears less bright and somewhat yellowish. The light-emitting diode appears
to be a dim green; the mercury arc light is a highly bright bluish-white light; and the
laser produces an extremely bright and pure red beam. These observations provoke
many questions. What are the attributes of the light sources that cause them to be
perceived differently? Is the spectral energy distribution sufficient to explain the dif-
ferences in perception? If not, what are adequate descriptors of visual perception?
As will be seen, answers to these questions are only partially available.
    There are three common perceptual descriptors of a light sensation: brightness,
hue, and saturation. The characteristics of these descriptors are considered below.
    If two light sources with the same spectral shape are observed, the source of
greater physical intensity will generally appear to be perceptually brighter. How-
ever, there are numerous examples in which an object of uniform intensity appears
not to be of uniform brightness. Therefore, intensity is not an adequate quantitative
measure of brightness.
    The attribute of light that distinguishes a red light from a green light or a yellow
light, for example, is called the hue of the light. A prism and slit arrangement
(Figure 2.1-2) can produce narrowband wavelength light of varying color. However,
it is clear that the light wavelength is not an adequate measure of color because
some colored lights encountered in nature are not contained in the rainbow of light
produced by a prism. For example, purple light is absent. Purple light can be
produced by combining equal amounts of red and blue narrowband lights. Other
counterexamples exist. If two light sources with the same spectral energy distribu-
tion are observed under identical conditions, they will appear to possess the same
hue. However, it is possible to have two light sources with different spectral energy
distributions that are perceived identically. Such lights are called metameric pairs.
    The third perceptual descriptor of a colored light is its saturation, the attribute
that distinguishes a spectral light from a pastel light of the same hue. In effect, satu-
ration describes the whiteness of a light source. Although it is possible to speak of
the percentage saturation of a color referenced to a spectral color on a chromaticity
diagram of the type shown in Figure 3.3-3, saturation is not usually considered to be
a quantitative measure.




                    FIGURE 2.1-2. Refraction of light from a prism.




                   FIGURE 2.1-3. Perceptual representation of light.




   As an aid to classifying colors, it is convenient to regard colors as being points in
some color solid, as shown in Figure 2.1-3. The Munsell system of color
classification actually has a form similar in shape to this figure (3). However, to be
quantitatively useful, a color solid should possess metric significance. That is, a unit
distance within the color solid should represent a constant perceptual color
difference regardless of the particular pair of colors considered. The subject of
perceptually significant color solids is considered later.



2.2. EYE PHYSIOLOGY

A conceptual technique for the establishment of a model of the human visual system
would be to perform a physiological analysis of the eye, the nerve paths to the brain,
and those parts of the brain involved in visual perception. Such a task, of course, is
presently beyond human abilities because of the large number of infinitesimally
small elements in the visual chain. However, much has been learned from physio-
logical studies of the eye that is helpful in the development of visual models (4–7).




                          FIGURE 2.2-1. Eye cross section.


   Figure 2.2-1 shows the horizontal cross section of a human eyeball. The front
of the eye is covered by a transparent surface called the cornea. The remaining outer
cover, called the sclera, is composed of a fibrous coat that surrounds the choroid, a
layer containing blood capillaries. Inside the choroid is the retina, which is com-
posed of two types of receptors: rods and cones. Nerves connecting to the retina
leave the eyeball through the optic nerve bundle. Light entering the cornea is
focused on the retina surface by a lens that changes shape under muscular control to




      FIGURE 2.2-2. Sensitivity of rods and cones based on measurements by Wald.


perform proper focusing of near and distant objects. An iris acts as a diaphragm to
control the amount of light entering the eye.
    The rods in the retina are long slender receptors; the cones are generally shorter and
thicker in structure. There are also important operational distinctions. The rods are
more sensitive than the cones to light. At low levels of illumination, the rods provide a
visual response called scotopic vision. Cones respond to higher levels of illumination;
their response is called photopic vision. Figure 2.2-2 illustrates the relative sensitivities
of rods and cones as a function of illumination wavelength (7,8). An eye contains
about 6.5 million cones and 100 million rods distributed over the retina (4). Figure
2.2-3 shows the distribution of rods and cones over a horizontal line on the retina
(4). At a point near the optic nerve called the fovea, the density of cones is greatest.
This is the region of sharpest photopic vision. There are no rods or cones in the vicin-
ity of the optic nerve, and hence the eye has a blind spot in this region.




               FIGURE 2.2-3. Distribution of rods and cones on the retina.




       FIGURE 2.2-4. Typical spectral absorption curves of pigments of the retina.



    In recent years, it has been determined experimentally that there are three basic
types of cones in the retina (9, 10). These cones have different absorption character-
istics as a function of wavelength with peak absorptions in the red, green, and blue
regions of the optical spectrum. Figure 2.2-4 shows curves of the measured spectral
absorption of pigments in the retina for a particular subject (10). Two major points
of note regarding the curves are that the α cones, which are primarily responsible
for blue light perception, have relatively low sensitivity, and the absorption curves
overlap considerably. The existence of the three types of cones provides a physio-
logical basis for the trichromatic theory of color vision.
    When a light stimulus activates a rod or cone, a photochemical transition occurs,
producing a nerve impulse. The manner in which nerve impulses propagate through
the visual system is presently not well established. It is known that the optic nerve
bundle contains on the order of 800,000 nerve fibers. Because there are over
100,000,000 receptors in the retina, it is obvious that in many regions of the retina,
the rods and cones must be interconnected to nerve fibers on a many-to-one basis.
Because neither the photochemistry of the retina nor the propagation of nerve
impulses within the eye is well understood, a deterministic characterization of the
visual process is unavailable. One must be satisfied with the establishment of mod-
els that characterize, and hopefully predict, human visual response. The following
section describes several visual phenomena that should be considered in the model-
ing of the human visual process.


2.3. VISUAL PHENOMENA

The visual phenomena described below are interrelated, in some cases only mini-
mally, but in others, to a very large extent. For simplification in presentation and, in
some instances, lack of knowledge, the phenomena are considered disjoint.




                                (a) No background




                               (b) With background

                   FIGURE 2.3-1. Contrast sensitivity measurements.



Contrast Sensitivity. The response of the eye to changes in the intensity of illumina-
tion is known to be nonlinear. Consider a patch of light of intensity I + ∆I surrounded
by a background of intensity I (Figure 2.3-1a). The just noticeable difference ∆I is to
be determined as a function of I. Over a wide range of intensities, it is found that the
ratio ∆I ⁄ I , called the Weber fraction, is nearly constant at a value of about 0.02
(11; 12, p. 62). This result does not hold at very low or very high intensities, as illus-
trated by Figure 2.3-1a (13). Furthermore, contrast sensitivity is dependent on the
intensity of the surround. Consider the experiment of Figure 2.3-1b, in which two
patches of light, one of intensity I and the other of intensity I + ∆I , are sur-
rounded by light of intensity Io. The Weber fraction ∆I ⁄ I for this experiment is
plotted in Figure 2.3-1b as a function of the intensity of the background. In this
situation it is found that the range over which the Weber fraction remains constant is
reduced considerably compared to the experiment of Figure 2.3-1a. The envelope of
the lower limits of the curves of Figure 2.3-1b is equivalent to the curve of Figure
2.3-1a. However, the range over which ∆I ⁄ I is approximately constant for a fixed
background intensity I o is still comparable to the dynamic range of most electronic
imaging systems.




         (a) Step chart photo

         (b) Step chart intensity distribution

         (c) Ramp chart photo

         (d) Ramp chart intensity distribution

FIGURE 2.3-2. Mach band effect.


     Because the differential of the logarithm of intensity is

                                       d(log I) = dI ⁄ I                                      (2.3-1)
equal changes in the logarithm of the intensity of a light can be related to equal just
noticeable changes in its intensity over the region of intensities for which the Weber
fraction is constant. For this reason, in many image processing systems, operations
are performed on the logarithm of the intensity of an image point rather than the
intensity.
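
A small numerical illustration of this point is given below (Python; the 0.02 Weber fraction is the approximate value quoted above, and the epsilon guard is an implementation detail): a fixed fractional intensity increment maps to a nearly constant increment on a logarithmic scale, regardless of the absolute intensity level.

```python
import numpy as np

def to_log_scale(intensity, eps=1e-6):
    """Map image intensities to a logarithmic scale (eps guards against log 0)."""
    return np.log(intensity + eps)

if __name__ == "__main__":
    I = np.array([0.01, 0.1, 1.0, 10.0, 100.0])      # intensities spanning several decades
    weber = 0.02                                      # approximate Weber fraction
    step = to_log_scale(I * (1.0 + weber)) - to_log_scale(I)
    print(step)   # nearly constant, about log(1.02), at every intensity level
```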

Mach Band. Consider the set of gray scale strips shown in Figure 2.3-2a. The
reflected light intensity from each strip is uniform over its width and differs from its
neighbors by a constant amount; nevertheless, the visual appearance is that each
strip is darker at its right side than at its left. This is called the Mach band effect (14).
Figure 2.3-2c is a photograph of the Mach band pattern of Figure 2.3-2d. In the pho-
tograph, a bright bar appears at position B and a dark bar appears at D. Neither bar
would be predicted purely on the basis of the intensity distribution. The apparent
Mach band overshoot in brightness is a consequence of the spatial frequency
response of the eye. As will be seen shortly, the eye possesses a lower sensitivity to
high and low spatial frequencies than to midfrequencies. The implication for the
designer of image processing systems is that perfect fidelity of edge contours can be
sacrificed to some extent because the eye has imperfect response to high-spatial-
frequency brightness transitions.

Simultaneous Contrast. The simultaneous contrast phenomenon (7) is illustrated in
Figure 2.3-3. Each small square is actually the same intensity, but because of the dif-
ferent intensities of the surrounds, the small squares do not appear equally bright.
The hue of a patch of light is also dependent on the wavelength composition of sur-
rounding light. A white patch on a black background will appear to be yellowish if
the surround is a blue light.

Chromatic Adaption. The hue of a perceived color depends on the adaption of a
viewer (15). For example, the American flag will not immediately appear red, white,
and blue if the viewer has been subjected to high-intensity red light before viewing the
flag. The colors of the flag will appear to shift in hue toward the red complement, cyan.




                          FIGURE 2.3-3. Simultaneous contrast.

Color Blindness. Approximately 8% of the males and 1% of the females in the
world population are subject to some form of color blindness (16, p. 405). There are
various degrees of color blindness. Some people, called monochromats, possess
only rods or rods plus one type of cone, and therefore are only capable of monochro-
matic vision. Dichromats are people who possess two of the three types of cones.
Both monochromats and dichromats can distinguish colors insofar as they have
learned to associate particular colors with particular objects. For example, dark
roses are assumed to be red, and light roses are assumed to be yellow. But if a red
rose were painted yellow such that its reflectivity was maintained at the same value,
a monochromat might still call the rose red. Similar examples illustrate the inability
of dichromats to distinguish hue accurately.


2.4. MONOCHROME VISION MODEL

One of the modern techniques of optical system design entails the treatment of an
optical system as a two-dimensional linear system that is linear in intensity and can
be characterized by a two-dimensional transfer function (17). Consider the linear
optical system of Figure 2.4-1. The system input is a spatial light distribution
obtained by passing a constant-intensity light beam through a transparency with a
spatial sine-wave transmittance. Because the system is linear, the spatial output
intensity distribution will also exhibit sine-wave intensity variations with possible
changes in the amplitude and phase of the output intensity compared to the input
intensity. By varying the spatial frequency (number of intensity cycles per linear
dimension) of the input transparency, and recording the output intensity level and
phase, it is possible, in principle, to obtain the optical transfer function (OTF) of the
optical system.
   Let H ( ω x, ω y ) represent the optical transfer function of a two-dimensional linear
system where ω x = 2π ⁄ T x and ω y = 2π ⁄ Ty are angular spatial frequencies with
spatial periods T x and Ty in the x and y coordinate directions, respectively. Then,
with I I ( x, y ) denoting the input intensity distribution of the object and I o ( x, y )




              FIGURE 2.4-1. Linear systems analysis of an optical system.


representing the output intensity distribution of the image, the frequency spectra of
the input and output signals are defined as

                   II(ωx, ωy) = ∫_{–∞}^{∞} ∫_{–∞}^{∞} II(x, y) exp{ –i(ωx x + ωy y) } dx dy                               (2.4-1)

                   IO(ωx, ωy) = ∫_{–∞}^{∞} ∫_{–∞}^{∞} IO(x, y) exp{ –i(ωx x + ωy y) } dx dy                               (2.4-2)



The input and output intensity spectra are related by


                                 I O ( ω x, ω y ) = H ( ω x, ω y ) I I ( ω x, ω y )               (2.4-3)


The spatial distribution of the image intensity can be obtained by an inverse Fourier
transformation of Eq. 2.4-2, yielding

              IO(x, y) = (1 ⁄ 4π²) ∫_{–∞}^{∞} ∫_{–∞}^{∞} IO(ωx, ωy) exp{ i(ωx x + ωy y) } dωx dωy                         (2.4-4)


In many systems, the designer is interested only in the magnitude variations of the
output intensity with respect to the magnitude variations of the input intensity, not
the phase variations. The ratio of the magnitudes of the Fourier transforms of the
input and output signals,

                                        | IO(ωx, ωy) | ⁄ | II(ωx, ωy) | = | H(ωx, ωy) |                                   (2.4-5)

is called the modulation transfer function (MTF) of the optical system.
    Much effort has been given to application of the linear systems concept to the
human visual system (18–24). A typical experiment to test the validity of the linear
systems model is as follows. An observer is shown two sine-wave grating transpar-
encies, a reference grating of constant contrast and spatial frequency and a variable-
contrast test grating whose spatial frequency is set at a value different from that of
the reference. Contrast is defined as the ratio

                                                  (max – min) ⁄ (max + min)

where max and min are the maximum and minimum of the grating intensity,
respectively. The contrast of the test grating is varied until the brightnesses of the
bright and dark regions of the two transparencies appear identical. In this manner it is
possible to develop a plot of the MTF of the human visual system. Figure 2.4-2a is a




FIGURE 2.4-2. Hypothetical measurements of the spatial frequency response of the human
visual system.

FIGURE 2.4-3. MTF measurements of the human visual system by modulated sine-wave
grating.




                FIGURE 2.4-4. Logarithmic model of monochrome vision.



hypothetical plot of the MTF as a function of the input signal contrast. Another indi-
cation of the form of the MTF can be obtained by observation of the composite sine-
wave grating of Figure 2.4-3, in which spatial frequency increases in one coordinate
direction and contrast increases in the other direction. The envelope of the visible
bars generally follows the MTF curves of Figure 2.4-2a (23).
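
A chart of the kind shown in Figure 2.4-3 can be synthesized directly. The Python sketch below is a minimal construction, not the procedure used to generate the book's figure; the frequency range, chart size, and mean level are arbitrary, and contrast is the (max – min)/(max + min) ratio defined earlier.

```python
import numpy as np

def contrast_frequency_chart(width=512, height=256,
                             min_cycles=1.0, max_cycles=64.0, mean_level=0.5):
    """Sine-wave chart: spatial frequency increases along x, contrast with row index."""
    x = np.linspace(0.0, 1.0, width)
    y = np.linspace(0.0, 1.0, height)[:, np.newaxis]

    # Logarithmically increasing spatial frequency along x (cycles across the chart).
    cycles = min_cycles * (max_cycles / min_cycles) ** x
    phase = 2.0 * np.pi * np.cumsum(cycles) / width   # integrate frequency to get phase

    contrast = y                                      # 0 in the first row, 1 in the last
    # Each row nominally has Michelson contrast (max - min)/(max + min) equal to `contrast`.
    chart = mean_level * (1.0 + contrast * np.sin(phase))
    return chart

if __name__ == "__main__":
    img = contrast_frequency_chart()
    row = img[-1]                                     # the row with nominal contrast 1
    michelson = (row.max() - row.min()) / (row.max() + row.min())
    print("Michelson contrast of that row:", round(michelson, 3))
```
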
   Referring to Figure 2.4-2a, it is observed that the MTF measurement depends on
the input contrast level. Furthermore, if the input sine-wave grating is rotated rela-
tive to the optic axis of the eye, the shape of the MTF is altered somewhat. Thus, it
can be concluded that the human visual system, as measured by this experiment, is
nonlinear and anisotropic (rotationally variant).
   It has been postulated that the nonlinear response of the eye to intensity variations
is logarithmic in nature and occurs near the beginning of the visual information
processing system, that is, near the rods and cones, before spatial interaction occurs
between visual signals from individual rods and cones. Figure 2.4-4 shows a simple
logarithmic eye model for monochromatic vision. If the eye exhibits a logarithmic
response to input intensity and a signal grating contains a recording of an
exponential sine wave, that is, exp { sin { I I ( x, y ) } } , the human visual system can be
linearized. A hypothetical MTF obtained by measuring an observer's response to an
exponential sine-wave grating (Figure 2.4-2b) can be fitted reasonably well by a sin-
gle curve for low- and mid-spatial frequencies. Figure 2.4-5 is a plot of the measured
MTF of the human visual system obtained by Davidson (25) for an exponential




         FIGURE 2.4-5. MTF measurements with exponential sine-wave grating.

sine-wave test signal. The high-spatial-frequency portion of the curve has been
extrapolated for an average input contrast.
    The logarithmic/linear system eye model of Figure 2.4-4 has proved to provide a
reasonable prediction of visual response over a wide range of intensities. However,
at high spatial frequencies and at very low or very high intensities, observed
responses depart from responses predicted by the model. To establish a more accu-
rate model, it is necessary to consider the physical mechanisms of the human visual
system.
    The nonlinear response of rods and cones to intensity variations is still a subject
of active research. Hypotheses have been introduced suggesting that the nonlinearity
is based on chemical activity, electrical effects, and neural feedback. The basic loga-
rithmic model assumes the form

                          IO ( x, y ) = K 1 log { K 2 + K 3 I I ( x, y ) }                 (2.4-6)

where the Ki are constants and I I ( x, y ) denotes the input field to the nonlinearity
and I O ( x, y ) is its output. Another model that has been suggested (7, p. 253) follows
the fractional response


                                 IO(x, y) = K1 II(x, y) ⁄ [ K2 + II(x, y) ]                                               (2.4-7)


where K 1 and K 2 are constants. Mannos and Sakrison (26) have studied the effect
of various nonlinearities employed in an analytical visual fidelity measure. Their
results, which are discussed in greater detail in Chapter 3, establish that a power law
nonlinearity of the form

                                    IO(x, y) = [ II(x, y) ]^s                                                             (2.4-8)


where s is a constant, typically 1/3 or 1/2, provides good agreement between the
visual fidelity measure and subjective assessment. The three models for the nonlin-
ear response of rods and cones defined by Eqs. 2.4-6 to 2.4-8 can be forced to a
reasonably close agreement over some midintensity range by an appropriate choice
of scaling constants.
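
The three candidate nonlinearities of Eqs. 2.4-6 to 2.4-8 are compared numerically in the sketch below (Python). The constants K1, K2, K3, and s are illustrative placeholders rather than fitted values; forcing close agreement over a midintensity range would require choosing them deliberately, as noted above.

```python
import numpy as np

def log_model(I, K1=1.0, K2=1.0, K3=1.0):
    """Logarithmic receptor nonlinearity, Eq. 2.4-6."""
    return K1 * np.log(K2 + K3 * I)

def fractional_model(I, K1=2.0, K2=1.0):
    """Fractional (saturating) nonlinearity, Eq. 2.4-7."""
    return K1 * I / (K2 + I)

def power_law_model(I, s=1.0 / 3.0):
    """Power-law nonlinearity, Eq. 2.4-8, with s typically 1/3 or 1/2."""
    return I ** s

if __name__ == "__main__":
    # Evaluate the three candidate curves over a midintensity range.
    I = np.linspace(0.1, 10.0, 5)
    for model in (log_model, fractional_model, power_law_model):
        print(model.__name__, np.round(model(I), 3))
```
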
   The physical mechanisms accounting for the spatial frequency response of the eye
are partially optical and partially neural. As an optical instrument, the eye has limited
resolution because of the finite size of the lens aperture, optical aberrations, and the
finite dimensions of the rods and cones. These effects can be modeled by a low-pass
transfer function inserted between the receptor and the nonlinear response element.
The most significant contributor to the frequency response of the eye is the lateral
inhibition process (27). The basic mechanism of lateral inhibition is illustrated in




                       FIGURE 2.4-6. Lateral inhibition effect.



Figure 2.4-6. A neural signal is assumed to be generated by a weighted contribution
of many spatially adjacent rods and cones. Some receptors actually exert an inhibi-
tory influence on the neural response. The weighting values are, in effect, the
impulse response of the human visual system beyond the retina. The two-dimen-
sional Fourier transform of this impulse response is the postretina transfer function.
    When a light pulse is presented to a human viewer, there is a measurable delay in
its perception. Also, perception continues beyond the termination of the pulse for a
short period of time. This delay and lag effect arising from neural temporal response
limitations in the human visual system can be modeled by a linear temporal transfer
function.
    Figure 2.4-7 shows a model for monochromatic vision based on results of the
preceding discussion. In the model, the output of the wavelength-sensitive receptor
is fed to a low-pass type of linear system that represents the optics of the eye. Next
follows a general monotonic nonlinearity that represents the nonlinear intensity
response of rods or cones. Then the lateral inhibition process is characterized by a
linear system with a bandpass response. Temporal filtering effects are modeled by
the following linear system. Hall and Hall (28) have investigated this model exten-
sively and have found transfer functions for the various elements that accurately
model the total system response. The monochromatic vision model of Figure 2.4-7,
with appropriately scaled parameters, seems to be sufficiently detailed for most
image processing applications. In fact, the simpler logarithmic model of Figure
2.4-4 is probably adequate for the bulk of applications.
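
A minimal computational sketch of the cascade in Figure 2.4-7, restricted to its spatial elements, is given below (Python). The Gaussian optics filter, the log nonlinearity, and the difference-of-Gaussians stand-in for lateral inhibition are all assumptions made for illustration; they are not the transfer functions fitted by Hall and Hall (28).

```python
import numpy as np

def gaussian_transfer(shape, sigma):
    """Isotropic Gaussian transfer function on the DFT frequency grid."""
    fy = np.fft.fftfreq(shape[0])[:, np.newaxis]
    fx = np.fft.fftfreq(shape[1])[np.newaxis, :]
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

def monochrome_vision_response(image):
    """Cascade: low-pass optics -> point nonlinearity -> bandpass lateral inhibition."""
    spectrum = np.fft.fft2(image)
    optics = np.real(np.fft.ifft2(spectrum * gaussian_transfer(image.shape, sigma=1.0)))
    neural_input = np.log(1.0 + np.clip(optics, 0.0, None))     # pointwise nonlinearity
    # Bandpass (difference-of-Gaussians) filter standing in for lateral inhibition.
    G = np.fft.fft2(neural_input)
    bandpass = gaussian_transfer(image.shape, 1.0) - gaussian_transfer(image.shape, 4.0)
    return np.real(np.fft.ifft2(G * bandpass))

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    test_image = rng.uniform(0.0, 255.0, size=(128, 128))
    response = monochrome_vision_response(test_image)
    print("response range:", float(response.min()), float(response.max()))
```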




                  FIGURE 2.4-7. Extended model of monochrome vision.


2.5. COLOR VISION MODEL

There have been many theories postulated to explain human color vision, beginning
with the experiments of Newton and Maxwell (29–32). The classical model of
human color vision, postulated by Thomas Young in 1802 (31), is the trichromatic
model in which it is assumed that the eye possesses three types of sensors, each
sensitive over a different wavelength band. It is interesting to note that there was no
direct physiological evidence of the existence of three distinct types of sensors until
about 1960 (9,10).
   Figure 2.5-1 shows a color vision model proposed by Frei (33). In this model,
three receptors with spectral sensitivities s 1 ( λ ), s 2 ( λ ), s 3 ( λ ) , which represent the
absorption pigments of the retina, produce signals


                                     e 1 = ∫ C ( λ )s 1 ( λ ) d λ                      (2.5-1a)

                                    e 2 = ∫ C ( λ )s 2 ( λ ) d λ                       (2.5-1b)

                                    e 3 = ∫ C ( λ )s 3 ( λ ) d λ                       (2.5-1c)

where C ( λ ) is the spectral energy distribution of the incident light source. The three
signals e 1, e 2, e 3 are then subjected to a logarithmic transfer function and combined
to produce the outputs

                                d1 = log e1                                             (2.5-2a)

                                d2 = log e2 – log e1 = log( e2 ⁄ e1 )                   (2.5-2b)

                                d3 = log e3 – log e1 = log( e3 ⁄ e1 )                   (2.5-2c)




                                FIGURE 2.5-1. Color vision model.



Finally, the signals d 1, d 2, d 3 pass through linear systems with transfer functions
H 1 ( ω x, ω y ) , H 2 ( ω x, ω y ) , H 3 ( ω x, ω y ) to produce output signals g 1, g 2, g 3 that provide
the basis for perception of color by the brain.
    In the model of Figure 2.5-1, the signals d 2 and d 3 are related to the chromaticity
of a colored light while signal d 1 is proportional to its luminance. This model has
been found to predict many color vision phenomena quite accurately, and also to sat-
isfy the basic laws of colorimetry. For example, it is known that if the spectral
energy of a colored light changes by a constant multiplicative factor, the hue and sat-
uration of the light, as described quantitatively by its chromaticity coordinates,
remain invariant over a wide dynamic range. Examination of Eqs. 2.5-1 and 2.5-2
indicates that the chrominance signals d 2 and d 3 are unchanged in this case, and
that the luminance signal d 1 increases in a logarithmic manner. Other, more subtle
evaluations of the model are described by Frei (33).
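
The invariance argument of the preceding paragraph can be checked numerically. In the Python sketch below, the cone sensitivities are hypothetical Gaussian shapes rather than measured pigment data, and the receptor integrals of Eq. 2.5-1 are approximated by sums over a sampled wavelength grid.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10).astype(float)

def gaussian(peak, width):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

# Hypothetical cone spectral sensitivities s1, s2, s3 (illustrative shapes only).
s = np.stack([gaussian(570, 50), gaussian(540, 45), gaussian(445, 30)])

def model_signals(C):
    """Frei-model signals of Eqs. 2.5-1 and 2.5-2 for a spectral distribution C(lambda)."""
    e = s @ C                                  # Eq. 2.5-1: receptor outputs e1, e2, e3
    d1 = np.log(e[0])                          # luminance-like channel
    d2 = np.log(e[1] / e[0])                   # chrominance channels
    d3 = np.log(e[2] / e[0])
    return d1, d2, d3

if __name__ == "__main__":
    C = gaussian(600, 80)                      # some test light
    for k in (1.0, 5.0, 25.0):                 # multiplicative intensity scaling
        d1, d2, d3 = model_signals(k * C)
        print(f"scale {k:5.1f}:  d1 = {d1:7.3f}   d2 = {d2:7.3f}   d3 = {d3:7.3f}")
    # d2 and d3 are unchanged while d1 grows by log(k), as the text predicts.
```
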
    As shown in Figure 2.2-4, some indication of the spectral sensitivities s i ( λ ) of
the three types of retinal cones has been obtained by spectral absorption measure-
ments of cone pigments. However, direct physiological measurements are difficult
to perform accurately. Indirect estimates of cone spectral sensitivities have been
obtained from measurements of the color response of color-blind people by Konig
and Brodhun (34). Judd (35) has used these data to produce a linear transformation
relating the spectral sensitivity functions s i ( λ ) to spectral tristimulus values
obtained by colorimetric testing. The resulting sensitivity curves, shown in Figure
2.5-2, are unimodal and strictly positive, as expected from physiological consider-
ations (34).
    The logarithmic color vision model of Figure 2.5-1 may easily be extended, in
analogy with the monochromatic vision model of Figure 2.4-7, by inserting a linear
transfer function after each cone receptor to account for the optical response of the
eye. Also, a general nonlinearity may be substituted for the logarithmic transfer
function. It should be noted that the order of the receptor summation and the transfer
function operations can be reversed without affecting the output, because both are




   FIGURE 2.5-2. Spectral sensitivity functions of retinal cones based on Konig’s data.



linear operations. Figure 2.5-3 shows the extended model for color vision. It is
expected that the spatial frequency response of the g 1 neural signal through the
color vision model should be similar to the luminance spatial frequency response
discussed in Section 2.4. Sine-wave response measurements for colored lights
obtained by van der Horst et al. (36), shown in Figure 2.5-4, indicate that the chro-
matic response is shifted toward low spatial frequencies relative to the luminance
response. Lateral inhibition effects should produce a low spatial frequency rolloff
below the measured response.
   Color perception is relative; the perceived color of a given spectral energy distri-
bution is dependent on the viewing surround and state of adaption of the viewer. A
human viewer can adapt remarkably well to the surround or viewing illuminant of a
scene and essentially normalize perception to some reference white or overall color
balance of the scene. This property is known as chromatic adaption.




                    FIGURE 2.5-3. Extended model of color vision.




     FIGURE 2.5-4. Spatial frequency response measurements of the human visual system.



    The simplest visual model for chromatic adaption, proposed by von Kries (37,
16, p. 435), involves the insertion of automatic gain controls between the cones and
first linear system of Figure 2.5-2. These gains

                                  ai = [ ∫ W(λ) si(λ) dλ ]⁻¹                                                              (2.5-3)


for i = 1, 2, 3 are adjusted such that the modified cone response is unity when view-
ing a reference white with spectral energy distribution W ( λ ) . Von Kries's model is
attractive because of its qualitative reasonableness and simplicity, but chromatic
testing (16, p. 438) has shown that the model does not completely predict the chro-
matic adaptation effect. Wallis (38) has suggested that chromatic adaption may, in
part, result from a post-retinal neural inhibition mechanism that linearly attenuates
slowly varying visual field components. The mechanism could be modeled by the
low-spatial-frequency attenuation associated with the post-retinal transfer functions
H Li ( ω x, ω y ) of Figure 2.5-3. Undoubtedly, both retinal and post-retinal mechanisms
are responsible for the chromatic adaption effect. Further analysis and testing are
required to model the effect adequately.
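
The von Kries gains of Eq. 2.5-3 reduce to reciprocals of the white-point cone responses. The Python sketch below reuses the hypothetical Gaussian cone sensitivities of the earlier color-model sketch; the reference whites are likewise illustrative.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10).astype(float)

def gaussian(peak, width):
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

# Hypothetical cone sensitivities (illustrative only, not measured data).
s = np.stack([gaussian(570, 50), gaussian(540, 45), gaussian(445, 30)])

def von_kries_gains(W):
    """Eq. 2.5-3: gains normalizing each cone response to unity for the white W(lambda)."""
    return 1.0 / (s @ W)

def adapted_cone_response(C, W):
    """Cone responses to C(lambda) after von Kries adaptation to the white W(lambda)."""
    return von_kries_gains(W) * (s @ C)

if __name__ == "__main__":
    flat_white = np.ones_like(wavelengths)          # equal-energy reference white
    bluish_white = gaussian(470, 120)               # a different viewing illuminant
    print("response to the adapting white:", adapted_cone_response(flat_white, flat_white))
    print("same light under bluish adaptation:", adapted_cone_response(flat_white, bluish_white))
```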


REFERENCES

 1. Webster's New Collegiate Dictionary, G. & C. Merriam Co. (The Riverside Press),
    Springfield, MA, 1960.
 2. H. H. Malitson, “The Solar Energy Spectrum,” Sky and Telescope, 29, 4, March 1965,
    162–165.
 3. Munsell Book of Color, Munsell Color Co., Baltimore.
 4. M. H. Pirenne, Vision and the Eye, 2nd ed., Associated Book Publishers, London, 1967.
 5. S. L. Polyak, The Retina, University of Chicago Press, Chicago, 1941.

 6.   L. H. Davson, The Physiology of the Eye, McGraw-Hill (Blakiston), New York, 1949.
 7.   T. N. Cornsweet, Visual Perception, Academic Press, New York, 1970.
 8.   G. Wald, “Human Vision and the Spectrum,” Science, 101, 2635, June 29, 1945, 653–658.
 9.   P. K. Brown and G. Wald, “Visual Pigment in Single Rods and Cones of the Human Ret-
      ina,” Science, 144, 3614, April 3, 1964, 45–52.
10.   G. Wald, “The Receptors for Human Color Vision,” Science, 145, 3636, September 4,
      1964, 1007–1017.
11.   S. Hecht, “The Visual Discrimination of Intensity and the Weber–Fechner Law,” J. General
      Physiology, 7, 1924, 241.
12.   W. F. Schreiber, Fundamentals of Electronic Imaging Systems, Springer-Verlag, Berlin,
      1986.
13.   S. S. Stevens, Handbook of Experimental Psychology, Wiley, New York, 1951.
14.   F. Ratliff, Mach Bands: Quantitative Studies on Neural Networks in the Retina, Holden-
      Day, San Francisco, 1965.
15.   G. S. Brindley, “Afterimages,” Scientific American, 209, 4, October 1963, 84–93.
16.   G. Wyszecki and W. S. Stiles, Color Science, 2nd ed., Wiley, New York, 1982.
17.   J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
18.   F. W. Campbell, “The Human Eye as an Optical Filter,” Proc. IEEE, 56, 6, June 1968,
      1009–1014.
19.   O. Bryngdahl, “Characteristics of the Visual System: Psychophysical Measurement of
      the Response to Spatial Sine-Wave Stimuli in the Mesopic Region,” J. Optical Society
      of America, 54, 9, September 1964, 1152–1160.
20.   E. M. Lowry and J. J. DePalma, “Sine Wave Response of the Visual System, I. The
      Mach Phenomenon,” J. Optical Society of America, 51, 7, July 1961, 740–746.
21.   E. M. Lowry and J. J. DePalma, “Sine Wave Response of the Visual System, II. Sine
      Wave and Square Wave Contrast Sensitivity,” J. Optical Society of America, 52, 3,
      March 1962, 328–335.
22.   M. B. Sachs, J. Nachmias, and J. G. Robson, “Spatial Frequency Channels in Human
      Vision,” J. Optical Society of America, 61, 9, September 1971, 1176–1186.
23.   T. G. Stockham, Jr., “Image Processing in the Context of a Visual Model,” Proc. IEEE,
      60, 7, July 1972, 828–842.
24.   D. E. Pearson, “A Realistic Model for Visual Communication Systems,” Proc. IEEE, 55,
      3, March 1967, 380–389.
25.   M. L. Davidson, “Perturbation Approach to Spatial Brightness Interaction in Human
      Vision,” J. Optical Society of America, 58, 9, September 1968, 1300–1308.
26.   J. L. Mannos and D. J. Sakrison, “The Effects of a Visual Fidelity Criterion on the
      Encoding of Images,” IEEE Trans. Information Theory, IT-20, 4, July 1974, 525–536.
27.   F. Ratliff, H. K. Hartline, and W. H. Miller, “Spatial and Temporal Aspects of Retinal
      Inhibitory Interaction,” J. Optical Society of America, 53, 1, January 1963, 110–120.
28.   C. F. Hall and E. L. Hall, “A Nonlinear Model for the Spatial Characteristics of the
      Human Visual System,” IEEE Trans. Systems, Man and Cybernetics, SMC-7, 3, March
      1977, 161–170.
29.   J. J. McCann, “Human Color Perception,” in Color: Theory and Imaging Systems, R. A.
      Enyard, Ed., Society of Photographic Scientists and Engineers, Washington, DC, 1973, 1–23.


30. I. Newton, Optiks, 4th ed., 1730; Dover Publications, New York, 1952.
31. T. Young, Philosophical Trans, 92, 1802, 12–48.
32. J. C. Maxwell, Scientific Papers of James Clerk Maxwell, W. D. Nevern, Ed., Dover
    Publications, New York, 1965.
33. W. Frei, “A New Model of Color Vision and Some Practical Limitations,” USCEE
    Report 530, University of Southern California, Image Processing Institute, Los Angeles
    March 1974, 128–143.
34. A. Konig and E. Brodhun, “Experimentell Untersuchungen uber die Psycho-physische
    fundamental in Bezug auf den Gesichtssinn,” Zweite Mittlg. S.B. Preuss Akademic der
    Wissenschaften, 1889, 641.
35. D. B. Judd, “Standard Response Functions for Protanopic and Deuteranopic Vision,” J.
    Optical Society of America, 35, 3, March 1945, 199–221.
36. C. J. C. van der Horst, C. M. de Weert, and M. A. Bouman, “Transfer of Spatial Chroma-
    ticity Contrast at Threshold in the Human Eye,” J. Optical Society of America, 57, 10,
    October 1967, 1260–1266.
37. J. von Kries, “Die Gesichtsempfindungen,” Nagel's Handbuch der Physiologie der
    Menschen, Vol. 3, 1904, 211.
38. R. H. Wallis, “Film Recording of Digital Color Images,” USCEE Report 570, University
    of Southern California, Image Processing Institute, Los Angeles, June 1975.




3
PHOTOMETRY AND COLORIMETRY




Chapter 2 dealt with human vision from a qualitative viewpoint in an attempt to
establish models for monochrome and color vision. These models may be made
quantitative by specifying measures of human light perception. Luminance mea-
sures are the subject of the science of photometry, while color measures are treated
by the science of colorimetry.


3.1. PHOTOMETRY

A source of radiative energy may be characterized by its spectral energy distribution
C ( λ ) , which specifies the time rate of energy the source emits per unit wavelength
interval. The total power emitted by a radiant source, given by the integral of the
spectral energy distribution,
                                          P = ∫_0^∞ C(λ) dλ                                                               (3.1-1)

is called the radiant flux of the source and is normally expressed in watts (W).
    A body that exists at an elevated temperature radiates electromagnetic energy
proportional in amount to its temperature. A blackbody is an idealized type of heat
radiator whose radiant flux is the maximum obtainable at any wavelength for a body
at a fixed temperature. The spectral energy distribution of a blackbody is given by
Planck's law (1):

                             C(λ) = C1 ⁄ ( λ⁵ [ exp{ C2 ⁄ λT } – 1 ] )                                                    (3.1-2)





                    FIGURE 3.1-1. Blackbody radiation functions.

where λ is the radiation wavelength, T is the temperature of the body, and C 1 and
C 2 are constants. Figure 3.1-1a is a plot of the spectral energy of a blackbody as a
function of temperature and wavelength. In the visible region of the electromagnetic
spectrum, the blackbody spectral energy distribution function of Eq. 3.1-2 can be
approximated by Wien's radiation law (1):

                               C(λ) = C1 ⁄ ( λ⁵ exp{ C2 ⁄ λT } )                                                          (3.1-3)
Wien's radiation function is plotted in Figure 3.1-1b over the visible spectrum.
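
Equations 3.1-2 and 3.1-3 are compared numerically in the sketch below (Python). The radiation constants are formed here from SI physical constants under one common convention; the book leaves C1 and C2 unspecified, so the absolute scale is illustrative, but the Planck-to-Wien ratio is independent of C1.

```python
import numpy as np

# Physical constants in SI units.
H = 6.62607015e-34      # Planck constant (J s)
C_LIGHT = 2.99792458e8  # speed of light (m/s)
K_B = 1.380649e-23      # Boltzmann constant (J/K)

C1 = 2.0 * np.pi * H * C_LIGHT ** 2      # first radiation constant (one common convention)
C2 = H * C_LIGHT / K_B                   # second radiation constant (m K)

def planck(wavelength_m, T):
    """Blackbody spectral distribution, Eq. 3.1-2 (Planck's law)."""
    return C1 / (wavelength_m ** 5 * (np.exp(C2 / (wavelength_m * T)) - 1.0))

def wien(wavelength_m, T):
    """Wien approximation, Eq. 3.1-3, valid when C2/(lambda*T) >> 1."""
    return C1 / (wavelength_m ** 5 * np.exp(C2 / (wavelength_m * T)))

if __name__ == "__main__":
    lam = np.linspace(400e-9, 700e-9, 4)      # visible wavelengths (m)
    T = 6000.0                                # roughly the sunlight blackbody temperature
    ratio = wien(lam, T) / planck(lam, T)
    print(np.round(ratio, 4))   # close to 1 over the visible band at 6000 K
```
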
   The most basic physical light source, of course, is the sun. Figure 2.1-1a shows a
plot of the measured spectral energy distribution of sunlight (2). The dashed line in




                  FIGURE 3.1-2. CIE standard illumination sources.




             FIGURE 3.1-3. Spectral energy distribution of CRT phosphors.



this figure, approximating the measured data, is a 6000 kelvin (K) blackbody curve.
Incandescent lamps are often approximated as blackbody radiators of a given tem-
perature in the range 1500 to 3500 K (3).
    The Commission Internationale de l'Eclairage (CIE), which is an international
body concerned with standards for light and color, has established several standard
sources of light, as illustrated in Figure 3.1-2 (4). Source SA is a tungsten filament
lamp. Over the wavelength band 400 to 700 nm, source SB approximates direct sun-
light, and source SC approximates light from an overcast sky. A hypothetical source,
called Illuminant E, is often employed in colorimetric calculations. Illuminant E is
assumed to emit constant radiant energy at all wavelengths.
    Cathode ray tube (CRT) phosphors are often utilized as light sources in image
processing systems. Figure 3.1-3 describes the spectral energy distributions of
common phosphors (5). Monochrome television receivers generally use a P4 phos-
phor, which provides a relatively bright blue-white display. Color television displays
utilize cathode ray tubes with red, green, and blue emitting phosphors arranged in
triad dots or strips. The P22 phosphor is typical of the spectral energy distribution of
commercial phosphor mixtures. Liquid crystal displays (LCDs) typically project a
white light through red, green and blue vertical strip pixels. Figure 3.1-4 is a plot of
typical color filter transmissivities (6).
    Photometric measurements seek to describe quantitatively the perceptual bright-
ness of visible electromagnetic energy (7,8). The link between photometric mea-
surements and radiometric measurements (physical intensity measurements) is the
photopic luminosity function, as shown in Figure 3.1-5a (9). This curve, which is a
CIE standard, specifies the spectral sensitivity of the human visual system to optical
radiation as a function of wavelength for a typical person referred to as the standard




                  FIGURE 3.1-4. Transmissivities of LCD color filters.




observer. In essence, the curve is a standardized version of the measurement of cone
sensitivity given in Figure 2.2-2 for photopic vision at relatively high levels of illu-
mination. The standard luminosity function for scotopic vision at relatively low
levels of illumination is illustrated in Figure 3.1-5b. Most imaging system designs
are based on the photopic luminosity function, commonly called the relative lumi-
nous efficiency.
   The perceptual brightness sensation evoked by a light source with spectral energy
distribution C ( λ ) is specified by its luminous flux, as defined by


                                F = Km ∫_0^∞ C(λ) V(λ) dλ                                                                 (3.1-4)



where V ( λ ) represents the relative luminous efficiency and K m is a scaling con-
stant. The modern unit of luminous flux is the lumen (lm), and the corresponding
value for the scaling constant is K m = 685 lm/W. An infinitesimally narrowband
source of 1 W of light at the peak wavelength of 555 nm of the relative luminous
efficiency curve therefore results in a luminous flux of 685 lm.
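
A numerical version of Eq. 3.1-4 is sketched below (Python). The relative luminous efficiency is approximated here by a Gaussian centered at 555 nm purely for illustration; the actual CIE V(λ) curve is tabulated, and a practical computation would substitute that table.

```python
import numpy as np

K_M = 685.0                                        # lm/W, scaling constant of Eq. 3.1-4

# Wavelength grid (nm) and a crude Gaussian stand-in for the photopic relative
# luminous efficiency V(lambda); the real CIE curve is tabulated, not Gaussian.
wavelengths = np.arange(380, 781, 5).astype(float)
V = np.exp(-0.5 * ((wavelengths - 555.0) / 42.0) ** 2)

def luminous_flux(C_watts_per_nm, d_lambda=5.0):
    """Eq. 3.1-4 evaluated as a Riemann sum over the wavelength grid (result in lumens)."""
    return K_M * np.sum(C_watts_per_nm * V) * d_lambda

if __name__ == "__main__":
    # A narrowband 1 W source at 555 nm should yield roughly 685 lm.
    narrowband = np.zeros_like(wavelengths)
    narrowband[wavelengths == 555.0] = 1.0 / 5.0   # 1 W concentrated in one 5 nm bin
    print("luminous flux:", round(luminous_flux(narrowband), 1), "lm")
```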




                 FIGURE 3.1-5. Relative luminous efficiency functions.




3.2. COLOR MATCHING


The basis of the trichromatic theory of color vision is that it is possible to match
an arbitrary color by superimposing appropriate amounts of three primary colors
(10–14). In an additive color reproduction system such as color television, the
three primaries are individual red, green, and blue light sources that are projected
onto a common region of space to reproduce a colored light. In a subtractive color
system, which is the basis of most color photography and color printing, a white
light sequentially passes through cyan, magenta, and yellow filters to reproduce a
colored light.



3.2.1. Additive Color Matching

An additive color-matching experiment is illustrated in Figure 3.2-1. In
Figure 3.2-1a, a patch of light (C) of arbitrary spectral energy distribution C ( λ ) , as
shown in Figure 3.2-2a, is assumed to be imaged onto the surface of an ideal
diffuse reflector (a surface that reflects uniformly over all directions and all
wavelengths). A reference white light (W) with an energy distribution, as in
Figure 3.2-2b, is imaged onto the surface along with three primary lights (P1),
(P2), (P3) whose spectral energy distributions are sketched in Figure 3.2-2c to e.
The three primary lights are first overlapped and their intensities are adjusted until
the overlapping region of the three primary lights perceptually matches the
reference white in terms of brightness, hue, and saturation. The amounts of the
three primaries A 1 ( W ) , A 2 ( W ) , A3 ( W ) are then recorded in some physical units,
such as watts. These are the matching values of the reference white. Next, the
intensities of the primaries are adjusted until a match is achieved with
the colored light (C), if a match is possible. The procedure to be followed
if a match cannot be achieved is considered later. The intensities of the primaries




                                      FIGURE 3.2-1. Color matching.




A1 ( C ), A 2 ( C ), A 3 ( C ) when a match is obtained are recorded, and normalized match-
ing values T1 ( C ) , T 2 ( C ) , T3 ( C ) , called tristimulus values, are computed as



             $T_1(C) = \frac{A_1(C)}{A_1(W)} \qquad T_2(C) = \frac{A_2(C)}{A_2(W)} \qquad T_3(C) = \frac{A_3(C)}{A_3(W)}$                (3.2-1)




                           FIGURE 3.2-2. Spectral energy distributions.


   If a match cannot be achieved by the procedure illustrated in Figure 3.2-1a, it is
often possible to perform the color matching outlined in Figure 3.2-1b. One of the
primaries, say (P3), is superimposed with the light (C), and the intensities of all
three primaries are adjusted until a match is achieved between the overlapping
region of primaries (P1) and (P2) with the overlapping region of (P3) and (C). If
such a match is obtained, the tristimulus values are

             $T_1(C) = \frac{A_1(C)}{A_1(W)} \qquad T_2(C) = \frac{A_2(C)}{A_2(W)} \qquad T_3(C) = \frac{-A_3(C)}{A_3(W)}$                (3.2-2)

In this case, the tristimulus value T3 ( C ) is negative. If a match cannot be achieved
with this geometry, a match is attempted between (P1) plus (P3) and (P2) plus (C). If
a match is achieved by this configuration, tristimulus value T2 ( C ) will be negative.
If this configuration fails, a match is attempted between (P2) plus (P3) and (P1) plus
(C). If a match is achieved by this configuration, the tristimulus value T1 ( C ) will be negative.


   Finally, in the rare instance in which a match cannot be achieved by either of the
configurations of Figure 3.2-1a or b, two of the primaries are superimposed with (C)
and an attempt is made to match the overlapped region with the remaining primary.
In the case illustrated in Figure 3.2-1c, if a match is achieved, the tristimulus values
become

            $T_1(C) = \frac{A_1(C)}{A_1(W)} \qquad T_2(C) = \frac{-A_2(C)}{A_2(W)} \qquad T_3(C) = \frac{-A_3(C)}{A_3(W)}$                (3.2-3)

If a match is not obtained by this configuration, one of the other two possibilities
will yield a match.
   The process described above is a direct method for specifying a color quantita-
tively. It has two drawbacks: The method is cumbersome and it depends on the per-
ceptual variations of a single observer. In Section 3.3 we consider standardized
quantitative color measurement in detail.
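
The bookkeeping of Eqs. 3.2-1 to 3.2-3 can be summarized in a few lines of Python. The matching amounts below are hypothetical and serve only to show the normalization and the sign convention for a primary that had to be mixed with the test color rather than with the other primaries.

```python
import numpy as np

# Hypothetical matching amounts (e.g., in watts) recorded for the reference
# white (W) and for a test color (C) in an additive matching experiment.
A_white = np.array([1.2, 0.9, 1.1])      # A1(W), A2(W), A3(W)
A_color = np.array([0.6, 0.45, -0.2])    # A3(C) < 0: (P3) was mixed with (C)

# Eqs. 3.2-1 / 3.2-2: tristimulus values are matching values normalized by
# the white matching values; the sign carries over unchanged.
T_color = A_color / A_white
print(T_color)                            # [ 0.5    0.5   -0.182]
```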


3.2.2. Subtractive Color Matching

A subtractive color-matching experiment is shown in Figure 3.2-3. An illumination
source with spectral energy distribution E ( λ ) passes sequentially through three dye
filters that are nominally cyan, magenta, and yellow. The spectral absorption of the
dye filters is a function of the dye concentration. It should be noted that the spectral
transmissivities of practical dyes change shape in a nonlinear manner with dye con-
centration.
     In the first step of the subtractive color-matching process, the dye concentrations
of the three spectral filters are varied until a perceptual match is obtained with a refer-
ence white (W). The dye concentrations are the matching values of the color match
A1 ( W ) , A 2 ( W ) , A 3 ( W ) . Next, the three dye concentrations are varied until a match is
obtained with a desired color (C). These matching values A1 ( C ), A2 ( C ), A3 ( C ) , are
then used to compute the tristimulus values T1 ( C ) , T2 ( C ), T 3 ( C ), as in Eq. 3.2-1.




                                FIGURE 3.2-3. Subtractive color matching.

It should be apparent that there is no fundamental theoretical difference between
color matching by an additive or a subtractive system. In a subtractive system, the
yellow dye acts as a variable absorber of blue light, and with ideal dyes, the yellow
dye effectively forms a blue primary light. In a similar manner, the magenta filter
ideally forms the green primary, and the cyan filter ideally forms the red primary.
Subtractive color systems ordinarily utilize cyan, magenta, and yellow dye spectral
filters rather than red, green, and blue dye filters because the cyan, magenta, and
yellow filters are notch filters which permit a greater transmission of light energy
than do narrowband red, green, and blue bandpass filters. In color printing, a fourth
filter layer of variable gray level density is often introduced to achieve a higher con-
trast in reproduction because common dyes do not possess a wide density range.


3.2.3. Axioms of Color Matching

The color-matching experiments described for additive and subtractive color match-
ing have been performed quite accurately by a number of researchers. It has been
found that perfect color matches sometimes cannot be obtained at either very high or
very low levels of illumination. Also, the color matching results do depend to some
extent on the spectral composition of the surrounding light. Nevertheless, the simple
color matching experiments have been found to hold over a wide range of condi-
tions.
   Grassman (15) has developed a set of eight axioms that define trichromatic color
matching and that serve as a basis for quantitative color measurements. In the
following presentation of these axioms, the symbol ◊ indicates a color match; the
symbol ⊕ indicates an additive color mixture; the symbol • indicates units of a
color. These axioms are:
1. Any color can be matched by a mixture of no more than three colored lights.
2. A color match at one radiance level holds over a wide range of levels.
3. Components of a mixture of colored lights cannot be resolved by the human eye.
4. The luminance of a color mixture is equal to the sum of the luminances of its
   components.
5. Law of addition. If color (M) matches color (N) and color (P) matches color (Q),
   then color (M) mixed with color (P) matches color (N) mixed with color (Q):

                  ( M ) ◊ ( N ) ∩ ( P ) ◊ ( Q) ⇒ [ ( M) ⊕ ( P) ] ◊ [ ( N ) ⊕ ( Q )]      (3.2-4)

6. Law of subtraction. If the mixture of (M) plus (P) matches the mixture of (N)
   plus (Q) and if (P) matches (Q), then (M) matches (N):

                 [ (M ) ⊕ (P )] ◊ [(N ) ⊕ ( Q) ] ∩ [( P) ◊ (Q) ] ⇒ ( M) ◊ (N )           (3.2-5)

7. Transitive law. If (M) matches (N) and if (N) matches (P), then (M) matches (P):


                               [ (M) ◊ (N)] ∩ [(N) ◊ (P) ] ⇒ (M) ◊ (P)                                            (3.2-6)


8. Color matching. (a) c units of (C) matches the mixture of m units of (M) plus n
   units of (N) plus p units of (P):


                  c • (C) ◊ [m • (M)] ⊕ [n • (N)] ⊕ [p • (P)]                                      (3.2-7)


   or (b) a mixture of c units of (C) plus m units of (M) matches the mixture of n units
   of (N) plus p units of (P):


                  [c • (C)] ⊕ [m • (M)] ◊ [n • (N)] ⊕ [p • (P)]                                    (3.2-8)


   or (c) a mixture of c units of (C) plus m units of (M) plus n units of (N) matches p
   units of (P):


                  [c • (C)] ⊕ [m • (M)] ⊕ [n • (N)] ◊ [p • (P)]                                    (3.2-9)


With Grassman's laws now specified, consideration is given to the development of a
quantitative theory for color matching.


3.3. COLORIMETRY CONCEPTS

Colorimetry is the science of quantitatively measuring color. In the trichromatic
color system, color measurements are in terms of the tristimulus values of a color or
a mathematical function of the tristimulus values.
   Referring to Section 3.2.3, the axioms of color matching state that a color C can
be matched by three primary colors P1, P2, P3. The qualitative match is expressed as

                   ( C ) ◊ [ A 1 ( C ) • ( P 1 ) ] ⊕ [ A 2 ( C ) • ( P 2 ) ] ⊕ [ A3 ( C ) • ( P 3 ) ]             (3.3-1)

where A 1 ( C ) , A2 ( C ) , A 3 ( C ) are the matching values of the color (C). Because the
intensities of incoherent light sources add linearly, the spectral energy distribution of
a color mixture is equal to the sum of the spectral energy distributions of its compo-
nents. As a consequence of this fact and Eq. 3.3-1, the spectral energy distribution
C ( λ ) can be replaced by its color-matching equivalent according to the relation

             $C(\lambda) \;◊\; A_1(C)P_1(\lambda) + A_2(C)P_2(\lambda) + A_3(C)P_3(\lambda) = \sum_{j=1}^{3} A_j(C)\,P_j(\lambda)$                (3.3-2)

Equation 3.3-2 simply means that the spectral energy distributions on both sides of
the equivalence operator ◊ evoke the same color sensation. Color matching is usu-
ally specified in terms of tristimulus values, which are normalized matching values,
as defined by

                                            $T_j(C) = \frac{A_j(C)}{A_j(W)}$                (3.3-3)

where A j ( W ) represents the matching value of the reference white. By this substitu-
tion, Eq. 3.3-2 assumes the form

                               $C(\lambda) \;◊\; \sum_{j=1}^{3} T_j(C)\,A_j(W)\,P_j(\lambda)$                (3.3-4)

   From Grassman's fourth law, the luminance of a color mixture Y(C) is equal to
the sum of the luminances of its primary components. Hence

                   $Y(C) = \int C(\lambda)\,V(\lambda)\,d\lambda = \sum_{j=1}^{3} \int A_j(C)\,P_j(\lambda)\,V(\lambda)\,d\lambda$                (3.3-5a)

or

                   $Y(C) = \sum_{j=1}^{3} \int T_j(C)\,A_j(W)\,P_j(\lambda)\,V(\lambda)\,d\lambda$                (3.3-5b)

where V ( λ ) is the relative luminous efficiency and P j ( λ ) represents the spectral
energy distribution of a primary. Equations 3.3-4 and 3.3-5 represent the quantita-
tive foundation for colorimetry.


3.3.1. Color Vision Model Verification

Before proceeding further with quantitative descriptions of the color-matching pro-
cess, it is instructive to determine whether the matching experiments and the axioms
of color matching are satisfied by the color vision model presented in Section 2.5. In
that model, the responses of the three types of receptors with sensitivities s 1 ( λ ) ,
s 2 ( λ ) , s 3 ( λ ) are modeled as


                                  e 1 ( C ) = ∫ C ( λ )s 1 ( λ ) d λ                       (3.3-6a)

                                  e 2 ( C ) = ∫ C ( λ )s 2 ( λ ) d λ                       (3.3-6b)

                                  e 3 ( C ) = ∫ C ( λ )s 3 ( λ ) d λ                       (3.3-6c)


If a viewer observes the primary mixture instead of C, then from Eq. 3.3-4, substitu-
tion for C ( λ ) should result in the same cone signals e i ( C ) . Thus


                        $e_1(C) = \sum_{j=1}^{3} T_j(C)\,A_j(W) \int P_j(\lambda)\,s_1(\lambda)\,d\lambda$                (3.3-7a)

                        $e_2(C) = \sum_{j=1}^{3} T_j(C)\,A_j(W) \int P_j(\lambda)\,s_2(\lambda)\,d\lambda$                (3.3-7b)

                        $e_3(C) = \sum_{j=1}^{3} T_j(C)\,A_j(W) \int P_j(\lambda)\,s_3(\lambda)\,d\lambda$                (3.3-7c)



Equation 3.3-7 can be written more compactly in matrix form by defining


                                     k ij =    ∫ Pj ( λ )si ( λ ) dλ                              (3.3-8)


Then


          e1 ( C )     k 11   k 12          k 13     A1( W )           0         0    T1 ( C )
          e2 ( C ) =   k 21   k 22          k 23        0         A2( W )        0    T2 ( C )    (3.3-9)
          e3 ( C )     k 31   k 32          k 33        0              0    A3( W )   T3 ( C )


or in yet more abbreviated form,


                                          e ( C ) = KAt ( C )                                    (3.3-10)


where the vectors and matrices of Eq. 3.3-10 are defined in correspondence with
Eqs. 3.3-7 to 3.3-9. The vector space notation used in this section is consistent with
the notation formally introduced in Appendix 1. Matrices are denoted as boldface
uppercase symbols, and vectors are denoted as boldface lowercase symbols. It
should be noted that for a given set of primaries, the matrix K is constant valued,
and for a given reference white, the white matching values of the matrix A are con-
stant. Hence, if a set of cone signals e i ( C ) were known for a color (C), the corre-
sponding tristimulus values Tj ( C ) could in theory be obtained from


                                      $\mathbf{t}(C) = [\mathbf{K}\mathbf{A}]^{-1}\,\mathbf{e}(C)$                (3.3-11)

provided that the matrix inverse of [KA] exists. Thus, it has been shown that with
proper selection of the tristimulus signals Tj ( C ) , any color can be matched in the
sense that the cone signals will be the same for the primary mixture as for the actual
color C. Unfortunately, the cone signals e i ( C ) are not easily measured physical
quantities, and therefore, Eq. 3.3-11 cannot be used directly to compute the tristimu-
lus values of a color. However, this has not been the intention of the derivation.
Rather, Eq. 3.3-11 has been developed to show the consistency of the color-match-
ing experiment with the color vision model.
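
The matrix relations of Eqs. 3.3-9 to 3.3-11 can be exercised directly. The sketch below uses hypothetical K and A matrices purely to show that the tristimulus values recovered by Eq. 3.3-11 reproduce the cone signals produced by Eq. 3.3-10.

```python
import numpy as np

# Hypothetical primary/cone coupling matrix K (the k_ij of Eq. 3.3-8) and
# diagonal matrix A of white matching values; values chosen for illustration.
K = np.array([[0.80, 0.30, 0.10],
              [0.20, 0.90, 0.20],
              [0.05, 0.10, 0.70]])
A = np.diag([1.10, 0.95, 1.05])          # A1(W), A2(W), A3(W)

t_C = np.array([0.4, 0.7, 0.2])          # tristimulus values of a color (C)

e_C = K @ A @ t_C                        # Eq. 3.3-10: cone signals
t_back = np.linalg.inv(K @ A) @ e_C      # Eq. 3.3-11: recover tristimulus values

print(np.allclose(t_back, t_C))          # True
```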


3.3.2. Tristimulus Value Calculation

It is possible indirectly to compute the tristimulus values of an arbitrary color for a
particular set of primaries if the tristimulus values of the spectral colors (narrow-
band light) are known for that set of primaries. Figure 3.3-1 is a typical sketch of the
tristimulus values required to match a unit energy spectral color with three arbitrary
primaries. These tristimulus values, which are fundamental to the definition of a pri-
mary system, are denoted as Ts1 ( λ ) , T s2 ( λ ) , T s3 ( λ ) , where λ is a particular wave-
length in the visible region. A unit energy spectral light ( C ψ ) at wavelength ψ with
energy distribution δ ( λ – ψ ) is matched according to the equation

               $e_i(C_\psi) = \int \delta(\lambda - \psi)\,s_i(\lambda)\,d\lambda = \sum_{j=1}^{3} \int A_j(W)\,P_j(\lambda)\,T_{s_j}(\psi)\,s_i(\lambda)\,d\lambda$                (3.3-12)


Now, consider an arbitrary color (C) with spectral energy distribution C ( λ ) . At
wavelength ψ , C ( ψ ) units of the color are matched by C ( ψ )Ts1 ( ψ ) , C ( ψ )Ts2 ( ψ ) ,
C ( ψ )Ts3 ( ψ ) tristimulus units of the primaries as governed by


               $\int C(\psi)\,\delta(\lambda - \psi)\,s_i(\lambda)\,d\lambda = \sum_{j=1}^{3} \int A_j(W)\,P_j(\lambda)\,C(\psi)\,T_{s_j}(\psi)\,s_i(\lambda)\,d\lambda$                (3.3-13)

Integrating each side of Eq. 3.3-13 over ψ and invoking the sifting integral gives the
cone signal for the color (C). Thus

   $\int\!\!\int C(\psi)\,\delta(\lambda - \psi)\,s_i(\lambda)\,d\lambda\,d\psi = e_i(C) = \sum_{j=1}^{3} \int\!\!\int A_j(W)\,P_j(\lambda)\,C(\psi)\,T_{s_j}(\psi)\,s_i(\lambda)\,d\psi\,d\lambda$                (3.3-14)


By correspondence with Eq. 3.3-7, the tristimulus values of (C) must be equivalent
to the second integral on the right of Eq. 3.3-14. Hence


                                         $T_j(C) = \int C(\psi)\,T_{s_j}(\psi)\,d\psi$                (3.3-15)
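
In sampled form, Eq. 3.3-15 is a sum over wavelength of the spectral distribution weighted by the tristimulus curves of Figure 3.3-1. The sketch below assumes the three curves and the spectrum are available as arrays on a common wavelength grid; the random arrays shown are placeholders for measured data.

```python
import numpy as np

wavelengths = np.arange(400.0, 701.0, 5.0)     # nm, common sample grid

# Placeholder spectral data; in practice Ts would hold the measured tristimulus
# curves Ts1, Ts2, Ts3 of the chosen primaries, and C the spectral energy
# distribution of the color to be matched.
Ts = np.random.rand(3, wavelengths.size)
C = np.random.rand(wavelengths.size)

# Eq. 3.3-15: Tj(C) = integral of C(psi) Tsj(psi) d(psi), one value per primary.
T = np.trapz(C * Ts, wavelengths, axis=1)
print(T)                                        # three tristimulus values
```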




FIGURE 3.3-1. Tristimulus values of typical red, green, and blue primaries required to
match unit energy throughout the spectrum.



   From Figure 3.3-1 it is seen that the tristimulus values obtained from solution of
Eq. 3.3-11 may be negative at some wavelengths. Because the tristimulus values
represent units of energy, the physical interpretation of this mathematical result is
that a color match can be obtained by adding the primary with negative tristimulus
value to the original color and then matching this resultant color with the remaining
primary. In this sense, any color can be matched by any set of primaries. However,
from a practical viewpoint, negative tristimulus values are not physically realizable,
and hence there are certain colors that cannot be matched in a practical color display
(e.g., a color television receiver) with fixed primaries. Fortunately, it is possible to
choose primaries so that most commonly occurring natural colors can be matched.
   The three tristimulus values T1, T2, T3 can be considered to form the three axes of
a color space as illustrated in Figure 3.3-2. A particular color may be described as
a vector in the color space, but it must be remembered that it is the coordinates of
the vectors (tristimulus values), rather than the vector length, that specify the color.
In Figure 3.3-2, a triangle, called a Maxwell triangle, has been drawn between the
three primaries. The intersection point of a color vector with the triangle gives an
indication of the hue and saturation of the color in terms of the distances of the point
from the vertices of the triangle.




           FIGURE 3.3-2. Color space for typical red, green, and blue primaries.




     FIGURE 3.3-3. Chromaticity diagram for typical red, green, and blue primaries.



   Often the luminance of a color is not of interest in a color match. In such situa-
tions, the hue and saturation of color (C) can be described in terms of chromaticity
coordinates, which are normalized tristimulus values, as defined by


                                   $t_1 \equiv \frac{T_1}{T_1 + T_2 + T_3}$                (3.3-16a)

                                   $t_2 \equiv \frac{T_2}{T_1 + T_2 + T_3}$                (3.3-16b)

                                   $t_3 \equiv \frac{T_3}{T_1 + T_2 + T_3}$                (3.3-16c)


Clearly, t3 = 1 – t 1 – t2 , and hence only two coordinates are necessary to describe a
color match. Figure 3.3-3 is a plot of the chromaticity coordinates of the spectral
colors for typical primaries. Only those colors within the triangle defined by the
three primaries are realizable by physical primary light sources.


3.3.3. Luminance Calculation

The tristimulus values of a color specify the amounts of the three primaries required
to match a color where the units are measured relative to a match of a reference
white. Often, it is necessary to determine the absolute rather than the relative
amount of light from each primary needed to reproduce a color match. This informa-
tion is found from luminance measurements or from calculations of a color match.


   From Eq. 3.3-5 it is noted that the luminance of a matched color Y(C) is equal to
the sum of the luminances of its primary components according to the relation

                             $Y(C) = \sum_{j=1}^{3} T_j(C) \int A_j(W)\,P_j(\lambda)\,V(\lambda)\,d\lambda$                (3.3-17)


The integrals of Eq. 3.3-17,


                                        $Y(P_j) = \int A_j(W)\,P_j(\lambda)\,V(\lambda)\,d\lambda$                (3.3-18)


are called luminosity coefficients of the primaries. These coefficients represent the
luminances of unit amounts of the three primaries for a match to a specific reference
white. Hence the luminance of a matched color can be written as


                   Y ( C ) = T 1 ( C )Y ( P1 ) + T 2 ( C )Y ( P 2 ) + T 3 ( C )Y ( P 3 )                                            (3.3-19)


Multiplying the right and left sides of Eq. 3.3-19 by the right and left sides, respec-
tively, of the definition of the chromaticity coordinate

                                     $t_1(C) = \frac{T_1(C)}{T_1(C) + T_2(C) + T_3(C)}$                (3.3-20)


and rearranging gives

                   $T_1(C) = \frac{t_1(C)\,Y(C)}{t_1(C)\,Y(P_1) + t_2(C)\,Y(P_2) + t_3(C)\,Y(P_3)}$                (3.3-21a)


Similarly,


                   $T_2(C) = \frac{t_2(C)\,Y(C)}{t_1(C)\,Y(P_1) + t_2(C)\,Y(P_2) + t_3(C)\,Y(P_3)}$                (3.3-21b)

                   $T_3(C) = \frac{t_3(C)\,Y(C)}{t_1(C)\,Y(P_1) + t_2(C)\,Y(P_2) + t_3(C)\,Y(P_3)}$                (3.3-21c)


Thus the tristimulus values of a color can be expressed in terms of the luminance
and chromaticity coordinates of the color.
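
Equations 3.3-19 to 3.3-21 amount to a simple rescaling: given the luminosity coefficients of the primaries, a color's chromaticity coordinates and its luminance determine its tristimulus values. The following sketch, with made-up luminosity coefficients, verifies the round trip.

```python
import numpy as np

Y_P = np.array([0.30, 0.59, 0.11])   # hypothetical luminosity coefficients Y(Pj)
T = np.array([0.5, 0.8, 0.3])        # tristimulus values of some color (C)

# Eq. 3.3-16: chromaticity coordinates; Eq. 3.3-19: luminance of the match.
t = T / T.sum()
Y = float(T @ Y_P)

# Eq. 3.3-21: recover the tristimulus values from (t1, t2, t3) and Y.
T_back = t * Y / float(t @ Y_P)

print(np.allclose(T_back, T))        # True
```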

3.4. TRISTIMULUS VALUE TRANSFORMATION

From Eq. 3.3-7 it is clear that there is no unique set of primaries for matching colors.
If the tristimulus values of a color are known for one set of primaries, a simple coor-
dinate conversion can be performed to determine the tristimulus values for another
set of primaries (16). Let (P1), (P2), (P3) be the original set of primaries with spec-
tral energy distributions P1 ( λ ), P2 ( λ ), P3 ( λ ), with the units of a match determined
by a white reference (W) with matching values A 1 ( W ), A 2 ( W ), A 3 ( W ). Now, consider
a new set of primaries (P̃1), (P̃2), (P̃3) with spectral energy distributions P̃1 ( λ ),
P̃2 ( λ ), P̃3 ( λ ). Matches are made to a reference white (W̃), which may be different
from the reference white of the original set of primaries, by matching values Ã1 ( W̃ ),
Ã2 ( W̃ ), Ã3 ( W̃ ). Referring to Eq. 3.3-10, an arbitrary color (C) can be matched by the
tristimulus values T1 ( C ), T2 ( C ), T3 ( C ) with the original set of primaries or by the
tristimulus values T̃1 ( C ), T̃2 ( C ), T̃3 ( C ) with the new set of primaries, according to
the matching matrix relations

                        $\mathbf{e}(C) = \mathbf{K}\mathbf{A}(W)\,\mathbf{t}(C) = \tilde{\mathbf{K}}\tilde{\mathbf{A}}(\tilde{W})\,\tilde{\mathbf{t}}(C)$                (3.4-1)

The tristimulus value units of the new set of primaries, with respect to the original
set of primaries, must now be found. This can be accomplished by determining the
color signals of the reference white for the second set of primaries in terms of both
sets of primaries. The color signal equations for the reference white W̃ become

                        $\mathbf{e}(\tilde{W}) = \mathbf{K}\mathbf{A}(W)\,\mathbf{t}(\tilde{W}) = \tilde{\mathbf{K}}\tilde{\mathbf{A}}(\tilde{W})\,\tilde{\mathbf{t}}(\tilde{W})$                (3.4-2)

where T̃1 ( W̃ ) = T̃2 ( W̃ ) = T̃3 ( W̃ ) = 1. Finally, it is necessary to relate the two sets of
primaries by determining the color signals of each of the new primary colors (P̃1),
(P̃2), (P̃3) in terms of both primary systems. These color signal equations are

                        $\mathbf{e}(\tilde{P}_1) = \mathbf{K}\mathbf{A}(W)\,\mathbf{t}(\tilde{P}_1) = \tilde{\mathbf{K}}\tilde{\mathbf{A}}(\tilde{W})\,\tilde{\mathbf{t}}(\tilde{P}_1)$                (3.4-3a)

                        $\mathbf{e}(\tilde{P}_2) = \mathbf{K}\mathbf{A}(W)\,\mathbf{t}(\tilde{P}_2) = \tilde{\mathbf{K}}\tilde{\mathbf{A}}(\tilde{W})\,\tilde{\mathbf{t}}(\tilde{P}_2)$                (3.4-3b)

                        $\mathbf{e}(\tilde{P}_3) = \mathbf{K}\mathbf{A}(W)\,\mathbf{t}(\tilde{P}_3) = \tilde{\mathbf{K}}\tilde{\mathbf{A}}(\tilde{W})\,\tilde{\mathbf{t}}(\tilde{P}_3)$                (3.4-3c)


where

    $\tilde{\mathbf{t}}(\tilde{P}_1) = \begin{bmatrix} 1/\tilde{A}_1(\tilde{W}) \\ 0 \\ 0 \end{bmatrix} \qquad \tilde{\mathbf{t}}(\tilde{P}_2) = \begin{bmatrix} 0 \\ 1/\tilde{A}_2(\tilde{W}) \\ 0 \end{bmatrix} \qquad \tilde{\mathbf{t}}(\tilde{P}_3) = \begin{bmatrix} 0 \\ 0 \\ 1/\tilde{A}_3(\tilde{W}) \end{bmatrix}$


Matrix equations 3.4-1 to 3.4-3 may be solved jointly to obtain a relationship
between the tristimulus values of the original and new primary system:



    $\tilde{T}_1(C) = \dfrac{\begin{vmatrix} T_1(C) & T_1(\tilde{P}_2) & T_1(\tilde{P}_3) \\ T_2(C) & T_2(\tilde{P}_2) & T_2(\tilde{P}_3) \\ T_3(C) & T_3(\tilde{P}_2) & T_3(\tilde{P}_3) \end{vmatrix}}{\begin{vmatrix} T_1(\tilde{W}) & T_1(\tilde{P}_2) & T_1(\tilde{P}_3) \\ T_2(\tilde{W}) & T_2(\tilde{P}_2) & T_2(\tilde{P}_3) \\ T_3(\tilde{W}) & T_3(\tilde{P}_2) & T_3(\tilde{P}_3) \end{vmatrix}}$                (3.4-4a)


    $\tilde{T}_2(C) = \dfrac{\begin{vmatrix} T_1(\tilde{P}_1) & T_1(C) & T_1(\tilde{P}_3) \\ T_2(\tilde{P}_1) & T_2(C) & T_2(\tilde{P}_3) \\ T_3(\tilde{P}_1) & T_3(C) & T_3(\tilde{P}_3) \end{vmatrix}}{\begin{vmatrix} T_1(\tilde{P}_1) & T_1(\tilde{W}) & T_1(\tilde{P}_3) \\ T_2(\tilde{P}_1) & T_2(\tilde{W}) & T_2(\tilde{P}_3) \\ T_3(\tilde{P}_1) & T_3(\tilde{W}) & T_3(\tilde{P}_3) \end{vmatrix}}$                (3.4-4b)


    $\tilde{T}_3(C) = \dfrac{\begin{vmatrix} T_1(\tilde{P}_1) & T_1(\tilde{P}_2) & T_1(C) \\ T_2(\tilde{P}_1) & T_2(\tilde{P}_2) & T_2(C) \\ T_3(\tilde{P}_1) & T_3(\tilde{P}_2) & T_3(C) \end{vmatrix}}{\begin{vmatrix} T_1(\tilde{P}_1) & T_1(\tilde{P}_2) & T_1(\tilde{W}) \\ T_2(\tilde{P}_1) & T_2(\tilde{P}_2) & T_2(\tilde{W}) \\ T_3(\tilde{P}_1) & T_3(\tilde{P}_2) & T_3(\tilde{W}) \end{vmatrix}}$                (3.4-4c)

where $|\mathbf{T}|$ denotes the determinant of matrix $\mathbf{T}$. Equations 3.4-4 then may be written
in terms of the chromaticity coordinates t1 ( P̃1 ), ti ( P̃2 ), ti ( P̃3 ) of the new set of pri-
maries referenced to the original primary coordinate system.
With this revision,

          $\begin{bmatrix} \tilde{T}_1(C) \\ \tilde{T}_2(C) \\ \tilde{T}_3(C) \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} T_1(C) \\ T_2(C) \\ T_3(C) \end{bmatrix}$                (3.4-5)

where
                                        $m_{ij} = \dfrac{\Delta_{ij}}{\Delta_i}$
and

                        $\Delta_1 = T_1(\tilde{W})\,\Delta_{11} + T_2(\tilde{W})\,\Delta_{12} + T_3(\tilde{W})\,\Delta_{13}$

                        $\Delta_2 = T_1(\tilde{W})\,\Delta_{21} + T_2(\tilde{W})\,\Delta_{22} + T_3(\tilde{W})\,\Delta_{23}$

                        $\Delta_3 = T_1(\tilde{W})\,\Delta_{31} + T_2(\tilde{W})\,\Delta_{32} + T_3(\tilde{W})\,\Delta_{33}$

                        $\Delta_{11} = t_2(\tilde{P}_2)\,t_3(\tilde{P}_3) - t_3(\tilde{P}_2)\,t_2(\tilde{P}_3)$

                        $\Delta_{12} = t_3(\tilde{P}_2)\,t_1(\tilde{P}_3) - t_1(\tilde{P}_2)\,t_3(\tilde{P}_3)$

                        $\Delta_{13} = t_1(\tilde{P}_2)\,t_2(\tilde{P}_3) - t_2(\tilde{P}_2)\,t_1(\tilde{P}_3)$

                        $\Delta_{21} = t_3(\tilde{P}_1)\,t_2(\tilde{P}_3) - t_2(\tilde{P}_1)\,t_3(\tilde{P}_3)$

                        $\Delta_{22} = t_1(\tilde{P}_1)\,t_3(\tilde{P}_3) - t_3(\tilde{P}_1)\,t_1(\tilde{P}_3)$

                        $\Delta_{23} = t_2(\tilde{P}_1)\,t_1(\tilde{P}_3) - t_1(\tilde{P}_1)\,t_2(\tilde{P}_3)$

                        $\Delta_{31} = t_2(\tilde{P}_1)\,t_3(\tilde{P}_2) - t_3(\tilde{P}_1)\,t_2(\tilde{P}_2)$

                        $\Delta_{32} = t_3(\tilde{P}_1)\,t_1(\tilde{P}_2) - t_1(\tilde{P}_1)\,t_3(\tilde{P}_2)$

                        $\Delta_{33} = t_1(\tilde{P}_1)\,t_2(\tilde{P}_2) - t_2(\tilde{P}_1)\,t_1(\tilde{P}_2)$

Thus, if the tristimulus values are known for a given set of primaries, conversion to
another set of primaries merely entails a simple linear transformation of coordinates.
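
The determinant ratios of Eq. 3.4-4 are Cramer's-rule expansions of a single linear system, so the conversion can be sketched compactly: express the new primaries and the new reference white in the original tristimulus space, solve for the color and for the white along the new primary directions, and take the elementwise ratio. The data in this Python sketch are hypothetical; the result is algebraically equivalent to Eq. 3.4-4.

```python
import numpy as np

def convert_tristimulus(T_C, T_newP, T_newW):
    """Tristimulus values of a color referred to a new primary system.

    T_C    : tristimulus values of the color in the original system.
    T_newP : 3x3 array whose columns are the original-system tristimulus
             values of the new primaries.
    T_newW : original-system tristimulus values of the new reference white.
    """
    x = np.linalg.solve(T_newP, T_C)      # color resolved along new primaries
    w = np.linalg.solve(T_newP, T_newW)   # new white supplies the unit scaling
    return x / w

# Hypothetical data purely for illustration.
T_newP = np.array([[0.9, 0.2, 0.1],
                   [0.1, 0.8, 0.1],
                   [0.0, 0.1, 0.9]])
T_newW = np.array([1.0, 1.0, 1.0])
print(convert_tristimulus(np.array([0.4, 0.5, 0.2]), T_newP, T_newW))
```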


3.5. COLOR SPACES

It has been shown that a color (C) can be matched by its tristimulus values T1 ( C ) ,
T 2 ( C ) , T 3 ( C ) for a given set of primaries. Alternatively, the color may be specified
by its chromaticity values t 1 ( C ) , t 2 ( C ) and its luminance Y(C). Appendix 2 presents
formulas for color coordinate conversion between tristimulus values and chromatic-
ity coordinates for various representational combinations. A third approach to speci-
fying a color is to represent the color by a linear or nonlinear invertible function of
its tristimulus or chromaticity values.
     In this section we describe several standard and nonstandard color spaces for the
representation of color images. They are categorized as colorimetric, subtractive,
video, or nonstandard. Figure 3.5-1 illustrates the relationship between these color
spaces. The figure also lists several example color spaces.



                       FIGURE 3.5-1. Relationship of color spaces.




   Natural color images, as opposed to computer-generated images, usually origi-
nate from a color scanner or a color video camera. These devices incorporate three
sensors that are spectrally sensitive to the red, green, and blue portions of the light
spectrum. The color sensors typically generate red, green, and blue color signals that
are linearly proportional to the amount of red, green, and blue light detected by each
sensor. These signals are linearly proportional to the tristimulus values of a color at
each pixel. As indicated in Figure 3.5-1, linear RGB images are the basis for the gen-
eration of the various color space image representations.



3.5.1. Colorimetric Color Spaces

The class of colorimetric color spaces includes all linear RGB images and the stan-
dard colorimetric images derived from them by linear and nonlinear intercomponent
transformations.




FIGURE 3.5-2. Tristimulus values of CIE spectral primaries required to match unit energy
throughout the spectrum. Red = 700 nm, green = 546.1 nm, and blue = 435.8 nm.


RCGCBC Spectral Primary Color Coordinate System. In 1931, the CIE developed a
standard primary reference system with three monochromatic primaries at wave-
lengths: red = 700 nm; green = 546.1 nm; blue = 435.8 nm (11). The units of the
tristimulus values are such that the tristimulus values RC, GC, BC are equal when
matching an equal-energy white, called Illuminant E, throughout the visible spectrum.
The primary system is defined by tristimulus curves of the spectral colors, as shown in
Figure 3.5-2. These curves have been obtained indirectly from color-matching
experiments performed by a number of observers. The collective color-matching
response of these observers has been called the CIE Standard Observer. Figure 3.5-3 is
a chromaticity diagram for the CIE spectral coordinate system.




         FIGURE 3.5-3. Chromaticity diagram for CIE spectral primary system.


RNGNBN NTSC Receiver Primary Color Coordinate System. Commercial televi-
sion receivers employ a cathode ray tube with three phosphors that glow in the red,
green, and blue regions of the visible spectrum. Although the phosphors of
commercial television receivers differ from manufacturer to manufacturer, it is
common practice to reference them to the National Television Systems Committee
(NTSC) receiver phosphor standard (14). The standard observer data for the CIE
spectral primary system is related to the NTSC primary system by a pair of linear
coordinate conversions.
   Figure 3.5-4 is a chromaticity diagram for the NTSC primary system. In this
system, the units of the tristimulus values are normalized so that the tristimulus
values are equal when matching the Illuminant C white reference. The NTSC
phosphors are not pure monochromatic sources of radiation, and hence the gamut of
colors producible by the NTSC phosphors is smaller than that available from the
spectral primaries. This fact is clearly illustrated by Figure 3.5-3, in which the gamut
of NTSC reproducible colors is plotted in the spectral primary chromaticity diagram
(11). In modern practice, the NTSC chromaticities are combined with Illuminant
D65.


REGEBE EBU Receiver Primary Color Coordinate System. The European Broad-
cast Union (EBU) has established a receiver primary system whose chromaticities
are close in value to the CIE chromaticity coordinates, and the reference white is
Illuminant C (17). The EBU chromaticities are also combined with the D65 illumi-
nant.


RRGRBR CCIR Receiver Primary Color Coordinate Systems. In 1990, the Interna-
tional Telecommunications Union (ITU) issued its Recommendation 601, which




     FIGURE 3.5-4. Chromaticity diagram for NTSC receiver phosphor primary system.

specified the receiver primaries for standard resolution digital television (18). Also,
in 1990 the ITU published its Recommendation 709 for digital high-definition tele-
vision systems (19). Both standards are popularly referenced as CCIR Rec. 601 and
CCIR Rec. 709, abbreviations of the former name of the standards committee,
Comité Consultatif International des Radiocommunications.


RSGSBS SMPTE Receiver Primary Color Coordinate System. The Society of
Motion Picture and Television Engineers (SMPTE) has established a standard
receiver primary color coordinate system with primaries that match modern receiver
phosphors better than did the older NTSC primary system (20). In this coordinate
system, the reference white is Illuminant D65.


XYZ Color Coordinate System. In the CIE spectral primary system, the tristimulus
values required to achieve a color match are sometimes negative. The CIE has
developed a standard artificial primary coordinate system in which all tristimulus
values required to match colors are positive (4). These artificial primaries are shown
in the CIE primary chromaticity diagram of Figure 3.5-3 (11). The XYZ system
primaries have been chosen so that the Y tristimulus value is equivalent to the lumi-
nance of the color to be matched. Figure 3.5-5 is the chromaticity diagram for the
CIE XYZ primary system referenced to equal-energy white (4). The linear transfor-
mations between RCGCBC and XYZ are given by




           FIGURE 3.5-5. Chromaticity diagram for CIE XYZ primary system.



                     X           0.49018626        0.30987954     0.19993420            RC
                     Y     =     0.17701522        0.81232418      0.01066060           GC           (3.5-1a)
                     Z           0.00000000        0.01007720      0.98992280           BC



                 RC               2.36353918       – 0.89582361    – 0.46771557          X
                GC =             – 0.51511248       1.42643694         0.08867553        Y           (3.5-1b)
                 BC               0.00524373 – 0.01452082              1.00927709        Z

The color conversion matrices of Eq. 3.5-1 and those color conversion matrices
defined later are quoted to eight decimal places (21,22). In many instances, this quo-
tation is to a greater number of places than the original specification. The number of
places has been increased to reduce computational errors when concatenating trans-
formations between color representations.
    The color conversion matrix between XYZ and any other linear RGB color space
can be computed by the following algorithm.

     1. Compute the colorimetric weighting coefficients a(1), a(2), a(3) from


                                                                  –1
                                      a(1)         xR   xG   xB        xW ⁄ yW

                                      a(2) =       yR   yG   yB           1                          (3.5-2a)
                                      a(3)         zR   zG   zB         zW ⁄ yW


     where xk, yk, zk are the chromaticity coordinates of the RGB primary set.
     2. Compute the RGB-to-XYZ conversion matrix.


        M ( 1, 1 )       M ( 1, 2 )   M ( 1, 3 )        xR   xG    xB      a(1)     0         0
       M ( 2, 1 )        M ( 2, 2 )   M ( 2, 3 )   =    yR   yG   yB          0   a(2)        0      (3.5-2b)
       M ( 3, 1 )        M ( 3, 2 )   M ( 3, 3 )        zR   zG    zB         0     0        a( 3)


The XYZ-to-RGB conversion matrix is, of course, the matrix inverse of M . Table
3.5-1 lists the XYZ tristimulus values of several standard illuminants. The XYZ chro-
maticity coordinates of the standard linear RGB color systems are presented in Table
3.5-2.
   From Eqs. 3.5-1 and 3.5-2 it is possible to derive a matrix transformation
between RCGCBC and any linear colorimetric RGB color space. The book CD con-
tains a file that lists the transformation matrices (22) between the standard RGB
color coordinate systems and XYZ and UVW, defined below.
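
The two-step algorithm of Eq. 3.5-2 translates directly into code. The Python sketch below builds the RGB-to-XYZ matrix from the primary chromaticities and white point of a chosen system; the SMPTE chromaticities and the Illuminant D65 white used here are taken from Tables 3.5-1 and 3.5-2 below.

```python
import numpy as np

def rgb_to_xyz_matrix(prim_xyz, white_xyz):
    """Eq. 3.5-2: RGB-to-XYZ matrix from primary chromaticities.

    prim_xyz  : 3x3 array with columns [xR yR zR], [xG yG zG], [xB yB zB].
    white_xyz : XYZ tristimulus values of the reference white; for Y0 = 1
                this equals the vector [xW/yW, 1, zW/yW] of Eq. 3.5-2a.
    """
    # Step 1 (Eq. 3.5-2a): colorimetric weighting coefficients a(1), a(2), a(3).
    a = np.linalg.solve(prim_xyz, white_xyz)
    # Step 2 (Eq. 3.5-2b): scale each primary column by its coefficient.
    return prim_xyz * a                 # broadcasting scales the columns

# SMPTE primaries (Table 3.5-2) and Illuminant D65 white (Table 3.5-1).
P = np.array([[0.630, 0.310, 0.155],
              [0.340, 0.595, 0.070],
              [0.030, 0.095, 0.775]])
white = np.array([0.950456, 1.000000, 1.089058])

M = rgb_to_xyz_matrix(P, white)
M_inv = np.linalg.inv(M)                # the XYZ-to-RGB direction
print(M @ np.ones(3))                   # reproduces the white point
```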

            TABLE 3.5-1. XYZ Tristimulus Values of Standard Illuminants

              Illuminant          X0              Y0              Z0

                 A             1.098700        1.000000       0.355900

                 C             0.980708        1.000000       1.182163

                 D50           0.964296        1.000000       0.825105

                 D65           0.950456        1.000000       1.089058

                 E             1.000000        1.000000       1.000000



         TABLE 3.5-2. XYZ Chromaticity Coordinates of Standard Primaries
               Standard                x               y               z
                 CIE    RC        0.640000        0.330000        0.030000
                        GC        0.300000        0.600000        0.100000
                        BC        0.150000        0.060000        0.790000
                 NTSC   RN        0.670000        0.330000        0.000000
                        GN        0.210000        0.710000        0.080000
                        BN        0.140000        0.080000        0.780000
                 SMPTE  RS        0.630000        0.340000        0.030000
                        GS        0.310000        0.595000        0.095000
                        BS        0.155000        0.070000        0.775000
                 EBU    RE        0.640000        0.330000        0.030000
                        GE        0.290000        0.600000        0.110000
                        BE        0.150000        0.060000        0.790000
                 CCIR   RR        0.640000        0.330000        0.030000
                        GR        0.300000        0.600000        0.100000
                        BR        0.150000        0.060000        0.790000



UVW Uniform Chromaticity Scale Color Coordinate System. In 1960, the CIE
adopted a coordinate system, called the Uniform Chromaticity Scale (UCS), in
which, to a good approximation, equal changes in the chromaticity coordinates
result in equal, just noticeable changes in the perceived hue and saturation of a color.
The V component of the UCS coordinate system represents luminance. The u, v
chromaticity coordinates are related to the x, y chromaticity coordinates by the rela-
tions (23)


                                  $u = \dfrac{4x}{-2x + 12y + 3}$                (3.5-3a)

                                  $v = \dfrac{6y}{-2x + 12y + 3}$                (3.5-3b)

                                  $x = \dfrac{3u}{2u - 8v + 4}$                (3.5-3c)

                                  $y = \dfrac{2v}{2u - 8v + 4}$                (3.5-3d)


Figure 3.5-6 is a UCS chromaticity diagram.
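
The chromaticity mapping of Eq. 3.5-3 and its inverse are one-line computations; a minimal Python sketch follows. The D65 chromaticity used in the example is consistent with the tristimulus values of Table 3.5-1.

```python
def xy_to_uv(x, y):
    """CIE 1960 UCS chromaticities from x, y (Eq. 3.5-3a,b)."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 6.0 * y / d

def uv_to_xy(u, v):
    """Inverse mapping (Eq. 3.5-3c,d)."""
    d = 2.0 * u - 8.0 * v + 4.0
    return 3.0 * u / d, 2.0 * v / d

u, v = xy_to_uv(0.3127, 0.3290)     # Illuminant D65 chromaticity, for example
print(uv_to_xy(u, v))               # recovers (0.3127, 0.3290)
```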
   The tristimulus values of the uniform chromaticity scale coordinate system UVW
are related to the tristimulus values of the spectral coordinate primary system by



             U        0.32679084        0.20658636                  0.13328947      RC
             V   =    0.17701522        0.81232418                  0.01066060      GC       (3.5-4a)
             W        0.02042971        1.06858510                  0.41098519      BC


            RC         2.84373542             0.50732308                  – 0.93543113   U
            GC =      – 0.63965541            1.16041034                   0.17735107    V   (3.5-4b)
            BC         1.52178123           – 3.04235208                   2.01855417    W




FIGURE 3.5-6. Chromaticity diagram for CIE uniform chromaticity scale primary system.

U*V*W* Color Coordinate System. The U*V*W* color coordinate system, adopted
by the CIE in 1964, is an extension of the UVW coordinate system in an attempt to
obtain a color solid for which unit shifts in luminance and chrominance are uniformly
perceptible. The U*V*W* coordinates are defined as (24)

                                   $U^* = 13\,W^*\,(u - u_o)$                (3.5-5a)

                                   $V^* = 13\,W^*\,(v - v_o)$                (3.5-5b)

                                   $W^* = 25\,(100\,Y)^{1/3} - 17$                (3.5-5c)

where the luminance Y is measured over a scale of 0.0 to 1.0 and uo and vo are the
chromaticity coordinates of the reference illuminant.
   The UVW and U*V*W* coordinate systems were rendered obsolete in 1976 by
the introduction by the CIE of the more accurate L*a*b* and L*u*v* color coordi-
nate systems. Although deprecated by the CIE, much valuable data has been col-
lected in the UVW and U*V*W* color systems.


L*a*b* Color Coordinate System. The L*a*b* cube root color coordinate system
was developed to provide a computationally simple measure of color in agreement
with the Munsell color system (25). The color coordinates are

                        $L^* = 116\,(Y/Y_o)^{1/3} - 16$        for $Y/Y_o > 0.008856$                (3.5-6a)

                        $L^* = 903.3\,(Y/Y_o)$        for $0.0 \le Y/Y_o \le 0.008856$                (3.5-6b)

                        $a^* = 500\,[\,f(X/X_o) - f(Y/Y_o)\,]$                (3.5-6c)

                        $b^* = 200\,[\,f(Y/Y_o) - f(Z/Z_o)\,]$                (3.5-6d)

where

                        $f(w) = w^{1/3}$        for $w > 0.008856$                (3.5-6e)

                        $f(w) = 7.787\,w + 0.1379$        for $0.0 \le w \le 0.008856$                (3.5-6f)


The terms Xo, Yo, Zo are the tristimulus values for the reference white. Basically, L*
is correlated with brightness, a* with redness-greenness, and b* with yellowness-
blueness. The inverse relationship between L*a*b* and XYZ is


                        $X = X_o\, g\!\left[\, f(Y/Y_o) + a^*/500 \,\right]$                (3.5-7a)

                        $Y = Y_o\, g\!\left[\, (L^* + 16)/116 \,\right]$                (3.5-7b)

                        $Z = Z_o\, g\!\left[\, f(Y/Y_o) - b^*/200 \,\right]$                (3.5-7c)

where

                        $g(w) = w^{3}$        for $w > 0.20689$                (3.5-7d)

                        $g(w) = 0.1284\,(w - 0.1379)$        for $0.0 \le w \le 0.20689$                (3.5-7e)

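As a concrete illustration of Eqs. 3.5-6, the following Python sketch converts a single
XYZ triple to L*a*b* with respect to a reference white (X_o, Y_o, Z_o). The function name
and the scalar interface are assumptions for illustration only, not part of the PIKS
application program interface.

# Minimal sketch of the forward L*a*b* conversion of Eq. 3.5-6 (illustrative only).
def xyz_to_lab(X, Y, Z, Xo, Yo, Zo):
    def f(w):
        # Cube root with a linear segment near black (Eqs. 3.5-6e and 3.5-6f)
        return w ** (1.0 / 3.0) if w > 0.008856 else 7.787 * w + 0.1379

    yr = Y / Yo
    L = 116.0 * yr ** (1.0 / 3.0) - 16.0 if yr > 0.008856 else 903.3 * yr   # Eqs. 3.5-6a,b
    a = 500.0 * (f(X / Xo) - f(Y / Yo))                                     # Eq. 3.5-6c
    b = 200.0 * (f(Y / Yo) - f(Z / Zo))                                     # Eq. 3.5-6d
    return L, a, b

The inverse of Eqs. 3.5-7 would follow the same pattern, with the cube of g(w) replacing
the cube root branch of f(w).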

L*u*v* Color Coordinate System. The L*u*v* coordinate system (26), which has
evolved from the L*a*b* and the U*V*W* coordinate systems, became a CIE stan-
dard in 1976. It is defined as

             L* = 25 (100 Y/Y_o)^{1/3} - 16        for Y/Y_o ≥ 0.008856        (3.5-8a)

             L* = 903.3 (Y/Y_o)                    for Y/Y_o < 0.008856        (3.5-8b)

             u* = 13 L* (u′ - u′_o)                                            (3.5-8c)

             v* = 13 L* (v′ - v′_o)                                            (3.5-8d)

where

             u′ = 4X / (X + 15Y + 3Z)                                          (3.5-8e)

             v′ = 9Y / (X + 15Y + 3Z)                                          (3.5-8f)
and u′_o and v′_o are obtained by substitution of the tristimulus values X_o, Y_o, Z_o for
the reference white. The inverse relationship is given by


                  X = 9 u′ Y / (4 v′)                                          (3.5-9a)

                  Y = Y_o ( (L* + 16)/25 )^3                                   (3.5-9b)

                  Z = Y (12 - 3 u′ - 20 v′) / (4 v′)                           (3.5-9c)

where

                  u′ = u*/(13 L*) + u′_o                                       (3.5-9d)

                  v′ = v*/(13 L*) + v′_o                                       (3.5-9e)
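
The forward L*u*v* conversion of Eq. 3.5-8 follows the same pattern as the L*a*b*
sketch above. The Python fragment below is again a scalar illustration under the same
assumptions; the helper name uv_prime is hypothetical.

def xyz_to_luv(X, Y, Z, Xo, Yo, Zo):
    def uv_prime(x, y, z):
        # Chromaticity coordinates of Eqs. 3.5-8e and 3.5-8f
        d = x + 15.0 * y + 3.0 * z
        return 4.0 * x / d, 9.0 * y / d

    up, vp = uv_prime(X, Y, Z)
    upo, vpo = uv_prime(Xo, Yo, Zo)      # reference white chromaticities
    yr = Y / Yo
    L = 25.0 * (100.0 * yr) ** (1.0 / 3.0) - 16.0 if yr >= 0.008856 else 903.3 * yr  # Eqs. 3.5-8a,b
    return L, 13.0 * L * (up - upo), 13.0 * L * (vp - vpo)                           # Eqs. 3.5-8c,d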


   Figure 3.5-7 shows the linear RGB components of an NTSC receiver primary
color image. This color image is printed in the color insert. If printed properly, the
color image and its monochromatic component images will appear to be of “nor-
mal” brightness. When displayed electronically, the linear images will appear too
dark. Section 3.5.3 discusses the proper display of electronic images. Figures 3.5-8
to 3.5-10 show the XYZ, Yxy, and L*a*b* components of Figure 3.5-7. Section
10.1.1 describes amplitude-scaling methods for the display of image components
outside the unit amplitude range. The amplitude range of each component is printed
below each photograph.


3.5.2. Subtractive Color Spaces

The color printing and color photographic processes (see Section 11-3) are based on
a subtractive color representation. In color printing, the linear RGB color compo-
nents are transformed to cyan (C), magenta (M), and yellow (Y) inks, which are
overlaid at each pixel on paper that is usually white. The simplest transformation rela-
tionship is

                                       C = 1.0 – R                                       (3.5-10a)

                                      M = 1.0 – G                                        (3.5-10b)

                                       Y = 1.0 – B                                       (3.5-10c)
FIGURE 3.5-7. Linear RGB components of the dolls_linear color image: (a) linear R,
0.000 to 0.965; (b) linear G, 0.000 to 1.000; (c) linear B, 0.000 to 0.965. See insert for a
color representation of this figure.



where the linear RGB components are tristimulus values over [0.0, 1.0]. The inverse
relations are

                                        R = 1.0 – C                                  (3.5-11a)

                                        G = 1.0 – M                                  (3.5-11b)
                                        B = 1.0 – Y                                  (3.5-11c)


In high-quality printing systems, the RGB-to-CMY transformations, which are usu-
ally proprietary, involve color component cross-coupling and point nonlinearities.
FIGURE 3.5-8. XYZ components of the dolls_linear color image: (a) X, 0.000 to 0.952;
(b) Y, 0.000 to 0.985; (c) Z, 0.000 to 1.143.


    To achieve dark black printing without using excessive amounts of CMY inks, it
is common to add a fourth component, a black ink, called the key (K) or black com-
ponent. The black component is set proportional to the smallest of the CMY compo-
nents as computed by Eq. 3.5-10. The common RGB-to-CMYK transformation,
which is based on the undercolor removal algorithm (27), is

                                    C = 1.0 – R – uK b                               (3.5-12a)

                                    M = 1.0 – G – uK b                               (3.5-12b)

                                    Y = 1.0 – B – uKb                                (3.5-12c)

                                    K = bKb                                          (3.5-12d)
FIGURE 3.5-9. Yxy components of the dolls_linear color image: (a) Y, 0.000 to 0.965;
(b) x, 0.140 to 0.670; (c) y, 0.080 to 0.710.



where
                          K b = MIN { 1.0 – R, 1.0 – G, 1.0 – B }                    (3.5-12e)

and 0.0 ≤ u ≤ 1.0 is the undercolor removal factor and 0.0 ≤ b ≤ 1.0 is the blackness
factor. Figure 3.5-11 presents the CMY components of the color image of Figure 3.5-7.
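
The undercolor removal rule of Eq. 3.5-12 reduces to a few array operations. The
following NumPy sketch assumes linear RGB values in [0.0, 1.0]; the function name and
the default factors u and b are illustrative choices, not values prescribed by the text.

import numpy as np

def rgb_to_cmyk(rgb, u=0.5, b=1.0):
    # rgb: array of shape (..., 3) of linear tristimulus values in [0.0, 1.0];
    # u is the undercolor removal factor, b the blackness factor of Eq. 3.5-12.
    rgb = np.asarray(rgb, dtype=float)
    cmy = 1.0 - rgb                               # Eq. 3.5-10
    kb = cmy.min(axis=-1, keepdims=True)          # Eq. 3.5-12e
    return np.concatenate([cmy - u * kb, b * kb], axis=-1)   # Eqs. 3.5-12a to 3.5-12d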


3.5.3 Video Color Spaces

The red, green, and blue signals from video camera sensors typically are linearly
proportional to the light striking each sensor. However, the light generated by cathode
ray tube displays is approximately proportional to the display amplitude drive signals
raised to a power in the range 2.0 to 3.0 (28). To obtain a good-quality display, it is
necessary to compensate for this point nonlinearity. The compensation process,
called gamma correction, involves passing the camera sensor signals through a
point nonlinearity with a power, typically, of about 0.45. In television systems, to
reduce receiver cost, gamma correction is performed at the television camera rather
than at the receiver. A linear RGB image that has been gamma corrected is called a
gamma RGB image. Liquid crystal displays are reasonably linear in the sense that
the light generated is approximately proportional to the display amplitude drive
signal. But because LCDs are used in lieu of CRTs in many applications, they usually
employ circuitry to compensate for the gamma correction at the sensor.

FIGURE 3.5-10. L*a*b* components of the dolls_linear color image: (a) L*, −16.000 to
99.434; (b) a*, −55.928 to 69.291; (c) b*, −65.224 to 90.171.
FIGURE 3.5-11. CMY components of the dolls_linear color image: (a) C, 0.0035 to 1.000;
(b) M, 0.000 to 1.000; (c) Y, 0.0035 to 1.000.




    In high-precision applications, gamma correction follows a linear law for low-
amplitude components and a power law for high-amplitude components according
to the relations (22)

             K̃ = c_1 K^{c_2} + c_3        for K ≥ b                            (3.5-13a)

             K̃ = c_4 K                    for 0.0 ≤ K < b                      (3.5-13b)
where K denotes a linear RGB component and K̃ is the gamma-corrected component.
The constants ck and the breakpoint b are specified in Table 3.5-3 for the general case
and for conversion to the SMPTE, CCIR and CIE lightness components. Figure 3.5-12
is a plot of the gamma correction curve for the CCIR Rec. 709 primaries.

          TABLE 3.5-3. Gamma Correction Constants
                         General           SMPTE       CCIR           CIE L*
               c1          1.00             1.1115     1.099         116.0
               c2          0.45             0.45       0.45            0.3333
               c3          0.00            -0.1115    -0.099         -16.0
               c4          0.00             4.0        4.5           903.3
               b           0.00             0.0228     0.018           0.008856

   The inverse gamma correction relation is

             K = ( (K̃ - c_3) / c_1 )^{1/c_2}       for K̃ ≥ c_4 b               (3.5-14a)

             K = K̃ / c_4                           for 0.0 ≤ K̃ < c_4 b         (3.5-14b)



       FIGURE 3.5-12. Gamma correction curve for the CCIR Rec. 709 primaries.
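
A compact way to apply Eqs. 3.5-13 and 3.5-14 is to vectorize the two segments with a
conditional select. The sketch below uses the CCIR column of Table 3.5-3; the function
names are illustrative, and components are assumed to lie in the unit range.

import numpy as np

# Constants from the CCIR column of Table 3.5-3
C1, C2, C3, C4, B = 1.099, 0.45, -0.099, 4.5, 0.018

def gamma_correct(K):
    # Eq. 3.5-13: power law above the breakpoint b, linear segment below it
    K = np.asarray(K, dtype=float)
    return np.where(K >= B, C1 * K ** C2 + C3, C4 * K)

def inverse_gamma_correct(K_tilde):
    # Eq. 3.5-14: invert the two segments of Eq. 3.5-13
    K_tilde = np.asarray(K_tilde, dtype=float)
    return np.where(K_tilde >= C4 * B, ((K_tilde - C3) / C1) ** (1.0 / C2), K_tilde / C4)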
FIGURE 3.5-13. Gamma RGB components of the dolls_gamma color image: (a) gamma R,
0.000 to 0.984; (b) gamma G, 0.000 to 1.000; (c) gamma B, 0.000 to 0.984. See insert for a
color representation of this figure.


Figure 3.5-13 shows the gamma RGB components of the color image of Figure
3.5-7. The gamma color image is printed in the color insert. The gamma components
have been printed as if they were linear components to illustrate the effects of the
point transformation. When viewed on an electronic display, the gamma RGB color
image will appear to be of “normal” brightness.


YIQ NTSC Transmission Color Coordinate System. In the development of the
color television system in the United States, NTSC formulated a color coordinate
system for transmission composed of three values, Y, I, Q (14). The Y value, called
luma, is proportional to the gamma-corrected luminance of a color. The other two
components, I and Q, called chroma, jointly describe the hue and saturation
attributes of an image. The reasons for transmitting the YIQ components rather than
the gamma-corrected R̃N, G̃N, B̃N components directly from a color camera were two-
fold: the Y signal alone could be used with existing monochrome receivers to dis-
play monochrome images; and it was found possible to limit the spatial bandwidth
of the I and Q signals without noticeable image degradation. As a result of the latter
property, a clever analog modulation scheme was developed such that the bandwidth
of a color television carrier could be restricted to the same bandwidth as a mono-
chrome carrier.
    The YIQ transformations for an Illuminant C reference white are given by



             | Y |     |  0.29889531    0.58662247    0.11448223 |   | R̃N |
             | I |  =  |  0.59597799   -0.27417610   -0.32180189 |   | G̃N |    (3.5-15a)
             | Q |     |  0.21147017   -0.52261711    0.31114694 |   | B̃N |


             | R̃N |    |  1.00000000    0.95608445    0.62088850 |   | Y |
             | G̃N | =  |  1.00000000   -0.27137664   -0.64860590 |   | I |     (3.5-15b)
             | B̃N |    |  1.00000000   -1.10561724    1.70250126 |   | Q |


where the tilde denotes that the component has been gamma corrected.
   Figure 3.5-14 presents the YIQ components of the gamma color image of Figure
3.5-13.
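
Because Eq. 3.5-15 is a fixed linear transformation, it reduces to a matrix product along
the color axis. The NumPy sketch below simply copies the matrices of Eqs. 3.5-15a and
3.5-15b; the function names and the (..., 3) array layout are assumptions for illustration.

import numpy as np

RGB_TO_YIQ = np.array([[ 0.29889531,  0.58662247,  0.11448223],
                       [ 0.59597799, -0.27417610, -0.32180189],
                       [ 0.21147017, -0.52261711,  0.31114694]])   # Eq. 3.5-15a
YIQ_TO_RGB = np.array([[ 1.00000000,  0.95608445,  0.62088850],
                       [ 1.00000000, -0.27137664, -0.64860590],
                       [ 1.00000000, -1.10561724,  1.70250126]])   # Eq. 3.5-15b

def rgb_to_yiq(rgb):
    # rgb: gamma-corrected components of shape (..., 3); the matrix acts on the last axis
    return np.asarray(rgb, dtype=float) @ RGB_TO_YIQ.T

def yiq_to_rgb(yiq):
    return np.asarray(yiq, dtype=float) @ YIQ_TO_RGB.T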


YUV EBU Transmission Color Coordinate System. In the PAL and SECAM color
television systems (29) used in many countries, the luma Y and two color differ-
ences,


                  U = (B̃E - Y) / 2.03                                          (3.5-16a)

                  V = (R̃E - Y) / 1.14                                          (3.5-16b)

are used as transmission coordinates, where R̃E and B̃E are the gamma-corrected
EBU red and blue components, respectively. The YUV coordinate system was ini-
tially proposed as the NTSC transmission standard but was later replaced by the YIQ
system because it was found (4) that the I and Q signals could be reduced in band-
width to a greater degree than the U and V signals for an equal level of visual qual-
ity. The I and Q signals are related to the U and V signals by a simple rotation of
coordinates in color space:

                  I = -U sin 33° + V cos 33°                                   (3.5-17a)

                  Q = U cos 33° + V sin 33°                                    (3.5-17b)

FIGURE 3.5-14. YIQ components of the gamma-corrected dolls_gamma color image:
(a) Y, 0.000 to 0.994; (b) I, −0.276 to 0.347; (c) Q, −0.147 to 0.169.

It should be noted that the U and V components of the YUV video color space are not
equivalent to the U and V components of the UVW uniform chromaticity system.


YCbCr CCIR Rec. 601 Transmission Color Coordinate System. The CCIR Rec.
601 color coordinate system YCbCr is defined for the transmission of luma and
chroma components coded in the integer range 0 to 255. The YCbCr transformations
for unit range components are defined as (28)

             | Y  |     |  0.29900000    0.58700000    0.11400000 |   | R̃S |
             | Cb |  =  | -0.16873600   -0.33126400    0.50000000 |   | G̃S |   (3.5-18a)
             | Cr |     |  0.50000000   -0.41866800   -0.08131200 |   | B̃S |


             | R̃S |     |  1.00000000   -0.00092640    1.40168676 |   | Y  |
             | G̃S |  =  |  1.00000000   -0.34369538   -0.71416904 |   | Cb |   (3.5-18b)
             | B̃S |     |  1.00000000    1.77216042    0.00099022 |   | Cr |


where the tilde denotes that the component has been gamma corrected.
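
The unit-range YCbCr conversion of Eq. 3.5-18 can be coded in exactly the same way as
the YIQ sketch above; only the matrix changes. Scaling and offsetting to the integer
range 0 to 255 is omitted in this fragment, which is again an illustrative assumption
rather than a prescribed interface.

import numpy as np

RGB_TO_YCBCR = np.array([[ 0.29900000,  0.58700000,  0.11400000],
                         [-0.16873600, -0.33126400,  0.50000000],
                         [ 0.50000000, -0.41866800, -0.08131200]])   # Eq. 3.5-18a

def rgb_to_ycbcr(rgb):
    # rgb: gamma-corrected unit-range components of shape (..., 3)
    return np.asarray(rgb, dtype=float) @ RGB_TO_YCBCR.T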


Photo YCC Color Coordinate System. The Eastman Kodak Company has developed an
image storage system, called PhotoCD, in which a photographic negative is
scanned, converted to a luma/chroma format similar to Rec. 601 YCbCr, and
recorded in a proprietary compressed form on a compact disk. The PhotoYCC
format and its associated RGB display format have become de facto standards.
PhotoYCC employs the CCIR Rec. 709 primaries for scanning. The conversion to
YCC is defined as (27,28,30)


             | Y  |     |  0.299    0.587   0.114 |   | R̃709 |
             | C1 |  =  | -0.299   -0.587   0.500 |   | G̃709 |                 (3.5-19a)
             | C2 |     |  0.500   -0.587   0.114 |   | B̃709 |



Transformation from PhotoCD components for display is not an exact inverse of Eq.
3.5-19a, in order to preserve the extended dynamic range of film images. The
YC1C2-to-RD GD BD display conversion is given by


                        RD         0.969 0.000 1.000         Y
                        GD =       0.969 – 0.194 – 0.509     C1                    (3.5-19b)
                        BD         0.969 1.000     0.000     C2



3.5.4. Nonstandard Color Spaces

Several nonstandard color spaces used for image processing applications are
described in this section.


IHS Color Coordinate System. The IHS coordinate system (31) has been used
within the image processing community as a quantitative means of specifying the
intensity, hue, and saturation of a color. It is defined by the relations



             | I  |     |   1/3      1/3     1/3  |   | R |
             | V1 |  =  | -1/√6    -1/√6    2/√6  |   | G |                    (3.5-20a)
             | V2 |     |  1/√6    -1/√6      0   |   | B |


                  H = arc tan { V2 / V1 }                                      (3.5-20b)

                  S = ( V1^2 + V2^2 )^{1/2}                                    (3.5-20c)


By this definition, the color blue is the zero reference for hue. The inverse relation-
ship is
                                       V 1 = S cos { H }                      (3.5-21a)
                                       V2 = S sin { H }                       (3.5-21b)



             | R |     |  1   -√6/6    √6/2 |   | I  |
             | G |  =  |  1   -√6/6   -√6/2 |   | V1 |                         (3.5-21c)
             | B |     |  1    √6/3     0   |   | V2 |


Figure 3.5-15 shows the IHS components of the gamma RGB image of Figure
3.5-13.
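
The IHS definition of Eq. 3.5-20 maps directly onto array arithmetic. In the NumPy
sketch below, arctan2 is used in place of a plain arctangent so that the hue quadrant is
resolved over the full (−π, π] range; that choice, like the function name, is an
implementation assumption. The resulting hue span is consistent with the −3.136 to
3.142 range listed for Figure 3.5-15.

import numpy as np

def rgb_to_ihs(rgb):
    r, g, b = np.moveaxis(np.asarray(rgb, dtype=float), -1, 0)
    i = (r + g + b) / 3.0                      # first row of Eq. 3.5-20a
    v1 = (-r - g + 2.0 * b) / np.sqrt(6.0)     # second row of Eq. 3.5-20a
    v2 = (r - g) / np.sqrt(6.0)                # third row of Eq. 3.5-20a
    h = np.arctan2(v2, v1)                     # Eq. 3.5-20b
    s = np.hypot(v1, v2)                       # Eq. 3.5-20c
    return np.stack([i, h, s], axis=-1)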


FIGURE 3.5-15. IHS components of the dolls_gamma color image: (a) I, 0.000 to 0.989;
(b) H, −3.136 to 3.142; (c) S, 0.000 to 0.476.


Karhunen–Loeve Color Coordinate System. Typically, the R, G, and B tristimulus
values of a color image are highly correlated with one another (32). In the develop-
ment of efficient quantization, coding, and processing techniques for color images,
it is often desirable to work with components that are uncorrelated. If the second-
order moments of the RGB tristimulus values are known, or at least estimable, it is
possible to derive an orthogonal coordinate system, in which the components are
uncorrelated, by a Karhunen–Loeve (K–L) transformation of the RGB tristimulus
values. The K-L color transform is defined as



                           K1          m 11 m12      m 13    R
                           K2      =   m 21   m 22   m 23    G                      (3.5-22a)
                           K3          m 31   m 32   m 33    B


                               R               m 11 m 21           m 31   K1
                               G        =      m 12     m 22       m 32   K2                       (3.5-22b)
                               B               m 13     m 23       m 33   K3


where the transformation matrix with general term m_ij is composed of the eigen-
vectors of the RGB covariance matrix with general term u_ij. The transformation
matrix satisfies the relation


      m 11   m 12    m 13     u 11      u 12     u 13      m 11       m 21     m 31       λ1   0     0
      m 21   m 22    m 23    u 12       u 22     u 23      m 12       m 22     m 32   =   0    λ2    0
      m 31   m 32    m 33    u 13       u 23     u 33      m 13       m 23     m 33       0    0     λ3

                                                                                                    (3.5-23)

where λ 1 , λ 2 , λ 3 are the eigenvalues of the covariance matrix and

                  u_11 = E{ (R - R̄)^2 }                                        (3.5-24a)

                  u_22 = E{ (G - Ḡ)^2 }                                        (3.5-24b)

                  u_33 = E{ (B - B̄)^2 }                                        (3.5-24c)

                  u_12 = E{ (R - R̄)(G - Ḡ) }                                   (3.5-24d)

                  u_13 = E{ (R - R̄)(B - B̄) }                                   (3.5-24e)

                  u_23 = E{ (G - Ḡ)(B - B̄) }                                   (3.5-24f)

In Eq. 3.5-23, E { · } is the expectation operator and the overbar denotes the mean
value of a random variable.
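
In practice the K–L matrix is obtained numerically from image statistics. The following
NumPy sketch estimates the covariance matrix of Eq. 3.5-24 from an N x 3 array of RGB
samples and builds the transformation of Eq. 3.5-22a from its eigenvectors; the function
name and the decreasing eigenvalue ordering are implementation choices.

import numpy as np

def kl_color_transform(rgb_samples):
    # rgb_samples: N x 3 array of RGB tristimulus values
    rgb = np.asarray(rgb_samples, dtype=float)
    u = np.cov(rgb, rowvar=False)              # covariance matrix of Eq. 3.5-24
    eigvals, eigvecs = np.linalg.eigh(u)       # eigenvectors satisfying Eq. 3.5-23
    order = np.argsort(eigvals)[::-1]          # place the largest eigenvalue first
    m = eigvecs[:, order].T                    # rows of m are the eigenvectors
    k = rgb @ m.T                              # Eq. 3.5-22a applied to every sample
    return k, m, eigvals[order]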


Retinal Cone Color Coordinate System. As indicated in Chapter 2, in the discus-
sion of models of the human visual system for color vision, indirect measurements
of the spectral sensitivities s 1 ( λ ) , s 2 ( λ ) , s 3 ( λ ) have been made for the three types
of retinal cones. It has been found that these spectral sensitivity functions can be lin-
early related to spectral tristimulus values established by colorimetric experimenta-
tion. Hence a set of cone signals T1, T2, T3 may be regarded as tristimulus values in
a retinal cone color coordinate system. The tristimulus values of the retinal cone
color coordinate system are related to the XYZ system by the coordinate conversion
matrix (33)


                      T1      0.000000 1.000000 0.000000            X
                      T2   = – 0.460000 1.359000 0.101000           Y               (3.5-25)
                      T3        0.000000    0.000000 1.000000       Z




REFERENCES


 1. T. P. Merrit and F. F. Hall, Jr., “Blackbody Radiation,” Proc. IRE, 47, 9, September 1959,
    1435–1442.
 2. H. H. Malitson, “The Solar Energy Spectrum,” Sky and Telescope, 29, 4, March 1965,
    162–165.
 3. R. D. Larabee, “Spectral Emissivity of Tungsten,” J. Optical Society of America, 49, 6,
    June 1959, 619–625.
 4. The Science of Color, Crowell, New York, 1973.
 5. D. G. Fink, Ed., Television Engineering Handbook, McGraw-Hill, New York, 1957.
 6. Toray Industries, Inc. LCD Color Filter Specification.
 7. J. W. T. Walsh, Photometry, Constable, London, 1953.
 8. M. Born and E. Wolf, Principles of Optics, 6th ed., Pergamon Press, New York, 1981.
 9. K. S. Weaver, “The Visibility of Radiation at Low Intensities,” J. Optical Society of
    America, 27, 1, January 1937, 39–43.
10. G. Wyszecki and W. S. Stiles, Color Science, 2nd ed., Wiley, New York, 1982.
11. R. W. G. Hunt, The Reproduction of Colour, 5th ed., Wiley, New York, 1957.
12. W. D. Wright, The Measurement of Color, Adam Hilger, London, 1944, 204–205.
13. R. A. Enyord, Ed., Color: Theory and Imaging Systems, Society of Photographic Scien-
    tists and Engineers, Washington, DC, 1973.
14. F. J. Bingley, “Color Vision and Colorimetry,” in Television Engineering Handbook, D.
    G. Fink, ed., McGraw–Hill, New York, 1957.
15. H. Grassman, “On the Theory of Compound Colours,” Philosophical Magazine, Ser. 4,
    7, April 1854, 254–264.
16. W. T. Wintringham, “Color Television and Colorimetry,” Proc. IRE, 39, 10, October
    1951, 1135–1172.
17. “EBU Standard for Chromaticity Tolerances for Studio Monitors,” Technical Report
    3213-E, European Broadcast Union, Brussels, 1975.
18. “Encoding Parameters of Digital Television for Studios,” Recommendation ITU-R
    BT.601-4, International Telecommunications Union, Geneva, 1990.
19. “Basic Parameter Values for the HDTV Standard for the Studio and for International
    Programme Exchange,” Recommendation ITU-R BT.709, International Telecommuni-
    cations Union, Geneva, 1990.
20. L. E. DeMarsh, “Colorimetric Standards in U.S. Color Television. A Report to the Sub-
    committee on Systems Colorimetry of the SMPTE Television Committee,” J. Society of
    Motion Picture and Television Engineers, 83, 1974.


21. “Information Technology, Computer Graphics and Image Processing, Image Processing
    and Interchange, Part 1: Common Architecture for Imaging,” ISO/IEC 12087-1:1995(E).
22. “Information Technology, Computer Graphics and Image Processing, Image Processing
    and Interchange, Part 2: Programmer’s Imaging Kernel System Application Program
    Interface,” ISO/IEC 12087-2:1995(E).
23. D. L. MacAdam, “Projective Transformations of ICI Color Specifications,” J. Optical
    Society of America, 27, 8, August 1937, 294–299.
24. G. Wyszecki, “Proposal for a New Color-Difference Formula,” J. Optical Society of
    America, 53, 11, November 1963, 1318–1319.
25. “CIE Colorimetry Committee Proposal for Study of Color Spaces,” Technical Note, J.
    Optical Society of America, 64, 6, June 1974, 896–897.
26. Colorimetry, 2nd ed., Publication 15.2, Central Bureau, Commission Internationale de
    l'Eclairage, Vienna, 1986.
27. W. K. Pratt, Developing Visual Applications, XIL: An Imaging Foundation Library, Sun
    Microsystems Press, Mountain View, CA, 1997.
28. C. A. Poynton, A Technical Introduction to Digital Video, Wiley, New York, 1996.
29. P. S. Carnt and G. B. Townsend, Color Television Vol. 2; PAL, SECAM, and Other Sys-
    tems, Iliffe, London, 1969.
30. I. Kabir, High Performance Computer Imaging, Manning Publications, Greenwich, CT,
    1996.
31. W. Niblack, An Introduction to Digital Image Processing, Prentice Hall, Englewood
    Cliffs, NJ, 1985.
32. W. K. Pratt, “Spatial Transform Coding of Color Images,” IEEE Trans. Communication
    Technology, COM-19, 12, December 1971, 980–992.
33. D. B. Judd, “Standard Response Functions for Protanopic and Deuteranopic Vision,” J.
    Optical Society of America, 35, 3, March 1945, 199–221.




PART 2

DIGITAL IMAGE CHARACTERIZATION

Digital image processing is based on the conversion of a continuous image field to
equivalent digital form. This part of the book considers the image sampling and
quantization processes that perform the analog image to digital image conversion.
The inverse operation of producing continuous image displays from digital image
arrays is also analyzed. Vector-space methods of image representation are developed
for deterministic and stochastic image arrays.








4
IMAGE SAMPLING AND
RECONSTRUCTION




In digital image processing systems, one usually deals with arrays of numbers
obtained by spatially sampling points of a physical image. After processing, another
array of numbers is produced, and these numbers are then used to reconstruct a con-
tinuous image for viewing. Image samples nominally represent some physical mea-
surements of a continuous image field, for example, measurements of the image
intensity or photographic density. Measurement uncertainties exist in any physical
measurement apparatus. It is important to be able to model these measurement
errors in order to specify the validity of the measurements and to design processes
for compensation of the measurement errors. Also, it is often not possible to mea-
sure an image field directly. Instead, measurements are made of some function
related to the desired image field, and this function is then inverted to obtain the
desired image field. Inversion operations of this nature are discussed in the sections
on image restoration. In this chapter the image sampling and reconstruction process
is considered for both theoretically exact and practical systems.


4.1. IMAGE SAMPLING AND RECONSTRUCTION CONCEPTS

In the design and analysis of image sampling and reconstruction systems, input
images are usually regarded as deterministic fields (1–5). However, in some
situations it is advantageous to consider the input to an image processing system,
especially a noise input, as a sample of a two-dimensional random process (5–7).
Both viewpoints are developed here for the analysis of image sampling and
reconstruction methods.





                          FIGURE 4.1-1. Dirac delta function sampling array.



4.1.1. Sampling Deterministic Fields

Let F I ( x, y ) denote a continuous, infinite-extent, ideal image field representing the
luminance, photographic density, or some desired parameter of a physical image. In
a perfect image sampling system, spatial samples of the ideal image would, in effect,
be obtained by multiplying the ideal image by a spatial sampling function

                                                  ∞       ∞
                               S ( x, y ) =       ∑       ∑    δ ( x – j ∆x, y – k ∆y )                   (4.1-1)
                                              j = –∞ k = – ∞


composed of an infinite array of Dirac delta functions arranged in a grid of spacing
( ∆x, ∆y ) as shown in Figure 4.1-1. The sampled image is then represented as


                                                  ∞       ∞
     F P ( x, y ) = FI ( x, y )S ( x, y ) =       ∑       ∑   FI ( j ∆x, k ∆y )δ ( x – j ∆x, y – k ∆y )   (4.1-2)
                                              j = –∞ k = –∞


where it is observed that F I ( x, y ) may be brought inside the summation and evalu-
ated only at the sample points ( j ∆x, k ∆y) . It is convenient, for purposes of analysis,
to consider the spatial frequency domain representation F P ( ω x, ω y ) of the sampled
image obtained by taking the continuous two-dimensional Fourier transform of the
sampled image. Thus


                                              ∞       ∞
                      F P ( ω x, ω y ) =   ∫–∞ ∫–∞ FP ( x, y ) exp { –i ( ωx x + ωy y ) } dx dy           (4.1-3)

By the Fourier transform convolution theorem, the Fourier transform of the sampled
image can be expressed as the convolution of the Fourier transforms of the ideal
image F I ( ω x, ω y ) and the sampling function S ( ω x, ω y ) as expressed by

                  F_P(ω_x, ω_y) = (1 / 4π^2) F_I(ω_x, ω_y) ⊛ S(ω_x, ω_y)              (4.1-4)

The two-dimensional Fourier transform of the spatial sampling function is an infi-
nite array of Dirac delta functions in the spatial frequency domain as given by
(4, p. 22)

        S(ω_x, ω_y) = (4π^2 / ∆x ∆y) Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} δ(ω_x - j ω_xs, ω_y - k ω_ys)   (4.1-5)


where ω xs = 2π ⁄ ∆x and ω ys = 2π ⁄ ∆y represent the Fourier domain sampling fre-
quencies. It will be assumed that the spectrum of the ideal image is bandlimited to
some bounds such that F I ( ω x, ω y ) = 0 for ω x > ω xc and ω y > ω yc . Performing the
convolution of Eq. 4.1-4 yields


        F_P(ω_x, ω_y) = (1 / ∆x ∆y) ∫_{-∞}^{∞} ∫_{-∞}^{∞} F_I(ω_x - α, ω_y - β)

                          × Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} δ(α - j ω_xs, β - k ω_ys) dα dβ          (4.1-6)



Upon changing the order of summation and integration and invoking the sifting
property of the delta function, the sampled image spectrum becomes

        F_P(ω_x, ω_y) = (1 / ∆x ∆y) Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} F_I(ω_x - j ω_xs, ω_y - k ω_ys)   (4.1-7)


As can be seen from Figure 4.1-2, the spectrum of the sampled image consists of the
spectrum of the ideal image infinitely repeated over the frequency plane in a grid of
resolution ( 2π ⁄ ∆x, 2π ⁄ ∆y ) . It should be noted that if ∆x and ∆y are chosen too
large with respect to the spatial frequency limits of F I ( ω x, ω y ) , the individual spectra
will overlap.
FIGURE 4.1-2. Typical sampled image spectra: (a) original image; (b) sampled image.

   A continuous image field may be obtained from the image samples of F_P(x, y)
by linear spatial interpolation or by linear spatial filtering of the sampled image. Let
R(x, y) denote the continuous domain impulse response of an interpolation filter and
R(ω_x, ω_y) represent its transfer function. Then the reconstructed image is obtained
by a convolution of the samples with the reconstruction filter impulse response. The
reconstructed image then becomes

                  F_R(x, y) = F_P(x, y) ⊛ R(x, y)                                      (4.1-8)

Upon substituting for FP ( x, y ) from Eq. 4.1-2 and performing the convolution, one
obtains
                                        ∞       ∞
                   FR ( x, y ) =       ∑ ∑           F I ( j ∆x, k ∆y )R ( x – j ∆x, y – k ∆y )                 (4.1-9)
                                     j = –∞ k = –∞
Thus it is seen that the impulse response function R ( x, y ) acts as a two-dimensional
interpolation waveform for the image samples. The spatial frequency spectrum of
the reconstructed image obtained from Eq. 4.1-8 is equal to the product of the recon-
struction filter transform and the spectrum of the sampled image,


                                   F R ( ω x, ω y ) = F P ( ω x, ω y )R ( ω x, ω y )                          (4.1-10)


or, from Eq. 4.1-7,


        F_R(ω_x, ω_y) = (1 / ∆x ∆y) R(ω_x, ω_y) Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} F_I(ω_x - j ω_xs, ω_y - k ω_ys)   (4.1-11)

It is clear from Eq. 4.1-11 that if there is no spectrum overlap and if R ( ω x, ω y ) filters
out all spectra for j, k ≠ 0 , the spectrum of the reconstructed image can be made
equal to the spectrum of the ideal image, and therefore the images themselves can be
made identical. The first condition is met for a bandlimited image if the sampling
period is chosen such that the rectangular region bounded by the image cutoff
frequencies ( ω xc, ω yc ) lies within a rectangular region defined by one-half the sam-
pling frequency. Hence

                  ω_xc ≤ ω_xs / 2             ω_yc ≤ ω_ys / 2                          (4.1-12a)

or, equivalently,

                  ∆x ≤ π / ω_xc               ∆y ≤ π / ω_yc                            (4.1-12b)


In physical terms, the sampling period must be equal to or smaller than one-half the
period of the finest detail within the image. This sampling condition is equivalent to
the one-dimensional sampling theorem constraint for time-varying signals that
requires a time-varying signal to be sampled at a rate of at least twice its highest-fre-
quency component. If equality holds in Eq. 4.1-12, the image is said to be sampled
at its Nyquist rate; if ∆x and ∆y are smaller than required by the Nyquist criterion,
the image is called oversampled; and if the opposite case holds, the image is under-
sampled.
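   A small numerical example may make the sampling condition concrete. The Python
sketch below computes the largest allowable sample spacing from Eq. 4.1-12b for an
assumed cutoff frequency and then shows how exceeding it aliases a test sinusoid; the
specific numbers are illustrative only.

import numpy as np

def max_sample_spacing(omega_xc, omega_yc):
    # Eq. 4.1-12b: largest spacings that still satisfy the Nyquist criterion
    return np.pi / omega_xc, np.pi / omega_yc

# Assume an image whose finest detail is 10 cycles per unit length in each direction
omega_c = 2.0 * np.pi * 10.0
dx_max, dy_max = max_sample_spacing(omega_c, omega_c)    # 0.05 units in each direction

# Undersampling: with a spacing of 0.125 units (8 samples per unit length), the
# 10 cycle-per-unit cosine is indistinguishable from a 2 cycle-per-unit cosine.
x = np.arange(0.0, 1.0, 2.5 * dx_max)
aliased = np.allclose(np.cos(omega_c * x), np.cos(2.0 * np.pi * 2.0 * x))   # True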
    If the original image is sampled at a spatial rate sufficient to prevent spectral
overlap in the sampled image, exact reconstruction of the ideal image can be
achieved by spatial filtering the samples with an appropriate filter. For example, as
shown in Figure 4.1-3, a filter with a transfer function of the form


                                     K                          for ω x ≤ ω xL and ω y ≤ ω yL                        (4.1-13a)
                                     
                    R ( ω x, ω y ) = 
                                     
                                     0                          otherwise                                            (4.1-13b)



where K is a scaling constant, satisfies the condition of exact reconstruction if
ω xL > ω xc and ω yL > ω yc . The point-spread function or impulse response of this
reconstruction filter is


                  R(x, y) = (K ω_xL ω_yL / π^2) (sin{ω_xL x} / ω_xL x) (sin{ω_yL y} / ω_yL y)   (4.1-14)




                     FIGURE 4.1-3. Sampled image reconstruction filters.




With this filter, an image is reconstructed with an infinite sum of ( sin θ ) ⁄ θ func-
tions, called sinc functions. Another type of reconstruction filter that could be
employed is the cylindrical filter with a transfer function


                  R(ω_x, ω_y) = K          for ω_x^2 + ω_y^2 ≤ ω_0^2                   (4.1-15a)

                  R(ω_x, ω_y) = 0          otherwise                                   (4.1-15b)

provided that ω_0^2 > ω_xc^2 + ω_yc^2. The impulse response for this filter is

                  R(x, y) = 2π ω_0 K J_1{ ω_0 (x^2 + y^2)^{1/2} } / (x^2 + y^2)^{1/2}   (4.1-16)


where J 1 { · } is a first-order Bessel function. There are a number of reconstruction
filters, or equivalently, interpolation waveforms, that could be employed to provide
perfect image reconstruction. In practice, however, it is often difficult to implement
optimum reconstruction filters for imaging systems.
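
For a one-dimensional slice, the interpolation of Eq. 4.1-9 with the separable sinc kernel
of Eq. 4.1-14 can be written in a few lines. The Python sketch below chooses ω_L equal to
one-half the sampling frequency, π/∆x, and evaluates the reconstruction away from the ends
of the sample array, where truncation of the kernel tails would otherwise introduce error;
the names and the test signal are illustrative assumptions.

import numpy as np

def sinc_reconstruct(samples, dx, x):
    # Eq. 4.1-9 in one dimension with the kernel sin(w_L t)/(w_L t), w_L = pi/dx.
    # np.sinc(t) is sin(pi t)/(pi t), so np.sinc((x - j dx)/dx) is the required kernel.
    x = np.asarray(x, dtype=float)
    j = np.arange(len(samples))
    kernel = np.sinc((x[:, None] - j[None, :] * dx) / dx)
    return kernel @ np.asarray(samples, dtype=float)

# Reconstruct a bandlimited cosine from samples taken above its Nyquist rate
dx = 0.05                                    # 20 samples per unit length
xs = np.arange(0.0, 2.0, dx)
f = np.cos(2.0 * np.pi * 8.0 * xs)           # 8 cycles per unit, below the 10 cycle limit
x_fine = np.linspace(0.8, 1.2, 200)          # interior points, far from the array ends
f_rec = sinc_reconstruct(f, dx, x_fine)      # close to cos(2*pi*8*x_fine)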


4.1.2. Sampling Random Image Fields

In the previous discussion of image sampling and reconstruction, the ideal input
image field has been considered to be a deterministic function. It has been shown
that if the Fourier transform of the ideal image is bandlimited, then discrete image
samples taken at the Nyquist rate are sufficient to reconstruct an exact replica of the
ideal image with proper sample interpolation. It will now be shown that similar
results hold for sampling two-dimensional random fields.
   Let FI ( x, y ) denote a continuous two-dimensional stationary random process
with known mean η F I and autocorrelation function


                  R_{F_I}(τ_x, τ_y) = E{ F_I(x_1, y_1) F_I*(x_2, y_2) }                (4.1-17)




where τ x = x 1 – x 2 and τ y = y 1 – y 2 . This process is spatially sampled by a Dirac
sampling array yielding

                                                                   ∞          ∞
       F P ( x, y ) = FI ( x, y )S ( x, y ) = F I ( x, y )        ∑ ∑               δ ( x – j ∆x, y – k ∆y )   (4.1-18)
                                                                j = –∞ k = –∞


The autocorrelation of the sampled process is then


                  R_{F_P}(τ_x, τ_y) = E{ F_P(x_1, y_1) F_P*(x_2, y_2) }                (4.1-19)

                                    = E{ F_I(x_1, y_1) F_I*(x_2, y_2) } S(x_1, y_1) S(x_2, y_2)



The first term on the right-hand side of Eq. 4.1-19 is the autocorrelation of the
stationary ideal image field. It should be observed that the product of the two Dirac
sampling functions on the right-hand side of Eq. 4.1-19 is itself a Dirac sampling
function of the form


                     S ( x 1, y 1 )S ( x 2, y 2 ) = S ( x 1 – x 2, y 1 – y 2 ) = S ( τ x, τ y )      (4.1-20)


Hence the sampled random field is also stationary with an autocorrelation function

                  R_{F_P}(τ_x, τ_y) = R_{F_I}(τ_x, τ_y) S(τ_x, τ_y)                    (4.1-21)




Taking the two-dimensional Fourier transform of Eq. 4.1-21 yields the power spec-
trum of the sampled random field. By the Fourier transform convolution theorem

                  W_{F_P}(ω_x, ω_y) = (1 / 4π^2) W_{F_I}(ω_x, ω_y) ⊛ S(ω_x, ω_y)       (4.1-22)

where W F I ( ω x, ω y ) and W F P ( ω x, ω y ) represent the power spectral densities of the
ideal image and sampled ideal image, respectively, and S ( ω x, ω y ) is the Fourier
transform of the Dirac sampling array. Then, by the derivation leading to Eq. 4.1-7,
it is found that the spectrum of the sampled field can be written as

        W_{F_P}(ω_x, ω_y) = (1 / ∆x ∆y) Σ_{j=-∞}^{∞} Σ_{k=-∞}^{∞} W_{F_I}(ω_x - j ω_xs, ω_y - k ω_ys)   (4.1-23)

Thus the sampled image power spectrum is composed of the power spectrum of the
continuous ideal image field replicated over the spatial frequency domain at integer
multiples of the sampling spatial frequency ( 2π ⁄ ∆x, 2π ⁄ ∆y ) . If the power spectrum
of the continuous ideal image field is bandlimited such that W F I ( ω x, ω y ) = 0 for
 ω x > ω xc and ω y > ω yc , where ω xc and ω yc are the cutoff frequencies, the individual
spectra of Eq. 4.1-23 will not overlap if the spatial sampling periods are chosen such
that ∆x < π ⁄ ω xc and ∆y < π ⁄ ω yc . A continuous random field F R ( x, y ) may be recon-
structed from samples of the random ideal image field by the interpolation formula

                                    ∞           ∞
                F R ( x, y ) =     ∑        ∑       F I ( j ∆x, k ∆y)R ( x – j ∆x, y – k ∆y )        (4.1-24)
                                 j = – ∞ k = –∞


where R ( x, y ) is the deterministic interpolation function. The reconstructed field and
the ideal image field can be made equivalent in the mean-square sense (5, p. 284),
that is,

                  E{ [F_I(x, y) - F_R(x, y)]^2 } = 0                                   (4.1-25)


if the Nyquist sampling criteria are met and if suitable interpolation functions, such
as the sinc function or Bessel function of Eqs. 4.1-14 and 4.1-16, are utilized.




                   FIGURE 4.1-4. Spectra of a sampled noisy image.



   The preceding results are directly applicable to the practical problem of sampling
a deterministic image field plus additive noise, which is modeled as a random field.
Figure 4.1-4 shows the spectrum of a sampled noisy image. This sketch indicates a
significant potential problem. The spectrum of the noise may be wider than the ideal
image spectrum, and if the noise process is undersampled, its tails will overlap into
the passband of the image reconstruction filter, leading to additional noise artifacts.
A solution to this problem is to prefilter the noisy image before sampling to reduce
the noise bandwidth.


4.2. IMAGE SAMPLING SYSTEMS

In a physical image sampling system, the sampling array will be of finite extent, the
sampling pulses will be of finite width, and the image may be undersampled. The
consequences of nonideal sampling are explored next.
   As a basis for the discussion, Figure 4.2-1 illustrates a common image scanning
system. In operation, a narrow light beam is scanned directly across a positive
photographic transparency of an ideal image. The light passing through the
transparency is collected by a condenser lens and is directed toward the surface of a
photodetector. The electrical output of the photodetector is integrated over the time
period during which the light beam strikes a resolution cell. In the analysis it will be
assumed that the sampling is noise-free. The results developed in Section 4.1 for
sampling noisy images can be combined with the results developed in this section
quite readily. Also, it should be noted that the analysis is easily extended to a wide
class of physical image sampling systems.

                         FIGURE 4.2-1. Image scanning system.


4.2.1. Sampling Pulse Effects

Under the assumptions stated above, the sampled image function is given by


                                 F P ( x, y ) = FI ( x, y )S ( x, y )           (4.2-1)


where the sampling array

                                          J       K
                       S ( x, y ) =    ∑ ∑            P ( x – j ∆x, y – k ∆y)   (4.2-2)
                                      j = –J k = –K


is composed of (2J + 1)(2K + 1) identical pulses P ( x, y ) arranged in a grid of spac-
ing ∆x, ∆y . The symmetrical limits on the summation are chosen for notational
simplicity. The sampling pulses are assumed scaled such that


                                      ∞       ∞
                                  ∫–∞ ∫–∞ P ( x, y ) dx dy = 1                  (4.2-3)


For purposes of analysis, the sampling function may be assumed to be generated by
a finite array of Dirac delta functions DT ( x, y ) passing through a linear filter with
impulse response P ( x, y ). Thus

                  S(x, y) = D_T(x, y) ⊛ P(x, y)                                        (4.2-4)

where

                                                           J           K
                             D T ( x, y ) =              ∑ ∑                   δ ( x – j ∆x, y – k ∆y)                               (4.2-5)
                                                       j = –J k = –K


Combining Eqs. 4.2-1 and 4.2-2 results in an expression for the sampled image
function,

                                          J            K
                F P ( x, y ) =          ∑ ∑                    F I ( j ∆x, k ∆ y)P ( x – j ∆x, y – k ∆y)                             (4.2-6)
                                      j = – J k = –K


The spectrum of the sampled image function is given by


                  F_P(ω_x, ω_y) = (1 / 4π^2) F_I(ω_x, ω_y) ⊛ [ D_T(ω_x, ω_y) P(ω_x, ω_y) ]   (4.2-7)

where P ( ω x, ω y ) is the Fourier transform of P ( x, y ) . The Fourier transform of the
truncated sampling array is found to be (5, p. 105)

                  D_T(ω_x, ω_y) = [ sin{ ω_x (J + 1/2) ∆x } / sin{ ω_x ∆x / 2 } ]
                                  × [ sin{ ω_y (K + 1/2) ∆y } / sin{ ω_y ∆y / 2 } ]    (4.2-8)


Figure 4.2-2 depicts D T ( ω x, ω y ) . In the limit as J and K become large, the right-hand
side of Eq. 4.2-7 becomes an array of Dirac delta functions.




            FIGURE 4.2-2. Truncated sampling train and its Fourier spectrum.


    In an image reconstruction system, an image is reconstructed by interpolation of
its samples. Ideal interpolation waveforms such as the sinc function of Eq. 4.1-14 or
the Bessel function of Eq. 4.1-16 generally extend over the entire image field. If the
sampling array is truncated, the reconstructed image will be in error near its bound-
ary because the tails of the interpolation waveforms will be truncated in the vicinity
of the boundary (8,9). However, the error is usually negligibly small at distances of
about 8 to 10 Nyquist samples or greater from the boundary.
    The actual numerical samples of an image are obtained by a spatial integration of
FS ( x, y ) over some finite resolution cell. In the scanning system of Figure 4.2-1, the
integration is inherently performed on the photodetector surface. The image sample
value of the resolution cell (j, k) may then be expressed as

        F_S(j ∆x, k ∆y) = ∫_{j∆x - A_x}^{j∆x + A_x} ∫_{k∆y - A_y}^{k∆y + A_y} F_I(x, y) P(x - j ∆x, y - k ∆y) dx dy   (4.2-9)


where Ax and Ay denote the maximum dimensions of the resolution cell. It is
assumed that only one sample pulse exists during the integration time of the detec-
tor. If this assumption is not valid, consideration must be given to the difficult prob-
lem of sample crosstalk. In the sampling system under discussion, the width of the
resolution cell may be larger than the sample spacing. Thus the model provides for
sequentially overlapped samples in time.
    By a simple change of variables, Eq. 4.2-9 may be rewritten as

    F_S(j\Delta x, k\Delta y) = \int_{-A_x}^{A_x} \int_{-A_y}^{A_y} F_I(j\Delta x - \alpha, k\Delta y - \beta) P(-\alpha, -\beta) \, d\alpha \, d\beta                    (4.2-10)


Because only a single sampling pulse is assumed to occur during the integration
period, the limits of Eq. 4.2-10 can be extended infinitely. In this formulation, Eq.
4.2-10 is recognized to be equivalent to a convolution of the ideal continuous image
FI ( x, y ) with an impulse response function P ( – x, – y ) with reversed coordinates,
followed by sampling over a finite area with Dirac delta functions. Thus, neglecting
the effects of the finite size of the sampling array, the model for finite extent pulse
sampling becomes


    F_S(j\Delta x, k\Delta y) = [F_I(x, y) \circledast P(-x, -y)] \, \delta(x - j\Delta x, y - k\Delta y)                    (4.2-11)


In most sampling systems, the sampling pulse is symmetric, so that P ( – x, – y ) = P ( x, y ).
   Equation 4.2-11 provides a simple relation that is useful in assessing the effect
of finite extent pulse sampling. If the ideal image is bandlimited and Ax and Ay sat-
isfy the Nyquist criterion, the finite extent of the sample pulse represents an equiv-
alent linear spatial degradation (an image blur) that occurs before ideal sampling.
Part 4 considers methods of compensating for this degradation. A finite-extent
sampling pulse is not always a detriment, however. Consider the situation in which
the ideal image is insufficiently bandlimited so that it is undersampled. The finite-
extent pulse, in effect, provides a low-pass filtering of the ideal image, which, in
turn, serves to limit its spatial frequency content, and hence to minimize aliasing
error.
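
The finite-extent pulse sampling model of Eq. 4.2-11 is easy to simulate numerically. The
sketch below is not from the text; the test image, pulse shape, and sample spacings are
arbitrary assumptions. It blurs an ideal image with the reversed pulse shape and then
samples the result on the lattice, using SciPy's ndimage.convolve.

    import numpy as np
    from scipy.ndimage import convolve

    def pulse_sample(ideal_image, pulse, dx, dy):
        """Blur with the reversed sampling pulse, then sample every (dy, dx) pixels."""
        # Impulse response P(-x, -y); for a symmetric pulse this equals P(x, y).
        blurred = convolve(ideal_image, pulse[::-1, ::-1], mode="reflect")
        return blurred[::dy, ::dx]

    rng = np.random.default_rng(0)
    image = rng.random((64, 64))                  # stand-in for the ideal image F_I
    square_pulse = np.full((3, 3), 1.0 / 9.0)     # finite-extent rectangular pulse
    samples = pulse_sample(image, square_pulse, dx=4, dy=4)
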


4.2.2. Aliasing Effects

To achieve perfect image reconstruction in a sampled imaging system, it is neces-
sary to bandlimit the image to be sampled, spatially sample the image at the Nyquist
or higher rate, and properly interpolate the image samples. Sample interpolation is
considered in the next section; an analysis is presented here of the effect of under-
sampling an image.
   If there is spectral overlap resulting from undersampling, as indicated by the
shaded regions in Figure 4.2-3, spurious spatial frequency components will be intro-
duced into the reconstruction. The effect is called an aliasing error (10,11). Aliasing
effects in an actual image are shown in Figure 4.2-4. Spatial undersampling of the
image creates artificial low-spatial-frequency components in the reconstruction. In
the field of optics, aliasing errors are called moiré patterns.
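
As a hedged illustration of this effect (not from the text), the sketch below builds a
radial chirp test image whose local spatial frequency grows with radius, then undersamples
it directly and after a Gaussian presampling filter. Direct decimation folds the high
frequencies into spurious low-frequency, moiré-like patterns; the prefiltered version
trades resolution for reduced aliasing.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    n = 512
    y, x = np.mgrid[0:n, 0:n]
    r2 = (x - n / 2) ** 2 + (y - n / 2) ** 2
    zone_plate = 0.5 + 0.5 * np.cos(np.pi * r2 / (2 * n))   # local frequency grows with radius

    step = 4                                                 # undersampling factor
    aliased = zone_plate[::step, ::step]                     # no presampling filter
    prefiltered = gaussian_filter(zone_plate, sigma=step)[::step, ::step]
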
   From Eq. 4.1-7 the spectrum of a sampled image can be written in the form


    F_P(\omega_x, \omega_y) = \frac{1}{\Delta x \, \Delta y} [F_I(\omega_x, \omega_y) + F_Q(\omega_x, \omega_y)]                    (4.2-12)








           FIGURE 4.2-3. Spectra of undersampled two-dimensional function.




           FIGURE 4.2-4. Example of aliasing error in a sampled image: (a) original image; (b) sampled image.

where F I ( ω x, ω y ) represents the spectrum of the original image sampled at period
( ∆x, ∆y ) . The term

    F_Q(\omega_x, \omega_y) = \frac{1}{\Delta x \, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(\omega_x - j\omega_{xs}, \omega_y - k\omega_{ys})                    (4.2-13)


for j ≠ 0 and k ≠ 0 describes the spectrum of the higher-order components of the
sampled image repeated over spatial frequencies ω xs = 2π ⁄ ∆x and ω ys = 2π ⁄ ∆y. If
there were no spectral foldover, optimal interpolation of the sampled image
components could be obtained by passing the sampled image through a zonal low-
pass filter defined by


    R(\omega_x, \omega_y) = \begin{cases}
        K & \text{for } |\omega_x| \le \omega_{xs}/2 \text{ and } |\omega_y| \le \omega_{ys}/2   (4.2-14a) \\
        0 & \text{otherwise}   (4.2-14b)
    \end{cases}

where K is a scaling constant. Applying this interpolation strategy to an undersam-
pled image yields a reconstructed image field

                                    FR ( x, y ) = FI ( x, y ) + A ( x, y )                               (4.2-15)

where

    A(x, y) = \frac{1}{4\pi^2} \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} F_Q(\omega_x, \omega_y) \exp\{i(\omega_x x + \omega_y y)\} \, d\omega_x \, d\omega_y                    (4.2-16)



represents the aliasing error artifact in the reconstructed image. The factor K has
absorbed the amplitude scaling factors. Figure 4.2-5 shows the reconstructed image




                       FIGURE 4.2-5. Reconstructed image spectrum.




                    FIGURE 4.2-6. Model for analysis of aliasing effect.



spectrum that illustrates the spectral foldover in the zonal low-pass filter passband.
The aliasing error component of Eq. 4.2-16 can be reduced substantially by low-
pass filtering before sampling to attenuate the spectral foldover.
   Figure 4.2-6 shows a model for the quantitative analysis of aliasing effects. In
this model, the ideal image FI ( x, y ) is assumed to be a sample of a two-dimensional
random process with known power-spectral density W FI ( ω x, ω y ) . The ideal image is
linearly filtered by a presampling spatial filter with a transfer function H ( ω x, ω y ) .
This filter is assumed to be a low-pass type of filter with a smooth attenuation of
high spatial frequencies (i.e., not a zonal low-pass filter with a sharp cutoff). The fil-
tered image is then spatially sampled by an ideal Dirac delta function sampler at a
resolution ∆x, ∆y. Next, a reconstruction filter interpolates the image samples to pro-
duce a replica of the ideal image. From Eq. 1.4-27, the power spectral density at the
presampling filter output is found to be

    W_{F_O}(\omega_x, \omega_y) = |H(\omega_x, \omega_y)|^2 W_{F_I}(\omega_x, \omega_y)                    (4.2-17)



and the power spectral density of the sampled image field is


    W_{F_P}(\omega_x, \omega_y) = \frac{1}{\Delta x \, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} W_{F_O}(\omega_x - j\omega_{xs}, \omega_y - k\omega_{ys})                    (4.2-18)




Figure 4.2-7 shows the sampled image power spectral density and the foldover alias-
ing spectral density from the first sideband with and without presampling low-pass
filtering.
     It is desirable to isolate the undersampling effect from the effect of improper
reconstruction. Therefore, assume for this analysis that the reconstruction filter
 R ( ω x, ω y ) is an optimal filter of the form given in Eq. 4.2-14. The energy passing
through the reconstruction filter for j = k = 0 is then


    E_R = \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} W_{F_I}(\omega_x, \omega_y) |H(\omega_x, \omega_y)|^2 \, d\omega_x \, d\omega_y                    (4.2-19)




            FIGURE 4.2-7. Effect of presampling filtering on a sampled image.



Ideally, the presampling filter should be a low-pass zonal filter with a transfer func-
tion identical to that of the reconstruction filter as given by Eq. 4.2-14. In this case,
the sampled image energy would assume the maximum value


    E_{RM} = \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} W_{F_I}(\omega_x, \omega_y) \, d\omega_x \, d\omega_y                    (4.2-20)



Image resolution degradation resulting from the presampling filter may then be
measured by the ratio

    \mathcal{E}_R = \frac{E_{RM} - E_R}{E_{RM}}                    (4.2-21)

   The aliasing error in a sampled image system is generally measured in terms of
the energy, from higher-order sidebands, that folds over into the passband of the
reconstruction filter. Assume, for simplicity, that the sampling rate is sufficient so
that the spectral foldover from spectra centered at (± jω_{xs}, ± kω_{ys}) is negligible
for j ≥ 2 and k ≥ 2 . The total aliasing error energy, as indicated by the doubly cross-
hatched region of Figure 4.2-7, is then

                                             EA = E O – ER                                         (4.2-22)

where

    E_O = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_{F_I}(\omega_x, \omega_y) |H(\omega_x, \omega_y)|^2 \, d\omega_x \, d\omega_y                    (4.2-23)


denotes the energy of the output of the presampling filter. The aliasing error is
defined as (10)


    \mathcal{E}_A = \frac{E_A}{E_O}                    (4.2-24)


    Aliasing error can be reduced by attenuating high spatial frequencies of F I ( x, y )
with the presampling filter. However, any attenuation within the passband of the
reconstruction filter represents a loss of resolution of the sampled image. As a result,
there is a trade-off between sampled image resolution and aliasing error.
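
The error measures of Eqs. 4.2-19 through 4.2-24 can be estimated by straightforward
numerical integration, as in the sketch below. The sketch is illustrative only: the
frequency grid, the integration extent, and the example spectrum and prefilter (the
model of Eq. 4.2-30 and a Gaussian spot with assumed parameters) are not values from
the text.

    import numpy as np

    def sampling_errors(W, H, wxs, wys, grid=512, extent=4.0):
        """Estimate resolution error (Eq. 4.2-21) and aliasing error (Eq. 4.2-24)."""
        wx = np.linspace(-extent * wxs, extent * wxs, grid)
        wy = np.linspace(-extent * wys, extent * wys, grid)
        WX, WY = np.meshgrid(wx, wy, indexing="ij")
        dA = (wx[1] - wx[0]) * (wy[1] - wy[0])
        filtered = W(WX, WY) * np.abs(H(WX, WY)) ** 2
        in_band = (np.abs(WX) <= wxs / 2) & (np.abs(WY) <= wys / 2)

        E_R = np.sum(filtered[in_band]) * dA        # Eq. 4.2-19
        E_RM = np.sum(W(WX, WY)[in_band]) * dA      # Eq. 4.2-20
        E_O = np.sum(filtered) * dA                 # Eq. 4.2-23
        return (E_RM - E_R) / E_RM, (E_O - E_R) / E_O   # Eqs. 4.2-21, 4.2-22, 4.2-24

    # Example (assumed parameters): Eq. 4.2-30 spectrum and a Gaussian-spot prefilter.
    wxs = wys = 2 * np.pi
    W = lambda wx, wy: 1.0 / (1.0 + (np.hypot(wx, wy) / (wxs / 4)) ** 6)
    H = lambda wx, wy: np.exp(-(wx ** 2 + wy ** 2) * (2.0 / wxs) ** 2 / 2.0)
    print(sampling_errors(W, H, wxs, wys))
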
    Consideration is now given to the aliasing error versus resolution performance of
several practical types of presampling filters. Perhaps the simplest means of spa-
tially filtering an image formed by incoherent light is to pass the image through a
lens with a restricted aperture. Spatial filtering can then be achieved by controlling
the degree of lens misfocus. Figure 11.2-2 is a plot of the optical transfer function of
a circular lens as a function of the degree of lens misfocus. Even a perfectly focused
lens produces some blurring because of the diffraction limit of its aperture. The
transfer function of a diffraction-limited circular lens of diameter d is given by
(12, p. 83)


    H(\omega) = \begin{cases}
        \dfrac{2}{\pi} \left[ \arccos\left(\dfrac{\omega}{\omega_0}\right) - \dfrac{\omega}{\omega_0}\sqrt{1 - \left(\dfrac{\omega}{\omega_0}\right)^2} \right] & \text{for } 0 \le \omega \le \omega_0   (4.2-25a) \\
        0 & \text{for } \omega > \omega_0   (4.2-25b)
    \end{cases}


where ω 0 = πd ⁄ R and R is the distance from the lens to the focal plane. In Section
4.2.1, it was noted that sampling with a finite-extent sampling pulse is equivalent to
ideal sampling of an image that has been passed through a spatial filter whose
impulse response is equal to the pulse shape of the sampling pulse with reversed
coordinates. Thus the sampling pulse may be utilized to perform presampling filter-
ing. A common pulse shape is the rectangular pulse


    P(x, y) = \begin{cases}
        \dfrac{1}{T^2} & \text{for } |x|, |y| \le \dfrac{T}{2}   (4.2-26a) \\
        0 & \text{for } |x|, |y| > \dfrac{T}{2}   (4.2-26b)
    \end{cases}


obtained with an incoherent light imaging system of a scanning microdensitometer.
The transfer function for a square scanning spot is

    P(\omega_x, \omega_y) = \frac{\sin\{\omega_x T / 2\}}{\omega_x T / 2} \cdot \frac{\sin\{\omega_y T / 2\}}{\omega_y T / 2}                    (4.2-27)


Cathode ray tube displays produce display spots with a two-dimensional Gaussian
shape of the form


    P(x, y) = \frac{1}{2\pi\sigma_w^2} \exp\left\{ -\frac{x^2 + y^2}{2\sigma_w^2} \right\}                    (4.2-28)


where σ_w is a measure of the spot spread. The equivalent transfer function of the
Gaussian-shaped scanning spot is

    P(\omega_x, \omega_y) = \exp\left\{ -\frac{(\omega_x^2 + \omega_y^2)\sigma_w^2}{2} \right\}                    (4.2-29)

   Examples of the aliasing error-resolution trade-offs for a diffraction-limited aper-
ture, a square sampling spot, and a Gaussian-shaped spot are presented in Figure
4.2-8 as a function of the parameter ω 0. The square pulse width is set at T = 2π ⁄ ω 0,
so that the first zero of the sinc function coincides with the lens cutoff frequency.
 The spread of the Gaussian spot is set at σ w = 2 ⁄ ω 0, corresponding to two stan-
dard deviation units in cross section. In this example, the input image spectrum is
modeled as




FIGURE 4.2-8. Aliasing error and resolution error obtained with different types of
prefiltering.


    W_{F_I}(\omega_x, \omega_y) = \frac{A}{1 + (\omega / \omega_c)^{2m}}                    (4.2-30)

where A is an amplitude constant, m is an integer governing the rate of falloff of the
Fourier spectrum, and ω c is the spatial frequency at the half-amplitude point. The
curves of Figure 4.2-8 indicate that the Gaussian spot and square spot scanning pre-
filters provide about the same results, while the diffraction-limited lens yields a
somewhat greater loss in resolution for the same aliasing error level. A defocused
lens would give even poorer results.


4.3. IMAGE RECONSTRUCTION SYSTEMS

In Section 4.1 the conditions for exact image reconstruction were stated: The origi-
nal image must be spatially sampled at a rate of at least twice its highest spatial fre-
quency, and the reconstruction filter, or equivalent interpolator, must be designed to
pass the spectral component at j = 0, k = 0 without distortion and reject all spectra
for which j, k ≠ 0. With physical image reconstruction systems, these conditions are
impossible to achieve exactly. Consideration is now given to the effects of using
imperfect reconstruction functions.


4.3.1. Implementation Techniques

In most digital image processing systems, electrical image samples are sequentially
output from the processor in a normal raster scan fashion. A continuous image is
generated from these electrical samples by driving an optical display such as a cath-
ode ray tube (CRT) with the intensity of each point set proportional to the image
sample amplitude. The light array on the CRT can then be imaged onto a ground-
glass screen for viewing or onto photographic film for recording with a light projec-
tion system incorporating an incoherent spatial filter possessing a desired optical
transfer function. Optimal transfer functions with a perfectly flat passband over the
image spectrum and a sharp cutoff to zero outside the spectrum cannot be physically
implemented.
   The most common means of image reconstruction is by use of electro-optical
techniques. For example, image reconstruction can be performed quite simply by
electrically defocusing the writing spot of a CRT display. The drawback of this tech-
nique is the difficulty of accurately controlling the spot shape over the image field.
In a scanning microdensitometer, image reconstruction is usually accomplished by
projecting a rectangularly shaped spot of light onto photographic film. Generally,
the spot size is set at the same size as the sample spacing to fill the image field com-
pletely. The resulting interpolation is simple to perform, but not optimal. If a small
writing spot can be achieved with a CRT display or a projected light display, it is
possible approximately to synthesize any desired interpolation by subscanning a res-
olution cell, as shown in Figure 4.3-1.




                FIGURE 4.3-1. Image reconstruction by subscanning.


   The following subsections introduce several one- and two-dimensional interpola-
tion functions and discuss their theoretical performance. Chapter 13 presents meth-
ods of digitally implementing image reconstruction systems.




              FIGURE 4.3-2. One-dimensional interpolation waveforms.


4.3.2. Interpolation Functions

Figure 4.3-2 illustrates several one-dimensional interpolation functions. As stated
previously, the sinc function provides an exact reconstruction, but it cannot be
physically generated by an incoherent optical filtering system. It is possible to
approximate the sinc function by truncating it and then performing subscanning
(Figure 4.3-1). The simplest interpolation waveform is the square pulse function,
which results in a zero-order interpolation of the samples. It is defined mathemati-
cally as


    R_0(x) = 1 \quad \text{for } -\tfrac{1}{2} \le x \le \tfrac{1}{2}                    (4.3-1)


and zero otherwise, where for notational simplicity, the sample spacing is assumed
to be of unit dimension. A triangle function, defined as

    R_1(x) = \begin{cases}
        x + 1 & \text{for } -1 \le x \le 0   (4.3-2a) \\
        1 - x & \text{for } 0 < x \le 1   (4.3-2b)
    \end{cases}




                    FIGURE 4.3-3. One-dimensional interpolation.

provides the first-order linear sample interpolation with triangular interpolation
waveforms. Figure 4.3-3 illustrates one-dimensional interpolation using sinc,
square, and triangle functions.
   The triangle function may be considered to be the result of convolving a square
function with itself. Convolution of the triangle function with the square function
yields a bell-shaped interpolation waveform (Figure 4.3-2d). It is defined as


    R_2(x) = \begin{cases}
        \tfrac{1}{2}\left(x + \tfrac{3}{2}\right)^2 & \text{for } -\tfrac{3}{2} \le x \le -\tfrac{1}{2}   (4.3-3a) \\
        \tfrac{3}{4} - x^2 & \text{for } -\tfrac{1}{2} < x \le \tfrac{1}{2}   (4.3-3b) \\
        \tfrac{1}{2}\left(x - \tfrac{3}{2}\right)^2 & \text{for } \tfrac{1}{2} < x \le \tfrac{3}{2}   (4.3-3c)
    \end{cases}



This process quickly converges to the Gaussian-shaped waveform of Figure 4.3-2f.
Convolving the bell-shaped waveform with the square function results in a third-
order polynomial function called a cubic B-spline (13,14). It is defined mathemati-
cally as


    R_3(x) = \begin{cases}
        \tfrac{2}{3} + \tfrac{1}{2} x^3 - x^2 & \text{for } 0 \le x \le 1   (4.3-4a) \\
        \tfrac{1}{6} (2 - x)^3 & \text{for } 1 < x \le 2   (4.3-4b)
    \end{cases}


The cubic B-spline is a particularly attractive candidate for image interpolation
because of its properties of continuity and smoothness at the sample points. It can be
shown by direct differentiation of Eq. 4.3-4 that R3(x) is continuous in its first and
second derivatives at the sample points.
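
For reference, the sketch below (unit sample spacing; the function names are assumed)
evaluates the four interpolation kernels of Eqs. 4.3-1 through 4.3-4, each written in
terms of |x| since the kernels are symmetric.

    import numpy as np

    def R0(x):                      # square (zero-order), Eq. 4.3-1
        x = np.abs(np.asarray(x, dtype=float))
        return np.where(x <= 0.5, 1.0, 0.0)

    def R1(x):                      # triangle (first-order), Eq. 4.3-2
        x = np.abs(np.asarray(x, dtype=float))
        return np.where(x <= 1.0, 1.0 - x, 0.0)

    def R2(x):                      # bell, Eq. 4.3-3
        x = np.abs(np.asarray(x, dtype=float))
        return np.where(x <= 0.5, 0.75 - x ** 2,
                        np.where(x <= 1.5, 0.5 * (x - 1.5) ** 2, 0.0))

    def R3(x):                      # cubic B-spline, Eq. 4.3-4
        x = np.abs(np.asarray(x, dtype=float))
        return np.where(x <= 1.0, 2.0 / 3.0 + 0.5 * x ** 3 - x ** 2,
                        np.where(x <= 2.0, (2.0 - x) ** 3 / 6.0, 0.0))
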
    As mentioned earlier, the sinc function can be approximated by truncating its
tails. Typically, this is done over a four-sample interval. The problem with this
approach is that the slope discontinuity at the ends of the waveform leads to ampli-
tude ripples in a reconstructed function. This problem can be eliminated by generat-
ing a cubic convolution function (15,16), which forces the slope of the ends of the
interpolation to be zero. The cubic convolution interpolation function can be
expressed in the following general form:


    R_c(x) = \begin{cases}
        A_1 x^3 + B_1 x^2 + C_1 x + D_1 & \text{for } 0 \le x \le 1   (4.3-5a) \\
        A_2 x^3 + B_2 x^2 + C_2 x + D_2 & \text{for } 1 < x \le 2   (4.3-5b)
    \end{cases}


where A_i, B_i, C_i, D_i are weighting factors. The weighting factors are determined by
satisfying two sets of constraint conditions:

      1.  R_c(x) = 1 at x = 0, and R_c(x) = 0 at x = 1, 2.

      2.  The first-order derivative R'_c(x) is zero at x = 0 and x = 2 and is continuous at x = 1.

These conditions result in seven equations for the eight unknowns and lead to the
parametric expression


    R_c(x) = \begin{cases}
        (a + 2) x^3 - (a + 3) x^2 + 1 & \text{for } 0 \le x \le 1   (4.3-6a) \\
        a x^3 - 5a x^2 + 8a x - 4a & \text{for } 1 < x \le 2   (4.3-6b)
    \end{cases}



where a ≡ A2 of Eq. 4.3-5 is the remaining unknown weighting factor. Rifman
(15) and Bernstein (16) have set a = – 1, which causes R c ( x ) to have the same
slope, –1, at x = 1 as the sinc function. Keys (17) has proposed setting a = –1/2,
which provides an interpolation function that approximates the original unsam-
pled image to as high a degree as possible in the sense of a power series expan-
sion. The factor a in Eq. 4.3-6 can be used as a tuning parameter to obtain a best
visual interpolation (18,19).
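
A minimal sketch of the parametric cubic convolution kernel of Eq. 4.3-6 and of
one-dimensional interpolation with it follows; the resampling loop and the function
names are illustrative assumptions, not the book's code.

    import numpy as np

    def Rc(x, a=-0.5):
        """Cubic convolution kernel, Eq. 4.3-6; a = -1 (Rifman, Bernstein) or -1/2 (Keys)."""
        x = np.abs(np.asarray(x, dtype=float))
        near = (a + 2.0) * x ** 3 - (a + 3.0) * x ** 2 + 1.0          # 0 <= |x| <= 1
        far = a * x ** 3 - 5.0 * a * x ** 2 + 8.0 * a * x - 4.0 * a   # 1 < |x| <= 2
        return np.where(x <= 1.0, near, np.where(x <= 2.0, far, 0.0))

    def interp1(samples, positions, a=-0.5):
        """Interpolate uniformly spaced samples at fractional positions (unit spacing)."""
        samples = np.asarray(samples, dtype=float)
        out = np.zeros_like(np.asarray(positions, dtype=float))
        for j in range(len(samples)):
            out += samples[j] * Rc(positions - j, a)
        return out

    values = np.sin(np.linspace(0.0, 3.0, 8))
    print(interp1(values, np.array([2.25, 2.5, 2.75])))
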
    Table 4.3-1 defines several orthogonally separable two-dimensional interpola-
tion functions for which R ( x, y ) = R ( x )R ( y ). The separable square function has a
square peg shape. The separable triangle function has the shape of a pyramid.
Using a triangle interpolation function for one-dimensional interpolation is
equivalent to linearly connecting adjacent sample peaks as shown in Figure
4.3-3c. The extension to two dimensions does not hold because, in general, it is
not possible to fit a plane to four adjacent samples. One approach, illustrated in
Figure 4.3-4a, is to perform a planar fit in a piecewise fashion. In region I of
Figure 4.3-4a, points are linearly interpolated in the plane defined by pixels A, B,
C, while in region II, interpolation is performed in the plane defined by pixels B,
C, D. A computationally simpler method, called bilinear interpolation, is
described in Figure 4.3-4b. Bilinear interpolation is performed by linearly interpo-
lating points along separable orthogonal coordinates of the continuous image
field. The resultant interpolated surface of Figure 4.3-4b, connecting pixels A, B,
C, D, is generally nonplanar. Chapter 13 shows that bilinear interpolation is equiv-
alent to interpolation with a pyramid function.
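
A small sketch of bilinear interpolation within the cell bounded by pixels A, B, C, D
follows: linear interpolation along one coordinate, then along the other, as described
above. The row-and-column indexing convention is an assumption of the sketch.

    import numpy as np

    def bilinear(image, r, c):
        """Interpolate image at fractional (row, column) coordinates (r, c)."""
        r0, c0 = int(np.floor(r)), int(np.floor(c))
        dr, dc = r - r0, c - c0
        top = (1 - dc) * image[r0, c0] + dc * image[r0, c0 + 1]
        bottom = (1 - dc) * image[r0 + 1, c0] + dc * image[r0 + 1, c0 + 1]
        return (1 - dr) * top + dr * bottom

    grid = np.arange(16.0).reshape(4, 4)
    print(bilinear(grid, 1.5, 2.25))   # lies among pixels (1,2), (1,3), (2,2), (2,3)
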

TABLE 4.3-1. Two-Dimensional Interpolation Functions

Separable sinc:
    R(x, y) = \frac{4}{T_x T_y} \cdot \frac{\sin\{2\pi x / T_x\}}{2\pi x / T_x} \cdot \frac{\sin\{2\pi y / T_y\}}{2\pi y / T_y},   T_x = \frac{2\pi}{\omega_{xs}},   T_y = \frac{2\pi}{\omega_{ys}}
    \mathcal{R}(\omega_x, \omega_y) = \begin{cases} 1 & |\omega_x| \le \omega_{xs}, \ |\omega_y| \le \omega_{ys} \\ 0 & \text{otherwise} \end{cases}

Separable square:
    R_0(x, y) = \begin{cases} \dfrac{1}{T_x T_y} & |x| \le \dfrac{T_x}{2}, \ |y| \le \dfrac{T_y}{2} \\ 0 & \text{otherwise} \end{cases}
    \mathcal{R}_0(\omega_x, \omega_y) = \frac{\sin\{\omega_x T_x / 2\}}{\omega_x T_x / 2} \cdot \frac{\sin\{\omega_y T_y / 2\}}{\omega_y T_y / 2}

Separable triangle:
    R_1(x, y) = R_0(x, y) \circledast R_0(x, y),   \mathcal{R}_1(\omega_x, \omega_y) = [\mathcal{R}_0(\omega_x, \omega_y)]^2

Separable bell:
    R_2(x, y) = R_0(x, y) \circledast R_1(x, y),   \mathcal{R}_2(\omega_x, \omega_y) = [\mathcal{R}_0(\omega_x, \omega_y)]^3

Separable cubic B-spline:
    R_3(x, y) = R_0(x, y) \circledast R_2(x, y),   \mathcal{R}_3(\omega_x, \omega_y) = [\mathcal{R}_0(\omega_x, \omega_y)]^4

Gaussian:
    R(x, y) = [2\pi\sigma_w^2]^{-1} \exp\left\{ -\frac{x^2 + y^2}{2\sigma_w^2} \right\},   \mathcal{R}(\omega_x, \omega_y) = \exp\left\{ -\frac{\sigma_w^2(\omega_x^2 + \omega_y^2)}{2} \right\}


4.3.3. Effect of Imperfect Reconstruction Filters

The performance of practical image reconstruction systems will now be analyzed. It
will be assumed that the input to the image reconstruction system is composed of
samples of an ideal image obtained by sampling with a finite array of Dirac
samples at the Nyquist rate. From Eq. 4.1-9 the reconstructed image is found to be

    F_R(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\Delta x, k\Delta y) R(x - j\Delta x, y - k\Delta y)                    (4.3-7)




                         FIGURE 4.3-4. Two-dimensional linear interpolation.



where R(x, y) is the two-dimensional interpolation function of the image reconstruc-
tion system. Ideally, the reconstructed image would be the exact replica of the ideal
image as obtained from Eq. 4.1-9. That is,

    \hat{F}_R(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\Delta x, k\Delta y) R_I(x - j\Delta x, y - k\Delta y)                    (4.3-8)


where R I ( x, y ) represents an optimum interpolation function such as given by Eq.
4.1-14 or 4.1-16. The reconstruction error over the bounds of the sampled image is
then

    E_D(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\Delta x, k\Delta y) [R(x - j\Delta x, y - k\Delta y) - R_I(x - j\Delta x, y - k\Delta y)]                    (4.3-9)


    There are two contributors to the reconstruction error: (1) the physical system
interpolation function R(x, y) may differ from the ideal interpolation function
R_I(x, y), and (2) the finite bounds of the reconstruction cause truncation of
the interpolation functions at the boundary. In most sampled imaging systems, the
boundary reconstruction error is ignored because the error generally becomes negli-
gible at distances of a few samples from the boundary. The utilization of nonideal
interpolation functions leads to a potential loss of image resolution and to the intro-
duction of high-spatial-frequency artifacts.
    The effect of an imperfect reconstruction filter may be analyzed conveniently by
examination of the frequency spectrum of a reconstructed image, as derived in Eq.
4.1-11:

    F_R(\omega_x, \omega_y) = \frac{1}{\Delta x \, \Delta y} R(\omega_x, \omega_y) \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(\omega_x - j\omega_{xs}, \omega_y - k\omega_{ys})                    (4.3-10)

Ideally, R ( ω x, ω y ) should select the spectral component for j = 0, k = 0 with uniform
attenuation at all spatial frequencies and should reject all other spectral components.
An imperfect filter may attenuate the frequency components of the zero-order spec-
tra, causing a loss of image resolution, and may also permit higher-order spectral
modes to contribute to the restoration, and therefore introduce distortion in the resto-
ration. Figure 4.3-5 provides a graphic example of the effect of an imperfect image
reconstruction filter. A typical cross section of a sampled image is shown in Figure
4.3-5a. With an ideal reconstruction filter employing sinc functions for interpola-
tion, the central image spectrum is extracted and all sidebands are rejected, as shown
in Figure 4.3-5c. Figure 4.3-5d is a plot of the transfer function for a zero-order
interpolation reconstruction filter in which the reconstructed pixel amplitudes over
the pixel sample area are set at the sample value. The resulting spectrum shown in
Figure 4.3-5e exhibits distortion from attenuation of the central spectral mode and
spurious high-frequency signal components.
    Following the analysis leading to Eq. 4.2-21, the resolution loss resulting from
the use of a nonideal reconstruction function R(x, y) may be specified quantitatively
as


    \mathcal{E}_R = \frac{E_{RM} - E_R}{E_{RM}}                    (4.3-11)




FIGURE 4.3-5. Power spectra for perfect and imperfect reconstruction: (a) sampled image
input W_{F_I}(ω_x, 0); (b) sinc function reconstruction filter transfer function R(ω_x, 0);
(c) sinc function interpolator output W_{F_O}(ω_x, 0); (d) zero-order interpolation
reconstruction filter transfer function R(ω_x, 0); (e) zero-order interpolator output
W_{F_O}(ω_x, 0).


where

    E_R = \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} W_{F_I}(\omega_x, \omega_y) |R(\omega_x, \omega_y)|^2 \, d\omega_x \, d\omega_y                    (4.3-12)


represents the actual interpolated image energy in the Nyquist sampling band limits,
and

    E_{RM} = \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} W_{F_I}(\omega_x, \omega_y) \, d\omega_x \, d\omega_y                    (4.3-13)


is the ideal interpolated image energy. The interpolation error attributable to high-
spatial-frequency artifacts may be defined as

    \mathcal{E}_H = \frac{E_H}{E_T}                    (4.3-14)

where


    E_T = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} W_{F_I}(\omega_x, \omega_y) |R(\omega_x, \omega_y)|^2 \, d\omega_x \, d\omega_y                    (4.3-15)



denotes the total energy of the interpolated image and


                                                        EH = ET – ER                                            (4.3-16)


is that portion of the interpolated image energy lying outside the Nyquist band lim-
its.
     Table 4.3-2 lists the resolution error and interpolation error obtained with several
separable two-dimensional interpolation functions. In this example, the power spec-
tral density of the ideal image is assumed to be of the form


    W_{F_I}(\omega_x, \omega_y) = \left(\frac{\omega_s}{2}\right)^2 - \omega^2 \qquad \text{for } \omega \le \frac{\omega_s}{2}                    (4.3-17)


and zero elsewhere. The interpolation error contribution of highest-order
components, j_1, j_2 > 2, is assumed negligible.

TABLE 4.3-2. Interpolation Error and Resolution Error for Various Separable Two-
Dimensional Interpolation Functions

                          Percent Resolution Error    Percent Interpolation Error
Function                        \mathcal{E}_R                \mathcal{E}_H
Sinc                                  0.0                          0.0
Square                               26.9                         15.7
Triangle                             44.0                          3.7
Bell                                 55.4                          1.1
Cubic B-spline                       63.2                          0.3
Gaussian, σ_w = 3T/8                 38.6                         10.3
Gaussian, σ_w = T/2                  54.6                          2.0
Gaussian, σ_w = 5T/8                 66.7                          0.3

The table indicates that zero-order interpolation with a square interpolation function
results in a significant amount of resolution error. Interpolation error reduces
significantly for higher-order convolutional interpolation functions, but at the
expense of resolution error.
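
The error measures of Eqs. 4.3-11 through 4.3-16 can be approximated by numerical
integration, as in the sketch below. The test spectrum (the model of Eq. 4.2-30 rather
than Eq. 4.3-17) and the grid parameters are assumptions, so the numbers will not
reproduce Table 4.3-2; the sketch only illustrates how the measures are computed.

    import numpy as np

    def reconstruction_errors(W, R, wxs, wys, grid=512, extent=4.0):
        """Estimate resolution error (Eq. 4.3-11) and interpolation error (Eq. 4.3-14)."""
        wx = np.linspace(-extent * wxs, extent * wxs, grid)
        wy = np.linspace(-extent * wys, extent * wys, grid)
        WX, WY = np.meshgrid(wx, wy, indexing="ij")
        dA = (wx[1] - wx[0]) * (wy[1] - wy[0])
        spectrum = W(WX, WY) * np.abs(R(WX, WY)) ** 2
        in_band = (np.abs(WX) <= wxs / 2) & (np.abs(WY) <= wys / 2)

        E_R = np.sum(spectrum[in_band]) * dA        # Eq. 4.3-12
        E_RM = np.sum(W(WX, WY)[in_band]) * dA      # Eq. 4.3-13
        E_T = np.sum(spectrum) * dA                 # Eq. 4.3-15
        return (E_RM - E_R) / E_RM, (E_T - E_R) / E_T   # Eqs. 4.3-11, 4.3-14, 4.3-16

    # Example: separable square (zero-order) interpolation, whose transfer function is
    # the sinc-by-sinc product of Table 4.3-1, with an assumed Eq. 4.2-30 test spectrum.
    wxs = wys = 2 * np.pi
    R0 = lambda wx, wy: np.sinc(wx / wxs) * np.sinc(wy / wys)
    W = lambda wx, wy: 1.0 / (1.0 + (np.hypot(wx, wy) / (wxs / 4)) ** 6)
    print(reconstruction_errors(W, R0, wxs, wys))
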


REFERENCES

 1. E. T. Whittaker, “On the Functions Which Are Represented by the Expansions of the
    Interpolation Theory,” Proc. Royal Society of Edinburgh, A35, 1915, 181–194.
 2. C. E. Shannon, “Communication in the Presence of Noise,” Proc. IRE, 37, 1, January
    1949, 10–21.
 3. H. J. Landau, “Sampling, Data Transmission, and the Nyquist Rate,” Proc. IEEE, 55, 10,
    October 1967, 1701–1706.
 4. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
 5. A. Papoulis, Systems and Transforms with Applications in Optics, McGraw-Hill, New
    York, 1966.
 6. S. P. Lloyd, “A Sampling Theorem for Stationary (Wide Sense) Stochastic Processes,”
    Trans. American Mathematical Society, 92, 1, July 1959, 1–12.
 7. H. S. Shapiro and R. A. Silverman, “Alias-Free Sampling of Random Noise,” J. SIAM, 8,
    2, June 1960, 225–248.
 8. J. L. Brown, Jr., “Bounds for Truncation Error in Sampling Expansions of Band-Limited
    Signals,” IEEE Trans. Information Theory, IT-15, 4, July 1969, 440–444.
 9. H. D. Helms and J. B. Thomas, “Truncation Error of Sampling Theory Expansions,”
    Proc. IRE, 50, 2, February 1962, 179–184.
10. J. J. Downing, “Data Sampling and Pulse Amplitude Modulation,” in Aerospace Teleme-
    try, H. L. Stiltz, Ed., Prentice Hall, Englewood Cliffs, NJ, 1961.


11. D. G. Childers, “Study and Experimental Investigation on Sampling Rate and Aliasing in
    Time Division Telemetry Systems,” IRE Trans. Space Electronics and Telemetry, SET-8,
    December 1962, 267–283.
12. E. L. O'Neill, Introduction to Statistical Optics, Addison-Wesley, Reading, MA, 1963.
13. H. S. Hou and H. C. Andrews, “Cubic Splines for Image Interpolation and Digital Filter-
    ing,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-26, 6, December
    1978, 508–517.
14. T. N. E. Greville, “Introduction to Spline Functions,” in Theory and Applications of
    Spline Functions, T. N. E. Greville, Ed., Academic Press, New York, 1969.
15. S. S. Rifman, “Digital Rectification of ERTS Multispectral Imagery,” Proc. Symposium
    on Significant Results Obtained from ERTS-1 (NASA SP-327), I, Sec. B, 1973, 1131–
    1142.
16. R. Bernstein, “Digital Image Processing of Earth Observation Sensor Data,” IBM J.
    Research and Development, 20, 1976, 40–57.
17. R. G. Keys, “Cubic Convolution Interpolation for Digital Image Processing,” IEEE
    Trans. Acoustics, Speech, and Signal Processing, ASSP-29, 6, December 1981, 1153–
    1160.
18. K. W. Simon, “Digital Image Reconstruction and Resampling of Landsat Imagery,”
    Proc. Symposium on Machine Processing of Remotely Sensed Data, Purdue University,
    Lafayette, IN, IEEE 75, CH 1009-0-C, June 1975, 3A-1–3A-11.
19. S. K. Park and R. A. Schowengerdt, “Image Reconstruction by Parametric Cubic Convo-
    lution,” Computer Vision, Graphics, and Image Processing, 23, 3, September 1983, 258–
    272.




5
DISCRETE IMAGE MATHEMATICAL
CHARACTERIZATION




Chapter 1 presented a mathematical characterization of continuous image fields.
This chapter develops a vector-space algebra formalism for representing discrete
image fields from a deterministic and statistical viewpoint. Appendix 1 presents a
summary of vector-space algebra concepts.


5.1. VECTOR-SPACE IMAGE REPRESENTATION

In Chapter 1 a generalized continuous image function F(x, y, t) was selected to
represent the luminance, tristimulus value, or some other appropriate measure of a
physical imaging system. Image sampling techniques, discussed in Chapter 4,
indicated means by which a discrete array F(j, k) could be extracted from the contin-
uous image field at some time instant over some rectangular area – J ≤ j ≤ J ,
– K ≤ k ≤ K. It is often helpful to regard this sampled image array as an N_1 × N_2
element matrix


                                  F = [ F ( n 1, n 2 ) ]                      (5.1-1)


for 1 ≤ n_i ≤ N_i, where the indices of the sampled array are reindexed for consistency
with standard vector-space notation. Figure 5.1-1 illustrates the geometric relation-
ship between the Cartesian coordinate system of a continuous image and its array of
samples. Each image sample is called a pixel.





FIGURE 5.1-1. Geometric relationship between a continuous image and its array of
samples.



   For purposes of analysis, it is often convenient to convert the image matrix to
vector form by column (or row) scanning F, and then stringing the elements together
in a long vector (1). An equivalent scanning operation can be expressed in quantita-
tive form by the use of an N_2 × 1 operational vector v_n and an N_1 N_2 × N_1 matrix N_n
defined as


    v_n = [0, …, 0, 1, 0, …, 0]^T        N_n = [0, …, 0, I, 0, …, 0]^T                    (5.1-2)

where the single unity element of v_n occupies its nth position, and where the submatrices
of N_n are N_1 × N_1 blocks, all zero except for the N_1 × N_1 identity matrix I in the nth
block position.


Then the vector representation of the image matrix F is given by the stacking opera-
tion

                                            N2
                                      f =   ∑     N n Fv n                      (5.1-3)
                                            n=1


In essence, the vector v n extracts the nth column from F and the matrix N n places
this column into the nth segment of the vector f. Thus, f contains the column-
scanned elements of F. The inverse relation of casting the vector f into matrix form
is obtained from
    F = \sum_{n=1}^{N_2} N_n^T f v_n^T                    (5.1-4)

With the matrix-to-vector operator of Eq. 5.1-3 and the vector-to-matrix operator of
Eq. 5.1-4, it is now possible easily to convert between vector and matrix representa-
tions of a two-dimensional array. The advantages of dealing with images in vector
form are a more compact notation and the ability to apply results derived previously
for one-dimensional signal processing applications. It should be recognized that Eqs
5.1-3 and 5.1-4 represent more than a lexicographic ordering between an array and a
vector; these equations define mathematical operators that may be manipulated ana-
lytically. Numerous examples of the applications of the stacking operators are given
in subsequent sections.
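
In array terms, the stacking operation of Eq. 5.1-3 is column scanning and Eq. 5.1-4 is
its inverse. A brief sketch follows; NumPy's order="F" convention matches column scanning,
and the function names are assumptions.

    import numpy as np

    def stack(F):
        """Column-scan an N1 x N2 image matrix into an N1*N2 vector (Eq. 5.1-3)."""
        return F.flatten(order="F")

    def unstack(f, n1, n2):
        """Recover the image matrix from its column-scanned vector (Eq. 5.1-4)."""
        return f.reshape((n1, n2), order="F")

    F = np.arange(6.0).reshape(2, 3)
    f = stack(F)
    assert np.array_equal(unstack(f, 2, 3), F)
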


5.2. GENERALIZED TWO-DIMENSIONAL LINEAR OPERATOR

A large class of image processing operations are linear in nature; an output image
field is formed from linear combinations of pixels of an input image field. Such
operations include superposition, convolution, unitary transformation, and discrete
linear filtering.
    Consider the N 1 × N 2 element input image array F ( n1, n 2 ). A generalized linear
operation on this image field results in a M 1 × M 2 output image array P ( m 1, m 2 ) as
defined by

                                       N1      N2
                   P ( m 1, m 2 ) =   ∑ ∑             F ( n 1, n 2 )O ( n 1, n 2 ; m 1, m2 )   (5.2-1)
                                      n1 = 1 n 2= 1


where the operator kernel O ( n 1, n 2 ; m 1, m 2 ) represents a weighting constant, which,
in general, is a function of both input and output image coordinates (1).
   For the analysis of linear image processing operations, it is convenient to adopt
the vector-space formulation developed in Section 5.1. Thus, let the input image
array F ( n1, n 2 ) be represented as matrix F or alternatively, as a vector f obtained by
column scanning F. Similarly, let the output image array P ( m1, m2 ) be represented
by the matrix P or the column-scanned vector p. For notational simplicity, in the
subsequent discussions, the input and output image arrays are assumed to be square
and of dimensions N_1 = N_2 = N and M_1 = M_2 = M, respectively. Now, let T
denote the M^2 × N^2 matrix performing a linear transformation on the N^2 × 1 input
image vector f, yielding the M^2 × 1 output image vector


                                                  p = Tf                                       (5.2-2)


The matrix T may be partitioned into M × N submatrices T mn and written as


    T = \begin{bmatrix}
        T_{11} & T_{12} & \cdots & T_{1N} \\
        T_{21} & T_{22} & \cdots & T_{2N} \\
        \vdots & \vdots &        & \vdots \\
        T_{M1} & T_{M2} & \cdots & T_{MN}
    \end{bmatrix}                    (5.2-3)



From Eq. 5.1-3, it is possible to relate the output image vector p to the input image
matrix F by the equation

                                              N
                                   p =       ∑      TN n Fv n                 (5.2-4)
                                             n =1


Furthermore, from Eq. 5.1-4, the output image matrix P is related to the input image
vector p by

    P = \sum_{m=1}^{M} M_m^T p u_m^T                    (5.2-5)


Combining the above yields the relation between the input and output image matri-
ces,

    P = \sum_{m=1}^{M} \sum_{n=1}^{N} (M_m^T T N_n) F (v_n u_m^T)                    (5.2-6)


where it is observed that the operators Mm and N n simply extract the partition T mn
from T. Hence

    P = \sum_{m=1}^{M} \sum_{n=1}^{N} T_{mn} F (v_n u_m^T)                    (5.2-7)


   If the linear transformation is separable such that T may be expressed in the
direct product form


                                     T = TC ⊗ TR                              (5.2-8)




                 FIGURE 5.2-1. Structure of linear operator matrices.




where T R and T C are row and column operators on F, then


                                 T mn = T R ( m, n )T C                       (5.2-9)


As a consequence,


    P = T_C F \sum_{m=1}^{M} \sum_{n=1}^{N} T_R(m, n) v_n u_m^T = T_C F T_R^T                    (5.2-10)



Hence the output image matrix P can be produced by sequential row and column
operations.
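
A quick numerical check of Eq. 5.2-10 is sketched below: applying the column and row
operators sequentially agrees with a single operation on the stacked image vector. With
NumPy's kron and column scanning, the equivalent stacked operator is np.kron(T_R, T_C);
the book's direct-product ordering convention may differ, so the Kronecker ordering here
is an assumption of the sketch, as are the array sizes.

    import numpy as np

    rng = np.random.default_rng(0)
    N, M = 4, 3
    F = rng.standard_normal((N, N))
    T_C = rng.standard_normal((M, N))     # column operator
    T_R = rng.standard_normal((M, N))     # row operator

    P = T_C @ F @ T_R.T                   # sequential column and row processing
    f = F.flatten(order="F")              # column-scanned input image vector
    p = np.kron(T_R, T_C) @ f             # single stacked operation, p = T f
    assert np.allclose(p, P.flatten(order="F"))
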
   In many image processing applications, the linear transformation operator T is
highly structured, and computational simplifications are possible. Special cases of
interest are listed below and illustrated in Figure 5.2-1 for the case in which the
input and output images are of the same dimension, M = N .


   1. Column processing of F:


                              T = diag [ T C1, T C2, …, T CN ]                      (5.2-11)


   where T Cj is the transformation matrix for the jth column.
   2. Identical column processing of F:


                         T = diag [ T C, T C, …, T C ] = T C ⊗ I N                  (5.2-12)


   3. Row processing of F:


                  T mn = diag [ T R1 ( m, n ), T R2 ( m, n ), …, T RN ( m, n ) ]    (5.2-13)


   where T Rj is the transformation matrix for the jth row.
   4. Identical row processing of F:


                    T mn = diag [ T R ( m, n ), T R ( m, n ), …, T R ( m, n ) ]    (5.2-14a)


   and


                                        T = IN ⊗ TR                                (5.2-14b)


   5. Identical row and identical column processing of F:


                                  T = TC ⊗ I N + I N ⊗ T R                          (5.2-15)


The number of computational operations for each of these cases is tabulated in Table
5.2-1.
   Equation 5.2-10 indicates that separable two-dimensional linear transforms can
be computed by sequential one-dimensional row and column operations on a data
array. As indicated by Table 5.2-1, a considerable savings in computation is possible
for such transforms: computation by Eq. 5.2-2 in the general case requires M^2 N^2
operations; computation by Eq. 5.2-10, when it applies, requires only MN^2 + M^2 N
operations. Furthermore, F may be stored in a serial memory and fetched line by
line. With this technique, however, it is necessary to transpose the result of the col-
umn transforms in order to perform the row transforms. References 2 and 3 describe
algorithms for line storage matrix transposition.

TABLE 5.2-1. Computational Requirements for Linear Transform Operator

Case                                                     Operations (Multiply and Add)
General                                                              N^4
Column processing                                                    N^3
Row processing                                                       N^3
Row and column processing                                         2N^3 – N^2
Separable row and column processing, matrix form                    2N^3



5.3. IMAGE STATISTICAL CHARACTERIZATION

The statistical descriptors of continuous images presented in Chapter 1 can be
applied directly to characterize discrete images. In this section, expressions are
developed for the statistical moments of discrete image arrays. Joint probability
density models for discrete image fields are described in the following section. Ref-
erence 4 provides background information for this subject.
   The moments of a discrete image process may be expressed conveniently in
vector-space form. The mean value of the discrete image function is a matrix of the
form


                                              E { F } = [ E { F ( n 1, n 2 ) } ]                                        (5.3-1)


If the image array is written as a column-scanned vector, the mean of the image vec-
tor is

                    \eta_f = E\{f\} = \sum_{n=1}^{N_2} N_n E\{F\} v_n                (5.3-2)


The correlation function of the image array is given by


                                R ( n 1, n 2 ; n 3 , n 4 ) = E { F ( n 1, n 2 )F∗ ( n 3, n 4 ) }                        (5.3-3)


where the n i represent points of the image array. Similarly, the covariance function
of the image array is


  K ( n 1, n 2 ; n 3 , n 4) = E { [ F ( n 1, n 2 ) – E { F ( n 1, n 2 ) } ] [ F∗ ( n 3, n 4 ) – E { F∗ ( n 3, n 4 ) } ] }
                                                                                                                        (5.3-4)


Finally, the variance function of the image array is obtained directly from the cova-
riance function as

                    \sigma^2(n_1, n_2) = K(n_1, n_2; n_1, n_2)                       (5.3-5)


If the image array is represented in vector form, the correlation matrix of f can be
written in terms of the correlation of elements of F as
      R_f = E\{f f^{*T}\} = E\left\{ \left[ \sum_{m=1}^{N_2} N_m F v_m \right] \left[ \sum_{n=1}^{N_2} v_n^T F^{*T} N_n^T \right] \right\}      (5.3-6a)

or

      R_f = \sum_{m=1}^{N_2} \sum_{n=1}^{N_2} N_m E\{ F v_m v_n^T F^{*T} \} N_n^T                (5.3-6b)

The term

                         E\{ F v_m v_n^T F^{*T} \} = R_{mn}                          (5.3-7)

is the N_1 × N_1 correlation matrix of the mth and nth columns of F. Hence it is possi-
ble to express R f in partitioned form as


            R_f = \begin{bmatrix}
              R_{11}    & R_{12}    & \cdots & R_{1 N_2}   \\
              R_{21}    & R_{22}    & \cdots & R_{2 N_2}   \\
              \vdots    & \vdots    &        & \vdots      \\
              R_{N_2 1} & R_{N_2 2} & \cdots & R_{N_2 N_2}
            \end{bmatrix}                                                            (5.3-8)


The covariance matrix of f can be found from its correlation matrix and mean vector
by the relation

                              K_f = R_f - \eta_f \eta_f^{*T}                         (5.3-9)


A variance matrix V F of the array F ( n1, n 2 ) is defined as a matrix whose elements
represent the variances of the corresponding elements of the array. The elements of
this matrix may be extracted directly from the covariance matrix partitions of K f .
That is,

                       V_F(n_1, n_2) = K_{n_2, n_2}(n_1, n_1)                        (5.3-10)
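
As an illustration, a short sketch (NumPy assumed; the synthetic ensemble and array
names are hypothetical) of estimating these vector-space moments from a collection
of column-scanned sample images:

    import numpy as np

    # Ensemble estimates of the mean vector (Eq. 5.3-2), correlation matrix
    # (Eq. 5.3-6), covariance matrix (Eq. 5.3-9) and variance array (Eq. 5.3-10).
    rng = np.random.default_rng(1)
    N1, N2, n_samples = 4, 4, 5000
    images = rng.standard_normal((n_samples, N1, N2))

    # Column-scan each N1 x N2 array into a Q = N1*N2 element vector.
    f = np.stack([img.flatten(order='F') for img in images])   # (n_samples, Q)

    eta_f = f.mean(axis=0)                            # mean vector
    R_f = f.T @ f / n_samples                         # correlation matrix E{f f^T}
    K_f = R_f - np.outer(eta_f, eta_f)                # covariance matrix
    V_F = np.diag(K_f).reshape((N1, N2), order='F')   # variance array V_F(n1, n2)
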




   If the image matrix F is wide-sense stationary, the correlation function can be
expressed as


                   R ( n 1, n 2 ; n3, n 4 ) = R ( n 1 – n3 , n 2 – n 4 ) = R ( j, k )                         (5.3-11)


where j = n 1 – n 3 and k = n 2 – n 4. Correspondingly, the covariance matrix parti-
tions of Eq. 5.3-9 are related by


                      K_{mn} = K_k                        m \ge n                   (5.3-12a)

                      K_{mn}^* = K_k^*                    m < n                     (5.3-12b)


where k = m – n + 1. Hence, for a wide-sense-stationary image array


            K_f = \begin{bmatrix}
              K_1        & K_2         & \cdots & K_{N_2}     \\
              K_2^*      & K_1         & \cdots & K_{N_2 - 1} \\
              \vdots     & \vdots      &        & \vdots      \\
              K_{N_2}^*  & K_{N_2-1}^* & \cdots & K_1
            \end{bmatrix}                                                            (5.3-13)




The matrix of Eq. 5.3-13 is of block Toeplitz form (5). Finally, if the covariance
between elements is separable into the product of row and column covariance func-
tions, then the covariance matrix of the image vector can be expressed as the direct
product of row and column covariance matrices. Under this condition



    K_f = K_C \otimes K_R = \begin{bmatrix}
      K_R(1, 1) K_C   & K_R(1, 2) K_C   & \cdots & K_R(1, N_2) K_C   \\
      K_R(2, 1) K_C   & K_R(2, 2) K_C   & \cdots & K_R(2, N_2) K_C   \\
      \vdots          & \vdots          &        & \vdots            \\
      K_R(N_2, 1) K_C & K_R(N_2, 2) K_C & \cdots & K_R(N_2, N_2) K_C
    \end{bmatrix}                                                                    (5.3-14)


where K_C is an N_1 × N_1 covariance matrix of each column of F and K_R is an N_2 × N_2
covariance matrix of the rows of F.


   As a special case, consider the situation in which adjacent pixels along an image
row have a correlation of ρ_R (0.0 ≤ ρ_R ≤ 1.0) and a self-correlation of unity. Then the
covariance matrix reduces to



            K_R = \sigma_R^2 \begin{bmatrix}
              1                & \rho_R           & \cdots & \rho_R^{N_2 - 1} \\
              \rho_R           & 1                & \cdots & \rho_R^{N_2 - 2} \\
              \vdots           & \vdots           &        & \vdots           \\
              \rho_R^{N_2 - 1} & \rho_R^{N_2 - 2} & \cdots & 1
            \end{bmatrix}                                                            (5.3-15)




FIGURE 5.3-1. Covariance measurements of the smpte_girl_luminance mono-
chrome image.




            FIGURE 5.3-2. Photograph of smpte_girl_luminance image.



where σ_R^2 denotes the variance of pixels along a row. This is an example of the
covariance matrix of a Markov process, analogous to the continuous autocovariance
function exp(−α|x|). Figure 5.3-1 contains a plot by Davisson (6) of the measured
covariance of pixels along an image line of the monochrome image of Figure 5.3-2.
The data points can be fit quite well with a Markov covariance function with
ρ = 0.953 . Similarly, the covariance between lines can be modeled well with a
Markov covariance function with ρ = 0.965 . If the horizontal and vertical covari-
ances were exactly separable, the covariance function for pixels along the image
diagonal would be equal to the product of the horizontal and vertical axis covariance
functions. In this example, the approximation was found to be reasonably accurate
for up to five pixel separations.
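
A brief sketch (NumPy assumed; the array sizes are arbitrary) of constructing the
Markov covariance matrix of Eq. 5.3-15 and the separable covariance of Eq. 5.3-14,
using the measured correlation values quoted above:

    import numpy as np

    # First-order Markov covariance matrix (Eq. 5.3-15) and the separable
    # covariance of the column-scanned image vector (Eq. 5.3-14).
    def markov_covariance(n, rho, variance=1.0):
        idx = np.arange(n)
        return variance * rho ** np.abs(idx[:, None] - idx[None, :])

    N1, N2 = 16, 16
    K_C = markov_covariance(N1, rho=0.965)   # covariance along a column (between lines)
    K_R = markov_covariance(N2, rho=0.953)   # covariance along a row

    # With NumPy's column-major scanning the factors appear as kron(K_R, K_C),
    # which corresponds to K_C (x) K_R in the text's notation.
    K_f = np.kron(K_R, K_C)                  # (N1*N2) x (N1*N2)
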
   The discrete power-spectral density of a discrete image random process may be
defined, in analogy with the continuous power spectrum of Eq. 1.4-13, as the two-
dimensional discrete Fourier transform of its stationary autocorrelation function.
Thus, from Eq. 5.3-11

      W(u, v) = \sum_{j=0}^{N_1 - 1} \sum_{k=0}^{N_2 - 1} R(j, k) \exp\left\{ -2\pi i \left( \frac{ju}{N_1} + \frac{kv}{N_2} \right) \right\}        (5.3-16)



Figure 5.3-3 shows perspective plots of the power-spectral densities for separable
and circularly symmetric Markov processes.
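
As a sketch of Eq. 5.3-16 (NumPy assumed; the separable Markov parameters are
illustrative), the power spectral density can be evaluated with the two-dimensional
FFT, which computes exactly the double sum above:

    import numpy as np

    # Discrete power spectral density of a separable Markov autocorrelation.
    N1, N2 = 256, 256
    rho_row, rho_col = 0.953, 0.965
    j = np.arange(N1)[:, None]
    k = np.arange(N2)[None, :]
    R = (rho_col ** j) * (rho_row ** k)            # stationary autocorrelation R(j, k)

    W = np.fft.fft2(R)                             # W(u, v) of Eq. 5.3-16
    log_magnitude = np.log10(np.abs(W) + 1e-12)    # log magnitude display, as in Figure 5.3-3
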




FIGURE 5.3-3. Power spectral densities of Markov process sources; N = 256, log magnitude
displays: (a) separable; (b) circularly symmetric.



5.4. IMAGE PROBABILITY DENSITY MODELS

A discrete image array F ( n1, n 2 ) can be completely characterized statistically by its
joint probability density, written in matrix form as

                           p ( F ) ≡ p { F ( 1, 1 ), F ( 2, 1 ), …, F ( N 1, N 2 ) }                   (5.4-1a)


or in corresponding vector form as


                                     p ( f ) ≡ p { f ( 1 ), f ( 2 ), …, f ( Q ) }                      (5.4-1b)


where Q = N 1 ⋅ N 2 is the order of the joint density. If all pixel values are statistically
independent, the joint density factors into the product


                                 p ( f ) ≡ p { f ( 1 ) }p { f ( 2 ) }…p { f ( Q ) }                     (5.4-2)


of its first-order marginal densities.
   The most common joint probability density is the joint Gaussian, which may be
expressed as

      p(f) = (2\pi)^{-Q/2} |K_f|^{-1/2} \exp\left\{ -\frac{1}{2} (f - \eta_f)^T K_f^{-1} (f - \eta_f) \right\}        (5.4-3)

where K_f is the covariance matrix of f, η_f is the mean of f, and |K_f| denotes the
determinant of K_f. The joint Gaussian density is useful as a model for the density of
unitary transform coefficients of an image. However, the Gaussian density is not an
adequate model for the luminance values of an image because luminance is a posi-
tive quantity and the Gaussian variables are bipolar.
    Expressions for joint densities, other than the Gaussian density, are rarely found
in the literature. Huhns (7) has developed a technique of generating high-order den-
sities in terms of specified first-order marginal densities and a specified covariance
matrix between the ensemble elements.
    In Chapter 6, techniques are developed for quantizing variables to some discrete
set of values called reconstruction levels. Let r_{j_q}(q) denote the reconstruction level
of the pixel at vector coordinate (q). Then the probability of occurrence of the possi-
ble states of the image vector can be written in terms of the joint probability distri-
bution as


      P(f) = p\{f(1) = r_{j_1}(1)\}\, p\{f(2) = r_{j_2}(2)\} \cdots p\{f(Q) = r_{j_Q}(Q)\}        (5.4-4)




where 0 ≤ j_q ≤ J – 1. Normally, the reconstruction levels are set identically for
each vector component and the joint probability distribution reduces to


         P(f) = p\{f(1) = r_{j_1}\}\, p\{f(2) = r_{j_2}\} \cdots p\{f(Q) = r_{j_Q}\}        (5.4-5)


Probability distributions of image values can be estimated by histogram measure-
ments. For example, the first-order probability distribution


                              P[f(q)] = P_R[f(q) = r_j]                              (5.4-6)


of the amplitude value at vector coordinate q can be estimated by examining a large
collection of images representative of a given image class (e.g., chest x-rays, aerial
scenes of crops). The first-order histogram estimate of the probability distribution is
the frequency ratio

                              H_E(j; q) = \frac{N_p(j)}{N_p}                         (5.4-7)


where N p represents the total number of images examined and N p ( j ) denotes the
number for which f ( q ) = r j for j = 0, 1,..., J – 1. If the image source is statistically
stationary, the first-order probability distribution of Eq. 5.4-6 will be the same for all
vector components q. Furthermore, if the image source is ergodic, ensemble aver-
ages (measurements over a collection of pictures) can be replaced by spatial aver-
ages. Under the ergodic assumption, the first-order probability distribution can be
estimated by measurement of the spatial histogram

                              H_S(j) = \frac{N_S(j)}{Q}                              (5.4-8)


where N S ( j ) denotes the number of pixels in an image for which f ( q ) = r j for
1 ≤ q ≤ Q and 0 ≤ j ≤ J – 1. For example, for an image with 256 gray levels, N_S(j)
denotes the number of pixels possessing gray level j for 0 ≤ j ≤ 255.
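
A minimal sketch of the spatial histogram measurement of Eq. 5.4-8, assuming NumPy
and a hypothetical 256-level test image:

    import numpy as np

    # Spatial histogram estimate H_S(j) for an image quantized to J gray levels.
    def first_order_histogram(image, J=256):
        N_S = np.bincount(image.ravel().astype(np.int64), minlength=J)[:J]
        return N_S / image.size               # H_S(j) = N_S(j) / Q

    rng = np.random.default_rng(2)
    image = rng.integers(0, 256, size=(512, 512))
    H_S = first_order_histogram(image)
    assert np.isclose(H_S.sum(), 1.0)
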
   Figure 5.4-1 shows first-order histograms of the red, green, and blue components
of a color image. Most natural images possess many more dark pixels than bright
pixels, and their histograms tend to fall off exponentially at higher luminance levels.
   Estimates of the second-order probability distribution for ergodic image sources
can be obtained by measurement of the second-order spatial histogram, which is a
measure of the joint occurrence of pairs of pixels separated by a specified distance.
With reference to Figure 5.4-2, let F ( n1, n 2 ) and F ( n3, n 4 ) denote a pair of pixels
separated by r radial units at an angle θ with respect to the horizontal axis. As a
consequence of the rectilinear grid, the separation parameters may only assume cer-
tain discrete values. The second-order spatial histogram is then the frequency ratio


                       H_S(j_1, j_2; r, \theta) = \frac{N_S(j_1, j_2)}{Q_T}          (5.4-9)




FIGURE 5.4-1. Histograms of the red, green and blue components of the
smpte_girl_linear color image.




where N_S(j_1, j_2) denotes the number of pixel pairs for which F(n_1, n_2) = r_{j_1} and
F(n_3, n_4) = r_{j_2}. The factor Q_T in the denominator of Eq. 5.4-9 represents the total
number of pixels lying in an image region for which the separation is (r, θ). Because
of boundary effects, Q_T < Q.
    Second-order spatial histograms of a monochrome image are presented in Figure
5.4-3 as a function of pixel separation distance and angle. As the separation
increases, the pairs of pixels become less correlated and the histogram energy tends
to spread more uniformly about the plane.
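
A sketch of the second-order spatial histogram measurement of Eq. 5.4-9 for a
horizontal one-pixel separation (NumPy assumed; the helper name and test image are
hypothetical):

    import numpy as np

    # Second-order spatial histogram for a pair separation of r = 1, theta = 0
    # (one pixel to the right), as in Figure 5.4-3.
    def second_order_histogram(image, dr=0, dc=1, J=256):
        # Pair F(n1, n2) with F(n1 + dr, n2 + dc); pixels without a partner at
        # the image boundary are excluded, so the pair count Q_T is less than Q.
        a = image[: image.shape[0] - dr, : image.shape[1] - dc].ravel()
        b = image[dr:, dc:].ravel()
        pairs = a.astype(np.int64) * J + b.astype(np.int64)
        N_S = np.bincount(pairs, minlength=J * J).reshape(J, J)
        return N_S / a.size                   # H_S(j1, j2; r, theta) = N_S / Q_T

    rng = np.random.default_rng(3)
    image = rng.integers(0, 256, size=(256, 256))
    H2 = second_order_histogram(image)
    assert np.isclose(H2.sum(), 1.0)
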




                 FIGURE 5.4-2. Geometric relationships of pixel pairs.




FIGURE 5.4-3. Second-order histogram of the smpte_girl_luminance monochrome
image; r = 1 and θ = 0 (axes j_1 and j_2).



5.5. LINEAR OPERATOR STATISTICAL REPRESENTATION

If an input image array is considered to be a sample of a random process with known
first and second-order moments, the first- and second-order moments of the output
image array can be determined for a given linear transformation. First, the mean of
the output image array is

      E\{P(m_1, m_2)\} = E\left\{ \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2) O(n_1, n_2; m_1, m_2) \right\}        (5.5-1a)

Because the expectation operator is linear,


      E\{P(m_1, m_2)\} = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} E\{F(n_1, n_2)\} O(n_1, n_2; m_1, m_2)        (5.5-1b)



The correlation function of the output image array is


            R_P(m_1, m_2; m_3, m_4) = E\{P(m_1, m_2) P^*(m_3, m_4)\}                 (5.5-2a)


or in expanded form


      R_P(m_1, m_2; m_3, m_4) = E\left\{ \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2) O(n_1, n_2; m_1, m_2)
            \times \sum_{n_3=1}^{N_1} \sum_{n_4=1}^{N_2} F^*(n_3, n_4) O^*(n_3, n_4; m_3, m_4) \right\}        (5.5-2b)


After multiplication of the series and performance of the expectation operation, one
obtains
      R_P(m_1, m_2; m_3, m_4) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} \sum_{n_3=1}^{N_1} \sum_{n_4=1}^{N_2} R_F(n_1, n_2; n_3, n_4) O(n_1, n_2; m_1, m_2)
            \times O^*(n_3, n_4; m_3, m_4)                                            (5.5-3)


where R F ( n 1, n 2 ; n 3 , n 4 ) represents the correlation function of the input image array.
In a similar manner, the covariance function of the output image is found to be


      K_P(m_1, m_2; m_3, m_4) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} \sum_{n_3=1}^{N_1} \sum_{n_4=1}^{N_2} K_F(n_1, n_2; n_3, n_4) O(n_1, n_2; m_1, m_2)
            \times O^*(n_3, n_4; m_3, m_4)                                            (5.5-4)


If the input and output image arrays are expressed in vector form, the formulation of
the moments of the transformed image becomes much more compact. The mean of
the output vector p is


                  \eta_p = E\{p\} = E\{Tf\} = T E\{f\} = T \eta_f                    (5.5-5)


and the correlation matrix of p is

            R_p = E\{p p^{*T}\} = E\{T f f^{*T} T^{*T}\} = T R_f T^{*T}              (5.5-6)


Finally, the covariance matrix of p is

                                 K_p = T K_f T^{*T}                                  (5.5-7)

Applications of this theory to superposition and unitary transform operators are
given in following chapters.
   A special case of the general linear transformation p = Tf , of fundamental
importance, occurs when the covariance matrix of Eq. 5.5-7 assumes the form

                              K_p = T K_f T^{*T} = \Lambda                           (5.5-8)


where Λ is a diagonal matrix. In this case, the elements of p are uncorrelated. From
Appendix A1.2, it is found that the transformation T, which produces the diagonal
matrix Λ , has rows that are eigenvectors of K f . The diagonal elements of Λ are the
corresponding eigenvalues of K f . This operation is called both a matrix diagonal-
ization and a principal components transformation.
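
As a numerical sketch of this diagonalization (NumPy assumed; the Markov covariance
serves only as a convenient example of K_f):

    import numpy as np

    # Principal components (decorrelating) transformation of Eq. 5.5-8: the rows
    # of T are eigenvectors of K_f, and K_p = T K_f T^T is diagonal.
    def markov_covariance(n, rho, variance=1.0):
        idx = np.arange(n)
        return variance * rho ** np.abs(idx[:, None] - idx[None, :])

    K_f = markov_covariance(16, rho=0.95)
    eigenvalues, eigenvectors = np.linalg.eigh(K_f)   # eigenvectors in columns
    T = eigenvectors.T                                # rows are eigenvectors of K_f

    K_p = T @ K_f @ T.T                               # diagonal matrix Lambda
    assert np.allclose(K_p, np.diag(eigenvalues), atol=1e-10)
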


REFERENCES

1.    W. K. Pratt, “Vector Formulation of Two Dimensional Signal Processing Operations,”
      Computer Graphics and Image Processing, 4, 1, March 1975, 1–24.
2.    J. O. Eklundh, “A Fast Computer Method for Matrix Transposing,” IEEE Trans. Com-
      puters, C-21, 7, July 1972, 801–803.
3.    R. E. Twogood and M. P. Ekstrom, “An Extension of Eklundh's Matrix Transposition
      Algorithm and Its Applications in Digital Image Processing,” IEEE Trans. Computers,
      C-25, 9, September 1976, 950–952.
4.    A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed.,
      McGraw-Hill, New York, 1991.

5.   U. Grenander and G. Szego, Toeplitz Forms and Their Applications, University of Cali-
     fornia Press, Berkeley, CA, 1958.
6.   L. D. Davisson, private communication.
7.   M. N. Huhns, “Optimum Restoration of Quantized Correlated Signals,” USCIPI Report
      600, University of Southern California, Image Processing Institute, Los Angeles, August
     1975.




6
IMAGE QUANTIZATION




Any analog quantity that is to be processed by a digital computer or digital system
must be converted to an integer number proportional to its amplitude. The conver-
sion process between analog samples and discrete-valued samples is called quanti-
zation. The following section includes an analytic treatment of the quantization
process, which is applicable not only for images but for a wide class of signals
encountered in image processing systems. Section 6.2 considers the processing of
quantized variables. The last section discusses the subjective effects of quantizing
monochrome and color images.


6.1. SCALAR QUANTIZATION

Figure 6.1-1 illustrates a typical example of the quantization of a scalar signal. In the
quantization process, the amplitude of an analog signal sample is compared to a set
of decision levels. If the sample amplitude falls between two decision levels, it is
quantized to a fixed reconstruction level lying in the quantization band. In a digital
system, each quantized sample is assigned a binary code. An equal-length binary
code is indicated in the example.
    For the development of quantitative scalar signal quantization techniques, let f
and \hat{f} represent the amplitude of a real, scalar signal sample and its quantized value,
respectively. It is assumed that f is a sample of a random process with known proba-
bility density p ( f ) . Furthermore, it is assumed that f is constrained to lie in the range

                                        aL ≤ f ≤ a U                                 (6.1-1)




FIGURE 6.1-1. Sample quantization (original sample, decision levels, binary code,
quantized sample, reconstruction levels).



where a U and a L represent upper and lower limits.
   Quantization entails specification of a set of decision levels d j and a set of recon-
struction levels r j such that if

                                            dj ≤ f < dj + 1                                          (6.1-2)

the sample is quantized to a reconstruction value r j . Figure 6.1-2a illustrates the
placement of decision and reconstruction levels along a line for J quantization lev-
els. The staircase representation of Figure 6.1-2b is another common form of
description.
   Decision and reconstruction levels are chosen to minimize some desired quanti-
zation error measure between f and \hat{f}. The quantization error measure usually
employed is the mean-square error because this measure is tractable, and it usually
correlates reasonably well with subjective criteria. For J quantization levels, the
mean-square quantization error is

      E = E\{(f - \hat{f})^2\} = \int_{a_L}^{a_U} (f - \hat{f})^2 p(f)\, df = \sum_{j=0}^{J-1} \int_{d_j}^{d_{j+1}} (f - r_j)^2 p(f)\, df        (6.1-3)




             FIGURE 6.1-2. Quantization decision and reconstruction levels.



For a large number of quantization levels J, the probability density may be repre-
sented as a constant value p ( r j ) over each quantization band. Hence

                E = \sum_{j=0}^{J-1} p(r_j) \int_{d_j}^{d_{j+1}} (f - r_j)^2\, df                  (6.1-4)
                                       j= 0

which evaluates to

            E = \frac{1}{3} \sum_{j=0}^{J-1} p(r_j) \left[ (d_{j+1} - r_j)^3 - (d_j - r_j)^3 \right]        (6.1-5)

The optimum placing of the reconstruction level r_j within the range d_j to d_{j+1} can
be determined by minimization of E with respect to r j . Setting


                                 \frac{dE}{dr_j} = 0                                 (6.1-6)


yields

                              r_j = \frac{d_{j+1} + d_j}{2}                          (6.1-7)


Therefore, the optimum placement of reconstruction levels is at the midpoint
between each pair of decision levels. Substitution for this choice of reconstruction
levels into the expression for the quantization error yields

                E = \frac{1}{12} \sum_{j=0}^{J-1} p(r_j) (d_{j+1} - d_j)^3                          (6.1-8)


The optimum choice for decision levels may be found by minimization of E in Eq.
6.1-8 by the method of Lagrange multipliers. Following this procedure, Panter and
Dite (1) found that the decision levels may be computed to a good approximation
from the integral equation


            d_j = \frac{(a_U - a_L) \int_{a_L}^{a_j} [p(f)]^{-1/3}\, df}{\int_{a_L}^{a_U} [p(f)]^{-1/3}\, df}        (6.1-9a)



where

                            a_j = \frac{j (a_U - a_L)}{J} + a_L                      (6.1-9b)

for j = 0, 1,..., J. If the probability density of the sample is uniform, the decision lev-
els will be uniformly spaced. For nonuniform probability densities, the spacing of
decision levels is narrow in large-amplitude regions of the probability density func-
tion and widens in low-amplitude portions of the density. Equation 6.1-9 does not
reduce to closed form for most probability density functions commonly encountered
in image processing systems models, and hence the decision levels must be obtained
by numerical integration.
    If the number of quantization levels is not large, the approximation of Eq. 6.1-4
becomes inaccurate, and exact solutions must be explored. From Eq. 6.1-3, setting
the partial derivatives of the error expression with respect to the decision and recon-
struction levels equal to zero yields


      \frac{\partial E}{\partial d_j} = (d_j - r_j)^2 p(d_j) - (d_j - r_{j-1})^2 p(d_j) = 0        (6.1-10a)

      \frac{\partial E}{\partial r_j} = 2 \int_{d_j}^{d_{j+1}} (f - r_j) p(f)\, df = 0             (6.1-10b)

Upon simplification, the set of equations

                                 r_j = 2 d_j - r_{j-1}                               (6.1-11a)

                 r_j = \frac{\int_{d_j}^{d_{j+1}} f\, p(f)\, df}{\int_{d_j}^{d_{j+1}} p(f)\, df}        (6.1-11b)


is obtained. Recursive solution of these equations for a given probability distribution
p ( f ) provides optimum values for the decision and reconstruction levels. Max (2)
has developed a solution for optimum decision and reconstruction levels for a Gaus-
sian density and has computed tables of optimum levels as a function of the number
of quantization steps. Table 6.1-1 lists placements of decision and quantization lev-
els for uniform, Gaussian, Laplacian, and Rayleigh densities for the Max quantizer.
    If the decision and reconstruction levels are selected to satisfy Eq. 6.1-11, it can
easily be shown that the mean-square quantization error becomes

      E_{min} = \sum_{j=0}^{J-1} \left[ \int_{d_j}^{d_{j+1}} f^2 p(f)\, df - r_j^2 \int_{d_j}^{d_{j+1}} p(f)\, df \right]        (6.1-12)

In the special case of a uniform probability density, the minimum mean-square
quantization error becomes

                               E_{min} = \frac{1}{12 J^2}                            (6.1-13)
Quantization errors for most other densities must be determined by computation.
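
As an illustration, a sketch of the recursive solution of Eq. 6.1-11 for a unit-variance
Gaussian density, assuming SciPy is available; the finite integration support and
iteration count are arbitrary numerical choices:

    import numpy as np
    from scipy import integrate, stats

    # Iterative (Lloyd-Max) solution of Eq. 6.1-11.
    def max_quantizer(pdf, J, lo=-8.0, hi=8.0, iterations=200):
        d = np.linspace(lo, hi, J + 1)              # decision levels (endpoints fixed)
        r = 0.5 * (d[:-1] + d[1:])                  # initial reconstruction levels
        for _ in range(iterations):
            # Eq. 6.1-11b: r_j is the centroid of p(f) over (d_j, d_{j+1}).
            for j in range(J):
                num, _ = integrate.quad(lambda f: f * pdf(f), d[j], d[j + 1])
                den, _ = integrate.quad(pdf, d[j], d[j + 1])
                r[j] = num / den
            # Eq. 6.1-11a rearranged: d_j = (r_{j-1} + r_j) / 2 for interior levels.
            d[1:-1] = 0.5 * (r[:-1] + r[1:])
        return d, r

    d, r = max_quantizer(stats.norm.pdf, J=4)
    # For 2-bit quantization the interior values approach the Gaussian column of
    # Table 6.1-1: d near (-0.9816, 0, 0.9816), r near (-1.5104, -0.4528, 0.4528, 1.5104).
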
   It is possible to perform nonlinear quantization by a companding operation, as
shown in Figure 6.1-3, in which the sample is transformed nonlinearly, linear quanti-
zation is performed, and the inverse nonlinear transformation is taken (3). In the com-
panding system of quantization, the probability density of the transformed samples is
forced to be uniform. Thus, from Figure 6.1-3, the transformed sample value is


                                            g = T{ f }                                            (6.1-14)


where the nonlinear transformation T { · } is chosen such that the probability density
of g is uniform. Thus,




                        FIGURE 6.1-3. Companding quantizer.


TABLE 6.1-1. Placement of Decision and Reconstruction Levels for Max Quantizer
            Uniform             Gaussian            Laplacian           Rayleigh
Bits      di        ri         di        ri        di        ri        di      ri
1      –1.0000 –0.5000      –∞      –0.7979     –∞       –0.7071    0.0000 1.2657
        0.0000 0.5000       0.0000    0.7979    0.0000     0.7071   2.0985 2.9313
        1.0000              ∞                    ∞                   ∞

2      –1.0000 –0.7500     –∞        –1.5104   –∞        –1.8340    0.0000   0.8079
       –0.5000 –0.2500     –0.9816   –0.4528   –1.1269   –0.4198    1.2545   1.7010
       –0.0000 0.2500       0.0000    0.4528    0.0000    0.4198    2.1667   2.6325
        0.5000 0.7500       0.9816    1.5104    1.1269    1.8340    3.2465   3.8604
        1.0000              ∞                   ∞                   ∞

3      –1.0000   –0.8750   –∞        –2.1519   –∞        –3.0867    0.0000   0.5016
       –0.7500   –0.6250   –1.7479   –1.3439   –2.3796   –1.6725    0.7619   1.0222
       –0.5000   –0.3750   –1.0500   –0.7560   –1.2527   –0.8330    1.2594   1.4966
       –0.2500   –0.1250   –0.5005   –0.2451   –0.5332   –0.2334    1.7327   1.9688
        0.0000    0.1250    0.0000    0.2451    0.0000    0.2334    2.2182   2.4675
        0.2500    0.3750    0.5005    0.7560    0.5332    0.8330    2.7476   3.0277
        0.5000    0.6250    1.0500    1.3439    1.2527    1.6725    3.3707   3.7137
        0.7500    0.8750    1.7479    2.1519    2.3796    3.0867    4.2124   4.7111
        1.0000              ∞                   ∞                   ∞

4      –1.0000   –0.9375   –∞        –2.7326   –∞        –4.4311    0.0000   0.3057
       –0.8750   –0.8125   –2.4008   –2.0690   –3.7240   –3.0169    0.4606   0.6156
       –0.7500   –0.6875   –1.8435   –1.6180   –2.5971   –2.1773    0.7509   0.8863
       –0.6250   –0.5625   –1.4371   –1.2562   –1.8776   –1.5778    1.0130   1.1397
       –0.5000   –0.4375   –1.0993   –0.9423   –1.3444   –1.1110    1.2624   1.3850
       –0.3750   –0.3125   –0.7995   –0.6568   –0.9198   –0.7287    1.5064   1.6277
       –0.2500   –0.1875   –0.5224   –0.3880   –0.5667   –0.4048    1.7499   1.8721
       –0.1250   –0.0625   –0.2582   –0.1284   –0.2664   –0.1240    1.9970   2.1220
        0.0000    0.0625    0.0000    0.1284    0.0000    0.1240    2.2517   2.3814
        0.1250    0.1875    0.2582    0.3880    0.2644    0.4048    2.5182   2.6550
        0.2500    0.3125    0.5224    0.6568    0.5667    0.7287    2.8021   2.9492
        0.3750    0.4375    0.7995    0.9423    0.9198    1.1110    3.1110   3.2729
        0.5000    0.5625    1.0993    1.2562    1.3444    1.5778    3.4566   3.6403
        0.6250    0.6875    1.4371    1.6180    1.8776    2.1773    3.8588   4.0772
        0.7500    0.8125    1.8435    2.0690    2.5971    3.0169    4.3579   4.6385
        0.8750    0.9375    2.4008    2.7326    3.7240    4.4311    5.0649   5.4913
        1.0000              ∞                   ∞                   ∞

                                               p(g) = 1                             (6.1-15)


for -\frac{1}{2} \le g \le \frac{1}{2}. If f is a zero mean random variable, the proper transformation func-
tion is (4)

                     T\{f\} = \int_{-\infty}^{f} p(z)\, dz - \frac{1}{2}             (6.1-16)


That is, the nonlinear transformation function is equivalent to the cumulative proba-
bility distribution of f. Table 6.1-2 contains the companding transformations and
inverses for the Gaussian, Rayleigh, and Laplacian probability densities. It should
be noted that nonlinear quantization by the companding technique is an approxima-
tion to optimum quantization, as specified by the Max solution. The accuracy of the
approximation improves as the number of quantization levels increases.
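
A sketch of the companding quantizer of Figure 6.1-3 for a Gaussian signal, using the
Gaussian transformations of Table 6.1-2 and assuming SciPy's erf and erfinv; it is not
an optimal (Max) quantizer:

    import numpy as np
    from scipy import special

    def compand_quantize_gaussian(f, sigma, bits):
        g = 0.5 * special.erf(f / (np.sqrt(2.0) * sigma))     # uniform on (-1/2, 1/2)
        J = 2 ** bits
        j = np.clip(np.floor((g + 0.5) * J), 0, J - 1)        # linear quantization of g
        g_hat = (j + 0.5) / J - 0.5                           # reconstruction in g domain
        return np.sqrt(2.0) * sigma * special.erfinv(2.0 * g_hat)   # inverse transformation

    rng = np.random.default_rng(4)
    samples = rng.normal(0.0, 1.0, size=100_000)
    reconstructed = compand_quantize_gaussian(samples, sigma=1.0, bits=3)
    mse = np.mean((samples - reconstructed) ** 2)   # approaches the Max quantizer error as bits grow
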


6.2. PROCESSING QUANTIZED VARIABLES

Numbers within a digital computer that represent image variables, such as lumi-
nance or tristimulus values, normally are input as the integer codes corresponding to
the quantization reconstruction levels of the variables, as illustrated in Figure 6.1-1.
If the quantization is linear, the jth integer value is given by

                      j = \left[ (J - 1) \frac{f - a_L}{a_U - a_L} \right]_N         (6.2-1)


where J is the maximum integer value, f is the unquantized pixel value over a
lower-to-upper range of a L to a U , and [ · ] N denotes the nearest integer value of the
argument. The corresponding reconstruction value is

                r_j = \frac{a_U - a_L}{J}\, j + \frac{a_U - a_L}{2J} + a_L           (6.2-2)

Hence, r j is linearly proportional to j. If the computer processing operation is itself
linear, the integer code j can be numerically processed rather than the real number r j .
However, if nonlinear processing is to be performed, for example, taking the loga-
rithm of a pixel, it is necessary to process r j as a real variable rather than the integer j
because the operation is scale dependent. If the quantization is nonlinear, all process-
ing must be performed in the real variable domain.
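
A minimal sketch of Eqs. 6.2-1 and 6.2-2 (NumPy assumed; the sample values and
function names are arbitrary):

    import numpy as np

    # Linear quantization integer code (Eq. 6.2-1) and reconstruction value (Eq. 6.2-2).
    def to_integer_code(f, a_L, a_U, J):
        return np.rint((J - 1) * (f - a_L) / (a_U - a_L)).astype(np.int64)   # [.]_N

    def to_reconstruction_value(j, a_L, a_U, J):
        return (a_U - a_L) / J * j + (a_U - a_L) / (2 * J) + a_L

    j = to_integer_code(np.array([0.0, 0.37, 1.0]), a_L=0.0, a_U=1.0, J=256)
    r = to_reconstruction_value(j, a_L=0.0, a_U=1.0, J=256)
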
    In a digital computer, there are two major forms of numeric representation: real
and integer. Real numbers are stored in floating-point form, and typically have a
large dynamic range with fine precision. Integer numbers can be strictly positive or
bipolar (negative or positive). The two's complement number system is commonly
      TABLE 6.1-2. Companding Quantization Transformations

      Gaussian:
        Probability density:      p(f) = (2\pi\sigma^2)^{-1/2} \exp\{-f^2 / (2\sigma^2)\}
        Forward transformation:   g = \frac{1}{2} \operatorname{erf}\{f / (\sqrt{2}\sigma)\}
        Inverse transformation:   \hat{f} = \sqrt{2}\,\sigma \operatorname{erf}^{-1}\{2\hat{g}\}

      Rayleigh:
        Probability density:      p(f) = \frac{f}{\sigma^2} \exp\{-f^2 / (2\sigma^2)\}
        Forward transformation:   g = \frac{1}{2} - \exp\{-f^2 / (2\sigma^2)\}
        Inverse transformation:   \hat{f} = \left[ 2\sigma^2 \ln\{1 / (\tfrac{1}{2} - \hat{g})\} \right]^{1/2}

      Laplacian:
        Probability density:      p(f) = \frac{\alpha}{2} \exp\{-\alpha |f|\}
        Forward transformation:   g = \frac{1}{2} [1 - \exp\{-\alpha f\}]       f \ge 0
                                  g = -\frac{1}{2} [1 - \exp\{\alpha f\}]       f < 0
        Inverse transformation:   \hat{f} = -\frac{1}{\alpha} \ln\{1 - 2\hat{g}\}       \hat{g} \ge 0
                                  \hat{f} = \frac{1}{\alpha} \ln\{1 + 2\hat{g}\}        \hat{g} < 0

      where erf\{x\} \equiv \frac{2}{\sqrt{\pi}} \int_0^x \exp\{-y^2\}\, dy and \alpha = \sqrt{2} / \sigma.

used in computers and digital processing hardware for representing bipolar integers.
The general format is as follows:

                                  S.M1,M2,...,MB-1

where S is a sign bit (0 for positive, 1 for negative), followed, conceptually, by a
binary point, Mb denotes a magnitude bit, and B is the number of bits in the com-
puter word. Table 6.2-1 lists the two's complement correspondence between integer,
fractional, and decimal numbers for a 4-bit word. In this representation, all pixels
are scaled in amplitude between –1.0 and 1.0 – 2^{-(B-1)}. One of the advantages of


             TABLE 6.2-1. Two's Complement Code for 4-Bit Code Word

                    Code     Fractional Value     Decimal Value
                    0.111         +7/8               +0.875
                    0.110         +6/8               +0.750
                    0.101         +5/8               +0.625
                    0.100         +4/8               +0.500
                    0.011         +3/8               +0.375
                    0.010         +2/8               +0.250
                    0.001         +1/8               +0.125
                    0.000           0                 0.000
                    1.111         –1/8               –0.125
                    1.110         –2/8               –0.250
                    1.101         –3/8               –0.375
                    1.100         –4/8               –0.500
                    1.011         –5/8               –0.625
                    1.010         –6/8               –0.750
                    1.001         –7/8               –0.875
                    1.000         –8/8               –1.000


this representation is that pixel scaling is independent of precision in the sense that a
pixel F ( j, k ) is bounded over the range


                                   – 1.0 ≤ F ( j, k ) < 1.0


regardless of the number of bits in a word.
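
As a small illustration (plain Python; the helper name is hypothetical), the fractional
value of a B-bit two's complement code word under the Table 6.2-1 convention:

    # Fractional value of a B-bit two's complement code word: the sign bit
    # carries weight -1 and magnitude bit M_b carries weight 2**(-b).
    def twos_complement_fraction(code, B):
        # 'code' holds the B-bit pattern as an unsigned integer.
        value = code - (1 << B) if code & (1 << (B - 1)) else code
        return value / float(1 << (B - 1))

    assert twos_complement_fraction(0b0111, 4) == 7 / 8     # code 0.111 -> +0.875
    assert twos_complement_fraction(0b1111, 4) == -1 / 8    # code 1.111 -> -0.125
    assert twos_complement_fraction(0b1000, 4) == -1.0      # code 1.000 -> -1.000
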


6.3. MONOCHROME AND COLOR IMAGE QUANTIZATION

This section considers the subjective and quantitative effects of the quantization of
monochrome and color images.


6.3.1. Monochrome Image Quantization

Monochrome images are typically input to a digital image processor as a sequence
of uniform-length binary code words. In the literature, the binary code is often
called a pulse code modulation (PCM) code. Because uniform-length code words
are used for each image sample, the number of amplitude quantization levels is
determined by the relationship

                                        L = 2^B                                      (6.3-1)


where B represents the number of code bits allocated to each sample.
   A bit rate compression can be achieved for PCM coding by the simple expedient
of restricting the number of bits assigned to each sample. If image quality is to be
judged by an analytic measure, B is simply taken as the smallest value that satisfies
the minimal acceptable image quality measure. For a subjective assessment, B is
lowered until quantization effects become unacceptable. The eye is only capable of
judging the absolute brightness of about 10 to 15 shades of gray, but it is much more
sensitive to the difference in the brightness of adjacent gray shades. For a reduced
number of quantization levels, the first noticeable artifact is a gray scale contouring
caused by a jump in the reconstructed image brightness between quantization levels
in a region where the original image is slowly changing in brightness. The minimal
number of quantization bits required for basic PCM coding to prevent gray scale
contouring is dependent on a variety of factors, including the linearity of the image
display and noise effects before and after the image digitizer.
   Assuming that an image sensor produces an output pixel sample proportional to the
image intensity, a question of concern then is: Should the image intensity itself, or
some function of the image intensity, be quantized? Furthermore, should the quantiza-
tion scale be linear or nonlinear? Linearity or nonlinearity of the quantization scale can




FIGURE 6.3-1. Uniform quantization of the peppers_ramp_luminance monochrome
image: (a) 8 bit, 256 levels; (b) 7 bit, 128 levels; (c) 6 bit, 64 levels; (d) 5 bit, 32 levels;
(e) 4 bit, 16 levels; (f) 3 bit, 8 levels.


be viewed as a matter of implementation. A given nonlinear quantization scale can
be realized by the companding operation of Figure 6.1-3, in which a nonlinear
amplification weighting of the continuous signal to be quantized is performed,
followed by linear quantization, followed by an inverse weighting of the quantized
amplitude. Thus, consideration is limited here to linear quantization of companded
pixel samples.
   There have been many experimental studies to determine the number and place-
ment of quantization levels required to minimize the effect of gray scale contouring
(5–8). Goodall (5) performed some of the earliest experiments on digital television
and concluded that 6 bits of intensity quantization (64 levels) were required for good
quality and that 5 bits (32 levels) would suffice for a moderate amount of contour-
ing. Other investigators have reached similar conclusions. In most studies, however,
there has been some question as to the linearity and calibration of the imaging sys-
tem. As noted in Section 3.5.3, most television cameras and monitors exhibit a non-
linear response to light intensity. Also, the photographic film that is often used to
record the experimental results is highly nonlinear. Finally, any camera or monitor
noise tends to diminish the effects of contouring.
   Figure 6.3-1 contains photographs of an image linearly quantized with a variable
number of quantization levels. The source image is a split image in which the left
side is a luminance image and the right side is a computer-generated linear ramp. In
Figure 6.3-1, the luminance signal of the image has been uniformly quantized with
from 8 to 256 levels (3 to 8 bits). Gray scale contouring in these pictures is apparent
in the ramp part of the split image for 6 or fewer bits. The contouring of the lumi-
nance image part of the split image becomes noticeable for 5 bits.
   As discussed in Section 2.4, it has been postulated that the eye responds
logarithmically or to a power law of incident light amplitude. There have been several
efforts to quantitatively model this nonlinear response by a lightness function Λ,
which is related to incident luminance. Priest et al. (9) have proposed a square-root
nonlinearity

    \Lambda = ( 100.0 Y )^{1/2}                                                  (6.3-2)

where 0.0 ≤ Y ≤ 1.0 and 0.0 ≤ Λ ≤ 10.0. Ladd and Pinney (10) have suggested a cube-
root scale

    \Lambda = 2.468 ( 100.0 Y )^{1/3} - 1.636                                    (6.3-3)

A logarithm scale

    \Lambda = 5.0 [ \log_{10} ( 100.0 Y ) ]                                      (6.3-4)

where 0.01 ≤ Y ≤ 1.0 has also been proposed by Foss et al. (11). Figure 6.3-2 com-
pares these three scaling functions.

FIGURE 6.3-2. Lightness scales.

   In an effort to reduce the gray scale contouring of linear quantization, it is reason-
able to apply a lightness scaling function prior to quantization, and then to apply its
inverse to the reconstructed value in correspondence to the companding quantizer of
Figure 6.1-3. Figure 6.3-3 presents a comparison of linear, square-root, cube-root,
and logarithmic quantization for a 4-bit quantizer. Among the lightness scale quan-
tizers, the gray scale contouring appears least for the square-root scaling. The light-
ness quantizers exhibit less contouring than the linear quantizer in dark areas but
worse contouring for bright regions.
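
A minimal sketch of such a companding quantizer follows, using the square-root lightness scale of Eq. 6.3-2 as the forward weighting and its inverse after uniform quantization; the function names and the 4-bit setting are illustrative assumptions, not the book's implementation.

```python
import numpy as np

def companded_quantize(y, bits, forward, inverse):
    """Companding quantizer: nonlinear weighting, uniform quantization, inverse weighting."""
    levels = 2 ** bits
    lam = forward(y)                      # lightness-like value
    lam_max = forward(1.0)                # normalize the scale to [0, 1]
    codes = np.clip(np.floor(lam / lam_max * levels), 0, levels - 1)
    lam_hat = (codes + 0.5) / levels * lam_max
    return inverse(lam_hat)

# Square-root lightness scale of Eq. 6.3-2 and its inverse.
sqrt_fwd = lambda y: np.sqrt(100.0 * y)
sqrt_inv = lambda lam: lam ** 2 / 100.0

y = np.linspace(0.0, 1.0, 1024)
y_hat = companded_quantize(y, 4, sqrt_fwd, sqrt_inv)
print("maximum reconstruction error:", np.max(np.abs(y - y_hat)))
```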


6.3.2. Color Image Quantization

A color image may be represented by its red, green, and blue source tristimulus val-
ues or any linear or nonlinear invertible function of the source tristimulus values. If
the red, green, and blue tristimulus values are to be quantized individually, the selec-
tion of the number and placement of quantization levels follows the same general
considerations as for a monochrome image. The eye exhibits a nonlinear response to
spectral lights as well as white light, and therefore, it is subjectively preferable to
compand the tristimulus values before quantization. It is known, however, that the
eye is most sensitive to brightness changes in the green region of the spectrum, mod-
erately sensitive to brightness changes in the red spectral region, and least sensi-
tive to blue changes. Thus, it is possible to assign quantization levels on this basis
more efficiently than simply using an equal number for each tristimulus value.
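
As an illustrative sketch of such an unequal assignment (the 5-6-5 split, the helper function, and the toy image are assumptions for illustration, not taken from the text), the green component can be quantized more finely than red and blue for a fixed total bit budget:

```python
import numpy as np

def quantize_channel(x, bits):
    """Uniform quantization of one tristimulus component scaled to [0, 1]."""
    levels = 2 ** bits
    codes = np.clip(np.floor(x * levels), 0, levels - 1)
    return (codes + 0.5) / levels

rgb = np.random.default_rng(0).random((4, 4, 3))   # toy RGB image with values in [0, 1)
bit_allocation = (5, 6, 5)                         # R, G, B: the most levels go to green
rgb_hat = np.dstack([quantize_channel(rgb[..., i], b)
                     for i, b in enumerate(bit_allocation)])
print("total bits per pixel:", sum(bit_allocation))
```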
FIGURE 6.3-3. Comparison of lightness scale quantization of the peppers_ramp_luminance
image for 4-bit quantization: (a) linear; (b) log; (c) square root; (d) cube root.

   Figure 6.3-4 is a general block diagram for a color image quantization system. A
source image described by source tristimulus values R, G, B is converted to three
components x(1), x(2), x(3), which are then quantized. Next, the quantized compo-
       ˆ
nents x ( 1 ) , x ( 2 ) , x ( 3 ) are converted back to the original color coordinate system,
                ˆ         ˆ
                                                     ˆ ˆ ˆ
producing the quantized tristimulus values R, G , B . The quantizer in Figure 6.3-4
effectively partitions the color space of the color coordinates x(1), x(2), x(3) into
quantization cells and assigns a single color value to all colors within a cell. To be
most efficient, the three color components x(1), x(2), x(3) should be quantized jointly.
However, implementation considerations often dictate separate quantization of the
color components. In such a system, x(1), x(2), x(3) are individually quantized over
their maximum ranges. In effect, the physical color solid is enclosed in a rectangular
solid, which is then divided into rectangular quantization cells.

FIGURE 6.3-4. Color image quantization model.

FIGURE 6.3-5. Loci of reproducible colors for R_N G_N B_N and UVW coordinate systems.
   If the source tristimulus values are converted to some other coordinate system for
quantization, some immediate problems arise. As an example, consider the
quantization of the UVW tristimulus values. Figure 6.3-5 shows the locus of
reproducible colors for the RGB source tristimulus values plotted as a cube and the
transformation of this color cube into the UVW coordinate system. It is seen that
the RGB cube becomes a parallelepiped. If the UVW tristimulus values are to be
quantized individually over their maximum and minimum limits, many of the
quantization cells represent nonreproducible colors and hence are wasted. It is only
worthwhile to quantize colors within the parallelepiped, but this generally is a
difficult operation to implement efficiently.
   In the present analysis, it is assumed that each color component is linearly quan-
tized over its maximum range into 2^{B(i)} levels, where B(i) represents the number of
bits assigned to the component x(i). The total number of bits allotted to the coding is
fixed at

    B_T = B(1) + B(2) + B(3)                                                     (6.3-5)
FIGURE 6.3-6. Chromaticity shifts resulting from uniform quantization of the
smpte_girl_linear color image.



Let a_U(i) represent the upper bound of x(i) and a_L(i) the lower bound. Then each
quantization cell has dimension

    q(i) = \frac{a_U(i) - a_L(i)}{2^{B(i)}}                                      (6.3-6)


Any color with color component x(i) within the quantization cell will be quantized
to the color component value x̂(i). The maximum quantization error along each
color coordinate axis is then

    \varepsilon(i) = x(i) - \hat{x}(i) = \frac{a_U(i) - a_L(i)}{2^{B(i)+1}}      (6.3-7)


Thus, the coordinates of the quantized color become

    \hat{x}(i) = x(i) ± \varepsilon(i)                                           (6.3-8)

subject to the conditions a_L(i) ≤ x̂(i) ≤ a_U(i). It should be observed that the values of
x̂(i) will always lie within the smallest cube enclosing the color solid for the given
color coordinate system. Figure 6.3-6 illustrates chromaticity shifts of various colors
for quantization in the R_N G_N B_N and Yuv coordinate systems (12).
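
As a small numerical check of Eqs. 6.3-6 to 6.3-8 (the unit component ranges and the 5-6-5 bit assignment below are illustrative assumptions), the cell dimensions and maximum errors can be computed directly:

```python
import numpy as np

# Cell size (Eq. 6.3-6) and maximum error (Eq. 6.3-7) for each color component,
# assuming each component spans [0.0, 1.0] and the bit assignment B = (5, 6, 5).
a_L, a_U = np.zeros(3), np.ones(3)
B = np.array([5, 6, 5])
q = (a_U - a_L) / 2.0 ** B            # quantization cell dimensions
eps = (a_U - a_L) / 2.0 ** (B + 1)    # maximum quantization error along each axis
print("q   =", q)
print("eps =", eps)
```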
      Jain and Pratt (12) have investigated the optimal assignment of quantization deci-
sion levels for color images in order to minimize the geodesic color distance
between an original color and its reconstructed representation. Interestingly enough,
it was found that quantization of the R_N G_N B_N color coordinates provided better
results than for other common color coordinate systems. The primary reason was
that all quantization levels were occupied in the R_N G_N B_N system, but many levels
were unoccupied with the other systems. This consideration seemed to override the
metric nonuniformity of the R_N G_N B_N color space.


REFERENCES

 1. P. F. Panter and W. Dite, “Quantization Distortion in Pulse Code Modulation with Non-
    uniform Spacing of Levels,” Proc. IRE, 39, 1, January 1951, 44–48.
 2. J. Max, “Quantizing for Minimum Distortion,” IRE Trans. Information Theory, IT-6, 1,
    March 1960, 7–12.
 3. V. R. Algazi, “Useful Approximations to Optimum Quantization,” IEEE Trans. Commu-
    nication Technology, COM-14, 3, June 1966, 297–301.
 4. R. M. Gray, “Vector Quantization,” IEEE ASSP Magazine, April 1984, 4–29.
 5. W. M. Goodall, “Television by Pulse Code Modulation,” Bell System Technical J., Janu-
    ary 1951.
 6. R. L. Cabrey, “Video Transmission over Telephone Cable Pairs by Pulse Code Modula-
    tion,” Proc. IRE, 48, 9, September 1960, 1546–1551.
 7. L. H. Harper, “PCM Picture Transmission,” IEEE Spectrum, 3, 6, June 1966, 146.
 8. F. W. Scoville and T. S. Huang, “The Subjective Effect of Spatial and Brightness Quanti-
    zation in PCM Picture Transmission,” NEREM Record, 1965, 234–235.
 9. I. G. Priest, K. S. Gibson, and H. J. McNicholas, “An Examination of the Munsell Color
    System, I. Spectral and Total Reflection and the Munsell Scale of Value,” Technical
    Paper 167, National Bureau of Standards, Washington, DC, 1920.
10. J. H. Ladd and J. E. Pinney, “Empirical Relationships with the Munsell Value Scale,”
    Proc. IRE (Correspondence), 43, 9, 1955, 1137.
11. C. E. Foss, D. Nickerson, and W. C. Granville, “Analysis of the Ostwald Color System,”
    J. Optical Society of America, 34, 1, July 1944, 361–381.
12. A. K. Jain and W. K. Pratt, “Color Image Quantization,” IEEE Publication 72 CH0 601-
    5-NTC, National Telecommunications Conference 1972 Record, Houston, TX, Decem-
    ber 1972.




PART 3

DISCRETE TWO-DIMENSIONAL
LINEAR PROCESSING

Part 3 of the book is concerned with a unified analysis of discrete two-dimensional
linear processing operations. Several forms of discrete two-dimensional
superposition and convolution operators are developed and related to one another.
Two-dimensional transforms, such as the Fourier, Hartley, cosine, and Karhunen–
Loeve transforms, are introduced. Consideration is given to the utilization of two-
dimensional transforms as an alternative means of achieving convolutional
processing more efficiently.








7
SUPERPOSITION AND CONVOLUTION




In Chapter 1, superposition and convolution operations were derived for continuous
two-dimensional image fields. This chapter provides a derivation of these operations
for discrete two-dimensional images. Three types of superposition and convolution
operators are defined: finite area, sampled image, and circulant area. The finite-area
operator is a linear filtering process performed on a discrete image data array. The
sampled image operator is a discrete model of a continuous two-dimensional image
filtering process. The circulant area operator provides a basis for a computationally
efficient means of performing either finite-area or sampled image superposition and
convolution.


7.1. FINITE-AREA SUPERPOSITION AND CONVOLUTION

Mathematical expressions for finite-area superposition and convolution are devel-
oped below for both series and vector-space formulations.


7.1.1. Finite-Area Superposition and Convolution: Series Formulation

Let F ( n1, n 2 ) denote an image array for n1, n2 = 1, 2,..., N. For notational simplicity,
all arrays in this chapter are assumed square. In correspondence with Eq. 1.2-6, the
image array can be represented at some point ( m 1 , m 2 ) as a sum of amplitude
weighted Dirac delta functions by the discrete sifting summation

    F(m_1, m_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\,\delta(m_1 - n_1 + 1, m_2 - n_2 + 1)    (7.1-1)




The term


                                                 1            if m1 = n 1 and m 2 = n 2               (7.1-2a)
                                                 
            δ ( m 1 – n 1 + 1, m 2 – n 2 + 1 ) = 
                                                 
                                                 0             otherwise                              (7.1-2b)


is a discrete delta function. Now consider a spatial linear operator O { · } that pro-
duces an output image array


                                        Q ( m 1, m 2 ) = O { F ( m 1, m 2 ) }                           (7.1-3)


by a linear spatial combination of pixels within a neighborhood of ( m 1, m 2 ) . From
the sifting summation of Eq. 7.1-1,

                                                                                        
    Q(m_1, m_2) = O\Big\{ \sum_{n_1} \sum_{n_2} F(n_1, n_2)\,\delta(m_1 - n_1 + 1, m_2 - n_2 + 1) \Big\}    (7.1-4a)

or

    Q(m_1, m_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\,O\{ \delta(m_1 - n_1 + 1, m_2 - n_2 + 1) \}    (7.1-4b)



recognizing that O { · } is a linear operator and that F ( n 1, n 2 ) in the summation of
Eq. 7.1-4a is a constant in the sense that it does not depend on ( m 1, m 2 ) . The term
O { δ ( t 1, t 2 ) } for ti = m i – n i + 1 is the response at output coordinate ( m 1, m 2 ) to a
unit amplitude input at coordinate ( n 1, n 2 ) . It is called the impulse response function
array of the linear operator and is written as


    H(m_1 - n_1 + 1, m_2 - n_2 + 1 ; m_1, m_2) = O\{ \delta(t_1, t_2) \}    for 1 ≤ t_1, t_2 ≤ L    (7.1-5)


and is zero otherwise. For notational simplicity, the impulse response array is con-
sidered to be square.
   In Eq. 7.1-5 it is assumed that the impulse response array is of limited spatial
extent. This means that an output image pixel is influenced by input image pixels
only within some finite area L × L neighborhood of the corresponding output image
pixel. The output coordinates ( m 1, m 2 ) in Eq. 7.1-5 following the semicolon indicate
that in the general case, called finite area superposition, the impulse response array
can change form for each point ( m 1, m 2 ) in the processed array Q ( m 1, m 2 ). Follow-
ing this nomenclature, the finite area superposition operation is defined as
FIGURE 7.1-1. Relationships between input data, output data, and impulse response arrays
for finite-area superposition; upper left corner justified array definition.



    Q(m_1, m_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\,H(m_1 - n_1 + 1, m_2 - n_2 + 1 ; m_1, m_2)    (7.1-6)


The limits of the summation are


                             MAX { 1, m i – L + 1 } ≤ n i ≤ MIN { N, m i }                           (7.1-7)


where MAX { a, b } and MIN { a, b } denote the maximum and minimum of the argu-
ments, respectively. Examination of the indices of the impulse response array at its
extreme positions indicates that M = N + L - 1, and hence the processed output array
Q is of larger dimension than the input array F. Figure 7.1-1 illustrates the geometry
of finite-area superposition. If the impulse response array H is spatially invariant,
the superposition operation reduces to the convolution operation.


    Q(m_1, m_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\,H(m_1 - n_1 + 1, m_2 - n_2 + 1)    (7.1-8)


Figure 7.1-2 presents a graphical example of convolution with a 3 × 3 impulse
response array.
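
A brief sketch (using scipy.signal.convolve2d as a stand-in for a direct implementation of Eq. 7.1-8; the array sizes are illustrative) confirms the dimension relationship M = N + L - 1 for finite-area convolution:

```python
import numpy as np
from scipy.signal import convolve2d

# Finite-area convolution of Eq. 7.1-8: an N x N input array and an L x L impulse
# response array produce an M x M output array with M = N + L - 1.
N, L = 8, 3
F = np.ones((N, N))
H = np.full((L, L), 1.0 / L ** 2)
Q = convolve2d(F, H, mode='full')
print(Q.shape)   # (10, 10), i.e. M = N + L - 1
```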
   Equation 7.1-6 expresses the finite-area superposition operation in left-justified
form in which the input and output arrays are aligned at their upper left corners. It is
often notationally convenient to utilize a definition in which the output array is cen-
tered with respect to the input array. This definition of centered superposition is
given by
FIGURE 7.1-2. Graphical example of finite-area convolution with a 3 × 3 impulse response
array; upper left corner justified array definition.




    Q_c(j_1, j_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\,H(j_1 - n_1 + L_c, j_2 - n_2 + L_c ; j_1, j_2)    (7.1-9)




where – ( L – 3 ) ⁄ 2 ≤ ji ≤ N + ( L – 1 ) ⁄ 2 and L c = ( L + 1 ) ⁄ 2 . The limits of the summa-
tion are



                   MAX { 1, j i – ( L – 1 ) ⁄ 2 } ≤ n i ≤ MIN { N, j i + ( L – 1 ) ⁄ 2 }                  (7.1-10)



Figure 7.1-3 shows the spatial relationships between the arrays F, H, and Qc for cen-
tered superposition with a 5 × 5 impulse response array.
   In digital computers and digital image processors, it is often convenient to restrict
the input and output arrays to be of the same dimension. For such systems, Eq. 7.1-9
needs only to be evaluated over the range 1 ≤ j i ≤ N . When the impulse response
FIGURE 7.1-3. Relationships between input data, output data, and impulse response arrays
for finite-area superposition; centered array definition.




array is located on the border of the input array, the product computation of Eq.
7.1-9 does not involve all of the elements of the impulse response array. This situa-
tion is illustrated in Figure 7.1-3, where the impulse response array is in the upper
left corner of the input array. The input array pixels “missing” from the computation
are shown crosshatched in Figure 7.1-3. Several methods have been proposed to
deal with this border effect. One method is to perform the computation of all of the
impulse response elements as if the missing pixels are of some constant value. If the
constant value is zero, the result is called centered, zero padded superposition. A
variant of this method is to regard the missing pixels to be mirror images of the input
array pixels, as indicated in the lower left corner of Figure 7.1-3. In this case the
centered, reflected boundary superposition definition becomes


    Q_c(j_1, j_2) = \sum_{n_1} \sum_{n_2} F(n_1', n_2')\,H(j_1 - n_1 + L_c, j_2 - n_2 + L_c ; j_1, j_2)    (7.1-11)



where the summation limits are


                                   ji – ( L – 1 ) ⁄ 2 ≤ ni ≤ ji + ( L – 1 ) ⁄ 2                           (7.1-12)


and

    n_i' = \begin{cases} 2 - n_i & \text{for } n_i \le 0 \\ n_i & \text{for } 1 \le n_i \le N \\ 2N - n_i & \text{for } n_i > N \end{cases}    (7.1-13)


In many implementations, the superposition computation is limited to the range
( L + 1 ) ⁄ 2 ≤ j i ≤ N – ( L – 1 ) ⁄ 2 , and the border elements of the N × N array Qc are set
to zero. In effect, the superposition operation is computed only when the impulse
response array is fully embedded within the confines of the input array. This region
is described by the dashed lines in Figure 7.1-3. This form of superposition is called
centered, zero boundary superposition.
    If the impulse response array H is spatially invariant, the centered definition for
convolution becomes


    Q_c(j_1, j_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\,H(j_1 - n_1 + L_c, j_2 - n_2 + L_c)    (7.1-14)


The 3 × 3 impulse response array, which is called a small generating kernel (SGK),
is fundamental to many image processing algorithms (1). When the SGK is totally
embedded within the input data array, the general term of the centered convolution
operation can be expressed explicitly as


 Q c ( j1, j 2 ) = H ( 3, 3 )F ( j1 – 1, j 2 – 1 ) + H ( 3, 2 )F ( j 1 – 1, j2 ) + H ( 3, 1 )F ( j 1 – 1, j 2 + 1 )

                  + H ( 2, 3 )F ( j 1, j2 – 1 ) + H ( 2, 2 )F ( j1, j 2 ) + H ( 2, 1 )F ( j 1, j 2 + 1 )

                  + H ( 1, 3 )F ( j 1 + 1, j 2 – 1 ) + H ( 1, 2 )F ( j 1 + 1, j2 ) + H ( 1, 1 )F ( j 1 + 1, j 2 + 1 )

                                                                                                             (7.1-15)


for 2 ≤ j i ≤ N – 1 . In Chapter 9 it will be shown that convolution with arbitrary-size
impulse response arrays can be achieved by sequential convolutions with SGKs.
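
A direct, loop-based sketch of the centered SGK convolution of Eq. 7.1-15 is given below (the function name and test arrays are illustrative; a production implementation would vectorize this):

```python
import numpy as np

def sgk_convolve(F, H):
    """Centered 3x3 convolution of Eq. 7.1-15, evaluated only where the small
    generating kernel is fully embedded in the input array (interior points)."""
    N = F.shape[0]
    Q = np.zeros_like(F)
    for j1 in range(1, N - 1):            # 0-based interior row indices
        for j2 in range(1, N - 1):        # 0-based interior column indices
            acc = 0.0
            for a in range(3):
                for b in range(3):
                    # H(3 - a, 3 - b) multiplies F(j1 - 1 + a, j2 - 1 + b), as in Eq. 7.1-15
                    acc += H[2 - a, 2 - b] * F[j1 - 1 + a, j2 - 1 + b]
            Q[j1, j2] = acc
    return Q

F = np.arange(36, dtype=float).reshape(6, 6)
H = np.full((3, 3), 1.0 / 9.0)
print(sgk_convolve(F, H))
```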
    The four different forms of superposition and convolution are each useful in var-
ious image processing applications. The upper left corner–justified definition is
appropriate for computing the correlation function between two images. The cen-
tered, zero padded and centered, reflected boundary definitions are generally
employed for image enhancement filtering. Finally, the centered, zero boundary def-
inition is used for the computation of spatial derivatives in edge detection. In this
application, the derivatives are not meaningful in the border region.


     0.040 0.080 0.120 0.160 0.200 0.200 0.200                0.000 0.000 0.000 0.000 0.000 0.000 0.000
     0.080 0.160 0.240 0.320 0.400 0.400 0.400                0.000 0.000 0.000 0.000 0.000 0.000 0.000
     0.120 0.240 0.360 0.480 0.600 0.600 0.600                0.000 0.000 1.000 1.000 1.000 1.000 1.000
     0.160 0.320 0.480 0.640 0.800 0.800 0.800                0.000 0.000 1.000 1.000 1.000 1.000 1.000
     0.200 0.400 0.600 0.800 1.000 1.000 1.000                0.000 0.000 1.000 1.000 1.000 1.000 1.000
     0.200 0.400 0.600 0.800 1.000 1.000 1.000                0.000 0.000 1.000 1.000 1.000 1.000 1.000
     0.200 0.400 0.600 0.800 1.000 1.000 1.000                0.000 0.000 1.000 1.000 1.000 1.000 1.000

         (a) Upper left corner justified                          (b) Centered, zero boundary

     0.360 0.480 0.600 0.600 0.600 0.600 0.600                1.000 1.000 1.000 1.000 1.000 1.000 1.000
     0.480 0.640 0.800 0.800 0.800 0.800 0.800                1.000 1.000 1.000 1.000 1.000 1.000 1.000
     0.600 0.800 1.000 1.000 1.000 1.000 1.000                1.000 1.000 1.000 1.000 1.000 1.000 1.000
     0.600 0.800 1.000 1.000 1.000 1.000 1.000                1.000 1.000 1.000 1.000 1.000 1.000 1.000
     0.600 0.800 1.000 1.000 1.000 1.000 1.000                1.000 1.000 1.000 1.000 1.000 1.000 1.000
     0.600 0.800 1.000 1.000 1.000 1.000 1.000                1.000 1.000 1.000 1.000 1.000 1.000 1.000
     0.600 0.800 1.000 1.000 1.000 1.000 1.000                1.000 1.000 1.000 1.000 1.000 1.000 1.000

          (c) Centered, zero padded                                  (d) Centered, reflected
FIGURE 7.1-4 Finite-area convolution boundary conditions, upper left corner of convolved
image.


   Figure 7.1-4 shows computer printouts of pixels in the upper left corner of a
convolved image for the four types of convolution boundary conditions. In this
example, the source image is constant of maximum value 1.0. The convolution
impulse response array is a 5 × 5 uniform array.
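
The four boundary conditions of Figure 7.1-4 can be reproduced, at least in spirit, with the following sketch (scipy.signal.convolve2d's 'fill' and 'symm' boundary options are used as stand-ins for the zero padded and reflected definitions; the array sizes are illustrative):

```python
import numpy as np
from scipy.signal import convolve2d

# A constant image of value 1.0 convolved with a 5 x 5 uniform impulse response
# under the four boundary conditions discussed above.
F = np.ones((7, 7))
H = np.full((5, 5), 1.0 / 25.0)

upper_left  = convolve2d(F, H, mode='full')[:7, :7]           # upper left corner justified
zero_padded = convolve2d(F, H, mode='same', boundary='fill')  # centered, zero padded
reflected   = convolve2d(F, H, mode='same', boundary='symm')  # centered, reflected boundary
zero_bound  = np.zeros_like(F)                                # centered, zero boundary
zero_bound[2:-2, 2:-2] = convolve2d(F, H, mode='valid')       # kernel fully embedded only
print(np.round(upper_left, 3))    # corner value 0.040, as in Figure 7.1-4a
```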


7.1.2. Finite-Area Superposition and Convolution: Vector-Space Formulation
If the arrays F and Q of Eq. 7.1-6 are represented in vector form by the N^2 × 1 vec-
tor f and the M^2 × 1 vector q, respectively, the finite-area superposition operation
can be written as (2)

                                                 q = Df                                             (7.1-16)
where D is an M^2 × N^2 matrix containing the elements of the impulse response. It is
convenient to partition the superposition operator matrix D into submatrices of
dimension M × N . Observing the summation limits of Eq. 7.1-7, it is seen that

    \mathbf{D} = \begin{bmatrix}
    \mathbf{D}_{1,1} & \mathbf{0} & \cdots & \mathbf{0} \\
    \mathbf{D}_{2,1} & \mathbf{D}_{2,2} & & \vdots \\
    \vdots & \vdots & \ddots & \mathbf{0} \\
    \mathbf{D}_{L,1} & \mathbf{D}_{L,2} & & \mathbf{D}_{M-L+1,N} \\
    \mathbf{0} & \mathbf{D}_{L+1,2} & & \vdots \\
    \vdots & & \ddots & \vdots \\
    \mathbf{0} & \cdots & \mathbf{0} & \mathbf{D}_{M,N}
    \end{bmatrix}    (7.1-17)


         11 12 13
    H =  21 22 23
         31 32 33

         | 11  0   0   0  |
         | 21  11  0   0  |
         | 31  21  0   0  |
         | 0   31  0   0  |
         | 12  0   11  0  |
         | 22  12  21  11 |
         | 32  22  31  21 |
    D =  | 0   32  0   31 |
         | 13  0   12  0  |
         | 23  13  22  12 |
         | 33  23  32  22 |
         | 0   33  0   32 |
         | 0   0   13  0  |
         | 0   0   23  13 |
         | 0   0   33  23 |
         | 0   0   0   33 |

              (a)

FIGURE 7.1-5. Finite-area convolution operators: (a) general impulse array, M = 4, N = 2,
L = 3; (b) Gaussian-shaped impulse array, M = 16, N = 8, L = 9.



The general nonzero term of D is then given by


    \mathbf{D}_{m_2, n_2}(m_1, n_1) = H(m_1 - n_1 + 1, m_2 - n_2 + 1 ; m_1, m_2)    (7.1-18)




Thus, it is observed that D is highly structured and quite sparse, with the center band
of submatrices containing stripes of zero-valued elements.
   If the impulse response is position invariant, the structure of D does not depend
explicitly on the output array coordinate ( m 1, m2 ) . Also,


    \mathbf{D}_{m_2, n_2} = \mathbf{D}_{m_2 + 1, n_2 + 1}    (7.1-19)




As a result, the columns of D are shifted versions of the first column. Under these
conditions, the finite-area superposition operator is known as the finite-area convo-
lution operator. Figure 7.1-5a contains a notational example of the finite-area con-
volution operator for a 2 × 2 (N = 2) input data array, a 4 × 4 (M = 4) output data
array, and a 3 × 3 (L = 3) impulse response array. The integer pairs (i, j) at each ele-
ment of D represent the element (i, j) of H ( i, j ). The basic structure of D can be seen
more clearly in the larger matrix depicted in Figure 7.1-5b. In this example, M = 16,

N = 8, L = 9, and the impulse response has a symmetrical Gaussian shape. Note that
D is a 256 × 64 matrix in this example.
   Following the same technique as that leading to Eq. 5.4-7, the matrix form of the
superposition operator may be written as

    \mathbf{Q} = \sum_{m=1}^{M} \sum_{n=1}^{N} \mathbf{D}_{m,n}\,\mathbf{F}\,\mathbf{v}_n \mathbf{u}_m^T    (7.1-20)

If the impulse response is spatially invariant and is of separable form such that

    \mathbf{H} = \mathbf{h}_C \mathbf{h}_R^T    (7.1-21)


where hR and hC are column vectors representing row and column impulse
responses, respectively, then


                                     D = DC ⊗ DR                              (7.1-22)


The matrices D R and DC are M × N matrices of the form


    \mathbf{D}_R = \begin{bmatrix}
    h_R(1) & 0 & \cdots & 0 \\
    h_R(2) & h_R(1) & & \vdots \\
    h_R(3) & h_R(2) & \ddots & 0 \\
    \vdots & \vdots & & h_R(1) \\
    h_R(L) & h_R(L-1) & & \vdots \\
    0 & h_R(L) & \ddots & \vdots \\
    \vdots & & & \vdots \\
    0 & \cdots & 0 & h_R(L)
    \end{bmatrix}    (7.1-23)


The two-dimensional convolution operation may then be computed by sequential
row and column one-dimensional convolutions. Thus

    \mathbf{Q} = \mathbf{D}_C \mathbf{F} \mathbf{D}_R^T    (7.1-24)


In vector form, the general finite-area superposition or convolution operator requires
N^2 L^2 operations if the zero-valued multiplications of D are avoided. The separable
operator of Eq. 7.1-24 can be computed with only NL(M + N) operations.
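
The separability property can be checked numerically; the sketch below (kernels and arrays are illustrative) verifies that a separable 2-D convolution equals a column pass followed by a row pass of 1-D convolutions, which is what Eq. 7.1-24 expresses in matrix form:

```python
import numpy as np
from scipy.signal import convolve2d

# For a separable impulse response H = h_C h_R^T (Eq. 7.1-21), finite-area
# convolution may be computed as sequential column and row 1-D convolutions.
h_R = np.array([1.0, 2.0, 1.0])
h_C = np.array([1.0, 0.0, -1.0])
H = np.outer(h_C, h_R)

F = np.random.default_rng(1).random((6, 6))
Q_direct = convolve2d(F, H, mode='full')

cols = np.apply_along_axis(lambda v: np.convolve(v, h_C, mode='full'), 0, F)      # column pass
Q_sep = np.apply_along_axis(lambda v: np.convolve(v, h_R, mode='full'), 1, cols)  # row pass
print(np.allclose(Q_direct, Q_sep))   # True
```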


7.2. SAMPLED IMAGE SUPERPOSITION AND CONVOLUTION

Many applications in image processing require a discretization of the superposition
integral relating the input and output continuous fields of a linear system. For exam-
ple, image blurring by an optical system, sampling with a finite-area aperture, or
imaging through atmospheric turbulence may be modeled by the superposition inte-
gral equation

    \tilde{G}(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \tilde{F}(\alpha, \beta)\,\tilde{J}(x, y ; \alpha, \beta)\,d\alpha\,d\beta    (7.2-1a)

where F̃(x, y) and G̃(x, y) denote the input and output fields of a linear system,
respectively, and the kernel J̃(x, y ; α, β) represents the impulse response of the linear
system model. In this chapter, a tilde over a variable indicates that the spatial indices
system model. In this chapter, a tilde over a variable indicates that the spatial indices
of the variable are bipolar; that is, they range from negative to positive spatial limits.
In this formulation, the impulse response may change form as a function of its four
indices: the input and output coordinates. If the linear system is space invariant, the
output image field may be described by the convolution integral

    \tilde{G}(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \tilde{F}(\alpha, \beta)\,\tilde{J}(x - \alpha, y - \beta)\,d\alpha\,d\beta    (7.2-1b)

For discrete processing, physical image sampling will be performed on the output
image field. Numerical representation of the integral must also be performed in
order to relate the physical samples of the output field to points on the input field.
    Numerical representation of a superposition or convolution integral is an impor-
tant topic because improper representations may lead to gross modeling errors or
numerical instability in an image processing application. Also, selection of a numer-
ical representation algorithm usually has a significant impact on digital processing
computational requirements.
    As a first step in the discretization of the superposition integral, the output image
field is physically sampled by a ( 2J + 1 ) × ( 2J + 1 ) array of Dirac pulses at a resolu-
tion ∆S to obtain an array whose general term is

    \tilde{G}(j_1 \Delta S, j_2 \Delta S) = \tilde{G}(x, y)\,\delta(x - j_1 \Delta S, y - j_2 \Delta S)    (7.2-2)

where -J ≤ j_i ≤ J. Equal horizontal and vertical spacing of sample pulses is assumed
for notational simplicity. The effect of finite area sample pulses can easily be incor-
porated by replacing the impulse response with J̃(x, y ; α, β) ⊛ P(-x, -y), where
P(-x, -y) represents the pulse shape of the sampling pulse. The delta function may
be brought under the integral sign of the superposition integral of Eq. 7.2-1a to give

    \tilde{G}(j_1 \Delta S, j_2 \Delta S) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \tilde{F}(\alpha, \beta)\,\tilde{J}(j_1 \Delta S, j_2 \Delta S ; \alpha, \beta)\,d\alpha\,d\beta    (7.2-3)

It should be noted that the physical sampling is performed on the observed image
spatial variables (x, y); physical sampling does not affect the dummy variables of
integration ( α, β ) .
    Next, the impulse response must be truncated to some spatial bounds. Thus, let

    \tilde{J}(x, y ; \alpha, \beta) = 0    (7.2-4)

for |x| > T and |y| > T. Then,

    \tilde{G}(j_1 \Delta S, j_2 \Delta S) = \int_{j_1 \Delta S - T}^{j_1 \Delta S + T} \int_{j_2 \Delta S - T}^{j_2 \Delta S + T} \tilde{F}(\alpha, \beta)\,\tilde{J}(j_1 \Delta S, j_2 \Delta S ; \alpha, \beta)\,d\alpha\,d\beta    (7.2-5)


Truncation of the impulse response is equivalent to multiplying the impulse
response by a window function V(x, y), which is unity for |x| < T and |y| < T and
zero elsewhere. By the Fourier convolution theorem, the Fourier spectrum of G(x, y)
is equivalently convolved with the Fourier transform of V(x, y), which is a two-
dimensional sinc function. This distortion of the Fourier spectrum of G(x, y) results
in the introduction of high-spatial-frequency artifacts (a Gibbs phenomenon) at spa-
tial frequency multiples of 2π ⁄ T . Truncation distortion can be reduced by using a
shaped window, such as the Bartlett, Blackman, Hamming, or Hanning windows
(3), which smooth the sharp cutoff effects of a rectangular window. This step is
especially important for image restoration modeling because ill-conditioning of the
superposition operator may lead to severe amplification of the truncation artifacts.
    In the next step of the discrete representation, the continuous ideal image array
F̃(α, β) is represented by mesh points on a rectangular grid of resolution ∆I and
dimension ( 2K + 1 ) × ( 2K + 1 ). This is not a physical sampling process, but merely
an abstract numerical representation whose general term is described by

    \tilde{F}(k_1 \Delta I, k_2 \Delta I) = \tilde{F}(\alpha, \beta)\,\delta(\alpha - k_1 \Delta I, \beta - k_2 \Delta I)    (7.2-6)

where K iL ≤ k i ≤ K iU , with K iU and K iL denoting the upper and lower index limits.
   If the ultimate objective is to estimate the continuous ideal image field by pro-
cessing the physical observation samples, the mesh spacing ∆I should be fine
enough to satisfy the Nyquist criterion for the ideal image. That is, if the spectrum of
the ideal image is bandlimited and the limits are known, the mesh spacing should be
set at the corresponding Nyquist spacing. Ideally, this will permit perfect interpola-
tion of the estimated points F̃(k_1 ∆I, k_2 ∆I) to reconstruct F̃(x, y).
   The continuous integration of Eq. 7.2-5 can now be approximated by a discrete
summation by employing a quadrature integration formula (4). The physical image
samples may then be expressed as
    \tilde{G}(j_1 \Delta S, j_2 \Delta S) = \sum_{k_1 = K_{1L}}^{K_{1U}} \sum_{k_2 = K_{2L}}^{K_{2U}} \tilde{F}(k_1 \Delta I, k_2 \Delta I)\,\tilde{W}(k_1, k_2)\,\tilde{J}(j_1 \Delta S, j_2 \Delta S ; k_1 \Delta I, k_2 \Delta I)    (7.2-7)

where W̃(k_1, k_2) is a weighting coefficient for the particular quadrature formula
employed. Usually, a rectangular quadrature formula is used, and the weighting
coefficients are unity. In any case, it is notationally convenient to lump the weight-
ing coefficient and the impulse response function together so that

    \tilde{H}(j_1 \Delta S, j_2 \Delta S ; k_1 \Delta I, k_2 \Delta I) = \tilde{W}(k_1, k_2)\,\tilde{J}(j_1 \Delta S, j_2 \Delta S ; k_1 \Delta I, k_2 \Delta I)    (7.2-8)

Then,
    \tilde{G}(j_1 \Delta S, j_2 \Delta S) = \sum_{k_1 = K_{1L}}^{K_{1U}} \sum_{k_2 = K_{2L}}^{K_{2U}} \tilde{F}(k_1 \Delta I, k_2 \Delta I)\,\tilde{H}(j_1 \Delta S, j_2 \Delta S ; k_1 \Delta I, k_2 \Delta I)    (7.2-9)
                              k 1 = K 1L k 2 = K 2L

                                ˜
Again, it should be noted that H is not spatially discretized; the function is simply
evaluated at its appropriate spatial argument. The limits of summation of Eq. 7.2-9
are

    K_{iL} = \left[ j_i \frac{\Delta S}{\Delta I} - \frac{T}{\Delta I} \right]_N \qquad K_{iU} = \left[ j_i \frac{\Delta S}{\Delta I} + \frac{T}{\Delta I} \right]_N    (7.2-10)
                                            ∆I ∆I            N                         ∆I ∆I           N

where [ · ] N denotes the nearest integer value of the argument.
    Figure 7.2-1 provides an example relating actual physical sample values
G̃(j_1 ∆S, j_2 ∆S) to mesh points F̃(k_1 ∆I, k_2 ∆I) on the ideal image field. In this exam-
ple, the mesh spacing is twice as large as the physical sample spacing. In the figure,




FIGURE 7.2-1. Relationship of physical image samples to mesh points on an ideal image
field for numerical representation of a superposition integral.




FIGURE 7.2-2. Relationship between regions of physical samples and mesh points for
numerical representation of a superposition integral.



the values of the impulse response function that are utilized in the summation of
Eq. 7.2-9 are represented as dots.
     An important observation should be made about the discrete model of Eq. 7.2-9
for a sampled superposition integral; the physical area of the ideal image field
F̃(x, y) containing mesh points contributing to physical image samples is larger
than the sample image G̃(j_1 ∆S, j_2 ∆S) regardless of the relative number of physical
samples and mesh points. The dimensions of the two image fields, as shown in
Figure 7.2-2, are related by


                                    J ∆S + T = K ∆I                             (7.2-11)

to within an accuracy of one sample spacing.
     At this point in the discussion, a discrete and finite model for the sampled super-
position integral has been obtained in which the physical samples G̃(j_1 ∆S, j_2 ∆S)
are related to points on an ideal image field F̃(k_1 ∆I, k_2 ∆I) by a discrete mathemati-
cal superposition operation. This discrete superposition is an approximation to con-
tinuous superposition because of the truncation of the impulse response function
J̃(x, y ; α, β) and quadrature integration. The truncation approximation can, of
course, be made arbitrarily small by extending the bounds of definition of the
impulse response, but at the expense of large dimensionality. Also, the quadrature
integration approximation can be improved by use of complicated formulas of
quadrature, but again the price paid is computational complexity. It should be noted,
however, that discrete superposition is a perfect approximation to continuous super-
position if the spatial functions of Eq. 7.2-1 are all bandlimited and the physical


sampling and numerical representation periods are selected to be the corresponding
Nyquist period (5).
   It is often convenient to reformulate Eq. 7.2-9 into vector-space form. Toward
this end, the arrays G̃ and F̃ are reindexed to M × M and N × N arrays, respectively,
such that all indices are positive. Let

    F(n_1 \Delta I, n_2 \Delta I) = \tilde{F}(k_1 \Delta I, k_2 \Delta I)    (7.2-12a)

where n i = k i + K + 1 and let

    G(m_1 \Delta S, m_2 \Delta S) = \tilde{G}(j_1 \Delta S, j_2 \Delta S)    (7.2-12b)

where m i = j i + J + 1 . Also, let the impulse response be redefined such that

    H(m_1 \Delta S, m_2 \Delta S ; n_1 \Delta I, n_2 \Delta I) = \tilde{H}(j_1 \Delta S, j_2 \Delta S ; k_1 \Delta I, k_2 \Delta I)    (7.2-12c)


Figure 7.2-3 illustrates the geometrical relationship between these functions.
   The discrete superposition relationship of Eq. 7.2-9 for the shifted arrays
becomes
    G(m_1 \Delta S, m_2 \Delta S) = \sum_{n_1 = N_{1L}}^{N_{1U}} \sum_{n_2 = N_{2L}}^{N_{2U}} F(n_1 \Delta I, n_2 \Delta I)\,H(m_1 \Delta S, m_2 \Delta S ; n_1 \Delta I, n_2 \Delta I)    (7.2-13)

for 1 ≤ m_i ≤ M, where

    N_{iL} = \left[ m_i \frac{\Delta S}{\Delta I} \right]_N \qquad N_{iU} = \left[ m_i \frac{\Delta S}{\Delta I} + \frac{2T}{\Delta I} \right]_N


Following the techniques outlined in Chapter 5, the vectors g and f may be formed
by column scanning the matrices G and F to obtain

                                                         g = Bf                                           (7.2-14)

where B is an M^2 × N^2 matrix of the form


    \mathbf{B} = \begin{bmatrix}
    \mathbf{B}_{1,1} & \mathbf{B}_{1,2} & \cdots & \mathbf{B}_{1,L} & \mathbf{0} & \cdots & \mathbf{0} \\
    \mathbf{0} & \mathbf{B}_{2,2} & & & \ddots & & \vdots \\
    \vdots & & \ddots & & & \ddots & \mathbf{0} \\
    \mathbf{0} & \cdots & \mathbf{0} & \mathbf{B}_{M,N-L+1} & & \cdots & \mathbf{B}_{M,N}
    \end{bmatrix}    (7.2-15)




                                  FIGURE 7.2-3. Sampled image arrays.



The general term of B is defined as


    \mathbf{B}_{m_2, n_2}(m_1, n_1) = H(m_1 \Delta S, m_2 \Delta S ; n_1 \Delta I, n_2 \Delta I)    (7.2-16)


for 1 ≤ m i ≤ M and m i ≤ n i ≤ m i + L – 1 where L = [ 2T ⁄ ∆I ] N represents the nearest
odd integer dimension of the impulse response in resolution units ∆I . For descrip-
tional simplicity, B is called the blur matrix of the superposition integral.
   If the impulse response function is translation invariant such that


             H ( m 1 ∆S, m 2 ∆S ; n 1 ∆I, n 2 ∆I ) = H ( m 1 ∆S – n 1 ∆I, m2 ∆S – n 2 ∆I )                  (7.2-17)


then the discrete superposition operation of Eq. 7.2-13 becomes a discrete convolu-
tion operation of the form
    G(m_1 \Delta S, m_2 \Delta S) = \sum_{n_1 = N_{1L}}^{N_{1U}} \sum_{n_2 = N_{2L}}^{N_{2U}} F(n_1 \Delta I, n_2 \Delta I)\,H(m_1 \Delta S - n_1 \Delta I, m_2 \Delta S - n_2 \Delta I)    (7.2-18)


   If the physical sample and quadrature mesh spacings are equal, the general term
of the blur matrix assumes the form


    \mathbf{B}_{m_2, n_2}(m_1, n_1) = H(m_1 - n_1 + L, m_2 - n_2 + L)    (7.2-19)


                 11 12 13
           H = 21 22 23
                 31 32 33

                 33 23 13            0 32 22 12                 0 31 21 11             0   0    0     0       0
                  0 33 23 13                 0 32 22 12                  0 31 21 11        0    0     0       0
           B=
                  0    0       0     0 33 23 13                 0 32 22 12             0 31 21 11             0
                  0    0       0     0       0 33 23 13                  0 32 22 12        0 31 21 11
                                                              (a)




                                                              (b)
FIGURE 7.2-4. Sampled infinite area convolution operators: (a) General impulse array,
M = 2, N = 4, L = 3; (b) Gaussian-shaped impulse array, M = 8, N = 16, L = 9.



In Eq. 7.2-19, the mesh spacing variable ∆I is understood. In addition,


    \mathbf{B}_{m_2, n_2} = \mathbf{B}_{m_2 + 1, n_2 + 1}    (7.2-20)



Consequently, the rows of B are shifted versions of the first row. The operator B
then becomes a sampled infinite area convolution operator, and the series form rep-
resentation of Eq. 7.2-19 reduces to

    G(m_1 \Delta S, m_2 \Delta S) = \sum_{n_1 = m_1}^{m_1 + L - 1} \sum_{n_2 = m_2}^{m_2 + L - 1} F(n_1, n_2)\,H(m_1 - n_1 + L, m_2 - n_2 + L)    (7.2-21)


where the sampling spacing is understood.
   Figure 7.2-4a is a notational example of the sampled image convolution operator
for a 4 × 4 (N = 4) data array, a 2 × 2 (M = 2) filtered data array, and a 3 × 3
(L = 3) impulse response array. An extension to larger dimension is shown in Figure
7.2-4b for M = 8, N = 16, L = 9 and a Gaussian-shaped impulse response.
   When the impulse response is spatially invariant and orthogonally separable,


                                                   B = BC ⊗ BR                                                    (7.2-22)


where B R and B C are M × N matrices of the form


    \mathbf{B}_R = \begin{bmatrix}
    h_R(L) & h_R(L-1) & \cdots & h_R(1) & 0 & \cdots & 0 \\
    0 & h_R(L) & & & \ddots & & \vdots \\
    \vdots & & \ddots & & & \ddots & 0 \\
    0 & \cdots & 0 & h_R(L) & & \cdots & h_R(1)
    \end{bmatrix}    (7.2-23)

The two-dimensional convolution operation then reduces to sequential row and col-
umn convolutions of the matrix form of the image array. Thus


    \mathbf{G} = \mathbf{B}_C \mathbf{F} \mathbf{B}_R^T    (7.2-24)


The superposition or convolution operator expressed in vector form requires M^2 L^2
operations if the zero multiplications of B are avoided. A separable convolution
operator can be computed in matrix form with only ML(M + N) operations.
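
A short sketch (with illustrative sizes matching Figure 7.2-4b, and scipy's 'valid' convolution mode standing in for the discretized superposition when ∆S = ∆I) shows the sampled-image size relationship M = N - L + 1:

```python
import numpy as np
from scipy.signal import convolve2d

# Sampled image convolution with equal sample and mesh spacings behaves like a
# "valid" convolution: an N x N ideal-image mesh and an L x L impulse response
# yield M x M physical samples with M = N - L + 1 (compare Figure 7.2-4b).
N, L = 16, 9
F = np.random.default_rng(2).random((N, N))
H = np.outer(np.hanning(L), np.hanning(L))
H /= H.sum()                        # normalized, Gaussian-like impulse response
G = convolve2d(F, H, mode='valid')
print(G.shape)                      # (8, 8), i.e. M = N - L + 1
```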


7.3. CIRCULANT SUPERPOSITION AND CONVOLUTION

In circulant superposition (2), the input data, the processed output, and the impulse
response arrays are all assumed spatially periodic over a common period. To unify
the presentation, these arrays will be defined in terms of the spatially limited arrays
considered previously. First, let the N × N data array F ( n1 ,n 2 ) be embedded in the
upper left corner of a J × J array ( J > N ) of zeros, giving


    F_E(n_1, n_2) = \begin{cases} F(n_1, n_2) & \text{for } 1 \le n_i \le N \\ 0 & \text{for } N + 1 \le n_i \le J \end{cases}    (7.3-1)


In a similar manner, an extended impulse response array is created by embedding
the spatially limited impulse array in a J × J matrix of zeros. Thus, let


    H_E(l_1, l_2 ; m_1, m_2) = \begin{cases} H(l_1, l_2 ; m_1, m_2) & \text{for } 1 \le l_i \le L \\ 0 & \text{for } L + 1 \le l_i \le J \end{cases}    (7.3-2)
                                        


Periodic arrays F E ( n 1, n 2 ) and H E ( l 1, l 2 ; m 1, m 2 ) are now formed by replicating the
extended arrays over the spatial period J. Then, the circulant superposition of these
functions is defined as


    K_E(m_1, m_2) = \sum_{n_1 = 1}^{J} \sum_{n_2 = 1}^{J} F_E(n_1, n_2)\,H_E(m_1 - n_1 + 1, m_2 - n_2 + 1 ; m_1, m_2)    (7.3-3)



Similarity of this equation with Eq. 7.1-6 describing finite-area superposition is evi-
dent. In fact, if J is chosen such that J = N + L – 1, the terms F E ( n 1, n 2 ) = F ( n 1, n 2 )
for 1 ≤ ni ≤ N . The similarity of the circulant superposition operation and the sam-
pled image superposition operation should also be noted. These relations become
clearer in the vector-space representation of the circulant superposition operation.
   Let the arrays F_E and K_E be expressed in vector form as the J^2 × 1 vectors f_E and
kE, respectively. Then, the circulant superposition operator can be written as


                                                            k E = CfE                                          (7.3-4)

where C is a J^2 × J^2 matrix containing elements of the array H_E. The circulant
superposition operator can then be conveniently expressed in terms of J × J subma-
trices Cmn as given by



    \mathbf{C} = \begin{bmatrix}
    \mathbf{C}_{1,1} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{C}_{1,J-L+2} & \cdots & \mathbf{C}_{1,J} \\
    \mathbf{C}_{2,1} & \mathbf{C}_{2,2} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{0} & \ddots & \vdots \\
    \vdots & & \ddots & & & & \mathbf{0} & \mathbf{C}_{L-1,J} \\
    \mathbf{C}_{L,1} & \mathbf{C}_{L,2} & & \ddots & & & & \mathbf{0} \\
    \mathbf{0} & \mathbf{C}_{L+1,2} & & & \ddots & & & \vdots \\
    \vdots & & \ddots & & & \ddots & & \vdots \\
    \mathbf{0} & \cdots & & \mathbf{0} & \mathbf{C}_{J,J-L+1} & \mathbf{C}_{J,J-L+2} & \cdots & \mathbf{C}_{J,J}
    \end{bmatrix}    (7.3-5)



where


    \mathbf{C}_{m_2, n_2}(m_1, n_1) = H_E(k_1, k_2 ; m_1, m_2)    (7.3-6)


               11 12 13
          H = 21 22 23
               31 32 33

                11    0 31 21     0   0   0    0 13     0 33 23 12      0 32 22
                21 11     0 31    0   0   0    0 23 13      0 33 22 12      0 32
                31 21 11      0   0   0   0    0 33 23 13       0 32 22 12      0
                  0 31 21 11      0   0   0    0    0 33 23 13      0 32 22 12
                12    0 32 22 11      0 31 21       0   0   0   0 13    0 33 23
                22 12     0 32 21 11      0 31      0   0   0   0 23 13     0 33
                32 22 12      0 31 21 11       0    0   0   0   0 33 23 13      0
                  0 32 22 12      0 31 21 11        0   0   0   0   0 33 23 13
          C=
                13    0 33 23 12      0 32 22 11        0 31 21     0   0   0   0
                23 13     0 33 22 12      0 32 21 11        0 31    0   0   0   0
                33 23 13      0 32 22 12       0 31 21 11       0   0   0   0   0
                  0 33 23 13      0 32 22 12        0 31 21 11      0   0   0   0
                  0   0   0   0 13    0 33 23 12        0 32 22 11      0 31 21
                  0   0   0   0 23 13     0 33 22 12        0 32 21 11      0 31
                  0   0   0   0 33 23 13       0 32 22 12       0 31 21 11      0
                  0   0   0   0   0 33 23 13        0 32 22 12      0 31 21 11
                                              (a)




                                              (b)
FIGURE 7.3-1. Circulant convolution operators: (a) General impulse array, J = 4, L = 3;
(b) Gaussian-shaped impulse array, J = 16, L = 9.


for 1 ≤ n i ≤ J and 1 ≤ m i ≤ J with k i = ( m i – n i + 1 ) modulo J and HE(0, 0) = 0. It
should be noted that each row and column of C contains L nonzero submatrices. If
the impulse response array is spatially invariant, then


    \mathbf{C}_{m_2, n_2} = \mathbf{C}_{m_2 + 1, n_2 + 1}    (7.3-7)



and the submatrices of the rows (columns) can be obtained by a circular shift of the
first row (column). Figure 7.3-1a illustrates the circulant convolution operator for
4 × 4 (J = 4) data and filtered data arrays and for a 3 × 3 (L = 3) impulse response
array. In Figure 7.3-1b, the operator is shown for J = 16 and L = 9 with a Gaussian-
shaped impulse response.
    Finally, when the impulse response is spatially invariant and orthogonally separa-
ble,


                                             C = CC ⊗ CR                                               (7.3-8)


where C R and CC are J × J matrices of the form


    \mathbf{C}_R = \begin{bmatrix}
    h_R(1) & 0 & \cdots & 0 & h_R(L) & \cdots & h_R(3) & h_R(2) \\
    h_R(2) & h_R(1) & & & 0 & \ddots & & h_R(3) \\
    \vdots & & \ddots & & & & \ddots & \vdots \\
    h_R(L-1) & & & & & & 0 & h_R(L) \\
    h_R(L) & h_R(L-1) & & & & & & 0 \\
    0 & h_R(L) & & & & & & \vdots \\
    \vdots & & \ddots & & & & & 0 \\
    0 & \cdots & 0 & h_R(L) & \cdots & \cdots & h_R(2) & h_R(1)
    \end{bmatrix}    (7.3-9)



Two-dimensional circulant convolution may then be computed as


                                     K_E = C_C F_E C_R^T                                  (7.3-10)
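Eq. 7.3-10 lends itself to a direct numerical check. The sketch below is my own illustration (the sizes J = 8 and L = 3 and the impulse responses are arbitrary choices): it builds the J × J circulant operator of Eq. 7.3-9 in NumPy and confirms that C_C F_E C_R^T agrees with two-dimensional circular convolution computed with the FFT.

    import numpy as np

    def circulant_operator(h, J):
        # First column is h(1), ..., h(L) followed by zeros (Eq. 7.3-9);
        # each successive column is a circular downward shift of the previous one.
        col = np.zeros(J)
        col[:len(h)] = h
        return np.column_stack([np.roll(col, n) for n in range(J)])

    J = 8                                  # extended array size (assumed)
    hC = np.array([1.0, 2.0, 1.0])         # column impulse response, L = 3
    hR = np.array([1.0, 0.0, -1.0])        # row impulse response
    FE = np.random.rand(J, J)              # extended (zero-padded) input array

    CC = circulant_operator(hC, J)
    CR = circulant_operator(hR, J)
    KE = CC @ FE @ CR.T                    # Eq. 7.3-10

    # Independent reference: circular convolution with the separable impulse array.
    H = np.zeros((J, J))
    H[:len(hC), :len(hR)] = np.outer(hC, hR)
    KE_fft = np.real(np.fft.ifft2(np.fft.fft2(FE) * np.fft.fft2(H)))
    assert np.allclose(KE, KE_fft)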


7.4. SUPERPOSITION AND CONVOLUTION OPERATOR RELATIONSHIPS

The elements of the finite-area superposition operator D and the elements of the
sampled image superposition operator B can be extracted from circulant superposi-
tion operator C by use of selection matrices defined as (2)

                                  S_{1J}^{(K)} = [ I_K   0 ]                              (7.4-1a)

                                  S_{2J}^{(K)} = [ 0_A   I_K   0 ]                        (7.4-1b)


where S_{1J}^{(K)} and S_{2J}^{(K)} are K × J matrices, I_K is a K × K identity matrix, and 0_A is a
K × (L - 1) matrix. For future reference, it should be noted that the generalized
inverses of S_1 and S_2 and their transposes are

                              [ S_{1J}^{(K)} ]^- = [ S_{1J}^{(K)} ]^T                     (7.4-2a)

                              [ [ S_{1J}^{(K)} ]^T ]^- = S_{1J}^{(K)}                     (7.4-2b)

                              [ S_{2J}^{(K)} ]^- = [ S_{2J}^{(K)} ]^T                     (7.4-2c)

                              [ [ S_{2J}^{(K)} ]^T ]^- = S_{2J}^{(K)}                     (7.4-2d)


Examination of the structure of the various superposition operators indicates that

                 D = [ S_{1J}^{(M)} ⊗ S_{1J}^{(M)} ] C [ S_{1J}^{(N)} ⊗ S_{1J}^{(N)} ]^T   (7.4-3a)

                 B = [ S_{2J}^{(M)} ⊗ S_{2J}^{(M)} ] C [ S_{1J}^{(N)} ⊗ S_{1J}^{(N)} ]^T   (7.4-3b)


That is, the matrix D is obtained by extracting the first M rows and N columns of sub-
matrices Cmn of C. The first M rows and N columns of each submatrix are also
extracted. A similar explanation holds for the extraction of B from C. In Figure 7.3-1,
the elements of C to be extracted to form D and B are indicated by boxes.
   From the definition of the extended input data array of Eq. 7.3-1, it is obvious
that the spatially limited input data vector f can be obtained from the extended data
vector fE by the selection operation

                             f = [ S_{1J}^{(N)} ⊗ S_{1J}^{(N)} ] f_E                      (7.4-4a)

and furthermore,


                             f_E = [ S_{1J}^{(N)} ⊗ S_{1J}^{(N)} ]^T f                    (7.4-4b)


It can also be shown that the output vector for finite-area superposition can be
obtained from the output vector for circulant superposition by the selection opera-
tion

                             q = [ S_{1J}^{(M)} ⊗ S_{1J}^{(M)} ] k_E                      (7.4-5a)

The inverse relationship also exists in the form

                             k_E = [ S_{1J}^{(M)} ⊗ S_{1J}^{(M)} ]^T q                    (7.4-5b)


For sampled image superposition


                             g = [ S_{2J}^{(M)} ⊗ S_{2J}^{(M)} ] k_E                      (7.4-6)


but it is not possible to obtain kE from g because of the underdeterminacy of the
sampled image superposition operator. Expressing both q and kE of Eq. 7.4-5a in
matrix form leads to

          Q = Σ_{m=1}^{M} Σ_{n=1}^{J} M_m^T [ S_{1J}^{(M)} ⊗ S_{1J}^{(M)} ] N_n K_E v_n u_m^T        (7.4-7)


As a result of the separability of the selection operator, Eq. 7.4-7 reduces to

                             Q = [ S_{1J}^{(M)} ] K_E [ S_{1J}^{(M)} ]^T                  (7.4-8)


Similarly, for Eq. 7.4-6 describing sampled infinite-area superposition,




        FIGURE 7.4-1. Location of elements of processed data Q and G from KE.

                             G = [ S_{2J}^{(M)} ] K_E [ S_{2J}^{(M)} ]^T                  (7.4-9)


Figure 7.4-1 illustrates the locations of the elements of G and Q extracted from KE
for finite-area and sampled infinite-area superposition.
   In summary, it has been shown that the output data vectors for either finite-area
or sampled image superposition can be obtained by a simple selection operation on
the output data vector of circulant superposition. Computational advantages that can
be realized from this result are considered in Chapter 9.
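As a concrete, non-authoritative illustration of the selection relations, the sketch below builds S1 and S2 of Eq. 7.4-1 in NumPy and applies Eqs. 7.4-8 and 7.4-9 to a stand-in array K_E; the dimensions N, L, and J = N + L - 1 are assumptions chosen only for this example.

    import numpy as np

    def S1(K, J):
        # Eq. 7.4-1a: [ I_K  0 ]
        return np.hstack([np.eye(K), np.zeros((K, J - K))])

    def S2(K, J, L):
        # Eq. 7.4-1b: [ 0_A  I_K  0 ], with 0_A of size K x (L - 1)
        return np.hstack([np.zeros((K, L - 1)), np.eye(K), np.zeros((K, J - K - L + 1))])

    N, L = 8, 3                  # example input and impulse response sizes
    J = N + L - 1                # circulant (extended) array size
    KE = np.random.rand(J, J)    # stand-in for a circulant superposition output

    M1 = N + L - 1               # finite-area output size
    Q = S1(M1, J) @ KE @ S1(M1, J).T          # Eq. 7.4-8

    M2 = N - L + 1               # sampled-image output size
    G = S2(M2, J, L) @ KE @ S2(M2, J, L).T    # Eq. 7.4-9

    # The selection operation amounts to simple array slicing of KE.
    assert np.allclose(Q, KE[:M1, :M1])
    assert np.allclose(G, KE[L-1:L-1+M2, L-1:L-1+M2])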


REFERENCES

1.   J. F. Abramatic and O. D. Faugeras, “Design of Two-Dimensional FIR Filters from
     Small Generating Kernels,” Proc. IEEE Conference on Pattern Recognition and Image
     Processing, Chicago, May 1978.
2.   W. K. Pratt, “Vector Formulation of Two Dimensional Signal Processing Operations,”
     Computer Graphics and Image Processing, 4, 1, March 1975, 1–24.
3.   A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice Hall,
     Englewood Cliffs, NJ, 1975.
4.   T. R. McCalla, Introduction to Numerical Methods and FORTRAN Programming, Wiley,
     New York, 1967.
5.   A. Papoulis, Systems and Transforms with Applications in Optics, 2nd ed., McGraw-
     Hill, New York, 1981.




8
UNITARY TRANSFORMS




Two-dimensional unitary transforms have found two major applications in image
processing. Transforms have been utilized to extract features from images. For
example, with the Fourier transform, the average value or dc term is proportional to
the average image amplitude, and the high-frequency terms (ac term) give an indica-
tion of the amplitude and orientation of edges within an image. Dimensionality
reduction in computation is a second image processing application. Stated simply,
those transform coefficients that are small may be excluded from processing opera-
tions, such as filtering, without much loss in processing accuracy. Another applica-
tion in the field of image coding is transform image coding, in which a bandwidth
reduction is achieved by discarding or grossly quantizing low-magnitude transform
coefficients. In this chapter we consider the properties of unitary transforms com-
monly used in image processing.


8.1. GENERAL UNITARY TRANSFORMS

A unitary transform is a specific type of linear transformation in which the basic lin-
ear operation of Eq. 5.4-1 is exactly invertible and the operator kernel satisfies cer-
tain orthogonality conditions (1,2). The forward unitary transform of the N 1 × N 2
image array F ( n1, n 2 ) results in a N1 × N 2 transformed image array as defined by


                F(m_1, m_2) = Σ_{n_1=1}^{N_1} Σ_{n_2=1}^{N_2} F(n_1, n_2) A(n_1, n_2; m_1, m_2)      (8.1-1)





where A ( n1, n 2 ; m1 , m 2 ) represents the forward transform kernel. A reverse or
inverse transformation provides a mapping from the transform domain to the image
space as given by
                F(n_1, n_2) = Σ_{m_1=1}^{N_1} Σ_{m_2=1}^{N_2} F(m_1, m_2) B(n_1, n_2; m_1, m_2)      (8.1-2)

where B ( n1, n 2 ; m 1, m2 ) denotes the inverse transform kernel. The transformation is
unitary if the following orthonormality conditions are met:


        Σ_{m_1} Σ_{m_2} A(n_1, n_2; m_1, m_2) A*(j_1, j_2; m_1, m_2) = δ(n_1 - j_1, n_2 - j_2)       (8.1-3a)

        Σ_{m_1} Σ_{m_2} B(n_1, n_2; m_1, m_2) B*(j_1, j_2; m_1, m_2) = δ(n_1 - j_1, n_2 - j_2)       (8.1-3b)

        Σ_{n_1} Σ_{n_2} A(n_1, n_2; m_1, m_2) A*(n_1, n_2; k_1, k_2) = δ(m_1 - k_1, m_2 - k_2)       (8.1-3c)

        Σ_{n_1} Σ_{n_2} B(n_1, n_2; m_1, m_2) B*(n_1, n_2; k_1, k_2) = δ(m_1 - k_1, m_2 - k_2)       (8.1-3d)



   The transformation is said to be separable if its kernels can be written in the form


                           A ( n1, n 2 ; m 1, m 2 ) = A C ( n 1, m 1 )AR ( n 2, m 2 )                      (8.1-4a)

                           B ( n1, n 2 ; m 1, m 2 ) = B C ( n 1, m 1 )BR ( n 2, m 2 )                      (8.1-4b)


where the kernel subscripts indicate row and column one-dimensional transform
operations. A separable two-dimensional unitary transform can be computed in two
steps. First, a one-dimensional transform is taken along each column of the image,
yielding
                           P(m_1, n_2) = Σ_{n_1=1}^{N_1} F(n_1, n_2) A_C(n_1, m_1)                   (8.1-5)

Next, a second one-dimensional unitary transform is taken along each row of
P ( m1, n 2 ), giving

                           F(m_1, m_2) = Σ_{n_2=1}^{N_2} P(m_1, n_2) A_R(n_2, m_2)                   (8.1-6)
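The two-step computation can be sketched numerically; the code below is my own example, using the unitary DFT matrix as one possible separable kernel and the matrix indexing convention of Eq. 8.1-13a rather than the kernel ordering of Eq. 8.1-5. It transforms the columns, then the rows, and checks the result against the full double sum.

    import numpy as np

    N = 8
    F = np.random.rand(N, N)

    n = np.arange(N)
    AC = np.exp(-2j * np.pi * np.outer(n, n) / N) / np.sqrt(N)   # unitary 1-D DFT matrix
    AR = AC.copy()

    P = AC @ F            # step 1: one-dimensional transform along each column
    Ft = P @ AR.T         # step 2: one-dimensional transform along each row of P

    # Same result as applying the separable two-dimensional kernel directly.
    Ft_direct = np.einsum('mj,nk,jk->mn', AC, AR, F)
    assert np.allclose(Ft, Ft_direct)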

   Unitary transforms can conveniently be expressed in vector-space form (3). Let F
and f denote the matrix and vector representations of an image array, and let F̃ and
f̃ be the matrix and vector forms of the transformed image. Then, the two-dimen-
sional unitary transform written in vector form is given by

                                         f̃ = A f                                         (8.1-7)

where A is the forward transformation matrix. The reverse transform is

                                         f = B f̃                                         (8.1-8)

where B represents the inverse transformation matrix. It is obvious then that

                                         B = A^{-1}                                       (8.1-9)


For a unitary transformation, the matrix inverse is given by

                                      A^{-1} = A^{*T}                                     (8.1-10)


and A is said to be a unitary matrix. A real unitary matrix is called an orthogonal
matrix. For such a matrix,

                                      A^{-1} = A^T                                        (8.1-11)


If the transform kernels are separable such that


                                   A = AC ⊗ AR                                  (8.1-12)


where A R and A C are row and column unitary transform matrices, then the trans-
formed image matrix can be obtained from the image matrix by

                                     F̃ = A_C F A_R^T                                      (8.1-13a)


The inverse transformation is given by

                                     F = B_C F̃ B_R^T                                      (8.1-13b)


where B_C = A_C^{-1} and B_R = A_R^{-1}.
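The consistency of the direct product form (Eq. 8.1-12) with the matrix form (Eq. 8.1-13a) is easy to verify numerically. A sketch follows; note that NumPy's ravel() scans rows rather than columns, so the Kronecker ordering below is the one appropriate to row scanning — a checking device only, not the text's column-scanning convention.

    import numpy as np

    N = 4
    F = np.random.rand(N, N)

    # Two arbitrary real unitary (orthogonal) matrices from QR factorizations.
    AC, _ = np.linalg.qr(np.random.rand(N, N))
    AR, _ = np.linalg.qr(np.random.rand(N, N))

    Ft = AC @ F @ AR.T                    # matrix form, Eq. 8.1-13a

    A = np.kron(AC, AR)                   # direct product of the row and column operators
    assert np.allclose(A @ F.ravel(), Ft.ravel())       # vector form, Eq. 8.1-7

    # Unitarity: for these real matrices the inverse is the transpose (Eq. 8.1-11).
    assert np.allclose(A.T @ A, np.eye(N * N))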
   Separable unitary transforms can also be expressed in a hybrid series–vector-
space form as a sum of vector outer products. Let a_C(n_1) and a_R(n_2) represent rows
n_1 and n_2 of the unitary matrices A_C and A_R, respectively. Then, it is easily verified
that

                 F̃ = Σ_{n_1=1}^{N_1} Σ_{n_2=1}^{N_2} F(n_1, n_2) a_C(n_1) a_R^T(n_2)        (8.1-14a)


Similarly,

                 F = Σ_{m_1=1}^{N_1} Σ_{m_2=1}^{N_2} F̃(m_1, m_2) b_C(m_1) b_R^T(m_2)        (8.1-14b)


where b C ( m 1 ) and b R ( m 2 ) denote rows m1 and m2 of the unitary matrices BC and
BR, respectively. The vector outer products of Eq. 8.1-14 form a series of matrices,
called basis matrices, that provide matrix decompositions of the image matrix F or
its unitary transformation F̃.
    There are several ways in which a unitary transformation may be viewed. An
image transformation can be interpreted as a decomposition of the image data into a
generalized two-dimensional spectrum (4). Each spectral component in the trans-
form domain corresponds to the amount of energy of the spectral function within the
original image. In this context, the concept of frequency may now be generalized to
include transformations by functions other than sine and cosine waveforms. This
type of generalized spectral analysis is useful in the investigation of specific decom-
positions that are best suited for particular classes of images. Another way to visual-
ize an image transformation is to consider the transformation as a multidimensional
rotation of coordinates. One of the major properties of a unitary transformation is
that measure is preserved. For example, the mean-square difference between two
images is equal to the mean-square difference between the unitary transforms of the
images. A third approach to the visualization of image transformation is to consider
Eq. 8.1-2 as a means of synthesizing an image with a set of two-dimensional mathe-
matical functions B ( n1, n 2 ; m 1, m 2 ) for a fixed transform domain coordinate
( m 1, m2 ) . In this interpretation, the kernel B ( n 1, n 2 ; m 1, m 2 ) is called a two-dimen-
sional basis function and the transform coefficient F ( m1, m 2 ) is the amplitude of the
basis function required in the synthesis of the image.
    In the remainder of this chapter, to simplify the analysis of two-dimensional uni-
tary transforms, all image arrays are considered square of dimension N. Further-
more, when expressing transformation operations in series form, as in Eqs. 8.1-1
and 8.1-2, the indices are renumbered and renamed. Thus the input image array is
denoted by F(j, k) for j, k = 0, 1, 2,..., N - 1, and the transformed image array is rep-
resented by F(u, v) for u, v = 0, 1, 2,..., N - 1. With these definitions, the forward uni-
tary transform becomes

                   F(u, v) = Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) A(j, k; u, v)             (8.1-15a)



and the inverse transform is

                   F(j, k) = Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} F(u, v) B(j, k; u, v)             (8.1-15b)


8.2. FOURIER TRANSFORM

The discrete two-dimensional Fourier transform of an image array is defined in
series form as (5–10)

            F(u, v) = (1/N) Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) exp{ (-2πi/N)(uj + vk) }   (8.2-1a)

where i = √(-1), and the discrete inverse transform is given by

            F(j, k) = (1/N) Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} F(u, v) exp{ (2πi/N)(uj + vk) }    (8.2-1b)

The indices (u, v) are called the spatial frequencies of the transformation in analogy
with the continuous Fourier transform. It should be noted that Eq. 8.2-1 is not uni-
versally accepted by all authors; some prefer to place all scaling constants in the
inverse transform equation, while still others employ a reversal in the sign of the
kernels.
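Since Eq. 8.2-1 places a factor of 1/N on both the forward and inverse transforms, the forward transform of an N × N array coincides with NumPy's 'ortho'-normalized FFT. The short check below is an illustration, not part of the text; it also confirms that the dc term equals N times the spatial average of the image.

    import numpy as np

    N = 16
    F = np.random.rand(N, N)

    # Direct separable evaluation of Eq. 8.2-1a.
    u = np.arange(N)
    E = np.exp(-2j * np.pi * np.outer(u, u) / N)
    Ft_direct = E @ F @ E.T / N

    assert np.allclose(Ft_direct, np.fft.fft2(F, norm='ortho'))
    assert np.isclose(Ft_direct[0, 0], N * F.mean())    # dc term, cf. Eq. 8.2-3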
   Because the transform kernels are separable and symmetric, the two-dimensional
transforms can be computed as sequential row and column one-dimensional trans-
forms. The basis functions of the transform are complex exponentials that may be
decomposed into sine and cosine components. The resulting Fourier transform pairs
then become

A(j, k; u, v) = exp{ (-2πi/N)(uj + vk) } = cos{ (2π/N)(uj + vk) } - i sin{ (2π/N)(uj + vk) }    (8.2-2a)

B(j, k; u, v) = exp{ (2πi/N)(uj + vk) } = cos{ (2π/N)(uj + vk) } + i sin{ (2π/N)(uj + vk) }     (8.2-2b)

Figure 8.2-1 shows plots of the sine and cosine components of the one-dimensional
Fourier basis functions for N = 16. It should be observed that the basis functions are
a rough approximation to continuous sinusoids only for low frequencies; in fact, the




               FIGURE 8.2-1 Fourier transform basis functions, N = 16.




highest-frequency basis function is a square wave. Also, there are obvious redun-
dancies between the sine and cosine components.
   The Fourier transform plane possesses many interesting structural properties.
The spectral component at the origin of the Fourier domain

                     F(0, 0) = (1/N) Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k)                   (8.2-3)


is equal to N times the spatial average of the image plane. Making the substitutions
u = u + mN , v = v + nN in Eq. 8.2-1, where m and n are constants, results in




               FIGURE 8.2-2. Periodic image and Fourier transform arrays.



  F(u + mN, v + nN) = (1/N) Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) exp{ (-2πi/N)(uj + vk) } exp{ -2πi(mj + nk) }
                                                                                           (8.2-4)


For all integer values of m and n, the second exponential term of Eq. 8.2-4 assumes
a value of unity, and the transform domain is found to be periodic. Thus, as shown in
Figure 8.2-2a,


                                F ( u + mN, v + nN ) = F ( u, v )                                (8.2-5)


for m, n = 0, ± 1, ± 2, … .
   The two-dimensional Fourier transform of an image is essentially a Fourier series
representation of a two-dimensional field. For the Fourier series representation to be
valid, the field must be periodic. Thus, as shown in Figure 8.2-2b, the original image
must be considered to be periodic horizontally and vertically. The right side of the
image therefore abuts the left side, and the top and bottom of the image are adjacent.
Spatial frequencies along the coordinate axes of the transform plane arise from these
transitions.
   If the image array represents a luminance field, F ( j, k ) will be a real positive
function. However, its Fourier transform will, in general, be complex. Because the
transform domain contains 2N^2 components, the real and imaginary, or phase and
magnitude, components of each coefficient, it might be thought that the Fourier
transformation causes an increase in dimensionality. This, however, is not the case
because F ( u, v ) exhibits a property of conjugate symmetry. From Eq. 8.2-4, with m
and n set to integer values, conjugation yields




                  FIGURE 8.2-3. Fourier transform frequency domain.



  F*(u + mN, v + nN) = (1/N) Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) exp{ (2πi/N)(uj + vk) }   (8.2-6)



By the substitution u = – u and v = – v it can be shown that


                           F ( u, v ) = F * ( – u + mN, – v + nN )                          (8.2-7)


for n = 0, ± 1, ±2, … . As a result of the conjugate symmetry property, almost one-
half of the transform domain samples are redundant; that is, they can be generated
from other transform samples. Figure 8.2-3 shows the transform plane with a set of
redundant components crosshatched. It is possible, of course, to choose the left half-
plane samples rather than the upper plane samples as the nonredundant set.
   Figure 8.2-4 shows a monochrome test image and various versions of its Fourier
transform, as computed by Eq. 8.2-1a, where the test image has been scaled over
unit range 0.0 ≤ F ( j, k ) ≤ 1.0. Because the dynamic range of transform components is
much larger than the exposure range of photographic film, it is necessary to com-
press the coefficient values to produce a useful display. Amplitude compression to a
unit range display array D ( u, v ) can be obtained by clipping large-magnitude values
according to the relation




FIGURE 8.2-4. Fourier transform of the smpte_girl_luma image: (a) original;
(b) clipped magnitude, nonordered; (c) log magnitude, nonordered; (d) log magnitude,
ordered.




                                        1.0                                 if F ( u, v ) ≥ c F max       (8.2-8a)
                                       
                     D ( u, v )      =  F ( u, v )
                                        --------------------
                                           c F max
                                                            -
                                                                            if F ( u, v ) < c F max       (8.2-8b)


where 0.0 < c ≤ 1.0 is the clipping factor and F max is the maximum coefficient
magnitude. Another form of amplitude compression is to take the logarithm of each
component as given by

                    D(u, v) = log{ a + b |F(u, v)| } / log{ a + b F_max }                  (8.2-9)


where a and b are scaling constants. Figure 8.2-4b is a clipped display of the
magnitude of the Fourier transform coefficients. Figure 8.2-4c is a logarithmic
display for a = 1.0 and b = 100.0.
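Both display compressions are simple point operations on the coefficient magnitudes. A sketch follows (the function names are mine; c = 0.1 is an arbitrary clipping factor, while a = 1.0 and b = 100.0 follow the values quoted for Figure 8.2-4c).

    import numpy as np

    def clip_display(Ft, c=0.1):
        # Eq. 8.2-8: clip large magnitudes to 1.0, scale the remainder into [0, 1].
        mag = np.abs(Ft)
        fmax = mag.max()
        return np.where(mag >= c * fmax, 1.0, mag / (c * fmax))

    def log_display(Ft, a=1.0, b=100.0):
        # Eq. 8.2-9: logarithmic amplitude compression to unit range.
        mag = np.abs(Ft)
        return np.log(a + b * mag) / np.log(a + b * mag.max())

    Ft = np.fft.fft2(np.random.rand(32, 32), norm='ortho')
    assert clip_display(Ft).max() <= 1.0 and log_display(Ft).max() <= 1.0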
   In mathematical operations with continuous signals, the origin of the transform
domain is usually at its geometric center. Similarly, the Fraunhofer diffraction pat-
tern of a photographic transparency of transmittance F ( x, y ) produced by a coher-
ent optical system has its zero-frequency term at the center of its display. A
computer-generated two-dimensional discrete Fourier transform with its origin at its
center can be produced by a simple reordering of its transform coefficients. Alterna-
tively, the quadrants of the Fourier transform, as computed by Eq. 8.2-1a, can be
reordered automatically by multiplying the image function by the factor (-1)^{j+k}
prior to the Fourier transformation. The proof of this assertion follows from Eq.
8.2-4 with the substitution m = n = 1/2. Then, by the identity

                                  exp{ iπ(j + k) } = (-1)^{j+k}                            (8.2-10)


Eq. 8.2-4 can be expressed as


  F(u + N/2, v + N/2) = (1/N) Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) (-1)^{j+k} exp{ (-2πi/N)(uj + vk) }
                                                                                           (8.2-11)


Figure 8.2-4d contains a log magnitude display of the reordered Fourier compo-
nents. The conjugate symmetry in the Fourier domain is readily apparent from the
photograph.
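The reordering produced by the (-1)^{j+k} modulation can be confirmed numerically; in the sketch below (my own check), np.fft.fftshift serves only as an independent reference for the half-period shift of Eq. 8.2-11.

    import numpy as np

    N = 16
    F = np.random.rand(N, N)

    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    F_mod = F * (-1.0) ** (j + k)                 # modulation prior to transformation

    Ft = np.fft.fft2(F, norm='ortho')
    Ft_mod = np.fft.fft2(F_mod, norm='ortho')

    # Eq. 8.2-11: the modulated transform is the original transform shifted by N/2,
    # which places the zero-frequency term at the center of the array.
    assert np.allclose(Ft_mod, np.roll(Ft, (-N // 2, -N // 2), axis=(0, 1)))
    assert np.allclose(Ft_mod, np.fft.fftshift(Ft))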
   The Fourier transform written in series form in Eq. 8.2-1 may be redefined in
vector-space form as

                                        f̃ = A f                                           (8.2-12a)

                                        f = A^{*T} f̃                                      (8.2-12b)

where f and f̃ are vectors obtained by column scanning the matrices F and F̃,
respectively. The transformation matrix A can be written in direct product form as


                                          A = AC ⊗ A R                                                   (8.2-13)

where
                 | W^0    W^0        W^0          …    W^0           |
                 | W^0    W^1        W^2          …    W^{N-1}       |
   A_R = A_C =   | W^0    W^2        W^4          …    W^{2(N-1)}    |                     (8.2-14)
                 |  …                                   …            |
                 | W^0    W^{N-1}    W^{2(N-1)}   …    W^{(N-1)^2}   |


with W = exp { – 2πi ⁄ N }. As a result of the direct product decomposition of A, the
image matrix and transformed image matrix are related by


                                     F̃ = A_C F A_R                                        (8.2-15a)

                                     F = A_C^* F̃ A_R^*                                    (8.2-15b)


The properties of the Fourier transform previously proved in series form obviously
hold in the matrix formulation.
    One of the major contributions to the field of image processing was the discovery
(5) of an efficient computational algorithm for the discrete Fourier transform (DFT).
Brute-force computation of the discrete Fourier transform of a one-dimensional
sequence of N values requires on the order of N^2 complex multiply and add opera-
tions. A fast Fourier transform (FFT) requires on the order of N log N operations.
For large images the computational savings are substantial. The original FFT algo-
rithms were limited to images whose dimensions are a power of 2 (e.g.,
N = 2^9 = 512). Modern algorithms exist for less restrictive image dimensions.
    Although the Fourier transform possesses many desirable analytic properties, it
has a major drawback: Complex, rather than real number computations are
necessary. Also, for image coding it does not provide as efficient image energy
compaction as other transforms.


8.3. COSINE, SINE, AND HARTLEY TRANSFORMS

The cosine, sine, and Hartley transforms are unitary transforms that utilize
sinusoidal basis functions, as does the Fourier transform. The cosine and sine
transforms are not simply the cosine and sine parts of the Fourier transform. In fact,
the cosine and sine parts of the Fourier transform, individually, are not orthogonal
functions. The Hartley transform jointly utilizes sine and cosine basis functions, but
its coefficients are real numbers, as contrasted with the Fourier transform whose
coefficients are, in general, complex numbers.


8.3.1. Cosine Transform

The cosine transform, discovered by Ahmed et al. (12), has found wide application
in transform image coding. In fact, it is the foundation of the JPEG standard (13) for
still image coding and the MPEG standard for the coding of moving images (14).
The forward cosine transform is defined as (12)

  F(u, v) = (2/N) C(u) C(v) Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) cos{ (π/N)[u(j + 1/2)] } cos{ (π/N)[v(k + 1/2)] }
                                                                                           (8.3-1a)

  F(j, k) = (2/N) Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} C(u) C(v) F(u, v) cos{ (π/N)[u(j + 1/2)] } cos{ (π/N)[v(k + 1/2)] }
                                                                                           (8.3-1b)

where C(0) = (2)^{-1/2} and C(w) = 1 for w = 1, 2,..., N – 1. It has been observed
that the basis functions of the cosine transform are actually a class of discrete Che-
byshev polynomials (12).
   Figure 8.3-1 is a plot of the cosine transform basis functions for N = 16. A photo-
graph of the cosine transform of the test image of Figure 8.2-4a is shown in Figure
8.3-2a. The origin is placed in the upper left corner of the picture, consistent with
matrix notation. It should be observed that as with the Fourier transform, the image
energy tends to concentrate toward the lower spatial frequencies.
   The cosine transform of an N × N image can be computed by reflecting the image
about its edges to obtain a 2N × 2N array, taking the FFT of the array and then
extracting the real parts of the Fourier transform (15). Algorithms also exist for the
direct computation of each row or column of Eq. 8.3-1 with on the order of N log N
real arithmetic operations (12,16).
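A direct way to exercise Eq. 8.3-1 is to construct the one-dimensional cosine basis as a matrix and apply it separably; the sketch below (illustrative only, with an arbitrary 8 × 8 array) also checks orthonormality and exact invertibility.

    import numpy as np

    def cosine_matrix(N):
        # Rows: sqrt(2/N) C(u) cos{ (pi/N) u (j + 1/2) }, following Eq. 8.3-1.
        u = np.arange(N).reshape(-1, 1)
        j = np.arange(N).reshape(1, -1)
        C = np.where(u == 0, 1.0 / np.sqrt(2.0), 1.0)
        return np.sqrt(2.0 / N) * C * np.cos(np.pi * u * (j + 0.5) / N)

    N = 8
    A = cosine_matrix(N)
    assert np.allclose(A @ A.T, np.eye(N))     # orthonormal basis

    F = np.random.rand(N, N)
    Ft = A @ F @ A.T                           # forward transform, Eq. 8.3-1a
    assert np.allclose(A.T @ Ft @ A, F)        # inverse transform, Eq. 8.3-1b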


8.3.2. Sine Transform

The sine transform, introduced by Jain (17) as a fast algorithmic substitute for the
Karhunen–Loeve transform of a Markov process, is defined in one-dimensional form
by the basis functions


                     A(u, j) = √(2/(N + 1)) sin{ (j + 1)(u + 1)π / (N + 1) }               (8.3-2)


for u, j = 0, 1, 2,..., N – 1. Consider the tridiagonal matrix




               FIGURE 8.3-1. Cosine transform basis functions, N = 16.




                          |  1   -α    0   …    0  |
                          | -α    1   -α         …  |
                    T =   |  0   -α    1         0  |                                      (8.3-3)
                          |  …              1   -α  |
                          |  0    …    0   -α    1  |



where α = ρ/(1 + ρ^2) and 0.0 ≤ ρ ≤ 1.0 is the adjacent element correlation of a
Markov process covariance matrix. It can be shown (18) that the basis functions of




FIGURE 8.3-2. Cosine, sine, and Hartley transforms of the smpte_girl_luma image,
log magnitude displays: (a) cosine; (b) sine; (c) Hartley.



Eq. 8.3-2, inserted as the elements of a unitary matrix A, diagonalize the matrix T in
the sense that

                                        A T A^T = D                                        (8.3-4)

Matrix D is a diagonal matrix composed of the terms

                  D(k, k) = (1 - ρ^2) / (1 - 2ρ cos{ kπ/(N + 1) } + ρ^2)                   (8.3-5)

for k = 1, 2,..., N. Jain (17) has shown that the cosine and sine transforms are interre-
lated in that they diagonalize a family of tridiagonal matrices.




                        FIGURE 8.3-3. Sine transform basis functions, N = 15.



  The two-dimensional sine transform is defined as


  F(u, v) = [2/(N + 1)] Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) sin{ (j + 1)(u + 1)π / (N + 1) } sin{ (k + 1)(v + 1)π / (N + 1) }
                                                                                           (8.3-6)


Its inverse is of identical form.
    Sine transform basis functions are plotted in Figure 8.3-3 for N = 15. Figure
8.3-2b is a photograph of the sine transform of the test image. The sine transform
can also be computed directly from Eq. 8.3-6, or efficiently with a Fourier trans-
form algorithm (17).
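The diagonalization property of Eqs. 8.3-3 and 8.3-4 is easy to confirm numerically. The sketch below is my own check, with ρ = 0.9 chosen arbitrarily: it builds the sine basis of Eq. 8.3-2 and the tridiagonal matrix T and verifies that A T A^T is diagonal.

    import numpy as np

    def sine_matrix(N):
        # Eq. 8.3-2: A(u, j) = sqrt(2/(N+1)) sin{ (j+1)(u+1)pi/(N+1) }
        u = np.arange(N).reshape(-1, 1)
        j = np.arange(N).reshape(1, -1)
        return np.sqrt(2.0 / (N + 1)) * np.sin((j + 1) * (u + 1) * np.pi / (N + 1))

    N, rho = 15, 0.9
    alpha = rho / (1.0 + rho ** 2)
    T = np.eye(N) - alpha * (np.eye(N, k=1) + np.eye(N, k=-1))   # Eq. 8.3-3

    A = sine_matrix(N)
    assert np.allclose(A @ A.T, np.eye(N))       # the sine basis is orthonormal

    D = A @ T @ A.T                              # Eq. 8.3-4
    assert np.allclose(D, np.diag(np.diag(D)))   # off-diagonal terms vanish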


8.3.3. Hartley Transform

Bracewell (19,20) has proposed a discrete real-valued unitary transform, called the
Hartley transform, as a substitute for the Fourier transform in many filtering appli-
cations. The name derives from the continuous integral version introduced by Hart-
ley in 1942 (21). The discrete two-dimensional Hartley transform is defined by the
transform pair

                 F(u, v) = (1/N) Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) cas{ (2π/N)(uj + vk) }    (8.3-7a)

                 F(j, k) = (1/N) Σ_{u=0}^{N-1} Σ_{v=0}^{N-1} F(u, v) cas{ (2π/N)(uj + vk) }    (8.3-7b)


where casθ ≡ cos θ + sin θ . The structural similarity between the Fourier and Hartley
transforms becomes evident when comparing Eq. 8.3-7 and Eq. 8.2-2.
   It can be readily shown (17) that the cas θ function is an orthogonal function.
Also, the Hartley transform possesses equivalent but not mathematically identical
structural properties of the discrete Fourier transform (20). Figure 8.3-2c is a photo-
graph of the Hartley transform of the test image.
   The Hartley transform can be computed efficiently by an FFT-like algorithm (20).
The choice between the Fourier and Hartley transforms for a given application is
usually based on computational efficiency. In some computing structures, the Hart-
ley transform may be more efficiently computed, while in other computing environ-
ments, the Fourier transform may be computationally superior.
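Because cas θ = cos θ + sin θ, the Hartley coefficients of a real array can be read off from the real and imaginary parts of the Fourier transform of the same scaling convention; the sketch below is my own check, with the brute-force sum retained only for comparison.

    import numpy as np

    def hartley_direct(F):
        # Literal evaluation of Eq. 8.3-7a.
        N = F.shape[0]
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
        out = np.zeros((N, N))
        for u in range(N):
            for v in range(N):
                theta = 2.0 * np.pi * (u * j + v * k) / N
                out[u, v] = np.sum(F * (np.cos(theta) + np.sin(theta))) / N
        return out

    N = 8
    F = np.random.rand(N, N)
    Ft = np.fft.fft2(F, norm='ortho')        # 1/N scaling, as in Eq. 8.2-1a
    assert np.allclose(Ft.real - Ft.imag, hartley_direct(F))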


8.4. HADAMARD, HAAR, AND DAUBECHIES TRANSFORMS

The Hadamard, Haar, and Daubechies transforms are related members of a family of
nonsinusoidal transforms.


8.4.1. Hadamard Transform

The Hadamard transform (22,23) is based on the Hadamard matrix (24), which is a
square array of plus and minus 1s whose rows and columns are orthogonal. A nor-
malized N × N Hadamard matrix satisfies the relation

                                            H H^T = I                                      (8.4-1)


The smallest orthonormal Hadamard matrix is the 2 × 2 Hadamard matrix given by




             FIGURE 8.4-1. Nonordered Hadamard matrices of size 4 and 8.



                                  H_2 = (1/√2) | 1    1 |                                  (8.4-2)
                                               | 1   -1 |

It is known that if a Hadamard matrix of size N exists (N > 2), then N = 0 modulo 4
(22). The existence of a Hadamard matrix for every value of N satisfying this
requirement has not been shown, but constructions are available for nearly all per-
missible values of N up to 200. The simplest construction is for a Hadamard matrix
of size N = 2^n, where n is an integer. In this case, if H_N is a Hadamard matrix of size
N, the matrix


                                        1 HN          HN
                               H 2N = ------
                                           -                                    (8.4-3)
                                          2 HN    – HN

is a Hadamard matrix of size 2N. Figure 8.4-1 shows Hadamard matrices of size 4
and 8 obtained by the construction of Eq. 8.4-3.
   Harmuth (25) has suggested a frequency interpretation for the Hadamard matrix
generated from the core matrix of Eq. 8.4-3; the number of sign changes along each
row of the Hadamard matrix divided by 2 is called the sequency of the row. It is pos-
sible to construct a Hadamard matrix of order N = 2^n whose number of sign
changes per row increases from 0 to N – 1. This attribute is called the sequency
property of the unitary matrix.
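The recursive construction of Eq. 8.4-3 takes only a few lines; the sketch below (illustrative, without sequency ordering) builds H_16, checks the orthonormality condition of Eq. 8.4-1, and prints the number of sign changes in each row, from which the sequency follows by halving.

    import numpy as np

    def hadamard(n):
        # Start from the 2 x 2 core of Eq. 8.4-2 and apply Eq. 8.4-3 repeatedly.
        H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2.0)
        for _ in range(n - 1):
            H = np.block([[H, H], [H, -H]]) / np.sqrt(2.0)
        return H

    H16 = hadamard(4)                                   # N = 2**4 = 16
    assert np.allclose(H16 @ H16.T, np.eye(16))         # Eq. 8.4-1

    # Sign changes per row; in this nonordered form they are not monotone.
    sign_changes = (np.diff(np.sign(H16), axis=1) != 0).sum(axis=1)
    print(sign_changes)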




             FIGURE 8.4-2. Hadamard transform basis functions, N = 16.




   The rows of the Hadamard matrix of Eq. 8.4-3 can be considered to be samples
of rectangular waves with a subperiod of 1/N units. These continuous functions are
called Walsh functions (26). In this context, the Hadamard matrix merely performs
the decomposition of a function by a set of rectangular waveforms rather than the
sine–cosine waveforms with the Fourier transform. A series formulation exists for
the Hadamard transform (23).
   Hadamard transform basis functions for the ordered transform with N = 16 are
shown in Figure 8.4-2. The ordered Hadamard transform of the test image is shown
in Figure 8.4-3a.




FIGURE 8.4-3. Hadamard and Haar transforms of the smpte_girl_luma image, log
magnitude displays: (a) Hadamard; (b) Haar.



8.4.2. Haar Transform

The Haar transform (1,26,27) is derived from the Haar matrix. The following are
4 × 4 and 8 × 8 orthonormal Haar matrices:


                           |  1    1    1    1  |
             H_4 = (1/2)   |  1    1   -1   -1  |                                          (8.4-4)
                           | √2  -√2    0    0  |
                           |  0    0   √2  -√2  |




                           |  1    1    1    1    1    1    1    1  |
                           |  1    1    1    1   -1   -1   -1   -1  |
                           | √2   √2  -√2  -√2    0    0    0    0  |
             H_8 = (1/√8)  |  0    0    0    0   √2   √2  -√2  -√2  |                      (8.4-5)
                           |  2   -2    0    0    0    0    0    0  |
                           |  0    0    2   -2    0    0    0    0  |
                           |  0    0    0    0    2   -2    0    0  |
                           |  0    0    0    0    0    0    2   -2  |


Extensions to higher-order Haar matrices follow the structure indicated by Eqs.
8.4-4 and 8.4-5. Figure 8.4-4 is a plot of the Haar basis functions for N = 16 .




                FIGURE 8.4-4. Haar transform basis functions, N = 16.



   The Haar transform can be computed recursively (29) using the following N × N
recursion matrix

                                        R_N = | V_N |                                      (8.4-6)
                                              | W_N |

where V_N is an N/2 × N scaling matrix and W_N is an N/2 × N wavelet matrix defined
as

                     | 1  1  0  0  0  0  …  0  0  0  0 |
                     | 0  0  1  1  0  0  …  0  0  0  0 |
       V_N = (1/√2)  |            …                    |                                   (8.4-7a)
                     | 0  0  0  0  0  0  …  1  1  0  0 |
                     | 0  0  0  0  0  0  …  0  0  1  1 |


                     | 1 -1  0  0  0  0  …  0  0  0  0 |
                     | 0  0  1 -1  0  0  …  0  0  0  0 |
       W_N = (1/√2)  |            …                    |                                   (8.4-7b)
                     | 0  0  0  0  0  0  …  1 -1  0  0 |
                     | 0  0  0  0  0  0  …  0  0  1 -1 |


The elements of the rows of V N are called first-level scaling signals, and the
elements of the rows of W N are called first-level Haar wavelets (29).
    The first-level Haar transform of an N × 1 vector f is


                                   f_1 = R_N f = [ a_1  d_1 ]^T                            (8.4-8)


where


                                           a1 = VN f                                      (8.4-9a)

                                           d 1 = WN f                                     (8.4-9b)


The vector a 1 represents the running average or trend of the elements of f , and the
vector d1 represents the running fluctuation of the elements of f . The next step in
the recursion process is to compute the second-level Haar transform from the trend
part of the first-level transform and concatenate it with the first-level fluctuation
vector. This results in

                                     f_2 = [ a_2  d_2  d_1 ]^T                             (8.4-10)

where
                                         a 2 = VN ⁄ 2 a1                                 (8.4-11a)

                                         d2 = WN ⁄ 2 a 1                                 (8.4-11b)

are N ⁄ 4 × 1 vectors. The process continues until the full transform

                              f̃ ≡ f_n = [ a_n  d_n  d_{n-1}  …  d_1 ]^T                    (8.4-12)

is obtained, where N = 2^n. It should be noted that the intermediate levels are unitary
transforms.


   The Haar transform can be likened to a sampling process in which rows of the
transform matrix sample an input data sequence with finer and finer resolution,
increasing by powers of 2. In image processing applications, the Haar transform pro-
vides a transform domain in which a type of differential energy is concentrated in
localized regions.
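The recursion of Eqs. 8.4-6 through 8.4-12 can be sketched directly; the code below is an illustration only (the vector length is assumed to be a power of 2) and checks that the norm is preserved, as expected of a unitary transform.

    import numpy as np

    def scaling_matrix(N):
        # V_N of Eq. 8.4-7a: pairwise sums scaled by 1/sqrt(2).
        V = np.zeros((N // 2, N))
        r = np.arange(N // 2)
        V[r, 2 * r] = V[r, 2 * r + 1] = 1.0
        return V / np.sqrt(2.0)

    def wavelet_matrix(N):
        # W_N of Eq. 8.4-7b: pairwise differences scaled by 1/sqrt(2).
        W = np.zeros((N // 2, N))
        r = np.arange(N // 2)
        W[r, 2 * r], W[r, 2 * r + 1] = 1.0, -1.0
        return W / np.sqrt(2.0)

    def haar_transform(f):
        # Eqs. 8.4-8 to 8.4-12: keep transforming the trend part, stacking the details.
        a, details = f.copy(), []
        while a.size > 1:
            details.insert(0, wavelet_matrix(a.size) @ a)
            a = scaling_matrix(a.size) @ a
        return np.concatenate([a] + details)

    f = np.random.rand(16)
    assert np.isclose(np.linalg.norm(haar_transform(f)), np.linalg.norm(f))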


8.4.3. Daubechies Transforms

Daubechies (30) has discovered a class of wavelet transforms that utilize running
averages and running differences of the elements of a vector, as with the Haar trans-
form. The difference between the Haar and Daubechies transforms is that the aver-
ages and differences are grouped in four or more elements.
   The Daubechies transform of support four, called Daub4, can be defined in a
manner similar to the Haar recursive generation process. The first-level scaling and
wavelet matrices are defined as

             | α_1  α_2  α_3  α_4   0    0   …   0    0    0    0  |
             |  0    0   α_1  α_2  α_3  α_4  …   0    0    0    0  |
      V_N =  |                      …                              |                       (8.4-13a)
             |  0    0    0    0    0    0   …  α_1  α_2  α_3  α_4 |
             | α_3  α_4   0    0    0    0   …   0    0   α_1  α_2 |


             | β_1  β_2  β_3  β_4   0    0   …   0    0    0    0  |
             |  0    0   β_1  β_2  β_3  β_4  …   0    0    0    0  |
      W_N =  |                      …                              |                       (8.4-13b)
             |  0    0    0    0    0    0   …  β_1  β_2  β_3  β_4 |
             | β_3  β_4   0    0    0    0   …   0    0   β_1  β_2 |

where

                               α_1 = -β_4 = (1 + √3) / (4√2)                               (8.4-14a)

                               α_2 = β_3 = (3 + √3) / (4√2)                                (8.4-14b)

                               α_3 = -β_2 = (3 - √3) / (4√2)                               (8.4-14c)

                               α_4 = β_1 = (1 - √3) / (4√2)                                (8.4-14d)

In Eqs. 8.4-13a and 8.4-13b, the row-to-row shift is by two elements, and the last
two scale factors wrap around on the last rows. Following the recursion process of
the Haar transform results in the Daub4 transform final stage:

                              f̃ ≡ f_n = [ a_n  d_n  d_{n-1}  …  d_1 ]^T                    (8.4-15)


   Daubechies has extended the wavelet transform concept for higher degrees of
support, 6, 8, 10,..., by straightforward extension of Eq. 8.4-13 (29). Daubechies
has also constructed another family of wavelets, called coiflets, after a sugges-
tion of Coifman (29).
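With the coefficients of Eq. 8.4-14, the two-element row shift and wrap-around of Eq. 8.4-13 can be coded and the orthogonality of a first-level Daub4 stage confirmed; the sketch below is my own check with N = 16.

    import numpy as np

    s2, s3 = np.sqrt(2.0), np.sqrt(3.0)
    alpha = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4 * s2)     # Eq. 8.4-14
    beta = np.array([alpha[3], -alpha[2], alpha[1], -alpha[0]])       # beta_1 ... beta_4

    def daub4_level(N, c):
        # Rows shift by two elements; the final row wraps around (Eq. 8.4-13).
        M = np.zeros((N // 2, N))
        for r in range(N // 2):
            for i in range(4):
                M[r, (2 * r + i) % N] = c[i]
        return M

    N = 16
    R = np.vstack([daub4_level(N, alpha), daub4_level(N, beta)])   # first-level stage
    assert np.allclose(R @ R.T, np.eye(N))                         # orthogonal, hence unitary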


8.5. KARHUNEN–LOEVE TRANSFORM

Techniques for transforming continuous signals into a set of uncorrelated represen-
tational coefficients were originally developed by Karhunen (31) and Loeve (32).
Hotelling (33) has been credited (34) with the conversion procedure that transforms
discrete signals into a sequence of uncorrelated coefficients. However, most of the
literature in the field refers to both discrete and continuous transformations as either
a Karhunen–Loeve transform or an eigenvector transform.
    The Karhunen–Loeve transformation is a transformation of the general form

                  F(u, v) = Σ_{j=0}^{N-1} Σ_{k=0}^{N-1} F(j, k) A(j, k; u, v)              (8.5-1)

for which the kernel A(j, k; u, v) satisfies the equation

       λ(u, v) A(j, k; u, v) = Σ_{j'=0}^{N-1} Σ_{k'=0}^{N-1} K_F(j, k; j', k') A(j', k'; u, v)    (8.5-2)

where KF ( j, k ; j′, k′ ) denotes the covariance function of the image array and λ ( u, v )
is a constant for fixed (u, v). The set of functions defined by the kernel are the eigen-
functions of the covariance function, and λ ( u, v ) represents the eigenvalues of the
covariance function. It is usually not possible to express the kernel in explicit form.
 If the covariance function is separable such that


                               K F ( j, k ; j′, k′ ) = K C ( j, j′ )K R ( k, k′ )                          (8.5-3)


then the Karhunen-Loeve kernel is also separable and


                                 A ( j, k ; u , v ) = A C ( u, j )AR ( v, k )                              (8.5-4)


The row and column kernels satisfy the equations


                     λ_R(v) A_R(v, k) = Σ_{k'=0}^{N-1} K_R(k, k') A_R(v, k')               (8.5-5a)

                     λ_C(u) A_C(u, j) = Σ_{j'=0}^{N-1} K_C(j, j') A_C(u, j')               (8.5-5b)



In the special case in which the covariance matrix is of separable first-order Markov
process form, the eigenfunctions can be written in explicit form. For a one-dimen-
sional Markov process with correlation factor ρ , the eigenfunctions and eigenvalues
are given by (35)

    $$A(u, j) = \left[ \frac{2}{N + \lambda(u)} \right]^{1/2} \sin\left\{ w(u)\left( j - \frac{N-1}{2} \right) + \frac{(u+1)\pi}{2} \right\}$$    (8.5-6)

and

    $$\lambda(u) = \frac{1 - \rho^2}{1 - 2\rho\cos\{w(u)\} + \rho^2} \qquad \text{for } 0 \le j, u \le N - 1$$    (8.5-7)

where w(u) denotes the root of the transcendental equation

    $$\tan\{Nw\} = \frac{(1 - \rho^2)\sin w}{\cos w - 2\rho + \rho^2 \cos w}$$    (8.5-8)

The eigenvectors can also be generated by the recursion formula (36)


    $$A(u, 0) = \frac{\lambda(u)}{1 - \rho^2}\,\bigl[\, A(u, 0) - \rho A(u, 1) \,\bigr]$$    (8.5-9a)

    $$A(u, j) = \frac{\lambda(u)}{1 - \rho^2}\,\bigl[\, -\rho A(u, j-1) + (1 + \rho^2) A(u, j) - \rho A(u, j+1) \,\bigr] \qquad \text{for } 0 < j < N - 1$$    (8.5-9b)

    $$A(u, N-1) = \frac{\lambda(u)}{1 - \rho^2}\,\bigl[\, -\rho A(u, N-2) + \rho A(u, N-1) \,\bigr]$$    (8.5-9c)

by initially setting A(u, 0) = 1 and subsequently normalizing the eigenvectors.
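
   In practice, the eigenvectors need not be generated by the recursion; they can be
computed directly from the covariance matrix with a standard eigensolver. The
following sketch, which is illustrative and not part of the original text, builds the
N × N covariance matrix of a first-order Markov process and extracts the Karhunen-
Loeve basis with NumPy; the function name markov_kl_basis is hypothetical.

```python
import numpy as np

def markov_kl_basis(N=16, rho=0.9):
    """Karhunen-Loeve (eigenvector) basis for a first-order Markov process.

    The covariance matrix of a stationary Markov sequence with adjacent
    element correlation rho is K(j, k) = rho**|j - k|.  Its eigenvectors,
    ordered by decreasing eigenvalue, form a transform matrix A whose rows
    satisfy A K = Lambda A (cf. Eq. 8.5-12).
    """
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    K = rho ** np.abs(j - k)               # first-order Markov covariance
    eigvals, eigvecs = np.linalg.eigh(K)   # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1]      # reorder to descending eigenvalues
    return eigvecs[:, order].T, eigvals[order]

if __name__ == "__main__":
    A, lam = markov_kl_basis(N=16, rho=0.9)
    K = 0.9 ** np.abs(np.subtract.outer(np.arange(16), np.arange(16)))
    # Verify the diagonalization A K A^T = Lambda (decorrelation of coefficients).
    print(np.allclose(A @ K @ A.T, np.diag(lam)))
```

For N = 16 and ρ = 0.9, the rows of A correspond, up to sign and ordering, to the
basis functions plotted in Figure 8.5-1.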

  If the image array and transformed image array are expressed in vector form, the
Karhunen–Loeve transform pairs are


    $$\mathfrak{f} = \mathbf{A}\,\mathbf{f}$$    (8.5-10)

    $$\mathbf{f} = \mathbf{A}^T\,\mathfrak{f}$$    (8.5-11)

where $\mathfrak{f}$ denotes the vector of transform coefficients.


The transformation matrix A satisfies the relation


    $$\mathbf{A}\,\mathbf{K}_f = \boldsymbol{\Lambda}\,\mathbf{A}$$    (8.5-12)


where K f is the covariance matrix of f, A is a matrix whose rows are eigenvectors of
K f , and Λ is a diagonal matrix of the form



    $$\boldsymbol{\Lambda} = \begin{bmatrix} \lambda(1) & 0 & \cdots & 0 \\ 0 & \lambda(2) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda(N^2) \end{bmatrix}$$    (8.5-13)



If K f is of separable form, then


                                      A = AC ⊗ A R                          (8.5-14)


where AR and AC satisfy the relations


                                     AR KR = ΛR AR                         (8.5-15a)

                                     AC KC = ΛC AC                         (8.5-15b)



and λ ( w ) = λ R ( v )λ C ( u ) for u, v = 1, 2,..., N.
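
   The separability property of Eqs. 8.5-14 and 8.5-15 is easy to verify numerically.
The sketch below is illustrative only; it assumes the same Markov covariance model
for rows and columns, forms the Kronecker product basis, and checks that it
diagonalizes the separable covariance matrix.

```python
import numpy as np

# Numerical check of Eq. 8.5-14: for a separable covariance K = K_C (x) K_R
# (Kronecker product, Eq. 8.5-3), the KL transform matrix is A = A_C (x) A_R,
# where the rows of A_C and A_R are eigenvectors of K_C and K_R.
def kl_rows(K):
    vals, vecs = np.linalg.eigh(K)
    order = np.argsort(vals)[::-1]
    return vecs[:, order].T, vals[order]

N, rho = 8, 0.9
idx = np.arange(N)
K_R = rho ** np.abs(np.subtract.outer(idx, idx))   # row covariance
K_C = K_R.copy()                                   # column covariance (same model)
A_R, lam_R = kl_rows(K_R)
A_C, lam_C = kl_rows(K_C)

A = np.kron(A_C, A_R)                              # Eq. 8.5-14
K = np.kron(K_C, K_R)                              # separable covariance
print(np.allclose(A @ K @ A.T, np.diag(np.kron(lam_C, lam_R))))   # True
```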
   Figure 8.5-1 is a plot of the Karhunen–Loeve basis functions for a one-
dimensional Markov process with adjacent element correlation ρ = 0.9.




          FIGURE 8.5-1. Karhunen–Loeve transform basis functions, N = 16.




REFERENCES

1. H. C. Andrews, Computer Techniques in Image Processing, Academic Press, New York,
   1970.
2. H. C. Andrews, “Two Dimensional Transforms,” in Topics in Applied Physics: Picture Pro-
   cessing and Digital Filtering, Vol. 6, T. S. Huang, Ed., Springer-Verlag, New York, 1975.
3. R. Bellman, Introduction to Matrix Analysis, 2nd ed., Society for Industrial and Applied
   Mathematics, Philadelphia, 1997.

 4. H. C. Andrews and K. Caspari, “A Generalized Technique for Spectral Analysis,” IEEE
    Trans. Computers, C-19, 1, January 1970, 16–25.
 5. J. W. Cooley and J. W. Tukey, “An Algorithm for the Machine Calculation of Complex
    Fourier Series,” Mathematics of Computation, 19, 90, April 1965, 297–301.
 6. IEEE Trans. Audio and Electroacoustics, Special Issue on Fast Fourier Transforms, AU-
    15, 2, June 1967.
 7. W. T. Cochran et al., “What Is the Fast Fourier Transform?” Proc. IEEE, 55, 10, 1967,
    1664–1674.
 8. IEEE Trans. Audio and Electroacoustics, Special Issue on Fast Fourier Transforms, AU-
    17, 2, June 1969.
 9. J. W. Cooley, P. A. Lewis, and P. D. Welch, “Historical Notes on the Fast Fourier Trans-
    form,” Proc. IEEE, 55, 10, October 1967, 1675–1677.
10. E. O. Brigham and R. E. Morrow, “The Fast Fourier Transform,” IEEE Spectrum, 4, 12,
    December 1967, 63–70.
11. C. S. Burrus and T. W. Parks, DFT/FFT and Convolution Algorithms, Wiley-Inter-
    science, New York, 1985.
12. N. Ahmed, T. Natarajan, and K. R. Rao, “On Image Processing and a Discrete Cosine
    Transform,” IEEE Trans. Computers, C-23, 1, January 1974, 90–93.
13. W. B. Pennebaker and J. L. Mitchell, JPEG Still Image Data Compression Standard, Van
    Nostrand Reinhold, New York, 1993.
14. K. R. Rao and J. J. Hwang, Techniques and Standards for Image, Video, and Audio Cod-
    ing, Prentice Hall, Upper Saddle River, NJ, 1996.
15. R. W. Means, H. J. Whitehouse, and J. M. Speiser, “Television Encoding Using a Hybrid
    Discrete Cosine Transform and a Differential Pulse Code Modulator in Real Time,”
    Proc. National Telecommunications Conference, San Diego, CA, December 1974, 61–
    66.
16. W. H. Chen, C. Smith, and S. C. Fralick, “Fast Computational Algorithm for the Discrete
    Cosine Transform,” IEEE Trans. Communications, COM-25, 9, September 1977,
    1004–1009.
17. A. K. Jain, “A Fast Karhunen–Loeve Transform for Finite Discrete Images,” Proc.
    National Electronics Conference, Chicago, October 1974, 323–328.
18. A. K. Jain and E. Angel, “Image Restoration, Modeling, and Reduction of Dimensional-
    ity,” IEEE Trans. Computers, C-23, 5, May 1974, 470–476.
19. R. N. Bracewell, “The Discrete Hartley Transform,” J. Optical Society of America, 73,
    12, December 1983, 1832–1835.
20. R. N. Bracewell, The Hartley Transform, Oxford University Press, Oxford, 1986.
21. R. V. L. Hartley, “A More Symmetrical Fourier Analysis Applied to Transmission Prob-
    lems,” Proc. IRE, 30, 1942, 144–150.
22. J. E. Whelchel, Jr. and D. F. Guinn, “The Fast Fourier–Hadamard Transform and Its Use
    in Signal Representation and Classification,” EASCON 1968 Convention Record, 1968,
    561–573.
23. W. K. Pratt, H. C. Andrews, and J. Kane, “Hadamard Transform Image Coding,” Proc.
    IEEE, 57, 1, January 1969, 58–68.
24. J. Hadamard, “Resolution d'une question relative aux determinants,” Bull. Sciences
    Mathematiques, Ser. 2, 17, Part I, 1893, 240–246.


25. H. F. Harmuth, Transmission of Information by Orthogonal Functions, Springer-Verlag,
    New York, 1969.
26. J. L. Walsh, “A Closed Set of Orthogonal Functions,” American J. Mathematics, 45,
    1923, 5–24.
27. A. Haar, “Zur Theorie der Orthogonalen-Funktionen,” Mathematische Annalen, 5, 1955,
    17–31.
28. K. R. Rao, M. A. Narasimhan, and K. Revuluri, “Image Data Processing by Hadamard–
    Haar Transforms,” IEEE Trans. Computers, C-23, 9, September 1975, 888–896.
29. J. S. Walker, A Primer on Wavelets and Their Scientific Applications, Chapman & Hall/
    CRC Press, Boca Raton, FL, 1999.
30. I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992.
31. H. Karhunen, 1947, English translation by I. Selin, “On Linear Methods in Probability
    Theory,” Doc. T-131, Rand Corporation, Santa Monica, CA, August 11, 1960.
32. M. Loeve, Fonctions aléatoires de seconde ordre, Hermann, Paris, 1948.
33. H. Hotelling, “Analysis of a Complex of Statistical Variables into Principal Compo-
    nents,” J. Educational Psychology, 24, 1933, 417–441, 498–520.
34. P. A. Wintz, “Transform Picture Coding,” Proc. IEEE, 60, 7, July 1972, 809–820.
35. W. D. Ray and R. M. Driver, “Further Decomposition of the Karhunen–Loeve Series
    Representation of a Stationary Random Process,” IEEE Trans. Information Theory, IT-
    16, 6, November 1970, 663–668.
36. W. K. Pratt, “Generalized Wiener Filtering Computation Techniques,” IEEE Trans.
    Computers, C-21, 7, July 1972, 636–641.




9
LINEAR PROCESSING TECHNIQUES




Most discrete image processing computational algorithms are linear in nature; an
output image array is produced by a weighted linear combination of elements of an
input array. The popularity of linear operations stems from the relative simplicity of
spatial linear processing as opposed to spatial nonlinear processing. However, for
image processing operations, conventional linear processing is often computation-
ally infeasible without efficient computational algorithms because of the large
image arrays. This chapter considers indirect computational techniques that permit
more efficient linear processing than by conventional methods.


9.1. TRANSFORM DOMAIN PROCESSING

Two-dimensional linear transformations have been defined in Section 5.4 in series
form as
    $$P(m_1, m_2) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2)\, T(n_1, n_2; m_1, m_2)$$    (9.1-1)

and defined in vector form as

                                             p = Tf                                           (9.1-2)

It will now be demonstrated that such linear transformations can often be computed
more efficiently by an indirect computational procedure utilizing two-dimensional
unitary transforms than by the direct computation indicated by Eq. 9.1-1 or 9.1-2.





   FIGURE 9.1-1. Direct processing and generalized linear filtering; series formulation.



    Figure 9.1-1 is a block diagram of the indirect computation technique called gen-
eralized linear filtering (1). In the process, the input array F ( n1, n 2 ) undergoes a
two-dimensional unitary transformation, resulting in an array of transform coeffi-
cients F ( u 1, u 2 ) . Next, a linear combination of these coefficients is taken according
to the general relation

    $$\tilde{F}(w_1, w_2) = \sum_{u_1=1}^{M_1} \sum_{u_2=1}^{M_2} F(u_1, u_2)\, T(u_1, u_2; w_1, w_2)$$    (9.1-3)



where T ( u 1, u 2 ; w 1, w 2 ) represents the linear filtering transformation function.
Finally, an inverse unitary transformation is performed to reconstruct the processed
array P ( m1, m 2 ) . If this computational procedure is to be more efficient than direct
computation by Eq. 9.1-1, it is necessary that fast computational algorithms exist for
the unitary transformation, and also the kernel T ( u 1, u 2 ; w 1, w 2 ) must be reasonably
sparse; that is, it must contain many zero elements.
   The generalized linear filtering process can also be defined in terms of vector-
space computations as shown in Figure 9.1-2. For notational simplicity, let N1 = N2
= N and M1 = M2 = M. Then the generalized linear filtering process can be described
by the equations


                                            f = [ A 2 ]f                                        (9.1-4a)
                                                        N


                                            ˜ = Tf
                                            f    f                                              (9.1-4b)

                                                                ] ˜
                                                                 –1
                                            p = [A          2       f                           (9.1-4c)
                                                        M




   FIGURE 9.1-2. Direct processing and generalized linear filtering; vector formulation.



where A_{N²} is an N² × N² unitary transform matrix, the transform domain filter matrix
of Eq. 9.1-4b is an M² × N² linear filtering transform operation, and A_{M²} is an
M² × M² unitary transform matrix. From Eq. 9.1-4, the input and output vectors are
related by

    $$\mathbf{p} = [\mathbf{A}_{M^2}]^{-1}\,\boldsymbol{\mathcal{T}}\,[\mathbf{A}_{N^2}]\,\mathbf{f}$$    (9.1-5)


Therefore, equating Eqs. 9.1-2 and 9.1-5 yields the relations between the data domain
operator T and the transform domain filter matrix:

    $$\mathbf{T} = [\mathbf{A}_{M^2}]^{-1}\,\boldsymbol{\mathcal{T}}\,[\mathbf{A}_{N^2}]$$    (9.1-6a)

    $$\boldsymbol{\mathcal{T}} = [\mathbf{A}_{M^2}]\,\mathbf{T}\,[\mathbf{A}_{N^2}]^{-1}$$    (9.1-6b)


If direct processing is employed, computation by Eq. 9.1-2 requires k_P M²N² oper-
ations, where 0 ≤ k_P ≤ 1 is a measure of the sparseness of T. With the generalized
linear filtering technique, the number of operations required for a given operator is:

   Forward transform:       N⁴ by direct transformation
                            2N² log₂ N by fast transformation

   Filter multiplication:   k_T M²N²

   Inverse transform:       M⁴ by direct transformation
                            2M² log₂ M by fast transformation


where 0 ≤ k_T ≤ 1 is a measure of the sparseness of the transform domain filter matrix.
If k_T = 1 and direct unitary transform computation is performed, it is obvious that the
generalized linear filtering concept is not as efficient as direct computation. However,
if fast transform algorithms, similar in structure to the fast Fourier transform, are
employed, generalized linear filtering will be more efficient than direct processing if
the sparseness index satisfies the inequality

    $$k_T < k_P - \frac{2}{M^2}\log_2 N - \frac{2}{N^2}\log_2 M$$    (9.1-7)

In many applications, the transform domain filter matrix will be sufficiently sparse
that the inequality will be satisfied. In fact, unitary transformation tends to decorrelate
the elements of T, causing the transform domain filter matrix to be sparse. Also, it is
often possible to render the filter matrix sparse by setting small-magnitude elements
to zero without seriously affecting computational accuracy (1).
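
   As an illustrative sketch (not from the text), the three-step procedure of Figure
9.1-2 can be exercised with NumPy when the unitary transform is the two-dimensional
DFT and the transform domain filter matrix is diagonal, i.e., maximally sparse; the
array T_diag and the low-pass weighting profile below are hypothetical choices.

```python
import numpy as np

def generalized_linear_filter(F, T_diag):
    """Transform domain (generalized linear) filtering in the style of Eq. 9.1-4.

    F      : N x N input image array.
    T_diag : N x N array of transform domain filter weights; a diagonal
             (pointwise) filter is assumed so that the filter matrix is sparse.

    The unitary 2-D DFT serves as the transform; norm="ortho" makes np.fft
    unitary, and the fast algorithm supplies the efficiency argued for above.
    """
    coeffs = np.fft.fft2(F, norm="ortho")         # forward unitary transform
    filtered = T_diag * coeffs                    # sparse (diagonal) filtering
    return np.fft.ifft2(filtered, norm="ortho")   # inverse transform gives p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    F = rng.standard_normal((64, 64))
    u = np.fft.fftfreq(64)
    T_diag = np.exp(-(u[:, None] ** 2 + u[None, :] ** 2) / 0.02)  # hypothetical low-pass
    P = generalized_linear_filter(F, T_diag)
    print(P.shape, P.dtype)
```

With a diagonal filter and M = N, the sparseness index is roughly k_T = 1/N², so the
inequality of Eq. 9.1-7 is easily satisfied for images of practical size.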
    In subsequent sections, the structure of superposition and convolution operators
is analyzed to determine the feasibility of generalized linear filtering in these appli-
cations.


9.2. TRANSFORM DOMAIN SUPERPOSITION

The superposition operations discussed in Chapter 7 can often be performed more
efficiently by transform domain processing rather than by direct processing. Figure
9.2-1a and b illustrate block diagrams of the computational steps involved in direct
finite area or sampled image superposition. In Figure 9.2-1d and e, an alternative
form of processing is illustrated in which a unitary transformation operation is per-
formed on the data vector f before multiplication by a finite area filter matrix D or
sampled image filter matrix B. An inverse transform reconstructs the output vector.
From Figure 9.2-1, for finite-area superposition, because


                                  q = Df                                       (9.2-1a)


and

    $$\mathbf{q} = [\mathbf{A}_{M^2}]^{-1}\,\boldsymbol{\mathcal{D}}\,[\mathbf{A}_{N^2}]\,\mathbf{f}$$    (9.2-1b)


then clearly the finite-area filter matrix may be expressed as

    $$\boldsymbol{\mathcal{D}} = [\mathbf{A}_{M^2}]\,\mathbf{D}\,[\mathbf{A}_{N^2}]^{-1}$$    (9.2-2a)




FIGURE 9.2-1. Data and transform domain superposition.


Similarly,

    $$\boldsymbol{\mathcal{B}} = [\mathbf{A}_{M^2}]\,\mathbf{B}\,[\mathbf{A}_{N^2}]^{-1}$$    (9.2-2b)


If direct finite-area superposition is performed, the required number of computational
operations is approximately N²L², where L is the dimension of the impulse response
matrix. In this case, the sparseness index of the transform domain filter matrix of
Eq. 9.2-2a is

    $$k_D = \left( \frac{L}{N} \right)^2$$    (9.2-3a)

Direct sampled image superposition requires on the order of M²L² operations, and the
corresponding sparseness index for Eq. 9.2-2b is

    $$k_B = \left( \frac{L}{M} \right)^2$$    (9.2-3b)


Figure 9.2-1f is a block diagram of a system for performing circulant superposition
by transform domain processing. In this case, the input vector f_E is the extended
data vector, obtained by embedding the input image array F(n₁, n₂) in the upper left
corner of a J × J array of zeros and then column scanning the resultant matrix. Follow-
ing the same reasoning as above, it is seen that

    $$\mathbf{k}_E = \mathbf{C}\,\mathbf{f}_E = [\mathbf{A}_{J^2}]^{-1}\,\boldsymbol{\mathcal{C}}\,[\mathbf{A}_{J^2}]\,\mathbf{f}_E$$    (9.2-4a)


and hence,

    $$\boldsymbol{\mathcal{C}} = [\mathbf{A}_{J^2}]\,\mathbf{C}\,[\mathbf{A}_{J^2}]^{-1}$$    (9.2-4b)


As noted in Chapter 7, the equivalent output vector for either finite-area or sampled
image superposition can be obtained by an element selection operation of kE. For
finite-area superposition,

    $$\mathbf{q} = [\mathbf{S}_{1J}^{(M)} \otimes \mathbf{S}_{1J}^{(M)}]\,\mathbf{k}_E$$    (9.2-5a)


and for sampled image superposition

    $$\mathbf{g} = [\mathbf{S}_{2J}^{(M)} \otimes \mathbf{S}_{2J}^{(M)}]\,\mathbf{k}_E$$    (9.2-5b)

Also, the matrix form of the output for finite-area superposition is related to the
extended image matrix KE by

    $$\mathbf{Q} = [\mathbf{S}_{1J}^{(M)}]\,\mathbf{K}_E\,[\mathbf{S}_{1J}^{(M)}]^T$$    (9.2-6a)


For sampled image superposition,

    $$\mathbf{G} = [\mathbf{S}_{2J}^{(M)}]\,\mathbf{K}_E\,[\mathbf{S}_{2J}^{(M)}]^T$$    (9.2-6b)


The number of computational operations required to obtain kE by transform domain
processing is given by the previous analysis for M = N = J.
   Direct transformation:    3J⁴

   Fast transformation:      J² + 4J² log₂ J

If the transform domain filter matrix of Eq. 9.2-4b is sparse, many of the J² filter
multiplication operations can be avoided.
   From the discussion above, it can be seen that the secret to computationally effi-
cient superposition is to select a transformation that possesses a fast computational
algorithm that results in a relatively sparse transform domain superposition filter
matrix. As an example, consider finite-area convolution performed by Fourier
domain processing (2,3). Referring to Figure 9.2-1, let


    $$\mathbf{A}_{K^2} = \mathbf{A}_K \otimes \mathbf{A}_K$$    (9.2-7)


where

    $$\mathbf{A}_K(x, y) = \frac{1}{\sqrt{K}}\, W^{(x-1)(y-1)} \qquad \text{with } W \equiv \exp\left\{ \frac{-2\pi i}{K} \right\}$$

for x, y = 1, 2,..., K. Also, let h_E^(K) denote the K² × 1 vector representation of the
extended spatially invariant impulse response array of Eq. 7.3-2 for J = K. The Fou-
rier transform of h_E^(K) is denoted as

    $$\mathfrak{h}_E^{(K)} = [\mathbf{A}_{K^2}]\,\mathbf{h}_E^{(K)}$$    (9.2-8)


These transform components are then inserted as the diagonal elements of a K² × K²
matrix

    $$\mathbf{H}^{(K)} = \operatorname{diag}\bigl[\, \mathfrak{h}_E^{(K)}(1), \ldots, \mathfrak{h}_E^{(K)}(K^2) \,\bigr]$$    (9.2-9)


Then, it can be shown, after considerable manipulation, that the Fourier transform
domain superposition matrices for finite area and sampled image convolution can be
written as (4)

    $$\boldsymbol{\mathcal{D}} = \mathbf{H}^{(M)}\,[\mathbf{P}_D \otimes \mathbf{P}_D]$$    (9.2-10)

for N = M – L + 1 and

    $$\boldsymbol{\mathcal{B}} = [\mathbf{P}_B \otimes \mathbf{P}_B]\,\mathbf{H}^{(N)}$$    (9.2-11)

where N = M + L – 1 and


    $$\mathbf{P}_D(u, v) = \frac{1}{M}\; \frac{1 - W_M^{-(u-1)(L-1)}}{1 - W_M^{-(u-1)} - W_N^{-(v-1)}}$$    (9.2-12a)

    $$\mathbf{P}_B(u, v) = \frac{1}{N}\; \frac{1 - W_N^{-(v-1)(L-1)}}{1 - W_M^{-(u-1)} - W_N^{-(v-1)}}$$    (9.2-12b)



Thus the transform domain convolution operators each consist of a scalar weighting
         (K )
matrix H      and an interpolation matrix ( P ⊗ P ) that performs the dimensionality con-
                          2                                     2
version between the N - element input vector and the M - element output vector.
Generally, the interpolation matrix is relatively sparse, and therefore, transform domain
superposition is quite efficient.
   Now, consider circulant area convolution in the transform domain. Following the
previous analysis it is found (4) that the circulant area convolution filter matrix
reduces to a scalar operator

    $$\boldsymbol{\mathcal{C}} = J\,\mathbf{H}^{(J)}$$    (9.2-13)


Thus, as indicated in Eqs. 9.2-10 to 9.2-13, the Fourier domain convolution filter
matrices can be expressed in a compact closed form for analysis or operational stor-
age. No closed-form expressions have been found for other unitary transforms.
    Fourier domain convolution is computationally efficient because the convolution
operator C is a circulant matrix, and the corresponding transform domain filter matrix
is of diagonal form. Actually, as can be seen from Eq. 9.1-6, the Fourier transform
basis vectors are eigenvectors of C (5). This result does not hold true for superposition
in general, nor for convolution using other unitary transforms. However, in many
instances, the transform domain filter matrices of Eqs. 9.2-2 and 9.2-4 are relatively
sparse, and computational savings can often be achieved by transform domain processing.
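
   The eigenvector property cited above is easy to confirm numerically. The following
one-dimensional sketch, which is illustrative and uses an arbitrary impulse response,
builds a circulant operator and shows that the unitary DFT matrix reduces it to
diagonal form, as in Eq. 9.2-4b.

```python
import numpy as np

# Verify that the unitary DFT matrix diagonalizes a circulant convolution
# operator C, so that the transform domain filter matrix is diagonal.
J = 8
h = np.array([1.0, 4.0, 6.0, 4.0, 1.0, 0.0, 0.0, 0.0])   # extended impulse response
C = np.column_stack([np.roll(h, k) for k in range(J)])    # circulant operator

x, y = np.meshgrid(np.arange(J), np.arange(J), indexing="ij")
A = np.exp(-2j * np.pi * x * y / J) / np.sqrt(J)          # unitary DFT matrix

C_transform = A @ C @ A.conj().T        # A^{-1} = A^H for a unitary matrix (Eq. 9.2-4b)
off_diagonal = C_transform - np.diag(np.diag(C_transform))
print(np.max(np.abs(off_diagonal)))     # on the order of 1e-15: diagonal form
```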


                       Signal             Fourier             Hadamard




                                (a) Finite length convolution




                                (b) Sampled data convolution




                                  (c) Circulant convolution
  FIGURE 9.2-2. One-dimensional Fourier and Hadamard domain convolution matrices.


   Figure 9.2-2 shows the Fourier and Hadamard domain filter matrices for the three
forms of convolution for a one-dimensional input vector and a Gaussian-shaped
impulse response (6). As expected, the transform domain representations are much
more sparse than the data domain representations. Also, the Fourier domain
circulant convolution filter is seen to be of diagonal form. Figure 9.2-3 illustrates the
structure of the three convolution matrices for two-dimensional convolution (4).



9.3. FAST FOURIER TRANSFORM CONVOLUTION

As noted previously, the equivalent output vector for either finite-area or sampled
image convolution can be obtained by an element selection operation on the
extended output vector kE for circulant convolution or its matrix counterpart KE.


               Spatial domain                                   Fourier domain




                                  (a) Finite-area convolution




                                (b) Sampled image convolution




                                   (c) Circulant convolution

         FIGURE 9.2-3. Two-dimensional Fourier domain convolution matrices.



This result, combined with Eq. 9.2-13, leads to a particularly efficient means of con-
volution computation indicated by the following steps:

   1.   Embed the impulse response matrix in the upper left corner of an all-zero
        J × J matrix, J ≥ M for finite-area convolution or J ≥ N for sampled
        infinite-area convolution, and take the two-dimensional Fourier trans-
        form of the extended impulse response matrix, giving

    $$\boldsymbol{\mathcal{H}}_E = \mathbf{A}_J\,\mathbf{H}_E\,\mathbf{A}_J$$    (9.3-1)


     2.   Embed the input data array in the upper left corner of an all-zero J × J
          matrix, and take the two-dimensional Fourier transform of the extended
          input data matrix to obtain


    $$\boldsymbol{\mathcal{F}}_E = \mathbf{A}_J\,\mathbf{F}_E\,\mathbf{A}_J$$    (9.3-2)


     3.   Perform the scalar multiplication


    $$\boldsymbol{\mathcal{K}}_E(m, n) = J\,\boldsymbol{\mathcal{H}}_E(m, n)\,\boldsymbol{\mathcal{F}}_E(m, n)$$    (9.3-3)


          where 1 ≤ m, n ≤ J .

     4.   Take the inverse Fourier transform

    $$\mathbf{K}_E = [\mathbf{A}_J]^{-1}\,\boldsymbol{\mathcal{K}}_E\,[\mathbf{A}_J]^{-1}$$    (9.3-4)


     5.   Extract the desired output matrix


    $$\mathbf{Q} = [\mathbf{S}_{1J}^{(M)}]\,\mathbf{K}_E\,[\mathbf{S}_{1J}^{(M)}]^T$$    (9.3-5a)


or

    $$\mathbf{G} = [\mathbf{S}_{2J}^{(M)}]\,\mathbf{K}_E\,[\mathbf{S}_{2J}^{(M)}]^T$$    (9.3-5b)


It is important that the size of the extended arrays in steps 1 and 2 be chosen large
enough to satisfy the inequalities indicated. If the computational steps are performed
with J = N, the resulting output array, shown in Figure 9.3-1, will contain erroneous
terms in a boundary region of width L – 1 elements, on the top and left-hand side of
the output field. This is the wraparound error associated with incorrect use of the
Fourier domain convolution method. In addition, for finite area (D-type) convolu-
tion, the bottom and right-hand-side strip of output elements will be missing. If the
computation is performed with J = M, the output array will be completely filled with
the correct terms for D-type convolution. To force J = M for B-type convolution, it is
necessary to truncate the bottom and right-hand side of the input array. As a conse-
quence, the top and left-hand-side elements of the output array are erroneous.
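
   A compact NumPy sketch of steps 1 to 5 for finite-area (D-type) convolution follows.
It is an illustration of the procedure rather than a literal transcription: np.fft's
unnormalized transforms absorb the scalar J of Eq. 9.3-3, and the selection step is a
simple slice of the M × M region.

```python
import numpy as np

def fft_finite_area_convolve(F, H):
    """Finite-area (D-type) convolution via the FFT with proper zero padding.

    F : N x N input image array.
    H : L x L impulse response array.
    Returns the full M x M output, M = N + L - 1 (steps 1-5 of Section 9.3).
    """
    N, L = F.shape[0], H.shape[0]
    M = N + L - 1
    J = M                                           # J >= M avoids wraparound error
    H_ext = np.zeros((J, J)); H_ext[:L, :L] = H     # step 1: embed impulse response
    F_ext = np.zeros((J, J)); F_ext[:N, :N] = F     # step 2: embed input data array
    K_ext = np.fft.ifft2(np.fft.fft2(H_ext) * np.fft.fft2(F_ext))  # steps 3 and 4
    return np.real(K_ext[:M, :M])                   # step 5: extract the output matrix

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    F = rng.random((32, 32))
    H = np.full((11, 11), 1.0 / 121.0)              # 11 x 11 uniform impulse response
    Q = fft_finite_area_convolve(F, H)
    print(Q.shape)                                  # (42, 42)
```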




                        FIGURE 9.3-1. Wraparound error effects.



   Figure 9.3-2 illustrates the Fourier transform convolution process with proper
zero padding. The example in Figure 9.3-3 shows the effect of no zero padding. In
both examples, the image has been filtered using an 11 × 11 uniform impulse
response array. The source image of Figure 9.3-3 is 512 × 512 pixels. The source
image of Figure 9.3-2 is 502 × 502 pixels. It has been obtained by truncating the bot-
tom 10 rows and right 10 columns of the source image of Figure 9.3-3. Figure 9.3-4
shows computer printouts of the upper left corner of the processed images. Figure
9.3-4a is the result of finite-area convolution. The same output is realized in Figure
9.3-4b for proper zero padding. Figure 9.3-4c shows the wraparound error effect for
no zero padding.
   In many signal processing applications, the same impulse response operator is
used on different data, and hence step 1 of the computational algorithm need not be
repeated. The filter matrix HE may be either stored functionally or indirectly as a
computational algorithm. Using a fast Fourier transform algorithm, the forward and
inverse transforms require on the order of 2J² log₂ J operations each. The scalar
multiplication requires J² operations, in general, for a total of J²(1 + 4 log₂ J) opera-
tions. For an N × N input array, an M × M output array, and an L × L impulse
response array, finite-area convolution requires N²L² operations, and sampled
image convolution requires M²L² operations. If the dimension of the impulse
response L is sufficiently large with respect to the dimension of the input array N,
Fourier domain convolution will be more efficient than direct convolution, perhaps
by an order of magnitude or more. Figure 9.3-5 is a plot of L versus N for equality




   [Panels (a)–(f): H_E and its Fourier transform, F_E and its Fourier transform,
   K_E and its Fourier transform.]

FIGURE 9.3-2. Fourier transform convolution of the candy_502_luma image with
proper zero padding, clipped magnitude displays of Fourier images.




   [Panels (a)–(f): H_E and its Fourier transform, F_E and its Fourier transform,
   k_E and its Fourier transform.]

FIGURE 9.3-3. Fourier transform convolution of the candy_512_luma image with
improper zero padding, clipped magnitude displays of Fourier images.


 0.001   0.002   0.003   0.005   0.006   0.007   0.008   0.009   0.010   0.011   0.013   0.013   0.013   0.013   0.013
 0.002   0.005   0.007   0.009   0.011   0.014   0.016   0.018   0.021   0.023   0.025   0.025   0.026   0.026   0.026
 0.003   0.007   0.010   0.014   0.017   0.020   0.024   0.027   0.031   0.034   0.038   0.038   0.038   0.039   0.039
 0.005   0.009   0.014   0.018   0.023   0.027   0.032   0.036   0.041   0.046   0.050   0.051   0.051   0.051   0.051
 0.006   0.011   0.017   0.023   0.028   0.034   0.040   0.045   0.051   0.057   0.063   0.063   0.063   0.064   0.064
 0.007   0.014   0.020   0.027   0.034   0.041   0.048   0.054   0.061   0.068   0.075   0.076   0.076   0.076   0.076
 0.008   0.016   0.024   0.032   0.040   0.048   0.056   0.064   0.072   0.080   0.088   0.088   0.088   0.088   0.088
 0.009   0.018   0.027   0.036   0.045   0.054   0.064   0.073   0.082   0.091   0.100   0.100   0.100   0.100   0.101
 0.010   0.020   0.031   0.041   0.051   0.061   0.071   0.081   0.092   0.102   0.112   0.112   0.112   0.113   0.113
 0.011   0.023   0.034   0.045   0.056   0.068   0.079   0.090   0.102   0.113   0.124   0.124   0.125   0.125   0.125
 0.012   0.025   0.037   0.050   0.062   0.074   0.087   0.099   0.112   0.124   0.136   0.137   0.137   0.137   0.137
 0.012   0.025   0.037   0.049   0.062   0.074   0.086   0.099   0.111   0.124   0.136   0.136   0.136   0.136   0.136
 0.012   0.025   0.037   0.049   0.062   0.074   0.086   0.099   0.111   0.123   0.135   0.135   0.135   0.135   0.135
 0.012   0.025   0.037   0.049   0.061   0.074   0.086   0.098   0.110   0.123   0.135   0.135   0.135   0.135   0.134
 0.012   0.025   0.037   0.049   0.061   0.074   0.086   0.098   0.110   0.122   0.134   0.134   0.134   0.134   0.134


                                            (a) Finite-area convolution

 0.001   0.002   0.003   0.005   0.006   0.007   0.008   0.009   0.010   0.011   0.013   0.013   0.013   0.013   0.013
 0.002   0.005   0.007   0.009   0.011   0.014   0.016   0.018   0.021   0.023   0.025   0.025   0.026   0.026   0.026
 0.003   0.007   0.010   0.014   0.017   0.020   0.024   0.027   0.031   0.034   0.038   0.038   0.038   0.039   0.039
 0.005   0.009   0.014   0.018   0.023   0.027   0.032   0.036   0.041   0.046   0.050   0.051   0.051   0.051   0.051
 0.006   0.011   0.017   0.023   0.028   0.034   0.040   0.045   0.051   0.057   0.063   0.063   0.063   0.064   0.064
 0.007   0.014   0.020   0.027   0.034   0.041   0.048   0.054   0.061   0.068   0.075   0.076   0.076   0.076   0.076
 0.008   0.016   0.024   0.032   0.040   0.048   0.056   0.064   0.072   0.080   0.088   0.088   0.088   0.088   0.088
 0.009   0.018   0.027   0.036   0.045   0.054   0.064   0.073   0.082   0.091   0.100   0.100   0.100   0.100   0.101
 0.010   0.020   0.031   0.041   0.051   0.061   0.071   0.081   0.092   0.102   0.112   0.112   0.112   0.113   0.113
 0.011   0.023   0.034   0.045   0.056   0.068   0.079   0.090   0.102   0.113   0.124   0.124   0.125   0.125   0.125
 0.012   0.025   0.037   0.050   0.062   0.074   0.087   0.099   0.112   0.124   0.136   0.137   0.137   0.137   0.137
 0.012   0.025   0.037   0.049   0.062   0.074   0.086   0.099   0.111   0.124   0.136   0.136   0.136   0.136   0.136
 0.012   0.025   0.037   0.049   0.062   0.074   0.086   0.099   0.111   0.123   0.135   0.135   0.135   0.135   0.135
 0.012   0.025   0.037   0.049   0.061   0.074   0.086   0.098   0.110   0.123   0.135   0.135   0.135   0.135   0.134
 0.012   0.025   0.037   0.049   0.061   0.074   0.086   0.098   0.110   0.122   0.134   0.134   0.134   0.134   0.134


                         (b) Fourier transform convolution with proper zero padding

 0.771   0.700   0.626   0.552   0.479   0.407   0.334   0.260   0.187   0.113   0.040   0.036   0.034   0.033   0.034
 0.721   0.655   0.587   0.519   0.452   0.385   0.319   0.252   0.185   0.118   0.050   0.047   0.044   0.044   0.045
 0.673   0.612   0.550   0.488   0.426   0.365   0.304   0.243   0.182   0.122   0.061   0.057   0.055   0.055   0.055
 0.624   0.569   0.513   0.456   0.399   0.344   0.288   0.234   0.180   0.125   0.071   0.067   0.065   0.065   0.065
 0.578   0.528   0.477   0.426   0.374   0.324   0.274   0.225   0.177   0.129   0.081   0.078   0.076   0.075   0.075
 0.532   0.488   0.442   0.396   0.350   0.305   0.260   0.217   0.174   0.133   0.091   0.088   0.086   0.085   0.086
 0.486   0.448   0.407   0.367   0.326   0.286   0.246   0.208   0.172   0.136   0.101   0.098   0.096   0.096   0.096
 0.438   0.405   0.371   0.336   0.301   0.266   0.232   0.200   0.169   0.139   0.110   0.108   0.107   0.106   0.106
 0.387   0.361   0.333   0.304   0.275   0.246   0.218   0.191   0.166   0.142   0.119   0.118   0.117   0.116   0.116
 0.334   0.313   0.292   0.270   0.247   0.225   0.203   0.182   0.163   0.145   0.128   0.127   0.127   0.127   0.127
 0.278   0.264   0.249   0.233   0.218   0.202   0.186   0.172   0.159   0.148   0.136   0.137   0.137   0.137   0.137
 0.273   0.260   0.246   0.231   0.216   0.200   0.185   0.171   0.158   0.147   0.136   0.136   0.136   0.136   0.136
 0.266   0.254   0.241   0.228   0.213   0.198   0.183   0.169   0.157   0.146   0.135   0.135   0.135   0.135   0.135
 0.257   0.246   0.234   0.222   0.209   0.195   0.181   0.168   0.156   0.145   0.135   0.135   0.135   0.135   0.134
 0.247   0.237   0.227   0.215   0.204   0.192   0.179   0.166   0.155   0.144   0.134   0.134   0.134   0.134   0.134


                           (c) Fourier transform convolution without zero padding

FIGURE 9.3-4. Wraparound error for Fourier transform convolution, upper left
corner of processed image.


between direct and Fourier domain finite area convolution. The jaggedness of the
plot, in this example, arises from discrete changes in J (64, 128, 256,...) as N
increases.
   Fourier domain processing is more computationally efficient than direct process-
ing for image convolution if the impulse response is sufficiently large. However, if
the image to be processed is large, the relative computational advantage of Fourier
domain processing diminishes. Also, there are attendant problems of computational




FIGURE 9.3-5. Comparison of direct and Fourier domain processing for finite-area convo-
lution.


accuracy with large Fourier transforms. Both difficulties can be alleviated by a
block-mode filtering technique in which a large image is separately processed in
adjacent overlapped blocks (2, 7–9).
    Figure 9.3-6a illustrates the extraction of an N_B × N_B pixel block from the upper
left corner of a large image array. After convolution with an L × L impulse response,
the resulting M_B × M_B pixel block is placed in the upper left corner of an output




       FIGURE 9.3-6. Geometric arrangement of blocks for block-mode filtering.

data array as indicated in Figure 9.3-6a. Next, a second block of N B × N B pixels is
extracted from the input array to produce a second block of M B × M B output pixels
that will lie adjacent to the first block. As indicated in Figure 9.3-6b, this second
input block must be overlapped by (L – 1) pixels in order to generate an adjacent
output block. The computational process then proceeds until all input blocks are
filled along the first row. If a partial input block remains along the row, zero-value
elements can be added to complete the block. Next, an input block, overlapped by
(L –1) pixels with the first row blocks, is extracted to produce the first block of the
second output row. The algorithm continues in this fashion until all output points are
computed.
   A total of

    $$O_F = N^2 + 2N^2 \log_2 N$$    (9.3-6)

operations is required for Fourier domain convolution over the full size image array.
With block-mode filtering with N_B × N_B input pixel blocks, the required number of
operations is

    $$O_B = R^2 \bigl( N_B^2 + 2 N_B^2 \log_2 N_B \bigr)$$    (9.3-7)

where R represents the largest integer value of the ratio N ⁄ ( N_B + L – 1 ). Hunt (9) has
determined the optimum block size as a function of the original image size and
impulse response size.
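
   A block-mode filtering sketch in the overlap-and-discard style described above is
given below. It is a schematic illustration under simplifying assumptions (square
arrays, and only the 'valid' output region where the impulse response lies fully inside
the image is produced), not Hunt's optimized procedure.

```python
import numpy as np

def block_mode_valid_convolve(F, H, NB=64):
    """Block-mode (overlap-and-discard) filtering sketch.

    Input blocks of NB x NB pixels, overlapped by (L - 1) pixels, each produce
    an MB x MB output block with MB = NB - L + 1; partial edge blocks are
    completed with zero-value elements.  F is N x N, H is L x L.
    """
    N, L = F.shape[0], H.shape[0]
    MB = NB - L + 1
    out_size = N - L + 1
    out = np.zeros((out_size, out_size))
    H_ext = np.zeros((NB, NB)); H_ext[:L, :L] = H
    H_fft = np.fft.fft2(H_ext)                   # impulse response transformed only once
    for r in range(0, out_size, MB):
        for c in range(0, out_size, MB):
            block = np.zeros((NB, NB))           # zero-fill a partial edge block
            src = F[r:r + NB, c:c + NB]
            block[:src.shape[0], :src.shape[1]] = src
            circ = np.real(np.fft.ifft2(np.fft.fft2(block) * H_fft))
            valid = circ[L - 1:, L - 1:]         # discard the wraparound strip
            h = min(MB, out_size - r)
            w = min(MB, out_size - c)
            out[r:r + h, c:c + w] = valid[:h, :w]
    return out
```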


9.4. FOURIER TRANSFORM FILTERING

The discrete Fourier transform convolution processing algorithm of Section 9.3 is
often utilized for computer simulation of continuous Fourier domain filtering. In this
section we consider discrete Fourier transform filter design techniques.


9.4.1. Transfer Function Generation

The first step in the discrete Fourier transform filtering process is generation of the
discrete domain transfer function. For simplicity, the following discussion is limited
to one-dimensional signals. The extension to two dimensions is straightforward.
    Consider a one-dimensional continuous signal f C ( x ) of wide extent which is
bandlimited such that its Fourier transform f C ( ω ) is zero for ω greater than a cut-
off frequency ω 0. This signal is to be convolved with a continuous impulse function
h C ( x ) whose transfer function h C ( ω ) is also bandlimited to ω 0 . From Chapter 1 it is
known that the convolution can be performed either in the spatial domain by the
operation


                                               ∞
                             gC ( x ) =     ∫–∞ fC ( α )hC ( x – α ) dα            (9.4-1a)



or in the continuous Fourier domain by


                                    1      ∞
                      g C ( x ) = -----
                                  2π
                                      -   ∫–∞ fC ( ω )hC ( ω ) exp { iωx } dω      (9.4-1b)

   Chapter 7 has presented techniques for the discretization of the convolution inte-
gral of Eq. 9.4-1. In this process, the continuous impulse response function h C ( x )
must be truncated by spatial multiplication of a window function y(x) to produce the
windowed impulse response


                                      b C ( x ) = h C ( x )y ( x )                  (9.4-2)


where y(x) = 0 for |x| > T. The window function is designed to smooth the truncation
effect. The resulting convolution integral is then approximated as

                                            x+T
                           gC ( x ) =     ∫x – T    fC ( α )b C ( x – α ) dα        (9.4-3)


Next, the output signal g C ( x ) is sampled over 2J + 1 points at a resolution
∆ = π ⁄ ω 0, and the continuous integration is replaced by a quadrature summation at
the same resolution ∆ , yielding the discrete representation

    $$g_C(j\Delta) = \sum_{k=j-K}^{j+K} f_C(k\Delta)\, b_C[(j - k)\Delta]$$    (9.4-4)

where K is the nearest integer value of the ratio T ⁄ ∆.
   Computation of Eq. 9.4-4 by discrete Fourier transform processing requires
formation of the discrete domain transfer function b D ( u ) . If the continuous domain
impulse response function h C ( x ) is known analytically, the samples of the
windowed impulse response function are inserted as the first L = 2K + 1 elements of
a J-element sequence and the remaining J – L elements are set to zero. Thus, let


    $$b_D(p) = \bigl[\, \underbrace{b_C(-K), \ldots, b_C(0), \ldots, b_C(K)}_{L\ \text{terms}},\; 0, \ldots, 0 \,\bigr]$$    (9.4-5)

where 0 ≤ p ≤ P – 1. The terms of b D ( p ) can be extracted from the continuous
impulse response function h C ( x ) and the window function by the sampling
operation

                                  b D ( p ) = y ( x )h C ( x )δ ( x – p∆ )                          (9.4-6)


The next step in the discrete Fourier transform convolution algorithm is to perform a
discrete Fourier transform of b D ( p ) over P points to obtain

    $$b_D(u) = \frac{1}{P} \sum_{p=0}^{P-1} b_D(p)\, \exp\left\{ \frac{-2\pi i p u}{P} \right\}$$    (9.4-7)

where 0 ≤ u ≤ P – 1 .
    If the continuous domain transfer function hC ( ω ) is known analytically, then
b D ( u ) can be obtained directly. It can be shown that


                                                                                  
    $$b_D(u) = \frac{1}{4P\pi^2}\, \exp\left\{ \frac{-i\pi(L - 1)u}{P} \right\} h_C\!\left( \frac{2\pi u}{P\Delta} \right)$$    (9.4-8a)

    $$b_D(P - u) = b_D^*(u)$$    (9.4-8b)


for u = 0, 1,..., P/2, where

    $$b_C(\omega) = h_C(\omega) \circledast y(\omega)$$    (9.4-8c)

and y ( ω ) is the continuous domain Fourier transform of the window function y(x). If
h C ( ω ) and y ( ω ) are known analytically, then, in principle, b C ( ω ) can be obtained
by analytically performing the convolution operation of Eq. 9.4-8c and evaluating
the resulting continuous function at points 2πu ⁄ P∆. In practice, the analytic convo-
lution is often difficult to perform, especially in two dimensions. An alternative is to
perform an analytic inverse Fourier transformation of the transfer function h C ( ω ) to
obtain its continuous domain impulse response h C ( x ) and then form b D ( u ) from the
steps of Eqs. 9.4-5 to 9.4-7. Still another alternative is to form b D ( u ) from h C ( ω )
according to Eqs. 9.4-8a and 9.4-8b, take its discrete inverse Fourier transform, win-
dow the resulting sequence, and then form b D ( u ) from Eq. 9.4-7.
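
   The sampled route of Eqs. 9.4-5 to 9.4-7 is summarized by the sketch below for an
impulse response that is known analytically; the Gaussian h_c and the parameter values
are purely illustrative, and the 1/P scaling follows Eq. 9.4-7 as given.

```python
import numpy as np

def discrete_transfer_function(h_c, K, delta, P, window=np.hanning):
    """Form the discrete domain transfer function from a continuous impulse
    response h_c(x), following Eqs. 9.4-5 to 9.4-7.

    h_c    : callable giving the continuous impulse response h_C(x).
    K      : truncation half-width, so L = 2K + 1 samples are retained.
    delta  : sample spacing (pi / w0 for a cutoff frequency w0).
    P      : length of the extended sequence and of the DFT.
    window : window function generator (numpy's np.hanning by default).
    """
    L = 2 * K + 1
    x = (np.arange(L) - K) * delta
    b_c = h_c(x) * window(L)          # windowed impulse response samples (Eq. 9.4-2)
    b_d = np.zeros(P)
    b_d[:L] = b_c                     # embed as the first L of P elements (Eq. 9.4-5)
    return np.fft.fft(b_d) / P        # discrete domain transfer function (Eq. 9.4-7)

if __name__ == "__main__":
    h_gauss = lambda x: np.exp(-0.5 * x ** 2)   # illustrative impulse response
    B = discrete_transfer_function(h_gauss, K=8, delta=0.5, P=64)
    print(B.shape, B.dtype)
```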


9.4.2. Windowing Functions

The windowing operation performed explicitly in the spatial domain according to
Eq. 9.4-6 or implicitly in the Fourier domain by Eq. 9.4-8 is absolutely imperative if
the wraparound error effect described in Section 9.3 is to be avoided. A common
mistake in image filtering is to set the values of the discrete impulse response func-
tion arbitrarily equal to samples of the continuous impulse response function. The
corresponding extended discrete impulse response function will generally possess
nonzero elements in each of its J elements. That is, the length L of the discrete


impulse response embedded in the extended vector of Eq. 9.4-5 will implicitly be set
equal to J. Therefore, all elements of the output filtering operation will be subject to
wraparound error.
   A variety of window functions have been proposed for discrete linear filtering
(10–12). Several of the most common are listed in Table 9.4-1 and sketched in
Figure 9.4-1. Figure 9.4-2 shows plots of the transfer functions of these window
functions. The window transfer functions consist of a main lobe and sidelobes
whose peaks decrease in magnitude with increasing frequency. Examination of the
structure of Eq. 9.4-8 indicates that the main lobe causes a loss in frequency
response over the signal passband from 0 to ω 0 , while the sidelobes are responsible
for an aliasing error because the windowed impulse response function b C ( ω ) is not
bandlimited. A tapered window function reduces the magnitude of the sidelobes and
consequently attenuates the aliasing error, but the main lobe becomes wider, causing
the signal frequency response within the passband to be reduced. A design trade-off
must be made between these complementary sources of error. Both sources of
degradation can be reduced by increasing the truncation length of the windowed
impulse response, but this strategy will either result in a shorter length output
sequence or an increased number of computational operations.


TABLE 9.4-1. Window Functions a

Function                  Definition

Rectangular               w(n) = 1,                                                0 ≤ n ≤ L – 1

Bartlett (triangular)     w(n) = 2n ⁄ (L – 1),                                     0 ≤ n ≤ (L – 1) ⁄ 2
                          w(n) = 2 – 2n ⁄ (L – 1),                                 (L – 1) ⁄ 2 ≤ n ≤ L – 1

Hanning                   w(n) = (1 ⁄ 2)[1 – cos(2πn ⁄ (L – 1))],                  0 ≤ n ≤ L – 1

Hamming                   w(n) = 0.54 – 0.46 cos(2πn ⁄ (L – 1)),                   0 ≤ n ≤ L – 1

Blackman                  w(n) = 0.42 – 0.5 cos(2πn ⁄ (L – 1))
                                 + 0.08 cos(4πn ⁄ (L – 1)),                        0 ≤ n ≤ L – 1

Kaiser                    w(n) = I₀{ωₐ[((L – 1) ⁄ 2)² – (n – (L – 1) ⁄ 2)²]^(1/2)}
                                 ⁄ I₀{ωₐ(L – 1) ⁄ 2},                              0 ≤ n ≤ L – 1

a
    I₀{ · } is the modified zeroth-order Bessel function of the first kind and ωₐ is a design parameter.
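
   NumPy supplies the windows of Table 9.4-1 directly, as sketched below; the
correspondence beta = ωₐ(L – 1)/2 for the Kaiser window follows from comparing
NumPy's definition with the table and should be treated as an assumption to verify.

```python
import numpy as np

L = 31                                    # window length (number of taps)
windows = {
    "rectangular": np.ones(L),
    "bartlett":    np.bartlett(L),        # triangular window
    "hanning":     np.hanning(L),
    "hamming":     np.hamming(L),
    "blackman":    np.blackman(L),
    "kaiser":      np.kaiser(L, 6.0),     # beta plays the role of w_a(L-1)/2
}

P = 1024                                  # zero-padded DFT length
for name, w in windows.items():
    W = np.abs(np.fft.rfft(w, P))
    W_db = 20 * np.log10(W / W[0] + 1e-12)    # normalized log-magnitude transfer function
    print(f"{name:12s} coherent gain = {w.mean():.3f}, "
          f"response at bin 40 = {W_db[40]:6.1f} dB")
```

Plotting W_db for each window reproduces the main-lobe and sidelobe behavior shown
in Figure 9.4-2.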




                   FIGURE 9.4-1. One-dimensional window functions.




9.4.3. Discrete Domain Transfer Functions

In practice, it is common to define the discrete domain transfer function directly in the
discrete Fourier transform frequency space. The following are definitions of several
widely used transfer functions for an N × N pixel image. Applications of these filters
are presented in Chapter 10.


1. Zonal low-pass filter:


          H ( u, v ) = 1    0≤u≤C–1                and 0 ≤ v ≤ C – 1


                             0≤u≤C–1               and N + 1 – C ≤ v ≤ N – 1


                             N + 1 – C ≤ u ≤ N – 1 and 0 ≤ v ≤ C – 1


                             N + 1 – C ≤ u ≤ N – 1 and N + 1 – C ≤ v ≤ N – 1     (9.4-9a)

          H ( u, v ) = 0     otherwise                                           (9.4-9b)


where C is the filter cutoff frequency for 0 < C ≤ 1 + N ⁄ 2. Figure 9.4-3 illustrates the
low-pass filter zones.




              (a) Rectangular                                (b) Triangular




                (c) Hanning                                  (d) Hamming




                                              (e) Blackman

        FIGURE 9.4-2. Transfer functions of one-dimensional window functions.




2. Zonal high-pass filter:


$$H(0, 0) = 0$$   (9.4-10a)

$$H(u, v) = 0 \quad \text{for} \quad
\begin{cases}
0 \le u \le C - 1 & \text{and } 0 \le v \le C - 1 \\
0 \le u \le C - 1 & \text{and } N + 1 - C \le v \le N - 1 \\
N + 1 - C \le u \le N - 1 & \text{and } 0 \le v \le C - 1 \\
N + 1 - C \le u \le N - 1 & \text{and } N + 1 - C \le v \le N - 1
\end{cases}$$   (9.4-10b)

$$H(u, v) = 1 \quad \text{otherwise}$$   (9.4-10c)




                    FIGURE 9.4-3. Zonal filter transfer function definition.




3. Gaussian filter:


$$H(u, v) = G(u, v) \quad \text{for} \quad
\begin{cases}
0 \le u \le N/2 & \text{and } 0 \le v \le N/2 \\
0 \le u \le N/2 & \text{and } 1 + N/2 \le v \le N - 1 \\
1 + N/2 \le u \le N - 1 & \text{and } 0 \le v \le N/2 \\
1 + N/2 \le u \le N - 1 & \text{and } 1 + N/2 \le v \le N - 1
\end{cases}$$   (9.4-11a)

where

$$G(u, v) = \exp\left\{ -\frac{1}{2}\left[ (s_u u)^2 + (s_v v)^2 \right] \right\}$$   (9.4-11b)


and $s_u$ and $s_v$ are the Gaussian filter spread factors.
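
A companion sketch of the Gaussian transfer function of Eq. 9.4-11 follows. It assumes that the index range 1 + N/2 to N − 1 represents mirrored (negative) frequencies, so that the response is symmetric about the DFT origin; that convention, the function name, and the NumPy formulation are assumptions.

```python
import numpy as np

def gaussian_transfer(N, su, sv):
    """Gaussian transfer function of Eq. 9.4-11 on the unshifted N x N DFT grid."""
    idx = np.arange(N)
    idx = np.where(idx > N // 2, idx - N, idx)   # assumed mirror symmetry about the folding index
    U, V = np.meshgrid(idx, idx, indexing='ij')
    return np.exp(-0.5 * ((su * U) ** 2 + (sv * V) ** 2))
```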


4. Butterworth low-pass filter:


$$H(u, v) = B(u, v) \quad \text{for} \quad
\begin{cases}
0 \le u \le N/2 & \text{and } 0 \le v \le N/2 \\
0 \le u \le N/2 & \text{and } 1 + N/2 \le v \le N - 1 \\
1 + N/2 \le u \le N - 1 & \text{and } 0 \le v \le N/2 \\
1 + N/2 \le u \le N - 1 & \text{and } 1 + N/2 \le v \le N - 1
\end{cases}$$   (9.4-12a)

where

$$B(u, v) = \frac{1}{1 + \left[ \dfrac{(u^2 + v^2)^{1/2}}{C} \right]^{2n}}$$   (9.4-12b)

where the integer variable n is the order of the filter. The Butterworth low-pass filter
provides an attenuation of 50% at the cutoff frequency $C = (u^2 + v^2)^{1/2}$.
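
The Butterworth transfer functions of Eqs. 9.4-12 and 9.4-13 can be sketched in the same way, again assuming mirrored frequencies above the folding index; the small constant that guards the division at the origin of the high-pass filter is an implementation detail, not part of the text, and the function names are illustrative.

```python
import numpy as np

def _radial_frequency(N):
    """Radial frequency (u^2 + v^2)^(1/2) on the unshifted DFT grid, assuming mirror symmetry."""
    idx = np.arange(N)
    idx = np.where(idx > N // 2, idx - N, idx)
    U, V = np.meshgrid(idx, idx, indexing='ij')
    return np.sqrt(U.astype(float) ** 2 + V.astype(float) ** 2)

def butterworth_lowpass(N, C, n):
    """Butterworth low-pass transfer function of Eq. 9.4-12."""
    r = _radial_frequency(N)
    return 1.0 / (1.0 + (r / C) ** (2 * n))

def butterworth_highpass(N, C, n):
    """Butterworth high-pass transfer function of Eq. 9.4-13."""
    r = _radial_frequency(N)
    r[0, 0] = 1e-12                      # avoid division by zero at the DFT origin
    return 1.0 / (1.0 + (C / r) ** (2 * n))
```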


5. Butterworth high-pass filter:


$$H(u, v) = B(u, v) \quad \text{for} \quad
\begin{cases}
0 \le u \le N/2 & \text{and } 0 \le v \le N/2 \\
0 \le u \le N/2 & \text{and } 1 + N/2 \le v \le N - 1 \\
1 + N/2 \le u \le N - 1 & \text{and } 0 \le v \le N/2 \\
1 + N/2 \le u \le N - 1 & \text{and } 1 + N/2 \le v \le N - 1
\end{cases}$$   (9.4-13a)

where

$$B(u, v) = \frac{1}{1 + \left[ \dfrac{C}{(u^2 + v^2)^{1/2}} \right]^{2n}}$$   (9.4-13b)

Figure 9.4-4 shows the transfer functions of zonal and Butterworth low- and high-
pass filters for a 512 × 512 pixel image.


(a) Zonal low-pass                             (b) Butterworth low-pass

(c) Zonal high-pass                            (d) Butterworth high-pass

FIGURE 9.4-4. Zonal and Butterworth low- and high-pass transfer functions; 512 × 512 images;
cutoff frequency = 64.


9.5. SMALL GENERATING KERNEL CONVOLUTION

It is possible to perform convolution on an N × N image array F(j, k) with an
arbitrary L × L impulse response array H(j, k) by a sequential technique called small
generating kernel (SGK) convolution (13–16). Figure 9.5-1 illustrates the decompo-
sition process in which a L × L prototype impulse response array H( j, k) is sequen-
tially decomposed into 3 × 3 pixel SGKs according to the relation

$$\hat{H}(j, k) = K_1(j, k) \circledast K_2(j, k) \circledast \cdots \circledast K_Q(j, k)$$   (9.5-1)

where $\hat{H}(j, k)$ is the synthesized impulse response array, the symbol $\circledast$ denotes cen-
tered two-dimensional finite-area convolution, as defined by Eq. 7.1-14, and $K_i(j, k)$
is the ith 3 × 3 pixel SGK of the decomposition, where $Q = (L - 1)/2$. The SGK
convolution technique can be extended to larger SGKs. Generally, the SGK
synthesis of Eq. 9.5-1 is not exact. Techniques have been developed for choosing the
SGKs to minimize the mean-square error between $\hat{H}(j, k)$ and $H(j, k)$ (13).




FIGURE 9.5-1. Cascade decomposition of a two-dimensional impulse response array into
small generating kernels.



   Two-dimensional convolution can be performed sequentially without approxima-
tion error by utilizing the singular-value decomposition technique described in
Appendix A1.2 in conjunction with the SGK decomposition (17–19). With this method,
called SVD/SGK convolution, the impulse response array H(j, k) is regarded as a
matrix H. Suppose that H is orthogonally separable, such that it can be expressed in
the outer product form

$$\mathbf{H} = \mathbf{a}\mathbf{b}^T$$   (9.5-2)

where a and b are column and row operator vectors, respectively. Then, the two-
dimensional convolution operation can be performed by first convolving the col-
umns of F ( j, k ) with the impulse response sequence a(j) corresponding to the vec-
tor a, and then convolving the rows of that resulting array with the sequence b(k)
corresponding to the vector b. If H is not separable, the matrix can be expressed as a
sum of separable matrices by the singular-value decomposition by which

$$\mathbf{H} = \sum_{i=1}^{R} \mathbf{H}_i$$   (9.5-3a)

$$\mathbf{H}_i = s_i \mathbf{a}_i \mathbf{b}_i^T$$   (9.5-3b)

where $R \ge 1$ is the rank of $\mathbf{H}$ and $s_i$ is the ith singular value of $\mathbf{H}$. The vectors $\mathbf{a}_i$ and $\mathbf{b}_i$
are the $L \times 1$ eigenvectors of $\mathbf{H}\mathbf{H}^T$ and $\mathbf{H}^T\mathbf{H}$, respectively.
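
A minimal sketch of the separable-channel idea behind Eqs. 9.5-2 and 9.5-3: the impulse response matrix is decomposed by NumPy's SVD, and each channel is applied as a column convolution followed by a row convolution, summing the channel outputs. The SGK stage of Eq. 9.5-4 is omitted, and the function name, the rank tolerance, and the "same"-size boundary handling are assumptions rather than the book's SVD/SGK procedure.

```python
import numpy as np

def svd_separable_convolve(F, H):
    """Convolve image F with impulse response H as a sum of separable passes (Eq. 9.5-3)."""
    U, s, Vt = np.linalg.svd(H)
    rank = int(np.sum(s > 1e-12 * s[0]))          # assumed numerical rank tolerance
    G = np.zeros_like(F, dtype=float)
    for i in range(rank):
        a = s[i] * U[:, i]                        # column impulse response of channel i
        b = Vt[i, :]                              # row impulse response of channel i
        cols = np.apply_along_axis(np.convolve, 0, F.astype(float), a, mode='same')
        G += np.apply_along_axis(np.convolve, 1, cols, b, mode='same')
    return G
```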
   Each eigenvector $\mathbf{a}_i$ and $\mathbf{b}_i$ of Eq. 9.5-3 can be considered to be a one-dimen-
sional sequence, which can be decomposed by a small generating kernel expansion as

$$a_i(j) = c_i \left[ a_{i1}(j) \circledast \cdots \circledast a_{iq}(j) \circledast \cdots \circledast a_{iQ}(j) \right]$$   (9.5-4a)

$$b_i(k) = r_i \left[ b_{i1}(k) \circledast \cdots \circledast b_{iq}(k) \circledast \cdots \circledast b_{iQ}(k) \right]$$   (9.5-4b)


where $a_{iq}(j)$ and $b_{iq}(k)$ are $3 \times 1$ impulse response sequences corresponding to the
ith singular-value channel and the qth SGK expansion. The terms $c_i$ and $r_i$ are col-
umn and row gain constants. They are equal to the sum of the elements of their
respective sequences if the sum is nonzero, and equal to the sum of the magnitudes
otherwise. The former case applies for a unit-gain filter impulse response, while the
latter case applies for a differentiating filter.

FIGURE 9.5-2. Nonseparable SVD/SGK expansion.

    As a result of the linearity of the SVD expansion of Eq. 9.5-3b, the large size
impulse response array $H_i(j, k)$ corresponding to the matrix $\mathbf{H}_i$ of Eq. 9.5-3a can be
synthesized by sequential 3 × 3 convolutions according to the relation


$$H_i(j, k) = r_i c_i \left[ K_{i1}(j, k) \circledast \cdots \circledast K_{iq}(j, k) \circledast \cdots \circledast K_{iQ}(j, k) \right]$$   (9.5-5)


where $K_{iq}(j, k)$ is the qth SGK of the ith SVD channel. Each $K_{iq}(j, k)$ is formed by an
outer product expansion of a pair of the $a_{iq}(j)$ and $b_{iq}(k)$ terms of Eq. 9.5-4. The
ordering is important only for low-precision computation, when roundoff error becomes
a consideration. Figure 9.5-2 is the flowchart for SVD/SGK convolution. The weight-
ing terms in the figure are


$$W_i = s_i r_i c_i$$   (9.5-6)


Reference 19 describes the design procedure for computing the $K_{iq}(j, k)$.


REFERENCES

 1. W. K. Pratt, “Generalized Wiener Filtering Computation Techniques,” IEEE Trans.
    Computers, C-21, 7, July 1972, 636–641.
 2. T. G. Stockham, Jr., “High Speed Convolution and Correlation,” Proc. Spring Joint
    Computer Conference, 1966, 229–233.
 3. W. M. Gentleman and G. Sande, “Fast Fourier Transforms for Fun and Profit,” Proc.
    Fall Joint Computer Conference, 1966, 563–578.


 4. W. K. Pratt, “Vector Formulation of Two-Dimensional Signal Processing Operations,”
    Computer Graphics and Image Processing, 4, 1, March 1975, 1–24.
 5. B. R. Hunt, “A Matrix Theory Proof of the Discrete Convolution Theorem,” IEEE
    Trans. Audio and Electroacoustics, AU-19, 4, December 1973, 285–288.
 6. W. K. Pratt, “Transform Domain Signal Processing Techniques,” Proc. National Elec-
    tronics Conference, Chicago, 1974.
 7. H. D. Helms, “Fast Fourier Transform Method of Computing Difference Equations and
    Simulating Filters,” IEEE Trans. Audio and Electroacoustics, AU-15, 2, June 1967, 85–
    90.
 8. M. P. Ekstrom and V. R. Algazi, “Optimum Design of Two-Dimensional Nonrecursive
    Digital Filters,” Proc. 4th Asilomar Conference on Circuits and Systems, Pacific Grove,
    CA, November 1970.
 9. B. R. Hunt, “Computational Considerations in Digital Image Enhancement,” Proc. Con-
    ference on Two-Dimensional Signal Processing, University of Missouri, Columbia, MO,
    October 1971.
10. A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice Hall, Engle-
    wood Cliffs, NJ, 1975.
11. R. B. Blackman and J. W. Tukey, The Measurement of Power Spectra, Dover Publica-
    tions, New York, 1958.
12. J. F. Kaiser, “Digital Filters”, Chapter 7 in Systems Analysis by Digital Computer, F. F.
    Kuo and J. F. Kaiser, Eds., Wiley, New York, 1966.
13. J. F. Abramatic and O. D. Faugeras, “Design of Two-Dimensional FIR Filters from
    Small Generating Kernels,” Proc. IEEE Conference on Pattern Recognition and Image
    Processing, Chicago, May 1978.
14. W. K. Pratt, J. F. Abramatic, and O. D. Faugeras, “Method and Apparatus for Improved
    Digital Image Processing,” U.S. patent 4,330,833, May 18, 1982.
15. J. F. Abramatic and O. D. Faugeras, “Sequential Convolution Techniques for Image Fil-
    tering,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 1, February
    1982, 1–10.
16. J. F. Abramatic and O. D. Faugeras, “Correction to Sequential Convolution Techniques
    for Image Filtering,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30,
    2, April 1982, 346.
17. W. K. Pratt, “Intelligent Image Processing Display Terminal,” Proc. SPIE, 199, August
    1979, 189–194.
18. J. F. Abramatic and S. U. Lee, “Singular Value Decomposition of 2-D Impulse
    Responses,” Proc. International Conference on Acoustics, Speech, and Signal Process-
    ing, Denver, CO, April 1980, 749–752.
19. S. U. Lee, “Design of SVD/SGK Convolution Filters for Image Processing,” Report
    USCIPI 950, University of Southern California, Image Processing Institute, January 1980.




PART 4

IMAGE IMPROVEMENT

The use of digital processing techniques for image improvement has received much
interest with the publicity given to applications in space imagery and medical
research. Other applications include image improvement for photographic surveys
and industrial radiographic analysis.
   Image improvement is a term coined to denote three types of image manipulation
processes: image enhancement, image restoration, and geometrical image modi-
fication. Image enhancement entails operations that improve the appearance to a
human viewer, or operations to convert an image to a format better suited to
machine processing. Image restoration has commonly been defined as the
modification of an observed image in order to compensate for defects in the imaging
system that produced the observed image. Geometrical image modification includes
image magnification, minification, rotation, and nonlinear spatial warping.
   Chapter 10 describes several techniques of monochrome and color image
enhancement. The chapters that follow develop models for image formation and
restoration, and present methods of point and spatial image restoration. The final
chapter of this part considers geometrical image modification.








10
IMAGE ENHANCEMENT




Image enhancement processes consist of a collection of techniques that seek to
improve the visual appearance of an image or to convert the image to a form better
suited for analysis by a human or a machine. In an image enhancement system, there
is no conscious effort to improve the fidelity of a reproduced image with regard to
some ideal form of the image, as is done in image restoration. Actually, there is
some evidence to indicate that often a distorted image, for example, an image with
amplitude overshoot and undershoot about its object edges, is more subjectively
pleasing than a perfectly reproduced original.
   For image analysis purposes, the definition of image enhancement stops short of
information extraction. As an example, an image enhancement system might
emphasize the edge outline of objects in an image by high-frequency filtering. This
edge-enhanced image would then serve as an input to a machine that would trace the
outline of the edges, and perhaps make measurements of the shape and size of the
outline. In this application, the image enhancement processor would emphasize
salient features of the original image and simplify the processing task of a data-
extraction machine.
   There is no general unifying theory of image enhancement at present because
there is no general standard of image quality that can serve as a design criterion for
an image enhancement processor. Consideration is given here to a variety of tech-
niques that have proved useful for human observation improvement and image anal-
ysis.


10.1. CONTRAST MANIPULATION

One of the most common defects of photographic or electronic images is poor con-
trast resulting from a reduced, and perhaps nonlinear, image amplitude range. Image





        FIGURE 10.1-1. Continuous and quantized image contrast enhancement.




contrast can often be improved by amplitude rescaling of each pixel (1,2).
Figure 10.1-1a illustrates a transfer function for contrast enhancement of a typical
continuous amplitude low-contrast image. For continuous amplitude images, the
transfer function operator can be implemented by photographic techniques, but it is
often difficult to realize an arbitrary transfer function accurately. For quantized
amplitude images, implementation of the transfer function is a relatively simple
task. However, in the design of the transfer function operator, consideration must be
given to the effects of amplitude quantization. With reference to Figure 10.1-1b,
suppose that an original image is quantized to J levels, but it occupies a smaller
range. The output image is also assumed to be restricted to J levels, and the mapping
is linear. In the mapping strategy indicated in Figure 10.1-1b, the output level
chosen is that level closest to the exact mapping of an input level. It is obvious from
the diagram that the output image will have unoccupied levels within its range, and
some of the gray scale transitions will be larger than in the original image. The latter
effect may result in noticeable gray scale contouring. If the output image is
quantized to more levels than the input image, it is possible to approach a
linear placement of output levels, and hence, decrease the gray scale contouring
effect.




                             (a) Linear image scaling




                       (b) Linear image scaling with clipping




                            (c) Absolute value scaling

                       FIGURE 10.1-2. Image scaling methods.


10.1.1. Amplitude Scaling

A digitally processed image may occupy a range different from the range of the
original image. In fact, the numerical range of the processed image may encompass
negative values, which cannot be mapped directly into a light intensity range. Figure
10.1-2 illustrates several possibilities of scaling an output image back into the
domain of values occupied by the original image. By the first technique, the pro-
cessed image is linearly mapped over its entire range, while by the second technique,
the extreme amplitude values of the processed image are clipped to maximum and
minimum limits. The second technique is often subjectively preferable, especially
for images in which a relatively small number of pixels exceed the limits. Contrast
enhancement algorithms often possess an option to clip a fixed percentage of the
amplitude values on each end of the amplitude scale. In medical image enhancement
applications, the contrast modification operation shown in Figure 10.1-2b, for a ≥ 0,
is called a window-level transformation. The window value is the width of the linear
slope, b – a; the level is located at the midpoint c of the slope line. The third
technique of amplitude scaling, shown in Figure 10.1-2c, utilizes an absolute value
transformation for visualizing an image with negatively valued pixels. This is a




                           (a) Linear, full range, − 0.147 to 0.169




          (b) Clipping, 0.000 to 0.169              (c) Absolute value, 0.000 to 0.169

FIGURE 10.1-3. Image scaling of the Q component of the YIQ representation of the
dolls_gamma color image.


useful transformation for systems that utilize the two's complement numbering con-
vention for amplitude representation. In such systems, if the amplitude of a pixel
overshoots +1.0 (maximum luminance white) by a small amount, it wraps around by
the same amount to –1.0, which is also maximum luminance white. Similarly, pixel
undershoots remain near black.
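
A minimal sketch of the three scaling methods of Figure 10.1-2 follows, assuming a unit output range; the function names and the treatment of the clip limits a and b are illustrative.

```python
import numpy as np

def scale_full_range(F):
    """Linear scaling of Figure 10.1-2a: map the full range of F onto 0.0 ... 1.0."""
    fmin, fmax = F.min(), F.max()
    return (F - fmin) / (fmax - fmin)

def scale_with_clipping(F, a, b):
    """Window-level scaling of Figure 10.1-2b: linear between clip limits a and b."""
    return np.clip((F - a) / (b - a), 0.0, 1.0)

def scale_absolute(F):
    """Absolute value scaling of Figure 10.1-2c, normalized to the unit range."""
    A = np.abs(F)
    return A / A.max()
```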
   Figure 10.1-3 illustrates the amplitude scaling of the Q component of the YIQ
transformation, shown in Figure 3.5-14, of a monochrome image containing nega-
tive pixels. Figure 10.1-3a presents the result of amplitude scaling with the linear
function of Figure 10.1-2a over the amplitude range of the image. In this example,
the most negative pixels are mapped to black (0.0), and the most positive pixels are
mapped to white (1.0). Amplitude scaling in which negative value pixels are clipped
to zero is shown in Figure 10.1-3b. The black regions of the image correspond to




             (a) Original                      (b) Original histogram




 (c) Min. clip = 0.17, max. clip = 0.64     (d) Enhancement histogram




 (e) Min. clip = 0.24, max. clip = 0.35     (f) Enhancement histogram

FIGURE 10.1-4. Window-level contrast stretching of an earth satellite image.


negative pixel values of the Q component. Absolute value scaling is presented in
Figure 10.1-3c.
   Figure 10.1-4 shows examples of contrast stretching of a poorly digitized original
satellite image along with gray scale histograms of the original and enhanced pic-
tures. In Figure 10.1-4c, the clip levels are set at the histogram limits of the original,
while in Figure 10.1-4e, the clip levels truncate 5% of the original image upper and
lower level amplitudes. It is readily apparent from the histogram of Figure 10.1-4f
that the contrast-stretched image of Figure 10.1-4e has many unoccupied amplitude
levels. Gray scale contouring is at the threshold of visibility.

10.1.2. Contrast Modification

Section 10.1.1 dealt with amplitude scaling of images that do not properly utilize the
dynamic range of a display; they may lie partly outside the dynamic range or
occupy only a portion of the dynamic range. In this section, attention is directed to
point transformations that modify the contrast of an image within a display's
dynamic range.
   Figure 10.1-5a contains an original image of a jet aircraft that has been digitized to
256 gray levels and numerically scaled over the range of 0.0 (black) to 1.0 (white).




                    (a) Original                       (b) Original histogram




                (c) Transfer function                 (d ) Contrast stretched

        FIGURE 10.1-5. Window-level contrast stretching of the jet_mon image.




             (a ) Square function                                     (b ) Square output




              (c ) Cube function                                      (d ) Cube output

     FIGURE 10.1-6. Square and cube contrast modification of the jet_mon image.




The histogram of the image is shown in Figure 10.1-5b. Examination of the
histogram of the image reveals that the image contains relatively few low- or high-
amplitude pixels. Consequently, applying the window-level contrast stretching
function of Figure 10.1-5c results in the image of Figure 10.1-5d, which possesses
better visual contrast but does not exhibit noticeable visual clipping.
   Consideration will now be given to several nonlinear point transformations, some
of which will be seen to improve visual contrast, while others clearly impair visual
contrast.
   Figures 10.1-6 and 10.1-7 provide examples of power law point transformations
in which the processed image is defined by


$$G(j, k) = \left[ F(j, k) \right]^p$$   (10.1-1)




            (a) Square root function                                                   (b) Square root output




             (c ) Cube root function                                                     (d ) Cube root output

 FIGURE 10.1-7. Square root and cube root contrast modification of the jet_mon image.


where 0.0 ≤ F ( j, k ) ≤ 1.0 represents the original image and p is the power law vari-
able. It is important that the amplitude limits of Eq. 10.1-1 be observed; processing
of the integer code (e.g., 0 to 255) by Eq. 10.1-1 will give erroneous results. The
square function provides the best visual result. The rubber band transfer function
shown in Figure 10.1-8a provides a simple piecewise linear approximation to the
power law curves. It is often useful in interactive enhancement machines in which
the inflection point is interactively placed.
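
A minimal sketch of the power-law transform of Eq. 10.1-1, with a guard for the unit-range requirement noted above; the function name and the error handling are assumptions.

```python
import numpy as np

def power_law(F, p):
    """Power-law contrast modification of Eq. 10.1-1; F must lie in 0.0 ... 1.0."""
    F = np.asarray(F, dtype=float)
    if F.min() < 0.0 or F.max() > 1.0:
        raise ValueError("F must be scaled to the range 0.0 to 1.0 before applying Eq. 10.1-1")
    return F ** p

# p = 2 gives the square transform of Figure 10.1-6; p = 1/3 the cube root of Figure 10.1-7.
```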
   The Gaussian error function behaves like a square function for low-amplitude
pixels and like a square root function for high-amplitude pixels. It is defined as

$$G(j, k) = \frac{\operatorname{erf}\left\{ \dfrac{F(j, k) - 0.5}{a\sqrt{2}} \right\} + \dfrac{0.5}{a\sqrt{2}}}{2\operatorname{erf}\left\{ \dfrac{0.5}{a\sqrt{2}} \right\}}$$   (10.1-2a)




           (a ) Rubber-band function                                               (b ) Rubber-band output

        FIGURE 10.1-8. Rubber-band contrast modification of the jet_mon image.




where


$$\operatorname{erf}\{x\} = \frac{2}{\sqrt{\pi}} \int_0^x \exp\{-y^2\}\, dy$$   (10.1-2b)



and a is the standard deviation of the Gaussian distribution.
   The logarithm function is useful for scaling image arrays with a very wide
dynamic range. The logarithmic point transformation is given by


$$G(j, k) = \frac{\log_e\{1.0 + aF(j, k)\}}{\log_e\{2.0\}}$$   (10.1-3)


under the assumption that 0.0 ≤ F ( j, k ) ≤ 1.0, where a is a positive scaling factor.
Figure 8.2-4 illustrates the logarithmic transformation applied to an array of Fourier
transform coefficients.
   There are applications in image processing in which monotonically decreasing
and nonmonotonic amplitude scaling is useful. For example, contrast reverse and
contrast inverse transfer functions, as illustrated in Figure 10.1-9, are often helpful
in visualizing detail in dark areas of an image. The reverse function is defined as


                                     G ( j, k ) = 1.0 – F ( j, k )                                            (10.1-4)




              (a) Reverse function                           (b) Reverse function output




               (c) Inverse function                           (d) Inverse function output

FIGURE 10.1-9. Reverse and inverse function contrast modification of the jet_mon image.


where $0.0 \le F(j, k) \le 1.0$. The inverse function

$$G(j, k) = \begin{cases}
1.0 & \text{for } 0.0 \le F(j, k) < 0.1 \quad \text{(10.1-5a)} \\[6pt]
\dfrac{0.1}{F(j, k)} & \text{for } 0.1 \le F(j, k) \le 1.0 \quad \text{(10.1-5b)}
\end{cases}$$

is clipped at the 10% input amplitude level to maintain the output amplitude within
the unit range.
    Amplitude-level slicing, as illustrated in Figure 10.1-10, is a useful interactive
tool for visually analyzing the spatial distribution of pixels of certain amplitude
within an image. With the function of Figure 10.1-10a, all pixels within the ampli-
tude passband are rendered maximum white in the output, and pixels outside the
passband are rendered black. Pixels outside the amplitude passband are displayed in
their original state with the function of Figure 10.1-10b.




            FIGURE 10.1-10. Level slicing contrast modification functions.



10.2. HISTOGRAM MODIFICATION

The luminance histogram of a typical natural scene that has been linearly quantized
is usually highly skewed toward the darker levels; a majority of the pixels possess
a luminance less than the average. In such images, detail in the darker regions is
often not perceptible. One means of enhancing these types of images is a technique
called histogram modification, in which the original image is rescaled so that the
histogram of the enhanced image follows some desired form. Andrews, Hall, and
others (3–5) have produced enhanced imagery by a histogram equalization process
for which the histogram of the enhanced image is forced to be uniform. Frei (6) has
explored the use of histogram modification procedures that produce enhanced
images possessing exponential or hyperbolic-shaped histograms. Ketcham (7) and
Hummel (8) have demonstrated improved results by an adaptive histogram modifi-
cation procedure.




FIGURE 10.2-1. Approximate gray level histogram equalization with unequal number of
quantization levels.




10.2.1. Nonadaptive Histogram Modification

Figure 10.2-1 gives an example of histogram equalization. In the figure, $H_F(c)$, for
c = 1, 2,..., C, represents the fractional number of pixels in an input image whose
amplitude is quantized to the cth reconstruction level. Histogram equalization seeks
to produce an output image field G by point rescaling such that the normalized
gray-level histogram $H_G(d) = 1/D$ for d = 1, 2,..., D. In the example of Figure
10.2-1, the number of output levels is set at one-half of the number of input levels. The
scaling algorithm is developed as follows. The average value of the histogram is
computed. Then, starting at the lowest gray level of the original, the pixels in the
quantization bins are combined until the sum is closest to the average. All of these
pixels are then rescaled to the new first reconstruction level at the midpoint of the
enhanced image first quantization bin. The process is repeated for higher-value gray
levels. If the number of reconstruction levels of the original image is large, it is
possible to rescale the gray levels so that the enhanced image histogram is almost
constant. It should be noted that the number of reconstruction levels of the enhanced
image must be less than the number of levels of the original image to provide proper
gray scale redistribution if all pixels in each quantization level are to be treated
similarly. This process results in a somewhat larger quantization error. It is possible to
perform the gray scale histogram equalization process with the same number of gray
levels for the original and enhanced images, and still achieve a constant histogram of
the enhanced image, by randomly redistributing pixels from input to output
quantization bins.

   The histogram modification process can be considered to be a monotonic point
transformation $g_d = T\{f_c\}$ for which the input amplitude variable $f_1 \le f_c \le f_C$ is
mapped into an output variable $g_1 \le g_d \le g_D$ such that the output probability distri-
bution $P_R\{g_d = b_d\}$ follows some desired form for a given input probability distri-
bution $P_R\{f_c = a_c\}$, where $a_c$ and $b_d$ are reconstruction values of the cth and dth
levels. Clearly, the input and output probability distributions must each sum to unity.
Thus,


$$\sum_{c=1}^{C} P_R\{f_c = a_c\} = 1$$   (10.2-1a)

$$\sum_{d=1}^{D} P_R\{g_d = b_d\} = 1$$   (10.2-1b)



Furthermore, the cumulative distributions must equate for any input index c. That is,
the probability that pixels in the input image have an amplitude less than or equal to
$a_c$ must be equal to the probability that pixels in the output image have amplitude
less than or equal to $b_d$, where $b_d = T\{a_c\}$, because the transformation is mono-
tonic. Hence

$$\sum_{n=1}^{d} P_R\{g_n = b_n\} = \sum_{m=1}^{c} P_R\{f_m = a_m\}$$   (10.2-2)

The summation on the right is the cumulative probability distribution of the input
image. For a given image, the cumulative distribution is replaced by the cumulative
histogram to yield the relationship

$$\sum_{n=1}^{d} P_R\{g_n = b_n\} = \sum_{m=1}^{c} H_F(m)$$   (10.2-3)

Equation 10.2-3 now must be inverted to obtain a solution for $g_d$ in terms of $f_c$. In
general, this is a difficult or impossible task to perform analytically, but certainly
possible by numerical methods. The resulting solution is simply a table that indi-
cates the output image level for each input image level.
    The histogram transformation can be obtained in approximate form by replacing
the discrete probability distributions of Eq. 10.2-2 by continuous probability densi-
ties. The resulting approximation is

$$\int_{g_{\min}}^{g} p_g(g)\, dg = \int_{f_{\min}}^{f} p_f(f)\, df$$   (10.2-4)
TABLE 10.2-1. Histogram Modification Transfer Functions^a

Uniform
   Output probability density model: $p_g(g) = \dfrac{1}{g_{\max} - g_{\min}}$ for $g_{\min} \le g \le g_{\max}$
   Transfer function: $g = (g_{\max} - g_{\min}) P_f(f) + g_{\min}$

Exponential
   Output probability density model: $p_g(g) = \alpha \exp\{-\alpha(g - g_{\min})\}$ for $g \ge g_{\min}$
   Transfer function: $g = g_{\min} - \dfrac{1}{\alpha}\ln\{1 - P_f(f)\}$

Rayleigh
   Output probability density model: $p_g(g) = \dfrac{g - g_{\min}}{\alpha^2}\exp\left\{ -\dfrac{(g - g_{\min})^2}{2\alpha^2} \right\}$ for $g \ge g_{\min}$
   Transfer function: $g = g_{\min} + \left[ 2\alpha^2 \ln\left( \dfrac{1}{1 - P_f(f)} \right) \right]^{1/2}$

Hyperbolic (cube root)
   Output probability density model: $p_g(g) = \dfrac{1}{3}\,\dfrac{g^{-2/3}}{g_{\max}^{1/3} - g_{\min}^{1/3}}$
   Transfer function: $g = \left[ \left( g_{\max}^{1/3} - g_{\min}^{1/3} \right) P_f(f) + g_{\min}^{1/3} \right]^3$

Hyperbolic (logarithmic)
   Output probability density model: $p_g(g) = \dfrac{1}{g\left[ \ln\{g_{\max}\} - \ln\{g_{\min}\} \right]}$
   Transfer function: $g = g_{\min}\left( \dfrac{g_{\max}}{g_{\min}} \right)^{P_f(f)}$

a. The cumulative probability distribution $P_f(f)$ of the input image is approximated by its
cumulative histogram: $P_f(f) \approx \sum_{m=0}^{j} H_F(m)$.




       (a) Original                            (b) Original histogram




                      (c) Transfer function




      (d ) Enhanced                           (e ) Enhanced histogram

FIGURE 10.2-2. Histogram equalization of the projectile image.


where $p_f(f)$ and $p_g(g)$ are the probability densities of f and g, respectively. The
integral on the right is the cumulative distribution function $P_f(f)$ of the input vari-
able f. Hence,

$$\int_{g_{\min}}^{g} p_g(g)\, dg = P_f(f)$$   (10.2-5)


In the special case, for which the output density is forced to be the uniform density,

$$p_g(g) = \frac{1}{g_{\max} - g_{\min}}$$   (10.2-6)

for $g_{\min} \le g \le g_{\max}$, the histogram equalization transfer function becomes

$$g = (g_{\max} - g_{\min}) P_f(f) + g_{\min}$$   (10.2-7)

Table 10.2-1 lists several output image histograms and their corresponding transfer
functions.
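
A minimal sketch of histogram equalization by the uniform-density transfer function of Eq. 10.2-7 follows, approximating $P_f(f)$ by the cumulative histogram as in the footnote to Table 10.2-1. It assumes the input image is quantized to integer levels; the function name, the default level count, and the default output range are illustrative.

```python
import numpy as np

def histogram_equalize(F, levels=256, g_min=0.0, g_max=1.0):
    """Histogram equalization via Eq. 10.2-7.

    F is assumed to contain integer codes 0 ... levels - 1. The cumulative
    histogram approximates the cumulative distribution P_f(f).
    """
    hist = np.bincount(F.ravel(), minlength=levels).astype(float)
    hist /= hist.sum()                        # fractional histogram H_F(c)
    P = np.cumsum(hist)                       # cumulative histogram, one entry per input level
    lut = (g_max - g_min) * P + g_min         # Eq. 10.2-7 as a lookup table
    return lut[F]
```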
   Figure 10.2-2 provides an example of histogram equalization for an x-ray of a
projectile. The original image and its histogram are shown in Figure 10.2-2a and b,
respectively. The transfer function of Figure 10.2-2c is equivalent to the cumulative
histogram of the original image. In the histogram equalized result of Figure 10.2-2,
ablating material from the projectile, not seen in the original, is clearly visible. The
histogram of the enhanced image appears peaked, but close examination reveals that
many gray level output values are unoccupied. If the high occupancy gray levels
were to be averaged with their unoccupied neighbors, the resulting histogram would
be much more uniform.
   Histogram equalization usually performs best on images with detail hidden in
dark regions. Good-quality originals are often degraded by histogram equalization.
As an example, Figure 10.2-3 shows the result of histogram equalization on the jet
image.
   Frei (6) has suggested the histogram hyperbolization procedure listed in Table
10.2-1 and described in Figure 10.2-4. With this method, the input image histogram
is modified by a transfer function such that the output image probability density is of
hyperbolic form. Then the resulting gray scale probability density following the
assumed logarithmic or cube root response of the photoreceptors of the eye model
will be uniform. In essence, histogram equalization is performed after the cones of
the retina.


10.2.2. Adaptive Histogram Modification

The histogram modification methods discussed in Section 10.2.1 involve applica-
tion of the same transformation or mapping function to each pixel in an image. The
mapping function is based on the histogram of the entire image. This process can be




                                           (a ) Original




             (b) Transfer function                           (c ) Histogram equalized
            FIGURE 10.2-3. Histogram equalization of the jet_mon image.


made spatially adaptive by applying histogram modification to each pixel based on
the histogram of pixels within a moving window neighborhood. This technique is
obviously computationally intensive, as it requires histogram generation, mapping
function computation, and mapping function application at each pixel.
    Pizer et al. (9) have proposed an adaptive histogram equalization technique in
which histograms are generated only at a rectangular grid of points and the mappings
at each pixel are generated by interpolating mappings of the four nearest grid points.
Figure 10.2-5 illustrates the geometry. A histogram is computed at each grid point in
a window about the grid point. The window dimension can be smaller or larger than
the grid spacing. Let M00, M01, M10, M11 denote the histogram modification map-
pings generated at four neighboring grid points. The mapping to be applied at pixel
F(j, k) is determined by a bilinear interpolation of the mappings of the four nearest
grid points as given by

$$M = a\left[ bM_{00} + (1 - b)M_{10} \right] + (1 - a)\left[ bM_{01} + (1 - b)M_{11} \right]$$   (10.2-8a)




                      FIGURE 10.2-4. Histogram hyperbolization.



where
$$a = \frac{k - k_0}{k_1 - k_0}$$   (10.2-8b)

$$b = \frac{j - j_0}{j_1 - j_0}$$   (10.2-8c)


Pixels in the border region of the grid points are handled as special cases of
Eq. 10.2-8. Equation 10.2-8 is best suited for general-purpose computer calculation.
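
A minimal sketch of the interpolated mapping of Eq. 10.2-8, assuming that each grid-point mapping M00, ..., M11 is stored as a lookup table indexed by the quantized input level; the argument names and the association of the mappings with particular grid corners are assumptions.

```python
def interpolate_mappings(F, j, k, j0, j1, k0, k1, M00, M01, M10, M11):
    """Bilinearly interpolated histogram mapping of Eq. 10.2-8 for pixel F(j, k).

    M00 ... M11 are lookup tables computed at the four grid points surrounding
    the pixel; F is assumed to hold integer amplitude codes.
    """
    a = (k - k0) / (k1 - k0)                  # Eq. 10.2-8b
    b = (j - j0) / (j1 - j0)                  # Eq. 10.2-8c
    f = F[j, k]
    return (a * (b * M00[f] + (1 - b) * M10[f])
            + (1 - a) * (b * M01[f] + (1 - b) * M11[f]))
```

A processed pixel would then be assigned as G[j, k] = interpolate_mappings(F, j, k, ...).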




FIGURE 10.2-5. Array geometry for interpolative adaptive histogram modification. * Grid
point; • pixel to be computed.




                                      (a) Original




               (b) Nonadaptive                            (c) Adaptive

FIGURE 10.2-6. Nonadaptive and adaptive histogram equalization of the brainscan image.



For parallel processors, it is often more efficient to use the histogram generated in
the histogram window of Figure 10.2-5 and apply the resultant mapping function
to all pixels in the mapping window of the figure. This process is then repeated at all
grid points. At each pixel coordinate (j, k), the four histogram modified pixels
obtained from the four overlapped mappings are combined by bilinear interpolation.
Figure 10.2-6 presents a comparison between nonadaptive and adaptive histogram
equalization of a monochrome image. In the adaptive histogram equalization exam-
ple, the histogram window is 64 × 64 .


10.3. NOISE CLEANING

An image may be subject to noise and interference from several sources, including
electrical sensor noise, photographic grain noise, and channel errors. These noise


effects can be reduced by classical statistical filtering techniques to be discussed in
Chapter 12. Another approach, discussed in this section, is the application of ad hoc
noise cleaning techniques.
    Image noise arising from a noisy sensor or channel transmission errors usually
appears as discrete isolated pixel variations that are not spatially correlated. Pixels
that are in error often appear visually to be markedly different from their neighbors.
This observation is the basis of many noise cleaning algorithms (10–13). In this sec-
tion we describe several linear and nonlinear techniques that have proved useful for
noise reduction.
    Figure 10.3-1 shows two test images, which will be used to evaluate noise clean-
ing techniques. Figure 10.3-1b has been obtained by adding uniformly distributed
noise to the original image of Figure 10.3-1a. In the impulse noise example of
Figure 10.3-1c, maximum-amplitude pixels replace original image pixels in a spa-
tially random manner.




                                            (a ) Original




         (b ) Original with uniform noise                   (c) Original with impulse noise

       FIGURE 10.3-1. Noisy test images derived from the peppers_mon image.

10.3.1. Linear Noise Cleaning

Noise added to an image generally has a higher-spatial-frequency spectrum than the
normal image components because it is spatially uncorrelated. Hence, simple
low-pass filtering can be effective for noise cleaning. Consideration will now be
given to convolution and Fourier domain methods of noise cleaning.


Spatial Domain Processing. Following the techniques outlined in Chapter 7, a spa-
tially filtered output image $G(j, k)$ can be formed by discrete convolution of an
input image $F(j, k)$ with an $L \times L$ impulse response array $H(j, k)$ according to the
relation

$$G(j, k) = \sum_m \sum_n F(m, n)\, H(m + j + C, n + k + C)$$   (10.3-1)

where $C = (L + 1)/2$. Equation 10.3-1 utilizes the centered convolution notation
developed by Eq. 7.1-14, whereby the input and output arrays are centered with
respect to one another, with the outer boundary of $G(j, k)$ of width $(L - 1)/2$ pixels
set to zero.
   For noise cleaning, H should be of low-pass form, with all positive elements.
Several common 3 × 3 pixel impulse response arrays of low-pass form are listed
below.

Mask 1:        $H = \dfrac{1}{9}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}$   (10.3-2a)

Mask 2:        $H = \dfrac{1}{10}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 1 \end{bmatrix}$   (10.3-2b)

Mask 3:        $H = \dfrac{1}{16}\begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}$   (10.3-2c)

These arrays, called noise cleaning masks, are normalized to unit weighting so that
the noise-cleaning process does not introduce an amplitude bias in the processed
image. The effect of noise cleaning with the arrays on the uniform noise and impulse
noise test images is shown in Figure 10.3-2. Masks 1 and 3 of Eq. 10.3-2 are special
cases of a 3 × 3 parametric low-pass filter whose impulse response is defined as

$$H = \frac{1}{(b + 2)^2}\begin{bmatrix} 1 & b & 1 \\ b & b^2 & b \\ 1 & b & 1 \end{bmatrix}$$   (10.3-3)
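
A minimal sketch that builds the parametric mask of Eq. 10.3-3 and applies a centered 3 × 3 convolution with the one-pixel output border set to zero; because these masks are symmetric, correlation and convolution coincide. The function names are illustrative.

```python
import numpy as np

def parametric_lowpass_mask(b):
    """3 x 3 parametric low-pass mask of Eq. 10.3-3 (b = 1 gives Mask 1, b = 2 gives Mask 3)."""
    H = np.array([[1.0, b, 1.0],
                  [b, b * b, b],
                  [1.0, b, 1.0]])
    return H / (b + 2.0) ** 2

def convolve3x3(F, H):
    """Centered 3 x 3 convolution with the one-pixel output border set to zero."""
    F = np.asarray(F, dtype=float)
    G = np.zeros_like(F)
    rows, cols = F.shape
    for dj in (-1, 0, 1):
        for dk in (-1, 0, 1):
            G[1:-1, 1:-1] += H[dj + 1, dk + 1] * F[1 + dj:rows - 1 + dj, 1 + dk:cols - 1 + dk]
    return G
```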




          (a ) Uniform noise, mask 1               (b ) Impulse noise, mask 1




           (c ) Uniform noise, mask 2              (d ) Impulse noise, mask 2




          (e ) Uniform noise, mask 3               (f ) Impulse noise, mask 3

FIGURE 10.3-2. Noise cleaning with 3 × 3 low-pass impulse response arrays on the noisy
test images.




             (a ) Uniform rectangle                    (b) Uniform circular




                  (c ) Pyramid                        (d ) Gaussian, s = 1.0

FIGURE 10.3-3. Noise cleaning with 7 × 7 impulse response arrays on the noisy test image
with uniform noise.



   The concept of low-pass filtering noise cleaning can be extended to larger
impulse response arrays. Figures 10.3-3 and 10.3-4 present noise cleaning results for
several 7 × 7 impulse response arrays for uniform and impulse noise. As expected,
use of a larger impulse response array provides more noise smoothing, but at the
expense of the loss of fine image detail.

Fourier Domain Processing. It is possible to perform linear noise cleaning in the
Fourier domain (13) using the techniques outlined in Section 9.3. Properly executed,
there is no difference in results between convolution and Fourier filtering; the
choice is a matter of implementation considerations.
   High-frequency noise effects can be reduced by Fourier domain filtering with a
zonal low-pass filter with a transfer function defined by Eq. 9.4-9. The sharp cutoff
characteristic of the zonal low-pass filter leads to ringing artifacts in a filtered
image. This deleterious effect can be eliminated by the use of a smooth cutoff filter,




             (a ) Uniform rectangle                    (b) Uniform circular




                  (c ) Pyramid                        (d ) Gaussian, s = 1.0

FIGURE 10.3-4. Noise cleaning with 7 × 7 impulse response arrays on the noisy test image
with impulse noise.



such as the Butterworth low-pass filter whose transfer function is specified by
Eq. 9.4-12. Figure 10.3-5 shows the results of zonal and Butterworth low-pass filter-
ing of noisy images.
   Unlike convolution, Fourier domain processing often provides quantitative and
intuitive insight into the nature of the noise process, which is useful in designing
noise cleaning spatial filters. As an example, Figure 10.3-6a shows an original
image subject to periodic interference. Its two-dimensional Fourier transform,
shown in Figure 10.3-6b, exhibits a strong response at the two points in the Fourier
plane corresponding to the frequency response of the interference. When multiplied
point by point with the Fourier transform of the original image, the bandstop filter of
Figure 10.3-6c attenuates the interference energy in the Fourier domain. Figure
10.3-6d shows the noise-cleaned result obtained by taking an inverse Fourier trans-
form of the product.




             (a ) Uniform noise, zonal                          (b) Impulse noise, zonal




          (c ) Uniform noise, Butterworth                   (d ) Impulse noise, Butterworth

FIGURE 10.3-5. Noise cleaning with zonal and Butterworth low-pass filtering on the noisy
test images; cutoff frequency = 64.



Homomorphic Filtering. Homomorphic filtering (14) is a useful technique for
image enhancement when an image is subject to multiplicative noise or interference.
Figure 10.3-7 describes the process. The input image F ( j, k ) is assumed to be mod-
eled as the product of a noise-free image S ( j, k ) and an illumination interference
array I ( j, k ). Thus,


                                   F ( j, k ) = I ( j, k )S ( j, k )                          (10.3-4)


Ideally, I ( j, k ) would be a constant for all ( j, k ) . Taking the logarithm of Eq. 10.3-4
yields the additive linear result




                  (a) Original                                  (b) Original Fourier transform




               (c) Bandstop filter                                     (d ) Noise cleaned
FIGURE 10.3-6. Noise cleaning with Fourier domain band stop filtering on the parts
image with periodic interference.



                        log { F ( j, k ) } = log { I ( j, k ) } + log { S ( j, k ) }             (10.3-5)


Conventional linear filtering techniques can now be applied to reduce the log inter-
ference component. Exponentiation after filtering completes the enhancement pro-
cess. Figure 10.3-8 provides an example of homomorphic filtering. In this example,
the illumination field I ( j, k ) increases from left to right from a value of 0.1 to 1.0.




                         FIGURE 10.3-7. Homomorphic filtering.




             (a) Illumination field                           (b) Original




                                  (c) Homomorphic filtering

FIGURE 10.3-8. Homomorphic filtering on the washington_ir image with a Butter-
worth high-pass filter; cutoff frequency = 4.



Therefore, the observed image appears quite dim on its left side. Homomorphic
filtering (Figure 10.3-8c) compensates for the nonuniform illumination.
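
A minimal sketch of the homomorphic process of Figure 10.3-7: take logarithms as in Eq. 10.3-5, apply a frequency-domain high-pass filter, and exponentiate. The small offset added before the logarithm, the use of the FFT, and the function name are assumptions.

```python
import numpy as np

def homomorphic_filter(F, H, eps=1e-6):
    """Homomorphic filtering sketch of Figure 10.3-7.

    F: observed image, modeled as illumination times reflectance (Eq. 10.3-4), F >= 0.
    H: frequency-domain high-pass transfer function of the same shape as F on the
       unshifted DFT grid, e.g. a Butterworth high-pass filter (Eq. 9.4-13).
    """
    logF = np.log(F + eps)                       # additive model of Eq. 10.3-5
    spectrum = np.fft.fft2(logF)
    filtered = np.real(np.fft.ifft2(spectrum * H))
    return np.exp(filtered)                      # exponentiation completes the process
```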


10.3.2. Nonlinear Noise Cleaning

The linear processing techniques described previously perform reasonably well on
images with continuous noise, such as additive uniform or Gaussian distributed
noise. However, they tend to provide too much smoothing for impulselike noise.
Nonlinear techniques often provide a better trade-off between noise smoothing and
the retention of fine image detail. Several nonlinear techniques are presented below.
Mastin (15) has performed subjective testing of several of these operators.




                   FIGURE 10.3-9. Outlier noise cleaning algorithm.



Outlier. Figure 10.3-9 describes a simple outlier noise cleaning technique in which
each pixel is compared to the average of its eight neighbors. If the magnitude of the
difference is greater than some threshold level, the pixel is judged to be noisy, and it
is replaced by its neighborhood average. The eight-neighbor average can be com-
puted by convolution of the observed image with the impulse response array

$$H = \frac{1}{8}\begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{bmatrix}$$   (10.3-6)

Figure 10.3-10 presents the results of outlier noise cleaning for a threshold level of
10%.
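
A minimal sketch of the outlier algorithm of Figure 10.3-9, using the eight-neighbor average of Eq. 10.3-6; leaving the one-pixel border unchanged and the default threshold value are assumptions.

```python
import numpy as np

def outlier_filter(F, threshold=0.1):
    """Outlier noise cleaning of Figure 10.3-9 for a unit-range image."""
    F = np.asarray(F, dtype=float)
    rows, cols = F.shape
    avg = np.zeros_like(F)
    for dj in (-1, 0, 1):                      # accumulate the eight-neighbor average
        for dk in (-1, 0, 1):
            if dj == 0 and dk == 0:
                continue
            avg[1:-1, 1:-1] += F[1 + dj:rows - 1 + dj, 1 + dk:cols - 1 + dk]
    avg /= 8.0
    noisy = np.abs(F - avg) > threshold        # pixels markedly different from their neighbors
    noisy[0, :] = noisy[-1, :] = noisy[:, 0] = noisy[:, -1] = False
    G = F.copy()
    G[noisy] = avg[noisy]                      # replace noisy pixels by the neighborhood average
    return G
```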




               (a ) Uniform noise                        (b ) Impulse noise

   FIGURE 10.3-10. Noise cleaning with the outlier algorithm on the noisy test images.

   The outlier operator can be extended straightforwardly to larger windows. Davis
and Rosenfeld (16) have suggested a variant of the outlier technique in which the
center pixel in a window is replaced by the average of its k neighbors whose ampli-
tudes are closest to the center pixel.


Median Filter. Median filtering is a nonlinear signal processing technique devel-
oped by Tukey (17) that is useful for noise suppression in images. In one-dimen-
sional form, the median filter consists of a sliding window encompassing an odd
number of pixels. The center pixel in the window is replaced by the median of the
pixels in the window. The median of a discrete sequence a1, a2,..., aN for N odd is
that member of the sequence for which (N – 1)/2 elements are smaller or equal in
value and (N – 1)/2 elements are larger or equal in value. For example, if the values
of the pixels within a window are 0.1, 0.2, 0.9, 0.4, 0.5, the center pixel would be
replaced by the value 0.4, which is the median value of the sorted sequence 0.1, 0.2,
0.4, 0.5, 0.9. In this example, if the value 0.9 were a noise spike in a monotonically
increasing sequence, the median filter would result in a considerable improvement.
On the other hand, the value 0.9 might represent a valid signal pulse for a wide-
bandwidth sensor, and the resultant image would suffer some loss of resolution.
Thus, in some cases the median filter will provide noise suppression, while in other
cases it will cause signal suppression.
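
The sliding-window median described above can be sketched directly; a minimal version, with the odd window length as an illustrative parameter and edge-replicated padding as an assumption.

```python
import numpy as np

def median_filter_1d(signal, window=5):
    """Slide an odd-length window along the signal and take the median at each position."""
    assert window % 2 == 1, "window length must be odd"
    half = window // 2
    padded = np.pad(signal, half, mode='edge')   # replicate end samples at the borders
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(signal))])

# Example from the text: at the center position the spike 0.9 is replaced by the median 0.4.
print(median_filter_1d(np.array([0.1, 0.2, 0.9, 0.4, 0.5]), window=5))
```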
   Figure 10.3-11 illustrates some examples of the operation of a median filter and a
mean (smoothing) filter for a discrete step function, ramp function, pulse function,
and a triangle function with a window of five pixels. It is seen from these examples
that the median filter has the usually desirable property of not affecting step func-
tions or ramp functions. Pulse functions, whose periods are less than one-half the
window width, are suppressed. But the peak of the triangle is flattened.
   Operation of the median filter can be analyzed to a limited extent. It can be
shown that the median of the product of a constant K and a sequence f ( j ) is


                           MED { K [ f ( j ) ] } = K [ MED { f ( j ) } ]                    (10.3-7)


However, for two arbitrary sequences f ( j ) and g ( j ), it does not follow that the
median of the sum of the sequences is equal to the sum of their medians. That is, in
general,


                   MED { f ( j ) + g ( j ) } ≠ MED { f ( j ) } + MED { g ( j ) }            (10.3-8)


The sequences 0.1, 0.2, 0.3, 0.4, 0.5 and 0.1, 0.2, 0.3, 0.2, 0.1 are examples for
which the additive linearity property does not hold.
   There are various strategies for application of the median filter for noise suppres-
sion. One method would be to try a median filter with a window of length 3. If there
is no significant signal loss, the window length could be increased to 5 for median




           FIGURE 10.3-11. Median filtering on one-dimensional test signals.



filtering of the original. The process would be terminated when the median filter
begins to do more harm than good. It is also possible to perform cascaded median
filtering on a signal using a fixed- or variable-length window. In general, regions that
are unchanged by a single pass of the filter will remain unchanged in subsequent
passes. Regions in which the signal period is less than one-half the window width
will be continually altered by each successive pass. Usually, the process will con-
tinue until the resultant period is greater than one-half the window width, but it can
be shown that some sequences will never converge (18).
    The concept of the median filter can be extended easily to two dimensions by uti-
lizing a two-dimensional window of some desired shape such as a rectangle or dis-
crete approximation to a circle. It is obvious that a two-dimensional L × L median
filter will provide a greater degree of noise suppression than sequential processing
with L × 1 median filters, but two-dimensional processing also results in greater sig-
nal suppression. Figure 10.3-12 illustrates the effect of two-dimensional median
filtering of a spatial peg function with a 3 × 3 square filter and a 5 × 5 plus sign–
shaped filter. In this example, the square median has deleted the corners of the peg,
but the plus median has not affected the corners.
    Figures 10.3-13 and 10.3-14 show results of plus sign shaped median filtering
on the noisy test images of Figure 10.3-1 for impulse and uniform noise, respectively.




           FIGURE 10.3-12. Median filtering on two-dimensional test signals.




In the impulse noise example, application of the 3 × 3 median significantly reduces
the noise effect, but some residual noise remains. Applying two 3 × 3 median filters
in cascade provides further improvement. The 5 × 5 median filter removes almost
all of the impulse noise. There is no visible impulse noise in the 7 × 7 median filter
result, but the image has become somewhat blurred. In the case of uniform noise,
median filtering provides little visual improvement.
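
A two-dimensional median over a square or plus-shaped window can be obtained with scipy.ndimage; a sketch assuming the footprints discussed above, with the window size as an illustrative parameter.

```python
import numpy as np
from scipy.ndimage import median_filter

def median_2d(image, size=3, shape='square'):
    """Apply a square L x L or plus-shaped median filter to an image."""
    if shape == 'square':
        return median_filter(image, size=size)
    # Plus-shaped footprint: the center row and center column of an L x L window.
    footprint = np.zeros((size, size), dtype=bool)
    footprint[size // 2, :] = True
    footprint[:, size // 2] = True
    return median_filter(image, footprint=footprint)

# Cascading two 3 x 3 medians, as discussed for Figure 10.3-13:
# cleaned = median_2d(median_2d(noisy, 3), 3)
```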
    Huang et al. (19) and Astola and Campbell (20) have developed fast median fil-
tering algorithms. The latter can be generalized to implement any rank ordering.


Pseudomedian Filter. Median filtering is computationally intensive; the number of
operations grows exponentially with window size. Pratt et al. (21) have proposed a
computationally simpler operator, called the pseudomedian filter, which possesses
many of the properties of the median filter.
   Let {SL} denote a sequence of elements s1, s2,..., sL. The pseudomedian of the
sequence is




FIGURE 10.3-13. Median filtering on the noisy test image with impulse noise. (a) 3 × 3 median filter, (b) 3 × 3 cascaded median filter, (c) 5 × 5 median filter, (d) 7 × 7 median filter.



           PMED { S L } = ( 1 ⁄ 2 )MAXIMIN { S L } + ( 1 ⁄ 2 )MINIMAX { S L }             (10.3-9)


where for M = (L + 1)/2


         MAXIMIN { S L } = MAX { [ MIN ( s 1, …, s M ) ], [ MIN ( s 2, …, s M + 1 ) ]

                                …, [ MIN ( sL – M + 1, …, sL ) ] }                     (10.3-10a)


         MINIMAX { S L } = MIN { [ MAX ( s 1, …, s M ) ], [ MAX ( s 2, …, s M + 1 ) ]

                               …, [ MAX ( s L – M + 1, …, s L ) ] }                    (10.3-10b)




FIGURE 10.3-14. Median filtering on the noisy test image with uniform noise. (a) 3 × 3 median filter, (b) 5 × 5 median filter, (c) 7 × 7 median filter.




Operationally, the sequence of L elements is decomposed into subsequences of M
elements, each of which is slid to the right by one element in relation to its
predecessor, and the appropriate MAX and MIN operations are computed. As will
be demonstrated, the MAXIMIN and MINIMAX operators are, by themselves,
useful operators. It should be noted that it is possible to recursively decompose the
MAX and MIN functions on long sequences into sliding functions of length 2 and 3
for pipeline computation (21).
   The one-dimensional pseudomedian concept can be extended in a variety of
ways. One approach is to compute the MAX and MIN functions over rectangular
windows. As with the median filter, this approach tends to oversmooth an image.
A plus-shaped pseudomedian generally provides better subjective results. Consider
a plus-shaped window containing the following two-dimensional set of elements {SE}:


                                         y1
                                          ·
                                          ·
                                          ·
                                   x 1 … xM … xC
                                          ·
                                          ·
                                          ·
                                         yR



Let the sequences {XC} and {YR} denote the elements along the horizontal and ver-
tical axes of the window, respectively. Note that the element xM is common to both
sequences. Then the plus-shaped pseudomedian can be defined as


          PMED { S E } = ( 1 ⁄ 2 )MAX [ MAXIMIN { X C }, MAXIMIN { Y R } ]

                         + ( 1 ⁄ 2 ) MIN [ MINIMAX { XC }, MINIMAX { Y R } ]

                                                                               (10.3-11)


    The MAXIMIN operator in one- or two-dimensional form is useful for removing
bright impulse noise but has little or no effect on dark impulse noise. Conversely,
the MINIMAX operator does a good job in removing dark, but not bright, impulse
noise. A logical conclusion is to cascade the operators.
    Figure 10.3-15 shows the results of MAXIMIN, MINIMAX, and pseudomedian
filtering on an image subjected to salt and pepper noise. As observed, the
MAXIMIN operator reduces the salt noise, while the MINIMAX operator reduces
the pepper noise. The pseudomedian provides attenuation for both types of noise.
The cascade MINIMAX and MAXIMIN operators, in either order, show excellent
results.
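
A sketch of the one-dimensional MAXIMIN, MINIMAX, and pseudomedian operators of Eqs. 10.3-9 and 10.3-10, written directly from the definitions rather than as the recursive pipeline decomposition of reference 21.

```python
def maximin(s):
    """MAX over the MINs of all length-M sliding subsequences, M = (L + 1) // 2."""
    L = len(s)
    M = (L + 1) // 2
    return max(min(s[i:i + M]) for i in range(L - M + 1))

def minimax(s):
    """MIN over the MAXs of all length-M sliding subsequences."""
    L = len(s)
    M = (L + 1) // 2
    return min(max(s[i:i + M]) for i in range(L - M + 1))

def pseudomedian(s):
    """Eq. 10.3-9: average of the MAXIMIN and MINIMAX operators."""
    return 0.5 * maximin(s) + 0.5 * minimax(s)

print(pseudomedian([0.1, 0.2, 0.9, 0.4, 0.5]))   # prints 0.65 for this sequence (the median is 0.4)
```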


Wavelet De-noising. Section 8.4-3 introduced wavelet transforms. The usefulness
of wavelet transforms for image coding derives from the property that most of the
energy of a transformed image is concentrated in the trend transform components
rather than the fluctuation components (22). The fluctuation components may be
grossly quantized without serious image degradation. This energy compaction prop-
erty can also be exploited for noise removal. The concept, called wavelet de-noising
(22,23), is quite simple. The wavelet transform coefficients are thresholded such
that the presumably noisy, low-amplitude coefficients are set to zero.
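
A sketch of coefficient thresholding using the PyWavelets package (a third-party library assumed here, not referenced in the text); the Haar wavelet, decomposition depth, and hard threshold value are illustrative choices.

```python
import pywt

def wavelet_denoise(image, wavelet='haar', level=2, threshold=0.1):
    """Zero out low-amplitude wavelet fluctuation coefficients, then reconstruct."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    trend, fluctuations = coeffs[0], coeffs[1:]
    thresholded = [trend]                      # keep the trend (approximation) coefficients
    for detail_level in fluctuations:
        thresholded.append(tuple(pywt.threshold(d, threshold, mode='hard')
                                 for d in detail_level))
    return pywt.waverec2(thresholded, wavelet)
```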




FIGURE 10.3-15. 5 × 5 plus-shape MINIMAX, MAXIMIN, and pseudomedian filtering on the noisy test images. (a) Original, (b) MAXIMIN, (c) MINIMAX, (d) pseudomedian, (e) MINIMAX of MAXIMIN, (f) MAXIMIN of MINIMAX.


10.4. EDGE CRISPENING

Psychophysical experiments indicate that a photograph or visual signal with
accentuated or crispened edges is often more subjectively pleasing than an exact
photometric reproduction. Edge crispening can be accomplished in a variety of
ways.


10.4.1. Linear Edge Crispening

Edge crispening can be performed by discrete convolution, as defined by Eq. 10.3-1,
in which the impulse response array H is of high-pass form. Several common 3 × 3
high-pass masks are given below (24–26).

Mask 1:

$$H = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 5 & -1 \\ 0 & -1 & 0 \end{bmatrix} \qquad (10.4-1a)$$

Mask 2:

$$H = \begin{bmatrix} -1 & -1 & -1 \\ -1 & 9 & -1 \\ -1 & -1 & -1 \end{bmatrix} \qquad (10.4-1b)$$

Mask 3:

$$H = \begin{bmatrix} 1 & -2 & 1 \\ -2 & 5 & -2 \\ 1 & -2 & 1 \end{bmatrix} \qquad (10.4-1c)$$


These masks possess the property that the sum of their elements is unity, to avoid
amplitude bias in the processed image. Figure 10.4-1 provides examples of edge
crispening on a monochrome image with the masks of Eq. 10.4-1. Mask 2 appears to
provide the best visual results.
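
A sketch of linear edge crispening by convolution with Mask 2 of Eq. 10.4-1b; any of the three masks can be substituted, and the unit-range clipping is an assumption about how the image is scaled.

```python
import numpy as np
from scipy.ndimage import convolve

MASK_2 = np.array([[-1, -1, -1],
                   [-1,  9, -1],
                   [-1, -1, -1]])              # elements sum to unity (Eq. 10.4-1b)

def edge_crispen(image, mask=MASK_2):
    """High-pass convolution; values are clipped back to the display range."""
    sharpened = convolve(image.astype(float), mask, mode='nearest')
    return np.clip(sharpened, 0.0, 1.0)        # assuming a unit-range image
```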
    To obtain edge crispening on electronically scanned images, the scanner signal
can be passed through an electrical filter with a high-frequency bandpass character-
istic. Another possibility for scanned images is the technique of unsharp masking
(27,28). In this process, the image is effectively scanned with two overlapping aper-
tures, one at normal resolution and the other at a lower spatial resolution, which
upon sampling produces normal and low-resolution images F ( j, k ) and F L ( j, k ),
respectively. An unsharp masked image

$$G(j, k) = \frac{c}{2c - 1}\, F(j, k) - \frac{1 - c}{2c - 1}\, F_L(j, k) \qquad (10.4-2)$$




FIGURE 10.4-1. Edge crispening with 3 × 3 masks on the chest_xray image. (a) Original, (b) Mask 1, (c) Mask 2, (d) Mask 3.



is then generated by forming the weighted difference between the normal and low-
resolution images, where c is a weighting constant. Typically, c is in the range 3/5 to
5/6, so that the ratio of normal to low-resolution components in the masked image is
from 1.5:1 to 5:1. Figure 10.4-2 illustrates typical scan signals obtained when scan-
ning over an object edge. The masked signal has a longer-duration edge gradient as
well as an overshoot and undershoot, as compared to the original signal. Subjec-
tively, the apparent sharpness of the original image is improved. Figure 10.4-3
presents examples of unsharp masking in which the low-resolution image is
obtained by convolution with a uniform L × L impulse response array. The sharpen-
ing effect is stronger as L increases and c decreases.
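
A sketch of Eq. 10.4-2, in which the low-resolution image F_L is obtained by convolution with a uniform L × L impulse response array, as in Figure 10.4-3; the default L and c are illustrative values.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_mask(image, L=7, c=0.6):
    """Weighted difference of the normal and low-resolution images (Eq. 10.4-2)."""
    low_res = uniform_filter(image.astype(float), size=L)   # uniform L x L low-pass
    masked = (c / (2*c - 1)) * image - ((1 - c) / (2*c - 1)) * low_res
    return np.clip(masked, 0.0, 1.0)
```

With c = 0.6 the weights become 3 and 2, giving the 1.5:1 ratio of normal to low-resolution components cited in the text.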
    Linear edge crispening can be performed by Fourier domain filtering. A zonal
high-pass filter with a transfer function given by Eq. 9.4-10 suppresses all spatial
frequencies below the cutoff frequency except for the dc component, which is nec-
essary to maintain the average amplitude of the filtered image. Figure 10.4-4 shows




      FIGURE 10.4-2. Waveforms in an unsharp masking image enhancement system.




the result of zonal high-pass filtering of an image. Zonal high-pass filtering often
causes ringing in a filtered image. Such ringing can be reduced significantly by utili-
zation of a high-pass filter with a smooth cutoff response. One such filter is the
Butterworth high-pass filter, whose transfer function is defined by Eq. 9.4-13.
   Figure 10.4-4 shows the results of zonal and Butterworth high-pass filtering. In
both examples, the filtered images are biased to a midgray level for display.


10.4.2. Statistical Differencing

Another form of edge crispening, called statistical differencing (29, p. 100),
involves the generation of an image by dividing each pixel value by its estimated
standard deviation D ( j, k ) according to the basic relation

$$G(j, k) = \frac{F(j, k)}{D(j, k)} \qquad (10.4-3)$$


where the estimated standard deviation


$$D(j, k) = \left[ \frac{1}{W} \sum_{m = j - w}^{j + w} \; \sum_{n = k - w}^{k + w} \left[ F(m, n) - M(m, n) \right]^2 \right]^{1/2} \qquad (10.4-4)$$




FIGURE 10.4-3. Unsharp mask processing for L × L uniform low-pass convolution on the chest_xray image. (a) L = 3, c = 0.6; (b) L = 3, c = 0.8; (c) L = 7, c = 0.6; (d) L = 7, c = 0.8.



is computed at each pixel over some W × W neighborhood where W = 2w + 1. The
function M ( j, k ) is the estimated mean value of the original image at point (j, k),
which is computed as


$$M(j, k) = \frac{1}{W^2} \sum_{m = j - w}^{j + w} \; \sum_{n = k - w}^{k + w} F(m, n) \qquad (10.4-5)$$



The enhanced image G ( j, k ) is increased in amplitude with respect to the original at
pixels that deviate significantly from their neighbors, and is decreased in relative
amplitude elsewhere. The process is analogous to automatic gain control for an
audio signal.




FIGURE 10.4-4. Zonal and Butterworth high-pass filtering on the chest_xray image; cutoff frequency = 32. (a) Zonal filtering, (b) Butterworth filtering.



   Wallis (30) has suggested a generalization of the statistical differencing operator
in which the enhanced image is forced to a form with desired first- and second-order
moments. The Wallis operator is defined by


$$G(j, k) = \left[ F(j, k) - M(j, k) \right] \frac{A_{max} D_d}{A_{max} D(j, k) + D_d} + \left[ p M_d + (1 - p) M(j, k) \right] \qquad (10.4-6)$$


where Md and Dd represent desired average mean and standard deviation factors,
Amax is a maximum gain factor that prevents overly large output values when
 D ( j, k ) is small and 0.0 ≤ p ≤ 1.0 is a mean proportionality factor controlling the
background flatness of the enhanced image.
    The Wallis operator can be expressed in a more general form as


                                 G ( j, k ) = [ F ( j, k ) – M ( j, k ) ]A ( j, k ) + B ( j, k )                             (10.4-7)


where A ( j, k ) is a spatially dependent gain factor and B ( j, k ) is a spatially depen-
dent background factor. These gain and background factors can be derived directly
from Eq. 10.4-4, or they can be specified in some other manner. For the Wallis oper-
ator, it is convenient to specify the desired average standard deviation Dd such that
the spatial gain ranges between maximum Amax and minimum Amin limits. This can
be accomplished by setting Dd to the value




FIGURE 10.4-5. Wallis statistical differencing on the bridge image for Md = 0.45, Dd = 0.28, p = 0.20, Amax = 2.50, Amin = 0.75 using a 9 × 9 pyramid array. (a) Original, (b) mean, 0.00 to 0.98, (c) standard deviation, 0.01 to 0.26, (d) background, 0.09 to 0.88, (e) spatial gain, 0.75 to 2.35, (f) Wallis enhancement, −0.07 to 1.12.




FIGURE 10.4-6. Wallis statistical differencing on the chest_xray image for Md = 0.64, Dd = 0.22, p = 0.20, Amax = 2.50, Amin = 0.75 using an 11 × 11 pyramid array. (a) Original, (b) Wallis enhancement.



$$D_d = \frac{A_{min} A_{max} D_{max}}{A_{max} - A_{min}} \qquad (10.4-8)$$

where Dmax is the maximum value of D ( j, k ) . The summations of Eqs. 10.4-4 and
10.4-5 can be implemented by convolutions with a uniform impulse array. But,
overshoot and undershoot effects may occur. Better results are usually obtained with
a pyramid or Gaussian-shaped array.
   Figure 10.4-5 shows the mean, standard deviation, spatial gain, and Wallis statis-
tical differencing result on a monochrome image. Figure 10.4-6 presents a medical
imaging example.
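
A sketch of the Wallis operator of Eq. 10.4-6, assuming unit-range images; the local mean and standard deviation of Eqs. 10.4-4 and 10.4-5 are computed here with uniform windows rather than the pyramid array used for the figures, and Dd is passed directly rather than derived from Eq. 10.4-8.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(image, W=9, Md=0.45, Dd=0.28, p=0.20, Amax=2.50):
    """Force the local mean and standard deviation toward desired values (Eq. 10.4-6)."""
    F = image.astype(float)
    M = uniform_filter(F, size=W)                                       # local mean estimate
    D = np.sqrt(np.maximum(uniform_filter(F**2, size=W) - M**2, 0.0))   # local standard deviation
    gain = (Amax * Dd) / (Amax * D + Dd)                                # spatially dependent gain A(j,k)
    background = p * Md + (1 - p) * M                                   # spatially dependent background B(j,k)
    return (F - M) * gain + background
```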


10.5. COLOR IMAGE ENHANCEMENT

The image enhancement techniques discussed previously have all been applied to
monochrome images. This section considers the enhancement of natural color
images and introduces the pseudocolor and false color image enhancement methods.
In the literature, the terms pseudocolor and false color have often been used improp-
erly. Pseudocolor produces a color image from a monochrome image, while false
color produces an enhanced color image from an original natural color image or
from multispectral image bands.

10.5.1. Natural Color Image Enhancement

The monochrome image enhancement methods described previously can be applied
to natural color images by processing each color component individually. However,

care must be taken to avoid changing the average value of the processed image com-
ponents. Otherwise, the processed color image may exhibit deleterious shifts in hue
and saturation.
   Typically, color images are processed in the RGB color space. For some image
enhancement algorithms, there are computational advantages to processing in a
luma-chroma space, such as YIQ, or a lightness-chrominance space, such as L*u*v*.
As an example, if the objective is to perform edge crispening of a color image, it is
usually only necessary to apply the enhancement method to the luma or lightness
component. Because of the high-spatial-frequency response limitations of human
vision, edge crispening of the chroma or chrominance components may not be per-
ceptible.
   Faugeras (31) has investigated color image enhancement in a perceptual space
based on a color vision model similar to the model presented in Figure 2.5-3. The
procedure is to transform the RGB tristimulus value original images according to the
color vision model to produce a set of three perceptual space images that, ideally,
are perceptually independent. Then, an image enhancement method is applied inde-
pendently to the perceptual space images. Finally, the enhanced perceptual space
images are subjected to steps that invert the color vision model and produce an
enhanced color image represented in RGB color space.


10.5.2. Pseudocolor

Pseudocolor (32–34) is a color mapping of a monochrome image array which is
intended to enhance the detectability of detail within the image. The pseudocolor
mapping of an array F ( j, k ) is defined as


                                   R ( j, k ) = O R { F ( j, k ) }                  (10.5-1a)

                                   G ( j, k ) = O G { F ( j, k ) }                  (10.5-1b)

                                   B ( j, k ) = O B { F ( j, k ) }                  (10.5-1c)


where R ( j, k ) , G ( j, k ) , B ( j, k ) are display color components and O R { F ( j, k ) },
O G { F ( j, k ) } , O B { F ( j, k ) } are linear or nonlinear functional operators. This map-
ping defines a path in three-dimensional color space parametrically in terms of the
array F ( j, k ). Figure 10.5-1 illustrates the RGB color space and two color mappings
that originate at black and terminate at white. Mapping A represents the achromatic
path through all shades of gray; it is the normal representation of a monochrome
image. Mapping B is a spiral path through color space.
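
A sketch of the pseudocolor mapping of Eq. 10.5-1; the three functional operators below are assumptions chosen for illustration (a path from black to white through bluish intermediate tones), not the particular mappings plotted in Figure 10.5-1.

```python
import numpy as np

def pseudocolor(gray):
    """Map a unit-range monochrome array to R, G, B display components (Eq. 10.5-1)."""
    f = np.clip(gray.astype(float), 0.0, 1.0)
    # Illustrative operators O_R, O_G, O_B: black at f = 0, white at f = 1,
    # bluish tones in between.
    R = f
    G = f ** 2
    B = np.sqrt(f)
    return np.stack([R, G, B], axis=-1)
```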
   Another class of pseudocolor mappings includes those mappings that exclude all
shades of gray. Mapping C, which follows the edges of the RGB color cube, is such
an example. This mapping follows the perimeter of the gamut of reproducible colors
as depicted by the uniform chromaticity scale (UCS) chromaticity chart shown in




       FIGURE 10.5-1. Black-to-white and RGB perimeter pseudocolor mappings.




Figure 10.5-2. The luminances of the colors red, green, blue, cyan, magenta, and
yellow that lie along the perimeter of reproducible colors are noted in the figure. It is
seen that the luminance of the pseudocolor scale varies between a minimum of
0.114 for blue to a maximum of 0.886 for yellow. A maximum luminance of unity is
reached only for white. In some applications it may be desirable to fix the luminance
of all displayed colors so that discrimination along the pseudocolor scale is by hue
and saturation attributes of a color only. Loci of constant luminance are plotted in
Figure 10.5-2.
   Figure 10.5-2 also includes bounds for displayed colors of constant luminance.
For example, if the RGB perimeter path is followed, the maximum luminance of any
color must be limited to 0.114, the luminance of blue. At a luminance of 0.2, the
RGB perimeter path can be followed except for the region around saturated blue. At
higher luminance levels, the gamut of constant luminance colors becomes severely
limited. Figure 10.5-2b is a plot of the 0.5 luminance locus. Inscribed within this
locus is the locus of those colors of largest constant saturation. A pseudocolor scale
along this path would have the property that all points differ only in hue.
   With a given pseudocolor path in color space, it is necessary to choose the scaling
between the data plane variable and the incremental path distance. On the UCS
chromaticity chart, incremental distances are subjectively almost equally noticeable.
Therefore, it is reasonable to subdivide geometrically the path length into equal
increments. Figure 10.5-3 shows examples of pseudocoloring of a gray scale chart
image and a seismic image.


10.5.3. False Color

False color is a point-by-point mapping of an original color image, described by its
three primary colors, or of a set of multispectral image planes of a scene, to a color




                   FIGURE 10.5-2. Luminance loci for NTSC colors.




space defined by display tristimulus values that are linear or nonlinear functions of
the original image pixel values (35,36). A common intent is to provide a displayed
image with objects possessing different or false colors from what might be expected.




FIGURE 10.5-3. Pseudocoloring of the gray_chart and seismic images. (a) Gray scale chart, (b) pseudocolor of chart, (c) seismic, (d) pseudocolor of seismic. See insert for a color representation of this figure.



For example, blue sky in a normal scene might be converted to appear red, and
green grass transformed to blue. One possible reason for such a color mapping is to
place normal objects in a strange color world so that a human observer will pay
more attention to the objects than if they were colored normally.
   Another reason for false color mappings is the attempt to color a normal scene to
match the color sensitivity of a human viewer. For example, it is known that the
luminance response of cones in the retina peaks in the green region of the visible
spectrum. Thus, if a normally red object is false colored to appear green, it may
become more easily detectable. Another psychophysical property of color vision
that can be exploited is the contrast sensitivity of the eye to changes in blue light. In
some situations it may be worthwhile to map the normal colors of objects with fine
detail into shades of blue.

   A third application of false color is to produce a natural color representation of a
set of multispectral images of a scene. Some of the multispectral images may even
be obtained from sensors whose wavelength response is outside the visible wave-
length range, for example, infrared or ultraviolet.
   In a false color mapping, the red, green, and blue display color components are
related to natural or multispectral images Fi by

                                   R D = O R { F 1, F 2, … }                    (10.5-2a)

                                   G D = O G { F 1, F2, … }                     (10.5-2b)

                                   B D = O B { F 1, F 2, … }                    (10.5-2c)


where O R { · } , O G { · } , O B { · } are general functional operators. As a simple exam-
ple, the set of red, green, and blue sensor tristimulus values (RS = F1, GS = F2, BS =
F3) may be interchanged according to the relation


$$\begin{bmatrix} R_D \\ G_D \\ B_D \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & 0 & 0 \end{bmatrix} \begin{bmatrix} R_S \\ G_S \\ B_S \end{bmatrix} \qquad (10.5-3)$$


Green objects in the original will appear red in the display, blue objects will appear
green, and red objects will appear blue. A general linear false color mapping of nat-
ural color images can be defined as


$$\begin{bmatrix} R_D \\ G_D \\ B_D \end{bmatrix} = \begin{bmatrix} m_{11} & m_{12} & m_{13} \\ m_{21} & m_{22} & m_{23} \\ m_{31} & m_{32} & m_{33} \end{bmatrix} \begin{bmatrix} R_S \\ G_S \\ B_S \end{bmatrix} \qquad (10.5-4)$$



This color mapping should be recognized as a linear coordinate conversion of colors
reproduced by the primaries of the original image to a new set of primaries.
Figure 10.5-4 provides examples of false color mappings of a pair of images.
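
A sketch of the linear false color mapping of Eq. 10.5-4; the default matrix is the band interchange of Eq. 10.5-3, and the band stack layout (rows × cols × 3) is an assumption of this example.

```python
import numpy as np

# Band interchange of Eq. 10.5-3: green -> red, blue -> green, red -> blue.
INTERCHANGE = np.array([[0, 1, 0],
                        [0, 0, 1],
                        [1, 0, 0]], dtype=float)

def false_color(bands, matrix=INTERCHANGE):
    """Apply a linear display mapping to a stack of source bands (rows x cols x 3)."""
    return np.einsum('ij,rcj->rci', matrix, bands)   # per-pixel matrix multiply
```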


10.6. MULTISPECTRAL IMAGE ENHANCEMENT

Enhancement procedures are often performed on multispectral image bands of a
scene in order to accentuate salient features to assist in subsequent human interpre-
tation or machine analysis (35,37). These procedures include individual image band




FIGURE 10.5-4. False coloring of multispectral images. (a) Infrared band, (b) blue band, (c) R = infrared, G = 0, B = blue, (d) R = infrared, G = 1/2 [infrared + blue], B = blue. See insert for a color representation of this figure.



enhancement techniques, such as contrast stretching, noise cleaning, and edge crisp-
ening, as described earlier. Other methods, considered in this section, involve the
joint processing of multispectral image bands.
   Multispectral image bands can be subtracted in pairs according to the relation


                              D m, n ( j, k ) = Fm ( j, k ) – Fn ( j, k )                  (10.6-1)


in order to accentuate reflectivity variations between the multispectral bands. An
associated advantage is the removal of any unknown but common bias components
that may exist. Another simple but highly effective means of multispectral image
enhancement is the formation of ratios of the image bands. The ratio image between
the mth and nth multispectral bands is defined as


$$R_{m, n}(j, k) = \frac{F_m(j, k)}{F_n(j, k)} \qquad (10.6-2)$$



It is assumed that the image bands are adjusted to have nonzero pixel values. In
many multispectral imaging systems, the image band Fn ( j, k ) can be modeled by
the product of an object reflectivity function R n ( j, k ) and an illumination function
I ( j, k ) that is identical for all multispectral bands. Ratioing of such imagery provides
an automatic compensation of the illumination factor. The ratio
F m ( j, k ) ⁄ [ F n ( j, k ) ± ∆ ( j, k ) ], for which ∆ ( j, k ) represents a quantization level uncer-
tainty, can vary considerably if F n ( j, k ) is small. This variation can be reduced
significantly by forming the logarithm of the ratios defined by (24)


               L m, n ( j, k ) = log { R m, n ( j, k ) } = log { F m ( j, k ) } – log { F n ( j, k ) }   (10.6-3)


    There are a total of N(N – 1) different difference or ratio pairs that may be formed
from N multispectral bands. To reduce the number of combinations to be consid-
ered, the differences or ratios are often formed with respect to an average image
field:

$$A(j, k) = \frac{1}{N} \sum_{n = 1}^{N} F_n(j, k) \qquad (10.6-4)$$
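
A sketch of band differencing and log-ratioing (Eqs. 10.6-1 and 10.6-3), assuming the bands have been adjusted to nonzero pixel values; the small epsilon guard is an assumption, not part of the text.

```python
import numpy as np

def band_difference(Fm, Fn):
    """Eq. 10.6-1: accentuate reflectivity variations between two bands."""
    return Fm.astype(float) - Fn.astype(float)

def band_log_ratio(Fm, Fn, eps=1e-6):
    """Eq. 10.6-3: log ratio, which cancels a common multiplicative illumination field."""
    return np.log(Fm.astype(float) + eps) - np.log(Fn.astype(float) + eps)
```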


  Unitary transforms between multispectral planes have also been employed as a
means of enhancement. For N image bands, an N × 1 vector


$$\mathbf{x} = \begin{bmatrix} F_1(j, k) \\ F_2(j, k) \\ \vdots \\ F_N(j, k) \end{bmatrix} \qquad (10.6-5)$$



is formed at each coordinate (j, k). Then, a transformation


                                                      y = Ax                                             (10.6-6)


is formed, where A is an N × N unitary matrix. A common transformation is the prin-
cipal components decomposition, described in Section 5.8, in which the rows of the
matrix A are composed of the eigenvectors of the covariance matrix Kx between the
bands. The matrix A performs a diagonalization of the covariance matrix Kx such
that the covariance matrix of the transformed imagery bands

$$\mathbf{K}_y = \mathbf{A} \mathbf{K}_x \mathbf{A}^{T} = \mathbf{\Lambda} \qquad (10.6-7)$$

is a diagonal matrix Λ whose elements are the eigenvalues of Kx arranged in
descending value. The principal components decomposition, therefore, results in a
set of decorrelated data arrays whose energies are ranged in amplitude. This process,
of course, requires knowledge of the covariance matrix between the multispectral
bands. The covariance matrix must be either modeled, estimated, or measured. If the
covariance matrix is highly nonstationary, the principal components method
becomes difficult to utilize.
   Figure 10.6-1 contains a set of four multispectral images, and Figure 10.6-2
exhibits their corresponding log ratios (37). Principal components bands of these
multispectral images are illustrated in Figure 10.6-3 (37).
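
A sketch of the principal components transformation of Eqs. 10.6-5 to 10.6-7, with the covariance matrix estimated from the image data itself (one of the options mentioned in the text); eigenvectors are sorted so that the transformed bands are ordered by descending eigenvalue.

```python
import numpy as np

def principal_components(bands):
    """Decorrelate N spectral bands; input shape (N, rows, cols), output the same shape."""
    N, rows, cols = bands.shape
    x = bands.reshape(N, -1).astype(float)            # one N x 1 vector per pixel (Eq. 10.6-5)
    x_centered = x - x.mean(axis=1, keepdims=True)
    Kx = np.cov(x_centered)                           # covariance matrix between the bands
    eigvals, eigvecs = np.linalg.eigh(Kx)
    order = np.argsort(eigvals)[::-1]                 # descending eigenvalues
    A = eigvecs[:, order].T                           # rows of A are eigenvectors of Kx
    y = A @ x_centered                                # y = A x  (Eq. 10.6-6)
    return y.reshape(N, rows, cols)
```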




FIGURE 10.6-1. Multispectral images. (a) Band 4 (green), (b) band 5 (red), (c) band 6 (infrared 1), (d) band 7 (infrared 2).




FIGURE 10.6-2. Logarithmic ratios of multispectral images. (a) Band 4 / band 5, (b) band 4 / band 6, (c) band 4 / band 7, (d) band 5 / band 6, (e) band 5 / band 7, (f) band 6 / band 7.




FIGURE 10.6-3. Principal components of multispectral images. (a) First band, (b) second band, (c) third band, (d) fourth band.



REFERENCES

 1. R. Nathan, “Picture Enhancement for the Moon, Mars, and Man,” in Pictorial Pattern
    Recognition, G. C. Cheng, ed., Thompson, Washington DC, 1968, 239–235.
 2. F. Billingsley, “Applications of Digital Image Processing,” Applied Optics, 9, 2, Febru-
    ary 1970, 289–299.
 3. H. C. Andrews, A. G. Tescher, and R. P. Kruger, “Image Processing by Digital Com-
    puter,” IEEE Spectrum, 9, 7, July 1972, 20–32.
 4. E. L. Hall et al., “A Survey of Preprocessing and Feature Extraction Techniques for
    Radiographic Images,” IEEE Trans. Computers, C-20, 9, September 1971, 1032–1044.
 5. E. L. Hall, “Almost Uniform Distribution for Computer Image Enhancement,” IEEE
    Trans. Computers, C-23, 2, February 1974, 207–208.
 6. W. Frei, “Image Enhancement by Histogram Hyperbolization,” Computer Graphics and
    Image Processing, 6, 3, June 1977, 286–294.

 7. D. J. Ketcham, “Real Time Image Enhancement Technique,” Proc. SPIE/OSA Confer-
    ence on Image Processing, Pacific Grove, CA, 74, February 1976, 120–125.
 8. R. A. Hummel, “Image Enhancement by Histogram Transformation,” Computer Graph-
    ics and Image Processing, 6, 2, 1977, 184–195.
 9. S. M. Pizer et al., “Adaptive Histogram Equalization and Its Variations,” Computer
    Vision, Graphics, and Image Processing. 39, 3, September 1987, 355–368.
10. G. P. Dineen, “Programming Pattern Recognition,” Proc. Western Joint Computer Con-
    ference, March 1955, 94–100.
11. R. E. Graham, “Snow Removal: A Noise Stripping Process for Picture Signals,” IRE
    Trans. Information Theory, IT-8, 1, February 1962, 129–144.
12. A. Rosenfeld, C. M. Park, and J. P. Strong, “Noise Cleaning in Digital Pictures,” Proc.
    EASCON Convention Record, October 1969, 264–273.
13. R. Nathan, “Spatial Frequency Filtering,” in Picture Processing and Psychopictorics,
    B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 151–164.
14. A. V. Oppenheim, R. W. Schaefer, and T. G. Stockham, Jr., “Nonlinear Filtering of Mul-
    tiplied and Convolved Signals,” Proc. IEEE, 56, 8, August 1968, 1264–1291.
15. G. A. Mastin, “Adaptive Filters for Digital Image Noise Smoothing: An Evaluation,”
    Computer Vision, Graphics, and Image Processing, 31, 1, July 1985, 103–121.
16. L. S. Davis and A. Rosenfeld, “Noise Cleaning by Iterated Local Averaging,” IEEE
    Trans. Systems, Man and Cybernetics, SMC-7, 1978, 705–710.
17. J. W. Tukey, Exploratory Data Analysis, Addison-Wesley, Reading, MA, 1971.
18. T. A. Nodes and N. C. Gallagher, Jr., “Median Filters: Some Manipulations and Their
    Properties,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 5, Octo-
    ber 1982, 739–746.
19. T. S. Huang, G. J. Yang, and G. Y. Tang, “A Fast Two-Dimensional Median Filtering
    Algorithm,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-27, 1, Febru-
    ary 1979, 13–18.
20. J. T. Astola and T. G. Campbell, “On Computation of the Running Median,” IEEE Trans.
    Acoustics, Speech, and Signal Processing, 37, 4, April 1989, 572–574.
21. W. K. Pratt, T. J. Cooper, and I. Kabir, “Pseudomedian Filter,” Proc. SPIE Conference,
    Los Angeles, January 1984.
22. J. S. Walker, A Primer on Wavelets and Their Scientific Applications, Chapman & Hall
    CRC Press, Boca Raton, FL, 1999.
23. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, New York, 1998.
24. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Elec-
    tro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge,
    MA, 1965.
25. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psy-
    chopictorics, B. S. Lipkin and A. Rosenfeld, eds., Academic Press, New York, 1970, 75–
    150.
26. A. Arcese, P. H. Mengert, and E. W. Trombini, “Image Detection Through Bipolar Cor-
    relation,” IEEE Trans. Information Theory, IT-16, 5, September 1970, 534–541.
27. W. F. Schreiber, “Wirephoto Quality Improvement by Unsharp Masking,” J. Pattern
    Recognition, 2, 1970, 111–121.


28. J-S. Lee, “Digital Image Enhancement and Noise Filtering by Use of Local Statistics,”
    IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-2, 2, March 1980,
    165–168.
29. A. Rosenfeld, Picture Processing by Computer, Academic Press, New York, 1969.
30. R. H. Wallis, “An Approach for the Space Variant Restoration and Enhancement of
    Images,” Proc. Symposium on Current Mathematical Problems in Image Science,
    Monterey, CA, November 1976.
31. O. D. Faugeras, “Digital Color Image Processing Within the Framework of a Human
    Visual Model,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-27, 4,
    August 1979, 380–393.
32. C. Gazley, J. E. Reibert, and R. H. Stratton, “Computer Works a New Trick in Seeing
    Pseudo Color Processing,” Aeronautics and Astronautics, 4, April 1967, 56.
33. L. W. Nichols and J. Lamar, “Conversion of Infrared Images to Visible in Color,”
    Applied Optics, 7, 9, September 1968, 1757.
34. E. R. Kreins and L. J. Allison, “Color Enhancement of Nimbus High Resolution Infrared
    Radiometer Data,” Applied Optics, 9, 3, March 1970, 681.
35. A. F. H. Goetz et al., “Application of ERTS Images and Image Processing to Regional
    Geologic Problems and Geologic Mapping in Northern Arizona,” Technical Report
    32-1597, Jet Propulsion Laboratory, Pasadena, CA, May 1975.
36. W. Find, “Image Coloration as an Interpretation Aid,” Proc. SPIE/OSA Conference on
    Image Processing, Pacific Grove, CA, February 1976, 74, 209–215.
37. G. S. Robinson and W. Frei, “Final Research Report on Computer Processing of ERTS
    Images,” Report USCIPI 640, University of Southern California, Image Processing
    Institute, Los Angeles, September 1975.




11
IMAGE RESTORATION MODELS




Image restoration may be viewed as an estimation process in which operations are
performed on an observed or measured image field to estimate the ideal image field
that would be observed if no image degradation were present in an imaging system.
Mathematical models are described in this chapter for image degradation in general
classes of imaging systems. These models are then utilized in subsequent chapters as
a basis for the development of image restoration techniques.


11.1. GENERAL IMAGE RESTORATION MODELS

In order effectively to design a digital image restoration system, it is necessary
quantitatively to characterize the image degradation effects of the physical imaging
system, the image digitizer, and the image display. Basically, the procedure is to
model the image degradation effects and then perform operations to undo the model
to obtain a restored image. It should be emphasized that accurate image modeling is
often the key to effective image restoration. There are two basic approaches to the
modeling of image degradation effects: a priori modeling and a posteriori modeling.
In the former case, measurements are made on the physical imaging system, digi-
tizer, and display to determine their response for an arbitrary image field. In some
instances it will be possible to model the system response deterministically, while in
other situations it will only be possible to determine the system response in a sto-
chastic sense. The a posteriori modeling approach is to develop the model for the
image degradations based on measurements of a particular image to be restored.
Basically, these two approaches differ only in the manner in which information is
gathered to describe the character of the image degradation.





                    FIGURE 11.1-1. Digital image restoration model.



    Figure 11.1-1 shows a general model of a digital imaging system and restoration
process. In the model, a continuous image light distribution C ( x, y, t, λ ) dependent
on spatial coordinates (x, y), time (t), and spectral wavelength ( λ ) is assumed to
exist as the driving force of a physical imaging system subject to point and spatial
degradation effects and corrupted by deterministic and stochastic disturbances.
Potential degradations include diffraction in the optical system, sensor nonlineari-
ties, optical system aberrations, film nonlinearities, atmospheric turbulence effects,
image motion blur, and geometric distortion. Noise disturbances may be caused by
electronic imaging sensors or film granularity. In this model, the physical imaging
system produces a set of output image fields $F_O^{(i)}(x, y, t_j)$ at time instant $t_j$ described
by the general relation

$$F_O^{(i)}(x, y, t_j) = O_P\{ C(x, y, t, \lambda) \} \qquad (11.1-1)$$


where O P { · } represents a general operator that is dependent on the space coordi-
nates (x, y), the time history (t), the wavelength ( λ ), and the amplitude of the light
distribution (C). For a monochrome imaging system, there will only be a single output
field, while for a natural color imaging system, $F_O^{(i)}(x, y, t_j)$ may denote the red,
green, and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery
may also involve several output bands of data.
   In the general model of Figure 11.1-1, each observed image field $F_O^{(i)}(x, y, t_j)$ is
digitized, following the techniques outlined in Part 3, to produce an array of image
samples $F_S^{(i)}(m_1, m_2, t_j)$ at each time instant $t_j$. The output samples of the digitizer
are related to the input observed field by

$$F_S^{(i)}(m_1, m_2, t_j) = O_G\{ F_O^{(i)}(x, y, t_j) \} \qquad (11.1-2)$$

where O G { · } is an operator modeling the image digitization process.
   A digital image restoration system that follows produces an output array
$F_K^{(i)}(k_1, k_2, t_j)$ by the transformation

$$F_K^{(i)}(k_1, k_2, t_j) = O_R\{ F_S^{(i)}(m_1, m_2, t_j) \} \qquad (11.1-3)$$


where $O_R\{\cdot\}$ represents the designed restoration operator. Next, the output samples
of the digital restoration system are interpolated by the image display system to produce
a continuous image estimate $\hat{F}_I^{(i)}(x, y, t_j)$. This operation is governed by the relation

$$\hat{F}_I^{(i)}(x, y, t_j) = O_D\{ F_K^{(i)}(k_1, k_2, t_j) \} \qquad (11.1-4)$$


where O D { · } models the display transformation.
   The function of the digital image restoration system is to compensate for degradations
of the physical imaging system, the digitizer, and the image display system
to produce an estimate of a hypothetical ideal image field $F_I^{(i)}(x, y, t_j)$ that would be
displayed if all physical elements were perfect. The perfect imaging system would
produce an ideal image field modeled by

$$F_I^{(i)}(x, y, t_j) = O_I\left\{ \int_0^{\infty} \int_{t_j - T}^{t_j} C(x, y, t, \lambda)\, U_i(t, \lambda)\, dt\, d\lambda \right\} \qquad (11.1-5)$$

where U i ( t, λ ) is a desired temporal and spectral response function, T is the observa-
tion period, and O I { · } is a desired point and spatial response function.
    Usually, it will not be possible to restore perfectly the observed image such that
the output image field is identical to the ideal image field. The design objective of
the image restoration processor is to minimize some error measure between
$F_I^{(i)}(x, y, t_j)$ and $\hat{F}_I^{(i)}(x, y, t_j)$. The discussion here is limited, for the most part, to a
consideration of techniques that minimize the mean-square error between the ideal
and estimated image fields as defined by

$$E_i = E\left\{ \left[ F_I^{(i)}(x, y, t_j) - \hat{F}_I^{(i)}(x, y, t_j) \right]^2 \right\} \qquad (11.1-6)$$

where E { · } denotes the expectation operator. Often, it will be desirable to place
side constraints on the error minimization, for example, to require that the image
estimate be strictly positive if it is to represent light intensities that are positive.
   Because the restoration process is to be performed digitally, it is often more con-
venient to restrict the error measure to discrete points on the ideal and estimated
image fields. These discrete arrays are obtained by mathematical models of perfect
image digitizers that produce the arrays


$$F_I^{(i)}(n_1, n_2, t_j) = F_I^{(i)}(x, y, t_j)\, \delta(x - n_1\Delta,\; y - n_2\Delta) \qquad (11.1-7a)$$

$$\hat{F}_I^{(i)}(n_1, n_2, t_j) = \hat{F}_I^{(i)}(x, y, t_j)\, \delta(x - n_1\Delta,\; y - n_2\Delta) \qquad (11.1-7b)$$


It is assumed that continuous image fields are sampled at a spatial period ∆ satisfy-
ing the Nyquist criterion. Also, quantization error is assumed negligible. It should be
noted that the processes indicated by the blocks of Figure 11.1-1 above the dashed
division line represent mathematical modeling and are not physical operations per-
formed on physical image fields and arrays. With this discretization of the continu-
ous ideal and estimated image fields, the corresponding mean-square restoration
error becomes



$$E_i = E\left\{ \left[ F_I^{(i)}(n_1, n_2, t_j) - \hat{F}_I^{(i)}(n_1, n_2, t_j) \right]^2 \right\} \qquad (11.1-8)$$


   With the relationships of Figure 11.1-1 quantitatively established, the restoration
problem may be formulated as follows:
   Given the sampled observation $F_S^{(i)}(m_1, m_2, t_j)$ expressed in terms of the
   image light distribution $C(x, y, t, \lambda)$, determine the transfer function $O_K\{\cdot\}$
   that minimizes the error measure between $F_I^{(i)}(x, y, t_j)$ and $\hat{F}_I^{(i)}(x, y, t_j)$ subject
   to desired constraints.
There are no general solutions for the restoration problem as formulated above
because of the complexity of the physical imaging system. To proceed further, it is
necessary to be more specific about the type of degradation and the method of resto-
ration. The following sections describe models for the elements of the generalized
imaging system of Figure 11.1-1.


11.2. OPTICAL SYSTEMS MODELS

One of the major advances in the field of optics during the past 40 years has been the
application of system concepts to optical imaging. Imaging devices consisting of
lenses, mirrors, prisms, and so on, can be considered to provide a deterministic
transformation of an input spatial light distribution to some output spatial light dis-
tribution. Also, the system concept can be extended to encompass the spatial propa-
gation of light through free space or some dielectric medium.
    In the study of geometric optics, it is assumed that light rays always travel in a
straight-line path in a homogeneous medium. By this assumption, a bundle of rays
passing through a clear aperture onto a screen produces a geometric light projection
of the aperture. However, if the light distribution at the region between the light and




                   FIGURE 11.2-1. Generalized optical imaging system.



dark areas on the screen is examined in detail, it is found that the boundary is not
sharp. This effect is more pronounced as the aperture size is decreased. For a pin-
hole aperture, the entire screen appears diffusely illuminated. From a simplistic
viewpoint, the aperture causes a bending of rays called diffraction. Diffraction of
light can be quantitatively characterized by considering light as electromagnetic
radiation that satisfies Maxwell's equations. The formulation of a complete theory of
optical imaging from the basic electromagnetic principles of diffraction theory is a
complex and lengthy task. In the following, only the key points of the formulation
are presented; details may be found in References 1 to 3.
   Figure 11.2-1 is a diagram of a generalized optical imaging system. A point in the
object plane at coordinate ( x o, y o ) of intensity I o ( x o, y o ) radiates energy toward an
imaging system characterized by an entrance pupil, exit pupil, and intervening sys-
tem transformation. Electromagnetic waves emanating from the optical system are
focused to a point ( x i, y i ) on the image plane producing an intensity I i ( x i, y i ) . The
imaging system is said to be diffraction limited if the light distribution at the image
plane produced by a point-source object consists of a converging spherical wave
whose extent is limited only by the exit pupil. If the wavefront of the electromag-
netic radiation emanating from the exit pupil is not spherical, the optical system is
said to possess aberrations.
   In most optical image formation systems, the optical radiation emitted by an
object arises from light transmitted or reflected from an incoherent light source. The
image radiation can often be regarded as quasimonochromatic in the sense that the
spectral bandwidth of the image radiation detected at the image plane is small with
respect to the center wavelength of the radiation. Under these joint assumptions, the
imaging system of Figure 11.2-1 will respond as a linear system in terms of the
intensity of its input and output fields. The relationship between the image intensity
and object intensity for the optical system can then be represented by the superposi-
tion integral equation

$$I_i(x_i, y_i) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(x_i, y_i;\, x_o, y_o)\, I_o(x_o, y_o)\, dx_o\, dy_o \qquad (11.2-1)$$


where H ( x i, y i ; x o, y o ) represents the image intensity response to a point source of
light. Often, the intensity impulse response is space invariant and the input–output
relationship is given by the convolution equation

$$I_i(x_i, y_i) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} H(x_i - x_o,\; y_i - y_o)\, I_o(x_o, y_o)\, dx_o\, dy_o \qquad (11.2-2)$$


In this case, the normalized Fourier transforms


                                               ∞       ∞
                                            ∫–∞ ∫–∞ Io ( xo, yo ) exp{ –i ( ωx xo + ωy yo ) } dxo dyo
             Io ( ω x, ω y ) = -----------------------------------------------------------------------------------------------------------------------
                                                                                                                                                     -   (11.2-3a)
                                                                ∞ ∞
                                                            ∫ ∫         –∞ –∞
                                                                               I o ( x o, y o ) dx o dy o


                                                   ∞       ∞
                                               ∫–∞ ∫–∞ Ii ( xi, yi ) exp{ –i ( ωx xi + ω y yi ) } dxi dy i
                I i ( ω x, ω y ) = ---------------------------------------------------------------------------------------------------------------
                                                                                                                                                 -       (11.2-3b)
                                                                  ∞ ∞
                                                               ∫ ∫        – ∞ –∞
                                                                                  Ii ( x i, y i ) dx i dyi



of the object and image intensity fields are related by


$$\mathcal{I}_i(\omega_x, \omega_y) = \mathcal{H}(\omega_x, \omega_y)\, \mathcal{I}_o(\omega_x, \omega_y) \qquad (11.2-4)$$


where H ( ω x, ω y ) , which is called the optical transfer function (OTF), is defined by


$$\mathcal{H}(\omega_x, \omega_y) = \frac{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x, y) \exp\{ -i(\omega_x x + \omega_y y) \}\, dx\, dy}{\displaystyle\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} H(x, y)\, dx\, dy} \qquad (11.2-5)$$



The absolute value $|\mathcal{H}(\omega_x, \omega_y)|$ of the OTF is known as the modulation transfer
function (MTF) of the optical system.
    The most common optical image formation system is a circular thin lens. Figure
11.2-2 illustrates the OTF for such a lens as a function of its degree of misfocus
(1, p. 486; 4). For extreme misfocus, the OTF will actually become negative at some
spatial frequencies. In this state, the lens will cause a contrast reversal: Dark objects
will appear light, and vice versa.
    Earth's atmosphere acts as an imaging system for optical radiation traversing a
path through the atmosphere. Normally, the index of refraction of the atmos-
phere remains relatively constant over the optical extent of an object, but in
some instances atmospheric turbulence can produce a spatially variable index of
refraction that leads to an effective blurring of any imaged object.

FIGURE 11.2-2. Cross section of transfer function of a lens. Numbers indicate degree of
misfocus.

An equivalent impulse response

                                                         2      2 5 ⁄ 6
                          H ( x, y ) = K 1 exp  – ( K 2 x + K3 y )                    (11.2-6)
                                                                       

where the K_n are constants, has been predicted mathematically and verified by
experimentation (5) for long-exposure image formation. For convenience in analysis,
the exponent 5/6 is often replaced by unity to obtain a Gaussian-shaped impulse
response model of the form
response model of the form


$H(x, y) = K \exp\left\{-\left(\dfrac{x^2}{2b_x^2} + \dfrac{y^2}{2b_y^2}\right)\right\}$   (11.2-7)


where K is an amplitude scaling constant and bx and by are blur-spread factors.
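A brief sketch (not the author's code) of sampling the Gaussian-shaped impulse response of Eq. 11.2-7 on a discrete grid; the grid size and blur-spread values are illustrative assumptions.

```python
import numpy as np

def gaussian_psf(size, bx, by, K=1.0):
    """Sample the Gaussian-shaped impulse response of Eq. 11.2-7 on a
    size x size grid centered at the origin."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    return K * np.exp(-(x**2 / (2.0 * bx**2) + y**2 / (2.0 * by**2)))

psf = gaussian_psf(size=21, bx=2.0, by=3.0)
psf /= psf.sum()          # normalize so the blur preserves mean intensity
print(psf.shape, psf.max())
```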
   Under the assumption that the impulse response of a physical imaging system is
independent of spectral wavelength and time, the observed image field can be mod-
eled by the superposition integral equation


$F_O^{(i)}(x, y, t_j) = O_C\left\{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda)\, H(x, y; \alpha, \beta)\, d\alpha\, d\beta\right\}$   (11.2-8)


where O C { · } is an operator that models the spectral and temporal characteristics of
the physical imaging system. If the impulse response is spatially invariant, the
model reduces to the convolution integral equation


$F_O^{(i)}(x, y, t_j) = O_C\left\{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} C(\alpha, \beta, t, \lambda)\, H(x - \alpha, y - \beta)\, d\alpha\, d\beta\right\}$   (11.2-9)



11.3. PHOTOGRAPHIC PROCESS MODELS

There are many different types of materials and chemical processes that have been
utilized for photographic image recording. No attempt is made here either to survey
the field of photography or to deeply investigate the physics of photography. Refer-
ences 6 to 8 contain such discussions. Rather, the attempt here is to develop mathe-
matical models of the photographic process in order to characterize quantitatively
the photographic components of an imaging system.


11.3.1. Monochromatic Photography

The most common material for photographic image recording is silver halide emul-
sion, depicted in Figure 11.3-1. In this material, silver halide grains are suspended in
a transparent layer of gelatin that is deposited on a glass, acetate, or paper backing.
If the backing is transparent, a transparency can be produced, and if the backing is a
white paper, a reflection print can be obtained. When light strikes a grain, an electro-
chemical conversion process occurs, and part of the grain is converted to metallic
silver. A development center is then said to exist in the grain. In the development
process, a chemical developing agent causes grains with partial silver content to be
converted entirely to metallic silver. Next, the film is fixed by chemically removing
unexposed grains.
    The photographic process described above is called a nonreversal process. It
produces a negative image in the sense that the silver density is inversely propor-
tional to the exposing light. A positive reflection print of an image can be obtained
in a two-stage process with nonreversal materials. First, a negative transparency is
produced, and then the negative transparency is illuminated to expose negative
reflection print paper. The resulting silver density on the developed paper is then
proportional to the light intensity that exposed the negative transparency.
    A positive transparency of an image can be obtained with a reversal type of film.
This film is exposed and undergoes a first development similar to that of a nonreversal
film. At this stage in the photographic process, all grains that have been exposed




                 FIGURE 11.3-1. Cross section of silver halide emulsion.

to light are converted completely to metallic silver. In the next step, the metallic
silver grains are chemically removed. The film is then uniformly exposed to light, or
alternatively, a chemical process is performed to expose the remaining silver halide
grains. Then the exposed grains are developed and fixed to produce a positive trans-
parency whose density is proportional to the original light exposure.
    The relationships between light intensity exposing a film and the density of silver
grains in a transparency or print can be described quantitatively by sensitometric
measurements. Through sensitometry, a model is sought that will predict the spec-
tral light distribution passing through an illuminated transparency or reflected from
a print as a function of the spectral light distribution of the exposing light and certain
physical parameters of the photographic process. The first stage of the photographic
process, that of exposing the silver halide grains, can be modeled to a first-order
approximation by the integral equation


                               X ( C ) = kx   ∫ C ( λ )L ( λ ) dλ                (11.3-1)


where X(C) is the integrated exposure, C ( λ ) represents the spectral energy distribu-
tion of the exposing light, L ( λ ) denotes the spectral sensitivity of the film or paper
plus any spectral losses resulting from filters or optical elements, and kx is an expo-
sure constant that is controllable by an aperture or exposure time setting. Equation
11.3-1 assumes a fixed exposure time. Ideally, if the exposure time were to be
increased by a certain factor, the exposure would be increased by the same factor.
Unfortunately, this relationship does not hold exactly. The departure from linearity
is called a reciprocity failure of the film. Another anomaly in exposure prediction is
the intermittency effect, in which the exposures for a constant intensity light and for
an intermittently flashed light differ even though the incident energy is the same for
both sources. Thus, if Eq. 11.3-1 is to be utilized as an exposure model, it is neces-
sary to observe its limitations: The equation is strictly valid only for a fixed expo-
sure time and constant-intensity illumination.
    The transmittance τ ( λ ) of a developed reversal or non-reversal transparency as a
function of wavelength can be ideally related to the density of silver grains by the
exponential law of absorption as given by


                                 τ ( λ ) = exp { – d e D ( λ ) }                 (11.3-2)


where D ( λ ) represents the characteristic density as a function of wavelength for a
reference exposure value, and de is a variable proportional to the actual exposure.
For monochrome transparencies, the characteristic density function D ( λ ) is reason-
ably constant over the visible region. As Eq. 11.3-2 indicates, high silver densities
result in low transmittances, and vice versa. It is common practice to change the pro-
portionality constant of Eq. 11.3-2 so that measurements are made in exponent ten
units. Thus, the transparency transmittance can be equivalently written as


$\tau(\lambda) = 10^{-d_x D(\lambda)}$   (11.3-3)

where dx is the density variable, inversely proportional to exposure, for exponent 10
units. From Eq. 11.3-3, it is seen that the photographic density is logarithmically
related to the transmittance. Thus,

$d_x D(\lambda) = -\log_{10} \tau(\lambda)$   (11.3-4)

   The reflectivity r o ( λ ) of a photographic print as a function of wavelength is also
inversely proportional to its silver density, and follows the exponential law of
absorption of Eq. 11.3-2. Thus, from Eqs. 11.3-3 and 11.3-4, one obtains directly

$r_o(\lambda) = 10^{-d_x D(\lambda)}$   (11.3-5)

$d_x D(\lambda) = -\log_{10} r_o(\lambda)$   (11.3-6)

where dx is an appropriately evaluated variable proportional to the exposure of the
photographic paper.
   The relational model between photographic density and transmittance or reflectivity
is straightforward and reasonably accurate. The major problem is the next step of
modeling the relationship between the exposure X(C) and the density variable dx.
Figure 11.3-2a shows a typical curve of the transmittance of a nonreversal transparency
as a function of exposure.

FIGURE 11.3-2. Relationships between transmittance, density, and exposure for a
nonreversal film.

FIGURE 11.3-3. H & D curves for a reversal film as a function of development time.

It is to be noted that the curve is highly nonlinear except
for a relatively narrow region in the lower exposure range. In Figure 11.3-2b, the
curve of Figure 11.3-2a has been replotted as transmittance versus the logarithm of
exposure. An approximate linear relationship is found to exist between transmit-
tance and the logarithm of exposure, but operation in this exposure region is usually
of little use in imaging systems. The parameter of interest in photography is the pho-
tographic density variable dx, which is plotted as a function of exposure and loga-
rithm of exposure in Figure 11.3-2c and 11.3-2d. The plot of density versus
logarithm of exposure is known as the H & D curve after Hurter and Driffield, who
performed fundamental investigations of the relationships between density and
exposure. Figure 11.3-3 is a plot of the H & D curve for a reversal type of film. In
Figure 11.3-2d, the central portion of the curve, which is approximately linear, has
been approximated by the line defined by


                               d x = γ [ log 10 X ( C ) – KF ]                 (11.3-7)


where γ represents the slope of the line and KF denotes the intercept of the line with
the log exposure axis. The slope of the curve γ (gamma) is a measure of the contrast
of the film, while the factor KF is a measure of the film speed; that is, a measure of
the base exposure required to produce a negative in the linear region of the H & D
curve. If the exposure is restricted to the linear portion of the H & D curve, substitu-
tion of Eq. 11.3-7 into Eq. 11.3-3 yields a transmittance function

$\tau(\lambda) = K_\tau(\lambda)\, [X(C)]^{-\gamma D(\lambda)}$   (11.3-8a)

where

$K_\tau(\lambda) \equiv 10^{\gamma K_F D(\lambda)}$   (11.3-8b)




                      FIGURE 11.3-4. Color film integral tripack.



    With the exposure model of Eq. 11.3-1, the transmittance or reflection models of
Eqs. 11.3-3 and 11.3-5, and the H & D curve, or its linearized model of Eq. 11.3-7, it
is possible mathematically to model the monochrome photographic process.
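Combining Eqs. 11.3-1, 11.3-7, and 11.3-3 gives a simple end-to-end sketch of that monochrome model. The scalar (already spectrally integrated) exposures and the γ, K_F, and D values below are invented for illustration and are not measured film data.

```python
import numpy as np

def film_transmittance(exposure, gamma=0.7, KF=-2.0, D=1.0):
    """Monochrome film model on the linear part of the H & D curve:
    density       d_x = gamma * (log10 X - KF)     (Eq. 11.3-7)
    transmittance tau = 10 ** (-d_x * D)           (Eq. 11.3-3)"""
    X = np.asarray(exposure, dtype=float)
    dx = gamma * (np.log10(X) - KF)
    return 10.0 ** (-dx * D)

X = np.array([0.01, 0.1, 1.0, 10.0])     # relative exposures
print(film_transmittance(X))             # transmittance falls as exposure grows
```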


11.3.2. Color Photography

Modern color photography systems utilize an integral tripack film, as illustrated in
Figure 11.3-4, to produce positive or negative transparencies. In a cross section of
this film, the first layer is a silver halide emulsion sensitive to blue light. A yellow
filter following the blue emulsion prevents blue light from passing through to the
green and red silver emulsions that follow in consecutive layers and are naturally
sensitive to blue light. A transparent base supports the emulsion layers. Upon devel-
opment, the blue emulsion layer is converted into a yellow dye transparency whose
dye concentration is proportional to the blue exposure for a negative transparency
and inversely proportional for a positive transparency. Similarly, the green and red
emulsion layers become magenta and cyan dye layers, respectively. Color prints can
be obtained by a variety of processes (7). The most common technique is to produce
a positive print from a color negative transparency onto nonreversal color paper.
    In the establishment of a mathematical model of the color photographic process,
each emulsion layer can be considered to react to light as does an emulsion layer of
a monochrome photographic material. To a first approximation, this assumption is
correct. However, there are often significant interactions between the emulsion and
dye layers. Each emulsion layer possesses a characteristic sensitivity, as shown by
the typical curves of Figure 11.3-5. The integrated exposures of the layers are given
by

                             X R ( C ) = d R ∫ C ( λ )L R ( λ ) dλ            (11.3-9a)

                             XG ( C ) = d G ∫ C ( λ )L G ( λ ) dλ             (11.3-9b)

                             X B ( C ) = d B ∫ C ( λ )L B ( λ ) dλ            (11.3-9c)




           FIGURE 11.3-5. Spectral sensitivities of typical film layer emulsions.



where dR, dG, dB are proportionality constants whose values are adjusted so that the
exposures are equal for a reference white illumination and so that the film is not sat-
urated. In the chemical development process of the film, a positive transparency is
produced with three absorptive dye layers of cyan, magenta, and yellow dyes.
   The transmittance τT ( λ ) of the developed transparency is the product of the
transmittance of the cyan τTC ( λ ), the magenta τ TM ( λ ), and the yellow τ TY ( λ ) dyes.
Hence,


                              τ T ( λ ) = τ TC ( λ )τTM ( λ )τ TY ( λ )             (11.3-10)


The transmittance of each dye is a function of its spectral absorption characteristic
and its concentration. This functional dependence is conveniently expressed in
terms of the relative density of each dye as

$\tau_{TC}(\lambda) = 10^{-c D_{NC}(\lambda)}$   (11.3-11a)

$\tau_{TM}(\lambda) = 10^{-m D_{NM}(\lambda)}$   (11.3-11b)

$\tau_{TY}(\lambda) = 10^{-y D_{NY}(\lambda)}$   (11.3-11c)

where c, m, y represent the relative amounts of the cyan, magenta, and yellow dyes,
and D NC ( λ ) , D NM ( λ ) , D NY ( λ ) denote the spectral densities of unit amounts of the
dyes. For unit amounts of the dyes, the transparency transmittance is

$\tau_{TN}(\lambda) = 10^{-D_{TN}(\lambda)}$   (11.3-12a)




FIGURE 11.3-6. Spectral dye densities and neutral density of a typical reversal color film.


where

                        D TN ( λ ) = D NC ( λ ) + D NM ( λ ) + D NY ( λ )       (11.3-12b)

Such a transparency appears to be a neutral gray when illuminated by a reference
white light. Figure 11.3-6 illustrates the typical dye densities and neutral density for
a reversal film.
   The relationship between the exposure values and dye layer densities is, in gen-
eral, quite complex. For example, the amount of cyan dye produced is a nonlinear
function not only of the red exposure, but is also dependent to a smaller extent on
the green and blue exposures. Similar relationships hold for the amounts of magenta
and yellow dyes produced by their exposures. Often, these interimage effects can be
neglected, and it can be assumed that the cyan dye is produced only by the red expo-
sure, the magenta dye by the green exposure, and the yellow dye by the blue expo-
sure. For this assumption, the dye density–exposure relationship can be
characterized by the Hurter–Driffield plot of equivalent neutral density versus the
logarithm of exposure for each dye. Figure 11.3-7 shows a typical H & D curve for a
reversal film. In the central portion of each H & D curve, the density versus expo-
sure characteristic can be modeled as

                                  c = γ C log 10 X R + K FC                     (11.3-13a)

                                 m = γ M log 10 X G + K FM                      (11.3-13b)

                                  y = γY log 10 X B + K FY                      (11.3-13c)




             FIGURE 11.3-7. H & D curves for a typical reversal color film.




where γC , γM , γ Y , representing the slopes of the curves in the linear region, are
called dye layer gammas.
   The spectral energy distribution of light passing through a developed transpar-
ency is the product of the transparency transmittance and the incident illumination
spectral energy distribution E ( λ ) as given by


$C_T(\lambda) = E(\lambda)\, 10^{-[c D_{NC}(\lambda) + m D_{NM}(\lambda) + y D_{NY}(\lambda)]}$   (11.3-14)



Figure 11.3-8 is a block diagram of the complete color film recording and reproduc-
tion process. The original light with distribution C ( λ ) and the light passing through
the transparency C T ( λ ) at a given resolution element are rarely identical. That is, a
spectral match is usually not achieved in the photographic process. Furthermore, the
lights C and CT usually do not even provide a colorimetric match.
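As an illustrative sketch only, Eqs. 11.3-13 and 11.3-14 can be chained to compute the spectral distribution of light passing through a developed transparency. The unit dye densities, dye layer gammas, speed constants, and illuminant used below are invented placeholders, not measured film data, and the sign conventions follow the linearized model as written.

```python
import numpy as np

wl = np.linspace(400, 700, 31)                       # wavelength samples (nm)

# Hypothetical unit dye densities (rough cyan, magenta, yellow absorption bands).
D_NC = np.exp(-((wl - 650) / 60.0) ** 2)
D_NM = np.exp(-((wl - 550) / 50.0) ** 2)
D_NY = np.exp(-((wl - 450) / 50.0) ** 2)

def transparency_light(XR, XG, XB, E, gammas=(1.0, 1.0, 1.0), K=(0.0, 0.0, 0.0)):
    """Dye amounts from Eq. 11.3-13, transmitted light from Eq. 11.3-14."""
    c = gammas[0] * np.log10(XR) + K[0]
    m = gammas[1] * np.log10(XG) + K[1]
    y = gammas[2] * np.log10(XB) + K[2]
    return E * 10.0 ** (-(c * D_NC + m * D_NM + y * D_NY))

E = np.ones_like(wl)                                 # flat reference illuminant
CT = transparency_light(XR=2.0, XG=4.0, XB=8.0, E=E)
print(CT.min(), CT.max())
```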




                              FIGURE 11.3-8. Color film model.



11.4. DISCRETE IMAGE RESTORATION MODELS

This chapter began with an introduction to a general model of an imaging system
and a digital restoration process. Next, typical components of the imaging system
were described and modeled within the context of the general model. Now, the dis-
cussion turns to the development of several discrete image restoration models. In the
development of these models, it is assumed that the spectral wavelength response
and temporal response characteristics of the physical imaging system can be sepa-
rated from the spatial and point characteristics. The following discussion considers
only spatial and point characteristics.
   After each element of the digital image restoration system of Figure 11.1-1 is
modeled, following the techniques described previously, the restoration system may
be conceptually distilled to three equations:

   Observed image:

             F S ( m 1, m 2 ) = O M { F I ( n 1, n 2 ), N 1 ( m 1, m 2 ), …, N N ( m 1, m 2 ) }   (11.4-1a)

   Compensated image:

                                FK ( k 1, k 2 ) = O R { F S ( m 1, m 2 ) }                        (11.4-1b)

   Restored image:
$\hat{F}_I(n_1, n_2) = O_D\{F_K(k_1, k_2)\}$   (11.4-1c)

where FS represents an array of observed image samples, FI and F̂I are arrays of
ideal image points and estimates, respectively, FK is an array of compensated image
points from the digital restoration system, Ni denotes arrays of noise samples from
various system elements, and O M { · } , O R { · } , O D { · } represent general transfer
functions of the imaging system, restoration processor, and display system, respec-
tively. Vector-space equivalents of Eq. 11.4-1 can be formed for purposes of analysis
by column scanning of the arrays of Eq. 11.4-1. These relationships are given by

                                f S = O M { f I, n 1, …, n N }                 (11.4-2a)

                               fK = OR { fS }                                  (11.4-2b)

$\hat{f}_I = O_D\{f_K\}$   (11.4-2c)


Several estimation approaches to the solution of 11.4-1 or 11.4-2 are described in
the following chapters. Unfortunately, general solutions have not been found;
recourse must be made to specific solutions for less general models.
   The most common digital restoration model is that of Figure 11.4-1a, in which a
continuous image field is subjected to a linear blur, the electrical sensor responds
nonlinearly to its input intensity, and the sensor amplifier introduces additive Gauss-
ian noise independent of the image field. The physical image digitizer that follows
may also introduce an effective blurring of the sampled image as the result of sam-
pling with extended pulses. In this model, display degradation is ignored.




FIGURE 11.4-1. Imaging and restoration models for a sampled blurred image with additive
noise.


    Figure 11.4-1b shows a restoration model for the imaging system. It is assumed
that the imaging blur can be modeled as a superposition operation with an impulse
response J(x, y) that may be space variant. The sensor is assumed to respond nonlin-
early to the input field FB(x, y) on a point-by-point basis, and its output is subject to
an additive noise field N(x, y). The effect of sampling with extended sampling
pulses, which are assumed symmetric, can be modeled as a convolution of FO(x, y)
with each pulse P(x, y) followed by perfect sampling.
    The objective of the restoration is to produce an array of samples F̂I(n1, n2) that
are estimates of points on the ideal input image field FI(x, y) obtained by a perfect
image digitizer sampling at a spatial period ∆I . To produce a digital restoration
model, it is necessary quantitatively to relate the physical image samples FS ( m 1, m 2 )
to the ideal image points FI ( n 1, n 2 ) following the techniques outlined in Section 7.2.
This is accomplished by truncating the sampling pulse equivalent impulse response
P(x, y) to some spatial limits ± TP , and then extracting points from the continuous
observed field FO(x, y) at a grid spacing ∆P . The discrete representation must then
be carried one step further by relating points on the observed image field FO(x, y) to
points on the image field FP(x, y) and the noise field N(x, y). The final step in the
development of the discrete restoration model involves discretization of the super-
position operation with J(x, y). There are two potential sources of error in this mod-
eling process: truncation of the impulse responses J(x, y) and P(x, y), and quadrature
integration errors. Both sources of error can be made negligibly small by choosing
the truncation limits TB and TP large and by choosing the quadrature spacings ∆I
and ∆P small. This, of course, increases the sizes of the arrays, and eventually, the
amount of storage and processing required. Actually, as is subsequently shown, the
numerical stability of the restoration estimate may be impaired by improving the
accuracy of the discretization process!
    The relative dimensions of the various arrays of the restoration model are impor-
tant. Figure 11.4-2 shows the nested nature of the arrays. The image array observed,
FO ( k 1, k 2 ), is smaller than the ideal image array, F I ( n 1, n 2 ), by the half-width of the
truncated impulse response J(x, y). Similarly, the array of physical sample points
FS(m1, m2) is smaller than the array of image points observed, FO ( k 1, k 2 ), by the
half-width of the truncated impulse response P ( x, y ).
    It is convenient to form vector equivalents of the various arrays of the restoration
model in order to utilize the formal structure of vector algebra in the subsequent
restoration analysis. Again, following the techniques of Section 7.2, the arrays are
reindexed so that the first element appears in the upper-left corner of each array.
Next, the vector relationships between the stages of the model are obtained by col-
umn scanning of the arrays to give

                                         fS = BP fO                                    (11.4-3a)

                                         fO = fP + n                                   (11.4-3b)

                                         fP = O P { f B }                              (11.4-3c)

                                         f B = BB f I                                  (11.4-3d)




                FIGURE 11.4-2. Relationships of sampled image arrays.




where the blur matrix BP contains samples of P(x, y) and BB contains samples of
J(x, y). The nonlinear operation of Eq. 11.4-3c is defined as a point-by-point nonlin-
ear transformation. That is,


                                  fP ( i ) = OP { fB ( i ) }                  (11.4-4)


   Equations 11.4-3a to 11.4-3d can be combined to yield a single equation for the
observed physical image samples in terms of points on the ideal image:


                              fS = BP OP { BB fI } + BP n                     (11.4-5)


   Several special cases of Eq. 11.4-5 will now be defined. First, if the point nonlin-
earity is absent,


                                     fS = BfI + n B                           (11.4-6)




FIGURE 11.4-3. Image arrays for underdetermined model: (a) original; (b) impulse
response; (c) observation.



where B = BPBB and nB = BPn. This is the classical discrete model consisting of a
set of linear equations with measurement uncertainty. Another case that will be
defined for later discussion occurs when the spatial blur of the physical image digi-
tizer is negligible. In this case,


                                  f S = O P { Bf I } + n                     (11.4-7)


where B = BB is defined by Eq. 7.2-15.
   Chapter 12 contains results for several image restoration experiments based on the
restoration model defined by Eq. 11.4-6. An artificial image has been generated for
these computer simulation experiments (9). The original image used for the analysis of
underdetermined restoration techniques, shown in Figure 11.4-3a, consists of a 4 × 4
pixel square of intensity 245 placed against an extended background of intensity

10 referenced to an intensity scale of 0 to 255. All images are zoomed for display
purposes. The Gaussian-shaped impulse response function is defined as

$H(l_1, l_2) = K \exp\left\{-\left(\dfrac{l_1^2}{2b_C^2} + \dfrac{l_2^2}{2b_R^2}\right)\right\}$   (11.4-8)


over a 5 × 5 point array where K is an amplitude scaling constant and bC and bR are
blur-spread constants.
   In the computer simulation restoration experiments, the observed blurred image
model has been obtained by multiplying the column-scanned original image of
Figure 11.4-3a by the blur matrix B. Next, additive white Gaussian observation
noise has been simulated by adding output variables from an appropriate random
number generator to the blurred images. For display, all image points restored are
clipped to the intensity range 0 to 255.
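A small sketch of the kind of simulation just described, using the model of Eq. 11.4-6 with the Gaussian impulse response of Eq. 11.4-8. The blur is applied here by direct 2-D convolution rather than by explicitly forming the block blur matrix B, and the array sizes, blur spreads, noise level, and random seed are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_impulse(size=5, bC=1.0, bR=1.0, K=1.0):
    """Gaussian-shaped impulse response of Eq. 11.4-8 over a size x size array."""
    half = size // 2
    l2, l1 = np.mgrid[-half:half + 1, -half:half + 1]
    h = K * np.exp(-(l1**2 / (2 * bC**2) + l2**2 / (2 * bR**2)))
    return h / h.sum()

# Original test image: a 4 x 4 square of intensity 245 on a background of intensity 10.
f_ideal = np.full((16, 16), 10.0)
f_ideal[6:10, 6:10] = 245.0

h = gaussian_impulse(5, bC=1.2, bR=1.2)
blurred = convolve2d(f_ideal, h, mode='valid')             # observed array is smaller (Fig. 11.4-2)
rng = np.random.default_rng(0)
observed = blurred + rng.normal(0.0, 5.0, blurred.shape)   # additive white Gaussian noise
observed = np.clip(observed, 0, 255)                       # clip for display, as in the text
print(f_ideal.shape, observed.shape)
```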


REFERENCES

1.   M. Born and E. Wolf, Principles of Optics, 7th ed., Pergamon Press, New York, 1999.
2.   J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
3.   E. L. O'Neill and E. H. O'Neill, Introduction to Statistical Optics, reprint ed., Addison-
     Wesley, Reading, MA, 1992.
4.   H. H. Hopkins, Proc. Royal Society, A, 231, 1184, July 1955, 98.
5.   R. E. Hufnagel and N. R. Stanley, “Modulation Transfer Function Associated with
     Image Transmission Through Turbulent Media,” J. Optical Society of America, 54, 1,
     January 1964, 52–61.
6.   K. Henney and B. Dudley, Handbook of Photography, McGraw-Hill, New York, 1939.
7.   R. M. Evans, W. T. Hanson, and W. L. Brewer, Principles of Color Photography, Wiley,
     New York, 1953.
8.   C. E. Mees, The Theory of Photographic Process, Macmillan, New York, 1966.
9.   N. D. A. Mascarenhas and W. K. Pratt, “Digital Image Restoration Under a Regression
     Model,” IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 252–266.




12
POINT AND SPATIAL IMAGE
RESTORATION TECHNIQUES




A common defect in imaging systems is unwanted nonlinearities in the sensor and
display systems. Postprocessing correction of sensor signals and preprocessing
correction of display signals can reduce such degradations substantially (1). Such
point restoration processing is usually relatively simple to implement. One of the
most common image restoration tasks is that of spatial image restoration to compen-
sate for image blur and to diminish noise effects. References 2 to 6 contain surveys
of spatial image restoration methods.


12.1. SENSOR AND DISPLAY POINT NONLINEARITY CORRECTION

This section considers methods for compensation of point nonlinearities of sensors
and displays.


12.1.1. Sensor Point Nonlinearity Correction

In imaging systems in which the source degradation can be separated into cascaded
spatial and point effects, it is often possible directly to compensate for the point deg-
radation (7). Consider a physical imaging system that produces an observed image
field FO ( x, y ) according to the separable model


                           F O ( x, y ) = O Q { O D { C ( x, y, λ ) } }         (12.1-1)





            FIGURE 12.1-1. Point luminance correction for an image sensor.


where C ( x, y, λ ) is the spectral energy distribution of the input light field, OQ { · }
represents the point amplitude response of the sensor and O D { · } denotes the spatial
and wavelength responses. Sensor luminance correction can then be accomplished
by passing the observed image through a correction system with a point restoration
operator O R { · } ideally chosen such that

                                   OR { OQ { · } } = 1                           (12.1-2)

For continuous images in optical form, it may be difficult to implement a desired
point restoration operator if the operator is nonlinear. Compensation for images in
analog electrical form can be accomplished with a nonlinear amplifier, while digital
image compensation can be performed by arithmetic operators or by a table look-up
procedure.
    Figure 12.1-1 is a block diagram that illustrates the point luminance correction
methodology. The sensor input is a point light distribution function C that is con-
verted to a binary number B for eventual entry into a computer or digital processor.
In some imaging applications, processing will be performed directly on the binary
representation, while in other applications, it will be preferable to convert to a real
fixed-point computer number linearly proportional to the sensor input luminance. In
the former case, the binary correction unit will produce a binary number B̃ that is
designed to be linearly proportional to C, and in the latter case, the fixed-point cor-
rection unit will produce a fixed-point number C̃ that is designed to be equal to C.
    A typical measured response B versus sensor input luminance level C is shown in
Figure 12.1-2a, while Figure 12.1-2b shows the corresponding compensated
response that is desired. The measured response can be obtained by scanning a gray
scale test chart of known luminance values and observing the digitized binary value
B at each step. Repeated measurements should be made to reduce the effects of
noise and measurement errors. For calibration purposes, it is convenient to regard
the binary-coded luminance as a fixed-point binary number. As an example, if the
luminance range is sliced to 4096 levels and coded with 12 bits, the binary represen-
tation would be


                      B = b8 b7 b6 b5 b4 b3 b2 b1. b–1 b–2 b–3 b–4               (12.1-3)




        FIGURE 12.1-2. Measured and compensated sensor luminance response.



The whole-number part in this example ranges from 0 to 255, and the fractional part
divides each integer step into 16 subdivisions. In this format, the scanner can pro-
duce output levels over the range

0.0 ≤ B ≤ 255.9375                          (12.1-4)

  After the measured gray scale data points of Figure 12.1-2a have been obtained, a
smooth analytic curve

                                    C = g{B}                                 (12.1-5)

is fitted to the data. The desired luminance response in real number and binary num-
ber forms is


$\tilde{C} = C$   (12.1-6a)

$\tilde{B} = B_{\max}\, \dfrac{C - C_{\min}}{C_{\max} - C_{\min}}$   (12.1-6b)


Hence, the required compensation relationships are


$\tilde{C} = g\{B\}$   (12.1-7a)

$\tilde{B} = B_{\max}\, \dfrac{g\{B\} - C_{\min}}{C_{\max} - C_{\min}}$   (12.1-7b)


The limits of the luminance function are commonly normalized to the range 0.0 to
1.0.
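One plausible digital realization of Eqs. 12.1-5 and 12.1-7, sketched below under assumptions not taken from the text, is a table look-up: fit a smooth curve C = g{B} to the measured calibration points and tabulate the corrected output for every possible input code. The polynomial fit, calibration data, and 8-bit input width are hypothetical.

```python
import numpy as np

# Hypothetical calibration data: measured binary codes B for known chart luminances C.
B_meas = np.array([5, 30, 70, 120, 170, 210, 240, 252], dtype=float)
C_meas = np.array([0.0, 0.1, 0.25, 0.45, 0.65, 0.82, 0.93, 1.0])

# Fit a smooth analytic curve C = g{B} (here a cubic polynomial).
g = np.polynomial.Polynomial.fit(B_meas, C_meas, deg=3)

Cmin, Cmax, Bmax = 0.0, 1.0, 255.0
B_all = np.arange(256)
C_tilde = np.clip(g(B_all), Cmin, Cmax)                      # Eq. 12.1-7a
B_tilde = np.round(Bmax * (C_tilde - Cmin) / (Cmax - Cmin))  # Eq. 12.1-7b (with roundoff)

def correct_sensor_image(raw):
    """Apply the point correction to an 8-bit sensor image by table look-up."""
    return B_tilde[raw.astype(np.uint8)]

print(B_tilde[[5, 120, 252]])
```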
   To improve the accuracy of the calibration procedure, it is first wise to perform a
rough calibration and then repeat the procedure as often as required to refine the cor-
rection curve. It should be observed that because B is a binary number, the corrected
luminance value C̃ will be a quantized real number. Furthermore, the corrected
binary coded luminance B̃ will be subject to binary roundoff of the right-hand side
of Eq. 12.1-7b. As a consequence of the nonlinearity of the fitted curve C = g{B}
and the amplitude quantization inherent to the digitizer, it is possible that some of
the corrected binary-coded luminance values may be unoccupied. In other words,
the image histogram of B̃ may possess gaps. To minimize this effect, the number of
output levels can be limited to less than the number of input levels. For example, B
may be coded to 12 bits and B̃ coded to only 8 bits. Another alternative is to add
pseudorandom noise to B̃ to smooth out the occupancy levels.
   Many image scanning devices exhibit a variable spatial nonlinear point lumi-
nance response. Conceptually, the point correction techniques described previously
could be performed at each pixel value using the measured calibrated curve at that
point. Such a process, however, would be mechanically prohibitive. An alternative
approach, called gain correction, that is often successful is to model the variable
spatial response by some smooth normalized two-dimensional curve G(j, k) over the
sensor surface. Then, the corrected spatial response can be obtained by the operation

$\tilde{F}(j, k) = \dfrac{F(j, k)}{G(j, k)}$   (12.1-8)

where F(j, k) and F̃(j, k) represent the raw and corrected sensor responses, respec-
tively.
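A compact sketch of the gain correction of Eq. 12.1-8, with G(j, k) estimated by heavily smoothing a flat-field frame (a Gaussian filter is used here as a stand-in for low-pass filtering of the kind described in the CCD example that follows); the smoothing scale and synthetic data are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gain_correct(raw, flat_field, sigma=25.0):
    """Gain correction of Eq. 12.1-8. G(j, k) is a smooth, normalized model of the
    spatial response, estimated here by low-pass filtering a flat-field frame."""
    G = gaussian_filter(flat_field.astype(float), sigma)
    G /= G.mean()                      # normalize the gain surface
    return raw / np.maximum(G, 1e-6)   # guard against division by very small gains

# Example with synthetic data: a vignetted flat field and a vignetted scene.
y, x = np.mgrid[0:256, 0:256]
falloff = 1.0 - 0.4 * (((x - 128)**2 + (y - 128)**2) / 128.0**2)
flat = 200.0 * falloff
scene = (100.0 + 50.0 * np.sin(x / 10.0)) * falloff
corrected = gain_correct(scene, flat)
print(scene.std(), corrected.std())
```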
   Figure 12.1-3 provides an example of adaptive gain correction of a charge cou-
pled device (CCD) camera. Figure 12.1-3a is an image of a spatially flat light box
surface obtained with the CCD camera. A line profile plot of a diagonal line through
the original image is presented in Figure 12.1-3b. Figure 12.1-3c is the gain-cor-
rected original, in which G ( j, k ) is obtained by Fourier domain low-pass filtering of




FIGURE 12.1-3. Gain correction of a CCD camera image: (a) original; (b) line profile of
original; (c) gain corrected; (d) line profile of gain corrected.



the original image. The line profile plot of Figure 12.1-3d shows the “flattened”
result.


12.1.2. Display Point Nonlinearity Correction

Correction of an image display for point luminance nonlinearities is identical in
principle to the correction of point luminance nonlinearities of an image sensor. The
procedure illustrated in Figure 12.1-4 involves distortion of the binary coded image
luminance variable B to form a corrected binary coded luminance function B̃ so that
the displayed luminance C̃ will be linearly proportional to B. In this formulation,
the display may include a photographic record of a displayed light field. The desired
overall response is

$\tilde{C} = B\, \dfrac{\tilde{C}_{\max} - \tilde{C}_{\min}}{B_{\max}} + \tilde{C}_{\min}$   (12.1-9)


   Normally, the maximum and minimum limits of the displayed luminance
function C̃ are not absolute quantities, but rather are transmissivities or reflectivities




            FIGURE 12.1-4. Point luminance correction of an image display.



normalized over a unit range. The measured response of the display and image
reconstruction system is modeled by the nonlinear function


                                           C = f {B}                            (12.1-10)


Therefore, the desired linear response can be obtained by setting

$\tilde{B} = g\left\{B\, \dfrac{\tilde{C}_{\max} - \tilde{C}_{\min}}{B_{\max}} + \tilde{C}_{\min}\right\}$   (12.1-11)


where g { · } is the inverse function of f { · } .
   The experimental procedure for determining the correction function g { · } will be
described for the common example of producing a photographic print from an
image display. The first step involves the generation of a digital gray scale step chart
over the full range of the binary number B. Usually, about 16 equally spaced levels
of B are sufficient. Next, the reflective luminance must be measured over each step
of the developed print to produce a plot such as in Figure 12.1-5. The data points are
then fitted by the smooth analytic curve B = g { C }, which forms the desired trans-
formation of Eq. 12.1-10. It is important that enough bits be allocated to B so that
the discrete mapping g { · } can be approximated to sufficient accuracy. Also, the
number of bits allocated to B̃ must be sufficient to prevent gray scale contouring as
the result of the nonlinear spacing of display levels. A 10-bit representation of B and
an 8-bit representation of B̃ should be adequate in most applications.
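The display correction of Eq. 12.1-11 can likewise be cast as a table look-up, sketched below under stated assumptions: fit B̃ = g{C} to measured step-chart data and evaluate it at the desired linear luminances. The gamma-like measured response, the polynomial fit, and the bit widths are hypothetical, not the book's measured data.

```python
import numpy as np

# Hypothetical measured display response C = f{B~}: luminance versus 8-bit display code.
codes = np.linspace(0, 255, 16)
lum = (codes / 255.0) ** 2.4                # a gamma-like display nonlinearity

# Fit the smooth inverse curve B~ = g{C} to the measured step-chart points.
g = np.polynomial.Polynomial.fit(lum, codes, deg=5)

Bmax = 1023.0                               # 10-bit input image code B
Ct_min, Ct_max = 0.0, 1.0                   # normalized displayed luminance limits
B = np.arange(1024)
desired_lum = B * (Ct_max - Ct_min) / Bmax + Ct_min
B_tilde = np.clip(np.round(g(desired_lum)), 0, 255)   # Eq. 12.1-11, rounded to 8 bits

print(B_tilde[[0, 512, 1023]])
```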
   Image display devices such as cathode ray tube displays often exhibit spatial
luminance variation. Typically, a displayed image is brighter at the center of the dis-
play screen than at its periphery. Correction techniques, as described by Eq. 12.1-8,
can be utilized for compensation of spatial luminance variations.




                    FIGURE 12.1-5. Measured image display response.



12.2. CONTINUOUS IMAGE SPATIAL FILTERING RESTORATION

For the class of imaging systems in which the spatial degradation can be modeled by
a linear-shift-invariant impulse response and the noise is additive, restoration of
continuous images can be performed by linear filtering techniques. Figure 12.2-1
contains a block diagram for the analysis of such techniques. An ideal image
FI ( x, y ) passes through a linear spatial degradation system with an impulse response
H D ( x, y ) and is combined with additive noise N ( x, y ). The noise is assumed to be
uncorrelated with the ideal image. The image field observed can be represented by
the convolution operation as

$F_O(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F_I(\alpha, \beta)\, H_D(x - \alpha, y - \beta)\, d\alpha\, d\beta + N(x, y)$   (12.2-1a)


or


$F_O(x, y) = F_I(x, y) \circledast H_D(x, y) + N(x, y)$   (12.2-1b)


The restoration system consists of a linear-shift-invariant filter defined by the
impulse response H R ( x, y ). After restoration with this filter, the reconstructed image
becomes

$\hat{F}_I(x, y) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} F_O(\alpha, \beta)\, H_R(x - \alpha, y - \beta)\, d\alpha\, d\beta$   (12.2-2a)


or
$\hat{F}_I(x, y) = F_O(x, y) \circledast H_R(x, y)$   (12.2-2b)




                      FIGURE 12.2-1. Continuous image restoration model.



Substitution of Eq. 12.2-1b into Eq. 12.2-2b yields


$\hat{F}_I(x, y) = [F_I(x, y) \circledast H_D(x, y) + N(x, y)] \circledast H_R(x, y)$   (12.2-3)


It is analytically convenient to consider the reconstructed image in the Fourier trans-
form domain. By the Fourier transform convolution theorem,


$\hat{F}_I(\omega_x, \omega_y) = [F_I(\omega_x, \omega_y)\, H_D(\omega_x, \omega_y) + N(\omega_x, \omega_y)]\, H_R(\omega_x, \omega_y)$   (12.2-4)


where FI(ωx, ωy), F̂I(ωx, ωy), N(ωx, ωy), HD(ωx, ωy), HR(ωx, ωy) are the two-dimen-
sional Fourier transforms of FI(x, y), F̂I(x, y), N(x, y), HD(x, y), HR(x, y), respec-
tively.
    The following sections describe various types of continuous image restoration
filters.


12.2.1. Inverse Filter

The earliest attempts at image restoration were based on the concept of inverse fil-
tering, in which the transfer function of the degrading system is inverted to yield a
restored image (8–12). If the restoration inverse filter transfer function is chosen so
that

$H_R(\omega_x, \omega_y) = \dfrac{1}{H_D(\omega_x, \omega_y)}$   (12.2-5)

then the spectrum of the reconstructed image becomes

$\hat{F}_I(\omega_x, \omega_y) = F_I(\omega_x, \omega_y) + \dfrac{N(\omega_x, \omega_y)}{H_D(\omega_x, \omega_y)}$   (12.2-6)




      FIGURE 12.2-2. Typical spectra of an inverse filtering image restoration system.



Upon inverse Fourier transformation, the restored image field

$\hat{F}_I(x, y) = F_I(x, y) + \dfrac{1}{4\pi^2}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \dfrac{N(\omega_x, \omega_y)}{H_D(\omega_x, \omega_y)} \exp\{i(\omega_x x + \omega_y y)\}\, d\omega_x\, d\omega_y$   (12.2-7)

is obtained. In the absence of source noise, a perfect reconstruction results, but if
source noise is present, there will be an additive reconstruction error whose value
can become quite large at spatial frequencies for which HD ( ω x, ω y ) is small.
Typically, H D ( ω x, ω y ) and F I ( ω x, ω y ) are small at high spatial frequencies, hence
image quality becomes severely impaired in high-detail regions of the recon-
structed image. Figure 12.2-2 shows typical frequency spectra involved in
inverse filtering.
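A minimal discrete sketch of inverse filtering (Eqs. 12.2-5 and 12.2-6) using FFTs, with a small-magnitude guard on H_D so that frequencies where the transfer function is essentially zero are left untouched rather than amplified; the blur, image, guard threshold, and circular-convolution setup are assumptions for illustration, not the text's experiment.

```python
import numpy as np

def inverse_filter(observed, psf, eps=1e-6):
    """Frequency-domain inverse filter, F_hat = F_O / H_D (Eq. 12.2-5),
    leaving frequencies with |H_D| < eps unchanged."""
    H = np.fft.fft2(np.fft.ifftshift(psf), s=observed.shape)
    G = np.fft.fft2(observed)
    H_safe = np.where(np.abs(H) < eps, 1.0, H)      # crude guard at near-zeros of H_D
    F_hat = np.where(np.abs(H) < eps, G, G / H_safe)
    return np.real(np.fft.ifft2(F_hat))

# Example: blur a synthetic image with a Gaussian PSF (circularly) and invert the blur.
y, x = np.mgrid[-64:64, -64:64]
ideal = 100.0 * (np.abs(x) < 10) * (np.abs(y) < 10)
psf = np.exp(-(x**2 + y**2) / (2 * 1.0**2)); psf /= psf.sum()
blurred = np.real(np.fft.ifft2(np.fft.fft2(ideal) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = inverse_filter(blurred, psf)
print(np.max(np.abs(restored - ideal)))             # small in this noise-free case
```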
   The presence of noise may severely affect the uniqueness of a restoration esti-
mate. That is, small changes in N ( x, y ) may radically change the value of the esti-
mate F̂I(x, y). For example, consider the dither function Z(x, y) added to an ideal
image to produce a perturbed image


                                          F Z ( x, y ) = F I ( x, y ) + Z ( x, y )                                       (12.2-8)

There may be many dither functions for which


$\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} Z(\alpha, \beta)\, H_D(x - \alpha, y - \beta)\, d\alpha\, d\beta < N(x, y)$   (12.2-9)



For such functions, the perturbed image field FZ ( x, y ) may satisfy the convolution
integral of Eq. 12.2-1 to within the accuracy of the observed image field. Specifi-
cally, it can be shown that if the dither function is a high-frequency sinusoid of
arbitrary amplitude, then in the limit


                     ∞              ∞                                                   
                lim  ∫          ∫        sin { n ( α + β ) } H D ( x – α, y – β ) dα dβ  = 0                              (12.2-10)
               n → ∞ – ∞ – ∞                                                            


For image restoration, this fact is particularly disturbing, for two reasons. High-fre-
quency signal components may be present in an ideal image, yet their presence may
be masked by observation noise. Conversely, a small amount of observation noise
may lead to a reconstruction of F I ( x, y ) that contains very large amplitude high-fre-
quency components. If relatively small perturbations N ( x, y ) in the observation
result in large dither functions for a particular degradation impulse response, the
convolution integral of Eq. 12.2-1 is said to be unstable or ill conditioned. This
potential instability is dependent on the structure of the degradation impulse
response function.
   There have been several ad hoc proposals to alleviate noise problems inherent to
inverse filtering. One approach (10) is to choose a restoration filter with a transfer
function


                                                               H K ( ω x, ω y )
                                            H R ( ω x, ω y ) = ---------------------------
                                                                                         -                                  (12.2-11)
                                                               H D ( ω x, ω y )


where H K ( ω x, ω y ) has a value of unity at spatial frequencies for which the expected
magnitude of the ideal image spectrum is greater than the expected magnitude of the
noise spectrum, and zero elsewhere. The reconstructed image spectrum is then


$\hat{F}_I(\omega_x, \omega_y) = F_I(\omega_x, \omega_y)\, H_K(\omega_x, \omega_y) + \dfrac{N(\omega_x, \omega_y)\, H_K(\omega_x, \omega_y)}{H_D(\omega_x, \omega_y)}$   (12.2-12)


The result is a compromise between noise suppression and loss of high-frequency
image detail.
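A sketch of the modified inverse filter of Eq. 12.2-11, in which H_K is a binary mask passing only the frequencies where the expected image power exceeds the expected noise power. The spectra supplied to build the mask are assumed to be known or estimated elsewhere; nothing here is specific to reference (10).

```python
import numpy as np

def masked_inverse_filter(observed, H_D, W_FI, W_N):
    """Restoration filter H_R = H_K / H_D (Eq. 12.2-11), with H_K = 1 where the
    expected image power spectrum exceeds the noise power spectrum, 0 elsewhere."""
    H_K = (W_FI > W_N).astype(float)
    H_R = np.where(H_K > 0, H_K / np.where(H_D == 0, 1.0, H_D), 0.0)
    return np.real(np.fft.ifft2(np.fft.fft2(observed) * H_R))

# Usage sketch: H_D, W_FI, and W_N are arrays on the same DFT frequency grid as
# `observed`, e.g. H_D from the blur PSF and W_FI, W_N from assumed spectrum models.
```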
   Another fundamental difficulty with inverse filtering is that the transfer function
of the degradation may have zeros in its passband. At such points in the frequency
spectrum, the inverse filter is not physically realizable, and therefore the filter must
be approximated by a large value response at such points.

12.2.2. Wiener Filter

It should not be surprising that inverse filtering performs poorly in the presence of
noise because the filter design ignores the noise process. Improved restoration qual-
ity is possible with Wiener filtering techniques, which incorporate a priori statistical
knowledge of the noise field (13–17).
    In the general derivation of the Wiener filter, it is assumed that the ideal image
FI ( x, y ) and the observed image F O ( x, y ) of Figure 12.2-1 are samples of two-
dimensional, continuous stochastic fields with zero-value spatial means. The
impulse response of the restoration filter is chosen to minimize the mean-square res-
toration error

$\mathcal{E} = E\{[F_I(x, y) - \hat{F}_I(x, y)]^2\}$   (12.2-13)


The mean-square error is minimized when the following orthogonality condition is
met (13):

$E\{[F_I(x, y) - \hat{F}_I(x, y)]\, F_O(x', y')\} = 0$   (12.2-14)


for all image coordinate pairs ( x, y ) and ( x′, y′ ). Upon substitution of Eq. 12.2-2a
for the restored image and some linear algebraic manipulation, one obtains

$E\{F_I(x, y)\, F_O(x', y')\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} E\{F_O(\alpha, \beta)\, F_O(x', y')\}\, H_R(x - \alpha, y - \beta)\, d\alpha\, d\beta$   (12.2-15)


Under the assumption that the ideal image and observed image are jointly stationary,
the expectation terms can be expressed as covariance functions, as in Eq. 1.4-8. This
yields


$K_{F_I F_O}(x - x', y - y') = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} K_{F_O F_O}(\alpha - x', \beta - y')\, H_R(x - \alpha, y - \beta)\, d\alpha\, d\beta$   (12.2-16)


Then, taking the two-dimensional Fourier transform of both sides of Eq. 12.2-16 and
solving for H R ( ω x, ω y ) , the following general expression for the Wiener filter trans-
fer function is obtained:

$H_R(\omega_x, \omega_y) = \dfrac{W_{F_I F_O}(\omega_x, \omega_y)}{W_{F_O F_O}(\omega_x, \omega_y)}$   (12.2-17)




In the special case of the additive noise model of Figure 12.2-1:


$W_{F_I F_O}(\omega_x, \omega_y) = H_D^*(\omega_x, \omega_y)\, W_{F_I}(\omega_x, \omega_y)$   (12.2-18a)

$W_{F_O F_O}(\omega_x, \omega_y) = |H_D(\omega_x, \omega_y)|^2\, W_{F_I}(\omega_x, \omega_y) + W_N(\omega_x, \omega_y)$   (12.2-18b)




This leads to the additive noise Wiener filter

$H_R(\omega_x, \omega_y) = \dfrac{H_D^*(\omega_x, \omega_y)\, W_{F_I}(\omega_x, \omega_y)}{|H_D(\omega_x, \omega_y)|^2\, W_{F_I}(\omega_x, \omega_y) + W_N(\omega_x, \omega_y)}$   (12.2-19a)

or

$H_R(\omega_x, \omega_y) = \dfrac{H_D^*(\omega_x, \omega_y)}{|H_D(\omega_x, \omega_y)|^2 + W_N(\omega_x, \omega_y) / W_{F_I}(\omega_x, \omega_y)}$   (12.2-19b)


In the latter formulation, the transfer function of the restoration filter can be
expressed in terms of the signal-to-noise power ratio

$\mathrm{SNR}(\omega_x, \omega_y) \equiv \dfrac{W_{F_I}(\omega_x, \omega_y)}{W_N(\omega_x, \omega_y)}$   (12.2-20)

at each spatial frequency. Figure 12.2-3 shows cross-sectional sketches of a typical
ideal image spectrum, noise spectrum, blur transfer function, and the resulting
Wiener filter transfer function. As noted from the figure, this version of the Wiener
filter acts as a bandpass filter. It performs as an inverse filter at low spatial frequen-
cies, and as a smooth rolloff low-pass filter at high spatial frequencies.
    Equation 12.2-19 is valid when the ideal image and observed image stochastic
processes are zero mean. In this case, the reconstructed image Fourier transform is


                              F̂_I(ω_x, ω_y) = H_R(ω_x, ω_y) F_O(ω_x, ω_y)                                                    (12.2-21)


If the ideal image and observed image means are nonzero, the proper form of the
reconstructed image Fourier transform is


        F̂_I(ω_x, ω_y) = H_R(ω_x, ω_y) [ F_O(ω_x, ω_y) − M_O(ω_x, ω_y) ] + M_I(ω_x, ω_y)                                      (12.2-22a)


where


                        M_O(ω_x, ω_y) = H_D(ω_x, ω_y) M_I(ω_x, ω_y) + M_N(ω_x, ω_y)                                          (12.2-22b)




      FIGURE 12.2-3. Typical spectra of a Wiener filtering image restoration system.



and MI ( ω x, ω y ) and M N ( ω x, ω y ) are the two-dimensional Fourier transforms of the
means of the ideal image and noise, respectively. It should be noted that Eq. 12.2-22
accommodates spatially varying mean models. In practice, it is common to estimate
the mean of the observed image by its spatial average M O ( x, y ) and apply the Wiener
filter of Eq. 12.2-19 to the observed image difference F O ( x, y ) – M O ( x, y ), and then
add back the ideal image mean M I ( x, y ) to the Wiener filter result.
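
   As an illustrative sketch, the additive noise Wiener filter of Eq. 12.2-19b can be evaluated on a discrete frequency grid and applied with the mean handling of Eq. 12.2-22a. The NumPy fragment below assumes that the blur transfer function and the two power spectral densities are supplied as arrays matching the image DFT, and it uses the observed spatial average as a surrogate for both means, which is an assumption of the sketch rather than part of the formulation.

import numpy as np

def wiener_restore(f_o, h_d, w_fi, w_n):
    # f_o : observed image; h_d : blur transfer function H_D
    # w_fi : ideal image power spectrum W_FI; w_n : noise power spectrum W_N
    m_o = f_o.mean()                          # spatial average M_O (Eq. 12.2-22a)
    F_o = np.fft.fft2(f_o - m_o)              # transform of the mean-removed observation
    h_r = np.conj(h_d) / (np.abs(h_d) ** 2 + w_n / w_fi)   # Eq. 12.2-19b
    f_hat = np.real(np.fft.ifft2(h_r * F_o))
    return f_hat + m_o                        # add back a mean estimate
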
    It is useful to investigate special cases of Eq. 12.2-19. If the ideal image is
assumed to be uncorrelated with unit energy, W F I ( ω x, ω y ) = 1 and the Wiener filter
becomes


                        H_R(ω_x, ω_y) = H_D^*(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 + W_N(ω_x, ω_y) ]                              (12.2-23)


This version of the Wiener filter provides less noise smoothing than does the general
case of Eq. 12.2-19. If there is no blurring of the ideal image, H D ( ω x, ω y ) = 1 and
the Wiener filter becomes a noise smoothing filter with a transfer function

                              H_R(ω_x, ω_y) = 1 / [ 1 + W_N(ω_x, ω_y) ]                                                      (12.2-24)

   In many imaging systems, the impulse response of the blur may not be fixed;
rather, it changes shape in a random manner. A practical example is the blur caused
by imaging through a turbulent atmosphere. Obviously, a Wiener filter applied to
this problem would perform better if it could dynamically adapt to the changing blur
impulse response. If this is not possible, a design improvement in the Wiener filter
can be obtained by considering the impulse response to be a sample of a two-dimen-
sional stochastic process with a known mean shape and with a random perturbation
about the mean modeled by a known power spectral density. Transfer functions for
this type of restoration filter have been developed by Slepian (18).


12.2.3. Parametric Estimation Filters

Several variations of the Wiener filter have been developed for image restoration.
Some techniques are ad hoc, while others have a quantitative basis.
  Cole (19) has proposed a restoration filter with a transfer function

        H_R(ω_x, ω_y) = { W_{F_I}(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y) ] }^{1/2}                (12.2-25)



The power spectrum of the filter output is

                        W_{F̂_I}(ω_x, ω_y) = |H_R(ω_x, ω_y)|^2 W_{F_O}(ω_x, ω_y)                                              (12.2-26)




where W FO ( ω x, ω y ) represents the power spectrum of the observation, which is
related to the power spectrum of the ideal image by

              W_{F_O}(ω_x, ω_y) = |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y)                                        (12.2-27)




Thus, it is easily seen that the power spectrum of the reconstructed image is identical
to the power spectrum of the ideal image field. That is,


                              W_{F̂_I}(ω_x, ω_y) = W_{F_I}(ω_x, ω_y)                                                          (12.2-28)

For this reason, the restoration filter defined by Eq. 12.2-25 is called the image
power-spectrum filter. In contrast, the power spectrum for the reconstructed image
as obtained by the Wiener filter of Eq. 12.2-19 is

        W_{F̂_I}(ω_x, ω_y) = |H_D(ω_x, ω_y)|^2 [ W_{F_I}(ω_x, ω_y) ]^2 / [ |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y) ]   (12.2-29)




In this case, the power spectra of the reconstructed and ideal images become identical only for a noise-free observation. Although equivalence of the power spectra of the ideal and reconstructed images appears to be an attractive feature of the image power-spectrum filter, it is more important that the Fourier spectra (Fourier transforms) of the ideal and reconstructed images be identical, because an image is uniquely determined by its Fourier transform but not, in general, by its power spectrum. Furthermore, the Wiener filter provides a minimum mean-square error estimate, while the image power-spectrum filter may result in a large residual mean-square error.
   Cole (19) has also introduced a geometrical mean filter, defined by the transfer
function


  H_R(ω_x, ω_y) = [ H_D(ω_x, ω_y) ]^{−S} { H_D^*(ω_x, ω_y) W_{F_I}(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 W_{F_I}(ω_x, ω_y) + W_N(ω_x, ω_y) ] }^{1−S}   (12.2-30)



where 0 ≤ S ≤ 1 is a design parameter. If S = 1 ⁄ 2 and H_D = H_D^*, the geometrical mean filter reduces to the image power-spectrum filter as given in Eq. 12.2-25.
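
   A brief sketch of Eq. 12.2-30 on a discrete frequency grid is given below; the spectral arrays are assumed to be supplied, and the small constant added to H_D before exponentiation is a numerical safeguard introduced here, not part of the definition. Setting S = 0 recovers the Wiener filter of Eq. 12.2-19a, S = 1 an inverse filter, and S = 1/2 with real H_D the image power-spectrum filter.

import numpy as np

def geometrical_mean_filter(h_d, w_fi, w_n, s=0.5, eps=1e-12):
    # Geometrical mean restoration filter, Eq. 12.2-30
    wiener = np.conj(h_d) * w_fi / (np.abs(h_d) ** 2 * w_fi + w_n)
    return (1.0 / (h_d + eps)) ** s * wiener ** (1.0 - s)
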
   Hunt (20) has developed another parametric restoration filter, called the con-
strained least-squares filter, whose transfer function is of the form


                  H_R(ω_x, ω_y) = H_D^*(ω_x, ω_y) / [ |H_D(ω_x, ω_y)|^2 + γ |C(ω_x, ω_y)|^2 ]                                (12.2-31)


where γ is a design constant and C(ω_x, ω_y) is a design spectral variable. If γ = 1 and |C(ω_x, ω_y)|^2 is set equal to the reciprocal of the spectral signal-to-noise power ratio of Eq. 12.2-20, the constrained least-squares filter becomes equivalent to the Wiener filter of Eq. 12.2-19b. The spectral variable can also be used to minimize higher-order derivatives of the estimate.
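
   The constrained least-squares filter is equally direct to evaluate. In the sketch below the design spectral variable C(ω_x, ω_y) is taken, purely as an assumed example, to be the transfer function of a 3 × 3 Laplacian operator, which penalizes second-derivative energy in the estimate; γ and the blur transfer function are supplied by the caller.

import numpy as np

def constrained_least_squares_filter(h_d, gamma):
    # Eq. 12.2-31 with an assumed Laplacian smoothness constraint for C
    lap = np.array([[0.0, -1.0, 0.0],
                    [-1.0, 4.0, -1.0],
                    [0.0, -1.0, 0.0]])
    c = np.fft.fft2(lap, s=h_d.shape)         # C(wx, wy) sampled on the DFT grid
    return np.conj(h_d) / (np.abs(h_d) ** 2 + gamma * np.abs(c) ** 2)
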


12.2.4. Application to Discrete Images

The inverse filtering, Wiener filtering, and parametric estimation filtering tech-
niques developed for continuous image fields are often applied to the restoration of




FIGURE 12.2-4. Blurred test images: (a) original; (b) blurred, b = 2.0; (c) blurred with noise, SNR = 10.0.



discrete images. The common procedure has been to replace each of the continuous
spectral functions involved in the filtering operation by its discrete two-dimensional
Fourier transform counterpart. However, care must be taken in this conversion pro-
cess so that the discrete filtering operation is an accurate representation of the con-
tinuous convolution process and that the discrete form of the restoration filter
impulse response accurately models the appropriate continuous filter impulse
response.
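
   In practice the conversion amounts to sampling the blur and restoration transfer functions on the DFT grid and filtering by pointwise multiplication, which corresponds to circular convolution. The following sketch, which assumes a small spatial blur kernel and any of the restoration filter designs above supplied as a function, illustrates the procedure without addressing the aliasing and wraparound cautions noted above.

import numpy as np

def restore_discrete(f_o, blur_kernel, build_h_r):
    # Embed the kernel in an image-sized array centered at the DFT origin.
    h = np.zeros(f_o.shape)
    k0, k1 = blur_kernel.shape
    h[:k0, :k1] = blur_kernel
    h = np.roll(h, (-(k0 // 2), -(k1 // 2)), axis=(0, 1))
    h_d = np.fft.fft2(h)                      # discrete counterpart of H_D
    h_r = build_h_r(h_d)                      # e.g., a Wiener or constrained least-squares design
    return np.real(np.fft.ifft2(h_r * np.fft.fft2(f_o)))
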
   Figures 12.2-4 to 12.2-7 present examples of continuous image spatial filtering
techniques by discrete Fourier transform filtering. The original image of Figure
12.2-4a has been blurred with a Gaussian-shaped impulse response with b = 2.0 to
obtain the blurred image of Figure 12.2-4b. White Gaussian noise has been added to
the blurred image to give the noisy blurred image of Figure 12.2-4c, which has a sig-
nal-to-noise ratio of 10.0.

    Figure 12.2-5 shows the results of inverse filter image restoration of the blurred
and noisy-blurred images. In Figure 12.2-5a, the inverse filter transfer function
follows Eq. 12.2-5 (i.e., no high-frequency cutoff). The restored image for the noise-
free observation is corrupted completely by the effects of computational error. The
computation was performed using 32-bit floating-point arithmetic. In Figure 12.2-5c
the inverse filter restoration is performed with a circular cutoff inverse filter as
defined by Eq. 12.2-11 with C = 200 for the 512 × 512 pixel noise-free observation.
Some faint artifacts are visible in the restoration. In Figure 12.2-5e the cutoff fre-
quency is reduced to C = 150 . The restored image appears relatively sharp and free
of artifacts. Figure 12.2-5b, d, and f show the result of inverse filtering on the noisy-
blurred observed image with varying cutoff frequencies. These restorations illustrate
the trade-off between the level of artifacts and the degree of deblurring.
    Figure 12.2-6 shows the results of Wiener filter image restoration. In all cases,
the noise power spectral density is white and the signal power spectral density is
circularly symmetric Markovian with a correlation factor ρ . For the noise-free
observation, the Wiener filter provides restorations that are free of artifacts but only
slightly sharper than the blurred observation. For the noisy observation, the
restoration artifacts are less noticeable than for an inverse filter.
    Figure 12.2-7 presents restorations using the power spectrum filter. For a noise-
free observation, the power spectrum filter gives a restoration of similar quality to
an inverse filter with a low cutoff frequency. For a noisy observation, the power
spectrum filter restorations appear to be grainier than for the Wiener filter.
    The continuous image field restoration techniques derived in this section are
advantageous in that they are relatively simple to understand and to implement
using Fourier domain processing. However, these techniques face several important
limitations. First, there is no provision for aliasing error effects caused by physical
undersampling of the observed image. Second, the formulation inherently assumes
that the quadrature spacing of the convolution integral is the same as the physical
sampling. Third, the methods only permit restoration for linear, space-invariant deg-
radation. Fourth, and perhaps most important, it is difficult to analyze the effects of
numerical errors in the restoration process and to develop methods of combatting
such errors. For these reasons, it is necessary to turn to the discrete model of a sam-
pled blurred image developed in Section 7.2 and then reformulate the restoration
problem on a firm numerical basis. This is the subject of the remaining sections of the
chapter.


12.3. PSEUDOINVERSE SPATIAL IMAGE RESTORATION

The matrix pseudoinverse defined in Chapter 5 can be used for spatial image resto-
ration of digital images when it is possible to model the spatial degradation as a
vector-space operation on a vector of ideal image points yielding a vector of physi-
cal observed samples obtained from the degraded image (21–23).




FIGURE 12.2-5. Inverse filter image restoration on the blurred test images: (a) noise-free, no cutoff; (b) noisy, C = 100; (c) noise-free, C = 200; (d) noisy, C = 75; (e) noise-free, C = 150; (f) noisy, C = 50.




FIGURE 12.2-6. Wiener filter image restoration on the blurred test images; SNR = 10.0: (a) noise-free, ρ = 0.9; (b) noisy, ρ = 0.9; (c) noise-free, ρ = 0.5; (d) noisy, ρ = 0.5; (e) noise-free, ρ = 0.0; (f) noisy, ρ = 0.0.




FIGURE 12.2-7. Power spectrum filter image restoration on the blurred test images; SNR = 10.0: (a) noise-free, ρ = 0.5; (b) noisy, ρ = 0.5; (c) noisy, ρ = 0.5; (d) noisy, ρ = 0.0.



12.3.1. Pseudoinverse: Image Blur

The first application of the pseudoinverse to be considered is that of the restoration
of a blurred image described by the vector-space model


                                       g = Bf                                (12.3-1)

as derived in Eq. 11.5-6, where g is a P × 1 vector (P = M^2) containing the M × M physical samples of the blurred image, f is a Q × 1 vector (Q = N^2) containing N × N points of the ideal image and B is the P × Q matrix whose elements are points

on the impulse function. If the physical sample period and the quadrature represen-
tation period are identical, P will be smaller than Q, and the system of equations will
be underdetermined. By oversampling the blurred image, it is possible to force
P > Q or even P = Q . In either case, the system of equations is called overdeter-
mined. An overdetermined set of equations can also be obtained if some of the
elements of the ideal image vector can be specified through a priori knowledge. For
example, if the ideal image is known to contain a limited size object against a black
background (zero luminance), the elements of f beyond the limits may be set to zero.
   In discrete form, the restoration problem reduces to finding a solution f̂ to Eq. 12.3-1 in the sense that

                                            B f̂ = g                                (12.3-2)

Because the vector g is determined by physical sampling and the elements of B are
specified independently by system modeling, there is no guarantee that an f̂ even
exists to satisfy Eq. 12.3-2. If there is a solution, the system of equations is said to be
consistent; otherwise, the system of equations is inconsistent.
   In Appendix 1 it is shown that inconsistency in the set of equations of Eq. 12.3-1
can be characterized as


                                        g = Bf + e { f }                          (12.3-3)


where e { f } is a vector of remainder elements whose value depends on f. If the set
of equations is inconsistent, a solution of the form


                                            f̂ = Wg                                     (12.3-4)


is sought for which the linear operator W minimizes the least-squares modeling
error


                        E_M = [e{f̂}]^T [e{f̂}] = [g − B f̂]^T [g − B f̂]                  (12.3-5)


This error is shown, in Appendix 1, to be minimized when the operator W = B^$ is set equal to the least-squares inverse of B. The least-squares inverse is not necessarily unique. It is also proved in Appendix 1 that the generalized inverse operator W = B^−, which is a special case of the least-squares inverse, is unique, minimizes the least-squares modeling error, and simultaneously provides a minimum norm estimate. That is, the sum of the squares of the elements of f̂ is a minimum for all possible minimum least-square error estimates. For the restoration of image blur, the generalized inverse provides a lowest-intensity restored image.
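
   For problems of modest size the generalized inverse estimate can be formed directly by singular value decomposition; a minimal sketch using numpy.linalg.pinv, with B and the stacked observation vector g assumed given, is:

import numpy as np

def generalized_inverse_restore(B, g):
    # Minimum norm, least-squares estimate f_hat = B^- g (Eq. 12.3-4)
    return np.linalg.pinv(B) @ g
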


   If Eq. 12.3-1 represents a consistent set of equations, one or more solutions may
exist for Eq. 12.3-2. The solution commonly chosen is the estimate that minimizes
the least-squares estimation error defined in the equivalent forms


                               E_E = (f − f̂)^T (f − f̂)                         (12.3-6a)

                               E_E = tr{ (f − f̂)(f − f̂)^T }                     (12.3-6b)


In Appendix 1 it is proved that the estimation error is minimum for a generalized inverse (W = B^−) estimate. The resultant residual estimation error then becomes

                              E_E = f^T [ I − B^− B ] f                        (12.3-7a)

or

                              E_E = tr{ f f^T [ I − B^− B ] }                  (12.3-7b)


The estimate is perfect, of course, if B^− B = I.
   Thus, it is seen that the generalized inverse is an optimal solution, in the sense
defined previously, for both consistent and inconsistent sets of equations modeling
image blur. From Eq. 5.5-5, the generalized inverse has been found to be algebra-
ically equivalent to

                                  B^− = [B^T B]^{−1} B^T                       (12.3-8a)


if the P × Q matrix B is of rank Q. If B is of rank P, then

                                  B^− = B^T [B B^T]^{−1}                       (12.3-8b)


For a consistent set of equations and a rank Q generalized inverse, the estimate

                       f̂ = B^− g = B^− B f = [ [B^T B]^{−1} B^T ] B f = f               (12.3-9)


is obviously perfect. However, in all other cases, a residual estimation error may
occur. Clearly, it would be desirable to deal with an overdetermined blur matrix of
rank Q in order to achieve a perfect estimate. Unfortunately, this situation is rarely

achieved in image restoration. Oversampling the blurred image can produce an
overdetermined set of equations ( P > Q ), but the rank of the blur matrix is likely to
be much less than Q because the rows of the blur matrix will become more linearly
dependent with finer sampling.
   A major problem in application of the generalized inverse to image restoration is
dimensionality. The generalized inverse is a Q × P matrix where P is equal to the
number of pixel observations and Q is equal to the number of pixels to be estimated
in an image. It is usually not computationally feasible to use the generalized inverse
operator, defined by Eq. 12.3-8, over large images because of difficulties in reliably
computing the generalized inverse and the large number of vector multiplications
associated with Eq. 12.3-4. Computational savings can be realized if the blur matrix
B is separable such that

                                      B = B_C ⊗ B_R                           (12.3-10)

where B_C and B_R are column and row blur operators. In this case, the generalized inverse is separable in the sense that

                                   B^− = B_C^− ⊗ B_R^−                          (12.3-11)

where B_C^− and B_R^− are generalized inverses of B_C and B_R, respectively. Thus, when
the blur matrix is of separable form, it becomes possible to form the estimate of the
image by sequentially applying the generalized inverse of the row blur matrix to
each row of the observed image array and then using the column generalized inverse
operator on each column of the array.
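
   Under the separability assumption of Eq. 12.3-10, the restoration of an observed image array G reduces to two small pseudoinversions, as in the following sketch; the row operator is applied to each row and the column operator to each column, which is equivalent to pre- and postmultiplication by the two generalized inverses.

import numpy as np

def separable_pinv_restore(G, B_C, B_R):
    # Eq. 12.3-11: B^- = B_C^- (x) B_R^-
    B_R_inv = np.linalg.pinv(B_R)
    B_C_inv = np.linalg.pinv(B_C)
    rows_restored = G @ B_R_inv.T             # restore each row with B_R^-
    return B_C_inv @ rows_restored            # restore each column with B_C^-
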
   Pseudoinverse restoration of large images can be accomplished in an approxi-
mate fashion by a block mode restoration process, similar to the block mode filter-
ing technique of Section 9.3, in which the blurred image is partitioned into small
blocks that are restored individually. It is wise to overlap the blocks and accept only
the pixel estimates in the center of each restored block because these pixels exhibit
the least uncertainty. Section 12.3.3 describes an efficient computational algorithm
for pseudoinverse restoration for space-invariant blur.
   Figure 12.3-1a shows a blurred image based on the model of Figure 11.5-3.
Figure 12.3-1b shows a restored image using generalized inverse image restoration.
In this example, the observation is noise free and the blur impulse response function
is Gaussian shaped, as defined in Eq. 11.5-8, with bR = bC = 1.2. Only the center
8 × 8 region of the 12 × 12 blurred picture is displayed, zoomed to an image size of
256 × 256 pixels. The restored image appears to be visually improved compared to
the blurred image, but the restoration is not identical to the original unblurred image
of Figure 11.5-3a. The figure also gives the percentage least-squares error (PLSE) as
defined in Appendix 3, between the blurred image and the original unblurred image,
and between the restored image and the original. The restored image has less error
than the blurred image.




FIGURE 12.3-1. Pseudoinverse image restoration for test image blurred with Gaussian-shaped impulse response. M = 8, N = 12, L = 5; bR = bC = 1.2; noise-free observation: (a) blurred, PLSE = 4.97%; (b) restored, PLSE = 1.41%.



12.3.2. Pseudoinverse: Image Blur Plus Additive Noise

In many imaging systems, an ideal image is subject to both blur and additive noise;
the resulting vector-space model takes the form


                                       g = Bf + n                                (12.3-12)


where g and n are P × 1 vectors of the observed image field and noise field, respec-
tively, f is a Q × 1 vector of ideal image points, and B is a P × Q blur matrix. The
vector n is composed of two additive components: samples of an additive external
noise process and elements of the vector difference ( g – Bf ) arising from modeling
errors in the formulation of B. As a result of the noise contribution, there may be no
vector solutions f̂ that satisfy Eq. 12.3-12. However, as indicated in Appendix 1, the generalized inverse B^− can be utilized to determine a least-squares error, minimum
norm estimate. In the absence of modeling error, the estimate

                              f̂ = B^− g = B^− B f + B^− n                                  (12.3-13)

differs from the ideal image because of the additive noise contribution B^− n. Also, for the underdetermined model, B^− B will not be an identity matrix. If B is an overdetermined rank Q matrix, as defined in Eq. 12.3-8a, then B^− B = I, and the resulting estimate is equal to the original image vector f plus a perturbation vector Δf = B^− n.
The perturbation error in the estimate can be measured as the ratio of the vector

norm of the perturbation to the vector norm of the estimate. It can be shown (24, p.
52) that the relative error is subject to the bound

                                ‖Δf‖ / ‖f‖ ≤ ‖B^−‖ ⋅ ‖B‖ ( ‖n‖ / ‖g‖ )                       (12.3-14)

The product ‖B^−‖ ⋅ ‖B‖, which is called the condition number C{B} of B, determines the relative error in the estimate in terms of the ratio of the vector norm of the noise to the vector norm of the observation. The condition number can be computed directly or found in terms of the ratio


                                       C{B} = W_1 / W_N                                     (12.3-15)


of the largest W1 to smallest WN singular values of B. The noise perturbation error
for the underdetermined matrix B is also governed by Eqs. 12.3-14 and 12.3-15 if
WN is defined to be the smallest nonzero singular value of B (25, p. 41). Obviously,
the larger the condition number of the blur matrix, the greater will be the sensitivity
to noise perturbations.
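
   The condition number of Eq. 12.3-15 follows directly from the singular values of the blur matrix, as in the sketch below; the relative tolerance used to decide which singular values are treated as nonzero is an assumption of the sketch.

import numpy as np

def blur_condition_number(B, tol=1e-12):
    # C{B} = W_1 / W_N, with W_N the smallest nonzero singular value
    w = np.linalg.svd(B, compute_uv=False)    # singular values, descending
    nonzero = w[w > tol * w[0]]
    return nonzero[0] / nonzero[-1]
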
   Figure 12.3-2 contains image restoration examples for a Gaussian-shaped blur
function for several values of the blur standard deviation and a noise variance of
10.0 on an amplitude scale of 0.0 to 255.0. As expected, observation noise degrades
the restoration. Also as expected, the restoration for a moderate degree of blur is
worse than the restoration for less blur. However, this trend does not continue; the
restoration for severe blur is actually better in a subjective sense than for moderate
blur. This seemingly anomalous behavior, which results from spatial truncation of
the point-spread function, can be explained in terms of the condition number of the
blur matrix. Figure 12.3-3 is a plot of the condition number of the blur matrix of the
previous examples as a function of the blur coefficient (21). For small amounts of
blur, the condition number is low. A maximum is attained for moderate blur, fol-
lowed by a decrease in the curve for increasing values of the blur coefficient. The
curve tends to stabilize as the blur coefficient approaches infinity. This curve pro-
vides an explanation for the previous experimental results. In the restoration opera-
tion, the blur impulse response is spatially truncated over a square region of 5 × 5
quadrature points. As the blur coefficient increases, for fixed M and N, the blur
impulse response becomes increasingly wider, and its tails become truncated to a
greater extent. In the limit, the nonzero elements in the blur matrix become constant
values, and the condition number assumes a constant level. For small values of the
blur coefficient, the truncation effect is negligible, and the condition number curve
follows an ascending path toward infinity with the asymptotic value obtained for a
smoothly represented blur impulse response. As the blur factor increases, the num-
ber of nonzero elements in the blur matrix increases, and the condition number
stabilizes to a constant value. In effect, a trade-off exists between numerical
errors caused by ill-conditioning and modeling accuracy. Although this conclusion


FIGURE 12.3-2. Pseudoinverse image restoration for test image blurred with Gaussian-shaped impulse response. M = 8, N = 12, L = 5; noisy observation, Var = 10.0 (blurred / restored pairs): (a) blurred, PLSE = 1.30% and (b) restored, PLSE = 0.21% for bR = bC = 0.6; (c) blurred, PLSE = 4.91% and (d) restored, PLSE = 2695.81% for bR = bC = 1.2; (e) blurred, PLSE = 7.99% and (f) restored, PLSE = 7.29% for bR = bC = 50.0.




                       FIGURE 12.3-3. Condition number curve.




is formulated on the basis of a particular degradation model, the inference seems to
be more general because the inverse of the integral operator that describes the blur is
unbounded. Therefore, the closer the discrete model follows the continuous model,
the greater the degree of ill-conditioning. A move in the opposite direction reduces
singularity but imposes modeling errors. This inevitable dilemma can only be bro-
ken with the intervention of correct a priori knowledge about the original image.


12.3.3. Pseudoinverse Computational Algorithms

Efficient computational algorithms have been developed by Pratt and Davarian (22)
for pseudoinverse image restoration for space-invariant blur. To simplify the expla-
nation of these algorithms, consideration will initially be limited to a one-dimen-
sional example.
   Let the N × 1 vector fT and the M × 1 vector g T be formed by selecting the center
portions of f and g, respectively. The truncated vectors are obtained by dropping L -
1 elements at each end of the appropriate vector. Figure 12.3-4a illustrates the rela-
tionships of all vectors for N = 9 original vector points, M = 7 observations and an
impulse response of length L = 3.
   The elements f T and g T are entries in the adjoint model


                                   q E = Cf E + n E                         (12.3-16a)




FIGURE 12.3-4. One-dimensional sampled continuous convolution and discrete convolution.



where the extended vectors q E , f E and n E are defined in correspondence with


                             ⎡ g ⎤       ⎡ f_T ⎤   ⎡ n_T ⎤
                             ⎢   ⎥ = C ⎢      ⎥ + ⎢      ⎥                          (12.3-16b)
                             ⎣ 0 ⎦       ⎣  0  ⎦   ⎣  0  ⎦


where g is an M × 1 vector, f_T and n_T are K × 1 vectors, and C is a J × J matrix. As
noted in Figure 12.3-4b, the vector q is identical to the image observation g over its
R = M – 2 ( L – 1 ) center elements. The outer elements of q can be approximated by


                                          q ≈ q̃ = Eg                                     (12.3-17)


where E, called an extraction weighting matrix, is defined as


                                        ⎡ a   0   0 ⎤
                                  E =   ⎢ 0   I   0 ⎥                            (12.3-18)
                                        ⎣ 0   0   b ⎦


where a and b are L × L submatrices, which perform a windowing function similar
to that described in Section 9.4.2 (22).
    Combining Eqs. 12.3-17 and 12.3-18, an estimate of fT can be obtained from

                                      f̂_E = C^{−1} q̃_E                                    (12.3-19)




FIGURE 12.3-5. Pseudoinverse image restoration for a small degree of horizontal blur, bR = 1.5: (a) original image vectors f; (b) truncated image vectors f_T; (c) observation vectors g; (d) windowed observation vectors q; (e) restoration without windowing, f̂_T; (f) restoration with windowing, f̂_T.


Equation 12.3-19 can be solved efficiently using Fourier domain convolution
techniques, as described in Section 9.3. Computation of the pseudoinverse by Fou-
                                                   2
rier processing requires on the order of J ( 1 + 4 log 2 J ) operations in two
                                                                2 2
dimensions; spatial domain computation requires about M N operations. As an
example, for M = 256 and L = 17, the computational savings are nearly 1750:1 (22).
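
   A one-dimensional sketch of the windowed Fourier domain pseudoinverse of Eqs. 12.3-17 to 12.3-19 is given below. The raised-cosine taper over the L − 1 boundary samples stands in for the submatrices a and b of the extraction weighting matrix E, and the small threshold guarding the spectral division is a numerical safeguard; both are assumptions of the sketch.

import numpy as np

def fourier_pseudoinverse_1d(g, h, J, L, eps=1e-8):
    # Windowed observation q~ = E g (Eq. 12.3-17): taper the outer samples.
    taper = np.ones(g.size)
    ramp = 0.5 * (1.0 - np.cos(np.pi * np.arange(L - 1) / (L - 1)))
    taper[:L - 1] = ramp
    taper[g.size - (L - 1):] = ramp[::-1]
    q = np.zeros(J)
    q[:g.size] = taper * g
    # Circulant inverse C^{-1} q~ evaluated in the Fourier domain (Eq. 12.3-19).
    H = np.fft.fft(h, n=J)
    H_safe = np.where(np.abs(H) > eps, H, eps)
    return np.real(np.fft.ifft(np.fft.fft(q) / H_safe))
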
   Figure 12.3-5 is a computer simulation example of the operation of the pseudoin-
verse image restoration algorithm for one-dimensional blur of an image. In the first
step of the simulation, the center K pixels of the original image are extracted to form
the set of truncated image vectors f T shown in Figure 12.3-5b. Next, the truncated
image vectors are subjected to a simulated blur with a Gaussian-shaped impulse
response with bR = 1.5 to produce the observation of Figure 12.3-5c. Figure 12.3-5d
shows the result of the extraction operation on the observation. Restoration results
without and with the extraction weighting operator E are presented in Figure
12.3-5e and f, respectively. These results graphically illustrate the importance of the




FIGURE 12.3-6. Pseudoinverse image restoration for moderate and high degrees of horizontal blur: (a) observation g and (b) restoration f̂_T for Gaussian blur, bR = 2.0; (c) observation g and (d) restoration f̂_T for uniform motion blur, L = 15.0.

extraction operation. Without weighting, errors at the observation boundary
completely destroy the estimate in the boundary region, but with weighting the
restoration is subjectively satisfying, and the restoration error is significantly
reduced. Figure 12.3-6 shows simulation results for the experiment of Figure 12.3-5
when the degree of blur is increased by setting bR = 2.0. The higher degree of blur
greatly increases the ill-conditioning of the blur matrix, and the residual error in
formation of the modified observation after weighting leads to the disappointing
estimate of Figure 12.3-6b. Figure 12.3-6c and d illustrate the restoration improve-
ment obtained with the pseudoinverse algorithm for horizontal image motion blur.
In this example, the blur impulse response is constant, and the corresponding blur
matrix is better conditioned than the blur matrix for Gaussian image blur.


12.4. SVD PSEUDOINVERSE SPATIAL IMAGE RESTORATION

In Appendix 1 it is shown that any matrix can be decomposed into a series of eigen-
matrices by the technique of singular value decomposition. For image restoration,
this concept has been extended (26–29) to the eigendecomposition of blur matrices
in the imaging model

                                            g = Bf + n                                    (12.4-1)

From Eq. A1.2-3, the blur matrix B may be expressed as

                                       B = U Λ^{1/2} V^T                               (12.4-2)

where the P × P matrix U and the Q × Q matrix V are unitary matrices composed of
the eigenvectors of BB^T and B^TB, respectively, and Λ is a P × Q matrix whose diagonal terms λ(i) contain the eigenvalues of BB^T and B^TB. As a consequence of the
orthogonality of U and V, it is possible to express the blur matrix in the series form
                                 B = Σ_{i=1}^{R} [λ(i)]^{1/2} u_i v_i^T                    (12.4-3)

where ui and v i are the ith columns of U and V, respectively, and R is the rank of
the matrix B.
   From Eq. 12.4-2, because U and V are unitary matrices, the generalized inverse
of B is

                        B^− = V Λ^{−1/2} U^T = Σ_{i=1}^{R} [λ(i)]^{−1/2} v_i u_i^T                   (12.4-4)

Figure 12.4-1 shows an example of the SVD decomposition of a blur matrix. The
generalized inverse estimate can then be expressed as




FIGURE 12.4-1. SVD decomposition of a blur matrix for bR = 2.0, M = 8, N = 16, L = 9: (a) blur matrix B; (b)–(i) outer products u_i v_i^T for i = 1, …, 8, with λ(1) = 0.871, λ(2) = 0.573, λ(3) = 0.285, λ(4) = 0.108, λ(5) = 0.034, λ(6) = 0.014, λ(7) = 0.011, λ(8) = 0.010.


                                     f̂ = B^− g = V Λ^{−1/2} U^T g                                     (12.4-5a)


or, equivalently,

                     f̂ = Σ_{i=1}^{R} [λ(i)]^{−1/2} v_i u_i^T g = Σ_{i=1}^{R} [λ(i)]^{−1/2} [u_i^T g] v_i                   (12.4-5b)

recognizing the fact that the inner product u_i^T g is a scalar. Equation 12.4-5 provides
the basis for sequential estimation; the kth estimate of f in a sequence of estimates is
equal to


                                 f̂_k = f̂_{k−1} + [λ(k)]^{−1/2} [u_k^T g] v_k                           (12.4-6)


One of the principal advantages of the sequential formulation is that problems of ill-
conditioning generally occur only for higher-order singular values. Thus, it is possible to terminate the expansion interactively before numerical problems occur.
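
   A minimal sketch of the sequential estimation procedure of Eq. 12.4-6 follows; it returns every partial estimate so that the expansion can be inspected and truncated when ill-conditioning begins to dominate. The matrices B and observation g are assumed given, and NumPy's SVD supplies the u_i, v_i and the singular values [λ(i)]^{1/2}.

import numpy as np

def sequential_svd_restore(B, g, k_max=None):
    U, s, Vt = np.linalg.svd(B, full_matrices=False)   # s[k] = lambda(k)^(1/2)
    if k_max is None:
        k_max = int(np.count_nonzero(s > 1e-12 * s[0]))
    f_hat = np.zeros(B.shape[1])
    estimates = []
    for k in range(k_max):
        # Eq. 12.4-6: f_k = f_{k-1} + lambda(k)^(-1/2) (u_k^T g) v_k
        f_hat = f_hat + (U[:, k] @ g) / s[k] * Vt[k, :]
        estimates.append(f_hat.copy())
    return estimates
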
   Figure 12.4-2 shows an example of sequential SVD restoration for the underde-
termined model example of Figure 11.5-3 with a poorly conditioned Gaussian blur
matrix. A one-step pseudoinverse would have resulted in the final image estimate
that is totally overwhelmed by numerical errors. The sixth step, which is the best
subjective restoration, offers a considerable improvement over the blurred original,
but the lowest least-squares error occurs for three singular values.
   The major limitation of the SVD image restoration method formulation in Eqs.
12.4-5 and 12.4-6 is computational. The eigenvectors u_i and v_i must first be determined for the matrices BB^T and B^TB. Then the vector computations of Eq. 12.4-5 or
12.4-6 must be performed. Even if B is direct-product separable, permitting separa-
ble row and column SVD pseudoinversion, the computational task is staggering in
the general case.
   The pseudoinverse computational algorithm described in the preceding section
can be adapted for SVD image restoration in the special case of space-invariant blur
(23). From the adjoint model of Eq. 12.3-16 given by


                                              q E = Cf E + n E                                         (12.4-7)


the circulant matrix C can be expanded in SVD form as

                                             C = X Δ^{1/2} Y^{*T}                                      (12.4-8)


where X and Y are unitary matrices defined by




FIGURE 12.4-2. SVD restoration for test image blurred with a Gaussian-shaped impulse response. bR = bC = 1.2, M = 8, N = 12, L = 5; noisy observation, Var = 10.0: (a) 8 singular values, PLSE = 2695.81%; (b) 7 singular values, PLSE = 148.93%; (c) 6 singular values, PLSE = 6.88%; (d) 5 singular values, PLSE = 3.31%; (e) 4 singular values, PLSE = 3.06%; (f) 3 singular values, PLSE = 3.05%; (g) 2 singular values, PLSE = 9.52%; (h) 1 singular value, PLSE = 9.52%.

                                                 X^T [C C^T] X^* = Δ                                   (12.4-9a)

                                                 Y^T [C^T C] Y^* = Δ                                   (12.4-9b)


Because C is circulant, CC^T is also circulant. Therefore X and Y must be equivalent to the Fourier transform matrix A or A^{−1} because the Fourier matrix produces a diagonalization of a circulant matrix. For purposes of standardization, let X = Y = A^{−1}. As a consequence, the eigenvectors x_i = y_i, which are rows of X and Y, are actually the complex exponential basis functions


                                       x_k^*(j) = exp{ (2πi / J) (k − 1)(j − 1) }                      (12.4-10)


of a Fourier transform for 1 ≤ j, k ≤ J . Furthermore,

                                                         Δ = C C^{*T}                                  (12.4-11)


where C is the Fourier domain circular area convolution matrix. Then, in correspon-
dence with Eq. 12.4-5


                                                f̂_E = A^{−1} Δ^{−1/2} A q̃_E                            (12.4-12)


where q̃_E is the modified blurred image observation of Eqs. 12.3-19 and 12.3-20.
Equation 12.4-12 should be recognized as being a Fourier domain pseudoinverse
estimate. Sequential SVD restoration, analogous to the procedure of Eq. 12.4-6, can
be obtained by replacing the SVD pseudoinverse matrix Δ^{−1/2} of Eq. 12.4-12 by the
operator


            Δ_T^{−1/2} = diag{ [Δ_T(1)]^{−1/2}, [Δ_T(2)]^{−1/2}, …, [Δ_T(T)]^{−1/2}, 0, …, 0 }          (12.4-13)




FIGURE 12.4-3. Sequential SVD pseudoinverse image restoration for horizontal Gaussian blur, bR = 3.0, L = 23, J = 256: (a) blurred observation; (b) restoration, T = 58; (c) restoration, T = 60.



Complete truncation of the high-frequency terms to avoid ill-conditioning effects
may not be necessary in all situations. As an alternative to truncation, the diagonal
zero elements can be replaced by [Δ_T(T)]^{−1/2} or perhaps by some sequence that
declines in value as a function of frequency. This concept is actually analogous to
the truncated inverse filtering technique defined by Eq. 12.2-11 for continuous
image fields.
   Figure 12.4-3 shows an example of SVD pseudoinverse image restoration for
one-dimensional Gaussian image blur with bR = 3.0. It should be noted that the res-
toration attempt with the standard pseudoinverse shown in Figure 12.3-6b was sub-
ject to severe ill-conditioning errors at a blur spread of bR = 2.0.

12.5. STATISTICAL ESTIMATION SPATIAL IMAGE RESTORATION

A fundamental limitation of pseudoinverse restoration techniques is that observation
noise may lead to severe numerical instability and render the image estimate unus-
able. This problem can be alleviated in some instances by statistical restoration
techniques that incorporate some a priori statistical knowledge of the observation
noise (21).


12.5.1. Regression Spatial Image Restoration

Consider the vector-space model


                                       g = Bf + n                                (12.5-1)


for a blurred image plus additive noise in which B is a P × Q blur matrix and the
noise is assumed to be zero mean with known covariance matrix Kn. The regression
method seeks to form an estimate

                                        f̂ = Wg                                   (12.5-2)


where W is a restoration matrix that minimizes the weighted error measure

                             Θ{f̂} = [g − B f̂]^T K_n^{−1} [g − B f̂]                  (12.5-3)


Minimization of the restoration error can be accomplished by the classical method
of setting the partial derivative of Θ{f̂} with respect to f̂ to zero. In the underdeter-
mined case, for which P < Q , it can be shown (30) that the minimum norm estimate
regression operator is


                                   W = [K^{−1} B]^− K^{−1}                                    (12.5-4)


where K is a matrix obtained from the spectral factorization

                                       K_n = K K^T                                  (12.5-5)

of the noise covariance matrix K_n. For white noise, K = σ_n I, and the regression
operator assumes the form of a rank P generalized inverse for an underdetermined
system as given by Eq. 12.3-8b.


12.5.2. Wiener Estimation Spatial Image Restoration

With the regression technique of spatial image restoration, the noise field is modeled
as a sample of a two-dimensional random process with a known mean and covari-
ance function. Wiener estimation techniques assume, in addition, that the ideal
image is also a sample of a two-dimensional random process with known first and
second moments (21,22,31).


Wiener Estimation: General Case. Consider the general discrete model of Figure
12.5-1 in which a Q × 1 image vector f is subject to some unspecified type of point
and spatial degradation resulting in the P × 1 vector of observations g. An estimate
of f is formed by the linear operation

                                      f̂ = Wg + b                                        (12.5-6)


where W is a Q × P restoration matrix and b is a Q × 1 bias vector. The objective of
Wiener estimation is to choose W and b to minimize the mean-square restoration
error, which may be defined as


                                 E = E{ [f − f̂]^T [f − f̂] }                          (12.5-7a)


or


                              E = tr{ E{ [f − f̂] [f − f̂]^T } }                       (12.5-7b)


Equation 12.5-7a expresses the error in inner-product form as the sum of the squares of the elements of the error vector [f − f̂], while Eq. 12.5-7b forms the covariance matrix of the error, and then sums together its variance terms (diagonal elements) by the trace operation. Minimization of Eq. 12.5-7 in either of its forms can be accomplished by differentiation of E with respect to f̂. An alternative approach,




            FIGURE 12.5-1. Wiener estimation for spatial image restoration.

which is of quite general utility, is to employ the orthogonality principle (32, p. 219)
to determine the values of W and b that minimize the mean-square error. In the con-
text of image restoration, the orthogonality principle specifies two necessary and
sufficient conditions for the minimization of the mean-square restoration error:

   1. The expected value of the image estimate must equal the expected value of
      the image

                                     E{f̂} = E{f}                                      (12.5-8)

   2. The restoration error must be orthogonal to the observation about its mean

                              E{ [f − f̂] [g − E{g}]^T } = 0                           (12.5-9)

From condition 1, one obtains

                                 b = E { f } – WE { g }                       (12.5-10)

and from condition 2

                           E{ [Wg + b − f] [g − E{g}]^T } = 0                         (12.5-11)


Upon substitution for the bias vector b from Eq. 12.5-10 and simplification, Eq.
12.5-11 yields

                                   W = K_fg [K_gg]^{−1}                               (12.5-12)


where K gg is the P × P covariance matrix of the observation vector (assumed nons-
ingular) and K fg is the Q × P cross-covariance matrix between the image and obser-
vation vectors. Thus, the optimal bias vector b and restoration matrix W may be
directly determined in terms of the first and second joint moments of the ideal image
and observation vectors. It should be noted that these solutions apply for nonlinear
and space-variant degradations. Subsequent sections describe applications of
Wiener estimation to specific restoration models.
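
   A compact sketch of the general estimator follows; the joint moments are assumed to be supplied, and the optional diagonal load on K_gg is only a numerical safeguard added here.

import numpy as np

def wiener_estimator(K_fg, K_gg, mean_f, mean_g, load=0.0):
    # W = K_fg K_gg^{-1} (Eq. 12.5-12) and b = E{f} - W E{g} (Eq. 12.5-10)
    W = K_fg @ np.linalg.inv(K_gg + load * np.eye(K_gg.shape[0]))
    b = mean_f - W @ mean_g
    return W, b

def wiener_estimate(g, W, b):
    # f_hat = W g + b (Eq. 12.5-6)
    return W @ g + b
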


Wiener Estimation: Image Blur with Additive Noise. For the discrete model for a
blurred image subject to additive noise given by


                                      g = Bf + n                              (12.5-13)


the Wiener estimator is composed of a bias term


                 b = E{f} − WE{g} = E{f} − WBE{f} − WE{n}                             (12.5-14)


and a matrix operator

                     W = K_fg [K_gg]^{−1} = K_f B^T [B K_f B^T + K_n]^{−1}            (12.5-15)

   If the ideal image field is assumed uncorrelated, K_f = σ_f^2 I, where σ_f^2 represents the image energy. Equation 12.5-15 then reduces to

                                     2 T           2   T           –1
                             W = σ f B [ σ f BB + K n ]                              (12.5-16)

For a white-noise process with energy $\sigma_n^2$, the Wiener filter matrix becomes

                        $W = B^T \Bigl[ B B^T + \frac{\sigma_n^2}{\sigma_f^2}\, I \Bigr]^{-1}$                         (12.5-17)

As the ratio of image energy to noise energy ($\sigma_f^2 / \sigma_n^2$) approaches infinity, the
Wiener estimator of Eq. 12.5-17 becomes equivalent to the generalized inverse
estimator.
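
The following sketch, assuming the zero-mean case so that the bias vector vanishes, builds the Wiener filter matrix of Eq. 12.5-17 for a small hypothetical blur matrix B; the numerical values are illustrative only.

```python
import numpy as np

def wiener_blur_white_noise(B, sigma_f, sigma_n):
    """Wiener restoration matrix of Eq. 12.5-17 for g = B f + n with an
    uncorrelated image (K_f = sigma_f^2 I) and white noise (K_n = sigma_n^2 I)."""
    P = B.shape[0]
    return B.T @ np.linalg.inv(B @ B.T + (sigma_n**2 / sigma_f**2) * np.eye(P))

# Example with a small hypothetical blur matrix (values are illustrative only):
B = np.array([[0.6, 0.2, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.2, 0.6]])
W = wiener_blur_white_noise(B, sigma_f=1.0, sigma_n=0.1)
f_true = np.array([1.0, 2.0, 3.0])
g = B @ f_true + 0.1 * np.random.randn(3)   # simulated noisy blurred observation
f_hat = W @ g                               # Wiener estimate (zero-mean assumption, b = 0)
```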
   Figure 12.5-2 shows restoration examples for the model of Figure 11.5-3 for a
Gaussian-shaped blur function. Wiener restorations of large size images are given in
Figure 12.5-3 using a fast computational algorithm developed by Pratt and Davarian
(22). In the example of Figure 12.5-3a illustrating horizontal image motion blur, the
impulse response is of rectangular shape of length L = 11. The center pixels have
been restored and replaced within the context of the blurred image to show the
visual restoration improvement. The noise level and blur impulse response of the
electron microscope original image of Figure 12.5-3c were estimated directly from
the photographic transparency using techniques to be described in Section 12.7. The
parameters were then utilized to restore the center pixel region, which was then
replaced in the context of the blurred original.


12.6. CONSTRAINED IMAGE RESTORATION

The previously described image restoration techniques have treated images as arrays
of numbers. They have not considered that a restored natural image should be sub-
ject to physical constraints. A restored natural image should be spatially smooth and
strictly positive in amplitude.


FIGURE 12.5-2. Wiener estimation for test image blurred with Gaussian-shaped impulse
response; M = 8, N = 12, L = 5. (a) Blurred, bR = bC = 1.2, Var = 10.0, r = 0.75, SNR = 200.0,
PLSE = 4.91%; (b) restored, PLSE = 3.71%. (c) Blurred, bR = bC = 50.0, Var = 10.0, r = 0.75,
SNR = 200.0, PLSE = 7.99%; (d) restored, PLSE = 4.20%. (e) Blurred, bR = bC = 50.0,
Var = 100.0, r = 0.75, SNR = 60.0, PLSE = 7.93%; (f) restored, PLSE = 4.74%.




FIGURE 12.5-3. Wiener image restoration: (a, c) observations; (b, d) restorations.



12.6.1. Smoothing Methods

Smoothing and regularization techniques (33–35) have been used in an attempt to
overcome the ill-conditioning problems associated with image restoration. Basi-
cally, these methods attempt to force smoothness on the solution of a least-squares
error problem.
   Two formulations of these methods are considered (21). The first formulation
consists of finding the minimum of $\hat{f}^T S \hat{f}$ subject to the equality constraint

                             $[g - B\hat{f}\,]^T M [g - B\hat{f}\,] = e$                  (12.6-1)

where S is a smoothing matrix, M is an error-weighting matrix, and e denotes a
residual scalar estimation error. The error-weighting matrix is often chosen to be

equal to the inverse of the observation noise covariance matrix, $M = K_n^{-1}$. The
Lagrangian estimate satisfying Eq. 12.6-1 is (19)

                   $\hat{f} = S^{-1} B^T \Bigl[ B S^{-1} B^T + \frac{1}{\gamma}\, M^{-1} \Bigr]^{-1} g$                 (12.6-2)

In Eq. 12.6-2, the Lagrangian factor γ is chosen so that Eq. 12.6-1 is satisfied; that
is, the compromise between residual error and smoothness of the estimator is
deemed satisfactory.
    Now consider the second formulation, which involves solving an equality-con-
strained least-squares problem by minimizing the left-hand side of Eq. 12.6-1 such
that

                                          $\hat{f}^T S \hat{f} = d$                               (12.6-3)


where the scalar d represents a fixed degree of smoothing. In this case, the optimal
solution for an underdetermined nonsingular system is found to be


                       $\hat{f} = S^{-1} B^T [ B S^{-1} B^T + \gamma M^{-1} ]^{-1} g$             (12.6-4)


   A comparison of Eqs. 12.6-2 and 12.6-4 reveals that the two inverse problems are
solved by the same expression, the only difference being the Lagrange multipliers,
which are inverses of one another. The smoothing estimates of Eq. 12.6-4 are
closely related to the regression and Wiener estimates derived previously. If $\gamma = 0$,
$S = I$, and $M = K_n^{-1}$, where $K_n$ is the observation noise covariance matrix, then the
smoothing and regression estimates become equivalent. Substitution of $\gamma = 1$,
$S = K_f^{-1}$, and $M = K_n^{-1}$, where $K_f$ is the image covariance matrix, results in
equivalence to the Wiener estimator. These equivalences account for the relative
smoothness of the estimates obtained with regression and Wiener restoration as
compared to pseudoinverse restoration. A problem that occurs with the smoothing
and regularizing techniques is that even though the variance of a solution can be
calculated, its bias can only be determined as a function of f.
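
A minimal sketch of the smoothing estimate of Eq. 12.6-4 follows; the function name and the particular choices of S, M⁻¹, and γ passed to it are hypothetical.

```python
import numpy as np

def smoothing_estimate(B, g, S, M_inv, gamma):
    """Constrained least-squares (smoothing) estimate of Eq. 12.6-4:
    f_hat = S^{-1} B^T [ B S^{-1} B^T + gamma * M^{-1} ]^{-1} g,
    where S is the smoothing matrix, M_inv is the inverse of the error-weighting
    matrix (often the noise covariance K_n), and gamma is the Lagrange multiplier."""
    S_inv = np.linalg.inv(S)
    A = B @ S_inv @ B.T + gamma * M_inv
    return S_inv @ B.T @ np.linalg.solve(A, g)

# Per the text: with gamma = 0, S = I, M_inv = K_n the result matches the regression
# estimate; with gamma = 1, S = K_f^{-1}, M_inv = K_n it matches the Wiener estimate.
```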


12.6.2. Constrained Restoration Techniques

Equality and inequality constraints have been suggested (21) as a means of improving
restoration performance for ill-conditioned restoration models. Examples of constraints
include the specification of individual pixel values, of ratios of the values of some
pixels, of the sum of part or all of the pixels, or of amplitude limits of pixel values.
    Quite often a priori information is available in the form of inequality constraints
involving pixel values. The physics of the image formation process requires that


pixel values be non-negative quantities. Furthermore, an upper bound on these val-
ues is often known because images are digitized with a finite number of bits
assigned to each pixel. Amplitude constraints are also inherently introduced by the
need to “fit” a restored image to the dynamic range of a display. One approach is to
rescale the restored image linearly to the display range. This procedure is usually
undesirable because only a few out-of-range pixels will cause the contrast of all
other pixels to be reduced. Also, the average luminance of a restored image is usu-
ally affected by rescaling. Another common display method involves clipping of all
pixel values exceeding the display limits. Although this procedure is subjectively
preferable to rescaling, bias errors may be introduced.
   If a priori pixel amplitude limits are established for image restoration, it is best to
incorporate these limits directly in the restoration process rather than arbitrarily
invoke the limits on the restored image. Several techniques of inequality constrained
restoration have been proposed.
   Consider the general case of constrained restoration in which the vector estimate
$\hat{f}$ is subject to the inequality constraint

                                        $l \le \hat{f} \le u$                              (12.6-5)

where u and l are vectors containing upper and lower limits of the pixel estimate,
respectively. For least-squares restoration, the quadratic error must be minimized
subject to the constraint of Eq. 12.6-5. Under this framework, restoration reduces to
the solution of a quadratic programming problem (21). In the case of an absolute
error measure, the restoration task can be formulated as a linear programming prob-
lem (36,37). The a priori knowledge involving the inequality constraints may sub-
stantially reduce pixel uncertainty in the restored image; however, as in the case of
equality constraints, an unknown amount of bias may be introduced.
   Figure 12.6-1 is an example of image restoration for the Gaussian blur model of
Chapter 11 by pseudoinverse restoration, without and with an inequality constraint (21),
in which the scaled luminance of each pixel of the restored image has been limited to
the range of 0 to 255. The improvement obtained by the constraint is substantial.
Unfortunately, the quadratic programming solution employed in this example
requires a considerable amount of computation. A brute-force extension of the pro-
cedure does not appear feasible.
   Several other methods have been proposed for constrained image restoration.
One simple approach, based on the concept of homomorphic filtering, is to take the
logarithm of each observation. Exponentiation of the corresponding estimates auto-
matically yields a strictly positive result. Burg (38), Edward and Fitelson (39), and
Frieden (6,40,41) have developed restoration methods providing a positivity con-
straint, which are based on a maximum entropy principle originally employed to
estimate a probability density from observation of its moments. Huang et al. (42)
have introduced a projection method of constrained image restoration in which the
set of equations g = Bf is solved iteratively by numerical means. At each stage of
the solution the intermediate estimates are amplitude clipped to conform to ampli-
tude limits.
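
The sketch below illustrates the flavor of such projection methods: a Landweber-style gradient step toward satisfying g = Bf, followed by amplitude clipping at each iteration. It is an assumed variant for illustration, not necessarily the exact scheme of Huang et al. (42).

```python
import numpy as np

def projected_iterative_restore(B, g, lower=0.0, upper=255.0,
                                beta=None, n_iter=200):
    """Iteratively solve g = B f with amplitude clipping at each stage.

    A gradient (Landweber-style) update moves the estimate toward satisfying
    g = B f; the intermediate estimate is then clipped to [lower, upper]."""
    if beta is None:
        # Step size below 2 / ||B||^2 keeps the unconstrained iteration convergent.
        beta = 1.0 / (np.linalg.norm(B, 2) ** 2)
    f = np.clip(B.T @ g, lower, upper)              # initial estimate
    for _ in range(n_iter):
        f = f + beta * (B.T @ (g - B @ f))          # move toward g = B f
        f = np.clip(f, lower, upper)                # enforce amplitude limits
    return f
```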




FIGURE 12.6-1. Comparison of unconstrained and inequality constrained image restoration
for a test image blurred with Gaussian-shaped impulse response. bR = bC = 1.2, M = 12, N = 8,
L = 5; noisy observation, Var = 10.0. (a) Blurred observation; (b) unconstrained restoration;
(c) constrained restoration.




12.7. BLIND IMAGE RESTORATION

Most image restoration techniques are based on some a priori knowledge of the
image degradation; the point luminance and spatial impulse responses of the system
degradation are assumed known. In many applications, such information is simply
not available. The degradation may be difficult to measure or may be time varying
in an unpredictable manner. In such cases, information about the degradation must
be extracted from the observed image either explicitly or implicitly. This task is
called blind image restoration (5,19,43). Discussion here is limited to blind image
restoration methods for blurred images subject to additive noise.


   There are two major approaches to blind image restoration: direct measurement
and indirect estimation. With the former approach, the blur impulse response and
noise level are first estimated from an image to be restored, and then these parame-
ters are utilized in the restoration. Indirect estimation techniques employ temporal or
spatial averaging to either obtain a restoration or to determine key elements of a res-
toration algorithm.


12.7.1. Direct Measurement Methods

Direct measurement blind restoration of a blurred noisy image usually requires mea-
surement of the blur impulse response and noise power spectrum or covariance
function of the observed image. The blur impulse response is usually measured by
isolating the image of a suspected object within a picture. By definition, the blur
impulse response is the image of a point-source object. Therefore, a point source in
the observed scene yields a direct indication of the impulse response. The image of a
suspected sharp edge can also be utilized to derive the blur impulse response. Aver-
aging several parallel line scans normal to the edge will significantly reduce noise
effects. The noise covariance function of an observed image can be estimated by
measuring the image covariance over a region of relatively constant background
luminance. References 5, 44, and 45 provide further details on direct measurement
methods.
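
As a small sketch of the noise-measurement step described above, the fragment below estimates the noise variance from a user-selected patch of relatively constant background luminance; the patch coordinates are hypothetical and must be chosen by inspection of the observed image.

```python
import numpy as np

def noise_variance_from_flat_region(image, row_slice, col_slice):
    """Estimate the observation noise variance from a patch of relatively
    constant background luminance (direct measurement approach)."""
    patch = np.asarray(image, dtype=float)[row_slice, col_slice]
    return patch.var(ddof=1)   # sample variance about the patch mean

# Example with hypothetical coordinates of a flat background area:
# var_n = noise_variance_from_flat_region(img, slice(10, 60), slice(400, 460))
```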


12.7.2. Indirect Estimation Methods

Temporal redundancy of scenes in real-time television systems can be exploited to
perform blind restoration indirectly. As an illustration, consider the ith observed
image frame


                                  Gi ( x, y ) = FI ( x, y ) + N i ( x, y )                  (12.7-1)


of a television system in which F I ( x, y ) is an ideal image and N i ( x, y ) is an additive
noise field independent of the ideal image. If the ideal image remains constant over
a sequence of M frames, then temporal summation of the observed images yields the
relation

                $F_I(x, y) = \frac{1}{M} \sum_{i=1}^{M} G_i(x, y) - \frac{1}{M} \sum_{i=1}^{M} N_i(x, y)$          (12.7-2)


The value of the noise term on the right will tend toward its ensemble average
E { N ( x, y ) } for M large. In the common case of zero-mean white Gaussian noise, the




FIGURE 12.7-1. Temporal averaging of a sequence of eight noisy images. SNR = 10.0.
(a) Noise-free original; (b) noisy image 1; (c) noisy image 2; (d) temporal average.


ensemble average is zero at all (x, y), and it is reasonable to form the estimate as

                            $\hat{F}_I(x, y) = \frac{1}{M} \sum_{i=1}^{M} G_i(x, y)$                      (12.7-3)


   Figure 12.7-1 presents a computer-simulated example of temporal averaging of a
sequence of noisy images. In this example the original image is unchanged in the
sequence. Each image observed is subjected to a different additive random noise
pattern.
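
A minimal sketch of the temporal averaging estimate of Eq. 12.7-3, assuming the frames are supplied as equally sized arrays:

```python
import numpy as np

def temporal_average(frames):
    """Blind noise suppression by temporal averaging (Eq. 12.7-3).

    frames : sequence of M equally sized arrays G_i(x, y) of a static scene
    corrupted by independent zero-mean noise. Returns (1/M) * sum_i G_i(x, y)."""
    stack = np.stack([np.asarray(f, dtype=float) for f in frames], axis=0)
    return stack.mean(axis=0)
```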
   The concept of temporal averaging is also useful for image deblurring. Consider
an imaging system in which sequential frames contain a relatively stationary object
degraded by a different linear, shift-invariant impulse response H i ( x, y ) over each


frame. This type of imaging would be encountered, for example, when photograph-
ing distant objects through a turbulent atmosphere if the object does not move
significantly between frames. By taking a short exposure at each frame, the atmo-
spheric turbulence is “frozen” in space at each frame interval. For this type of
object, the degraded image at the ith frame interval is given by


                          $G_i(x, y) = F_I(x, y) \circledast H_i(x, y)$                              (12.7-4)


   for i = 1, 2,..., M. The Fourier spectra of the degraded images are then


                                  G i ( ω x, ω y ) = F I ( ω x, ω y )H i ( ω x, ω y )                              (12.7-5)


On taking the logarithm of the degraded image spectra


                   ln { G i ( ω x, ω y ) } = ln { F I ( ω x, ω y ) } + ln { H i ( ω x, ω y ) }                     (12.7-6)


the spectra of the ideal image and the degradation transfer function are found to sep-
arate additively. It is now possible to apply any of the common methods of statistical
estimation of a signal in the presence of additive noise. If the degradation impulse
responses are uncorrelated between frames, it is worthwhile to form the sum

     $\sum_{i=1}^{M} \ln\{ G_i(\omega_x, \omega_y) \} = M \ln\{ F_I(\omega_x, \omega_y) \} + \sum_{i=1}^{M} \ln\{ H_i(\omega_x, \omega_y) \}$   (12.7-7)

because for large M the latter summation approaches the constant value

                      $H_M(\omega_x, \omega_y) = \lim_{M \to \infty} \Bigl\{ \sum_{i=1}^{M} \ln\{ H_i(\omega_x, \omega_y) \} \Bigr\}$          (12.7-8)

The term H M ( ω x, ω y ) may be viewed as the average logarithm transfer function of
the atmospheric turbulence. An image estimate can be expressed as

         $\hat{F}_I(\omega_x, \omega_y) = \exp\Bigl\{ -\frac{H_M(\omega_x, \omega_y)}{M} \Bigr\} \prod_{i=1}^{M} [\, G_i(\omega_x, \omega_y) \,]^{1/M}$       (12.7-9)

An inverse Fourier transform then yields the spatial domain estimate. In any practi-
cal imaging system, Eq. 12.7-4 must be modified by the addition of a noise compo-
nent Ni(x, y). This noise component unfortunately invalidates the separation step of
Eq. 12.7-6, and therefore destroys the remainder of the derivation. One possible
ad hoc solution to this problem would be to perform noise smoothing or filtering on

each observed image field and then utilize the resulting estimates as assumed noise-
less observations in Eq. 12.7-9. Alternatively, the blind restoration technique of
Stockham et al. (43) developed for nonstationary speech signals may be adapted to
the multiple-frame image restoration problem.
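
The fragment below sketches the multiple-frame estimate of Eq. 12.7-9 under the noise-free model of Eq. 12.7-4; the accumulated log transfer function H_M is assumed to be supplied from prior knowledge or calibration rather than estimated here, and the function name is hypothetical.

```python
import numpy as np

def multiframe_log_spectrum_restore(frames, H_M):
    """Sketch of the multiple-frame deblurring of Eq. 12.7-9.

    frames : list of M degraded images G_i(x, y) of a static object, each
             blurred by a different transfer function (noise-free model).
    H_M    : array holding the accumulated log transfer function of Eq. 12.7-8,
             assumed known (same shape as the FFT of each frame).
    Returns the spatial-domain image estimate via an inverse FFT."""
    M = len(frames)
    spectra = [np.fft.fft2(np.asarray(f, dtype=float)) for f in frames]
    eps = 1e-12                                   # avoid log(0)
    # Geometric mean of the degraded spectra, prod_i G_i^(1/M), computed as the
    # exponential of the mean log spectrum for numerical stability.
    mean_log_G = sum(np.log(s + eps) for s in spectra) / M
    F_hat_spectrum = np.exp(-H_M / M) * np.exp(mean_log_G)
    return np.real(np.fft.ifft2(F_hat_spectrum))
```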


REFERENCES

 1. D. A. O’Handley and W. B. Green, “Recent Developments in Digital Image Processing
    at the Image Processing Laboratory at the Jet Propulsion Laboratory,” Proc. IEEE, 60, 7,
    July 1972, 821–828.
 2. M. M. Sondhi, “Image Restoration: The Removal of Spatially Invariant Degradations,”
    Proc. IEEE, 60, 7, July 1972, 842–853.
 3. H. C. Andrews, “Digital Image Restoration: A Survey,” IEEE Computer, 7, 5, May
    1974, 36–45.
 4. B. R. Hunt, “Digital Image Processing,” Proc. IEEE, 63, 4, April 1975, 693–708.
 5. H. C. Andrews and B. R. Hunt, Digital Image Restoration, Prentice Hall, Englewood
    Cliffs, NJ, 1977.
 6. B. R. Frieden, “Image Enhancement and Restoration,” in Picture Processing and Digital
    Filtering, T. S. Huang, Ed., Springer-Verlag, New York, 1975.
 7. T. G. Stockham, Jr., “A–D and D–A Converters: Their Effect on Digital Audio Fidelity,”
    in Digital Signal Processing, L. R. Rabiner and C. M. Rader, Eds., IEEE Press, New
    York, 1972, 484–496.
 8. A. Marechal, P. Croce, and K. Dietzel, “Amelioration du contrast des details des images
    photographiques par filtrage des fréquencies spatiales,” Optica Acta, 5, 1958, 256–262.
 9. J. Tsujiuchi, “Correction of Optical Images by Compensation of Aberrations and by Spa-
    tial Frequency Filtering,” in Progress in Optics, Vol. 2, E. Wolf, Ed., Wiley, New York,
    1963, 131–180.
10. J. L. Harris, Sr., “Image Evaluation and Restoration,” J. Optical Society of America, 56,
    5, May 1966, 569–574.
11. B. L. McGlamery, “Restoration of Turbulence-Degraded Images,” J. Optical Society of
    America, 57, 3, March 1967, 293–297.
12. P. F. Mueller and G. O. Reynolds, “Image Restoration by Removal of Random Media
    Degradations,” J. Optical Society of America, 57, 11, November 1967, 1338–1344.
13. C. W. Helstrom, “Image Restoration by the Method of Least Squares,” J. Optical Soci-
    ety of America, 57, 3, March 1967, 297–303.
14. J. L. Harris, Sr., “Potential and Limitations of Techniques for Processing Linear Motion-
    Degraded Imagery,” in Evaluation of Motion Degraded Images, US Government Print-
    ing Office, Washington DC, 1968, 131–138.
15. J. L. Horner, “Optical Spatial Filtering with the Least-Mean-Square-Error Filter,” J.
    Optical Society of America, 51, 5, May 1969, 553–558.
16. J. L. Horner, “Optical Restoration of Images Blurred by Atmospheric Turbulence Using
    Optimum Filter Theory,” Applied Optics, 9, 1, January 1970, 167–171.
17. B. L. Lewis and D. J. Sakrison, “Computer Enhancement of Scanning Electron Micro-
    graphs,” IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 267–278.


18. D. Slepian, “Restoration of Photographs Blurred by Image Motion,” Bell System Techni-
    cal J., XLVI, 10, December 1967, 2353–2362.
19. E. R. Cole, “The Removal of Unknown Image Blurs by Homomorphic Filtering,” Ph.D.
    dissertation, Department of Electrical Engineering, University of Utah, Salt Lake City,
    UT, June 1973.
20. B. R. Hunt, “The Application of Constrained Least Squares Estimation to Image Resto-
    ration by Digital Computer,” IEEE Trans. Computers, C-23, 9, September 1973, 805–
    812.
21. N. D. A. Mascarenhas and W. K. Pratt, “Digital Image Restoration Under a Regression
    Model,” IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 252–266.
22. W. K. Pratt and F. Davarian, “Fast Computational Techniques for Pseudoinverse and
    Wiener Image Restoration,” IEEE Trans. Computers, C-26, 6, June 1977, 571–580.
23. W. K. Pratt, “Pseudoinverse Image Restoration Computational Algorithms,” in Optical
    Information Processing Vol. 2, G. W. Stroke, Y. Nesterikhin, and E. S. Barrekette, Eds.,
    Plenum Press, New York, 1977.
24. B. W. Rust and W. R. Burrus, Mathematical Programming and the Numerical Solution
    of Linear Equations, American Elsevier, New York, 1972.
25. A. Albert, Regression and the Moore–Penrose Pseudoinverse, Academic Press, New
    York, 1972.
26. H. C. Andrews and C. L. Patterson, “Outer Product Expansions and Their Uses in Digi-
    tal Image Processing,” American Mathematical Monthly, 82, 1, January 1975, 1–13.
27. H. C. Andrews and C. L. Patterson, “Outer Product Expansions and Their Uses in Digi-
    tal Image Processing,” IEEE Trans. Computers, C-25, 2, February 1976, 140–148.
28. T. S. Huang and P. M. Narendra, “Image Restoration by Singular Value Decomposition,”
    Applied Optics, 14, 9, September 1975, 2213–2216.
29. H. C. Andrews and C. L. Patterson, “Singular Value Decompositions and Digital Image
    Processing,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-24, 1, Febru-
    ary 1976, 26–53.
30. T. O. Lewis and P. L. Odell, Estimation in Linear Models, Prentice Hall, Englewood
    Cliffs, NJ, 1971.
31. W. K. Pratt, “Generalized Wiener Filter Computation Techniques,” IEEE Trans. Com-
    puters, C-21, 7, July 1972, 636–641.
32. A. Papoulis, Probability Random Variables and Stochastic Processes, 3rd Ed., McGraw-
    Hill, New York, 1991.
33. S. Twomey, “On the Numerical Solution of Fredholm Integral Equations of the First
    Kind by the Inversion of the Linear System Produced by Quadrature,” J. Association for
    Computing Machinery, 10, 1963, 97–101.
34. D. L. Phillips, “A Technique for the Numerical Solution of Certain Integral Equations of
    the First Kind,” J. Association for Computing Machinery, 9, 1964, 84–97.
35. A. N. Tikhonov, “Regularization of Incorrectly Posed Problems,” Soviet Mathematics, 4,
    6, 1963, 1624–1627.
36. E. B. Barrett and R. N. Devich, “Linear Programming Compensation for Space-Variant
    Image Degradation,” Proc. SPIE/OSA Conference on Image Processing, J. C. Urbach,
    Ed., Pacific Grove, CA, February 1976, 74, 152–158.
37. D. P. MacAdam, “Digital Image Restoration by Constrained Deconvolution,” J. Optical
    Society of America, 60, 12, December 1970, 1617–1627.

38. J. P. Burg, “Maximum Entropy Spectral Analysis,” 37th Annual Society of Exploration
    Geophysicists Meeting, Oklahoma City, OK, 1967.
39. J. A. Edward and M. M. Fitelson, “Notes on Maximum Entropy Processing,” IEEE
    Trans. Information Theory, IT-19, 2, March 1973, 232–234.
40. B. R. Frieden, “Restoring with Maximum Likelihood and Maximum Entropy,” J. Opti-
    cal Society America, 62, 4, April 1972, 511–518.
41. B. R. Frieden, “Maximum Entropy Restorations of Ganymede,” in Proc. SPIE/OSA
    Conference on Image Processing, J. C. Urbach, Ed., Pacific Grove, CA, February 1976,
    74, 160–165.
42. T. S. Huang, D. S. Baker, and S. P. Berger, “Iterative Image Restoration,” Applied
    Optics, 14, 5, May 1975, 1165–1168.
43. T. G. Stockham, Jr., T. M. Cannon, and P. B. Ingebretsen, “Blind Deconvolution
    Through Digital Signal Processing,” Proc. IEEE, 63, 4, April 1975, 678–692.
44. A. Papoulis, “Approximations of Point Spreads for Deconvolution,” J. Optical Society
    of America, 62, 1, January 1972, 77–80.
45. B. Tatian, “Asymptotic Expansions for Correcting Truncation Error in Transfer-Function
    Calculations,” J. Optical Society of America, 61, 9, September 1971, 1214–1224.




13
GEOMETRICAL IMAGE MODIFICATION




One of the most common image processing operations is geometrical modification
in which an image is spatially translated, scaled, rotated, nonlinearly warped, or
viewed from a different perspective.


13.1. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION

Image translation, scaling, and rotation can be analyzed from a unified standpoint.
Let G ( j, k ) for 1 ≤ j ≤ J and 1 ≤ k ≤ K denote a discrete output image that is created
by geometrical modification of a discrete input image F ( p, q ) for 1 ≤ p ≤ P and
1 ≤ q ≤ Q . In this derivation, the input and output images may be different in size.
Geometrical image transformations are usually based on a Cartesian coordinate sys-
tem representation in which the origin ( 0, 0 ) is the lower left corner of an image,
while for a discrete image, typically, the upper left corner unit dimension pixel at
indices (1, 1) serves as the address origin. The relationships between the Cartesian
coordinate representations and the discrete image arrays of the input and output
images are illustrated in Figure 13.1-1. The output image array indices are related to
their Cartesian coordinates by


                                       $x_k = k - \tfrac{1}{2}$                          (13.1-1a)


                                     $y_j = J + \tfrac{1}{2} - j$                        (13.1-1b)






FIGURE 13.1-1. Relationship between discrete image array and Cartesian coordinate repre-
sentation.



Similarly, the input array relationship is given by


                                       $u_q = q - \tfrac{1}{2}$                          (13.1-2a)

                                     $v_p = P + \tfrac{1}{2} - p$                        (13.1-2b)


13.1.1. Translation

Translation of F ( p, q ) with respect to its Cartesian origin to produce G ( j, k )
involves the computation of the relative offset addresses of the two images. The
translation address relationships are


                                     x k = uq + tx                            (13.1-3a)

                                      yj = vp + ty                            (13.1-3b)


where t x and ty are translation offset constants. There are two approaches to this
computation for discrete images: forward and reverse address computation. In the
forward approach, u q and v p are computed for each input pixel ( p, q ) and

substituted into Eq. 13.1-3 to obtain x k and y j . Next, the output array addresses
( j, k ) are computed by inverting Eq. 13.1-1. The composite computation reduces to


                                   j′ = p – ( P – J ) – t y                  (13.1-4a)

                                   k′ = q + tx                               (13.1-4b)


where the prime superscripts denote that j′ and k′ are not integers unless tx and t y
are integers. If j′ and k′ are rounded to their nearest integer values, data voids can
occur in the output image. The reverse computation approach involves calculation
of the input image addresses for integer output image addresses. The composite
address computation becomes

                                   p′ = j + ( P – J ) + ty                   (13.1-5a)

                                   q′ = k – t x                              (13.1-5b)


where again, the prime superscripts indicate that p′ and q′ are not necessarily integers.
If they are not integers, it becomes necessary to interpolate pixel amplitudes of
F ( p, q ) to generate a resampled pixel estimate $\hat{F}(p, q)$, which is transferred to
G ( j, k ). The geometrical resampling process is discussed in Section 13.5.
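
As a small sketch of reverse address computation, the following fragment applies Eq. 13.1-5 with nearest-neighbor rounding standing in for the interpolation of Section 13.5; the function and variable names are hypothetical.

```python
import numpy as np

def translate_reverse(F, t_x, t_y, J, K):
    """Reverse-address translation (Eq. 13.1-5) with nearest-neighbor resampling.

    F is the input array indexed F[p-1, q-1] for 1 <= p <= P, 1 <= q <= Q;
    the output G has J rows and K columns."""
    P, Q = F.shape
    G = np.zeros((J, K), dtype=F.dtype)
    for j in range(1, J + 1):
        for k in range(1, K + 1):
            p = int(round(j + (P - J) + t_y))   # Eq. 13.1-5a
            q = int(round(k - t_x))             # Eq. 13.1-5b
            if 1 <= p <= P and 1 <= q <= Q:
                G[j - 1, k - 1] = F[p - 1, q - 1]
    return G
```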


13.1.2. Scaling

Spatial size scaling of an image can be obtained by modifying the Cartesian coordi-
nates of the input image according to the relations

                                          xk = sx uq                         (13.1-6a)

                                          yj = sy vp                         (13.1-6b)


where s x and s y are positive-valued scaling constants, but not necessarily integer
valued. If s x and s y are each greater than unity, the address computation of Eq.
13.1-6 will lead to magnification. Conversely, if s x and s y are each less than unity,
minification results. The reverse address relations for the input image address are
found to be


                             $p' = P + \tfrac{1}{2} - \frac{1}{s_y}\bigl(J + \tfrac{1}{2} - j\bigr)$          (13.1-7a)

                             $q' = \frac{1}{s_x}\bigl(k - \tfrac{1}{2}\bigr) + \tfrac{1}{2}$                  (13.1-7b)


As with generalized translation, it is necessary to interpolate F ( p, q ) to obtain
G ( j, k ) .


13.1.3. Rotation

Rotation of an input image about its Cartesian origin can be accomplished by the
address computation


                                x k = u q cos θ – v p sin θ                     (13.1-8a)

                                    y j = u q sin θ + v p cos θ                (13.1-8b)


where θ is the counterclockwise angle of rotation with respect to the horizontal axis
of the input image. Again, interpolation is required to obtain G ( j, k ) . Rotation of an
input image about an arbitrary pivot point can be accomplished by translating the
origin of the image to the pivot point, performing the rotation, and then translating
back by the first translation offset. Equation 13.1-8 must be inverted and substitu-
tions made for the Cartesian coordinates in terms of the array indices in order to
obtain the reverse address indices ( p′, q′ ). This task is straightforward but results in
a messy expression. A more elegant approach is to formulate the address computa-
tion as a vector-space manipulation.


13.1.4. Generalized Linear Geometrical Transformations

The vector-space representations for translation, scaling, and rotation are given
below.


Translation:

               $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} u_q \\ v_p \end{bmatrix} + \begin{bmatrix} t_x \\ t_y \end{bmatrix}$                                (13.1-9)

Scaling:

               $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} u_q \\ v_p \end{bmatrix}$                          (13.1-10)

Rotation:

               $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} u_q \\ v_p \end{bmatrix}$                    (13.1-11)

Now, consider a compound geometrical modification consisting of translation, fol-
lowed by scaling followed by rotation. The address computations for this compound
operation can be expressed as

   $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} u_q \\ v_p \end{bmatrix} + \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} s_x & 0 \\ 0 & s_y \end{bmatrix} \begin{bmatrix} t_x \\ t_y \end{bmatrix}$     (13.1-12a)


or upon consolidation


   $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} s_x\cos\theta & -s_y\sin\theta \\ s_x\sin\theta & s_y\cos\theta \end{bmatrix} \begin{bmatrix} u_q \\ v_p \end{bmatrix} + \begin{bmatrix} s_x t_x\cos\theta - s_y t_y\sin\theta \\ s_x t_x\sin\theta + s_y t_y\cos\theta \end{bmatrix}$     (13.1-12b)



Equation 13.1-12b is, of course, linear. It can be expressed as


                  $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} c_0 & c_1 \\ d_0 & d_1 \end{bmatrix} \begin{bmatrix} u_q \\ v_p \end{bmatrix} + \begin{bmatrix} c_2 \\ d_2 \end{bmatrix}$                         (13.1-13a)


in one-to-one correspondence with Eq. 13.1-12b. Equation 13.1-13a can be rewrit-
ten in the more compact form



                  $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} c_0 & c_1 & c_2 \\ d_0 & d_1 & d_2 \end{bmatrix} \begin{bmatrix} u_q \\ v_p \\ 1 \end{bmatrix}$                             (13.1-13b)


As a consequence, the three address calculations can be obtained as a single linear
address computation. It should be noted, however, that the three address calculations
are not commutative. Performing rotation followed by minification followed by
translation results in a mathematical transformation different than Eq. 13.1-12. The
overall results can be made identical by proper choice of the individual transforma-
tion parameters.
   To obtain the reverse address calculation, it is necessary to invert Eq. 13.1-13b to
solve for ( u q, v p ) in terms of ( x k, y j ). Because the matrix in Eq. 13.1-13b is not
square, it does not possess an inverse. Although it is possible to obtain ( u q, v p ) by a
pseudoinverse operation, it is convenient to augment the rectangular matrix as
follows:


                  $\begin{bmatrix} x_k \\ y_j \\ 1 \end{bmatrix} = \begin{bmatrix} c_0 & c_1 & c_2 \\ d_0 & d_1 & d_2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_q \\ v_p \\ 1 \end{bmatrix}$                            (13.1-14)


This three-dimensional vector representation of a two-dimensional vector is a
special case of a homogeneous coordinates representation (1–3).
   The use of homogeneous coordinates enables a simple formulation of concate-
nated operators. For example, consider the rotation of an image by an angle θ about
a pivot point ( x c, y c ) in the image. This can be accomplished by

   $\begin{bmatrix} x_k \\ y_j \\ 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & x_c \\ 0 & 1 & y_c \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta & 0 \\ \sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & -x_c \\ 0 & 1 & -y_c \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_q \\ v_p \\ 1 \end{bmatrix}$     (13.1-15)

which reduces to a single 3 × 3 transformation:


   $\begin{bmatrix} x_k \\ y_j \\ 1 \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta & -x_c\cos\theta + y_c\sin\theta + x_c \\ \sin\theta & \cos\theta & -x_c\sin\theta - y_c\cos\theta + y_c \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} u_q \\ v_p \\ 1 \end{bmatrix}$     (13.1-16)

    The reverse address computation for the special case of Eq. 13.1-16, or the more
general case of Eq. 13.1-13, can be obtained by inverting the 3 × 3 transformation
matrices by numerical methods. Another approach, which is more computationally
efficient, is to initially develop the homogeneous transformation matrix in reverse
order as

                  $\begin{bmatrix} u_q \\ v_p \\ 1 \end{bmatrix} = \begin{bmatrix} a_0 & a_1 & a_2 \\ b_0 & b_1 & b_2 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x_k \\ y_j \\ 1 \end{bmatrix}$                            (13.1-17)

where for translation

                                                a0 = 1                                           (13.1-18a)
                                                a1 = 0                                           (13.1-18b)
                                                a2 = – tx                                        (13.1-18c)
                                                b0 = 0                                           (13.1-18d)
                                                b1 = 1                                           (13.1-18e)
                                                b 2 = –ty                                        (13.1-18f)

and for scaling

                                         a 0 = 1 ⁄ sx                             (13.1-19a)

                                         a1 = 0                                   (13.1-19b)

                                         a2 = 0                                   (13.1-19c)

                                         b0 = 0                                   (13.1-19d)

                                         b 1 = 1 ⁄ sy                             (13.1-19e)

                                         b2 = 0                                   (13.1-19f)

and for rotation

                                         a 0 = cos θ                              (13.1-20a)

                                         a 1 = sin θ                              (13.1-20b)

                                         a2 = 0                                   (13.1-20c)

                                         b 0 = – sin θ                            (13.1-20d)

                                         b 1 = cos θ                              (13.1-20e)

                                         b2 = 0                                   (13.1-20f)
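
The sketch below assembles the reverse-order homogeneous matrices of Eqs. 13.1-18 through 13.1-20 and concatenates them for the compound operation of Eq. 13.1-12 (translation, then scaling, then rotation); the numerical parameter values and function names are illustrative assumptions only.

```python
import numpy as np

def reverse_translation(t_x, t_y):
    """Reverse-order homogeneous matrix of Eq. 13.1-18."""
    return np.array([[1.0, 0.0, -t_x],
                     [0.0, 1.0, -t_y],
                     [0.0, 0.0,  1.0]])

def reverse_scaling(s_x, s_y):
    """Reverse-order homogeneous matrix of Eq. 13.1-19."""
    return np.array([[1.0 / s_x, 0.0,       0.0],
                     [0.0,       1.0 / s_y, 0.0],
                     [0.0,       0.0,       1.0]])

def reverse_rotation(theta):
    """Reverse-order homogeneous matrix of Eq. 13.1-20 (rotation by -theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[  c,   s, 0.0],
                     [ -s,   c, 0.0],
                     [0.0, 0.0, 1.0]])

# For the forward chain translate -> scale -> rotate, the reverse mapping is the
# product of the individual reverse matrices with the rotation reverse applied first.
theta, s_x, s_y, t_x, t_y = np.pi / 4, 2.0, 2.0, 5.0, -3.0
A_reverse = reverse_translation(t_x, t_y) @ reverse_scaling(s_x, s_y) @ reverse_rotation(theta)
u_q, v_p, _ = A_reverse @ np.array([10.0, 20.0, 1.0])   # input coordinates for output (x_k, y_j) = (10, 20)
```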

    Address computation for a rectangular destination array G ( j, k ) from a rectan-
gular source array F ( p, q ) of the same size results in two types of ambiguity: some
pixels of F ( p, q ) will map outside of G ( j, k ); and some pixels of G ( j, k ) will not be
mappable from F ( p, q ) because they will lie outside its limits. As an example,
Figure 13.1-2 illustrates rotation of an image by 45° about its center. If the goal
of the mapping is to produce a complete destination array G ( j, k ) , it is necessary
to access a sufficiently large source image F ( p, q ) to prevent mapping voids in
 G ( j, k ) . This is accomplished in Figure 13.1-2d by embedding the original image
of Figure 13.1-2a in a zero background that is sufficiently large to encompass the
rotated original.


13.1.5. Affine Transformation

The geometrical operations of translation, size scaling, and rotation are special cases
of a geometrical operator called an affine transformation. It is defined by Eq.
13.1-13b, in which the constants ci and di are general weighting factors. The affine
transformation is not only useful as a generalization of translation, scaling, and rota-
tion. It provides a means of image shearing in which the rows or columns
are successively uniformly translated with respect to one another. Figure 13.1-3




FIGURE 13.1-2. Image rotation by 45° on the washington_ir image about its center:
(a) original, 500 × 500; (b) rotated, 500 × 500; (c) original, 708 × 708; (d) rotated, 708 × 708.




illustrates image shearing of rows of an image. In this example, c 0 = d 1 = 1.0 ,
c 1 = 0.1, d 0 = 0.0, and c 2 = d 2 = 0.0.



13.1.6. Separable Translation, Scaling, and Rotation

The address mapping computations for translation and scaling are separable in the
sense that the horizontal output image coordinate xk depends only on uq, and yj
depends only on vp. Consequently, it is possible to perform these operations
separably in two passes. In the first pass, a one-dimensional address translation is
performed independently on each row of an input image to produce an intermediate
array I ( p, k ). In the second pass, columns of the intermediate array are processed
independently to produce the final result G ( j, k ).




FIGURE 13.1-3. Horizontal image shearing on the washington_ir image: (a) original; (b) sheared.



   Referring to Eq. 13.1-8, it is observed that the address computation for rotation is
of a form such that xk is a function of both uq and vp; and similarly for yj. One might
then conclude that rotation cannot be achieved by separable row and column pro-
cessing, but Catmull and Smith (4) have demonstrated otherwise. In the first pass of
the Catmull and Smith procedure, each row of F ( p, q ) is mapped into the corre-
sponding row of the intermediate array I ( p, k ) using the standard row address com-
putation of Eq. 13.1-8a. Thus


                                x k = u q cos θ – v p sin θ                           (13.1-21)


Then, each column of I ( p, k ) is processed to obtain the corresponding column of
G ( j, k ) using the address computation

                                         x k sin θ + v p
                                   y j = ----------------------------
                                                                    -                 (13.1-22)
                                                 cos θ

Substitution of Eq. 13.1-21 into Eq. 13.1-22 yields the proper composite y-axis
transformation of Eq. 13.1-8b. The “secret” of this separable rotation procedure is
the ability to invert Eq. 13.1-21 to obtain an analytic expression for uq in terms of xk.
In this case,

                                         x k + v p sin θ
                                   u q = ---------------------------
                                                                   -                  (13.1-23)
                                                 cos θ

when substituted into Eq. 13.1-21, gives the intermediate column warping function
of Eq. 13.1-22.


   The Catmull and Smith two-pass algorithm can be expressed in vector-space
form as

               $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ \tan\theta & \dfrac{1}{\cos\theta} \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_q \\ v_p \end{bmatrix}$                    (13.1-24)
The separable processing procedure must be used with caution. In the special case of
a rotation of 90°, all of the rows of F ( p, q ) are mapped into a single column of
I ( p, k ) , and hence the second pass cannot be executed. This problem can be avoided
by processing the columns of F ( p, q ) in the first pass. In general, the best overall
results are obtained by minimizing the amount of spatial pixel movement. For exam-
ple, if the rotation angle is + 80°, the original should be rotated by +90° by conven-
tional row–column swapping methods, and then that intermediate image should be
rotated by –10° using the separable method.
     Figure 13.1-4 provides an example of separable rotation of an image by 45°.
Figure 13.1-4a is the original, Figure 13.1-4b shows the result of the first pass, and
Figure 13.1-4c presents the final result.




FIGURE 13.1-4. Separable two-pass image rotation on the washington_ir image:
(a) original; (b) first-pass result; (c) second-pass result.

   Separable, two-pass rotation offers the advantage of simpler computation com-
pared to one-pass rotation, but there are some disadvantages to two-pass rotation.
Two-pass rotation causes loss of high spatial frequencies of an image because
of the intermediate scaling step (5), as seen in Figure 13.1-4b. Also, there is the
potential of increased aliasing error (5,6), as discussed in Section 13.5.
   Several authors (5,7,8) have proposed a three-pass rotation procedure in which
there is no scaling step and hence no loss of high-spatial-frequency content with
proper interpolation. The vector-space representation of this procedure is given by

         $\begin{bmatrix} x_k \\ y_j \end{bmatrix} = \begin{bmatrix} 1 & -\tan(\theta/2) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 \\ \sin\theta & 1 \end{bmatrix} \begin{bmatrix} 1 & -\tan(\theta/2) \\ 0 & 1 \end{bmatrix} \begin{bmatrix} u_q \\ v_p \end{bmatrix}$          (13.1-25)

This transformation is a series of image shearing operations without scaling. Figure
13.1-5 illustrates three-pass rotation for rotation by 45°.
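
A minimal sketch of three-pass shear rotation in the spirit of Eq. 13.1-25 follows; it uses nearest-neighbor row and column shifts rather than proper interpolation, and the sign conventions (array row index versus Cartesian y) are assumptions that may flip the apparent rotation direction.

```python
import numpy as np

def shear_rows(img, alpha):
    """Shift each row horizontally by alpha times its offset from the image center."""
    out = np.zeros_like(img)
    rows, cols = img.shape
    cy = (rows - 1) / 2.0
    for r in range(rows):
        shift = int(round(alpha * (r - cy)))
        src = np.arange(cols) - shift               # source column for each output column
        valid = (src >= 0) & (src < cols)
        out[r, valid] = img[r, src[valid]]
    return out

def shear_cols(img, beta):
    """Shift each column vertically by beta times its offset from the image center."""
    return shear_rows(img.T, beta).T

def rotate_three_pass(img, theta):
    """Three-pass shear rotation about the image center (Eq. 13.1-25),
    applied as row shear, column shear, row shear."""
    a = -np.tan(theta / 2.0)
    b = np.sin(theta)
    return shear_rows(shear_cols(shear_rows(img, a), b), a)
```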




FIGURE 13.1-5. Separable three-pass image rotation on the washington_ir image:
(a) original; (b) first-pass result; (c) second-pass result; (d) third-pass result.


13.2 SPATIAL WARPING

The address computation procedures described in the preceding section can be
extended to provide nonlinear spatial warping of an image. In the literature, this pro-
cess is often called rubber-sheet stretching (9,10). Let

                                          x = X ( u, v )                           (13.2-1a)

                                          y = Y ( u, v )                           (13.2-1b)

denote the generalized forward address mapping functions from an input image to
an output image. The corresponding generalized reverse address mapping functions
are given by

                                          u = U ( x, y )                           (13.2-2a)

                                          v = V ( x, y )                           (13.2-2b)

For notational simplicity, the ( j, k ) and ( p, q ) subscripts have been dropped from
these and subsequent expressions. Consideration is given next to some examples
and applications of spatial warping.


13.2.1. Polynomial Warping

The reverse address computation procedure given by the linear mapping of Eq.
13.1-17 can be extended to higher dimensions. A second-order polynomial warp
address mapping can be expressed as

                         $u = a_0 + a_1 x + a_2 y + a_3 x^2 + a_4 xy + a_5 y^2$           (13.2-3a)

                         $v = b_0 + b_1 x + b_2 y + b_3 x^2 + b_4 xy + b_5 y^2$           (13.2-3b)

In vector notation,

                  $\begin{bmatrix} u \\ v \end{bmatrix} = \begin{bmatrix} a_0 & a_1 & a_2 & a_3 & a_4 & a_5 \\ b_0 & b_1 & b_2 & b_3 & b_4 & b_5 \end{bmatrix} \begin{bmatrix} 1 \\ x \\ y \\ x^2 \\ xy \\ y^2 \end{bmatrix}$                 (13.2-3c)

For first-order address mapping, the weighting coefficients ( a i, b i ) can easily be related
to the physical mapping as described in Section 13.1. There is no simple physical




                        FIGURE 13.2-1. Geometric distortion.

counterpart for second-order address mapping. Typically, second-order and higher-order
address mapping are performed to compensate for spatial distortion caused by a
physical imaging system. For example, Figure 13.2-1 illustrates the effects of imag-
ing a rectangular grid with an electronic camera that is subject to nonlinear pincush-
ion or barrel distortion. Figure 13.2-2 presents a generalization of the problem. An
ideal image F ( j, k ) is subject to an unknown physical spatial distortion. The
observed image is measured over a rectangular array O ( p, q ). The objective is to
                                                                               ˆ
perform a spatial correction warp to produce a corrected image array F ( j, k ) .
Assume that the address mapping from the ideal image space to the observation
space is given by
                                    u = O u { x, y }                        (13.2-4a)

                                    v = O v { x, y }                        (13.2-4b)




                       FIGURE 13.2-2. Spatial warping concept.


where Ou { x, y } and O v { x, y } are physical mapping functions. If these mapping
functions are known, then Eq. 13.2-4 can, in principle, be inverted to obtain the
proper corrective spatial warp mapping. If the physical mapping functions are not
known, Eq. 13.2-3 can be considered as an estimate of the physical mapping func-
tions based on the weighting coefficients ( a i, b i ) . These polynomial weighting coef-
ficients are normally chosen to minimize the mean-square error between a set of
observation coordinates ( u m, v m ) and the polynomial estimates ( u, v ) for a set
( 1 ≤ m ≤ M ) of known data points ( x m, y m ) called control points. It is convenient to
arrange the observation space coordinates into the vectors

                                   $u^T = [u_1, u_2, \ldots, u_M]$                      (13.2-5a)

                                   $v^T = [v_1, v_2, \ldots, v_M]$                      (13.2-5b)

Similarly, let the second-order polynomial coefficients be expressed in vector form as

                                   $a^T = [a_0, a_1, \ldots, a_5]$                      (13.2-6a)

                                   $b^T = [b_0, b_1, \ldots, b_5]$                      (13.2-6b)

The mean-square estimation error can be expressed in the compact form
                      $E = (u - Aa)^T (u - Aa) + (v - Ab)^T (v - Ab)$                    (13.2-7)
where

                      $A = \begin{bmatrix} 1 & x_1 & y_1 & x_1^2 & x_1 y_1 & y_1^2 \\ 1 & x_2 & y_2 & x_2^2 & x_2 y_2 & y_2^2 \\ \vdots & & & & & \vdots \\ 1 & x_M & y_M & x_M^2 & x_M y_M & y_M^2 \end{bmatrix}$                       (13.2-8)

From Appendix 1, it has been determined that the error will be minimum if

                                             $a = A^{-} u$                              (13.2-9a)

                                             $b = A^{-} v$                              (13.2-9b)

where $A^{-}$ is the generalized inverse of A. If the number of control points is chosen
greater than the number of polynomial coefficients, then

                                    $A^{-} = [A^T A]^{-1} A^T$                          (13.2-10)




FIGURE 13.2-3. Second-order polynomial spatial warping on the mandrill_mon image:
(a) source control points; (b) destination control points; (c) warped.




provided that the control points are not linearly related. Following this procedure,
the polynomial coefficients ( a i, b i ) can easily be computed, and the address map-
ping of Eq. 13.2-1 can be obtained for all ( j, k ) pixels in the corrected image. Of
course, proper interpolation is necessary.
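
The following sketch fits the second-order polynomial coefficients of Eq. 13.2-3 to a set of control points by least squares, with np.linalg.lstsq standing in for the generalized inverse of Eq. 13.2-10; the function names are hypothetical.

```python
import numpy as np

def fit_second_order_warp(xy_points, uv_points):
    """Least-squares fit of the second-order polynomial warp of Eq. 13.2-3.

    xy_points : (M, 2) array of corrected-image control points (x_m, y_m)
    uv_points : (M, 2) array of corresponding observed coordinates (u_m, v_m)
    Returns coefficient vectors a and b such that
    u = a0 + a1*x + a2*y + a3*x^2 + a4*x*y + a5*y^2, and similarly for v."""
    x, y = xy_points[:, 0], xy_points[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])   # Eq. 13.2-8
    a, *_ = np.linalg.lstsq(A, uv_points[:, 0], rcond=None)           # Eq. 13.2-9a
    b, *_ = np.linalg.lstsq(A, uv_points[:, 1], rcond=None)           # Eq. 13.2-9b
    return a, b

def apply_warp(a, b, x, y):
    """Evaluate the reverse address mapping (u, v) for an output coordinate (x, y)."""
    basis = np.array([1.0, x, y, x**2, x * y, y**2])
    return a @ basis, b @ basis
```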
   Equation 13.2-3 can be extended to provide a higher-order approximation to the
physical mapping of Eq. 13.2-4. However, practical problems arise in computing the
pseudoinverse accurately for higher-order polynomials. For most applications, sec-
ond-order polynomial computation suffices. Figure 13.2-3 presents an example of
second-order polynomial warping of an image. In this example, the mapping of con-
trol points is indicated by the graphics overlay.




                      FIGURE 13.3-1. Basic imaging system model.


13.3. PERSPECTIVE TRANSFORMATION

Most two-dimensional images are views of three-dimensional scenes from the phys-
ical perspective of a camera imaging the scene. It is often desirable to modify an
observed image so as to simulate an alternative viewpoint. This can be accom-
plished by use of a perspective transformation.
   Figure 13.3-1 shows a simple model of an imaging system that projects points of light
in three-dimensional object space to points of light in a two-dimensional image plane
through a lens focused for distant objects. Let ( X, Y, Z ) be the continuous domain coordi-
nate of an object point in the scene, and let ( x, y ) be the continuous domain-projected
coordinate in the image plane. The image plane is assumed to be at the center of the coor-
dinate system. The lens is located at a distance f to the right of the image plane, where f is
the focal length of the lens. By use of similar triangles, it is easy to establish that

                                         $x = \frac{fX}{f - Z}$                         (13.3-1a)

                                         $y = \frac{fY}{f - Z}$                         (13.3-1b)

Thus the projected point ( x, y ) is related nonlinearly to the object point ( X, Y, Z ) .
This relationship can be simplified by utilization of homogeneous coordinates, as
introduced to the image processing community by Roberts (1).
   Let

                                          X
                                 v   =    Y                                  (13.3-2)
                                          Z
be a vector containing the object point coordinates. The homogeneous vector ṽ cor-
responding to v is

                                          sX
                                 ṽ   =    sY                                 (13.3-3)
                                          sZ
                                           s

where s is a scaling constant. The Cartesian vector v can be generated from the
homogeneous vector ṽ by dividing each of the first three components by the fourth.
The utility of this representation will soon become evident.
  Consider the following perspective transformation matrix:


                                     1    0         0      0
                              P =    0    1         0      0                   (13.3-4)
                                     0    0         1      0
                                     0    0       –1 ⁄ f   1

This is a modification of the Roberts (1) definition to account for a different labeling
of the axes and the use of column rather than row vectors. Forming the vector
product

                                   w̃ = P ṽ                                  (13.3-5a)

yields

                                          sX
                                 w̃   =    sY                                 (13.3-5b)
                                          sZ
                                       s − sZ/f

The corresponding image plane coordinates are obtained by normalization of w̃ to
obtain


                                      fX / (f − Z)
                              w   =   fY / (f − Z)                           (13.3-6)
                                      fZ / (f − Z)


It should be observed that the first two elements of w correspond to the imaging
relationships of Eq. 13.3-1.
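   A minimal numerical sketch of this forward projection, assuming NumPy and an
illustrative function name, forms the matrix of Eq. 13.3-4, applies Eq. 13.3-5a, and
normalizes by the fourth homogeneous component:

import numpy as np

def perspective_project(X, Y, Z, f):
    # Perspective transformation matrix of Eq. 13.3-4.
    P = np.array([[1.0, 0.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0, 0.0],
                  [0.0, 0.0, -1.0 / f, 1.0]])
    v_h = np.array([X, Y, Z, 1.0])            # homogeneous object point with s = 1
    w_h = P @ v_h                             # Eq. 13.3-5a
    return w_h[0] / w_h[3], w_h[1] / w_h[3]   # image plane (x, y) of Eq. 13.3-1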
    It is possible to project a specific image point ( x i, y i ) back into three-dimensional
object space through an inverse perspective transformation

                                  ṽ = P⁻¹ w̃                                 (13.3-7a)

where

                                       1    0    0    0
                            P⁻¹   =    0    1    0    0                      (13.3-7b)
                                       0    0    1    0
                                       0    0   1/f   1

and

                                          sxᵢ
                                 w̃   =    syᵢ                                (13.3-7c)
                                          szᵢ
                                           s

In Eq. 13.3-7c, zᵢ is regarded as a free variable. Performing the inverse perspective
transformation yields the homogeneous vector

                                          sxᵢ
                                 ṽ   =    syᵢ                                (13.3-8)
                                          szᵢ
                                       s + szᵢ/f


The corresponding Cartesian coordinate vector is

                                      fxᵢ / (f + zᵢ)
                              v   =   fyᵢ / (f + zᵢ)                         (13.3-9)
                                      fzᵢ / (f + zᵢ)


or equivalently,

                                X = fxᵢ / (f + zᵢ)                           (13.3-10a)

                                Y = fyᵢ / (f + zᵢ)                           (13.3-10b)

                                Z = fzᵢ / (f + zᵢ)                           (13.3-10c)


Equation 13.3-10 illustrates the many-to-one nature of the perspective transforma-
tion. Choosing various values of the free variable z i results in various solutions for
( X, Y, Z ), all of which lie along a line from ( x i, y i ) in the image plane through the
lens center. Solving for the free variable zᵢ in Eq. 13.3-10c and substituting into
Eqs. 13.3-10a and 13.3-10b gives

                                X = (xᵢ / f)(f − Z)                          (13.3-11a)

                                Y = (yᵢ / f)(f − Z)                          (13.3-11b)


The meaning of this result is that because of the nature of the many-to-one perspec-
tive transformation, it is necessary to specify one of the object coordinates, say Z, in
order to determine the other two from the image plane coordinates ( x i, y i ). Practical
utilization of the perspective transformation is considered in the next section.
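   For completeness, a one-line sketch of the back-projection of Eq. 13.3-11 follows
(the function name is illustrative; a depth Z must be supplied because of the
many-to-one ambiguity noted above):

def back_project(x_i, y_i, Z, f):
    # Eq. 13.3-11: the object point on the back-projection ray at a chosen depth Z.
    X = x_i * (f - Z) / f
    Y = y_i * (f - Z) / f
    return X, Y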


13.4. CAMERA IMAGING MODEL

The imaging model utilized in the preceding section to derive the perspective
transformation assumed, for notational simplicity, that the center of the image plane
was coincident with the center of the world reference coordinate system. In this
section, the imaging model is generalized to handle physical cameras used in
practical imaging geometries (11). This leads to two important results: a derivation
of the fundamental relationship between an object and image point; and a means of
changing a camera perspective by digital image processing.
    Figure 13.4-1 shows an electronic camera in world coordinate space. This camera
is physically supported by a gimbal that permits panning about an angle θ (horizon-
tal movement in this geometry) and tilting about an angle φ (vertical movement).
The gimbal center is at the coordinate ( X G, Y G, Z G ) in the world coordinate system.
The gimbal center and image plane center are offset by a vector with coordinates
( X o, Y o, Z o ).




                       FIGURE 13.4-1. Camera imaging model.



   If the camera were to be located at the center of the world coordinate origin, not
panned nor tilted with respect to the reference axes, and if the camera image plane
was not offset with respect to the gimbal, the homogeneous image model would be
as derived in Section 13.3; that is

                                   w̃ = P ṽ                                  (13.4-1)

where ṽ is the homogeneous vector of the world coordinates of an object point, w̃
is the homogeneous vector of the image plane coordinates, and P is the perspective
transformation matrix defined by Eq. 13.3-4. The camera imaging model can easily
be derived by modifying Eq. 13.4-1 sequentially using a three-dimensional exten-
sion of translation and rotation concepts presented in Section 13.1.
    The offset of the camera to location ( XG, YG, ZG ) can be accommodated by the
translation operation

                                  w̃ = P TG ṽ                                (13.4-2)


where

                                      1 0    0 –XG
                                      0 1 0 –Y G
                              TG =                                          (13.4-3)
                                      0 0 1 –Z G
                                      0 0     0   1

Pan and tilt are modeled by a rotation transformation

                                 w̃ = P R TG ṽ                               (13.4-4)


where R = R φ R θ and


                                   cos θ – sin θ        0       0
                           Rθ =    sin θ cos θ          0       0                  (13.4-5)
                                    0        0          1       0
                                    0        0          0       1


and


                                   1         0      0           0
                        Rφ =       0       cos φ – sin φ        0                  (13.4-6)
                                   0       sin φ cos φ          0
                                   0         0      0           1


The composite rotation matrix then becomes


                               cos θ            – sin θ      0       0
                     R =     cos φ sin θ       cos φ cos θ – sin φ   0             (13.4-7)
                             sin φ sin θ       sin φ cos θ cos φ     0
                                 0                  0         0      1


Finally, the camera-to-gimbal offset is modeled as

                                w̃ = P TC R TG ṽ                             (13.4-8)


where


                                           1    0   0   –Xo
                                           0    1   0   –Yo
                               TC =                                                (13.4-9)
                                           0    0   1   –Zo
                                           0    0   0       1


Equation 13.4-8 is the final result giving the complete camera imaging model trans-
formation between an object and an image point. The explicit relationship between
an object point ( X, Y, Z ) and its image plane projection ( x, y ) can be obtained by
performing the matrix multiplications analytically and then forming the Cartesian
coordinates by dividing the first two components of w̃ by the fourth. Upon perform-
ing these operations, one obtains


      x = f [ (X − XG) cos θ − (Y − YG) sin θ − X0 ] /
          [ −(X − XG) sin θ sin φ − (Y − YG) cos θ sin φ − (Z − ZG) cos φ + Z0 + f ]      (13.4-10a)

      y = f [ (X − XG) sin θ cos φ + (Y − YG) cos θ cos φ − (Z − ZG) sin φ − Y0 ] /
          [ −(X − XG) sin θ sin φ − (Y − YG) cos θ sin φ − (Z − ZG) cos φ + Z0 + f ]      (13.4-10b)


Equation 13.4-10 can be used to predict the spatial extent of the image of a physical
scene on an imaging sensor.
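   The composite model of Eq. 13.4-8 can be assembled directly as a product of 4 × 4
matrices. The sketch below (NumPy; function and argument names are illustrative,
angles in radians) builds P TC R TG and projects an object point, which is
numerically equivalent to evaluating Eq. 13.4-10.

import numpy as np

def camera_matrix(f, gimbal_center, offset, theta, phi):
    # Composite camera imaging model of Eq. 13.4-8, with R = R_phi R_theta
    # as given by Eqs. 13.4-5 to 13.4-7.
    XG, YG, ZG = gimbal_center
    Xo, Yo, Zo = offset
    P = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, -1.0 / f, 1]], dtype=float)
    TG = np.array([[1, 0, 0, -XG], [0, 1, 0, -YG], [0, 0, 1, -ZG], [0, 0, 0, 1]], dtype=float)
    TC = np.array([[1, 0, 0, -Xo], [0, 1, 0, -Yo], [0, 0, 1, -Zo], [0, 0, 0, 1]], dtype=float)
    R_theta = np.array([[np.cos(theta), -np.sin(theta), 0, 0],
                        [np.sin(theta),  np.cos(theta), 0, 0],
                        [0, 0, 1, 0],
                        [0, 0, 0, 1]])
    R_phi = np.array([[1, 0, 0, 0],
                      [0, np.cos(phi), -np.sin(phi), 0],
                      [0, np.sin(phi),  np.cos(phi), 0],
                      [0, 0, 0, 1]])
    return P @ TC @ R_phi @ R_theta @ TG

def project_point(M, X, Y, Z):
    w = M @ np.array([X, Y, Z, 1.0])
    return w[0] / w[3], w[1] / w[3]    # image plane (x, y), as in Eq. 13.4-10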
   Another important application of the camera imaging model is to form an image
by postprocessing such that the image appears to have been taken by a camera at a
different physical perspective. Suppose that two images defined by w̃1 and w̃2 are
formed by taking two views of the same object with the same camera. The resulting
camera model relationships are then

                              w̃1 = P TC R1 TG1 ṽ                            (13.4-11a)

                              w̃2 = P TC R2 TG2 ṽ                            (13.4-11b)


Because the camera is identical for the two images, the matrices P and TC are
invariant in Eq. 13.4-11. It is now possible to perform an inverse computation of
Eq. 13.4-11a to obtain

                     ṽ = [TG1]⁻¹ [R1]⁻¹ [TC]⁻¹ [P]⁻¹ w̃1                     (13.4-12)


and by substitution into Eq. 13.4-11b, it is possible to relate the image plane coordi-
nates of the image of the second view to that obtained in the first view. Thus

                w̃2 = P TC R2 TG2 [TG1]⁻¹ [R1]⁻¹ [TC]⁻¹ [P]⁻¹ w̃1            (13.4-13)

As a consequence, an artificial image of the second view can be generated by per-
forming the matrix multiplications of Eq. 13.4-13 mathematically on the physical
image of the first view. Does this always work? No, there are limitations. First, if
some portion of a physical scene were not “seen” by the physical camera, perhaps it
GEOMETRICAL IMAGE RESAMPLING                 393

was occluded by structures within the scene, then no amount of processing will rec-
reate the missing data. Second, the processed image may suffer severe degradations
resulting from undersampling if the two camera aspects are radically different. Nev-
ertheless, this technique has valuable applications.


13.5. GEOMETRICAL IMAGE RESAMPLING

As noted in the preceding sections of this chapter, the reverse address computation
process usually results in an address result lying between known pixel values of an
input image. Thus it is necessary to estimate the unknown pixel amplitude from its
known neighbors. This process is related to the image reconstruction task, as
described in Chapter 4, in which a space-continuous display is generated from an
array of image samples. However, the geometrical resampling process is usually not
spatially regular. Furthermore, the process is discrete to discrete; only one output
pixel is produced for each input address.
   In this section, consideration is given to the general geometrical resampling
process in which output pixels are estimated by interpolation of input pixels. The
special, but common, case of image magnification by an integer zooming factor is
also discussed. In this case, it is possible to perform pixel estimation by convolution.


13.5.1. Interpolation Methods

The simplest form of resampling interpolation is to choose the amplitude of an out-
put image pixel to be the amplitude of the input pixel nearest to the reverse address.
This process, called nearest-neighbor interpolation, can result in a spatial offset
error by as much as 1 ⁄ 2 pixel units. The resampling interpolation error can be
significantly reduced by utilizing all four nearest neighbors in the interpolation. A
common approach, called bilinear interpolation, is to interpolate linearly along each
row of an image and then interpolate that result linearly in the columnar direction.
Figure 13.5-1 illustrates the process. The estimated pixel is easily found to be


                   F ( p′, q′ ) = ( 1 – a ) [ ( 1 – b )F ( p, q ) + bF ( p, q + 1 ) ]

                                   + a [ ( 1 – b )F ( p + 1, q ) + bF ( p + 1, q + 1 ) ]   (13.5-1)


Although the horizontal and vertical interpolation operations are each linear, in gen-
eral, their sequential application results in a nonlinear surface fit between the four
neighboring pixels.
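   A direct transcription of Eq. 13.5-1 into Python follows (the function name is
illustrative; F is assumed to be a 2-D NumPy array indexed [row, column], with a
the vertical and b the horizontal fractional offset):

def bilinear(F, p, q, a, b):
    # Eq. 13.5-1: interpolate at row p + a, column q + b (0 <= a, b < 1)
    # from the four neighbors of Figure 13.5-1.
    top = (1 - b) * F[p, q] + b * F[p, q + 1]
    bottom = (1 - b) * F[p + 1, q] + b * F[p + 1, q + 1]
    return (1 - a) * top + a * bottom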
   The expression for bilinear interpolation of Eq. 13.5-1 can be generalized for any
interpolation function R { x } that is zero-valued outside the range of ± 1 sample
spacing. With this generalization, interpolation can be considered as the summing of
four weighted interpolation functions as given by
FIGURE 13.5-1. Bilinear interpolation: the pixel F̂(p′, q′) is estimated from its four
neighbors F(p, q), F(p, q + 1), F(p + 1, q), and F(p + 1, q + 1), with vertical fractional
offset a and horizontal fractional offset b.




 F ( p′, q′ ) = F ( p, q )R { – a }R { b } + F ( p, q + 1 )R { – a }R { – ( 1 – b ) }


                + F ( p + 1, q )R { 1 – a }R { b } + F ( p + 1, q + 1 )R { 1 – a }R { – ( 1 – b ) }

                                                                                                          (13.5-2)
In the special case of linear interpolation, R { x } = R 1 { x } , where R1 { x } is defined in
Eq. 4.3-2. Making this substitution, it is found that Eq. 13.5-2 is equivalent to the
bilinear interpolation expression of Eq. 13.5-1.
    Typically, for reasons of computational complexity, resampling interpolation is
limited to a 4 × 4 pixel neighborhood. Figure 13.5-2 defines a generalized bicubic
interpolation neighborhood in which the pixel F ( p, q ) is the nearest neighbor to the
pixel to be interpolated. The interpolated pixel may be expressed in the compact
form

                                  2       2
               F(p′, q′)  =       Σ       Σ      F(p + m, q + n) RC{(m − a)} RC{−(n − b)}   (13.5-3)
                               m = −1   n = −1

where RC ( x ) denotes a bicubic interpolation function such as a cubic B-spline or
cubic interpolation function, as defined in Section 4.3-2.


13.5.2. Convolution Methods

When an image is to be magnified by an integer zoom factor, pixel estimation can be
implemented efficiently by convolution (12). As an example, consider image magni-
fication by a factor of 2:1. This operation can be accomplished in two stages. First,
the input image is transferred to an array in which rows and columns of zeros are
interleaved with the input image data as follows:
FIGURE 13.5-2. Bicubic interpolation: the 4 × 4 neighborhood F(p − 1, q − 1) through
F(p + 2, q + 2) surrounding the pixel F̂(p′, q′) to be interpolated, with fractional offsets
a and b measured from the nearest neighbor F(p, q).




FIGURE 13.5-3. Interpolation kernels for 2:1 magnification.
FIGURE 13.5-4. Image interpolation on the mandrill_mon image for 2:1 magnification:
(a) original; (b) zero interleaved quadrant; (c) peg; (d) pyramid; (e) bell; (f) cubic B-spline.


                           A   B                   A 0 B
                                                   0 0 0
                           C   D                   C 0 D

                         input image           zero-interleaved
                        neighborhood            neighborhood


Next, the zero-interleaved neighborhood image is convolved with one of the discrete
interpolation kernels listed in Figure 13.5-3. Figure 13.5-4 presents the magnifica-
tion results for several interpolation kernels. The inevitable visual trade-off between
the interpolation error (the jaggy line artifacts) and the loss of high spatial frequency
detail in the image is apparent from the examples.
   This discrete convolution operation can easily be extended to higher-order magni-
fication factors. For N:1 magnification, the core kernel is an N × N peg array. For large
kernels, it may be more computationally efficient to perform the interpolation
indirectly by Fourier domain filtering rather than by convolution (6).
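   A sketch of the two-stage 2:1 magnification procedure follows, using SciPy's
two-dimensional convolution for the second stage and a pyramid (bilinear) kernel as
one possible member of the kernel family of Figure 13.5-3. The function name and
the boundary mode are assumptions of this example.

import numpy as np
from scipy.ndimage import convolve   # any two-dimensional convolution routine would do

def magnify_2x(image):
    # Stage 1: interleave rows and columns of zeros with the input samples.
    rows, cols = image.shape
    zero_interleaved = np.zeros((2 * rows, 2 * cols), dtype=float)
    zero_interleaved[::2, ::2] = image
    # Stage 2: convolve with a pyramid (bilinear) kernel.
    pyramid = np.array([[0.25, 0.5, 0.25],
                        [0.50, 1.0, 0.50],
                        [0.25, 0.5, 0.25]])
    return convolve(zero_interleaved, pyramid, mode='nearest')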


REFERENCES

 1. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Elec-
    tro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge,
    MA, 1965.
 2. D. F. Rogers, Mathematical Elements for Computer Graphics, 2nd ed., McGraw-Hill,
    New York, 1989.
 3. J. D. Foley et al., Computer Graphics: Principles and Practice, 2nd ed. in C, Addison-
    Wesley, Reading, MA, 1996.
 4. E. Catmull and A. R. Smith, “3-D Transformation of Images in Scanline Order,” Com-
    puter Graphics, SIGGRAPH '80 Proc., 14, 3, July 1980, 279–285.
 5. M. Unser, P. Thevenaz, and L. Yaroslavsky, “Convolution-Based Interpolation for Fast,
    High-Quality Rotation of Images,” IEEE Trans. Image Processing, IP-4, 10, October
    1995, 1371–1381.
 6. D. Fraser and R. A. Schowengerdt, “Avoidance of Additional Aliasing in Multipass
    Image Rotations,” IEEE Trans. Image Processing, IP-3, 6, November 1994, 721–735.
 7. A. W. Paeth, “A Fast Algorithm for General Raster Rotation,” in Proc. Graphics Inter-
    face ‘86-Vision Interface, 1986, 77–81.
 8. P. E. Danielson and M. Hammerin, “High Accuracy Rotation of Images,” CVGIP:
    Graphical Models and Image Processing, 54, 4, July 1992, 340–344.
 9. R. Bernstein, “Digital Image Processing of Earth Observation Sensor Data,” IBM J.
    Research and Development, 20, 1, 1976, 40–56.
10. D. A. O’Handley and W. B. Green, “Recent Developments in Digital Image Processing
    at the Image Processing Laboratory of the Jet Propulsion Laboratory,” Proc. IEEE, 60, 7,
    July 1972, 821–828.


11. K. S. Fu, R. C. Gonzalez and C. S. G. Lee, Robotics: Control, Sensing, Vision, and Intel-
    ligence, McGraw-Hill, New York, 1987.
12. W. K. Pratt, “Image Processing and Analysis Using Primitive Computational Elements,”
    in Selected Topics in Signal Processing, S. Haykin, Ed., Prentice Hall, Englewood Cliffs,
    NJ, 1989.




PART 5

IMAGE ANALYSIS

Image analysis is concerned with the extraction of measurements, data or informa-
tion from an image by automatic or semiautomatic methods. In the literature, this
field has been called image data extraction, scene analysis, image description, auto-
matic photo interpretation, image understanding, and a variety of other names.
    Image analysis is distinguished from other types of image processing, such as
coding, restoration, and enhancement, in that the ultimate product of an image anal-
ysis system is usually numerical output rather than a picture. Image analysis also
diverges from classical pattern recognition in that analysis systems, by definition,
are not limited to the classification of scene regions to a fixed number of categories,
but rather are designed to provide a description of complex scenes whose variety
may be enormously large and ill-defined in terms of a priori expectation.








14
MORPHOLOGICAL IMAGE PROCESSING




Morphological image processing is a type of processing in which the spatial form or
structure of objects within an image is modified. Dilation, erosion, and skeleton-
ization are three fundamental morphological operations. With dilation, an object
grows uniformly in spatial extent, whereas with erosion an object shrinks uniformly.
Skeletonization results in a stick figure representation of an object.
   The basic concepts of morphological image processing trace back to the research
on spatial set algebra by Minkowski (1) and the studies of Matheron (2) on topology.
Serra (3–5) developed much of the early foundation of the subject. Steinberg (6,7)
was a pioneer in applying morphological methods to medical and industrial vision
applications. This research work led to the development of the cytocomputer for
high-speed morphological image processing (8,9).
   In the following sections, morphological techniques are first described for binary
images. Then these morphological concepts are extended to gray scale images.


14.1. BINARY IMAGE CONNECTIVITY

Binary image morphological operations are based on the geometrical relationship or
connectivity of pixels that are deemed to be of the same class (10,11). In the binary
image of Figure 14.1-1a, the ring of black pixels, by all reasonable definitions of
connectivity, divides the image into three segments: the white pixels exterior to the
ring, the white pixels interior to the ring, and the black pixels of the ring itself. The
pixels within each segment are said to be connected to one another. This concept of
connectivity is easily understood for Figure 14.1-1a, but ambiguity arises when con-
sidering Figure 14.1-1b. Do the black pixels still define a ring, or do they instead
form four disconnected lines? The answers to these questions depend on the defini-
tion of connectivity.





                             FIGURE 14.1-1. Connectivity.



   Consider the following neighborhood pixel pattern:


                                      X3   X2   X1
                                      X4   X    X0
                                      X5   X6   X7



in which a binary-valued pixel F ( j, k ) = X , where X = 0 (white) or X = 1 (black) is
surrounded by its eight nearest neighbors X 0, X 1, …, X 7. An alternative nomencla-
ture is to label the neighbors by compass directions: north, northeast, and so on:


                                   NW      N     NE
                                    W      X      E
                                   SW      S     SE


Pixel X is said to be four-connected to a neighbor if it is a logical 1 and if its east,
north, west, or south ( X 0, X 2, X4, X6 ) neighbor is a logical 1. Pixel X is said to be
eight-connected if it is a logical 1 and if its east, northeast, etc. ( X0, X1, …, X 7 )
neighbor is a logical 1.
   The connectivity relationship between a center pixel and its eight neighbors can
be quantified by the concept of a pixel bond, the sum of the bond weights between
the center pixel and each of its neighbors. Each four-connected neighbor has a bond
of two, and each eight-connected neighbor has a bond of one. In the following
example, the pixel bond is seven.


                                      1    1    1
                                      0    X    0
                                      1    1    0
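   The pixel bond computation can be sketched as follows (the function name is
illustrative; the neighborhood is a 3 × 3 nested list of 0/1 values):

def pixel_bond(neighborhood):
    # Four-connected (edge) neighbors carry a bond weight of 2, diagonal neighbors 1.
    # The center entry is ignored.
    weights = [[1, 2, 1],
               [2, 0, 2],
               [1, 2, 1]]
    return sum(weights[r][c] * neighborhood[r][c] for r in range(3) for c in range(3))

For the example above, pixel_bond([[1, 1, 1], [0, 1, 0], [1, 1, 0]]) returns seven.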




              FIGURE 14.1-2. Pixel neighborhood connectivity definitions.



    Under the definition of four-connectivity, Figure 14.1-1b has four disconnected
black line segments, but with the eight-connectivity definition, Figure 14.1-1b has a
ring of connected black pixels. Note, however, that under eight-connectivity, all
white pixels are connected together. Thus a paradox exists. If the black pixels are to
be eight-connected together in a ring, one would expect a division of the white pix-
els into pixels that are interior and exterior to the ring. To eliminate this dilemma,
eight-connectivity can be defined for the black pixels of the object, and four-connec-
tivity can be established for the white pixels of the background. Under this defini-
tion, a string of black pixels is said to be minimally connected if elimination of any
black pixel results in a loss of connectivity of the remaining black pixels. Figure
14.1-2 provides definitions of several other neighborhood connectivity relationships
between a center black pixel and its neighboring black and white pixels.
    The preceding definitions concerning connectivity have been based on a discrete
image model in which a continuous image field is sampled over a rectangular array
of points. Golay (12) has utilized a hexagonal grid structure. With such a structure,
many of the connectivity problems associated with a rectangular grid are eliminated.
In a hexagonal grid, neighboring pixels are said to be six-connected if they are in the
same set and share a common edge boundary. Algorithms have been developed for
the linking of boundary points for many feature extraction tasks (13). However, two
major drawbacks have hindered wide acceptance of the hexagonal grid. First, most
image scanners are inherently limited to rectangular scanning. The second problem
is that the hexagonal grid is not well suited to many spatial processing operations,
such as convolution and Fourier transformation.


14.2. BINARY IMAGE HIT OR MISS TRANSFORMATIONS

The two basic morphological operations, dilation and erosion, plus many variants
can be defined and implemented by hit-or-miss transformations (3). The concept is
quite simple. Conceptually, a small odd-sized mask, typically 3 × 3 , is scanned over
a binary image. If the binary-valued pattern of the mask matches the state of the pix-
els under the mask (hit), an output pixel in spatial correspondence to the center pixel
of the mask is set to some desired binary state. For a pattern mismatch (miss), the
output pixel is set to the opposite binary state. For example, to perform simple
binary noise cleaning, if the isolated 3 × 3 pixel pattern

                                         0   0        0
                                         0   1        0
                                         0   0        0

is encountered, the output pixel is set to zero; otherwise, the output pixel is set to the
state of the input center pixel. In more complicated morphological algorithms, a
large number of the 2⁹ = 512 possible mask patterns may cause hits.
    It is often possible to establish simple neighborhood logical relationships that
define the conditions for a hit. In the isolated pixel removal example, the defining
equation for the output pixel G ( j, k ) becomes

                          G ( j, k ) = X ∩ ( X 0 ∪ X 1 ∪ … ∪ X 7 )               (14.2-1)

where ∩ denotes the intersection operation (logical AND) and ∪ denotes the union
operation (logical OR). For complicated algorithms, the logical equation method of
definition can be cumbersome. It is often simpler to regard the hit masks as a collec-
tion of binary patterns.
   Hit-or-miss morphological algorithms are often implemented in digital image
processing hardware by a pixel stacker followed by a look-up table (LUT), as shown
in Figure 14.2-1 (14). Each pixel of the input image is a positive integer, represented
by a conventional binary code, whose most significant bit is a 1 (black) or a 0
(white). The pixel stacker extracts the bits of the center pixel X and its eight neigh-
bors and puts them in a neighborhood pixel stack. Pixel stacking can be performed
by convolution with the 3 × 3 pixel kernel

                                    2⁻⁴   2⁻³   2⁻²
                                    2⁻⁵   2⁰    2⁻¹
                                    2⁻⁶   2⁻⁷   2⁻⁸

The binary number state of the neighborhood pixel stack becomes the numeric input
address of the LUT whose entry is Y. For isolated pixel removal, integer entry 256,
corresponding to the neighborhood pixel stack state 100000000, contains Y = 0; all
other entries contain Y = X.
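   The look-up table mechanism can be sketched in Python as follows. The text fixes
only that the all-white-neighbor state of a black center pixel addresses entry 256
(binary 100000000); the ordering of the eight neighbor bits within the address below
is an assumption of this example, as are the function names.

import numpy as np

def hit_or_miss_lut(image, lut):
    # Apply a 3 x 3 hit-or-miss operation defined by a 512-entry look-up table.
    # Address bit 8 holds the center pixel X; bits 7..0 hold X0 ... X7 in the
    # compass order of the text (E, NE, N, NW, W, SW, S, SE).
    rows, cols = image.shape
    out = image.copy()
    for j in range(1, rows - 1):
        for k in range(1, cols - 1):
            w = image[j - 1:j + 2, k - 1:k + 2]
            x = int(w[1, 1])
            neighbors = [w[1, 2], w[0, 2], w[0, 1], w[0, 0],   # X0, X1, X2, X3
                         w[1, 0], w[2, 0], w[2, 1], w[2, 2]]   # X4, X5, X6, X7
            address = (x << 8) | sum(int(b) << (7 - i) for i, b in enumerate(neighbors))
            out[j, k] = lut[address]
    return out

def isolated_pixel_removal_lut():
    # Entry 256 (isolated black pixel) outputs 0; all other entries return the center bit.
    return np.array([0 if a == 256 else (a >> 8) for a in range(512)], dtype=np.uint8)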




      FIGURE 14.2-1. Look-up table flowchart for binary unconditional operations.




   Several other 3 × 3 hit-or-miss operators are described in the following subsec-
tions.


14.2.1. Additive Operators

Additive hit-or-miss morphological operators cause the center pixel of a 3 × 3 pixel
window to be converted from a logical 0 state to a logical 1 state if the neighboring
pixels meet certain predetermined conditions. The basic operators are now defined.


Interior Fill. Create a black pixel if all four-connected neighbor pixels are black.


                         G ( j, k ) = X ∪ [ X 0 ∩ X 2 ∩ X 4 ∩ X 6 ]            (14.2-2)


Diagonal Fill. Create a black pixel if creation eliminates the eight-connectivity of
the background.


                         G ( j, k ) = X ∪ [ P 1 ∪ P 2 ∪ P 3 ∪ P 4 ]          (14.2-3a)


where

                                P1 = X̄ ∩ X0 ∩ X̄1 ∩ X2                       (14.2-3b)

                                P2 = X̄ ∩ X2 ∩ X̄3 ∩ X4                       (14.2-3c)

                                P3 = X̄ ∩ X4 ∩ X̄5 ∩ X6                       (14.2-3d)

                                P4 = X̄ ∩ X6 ∩ X̄7 ∩ X0                       (14.2-3e)

   In Eq. 14.2-3, the overbar denotes the logical complement of a variable.


Bridge. Create a black pixel if creation results in connectivity of previously uncon-
nected neighboring black pixels.


                          G ( j, k ) = X ∪ [ P 1 ∪ P 2 ∪ … ∪ P 6 ]             (14.2-4a)


where


               P 1 = X 2 ∩ X6 ∩ [ X3 ∪ X 4 ∪ X 5 ] ∩ [ X 0 ∪ X 1 ∪ X7 ] ∩ PQ   (14.2-4b)

               P 2 = X 0 ∩ X4 ∩ [ X1 ∪ X 2 ∪ X 3 ] ∩ [ X 5 ∪ X 6 ∪ X7 ] ∩ PQ   (14.2-4c)

               P 3 = X 0 ∩ X6 ∩ X7 ∩ [ X 2 ∪ X 3 ∪ X 4 ]                       (14.2-4d)

               P 4 = X 0 ∩ X2 ∩ X1 ∩ [ X 4 ∪ X 5 ∪ X6 ]                        (14.2-4e)

               P 5 = X2 ∩ X4 ∩ X 3 ∩ [ X 0 ∪ X 6 ∪ X7 ]                        (14.2-4f)

               P 6 = X4 ∩ X6 ∩ X 5 ∩ [ X 0 ∪ X 1 ∪ X2 ]                        (14.2-4g)

and

              P Q = L 1 ∪ L2 ∪ L 3 ∪ L 4                                       (14.2-4h)

               L1 = X ∩ X 0 ∩ X1 ∩ X 2 ∩ X 3 ∩ X4 ∩ X5 ∩ X 6 ∩ X 7             (14.2-4i)

               L2 = X ∩ X0 ∩ X 1 ∩ X2 ∩ X 3 ∩ X 4 ∩ X5 ∩ X 6 ∩ X 7             (14.2-4j)

               L3 = X ∩ X0 ∩ X 1 ∩ X2 ∩ X 3 ∩ X 4 ∩ X5 ∩ X 6 ∩ X 7             (14.2-4k)

               L 4 = X ∩ X0 ∩ X 1 ∩ X 2 ∩ X3 ∩ X 4 ∩ X 5 ∩ X6 ∩ X 7            (14.2-4l)

   The following is one of 119 qualifying patterns

                                     1    0   0
                                     1    0   1
                                     0    0   1

A pattern such as

                                      0   0   0
                                      0   0   0
                                      1   0   1

does not qualify because the two black pixels will be connected when they are on
the middle row of a subsequent observation window if they are indeed unconnected.


Eight-Neighbor Dilate. Create a black pixel if at least one eight-connected neigh-
bor pixel is black.


                             G ( j, k ) = X ∪ X 0 ∪ … ∪ X 7                   (14.2-5)


   This hit-or-miss definition of dilation is a special case of a generalized dilation
operator that is introduced in Section 14.4. The dilate operator can be applied recur-
sively. With each iteration, objects will grow by a single pixel width ring of exterior
pixels. Figure 14.2-2 shows dilation for one and for three iterations for a binary
image. In the example, the original pixels are recorded as black, the background pix-
els are white, and the added pixels are midgray.
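   A vectorized sketch of the eight-neighbor dilate of Eq. 14.2-5 follows (NumPy; the
function name is illustrative). Applied recursively, it grows objects by one ring of
exterior pixels per iteration, as in Figure 14.2-2.

import numpy as np

def dilate8(image):
    # Eight-neighbor dilate (Eq. 14.2-5): a pixel is set black if it or any of
    # its eight neighbors is black.
    padded = np.pad(image.astype(bool), 1, mode='constant', constant_values=False)
    rows, cols = image.shape
    out = np.zeros((rows, cols), dtype=bool)
    for dj in (-1, 0, 1):
        for dk in (-1, 0, 1):
            out |= padded[1 + dj:1 + dj + rows, 1 + dk:1 + dk + cols]
    return out.astype(image.dtype)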


Fatten. Create a black pixel if at least one eight-connected neighbor pixel is black,
provided that creation does not result in a bridge between previously unconnected
black pixels in a 3 × 3 neighborhood.
   The following is an example of an input pattern in which the center pixel would
be set black for the basic dilation operator, but not for the fatten operator.

                                      0   0   1
                                      1   0   0
                                      1   1   0

There are 132 such qualifying patterns. This stratagem will not prevent connection
of two objects separated by two rows or columns of white pixels. A solution to this
problem is considered in Section 14.3. Figure 14.2-3 provides an example of
fattening.
FIGURE 14.2-2. Dilation of a binary image: (a) original; (b) one iteration; (c) three iterations.



14.2.2. Subtractive Operators

Subtractive hit-or-miss morphological operators cause the center pixel of a 3 × 3
window to be converted from black to white if its neighboring pixels meet predeter-
mined conditions. The basic subtractive operators are defined below.


Isolated Pixel Remove. Erase a black pixel with eight white neighbors.


                          G ( j, k ) = X ∩ [ X 0 ∪ X 1 ∪ … ∪ X 7 ]                 (14.2-6)


Spur Remove. Erase a black pixel with a single eight-connected neighbor.




                    FIGURE 14.2-3. Fattening of a binary image.



  The following is one of four qualifying patterns:


                                      0    0    0
                                      0    1    0
                                      1    0    0


Interior Pixel Remove. Erase a black pixel if all four-connected neighbors are
black.

                        G(j, k) = X ∩ [ X̄0 ∪ X̄2 ∪ X̄4 ∪ X̄6 ]                (14.2-7)

  There are 16 qualifying patterns.


H-Break. Erase a black pixel that is H-connected.
  There are two qualifying patterns.


                         1    1   1                 1   0    1
                         0    1   0                 1   1    1
                         1    1   1                 1   0    1

Eight-Neighbor Erode. Erase a black pixel if at least one eight-connected neighbor
pixel is white.


                             G ( j, k ) = X ∩ X 0 ∩ … ∩ X 7               (14.2-8)
FIGURE 14.2-4. Erosion of a binary image: (a) original; (b) one iteration; (c) three iterations.



   A generalized erosion operator is defined in Section 14.4. Recursive application
of the erosion operator will eventually erase all black pixels. Figure 14.2-4 shows
results for one and three iterations of the erode operator. The eroded pixels are midg-
ray. It should be noted that after three iterations, the ring is totally eroded.


14.2.3. Majority Black Operator

The following is the definition of the majority black operator:
Majority Black. Create a black pixel if five or more pixels in a 3 × 3 window are
black; otherwise, set the output pixel to white.
   The majority black operator is useful for filling small holes in objects and closing
short gaps in strokes. An example of its application to edge detection is given in
Chapter 15.
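   A sketch of the majority black operator follows (NumPy; the function name is
illustrative), implemented by counting the black pixels in each 3 × 3 window and
thresholding the count at five:

import numpy as np

def majority_black(image):
    # Output pixel is black if five or more of the nine pixels in its 3 x 3
    # window are black, and white otherwise.
    padded = np.pad(image.astype(int), 1, mode='constant', constant_values=0)
    rows, cols = image.shape
    count = np.zeros((rows, cols), dtype=int)
    for dj in (-1, 0, 1):
        for dk in (-1, 0, 1):
            count += padded[1 + dj:1 + dj + rows, 1 + dk:1 + dk + cols]
    return (count >= 5).astype(image.dtype)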

14.3. BINARY IMAGE SHRINKING, THINNING, SKELETONIZING, AND
      THICKENING

Shrinking, thinning, skeletonizing, and thickening are forms of conditional erosion
in which the erosion process is controlled to prevent total erasure and to ensure con-
nectivity.

14.3.1. Binary Image Shrinking

The following is a definition of shrinking:
Shrink. Erase black pixels such that an object without holes erodes to a single pixel
at or near its center of mass, and an object with holes erodes to a connected ring
lying midway between each hole and its nearest outer boundary.
    A 3 × 3 pixel object will be shrunk to a single pixel at its center. A 2 × 2 pixel
object will be arbitrarily shrunk, by definition, to a single pixel at its lower right corner.
    It is not possible to perform shrinking using single-stage 3 × 3 pixel hit-or-miss
transforms of the type described in the previous section. The 3 × 3 window does not
provide enough information to prevent total erasure and to ensure connectivity. A
5 × 5 hit-or-miss transform could provide sufficient information to perform proper
shrinking. But such an approach would result in excessive computational complex-
ity (i.e., 2²⁵ possible patterns to be examined!). References 15 and 16 describe two-
stage shrinking and thinning algorithms that perform a conditional marking of pixels
for erasure in a first stage, and then examine neighboring marked pixels in a second
stage to determine which ones can be unconditionally erased without total erasure or
loss of connectivity. The following algorithm developed by Pratt and Kabir (17) is a
pipeline processor version of the conditional marking scheme.
    In the algorithm, two concatenated 3 × 3 hit-or-miss transformations are per-
formed to obtain indirect information about pixel patterns within a 5 × 5 window.
Figure 14.3-1 is a flowchart for the look-up table implementation of this algorithm.
In the first stage, the states of nine neighboring pixels are gathered together by a
pixel stacker, and a following look-up table generates a conditional mark M for pos-
sible erasures. Table 14.3-1 lists all patterns, as indicated by the letter S in the table
column, which will be conditionally marked for erasure. In the second stage of the
algorithm, the center pixel X and the conditional marks in a 3 × 3 neighborhood cen-
tered about X are examined to create an output pixel. The shrinking operation can be
expressed logically as

                          G(j, k) = X ∩ [ M̄ ∪ P(M, M0, …, M7) ]              (14.3-1)

where P ( M, M 0, …, M 7 ) is an erasure inhibiting logical variable, as defined in Table
14.3-2. The first four patterns of the table prevent strokes of single pixel width from
being totally erased. The remaining patterns inhibit erasure that would break object
connectivity. There are a total of 157 inhibiting patterns. This two-stage process
must be performed iteratively until there are no further erasures.
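   The control structure of this two-stage algorithm can be sketched as follows. The
pattern tables themselves (Tables 14.3-1 and 14.3-2) are not reproduced here; they are
assumed to be supplied as predicate functions, so the code below is only a skeleton of
Figure 14.3-1, not the Pratt–Kabir implementation.

def conditional_erosion(image, conditional_hit, erase_inhibited, max_iterations=1000):
    # image: 2-D list of 0/1 pixels, modified in place.
    # conditional_hit(window) encodes the mark patterns of Table 14.3-1;
    # erase_inhibited(mark_window) encodes P(M, M0, ..., M7) of Table 14.3-2.
    rows, cols = len(image), len(image[0])

    def window(grid, j, k):
        # 3 x 3 neighborhood with zero (white) padding outside the image.
        return [[grid[j + dj][k + dk] if 0 <= j + dj < rows and 0 <= k + dk < cols else 0
                 for dk in (-1, 0, 1)] for dj in (-1, 0, 1)]

    for _ in range(max_iterations):
        # Stage 1: conditionally mark black pixels that match a Table 14.3-1 pattern.
        marks = [[1 if image[j][k] == 1 and conditional_hit(window(image, j, k)) else 0
                  for k in range(cols)] for j in range(rows)]
        # Stage 2: erase a marked pixel unless an inhibiting pattern is present (Eq. 14.3-1).
        erased = 0
        for j in range(rows):
            for k in range(cols):
                if marks[j][k] == 1 and not erase_inhibited(window(marks, j, k)):
                    image[j][k] = 0
                    erased += 1
        if erased == 0:        # idempotent: no further erasures
            break
    return image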




      FIGURE 14.3-1. Look-up table flowchart for binary conditional mark operations.




   As an example, the 2 × 2 square pixel object


                                         1    1
                                         1    1


results in the following intermediate array of conditional marks


                                         M    M
                                         M    M


The corner cluster pattern of Table 14.3-2 gives a hit only for the lower right corner
mark. The resulting output is


                                          0   0
                                          0   1

TABLE 14.3-1. Shrink, Thin, and Skeletonize Conditional Mark Patterns [M = 1 if hit]
Table    Bond                              Pattern
                0 0 1   1 0 0   0 0 0    0 0 0
S        1      0 1 0   0 1 0   0 1 0    0 1 0
                0 0 0   0 0 0   1 0 0    0 0 1


                0 0 0   0 1 0   0 0 0    0 0 0
S        2      0 1 1   0 1 0   1 1 0    0 1 0
                0 0 0   0 0 0   0 0 0    0 1 0


                0 0 1   0 1 1   1 1 0    1 0 0       0 0 0   0 0 0   0 0 0    0 0 0
S        3      0 1 1   0 1 0   0 1 0    1 1 0       1 1 0   0 1 0   0 1 0    0 1 1
                0 0 0   0 0 0   0 0 0    0 0 0       1 0 0   1 1 0   0 1 1    0 0 1


                0 1 0   0 1 0   0 0 0    0 0 0
TK       4      0 1 1   1 1 0   1 1 0    0 1 1
                0 0 0   0 0 0   0 1 0    0 1 0


                0 0 1   1 1 1   1 0 0    0 0 0
STK      4      0 1 1   0 1 0   1 1 0    0 1 0
                0 0 1   0 0 0   1 0 0    1 1 1


                1 1 0   0 1 0   0 1 1    0 0 1
ST       5      0 1 1   0 1 1   1 1 0    0 1 1
                0 0 0   0 0 1   0 0 0    0 1 0


                0 1 1   1 1 0   0 0 0    0 0 0
ST       5      0 1 1   1 1 0   1 1 0    0 1 1
                0 0 0   0 0 0   1 1 0    0 1 1


                1 1 0   0 1 1
ST       6      0 1 1   1 1 0
                0 0 1   1 0 0


                1 1 1   0 1 1   1 1 1    1 1 0       1 0 0   0 0 0   0 0 0    0 0 1
STK      6      0 1 1   0 1 1   1 1 0    1 1 0       1 1 0   1 1 0   0 1 1    0 1 1
                0 0 0   0 0 1   0 0 0    1 0 0       1 1 0   1 1 1   1 1 1    0 1 1
             1 1 1     1 1 1    1 0 0     0 0 1
STK     7    0 1 1     1 1 0    1 1 0     0 1 1
             0 0 1     1 0 0    1 1 1     1 1 1


             0 1 1     1 1 1    1 1 0     0 0 0
STK     8    0 1 1     1 1 1    1 1 0     1 1 1
             0 1 1     0 0 0    1 1 0     1 1 1


             1 1 1     0 1 1    1 1 1     1 1 1        1 1 1   1 1 0   1 0 0     0 0 1
STK     9    0 1 1     0 1 1    1 1 1     1 1 1        1 1 0   1 1 0   1 1 1     1 1 1
             0 1 1     1 1 1    1 0 0     0 0 1        1 1 0   1 1 1   1 1 1     1 1 1


             1 1 1     1 1 1    1 1 1     1 0 1
STK     10   0 1 1     1 1 1    1 1 0     1 1 1
             1 1 1     1 0 1    1 1 1     1 1 1


             1 1 1     1 1 1    1 1 0     0 1 1
K       11   1 1 1     1 1 1    1 1 1     1 1 1
             0 1 1     1 1 0    1 1 1     1 1 1



    Figure 14.3-2 shows an example of the shrinking of a binary image for four and 13
iterations of the algorithm. No further shrinking occurs for more than 13 iterations. At
this point, the shrinking operation has become idempotent (i.e., reapplication evokes
no further change). This shrinking algorithm does not shrink the symmetric original ring
object to a ring that is also symmetric because of some of the unconditional mark patterns
of Table 14.3-2, which are necessary to ensure that objects of even dimension shrink to
a single pixel. For the same reason, the shrunken ring is not minimally connected.


14.3.2. Binary Image Thinning

The following is a definition of thinning:

Thin. Erase black pixels such that an object without holes erodes to a minimally
connected stroke located equidistant from its nearest outer boundaries, and an object
with holes erodes to a minimally connected ring midway between each hole and its
nearest outer boundary.

TABLE 14.3-2. Shrink and Thin Unconditional Mark Patterns
[P(M, M0, M1, M2, M3, M4, M5, M6, M7) = 1 if hit] a
                                               Pattern
Spur                                Single 4-connection
0 0 M     M 0 0                     0 0 0     0 0 0
0 M 0     0 M 0                     0 M 0     0 M M
0 0 0     0 0 0                     0 M 0     0 0 0

L Cluster (thin only)
0 0 M     0 M M     M M 0     M 0 0     0 0 0     0 0 0     0 0 0     0 0 0
0 M M     0 M 0     0 M 0     M M 0     M M 0     0 M 0     0 M 0     0 M M
0 0 0     0 0 0     0 0 0     0 0 0     M 0 0     M M 0     0 M M     0 0 M

4-Connected offset
0 M M     M M 0     0 M 0     0 0 M
M M 0     0 M M     0 M M     0 M M
0 0 0     0 0 0     0 0 M     0 M 0

Spur corner cluster
0 A M     M B 0     0 0 M     M 0 0
0 M B     A M 0     A M 0     0 M B
M 0 0     0 0 M     M B 0     0 A M

Corner cluster
M M D
M M D
D D D

Tee branch
D M 0     0 M D     0 0 D     D 0 0     D M D     0 M 0     0 M 0     D M D
M M M     M M M     M M M     M M M     M M 0     M M 0     0 M M     0 M M
D 0 0     0 0 D     0 M D     D M 0     0 M 0     D M D     D M D     0 M 0

Vee branch
M D M     M D C     C B A     A D M
D M D     D M B     D M D     B M D
A B C     M D A     M D M     C D M

Diagonal branch
D M 0     0 M D     D 0 M     M 0 D
0 M M     M M 0     M M 0     0 M M
M 0 D     D 0 M     0 M D     D M 0
a
    A∪B∪C = 1     D = 0∪1      A ∪ B = 1.
FIGURE 14.3-2. Shrinking of a binary image: (a) four iterations; (b) thirteen iterations.



The following is an example of the thinning of a 3 × 5 pixel object without holes

                      1   1    1    1   1        0   0      0     0   0
                      1   1    1    1   1        0   1      1     1   0
                      1   1    1    1   1        0   0      0     0   0
                           before                         after

A 2 × 5 object is thinned as follows:

                      1   1    1    1   1      0     0     0      0   0
                      1   1    1    1   1      0     1     1      1   1
                           before                        after

   Table 14.3-1 lists the conditional mark patterns, as indicated by the letter T in the
table column, for thinning by the conditional mark algorithm of Figure 14.3-1. The
shrink and thin unconditional patterns are identical, as shown in Table 14.3-2.
   Figure 14.3-3 contains an example of the thinning of a binary image for four and
eight iterations. Figure 14.3-4 provides an example of the thinning of an image of a
printed circuit board in order to locate solder pads that have been deposited improp-
erly and that do not have holes for component leads. The pads with holes erode to a
minimally connected ring, while the pads without holes erode to a point.
   Thinning can be applied to the background of an image containing several
objects as a means of separating the objects. Figure 14.3-5 provides an example of
the process. The original image appears in Figure 14.3-5a, and the background-
reversed image is Figure 14.3-5b. Figure 14.3-5c shows the effect of thinning the
background. The thinned strokes that separate the original objects are minimally
FIGURE 14.3-3. Thinning of a binary image: (a) four iterations; (b) eight iterations.



connected, and therefore the background of the separating strokes is eight-connected
throughout the image. This is an example of the connectivity ambiguity discussed in
Section 14.1. To resolve this ambiguity, a diagonal fill operation can be applied to
the thinned strokes. The result, shown in Figure 14.3-5d, is called the exothin of the
original image. The name derives from the exoskeleton, discussed in the following
section.


14.3.3. Binary Image Skeletonizing

A skeleton or stick figure representation of an object can be used to describe its
structure. Thinned objects sometimes have the appearance of a skeleton, but they are
not always uniquely defined. For example, in Figure 14.3-3, both the rectangle and
ellipse thin to a horizontal line.




FIGURE 14.3-4. Thinning of a printed circuit board image: (a) original; (b) thinned.
FIGURE 14.3-5. Exothinning of a binary image: (a) original; (b) background-reversed;
(c) thinned background; (d) exothin.



    Blum (18) has introduced a skeletonizing technique called medial axis transfor-
mation that produces a unique skeleton for a given object. An intuitive explanation
of the medial axis transformation is based on the prairie fire analogy (19–22). Con-
sider the circle and rectangle regions of Figure 14.3-6 to be composed of dry grass
on a bare dirt background. If a fire were to be started simultaneously on the perime-
ter of the grass, the fire would proceed to burn toward the center of the regions until
all the grass was consumed. In the case of the circle, the fire would burn to the cen-
ter point of the circle, which is the quench point of the circle. For the rectangle, the
fire would proceed from each side. As the fire moved simultaneously from left and
top, the fire lines would meet and quench the fire. The quench points or quench lines
of a figure are called its medial axis skeleton. More generally, the medial axis skele-
ton consists of the set of points that are equally distant from two closest points of an
object boundary. The minimal distance function is called the quench distance of
the object. From the medial axis skeleton of an object and its quench distance, it is
FIGURE 14.3-6. Medial axis transforms: (a) circle; (b) rectangle.



possible to reconstruct the object boundary. The object boundary is determined by
the union of a set of circular disks formed by circumscribing a circle whose radius is
the quench distance at each point of the medial axis skeleton.
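   As an illustrative stand-in (not the conditional-mark algorithm described next), the
quench distance and an approximate medial axis can be obtained from a Euclidean
distance transform: object points whose quench distance is a local maximum are
retained. The SciPy routines and the local-maximum criterion are assumptions of
this sketch.

import numpy as np
from scipy.ndimage import distance_transform_edt, maximum_filter

def medial_axis_approximation(binary_object):
    # Quench distance: Euclidean distance from each object pixel to the background.
    quench = distance_transform_edt(binary_object)
    # Retain object pixels whose quench distance is a local maximum in a 3 x 3 window.
    skeleton = (quench == maximum_filter(quench, size=3)) & (binary_object > 0)
    return skeleton, quench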
    A reasonably close approximation to the medial axis skeleton can be implemented
by a slight variation of the conditional marking implementation shown in Figure 14.3-
1. In this approach, an image is iteratively eroded using conditional and unconditional
mark patterns until no further erosion occurs. The conditional mark patterns for skele-
tonization are listed in Table 14.3-1 under the table indicator K. Table 14.3-3 lists the
unconditional mark patterns. At the conclusion of the last iteration, it is necessary to
perform a single iteration of bridging as defined by Eq. 14.2-4 to restore connectivity,
which will be lost whenever the following pattern is encountered:

                                     1 11 11
                                        1 111 1

Inhibiting the following mark pattern created by the bit pattern above:

                                          MM
                                        M M

will prevent elliptically shaped objects from being improperly skeletonized.


TABLE 14.3-3. Skeletonize Unconditional Mark Patterns
[P(M, M0 , M1 , M2, M3, M4, M5, M6 , M7) = 1 if hit]a
                                           Pattern
Spur
    0   0     0            0       0   0             0   0   M   M   0   0
    0   M     0            0       M   0             0   M   0   0   M   0
    0   0     M            M       0   0             0   0   0   0   0   0
Single 4-connection
    0   0     0            0       0   0             0   0   0   0   M   0
    0   M     0            0       M   M             M   M   0   0   M   0
    0   M     0            0       0   0             0   0   0   0   0   0
L corner
    0   M     0            0       M   0             0   0   0   0   0   0
    0   M     M            M       M   0             0   M   M   M   M   0
    0   0     0            0       0   0             0   M   0   0   M   0


Corner cluster
    D   M     M            D       D   D             M   M   D   D   D   D
    D   M     M            M       M   D             M   M   D   D   M   M
    D   D     D            M       M   D             D   D   D   D   M   M
Tee branch
    D   M     D            D       M   D             D   D   D   D   M   D
 M      M     M            M       M   D             M   M   M   D   M   M
    D   0     0            D       M   D             D   M   D   D   M   D


Vee branch
 M      D     M            M       D   C             C   B   A   A   D   M
    D   M     D            D       M   B             D   M   D   B   M   D
    A   B     C            M       D   A             M   D   M   C   D   M
Diagonal branch
    D   M     0            0       M   D             D   0   M   M   0   D
    0   M     M            M       M   0             M   M   0   0   M   M
 M      0     D            D       0   M             0   M   D   D   M   0
a
    A ∪ B ∪ C = 1, D = 0 ∪ 1.




                   FIGURE 14.3-7. Skeletonizing of a binary image: (a) four iterations; (b) ten iterations.



   Figure 14.3-7 shows an example of the skeletonization of a binary image. The
eroded pixels are midgray. It should be observed that skeletonizing gives different
results than thinning for many objects. Prewitt (23, p. 136) has coined the term
exoskeleton for the skeleton of the background of objects in a scene. The exoskeleton
partitions each object from its neighboring objects, as does the thinning of the back-
ground.


14.3.4. Binary Image Thickening

In Section 14.2.1, the fatten operator was introduced as a means of dilating objects
such that objects separated by a single pixel stroke would not be fused. But the fat-
ten operator does not prevent fusion of objects separated by a double width white
stroke. This problem can be solved by iteratively thinning the background of an
image and then performing a diagonal fill operation. This process, called thickening,
when taken to its idempotent limit, forms the exothin of the image, as discussed in
Section 14.3.2. Figure 14.3-8 provides an example of thickening. The exothin oper-
ation is repeated three times on the background reversed version of the original
image. Figure 14.3-8b shows the final result obtained by reversing the background
of the exothinned image.




                      FIGURE 14.3-8. Thickening of a binary image: (a) original; (b) thickened.




14.4. BINARY IMAGE GENERALIZED DILATION AND EROSION

Dilation and erosion, as defined earlier in terms of hit-or-miss transformations, are
limited to object modification by a single ring of boundary pixels during each itera-
tion of the process. The operations can be generalized.
    Before proceeding further, it is necessary to introduce some fundamental con-
cepts of image set algebra that are the basis for defining the generalized dilation and
erosion operators. Consider a binary-valued source image function F ( j, k ). A pixel
at coordinate ( j, k ) is a member of F ( j, k ) , as indicated by the symbol ∈, if and only
if it is a logical 1. A binary-valued image B ( j, k ) is a subset of a binary-valued
image A ( j, k ), as indicated by B ( j, k ) ⊆ A ( j, k ), if for every spatial occurrence of a
logical 1 of B ( j, k ), A ( j, k ) is a logical 1. The complement F̄ ( j, k ) of F ( j, k ) is a
binary-valued image whose pixels are in the opposite logical state of those in F ( j, k ).
Figure 14.4-1 shows an example of the complement process and other image set
algebraic operations on a pair of binary images. A reflected image F̃ ( j, k ) is an
image that has been flipped from left to right and from top to bottom. Figure 14.4-2
provides an example of image reflection. Translation of an image, as indicated by the function


                                  G ( j, k ) = T r, c { F ( j, k ) }                   (14.4-1)


consists of spatially offsetting F ( j, k ) with respect to itself by r rows and c columns,
where – R ≤ r ≤ R and – C ≤ c ≤ C . Figure 14.4-2 presents an example of the transla-
tion of a binary image.




             FIGURE 14.4-1. Image set algebraic operations on binary arrays.



14.4.1. Generalized Dilation

Generalized dilation is expressed symbolically as


                                G ( j, k ) = F ( j, k ) ⊕ H ( j , k )              (14.4-2)


where F ( j, k ) for 1 ≤ j, k ≤ N is a binary-valued image and H ( j, k ) for 1 ≤ j, k ≤ L ,
where L is an odd integer, is a binary-valued array called a structuring element. For
notational simplicity, F ( j, k ) and H ( j, k ) are assumed to be square arrays. General-
ized dilation can be defined mathematically and implemented in several ways. The
Minkowski addition definition (1) is


                             G ( j, k ) =      ∪ ∪      T r, c { F ( j, k ) }              (14.4-3)
                                            ( r, c ) ∈ H




                FIGURE 14.4-2. Reflection and translation of a binary array.



It states that G ( j, k ) is formed by the union of all translates of F ( j, k ) with respect to
itself in which the translation distances are the row and column indices of the pixels of
H ( j, k ) that are a logical 1. Figure 14.4-3 illustrates the concept. Equation 14.4-3
results in an M × M output array G ( j, k ) that is justified with the upper left corner of
the input array F ( j, k ) . The output array is of dimension M = N + L – 1, where L is
the size of the structuring element. In order to register the input and output images
properly, F ( j, k ) should be translated diagonally right by Q = ( L – 1 ) ⁄ 2 pixels. Fig-
ure 14.4-3 shows the exclusive-OR difference between G ( j, k ) and the translate of
F ( j, k ) . This operation identifies those pixels that have been added as a result of
generalized dilation.
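   A minimal sketch of this union-of-translates view, assuming binary images stored as boolean
numpy arrays and a centered offset convention (the book's Eq. 14.4-3 is corner justified); the
translate helper realizes the zero-filled offset of Eq. 14.4-1.

    import numpy as np

    def translate(F, r, c):
        # Offset F by r rows and c columns (Eq. 14.4-1); vacated positions become logical 0.
        G = np.zeros_like(F)
        R, C = F.shape
        rs, re = max(0, r), min(R, R + r)
        cs, ce = max(0, c), min(C, C + c)
        G[rs:re, cs:ce] = F[rs - r:re - r, cs - c:ce - c]
        return G

    def dilate_minkowski(F, H):
        # Union of translates of F, one per logical 1 of H (centered form of Eq. 14.4-3).
        Q = (H.shape[0] - 1) // 2
        G = np.zeros_like(F)
        for r in range(H.shape[0]):
            for c in range(H.shape[1]):
                if H[r, c]:
                    G |= translate(F, r - Q, c - Q)
        return G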
    An alternative definition of generalized dilation is based on the scanning and pro-
cessing of F ( j, k ) by the structuring element H ( j, k ) . With this approach, generalized
dilation is formulated as (17)


                     G ( j, k ) =   ∪ ∪ F ( m, n ) ∩ H ( j – m + 1, k – n + 1 )
                                    m n
                                                                                      (14.4-4)


With reference to Eq. 7.1-7, the spatial limits of the union combination are

                            MAX { 1, j – L + 1 } ≤ m ≤ MIN { N, j }                  (14.4-5a)

                            MAX { 1, k – L + 1 } ≤ n ≤ MIN { N, k }                  (14.4-5b)

Equation 14.4-4 provides an output array that is justified with the upper left corner
of the input array. In image processing systems, it is often convenient to center the
input and output images and to limit their size to the same overall dimension. This
can be accomplished easily by modifying Eq. 14.4-4 to the form

                     G ( j, k ) =   ∪ ∪ F ( m , n ) ∩ H ( j – m + S, k – n + S )
                                    m n
                                                                                      (14.4-6)




         FIGURE 14.4-3. Generalized dilation computed by Minkowski addition.



where S = ( L – 1 ) ⁄ 2 and, from Eq. 7.1-10, the limits of the union combination are


                        MAX { 1, j – Q } ≤ m ≤ MIN { N, j + Q }             (14.4-7a)
                        MAX { 1, k – Q } ≤ n ≤ MIN { N, k + Q }             (14.4-7b)


and where Q = ( L – 1 ) ⁄ 2 . Equation 14.4-6 applies for S ≤ j, k ≤ N – Q and
G ( j, k ) = 0 elsewhere. The Minkowski addition definition of generalized dilation
given in Eq. 14.4-3 can be modified to provide a centered result by taking the trans-
lations about the center of the structuring element. In the following discussion, only
the centered definitions of generalized dilation will be utilized. In the special case
for which L = 3, Eq. 14.4-6 can be expressed explicitly as

G ( j, k ) =
[ H ( 3, 3 ) ∩ F ( j – 1, k – 1 ) ] ∪ [ H ( 3, 2 ) ∩ F ( j – 1, k ) ] ∪ [ H ( 3, 1 ) ∩ F ( j – 1, k + 1 ) ]
∪ [ H ( 2, 3 ) ∩ F ( j, k – 1 ) ] ∪ [ H ( 2, 2 ) ∩ F ( j, k ) ] ∪ [ H ( 2, 1 ) ∩ F ( j, k + 1 ) ]
∪ [ H ( 1, 3 ) ∩ F ( j + 1, k – 1 ) ] ∪ [ H ( 1, 2 ) ∩ F ( j + 1, k ) ] ∪ [ H ( 1, 1 ) ∩ F ( j + 1, k + 1 ) ]

                                                                                                        (14.4-8)
If H ( j, k ) = 1 for 1 ≤ j, k ≤ 3 , then G ( j, k ) , as computed by Eq. 14.4-8, gives the
same result as hit-or-miss dilation, as defined by Eq. 14.2-5.
   It is interesting to compare Eqs. 14.4-6 and 14.4-8, which define generalized
dilation, and Eqs. 7.1-14 and 7.1-15, which define convolution. In the generalized
dilation equation, the union operations are analogous to the summation operations of
convolution, while the intersection operation is analogous to point-by-point
multiplication. As with convolution, dilation can be conceived as the scanning and
processing of F ( j, k ) by H ( j, k ) rotated by 180°.


14.4.2. Generalized Erosion

Generalized erosion is expressed symbolically as


                                       G ( j, k ) = F ( j, k ) ⊖ H ( j, k )                             (14.4-9)


where again H ( j, k ) is an odd size L × L structuring element. Serra (3) has adopted,
as his definition for erosion, the dual relationship of Minkowski addition given by
Eq. 14.4-3, which was introduced by Hadwiger (24). By this formulation, general-
ized erosion is defined to be


                                     G ( j, k ) =      ∩ ∩      T r, c { F ( j, k ) }                  (14.4-10)
                                                    ( r, c ) ∈ H


The meaning of this relation is that erosion of F ( j, k ) by H ( j, k ) is the intersection of
all translates of F ( j, k ) in which the translation distance is the row and column index
of pixels of H ( j, k ) that are in the logical 1 state. Steinberg et al. (6,25) have adopted
the subtly different formulation




 FIGURE 14.4-4. Comparison of erosion results for two definitions of generalized erosion.




                                G ( j, k ) =      ∩ ∩      T r, c { F ( j, k ) }                       (14.4-11)
                                               ( r, c ) ∈ H̃

introduced by Matheron (2), in which the translates of F ( j, k ) are governed by the
reflection H̃ ( j, k ) of the structuring element rather than by H ( j, k ) itself.
    Using the Steinberg definition, G ( j, k ) is a logical 1 if and only if the logical 1s
of H ( j, k ) form a subset of the spatially corresponding pattern of the logical 1s of
F ( j, k ) as H ( j, k ) is scanned over F ( j, k ) . It should be noted that the logical zeros of
H ( j, k ) do not have to match the logical zeros of F ( j, k ) . With the Serra definition,
the statements above hold when F ( j, k ) is scanned and processed by the reflection of
the structuring element. Figure 14.4-4 presents a comparison of the erosion results
for the two definitions of erosion. Clearly, the results are inconsistent.
    Pratt (26) has proposed a relation, which is the dual to the generalized dilation
expression of Eq. 14.4-6, as a definition of generalized erosion. By this formulation,
generalized erosion in centered form is


                     G ( j, k ) =   ∩ ∩ F ( m, n ) ∪ H̄ ( j – m + S, k – n + S )
                                    m n
                                                                                       (14.4-12)


where S = ( L – 1 ) ⁄ 2 , and the limits of the intersection combination are given by
Eq. 14.4-7. In the special case for which L = 3, Eq. 14.4-12 becomes


G ( j, k ) =
[ H̄ ( 3, 3 ) ∪ F ( j – 1, k – 1 ) ] ∩ [ H̄ ( 3, 2 ) ∪ F ( j – 1, k ) ] ∩ [ H̄ ( 3, 1 ) ∪ F ( j – 1, k + 1 ) ]
∩ [ H̄ ( 2, 3 ) ∪ F ( j, k – 1 ) ] ∩ [ H̄ ( 2, 2 ) ∪ F ( j, k ) ] ∩ [ H̄ ( 2, 1 ) ∪ F ( j, k + 1 ) ]
∩ [ H̄ ( 1, 3 ) ∪ F ( j + 1, k – 1 ) ] ∩ [ H̄ ( 1, 2 ) ∪ F ( j + 1, k ) ] ∩ [ H̄ ( 1, 1 ) ∪ F ( j + 1, k + 1 ) ]
                                                                                                       (14.4-13)

If H ( j, k ) = 1 for 1 ≤ j, k ≤ 3 , Eq. 14.4-13 gives the same result as hit-or-miss eight-
neighbor erosion as defined by Eq. 14.2-6. Pratt's definition is the same as the Serra
definition. However, Eq. 14.4-12 can easily be modified by substituting the reflec-
tion H̃ ( j, k ) for H ( j, k ) to provide equivalency with the Steinberg definition.
Unfortunately, the literature utilizes both definitions, which can lead to confusion.
The definition adopted in this book is that of Hadwiger, Serra, and Pratt, because the




      FIGURE 14.4-5. Generalized dilation and erosion for a 5 × 5 structuring element.

defining relationships (Eq. 14.4-10 or 14.4-12) are duals to their counterparts for gen-
eralized dilation (Eq. 14.4-3 or 14.4-6).
   Figure 14.4-5 shows examples of generalized dilation and erosion for a symmet-
ric 5 × 5 structuring element.
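   The two erosion conventions are easy to compare numerically. The sketch below, assuming boolean
numpy arrays and treating pixels outside the image frame as logical 0, implements the centered
intersection-of-translates erosion (the Hadwiger-Serra-Pratt convention adopted here); passing the
reflected structuring element H[::-1, ::-1] gives the Steinberg-Matheron variant, and the two agree
only when H is symmetric.

    import numpy as np

    def shift(F, r, c):
        # Offset F by r rows and c columns; pixels shifted in from outside the frame are 0.
        G = np.zeros_like(F)
        R, C = F.shape
        rs, re = max(0, r), min(R, R + r)
        cs, ce = max(0, c), min(C, C + c)
        G[rs:re, cs:ce] = F[rs - r:re - r, cs - c:ce - c]
        return G

    def erode(F, H):
        # Centered intersection-of-translates erosion (Serra convention).
        # erode(F, H[::-1, ::-1]) gives the Steinberg-Matheron (reflected) variant.
        Q = (H.shape[0] - 1) // 2
        G = np.ones_like(F)
        for u in range(H.shape[0]):
            for v in range(H.shape[1]):
                if H[u, v]:
                    G &= shift(F, u - Q, v - Q)
        return G

    # The two conventions differ only for an asymmetric structuring element:
    rng = np.random.default_rng(0)
    F = rng.random((32, 32)) > 0.3
    H = np.array([[1, 1, 0],
                  [0, 1, 0],
                  [0, 0, 0]], dtype=bool)
    print(np.array_equal(erode(F, H), erode(F, H[::-1, ::-1])))   # typically False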


14.4.3. Properties of Generalized Dilation and Erosion

Consideration is now given to several mathematical properties of generalized
dilation and erosion. Proofs of these properties are found in Reference 25. For nota-
tional simplicity, in this subsection the spatial coordinates of a set are dropped, i.e.,
A( j, k) = A. Dilation is commutative:

                                    A⊕B = B⊕A                                (14.4-14a)

But in general, erosion is not commutative:

                                     A ⊖ B ≠ B ⊖ A                             (14.4-14b)

Dilation and erosion are increasing operations in the sense that if A ⊆ B , then

                                    A⊕C⊆B⊕C                                  (14.4-15a)

                                     A ⊖ C ⊆ B ⊖ C                             (14.4-15b)

Dilation and erosion are opposite in effect; dilation of the background of an object
behaves like erosion of the object. This statement can be quantified by the duality
relationship

                                    ________        _
                                    A ⊖ B      =    A ⊕ B                        (14.4-16)

For the Steinberg definition of erosion, B on the right-hand side of Eq. 14.4-16
should be replaced by its reflection B̃. Figure 14.4-6 contains an example of the
duality relationship.
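   A quick numerical check of Eq. 14.4-16, assuming the scipy binary morphology routines and a
symmetric structuring element (so that the Serra/Steinberg reflection distinction disappears); the
one-pixel border is excluded from the comparison because the two scipy calls treat values outside
the image frame differently.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(1)
    A = rng.random((64, 64)) > 0.5            # an arbitrary binary image
    B = np.ones((3, 3), dtype=bool)           # symmetric structuring element

    lhs = ~ndimage.binary_erosion(A, structure=B)       # complement of the eroded object
    rhs = ndimage.binary_dilation(~A, structure=B)      # dilation of the background
    print(np.array_equal(lhs[1:-1, 1:-1], rhs[1:-1, 1:-1]))   # expected: True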
   The dilation and erosion of the intersection and union of sets obey the following
relations:

                          [ A ∩ B ] ⊕ C ⊆ [ A ⊕ C ] ∩ [ B ⊕ C ]               (14.4-17a)
                          [ A ∩ B ] ⊖ C = [ A ⊖ C ] ∩ [ B ⊖ C ]               (14.4-17b)
                          [ A ∪ B ] ⊕ C = [ A ⊕ C ] ∪ [ B ⊕ C ]               (14.4-17c)
                          [ A ∪ B ] ⊖ C ⊇ [ A ⊖ C ] ∪ [ B ⊖ C ]               (14.4-17d)




           FIGURE 14.4-6. Duality relationship between dilation and erosion.


The dilation and erosion of a set by the intersection of two other sets satisfy these
containment relations:

                          A ⊕ [B ∩ C] ⊆ [A ⊕ B] ∩ [A ⊕ C]                         (14.4-18a)

                          A ⊖ [ B ∩ C ] ⊇ [ A ⊖ B ] ∪ [ A ⊖ C ]               (14.4-18b)

On the other hand, dilation and erosion of a set by the union of a pair of sets are
governed by the equality relations

                          A ⊕ [B ∪ C] = [A ⊕ B] ∪ [A ⊕ C]                         (14.4-19a)

                          A ⊖ [ B ∪ C ] = [ A ⊖ B ] ∩ [ A ⊖ C ]               (14.4-19b)

The following chain rules hold for dilation and erosion.

                               A ⊕[B ⊕ C] = [ A ⊕ B] ⊕ C                          (14.4-20a)

                              A ⊖ [ B ⊕ C ] = [ A ⊖ B ] ⊖ C                   (14.4-20b)

14.4.4. Structuring Element Decomposition

Equation 14.4-20 is important because it indicates that if an L × L structuring element
can be expressed as

                 H ( j, k ) = K 1 ( j, k ) ⊕ … ⊕ Kq ( j, k ) ⊕ … ⊕ K Q ( j, k )    (14.4-21)




                   FIGURE 14.4-7. Structuring element decomposition.



where Kq ( j, k ) is a small structuring element, it is possible to perform dilation and
erosion by operating on an image sequentially. In Eq. 14.4-21, if the small structur-
ing elements K q ( j, k ) are all 3 × 3 arrays, then Q = ( L – 1 ) ⁄ 2 . Figure 14.4-7 gives
several examples of small structuring element decomposition. Sequential small
structuring element (SSE) dilation and erosion is analogous to small generating ker-
nel (SGK) convolution as given by Eq. 9.6-1. Not every large impulse response
array can be decomposed exactly into a sequence of SGK convolutions; similarly,
not every large structuring element can be decomposed into a sequence of SSE dila-
tions or erosions. Following is an example in which a 5 × 5 structuring element can-
not be decomposed into the sequential dilation of two 3 × 3 SSEs. Zhuang and
Haralick (27) have developed a computational search method to find an SSE decom-
position into 1 × 2 and 2 × 1 elements.




      FIGURE 14.4-8. Small structuring element decomposition of a 5 × 5 pixel ring.




                                    1   1   1   1   1
                                    1   0   0   0   1
                                    1   0   0   0   1
                                    1   0   0   0   1
                                    1   1   1   1   1
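   The chain rule of Eq. 14.4-20a is what makes such decompositions useful in practice: dilating
twice with a 3 × 3 square of 1s is equivalent to dilating once with the 5 × 5 square of 1s, since the
latter is the dilation of the former with itself. A short check, assuming scipy is available (the
5 × 5 ring above, by contrast, has no two-factor decomposition and must be handled by the union
construction of Figure 14.4-8):

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(2)
    F = rng.random((64, 64)) > 0.9                 # a sparse binary test image
    K3 = np.ones((3, 3), dtype=bool)               # small structuring element (SSE)
    H5 = np.ones((5, 5), dtype=bool)               # H5 = K3 dilated by K3

    one_pass = ndimage.binary_dilation(F, structure=H5)
    two_pass = ndimage.binary_dilation(F, structure=K3, iterations=2)
    print(np.array_equal(one_pass, two_pass))      # expected: True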


   For two-dimensional convolution it is possible to decompose any large impulse
response array into a set of sequential SGKs that are computed in parallel and

summed together using the singular-value decomposition/small generating kernel
(SVD/SGK) algorithm, as illustrated by the flowchart of Figure 9.6-2. It is logical to
conjecture as to whether an analog to the SVD/SGK algorithm exists for dilation
and erosion. Equation 14.4-19 suggests that such an algorithm may exist. Figure
14.4-8 illustrates an SSE decomposition of the 5 × 5 ring example based on Eqs.
14.4-19a and 14.4-21. Unfortunately, no systematic method has yet been found to
decompose an arbitrarily large structuring element.


14.5. BINARY IMAGE CLOSE AND OPEN OPERATIONS

Dilation and erosion are often applied to an image in concatenation. Dilation fol-
lowed by erosion is called a close operation. It is expressed symbolically as

                                G ( j, k ) = F ( j, k ) • H ( j, k )            (14.5-1a)

where H ( j, k ) is an L × L structuring element. In accordance with the Serra formula-
tion of erosion, the close operation is defined as

                                                                 – ˜
                        G ( j, k ) = [ F ( j, k ) ⊕ H ( j, k ) ] ᭺ H ( j, k )   (14.5-1b)

where it should be noted that erosion is performed with the reflection of the structur-
ing element. Closing of an image with a compact structuring element without holes
(zeros), such as a square or circle, smooths contours of objects, eliminates small
holes in objects, and fuses short gaps between objects.
   An open operation, expressed symbolically as

                                G ( j, k ) = F ( j, k ) ∘ H ( j, k )            (14.5-2a)

consists of erosion followed by dilation. It is defined as

                                                  – ˜
                        G ( j, k ) = [ F ( j, k ) ᭺ H ( j, k ) ] ⊕ H ( j, k )   (14.5-2b)

where again, the erosion is with the reflection of the structuring element. Opening of
an image smooths contours of objects, eliminates small objects, and breaks narrow
strokes.
   The close operation tends to increase the spatial extent of an object, while the
open operation decreases its spatial extent. In quantitative terms

                                F ( j, k ) • H ( j, k ) ⊇ F ( j, k )            (14.5-3a)

                                 F ( j, k ) ∘ H ( j, k ) ⊆ F ( j, k )           (14.5-3b)
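   These properties are easy to see on a toy image. The sketch below, assuming scipy, closes and
opens a square object that contains a small hole and is accompanied by a small isolated speck;
scipy's binary_closing and binary_opening package the dilation-erosion and erosion-dilation
compositions of Eqs. 14.5-1 and 14.5-2 (the structuring element here is symmetric, so the
reflection in those equations is immaterial).

    import numpy as np
    from scipy import ndimage

    F = np.zeros((40, 40), dtype=bool)
    F[8:32, 8:32] = True           # a solid square object
    F[18:20, 18:20] = False        # a small hole inside the object
    F[2:4, 2:4] = True             # a small isolated speck

    H = np.ones((5, 5), dtype=bool)
    closed = ndimage.binary_closing(F, structure=H)   # fills the hole, keeps the speck
    opened = ndimage.binary_opening(F, structure=H)   # removes the speck, keeps (enlarges) the hole

    print(closed[18, 18], opened[2, 2])               # expected: True False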




          FIGURE 14.5-1. Close and open operations on a binary image: (a) original blob; (b) close;
          (c) overlay of original and close; (d) open; (e) overlay of original and open.

It can be shown that the close and open operations are stable in the sense that (25)

                        [ F ( j, k ) • H ( j, k ) ] • H ( j, k ) = F ( j, k ) • H ( j, k )          (14.5-4a)

                         [ F ( j, k ) ∘ H ( j, k ) ] ∘ H ( j, k ) = F ( j, k ) ∘ H ( j, k )          (14.5-4b)

Also, it can be easily shown that the open and close operations satisfy the following
duality relationship:

                                  _______________________     _
                                  F ( j, k ) • H ( j, k )  =  F ( j, k ) ∘ H ( j, k )                 (14.5-5)

Figure 14.5-1 presents examples of the close and open operations on a binary image.


14.6. GRAY SCALE IMAGE MORPHOLOGICAL OPERATIONS

Morphological concepts can be extended to gray scale images, but the extension
often leads to theoretical issues and to implementation complexities. When applied
to a binary image, dilation and erosion operations cause an image to increase or
decrease in spatial extent, respectively. To generalize these concepts to a gray scale
image, it is assumed that the image contains visually distinct gray scale objects set
against a gray background. Also, it is assumed that the objects and background are
both relatively spatially smooth. Under these conditions, it is reasonable to ask:
Why not just threshold the image and perform binary image morphology? The rea-
son for not taking this approach is that the thresholding operation often introduces
significant error in segmenting objects from the background. This is especially true
when the gray scale image contains shading caused by nonuniform scene illumina-
tion.


14.6.1. Gray Scale Image Dilation and Erosion

Dilation or erosion of an image could, in principle, be accomplished by hit-or-miss
transformations in which the quantized gray scale patterns are examined in a 3 × 3
window and an output pixel is generated for each pattern. This approach is, how-
ever, not computationally feasible. For example, if a look-up table implementation
were to be used, the table would require 2⁷² entries for 256-level quantization of
each pixel! The common alternative is to use gray scale extremum operations over a
3 × 3 pixel neighborhood.
   Consider a gray scale image F ( j, k ) quantized to an arbitrary number of gray lev-
els. According to the extremum method of gray scale image dilation, the dilation
operation is defined as


      G ( j, k ) = MAX { F ( j, k ), F ( j, k + 1 ), F ( j – 1, k + 1 ), …, F ( j + 1, k + 1 ) }     (14.6-1)




FIGURE 14.6-1. One-dimensional gray scale image dilation on a printed circuit board image:
(a) original; (b) original profile; (c) one iteration; (d) two iterations; (e) three iterations.



where MAX { S 1, …, S 9 } generates the largest-amplitude pixel of the nine pixels in
the neighborhood. If F ( j, k ) is quantized to only two levels, Eq. 14.6-1 provides the
same result as that using binary image dilation as defined by Eq. 14.2-5.

   By the extremum method, gray scale image erosion is defined as


    G ( j, k ) = MIN { F ( j, k ), F ( j, k + 1 ), F ( j – 1, k + 1 ), …, F ( j + 1, k + 1 ) }   (14.6-2)


where MIN { S 1, …, S 9 } generates the smallest-amplitude pixel of the nine pixels in
the 3 × 3 pixel neighborhood. If F ( j, k ) is binary-valued, then Eq. 14.6-2 gives the
same result as hit-or-miss erosion as defined in Eq. 14.2-8.
   In Chapter 10, when discussing the pseudomedian, it was shown that the MAX
and MIN operations can be computed sequentially. As a consequence, Eqs. 14.6-1
and 14.6-2 can be applied iteratively to an image. For example, three iterations gives
the same result as a single iteration using a 7 × 7 moving-window MAX or MIN
operator. By selectively excluding some of the terms S 1, …, S 9 of Eq. 14.6-1 or
14.6-2 during each iteration, it is possible to synthesize large nonsquare gray scale
structuring elements in the same manner as illustrated in Figure 14.4-7 for binary
structuring elements. However, no systematic decomposition procedure has yet been
developed.
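   A minimal sketch of the extremum definitions, assuming scipy; maximum_filter and
minimum_filter compute the moving-window MAX and MIN of Eqs. 14.6-1 and 14.6-2, and iterating the
3 × 3 version reproduces a larger window, as noted above (the comparison is restricted to interior
pixels so that border padding plays no role).

    import numpy as np
    from scipy import ndimage

    def gray_dilate3(F):
        # One iteration of Eq. 14.6-1: 3 x 3 neighborhood maximum.
        return ndimage.maximum_filter(F, size=3)

    def gray_erode3(F):
        # One iteration of Eq. 14.6-2: 3 x 3 neighborhood minimum.
        return ndimage.minimum_filter(F, size=3)

    F = np.random.default_rng(3).integers(0, 256, size=(64, 64)).astype(float)
    three_pass = gray_dilate3(gray_dilate3(gray_dilate3(F)))
    single_7x7 = ndimage.maximum_filter(F, size=7)
    print(np.array_equal(three_pass[3:-3, 3:-3], single_7x7[3:-3, 3:-3]))   # expected: True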
   Figures 14.6-1 and 14.6-2 show the amplitude profile of a row of a gray scale
image of a printed circuit board (PCB) after several dilation and erosion iterations.
The row selected is indicated by the white horizontal line in Figure 14.6-1a. In
Figure 14.6-2, two-dimensional gray scale dilation and erosion are performed on the
PCB image.


14.6.2. Gray Scale Image Close and Open Operators

The close and open operations introduced in Section 14.5 for binary images can
easily be extended to gray scale images. Gray scale closing is realized by first per-
forming gray scale dilation with a gray scale structuring element, then gray scale
erosion with the same structuring element. Similarly, gray scale opening is accom-
plished by gray scale erosion followed by gray scale dilation. Figure 14.6-3 gives
examples of gray scale image closing and opening.
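   With a flat (zero-height) structuring element this is simply the composition of the MAX and MIN
operations above, and scipy packages it directly. A brief sketch, assuming scipy and a 5 × 5 flat
element:

    import numpy as np
    from scipy import ndimage

    F = np.random.default_rng(4).random((64, 64)) * 255.0    # stand-in for a gray scale image
    closed = ndimage.grey_closing(F, size=(5, 5))   # gray scale dilation followed by erosion
    opened = ndimage.grey_opening(F, size=(5, 5))   # gray scale erosion followed by dilation
    print(np.all(closed >= F), np.all(opened <= F))           # expected: True True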
   Steinberg (28) has introduced the use of three-dimensional structuring elements
for gray scale image closing and opening operations. Although the concept is well
defined mathematically, it is simpler to describe in terms of a structural image
model. Consider a gray scale image to be modeled as an array of closely packed
square pegs, each of which is proportional in height to the amplitude of a corre-
sponding pixel. Then a three-dimensional structuring element, for example a sphere,
is placed over each peg. The bottom of the structuring element as it is translated
over the peg array forms another spatially discrete surface, which is the close array
of the original image. A spherical structuring element will touch pegs at peaks of the
original peg array, but will not touch pegs at the bottom of steep valleys. Conse-
quently, the close surface “fills in” dark spots in the original image. The opening of
a gray scale image can be conceptualized in a similar manner. An original image
is modeled as a peg array in which the height of each peg is inversely proportional to




FIGURE 14.6-2. One-dimensional gray scale image erosion on a printed circuit board image:
(a) one iteration; (b) two iterations; (c) three iterations.



the amplitude of each corresponding pixel (i.e., the gray scale is subtractively
inverted). The translated structuring element then forms the open surface of the orig-
inal image. For a spherical structuring element, bright spots in the original image are
made darker.
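   One way to realize this peg-and-ball picture numerically, assuming scipy, is to pass a nonflat
(additive) structuring element to the gray scale operators: scipy's grey_dilation adds the
element's height profile before taking the maximum, which corresponds to translating the ball over
the peg array. The hemisphere below is sampled on a square support, so strictly it is a spherical
cap on a flat base rather than a full ball; this is a simplification.

    import numpy as np
    from scipy import ndimage

    def spherical_cap(radius):
        # Height profile of a hemisphere of the given radius, sampled on a square grid.
        y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
        return np.sqrt(np.maximum(0.0, radius ** 2 - x ** 2 - y ** 2))

    F = np.random.default_rng(5).random((64, 64)) * 255.0
    S = spherical_cap(4)
    ball_closed = ndimage.grey_closing(F, structure=S)   # fills in dark pits narrower than the ball
    ball_opened = ndimage.grey_opening(F, structure=S)   # shaves off bright peaks narrower than the ball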


14.6.3. Conditional Gray Scale Image Morphological Operators

There have been attempts to develop morphological operators for gray scale images
that are analogous to binary image shrinking, thinning, skeletonizing, and thicken-
ing. The stumbling block to these extensions is the lack of a definition for connec-
tivity of neighboring gray scale pixels. Serra (4) has proposed approaches based on
topographic mapping techniques. Another approach is to iteratively perform the
basic dilation and erosion operations on a gray scale image and then use a binary
thresholded version of the resultant image to determine connectivity at each
iteration.




FIGURE 14.6-3. Two-dimensional gray scale image dilation, erosion, close, and open on a
printed circuit board image: (a) original; (b) 5 × 5 square dilation; (c) 5 × 5 square erosion;
(d) 5 × 5 square close; (e) 5 × 5 square open.


REFERENCES

 1. H. Minkowski, “Volumen und Oberfläche,” Mathematische Annalen, 57, 1903, 447–
    459.
 2. G. Matheron, Random Sets and Integral Geometry, Wiley, New York, 1975.
 3. J. Serra, Image Analysis and Mathematical Morphology, Vol. 1, Academic Press, Lon-
    don, 1982.
 4. J. Serra, Image Analysis and Mathematical Morphology: Theoretical Advances, Vol. 2,
    Academic Press, London, 1988.
 5. J. Serra, “Introduction to Mathematical Morphology,” Computer Vision, Graphics, and
    Image Processing, 35, 3, September 1986, 283–305.
 6. S. R. Steinberg, “Parallel Architectures for Image Processing,” Proc. 3rd International
    IEEE Compsac, Chicago, 1981.
 7. S. R. Steinberg, “Biomedical Image Processing,” IEEE Computer, January 1983, 22–34.
 8. S. R. Steinberg, “Automatic Image Processor,” US patent 4,167,728.
 9. R. M. Lougheed and D. L. McCubbrey, “The Cytocomputer: A Practical Pipelined
    Image Processor,” Proc. 7th Annual International Symposium on Computer Architec-
    ture, 1980.
10. A. Rosenfeld, “Connectivity in Digital Pictures,” J. Association for Computing Machin-
    ery, 17, 1, January 1970, 146–160.
11. A. Rosenfeld, Picture Processing by Computer, Academic Press, New York, 1969.
12. M. J. E. Golay, “Hexagonal Pattern Transformation,” IEEE Trans. Computers, C-18, 8,
    August 1969, 733–740.
13. K. Preston, Jr., “Feature Extraction by Golay Hexagonal Pattern Transforms,” IEEE
    Trans. Computers, C-20, 9, September 1971, 1007–1014.
14. F. A. Gerritsen and P. W. Verbeek, “Implementation of Cellular Logic Operators Using
    3 × 3 Convolutions and Lookup Table Hardware,” Computer Vision, Graphics, and
    Image Processing, 27, 1, 1984, 115–123.
15. A. Rosenfeld, “A Characterization of Parallel Thinning Algorithms,” Information and
    Control, 29, 1975, 286–291.
16. T. Pavlidis, “A Thinning Algorithm for Discrete Binary Images,” Computer Graphics
    and Image Processing, 13, 2, 1980, 142–157.
17. W. K. Pratt and I. Kabir, “Morphological Binary Image Processing with a Local Neigh-
    borhood Pipeline Processor,” Computer Graphics, Tokyo, 1984.
18. H. Blum, “A Transformation for Extracting New Descriptors of Shape,” in Symposium
    Models for Perception of Speech and Visual Form, W. Whaten-Dunn, Ed., MIT Press,
    Cambridge, MA, 1967.
19. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley-Inter-
    science, New York, 1973.
20. L. Calabi and W. E. Harnett, “Shape Recognition, Prairie Fires, Convex Deficiencies and
    Skeletons,” American Mathematical Monthly, 75, 4, April 1968, 335–342.
21. J. C. Mott-Smith, “Medial Axis Transforms,” in Picture Processing and Psychopicto-
    rics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.

22. C. Arcelli and G. Sanniti Di Baja, “On the Sequential Approach to Medial Line Thinning
    Transformation,” IEEE Trans. Systems, Man and Cybernetics, SMC-8, 2, 1978, 139–
    144.
23. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psy-
    chopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
24. H. Hadwiger, Vorlesungen über Inhalt, Oberfläche und Isoperimetrie, Springer-Verlag,
    Berlin, 1957.
25. R. M. Haralick, S. R. Steinberg, and X. Zhuang, “Image Analysis Using Mathematical
    Morphology,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-9, 4, July
    1987, 532–550.
26. W. K. Pratt, “Image Processing with Primitive Computational Elements,” McMaster
    University, Hamilton, Ontario, Canada, 1987.
27. X. Zhuang and R. M. Haralick, “Morphological Structuring Element Decomposition,”
    Computer Vision, Graphics, and Image Processing, 35, 3, September 1986, 370–382.
28. S. R. Steinberg, “Grayscale Morphology,” Computer Vision, Graphics, and Image Pro-
    cessing, 35, 3, September 1986, 333–355.




15
EDGE DETECTION




Changes or discontinuities in an image amplitude attribute such as luminance or tri-
stimulus value are fundamentally important primitive characteristics of an image
because they often provide an indication of the physical extent of objects within the
image. Local discontinuities in image luminance from one level to another are called
luminance edges. Global luminance discontinuities, called luminance boundary seg-
ments, are considered in Section 17.4. In this chapter the definition of a luminance
edge is limited to image amplitude discontinuities between reasonably smooth
regions. Discontinuity detection between textured regions is considered in Section
17.5. This chapter also considers edge detection in color images, as well as the
detection of lines and spots within an image.


15.1. EDGE, LINE, AND SPOT MODELS

Figure 15.1-1a is a sketch of a continuous domain, one-dimensional ramp edge
modeled as a ramp increase in image amplitude from a low to a high level, or vice
versa. The edge is characterized by its height, slope angle, and horizontal coordinate
of the slope midpoint. An edge exists if the edge height is greater than a specified
value. An ideal edge detector should produce an edge indication localized to a single
pixel located at the midpoint of the slope. If the slope angle of Figure 15.1-1a is 90°,
the resultant edge is called a step edge, as shown in Figure 15.1-1b. In a digital
imaging system, step edges usually exist only for artificially generated images such
as test patterns and bilevel graphics data. Digital images, resulting from digitization
of optical images of real scenes, generally do not possess step edges because the
antialiasing low-pass filtering prior to digitization reduces the edge slope in the digital
image caused by any sudden luminance change in the scene. The one-dimensional





      FIGURE 15.1-1. One-dimensional, continuous domain edge and line models.



profile of a line is shown in Figure 15.1-1c. In the limit, as the line width w
approaches zero, the resultant amplitude discontinuity is called a roof edge.
   Continuous domain, two-dimensional models of edges and lines assume that the
amplitude discontinuity remains constant in a small neighborhood orthogonal to the
edge or line profile. Figure 15.1-2a is a sketch of a two-dimensional edge. In addi-
tion to the edge parameters of a one-dimensional edge, the orientation of the edge
slope with respect to a reference axis is also important. Figure 15.1-2b defines the
edge orientation nomenclature for edges of an octagonally shaped object whose
amplitude is higher than its background.
   Figure 15.1-3 contains step and unit width ramp edge models in the discrete
domain. The vertical ramp edge model in the figure contains a single transition pixel
whose amplitude is at the midvalue of its neighbors. This edge model can be obtained
 by performing a 2 × 2 pixel moving window average on the vertical step edge




           FIGURE 15.1-2. Two-dimensional, continuous domain edge model.



model. The figure also contains two versions of a diagonal ramp edge. The single-
pixel transition model contains a single midvalue transition pixel between the
regions of high and low amplitude; the smoothed transition model is generated by a
2 × 2 pixel moving window average of the diagonal step edge model. Figure 15.1-3
also presents models for a discrete step and ramp corner edge. The edge location for
discrete step edges is usually marked at the higher-amplitude side of an edge transi-
tion. For the single-pixel transition model and the smoothed transition vertical and
corner edge models, the proper edge location is at the transition pixel. The smoothed
transition diagonal ramp edge model has a pair of adjacent pixels in its transition
zone. The edge is usually marked at the higher-amplitude pixel of the pair. In Figure
15.1-3 the edge pixels are italicized.
   Discrete two-dimensional single-pixel line models are presented in Figure 15.1-4
for step lines and unit width ramp lines. The single-pixel transition model has a mid-
value transition pixel inserted between the high value of the line plateau and the
low-value background. The smoothed transition model is obtained by performing a
2 × 2 pixel moving window average on the step line model.




            FIGURE 15.1-3. Two-dimensional, discrete domain edge models.



    A spot, which can only be defined in two dimensions, consists of a plateau of
high amplitude against a lower amplitude background, or vice versa. Figure 15.1-5
presents single-pixel spot models in the discrete domain.
    There are two generic approaches to the detection of edges, lines, and spots in a
luminance image: differential detection and model fitting. With the differential
detection approach, as illustrated in Figure 15.1-6, spatial processing is performed
on an original image F ( j, k ) to produce a differential image G ( j, k ) with accentu-
ated spatial amplitude changes. Next, a differential detection operation is executed
to determine the pixel locations of significant differentials. The second general
approach to edge, line, or spot detection involves fitting of a local region of pixel
values to a model of the edge, line, or spot, as represented in Figures 15.1-1 to
15.1-5. If the fit is sufficiently close, an edge, line, or spot is said to exist, and its
assigned parameters are those of the appropriate model. A binary indicator map
E ( j, k ) is often generated to indicate the position of edges, lines, or spots within an




             FIGURE 15.1-4. Two-dimensional, discrete domain line models.




image. Typically, edge, line, and spot locations are specified by black pixels against
a white background.
   There are two major classes of differential edge detection: first- and second-order
derivative. For the first-order class, some form of spatial first-order differentiation is
performed, and the resulting edge gradient is compared to a threshold value. An
edge is judged present if the gradient exceeds the threshold. For the second-order
derivative class of differential edge detection, an edge is judged present if there is a
significant spatial change in the polarity of the second derivative.
   Sections 15.2 and 15.3 discuss the first- and second-order derivative forms of
edge detection, respectively. Edge fitting methods of edge detection are considered
in Section 15.4.




      FIGURE 15.1-5. Two-dimensional, discrete domain single pixel spot models.



15.2. FIRST-ORDER DERIVATIVE EDGE DETECTION

There are two fundamental methods for generating first-order derivative edge gradi-
ents. One method involves generation of gradients in two orthogonal directions in an
image; the second utilizes a set of directional derivatives.




               FIGURE 15.1-6. Differential edge, line, and spot detection.



15.2.1. Orthogonal Gradient Generation

An edge in a continuous domain edge segment F ( x, y ) such as the one depicted in
Figure 15.1-2a can be detected by forming the continuous one-dimensional gradient
G ( x, y ) along a line normal to the edge slope, which is at an angle θ with respect to
the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold
value), an edge is deemed present. The gradient along the line normal to the edge
slope can be computed in terms of the derivatives along orthogonal axes according
to the following (1, p. 106)

                  G ( x, y ) = [ ∂F ( x, y ) ⁄ ∂x ] cos θ + [ ∂F ( x, y ) ⁄ ∂y ] sin θ       (15.2-1)

   Figure 15.2-1 describes the generation of an edge gradient G ( x, y ) in the discrete
domain in terms of a row gradient G R ( j, k ) and a column gradient G C ( j, k ) . The
spatial gradient amplitude is given by

                         G ( j, k ) = [ [ G R ( j, k ) ]² + [ G C ( j, k ) ]² ]^(1 ⁄ 2)      (15.2-2)


For computational efficiency, the gradient amplitude is sometimes approximated by
the magnitude combination


                               G ( j, k ) = |G R ( j, k )| + |G C ( j, k )|                  (15.2-3)




                    FIGURE 15.2-1. Orthogonal gradient generation.


The orientation of the spatial gradient with respect to the row axis is

                          θ ( j, k ) = arctan { G C ( j, k ) ⁄ G R ( j, k ) }                (15.2-4)

The remaining issue for discrete domain orthogonal gradient generation is to choose
a good discrete approximation to the continuous differentials of Eq. 15.2-1.
   The simplest method of discrete gradient generation is to form the running differ-
ence of pixels along rows and columns of the image. The row gradient is defined as


                             G R ( j, k ) = F ( j, k ) – F ( j, k – 1 )      (15.2-5a)


and the column gradient is


                             G C ( j, k ) = F ( j, k ) – F ( j + 1, k )      (15.2-5b)


These definitions of row and column gradients, and subsequent extensions, are cho-
sen such that GR and GC are positive for an edge that increases in amplitude from
left to right and from bottom to top in an image.
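   A minimal sketch of the running-difference gradients and of the amplitude and orientation
formulas of Eqs. 15.2-2 to 15.2-5, assuming a floating-point image in a numpy array; the first
column of the row gradient and the last row of the column gradient are left at zero because the
differences there would need pixels outside the image.

    import numpy as np

    def pixel_difference_gradients(F):
        F = F.astype(float)
        GR = np.zeros_like(F)
        GC = np.zeros_like(F)
        GR[:, 1:] = F[:, 1:] - F[:, :-1]       # Eq. 15.2-5a: F(j, k) - F(j, k-1)
        GC[:-1, :] = F[:-1, :] - F[1:, :]      # Eq. 15.2-5b: F(j, k) - F(j+1, k)
        return GR, GC

    def gradient_amplitude_and_angle(GR, GC):
        G = np.hypot(GR, GC)                   # Eq. 15.2-2; |GR| + |GC| is the cheaper Eq. 15.2-3
        theta = np.arctan2(GC, GR)             # Eq. 15.2-4, resolved over the full angular range
        return G, theta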
    As an example of the response of a pixel difference edge detector, the following
is the row gradient along the center row of the vertical step edge model of Figure
15.1-3:


                               0 0 0 0 h 0 0 0 0


In this sequence, h = b – a is the step edge height. The row gradient for the vertical
ramp edge model is


                                0 0 0 0 h ⁄ 2 h ⁄ 2 0 0 0


For ramp edges, the running difference edge detector cannot localize the edge to a
single pixel. Figure 15.2-2 provides examples of horizontal and vertical differencing
gradients of the monochrome peppers image. In this and subsequent gradient display
photographs, the gradient range has been scaled over the full contrast range of the
photograph. It is visually apparent from the photograph that the running difference
technique is highly susceptible to small fluctuations in image luminance and that the
object boundaries are not well delineated.




FIGURE 15.2-2. Horizontal and vertical differencing gradients of the peppers_mon image:
(a) original; (b) horizontal magnitude; (c) vertical magnitude.



  Diagonal edge gradients can be obtained by forming running differences of diag-
onal pairs of pixels. This is the basis of the Roberts (2) cross-difference operator,
which is defined in magnitude form as


                             G ( j, k ) = |G 1 ( j, k )| + |G 2 ( j, k )|                    (15.2-6a)


and in square-root form as

                       G ( j, k ) = [ [ G 1 ( j, k ) ]² + [ G 2 ( j, k ) ]² ]^(1 ⁄ 2)        (15.2-6b)


where

                           G 1 ( j, k ) = F ( j , k ) – F ( j + 1 , k + 1 )                  (15.2-6c)

                           G 2 ( j, k ) = F ( j, k + 1 ) – F ( j + 1, k )                    (15.2-6d)


The edge orientation with respect to the row axis is

                     θ ( j, k ) = π ⁄ 4 + arctan { G 2 ( j, k ) ⁄ G 1 ( j, k ) }             (15.2-7)

Figure 15.2-3 presents the edge gradients of the peppers image for the Roberts oper-
ators. Visually, the objects in the image appear to be slightly better distinguished
with the Roberts square-root gradient than with the magnitude gradient. In Section
15.5, a quantitative evaluation of edge detectors confirms the superiority of the
square-root combination technique.
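   A sketch of the Roberts cross-difference operator in both combination forms, assuming a
floating-point numpy image; the last row and column stay zero because the diagonal differences
need a (j + 1, k + 1) neighbor.

    import numpy as np

    def roberts_gradients(F):
        F = F.astype(float)
        G1 = np.zeros_like(F)
        G2 = np.zeros_like(F)
        G1[:-1, :-1] = F[:-1, :-1] - F[1:, 1:]     # Eq. 15.2-6c: F(j, k) - F(j+1, k+1)
        G2[:-1, :-1] = F[:-1, 1:] - F[1:, :-1]     # Eq. 15.2-6d: F(j, k+1) - F(j+1, k)
        magnitude = np.abs(G1) + np.abs(G2)        # Eq. 15.2-6a
        square_root = np.hypot(G1, G2)             # Eq. 15.2-6b
        theta = np.pi / 4 + np.arctan2(G2, G1)     # Eq. 15.2-7
        return square_root, magnitude, theta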
   The pixel difference method of gradient generation can be modified to localize
the edge center of the ramp edge model of Figure 15.1-3 by forming the pixel differ-
ence separated by a null value. The row and column gradients then become

                          G R ( j, k ) = F ( j, k + 1 ) – F ( j, k – 1 )                     (15.2-8a)

                          GC ( j, k ) = F ( j – 1, k ) – F ( j + 1, k )                      (15.2-8b)

The row gradient response for a vertical ramp edge model is then


                                           h    h
                                       0 0 -- h -- 0 0
                                            -    -
                                           2    2




            FIGURE 15.2-3. Roberts gradients of the peppers_mon image: (a) magnitude; (b) square root.




        FIGURE 15.2-4. Numbering convention for 3 × 3 edge detection operators.



Although the ramp edge is properly localized, the separated pixel difference gradi-
ent generation method remains highly sensitive to small luminance fluctuations in
the image. This problem can be alleviated by using two-dimensional gradient forma-
tion operators that perform differentiation in one coordinate direction and spatial
averaging in the orthogonal direction simultaneously.
   Prewitt (1, p. 108) has introduced a 3 × 3 pixel edge gradient operator described
by the pixel numbering convention of Figure 15.2-4. The Prewitt operator square
root edge gradient is defined as

                          G ( j, k ) = [ [ G R ( j, k ) ]² + [ G C ( j, k ) ]² ]^(1 ⁄ 2)     (15.2-9a)


with


                 G R ( j, k ) = [ 1 ⁄ ( K + 2 ) ] [ ( A 2 + KA 3 + A 4 ) – ( A 0 + KA 7 + A 6 ) ]   (15.2-9b)

                 G C ( j, k ) = [ 1 ⁄ ( K + 2 ) ] [ ( A 0 + KA 1 + A 2 ) – ( A 6 + KA 5 + A 4 ) ]   (15.2-9c)


where K = 1. In this formulation, the row and column gradients are normalized to
provide unit-gain positive and negative weighted averages about a separated edge
position. The Sobel operator edge detector (3, p. 271) differs from the Prewitt edge
detector in that the values of the north, south, east, and west pixels are doubled (i. e.,
K = 2). The motivation for this weighting is to give equal importance to each pixel
in terms of its contribution to the spatial gradient. Frei and Chen (4) have proposed
north, south, east, and west weightings by K = √2 so that the gradient is the same
for horizontal, vertical, and diagonal edges. The edge gradient G ( j, k ) for these three
operators along a row through the single pixel transition vertical ramp edge model
of Figure 15.1-3 is


                                       0 0 h ⁄ 2 h h ⁄ 2 0 0


Along a row through the single transition pixel diagonal ramp edge model, the gra-
dient is


       0    h ⁄ [ √2 ( 2 + K ) ]    h ⁄ √2    √2 ( 1 + K ) h ⁄ ( 2 + K )    h ⁄ √2    h ⁄ [ √2 ( 2 + K ) ]    0


In the Frei–Chen operator with K = √2 , the edge gradient is the same at the edge
center for the single-pixel transition vertical and diagonal ramp edge models.
The Prewitt gradient for a diagonal edge is 0.94 times that of a vertical edge. The




 FIGURE 15.2-5. Prewitt, Sobel, and Frei–Chen gradients of the peppers_mon image:
 (a) Prewitt; (b) Sobel; (c) Frei–Chen.

corresponding factor for a Sobel edge detector is 1.06. Consequently, the Prewitt
operator is more sensitive to horizontal and vertical edges than to diagonal edges;
the reverse is true for the Sobel operator. The gradients along a row through the
smoothed transition diagonal ramp edge model are different for vertical and diago-
nal edges for all three of the 3 × 3 edge detectors. None of them are able to localize
the edge to a single pixel.
   Figure 15.2-5 shows examples of the Prewitt, Sobel, and Frei–Chen gradients of
the peppers image. The reason that these operators visually appear to better delin-
eate object edges than the Roberts operator is attributable to their larger size, which
provides averaging of small luminance fluctuations.
   The row and column gradients for all the edge detectors mentioned previously in
this subsection involve a linear combination of pixels within a small neighborhood.
Consequently, the row and column gradients can be computed by the convolution
relationships


                             G R ( j, k ) = F ( j, k ) ⊛ H R ( j, k )                     (15.2-10a)

                             G C ( j, k ) = F ( j, k ) ⊛ H C ( j, k )                     (15.2-10b)


where H R ( j, k ) and H C ( j, k ) are 3 × 3 row and column impulse response arrays,
respectively, as defined in Figure 15.2-6. It should be noted that this specification of
the gradient impulse response arrays takes into account the 180° rotation of an
impulse response array inherent to the definition of convolution in Eq. 7.1-14.
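   A sketch of the parameterized 3 × 3 operators, assuming scipy; the kernels below are written in
correlation form (each weight multiplies the pixel at the corresponding offset), so they are the
180° rotations of the convolution impulse response arrays of Figure 15.2-6, and K = 1, 2, and √2
give the Prewitt, Sobel, and Frei–Chen operators of Eq. 15.2-9.

    import numpy as np
    from scipy import ndimage

    def orthogonal_gradients(F, K=2.0):
        F = F.astype(float)
        wr = np.array([[-1.0, 0.0, 1.0],
                       [  -K, 0.0,   K],
                       [-1.0, 0.0, 1.0]]) / (K + 2.0)     # row gradient weights, Eq. 15.2-9b
        wc = np.array([[ 1.0,   K,  1.0],
                       [ 0.0, 0.0,  0.0],
                       [-1.0,  -K, -1.0]]) / (K + 2.0)    # column gradient weights, Eq. 15.2-9c
        GR = ndimage.correlate(F, wr, mode='nearest')
        GC = ndimage.correlate(F, wc, mode='nearest')
        return np.hypot(GR, GC), np.arctan2(GC, GR)       # Eqs. 15.2-9a and 15.2-4

Calling orthogonal_gradients(image, K=1.0) gives the Prewitt result and K=np.sqrt(2.0) the
Frei–Chen result.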
   A limitation common to the edge gradient generation operators previously
defined is their inability to detect accurately edges in high-noise environments. This
problem can be alleviated by properly extending the size of the neighborhood opera-
tors over which the differential gradients are computed. As an example, a Prewitt-
type 7 × 7 operator has a row gradient impulse response of the form



                                  1      1      1      0     –1     –1   –1
                                  1      1      1      0     –1     –1   –1
                                  1      1      1      0     –1     –1   –1
                    H R = ( 1 ⁄ 21 )  1      1      1      0     –1     –1   –1    (15.2-11)
                                  1      1      1      0     –1     –1   –1
                                  1      1      1      0     –1     –1   –1
                                  1      1      1      0     –1     –1   –1


An operator of this type is called a boxcar operator. Figure 15.2-7 presents the box-
car gradient of a 7 × 7 array.




FIGURE 15.2-6. Impulse response arrays for 3 × 3 orthogonal differential gradient edge
operators.



   Abdou (5) has suggested a truncated pyramid operator that gives a linearly
decreasing weighting to pixels away from the center of an edge. The row gradient
impulse response array for a 7 × 7 truncated pyramid operator is given by


        $H_R = \frac{1}{34} \begin{bmatrix}
        1 & 1 & 1 & 0 & -1 & -1 & -1 \\
        1 & 2 & 2 & 0 & -2 & -2 & -1 \\
        1 & 2 & 3 & 0 & -3 & -2 & -1 \\
        1 & 2 & 3 & 0 & -3 & -2 & -1 \\
        1 & 2 & 3 & 0 & -3 & -2 & -1 \\
        1 & 2 & 2 & 0 & -2 & -2 & -1 \\
        1 & 1 & 1 & 0 & -1 & -1 & -1
        \end{bmatrix}$                    (15.2-12)




FIGURE 15.2-7. Boxcar, truncated pyramid, Argyle, Macleod, and FDOG gradients of the
peppers_mon image: (a) 7 × 7 boxcar; (b) 9 × 9 truncated pyramid; (c) 11 × 11 Argyle, s = 2.0;
(d) 11 × 11 Macleod, s = 2.0; (e) 11 × 11 FDOG, s = 2.0.


Argyle (6) and Macleod (7,8) have proposed large neighborhood Gaussian-shaped
weighting functions as a means of noise suppression. Let

        $g(x, s) = (2 \pi s^2)^{-1/2} \exp\{ -\tfrac{1}{2} (x / s)^2 \}$                    (15.2-13)


denote a continuous domain Gaussian function with standard deviation s. Utilizing
this notation, the Argyle operator horizontal coordinate impulse response array can
be expressed as a sampled version of the continuous domain impulse response

        $H_R(j, k) = \begin{cases} -2\, g(x, s)\, g(y, t) & \text{for } x \ge 0 \\ 2\, g(x, s)\, g(y, t) & \text{for } x < 0 \end{cases}$          (15.2-14a, 15.2-14b)


where s and t are spread parameters. The vertical impulse response function can be
expressed similarly. The Macleod operator horizontal gradient impulse response
function is given by

        $H_R(j, k) = [\, g(x + s, s) - g(x - s, s)\,]\, g(y, t)$                    (15.2-15)

The Argyle and Macleod operators, unlike the boxcar operator, give decreasing
importance to pixels far removed from the center of the neighborhood. Figure
15.2-7 provides examples of the Argyle and Macleod gradients.
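
The sketch below (added here, not from the original) samples the Argyle and Macleod row gradient kernels of Eqs. 15.2-14 and 15.2-15 over a square window; the 11 × 11 size and s = t = 2.0 values mirror the examples of Figure 15.2-7.

    import numpy as np

    def gauss(u, sd):
        """Continuous domain Gaussian of Eq. 15.2-13, sampled at u."""
        return np.exp(-0.5 * (u / sd) ** 2) / np.sqrt(2.0 * np.pi * sd * sd)

    def argyle_row_kernel(size=11, s=2.0, t=2.0):
        half = size // 2
        coords = np.arange(-half, half + 1, dtype=float)
        xx, yy = np.meshgrid(coords, coords)   # xx: horizontal (x), yy: vertical (y)
        h = 2.0 * gauss(xx, s) * gauss(yy, t)
        h[xx >= 0] *= -1.0                     # Eq. 15.2-14: sign flips at x = 0
        return h

    def macleod_row_kernel(size=11, s=2.0, t=2.0):
        half = size // 2
        coords = np.arange(-half, half + 1, dtype=float)
        xx, yy = np.meshgrid(coords, coords)
        return (gauss(xx + s, s) - gauss(xx - s, s)) * gauss(yy, t)   # Eq. 15.2-15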
   Extended-size differential gradient operators can be considered to be compound
operators in which a smoothing operation is performed on a noisy image followed
by a differentiation operation. The compound gradient impulse response can be
written as

        $H(j, k) = H_G(j, k) \circledast H_S(j, k)$                    (15.2-16)

where $H_G(j, k)$ is one of the gradient impulse response operators of Figure 15.2-6
and $H_S(j, k)$ is a low-pass filter impulse response. For example, if $H_G(j, k)$ is the
3 × 3 Prewitt row gradient operator and $H_S(j, k) = 1/9$, for all $(j, k)$, is a 3 × 3 uni-
form smoothing operator, the resultant 5 × 5 row gradient operator, after normaliza-
tion to unit positive and negative gain, becomes


        $H_R = \frac{1}{18} \begin{bmatrix}
        1 & 1 & 0 & -1 & -1 \\
        2 & 2 & 0 & -2 & -2 \\
        3 & 3 & 0 & -3 & -3 \\
        2 & 2 & 0 & -2 & -2 \\
        1 & 1 & 0 & -1 & -1
        \end{bmatrix}$                    (15.2-17)

The decomposition of Eq. 15.2-16 applies in both directions. By applying the SVD/
SGK decomposition of Section 9.6, it is possible, for example, to decompose a 5 × 5
boxcar operator into the sequential convolution of a 3 × 3 smoothing kernel and a
3 × 3 differentiating kernel.
   A well-known example of a compound gradient operator is the first derivative of
Gaussian (FDOG) operator, in which Gaussian-shaped smoothing is followed by
differentiation (9). The FDOG continuous domain horizontal impulse response is


        $H_R(j, k) = -\frac{\partial [\, g(x, s)\, g(y, t)\,]}{\partial x}$                    (15.2-18a)


which upon differentiation yields


        $H_R(j, k) = -\frac{x\, g(x, s)\, g(y, t)}{s^2}$                    (15.2-18b)

Figure 15.2-7 presents an example of the FDOG gradient.
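
A corresponding sketch for a sampled FDOG row gradient kernel is shown below (an added illustration; the window size, spread parameters, and the sign convention of Eq. 15.2-18b are assumptions).

    import numpy as np

    def fdog_row_kernel(size=11, s=2.0, t=2.0):
        """Sampled first derivative of Gaussian row gradient, after Eq. 15.2-18b."""
        half = size // 2
        coords = np.arange(-half, half + 1, dtype=float)
        xx, yy = np.meshgrid(coords, coords)
        g = lambda u, sd: np.exp(-0.5 * (u / sd) ** 2) / np.sqrt(2.0 * np.pi * sd * sd)
        # Flip the leading sign if the opposite edge polarity convention is wanted.
        return -xx * g(xx, s) * g(yy, t) / (s * s)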
      All of the differential edge enhancement operators presented previously in this
subsection have been derived heuristically. Canny (9) has taken an analytic
approach to the design of such operators. Canny's development is based on a one-
dimensional continuous domain model of a step edge of amplitude hE plus additive
white Gaussian noise with standard deviation σ n . It is assumed that edge detection
is performed by convolving a one-dimensional continuous domain noisy edge signal
 f ( x ) with an antisymmetric impulse response function h ( x ) , which is of zero
amplitude outside the range [ – W, W ] . An edge is marked at the local maximum of
the convolved gradient $f(x) \circledast h(x)$. The Canny operator impulse response $h(x)$ is
chosen to satisfy the following three criteria.

   1. Good detection. The amplitude signal-to-noise ratio (SNR) of the gradient is
      maximized to obtain a low probability of failure to mark real edge points and a
      low probability of falsely marking nonedge points. The SNR for the model is

        $\mathrm{SNR} = \frac{h_E\, S(h)}{\sigma_n}$                    (15.2-19a)

       with


        $S(h) = \frac{\int_{-W}^{0} h(x)\, dx}{\left[ \int_{-W}^{W} [h(x)]^2\, dx \right]^{1/2}}$                    (15.2-19b)


   2. Good localization. Edge points marked by the operator should be as close to
      the center of the edge as possible. The localization factor is defined as
        $\mathrm{LOC} = \frac{h_E\, L(h)}{\sigma_n}$                    (15.2-20a)

       with
        $L(h) = \frac{h'(0)}{\left[ \int_{-W}^{W} [h'(x)]^2\, dx \right]^{1/2}}$                    (15.2-20b)

       where h′ ( x ) is the derivative of h ( x ) .
   3. Single response. There should be only a single response to a true edge. The
      distance between peaks of the gradient when only noise is present, denoted as
      xm, is set to some fraction k of the operator width factor W. Thus

        $x_m = kW$                    (15.2-21)

Canny has combined these three criteria by maximizing the product S ( h )L ( h ) subject
to the constraint of Eq. 15.2-21. Because of the complexity of the formulation, no
analytic solution has been found, but a variational approach has been developed.
Figure 15.2-8 contains plots of the Canny impulse response functions in terms of xm.




FIGURE 15.2-8. Comparison of Canny and first derivative of Gaussian impulse response
functions.

As noted from the figure, for low values of xm, the Canny function resembles a box-
car function, while for xm large, the Canny function is closely approximated by a
FDOG impulse response function.
   Discrete domain versions of the large operators defined in the continuous domain
can be obtained by sampling their continuous impulse response functions over some
W × W window. The window size should be chosen sufficiently large that truncation
of the impulse response function does not cause high-frequency artifacts. Demigny
and Kamie (10) have developed a discrete version of Canny’s criteria, which leads to
the computation of discrete domain edge detector impulse response arrays.
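
As a rough numerical illustration of Canny's first two criteria (added here; the quadrature follows the reconstructed Eqs. 15.2-19b and 15.2-20b, and the trial impulse responses are arbitrary), the detection and localization factors of a candidate h(x) can be evaluated directly:

    import numpy as np

    def canny_factors(h, w, n=2001):
        """S(h) and L(h) for a callable impulse response h on [-w, w]."""
        x = np.linspace(-w, w, n)
        hx = h(x)
        hpx = np.gradient(hx, x)      # numerical h'(x)
        dx = x[1] - x[0]
        # Magnitudes are taken so both factors come out positive.
        s = np.abs(hx[x <= 0].sum() * dx) / np.sqrt((hx ** 2).sum() * dx)      # Eq. 15.2-19b
        loc = np.abs(np.interp(0.0, x, hpx)) / np.sqrt((hpx ** 2).sum() * dx)  # Eq. 15.2-20b
        return s, loc

    # Example: a linear antisymmetric h(x) versus an FDOG-shaped h(x) on [-1, 1].
    s_lin, loc_lin = canny_factors(lambda x: -x, w=1.0)
    s_fdog, loc_fdog = canny_factors(lambda x: -x * np.exp(-2.0 * x * x), w=1.0)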


15.2.2. Edge Template Gradient Generation

With the orthogonal differential edge enhancement techniques discussed previously,
edge gradients are computed in two orthogonal directions, usually along rows and
columns, and then the edge direction is inferred by computing the vector sum of the
gradients. Another approach is to compute gradients in a large number of directions
by convolution of an image with a set of template gradient impulse response arrays.
The edge template gradient is defined as

        $G(j, k) = \mathrm{MAX} \{\, G_1(j, k), \ldots, G_m(j, k), \ldots, G_M(j, k) \,\}$                    (15.2-22a)

where


        $G_m(j, k) = F(j, k) \circledast H_m(j, k)$                    (15.2-22b)


is the gradient in the mth equispaced direction obtained by convolving an image
with a gradient impulse response array Hm ( j, k ). The edge angle is determined by the
direction of the largest gradient.
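
A sketch of this template-matching computation is given below (added here; the template set itself would be taken from Figure 15.2-9 or 15.2-11).

    import numpy as np
    from scipy.ndimage import convolve

    def template_gradient(image, templates):
        """Eq. 15.2-22: maximum response over a list of impulse response arrays."""
        responses = np.stack([convolve(image, h) for h in templates])  # Eq. 15.2-22b
        gradient = responses.max(axis=0)                               # Eq. 15.2-22a
        direction = responses.argmax(axis=0)    # index m of the winning template
        return gradient, direction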
    Figure 15.2-9 defines eight gain-normalized compass gradient impulse response
arrays suggested by Prewitt (1, p. 111). The compass names indicate the slope direc-
tion of maximum response. Kirsch (11) has proposed a directional gradient defined
by

        $G(j, k) = \mathop{\mathrm{MAX}}_{i = 0}^{7} \{\, 5 S_i - 3 T_i \,\}$                    (15.2-23a)

where


        $S_i = A_i + A_{i+1} + A_{i+2}$                    (15.2-23b)

        $T_i = A_{i+3} + A_{i+4} + A_{i+5} + A_{i+6} + A_{i+7}$                    (15.2-23c)




           FIGURE 15.2-9. Template gradient 3 × 3 impulse response arrays.




The subscripts of A i are evaluated modulo 8. It is possible to compute the Kirsch
gradient by convolution as in Eq. 15.2-22b. Figure 15.2-9 specifies the gain-normal-
ized Kirsch operator impulse response arrays. This figure also defines two other sets
of gain-normalized impulse response arrays proposed by Robinson (12), called the
Robinson three-level operator and the Robinson five-level operator, which are
derived from the Prewitt and Sobel operators, respectively. Figure 15.2-10 provides
a comparison of the edge gradients of the peppers image for the four 3 × 3 template
gradient operators.




       FIGURE 15.2-10. 3 × 3 template gradients of the peppers_mon image: (a) Prewitt compass
       gradient; (b) Kirsch; (c) Robinson three-level; (d) Robinson five-level.



   Nevatia and Babu (13) have developed an edge detection technique in which the
gain-normalized 5 × 5 masks defined in Figure 15.2-11 are utilized to detect edges
in 30° increments. Figure 15.2-12 shows the template gradients for the peppers
image. Larger template masks will provide both a finer quantization of the edge ori-
entation angle and a greater noise immunity, but the computational requirements
increase. Paplinski (14) has developed a design procedure for n-directional template
masks of arbitrary size.


15.2.3. Threshold Selection

After the edge gradient is formed for the differential edge detection methods, the
gradient is compared to a threshold to determine if an edge exists. The threshold
value determines the sensitivity of the edge detector. For noise-free images, the




      FIGURE 15.2-11. Nevatia–Babu template gradient impulse response arrays.




         FIGURE 15.2-12. Nevatia–Babu gradient of the peppers_mon image.



threshold can be chosen such that all amplitude discontinuities of a minimum con-
trast level are detected as edges, and all others are called nonedges. With noisy
images, threshold selection becomes a trade-off between missing valid edges and
creating noise-induced false edges.
   Edge detection can be regarded as a hypothesis-testing problem to determine if
an image region contains an edge or contains no edge (15). Let P(edge) and P(no-
edge) denote the a priori probabilities of these events. Then the edge detection pro-
cess can be characterized by the probability of correct edge detection,

        $P_D = \int_{t}^{\infty} p(G \mid \text{edge})\, dG$                    (15.2-24a)


and the probability of false detection,


        $P_F = \int_{t}^{\infty} p(G \mid \text{no-edge})\, dG$                    (15.2-24b)


where t is the edge detection threshold and p(G|edge) and p(G|no-edge) are the con-
ditional probability densities of the edge gradient G ( j, k ). Figure 15.2-13 is a sketch
of typical edge gradient conditional densities. The probability of edge misclassifica-
tion error can be expressed as


        $P_E = (1 - P_D)\, P(\text{edge}) + P_F\, P(\text{no-edge})$                    (15.2-25)




        FIGURE 15.2-13. Typical edge gradient conditional probability densities.



This error will be minimum if the threshold is chosen such that an edge is deemed
present when

        $\frac{p(G \mid \text{edge})}{p(G \mid \text{no-edge})} \ge \frac{P(\text{no-edge})}{P(\text{edge})}$                    (15.2-26)

and the no-edge hypothesis is accepted otherwise. Equation 15.2-26 defines the
well-known maximum likelihood ratio test associated with the Bayes minimum
error decision rule of classical decision theory (16). Another common decision strat-
egy, called the Neyman–Pearson test, is to choose the threshold t to minimize P F for
a fixed acceptable P D (16).
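
As a small added illustration of the Neyman–Pearson strategy (the sample-based threshold choice below is an assumption, not the text's procedure), the threshold can be set from empirical gradient samples:

    import numpy as np

    def neyman_pearson_threshold(noedge_grads, edge_grads, pf_max=0.01):
        """Choose t so the false-detection rate on no-edge samples is at most pf_max,
        then report the resulting detection probability, in the sense of Eq. 15.2-24."""
        t = np.quantile(noedge_grads, 1.0 - pf_max)
        pd = float(np.mean(edge_grads > t))
        return t, pd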
    Application of a statistical decision rule to determine the threshold value requires
knowledge of the a priori edge probabilities and the conditional densities of the edge
gradient. The a priori probabilities can be estimated from images of the class under
analysis. Alternatively, the a priori probability ratio can be regarded as a sensitivity
control factor for the edge detector. The conditional densities can be determined, in
principle, for a statistical model of an ideal edge plus noise. Abdou (5) has derived
these densities for 2 × 2 and 3 × 3 edge detection operators for the case of a ramp
edge of width w = 1 and additive Gaussian noise. Henstock and Chelberg (17) have
used gamma densities as models of the conditional probability densities.
    There are two difficulties associated with the statistical approach of determining
the optimum edge detector threshold: reliability of the stochastic edge model and
analytic difficulties in deriving the edge gradient conditional densities. Another
approach, developed by Abdou and Pratt (5,15), which is based on pattern recogni-
tion techniques, avoids the difficulties of the statistical method. The pattern recogni-
tion method involves creation of a large number of prototype noisy image regions,
some of which contain edges and some of which do not. These prototypes are then
used as a training set to find the threshold that minimizes the classification
error. Details of the design procedure are found in Reference 5. Table 15.2-1
TABLE 15.2-1. Threshold Levels and Associated Edge Detection Probabilities for 3 × 3 Edge
Detectors as Determined by the Abdou and Pratt Pattern Recognition Design Procedure

                                          Vertical Edge                       Diagonal Edge
                                     SNR = 1         SNR = 10           SNR = 1         SNR = 10
Operator                          tN    PD    PF    tN    PD    PF    tN    PD    PF    tN    PD    PF
Roberts orthogonal gradient      1.36 0.559 0.400  0.67 0.892 0.105  1.74 0.551 0.469  0.78 0.778 0.221
Prewitt orthogonal gradient      1.16 0.608 0.384  0.66 0.912 0.480  1.19 0.593 0.387  0.64 0.931 0.064
Sobel orthogonal gradient        1.18 0.600 0.395  0.66 0.923 0.057  1.14 0.604 0.376  0.63 0.947 0.053
Prewitt compass template         1.52 0.613 0.466  0.73 0.886 0.136  1.51 0.618 0.472  0.71 0.900 0.153
  gradient
Kirsch template gradient         1.43 0.531 0.341  0.69 0.898 0.058  1.45 0.524 0.324  0.79 0.825 0.023
Robinson three-level template    1.16 0.590 0.369  0.65 0.926 0.038  1.16 0.587 0.365  0.61 0.946 0.056
  gradient
Robinson five-level template     1.24 0.581 0.361  0.66 0.924 0.049  1.22 0.593 0.374  0.65 0.931 0.054
  gradient




FIGURE 15.2-14. Threshold sensitivity of the Sobel and first derivative of Gaussian edge
detectors for the peppers_mon image: (a) Sobel, t = 0.06; (b) FDOG, t = 0.08; (c) Sobel,
t = 0.08; (d) FDOG, t = 0.10; (e) Sobel, t = 0.10; (f) FDOG, t = 0.12.

provides a tabulation of the optimum threshold for several 2 × 2 and 3 × 3 edge
detectors for an experimental design with an evaluation set of 250 prototypes not in
the training set (15). The table also lists the probability of correct and false edge
detection as defined by Eq. 15.2-24 for theoretically derived gradient conditional
densities. In the table, the threshold is normalized such that $t_N = t / G_M$, where $G_M$
is the maximum amplitude of the gradient in the absence of noise. The power signal-
to-noise ratio is defined as $\mathrm{SNR} = (h / \sigma_n)^2$, where h is the edge height and $\sigma_n$ is the
noise standard deviation. In most of the cases of Table 15.2-1, the optimum thresh-
old results in approximately equal error probabilities (i.e., PF = 1 – P D ). This is the
same result that would be obtained by the Bayes design procedure when edges and
nonedges are equally probable. The tests associated with Table 15.2-1 were con-
ducted with relatively low signal-to-noise ratio images. Section 15.5 provides exam-
ples of such images. For high signal-to-noise ratio images, the optimum threshold is
much lower. As a rule of thumb, under the condition that P F = 1 – P D , the edge
detection threshold can be scaled linearly with signal-to-noise ratio. Hence, for an
image with SNR = 100, the threshold is about 10% of the peak gradient value.
    Figure 15.2-14 shows the effect of varying the first derivative edge detector
threshold for the 3 × 3 Sobel and the 11 × 11 FDOG edge detectors for the peppers
image, which is a relatively high signal-to-noise ratio image. For both edge detec-
tors, variation of the threshold provides a trade-off between delineation of strong
edges and definition of weak edges.

15.2.4. Morphological Post Processing

It is possible to improve edge delineation of first-derivative edge detectors by apply-
ing morphological operations on their edge maps. Figure 15.2-15 provides examples
for the 3 × 3 Sobel and 11 × 11 FDOG edge detectors. In the Sobel example, the
threshold is lowered slightly to improve the detection of weak edges. Then the mor-
phological majority black operation is performed on the edge map to eliminate
noise-induced edges. This is followed by the thinning operation to thin the edges to
minimally connected lines. In the FDOG example, the majority black noise smooth-
ing step is not necessary.
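
A rough sketch of such a post-processing chain follows (added here; the 3 × 3 majority filter and skimage's skeletonize are stand-ins for the text's majority black and thinning operations).

    import numpy as np
    from scipy.ndimage import convolve
    from skimage.morphology import skeletonize

    def clean_edge_map(edge_map):
        """edge_map: boolean array, True where the gradient exceeded the threshold."""
        counts = convolve(edge_map.astype(int), np.ones((3, 3), dtype=int))
        # Keep pixels whose 3x3 neighborhood is mostly edge (noise suppression).
        majority = edge_map & (counts >= 5)
        return skeletonize(majority)    # thin to minimally connected lines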

15.3. SECOND-ORDER DERIVATIVE EDGE DETECTION

Second-order derivative edge detection techniques employ some form of spatial sec-
ond-order differentiation to accentuate edges. An edge is marked if a significant spa-
tial change occurs in the second derivative. Two types of second-order derivative
methods are considered: Laplacian and directed second derivative.

15.3.1. Laplacian Generation

The edge Laplacian of an image function F ( x, y ) in the continuous domain is
defined as




 FIGURE 15.2-15. Morphological thinning of edge maps for the peppers_mon image: (a) Sobel,
 t = 0.07; (b) Sobel majority black; (c) Sobel thinned; (d) FDOG, t = 0.11; (e) FDOG thinned.

        $G(x, y) = -\nabla^2 \{ F(x, y) \}$                    (15.3-1a)

where, from Eq. 1.2-17, the Laplacian is

        $\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2}$                    (15.3-1b)

The Laplacian G ( x, y ) is zero if F ( x, y ) is constant or changing linearly in ampli-
tude. If the rate of change of F ( x, y ) is greater than linear, G ( x, y ) exhibits a sign
change at the point of inflection of F ( x, y ). The zero crossing of G ( x, y ) indicates
the presence of an edge. The negative sign in the definition of Eq. 15.3-1a is present
so that the zero crossing of G ( x, y ) has a positive slope for an edge whose amplitude
increases from left to right or bottom to top in an image.
   Torre and Poggio (18) have investigated the mathematical properties of the
Laplacian of an image function. They have found that if F ( x, y ) meets certain
smoothness constraints, the zero crossings of G ( x, y ) are closed curves.
   In the discrete domain, the simplest approximation to the continuous Laplacian is
to compute the difference of slopes along each axis:

        $G(j, k) = [F(j, k) - F(j, k-1)] - [F(j, k+1) - F(j, k)]$
        $\qquad\quad + [F(j, k) - F(j+1, k)] - [F(j-1, k) - F(j, k)]$                    (15.3-2)

This four-neighbor Laplacian (1, p. 111) can be generated by the convolution opera-
tion

        $G(j, k) = F(j, k) \circledast H(j, k)$                    (15.3-3)


with

        $H = \begin{bmatrix} 0 & 0 & 0 \\ -1 & 2 & -1 \\ 0 & 0 & 0 \end{bmatrix} + \begin{bmatrix} 0 & -1 & 0 \\ 0 & 2 & 0 \\ 0 & -1 & 0 \end{bmatrix}$                    (15.3-4a)
or

        $H = \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$                    (15.3-4b)

where the two arrays of Eq. 15.3-4a correspond to the second derivatives along
image rows and columns, respectively, as in the continuous Laplacian of Eq. 15.3-1b.
The four-neighbor Laplacian is often normalized to provide unit-gain averages of the
positive weighted and negative weighted pixels in the 3 × 3 pixel neighborhood. The
gain-normalized four-neighbor Laplacian impulse response is defined by


        $H = \frac{1}{4} \begin{bmatrix} 0 & -1 & 0 \\ -1 & 4 & -1 \\ 0 & -1 & 0 \end{bmatrix}$                    (15.3-5)

Prewitt (1, p. 111) has suggested an eight-neighbor Laplacian defined by the gain-
normalized impulse response array


        $H = \frac{1}{8} \begin{bmatrix} -1 & -1 & -1 \\ -1 & 8 & -1 \\ -1 & -1 & -1 \end{bmatrix}$                    (15.3-6)

This array is not separable into a sum of second derivatives, as in Eq. 15.3-4a. A
separable eight-neighbor Laplacian can be obtained by the construction


        $H = \begin{bmatrix} -1 & 2 & -1 \\ -1 & 2 & -1 \\ -1 & 2 & -1 \end{bmatrix} + \begin{bmatrix} -1 & -1 & -1 \\ 2 & 2 & 2 \\ -1 & -1 & -1 \end{bmatrix}$                    (15.3-7)

in which the difference of slopes is averaged over three rows and three columns. The
gain-normalized version of the separable eight-neighbor Laplacian is given by

        $H = \frac{1}{8} \begin{bmatrix} -2 & 1 & -2 \\ 1 & 4 & 1 \\ -2 & 1 & -2 \end{bmatrix}$                    (15.3-8)
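
For reference, the sketch below applies the gain-normalized four-neighbor and separable eight-neighbor Laplacians (Eqs. 15.3-5 and 15.3-8) by convolution, as in Eq. 15.3-3 (an added illustration; the boundary handling is an assumption).

    import numpy as np
    from scipy.ndimage import convolve

    FOUR_NEIGHBOR = np.array([[ 0, -1,  0],
                              [-1,  4, -1],
                              [ 0, -1,  0]], dtype=float) / 4.0     # Eq. 15.3-5

    SEPARABLE_EIGHT = np.array([[-2,  1, -2],
                                [ 1,  4,  1],
                                [-2,  1, -2]], dtype=float) / 8.0   # Eq. 15.3-8

    def laplacian(image, kernel=SEPARABLE_EIGHT):
        return convolve(image, kernel, mode='nearest')              # Eq. 15.3-3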

   It is instructive to examine the Laplacian response to the edge models of Figure
15.1-3. As an example, the separable eight-neighbor Laplacian corresponding to the
center row of the vertical step edge model is

        $0 \qquad -\frac{3h}{8} \qquad \frac{3h}{8} \qquad 0$

where h = b – a is the edge height. The Laplacian response of the vertical ramp
edge model is

        $0 \qquad -\frac{3h}{16} \qquad 0 \qquad \frac{3h}{16} \qquad 0$

For the vertical ramp edge model, the edge lies at the zero crossing pixel
between the negative- and positive-value Laplacian responses. In the case of the step
edge, the zero crossing lies midway between the neighboring negative and positive
response pixels; the edge is correctly marked at the pixel to the right of the zero

crossing. The Laplacian response for a single-transition-pixel diagonal ramp edge
model is

        $0 \qquad -\frac{h}{8} \qquad -\frac{h}{8} \qquad 0 \qquad \frac{h}{8} \qquad \frac{h}{8} \qquad 0$

and the edge lies at the zero crossing at the center pixel. The Laplacian response for
the smoothed transition diagonal ramp edge model of Figure 15.1-3 is

        $0 \qquad -\frac{h}{16} \qquad -\frac{h}{8} \qquad -\frac{h}{16} \qquad \frac{h}{16} \qquad \frac{h}{8} \qquad \frac{h}{16} \qquad 0$

   In this example, the zero crossing does not occur at a pixel location. The edge
should be marked at the pixel to the right of the zero crossing. Figure 15.3-1 shows
the Laplacian response for the two ramp corner edge models of Figure 15.1-3. The
edge transition pixels are indicated by line segments in the figure. A zero crossing
exists at the edge corner for the smoothed transition edge model, but not for the sin-
gle-pixel transition model. The zero crossings adjacent to the edge corner do not
occur at pixel samples for either of the edge models. From these examples, it can be




FIGURE 15.3-1. Separable eight-neighbor Laplacian responses for ramp corner models; all
values should be scaled by h/8.


concluded that zero crossings of the Laplacian do not always occur at pixel samples.
But for these edge models, marking an edge at a pixel with a positive response that
has a neighbor with a negative response identifies the edge correctly.
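
The quoted responses are easy to verify numerically; the added sketch below builds single-transition vertical step and ramp edge models with levels a and b and prints the center row of the separable eight-neighbor Laplacian response.

    import numpy as np
    from scipy.ndimage import convolve

    a, b = 0.0, 1.0
    h = b - a
    step = np.full((7, 8), a)
    step[:, 4:] = b                       # ideal vertical step edge
    ramp = np.full((7, 8), a)
    ramp[:, 4] = (a + b) / 2.0            # single transition pixel
    ramp[:, 5:] = b

    lap = np.array([[-2, 1, -2], [1, 4, 1], [-2, 1, -2]], dtype=float) / 8.0
    print(convolve(step, lap)[3])   # interior values: 0, -3h/8, 3h/8, 0
    print(convolve(ramp, lap)[3])   # interior values: 0, -3h/16, 0, 3h/16, 0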
   Figure 15.3-2 shows the Laplacian responses of the peppers image for the three
types of 3 × 3 Laplacians. In these photographs, negative values are depicted as
dimmer than midgray and positive values are brighter than midgray.
   Marr and Hildreth (19) have proposed the Laplacian of Gaussian (LOG) edge
detection operator in which Gaussian-shaped smoothing is performed prior to appli-
cation of the Laplacian. The continuous domain LOG gradient is

        $G(x, y) = -\nabla^2 \{ F(x, y) \circledast H_S(x, y) \}$                    (15.3-9a)

where

        $H_S(x, y) = g(x, s)\, g(y, s)$                    (15.3-9b)




          FIGURE 15.3-2. Laplacian responses of the peppers_mon image: (a) four-neighbor;
          (b) eight-neighbor; (c) separable eight-neighbor; (d) 11 × 11 Laplacian of Gaussian.

is the impulse response of the Gaussian smoothing function as defined by Eq.
15.2-13. As a result of the linearity of the second derivative operation and of the lin-
earity of convolution, it is possible to express the LOG response as

        $G(j, k) = F(j, k) \circledast H(j, k)$                    (15.3-10a)

where

        $H(x, y) = -\nabla^2 \{ g(x, s)\, g(y, s) \}$                    (15.3-10b)

Upon differentiation, one obtains

        $H(x, y) = \frac{1}{\pi s^4} \left( 1 - \frac{x^2 + y^2}{2 s^2} \right) \exp\left\{ -\frac{x^2 + y^2}{2 s^2} \right\}$                    (15.3-11)

Figure 15.3-3 is a cross-sectional view of the LOG continuous domain impulse
response. In the literature it is often called the Mexican hat filter. It can be shown
(20,21) that the LOG impulse response can be expressed as


        $H(x, y) = \frac{1}{\pi s^2} \left( 1 - \frac{y^2}{s^2} \right) g(x, s)\, g(y, s) + \frac{1}{\pi s^2} \left( 1 - \frac{x^2}{s^2} \right) g(x, s)\, g(y, s)$                    (15.3-12)


Consequently, the convolution operation can be computed separably along rows and
columns of an image. It is possible to approximate the LOG impulse response closely
by a difference of Gaussians (DOG) operator. The resultant impulse response is


        $H(x, y) = g(x, s_1)\, g(y, s_1) - g(x, s_2)\, g(y, s_2)$                    (15.3-13)




FIGURE 15.3-3. Cross section of continuous domain Laplacian of Gaussian impulse
response.


where $s_1 < s_2$. Marr and Hildreth (19) have found that the ratio $s_2 / s_1 = 1.6$ provides
a good approximation to the LOG.
   A discrete domain version of the LOG operator can be obtained by sampling the
continuous domain impulse response function of Eq. 15.3-11 over a W × W window.
To avoid deleterious truncation effects, the size of the array should be set such that
W = 3c, or greater, where $c = 2\sqrt{2}\, s$ is the width of the positive center lobe of the
LOG function (21). Figure 15.3-2d shows the LOG response of the peppers image
for an 11 × 11 operator.
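
A sketch for generating the sampled LOG kernel of Eq. 15.3-11 and its difference-of-Gaussians approximation of Eq. 15.3-13 follows (added here; the window-size rule and the 1.6 ratio are those quoted above, the rest is an assumption).

    import numpy as np

    def log_kernel(s=2.0):
        """Sampled Laplacian of Gaussian, Eq. 15.3-11, with W >= 3c, c = 2*sqrt(2)*s."""
        w = int(np.ceil(6.0 * np.sqrt(2.0) * s)) | 1      # odd window size >= 3c
        half = w // 2
        coords = np.arange(-half, half + 1, dtype=float)
        xx, yy = np.meshgrid(coords, coords)
        r2 = xx ** 2 + yy ** 2
        return (1.0 / (np.pi * s ** 4)) * (1.0 - r2 / (2.0 * s * s)) * np.exp(-r2 / (2.0 * s * s))

    def dog_kernel(s1=2.0, ratio=1.6):
        """Difference-of-Gaussians approximation, Eq. 15.3-13, with s2/s1 = 1.6."""
        s2 = ratio * s1
        g = lambda u, sd: np.exp(-0.5 * (u / sd) ** 2) / np.sqrt(2.0 * np.pi * sd * sd)
        half = int(np.ceil(4.0 * s2))
        coords = np.arange(-half, half + 1, dtype=float)
        xx, yy = np.meshgrid(coords, coords)
        return g(xx, s1) * g(yy, s1) - g(xx, s2) * g(yy, s2)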


15.3.2. Laplacian Zero-Crossing Detection

From the discrete domain Laplacian response examples of the preceding section, it
has been shown that zero crossings do not always lie at pixel sample points. In fact,
for real images subject to luminance fluctuations that contain ramp edges of varying
slope, zero-valued Laplacian response pixels are unlikely.
   A simple approach to Laplacian zero-crossing detection in discrete domain
images is to form the maximum of all positive Laplacian responses and to form the
minimum of all negative-value responses in a 3 × 3 window. If the magnitude of the
difference between the maximum and the minimum exceeds a threshold, an edge is
judged present.
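
A compact version of this zero-crossing test is sketched below (added; requiring both signs in the window is a small extra condition beyond the literal description above).

    import numpy as np
    from scipy.ndimage import maximum_filter, minimum_filter

    def laplacian_zero_crossings(lap, threshold):
        """Mark edges where a 3x3 window contains both signs and the spread of the
        Laplacian response exceeds the threshold."""
        pos_max = maximum_filter(np.where(lap > 0, lap, 0.0), size=3)
        neg_min = minimum_filter(np.where(lap < 0, lap, 0.0), size=3)
        return (pos_max > 0) & (neg_min < 0) & ((pos_max - neg_min) > threshold)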




                    FIGURE 15.3-4. Laplacian zero-crossing patterns.

   Huertas and Medioni (21) have developed a systematic method for classifying
3 × 3 Laplacian response patterns in order to determine edge direction. Figure
15.3-4 illustrates a somewhat simpler algorithm. In the figure, plus signs denote pos-
itive-value Laplacian responses, and negative signs denote negative Laplacian
responses. The algorithm can be implemented efficiently using morphological
image processing techniques.

15.3.3. Directed Second-Order Derivative Generation

Laplacian edge detection techniques employ rotationally invariant second-order
differentiation to determine the existence of an edge. The direction of the edge can
be ascertained during the zero-crossing detection process. An alternative approach is
first to estimate the edge direction and then compute the one-dimensional second-
order derivative along the edge direction. A zero crossing of the second-order
derivative specifies an edge.
    The directed second-order derivative of a continuous domain image F ( x, y )
along a line at an angle θ with respect to the horizontal axis is given by

        $F''(x, y) = \frac{\partial^2 F(x, y)}{\partial x^2} \cos^2 \theta + 2\, \frac{\partial^2 F(x, y)}{\partial x\, \partial y} \cos \theta \sin \theta + \frac{\partial^2 F(x, y)}{\partial y^2} \sin^2 \theta$                    (15.3-14)


It should be noted that unlike the Laplacian, the directed second-order derivative is a
nonlinear operator. Convolving a smoothing function with F ( x, y ) prior to differen-
tiation is not equivalent to convolving the directed second derivative of F ( x, y ) with
the smoothing function.
    A key factor in the utilization of the directed second-order derivative edge detec-
tion method is the ability to determine its suspected edge direction accurately. One
approach is to employ some first-order derivative edge detection method to estimate
the edge direction, and then compute a discrete approximation to Eq. 15.3-14.
Another approach, proposed by Haralick (22), involves approximating F ( x, y ) by a
two-dimensional polynomial, from which the directed second-order derivative can
be determined analytically.
     As an illustration of Haralick's approximation method, called facet modeling, let
the continuous image function F ( x, y ) be approximated by a two-dimensional qua-
dratic polynomial

        $\hat{F}(r, c) = k_1 + k_2 r + k_3 c + k_4 r^2 + k_5 r c + k_6 c^2 + k_7 r c^2 + k_8 r^2 c + k_9 r^2 c^2$                    (15.3-15)

about a candidate edge point $(j, k)$ in the discrete image $F(j, k)$, where the $k_n$ are
weighting factors to be determined from the discrete image data. In this notation, the
indices $-(W-1)/2 \le r, c \le (W-1)/2$ are treated as continuous variables in the row
(y-coordinate) and column (x-coordinate) directions of the discrete image, but the
discrete image is, of course, measurable only at integer values of r and c. From this
model, the estimated edge angle is


        $\theta = \arctan\left( \frac{k_2}{k_3} \right)$                    (15.3-16)


In principle, any polynomial expansion can be used in the approximation. The
expansion of Eq. 15.3-15 was chosen because it can be expressed in terms of a set of
orthogonal polynomials. This greatly simplifies the computational task of determin-
ing the weighting factors. The quadratic expansion of Eq. 15.3-15 can be rewritten
as

        $\hat{F}(r, c) = \sum_{n=1}^{N} a_n P_n(r, c)$                    (15.3-17)

where Pn ( r, c ) denotes a set of discrete orthogonal polynomials and the a n are
weighting coefficients. Haralick (22) has used the following set of 3 × 3 Chebyshev
orthogonal polynomials:


        $P_1(r, c) = 1$                    (15.3-18a)

        $P_2(r, c) = r$                    (15.3-18b)

        $P_3(r, c) = c$                    (15.3-18c)

        $P_4(r, c) = r^2 - \tfrac{2}{3}$                    (15.3-18d)

        $P_5(r, c) = r c$                    (15.3-18e)

        $P_6(r, c) = c^2 - \tfrac{2}{3}$                    (15.3-18f)

        $P_7(r, c) = c \left( r^2 - \tfrac{2}{3} \right)$                    (15.3-18g)

        $P_8(r, c) = r \left( c^2 - \tfrac{2}{3} \right)$                    (15.3-18h)

        $P_9(r, c) = \left( r^2 - \tfrac{2}{3} \right) \left( c^2 - \tfrac{2}{3} \right)$                    (15.3-18i)


defined over the (r, c) index set {-1, 0, 1}. To maintain notational consistency with
the gradient techniques discussed previously, r and c are indexed in accordance with
the (x, y) Cartesian coordinate system (i.e., r is incremented positively up rows and c
is incremented positively left to right across columns). The polynomial coefficients
kn of Eq. 15.3-15 are related to the Chebyshev weighting coefficients by

        $k_1 = a_1 - \tfrac{2}{3} a_4 - \tfrac{2}{3} a_6 + \tfrac{4}{9} a_9$                    (15.3-19a)

        $k_2 = a_2 - \tfrac{2}{3} a_7$                    (15.3-19b)

        $k_3 = a_3 - \tfrac{2}{3} a_8$                    (15.3-19c)

        $k_4 = a_4 - \tfrac{2}{3} a_9$                    (15.3-19d)

        $k_5 = a_5$                    (15.3-19e)

        $k_6 = a_6 - \tfrac{2}{3} a_9$                    (15.3-19f)

        $k_7 = a_7$                    (15.3-19g)

        $k_8 = a_8$                    (15.3-19h)

        $k_9 = a_9$                    (15.3-19i)


The optimum values of the set of weighting coefficients $a_n$ that minimize the mean-
square error between the image data $F(r, c)$ and its approximation $\hat{F}(r, c)$ are found
to be (22)

        $a_n = \frac{\sum\sum P_n(r, c)\, F(r, c)}{\sum\sum [P_n(r, c)]^2}$                    (15.3-20)
As a consequence of the linear structure of this equation, the weighting coefficients
An ( j, k ) = a n at each point in the image F ( j, k ) can be computed by convolution of
the image with a set of impulse response arrays. Hence


        $A_n(j, k) = F(j, k) \circledast H_n(j, k)$                    (15.3-21a)


where

        $H_n(j, k) = \frac{P_n(-j, -k)}{\sum\sum [P_n(r, c)]^2}$                    (15.3-21b)
Figure 15.3-5 contains the nine impulse response arrays corresponding to the 3 × 3
Chebyshev polynomials. The arrays H2 and H3, which are used to determine the
edge angle, are seen from Figure 15.3-5 to be the Prewitt column and row operators,
respectively. The arrays H4 and H6 are second derivative operators along columns




          FIGURE 15.3-5. Chebyshev polynomial 3 × 3 impulse response arrays.



and rows, respectively, as noted in Eq. 15.3-7. Figure 15.3-6 shows the nine weight-
ing coefficient responses for the peppers image.
   The second derivative along the line normal to the edge slope can be expressed
explicitly by performing second-order differentiation on Eq. 15.3-15. The result is


        $\hat{F}''(r, c) = 2 k_4 \sin^2 \theta + 2 k_5 \sin \theta \cos \theta + 2 k_6 \cos^2 \theta$
        $\qquad + (4 k_7 \sin \theta \cos \theta + 2 k_8 \cos^2 \theta)\, r + (2 k_7 \sin^2 \theta + 4 k_8 \sin \theta \cos \theta)\, c$
        $\qquad + (2 k_9 \cos^2 \theta)\, r^2 + (8 k_9 \sin \theta \cos \theta)\, r c + (2 k_9 \sin^2 \theta)\, c^2$                    (15.3-22)


This second derivative need only be evaluated on a line in the suspected edge direc-
tion. With the substitutions r = ρ sin θ and c = ρ cos θ, the directed second-order
derivative can be expressed as


        $\hat{F}''(\rho) = 2\, (k_4 \sin^2 \theta + k_5 \sin \theta \cos \theta + k_6 \cos^2 \theta)$
        $\qquad + 6 \sin \theta \cos \theta\, (k_7 \sin \theta + k_8 \cos \theta)\, \rho + 12\, (k_9 \sin^2 \theta \cos^2 \theta)\, \rho^2$                    (15.3-23)


The next step is to detect zero crossings of $\hat{F}''(\rho)$ in a unit pixel range $-0.5 \le \rho \le 0.5$
of the suspected edge. This can be accomplished by computing the real root (if it
exists) within the range of the quadratic relation of Eq. 15.3-23.
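
The facet-model test can be condensed as in the sketch below (added here; the coordinate conventions follow the description above, and arctan2 is used in place of arctan to resolve the quadrant, which is an assumption).

    import numpy as np

    r = np.array([1.0, 0.0, -1.0])   # row coordinate, increasing up the array
    c = np.array([-1.0, 0.0, 1.0])   # column coordinate, increasing to the right
    cc, rr = np.meshgrid(c, r)
    polys = [np.ones_like(rr), rr, cc, rr**2 - 2/3, rr*cc, cc**2 - 2/3,
             cc*(rr**2 - 2/3), rr*(cc**2 - 2/3), (rr**2 - 2/3)*(cc**2 - 2/3)]  # Eq. 15.3-18

    def facet_edge(image, j, k):
        """True if the directed second derivative has a zero crossing with |rho| <= 0.5."""
        patch = image[j-1:j+2, k-1:k+2].astype(float)
        a = [np.sum(p * patch) / np.sum(p * p) for p in polys]      # Eq. 15.3-20
        k2, k3 = a[1] - 2/3*a[6], a[2] - 2/3*a[7]                   # Eq. 15.3-19
        k4, k5, k6 = a[3] - 2/3*a[8], a[4], a[5] - 2/3*a[8]
        k7, k8, k9 = a[6], a[7], a[8]
        theta = np.arctan2(k2, k3)                                  # Eq. 15.3-16
        sn, cs = np.sin(theta), np.cos(theta)
        c0 = 2.0*(k4*sn*sn + k5*sn*cs + k6*cs*cs)                   # Eq. 15.3-23 coefficients
        c1 = 6.0*sn*cs*(k7*sn + k8*cs)
        c2 = 12.0*k9*sn*sn*cs*cs
        roots = np.roots([c2, c1, c0]) if abs(c2) > 1e-12 else \
                (np.array([-c0 / c1]) if abs(c1) > 1e-12 else np.array([]))
        return bool(np.any(np.isreal(roots) & (np.abs(roots) <= 0.5)))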




FIGURE 15.3-6. 3 × 3 Chebyshev polynomial responses for the peppers_mon image: (a)
Chebyshev 1; (b) Chebyshev 2; (c) Chebyshev 3; (d) Chebyshev 4; (e) Chebyshev 5; (f)
Chebyshev 6; (g) Chebyshev 7; (h) Chebyshev 8; (i) Chebyshev 9.



15.4. EDGE-FITTING EDGE DETECTION

Ideal edges may be viewed as one- or two-dimensional edges of the form sketched
in Figure 15.1-1. Actual image data can then be matched against, or fitted to, the
ideal edge models. If the fit is sufficiently accurate at a given image location, an
edge is assumed to exist with the same parameters as those of the ideal edge model.
   In the one-dimensional edge-fitting case described in Figure 15.4-1, the image
signal f(x) is fitted to a step function


        $s(x) = \begin{cases} a & \text{for } x < x_0 \\ a + h & \text{for } x \ge x_0 \end{cases}$          (15.4-1a, 15.4-1b)




                FIGURE 15.4-1. One- and two-dimensional edge fitting.



An edge is assumed present if the mean-square error

        $E = \int_{x_0 - L}^{x_0 + L} [\, f(x) - s(x)\,]^2\, dx$                    (15.4-2)


is below some threshold value. In the two-dimensional formulation, the ideal step
edge is defined as


        $S(x, y) = \begin{cases} a & \text{for } x \cos \theta + y \sin \theta < \rho \\ a + h & \text{for } x \cos \theta + y \sin \theta \ge \rho \end{cases}$          (15.4-3a, 15.4-3b)


where θ and ρ jointly specify the polar distance from the center of a circular test
region to the normal point of the edge. The edge-fitting error is

        $E = \int\!\!\int [\, F(x, y) - S(x, y)\,]^2\, dx\, dy$                    (15.4-4)


where the integration is over the circle in Figure 15.4-1.
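
In the one-dimensional case, the fit has a simple closed form for each candidate edge position, as sketched below (an added illustration of Eqs. 15.4-1 and 15.4-2 on sampled data; the acceptance threshold is left open).

    import numpy as np

    def fit_step_edge(f):
        """Best step-edge fit to a 1-D signal f: returns (x0, error E)."""
        best_x0, best_err = None, np.inf
        for x0 in range(1, len(f)):
            a = f[:x0].mean()        # level a for x < x0
            ah = f[x0:].mean()       # level a + h for x >= x0
            err = ((f[:x0] - a) ** 2).sum() + ((f[x0:] - ah) ** 2).sum()   # Eq. 15.4-2
            if err < best_err:
                best_x0, best_err = x0, err
        return best_x0, best_err

    # An edge is declared at x0 when the returned error falls below a chosen threshold.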


   Hueckel (23) has developed a procedure for two-dimensional edge fitting in
which the pixels within the circle of Figure 15.4-1 are expanded in a set of two-
dimensional basis functions by a Fourier series in polar coordinates. Let B i ( x, y )
represent the basis functions. Then, the weighting coefficients for the expansions of
the image and the ideal step edge become


        $f_i = \int\!\!\int B_i(x, y)\, F(x, y)\, dx\, dy$                    (15.4-5a)

        $s_i = \int\!\!\int B_i(x, y)\, S(x, y)\, dx\, dy$                    (15.4-5b)


In Hueckel's algorithm, the expansion is truncated to eight terms for computational
economy and to provide some noise smoothing. Minimization of the mean-square-
error difference of Eq. 15.4-4 is equivalent to minimization of $(f_i - s_i)^2$ for all coef-
ficients. Hueckel has performed this minimization, invoking some simplifying
approximations, and has formulated a set of nonlinear equations expressing the
estimated edge parameter set in terms of the expansion coefficients $f_i$.
    Nalwa and Binford (24) have proposed an edge-fitting scheme in which the edge
angle is first estimated by a sequential least-squares fit within a 5 × 5 region. Then,
the image data along the edge direction is fit to a hyperbolic tangent function


        $\tanh \rho = \frac{e^{\rho} - e^{-\rho}}{e^{\rho} + e^{-\rho}}$                    (15.4-6)


as shown in Figure 15.4-2.
   Edge-fitting methods require substantially more computation than do derivative
edge detection methods. Their relative performance is considered in the following
section.




                    FIGURE 15.4-2. Hyperbolic tangent edge model.

15.5. LUMINANCE EDGE DETECTOR PERFORMANCE

Relatively few comprehensive studies of edge detector performance have been
reported in the literature (15,25,26). A performance evaluation is difficult because
of the large number of methods proposed, problems in determining the optimum
parameters associated with each technique, and the lack of definitive performance
criteria.
    In developing performance criteria for an edge detector, it is wise to distinguish
between mandatory and auxiliary information to be obtained from the detector.
Obviously, it is essential to determine the pixel location of an edge. Other informa-
tion of interest includes the height and slope angle of the edge as well as its spatial
orientation. Another useful item is a confidence factor associated with the edge deci-
sion, for example, the closeness of fit between actual image data and an idealized
model. Unfortunately, few edge detectors provide this full gamut of information.
    The next sections discuss several performance criteria. No attempt is made to
provide a comprehensive comparison of edge detectors.


15.5.1. Edge Detection Probability

The probability of correct edge detection PD and the probability of false edge detec-
tion PF, as specified by Eq. 15.2-24, are useful measures of edge detector perfor-
mance. The trade-off between PD and PF can be expressed parametrically in terms
of the detection threshold. Figure 15.5-1 presents analytically derived plots of PD
versus PF for several differential operators for vertical and diagonal edges and sig-
nal-to-noise ratios of 1.0 and 10.0 (13). From these curves it is apparent that the
Sobel and Prewitt 3 × 3 operators are superior to the Roberts 2 × 2 operators. The
Prewitt operator is better than the Sobel operator for a vertical edge. But for a diago-
nal edge, the Sobel operator is superior. In the case of template-matching operators,
the Robinson three-level and five-level operators exhibit almost identical perfor-
mance, which is superior to the Kirsch and Prewitt compass gradient operators.
Finally, the Sobel and Prewitt differential operators perform slightly better than the
Robinson three- and Robinson five-level operators. It has not been possible to apply
this statistical approach to any of the larger operators because of analytic difficulties
in evaluating the detection probabilities.

15.5.2. Edge Detection Orientation

An important characteristic of an edge detector is its sensitivity to edge orientation.
Abdou and Pratt (15) have analytically determined the gradient response of 3 × 3
template matching edge detectors and 2 × 2 and 3 × 3 orthogonal gradient edge
detectors for square-root and magnitude combinations of the orthogonal gradients.
Figure 15.5-2 shows plots of the edge gradient as a function of actual edge orienta-
tion for a unit-width ramp edge model. The figure clearly shows that magni-
tude combination of the orthogonal gradients is inferior to square-root combination.




FIGURE 15.5-1. Probability of detection versus probability of false detection for 2 × 2 and
3 × 3 operators.



Figure 15.5-3 is a plot of the detected edge angle as a function of the actual orienta-
tion of an edge. The Sobel operator provides the most linear response. Laplacian
edge detectors are rotationally symmetric operators, and hence are invariant to edge
orientation. The edge angle can be determined to within 45° increments during the
3 × 3 pixel zero-crossing detection process.

15.5.3. Edge Detection Localization

Another important property of an edge detector is its ability to localize an edge.
Abdou and Pratt (15) have analyzed the edge localization capability of several first
derivative operators for unit width ramp edges. Figure 15.5-4 shows edge models in
which the sampled continuous ramp edge is displaced from the center of the
operator. Figure 15.5-5 shows plots of the gradient response as a function of edge




FIGURE 15.5-2. Edge gradient response as a function of edge orientation for 2 × 2 and 3 × 3
first derivative operators.



displacement distance for vertical and diagonal edges for 2 × 2 and 3 × 3 orthogo-
nal gradient and 3 × 3 template matching edge detectors. All of the detectors, with
the exception of the Kirsch operator, exhibit a desirable monotonically decreasing
response as a function of edge displacement. If the edge detection threshold is set
at one-half the edge height, or greater, an edge will be properly localized in a noise-
free environment for all the operators, with the exception of the Kirsch operator,
for which the threshold must be slightly higher. Figure 15.5-6 illustrates the gradi-
ent response of boxcar operators as a function of their size (5). A gradient response




FIGURE 15.5-3. Detected edge orientation as a function of actual edge orientation for 2 × 2
and 3 × 3 first derivative operators.




               FIGURE 15.5-4. Edge models for edge localization analysis: (a) 2 × 2 model;
               (b) 3 × 3 model.



comparison of 7 × 7 orthogonal gradient operators is presented in Figure 15.5-7. For
such large operators, the detection threshold must be set relatively high to prevent
smeared edge markings. Setting a high threshold will, of course, cause low-ampli-
tude edges to be missed.
    Ramp edges of extended width can cause difficulties in edge localization. For
first-derivative edge detectors, edges are marked along the edge slope at all points
for which the slope exceeds some critical value. Raising the threshold results in the
missing of low-amplitude edges. Second derivative edge detection methods are
often able to eliminate smeared ramp edge markings. In the case of a unit width
ramp edge, a zero crossing will occur only at the midpoint of the edge slope.
Extended-width ramp edges will also exhibit a zero crossing at the ramp midpoint
provided that the size of the Laplacian operator exceeds the slope width. Figure
15.5-8 illustrates Laplacian of Gaussian (LOG) examples (21).
    Berzins (27) has investigated the accuracy to which the LOG zero crossings
locate a step edge. Figure 15.5-9 shows the LOG zero crossing in the vicinity of a
corner step edge. A zero crossing occurs exactly at the corner point, but the zero-
crossing curve deviates from the step edge adjacent to the corner point. The maxi-
mum deviation is about 0.3s, where s is the standard deviation of the Gaussian
smoothing function.




FIGURE 15.5-5. Edge gradient response as a function of edge displacement distance for
2 × 2 and 3 × 3 first derivative operators.




FIGURE 15.5-6. Edge gradient response as a function of edge displacement distance for
variable-size boxcar operators.




FIGURE 15.5-7. Edge gradient response as a function of edge displacement distance for
several 7 × 7 orthogonal gradient operators.



15.5.4. Edge Detector Figure of Merit

There are three major types of error associated with determination of an edge: (1)
missing valid edge points, (2) failure to localize edge points, and (3) classification of




FIGURE 15.5-8. Laplacian of Gaussian response of continuous domain for high- and low-
slope ramp edges.




FIGURE 15.5-9. Locus of zero crossings in vicinity of a corner edge for a continuous Lapla-
cian of Gaussian edge detector.



noise fluctuations as edge points. Figure 15.5-10 illustrates a typical edge segment in a
discrete image, an ideal edge representation, and edge representations subject to var-
ious types of error.
    A common strategy in signal detection problems is to establish some bound on
the probability of false detection resulting from noise and then attempt to maximize
the probability of true signal detection. Extending this concept to edge detection
simply involves setting the edge detection threshold at a level such that the probabil-
ity of false detection resulting from noise alone does not exceed some desired value.
The probability of true edge detection can readily be evaluated by a coincidence
comparison of the edge maps of an ideal and an actual edge detector. The penalty for
nonlocalized edges is somewhat more difficult to assess. Edge detectors that pro-
vide a smeared edge location should clearly be penalized; however, credit should be
given to edge detectors whose edge locations are localized but biased by a small
amount. Pratt (28) has introduced a figure of merit that balances these three types of
error. The figure of merit is defined by

$$R = \frac{1}{I_N} \sum_{i=1}^{I_A} \frac{1}{1 + a d^2} \qquad\qquad (15.5-1)$$



where $I_N = \max\{I_I, I_A\}$, and $I_I$ and $I_A$ represent the number of ideal and actual
edge map points, $a$ is a scaling constant, and $d$ is the separation distance of an
actual edge point normal to a line of ideal edge points. The rating factor is normalized
so that R = 1 for a perfectly detected edge. The scaling factor may be adjusted
to penalize edges that are localized but offset from the true position. Normalization
by the maximum of the actual and ideal number of edge points ensures a penalty for
smeared or fragmented edges. As an example of performance, if a = 1/9, the rating of




                    FIGURE 15.5-10. Indications of edge location.


a vertical detected edge offset by one pixel becomes R = 0.90, and a two-pixel offset
gives a rating of R = 0.69. With a = 1/9, a smeared edge of three pixels width centered
about the true vertical edge yields a rating of R = 0.93, and a five-pixel-wide
smeared edge gives R = 0.84. A higher rating for a smeared edge than for an offset
edge is reasonable because it is possible to thin the smeared edge by morphological
postprocessing.
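   The figure of merit of Eq. 15.5-1 is straightforward to compute once binary ideal and actual edge maps are available. The following sketch is a minimal NumPy/SciPy implementation; the function name pratt_fom is illustrative, and the separation distance d is taken as the Euclidean distance from each actual edge point to the nearest ideal edge point, a common simplification of the normal-distance definition quoted above.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_fom(ideal_edges, actual_edges, a=1.0 / 9.0):
    """Pratt figure of merit (Eq. 15.5-1) for binary edge maps.

    ideal_edges, actual_edges : boolean arrays of identical shape.
    a : scaling constant penalizing localization offsets (default 1/9).
    """
    ideal_edges = np.asarray(ideal_edges, dtype=bool)
    actual_edges = np.asarray(actual_edges, dtype=bool)
    I_I, I_A = ideal_edges.sum(), actual_edges.sum()
    I_N = max(I_I, I_A)
    if I_N == 0:
        return 1.0
    # Distance from every pixel to the nearest ideal edge point
    # (simplification of the normal distance to the ideal edge line).
    d = distance_transform_edt(~ideal_edges)
    # Sum 1 / (1 + a d^2) over the actual edge points and normalize.
    return float(np.sum(1.0 / (1.0 + a * d[actual_edges] ** 2)) / I_N)
```

For a vertical ideal edge and a detected edge offset by one pixel, this routine reproduces the R = 0.90 rating quoted above; a two-pixel offset gives R of approximately 0.69.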
   The figure-of-merit criterion described above has been applied to the assessment
of some of the edge detectors discussed previously, using a test image consisting of
a 64 × 64 pixel array with a vertically oriented edge of variable contrast and slope
placed at its center. Independent Gaussian noise of standard deviation $\sigma_n$ has been
added to the edge image. The signal-to-noise ratio is defined as $\mathrm{SNR} = (h / \sigma_n)^2$,
where h is the edge height scaled over the range 0.0 to 1.0. Because the purpose of
the testing is to compare various edge detection methods, for fairness it is important
that each edge detector be tuned to its best capabilities. Consequently, each edge
detector has been permitted to train both on random noise fields without edges and

the actual test images before evaluation. For each edge detector, the threshold
parameter has been set to achieve the maximum figure of merit subject to the maxi-
mum allowable false detection rate.
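   A minimal sketch of the kind of test image described above: a 64 × 64 array containing a centered vertical ramp edge of height h and width w, with independent Gaussian noise whose standard deviation is set from SNR = (h/σn)². The array size and edge placement follow the text; the particular ramp construction and function name are illustrative assumptions.

```python
import numpy as np

def make_edge_test_image(h=0.1, w=1, snr=100.0, size=64, seed=0):
    """Vertical ramp edge of height h and width w pixels, centered in a
    size x size array, plus Gaussian noise of standard deviation h / sqrt(SNR)."""
    rng = np.random.default_rng(seed)
    col = np.arange(size) - size // 2
    # Ramp rises linearly from 0 to h over w pixels centered on the edge column.
    ramp = h * np.clip((col + w / 2.0) / w, 0.0, 1.0)
    image = np.tile(ramp, (size, 1))
    sigma_n = h / np.sqrt(snr)            # from SNR = (h / sigma_n)**2
    return image + rng.normal(0.0, sigma_n, image.shape)
```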
    Figure 15.5-11 shows plots of the figure of merit for a vertical ramp edge as a
function of signal-to-noise ratio for several edge detectors (5). The figure of merit is
also plotted in Figure 15.5-12 as a function of edge width. The figure of merit curves
in the figures follow expected trends: low for wide and noisy edges and high in the
opposite case. Some of the edge detection methods are universally superior to others
for all test images. As a check on the subjective validity of the edge location figure
of merit, Figures 15.5-13 and 15.5-14 present the edge maps obtained for several
high- and low-ranking edge detectors. These figures tend to corroborate the utility of
the figure of merit. A high figure of merit generally corresponds to a well-located
edge upon visual scrutiny, and vice versa.




                                (a) 3 × 3 edge detectors




                             (b) Larger size edge detectors

FIGURE 15.5-11. Edge location figure of merit for a vertical ramp edge as a function of sig-
nal-to-noise ratio for h = 0.1 and w = 1.



FIGURE 15.5-12. Edge location figure of merit for a vertical ramp edge as a function of
edge width (in pixels) for h = 0.1 and SNR = 100. Curves shown: Sobel, Prewitt, compass,
and Roberts magnitude operators.




15.5.5. Subjective Assessment

In many, if not most applications in which edge detection is performed to outline
objects in a real scene, the only performance measure of ultimate importance is
how well edge detector markings match with the visual perception of object
boundaries. A human observer is usually able to discern object boundaries in a
scene quite accurately in a perceptual sense. However, most observers have diffi-
culty recording their observations by tracing object boundaries. Nevertheless, in
 the evaluation of edge detectors, it is useful to assess them in terms of how well
they produce outline drawings of a real scene that are meaningful to a human
observer.
   The peppers image of Figure 15.2-2 has been used for the subjective assessment
of edge detectors. The peppers in the image are visually distinguishable objects, but
shadows and nonuniform lighting create a challenge to edge detectors, which by
definition do not utilize higher-order perceptive intelligence. Figures 15.5-15 and
15.5-16 present edge maps of the peppers image for several edge detectors. The
parameters of the various edge detectors have been chosen to produce the best visual
delineation of objects.
   Heath et al. (26) have performed extensive visual testing of several complex edge
detection algorithms, including the Canny and Nalwa–Binford methods, for a num-
ber of natural images. The judgment criterion was a numerical rating as to how well
the edge map generated by an edge detector allows for easy, quick, and accurate rec-
ognition of objects within a test image.




                                        SNR = 100
                    (a ) Original                    (b ) Edge map, R = 100%




                                        SNR = 10
                    (c ) Original                    (d ) Edge map, R = 85.1%




                                         SNR = 1
                    (e ) Original                   (f ) Edge Map, R = 24.2%

FIGURE 15.5-13. Edge location performance of Sobel edge detector as a function of signal-
to-noise ratio, h = 0.1, w = 1, a = 1/9.




                  (a ) Original                (b) East compass, R = 66.1%




       (c ) Roberts magnitude, R = 31.5%    (d ) Roberts square root, R = 37.0%




             (e ) Sobel, R = 85.1%                 (f ) Kirsch, R = 80.8%

FIGURE 15.5-14. Edge location performance of several edge detectors for SNR = 10,
h = 0.1, w = 1, a = 1/9.




           (a) 2 × 2 Roberts, t = 0.08           (b) 3 × 3 Prewitt, t = 0.08




            (c) 3 × 3 Sobel, t = 0.09            (d ) 3 × 3 Robinson five-level




        (e) 5 × 5 Nevatia−Babu, t = 0.05              (f ) 3 × 3 Laplacian

FIGURE 15.5-15. Edge maps of the peppers_mon image for several small edge detectors.




               (a) 7 × 7 boxcar, t = 0.10            (b) 9 × 9 truncated pyramid, t = 0.10




              (c) 11 × 11 Argyle, t = 0.05              (d ) 11 × 11 Macleod, t = 0.10




      (e) 11 × 11 derivative of Gaussian, t = 0.11    (f ) 11 × 11 Laplacian of Gaussian

FIGURE 15.5-16. Edge maps of the peppers_mon image for several large edge detectors.

15.6. COLOR EDGE DETECTION

In Chapter 3 it was established that color images may be described quantitatively at
each pixel by a set of three tristimulus values $T_1$, $T_2$, $T_3$, which are proportional to
the amount of red, green, and blue primary lights required to match the pixel color.
The luminance of the color is a weighted sum $Y = a_1 T_1 + a_2 T_2 + a_3 T_3$ of the
tristimulus values, where the $a_i$ are constants that depend on the spectral characteristics
of the primaries.
   Several definitions of a color edge have been proposed (29). An edge in a color
image can be said to exist if and only if the luminance field contains an edge. This
definition ignores discontinuities in hue and saturation that occur in regions of con-
stant luminance. Another definition is to judge a color edge present if an edge exists
in any of its constituent tristimulus components. A third definition is based on form-
ing the sum

$$G(j, k) = G_1(j, k) + G_2(j, k) + G_3(j, k) \qquad\qquad (15.6-1)$$

of the gradients $G_i(j, k)$ of the three tristimulus values or some linear or nonlinear
color components. A color edge exists if the gradient exceeds a threshold. Still
another definition is based on the vector sum gradient

$$G(j, k) = \left\{ [G_1(j, k)]^2 + [G_2(j, k)]^2 + [G_3(j, k)]^2 \right\}^{1/2} \qquad\qquad (15.6-2)$$
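
   As an illustration of the vector sum definition of Eq. 15.6-2, the following sketch applies a Sobel gradient to each RGB component and thresholds the root-sum-square of the three component gradients. The Sobel kernel normalization and the threshold value are assumptions beyond the text, and an RGB image scaled to [0, 1] is assumed.

```python
import numpy as np
from scipy.ndimage import convolve

SOBEL_ROW = np.array([[ 1,  2,  1],
                      [ 0,  0,  0],
                      [-1, -2, -1]], dtype=float) / 4.0
SOBEL_COL = SOBEL_ROW.T

def sobel_gradient(channel):
    """Root-sum-square Sobel gradient of a single image channel."""
    gr = convolve(channel, SOBEL_ROW, mode='nearest')
    gc = convolve(channel, SOBEL_COL, mode='nearest')
    return np.sqrt(gr ** 2 + gc ** 2)

def color_edge_map(rgb, threshold=0.2):
    """Vector sum color edge map (Eq. 15.6-2) of an H x W x 3 image in [0, 1]."""
    grads = [sobel_gradient(rgb[..., i].astype(float)) for i in range(3)]
    g = np.sqrt(sum(gi ** 2 for gi in grads))
    return g > threshold
```

The gradient sum of Eq. 15.6-1 and the logical OR definition differ only in how the three component gradients are combined before thresholding.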

   With the tricomponent definitions of color edges, results are dependent on the
particular color coordinate system chosen for representation. Figure 15.6-1 is a color
photograph of the peppers image and monochrome photographs of its red, green,
and blue components. The YIQ and L*a*b* coordinates are shown in Figure 15.6-2.
Edge maps of the individual RGB components are shown in Figure 15.6-3 for Sobel
edge detection. This figure also shows the logical OR of the RGB edge maps plus
the edge maps of the gradient sum and the vector sum. The RGB gradient vector
sum edge map provides slightly better visual edge delineation than that provided by
the gradient sum edge map; the logical OR edge map tends to produce thick edges
and numerous isolated edge points. Sobel edge maps for the YIQ and the L*a*b*
color components are presented in Figures 15.6-4 and 15.6-5. The YIQ gradient vector
sum edge map gives the best visual edge delineation among the YIQ maps, but it does
not delineate edges quite as well as the RGB vector sum edge map. Edge detection results for the
L*a*b* coordinate system are quite poor because the a* component is very noise
sensitive.


15.7. LINE AND SPOT DETECTION

A line in an image could be considered to be composed of parallel, closely spaced
edges. Similarly, a spot could be considered to be a closed contour of edges. This




        (a ) Monochrome representation                       (b) Red component




            (c ) Green component                            (d ) Blue component

FIGURE 15.6-1. The peppers_gamma color image and its RGB color components. See
insert for a color representation of this figure.



method of line and spot detection involves the application of scene analysis tech-
niques to spatially relate the constituent edges of the lines and spots. The approach
taken in this chapter is to consider only small-scale models of lines and edges and to
apply the detection methodology developed previously for edges.
   Figure 15.1-4 presents several discrete models of lines. For the unit-width line
models, line detection can be accomplished by threshold detecting a line gradient

$$G(j, k) = \max_{1 \le m \le 4} \left\{ F(j, k) \circledast H_m(j, k) \right\} \qquad\qquad (15.7-1)$$




           (a ) Y component                     (b ) L* component




            (c ) I component                    (d ) a* component




           (e ) Q component                     (f ) b* component

FIGURE 15.6-2. YIQ and L*a*b* color components of the peppers_gamma image.




             (a ) Red edge map                  (b ) Logical OR of RGB edges




            (c ) Green edge map                   (d ) RGB sum edge map




             (e) Blue edge map                  (f ) RGB vector sum edge map

FIGURE 15.6-3. Sobel edge maps for edge detection using the RGB color components of
the peppers_gamma image.




              (a ) Y edge map                     (b ) Logical OR of YIQ edges




               (c ) I edge map                       (d ) YIQ sum edge map




              (e) Q edge map                      (f ) YIQ vector sum edge map

FIGURE 15.6-4. Sobel edge maps for edge detection using the YIQ color components of the
peppers_gamma image.




              (a ) L* edge map                  (b ) Logical OR of L*a *b * edges




              (c ) a * edge map                    (d ) L*a *b * sum edge map




              (e ) b * edge map                 (f ) L*a *b * vector sum edge map

FIGURE 15.6-5. Sobel edge maps for edge detection using the L*a*b* color components of
the peppers_gamma image.

where $H_m(j, k)$ is a 3 × 3 line detector impulse response array corresponding to a
specific line orientation. Figure 15.7-1 contains two sets of line detector impulse
response arrays, weighted and unweighted, which are analogous to the Prewitt and
Sobel template matching edge detector impulse response arrays. The detection of
ramp lines, as modeled in Figure 15.1-4, requires 5 × 5 pixel templates.
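
   A sketch of the line gradient of Eq. 15.7-1 for the unit-width line models follows. The four 3 × 3 templates below are the classical unweighted line masks for horizontal, vertical, and the two diagonal orientations, patterned after (but not transcribed from) Figure 15.7-1; the normalization and threshold are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# Unweighted 3 x 3 line templates for the four principal line orientations.
LINE_MASKS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]) / 6.0,  # horizontal
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]) / 6.0,  # 45 degrees
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]) / 6.0,  # vertical
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]) / 6.0,  # 135 degrees
]

def line_gradient(image):
    """Maximum template response over the four orientations (Eq. 15.7-1)."""
    responses = [convolve(image.astype(float), h, mode='nearest')
                 for h in LINE_MASKS]
    return np.max(responses, axis=0)

def detect_lines(image, threshold=0.1):
    """Threshold the line gradient to mark unit-width line points."""
    return line_gradient(image) > threshold
```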
   Unit-width step spots can be detected by thresholding a spot gradient

$$G(j, k) = F(j, k) \circledast H(j, k) \qquad\qquad (15.7-2)$$


where H ( j, k ) is an impulse response array chosen to accentuate the gradient of a
unit-width spot. One approach is to use one of the three types of 3 × 3 Laplacian
operators defined by Eq. 15.3-5, 15.3-6, or 15.3-8, which are discrete approxima-
tions to the sum of the row and column second derivatives of an image. The gradient
responses to these impulse response arrays for the unit-width spot model of Figure
15.1-6a are simply replicas of each array centered at the spot, scaled by the spot
height h and zero elsewhere. It should be noted that the Laplacian gradient responses
are thresholded for spot detection, whereas the Laplacian responses are examined
for sign changes (zero crossings) for edge detection. The disadvantage to using
Laplacian operators for spot detection is that they evoke a gradient response for
edges, which can lead to false spot detection in a noisy environment. This problem
can be alleviated by the use of a 3 × 3 operator that approximates the continuous




             FIGURE 15.7-1. Line detector 3 × 3 impulse response arrays.

cross second derivative $\partial^2 / \partial x^2 \, \partial y^2$. Prewitt (1, p. 126) has suggested the following
discrete approximation:


$$H = \frac{1}{8} \begin{bmatrix} 1 & -2 & 1 \\ -2 & 4 & -2 \\ 1 & -2 & 1 \end{bmatrix} \qquad\qquad (15.7-3)$$


The advantage of this operator is that it evokes no response for horizontally or vertically
oriented edges; however, it does generate a response for diagonally oriented
edges. The detection of unit-width spots modeled by the ramp model of Figure
15.1-5 requires a 5 × 5 impulse response array. The cross second derivative operator
of Eq. 15.7-3 and the separable eight-connected Laplacian operator are deceptively
similar in appearance; often, they are mistakenly exchanged with one another in the
literature. It should be noted that the cross second derivative is identical, within a
scale factor, to the ninth Chebyshev polynomial impulse response array of Figure
15.3-5.
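
   The cross second derivative operator of Eq. 15.7-3 is applied as an ordinary convolution; the following sketch thresholds the magnitude of its response to mark unit-width spots. The threshold value and function name are assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# Prewitt cross second derivative impulse response array (Eq. 15.7-3).
H_CROSS = np.array([[ 1, -2,  1],
                    [-2,  4, -2],
                    [ 1, -2,  1]], dtype=float) / 8.0

def detect_spots(image, threshold=0.1):
    """Mark unit-width spots by thresholding the spot gradient of
    Eq. 15.7-2 computed with the operator of Eq. 15.7-3."""
    g = convolve(image.astype(float), H_CROSS, mode='nearest')
    return np.abs(g) > threshold
```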
    Cook and Rosenfeld (30) and Zucker et al. (31) have suggested several algo-
rithms for detection of large spots. In one algorithm, an image is first smoothed with
a W × W low-pass filter impulse response array. Then the value of each point in the
averaged image is compared to the average value of its north, south, east, and west
neighbors spaced W pixels away. A spot is marked if the difference is sufficiently
large. A similar approach involves formation of the difference of the average pixel
amplitude in a W × W window and the average amplitude in a surrounding ring
region of width W.
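
   The first large-spot algorithm just described can be sketched as follows: smooth with a W × W low-pass filter, then compare each smoothed pixel to the average of its north, south, east, and west neighbors W pixels away. The boxcar smoothing filter, the wraparound treatment of borders, and the threshold are simplifying assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def detect_large_spots(image, W=5, threshold=0.1):
    """Mark pixels whose W x W local average differs sufficiently from the
    average of its four neighbors spaced W pixels away."""
    smoothed = uniform_filter(image.astype(float), size=W, mode='nearest')
    # np.roll wraps around at the borders; a simplification of edge handling.
    north = np.roll(smoothed,  W, axis=0)
    south = np.roll(smoothed, -W, axis=0)
    west  = np.roll(smoothed,  W, axis=1)
    east  = np.roll(smoothed, -W, axis=1)
    neighbor_mean = (north + south + east + west) / 4.0
    return np.abs(smoothed - neighbor_mean) > threshold
```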
    Chapter 19 considers the general problem of detecting objects within an image by
template matching. Such templates can be developed to detect large spots.


REFERENCES

 1. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psy-
    chopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
 2. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Elec-
    tro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge,
    MA, 1965, 159–197.
 3. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York,
    1973.
 4. W. Frei and C. Chen, “Fast Boundary Detection: A Generalization and a New
    Algorithm,” IEEE Trans. Computers, C-26, 10, October 1977, 988–998.
 5. I. Abdou, “Quantitative Methods of Edge Detection,” USCIPI Report 830, Image
    Processing Institute, University of Southern California, Los Angeles, 1973.
 6. E. Argyle, “Techniques for Edge Detection,” Proc. IEEE, 59, 2, February 1971,
    285–287.

 7. I. D. G. Macleod, “On Finding Structure in Pictures,” in Picture Processing and Psy-
    chopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
 8. I. D. G. Macleod, “Comments on Techniques for Edge Detection,” Proc. IEEE, 60, 3,
    March 1972, 344.
 9. J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Analy-
    sis and Machine Intelligence, PAMI-8, 6, November 1986, 679–698.
10. D. Demigny and T. Kamie, “A Discrete Expression of Canny’s Criteria for Step Edge
    Detector Performances Evaluation,” IEEE Trans. Pattern Analysis and Machine Intelli-
    gence, PAMI-19, 11, November 1997, 1199–1211.
11. R. Kirsch, “Computer Determination of the Constituent Structure of Biomedical
    Images,” Computers and Biomedical Research, 4, 3, 1971, 315–328.
12. G. S. Robinson, “Edge Detection by Compass Gradient Masks,” Computer Graphics
    and Image Processing, 6, 5, October 1977, 492–501.
13. R. Nevatia and K. R. Babu, “Linear Feature Extraction and Description,” Computer
    Graphics and Image Processing, 13, 3, July 1980, 257–269.
14. A. P. Paplinski, “Directional Filtering in Edge Detection,” IEEE Trans. Image Process-
    ing, IP-7, 4, April 1998, 611–615.
15. I. E. Abdou and W. K. Pratt, “Quantitative Design and Evaluation of Enhancement/
    Thresholding Edge Detectors,” Proc. IEEE, 67, 5, May 1979, 753–763.
16. K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, New
    York, 1972.
17. P. V. Henstock and D. M. Chelberg, “Automatic Gradient Threshold Determination for
    Edge Detection,” IEEE Trans. Image Processing, IP-5, 5, May 1996, 784–787.
18. V. Torre and T. A. Poggio, “On Edge Detection,” IEEE Trans. Pattern Analysis and
    Machine Intelligence, PAMI-8, 2, March 1986, 147–163.
19. D. Marr and E. Hildreth, “Theory of Edge Detection,” Proc. Royal Society of London,
    B207, 1980, 187–217.
20. J. S. Wiejak, H. Buxton, and B. F. Buxton, “Convolution with Separable Masks for Early
    Image Processing,” Computer Vision, Graphics, and Image Processing, 32, 3, December
    1985, 279–290.
21. A. Huertas and G. Medioni, “Detection of Intensity Changes Using Laplacian-Gaussian
    Masks,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 5, September
    1986, 651–664.
22. R. M. Haralick, “Digital Step Edges from Zero Crossing of Second Directional Deriva-
    tives,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 1, January
    1984, 58–68.
23. M. Hueckel, “An Operator Which Locates Edges in Digital Pictures,” J. Association for
    Computing Machinery, 18, 1, January 1971, 113–125.
24. V. S. Nalwa and T. O. Binford, “On Detecting Edges,” IEEE Trans. Pattern Analysis and
    Machine Intelligence, PAMI-6, November 1986, 699–714.
25. J. R. Fram and E. S. Deutsch, “On the Evaluation of Edge Detection Schemes and Their
    Comparison with Human Performance,” IEEE Trans. Computers, C-24, 6, June 1975,
    616–628.


26. M. D. Heath, et al., “A Robust Visual Method for Assessing the Relative Performance of
    Edge-Detection Algorithms,” IEEE Trans. Pattern Analysis and Machine Intelligence,
    PAMI-19, 12, December 1997, 1338–1359.
27. V. Berzins, “Accuracy of Laplacian Edge Detectors,” Computer Vision, Graphics, and
    Image Processing, 27, 2, August 1984, 195–210.
28. W. K. Pratt, Digital Image Processing, Wiley-Interscience, New York, 1978, 497–499.
29. G. S. Robinson, “Color Edge Detection,” Proc. SPIE Symposium on Advances in Image
    Transmission Techniques, 87, San Diego, CA, August 1976.
30. C. M. Cook and A. Rosenfeld, “Size Detectors,” Proc. IEEE Letters, 58, 12, December
    1970, 1956–1957.
31. S. W. Zucker, A. Rosenfeld, and L. S. Davis, “Picture Segmentation by Texture Discrim-
    ination,” IEEE Trans. Computers, C-24, 12, December 1975, 1228–1233.




16
IMAGE FEATURE EXTRACTION




An image feature is a distinguishing primitive characteristic or attribute of an image.
Some features are natural in the sense that such features are defined by the visual
appearance of an image, while other, artificial features result from specific manipu-
lations of an image. Natural features include the luminance of a region of pixels and
gray scale textural regions. Image amplitude histograms and spatial frequency spec-
tra are examples of artificial features.
    Image features are of major importance in the isolation of regions of common
property within an image (image segmentation) and subsequent identification or
labeling of such regions (image classification). Image segmentation is discussed in
Chapter 17. References 1 to 4 provide information on image classification tech-
niques.
    This chapter describes several types of image features that have been proposed
for image segmentation and classification. Before introducing them, however,
methods of evaluating their performance are discussed.


16.1. IMAGE FEATURE EVALUATION

There are two quantitative approaches to the evaluation of image features: prototype
performance and figure of merit. In the prototype performance approach for image
classification, a prototype image with regions (segments) that have been indepen-
dently categorized is classified by a classification procedure using various image
features to be evaluated. The classification error is then measured for each feature
set. The best set of features is, of course, that which results in the least classification
error. The prototype performance approach for image segmentation is similar in
nature. A prototype image with independently identified regions is segmented by a



segmentation procedure using a test set of features. Then, the detected segments are
compared to the known segments, and the segmentation error is evaluated. The
problems associated with the prototype performance methods of feature evaluation
are the integrity of the prototype data and the fact that the performance indication is
dependent not only on the quality of the features but also on the classification or seg-
mentation ability of the classifier or segmenter.
   The figure-of-merit approach to feature evaluation involves the establishment of
some functional distance measurements between sets of image features such that a
large distance implies a low classification error, and vice versa. Faugeras and Pratt
(5) have utilized the Bhattacharyya distance (3) figure-of-merit for texture feature
evaluation. The method should be extensible for other features as well. The Bhatta-
charyya distance (B-distance for simplicity) is a scalar function of the probability
densities of features of a pair of classes defined as

$$B(S_1, S_2) = -\ln \left\{ \int \left[ p(x|S_1) \, p(x|S_2) \right]^{1/2} dx \right\} \qquad\qquad (16.1-1)$$

where x denotes a vector containing individual image feature measurements with
conditional density $p(x|S_i)$. It can be shown (3) that the B-distance is related mono-
tonically to the Chernoff bound for the probability of classification error using a
Bayes classifier. The bound on the error probability is

$$P \le \left[ P(S_1) P(S_2) \right]^{1/2} \exp\{ -B(S_1, S_2) \} \qquad\qquad (16.1-2)$$


where P ( Si ) represents the a priori class probability. For future reference, the Cher-
noff error bound is tabulated in Table 16.1-1 as a function of B-distance for equally
likely feature classes.
   For Gaussian densities, the B-distance becomes

$$B(S_1, S_2) = \frac{1}{8} (u_1 - u_2)^T \left[ \frac{\Sigma_1 + \Sigma_2}{2} \right]^{-1} (u_1 - u_2) + \frac{1}{2} \ln \left\{ \frac{\left| \tfrac{1}{2} (\Sigma_1 + \Sigma_2) \right|}{|\Sigma_1|^{1/2} \, |\Sigma_2|^{1/2}} \right\} \qquad (16.1-3)$$


where $u_i$ and $\Sigma_i$ represent the feature mean vector and the feature covariance matrix
of the classes, respectively. Calculation of the B-distance for other densities is gener-
ally difficult. Consequently, the B-distance figure of merit is applicable only for
Gaussian-distributed feature data, which fortunately is the common case. In prac-
tice, features to be evaluated by Eq. 16.1-3 are measured in regions whose class has
been determined independently. Sufficient feature measurements need be taken so
that the feature mean vector and covariance can be estimated accurately.
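
   A minimal sketch of the Gaussian B-distance of Eq. 16.1-3 and the error bound of Eq. 16.1-2 follows, assuming the feature mean vectors and covariance matrices have already been estimated from independently labeled regions; the function names are illustrative.

```python
import numpy as np

def bhattacharyya_distance(u1, S1, u2, S2):
    """Bhattacharyya distance between two Gaussian feature classes (Eq. 16.1-3)."""
    u1, u2 = np.asarray(u1, dtype=float), np.asarray(u2, dtype=float)
    S1, S2 = np.asarray(S1, dtype=float), np.asarray(S2, dtype=float)
    S = 0.5 * (S1 + S2)
    diff = u1 - u2
    term1 = 0.125 * diff @ np.linalg.solve(S, diff)
    term2 = 0.5 * np.log(np.linalg.det(S)
                         / np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2

def chernoff_bound(B, P1=0.5, P2=0.5):
    """Upper bound on the Bayes classification error probability (Eq. 16.1-2)."""
    return np.sqrt(P1 * P2) * np.exp(-B)
```

For equally likely classes, chernoff_bound(1.0) returns 1.84 × 10⁻¹, reproducing the first entry of Table 16.1-1.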

                  TABLE 16.1-1 Relationship of Bhattacharyya Distance
                  and Chernoff Error Bound

                        $B(S_1, S_2)$          Error Bound
                              1                1.84 × 10⁻¹
                              2                6.77 × 10⁻²
                              4                9.16 × 10⁻³
                              6                1.24 × 10⁻³
                              8                1.68 × 10⁻⁴
                             10                2.27 × 10⁻⁵
                             12                3.07 × 10⁻⁶



16.2. AMPLITUDE FEATURES

The most basic of all image features is some measure of image amplitude in terms of
luminance, tristimulus value, spectral value, or other units. There are many degrees
of freedom in establishing image amplitude features. Image variables such as lumi-
nance or tristimulus values may be utilized directly, or alternatively, some linear,
nonlinear, or perhaps noninvertible transformation can be performed to generate
variables in a new amplitude space. Amplitude measurements may be made at specific
image points [e.g., the amplitude $F(j, k)$ at pixel coordinate $(j, k)$] or over a
neighborhood centered at $(j, k)$. For example, the average or mean image amplitude
in a W × W pixel neighborhood is given by

$$M(j, k) = \frac{1}{W^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} F(j+m, k+n) \qquad\qquad (16.2-1)$$


where W = 2w + 1. An advantage of a neighborhood, as opposed to a point measure-
ment, is a diminishing of noise effects because of the averaging process. A disadvan-
tage is that object edges falling within the neighborhood can lead to erroneous
measurements.
   The median of pixels within a W × W neighborhood can be used as an alternative
amplitude feature to the mean measurement of Eq. 16.2-1, or as an additional
feature. The median is defined to be that pixel amplitude in the window for which
one-half of the pixels are equal or smaller in amplitude, and one-half are equal or
greater in amplitude. Another useful image amplitude feature is the neighborhood
standard deviation, which can be computed as

$$S(j, k) = \left[ \frac{1}{W^2} \sum_{m=-w}^{w} \sum_{n=-w}^{w} \left[ F(j+m, k+n) - M(j+m, k+n) \right]^2 \right]^{1/2} \qquad (16.2-2)$$




                  (a) Original                                  (b) 7 × 7 pyramid mean




          (c) 7 × 7 standard deviation                           (d ) 7 × 7 plus median

      FIGURE 16.2-1. Image amplitude features of the washington_ir image.



In the literature, the standard deviation image feature is sometimes called the image
dispersion. Figure 16.2-1 shows an original image and the mean, median, and stan-
dard deviation of the image computed over a small neighborhood.
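   The mean, standard deviation, and median features map directly onto standard local filters. A minimal sketch follows for a W × W window; it computes the common variant of Eq. 16.2-2 in which the window mean M(j, k) is subtracted, and the filter choices and boundary handling are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def amplitude_features(image, W=7):
    """Local mean, standard deviation, and median over a W x W window."""
    f = image.astype(float)
    mean = uniform_filter(f, size=W, mode='nearest')          # Eq. 16.2-1
    mean_sq = uniform_filter(f ** 2, size=W, mode='nearest')
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))       # local dispersion
    med = median_filter(f, size=W, mode='nearest')            # local median
    return mean, std, med
```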
    The mean and standard deviation of Eqs. 16.2-1 and 16.2-2 can be computed
indirectly in terms of the histogram of image pixels within a neighborhood. This
leads to a class of image amplitude histogram features. Referring to Section 5.7, the
first-order probability distribution of the amplitude of a quantized image may be
defined as

$$P(b) = P_R[ F(j, k) = r_b ] \qquad\qquad (16.2-3)$$

where $r_b$ denotes the quantized amplitude level for $0 \le b \le L - 1$. The first-order his-
togram estimate of P(b) is simply

$$P(b) \approx \frac{N(b)}{M} \qquad\qquad (16.2-4)$$



where M represents the total number of pixels in a neighborhood window centered
about ( j, k ) , and N ( b ) is the number of pixels of amplitude r b in the same window.
   The shape of an image histogram provides many clues as to the character of the
image. For example, a narrowly distributed histogram indicates a low-contrast
image. A bimodal histogram often suggests that the image contains an object with a
narrow amplitude range against a background of differing amplitude. The following
measures have been formulated as quantitative shape descriptions of a first-order
histogram (6).

   Mean:

$$S_M \equiv \bar{b} = \sum_{b=0}^{L-1} b P(b) \qquad\qquad (16.2-5)$$

   Standard deviation:

$$S_D \equiv \sigma_b = \left[ \sum_{b=0}^{L-1} (b - \bar{b})^2 P(b) \right]^{1/2} \qquad\qquad (16.2-6)$$

   Skewness:

$$S_S = \frac{1}{\sigma_b^3} \sum_{b=0}^{L-1} (b - \bar{b})^3 P(b) \qquad\qquad (16.2-7)$$

   Kurtosis:

$$S_K = \frac{1}{\sigma_b^4} \sum_{b=0}^{L-1} (b - \bar{b})^4 P(b) - 3 \qquad\qquad (16.2-8)$$

   Energy:

$$S_N = \sum_{b=0}^{L-1} [P(b)]^2 \qquad\qquad (16.2-9)$$

   Entropy:

$$S_E = -\sum_{b=0}^{L-1} P(b) \log_2 \{ P(b) \} \qquad\qquad (16.2-10)$$


                        FIGURE 16.2-2. Relationship of pixel pairs.



The 3 subtracted in the expression for the kurtosis measure normalizes $S_K$ to
zero for a Gaussian-shaped histogram. Another useful histogram shape
measure is the histogram mode, which is the pixel amplitude corresponding to the
histogram peak (i.e., the most commonly occurring pixel amplitude in the window).
If the histogram peak is not unique, the pixel at the peak closest to the mean is usu-
ally chosen as the histogram shape descriptor.
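   The shape descriptors of Eqs. 16.2-5 to 16.2-10 reduce to simple moment computations on the window histogram. A minimal sketch follows, assuming the window pixels are quantized to L integer levels and the window is not constant (so that σb > 0); the function name is illustrative.

```python
import numpy as np

def histogram_shape_features(window, L=256):
    """First-order histogram shape descriptors (Eqs. 16.2-5 to 16.2-10)."""
    counts = np.bincount(window.ravel().astype(int), minlength=L)
    P = counts / counts.sum()                          # P(b), Eq. 16.2-4
    b = np.arange(L)
    mean = np.sum(b * P)                               # S_M
    sd = np.sqrt(np.sum((b - mean) ** 2 * P))          # S_D
    skew = np.sum((b - mean) ** 3 * P) / sd ** 3       # S_S
    kurt = np.sum((b - mean) ** 4 * P) / sd ** 4 - 3.0 # S_K
    energy = np.sum(P ** 2)                            # S_N
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))    # S_E
    return mean, sd, skew, kurt, energy, entropy
```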
    Second-order histogram features are based on the definition of the joint proba-
bility distribution of pairs of pixels. Consider two pixels F ( j, k ) and F ( m, n ) that
are located at coordinates ( j, k ) and ( m, n ), respectively, and, as shown in Figure
16.2-2, are separated by r radial units at an angle θ with respect to the horizontal
axis. The joint distribution of image amplitude values is then expressed as


$$P(a, b) = P_R[ F(j, k) = r_a, \, F(m, n) = r_b ] \qquad\qquad (16.2-11)$$


where r a and r b represent quantized pixel amplitude values. As a result of the dis-
crete rectilinear representation of an image, the separation parameters ( r, θ ) may
assume only certain discrete values. The histogram estimate of the second-order dis-
tribution is

$$P(a, b) \approx \frac{N(a, b)}{M} \qquad\qquad (16.2-12)$$

where M is the total number of pixels in the measurement window and N ( a, b )
denotes the number of occurrences for which F ( j, k ) = r a and F ( m, n ) = r b .
    If the pixel pairs within an image are highly correlated, the entries in P ( a, b ) will
be clustered along the diagonal of the array. Various measures, listed below, have
been proposed (6,7) to specify the energy spread about the diagonal of $P(a, b)$.

   Autocorrelation:

$$S_A = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a b \, P(a, b) \qquad\qquad (16.2-13)$$

   Covariance:

$$S_C = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - \bar{a})(b - \bar{b}) P(a, b) \qquad\qquad (16.2-14a)$$

   where

$$\bar{a} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} a P(a, b) \qquad\qquad (16.2-14b)$$

$$\bar{b} = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} b P(a, b) \qquad\qquad (16.2-14c)$$

   Inertia:

$$S_I = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} (a - b)^2 P(a, b) \qquad\qquad (16.2-15)$$

   Absolute value:

$$S_V = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} |a - b| \, P(a, b) \qquad\qquad (16.2-16)$$

   Inverse difference:

$$S_F = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} \frac{P(a, b)}{1 + (a - b)^2} \qquad\qquad (16.2-17)$$

   Energy:

$$S_G = \sum_{a=0}^{L-1} \sum_{b=0}^{L-1} [P(a, b)]^2 \qquad\qquad (16.2-18)$$

   Entropy:

$$S_T = -\sum_{a=0}^{L-1} \sum_{b=0}^{L-1} P(a, b) \log_2 \{ P(a, b) \} \qquad\qquad (16.2-19)$$


The utilization of second-order histogram measures for texture analysis is consid-
ered in Section 16.6.
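
   A sketch of the second-order histogram estimate of Eq. 16.2-12 and three of the spread measures above follows. The pixel-pair separation is specified here as integer row and column displacements derived from (r, θ); the function names and default quantization level are assumptions.

```python
import numpy as np

def cooccurrence(window, dr=0, dc=1, L=256):
    """Second-order histogram P(a, b) (Eq. 16.2-12) for pixel pairs
    separated by dr rows and dc columns."""
    w = window.astype(int)
    rows, cols = w.shape
    r0, r1 = max(0, -dr), rows - max(0, dr)
    c0, c1 = max(0, -dc), cols - max(0, dc)
    a = w[r0:r1, c0:c1].ravel()                         # F(j, k)
    b = w[r0 + dr:r1 + dr, c0 + dc:c1 + dc].ravel()     # F(m, n)
    N = np.zeros((L, L))
    np.add.at(N, (a, b), 1)                             # N(a, b): pair counts
    return N / N.sum()

def cooccurrence_features(P):
    """Inertia, energy, and entropy of P(a, b) (Eqs. 16.2-15, -18, -19)."""
    a, b = np.indices(P.shape)
    inertia = np.sum((a - b) ** 2 * P)
    energy = np.sum(P ** 2)
    entropy = -np.sum(P[P > 0] * np.log2(P[P > 0]))
    return inertia, energy, entropy
```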


16.3. TRANSFORM COEFFICIENT FEATURES

The coefficients of a two-dimensional transform of a luminance image specify the
amplitude of the luminance patterns (two-dimensional basis functions) of a trans-
form such that the weighted sum of the luminance patterns is identical to the image.
By this characterization of a transform, the coefficients may be considered to indi-
cate the degree of correspondence of a particular luminance pattern with an image
field. If a basis pattern is of the same spatial form as a feature to be detected within
the image, image detection can be performed simply by monitoring the value of the
transform coefficient. The problem, in practice, is that objects to be detected within
an image are often of complex shape and luminance distribution, and hence do not
correspond closely to the more primitive luminance patterns of most image trans-
forms.
    Lendaris and Stanley (8) have investigated the application of the continuous two-
dimensional Fourier transform of an image, obtained by a coherent optical proces-
sor, as a means of image feature extraction. The optical system produces an electric
field radiation pattern proportional to

$$F(\omega_x, \omega_y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F(x, y) \exp\{ -i (\omega_x x + \omega_y y) \} \, dx \, dy \qquad\qquad (16.3-1)$$


where ( ω x, ω y ) are the image spatial frequencies. An optical sensor produces an out-
put

$$M(\omega_x, \omega_y) = \left| F(\omega_x, \omega_y) \right|^2 \qquad\qquad (16.3-2)$$


proportional to the intensity of the radiation pattern. It should be observed that
F ( ω x, ω y ) and F ( x, y ) are unique transform pairs, but M ( ω x, ω y ) is not uniquely
related to F ( x, y ) . For example, M ( ω x, ω y ) does not change if the origin of F ( x, y )
is shifted. In some applications, the translation invariance of M ( ω x, ω y ) may be a
benefit. Angular integration of M ( ω x, ω y ) over the spatial frequency plane produces
a spatial frequency feature that is invariant to translation and rotation. Representing
M ( ω x, ω y ) in polar form, this feature is defined as


$$N(\rho) = \int_0^{2\pi} M(\rho, \theta) \, d\theta \qquad\qquad (16.3-3)$$

where $\theta = \arctan\{ \omega_x / \omega_y \}$ and $\rho^2 = \omega_x^2 + \omega_y^2$. Invariance to changes in scale is an
attribute of the feature

$$P(\theta) = \int_0^{\infty} M(\rho, \theta) \, d\rho \qquad\qquad (16.3-4)$$




                      FIGURE 16.3-1. Fourier transform feature masks.



   The Fourier domain intensity pattern M ( ω x, ω y ) is normally examined in specific
regions to isolate image features. As an example, Figure 16.3-1 defines regions for
the following Fourier features:

   Horizontal slit:

$$S_1(m) = \int_{-\infty}^{\infty} \int_{\omega_y(m)}^{\omega_y(m+1)} M(\omega_x, \omega_y) \, d\omega_x \, d\omega_y \qquad\qquad (16.3-5)$$

   Vertical slit:

$$S_2(m) = \int_{\omega_x(m)}^{\omega_x(m+1)} \int_{-\infty}^{\infty} M(\omega_x, \omega_y) \, d\omega_x \, d\omega_y \qquad\qquad (16.3-6)$$

   Ring:

$$S_3(m) = \int_{\rho(m)}^{\rho(m+1)} \int_0^{2\pi} M(\rho, \theta) \, d\rho \, d\theta \qquad\qquad (16.3-7)$$

   Sector:

$$S_4(m) = \int_0^{\infty} \int_{\theta(m)}^{\theta(m+1)} M(\rho, \theta) \, d\rho \, d\theta \qquad\qquad (16.3-8)$$




                (a ) Rectangle                               (b ) Rectangle transform




                  (c ) Ellipse                                 (d ) Ellipse transform




                  (e ) Triangle                                (f ) Triangle transform

      FIGURE 16.3-2. Discrete Fourier spectra of objects; log magnitude displays.



  For a discrete image array F ( j, k ) , the discrete Fourier transform

$$F(u, v) = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j, k) \exp\left\{ \frac{-2\pi i}{N} (uj + vk) \right\} \qquad\qquad (16.3-9)$$

for u, v = 0, …, N – 1 can be examined directly for feature extraction purposes. Hor-
izontal slit, vertical slit, ring, and sector features can be defined in a manner analogous to
Eqs. 16.3-5 to 16.3-8. This concept can be extended to other unitary transforms,
such as the Hadamard and Haar transforms. Figure 16.3-2 presents discrete Fourier
transform log magnitude displays of several geometric shapes.
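
   A sketch of discrete ring and sector features analogous to Eqs. 16.3-7 and 16.3-8 follows, computed from the squared magnitude of the centered discrete Fourier transform. The numbers of rings and sectors and the bin boundaries are assumptions.

```python
import numpy as np

def fourier_ring_sector_features(image, n_rings=8, n_sectors=8):
    """Ring and sector sums of the centered DFT power spectrum,
    discrete analogs of Eqs. 16.3-7 and 16.3-8."""
    F = np.fft.fftshift(np.fft.fft2(image))
    M = np.abs(F) ** 2                       # intensity pattern, Eq. 16.3-2
    rows, cols = image.shape
    y, x = np.indices((rows, cols))
    y = y - rows // 2
    x = x - cols // 2
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x) % np.pi         # fold opposite sectors together
    ring_edges = np.linspace(0.0, rho.max() + 1e-9, n_rings + 1)
    sector_edges = np.linspace(0.0, np.pi, n_sectors + 1)
    rings = [M[(rho >= ring_edges[i]) & (rho < ring_edges[i + 1])].sum()
             for i in range(n_rings)]
    sectors = [M[(theta >= sector_edges[i]) & (theta < sector_edges[i + 1])].sum()
               for i in range(n_sectors)]
    return np.array(rings), np.array(sectors)
```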

16.4. TEXTURE DEFINITION

Many portions of images of natural scenes are devoid of sharp edges over large
areas. In these areas, the scene can often be characterized as exhibiting a consistent
structure analogous to the texture of cloth. Image texture measurements can be used
to segment an image and classify its segments.
    Several authors have attempted qualitatively to define texture. Pickett (9) states
that “texture is used to describe two dimensional arrays of variations... The ele-
ments and rules of spacing or arrangement may be arbitrarily manipulated, provided
a characteristic repetitiveness remains.” Hawkins (10) has provided a more detailed
description of texture: “The notion of texture appears to depend upon three ingredi-
ents: (1) some local 'order' is repeated over a region which is large in comparison to
the order's size, (2) the order consists in the nonrandom arrangement of elementary
parts and (3) the parts are roughly uniform entities having approximately the same
dimensions everywhere within the textured region.” Although these descriptions of
texture seem perceptually reasonable, they do not immediately lead to simple quan-
titative textural measures in the sense that the description of an edge discontinuity
leads to a quantitative description of an edge in terms of its location, slope angle,
and height.
    Texture is often qualitatively described by its coarseness in the sense that a patch
of wool cloth is coarser than a patch of silk cloth under the same viewing conditions.
The coarseness index is related to the spatial repetition period of the local structure.
A large period implies a coarse texture; a small period implies a fine texture. This
perceptual coarseness index is clearly not sufficient as a quantitative texture mea-
sure, but can at least be used as a guide for the slope of texture measures; that is,
small numerical texture measures should imply fine texture, and large numerical
measures should indicate coarse texture. It should be recognized that texture is a
neighborhood property of an image point. Therefore, texture measures are inher-
ently dependent on the size of the observation neighborhood. Because texture is a
spatial property, measurements should be restricted to regions of relative uniformity.
Hence it is necessary to establish the boundary of a uniform textural region by some
form of image segmentation before attempting texture measurements.
   Texture may be classified as being artificial or natural. Artificial textures consist of
arrangements of symbols, such as line segments, dots, and stars placed against a
neutral background. Several examples of artificial texture are presented in Figure
16.4-1 (9). As the name implies, natural textures are images of natural scenes con-
taining semirepetitive arrangements of pixels. Examples include photographs
of brick walls, terrazzo tile, sand, and grass. Brodatz (11) has published an album of
photographs of naturally occurring textures. Figure 16.4-2 shows several natural
texture examples obtained by digitizing photographs from the Brodatz album.




                    FIGURE 16.4-1. Artificial texture.




                  (a) Sand                                 (b) Grass




                   (c) Wool                                (d) Raffia

                        FIGURE 16.4-2. Brodatz texture fields.



16.5. VISUAL TEXTURE DISCRIMINATION

A discrete stochastic field is an array of numbers that are randomly distributed in
amplitude and governed by some joint probability density (12). When converted to
light intensities, such fields can be made to approximate natural textures surpris-
ingly well by control of the generating probability density. This technique is useful
for generating realistic appearing artificial scenes for applications such as airplane
flight simulators. Stochastic texture fields are also an extremely useful tool for
investigating human perception of texture as a guide to the development of texture
feature extraction methods.
    In the early 1960s, Julesz (13) attempted to determine the parameters of stochas-
tic texture fields of perceptual importance. This study was extended later by Julesz
et al. (14–16). Further extensions of Julesz’s work have been made by Pollack (17),




                FIGURE 16.5-1. Stochastic texture field generation model.



Purks and Richards (18), and Pratt et al. (19). These studies have provided valuable
insight into the mechanism of human visual perception and have led to some useful
quantitative texture measurement methods.
    Figure 16.5-1 is a model for stochastic texture generation. In this model, an array
of independent, identically distributed random variables W ( j, k ) passes through a
linear or nonlinear spatial operator O { · } to produce a stochastic texture array
F ( j, k ). By controlling the form of the generating probability density p ( W ) and the
spatial operator, it is possible to create texture fields with specified statistical proper-
ties. Consider a continuous amplitude pixel x 0 at some coordinate ( j, k ) in F ( j, k ) .
Let the set { z 1, z 2, …, z J } denote neighboring pixels but not necessarily nearest geo-
metric neighbors, raster scanned in a conventional top-to-bottom, left-to-right fash-
ion. The conditional probability density of x 0 conditioned on the state of its
neighbors is given by

$$p(x_0 | z_1, \ldots, z_J) = \frac{p(x_0, z_1, \ldots, z_J)}{p(z_1, \ldots, z_J)} \qquad\qquad (16.5-1)$$

The first-order density $p(x_0)$ employs no conditioning, the second-order density
$p(x_0 | z_1)$ implies that J = 1, the third-order density implies that J = 2, and so on.
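
   A minimal sketch of the generation model of Figure 16.5-1 follows: an array of independent, identically distributed random variables W(j, k) passed through a spatial operator O{·}, here an illustrative linear smoothing filter followed by an optional pointwise nonlinearity. The Gaussian generating density, the boxcar operator, and the parameter values are assumptions chosen only to demonstrate the model.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def stochastic_texture(size=256, smooth=3, nonlinearity=np.tanh, seed=0):
    """Generate a stochastic texture field F(j, k) from i.i.d. noise W(j, k)
    passed through a spatial operator (Figure 16.5-1 model)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 1.0, (size, size))   # independent, identically distributed
    F = uniform_filter(W, size=smooth)        # linear spatial operator O{.}
    return nonlinearity(F) if nonlinearity is not None else F
```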


16.5.1. Julesz Texture Fields

In his pioneering tex
  • 10. PREFACE In January 1978, I began the preface to the first edition of Digital Image Processing with the following statement: The field of image processing has grown considerably during the past decade with the increased utilization of imagery in myriad applications coupled with improvements in the size, speed, and cost effectiveness of digital computers and related signal processing technologies. Image processing has found a significant role in scientific, industrial, space, and government applications. In January 1991, in the preface to the second edition, I stated: Thirteen years later as I write this preface to the second edition, I find the quoted statement still to be valid. The 1980s have been a decade of significant growth and maturity in this field. At the beginning of that decade, many image processing tech- niques were of academic interest only; their execution was too slow and too costly. Today, thanks to algorithmic and implementation advances, image processing has become a vital cost-effective technology in a host of applications. Now, in this beginning of the twenty-first century, image processing has become a mature engineering discipline. But advances in the theoretical basis of image pro- cessing continue. Some of the reasons for this third edition of the book are to correct defects in the second edition, delete content of marginal interest, and add discussion of new, important topics. Another motivating factor is the inclusion of interactive, computer display imaging examples to illustrate image processing concepts. Finally, this third edition includes computer programming exercises to bolster its theoretical content. These exercises can be implemented using the Programmer’s Imaging Ker- nel System (PIKS) application program interface (API). PIKS is an International xiii
  • 11. xiv PREFACE Standards Organization (ISO) standard library of image processing operators and associated utilities. The PIKS Core version is included on a CD affixed to the back cover of this book. The book is intended to be an “industrial strength” introduction to digital image processing to be used as a text for an electrical engineering or computer science course in the subject. Also, it can be used as a reference manual for scientists who are engaged in image processing research, developers of image processing hardware and software systems, and practicing engineers and scientists who use image pro- cessing as a tool in their applications. Mathematical derivations are provided for most algorithms. The reader is assumed to have a basic background in linear system theory, vector space algebra, and random processes. Proficiency in C language pro- gramming is necessary for execution of the image processing programming exer- cises using PIKS. The book is divided into six parts. The first three parts cover the basic technolo- gies that are needed to support image processing applications. Part 1 contains three chapters concerned with the characterization of continuous images. Topics include the mathematical representation of continuous images, the psychophysical proper- ties of human vision, and photometry and colorimetry. In Part 2, image sampling and quantization techniques are explored along with the mathematical representa- tion of discrete images. Part 3 discusses two-dimensional signal processing tech- niques, including general linear operators and unitary transforms such as the Fourier, Hadamard, and Karhunen–Loeve transforms. The final chapter in Part 3 analyzes and compares linear processing techniques implemented by direct convolu- tion and Fourier domain filtering. The next two parts of the book cover the two principal application areas of image processing. Part 4 presents a discussion of image enhancement and restoration tech- niques, including restoration models, point and spatial restoration, and geometrical image modification. Part 5, entitled “Image Analysis,” concentrates on the extrac- tion of information from an image. Specific topics include morphological image processing, edge detection, image feature extraction, image segmentation, object shape analysis, and object detection. Part 6 discusses the software implementation of image processing applications. This part describes the PIKS API and explains its use as a means of implementing image processing algorithms. Image processing programming exercises are included in Part 6. This third edition represents a major revision of the second edition. In addition to Part 6, new topics include an expanded description of color spaces, the Hartley and Daubechies transforms, wavelet filtering, watershed and snake image segmentation, and Mellin transform matched filtering. Many of the photographic examples in the book are supplemented by executable programs for which readers can adjust algo- rithm parameters and even substitute their own source images. Although readers should find this book reasonably comprehensive, many impor- tant topics allied to the field of digital image processing have been omitted to limit the size and cost of the book. Among the most prominent omissions are the topics of pattern recognition, image reconstruction from projections, image understanding,
  • 12. PREFACE xv image coding, scientific visualization, and computer graphics. References to some of these topics are provided in the bibliography. WILLIAM K. PRATT Los Altos, California August 2000
  • 13. ACKNOWLEDGMENTS The first edition of this book was written while I was a professor of electrical engineering at the University of Southern California (USC). Image processing research at USC began in 1962 on a very modest scale, but the program increased in size and scope with the attendant international interest in the field. In 1971, Dr. Zohrab Kaprielian, then dean of engineering and vice president of academic research and administration, announced the establishment of the USC Image Processing Institute. This environment contributed significantly to the preparation of the first edition. I am deeply grateful to Professor Kaprielian for his role in providing university support of image processing and for his personal interest in my career. Also, I wish to thank the following past and present members of the Institute’s scientific staff who rendered invaluable assistance in the preparation of the first- edition manuscript: Jean-François Abramatic, Harry C. Andrews, Lee D. Davisson, Olivier Faugeras, Werner Frei, Ali Habibi, Anil K. Jain, Richard P. Kruger, Nasser E. Nahi, Ramakant Nevatia, Keith Price, Guner S. Robinson, Alexander A. Sawchuk, and Lloyd R. Welsh. In addition, I sincerely acknowledge the technical help of my graduate students at USC during preparation of the first edition: Ikram Abdou, Behnam Ashjari, Wen-Hsiung Chen, Faramarz Davarian, Michael N. Huhns, Kenneth I. Laws, Sang Uk Lee, Clanton Mancill, Nelson Mascarenhas, Clifford Reader, John Roese, and Robert H. Wallis. The first edition was the outgrowth of notes developed for the USC course “Image Processing.” I wish to thank the many students who suffered through the xvii
  • 14. xviii ACKNOWLEDGMENTS early versions of the notes for their valuable comments. Also, I appreciate the reviews of the notes provided by Harry C. Andrews, Werner Frei, Ali Habibi, and Ernest L. Hall, who taught the course. With regard to the first edition, I wish to offer words of appreciation to the Information Processing Techniques Office of the Advanced Research Projects Agency, directed by Larry G. Roberts, which provided partial financial support of my research at USC. During the academic year 1977–1978, I performed sabbatical research at the Institut de Recherche d’Informatique et Automatique in LeChesney, France and at the Université de Paris. My research was partially supported by these institutions, USC, and a Guggenheim Foundation fellowship. For this support, I am indebted. I left USC in 1979 with the intention of forming a company that would put some of my research ideas into practice. Toward that end, I joined a startup company, Compression Labs, Inc., of San Jose, California. There I worked on the development of facsimile and video coding products with Dr., Wen-Hsiung Chen and Dr. Robert H. Wallis. Concurrently, I directed a design team that developed a digital image processor called VICOM. The early contributors to its hardware and software design were William Bryant, Howard Halverson, Stephen K. Howell, Jeffrey Shaw, and William Zech. In 1981, I formed Vicom Systems, Inc., of San Jose, California, to manufacture and market the VICOM image processor. Many of the photographic examples in this book were processed on a VICOM. Work on the second edition began in 1986. In 1988, I joined Sun Microsystems, of Mountain View, California. At Sun, I collaborated with Stephen A. Howell and Ihtisham Kabir on the development of image processing software. During my time at Sun, I participated in the specification of the Programmers Imaging Kernel application program interface which was made an International Standards Organization standard in 1994. Much of the PIKS content is present in this book. Some of the principal contributors to PIKS include Timothy Butler, Adrian Clark, Patrick Krolak, and Gerard A. Paquette. In 1993, I formed PixelSoft, Inc., of Los Altos, California, to commercialize the PIKS standard. The PIKS Core version of the PixelSoft implementation is affixed to the back cover of this edition. Contributors to its development include Timothy Butler, Larry R. Hubble, and Gerard A. Paquette. In 1996, I joined Photon Dynamics, Inc., of San Jose, California, a manufacturer of machine vision equipment for the inspection of electronics displays and printed circuit boards. There, I collaborated with Larry R. Hubble, Sunil S. Sawkar, and Gerard A. Paquette on the development of several hardware and software products based on PIKS. I wish to thank all those previously cited, and many others too numerous to mention, for their assistance in this industrial phase of my career. Having participated in the design of hardware and software products has been an arduous but intellectually rewarding task. This industrial experience, I believe, has significantly enriched this third edition.
  • 15. ACKNOWLEDGMENTS xix I offer my appreciation to Ray Schmidt, who was responsible for many photo- graphic reproductions in the book, and to Kris Pendelton, who created much of the line art. Also, thanks are given to readers of the first two editions who reported errors both typographical and mental. Most of all, I wish to thank my wife, Shelly, for her support in the writing of the third edition. W. K. P.
  • 16. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) PART 1 CONTINUOUS IMAGE CHARACTERIZATION Although this book is concerned primarily with digital, as opposed to analog, image processing techniques. It should be remembered that most digital images represent continuous natural images. Exceptions are artificial digital images such as test patterns that are numerically created in the computer and images constructed by tomographic systems. Thus, it is important to understand the “physics” of image formation by sensors and optical systems including human visual perception. Another important consideration is the measurement of light in order quantitatively to describe images. Finally, it is useful to establish spatial and temporal characteristics of continuous image fields which provide the basis for the interrelationship of digital image samples. These topics are covered in the following chapters. 1
  • 17. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 1 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION In the design and analysis of image processing systems, it is convenient and often necessary mathematically to characterize the image to be processed. There are two basic mathematical characterizations of interest: deterministic and statistical. In deterministic image representation, a mathematical image function is defined and point properties of the image are considered. For a statistical image representation, the image is specified by average properties. The following sections develop the deterministic and statistical characterization of continuous images. Although the analysis is presented in the context of visual images, many of the results can be extended to general two-dimensional time-varying signals and fields. 1.1. IMAGE REPRESENTATION Let C ( x, y, t, λ ) represent the spatial energy distribution of an image source of radi- ant energy at spatial coordinates (x, y), at time t and wavelength λ . Because light intensity is a real positive quantity, that is, because intensity is proportional to the modulus squared of the electric field, the image light function is real and nonnega- tive. Furthermore, in all practical imaging systems, a small amount of background light is always present. The physical imaging system also imposes some restriction on the maximum intensity of an image, for example, film saturation and cathode ray tube (CRT) phosphor heating. Hence it is assumed that 0 < C ( x, y, t, λ ) ≤ A (1.1-1) 3
  • 18. 4 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION where A is the maximum image intensity. A physical image is necessarily limited in extent by the imaging system and image recording media. For mathematical sim- plicity, all images are assumed to be nonzero only over a rectangular region for which –Lx ≤ x ≤ Lx (1.1-2a) –Ly ≤ y ≤ Ly (1.1-2b) The physical image is, of course, observable only over some finite time interval. Thus let –T ≤ t ≤ T (1.1-2c) The image light function C ( x, y, t, λ ) is, therefore, a bounded four-dimensional function with bounded independent variables. As a final restriction, it is assumed that the image function is continuous over its domain of definition. The intensity response of a standard human observer to an image light function is commonly measured in terms of the instantaneous luminance of the light field as defined by ∞ Y ( x, y, t ) = ∫0 C ( x, y, t, λ )V ( λ ) d λ (1.1-3) where V ( λ ) represents the relative luminous efficiency function, that is, the spectral response of human vision. Similarly, the color response of a standard observer is commonly measured in terms of a set of tristimulus values that are linearly propor- tional to the amounts of red, green, and blue light needed to match a colored light. For an arbitrary red–green–blue coordinate system, the instantaneous tristimulus values are ∞ R ( x, y, t ) = ∫0 C ( x, y, t, λ )RS ( λ ) d λ (1.1-4a) ∞ G ( x, y, t ) = ∫0 C ( x, y, t, λ )G S ( λ ) d λ (1.1-4b) ∞ B ( x, y, t ) = ∫0 C ( x, y, t, λ )B S ( λ ) d λ (1.1-4c) where R S ( λ ) , G S ( λ ) , BS ( λ ) are spectral tristimulus values for the set of red, green, and blue primaries. The spectral tristimulus values are, in effect, the tristimulus
  • 19. TWO-DIMENSIONAL SYSTEMS 5 values required to match a unit amount of narrowband light at wavelength λ . In a multispectral imaging system, the image field observed is modeled as a spectrally weighted integral of the image light function. The ith spectral image field is then given as ∞ F i ( x, y, t ) = ∫0 C ( x, y, t, λ )S i ( λ ) d λ (1.1-5) where S i ( λ ) is the spectral response of the ith sensor. For notational simplicity, a single image function F ( x, y, t ) is selected to repre- sent an image field in a physical imaging system. For a monochrome imaging sys- tem, the image function F ( x, y, t ) nominally denotes the image luminance, or some converted or corrupted physical representation of the luminance, whereas in a color imaging system, F ( x, y, t ) signifies one of the tristimulus values, or some function of the tristimulus value. The image function F ( x, y, t ) is also used to denote general three-dimensional fields, such as the time-varying noise of an image scanner. In correspondence with the standard definition for one-dimensional time signals, the time average of an image function at a given point (x, y) is defined as 1 T 〈 F ( x, y, t )〉 T = lim ----- ∫ F ( x, y, t )L ( t ) dt - (1.1-6) T→∞ 2T –T where L(t) is a time-weighting function. Similarly, the average image brightness at a given time is given by the spatial average, 1 L L 〈 F ( x, y, t )〉 S = lim -------------- ∫ x ∫ y F ( x, y, t ) dx dy (1.1-7) Lx → ∞ 4L x L y –L x –Ly Ly → ∞ In many imaging systems, such as image projection devices, the image does not change with time, and the time variable may be dropped from the image function. For other types of systems, such as movie pictures, the image function is time sam- pled. It is also possible to convert the spatial variation into time variation, as in tele- vision, by an image scanning process. In the subsequent discussion, the time variable is dropped from the image field notation unless specifically required. 1.2. TWO-DIMENSIONAL SYSTEMS A two-dimensional system, in its most general form, is simply a mapping of some input set of two-dimensional functions F1(x, y), F2(x, y),..., FN(x, y) to a set of out- put two-dimensional functions G1(x, y), G2(x, y),..., GM(x, y), where ( – ∞ < x, y < ∞ ) denotes the independent, continuous spatial variables of the functions. This mapping may be represented by the operators O { · } for m = 1, 2,..., M, which relate the input to output set of functions by the set of equations
  • 20. 6 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION G 1 ( x, y ) = O 1 { F 1 ( x, y ), F 2 ( x, y ), …, FN ( x, y ) } … G m ( x, y ) = O m { F 1 ( x, y ), F 2 ( x, y ), …, F N ( x, y ) } (1.2-1) … G M ( x, y ) = O M { F 1 ( x, y ), F2 ( x, y ), …, F N ( x, y ) } In specific cases, the mapping may be many-to-few, few-to-many, or one-to-one. The one-to-one mapping is defined as G ( x, y ) = O { F ( x, y ) } (1.2-2) To proceed further with a discussion of the properties of two-dimensional systems, it is necessary to direct the discourse toward specific types of operators. 1.2.1. Singularity Operators Singularity operators are widely employed in the analysis of two-dimensional systems, especially systems that involve sampling of continuous functions. The two-dimensional Dirac delta function is a singularity operator that possesses the following properties: ε ε ∫–ε ∫–ε δ ( x, y ) dx dy = 1 for ε > 0 (1.2-3a) ∞ ∞ ∫–∞ ∫–∞ F ( ξ, η )δ ( x – ξ, y – η ) d ξ dη = F ( x, y ) (1.2-3b) In Eq. 1.2-3a, ε is an infinitesimally small limit of integration; Eq. 1.2-3b is called the sifting property of the Dirac delta function. The two-dimensional delta function can be decomposed into the product of two one-dimensional delta functions defined along orthonormal coordinates. Thus δ ( x, y ) = δ ( x )δ ( y ) (1.2-4) where the one-dimensional delta function satisfies one-dimensional versions of Eq. 1.2-3. The delta function also can be defined as a limit on a family of functions. General examples are given in References 1 and 2. 1.2.2. Additive Linear Operators A two-dimensional system is said to be an additive linear system if the system obeys the law of additive superposition. In the special case of one-to-one mappings, the additive superposition property requires that
  • 21. TWO-DIMENSIONAL SYSTEMS 7 O { a 1 F 1 ( x, y ) + a 2 F 2 ( x, y ) } = a 1 O { F 1 ( x, y ) } + a 2 O { F 2 ( x, y ) } (1.2-5) where a1 and a2 are constants that are possibly complex numbers. This additive superposition property can easily be extended to the general mapping of Eq. 1.2-1. A system input function F(x, y) can be represented as a sum of amplitude- weighted Dirac delta functions by the sifting integral, ∞ ∞ F ( x, y ) = ∫– ∞ ∫– ∞ F ( ξ, η )δ ( x – ξ, y – η ) d ξ dη (1.2-6) where F ( ξ, η ) is the weighting factor of the impulse located at coordinates ( ξ, η ) in the x–y plane, as shown in Figure 1.2-1. If the output of a general linear one-to-one system is defined to be G ( x, y ) = O { F ( x, y ) } (1.2-7) then  ∞ ∞  G ( x, y ) = O  ∫ ∫ F ( ξ, η )δ ( x – ξ, y – η ) d ξ dη (1.2-8a)  –∞ –∞  or ∞ ∞ G ( x, y ) = ∫–∞ ∫–∞ F ( ξ, η )O { δ ( x – ξ, y – η ) } d ξ dη (1.2-8b) In moving from Eq. 1.2-8a to Eq. 1.2-8b, the application order of the general lin- ear operator O { ⋅ } and the integral operator have been reversed. Also, the linear operator has been applied only to the term in the integrand that is dependent on the FIGURE1.2-1. Decomposition of image function.
  • 22. 8 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION spatial variables (x, y). The second term in the integrand of Eq. 1.2-8b, which is redefined as H ( x, y ; ξ, η) ≡ O { δ ( x – ξ, y – η ) } (1.2-9) is called the impulse response of the two-dimensional system. In optical systems, the impulse response is often called the point spread function of the system. Substitu- tion of the impulse response function into Eq. 1.2-8b yields the additive superposi- tion integral ∞ ∞ G ( x, y ) = ∫–∞ ∫–∞ F ( ξ, η )H ( x, y ; ξ, η) d ξ d η (1.2-10) An additive linear two-dimensional system is called space invariant (isoplanatic) if its impulse response depends only on the factors x – ξ and y – η . In an optical sys- tem, as shown in Figure 1.2-2, this implies that the image of a point source in the focal plane will change only in location, not in functional form, as the placement of the point source moves in the object plane. For a space-invariant system H ( x, y ; ξ, η ) = H ( x – ξ, y – η ) (1.2-11) and the superposition integral reduces to the special case called the convolution inte- gral, given by ∞ ∞ G ( x, y ) = ∫–∞ ∫–∞ F ( ξ, η )H ( x – ξ, y – η ) dξ dη (1.2-12a) Symbolically, G ( x, y ) = F ( x, y ) ᭺ H ( x, y ) * (1.2-12b) FIGURE 1.2-2. Point-source imaging system.
  • 23. TWO-DIMENSIONAL SYSTEMS 9 FIGURE 1.2-3. Graphical example of two-dimensional convolution. denotes the convolution operation. The convolution integral is symmetric in the sense that ∞ ∞ G ( x, y ) = ∫–∞ ∫–∞ F ( x – ξ, y – η )H ( ξ, η ) d ξ dη (1.2-13) Figure 1.2-3 provides a visualization of the convolution process. In Figure 1.2-3a and b, the input function F(x, y) and impulse response are plotted in the dummy coordinate system ( ξ, η ) . Next, in Figures 1.2-3c and d the coordinates of the impulse response are reversed, and the impulse response is offset by the spatial val- ues (x, y). In Figure 1.2-3e, the integrand product of the convolution integral of Eq. 1.2-12 is shown as a crosshatched region. The integral over this region is the value of G(x, y) at the offset coordinate (x, y). The complete function F(x, y) could, in effect, be computed by sequentially scanning the reversed, offset impulse response across the input function and simultaneously integrating the overlapped region. 1.2.3. Differential Operators Edge detection in images is commonly accomplished by performing a spatial differ- entiation of the image field followed by a thresholding operation to determine points of steep amplitude change. Horizontal and vertical spatial derivatives are defined as
  • 24. 10 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION d x = ∂F ( x, y ) ------------------- - (l.2-14a) ∂x ∂F ( x, y ) d y = ------------------- - (l.2-14b) ∂y The directional derivative of the image field along a vector direction z subtending an angle φ with respect to the horizontal axis is given by (3, p. 106) ∂F ( x, y ) ∇{ F ( x, y ) } = ------------------- = d x cos φ + d y sin φ - (l.2-15) ∂z The gradient magnitude is then 2 2 ∇{ F ( x, y ) } = dx + dy (l.2-16) Spatial second derivatives in the horizontal and vertical directions are defined as 2 ∂ F ( x, y ) d xx = ---------------------- (l.2-17a) 2 ∂x 2 ∂ F ( x, y ) d yy = ---------------------- (l.2-17b) 2 ∂y The sum of these two spatial derivatives is called the Laplacian operator: 2 2 ∂ F ( x, y ) ∂ F ( x, y ) ∇2{ F ( x, y ) } = ---------------------- + ---------------------- (l.2-18) 2 2 ∂x ∂y 1.3. TWO-DIMENSIONAL FOURIER TRANSFORM The two-dimensional Fourier transform of the image function F(x, y) is defined as (1,2) ∞ ∞ F ( ω x, ω y ) = ∫–∞ ∫–∞ F ( x, y ) exp { –i ( ωx x + ωy y ) } dx dy (1.3-1) where ω x and ω y are spatial frequencies and i = – 1. Notationally, the Fourier transform is written as
  • 25. TWO-DIMENSIONAL FOURIER TRANSFORM 11 F ( ω x, ω y ) = O F { F ( x, y ) } (1.3-2) In general, the Fourier coefficient F ( ω x, ω y ) is a complex number that may be rep- resented in real and imaginary form, F ( ω x, ω y ) = R ( ω x, ω y ) + iI ( ω x, ω y ) (1.3-3a) or in magnitude and phase-angle form, F ( ω x, ω y ) = M ( ω x, ω y ) exp { iφ ( ω x, ω y ) } (1.3-3b) where 2 2 1⁄2 M ( ω x, ω y ) = [ R ( ω x, ω y ) + I ( ω x, ω y ) ] (1.3-4a)  I ( ω x, ω y )  φ ( ω x, ω y ) = arc tan  -----------------------  - (1.3-4b)  R ( ω x, ω y )  A sufficient condition for the existence of the Fourier transform of F(x, y) is that the function be absolutely integrable. That is, ∞ ∞ ∫–∞ ∫–∞ F ( x, y ) dx dy < ∞ (1.3-5) The input function F(x, y) can be recovered from its Fourier transform by the inver- sion formula 1- ∞ ∞ F ( x, y ) = -------- ∫ ∫ F ( ω x, ω y ) exp { i ( ω x x + ω y y ) } dω x dω y (1.3-6a) 2 4π –∞ – ∞ or in operator form –1 F ( x, y ) = O F { F ( ω x, ω y ) } (1.3-6b) The functions F(x, y) and F ( ω x, ω y ) are called Fourier transform pairs.
  • 26. 12 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION The two-dimensional Fourier transform can be computed in two steps as a result of the separability of the kernel. Thus, let ∞ F y ( ω x, y ) = ∫–∞ F ( x, y ) exp { –i ( ωx x ) } dx (1.3-7) then ∞ F ( ω x, ω y ) = ∫–∞ F y ( ωx, y ) exp { –i ( ωy y ) } dy (1.3-8) Several useful properties of the two-dimensional Fourier transform are stated below. Proofs are given in References 1 and 2. Separability. If the image function is spatially separable such that F ( x, y ) = f x ( x )f y ( y ) (1.3-9) then F y ( ω x, ω y ) = f x ( ω x )f y ( ω y ) (1.3-10) where f x ( ω x ) and f y ( ω y ) are one-dimensional Fourier transforms of f x ( x ) and f y ( y ), respectively. Also, if F ( x, y ) and F ( ω x, ω y ) are two-dimensional Fourier transform pairs, the Fourier transform of F ∗ ( x, y ) is F ∗ ( – ω x, – ω y ) . An asterisk∗ used as a superscript denotes complex conjugation of a variable (i.e. if F = A + iB, then F ∗ = A – iB ). Finally, if F ( x, y ) is symmetric such that F ( x, y ) = F ( – x, – y ), then F ( ω x, ω y ) = F ( – ω x, – ω y ). Linearity. The Fourier transform is a linear operator. Thus O F { aF 1 ( x, y ) + bF 2 ( x, y ) } = aF 1 ( ω x, ω y ) + bF 2 ( ω x, ω y ) (1.3-11) where a and b are constants. Scaling. A linear scaling of the spatial variables results in an inverse scaling of the spatial frequencies as given by 1 ω x ωy O F { F ( ax, by ) } = -------- F  ----- , -----  - - - (1.3-12) ab  a b 
  • 27. TWO-DIMENSIONAL FOURIER TRANSFORM 13 Hence, stretching of an axis in one domain results in a contraction of the corre- sponding axis in the other domain plus an amplitude change. Shift. A positional shift in the input plane results in a phase shift in the output plane: OF { F ( x – a, y – b ) } = F ( ω x, ω y ) exp { – i ( ω x a + ω y b ) } (1.3-13a) Alternatively, a frequency shift in the Fourier plane results in the equivalence –1 OF { F ( ω x – a, ω y – b ) } = F ( x, y ) exp { i ( ax + by ) } (1.3-13b) Convolution. The two-dimensional Fourier transform of two convolved functions is equal to the products of the transforms of the functions. Thus OF { F ( x, y ) ᭺ H ( x, y ) } = F ( ω x, ω y )H ( ω x, ω y ) * (1.3-14) The inverse theorem states that 1 OF { F ( x, y )H ( x, y ) } = -------- F ( ω x, ω y ) ᭺ H ( ω x, ω y ) - * (1.3-15) 2 4π Parseval 's Theorem. The energy in the spatial and Fourier transform domains is related by ∞ ∞ 2 1 ∞ ∞ 2 ∫–∞ ∫–∞ F ( x, y ) dx dy = -------- ∫ ∫ F ( ω x, ω y ) dω x dω y 4π - 2 –∞ –∞ (1.3-16) Autocorrelation Theorem. The Fourier transform of the spatial autocorrelation of a function is equal to the magnitude squared of its Fourier transform. Hence  ∞ ∞  2 OF  ∫ ∫ F ( α, β )F∗ ( α – x, β – y ) dα dβ = F ( ω x, ω y ) (1.3-17)  –∞ –∞  Spatial Differentials. The Fourier transform of the directional derivative of an image function is related to the Fourier transform by  ∂F ( x, y )  OF  -------------------  = – i ω x F ( ω x, ω y ) - (1.3-18a)  ∂x 
  • 28. 14 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION  ∂F ( x, y )  OF  -------------------  = – i ω y F ( ω x, ω y ) - (1.3-18b)  ∂y  Consequently, the Fourier transform of the Laplacian of an image function is equal to  ∂2 F ( x, y ) 2  OF  ---------------------- + ∂ F ( x, y )  = – ( ω x + ω y ) F ( ω x, ω y ) 2 2 ---------------------- (1.3-19) 2 2  ∂x ∂y  The Fourier transform convolution theorem stated by Eq. 1.3-14 is an extremely useful tool for the analysis of additive linear systems. Consider an image function F ( x, y ) that is the input to an additive linear system with an impulse response H ( x, y ) . The output image function is given by the convolution integral ∞ ∞ G ( x, y ) = ∫–∞ ∫–∞ F ( α, β )H ( x – α, y – β ) dα dβ (1.3-20) Taking the Fourier transform of both sides of Eq. 1.3-20 and reversing the order of integration on the right-hand side results in ∞ ∞ ∞ ∞ G ( ω x, ω y ) = ∫–∞ ∫–∞ F ( α, β ) ∫–∞ ∫–∞ H ( x – α, y – β ) exp { – i ( ωx x + ωy y ) } dx dy dα dβ (1.3-21) By the Fourier transform shift theorem of Eq. 1.3-13, the inner integral is equal to the Fourier transform of H ( x, y ) multiplied by an exponential phase-shift factor. Thus ∞ ∞ G ( ω x, ω y ) = ∫–∞ ∫–∞ F ( α, β )H ( ωx, ω y ) exp { – i ( ω x α + ωy β ) } dα dβ (1.3-22) Performing the indicated Fourier transformation gives G ( ω x, ω y ) = H ( ω x, ω y )F ( ω x, ω y ) (1.3-23) Then an inverse transformation of Eq. 1.3-23 provides the output image function 1 ∞ ∞ G ( x, y ) = -------- ∫ ∫ H ( ω x, ω y )F ( ω x, ω y ) exp { i ( ω x x + ω y y ) } dω x dω y - (1.3-24) 2 4π –∞ –∞
  • 29. IMAGE STOCHASTIC CHARACTERIZATION 15 Equations 1.3-20 and 1.3-24 represent two alternative means of determining the out- put image response of an additive, linear, space-invariant system. The analytic or operational choice between the two approaches, convolution or Fourier processing, is usually problem dependent. 1.4. IMAGE STOCHASTIC CHARACTERIZATION The following presentation on the statistical characterization of images assumes general familiarity with probability theory, random variables, and stochastic pro- cess. References 2 and 4 to 7 can provide suitable background. The primary purpose of the discussion here is to introduce notation and develop stochastic image models. It is often convenient to regard an image as a sample of a stochastic process. For continuous images, the image function F(x, y, t) is assumed to be a member of a con- tinuous three-dimensional stochastic process with space variables (x, y) and time variable (t). The stochastic process F(x, y, t) can be described completely by knowledge of its joint probability density p { F 1, F2 …, F J ; x 1, y 1, t 1, x 2, y 2, t 2, …, xJ , yJ , tJ } for all sample points J, where (xj, yj, tj) represent space and time samples of image function Fj(xj, yj, tj). In general, high-order joint probability densities of images are usually not known, nor are they easily modeled. The first-order probability density p(F; x, y, t) can sometimes be modeled successfully on the basis of the physics of the process or histogram measurements. For example, the first-order probability density of random noise from an electronic sensor is usually well modeled by a Gaussian density of the form 2 2 –1 ⁄ 2  [ F ( x, y, t ) – η F ( x, y, t ) ]  p { F ; x, y, t} = [ 2πσ F ( x, y, t ) ] exp  – -----------------------------------------------------------  - (1.4-1) 2  2σ F ( x, y, t )  2 where the parameters η F ( x, y, t ) and σ F ( x, y, t ) denote the mean and variance of the process. The Gaussian density is also a reasonably accurate model for the probabil- ity density of the amplitude of unitary transform coefficients of an image. The probability density of the luminance function must be a one-sided density because the luminance measure is positive. Models that have found application include the Rayleigh density, F ( x, y, t )  [ F ( x, y, t ) ] 2  p { F ; x, y, t } = --------------------- exp  – ----------------------------  - (1.4-2a) 2 2 α  2α  the log-normal density,
  • 30. 16 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION 2 2 2 –1 ⁄ 2  [ log { F ( x, y, t ) } – η F ( x, y, t ) ]  p { F ; x, y, t} = [ 2πF ( x, y, t )σ F ( x, y, t ) ] exp  – --------------------------------------------------------------------------  - 2  2σ F ( x, y, t )  (1.4-2b) and the exponential density, p {F ; x, y, t} = α exp{ – α F ( x, y, t ) } (1.4-2c) all defined for F ≥ 0, where α is a constant. The two-sided, or Laplacian density, α p { F ; x, y, t} = --- exp{ – α F ( x, y, t ) } (1.4-3) 2 where α is a constant, is often selected as a model for the probability density of the difference of image samples. Finally, the uniform density 1 p { F ; x, y, t} = ----- - (1.4-4) 2π for – π ≤ F ≤ π is a common model for phase fluctuations of a random process. Con- ditional probability densities are also useful in characterizing a stochastic process. The conditional density of an image function evaluated at ( x 1, y 1, t 1 ) given knowl- edge of the image function at ( x 2, y 2, t 2 ) is defined as p { F 1, F 2 ; x 1, y 1, t 1, x 2, y 2, t 2} p { F 1 ; x 1, y 1, t 1 F2 ; x 2, y 2, t 2} = ------------------------------------------------------------------------ (1.4-5) p { F 2 ; x 2, y 2, t2} Higher-order conditional densities are defined in a similar manner. Another means of describing a stochastic process is through computation of its ensemble averages. The first moment or mean of the image function is defined as ∞ η F ( x, y, t ) = E { F ( x, y, t ) } = ∫– ∞ F ( x, y, t )p { F ; x, y, t} dF (1.4-6) where E { · } is the expectation operator, as defined by the right-hand side of Eq. 1.4-6. The second moment or autocorrelation function is given by R ( x 1, y 1, t 1 ; x 2, y 2, t 2) = E { F ( x 1, y 1, t 1 )F ∗ ( x 2, y 2, t 2 ) } (1.4-7a) or in explicit form
  • 31. IMAGE STOCHASTIC CHARACTERIZATION 17 ∞ ∞ R ( x 1, y 1, t 1 ; x 2, y 2, t2 ) = ∫–∞ ∫–∞ F ( x1, x1, y1 )F∗ ( x2, y2, t2 ) × p { F 1, F 2 ; x 1, y 1, t1, x 2, y 2, t 2 } dF 1 dF2 (1.4-7b) The autocovariance of the image process is the autocorrelation about the mean, defined as K ( x1, y 1, t1 ; x 2, y 2, t2) = E { [ F ( x 1, y 1, t1 ) – η F ( x 1, y 1, t 1 ) ] [ F∗ ( x 2, y 2, t 2 ) – η∗ ( x 2, y 2, t2 ) ] } F (1.4-8a) or K ( x 1, y 1, t 1 ; x 2, y 2, t2) = R ( x1, y 1, t1 ; x 2, y 2, t2) – η F ( x 1, y 1, t1 ) η∗ ( x 2, y 2, t 2 ) F (1.4-8b) Finally, the variance of an image process is 2 σ F ( x, y, t ) = K ( x, y, t ; x, y, t ) (1.4-9) An image process is called stationary in the strict sense if its moments are unaf- fected by shifts in the space and time origins. The image process is said to be sta- tionary in the wide sense if its mean is constant and its autocorrelation is dependent on the differences in the image coordinates, x1 – x2, y1 – y2, t1 – t2, and not on their individual values. In other words, the image autocorrelation is not a function of position or time. For stationary image processes, E { F ( x, y, t ) } = η F (1.4-10a) R ( x 1, y 1, t 1 ; x 2, y 2, t 2) = R ( x1 – x 2, y 1 – y 2, t1 – t 2 ) (1.4-10b) The autocorrelation expression may then be written as R ( τx, τy, τt ) = E { F ( x + τ x, y + τy, t + τ t )F∗ ( x, y, t ) } (1.4-11)
  • 32. 18 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION Because R ( – τx, – τ y, – τ t ) = R∗ ( τx, τy, τt ) (1.4-12) then for an image function with F real, the autocorrelation is real and an even func- tion of τ x, τ y, τ t . The power spectral density, also called the power spectrum, of a stationary image process is defined as the three-dimensional Fourier transform of its autocorrelation function as given by ∞ ∞ ∞ W ( ω x, ω y, ω t ) = ∫–∞ ∫–∞ ∫–∞ R ( τx, τy, τt ) exp { –i ( ωx τx + ωy τy + ω t τt ) } dτx dτy dτt (1.4-13) In many imaging systems, the spatial and time image processes are separable so that the stationary correlation function may be written as R ( τx, τy, τt ) = R xy ( τx, τy )Rt ( τ t ) (1.4-14) Furthermore, the spatial autocorrelation function is often considered as the product of x and y axis autocorrelation functions, R xy ( τ x, τ y ) = Rx ( τ x )R y ( τ y ) (1.4-15) for computational simplicity. For scenes of manufactured objects, there is often a large amount of horizontal and vertical image structure, and the spatial separation approximation may be quite good. In natural scenes, there usually is no preferential direction of correlation; the spatial autocorrelation function tends to be rotationally symmetric and not separable. An image field is often modeled as a sample of a first-order Markov process for which the correlation between points on the image field is proportional to their geo- metric separation. The autocovariance function for the two-dimensional Markov process is  2 2 2 2 R xy ( τ x, τ y ) = C exp – α x τ x + α y τ y  (1.4-16)   where C is an energy scaling constant and α x and α y are spatial scaling constants. The corresponding power spectrum is 1 - 2C W ( ω x, ω y ) = --------------- ----------------------------------------------------- - (1.4-17) α x αy 1 + [ ωx ⁄ α2 + ω2 ⁄ α2 ] 2 x y y
  • 33. IMAGE STOCHASTIC CHARACTERIZATION 19 As a simplifying assumption, the Markov process is often assumed to be of separa- ble form with an autocovariance function K xy ( τx, τy ) = C exp { – α x τ x – α y τ y } (1.4-18) The power spectrum of this process is 4α x α y C W ( ω x, ω y ) = ------------------------------------------------ (1.4-19) 2 2 2 2 ( α x + ω x ) ( α y + ωy ) In the discussion of the deterministic characteristics of an image, both time and space averages of the image function have been defined. An ensemble average has also been defined for the statistical image characterization. A question of interest is: What is the relationship between the spatial-time averages and the ensemble aver- ages? The answer is that for certain stochastic processes, which are called ergodic processes, the spatial-time averages and the ensemble averages are equal. Proof of the ergodicity of a process in the general case is often difficult; it usually suffices to determine second-order ergodicity in which the first- and second-order space-time averages are equal to the first- and second-order ensemble averages. Often, the probability density or moments of a stochastic image field are known at the input to a system, and it is desired to determine the corresponding information at the system output. If the system transfer function is algebraic in nature, the output probability density can be determined in terms of the input probability density by a probability density transformation. For example, let the system output be related to the system input by G ( x, y, t ) = O F { F ( x, y, t ) } (1.4-20) where O F { · } is a monotonic operator on F(x, y). The probability density of the out- put field is then p { F ; x, y, t} p { G ; x, y, t} = ---------------------------------------------------- - (1.4-21) dO F { F ( x, y, t ) } ⁄ dF The extension to higher-order probability densities is straightforward, but often cumbersome. The moments of the output of a system can be obtained directly from knowledge of the output probability density, or in certain cases, indirectly in terms of the system operator. For example, if the system operator is additive linear, the mean of the sys- tem output is
  • 34. 20 CONTINUOUS IMAGE MATHEMATICAL CHARACTERIZATION E { G ( x, y, t ) } = E { O F { F ( x, y, t ) } } = O F { E { F ( x, y, t ) } } (1.4-22) It can be shown that if a system operator is additive linear, and if the system input image field is stationary in the strict sense, the system output is also stationary in the strict sense. Furthermore, if the input is stationary in the wide sense, the output is also wide-sense stationary. Consider an additive linear space-invariant system whose output is described by the three-dimensional convolution integral ∞ ∞ ∞ G ( x, y, t ) = ∫–∞ ∫–∞ ∫–∞ F ( x – α, y – β, t – γ )H ( α, β, γ ) dα d β dγ (1.4-23) where H(x, y, t) is the system impulse response. The mean of the output is then ∞ ∞ ∞ E { G ( x, y, t ) } = ∫–∞ ∫–∞ ∫–∞ E { F ( x – α, y – β, t – γ ) }H ( α, β, γ ) dα dβ dγ (1.4-24) If the input image field is stationary, its mean η F is a constant that may be brought outside the integral. As a result, ∞ ∞ ∞ E { G ( x, y, t ) } = η F ∫ ∫ ∫ H ( α, β, γ ) dα dβ dγ = η F H ( 0, 0, 0 ) (1.4-25) –∞ –∞ – ∞ where H ( 0, 0, 0 ) is the transfer function of the linear system evaluated at the origin in the spatial-time frequency domain. Following the same techniques, it can easily be shown that the autocorrelation functions of the system input and output are related by R G ( τ x, τ y, τ t ) = RF ( τx, τy, τt ) ᭺ H ( τx, τ y, τ t ) ᭺ H ∗ ( – τx, – τ y, – τ t ) * * (1.4-26) Taking Fourier transforms on both sides of Eq. 1.4-26 and invoking the Fourier transform convolution theorem, one obtains the relationship between the power spectra of the input and output image, W G ( ω x, ω y, ω t ) = W F ( ω x, ω y, ω t )H ( ω x, ω y, ω t )H ∗ ( ω x, ω y, ω t ) (1.4-27a) or 2 WG ( ω x, ω y, ω t ) = W F ( ω x, ω y, ω t ) H ( ω x, ω y, ω t ) (1.4-27b) This result is found useful in analyzing the effect of noise in imaging systems.
  • 35. REFERENCES 21 REFERENCES 1. J. W. Goodman, Introduction to Fourier Optics, 2nd Ed., McGraw-Hill, New York, 1996. 2. A. Papoulis, Systems and Transforms with Applications in Optics, McGraw-Hill, New York, 1968. 3. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psy- chopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970. 4. A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed., McGraw-Hill, New York, 1991. 5. J. B. Thomas, An Introduction to Applied Probability Theory and Random Processes, Wiley, New York, 1971. 6. J. W. Goodman, Statistical Optics, Wiley, New York, 1985. 7. E. R. Dougherty, Random Processes for Image and Signal Processing, Vol. PM44, SPIE Press, Bellingham, Wash., 1998.
2 PSYCHOPHYSICAL VISION PROPERTIES

For efficient design of imaging systems for which the output is a photograph or display to be viewed by a human observer, it is obviously beneficial to have an understanding of the mechanism of human vision. Such knowledge can be utilized to develop conceptual models of the human visual process. These models are vital in the design of image processing systems and in the construction of measures of image fidelity and intelligibility.

2.1. LIGHT PERCEPTION

Light, according to Webster's Dictionary (1), is “radiant energy which, by its action on the organs of vision, enables them to perform their function of sight.” Much is known about the physical properties of light, but the mechanisms by which light interacts with the organs of vision are not as well understood. Light is known to be a form of electromagnetic radiation lying in a relatively narrow region of the electromagnetic spectrum over a wavelength band of about 350 to 780 nanometers (nm). A physical light source may be characterized by the rate of radiant energy (radiant intensity) that it emits at a particular spectral wavelength. Light entering the human visual system originates either from a self-luminous source, from light reflected from some object, or from light transmitted through some translucent object. Let E(λ) represent the spectral energy distribution of light emitted from some primary light source, and let t(λ) and r(λ) denote the wavelength-dependent transmissivity and reflectivity, respectively, of an object. Then, for a transmissive object, the observed light spectral energy distribution is

C(λ) = t(λ)E(λ)    (2.1-1)
  • 37. 24 PSYCHOPHYSICAL VISION PROPERTIES FIGURE 2.1-1. Spectral energy distributions of common physical light sources. and for a reflective object C ( λ ) = r ( λ )E ( λ ) (2.1-2) Figure 2.1-1 shows plots of the spectral energy distribution of several common sources of light encountered in imaging systems: sunlight, a tungsten lamp, a
  • 38. LIGHT PERCEPTION 25 light-emitting diode, a mercury arc lamp, and a helium–neon laser (2). A human being viewing each of the light sources will perceive the sources differently. Sun- light appears as an extremely bright yellowish-white light, while the tungsten light bulb appears less bright and somewhat yellowish. The light-emitting diode appears to be a dim green; the mercury arc light is a highly bright bluish-white light; and the laser produces an extremely bright and pure red beam. These observations provoke many questions. What are the attributes of the light sources that cause them to be perceived differently? Is the spectral energy distribution sufficient to explain the dif- ferences in perception? If not, what are adequate descriptors of visual perception? As will be seen, answers to these questions are only partially available. There are three common perceptual descriptors of a light sensation: brightness, hue, and saturation. The characteristics of these descriptors are considered below. If two light sources with the same spectral shape are observed, the source of greater physical intensity will generally appear to be perceptually brighter. How- ever, there are numerous examples in which an object of uniform intensity appears not to be of uniform brightness. Therefore, intensity is not an adequate quantitative measure of brightness. The attribute of light that distinguishes a red light from a green light or a yellow light, for example, is called the hue of the light. A prism and slit arrangement (Figure 2.1-2) can produce narrowband wavelength light of varying color. However, it is clear that the light wavelength is not an adequate measure of color because some colored lights encountered in nature are not contained in the rainbow of light produced by a prism. For example, purple light is absent. Purple light can be produced by combining equal amounts of red and blue narrowband lights. Other counterexamples exist. If two light sources with the same spectral energy distribu- tion are observed under identical conditions, they will appear to possess the same hue. However, it is possible to have two light sources with different spectral energy distributions that are perceived identically. Such lights are called metameric pairs. The third perceptual descriptor of a colored light is its saturation, the attribute that distinguishes a spectral light from a pastel light of the same hue. In effect, satu- ration describes the whiteness of a light source. Although it is possible to speak of the percentage saturation of a color referenced to a spectral color on a chromaticity diagram of the type shown in Figure 3.3-3, saturation is not usually considered to be a quantitative measure. FIGURE 2.1-2. Refraction of light from a prism.
  • 39. 26 PSYCHOPHYSICAL VISION PROPERTIES FIGURE 2.1-3. Perceptual representation of light. As an aid to classifying colors, it is convenient to regard colors as being points in some color solid, as shown in Figure 2.1-3. The Munsell system of color classification actually has a form similar in shape to this figure (3). However, to be quantitatively useful, a color solid should possess metric significance. That is, a unit distance within the color solid should represent a constant perceptual color difference regardless of the particular pair of colors considered. The subject of perceptually significant color solids is considered later. 2.2. EYE PHYSIOLOGY A conceptual technique for the establishment of a model of the human visual system would be to perform a physiological analysis of the eye, the nerve paths to the brain, and those parts of the brain involved in visual perception. Such a task, of course, is presently beyond human abilities because of the large number of infinitesimally small elements in the visual chain. However, much has been learned from physio- logical studies of the eye that is helpful in the development of visual models (4–7).
  • 40. EYE PHYSIOLOGY 27 FIGURE 2.2-1. Eye cross section. Figure 2.2-1 shows the horizontal cross section of a human eyeball. The front of the eye is covered by a transparent surface called the cornea. The remaining outer cover, called the sclera, is composed of a fibrous coat that surrounds the choroid, a layer containing blood capillaries. Inside the choroid is the retina, which is com- posed of two types of receptors: rods and cones. Nerves connecting to the retina leave the eyeball through the optic nerve bundle. Light entering the cornea is focused on the retina surface by a lens that changes shape under muscular control to FIGURE 2.2-2. Sensitivity of rods and cones based on measurements by Wald.
perform proper focusing of near and distant objects. An iris acts as a diaphragm to control the amount of light entering the eye. The rods in the retina are long slender receptors; the cones are generally shorter and thicker in structure. There are also important operational distinctions. The rods are more sensitive than the cones to light. At low levels of illumination, the rods provide a visual response called scotopic vision. Cones respond to higher levels of illumination; their response is called photopic vision. Figure 2.2-2 illustrates the relative sensitivities of rods and cones as a function of illumination wavelength (7,8). An eye contains about 6.5 million cones and 100 million rods distributed over the retina (4). Figure 2.2-3 shows the distribution of rods and cones over a horizontal line on the retina (4). At a point near the optic nerve called the fovea, the density of cones is greatest. This is the region of sharpest photopic vision. There are no rods or cones in the vicinity of the optic nerve, and hence the eye has a blind spot in this region.

FIGURE 2.2-3. Distribution of rods and cones on the retina.
  • 42. VISUAL PHENOMENA 29 FIGURE 2.2-4. Typical spectral absorption curves of pigments of the retina. In recent years, it has been determined experimentally that there are three basic types of cones in the retina (9, 10). These cones have different absorption character- istics as a function of wavelength with peak absorptions in the red, green, and blue regions of the optical spectrum. Figure 2.2-4 shows curves of the measured spectral absorption of pigments in the retina for a particular subject (10). Two major points of note regarding the curves are that the α cones, which are primarily responsible for blue light perception, have relatively low sensitivity, and the absorption curves overlap considerably. The existence of the three types of cones provides a physio- logical basis for the trichromatic theory of color vision. When a light stimulus activates a rod or cone, a photochemical transition occurs, producing a nerve impulse. The manner in which nerve impulses propagate through the visual system is presently not well established. It is known that the optic nerve bundle contains on the order of 800,000 nerve fibers. Because there are over 100,000,000 receptors in the retina, it is obvious that in many regions of the retina, the rods and cones must be interconnected to nerve fibers on a many-to-one basis. Because neither the photochemistry of the retina nor the propagation of nerve impulses within the eye is well understood, a deterministic characterization of the visual process is unavailable. One must be satisfied with the establishment of mod- els that characterize, and hopefully predict, human visual response. The following section describes several visual phenomena that should be considered in the model- ing of the human visual process. 2.3. VISUAL PHENOMENA The visual phenomena described below are interrelated, in some cases only mini- mally, but in others, to a very large extent. For simplification in presentation and, in some instances, lack of knowledge, the phenomena are considered disjoint.
  • 43. 30 PSYCHOPHYSICAL VISION PROPERTIES . (a) No background (b) With background FIGURE 2.3-1. Contrast sensitivity measurements. Contrast Sensitivity. The response of the eye to changes in the intensity of illumina- tion is known to be nonlinear. Consider a patch of light of intensity I + ∆I surrounded by a background of intensity I (Figure 2.3-1a). The just noticeable difference ∆I is to be determined as a function of I. Over a wide range of intensities, it is found that the ratio ∆I ⁄ I , called the Weber fraction, is nearly constant at a value of about 0.02 (11; 12, p. 62). This result does not hold at very low or very high intensities, as illus- trated by Figure 2.3-1a (13). Furthermore, contrast sensitivity is dependent on the intensity of the surround. Consider the experiment of Figure 2.3-1b, in which two patches of light, one of intensity I and the other of intensity I + ∆I , are sur- rounded by light of intensity Io. The Weber fraction ∆I ⁄ I for this experiment is plotted in Figure 2.3-1b as a function of the intensity of the background. In this situation it is found that the range over which the Weber fraction remains constant is reduced considerably compared to the experiment of Figure 2.3-1a. The envelope of the lower limits of the curves of Figure 2.3-lb is equivalent to the curve of Figure 2.3-1a. However, the range over which ∆I ⁄ I is approximately constant for a fixed background intensity I o is still comparable to the dynamic range of most electronic imaging systems.
FIGURE 2.3-2. Mach band effect: (a) step chart photo; (b) step chart intensity distribution; (c) ramp chart photo (positions D and B marked); (d) ramp chart intensity distribution.
Because the differential of the logarithm of intensity is

d(log I) = dI / I    (2.3-1)

equal changes in the logarithm of the intensity of a light can be related to equal just noticeable changes in its intensity over the region of intensities for which the Weber fraction is constant. For this reason, in many image processing systems, operations are performed on the logarithm of the intensity of an image point rather than the intensity.

Mach Band. Consider the set of gray scale strips shown in Figure 2.3-2a. The reflected light intensity from each strip is uniform over its width and differs from its neighbors by a constant amount; nevertheless, the visual appearance is that each strip is darker at its right side than at its left. This is called the Mach band effect (14). Figure 2.3-2c is a photograph of the Mach band pattern of Figure 2.3-2d. In the photograph, a bright bar appears at position B and a dark bar appears at D. Neither bar would be predicted purely on the basis of the intensity distribution. The apparent Mach band overshoot in brightness is a consequence of the spatial frequency response of the eye. As will be seen shortly, the eye possesses a lower sensitivity to high and low spatial frequencies than to midfrequencies. The implication for the designer of image processing systems is that perfect fidelity of edge contours can be sacrificed to some extent because the eye has imperfect response to high-spatial-frequency brightness transitions.

Simultaneous Contrast. The simultaneous contrast phenomenon (7) is illustrated in Figure 2.3-3. Each small square is actually the same intensity, but because of the different intensities of the surrounds, the small squares do not appear equally bright. The hue of a patch of light is also dependent on the wavelength composition of surrounding light. A white patch on a black background will appear to be yellowish if the surround is a blue light.

Chromatic Adaption. The hue of a perceived color depends on the adaption of a viewer (15). For example, the American flag will not immediately appear red, white, and blue if the viewer has been subjected to high-intensity red light before viewing the flag. The colors of the flag will appear to shift in hue toward the red complement, cyan.

FIGURE 2.3-3. Simultaneous contrast.
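Since Eq. 2.3-1 is the usual motivation for operating on log intensity, here is a minimal Python sketch of such a mapping. The intensity range, the clipping limits, and the normalization to [0, 1] are illustrative choices, not values taken from the text.

```python
import numpy as np

def log_transform(intensity, i_min=1.0, i_max=255.0):
    """Map intensity to a logarithmic scale normalized to [0, 1].

    In the region where the Weber fraction is constant, equal steps on this
    scale correspond roughly to equal just-noticeable intensity ratios.
    """
    intensity = np.clip(np.asarray(intensity, dtype=float), i_min, i_max)
    return (np.log(intensity) - np.log(i_min)) / (np.log(i_max) - np.log(i_min))

# Intensities in a fixed ratio (1.02, roughly one Weber just-noticeable step)
# map to equally spaced values on the logarithmic scale.
steps = 100.0 * 1.02 ** np.arange(5)
print(np.round(np.diff(log_transform(steps)), 5))   # constant differences
```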
  • 46. MONOCHROME VISION MODEL 33 Color Blindness. Approximately 8% of the males and 1% of the females in the world population are subject to some form of color blindness (16, p. 405). There are various degrees of color blindness. Some people, called monochromats, possess only rods or rods plus one type of cone, and therefore are only capable of monochro- matic vision. Dichromats are people who possess two of the three types of cones. Both monochromats and dichromats can distinguish colors insofar as they have learned to associate particular colors with particular objects. For example, dark roses are assumed to be red, and light roses are assumed to be yellow. But if a red rose were painted yellow such that its reflectivity was maintained at the same value, a monochromat might still call the rose red. Similar examples illustrate the inability of dichromats to distinguish hue accurately. 2.4. MONOCHROME VISION MODEL One of the modern techniques of optical system design entails the treatment of an optical system as a two-dimensional linear system that is linear in intensity and can be characterized by a two-dimensional transfer function (17). Consider the linear optical system of Figure 2.4-1. The system input is a spatial light distribution obtained by passing a constant-intensity light beam through a transparency with a spatial sine-wave transmittance. Because the system is linear, the spatial output intensity distribution will also exhibit sine-wave intensity variations with possible changes in the amplitude and phase of the output intensity compared to the input intensity. By varying the spatial frequency (number of intensity cycles per linear dimension) of the input transparency, and recording the output intensity level and phase, it is possible, in principle, to obtain the optical transfer function (OTF) of the optical system. Let H ( ω x, ω y ) represent the optical transfer function of a two-dimensional linear system where ω x = 2π ⁄ T x and ω y = 2π ⁄ Ty are angular spatial frequencies with spatial periods T x and Ty in the x and y coordinate directions, respectively. Then, with I I ( x, y ) denoting the input intensity distribution of the object and I o ( x, y ) FIGURE 2.4-1. Linear systems analysis of an optical system.
representing the output intensity distribution of the image, the frequency spectra of the input and output signals are defined as

I_I(ω_x, ω_y) = \int_{−∞}^{∞} \int_{−∞}^{∞} I_I(x, y) exp{ −i(ω_x x + ω_y y) } dx dy    (2.4-1)

I_O(ω_x, ω_y) = \int_{−∞}^{∞} \int_{−∞}^{∞} I_O(x, y) exp{ −i(ω_x x + ω_y y) } dx dy    (2.4-2)

The input and output intensity spectra are related by

I_O(ω_x, ω_y) = H(ω_x, ω_y) I_I(ω_x, ω_y)    (2.4-3)

The spatial distribution of the image intensity can be obtained by an inverse Fourier transformation of Eq. 2.4-2, yielding

I_O(x, y) = \frac{1}{4π^2} \int_{−∞}^{∞} \int_{−∞}^{∞} I_O(ω_x, ω_y) exp{ i(ω_x x + ω_y y) } dω_x dω_y    (2.4-4)

In many systems, the designer is interested only in the magnitude variations of the output intensity with respect to the magnitude variations of the input intensity, not the phase variations. The ratio of the magnitudes of the Fourier transforms of the input and output signals,

\frac{|I_O(ω_x, ω_y)|}{|I_I(ω_x, ω_y)|} = |H(ω_x, ω_y)|    (2.4-5)

is called the modulation transfer function (MTF) of the optical system.

Much effort has been given to application of the linear systems concept to the human visual system (18–24). A typical experiment to test the validity of the linear systems model is as follows. An observer is shown two sine-wave grating transparencies, a reference grating of constant contrast and spatial frequency and a variable-contrast test grating whose spatial frequency is set at a value different from that of the reference. Contrast is defined as the ratio

(max − min) / (max + min)

where max and min are the maximum and minimum of the grating intensity, respectively. The contrast of the test grating is varied until the brightnesses of the bright and dark regions of the two transparencies appear identical. In this manner it is possible to develop a plot of the MTF of the human visual system. Figure 2.4-2a is a
FIGURE 2.4-2. Hypothetical measurements of the spatial frequency response of the human visual system.

FIGURE 2.4-3. MTF measurements of the human visual system by modulated sine-wave grating (axes: contrast versus spatial frequency).
  • 49. 36 PSYCHOPHYSICAL VISION PROPERTIES FIGURE 2.4-4. Logarithmic model of monochrome vision. hypothetical plot of the MTF as a function of the input signal contrast. Another indi- cation of the form of the MTF can be obtained by observation of the composite sine- wave grating of Figure 2.4-3, in which spatial frequency increases in one coordinate direction and contrast increases in the other direction. The envelope of the visible bars generally follows the MTF curves of Figure 2.4-2a (23). Referring to Figure 2.4-2a, it is observed that the MTF measurement depends on the input contrast level. Furthermore, if the input sine-wave grating is rotated rela- tive to the optic axis of the eye, the shape of the MTF is altered somewhat. Thus, it can be concluded that the human visual system, as measured by this experiment, is nonlinear and anisotropic (rotationally variant). It has been postulated that the nonlinear response of the eye to intensity variations is logarithmic in nature and occurs near the beginning of the visual information processing system, that is, near the rods and cones, before spatial interaction occurs between visual signals from individual rods and cones. Figure 2.4-4 shows a simple logarithmic eye model for monochromatic vision. If the eye exhibits a logarithmic response to input intensity, then if a signal grating contains a recording of an exponential sine wave, that is, exp { sin { I I ( x, y ) } } , the human visual system can be linearized. A hypothetical MTF obtained by measuring an observer's response to an exponential sine-wave grating (Figure 2.4-2b) can be fitted reasonably well by a sin- gle curve for low-and mid-spatial frequencies. Figure 2.4-5 is a plot of the measured MTF of the human visual system obtained by Davidson (25) for an exponential FIGURE 2.4-5. MTF measurements with exponential sine-wave grating.
sine-wave test signal. The high-spatial-frequency portion of the curve has been extrapolated for an average input contrast.

The logarithmic/linear system eye model of Figure 2.4-4 has proved to provide a reasonable prediction of visual response over a wide range of intensities. However, at high spatial frequencies and at very low or very high intensities, observed responses depart from responses predicted by the model. To establish a more accurate model, it is necessary to consider the physical mechanisms of the human visual system.

The nonlinear response of rods and cones to intensity variations is still a subject of active research. Hypotheses have been introduced suggesting that the nonlinearity is based on chemical activity, electrical effects, and neural feedback. The basic logarithmic model assumes the form

I_O(x, y) = K_1 log{ K_2 + K_3 I_I(x, y) }    (2.4-6)

where the K_i are constants and I_I(x, y) denotes the input field to the nonlinearity and I_O(x, y) is its output. Another model that has been suggested (7, p. 253) follows the fractional response

I_O(x, y) = \frac{K_1 I_I(x, y)}{K_2 + I_I(x, y)}    (2.4-7)

where K_1 and K_2 are constants. Mannos and Sakrison (26) have studied the effect of various nonlinearities employed in an analytical visual fidelity measure. Their results, which are discussed in greater detail in Chapter 3, establish that a power law nonlinearity of the form

I_O(x, y) = [ I_I(x, y) ]^s    (2.4-8)

where s is a constant, typically 1/3 or 1/2, provides good agreement between the visual fidelity measure and subjective assessment. The three models for the nonlinear response of rods and cones defined by Eqs. 2.4-6 to 2.4-8 can be forced to a reasonably close agreement over some midintensity range by an appropriate choice of scaling constants.

The physical mechanisms accounting for the spatial frequency response of the eye are partially optical and partially neural. As an optical instrument, the eye has limited resolution because of the finite size of the lens aperture, optical aberrations, and the finite dimensions of the rods and cones. These effects can be modeled by a low-pass transfer function inserted between the receptor and the nonlinear response element. The most significant contributor to the frequency response of the eye is the lateral inhibition process (27). The basic mechanism of lateral inhibition is illustrated in
  • 51. 38 PSYCHOPHYSICAL VISION PROPERTIES FIGURE 2.4-6. Lateral inhibition effect. Figure 2.4-6. A neural signal is assumed to be generated by a weighted contribution of many spatially adjacent rods and cones. Some receptors actually exert an inhibi- tory influence on the neural response. The weighting values are, in effect, the impulse response of the human visual system beyond the retina. The two-dimen- sional Fourier transform of this impulse response is the postretina transfer function. When a light pulse is presented to a human viewer, there is a measurable delay in its perception. Also, perception continues beyond the termination of the pulse for a short period of time. This delay and lag effect arising from neural temporal response limitations in the human visual system can be modeled by a linear temporal transfer function. Figure 2.4-7 shows a model for monochromatic vision based on results of the preceding discussion. In the model, the output of the wavelength-sensitive receptor is fed to a low-pass type of linear system that represents the optics of the eye. Next follows a general monotonic nonlinearity that represents the nonlinear intensity response of rods or cones. Then the lateral inhibition process is characterized by a linear system with a bandpass response. Temporal filtering effects are modeled by the following linear system. Hall and Hall (28) have investigated this model exten- sively and have found transfer functions for the various elements that accurately model the total system response. The monochromatic vision model of Figure 2.4-7, with appropriately scaled parameters, seems to be sufficiently detailed for most image processing applications. In fact, the simpler logarithmic model of Figure 2.4-4 is probably adequate for the bulk of applications.
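To make the cascade of Figure 2.4-7 concrete, the sketch below runs a step edge through a rough stand-in for the model: an optical low-pass stage, a logarithmic receptor nonlinearity (the Eq. 2.4-6 form), and a high-pass stage standing in for lateral inhibition (the optical low-pass ahead of it makes the overall cascade bandpass). All filter shapes and constants are illustrative assumptions, not the fitted transfer functions of Hall and Hall (28); temporal filtering is omitted.

```python
import numpy as np

def gaussian_lowpass(shape, sigma):
    """Frequency response of a Gaussian blur with spatial std 'sigma' (pixels)."""
    fy = np.fft.fftfreq(shape[0])[:, None]
    fx = np.fft.fftfreq(shape[1])[None, :]
    return np.exp(-2.0 * (np.pi * sigma) ** 2 * (fx ** 2 + fy ** 2))

def apply_filter(img, H):
    return np.real(np.fft.ifft2(np.fft.fft2(img) * H))

N = 128
x = np.linspace(0.0, 1.0, N)
scene = np.tile(20.0 + 200.0 * (x > 0.5), (N, 1))        # intensity step edge

H_optics = gaussian_lowpass((N, N), sigma=1.0)           # eye optics: low-pass
H_inhibit = 1.0 - gaussian_lowpass((N, N), sigma=4.0)    # lateral inhibition: attenuates low frequencies

blurred = apply_filter(scene, H_optics)
receptor = np.log(1.0 + blurred)                         # logarithmic receptor nonlinearity
response = apply_filter(receptor, H_inhibit)

row = response[N // 2]
print("overshoot near the edge (Mach-band-like):", round(float(row.max() - row[5]), 3))
```

The positive overshoot adjacent to the edge, relative to the flat region far from it, is the kind of Mach-band-like behavior the bandpass spatial response is meant to capture.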
FIGURE 2.4-7. Extended model of monochrome vision.

2.5. COLOR VISION MODEL

There have been many theories postulated to explain human color vision, beginning with the experiments of Newton and Maxwell (29–32). The classical model of human color vision, postulated by Thomas Young in 1802 (31), is the trichromatic model in which it is assumed that the eye possesses three types of sensors, each sensitive over a different wavelength band. It is interesting to note that there was no direct physiological evidence of the existence of three distinct types of sensors until about 1960 (9,10).

Figure 2.5-1 shows a color vision model proposed by Frei (33). In this model, three receptors with spectral sensitivities s_1(λ), s_2(λ), s_3(λ), which represent the absorption pigments of the retina, produce signals

e_1 = \int C(λ) s_1(λ) dλ    (2.5-1a)

e_2 = \int C(λ) s_2(λ) dλ    (2.5-1b)

e_3 = \int C(λ) s_3(λ) dλ    (2.5-1c)

where C(λ) is the spectral energy distribution of the incident light source. The three signals e_1, e_2, e_3 are then subjected to a logarithmic transfer function and combined to produce the outputs

d_1 = log e_1    (2.5-2a)

d_2 = log e_2 − log e_1 = log(e_2 / e_1)    (2.5-2b)

d_3 = log e_3 − log e_1 = log(e_3 / e_1)    (2.5-2c)
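A minimal sketch of this receptor stage is shown below. The Gaussian cone sensitivities and the flat test spectrum are placeholders chosen only for illustration; they are not the measured pigment curves of Figure 2.2-4.

```python
import numpy as np

# Sketch of the receptor stage of the Frei color vision model, Eqs. 2.5-1 and 2.5-2.
lam = np.linspace(380.0, 720.0, 341)                  # wavelength grid, nm
dlam = lam[1] - lam[0]

def gaussian(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

s = [gaussian(580, 50), gaussian(540, 45), gaussian(445, 30)]    # placeholder s1, s2, s3
C = np.ones_like(lam)                                 # a flat test light C(lambda)

e = [np.sum(C * si) * dlam for si in s]               # Eq. 2.5-1: receptor signals
d1 = np.log(e[0])                                     # Eq. 2.5-2a
d2 = np.log(e[1] / e[0])                              # Eq. 2.5-2b
d3 = np.log(e[2] / e[0])                              # Eq. 2.5-2c

# Scaling the light by a constant factor shifts d1 by log(k) but leaves the
# chrominance signals d2 and d3 unchanged.
e_scaled = [np.sum(3.0 * C * si) * dlam for si in s]
print(np.isclose(np.log(e_scaled[1] / e_scaled[0]), d2),
      np.isclose(np.log(e_scaled[0]) - d1, np.log(3.0)))
```

The final check previews the invariance property discussed next: a constant multiplicative change in the light alters only the luminance-like signal d_1.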
  • 53. 40 PSYCHOPHYSICAL VISION PROPERTIES FIGURE 2.5-1 Color vision model. Finally, the signals d 1, d 2, d 3 pass through linear systems with transfer functions H 1 ( ω x, ω y ) , H 2 ( ω x, ω y ) , H 3 ( ω x, ω y ) to produce output signals g 1, g 2, g 3 that provide the basis for perception of color by the brain. In the model of Figure 2.5-1, the signals d 2 and d 3 are related to the chromaticity of a colored light while signal d 1 is proportional to its luminance. This model has been found to predict many color vision phenomena quite accurately, and also to sat- isfy the basic laws of colorimetry. For example, it is known that if the spectral energy of a colored light changes by a constant multiplicative factor, the hue and sat- uration of the light, as described quantitatively by its chromaticity coordinates, remain invariant over a wide dynamic range. Examination of Eqs. 2.5-1 and 2.5-2 indicates that the chrominance signals d 2 and d 3 are unchanged in this case, and that the luminance signal d 1 increases in a logarithmic manner. Other, more subtle evaluations of the model are described by Frei (33). As shown in Figure 2.2-4, some indication of the spectral sensitivities s i ( λ ) of the three types of retinal cones has been obtained by spectral absorption measure- ments of cone pigments. However, direct physiological measurements are difficult to perform accurately. Indirect estimates of cone spectral sensitivities have been obtained from measurements of the color response of color-blind peoples by Konig and Brodhun (34). Judd (35) has used these data to produce a linear transformation relating the spectral sensitivity functions s i ( λ ) to spectral tristimulus values obtained by colorimetric testing. The resulting sensitivity curves, shown in Figure 2.5-2, are unimodal and strictly positive, as expected from physiological consider- ations (34). The logarithmic color vision model of Figure 2.5-1 may easily be extended, in analogy with the monochromatic vision model of Figure 2.4-7, by inserting a linear transfer function after each cone receptor to account for the optical response of the eye. Also, a general nonlinearity may be substituted for the logarithmic transfer function. It should be noted that the order of the receptor summation and the transfer function operations can be reversed without affecting the output, because both are
  • 54. COLOR VISION MODEL 41 FIGURE 2.5-2. Spectral sensitivity functions of retinal cones based on Konig’s data. linear operations. Figure 2.5-3 shows the extended model for color vision. It is expected that the spatial frequency response of the g 1 neural signal through the color vision model should be similar to the luminance spatial frequency response discussed in Section 2.4. Sine-wave response measurements for colored lights obtained by van der Horst et al. (36), shown in Figure 2.5-4, indicate that the chro- matic response is shifted toward low spatial frequencies relative to the luminance response. Lateral inhibition effects should produce a low spatial frequency rolloff below the measured response. Color perception is relative; the perceived color of a given spectral energy distri- bution is dependent on the viewing surround and state of adaption of the viewer. A human viewer can adapt remarkably well to the surround or viewing illuminant of a scene and essentially normalize perception to some reference white or overall color balance of the scene. This property is known as chromatic adaption. FIGURE 2.5-3. Extended model of color vision.
FIGURE 2.5-4. Spatial frequency response measurements of the human visual system.

The simplest visual model for chromatic adaption, proposed by von Kries (37, 16, p. 435), involves the insertion of automatic gain controls between the cones and first linear system of Figure 2.5-2. These gains

a_i = [ \int W(λ) s_i(λ) dλ ]^{−1}    (2.5-3)

for i = 1, 2, 3 are adjusted such that the modified cone response is unity when viewing a reference white with spectral energy distribution W(λ). Von Kries's model is attractive because of its qualitative reasonableness and simplicity, but chromatic testing (16, p. 438) has shown that the model does not completely predict the chromatic adaptation effect. Wallis (38) has suggested that chromatic adaption may, in part, result from a post-retinal neural inhibition mechanism that linearly attenuates slowly varying visual field components. The mechanism could be modeled by the low-spatial-frequency attenuation associated with the post-retinal transfer functions H_{Li}(ω_x, ω_y) of Figure 2.5-3. Undoubtedly, both retinal and post-retinal mechanisms are responsible for the chromatic adaption effect. Further analysis and testing are required to model the effect adequately.

REFERENCES 1. Webster's New Collegiate Dictionary, G. & C. Merriam Co. (The Riverside Press), Springfield, MA, 1960. 2. H. H. Malitson, “The Solar Energy Spectrum,” Sky and Telescope, 29, 4, March 1965, 162–165. 3. Munsell Book of Color, Munsell Color Co., Baltimore. 4. M. H. Pirenne, Vision and the Eye, 2nd ed., Associated Book Publishers, London, 1967. 5. S. L. Polyak, The Retina, University of Chicago Press, Chicago, 1941.
  • 56. REFERENCES 43 6. L. H. Davson, The Physiology of the Eye, McGraw-Hill (Blakiston), New York, 1949. 7. T. N. Cornsweet, Visual Perception, Academic Press, New York, 1970. 8. G. Wald, “Human Vision and the Spectrum,” Science, 101, 2635, June 29, 1945, 653–658. 9. P. K. Brown and G. Wald, “Visual Pigment in Single Rods and Cones of the Human Ret- ina,” Science, 144, 3614, April 3, 1964, 45–52. 10. G. Wald, “The Receptors for Human Color Vision,” Science, 145, 3636, September 4, 1964, 1007–1017. 11. S. Hecht, “The Visual Discrimination of Intensity and the Weber–Fechner Law,” J. Gen- eral. Physiology., 7, 1924, 241. 12. W. F. Schreiber, Fundamentals of Electronic Imaging Systems, Springer-Verlag, Berlin, 1986. 13. S. S. Stevens, Handbook of Experimental Psychology, Wiley, New York, 1951. 14. F. Ratliff, Mach Bands: Quantitative Studies on Neural Networks in the Retina, Holden- Day, San Francisco, 1965. 15. G. S. Brindley, “Afterimages,” Scientific American, 209, 4, October 1963, 84–93. 16. G. Wyszecki and W. S. Stiles, Color Science, 2nd ed., Wiley, New York, 1982. 17. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996. 18. F. W. Campbell, “The Human Eye as an Optical Filter,” Proc. IEEE, 56, 6, June 1968, 1009–1014. 19. O. Bryngdahl, “Characteristics of the Visual System: Psychophysical Measurement of the Response to Spatial Sine-Wave Stimuli in the Mesopic Region,” J. Optical. Society of America, 54, 9, September 1964, 1152–1160. 20. E. M. Lowry and J. J. DePalma, “Sine Wave Response of the Visual System, I. The Mach Phenomenon,” J. Optical Society of America, 51, 7, July 1961, 740–746. 21. E. M. Lowry and J. J. DePalma, “Sine Wave Response of the Visual System, II. Sine Wave and Square Wave Contrast Sensitivity,” J. Optical Society of America, 52, 3, March 1962, 328–335. 22. M. B. Sachs, J. Nachmias, and J. G. Robson, “Spatial Frequency Channels in Human Vision,” J. Optical Society of America, 61, 9, September 1971, 1176–1186. 23. T. G. Stockham, Jr., “Image Processing in the Context of a Visual Model,” Proc. IEEE, 60, 7, July 1972, 828–842. 24. D. E. Pearson, “A Realistic Model for Visual Communication Systems,” Proc. IEEE, 55, 3, March 1967, 380–389. 25. M. L. Davidson, “Perturbation Approach to Spatial Brightness Interaction in Human Vision,” J. Optical Society of America, 58, 9, September 1968, 1300–1308. 26. J. L. Mannos and D. J. Sakrison, “The Effects of a Visual Fidelity Criterion on the Encoding of Images,” IEEE Trans. Information. Theory, IT-20, 4, July 1974, 525–536. 27. F. Ratliff, H. K. Hartline, and W. H. Miller, “Spatial and Temporal Aspects of Retinal Inhibitory Interaction,” J. Optical Society of America, 53, 1, January 1963, 110–120. 28. C. F. Hall and E. L. Hall, “A Nonlinear Model for the Spatial Characteristics of the Human Visual System,” IEEE Trans, Systems, Man and Cybernetics, SMC-7, 3, March 1977, 161–170. 29. J. J. McCann, “Human Color Perception,” in Color: Theory and Imaging Systems, R. A. Enyard, Ed., Society of Photographic Scientists and Engineers, Washington, DC, 1973, 1–23.
  • 57. 44 PSYCHOPHYSICAL VISION PROPERTIES 30. I. Newton, Optiks, 4th ed., 1730; Dover Publications, New York, 1952. 31. T. Young, Philosophical Trans, 92, 1802, 12–48. 32. J. C. Maxwell, Scientific Papers of James Clerk Maxwell, W. D. Nevern, Ed., Dover Publications, New York, 1965. 33. W. Frei, “A New Model of Color Vision and Some Practical Limitations,” USCEE Report 530, University of Southern California, Image Processing Institute, Los Angeles March 1974, 128–143. 34. A. Konig and E. Brodhun, “Experimentell Untersuchungen uber die Psycho-physische fundamental in Bezug auf den Gesichtssinn,” Zweite Mittlg. S.B. Preuss Akademic der Wissenschaften, 1889, 641. 35. D. B. Judd, “Standard Response Functions for Protanopic and Deuteranopic Vision,” J. Optical Society of America, 35, 3, March 1945, 199–221. 36. C. J. C. van der Horst, C. M. de Weert, and M. A. Bouman, “Transfer of Spatial Chroma- ticity Contrast at Threshold in the Human Eye,” J. Optical Society of America, 57, 10, October 1967, 1260–1266. 37. J. von Kries, “Die Gesichtsempfindungen,” Nagel's Handbuch der. Physiologie der Menschen, Vol. 3, 1904, 211. 38. R. H. Wallis, “Film Recording of Digital Color Images,” USCEE Report 570, University of Southern California, Image Processing Institute, Los Angeles, June 1975.
3 PHOTOMETRY AND COLORIMETRY

Chapter 2 dealt with human vision from a qualitative viewpoint in an attempt to establish models for monochrome and color vision. These models may be made quantitative by specifying measures of human light perception. Luminance measures are the subject of the science of photometry, while color measures are treated by the science of colorimetry.

3.1. PHOTOMETRY

A source of radiative energy may be characterized by its spectral energy distribution C(λ), which specifies the time rate of energy the source emits per unit wavelength interval. The total power emitted by a radiant source, given by the integral of the spectral energy distribution,

P = \int_0^∞ C(λ) dλ    (3.1-1)

is called the radiant flux of the source and is normally expressed in watts (W).

A body that exists at an elevated temperature radiates electromagnetic energy proportional in amount to its temperature. A blackbody is an idealized type of heat radiator whose radiant flux is the maximum obtainable at any wavelength for a body at a fixed temperature. The spectral energy distribution of a blackbody is given by Planck's law (1):

C(λ) = \frac{C_1}{λ^5 [ exp{ C_2 / λT } − 1 ]}    (3.1-2)

where λ is the radiation wavelength, T is the temperature of the body, and C_1 and C_2 are constants.
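A short sketch of Eq. 3.1-2 follows. The text leaves C_1 and C_2 unspecified; the standard first and second radiation constants are substituted below purely to make the example concrete.

```python
import numpy as np

# Sketch of Planck's law, Eq. 3.1-2, with the standard radiation constants
# C1 = 2*pi*h*c^2 and C2 = h*c/k used as illustrative values for C1 and C2.
C1 = 3.7418e-16      # W m^2
C2 = 1.4388e-2       # m K

def planck(lam_m, T):
    """Blackbody spectral energy distribution of Eq. 3.1-2 (wavelength in meters)."""
    return C1 / (lam_m ** 5 * (np.exp(C2 / (lam_m * T)) - 1.0))

lam_nm = np.linspace(380.0, 780.0, 801)          # visible band, nm
spectrum = planck(lam_nm * 1e-9, T=6000.0)       # roughly a sunlight-like temperature

peak_nm = lam_nm[np.argmax(spectrum)]
print(f"peak of the 6000 K curve within the visible band: {peak_nm:.0f} nm")
```

For a 6000 K body the curve peaks near 480 nm, consistent with the blackbody approximation of sunlight discussed below.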
FIGURE 3.1-1. Blackbody radiation functions.

Figure 3.1-1a is a plot of the spectral energy of a blackbody as a function of temperature and wavelength. In the visible region of the electromagnetic spectrum, the blackbody spectral energy distribution function of Eq. 3.1-2 can be approximated by Wien's radiation law (1):

C(λ) = \frac{C_1}{λ^5 exp{ C_2 / λT }}    (3.1-3)

Wien's radiation function is plotted in Figure 3.1-1b over the visible spectrum.

The most basic physical light source, of course, is the sun. Figure 2.1-1a shows a plot of the measured spectral energy distribution of sunlight (2). The dashed line in

FIGURE 3.1-2. CIE standard illumination sources.
  • 60. PHOTOMETRY 47 FIGURE 3.1-3. Spectral energy distribution of CRT phosphors. this figure, approximating the measured data, is a 6000 kelvin (K) blackbody curve. Incandescent lamps are often approximated as blackbody radiators of a given tem- perature in the range 1500 to 3500 K (3). The Commission Internationale de l'Eclairage (CIE), which is an international body concerned with standards for light and color, has established several standard sources of light, as illustrated in Figure 3.1-2 (4). Source SA is a tungsten filament lamp. Over the wavelength band 400 to 700 nm, source SB approximates direct sun- light, and source SC approximates light from an overcast sky. A hypothetical source, called Illuminant E, is often employed in colorimetric calculations. Illuminant E is assumed to emit constant radiant energy at all wavelengths. Cathode ray tube (CRT) phosphors are often utilized as light sources in image processing systems. Figure 3.1-3 describes the spectral energy distributions of common phosphors (5). Monochrome television receivers generally use a P4 phos- phor, which provides a relatively bright blue-white display. Color television displays utilize cathode ray tubes with red, green, and blue emitting phosphors arranged in triad dots or strips. The P22 phosphor is typical of the spectral energy distribution of commercial phosphor mixtures. Liquid crystal displays (LCDs) typically project a white light through red, green and blue vertical strip pixels. Figure 3.1-4 is a plot of typical color filter transmissivities (6). Photometric measurements seek to describe quantitatively the perceptual bright- ness of visible electromagnetic energy (7,8). The link between photometric mea- surements and radiometric measurements (physical intensity measurements) is the photopic luminosity function, as shown in Figure 3.1-5a (9). This curve, which is a CIE standard, specifies the spectral sensitivity of the human visual system to optical radiation as a function of wavelength for a typical person referred to as the standard
FIGURE 3.1-4. Transmissivities of LCD color filters.

observer. In essence, the curve is a standardized version of the measurement of cone sensitivity given in Figure 2.2-2 for photopic vision at relatively high levels of illumination. The standard luminosity function for scotopic vision at relatively low levels of illumination is illustrated in Figure 3.1-5b. Most imaging system designs are based on the photopic luminosity function, commonly called the relative luminous efficiency. The perceptual brightness sensation evoked by a light source with spectral energy distribution C(λ) is specified by its luminous flux, as defined by

F = K_m \int_0^∞ C(λ) V(λ) dλ    (3.1-4)

where V(λ) represents the relative luminous efficiency and K_m is a scaling constant. The modern unit of luminous flux is the lumen (lm), and the corresponding value for the scaling constant is K_m = 685 lm/W. An infinitesimally narrowband source of 1 W of light at the peak wavelength of 555 nm of the relative luminous efficiency curve therefore results in a luminous flux of 685 lm.
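The sketch below evaluates Eq. 3.1-4 numerically. The Gaussian stand-in for V(λ) is only a rough placeholder for the CIE photopic curve of Figure 3.1-5a, and the narrowband test source is constructed just to reproduce the 685 lm figure quoted above.

```python
import numpy as np

# Sketch of the luminous flux integral, Eq. 3.1-4.
K_m = 685.0                                        # lm/W
lam = np.linspace(380.0, 780.0, 2001)              # nm
dlam = lam[1] - lam[0]
V = np.exp(-0.5 * ((lam - 555.0) / 45.0) ** 2)     # placeholder V(lambda), peak 1 at 555 nm

def luminous_flux(C):
    """Eq. 3.1-4 with the integral approximated by a Riemann sum."""
    return K_m * np.sum(C * V) * dlam

# A narrowband source of 1 W total power centered at 555 nm gives roughly K_m lumens.
C_narrow = np.where(np.abs(lam - 555.0) < 1.0, 1.0, 0.0)
C_narrow /= np.sum(C_narrow) * dlam                # normalize to 1 W radiant flux
print(f"luminous flux of a 1 W source at 555 nm: {luminous_flux(C_narrow):.0f} lm")
```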
  • 62. COLOR MATCHING 49 FIGURE 3.1-5. Relative luminous efficiency functions. 3.2. COLOR MATCHING The basis of the trichromatic theory of color vision is that it is possible to match an arbitrary color by superimposing appropriate amounts of three primary colors (10–14). In an additive color reproduction system such as color television, the three primaries are individual red, green, and blue light sources that are projected onto a common region of space to reproduce a colored light. In a subtractive color system, which is the basis of most color photography and color printing, a white light sequentially passes through cyan, magenta, and yellow filters to reproduce a colored light. 3.2.1. Additive Color Matching An additive color-matching experiment is illustrated in Figure 3.2-1. In Figure 3.2-1a, a patch of light (C) of arbitrary spectral energy distribution C ( λ ) , as shown in Figure 3.2-2a, is assumed to be imaged onto the surface of an ideal diffuse reflector (a surface that reflects uniformly over all directions and all wavelengths). A reference white light (W) with an energy distribution, as in Figure 3.2-2b, is imaged onto the surface along with three primary lights (P1), (P2), (P3) whose spectral energy distributions are sketched in Figure 3.2-2c to e. The three primary lights are first overlapped and their intensities are adjusted until the overlapping region of the three primary lights perceptually matches the reference white in terms of brightness, hue, and saturation. The amounts of the three primaries A 1 ( W ) , A 2 ( W ) , A3 ( W ) are then recorded in some physical units, such as watts. These are the matching values of the reference white. Next, the intensities of the primaries are adjusted until a match is achieved with the colored light (C), if a match is possible. The procedure to be followed if a match cannot be achieved is considered later. The intensities of the primaries
FIGURE 3.2-1. Color matching.

A_1(C), A_2(C), A_3(C) when a match is obtained are recorded, and normalized matching values T_1(C), T_2(C), T_3(C), called tristimulus values, are computed as

T_1(C) = \frac{A_1(C)}{A_1(W)}    T_2(C) = \frac{A_2(C)}{A_2(W)}    T_3(C) = \frac{A_3(C)}{A_3(W)}    (3.2-1)
FIGURE 3.2-2. Spectral energy distributions.

If a match cannot be achieved by the procedure illustrated in Figure 3.2-1a, it is often possible to perform the color matching outlined in Figure 3.2-1b. One of the primaries, say (P3), is superimposed with the light (C), and the intensities of all three primaries are adjusted until a match is achieved between the overlapping region of primaries (P1) and (P2) with the overlapping region of (P3) and (C). If such a match is obtained, the tristimulus values are

T_1(C) = \frac{A_1(C)}{A_1(W)}    T_2(C) = \frac{A_2(C)}{A_2(W)}    T_3(C) = \frac{−A_3(C)}{A_3(W)}    (3.2-2)

In this case, the tristimulus value T_3(C) is negative. If a match cannot be achieved with this geometry, a match is attempted between (P1) plus (P3) and (P2) plus (C). If a match is achieved by this configuration, tristimulus value T_2(C) will be negative. If this configuration fails, a match is attempted between (P2) plus (P3) and (P1) plus (C). A correct match is denoted with a negative value for T_1(C).
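As a small bookkeeping sketch of the matching procedures just described, the function below normalizes the recorded primary amounts by the reference-white amounts (Eq. 3.2-1) and attaches a negative sign to any primary that had to be mixed with the test light rather than with the other primaries (Eq. 3.2-2 and the remaining configurations described next). All numerical values are illustrative.

```python
import numpy as np

def tristimulus(A_C, A_W, moved_to_test_side=()):
    """Return T_j(C); 'moved_to_test_side' lists primary indices (0, 1, 2)."""
    T = np.asarray(A_C, dtype=float) / np.asarray(A_W, dtype=float)
    for j in moved_to_test_side:
        T[j] = -T[j]                      # that primary was added to the color (C)
    return T

A_W = [5.0, 4.0, 6.0]                     # matching values of the reference white
print(tristimulus([2.0, 3.0, 1.5], A_W))                          # all positive, Eq. 3.2-1
print(tristimulus([2.0, 3.0, 1.5], A_W, moved_to_test_side=[2]))  # T3 negative, Eq. 3.2-2
```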
Finally, in the rare instance in which a match cannot be achieved by either of the configurations of Figure 3.2-1a or b, two of the primaries are superimposed with (C) and an attempt is made to match the overlapped region with the remaining primary. In the case illustrated in Figure 3.2-1c, if a match is achieved, the tristimulus values become

T_1(C) = \frac{A_1(C)}{A_1(W)}    T_2(C) = \frac{−A_2(C)}{A_2(W)}    T_3(C) = \frac{−A_3(C)}{A_3(W)}    (3.2-3)

If a match is not obtained by this configuration, one of the other two possibilities will yield a match.

The process described above is a direct method for specifying a color quantitatively. It has two drawbacks: the method is cumbersome and it depends on the perceptual variations of a single observer. In Section 3.3 we consider standardized quantitative color measurement in detail.

3.2.2. Subtractive Color Matching

A subtractive color-matching experiment is shown in Figure 3.2-3. An illumination source with spectral energy distribution E(λ) passes sequentially through three dye filters that are nominally cyan, magenta, and yellow. The spectral absorption of the dye filters is a function of the dye concentration. It should be noted that the spectral transmissivities of practical dyes change shape in a nonlinear manner with dye concentration.

In the first step of the subtractive color-matching process, the dye concentrations of the three spectral filters are varied until a perceptual match is obtained with a reference white (W). The dye concentrations are the matching values of the color match A_1(W), A_2(W), A_3(W). Next, the three dye concentrations are varied until a match is obtained with a desired color (C). These matching values A_1(C), A_2(C), A_3(C) are then used to compute the tristimulus values T_1(C), T_2(C), T_3(C), as in Eq. 3.2-1.

FIGURE 3.2-3. Subtractive color matching.
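The sketch below models the arrangement of Figure 3.2-3: the source spectrum passes through cyan, magenta, and yellow layers in sequence, so the transmitted spectrum is the product of the source with each layer's transmissivity. The simple exponential dye model t(λ) = exp{−c·a(λ)} and the Gaussian absorption bands are illustrative assumptions, not data from the text.

```python
import numpy as np

lam = np.linspace(400.0, 700.0, 301)               # nm
E = np.ones_like(lam)                              # flat illumination source

def absorption(center, width):
    return np.exp(-0.5 * ((lam - center) / width) ** 2)

a_cyan, a_magenta, a_yellow = absorption(620, 40), absorption(540, 40), absorption(450, 40)

def transmitted(c_c, c_m, c_y):
    """Spectrum after the three dye layers at concentrations c_c, c_m, c_y."""
    t = np.exp(-c_c * a_cyan) * np.exp(-c_m * a_magenta) * np.exp(-c_y * a_yellow)
    return E * t

# Increasing the yellow dye concentration mainly removes short (blue) wavelengths,
# which is why the yellow layer acts like a variable blue absorber.
C0, C1 = transmitted(0.5, 0.5, 0.2), transmitted(0.5, 0.5, 2.0)
blue = lam < 490
print("blue-band energy before/after adding yellow dye:",
      round(float(np.sum(C0[blue])), 1), round(float(np.sum(C1[blue])), 1))
```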
  • 66. COLOR MATCHING 53 It should be apparent that there is no fundamental theoretical difference between color matching by an additive or a subtractive system. In a subtractive system, the yellow dye acts as a variable absorber of blue light, and with ideal dyes, the yellow dye effectively forms a blue primary light. In a similar manner, the magenta filter ideally forms the green primary, and the cyan filter ideally forms the red primary. Subtractive color systems ordinarily utilize cyan, magenta, and yellow dye spectral filters rather than red, green, and blue dye filters because the cyan, magenta, and yellow filters are notch filters which permit a greater transmission of light energy than do narrowband red, green, and blue bandpass filters. In color printing, a fourth filter layer of variable gray level density is often introduced to achieve a higher con- trast in reproduction because common dyes do not possess a wide density range. 3.2.3. Axioms of Color Matching The color-matching experiments described for additive and subtractive color match- ing have been performed quite accurately by a number of researchers. It has been found that perfect color matches sometimes cannot be obtained at either very high or very low levels of illumination. Also, the color matching results do depend to some extent on the spectral composition of the surrounding light. Nevertheless, the simple color matching experiments have been found to hold over a wide range of condi- tions. Grassman (15) has developed a set of eight axioms that define trichromatic color matching and that serve as a basis for quantitative color measurements. In the following presentation of these axioms, the symbol ◊ indicates a color match; the symbol ⊕ indicates an additive color mixture; the symbol • indicates units of a color. These axioms are: 1. Any color can be matched by a mixture of no more than three colored lights. 2. A color match at one radiance level holds over a wide range of levels. 3. Components of a mixture of colored lights cannot be resolved by the human eye. 4. The luminance of a color mixture is equal to the sum of the luminance of its components. 5. Law of addition. If color (M) matches color (N) and color (P) matches color (Q), then color (M) mixed with color (P) matches color (N) mixed with color (Q): ( M ) ◊ ( N ) ∩ ( P ) ◊ ( Q) ⇒ [ ( M) ⊕ ( P) ] ◊ [ ( N ) ⊕ ( Q )] (3.2-4) 6. Law of subtraction. If the mixture of (M) plus (P) matches the mixture of (N) plus (Q) and if (P) matches (Q), then (M) matches (N): [ (M ) ⊕ (P )] ◊ [(N ) ⊕ ( Q) ] ∩ [( P) ◊ (Q) ] ⇒ ( M) ◊ (N ) (3.2-5) 7. Transitive law. If (M) matches (N) and if (N) matches (P), then (M) matches (P):
[ (M) ◊ (N) ] ∩ [ (N) ◊ (P) ] ⇒ (M) ◊ (P)    (3.2-6)

8. Color matching. (a) c units of (C) matches the mixture of m units of (M) plus n units of (N) plus p units of (P):

[ c • (C) ] ◊ [ m • (M) ] ⊕ [ n • (N) ] ⊕ [ p • (P) ]    (3.2-7)

or (b) a mixture of c units of (C) plus m units of (M) matches the mixture of n units of (N) plus p units of (P):

[ c • (C) ] ⊕ [ m • (M) ] ◊ [ n • (N) ] ⊕ [ p • (P) ]    (3.2-8)

or (c) a mixture of c units of (C) plus m units of (M) plus n units of (N) matches p units of (P):

[ c • (C) ] ⊕ [ m • (M) ] ⊕ [ n • (N) ] ◊ [ p • (P) ]    (3.2-9)

With Grassman's laws now specified, consideration is given to the development of a quantitative theory for color matching.

3.3. COLORIMETRY CONCEPTS

Colorimetry is the science of quantitatively measuring color. In the trichromatic color system, color measurements are in terms of the tristimulus values of a color or a mathematical function of the tristimulus values. Referring to Section 3.2.3, the axioms of color matching state that a color C can be matched by three primary colors P1, P2, P3. The qualitative match is expressed as

(C) ◊ [ A_1(C) • (P1) ] ⊕ [ A_2(C) • (P2) ] ⊕ [ A_3(C) • (P3) ]    (3.3-1)

where A_1(C), A_2(C), A_3(C) are the matching values of the color (C). Because the intensities of incoherent light sources add linearly, the spectral energy distribution of a color mixture is equal to the sum of the spectral energy distributions of its components. As a consequence of this fact and Eq. 3.3-1, the spectral energy distribution C(λ) can be replaced by its color-matching equivalent according to the relation

C(λ) ◊ A_1(C)P_1(λ) + A_2(C)P_2(λ) + A_3(C)P_3(λ) = \sum_{j=1}^{3} A_j(C)P_j(λ)    (3.3-2)
Equation 3.3-2 simply means that the spectral energy distributions on both sides of the equivalence operator ◊ evoke the same color sensation. Color matching is usually specified in terms of tristimulus values, which are normalized matching values, as defined by

T_j(C) = \frac{A_j(C)}{A_j(W)}    (3.3-3)

where A_j(W) represents the matching value of the reference white. By this substitution, Eq. 3.3-2 assumes the form

C(λ) ◊ \sum_{j=1}^{3} T_j(C) A_j(W) P_j(λ)    (3.3-4)

From Grassman's fourth law, the luminance of a color mixture Y(C) is equal to the sum of the luminances of its primary components. Hence

Y(C) = \int C(λ) V(λ) dλ = \sum_{j=1}^{3} \int A_j(C) P_j(λ) V(λ) dλ    (3.3-5a)

or

Y(C) = \sum_{j=1}^{3} \int T_j(C) A_j(W) P_j(λ) V(λ) dλ    (3.3-5b)

where V(λ) is the relative luminous efficiency and P_j(λ) represents the spectral energy distribution of a primary. Equations 3.3-4 and 3.3-5 represent the quantitative foundation for colorimetry.

3.3.1. Color Vision Model Verification

Before proceeding further with quantitative descriptions of the color-matching process, it is instructive to determine whether the matching experiments and the axioms of color matching are satisfied by the color vision model presented in Section 2.5. In that model, the responses of the three types of receptors with sensitivities s_1(λ), s_2(λ), s_3(λ) are modeled as

e_1(C) = \int C(λ) s_1(λ) dλ    (3.3-6a)

e_2(C) = \int C(λ) s_2(λ) dλ    (3.3-6b)

e_3(C) = \int C(λ) s_3(λ) dλ    (3.3-6c)
If a viewer observes the primary mixture instead of C, then from Eq. 3.3-4, substitution for C(λ) should result in the same cone signals e_i(C). Thus

e_1(C) = \sum_{j=1}^{3} T_j(C) A_j(W) \int P_j(λ) s_1(λ) dλ    (3.3-7a)

e_2(C) = \sum_{j=1}^{3} T_j(C) A_j(W) \int P_j(λ) s_2(λ) dλ    (3.3-7b)

e_3(C) = \sum_{j=1}^{3} T_j(C) A_j(W) \int P_j(λ) s_3(λ) dλ    (3.3-7c)

Equation 3.3-7 can be written more compactly in matrix form by defining

k_{ij} = \int P_j(λ) s_i(λ) dλ    (3.3-8)

Then

[ e_1(C) ]   [ k_11  k_12  k_13 ] [ A_1(W)    0        0     ] [ T_1(C) ]
[ e_2(C) ] = [ k_21  k_22  k_23 ] [   0     A_2(W)     0     ] [ T_2(C) ]    (3.3-9)
[ e_3(C) ]   [ k_31  k_32  k_33 ] [   0       0      A_3(W)  ] [ T_3(C) ]

or in yet more abbreviated form,

e(C) = K A t(C)    (3.3-10)

where the vectors and matrices of Eq. 3.3-10 are defined in correspondence with Eqs. 3.3-7 to 3.3-9. The vector space notation used in this section is consistent with the notation formally introduced in Appendix 1. Matrices are denoted as boldface uppercase symbols, and vectors are denoted as boldface lowercase symbols. It should be noted that for a given set of primaries, the matrix K is constant valued, and for a given reference white, the white matching values of the matrix A are constant. Hence, if a set of cone signals e_i(C) were known for a color (C), the corresponding tristimulus values T_j(C) could in theory be obtained from

t(C) = [ KA ]^{−1} e(C)    (3.3-11)
provided that the matrix inverse of [KA] exists. Thus, it has been shown that with proper selection of the tristimulus signals T_j(C), any color can be matched in the sense that the cone signals will be the same for the primary mixture as for the actual color C. Unfortunately, the cone signals e_i(C) are not easily measured physical quantities, and therefore, Eq. 3.3-11 cannot be used directly to compute the tristimulus values of a color. However, this has not been the intention of the derivation. Rather, Eq. 3.3-11 has been developed to show the consistency of the color-matching experiment with the color vision model.

3.3.2. Tristimulus Value Calculation

It is possible indirectly to compute the tristimulus values of an arbitrary color for a particular set of primaries if the tristimulus values of the spectral colors (narrowband light) are known for that set of primaries. Figure 3.3-1 is a typical sketch of the tristimulus values required to match a unit energy spectral color with three arbitrary primaries. These tristimulus values, which are fundamental to the definition of a primary system, are denoted as T_{s1}(λ), T_{s2}(λ), T_{s3}(λ), where λ is a particular wavelength in the visible region. A unit energy spectral light (C_ψ) at wavelength ψ with energy distribution δ(λ − ψ) is matched according to the equation

e_i(C_ψ) = \int δ(λ − ψ) s_i(λ) dλ = \sum_{j=1}^{3} \int A_j(W) P_j(λ) T_{sj}(ψ) s_i(λ) dλ    (3.3-12)

Now, consider an arbitrary color (C) with spectral energy distribution C(λ). At wavelength ψ, C(ψ) units of the color are matched by C(ψ)T_{s1}(ψ), C(ψ)T_{s2}(ψ), C(ψ)T_{s3}(ψ) tristimulus units of the primaries as governed by

\int C(ψ) δ(λ − ψ) s_i(λ) dλ = \sum_{j=1}^{3} \int A_j(W) P_j(λ) C(ψ) T_{sj}(ψ) s_i(λ) dλ    (3.3-13)

Integrating each side of Eq. 3.3-13 over ψ and invoking the sifting integral gives the cone signal for the color (C). Thus

\int \int C(ψ) δ(λ − ψ) s_i(λ) dλ dψ = e_i(C) = \sum_{j=1}^{3} \int \int A_j(W) P_j(λ) C(ψ) T_{sj}(ψ) s_i(λ) dψ dλ    (3.3-14)

By correspondence with Eq. 3.3-7, the tristimulus values of (C) must be equivalent to the second integral on the right of Eq. 3.3-14. Hence

T_j(C) = \int C(ψ) T_{sj}(ψ) dψ    (3.3-15)
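Equation 3.3-15 is straightforward to apply numerically: the tristimulus values of an arbitrary color are the integrals of its spectrum against the spectral tristimulus curves of the primary system. In the sketch below the T_{sj} curves are loose Gaussian stand-ins for the kind of curves drawn in Figure 3.3-1, not measured color-matching data, and the test spectrum is arbitrary.

```python
import numpy as np

lam = np.linspace(400.0, 700.0, 601)
dlam = lam[1] - lam[0]

def gauss(center, width, scale=1.0):
    return scale * np.exp(-0.5 * ((lam - center) / width) ** 2)

Ts = np.stack([
    gauss(600, 35) - gauss(500, 30, 0.25),   # Ts1: red-primary curve with a negative lobe
    gauss(545, 40),                          # Ts2: green-primary curve
    gauss(450, 30),                          # Ts3: blue-primary curve
])

def tristimulus(C):
    """Eq. 3.3-15, with the wavelength integral taken as a Riemann sum."""
    return Ts @ C * dlam

C_test = gauss(520, 60)                      # an arbitrary broadband test color
print(np.round(tristimulus(C_test), 3))      # the negative lobe of Ts1 pulls T1 down
```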
FIGURE 3.3-1. Tristimulus values of typical red, green, and blue primaries required to match unit energy throughout the spectrum.

From Figure 3.3-1 it is seen that the tristimulus values obtained from solution of Eq. 3.3-11 may be negative at some wavelengths. Because the tristimulus values represent units of energy, the physical interpretation of this mathematical result is that a color match can be obtained by adding the primary with negative tristimulus value to the original color and then matching this resultant color with the remaining primary. In this sense, any color can be matched by any set of primaries. However, from a practical viewpoint, negative tristimulus values are not physically realizable, and hence there are certain colors that cannot be matched in a practical color display (e.g., a color television receiver) with fixed primaries. Fortunately, it is possible to choose primaries so that most commonly occurring natural colors can be matched.

The three tristimulus values T1, T2, T3 can be considered to form the three axes of a color space, as illustrated in Figure 3.3-2. A particular color may be described as a vector in the color space, but it must be remembered that it is the coordinates of the vector (the tristimulus values), rather than the vector length, that specify the color. In Figure 3.3-2, a triangle, called a Maxwell triangle, has been drawn between the three primaries. The intersection point of a color vector with the triangle gives an indication of the hue and saturation of the color in terms of the distances of the point from the vertices of the triangle.

FIGURE 3.3-2. Color space for typical red, green, and blue primaries.
FIGURE 3.3-3. Chromaticity diagram for typical red, green, and blue primaries.

Often the luminance of a color is not of interest in a color match. In such situations, the hue and saturation of color (C) can be described in terms of chromaticity coordinates, which are normalized tristimulus values, as defined by

$$ t_1 \equiv \frac{T_1}{T_1 + T_2 + T_3} $$   (3.3-16a)

$$ t_2 \equiv \frac{T_2}{T_1 + T_2 + T_3} $$   (3.3-16b)

$$ t_3 \equiv \frac{T_3}{T_1 + T_2 + T_3} $$   (3.3-16c)

Clearly, t3 = 1 − t1 − t2, and hence only two coordinates are necessary to describe a color match. Figure 3.3-3 is a plot of the chromaticity coordinates of the spectral colors for typical primaries. Only those colors within the triangle defined by the three primaries are realizable by physical primary light sources.

3.3.3. Luminance Calculation

The tristimulus values of a color specify the amounts of the three primaries required to match a color, where the units are measured relative to a match of a reference white. Often, it is necessary to determine the absolute rather than the relative amount of light from each primary needed to reproduce a color match. This information is found from luminance measurements or calculations of a color match.
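The chromaticity coordinates of Eq. 3.3-16 reappear in the luminance calculations that follow, so it is worth noting how little computation they require. A minimal sketch, with an illustrative function name; the numeric values in the usage line are arbitrary.

def chromaticity(T1, T2, T3):
    """Eq. 3.3-16: chromaticity coordinates as normalized tristimulus values."""
    s = T1 + T2 + T3
    return T1 / s, T2 / s, T3 / s

t1, t2, t3 = chromaticity(0.2, 0.5, 0.3)   # t3 = 1 - t1 - t2, so only (t1, t2) need be stored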
  • 73. 60 PHOTOMETRY AND COLORIMETRY From Eq. 3.3-5 it is noted that the luminance of a matched color Y(C) is equal to the sum of the luminances of its primary components according to the relation 3 Y(C ) = ∑ Tj ( C ) ∫ Aj ( C )Pj ( λ )V ( λ ) d λ (3.3-17) j =1 The integrals of Eq. 3.3-17, Y ( Pj ) = ∫ Aj ( C )Pj ( λ )V ( λ ) d λ (3.3-18) are called luminosity coefficients of the primaries. These coefficients represent the luminances of unit amounts of the three primaries for a match to a specific reference white. Hence the luminance of a matched color can be written as Y ( C ) = T 1 ( C )Y ( P1 ) + T 2 ( C )Y ( P 2 ) + T 3 ( C )Y ( P 3 ) (3.3-19) Multiplying the right and left sides of Eq. 3.3-19 by the right and left sides, respec- tively, of the definition of the chromaticity coordinate T1 ( C ) t 1 ( C ) = --------------------------------------------------------- - (3.3-20) T 1 ( C ) + T 2 ( C ) + T3 ( C ) and rearranging gives t 1 ( C )Y ( C ) T 1 ( C ) = ------------------------------------------------------------------------------------------------- - (3.3-21a) t 1 ( C )Y ( P1 ) + t 2 ( C )Y ( P2 ) + t 3 ( C )Y ( P 3 ) Similarly, t 2 ( C )Y ( C ) T 2 ( C ) = ------------------------------------------------------------------------------------------------- - (3.3-21b) t 1 ( C )Y ( P1 ) + t 2 ( C )Y ( P2 ) + t 3 ( C )Y ( P 3 ) t 3 ( C )Y ( C ) T 3 ( C ) = ------------------------------------------------------------------------------------------------- - (3.3-21c) t 1 ( C )Y ( P1 ) + t 2 ( C )Y ( P2 ) + t 3 ( C )Y ( P 3 ) Thus the tristimulus values of a color can be expressed in terms of the luminance and chromaticity coordinates of the color.
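Equations 3.3-19 and 3.3-21 together let a color be carried either as tristimulus values or as chromaticities plus luminance, once the luminosity coefficients of the primaries are known. The sketch below is a minimal NumPy illustration; the luminosity coefficients in the usage lines are invented for the example and are not those of any standard primary set.

import numpy as np

def luminance(T, Yp):
    """Eq. 3.3-19: Y(C) = T1(C) Y(P1) + T2(C) Y(P2) + T3(C) Y(P3)."""
    return float(np.dot(T, Yp))

def tristimulus_from_chromaticity(t, Y, Yp):
    """Eq. 3.3-21: T_j(C) = t_j(C) Y(C) / [t1 Y(P1) + t2 Y(P2) + t3 Y(P3)]."""
    t = np.asarray(t, dtype=float)
    return t * Y / float(np.dot(t, Yp))

# Round trip with illustrative (nonstandard) luminosity coefficients:
Yp = np.array([0.30, 0.59, 0.11])
T = np.array([0.2, 0.5, 0.3])
t = T / T.sum()                                     # chromaticity coordinates, Eq. 3.3-16
Y = luminance(T, Yp)
T_back = tristimulus_from_chromaticity(t, Y, Yp)    # recovers T to rounding error

The round trip works because the denominator of Eq. 3.3-21 is simply Y(C) divided by the sum of the tristimulus values, so multiplying by t_j restores T_j exactly.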
  • 74. TRISTIMULUS VALUE TRANSFORMATION 61 3.4. TRISTIMULUS VALUE TRANSFORMATION From Eq. 3.3-7 it is clear that there is no unique set of primaries for matching colors. If the tristimulus values of a color are known for one set of primaries, a simple coor- dinate conversion can be performed to determine the tristimulus values for another set of primaries (16). Let (P1), (P2), (P3) be the original set of primaries with spec- tral energy distributions P1 ( λ ), P2 ( λ ), P3 ( λ ), with the units of a match determined by a white reference (W) with matching values A 1 ( W ), A 2 ( W ), A 3 ( W ). Now, consider ˜ ˜ ˜ ˜ a new set of primaries ( P 1 ) , ( P2 ) , ( P3 ) with spectral energy distributions P1 ( λ ) , ˜ ˜ ˜ P2 ( λ ), P 3 ( λ ). Matches are made to a reference white ( W ) , which may be different than the reference white of the original set of primaries, by matching values A1 ( W ), ˜ ˜ ˜ A2 ( W ), A3 ( W ). Referring to Eq. 3.3-10, an arbitrary color (C) can be matched by the tristimulus values T 1 ( C ) , T2 ( C ) , T 3 ( C ) with the original set of primaries or by the ˜ ˜ ˜ tristimulus values T1 ( C ) , T 2 ( C ) , T 3 ( C ) with the new set of primaries, according to the matching matrix relations ˜ ˜ ˜ ˜ e ( C ) = KA ( W )t ( C ) = K A ( W )t ( C ) (3.4-1) The tristimulus value units of the new set of primaries, with respect to the original set of primaries, must now be found. This can be accomplished by determining the color signals of the reference white for the second set of primaries in terms of both ˜ sets of primaries. The color signal equations for the reference white W become ˜ ˜ ˜ ˜ ˜ ˜ ˜ e ( W ) = KA ( W )t ( W ) = K A ( W )t ( W ) (3.4-2) ˜ ˜ ˜ ˜ ˜ ˜ where T 1 ( W ) = T 2 ( W ) = T 3 ( W ) = 1. Finally, it is necessary to relate the two sets of primaries by determining the color signals of each of the new primary colors ( P1 ) , ˜ ˜ ˜ ( P 2 ) , ( P3 ) in terms of both primary systems. These color signal equations are ˜ ˜ ˜ ˜ ˜ ˜ ˜ e ( P 1 ) = KA ( W )t ( P 1 ) = K A ( W )t ( P1 ) (3.4-3a) ˜ ˜ ˜ ˜ ˜ ˜ ˜ e ( P 2 ) = KA ( W )t ( P 2 ) = K A ( W )t ( P2 ) (3.4-3b) ˜ ˜ ˜ ˜ ˜ ˜ ˜ e ( P 3 ) = KA ( W )t ( P 3 ) = K A ( W )t ( P3 ) (3.4-3c) where 1 0 0 --------------- - ˜ 0 ˜( P1 ) = A1( W ) t ˜ ˜ ˜ 1 - t ( P2 ) = --------------- t ˜ ˜( P2 ) = 0 A2 ( W ) ˜ 1 --------------- - A3( W ) ˜ 0 0
  • 75. 62 PHOTOMETRY AND COLORIMETRY Matrix equations 3.4-1 to 3.4-3 may be solved jointly to obtain a relationship between the tristimulus values of the original and new primary system: T1( C ) ˜ ˜ T1 ( P 2 ) T1 ( P 3 ) T2 ( C ) ˜ ˜ T2 ( P 2 ) T2 ( P 3 ) T3 ( C ) T3 ( P 2 ) T3 ( P 3 ) ˜ ˜ ˜ T 1 ( C ) = ------------------------------------------------------------------------ - (3.4-4a) T (W ˜ ) T (P ) T (P ) ˜ ˜ 1 1 2 1 3 ˜ ˜ ˜ T2 ( W ) T2 ( P 2 ) T2 ( P 3 ) ˜ ˜ ˜ T 3 ( W ) T3 ( P 2 ) T3 ( P 3 ) ˜ T1 ( P 1 ) T1 ( C ) ˜ T1 ( P 3 ) ˜ T2 ( P 1 ) T2 ( C ) ˜ T2 ( P 3 ) ˜ T3 ( P 1 ) T3 ( C ) T3 ( P 3 ) ˜ ˜ T 2 ( C ) = ------------------------------------------------------------------------ - (3.4-4b) ˜ T ( P ) T ( W) T (P ) ˜ ˜ 1 1 1 1 3 ˜ ˜ ˜ T 2 ( P1 ) T 2 ( W ) T2 ( P 3 ) ˜ T 3 ( P1 ) ˜ ˜ T 3 ( W ) T3 ( P 3 ) ˜ T1 ( P1 ) ˜ T 1 ( P2 ) T1 ( C ) ˜ T2 ( P1 ) ˜ T 2 ( P2 ) T2( C ) ˜ T3 ( P1 ) T 3 ( P2 ) T 3 ( C ) ˜ ˜ T 3 ( C ) = ------------------------------------------------------------------------ - (3.4-4c) ˜ T (P ) T (P ) T (W) ˜ ˜ 1 1 1 2 1 ˜ T2 ( P 1 ) ˜ T2 ( P 2 ) ˜ T2 ( W ) ˜ T3 ( P 1 ) ˜ T3 ( P 2 ) ˜ T3 ( W ) where T denotes the determinant of matrix T. Equations 3.4-4 then may be written ˜ ˜ ˜ in terms of the chromaticity coordinates ti ( P 1 ), ti ( P 2 ), ti ( P 3 ) of the new set of pri- maries referenced to the original primary coordinate system. With this revision, ˜ T1 ( C ) m 11 m 12 m 13 T1( C ) ˜ T (C) = m 21 m 22 m 31 T2( C ) (3.4-5) 2 ˜ T3 ( C ) m 31 m 32 m 33 T3( C )
  • 76. COLOR SPACES 63 where ∆ ij m ij = ------ ∆i and ˜ ˜ ˜ ∆1 = T 1 ( W )∆ 11 + T 2 ( W )∆ 12 + T 3 ( W )∆ 13 ˜ ˜ ˜ ∆2 = T 1 ( W )∆ 21 + T 2 ( W )∆ 22 + T 3 ( W )∆ 23 ˜ ˜ ˜ ∆3 = T 1 ( W )∆ 31 + T 2 ( W )∆ 32 + T 3 ( W )∆ 33 ˜ ˜ ˜ ˜ ∆ 11 = t 2 ( P2 )t 3 ( P3 ) – t3 ( P 2 )t 2 ( P 3 ) ˜ ˜ ˜ ˜ ∆ 12 = t 3 ( P2 )t 1 ( P3 ) – t1 ( P 2 )t 3 ( P 3 ) ˜ ˜ ˜ ˜ ∆ 13 = t 1 ( P2 )t 2 ( P3 ) – t2 ( P 2 )t 1 ( P 3 ) ˜ ˜ ˜ ˜ ∆ 21 = t 3 ( P1 )t 2 ( P3 ) – t2 ( P 1 )t 3 ( P 3 ) ˜ ˜ ˜ ˜ ∆ 22 = t 1 ( P1 )t 3 ( P3 ) – t3 ( P 1 )t 1 ( P 3 ) ˜ ˜ ˜ ˜ ∆ 23 = t 2 ( P1 )t 1 ( P3 ) – t1 ( P 1 )t 2 ( P 3 ) ˜ ˜ ˜ ˜ ∆ 31 = t 2 ( P 1 )t 3 ( P2 ) – t3 ( P 1 )t2 ( P 2 ) ˜ ˜ ˜ ˜ ∆ 32 = t 3 ( P 1 )t 1 ( P2 ) – t1 ( P 1 )t3 ( P 2 ) ˜ ˜ ˜ ˜ ∆ 33 = t 1 ( P 1 )t 2 ( P2 ) – t2 ( P 1 )t1 ( P 2 ) Thus, if the tristimulus values are known for a given set of primaries, conversion to another set of primaries merely entails a simple linear transformation of coordinates. 3.5. COLOR SPACES It has been shown that a color (C) can be matched by its tristimulus values T1 ( C ) , T 2 ( C ) , T 3 ( C ) for a given set of primaries. Alternatively, the color may be specified by its chromaticity values t 1 ( C ) , t 2 ( C ) and its luminance Y(C). Appendix 2 presents formulas for color coordinate conversion between tristimulus values and chromatic- ity coordinates for various representational combinations. A third approach in speci- fying a color is to represent the color by a linear or nonlinear invertible function of its tristimulus or chromaticity values. In this section we describe several standard and nonstandard color spaces for the representation of color images. They are categorized as colorimetric, subtractive, video, or nonstandard. Figure 3.5-1 illustrates the relationship between these color spaces. The figure also lists several example color spaces.
  • 77. 64 PHOTOMETRY AND COLORIMETRY nonstandard linear and nonlinear intercomponent colorimetric transformation linear linear intercomponent transformation colorimetric subtractive linear CMY/CMYK nonlinear RGB intercomponent transformation linear point nonlinear transformation point nonlinear colorimetric transformation intercomponent nonlinear transformation video video gamma gamma luma/chroma RGB YCC linear intercomponent transformation FIGURE 3.5-1. Relationship of color spaces. Natural color images, as opposed to computer-generated images, usually origi- nate from a color scanner or a color video camera. These devices incorporate three sensors that are spectrally sensitive to the red, green, and blue portions of the light spectrum. The color sensors typically generate red, green, and blue color signals that are linearly proportional to the amount of red, green, and blue light detected by each sensor. These signals are linearly proportional to the tristimulus values of a color at each pixel. As indicated in Figure 3.5-1, linear RGB images are the basis for the gen- eration of the various color space image representations. 3.5.1. Colorimetric Color Spaces The class of colorimetric color spaces includes all linear RGB images and the stan- dard colorimetric images derived from them by linear and nonlinear intercomponent transformations.
  • 78. COLOR SPACES 65 FIGURE 3.5-2. Tristimulus values of CIE spectral primaries required to match unit energy throughout the spectrum. Red = 700 nm, green = 546.1 nm, and blue = 435.8 nm. RCGCBC Spectral Primary Color Coordinate System. In 1931, the CIE developed a standard primary reference system with three monochromatic primaries at wave- lengths: red = 700 nm; green = 546.1 nm; blue = 435.8 nm (11). The units of the tristimulus values are such that the tristimulus values RC, GC, BC are equal when matching an equal-energy white, called Illuminant E, throughout the visible spectrum. The primary system is defined by tristimulus curves of the spectral colors, as shown in Figure 3.5-2. These curves have been obtained indirectly by experimental color-match- ing experiments performed by a number of observers. The collective color-matching response of these observers has been called the CIE Standard Observer. Figure 3.5-3 is a chromaticity diagram for the CIE spectral coordinate system. FIGURE 3.5-3. Chromaticity diagram for CIE spectral primary system.
  • 79. 66 PHOTOMETRY AND COLORIMETRY RNGNBN NTSC Receiver Primary Color Coordinate System. Commercial televi- sion receivers employ a cathode ray tube with three phosphors that glow in the red, green, and blue regions of the visible spectrum. Although the phosphors of commercial television receivers differ from manufacturer to manufacturer, it is common practice to reference them to the National Television Systems Committee (NTSC) receiver phosphor standard (14). The standard observer data for the CIE spectral primary system is related to the NTSC primary system by a pair of linear coordinate conversions. Figure 3.5-4 is a chromaticity diagram for the NTSC primary system. In this system, the units of the tristimulus values are normalized so that the tristimulus values are equal when matching the Illuminant C white reference. The NTSC phosphors are not pure monochromatic sources of radiation, and hence the gamut of colors producible by the NTSC phosphors is smaller than that available from the spectral primaries. This fact is clearly illustrated by Figure 3.5-3, in which the gamut of NTSC reproducible colors is plotted in the spectral primary chromaticity diagram (11). In modern practice, the NTSC chromaticities are combined with Illuminant D65. REGEBE EBU Receiver Primary Color Coordinate System. The European Broad- cast Union (EBU) has established a receiver primary system whose chromaticities are close in value to the CIE chromaticity coordinates, and the reference white is Illuminant C (17). The EBU chromaticities are also combined with the D65 illumi- nant. RRGRBR CCIR Receiver Primary Color Coordinate Systems. In 1990, the Interna- tional Telecommunications Union (ITU) issued its Recommendation 601, which FIGURE 3.5-4. Chromaticity diagram for NTSC receiver phosphor primary system.
  • 80. COLOR SPACES 67 specified the receiver primaries for standard resolution digital television (18). Also, in 1990 the ITU published its Recommendation 709 for digital high-definition tele- vision systems (19). Both standards are popularly referenced as CCIR Rec. 601 and CCIR Rec. 709, abbreviations of the former name of the standards committee, Comité Consultatif International des Radiocommunications. RSGSBS SMPTE Receiver Primary Color Coordinate System. The Society of Motion Picture and Television Engineers (SMPTE) has established a standard receiver primary color coordinate system with primaries that match modern receiver phosphors better than did the older NTSC primary system (20). In this coordinate system, the reference white is Illuminant D65. XYZ Color Coordinate System. In the CIE spectral primary system, the tristimulus values required to achieve a color match are sometimes negative. The CIE has developed a standard artificial primary coordinate system in which all tristimulus values required to match colors are positive (4). These artificial primaries are shown in the CIE primary chromaticity diagram of Figure 3.5-3 (11). The XYZ system primaries have been chosen so that the Y tristimulus value is equivalent to the lumi- nance of the color to be matched. Figure 3.5-5 is the chromaticity diagram for the CIE XYZ primary system referenced to equal-energy white (4). The linear transfor- mations between RCGCBC and XYZ are given by FIGURE 3.5-5. Chromaticity diagram for CIE XYZ primary system.
  • 81. 68 PHOTOMETRY AND COLORIMETRY X 0.49018626 0.30987954 0.19993420 RC Y = 0.17701522 0.81232418 0.01066060 GC (3.5-1a) Z 0.00000000 0.01007720 0.98992280 BC RC 2.36353918 – 0.89582361 – 0.46771557 X GC = – 0.51511248 1.42643694 0.08867553 Y (3.5-1b) BC 0.00524373 – 0.01452082 1.00927709 Z The color conversion matrices of Eq. 3.5-1 and those color conversion matrices defined later are quoted to eight decimal places (21,22). In many instances, this quo- tation is to a greater number of places than the original specification. The number of places has been increased to reduce computational errors when concatenating trans- formations between color representations. The color conversion matrix between XYZ and any other linear RGB color space can be computed by the following algorithm. 1. Compute the colorimetric weighting coefficients a(1), a(2), a(3) from –1 a(1) xR xG xB xW ⁄ yW a(2) = yR yG yB 1 (3.5-2a) a(3) zR zG zB zW ⁄ yW where xk, yk, zk are the chromaticity coordinates of the RGB primary set. 2. Compute the RGB-to-XYZ conversion matrix. M ( 1, 1 ) M ( 1, 2 ) M ( 1, 3 ) xR xG xB a(1) 0 0 M ( 2, 1 ) M ( 2, 2 ) M ( 2, 3 ) = yR yG yB 0 a(2) 0 (3.5-2b) M ( 3, 1 ) M ( 3, 2 ) M ( 3, 3 ) zR zG zB 0 0 a( 3) The XYZ-to-RGB conversion matrix is, of course, the matrix inverse of M . Table 3.5-1 lists the XYZ tristimulus values of several standard illuminants. The XYZ chro- maticity coordinates of the standard linear RGB color systems are presented in Table 3.5-2. From Eqs. 3.5-1 and 3.5-2 it is possible to derive a matrix transformation between RCGCBC and any linear colorimetric RGB color space. The book CD con- tains a file that lists the transformation matrices (22) between the standard RGB color coordinate systems and XYZ and UVW, defined below.
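The two-step algorithm of Eq. 3.5-2 translates directly into a few lines of linear algebra. The sketch below is a plain NumPy illustration (the function name is ours, not part of PIKS); the usage line takes the NTSC primary chromaticities and the Illuminant C white reference from Tables 3.5-1 and 3.5-2, which follow.

import numpy as np

def rgb_to_xyz_matrix(r_xyz, g_xyz, b_xyz, white_XYZ):
    """Eq. 3.5-2: RGB-to-XYZ matrix from primary chromaticities and a white point.

    r_xyz, g_xyz, b_xyz : (x, y, z) chromaticity coordinates of the three primaries.
    white_XYZ           : (X0, Y0, Z0) tristimulus values of the reference white.
    """
    C = np.column_stack([r_xyz, g_xyz, b_xyz])   # chromaticity matrix of Eq. 3.5-2a
    X0, Y0, Z0 = white_XYZ
    w = np.array([X0 / Y0, 1.0, Z0 / Y0])
    a = np.linalg.solve(C, w)                    # Eq. 3.5-2a: weighting coefficients a(k)
    return C * a                                 # Eq. 3.5-2b: column k scaled by a(k)

# NTSC primaries and Illuminant C white reference (Tables 3.5-1 and 3.5-2):
M = rgb_to_xyz_matrix((0.67, 0.33, 0.00), (0.21, 0.71, 0.08), (0.14, 0.08, 0.78),
                      (0.980708, 1.000000, 1.182163))
M_inv = np.linalg.inv(M)          # the XYZ-to-RGB matrix, per the remark following Eq. 3.5-2b
# Sanity check: a unit match R = G = B = 1 reproduces the reference white, i.e.
# M @ [1, 1, 1] equals (X0/Y0, 1, Z0/Y0).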
TABLE 3.5-1. XYZ Tristimulus Values of Standard Illuminants

Illuminant        X0          Y0          Z0
A                 1.098700    1.000000    0.355900
C                 0.980708    1.000000    1.182163
D50               0.964296    1.000000    0.825105
D65               0.950456    1.000000    1.089058
E                 1.000000    1.000000    1.000000

TABLE 3.5-2. XYZ Chromaticity Coordinates of Standard Primaries

Standard               x           y           z
CIE        RC          0.640000    0.330000    0.030000
           GC          0.300000    0.600000    0.100000
           BC          0.150000    0.060000    0.790000
NTSC       RN          0.670000    0.330000    0.000000
           GN          0.210000    0.710000    0.080000
           BN          0.140000    0.080000    0.780000
SMPTE      RS          0.630000    0.340000    0.030000
           GS          0.310000    0.595000    0.095000
           BS          0.155000    0.070000    0.775000
EBU        RE          0.640000    0.330000    0.030000
           GE          0.290000    0.600000    0.110000
           BE          0.150000    0.060000    0.790000
CCIR       RR          0.640000    0.330000    0.030000
           GR          0.300000    0.600000    0.100000
           BR          0.150000    0.060000    0.790000

UVW Uniform Chromaticity Scale Color Coordinate System. In 1960, the CIE adopted a coordinate system, called the Uniform Chromaticity Scale (UCS), in which, to a good approximation, equal changes in the chromaticity coordinates result in equal, just noticeable changes in the perceived hue and saturation of a color. The V component of the UCS coordinate system represents luminance. The u, v chromaticity coordinates are related to the x, y chromaticity coordinates by the relations (23)
  • 83. 70 PHOTOMETRY AND COLORIMETRY 4x u = ---------------------------------- - (3.5-3a) – 2x + 12y + 3 6y v = ---------------------------------- - (3.5-3b) – 2x + 12y + 3 3u - x = -------------------------- (3.5-3c) 2u – 8v – 4 2v y = -------------------------- - (3.5-3d) 2u – 8v – 4 Figure 3.5-6 is a UCS chromaticity diagram. The tristimulus values of the uniform chromaticity scale coordinate system UVW are related to the tristimulus values of the spectral coordinate primary system by U 0.32679084 0.20658636 0.13328947 RC V = 0.17701522 0.81232418 0.01066060 GC (3.5-4a) W 0.02042971 1.06858510 0.41098519 BC RC 2.84373542 0.50732308 – 0.93543113 U GC = – 0.63965541 1.16041034 0.17735107 V (3.5-4b) BC 1.52178123 – 3.04235208 2.01855417 W FIGURE 3.5-6. Chromaticity diagram for CIE uniform chromaticity scale primary system.
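The UCS chromaticity mapping of Eq. 3.5-3 is a pair of one-line functions. A minimal sketch; note that solving Eqs. 3.5-3a and 3.5-3b for x and y gives a denominator of 2u − 8v + 4, so the sign of the final term in Eqs. 3.5-3c and 3.5-3d as printed appears to be a typographical slip, and the code below uses the form that makes the round trip exact.

def xy_to_uv(x, y):
    """Eq. 3.5-3a,b: CIE 1960 UCS chromaticities from x, y."""
    d = -2.0 * x + 12.0 * y + 3.0
    return 4.0 * x / d, 6.0 * y / d

def uv_to_xy(u, v):
    """Inverse obtained by solving Eqs. 3.5-3a,b for x and y."""
    d = 2.0 * u - 8.0 * v + 4.0
    return 3.0 * u / d, 2.0 * v / d

u, v = xy_to_uv(0.3127, 0.3290)   # the D65 white point of Table 3.5-1, expressed as chromaticities
x, y = uv_to_xy(u, v)             # returns (0.3127, 0.3290) to rounding error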
  • 84. COLOR SPACES 71 U*V*W* Color Coordinate System. The U*V*W* color coordinate system, adopted by the CIE in 1964, is an extension of the UVW coordinate system in an attempt to obtain a color solid for which unit shifts in luminance and chrominance are uniformly perceptible. The U*V*W* coordinates are defined as (24) U∗ = 13W∗ ( u – u o ) (3.5-5a) V∗ = 13W∗ ( v – v o ) (3.5-5b) 1⁄3 W∗ = 25 ( 100Y ) – 17 (3.5-5c) where the luminance Y is measured over a scale of 0.0 to 1.0 and uo and vo are the chromaticity coordinates of the reference illuminant. The UVW and U*V*W* coordinate systems were rendered obsolete in 1976 by the introduction by the CIE of the more accurate L*a*b* and L*u*v* color coordi- nate systems. Although depreciated by the CIE, much valuable data has been col- lected in the UVW and U*V*W* color systems. L*a*b* Color Coordinate System. The L*a*b* cube root color coordinate system was developed to provide a computationally simple measure of color in agreement with the Munsell color system (25). The color coordinates are  Y 1⁄3 Y  116  -----  Y  – 16 for ----- > 0.008856 (3.5-6a)  Yo L∗ =  o  Y Y  903.3 ----- for 0.0 ≤ ----- ≤ 0.008856 (3.5-6b)  Yo Yo X Y a∗ = 500 f  -----  – f  -----  - (3.5-6c)  X o   Yo  X Z b∗ = 200 f  -----  – f  -----  - (3.5-6d)  Xo   Zo  where  w1 ⁄ 3 for w > 0.008856 (3.6-6e)  f( w) =    7.787 ( w ) + 0.1379 for 0.0 ≤ w ≤ 0.008856 (3.6-6f)
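The forward L*a*b* conversion of Eq. 3.5-6 can be sketched as below; here Xo, Yo, Zo are the reference-white tristimulus values, as explained immediately following, and Illuminant C of Table 3.5-1 is used as an illustrative default. The b* line is written in the conventional CIE form, 200[f(Y/Yo) − f(Z/Zo)]. The function name and the numeric values in the usage line are our own illustrative choices.

import numpy as np

def xyz_to_lab(X, Y, Z, white=(0.980708, 1.000000, 1.182163)):
    """L*a*b* of Eq. 3.5-6; 'white' is (Xo, Yo, Zo), here Illuminant C of Table 3.5-1."""
    Xo, Yo, Zo = white

    def f(w):                                   # the piecewise cube-root function of Eq. 3.5-6
        w = np.asarray(w, dtype=float)
        return np.where(w > 0.008856, np.cbrt(w), 7.787 * w + 0.1379)

    yr = np.asarray(Y, dtype=float) / Yo
    L = np.where(yr > 0.008856, 116.0 * np.cbrt(yr) - 16.0, 903.3 * yr)
    a = 500.0 * (f(X / Xo) - f(Y / Yo))
    b = 200.0 * (f(Y / Yo) - f(Z / Zo))         # conventional CIE form of the b* difference
    return L, a, b

L, a, b = xyz_to_lab(0.4, 0.5, 0.6)             # illustrative tristimulus values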
  • 85. 72 PHOTOMETRY AND COLORIMETRY The terms Xo, Yo, Zo are the tristimulus values for the reference white. Basically, L* is correlated with brightness, a* with redness-greenness, and b* with yellowness- blueness. The inverse relationship between L*a*b* and XYZ is  L∗ + 16  X = X o g  ------------------  (3.5-7a)  25    Y  a∗  Y = Y o g  f  -----  + --------  - (3.5-7b)   Y o  500    Y  b∗  Z = Z o g  f  -----  – --------  - (3.5-7c)   Y o  200  where  w3 for w > 0.20681 (3.6-7d)  g( w) =    0.1284 ( w – 0.1379 ) if 0.0 ≤ w ≤ 0.20689 (3.6-7e) L*u*v* Color Coordinate System. The L*u*v* coordinate system (26), which has evolved from the L*a*b* and the U*V*W* coordinate systems, became a CIE stan- dard in 1976. It is defined as   Y 1⁄3 -----   25  100 Y  – 16 Y for ----- ≥ 0.008856 (3.5-8a)  o Yo  L∗ =    Y Y  903.3 ----- for ----- < 0.008856 (3.5-8b)  Yo Yo u∗ = 13L∗ ( u′ – u′ ) o (3.5-8c) v∗ = 13L∗ ( v′ – v′ ) o (3.5-8d) where 4X u′ = -------------------------------- (3.5-8e) X + 15Y + 3Z 9Y v′ = -------------------------------- (3.5-8f) X + 15Y + 3Z
  • 86. COLOR SPACES 73 and u′ and v′ are obtained by substitution of the tristimulus values Xo, Yo, Zo for o o the reference white. The inverse relationship is given by X = 9u′ Y ------- - (3.5-9a) 4v′ ∗ 3 Y = Y o  L + 16  ----------------- - (3.5-9b)  25  12 – 3u′ – 20v′ Z = Y ------------------------------------ - (3.5-9c) 4v′ where u∗ u′ = ----------- + u′ o - (3.5-9d) 13L∗ v∗ - v' = ----------- + u′ o (3.5-9e) 13L∗ Figure 3.5-7 shows the linear RGB components of an NTSC receiver primary color image. This color image is printed in the color insert. If printed properly, the color image and its monochromatic component images will appear to be of “nor- mal” brightness. When displayed electronically, the linear images will appear too dark. Section 3.5.3 discusses the proper display of electronic images. Figures 3.5-8 to 3.5-10 show the XYZ, Yxy, and L*a*b* components of Figure 3.5-7. Section 10.1.1 describes amplitude-scaling methods for the display of image components outside the unit amplitude range. The amplitude range of each component is printed below each photograph. 3.5.2. Subtractive Color Spaces The color printing and color photographic processes (see Section 11-3) are based on a subtractive color representation. In color printing, the linear RGB color compo- nents are transformed to cyan (C), magenta (M), and yellow (Y) inks, which are overlaid at each pixel on a, usually, white paper. The simplest transformation rela- tionship is C = 1.0 – R (3.5-10a) M = 1.0 – G (3.5-10b) Y = 1.0 – B (3.5-10c)
  • 87. 74 PHOTOMETRY AND COLORIMETRY (a) Linear R, 0.000 to 0.965 (b) Linear G, 0.000 to 1.000 (c) Linear B, 0.000 to 0.965 FIGURE 3.5-7. Linear RGB components of the dolls_linear color image. See insert for a color representation of this figure. where the linear RGB components are tristimulus values over [0.0, 1.0]. The inverse relations are R = 1.0 – C (3.5-11a) G = 1.0 – M (3.5-11b) B = 1.0 – Y (3.5-11c) In high-quality printing systems, the RGB-to-CMY transformations, which are usu- ally proprietary, involve color component cross-coupling and point nonlinearities.
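The simple CMY transformation of Eqs. 3.5-10 and 3.5-11, and the undercolor-removal CMYK form of Eq. 3.5-12 given on the following pages, reduce to a few lines of array arithmetic. A minimal sketch, assuming linear RGB tristimulus values over [0.0, 1.0]; u and b are the undercolor removal and blackness factors defined with Eq. 3.5-12, and the function names are illustrative.

import numpy as np

def rgb_to_cmy(rgb):
    """Eq. 3.5-10: C = 1 - R, M = 1 - G, Y = 1 - B."""
    return 1.0 - np.asarray(rgb, dtype=float)

def cmy_to_rgb(cmy):
    """Eq. 3.5-11, the exact inverse."""
    return 1.0 - np.asarray(cmy, dtype=float)

def rgb_to_cmyk(rgb, u=1.0, b=1.0):
    """Eq. 3.5-12 undercolor removal, with Kb = MIN(1 - R, 1 - G, 1 - B)."""
    rgb = np.asarray(rgb, dtype=float)
    cmy = 1.0 - rgb
    Kb = cmy.min(axis=-1, keepdims=True)
    return np.concatenate([cmy - u * Kb, b * Kb], axis=-1)

# A mid-gray pixel: with u = b = 1 the equal CMY amounts are moved entirely into black ink.
print(rgb_to_cmyk([0.5, 0.5, 0.5]))     # -> [0. 0. 0. 0.5]

Because the operations are elementwise, the same functions apply unchanged to whole images stored as arrays with a trailing color axis.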
(a) X, 0.000 to 0.952    (b) Y, 0.000 to 0.985    (c) Z, 0.000 to 1.143

FIGURE 3.5-8. XYZ components of the dolls_linear color image.

To achieve dark black printing without using excessive amounts of CMY inks, it is common to add a fourth component, a black ink, called the key (K) or black component. The black component is set proportional to the smallest of the CMY components as computed by Eq. 3.5-10. The common RGB-to-CMYK transformation, which is based on the undercolor removal algorithm (27), is

$$ C = 1.0 - R - uK_b $$   (3.5-12a)

$$ M = 1.0 - G - uK_b $$   (3.5-12b)

$$ Y = 1.0 - B - uK_b $$   (3.5-12c)

$$ K = bK_b $$   (3.5-12d)
  • 89. 76 PHOTOMETRY AND COLORIMETRY (a) Y, 0.000 to 0.965 (b) x, 0.140 to 0.670 (c) y, 0.080 to 0.710 FIGURE 3.5-9. Yxy components of the dolls_linear color image. where K b = MIN { 1.0 – R, 1.0 – G, 1.0 – B } (3.5-12e) and 0.0 ≤ u ≤ 1.0 is the undercolor removal factor and 0.0 ≤ b ≤ 1.0 is the blackness factor. Figure 3.5-11 presents the CMY components of the color image of Figure 3.5-7. 3.5.3 Video Color Spaces The red, green, and blue signals from video camera sensors typically are linearly proportional to the light striking each sensor. However, the light generated by cathode tube displays is approximately proportional to the display amplitude drive signals
  • 90. COLOR SPACES 77 (a) L*, −16.000 to 99.434 (b) a*, −55.928 to 69.291 (c) b*, −65.224 to 90.171 FIGURE 3.5-10. L*a*b* components of the dolls_linear color image. raised to a power in the range 2.0 to 3.0 (28). To obtain a good-quality display, it is necessary to compensate for this point nonlinearity. The compensation process, called gamma correction, involves passing the camera sensor signals through a point nonlinearity with a power, typically, of about 0.45. In television systems, to reduce receiver cost, gamma correction is performed at the television camera rather than at the receiver. A linear RGB image that has been gamma corrected is called a gamma RGB image. Liquid crystal displays are reasonably linear in the sense that the light generated is approximately proportional to the display amplitude drive signal. But because LCDs are used in lieu of CRTs in many applications, they usu- ally employ circuitry to compensate for the gamma correction at the sensor.
  • 91. 78 PHOTOMETRY AND COLORIMETRY (a) C, 0.0035 to 1.000 (b) M, 0.000 to 1.000 (c) Y, 0.0035 to 1.000 FIGURE 3.5-11. CMY components of the dolls_linear color image. In high-precision applications, gamma correction follows a linear law for low- amplitude components and a power law for high-amplitude components according to the relations (22)  c Kc 2 + c for K ≥ b (3.5-13a) 1 3 ˜ =  K    c4 K for 0.0 ≤ K < b (3.5-13b)
where K denotes a linear RGB component and $\tilde{K}$ is the gamma-corrected component. The constants c_k and the breakpoint b are specified in Table 3.5-3 for the general case and for conversion to the SMPTE, CCIR and CIE lightness components. Figure 3.5-12 is a plot of the gamma correction curve for the CCIR Rec. 709 primaries.

TABLE 3.5-3. Gamma Correction Constants

            General     SMPTE       CCIR        CIE L*
c1          1.00        1.1115      1.099       116.0
c2          0.45        0.45        0.45        0.3333
c3          0.00        -0.1115     -0.099      -16.0
c4          0.00        4.0         4.5         903.3
b           0.00        0.0228      0.018       0.008856

The inverse gamma correction relation is

$$ K = \left( \frac{\tilde{K} - c_3}{c_1} \right)^{1/c_2} \quad \text{for } \tilde{K} \ge c_4 b $$   (3.5-14a)

$$ K = \frac{\tilde{K}}{c_4} \quad \text{for } 0.0 \le \tilde{K} < c_4 b $$   (3.5-14b)

FIGURE 3.5-12. Gamma correction curve for the CCIR Rec. 709 primaries.
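The piecewise gamma law of Eq. 3.5-13 and its inverse, Eq. 3.5-14, can be sketched as below. The defaults are the CCIR column of Table 3.5-3; the function names are illustrative, and the components are assumed to lie in [0.0, 1.0].

import numpy as np

def gamma_correct(K, c1=1.099, c2=0.45, c3=-0.099, c4=4.5, b=0.018):
    """Eq. 3.5-13: linear component K -> gamma-corrected component K~."""
    K = np.asarray(K, dtype=float)
    return np.where(K >= b, c1 * K ** c2 + c3, c4 * K)

def gamma_uncorrect(Kt, c1=1.099, c2=0.45, c3=-0.099, c4=4.5, b=0.018):
    """Eq. 3.5-14: gamma-corrected component K~ -> linear component K."""
    Kt = np.asarray(Kt, dtype=float)
    return np.where(Kt >= c4 * b, ((Kt - c3) / c1) ** (1.0 / c2), Kt / c4)

K = np.linspace(0.0, 1.0, 5)
np.allclose(gamma_uncorrect(gamma_correct(K)), K)    # round trip -> True

With the CCIR constants the two branches meet continuously at K = 0.018, where both forms give approximately 0.081, which is why the breakpoint of the inverse relation is c4*b.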
  • 93. 80 PHOTOMETRY AND COLORIMETRY (a) Gamma R, 0.000 to 0.984 (b) Gamma G, 0.000 to 1.000 (c) Gamma B, 0.000 to 0.984 FIGURE 3.5-13. Gamma RGB components of the dolls_gamma color image. See insert for a color representation of this figure. Figure 3.5-13 shows the gamma RGB components of the color image of Figure 3.5-7. The gamma color image is printed in the color insert. The gamma components have been printed as if they were linear components to illustrate the effects of the point transformation. When viewed on an electronic display, the gamma RGB color image will appear to be of “normal” brightness. YIQ NTSC Transmission Color Coordinate System. In the development of the color television system in the United States, NTSC formulated a color coordinate system for transmission composed of three values, Y, I, Q (14). The Y value, called luma, is proportional to the gamma-corrected luminance of a color. The other two components, I and Q, called chroma, jointly describe the hue and saturation
  • 94. COLOR SPACES 81 attributes of an image. The reasons for transmitting the YIQ components rather than ˜ ˜ ˜ the gamma-corrected RN G N B N components directly from a color camera were two fold: The Y signal alone could be used with existing monochrome receivers to dis- play monochrome images; and it was found possible to limit the spatial bandwidth of the I and Q signals without noticeable image degradation. As a result of the latter property, a clever analog modulation scheme was developed such that the bandwidth of a color television carrier could be restricted to the same bandwidth as a mono- chrome carrier. The YIQ transformations for an Illuminant C reference white are given by Y 0.29889531 0.58662247 0.11448223 ˜ RN I = 0.59597799 – 0.27417610 – 0.32180189 ˜ G (3.5-15a) N Q 0.21147017 – 0.52261711 0.31114694 ˜ BN ˜ RN 1.00000000 0.95608445 0.62088850 Y ˜ G N = 1.00000000 – 0.27137664 – 0.64860590 I (3.5-15b) ˜ BN 1.00000000 – 1.10561724 1.70250126 Q where the tilde denotes that the component has been gamma corrected. Figure 3.5-14 presents the YIQ components of the gamma color image of Figure 3.5-13. YUV EBU Transmission Color Coordinate System. In the PAL and SECAM color television systems (29) used in many countries, the luma Y and two color differ- ences, ˜ BE – Y U = --------------- - (3.5-16a) 2.03 ˜ RE – Y V = --------------- - (3.5-16b) 1.14 ˜ ˜ are used as transmission coordinates, where RE and B E are the gamma-corrected EBU red and blue components, respectively. The YUV coordinate system was ini- tially proposed as the NTSC transmission standard but was later replaced by the YIQ system because it was found (4) that the I and Q signals could be reduced in band- width to a greater degree than the U and V signals for an equal level of visual qual- ity. The I and Q signals are related to the U and V signals by a simple rotation of coordinates in color space:
(a) Y, 0.000 to 0.994    (b) I, −0.276 to 0.347    (c) Q, −0.147 to 0.169

FIGURE 3.5-14. YIQ components of the gamma corrected dolls_gamma color image.

$$ I = -U \sin 33^\circ + V \cos 33^\circ $$   (3.5-17a)

$$ Q = U \cos 33^\circ + V \sin 33^\circ $$   (3.5-17b)

It should be noted that the U and V components of the YUV video color space are not equivalent to the U and V components of the UVW uniform chromaticity system.

YCbCr CCIR Rec. 601 Transmission Color Coordinate System. The CCIR Rec. 601 color coordinate system YCbCr is defined for the transmission of luma and chroma components coded in the integer range 0 to 255. The YCbCr transformations for unit range components are defined as (28)
  • 96. COLOR SPACES 83 Y 0.29900000 0.58700000 0.11400000 ˜ RS Cb = – 0.16873600 – 0.33126400 0.50000000 ˜ G (3.5-18a) S Cr 0.50000000 – 0.4186680 – 0.08131200 ˜ BS ˜ RS 1.00000000 – 0.0009264 1.40168676 Y ˜ G = 1.00000000 – 0.34369538 – 0.71416904 Cb (3.5-18b) S ˜ BS 1.00000000 1.77216042 0.00099022 Cr where the tilde denotes that the component has been gamma corrected. Photo YCC Color Coordinate System. Eastman Kodak company has developed an image storage system, called PhotoCD, in which a photographic negative is scanned, converted to a luma/chroma format similar to Rec. 601YCbCr, and recorded in a proprietary compressed form on a compact disk. The PhotoYCC format and its associated RGB display format have become defacto standards. PhotoYCC employs the CCIR Rec. 709 primaries for scanning. The conversion to YCC is defined as (27,28,30) Y 0.299 0.587 0.114 ˜ R 709 C1 = – 0.299 – 0.587 0.500 ˜ G (3.5-19a) 709 C2 0.500 – 0.587 0.114 ˜ B 709 Transformation from PhotoCD components for display is not an exact inverse of Eq. 3.5-19a, in order to preserve the extended dynamic range of film images. The YC1C2-to-RDGDBD display components is given by RD 0.969 0.000 1.000 Y GD = 0.969 – 0.194 – 0.509 C1 (3.5-19b) BD 0.969 1.000 0.000 C2 3.5.4. Nonstandard Color Spaces Several nonstandard color spaces used for image processing applications are described in this section.
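Before turning to the nonstandard spaces, it is worth noting that the video transformations above are simple matrix products on gamma-corrected RGB triples. The sketch below copies the forward matrices of Eqs. 3.5-15a and 3.5-18a; the helper names are illustrative, the components are treated as unit-range floats, and the middle entry of the Cr row (printed with a dropped digit) is reconstructed as -0.41868800 so that the chroma rows sum to zero.

import numpy as np

# Eq. 3.5-15a: gamma-corrected NTSC RGB -> YIQ (Illuminant C reference white).
RGB_TO_YIQ = np.array([[ 0.29889531,  0.58662247,  0.11448223],
                       [ 0.59597799, -0.27417610, -0.32180189],
                       [ 0.21147017, -0.52261711,  0.31114694]])

# Eq. 3.5-18a: gamma-corrected SMPTE RGB -> YCbCr (unit-range components).
RGB_TO_YCBCR = np.array([[ 0.29900000,  0.58700000,  0.11400000],
                         [-0.16873600, -0.33126400,  0.50000000],
                         [ 0.50000000, -0.41868800, -0.08131200]])

def rgb_to_yiq(rgb):
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

def rgb_to_ycbcr(rgb):
    return RGB_TO_YCBCR @ np.asarray(rgb, dtype=float)

def yiq_to_rgb(yiq):
    # Inverting the forward matrix numerically reproduces Eq. 3.5-15b to rounding error.
    return np.linalg.solve(RGB_TO_YIQ, np.asarray(yiq, dtype=float))

# Reference white maps to unit luma with zero chroma in both spaces:
print(rgb_to_yiq([1.0, 1.0, 1.0]))      # approximately [1, 0, 0]
print(rgb_to_ycbcr([1.0, 1.0, 1.0]))    # approximately [1, 0, 0]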
  • 97. 84 PHOTOMETRY AND COLORIMETRY IHS Color Coordinate System. The IHS coordinate system (31) has been used within the image processing community as a quantitative means of specifying the intensity, hue, and saturation of a color. It is defined by the relations I 1 1 1 R -- - -- - -- - 3 3 3 –1 –1 2 V1 = ------ - ------ - ------ - G (3.5-20a) 6 6 6 1 –1 ------ - ------ - 0 V2 6 6 B  V2 H = arc tan  ----- - (3.5-20b)  V1 2 2 1⁄2 S = ( V1 + V 2 ) (3.5-20c) By this definition, the color blue is the zero reference for hue. The inverse relation- ship is V 1 = S cos { H } (3.5-21a) V2 = S sin { H } (3.5-21b) – 6 6 R 1 --------- - ------ - I 6 2 = 6 – 6 (3.5-21c) G 1 ------ - --------- - V1 6 2 1 6 ------ - 0 B V2 3 Figure 3.5-15 shows the IHS components of the gamma RGB image of Figure 3.5-13. Karhunen–Loeve Color Coordinate System. Typically, the R, G, and B tristimulus values of a color image are highly correlated with one another (32). In the develop- ment of efficient quantization, coding, and processing techniques for color images, it is often desirable to work with components that are uncorrelated. If the second- order moments of the RGB tristimulus values are known, or at least estimable, it is
  • 98. COLOR SPACES 85 (a) l, 0.000 to 0.989 (b) H, −3.136 to 3.142 (c) S, 0.000 to 0.476 FIGURE 3.5-15. IHS components of the dolls_gamma color image. possible to derive an orthogonal coordinate system, in which the components are uncorrelated, by a Karhunen–Loeve (K–L) transformation of the RGB tristimulus values. The K-L color transform is defined as K1 m 11 m12 m 13 R K2 = m 21 m 22 m 23 G (3.5-22a) K3 m 31 m 32 m 33 B
  • 99. 86 PHOTOMETRY AND COLORIMETRY R m 11 m 21 m 31 K1 G = m 12 m 22 m 32 K2 (3.5-22b) B m 13 m 23 m 33 K3 where the transformation matrix with general term m ij composed of the eigenvec- tors of the RGB covariance matrix with general term u ij . The transformation matrix satisfies the relation m 11 m 12 m 13 u 11 u 12 u 13 m 11 m 21 m 31 λ1 0 0 m 21 m 22 m 23 u 12 u 22 u 23 m 12 m 22 m 32 = 0 λ2 0 m 31 m 32 m 33 u 13 u 23 u 33 m 13 m 23 m 33 0 0 λ3 (3.5-23) where λ 1 , λ 2 , λ 3 are the eigenvalues of the covariance matrix and 2 u 11 = E { ( R – R ) } (3.5-24a) 2 u 22 = E { ( G – G ) } (3.5-24b) 2 u 33 = E { ( B – B ) } (3.5-24c) u 12 = E { ( R – R ) ( G – G ) } (3.5-24d) u 13 = E { ( R – R ) ( B – B ) } (3.5-24e) u 23 = E { ( G – G ) ( B – B ) } (3.5-24f) In Eq. 3.5-23, E { · } is the expectation operator and the overbar denotes the mean value of a random variable. Retinal Cone Color Coordinate System. As indicated in Chapter 2, in the discus- sion of models of the human visual system for color vision, indirect measurements of the spectral sensitivities s 1 ( λ ) , s 2 ( λ ) , s 3 ( λ ) have been made for the three types of retinal cones. It has been found that these spectral sensitivity functions can be lin- early related to spectral tristimulus values established by colorimetric experimenta- tion. Hence a set of cone signals T1, T2, T3 may be regarded as tristimulus values in a retinal cone color coordinate system. The tristimulus values of the retinal cone color coordinate system are related to the XYZ system by the coordinate conversion matrix (33)
  • 100. REFERENCES 87 T1 0.000000 1.000000 0.000000 X T2 = – 0.460000 1.359000 0.101000 Y (3.5-25) T3 0.000000 0.000000 1.000000 Z REFERENCES 1. T. P. Merrit and F. F. Hall, Jr., “Blackbody Radiation,” Proc. IRE, 47, 9, September 1959, 1435–1442. 2. H. H. Malitson, “The Solar Energy Spectrum,” Sky and Telescope, 29, 4, March 1965, 162–165. 3. R. D. Larabee, “Spectral Emissivity of Tungsten,” J. Optical of Society America, 49, 6, June 1959, 619–625. 4. The Science of Color, Crowell, New York, 1973. 5. D. G. Fink, Ed., Television Engineering Handbook, McGraw-Hill, New York, 1957. 6. Toray Industries, Inc. LCD Color Filter Specification. 7. J. W. T. Walsh, Photometry, Constable, London, 1953. 8. M. Born and E. Wolf, Principles of Optics, 6th ed., Pergamon Press, New York, 1981. 9. K. S. Weaver, “The Visibility of Radiation at Low Intensities,” J. Optical Society of America, 27, 1, January 1937, 39–43. 10. G. Wyszecki and W. S. Stiles, Color Science, 2nd ed., Wiley, New York, 1982. 11. R. W. G. Hunt, The Reproduction of Colour, 5th ed., Wiley, New York, 1957. 12. W. D. Wright, The Measurement of Color, Adam Hilger, London, 1944, 204–205. 13. R. A. Enyord, Ed., Color: Theory and Imaging Systems, Society of Photographic Scien- tists and Engineers, Washington, DC, 1973. 14. F. J. Bingley, “Color Vision and Colorimetry,” in Television Engineering Handbook, D. G. Fink, ed., McGraw–Hill, New York, 1957. 15. H. Grassman, “On the Theory of Compound Colours,” Philosophical Magazine, Ser. 4, 7, April 1854, 254–264. 16. W. T. Wintringham, “Color Television and Colorimetry,” Proc. IRE, 39, 10, October 1951, 1135–1172. 17. “EBU Standard for Chromaticity Tolerances for Studio Monitors,” Technical Report 3213-E, European Broadcast Union, Brussels, 1975. 18. “Encoding Parameters of Digital Television for Studios”, Recommendation ITU-R BT.601-4, (International Telecommunications Union, Geneva; 1990). 19 “Basic Parameter Values for the HDTV Standard for the Studio and for International Programme Exchange,” Recommendation ITU-R BT 709, International Telecommuni- cations Unions, Geneva; 1990. 20. L. E. DeMarsh, “Colorimetric Standards in U.S. Color Television. A Report to the Sub- committee on Systems Colorimetry of the SMPTE Television Committee,” J. Society of Motion Picture and Television Engineers, 83, 1974.
  • 101. 88 PHOTOMETRY AND COLORIMETRY 21. “Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Part 1: Common Architecture for Imaging,” ISO/IEC 12087-1:1995(E). 22. “Information Technology, Computer Graphics and Image Processing, Image Processing and Interchange, Part 2: Programmer’s Imaging Kernel System Application Program Interface,” ISO/IEC 12087-2:1995(E). 23. D. L. MacAdam, “Projective Transformations of ICI Color Specifications,” J. Optical Society of America, 27, 8, August 1937, 294–299. 24. G. Wyszecki, “Proposal for a New Color-Difference Formula,” J. Optical Society of America, 53, 11, November 1963, 1318–1319. 25. “CIE Colorimetry Committee Proposal for Study of Color Spaces,” Technical, Note, J. Optical Society of America, 64, 6, June 1974, 896–897. 26. Colorimetry, 2nd ed., Publication 15.2, Central Bureau, Commission Internationale de l'Eclairage, Vienna, 1986. 27. W. K. Pratt, Developing Visual Applications, XIL: An Imaging Foundation Library, Sun Microsystems Press, Mountain View, CA, 1997. 28. C. A. Poynton, A Technical Introduction to Digital Video, Wiley, New York, 1996. 29. P. S. Carnt and G. B. Townsend, Color Television Vol. 2; PAL, SECAM, and Other Sys- tems, Iliffe, London, 1969. 30. I. Kabir, High Performance Computer Imaging, Manning Publications, Greenwich, CT, 1996. 31. W. Niblack, An Introduction to Digital Image Processing, Prentice Hall, Englewood Cliffs, NJ, 1985. 32. W. K. Pratt, “Spatial Transform Coding of Color Images,” IEEE Trans. Communication Technology, COM-19, 12, December 1971, 980–992. 33. D. B. Judd, “Standard Response Functions for Protanopic and Deuteranopic Vision,” J. Optical Society of America, 35, 3, March 1945, 199–221.
  • 102. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) PART 2 DIGITAL IMAGE CHARACTERIZATION Digital image processing is based on the conversion of a continuous image field to equivalent digital form. This part of the book considers the image sampling and quantization processes that perform the analog image to digital image conversion. The inverse operation of producing continuous image displays from digital image arrays is also analyzed. Vector-space methods of image representation are developed for deterministic and stochastic image arrays. 89
  • 103. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 4 IMAGE SAMPLING AND RECONSTRUCTION In digital image processing systems, one usually deals with arrays of numbers obtained by spatially sampling points of a physical image. After processing, another array of numbers is produced, and these numbers are then used to reconstruct a con- tinuous image for viewing. Image samples nominally represent some physical mea- surements of a continuous image field, for example, measurements of the image intensity or photographic density. Measurement uncertainties exist in any physical measurement apparatus. It is important to be able to model these measurement errors in order to specify the validity of the measurements and to design processes for compensation of the measurement errors. Also, it is often not possible to mea- sure an image field directly. Instead, measurements are made of some function related to the desired image field, and this function is then inverted to obtain the desired image field. Inversion operations of this nature are discussed in the sections on image restoration. In this chapter the image sampling and reconstruction process is considered for both theoretically exact and practical systems. 4.1. IMAGE SAMPLING AND RECONSTRUCTION CONCEPTS In the design and analysis of image sampling and reconstruction systems, input images are usually regarded as deterministic fields (1–5). However, in some situations it is advantageous to consider the input to an image processing system, especially a noise input, as a sample of a two-dimensional random process (5–7). Both viewpoints are developed here for the analysis of image sampling and reconstruction methods. 91
  • 104. 92 IMAGE SAMPLING AND RECONSTRUCTION FIGURE 4.1-1. Dirac delta function sampling array. 4.1.1. Sampling Deterministic Fields Let F I ( x, y ) denote a continuous, infinite-extent, ideal image field representing the luminance, photographic density, or some desired parameter of a physical image. In a perfect image sampling system, spatial samples of the ideal image would, in effect, be obtained by multiplying the ideal image by a spatial sampling function ∞ ∞ S ( x, y ) = ∑ ∑ δ ( x – j ∆x, y – k ∆y ) (4.1-1) j = –∞ k = – ∞ composed of an infinite array of Dirac delta functions arranged in a grid of spacing ( ∆x, ∆y ) as shown in Figure 4.1-1. The sampled image is then represented as ∞ ∞ F P ( x, y ) = FI ( x, y )S ( x, y ) = ∑ ∑ FI ( j ∆x, k ∆y )δ ( x – j ∆x, y – k ∆y ) (4.1-2) j = –∞ k = –∞ where it is observed that F I ( x, y ) may be brought inside the summation and evalu- ated only at the sample points ( j ∆x, k ∆y) . It is convenient, for purposes of analysis, to consider the spatial frequency domain representation F P ( ω x, ω y ) of the sampled image obtained by taking the continuous two-dimensional Fourier transform of the sampled image. Thus ∞ ∞ F P ( ω x, ω y ) = ∫–∞ ∫–∞ FP ( x, y ) exp { –i ( ωx x + ωy y ) } dx dy (4.1-3)
  • 105. IMAGE SAMPLING AND RECONSTRUCTION CONCEPTS 93 By the Fourier transform convolution theorem, the Fourier transform of the sampled image can be expressed as the convolution of the Fourier transforms of the ideal image F I ( ω x, ω y ) and the sampling function S ( ω x, ω y ) as expressed by 1 F P ( ω x, ω y ) = -------- F I ( ω x, ω y ) ᭺ S ( ω x, ω y ) - * (4.1-4) 2 4π The two-dimensional Fourier transform of the spatial sampling function is an infi- nite array of Dirac delta functions in the spatial frequency domain as given by (4, p. 22) 2 ∞ ∞ 4π - S ( ω x, ω y ) = -------------- ∆x ∆y ∑ ∑ δ ( ω x – j ω xs, ω y – k ω ys ) (4.1-5) j = –∞ k = –∞ where ω xs = 2π ⁄ ∆x and ω ys = 2π ⁄ ∆y represent the Fourier domain sampling fre- quencies. It will be assumed that the spectrum of the ideal image is bandlimited to some bounds such that F I ( ω x, ω y ) = 0 for ω x > ω xc and ω y > ω yc . Performing the convolution of Eq. 4.1-4 yields 1 ∞ ∞ F P ( ω x, ω y ) = -------------- ∆x ∆y - ∫– ∞ ∫– ∞ F I ( ω x – α , ω y – β ) ∞ ∞ × ∑ ∑ δ ( ω x – j ω xs, ω y – k ω ys ) dα dβ (4.1-6) j = – ∞ k = –∞ Upon changing the order of summation and integration and invoking the sifting property of the delta function, the sampled image spectrum becomes ∞ ∞ 1 - F P ( ω x, ω y ) = -------------- ∆x ∆y ∑ ∑ F I ( ω x – j ω xs, ω y – k ω ys ) (4.1-7) j = –∞ k = – ∞ As can be seen from Figure 4.1-2, the spectrum of the sampled image consists of the spectrum of the ideal image infinitely repeated over the frequency plane in a grid of resolution ( 2π ⁄ ∆x, 2π ⁄ ∆y ) . It should be noted that if ∆x and ∆y are chosen too large with respect to the spatial frequency limits of F I ( ω x, ω y ) , the individual spectra will overlap. A continuous image field may be obtained from the image samples of FP ( x, y ) by linear spatial interpolation or by linear spatial filtering of the sampled image. Let R ( x, y ) denote the continuous domain impulse response of an interpolation filter and R ( ω x, ω y ) represent its transfer function. Then the reconstructed image is obtained
  • 106. 94 IMAGE SAMPLING AND RECONSTRUCTION wY wX (a) Original image wY wX 2p ∆y 2p ∆x (b) Sampled image FIGURE 4.1-2. Typical sampled image spectra. by a convolution of the samples with the reconstruction filter impulse response. The reconstructed image then becomes FR ( x, y ) = F P ( x, y ) ᭺ R ( x, y ) * (4.1-8) Upon substituting for FP ( x, y ) from Eq. 4.1-2 and performing the convolution, one obtains ∞ ∞ FR ( x, y ) = ∑ ∑ F I ( j ∆x, k ∆y )R ( x – j ∆x, y – k ∆y ) (4.1-9) j = –∞ k = –∞ Thus it is seen that the impulse response function R ( x, y ) acts as a two-dimensional interpolation waveform for the image samples. The spatial frequency spectrum of the reconstructed image obtained from Eq. 4.1-8 is equal to the product of the recon- struction filter transform and the spectrum of the sampled image, F R ( ω x, ω y ) = F P ( ω x, ω y )R ( ω x, ω y ) (4.1-10) or, from Eq. 4.1-7, ∞ ∞ 1 - F R ( ω x, ω y ) = -------------- R ( ω x, ω y ) ∆x ∆y ∑ ∑ F I ( ω x – j ω xs, ω y – k ω ys ) (4.1-11) j = –∞ k = – ∞
  • 107. IMAGE SAMPLING AND RECONSTRUCTION CONCEPTS 95 It is clear from Eq. 4.1-11 that if there is no spectrum overlap and if R ( ω x, ω y ) filters out all spectra for j, k ≠ 0 , the spectrum of the reconstructed image can be made equal to the spectrum of the ideal image, and therefore the images themselves can be made identical. The first condition is met for a bandlimited image if the sampling period is chosen such that the rectangular region bounded by the image cutoff frequencies ( ω xc, ω yc ) lies within a rectangular region defined by one-half the sam- pling frequency. Hence ω xs ω ys ω xc ≤ ------- - ω yc ≤ ------- - (4.1-12a) 2 2 or, equivalently, π π ∆x ≤ -------- ∆y ≤ -------- (4.1-12b) ω xc ω yc In physical terms, the sampling period must be equal to or smaller than one-half the period of the finest detail within the image. This sampling condition is equivalent to the one-dimensional sampling theorem constraint for time-varying signals that requires a time-varying signal to be sampled at a rate of at least twice its highest-fre- quency component. If equality holds in Eq. 4.1-12, the image is said to be sampled at its Nyquist rate; if ∆x and ∆y are smaller than required by the Nyquist criterion, the image is called oversampled; and if the opposite case holds, the image is under- sampled. If the original image is sampled at a spatial rate sufficient to prevent spectral overlap in the sampled image, exact reconstruction of the ideal image can be achieved by spatial filtering the samples with an appropriate filter. For example, as shown in Figure 4.1-3, a filter with a transfer function of the form K for ω x ≤ ω xL and ω y ≤ ω yL (4.1-13a)  R ( ω x, ω y ) =   0 otherwise (4.1-13b) where K is a scaling constant, satisfies the condition of exact reconstruction if ω xL > ω xc and ω yL > ω yc . The point-spread function or impulse response of this reconstruction filter is Kω xL ω yL sin { ω xL x } sin { ω yL y } R ( x, y ) = ---------------------- -------------------------- -------------------------- - (4.1-14) π 2 ω xL x ω yL y
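Equations 4.1-12 and 4.1-14 are easy to exercise numerically. The sketch below works in one dimension for brevity (the 2D kernel of Eq. 4.1-14 is the separable product of the same sinc function in x and y): it samples a bandlimited test signal at the Nyquist spacing of Eq. 4.1-12b and reconstructs it with the interpolation sum of Eq. 4.1-9, using unit filter gain. The cutoff, test signal, and sample counts are illustrative choices.

import numpy as np

def max_sampling_interval(w_xc, w_yc):
    """Eq. 4.1-12b: the largest sample spacings that avoid spectral overlap."""
    return np.pi / w_xc, np.pi / w_yc

w_c = 2.0 * np.pi * 10.0                      # cutoff frequency: 10 cycles per unit length
dx = np.pi / w_c                              # Nyquist spacing from Eq. 4.1-12b
grid = np.arange(-200, 201) * dx              # finite sample array (truncation effects noted later)

f = lambda x: np.cos(2.0 * np.pi * 7.0 * x)   # bandlimited test signal: 7 < 10 cycles per unit
samples = f(grid)

x = np.linspace(-0.5, 0.5, 1001)
# The 1D analog of Eq. 4.1-9 with the sinc interpolation kernel of Eq. 4.1-14:
F_R = np.array([np.dot(samples, np.sinc((xi - grid) / dx)) for xi in x])
max_error = np.abs(F_R - f(x)).max()          # tiny here; it grows near the ends of a finite array

Evaluating well inside the sample array keeps the reconstruction error at round-off level; the boundary error of a truncated sinc sum is the effect discussed quantitatively in Section 4.2.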
  • 108. 96 IMAGE SAMPLING AND RECONSTRUCTION FIGURE 4.1-3. Sampled image reconstruction filters. With this filter, an image is reconstructed with an infinite sum of ( sin θ ) ⁄ θ func- tions, called sinc functions. Another type of reconstruction filter that could be employed is the cylindrical filter with a transfer function 2 2 K for ω x + ω y ≤ ω 0 (4.1-15a)  R ( ω x, ω y ) =   0 otherwise (4.1-15b) 2 2 2 provided that ω 0 > ω xc + ω yc . The impulse response for this filter is
  • 109. IMAGE SAMPLING AND RECONSTRUCTION CONCEPTS 97  2 2 J1  ω0 x + y    R ( x, y ) = 2πω 0 K --------------------------------------- - (4.1-16) 2 2 x +y where J 1 { · } is a first-order Bessel function. There are a number of reconstruction filters, or equivalently, interpolation waveforms, that could be employed to provide perfect image reconstruction. In practice, however, it is often difficult to implement optimum reconstruction filters for imaging systems. 4.1.2. Sampling Random Image Fields In the previous discussion of image sampling and reconstruction, the ideal input image field has been considered to be a deterministic function. It has been shown that if the Fourier transform of the ideal image is bandlimited, then discrete image samples taken at the Nyquist rate are sufficient to reconstruct an exact replica of the ideal image with proper sample interpolation. It will now be shown that similar results hold for sampling two-dimensional random fields. Let FI ( x, y ) denote a continuous two-dimensional stationary random process with known mean η F I and autocorrelation function R F ( τ x, τ y ) = E { F I ( x 1, y 1 )F * ( x 2, y 2 ) } I (4.1-17) I where τ x = x 1 – x 2 and τ y = y 1 – y 2 . This process is spatially sampled by a Dirac sampling array yielding ∞ ∞ F P ( x, y ) = FI ( x, y )S ( x, y ) = F I ( x, y ) ∑ ∑ δ ( x – j ∆x, y – k ∆y ) (4.1-18) j = –∞ k = –∞ The autocorrelation of the sampled process is then * RF ( τ x, τ y ) = E { F P ( x 1, y 1 ) F P ( x 2, y 2 ) } (4.1-19) P = E { F I ( x 1, y 1 ) F *( x 2, y 2 ) }S ( x 1, y 1 )S ( x 2, y 2 ) I The first term on the right-hand side of Eq. 4.1-19 is the autocorrelation of the stationary ideal image field. It should be observed that the product of the two Dirac sampling functions on the right-hand side of Eq. 4.1-19 is itself a Dirac sampling function of the form
  • 110. 98 IMAGE SAMPLING AND RECONSTRUCTION S ( x 1, y 1 )S ( x 2, y 2 ) = S ( x 1 – x 2, y 1 – y 2 ) = S ( τ x, τ y ) (4.1-20) Hence the sampled random field is also stationary with an autocorrelation function R F ( τ x, τ y ) = R F ( τ x, τ y )S ( τ x, τ y ) (4.1-21) P I Taking the two-dimensional Fourier transform of Eq. 4.1-21 yields the power spec- trum of the sampled random field. By the Fourier transform convolution theorem 1- W F ( ω x, ω y ) = -------- W F ( ω x, ω y ) ᭺ S ( ω x, ω y ) (4.1-22) P 2 I * 4π where W F I ( ω x, ω y ) and W F P ( ω x, ω y ) represent the power spectral densities of the ideal image and sampled ideal image, respectively, and S ( ω x, ω y ) is the Fourier transform of the Dirac sampling array. Then, by the derivation leading to Eq. 4.1-7, it is found that the spectrum of the sampled field can be written as ∞ ∞ 1 WF ( ω x, ω y ) = -------------- P ∆x ∆y - ∑ ∑ W F ( ω x – j ω xs, ω y – k ω ys ) I (4.1-23) j = –∞ k = –∞ Thus the sampled image power spectrum is composed of the power spectrum of the continuous ideal image field replicated over the spatial frequency domain at integer multiples of the sampling spatial frequency ( 2π ⁄ ∆x, 2π ⁄ ∆y ) . If the power spectrum of the continuous ideal image field is bandlimited such that W F I ( ω x, ω y ) = 0 for ω x > ω xc and ω y > ω yc , where ω xc and are ω yc cutoff frequencies, the individual spectra of Eq. 4.1-23 will not overlap if the spatial sampling periods are chosen such that ∆x < π ⁄ ω xc and ∆y < π ⁄ ω yc . A continuous random field F R ( x, y ) may be recon- structed from samples of the random ideal image field by the interpolation formula ∞ ∞ F R ( x, y ) = ∑ ∑ F I ( j ∆x, k ∆y)R ( x – j ∆x, y – k ∆y ) (4.1-24) j = – ∞ k = –∞ where R ( x, y ) is the deterministic interpolation function. The reconstructed field and the ideal image field can be made equivalent in the mean-square sense (5, p. 284), that is, 2 E { F I ( x, y ) – F R ( x, y ) } = 0 (4.1-25) if the Nyquist sampling criteria are met and if suitable interpolation functions, such as the sinc function or Bessel function of Eqs. 4.1-14 and 4.1-16, are utilized.
  • 111. IMAGE SAMPLING SYSTEMS 99 FIGURE 4.1-4. Spectra of a sampled noisy image. The preceding results are directly applicable to the practical problem of sampling a deterministic image field plus additive noise, which is modeled as a random field. Figure 4.1-4 shows the spectrum of a sampled noisy image. This sketch indicates a significant potential problem. The spectrum of the noise may be wider than the ideal image spectrum, and if the noise process is undersampled, its tails will overlap into the passband of the image reconstruction filter, leading to additional noise artifacts. A solution to this problem is to prefilter the noisy image before sampling to reduce the noise bandwidth. 4.2. IMAGE SAMPLING SYSTEMS In a physical image sampling system, the sampling array will be of finite extent, the sampling pulses will be of finite width, and the image may be undersampled. The consequences of nonideal sampling are explored next. As a basis for the discussion, Figure 4.2-1 illustrates a common image scanning system. In operation, a narrow light beam is scanned directly across a positive photographic transparency of an ideal image. The light passing through the transparency is collected by a condenser lens and is directed toward the surface of a photodetector. The electrical output of the photodetector is integrated over the time period during which the light beam strikes a resolution cell. In the analysis it will be assumed that the sampling is noise-free. The results developed in Section 4.1 for
  • 112. 100 IMAGE SAMPLING AND RECONSTRUCTION FIGURE 4.2-1. Image scanning system. sampling noisy images can be combined with the results developed in this section quite readily. Also, it should be noted that the analysis is easily extended to a wide class of physical image sampling systems. 4.2.1. Sampling Pulse Effects Under the assumptions stated above, the sampled image function is given by F P ( x, y ) = FI ( x, y )S ( x, y ) (4.2-1) where the sampling array J K S ( x, y ) = ∑ ∑ P ( x – j ∆x, y – k ∆y) (4.2-2) j = –J k = –K is composed of (2J + 1)(2K + 1) identical pulses P ( x, y ) arranged in a grid of spac- ing ∆x, ∆y . The symmetrical limits on the summation are chosen for notational simplicity. The sampling pulses are assumed scaled such that ∞ ∞ ∫–∞ ∫–∞ P ( x, y ) dx dy = 1 (4.2-3) For purposes of analysis, the sampling function may be assumed to be generated by a finite array of Dirac delta functions DT ( x, y ) passing through a linear filter with impulse response P ( x, y ). Thus
  • 113. IMAGE SAMPLING SYSTEMS 101 S ( x, y ) = D T ( x, y ) ᭺ P ( x, y ) * (4.2-4) where J K D T ( x, y ) = ∑ ∑ δ ( x – j ∆x, y – k ∆y) (4.2-5) j = –J k = –K Combining Eqs. 4.2-1 and 4.2-2 results in an expression for the sampled image function, J K F P ( x, y ) = ∑ ∑ F I ( j ∆x, k ∆ y)P ( x – j ∆x, y – k ∆y) (4.2-6) j = – J k = –K The spectrum of the sampled image function is given by 1 F P ( ω x, ω y ) = -------- F I ( ω x, ω y ) ᭺ [ D T ( ω x, ω y )P ( ω x, ω y ) ] - * (4.2-7) 2 4π where P ( ω x, ω y ) is the Fourier transform of P ( x, y ) . The Fourier transform of the truncated sampling array is found to be (5, p. 105)     sin  ω x ( J + 1 ) ∆ x sin  ω y ( K + 1 ) ∆ y  -- - -- - 2 2     D T ( ω x, ω y ) = --------------------------------------------- ---------------------------------------------- - (4.2-8) sin { ω x ∆x ⁄ 2 } sin { ω y ∆ y ⁄ 2 } Figure 4.2-2 depicts D T ( ω x, ω y ) . In the limit as J and K become large, the right-hand side of Eq. 4.2-7 becomes an array of Dirac delta functions. FIGURE 4.2-2. Truncated sampling train and its Fourier spectrum.
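As a quick numerical check of Eq. 4.2-8, the closed-form expression can be compared against a direct summation of the (2J + 1)(2K + 1) complex exponentials that make up the Fourier transform of the truncated Dirac array of Eq. 4.2-5; the two agree to machine precision. The values of J, K, the grid spacings, and the test frequencies below are arbitrary illustrative choices.

import numpy as np

def D_T(wx, wy, J=8, K=8, dx=1.0, dy=1.0):
    """Closed form of Eq. 4.2-8 for the truncated Dirac sampling array."""
    return (np.sin(wx * (J + 0.5) * dx) / np.sin(wx * dx / 2.0) *
            np.sin(wy * (K + 0.5) * dy) / np.sin(wy * dy / 2.0))

def D_T_direct(wx, wy, J=8, K=8, dx=1.0, dy=1.0):
    """Direct evaluation as a finite sum of phase terms, one per delta function."""
    j = np.arange(-J, J + 1)
    k = np.arange(-K, K + 1)
    return (np.exp(-1j * wx * j * dx).sum() * np.exp(-1j * wy * k * dy).sum()).real

wx, wy = 0.37, 1.21      # frequencies away from multiples of the sampling frequency 2*pi/dx
print(np.isclose(D_T(wx, wy), D_T_direct(wx, wy)))    # True

At multiples of the sampling frequency the peaks of D_T reach (2J + 1)(2K + 1), which is consistent with the statement that the spectrum approaches an array of Dirac delta functions as J and K grow.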
In an image reconstruction system, an image is reconstructed by interpolation of its samples. Ideal interpolation waveforms such as the sinc function of Eq. 4.1-14 or the Bessel function of Eq. 4.1-16 generally extend over the entire image field. If the sampling array is truncated, the reconstructed image will be in error near its boundary because the tails of the interpolation waveforms will be truncated in the vicinity of the boundary (8,9). However, the error is usually negligibly small at distances of about 8 to 10 Nyquist samples or greater from the boundary.

The actual numerical samples of an image are obtained by a spatial integration of F_S(x, y) over some finite resolution cell. In the scanning system of Figure 4.2-1, the integration is inherently performed on the photodetector surface. The image sample value of the resolution cell (j, k) may then be expressed as

$$F_S(j\,\Delta x, k\,\Delta y) = \int_{j\Delta x - A_x}^{j\Delta x + A_x} \int_{k\Delta y - A_y}^{k\Delta y + A_y} F_I(x, y)\, P(x - j\,\Delta x,\; y - k\,\Delta y)\, dx\, dy \tag{4.2-9}$$

where A_x and A_y denote the maximum dimensions of the resolution cell. It is assumed that only one sample pulse exists during the integration time of the detector. If this assumption is not valid, consideration must be given to the difficult problem of sample crosstalk. In the sampling system under discussion, the width of the resolution cell may be larger than the sample spacing. Thus the model provides for sequentially overlapped samples in time. By a simple change of variables, Eq. 4.2-9 may be rewritten as

$$F_S(j\,\Delta x, k\,\Delta y) = \int_{-A_x}^{A_x} \int_{-A_y}^{A_y} F_I(j\,\Delta x - \alpha,\; k\,\Delta y - \beta)\, P(-\alpha, -\beta)\, d\alpha\, d\beta \tag{4.2-10}$$

Because only a single sampling pulse is assumed to occur during the integration period, the limits of Eq. 4.2-10 can be extended infinitely. In this formulation, Eq. 4.2-10 is recognized to be equivalent to a convolution of the ideal continuous image F_I(x, y) with an impulse response function P(−x, −y) with reversed coordinates, followed by sampling over a finite area with Dirac delta functions. Thus, neglecting the effects of the finite size of the sampling array, the model for finite extent pulse sampling becomes

$$F_S(j\,\Delta x, k\,\Delta y) = \bigl[ F_I(x, y) \circledast P(-x, -y) \bigr]\, \delta(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.2-11}$$

In most sampling systems, the sampling pulse is symmetric, so that P(−x, −y) = P(x, y).

Equation 4.2-11 provides a simple relation that is useful in assessing the effect of finite extent pulse sampling. If the ideal image is bandlimited and A_x and A_y satisfy the Nyquist criterion, the finite extent of the sample pulse represents an equivalent linear spatial degradation (an image blur) that occurs before ideal sampling. Part 4 considers methods of compensating for this degradation. A finite-extent sampling pulse is not always a detriment, however. Consider the situation in which
the ideal image is insufficiently bandlimited so that it is undersampled. The finite-extent pulse, in effect, provides a low-pass filtering of the ideal image, which, in turn, serves to limit its spatial frequency content, and hence to minimize aliasing error.

4.2.2. Aliasing Effects

To achieve perfect image reconstruction in a sampled imaging system, it is necessary to bandlimit the image to be sampled, spatially sample the image at the Nyquist or higher rate, and properly interpolate the image samples. Sample interpolation is considered in the next section; an analysis is presented here of the effect of undersampling an image.

If there is spectral overlap resulting from undersampling, as indicated by the shaded regions in Figure 4.2-3, spurious spatial frequency components will be introduced into the reconstruction. The effect is called an aliasing error (10,11). Aliasing effects in an actual image are shown in Figure 4.2-4. Spatial undersampling of the image creates artificial low-spatial-frequency components in the reconstruction. In the field of optics, aliasing errors are called moiré patterns.

From Eq. 4.1-7 the spectrum of a sampled image can be written in the form

$$F_P(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \bigl[ F_I(\omega_x, \omega_y) + F_Q(\omega_x, \omega_y) \bigr] \tag{4.2-12}$$

FIGURE 4.2-3. Spectra of undersampled two-dimensional function.
FIGURE 4.2-4. Example of aliasing error in a sampled image: (a) original image; (b) sampled image.
where F_I(ωx, ωy) represents the spectrum of the original image sampled at period (Δx, Δy). The term

$$F_Q(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.2-13}$$

for j ≠ 0 and k ≠ 0 describes the spectrum of the higher-order components of the sampled image repeated over spatial frequencies ωxs = 2π/Δx and ωys = 2π/Δy.

If there were no spectral foldover, optimal interpolation of the sampled image components could be obtained by passing the sampled image through a zonal low-pass filter defined by

$$R(\omega_x, \omega_y) = K \quad \text{for } |\omega_x| \le \omega_{xs}/2 \text{ and } |\omega_y| \le \omega_{ys}/2 \tag{4.2-14a}$$
$$R(\omega_x, \omega_y) = 0 \quad \text{otherwise} \tag{4.2-14b}$$

where K is a scaling constant. Applying this interpolation strategy to an undersampled image yields a reconstructed image field

$$F_R(x, y) = F_I(x, y) + A(x, y) \tag{4.2-15}$$

where

$$A(x, y) = \frac{1}{4\pi^2} \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} F_Q(\omega_x, \omega_y) \exp\{ i(\omega_x x + \omega_y y) \}\, d\omega_x\, d\omega_y \tag{4.2-16}$$

represents the aliasing error artifact in the reconstructed image. The factor K has absorbed the amplitude scaling factors.

FIGURE 4.2-5. Reconstructed image spectrum.

Figure 4.2-5 shows the reconstructed image
spectrum that illustrates the spectral foldover in the zonal low-pass filter passband. The aliasing error component of Eq. 4.2-16 can be reduced substantially by low-pass filtering before sampling to attenuate the spectral foldover.

FIGURE 4.2-6. Model for analysis of aliasing effect.

Figure 4.2-6 shows a model for the quantitative analysis of aliasing effects. In this model, the ideal image F_I(x, y) is assumed to be a sample of a two-dimensional random process with known power-spectral density W_{F_I}(ωx, ωy). The ideal image is linearly filtered by a presampling spatial filter with a transfer function H(ωx, ωy). This filter is assumed to be a low-pass type of filter with a smooth attenuation of high spatial frequencies (i.e., not a zonal low-pass filter with a sharp cutoff). The filtered image is then spatially sampled by an ideal Dirac delta function sampler at a resolution Δx, Δy. Next, a reconstruction filter interpolates the image samples to produce a replica of the ideal image.

From Eq. 1.4-27, the power spectral density at the presampling filter output is found to be

$$W_{F_O}(\omega_x, \omega_y) = \bigl| H(\omega_x, \omega_y) \bigr|^2\, W_{F_I}(\omega_x, \omega_y) \tag{4.2-17}$$

and the Fourier spectrum of the sampled image field is

$$W_{F_P}(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y} \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} W_{F_O}(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.2-18}$$

Figure 4.2-7 shows the sampled image power spectral density and the foldover aliasing spectral density from the first sideband with and without presampling low-pass filtering.

It is desirable to isolate the undersampling effect from the effect of improper reconstruction. Therefore, assume for this analysis that the reconstruction filter R(ωx, ωy) is an optimal filter of the form given in Eq. 4.2-14. The energy passing through the reconstruction filter for j = k = 0 is then

$$E_R = \int_{-\omega_{xs}/2}^{\omega_{xs}/2} \int_{-\omega_{ys}/2}^{\omega_{ys}/2} W_{F_I}(\omega_x, \omega_y)\, \bigl| H(\omega_x, \omega_y) \bigr|^2\, d\omega_x\, d\omega_y \tag{4.2-19}$$
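As an illustration of this model, the following NumPy sketch (not from the text) evaluates Eqs. 4.2-17 to 4.2-19 in one dimension for an assumed Gaussian presampling filter and the power-spectrum model introduced later in Eq. 4.2-30, and reports the resolution and aliasing error measures defined below in Eqs. 4.2-21 and 4.2-24. All parameter values are illustrative assumptions.

```python
import numpy as np

# Sketch of the aliasing model of Figure 4.2-6 in one dimension (assumptions:
# the spectral model of Eq. 4.2-30 with m = 2, a Gaussian presampling filter).
A, m, wc = 1.0, 2, 1.0           # image spectrum parameters (assumed)
ws = 4.0 * wc                    # sampling frequency (assumed)
sigma = 2.0 / (ws / 2)           # presampling filter spread (assumed)

w = np.linspace(-3 * ws, 3 * ws, 20001)
dw = w[1] - w[0]
W_FI = A / (1.0 + (np.abs(w) / wc) ** (2 * m))        # ideal image power spectrum
H = np.exp(-(w ** 2) * sigma ** 2 / 2)                 # presampling filter transfer function
W_FO = (H ** 2) * W_FI                                 # Eq. 4.2-17

band = np.abs(w) <= ws / 2
E_R = np.sum(W_FO[band]) * dw       # energy in the reconstruction band (Eq. 4.2-19)
E_RM = np.sum(W_FI[band]) * dw      # maximum attainable in-band energy (Eq. 4.2-20)
E_O = np.sum(W_FO) * dw             # total presampling filter output energy (Eq. 4.2-23)

print("resolution error  (E_RM - E_R)/E_RM =", (E_RM - E_R) / E_RM)
print("aliasing error    (E_O - E_R)/E_O   =", (E_O - E_R) / E_O)
```

Tightening the presampling filter (larger sigma) drives the aliasing error down at the cost of a larger resolution error, which is the trade-off plotted in Figure 4.2-8.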
  • 119. IMAGE SAMPLING SYSTEMS 107 FIGURE 4.2-7. Effect of presampling filtering on a sampled image. Ideally, the presampling filter should be a low-pass zonal filter with a transfer func- tion identical to that of the reconstruction filter as given by Eq. 4.2-14. In this case, the sampled image energy would assume the maximum value ω xs ⁄ 2 ω ys ⁄ 2 E RM = ∫– ω ∫ xs ⁄ 2 – ω ys ⁄ 2 W F ( ω x, ω y ) dω x dω y I (4.2-20) Image resolution degradation resulting from the presampling filter may then be measured by the ratio E RM – E R E R = ----------------------- (4.2-21) ERM The aliasing error in a sampled image system is generally measured in terms of the energy, from higher-order sidebands, that folds over into the passband of the reconstruction filter. Assume, for simplicity, that the sampling rate is sufficient so that the spectral foldover from spectra centered at ( ± j ω xs ⁄ 2, ± k ω ys ⁄ 2 ) is negligible for j ≥ 2 and k ≥ 2 . The total aliasing error energy, as indicated by the doubly cross- hatched region of Figure 4.2-7, is then EA = E O – ER (4.2-22) where ∞ ∞ 2 EO = ∫– ∞ ∫– ∞ W F ( ω x, ω y ) H ( ω x, ω y ) I dω x dω y (4.2-23)
denotes the energy of the output of the presampling filter. The aliasing error is defined as (10)

$$\mathcal{E}_A = \frac{E_A}{E_O} \tag{4.2-24}$$

Aliasing error can be reduced by attenuating high spatial frequencies of F_I(x, y) with the presampling filter. However, any attenuation within the passband of the reconstruction filter represents a loss of resolution of the sampled image. As a result, there is a trade-off between sampled image resolution and aliasing error.

Consideration is now given to the aliasing error versus resolution performance of several practical types of presampling filters. Perhaps the simplest means of spatially filtering an image formed by incoherent light is to pass the image through a lens with a restricted aperture. Spatial filtering can then be achieved by controlling the degree of lens misfocus. Figure 11.2-2 is a plot of the optical transfer function of a circular lens as a function of the degree of lens misfocus. Even a perfectly focused lens produces some blurring because of the diffraction limit of its aperture. The transfer function of a diffraction-limited circular lens of diameter d is given by (12, p. 83)

$$H(\omega) = \frac{2}{\pi} \left[ \arccos\!\left( \frac{\omega}{\omega_0} \right) - \frac{\omega}{\omega_0} \sqrt{ 1 - \left( \frac{\omega}{\omega_0} \right)^{2} } \right] \quad \text{for } 0 \le \omega \le \omega_0 \tag{4.2-25a}$$
$$H(\omega) = 0 \quad \text{for } \omega > \omega_0 \tag{4.2-25b}$$

where ω₀ = πd/R and R is the distance from the lens to the focal plane.

In Section 4.2.1, it was noted that sampling with a finite-extent sampling pulse is equivalent to ideal sampling of an image that has been passed through a spatial filter whose impulse response is equal to the pulse shape of the sampling pulse with reversed coordinates. Thus the sampling pulse may be utilized to perform presampling filtering. A common pulse shape is the rectangular pulse

$$P(x, y) = \frac{1}{T^2} \quad \text{for } |x|, |y| \le \frac{T}{2} \tag{4.2-26a}$$
$$P(x, y) = 0 \quad \text{for } |x|, |y| > \frac{T}{2} \tag{4.2-26b}$$

obtained with an incoherent light imaging system of a scanning microdensitometer. The transfer function for a square scanning spot is
$$P(\omega_x, \omega_y) = \frac{\sin\{\omega_x T / 2\}}{\omega_x T / 2} \cdot \frac{\sin\{\omega_y T / 2\}}{\omega_y T / 2} \tag{4.2-27}$$

Cathode ray tube displays produce display spots with a two-dimensional Gaussian shape of the form

$$P(x, y) = \frac{1}{2\pi\sigma_w^2} \exp\left\{ -\frac{x^2 + y^2}{2\sigma_w^2} \right\} \tag{4.2-28}$$

where σ_w is a measure of the spot spread. The equivalent transfer function of the Gaussian-shaped scanning spot is

$$P(\omega_x, \omega_y) = \exp\left\{ -\frac{(\omega_x^2 + \omega_y^2)\,\sigma_w^2}{2} \right\} \tag{4.2-29}$$

Examples of the aliasing error–resolution trade-offs for a diffraction-limited aperture, a square sampling spot, and a Gaussian-shaped spot are presented in Figure 4.2-8 as a function of the parameter ω₀. The square pulse width is set at T = 2π/ω₀, so that the first zero of the sinc function coincides with the lens cutoff frequency. The spread of the Gaussian spot is set at σ_w = 2/ω₀, corresponding to two standard deviation units in cross section. In this example, the input image spectrum is modeled as

FIGURE 4.2-8. Aliasing error and resolution error obtained with different types of prefiltering.
$$W_{F_I}(\omega_x, \omega_y) = \frac{A}{1 + (\omega / \omega_c)^{2m}} \tag{4.2-30}$$

where A is an amplitude constant, m is an integer governing the rate of falloff of the Fourier spectrum, and ωc is the spatial frequency at the half-amplitude point. The curves of Figure 4.2-8 indicate that the Gaussian spot and square spot scanning prefilters provide about the same results, while the diffraction-limited lens yields a somewhat greater loss in resolution for the same aliasing error level. A defocused lens would give even poorer results.

4.3. IMAGE RECONSTRUCTION SYSTEMS

In Section 4.1 the conditions for exact image reconstruction were stated: the original image must be spatially sampled at a rate of at least twice its highest spatial frequency, and the reconstruction filter, or equivalent interpolator, must be designed to pass the spectral component at j = 0, k = 0 without distortion and reject all spectra for which j, k ≠ 0. With physical image reconstruction systems, these conditions are impossible to achieve exactly. Consideration is now given to the effects of using imperfect reconstruction functions.

4.3.1. Implementation Techniques

In most digital image processing systems, electrical image samples are sequentially output from the processor in a normal raster scan fashion. A continuous image is generated from these electrical samples by driving an optical display such as a cathode ray tube (CRT) with the intensity of each point set proportional to the image sample amplitude. The light array on the CRT can then be imaged onto a ground-glass screen for viewing or onto photographic film for recording with a light projection system incorporating an incoherent spatial filter possessing a desired optical transfer function. Optimal transfer functions with a perfectly flat passband over the image spectrum and a sharp cutoff to zero outside the spectrum cannot be physically implemented.

The most common means of image reconstruction is by use of electro-optical techniques. For example, image reconstruction can be performed quite simply by electrically defocusing the writing spot of a CRT display. The drawback of this technique is the difficulty of accurately controlling the spot shape over the image field. In a scanning microdensitometer, image reconstruction is usually accomplished by projecting a rectangularly shaped spot of light onto photographic film. Generally, the spot size is set at the same size as the sample spacing to fill the image field completely. The resulting interpolation is simple to perform, but not optimal. If a small writing spot can be achieved with a CRT display or a projected light display, it is possible to approximately synthesize any desired interpolation by subscanning a resolution cell, as shown in Figure 4.3-1.
  • 123. IMAGE RECONSTRUCTION SYSTEMS 111 FIGURE 4.3-1. Image reconstruction by subscanning. The following subsections introduce several one- and two-dimensional interpola- tion functions and discuss their theoretical performance. Chapter 13 presents meth- ods of digitally implementing image reconstruction systems. FIGURE 4.3-2. One-dimensional interpolation waveforms.
4.3.2. Interpolation Functions

Figure 4.3-2 illustrates several one-dimensional interpolation functions. As stated previously, the sinc function provides an exact reconstruction, but it cannot be physically generated by an incoherent optical filtering system. It is possible to approximate the sinc function by truncating it and then performing subscanning (Figure 4.3-1). The simplest interpolation waveform is the square pulse function, which results in a zero-order interpolation of the samples. It is defined mathematically as

$$R_0(x) = 1 \quad \text{for } -\tfrac{1}{2} \le x \le \tfrac{1}{2} \tag{4.3-1}$$

and zero otherwise, where for notational simplicity, the sample spacing is assumed to be of unit dimension. A triangle function, defined as

$$R_1(x) = x + 1 \quad \text{for } -1 \le x \le 0 \tag{4.3-2a}$$
$$R_1(x) = 1 - x \quad \text{for } 0 < x \le 1 \tag{4.3-2b}$$

FIGURE 4.3-3. One-dimensional interpolation.
provides the first-order linear sample interpolation with triangular interpolation waveforms. Figure 4.3-3 illustrates one-dimensional interpolation using sinc, square, and triangle functions.

The triangle function may be considered to be the result of convolving a square function with itself. Convolution of the triangle function with the square function yields a bell-shaped interpolation waveform (Figure 4.3-2d). It is defined as

$$R_2(x) = \tfrac{1}{2}\bigl( x + \tfrac{3}{2} \bigr)^2 \quad \text{for } -\tfrac{3}{2} \le x \le -\tfrac{1}{2} \tag{4.3-3a}$$
$$R_2(x) = \tfrac{3}{4} - x^2 \quad \text{for } -\tfrac{1}{2} < x \le \tfrac{1}{2} \tag{4.3-3b}$$
$$R_2(x) = \tfrac{1}{2}\bigl( x - \tfrac{3}{2} \bigr)^2 \quad \text{for } \tfrac{1}{2} < x \le \tfrac{3}{2} \tag{4.3-3c}$$

This process quickly converges to the Gaussian-shaped waveform of Figure 4.3-2f. Convolving the bell-shaped waveform with the square function results in a third-order polynomial function called a cubic B-spline (13,14). It is defined mathematically as

$$R_3(x) = \tfrac{2}{3} + \tfrac{1}{2}|x|^3 - |x|^2 \quad \text{for } 0 \le |x| \le 1 \tag{4.3-4a}$$
$$R_3(x) = \tfrac{1}{6}\bigl( 2 - |x| \bigr)^3 \quad \text{for } 1 < |x| \le 2 \tag{4.3-4b}$$

The cubic B-spline is a particularly attractive candidate for image interpolation because of its properties of continuity and smoothness at the sample points. It can be shown by direct differentiation of Eq. 4.3-4 that R₃(x) is continuous in its first and second derivatives at the sample points.

As mentioned earlier, the sinc function can be approximated by truncating its tails. Typically, this is done over a four-sample interval. The problem with this approach is that the slope discontinuity at the ends of the waveform leads to amplitude ripples in a reconstructed function. This problem can be eliminated by generating a cubic convolution function (15,16), which forces the slope of the ends of the interpolation to be zero. The cubic convolution interpolation function can be expressed in the following general form:

$$R_c(x) = A_1 |x|^3 + B_1 |x|^2 + C_1 |x| + D_1 \quad \text{for } 0 \le |x| \le 1 \tag{4.3-5a}$$
$$R_c(x) = A_2 |x|^3 + B_2 |x|^2 + C_2 |x| + D_2 \quad \text{for } 1 < |x| \le 2 \tag{4.3-5b}$$
where A_i, B_i, C_i, D_i are weighting factors. The weighting factors are determined by satisfying two sets of extraneous conditions:

1. R_c(x) = 1 at x = 0, and R_c(x) = 0 at x = 1, 2.
2. The first-order derivative R′_c(x) = 0 at x = 0, 1, 2.

These conditions result in seven equations for the eight unknowns and lead to the parametric expression

$$R_c(x) = (a + 2)|x|^3 - (a + 3)|x|^2 + 1 \quad \text{for } 0 \le |x| \le 1 \tag{4.3-6a}$$
$$R_c(x) = a|x|^3 - 5a|x|^2 + 8a|x| - 4a \quad \text{for } 1 < |x| \le 2 \tag{4.3-6b}$$

where a ≡ A₂ of Eq. 4.3-5 is the remaining unknown weighting factor. Rifman (15) and Bernstein (16) have set a = −1, which causes R_c(x) to have the same slope, −1, at x = 1 as the sinc function. Keys (17) has proposed setting a = −1/2, which provides an interpolation function that approximates the original unsampled image to as high a degree as possible in the sense of a power series expansion. The factor a in Eq. 4.3-6 can be used as a tuning parameter to obtain a best visual interpolation (18,19).

Table 4.3-1 defines several orthogonally separable two-dimensional interpolation functions for which R(x, y) = R(x)R(y). The separable square function has a square peg shape. The separable triangle function has the shape of a pyramid. Using a triangle interpolation function for one-dimensional interpolation is equivalent to linearly connecting adjacent sample peaks, as shown in Figure 4.3-3c. The extension to two dimensions does not hold because, in general, it is not possible to fit a plane to four adjacent samples. One approach, illustrated in Figure 4.3-4a, is to perform a planar fit in a piecewise fashion. In region I of Figure 4.3-4a, points are linearly interpolated in the plane defined by pixels A, B, C, while in region II, interpolation is performed in the plane defined by pixels B, C, D. A computationally simpler method, called bilinear interpolation, is described in Figure 4.3-4b. Bilinear interpolation is performed by linearly interpolating points along separable orthogonal coordinates of the continuous image field. The resultant interpolated surface of Figure 4.3-4b, connecting pixels A, B, C, D, is generally nonplanar. Chapter 13 shows that bilinear interpolation is equivalent to interpolation with a pyramid function.
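The following NumPy sketch (not part of the text) implements the one-dimensional kernels of Eqs. 4.3-1, 4.3-2, 4.3-4 and 4.3-6 and applies them to a short sample sequence according to the interpolation sum of Eq. 4.3-7; the sample values and the choice a = −1/2 are illustrative assumptions. Because the kernels are separable, the same routines applied along rows and then along columns yield the two-dimensional interpolators of Table 4.3-1, and the triangle case reproduces bilinear (pyramid) interpolation.

```python
import numpy as np

# Sketch of the one-dimensional interpolation kernels of Eqs. 4.3-1 to 4.3-6
# (unit sample spacing) and their use for resampling; parameter names are mine.
def R0(x):                     # zero-order (square) kernel, Eq. 4.3-1
    return np.where(np.abs(x) <= 0.5, 1.0, 0.0)

def R1(x):                     # first-order (triangle) kernel, Eq. 4.3-2
    ax = np.abs(x)
    return np.where(ax <= 1.0, 1.0 - ax, 0.0)

def R3(x):                     # cubic B-spline, Eq. 4.3-4
    ax = np.abs(x)
    out = np.where(ax <= 1.0, 2.0 / 3.0 + 0.5 * ax ** 3 - ax ** 2, 0.0)
    return np.where((ax > 1.0) & (ax <= 2.0), (2.0 - ax) ** 3 / 6.0, out)

def Rc(x, a=-0.5):             # cubic convolution, Eq. 4.3-6 (a = -1/2 per Keys)
    ax = np.abs(x)
    inner = (a + 2.0) * ax ** 3 - (a + 3.0) * ax ** 2 + 1.0
    outer = a * ax ** 3 - 5.0 * a * ax ** 2 + 8.0 * a * ax - 4.0 * a
    return np.where(ax <= 1.0, inner, np.where(ax <= 2.0, outer, 0.0))

def interpolate(samples, x, kernel):
    # F_R(x) = sum_j F(j) R(x - j), the one-dimensional form of Eq. 4.3-7
    j = np.arange(len(samples))
    return np.sum(samples[None, :] * kernel(x[:, None] - j[None, :]), axis=1)

samples = np.array([0.0, 1.0, 0.5, 0.8, 0.2, 0.0])
x = np.linspace(1.0, 4.0, 13)
for name, k in [("square", R0), ("triangle", R1), ("B-spline", R3), ("cubic conv.", Rc)]:
    print(f"{name:12s}", np.round(interpolate(samples, x, k), 3))
```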
  • 127. IMAGE RECONSTRUCTION SYSTEMS 115 TABLE 4.3-1. Two-Dimensional Interpolation Functions Function Definition Separable sinc 4 sin { 2πx ⁄ T x } sin { 2πy ⁄ T y } 2π R ( x, y ) = ---------- --------------------------------- --------------------------------- - T x = ------- - T x T y 2πx ⁄ T x 2πy ⁄ T y ω xs 2π T y = ------- - ω ys 1 ω x ≤ ω xs , ω y ≤ ω ys ᏾ ( ω x, ω y ) =  0 otherwise Separable square  1 Tx Ty  ---------- - x ≤ ---- , - y ≤ ---- - R 0 ( x, y ) =  T x T y 2 2   0 otherwise sin { ω x T x ⁄ 2 } sin { ω y T y ⁄ 2 } ᏾ 0 ( ω x ,ω y ) = ------------------------------------------------------------------- - ( ωx Tx ⁄ 2 ) ( ω y Ty ⁄ 2 ) Separable triangle R 1 ( x, y ) = R 0 ( x, y ) ᭺ R0 ( x, y ) * 2 ᏾ 1 ( ω x, ω y ) = ᏾ 0 ( ω x, ω y ) Separable bell R 2 ( x, y ) = R 0 ( x, y ) ᭺ R1 ( x, y ) * 3 ᏾ 2 ( ω x, ω y ) = ᏾ 0 ( ω x, ω y ) Separable cubic B-spline R 3 ( x, y ) = R 0 ( x, y ) ᭺ R2 ( x, y ) * 4 ᏾ 3 ( ω x, ω y ) = ᏾ 0 ( ω x, ω y ) Gaussian 2 –1  x2 + y2  R ( x, y ) = [ 2πσ w ] exp  – ----------------   2σ 3  w 2 2 2  σw ( ωx + ωy )  ᏾ ( ω x, ω y ) = exp  – -------------------------------   2  4.3.3. Effect of Imperfect Reconstruction Filters The performance of practical image reconstruction systems will now be analyzed. It will be assumed that the input to the image reconstruction system is composed of samples of an ideal image obtained by sampling with a finite array of Dirac samples at the Nyquist rate. From Eq. 4.1-9 the reconstructed image is found to be ∞ ∞ F R ( x, y ) = ∑ ∑ F I ( j ∆x, k ∆y)R ( x – j ∆x, y – k ∆y) (4.3-7) j = –∞ k = –∞
FIGURE 4.3-4. Two-dimensional linear interpolation.

where R(x, y) is the two-dimensional interpolation function of the image reconstruction system. Ideally, the reconstructed image would be the exact replica of the ideal image as obtained from Eq. 4.1-9. That is,

$$\hat{F}_R(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\,\Delta x, k\,\Delta y)\, R_I(x - j\,\Delta x,\; y - k\,\Delta y) \tag{4.3-8}$$

where R_I(x, y) represents an optimum interpolation function such as given by Eq. 4.1-14 or 4.1-16. The reconstruction error over the bounds of the sampled image is then

$$E_D(x, y) = \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(j\,\Delta x, k\,\Delta y) \bigl[ R(x - j\,\Delta x,\; y - k\,\Delta y) - R_I(x - j\,\Delta x,\; y - k\,\Delta y) \bigr] \tag{4.3-9}$$

There are two contributors to the reconstruction error: (1) the physical system interpolation function R(x, y) may differ from the ideal interpolation function R_I(x, y), and (2) the finite bounds of the reconstruction, which cause truncation of the interpolation functions at the boundary. In most sampled imaging systems, the boundary reconstruction error is ignored because the error generally becomes negligible at distances of a few samples from the boundary. The utilization of nonideal interpolation functions leads to a potential loss of image resolution and to the introduction of high-spatial-frequency artifacts.

The effect of an imperfect reconstruction filter may be analyzed conveniently by examination of the frequency spectrum of a reconstructed image, as derived in Eq. 4.1-11:

$$F_R(\omega_x, \omega_y) = \frac{1}{\Delta x\, \Delta y}\, R(\omega_x, \omega_y) \sum_{j=-\infty}^{\infty} \sum_{k=-\infty}^{\infty} F_I(\omega_x - j\,\omega_{xs},\; \omega_y - k\,\omega_{ys}) \tag{4.3-10}$$
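As a one-dimensional illustration of Eq. 4.3-10 (a sketch, not from the text), the following NumPy fragment weights the replicated spectrum of an assumed Gaussian-shaped baseband image spectrum with two reconstruction filters: an ideal zonal filter and the zero-order hold transfer function of Eq. 4.2-27. The out-of-band energy fraction reported for the zero-order case corresponds to the high-spatial-frequency artifacts discussed below.

```python
import numpy as np

# Sketch of Eq. 4.3-10 in one dimension: the reconstructed spectrum equals the
# replicated sample spectrum weighted by the reconstruction filter R(w).
# The baseband spectrum model and all parameter values are assumptions.
dx = 1.0
ws = 2 * np.pi / dx
w = np.linspace(-3 * ws, 3 * ws, 24001)
dw = w[1] - w[0]

F_I = lambda w: np.exp(-(w / (0.3 * ws)) ** 2)        # assumed baseband image spectrum
replicated = sum(F_I(w - j * ws) for j in range(-3, 4)) / dx

R_ideal = np.where(np.abs(w) <= ws / 2, dx, 0.0)       # zonal (sinc-interpolation) filter
R_zero = dx * np.sinc(w * dx / (2 * np.pi))            # zero-order hold: sin(w dx/2)/(w dx/2)

for name, R in [("ideal sinc", R_ideal), ("zero-order", R_zero)]:
    F_R = R * replicated                               # one-dimensional Eq. 4.3-10
    band = np.abs(w) <= ws / 2
    E_total = np.sum(np.abs(F_R) ** 2) * dw
    E_band = np.sum(np.abs(F_R[band]) ** 2) * dw
    print(f"{name:10s}  out-of-band energy fraction = {1.0 - E_band / E_total:.3f}")
```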
  • 129. IMAGE RECONSTRUCTION SYSTEMS 117 Ideally, R ( ω x, ω y ) should select the spectral component for j = 0, k = 0 with uniform attenuation at all spatial frequencies and should reject all other spectral components. An imperfect filter may attenuate the frequency components of the zero-order spec- tra, causing a loss of image resolution, and may also permit higher-order spectral modes to contribute to the restoration, and therefore introduce distortion in the resto- ration. Figure 4.3-5 provides a graphic example of the effect of an imperfect image reconstruction filter. A typical cross section of a sampled image is shown in Figure 4.3-5a. With an ideal reconstruction filter employing sinc functions for interpola- tion, the central image spectrum is extracted and all sidebands are rejected, as shown in Figure 4.3-5c. Figure 4.3-5d is a plot of the transfer function for a zero-order interpolation reconstruction filter in which the reconstructed pixel amplitudes over the pixel sample area are set at the sample value. The resulting spectrum shown in Figure 4.3-5e exhibits distortion from attenuation of the central spectral mode and spurious high-frequency signal components. Following the analysis leading to Eq. 4.2-21, the resolution loss resulting from the use of a nonideal reconstruction function R(x, y) may be specified quantitatively as E RM – E R E R = ----------------------- (4.3-11) ERM FIGURE 4.3-5. Power spectra for perfect and imperfect reconstruction: (a) Sampled image input W F ( ω x, 0 ) ; (b) sinc function reconstruction filter transfer function R ( ω x, 0 ) ; (c) sinc I function interpolator output W F ( ω x, 0 ) ; (d) zero-order interpolation reconstruction filter O transfer function R ( ω x, 0 ) ; (e) zero-order interpolator output W F ( ω x, 0 ). O
  • 130. 118 IMAGE SAMPLING AND RECONSTRUCTION where ω xs ⁄ 2 ω ys ⁄ 2 2 ER = ∫– ω xs ∫ ⁄ 2 – ω ys ⁄ 2 W F ( ω x, ω y ) H ( ω x, w y ) I dω x dω y (4.3-12) represents the actual interpolated image energy in the Nyquist sampling band limits, and ω xs ⁄ 2 ω ys ⁄ 2 E RM = ∫– ω xs ∫ ⁄ 2 – ω ys ⁄ 2 W F ( ω x, ω y ) dω x dω y I (4.3-13) is the ideal interpolated image energy. The interpolation error attributable to high- spatial-frequency artifacts may be defined as EH E H = ------ - (4.3-14) ET where ∞ ∞ 2 ET = ∫– ∞ ∫– ∞ W F ( ω x, ω y ) H ( ω x, ω y ) I dω x dω y (4.3-15) denotes the total energy of the interpolated image and EH = ET – ER (4.3-16) is that portion of the interpolated image energy lying outside the Nyquist band lim- its. Table 4.3-2 lists the resolution error and interpolation error obtained with several separable two-dimensional interpolation functions. In this example, the power spec- tral density of the ideal image is assumed to be of the form  ω- – ω 2 ω 2 2 2 W F ( ω x, ω y ) = I  2 s ----- for ω ≤  -----  2 s - (4.3-17) and zero elsewhere. The interpolation error contribution of highest-order components, j 1, j2 > 2 , is assumed negligible. The table indicates that zero-order
  • 131. REFERENCES 119 TABLE 4.3-2. Interpolation Error and Resolution Error for Various Separable Two- Dimensional Interpolation Functions Percent Percent Resoluton Error Interpolation Error Function ER EH Sinc 0.0 0.0 Square 26.9 15.7 Triangle 44.0 3.7 Bell 55.4 1.1 Cubic B-spline 63.2 0.3 3T 38.6 10.3 Gaussian σ w = ----- - 8 54.6 2.0 Gaussian σ w = T -- - 2 5T 66.7 0.3 Gaussian σ w = ----- - 8 interpolation with a square interpolation function results in a significant amount of resolution error. Interpolation error reduces significantly for higher-order convolu- tional interpolation functions, but at the expense of resolution error. REFERENCES 1. F. T. Whittaker, “On the Functions Which Are Represented by the Expansions of the Interpolation Theory,” Proc. Royal Society of Edinburgh, A35, 1915, 181–194. 2. C. E. Shannon, “Communication in the Presence of Noise,” Proc. IRE, 37, 1, January 1949, 10–21. 3. H. J. Landa, “Sampling, Data Transmission, and the Nyquist Rate,” Proc. IEEE, 55, 10, October 1967, 1701–1706. 4. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996. 5. A. Papoulis, Systems and Transforms with Applications in Optics, McGraw-Hill, New York, 1966. 6. S. P. Lloyd, “A Sampling Theorem for Stationary (Wide Sense) Stochastic Processes,” Trans. American Mathematical Society, 92, 1, July 1959, 1–12. 7. H. S. Shapiro and R. A. Silverman, “Alias-Free Sampling of Random Noise,” J. SIAM, 8, 2, June 1960, 225–248. 8. J. L. Brown, Jr., “Bounds for Truncation Error in Sampling Expansions of Band-Limited Signals,” IEEE Trans. Information Theory, IT-15, 4, July 1969, 440–444. 9. H. D. Helms and J. B. Thomas, “Truncation Error of Sampling Theory Expansions,” Proc. IRE, 50, 2, February 1962, 179–184. 10. J. J. Downing, “Data Sampling and Pulse Amplitude Modulation,” in Aerospace Teleme- try, H. L. Stiltz, Ed., Prentice Hall, Englewood Cliffs, NJ, 1961.
  • 132. 120 IMAGE SAMPLING AND RECONSTRUCTION 11. D. G. Childers, “Study and Experimental Investigation on Sampling Rate and Aliasing in Time Division Telemetry Systems,” IRE Trans. Space Electronics and Telemetry, SET-8, December 1962, 267–283. 12. E. L. O'Neill, Introduction to Statistical Optics, Addison-Wesley, Reading, MA, 1963. 13. H. S. Hou and H. C. Andrews, “Cubic Splines for Image Interpolation and Digital Filter- ing,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-26, 6, December 1978, 508–517. 14. T. N. E. Greville, “Introduction to Spline Functions,” in Theory and Applications of Spline Functions, T. N. E. Greville, Ed., Academic Press, New York, 1969. 15. S. S. Rifman, “Digital Rectification of ERTS Multispectral Imagery,” Proc. Symposium on Significant Results Obtained from ERTS-1 (NASA SP-327), I, Sec. B, 1973, 1131– 1142. 16. R. Bernstein, “Digital Image Processing of Earth Observation Sensor Data,” IBM J. Research and Development, 20, 1976, 40–57. 17. R. G. Keys, “Cubic Convolution Interpolation for Digital Image Processing,” IEEE Trans. Acoustics, Speech, and Signal Processing, AASP-29, 6, December 1981, 1153– 1160. 18. K. W. Simon, “Digital Image Reconstruction and Resampling of Landsat Imagery,” Proc. Symposium on Machine Processing of Remotely Sensed Data, Purdue University, Lafayette, IN, IEEE 75, CH 1009-0-C, June 1975, 3A-1–3A-11. 19. S. K. Park and R. A. Schowengerdt, “Image Reconstruction by Parametric Cubic Convo- lution,” Computer Vision, Graphics, and Image Processing, 23, 3, September 1983, 258– 272.
  • 133. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 5 DISCRETE IMAGE MATHEMATICAL CHARACTERIZATION Chapter 1 presented a mathematical characterization of continuous image fields. This chapter develops a vector-space algebra formalism for representing discrete image fields from a deterministic and statistical viewpoint. Appendix 1 presents a summary of vector-space algebra concepts. 5.1. VECTOR-SPACE IMAGE REPRESENTATION In Chapter 1 a generalized continuous image function F(x, y, t) was selected to represent the luminance, tristimulus value, or some other appropriate measure of a physical imaging system. Image sampling techniques, discussed in Chapter 4, indicated means by which a discrete array F(j, k) could be extracted from the contin- uous image field at some time instant over some rectangular area – J ≤ j ≤ J , – K ≤ k ≤ K . It is often helpful to regard this sampled image array as a N 1 × N 2 element matrix F = [ F ( n 1, n 2 ) ] (5.1-1) for 1 ≤ n i ≤ Ni where the indices of the sampled array are reindexed for consistency with standard vector-space notation. Figure 5.1-1 illustrates the geometric relation- ship between the Cartesian coordinate system of a continuous image and its array of samples. Each image sample is called a pixel. 121
FIGURE 5.1-1. Geometric relationship between a continuous image and its array of samples.

For purposes of analysis, it is often convenient to convert the image matrix to vector form by column (or row) scanning F, and then stringing the elements together in a long vector (1). An equivalent scanning operation can be expressed in quantitative form by the use of an N₂ × 1 operational vector v_n and an N₁N₂ × N₁ matrix N_n defined as

$$v_n = \begin{bmatrix} 0 & \cdots & 0 & 1 & 0 & \cdots & 0 \end{bmatrix}^T, \qquad N_n = \begin{bmatrix} 0 & \cdots & 0 & I_{N_1} & 0 & \cdots & 0 \end{bmatrix}^T \tag{5.1-2}$$

where the unit element of v_n occupies its nth position and the N₁ × N₁ identity matrix I_{N₁} occupies the nth block of N_n. Then the vector representation of the image matrix F is given by the stacking operation

$$f = \sum_{n=1}^{N_2} N_n F v_n \tag{5.1-3}$$

In essence, the vector v_n extracts the nth column from F and the matrix N_n places this column into the nth segment of the vector f. Thus, f contains the column-
scanned elements of F. The inverse relation of casting the vector f into matrix form is obtained from

$$F = \sum_{n=1}^{N_2} N_n^T f v_n^T \tag{5.1-4}$$

With the matrix-to-vector operator of Eq. 5.1-3 and the vector-to-matrix operator of Eq. 5.1-4, it is now possible easily to convert between vector and matrix representations of a two-dimensional array. The advantages of dealing with images in vector form are a more compact notation and the ability to apply results derived previously for one-dimensional signal processing applications. It should be recognized that Eqs. 5.1-3 and 5.1-4 represent more than a lexicographic ordering between an array and a vector; these equations define mathematical operators that may be manipulated analytically. Numerous examples of the applications of the stacking operators are given in subsequent sections.

5.2. GENERALIZED TWO-DIMENSIONAL LINEAR OPERATOR

A large class of image processing operations are linear in nature; an output image field is formed from linear combinations of pixels of an input image field. Such operations include superposition, convolution, unitary transformation, and discrete linear filtering.

Consider the N₁ × N₂ element input image array F(n₁, n₂). A generalized linear operation on this image field results in an M₁ × M₂ output image array P(m₁, m₂) as defined by

$$P(m_1, m_2) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2)\, O(n_1, n_2;\, m_1, m_2) \tag{5.2-1}$$

where the operator kernel O(n₁, n₂; m₁, m₂) represents a weighting constant, which, in general, is a function of both input and output image coordinates (1).

For the analysis of linear image processing operations, it is convenient to adopt the vector-space formulation developed in Section 5.1. Thus, let the input image array F(n₁, n₂) be represented as matrix F or alternatively, as a vector f obtained by column scanning F. Similarly, let the output image array P(m₁, m₂) be represented by the matrix P or the column-scanned vector p. For notational simplicity, in the subsequent discussions, the input and output image arrays are assumed to be square and of dimensions N₁ = N₂ = N and M₁ = M₂ = M, respectively. Now, let T denote the M² × N² matrix performing a linear transformation on the N² × 1 input image vector f yielding the M² × 1 output image vector

$$p = T f \tag{5.2-2}$$
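A compact NumPy sketch (not from the text) of the stacking operators: it builds v_n and N_n explicitly, checks that Eq. 5.1-3 produces the column-scanned vector, and checks that Eq. 5.1-4 inverts it; the array size is an arbitrary assumption.

```python
import numpy as np

# Sketch of the stacking operators of Eqs. 5.1-2 to 5.1-4 (dimensions and
# names follow the text; the explicit construction is mine).
N1, N2 = 3, 4
F = np.arange(N1 * N2, dtype=float).reshape(N1, N2)

def v(n):                       # N2 x 1 selector of the nth column (1-indexed n)
    e = np.zeros((N2, 1)); e[n - 1, 0] = 1.0; return e

def Nmat(n):                    # N1*N2 x N1 placement matrix: identity in nth block
    M = np.zeros((N1 * N2, N1)); M[(n - 1) * N1:n * N1, :] = np.eye(N1); return M

f = sum(Nmat(n) @ F @ v(n) for n in range(1, N2 + 1))           # Eq. 5.1-3
F_back = sum(Nmat(n).T @ f @ v(n).T for n in range(1, N2 + 1))  # Eq. 5.1-4

print(np.allclose(f.ravel(), F.flatten(order="F")))   # column scanning == Fortran order
print(np.allclose(F_back, F))                          # the stacking operation is invertible
```

In practice the same conversion is performed with `F.flatten(order="F")` and a matching reshape; the explicit operators matter because, as noted above, they can be manipulated analytically, which is exploited throughout Section 5.2.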
The matrix T may be partitioned into M × N submatrices T_mn and written as

$$T = \begin{bmatrix} T_{11} & T_{12} & \cdots & T_{1N} \\ T_{21} & T_{22} & \cdots & T_{2N} \\ \vdots & & & \vdots \\ T_{M1} & T_{M2} & \cdots & T_{MN} \end{bmatrix} \tag{5.2-3}$$

From Eq. 5.1-3, it is possible to relate the output image vector p to the input image matrix F by the equation

$$p = \sum_{n=1}^{N} T N_n F v_n \tag{5.2-4}$$

Furthermore, from Eq. 5.1-4, the output image matrix P is related to the input image vector p by

$$P = \sum_{m=1}^{M} M_m^T p\, u_m^T \tag{5.2-5}$$

Combining the above yields the relation between the input and output image matrices,

$$P = \sum_{m=1}^{M} \sum_{n=1}^{N} \bigl( M_m^T T N_n \bigr) F \bigl( v_n u_m^T \bigr) \tag{5.2-6}$$

where it is observed that the operators M_m and N_n simply extract the partition T_mn from T. Hence

$$P = \sum_{m=1}^{M} \sum_{n=1}^{N} T_{mn} F \bigl( v_n u_m^T \bigr) \tag{5.2-7}$$

If the linear transformation is separable such that T may be expressed in the direct product form

$$T = T_C \otimes T_R \tag{5.2-8}$$
FIGURE 5.2-1. Structure of linear operator matrices.

where T_R and T_C are row and column operators on F, then

$$T_{mn} = T_R(m, n)\, T_C \tag{5.2-9}$$

As a consequence,

$$P = T_C F \sum_{m=1}^{M} \sum_{n=1}^{N} T_R(m, n)\, v_n u_m^T = T_C F T_R^T \tag{5.2-10}$$

Hence the output image matrix P can be produced by sequential row and column operations.

In many image processing applications, the linear transformation operator T is highly structured, and computational simplifications are possible. Special cases of interest are listed below and illustrated in Figure 5.2-1 for the case in which the input and output images are of the same dimension, M = N; a numerical check of the separable case follows the list.
1. Column processing of F:

$$T = \operatorname{diag}[\, T_{C1}, T_{C2}, \ldots, T_{CN} \,] \tag{5.2-11}$$

where T_Cj is the transformation matrix for the jth column.

2. Identical column processing of F:

$$T = \operatorname{diag}[\, T_C, T_C, \ldots, T_C \,] = T_C \otimes I_N \tag{5.2-12}$$

3. Row processing of F:

$$T_{mn} = \operatorname{diag}[\, T_{R1}(m, n), T_{R2}(m, n), \ldots, T_{RN}(m, n) \,] \tag{5.2-13}$$

where T_Rj is the transformation matrix for the jth row.

4. Identical row processing of F:

$$T_{mn} = \operatorname{diag}[\, T_R(m, n), T_R(m, n), \ldots, T_R(m, n) \,] \tag{5.2-14a}$$

and

$$T = I_N \otimes T_R \tag{5.2-14b}$$

5. Identical row and identical column processing of F:

$$T = T_C \otimes I_N + I_N \otimes T_R \tag{5.2-15}$$
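The separable case of Eq. 5.2-10 can be checked numerically. The NumPy sketch below (not from the text) builds the full operator block-by-block according to Eq. 5.2-9, which corresponds to `np.kron(T_R, T_C)` under NumPy's Kronecker-product block convention, and confirms that applying it to the column-scanned vector agrees with the sequential row and column product T_C F T_R^T; all matrices are random stand-ins.

```python
import numpy as np

# Sketch: a separable operator applied as sequential column and row operations
# (Eq. 5.2-10) agrees with the full vector-space operator whose (m, n) block is
# T_R[m, n] * T_C (Eq. 5.2-9). Column scanning is Fortran ("F") order.
rng = np.random.default_rng(0)
N = 4
F = rng.standard_normal((N, N))
T_C = rng.standard_normal((N, N))      # column operator
T_R = rng.standard_normal((N, N))      # row operator

T = np.kron(T_R, T_C)                  # block (m, n) equals T_R[m, n] * T_C

p = T @ F.flatten(order="F")                      # p = T f, Eq. 5.2-2
P_vector = p.reshape(N, N, order="F")             # unstack the output vector
P_separable = T_C @ F @ T_R.T                     # Eq. 5.2-10

print(np.allclose(P_vector, P_separable))          # True
# Cost comparison from Table 5.2-1: about N^4 multiplies for the general
# operator versus about 2*N^3 for sequential row and column processing.
```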
  • 139. IMAGE STATISTICAL CHARACTERIZATION 127 TABLE 5.2-1. Computational Requirements for Linear Transform Operator Operations Case (Multiply and Add) General N4 Column processing N3 Row processing N3 Row and column processing 2N3– N2 Separable row and column processing matrix form 2N3 5.3. IMAGE STATISTICAL CHARACTERIZATION The statistical descriptors of continuous images presented in Chapter 1 can be applied directly to characterize discrete images. In this section, expressions are developed for the statistical moments of discrete image arrays. Joint probability density models for discrete image fields are described in the following section. Ref- erence 4 provides background information for this subject. The moments of a discrete image process may be expressed conveniently in vector-space form. The mean value of the discrete image function is a matrix of the form E { F } = [ E { F ( n 1, n 2 ) } ] (5.3-1) If the image array is written as a column-scanned vector, the mean of the image vec- tor is N2 ηf = E { f } = ∑ N n E { F }v n (5.3-2) n= 1 The correlation function of the image array is given by R ( n 1, n 2 ; n 3 , n 4 ) = E { F ( n 1, n 2 )F∗ ( n 3, n 4 ) } (5.3-3) where the n i represent points of the image array. Similarly, the covariance function of the image array is K ( n 1, n 2 ; n 3 , n 4) = E { [ F ( n 1, n 2 ) – E { F ( n 1, n 2 ) } ] [ F∗ ( n 3, n 4 ) – E { F∗ ( n 3, n 4 ) } ] } (5.3-4)
Finally, the variance function of the image array is obtained directly from the covariance function as

$$\sigma^2(n_1, n_2) = K(n_1, n_2;\, n_1, n_2) \tag{5.3-5}$$

If the image array is represented in vector form, the correlation matrix of f can be written in terms of the correlation of elements of F as

$$R_f = E\{ f f^{*T} \} = E\left\{ \left[ \sum_{m=1}^{N_2} N_m F v_m \right] \left[ \sum_{n=1}^{N_2} v_n^T F^{*T} N_n^T \right] \right\} \tag{5.3-6a}$$

or

$$R_f = \sum_{m=1}^{N_2} \sum_{n=1}^{N_2} N_m\, E\bigl\{ F v_m v_n^T F^{*T} \bigr\}\, N_n^T \tag{5.3-6b}$$

The term

$$E\bigl\{ F v_m v_n^T F^{*T} \bigr\} = R_{mn} \tag{5.3-7}$$

is the N₁ × N₁ correlation matrix of the mth and nth columns of F. Hence it is possible to express R_f in partitioned form as

$$R_f = \begin{bmatrix} R_{11} & R_{12} & \cdots & R_{1N_2} \\ R_{21} & R_{22} & \cdots & R_{2N_2} \\ \vdots & & & \vdots \\ R_{N_2 1} & R_{N_2 2} & \cdots & R_{N_2 N_2} \end{bmatrix} \tag{5.3-8}$$

The covariance matrix of f can be found from its correlation matrix and mean vector by the relation

$$K_f = R_f - \eta_f\, \eta_f^{*T} \tag{5.3-9}$$

A variance matrix V_F of the array F(n₁, n₂) is defined as a matrix whose elements represent the variances of the corresponding elements of the array. The elements of this matrix may be extracted directly from the covariance matrix partitions of K_f. That is,
  • 141. IMAGE STATISTICAL CHARACTERIZATION 129 V F ( n 1, n 2 ) = K n n 2 ( n 1, n1 ) (5.3-10) 2, If the image matrix F is wide-sense stationary, the correlation function can be expressed as R ( n 1, n 2 ; n3, n 4 ) = R ( n 1 – n3 , n 2 – n 4 ) = R ( j, k ) (5.3-11) where j = n 1 – n 3 and k = n 2 – n 4. Correspondingly, the covariance matrix parti- tions of Eq. 5.3-9 are related by K mn = K k m≥n (5.3-12a) mn = K∗ K∗ k m<n (5.3-12b) where k = m – n + 1. Hence, for a wide-sense-stationary image array K1 K2 … KN 2 K∗ K1 … KN –1 Kf = 2 2 (5.3-13) … … … K∗ N K∗ N –1 … K1 2 2 The matrix of Eq. 5.3-13 is of block Toeplitz form (5). Finally, if the covariance between elements is separable into the product of row and column covariance func- tions, then the covariance matrix of the image vector can be expressed as the direct product of row and column covariance matrices. Under this condition K R ( 1, 1 )K C K R ( 1, 2 )K C … K R ( 1, N 2 )K C K R ( 2, 1 )K C K R ( 2, 2 )K C … K R ( 2, N 2 )K C Kf = KC ⊗ KR = … … … K R ( N 2, 1 )K C K R ( N 2, 2 )K C … K R ( N 2, N2 )K C (5.3-14) where K C is a N 1 × N 1 covariance matrix of each column of F and K R is a N 2 × N 2 covariance matrix of the rows of F.
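The separable form of Eq. 5.3-14 can be verified empirically. In the NumPy sketch below (not from the text), images are synthesized with an assumed separable model F = L_C G L_R^T (G a white, unit-variance field; L_C and L_R arbitrary factors), and the estimated covariance of the column-scanned vector is compared against the direct product of the row and column covariance matrices. Under NumPy's block convention, the text's K_C ⊗ K_R (whose blocks are K_R(m, n) K_C) corresponds to `np.kron(K_R, K_C)`.

```python
import numpy as np

# Sketch: for a separable covariance model, the covariance of the column-scanned
# image vector equals the direct product of Eq. 5.3-14. The generative model,
# sizes, and trial count are assumptions of this illustration.
rng = np.random.default_rng(5)
N1, N2, trials = 3, 4, 100000

L_C = rng.standard_normal((N1, N1))      # column "coloring" factor (assumed model)
L_R = rng.standard_normal((N2, N2))      # row "coloring" factor (assumed model)
K_C, K_R = L_C @ L_C.T, L_R @ L_R.T      # column and row covariance matrices

G = rng.standard_normal((trials, N1, N2))             # unit-variance white fields
F = np.einsum("ij,tjk,lk->til", L_C, G, L_R)          # F = L_C G L_R^T per trial
f = F.transpose(0, 2, 1).reshape(trials, N1 * N2)     # column scanning of each F
K_f_est = f.T @ f / trials

K_f_theory = np.kron(K_R, K_C)                         # Eq. 5.3-14
rel_err = np.abs(K_f_est - K_f_theory).max() / np.abs(K_f_theory).max()
print("relative estimation error:", round(rel_err, 3))   # small, shrinks with trials
```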
  • 142. 130 DISCRETE IMAGE MATHEMATICAL CHARACTERIZATION As a special case, consider the situation in which adjacent pixels along an image row have a correlation of ( 0.0 ≤ ρ R ≤ 1.0 ) and a self-correlation of unity. Then the covariance matrix reduces to N –1 1 ρR … ρR 2 2 N –2 KR = σR ρR 1 … ρR 2 (5.3-15) … … … N –1 N –2 ρR 2 ρR 2 … 1 FIGURE 5.3-1. Covariance measurements of the smpte_girl_luminance mono- chrome image.
FIGURE 5.3-2. Photograph of smpte_girl_luminance image.

where σ_R² denotes the variance of pixels along a row. This is an example of the covariance matrix of a Markov process, analogous to the continuous autocovariance function exp(−α|x|). Figure 5.3-1 contains a plot by Davisson (6) of the measured covariance of pixels along an image line of the monochrome image of Figure 5.3-2. The data points can be fit quite well with a Markov covariance function with ρ = 0.953. Similarly, the covariance between lines can be modeled well with a Markov covariance function with ρ = 0.965. If the horizontal and vertical covariances were exactly separable, the covariance function for pixels along the image diagonal would be equal to the product of the horizontal and vertical axis covariance functions. In this example, the approximation was found to be reasonably accurate for up to five pixel separations.

The discrete power-spectral density of a discrete image random process may be defined, in analogy with the continuous power spectrum of Eq. 1.4-13, as the two-dimensional discrete Fourier transform of its stationary autocorrelation function. Thus, from Eq. 5.3-11,

$$W(u, v) = \sum_{j=0}^{N_1 - 1} \sum_{k=0}^{N_2 - 1} R(j, k) \exp\left\{ -2\pi i \left( \frac{ju}{N_1} + \frac{kv}{N_2} \right) \right\} \tag{5.3-16}$$

Figure 5.3-3 shows perspective plots of the power-spectral densities for separable and circularly symmetric Markov processes.
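A small NumPy sketch (not from the text) of these last results: it forms the Markov covariance matrix of Eq. 5.3-15 with the measured correlation values quoted above, notes the separable direct-product covariance of Eq. 5.3-14, and evaluates the discrete power spectrum of Eq. 5.3-16 as a two-dimensional DFT of a separable Markov autocorrelation. The array size and the circular handling of lags are assumptions of the sketch.

```python
import numpy as np

# Sketch: Markov covariance matrix (Eq. 5.3-15), separable covariance
# (Eq. 5.3-14), and discrete power spectrum (Eq. 5.3-16).
N = 64
rho_R, rho_C, var = 0.953, 0.965, 1.0      # measured row/line correlations from the text

idx = np.arange(N)
K_R = var * rho_R ** np.abs(idx[:, None] - idx[None, :])    # row covariance, Toeplitz
K_C = var * rho_C ** np.abs(idx[:, None] - idx[None, :])    # column (line) covariance
# The full covariance of the column-scanned image would be np.kron(K_R, K_C)
# under NumPy's block convention (the text's direct product K_C with K_R).

# Separable stationary autocorrelation R(j, k) = var * rho_C^|j| * rho_R^|k|,
# folded circularly so that Eq. 5.3-16 can be evaluated with a 2-D FFT.
lag = np.minimum(idx, N - idx)
R_jk = var * np.outer(rho_C ** lag, rho_R ** lag)
W = np.fft.fft2(R_jk).real                                   # Eq. 5.3-16
print("W(0,0) =", round(W[0, 0], 2), "  W at mid-band =", round(W[N // 2, N // 2], 4))
```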
FIGURE 5.3-3. Power spectral densities of Markov process sources; N = 256, log magnitude displays: (a) separable; (b) circularly symmetric.

5.4. IMAGE PROBABILITY DENSITY MODELS

A discrete image array F(n₁, n₂) can be completely characterized statistically by its joint probability density, written in matrix form as
$$p(F) \equiv p\{ F(1, 1), F(2, 1), \ldots, F(N_1, N_2) \} \tag{5.4-1a}$$

or in corresponding vector form as

$$p(f) \equiv p\{ f(1), f(2), \ldots, f(Q) \} \tag{5.4-1b}$$

where Q = N₁ · N₂ is the order of the joint density. If all pixel values are statistically independent, the joint density factors into the product

$$p(f) \equiv p\{ f(1) \}\, p\{ f(2) \} \cdots p\{ f(Q) \} \tag{5.4-2}$$

of its first-order marginal densities.

The most common joint probability density is the joint Gaussian, which may be expressed as

$$p(f) = (2\pi)^{-Q/2} \left| K_f \right|^{-1/2} \exp\left\{ -\tfrac{1}{2} (f - \eta_f)^T K_f^{-1} (f - \eta_f) \right\} \tag{5.4-3}$$

where K_f is the covariance matrix of f, η_f is the mean of f and |K_f| denotes the determinant of K_f. The joint Gaussian density is useful as a model for the density of unitary transform coefficients of an image. However, the Gaussian density is not an adequate model for the luminance values of an image because luminance is a positive quantity and the Gaussian variables are bipolar.

Expressions for joint densities, other than the Gaussian density, are rarely found in the literature. Huhns (7) has developed a technique of generating high-order densities in terms of specified first-order marginal densities and a specified covariance matrix between the ensemble elements.

In Chapter 6, techniques are developed for quantizing variables to some discrete set of values called reconstruction levels. Let r_{j_q}(q) denote the reconstruction level of the pixel at vector coordinate (q). Then the probability of occurrence of the possible states of the image vector can be written in terms of the joint probability distribution as

$$P(f) = p\{ f(1) = r_{j_1}(1) \}\, p\{ f(2) = r_{j_2}(2) \} \cdots p\{ f(Q) = r_{j_Q}(Q) \} \tag{5.4-4}$$

where 0 ≤ j_q ≤ J − 1. Normally, the reconstruction levels are set identically for each vector component and the joint probability distribution reduces to

$$P(f) = p\{ f(1) = r_{j_1} \}\, p\{ f(2) = r_{j_2} \} \cdots p\{ f(Q) = r_{j_Q} \} \tag{5.4-5}$$
Probability distributions of image values can be estimated by histogram measurements. For example, the first-order probability distribution

$$P[f(q)] = P_R[f(q) = r_j] \tag{5.4-6}$$

of the amplitude value at vector coordinate q can be estimated by examining a large collection of images representative of a given image class (e.g., chest x-rays, aerial scenes of crops). The first-order histogram estimate of the probability distribution is the frequency ratio

$$H_E(j;\, q) = \frac{N_p(j)}{N_p} \tag{5.4-7}$$

where N_p represents the total number of images examined and N_p(j) denotes the number for which f(q) = r_j for j = 0, 1, ..., J − 1. If the image source is statistically stationary, the first-order probability distribution of Eq. 5.4-6 will be the same for all vector components q. Furthermore, if the image source is ergodic, ensemble averages (measurements over a collection of pictures) can be replaced by spatial averages. Under the ergodic assumption, the first-order probability distribution can be estimated by measurement of the spatial histogram

$$H_S(j) = \frac{N_S(j)}{Q} \tag{5.4-8}$$

where N_S(j) denotes the number of pixels in an image for which f(q) = r_j for 1 ≤ q ≤ Q and 0 ≤ j ≤ J − 1. For example, for an image with 256 gray levels, H_S(j) denotes the number of pixels possessing gray level j for 0 ≤ j ≤ 255. Figure 5.4-1 shows first-order histograms of the red, green, and blue components of a color image. Most natural images possess many more dark pixels than bright pixels, and their histograms tend to fall off exponentially at higher luminance levels.

Estimates of the second-order probability distribution for ergodic image sources can be obtained by measurement of the second-order spatial histogram, which is a measure of the joint occurrence of pairs of pixels separated by a specified distance. With reference to Figure 5.4-2, let F(n₁, n₂) and F(n₃, n₄) denote a pair of pixels separated by r radial units at an angle θ with respect to the horizontal axis. As a consequence of the rectilinear grid, the separation parameters may only assume certain discrete values. The second-order spatial histogram is then the frequency ratio

$$H_S(j_1, j_2;\, r, \theta) = \frac{N_S(j_1, j_2)}{Q_T} \tag{5.4-9}$$
FIGURE 5.4-1. Histograms of the red, green and blue components of the smpte_girl_linear color image.

where N_S(j₁, j₂) denotes the number of pixel pairs for which F(n₁, n₂) = r_{j₁} and F(n₃, n₄) = r_{j₂}. The factor Q_T in the denominator of Eq. 5.4-9 represents the total number of pixels lying in an image region for which the separation is (r, θ). Because of boundary effects, Q_T < Q. Second-order spatial histograms of a monochrome image are presented in Figure 5.4-3 as a function of pixel separation distance and angle. As the separation increases, the pairs of pixels become less correlated and the histogram energy tends to spread more uniformly about the plane.
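The histogram estimates of Eqs. 5.4-8 and 5.4-9 are straightforward to compute. The NumPy sketch below (not from the text) does so for a synthetic 8-bit image with the separation r = 1, θ = 0 used in Figure 5.4-3; the image contents are an assumption of the example.

```python
import numpy as np

# Sketch: first- and second-order spatial histogram estimates (Eqs. 5.4-8 and
# 5.4-9) for an 8-bit test image; the image itself is a synthetic stand-in.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
J = 256

# First-order histogram: fraction of pixels at each reconstruction level.
H1 = np.bincount(img.ravel(), minlength=J) / img.size

# Second-order histogram for separation r = 1, theta = 0 (horizontal neighbors).
left, right = img[:, :-1].ravel(), img[:, 1:].ravel()
H2 = np.zeros((J, J))
np.add.at(H2, (left, right), 1.0)
H2 /= left.size                       # Q_T: number of admissible pixel pairs

print("sum of first-order histogram :", H1.sum())    # 1.0
print("sum of second-order histogram:", H2.sum())    # 1.0
```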
  • 148. 136 DISCRETE IMAGE MATHEMATICAL CHARACTERIZATION FIGURE 5.4-2. Geometric relationships of pixel pairs. j2 j1 FIGURE 5.4-3. Second-order histogram of the smpte_girl_luminance monochrome image; r = 1 and θ = 0 . 5.5. LINEAR OPERATOR STATISTICAL REPRESENTATION If an input image array is considered to be a sample of a random process with known first and second-order moments, the first- and second-order moments of the output image array can be determined for a given linear transformation. First, the mean of the output image array is  N1 N2    E { P ( m 1, m2 ) } = E  ∑ ∑ F ( n1, n2 )O ( n1, n2 ; m1, m2 ) (5.5-1a)  n =1 n2 = 1   1 
Because the expectation operator is linear,

$$E\{ P(m_1, m_2) \} = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} E\{ F(n_1, n_2) \}\, O(n_1, n_2;\, m_1, m_2) \tag{5.5-1b}$$

The correlation function of the output image array is

$$R_P(m_1, m_2;\, m_3, m_4) = E\{ P(m_1, m_2)\, P^*(m_3, m_4) \} \tag{5.5-2a}$$

or in expanded form

$$R_P(m_1, m_2;\, m_3, m_4) = E\left\{ \left[ \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2)\, O(n_1, n_2;\, m_1, m_2) \right] \left[ \sum_{n_3=1}^{N_1} \sum_{n_4=1}^{N_2} F^*(n_3, n_4)\, O^*(n_3, n_4;\, m_3, m_4) \right] \right\} \tag{5.5-2b}$$

After multiplication of the series and performance of the expectation operation, one obtains

$$R_P(m_1, m_2;\, m_3, m_4) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} \sum_{n_3=1}^{N_1} \sum_{n_4=1}^{N_2} R_F(n_1, n_2;\, n_3, n_4)\, O(n_1, n_2;\, m_1, m_2)\, O^*(n_3, n_4;\, m_3, m_4) \tag{5.5-3}$$

where R_F(n₁, n₂; n₃, n₄) represents the correlation function of the input image array. In a similar manner, the covariance function of the output image is found to be

$$K_P(m_1, m_2;\, m_3, m_4) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} \sum_{n_3=1}^{N_1} \sum_{n_4=1}^{N_2} K_F(n_1, n_2;\, n_3, n_4)\, O(n_1, n_2;\, m_1, m_2)\, O^*(n_3, n_4;\, m_3, m_4) \tag{5.5-4}$$
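A brute-force check of Eq. 5.5-4 (a sketch, not from the text): for a small random operator and input covariance, the quadruple sum agrees with the compact vector-space form T K_f T^T that is developed next. The array sizes, the operator, and the covariance are arbitrary assumptions; column scanning relates the kernel indices to the matrix T.

```python
import numpy as np

# Sketch: the kernel form of the output covariance (Eq. 5.5-4) agrees with the
# vector-space form K_p = T K_f T^T (Eq. 5.5-7). Real-valued data, so the
# conjugations drop out. Column scanning throughout; 1-based indices as in text.
rng = np.random.default_rng(3)
N1 = N2 = M1 = M2 = 2
T = rng.standard_normal((M1 * M2, N1 * N2))          # linear operator on f = vec(F)
K_f = rng.standard_normal((N1 * N2, N1 * N2))
K_f = K_f @ K_f.T                                    # a valid input covariance matrix

idx = lambda i, j, rows: (j - 1) * rows + (i - 1)    # column-scanned vector index
O = lambda n1, n2, m1, m2: T[idx(m1, m2, M1), idx(n1, n2, N1)]
K_F = lambda n1, n2, n3, n4: K_f[idx(n1, n2, N1), idx(n3, n4, N1)]

def K_P(m1, m2, m3, m4):                              # Eq. 5.5-4
    return sum(K_F(n1, n2, n3, n4) * O(n1, n2, m1, m2) * O(n3, n4, m3, m4)
               for n1 in range(1, N1 + 1) for n2 in range(1, N2 + 1)
               for n3 in range(1, N1 + 1) for n4 in range(1, N2 + 1))

K_p = T @ K_f @ T.T                                   # vector-space form
err = max(abs(K_P(m1, m2, m3, m4) - K_p[idx(m1, m2, M1), idx(m3, m4, M1)])
          for m1 in range(1, M1 + 1) for m2 in range(1, M2 + 1)
          for m3 in range(1, M1 + 1) for m4 in range(1, M2 + 1))
print("max |kernel form - vector form| =", err)       # ~1e-15
```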
If the input and output image arrays are expressed in vector form, the formulation of the moments of the transformed image becomes much more compact. The mean of the output vector p is

$$\eta_p = E\{ p \} = E\{ T f \} = T\, E\{ f \} = T \eta_f \tag{5.5-5}$$

and the correlation matrix of p is

$$R_p = E\{ p p^{*T} \} = E\{ T f f^{*T} T^{*T} \} = T R_f T^{*T} \tag{5.5-6}$$

Finally, the covariance matrix of p is

$$K_p = T K_f T^{*T} \tag{5.5-7}$$

Applications of this theory to superposition and unitary transform operators are given in following chapters.

A special case of the general linear transformation p = Tf, of fundamental importance, occurs when the covariance matrix of Eq. 5.5-7 assumes the form

$$K_p = T K_f T^{*T} = \Lambda \tag{5.5-8}$$

where Λ is a diagonal matrix. In this case, the elements of p are uncorrelated. From Appendix A1.2, it is found that the transformation T, which produces the diagonal matrix Λ, has rows that are eigenvectors of K_f. The diagonal elements of Λ are the corresponding eigenvalues of K_f. This operation is called both a matrix diagonalization and a principal components transformation.

REFERENCES

1. W. K. Pratt, "Vector Formulation of Two Dimensional Signal Processing Operations," Computer Graphics and Image Processing, 4, 1, March 1975, 1–24.
2. J. O. Eklundh, "A Fast Computer Method for Matrix Transposing," IEEE Trans. Computers, C-21, 7, July 1972, 801–803.
3. R. E. Twogood and M. P. Ekstrom, "An Extension of Eklundh's Matrix Transposition Algorithm and Its Applications in Digital Image Processing," IEEE Trans. Computers, C-25, 9, September 1976, 950–952.
4. A. Papoulis, Probability, Random Variables, and Stochastic Processes, 3rd ed., McGraw-Hill, New York, 1991.
  • 151. REFERENCES 139 5. U. Grenander and G. Szego, Toeplitz Forms and Their Applications, University of Cali- fornia Press, Berkeley, CA, 1958. 6. L. D. Davisson, private communication. 7. M. N. Huhns, “Optimum Restoration of Quantized Correlated Signals,” USCIPI Report 600, University of Southern California, Image Processing Institute, Los Angeles August 1975.
  • 152. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 6 IMAGE QUANTIZATION Any analog quantity that is to be processed by a digital computer or digital system must be converted to an integer number proportional to its amplitude. The conver- sion process between analog samples and discrete-valued samples is called quanti- zation. The following section includes an analytic treatment of the quantization process, which is applicable not only for images but for a wide class of signals encountered in image processing systems. Section 6.2 considers the processing of quantized variables. The last section discusses the subjective effects of quantizing monochrome and color images. 6.1. SCALAR QUANTIZATION Figure 6.1-1 illustrates a typical example of the quantization of a scalar signal. In the quantization process, the amplitude of an analog signal sample is compared to a set of decision levels. If the sample amplitude falls between two decision levels, it is quantized to a fixed reconstruction level lying in the quantization band. In a digital system, each quantized sample is assigned a binary code. An equal-length binary code is indicated in the example. For the development of quantitative scalar signal quantization techniques, let f and ˆ represent the amplitude of a real, scalar signal sample and its quantized value, f respectively. It is assumed that f is a sample of a random process with known proba- bility density p ( f ) . Furthermore, it is assumed that f is constrained to lie in the range aL ≤ f ≤ a U (6.1-1) 141
FIGURE 6.1-1. Sample quantization.

where a_U and a_L represent upper and lower limits.

Quantization entails specification of a set of decision levels d_j and a set of reconstruction levels r_j such that if

$$d_j \le f < d_{j+1} \tag{6.1-2}$$

the sample is quantized to a reconstruction value r_j. Figure 6.1-2a illustrates the placement of decision and reconstruction levels along a line for J quantization levels. The staircase representation of Figure 6.1-2b is another common form of description.

Decision and reconstruction levels are chosen to minimize some desired quantization error measure between f and f̂. The quantization error measure usually employed is the mean-square error because this measure is tractable, and it usually correlates reasonably well with subjective criteria. For J quantization levels, the mean-square quantization error is

$$E = E\{ (f - \hat{f})^2 \} = \int_{a_L}^{a_U} (f - \hat{f})^2 p(f)\, df = \sum_{j=0}^{J-1} \int_{d_j}^{d_{j+1}} (f - r_j)^2 p(f)\, df \tag{6.1-3}$$
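As a quick numerical illustration (not from the text), the sketch below evaluates Eq. 6.1-3 empirically for a uniform quantizer acting on samples drawn from a uniform density on [0, 1); the number of levels is an assumption. The result matches the closed-form value 1/(12J²) quoted later in Eq. 6.1-13.

```python
import numpy as np

# Sketch: empirical check of the mean-square quantization error of Eq. 6.1-3
# for a uniform quantizer acting on uniformly distributed samples on [0, 1).
rng = np.random.default_rng(4)
J = 32                                    # number of quantization levels (assumed)
f = rng.random(1_000_000)                 # samples with uniform density, a_L = 0, a_U = 1

d = np.linspace(0.0, 1.0, J + 1)          # decision levels d_0 ... d_J
r = (d[:-1] + d[1:]) / 2                  # reconstruction levels at band midpoints
f_hat = r[np.clip(np.searchsorted(d, f, side="right") - 1, 0, J - 1)]

mse = np.mean((f - f_hat) ** 2)
print("empirical MSE        :", mse)
print("theoretical 1/(12J^2):", 1 / (12 * J ** 2))
```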
FIGURE 6.1-2. Quantization decision and reconstruction levels.

For a large number of quantization levels J, the probability density may be represented as a constant value p(r_j) over each quantization band. Hence

$$E = \sum_{j=0}^{J-1} p(r_j) \int_{d_j}^{d_{j+1}} (f - r_j)^2\, df \tag{6.1-4}$$

which evaluates to

$$E = \frac{1}{3} \sum_{j=0}^{J-1} p(r_j) \bigl[ (d_{j+1} - r_j)^3 - (d_j - r_j)^3 \bigr] \tag{6.1-5}$$

The optimum placing of the reconstruction level r_j within the range d_j to d_{j+1} can be determined by minimization of E with respect to r_j. Setting

$$\frac{dE}{dr_j} = 0 \tag{6.1-6}$$

yields

$$r_j = \frac{d_{j+1} + d_j}{2} \tag{6.1-7}$$
Therefore, the optimum placement of reconstruction levels is at the midpoint between each pair of decision levels. Substitution for this choice of reconstruction levels into the expression for the quantization error yields

$$E = \frac{1}{12} \sum_{j=0}^{J-1} p(r_j) (d_{j+1} - d_j)^3 \tag{6.1-8}$$

The optimum choice for decision levels may be found by minimization of E in Eq. 6.1-8 by the method of Lagrange multipliers. Following this procedure, Panter and Dite (1) found that the decision levels may be computed to a good approximation from the integral equation

$$d_j = \frac{(a_U - a_L) \int_{a_L}^{a_j} [p(f)]^{-1/3}\, df}{\int_{a_L}^{a_U} [p(f)]^{-1/3}\, df} \tag{6.1-9a}$$

where

$$a_j = \frac{j (a_U - a_L)}{J} + a_L \tag{6.1-9b}$$

for j = 0, 1, ..., J. If the probability density of the sample is uniform, the decision levels will be uniformly spaced. For nonuniform probability densities, the spacing of decision levels is narrow in large-amplitude regions of the probability density function and widens in low-amplitude portions of the density. Equation 6.1-9 does not reduce to closed form for most probability density functions commonly encountered in image processing systems models, and hence the decision levels must be obtained by numerical integration.

If the number of quantization levels is not large, the approximation of Eq. 6.1-4 becomes inaccurate, and exact solutions must be explored. From Eq. 6.1-3, setting the partial derivatives of the error expression with respect to the decision and reconstruction levels equal to zero yields

$$\frac{\partial E}{\partial d_j} = (d_j - r_j)^2 p(d_j) - (d_j - r_{j-1})^2 p(d_j) = 0 \tag{6.1-10a}$$

$$\frac{\partial E}{\partial r_j} = 2 \int_{d_j}^{d_{j+1}} (f - r_j)\, p(f)\, df = 0 \tag{6.1-10b}$$
Upon simplification, the set of equations

$r_j = 2 d_j - r_{j-1}$   (6.1-11a)

$r_j = \frac{\int_{d_j}^{d_{j+1}} f\, p(f)\, df}{\int_{d_j}^{d_{j+1}} p(f)\, df}$   (6.1-11b)

is obtained. Recursive solution of these equations for a given probability distribution $p(f)$ provides optimum values for the decision and reconstruction levels. Max (2) has developed a solution for optimum decision and reconstruction levels for a Gaussian density and has computed tables of optimum levels as a function of the number of quantization steps. Table 6.1-1 lists placements of decision and reconstruction levels for uniform, Gaussian, Laplacian, and Rayleigh densities for the Max quantizer. (A brief numerical sketch of this recursion for a Gaussian density is given following Table 6.1-1.)

If the decision and reconstruction levels are selected to satisfy Eq. 6.1-11, it can easily be shown that the mean-square quantization error becomes

$E_{min} = \sum_{j=0}^{J-1} \left[ \int_{d_j}^{d_{j+1}} f^2\, p(f)\, df - r_j^2 \int_{d_j}^{d_{j+1}} p(f)\, df \right]$   (6.1-12)

In the special case of a uniform probability density, the minimum mean-square quantization error becomes

$E_{min} = \frac{1}{12 J^2}$   (6.1-13)

Quantization errors for most other densities must be determined by computation.

It is possible to perform nonlinear quantization by a companding operation, as shown in Figure 6.1-3, in which the sample is transformed nonlinearly, linear quantization is performed, and the inverse nonlinear transformation is taken (3). In the companding system of quantization, the probability density of the transformed samples is forced to be uniform. Thus, from Figure 6.1-3, the transformed sample value is

$g = T\{f\}$   (6.1-14)

where the nonlinear transformation $T\{\cdot\}$ is chosen such that the probability density of g is uniform.

[Figure 6.1-3. Companding quantizer.]
  • 157. 146 IMAGE QUANTIZATION TABLE 6.1-1. Placement of Decision and Reconstruction Levels for Max Quantizer Uniform Gaussian Laplacian Rayleigh Bits di ri di ri di ri di ri 1 –1.0000 –0.5000 –∞ –0.7979 –∞ –0.7071 0.0000 1.2657 0.0000 0.5000 0.0000 0.7979 0.0000 0.7071 2.0985 2.9313 1.0000 ∞ –∞ ∞ 2 –1.0000 –0.7500 –∞ –1.5104 ∞ –1.8340 0.0000 0.8079 –0.5000 –0.2500 –0.9816 –0.4528 –1.1269 –0.4198 1.2545 1.7010 –0.0000 0.2500 0.0000 0.4528 0.0000 0.4198 2.1667 2.6325 0.5000 0.7500 0.9816 1.5104 1.1269 1.8340 3.2465 3.8604 1.0000 ∞ ∞ ∞ 3 –1.0000 –0.8750 –∞ –2.1519 –∞ –3.0867 0.0000 0.5016 –0.7500 –0.6250 –1.7479 –1.3439 –2.3796 –1.6725 0.7619 1.0222 –0.5000 –0.3750 –1.0500 –0.7560 –1.2527 –0.8330 1.2594 1.4966 –0.2500 –0.1250 –0.5005 –0.2451 –0.5332 –0.2334 1.7327 1.9688 0.0000 0.1250 0.0000 0.2451 0.0000 0.2334 2.2182 2.4675 0.2500 0.3750 0.5005 0.7560 0.5332 0.8330 2.7476 3.0277 0.5000 0.6250 1.0500 1.3439 1.2527 1.6725 3.3707 3.7137 0.7500 0.8750 1.7479 2.1519 2.3796 3.0867 4.2124 4.7111 1.0000 ∞ ∞ ∞ 4 –1.0000 –0.9375 –∞ –2.7326 –∞ –4.4311 0.0000 0.3057 –0.8750 –0.8125 –2.4008 –2.0690 –3.7240 –3.0169 0.4606 0.6156 –0.7500 –0.6875 –1.8435 –1.6180 –2.5971 –2.1773 0.7509 0.8863 –0.6250 –0.5625 –1.4371 –1.2562 –1.8776 –1.5778 1.0130 1.1397 –0.5000 –0.4375 –1.0993 –0.9423 –1.3444 –1.1110 1.2624 1.3850 –0.3750 –0.3125 –0.7995 –0.6568 –0.9198 –0.7287 1.5064 1.6277 –0.2500 –0.1875 –0.5224 –0.3880 –0.5667 –0.4048 1.7499 1.8721 –0.1250 –0.0625 –0.2582 –0.1284 –0.2664 –0.1240 1.9970 2.1220 0.0000 0.0625 0.0000 0.1284 0.0000 0.1240 2.2517 2.3814 0.1250 0.1875 0.2582 0.3880 0.2644 0.4048 2.5182 2.6550 0.2500 0.3125 0.5224 0.6568 0.5667 0.7287 2.8021 2.9492 0.3750 0.4375 0.7995 0.9423 0.9198 1.1110 3.1110 3.2729 0.5000 0.5625 1.0993 1.2562 1.3444 1.5778 3.4566 3.6403 0.6250 0.6875 1.4371 1.6180 1.8776 2.1773 3.8588 4.0772 0.7500 0.8125 1.8435 2.0690 2.5971 3.0169 4.3579 4.6385 0.8750 0.9375 2.4008 2.7326 3.7240 4.4311 5.0649 5.4913 1.0000 ∞ ∞ ∞
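As referenced above, the recursion of Eq. 6.1-11 is straightforward to carry out numerically. The following Python fragment is a minimal sketch, not part of the original text, that alternates the midpoint condition implied by Eq. 6.1-10a with the centroid condition of Eq. 6.1-11b (a Lloyd-style iteration rather than the forward recursion of Eq. 6.1-11a) for a unit-variance Gaussian density; the amplitude grid limits, initial levels, and iteration count are arbitrary choices. The converged levels should approximate the 2-bit Gaussian entries of Table 6.1-1.

import numpy as np

# Discretized unit-variance Gaussian density (support truncated to [-5, 5]).
f = np.linspace(-5.0, 5.0, 20001)
p = np.exp(-0.5 * f**2) / np.sqrt(2.0 * np.pi)

J = 4                                  # number of reconstruction levels (2 bits)
r = np.linspace(-1.5, 1.5, J)          # arbitrary initial reconstruction levels

for _ in range(200):
    # Decision levels at midpoints of adjacent reconstruction levels
    # (condition implied by Eq. 6.1-10a), with d_0 = -inf and d_J = +inf.
    d = np.concatenate(([-np.inf], 0.5 * (r[:-1] + r[1:]), [np.inf]))
    # Reconstruction levels at the centroid of each decision band (Eq. 6.1-11b).
    for j in range(J):
        band = (f >= d[j]) & (f < d[j + 1])
        r[j] = np.sum(f[band] * p[band]) / np.sum(p[band])

print("decision levels:", np.round(d[1:-1], 4))
print("reconstruction levels:", np.round(r, 4))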
Because the transformation $T\{\cdot\}$ forces the density of g to be uniform,

$p(g) = 1$   (6.1-15)

for $-\tfrac{1}{2} \le g \le \tfrac{1}{2}$. If f is a zero-mean random variable, the proper transformation function is (4)

$T\{f\} = \int_{-\infty}^{f} p(z)\, dz - \frac{1}{2}$   (6.1-16)

That is, the nonlinear transformation function is equivalent to the cumulative probability distribution of f. Table 6.1-2 contains the companding transformations and inverses for the Gaussian, Rayleigh, and Laplacian probability densities. It should be noted that nonlinear quantization by the companding technique is an approximation to optimum quantization, as specified by the Max solution. The accuracy of the approximation improves as the number of quantization levels increases.

6.2. PROCESSING QUANTIZED VARIABLES

Numbers within a digital computer that represent image variables, such as luminance or tristimulus values, normally are input as the integer codes corresponding to the quantization reconstruction levels of the variables, as illustrated in Figure 6.1-1. If the quantization is linear, the jth integer value is given by

$j = \left[ (J - 1)\, \frac{f - a_L}{a_U - a_L} \right]_N$   (6.2-1)

where J is the maximum integer value, f is the unquantized pixel value over a lower-to-upper range of $a_L$ to $a_U$, and $[\cdot]_N$ denotes the nearest integer value of the argument. The corresponding reconstruction value is

$r_j = \frac{a_U - a_L}{J}\, j + \frac{a_U - a_L}{2J} + a_L$   (6.2-2)

Hence, $r_j$ is linearly proportional to j. If the computer processing operation is itself linear, the integer code j can be numerically processed rather than the real number $r_j$. However, if nonlinear processing is to be performed, for example, taking the logarithm of a pixel, it is necessary to process $r_j$ as a real variable rather than the integer j because the operation is scale dependent. If the quantization is nonlinear, all processing must be performed in the real variable domain.

In a digital computer, there are two major forms of numeric representation: real and integer. Real numbers are stored in floating-point form, and typically have a large dynamic range with fine precision. Integer numbers can be strictly positive or bipolar (negative or positive).
TABLE 6.1-2. Companding Quantization Transformations

Gaussian
  Probability density:     $p(f) = (2\pi\sigma^2)^{-1/2} \exp\{-f^2 / (2\sigma^2)\}$
  Forward transformation:  $g = \tfrac{1}{2}\, \mathrm{erf}\{ f / (\sqrt{2}\,\sigma) \}$
  Inverse transformation:  $\hat{f} = \sqrt{2}\,\sigma\, \mathrm{erf}^{-1}\{ 2\hat{g} \}$

Rayleigh
  Probability density:     $p(f) = \dfrac{f}{\sigma^2} \exp\{-f^2 / (2\sigma^2)\}$
  Forward transformation:  $g = \tfrac{1}{2} - \exp\{-f^2 / (2\sigma^2)\}$
  Inverse transformation:  $\hat{f} = \left[ 2\sigma^2 \ln\{ 1 / (\tfrac{1}{2} - \hat{g}) \} \right]^{1/2}$

Laplacian
  Probability density:     $p(f) = \dfrac{\alpha}{2} \exp\{-\alpha |f|\}$
  Forward transformation:  $g = \tfrac{1}{2}[1 - \exp\{-\alpha f\}]$ for $f \ge 0$;  $g = -\tfrac{1}{2}[1 - \exp\{\alpha f\}]$ for $f < 0$
  Inverse transformation:  $\hat{f} = -\dfrac{1}{\alpha} \ln\{1 - 2\hat{g}\}$ for $\hat{g} \ge 0$;  $\hat{f} = \dfrac{1}{\alpha} \ln\{1 + 2\hat{g}\}$ for $\hat{g} < 0$

where $\mathrm{erf}\{x\} \equiv \dfrac{2}{\sqrt{\pi}} \int_0^x \exp\{-y^2\}\, dy$ and $\alpha = \dfrac{\sqrt{2}}{\sigma}$.
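As an illustration of the companding quantizer of Figure 6.1-3 with the Gaussian entry of Table 6.1-2, the following Python sketch (illustrative only; it assumes SciPy is available for erf and its inverse, and the function name is arbitrary) transforms Gaussian samples with Eq. 6.1-16, applies a uniform quantizer on [-1/2, 1/2], and inverts the transformation.

import numpy as np
from scipy.special import erf, erfinv

def compand_quantize_gaussian(f, sigma, bits):
    """Companding quantizer of Fig. 6.1-3 for a zero-mean Gaussian variable.

    Forward transform (Table 6.1-2): g = 0.5 * erf(f / (sqrt(2) * sigma)),
    so g is nominally uniform on [-1/2, 1/2).  A uniform J-level quantizer is
    applied to g, and the inverse transform gives the reconstruction value.
    """
    J = 2 ** bits
    g = 0.5 * erf(f / (np.sqrt(2.0) * sigma))          # forward nonlinearity
    j = np.clip(np.floor((g + 0.5) * J), 0, J - 1)     # uniform decision bands
    g_hat = (j + 0.5) / J - 0.5                        # mid-band reconstruction in g
    return np.sqrt(2.0) * sigma * erfinv(2.0 * g_hat)  # inverse nonlinearity

rng = np.random.default_rng(0)
samples = rng.normal(0.0, 1.0, 100000)
recon = compand_quantize_gaussian(samples, sigma=1.0, bits=3)
print("mean-square quantization error:", np.mean((samples - recon) ** 2))

The measured error should be somewhat larger than the optimum Max quantizer error, consistent with the statement above that companding is an approximation whose accuracy improves with the number of levels.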
The two's complement number system is commonly used in computers and digital processing hardware for representing bipolar integers. The general format is as follows:

S . M1, M2, ..., MB-1

where S is a sign bit (0 for positive, 1 for negative), followed, conceptually, by a binary point, Mb denotes a magnitude bit, and B is the number of bits in the computer word. Table 6.2-1 lists the two's complement correspondence between integer, fractional, and decimal numbers for a 4-bit word. In this representation, all pixels are scaled in amplitude between $-1.0$ and $1.0 - 2^{-(B-1)}$.

TABLE 6.2-1. Two's Complement Code for 4-Bit Code Word

Code    Fractional Value    Decimal Value
0.111       +7/8               +0.875
0.110       +6/8               +0.750
0.101       +5/8               +0.625
0.100       +4/8               +0.500
0.011       +3/8               +0.375
0.010       +2/8               +0.250
0.001       +1/8               +0.125
0.000        0                  0.000
1.111       -1/8               -0.125
1.110       -2/8               -0.250
1.101       -3/8               -0.375
1.100       -4/8               -0.500
1.011       -5/8               -0.625
1.010       -6/8               -0.750
1.001       -7/8               -0.875
1.000       -8/8               -1.000
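The mappings of Eqs. 6.2-1 and 6.2-2 between an unquantized value, its integer code, and its reconstruction level, together with the two's complement fractional scaling of Table 6.2-1, are easy to mirror in software. The Python fragment below is a sketch of that bookkeeping; the function names and example values are illustrative, not part of the text.

import numpy as np

def to_integer_code(f, a_l, a_u, J):
    # Eq. 6.2-1: nearest-integer code for a linearly quantized pixel value.
    return np.rint((J - 1) * (f - a_l) / (a_u - a_l)).astype(int)

def to_reconstruction(j, a_l, a_u, J):
    # Eq. 6.2-2: reconstruction level corresponding to integer code j.
    return (a_u - a_l) * j / J + (a_u - a_l) / (2.0 * J) + a_l

def twos_complement_fraction(code, B):
    # Interpret a B-bit two's complement word as a fraction in
    # [-1.0, 1.0 - 2**-(B-1)]; B = 4 reproduces Table 6.2-1.
    if code >= 2 ** (B - 1):
        code -= 2 ** B
    return code / float(2 ** (B - 1))

f = 0.37
j = to_integer_code(f, a_l=0.0, a_u=1.0, J=256)
print(j, to_reconstruction(j, 0.0, 1.0, 256))
print(twos_complement_fraction(0b0111, 4), twos_complement_fraction(0b1000, 4))  # +0.875, -1.000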
One of the advantages of this representation is that pixel scaling is independent of precision, in the sense that a pixel $F(j, k)$ is bounded over the range $-1.0 \le F(j, k) < 1.0$ regardless of the number of bits in a word.

6.3. MONOCHROME AND COLOR IMAGE QUANTIZATION

This section considers the subjective and quantitative effects of the quantization of monochrome and color images.

6.3.1. Monochrome Image Quantization

Monochrome images are typically input to a digital image processor as a sequence of uniform-length binary code words. In the literature, the binary code is often called a pulse code modulation (PCM) code. Because uniform-length code words are used for each image sample, the number of amplitude quantization levels is determined by the relationship

$L = 2^B$   (6.3-1)

where B represents the number of code bits allocated to each sample. A bit rate compression can be achieved for PCM coding by the simple expedient of restricting the number of bits assigned to each sample. If image quality is to be judged by an analytic measure, B is simply taken as the smallest value that satisfies the minimal acceptable image quality measure. For a subjective assessment, B is lowered until quantization effects become unacceptable.

The eye is only capable of judging the absolute brightness of about 10 to 15 shades of gray, but it is much more sensitive to the difference in brightness of adjacent gray shades. For a reduced number of quantization levels, the first noticeable artifact is gray scale contouring, caused by a jump in the reconstructed image brightness between quantization levels in a region where the original image is slowly changing in brightness. The minimal number of quantization bits required for basic PCM coding to prevent gray scale contouring depends on a variety of factors, including the linearity of the image display and noise effects before and after the image digitizer.
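A bit-rate reduction experiment of the kind reported in the contouring studies cited below amounts to requantizing an image to B bits per Eq. 6.3-1. The following Python sketch is illustrative only; it assumes the image is a floating-point array scaled to [0.0, 1.0], and the function name is arbitrary.

import numpy as np

def uniform_requantize(image, bits):
    """Uniformly requantize an image in [0.0, 1.0] to L = 2**bits levels (Eq. 6.3-1).

    With decreasing bits, gray scale contouring appears first in slowly
    varying regions, as in the ramp half of Figure 6.3-1.
    """
    levels = 2 ** bits
    codes = np.clip(np.floor(image * levels), 0, levels - 1)   # decision bands
    return (codes + 0.5) / levels                              # mid-band reconstruction

# Example: a horizontal ramp makes the reduced number of levels easy to count.
ramp = np.tile(np.linspace(0.0, 1.0, 512), (64, 1))
for b in (8, 6, 4, 3):
    distinct = np.unique(uniform_requantize(ramp, b)).size
    print(f"{b} bits -> {distinct} distinct output levels")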
[Figure 6.3-1. Uniform quantization of the peppers_ramp_luminance monochrome image: (a) 8 bit, 256 levels; (b) 7 bit, 128 levels; (c) 6 bit, 64 levels; (d) 5 bit, 32 levels; (e) 4 bit, 16 levels; (f) 3 bit, 8 levels.]
Assuming that an image sensor produces an output pixel sample proportional to the image intensity, a question of concern is: should the image intensity itself, or some function of the image intensity, be quantized? Furthermore, should the quantization scale be linear or nonlinear? Linearity or nonlinearity of the quantization scale can be viewed as a matter of implementation. A given nonlinear quantization scale can be realized by the companding operation of Figure 6.1-3, in which a nonlinear amplification weighting of the continuous signal to be quantized is performed, followed by linear quantization, followed by an inverse weighting of the quantized amplitude. Thus, consideration is limited here to linear quantization of companded pixel samples.

There have been many experimental studies to determine the number and placement of quantization levels required to minimize the effect of gray scale contouring (5-8). Goodall (5) performed some of the earliest experiments on digital television and concluded that 6 bits of intensity quantization (64 levels) were required for good quality and that 5 bits (32 levels) would suffice for a moderate amount of contouring. Other investigators have reached similar conclusions. In most studies, however, there has been some question as to the linearity and calibration of the imaging system. As noted in Section 3.5.3, most television cameras and monitors exhibit a nonlinear response to light intensity. Also, the photographic film that is often used to record the experimental results is highly nonlinear. Finally, any camera or monitor noise tends to diminish the effects of contouring.

Figure 6.3-1 contains photographs of an image linearly quantized with a variable number of quantization levels. The source image is a split image in which the left side is a luminance image and the right side is a computer-generated linear ramp. In Figure 6.3-1, the luminance signal of the image has been uniformly quantized with from 8 to 256 levels (3 to 8 bits). Gray scale contouring in these pictures is apparent in the ramp part of the split image for 6 or fewer bits. The contouring of the luminance image part of the split image becomes noticeable for 5 bits.

As discussed in Section 2.4, it has been postulated that the eye responds logarithmically or to a power law of incident light amplitude. There have been several efforts to quantitatively model this nonlinear response by a lightness function $\Lambda$, which is related to incident luminance. Priest et al. (9) have proposed a square-root nonlinearity

$\Lambda = (100.0\, Y)^{1/2}$   (6.3-2)

where $0.0 \le Y \le 1.0$ and $0.0 \le \Lambda \le 10.0$. Ladd and Pinney (10) have suggested a cube-root scale

$\Lambda = 2.468\, (100.0\, Y)^{1/3} - 1.636$   (6.3-3)

A logarithmic scale

$\Lambda = 5.0\, \log_{10}\{100.0\, Y\}$   (6.3-4)

where $0.01 \le Y \le 1.0$, has also been proposed by Foss et al. (11).
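The lightness functions of Eqs. 6.3-2 to 6.3-4 can serve directly as the companding nonlinearity discussed above. The following Python sketch is illustrative only: luminance Y is assumed to lie in [0.01, 1.0] so that the logarithmic scale is defined, and the names and the 4-bit choice are arbitrary. Each scale is applied, a uniform quantizer operates on the companded variable, and the inverse scale is applied, as in the comparison of Figure 6.3-3.

import numpy as np

# Lightness scales and their inverses (Eqs. 6.3-2 to 6.3-4), Y in [0.01, 1.0].
SCALES = {
    "square root": (lambda y: np.sqrt(100.0 * y),
                    lambda lam: lam ** 2 / 100.0),
    "cube root":   (lambda y: 2.468 * (100.0 * y) ** (1.0 / 3.0) - 1.636,
                    lambda lam: ((lam + 1.636) / 2.468) ** 3 / 100.0),
    "logarithmic": (lambda y: 5.0 * np.log10(100.0 * y),
                    lambda lam: 10.0 ** (lam / 5.0) / 100.0),
}

def lightness_quantize(y, bits, forward, inverse):
    # Compand with the lightness scale, quantize uniformly, then invert (Fig. 6.1-3).
    lam = forward(y)
    lo, hi = forward(0.01), forward(1.0)              # range of the companded variable
    levels = 2 ** bits
    codes = np.clip(np.floor((lam - lo) / (hi - lo) * levels), 0, levels - 1)
    lam_hat = lo + (codes + 0.5) * (hi - lo) / levels
    return inverse(lam_hat)

y = np.linspace(0.01, 1.0, 1000)
for name, (fwd, inv) in SCALES.items():
    err = np.mean((y - lightness_quantize(y, 4, fwd, inv)) ** 2)
    print(f"{name:12s} 4-bit mean-square error: {err:.2e}")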
[Figure 6.3-2. Lightness scales.]

Figure 6.3-2 compares these three scaling functions.

In an effort to reduce the gray scale contouring of linear quantization, it is reasonable to apply a lightness scaling function prior to quantization, and then to apply its inverse to the reconstructed value, in correspondence to the companding quantizer of Figure 6.1-3. Figure 6.3-3 presents a comparison of linear, square-root, cube-root, and logarithmic quantization for a 4-bit quantizer. Among the lightness scale quantizers, the gray scale contouring appears least for the square-root scaling. The lightness quantizers exhibit less contouring than the linear quantizer in dark areas but worse contouring in bright regions.

6.3.2. Color Image Quantization

A color image may be represented by its red, green, and blue source tristimulus values or any linear or nonlinear invertible function of the source tristimulus values. If the red, green, and blue tristimulus values are to be quantized individually, the selection of the number and placement of quantization levels follows the same general considerations as for a monochrome image. The eye exhibits a nonlinear response to spectral lights as well as to white light, and therefore it is subjectively preferable to compand the tristimulus values before quantization. It is known, however, that the eye is most sensitive to brightness changes in the blue region of the spectrum, moderately sensitive to brightness changes in the green spectral region, and least sensitive to red changes. Thus, it is possible to assign quantization levels on this basis more efficiently than by simply using an equal number for each tristimulus value.
  • 165. 154 IMAGE QUANTIZATION (a) Linear (b) Log (c) Square root (d) Cube root FIGURE 6.3-3. Comparison of lightness scale quantization of the peppers_ramp _luminance image for 4 bit quantization. Figure 6.3-4 is a general block diagram for a color image quantization system. A source image described by source tristimulus values R, G, B is converted to three components x(1), x(2), x(3), which are then quantized. Next, the quantized compo- ˆ nents x ( 1 ) , x ( 2 ) , x ( 3 ) are converted back to the original color coordinate system, ˆ ˆ ˆ ˆ ˆ producing the quantized tristimulus values R, G , B . The quantizer in Figure 6.3-4 effectively partitions the color space of the color coordinates x(1), x(2), x(3) into quantization cells and assigns a single color value to all colors within a cell. To be most efficient, the three color components x(1), x(2), x(3) should be quantized jointly. However, implementation considerations often dictate separate quantization of the color components. In such a system, x(1), x(2), x(3) are individually quantized over
their maximum ranges. In effect, the physical color solid is enclosed in a rectangular solid, which is then divided into rectangular quantization cells.

[Figure 6.3-4. Color image quantization model.]
[Figure 6.3-5. Loci of reproducible colors for the RN GN BN and UVW coordinate systems.]

If the source tristimulus values are converted to some other coordinate system for quantization, some immediate problems arise. As an example, consider the quantization of the UVW tristimulus values. Figure 6.3-5 shows the locus of reproducible colors for the RGB source tristimulus values plotted as a cube and the transformation of this color cube into the UVW coordinate system. It is seen that the RGB cube becomes a parallelepiped. If the UVW tristimulus values are to be quantized individually over their maximum and minimum limits, many of the quantization cells represent nonreproducible colors and hence are wasted. It is only worthwhile to quantize colors within the parallelepiped, but this generally is a difficult operation to implement efficiently.

In the present analysis, it is assumed that each color component is linearly quantized over its maximum range into $2^{B(i)}$ levels, where B(i) represents the number of bits assigned to the component x(i). The total number of bits allotted to the coding is fixed at

$B_T = B(1) + B(2) + B(3)$   (6.3-5)
[Figure 6.3-6. Chromaticity shifts resulting from uniform quantization of the smpte_girl_linear color image.]

Let $a_U(i)$ represent the upper bound of x(i) and $a_L(i)$ the lower bound. Then each quantization cell has dimension

$q(i) = \frac{a_U(i) - a_L(i)}{2^{B(i)}}$   (6.3-6)
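For separate linear quantization of three color components under a fixed bit budget (Eq. 6.3-5), the cell dimension of Eq. 6.3-6 follows directly from each component's range and bit assignment. The short Python sketch below illustrates that bookkeeping; the component ranges and the uneven bit splits are arbitrary illustrative choices, not allocations recommended by the text.

import numpy as np

def cell_dimensions(bounds, bits):
    """Quantization cell size q(i) of Eq. 6.3-6 for each color component.

    bounds: list of (a_L(i), a_U(i)) pairs; bits: list of B(i) values whose
    sum is the total bit budget B_T of Eq. 6.3-5.
    """
    return [(a_u - a_l) / 2 ** b for (a_l, a_u), b in zip(bounds, bits)]

# Illustrative 8-bit budget split unevenly across three components.
bounds = [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]
for bits in ([3, 3, 2], [2, 3, 3]):
    q = cell_dimensions(bounds, bits)
    eps = [qi / 2.0 for qi in q]   # maximum error per axis is half the cell dimension
    print(bits, "cell sizes:", np.round(q, 4), "max errors:", np.round(eps, 4))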
Any color with color component x(i) within the quantization cell will be quantized to the color component value $\hat{x}(i)$. The maximum quantization error along each color coordinate axis is then

$\varepsilon(i) = x(i) - \hat{x}(i) = \frac{a_U(i) - a_L(i)}{2^{B(i)+1}}$   (6.3-7)

Thus, the coordinates of the quantized color become

$\hat{x}(i) = x(i) \pm \varepsilon(i)$   (6.3-8)

subject to the conditions $a_L(i) \le \hat{x}(i) \le a_U(i)$. It should be observed that the values of $\hat{x}(i)$ will always lie within the smallest cube enclosing the color solid for the given color coordinate system. Figure 6.3-6 illustrates chromaticity shifts of various colors for quantization in the RN GN BN and Yuv coordinate systems (12).

Jain and Pratt (12) have investigated the optimal assignment of quantization decision levels for color images in order to minimize the geodesic color distance between an original color and its reconstructed representation. Interestingly enough, it was found that quantization of the RN GN BN color coordinates provided better results than for other common color coordinate systems. The primary reason was that all quantization levels were occupied in the RN GN BN system, but many levels were unoccupied with the other systems. This consideration seemed to override the metric nonuniformity of the RN GN BN color space.

REFERENCES

1. P. F. Panter and W. Dite, "Quantization Distortion in Pulse Code Modulation with Non-uniform Spacing of Levels," Proc. IRE, 39, 1, January 1951, 44-48.
2. J. Max, "Quantizing for Minimum Distortion," IRE Trans. Information Theory, IT-6, 1, March 1960, 7-12.
3. V. R. Algazi, "Useful Approximations to Optimum Quantization," IEEE Trans. Communication Technology, COM-14, 3, June 1966, 297-301.
4. R. M. Gray, "Vector Quantization," IEEE ASSP Magazine, April 1984, 4-29.
5. W. M. Goodall, "Television by Pulse Code Modulation," Bell System Technical J., January 1951.
6. R. L. Cabrey, "Video Transmission over Telephone Cable Pairs by Pulse Code Modulation," Proc. IRE, 48, 9, September 1960, 1546-1551.
7. L. H. Harper, "PCM Picture Transmission," IEEE Spectrum, 3, 6, June 1966, 146.
8. F. W. Scoville and T. S. Huang, "The Subjective Effect of Spatial and Brightness Quantization in PCM Picture Transmission," NEREM Record, 1965, 234-235.
9. I. G. Priest, K. S. Gibson, and H. J. McNicholas, "An Examination of the Munsell Color System, I. Spectral and Total Reflection and the Munsell Scale of Value," Technical Paper 167, National Bureau of Standards, Washington, DC, 1920.
10. J. H. Ladd and J. E. Pinney, "Empirical Relationships with the Munsell Value Scale," Proc. IRE (Correspondence), 43, 9, 1955, 1137.
11. C. E. Foss, D. Nickerson, and W. C. Granville, "Analysis of the Ostwald Color System," J. Optical Society of America, 34, 1, July 1944, 361-381.
12. A. K. Jain and W. K. Pratt, "Color Image Quantization," IEEE Publication 72 CHO 601-5-NTC, National Telecommunications Conference 1972 Record, Houston, TX, December 1972.
PART 3

DISCRETE TWO-DIMENSIONAL LINEAR PROCESSING

Part 3 of the book is concerned with a unified analysis of discrete two-dimensional linear processing operations. Several forms of discrete two-dimensional superposition and convolution operators are developed and related to one another. Two-dimensional transforms, such as the Fourier, Hartley, cosine, and Karhunen-Loeve transforms, are introduced. Consideration is given to the utilization of two-dimensional transforms as an alternative means of achieving convolutional processing more efficiently.
7

SUPERPOSITION AND CONVOLUTION

In Chapter 1, superposition and convolution operations were derived for continuous two-dimensional image fields. This chapter provides a derivation of these operations for discrete two-dimensional images. Three types of superposition and convolution operators are defined: finite area, sampled image, and circulant area. The finite-area operator is a linear filtering process performed on a discrete image data array. The sampled image operator is a discrete model of a continuous two-dimensional image filtering process. The circulant area operator provides a basis for a computationally efficient means of performing either finite-area or sampled image superposition and convolution.

7.1. FINITE-AREA SUPERPOSITION AND CONVOLUTION

Mathematical expressions for finite-area superposition and convolution are developed below for both series and vector-space formulations.

7.1.1. Finite-Area Superposition and Convolution: Series Formulation

Let $F(n_1, n_2)$ denote an image array for $n_1, n_2 = 1, 2, ..., N$. For notational simplicity, all arrays in this chapter are assumed square. In correspondence with Eq. 1.2-6, the image array can be represented at some point $(m_1, m_2)$ as a sum of amplitude-weighted Dirac delta functions by the discrete sifting summation

$F(m_1, m_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\, \delta(m_1 - n_1 + 1,\, m_2 - n_2 + 1)$   (7.1-1)
  • 172. 162 SUPERPOSITION AND CONVOLUTION The term 1 if m1 = n 1 and m 2 = n 2 (7.1-2a)  δ ( m 1 – n 1 + 1, m 2 – n 2 + 1 ) =   0 otherwise (7.1-2b) is a discrete delta function. Now consider a spatial linear operator O { · } that pro- duces an output image array Q ( m 1, m 2 ) = O { F ( m 1, m 2 ) } (7.1-3) by a linear spatial combination of pixels within a neighborhood of ( m 1, m 2 ) . From the sifting summation of Eq. 7.1-1,   Q ( m 1, m 2 ) = O  ∑ ∑ F ( n 1, n 2 )δ ( m1 – n 1 + 1, m 2 – n 2 + 1 ) (7.1-4a)  n1 n2  or Q ( m 1, m 2 ) = ∑∑ F ( n 1, n 2 )O { δ ( m 1 – n 1 + 1, m 2 – n 2 + 1 ) } (7.1-4b) n1 n2 recognizing that O { · } is a linear operator and that F ( n 1, n 2 ) in the summation of Eq. 7.1-4a is a constant in the sense that it does not depend on ( m 1, m 2 ) . The term O { δ ( t 1, t 2 ) } for ti = m i – n i + 1 is the response at output coordinate ( m 1, m 2 ) to a unit amplitude input at coordinate ( n 1, n 2 ) . It is called the impulse response function array of the linear operator and is written as δ ( m 1 – n 1 + 1, m 2 – n 2 + 1 ; m 1, m 2 ) = O { δ ( t1, t2 ) } for 1 ≤ t1, t2 ≤ L (7.1-5) and is zero otherwise. For notational simplicity, the impulse response array is con- sidered to be square. In Eq. 7.1-5 it is assumed that the impulse response array is of limited spatial extent. This means that an output image pixel is influenced by input image pixels only within some finite area L × L neighborhood of the corresponding output image pixel. The output coordinates ( m 1, m 2 ) in Eq. 7.1-5 following the semicolon indicate that in the general case, called finite area superposition, the impulse response array can change form for each point ( m 1, m 2 ) in the processed array Q ( m 1, m 2 ). Follow- ing this nomenclature, the finite area superposition operation is defined as
  • 173. FINITE-AREA SUPERPOSITION AND CONVOLUTION 163 FIGURE 7.1-1. Relationships between input data, output data, and impulse response arrays for finite-area superposition; upper left corner justified array definition. Q ( m 1, m 2 ) = ∑∑ F ( n 1, n 2 )H ( m 1 – n 1 + 1, m 2 – n 2 + 1 ; m 1, m 2 ) (7.1-6) n 1 n2 The limits of the summation are MAX { 1, m i – L + 1 } ≤ n i ≤ MIN { N, m i } (7.1-7) where MAX { a, b } and MIN { a, b } denote the maximum and minimum of the argu- ments, respectively. Examination of the indices of the impulse response array at its extreme positions indicates that M = N + L - 1, and hence the processed output array Q is of larger dimension than the input array F. Figure 7.1-1 illustrates the geometry of finite-area superposition. If the impulse response array H is spatially invariant, the superposition operation reduces to the convolution operation. Q ( m 1, m 2 ) = ∑∑ F ( n 1, n 2 )H ( m 1 – n 1 + 1, m2 – n 2 + 1 ) (7.1-8) n1 n2 Figure 7.1-2 presents a graphical example of convolution with a 3 × 3 impulse response array. Equation 7.1-6 expresses the finite-area superposition operation in left-justified form in which the input and output arrays are aligned at their upper left corners. It is often notationally convenient to utilize a definition in which the output array is cen- tered with respect to the input array. This definition of centered superposition is given by
  • 174. 164 SUPERPOSITION AND CONVOLUTION FIGURE 7.1-2. Graphical example of finite-area convolution with a 3 × 3 impulse response array; upper left corner justified array definition. Q c ( j 1, j2 ) = ∑∑ F ( n 1, n 2 )H ( j 1 – n 1 + L c, j2 – n 2 + L c ; j1, j 2 ) (7.1-9) n 1 n2 where – ( L – 3 ) ⁄ 2 ≤ ji ≤ N + ( L – 1 ) ⁄ 2 and L c = ( L + 1 ) ⁄ 2 . The limits of the summa- tion are MAX { 1, j i – ( L – 1 ) ⁄ 2 } ≤ n i ≤ MIN { N, j i + ( L – 1 ) ⁄ 2 } (7.1-10) Figure 7.1-3 shows the spatial relationships between the arrays F, H, and Qc for cen- tered superposition with a 5 × 5 impulse response array. In digital computers and digital image processors, it is often convenient to restrict the input and output arrays to be of the same dimension. For such systems, Eq. 7.1-9 needs only to be evaluated over the range 1 ≤ j i ≤ N . When the impulse response
  • 175. FINITE-AREA SUPERPOSITION AND CONVOLUTION 165 FIGURE 7.1-3. Relationships between input data, output data, and impulse response arrays for finite-area superposition; centered array definition. array is located on the border of the input array, the product computation of Eq. 7.1-9 does not involve all of the elements of the impulse response array. This situa- tion is illustrated in Figure 7.1-3, where the impulse response array is in the upper left corner of the input array. The input array pixels “missing” from the computation are shown crosshatched in Figure 7.1-3. Several methods have been proposed to deal with this border effect. One method is to perform the computation of all of the impulse response elements as if the missing pixels are of some constant value. If the constant value is zero, the result is called centered, zero padded superposition. A variant of this method is to regard the missing pixels to be mirror images of the input array pixels, as indicated in the lower left corner of Figure 7.1-3. In this case the centered, reflected boundary superposition definition becomes Q c ( j 1, j2 ) = ∑∑ F ( n′ , n′ )H ( j 1 – n 1 + L c , j 2 – n 2 + L c ; j 1, j 2 ) 1 2 (7.1-11) n 1 n2 where the summation limits are ji – ( L – 1 ) ⁄ 2 ≤ ni ≤ ji + ( L – 1 ) ⁄ 2 (7.1-12)
and

$n'_i = 2 - n_i$ for $n_i \le 0$   (7.1-13a)
$n'_i = n_i$ for $1 \le n_i \le N$   (7.1-13b)
$n'_i = 2N - n_i$ for $n_i > N$   (7.1-13c)

In many implementations, the superposition computation is limited to the range $(L+1)/2 \le j_i \le N - (L-1)/2$, and the border elements of the $N \times N$ array $Q_c$ are set to zero. In effect, the superposition operation is computed only when the impulse response array is fully embedded within the confines of the input array. This region is described by the dashed lines in Figure 7.1-3. This form of superposition is called centered, zero boundary superposition.

If the impulse response array H is spatially invariant, the centered definition for convolution becomes

$Q_c(j_1, j_2) = \sum_{n_1} \sum_{n_2} F(n_1, n_2)\, H(j_1 - n_1 + L_c,\, j_2 - n_2 + L_c)$   (7.1-14)

The $3 \times 3$ impulse response array, which is called a small generating kernel (SGK), is fundamental to many image processing algorithms (1). When the SGK is totally embedded within the input data array, the general term of the centered convolution operation can be expressed explicitly as

$Q_c(j_1, j_2) = H(3,3) F(j_1 - 1, j_2 - 1) + H(3,2) F(j_1 - 1, j_2) + H(3,1) F(j_1 - 1, j_2 + 1)$
$\qquad + H(2,3) F(j_1, j_2 - 1) + H(2,2) F(j_1, j_2) + H(2,1) F(j_1, j_2 + 1)$
$\qquad + H(1,3) F(j_1 + 1, j_2 - 1) + H(1,2) F(j_1 + 1, j_2) + H(1,1) F(j_1 + 1, j_2 + 1)$   (7.1-15)

for $2 \le j_i \le N - 1$. In Chapter 9 it will be shown that convolution with arbitrary-size impulse response arrays can be achieved by sequential convolutions with SGKs.

The four different forms of superposition and convolution are each useful in various image processing applications. The upper left corner justified definition is appropriate for computing the correlation function between two images. The centered, zero padded and centered, reflected boundary definitions are generally employed for image enhancement filtering. Finally, the centered, zero boundary definition is used for the computation of spatial derivatives in edge detection. In this application, the derivatives are not meaningful in the border region.
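The centered convolution of Eqs. 7.1-14 and 7.1-15 and the boundary options described above translate directly into array code. The following Python sketch is illustrative only and is not the PIKS implementation: it performs centered 3 x 3 (SGK) convolution under zero-padded, reflected, and zero-boundary handling, with the 180-degree rotation of H implied by the $(j - n + L_c)$ indexing of Eq. 7.1-15.

import numpy as np

def sgk_convolve(F, H, boundary="zero_padded"):
    """Centered 3x3 convolution (Eqs. 7.1-14 / 7.1-15) -- an illustrative sketch.

    boundary: "zero_padded" or "reflected" pads the input so the output is N x N;
    "zero_boundary" computes only where the SGK is fully embedded and leaves
    the border of the output set to zero.
    """
    assert H.shape == (3, 3)
    N1, N2 = F.shape
    Q = np.zeros((N1, N2))
    if boundary == "zero_boundary":
        Fp, r0 = F, 1                      # no padding; skip a one-pixel border
    else:
        mode = "reflect" if boundary == "reflected" else "constant"
        Fp, r0 = np.pad(F, 1, mode=mode), 0
    Hr = H[::-1, ::-1]                     # 180-degree rotation implied by Eq. 7.1-15
    for j1 in range(r0, N1 - r0):
        for j2 in range(r0, N2 - r0):
            if boundary == "zero_boundary":
                Q[j1, j2] = np.sum(Hr * F[j1 - 1:j1 + 2, j2 - 1:j2 + 2])
            else:
                Q[j1, j2] = np.sum(Hr * Fp[j1:j1 + 3, j2:j2 + 3])
    return Q

F = np.ones((7, 7))
H = np.full((3, 3), 1.0 / 9.0)             # uniform SGK
for b in ("zero_padded", "reflected", "zero_boundary"):
    print(b, sgk_convolve(F, H, b)[0, 0])  # corner behavior differs with the boundary rule (cf. Fig. 7.1-4)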
  • 177. FINITE-AREA SUPERPOSITION AND CONVOLUTION 167 0.040 0.080 0.120 0.160 0.200 0.200 0.200 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.080 0.160 0.240 0.320 0.400 0.400 0.400 0.000 0.000 0.000 0.000 0.000 0.000 0.000 0.120 0.240 0.360 0.480 0.600 0.600 0.600 0.000 0.000 1.000 1.000 1.000 1.000 1.000 0.160 0.320 0.480 0.640 0.800 0.800 0.800 0.000 0.000 1.000 1.000 1.000 1.000 1.000 0.200 0.400 0.600 0.800 1.000 1.000 1.000 0.000 0.000 1.000 1.000 1.000 1.000 1.000 0.200 0.400 0.600 0.800 1.000 1.000 1.000 0.000 0.000 1.000 1.000 1.000 1.000 1.000 0.200 0.400 0.600 0.800 1.000 1.000 1.000 0.000 0.000 1.000 1.000 1.000 1.000 1.000 (a) Upper left corner justified (b) Centered, zero boundary 0.360 0.480 0.600 0.600 0.600 0.600 0.600 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.480 0.640 0.800 0.800 0.800 0.800 0.800 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.600 0.800 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.600 0.800 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.600 0.800 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.600 0.800 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 0.600 0.800 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 1.000 (c) Centered, zero padded (d) Centered, reflected FIGURE 7.1-4 Finite-area convolution boundary conditions, upper left corner of convolved image. Figure 7.1-4 shows computer printouts of pixels in the upper left corner of a convolved image for the four types of convolution boundary conditions. In this example, the source image is constant of maximum value 1.0. The convolution impulse response array is a 5 × 5 uniform array. 7.1.2. Finite-Area Superposition and Convolution: Vector-Space Formulation 2 If the arrays F and Q of Eq. 7.1-6 are represented in vector form by the N × 1 vec- 2 tor f and the M × 1 vector q, respectively, the finite-area superposition operation can be written as (2) q = Df (7.1-16) 2 2 where D is a M × N matrix containing the elements of the impulse response. It is convenient to partition the superposition operator matrix D into submatrices of dimension M × N . Observing the summation limits of Eq. 7.1-7, it is seen that D 1, 1 0 … 0 D 2, 1 D 2, 2 … 0 … … D = D L, 1 D L, 2 D M – L + 1, N (7.1-17) 0 D L + 1, 1 … … … … 0 … 0 D M, N
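The block-banded structure of Eq. 7.1-17 can be generated mechanically from the impulse response. The Python sketch below is illustrative only: it builds the finite-area convolution matrix D of Eq. 7.1-16 for a spatially invariant L x L impulse array, operating on a column-scanned image vector (the scanning convention here is an assumption of the sketch), and checks that q = Df has dimension M*M with M = N + L - 1, as for the 16 x 4 example of Figure 7.1-5a.

import numpy as np

def finite_area_operator(H, N):
    """Finite-area convolution matrix D of Eq. 7.1-16 (illustrative sketch).

    H is an L x L spatially invariant impulse array; the N x N input image is
    column scanned into a vector of length N**2, and the output vector has
    dimension M**2 with M = N + L - 1 (block structure of Eq. 7.1-17).
    """
    L = H.shape[0]
    M = N + L - 1
    D = np.zeros((M * M, N * N))
    for m1 in range(1, M + 1):
        for m2 in range(1, M + 1):
            for n1 in range(1, N + 1):
                for n2 in range(1, N + 1):
                    i1, i2 = m1 - n1 + 1, m2 - n2 + 1        # impulse response indices
                    if 1 <= i1 <= L and 1 <= i2 <= L:
                        row = (m2 - 1) * M + (m1 - 1)        # column-scanned output index
                        col = (n2 - 1) * N + (n1 - 1)        # column-scanned input index
                        D[row, col] = H[i1 - 1, i2 - 1]
    return D

N, L = 2, 3
H = np.arange(1.0, L * L + 1).reshape(L, L)
F = np.arange(1.0, N * N + 1).reshape(N, N)
D = finite_area_operator(H, N)
q = D @ F.flatten(order="F")                 # q = Df, Eq. 7.1-16
print(D.shape, q.reshape((N + L - 1, N + L - 1), order="F"))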
  • 178. 168 SUPERPOSITION AND CONVOLUTION 11 0 0 0 21 11 0 0 31 21 0 0 0 31 0 0 12 0 11 0 22 12 21 11 32 22 31 21 11 12 13 0 32 0 31 H = 21 22 23 D= 13 0 12 0 31 32 33 23 13 22 12 33 23 32 22 0 33 0 32 0 0 13 0 0 0 23 13 0 0 33 23 0 0 0 33 (a) (b) FIGURE 7.1-5 Finite-area convolution operators: (a) general impulse array, M = 4, N = 2, L = 3; (b) Gaussian-shaped impulse array, M = 16, N = 8, L = 9. The general nonzero term of D is then given by Dm ( m 1, n 1 ) = H ( m 1 – n 1 + 1, m 2 – n 2 + 1 ; m1, m 2 ) (7.1-18) 2 ,n 2 Thus, it is observed that D is highly structured and quite sparse, with the center band of submatrices containing stripes of zero-valued elements. If the impulse response is position invariant, the structure of D does not depend explicitly on the output array coordinate ( m 1, m2 ) . Also, Dm = Dm + 1 ,n 2 + 1 (7.1-19) 2 ,n 2 2 As a result, the columns of D are shifted versions of the first column. Under these conditions, the finite-area superposition operator is known as the finite-area convo- lution operator. Figure 7.1-5a contains a notational example of the finite-area con- volution operator for a 2 × 2 (N = 2) input data array, a 4 × 4 (M = 4) output data array, and a 3 × 3 (L = 3) impulse response array. The integer pairs (i, j) at each ele- ment of D represent the element (i, j) of H ( i, j ). The basic structure of D can be seen more clearly in the larger matrix depicted in Figure 7.l-5b. In this example, M = 16,
N = 8, L = 9, and the impulse response has a symmetrical Gaussian shape. Note that D is a 256 x 64 matrix in this example.

Following the same technique as that leading to Eq. 5.4-7, the matrix form of the superposition operator may be written as

$\mathbf{Q} = \sum_{m_2=1}^{M} \sum_{n_2=1}^{N} \mathbf{D}_{m_2, n_2} \mathbf{F} \mathbf{v}_{n_2} \mathbf{u}_{m_2}^T$   (7.1-20)

If the impulse response is spatially invariant and is of separable form such that

$\mathbf{H} = \mathbf{h}_C \mathbf{h}_R^T$   (7.1-21)

where $\mathbf{h}_R$ and $\mathbf{h}_C$ are column vectors representing row and column impulse responses, respectively, then

$\mathbf{D} = \mathbf{D}_C \otimes \mathbf{D}_R$   (7.1-22)

The matrices $\mathbf{D}_R$ and $\mathbf{D}_C$ are $M \times N$ banded matrices of the form

$\mathbf{D}_R = \begin{bmatrix} h_R(1) & 0 & \cdots & 0 \\ h_R(2) & h_R(1) & & \vdots \\ h_R(3) & h_R(2) & \ddots & 0 \\ \vdots & \vdots & & h_R(1) \\ h_R(L) & \vdots & & \vdots \\ 0 & h_R(L) & & \vdots \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & h_R(L) \end{bmatrix}$   (7.1-23)

The two-dimensional convolution operation may then be computed by sequential row and column one-dimensional convolutions. Thus

$\mathbf{Q} = \mathbf{D}_C \mathbf{F} \mathbf{D}_R^T$   (7.1-24)

In vector form, the general finite-area superposition or convolution operator requires $N^2 L^2$ operations if the zero-valued multiplications of D are avoided. The separable operator of Eq. 7.1-24 can be computed with only $NL(M + N)$ operations.
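The operation count advantage of the separable form is easy to demonstrate numerically: the two-dimensional result is obtained from two one-dimensional banded matrices. The following Python sketch is illustrative only; the impulse response values are arbitrary, and the row-major scan order used for the cross-check is an assumption of the sketch chosen so that the Kronecker ordering matches the form of Eq. 7.1-22 (the book's scanning convention is defined in Chapter 5).

import numpy as np

def finite_area_1d_operator(h, N):
    # M x N banded matrix of the form of Eq. 7.1-23 (first column holds h).
    L = len(h)
    M = N + L - 1
    D1 = np.zeros((M, N))
    for n in range(N):
        D1[n:n + L, n] = h
    return D1

# Separable impulse response H = h_C h_R^T (Eq. 7.1-21); values are arbitrary.
h_r = np.array([1.0, 2.0, 1.0])
h_c = np.array([1.0, 0.0, -1.0])
N = 8
F = np.random.default_rng(1).random((N, N))

D_r = finite_area_1d_operator(h_r, N)
D_c = finite_area_1d_operator(h_c, N)
Q = D_c @ F @ D_r.T                           # Eq. 7.1-24: sequential column and row passes

# Cross-check against the full operator D = D_C (kron) D_R acting on the
# scanned image vector (Eq. 7.1-22), using row-major scanning here.
D = np.kron(D_c, D_r)
print(np.allclose(D @ F.flatten(), Q.flatten()))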
  • 180. 170 SUPERPOSITION AND CONVOLUTION 7.2. SAMPLED IMAGE SUPERPOSITION AND CONVOLUTION Many applications in image processing require a discretization of the superposition integral relating the input and output continuous fields of a linear system. For exam- ple, image blurring by an optical system, sampling with a finite-area aperture or imaging through atmospheric turbulence, may be modeled by the superposition inte- gral equation ˜ ∞ ∞ ˜ ˜ G ( x, y ) = ∫–∞ ∫–∞ F ( α, β )J ( x, y ; α, β ) dα dβ (7.2-1a) ˜ ˜ where F ( x ,y ) and G ( x, y ) denote the input and output fields of a linear system, ˜ respectively, and the kernel J ( x, y ; α , β ) represents the impulse response of the linear system model. In this chapter, a tilde over a variable indicates that the spatial indices of the variable are bipolar; that is, they range from negative to positive spatial limits. In this formulation, the impulse response may change form as a function of its four indices: the input and output coordinates. If the linear system is space invariant, the output image field may be described by the convolution integral ˜ ∞ ∞ ˜ ˜ G ( x, y ) = ∫–∞ ∫–∞ F ( α, β )J ( x – α, y – β ) dα dβ (7.2-1b) For discrete processing, physical image sampling will be performed on the output image field. Numerical representation of the integral must also be performed in order to relate the physical samples of the output field to points on the input field. Numerical representation of a superposition or convolution integral is an impor- tant topic because improper representations may lead to gross modeling errors or numerical instability in an image processing application. Also, selection of a numer- ical representation algorithm usually has a significant impact on digital processing computational requirements. As a first step in the discretization of the superposition integral, the output image field is physically sampled by a ( 2J + 1 ) × ( 2J + 1 ) array of Dirac pulses at a resolu- tion ∆S to obtain an array whose general term is ˜ ˜ G ( j 1 ∆S, j 2 ∆S) = G ( x, y )δ ( x – j1 ∆S, y – j2 ∆S) (7.2-2) where – J ≤ ji ≤ J . Equal horizontal and vertical spacing of sample pulses is assumed for notational simplicity. The effect of finite area sample pulses can easily be incor- ˜ porated by replacing the impulse response with J ( x, y ; α, β ) ᭺ P ( – x ,– y ), where * P ( – x ,– y ) represents the pulse shape of the sampling pulse. The delta function may be brought under the integral sign of the superposition integral of Eq. 7.2-la to give ˜ ∞ ∞ ˜ ˜ G ( j 1 ∆S, j 2 ∆ S) = ∫–∞ ∫–∞ F ( α, β )J ( j1 ∆S, j 2 ∆S ; α, β ) dα dβ (7.2-3)
  • 181. SAMPLED IMAGE SUPERPOSITION AND CONVOLUTION 171 It should be noted that the physical sampling is performed on the observed image spatial variables (x, y); physical sampling does not affect the dummy variables of integration ( α, β ) . Next, the impulse response must be truncated to some spatial bounds. Thus, let ˜ J ( x, y ; α, β ) = 0 (7.2-4) for x > T and y > T . Then, j 1 ∆S + T j 2 ∆S + T ˜ ˜ ˜ G ( j 1 ∆S, j1 ∆S) = ∫j ∆S – T ∫ j ∆S – T 1 2 F ( α, β ) J ( j 1 ∆S, j 2 ∆S ; α, β ) dα dβ (7.2-5) Truncation of the impulse response is equivalent to multiplying the impulse response by a window function V(x, y), which is unity for x < T and y < T and zero elsewhere. By the Fourier convolution theorem, the Fourier spectrum of G(x, y) is equivalently convolved with the Fourier transform of V(x, y), which is a two- dimensional sinc function. This distortion of the Fourier spectrum of G(x, y) results in the introduction of high-spatial-frequency artifacts (a Gibbs phenomenon) at spa- tial frequency multiples of 2π ⁄ T . Truncation distortion can be reduced by using a shaped window, such as the Bartlett, Blackman, Hamming, or Hanning windows (3), which smooth the sharp cutoff effects of a rectangular window. This step is especially important for image restoration modeling because ill-conditioning of the superposition operator may lead to severe amplification of the truncation artifacts. In the next step of the discrete representation, the continuous ideal image array ˜ F ( α, β ) is represented by mesh points on a rectangular grid of resolution ∆I and dimension ( 2K + 1 ) × ( 2K + 1 ). This is not a physical sampling process, but merely an abstract numerical representation whose general term is described by ˜ ˜ F ( k 1 ∆I, k2 ∆I) = F ( α, β )δ ( α – k 1 ∆I, α – k 2 ∆I) (7.2-6) where K iL ≤ k i ≤ K iU , with K iU and K iL denoting the upper and lower index limits. If the ultimate objective is to estimate the continuous ideal image field by pro- cessing the physical observation samples, the mesh spacing ∆I should be fine enough to satisfy the Nyquist criterion for the ideal image. That is, if the spectrum of the ideal image is bandlimited and the limits are known, the mesh spacing should be set at the corresponding Nyquist spacing. Ideally, this will permit perfect interpola- ˜ ˜ tion of the estimated points F ( k 1 ∆I, k 2 ∆I) to reconstruct F ( x, y ). The continuous integration of Eq. 7.2-5 can now be approximated by a discrete summation by employing a quadrature integration formula (4). The physical image samples may then be expressed as K 1U K 2U ˜ ˜ ˜ ˜ G ( j 1 ∆ S, j 2 ∆S) = ∑ ∑ F ( k 1 ∆ I, k 2 ∆ I)W ( k 1, k 2 )J ( j 1 ∆ S, j 2 ∆ S ; k 1 ∆I, k 2 ∆I ) k 1 = K 1L k 2 = K 2L (7.2-7)
  • 182. 172 SUPERPOSITION AND CONVOLUTION ˜ where W ( k 1 , k 2 ) is a weighting coefficient for the particular quadrature formula employed. Usually, a rectangular quadrature formula is used, and the weighting coefficients are unity. In any case, it is notationally convenient to lump the weight- ing coefficient and the impulse response function together so that ˜ ˜ ˜ H ( j1 ∆S, j 2 ∆S; k 1 ∆I, k 2 ∆I) = W ( k 1, k 2 )J ( j 1 ∆S, j 2 ∆S ; k 1 ∆I, k 2 ∆I) (7.2-8) Then, K 1U K 2U ˜ ˜ ˜ G ( j 1 ∆S, j 2 ∆S) = ∑ ∑ F ( k 1 ∆I, k 2 ∆I )H ( j 1 ∆S, j 2 ∆S ; k 1 ∆I, k 2 ∆I ) (7.2-9) k 1 = K 1L k 2 = K 2L ˜ Again, it should be noted that H is not spatially discretized; the function is simply evaluated at its appropriate spatial argument. The limits of summation of Eq. 7.2-9 are ∆S T ∆S T K iL = j i ------ – ----- - - K iU = ji ------ + ----- - - (7.2-10) ∆I ∆I N ∆I ∆I N where [ · ] N denotes the nearest integer value of the argument. Figure 7.2-1 provides an example relating actual physical sample values ˜ ˜ G ( j1 ∆ S, j2 ∆S) to mesh points F ( k 1 ∆I, k 2 ∆I ) on the ideal image field. In this exam- ple, the mesh spacing is twice as large as the physical sample spacing. In the figure, FIGURE 7.2-1. Relationship of physical image samples to mesh points on an ideal image field for numerical representation of a superposition integral.
  • 183. SAMPLED IMAGE SUPERPOSITION AND CONVOLUTION 173 FIGURE 7.2-2. Relationship between regions of physical samples and mesh points for numerical representation of a superposition integral. the values of the impulse response function that are utilized in the summation of Eq. 7.2-9 are represented as dots. An important observation should be made about the discrete model of Eq. 7.2-9 for a sampled superposition integral; the physical area of the ideal image field ˜ F ( x, y ) containing mesh points contributing to physical image samples is larger ˜ than the sample image G ( j 1 ∆S, j2 ∆S) regardless of the relative number of physical samples and mesh points. The dimensions of the two image fields, as shown in Figure 7.2-2, are related by J ∆S + T = K ∆I (7.2-11) to within an accuracy of one sample spacing. At this point in the discussion, a discrete and finite model for the sampled super- ˜ position integral has been obtained in which the physical samples G ( j1 ∆S, j2 ∆S) are related to points on an ideal image field F 1˜ ( k ∆I, k ∆I) by a discrete mathemati- 2 cal superposition operation. This discrete superposition is an approximation to con- tinuous superposition because of the truncation of the impulse response function ˜ J ( x, y ; α, β ) and quadrature integration. The truncation approximation can, of course, be made arbitrarily small by extending the bounds of definition of the impulse response, but at the expense of large dimensionality. Also, the quadrature integration approximation can be improved by use of complicated formulas of quadrature, but again the price paid is computational complexity. It should be noted, however, that discrete superposition is a perfect approximation to continuous super- position if the spatial functions of Eq. 7.2-1 are all bandlimited and the physical
  • 184. 174 SUPERPOSITION AND CONVOLUTION sampling and numerical representation periods are selected to be the corresponding Nyquist period (5). It is often convenient to reformulate Eq. 7.2-9 into vector-space form. Toward ˜ ˜ this end, the arrays G and F are reindexed to M × M and N × N arrays, respectively, such that all indices are positive. Let ˜ F ( n 1 ∆I, n2 ∆I ) = F ( k 1 ∆I, k2 ∆I ) (7.2-12a) where n i = k i + K + 1 and let G ( m 1 ∆S, m 2 ∆S) = G ( j 1 ∆S, j2 ∆S ) (7.2-12b) where m i = j i + J + 1 . Also, let the impulse response be redefined such that ˜ H ( m1 ∆S, m 2 ∆S ; n 1 ∆ I, n 2 ∆I ) = H ( j1 ∆S, j2 ∆S ; k 1 ∆I, k 2 ∆I ) (7.2-12c) Figure 7.2-3 illustrates the geometrical relationship between these functions. The discrete superposition relationship of Eq. 7.2-9 for the shifted arrays becomes N 1U N 2U G ( m 1 ∆S, m 2 ∆S) = ∑ ∑ F ( n 1 ∆I, n 2 ∆I ) H ( m1 ∆ S, m 2 ∆S ; n 1 ∆I, n 2 ∆I ) n 1 = N 1L n 2 = N 2L (7.2-13) for ( 1 ≤ m i ≤ M ) where N iL = m i ∆S ------ - N iU = m i ∆S + 2T ------ ----- - - ∆I N ∆I ∆I N Following the techniques outlined in Chapter 5, the vectors g and f may be formed by column scanning the matrices G and F to obtain g = Bf (7.2-14) 2 2 where B is a M × N matrix of the form B 1, 1 B1, 2 … B ( 1, L ) 0… 0 … … … 0 B 2, 2 B = (7.2-15) 0 … 0 … 0 B M, N – L + 1 … B M, N
  • 185. SAMPLED IMAGE SUPERPOSITION AND CONVOLUTION 175 FIGURE 7.2-3. Sampled image arrays. The general term of B is defined as Bm 2, n 2 ( m 1, n 1 ) = H ( m1 ∆S, m 2 ∆S ; n 1 ∆I, n 2 ∆I ) (7.2-16) for 1 ≤ m i ≤ M and m i ≤ n i ≤ m i + L – 1 where L = [ 2T ⁄ ∆I ] N represents the nearest odd integer dimension of the impulse response in resolution units ∆I . For descrip- tional simplicity, B is called the blur matrix of the superposition integral. If the impulse response function is translation invariant such that H ( m 1 ∆S, m 2 ∆S ; n 1 ∆I, n 2 ∆I ) = H ( m 1 ∆S – n 1 ∆I, m2 ∆S – n 2 ∆I ) (7.2-17) then the discrete superposition operation of Eq. 7.2-13 becomes a discrete convolu- tion operation of the form N 1U N 2U G ( m 1 ∆S, m 2 ∆S ) = ∑ ∑ F ( n 1 ∆I, n 2 ∆I )H ( m1 ∆S – n 1 ∆I, m 2 ∆S – n 2 ∆I ) n 1 = N 1L n 2 = N 2L (7.2-18) If the physical sample and quadrature mesh spacings are equal, the general term of the blur matrix assumes the form Bm 2, n 2 ( m 1, n 1 ) = H ( m 1 – n 1 + L, m 2 – n 2 + L ) (7.2-19)
  • 186. 176 SUPERPOSITION AND CONVOLUTION 11 12 13 H = 21 22 23 31 32 33 33 23 13 0 32 22 12 0 31 21 11 0 0 0 0 0 0 33 23 13 0 32 22 12 0 31 21 11 0 0 0 0 B= 0 0 0 0 33 23 13 0 32 22 12 0 31 21 11 0 0 0 0 0 0 33 23 13 0 32 22 12 0 31 21 11 (a) (b) FIGURE 7.2-4. Sampled infinite area convolution operators: (a) General impulse array, M = 2, N = 4, L = 3; (b) Gaussian-shaped impulse array, M = 8, N = 16, L = 9. In Eq. 7.2-19, the mesh spacing variable ∆I is understood. In addition, Bm = Bm (7.2-20) 2, n2 2 + 1, n 2 + 1 Consequently, the rows of B are shifted versions of the first row. The operator B then becomes a sampled infinite area convolution operator, and the series form rep- resentation of Eq. 7.2-19 reduces to m1 + L – 1 m 2 + L – 1 G ( m 1 ∆S, m 2 ∆S ) = ∑ ∑ F ( n 1, n 2 )H ( m 1 – n 1 + L, m2 – n 2 + L ) (7.2-21) n1 = m1 n2 = m2 where the sampling spacing is understood. Figure 7.2-4a is a notational example of the sampled image convolution operator for a 4 × 4 (N = 4) data array, a 2 × 2 (M = 2) filtered data array, and a 3 × 3 (L = 3) impulse response array. An extension to larger dimension is shown in Figure 7.2-4b for M = 8, N = 16, L = 9 and a Gaussian-shaped impulse response. When the impulse response is spatially invariant and orthogonally separable, B = BC ⊗ BR (7.2-22) where B R and B C are M × N matrices of the form
  • 187. CIRCULANT SUPERPOSITION AND CONVOLUTION 177 hR ( L ) hR ( L – 1 ) … hR ( 1 ) 0 … 0 0 hR ( L ) … BR = (7.2-23) 0 … 0 … 0 hR ( L ) … hR ( 1 ) The two-dimensional convolution operation then reduces to sequential row and col- umn convolutions of the matrix form of the image array. Thus T G = B C FB R (7.2-24) 2 2 The superposition or convolution operator expressed in vector form requires M L operations if the zero multiplications of B are avoided. A separable convolution operator can be computed in matrix form with only ML ( M + N ) operations. 7.3. CIRCULANT SUPERPOSITION AND CONVOLUTION In circulant superposition (2), the input data, the processed output, and the impulse response arrays are all assumed spatially periodic over a common period. To unify the presentation, these arrays will be defined in terms of the spatially limited arrays considered previously. First, let the N × N data array F ( n1 ,n 2 ) be embedded in the upper left corner of a J × J array ( J > N ) of zeros, giving  F ( n 1, n 2 ) for 1 ≤ n i ≤ N (7.3-1a)   FE ( n 1, n 2 ) =     0 for N + 1 ≤ n i ≤ J (7.3-1b) In a similar manner, an extended impulse response array is created by embedding the spatially limited impulse array in a J × J matrix of zeros. Thus, let H(l , l ; m , m ) for 1 ≤ li ≤ L (7.3-2a)  1 2 1 2  H E ( l 1, l 2 ; m 1, m 2 ) =    0 for L + 1 ≤ l i ≤ J (7.3-2b) 
  • 188. 178 SUPERPOSITION AND CONVOLUTION Periodic arrays F E ( n 1, n 2 ) and H E ( l 1, l 2 ; m 1, m 2 ) are now formed by replicating the extended arrays over the spatial period J. Then, the circulant superposition of these functions is defined as J J KE ( m 1, m 2 ) = ∑ ∑ F E ( n 1, n 2 )H E ( m 1 – n 1 + 1, m 2 – n 2 + 1 ; m 1, m 2) (7.3-3) n1 = 1 n2 = 1 Similarity of this equation with Eq. 7.1-6 describing finite-area superposition is evi- dent. In fact, if J is chosen such that J = N + L – 1, the terms F E ( n 1, n 2 ) = F ( n 1, n 2 ) for 1 ≤ ni ≤ N . The similarity of the circulant superposition operation and the sam- pled image superposition operation should also be noted. These relations become clearer in the vector-space representation of the circulant superposition operation. 2 Let the arrays FE and KE be expressed in vector form as the J × 1 vectors fE and kE, respectively. Then, the circulant superposition operator can be written as k E = CfE (7.3-4) 2 2 where C is a J × J matrix containing elements of the array HE. The circulant superposition operator can then be conveniently expressed in terms of J × J subma- trices Cmn as given by C 1, 1 0 0 … 0 C 1, J – L + 2 … C 1, J C 2, 1 C 2, 2 0 … 0 0 … … · 0 C L – 1, J … C 2, 1 C L, 2 0 … 0 C = (7.3-5) 0 C L + 1, 2 … … … 0 … 0 … 0 C J, J – L + 1 C J, J – L + 2 … C J, J where Cm n 2 ( m 1, n 1 ) = H E ( k 1, k 2 ; m 1, m 2 ) (7.3-6) 2,
  • 189. CIRCULANT SUPERPOSITION AND CONVOLUTION 179 11 12 13 H = 21 22 23 31 32 33 11 0 31 21 0 0 0 0 13 0 33 23 12 0 32 22 21 11 0 31 0 0 0 0 23 13 0 33 22 12 0 32 31 21 11 0 0 0 0 0 33 23 13 0 32 22 12 0 0 31 21 11 0 0 0 0 0 33 23 13 0 32 22 12 12 0 32 22 11 0 31 21 0 0 0 0 13 0 33 23 22 12 0 32 21 11 0 31 0 0 0 0 23 13 0 33 32 22 12 0 31 21 11 0 0 0 0 0 33 23 13 0 0 32 22 12 0 31 21 11 0 0 0 0 0 33 23 13 C= 13 0 33 23 12 0 32 22 11 0 31 21 0 0 0 0 23 13 0 33 22 12 0 32 21 11 0 31 0 0 0 0 33 23 13 0 32 22 12 0 31 21 11 0 0 0 0 0 0 33 23 13 0 32 22 12 0 31 21 11 0 0 0 0 0 0 0 0 13 0 33 23 12 0 32 22 11 0 31 21 0 0 0 0 23 13 0 33 22 12 0 32 21 11 0 31 0 0 0 0 33 23 13 0 32 22 12 0 31 21 11 0 0 0 0 0 0 33 23 13 0 32 22 12 0 31 21 11 (a) (b) FIGURE 7.3-1. Circulant convolution operators: (a) General impulse array, J = 4, L = 3; (b) Gaussian-shaped impulse array, J = 16, L = 9.
  • 190. 180 SUPERPOSITION AND CONVOLUTION for 1 ≤ n i ≤ J and 1 ≤ m i ≤ J with k i = ( m i – n i + 1 ) modulo J and HE(0, 0) = 0. It should be noted that each row and column of C contains L nonzero submatrices. If the impulse response array is spatially invariant, then Cm = Cm (7.3-7) 2, n2 2 + 1, n 2 + 1 and the submatrices of the rows (columns) can be obtained by a circular shift of the first row (column). Figure 7.3-la illustrates the circulant convolution operator for 16 × 16 (J = 4) data and filtered data arrays and for a 3 × 3 (L = 3) impulse response array. In Figure 7.3-lb, the operator is shown for J = 16 and L = 9 with a Gaussian- shaped impulse response. Finally, when the impulse response is spatially invariant and orthogonally separa- ble, C = CC ⊗ CR (7.3-8) where C R and CC are J × J matrices of the form hR ( 1 ) 0 … 0 h R ( L ) … hR ( 3 ) hR ( 2 ) hR ( 2 ) hR ( 1 ) … 0 0 hR ( 3 ) … … … … hR ( L – 1 ) … 0 hR ( L ) (7.3-9) CR = hR ( L ) hR ( L – 1 ) 0 0 hR ( L ) … 0 … 0 … 0 hR ( L ) … … h (2) h (1) R R Two-dimensional circulant convolution may then be computed as T KE = C C FE C R (7.3-10) 7.4. SUPERPOSITION AND CONVOLUTION OPERATOR RELATIONSHIPS The elements of the finite-area superposition operator D and the elements of the sampled image superposition operator B can be extracted from circulant superposi- tion operator C by use of selection matrices defined as (2)
  • 191. SUPERPOSITION AND CONVOLUTION OPERATOR RELATIONSHIPS 181 (K) S1 J = IK 0 (7.4-1a) (K ) S2 J = 0A IK 0 (7.4-1b) (K) (K) where S1 J and S2 J are K × J matrices, IK is a K × K identity matrix, and 0 A is a K × L – 1 matrix. For future reference, it should be noted that the generalized inverses of S1 and S2 and their transposes are (K) – ( K) T [ S1 J ] = [ S1 J ] (7.4-2a) ( K) T – K [ [ S1 J ] ] = S1 J (7.4-2b) ( K) – (K) T [ S2 J ] = [ S2 J ] (7.4-2c) ( K) T – K [ [ S2 J ] ] = S2 J (7.4-2d) Examination of the structure of the various superposition operators indicates that (M) (M) ( N) (N) T D = [ S1 J ⊗ S1 J ]C [ S1 J ⊗ S1 J ] (7.4-3a) (M) (M) ( N) ( N) T B = [ S2 J ⊗ S2 J ]C [ S1 J ⊗ S1 J ] (7.4-3b) That is, the matrix D is obtained by extracting the first M rows and N columns of sub- matrices Cmn of C. The first M rows and N columns of each submatrix are also extracted. A similar explanation holds for the extraction of B from C. In Figure 7.3-1, the elements of C to be extracted to form D and B are indicated by boxes. From the definition of the extended input data array of Eq. 7.3-1, it is obvious that the spatially limited input data vector f can be obtained from the extended data vector fE by the selection operation (N) (N ) f = [ S1 J ⊗ S1 J ]f E (7.4-4a) and furthermore, ( N) ( N) T fE = [ S1 J ⊗ S1 J ] f (7.4-4b)
  • 192. 182 SUPERPOSITION AND CONVOLUTION It can also be shown that the output vector for finite-area superposition can be obtained from the output vector for circulant superposition by the selection opera- tion (M) (M) q = [ S1 J ⊗ S1 J ]k E (7.4-5a) The inverse relationship also exists in the form (M) (M) T k E = [ S1J ⊗ S1 J ] q (7.4-5b) For sampled image superposition (M) (M) g = [ S2 J ⊗ S2 J ]k E (7.4-6) but it is not possible to obtain kE from g because of the underdeterminacy of the sampled image superposition operator. Expressing both q and kE of Eq. 7.4-5a in matrix form leads to M J (M) (M) ∑ ∑ Mm [ S1J T T Q = ⊗ S1 J ]N n K E v n u m (7.4-7) m=1 n=1 As a result of the separability of the selection operator, Eq. 7.4-7 reduces to (M) (M) T Q = [ S1 J ]K E [ S1 J ] (7.4-8) Similarly, for Eq. 7.4-6 describing sampled infinite-area superposition, FIGURE 7.4-1. Location of elements of processed data Q and G from KE.
  • 193. REFERENCES 183 (M) (M) T G = [ S2 J ]K E [ S2 J ] (7.4-9) Figure 7.4-1 illustrates the locations of the elements of G and Q extracted from KE for finite-area and sampled infinite-area superposition. In summary, it has been shown that the output data vectors for either finite-area or sampled image superposition can be obtained by a simple selection operation on the output data vector of circulant superposition. Computational advantages that can be realized from this result are considered in Chapter 9. REFERENCES 1. J. F. Abramatic and O. D. Faugeras, “Design of Two-Dimensional FIR Filters from Small Generating Kernels,” Proc. IEEE Conference on Pattern Recognition and Image Processing, Chicago, May 1978. 2. W. K. Pratt, “Vector Formulation of Two Dimensional Signal Processing Operations,” Computer Graphics and Image Processing, 4, 1, March 1975, 1–24. 3. A. V. Oppenheim and R. W. Schaefer (Contributor), Digital Signal Processing, Prentice Hall, Englewood Cliffs, NJ, 1975. 4. T. R. McCalla, Introduction to Numerical Methods and FORTRAN Programming, Wiley, New York, 1967. 5. A. Papoulis, Systems and Transforms with Applications in Optics, 2nd ed., McGraw- Hill, New York, 1981.
8

UNITARY TRANSFORMS

Two-dimensional unitary transforms have found two major applications in image processing. Transforms have been utilized to extract features from images. For example, with the Fourier transform, the average value or dc term is proportional to the average image amplitude, and the high-frequency terms (ac terms) give an indication of the amplitude and orientation of edges within an image. Dimensionality reduction in computation is a second image processing application. Stated simply, those transform coefficients that are small may be excluded from processing operations, such as filtering, without much loss in processing accuracy. Another application, in the field of image coding, is transform image coding, in which a bandwidth reduction is achieved by discarding or grossly quantizing low-magnitude transform coefficients. In this chapter we consider the properties of unitary transforms commonly used in image processing.

8.1. GENERAL UNITARY TRANSFORMS

A unitary transform is a specific type of linear transformation in which the basic linear operation of Eq. 5.4-1 is exactly invertible and the operator kernel satisfies certain orthogonality conditions (1,2). The forward unitary transform of the $N_1 \times N_2$ image array $F(n_1, n_2)$ results in an $N_1 \times N_2$ transformed image array as defined by

$\mathcal{F}(m_1, m_2) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2)\, A(n_1, n_2;\, m_1, m_2)$   (8.1-1)
where $A(n_1, n_2; m_1, m_2)$ represents the forward transform kernel. A reverse or inverse transformation provides a mapping from the transform domain to the image space as given by

$$F(n_1, n_2) = \sum_{m_1=1}^{N_1} \sum_{m_2=1}^{N_2} \mathcal{F}(m_1, m_2)\, B(n_1, n_2; m_1, m_2)$$   (8.1-2)

where $B(n_1, n_2; m_1, m_2)$ denotes the inverse transform kernel. The transformation is unitary if the following orthonormality conditions are met:

$$\sum_{m_1} \sum_{m_2} A(n_1, n_2; m_1, m_2)\, A^*(j_1, j_2; m_1, m_2) = \delta(n_1 - j_1, n_2 - j_2)$$   (8.1-3a)

$$\sum_{m_1} \sum_{m_2} B(n_1, n_2; m_1, m_2)\, B^*(j_1, j_2; m_1, m_2) = \delta(n_1 - j_1, n_2 - j_2)$$   (8.1-3b)

$$\sum_{n_1} \sum_{n_2} A(n_1, n_2; m_1, m_2)\, A^*(n_1, n_2; k_1, k_2) = \delta(m_1 - k_1, m_2 - k_2)$$   (8.1-3c)

$$\sum_{n_1} \sum_{n_2} B(n_1, n_2; m_1, m_2)\, B^*(n_1, n_2; k_1, k_2) = \delta(m_1 - k_1, m_2 - k_2)$$   (8.1-3d)

The transformation is said to be separable if its kernels can be written in the form

$$A(n_1, n_2; m_1, m_2) = A_C(n_1, m_1)\, A_R(n_2, m_2)$$   (8.1-4a)

$$B(n_1, n_2; m_1, m_2) = B_C(n_1, m_1)\, B_R(n_2, m_2)$$   (8.1-4b)

where the kernel subscripts indicate row and column one-dimensional transform operations. A separable two-dimensional unitary transform can be computed in two steps. First, a one-dimensional transform is taken along each column of the image, yielding

$$P(m_1, n_2) = \sum_{n_1=1}^{N_1} F(n_1, n_2)\, A_C(n_1, m_1)$$   (8.1-5)

Next, a second one-dimensional unitary transform is taken along each row of $P(m_1, n_2)$, giving

$$\mathcal{F}(m_1, m_2) = \sum_{n_2=1}^{N_2} P(m_1, n_2)\, A_R(n_2, m_2)$$   (8.1-6)
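In matrix form, the two passes of Eqs. 8.1-5 and 8.1-6 amount to premultiplying the image matrix by a column transform matrix and postmultiplying by the transpose of a row transform matrix (Eq. 8.1-13a, which follows). The sketch below is a minimal NumPy illustration of this two-pass computation; the unitary DFT kernels used here are only a convenient example, and the helper name is illustrative.

import numpy as np

def separable_unitary_transform(F, A_C, A_R):
    """Two-pass separable 2D unitary transform (Eqs. 8.1-5 and 8.1-6).

    A_C acts down the columns and A_R along the rows, so the result
    equals A_C @ F @ A_R.T (the matrix form of Eq. 8.1-13a).
    """
    P = A_C @ F        # column pass
    return P @ A_R.T   # row pass

# Example kernels: unitary DFT matrices of size N1 and N2.
N1, N2 = 8, 8
A_C = np.fft.fft(np.eye(N1)) / np.sqrt(N1)
A_R = np.fft.fft(np.eye(N2)) / np.sqrt(N2)

F = np.random.rand(N1, N2)
F_t = separable_unitary_transform(F, A_C, A_R)

# Unitarity: the inverse kernels are the conjugate transposes (Eq. 8.1-10).
F_back = separable_unitary_transform(F_t, A_C.conj().T, A_R.conj().T)
assert np.allclose(F_back, F)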
Unitary transforms can conveniently be expressed in vector-space form (3). Let $F$ and $f$ denote the matrix and vector representations of an image array, and let $\mathcal{F}$ and $\tilde{f}$ be the matrix and vector forms of the transformed image. Then, the two-dimensional unitary transform written in vector form is given by

$$\tilde{f} = A f$$   (8.1-7)

where $A$ is the forward transformation matrix. The reverse transform is

$$f = B \tilde{f}$$   (8.1-8)

where $B$ represents the inverse transformation matrix. It is obvious then that

$$B = A^{-1}$$   (8.1-9)

For a unitary transformation, the matrix inverse is given by

$$A^{-1} = A^{*T}$$   (8.1-10)

and $A$ is said to be a unitary matrix. A real unitary matrix is called an orthogonal matrix. For such a matrix,

$$A^{-1} = A^{T}$$   (8.1-11)

If the transform kernels are separable such that

$$A = A_C \otimes A_R$$   (8.1-12)

where $A_R$ and $A_C$ are row and column unitary transform matrices, then the transformed image matrix can be obtained from the image matrix by

$$\mathcal{F} = A_C F A_R^{T}$$   (8.1-13a)

The inverse transformation is given by

$$F = B_C \mathcal{F} B_R^{T}$$   (8.1-13b)
where $B_C = A_C^{-1}$ and $B_R = A_R^{-1}$. Separable unitary transforms can also be expressed in a hybrid series–vector space form as a sum of vector outer products. Let $a_C(n_1)$ and $a_R(n_2)$ represent rows $n_1$ and $n_2$ of the unitary matrices $A_C$ and $A_R$, respectively. Then, it is easily verified that

$$\mathcal{F} = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2)\, a_C(n_1)\, a_R^T(n_2)$$   (8.1-14a)

Similarly,

$$F = \sum_{m_1=1}^{N_1} \sum_{m_2=1}^{N_2} \mathcal{F}(m_1, m_2)\, b_C(m_1)\, b_R^T(m_2)$$   (8.1-14b)

where $b_C(m_1)$ and $b_R(m_2)$ denote rows $m_1$ and $m_2$ of the unitary matrices $B_C$ and $B_R$, respectively. The vector outer products of Eq. 8.1-14 form a series of matrices, called basis matrices, that provide matrix decompositions of the image matrix $F$ or its unitary transformation $\mathcal{F}$.

There are several ways in which a unitary transformation may be viewed. An image transformation can be interpreted as a decomposition of the image data into a generalized two-dimensional spectrum (4). Each spectral component in the transform domain corresponds to the amount of energy of the spectral function within the original image. In this context, the concept of frequency may now be generalized to include transformations by functions other than sine and cosine waveforms. This type of generalized spectral analysis is useful in the investigation of specific decompositions that are best suited for particular classes of images. Another way to visualize an image transformation is to consider the transformation as a multidimensional rotation of coordinates. One of the major properties of a unitary transformation is that measure is preserved. For example, the mean-square difference between two images is equal to the mean-square difference between the unitary transforms of the images. A third approach to the visualization of image transformation is to consider Eq. 8.1-2 as a means of synthesizing an image with a set of two-dimensional mathematical functions $B(n_1, n_2; m_1, m_2)$ for a fixed transform domain coordinate $(m_1, m_2)$. In this interpretation, the kernel $B(n_1, n_2; m_1, m_2)$ is called a two-dimensional basis function and the transform coefficient $\mathcal{F}(m_1, m_2)$ is the amplitude of the basis function required in the synthesis of the image.

In the remainder of this chapter, to simplify the analysis of two-dimensional unitary transforms, all image arrays are considered square of dimension $N$. Furthermore, when expressing transformation operations in series form, as in Eqs. 8.1-1 and 8.1-2, the indices are renumbered and renamed. Thus the input image array is denoted by $F(j, k)$ for $j, k = 0, 1, 2, \ldots, N-1$, and the transformed image array is represented by $\mathcal{F}(u, v)$ for $u, v = 0, 1, 2, \ldots, N-1$. With these definitions, the forward unitary transform becomes
$$\mathcal{F}(u, v) = \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j, k)\, A(j, k; u, v)$$   (8.1-15a)

and the inverse transform is

$$F(j, k) = \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \mathcal{F}(u, v)\, B(j, k; u, v)$$   (8.1-15b)

8.2. FOURIER TRANSFORM

The discrete two-dimensional Fourier transform of an image array is defined in series form as (5–10)

$$\mathcal{F}(u, v) = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j, k) \exp\left\{ \frac{-2\pi i}{N}(uj + vk) \right\}$$   (8.2-1a)

where $i = \sqrt{-1}$, and the discrete inverse transform is given by

$$F(j, k) = \frac{1}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} \mathcal{F}(u, v) \exp\left\{ \frac{2\pi i}{N}(uj + vk) \right\}$$   (8.2-1b)

The indices $(u, v)$ are called the spatial frequencies of the transformation in analogy with the continuous Fourier transform. It should be noted that Eq. 8.2-1 is not universally accepted by all authors; some prefer to place all scaling constants in the inverse transform equation, while still others employ a reversal in the sign of the kernels. Because the transform kernels are separable and symmetric, the two-dimensional transforms can be computed as sequential row and column one-dimensional transforms. The basis functions of the transform are complex exponentials that may be decomposed into sine and cosine components. The resulting Fourier transform pairs then become

$$A(j, k; u, v) = \exp\left\{ \frac{-2\pi i}{N}(uj + vk) \right\} = \cos\left\{ \frac{2\pi}{N}(uj + vk) \right\} - i \sin\left\{ \frac{2\pi}{N}(uj + vk) \right\}$$   (8.2-2a)

$$B(j, k; u, v) = \exp\left\{ \frac{2\pi i}{N}(uj + vk) \right\} = \cos\left\{ \frac{2\pi}{N}(uj + vk) \right\} + i \sin\left\{ \frac{2\pi}{N}(uj + vk) \right\}$$   (8.2-2b)

Figure 8.2-1 shows plots of the sine and cosine components of the one-dimensional Fourier basis functions for $N = 16$. It should be observed that the basis functions are a rough approximation to continuous sinusoids only for low frequencies; in fact, the
  • 199. 190 UNITARY TRANSFORMS FIGURE 8.2-1 Fourier transform basis functions, N = 16. highest-frequency basis function is a square wave. Also, there are obvious redun- dancies between the sine and cosine components. The Fourier transform plane possesses many interesting structural properties. The spectral component at the origin of the Fourier domain N–1 N–1 1 F ( 0, 0 ) = --- N - ∑ ∑ F ( j, k ) (8.2-3) j=0 k=0 is equal to N times the spatial average of the image plane. Making the substitutions u = u + mN , v = v + nN in Eq. 8.2-1, where m and n are constants, results in
  • 200. FOURIER TRANSFORM 191 FIGURE 8.2-2. Periodic image and Fourier transform arrays. N–1 N–1 1  – 2πi  F ( u + mN, v + nN ) = --- N - ∑ ∑ F ( j, k ) exp  ----------- ( uj + vk )  exp { –2πi ( mj + nk ) }  N  j=0 k=0 (8.2-4) For all integer values of m and n, the second exponential term of Eq. 8.2-5 assumes a value of unity, and the transform domain is found to be periodic. Thus, as shown in Figure 8.2-2a, F ( u + mN, v + nN ) = F ( u, v ) (8.2-5) for m, n = 0, ± 1, ± 2, … . The two-dimensional Fourier transform of an image is essentially a Fourier series representation of a two-dimensional field. For the Fourier series representation to be valid, the field must be periodic. Thus, as shown in Figure 8.2-2b, the original image must be considered to be periodic horizontally and vertically. The right side of the image therefore abuts the left side, and the top and bottom of the image are adjacent. Spatial frequencies along the coordinate axes of the transform plane arise from these transitions. If the image array represents a luminance field, F ( j, k ) will be a real positive function. However, its Fourier transform will, in general, be complex. Because the 2 transform domain contains 2N components, the real and imaginary, or phase and magnitude components, of each coefficient, it might be thought that the Fourier transformation causes an increase in dimensionality. This, however, is not the case because F ( u, v ) exhibits a property of conjugate symmetry. From Eq. 8.2-4, with m and n set to integer values, conjugation yields
  • 201. 192 UNITARY TRANSFORMS FIGURE 8.2-3. Fourier transform frequency domain. N–1 N–1 1  – 2πi  F * ( u + mN, v + nN ) = --- N - ∑ ∑ F ( j, k ) exp  ----------- ( uj + vk )   N  (8.2-6) j=0 k=0 By the substitution u = – u and v = – v it can be shown that F ( u, v ) = F * ( – u + mN, – v + nN ) (8.2-7) for n = 0, ± 1, ±2, … . As a result of the conjugate symmetry property, almost one- half of the transform domain samples are redundant; that is, they can be generated from other transform samples. Figure 8.2-3 shows the transform plane with a set of redundant components crosshatched. It is possible, of course, to choose the left half- plane samples rather than the upper plane samples as the nonredundant set. Figure 8.2-4 shows a monochrome test image and various versions of its Fourier transform, as computed by Eq. 8.2-1a, where the test image has been scaled over unit range 0.0 ≤ F ( j, k ) ≤ 1.0. Because the dynamic range of transform components is much larger than the exposure range of photographic film, it is necessary to com- press the coefficient values to produce a useful display. Amplitude compression to a unit range display array D ( u, v ) can be obtained by clipping large-magnitude values according to the relation
FIGURE 8.2-4. Fourier transform of the smpte_girl_luma image: (a) original; (b) clipped magnitude, nonordered; (c) log magnitude, nonordered; (d) log magnitude, ordered.

$$D(u, v) = 1.0 \qquad \text{if } |\mathcal{F}(u, v)| \ge c\, \mathcal{F}_{\max}$$   (8.2-8a)

$$D(u, v) = \frac{|\mathcal{F}(u, v)|}{c\, \mathcal{F}_{\max}} \qquad \text{if } |\mathcal{F}(u, v)| < c\, \mathcal{F}_{\max}$$   (8.2-8b)

where $0.0 < c \le 1.0$ is the clipping factor and $\mathcal{F}_{\max}$ is the maximum coefficient magnitude. Another form of amplitude compression is to take the logarithm of each component as given by

$$D(u, v) = \frac{\log\{ a + b\, |\mathcal{F}(u, v)| \}}{\log\{ a + b\, \mathcal{F}_{\max} \}}$$   (8.2-9)
where $a$ and $b$ are scaling constants. Figure 8.2-4b is a clipped magnitude display of the magnitude of the Fourier transform coefficients. Figure 8.2-4c is a logarithmic display for $a = 1.0$ and $b = 100.0$.

In mathematical operations with continuous signals, the origin of the transform domain is usually at its geometric center. Similarly, the Fraunhofer diffraction pattern of a photographic transparency of transmittance $F(x, y)$ produced by a coherent optical system has its zero-frequency term at the center of its display. A computer-generated two-dimensional discrete Fourier transform with its origin at its center can be produced by a simple reordering of its transform coefficients. Alternatively, the quadrants of the Fourier transform, as computed by Eq. 8.2-1a, can be reordered automatically by multiplying the image function by the factor $(-1)^{j+k}$ prior to the Fourier transformation. The proof of this assertion follows from Eq. 8.2-4 with the substitution $m = n = \tfrac{1}{2}$. Then, by the identity

$$\exp\{ i\pi(j + k) \} = (-1)^{j+k}$$   (8.2-10)

Eq. 8.2-4 can be expressed as

$$\mathcal{F}\!\left(u + \frac{N}{2},\, v + \frac{N}{2}\right) = \frac{1}{N} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j, k)\,(-1)^{j+k} \exp\left\{ \frac{-2\pi i}{N}(uj + vk) \right\}$$   (8.2-11)

Figure 8.2-4d contains a log magnitude display of the reordered Fourier components. The conjugate symmetry in the Fourier domain is readily apparent from the photograph.

The Fourier transform written in series form in Eq. 8.2-1 may be redefined in vector-space form as

$$\tilde{f} = A f$$   (8.2-12a)

$$f = A^{*T} \tilde{f}$$   (8.2-12b)

where $f$ and $\tilde{f}$ are vectors obtained by column scanning the matrices $F$ and $\mathcal{F}$, respectively. The transformation matrix $A$ can be written in direct product form as

$$A = A_C \otimes A_R$$   (8.2-13)
where

$$A_R = A_C = \frac{1}{\sqrt{N}} \begin{bmatrix} W^0 & W^0 & W^0 & \cdots & W^0 \\ W^0 & W^1 & W^2 & \cdots & W^{N-1} \\ W^0 & W^2 & W^4 & \cdots & W^{2(N-1)} \\ \vdots & & & & \vdots \\ W^0 & W^{N-1} & W^{2(N-1)} & \cdots & W^{(N-1)^2} \end{bmatrix}$$   (8.2-14)

with $W = \exp\{-2\pi i / N\}$. As a result of the direct product decomposition of $A$, the image matrix and transformed image matrix are related by

$$\mathcal{F} = A_C F A_R$$   (8.2-15a)

$$F = A_C^{*}\, \mathcal{F}\, A_R^{*}$$   (8.2-15b)

The properties of the Fourier transform previously proved in series form obviously hold in the matrix formulation.

One of the major contributions to the field of image processing was the discovery (5) of an efficient computational algorithm for the discrete Fourier transform (DFT). Brute-force computation of the discrete Fourier transform of a one-dimensional sequence of $N$ values requires on the order of $N^2$ complex multiply and add operations. A fast Fourier transform (FFT) requires on the order of $N \log N$ operations. For large images the computational savings are substantial. The original FFT algorithms were limited to images whose dimensions are a power of 2 (e.g., $N = 2^9 = 512$). Modern algorithms exist for less restrictive image dimensions. Although the Fourier transform possesses many desirable analytic properties, it has a major drawback: complex, rather than real, number computations are necessary. Also, for image coding it does not provide as efficient image energy compaction as other transforms.

8.3. COSINE, SINE, AND HARTLEY TRANSFORMS

The cosine, sine, and Hartley transforms are unitary transforms that utilize sinusoidal basis functions, as does the Fourier transform. The cosine and sine transforms are not simply the cosine and sine parts of the Fourier transform. In fact, the cosine and sine parts of the Fourier transform, individually, are not orthogonal functions. The Hartley transform jointly utilizes sine and cosine basis functions, but its coefficients are real numbers, as contrasted with the Fourier transform whose coefficients are, in general, complex numbers.
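Before turning to those transforms, the following minimal NumPy sketch illustrates two practical points from Section 8.2: the $1/N$ scaling of Eq. 8.2-1a relative to an unnormalized FFT, and the equivalence of quadrant reordering with premultiplication of the image by $(-1)^{j+k}$ (Eq. 8.2-11). The use of numpy.fft, and of fftshift for the reordering, is an implementation convenience rather than part of the text's formulation.

import numpy as np

def dft2(F):
    """Two-dimensional DFT with the 1/N scaling of Eq. 8.2-1a (square arrays)."""
    return np.fft.fft2(F) / F.shape[0]

N = 16
F = np.random.rand(N, N)
F_t = dft2(F)

# Eq. 8.2-3: the dc term equals N times the spatial average of the image.
assert np.isclose(F_t[0, 0], N * F.mean())

# Conjugate symmetry (Eq. 8.2-7) for a real-valued image.
assert np.allclose(F_t[1:, 1:], np.conj(F_t[1:, 1:][::-1, ::-1]))

# Centering method 1: reorder the quadrants of the computed transform.
centered_1 = np.fft.fftshift(F_t)

# Centering method 2: multiply the image by (-1)^(j+k) before transforming (Eq. 8.2-11).
j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
centered_2 = dft2(F * (-1.0) ** (j + k))

assert np.allclose(centered_1, centered_2)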
8.3.1. Cosine Transform

The cosine transform, discovered by Ahmed et al. (12), has found wide application in transform image coding. In fact, it is the foundation of the JPEG standard (13) for still image coding and the MPEG standard for the coding of moving images (14). The forward cosine transform is defined as (12)

$$\mathcal{F}(u, v) = \frac{2}{N}\, C(u) C(v) \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} F(j, k) \cos\left\{ \frac{\pi}{N}\!\left[u\!\left(j + \tfrac{1}{2}\right)\right] \right\} \cos\left\{ \frac{\pi}{N}\!\left[v\!\left(k + \tfrac{1}{2}\right)\right] \right\}$$   (8.3-1a)

$$F(j, k) = \frac{2}{N} \sum_{u=0}^{N-1} \sum_{v=0}^{N-1} C(u) C(v)\, \mathcal{F}(u, v) \cos\left\{ \frac{\pi}{N}\!\left[u\!\left(j + \tfrac{1}{2}\right)\right] \right\} \cos\left\{ \frac{\pi}{N}\!\left[v\!\left(k + \tfrac{1}{2}\right)\right] \right\}$$   (8.3-1b)

where $C(0) = 2^{-1/2}$ and $C(w) = 1$ for $w = 1, 2, \ldots, N-1$. It has been observed that the basis functions of the cosine transform are actually a class of discrete Chebyshev polynomials (12).

Figure 8.3-1 is a plot of the cosine transform basis functions for $N = 16$. A photograph of the cosine transform of the test image of Figure 8.2-4a is shown in Figure 8.3-2a. The origin is placed in the upper left corner of the picture, consistent with matrix notation. It should be observed that, as with the Fourier transform, the image energy tends to concentrate toward the lower spatial frequencies. The cosine transform of an $N \times N$ image can be computed by reflecting the image about its edges to obtain a $2N \times 2N$ array, taking the FFT of the array and then extracting the real parts of the Fourier transform (15). Algorithms also exist for the direct computation of each row or column of Eq. 8.3-1 with on the order of $N \log N$ real arithmetic operations (12,16).

8.3.2. Sine Transform

The sine transform, introduced by Jain (17) as a fast algorithmic substitute for the Karhunen–Loeve transform of a Markov process, is defined in one-dimensional form by the basis functions

$$A(u, j) = \sqrt{\frac{2}{N+1}} \sin\left\{ \frac{(j+1)(u+1)\pi}{N+1} \right\}$$   (8.3-2)

for $u, j = 0, 1, 2, \ldots, N-1$. Consider the tridiagonal matrix
  • 206. COSINE, SINE, AND HARTLEY TRANSFORMS 197 FIGURE 8.3-1. Cosine transform basis functions, N = 16. · 1 –α 0 … 0 · · –α 1 –α · · · · T = · · · · (8.3-3) · · · –α 1 –α 0 … 0 –α 1 2 where α = ρ ⁄ ( 1 + ρ ) and 0.0 ≤ ρ ≤ 1.0 is the adjacent element correlation of a Markov process covariance matrix. It can be shown (18) that the basis functions of
  • 207. 198 UNITARY TRANSFORMS (a) Cosine (b) Sine (c) Hartley FIGURE 8.3-2. Cosine, sine, and Hartley transforms of the smpte_girl_luma image, log magnitude displays Eq. 8.3-2, inserted as the elements of a unitary matrix A, diagonalize the matrix T in the sense that T ATA = D (8.3-4) Matrix D is a diagonal matrix composed of the terms 2 1–ρ D ( k, k ) = ------------------------------------------------------------------------ (8.3-5) 2 1 – 2ρ cos { kπ ⁄ ( N + 1 ) } + ρ for k = 1, 2,..., N. Jain (17) has shown that the cosine and sine transforms are interre- lated in that they diagonalize a family of tridiagonal matrices.
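Returning to the cosine transform of Section 8.3.1, the sketch below is a minimal NumPy evaluation of Eq. 8.3-1 written as a separable matrix operation. It is a brute-force computation for illustration only, not the fast algorithms of references 12 and 16, and the helper name is illustrative.

import numpy as np

def cosine_kernel(N):
    """N x N cosine transform kernel of Eq. 8.3-1; element (u, j)."""
    u = np.arange(N)[:, None]
    j = np.arange(N)[None, :]
    C = np.where(u == 0, 1.0 / np.sqrt(2.0), 1.0)
    return np.sqrt(2.0 / N) * C * np.cos(np.pi * u * (j + 0.5) / N)

N = 8
A = cosine_kernel(N)

# The kernel is orthonormal, so the transform is unitary.
assert np.allclose(A @ A.T, np.eye(N))

# Separable two-dimensional cosine transform (Eq. 8.3-1a) and its inverse (Eq. 8.3-1b).
F = np.random.rand(N, N)
F_t = A @ F @ A.T
assert np.allclose(A.T @ F_t @ A, F)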
  • 208. COSINE, SINE, AND HARTLEY TRANSFORMS 199 FIGURE 8.3-3. Sine transform basis functions, N = 15. The two-dimensional sine transform is defined as N–1 N–1 2  ( j + 1 ) ( u + 1 )π   ( k + 1 ) ( v + 1 )π  F ( u, v ) = ------------ N+1 - ∑ ∑ F ( j, k ) sin  -------------------------------------   N+1 -  sin  -------------------------------------  (8.3-6)  N+1  j=0 k=0 Its inverse is of identical form. Sine transform basis functions are plotted in Figure 8.3-3 for N = 15. Figure 8.3-2b is a photograph of the sine transform of the test image. The sine transform can also be computed directly from Eq. 8.3-10, or efficiently with a Fourier trans- form algorithm (17).
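The two-dimensional sine transform can be sketched in the same way. Because the kernel of Eq. 8.3-2 is symmetric and orthonormal, the transform is its own inverse, which is why the inverse of Eq. 8.3-6 is of identical form; the short NumPy check below assumes nothing beyond those two equations.

import numpy as np

def sine_kernel(N):
    """N x N sine transform kernel of Eq. 8.3-2; element (u, j)."""
    u = np.arange(N)[:, None]
    j = np.arange(N)[None, :]
    return np.sqrt(2.0 / (N + 1)) * np.sin((j + 1) * (u + 1) * np.pi / (N + 1))

N = 15
A = sine_kernel(N)

# The kernel is symmetric and orthonormal, so the transform is its own inverse.
assert np.allclose(A, A.T)
assert np.allclose(A @ A.T, np.eye(N))

# Two-dimensional sine transform (Eq. 8.3-6): forward and inverse are identical in form.
F = np.random.rand(N, N)
F_t = A @ F @ A
assert np.allclose(A @ F_t @ A, F)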
  • 209. 200 UNITARY TRANSFORMS 8.3.3. Hartley Transform Bracewell (19,20) has proposed a discrete real-valued unitary transform, called the Hartley transform, as a substitute for the Fourier transform in many filtering appli- cations. The name derives from the continuous integral version introduced by Hart- ley in 1942 (21). The discrete two-dimensional Hartley transform is defined by the transform pair N–1 N–1 1  2π  F ( u, v ) = --- N - ∑ ∑ F ( j, k ) cas  ------ ( uj + vk )   N  (8.3-7a) j=0 k=0 N–1 N–1 1  2π  F ( j, k ) = --- N - ∑ ∑ F ( u, v ) cas  ----- ( uj + vk )   N -  (8.3-7b) u=0 v=0 where casθ ≡ cos θ + sin θ . The structural similarity between the Fourier and Hartley transforms becomes evident when comparing Eq. 8.3-7 and Eq. 8.2-2. It can be readily shown (17) that the cas θ function is an orthogonal function. Also, the Hartley transform possesses equivalent but not mathematically identical structural properties of the discrete Fourier transform (20). Figure 8.3-2c is a photo- graph of the Hartley transform of the test image. The Hartley transform can be computed efficiently by a FFT-like algorithm (20). The choice between the Fourier and Hartley transforms for a given application is usually based on computational efficiency. In some computing structures, the Hart- ley transform may be more efficiently computed, while in other computing environ- ments, the Fourier transform may be computationally superior. 8.4. HADAMARD, HAAR, AND DAUBECHIES TRANSFORMS The Hadamard, Haar, and Daubechies transforms are related members of a family of nonsinusoidal transforms. 8.4.1. Hadamard Transform The Hadamard transform (22,23) is based on the Hadamard matrix (24), which is a square array of plus and minus 1s whose rows and columns are orthogonal. A nor- malized N × N Hadamard matrix satisfies the relation T HH = I (8.4-1) The smallest orthonormal Hadamard matrix is the 2 × 2 Hadamard matrix given by
  • 210. HADAMARD, HAAR, AND DAUBECHIES TRANSFORMS 201 FIGURE 8.4-1. Nonordered Hadamard matrices of size 4 and 8. 1 H 2 = ------ 1 1 - (8.4-2) 2 1 –1 It is known that if a Hadamard matrix of size N exists (N > 2), then N = 0 modulo 4 (22). The existence of a Hadamard matrix for every value of N satisfying this requirement has not been shown, but constructions are available for nearly all per- missible values of N up to 200. The simplest construction is for a Hadamard matrix of size N = 2n, where n is an integer. In this case, if H N is a Hadamard matrix of size N, the matrix 1 HN HN H 2N = ------ - (8.4-3) 2 HN – HN is a Hadamard matrix of size 2N. Figure 8.4-1 shows Hadamard matrices of size 4 and 8 obtained by the construction of Eq. 8.4-3. Harmuth (25) has suggested a frequency interpretation for the Hadamard matrix generated from the core matrix of Eq. 8.4-3; the number of sign changes along each row of the Hadamard matrix divided by 2 is called the sequency of the row. It is pos- n sible to construct a Hadamard matrix of order N = 2 whose number of sign changes per row increases from 0 to N – 1. This attribute is called the sequency property of the unitary matrix.
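The construction of Eq. 8.4-3 and the sequency property are easy to verify numerically. The sketch below builds the normalized Hadamard matrix of order $N = 2^n$ recursively and counts the sign changes of each row; every count from 0 to $N - 1$ occurs, although not in increasing order for this nonordered construction. The helper name is illustrative.

import numpy as np

def hadamard(n):
    """Normalized Hadamard matrix of order N = 2**n via the recursion of Eq. 8.4-3."""
    H = np.array([[1.0]])
    for _ in range(n):
        H = np.block([[H, H], [H, -H]]) / np.sqrt(2.0)
    return H

H8 = hadamard(3)

# Orthonormality (Eq. 8.4-1): H H^T = I.
assert np.allclose(H8 @ H8.T, np.eye(8))

# Sign changes along each row; the counts 0 through N - 1 each appear exactly once.
changes = (np.diff(np.sign(H8), axis=1) != 0).sum(axis=1)
assert np.array_equal(np.sort(changes), np.arange(8))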
FIGURE 8.4-2. Hadamard transform basis functions, N = 16.

The rows of the Hadamard matrix of Eq. 8.4-3 can be considered to be samples of rectangular waves with a subperiod of $1/N$ units. These continuous functions are called Walsh functions (26). In this context, the Hadamard matrix merely performs the decomposition of a function by a set of rectangular waveforms, rather than by the sine–cosine waveforms of the Fourier transform. A series formulation exists for the Hadamard transform (23). Hadamard transform basis functions for the ordered transform with $N = 16$ are shown in Figure 8.4-2. The ordered Hadamard transform of the test image is shown in Figure 8.4-3a.
FIGURE 8.4-3. Hadamard and Haar transforms of the smpte_girl_luma image, log magnitude displays: (a) Hadamard; (b) Haar.

8.4.2. Haar Transform

The Haar transform (1,26,27) is derived from the Haar matrix. The following are $4 \times 4$ and $8 \times 8$ orthonormal Haar matrices:

$$H_4 = \frac{1}{2} \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ \sqrt{2} & -\sqrt{2} & 0 & 0 \\ 0 & 0 & \sqrt{2} & -\sqrt{2} \end{bmatrix}$$   (8.4-4)

$$H_8 = \frac{1}{\sqrt{8}} \begin{bmatrix} 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 \\ \sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & \sqrt{2} & \sqrt{2} & -\sqrt{2} & -\sqrt{2} \\ 2 & -2 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 2 & -2 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2 & -2 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 2 & -2 \end{bmatrix}$$   (8.4-5)

Extensions to higher-order Haar matrices follow the structure indicated by Eqs. 8.4-4 and 8.4-5. Figure 8.4-4 is a plot of the Haar basis functions for $N = 16$.
FIGURE 8.4-4. Haar transform basis functions, N = 16.

The Haar transform can be computed recursively (29) using the following $N \times N$ recursion matrix

$$R_N = \begin{bmatrix} V_N \\ W_N \end{bmatrix}$$   (8.4-6)

where $V_N$ is an $N/2 \times N$ scaling matrix and $W_N$ is an $N/2 \times N$ wavelet matrix defined as

$$V_N = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 1 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & & & & & & & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 & 1 & 1 \end{bmatrix}$$   (8.4-7a)

$$W_N = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & -1 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & -1 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & & & & & & & & \vdots \\ 0 & 0 & 0 & 0 & \cdots & 1 & -1 & 0 & 0 \\ 0 & 0 & 0 & 0 & \cdots & 0 & 0 & 1 & -1 \end{bmatrix}$$   (8.4-7b)

The elements of the rows of $V_N$ are called first-level scaling signals, and the elements of the rows of $W_N$ are called first-level Haar wavelets (29). The first-level Haar transform of an $N \times 1$ vector $f$ is

$$f_1 = R_N f = \begin{bmatrix} a_1 & d_1 \end{bmatrix}^T$$   (8.4-8)

where

$$a_1 = V_N f$$   (8.4-9a)

$$d_1 = W_N f$$   (8.4-9b)

The vector $a_1$ represents the running average or trend of the elements of $f$, and the vector $d_1$ represents the running fluctuation of the elements of $f$. The next step in the recursion process is to compute the second-level Haar transform from the trend part of the first-level transform and concatenate it with the first-level fluctuation vector. This results in

$$f_2 = \begin{bmatrix} a_2 & d_2 & d_1 \end{bmatrix}^T$$   (8.4-10)

where

$$a_2 = V_{N/2}\, a_1$$   (8.4-11a)

$$d_2 = W_{N/2}\, a_1$$   (8.4-11b)

are $N/4 \times 1$ vectors. The process continues until the full transform

$$\tilde{f} \equiv f_n = \begin{bmatrix} a_n & d_n & d_{n-1} & \cdots & d_1 \end{bmatrix}^T$$   (8.4-12)

is obtained, where $N = 2^n$. It should be noted that the intermediate levels are unitary transforms.
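The recursion of Eqs. 8.4-8 through 8.4-12 is compact in code. The sketch below repeatedly splits the current trend vector into a new trend ($V$ rows) and fluctuation ($W$ rows) and, for $N = 4$, checks that the result matches direct multiplication by the Haar matrix of Eq. 8.4-4. It assumes the input length is a power of 2, and the function name is illustrative.

import numpy as np

def haar_transform(f):
    """Full Haar transform of a length 2**n vector by the recursion of Eqs. 8.4-8 to 8.4-12."""
    a = np.asarray(f, dtype=float)
    details = []
    while a.size > 1:
        pairs = a.reshape(-1, 2)
        d = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2.0)   # fluctuation d_i (Eqs. 8.4-9b, 8.4-11b)
        a = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)   # trend a_i (Eqs. 8.4-9a, 8.4-11a)
        details.insert(0, d)                             # build [a_n, d_n, ..., d_1] (Eq. 8.4-12)
    return np.concatenate([a] + details)

H4 = 0.5 * np.array([[1, 1, 1, 1],
                     [1, 1, -1, -1],
                     [np.sqrt(2), -np.sqrt(2), 0, 0],
                     [0, 0, np.sqrt(2), -np.sqrt(2)]])

f = np.random.rand(4)
assert np.allclose(haar_transform(f), H4 @ f)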
The Haar transform can be likened to a sampling process in which rows of the transform matrix sample an input data sequence with finer and finer resolution increasing in powers of 2. In image processing applications, the Haar transform provides a transform domain in which a type of differential energy is concentrated in localized regions.

8.4.3. Daubechies Transforms

Daubechies (30) has discovered a class of wavelet transforms that utilize running averages and running differences of the elements of a vector, as with the Haar transform. The difference between the Haar and Daubechies transforms is that the averages and differences are grouped in four or more elements. The Daubechies transform of support four, called Daub4, can be defined in a manner similar to the Haar recursive generation process. The first-level scaling and wavelet matrices are defined as

$$V_N = \begin{bmatrix} \alpha_1 & \alpha_2 & \alpha_3 & \alpha_4 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & \alpha_1 & \alpha_2 & \alpha_3 & \alpha_4 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & & & & & & & & & & \vdots \\ 0 & 0 & 0 & 0 & 0 & 0 & \cdots & \alpha_1 & \alpha_2 & \alpha_3 & \alpha_4 \\ \alpha_3 & \alpha_4 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 & \alpha_1 & \alpha_2 \end{bmatrix}$$   (8.4-13a)

$$W_N = \begin{bmatrix} \beta_1 & \beta_2 & \beta_3 & \beta_4 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\ 0 & 0 & \beta_1 & \beta_2 & \beta_3 & \beta_4 & \cdots & 0 & 0 & 0 & 0 \\ \vdots & & & & & & & & & & \vdots \\ 0 & 0 & 0 & 0 & 0 & 0 & \cdots & \beta_1 & \beta_2 & \beta_3 & \beta_4 \\ \beta_3 & \beta_4 & 0 & 0 & 0 & 0 & \cdots & 0 & 0 & \beta_1 & \beta_2 \end{bmatrix}$$   (8.4-13b)

where

$$\alpha_1 = -\beta_4 = \frac{1 + \sqrt{3}}{4\sqrt{2}}$$   (8.4-14a)

$$\alpha_2 = \beta_3 = \frac{3 + \sqrt{3}}{4\sqrt{2}}$$   (8.4-14b)

$$\alpha_3 = -\beta_2 = \frac{3 - \sqrt{3}}{4\sqrt{2}}$$   (8.4-14c)

$$\alpha_4 = \beta_1 = \frac{1 - \sqrt{3}}{4\sqrt{2}}$$   (8.4-14d)
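The sketch below constructs the first-level Daub4 matrices of Eq. 8.4-13, including the wraparound of the last two scale factors onto the final rows, and checks that the stacked matrix $[V_N;\, W_N]$ is orthonormal. It is a plain construction for illustration; production wavelet code would normally use a filter-bank formulation instead of explicit matrices, and the helper name is illustrative.

import numpy as np

def daub4_level(N):
    """First-level Daub4 scaling and wavelet matrices (Eq. 8.4-13); N even, N >= 4."""
    s3 = np.sqrt(3.0)
    alpha = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4.0 * np.sqrt(2.0))   # Eq. 8.4-14
    beta = np.array([alpha[3], -alpha[2], alpha[1], -alpha[0]])
    V = np.zeros((N // 2, N))
    W = np.zeros((N // 2, N))
    for r in range(N // 2):
        cols = (2 * r + np.arange(4)) % N     # shift by two per row; the last row wraps around
        V[r, cols] = alpha
        W[r, cols] = beta
    return V, W

V, W = daub4_level(8)
R = np.vstack([V, W])

# One level is an orthonormal transformation.
assert np.allclose(R @ R.T, np.eye(8))

# Trend and fluctuation vectors of a data vector, as in the Haar recursion.
f = np.random.rand(8)
a1, d1 = V @ f, W @ f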
  • 216. KARHUNEN–LOEVE TRANSFORM 207 In Eqs. 8.4-13a and 8.4-13b, the row-to-row shift is by two elements, and the last two scale factors wrap around on the last rows. Following the recursion process of the Haar transform results in the Daub4 transform final stage: T f ≡ f n = [ an dn dn – 1 … d 1 ] (8.4-15) Daubechies has extended the wavelet transform concept for higher degrees of support, 6, 8, 10,..., by straightforward extension of Eq. 8.4-13 (29). Daubechies also has also constructed another family of wavelets, called coiflets, after a sugges- tion of Coifman (29). 8.5. KARHUNEN–LOEVE TRANSFORM Techniques for transforming continuous signals into a set of uncorrelated represen- tational coefficients were originally developed by Karhunen (31) and Loeve (32). Hotelling (33) has been credited (34) with the conversion procedure that transforms discrete signals into a sequence of uncorrelated coefficients. However, most of the literature in the field refers to both discrete and continuous transformations as either a Karhunen–Loeve transform or an eigenvector transform. The Karhunen–Loeve transformation is a transformation of the general form N–1 N–1 F ( u, v ) = ∑ ∑ F ( j, k )A ( j, k ; u, v ) (8.5-1) j=0 k=0 for which the kernel A(j, k; u, v) satisfies the equation N–1 N–1 λ ( u, v )A ( j, k ; u, v ) = ∑ ∑ K F ( j, k ; j′, k′ ) A ( j′, k′ ; u, v ) (8.5-2) j′ = 0 k′ = 0 where KF ( j, k ; j′, k′ ) denotes the covariance function of the image array and λ ( u, v ) is a constant for fixed (u, v). The set of functions defined by the kernel are the eigen- functions of the covariance function, and λ ( u, v ) represents the eigenvalues of the covariance function. It is usually not possible to express the kernel in explicit form. If the covariance function is separable such that K F ( j, k ; j′, k′ ) = K C ( j, j′ )K R ( k, k′ ) (8.5-3) then the Karhunen-Loeve kernel is also separable and A ( j, k ; u , v ) = A C ( u, j )AR ( v, k ) (8.5-4)
  • 217. 208 UNITARY TRANSFORMS The row and column kernels satisfy the equations N–1 λ R ( u )AR ( v, k ) = ∑ K R ( k, k′ )A R ( v, k′ ) (8.5-5a) k′ = 0 N–1 λ C ( v )A C ( u, j ) = ∑ KC ( j, j′ )AC ( u, j′ ) (8.5-5b) j′ = 0 In the special case in which the covariance matrix is of separable first-order Markov process form, the eigenfunctions can be written in explicit form. For a one-dimen- sional Markov process with correlation factor ρ , the eigenfunctions and eigenvalues are given by (35) 2 1⁄2  N–1 ( u + 1 )π  A ( u, j ) = ----------------------- - sin  w ( u )  j – ------------ + --------------------  - (8.5-6) 2  N + λ ( u)  2  2  and 2 1–ρ λ ( u ) = -------------------------------------------------------- - for 0 ≤ j, u ≤ N – 1 (8.5-7) 2 1 – 2ρ cos { w ( u ) } + ρ where w(u) denotes the root of the transcendental equation 2 ( 1 – ρ ) sin w tan { Nw } = -------------------------------------------------- - (8.5-8) 2 cos w – 2ρ + ρ cos w The eigenvectors can also be generated by the recursion formula (36) λ(u) A ( u, 0 ) = -------------- [ A ( u, 0 ) – ρA ( u, 1 ) ] (8.5-9a) 2 1–ρ λ(u) 2 A ( u, j ) = -------------- [ – ρA ( u, j – 1 ) + ( 1 + ρ )A ( u, j ) – ρA ( u, j + 1 ) ] for 0 < j < N – 1 2 1–ρ (8.5-9b) λ( u) A ( u, N – 1 ) = -------------- [ – ρA ( u, N – 2 ) + ρA ( u, N – 1 ) ] (8.5-9c) 2 1–ρ by initially setting A(u, 0) = 1 and subsequently normalizing the eigenvectors.
If the image array and transformed image array are expressed in vector form, the Karhunen–Loeve transform pairs are

$$\tilde{f} = A f$$   (8.5-10)

$$f = A^T \tilde{f}$$   (8.5-11)

The transformation matrix $A$ satisfies the relation

$$A K_f = \Lambda A$$   (8.5-12)

where $K_f$ is the covariance matrix of $f$, $A$ is a matrix whose rows are eigenvectors of $K_f$, and $\Lambda$ is a diagonal matrix of the form

$$\Lambda = \begin{bmatrix} \lambda(1) & 0 & \cdots & 0 \\ 0 & \lambda(2) & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & \lambda(N^2) \end{bmatrix}$$   (8.5-13)

If $K_f$ is of separable form, then

$$A = A_C \otimes A_R$$   (8.5-14)

where $A_R$ and $A_C$ satisfy the relations

$$A_R K_R = \Lambda_R A_R$$   (8.5-15a)

$$A_C K_C = \Lambda_C A_C$$   (8.5-15b)

and $\lambda(w) = \lambda_R(v)\, \lambda_C(u)$ for $u, v = 1, 2, \ldots, N$.

Figure 8.5-1 is a plot of the Karhunen–Loeve basis functions for a one-dimensional Markov process with adjacent element correlation $\rho = 0.9$.
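The sketch below computes a Karhunen–Loeve basis numerically for the same first-order Markov example, by eigendecomposition of the covariance matrix rather than from the closed-form eigenfunctions of Eqs. 8.5-6 to 8.5-8; the covariance model $K(j,k) = \rho^{|j-k|}$ and the value $\rho = 0.9$ follow the example of Figure 8.5-1.

import numpy as np

N, rho = 16, 0.9

# Covariance matrix of a first-order Markov process: K(j, k) = rho ** |j - k|.
K = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

# Rows of A are eigenvectors of K (Eq. 8.5-12); numpy returns eigenvectors as columns.
eigvals, eigvecs = np.linalg.eigh(K)
A = eigvecs.T

# A K A^T is diagonal, so the transform coefficients A f are uncorrelated,
# which is the defining property of the Karhunen-Loeve transform.
assert np.allclose(A @ K @ A.T, np.diag(eigvals))

# For a separable covariance (Eq. 8.5-14), the two-dimensional transform matrix
# is the direct (Kronecker) product of the row and column eigenvector matrices.
A2 = np.kron(A, A)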
  • 219. 210 UNITARY TRANSFORMS FIGURE 8.5-1. Karhunen–Loeve transform basis functions, N = 16. REFERENCES 1. H. C. Andrews, Computer Techniques in Image Processing, Academic Press, New York, 1970. 2. H. C. Andrews, “Two Dimensional Transforms,” in Topics in Applied Physics: Picture Pro- cessing and Digital Filtering, Vol. 6, T. S. Huang, Ed., Springer-Verlag, New York, 1975. 3. R. Bellman, Introduction to Matrix Analysis, 2nd ed., Society for Industrial and Applied Mathematics, Philadelphia, 1997.
  • 220. REFERENCES 211 4. H. C. Andrews and K. Caspari, “A Generalized Technique for Spectral Analysis,” IEEE Trans. Computers, C-19, 1, January 1970, 16–25. 5. J. W. Cooley and J. W. Tukey, “An Algorithm for the Machine Calculation of Complex Fourier Series,” Mathematics of Computation 19, 90, April 1965, 297–301. 6. IEEE Trans. Audio and Electroacoustics, Special Issue on Fast Fourier Transforms, AU- 15, 2, June 1967. 7. W. T. Cochran et al., “What Is the Fast Fourier Transform?” Proc. IEEE, 55, 10, 1967, 1664–1674. 8. IEEE Trans. Audio and Electroacoustics, Special Issue on Fast Fourier Transforms, AU- 17, 2, June 1969. 9. J. W. Cooley, P. A. Lewis, and P. D. Welch, “Historical Notes on the Fast Fourier Trans- form,” Proc. IEEE, 55, 10, October 1967, 1675–1677. 10. B. O. Brigham and R. B. Morrow, “The Fast Fourier Transform,” IEEE Spectrum, 4, 12, December 1967, 63–70. 11. C. S. Burrus and T. W. Parks, DFT/FFT and Convolution Algorithms, Wiley-Inter- science, New York, 1985. 12. N. Ahmed, T. Natarajan, and K. R. Rao, “On Image Processing and a Discrete Cosine Transform,” IEEE Trans. Computers, C-23, 1, January 1974, 90–93. 13. W. B. Pennebaker and J. L. Mitchell, JPEG Still Image Data Compression Standard, Van Nostrand Reinhold, New York, 1993. 14. K. R. Rao and J. J. Hwang, Techniques and Standards for Image, Video, and Audio Cod- ing, Prentice Hall, Upper Saddle River, NJ, 1996. 15. R. W. Means, H. J. Whitehouse, and J. M. Speiser, “Television Encoding Using a Hybrid Discrete Cosine Transform and a Differential Pulse Code Modulator in Real Time,” Proc. National Telecommunications Conference, San Diego, CA, December 1974, 61– 66. 16. W. H. Chen, C. Smith, and S. C. Fralick, “Fast Computational Algorithm for the Discrete Cosine Transform,” IEEE Trans. Communications., COM-25, 9, September 1977, 1004–1009. 17. A. K. Jain, “A Fast Karhunen–Loeve Transform for Finite Discrete Images,” Proc. National Electronics Conference, Chicago, October 1974, 323–328. 18. A. K. Jain and E. Angel, “Image Restoration, Modeling, and Reduction of Dimensional- ity,” IEEE Trans. Computers, C-23, 5, May 1974, 470–476. 19. R. M. Bracewell, “The Discrete Hartley Transform,” J. Optical Society of America, 73, 12, December 1983, 1832–1835. 20. R. M. Bracewell, The Hartley Transform, Oxford University Press, Oxford, 1986. 21. R. V. L. Hartley, “A More Symmetrical Fourier Analysis Applied to Transmission Prob- lems,” Proc. IRE, 30, 1942, 144–150. 22. J. E. Whelchel, Jr. and D. F. Guinn, “The Fast Fourier–Hadamard Transform and Its Use in Signal Representation and Classification,” EASCON 1968 Convention Record, 1968, 561–573. 23. W. K. Pratt, H. C. Andrews, and J. Kane, “Hadamard Transform Image Coding,” Proc. IEEE, 57, 1, January 1969, 58–68. 24. J. Hadamard, “Resolution d'une question relative aux determinants,” Bull. Sciences Mathematiques, Ser. 2, 17, Part I, 1893, 240–246.
  • 221. 212 UNITARY TRANSFORMS 25. H. F. Harmuth, Transmission of Information by Orthogonal Functions, Springer-Verlag, New York, 1969. 26. J. L. Walsh, “A Closed Set of Orthogonal Functions,” American J. Mathematics, 45, 1923, 5–24. 27. A. Haar, “Zur Theorie der Orthogonalen-Funktionen,” Mathematische Annalen, 5, 1955, 17–31. 28. K. R. Rao, M. A. Narasimhan, and K. Revuluri, “Image Data Processing by Hadamard– Haar Transforms,” IEEE Trans. Computers, C-23, 9, September 1975, 888–896. 29. J. S. Walker, A Primer on Wavelets and Their Scientific Applications, Chapman & Hall/ CRC, Press, Boca Raton, FL, 1999. 30. I. Daubechies, Ten Lectures on Wavelets, SIAM, Philadelphia, 1992. 31. H. Karhunen, 1947, English translation by I. Selin, “On Linear Methods in Probability Theory,” Doc. T-131, Rand Corporation, Santa Monica, CA, August 11, 1960. 32. M. Loeve, Fonctions aldatories de seconde ordre, Hermann, Paris, 1948. 33. H. Hotelling, “Analysis of a Complex of Statistical Variables into Principal Compo- nents,” J. Educational Psychology, 24, 1933, 417–441, 498–520. 34. P. A. Wintz, “Transform Picture Coding,” Proc. IEEE, 60, 7, July 1972, 809–820. 35. W. D. Ray and R. M. Driver, “Further Decomposition of the Karhunen–Loeve Series Representation of a Stationary Random Process,” IEEE Trans. Information Theory, IT- 16, 6, November 1970, 663–668. 36. W. K. Pratt, “Generalized Wiener Filtering Computation Techniques,” IEEE Trans. Computers, C-21, 7, July 1972, 636–641.
9 LINEAR PROCESSING TECHNIQUES

Most discrete image processing computational algorithms are linear in nature; an output image array is produced by a weighted linear combination of elements of an input array. The popularity of linear operations stems from the relative simplicity of spatial linear processing as opposed to spatial nonlinear processing. However, for image processing operations, conventional linear processing is often computationally infeasible without efficient computational algorithms because of the large image arrays. This chapter considers indirect computational techniques that permit more efficient linear processing than by conventional methods.

9.1. TRANSFORM DOMAIN PROCESSING

Two-dimensional linear transformations have been defined in Section 5.4 in series form as

$$P(m_1, m_2) = \sum_{n_1=1}^{N_1} \sum_{n_2=1}^{N_2} F(n_1, n_2)\, T(n_1, n_2; m_1, m_2)$$   (9.1-1)

and defined in vector form as

$$p = T f$$   (9.1-2)

It will now be demonstrated that such linear transformations can often be computed more efficiently by an indirect computational procedure utilizing two-dimensional unitary transforms than by the direct computation indicated by Eq. 9.1-1 or 9.1-2.
  • 223. 214 LINEAR PROCESSING TECHNIQUES FIGURE 9.1-1. Direct processing and generalized linear filtering; series formulation. Figure 9.1-1 is a block diagram of the indirect computation technique called gen- eralized linear filtering (1). In the process, the input array F ( n1, n 2 ) undergoes a two-dimensional unitary transformation, resulting in an array of transform coeffi- cients F ( u 1, u 2 ) . Next, a linear combination of these coefficients is taken according to the general relation M1 M2 ˜ F ( w 1, w 2 ) = ∑ ∑ F ( u 1, u 2 )T ( u 1, u 2 ; w 1, w 2 ) (9.1-3) u1 = 1 u2 = 1 where T ( u 1, u 2 ; w 1, w 2 ) represents the linear filtering transformation function. Finally, an inverse unitary transformation is performed to reconstruct the processed array P ( m1, m 2 ) . If this computational procedure is to be more efficient than direct computation by Eq. 9.1-1, it is necessary that fast computational algorithms exist for the unitary transformation, and also the kernel T ( u 1, u 2 ; w 1, w 2 ) must be reasonably sparse; that is, it must contain many zero elements. The generalized linear filtering process can also be defined in terms of vector- space computations as shown in Figure 9.1-2. For notational simplicity, let N1 = N2 = N and M1 = M2 = M. Then the generalized linear filtering process can be described by the equations f = [ A 2 ]f (9.1-4a) N ˜ = Tf f f (9.1-4b) ] ˜ –1 p = [A 2 f (9.1-4c) M
  • 224. TRANSFORM DOMAIN PROCESSING 215 FIGURE 9.1-2. Direct processing and generalized linear filtering; vector formulation. 2 2 2 2 where A 2 is a N × N unitary transform matrix, T is a M × N linear filtering N 2 2 transform operation, and A 2 is a M × M unitary transform matrix. From M Eq. 9.1-4, the input and output vectors are related by –1 p = [A 2 ] T [ A 2 ]f (9.1-5) M N Therefore, equating Eqs. 9.1-2 and 9.1-5 yields the relations between T and T given by –1 T = [A 2 ] T [A 2] (9.1-6a) M N –1 T = [A 2 ]T [ A 2 ] (9.1-6b) M N 2 2 If direct processing is employed, computation by Eq. 9.1-2 requires k P ( M N ) oper- ations, where 0 ≤ k P ≤ 1 is a measure of the sparseness of T. With the generalized linear filtering technique, the number of operations required for a given operator are: 4 Forward transform: N by direct transformation 2 2N log 2 N by fast transformation 2 2 Filter multiplication: kT M N 4 Inverse transform: M by direct transformation 2 2M log 2 M by fast transformation
  • 225. 216 LINEAR PROCESSING TECHNIQUES where 0 ≤ k T ≤ 1 is a measure of the sparseness of T. If k T = 1 and direct unitary transform computation is performed, it is obvious that the generalized linear filter- ing concept is not as efficient as direct computation. However, if fast transform algorithms, similar in structure to the fast Fourier transform, are employed, general- ized linear filtering will be more efficient than direct processing if the sparseness index satisfies the inequality 2 2 k T < k P – ------ log 2 N – ----- log 2 M - - (9.1-7) 2 2 M N In many applications, T will be sufficiently sparse such that the inequality will be satisfied. In fact, unitary transformation tends to decorrelate the elements of T caus- ing T to be sparse. Also, it is often possible to render the filter matrix sparse by setting small-magnitude elements to zero without seriously affecting computational accuracy (1). In subsequent sections, the structure of superposition and convolution operators is analyzed to determine the feasibility of generalized linear filtering in these appli- cations. 9.2. TRANSFORM DOMAIN SUPERPOSITION The superposition operations discussed in Chapter 7 can often be performed more efficiently by transform domain processing rather than by direct processing. Figure 9.2-1a and b illustrate block diagrams of the computational steps involved in direct finite area or sampled image superposition. In Figure 9.2-1d and e, an alternative form of processing is illustrated in which a unitary transformation operation is per- formed on the data vector f before multiplication by a finite area filter matrix D or sampled image filter matrix B. An inverse transform reconstructs the output vector. From Figure 9.2-1, for finite-area superposition, because q = Df (9.2-1a) and –1 q = [A 2 ] D [ A 2 ]f (9.2-1b) M N then clearly the finite-area filter matrix may be expressed as –1 D = [A 2 ]D [ A 2 ] (9.2-2a) M N
  • 226. TRANSFORM DOMAIN SUPERPOSITION 217 FIGURE 9.2-1. Data and transform domain superposition.
  • 227. 218 LINEAR PROCESSING TECHNIQUES Similarly, –1 B = [A 2 ]B [ A 2 ] (9.2-2b) M N If direct finite-area superposition is performed, the required number of 2 2 computational operations is approximately N L , where L is the dimension of the impulse response matrix. In this case, the sparseness index of D is L 2 k D =  ---  N - (9.2-3a) 2 2 Direct sampled image superposition requires on the order of M L operations, and the corresponding sparseness index of B is L 2 k B =  ----  - (9.2-3b) M Figure 9.2-1f is a block diagram of a system for performing circulant superposition by transform domain processing. In this case, the input vector kE is the extended data vector, obtained by embedding the input image array F ( n1, n 2 ) in the left cor- ner of a J × J array of zeros and then column scanning the resultant matrix. Follow- ing the same reasoning as above, it is seen that –1 k E = Cf E = [ A 2 ] C [ A 2 ]f (9.2-4a) J J and hence, –1 C = [ A 2 ]C [ A 2 ] (9.2-4b) J J As noted in Chapter 7, the equivalent output vector for either finite-area or sampled image superposition can be obtained by an element selection operation of kE. For finite-area superposition, (M) (M) q = [ S1 J ⊗ S1 J ]k E (9.2-5a) and for sampled image superposition (M) (M) g = [ S2 J ⊗ S2 J ]k E (9.2-5b)
  • 228. TRANSFORM DOMAIN SUPERPOSITION 219 Also, the matrix form of the output for finite-area superposition is related to the extended image matrix KE by (M) (M) T Q = [ S1 J ]K E [ S1 J ] (9.2-6a) For sampled image superposition, (M) (M) T G = [ S2 J ]K E [ S2 J ] (9.2-6b) The number of computational operations required to obtain kE by transform domain processing is given by the previous analysis for M = N = J. 4 Direct transformation 3J 2 2 Fast transformation: J + 4J log 2 J 2 If C is sparse, many of the J filter multiplication operations can be avoided. From the discussion above, it can be seen that the secret to computationally effi- cient superposition is to select a transformation that possesses a fast computational algorithm that results in a relatively sparse transform domain superposition filter matrix. As an example, consider finite-area convolution performed by Fourier domain processing (2,3). Referring to Figure 9.2-1, let A 2 = AK ⊗ AK (9.2-7) K where 1 - ( x – 1) (y – 1 )   AK = ------- W with W ≡ exp  – 2πi  ----------- K  K  (K) 2 for x, y = 1, 2,..., K. Also, let h E denote the K × 1 vector representation of the extended spatially invariant impulse response array of Eq. 7.3-2 for J = K. The Fou- (K) rier transform of h E is denoted as (K) (K ) hE = [ A 2 ]h E (9.2-8) K 2 2 These transform components are then inserted as the diagonal elements of a K × K matrix ( K) ( K) (K) 2 H = diag [ h E ( 1 ), …, h E ( K ) ] (9.2-9)
  • 229. 220 LINEAR PROCESSING TECHNIQUES Then, it can be shown, after considerable manipulation, that the Fourier transform domain superposition matrices for finite area and sampled image convolution can be written as (4) (M) D = H [ PD ⊗ PD ] (9.2-10) for N = M – L + 1 and (N) B = [ PB ⊗ PB ] H (9.2-11) where N = M + L + 1 and –(u – 1 ) (L – 1 ) 1 1 – WM P D ( u, v ) = -------- --------------------------------------------------------------- - - (9.2-12a) M 1 – W M – ( u – 1 ) – W N –( v – 1 ) –( v – 1 ) ( L – 1 ) 1 1 – WN PB ( u, v ) = ------- --------------------------------------------------------------- - - (9.2-12b) N 1 – W M –( u – 1 ) – W N– ( v – 1 ) Thus the transform domain convolution operators each consist of a scalar weighting (K ) matrix H and an interpolation matrix ( P ⊗ P ) that performs the dimensionality con- 2 2 version between the N - element input vector and the M - element output vector. Generally, the interpolation matrix is relatively sparse, and therefore, transform domain superposition is quite efficient. Now, consider circulant area convolution in the transform domain. Following the previous analysis it is found (4) that the circulant area convolution filter matrix reduces to a scalar operator (J ) C = JH (9.2-13) Thus, as indicated in Eqs. 9.2-10 to 9.2-13, the Fourier domain convolution filter matrices can be expressed in a compact closed form for analysis or operational stor- age. No closed-form expressions have been found for other unitary transforms. Fourier domain convolution is computationally efficient because the convolution operator C is a circulant matrix, and the corresponding filter matrix C is of diagonal form. Actually, as can be seen from Eq. 9.1-6, the Fourier transform basis vectors are eigenvectors of C (5). This result does not hold true for superposition in general, nor for convolution using other unitary transforms. However, in many instances, the filter matrices D, B, and C are relatively sparse, and computational savings can often be achieved by transform domain processing.
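The key fact behind Eq. 9.2-13 is that a circulant convolution operator is diagonalized by the Fourier transform, so the transform-domain filter reduces to a pointwise (scalar) multiplication. The one-dimensional NumPy sketch below verifies this; the factor $J$ in Eq. 9.2-13 reflects the text's $1/J$ transform scaling, whereas with NumPy's unnormalized forward FFT no extra factor is needed. The two-dimensional case follows by applying the same argument along rows and columns.

import numpy as np

J = 8
h = np.array([1.0, 2.0, 1.0])                # small impulse response
h_E = np.zeros(J); h_E[:h.size] = h          # extended (zero-padded) impulse response
f_E = np.random.rand(J)                      # extended data vector

# Circulant convolution operator C: column s is h_E circularly shifted by s.
C = np.array([np.roll(h_E, s) for s in range(J)]).T

# Direct circulant convolution versus Fourier domain pointwise filtering.
k_E_direct = C @ f_E
k_E_fourier = np.fft.ifft(np.fft.fft(h_E) * np.fft.fft(f_E)).real
assert np.allclose(k_E_direct, k_E_fourier)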
  • 230. FAST FOURIER TRANSFORM CONVOLUTION 221 Signal Fourier Hadamard (a) Finite length convolution (b) Sampled data convolution (c) Circulant convolution FIGURE 9.2-2. One-dimensional Fourier and Hadamard domain convolution matrices. Figure 9.2-2 shows the Fourier and Hadamard domain filter matrices for the three forms of convolution for a one-dimensional input vector and a Gaussian-shaped impulse response (6). As expected, the transform domain representations are much more sparse than the data domain representations. Also, the Fourier domain circulant convolution filter is seen to be of diagonal form. Figure 9.2-3 illustrates the structure of the three convolution matrices for two-dimensional convolution (4). 9.3. FAST FOURIER TRANSFORM CONVOLUTION As noted previously, the equivalent output vector for either finite-area or sampled image convolution can be obtained by an element selection operation on the extended output vector kE for circulant convolution or its matrix counterpart KE.
  • 231. 222 LINEAR PROCESSING TECHNIQUES Spatial domain Fourier domain (a) Finite-area convolution (b) Sampled image convolution (c) Circulant convolution FIGURE 9.2-3. Two-dimensional Fourier domain convolution matrices. This result, combined with Eq. 9.2-13, leads to a particularly efficient means of con- volution computation indicated by the following steps: 1. Embed the impulse response matrix in the upper left corner of an all-zero J × J matrix, J ≥ M for finite-area convolution or J ≥ N for sampled infinite-area convolution, and take the two-dimensional Fourier trans- form of the extended impulse response matrix, giving
$$\mathcal{H}_E = A_J H_E A_J$$   (9.3-1)

2. Embed the input data array in the upper left corner of an all-zero $J \times J$ matrix, and take the two-dimensional Fourier transform of the extended input data matrix to obtain

$$\mathcal{F}_E = A_J F_E A_J$$   (9.3-2)

3. Perform the scalar multiplication

$$\mathcal{K}_E(m, n) = J\, \mathcal{H}_E(m, n)\, \mathcal{F}_E(m, n)$$   (9.3-3)

where $1 \le m, n \le J$.

4. Take the inverse Fourier transform

$$K_E = [A_J]^{-1}\, \mathcal{K}_E\, [A_J]^{-1}$$   (9.3-4)

5. Extract the desired output matrix

$$Q = [S_{1J}^{(M)}]\, K_E\, [S_{1J}^{(M)}]^T$$   (9.3-5a)

or

$$G = [S_{2J}^{(M)}]\, K_E\, [S_{2J}^{(M)}]^T$$   (9.3-5b)

It is important that the size of the extended arrays in steps 1 and 2 be chosen large enough to satisfy the inequalities indicated. If the computational steps are performed with $J = N$, the resulting output array, shown in Figure 9.3-1, will contain erroneous terms in a boundary region of width $L - 1$ elements, on the top and left-hand side of the output field. This is the wraparound error associated with incorrect use of the Fourier domain convolution method. In addition, for finite-area (D-type) convolution, the bottom and right-hand-side strip of output elements will be missing. If the computation is performed with $J = M$, the output array will be completely filled with the correct terms for D-type convolution. To force $J = M$ for B-type convolution, it is necessary to truncate the bottom and right-hand side of the input array. As a consequence, the top and left-hand-side elements of the output array are erroneous.
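The following NumPy sketch carries out steps 1 to 4 for finite-area (D-type) convolution with proper zero padding, using numpy's fft2/ifft2 in place of the $A_J$ matrix products (with this unnormalized FFT convention the pointwise product needs no factor of $J$). Choosing $J = N + L - 1 = M$ means the output array is completely filled with correct terms, so no selection step or wraparound correction is needed; the function name is illustrative.

import numpy as np

def fft_convolve_finite_area(F, H):
    """Finite-area convolution by Fourier domain processing with proper zero padding."""
    N, L = F.shape[0], H.shape[0]
    J = N + L - 1                                  # J = M, large enough to avoid wraparound
    H_E = np.zeros((J, J)); H_E[:L, :L] = H        # step 1: extended impulse response array
    F_E = np.zeros((J, J)); F_E[:N, :N] = F        # step 2: extended input data array
    return np.fft.ifft2(np.fft.fft2(H_E) * np.fft.fft2(F_E)).real   # steps 3 and 4

# Check against direct finite-area convolution for a small example.
rng = np.random.default_rng(0)
F, H = rng.random((8, 8)), rng.random((3, 3))
Q = fft_convolve_finite_area(F, H)

Q_direct = np.zeros_like(Q)
for j in range(3):
    for k in range(3):
        Q_direct[j:j + 8, k:k + 8] += H[j, k] * F
assert np.allclose(Q, Q_direct)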
  • 233. 224 LINEAR PROCESSING TECHNIQUES FIGURE 9.3-1. Wraparound error effects. Figure 9.3-2 illustrates the Fourier transform convolution process with proper zero padding. The example in Figure 9.3-3 shows the effect of no zero padding. In both examples, the image has been filtered using a 11 × 11 uniform impulse response array. The source image of Figure 9.3-3 is 512 × 512 pixels. The source image of Figure 9.3-2 is 502 × 502 pixels. It has been obtained by truncating the bot- tom 10 rows and right 10 columns of the source image of Figure 9.3-3. Figure 9.3-4 shows computer printouts of the upper left corner of the processed images. Figure 9.3-4a is the result of finite-area convolution. The same output is realized in Figure 9.3-4b for proper zero padding. Figure 9.3-4c shows the wraparound error effect for no zero padding. In many signal processing applications, the same impulse response operator is used on different data, and hence step 1 of the computational algorithm need not be repeated. The filter matrix HE may be either stored functionally or indirectly as a computational algorithm. Using a fast Fourier transform algorithm, the forward and 2 inverse transforms require on the order of 2J log 2 J operations each. The scalar 2 2 multiplication requires J operations, in general, for a total of J ( 1 + 4 log 2 J ) opera- tions. For an N × N input array, an M × M output array, and an L × L impulse 2 2 response array, finite-area convolution requires N L operations, and sampled 2 2 image convolution requires M L operations. If the dimension of the impulse response L is sufficiently large with respect to the dimension of the input array N, Fourier domain convolution will be more efficient than direct convolution, perhaps by an order of magnitude or more. Figure 9.3-5 is a plot of L versus N for equality
FIGURE 9.3-2. Fourier transform convolution of the candy_502_luma image with proper zero padding, clipped magnitude displays of Fourier images: (a) H_E; (b) its Fourier transform; (c) F_E; (d) its Fourier transform; (e) K_E; (f) its Fourier transform.
FIGURE 9.3-3. Fourier transform convolution of the candy_512_luma image with improper zero padding, clipped magnitude displays of Fourier images: (a) H_E; (b) its Fourier transform; (c) F_E; (d) its Fourier transform; (e) K_E; (f) its Fourier transform.
between direct and Fourier domain finite-area convolution. The jaggedness of the plot, in this example, arises from discrete changes in $J$ (64, 128, 256,...) as $N$ increases.

FIGURE 9.3-4. Wraparound error for Fourier transform convolution, upper left corner of processed image: (a) finite-area convolution; (b) Fourier transform convolution with proper zero padding; (c) Fourier transform convolution without zero padding.

Fourier domain processing is more computationally efficient than direct processing for image convolution if the impulse response is sufficiently large. However, if the image to be processed is large, the relative computational advantage of Fourier domain processing diminishes. Also, there are attendant problems of computational
  • 237. 228 LINEAR PROCESSING TECHNIQUES FIGURE 9.3-5. Comparison of direct and Fourier domain processing for finite-area convo- lution. accuracy with large Fourier transforms. Both difficulties can be alleviated by a block-mode filtering technique in which a large image is separately processed in adjacent overlapped blocks (2, 7–9). Figure 9.3-6a illustrates the extraction of a NB × NB pixel block from the upper left corner of a large image array. After convolution with a L × L impulse response, the resulting M B × M B pixel block is placed in the upper left corner of an output FIGURE 9.3-6. Geometric arrangement of blocks for block-mode filtering.
  • 238. FOURIER TRANSFORM FILTERING 229 data array as indicated in Figure 9.3-6a. Next, a second block of N B × N B pixels is extracted from the input array to produce a second block of M B × M B output pixels that will lie adjacent to the first block. As indicated in Figure 9.3-6b, this second input block must be overlapped by (L – 1) pixels in order to generate an adjacent output block. The computational process then proceeds until all input blocks are filled along the first row. If a partial input block remains along the row, zero-value elements can be added to complete the block. Next, an input block, overlapped by (L –1) pixels with the first row blocks, is extracted to produce the first block of the second output row. The algorithm continues in this fashion until all output points are computed. A total of 2 2 O F = N + 2N log 2 N (9.3-6) operations is required for Fourier domain convolution over the full size image array. With block-mode filtering with N B × N B input pixel blocks, the required number of operations is 2 2 2 O B = R ( N B + 2NB log 2 N ) (9.3-7) where R represents the largest integer value of the ratio N ⁄ ( N B + L – 1 ). Hunt (9) has determined the optimum block size as a function of the original image size and impulse response size. 9.4. FOURIER TRANSFORM FILTERING The discrete Fourier transform convolution processing algorithm of Section 9.3 is often utilized for computer simulation of continuous Fourier domain filtering. In this section we consider discrete Fourier transform filter design techniques. 9.4.1. Transfer Function Generation The first step in the discrete Fourier transform filtering process is generation of the discrete domain transfer function. For simplicity, the following discussion is limited to one-dimensional signals. The extension to two dimensions is straightforward. Consider a one-dimensional continuous signal f C ( x ) of wide extent which is bandlimited such that its Fourier transform f C ( ω ) is zero for ω greater than a cut- off frequency ω 0. This signal is to be convolved with a continuous impulse function h C ( x ) whose transfer function h C ( ω ) is also bandlimited to ω 0 . From Chapter 1 it is known that the convolution can be performed either in the spatial domain by the operation
    g_C(x) = \int_{-\infty}^{\infty} f_C(\alpha) \, h_C(x - \alpha) \, d\alpha    (9.4-1a)

or in the continuous Fourier domain by

    g_C(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} f_C(\omega) \, h_C(\omega) \exp\{ i\omega x \} \, d\omega    (9.4-1b)

Chapter 7 has presented techniques for the discretization of the convolution integral of Eq. 9.4-1. In this process, the continuous impulse response function h_C(x) must be truncated by spatial multiplication of a window function y(x) to produce the windowed impulse response

    b_C(x) = h_C(x) \, y(x)    (9.4-2)

where y(x) = 0 for |x| > T. The window function is designed to smooth the truncation effect. The resulting convolution integral is then approximated as

    g_C(x) = \int_{x-T}^{x+T} f_C(\alpha) \, b_C(x - \alpha) \, d\alpha    (9.4-3)

Next, the output signal g_C(x) is sampled over 2J + 1 points at a resolution \Delta = \pi / \omega_0, and the continuous integration is replaced by a quadrature summation at the same resolution \Delta, yielding the discrete representation

    g_C(j\Delta) = \sum_{k=j-K}^{j+K} f_C(k\Delta) \, b_C[(j - k)\Delta]    (9.4-4)

where K is the nearest integer value of the ratio T / \Delta.

Computation of Eq. 9.4-4 by discrete Fourier transform processing requires formation of the discrete domain transfer function b_D(u). If the continuous domain impulse response function h_C(x) is known analytically, the samples of the windowed impulse response function are inserted as the first L = 2K + 1 elements of a J-element sequence and the remaining J – L elements are set to zero. Thus, let

    b_D(p) = [\, \underbrace{b_C(-K), \ldots, b_C(0), \ldots, b_C(K)}_{L \text{ terms}}, 0, \ldots, 0 \,]    (9.4-5)

where 0 ≤ p ≤ P – 1. The terms of b_D(p) can be extracted from the continuous impulse response function h_C(x) and the window function by the sampling operation
    b_D(p) = y(x) \, h_C(x) \, \delta(x - p\Delta)    (9.4-6)

The next step in the discrete Fourier transform convolution algorithm is to perform a discrete Fourier transform of b_D(p) over P points to obtain

    b_D(u) = \frac{1}{P} \sum_{p=0}^{P-1} b_D(p) \exp\left\{ \frac{-2\pi i p u}{P} \right\}    (9.4-7)

where 0 ≤ u ≤ P – 1.

If the continuous domain transfer function h_C(ω) is known analytically, then b_D(u) can be obtained directly. It can be shown that

    b_D(u) = \left[ \frac{1}{4 P \pi^2} \right]^{1/2} \exp\left\{ \frac{-i\pi (L - 1) u}{P} \right\} b_C\left( \frac{2\pi u}{P\Delta} \right)    (9.4-8a)

    b_D(P - u) = b_D^{*}(u)    (9.4-8b)

for u = 0, 1,..., P/2, where

    b_C(\omega) = h_C(\omega) \circledast y(\omega)    (9.4-8c)

and y(ω) is the continuous domain Fourier transform of the window function y(x). If h_C(ω) and y(ω) are known analytically, then, in principle, b_C(ω) can be obtained by analytically performing the convolution operation of Eq. 9.4-8c and evaluating the resulting continuous function at points 2πu / PΔ. In practice, the analytic convolution is often difficult to perform, especially in two dimensions. An alternative is to perform an analytic inverse Fourier transformation of the transfer function h_C(ω) to obtain its continuous domain impulse response h_C(x), and then form b_D(u) from the steps of Eqs. 9.4-5 to 9.4-7. Still another alternative is to form b_D(u) from h_C(ω) according to Eqs. 9.4-8a and 9.4-8b, take its discrete inverse Fourier transform, window the resulting sequence, and then form b_D(u) from Eq. 9.4-7.

9.4.2. Windowing Functions

The windowing operation performed explicitly in the spatial domain according to Eq. 9.4-6, or implicitly in the Fourier domain by Eq. 9.4-8, is absolutely imperative if the wraparound error effect described in Section 9.3 is to be avoided. A common mistake in image filtering is to set the values of the discrete impulse response function arbitrarily equal to samples of the continuous impulse response function. The corresponding extended discrete impulse response function will generally possess nonzero elements in each of its J elements. That is, the length L of the discrete
impulse response embedded in the extended vector of Eq. 9.4-5 will implicitly be set equal to J. Therefore, all elements of the output filtering operation will be subject to wraparound error.

A variety of window functions have been proposed for discrete linear filtering (10–12). Several of the most common are listed in Table 9.4-1 and sketched in Figure 9.4-1. Figure 9.4-2 shows plots of the transfer functions of these window functions. The window transfer functions consist of a main lobe and sidelobes whose peaks decrease in magnitude with increasing frequency. Examination of the structure of Eq. 9.4-8 indicates that the main lobe causes a loss in frequency response over the signal passband from 0 to ω_0, while the sidelobes are responsible for an aliasing error because the windowed impulse response function b_C(ω) is not bandlimited. A tapered window function reduces the magnitude of the sidelobes and consequently attenuates the aliasing error, but the main lobe becomes wider, causing the signal frequency response within the passband to be reduced. A design trade-off must be made between these complementary sources of error. Both sources of degradation can be reduced by increasing the truncation length of the windowed impulse response, but this strategy will either result in a shorter length output sequence or an increased number of computational operations.

TABLE 9.4-1. Window Functions^a

Rectangular:
    w(n) = 1    for 0 ≤ n ≤ L – 1

Bartlett (triangular):
    w(n) = \frac{2n}{L-1}    for 0 ≤ n ≤ (L – 1)/2
    w(n) = 2 - \frac{2n}{L-1}    for (L – 1)/2 ≤ n ≤ L – 1

Hanning:
    w(n) = \frac{1}{2}\left[ 1 - \cos\left( \frac{2\pi n}{L-1} \right) \right]    for 0 ≤ n ≤ L – 1

Hamming:
    w(n) = 0.54 - 0.46 \cos\left( \frac{2\pi n}{L-1} \right)    for 0 ≤ n ≤ L – 1

Blackman:
    w(n) = 0.42 - 0.5 \cos\left( \frac{2\pi n}{L-1} \right) + 0.08 \cos\left( \frac{4\pi n}{L-1} \right)    for 0 ≤ n ≤ L – 1

Kaiser:
    w(n) = \frac{ I_0\left\{ \omega_a \left[ \left( \frac{L-1}{2} \right)^2 - \left( n - \frac{L-1}{2} \right)^2 \right]^{1/2} \right\} }{ I_0\left\{ \omega_a \frac{L-1}{2} \right\} }    for 0 ≤ n ≤ L – 1

a. I_0\{\cdot\} is the modified zeroth-order Bessel function of the first kind and ω_a is a design parameter.
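The window definitions in Table 9.4-1 are simple to generate numerically. The following Python/NumPy sketch is an illustration added here (it is not part of the original text); the function names and the sinc test signal are assumptions made only for the example.

import numpy as np

def bartlett(L):
    n = np.arange(L)
    # Equivalent to the two-branch triangular form in Table 9.4-1.
    return 1.0 - np.abs(2.0 * n / (L - 1) - 1.0)

def hanning(L):
    n = np.arange(L)
    return 0.5 * (1.0 - np.cos(2.0 * np.pi * n / (L - 1)))

def hamming(L):
    n = np.arange(L)
    return 0.54 - 0.46 * np.cos(2.0 * np.pi * n / (L - 1))

def blackman(L):
    n = np.arange(L)
    return (0.42 - 0.5 * np.cos(2.0 * np.pi * n / (L - 1))
            + 0.08 * np.cos(4.0 * np.pi * n / (L - 1)))

# Example: taper a truncated L-point impulse response segment before embedding it
# in the zero-padded P-point sequence of Eq. 9.4-5.
L = 31
h_segment = np.sinc(np.linspace(-4, 4, L))   # a hypothetical truncated impulse response
b_segment = h_segment * hamming(L)           # windowed impulse response, as in Eq. 9.4-2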
FIGURE 9.4-1. One-dimensional window functions.

9.4.3. Discrete Domain Transfer Functions

In practice, it is common to define the discrete domain transform directly in the discrete Fourier transform frequency space. The following are definitions of several widely used transfer functions for an N × N pixel image. Applications of these filters are presented in Chapter 10.

1. Zonal low-pass filter:

    H(u, v) = 1    for  0 ≤ u ≤ C – 1  and  0 ≤ v ≤ C – 1
                   or   0 ≤ u ≤ C – 1  and  N + 1 – C ≤ v ≤ N – 1
                   or   N + 1 – C ≤ u ≤ N – 1  and  0 ≤ v ≤ C – 1
                   or   N + 1 – C ≤ u ≤ N – 1  and  N + 1 – C ≤ v ≤ N – 1    (9.4-9a)

    H(u, v) = 0    otherwise    (9.4-9b)

where C is the filter cutoff frequency for 0 < C ≤ 1 + N/2. Figure 9.4-3 illustrates the low-pass filter zones.
(a) Rectangular (b) Triangular (c) Hanning (d) Hamming (e) Blackman

FIGURE 9.4-2. Transfer functions of one-dimensional window functions.

2. Zonal high-pass filter:

    H(0, 0) = 0    (9.4-10a)

    H(u, v) = 0    for  0 ≤ u ≤ C – 1  and  0 ≤ v ≤ C – 1
                   or   0 ≤ u ≤ C – 1  and  N + 1 – C ≤ v ≤ N – 1
                   or   N + 1 – C ≤ u ≤ N – 1  and  0 ≤ v ≤ C – 1
                   or   N + 1 – C ≤ u ≤ N – 1  and  N + 1 – C ≤ v ≤ N – 1    (9.4-10b)

    H(u, v) = 1    otherwise    (9.4-10c)
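As an added illustration (not part of the original text), the zonal low-pass and high-pass transfer functions of Eqs. 9.4-9 and 9.4-10 can be generated on an N × N discrete Fourier transform grid as follows; this is a minimal NumPy sketch and the function names are assumptions.

import numpy as np

def zonal_low_pass(N, C):
    # Eq. 9.4-9: unity in the four corner zones of the DFT plane, zero elsewhere.
    u = np.arange(N)
    in_zone = (u <= C - 1) | (u >= N + 1 - C)
    H = np.zeros((N, N))
    H[np.ix_(in_zone, in_zone)] = 1.0
    return H

def zonal_high_pass(N, C):
    # Eq. 9.4-10: complement of the low-pass zones, with H(0,0) = 0.
    H = 1.0 - zonal_low_pass(N, C)
    H[0, 0] = 0.0
    return H

# Example usage for a 512 x 512 image with cutoff frequency 64, as in Figure 9.4-4.
H_lp = zonal_low_pass(512, 64)
H_hp = zonal_high_pass(512, 64)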
FIGURE 9.4-3. Zonal filter transfer function definition.

3. Gaussian filter:

    H(u, v) = G(u, v)            for  0 ≤ u ≤ N/2  and  0 ≤ v ≤ N/2
    H(u, v) = G(u, N – v)        for  0 ≤ u ≤ N/2  and  1 + N/2 ≤ v ≤ N – 1
    H(u, v) = G(N – u, v)        for  1 + N/2 ≤ u ≤ N – 1  and  0 ≤ v ≤ N/2
    H(u, v) = G(N – u, N – v)    for  1 + N/2 ≤ u ≤ N – 1  and  1 + N/2 ≤ v ≤ N – 1    (9.4-11a)

where

    G(u, v) = \exp\left\{ -\frac{1}{2} \left[ (s_u u)^2 + (s_v v)^2 \right] \right\}    (9.4-11b)

and s_u and s_v are the Gaussian filter spread factors.
4. Butterworth low-pass filter:

    H(u, v) = B(u, v)            for  0 ≤ u ≤ N/2  and  0 ≤ v ≤ N/2
    H(u, v) = B(u, N – v)        for  0 ≤ u ≤ N/2  and  1 + N/2 ≤ v ≤ N – 1
    H(u, v) = B(N – u, v)        for  1 + N/2 ≤ u ≤ N – 1  and  0 ≤ v ≤ N/2
    H(u, v) = B(N – u, N – v)    for  1 + N/2 ≤ u ≤ N – 1  and  1 + N/2 ≤ v ≤ N – 1    (9.4-12a)

where

    B(u, v) = \frac{1}{1 + \left[ \dfrac{(u^2 + v^2)^{1/2}}{C} \right]^{2n}}    (9.4-12b)

where the integer variable n is the order of the filter. The Butterworth low-pass filter provides an attenuation of 50% at the cutoff frequency C = (u^2 + v^2)^{1/2}.

5. Butterworth high-pass filter:

    H(u, v) = B(u, v)            for  0 ≤ u ≤ N/2  and  0 ≤ v ≤ N/2
    H(u, v) = B(u, N – v)        for  0 ≤ u ≤ N/2  and  1 + N/2 ≤ v ≤ N – 1
    H(u, v) = B(N – u, v)        for  1 + N/2 ≤ u ≤ N – 1  and  0 ≤ v ≤ N/2
    H(u, v) = B(N – u, N – v)    for  1 + N/2 ≤ u ≤ N – 1  and  1 + N/2 ≤ v ≤ N – 1    (9.4-13a)

where

    B(u, v) = \frac{1}{1 + \left[ \dfrac{C}{(u^2 + v^2)^{1/2}} \right]^{2n}}    (9.4-13b)

Figure 9.4-4 shows the transfer functions of zonal and Butterworth low- and high-pass filters for a 512 × 512 pixel image.

(a) Zonal low-pass (b) Butterworth low-pass (c) Zonal high-pass (d) Butterworth high-pass

FIGURE 9.4-4. Zonal and Butterworth low- and high-pass transfer functions; 512 × 512 images; cutoff frequency = 64.
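The Gaussian and Butterworth transfer functions of Eqs. 9.4-11 to 9.4-13 can be constructed in the same zonal fashion. The sketch below is an illustration added here, not taken from the original text; it uses folded frequency indices so that the response is symmetric about the DFT origin, and the helper names are assumptions.

import numpy as np

def _folded(N):
    u = np.arange(N)
    return np.minimum(u, N - u).astype(float)   # distance to the nearest DFT origin copy

def gaussian_transfer(N, su, sv):
    # Eq. 9.4-11 evaluated on folded row/column frequency indices.
    U, V = np.meshgrid(_folded(N), _folded(N), indexing='ij')
    return np.exp(-0.5 * ((su * U) ** 2 + (sv * V) ** 2))

def butterworth_low_pass(N, C, n):
    # Eq. 9.4-12: 50% attenuation at radius C; n is the filter order.
    U, V = np.meshgrid(_folded(N), _folded(N), indexing='ij')
    r = np.sqrt(U ** 2 + V ** 2)
    return 1.0 / (1.0 + (r / C) ** (2 * n))

def butterworth_high_pass(N, C, n):
    # Eq. 9.4-13; the dc term (r = 0) is forced to zero to avoid division by zero.
    U, V = np.meshgrid(_folded(N), _folded(N), indexing='ij')
    r = np.sqrt(U ** 2 + V ** 2)
    H = np.zeros((N, N))
    nz = r > 0
    H[nz] = 1.0 / (1.0 + (C / r[nz]) ** (2 * n))
    return H

def fourier_filter(image, H):
    # Discrete Fourier domain filtering: multiply the image spectrum by H.
    return np.real(np.fft.ifft2(np.fft.fft2(image) * H))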
9.5. SMALL GENERATING KERNEL CONVOLUTION

It is possible to perform convolution on an N × N image array F(j, k) with an arbitrary L × L impulse response array H(j, k) by a sequential technique called small generating kernel (SGK) convolution (13–16). Figure 9.5-1 illustrates the decomposition process, in which an L × L prototype impulse response array H(j, k) is sequentially decomposed into 3 × 3 pixel SGKs according to the relation

    \hat{H}(j, k) = K_1(j, k) \circledast K_2(j, k) \circledast \cdots \circledast K_Q(j, k)    (9.5-1)

where \hat{H}(j, k) is the synthesized impulse response array, the symbol \circledast denotes centered two-dimensional finite-area convolution, as defined by Eq. 7.1-14, and K_i(j, k) is the ith 3 × 3 pixel SGK of the decomposition, where Q = (L – 1)/2. The SGK convolution technique can be extended to larger SGK kernels. Generally, the SGK synthesis of Eq. 9.5-1 is not exact. Techniques have been developed for choosing the SGKs to minimize the mean-square error between \hat{H}(j, k) and H(j, k) (13).
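As an added illustration (not part of the original text), the cascade of Eq. 9.5-1 can be simulated directly: convolving Q small 3 × 3 kernels with one another reconstitutes an L × L impulse response with L = 2Q + 1. The sketch assumes SciPy's convolve2d as a stand-in for the centered finite-area convolution; the kernel choice is hypothetical.

import numpy as np
from scipy.signal import convolve2d

def synthesize_impulse_response(kernels):
    # Eq. 9.5-1: sequentially convolve the 3 x 3 SGKs K_1, ..., K_Q.
    H_hat = kernels[0]
    for K in kernels[1:]:
        H_hat = convolve2d(H_hat, K, mode='full')
    return H_hat

def sgk_convolve(image, kernels):
    # Applying the SGKs one after another to the image is, up to boundary handling,
    # equivalent to convolving the image with the synthesized impulse response.
    out = image
    for K in kernels:
        out = convolve2d(out, K, mode='same', boundary='fill')
    return out

# Example: three separable 3 x 3 smoothing SGKs synthesize a 7 x 7 low-pass response.
smooth = np.outer([1, 2, 1], [1, 2, 1]) / 16.0
kernels = [smooth, smooth, smooth]
H7 = synthesize_impulse_response(kernels)   # 7 x 7 array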
FIGURE 9.5-1. Cascade decomposition of a two-dimensional impulse response array into small generating kernels.

Two-dimensional convolution can be performed sequentially without approximation error by utilizing the singular-value decomposition technique described in Appendix A1.2 in conjunction with the SGK decimation (17–19). With this method, called SVD/SGK convolution, the impulse response array H(j, k) is regarded as a matrix H. Suppose that H is orthogonally separable, such that it can be expressed in the outer product form

    H = a b^T    (9.5-2)

where a and b are column and row operator vectors, respectively. Then, the two-dimensional convolution operation can be performed by first convolving the columns of F(j, k) with the impulse response sequence a(j) corresponding to the vector a, and then convolving the rows of that resulting array with the sequence b(k) corresponding to the vector b. If H is not separable, the matrix can be expressed as a sum of separable matrices by the singular-value decomposition, by which

    H = \sum_{i=1}^{R} H_i    (9.5-3a)

    H_i = s_i \, a_i b_i^T    (9.5-3b)

where R ≥ 1 is the rank of H and s_i is the ith singular value of H. The vectors a_i and b_i are the L × 1 eigenvectors of H H^T and H^T H, respectively.

Each eigenvector a_i and b_i of Eq. 9.5-3 can be considered to be a one-dimensional sequence, which can be decimated by a small generating kernel expansion as

    a_i(j) = c_i [\, a_{i1}(j) \circledast \cdots \circledast a_{iq}(j) \circledast \cdots \circledast a_{iQ}(j) \,]    (9.5-4a)

    b_i(k) = r_i [\, b_{i1}(k) \circledast \cdots \circledast b_{iq}(k) \circledast \cdots \circledast b_{iQ}(k) \,]    (9.5-4b)

where a_{iq}(j) and b_{iq}(k) are 3 × 1 impulse response sequences corresponding to the ith singular-value channel and the qth SGK expansion. The terms c_i and r_i are column and row gain constants. They are equal to the sum of the elements of their respective sequences if the sum is nonzero, and equal to the sum of the magnitudes
FIGURE 9.5-2. Nonseparable SVD/SGK expansion.

otherwise. The former case applies for a unit-gain filter impulse response, while the latter case applies for a differentiating filter.

As a result of the linearity of the SVD expansion of Eq. 9.5-3b, the large size impulse response array H_i(j, k) corresponding to the matrix H_i of Eq. 9.5-3a can be synthesized by sequential 3 × 3 convolutions according to the relation

    H_i(j, k) = r_i c_i [\, K_{i1}(j, k) \circledast \cdots \circledast K_{iq}(j, k) \circledast \cdots \circledast K_{iQ}(j, k) \,]    (9.5-5)

where K_{iq}(j, k) is the qth SGK of the ith SVD channel. Each K_{iq}(j, k) is formed by an outer product expansion of a pair of the a_{iq}(j) and b_{iq}(k) terms of Eq. 9.5-4. The ordering is important only for low-precision computation when roundoff error becomes a consideration. Figure 9.5-2 is the flowchart for SVD/SGK convolution. The weighting terms in the figure are

    W_i = s_i r_i c_i    (9.5-6)

Reference 19 describes the design procedure for computing the K_{iq}(j, k).

REFERENCES

1. W. K. Pratt, "Generalized Wiener Filtering Computation Techniques," IEEE Trans. Computers, C-21, 7, July 1972, 636–641.
2. T. G. Stockham, Jr., "High Speed Convolution and Correlation," Proc. Spring Joint Computer Conference, 1966, 229–233.
3. W. M. Gentleman and G. Sande, "Fast Fourier Transforms for Fun and Profit," Proc. Fall Joint Computer Conference, 1966, 563–578.
  • 249. 240 LINEAR PROCESSING TECHNIQUES 4. W. K. Pratt, “Vector Formulation of Two-Dimensional Signal Processing Operations,” Computer Graphics and Image Processing, 4, 1, March 1975, 1–24. 5. B. R. Hunt, “A Matrix Theory Proof of the Discrete Convolution Theorem,” IEEE Trans. Audio and Electroacoustics, AU-19, 4, December 1973, 285–288. 6. W. K. Pratt, “Transform Domain Signal Processing Techniques,” Proc. National Elec- tronics Conference, Chicago, 1974. 7. H. D. Helms, “Fast Fourier Transform Method of Computing Difference Equations and Simulating Filters,” IEEE Trans. Audio and Electroacoustics, AU-15, 2, June 1967, 85– 90. 8. M. P. Ekstrom and V. R. Algazi, “Optimum Design of Two-Dimensional Nonrecursive Digital Filters,” Proc. 4th Asilomar Conference on Circuits and Systems, Pacific Grove, CA, November 1970. 9. B. R. Hunt, “Computational Considerations in Digital Image Enhancement,” Proc. Con- ference on Two-Dimensional Signal Processing, University of Missouri, Columbia, MO, October 1971. 10. A. V. Oppenheim and R. W. Schafer, Digital Signal Processing, Prentice Hall, Engle- wood Cliffs, NJ, 1975. 11. R. B. Blackman and J. W. Tukey, The Measurement of Power Spectra, Dover Publica- tions, New York, 1958. 12. J. F. Kaiser, “Digital Filters”, Chapter 7 in Systems Analysis by Digital Computer, F. F. Kuo and J. F. Kaiser, Eds., Wiley, New York, 1966. 13. J. F. Abramatic and O. D. Faugeras, “Design of Two-Dimensional FIR Filters from Small Generating Kernels,” Proc. IEEE Conference on Pattern Recognition and Image Processing, Chicago, May 1978. 14. W. K. Pratt, J. F. Abramatic, and O. D. Faugeras, “Method and Apparatus for Improved Digital Image Processing,” U.S. patent 4,330,833, May 18, 1982. 15. J. F. Abramatic and O. D. Faugeras, “Sequential Convolution Techniques for Image Fil- tering,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 1, February 1982, 1–10. 16. J. F. Abramatic and O. D. Faugeras, “Correction to Sequential Convolution Techniques for Image Filtering,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 2, April 1982, 346. 17. W. K. Pratt, “Intelligent Image Processing Display Terminal,” Proc. SPIE, 199, August 1979, 189–194. 18. J. F. Abramatic and S. U. Lee, “Singular Value Decomposition of 2-D Impulse Responses,” Proc. International Conference on Acoustics, Speech, and Signal Process- ing, Denver, CO, April 1980, 749–752. 19. S. U. Lee, “Design of SVD/SGK Convolution Filters for Image Processing,” Report USCIPI 950, University Southern California, Image Processing Institute, January 1980.
PART 4

IMAGE IMPROVEMENT

The use of digital processing techniques for image improvement has received much interest with the publicity given to applications in space imagery and medical research. Other applications include image improvement for photographic surveys and industrial radiographic analysis.

Image improvement is a term coined to denote three types of image manipulation processes: image enhancement, image restoration, and geometrical image modification. Image enhancement entails operations that improve the appearance to a human viewer, or operations to convert an image to a format better suited to machine processing. Image restoration has commonly been defined as the modification of an observed image in order to compensate for defects in the imaging system that produced the observed image. Geometrical image modification includes image magnification, minification, rotation, and nonlinear spatial warping.

Chapter 10 describes several techniques of monochrome and color image enhancement. The chapters that follow develop models for image formation and restoration, and present methods of point and spatial image restoration. The final chapter of this part considers geometrical image modification.
10

IMAGE ENHANCEMENT

Image enhancement processes consist of a collection of techniques that seek to improve the visual appearance of an image or to convert the image to a form better suited for analysis by a human or a machine. In an image enhancement system, there is no conscious effort to improve the fidelity of a reproduced image with regard to some ideal form of the image, as is done in image restoration. Actually, there is some evidence to indicate that often a distorted image, for example, an image with amplitude overshoot and undershoot about its object edges, is more subjectively pleasing than a perfectly reproduced original.

For image analysis purposes, the definition of image enhancement stops short of information extraction. As an example, an image enhancement system might emphasize the edge outline of objects in an image by high-frequency filtering. This edge-enhanced image would then serve as an input to a machine that would trace the outline of the edges, and perhaps make measurements of the shape and size of the outline. In this application, the image enhancement processor would emphasize salient features of the original image and simplify the processing task of a data-extraction machine.

There is no general unifying theory of image enhancement at present because there is no general standard of image quality that can serve as a design criterion for an image enhancement processor. Consideration is given here to a variety of techniques that have proved useful for human observation improvement and image analysis.

10.1. CONTRAST MANIPULATION

One of the most common defects of photographic or electronic images is poor contrast resulting from a reduced, and perhaps nonlinear, image amplitude range. Image
  • 252. 244 IMAGE ENHANCEMENT FIGURE 10.1-1. Continuous and quantized image contrast enhancement. contrast can often be improved by amplitude rescaling of each pixel (1,2). Figure 10.1-1a illustrates a transfer function for contrast enhancement of a typical continuous amplitude low-contrast image. For continuous amplitude images, the transfer function operator can be implemented by photographic techniques, but it is often difficult to realize an arbitrary transfer function accurately. For quantized amplitude images, implementation of the transfer function is a relatively simple task. However, in the design of the transfer function operator, consideration must be given to the effects of amplitude quantization. With reference to Figure l0.l-lb, suppose that an original image is quantized to J levels, but it occupies a smaller range. The output image is also assumed to be restricted to J levels, and the mapping is linear. In the mapping strategy indicated in Figure 10.1-1b, the output level chosen is that level closest to the exact mapping of an input level. It is obvious from the diagram that the output image will have unoccupied levels within its range, and some of the gray scale transitions will be larger than in the original image. The latter effect may result in noticeable gray scale contouring. If the output image is quantized to more levels than the input image, it is possible to approach a linear placement of output levels, and hence, decrease the gray scale contouring effect.
  • 253. CONTRAST MANIPULATION 245 (a) Linear image scaling (b) Linear image scaling with clipping (c) Absolute value scaling FIGURE 10.1-2. Image scaling methods. 10.1.1. Amplitude Scaling A digitally processed image may occupy a range different from the range of the original image. In fact, the numerical range of the processed image may encompass negative values, which cannot be mapped directly into a light intensity range. Figure 10.1-2 illustrates several possibilities of scaling an output image back into the domain of values occupied by the original image. By the first technique, the pro- cessed image is linearly mapped over its entire range, while by the second technique, the extreme amplitude values of the processed image are clipped to maximum and minimum limits. The second technique is often subjectively preferable, especially for images in which a relatively small number of pixels exceed the limits. Contrast enhancement algorithms often possess an option to clip a fixed percentage of the amplitude values on each end of the amplitude scale. In medical image enhancement applications, the contrast modification operation shown in Figure 10.2-2b, for a ≥ 0, is called a window-level transformation. The window value is the width of the linear slope, b – a; the level is located at the midpoint c of the slope line. The third technique of amplitude scaling, shown in Figure 10.1-2c, utilizes an absolute value transformation for visualizing an image with negatively valued pixels. This is a
  • 254. 246 IMAGE ENHANCEMENT (a) Linear, full range, − 0.147 to 0.169 (b) Clipping, 0.000 to 0.169 (c) Absolute value, 0.000 to 0.169 FIGURE 10.1-3. Image scaling of the Q component of the YIQ representation of the dolls_gamma color image. useful transformation for systems that utilize the two's complement numbering con- vention for amplitude representation. In such systems, if the amplitude of a pixel overshoots +1.0 (maximum luminance white) by a small amount, it wraps around by the same amount to –1.0, which is also maximum luminance white. Similarly, pixel undershoots remain near black. Figure 10.1-3 illustrates the amplitude scaling of the Q component of the YIQ transformation, shown in Figure 3.5-14, of a monochrome image containing nega- tive pixels. Figure 10.1-3a presents the result of amplitude scaling with the linear function of Figure 10.1-2a over the amplitude range of the image. In this example, the most negative pixels are mapped to black (0.0), and the most positive pixels are mapped to white (1.0). Amplitude scaling in which negative value pixels are clipped to zero is shown in Figure 10.1-3b. The black regions of the image correspond to
  • 255. CONTRAST MANIPULATION 247 (a) Original (b) Original histogram (c) Min. clip = 0.17, max. clip = 0.64 (d) Enhancement histogram (e) Min. clip = 0.24, max. clip = 0.35 (f) Enhancement histogram FIGURE 10.1-4. Window-level contrast stretching of an earth satellite image.
  • 256. 248 IMAGE ENHANCEMENT negative pixel values of the Q component. Absolute value scaling is presented in Figure 10.1-3c. Figure 10.1-4 shows examples of contrast stretching of a poorly digitized original satellite image along with gray scale histograms of the original and enhanced pic- tures. In Figure 10.1-4c, the clip levels are set at the histogram limits of the original, while in Figure 10.1-4e, the clip levels truncate 5% of the original image upper and lower level amplitudes. It is readily apparent from the histogram of Figure 10.1-4f that the contrast-stretched image of Figure 10.1-4e has many unoccupied amplitude levels. Gray scale contouring is at the threshold of visibility. 10.1.2. Contrast Modification Section 10.1.1 dealt with amplitude scaling of images that do not properly utilize the dynamic range of a display; they may lie partly outside the dynamic range or occupy only a portion of the dynamic range. In this section, attention is directed to point transformations that modify the contrast of an image within a display's dynamic range. Figure 10.1-5a contains an original image of a jet aircraft that has been digitized to 256 gray levels and numerically scaled over the range of 0.0 (black) to 1.0 (white). (a) Original (b) Original histogram (c) Transfer function (d ) Contrast stretched FIGURE 10.1-5. Window-level contrast stretching of the jet_mon image.
(a) Square function (b) Square output (c) Cube function (d) Cube output

FIGURE 10.1-6. Square and cube contrast modification of the jet_mon image.

The histogram of the image is shown in Figure 10.1-5b. Examination of the histogram of the image reveals that the image contains relatively few low- or high-amplitude pixels. Consequently, applying the window-level contrast stretching function of Figure 10.1-5c results in the image of Figure 10.1-5d, which possesses better visual contrast but does not exhibit noticeable visual clipping.

Consideration will now be given to several nonlinear point transformations, some of which will be seen to improve visual contrast, while others clearly impair visual contrast. Figures 10.1-6 and 10.1-7 provide examples of power law point transformations in which the processed image is defined by

    G(j, k) = [ F(j, k) ]^{p}    (10.1-1)
(a) Square root function (b) Square root output (c) Cube root function (d) Cube root output

FIGURE 10.1-7. Square root and cube root contrast modification of the jet_mon image.

where 0.0 ≤ F(j, k) ≤ 1.0 represents the original image and p is the power law variable. It is important that the amplitude limits of Eq. 10.1-1 be observed; processing of the integer code (e.g., 0 to 255) by Eq. 10.1-1 will give erroneous results. The square function provides the best visual result. The rubber band transfer function shown in Figure 10.1-8a provides a simple piecewise linear approximation to the power law curves. It is often useful in interactive enhancement machines in which the inflection point is interactively placed. The Gaussian error function behaves like a square function for low-amplitude pixels and like a square root function for high-amplitude pixels. It is defined as

    G(j, k) = \frac{ \mathrm{erf}\left\{ \dfrac{F(j, k) - 0.5}{a\sqrt{2}} \right\} + \mathrm{erf}\left\{ \dfrac{0.5}{a\sqrt{2}} \right\} }{ 2 \, \mathrm{erf}\left\{ \dfrac{0.5}{a\sqrt{2}} \right\} }    (10.1-2a)
(a) Rubber-band function (b) Rubber-band output

FIGURE 10.1-8. Rubber-band contrast modification of the jet_mon image.

where

    \mathrm{erf}\{x\} = \frac{2}{\sqrt{\pi}} \int_{0}^{x} \exp\{ -y^2 \} \, dy    (10.1-2b)

and a is the standard deviation of the Gaussian distribution.

The logarithm function is useful for scaling image arrays with a very wide dynamic range. The logarithmic point transformation is given by

    G(j, k) = \frac{ \log_e\{ 1.0 + a F(j, k) \} }{ \log_e\{ 2.0 \} }    (10.1-3)

under the assumption that 0.0 ≤ F(j, k) ≤ 1.0, where a is a positive scaling factor. Figure 8.2-4 illustrates the logarithmic transformation applied to an array of Fourier transform coefficients.

There are applications in image processing in which monotonically decreasing and nonmonotonic amplitude scaling is useful. For example, contrast reverse and contrast inverse transfer functions, as illustrated in Figure 10.1-9, are often helpful in visualizing detail in dark areas of an image. The reverse function is defined as

    G(j, k) = 1.0 - F(j, k)    (10.1-4)
(a) Reverse function (b) Reverse function output (c) Inverse function (d) Inverse function output

FIGURE 10.1-9. Reverse and inverse function contrast modification of the jet_mon image.

where 0.0 ≤ F(j, k) ≤ 1.0. The inverse function

    G(j, k) = 1.0    for 0.0 ≤ F(j, k) < 0.1    (10.1-5a)

    G(j, k) = \frac{0.1}{F(j, k)}    for 0.1 ≤ F(j, k) ≤ 1.0    (10.1-5b)

is clipped at the 10% input amplitude level to maintain the output amplitude within the range of unity.

Amplitude-level slicing, as illustrated in Figure 10.1-10, is a useful interactive tool for visually analyzing the spatial distribution of pixels of certain amplitude within an image. With the function of Figure 10.1-10a, all pixels within the amplitude passband are rendered maximum white in the output, and pixels outside the passband are rendered black. Pixels outside the amplitude passband are displayed in their original state with the function of Figure 10.1-10b.
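The point transformations of this section reduce to simple elementwise operations on an image scaled to the range 0.0 to 1.0. The following NumPy sketch is an illustration added here, not taken from the original text; the function names are assumptions.

import numpy as np

def power_law(F, p):
    # Eq. 10.1-1; F must already lie in [0.0, 1.0], not in integer code values.
    return F ** p

def log_transform(F, a):
    # Eq. 10.1-3 with positive scaling factor a.
    return np.log(1.0 + a * F) / np.log(2.0)

def reverse(F):
    # Eq. 10.1-4: contrast reverse function.
    return 1.0 - F

def inverse(F):
    # Eq. 10.1-5: contrast inverse function, clipped at the 10% input level.
    return np.where(F < 0.1, 1.0, 0.1 / np.clip(F, 0.1, 1.0))

# Example: square-root modification of a low-contrast image in [0, 1].
F = np.random.rand(256, 256)
G = power_law(F, 0.5)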
  • 261. HISTOGRAM MODIFICATION 253 FIGURE 10.1-10. Level slicing contrast modification functions. 10.2. HISTOGRAM MODIFICATION The luminance histogram of a typical natural scene that has been linearly quantized is usually highly skewed toward the darker levels; a majority of the pixels possess a luminance less than the average. In such images, detail in the darker regions is often not perceptible. One means of enhancing these types of images is a technique called histogram modification, in which the original image is rescaled so that the histogram of the enhanced image follows some desired form. Andrews, Hall, and others (3–5) have produced enhanced imagery by a histogram equalization process for which the histogram of the enhanced image is forced to be uniform. Frei (6) has explored the use of histogram modification procedures that produce enhanced images possessing exponential or hyperbolic-shaped histograms. Ketcham (7) and Hummel (8) have demonstrated improved results by an adaptive histogram modifi- cation procedure.
  • 262. 254 IMAGE ENHANCEMENT FIGURE 10.2-1. Approximate gray level histogram equalization with unequal number of quantization levels. 10.2.1. Nonadaptive Histogram Modification Figure 10.2-1 gives an example of histogram equalization. In the figure, H F ( c ) for c = 1, 2,..., C, represents the fractional number of pixels in an input image whose amplitude is quantized to the cth reconstruction level. Histogram equalization seeks to produce an output image field G by point rescaling such that the normalized gray-level histogram H G ( d ) = 1 ⁄ D for d = 1, 2,..., D. In the example of Figure 10.2-1, the number of output levels is set at one-half of the number of input levels. The scaling algorithm is developed as follows. The average value of the histogram is computed. Then, starting at the lowest gray level of the original, the pixels in the quantization bins are combined until the sum is closest to the average. All of these pixels are then rescaled to the new first reconstruction level at the midpoint of the enhanced image first quantization bin. The process is repeated for higher-value gray levels. If the number of reconstruction levels of the original image is large, it is possible to rescale the gray levels so that the enhanced image histogram is almost constant. It should be noted that the number of reconstruction levels of the enhanced image must be less than the number of levels of the original image to provide proper gray scale redistribution if all pixels in each quantization level are to be treated similarly. This process results in a somewhat larger quantization error. It is possible to perform the gray scale histogram equalization process with the same number of gray levels for the original and enhanced images, and still achieve a constant histogram of the enhanced image, by randomly redistributing pixels from input to output quantization bins.
The histogram modification process can be considered to be a monotonic point transformation g_d = T\{f_c\} for which the input amplitude variable f_1 ≤ f_c ≤ f_C is mapped into an output variable g_1 ≤ g_d ≤ g_D such that the output probability distribution P_R\{g_d = b_d\} follows some desired form for a given input probability distribution P_R\{f_c = a_c\}, where a_c and b_d are reconstruction values of the cth and dth levels. Clearly, the input and output probability distributions must each sum to unity. Thus,

    \sum_{c=1}^{C} P_R\{ f_c = a_c \} = 1    (10.2-1a)

    \sum_{d=1}^{D} P_R\{ g_d = b_d \} = 1    (10.2-1b)

Furthermore, the cumulative distributions must equate for any input index c. That is, the probability that pixels in the input image have an amplitude less than or equal to a_c must be equal to the probability that pixels in the output image have amplitude less than or equal to b_d, where b_d = T\{a_c\}, because the transformation is monotonic. Hence

    \sum_{n=1}^{d} P_R\{ g_n = b_n \} = \sum_{m=1}^{c} P_R\{ f_m = a_m \}    (10.2-2)

The summation on the right is the cumulative probability distribution of the input image. For a given image, the cumulative distribution is replaced by the cumulative histogram to yield the relationship

    \sum_{n=1}^{d} P_R\{ g_n = b_n \} = \sum_{m=1}^{c} H_F(m)    (10.2-3)

Equation 10.2-3 now must be inverted to obtain a solution for g_d in terms of f_c. In general, this is a difficult or impossible task to perform analytically, but certainly possible by numerical methods. The resulting solution is simply a table that indicates the output image level for each input image level.

The histogram transformation can be obtained in approximate form by replacing the discrete probability distributions of Eq. 10.2-2 by continuous probability densities. The resulting approximation is

    \int_{g_{\min}}^{g} p_g(g) \, dg = \int_{f_{\min}}^{f} p_f(f) \, df    (10.2-4)
TABLE 10.2-1. Histogram Modification Transfer Functions^a

Uniform:
    Output probability density model:  p_g(g) = \frac{1}{g_{\max} - g_{\min}},    g_{\min} ≤ g ≤ g_{\max}
    Transfer function:  g = (g_{\max} - g_{\min}) P_f(f) + g_{\min}

Exponential:
    Output probability density model:  p_g(g) = \alpha \exp\{ -\alpha (g - g_{\min}) \},    g ≥ g_{\min}
    Transfer function:  g = g_{\min} - \frac{1}{\alpha} \ln\{ 1 - P_f(f) \}

Rayleigh:
    Output probability density model:  p_g(g) = \frac{g - g_{\min}}{\alpha^2} \exp\left\{ -\frac{(g - g_{\min})^2}{2\alpha^2} \right\},    g ≥ g_{\min}
    Transfer function:  g = g_{\min} + \left[ 2\alpha^2 \ln\left\{ \frac{1}{1 - P_f(f)} \right\} \right]^{1/2}

Hyperbolic (cube root):
    Output probability density model:  p_g(g) = \frac{1}{3} \, \frac{g^{-2/3}}{g_{\max}^{1/3} - g_{\min}^{1/3}}
    Transfer function:  g = \left[ \left( g_{\max}^{1/3} - g_{\min}^{1/3} \right) P_f(f) + g_{\min}^{1/3} \right]^{3}

Hyperbolic (logarithmic):
    Output probability density model:  p_g(g) = \frac{1}{g \left[ \ln\{ g_{\max} \} - \ln\{ g_{\min} \} \right]}
    Transfer function:  g = g_{\min} \left( \frac{g_{\max}}{g_{\min}} \right)^{P_f(f)}

a. The cumulative probability distribution P_f(f) of the input image is approximated by its cumulative histogram:

    P_f(f) \approx \sum_{m=0}^{j} H_F(m)
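The uniform-density transfer function in the first row of Table 10.2-1 can be applied numerically by approximating P_f(f) with the cumulative histogram of the input image, as in the table footnote. The following sketch is an added illustration, not part of the original text; the function name and the synthetic test image are assumptions.

import numpy as np

def equalize(image, levels=256, g_min=0.0, g_max=1.0):
    # Histogram equalization via the uniform transfer function of Table 10.2-1.
    # The image is assumed to be quantized to integer levels 0..levels-1.
    hist = np.bincount(image.ravel(), minlength=levels).astype(float)
    hist /= hist.sum()                     # fractional histogram H_F(m)
    P_f = np.cumsum(hist)                  # cumulative histogram approximating P_f(f)
    lut = (g_max - g_min) * P_f + g_min    # g = (g_max - g_min) P_f(f) + g_min
    return lut[image]

# Example on a synthetic dark-biased 8-bit image.
img = (np.random.rand(128, 128) ** 3 * 255).astype(np.uint8)
out = equalize(img)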
  • 265. HISTOGRAM MODIFICATION 257 (a) Original (b) Original histogram (c) Transfer function (d ) Enhanced (e ) Enhanced histogram FIGURE 10.2-2. Histogram equalization of the projectile image.
where p_f(f) and p_g(g) are the probability densities of f and g, respectively. The integral on the right is the cumulative distribution function P_f(f) of the input variable f. Hence,

    \int_{g_{\min}}^{g} p_g(g) \, dg = P_f(f)    (10.2-5)

In the special case, for which the output density is forced to be the uniform density,

    p_g(g) = \frac{1}{g_{\max} - g_{\min}}    (10.2-6)

for g_{\min} ≤ g ≤ g_{\max}, the histogram equalization transfer function becomes

    g = (g_{\max} - g_{\min}) P_f(f) + g_{\min}    (10.2-7)

Table 10.2-1 lists several output image histograms and their corresponding transfer functions.

Figure 10.2-2 provides an example of histogram equalization for an x-ray of a projectile. The original image and its histogram are shown in Figure 10.2-2a and b, respectively. The transfer function of Figure 10.2-2c is equivalent to the cumulative histogram of the original image. In the histogram equalized result of Figure 10.2-2, ablating material from the projectile, not seen in the original, is clearly visible. The histogram of the enhanced image appears peaked, but close examination reveals that many gray level output values are unoccupied. If the high occupancy gray levels were to be averaged with their unoccupied neighbors, the resulting histogram would be much more uniform.

Histogram equalization usually performs best on images with detail hidden in dark regions. Good-quality originals are often degraded by histogram equalization. As an example, Figure 10.2-3 shows the result of histogram equalization on the jet image.

Frei (6) has suggested the histogram hyperbolization procedure listed in Table 10.2-1 and described in Figure 10.2-4. With this method, the input image histogram is modified by a transfer function such that the output image probability density is of hyperbolic form. Then the resulting gray scale probability density following the assumed logarithmic or cube root response of the photoreceptors of the eye model will be uniform. In essence, histogram equalization is performed after the cones of the retina.

10.2.2. Adaptive Histogram Modification

The histogram modification methods discussed in Section 10.2.1 involve application of the same transformation or mapping function to each pixel in an image. The mapping function is based on the histogram of the entire image. This process can be
  • 267. HISTOGRAM MODIFICATION 259 (a ) Original (b) Transfer function (c ) Histogram equalized FIGURE 10.2-3. Histogram equalization of the jet_mon image. made spatially adaptive by applying histogram modification to each pixel based on the histogram of pixels within a moving window neighborhood. This technique is obviously computationally intensive, as it requires histogram generation, mapping function computation, and mapping function application at each pixel. Pizer et al. (9) have proposed an adaptive histogram equalization technique in which histograms are generated only at a rectangular grid of points and the mappings at each pixel are generated by interpolating mappings of the four nearest grid points. Figure 10.2-5 illustrates the geometry. A histogram is computed at each grid point in a window about the grid point. The window dimension can be smaller or larger than the grid spacing. Let M00, M01, M10, M11 denote the histogram modification map- pings generated at four neighboring grid points. The mapping to be applied at pixel F(j, k) is determined by a bilinear interpolation of the mappings of the four nearest grid points as given by M = a [ bM00 + ( 1 – b )M 10 ] + ( 1 – a ) [ bM 01 + ( 1 – b )M 11 ] (10.2-8a)
FIGURE 10.2-4. Histogram hyperbolization.

where

    a = \frac{k - k_0}{k_1 - k_0}    (10.2-8b)

    b = \frac{j - j_0}{j_1 - j_0}    (10.2-8c)

Pixels in the border region of the grid points are handled as special cases of Eq. 10.2-8. Equation 10.2-8 is best suited for general-purpose computer calculation.

FIGURE 10.2-5. Array geometry for interpolative adaptive histogram modification. * Grid point; • pixel to be computed.
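The interpolation of Eq. 10.2-8 can be sketched as follows. This is an illustration added here, not part of the original text: equalization mappings are computed for a coarse grid of windows, and the mapping applied at each pixel is a bilinear blend of the four surrounding grid mappings, in the spirit of Eq. 10.2-8. The grid size, window layout, and helper names are illustrative assumptions.

import numpy as np

def tile_mappings(image, grid, levels=256):
    # One equalization mapping (lookup table) per grid window; image is assumed
    # quantized to integer levels 0..levels-1.
    gy, gx = grid
    H, W = image.shape
    maps = np.zeros((gy, gx, levels))
    ys = np.linspace(0, H, gy + 1).astype(int)
    xs = np.linspace(0, W, gx + 1).astype(int)
    for i in range(gy):
        for j in range(gx):
            win = image[ys[i]:ys[i + 1], xs[j]:xs[j + 1]]
            hist = np.bincount(win.ravel(), minlength=levels).astype(float)
            maps[i, j] = np.cumsum(hist) / hist.sum()
    return maps, ys, xs

def adaptive_equalize(image, grid=(8, 8), levels=256):
    maps, ys, xs = tile_mappings(image, grid, levels)
    gy, gx = grid
    H, W = image.shape
    cy = (ys[:-1] + ys[1:]) / 2.0      # grid-point (window center) coordinates
    cx = (xs[:-1] + xs[1:]) / 2.0
    out = np.zeros_like(image, dtype=float)
    for r in range(H):
        i = np.clip(np.searchsorted(cy, r) - 1, 0, gy - 2)
        a = np.clip((r - cy[i]) / (cy[i + 1] - cy[i]), 0.0, 1.0)
        for c in range(W):
            j = np.clip(np.searchsorted(cx, c) - 1, 0, gx - 2)
            b = np.clip((c - cx[j]) / (cx[j + 1] - cx[j]), 0.0, 1.0)
            p = image[r, c]
            # Bilinear blend of the four neighboring grid mappings; border pixels
            # are handled by clipping the grid indices and weights.
            out[r, c] = ((1 - a) * ((1 - b) * maps[i, j, p] + b * maps[i, j + 1, p])
                         + a * ((1 - b) * maps[i + 1, j, p] + b * maps[i + 1, j + 1, p]))
    return out

# Example usage on an 8-bit image.
img = (np.random.rand(256, 256) ** 2 * 255).astype(np.uint8)
out = adaptive_equalize(img, grid=(4, 4))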
  • 269. NOISE CLEANING 261 (a) Original (b) Nonadaptive (c) Adaptive FIGURE 10.2-6. Nonadaptive and adaptive histogram equalization of the brainscan image. For parallel processors, it is often more efficient to use the histogram generated in the histogram window of Figure 10.2-5 and apply the resultant mapping function to all pixels in the mapping window of the figure. This process is then repeated at all grid points. At each pixel coordinate (j, k), the four histogram modified pixels obtained from the four overlapped mappings are combined by bilinear interpolation. Figure 10.2-6 presents a comparison between nonadaptive and adaptive histogram equalization of a monochrome image. In the adaptive histogram equalization exam- ple, the histogram window is 64 × 64 . 10.3. NOISE CLEANING An image may be subject to noise and interference from several sources, including electrical sensor noise, photographic grain noise, and channel errors. These noise
  • 270. 262 IMAGE ENHANCEMENT effects can be reduced by classical statistical filtering techniques to be discussed in Chapter 12. Another approach, discussed in this section, is the application of ad hoc noise cleaning techniques. Image noise arising from a noisy sensor or channel transmission errors usually appears as discrete isolated pixel variations that are not spatially correlated. Pixels that are in error often appear visually to be markedly different from their neighbors. This observation is the basis of many noise cleaning algorithms (10–13). In this sec- tion we describe several linear and nonlinear techniques that have proved useful for noise reduction. Figure 10.3-1 shows two test images, which will be used to evaluate noise clean- ing techniques. Figure 10.3-1b has been obtained by adding uniformly distributed noise to the original image of Figure 10.3-1a. In the impulse noise example of Figure 10.3-1c, maximum-amplitude pixels replace original image pixels in a spa- tially random manner. (a ) Original (b ) Original with uniform noise (c) Original with impulse noise FIGURE 10.3-1. Noisy test images derived from the peppers_mon image.
10.3.1. Linear Noise Cleaning

Noise added to an image generally has a higher-spatial-frequency spectrum than the normal image components because of its spatial decorrelatedness. Hence, simple low-pass filtering can be effective for noise cleaning. Consideration will now be given to convolution and Fourier domain methods of noise cleaning.

Spatial Domain Processing. Following the techniques outlined in Chapter 7, a spatially filtered output image G(j, k) can be formed by discrete convolution of an input image F(j, k) with an L × L impulse response array H(j, k) according to the relation

    G(j, k) = \sum_{m} \sum_{n} F(m, n) \, H(m + j + C, n + k + C)    (10.3-1)

where C = (L + 1)/2. Equation 10.3-1 utilizes the centered convolution notation developed by Eq. 7.1-14, whereby the input and output arrays are centered with respect to one another, with the outer boundary of G(j, k) of width (L – 1)/2 pixels set to zero.

For noise cleaning, H should be of low-pass form, with all positive elements. Several common 3 × 3 pixel impulse response arrays of low-pass form are listed below.

Mask 1:    H = \frac{1}{9} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{bmatrix}    (10.3-2a)

Mask 2:    H = \frac{1}{10} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 2 & 1 \\ 1 & 1 & 1 \end{bmatrix}    (10.3-2b)

Mask 3:    H = \frac{1}{16} \begin{bmatrix} 1 & 2 & 1 \\ 2 & 4 & 2 \\ 1 & 2 & 1 \end{bmatrix}    (10.3-2c)

These arrays, called noise cleaning masks, are normalized to unit weighting so that the noise-cleaning process does not introduce an amplitude bias in the processed image. The effect of noise cleaning with the arrays on the uniform noise and impulse noise test images is shown in Figure 10.3-2.

Masks 1 and 3 of Eq. 10.3-2 are special cases of a 3 × 3 parametric low-pass filter whose impulse response is defined as

    H = \frac{1}{(b + 2)^2} \begin{bmatrix} 1 & b & 1 \\ b & b^2 & b \\ 1 & b & 1 \end{bmatrix}    (10.3-3)
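For illustration (not part of the original text), the noise cleaning masks of Eq. 10.3-2 and the parametric mask of Eq. 10.3-3 can be generated and applied by discrete convolution; SciPy's convolve2d stands in here for the centered convolution of Eq. 10.3-1, and the function names are assumptions.

import numpy as np
from scipy.signal import convolve2d

# Eq. 10.3-2: unit-weighted 3 x 3 low-pass noise cleaning masks.
mask1 = np.ones((3, 3)) / 9.0
mask2 = np.array([[1, 1, 1], [1, 2, 1], [1, 1, 1]]) / 10.0
mask3 = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0

def parametric_low_pass(b):
    # Eq. 10.3-3; b = 1 reproduces mask 1 and b = 2 reproduces mask 3.
    H = np.array([[1.0, b, 1.0], [b, b * b, b], [1.0, b, 1.0]])
    return H / (b + 2.0) ** 2

def noise_clean(image, H):
    # Discrete convolution with zero boundary, in the spirit of Eq. 10.3-1.
    return convolve2d(image, H, mode='same', boundary='fill')

# Example: smooth a noisy unit-range image with mask 3.
noisy = np.random.rand(256, 256)
cleaned = noise_clean(noisy, mask3)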
  • 272. 264 IMAGE ENHANCEMENT (a ) Uniform noise, mask 1 (b ) Impulse noise, mask 1 (c ) Uniform noise, mask 2 (d ) Impulse noise, mask 2 (e ) Uniform noise, mask 3 (f ) Impulse noise, mask 3 FIGURE 10.3-2. Noise cleaning with 3 × 3 low-pass impulse response arrays on the noisy test images.
  • 273. NOISE CLEANING 265 (a ) Uniform rectangle (b) Uniform circular (c ) Pyramid (d ) Gaussian, s = 1.0 FIGURE 10.3-3. Noise cleaning with 7 × 7 impulse response arrays on the noisy test image with uniform noise. The concept of low-pass filtering noise cleaning can be extended to larger impulse response arrays. Figures 10.3-3 and 10.3-4 present noise cleaning results for several 7 × 7 impulse response arrays for uniform and impulse noise. As expected, use of a larger impulse response array provides more noise smoothing, but at the expense of the loss of fine image detail. Fourier Domain Processing. It is possible to perform linear noise cleaning in the Fourier domain (13) using the techniques outlined in Section 9.3. Properly executed, there is no difference in results between convolution and Fourier filtering; the choice is a matter of implementation considerations. High-frequency noise effects can be reduced by Fourier domain filtering with a zonal low-pass filter with a transfer function defined by Eq. 9.3-9. The sharp cutoff characteristic of the zonal low-pass filter leads to ringing artifacts in a filtered image. This deleterious effect can be eliminated by the use of a smooth cutoff filter,
  • 274. 266 IMAGE ENHANCEMENT (a ) Uniform rectangle (b) Uniform circular (c ) Pyramid (d ) Gaussian, s = 1.0 FIGURE 10.3-4. Noise cleaning with 7 × 7 impulse response arrays on the noisy test image with impulse noise. such as the Butterworth low-pass filter whose transfer function is specified by Eq. 9.4-12. Figure 10.3-5 shows the results of zonal and Butterworth low-pass filter- ing of noisy images. Unlike convolution, Fourier domain processing, often provides quantitative and intuitive insight into the nature of the noise process, which is useful in designing noise cleaning spatial filters. As an example, Figure 10.3-6a shows an original image subject to periodic interference. Its two-dimensional Fourier transform, shown in Figure 10.3-6b, exhibits a strong response at the two points in the Fourier plane corresponding to the frequency response of the interference. When multiplied point by point with the Fourier transform of the original image, the bandstop filter of Figure 10.3-6c attenuates the interference energy in the Fourier domain. Figure 10.3-6d shows the noise-cleaned result obtained by taking an inverse Fourier trans- form of the product.
  • 275. NOISE CLEANING 267 (a ) Uniform noise, zonal (b) Impulse noise, zonal (c ) Uniform noise, Butterworth (d ) Impulse noise, Butterworth FIGURE 10.3-5. Noise cleaning with zonal and Butterworth low-pass filtering on the noisy test images; cutoff frequency = 64. Homomorphic Filtering. Homomorphic filtering (14) is a useful technique for image enhancement when an image is subject to multiplicative noise or interference. Figure 10.3-7 describes the process. The input image F ( j, k ) is assumed to be mod- eled as the product of a noise-free image S ( j, k ) and an illumination interference array I ( j, k ). Thus, F ( j, k ) = I ( j, k )S ( j, k ) (10.3-4) Ideally, I ( j, k ) would be a constant for all ( j, k ) . Taking the logarithm of Eq. 10.3-4 yields the additive linear result
  • 276. 268 IMAGE ENHANCEMENT (a) Original (b) Original Fourier transform (c) Bandstop filter (d ) Noise cleaned FIGURE 10.3-6. Noise cleaning with Fourier domain band stop filtering on the parts image with periodic interference. log { F ( j, k ) } = log { I ( j, k ) } + log { S ( j, k ) } (10.3-5) Conventional linear filtering techniques can now be applied to reduce the log inter- ference component. Exponentiation after filtering completes the enhancement pro- cess. Figure 10.3-8 provides an example of homomorphic filtering. In this example, the illumination field I ( j, k ) increases from left to right from a value of 0.1 to 1.0. FIGURE 10.3-7. Homomorphic filtering.
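A minimal homomorphic filtering sketch follows (added for illustration; the filter choice, constants, and function name are assumptions rather than material from the text): the logarithm converts the multiplicative model of Eq. 10.3-4 into the additive form of Eq. 10.3-5, a high-pass-style Fourier domain filter suppresses the slowly varying log-illumination component, and exponentiation restores the image.

import numpy as np

def homomorphic_filter(F, cutoff=4, order=1, eps=1e-6):
    # F is assumed positive, e.g. scaled to (0, 1]; eps guards the logarithm.
    N, M = F.shape
    logF = np.log(F + eps)                      # additive components, Eq. 10.3-5
    # Butterworth high-pass transfer function (cf. Eq. 9.4-13) on folded DFT indices.
    u = np.minimum(np.arange(N), N - np.arange(N)).astype(float)
    v = np.minimum(np.arange(M), M - np.arange(M)).astype(float)
    U, V = np.meshgrid(u, v, indexing='ij')
    r = np.sqrt(U ** 2 + V ** 2)
    H = 1.0 / (1.0 + (cutoff / np.maximum(r, 1e-9)) ** (2 * order))
    # Linear filtering of the log image, then exponentiation to undo the logarithm.
    filtered = np.real(np.fft.ifft2(np.fft.fft2(logF) * H))
    G = np.exp(filtered)
    # The dc term is almost completely suppressed, so rescale for display.
    return G / G.max()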
  • 277. NOISE CLEANING 269 (a) Illumination field (b) Original (c) Homomorphic filtering FIGURE 10.3-8. Homomorphic filtering on the washington_ir image with a Butter- worth high-pass filter; cutoff frequency = 4. Therefore, the observed image appears quite dim on its left side. Homomorphic filtering (Figure 10.3-8c) compensates for the nonuniform illumination. 10.3.2. Nonlinear Noise Cleaning The linear processing techniques described previously perform reasonably well on images with continuous noise, such as additive uniform or Gaussian distributed noise. However, they tend to provide too much smoothing for impulselike noise. Nonlinear techniques often provide a better trade-off between noise smoothing and the retention of fine image detail. Several nonlinear techniques are presented below. Mastin (15) has performed subjective testing of several of these operators.
FIGURE 10.3-9. Outlier noise cleaning algorithm.

Outlier. Figure 10.3-9 describes a simple outlier noise cleaning technique in which each pixel is compared to the average of its eight neighbors. If the magnitude of the difference is greater than some threshold level, the pixel is judged to be noisy, and it is replaced by its neighborhood average. The eight-neighbor average can be computed by convolution of the observed image with the impulse response array

    H = \frac{1}{8} \begin{bmatrix} 1 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 1 \end{bmatrix}    (10.3-6)

Figure 10.3-10 presents the results of outlier noise cleaning for a threshold level of 10%.

(a) Uniform noise (b) Impulse noise

FIGURE 10.3-10. Noise cleaning with the outlier algorithm on the noisy test images.
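A direct rendering of the outlier algorithm of Figure 10.3-9 follows (an added sketch, not from the text; the function name is an assumption): each pixel is replaced by its eight-neighbor average, computed with the mask of Eq. 10.3-6, whenever it differs from that average by more than a threshold.

import numpy as np
from scipy.signal import convolve2d

def outlier_clean(F, threshold=0.1):
    # Eight-neighbor average via the impulse response array of Eq. 10.3-6.
    H = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]]) / 8.0
    avg = convolve2d(F, H, mode='same', boundary='symm')
    # Replace a pixel by its neighborhood average when the difference exceeds the threshold.
    return np.where(np.abs(F - avg) > threshold, avg, F)

# Example with a 10% threshold on a unit-range image, as in Figure 10.3-10.
noisy = np.random.rand(256, 256)
cleaned = outlier_clean(noisy, threshold=0.1)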
  • 279. NOISE CLEANING 271 The outlier operator can be extended straightforwardly to larger windows. Davis and Rosenfeld (16) have suggested a variant of the outlier technique in which the center pixel in a window is replaced by the average of its k neighbors whose ampli- tudes are closest to the center pixel. Median Filter. Median filtering is a nonlinear signal processing technique devel- oped by Tukey (17) that is useful for noise suppression in images. In one-dimen- sional form, the median filter consists of a sliding window encompassing an odd number of pixels. The center pixel in the window is replaced by the median of the pixels in the window. The median of a discrete sequence a1, a2,..., aN for N odd is that member of the sequence for which (N – 1)/2 elements are smaller or equal in value and (N – 1)/2 elements are larger or equal in value. For example, if the values of the pixels within a window are 0.1, 0.2, 0.9, 0.4, 0.5, the center pixel would be replaced by the value 0.4, which is the median value of the sorted sequence 0.1, 0.2, 0.4, 0.5, 0.9. In this example, if the value 0.9 were a noise spike in a monotonically increasing sequence, the median filter would result in a considerable improvement. On the other hand, the value 0.9 might represent a valid signal pulse for a wide- bandwidth sensor, and the resultant image would suffer some loss of resolution. Thus, in some cases the median filter will provide noise suppression, while in other cases it will cause signal suppression. Figure 10.3-11 illustrates some examples of the operation of a median filter and a mean (smoothing) filter for a discrete step function, ramp function, pulse function, and a triangle function with a window of five pixels. It is seen from these examples that the median filter has the usually desirable property of not affecting step func- tions or ramp functions. Pulse functions, whose periods are less than one-half the window width, are suppressed. But the peak of the triangle is flattened. Operation of the median filter can be analyzed to a limited extent. It can be shown that the median of the product of a constant K and a sequence f ( j ) is MED { K [ f ( j ) ] } = K [ MED { f ( j ) } ] (10.3-7) However, for two arbitrary sequences f ( j ) and g ( j ), it does not follow that the median of the sum of the sequences is equal to the sum of their medians. That is, in general, MED { f ( j ) + g ( j ) } ≠ MED { f ( j ) } + MED { g ( j ) } (10.3-8) The sequences 0.1, 0.2, 0.3, 0.4, 0.5 and 0.1, 0.2, 0.3, 0.2, 0.1 are examples for which the additive linearity property does not hold. There are various strategies for application of the median filter for noise suppres- sion. One method would be to try a median filter with a window of length 3. If there is no significant signal loss, the window length could be increased to 5 for median
  • 280. 272 IMAGE ENHANCEMENT FIGURE 10.3-11. Median filtering on one-dimensional test signals. filtering of the original. The process would be terminated when the median filter begins to do more harm than good. It is also possible to perform cascaded median filtering on a signal using a fixed-or variable-length window. In general, regions that are unchanged by a single pass of the filter will remain unchanged in subsequent passes. Regions in which the signal period is lower than one-half the window width will be continually altered by each successive pass. Usually, the process will con- tinue until the resultant period is greater than one-half the window width, but it can be shown that some sequences will never converge (18). The concept of the median filter can be extended easily to two dimensions by uti- lizing a two-dimensional window of some desired shape such as a rectangle or dis- crete approximation to a circle. It is obvious that a two-dimensional L × L median filter will provide a greater degree of noise suppression than sequential processing with L × 1 median filters, but two-dimensional processing also results in greater sig- nal suppression. Figure 10.3-12 illustrates the effect of two-dimensional median filtering of a spatial peg function with a 3 × 3 square filter and a 5 × 5 plus sign– shaped filter. In this example, the square median has deleted the corners of the peg, but the plus median has not affected the corners. Figures 10.3-13 and 10.3-14 show results of plus sign shaped median filtering on the noisy test images of Figure 10.3-1 for impulse and uniform noise, respectively.
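For reference, a two-dimensional median filter over a square or plus-shaped window can be written compactly; the sketch below is an added illustration, not from the original text, and relies on SciPy's median_filter, which accepts an arbitrary footprint.

import numpy as np
from scipy.ndimage import median_filter

def square_median(F, L=3):
    # L x L square-window median filter.
    return median_filter(F, size=(L, L))

def plus_median(F, L=5):
    # Plus-sign-shaped window of span L (cross footprint).
    footprint = np.zeros((L, L), dtype=bool)
    footprint[L // 2, :] = True
    footprint[:, L // 2] = True
    return median_filter(F, footprint=footprint)

def cascaded_median(F, passes=2, L=3):
    # Cascaded square-window median filtering, as in Figure 10.3-13b.
    out = F
    for _ in range(passes):
        out = square_median(out, L)
    return out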
  • 281. NOISE CLEANING 273 FIGURE 10.3-12. Median filtering on two-dimensional test signals. In the impulse noise example, application of the 3 × 3 median significantly reduces the noise effect, but some residual noise remains. Applying two 3 × 3 median filters in cascade provides further improvement. The 5 × 5 median filter removes almost all of the impulse noise. There is no visible impulse noise in the 7 × 7 median filter result, but the image has become somewhat blurred. In the case of uniform noise, median filtering provides little visual improvement. Huang et al. (19) and Astola and Campbell (20) have developed fast median fil- tering algorithms. The latter can be generalized to implement any rank ordering. Pseudomedian Filter. Median filtering is computationally intensive; the number of operations grows exponentially with window size. Pratt et al. (21) have proposed a computationally simpler operator, called the pseudomedian filter, which possesses many of the properties of the median filter. Let {SL} denote a sequence of elements s1, s2,..., sL. The pseudomedian of the sequence is
(a) 3 × 3 median filter (b) 3 × 3 cascaded median filter (c) 5 × 5 median filter (d) 7 × 7 median filter

FIGURE 10.3-13. Median filtering on the noisy test image with impulse noise.

    PMED\{ S_L \} = \frac{1}{2} \, \mathrm{MAXIMIN}\{ S_L \} + \frac{1}{2} \, \mathrm{MINIMAX}\{ S_L \}    (10.3-9)

where for M = (L + 1)/2

    \mathrm{MAXIMIN}\{ S_L \} = \mathrm{MAX}\{ [\mathrm{MIN}(s_1, \ldots, s_M)], [\mathrm{MIN}(s_2, \ldots, s_{M+1})], \ldots, [\mathrm{MIN}(s_{L-M+1}, \ldots, s_L)] \}    (10.3-10a)

    \mathrm{MINIMAX}\{ S_L \} = \mathrm{MIN}\{ [\mathrm{MAX}(s_1, \ldots, s_M)], [\mathrm{MAX}(s_2, \ldots, s_{M+1})], \ldots, [\mathrm{MAX}(s_{L-M+1}, \ldots, s_L)] \}    (10.3-10b)
  • 283. NOISE CLEANING 275 (a ) 3 × 3 median filter (b) 5 × 5 median filter (c ) 7 × 7 median filter FIGURE 10.3-14. Median filtering on the noisy test image with uniform noise. Operationally, the sequence of L elements is decomposed into subsequences of M elements, each of which is slid to the right by one element in relation to its predecessor, and the appropriate MAX and MIN operations are computed. As will be demonstrated, the MAXIMIN and MINIMAX operators are, by themselves, useful operators. It should be noted that it is possible to recursively decompose the MAX and MIN functions on long sequences into sliding functions of length 2 and 3 for pipeline computation (21). The one-dimensional pseudomedian concept can be extended in a variety of ways. One approach is to compute the MAX and MIN functions over rectangular windows. As with the median filter, this approach tends to over smooth an image. A plus-shape pseudomedian generally provides better subjective results. Consider a plus-shaped window containing the following two-dimensional set elements {SE}
                y_1
                 ·
                 ·
                 ·
x_1  ···  x_M  ···  x_C
                 ·
                 ·
                 ·
                y_R

Let the sequences \{X_C\} and \{Y_R\} denote the elements along the horizontal and vertical axes of the window, respectively. Note that the element x_M is common to both sequences. Then the plus-shaped pseudomedian can be defined as

    PMED\{ S_E \} = \frac{1}{2} \, \mathrm{MAX}[ \mathrm{MAXIMIN}\{ X_C \}, \mathrm{MAXIMIN}\{ Y_R \} ] + \frac{1}{2} \, \mathrm{MIN}[ \mathrm{MINIMAX}\{ X_C \}, \mathrm{MINIMAX}\{ Y_R \} ]    (10.3-11)

The MAXIMIN operator in one- or two-dimensional form is useful for removing bright impulse noise but has little or no effect on dark impulse noise. Conversely, the MINIMAX operator does a good job in removing dark, but not bright, impulse noise. A logical conclusion is to cascade the operators. Figure 10.3-15 shows the results of MAXIMIN, MINIMAX, and pseudomedian filtering on an image subjected to salt and pepper noise. As observed, the MAXIMIN operator reduces the salt noise, while the MINIMAX operator reduces the pepper noise. The pseudomedian provides attenuation for both types of noise. The cascade MINIMAX and MAXIMIN operators, in either order, show excellent results.

Wavelet De-noising. Section 8.4-3 introduced wavelet transforms. The usefulness of wavelet transforms for image coding derives from the property that most of the energy of a transformed image is concentrated in the trend transform components rather than the fluctuation components (22). The fluctuation components may be grossly quantized without serious image degradation. This energy compaction property can also be exploited for noise removal. The concept, called wavelet de-noising (22,23), is quite simple. The wavelet transform coefficients are thresholded such that the presumably noisy, low-amplitude coefficients are set to zero.
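The one-dimensional pseudomedian of Eqs. 10.3-9 and 10.3-10 translates directly into sliding MIN and MAX operations. The sketch below is an added illustration, not from the text; it follows the definition literally rather than the faster recursive decomposition mentioned above, and the function names are assumptions.

def maximin(s):
    # Eq. 10.3-10a: maximum over the minima of all length-M subsequences, M = (L + 1) / 2.
    L = len(s)
    M = (L + 1) // 2
    return max(min(s[i:i + M]) for i in range(L - M + 1))

def minimax(s):
    # Eq. 10.3-10b: minimum over the maxima of all length-M subsequences.
    L = len(s)
    M = (L + 1) // 2
    return min(max(s[i:i + M]) for i in range(L - M + 1))

def pseudomedian(s):
    # Eq. 10.3-9.
    return 0.5 * maximin(s) + 0.5 * minimax(s)

# Example: for a monotonically increasing window the pseudomedian equals the median.
print(pseudomedian([0.1, 0.2, 0.4, 0.5, 0.9]))   # 0.4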
  • 285. NOISE CLEANING 277 (a ) Original (b) MAXIMIN (c ) MINIMAX (d ) Pseudomedian (e ) MINIMAX of MAXIMIN (f ) MAXIMIN of MINIMAX FIGURE 10.3-15. 5 × 5 plus-shape MINIMAX, MAXIMIN, and pseudomedian filtering on the noisy test images.
10.4. EDGE CRISPENING

Psychophysical experiments indicate that a photograph or visual signal with accentuated or crispened edges is often more subjectively pleasing than an exact photometric reproduction. Edge crispening can be accomplished in a variety of ways.

10.4.1. Linear Edge Crispening

Edge crispening can be performed by discrete convolution, as defined by Eq. 10.3-1, in which the impulse response array H is of high-pass form. Several common 3 × 3 high-pass masks are given below (24–26).

Mask 1:
         0  -1   0
H  =    -1   5  -1        (10.4-1a)
         0  -1   0

Mask 2:
        -1  -1  -1
H  =    -1   9  -1        (10.4-1b)
        -1  -1  -1

Mask 3:
         1  -2   1
H  =    -2   5  -2        (10.4-1c)
         1  -2   1

These masks possess the property that the sum of their elements is unity, to avoid amplitude bias in the processed image. Figure 10.4-1 provides examples of edge crispening on a monochrome image with the masks of Eq. 10.4-1. Mask 2 appears to provide the best visual results.

To obtain edge crispening on electronically scanned images, the scanner signal can be passed through an electrical filter with a high-frequency bandpass characteristic. Another possibility for scanned images is the technique of unsharp masking (27,28). In this process, the image is effectively scanned with two overlapping apertures, one at normal resolution and the other at a lower spatial resolution, which upon sampling produces normal and low-resolution images F(j, k) and FL(j, k), respectively. An unsharp masked image

G(j, k) = [c / (2c − 1)] F(j, k) − [(1 − c) / (2c − 1)] FL(j, k)        (10.4-2)
  • 287. EDGE CRISPENING 279 (a ) Original (b) Mask 1 (c ) Mask 2 (d ) Mask 3 FIGURE 10.4-1. Edge crispening with 3 × 3 masks on the chest_xray image. is then generated by forming the weighted difference between the normal and low- resolution images, where c is a weighting constant. Typically, c is in the range 3/5 to 5/6, so that the ratio of normal to low-resolution components in the masked image is from 1.5:1 to 5:1. Figure 10.4-2 illustrates typical scan signals obtained when scan- ning over an object edge. The masked signal has a longer-duration edge gradient as well as an overshoot and undershoot, as compared to the original signal. Subjec- tively, the apparent sharpness of the original image is improved. Figure 10.4-3 presents examples of unsharp masking in which the low-resolution image is obtained by convolution with a uniform L × L impulse response array. The sharpen- ing effect is stronger as L increases and c decreases. Linear edge crispening can be performed by Fourier domain filtering. A zonal high-pass filter with a transfer function given by Eq. 9.4-10 suppresses all spatial frequencies below the cutoff frequency except for the dc component, which is nec- essary to maintain the average amplitude of the filtered image. Figure 10.4-4 shows
FIGURE 10.4-2. Waveforms in an unsharp masking image enhancement system.

the result of zonal high-pass filtering of an image. Zonal high-pass filtering often causes ringing in a filtered image. Such ringing can be reduced significantly by utilization of a high-pass filter with a smooth cutoff response. One such filter is the Butterworth high-pass filter, whose transfer function is defined by Eq. 9.4-13. Figure 10.4-4 shows the results of zonal and Butterworth high-pass filtering. In both examples, the filtered images are biased to a midgray level for display.

10.4.2. Statistical Differencing

Another form of edge crispening, called statistical differencing (29, p. 100), involves the generation of an image by dividing each pixel value by its estimated standard deviation D(j, k) according to the basic relation

G(j, k) = F(j, k) / D(j, k)        (10.4-3)

where the estimated standard deviation

D(j, k) = [ (1/W^2) Σ_{m=j−w}^{j+w} Σ_{n=k−w}^{k+w} [F(m, n) − M(m, n)]^2 ]^{1/2}        (10.4-4)
FIGURE 10.4-3. Unsharp mask processing for L × L uniform low-pass convolution on the chest_xray image: (a) L = 3, c = 0.6; (b) L = 3, c = 0.8; (c) L = 7, c = 0.6; (d) L = 7, c = 0.8.

is computed at each pixel over some W × W neighborhood where W = 2w + 1. The function M(j, k) is the estimated mean value of the original image at point (j, k), which is computed as

M(j, k) = (1/W^2) Σ_{m=j−w}^{j+w} Σ_{n=k−w}^{k+w} F(m, n)        (10.4-5)

The enhanced image G(j, k) is increased in amplitude with respect to the original at pixels that deviate significantly from their neighbors, and is decreased in relative amplitude elsewhere. The process is analogous to automatic gain control for an audio signal.
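A direct way to realize Eqs. 10.4-3 to 10.4-5 is with running local moments, as in the following sketch (the use of SciPy's uniform_filter and the small constant added to the denominator, which guards against division by zero in flat regions, are choices of this example):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def statistical_differencing(F, w=3, eps=1e-6):
    """Divide each pixel by its local standard deviation over a W x W window, W = 2w + 1."""
    F = np.asarray(F, dtype=float)
    W = 2 * w + 1
    mean = uniform_filter(F, size=W)                        # M(j, k), Eq. 10.4-5
    mean_sq = uniform_filter(F * F, size=W)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))     # D(j, k), Eq. 10.4-4
    return F / (std + eps)                                  # G(j, k), Eq. 10.4-3
```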
FIGURE 10.4-4. Zonal and Butterworth high-pass filtering on the chest_xray image; cutoff frequency = 32: (a) zonal filtering; (b) Butterworth filtering.

Wallis (30) has suggested a generalization of the statistical differencing operator in which the enhanced image is forced to a form with desired first- and second-order moments. The Wallis operator is defined by

G(j, k) = [F(j, k) − M(j, k)] { Amax Dd / [Amax D(j, k) + Dd] } + [p Md + (1 − p) M(j, k)]        (10.4-6)

where Md and Dd represent desired average mean and standard deviation factors, Amax is a maximum gain factor that prevents overly large output values when D(j, k) is small, and 0.0 ≤ p ≤ 1.0 is a mean proportionality factor controlling the background flatness of the enhanced image.

The Wallis operator can be expressed in a more general form as

G(j, k) = [F(j, k) − M(j, k)] A(j, k) + B(j, k)        (10.4-7)

where A(j, k) is a spatially dependent gain factor and B(j, k) is a spatially dependent background factor. These gain and background factors can be derived directly from Eq. 10.4-4, or they can be specified in some other manner. For the Wallis operator, it is convenient to specify the desired average standard deviation Dd such that the spatial gain ranges between maximum Amax and minimum Amin limits. This can be accomplished by setting Dd to the value
  • 291. EDGE CRISPENING 283 (a) Original (b) Mean, 0.00 to 0.98 (c) Standard deviation, 0.01 to 0.26 (d ) Background, 0.09 to 0.88 (e) Spatial gain, 0.75 to 2.35 (f) Wallis enhancement, − 0.07 to 1.12 FIGURE 10.4-5. Wallis statistical differencing on the bridge image for Md = 0.45, Dd = 0.28, p = 0.20, Amax = 2.50, Amin = 0.75 using a 9 × 9 pyramid array.
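A minimal sketch of the Wallis operator of Eq. 10.4-6 is given below, using the parameter values quoted in the caption of Figure 10.4-5; for brevity the local moments are estimated with a uniform window rather than the pyramid array used for the figure, and Amin enters only indirectly through the choice of Dd (Eq. 10.4-8).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wallis(F, Md=0.45, Dd=0.28, p=0.20, Amax=2.5, W=9):
    """Wallis statistical differencing, Eq. 10.4-6, with a uniform W x W window."""
    F = np.asarray(F, dtype=float)
    mean = uniform_filter(F, size=W)                       # M(j, k)
    var = uniform_filter(F * F, size=W) - mean ** 2
    D = np.sqrt(np.maximum(var, 0.0))                      # D(j, k)
    gain = (Amax * Dd) / (Amax * D + Dd)                   # spatially dependent gain A(j, k)
    background = p * Md + (1.0 - p) * mean                 # spatially dependent background B(j, k)
    return (F - mean) * gain + background                  # G(j, k)
```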
FIGURE 10.4-6. Wallis statistical differencing on the chest_xray image for Md = 0.64, Dd = 0.22, p = 0.20, Amax = 2.50, Amin = 0.75 using an 11 × 11 pyramid array: (a) original; (b) Wallis enhancement.

Dd = (Amin Amax Dmax) / (Amax − Amin)        (10.4-8)

where Dmax is the maximum value of D(j, k). The summations of Eqs. 10.4-4 and 10.4-5 can be implemented by convolutions with a uniform impulse array, but overshoot and undershoot effects may occur. Better results are usually obtained with a pyramid or Gaussian-shaped array.

Figure 10.4-5 shows the mean, standard deviation, spatial gain, and Wallis statistical differencing result on a monochrome image. Figure 10.4-6 presents a medical imaging example.

10.5. COLOR IMAGE ENHANCEMENT

The image enhancement techniques discussed previously have all been applied to monochrome images. This section considers the enhancement of natural color images and introduces the pseudocolor and false color image enhancement methods. In the literature, the terms pseudocolor and false color have often been used improperly. Pseudocolor produces a color image from a monochrome image, while false color produces an enhanced color image from an original natural color image or from multispectral image bands.

10.5.1. Natural Color Image Enhancement

The monochrome image enhancement methods described previously can be applied to natural color images by processing each color component individually. However,
  • 293. COLOR IMAGE ENHANCEMENT 285 care must be taken to avoid changing the average value of the processed image com- ponents. Otherwise, the processed color image may exhibit deleterious shifts in hue and saturation. Typically, color images are processed in the RGB color space. For some image enhancement algorithms, there are computational advantages to processing in a luma-chroma space, such as YIQ, or a lightness-chrominance space, such as L*u*v*. As an example, if the objective is to perform edge crispening of a color image, it is usually only necessary to apply the enhancement method to the luma or lightness component. Because of the high-spatial-frequency response limitations of human vision, edge crispening of the chroma or chrominance components may not be per- ceptible. Faugeras (31) has investigated color image enhancement in a perceptual space based on a color vision model similar to the model presented in Figure 2.5-3. The procedure is to transform the RGB tristimulus value original images according to the color vision model to produce a set of three perceptual space images that, ideally, are perceptually independent. Then, an image enhancement method is applied inde- pendently to the perceptual space images. Finally, the enhanced perceptual space images are subjected to steps that invert the color vision model and produce an enhanced color image represented in RGB color space. 10.5.2. Pseudocolor Pseudocolor (32–34) is a color mapping of a monochrome image array which is intended to enhance the detectability of detail within the image. The pseudocolor mapping of an array F ( j, k ) is defined as R ( j, k ) = O R { F ( j, k ) } (10.5-1a) G ( j, k ) = O G { F ( j, k ) } (10.5-1b) B ( j, k ) = O B { F ( j, k ) } (10.5-1c) where R ( j, k ) , G ( j, k ) , B ( j, k ) are display color components and O R { F ( j, k ) }, O G { F ( j, k ) } , O B { F ( j, k ) } are linear or nonlinear functional operators. This map- ping defines a path in three-dimensional color space parametrically in terms of the array F ( j, k ). Figure 10.5-1 illustrates the RGB color space and two color mappings that originate at black and terminate at white. Mapping A represents the achromatic path through all shades of gray; it is the normal representation of a monochrome image. Mapping B is a spiral path through color space. Another class of pseudocolor mappings includes those mappings that exclude all shades of gray. Mapping C, which follows the edges of the RGB color cube, is such an example. This mapping follows the perimeter of the gamut of reproducible colors as depicted by the uniform chromaticity scale (UCS) chromaticity chart shown in
  • 294. 286 IMAGE ENHANCEMENT FIGURE 10.5-1. Black-to-white and RGB perimeter pseudocolor mappings. Figure 10.5-2. The luminances of the colors red, green, blue, cyan, magenta, and yellow that lie along the perimeter of reproducible colors are noted in the figure. It is seen that the luminance of the pseudocolor scale varies between a minimum of 0.114 for blue to a maximum of 0.886 for yellow. A maximum luminance of unity is reached only for white. In some applications it may be desirable to fix the luminance of all displayed colors so that discrimination along the pseudocolor scale is by hue and saturation attributes of a color only. Loci of constant luminance are plotted in Figure 10.5-2. Figure 10.5-2 also includes bounds for displayed colors of constant luminance. For example, if the RGB perimeter path is followed, the maximum luminance of any color must be limited to 0.114, the luminance of blue. At a luminance of 0.2, the RGB perimeter path can be followed except for the region around saturated blue. At higher luminance levels, the gamut of constant luminance colors becomes severely limited. Figure 10.5-2b is a plot of the 0.5 luminance locus. Inscribed within this locus is the locus of those colors of largest constant saturation. A pseudocolor scale along this path would have the property that all points differ only in hue. With a given pseudocolor path in color space, it is necessary to choose the scaling between the data plane variable and the incremental path distance. On the UCS chromaticity chart, incremental distances are subjectively almost equally noticeable. Therefore, it is reasonable to subdivide geometrically the path length into equal increments. Figure 10.5-3 shows examples of pseudocoloring of a gray scale chart image and a seismic image. 10.5.3. False Color False color is a point-by-point mapping of an original color image, described by its three primary colors, or of a set of multispectral image planes of a scene, to a color
  • 295. COLOR IMAGE ENHANCEMENT 287 FIGURE 10.5-2. Luminance loci for NTSC colors. space defined by display tristimulus values that are linear or nonlinear functions of the original image pixel values (35,36). A common intent is to provide a displayed image with objects possessing different or false colors from what might be expected.
  • 296. 288 IMAGE ENHANCEMENT (a) Gray scale chart (b) Pseudocolor of chart (c ) Seismic (d ) Pseudocolor of seismic FIGURE 10.5-3. Pseudocoloring of the gray_chart and seismic images. See insert for a color representation of this figure. For example, blue sky in a normal scene might be converted to appear red, and green grass transformed to blue. One possible reason for such a color mapping is to place normal objects in a strange color world so that a human observer will pay more attention to the objects than if they were colored normally. Another reason for false color mappings is the attempt to color a normal scene to match the color sensitivity of a human viewer. For example, it is known that the luminance response of cones in the retina peaks in the green region of the visible spectrum. Thus, if a normally red object is false colored to appear green, it may become more easily detectable. Another psychophysical property of color vision that can be exploited is the contrast sensitivity of the eye to changes in blue light. In some situation it may be worthwhile to map the normal colors of objects with fine detail into shades of blue.
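As a minimal concrete example of such a mapping, the sketch below permutes the display primaries of an RGB image and also shows the general linear form; it anticipates the interchange matrix of Eq. 10.5-3 and the general mapping of Eq. 10.5-4 given below, and the function names and channel ordering are assumptions of this illustration.

```python
import numpy as np

def false_color_permute(rgb):
    """Interchange display primaries: R_D = G_S, G_D = B_S, B_D = R_S (cf. Eq. 10.5-3)."""
    # rgb is an (H, W, 3) array with channels ordered R, G, B.
    return rgb[..., [1, 2, 0]]

def false_color_linear(rgb, m):
    """General linear false color mapping (cf. Eq. 10.5-4): display = m @ source at each pixel."""
    return np.einsum("ij,hwj->hwi", np.asarray(m, dtype=float), rgb)
```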
A third application of false color is to produce a natural color representation of a set of multispectral images of a scene. Some of the multispectral images may even be obtained from sensors whose wavelength response is outside the visible wavelength range, for example, infrared or ultraviolet.

In a false color mapping, the red, green, and blue display color components are related to natural or multispectral images Fi by

RD = OR{F1, F2, ...}        (10.5-2a)

GD = OG{F1, F2, ...}        (10.5-2b)

BD = OB{F1, F2, ...}        (10.5-2c)

where OR{·}, OG{·}, OB{·} are general functional operators. As a simple example, the set of red, green, and blue sensor tristimulus values (RS = F1, GS = F2, BS = F3) may be interchanged according to the relation

| RD |   | 0  1  0 | | RS |
| GD | = | 0  0  1 | | GS |        (10.5-3)
| BD |   | 1  0  0 | | BS |

Green objects in the original will appear red in the display, blue objects will appear green, and red objects will appear blue. A general linear false color mapping of natural color images can be defined as

| RD |   | m11  m12  m13 | | RS |
| GD | = | m21  m22  m23 | | GS |        (10.5-4)
| BD |   | m31  m32  m33 | | BS |

This color mapping should be recognized as a linear coordinate conversion of colors reproduced by the primaries of the original image to a new set of primaries. Figure 10.5-4 provides examples of false color mappings of a pair of images.

10.6. MULTISPECTRAL IMAGE ENHANCEMENT

Enhancement procedures are often performed on multispectral image bands of a scene in order to accentuate salient features to assist in subsequent human interpretation or machine analysis (35,37). These procedures include individual image band
  • 298. 290 IMAGE ENHANCEMENT (a) Infrared band (b) Blue band (c) R = infrared, G = 0, B = blue (d ) R = infrared, G = 1/2 [infrared + blue], B = blue FIGURE 10.5-4. False coloring of multispectral images. See insert for a color representation of this figure. enhancement techniques, such as contrast stretching, noise cleaning, and edge crisp- ening, as described earlier. Other methods, considered in this section, involve the joint processing of multispectral image bands. Multispectral image bands can be subtracted in pairs according to the relation D m, n ( j, k ) = Fm ( j, k ) – Fn ( j, k ) (10.6-1) in order to accentuate reflectivity variations between the multispectral bands. An associated advantage is the removal of any unknown but common bias components that may exist. Another simple but highly effective means of multispectral image enhancement is the formation of ratios of the image bands. The ratio image between the mth and nth multispectral bands is defined as
Rm,n(j, k) = Fm(j, k) / Fn(j, k)        (10.6-2)

It is assumed that the image bands are adjusted to have nonzero pixel values. In many multispectral imaging systems, the image band Fn(j, k) can be modeled by the product of an object reflectivity function Rn(j, k) and an illumination function I(j, k) that is identical for all multispectral bands. Ratioing of such imagery provides an automatic compensation of the illumination factor. The ratio Fm(j, k) / [Fn(j, k) ± ∆(j, k)], for which ∆(j, k) represents a quantization level uncertainty, can vary considerably if Fn(j, k) is small. This variation can be reduced significantly by forming the logarithm of the ratios defined by (24)

Lm,n(j, k) = log{Rm,n(j, k)} = log{Fm(j, k)} − log{Fn(j, k)}        (10.6-3)

There are a total of N(N − 1) different difference or ratio pairs that may be formed from N multispectral bands. To reduce the number of combinations to be considered, the differences or ratios are often formed with respect to an average image field:

A(j, k) = (1/N) Σ_{n=1}^{N} Fn(j, k)        (10.6-4)

Unitary transforms between multispectral planes have also been employed as a means of enhancement. For N image bands, an N × 1 vector

x = [F1(j, k), F2(j, k), ..., FN(j, k)]^T        (10.6-5)

is formed at each coordinate (j, k). Then, a transformation

y = Ax        (10.6-6)
is formed where A is an N × N unitary matrix. A common transformation is the principal components decomposition, described in Section 5.8, in which the rows of the matrix A are composed of the eigenvectors of the covariance matrix Kx between the bands. The matrix A performs a diagonalization of the covariance matrix Kx such that the covariance matrix of the transformed imagery bands

Ky = A Kx A^T = Λ        (10.6-7)

is a diagonal matrix Λ whose elements are the eigenvalues of Kx arranged in descending value. The principal components decomposition, therefore, results in a set of decorrelated data arrays whose energies are ranged in amplitude. This process, of course, requires knowledge of the covariance matrix between the multispectral bands. The covariance matrix must be either modeled, estimated, or measured. If the covariance matrix is highly nonstationary, the principal components method becomes difficult to utilize.

Figure 10.6-1 contains a set of four multispectral images, and Figure 10.6-2 exhibits their corresponding log ratios (37). Principal components bands of these multispectral images are illustrated in Figure 10.6-3 (37).

FIGURE 10.6-1. Multispectral images: (a) band 4 (green); (b) band 5 (red); (c) band 6 (infrared 1); (d) band 7 (infrared 2).
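A band-ratio enhancement in the sense of Eqs. 10.6-2 to 10.6-4 can be sketched in a few lines (the small offset added before taking logarithms is an assumption of this example, used to keep pixel values nonzero as the text requires):

```python
import numpy as np

def log_ratio(band_m, band_n, eps=1e-3):
    """Logarithmic band ratio Lm,n(j, k) = log Fm(j, k) - log Fn(j, k), Eq. 10.6-3."""
    fm = np.asarray(band_m, dtype=float) + eps   # offset keeps pixel values nonzero
    fn = np.asarray(band_n, dtype=float) + eps
    return np.log(fm) - np.log(fn)

def average_field(bands):
    """Average image field A(j, k) of Eq. 10.6-4, for ratioing each band against the mean."""
    return np.mean(np.stack(bands, axis=0), axis=0)
```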
FIGURE 10.6-2. Logarithmic ratios of multispectral images: (a) band 4 / band 5; (b) band 4 / band 6; (c) band 4 / band 7; (d) band 5 / band 6; (e) band 5 / band 7; (f) band 6 / band 7.
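The principal components transformation of Eqs. 10.6-5 to 10.6-7 can be sketched as follows, assuming the covariance matrix Kx is estimated directly from the image data rather than modeled:

```python
import numpy as np

def principal_components(bands):
    """Principal components of N multispectral bands (Eqs. 10.6-5 to 10.6-7).

    bands: sequence of N images of identical shape.
    Returns the N transformed bands y = A x, ordered by descending eigenvalue.
    """
    stack = np.stack([np.asarray(b, dtype=float) for b in bands], axis=0)  # (N, H, W)
    n, h, w = stack.shape
    x = stack.reshape(n, -1)               # each column is an N x 1 pixel vector (Eq. 10.6-5)
    kx = np.cov(x)                         # N x N inter-band covariance matrix Kx
    eigvals, eigvecs = np.linalg.eigh(kx)  # eigh returns ascending eigenvalues
    order = np.argsort(eigvals)[::-1]      # reorder to descending, as in Eq. 10.6-7
    a = eigvecs[:, order].T                # rows of A are eigenvectors of Kx
    return (a @ x).reshape(n, h, w)        # y = A x at every pixel (Eq. 10.6-6)
```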
  • 302. 294 IMAGE ENHANCEMENT (a) First band (b) Second band (c) Third band (d ) Fourth band FIGURE 10.6-3. Principal components of multispectral images. REFERENCES 1. R. Nathan, “Picture Enhancement for the Moon, Mars, and Man,” in Pictorial Pattern Recognition, G. C. Cheng, ed., Thompson, Washington DC, 1968, 239–235. 2. F. Billingsley, “Applications of Digital Image Processing,” Applied Optics, 9, 2, Febru- ary 1970, 289–299. 3. H. C. Andrews, A. G. Tescher, and R. P. Kruger, “Image Processing by Digital Com- puter,” IEEE Spectrum, 9, 7, July 1972, 20–32. 4. E. L. Hall et al., “A Survey of Preprocessing and Feature Extraction Techniques for Radiographic Images,” IEEE Trans. Computers, C-20, 9, September 1971, 1032–1044. 5. E. L. Hall, “Almost Uniform Distribution for Computer Image Enhancement,” IEEE Trans. Computers, C-23, 2, February 1974, 207–208. 6. W. Frei, “Image Enhancement by Histogram Hyperbolization,” Computer Graphics and Image Processing, 6, 3, June 1977, 286–294.
  • 303. REFERENCES 295 7. D. J. Ketcham, “Real Time Image Enhancement Technique,” Proc. SPIE/OSA Confer- ence on Image Processing, Pacific Grove, CA, 74, February 1976, 120–125. 8. R. A. Hummel, “Image Enhancement by Histogram Transformation,” Computer Graph- ics and Image Processing, 6, 2, 1977, 184–195. 9. S. M. Pizer et al., “Adaptive Histogram Equalization and Its Variations,” Computer Vision, Graphics, and Image Processing. 39, 3, September 1987, 355–368. 10. G. P. Dineen, “Programming Pattern Recognition,” Proc. Western Joint Computer Con- ference, March 1955, 94–100. 11. R. E. Graham, “Snow Removal: A Noise Stripping Process for Picture Signals,” IRE Trans. Information Theory, IT-8, 1, February 1962, 129–144. 12. A. Rosenfeld, C. M. Park, and J. P. Strong, “Noise Cleaning in Digital Pictures,” Proc. EASCON Convention Record, October 1969, 264–273. 13. R. Nathan, “Spatial Frequency Filtering,” in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970, 151–164. 14. A. V. Oppenheim, R. W. Schaefer, and T. G. Stockham, Jr., “Nonlinear Filtering of Mul- tiplied and Convolved Signals,” Proc. IEEE, 56, 8, August 1968, 1264–1291. 15. G. A. Mastin, “Adaptive Filters for Digital Image Noise Smoothing: An Evaluation,” Computer Vision, Graphics, and Image Processing, 31, 1, July 1985, 103–121. 16. L. S. Davis and A. Rosenfeld, “Noise Cleaning by Iterated Local Averaging,” IEEE Trans. Systems, Man and Cybernetics, SMC-7, 1978, 705–710. 17. J. W. Tukey, Exploratory Data Analysis, Addison-Wesley, Reading, MA, 1971. 18. T. A. Nodes and N. C. Gallagher, Jr., “Median Filters: Some Manipulations and Their Properties,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-30, 5, Octo- ber 1982, 739–746. 19. T. S. Huang, G. J. Yang, and G. Y. Tang, “A Fast Two-Dimensional Median Filtering Algorithm,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-27, 1, Febru- ary 1979, 13–18. 20. J. T. Astola and T. G. Campbell, “On Computation of the Running Median,” IEEE Trans. Acoustics, Speech, and Signal Processing, 37, 4, April 1989, 572–574. 21. W. K. Pratt, T. J. Cooper, and I. Kabir, “Pseudomedian Filter,” Proc. SPIE Conference, Los Angeles, January 1984. 22. J. S. Walker, A Primer on Wavelets and Their Scientific Applications, Chapman & Hall CRC Press, Boca Raton, FL, 1999. 23. S. Mallat, A Wavelet Tour of Signal Processing, Academic Press, New York, 1998. 24. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Elec- tro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965. 25. J. M. S. Prewitt, “Object Enhancement and Extraction,” in Picture Processing and Psy- chopictorics, B. S. Lipkin and A. Rosenfeld, eds., Academic Press, New York, 1970, 75– 150. 26. A. Arcese, P. H. Mengert, and E. W. Trombini, “Image Detection Through Bipolar Cor- relation,” IEEE Trans. Information Theory, IT-16, 5, September 1970, 534–541. 27. W. F. Schreiber, “Wirephoto Quality Improvement by Unsharp Masking,” J. Pattern Recognition, 2, 1970, 111–121.
  • 304. 296 IMAGE ENHANCEMENT 28. J-S. Lee, “Digital Image Enhancement and Noise Filtering by Use of Local Statistics,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-2, 2, March 1980, 165–168. 29. A. Rosenfeld, Picture Processing by Computer, Academic Press, New York, 1969. 30. R. H. Wallis, “An Approach for the Space Variant Restoration and Enhancement of Images,” Proc. Symposium on Current Mathematical Problems in Image Science, Monterey, CA, November 1976. 31. O. D. Faugeras, “Digital Color Image Processing Within the Framework of a Human Visual Model,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-27, 4, August 1979, 380–393. 32. C. Gazley, J. E. Reibert, and R. H. Stratton, “Computer Works a New Trick in Seeing Pseudo Color Processing,” Aeronautics and Astronautics, 4, April 1967, 56. 33. L. W. Nichols and J. Lamar, “Conversion of Infrared Images to Visible in Color,” Applied Optics, 7, 9, September 1968, 1757. 34. E. R. Kreins and L. J. Allison, “Color Enhancement of Nimbus High Resolution Infrared Radiometer Data,” Applied Optics, 9, 3, March 1970, 681. 35. A. F. H. Goetz et al., “Application of ERTS Images and Image Processing to Regional Geologic Problems and Geologic Mapping in Northern Arizona,” Technical Report 32-1597, Jet Propulsion Laboratory, Pasadena, CA, May 1975. 36. W. Find, “Image Coloration as an Interpretation Aid,” Proc. SPIE/OSA Conference on Image Processing, Pacific Grove, CA, February 1976, 74, 209–215. 37. G. S. Robinson and W. Frei, “Final Research Report on Computer Processing of ERTS Images,” Report USCIPI 640, University of Southern California, Image Processing Institute, Los Angeles, September 1975.
  • 305. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 11 IMAGE RESTORATION MODELS Image restoration may be viewed as an estimation process in which operations are performed on an observed or measured image field to estimate the ideal image field that would be observed if no image degradation were present in an imaging system. Mathematical models are described in this chapter for image degradation in general classes of imaging systems. These models are then utilized in subsequent chapters as a basis for the development of image restoration techniques. 11.1. GENERAL IMAGE RESTORATION MODELS In order effectively to design a digital image restoration system, it is necessary quantitatively to characterize the image degradation effects of the physical imaging system, the image digitizer, and the image display. Basically, the procedure is to model the image degradation effects and then perform operations to undo the model to obtain a restored image. It should be emphasized that accurate image modeling is often the key to effective image restoration. There are two basic approaches to the modeling of image degradation effects: a priori modeling and a posteriori modeling. In the former case, measurements are made on the physical imaging system, digi- tizer, and display to determine their response for an arbitrary image field. In some instances it will be possible to model the system response deterministically, while in other situations it will only be possible to determine the system response in a sto- chastic sense. The a posteriori modeling approach is to develop the model for the image degradations based on measurements of a particular image to be restored. Basically, these two approaches differ only in the manner in which information is gathered to describe the character of the image degradation. 297
  • 306. 298 IMAGE RESTORATION MODELS FIGURE 11.1-1. Digital image restoration model. Figure 11.1-1 shows a general model of a digital imaging system and restoration process. In the model, a continuous image light distribution C ( x, y, t, λ ) dependent on spatial coordinates (x, y), time (t), and spectral wavelength ( λ ) is assumed to exist as the driving force of a physical imaging system subject to point and spatial degradation effects and corrupted by deterministic and stochastic disturbances. Potential degradations include diffraction in the optical system, sensor nonlineari- ties, optical system aberrations, film nonlinearities, atmospheric turbulence effects, image motion blur, and geometric distortion. Noise disturbances may be caused by electronic imaging sensors or film granularity. In this model, the physical imaging (i) system produces a set of output image fields FO ( x, y, t j ) at time instant t j described by the general relation (i ) F O ( x, y, tj ) = O P { C ( x, y, t, λ ) } (11.1-1) where O P { · } represents a general operator that is dependent on the space coordi- nates (x, y), the time history (t), the wavelength ( λ ), and the amplitude of the light distribution (C). For a monochrome imaging system, there will only be a single out- (i) put field, while for a natural color imaging system, FO ( x, y, t j ) may denote the red, green, and blue tristimulus bands for i = 1, 2, 3, respectively. Multispectral imagery may also involve several output bands of data. (i) In the general model of Figure 11.1-1, each observed image field FO ( x, y, t j ) is digitized, following the techniques outlined in Part 3, to produce an array of image (i ) samples F S ( m 1, m 2, t j ) at each time instant t j . The output samples of the digitizer are related to the input observed field by (i) (i) F S ( m 1, m 2, t j ) = O G { F O ( x, y, t j ) } (11.1-2)
where O_G{·} is an operator modeling the image digitization process. A digital image restoration system that follows produces an output array F_K^(i)(k_1, k_2, t_j) by the transformation

F_K^(i)(k_1, k_2, t_j) = O_R{ F_S^(i)(m_1, m_2, t_j) }        (11.1-3)

where O_R{·} represents the designed restoration operator. Next, the output samples of the digital restoration system are interpolated by the image display system to produce a continuous image estimate F̂_I^(i)(x, y, t_j). This operation is governed by the relation

F̂_I^(i)(x, y, t_j) = O_D{ F_K^(i)(k_1, k_2, t_j) }        (11.1-4)

where O_D{·} models the display transformation.

The function of the digital image restoration system is to compensate for degradations of the physical imaging system, the digitizer, and the image display system to produce an estimate of a hypothetical ideal image field F_I^(i)(x, y, t_j) that would be displayed if all physical elements were perfect. The perfect imaging system would produce an ideal image field modeled by

F_I^(i)(x, y, t_j) = O_I{ ∫_0^∞ ∫_{t_j − T}^{t_j} C(x, y, t, λ) U_i(t, λ) dt dλ }        (11.1-5)

where U_i(t, λ) is a desired temporal and spectral response function, T is the observation period, and O_I{·} is a desired point and spatial response function. Usually, it will not be possible to restore perfectly the observed image such that the output image field is identical to the ideal image field. The design objective of the image restoration processor is to minimize some error measure between F_I^(i)(x, y, t_j) and F̂_I^(i)(x, y, t_j). The discussion here is limited, for the most part, to a consideration of techniques that minimize the mean-square error between the ideal and estimated image fields as defined by

E_i = E{ [ F_I^(i)(x, y, t_j) − F̂_I^(i)(x, y, t_j) ]^2 }        (11.1-6)

where E{·} denotes the expectation operator. Often, it will be desirable to place side constraints on the error minimization, for example, to require that the image estimate be strictly positive if it is to represent light intensities that are positive.

Because the restoration process is to be performed digitally, it is often more convenient to restrict the error measure to discrete points on the ideal and estimated image fields. These discrete arrays are obtained by mathematical models of perfect image digitizers that produce the arrays
F_I^(i)(n_1, n_2, t_j) = F_I^(i)(x, y, t_j) δ(x − n_1 ∆, y − n_2 ∆)        (11.1-7a)

F̂_I^(i)(n_1, n_2, t_j) = F̂_I^(i)(x, y, t_j) δ(x − n_1 ∆, y − n_2 ∆)        (11.1-7b)

It is assumed that continuous image fields are sampled at a spatial period ∆ satisfying the Nyquist criterion. Also, quantization error is assumed negligible. It should be noted that the processes indicated by the blocks of Figure 11.1-1 above the dashed division line represent mathematical modeling and are not physical operations performed on physical image fields and arrays. With this discretization of the continuous ideal and estimated image fields, the corresponding mean-square restoration error becomes

E_i = E{ [ F_I^(i)(n_1, n_2, t_j) − F̂_I^(i)(n_1, n_2, t_j) ]^2 }        (11.1-8)

With the relationships of Figure 11.1-1 quantitatively established, the restoration problem may be formulated as follows:

Given the sampled observation F_S^(i)(m_1, m_2, t_j) expressed in terms of the image light distribution C(x, y, t, λ), determine the transfer function O_K{·} that minimizes the error measure between F_I^(i)(x, y, t_j) and F̂_I^(i)(x, y, t_j) subject to desired constraints.

There are no general solutions for the restoration problem as formulated above because of the complexity of the physical imaging system. To proceed further, it is necessary to be more specific about the type of degradation and the method of restoration. The following sections describe models for the elements of the generalized imaging system of Figure 11.1-1.

11.2. OPTICAL SYSTEMS MODELS

One of the major advances in the field of optics during the past 40 years has been the application of system concepts to optical imaging. Imaging devices consisting of lenses, mirrors, prisms, and so on, can be considered to provide a deterministic transformation of an input spatial light distribution to some output spatial light distribution. Also, the system concept can be extended to encompass the spatial propagation of light through free space or some dielectric medium.

In the study of geometric optics, it is assumed that light rays always travel in a straight-line path in a homogeneous medium. By this assumption, a bundle of rays passing through a clear aperture onto a screen produces a geometric light projection of the aperture. However, if the light distribution at the region between the light and
  • 309. OPTICAL SYSTEMS MODELS 301 FIGURE 11.2-1. Generalized optical imaging system. dark areas on the screen is examined in detail, it is found that the boundary is not sharp. This effect is more pronounced as the aperture size is decreased. For a pin- hole aperture, the entire screen appears diffusely illuminated. From a simplistic viewpoint, the aperture causes a bending of rays called diffraction. Diffraction of light can be quantitatively characterized by considering light as electromagnetic radiation that satisfies Maxwell's equations. The formulation of a complete theory of optical imaging from the basic electromagnetic principles of diffraction theory is a complex and lengthy task. In the following, only the key points of the formulation are presented; details may be found in References 1 to 3. Figure 11.2-1 is a diagram of a generalized optical imaging system. A point in the object plane at coordinate ( x o, y o ) of intensity I o ( x o, y o ) radiates energy toward an imaging system characterized by an entrance pupil, exit pupil, and intervening sys- tem transformation. Electromagnetic waves emanating from the optical system are focused to a point ( x i, y i ) on the image plane producing an intensity I i ( x i, y i ) . The imaging system is said to be diffraction limited if the light distribution at the image plane produced by a point-source object consists of a converging spherical wave whose extent is limited only by the exit pupil. If the wavefront of the electromag- netic radiation emanating from the exit pupil is not spherical, the optical system is said to possess aberrations. In most optical image formation systems, the optical radiation emitted by an object arises from light transmitted or reflected from an incoherent light source. The image radiation can often be regarded as quasimonochromatic in the sense that the spectral bandwidth of the image radiation detected at the image plane is small with respect to the center wavelength of the radiation. Under these joint assumptions, the imaging system of Figure 11.2-1 will respond as a linear system in terms of the intensity of its input and output fields. The relationship between the image intensity and object intensity for the optical system can then be represented by the superposi- tion integral equation ∞ ∞ Ii ( x i, y i ) = ∫–∞ ∫–∞ H ( xi, yi ; xo, yo )Io ( xo, yo ) dxo dyo (11.2-1)
where H(x_i, y_i ; x_o, y_o) represents the image intensity response to a point source of light. Often, the intensity impulse response is space invariant and the input–output relationship is given by the convolution equation

I_i(x_i, y_i) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} H(x_i − x_o, y_i − y_o) I_o(x_o, y_o) dx_o dy_o        (11.2-2)

In this case, the normalized Fourier transforms

I_o(ω_x, ω_y) = [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} I_o(x_o, y_o) exp{−i(ω_x x_o + ω_y y_o)} dx_o dy_o ] / [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} I_o(x_o, y_o) dx_o dy_o ]        (11.2-3a)

I_i(ω_x, ω_y) = [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} I_i(x_i, y_i) exp{−i(ω_x x_i + ω_y y_i)} dx_i dy_i ] / [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} I_i(x_i, y_i) dx_i dy_i ]        (11.2-3b)

of the object and image intensity fields are related by

I_i(ω_x, ω_y) = H(ω_x, ω_y) I_o(ω_x, ω_y)        (11.2-4)

where H(ω_x, ω_y), which is called the optical transfer function (OTF), is defined by

H(ω_x, ω_y) = [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} H(x, y) exp{−i(ω_x x + ω_y y)} dx dy ] / [ ∫_{−∞}^{∞} ∫_{−∞}^{∞} H(x, y) dx dy ]        (11.2-5)

The absolute value |H(ω_x, ω_y)| of the OTF is known as the modulation transfer function (MTF) of the optical system. The most common optical image formation system is a circular thin lens. Figure 11.2-2 illustrates the OTF for such a lens as a function of its degree of misfocus (1, p. 486; 4). For extreme misfocus, the OTF will actually become negative at some spatial frequencies. In this state, the lens will cause a contrast reversal: dark objects will appear light, and vice versa.

Earth's atmosphere acts as an imaging system for optical radiation traversing a path through the atmosphere. Normally, the index of refraction of the atmosphere remains relatively constant over the optical extent of an object, but in some instances atmospheric turbulence can produce a spatially variable index of
FIGURE 11.2-2. Cross section of transfer function of a lens. Numbers indicate degree of misfocus.

refraction that leads to an effective blurring of any imaged object. An equivalent impulse response

H(x, y) = K_1 exp{ −(K_2 x^2 + K_3 y^2)^{5/6} }        (11.2-6)

where the K_n are constants, has been predicted mathematically and verified by experimentation (5) for long-exposure image formation. For convenience in analysis, the exponent 5/6 is often replaced by unity to obtain a Gaussian-shaped impulse response model of the form

H(x, y) = K exp{ −[ x^2 / (2 b_x^2) + y^2 / (2 b_y^2) ] }        (11.2-7)

where K is an amplitude scaling constant and b_x and b_y are blur-spread factors.

Under the assumption that the impulse response of a physical imaging system is independent of spectral wavelength and time, the observed image field can be modeled by the superposition integral equation

F_O^(i)(x, y, t_j) = O_C{ ∫_{−∞}^{∞} ∫_{−∞}^{∞} C(α, β, t, λ) H(x, y ; α, β) dα dβ }        (11.2-8)

where O_C{·} is an operator that models the spectral and temporal characteristics of the physical imaging system. If the impulse response is spatially invariant, the model reduces to the convolution integral equation
  • 312. 304 IMAGE RESTORATION MODELS (i )  ∞ ∞  F O ( x, y, t j ) = OC  ∫ ∫ C ( α, β, t, λ )H ( x – α, y – β ) dα d β  (11.2-9)  –∞ –∞  11.3. PHOTOGRAPHIC PROCESS MODELS There are many different types of materials and chemical processes that have been utilized for photographic image recording. No attempt is made here either to survey the field of photography or to deeply investigate the physics of photography. Refer- ences 6 to 8 contain such discussions. Rather, the attempt here is to develop mathe- matical models of the photographic process in order to characterize quantitatively the photographic components of an imaging system. 11.3.1. Monochromatic Photography The most common material for photographic image recording is silver halide emul- sion, depicted in Figure 11.3-1. In this material, silver halide grains are suspended in a transparent layer of gelatin that is deposited on a glass, acetate, or paper backing. If the backing is transparent, a transparency can be produced, and if the backing is a white paper, a reflection print can be obtained. When light strikes a grain, an electro- chemical conversion process occurs, and part of the grain is converted to metallic silver. A development center is then said to exist in the grain. In the development process, a chemical developing agent causes grains with partial silver content to be converted entirely to metallic silver. Next, the film is fixed by chemically removing unexposed grains. The photographic process described above is called a non reversal process. It produces a negative image in the sense that the silver density is inversely propor- tional to the exposing light. A positive reflection print of an image can be obtained in a two-stage process with nonreversal materials. First, a negative transparency is produced, and then the negative transparency is illuminated to expose negative reflection print paper. The resulting silver density on the developed paper is then proportional to the light intensity that exposed the negative transparency. A positive transparency of an image can be obtained with a reversal type of film. This film is exposed and undergoes a first development similar to that of a nonreversal film. At this stage in the photographic process, all grains that have been exposed FIGURE 11.3-1. Cross section of silver halide emulsion.
  • 313. PHOTOGRAPHIC PROCESS MODELS 305 to light are converted completely to metallic silver. In the next step, the metallic silver grains are chemically removed. The film is then uniformly exposed to light, or alternatively, a chemical process is performed to expose the remaining silver halide grains. Then the exposed grains are developed and fixed to produce a positive trans- parency whose density is proportional to the original light exposure. The relationships between light intensity exposing a film and the density of silver grains in a transparency or print can be described quantitatively by sensitometric measurements. Through sensitometry, a model is sought that will predict the spec- tral light distribution passing through an illuminated transparency or reflected from a print as a function of the spectral light distribution of the exposing light and certain physical parameters of the photographic process. The first stage of the photographic process, that of exposing the silver halide grains, can be modeled to a first-order approximation by the integral equation X ( C ) = kx ∫ C ( λ )L ( λ ) dλ (11.3-1) where X(C) is the integrated exposure, C ( λ ) represents the spectral energy distribu- tion of the exposing light, L ( λ ) denotes the spectral sensitivity of the film or paper plus any spectral losses resulting from filters or optical elements, and kx is an expo- sure constant that is controllable by an aperture or exposure time setting. Equation 11.3-1 assumes a fixed exposure time. Ideally, if the exposure time were to be increased by a certain factor, the exposure would be increased by the same factor. Unfortunately, this relationship does not hold exactly. The departure from linearity is called a reciprocity failure of the film. Another anomaly in exposure prediction is the intermittency effect, in which the exposures for a constant intensity light and for an intermittently flashed light differ even though the incident energy is the same for both sources. Thus, if Eq. 11.3-1 is to be utilized as an exposure model, it is neces- sary to observe its limitations: The equation is strictly valid only for a fixed expo- sure time and constant-intensity illumination. The transmittance τ ( λ ) of a developed reversal or non-reversal transparency as a function of wavelength can be ideally related to the density of silver grains by the exponential law of absorption as given by τ ( λ ) = exp { – d e D ( λ ) } (11.3-2) where D ( λ ) represents the characteristic density as a function of wavelength for a reference exposure value, and de is a variable proportional to the actual exposure. For monochrome transparencies, the characteristic density function D ( λ ) is reason- ably constant over the visible region. As Eq. 11.3-2 indicates, high silver densities result in low transmittances, and vice versa. It is common practice to change the pro- portionality constant of Eq. 11.3-2 so that measurements are made in exponent ten units. Thus, the transparency transmittance can be equivalently written as
  • 314. 306 IMAGE RESTORATION MODELS –dx D ( λ ) τ ( λ ) = 10 (11.3-3) where dx is the density variable, inversely proportional to exposure, for exponent 10 units. From Eq. 11.3-3, it is seen that the photographic density is logarithmically related to the transmittance. Thus, d x D ( λ ) = – log 10 τ ( λ ) (11.3-4) The reflectivity r o ( λ ) of a photographic print as a function of wavelength is also inversely proportional to its silver density, and follows the exponential law of absorption of Eq. 11.3-2. Thus, from Eqs. 11.3-3 and 11.3-4, one obtains directly –d x D ( λ ) r o ( λ ) = 10 (11.3-5) d x D ( λ ) = – log 10 r o ( λ ) (11.3-6) where dx is an appropriately evaluated variable proportional to the exposure of the photographic paper. The relational model between photographic density and transmittance or reflectivity is straightforward and reasonably accurate. The major problem is the next step of modeling the relationship between the exposure X(C) and the density variable dx. Figure 11.3-2a shows a typical curve of the transmittance of a nonreversal transparency (a) (b) (c) (d) FIGURE 11.3-2. Relationships between transmittance, density, and exposure for a nonreversal film.
FIGURE 11.3-3. H & D curves for a reversal film as a function of development time.

as a function of exposure. It is to be noted that the curve is highly nonlinear except for a relatively narrow region in the lower exposure range. In Figure 11.3-2b, the curve of Figure 11.3-2a has been replotted as transmittance versus the logarithm of exposure. An approximate linear relationship is found to exist between transmittance and the logarithm of exposure, but operation in this exposure region is usually of little use in imaging systems. The parameter of interest in photography is the photographic density variable d_x, which is plotted as a function of exposure and logarithm of exposure in Figures 11.3-2c and 11.3-2d. The plot of density versus logarithm of exposure is known as the H & D curve after Hurter and Driffield, who performed fundamental investigations of the relationships between density and exposure. Figure 11.3-3 is a plot of the H & D curve for a reversal type of film.

In Figure 11.3-2d, the central portion of the curve, which is approximately linear, has been approximated by the line defined by

d_x = γ [ log_10 X(C) − K_F ]        (11.3-7)

where γ represents the slope of the line and K_F denotes the intercept of the line with the log exposure axis. The slope of the curve γ (gamma) is a measure of the contrast of the film, while the factor K_F is a measure of the film speed; that is, a measure of the base exposure required to produce a negative in the linear region of the H & D curve. If the exposure is restricted to the linear portion of the H & D curve, substitution of Eq. 11.3-7 into Eq. 11.3-3 yields a transmittance function

τ(λ) = K_τ(λ) [ X(C) ]^{−γ D(λ)}        (11.3-8a)

where

K_τ(λ) ≡ 10^{γ K_F D(λ)}        (11.3-8b)
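Purely as an illustration (every numerical value below is a placeholder, not data from the text), the exposure model of Eq. 11.3-1, the linearized H & D model of Eq. 11.3-7, and the transmittance relation of Eq. 11.3-3 can be chained together as follows:

```python
import numpy as np

def film_transmittance(spectrum, sensitivity, wavelengths,
                       kx=1.0, gamma=1.8, kf=-2.0, density=1.0):
    """Monochrome photographic model: exposure (Eq. 11.3-1), linear H & D region
    (Eq. 11.3-7), and transmittance (Eq. 11.3-3). All parameter values are illustrative."""
    # Integrated exposure X(C) = kx * integral of C(lambda) L(lambda) d(lambda)
    exposure = kx * np.trapz(spectrum * sensitivity, wavelengths)
    # Density from the linear region of the H & D curve
    dx = gamma * (np.log10(exposure) - kf)
    # Transmittance for a spectrally flat characteristic density D
    return 10.0 ** (-dx * density)
```

For a reflection print, Eq. 11.3-5 has the same form, with the reflectivity in place of the transmittance.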
  • 316. 308 IMAGE RESTORATION MODELS FIGURE 11.3-4. Color film integral tripack. With the exposure model of Eq. 11.3-1, the transmittance or reflection models of Eqs. 11.3-3 and 11.3-5, and the H & D curve, or its linearized model of Eq. 11.3-7, it is possible mathematically to model the monochrome photographic process. 11.3.2. Color Photography Modern color photography systems utilize an integral tripack film, as illustrated in Figure 11.3-4, to produce positive or negative transparencies. In a cross section of this film, the first layer is a silver halide emulsion sensitive to blue light. A yellow filter following the blue emulsion prevents blue light from passing through to the green and red silver emulsions that follow in consecutive layers and are naturally sensitive to blue light. A transparent base supports the emulsion layers. Upon devel- opment, the blue emulsion layer is converted into a yellow dye transparency whose dye concentration is proportional to the blue exposure for a negative transparency and inversely proportional for a positive transparency. Similarly, the green and blue emulsion layers become magenta and cyan dye layers, respectively. Color prints can be obtained by a variety of processes (7). The most common technique is to produce a positive print from a color negative transparency onto nonreversal color paper. In the establishment of a mathematical model of the color photographic process, each emulsion layer can be considered to react to light as does an emulsion layer of a monochrome photographic material. To a first approximation, this assumption is correct. However, there are often significant interactions between the emulsion and dye layers, Each emulsion layer possesses a characteristic sensitivity, as shown by the typical curves of Figure 11.3-5. The integrated exposures of the layers are given by X R ( C ) = d R ∫ C ( λ )L R ( λ ) dλ (11.3-9a) XG ( C ) = d G ∫ C ( λ )L G ( λ ) dλ (11.3-9b) X B ( C ) = d B ∫ C ( λ )L B ( λ ) dλ (11.3-9c)
  • 317. PHOTOGRAPHIC PROCESS MODELS 309 FIGURE 11.3-5. Spectral sensitivities of typical film layer emulsions. where dR, dG, dB are proportionality constants whose values are adjusted so that the exposures are equal for a reference white illumination and so that the film is not sat- urated. In the chemical development process of the film, a positive transparency is produced with three absorptive dye layers of cyan, magenta, and yellow dyes. The transmittance τT ( λ ) of the developed transparency is the product of the transmittance of the cyan τTC ( λ ), the magenta τ TM ( λ ), and the yellow τ TY ( λ ) dyes. Hence, τ T ( λ ) = τ TC ( λ )τTM ( λ )τ TY ( λ ) (11.3-10) The transmittance of each dye is a function of its spectral absorption characteristic and its concentration. This functional dependence is conveniently expressed in terms of the relative density of each dye as – cD NC ( λ ) τ TC ( λ ) = 10 (11.3-11a) – mD NM ( λ ) τTM ( λ ) = 10 (11.3-11b) – yD NY ( λ ) τTY ( λ ) = 10 (11.3-11c) where c, m, y represent the relative amounts of the cyan, magenta, and yellow dyes, and D NC ( λ ) , D NM ( λ ) , D NY ( λ ) denote the spectral densities of unit amounts of the dyes. For unit amounts of the dyes, the transparency transmittance is – D TN ( λ ) τTN ( λ ) = 10 (11.3-12a)
  • 318. 310 IMAGE RESTORATION MODELS FIGURE 11.3-6. Spectral dye densities and neutral density of a typical reversal color film. where D TN ( λ ) = D NC ( λ ) + D NM ( λ ) + D NY ( λ ) (11.3-12b) Such a transparency appears to be a neutral gray when illuminated by a reference white light. Figure 11.3-6 illustrates the typical dye densities and neutral density for a reversal film. The relationship between the exposure values and dye layer densities is, in gen- eral, quite complex. For example, the amount of cyan dye produced is a nonlinear function not only of the red exposure, but is also dependent to a smaller extent on the green and blue exposures. Similar relationships hold for the amounts of magenta and yellow dyes produced by their exposures. Often, these interimage effects can be neglected, and it can be assumed that the cyan dye is produced only by the red expo- sure, the magenta dye by the green exposure, and the blue dye by the yellow expo- sure. For this assumption, the dye density–exposure relationship can be characterized by the Hurter–Driffield plot of equivalent neutral density versus the logarithm of exposure for each dye. Figure 11.3-7 shows a typical H & D curve for a reversal film. In the central portion of each H & D curve, the density versus expo- sure characteristic can be modeled as c = γ C log 10 X R + K FC (11.3-13a) m = γ M log 10 X G + K FM (11.3-13b) y = γY log 10 X B + K FY (11.3-13c)
  • 319. PHOTOGRAPHIC PROCESS MODELS 311 FIGURE 11.3-7. H & D curves for a typical reversal color film. where γC , γM , γ Y , representing the slopes of the curves in the linear region, are called dye layer gammas. The spectral energy distribution of light passing through a developed transpar- ency is the product of the transparency transmittance and the incident illumination spectral energy distribution E ( λ ) as given by – [ cD NC ( λ ) + mD NM ( λ ) + yD NY ( λ ) ] CT ( λ ) = E ( λ )10 (11.3-14) Figure 11.3-8 is a block diagram of the complete color film recording and reproduc- tion process. The original light with distribution C ( λ ) and the light passing through the transparency C T ( λ ) at a given resolution element are rarely identical. That is, a spectral match is usually not achieved in the photographic process. Furthermore, the lights C and CT usually do not even provide a colorimetric match.
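Neglecting interimage effects, the color transparency model of Eqs. 11.3-9, 11.3-13, and 11.3-14 can be sketched as below; in practice the spectral curves and constants come from measured film data, so all arguments here are placeholders supplied by the caller:

```python
import numpy as np

def transparency_spectrum(scene, illum, wavelengths, layer_sens, dye_density,
                          gammas=(1.0, 1.0, 1.0), offsets=(0.0, 0.0, 0.0)):
    """Color film model: layer exposures (Eq. 11.3-9), dye amounts from the linear
    H & D region (Eq. 11.3-13), and transmitted spectrum (Eq. 11.3-14).

    scene, illum:  spectral distributions C(lambda), E(lambda) sampled on `wavelengths`.
    layer_sens:    dict with 'R', 'G', 'B' spectral sensitivities L_R, L_G, L_B.
    dye_density:   dict with 'C', 'M', 'Y' unit dye densities D_NC, D_NM, D_NY.
    """
    # Layer exposures X_R, X_G, X_B (proportionality constants folded into the sensitivities)
    x = {k: np.trapz(scene * layer_sens[k], wavelengths) for k in ("R", "G", "B")}
    # Dye amounts c, m, y from the dye-layer gammas (Eq. 11.3-13)
    c = gammas[0] * np.log10(x["R"]) + offsets[0]
    m = gammas[1] * np.log10(x["G"]) + offsets[1]
    y = gammas[2] * np.log10(x["B"]) + offsets[2]
    # Spectrum of light passing through the developed transparency (Eq. 11.3-14)
    total_density = c * dye_density["C"] + m * dye_density["M"] + y * dye_density["Y"]
    return illum * 10.0 ** (-total_density)
```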
  • 320. 312 IMAGE RESTORATION MODELS FIGURE 11.3-8. Color film model. 11.4. DISCRETE IMAGE RESTORATION MODELS This chapter began with an introduction to a general model of an imaging system and a digital restoration process. Next, typical components of the imaging system were described and modeled within the context of the general model. Now, the dis- cussion turns to the development of several discrete image restoration models. In the development of these models, it is assumed that the spectral wavelength response and temporal response characteristics of the physical imaging system can be sepa- rated from the spatial and point characteristics. The following discussion considers only spatial and point characteristics. After each element of the digital image restoration system of Figure 11.1-1 is modeled, following the techniques described previously, the restoration system may be conceptually distilled to three equations: Observed image: F S ( m 1, m 2 ) = O M { F I ( n 1, n 2 ), N 1 ( m 1, m 2 ), …, N N ( m 1, m 2 ) } (11.4-1a) Compensated image: FK ( k 1, k 2 ) = O R { F S ( m 1, m 2 ) } (11.4-1b) Restored image: ˆ F I ( n 1, n 2 ) = O D { F K ( k 1, k 2 ) } (11.4-1c)
where F_S represents an array of observed image samples, F_I and F̂_I are arrays of ideal image points and estimates, respectively, F_K is an array of compensated image points from the digital restoration system, N_i denotes arrays of noise samples from various system elements, and O_M{·}, O_R{·}, O_D{·} represent general transfer functions of the imaging system, restoration processor, and display system, respectively. Vector-space equivalents of Eq. 11.4-1 can be formed for purposes of analysis by column scanning of the arrays of Eq. 11.4-1. These relationships are given by

f_S = O_M{ f_I, n_1, ..., n_N }        (11.4-2a)

f_K = O_R{ f_S }        (11.4-2b)

f̂_I = O_D{ f_K }        (11.4-2c)

Several estimation approaches to the solution of Eq. 11.4-1 or 11.4-2 are described in the following chapters. Unfortunately, general solutions have not been found; recourse must be made to specific solutions for less general models.

The most common digital restoration model is that of Figure 11.4-1a, in which a continuous image field is subjected to a linear blur, the electrical sensor responds nonlinearly to its input intensity, and the sensor amplifier introduces additive Gaussian noise independent of the image field. The physical image digitizer that follows may also introduce an effective blurring of the sampled image as the result of sampling with extended pulses. In this model, display degradation is ignored.

FIGURE 11.4-1. Imaging and restoration models for a sampled blurred image with additive noise.
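The degradation chain of Figure 11.4-1a can be simulated with a short sketch such as the following (the blur width, sensor nonlinearity, noise level, and the use of simple decimation in place of extended-pulse sampling are all assumptions of this example):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(ideal, blur_sigma=2.0, gamma=0.6, noise_sigma=0.01, step=2, rng=None):
    """Simulate the model of Figure 11.4-1a: linear blur, pointwise sensor
    nonlinearity, additive Gaussian noise, then sampling on a coarser grid."""
    rng = np.random.default_rng() if rng is None else rng
    blurred = gaussian_filter(np.asarray(ideal, dtype=float), sigma=blur_sigma)
    sensed = np.clip(blurred, 0.0, None) ** gamma                    # pointwise nonlinearity
    observed = sensed + rng.normal(0.0, noise_sigma, sensed.shape)   # additive sensor noise
    return observed[::step, ::step]   # ideal (impulse) sampling stands in for pulse sampling
```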
  • 322. 314 IMAGE RESTORATION MODELS Figure 11.4-1b shows a restoration model for the imaging system. It is assumed that the imaging blur can be modeled as a superposition operation with an impulse response J(x, y) that may be space variant. The sensor is assumed to respond nonlin- early to the input field FB(x, y) on a point-by-point basis, and its output is subject to an additive noise field N(x, y). The effect of sampling with extended sampling pulses, which are assumed symmetric, can be modeled as a convolution of FO(x, y) with each pulse P(x, y) followed by perfect sampling. ˆ The objective of the restoration is to produce an array of samples F I ( n 1, n 2 ) that are estimates of points on the ideal input image field FI(x, y) obtained by a perfect image digitizer sampling at a spatial period ∆I . To produce a digital restoration model, it is necessary quantitatively to relate the physical image samples FS ( m 1, m 2 ) to the ideal image points FI ( n 1, n 2 ) following the techniques outlined in Section 7.2. This is accomplished by truncating the sampling pulse equivalent impulse response P(x, y) to some spatial limits ± TP , and then extracting points from the continuous observed field FO(x, y) at a grid spacing ∆P . The discrete representation must then be carried one step further by relating points on the observed image field FO(x, y) to points on the image field FP(x, y) and the noise field N(x, y). The final step in the development of the discrete restoration model involves discretization of the super- position operation with J(x, y). There are two potential sources of error in this mod- eling process: truncation of the impulse responses J(x, y) and P(x, y), and quadrature integration errors. Both sources of error can be made negligibly small by choosing the truncation limits TB and TP large and by choosing the quadrature spacings ∆I and ∆P small. This, of course, increases the sizes of the arrays, and eventually, the amount of storage and processing required. Actually, as is subsequently shown, the numerical stability of the restoration estimate may be impaired by improving the accuracy of the discretization process! The relative dimensions of the various arrays of the restoration model are impor- tant. Figure 11.4-2 shows the nested nature of the arrays. The image array observed, FO ( k 1, k 2 ), is smaller than the ideal image array, F I ( n 1, n 2 ), by the half-width of the truncated impulse response J(x, y). Similarly, the array of physical sample points FS(m1, m2) is smaller than the array of image points observed, FO ( k 1, k 2 ), by the half-width of the truncated impulse response P ( x, y ). It is convenient to form vector equivalents of the various arrays of the restoration model in order to utilize the formal structure of vector algebra in the subsequent restoration analysis. Again, following the techniques of Section 7.2, the arrays are reindexed so that the first element appears in the upper-left corner of each array. Next, the vector relationships between the stages of the model are obtained by col- umn scanning of the arrays to give fS = BP fO (11.4-3a) fO = fP + n (11.4-3b) fP = O P { f B } (11.4-3c) f B = BB f I (11.4-3d)
  • 323. DISCRETE IMAGE RESTORATION MODELS 315 FIGURE 11.4-2. Relationships of sampled image arrays. where the blur matrix BP contains samples of P(x, y) and BB contains samples of J(x, y). The nonlinear operation of Eq. 1 l.4-3c is defined as a point-by-point nonlin- ear transformation. That is, fP ( i ) = OP { fB ( i ) } (11.4-4) Equations 11.4-3a to 11.4-3d can be combined to yield a single equation for the observed physical image samples in terms of points on the ideal image: fS = BP OP { BB fI } + BP n (11.4-5) Several special cases of Eq. 11.4-5 will now be defined. First, if the point nonlin- earity is absent, fS = BfI + n B (11.4-6)
  • 324. 316 IMAGE RESTORATION MODELS (a) Original (b) Impulse response (c) Observation FIGURE 11.4-3. Image arrays for underdetermined model. where B = BPBB and nB = BPn. This is the classical discrete model consisting of a set of linear equations with measurement uncertainty. Another case that will be defined for later discussion occurs when the spatial blur of the physical image digi- tizer is negligible. In this case, f S = O P { Bf I } + n (11.4-7) where B = BB is defined by Eq. 7.2-15. Chapter 12 contains results for several image restoration experiments based on the restoration model defined by Eq. 11.4-6. An artificial image has been generated for these computer simulation experiments (9). The original image used for the analysis of underdetermined restoration techniques, shown in Figure 11.4-3a, consists of a 4 × 4 pixel square of intensity 245 placed against an extended background of intensity
10 referenced to an intensity scale of 0 to 255. All images are zoomed for display purposes. The Gaussian-shaped impulse response function is defined as

$$H(l_1, l_2) = K \exp\left\{ -\left[ \frac{l_1^2}{2 b_C^2} + \frac{l_2^2}{2 b_R^2} \right] \right\} \qquad (11.4\text{-}8)$$

over a 5 × 5 point array, where K is an amplitude scaling constant and b_C and b_R are blur-spread constants. In the computer simulation restoration experiments, the observed blurred image model has been obtained by multiplying the column-scanned original image of Figure 11.4-3a by the blur matrix B. Next, additive white Gaussian observation noise has been simulated by adding output variables from an appropriate random number generator to the blurred images. For display, all restored image points are clipped to the intensity range 0 to 255.

REFERENCES

1. M. Born and E. Wolf, Principles of Optics, 7th ed., Pergamon Press, New York, 1999.
2. J. W. Goodman, Introduction to Fourier Optics, 2nd ed., McGraw-Hill, New York, 1996.
3. E. L. O'Neill and E. H. O'Neill, Introduction to Statistical Optics, reprint ed., Addison-Wesley, Reading, MA, 1992.
4. H. H. Hopkins, Proc. Royal Society, A, 231, 1184, July 1955, 98.
5. R. E. Hufnagel and N. R. Stanley, "Modulation Transfer Function Associated with Image Transmission Through Turbulent Media," J. Optical Society of America, 54, 1, January 1964, 52–61.
6. K. Henney and B. Dudley, Handbook of Photography, McGraw-Hill, New York, 1939.
7. R. M. Evans, W. T. Hanson, and W. L. Brewer, Principles of Color Photography, Wiley, New York, 1953.
8. C. E. Mees, The Theory of the Photographic Process, Macmillan, New York, 1966.
9. N. D. A. Mascarenhas and W. K. Pratt, "Digital Image Restoration Under a Regression Model," IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 252–266.
  • 326. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 12 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES A common defect in imaging systems is unwanted nonlinearities in the sensor and display systems. Post processing correction of sensor signals and pre-processing correction of display signals can reduce such degradations substantially (1). Such point restoration processing is usually relatively simple to implement. One of the most common image restoration tasks is that of spatial image restoration to compen- sate for image blur and to diminish noise effects. References 2 to 6 contain surveys of spatial image restoration methods. 12.1. SENSOR AND DISPLAY POINT NONLINEARITY CORRECTION This section considers methods for compensation of point nonlinearities of sensors and displays. 12.1.1. Sensor Point Nonlinearity Correction In imaging systems in which the source degradation can be separated into cascaded spatial and point effects, it is often possible directly to compensate for the point deg- radation (7). Consider a physical imaging system that produces an observed image field FO ( x, y ) according to the separable model F O ( x, y ) = O Q { O D { C ( x, y, λ ) } } (12.1-1) 319
  • 327. 320 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES FIGURE 12.1-1. Point luminance correction for an image sensor. where C ( x, y, λ ) is the spectral energy distribution of the input light field, OQ { · } represents the point amplitude response of the sensor and O D { · } denotes the spatial and wavelength responses. Sensor luminance correction can then be accomplished by passing the observed image through a correction system with a point restoration operator O R { · } ideally chosen such that OR { OQ { · } } = 1 (12.1-2) For continuous images in optical form, it may be difficult to implement a desired point restoration operator if the operator is nonlinear. Compensation for images in analog electrical form can be accomplished with a nonlinear amplifier, while digital image compensation can be performed by arithmetic operators or by a table look-up procedure. Figure 12.1-1 is a block diagram that illustrates the point luminance correction methodology. The sensor input is a point light distribution function C that is con- verted to a binary number B for eventual entry into a computer or digital processor. In some imaging applications, processing will be performed directly on the binary representation, while in other applications, it will be preferable to convert to a real fixed-point computer number linearly proportional to the sensor input luminance. In ˜ the former case, the binary correction unit will produce a binary number B that is designed to be linearly proportional to C, and in the latter case, the fixed-point cor- ˜ rection unit will produce a fixed-point number C that is designed to be equal to C. A typical measured response B versus sensor input luminance level C is shown in Figure 12.1-2a, while Figure 12.1-2b shows the corresponding compensated response that is desired. The measured response can be obtained by scanning a gray scale test chart of known luminance values and observing the digitized binary value B at each step. Repeated measurements should be made to reduce the effects of noise and measurement errors. For calibration purposes, it is convenient to regard the binary-coded luminance as a fixed-point binary number. As an example, if the luminance range is sliced to 4096 levels and coded with 12 bits, the binary represen- tation would be B = b8 b7 b6 b5 b4 b3 b2 b1. b–1 b–2 b–3 b–4 (12.1-3)
FIGURE 12.1-2. Measured and compensated sensor luminance response.

The whole-number part in this example ranges from 0 to 255, and the fractional part divides each integer step into 16 subdivisions. In this format, the scanner can produce output levels over the range

$$0.0 \le B \le 255.9375 \qquad (12.1\text{-}4)$$

After the measured gray scale data points of Figure 12.1-2a have been obtained, a smooth analytic curve

$$C = g\{B\} \qquad (12.1\text{-}5)$$

is fitted to the data. The desired luminance response in real-number and binary-number forms is
$$\tilde{C} = C \qquad (12.1\text{-}6a)$$

$$\tilde{B} = B_{\max}\, \frac{C - C_{\min}}{C_{\max} - C_{\min}} \qquad (12.1\text{-}6b)$$

Hence, the required compensation relationships are

$$\tilde{C} = g\{B\} \qquad (12.1\text{-}7a)$$

$$\tilde{B} = B_{\max}\, \frac{g\{B\} - C_{\min}}{C_{\max} - C_{\min}} \qquad (12.1\text{-}7b)$$

The limits of the luminance function are commonly normalized to the range 0.0 to 1.0. To improve the accuracy of the calibration procedure, it is wise to perform a rough calibration first and then repeat the procedure as often as required to refine the correction curve. It should be observed that because B is a binary number, the corrected luminance value $\tilde{C}$ will be a quantized real number. Furthermore, the corrected binary-coded luminance $\tilde{B}$ will be subject to binary roundoff of the right-hand side of Eq. 12.1-7b. As a consequence of the nonlinearity of the fitted curve C = g{B} and the amplitude quantization inherent to the digitizer, it is possible that some of the corrected binary-coded luminance values may be unoccupied. In other words, the image histogram of $\tilde{B}$ may possess gaps. To minimize this effect, the number of output levels can be limited to less than the number of input levels. For example, B may be coded to 12 bits and $\tilde{B}$ coded to only 8 bits. Another alternative is to add pseudorandom noise to $\tilde{B}$ to smooth out the occupancy levels.

Many image scanning devices exhibit a variable spatial nonlinear point luminance response. Conceptually, the point correction techniques described previously could be performed at each pixel using the calibration curve measured at that point. Such a process, however, would be mechanically prohibitive. An alternative approach, called gain correction, that is often successful is to model the variable spatial response by some smooth normalized two-dimensional curve G(j, k) over the sensor surface. Then, the corrected spatial response can be obtained by the operation

$$\tilde{F}(j, k) = \frac{F(j, k)}{G(j, k)} \qquad (12.1\text{-}8)$$

where F(j, k) and $\tilde{F}(j, k)$ represent the raw and corrected sensor responses, respectively.

Figure 12.1-3 provides an example of adaptive gain correction of a charge-coupled device (CCD) camera. Figure 12.1-3a is an image of a spatially flat light box surface obtained with the CCD camera. A line profile plot of a diagonal line through the original image is presented in Figure 12.1-3b. Figure 12.1-3c is the gain-corrected original, in which G(j, k) is obtained by Fourier domain low-pass filtering of
  • 330. SENSOR AND DISPLAY POINT NONLINEARITY CORRECTION 323 (a) Original (b) Line profile of original (c) Gain corrected (d) Line profile of gain corrected FIGURE 12.1-3. Gain correction of a CCD camera image. the original image. The line profile plot of Figure 12.1-3d shows the “flattened” result. 12.1.2. Display Point Nonlinearity Correction Correction of an image display for point luminance nonlinearities is identical in principle to the correction of point luminance nonlinearities of an image sensor. The procedure illustrated in Figure 12.1-4 involves distortion of the binary coded image ˜ luminance variable B to form a corrected binary coded luminance function B so that ˜ the displayed luminance C will be linearly proportional to B. In this formulation, the display may include a photographic record of a displayed light field. The desired overall response is ˜ C max – C min ˜ ˜ ˜ C = B ------------------------------ + C min - (12.1-9) B max Normally, the maximum and minimum limits of the displayed luminance ˜ function C are not absolute quantities, but rather are transmissivities or reflectivities
  • 331. 324 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES FIGURE 12.1-4. Point luminance correction of an image display. normalized over a unit range. The measured response of the display and image reconstruction system is modeled by the nonlinear function C = f {B} (12.1-10) Therefore, the desired linear response can be obtained by setting ˜  C max – C min ˜ ˜  ˜ B = g  B ------------------------------ + Cmin  - (12.1-11)  Bmax  where g { · } is the inverse function of f { · } . The experimental procedure for determining the correction function g { · } will be described for the common example of producing a photographic print from an image display. The first step involves the generation of a digital gray scale step chart over the full range of the binary number B. Usually, about 16 equally spaced levels of B are sufficient. Next, the reflective luminance must be measured over each step of the developed print to produce a plot such as in Figure 12.1-5. The data points are then fitted by the smooth analytic curve B = g { C }, which forms the desired trans- formation of Eq. 12.1-10. It is important that enough bits be allocated to B so that the discrete mapping g { · } can be approximated to sufficient accuracy. Also, the ˜ number of bits allocated to B must be sufficient to prevent gray scale contouring as the result of the nonlinear spacing of display levels. A 10-bit representation of B and ˜ an 8-bit representation of B should be adequate in most applications. Image display devices such as cathode ray tube displays often exhibit spatial luminance variation. Typically, a displayed image is brighter at the center of the dis- play screen than at its periphery. Correction techniques, as described by Eq. 12.1-8, can be utilized for compensation of spatial luminance variations.
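A common digital realization of the corrections of Eqs. 12.1-7 and 12.1-11 is a lookup table built from the fitted calibration curve. The sketch below assumes a hypothetical measured sensor response C = g{B} of power-law form and builds an 8-bit correction table; the power-law model and the bit allocations are illustrative assumptions, not calibration data from the text.

```python
import numpy as np

# Hypothetical fitted calibration curve C = g{B}: sensor output code B
# (10 bits here) versus normalized input luminance C in [0, 1].
B_max_in = 1023
def g(B):
    # Assumed power-law sensor response fitted to measured gray-scale steps.
    return (B / B_max_in) ** 2.2

# Build a correction lookup table implementing Eq. 12.1-7b:
# corrected code B~ = B~max * (g{B} - Cmin) / (Cmax - Cmin),
# with the corrected output coded to 8 bits to limit histogram gaps.
B_max_out = 255
C_min, C_max = 0.0, 1.0
codes = np.arange(B_max_in + 1)
lut = np.round(B_max_out * (g(codes) - C_min) / (C_max - C_min)).astype(np.uint8)

# Applying the correction to a raw scan is then a single table lookup.
raw_scan = np.array([[0, 256, 512], [768, 900, 1023]])
corrected = lut[raw_scan]
print(corrected)
```

The same table-lookup structure applies to display predistortion; only the fitted curve and the direction of the mapping change.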
FIGURE 12.1-5. Measured image display response.

12.2. CONTINUOUS IMAGE SPATIAL FILTERING RESTORATION

For the class of imaging systems in which the spatial degradation can be modeled by a linear-shift-invariant impulse response and the noise is additive, restoration of continuous images can be performed by linear filtering techniques. Figure 12.2-1 contains a block diagram for the analysis of such techniques. An ideal image F_I(x, y) passes through a linear spatial degradation system with an impulse response H_D(x, y) and is combined with additive noise N(x, y). The noise is assumed to be uncorrelated with the ideal image. The observed image field can be represented by the convolution operation as

$$F_O(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F_I(\alpha, \beta)\, H_D(x - \alpha, y - \beta)\, d\alpha\, d\beta + N(x, y) \qquad (12.2\text{-}1a)$$

or

$$F_O(x, y) = F_I(x, y) \circledast H_D(x, y) + N(x, y) \qquad (12.2\text{-}1b)$$

The restoration system consists of a linear-shift-invariant filter defined by the impulse response H_R(x, y). After restoration with this filter, the reconstructed image becomes

$$\hat{F}_I(x, y) = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} F_O(\alpha, \beta)\, H_R(x - \alpha, y - \beta)\, d\alpha\, d\beta \qquad (12.2\text{-}2a)$$

or

$$\hat{F}_I(x, y) = F_O(x, y) \circledast H_R(x, y) \qquad (12.2\text{-}2b)$$
FIGURE 12.2-1. Continuous image restoration model.

Substitution of Eq. 12.2-1b into Eq. 12.2-2b yields

$$\hat{F}_I(x, y) = [F_I(x, y) \circledast H_D(x, y) + N(x, y)] \circledast H_R(x, y) \qquad (12.2\text{-}3)$$

It is analytically convenient to consider the reconstructed image in the Fourier transform domain. By the Fourier transform convolution theorem,

$$\hat{F}_I(\omega_x, \omega_y) = [F_I(\omega_x, \omega_y)\, H_D(\omega_x, \omega_y) + N(\omega_x, \omega_y)]\, H_R(\omega_x, \omega_y) \qquad (12.2\text{-}4)$$

where $\hat{F}_I(\omega_x, \omega_y)$, F_I(ω_x, ω_y), N(ω_x, ω_y), H_D(ω_x, ω_y), H_R(ω_x, ω_y) are the two-dimensional Fourier transforms of $\hat{F}_I(x, y)$, F_I(x, y), N(x, y), H_D(x, y), H_R(x, y), respectively. The following sections describe various types of continuous image restoration filters.

12.2.1. Inverse Filter

The earliest attempts at image restoration were based on the concept of inverse filtering, in which the transfer function of the degrading system is inverted to yield a restored image (8–12). If the restoration inverse filter transfer function is chosen so that

$$H_R(\omega_x, \omega_y) = \frac{1}{H_D(\omega_x, \omega_y)} \qquad (12.2\text{-}5)$$

then the spectrum of the reconstructed image becomes

$$\hat{F}_I(\omega_x, \omega_y) = F_I(\omega_x, \omega_y) + \frac{N(\omega_x, \omega_y)}{H_D(\omega_x, \omega_y)} \qquad (12.2\text{-}6)$$
FIGURE 12.2-2. Typical spectra of an inverse filtering image restoration system.

Upon inverse Fourier transformation, the restored image field

$$\hat{F}_I(x, y) = F_I(x, y) + \frac{1}{4\pi^2} \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \frac{N(\omega_x, \omega_y)}{H_D(\omega_x, \omega_y)}\, \exp\{ i(\omega_x x + \omega_y y) \}\, d\omega_x\, d\omega_y \qquad (12.2\text{-}7)$$

is obtained. In the absence of source noise, a perfect reconstruction results, but if source noise is present, there will be an additive reconstruction error whose value can become quite large at spatial frequencies for which H_D(ω_x, ω_y) is small. Typically, H_D(ω_x, ω_y) and F_I(ω_x, ω_y) are small at high spatial frequencies, hence image quality becomes severely impaired in high-detail regions of the reconstructed image. Figure 12.2-2 shows typical frequency spectra involved in inverse filtering.

The presence of noise may severely affect the uniqueness of a restoration estimate. That is, small changes in N(x, y) may radically change the value of the estimate $\hat{F}_I(x, y)$. For example, consider the dither function Z(x, y) added to an ideal image to produce a perturbed image

$$F_Z(x, y) = F_I(x, y) + Z(x, y) \qquad (12.2\text{-}8)$$

There may be many dither functions for which
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} Z(\alpha, \beta)\, H_D(x - \alpha, y - \beta)\, d\alpha\, d\beta < N(x, y) \qquad (12.2\text{-}9)$$

For such functions, the perturbed image field F_Z(x, y) may satisfy the convolution integral of Eq. 12.2-1 to within the accuracy of the observed image field. Specifically, it can be shown that if the dither function is a high-frequency sinusoid of arbitrary amplitude, then in the limit

$$\lim_{n \to \infty} \left\{ \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \sin\{ n(\alpha + \beta) \}\, H_D(x - \alpha, y - \beta)\, d\alpha\, d\beta \right\} = 0 \qquad (12.2\text{-}10)$$

For image restoration, this fact is particularly disturbing, for two reasons. High-frequency signal components may be present in an ideal image, yet their presence may be masked by observation noise. Conversely, a small amount of observation noise may lead to a reconstruction of F_I(x, y) that contains very large amplitude high-frequency components. If relatively small perturbations N(x, y) in the observation result in large dither functions for a particular degradation impulse response, the convolution integral of Eq. 12.2-1 is said to be unstable or ill conditioned. This potential instability is dependent on the structure of the degradation impulse response function.

There have been several ad hoc proposals to alleviate noise problems inherent to inverse filtering. One approach (10) is to choose a restoration filter with a transfer function

$$H_R(\omega_x, \omega_y) = \frac{H_K(\omega_x, \omega_y)}{H_D(\omega_x, \omega_y)} \qquad (12.2\text{-}11)$$

where H_K(ω_x, ω_y) has a value of unity at spatial frequencies for which the expected magnitude of the ideal image spectrum is greater than the expected magnitude of the noise spectrum, and zero elsewhere. The reconstructed image spectrum is then

$$\hat{F}_I(\omega_x, \omega_y) = F_I(\omega_x, \omega_y)\, H_K(\omega_x, \omega_y) + \frac{N(\omega_x, \omega_y)\, H_K(\omega_x, \omega_y)}{H_D(\omega_x, \omega_y)} \qquad (12.2\text{-}12)$$

The result is a compromise between noise suppression and loss of high-frequency image detail. Another fundamental difficulty with inverse filtering is that the transfer function of the degradation may have zeros in its passband. At such points in the frequency spectrum, the inverse filter is not physically realizable, and therefore the filter must be approximated by a large value response at such points.
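A frequency-domain sketch of the truncated inverse filter of Eq. 12.2-11 is given below. The blur transfer function is a hypothetical Gaussian, H_K is taken as an ideal circular low-pass window, and the cutoff radius and the stand-in observation are illustrative assumptions only.

```python
import numpy as np

N = 256                                   # image size (assumed)
fx = np.fft.fftfreq(N)                    # normalized spatial frequencies
wx, wy = np.meshgrid(fx, fx)
radius = np.sqrt(wx ** 2 + wy ** 2)

# Hypothetical Gaussian blur transfer function H_D(wx, wy).
H_D = np.exp(-(radius / 0.08) ** 2)

# H_K: unity inside a circular cutoff where the signal is expected to dominate
# the noise, zero elsewhere (Eq. 12.2-11).  The cutoff is an assumed design value.
cutoff = 0.15
H_K = (radius <= cutoff).astype(float)

# Truncated inverse filter transfer function.
H_R = np.where(H_D > 1e-6, H_K / H_D, 0.0)

def restore(observed):
    # Apply the restoration filter by pointwise multiplication in the
    # Fourier domain (cf. Eq. 12.2-12 without the explicit noise term).
    return np.real(np.fft.ifft2(np.fft.fft2(observed) * H_R))

blurred = np.random.default_rng(1).normal(size=(N, N))   # stand-in observation
restored = restore(blurred)
print(restored.shape)
```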
12.2.2. Wiener Filter

It should not be surprising that inverse filtering performs poorly in the presence of noise because the filter design ignores the noise process. Improved restoration quality is possible with Wiener filtering techniques, which incorporate a priori statistical knowledge of the noise field (13–17).

In the general derivation of the Wiener filter, it is assumed that the ideal image F_I(x, y) and the observed image F_O(x, y) of Figure 12.2-1 are samples of two-dimensional, continuous stochastic fields with zero-value spatial means. The impulse response of the restoration filter is chosen to minimize the mean-square restoration error

$$\mathcal{E} = E\{ [F_I(x, y) - \hat{F}_I(x, y)]^2 \} \qquad (12.2\text{-}13)$$

The mean-square error is minimized when the following orthogonality condition is met (13):

$$E\{ [F_I(x, y) - \hat{F}_I(x, y)]\, F_O(x', y') \} = 0 \qquad (12.2\text{-}14)$$

for all image coordinate pairs (x, y) and (x', y'). Upon substitution of Eq. 12.2-2a for the restored image and some linear algebraic manipulation, one obtains

$$E\{ F_I(x, y)\, F_O(x', y') \} = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} E\{ F_O(\alpha, \beta)\, F_O(x', y') \}\, H_R(x - \alpha, y - \beta)\, d\alpha\, d\beta \qquad (12.2\text{-}15)$$

Under the assumption that the ideal image and observed image are jointly stationary, the expectation terms can be expressed as covariance functions, as in Eq. 1.4-8. This yields

$$K_{F_I F_O}(x - x', y - y') = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} K_{F_O F_O}(\alpha - x', \beta - y')\, H_R(x - \alpha, y - \beta)\, d\alpha\, d\beta \qquad (12.2\text{-}16)$$

Then, taking the two-dimensional Fourier transform of both sides of Eq. 12.2-16 and solving for H_R(ω_x, ω_y), the following general expression for the Wiener filter transfer function is obtained:

$$H_R(\omega_x, \omega_y) = \frac{W_{F_I F_O}(\omega_x, \omega_y)}{W_{F_O F_O}(\omega_x, \omega_y)} \qquad (12.2\text{-}17)$$

In the special case of the additive noise model of Figure 12.2-1:
$$W_{F_I F_O}(\omega_x, \omega_y) = H_D^*(\omega_x, \omega_y)\, W_{F_I}(\omega_x, \omega_y) \qquad (12.2\text{-}18a)$$

$$W_{F_O F_O}(\omega_x, \omega_y) = |H_D(\omega_x, \omega_y)|^2\, W_{F_I}(\omega_x, \omega_y) + W_N(\omega_x, \omega_y) \qquad (12.2\text{-}18b)$$

This leads to the additive noise Wiener filter

$$H_R(\omega_x, \omega_y) = \frac{H_D^*(\omega_x, \omega_y)\, W_{F_I}(\omega_x, \omega_y)}{|H_D(\omega_x, \omega_y)|^2\, W_{F_I}(\omega_x, \omega_y) + W_N(\omega_x, \omega_y)} \qquad (12.2\text{-}19a)$$

or

$$H_R(\omega_x, \omega_y) = \frac{H_D^*(\omega_x, \omega_y)}{|H_D(\omega_x, \omega_y)|^2 + W_N(\omega_x, \omega_y) / W_{F_I}(\omega_x, \omega_y)} \qquad (12.2\text{-}19b)$$

In the latter formulation, the transfer function of the restoration filter can be expressed in terms of the signal-to-noise power ratio

$$SNR(\omega_x, \omega_y) \equiv \frac{W_{F_I}(\omega_x, \omega_y)}{W_N(\omega_x, \omega_y)} \qquad (12.2\text{-}20)$$

at each spatial frequency. Figure 12.2-3 shows cross-sectional sketches of a typical ideal image spectrum, noise spectrum, blur transfer function, and the resulting Wiener filter transfer function. As noted from the figure, this version of the Wiener filter acts as a bandpass filter. It performs as an inverse filter at low spatial frequencies, and as a smooth rolloff low-pass filter at high spatial frequencies.

Equation 12.2-19 is valid when the ideal image and observed image stochastic processes are zero mean. In this case, the reconstructed image Fourier transform is

$$\hat{F}_I(\omega_x, \omega_y) = H_R(\omega_x, \omega_y)\, F_O(\omega_x, \omega_y) \qquad (12.2\text{-}21)$$

If the ideal image and observed image means are nonzero, the proper form of the reconstructed image Fourier transform is

$$\hat{F}_I(\omega_x, \omega_y) = H_R(\omega_x, \omega_y)\, [F_O(\omega_x, \omega_y) - M_O(\omega_x, \omega_y)] + M_I(\omega_x, \omega_y) \qquad (12.2\text{-}22a)$$

where

$$M_O(\omega_x, \omega_y) = H_D(\omega_x, \omega_y)\, M_I(\omega_x, \omega_y) + M_N(\omega_x, \omega_y) \qquad (12.2\text{-}22b)$$
  • 338. CONTINUOUS IMAGE SPATIAL FILTERING RESTORATION 331 FIGURE 12.2-3. Typical spectra of a Wiener filtering image restoration system. and MI ( ω x, ω y ) and M N ( ω x, ω y ) are the two-dimensional Fourier transforms of the means of the ideal image and noise, respectively. It should be noted that Eq. 12.2-22 accommodates spatially varying mean models. In practice, it is common to estimate the mean of the observed image by its spatial average M O ( x, y ) and apply the Wiener filter of Eq. 12.2-19 to the observed image difference F O ( x, y ) – M O ( x, y ), and then add back the ideal image mean M I ( x, y ) to the Wiener filter result. It is useful to investigate special cases of Eq. 12.2-19. If the ideal image is assumed to be uncorrelated with unit energy, W F I ( ω x, ω y ) = 1 and the Wiener filter becomes H ∗ ( ω x, ω y ) D H R ( ω x, ω y ) = --------------------------------------------------------------------- - (12.2-23) 2 H D ( ω x, ω y ) + W N ( ω x, ω y )
  • 339. 332 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES This version of the Wiener filter provides less noise smoothing than does the general case of Eq. 12.2-19. If there is no blurring of the ideal image, H D ( ω x, ω y ) = 1 and the Wiener filter becomes a noise smoothing filter with a transfer function 1 HR ( ω x, ω y ) = -------------------------------------- - (12.2-24) 1 + W N ( ω x, ω y ) In many imaging systems, the impulse response of the blur may not be fixed; rather, it changes shape in a random manner. A practical example is the blur caused by imaging through a turbulent atmosphere. Obviously, a Wiener filter applied to this problem would perform better if it could dynamically adapt to the changing blur impulse response. If this is not possible, a design improvement in the Wiener filter can be obtained by considering the impulse response to be a sample of a two-dimen- sional stochastic process with a known mean shape and with a random perturbation about the mean modeled by a known power spectral density. Transfer functions for this type of restoration filter have been developed by Slepian (18). 12.2.3. Parametric Estimation Filters Several variations of the Wiener filter have been developed for image restoration. Some techniques are ad hoc, while others have a quantitative basis. Cole (19) has proposed a restoration filter with a transfer function W F ( ω x, ω y ) 1⁄2 H R ( ω x, ω y ) = ---------------------------------------------------------------------------------------------------- I - (12.2-25) 2 H D ( ω x, ω y ) W F ( ω x, ω y ) + W N ( ω x, ω y ) I The power spectrum of the filter output is 2 W F ( ω x, ω y ) = HR ( ω x, ω y ) W F ( ω x, ω y ) ˆ (12.2-26) I O where W FO ( ω x, ω y ) represents the power spectrum of the observation, which is related to the power spectrum of the ideal image by 2 W F ( ω x, ω y ) = HD ( ω x, ω y ) W F ( ω x, ω y ) + W N ( ω x, ω y ) (12.2-27) O I Thus, it is easily seen that the power spectrum of the reconstructed image is identical to the power spectrum of the ideal image field. That is, W F ( ω x, ω y ) = W F ( ω x, ω y ) ˆ (12.2-28) I I
For this reason, the restoration filter defined by Eq. 12.2-25 is called the image power-spectrum filter. In contrast, the power spectrum for the reconstructed image as obtained by the Wiener filter of Eq. 12.2-19 is

$$W_{\hat{F}_I}(\omega_x, \omega_y) = \frac{|H_D(\omega_x, \omega_y)|^2\, [W_{F_I}(\omega_x, \omega_y)]^2}{|H_D(\omega_x, \omega_y)|^2\, W_{F_I}(\omega_x, \omega_y) + W_N(\omega_x, \omega_y)} \qquad (12.2\text{-}29)$$

In this case, the power spectra of the reconstructed and ideal images become identical only for a noise-free observation. Although equivalence of the power spectra of the ideal and reconstructed images appears to be an attractive feature of the image power-spectrum filter, it should be realized that it is more important that the Fourier spectra (Fourier transforms) of the ideal and reconstructed images be identical, because Fourier transform pairs are unique, whereas power-spectrum transform pairs are not necessarily unique. Furthermore, the Wiener filter provides a minimum mean-square error estimate, while the image power-spectrum filter may result in a large residual mean-square error.

Cole (19) has also introduced a geometrical mean filter, defined by the transfer function

$$H_R(\omega_x, \omega_y) = [H_D(\omega_x, \omega_y)]^{-S} \left[ \frac{H_D^*(\omega_x, \omega_y)\, W_{F_I}(\omega_x, \omega_y)}{|H_D(\omega_x, \omega_y)|^2\, W_{F_I}(\omega_x, \omega_y) + W_N(\omega_x, \omega_y)} \right]^{1 - S} \qquad (12.2\text{-}30)$$

where $0 \le S \le 1$ is a design parameter. If S = 1/2 and $H_D = H_D^*$, the geometrical mean filter reduces to the image power-spectrum filter of Eq. 12.2-25.

Hunt (20) has developed another parametric restoration filter, called the constrained least-squares filter, whose transfer function is of the form

$$H_R(\omega_x, \omega_y) = \frac{H_D^*(\omega_x, \omega_y)}{|H_D(\omega_x, \omega_y)|^2 + \gamma\, |C(\omega_x, \omega_y)|^2} \qquad (12.2\text{-}31)$$

where γ is a design constant and C(ω_x, ω_y) is a design spectral variable. If γ = 1 and $|C(\omega_x, \omega_y)|^2$ is set equal to the reciprocal of the spectral signal-to-noise power ratio of Eq. 12.2-20, the constrained least-squares filter becomes equivalent to the Wiener filter of Eq. 12.2-19b. The spectral variable can also be used to minimize higher-order derivatives of the estimate.

12.2.4. Application to Discrete Images

The inverse filtering, Wiener filtering, and parametric estimation filtering techniques developed for continuous image fields are often applied to the restoration of
  • 341. 334 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES (a) Original (b) Blurred, b = 2.0 (c) Blurred with noise, SNR = 10.0 FIGURE 12.2-4. Blurred test images. discrete images. The common procedure has been to replace each of the continuous spectral functions involved in the filtering operation by its discrete two-dimensional Fourier transform counterpart. However, care must be taken in this conversion pro- cess so that the discrete filtering operation is an accurate representation of the con- tinuous convolution process and that the discrete form of the restoration filter impulse response accurately models the appropriate continuous filter impulse response. Figures 12.2-4 to 12.2-7 present examples of continuous image spatial filtering techniques by discrete Fourier transform filtering. The original image of Figure 12.2-4a has been blurred with a Gaussian-shaped impulse response with b = 2.0 to obtain the blurred image of Figure 12.2-4b. White Gaussian noise has been added to the blurred image to give the noisy blurred image of Figure l2.2-4c, which has a sig- nal-to-noise ratio of 10.0.
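The experiments of Figures 12.2-4 to 12.2-7 can be approximated with a few lines of Fourier-domain code. The sketch below blurs a synthetic test image with a Gaussian impulse response, adds white Gaussian noise, and applies the additive-noise Wiener filter of Eq. 12.2-19b with a constant noise-to-signal spectral ratio; the synthetic image and the constant ratio are simplifying assumptions rather than the Markovian signal model used in the text's experiments.

```python
import numpy as np

rng = np.random.default_rng(0)
N, b = 256, 2.0

# Synthetic test image: bright rectangle on a dark background.
f = np.full((N, N), 20.0)
f[96:160, 64:192] = 220.0

# Gaussian blur transfer function built from a centered impulse response.
y, x = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
h = np.exp(-(x ** 2 + y ** 2) / (2 * b ** 2))
h /= h.sum()
H_D = np.fft.fft2(np.fft.ifftshift(h))

# Blurred observation with additive white Gaussian noise (cf. Figure 12.2-4c).
noise_sigma = 10.0
g = np.real(np.fft.ifft2(np.fft.fft2(f) * H_D)) + rng.normal(0, noise_sigma, (N, N))

# Additive-noise Wiener filter, Eq. 12.2-19b, with an assumed constant
# noise-to-signal power ratio W_N / W_FI.
nsr = 1e-2
H_R = np.conj(H_D) / (np.abs(H_D) ** 2 + nsr)

f_hat = np.real(np.fft.ifft2(np.fft.fft2(g) * H_R))
print(float(np.mean((f_hat - f) ** 2)))   # mean-square restoration error
```

Setting nsr to zero recovers the plain inverse filter of Eq. 12.2-5, which makes the noise amplification described above easy to reproduce.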
  • 342. PSEUDOINVERSE SPATIAL IMAGE RESTORATION 335 Figure 12.2-5 shows the results of inverse filter image restoration of the blurred and noisy-blurred images. In Figure 12.2-5a, the inverse filter transfer function follows Eq. 12.2-5 (i.e., no high-frequency cutoff). The restored image for the noise- free observation is corrupted completely by the effects of computational error. The computation was performed using 32-bit floating-point arithmetic. In Figure 12.2-5c the inverse filter restoration is performed with a circular cutoff inverse filter as defined by Eq. 12.2-11 with C = 200 for the 512 × 512 pixel noise-free observation. Some faint artifacts are visible in the restoration. In Figure 12.2-5e the cutoff fre- quency is reduced to C = 150 . The restored image appears relatively sharp and free of artifacts. Figure 12.2-5b, d, and f show the result of inverse filtering on the noisy- blurred observed image with varying cutoff frequencies. These restorations illustrate the trade-off between the level of artifacts and the degree of deblurring. Figure 12.2-6 shows the results of Wiener filter image restoration. In all cases, the noise power spectral density is white and the signal power spectral density is circularly symmetric Markovian with a correlation factor ρ . For the noise-free observation, the Wiener filter provides restorations that are free of artifacts but only slightly sharper than the blurred observation. For the noisy observation, the restoration artifacts are less noticeable than for an inverse filter. Figure 12.2-7 presents restorations using the power spectrum filter. For a noise- free observation, the power spectrum filter gives a restoration of similar quality to an inverse filter with a low cutoff frequency. For a noisy observation, the power spectrum filter restorations appear to be grainier than for the Wiener filter. The continuous image field restoration techniques derived in this section are advantageous in that they are relatively simple to understand and to implement using Fourier domain processing. However, these techniques face several important limitations. First, there is no provision for aliasing error effects caused by physical undersampling of the observed image. Second, the formulation inherently assumes that the quadrature spacing of the convolution integral is the same as the physical sampling. Third, the methods only permit restoration for linear, space-invariant deg- radation. Fourth, and perhaps most important, it is difficult to analyze the effects of numerical errors in the restoration process and to develop methods of combatting such errors. For these reasons, it is necessary to turn to the discrete model of a sam- pled blurred image developed in Section 7.2 and then reformulate the restoration problem on a firm numeric basic. This is the subject of the remaining sections of the chapter. 12.3. PSEUDOINVERSE SPATIAL IMAGE RESTORATION The matrix pseudoinverse defined in Chapter 5 can be used for spatial image resto- ration of digital images when it is possible to model the spatial degradation as a vector-space operation on a vector of ideal image points yielding a vector of physi- cal observed samples obtained from the degraded image (21–23).
FIGURE 12.2-5. Inverse filter image restoration on the blurred test images: (a) noise-free, no cutoff; (b) noisy, C = 100; (c) noise-free, C = 200; (d) noisy, C = 75; (e) noise-free, C = 150; (f) noisy, C = 50.
FIGURE 12.2-6. Wiener filter image restoration on the blurred test images, SNR = 10.0: (a) noise-free, ρ = 0.9; (b) noisy, ρ = 0.9; (c) noise-free, ρ = 0.5; (d) noisy, ρ = 0.5; (e) noise-free, ρ = 0.0; (f) noisy, ρ = 0.0.
  • 345. 338 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES (a) Noise-free, r = 0.5 (b) Noisy, r = 0.5 (c) Noisy, r = 0.5 (d ) Noisy, r = 0.0 FIGURE 12.2-7. Power spectrum filter image restoration on the blurred test images; SNR = 10.0. 12.3.1. Pseudoinverse: Image Blur The first application of the pseudoinverse to be considered is that of the restoration of a blurred image described by the vector-space model g = Bf (12.3-1) 2 as derived in Eq. 11.5-6, where g is a P × 1 vector ( P = M ) containing the M × M 2 physical samples of the blurred image, f is a Q × 1 vector ( Q = N ) containing N × N points of the ideal image and B is the P × Q matrix whose elements are points
on the impulse function. If the physical sample period and the quadrature representation period are identical, P will be smaller than Q, and the system of equations will be underdetermined. By oversampling the blurred image, it is possible to force P > Q or even P = Q. In either case, the system of equations is called overdetermined. An overdetermined set of equations can also be obtained if some of the elements of the ideal image vector can be specified through a priori knowledge. For example, if the ideal image is known to contain a limited size object against a black background (zero luminance), the elements of f beyond the limits may be set to zero.

In discrete form, the restoration problem reduces to finding a solution $\hat{f}$ to Eq. 12.3-1 in the sense that

$$B\hat{f} = g \qquad (12.3\text{-}2)$$

Because the vector g is determined by physical sampling and the elements of B are specified independently by system modeling, there is no guarantee that an $\hat{f}$ even exists to satisfy Eq. 12.3-2. If there is a solution, the system of equations is said to be consistent; otherwise, the system of equations is inconsistent. In Appendix 1 it is shown that inconsistency in the set of equations of Eq. 12.3-1 can be characterized as

$$g = Bf + e\{f\} \qquad (12.3\text{-}3)$$

where e{f} is a vector of remainder elements whose value depends on f. If the set of equations is inconsistent, a solution of the form

$$\hat{f} = Wg \qquad (12.3\text{-}4)$$

is sought for which the linear operator W minimizes the least-squares modeling error

$$E_M = [e\{\hat{f}\}]^T [e\{\hat{f}\}] = [g - B\hat{f}]^T [g - B\hat{f}] \qquad (12.3\text{-}5)$$

This error is shown, in Appendix 1, to be minimized when the operator $W = B^{\$}$ is set equal to the least-squares inverse of B. The least-squares inverse is not necessarily unique. It is also proved in Appendix 1 that the generalized inverse operator $W = B^{-}$, which is a special case of the least-squares inverse, is unique, minimizes the least-squares modeling error, and simultaneously provides a minimum norm estimate. That is, the sum of the squares of $\hat{f}$ is a minimum over all possible minimum least-square error estimates. For the restoration of image blur, the generalized inverse provides a lowest-intensity restored image.
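In practice, the generalized inverse estimate of Eq. 12.3-4 can be computed with a standard pseudoinverse routine. The sketch below sets up a small one-dimensional blur matrix B and forms the minimum-norm, least-squares estimate; the impulse response values and dimensions are illustrative assumptions, not data from the text.

```python
import numpy as np

# One-dimensional underdetermined blur model: M = 7 observations of an
# N = 9 point ideal vector blurred by a length L = 3 impulse response.
N, L = 9, 3
h = np.array([0.25, 0.5, 0.25])        # assumed blur impulse response
M = N - L + 1

# Blur matrix B (P x Q with P = M, Q = N): each row is a shifted copy of h.
B = np.zeros((M, N))
for i in range(M):
    B[i, i:i + L] = h

f = np.array([10, 10, 10, 245, 245, 245, 10, 10, 10], dtype=float)
g = B @ f                              # noise-free observation, Eq. 12.3-1

# Generalized inverse estimate (Eq. 12.3-4 with W = B^-): the unique
# minimum-norm, least-squares solution.
B_pinv = np.linalg.pinv(B)
f_hat = B_pinv @ g

print(np.round(f_hat, 2))
print("condition number C{B}:", round(np.linalg.cond(B), 2))
```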
  • 347. 340 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES If Eq. 12.3-1 represents a consistent set of equations, one or more solutions may exist for Eq. 12.3-2. The solution commonly chosen is the estimate that minimizes the least-squares estimation error defined in the equivalent forms EE = ( f – ˆ) ( f – ˆ) T f f (12.3-6a) E E = tr { ( f – f ) ( f – ˆ ) } ˆ T f (12.3-6b) In Appendix 1 it is proved that the estimation error is minimum for a generalized inverse (W = B–) estimate. The resultant residual estimation error then becomes T – E E = f [ I – [ B B ] ]f (12.3-7a) or T – E E = tr { ff [ I – [ B B ] ] } (12.3-7b) The estimate is perfect, of course, if B–B = I. Thus, it is seen that the generalized inverse is an optimal solution, in the sense defined previously, for both consistent and inconsistent sets of equations modeling image blur. From Eq. 5.5-5, the generalized inverse has been found to be algebra- ically equivalent to – T –1 T B = [B B] B (12.3-8a) if the P × Q matrix B is of rank Q. If B is of rank P, then – T T –1 B = B [B B] (12.3-8b) For a consistent set of equations and a rank Q generalized inverse, the estimate –1 ˆ = B – g = B – Bf = [ [ B T B ] B T ]Bf = f f (12.3-9) is obviously perfect. However, in all other cases, a residual estimation error may occur. Clearly, it would be desirable to deal with an overdetermined blur matrix of rank Q in order to achieve a perfect estimate. Unfortunately, this situation is rarely
  • 348. PSEUDOINVERSE SPATIAL IMAGE RESTORATION 341 achieved in image restoration. Oversampling the blurred image can produce an overdetermined set of equations ( P > Q ), but the rank of the blur matrix is likely to be much less than Q because the rows of the blur matrix will become more linearly dependent with finer sampling. A major problem in application of the generalized inverse to image restoration is dimensionality. The generalized inverse is a Q × P matrix where P is equal to the number of pixel observations and Q is equal to the number of pixels to be estimated in an image. It is usually not computationally feasible to use the generalized inverse operator, defined by Eq. 12.3-8, over large images because of difficulties in reliably computing the generalized inverse and the large number of vector multiplications associated with Eq. 12.3-4. Computational savings can be realized if the blur matrix B is separable such that B = BC ⊗ B R (12.3-10) where BC and BR are column and row blur operators. In this case, the generalized inverse is separable in the sense that – – – B = BC ⊗ BR (12.3-11) – – where B C and B R are generalized inverses of BC and BR, respectively. Thus, when the blur matrix is of separable form, it becomes possible to form the estimate of the image by sequentially applying the generalized inverse of the row blur matrix to each row of the observed image array and then using the column generalized inverse operator on each column of the array. Pseudoinverse restoration of large images can be accomplished in an approxi- mate fashion by a block mode restoration process, similar to the block mode filter- ing technique of Section 9.3, in which the blurred image is partitioned into small blocks that are restored individually. It is wise to overlap the blocks and accept only the pixel estimates in the center of each restored block because these pixels exhibit the least uncertainty. Section 12.3.3 describes an efficient computational algorithm for pseudoinverse restoration for space-invariant blur. Figure l2.3-1a shows a blurred image based on the model of Figure 11.5-3. Figure 12.3-1b shows a restored image using generalized inverse image restoration. In this example, the observation is noise free and the blur impulse response function is Gaussian shaped, as defined in Eq. 11.5-8, with bR = bC = 1.2. Only the center 8 × 8 region of the 12 × 12 blurred picture is displayed, zoomed to an image size of 256 × 256 pixels. The restored image appears to be visually improved compared to the blurred image, but the restoration is not identical to the original unblurred image of Figure 11.5-3a. The figure also gives the percentage least-squares error (PLSE) as defined in Appendix 3, between the blurred image and the original unblurred image, and between the restored image and the original. The restored image has less error than the blurred image.
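When the blur matrix is separable as in Eq. 12.3-10, the estimate can be formed without ever building the full direct-product operator. The sketch below applies the row and column generalized inverses directly to the observed array; the small Gaussian-shaped row and column blurs and the array sizes are hypothetical.

```python
import numpy as np

def blur_matrix(n_in, spread):
    # Truncated Gaussian blur over a window of 5 quadrature points.
    L = 5
    taps = np.exp(-0.5 * ((np.arange(L) - L // 2) / spread) ** 2)
    taps /= taps.sum()
    m_out = n_in - L + 1
    B = np.zeros((m_out, n_in))
    for i in range(m_out):
        B[i, i:i + L] = taps
    return B

N = 12
B_R = blur_matrix(N, 1.2)      # row blur operator
B_C = blur_matrix(N, 1.2)      # column blur operator

F = np.full((N, N), 10.0)
F[4:8, 4:8] = 245.0            # ideal image: bright square on a dark background
G = B_C @ F @ B_R.T            # array form of the separable blur model

# Separable pseudoinverse restoration (Eq. 12.3-11): the column generalized
# inverse is applied to each column, the row generalized inverse to each row.
F_hat = np.linalg.pinv(B_C) @ G @ np.linalg.pinv(B_R).T
print(F_hat.shape)
```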
  • 349. 342 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES (a) Blurred, PLSE = 4.97% (b) Restored, PLSE = 1.41% FIGURE 12.3-1. Pseudoinverse image restoration for test image blurred with Gaussian shape impulse response. M = 8, N = 12, L = 5; bR = bC = 1.2; noise-free observation. 12.3.2. Pseudoinverse: Image Blur Plus Additive Noise In many imaging systems, an ideal image is subject to both blur and additive noise; the resulting vector-space model takes the form g = Bf + n (12.3-12) where g and n are P × 1 vectors of the observed image field and noise field, respec- tively, f is a Q × 1 vector of ideal image points, and B is a P × Q blur matrix. The vector n is composed of two additive components: samples of an additive external noise process and elements of the vector difference ( g – Bf ) arising from modeling errors in the formulation of B. As a result of the noise contribution, there may be no vector solutions ˆ that satisfy Eq. 12.3-12. However, as indicated in Appendix 1, the f generalized inverse B– can be utilized to determine a least-squares error, minimum norm estimate. In the absence of modeling error, the estimate ˆ = B – g = B – Bf + B – n f (12.3-13) – differs from the ideal image because of the additive noise contribution B n. Also, – for the underdetermined model, B B will not be an identity matrix. If B is an over- – determined rank Q matrix, as defined in Eq. 12.3-8a, then B B = I , and the resulting – estimate is equal to the original image vector f plus a perturbation vector ∆ f = B n . The perturbation error in the estimate can be measured as the ratio of the vector
norm of the perturbation to the vector norm of the estimate. It can be shown (24, p. 52) that the relative error is subject to the bound

$$\frac{\|\Delta f\|}{\|f\|} < \|B^{-}\| \cdot \|B\| \cdot \frac{\|n\|}{\|g\|} \qquad (12.3\text{-}14)$$

The product $\|B^{-}\| \cdot \|B\|$, which is called the condition number C{B} of B, determines the relative error in the estimate in terms of the ratio of the vector norm of the noise to the vector norm of the observation. The condition number can be computed directly or found in terms of the ratio

$$C\{B\} = \frac{W_1}{W_N} \qquad (12.3\text{-}15)$$

of the largest W_1 to smallest W_N singular values of B. The noise perturbation error for the underdetermined matrix B is also governed by Eqs. 12.3-14 and 12.3-15 if W_N is defined to be the smallest nonzero singular value of B (25, p. 41). Obviously, the larger the condition number of the blur matrix, the greater will be the sensitivity to noise perturbations.

Figure 12.3-2 contains image restoration examples for a Gaussian-shaped blur function for several values of the blur standard deviation and a noise variance of 10.0 on an amplitude scale of 0.0 to 255.0. As expected, observation noise degrades the restoration. Also as expected, the restoration for a moderate degree of blur is worse than the restoration for less blur. However, this trend does not continue; the restoration for severe blur is actually better in a subjective sense than for moderate blur. This seemingly anomalous behavior, which results from spatial truncation of the point-spread function, can be explained in terms of the condition number of the blur matrix.

Figure 12.3-3 is a plot of the condition number of the blur matrix of the previous examples as a function of the blur coefficient (21). For small amounts of blur, the condition number is low. A maximum is attained for moderate blur, followed by a decrease in the curve for increasing values of the blur coefficient. The curve tends to stabilize as the blur coefficient approaches infinity. This curve provides an explanation for the previous experimental results. In the restoration operation, the blur impulse response is spatially truncated over a square region of 5 × 5 quadrature points. As the blur coefficient increases, for fixed M and N, the blur impulse response becomes increasingly wider, and its tails become truncated to a greater extent. In the limit, the nonzero elements in the blur matrix become constant values, and the condition number assumes a constant level. For small values of the blur coefficient, the truncation effect is negligible, and the condition number curve follows an ascending path toward infinity with the asymptotic value obtained for a smoothly represented blur impulse response. As the blur factor increases, the number of nonzero elements in the blur matrix increases, and the condition number stabilizes to a constant value. In effect, a trade-off exists between numerical errors caused by ill-conditioning and modeling accuracy. Although this conclusion
FIGURE 12.3-2. Pseudoinverse image restoration for test image blurred with Gaussian-shaped impulse response; M = 8, N = 12, L = 5; noisy observation, Var = 10.0: (a) blurred, bR = bC = 0.6, PLSE = 1.30%; (b) restored, PLSE = 0.21%; (c) blurred, bR = bC = 1.2, PLSE = 4.91%; (d) restored, PLSE = 2695.81%; (e) blurred, bR = bC = 50.0, PLSE = 7.99%; (f) restored, PLSE = 7.29%.
  • 352. PSEUDOINVERSE SPATIAL IMAGE RESTORATION 345 FIGURE 12.3-3. Condition number curve. is formulated on the basis of a particular degradation model, the inference seems to be more general because the inverse of the integral operator that describes the blur is unbounded. Therefore, the closer the discrete model follows the continuous model, the greater the degree of ill-conditioning. A move in the opposite direction reduces singularity but imposes modeling errors. This inevitable dilemma can only be bro- ken with the intervention of correct a priori knowledge about the original image. 12.3.3. Pseudoinverse Computational Algorithms Efficient computational algorithms have been developed by Pratt and Davarian (22) for pseudoinverse image restoration for space-invariant blur. To simplify the expla- nation of these algorithms, consideration will initially be limited to a one-dimen- sional example. Let the N × 1 vector fT and the M × 1 vector g T be formed by selecting the center portions of f and g, respectively. The truncated vectors are obtained by dropping L - 1 elements at each end of the appropriate vector. Figure 12.3-4a illustrates the rela- tionships of all vectors for N = 9 original vector points, M = 7 observations and an impulse response of length L = 3. The elements f T and g T are entries in the adjoint model q E = Cf E + n E (12.3-16a)
FIGURE 12.3-4. One-dimensional sampled continuous convolution and discrete convolution.

where the extended vectors $q_E$, $f_E$ and $n_E$ are defined in correspondence with

$$\begin{bmatrix} g \\ 0 \end{bmatrix} = C \begin{bmatrix} f_T \\ 0 \end{bmatrix} + \begin{bmatrix} n_T \\ 0 \end{bmatrix} \qquad (12.3\text{-}16b)$$

where g is an M × 1 vector, $f_T$ and $n_T$ are K × 1 vectors, and C is a J × J matrix. As noted in Figure 12.3-4b, the vector q is identical to the image observation g over its R = M − 2(L − 1) center elements. The outer elements of q can be approximated by

$$q \approx \tilde{q} = Eg \qquad (12.3\text{-}17)$$

where E, called an extraction weighting matrix, is defined as

$$E = \begin{bmatrix} a & 0 & 0 \\ 0 & I & 0 \\ 0 & 0 & b \end{bmatrix} \qquad (12.3\text{-}18)$$

where a and b are L × L submatrices that perform a windowing function similar to that described in Section 9.4.2 (22). Combining Eqs. 12.3-17 and 12.3-18, an estimate of $f_T$ can be obtained from

$$\hat{f}_E = C^{-1} \hat{q}_E \qquad (12.3\text{-}19)$$
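Because the matrix C in Eq. 12.3-19 is circulant, the estimate can be formed with Fourier domain processing, as discussed in the next paragraph. The sketch below restores a one-dimensional blurred vector by Fourier-domain division; the Gaussian blur taps and the threshold used to suppress ill-conditioned frequency bins are illustrative assumptions, and the extraction weighting step of Eqs. 12.3-17 and 12.3-18 is omitted because the simulated observation here is exactly circulant.

```python
import numpy as np

J, L = 64, 9                                 # extended vector length, blur length
h = np.exp(-0.5 * ((np.arange(L) - L // 2) / 1.5) ** 2)
h /= h.sum()                                 # assumed Gaussian blur impulse response

# Extended ideal vector f_E: truncated image vector followed by zero padding,
# as in the adjoint model q_E = C f_E (Eq. 12.3-16).
K = J - L + 1
f_T = np.full(K, 10.0)
f_T[20:30] = 245.0
f_E = np.concatenate([f_T, np.zeros(J - K)])

# Circulant blur: multiplication by the blur transfer function in the Fourier domain.
H = np.fft.fft(h, n=J)
q_E = np.real(np.fft.ifft(np.fft.fft(f_E) * H))

# Fourier-domain pseudoinverse of Eq. 12.3-19: divide by H where it is not
# vanishingly small and zero the remaining (ill-conditioned) frequency bins.
threshold = 1e-3
H_safe = np.where(np.abs(H) > threshold, H, 1.0)
F_hat = np.where(np.abs(H) > threshold, np.fft.fft(q_E) / H_safe, 0.0)
f_hat_E = np.real(np.fft.ifft(F_hat))
print(np.round(f_hat_E[:K], 1))
```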
FIGURE 12.3-5. Pseudoinverse image restoration for a small degree of horizontal blur, bR = 1.5: (a) original image vectors, f; (b) truncated image vectors, fT; (c) observation vectors, g; (d) windowed observation vectors, q̂; (e) restoration without windowing, f̂T; (f) restoration with windowing, f̂T.
  • 355. 348 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES Equation 12.3-19 can be solved efficiently using Fourier domain convolution techniques, as described in Section 9.3. Computation of the pseudoinverse by Fou- 2 rier processing requires on the order of J ( 1 + 4 log 2 J ) operations in two 2 2 dimensions; spatial domain computation requires about M N operations. As an example, for M = 256 and L = 17, the computational savings are nearly 1750:1 (22). Figure 12.3-5 is a computer simulation example of the operation of the pseudoin- verse image restoration algorithm for one-dimensional blur of an image. In the first step of the simulation, the center K pixels of the original image are extracted to form the set of truncated image vectors f T shown in Figure 12.3-5b. Next, the truncated image vectors are subjected to a simulated blur with a Gaussian-shaped impulse response with bR = 1.5 to produce the observation of Figure 12.3-5c. Figure 12.3-5d shows the result of the extraction operation on the observation. Restoration results without and with the extraction weighting operator E are presented in Figure 12.3-5e and f, respectively. These results graphically illustrate the importance of the ∧ (a) Observation, g (b) Restoration, fT Gaussian blur, bR = 2.0 ∧ (c ) Observation, g (d ) Restoration, fT Uniform motion blur, L = 15.0 FIGURE 12.3-6. Pseudoinverse image restoration for moderate and high degrees of horizon- tal blur.
  • 356. SVD PSEUDOINVERSE SPATIAL IMAGE RESTORATION 349 extraction operation. Without weighting, errors at the observation boundary completely destroy the estimate in the boundary region, but with weighting the restoration is subjectively satisfying, and the restoration error is significantly reduced. Figure 12.3-6 shows simulation results for the experiment of Figure 12.3-5 when the degree of blur is increased by setting bR = 2.0. The higher degree of blur greatly increases the ill-conditioning of the blur matrix, and the residual error in formation of the modified observation after weighting leads to the disappointing estimate of Figure 12.3-6b. Figure 12.3-6c and d illustrate the restoration improve- ment obtained with the pseudoinverse algorithm for horizontal image motion blur. In this example, the blur impulse response is constant, and the corresponding blur matrix is better conditioned than the blur matrix for Gaussian image blur. 12.4. SVD PSEUDOINVERSE SPATIAL IMAGE RESTORATION In Appendix 1 it is shown that any matrix can be decomposed into a series of eigen- matrices by the technique of singular value decomposition. For image restoration, this concept has been extended (26–29) to the eigendecomposition of blur matrices in the imaging model g = Bf + n (12.4-1) From Eq. A1.2-3, the blur matrix B may be expressed as 1⁄2 T B = UΛ Λ V (12.4-2) where the P × P matrix U and the Q × Q matrix V are unitary matrices composed of the eigenvectors of BBT and BTB, respectively and Λ is a P × Q matrix whose diag- onal terms λ ( i ) contain the eigenvalues of BBT and BTB. As a consequence of the orthogonality of U and V, it is possible to express the blur matrix in the series form R 1⁄2 ∑ [ λ( i )] T B = u i vi (12.4-3) i=1 where ui and v i are the ith columns of U and V, respectively, and R is the rank of the matrix B. From Eq. 12.4-2, because U and V are unitary matrices, the generalized inverse of B is R 1⁄2 –1 ⁄ 2 ∑ [ λ( i ) ] – T T B = VΛ Λ U = vi ui (12.4-4) i=1 Figure 12.4-1 shows an example of the SVD decomposition of a blur matrix. The generalized inverse estimate can then be expressed as
FIGURE 12.4-1. SVD decomposition of a blur matrix for bR = 2.0, M = 8, N = 16, L = 9: (a) blur matrix B; (b)–(i) eigenmatrices u_i v_i^T for i = 1, …, 8, with λ(1) = 0.871, λ(2) = 0.573, λ(3) = 0.285, λ(4) = 0.108, λ(5) = 0.034, λ(6) = 0.014, λ(7) = 0.011, λ(8) = 0.010.
$$\hat{f} = B^{-} g = V \Lambda^{-1/2} U^T g \qquad (12.4\text{-}5a)$$

or, equivalently,

$$\hat{f} = \sum_{i=1}^{R} [\lambda(i)]^{-1/2}\, v_i u_i^T g = \sum_{i=1}^{R} [\lambda(i)]^{-1/2}\, [u_i^T g]\, v_i \qquad (12.4\text{-}5b)$$

recognizing the fact that the inner product $u_i^T g$ is a scalar. Equation 12.4-5 provides the basis for sequential estimation; the kth estimate of f in a sequence of estimates is equal to

$$\hat{f}_k = \hat{f}_{k-1} + [\lambda(k)]^{-1/2}\, [u_k^T g]\, v_k \qquad (12.4\text{-}6)$$

One of the principal advantages of the sequential formulation is that problems of ill-conditioning generally occur only for the higher-order singular values. Thus, it is possible to terminate the expansion interactively before numerical problems occur.

Figure 12.4-2 shows an example of sequential SVD restoration for the underdetermined model example of Figure 11.5-3 with a poorly conditioned Gaussian blur matrix. A one-step pseudoinverse would have resulted in the final image estimate, which is totally overwhelmed by numerical errors. The sixth step, which is the best subjective restoration, offers a considerable improvement over the blurred original, but the lowest least-squares error occurs for three singular values.

The major limitation of the SVD image restoration formulation of Eqs. 12.4-5 and 12.4-6 is computational. The eigenvectors $u_i$ and $v_i$ must first be determined for the matrices BB^T and B^TB. Then the vector computations of Eq. 12.4-5 or 12.4-6 must be performed. Even if B is direct-product separable, permitting separable row and column SVD pseudoinversion, the computational task is staggering in the general case.

The pseudoinverse computational algorithm described in the preceding section can be adapted for SVD image restoration in the special case of space-invariant blur (23). From the adjoint model of Eq. 12.3-16 given by

$$q_E = C f_E + n_E \qquad (12.4\text{-}7)$$

the circulant matrix C can be expanded in SVD form as

$$C = X \Delta^{1/2} Y^{*T} \qquad (12.4\text{-}8)$$

where X and Y are unitary matrices defined by
• 359. 352 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES FIGURE 12.4-2. SVD restoration for test image blurred with a Gaussian-shaped impulse response; bR = bC = 1.2, M = 8, N = 12, L = 5; noisy observation, Var = 10.0. Panels (a) through (h) show restorations retaining 8 down to 1 singular values, with PLSE = 2695.81%, 148.93%, 6.88%, 3.31%, 3.06%, 3.05%, 9.52%, and 9.52%, respectively.
  • 360. SVD PSEUDOINVERSE SPATIAL IMAGE RESTORATION 353 T T X [ CC ]X∗ = ∆ (12.4-9a) T T Y [ C C ]Y∗ = ∆ (12.4-9b) Because C is circulant, CCT is also circulant. Therefore X and Y must be equivalent –1 to the Fourier transform matrix A or A because the Fourier matrix produces a diagonalization of a circulant matrix. For purposes of standardization, let –1 X = Y = A . As a consequence, the eigenvectors x i = y i , which are rows of X and Y, are actually the complex exponential basis functions  2πi  x k ( j ) = exp  ------- ( k – 1 ) ( j – 1 )  * - (12.4-10)  J  of a Fourier transform for 1 ≤ j, k ≤ J . Furthermore, T ∆ = C C∗ (12.4-11) where C is the Fourier domain circular area convolution matrix. Then, in correspon- dence with Eq. 12.4-5 ˆ –1 –1 ⁄ 2 fE = A Λ ˜ Aq E (12.4-12) ˜ where q E is the modified blurred image observation of Eqs. 12.3-19 and 12.3-20. Equation 12.4-12 should be recognized as being a Fourier domain pseudoinverse estimate. Sequential SVD restoration, analogous to the procedure of Eq. 12.4-6, can –1 ⁄ 2 be obtained by replacing the SVD pseudoinverse matrix ∆ of Eq. 12.4-12 by the operator –1 ⁄ 2 [ ∆T ( 1 ) ] 0 · –1 ⁄ 2 · [ ∆T( 2 ) ] · · –1 ⁄ 2 … ∆T = · –1 ⁄ 2 · (12.4-13) [ ∆T ( T ) ] · · 0 · · … 0 0
  • 361. 354 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES (a) Blurred observation (b) Restoration, T = 58 (c ) Restoration, T = 60 FIGURE 12.4-3. Sequential SVD pseudoinverse image restoration for horizontal Gaussian blur, bR = 3.0, L = 23, J = 256. Complete truncation of the high-frequency terms to avoid ill-conditioning effects may not be necessary in all situations. As an alternative to truncation, the diagonal –1 ⁄ 2 zero elements can be replaced by [ ∆T ( T ) ] or perhaps by some sequence that declines in value as a function of frequency. This concept is actually analogous to the truncated inverse filtering technique defined by Eq. 12.2-11 for continuous image fields. Figure 12.4-3 shows an example of SVD pseudoinverse image restoration for one-dimensional Gaussian image blur with bR = 3.0. It should be noted that the res- toration attempt with the standard pseudoinverse shown in Figure 12.3-6b was sub- ject to severe ill-conditioning errors at a blur spread of bR = 2.0.
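The sequential expansion of Eqs. 12.4-5b and 12.4-6 is straightforward to prototype numerically. The Python sketch below is illustrative only and is not drawn from the text: the small one-dimensional Gaussian blur system and its parameters are hypothetical, and NumPy's singular values play the role of [λ(i)]^{1/2} in Eq. 12.4-3. Printing the restoration error as a function of the number of retained singular values mimics the truncation study of Figure 12.4-2.

import numpy as np

def truncated_svd_restore(B, g, n_terms):
    """Sequential SVD pseudoinverse estimate of Eq. 12.4-6,
    truncated after n_terms singular values."""
    # numpy returns B = U @ diag(s) @ Vt, so s[i] corresponds to [lambda(i)]**0.5
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    f_hat = np.zeros(B.shape[1])
    for k in range(n_terms):
        if s[k] <= 0:            # rank deficiency: stop the expansion
            break
        f_hat += (U[:, k] @ g) / s[k] * Vt[k, :]
    return f_hat

# Hypothetical test: a 1-D Gaussian blur matrix (P x Q, underdetermined)
Q, L = 16, 5
kernel = np.exp(-0.5 * ((np.arange(L) - L // 2) / 1.2) ** 2)
kernel /= kernel.sum()
P = Q - L + 1
B = np.array([np.pad(kernel, (i, Q - L - i)) for i in range(P)])

rng = np.random.default_rng(0)
f = rng.uniform(0.0, 1.0, Q)                 # unknown ideal image vector
g = B @ f + rng.normal(0.0, 1e-3, P)         # blurred, noisy observation

# Relative restoration error versus number of retained singular values
for T in range(1, P + 1):
    f_hat = truncated_svd_restore(B, g, T)
    print(T, np.linalg.norm(f - f_hat) / np.linalg.norm(f))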
  • 362. STATISTICAL ESTIMATION SPATIAL IMAGE RESTORATION 355 12.5. STATISTICAL ESTIMATION SPATIAL IMAGE RESTORATION A fundamental limitation of pseudoinverse restoration techniques is that observation noise may lead to severe numerical instability and render the image estimate unus- able. This problem can be alleviated in some instances by statistical restoration techniques that incorporate some a priori statistical knowledge of the observation noise (21). 12.5.1. Regression Spatial Image Restoration Consider the vector-space model g = Bf + n (12.5-1) for a blurred image plus additive noise in which B is a P × Q blur matrix and the noise is assumed to be zero mean with known covariance matrix Kn. The regression method seeks to form an estimate ˆ f = Wg (12.5-2) where W is a restoration matrix that minimizes the weighted error measure ˆ ˆ T –1 ˆ Θ { f } = [ g – Bf ] K n [ g – Bf ] (12.5-3) Minimization of the restoration error can be accomplished by the classical method ˆ ˆ of setting the partial derivative of Θ { f } with respect to f to zero. In the underdeter- mined case, for which P < Q , it can be shown (30) that the minimum norm estimate regression operator is –1 – –1 W = [K B] K (12.5-4) where K is a matrix obtained from the spectral factorization T K n = KK (12.5-5) 2 of the noise covariance matrix K n. For white noise, K = σ n I, and the regression operator assumes the form of a rank P generalized inverse for an underdetermined system as given by Eq. 12.3-8b.
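For intuition, the regression operator of Eq. 12.5-4 can be prototyped by factoring the noise covariance as in Eq. 12.5-5, whitening the observation, and applying a generalized inverse. The sketch below is a minimal illustration under assumed data; the blur matrix, noise level, and image vector are hypothetical, and the Moore–Penrose pseudoinverse is used as the generalized inverse.

import numpy as np

def regression_restore(B, g, Kn):
    """Minimum-norm regression estimate in the spirit of Eqs. 12.5-2 to 12.5-5.

    Kn is factored as Kn = K @ K.T (Cholesky), the observation is whitened
    by K^{-1}, and a generalized inverse of the whitened blur matrix is applied.
    """
    K = np.linalg.cholesky(Kn)           # spectral factor of Eq. 12.5-5
    Kinv = np.linalg.inv(K)
    W = np.linalg.pinv(Kinv @ B) @ Kinv  # W = [K^{-1} B]^- K^{-1}, Eq. 12.5-4
    return W @ g

# Hypothetical example: white noise, so the operator reduces to a
# generalized inverse, as noted at the end of Section 12.5.1
P, Q = 12, 16
rng = np.random.default_rng(1)
B = rng.normal(size=(P, Q))
f = rng.uniform(size=Q)
sigma_n = 0.01
g = B @ f + rng.normal(0.0, sigma_n, P)
f_hat = regression_restore(B, g, sigma_n**2 * np.eye(P))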
  • 363. 356 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES 12.5.2. Wiener Estimation Spatial Image Restoration With the regression technique of spatial image restoration, the noise field is modeled as a sample of a two-dimensional random process with a known mean and covari- ance function. Wiener estimation techniques assume, in addition, that the ideal image is also a sample of a two-dimensional random process with known first and second moments (21,22,31). Wiener Estimation: General Case. Consider the general discrete model of Figure 12.5-1 in which a Q × 1 image vector f is subject to some unspecified type of point and spatial degradation resulting in the P × 1 vector of observations g. An estimate of f is formed by the linear operation ˆ = Wg + b f (12.5-6) where W is a Q × P restoration matrix and b is a Q × 1 bias vector. The objective of Wiener estimation is to choose W and b to minimize the mean-square restoration error, which may be defined as E = E{[f – ˆ] [f – ˆ] } T f f (12.5-7a) or E = tr { E { [ f – ˆ ] [ f – ˆ ] } } T f f (12.5-7b) Equation 12.5-7a expresses the error in inner-product form as the sum of the squares of the elements of the error vector [ f – ˆ ], while Eq. 12.5-7b forms the covariance f matrix of the error, and then sums together its variance terms (diagonal elements) by the trace operation. Minimization of Eq. 12.5-7 in either of its forms can be accomplished by differentiation of E with respect to ˆ . An alternative approach, f FIGURE 12.5-1. Wiener estimation for spatial image restoration.
• 364. STATISTICAL ESTIMATION SPATIAL IMAGE RESTORATION 357

which is of quite general utility, is to employ the orthogonality principle (32, p. 219) to determine the values of W and b that minimize the mean-square error. In the context of image restoration, the orthogonality principle specifies two necessary and sufficient conditions for the minimization of the mean-square restoration error:

1. The expected value of the image estimate must equal the expected value of the image:

E{f̂} = E{f}  (12.5-8)

2. The restoration error must be orthogonal to the observation about its mean:

E{[f − f̂][g − E{g}]^T} = 0  (12.5-9)

From condition 1, one obtains

b = E{f} − W E{g}  (12.5-10)

and from condition 2,

E{[Wg + b − f][g − E{g}]^T} = 0  (12.5-11)

Upon substitution for the bias vector b from Eq. 12.5-10 and simplification, Eq. 12.5-11 yields

W = K_fg [K_gg]^{−1}  (12.5-12)

where K_gg is the P × P covariance matrix of the observation vector (assumed nonsingular) and K_fg is the Q × P cross-covariance matrix between the image and observation vectors. Thus, the optimal bias vector b and restoration matrix W may be determined directly in terms of the first and second joint moments of the ideal image and observation vectors. It should be noted that these solutions apply for nonlinear and space-variant degradations. Subsequent sections describe applications of Wiener estimation to specific restoration models.

Wiener Estimation: Image Blur with Additive Noise. For the discrete model of a blurred image subject to additive noise given by

g = Bf + n  (12.5-13)
• 365. 358 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES

the Wiener estimator is composed of a bias term

b = E{f} − W E{g} = E{f} − WB E{f} − W E{n}  (12.5-14)

and a matrix operator

W = K_fg [K_gg]^{−1} = K_f B^T [B K_f B^T + K_n]^{−1}  (12.5-15)

If the ideal image field is assumed uncorrelated, K_f = σ_f^2 I, where σ_f^2 represents the image energy. Equation 12.5-15 then reduces to

W = σ_f^2 B^T [σ_f^2 B B^T + K_n]^{−1}  (12.5-16)

For a white-noise process with energy σ_n^2, the Wiener filter matrix becomes

W = B^T [B B^T + (σ_n^2 ⁄ σ_f^2) I]^{−1}  (12.5-17)

As the ratio of image energy to noise energy (σ_f^2 ⁄ σ_n^2) approaches infinity, the Wiener estimator of Eq. 12.5-17 becomes equivalent to the generalized inverse estimator.

Figure 12.5-2 shows restoration examples for the model of Figure 11.5-3 for a Gaussian-shaped blur function. Wiener restorations of large size images are given in Figure 12.5-3 using a fast computational algorithm developed by Pratt and Davarian (22). In the example of Figure 12.5-3a, illustrating horizontal image motion blur, the impulse response is of rectangular shape of length L = 11. The center pixels have been restored and replaced within the context of the blurred image to show the visual restoration improvement. The noise level and blur impulse response of the electron microscope original image of Figure 12.5-3c were estimated directly from the photographic transparency using techniques to be described in Section 12.7. The parameters were then utilized to restore the center pixel region, which was then replaced in the context of the blurred original.

12.6. CONSTRAINED IMAGE RESTORATION

The previously described image restoration techniques have treated images as arrays of numbers. They have not considered that a restored natural image should be subject to physical constraints. A restored natural image should be spatially smooth and strictly positive in amplitude.
• 366. CONSTRAINED IMAGE RESTORATION 359 FIGURE 12.5-2. Wiener estimation for test image blurred with Gaussian-shaped impulse response; M = 8, N = 12, L = 5. Blurred and restored results: (a, b) bR = bC = 1.2, Var = 10.0, r = 0.75, SNR = 200.0, with PLSE = 4.91% blurred and 3.71% restored; (c, d) bR = bC = 50.0, Var = 10.0, r = 0.75, SNR = 200.0, with PLSE = 7.99% blurred and 4.20% restored; (e, f) bR = bC = 50.0, Var = 100.0, r = 0.75, SNR = 60.0, with PLSE = 7.93% blurred and 4.74% restored.
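A direct matrix implementation of the Wiener operator of Eqs. 12.5-14 and 12.5-15 is easy to sketch for small vectors. The Python fragment below is illustrative only; the first-order Markov image covariance (with correlation r), the stand-in blur matrix, and the noise level are assumptions chosen for the example rather than the parameters used to generate Figure 12.5-2.

import numpy as np

def wiener_restore(B, g, Kf, Kn, mean_f=None, mean_n=None):
    """Wiener estimate f_hat = W g + b per Eqs. 12.5-14 and 12.5-15 (sketch)."""
    P, Q = B.shape
    mean_f = np.zeros(Q) if mean_f is None else mean_f
    mean_n = np.zeros(P) if mean_n is None else mean_n
    W = Kf @ B.T @ np.linalg.inv(B @ Kf @ B.T + Kn)   # Eq. 12.5-15
    b = mean_f - W @ (B @ mean_f + mean_n)            # Eq. 12.5-14
    return W @ g + b

# Hypothetical first-order Markov image covariance with correlation r,
# in the spirit of the r = 0.75 parameter quoted in Figure 12.5-2
Q, P = 16, 12
r, sigma_f, sigma_n = 0.75, 1.0, 0.1
Kf = sigma_f**2 * r ** np.abs(np.subtract.outer(np.arange(Q), np.arange(Q)))
Kn = sigma_n**2 * np.eye(P)

rng = np.random.default_rng(2)
B = rng.normal(size=(P, Q)) / Q        # stand-in blur matrix
f = rng.multivariate_normal(np.zeros(Q), Kf)
g = B @ f + rng.normal(0.0, sigma_n, P)
f_hat = wiener_restore(B, g, Kf, Kn)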
  • 367. 360 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES (a) Observation (b) Restoration (c) Observation (d ) Restoration FIGURE 12.5-3. Wiener image restoration. 12.6.1. Smoothing Methods Smoothing and regularization techniques (33–35) have been used in an attempt to overcome the ill-conditioning problems associated with image restoration. Basi- cally, these methods attempt to force smoothness on the solution of a least-squares error problem. Two formulations of these methods are considered (21). The first formulation consists of finding the minimum of ˆ Sf subject to the equality constraint f ˆ T ˆ T ˆ [ g – Bf ] M [ g – Bf ] = e (12.6-1) where S is a smoothing matrix, M is an error-weighting matrix, and e denotes a residual scalar estimation error. The error-weighting matrix is often chosen to be
  • 368. CONSTRAINED IMAGE RESTORATION 361 –1 equal to the inverse of the observation noise covariance matrix, M = K n . The Lagrangian estimate satisfying Eq. 12.6-1 is (19) –1 T – 1 T 1 –1 –1 ˆ f = S B BS B + -- M - g (12.6-2) γ In Eq. 12.6-2, the Lagrangian factor γ is chosen so that Eq. 12.6-1 is satisfied; that is, the compromise between residual error and smoothness of the estimator is deemed satisfactory. Now consider the second formulation, which involves solving an equality-con- strained least-squares problem by minimizing the left-hand side of Eq. 12.6-1 such that ˆT ˆ f Sf = d (12.6-3) where the scalar d represents a fixed degree of smoothing. In this case, the optimal solution for an underdetermined nonsingular system is found to be ˆ = S –1 B T [ BS – 1 B T + γM– 1 ] –1 g f (12.6-4) A comparison of Eqs. 12.6-2 and 12.6-4 reveals that the two inverse problems are solved by the same expression, the only difference being the Lagrange multipliers, which are inverses of one another. The smoothing estimates of Eq. 12.6-4 are closely related to the regression and Wiener estimates derived previously. If γ = 0, –1 S = I and M = K n where K n is the observation noise covariance matrix, then the smoothing and regression estimates become equivalent. Substitution of γ = 1, –1 –1 S = K f and M = K n where K f is the image covariance matrix results in equivalence to the Wiener estimator. These equivalences account for the relative smoothness of the estimates obtained with regression and Wiener restoration as compared to pseudoinverse restoration. A problem that occurs with the smoothing and regularizing techniques is that even though the variance of a solution can be calculated, its bias can only be determined as a function of f. 12.6.2. Constrained Restoration Techniques Equality and inequality constraints have been suggested (21) as a means of improving restoration performance for ill-conditioned restoration models. Examples of con- straints include the specification of individual pixel values, of ratios of the values of some pixels, or the sum of part or all of the pixels, or amplitude limits of pixel values. Quite often a priori information is available in the form of inequality constraints involving pixel values. The physics of the image formation process requires that
  • 369. 362 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES pixel values be non-negative quantities. Furthermore, an upper bound on these val- ues is often known because images are digitized with a finite number of bits assigned to each pixel. Amplitude constraints are also inherently introduced by the need to “fit” a restored image to the dynamic range of a display. One approach is lin- early to rescale the restored image to the display image. This procedure is usually undesirable because only a few out-of-range pixels will cause the contrast of all other pixels to be reduced. Also, the average luminance of a restored image is usu- ally affected by rescaling. Another common display method involves clipping of all pixel values exceeding the display limits. Although this procedure is subjectively preferable to rescaling, bias errors may be introduced. If a priori pixel amplitude limits are established for image restoration, it is best to incorporate these limits directly in the restoration process rather than arbitrarily invoke the limits on the restored image. Several techniques of inequality constrained restoration have been proposed. Consider the general case of constrained restoration in which the vector estimate ˆ f is subject to the inequality constraint l≤ˆ≤u f (12.6-5) where u and l are vectors containing upper and lower limits of the pixel estimate, respectively. For least-squares restoration, the quadratic error must be minimized subject to the constraint of Eq. 12.6-5. Under this framework, restoration reduces to the solution of a quadratic programming problem (21). In the case of an absolute error measure, the restoration task can be formulated as a linear programming prob- lem (36,37). The a priori knowledge involving the inequality constraints may sub- stantially reduce pixel uncertainty in the restored image; however, as in the case of equality constraints, an unknown amount of bias may be introduced. Figure 12.6-1 is an example of image restoration for the Gaussian blur model of Chapter 11 by pseudoinverse restoration and with inequality constrained (21) in which the scaled luminance of each pixel of the restored image has been limited to the range of 0 to 255. The improvement obtained by the constraint is substantial. Unfortunately, the quadratic programming solution employed in this example requires a considerable amount of computation. A brute-force extension of the pro- cedure does not appear feasible. Several other methods have been proposed for constrained image restoration. One simple approach, based on the concept of homomorphic filtering, is to take the logarithm of each observation. Exponentiation of the corresponding estimates auto- matically yields a strictly positive result. Burg (38), Edward and Fitelson (39), and Frieden (6,40,41) have developed restoration methods providing a positivity con- straint, which are based on a maximum entropy principle originally employed to estimate a probability density from observation of its moments. Huang et al. (42) have introduced a projection method of constrained image restoration in which the set of equations g = Bf are iteratively solved by numerical means. At each stage of the solution the intermediate estimates are amplitude clipped to conform to ampli- tude limits.
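As a rough illustration of the projection idea attributed to Huang et al. (42), the sketch below iterates a simple residual-correction step for g = Bf and clips each intermediate estimate to assumed amplitude limits. The step size, limits, and iteration count are hypothetical choices for the example; the authors' published iteration is not reproduced here. Clipping after every correction step keeps the estimate inside the bounds of Eq. 12.6-5 throughout the iteration rather than only at the end.

import numpy as np

def clipped_iterative_restore(B, g, lower=0.0, upper=255.0,
                              step=None, n_iter=200):
    """Iteratively solve g = B f with amplitude clipping at each step (sketch)."""
    if step is None:
        # a conservative step size based on the largest singular value of B
        step = 1.0 / np.linalg.norm(B, 2) ** 2
    f_hat = np.clip(B.T @ g, lower, upper)             # crude starting estimate
    for _ in range(n_iter):
        f_hat = f_hat + step * B.T @ (g - B @ f_hat)   # gradient-style correction
        f_hat = np.clip(f_hat, lower, upper)           # enforce Eq. 12.6-5
    return f_hat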
  • 370. BLIND IMAGE RESTORATION 363 (a) Blurred observation (b) Unconstrained restoration (c) Constrained restoration FIGURE 12.6-1. Comparison of unconstrained and inequality constrained image restoration for a test image blurred with Gaussian-shaped impulse response. bR = bC = 1.2, M = 12, N = 8, L = 5; noisy observation, Var = 10.0. 12.7. BLIND IMAGE RESTORATION Most image restoration techniques are based on some a priori knowledge of the image degradation; the point luminance and spatial impulse responses of the system degradation are assumed known. In many applications, such information is simply not available. The degradation may be difficult to measure or may be time varying in an unpredictable manner. In such cases, information about the degradation must be extracted from the observed image either explicitly or implicitly. This task is called blind image restoration (5,19,43). Discussion here is limited to blind image restoration methods for blurred images subject to additive noise.
  • 371. 364 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES There are two major approaches to blind image restoration: direct measurement and indirect estimation. With the former approach, the blur impulse response and noise level are first estimated from an image to be restored, and then these parame- ters are utilized in the restoration. Indirect estimation techniques employ temporal or spatial averaging to either obtain a restoration or to determine key elements of a res- toration algorithm. 12.7.1. Direct Measurement Methods Direct measurement blind restoration of a blurred noisy image usually requires mea- surement of the blur impulse response and noise power spectrum or covariance function of the observed image. The blur impulse response is usually measured by isolating the image of a suspected object within a picture. By definition, the blur impulse response is the image of a point-source object. Therefore, a point source in the observed scene yields a direct indication of the impulse response. The image of a suspected sharp edge can also be utilized to derive the blur impulse response. Aver- aging several parallel line scans normal to the edge will significantly reduce noise effects. The noise covariance function of an observed image can be estimated by measuring the image covariance over a region of relatively constant background luminance. References 5, 44, and 45 provide further details on direct measurement methods. 12.7.2. Indirect Estimation Methods Temporal redundancy of scenes in real-time television systems can be exploited to perform blind restoration indirectly. As an illustration, consider the ith observed image frame Gi ( x, y ) = FI ( x, y ) + N i ( x, y ) (12.7-1) of a television system in which F I ( x, y ) is an ideal image and N i ( x, y ) is an additive noise field independent of the ideal image. If the ideal image remains constant over a sequence of M frames, then temporal summation of the observed images yields the relation M M 1- 1- FI ( x, y ) = ---- M ∑ G i ( x, y ) – ---- M ∑ N i ( x, y ) (12.7-2) i=1 i=1 The value of the noise term on the right will tend toward its ensemble average E { N ( x, y ) } for M large. In the common case of zero-mean white Gaussian noise, the
• 372. BLIND IMAGE RESTORATION 365

ensemble average is zero at all (x, y), and it is reasonable to form the estimate as

F̂_I(x, y) = (1 ⁄ M) Σ_{i=1}^{M} G_i(x, y)  (12.7-3)

FIGURE 12.7-1. Temporal averaging of a sequence of eight noisy images; SNR = 10.0. Panels: (a) noise-free original; (b) noisy image 1; (c) noisy image 2; (d) temporal average.

Figure 12.7-1 presents a computer-simulated example of temporal averaging of a sequence of noisy images. In this example, the original image is unchanged in the sequence. Each observed image is subjected to a different additive random noise pattern.

The concept of temporal averaging is also useful for image deblurring. Consider an imaging system in which sequential frames contain a relatively stationary object degraded by a different linear shift-invariant impulse response H_i(x, y) over each
  • 373. 366 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES frame. This type of imaging would be encountered, for example, when photograph- ing distant objects through a turbulent atmosphere if the object does not move significantly between frames. By taking a short exposure at each frame, the atmo- spheric turbulence is “frozen” in space at each frame interval. For this type of object, the degraded image at the ith frame interval is given by G i ( x, y ) = F I ( x, y ) ᭺ H i ( x, y ) * (12.7-4) for i = 1, 2,..., M. The Fourier spectra of the degraded images are then G i ( ω x, ω y ) = F I ( ω x, ω y )H i ( ω x, ω y ) (12.7-5) On taking the logarithm of the degraded image spectra ln { G i ( ω x, ω y ) } = ln { F I ( ω x, ω y ) } + ln { H i ( ω x, ω y ) } (12.7-6) the spectra of the ideal image and the degradation transfer function are found to sep- arate additively. It is now possible to apply any of the common methods of statistical estimation of a signal in the presence of additive noise. If the degradation impulse responses are uncorrelated between frames, it is worthwhile to form the sum M M ∑ ln { G i ( ω x, ω y ) } = M ln { F I ( ω x, ω y ) } + ∑ ln { H i ( ω x, ω y ) } (12.7-7) i=1 i=1 because for large M the latter summation approaches the constant value M   H M ( ω x, ω y ) = lim  ∑ ln { H i ( ω x, ω y ) } (12.7-8) M → ∞ i = 1  The term H M ( ω x, ω y ) may be viewed as the average logarithm transfer function of the atmospheric turbulence. An image estimate can be expressed as M  H M ( ω x, ω y )  1⁄M ˆ F I ( ω x, ω y ) = exp  – ----------------------------   M -  ∏ [ G i ( ω x, ω y ) ] (12.7-9) i=1 An inverse Fourier transform then yields the spatial domain estimate. In any practi- cal imaging system, Eq. 12.7-4 must be modified by the addition of a noise compo- nent Ni(x, y). This noise component unfortunately invalidates the separation step of Eq. 12.7-6, and therefore destroys the remainder of the derivation. One possible ad hoc solution to this problem would be to perform noise smoothing or filtering on
  • 374. REFERENCES 367 each observed image field and then utilize the resulting estimates as assumed noise- less observations in Eq. 12.7-9. Alternatively, the blind restoration technique of Stockham et al. (43) developed for nonstationary speech signals may be adapted to the multiple-frame image restoration problem. REFERENCES 1. D. A. O’Handley and W. B. Green, “Recent Developments in Digital Image Processing at the Image Processing Laboratory at the Jet Propulsion Laboratory,” Proc. IEEE, 60, 7, July 1972, 821–828. 2. M. M. Sondhi, “Image Restoration: The Removal of Spatially Invariant Degradations,” Proc. IEEE, 60, 7, July 1972, 842–853. 3. H. C. Andrews, “Digital Image Restoration: A Survey,” IEEE Computer, 7, 5, May 1974, 36–45. 4. B. R. Hunt, “Digital Image Processing,” Proc. IEEE, 63, 4, April 1975, 693–708. 5. H. C. Andrews and B. R. Hunt, Digital Image Restoration, Prentice Hall, Englewood Cliffs, NJ, 1977. 6. B. R. Frieden, “Image Enhancement and Restoration,” in Picture Processing and Digital Filtering, T. S. Huang, Ed., Springer-Verlag, New York, 1975. 7. T. G. Stockham, Jr., “A–D and D–A Converters: Their Effect on Digital Audio Fidelity,” in Digital Signal Processing, L. R. Rabiner and C. M. Rader, Eds., IEEE Press, New York, 1972, 484–496. 8. A. Marechal, P. Croce, and K. Dietzel, “Amelioration du contrast des details des images photographiques par filtrage des fréquencies spatiales,” Optica Acta, 5, 1958, 256–262. 9. J. Tsujiuchi, “Correction of Optical Images by Compensation of Aberrations and by Spa- tial Frequency Filtering,” in Progress in Optics, Vol. 2, E. Wolf, Ed., Wiley, New York, 1963, 131–180. 10. J. L. Harris, Sr., “Image Evaluation and Restoration,” J. Optical Society of America, 56, 5, May 1966, 569–574. 11. B. L. McGlamery, “Restoration of Turbulence-Degraded Images,” J. Optical Society of America, 57, 3, March 1967, 293–297. 12. P. F. Mueller and G. O. Reynolds, “Image Restoration by Removal of Random Media Degradations,” J. Optical Society of America, 57, 11, November 1967, 1338–1344. 13. C. W. Helstrom, “Image Restoration by the Method of Least Squares,” J. Optical Soci- ety of America, 57, 3, March 1967, 297–303. 14. J. L. Harris, Sr., “Potential and Limitations of Techniques for Processing Linear Motion- Degraded Imagery,” in Evaluation of Motion Degraded Images, US Government Print- ing Office, Washington DC, 1968, 131–138. 15. J. L. Homer, “Optical Spatial Filtering with the Least-Mean-Square-Error Filter,” J. Optical Society of America, 51, 5, May 1969, 553–558. 16. J. L. Homer, “Optical Restoration of Images Blurred by Atmospheric Turbulence Using Optimum Filter Theory,” Applied Optics, 9, 1, January 1970, 167–171. 17. B. L. Lewis and D. J. Sakrison, “Computer Enhancement of Scanning Electron Micro- graphs,” IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 267–278.
  • 375. 368 POINT AND SPATIAL IMAGE RESTORATION TECHNIQUES 18. D. Slepian, “Restoration of Photographs Blurred by Image Motion,” Bell System Techni- cal J., XLVI, 10, December 1967, 2353–2362. 19. E. R. Cole, “The Removal of Unknown Image Blurs by Homomorphic Filtering,” Ph.D. dissertation, Department of Electrical Engineering, University of Utah, Salt Lake City, UT June 1973. 20. B. R. Hunt, “The Application of Constrained Least Squares Estimation to Image Resto- ration by Digital Computer,” IEEE Trans. Computers, C-23, 9, September 1973, 805– 812. 21. N. D. A. Mascarenhas and W. K. Pratt, “Digital Image Restoration Under a Regression Model,” IEEE Trans. Circuits and Systems, CAS-22, 3, March 1975, 252–266. 22. W. K. Pratt and F. Davarian, “Fast Computational Techniques for Pseudoinverse and Wiener Image Restoration,” IEEE Trans. Computers, C-26, 6, June 1977, 571–580. 23. W. K. Pratt, “Pseudoinverse Image Restoration Computational Algorithms,” in Optical Information Processing Vol. 2, G. W. Stroke, Y. Nesterikhin, and E. S. Barrekette, Eds., Plenum Press, New York, 1977. 24. B. W. Rust and W. R. Burrus, Mathematical Programming and the Numerical Solution of Linear Equations, American Elsevier, New York, 1972. 25. A. Albert, Regression and the Moore–Penrose Pseudoinverse, Academic Press, New York, 1972. 26. H. C. Andrews and C. L. Patterson, “Outer Product Expansions and Their Uses in Digi- tal Image Processing,” American Mathematical. Monthly, 1, 82, January 1975, 1–13. 27. H. C. Andrews and C. L. Patterson, “Outer Product Expansions and Their Uses in Digi- tal Image Processing,” IEEE Trans. Computers, C-25, 2, February 1976, 140–148. 28. T. S. Huang and P. M. Narendra, “Image Restoration by Singular Value Decomposition,” Applied Optics, 14, 9, September 1975, 2213–2216. 29. H. C. Andrews and C. L. Patterson, “Singular Value Decompositions and Digital Image Processing,” IEEE Trans. Acoustics, Speech, and Signal Processing, ASSP-24, 1, Febru- ary 1976, 26–53. 30. T. O. Lewis and P. L. Odell, Estimation in Linear Models, Prentice Hall, Englewood Cliffs, NJ, 1971. 31. W. K. Pratt, “Generalized Wiener Filter Computation Techniques,” IEEE Trans. Com- puters, C-21, 7, July 1972, 636–641. 32. A. Papoulis, Probability Random Variables and Stochastic Processes, 3rd Ed., McGraw- Hill, New York, 1991. 33. S. Twomey, “On the Numerical Solution of Fredholm Integral Equations of the First Kind by the Inversion of the Linear System Produced by Quadrature,” J. Association for Computing Machinery, 10, 1963, 97–101. 34. D. L. Phillips, “A Technique for the Numerical Solution of Certain Integral Equations of the First Kind,” J. Association for Computing Machinery, 9, 1964, 84-97. 35. A. N. Tikonov, “Regularization of Incorrectly Posed Problems,” Soviet Mathematics, 4, 6, 1963, 1624–1627. 36. E. B. Barrett and R. N. Devich, “Linear Programming Compensation for Space-Variant Image Degradation,” Proc. SPIE/OSA Conference on Image Processing, J. C. Urbach, Ed., Pacific Grove, CA, February 1976, 74, 152–158. 37. D. P. MacAdam, “Digital Image Restoration by Constrained Deconvolution,” J. Optical Society of America, 60, 12, December 1970, 1617–1627.
  • 376. REFERENCES 369 38. J. P. Burg, “Maximum Entropy Spectral Analysis,” 37th Annual Society of Exploration Geophysicists Meeting, Oklahoma City, OK, 1967. 39. J. A. Edward and M. M. Fitelson, “Notes on Maximum Entropy Processing,” IEEE Trans. Information Theory, IT-19, 2, March 1973, 232–234. 40. B. R. Frieden, “Restoring with Maximum Likelihood and Maximum Entropy,” J. Opti- cal Society America, 62, 4, April 1972, 511–518. 41. B. R. Frieden, “Maximum Entropy Restorations of Garrymede,” in Proc. SPIE/OSA Conference on Image Processing, J. C. Urbach, Ed., Pacific Grove, CA, February 1976, 74, 160–165. 42. T. S. Huang, D. S. Baker, and S. P. Berger, “Iterative Image Restoration,” Applied Optics, 14, 5, May 1975, 1165–1168. 43. T. G. Stockham, Jr., T. M. Cannon, and P. B. Ingebretsen, “Blind Deconvolution Through Digital Signal Processing,” Proc. IEEE, 63, 4, April 1975, 678–692. 44. A. Papoulis, “Approximations of Point Spreads for Deconvolution,” J. Optical Society of America, 62, 1, January 1972, 77–80. 45. B. Tatian, “Asymptotic Expansions for Correcting Truncation Error in Transfer-Function Calculations,” J. Optical Society of America, 61, 9, September 1971, 1214–1224.
  • 377. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 13 GEOMETRICAL IMAGE MODIFICATION One of the most common image processing operations is geometrical modification in which an image is spatially translated, scaled, rotated, nonlinearly warped, or viewed from a different perspective. 13.1. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION Image translation, scaling, and rotation can be analyzed from a unified standpoint. Let G ( j, k ) for 1 ≤ j ≤ J and 1 ≤ k ≤ K denote a discrete output image that is created by geometrical modification of a discrete input image F ( p, q ) for 1 ≤ p ≤ P and 1 ≤ q ≤ Q . In this derivation, the input and output images may be different in size. Geometrical image transformations are usually based on a Cartesian coordinate sys- tem representation in which the origin ( 0, 0 ) is the lower left corner of an image, while for a discrete image, typically, the upper left corner unit dimension pixel at indices (1, 1) serves as the address origin. The relationships between the Cartesian coordinate representations and the discrete image arrays of the input and output images are illustrated in Figure 13.1-1. The output image array indices are related to their Cartesian coordinates by xk = k – 1 -- 2 - (13.1-1a) yk = J + 1 – j -- 2 - (13.1-1b) 371
  • 378. 372 GEOMETRICAL IMAGE MODIFICATION FIGURE 13.1-1. Relationship between discrete image array and Cartesian coordinate repre- sentation. Similarly, the input array relationship is given by uq = q – 1 -- 2 - (13.1-2a) vp = P + 1 – p -- 2 - (13.1-2b) 13.1.1. Translation Translation of F ( p, q ) with respect to its Cartesian origin to produce G ( j, k ) involves the computation of the relative offset addresses of the two images. The translation address relationships are x k = uq + tx (13.1-3a) yj = vp + ty (13.1-3b) where t x and ty are translation offset constants. There are two approaches to this computation for discrete images: forward and reverse address computation. In the forward approach, u q and v p are computed for each input pixel ( p, q ) and
  • 379. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION 373 substituted into Eq. 13.1-3 to obtain x k and y j . Next, the output array addresses ( j, k ) are computed by inverting Eq. 13.1-1. The composite computation reduces to j′ = p – ( P – J ) – t y (13.1-4a) k′ = q + tx (13.1-4b) where the prime superscripts denote that j′ and k′ are not integers unless tx and t y are integers. If j′ and k′ are rounded to their nearest integer values, data voids can occur in the output image. The reverse computation approach involves calculation of the input image addresses for integer output image addresses. The composite address computation becomes p′ = j + ( P – J ) + ty (13.1-5a) q′ = k – t x (13.1-5b) where again, the prime superscripts indicate that p′ and q′ are not necessarily inte- gers. If they are not integers, it becomes necessary to interpolate pixel amplitudes of ˆ F ( p, q ) to generate a resampled pixel estimate F ( p, q ), which is transferred to G ( j, k ). The geometrical resampling process is discussed in Section 13.5. 13.1.2. Scaling Spatial size scaling of an image can be obtained by modifying the Cartesian coordi- nates of the input image according to the relations xk = sx uq (13.1-6a) yj = sy vp (13.1-6b) where s x and s y are positive-valued scaling constants, but not necessarily integer valued. If s x and s y are each greater than unity, the address computation of Eq. 13.1-6 will lead to magnification. Conversely, if s x and s y are each less than unity, minification results. The reverse address relations for the input image address are found to be p′ = ( 1 ⁄ s y ) ( j + J – 1 ) + P + 1 -- 2 - -- 2 - (13.1-7a) q′ = ( 1 ⁄ s x ) ( k – 1 ) + 1 -- 2 - -- 2 - (13.1-7b)
  • 380. 374 GEOMETRICAL IMAGE MODIFICATION As with generalized translation, it is necessary to interpolate F ( p, q ) to obtain G ( j, k ) . 13.1.3. Rotation Rotation of an input image about its Cartesian origin can be accomplished by the address computation x k = u q cos θ – v p sin θ (13.1-8a) y j = u q sin θ + v p cos θ (13.1-8b) where θ is the counterclockwise angle of rotation with respect to the horizontal axis of the input image. Again, interpolation is required to obtain G ( j, k ) . Rotation of an input image about an arbitrary pivot point can be accomplished by translating the origin of the image to the pivot point, performing the rotation, and then translating back by the first translation offset. Equation 13.1-8 must be inverted and substitu- tions made for the Cartesian coordinates in terms of the array indices in order to obtain the reverse address indices ( p′, q′ ). This task is straightforward but results in a messy expression. A more elegant approach is to formulate the address computa- tion as a vector-space manipulation. 13.1.4. Generalized Linear Geometrical Transformations The vector-space representations for translation, scaling, and rotation are given below. Translation: xk uq tx = + (13.1-9) yj vp ty Scaling: xk sx 0 uq = (13.1-10) yj 0 sy vp Rotation: xk cos θ – sin θ uq = (13.1-11) yj sin θ cos θ vp
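The elementary forms of Eqs. 13.1-9 through 13.1-11 translate almost directly into code. The short Python sketch below simply evaluates the three mappings for a Cartesian coordinate pair and chains them in the order developed next for the compound modification; it is a notational aid, not an implementation from the text, and the numerical values are arbitrary.

import numpy as np

def translate(u, v, tx, ty):
    """Eq. 13.1-9."""
    return u + tx, v + ty

def scale(u, v, sx, sy):
    """Eq. 13.1-10."""
    return sx * u, sy * v

def rotate(u, v, theta):
    """Eq. 13.1-11: counterclockwise rotation about the Cartesian origin."""
    c, s = np.cos(theta), np.sin(theta)
    return c * u - s * v, s * u + c * v

# Compound modification (translate, then scale, then rotate), in the spirit
# of the consolidated form developed below as Eq. 13.1-12
x, y = rotate(*scale(*translate(1.0, 2.0, tx=0.5, ty=-0.5), sx=2.0, sy=2.0),
              theta=np.pi / 4)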
  • 381. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION 375 Now, consider a compound geometrical modification consisting of translation, fol- lowed by scaling followed by rotation. The address computations for this compound operation can be expressed as xk cos θ – sin θ sx 0 uq cos θ – sin θ sx 0 tx = + (13.1-12a) yj sin θ cos θ 0 sy vp sin θ cos θ 0 sy ty or upon consolidation xk s x cos θ – s y sin θ uq s x t x cos θ – s t sin θ y y = + (13.1-12b) yj s x sin θ s y cos θ vp s x t x sin θ + s y t y cos θ Equation 13.1-12b is, of course, linear. It can be expressed as xk c0 c 1 uq c2 = + (13.1-13a) yj d0 d 1 vp d2 in one-to-one correspondence with Eq. 13.1-12b. Equation 13.1-13a can be rewrit- ten in the more compact form xk c0 c1 c2 uq = (13.1-13b) yj d0 d1 d2 vp 1 As a consequence, the three address calculations can be obtained as a single linear address computation. It should be noted, however, that the three address calculations are not commutative. Performing rotation followed by minification followed by translation results in a mathematical transformation different than Eq. 13.1-12. The overall results can be made identical by proper choice of the individual transforma- tion parameters. To obtain the reverse address calculation, it is necessary to invert Eq. 13.1-13b to solve for ( u q, v p ) in terms of ( x k, y j ). Because the matrix in Eq. 13.1-13b is not square, it does not possess an inverse. Although it is possible to obtain ( u q, v p ) by a pseudoinverse operation, it is convenient to augment the rectangular matrix as follows:
  • 382. 376 GEOMETRICAL IMAGE MODIFICATION xk c0 c1 c2 uq yj = d0 d1 d2 vp (13.1-14) 1 0 0 1 1 This three-dimensional vector representation of a two-dimensional vector is a special case of a homogeneous coordinates representation (1–3). The use of homogeneous coordinates enables a simple formulation of concate- nated operators. For example, consider the rotation of an image by an angle θ about a pivot point ( x c, y c ) in the image. This can be accomplished by xk 1 0 xc cos θ – sin θ 0 1 0 –xc uq yj = 0 1 yc sin θ cos θ 0 0 1 –yc vp (13.1-15) 1 0 0 1 0 0 1 0 0 1 1 which reduces to a single 3 × 3 transformation: xk cos θ – sin θ – x c cos θ + y c sin θ + x c uq yj = sin θ cos θ – x c sin θ – y c cos θ + y c vp (13.1-16) 1 0 0 1 1 The reverse address computation for the special case of Eq. 13.1-16, or the more general case of Eq. 13.1-13, can be obtained by inverting the 3 × 3 transformation matrices by numerical methods. Another approach, which is more computationally efficient, is to initially develop the homogeneous transformation matrix in reverse order as uq a0 a1 a2 xk vp = b0 b1 b 2 yj (13.1-17) 1 0 0 1 1 where for translation a0 = 1 (13.1-18a) a1 = 0 (13.1-18b) a2 = – tx (13.1-18c) b0 = 0 (13.1-18d) b1 = 1 (13.1-18e) b 2 = –ty (13.1-18f)
  • 383. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION 377 and for scaling a 0 = 1 ⁄ sx (13.1-19a) a1 = 0 (13.1-19b) a2 = 0 (13.1-19c) b0 = 0 (13.1-19d) b 1 = 1 ⁄ sy (13.1-19e) b2 = 0 (13.1-19f) and for rotation a 0 = cos θ (13.1-20a) a 1 = sin θ (13.1-20b) a2 = 0 (13.1-20c) b 0 = – sin θ (13.1-20d) b 1 = cos θ (13.1-20e) b2 = 0 (13.1-20f) Address computation for a rectangular destination array G ( j, k ) from a rectan- gular source array F ( p, q ) of the same size results in two types of ambiguity: some pixels of F ( p, q ) will map outside of G ( j, k ); and some pixels of G ( j, k ) will not be mappable from F ( p, q ) because they will lie outside its limits. As an example, Figure 13.1-2 illustrates rotation of an image by 45° about its center. If the desire of the mapping is to produce a complete destination array G ( j, k ) , it is necessary to access a sufficiently large source image F ( p, q ) to prevent mapping voids in G ( j, k ) . This is accomplished in Figure 13.1-2d by embedding the original image of Figure 13.1-2a in a zero background that is sufficiently large to encompass the rotated original. 13.1.5. Affine Transformation The geometrical operations of translation, size scaling, and rotation are special cases of a geometrical operator called an affine transformation. It is defined by Eq. 13.1-13b, in which the constants ci and di are general weighting factors. The affine transformation is not only useful as a generalization of translation, scaling, and rota- tion. It provides a means of image shearing in which the rows or columns are successively uniformly translated with respect to one another. Figure 13.1-3
  • 384. 378 GEOMETRICAL IMAGE MODIFICATION (a) Original, 500 × 500 (b) Rotated, 500 × 500 (c) Original, 708 × 708 (d) Rotated, 708 × 708 FIGURE 13.1-2. Image rotation by 45° on the washington_ir image about its center. illustrates image shearing of rows of an image. In this example, c 0 = d 1 = 1.0 , c 1 = 0.1, d 0 = 0.0, and c 2 = d 2 = 0.0. 13.1.6. Separable Translation, Scaling, and Rotation The address mapping computations for translation and scaling are separable in the sense that the horizontal output image coordinate xk depends only on uq, and yj depends only on vp. Consequently, it is possible to perform these operations separably in two passes. In the first pass, a one-dimensional address translation is performed independently on each row of an input image to produce an intermediate array I ( p, k ). In the second pass, columns of the intermediate array are processed independently to produce the final result G ( j, k ).
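The two-pass idea can be sketched most simply for pure scaling: rows are resampled first to form the intermediate array I(p, k), and columns of the intermediate array are then resampled to form G(j, k). The fragment below uses nearest-neighbor resampling for brevity; it is an illustrative sketch with an arbitrary test array, not the book's procedure, and proper interpolation (Section 13.5) would normally be used.

import numpy as np

def scale_rows_nearest(image, sx):
    """First pass: resample each row by the horizontal scale factor sx."""
    P, Q = image.shape
    K = int(round(Q * sx))
    src_cols = np.clip(np.round(np.arange(K) / sx).astype(int), 0, Q - 1)
    return image[:, src_cols]

def scale_cols_nearest(image, sy):
    """Second pass: resample each column by the vertical scale factor sy."""
    P, Q = image.shape
    J = int(round(P * sy))
    src_rows = np.clip(np.round(np.arange(J) / sy).astype(int), 0, P - 1)
    return image[src_rows, :]

def separable_scale(image, sx, sy):
    """Two-pass separable magnification or minification (nearest neighbor)."""
    return scale_cols_nearest(scale_rows_nearest(image, sx), sy)

# Example: magnify a small test array by 2:1 horizontally and 1.5:1 vertically
F = np.arange(12.0).reshape(3, 4)
G = separable_scale(F, sx=2.0, sy=1.5)   # magnified output array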
  • 385. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION 379 (a) Original (b) Sheared FIGURE 13.1-3. Horizontal image shearing on the washington_ir image. Referring to Eq. 13.1-8, it is observed that the address computation for rotation is of a form such that xk is a function of both uq and vp; and similarly for yj. One might then conclude that rotation cannot be achieved by separable row and column pro- cessing, but Catmull and Smith (4) have demonstrated otherwise. In the first pass of the Catmull and Smith procedure, each row of F ( p, q ) is mapped into the corre- sponding row of the intermediate array I ( p, k ) using the standard row address com- putation of Eq. 13.1-8a. Thus x k = u q cos θ – v p sin θ (13.1-21) Then, each column of I ( p, k ) is processed to obtain the corresponding column of G ( j, k ) using the address computation x k sin θ + v p y j = ---------------------------- - (13.1-22) cos θ Substitution of Eq. 13.1-21 into Eq. 13.1-22 yields the proper composite y-axis transformation of Eq. 13.1-8b. The “secret” of this separable rotation procedure is the ability to invert Eq. 13.1-21 to obtain an analytic expression for uq in terms of xk. In this case, x k + v p sin θ u q = --------------------------- - (13.1-23) cos θ when substituted into Eq. 13.1-21, gives the intermediate column warping function of Eq. 13.1-22.
  • 386. 380 GEOMETRICAL IMAGE MODIFICATION The Catmull and Smith two-pass algorithm can be expressed in vector-space form as xk 1 0 cos θ – sin θ uq = 1 (13.1-24) yj tan θ ----------- - 0 1 vp cos θ The separable processing procedure must be used with caution. In the special case of a rotation of 90°, all of the rows of F ( p, q ) are mapped into a single column of I ( p, k ) , and hence the second pass cannot be executed. This problem can be avoided by processing the columns of F ( p, q ) in the first pass. In general, the best overall results are obtained by minimizing the amount of spatial pixel movement. For exam- ple, if the rotation angle is + 80°, the original should be rotated by +90° by conven- tional row–column swapping methods, and then that intermediate image should be rotated by –10° using the separable method. Figure 13.14 provides an example of separable rotation of an image by 45°. Figure 13.l-4a is the original, Figure 13.1-4b shows the result of the first pass and Figure 13.1-4c presents the final result. (a) Original (b) First-pass result (c) Second-pass result FIGURE 13.1-4. Separable two-pass image rotation on the washington_ir image.
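The two-pass address computations of Eqs. 13.1-21 and 13.1-22 can be checked numerically: composing the row pass and the column pass should reproduce the direct rotation of Eq. 13.1-8. The sketch below performs such a consistency check on coordinates only and does no resampling; the coordinate values are arbitrary.

import numpy as np

def catmull_smith_passes(u, v, theta):
    """Two-pass rotation addresses: Eq. 13.1-21 (rows), then Eq. 13.1-22 (columns)."""
    x = u * np.cos(theta) - v * np.sin(theta)      # first pass, Eq. 13.1-21
    y = (x * np.sin(theta) + v) / np.cos(theta)    # second pass, Eq. 13.1-22
    return x, y

def direct_rotation(u, v, theta):
    """One-pass rotation, Eq. 13.1-8."""
    return (u * np.cos(theta) - v * np.sin(theta),
            u * np.sin(theta) + v * np.cos(theta))

theta = np.deg2rad(45.0)
u, v = 3.0, 7.0
assert np.allclose(catmull_smith_passes(u, v, theta),
                   direct_rotation(u, v, theta))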
  • 387. TRANSLATION, MINIFICATION, MAGNIFICATION, AND ROTATION 381 Separable, two-pass rotation offers the advantage of simpler computation com- pared to one-pass rotation, but there are some disadvantages to two-pass rotation. Two-pass rotation causes loss of high spatial frequencies of an image because of the intermediate scaling step (5), as seen in Figure 13.1-4b. Also, there is the potential of increased aliasing error (5,6), as discussed in Section 13.5. Several authors (5,7,8) have proposed a three-pass rotation procedure in which there is no scaling step and hence no loss of high-spatial-frequency content with proper interpolation. The vector-space representation of this procedure is given by xk 1 – tan ( θ ⁄ 2 ) 1 0 1 – tan ( θ ⁄ 2 ) uq = (13.1-25) yj 0 1 sin θ 1 0 1 vp This transformation is a series of image shearing operations without scaling. Figure 13.1-5 illustrates three-pass rotation for rotation by 45°. (a) Original (b) First-pass result (c) Second-pass result (d) Third-pass result FIGURE 13.1-5. Separable three-pass image rotation on the washington_ir image.
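The three-pass decomposition of Eq. 13.1-25 can likewise be verified by multiplying out the three shear matrices and comparing the product with the 2 × 2 rotation matrix. The fragment below performs that check; it is a sketch of the matrix identity rather than an image-domain implementation.

import numpy as np

def shear_factors(theta):
    """The three shear matrices of Eq. 13.1-25 (horizontal, vertical, horizontal)."""
    t = np.tan(theta / 2.0)
    H = np.array([[1.0, -t], [0.0, 1.0]])             # horizontal shear
    V = np.array([[1.0, 0.0], [np.sin(theta), 1.0]])  # vertical shear
    return H, V, H

theta = np.deg2rad(45.0)
H1, V, H2 = shear_factors(theta)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.allclose(H1 @ V @ H2, R)   # the three shears compose to a pure rotation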
  • 388. 382 GEOMETRICAL IMAGE MODIFICATION 13.2 SPATIAL WARPING The address computation procedures described in the preceding section can be extended to provide nonlinear spatial warping of an image. In the literature, this pro- cess is often called rubber-sheet stretching (9,10). Let x = X ( u, v ) (13.2-1a) y = Y ( u, v ) (13.2-1b) denote the generalized forward address mapping functions from an input image to an output image. The corresponding generalized reverse address mapping functions are given by u = U ( x, y ) (13.2-2a) v = V ( x, y ) (13.2-2b) For notational simplicity, the ( j, k ) and ( p, q ) subscripts have been dropped from these and subsequent expressions. Consideration is given next to some examples and applications of spatial warping. 13.2.1. Polynomial Warping The reverse address computation procedure given by the linear mapping of Eq. 13.1-17 can be extended to higher dimensions. A second-order polynomial warp address mapping can be expressed as 2 2 u = a 0 + a 1 x + a 2 y + a 3 x + a 4 xy + a5 y (13.2-3a) 2 2 v = b0 + b 1 x + b 2 y + b 3 x + b 4 xy + b 5 y (13.2-3b) In vector notation, u a0 a1 a2 a3 a4 a5 1 = v b0 b1 b2 b3 b4 b5 x y 2 (13.2-3c) x xy 2 y For first-order address mapping, the weighting coefficients ( a i, b i ) can easily be related to the physical mapping as described in Section 13.1. There is no simple physical
  • 389. SPATIAL WARPING 383 FIGURE 13.2-1. Geometric distortion. counterpart for second address mapping. Typically, second-order and higher-order address mapping are performed to compensate for spatial distortion caused by a physical imaging system. For example, Figure 13.2-1 illustrates the effects of imag- ing a rectangular grid with an electronic camera that is subject to nonlinear pincush- ion or barrel distortion. Figure 13.2-2 presents a generalization of the problem. An ideal image F ( j, k ) is subject to an unknown physical spatial distortion. The observed image is measured over a rectangular array O ( p, q ). The objective is to ˆ perform a spatial correction warp to produce a corrected image array F ( j, k ) . Assume that the address mapping from the ideal image space to the observation space is given by u = O u { x, y } (13.2-4a) v = O v { x, y } (13.2-4b) FIGURE 13.2-2. Spatial warping concept.
  • 390. 384 GEOMETRICAL IMAGE MODIFICATION where Ou { x, y } and O v { x, y } are physical mapping functions. If these mapping functions are known, then Eq. 13.2-4 can, in principle, be inverted to obtain the proper corrective spatial warp mapping. If the physical mapping functions are not known, Eq. 13.2-3 can be considered as an estimate of the physical mapping func- tions based on the weighting coefficients ( a i, b i ) . These polynomial weighting coef- ficients are normally chosen to minimize the mean-square error between a set of observation coordinates ( u m, v m ) and the polynomial estimates ( u, v ) for a set ( 1 ≤ m ≤ M ) of known data points ( x m, y m ) called control points. It is convenient to arrange the observation space coordinates into the vectors T u = [ u 1, u 2, …, u M ] (13.2-5a) T v = [ v 1, v 2, …, v M ] (13.2-5b) Similarly, let the second-order polynomial coefficients be expressed in vector form as T a = [ a 0, a 1, …, a 5 ] (13.2-6a) T b = [ b 0, b 1, …, b 5 ] (13.2-6b) The mean-square estimation error can be expressed in the compact form T T E = ( u – Aa ) ( u – Aa ) + ( v – Ab ) ( v – Ab ) (13.2-7) where 2 2 1 x1 y1 x1 x1 y1 y1 2 2 1 x2 y2 x2 x2 y2 y2 A = (13.2-8) 2 2 1 xM yM xM xM yM yM From Appendix 1, it has been determined that the error will be minimum if – a = A u (13.2-9a) – b = A v (13.2-9b) where A– is the generalized inverse of A. If the number of control points is chosen greater than the number of polynomial coefficients, then – T –1 A = [A A] A (13.2-10)
  • 391. SPATIAL WARPING 385 (a) Source control points (b) Destination control points (c) Warped FIGURE 13.2-3. Second-order polynomial spatial warping on the mandrill_mon image. provided that the control points are not linearly related. Following this procedure, the polynomial coefficients ( a i, b i ) can easily be computed, and the address map- ping of Eq. 13.2-1 can be obtained for all ( j, k ) pixels in the corrected image. Of course, proper interpolation is necessary. Equation 13.2-3 can be extended to provide a higher-order approximation to the physical mapping of Eq. 13.2-3. However, practical problems arise in computing the pseudoinverse accurately for higher-order polynomials. For most applications, sec- ond-order polynomial computation suffices. Figure 13.2-3 presents an example of second-order polynomial warping of an image. In this example, the mapping of con- trol points is indicated by the graphics overlay.
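Fitting the second-order polynomial coefficients of Eq. 13.2-3 to a set of control points is a small least-squares problem, as indicated by Eqs. 13.2-7 through 13.2-10. The sketch below builds the matrix A of Eq. 13.2-8 and solves for the coefficient vectors a and b with a pseudoinverse; the control points are hypothetical values invented for the example.

import numpy as np

def fit_polynomial_warp(xy, uv):
    """Least-squares fit of the second-order warp of Eq. 13.2-3.

    xy : (M, 2) destination control points (x_m, y_m)
    uv : (M, 2) observed source control points (u_m, v_m)
    Returns the coefficient vectors a and b of Eqs. 13.2-9a and 13.2-9b.
    """
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])  # Eq. 13.2-8
    A_pinv = np.linalg.pinv(A)                  # generalized inverse, Eq. 13.2-10
    return A_pinv @ uv[:, 0], A_pinv @ uv[:, 1]

def apply_polynomial_warp(a, b, x, y):
    """Reverse address mapping of Eq. 13.2-3 for scalar or array coordinates."""
    basis = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    return a @ basis, b @ basis

# Hypothetical control points: at least six are needed for six coefficients
xy = np.array([[0, 0], [100, 0], [0, 100], [100, 100], [50, 50], [25, 75]], float)
uv = xy + np.array([[2, 1], [3, -1], [1, 2], [-2, 2], [0, 1], [1, 0]], float)
a, b = fit_polynomial_warp(xy, uv)
u, v = apply_polynomial_warp(a, b, 50.0, 50.0)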
  • 392. 386 GEOMETRICAL IMAGE MODIFICATION FIGURE 13.3-1. Basic imaging system model. 13.3. PERSPECTIVE TRANSFORMATION Most two-dimensional images are views of three-dimensional scenes from the phys- ical perspective of a camera imaging the scene. It is often desirable to modify an observed image so as to simulate an alternative viewpoint. This can be accom- plished by use of a perspective transformation. Figure 13.3-1 shows a simple model of an imaging system that projects points of light in three-dimensional object space to points of light in a two-dimensional image plane through a lens focused for distant objects. Let ( X, Y, Z ) be the continuous domain coordi- nate of an object point in the scene, and let ( x, y ) be the continuous domain-projected coordinate in the image plane. The image plane is assumed to be at the center of the coor- dinate system. The lens is located at a distance f to the right of the image plane, where f is the focal length of the lens. By use of similar triangles, it is easy to establish that fX x = ---------- - (13.3-1a) f–Z fY y = ---------- - (13.3-1b) f–Z Thus the projected point ( x, y ) is related nonlinearly to the object point ( X, Y, Z ) . This relationship can be simplified by utilization of homogeneous coordinates, as introduced to the image processing community by Roberts (1). Let X v = Y (13.3-2) Z
  • 393. PERSPECTIVE TRANSFORMATION 387 ˜ be a vector containing the object point coordinates. The homogeneous vector v cor- responding to v is sX ˜ v = sY (13.3-3) sZ s where s is a scaling constant. The Cartesian vector v can be generated from the ˜ homogeneous vector v by dividing each of the first three components by the fourth. The utility of this representation will soon become evident. Consider the following perspective transformation matrix: 1 0 0 0 P = 0 1 0 0 (13.3-4) 0 0 1 0 0 0 –1 ⁄ f 1 This is a modification of the Roberts (1) definition to account for a different labeling of the axes and the use of column rather than row vectors. Forming the vector product ˜ ˜ w = Pv (13.3-5a) yields sX ˜ w = sY (13.3-5b) sZ s – sZ ⁄ f ˜ The corresponding image plane coordinates are obtained by normalization of w to obtain fX ---------- - f–Z w = fY (13.3-6) ---------- - f–Z fZ ---------- - f–Z
  • 394. 388 GEOMETRICAL IMAGE MODIFICATION It should be observed that the first two elements of w correspond to the imaging relationships of Eq. 13.3-1. It is possible to project a specific image point ( x i, y i ) back into three-dimensional object space through an inverse perspective transformation –1 ˜ v = P w ˜ (13.3-7a) where 1 0 0 0 P –1 = 0 1 0 0 (13.3-7b) 0 0 1 0 0 0 1⁄f 1 and sx i ˜ sy i w = (13.3-7c) sz i s In Eq. 13.3-7c, z i is regarded as a free variable. Performing the inverse perspective transformation yields the homogeneous vector sx i sy i ˜ w = (13.3-8) sz i s + sz i ⁄ f The corresponding Cartesian coordinate vector is fxi ---------- - f – zi fyi w = ---------- - (13.3-9) f – zi fz i ---------- - f – zi or equivalently,
  • 395. CAMERA IMAGING MODEL 389 fx i x = ---------- - (13.3-10a) f – zi fyi y = ---------- - (13.3-10b) f – zi fzi z = ---------- - (13.3-10c) f – zi Equation 13.3-10 illustrates the many-to-one nature of the perspective transforma- tion. Choosing various values of the free variable z i results in various solutions for ( X, Y, Z ), all of which lie along a line from ( x i, y i ) in the image plane through the lens center. Solving for the free variable z i in Eq. 13.3-l0c and substituting into Eqs. 13.3-10a and 13.3-10b gives x X = ---i ( f – Z ) - (13.3-11a) f y Y = ---i ( f – Z ) - (13.3-11b) f The meaning of this result is that because of the nature of the many-to-one perspec- tive transformation, it is necessary to specify one of the object coordinates, say Z, in order to determine the other two from the image plane coordinates ( x i, y i ). Practical utilization of the perspective transformation is considered in the next section. 13.4. CAMERA IMAGING MODEL The imaging model utilized in the preceding section to derive the perspective transformation assumed, for notational simplicity, that the center of the image plane was coincident with the center of the world reference coordinate system. In this section, the imaging model is generalized to handle physical cameras used in practical imaging geometries (11). This leads to two important results: a derivation of the fundamental relationship between an object and image point; and a means of changing a camera perspective by digital image processing. Figure 13.4-1 shows an electronic camera in world coordinate space. This camera is physically supported by a gimbal that permits panning about an angle θ (horizon- tal movement in this geometry) and tilting about an angle φ (vertical movement). The gimbal center is at the coordinate ( X G, Y G, Z G ) in the world coordinate system. The gimbal center and image plane center are offset by a vector with coordinates ( X o, Y o, Z o ).
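Before the camera geometry is added, the basic perspective projection of Section 13.3 can be summarized in a few lines of code: build the matrix P of Eq. 13.3-4, multiply the homogeneous object point, and renormalize by the fourth component. The sketch below does exactly that for an assumed focal length and object point.

import numpy as np

def perspective_matrix(f):
    """Perspective transformation matrix of Eq. 13.3-4."""
    P = np.eye(4)
    P[3, 2] = -1.0 / f
    return P

def project_point(X, Y, Z, f):
    """Project an object point to image plane coordinates (Eq. 13.3-1)."""
    v_h = np.array([X, Y, Z, 1.0])          # homogeneous object point, s = 1
    w_h = perspective_matrix(f) @ v_h       # Eq. 13.3-5a
    w = w_h[:3] / w_h[3]                    # renormalize by the fourth component
    return w[0], w[1]                       # x = fX/(f - Z), y = fY/(f - Z)

# Assumed example: focal length 50 (arbitrary units), object point (10, 5, -200)
x, y = project_point(10.0, 5.0, -200.0, f=50.0)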
  • 396. 390 GEOMETRICAL IMAGE MODIFICATION FIGURE 13.4-1. Camera imaging model. If the camera were to be located at the center of the world coordinate origin, not panned nor tilted with respect to the reference axes, and if the camera image plane was not offset with respect to the gimbal, the homogeneous image model would be as derived in Section 13.3; that is ˜ ˜ w = Pv (13.4-1) ˜ where v is the homogeneous vector of the world coordinates of an object point, w ˜ is the homogeneous vector of the image plane coordinates, and P is the perspective transformation matrix defined by Eq. 13.3-4. The camera imaging model can easily be derived by modifying Eq. 13.4-1 sequentially using a three-dimensional exten- sion of translation and rotation concepts presented in Section 13.1. The offset of the camera to location ( XG, YG, ZG ) can be accommodated by the translation operation ˜ ˜ w = PT G v (13.4-2) where 1 0 0 –XG 0 1 0 –Y G TG = (13.4-3) 0 0 1 –Z G 0 0 0 1
  • 397. CAMERA IMAGING MODEL 391 Pan and tilt are modeled by a rotation transformation ˜ ˜ w = PRT G v (13.4-4) where R = R φ R θ and cos θ – sin θ 0 0 Rθ = sin θ cos θ 0 0 (13.4-5) 0 0 1 0 0 0 0 1 and 1 0 0 0 Rφ = 0 cos φ – sin φ 0 (13.4-6) 0 sin φ cos φ 0 0 0 0 1 The composite rotation matrix then becomes cos θ – sin θ 0 0 R = cos φ sin θ cos φ cos θ – sin φ 0 (13.4-7) sin φ sin θ sin φ cos θ cos φ 0 0 0 0 1 Finally, the camera-to-gimbal offset is modeled as ˜ ˜ w = PT C RT G v (13.4-8) where 1 0 0 –Xo 0 1 0 –Yo TC = (13.4-9) 0 0 1 –Zo 0 0 0 1
  • 398. 392 GEOMETRICAL IMAGE MODIFICATION Equation 13.4-8 is the final result giving the complete camera imaging model trans- formation between an object and an image point. The explicit relationship between an object point ( X, Y, Z ) and its image plane projection ( x, y ) can be obtained by performing the matrix multiplications analytically and then forming the Cartesian ˜ coordinates by dividing the first two components of w by the fourth. Upon perform- ing these operations, one obtains f [ ( X – X G ) cos θ – ( Y – Y G ) sin θ – X 0 ] x = -------------------------------------------------------------------------------------------------------------------------------------------------------------------- - (13.4-10a) – ( X – X G ) sin θ sin φ – ( Y – Y G ) cos θ sin φ – ( Z – Z G ) cos φ + Z 0 + f f [ ( X – XG ) sin θ cos φ + ( Y – Y G ) cos θ cos φ – ( Z – Z G ) sin φ – Y 0 ] y = ------------------------------------------------------------------------------------------------------------------------------------------------------------------ - (13.4-10b) – ( X – X G ) sin θ sin φ – ( Y – Y G ) cosθ sin φ – ( Z – Z G ) cos φ + Z 0 + f Equation 13.4-10 can be used to predict the spatial extent of the image of a physical scene on an imaging sensor. Another important application of the camera imaging model is to form an image by postprocessing such that the image appears to have been taken by a camera at a ˜ ˜ different physical perspective. Suppose that two images defined by w 1 and w 2 are formed by taking two views of the same object with the same camera. The resulting camera model relationships are then ˜ ˜ w 1 = PT C R 1 T G1 v (13.4-11a) ˜ ˜ w 2 = PT C R 2 T G2 v (13.4-11b) Because the camera is identical for the two images, the matrices P and TC are invariant in Eq. 13.4-11. It is now possible to perform an inverse computation of Eq. 13.4-11b to obtain –1 –1 –1 –1 ˜ ˜ v = [ TG1 ] [ R 1 ] [ TC ] [ P ] w 1 (13.4-12) and by substitution into Eq. 13.4-11b, it is possible to relate the image plane coordi- nates of the image of the second view to that obtained in the first view. Thus –1 –1 –1 –1 ˜ ˜ w 2 = PT C R 2 TG2 [ T G1 ] [ R 1 ] [ T C ] [ P ] w 1 (13.4-13) As a consequence, an artificial image of the second view can be generated by per- forming the matrix multiplications of Eq. 13.4-13 mathematically on the physical image of the first view. Does this always work? No, there are limitations. First, if some portion of a physical scene were not “seen” by the physical camera, perhaps it
was occluded by structures within the scene, then no amount of processing will recreate the missing data. Second, the processed image may suffer severe degradations resulting from undersampling if the two camera aspects are radically different. Nevertheless, this technique has valuable applications.

13.5. GEOMETRICAL IMAGE RESAMPLING

As noted in the preceding sections of this chapter, the reverse address computation process usually results in an address result lying between known pixel values of an input image. Thus it is necessary to estimate the unknown pixel amplitude from its known neighbors. This process is related to the image reconstruction task, as described in Chapter 4, in which a space-continuous display is generated from an array of image samples. However, the geometrical resampling process is usually not spatially regular. Furthermore, the process is discrete to discrete; only one output pixel is produced for each input address. In this section, consideration is given to the general geometrical resampling process in which output pixels are estimated by interpolation of input pixels. The special, but common, case of image magnification by an integer zoom factor is also discussed. In this case, it is possible to perform pixel estimation by convolution.

13.5.1. Interpolation Methods

The simplest form of resampling interpolation is to choose the amplitude of an output image pixel to be the amplitude of the input pixel nearest to the reverse address. This process, called nearest-neighbor interpolation, can result in a spatial offset error of as much as 1/2 pixel unit. The resampling interpolation error can be significantly reduced by utilizing all four nearest neighbors in the interpolation. A common approach, called bilinear interpolation, is to interpolate linearly along each row of an image and then interpolate that result linearly in the columnar direction. Figure 13.5-1 illustrates the process. The estimated pixel is easily found to be

$F(p', q') = (1 - a)[(1 - b)F(p, q) + bF(p, q + 1)] + a[(1 - b)F(p + 1, q) + bF(p + 1, q + 1)]$   (13.5-1)

Although the horizontal and vertical interpolation operations are each linear, in general their sequential application results in a nonlinear surface fit between the four neighboring pixels. The expression for bilinear interpolation of Eq. 13.5-1 can be generalized for any interpolation function $R\{x\}$ that is zero-valued outside the range of $\pm 1$ sample spacing. With this generalization, interpolation can be considered as the summing of four weighted interpolation functions as given by
$F(p', q') = F(p, q)R\{-a\}R\{b\} + F(p, q + 1)R\{-a\}R\{-(1 - b)\} + F(p + 1, q)R\{1 - a\}R\{b\} + F(p + 1, q + 1)R\{1 - a\}R\{-(1 - b)\}$   (13.5-2)

In the special case of linear interpolation, $R\{x\} = R_1\{x\}$, where $R_1\{x\}$ is defined in Eq. 4.3-2. Making this substitution, it is found that Eq. 13.5-2 is equivalent to the bilinear interpolation expression of Eq. 13.5-1.

FIGURE 13.5-1. Bilinear interpolation.

Typically, for reasons of computational complexity, resampling interpolation is limited to a 4 × 4 pixel neighborhood. Figure 13.5-2 defines a generalized bicubic interpolation neighborhood in which the pixel $F(p, q)$ is the nearest neighbor to the pixel to be interpolated. The interpolated pixel may be expressed in the compact form

$F(p', q') = \sum_{m=-1}^{2} \sum_{n=-1}^{2} F(p + m, q + n)\,R_C\{(m - a)\}\,R_C\{-(n - b)\}$   (13.5-3)

where $R_C(x)$ denotes a bicubic interpolation function such as a cubic B-spline or cubic interpolation function, as defined in Section 4.3.2.
FIGURE 13.5-2. Bicubic interpolation.

FIGURE 13.5-3. Interpolation kernels for 2:1 magnification.
FIGURE 13.5-4. Image interpolation on the mandrill_mon image for 2:1 magnification: (a) original; (b) zero-interleaved quadrant; (c) peg; (d) pyramid; (e) bell; (f) cubic B-spline.
13.5.2. Convolution Methods

When an image is to be magnified by an integer zoom factor, pixel estimation can be implemented efficiently by convolution (12). As an example, consider image magnification by a factor of 2:1. This operation can be accomplished in two stages. First, the input image is transferred to an array in which rows and columns of zeros are interleaved with the input image data as follows:

input image neighborhood:

A B
C D

zero-interleaved neighborhood:

A 0 B
0 0 0
C 0 D

Next, the zero-interleaved neighborhood image is convolved with one of the discrete interpolation kernels listed in Figure 13.5-3. Figure 13.5-4 presents the magnification results for several interpolation kernels. The inevitable visual trade-off between the interpolation error (the jaggy line artifacts) and the loss of high spatial frequency detail in the image is apparent from the examples.

This discrete convolution operation can easily be extended to higher-order magnification factors. For N:1 magnification, the core kernel is an N × N peg array. For large kernels, it may be more computationally efficient to perform the interpolation indirectly by Fourier domain filtering rather than by convolution (6).
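A minimal sketch of the two-stage 2:1 magnification just described, assuming SciPy is available for the convolution step. The pyramid (bilinear) kernel used here is a standard choice adopted for illustration; the actual kernel set of Figure 13.5-3 (peg, pyramid, bell, cubic B-spline) is not reproduced in this excerpt, and the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def zoom2x(image, kernel):
    """2:1 magnification, Section 13.5.2: interleave zeros, then convolve."""
    rows, cols = image.shape
    interleaved = np.zeros((2 * rows, 2 * cols), dtype=float)
    interleaved[::2, ::2] = image          # rows and columns of zeros interleaved
    return convolve(interleaved, kernel, mode='nearest')

# Pyramid (bilinear) interpolation kernel, assumed for illustration.
pyramid = np.array([[0.25, 0.5, 0.25],
                    [0.5,  1.0, 0.5 ],
                    [0.25, 0.5, 0.25]])

tile = np.arange(16, dtype=float).reshape(4, 4)
print(zoom2x(tile, pyramid).shape)         # (8, 8)
```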
REFERENCES

1. L. G. Roberts, “Machine Perception of Three-Dimensional Solids,” in Optical and Electro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965.
2. D. F. Rogers, Mathematical Elements for Computer Graphics, 2nd ed., McGraw-Hill, New York, 1989.
3. J. D. Foley et al., Computer Graphics: Principles and Practice, 2nd ed. in C, Addison-Wesley, Reading, MA, 1996.
4. E. Catmull and A. R. Smith, “3-D Transformation of Images in Scanline Order,” Computer Graphics, SIGGRAPH '80 Proc., 14, 3, July 1980, 279–285.
5. M. Unser, P. Thevenaz, and L. Yaroslavsky, “Convolution-Based Interpolation for Fast, High-Quality Rotation of Images,” IEEE Trans. Image Processing, IP-4, 10, October 1995, 1371–1381.
6. D. Fraser and R. A. Schowengerdt, “Avoidance of Additional Aliasing in Multipass Image Rotations,” IEEE Trans. Image Processing, IP-3, 6, November 1994, 721–735.
7. A. W. Paeth, “A Fast Algorithm for General Raster Rotation,” in Proc. Graphics Interface '86–Vision Interface, 1986, 77–81.
8. P. E. Danielson and M. Hammerin, “High Accuracy Rotation of Images,” CVGIP: Graphical Models and Image Processing, 54, 4, July 1992, 340–344.
9. R. Bernstein, “Digital Image Processing of Earth Observation Sensor Data,” IBM J. Research and Development, 20, 1, 1976, 40–56.
10. D. A. O'Handley and W. B. Green, “Recent Developments in Digital Image Processing at the Image Processing Laboratory of the Jet Propulsion Laboratory,” Proc. IEEE, 60, 7, July 1972, 821–828.
11. K. S. Fu, R. C. Gonzalez, and C. S. G. Lee, Robotics: Control, Sensing, Vision, and Intelligence, McGraw-Hill, New York, 1987.
12. W. K. Pratt, “Image Processing and Analysis Using Primitive Computational Elements,” in Selected Topics in Signal Processing, S. Haykin, Ed., Prentice Hall, Englewood Cliffs, NJ, 1989.
PART 5

IMAGE ANALYSIS

Image analysis is concerned with the extraction of measurements, data, or information from an image by automatic or semiautomatic methods. In the literature, this field has been called image data extraction, scene analysis, image description, automatic photo interpretation, image understanding, and a variety of other names.

Image analysis is distinguished from other types of image processing, such as coding, restoration, and enhancement, in that the ultimate product of an image analysis system is usually numerical output rather than a picture. Image analysis also diverges from classical pattern recognition in that analysis systems, by definition, are not limited to the classification of scene regions into a fixed number of categories, but rather are designed to provide a description of complex scenes whose variety may be enormously large and ill-defined in terms of a priori expectation.
14

MORPHOLOGICAL IMAGE PROCESSING

Morphological image processing is a type of processing in which the spatial form or structure of objects within an image is modified. Dilation, erosion, and skeletonization are three fundamental morphological operations. With dilation, an object grows uniformly in spatial extent, whereas with erosion an object shrinks uniformly. Skeletonization results in a stick figure representation of an object.

The basic concepts of morphological image processing trace back to the research on spatial set algebra by Minkowski (1) and the studies of Matheron (2) on topology. Serra (3–5) developed much of the early foundation of the subject. Steinberg (6,7) was a pioneer in applying morphological methods to medical and industrial vision applications. This research work led to the development of the cytocomputer for high-speed morphological image processing (8,9).

In the following sections, morphological techniques are first described for binary images. Then these morphological concepts are extended to gray scale images.

14.1. BINARY IMAGE CONNECTIVITY

Binary image morphological operations are based on the geometrical relationship or connectivity of pixels that are deemed to be of the same class (10,11). In the binary image of Figure 14.1-1a, the ring of black pixels, by all reasonable definitions of connectivity, divides the image into three segments: the white pixels exterior to the ring, the white pixels interior to the ring, and the black pixels of the ring itself. The pixels within each segment are said to be connected to one another. This concept of connectivity is easily understood for Figure 14.1-1a, but ambiguity arises when considering Figure 14.1-1b. Do the black pixels still define a ring, or do they instead form four disconnected lines? The answers to these questions depend on the definition of connectivity.
FIGURE 14.1-1. Connectivity.

Consider the following neighborhood pixel pattern:

X3 X2 X1
X4 X  X0
X5 X6 X7

in which a binary-valued pixel F(j, k) = X, where X = 0 (white) or X = 1 (black), is surrounded by its eight nearest neighbors X0, X1, …, X7. An alternative nomenclature is to label the neighbors by compass directions: north, northeast, and so on:

NW N NE
W  X E
SW S SE

Pixel X is said to be four-connected to a neighbor if it is a logical 1 and if its east, north, west, or south (X0, X2, X4, X6) neighbor is a logical 1. Pixel X is said to be eight-connected if it is a logical 1 and if its east, northeast, and so on (X0, X1, …, X7) neighbor is a logical 1.

The connectivity relationship between a center pixel and its eight neighbors can be quantified by the concept of a pixel bond, the sum of the bond weights between the center pixel and each of its neighbors. Each four-connected neighbor has a bond of two, and each eight-connected neighbor has a bond of one. In the following example, the pixel bond is seven.

1 1 1
0 X 0
1 1 0
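The bond computation is simple enough to state directly in code. A minimal NumPy sketch (illustrative names; the center pixel is taken to be black, as in the example above, where the function returns 7):

```python
import numpy as np

# Bond weights for the eight neighbors in the X0..X7 (E, NE, N, NW, W, SW, S, SE) order:
# four-connected neighbors (E, N, W, S) weigh 2, diagonal neighbors weigh 1.
OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1), (1, 0), (1, 1)]
WEIGHTS = [2, 1, 2, 1, 2, 1, 2, 1]

def pixel_bond(neigh):
    """Pixel bond of the center of a 3x3 binary array."""
    if neigh[1, 1] == 0:
        return 0
    return sum(w * neigh[1 + dr, 1 + dc] for (dr, dc), w in zip(OFFSETS, WEIGHTS))

example = np.array([[1, 1, 1],
                    [0, 1, 0],
                    [1, 1, 0]])
print(pixel_bond(example))   # 7
```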
FIGURE 14.1-2. Pixel neighborhood connectivity definitions.

Under the definition of four-connectivity, Figure 14.1-1b has four disconnected black line segments, but with the eight-connectivity definition, Figure 14.1-1b has a ring of connected black pixels. Note, however, that under eight-connectivity, all white pixels are connected together. Thus a paradox exists. If the black pixels are to be eight-connected together in a ring, one would expect a division of the white pixels into pixels that are interior and exterior to the ring. To eliminate this dilemma, eight-connectivity can be defined for the black pixels of the object, and four-connectivity can be established for the white pixels of the background. Under this definition, a string of black pixels is said to be minimally connected if elimination of any black pixel results in a loss of connectivity of the remaining black pixels. Figure 14.1-2 provides definitions of several other neighborhood connectivity relationships between a center black pixel and its neighboring black and white pixels.

The preceding definitions concerning connectivity have been based on a discrete image model in which a continuous image field is sampled over a rectangular array of points. Golay (12) has utilized a hexagonal grid structure. With such a structure, many of the connectivity problems associated with a rectangular grid are eliminated. In a hexagonal grid, neighboring pixels are said to be six-connected if they are in the same set and share a common edge boundary. Algorithms have been developed for the linking of boundary points for many feature extraction tasks (13). However, two major drawbacks have hindered wide acceptance of the hexagonal grid. First, most image scanners are inherently limited to rectangular scanning. The second problem is that the hexagonal grid is not well suited to many spatial processing operations, such as convolution and Fourier transformation.
14.2. BINARY IMAGE HIT OR MISS TRANSFORMATIONS

The two basic morphological operations, dilation and erosion, plus many variants can be defined and implemented by hit-or-miss transformations (3). The concept is quite simple. Conceptually, a small odd-sized mask, typically 3 × 3, is scanned over a binary image. If the binary-valued pattern of the mask matches the state of the pixels under the mask (hit), an output pixel in spatial correspondence to the center pixel of the mask is set to some desired binary state. For a pattern mismatch (miss), the output pixel is set to the opposite binary state. For example, to perform simple binary noise cleaning, if the isolated 3 × 3 pixel pattern

0 0 0
0 1 0
0 0 0

is encountered, the output pixel is set to zero; otherwise, the output pixel is set to the state of the input center pixel. In more complicated morphological algorithms, a large number of the 2^9 = 512 possible mask patterns may cause hits. It is often possible to establish simple neighborhood logical relationships that define the conditions for a hit. In the isolated pixel removal example, the defining equation for the output pixel G(j, k) becomes

G(j, k) = X ∩ (X0 ∪ X1 ∪ … ∪ X7)   (14.2-1)

where ∩ denotes the intersection operation (logical AND) and ∪ denotes the union operation (logical OR). For complicated algorithms, the logical equation method of definition can be cumbersome. It is often simpler to regard the hit masks as a collection of binary patterns.

Hit-or-miss morphological algorithms are often implemented in digital image processing hardware by a pixel stacker followed by a look-up table (LUT), as shown in Figure 14.2-1 (14). Each pixel of the input image is a positive integer, represented by a conventional binary code, whose most significant bit is a 1 (black) or a 0 (white). The pixel stacker extracts the bits of the center pixel X and its eight neighbors and puts them in a neighborhood pixel stack. Pixel stacking can be performed by convolution with the 3 × 3 pixel kernel

$\begin{bmatrix} 2^{-4} & 2^{-3} & 2^{-2} \\ 2^{-5} & 2^{0} & 2^{-1} \\ 2^{-6} & 2^{-7} & 2^{-8} \end{bmatrix}$

The binary number state of the neighborhood pixel stack becomes the numeric input address of the LUT, whose entry is Y. For isolated pixel removal, integer entry 256, corresponding to the neighborhood pixel stack state 100000000, contains Y = 0; all other entries contain Y = X.
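The stack-and-look-up structure is easy to emulate in software. The sketch below uses illustrative names and packs the 9-bit address directly from a binary image rather than through the most-significant-bit convolution trick described above; the center pixel X is taken as the most significant bit, with the neighbors in the lower bits (a convention — only entry 256 matters for the isolated-pixel example).

```python
import numpy as np

def hit_or_miss_lut(image, lut):
    """Apply a 512-entry LUT to each 3x3 neighborhood of a binary image."""
    img = np.pad(image.astype(np.uint16), 1)          # zero border
    addr = np.zeros_like(img)
    addr[1:-1, 1:-1] = img[1:-1, 1:-1] << 8            # center pixel X -> bit 8
    offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1),     # X0..X7 = E, NE, N, NW,
               (0, -1), (1, -1), (1, 0), (1, 1)]       #           W, SW, S, SE
    for bit, (dr, dc) in enumerate(offsets):
        addr[1:-1, 1:-1] |= img[1 + dr:img.shape[0] - 1 + dr,
                                1 + dc:img.shape[1] - 1 + dc] << bit
    return lut[addr[1:-1, 1:-1]]

# Isolated pixel removal: every entry returns the center bit, except the
# all-white-neighbors state 100000000 (address 256), which returns 0.
lut = np.array([(a >> 8) & 1 for a in range(512)], dtype=np.uint8)
lut[256] = 0

test = np.zeros((5, 5), dtype=np.uint8)
test[2, 2] = 1                      # an isolated black pixel
print(hit_or_miss_lut(test, lut))   # the isolated pixel is erased
```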
FIGURE 14.2-1. Look-up table flowchart for binary unconditional operations.

Several other 3 × 3 hit-or-miss operators are described in the following subsections.

14.2.1. Additive Operators

Additive hit-or-miss morphological operators cause the center pixel of a 3 × 3 pixel window to be converted from a logical 0 state to a logical 1 state if the neighboring pixels meet certain predetermined conditions. The basic operators are now defined.

Interior Fill. Create a black pixel if all four-connected neighbor pixels are black.

G(j, k) = X ∪ [X0 ∩ X2 ∩ X4 ∩ X6]   (14.2-2)

Diagonal Fill. Create a black pixel if creation eliminates the eight-connectivity of the background.

G(j, k) = X ∪ [P1 ∪ P2 ∪ P3 ∪ P4]   (14.2-3a)
where

P1 = X̄ ∩ X0 ∩ X̄1 ∩ X2   (14.2-3b)
P2 = X̄ ∩ X2 ∩ X̄3 ∩ X4   (14.2-3c)
P3 = X̄ ∩ X4 ∩ X̄5 ∩ X6   (14.2-3d)
P4 = X̄ ∩ X6 ∩ X̄7 ∩ X0   (14.2-3e)

In Eq. 14.2-3, the overbar denotes the logical complement of a variable.

Bridge. Create a black pixel if creation results in connectivity of previously unconnected neighboring black pixels.

G(j, k) = X ∪ [P1 ∪ P2 ∪ … ∪ P6]   (14.2-4a)

where

P1 = X2 ∩ X6 ∩ [X3 ∪ X4 ∪ X5] ∩ [X0 ∪ X1 ∪ X7] ∩ PQ   (14.2-4b)
P2 = X0 ∩ X4 ∩ [X1 ∪ X2 ∪ X3] ∩ [X5 ∪ X6 ∪ X7] ∩ PQ   (14.2-4c)
P3 = X0 ∩ X6 ∩ X7 ∩ [X2 ∪ X3 ∪ X4]   (14.2-4d)
P4 = X0 ∩ X2 ∩ X1 ∩ [X4 ∪ X5 ∪ X6]   (14.2-4e)
P5 = X2 ∩ X4 ∩ X3 ∩ [X0 ∪ X6 ∪ X7]   (14.2-4f)
P6 = X4 ∩ X6 ∩ X5 ∩ [X0 ∪ X1 ∪ X2]   (14.2-4g)

and

PQ = L1 ∪ L2 ∪ L3 ∪ L4   (14.2-4h)

L1 = X ∩ X0 ∩ X1 ∩ X2 ∩ X3 ∩ X4 ∩ X5 ∩ X6 ∩ X7   (14.2-4i)
L2 = X ∩ X0 ∩ X1 ∩ X2 ∩ X3 ∩ X4 ∩ X5 ∩ X6 ∩ X7   (14.2-4j)
L3 = X ∩ X0 ∩ X1 ∩ X2 ∩ X3 ∩ X4 ∩ X5 ∩ X6 ∩ X7   (14.2-4k)
L4 = X ∩ X0 ∩ X1 ∩ X2 ∩ X3 ∩ X4 ∩ X5 ∩ X6 ∩ X7   (14.2-4l)
The following is one of 119 qualifying patterns:

1 0 0
1 0 1
0 0 1

A pattern such as

0 0 0
0 0 0
1 0 1

does not qualify because the two black pixels will be connected when they are on the middle row of a subsequent observation window if they are indeed unconnected.

Eight-Neighbor Dilate. Create a black pixel if at least one eight-connected neighbor pixel is black.

G(j, k) = X ∪ X0 ∪ … ∪ X7   (14.2-5)

This hit-or-miss definition of dilation is a special case of a generalized dilation operator that is introduced in Section 14.4. The dilate operator can be applied recursively. With each iteration, objects will grow by a single-pixel-wide ring of exterior pixels. Figure 14.2-2 shows dilation for one and for three iterations for a binary image. In the example, the original pixels are recorded as black, the background pixels are white, and the added pixels are midgray.

Fatten. Create a black pixel if at least one eight-connected neighbor pixel is black, provided that creation does not result in a bridge between previously unconnected black pixels in a 3 × 3 neighborhood.

The following is an example of an input pattern in which the center pixel would be set black for the basic dilation operator, but not for the fatten operator:

0 0 1
1 0 0
1 1 0

There are 132 such qualifying patterns. This stratagem will not prevent connection of two objects separated by two rows or columns of white pixels. A solution to this problem is considered in Section 14.3. Figure 14.2-3 provides an example of fattening.
FIGURE 14.2-2. Dilation of a binary image: (a) original; (b) one iteration; (c) three iterations.

14.2.2. Subtractive Operators

Subtractive hit-or-miss morphological operators cause the center pixel of a 3 × 3 window to be converted from black to white if its neighboring pixels meet predetermined conditions. The basic subtractive operators are defined below.

Isolated Pixel Remove. Erase a black pixel with eight white neighbors.

G(j, k) = X ∩ [X0 ∪ X1 ∪ … ∪ X7]   (14.2-6)

Spur Remove. Erase a black pixel with a single eight-connected neighbor.
FIGURE 14.2-3. Fattening of a binary image.

The following is one of four qualifying patterns:

0 0 0
0 1 0
1 0 0

Interior Pixel Remove. Erase a black pixel if all four-connected neighbors are black.

G(j, k) = X ∩ [X̄0 ∪ X̄2 ∪ X̄4 ∪ X̄6]   (14.2-7)

There are 16 qualifying patterns.

H-Break. Erase a black pixel that is H-connected. There are two qualifying patterns:

1 1 1     1 0 1
0 1 0     1 1 1
1 1 1     1 0 1

Eight-Neighbor Erode. Erase a black pixel if at least one eight-connected neighbor pixel is white.

G(j, k) = X ∩ X0 ∩ … ∩ X7   (14.2-8)
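Both eight-neighbor operators reduce to elementwise logic over shifted copies of the image. A minimal NumPy sketch, with illustrative function names; border pixels are handled by zero padding, a convention the text does not specify:

```python
import numpy as np

NEIGHBOR_OFFSETS = [(0, 1), (-1, 1), (-1, 0), (-1, -1),
                    (0, -1), (1, -1), (1, 0), (1, 1)]

def shifted(image, dr, dc):
    """Return the image shifted so that each pixel sees its (dr, dc) neighbor."""
    padded = np.pad(image, 1)
    return padded[1 + dr:image.shape[0] + 1 + dr, 1 + dc:image.shape[1] + 1 + dc]

def dilate8(image):
    """Eight-neighbor dilate, Eq. 14.2-5: X OR X0 OR ... OR X7."""
    out = image.astype(bool).copy()
    for dr, dc in NEIGHBOR_OFFSETS:
        out |= shifted(image.astype(bool), dr, dc)
    return out

def erode8(image):
    """Eight-neighbor erode, Eq. 14.2-8: X AND X0 AND ... AND X7."""
    out = image.astype(bool).copy()
    for dr, dc in NEIGHBOR_OFFSETS:
        out &= shifted(image.astype(bool), dr, dc)
    return out

square = np.zeros((7, 7), dtype=bool)
square[2:5, 2:5] = True
print(dilate8(square).sum(), erode8(square).sum())   # 25 1
```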
FIGURE 14.2-4. Erosion of a binary image: (a) original; (b) one iteration; (c) three iterations.

A generalized erosion operator is defined in Section 14.4. Recursive application of the erosion operator will eventually erase all black pixels. Figure 14.2-4 shows results for one and three iterations of the erode operator. The eroded pixels are midgray. It should be noted that after three iterations, the ring is totally eroded.

14.2.3. Majority Black Operator

The following is the definition of the majority black operator:

Majority Black. Create a black pixel if five or more pixels in a 3 × 3 window are black; otherwise, set the output pixel to white.

The majority black operator is useful for filling small holes in objects and closing short gaps in strokes. An example of its application to edge detection is given in Chapter 15.
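The majority black rule is a simple counting filter. A sketch under the same zero-padding assumption as above (names illustrative):

```python
import numpy as np

def majority_black(image):
    """Output black where five or more pixels of the 3x3 window are black."""
    img = np.pad(image.astype(np.uint8), 1)
    count = np.zeros(image.shape, dtype=np.uint8)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            count += img[1 + dr:image.shape[0] + 1 + dr,
                         1 + dc:image.shape[1] + 1 + dc]
    return count >= 5

noisy = np.ones((5, 5), dtype=np.uint8)
noisy[2, 2] = 0                       # a one-pixel hole
print(majority_black(noisy)[2, 2])    # True: the hole is filled
```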
14.3. BINARY IMAGE SHRINKING, THINNING, SKELETONIZING, AND THICKENING

Shrinking, thinning, skeletonizing, and thickening are forms of conditional erosion in which the erosion process is controlled to prevent total erasure and to ensure connectivity.

14.3.1. Binary Image Shrinking

The following is a definition of shrinking:

Shrink. Erase black pixels such that an object without holes erodes to a single pixel at or near its center of mass, and an object with holes erodes to a connected ring lying midway between each hole and its nearest outer boundary.

A 3 × 3 pixel object will be shrunk to a single pixel at its center. A 2 × 2 pixel object will be arbitrarily shrunk, by definition, to a single pixel at its lower right corner.

It is not possible to perform shrinking using single-stage 3 × 3 pixel hit-or-miss transforms of the type described in the previous section. The 3 × 3 window does not provide enough information to prevent total erasure and to ensure connectivity. A 5 × 5 hit-or-miss transform could provide sufficient information to perform proper shrinking. But such an approach would result in excessive computational complexity (i.e., 2^25 possible patterns to be examined!). References 15 and 16 describe two-stage shrinking and thinning algorithms that perform a conditional marking of pixels for erasure in a first stage, and then examine neighboring marked pixels in a second stage to determine which ones can be unconditionally erased without total erasure or loss of connectivity. The following algorithm developed by Pratt and Kabir (17) is a pipeline processor version of the conditional marking scheme.

In the algorithm, two concatenated 3 × 3 hit-or-miss transformations are performed to obtain indirect information about pixel patterns within a 5 × 5 window. Figure 14.3-1 is a flowchart for the look-up table implementation of this algorithm. In the first stage, the states of nine neighboring pixels are gathered together by a pixel stacker, and a following look-up table generates a conditional mark M for possible erasures. Table 14.3-1 lists all patterns, as indicated by the letter S in the table column, that will be conditionally marked for erasure. In the second stage of the algorithm, the center pixel X and the conditional marks in a 3 × 3 neighborhood centered about X are examined to create an output pixel. The shrinking operation can be expressed logically as

G(j, k) = X ∩ [M̄ ∪ P(M, M0, …, M7)]   (14.3-1)

where P(M, M0, …, M7) is an erasure-inhibiting logical variable, as defined in Table 14.3-2. The first four patterns of the table prevent strokes of single pixel width from being totally erased. The remaining patterns inhibit erasure that would break object connectivity. There are a total of 157 inhibiting patterns. This two-stage process must be performed iteratively until there are no further erasures.
FIGURE 14.3-1. Look-up table flowchart for binary conditional mark operations.

As an example, the 2 × 2 square pixel object

1 1
1 1

results in the following intermediate array of conditional marks:

M M
M M

The corner cluster pattern of Table 14.3-2 gives a hit only for the lower right corner mark. The resulting output is

0 0
0 1
TABLE 14.3-1. Shrink, Thin, and Skeletonize Conditional Mark Patterns [M = 1 if hit]

Table Bond Pattern
0 0 1 1 0 0 0 0 0 0 0 0 S 1 0 1 0 0 1 0 0 1 0 0 1 0 0 0 0 0 0 0 1 0 0 0 0 1 0 0 0 0 1 0 0 0 0 0 0 0 S 2 0 1 1 0 1 0 1 1 0 0 1 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 1 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 S 3 0 1 1 0 1 0 0 1 0 1 1 0 1 1 0 0 1 0 0 1 0 0 1 1 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 1 1 0 0 1 1 0 0 1 0 1 0 0 1 0 0 0 0 0 0 0 TK 4 0 1 1 1 1 0 1 1 0 0 1 1 0 0 0 0 0 0 0 1 0 0 1 0 0 0 1 1 1 1 1 0 0 0 0 0 STK 4 0 1 1 0 1 0 1 1 0 0 1 0 0 0 1 0 0 0 1 0 0 1 1 1 1 1 0 0 1 0 0 1 1 0 0 1 ST 5 0 1 1 0 1 1 1 1 0 0 1 1 0 0 0 0 0 1 0 0 0 0 1 0 0 1 1 1 1 0 0 0 0 0 0 0 ST 5 0 1 1 1 1 0 1 1 0 0 1 1 0 0 0 0 0 0 1 1 0 0 1 1 1 1 0 0 1 1 ST 6 0 1 1 1 1 0 0 0 1 1 0 0 1 1 1 0 1 1 1 1 1 1 1 0 1 0 0 0 0 0 0 0 0 0 0 1 STK 6 0 1 1 0 1 1 1 1 0 1 1 0 1 1 0 1 1 0 0 1 1 0 1 1 0 0 0 0 0 1 0 0 0 1 0 0 1 1 0 1 1 1 1 1 1 0 1 1
(Continued)
TABLE 14.3-1 (Continued)

Table Bond Pattern
1 1 1 1 1 1 1 0 0 0 0 1 STK 7 0 1 1 1 1 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 0 0 0 0 STK 8 0 1 1 1 1 1 1 1 0 1 1 1 0 1 1 0 0 0 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 0 0 1 STK 9 0 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 1 0 1 1 1 1 1 1 0 1 1 1 1 1 1 0 0 0 0 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 STK 10 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 K 11 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 0 1 1 1 1 1 1

Figure 14.3-2 shows an example of the shrinking of a binary image for 4 and 13 iterations of the algorithm. No further shrinking occurs for more than 13 iterations. At this point, the shrinking operation has become idempotent (i.e., reapplication evokes no further change). This shrinking algorithm does not shrink the symmetric original ring object to a ring that is also symmetric because of some of the conditional mark patterns of Table 14.3-2, which are necessary to ensure that objects of even dimension shrink to a single pixel. For the same reason, the shrunken ring is not minimally connected.

14.3.2. Binary Image Thinning

The following is a definition of thinning:

Thin. Erase black pixels such that an object without holes erodes to a minimally connected stroke located equidistant from its nearest outer boundaries, and an object with holes erodes to a minimally connected ring midway between each hole and its nearest outer boundary.
TABLE 14.3-2. Shrink and Thin Unconditional Mark Patterns [P(M, M0, M1, M2, M3, M4, M5, M6, M7) = 1 if hit](a)

Pattern
Spur / Single 4-connection: 0 0 M M0 0 0 0 0 0 0 0 0 M0 0 M0 0 M0 0 MM 0 0 0 0 0 0 0 M0 0 0 0
L Cluster (thin only): 0 0 M 0 MM MM0 M0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 MM 0 M0 0 M0 MM0 MM0 0 M0 0 M0 0 MM 0 0 0 0 0 0 0 0 0 0 0 0 M0 0 MM0 0 MM 0 0 M
4-Connected offset: 0 MM MM0 0 M0 0 0 M MM0 0 MM 0 MM 0 MM 0 0 0 0 0 0 0 0 M 0 M0
Spur corner cluster: 0 A M MB 0 0 0 M M0 0 0 MB A M0 A M0 0 MB M0 0 0 0 M MB 0 0 AM
Corner cluster: MMD MMD DDD
Tee branch: D M0 0 MD 0 0 D D0 0 D MD 0 M0 0 M0 D MD MMM MMM MMM MMM MM0 MM0 0 MM 0 MM D0 0 0 0 D 0 MD D M0 0 M0 D MD D MD 0 M0
Vee branch: MD M MD C CBA A DM D MD D MB D MD B MD A B C MD A MD M CDM
Diagonal branch: D M0 0 MD D0 M M0 D 0 MM MM0 MM0 0 MM M0 D D 0 M 0 MD D M0

(a) A ∪ B ∪ C = 1, D = 0 ∪ 1, A ∪ B = 1.
FIGURE 14.3-2. Shrinking of a binary image: (a) four iterations; (b) thirteen iterations.

The following is an example of the thinning of a 3 × 5 pixel object without holes:

before:        after:
1 1 1 1 1      0 0 0 0 0
1 1 1 1 1      0 1 1 1 0
1 1 1 1 1      0 0 0 0 0

A 2 × 5 object is thinned as follows:

before:        after:
1 1 1 1 1      0 0 0 0 0
1 1 1 1 1      0 1 1 1 1

Table 14.3-1 lists the conditional mark patterns, as indicated by the letter T in the table column, for thinning by the conditional mark algorithm of Figure 14.3-1. The shrink and thin unconditional patterns are identical, as shown in Table 14.3-2.

Figure 14.3-3 contains an example of the thinning of a binary image for four and eight iterations. Figure 14.3-4 provides an example of the thinning of an image of a printed circuit board in order to locate solder pads that have been deposited improperly and do not have holes for component leads. The pads with holes erode to a minimally connected ring, while the pads without holes erode to a point.

Thinning can be applied to the background of an image containing several objects as a means of separating the objects. Figure 14.3-5 provides an example of the process. The original image appears in Figure 14.3-5a, and the background-reversed image is Figure 14.3-5b. Figure 14.3-5c shows the effect of thinning the background. The thinned strokes that separate the original objects are minimally
connected, and therefore the background of the separating strokes is eight-connected throughout the image. This is an example of the connectivity ambiguity discussed in Section 14.1. To resolve this ambiguity, a diagonal fill operation can be applied to the thinned strokes. The result, shown in Figure 14.3-5d, is called the exothin of the original image. The name derives from the exoskeleton, discussed in the following section.

FIGURE 14.3-3. Thinning of a binary image: (a) four iterations; (b) eight iterations.

14.3.3. Binary Image Skeletonizing

A skeleton or stick figure representation of an object can be used to describe its structure. Thinned objects sometimes have the appearance of a skeleton, but they are not always uniquely defined. For example, in Figure 14.3-3, both the rectangle and ellipse thin to a horizontal line.

FIGURE 14.3-4. Thinning of a printed circuit board image: (a) original; (b) thinned.
FIGURE 14.3-5. Exothinning of a binary image: (a) original; (b) background-reversed; (c) thinned background; (d) exothin.

Blum (18) has introduced a skeletonizing technique called the medial axis transformation that produces a unique skeleton for a given object. An intuitive explanation of the medial axis transformation is based on the prairie fire analogy (19–22). Consider the circle and rectangle regions of Figure 14.3-6 to be composed of dry grass on a bare dirt background. If a fire were to be started simultaneously on the perimeter of the grass, the fire would proceed to burn toward the center of the regions until all the grass was consumed. In the case of the circle, the fire would burn to the center point of the circle, which is the quench point of the circle. For the rectangle, the fire would proceed from each side. As the fire moved simultaneously from left and top, the fire lines would meet and quench the fire. The quench points or quench lines of a figure are called its medial axis skeleton. More generally, the medial axis skeleton consists of the set of points that are equally distant from the two closest points of an object boundary. The minimal distance function is called the quench distance of the object. From the medial axis skeleton of an object and its quench distance, it is
possible to reconstruct the object boundary. The object boundary is determined by the union of a set of circular disks formed by circumscribing a circle whose radius is the quench distance at each point of the medial axis skeleton.

FIGURE 14.3-6. Medial axis transforms: (a) circle; (b) rectangle.

A reasonably close approximation to the medial axis skeleton can be implemented by a slight variation of the conditional marking implementation shown in Figure 14.3-1. In this approach, an image is iteratively eroded using conditional and unconditional mark patterns until no further erosion occurs. The conditional mark patterns for skeletonization are listed in Table 14.3-1 under the table indicator K. Table 14.3-3 lists the unconditional mark patterns. At the conclusion of the last iteration, it is necessary to perform a single iteration of bridging as defined by Eq. 14.2-4 to restore connectivity, which will be lost whenever the following pattern is encountered:

1 11 11 1 111 1

Inhibiting the following mark pattern created by the bit pattern above:

MM M M

will prevent elliptically shaped objects from being improperly skeletonized.
TABLE 14.3-3. Skeletonize Unconditional Mark Patterns [P(M, M0, M1, M2, M3, M4, M5, M6, M7) = 1 if hit](a)

Pattern
Spur: 0 0 0 0 0 0 0 0 M M 0 0 0 M 0 0 M 0 0 M 0 0 M 0 0 0 M M 0 0 0 0 0 0 0 0
Single 4-connection: 0 0 0 0 0 0 0 0 0 0 M 0 0 M 0 0 M M M M 0 0 M 0 0 M 0 0 0 0 0 0 0 0 0 0
L corner: 0 M 0 0 M 0 0 0 0 0 0 0 0 M M M M 0 0 M M M M 0 0 0 0 0 0 0 0 M 0 0 M 0
Corner cluster: D M M D D D M M D D D D D M M M M D M M D D M M D D D M M D D D D D M M
Tee branch: D M D D M D D D D D M D M M M M M D M M M D M M D 0 0 D M D D M D D M D
Vee branch: M D M M D C C B A A D M D M D D M B D M D B M D A B C M D A M D M C D M
Diagonal branch: D M 0 0 M D D 0 M M 0 D 0 M M M M 0 M M 0 0 M M M 0 D D 0 M 0 M D D M 0

(a) A ∪ B ∪ C = 1, D = 0 ∪ 1.
FIGURE 14.3-7. Skeletonizing of a binary image: (a) four iterations; (b) ten iterations.

Figure 14.3-7 shows an example of the skeletonization of a binary image. The eroded pixels are midgray. It should be observed that skeletonizing gives different results than thinning for many objects. Prewitt (23, p. 136) has coined the term exoskeleton for the skeleton of the background of objects in a scene. The exoskeleton partitions each object from neighboring objects, as does the thinning of the background.

14.3.4. Binary Image Thickening

In Section 14.2.1, the fatten operator was introduced as a means of dilating objects such that objects separated by a single pixel stroke would not be fused. But the fatten operator does not prevent fusion of objects separated by a double-width white stroke. This problem can be solved by iteratively thinning the background of an image and then performing a diagonal fill operation. This process, called thickening, when taken to its idempotent limit, forms the exothin of the image, as discussed in Section 14.3.2. Figure 14.3-8 provides an example of thickening. The exothin operation is repeated three times on the background-reversed version of the original image. Figure 14.3-8b shows the final result obtained by reversing the background of the exothinned image.
FIGURE 14.3-8. Thickening of a binary image: (a) original; (b) thickened.

14.4. BINARY IMAGE GENERALIZED DILATION AND EROSION

Dilation and erosion, as defined earlier in terms of hit-or-miss transformations, are limited to object modification by a single ring of boundary pixels during each iteration of the process. The operations can be generalized.

Before proceeding further, it is necessary to introduce some fundamental concepts of image set algebra that are the basis for defining the generalized dilation and erosion operators. Consider a binary-valued source image function F(j, k). A pixel at coordinate (j, k) is a member of F(j, k), as indicated by the symbol ∈, if and only if it is a logical 1. A binary-valued image B(j, k) is a subset of a binary-valued image A(j, k), as indicated by B(j, k) ⊆ A(j, k), if for every spatial occurrence of a logical 1 of B(j, k), A(j, k) is a logical 1. The complement F̄(j, k) of F(j, k) is a binary-valued image whose pixels are in the opposite logical state of those in F(j, k). Figure 14.4-1 shows an example of the complement process and other image set algebraic operations on a pair of binary images. A reflected image F̃(j, k) is an image that has been flipped from left to right and from top to bottom. Figure 14.4-2 provides an example of image reflection. Translation of an image, as indicated by the function

G(j, k) = T_{r,c}{F(j, k)}   (14.4-1)

consists of spatially offsetting F(j, k) with respect to itself by r rows and c columns, where −R ≤ r ≤ R and −C ≤ c ≤ C. Figure 14.4-2 presents an example of the translation of a binary image.
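These set operations map directly onto Boolean array manipulations. A short NumPy sketch (illustrative names; the translation is zero-filled at the borders, a detail the text leaves open):

```python
import numpy as np

def complement(F):
    """Complement: every pixel switches logical state."""
    return ~F

def reflect(F):
    """Reflection: flip left-to-right and top-to-bottom."""
    return F[::-1, ::-1]

def translate(F, r, c):
    """Translation T_{r,c}: offset F by r rows and c columns, zero filled."""
    G = np.zeros_like(F)
    rows, cols = F.shape
    G[max(r, 0):rows + min(r, 0), max(c, 0):cols + min(c, 0)] = \
        F[max(-r, 0):rows + min(-r, 0), max(-c, 0):cols + min(-c, 0)]
    return G

F = np.zeros((5, 5), dtype=bool)
F[1, 1] = True
print(translate(F, 2, 1)[3, 2])   # True: the pixel moved down 2, right 1
```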
FIGURE 14.4-1. Image set algebraic operations on binary arrays.

14.4.1. Generalized Dilation

Generalized dilation is expressed symbolically as

G(j, k) = F(j, k) ⊕ H(j, k)   (14.4-2)

where F(j, k) for 1 ≤ j, k ≤ N is a binary-valued image and H(j, k) for 1 ≤ j, k ≤ L, where L is an odd integer, is a binary-valued array called a structuring element. For notational simplicity, F(j, k) and H(j, k) are assumed to be square arrays. Generalized dilation can be defined mathematically and implemented in several ways. The Minkowski addition definition (1) is

$G(j, k) = \bigcup_{(r, c) \in H} T_{r, c}\{F(j, k)\}$   (14.4-3)
It states that G(j, k) is formed by the union of all translates of F(j, k) with respect to itself in which the translation distance is the row and column index of the pixels of H(j, k) that are a logical 1. Figure 14.4-3 illustrates the concept. Equation 14.4-3 results in an M × M output array G(j, k) that is justified with the upper left corner of the input array F(j, k). The output array is of dimension M = N + L − 1, where L is the size of the structuring element. In order to register the input and output images properly, F(j, k) should be translated diagonally right by Q = (L − 1)/2 pixels. Figure 14.4-3 shows the exclusive-OR difference between G(j, k) and the translate of F(j, k). This operation identifies those pixels that have been added as a result of generalized dilation.

FIGURE 14.4-2. Reflection and translation of a binary array.

An alternative definition of generalized dilation is based on the scanning and processing of F(j, k) by the structuring element H(j, k). With this approach, generalized dilation is formulated as (17)

$G(j, k) = \bigcup_{m} \bigcup_{n} F(m, n) \cap H(j - m + 1, k - n + 1)$   (14.4-4)

With reference to Eq. 7.1-7, the spatial limits of the union combination are

MAX{1, j − L + 1} ≤ m ≤ MIN{N, j}   (14.4-5a)
MAX{1, k − L + 1} ≤ n ≤ MIN{N, k}   (14.4-5b)

Equation 14.4-4 provides an output array that is justified with the upper left corner of the input array. In image processing systems, it is often convenient to center the input and output images and to limit their size to the same overall dimension. This can be accomplished easily by modifying Eq. 14.4-4 to the form

$G(j, k) = \bigcup_{m} \bigcup_{n} F(m, n) \cap H(j - m + S, k - n + S)$   (14.4-6)
FIGURE 14.4-3. Generalized dilation computed by Minkowski addition.

where S = (L − 1)/2 and, from Eq. 7.1-10, the limits of the union combination are

MAX{1, j − Q} ≤ m ≤ MIN{N, j + Q}   (14.4-7a)
MAX{1, k − Q} ≤ n ≤ MIN{N, k + Q}   (14.4-7b)
and where Q = (L − 1)/2. Equation 14.4-6 applies for S ≤ j, k ≤ N − Q and G(j, k) = 0 elsewhere. The Minkowski addition definition of generalized dilation given in Eq. 14.4-3 can be modified to provide a centered result by taking the translations about the center of the structuring element. In the following discussion, only the centered definitions of generalized dilation will be utilized.

In the special case for which L = 3, Eq. 14.4-6 can be expressed explicitly as

G(j, k) = [H(3, 3) ∩ F(j − 1, k − 1)] ∪ [H(3, 2) ∩ F(j − 1, k)] ∪ [H(3, 1) ∩ F(j − 1, k + 1)]
        ∪ [H(2, 3) ∩ F(j, k − 1)] ∪ [H(2, 2) ∩ F(j, k)] ∪ [H(2, 1) ∩ F(j, k + 1)]
        ∪ [H(1, 3) ∩ F(j + 1, k − 1)] ∪ [H(1, 2) ∩ F(j + 1, k)] ∪ [H(1, 1) ∩ F(j + 1, k + 1)]   (14.4-8)

If H(j, k) = 1 for 1 ≤ j, k ≤ 3, then G(j, k), as computed by Eq. 14.4-8, gives the same result as hit-or-miss dilation, as defined by Eq. 14.2-5. It is interesting to compare Eqs. 14.4-6 and 14.4-8, which define generalized dilation, and Eqs. 7.1-14 and 7.1-15, which define convolution. In the generalized dilation equation, the union operations are analogous to the summation operations of convolution, while the intersection operation is analogous to point-by-point multiplication. As with convolution, dilation can be conceived as the scanning and processing of F(j, k) by H(j, k) rotated by 180°.

14.4.2. Generalized Erosion

Generalized erosion is expressed symbolically as

G(j, k) = F(j, k) ⊖ H(j, k)   (14.4-9)

where again H(j, k) is an odd-size L × L structuring element. Serra (3) has adopted, as his definition for erosion, the dual relationship of the Minkowski addition of Eq. 14.4-3, which was introduced by Hadwiger (24). By this formulation, generalized erosion is defined to be

$G(j, k) = \bigcap_{(r, c) \in H} T_{r, c}\{F(j, k)\}$   (14.4-10)

The meaning of this relation is that erosion of F(j, k) by H(j, k) is the intersection of all translates of F(j, k) in which the translation distance is the row and column index of pixels of H(j, k) that are in the logical 1 state. Steinberg et al. (6,25) have adopted the subtly different formulation
$G(j, k) = \bigcap_{(r, c) \in \tilde{H}} T_{r, c}\{F(j, k)\}$   (14.4-11)

introduced by Matheron (2), in which the translates of F(j, k) are governed by the reflection H̃(j, k) of the structuring element rather than by H(j, k) itself. Using the Steinberg definition, G(j, k) is a logical 1 if and only if the logical 1s of H(j, k) form a subset of the spatially corresponding pattern of the logical 1s of F(j, k) as H(j, k) is scanned over F(j, k). It should be noted that the logical zeros of H(j, k) do not have to match the logical zeros of F(j, k). With the Serra definition, the statements above hold when F(j, k) is scanned and processed by the reflection of the structuring element. Figure 14.4-4 presents a comparison of the erosion results for the two definitions of erosion. Clearly, the results are inconsistent.

FIGURE 14.4-4. Comparison of erosion results for two definitions of generalized erosion.

Pratt (26) has proposed a relation, which is the dual to the generalized dilation expression of Eq. 14.4-6, as a definition of generalized erosion. By this formulation, generalized erosion in centered form is

$G(j, k) = \bigcap_{m} \bigcap_{n} F(m, n) \cup \bar{H}(j - m + S, k - n + S)$   (14.4-12)

where S = (L − 1)/2, and the limits of the intersection combination are given by Eq. 14.4-7. In the special case for which L = 3, Eq. 14.4-12 becomes
G(j, k) = [H̄(3, 3) ∪ F(j − 1, k − 1)] ∩ [H̄(3, 2) ∪ F(j − 1, k)] ∩ [H̄(3, 1) ∪ F(j − 1, k + 1)]
        ∩ [H̄(2, 3) ∪ F(j, k − 1)] ∩ [H̄(2, 2) ∪ F(j, k)] ∩ [H̄(2, 1) ∪ F(j, k + 1)]
        ∩ [H̄(1, 3) ∪ F(j + 1, k − 1)] ∩ [H̄(1, 2) ∪ F(j + 1, k)] ∩ [H̄(1, 1) ∪ F(j + 1, k + 1)]   (14.4-13)

If H(j, k) = 1 for 1 ≤ j, k ≤ 3, Eq. 14.4-13 gives the same result as hit-or-miss eight-neighbor erosion as defined by Eq. 14.2-8. Pratt's definition is the same as the Serra definition. However, Eq. 14.4-12 can easily be modified by substituting the reflection H̃(j, k) for H(j, k) to provide equivalency with the Steinberg definition.

Unfortunately, the literature utilizes both definitions, which can lead to confusion.

FIGURE 14.4-5. Generalized dilation and erosion for a 5 × 5 structuring element.

The definition adopted in this book is that of Hadwiger, Serra, and Pratt, because the
defining relationships (Eq. 14.4-10 or 14.4-12) are duals to their counterparts for generalized dilation (Eq. 14.4-3 or 14.4-6).

Figure 14.4-5 shows examples of generalized dilation and erosion for a symmetric 5 × 5 structuring element.

14.4.3. Properties of Generalized Dilation and Erosion

Consideration is now given to several mathematical properties of generalized dilation and erosion. Proofs of these properties are found in Reference 25. For notational simplicity, in this subsection the spatial coordinates of a set are dropped, i.e., A(j, k) = A.

Dilation is commutative:

A ⊕ B = B ⊕ A   (14.4-14a)

But in general, erosion is not commutative:

A ⊖ B ≠ B ⊖ A   (14.4-14b)

Dilation and erosion are increasing operations in the sense that if A ⊆ B, then

A ⊕ C ⊆ B ⊕ C   (14.4-15a)
A ⊖ C ⊆ B ⊖ C   (14.4-15b)

Dilation and erosion are opposite in effect; dilation of the background of an object behaves like erosion of the object. This statement can be quantified by the duality relationship

$\overline{A \ominus B} = \bar{A} \oplus B$   (14.4-16)

For the Steinberg definition of erosion, B on the right-hand side of Eq. 14.4-16 should be replaced by its reflection B̃. Figure 14.4-6 contains an example of the duality relationship.

The dilation and erosion of the intersection and union of sets obey the following relations:

[A ∩ B] ⊕ C ⊆ [A ⊕ C] ∩ [B ⊕ C]   (14.4-17a)
[A ∩ B] ⊖ C = [A ⊖ C] ∩ [B ⊖ C]   (14.4-17b)
[A ∪ B] ⊕ C = [A ⊕ C] ∪ [B ⊕ C]   (14.4-17c)
[A ∪ B] ⊖ C ⊇ [A ⊖ C] ∪ [B ⊖ C]   (14.4-17d)
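The translate-and-combine definitions (Eqs. 14.4-3 and 14.4-10) and the duality of Eq. 14.4-16 can be checked with a few lines of NumPy. The sketch below (illustrative names) treats out-of-frame pixels as background zeros and takes translation indices relative to the center of an L × L structuring element, which matches the centered convention of the text only up to that border assumption.

```python
import numpy as np

def translate(F, r, c):
    """Offset F by r rows and c columns, zero filled at the borders."""
    G = np.zeros_like(F)
    rows, cols = F.shape
    G[max(r, 0):rows + min(r, 0), max(c, 0):cols + min(c, 0)] = \
        F[max(-r, 0):rows + min(-r, 0), max(-c, 0):cols + min(-c, 0)]
    return G

def dilate(F, H):
    """Generalized dilation (Eq. 14.4-3): union of translates of F over the 1s of H."""
    Q = H.shape[0] // 2
    out = np.zeros_like(F)
    for r, c in zip(*np.nonzero(H)):
        out |= translate(F, r - Q, c - Q)
    return out

def erode(F, H):
    """Generalized erosion (Eq. 14.4-10): intersection of translates of F over the 1s of H."""
    Q = H.shape[0] // 2
    out = np.ones_like(F)
    for r, c in zip(*np.nonzero(H)):
        out &= translate(F, r - Q, c - Q)
    return out

F = np.zeros((9, 9), dtype=bool)
F[3:6, 3:6] = True
H = np.ones((3, 3), dtype=bool)
# Duality check of Eq. 14.4-16 (interior pixels, away from the zero-padded border):
lhs = ~erode(F, H)
rhs = dilate(~F, H)
print(np.array_equal(lhs[1:-1, 1:-1], rhs[1:-1, 1:-1]))   # True
```

For an asymmetric structuring element, the sign convention chosen for the translation offsets determines whether H or its reflection governs the result, which is exactly the Serra versus Steinberg distinction discussed above.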
FIGURE 14.4-6. Duality relationship between dilation and erosion.

The dilation and erosion of a set by the intersection of two other sets satisfy these containment relations:

A ⊕ [B ∩ C] ⊆ [A ⊕ B] ∩ [A ⊕ C]   (14.4-18a)
A ⊖ [B ∩ C] ⊇ [A ⊖ B] ∪ [A ⊖ C]   (14.4-18b)

On the other hand, dilation and erosion of a set by the union of a pair of sets are governed by the equality relations

A ⊕ [B ∪ C] = [A ⊕ B] ∪ [A ⊕ C]   (14.4-19a)
A ⊖ [B ∪ C] = [A ⊖ B] ∩ [A ⊖ C]   (14.4-19b)

The following chain rules hold for dilation and erosion:

A ⊕ [B ⊕ C] = [A ⊕ B] ⊕ C   (14.4-20a)
A ⊖ [B ⊕ C] = [A ⊖ B] ⊖ C   (14.4-20b)

14.4.4. Structuring Element Decomposition

Equation 14.4-20 is important because it indicates that if an L × L structuring element can be expressed as

H(j, k) = K1(j, k) ⊕ … ⊕ Kq(j, k) ⊕ … ⊕ KQ(j, k)   (14.4-21)
where Kq(j, k) is a small structuring element, it is possible to perform dilation and erosion by operating on an image sequentially. In Eq. 14.4-21, if the small structuring elements Kq(j, k) are all 3 × 3 arrays, then Q = (L − 1)/2. Figure 14.4-7 gives several examples of small structuring element decomposition.

FIGURE 14.4-7. Structuring element decomposition.

Sequential small structuring element (SSE) dilation and erosion is analogous to small generating kernel (SGK) convolution as given by Eq. 9.6-1. Not every large impulse response array can be decomposed exactly into a sequence of SGK convolutions; similarly, not every large structuring element can be decomposed into a sequence of SSE dilations or erosions. Zhuang and Haralick (27) have developed a computational search method to find an SSE decomposition into 1 × 2 and 2 × 1 elements.
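The practical payoff of the chain rule of Eq. 14.4-20a is that dilation by a large structuring element can be replaced by a cascade of dilations by small ones whenever the large element is itself the dilation of the small ones (Eq. 14.4-21). A quick numerical check of this, here using SciPy's binary morphology routines rather than the text's own formulation:

```python
import numpy as np
from scipy.ndimage import binary_dilation

rng = np.random.default_rng(0)
A = rng.random((32, 32)) > 0.8          # a random binary test image

K1 = np.ones((3, 3), dtype=bool)        # two small structuring elements
K2 = np.ones((3, 3), dtype=bool)

# Build H = K1 (+) K2, a 5x5 block of ones, by dilating a padded copy of K1 with K2.
H = binary_dilation(np.pad(K1, 1), structure=K2)

direct = binary_dilation(A, structure=H)                                     # A (+) H
cascade = binary_dilation(binary_dilation(A, structure=K1), structure=K2)    # [A (+) K1] (+) K2
print(np.array_equal(direct, cascade))   # True: Eq. 14.4-20a
```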
Following is an example in which a 5 × 5 structuring element cannot be decomposed into the sequential dilation of two 3 × 3 SSEs:

1 1 1 1 1
1 0 0 0 1
1 0 0 0 1
1 0 0 0 1
1 1 1 1 1

FIGURE 14.4-8. Small structuring element decomposition of a 5 × 5 pixel ring.

For two-dimensional convolution, it is possible to decompose any large impulse response array into a set of sequential SGKs that are computed in parallel and
summed together using the singular-value decomposition/small generating kernel (SVD/SGK) algorithm, as illustrated by the flowchart of Figure 9.6-2. It is logical to conjecture as to whether an analog to the SVD/SGK algorithm exists for dilation and erosion. Equation 14.4-19 suggests that such an algorithm may exist. Figure 14.4-8 illustrates an SSE decomposition of the 5 × 5 ring example based on Eqs. 14.4-19a and 14.4-21. Unfortunately, no systematic method has yet been found to decompose an arbitrarily large structuring element.

14.5. BINARY IMAGE CLOSE AND OPEN OPERATIONS

Dilation and erosion are often applied to an image in concatenation. Dilation followed by erosion is called a close operation. It is expressed symbolically as

G(j, k) = F(j, k) • H(j, k)   (14.5-1a)

where H(j, k) is an L × L structuring element. In accordance with the Serra formulation of erosion, the close operation is defined as

G(j, k) = [F(j, k) ⊕ H(j, k)] ⊖ H̃(j, k)   (14.5-1b)

where it should be noted that erosion is performed with the reflection of the structuring element. Closing of an image with a compact structuring element without holes (zeros), such as a square or circle, smooths contours of objects, eliminates small holes in objects, and fuses short gaps between objects.

An open operation, expressed symbolically as

G(j, k) = F(j, k) ○ H(j, k)   (14.5-2a)

consists of erosion followed by dilation. It is defined as

G(j, k) = [F(j, k) ⊖ H(j, k)] ⊕ H̃(j, k)   (14.5-2b)

where again, the erosion is with the reflection of the structuring element. Opening of an image smooths contours of objects, eliminates small objects, and breaks narrow strokes.

The close operation tends to increase the spatial extent of an object, while the open operation decreases its spatial extent. In quantitative terms,

F(j, k) • H(j, k) ⊇ F(j, k)   (14.5-3a)
F(j, k) ○ H(j, k) ⊆ F(j, k)   (14.5-3b)
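Close and open are one-line compositions once dilation and erosion are available. A sketch using SciPy's binary routines with a symmetric structuring element, for which the reflection in Eqs. 14.5-1b and 14.5-2b is immaterial; the test also confirms the containment relations of Eq. 14.5-3 on this example (function names are illustrative):

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def close(F, H):
    """Close (Eq. 14.5-1b): dilation followed by erosion."""
    return binary_erosion(binary_dilation(F, structure=H), structure=H)

def open_(F, H):
    """Open (Eq. 14.5-2b): erosion followed by dilation."""
    return binary_dilation(binary_erosion(F, structure=H), structure=H)

H = np.ones((3, 3), dtype=bool)          # symmetric, so reflection does not matter
F = np.zeros((12, 12), dtype=bool)
F[2:9, 2:9] = True
F[5, 5] = False                          # a small hole: removed by closing
F[10, 10] = True                         # an isolated speck: removed by opening

print(close(F, H)[5, 5], open_(F, H)[10, 10])                 # True False
print((close(F, H) >= F).all(), (open_(F, H) <= F).all())     # Eq. 14.5-3: True True
```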
FIGURE 14.5-1. Close and open operations on a binary image: (a) original; (b) close; (c) overlay of original and close; (d) open; (e) overlay of original and open.
It can be shown that the close and open operations are stable in the sense that (25)

[F(j, k) • H(j, k)] • H(j, k) = F(j, k) • H(j, k)   (14.5-4a)
[F(j, k) ○ H(j, k)] ○ H(j, k) = F(j, k) ○ H(j, k)   (14.5-4b)

Also, it can easily be shown that the open and close operations satisfy the following duality relationship:

$\overline{F(j, k) \bullet H(j, k)} = \bar{F}(j, k) \circ H(j, k)$   (14.5-5)

Figure 14.5-1 presents examples of the close and open operations on a binary image.

14.6. GRAY SCALE IMAGE MORPHOLOGICAL OPERATIONS

Morphological concepts can be extended to gray scale images, but the extension often leads to theoretical issues and to implementation complexities. When applied to a binary image, dilation and erosion operations cause an image to increase or decrease in spatial extent, respectively. To generalize these concepts to a gray scale image, it is assumed that the image contains visually distinct gray scale objects set against a gray background. Also, it is assumed that the objects and background are both relatively spatially smooth. Under these conditions, it is reasonable to ask: Why not just threshold the image and perform binary image morphology? The reason for not taking this approach is that the thresholding operation often introduces significant error in segmenting objects from the background. This is especially true when the gray scale image contains shading caused by nonuniform scene illumination.

14.6.1. Gray Scale Image Dilation and Erosion

Dilation or erosion of an image could, in principle, be accomplished by hit-or-miss transformations in which the quantized gray scale patterns are examined in a 3 × 3 window and an output pixel is generated for each pattern. This approach is, however, not computationally feasible. For example, if a look-up table implementation were to be used, the table would require 2^72 entries for 256-level quantization of each pixel! The common alternative is to use gray scale extremum operations over a 3 × 3 pixel neighborhood.

Consider a gray scale image F(j, k) quantized to an arbitrary number of gray levels. According to the extremum method of gray scale image dilation, the dilation operation is defined as

G(j, k) = MAX{F(j, k), F(j, k + 1), F(j − 1, k + 1), …, F(j + 1, k + 1)}   (14.6-1)
where MAX{S1, …, S9} generates the largest-amplitude pixel of the nine pixels in the neighborhood. If F(j, k) is quantized to only two levels, Eq. 14.6-1 provides the same result as that using binary image dilation as defined by Eq. 14.2-5.

FIGURE 14.6-1. One-dimensional gray scale image dilation on a printed circuit board image: (a) original; (b) original profile; (c) one iteration; (d) two iterations; (e) three iterations.
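A direct NumPy rendering of the 3 × 3 extremum dilation of Eq. 14.6-1 (illustrative names; edge pixels are handled by replicating the border, one of several reasonable conventions the text does not fix):

```python
import numpy as np

def gray_dilate3x3(F):
    """Gray scale dilation, Eq. 14.6-1: 3x3 moving-window maximum."""
    padded = np.pad(F, 1, mode='edge')
    out = F.copy()
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            np.maximum(out,
                       padded[1 + dr:F.shape[0] + 1 + dr,
                              1 + dc:F.shape[1] + 1 + dc],
                       out=out)
    return out

profile = np.array([[0, 0, 5, 0, 0, 9, 0, 0]], dtype=float)
print(gray_dilate3x3(profile))   # peaks spread by one pixel: [[0. 5. 5. 5. 9. 9. 9. 0.]]
```

Replacing np.maximum with np.minimum gives the dual MIN operation described next.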
By the extremum method, gray scale image erosion is defined as

G(j, k) = MIN{F(j, k), F(j, k + 1), F(j − 1, k + 1), …, F(j + 1, k + 1)}   (14.6-2)

where MIN{S1, …, S9} generates the smallest-amplitude pixel of the nine pixels in the 3 × 3 pixel neighborhood. If F(j, k) is binary-valued, then Eq. 14.6-2 gives the same result as hit-or-miss erosion as defined in Eq. 14.2-8.

In Chapter 10, when discussing the pseudomedian, it was shown that the MAX and MIN operations can be computed sequentially. As a consequence, Eqs. 14.6-1 and 14.6-2 can be applied iteratively to an image. For example, three iterations give the same result as a single iteration using a 7 × 7 moving-window MAX or MIN operator. By selectively excluding some of the terms S1, …, S9 of Eq. 14.6-1 or 14.6-2 during each iteration, it is possible to synthesize large nonsquare gray scale structuring elements in the same manner as illustrated in Figure 14.4-7 for binary structuring elements. However, no systematic decomposition procedure has yet been developed.

Figures 14.6-1 and 14.6-2 show the amplitude profile of a row of a gray scale image of a printed circuit board (PCB) after several dilation and erosion iterations. The row selected is indicated by the white horizontal line in Figure 14.6-1a. In Figure 14.6-3, two-dimensional gray scale dilation and erosion are performed on the PCB image.

14.6.2. Gray Scale Image Close and Open Operators

The close and open operations introduced in Section 14.5 for binary images can easily be extended to gray scale images. Gray scale closing is realized by first performing gray scale dilation with a gray scale structuring element, then gray scale erosion with the same structuring element. Similarly, gray scale opening is accomplished by gray scale erosion followed by gray scale dilation. Figure 14.6-3 gives examples of gray scale image closing and opening.

Steinberg (28) has introduced the use of three-dimensional structuring elements for gray scale image closing and opening operations. Although the concept is well defined mathematically, it is simpler to describe in terms of a structural image model. Consider a gray scale image to be modeled as an array of closely packed square pegs, each of which is proportional in height to the amplitude of a corresponding pixel. Then a three-dimensional structuring element, for example a sphere, is placed over each peg. The bottom of the structuring element as it is translated over the peg array forms another spatially discrete surface, which is the close array of the original image. A spherical structuring element will touch pegs at peaks of the original peg array, but will not touch pegs at the bottom of steep valleys. Consequently, the close surface “fills in” dark spots in the original image. The opening of a gray scale image can be conceptualized in a similar manner. An original image is modeled as a peg array in which the height of each peg is inversely proportional to
  • 443. 438 MORPHOLOGICAL IMAGE PROCESSING erosion profile 1 iteration (a ) One iteration erosion profile 2 iterations erosion profile 3 iterations (b ) Two iterations (c ) Three iterations FIGURE 14.6-2. One-dimensional gray scale image erosion on a printed circuit board image. the amplitude of each corresponding pixel (i.e., the gray scale is subtractively inverted). The translated structuring element then forms the open surface of the orig- inal image. For a spherical structuring element, bright spots in the original image are made darker. 14.6.3. Conditional Gray Scale Image Morphological Operators There have been attempts to develop morphological operators for gray scale images that are analogous to binary image shrinking, thinning, skeletonizing, and thicken- ing. The stumbling block to these extensions is the lack of a definition for connec- tivity of neighboring gray scale pixels. Serra (4) has proposed approaches based on topographic mapping techniques. Another approach is to iteratively perform the basic dilation and erosion operations on a gray scale image and then use a binary thresholded version of the resultant image to determine connectivity at each iteration.
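For the gray scale close and open operations just described, a minimal sketch using SciPy's gray-scale morphology routines is given below; the flat 5 × 5 square structuring element mirrors the example of Figure 14.6-3 and is an illustrative assumption rather than a requirement of the method.

```python
import numpy as np
from scipy import ndimage

def gray_close_open(image, size=5):
    """Gray scale close = dilation followed by erosion; gray scale open =
    erosion followed by dilation, both with the same flat square
    structuring element."""
    close = ndimage.grey_erosion(
        ndimage.grey_dilation(image, size=(size, size)), size=(size, size))
    open_ = ndimage.grey_dilation(
        ndimage.grey_erosion(image, size=(size, size)), size=(size, size))
    # Equivalent single-call forms: ndimage.grey_closing / ndimage.grey_opening.
    return close, open_
```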
FIGURE 14.6-3. Two-dimensional gray scale image dilation, erosion, close, and open on a printed circuit board image: (a) original, (b) 5 × 5 square dilation, (c) 5 × 5 square erosion, (d) 5 × 5 square close, (e) 5 × 5 square open.
REFERENCES

1. H. Minkowski, "Volumen und Oberfläche," Mathematische Annalen, 57, 1903, 447–459.
2. G. Matheron, Random Sets and Integral Geometry, Wiley, New York, 1975.
3. J. Serra, Image Analysis and Mathematical Morphology, Vol. 1, Academic Press, London, 1982.
4. J. Serra, Image Analysis and Mathematical Morphology: Theoretical Advances, Vol. 2, Academic Press, London, 1988.
5. J. Serra, "Introduction to Mathematical Morphology," Computer Vision, Graphics, and Image Processing, 35, 3, September 1986, 283–305.
6. S. R. Steinberg, "Parallel Architectures for Image Processing," Proc. 3rd International IEEE Compsac, Chicago, 1981.
7. S. R. Steinberg, "Biomedical Image Processing," IEEE Computer, January 1983, 22–34.
8. S. R. Steinberg, "Automatic Image Processor," US patent 4,167,728.
9. R. M. Lougheed and D. L. McCubbrey, "The Cytocomputer: A Practical Pipelined Image Processor," Proc. 7th Annual International Symposium on Computer Architecture, 1980.
10. A. Rosenfeld, "Connectivity in Digital Pictures," J. Association for Computing Machinery, 17, 1, January 1970, 146–160.
11. A. Rosenfeld, Picture Processing by Computer, Academic Press, New York, 1969.
12. M. J. E. Golay, "Hexagonal Pattern Transformation," IEEE Trans. Computers, C-18, 8, August 1969, 733–740.
13. K. Preston, Jr., "Feature Extraction by Golay Hexagonal Pattern Transforms," IEEE Trans. Computers, C-20, 9, September 1971, 1007–1014.
14. F. A. Gerritsen and P. W. Verbeek, "Implementation of Cellular Logic Operators Using 3 × 3 Convolutions and Lookup Table Hardware," Computer Vision, Graphics, and Image Processing, 27, 1, 1984, 115–123.
15. A. Rosenfeld, "A Characterization of Parallel Thinning Algorithms," Information and Control, 29, 1975, 286–291.
16. T. Pavlidis, "A Thinning Algorithm for Discrete Binary Images," Computer Graphics and Image Processing, 13, 2, 1980, 142–157.
17. W. K. Pratt and I. Kabir, "Morphological Binary Image Processing with a Local Neighborhood Pipeline Processor," Computer Graphics, Tokyo, 1984.
18. H. Blum, "A Transformation for Extracting New Descriptors of Shape," in Symposium Models for Perception of Speech and Visual Form, W. Whaten-Dunn, Ed., MIT Press, Cambridge, MA, 1967.
19. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley-Interscience, New York, 1973.
20. L. Calabi and W. E. Harnett, "Shape Recognition, Prairie Fires, Convex Deficiencies and Skeletons," American Mathematical Monthly, 75, 4, April 1968, 335–342.
21. J. C. Mott-Smith, "Medial Axis Transforms," in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
22. C. Arcelli and G. Sanniti Di Baja, "On the Sequential Approach to Medial Line Thinning Transformation," IEEE Trans. Systems, Man and Cybernetics, SMC-8, 2, 1978, 139–144.
23. J. M. S. Prewitt, "Object Enhancement and Extraction," in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
24. H. Hadwiger, Vorlesungen über Inhalt, Oberfläche und Isoperimetrie, Springer-Verlag, Berlin, 1957.
25. R. M. Haralick, S. R. Steinberg, and X. Zhuang, "Image Analysis Using Mathematical Morphology," IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-9, 4, July 1987, 532–550.
26. W. K. Pratt, "Image Processing with Primitive Computational Elements," McMaster University, Hamilton, Ontario, Canada, 1987.
27. X. Zhuang and R. M. Haralick, "Morphological Structuring Element Decomposition," Computer Vision, Graphics, and Image Processing, 35, 3, September 1986, 370–382.
28. S. R. Steinberg, "Grayscale Morphology," Computer Vision, Graphics, and Image Processing, 35, 3, September 1986, 333–355.
  • 447. Digital Image Processing: PIKS Inside, Third Edition. William K. Pratt Copyright © 2001 John Wiley & Sons, Inc. ISBNs: 0-471-37407-5 (Hardback); 0-471-22132-5 (Electronic) 15 EDGE DETECTION Changes or discontinuities in an image amplitude attribute such as luminance or tri- stimulus value are fundamentally important primitive characteristics of an image because they often provide an indication of the physical extent of objects within the image. Local discontinuities in image luminance from one level to another are called luminance edges. Global luminance discontinuities, called luminance boundary seg- ments, are considered in Section 17.4. In this chapter the definition of a luminance edge is limited to image amplitude discontinuities between reasonably smooth regions. Discontinuity detection between textured regions is considered in Section 17.5. This chapter also considers edge detection in color images, as well as the detection of lines and spots within an image. 15.1. EDGE, LINE, AND SPOT MODELS Figure 15.1-1a is a sketch of a continuous domain, one-dimensional ramp edge modeled as a ramp increase in image amplitude from a low to a high level, or vice versa. The edge is characterized by its height, slope angle, and horizontal coordinate of the slope midpoint. An edge exists if the edge height is greater than a specified value. An ideal edge detector should produce an edge indication localized to a single pixel located at the midpoint of the slope. If the slope angle of Figure 15.1-1a is 90°, the resultant edge is called a step edge, as shown in Figure 15.1-1b. In a digital imaging system, step edges usually exist only for artificially generated images such as test patterns and bilevel graphics data. Digital images, resulting from digitization of optical images of real scenes, generally do not possess step edges because the anti aliasing low-pass filtering prior to digitization reduces the edge slope in the digital image caused by any sudden luminance change in the scene. The one-dimensional 443
  • 448. 444 EDGE DETECTION FIGURE 15.1-1. One-dimensional, continuous domain edge and line models. profile of a line is shown in Figure 15.1-1c. In the limit, as the line width w approaches zero, the resultant amplitude discontinuity is called a roof edge. Continuous domain, two-dimensional models of edges and lines assume that the amplitude discontinuity remains constant in a small neighborhood orthogonal to the edge or line profile. Figure 15.1-2a is a sketch of a two-dimensional edge. In addi- tion to the edge parameters of a one-dimensional edge, the orientation of the edge slope with respect to a reference axis is also important. Figure 15.1-2b defines the edge orientation nomenclature for edges of an octagonally shaped object whose amplitude is higher than its background. Figure 15.1-3 contains step and unit width ramp edge models in the discrete domain. The vertical ramp edge model in the figure contains a single transition pixel whose amplitude is at the midvalue of its neighbors. This edge model can be obtained by performing a 2 × 2 pixel moving window average on the vertical step edge
  • 449. EDGE, LINE, AND SPOT MODELS 445 FIGURE 15.1-2. Two-dimensional, continuous domain edge model. model. The figure also contains two versions of a diagonal ramp edge. The single- pixel transition model contains a single midvalue transition pixel between the regions of high and low amplitude; the smoothed transition model is generated by a 2 × 2 pixel moving window average of the diagonal step edge model. Figure 15.1-3 also presents models for a discrete step and ramp corner edge. The edge location for discrete step edges is usually marked at the higher-amplitude side of an edge transi- tion. For the single-pixel transition model and the smoothed transition vertical and corner edge models, the proper edge location is at the transition pixel. The smoothed transition diagonal ramp edge model has a pair of adjacent pixels in its transition zone. The edge is usually marked at the higher-amplitude pixel of the pair. In Figure 15.1-3 the edge pixels are italicized. Discrete two-dimensional single-pixel line models are presented in Figure 15.1-4 for step lines and unit width ramp lines. The single-pixel transition model has a mid- value transition pixel inserted between the high value of the line plateau and the low-value background. The smoothed transition model is obtained by performing a 2 × 2 pixel moving window average on the step line model.
  • 450. 446 EDGE DETECTION FIGURE 15.1-3. Two-dimensional, discrete domain edge models. A spot, which can only be defined in two dimensions, consists of a plateau of high amplitude against a lower amplitude background, or vice versa. Figure 15.1-5 presents single-pixel spot models in the discrete domain. There are two generic approaches to the detection of edges, lines, and spots in a luminance image: differential detection and model fitting. With the differential detection approach, as illustrated in Figure 15.1-6, spatial processing is performed on an original image F ( j, k ) to produce a differential image G ( j, k ) with accentu- ated spatial amplitude changes. Next, a differential detection operation is executed to determine the pixel locations of significant differentials. The second general approach to edge, line, or spot detection involves fitting of a local region of pixel values to a model of the edge, line, or spot, as represented in Figures 15.1-1 to 15.1-5. If the fit is sufficiently close, an edge, line, or spot is said to exist, and its assigned parameters are those of the appropriate model. A binary indicator map E ( j, k ) is often generated to indicate the position of edges, lines, or spots within an
  • 451. EDGE, LINE, AND SPOT MODELS 447 FIGURE 15.1-4. Two-dimensional, discrete domain line models. image. Typically, edge, line, and spot locations are specified by black pixels against a white background. There are two major classes of differential edge detection: first- and second-order derivative. For the first-order class, some form of spatial first-order differentiation is performed, and the resulting edge gradient is compared to a threshold value. An edge is judged present if the gradient exceeds the threshold. For the second-order derivative class of differential edge detection, an edge is judged present if there is a significant spatial change in the polarity of the second derivative. Sections 15.2 and 15.3 discuss the first- and second-order derivative forms of edge detection, respectively. Edge fitting methods of edge detection are considered in Section 15.4.
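The differential detection approach of Figure 15.1-6 can be summarized by the following sketch: spatial processing produces a differential image G(j, k), and thresholding yields the binary indicator map E(j, k). Any of the gradient operators of Section 15.2 may be substituted for the 3 × 3 Sobel used here purely as a placeholder, and the threshold is left as a free parameter.

```python
import numpy as np
from scipy import ndimage

def differential_edge_map(F, threshold):
    """Differential detection: form a differential image G(j, k) with
    accentuated amplitude changes, then mark pixels whose response exceeds
    a threshold in a binary indicator map E(j, k)."""
    F = F.astype(float)
    GR = ndimage.sobel(F, axis=1)   # row (horizontal) gradient, unnormalized
    GC = ndimage.sobel(F, axis=0)   # column (vertical) gradient, unnormalized
    G = np.hypot(GR, GC)            # gradient amplitude
    E = G >= threshold              # binary edge indicator map
    return G, E
```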
  • 452. 448 EDGE DETECTION FIGURE 15.1-5. Two-dimensional, discrete domain single pixel spot models. 15.2. FIRST-ORDER DERIVATIVE EDGE DETECTION There are two fundamental methods for generating first-order derivative edge gradi- ents. One method involves generation of gradients in two orthogonal directions in an image; the second utilizes a set of directional derivatives.
FIGURE 15.1-6. Differential edge, line, and spot detection.

15.2.1. Orthogonal Gradient Generation

An edge in a continuous domain edge segment F(x, y) such as the one depicted in Figure 15.1-2a can be detected by forming the continuous one-dimensional gradient G(x, y) along a line normal to the edge slope, which is at an angle θ with respect to the horizontal axis. If the gradient is sufficiently large (i.e., above some threshold value), an edge is deemed present. The gradient along the line normal to the edge slope can be computed in terms of the derivatives along orthogonal axes according to the following (1, p. 106):

G(x, y) = \frac{\partial F(x, y)}{\partial x} \cos θ + \frac{\partial F(x, y)}{\partial y} \sin θ    (15.2-1)

Figure 15.2-1 describes the generation of an edge gradient G(x, y) in the discrete domain in terms of a row gradient G_R(j, k) and a column gradient G_C(j, k). The spatial gradient amplitude is given by

G(j, k) = [ [G_R(j, k)]^2 + [G_C(j, k)]^2 ]^{1/2}    (15.2-2)

For computational efficiency, the gradient amplitude is sometimes approximated by the magnitude combination

G(j, k) = |G_R(j, k)| + |G_C(j, k)|    (15.2-3)

FIGURE 15.2-1. Orthogonal gradient generation.
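A minimal sketch of the two combination rules of Eqs. 15.2-2 and 15.2-3, together with the orientation formula of Eq. 15.2-4 that follows, is given below; it assumes the row and column gradients have already been computed by one of the operators described in this section.

```python
import numpy as np

def combine_gradients(GR, GC):
    """Combine row and column gradient arrays into an edge gradient
    amplitude and orientation."""
    amp_sqrt = np.sqrt(GR**2 + GC**2)   # square-root combination, Eq. 15.2-2
    amp_mag = np.abs(GR) + np.abs(GC)   # magnitude approximation, Eq. 15.2-3
    theta = np.arctan2(GC, GR)          # orientation w.r.t. the row axis
                                        # (Eq. 15.2-4; arctan2 resolves quadrant)
    return amp_sqrt, amp_mag, theta
```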
  • 454. 450 EDGE DETECTION The orientation of the spatial gradient with respect to the row axis is  G C ( j, k )  θ ( j, k ) = arc tan  -------------------  - (15.2-4)  G R ( j, k )  The remaining issue for discrete domain orthogonal gradient generation is to choose a good discrete approximation to the continuous differentials of Eq. 15.2-1. The simplest method of discrete gradient generation is to form the running differ- ence of pixels along rows and columns of the image. The row gradient is defined as G R ( j, k ) = F ( j, k ) – F ( j, k – 1 ) (15.2-5a) and the column gradient is G C ( j, k ) = F ( j, k ) – F ( j + 1, k ) (15.2-5b) These definitions of row and column gradients, and subsequent extensions, are cho- sen such that GR and GC are positive for an edge that increases in amplitude from left to right and from bottom to top in an image. As an example of the response of a pixel difference edge detector, the following is the row gradient along the center row of the vertical step edge model of Figure 15.1-3: 0 0 0 0 h 0 0 0 0 In this sequence, h = b – a is the step edge height. The row gradient for the vertical ramp edge model is 0 0 0 0 h h 0 0 0 -- -- - - 2 2 For ramp edges, the running difference edge detector cannot localize the edge to a single pixel. Figure 15.2-2 provides examples of horizontal and vertical differencing gradients of the monochrome peppers image. In this and subsequent gradient display photographs, the gradient range has been scaled over the full contrast range of the photograph. It is visually apparent from the photograph that the running difference technique is highly susceptible to small fluctuations in image luminance and that the object boundaries are not well delineated.
  • 455. FIRST-ORDER DERIVATIVE EDGE DETECTION 451 (a) Original (b) Horizontal magnitude (c) Vertical magnitude FIGURE 15.2-2. Horizontal and vertical differencing gradients of the peppers_mon image. Diagonal edge gradients can be obtained by forming running differences of diag- onal pairs of pixels. This is the basis of the Roberts (2) cross-difference operator, which is defined in magnitude form as G ( j, k ) = G 1 ( j, k ) + G 2 ( j, k ) (15.2-6a) and in square-root form as 2 2 1⁄2 G ( j , k ) = [ [ G 1 ( j, k ) ] + [ G 2 ( j, k ) ] ] (15.2-6b)
where

G_1(j, k) = F(j, k) − F(j + 1, k + 1)    (15.2-6c)

G_2(j, k) = F(j, k + 1) − F(j + 1, k)    (15.2-6d)

The edge orientation with respect to the row axis is

θ(j, k) = \frac{π}{4} + \arctan\left[ \frac{G_2(j, k)}{G_1(j, k)} \right]    (15.2-7)

Figure 15.2-3 presents the edge gradients of the peppers image for the Roberts operators. Visually, the objects in the image appear to be slightly better distinguished with the Roberts square-root gradient than with the magnitude gradient. In Section 15.5, a quantitative evaluation of edge detectors confirms the superiority of the square-root combination technique.

The pixel difference method of gradient generation can be modified to localize the edge center of the ramp edge model of Figure 15.1-3 by forming the pixel difference separated by a null value. The row and column gradients then become

G_R(j, k) = F(j, k + 1) − F(j, k − 1)    (15.2-8a)

G_C(j, k) = F(j − 1, k) − F(j + 1, k)    (15.2-8b)

The row gradient response for a vertical ramp edge model is then

0  0  h/2  h  h/2  0  0

FIGURE 15.2-3. Roberts gradients of the peppers_mon image: (a) magnitude, (b) square root.
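The Roberts cross-difference operator of Eq. 15.2-6 reduces to a few array subtractions; the sketch below leaves the last row and column of the gradient arrays at zero, an arbitrary border convention.

```python
import numpy as np

def roberts_gradient(F):
    """Roberts cross-difference gradients (Eq. 15.2-6).  G1 and G2 are the
    two diagonal differences; the last row and column are left at zero."""
    F = F.astype(float)
    G1 = np.zeros_like(F)
    G2 = np.zeros_like(F)
    G1[:-1, :-1] = F[:-1, :-1] - F[1:, 1:]   # F(j, k) - F(j+1, k+1)
    G2[:-1, :-1] = F[:-1, 1:] - F[1:, :-1]   # F(j, k+1) - F(j+1, k)
    magnitude = np.abs(G1) + np.abs(G2)      # Eq. 15.2-6a
    square_root = np.sqrt(G1**2 + G2**2)     # Eq. 15.2-6b
    return magnitude, square_root
```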
FIGURE 15.2-4. Numbering convention for 3 × 3 edge detection operators.

Although the ramp edge is properly localized, the separated pixel difference gradient generation method remains highly sensitive to small luminance fluctuations in the image. This problem can be alleviated by using two-dimensional gradient formation operators that perform differentiation in one coordinate direction and spatial averaging in the orthogonal direction simultaneously. Prewitt (1, p. 108) has introduced a 3 × 3 pixel edge gradient operator described by the pixel numbering convention of Figure 15.2-4. The Prewitt operator square-root edge gradient is defined as

G(j, k) = [ [G_R(j, k)]^2 + [G_C(j, k)]^2 ]^{1/2}    (15.2-9a)

with

G_R(j, k) = \frac{1}{K + 2} [ (A_2 + K A_3 + A_4) − (A_0 + K A_7 + A_6) ]    (15.2-9b)

G_C(j, k) = \frac{1}{K + 2} [ (A_0 + K A_1 + A_2) − (A_6 + K A_5 + A_4) ]    (15.2-9c)

where K = 1. In this formulation, the row and column gradients are normalized to provide unit-gain positive and negative weighted averages about a separated edge position. The Sobel operator edge detector (3, p. 271) differs from the Prewitt edge detector in that the values of the north, south, east, and west pixels are doubled (i.e., K = 2). The motivation for this weighting is to give equal importance to each pixel in terms of its contribution to the spatial gradient. Frei and Chen (4) have proposed north, south, east, and west weightings by K = \sqrt{2} so that the gradient is the same for horizontal, vertical, and diagonal edges. The edge gradient G(j, k) for these three operators along a row through the single-pixel transition vertical ramp edge model of Figure 15.1-3 is

0  0  h/2  h  h/2  0  0

Along a row through the single transition pixel diagonal ramp edge model, the gradient is

0   \frac{h}{\sqrt{2}(2 + K)}   \frac{h}{\sqrt{2}}   \frac{\sqrt{2}(1 + K)h}{2 + K}   \frac{h}{\sqrt{2}}   \frac{h}{\sqrt{2}(2 + K)}   0

FIGURE 15.2-5. Prewitt, Sobel, and Frei–Chen gradients of the peppers_mon image: (a) Prewitt, (b) Sobel, (c) Frei–Chen.

In the Frei–Chen operator with K = \sqrt{2}, the edge gradient is the same at the edge center for the single-pixel transition vertical and diagonal ramp edge models. The Prewitt gradient for a diagonal edge is 0.94 times that of a vertical edge. The
  • 459. FIRST-ORDER DERIVATIVE EDGE DETECTION 455 corresponding factor for a Sobel edge detector is 1.06. Consequently, the Prewitt operator is more sensitive to horizontal and vertical edges than to diagonal edges; the reverse is true for the Sobel operator. The gradients along a row through the smoothed transition diagonal ramp edge model are different for vertical and diago- nal edges for all three of the 3 × 3 edge detectors. None of them are able to localize the edge to a single pixel. Figure 15.2-5 shows examples of the Prewitt, Sobel, and Frei–Chen gradients of the peppers image. The reason that these operators visually appear to better delin- eate object edges than the Roberts operator is attributable to their larger size, which provides averaging of small luminance fluctuations. The row and column gradients for all the edge detectors mentioned previously in this subsection involve a linear combination of pixels within a small neighborhood. Consequently, the row and column gradients can be computed by the convolution relationships G R ( j, k ) = F ( j , k ) ᭺ H R ( j, k ) * (15.2-10a) G C ( j, k ) = F ( j, k ) ᭺ H C ( j, k ) * (15.2-10b) where H R ( j, k ) and H C ( j, k ) are 3 × 3 row and column impulse response arrays, respectively, as defined in Figure 15.2-6. It should be noted that this specification of the gradient impulse response arrays takes into account the 180° rotation of an impulse response array inherent to the definition of convolution in Eq. 7.1-14. A limitation common to the edge gradient generation operators previously defined is their inability to detect accurately edges in high-noise environments. This problem can be alleviated by properly extending the size of the neighborhood opera- tors over which the differential gradients are computed. As an example, a Prewitt- type 7 × 7 operator has a row gradient impulse response of the form 1 1 1 0 –1 –1 –1 1 1 1 0 –1 –1 –1 1 1 1 0 –1 –1 –1 1 H R = ----- - 1 1 1 0 –1 –1 –1 (15.2-11) 21 1 1 1 0 –1 –1 –1 1 1 1 0 –1 –1 –1 1 1 1 0 –1 –1 –1 An operator of this type is called a boxcar operator. Figure 15.2-7 presents the box- car gradient of a 7 × 7 array.
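The convolution formulation of Eq. 15.2-10 is convenient to prototype. The sketch below parameterizes the 3 × 3 masks by K as in Eq. 15.2-9 (K = 1 Prewitt, K = 2 Sobel, K = √2 Frei–Chen) and also constructs a boxcar row-gradient array in the spirit of Eq. 15.2-11. The masks are applied here by correlation rather than convolution, which absorbs the 180° rotation mentioned above; the exact layout of Figure 15.2-6 is assumed, not reproduced.

```python
import numpy as np
from scipy import ndimage

def gradient_masks(K=1.0):
    """Row and column gradient masks in the form of Eq. 15.2-9:
    K = 1 (Prewitt), K = 2 (Sobel), K = sqrt(2) (Frei-Chen).
    Written as correlation masks: right-minus-left and top-minus-bottom."""
    HR = np.array([[-1.0, 0.0, 1.0],
                   [  -K, 0.0,   K],
                   [-1.0, 0.0, 1.0]]) / (K + 2.0)
    HC = np.array([[ 1.0,   K,  1.0],
                   [ 0.0, 0.0,  0.0],
                   [-1.0,  -K, -1.0]]) / (K + 2.0)
    return HR, HC

def boxcar_row_mask(size=7):
    """Boxcar row-gradient array in the spirit of Eq. 15.2-11, written as a
    correlation mask (positive weights on the right so the response is
    positive for an edge that increases from left to right)."""
    half = size // 2
    HR = np.zeros((size, size))
    HR[:, :half] = -1.0
    HR[:, half + 1:] = 1.0
    return HR / (size * half)            # unit positive and negative gain

def gradients(F, HR, HC):
    """Row and column gradients of Eq. 15.2-10, computed by correlation so
    that the 180-degree rotation inherent to convolution is already folded in."""
    F = F.astype(float)
    GR = ndimage.correlate(F, HR, mode='nearest')
    GC = ndimage.correlate(F, HC, mode='nearest')
    return GR, GC
```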
  • 460. 456 EDGE DETECTION FIGURE 15.2-6. Impulse response arrays for 3 × 3 orthogonal differential gradient edge operators. Abdou (5) has suggested a truncated pyramid operator that gives a linearly decreasing weighting to pixels away from the center of an edge. The row gradient impulse response array for a 7 × 7 truncated pyramid operator is given by 1 1 1 0 –1 –1 –1 1 2 2 0 –2 –2 –1 1 2 3 0 –3 –2 –1 1 H R = ----- - 1 2 3 0 –3 –2 –1 (15.2-12) 34 1 2 3 0 –3 –2 –1 1 2 2 0 –2 –2 –1 1 1 1 0 –1 –1 –1
FIGURE 15.2-7. Boxcar, truncated pyramid, Argyle, Macleod, and FDOG gradients of the peppers_mon image: (a) 7 × 7 boxcar, (b) 9 × 9 truncated pyramid, (c) 11 × 11 Argyle, s = 2.0, (d) 11 × 11 Macleod, s = 2.0, (e) 11 × 11 FDOG, s = 2.0.
  • 462. 458 EDGE DETECTION Argyle (6) and Macleod (7,8) have proposed large neighborhood Gaussian-shaped weighting functions as a means of noise suppression. Let 2 –1 ⁄ 2 2 g ( x, s ) = [ 2πs ] exp { – 1 ⁄ 2 ( x ⁄ s ) } (15.2-13) denote a continuous domain Gaussian function with standard deviation s. Utilizing this notation, the Argyle operator horizontal coordinate impulse response array can be expressed as a sampled version of the continuous domain impulse response  – 2 g ( x, s )g ( y, t ) for x ≥ 0 (15.2-14a)  H R ( j, k ) =   2g ( x, s )g ( y, t )  for x < 0 (15.2-14b) where s and t are spread parameters. The vertical impulse response function can be expressed similarly. The Macleod operator horizontal gradient impulse response function is given by H R ( j, k ) = [ g ( x + s, s ) – g ( x – s, s ) ]g ( y, t ) (15.2-15) The Argyle and Macleod operators, unlike the boxcar operator, give decreasing importance to pixels far removed from the center of the neighborhood. Figure 15.2-7 provides examples of the Argyle and Macleod gradients. Extended-size differential gradient operators can be considered to be compound operators in which a smoothing operation is performed on a noisy image followed by a differentiation operation. The compound gradient impulse response can be written as H ( j, k ) = H G ( j, k ) ᭺ H S ( j, k ) ‫ء‬ (15.2-16) where H G ( j, k ) is one of the gradient impulse response operators of Figure 15.2-6 and H S ( j, k ) is a low-pass filter impulse response. For example, if H S ( j, k ) is the 3 × 3 Prewitt row gradient operator and H S ( j, k ) = 1 ⁄ 9 , for all ( j, k ) , is a 3 × 3 uni- form smoothing operator, the resultant 5 × 5 row gradient operator, after normaliza- tion to unit positive and negative gain, becomes 1 1 0 –1 –1 2 2 0 –2 –2 1 H R = ----- - 3 3 0 –3 –3 (15.2-17) 18 2 2 0 –2 –2 1 1 0 –1 –1
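The compound-operator construction of Eq. 15.2-16 can be checked numerically. Assuming a unit-gain 3 × 3 Prewitt row-gradient array and a 1/9 uniform smoothing array, their two-dimensional convolution reproduces, after renormalization to unit positive and negative gain, the 5 × 5 array of Eq. 15.2-17.

```python
import numpy as np
from scipy.signal import convolve2d

# Compound gradient operator (Eq. 15.2-16): smoothing combined with differentiation.
prewitt_row = np.array([[1, 0, -1],
                        [1, 0, -1],
                        [1, 0, -1]]) / 3.0    # assumed 3 x 3 Prewitt row-gradient array
uniform = np.full((3, 3), 1.0 / 9.0)          # 3 x 3 uniform smoothing array

H5 = convolve2d(prewitt_row, uniform)         # full 5 x 5 compound operator

# Renormalize so the positive weights sum to one; scaling by 18 then shows the
# integer pattern of Eq. 15.2-17 (rows weighted 1, 2, 3, 2, 1).
H5_normalized = H5 / H5[H5 > 0].sum()
print(np.round(H5_normalized * 18, 3))
```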
  • 463. FIRST-ORDER DERIVATIVE EDGE DETECTION 459 The decomposition of Eq. 15.2-16 applies in both directions. By applying the SVD/ SGK decomposition of Section 9.6, it is possible, for example, to decompose a 5 × 5 boxcar operator into the sequential convolution of a 3 × 3 smoothing kernel and a 3 × 3 differentiating kernel. A well-known example of a compound gradient operator is the first derivative of Gaussian (FDOG) operator, in which Gaussian-shaped smoothing is followed by differentiation (9). The FDOG continuous domain horizontal impulse response is – ∂[ g ( x, s )g ( y, t ) ] H R ( j, k ) = ------------------------------------------ - (15.2-18a) ∂x which upon differentiation yields – xg ( x, s )g ( y, t ) HR ( j, k ) = ------------------------------------- - (15.2-18b) 2 s Figure 15.2-7 presents an example of the FDOG gradient. All of the differential edge enhancement operators presented previously in this subsection have been derived heuristically. Canny (9) has taken an analytic approach to the design of such operators. Canny's development is based on a one- dimensional continuous domain model of a step edge of amplitude hE plus additive white Gaussian noise with standard deviation σ n . It is assumed that edge detection is performed by convolving a one-dimensional continuous domain noisy edge signal f ( x ) with an antisymmetric impulse response function h ( x ) , which is of zero amplitude outside the range [ – W, W ] . An edge is marked at the local maximum of the convolved gradient f ( x ) ᭺ h ( x ). The Canny operator impulse response h ( x ) is * chosen to satisfy the following three criteria. 1. Good detection. The amplitude signal-to-noise ratio (SNR) of the gradient is maximized to obtain a low probability of failure to mark real edge points and a low probability of falsely marking nonedge points. The SNR for the model is hE S ( h ) SNR = ---------------- - (15.2-19a) σn with 0 ∫– W h ( x ) d x S ( h ) = ---------------------------------- (15.2-19b) W 2 ∫ [ h ( x ) ] dx –W
  • 464. 460 EDGE DETECTION 2. Good localization. Edge points marked by the operator should be as close to the center of the edge as possible. The localization factor is defined as hE L ( h ) LOC = ----------------- (15.2-20a) σn with h′ ( 0 ) L ( h ) = ------------------------------------- - (15.2-20b) W 2 ∫ –W [ h′ ( x ) ] dx where h′ ( x ) is the derivative of h ( x ) . 3. Single response. There should be only a single response to a true edge. The distance between peaks of the gradient when only noise is present, denoted as xm, is set to some fraction k of the operator width factor W. Thus x m = kW (15.2-21) Canny has combined these three criteria by maximizing the product S ( h )L ( h ) subject to the constraint of Eq. 15.2-21. Because of the complexity of the formulation, no analytic solution has been found, but a variational approach has been developed. Figure 15.2-8 contains plots of the Canny impulse response functions in terms of xm. FIGURE 15.2-8. Comparison of Canny and first derivative of Gaussian impulse response functions.
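A sampled FDOG row-gradient kernel in the spirit of Eq. 15.2-18b can be generated as follows; the default 11 × 11 size and s = t = 2.0 match the examples of Figure 15.2-7, and the leading sign only fixes the gradient polarity convention.

```python
import numpy as np

def gauss(x, s):
    """Continuous domain Gaussian of Eq. 15.2-13 with spread s."""
    return np.exp(-0.5 * (x / s) ** 2) / np.sqrt(2.0 * np.pi * s * s)

def fdog_row_kernel(size=11, s=2.0, t=2.0):
    """Sampled first-derivative-of-Gaussian row gradient impulse response
    (Eq. 15.2-18b): Gaussian smoothing across rows, differentiation along
    columns.  'size' should be large enough to avoid truncation artifacts."""
    half = size // 2
    x = np.arange(-half, half + 1, dtype=float)   # horizontal offsets
    y = np.arange(-half, half + 1, dtype=float)   # vertical offsets
    gx = -x * gauss(x, s) / (s * s)               # derivative factor in x
    gy = gauss(y, t)                              # smoothing factor in y
    return np.outer(gy, gx)                       # size x size array H_R(j, k)
```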
  • 465. FIRST-ORDER DERIVATIVE EDGE DETECTION 461 As noted from the figure, for low values of xm, the Canny function resembles a box- car function, while for xm large, the Canny function is closely approximated by a FDOG impulse response function. Discrete domain versions of the large operators defined in the continuous domain can be obtained by sampling their continuous impulse response functions over some W × W window. The window size should be chosen sufficiently large that truncation of the impulse response function does not cause high-frequency artifacts. Demigny and Kamie (10) have developed a discrete version of Canny’s criteria, which lead to the computation of discrete domain edge detector impulse response arrays. 15.2.2. Edge Template Gradient Generation With the orthogonal differential edge enhancement techniques discussed previously, edge gradients are computed in two orthogonal directions, usually along rows and columns, and then the edge direction is inferred by computing the vector sum of the gradients. Another approach is to compute gradients in a large number of directions by convolution of an image with a set of template gradient impulse response arrays. The edge template gradient is defined as G ( j, k ) = MAX { G 1 ( j, k ) , …, G m ( j, k ) , …, G M ( j, k ) } (15.2-22a) where G m ( j, k ) = F ( j , k ) ᭺ H m ( j, k ) ‫ء‬ (15.2-22b) is the gradient in the mth equispaced direction obtained by convolving an image with a gradient impulse response array Hm ( j, k ). The edge angle is determined by the direction of the largest gradient. Figure 15.2-9 defines eight gain-normalized compass gradient impulse response arrays suggested by Prewitt (1, p. 111). The compass names indicate the slope direc- tion of maximum response. Kirsch (11) has proposed a directional gradient defined by 7   G ( j, k ) = MAX  5S i – 3T i  (15.2-23a) i = 0   where Si = A i + Ai + 1 + A i + 2 (15.2-23b) T i = A i + 3 + Ai + 4 + A i + 5 + Ai + 5 + A i + 6 (15.2-23c)
  • 466. 462 EDGE DETECTION FIGURE 15.2-9. Template gradient 3 × 3 impulse response arrays. The subscripts of A i are evaluated modulo 8. It is possible to compute the Kirsch gradient by convolution as in Eq. 15.2-22b. Figure 15.2-9 specifies the gain-normal- ized Kirsch operator impulse response arrays. This figure also defines two other sets of gain-normalized impulse response arrays proposed by Robinson (12), called the Robinson three-level operator and the Robinson five-level operator, which are derived from the Prewitt and Sobel operators, respectively. Figure 15.2-10 provides a comparison of the edge gradients of the peppers image for the four 3 × 3 template gradient operators.
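Template gradient generation per Eq. 15.2-22 amounts to taking the largest magnitude response over a set of directional masks. The sketch below generates the eight 45° rotations by stepping the outer ring of a base 3 × 3 mask; the Robinson three-level base mask and its 1/3 gain normalization are assumptions about Figure 15.2-9, not a transcription of it.

```python
import numpy as np
from scipy import ndimage

def compass_masks(base):
    """Generate eight 45-degree rotations of a 3 x 3 template mask by stepping
    its outer ring of eight coefficients one position at a time."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    masks = []
    for shift in range(8):
        m = base.copy()
        vals = [base[ring[(i - shift) % 8]] for i in range(8)]
        for (r, c), v in zip(ring, vals):
            m[r, c] = v
        masks.append(m)
    return masks

def template_gradient(F, base):
    """Edge template gradient of Eq. 15.2-22: the largest magnitude response
    over the directional masks; the winning index gives the edge direction
    in 45-degree increments."""
    F = F.astype(float)
    responses = np.stack([np.abs(ndimage.correlate(F, m, mode='nearest'))
                          for m in compass_masks(base)])
    return responses.max(axis=0), responses.argmax(axis=0)

# Assumed Robinson three-level base mask, derived from the Prewitt operator.
robinson3 = np.array([[1.0, 0.0, -1.0],
                      [1.0, 0.0, -1.0],
                      [1.0, 0.0, -1.0]]) / 3.0
```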
  • 467. FIRST-ORDER DERIVATIVE EDGE DETECTION 463 (a) Prewitt compass gradient (b) Kirsch (c) Robinson three-level (d) Robinson five-level FIGURE 15.2-10. 3 × 3 template gradients of the peppers_mon image. Nevatia and Babu (13) have developed an edge detection technique in which the gain-normalized 5 × 5 masks defined in Figure 15.2-11 are utilized to detect edges in 30° increments. Figure 15.2-12 shows the template gradients for the peppers image. Larger template masks will provide both a finer quantization of the edge ori- entation angle and a greater noise immunity, but the computational requirements increase. Paplinski (14) has developed a design procedure for n-directional template masks of arbitrary size. 15.2.3. Threshold Selection After the edge gradient is formed for the differential edge detection methods, the gradient is compared to a threshold to determine if an edge exists. The threshold value determines the sensitivity of the edge detector. For noise-free images, the
FIGURE 15.2-11. Nevatia–Babu template gradient impulse response arrays.
  • 469. FIRST-ORDER DERIVATIVE EDGE DETECTION 465 FIGURE 15.2-12. Nevatia–Babu gradient of the peppers_mon image. threshold can be chosen such that all amplitude discontinuities of a minimum con- trast level are detected as edges, and all others are called nonedges. With noisy images, threshold selection becomes a trade-off between missing valid edges and creating noise-induced false edges. Edge detection can be regarded as a hypothesis-testing problem to determine if an image region contains an edge or contains no edge (15). Let P(edge) and P(no- edge) denote the a priori probabilities of these events. Then the edge detection pro- cess can be characterized by the probability of correct edge detection, ∞ PD = ∫t p ( G edge ) dG (15.2-24a) and the probability of false detection, ∞ PF = ∫t p ( G no – e dge ) dG (15.2-24b) where t is the edge detection threshold and p(G|edge) and p(G|no-edge) are the con- ditional probability densities of the edge gradient G ( j, k ). Figure 15.2-13 is a sketch of typical edge gradient conditional densities. The probability of edge misclassifica- tion error can be expressed as P E = ( 1 – PD )P ( edge ) + ( P F )P ( no – e dge ) (15.2-25)
  • 470. 466 EDGE DETECTION FIGURE 15.2-13. Typical edge gradient conditional probability densities. This error will be minimum if the threshold is chosen such that an edge is deemed present when p ( G edge ) P ( no – e dge ) ------------------------------------ ≥ ------------------------------ - - (15.2-26) p ( G no – e dge ) P ( edge ) and the no-edge hypothesis is accepted otherwise. Equation 15.2-26 defines the well-known maximum likelihood ratio test associated with the Bayes minimum error decision rule of classical decision theory (16). Another common decision strat- egy, called the Neyman–Pearson test, is to choose the threshold t to minimize P F for a fixed acceptable P D (16). Application of a statistical decision rule to determine the threshold value requires knowledge of the a priori edge probabilities and the conditional densities of the edge gradient. The a priori probabilities can be estimated from images of the class under analysis. Alternatively, the a priori probability ratio can be regarded as a sensitivity control factor for the edge detector. The conditional densities can be determined, in principle, for a statistical model of an ideal edge plus noise. Abdou (5) has derived these densities for 2 × 2 and 3 × 3 edge detection operators for the case of a ramp edge of width w = 1 and additive Gaussian noise. Henstock and Chelberg (17) have used gamma densities as models of the conditional probability densities. There are two difficulties associated with the statistical approach of determining the optimum edge detector threshold: reliability of the stochastic edge model and analytic difficulties in deriving the edge gradient conditional densities. Another approach, developed by Abdou and Pratt (5,15), which is based on pattern recogni- tion techniques, avoids the difficulties of the statistical method. The pattern recogni- tion method involves creation of a large number of prototype noisy image regions, some of which contain edges and some without edges. These prototypes are then used as a training set to find the threshold that minimizes the classification error. Details of the design procedure are found in Reference 5. Table 15.2-1
TABLE 15.2-1. Threshold Levels and Associated Edge Detection Probabilities for 3 × 3 Edge Detectors as Determined by the Abdou and Pratt Pattern Recognition Design Procedure

                                        Vertical Edge                          Diagonal Edge
                                  SNR = 1            SNR = 10            SNR = 1            SNR = 10
Operator                       tN    PD    PF     tN    PD    PF      tN    PD    PF     tN    PD    PF
Roberts orthogonal gradient   1.36  0.559 0.400  0.67  0.892 0.105   1.74  0.551 0.469  0.78  0.778 0.221
Prewitt orthogonal gradient   1.16  0.608 0.384  0.66  0.912 0.480   1.19  0.593 0.387  0.64  0.931 0.064
Sobel orthogonal gradient     1.18  0.600 0.395  0.66  0.923 0.057   1.14  0.604 0.376  0.63  0.947 0.053
Prewitt compass template      1.52  0.613 0.466  0.73  0.886 0.136   1.51  0.618 0.472  0.71  0.900 0.153
  gradient
Kirsch template gradient      1.43  0.531 0.341  0.69  0.898 0.058   1.45  0.524 0.324  0.79  0.825 0.023
Robinson three-level          1.16  0.590 0.369  0.65  0.926 0.038   1.16  0.587 0.365  0.61  0.946 0.056
  template gradient
Robinson five-level           1.24  0.581 0.361  0.66  0.924 0.049   1.22  0.593 0.374  0.65  0.931 0.054
  template gradient
FIGURE 15.2-14. Threshold sensitivity of the Sobel and first derivative of Gaussian edge detectors for the peppers_mon image: (a) Sobel, t = 0.06, (b) FDOG, t = 0.08, (c) Sobel, t = 0.08, (d) FDOG, t = 0.10, (e) Sobel, t = 0.10, (f) FDOG, t = 0.12.
  • 473. SECOND-ORDER DERIVATIVE EDGE DETECTION 469 provides a tabulation of the optimum threshold for several 2 × 2 and 3 × 3 edge detectors for an experimental design with an evaluation set of 250 prototypes not in the training set (15). The table also lists the probability of correct and false edge detection as defined by Eq. 15.2-24 for theoretically derived gradient conditional densities. In the table, the threshold is normalized such that t N = t ⁄ G M , where G M is the maximum amplitude of the gradient in the absence of noise. The power signal- 2 to-noise ratio is defined as SNR = ( h ⁄ σ n ) , where h is the edge height and σ n is the noise standard deviation. In most of the cases of Table 15.2-1, the optimum thresh- old results in approximately equal error probabilities (i.e., PF = 1 – P D ). This is the same result that would be obtained by the Bayes design procedure when edges and nonedges are equally probable. The tests associated with Table 15.2-1 were con- ducted with relatively low signal-to-noise ratio images. Section 15.5 provides exam- ples of such images. For high signal-to-noise ratio images, the optimum threshold is much lower. As a rule of thumb, under the condition that P F = 1 – P D , the edge detection threshold can be scaled linearly with signal-to-noise ratio. Hence, for an image with SNR = 100, the threshold is about 10% of the peak gradient value. Figure 15.2-14 shows the effect of varying the first derivative edge detector threshold for the 3 × 3 Sobel and the 11 × 11 FDOG edge detectors for the peppers image, which is a relatively high signal-to-noise ratio image. For both edge detec- tors, variation of the threshold provides a trade-off between delineation of strong edges and definition of weak edges. 15.2.4. Morphological Post Processing It is possible to improve edge delineation of first-derivative edge detectors by apply- ing morphological operations on their edge maps. Figure 15.2-15 provides examples for the 3 × 3 Sobel and 11 × 11 FDOG edge detectors. In the Sobel example, the threshold is lowered slightly to improve the detection of weak edges. Then the mor- phological majority black operation is performed on the edge map to eliminate noise-induced edges. This is followed by the thinning operation to thin the edges to minimally connected lines. In the FDOG example, the majority black noise smooth- ing step is not necessary. 15.3. SECOND-ORDER DERIVATIVE EDGE DETECTION Second-order derivative edge detection techniques employ some form of spatial sec- ond-order differentiation to accentuate edges. An edge is marked if a significant spa- tial change occurs in the second derivative. Two types of second-order derivative methods are considered: Laplacian and directed second derivative. 15.3.1. Laplacian Generation The edge Laplacian of an image function F ( x, y ) in the continuous domain is defined as
FIGURE 15.2-15. Morphological thinning of edge maps for the peppers_mon image: (a) Sobel, t = 0.07, (b) Sobel majority black, (c) Sobel thinned, (d) FDOG, t = 0.11, (e) FDOG thinned.
  • 475. SECOND-ORDER DERIVATIVE EDGE DETECTION 471 G ( x, y ) = – ∇2{ F ( x, y ) } (15.3-1a) where, from Eq. 1.2-17, the Laplacian is 2 2 2 ∂ ∂ ∇ = + (15.3-1b) 2 2 ∂x ∂y The Laplacian G ( x, y ) is zero if F ( x, y ) is constant or changing linearly in ampli- tude. If the rate of change of F ( x, y ) is greater than linear, G ( x, y ) exhibits a sign change at the point of inflection of F ( x, y ). The zero crossing of G ( x, y ) indicates the presence of an edge. The negative sign in the definition of Eq. 15.3-la is present so that the zero crossing of G ( x, y ) has a positive slope for an edge whose amplitude increases from left to right or bottom to top in an image. Torre and Poggio (18) have investigated the mathematical properties of the Laplacian of an image function. They have found that if F ( x, y ) meets certain smoothness constraints, the zero crossings of G ( x, y ) are closed curves. In the discrete domain, the simplest approximation to the continuous Laplacian is to compute the difference of slopes along each axis: G ( j , k ) = [ F ( j, k ) – F ( j, k – 1 ) ] – [ F ( j, k + 1 ) – F ( j, k ) ] + [ F ( j, k ) – F ( j + 1, k ) ] – [ F ( j – 1, k ) – F ( j, k ) ] (15.3-2) This four-neighbor Laplacian (1, p. 111) can be generated by the convolution opera- tion G ( j, k ) = F ( j, k ) ᭺ H ( j, k ) ‫ء‬ (15.3-3) with 0 0 0 0 –1 0 H = –1 2 –1 + 0 2 0 (15.3-4a) 0 0 0 0 –1 0 or 0 –1 0 H = –1 4 – 1 (15.3-4b) 0 –1 0 where the two arrays of Eq. 15.3-4a correspond to the second derivatives along image rows and columns, respectively, as in the continuous Laplacian of Eq. 15.3-lb. The four-neighbor Laplacian is often normalized to provide unit-gain averages of the positive weighted and negative weighted pixels in the 3 × 3 pixel neighborhood. The gain-normalized four-neighbor Laplacian impulse response is defined by
  • 476. 472 EDGE DETECTION 0 –1 0 1 H = -- – 1 4 - –1 (15.3-5) 4 0 –1 0 Prewitt (1, p. 111) has suggested an eight-neighbor Laplacian defined by the gain- normalized impulse response array –1 –1 – 1 1 H = -- - –1 8 –1 (15.3-6) 8 –1 –1 – 1 This array is not separable into a sum of second derivatives, as in Eq. 15.3-4a. A separable eight-neighbor Laplacian can be obtained by the construction –1 2 –1 – 1 –1 –1 H = –1 2 –1 + 2 2 2 (15.3-7) –1 2 –1 –1 –1 – 1 in which the difference of slopes is averaged over three rows and three columns. The gain-normalized version of the separable eight-neighbor Laplacian is given by –2 1 –2 H = 1 -- - 1 4 1 (15.3-8) 8 –2 1 –2 It is instructive to examine the Laplacian response to the edge models of Figure 15.1-3. As an example, the separable eight-neighbor Laplacian corresponding to the center row of the vertical step edge model is – 3 h 3h 0 -------- ----- - - 0 8 8 where h = b – a is the edge height. The Laplacian response of the vertical ramp edge model is –3h 3h 0 -------- 0 - ----- - 0 16 16 For the vertical edge ramp edge model, the edge lies at the zero crossing pixel between the negative- and positive-value Laplacian responses. In the case of the step edge, the zero crossing lies midway between the neighboring negative and positive response pixels; the edge is correctly marked at the pixel to the right of the zero
  • 477. SECOND-ORDER DERIVATIVE EDGE DETECTION 473 crossing. The Laplacian response for a single-transition-pixel diagonal ramp edge model is 0 –h – h ----- ----- - - 0 h -- - h -- - 0 8 8 8 8 and the edge lies at the zero crossing at the center pixel. The Laplacian response for the smoothed transition diagonal ramp edge model of Figure 15.1-3 is –h –h –h h h h 0 ----- ----- ----- ----- -- ----- 0 - - - - - - 16 8 16 16 8 16 In this example, the zero crossing does not occur at a pixel location. The edge should be marked at the pixel to the right of the zero crossing. Figure 15.3-1 shows the Laplacian response for the two ramp corner edge models of Figure 15.1-3. The edge transition pixels are indicated by line segments in the figure. A zero crossing exists at the edge corner for the smoothed transition edge model, but not for the sin- gle-pixel transition model. The zero crossings adjacent to the edge corner do not occur at pixel samples for either of the edge models. From these examples, it can be FIGURE 15.3-1. Separable eight-neighbor Laplacian responses for ramp corner models; all values should be scaled by h/8.
  • 478. 474 EDGE DETECTION concluded that zero crossings of the Laplacian do not always occur at pixel samples. But for these edge models, marking an edge at a pixel with a positive response that has a neighbor with a negative response identifies the edge correctly. Figure 15.3-2 shows the Laplacian responses of the peppers image for the three types of 3 × 3 Laplacians. In these photographs, negative values are depicted as dimmer than midgray and positive values are brighter than midgray. Marr and Hildrith (19) have proposed the Laplacian of Gaussian (LOG) edge detection operator in which Gaussian-shaped smoothing is performed prior to appli- cation of the Laplacian. The continuous domain LOG gradient is G ( x, y ) = – ∇2{ F ( x, y ) ᭺ H S ( x, y ) } * (15.3-9a) where G ( x, y ) = g ( x, s )g ( y, s ) (15.3-9b) (a) Four-neighbor (b) Eight-neighbor (c) Separable eight-neighbor (d ) 11 × 11 Laplacian of Gaussian FIGURE 15.3-2. Laplacian responses of the peppers_mon image.
  • 479. SECOND-ORDER DERIVATIVE EDGE DETECTION 475 is the impulse response of the Gaussian smoothing function as defined by Eq. 15.2-13. As a result of the linearity of the second derivative operation and of the lin- earity of convolution, it is possible to express the LOG response as G ( j, k ) = F ( j, k ) ᭺ H ( j, k ) * (15.3-10a) where H ( x, y ) = – ∇2{ g ( x, s )g ( y, s ) } (15.3-10b) Upon differentiation, one obtains 2 2  x2 + y2  1-  H ( x, y ) = -------  1 – x + y  exp ---------------- 4 2  – ----------------  (15.3-11) πs  2s   2s2  Figure 15.3-3 is a cross-sectional view of the LOG continuous domain impulse response. In the literature it is often called the Mexican hat filter. It can be shown (20,21) that the LOG impulse response can be expressed as 2 2 1  y  1  x  H ( x, y ) = -------  1 – ----  g ( x, s )g ( y, s ) + -------  1 – ----  g ( x, s )g ( y, s ) (15.3-12) - - - - 2 2 2 2 πs  s  πs  s  Consequently, the convolution operation can be computed separably along rows and columns of an image. It is possible to approximate the LOG impulse response closely by a difference of Gaussians (DOG) operator. The resultant impulse response is H ( x, y ) = g ( x, s 1 )g ( y, s 1 ) – g ( x, s 2 )g ( y, s 2 ) (15.3-13) FIGURE 15.3-3. Cross section of continuous domain Laplacian of Gaussian impulse response.
  • 480. 476 EDGE DETECTION where s 1 < s 2. Marr and Hildrith (19) have found that the ratio s 2 ⁄ s 1 = 1.6 provides a good approximation to the LOG. A discrete domain version of the LOG operator can be obtained by sampling the continuous domain impulse response function of Eq. 15.3-11 over a W × W window. To avoid deleterious truncation effects, the size of the array should be set such that W = 3c, or greater, where c = 2 2 s is the width of the positive center lobe of the LOG function (21). Figure 15.3-2d shows the LOG response of the peppers image for a 11 × 11 operator. 15.3.2. Laplacian Zero-Crossing Detection From the discrete domain Laplacian response examples of the preceding section, it has been shown that zero crossings do not always lie at pixel sample points. In fact, for real images subject to luminance fluctuations that contain ramp edges of varying slope, zero-valued Laplacian response pixels are unlikely. A simple approach to Laplacian zero-crossing detection in discrete domain images is to form the maximum of all positive Laplacian responses and to form the minimum of all negative-value responses in a 3 × 3 window, If the magnitude of the difference between the maxima and the minima exceeds a threshold, an edge is judged present. FIGURE 15.3-4. Laplacian zero-crossing patterns.
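A compact sketch of LoG filtering (Eq. 15.3-11) followed by the simple 3 × 3 zero-crossing test described above is given below. The window-size rule W ≥ 3c with c = 2√2·s follows the text; the border handling and the threshold default are free choices.

```python
import numpy as np
from scipy import ndimage

def log_kernel(size, s):
    """Sampled Laplacian of Gaussian impulse response (Eq. 15.3-11).  The
    window should satisfy W >= 3c with c = 2*sqrt(2)*s to limit truncation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    r2 = x * x + y * y
    return (1.0 / (np.pi * s**4)) * (1.0 - r2 / (2 * s * s)) * np.exp(-r2 / (2 * s * s))

def log_zero_crossings(F, size=11, s=2.0, threshold=0.0):
    """LoG response followed by the 3 x 3 zero-crossing test of Section
    15.3.2: mark an edge where both positive and negative responses occur in
    the window and their spread exceeds a threshold."""
    G = ndimage.convolve(F.astype(float), log_kernel(size, s), mode='nearest')
    local_max = ndimage.maximum_filter(G, size=3)
    local_min = ndimage.minimum_filter(G, size=3)
    return (local_max > 0) & (local_min < 0) & (local_max - local_min > threshold)
```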
  • 481. SECOND-ORDER DERIVATIVE EDGE DETECTION 477 Huertas and Medioni (21) have developed a systematic method for classifying 3 × 3 Laplacian response patterns in order to determine edge direction. Figure 15.3-4 illustrates a somewhat simpler algorithm. In the figure, plus signs denote pos- itive-value Laplacian responses, and negative signs denote negative Laplacian responses. The algorithm can be implemented efficiently using morphological image processing techniques. 15.3.3. Directed Second-Order Derivative Generation Laplacian edge detection techniques employ rotationally invariant second-order differentiation to determine the existence of an edge. The direction of the edge can be ascertained during the zero-crossing detection process. An alternative approach is first to estimate the edge direction and then compute the one-dimensional second- order derivative along the edge direction. A zero crossing of the second-order derivative specifies an edge. The directed second-order derivative of a continuous domain image F ( x, y ) along a line at an angle θ with respect to the horizontal axis is given by 2 2 2 F ′′ ( x, y ) = ∂ F ( x, y ) cos 2 θ + ∂ F ( x, y ) cos θ sin θ + ---------------------- sin 2 θ ---------------------- ---------------------- ∂ F ( x, y ) (15.3-14) ∂x 2 ∂x ∂y ∂y 2 It should be noted that unlike the Laplacian, the directed second-order derivative is a nonlinear operator. Convolving a smoothing function with F ( x, y ) prior to differen- tiation is not equivalent to convolving the directed second derivative of F ( x, y ) with the smoothing function. A key factor in the utilization of the directed second-order derivative edge detec- tion method is the ability to determine its suspected edge direction accurately. One approach is to employ some first-order derivative edge detection method to estimate the edge direction, and then compute a discrete approximation to Eq. 15.3-14. Another approach, proposed by Haralick (22), involves approximating F ( x, y ) by a two-dimensional polynomial, from which the directed second-order derivative can be determined analytically. As an illustration of Haralick's approximation method, called facet modeling, let the continuous image function F ( x, y ) be approximated by a two-dimensional qua- dratic polynomial ˆ 2 2 2 2 2 2 F ( r, c ) = k 1 + k 2 r + k 3 c + k 4 r + k 5 rc + k 6 c + k 7 rc + k 8 r c + k 9 r c (15.3-15) about a candidate edge point ( j, k ) in the discrete image F ( j, k ), where the k n are weighting factors to be determined from the discrete image data. In this notation, the indices – ( W – 1 ) ⁄ 2 ≤ r, c ≤ ( W – 1 ) ⁄ 2 are treated as continuous variables in the row (y-coordinate) and column (x-coordinate) directions of the discrete image, but the discrete image is, of course, measurable only at integer values of r and c. From this model, the estimated edge angle is
  • 482. 478 EDGE DETECTION  k2  θ = arc tan  ----  - (15.3-16)  k3  In principle, any polynomial expansion can be used in the approximation. The expansion of Eq. 15.3-15 was chosen because it can be expressed in terms of a set of orthogonal polynomials. This greatly simplifies the computational task of determin- ing the weighting factors. The quadratic expansion of Eq. 15.3-15 can be rewritten as N ˆ F ( r, c ) = ∑ a n Pn ( r, c ) (15.3-17) n=1 where Pn ( r, c ) denotes a set of discrete orthogonal polynomials and the a n are weighting coefficients. Haralick (22) has used the following set of 3 × 3 Chebyshev orthogonal polynomials: P 1 ( r, c ) = 1 (15.3-18a) P 2 ( r, c ) = r (15.3-18b) P3 ( r, c ) = c (15.3-18c) 2 2 P4 ( r, c ) = r – -- - (15.3-18d) 3 P5 ( r, c ) = rc (15.3-18e) 2 2 P 6 ( r, c ) = c – -- - (15.3-18f) 3 2 2 P 7 ( r, c ) = c  r – --   - (15.3-18g) 3 2 2 P 8 ( r, c ) = r  c – --  - (15.3-18h)  3 2 2 2 2 P 9 ( r, c ) =  r – --   c – --   - - (15.3-18i) 3 3 defined over the (r, c) index set {-1, 0, 1}. To maintain notational consistency with the gradient techniques discussed previously, r and c are indexed in accordance with the (x, y) Cartesian coordinate system (i.e., r is incremented positively up rows and c is incremented positively left to right across columns). The polynomial coefficients kn of Eq. 15.3-15 are related to the Chebyshev weighting coefficients by
  • 483. SECOND-ORDER DERIVATIVE EDGE DETECTION 479 2 2 4 k 1 = a 1 – -- a 4 – -- a 6 + -- a 9 - - - (15.3-19a) 3 3 9 2 k 2 = a 2 – -- a 7 - (15.3-19b) 3 k3 = a3 – 2 a8 -- - (15.3-19c) 3 2 k 4 = a 4 – -- a 9 - (15.3-19d) 3 k5 = a5 (15.3-19e) 2 k 6 = a 6 – -- a 9 - (15.3-19f) 3 k7 = a7 (15.3-19g) k8 = a8 (15.3-19h) k9 = a9 (15.3-19i) The optimum values of the set of weighting coefficients an that minimize the mean- ˆ square error between the image data F ( r, c ) and its approximation F ( r, c ) are found to be (22) ∑ ∑ Pn ( r, c )F ( r, c ) a n = ---------------------------------------------------- - (15.3-20) 2 ∑ ∑ [ P n ( r, c ) ] As a consequence of the linear structure of this equation, the weighting coefficients An ( j, k ) = a n at each point in the image F ( j, k ) can be computed by convolution of the image with a set of impulse response arrays. Hence An ( j, k ) = F ( j, k ) ᭺ H n ( j, k ) ‫ء‬ (15.3-21a) where P n ( – j, – k ) H n ( j, k ) = ----------------------------------------- - (15.3-21b) 2 ∑ ∑ [ Pn ( r, c ) ] Figure 15.3-5 contains the nine impulse response arrays corresponding to the 3 × 3 Chebyshev polynomials. The arrays H2 and H3, which are used to determine the edge angle, are seen from Figure 15.3-5 to be the Prewitt column and row operators, respectively. The arrays H4 and H6 are second derivative operators along columns
  • 484. 480 EDGE DETECTION FIGURE 15.3-5. Chebyshev polynomial 3 × 3 impulse response arrays. and rows, respectively, as noted in Eq. 15.3-7. Figure 15.3-6 shows the nine weight- ing coefficient responses for the peppers image. The second derivative along the line normal to the edge slope can be expressed explicitly by performing second-order differentiation on Eq. 15.3-15. The result is ˆ F ′′ ( r, c ) = 2k 4 sin 2 θ + 2k 5 sin θ cos θ + 2k 6 cos 2 θ + ( 4k 7 sin θ cos θ + 2k 8 cos 2 θ )r + ( 2k 7 sin 2 θ + 4k 8 sin θ cos θ )c 2 2 + ( 2k 9 cos 2 θ )r + ( 8k 9 sin θ cos θ )rc + ( 2k 9 sin 2 θ )c (15.3-22) This second derivative need only be evaluated on a line in the suspected edge direc- tion. With the substitutions r = ρ sin θ and c = ρ cos θ, the directed second-order derivative can be expressed as ˆ 2 2 F ′′ ( ρ ) = 2 ( k 4 sin θ + k 5 sin θ cos θ + k 6 cos θ ) 2 + 6 sin θ cos θ ( k 7 sin θ + k 8 cos θ ) ρ + 12 ( k 9 sin 2 θ cos 2 θ )ρ (15.3-23) ˆ The next step is to detect zero crossings of F ′′ ( ρ ) in a unit pixel range – 0.5 ≤ ρ ≤ 0.5 of the suspected edge. This can be accomplished by computing the real root (if it exists) within the range of the quadratic relation of Eq. 15.3-23.
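The facet-model computation can be sketched as follows for the edge-angle estimate of Eq. 15.3-16: each Chebyshev coefficient A_n(j, k) of Eq. 15.3-20 is obtained by applying a small mask to the image, and k_2 and k_3 follow from Eq. 15.3-19. The coordinate conventions and the use of correlation in place of the reflected convolution of Eq. 15.3-21 are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

# 3 x 3 index grids over {-1, 0, 1}; r increases up rows and c increases
# across columns, following the text's Cartesian convention.
c_idx, r_idx = np.meshgrid([-1.0, 0.0, 1.0], [1.0, 0.0, -1.0])

def cheby_mask(P):
    """Mask in the spirit of Eq. 15.3-21b for one polynomial of Eq. 15.3-18,
    arranged so that correlation with the image yields the coefficient
    An(j, k) of Eq. 15.3-20."""
    vals = P(r_idx, c_idx)
    return vals / np.sum(vals ** 2)

def facet_edge_angle(F):
    """Estimated edge angle of Eq. 15.3-16 from the quadratic facet model."""
    F = F.astype(float)
    a2 = ndimage.correlate(F, cheby_mask(lambda r, c: r), mode='nearest')
    a3 = ndimage.correlate(F, cheby_mask(lambda r, c: c), mode='nearest')
    a7 = ndimage.correlate(F, cheby_mask(lambda r, c: c * (r**2 - 2/3)), mode='nearest')
    a8 = ndimage.correlate(F, cheby_mask(lambda r, c: r * (c**2 - 2/3)), mode='nearest')
    k2 = a2 - (2/3) * a7               # Eq. 15.3-19b
    k3 = a3 - (2/3) * a8               # Eq. 15.3-19c
    return np.arctan2(k2, k3)          # theta = arctan(k2 / k3), Eq. 15.3-16
```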
FIGURE 15.3-6. 3 × 3 Chebyshev polynomial responses for the peppers_mon image: (a) Chebyshev 1, (b) Chebyshev 2, (c) Chebyshev 3, (d) Chebyshev 4, (e) Chebyshev 5, (f) Chebyshev 6.
  • 486. 482 EDGE DETECTION (g ) Chebyshev 7 (h ) Chebyshev 8 (i ) Chebyshev 9 FIGURE 15.3-6 (Continued). 3 × 3 Chebyshev polynomial responses for the peppers_mon image. 15.4. EDGE-FITTING EDGE DETECTION Ideal edges may be viewed as one- or two-dimensional edges of the form sketched in Figure 15.1-1. Actual image data can then be matched against, or fitted to, the ideal edge models. If the fit is sufficiently accurate at a given image location, an edge is assumed to exist with the same parameters as those of the ideal edge model. In the one-dimensional edge-fitting case described in Figure 15.4-1, the image signal f(x) is fitted to a step function a for x < x 0 (15.4-1a)  s( x) =  a + h  for x ≥ x 0 (15.4-1b)
  • 487. EDGE-FITTING EDGE DETECTION 483 FIGURE 15.4-1. One- and two-dimensional edge fitting. An edge is assumed present if the mean-square error x0 + L 2 E = ∫x 0–L [ f ( x ) – s ( x ) ] dx (15.4-2) is below some threshold value. In the two-dimensional formulation, the ideal step edge is defined as a for x cos θ + y sin θ < ρ (15.4-3a)  s(x) =   a + h for x cos θ + y sin θ ≥ ρ (15.4-3b) where θ and ρ jointly specify the polar distance from the center of a circular test region to the normal point of the edge. The edge-fitting error is 2 E = ∫ ∫ [ F ( x, y ) – S ( x, y ) ] dx dy (15.4-4) where the integration is over the circle in Figure 15.4-1.
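In the discrete one-dimensional case, the least-squares step fit of Eq. 15.4-1 has a closed form: the two amplitudes are simply the sample means on either side of the candidate edge point. The sketch below returns the fitted edge height and a fitting error analogous to Eq. 15.4-2; the centered split and the window handling are illustrative assumptions.

```python
import numpy as np

def step_edge_fit(window):
    """Fit the samples in a 1-D window, split at its center, to the ideal
    step of Eq. 15.4-1.  The least-squares amplitudes are the means of the
    two halves; the returned error is the discrete analog of Eq. 15.4-2."""
    window = np.asarray(window, dtype=float)
    mid = window.size // 2
    a = window[:mid].mean()               # amplitude left of the edge
    a_plus_h = window[mid:].mean()        # amplitude at and right of the edge
    s = np.where(np.arange(window.size) < mid, a, a_plus_h)
    error = np.sum((window - s) ** 2)
    return a_plus_h - a, error            # edge height h and fitting error

# An edge is declared present if 'error' falls below a threshold and the
# fitted height exceeds a minimum contrast, e.g.:
#   h, err = step_edge_fit(f[x0 - L : x0 + L])
```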
  • 488. 484 EDGE DETECTION Hueckel (23) has developed a procedure for two-dimensional edge fitting in which the pixels within the circle of Figure 15.4-1 are expanded in a set of two- dimensional basis functions by a Fourier series in polar coordinates. Let B i ( x, y ) represent the basis functions. Then, the weighting coefficients for the expansions of the image and the ideal step edge become fi = ∫ ∫ Bi ( x, y )F ( x, y ) dx dy (15.4-5a) si = ∫ ∫ Bi ( x, y )S ( x, y ) dx dy (15.4-5b) In Hueckel's algorithm, the expansion is truncated to eight terms for computational economy and to provide some noise smoothing. Minimization of the mean-square- 2 error difference of Eq. 15.4-4 is equivalent to minimization of ( f i – s i ) for all coef- ficients. Hueckel has performed this minimization, invoking some simplifying approximations and has formulated a set of nonlinear equations expressing the estimated edge parameter set in terms of the expansion coefficients f i . Nalwa and Binford (24) have proposed an edge-fitting scheme in which the edge angle is first estimated by a sequential least-squares fit within a 5 × 5 region. Then, the image data along the edge direction is fit to a hyperbolic tangent function ρ –ρ e –e tanh ρ = ------------------- (15.4-6) ρ –ρ e +e as shown in Figure 15.4-2. Edge-fitting methods require substantially more computation than do derivative edge detection methods. Their relative performance is considered in the following section. FIGURE 15.4-2. Hyperbolic tangent edge model.
  • 489. LUMINANCE EDGE DETECTOR PERFORMANCE 485 15.5. LUMINANCE EDGE DETECTOR PERFORMANCE Relatively few comprehensive studies of edge detector performance have been reported in the literature (15,25,26). A performance evaluation is difficult because of the large number of methods proposed, problems in determining the optimum parameters associated with each technique and the lack of definitive performance criteria. In developing performance criteria for an edge detector, it is wise to distinguish between mandatory and auxiliary information to be obtained from the detector. Obviously, it is essential to determine the pixel location of an edge. Other informa- tion of interest includes the height and slope angle of the edge as well as its spatial orientation. Another useful item is a confidence factor associated with the edge deci- sion, for example, the closeness of fit between actual image data and an idealized model. Unfortunately, few edge detectors provide this full gamut of information. The next sections discuss several performance criteria. No attempt is made to provide a comprehensive comparison of edge detectors. 15.5.1. Edge Detection Probability The probability of correct edge detection PD and the probability of false edge detec- tion PF, as specified by Eq. 15.2-24, are useful measures of edge detector perfor- mance. The trade-off between PD and PF can be expressed parametrically in terms of the detection threshold. Figure 15.5-1 presents analytically derived plots of PD versus PF for several differential operators for vertical and diagonal edges and a sig- nal-to-noise ratio of 1.0 and 10.0 (13). From these curves it is apparent that the Sobel and Prewitt 3 × 3 operators are superior to the Roberts 2 × 2 operators. The Prewitt operator is better than the Sobel operator for a vertical edge. But for a diago- nal edge, the Sobel operator is superior. In the case of template-matching operators, the Robinson three-level and five-level operators exhibit almost identical perfor- mance, which is superior to the Kirsch and Prewitt compass gradient operators. Finally, the Sobel and Prewitt differential operators perform slightly better than the Robinson three- and Robinson five-level operators. It has not been possible to apply this statistical approach to any of the larger operators because of analytic difficulties in evaluating the detection probabilities. 15.5.2. Edge Detection Orientation An important characteristic of an edge detector is its sensitivity to edge orientation. Abdou and Pratt (15) have analytically determined the gradient response of 3 × 3 template matching edge detectors and 2 × 2 and 3 × 3 orthogonal gradient edge detectors for square-root and magnitude combinations of the orthogonal gradients. Figure 15.5-2 shows plots of the edge gradient as a function of actual edge orienta- tion for a unit-width ramp edge model. The figure clearly shows that magni- tude combination of the orthogonal gradients is inferior to square-root combination.
  • 490. 486 EDGE DETECTION FIGURE 15.5-1. Probability of detection versus probability of false detection for 2 × 2 and 3 × 3 operators. Figure 15.5-3 is a plot of the detected edge angle as a function of the actual orienta- tion of an edge. The Sobel operator provides the most linear response. Laplacian edge detectors are rotationally symmetric operators, and hence are invariant to edge orientation. The edge angle can be determined to within 45° increments during the 3 × 3 pixel zero-crossing detection process. 15.5.3. Edge Detection Localization Another important property of an edge detector is its ability to localize an edge. Abdou and Pratt (15) have analyzed the edge localization capability of several first derivative operators for unit width ramp edges. Figure 15.5-4 shows edge models in which the sampled continuous ramp edge is displaced from the center of the operator. Figure 15.5-5 shows plots of the gradient response as a function of edge
  • 491. LUMINANCE EDGE DETECTOR PERFORMANCE 487 FIGURE 15.5-2. Edge gradient response as a function of edge orientation for 2 × 2 and 3 × 3 first derivative operators. displacement distance for vertical and diagonal edges for 2 × 2 and 3 × 3 orthogo- nal gradient and 3 × 3 template matching edge detectors. All of the detectors, with the exception of the Kirsch operator, exhibit a desirable monotonically decreasing response as a function of edge displacement. If the edge detection threshold is set at one-half the edge height, or greater, an edge will be properly localized in a noise- free environment for all the operators, with the exception of the Kirsch operator, for which the threshold must be slightly higher. Figure 15.5-6 illustrates the gradi- ent response of boxcar operators as a function of their size (5). A gradient response FIGURE 15.5-3. Detected edge orientation as a function of actual edge orientation for 2 × 2 and 3 × 3 first derivative operators.
  • 492. 488 EDGE DETECTION (a) 2 × 2 model (b) 3 × 3 model FIGURE 15.5-4. Edge models for edge localization analysis. comparison of 7 × 7 orthogonal gradient operators is presented in Figure 15.5-7. For such large operators, the detection threshold must be set relatively high to prevent smeared edge markings. Setting a high threshold will, of course, cause low-ampli- tude edges to be missed. Ramp edges of extended width can cause difficulties in edge localization. For first-derivative edge detectors, edges are marked along the edge slope at all points for which the slope exceeds some critical value. Raising the threshold results in the missing of low-amplitude edges. Second derivative edge detection methods are often able to eliminate smeared ramp edge markings. In the case of a unit width ramp edge, a zero crossing will occur only at the midpoint of the edge slope. Extended-width ramp edges will also exhibit a zero crossing at the ramp midpoint provided that the size of the Laplacian operator exceeds the slope width. Figure 15.5-8 illustrates Laplacian of Gaussian (LOG) examples (21). Berzins (27) has investigated the accuracy to which the LOG zero crossings locate a step edge. Figure 15.5-9 shows the LOG zero crossing in the vicinity of a corner step edge. A zero crossing occurs exactly at the corner point, but the zero- crossing curve deviates from the step edge adjacent to the corner point. The maxi- mum deviation is about 0.3s, where s is the standard deviation of the Gaussian smoothing function.
FIGURE 15.5-5. Edge gradient response as a function of edge displacement distance for 2 × 2 and 3 × 3 first derivative operators.

FIGURE 15.5-6. Edge gradient response as a function of edge displacement distance for variable-size boxcar operators.
FIGURE 15.5-7. Edge gradient response as a function of edge displacement distance for several 7 × 7 orthogonal gradient operators.

15.5.4. Edge Detector Figure of Merit

There are three major types of error associated with determination of an edge: (1) missing valid edge points, (2) failure to localize edge points, and (3) classification of noise fluctuations as edge points.

FIGURE 15.5-8. Laplacian of Gaussian response of continuous domain for high- and low-slope ramp edges.

FIGURE 15.5-9. Locus of zero crossings in the vicinity of a corner edge for a continuous Laplacian of Gaussian edge detector.

Figure 15.5-10 illustrates a typical edge segment in a discrete image, an ideal edge representation, and edge representations subject to various types of error. A common strategy in signal detection problems is to establish some bound on the probability of false detection resulting from noise and then attempt to maximize the probability of true signal detection. Extending this concept to edge detection simply involves setting the edge detection threshold at a level such that the probability of false detection resulting from noise alone does not exceed some desired value. The probability of true edge detection can readily be evaluated by a coincidence comparison of the edge maps of an ideal and an actual edge detector. The penalty for nonlocalized edges is somewhat more difficult to assess. Edge detectors that provide a smeared edge location should clearly be penalized; however, credit should be given to edge detectors whose edge locations are localized but biased by a small amount. Pratt (28) has introduced a figure of merit that balances these three types of error. The figure of merit is defined by

R = (1 / I_N) Σ_{i=1}^{I_A} 1 / (1 + a d²)     (15.5-1)

where I_N = MAX{I_I, I_A}, and I_I and I_A represent the number of ideal and actual edge map points, a is a scaling constant, and d is the separation distance of an actual edge point normal to a line of ideal edge points. The rating factor is normalized so that R = 1 for a perfectly detected edge. The scaling factor may be adjusted to penalize edges that are localized but offset from the true position. Normalization by the maximum of the actual and ideal number of edge points ensures a penalty for smeared or fragmented edges.
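A possible realization of Eq. 15.5-1 is sketched below. It assumes binary edge maps and takes d as the Euclidean distance from each detected edge point to the nearest ideal edge point; this distance convention is one reasonable reading of the normal separation, not necessarily the original implementation.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def pratt_figure_of_merit(ideal_edges, detected_edges, a=1.0 / 9.0):
    """Sketch of the Pratt figure of merit of Eq. 15.5-1.

    ideal_edges, detected_edges: boolean arrays marking edge pixels.
    d is the Euclidean distance from each detected edge point to the
    nearest ideal edge point.
    """
    i_i = int(ideal_edges.sum())
    i_a = int(detected_edges.sum())
    i_n = max(i_i, i_a)
    if i_n == 0:
        return 0.0
    # Distance from every pixel to the nearest ideal edge pixel.
    d = distance_transform_edt(~ideal_edges)
    return float(np.sum(1.0 / (1.0 + a * d[detected_edges] ** 2)) / i_n)
```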
FIGURE 15.5-10. Indications of edge location.

As an example of performance, if a = 1/9, the rating of a vertical detected edge offset by one pixel becomes R = 0.90, and a two-pixel offset gives a rating of R = 0.69. With a = 1/9, a smeared edge of three pixels width centered about the true vertical edge yields a rating of R = 0.93, and a five-pixel-wide smeared edge gives R = 0.84. A higher rating for a smeared edge than for an offset edge is reasonable because it is possible to thin the smeared edge by morphological postprocessing.

The figure-of-merit criterion described above has been applied to the assessment of some of the edge detectors discussed previously, using a test image consisting of a 64 × 64 pixel array with a vertically oriented edge of variable contrast and slope placed at its center. Independent Gaussian noise of standard deviation σn has been added to the edge image. The signal-to-noise ratio is defined as SNR = (h / σn)², where h is the edge height scaled over the range 0.0 to 1.0. Because the purpose of the testing is to compare various edge detection methods, for fairness it is important that each edge detector be tuned to its best capabilities. Consequently, each edge detector has been permitted to train both on random noise fields without edges and
  • 497. LUMINANCE EDGE DETECTOR PERFORMANCE 493 the actual test images before evaluation. For each edge detector, the threshold parameter has been set to achieve the maximum figure of merit subject to the maxi- mum allowable false detection rate. Figure 15.5-11 shows plots of the figure of merit for a vertical ramp edge as a function of signal-to-noise ratio for several edge detectors (5). The figure of merit is also plotted in Figure 15.5-12 as a function of edge width. The figure of merit curves in the figures follow expected trends: low for wide and noisy edges; and high in the opposite case. Some of the edge detection methods are universally superior to others for all test images. As a check on the subjective validity of the edge location figure of merit, Figures 15.5-13 and 15.5-14 present the edge maps obtained for several high-and low-ranking edge detectors. These figures tend to corroborate the utility of the figure of merit. A high figure of merit generally corresponds to a well-located edge upon visual scrutiny, and vice versa. (a) 3 × 3 edge detectors (b) Larger size edge detectors FIGURE 15.5-11. Edge location figure of merit for a vertical ramp edge as a function of sig- nal-to-noise ratio for h = 0.1 and w = 1.
FIGURE 15.5-12. Edge location figure of merit for a vertical ramp edge as a function of edge width for h = 0.1 and SNR = 100 (Sobel, Prewitt compass, and Roberts magnitude operators).

15.5.5. Subjective Assessment

In many, if not most, applications in which edge detection is performed to outline objects in a real scene, the only performance measure of ultimate importance is how well edge detector markings match the visual perception of object boundaries. A human observer is usually able to discern object boundaries in a scene quite accurately in a perceptual sense. However, most observers have difficulty recording their observations by tracing object boundaries. Nevertheless, in the evaluation of edge detectors, it is useful to assess them in terms of how well they produce outline drawings of a real scene that are meaningful to a human observer.

The peppers image of Figure 15.2-2 has been used for the subjective assessment of edge detectors. The peppers in the image are visually distinguishable objects, but shadows and nonuniform lighting create a challenge to edge detectors, which by definition do not utilize higher-order perceptive intelligence. Figures 15.5-15 and 15.5-16 present edge maps of the peppers image for several edge detectors. The parameters of the various edge detectors have been chosen to produce the best visual delineation of objects.

Heath et al. (26) have performed extensive visual testing of several complex edge detection algorithms, including the Canny and Nalwa–Binford methods, for a number of natural images. The judgment criterion was a numerical rating as to how well the edge map generated by an edge detector allows for easy, quick, and accurate recognition of objects within a test image.
FIGURE 15.5-13. Edge location performance of the Sobel edge detector as a function of signal-to-noise ratio for h = 0.1, w = 1, a = 1/9: (a, b) SNR = 100, original and edge map, R = 100%; (c, d) SNR = 10, original and edge map, R = 85.1%; (e, f) SNR = 1, original and edge map, R = 24.2%.
FIGURE 15.5-14. Edge location performance of several edge detectors for SNR = 10, h = 0.1, w = 1, a = 1/9: (a) original; (b) east compass, R = 66.1%; (c) Roberts magnitude, R = 31.5%; (d) Roberts square root, R = 37.0%; (e) Sobel, R = 85.1%; (f) Kirsch, R = 80.8%.
FIGURE 15.5-15. Edge maps of the peppers_mon image for several small edge detectors: (a) 2 × 2 Roberts, t = 0.08; (b) 3 × 3 Prewitt, t = 0.08; (c) 3 × 3 Sobel, t = 0.09; (d) 3 × 3 Robinson five-level; (e) 5 × 5 Nevatia–Babu, t = 0.05; (f) 3 × 3 Laplacian.
FIGURE 15.5-16. Edge maps of the peppers_mon image for several large edge detectors: (a) 7 × 7 boxcar, t = 0.10; (b) 9 × 9 truncated pyramid, t = 0.10; (c) 11 × 11 Argyle, t = 0.05; (d) 11 × 11 Macleod, t = 0.10; (e) 11 × 11 derivative of Gaussian, t = 0.11; (f) 11 × 11 Laplacian of Gaussian.
15.6. COLOR EDGE DETECTION

In Chapter 3 it was established that color images may be described quantitatively at each pixel by a set of three tristimulus values T1, T2, T3, which are proportional to the amount of red, green, and blue primary lights required to match the pixel color. The luminance of the color is a weighted sum Y = a1 T1 + a2 T2 + a3 T3 of the tristimulus values, where the ai are constants that depend on the spectral characteristics of the primaries.

Several definitions of a color edge have been proposed (29). An edge in a color image can be said to exist if and only if the luminance field contains an edge. This definition ignores discontinuities in hue and saturation that occur in regions of constant luminance. Another definition is to judge a color edge present if an edge exists in any of its constituent tristimulus components. A third definition is based on forming the sum

G(j, k) = G1(j, k) + G2(j, k) + G3(j, k)     (15.6-1)

of the gradients Gi(j, k) of the three tristimulus values or some linear or nonlinear color components. A color edge exists if the gradient exceeds a threshold. Still another definition is based on the vector sum gradient

G(j, k) = [ G1(j, k)² + G2(j, k)² + G3(j, k)² ]^1/2     (15.6-2)

With the tricomponent definitions of color edges, results are dependent on the particular color coordinate system chosen for representation. Figure 15.6-1 is a color photograph of the peppers image and monochrome photographs of its red, green, and blue components. The YIQ and L*a*b* coordinates are shown in Figure 15.6-2. Edge maps of the individual RGB components are shown in Figure 15.6-3 for Sobel edge detection. This figure also shows the logical OR of the RGB edge maps plus the edge maps of the gradient sum and the vector sum. The RGB gradient vector sum edge map provides slightly better visual edge delineation than that provided by the gradient sum edge map; the logical OR edge map tends to produce thick edges and numerous isolated edge points. Sobel edge maps for the YIQ and the L*a*b* color components are presented in Figures 15.6-4 and 15.6-5. The YIQ gradient vector sum edge map gives the best visual edge delineation of the YIQ set, but it does not delineate edges quite as well as the RGB vector sum edge map. Edge detection results for the L*a*b* coordinate system are quite poor because the a* component is very noise sensitive.
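The gradient-sum and vector-sum definitions of Eqs. 15.6-1 and 15.6-2 might be realized as in the following sketch, which assumes Sobel gradients for each tristimulus plane and a user-chosen threshold.

```python
import numpy as np
from scipy.ndimage import sobel

def color_edge_map(image, threshold, combine="vector"):
    """Color edge detection per Eqs. 15.6-1 and 15.6-2 (a sketch).

    image: float array of shape (rows, cols, 3) holding the tristimulus planes.
    combine: "sum" forms G1 + G2 + G3; "vector" forms the root-sum-of-squares.
    """
    grads = []
    for c in range(image.shape[2]):
        gr = sobel(image[:, :, c], axis=0)          # row gradient of one component
        gc = sobel(image[:, :, c], axis=1)          # column gradient of one component
        grads.append(np.hypot(gr, gc))              # per-component gradient magnitude
    grads = np.stack(grads, axis=0)
    if combine == "sum":
        g = grads.sum(axis=0)                       # Eq. 15.6-1
    else:
        g = np.sqrt(np.sum(grads ** 2, axis=0))     # Eq. 15.6-2
    return g > threshold
```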
FIGURE 15.6-1. The peppers_gamma color image and its RGB color components: (a) monochrome representation; (b) red component; (c) green component; (d) blue component. See insert for a color representation of this figure.

15.7. LINE AND SPOT DETECTION

A line in an image could be considered to be composed of parallel, closely spaced edges. Similarly, a spot could be considered to be a closed contour of edges. This method of line and spot detection involves the application of scene analysis techniques to spatially relate the constituent edges of the lines and spots. The approach taken in this chapter is to consider only small-scale models of lines and edges and to apply the detection methodology developed previously for edges.

Figure 15.1-4 presents several discrete models of lines. For the unit-width line models, line detection can be accomplished by threshold detecting a line gradient

G(j, k) = MAX_{m = 1, …, 4} { F(j, k) ⊛ Hm(j, k) }     (15.7-1)
FIGURE 15.6-2. YIQ and L*a*b* color components of the peppers_gamma image: (a) Y component; (b) L* component; (c) I component; (d) a* component; (e) Q component; (f) b* component.
FIGURE 15.6-3. Sobel edge maps for edge detection using the RGB color components of the peppers_gamma image: (a) red edge map; (b) logical OR of RGB edges; (c) green edge map; (d) RGB sum edge map; (e) blue edge map; (f) RGB vector sum edge map.
FIGURE 15.6-4. Sobel edge maps for edge detection using the YIQ color components of the peppers_gamma image: (a) Y edge map; (b) logical OR of YIQ edges; (c) I edge map; (d) YIQ sum edge map; (e) Q edge map; (f) YIQ vector sum edge map.
FIGURE 15.6-5. Sobel edge maps for edge detection using the L*a*b* color components of the peppers_gamma image: (a) L* edge map; (b) logical OR of L*a*b* edges; (c) a* edge map; (d) L*a*b* sum edge map; (e) b* edge map; (f) L*a*b* vector sum edge map.
where Hm(j, k) is a 3 × 3 line detector impulse response array corresponding to a specific line orientation. Figure 15.7-1 contains two sets of line detector impulse response arrays, weighted and unweighted, which are analogous to the Prewitt and Sobel template matching edge detector impulse response arrays. The detection of ramp lines, as modeled in Figure 15.1-4, requires 5 × 5 pixel templates.

Unit-width step spots can be detected by thresholding a spot gradient

G(j, k) = F(j, k) ⊛ H(j, k)     (15.7-2)

where H(j, k) is an impulse response array chosen to accentuate the gradient of a unit-width spot. One approach is to use one of the three types of 3 × 3 Laplacian operators defined by Eq. 15.3-5, 15.3-6, or 15.3-8, which are discrete approximations to the sum of the row and column second derivatives of an image. The gradient responses to these impulse response arrays for the unit-width spot model of Figure 15.1-6a are simply replicas of each array centered at the spot, scaled by the spot height h, and zero elsewhere. It should be noted that the Laplacian gradient responses are thresholded for spot detection, whereas the Laplacian responses are examined for sign changes (zero crossings) for edge detection. The disadvantage of using Laplacian operators for spot detection is that they evoke a gradient response for edges, which can lead to false spot detection in a noisy environment.

FIGURE 15.7-1. Line detector 3 × 3 impulse response arrays.
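A minimal sketch of the line gradient of Eq. 15.7-1 is given below. The four unweighted compass line masks are commonly used forms assumed here as stand-ins for the arrays of Figure 15.7-1, which is not reproduced.

```python
import numpy as np
from scipy.ndimage import convolve

# Unweighted 3x3 line masks for the four principal orientations (assumed forms,
# analogous in role to the arrays of Figure 15.7-1).
LINE_MASKS = [
    np.array([[-1, -1, -1], [ 2,  2,  2], [-1, -1, -1]]),   # horizontal line
    np.array([[-1,  2, -1], [-1,  2, -1], [-1,  2, -1]]),   # vertical line
    np.array([[ 2, -1, -1], [-1,  2, -1], [-1, -1,  2]]),   # one diagonal
    np.array([[-1, -1,  2], [-1,  2, -1], [ 2, -1, -1]]),   # the other diagonal
]

def line_gradient(image):
    """Line gradient of Eq. 15.7-1: maximum response over the four oriented masks."""
    responses = [convolve(image.astype(float), m) for m in LINE_MASKS]
    return np.max(np.stack(responses, axis=0), axis=0)

# Unit-width lines are then marked by thresholding, e.g.
# line_map = line_gradient(f) > t
```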
This problem can be alleviated by the use of a 3 × 3 operator that approximates the continuous cross second derivative ∂²/∂x²∂y². Prewitt (1, p. 126) has suggested the following discrete approximation:

H = (1/8)  [  1  −2   1
             −2   4  −2
              1  −2   1 ]     (15.7-3)

The advantage of this operator is that it evokes no response for horizontally or vertically oriented edges; however, it does generate a response for diagonally oriented edges. The detection of unit-width spots modeled by the ramp model of Figure 15.1-5 requires a 5 × 5 impulse response array.

The cross second derivative operator of Eq. 15.7-3 and the separable eight-connected Laplacian operator are deceptively similar in appearance; often, they are mistakenly exchanged with one another in the literature. It should be noted that the cross second derivative is identical, within a scale factor, to the ninth Chebyshev polynomial impulse response array of Figure 15.3-5.

Cook and Rosenfeld (30) and Zucker et al. (31) have suggested several algorithms for detection of large spots. In one algorithm, an image is first smoothed with a W × W low-pass filter impulse response array. Then the value of each point in the averaged image is compared to the average value of its north, south, east, and west neighbors spaced W pixels away. A spot is marked if the difference is sufficiently large. A similar approach involves formation of the difference of the average pixel amplitude in a W × W window and the average amplitude in a surrounding ring region of width W. Chapter 19 considers the general problem of detecting objects within an image by template matching. Such templates can be developed to detect large spots.

REFERENCES

1. J. M. S. Prewitt, "Object Enhancement and Extraction," in Picture Processing and Psychopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970.
2. L. G. Roberts, "Machine Perception of Three-Dimensional Solids," in Optical and Electro-Optical Information Processing, J. T. Tippett et al., Eds., MIT Press, Cambridge, MA, 1965, 159–197.
3. R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, Wiley, New York, 1973.
4. W. Frei and C. Chen, "Fast Boundary Detection: A Generalization and a New Algorithm," IEEE Trans. Computers, C-26, 10, October 1977, 988–998.
5. I. Abdou, "Quantitative Methods of Edge Detection," USCIPI Report 830, Image Processing Institute, University of Southern California, Los Angeles, 1973.
6. E. Argyle, "Techniques for Edge Detection," Proc. IEEE, 59, 2, February 1971, 285–287.
  • 511. REFERENCES 507 7. I. D. G. Macleod, “On Finding Structure in Pictures,” in Picture Processing and Psy- chopictorics, B. S. Lipkin and A. Rosenfeld, Eds., Academic Press, New York, 1970. 8. I. D. G. Macleod, “Comments on Techniques for Edge Detection,” Proc. IEEE, 60, 3, March 1972, 344. 9. J. Canny, “A Computational Approach to Edge Detection,” IEEE Trans. Pattern Analy- sis and Machine Intelligence, PAMI-8, 6, November 1986, 679–698. 10. D. Demigny and T. Kamie, “A Discrete Expression of Canny’s Criteria for Step Edge Detector Performances Evaluation,” IEEE Trans. Pattern Analysis and Machine Intelli- gence, PAMI-19, 11, November 1997, 1199–1211. 11. R. Kirsch, “Computer Determination of the Constituent Structure of Biomedical Images,” Computers and Biomedical Research, 4, 3, 1971, 315–328. 12. G. S. Robinson, “Edge Detection by Compass Gradient Masks,” Computer Graphics and Image Processing, 6, 5, October 1977, 492–501. 13. R. Nevatia and K. R. Babu, “Linear Feature Extraction and Description,” Computer Graphics and Image Processing, 13, 3, July 1980, 257–269. 14. A. P. Paplinski, “Directional Filtering in Edge Detection,” IEEE Trans. Image Process- ing, IP-7, 4, April 1998, 611–615. 15. I. E. Abdou and W. K. Pratt, “Quantitative Design and Evaluation of Enhancement/ Thresholding Edge Detectors,” Proc. IEEE, 67, 5, May 1979, 753–763. 16. K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, New York, 1972. 17. P. V. Henstock and D. M. Chelberg, “Automatic Gradient Threshold Determination for Edge Detection,” IEEE Trans. Image Processing, IP-5, 5, May 1996, 784–787. 18. V. Torre and T. A. Poggio, “On Edge Detection,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 2, March 1986, 147–163. 19. D. Marr and E. Hildrith, “Theory of Edge Detection,” Proc. Royal Society of London, B207, 1980, 187–217. 20. J. S. Wiejak, H. Buxton, and B. F. Buxton, “Convolution with Separable Masks for Early Image Processing,” Computer Vision, Graphics, and Image Processing, 32, 3, December 1985, 279–290. 21. A. Huertas and G. Medioni, “Detection of Intensity Changes Using Laplacian-Gaussian Masks,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-8, 5, September 1986, 651–664. 22. R. M. Haralick, “Digital Step Edges from Zero Crossing of Second Directional Deriva- tives,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, 1, January 1984, 58–68. 23. M. Hueckel, “An Operator Which Locates Edges in Digital Pictures,” J. Association for Computing Machinery, 18, 1, January 1971, 113–125. 24. V. S. Nalwa and T. O. Binford, “On Detecting Edges,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-6, November 1986, 699–714. 25. J. R. Fram and E. S. Deutsch, “On the Evaluation of Edge Detection Schemes and Their Comparison with Human Performance,” IEEE Trans. Computers, C-24, 6, June 1975, 616–628.
  • 512. 508 EDGE DETECTION 26. M. D. Heath, et al., “A Robust Visual Method for Assessing the Relative Performance of Edge-Detection Algorithms,” IEEE Trans. Pattern Analysis and Machine Intelligence, PAMI-19, 12, December 1997, 1338–1359. 27. V. Berzins, “Accuracy of Laplacian Edge Detectors,” Computer Vision, Graphics, and Image Processing, 27, 2, August 1984, 195–210. 28. W. K. Pratt, Digital Image Processing, Wiley-Interscience, New York, 1978, 497–499. 29. G. S. Robinson, “Color Edge Detection,” Proc. SPIE Symposium on Advances in Image Transmission Techniques, 87, San Diego, CA, August 1976. 30. C. M. Cook and A. Rosenfeld, “Size Detectors,” Proc. IEEE Letters, 58, 12, December 1970, 1956–1957. 31. S. W. Zucker, A. Rosenfeld, and L. S. Davis, “Picture Segmentation by Texture Discrim- ination,” IEEE Trans. Computers, C-24, 12, December 1975, 1228–1233.
16 IMAGE FEATURE EXTRACTION

An image feature is a distinguishing primitive characteristic or attribute of an image. Some features are natural in the sense that such features are defined by the visual appearance of an image, while other, artificial features result from specific manipulations of an image. Natural features include the luminance of a region of pixels and gray scale textural regions. Image amplitude histograms and spatial frequency spectra are examples of artificial features.

Image features are of major importance in the isolation of regions of common property within an image (image segmentation) and subsequent identification or labeling of such regions (image classification). Image segmentation is discussed in Chapter 17. References 1 to 4 provide information on image classification techniques. This chapter describes several types of image features that have been proposed for image segmentation and classification. Before introducing them, however, methods of evaluating their performance are discussed.

16.1. IMAGE FEATURE EVALUATION

There are two quantitative approaches to the evaluation of image features: prototype performance and figure of merit. In the prototype performance approach for image classification, a prototype image with regions (segments) that have been independently categorized is classified by a classification procedure using various image features to be evaluated. The classification error is then measured for each feature set. The best set of features is, of course, that which results in the least classification error. The prototype performance approach for image segmentation is similar in nature. A prototype image with independently identified regions is segmented by a
segmentation procedure using a test set of features. Then, the detected segments are compared to the known segments, and the segmentation error is evaluated. The problems associated with the prototype performance methods of feature evaluation are the integrity of the prototype data and the fact that the performance indication is dependent not only on the quality of the features but also on the classification or segmentation ability of the classifier or segmenter.

The figure-of-merit approach to feature evaluation involves the establishment of some functional distance measurements between sets of image features such that a large distance implies a low classification error, and vice versa. Faugeras and Pratt (5) have utilized the Bhattacharyya distance (3) figure of merit for texture feature evaluation. The method should be extensible for other features as well. The Bhattacharyya distance (B-distance for simplicity) is a scalar function of the probability densities of features of a pair of classes defined as

B(S1, S2) = −ln { ∫ [ p(x | S1) p(x | S2) ]^1/2 dx }     (16.1-1)

where x denotes a vector containing individual image feature measurements with conditional density p(x | Si). It can be shown (3) that the B-distance is related monotonically to the Chernoff bound for the probability of classification error using a Bayes classifier. The bound on the error probability is

P ≤ [ P(S1) P(S2) ]^1/2 exp{ −B(S1, S2) }     (16.1-2)

where P(Si) represents the a priori class probability. For future reference, the Chernoff error bound is tabulated in Table 16.1-1 as a function of B-distance for equally likely feature classes. For Gaussian densities, the B-distance becomes

B(S1, S2) = (1/8) (u1 − u2)^T [ (Σ1 + Σ2) / 2 ]^−1 (u1 − u2) + (1/2) ln { |(Σ1 + Σ2)/2| / ( |Σ1|^1/2 |Σ2|^1/2 ) }     (16.1-3)

where ui and Σi represent the feature mean vector and the feature covariance matrix of the classes, respectively. Calculation of the B-distance for other densities is generally difficult. Consequently, the B-distance figure of merit is applicable only for Gaussian-distributed feature data, which fortunately is the common case. In practice, features to be evaluated by Eq. 16.1-3 are measured in regions whose class has been determined independently. Sufficient feature measurements need be taken so that the feature mean vector and covariance can be estimated accurately.
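For Gaussian-distributed features, Eqs. 16.1-2 and 16.1-3 can be evaluated directly, as in the following sketch (function names are illustrative).

```python
import numpy as np

def bhattacharyya_distance(u1, cov1, u2, cov2):
    """B-distance between two Gaussian feature classes (Eq. 16.1-3).

    u1, u2: mean feature vectors; cov1, cov2: feature covariance matrices.
    """
    du = np.asarray(u1, dtype=float) - np.asarray(u2, dtype=float)
    cov_avg = 0.5 * (np.asarray(cov1, dtype=float) + np.asarray(cov2, dtype=float))
    term1 = 0.125 * du @ np.linalg.solve(cov_avg, du)
    term2 = 0.5 * np.log(np.linalg.det(cov_avg) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

def chernoff_bound(b_distance, p1=0.5, p2=0.5):
    """Error bound of Eq. 16.1-2 for a priori class probabilities p1, p2."""
    return np.sqrt(p1 * p2) * np.exp(-b_distance)
```

For B(S1, S2) = 1 and equal priors, chernoff_bound returns approximately 1.84 × 10^−1, in agreement with the first entry of Table 16.1-1.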
TABLE 16.1-1. Relationship of Bhattacharyya Distance and Chernoff Error Bound

B(S1, S2)    Error Bound
    1        1.84 × 10^−1
    2        6.77 × 10^−2
    4        9.16 × 10^−3
    6        1.24 × 10^−3
    8        1.68 × 10^−4
   10        2.27 × 10^−5
   12        2.07 × 10^−6

16.2. AMPLITUDE FEATURES

The most basic of all image features is some measure of image amplitude in terms of luminance, tristimulus value, spectral value, or other units. There are many degrees of freedom in establishing image amplitude features. Image variables such as luminance or tristimulus values may be utilized directly, or alternatively, some linear, nonlinear, or perhaps noninvertible transformation can be performed to generate variables in a new amplitude space. Amplitude measurements may be made at specific image points [e.g., the amplitude F(j, k) at pixel coordinate (j, k)], or over a neighborhood centered at (j, k). For example, the average or mean image amplitude in a W × W pixel neighborhood is given by

M(j, k) = (1 / W²) Σ_{m=−w}^{w} Σ_{n=−w}^{w} F(j + m, k + n)     (16.2-1)

where W = 2w + 1. An advantage of a neighborhood, as opposed to a point measurement, is a diminishing of noise effects because of the averaging process. A disadvantage is that object edges falling within the neighborhood can lead to erroneous measurements.

The median of pixels within a W × W neighborhood can be used as an alternative amplitude feature to the mean measurement of Eq. 16.2-1, or as an additional feature. The median is defined to be that pixel amplitude in the window for which one-half of the pixels are equal or smaller in amplitude, and one-half are equal or greater in amplitude. Another useful image amplitude feature is the neighborhood standard deviation, which can be computed as

S(j, k) = (1 / W) [ Σ_{m=−w}^{w} Σ_{n=−w}^{w} [ F(j + m, k + n) − M(j + m, k + n) ]² ]^1/2     (16.2-2)
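A sketch of these neighborhood amplitude features follows. It uses standard separable filters and, for the standard deviation, the common variant that subtracts the window mean at the center pixel rather than the per-pixel mean written in Eq. 16.2-2, so it should be read as an approximation.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def amplitude_features(image, w=3):
    """Neighborhood mean, standard deviation, and median over a W x W window,
    with W = 2w + 1 (cf. Eqs. 16.2-1 and 16.2-2, up to boundary handling and the
    mean convention noted above)."""
    size = 2 * w + 1
    f = image.astype(float)
    mean = uniform_filter(f, size=size)                   # Eq. 16.2-1
    mean_sq = uniform_filter(f * f, size=size)
    std = np.sqrt(np.maximum(mean_sq - mean * mean, 0))   # root of the local variance
    median = median_filter(f, size=size)
    return mean, std, median
```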
  • 516. 512 IMAGE FEATURE EXTRACTION (a) Original (b) 7 × 7 pyramid mean (c) 7 × 7 standard deviation (d ) 7 × 7 plus median FIGURE 16.2-1. Image amplitude features of the washington_ir image. In the literature, the standard deviation image feature is sometimes called the image dispersion. Figure 16.2-1 shows an original image and the mean, median, and stan- dard deviation of the image computed over a small neighborhood. The mean and standard deviation of Eqs. 16.2-1 and 16.2-2 can be computed indirectly in terms of the histogram of image pixels within a neighborhood. This leads to a class of image amplitude histogram features. Referring to Section 5.7, the first-order probability distribution of the amplitude of a quantized image may be defined as P ( b ) = P R [ F ( j, k ) = r b ] (16.2-3) where r b denotes the quantized amplitude level for 0 ≤ b ≤ L – 1 . The first-order his- togram estimate of P(b) is simply
P(b) ≈ N(b) / M     (16.2-4)

where M represents the total number of pixels in a neighborhood window centered about (j, k), and N(b) is the number of pixels of amplitude r_b in the same window.

The shape of an image histogram provides many clues as to the character of the image. For example, a narrowly distributed histogram indicates a low-contrast image. A bimodal histogram often suggests that the image contains an object with a narrow amplitude range against a background of differing amplitude. The following measures have been formulated as quantitative shape descriptions of a first-order histogram (6).

Mean:

S_M ≡ b̄ = Σ_{b=0}^{L−1} b P(b)     (16.2-5)

Standard deviation:

S_D ≡ σ_b = [ Σ_{b=0}^{L−1} (b − b̄)² P(b) ]^1/2     (16.2-6)

Skewness:

S_S = (1 / σ_b³) Σ_{b=0}^{L−1} (b − b̄)³ P(b)     (16.2-7)

Kurtosis:

S_K = (1 / σ_b⁴) Σ_{b=0}^{L−1} (b − b̄)⁴ P(b) − 3     (16.2-8)

Energy:

S_N = Σ_{b=0}^{L−1} [ P(b) ]²     (16.2-9)

Entropy:

S_E = − Σ_{b=0}^{L−1} P(b) log2 { P(b) }     (16.2-10)
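These first-order shape measures can be computed directly from a window histogram, as in the sketch below, which assumes quantized nonnegative integer amplitudes and a nonconstant window (so that the standard deviation is nonzero).

```python
import numpy as np

def histogram_features(window, levels=256):
    """First-order histogram shape measures of Eqs. 16.2-5 through 16.2-10
    for a window of integer pixel amplitudes in [0, levels)."""
    counts = np.bincount(window.ravel(), minlength=levels).astype(float)
    p = counts / counts.sum()                            # Eq. 16.2-4
    b = np.arange(levels, dtype=float)
    mean = np.sum(b * p)                                 # S_M
    sd = np.sqrt(np.sum((b - mean) ** 2 * p))            # S_D
    skew = np.sum((b - mean) ** 3 * p) / sd ** 3         # S_S (assumes sd > 0)
    kurt = np.sum((b - mean) ** 4 * p) / sd ** 4 - 3.0   # S_K
    energy = np.sum(p ** 2)                              # S_N
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))      # S_E
    return mean, sd, skew, kurt, energy, entropy
```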
FIGURE 16.2-2. Relationship of pixel pairs.

The factor of 3 inserted in the expression for the kurtosis measure normalizes S_K to zero for a zero-mean, Gaussian-shaped histogram. Another useful histogram shape measure is the histogram mode, which is the pixel amplitude corresponding to the histogram peak (i.e., the most commonly occurring pixel amplitude in the window). If the histogram peak is not unique, the pixel at the peak closest to the mean is usually chosen as the histogram shape descriptor.

Second-order histogram features are based on the definition of the joint probability distribution of pairs of pixels. Consider two pixels F(j, k) and F(m, n) that are located at coordinates (j, k) and (m, n), respectively, and, as shown in Figure 16.2-2, are separated by r radial units at an angle θ with respect to the horizontal axis. The joint distribution of image amplitude values is then expressed as

P(a, b) = P_R[ F(j, k) = r_a, F(m, n) = r_b ]     (16.2-11)

where r_a and r_b represent quantized pixel amplitude values. As a result of the discrete rectilinear representation of an image, the separation parameters (r, θ) may assume only certain discrete values. The histogram estimate of the second-order distribution is

P(a, b) ≈ N(a, b) / M     (16.2-12)

where M is the total number of pixels in the measurement window and N(a, b) denotes the number of occurrences for which F(j, k) = r_a and F(m, n) = r_b.

If the pixel pairs within an image are highly correlated, the entries in P(a, b) will be clustered along the diagonal of the array. Various measures, listed below, have been proposed (6,7) as measures that specify the energy spread about the diagonal of P(a, b).

Autocorrelation:

S_A = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} a b P(a, b)     (16.2-13)
Covariance:

S_C = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} (a − ā)(b − b̄) P(a, b)     (16.2-14a)

where

ā = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} a P(a, b)     (16.2-14b)

b̄ = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} b P(a, b)     (16.2-14c)

Inertia:

S_I = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} (a − b)² P(a, b)     (16.2-15)

Absolute value:

S_V = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} |a − b| P(a, b)     (16.2-16)

Inverse difference:

S_F = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} P(a, b) / [ 1 + (a − b)² ]     (16.2-17)

Energy:

S_G = Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} [ P(a, b) ]²     (16.2-18)

Entropy:

S_T = − Σ_{a=0}^{L−1} Σ_{b=0}^{L−1} P(a, b) log2 { P(a, b) }     (16.2-19)

The utilization of second-order histogram measures for texture analysis is considered in Section 16.6.
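Given an estimate of P(a, b) obtained, for example, by counting pixel pairs as in Eq. 16.2-12, the spread measures above can be evaluated as in the following sketch.

```python
import numpy as np

def cooccurrence_features(p):
    """Shape measures of the second-order histogram P(a, b)
    (Eqs. 16.2-13 through 16.2-19). p is an L x L array that sums to 1."""
    L = p.shape[0]
    a = np.arange(L, dtype=float)[:, None]
    b = np.arange(L, dtype=float)[None, :]
    a_mean = np.sum(a * p)
    b_mean = np.sum(b * p)
    autocorrelation = np.sum(a * b * p)                        # S_A
    covariance = np.sum((a - a_mean) * (b - b_mean) * p)       # S_C
    inertia = np.sum((a - b) ** 2 * p)                         # S_I
    absolute_value = np.sum(np.abs(a - b) * p)                 # S_V
    inverse_difference = np.sum(p / (1.0 + (a - b) ** 2))      # S_F
    energy = np.sum(p ** 2)                                    # S_G
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))            # S_T
    return (autocorrelation, covariance, inertia, absolute_value,
            inverse_difference, energy, entropy)
```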
16.3. TRANSFORM COEFFICIENT FEATURES

The coefficients of a two-dimensional transform of a luminance image specify the amplitude of the luminance patterns (two-dimensional basis functions) of a transform such that the weighted sum of the luminance patterns is identical to the image. By this characterization of a transform, the coefficients may be considered to indicate the degree of correspondence of a particular luminance pattern with an image field. If a basis pattern is of the same spatial form as a feature to be detected within the image, image detection can be performed simply by monitoring the value of the transform coefficient. The problem, in practice, is that objects to be detected within an image are often of complex shape and luminance distribution, and hence do not correspond closely to the more primitive luminance patterns of most image transforms.

Lendaris and Stanley (8) have investigated the application of the continuous two-dimensional Fourier transform of an image, obtained by a coherent optical processor, as a means of image feature extraction. The optical system produces an electric field radiation pattern proportional to

F(ωx, ωy) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} F(x, y) exp{ −i(ωx x + ωy y) } dx dy     (16.3-1)

where (ωx, ωy) are the image spatial frequencies. An optical sensor produces an output

M(ωx, ωy) = |F(ωx, ωy)|²     (16.3-2)

proportional to the intensity of the radiation pattern. It should be observed that F(ωx, ωy) and F(x, y) are unique transform pairs, but M(ωx, ωy) is not uniquely related to F(x, y). For example, M(ωx, ωy) does not change if the origin of F(x, y) is shifted. In some applications, the translation invariance of M(ωx, ωy) may be a benefit. Angular integration of M(ωx, ωy) over the spatial frequency plane produces a spatial frequency feature that is invariant to translation and rotation. Representing M(ωx, ωy) in polar form, this feature is defined as

N(ρ) = ∫_{0}^{2π} M(ρ, θ) dθ     (16.3-3)

where θ = arctan{ ωx / ωy } and ρ² = ωx² + ωy². Invariance to changes in scale is an attribute of the feature

P(θ) = ∫_{0}^{∞} M(ρ, θ) dρ     (16.3-4)
FIGURE 16.3-1. Fourier transform feature masks.

The Fourier domain intensity pattern M(ωx, ωy) is normally examined in specific regions to isolate image features. As an example, Figure 16.3-1 defines regions for the following Fourier features:

Horizontal slit:

S1(m) = ∫_{−∞}^{∞} ∫_{ωy(m)}^{ωy(m+1)} M(ωx, ωy) dωx dωy     (16.3-5)

Vertical slit:

S2(m) = ∫_{ωx(m)}^{ωx(m+1)} ∫_{−∞}^{∞} M(ωx, ωy) dωx dωy     (16.3-6)

Ring:

S3(m) = ∫_{ρ(m)}^{ρ(m+1)} ∫_{0}^{2π} M(ρ, θ) dρ dθ     (16.3-7)

Sector:

S4(m) = ∫_{0}^{∞} ∫_{θ(m)}^{θ(m+1)} M(ρ, θ) dρ dθ     (16.3-8)
FIGURE 16.3-2. Discrete Fourier spectra of objects; log magnitude displays: (a) rectangle; (b) rectangle transform; (c) ellipse; (d) ellipse transform; (e) triangle; (f) triangle transform.

For a discrete image array F(j, k), the discrete Fourier transform

F(u, v) = (1 / N) Σ_{j=0}^{N−1} Σ_{k=0}^{N−1} F(j, k) exp{ (−2πi / N)(u j + v k) }     (16.3-9)

for u, v = 0, …, N − 1 can be examined directly for feature extraction purposes.
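A discrete analog of the ring and sector features of Eqs. 16.3-7 and 16.3-8 might be computed from the squared magnitude of the FFT as sketched below; the centering, the folding together of opposite sectors, and the uniform bin edges are illustrative choices.

```python
import numpy as np

def fourier_ring_sector_features(image, n_rings=8, n_sectors=8):
    """Discrete analogs of the ring and sector features of Eqs. 16.3-7 and 16.3-8,
    computed from the squared magnitude of the centered 2-D FFT (a sketch)."""
    m = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2      # intensity M(wx, wy)
    rows, cols = m.shape
    y, x = np.indices((rows, cols))
    wx = x - cols // 2                                        # centered spatial frequencies
    wy = y - rows // 2
    rho = np.hypot(wx, wy)
    theta = np.mod(np.arctan2(wy, wx), np.pi)                 # fold opposite sectors together
    ring_edges = np.linspace(0.0, rho.max() + 1e-9, n_rings + 1)
    sector_edges = np.linspace(0.0, np.pi, n_sectors + 1)
    rings = [m[(rho >= ring_edges[i]) & (rho < ring_edges[i + 1])].sum()
             for i in range(n_rings)]
    sectors = [m[(theta >= sector_edges[i]) & (theta < sector_edges[i + 1])].sum()
               for i in range(n_sectors)]
    return np.array(rings), np.array(sectors)
```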
Horizontal slit, vertical slit, ring, and sector features can be defined analogously to Eqs. 16.3-5 to 16.3-8. This concept can be extended to other unitary transforms, such as the Hadamard and Haar transforms. Figure 16.3-2 presents discrete Fourier transform log magnitude displays of several geometric shapes.

16.4. TEXTURE DEFINITION

Many portions of images of natural scenes are devoid of sharp edges over large areas. In these areas, the scene can often be characterized as exhibiting a consistent structure analogous to the texture of cloth. Image texture measurements can be used to segment an image and classify its segments.

Several authors have attempted qualitatively to define texture. Pickett (9) states that "texture is used to describe two dimensional arrays of variations... The elements and rules of spacing or arrangement may be arbitrarily manipulated, provided a characteristic repetitiveness remains." Hawkins (10) has provided a more detailed description of texture: "The notion of texture appears to depend upon three ingredients: (1) some local 'order' is repeated over a region which is large in comparison to the order's size, (2) the order consists in the nonrandom arrangement of elementary parts and (3) the parts are roughly uniform entities having approximately the same dimensions everywhere within the textured region." Although these descriptions of texture seem perceptually reasonable, they do not immediately lead to simple quantitative textural measures in the sense that the description of an edge discontinuity leads to a quantitative description of an edge in terms of its location, slope angle, and height.

Texture is often qualitatively described by its coarseness in the sense that a patch of wool cloth is coarser than a patch of silk cloth under the same viewing conditions. The coarseness index is related to the spatial repetition period of the local structure. A large period implies a coarse texture; a small period implies a fine texture. This perceptual coarseness index is clearly not sufficient as a quantitative texture measure, but can at least be used as a guide for the slope of texture measures; that is, small numerical texture measures should imply fine texture, and large numerical measures should indicate coarse texture. It should be recognized that texture is a neighborhood property of an image point. Therefore, texture measures are inherently dependent on the size of the observation neighborhood. Because texture is a spatial property, measurements should be restricted to regions of relative uniformity. Hence it is necessary to establish the boundary of a uniform textural region by some form of image segmentation before attempting texture measurements.

Texture may be classified as being artificial or natural. Artificial textures consist of arrangements of symbols, such as line segments, dots, and stars, placed against a neutral background. Several examples of artificial texture are presented in Figure 16.4-1 (9). As the name implies, natural textures are images of natural scenes containing semirepetitive arrangements of pixels. Examples include photographs of brick walls, terrazzo tile, sand, and grass. Brodatz (11) has published an album of photographs of naturally occurring textures. Figure 16.4-2 shows several natural texture examples obtained by digitizing photographs from the Brodatz album.
FIGURE 16.4-1. Artificial texture.
  • 525. VISUAL TEXTURE DISCRIMINATION 521 (a) Sand (b) Grass (c) Wool (d) Raffia FIGURE 16.4-2. Brodatz texture fields. 16.5. VISUAL TEXTURE DISCRIMINATION A discrete stochastic field is an array of numbers that are randomly distributed in amplitude and governed by some joint probability density (12). When converted to light intensities, such fields can be made to approximate natural textures surpris- ingly well by control of the generating probability density. This technique is useful for generating realistic appearing artificial scenes for applications such as airplane flight simulators. Stochastic texture fields are also an extremely useful tool for investigating human perception of texture as a guide to the development of texture feature extraction methods. In the early 1960s, Julesz (13) attempted to determine the parameters of stochas- tic texture fields of perceptual importance. This study was extended later by Julesz et al. (14–16). Further extensions of Julesz’s work have been made by Pollack (17),
Purks and Richards (18), and Pratt et al. (19). These studies have provided valuable insight into the mechanism of human visual perception and have led to some useful quantitative texture measurement methods.

FIGURE 16.5-1. Stochastic texture field generation model.

Figure 16.5-1 is a model for stochastic texture generation. In this model, an array of independent, identically distributed random variables W(j, k) passes through a linear or nonlinear spatial operator O{·} to produce a stochastic texture array F(j, k). By controlling the form of the generating probability density p(W) and the spatial operator, it is possible to create texture fields with specified statistical properties. Consider a continuous amplitude pixel x0 at some coordinate (j, k) in F(j, k). Let the set {z1, z2, …, zJ} denote neighboring pixels, but not necessarily nearest geometric neighbors, raster scanned in a conventional top-to-bottom, left-to-right fashion. The conditional probability density of x0 conditioned on the state of its neighbors is given by

p(x0 | z1, …, zJ) = p(x0, z1, …, zJ) / p(z1, …, zJ)     (16.5-1)

The first-order density p(x0) employs no conditioning, the second-order density p(x0 | z1) implies that J = 1, the third-order density implies that J = 2, and so on.

16.5.1. Julesz Texture Fields

In his pioneering tex