INTRODUCTION TO
REMOTE SENSING
Second Edition
Arthur P. Cracknell
Ladson Hayes
CRC Press
Taylor & Francis Group
6000 Broken Sound Parkway NW, Suite 300
Boca Raton, FL 33487-2742
© 2007 by Taylor & Francis Group, LLC
CRC Press is an imprint of Taylor & Francis Group, an Informa business
No claim to original U.S. Government works
Version Date: 20140113
International Standard Book Number-13: 978-1-4200-0897-5 (eBook - PDF)
This book contains information obtained from authentic and highly regarded sources. Reasonable
efforts have been made to publish reliable data and information, but the author and publisher cannot
assume responsibility for the validity of all materials or the consequences of their use. The authors and
publishers have attempted to trace the copyright holders of all material reproduced in this publication
and apologize to copyright holders if permission to publish in this form has not been obtained. If any
copyright material has not been acknowledged please write and let us know so we may rectify in any
future reprint.
Except as permitted under U.S. Copyright Law, no part of this book may be reprinted, reproduced,
transmitted, or utilized in any form by any electronic, mechanical, or other means, now known or
hereafter invented, including photocopying, microfilming, and recording, or in any information storage or retrieval system, without written permission from the publishers.
For permission to photocopy or use material electronically from this work, please access www.copyright.com (http://www.copyright.com/) or contact the Copyright Clearance Center, Inc. (CCC), 222
Rosewood Drive, Danvers, MA 01923, 978-750-8400. CCC is a not-for-profit organization that provides licenses and registration for a variety of users. For organizations that have been granted a photocopy license by the CCC, a separate system of payment has been arranged.
Trademark Notice: Product or corporate names may be trademarks or registered trademarks, and are
used only for identification and explanation without intent to infringe.
Visit the Taylor & Francis Web site at
http://www.taylorandfrancis.com
and the CRC Press Web site at
http://www.crcpress.com
Preface
In this textbook we describe the physical principles of common remote
sensing systems and discuss the processing, interpretation, and applications
of the data. In this second edition we have maintained the original style and
approach of the first edition, but all the chapters have been revised, taking
into account the many developments in remote sensing which have taken
place over the last 15 years. Chapter 3 has been extended to include details
of the more important new satellite systems launched since the first edition
was written, although many more systems have been developed and
launched than we could possibly include (details of other systems will be
found in the comprehensive reference book by H.J. Kramer; see the list of
references). Chapter 5 includes new sections on airborne lidar for land
surveys and airborne gamma ray spectroscopy and chapter 7 has a new
section on interferometric synthetic aperture radar. The discussion of now-obsolete hardware, particularly for printing images, has been omitted from chapter 9, and the discussion of filtering of images has been expanded.
Chapter 10 has been updated to include a number of recent applications,
particularly some that make use of global datasets.
The references and the bibliography (formerly Appendix I) have been
updated, but Appendix II on sources of remotely-sensed data in the first
edition has been deleted because, these days, anyone looking for satellite
data will presumably use some search engine to locate the source of the data
on the internet. The list of abbreviations and acronyms (originally Appendix
III) has been retained and updated.
We are grateful to Dr. Franco Coren for assistance with the section on airborne
lidar for land surveys and for supplying Figures 5.7, 5.8, and 5.9, to Prof. Lucy
Wyatt for suggestions regarding chapter 6 on ground wave and sky wave
radars, to Dr. Rudi Gens for comments on interferometric SAR (Section 7.5),
and to Dr. Iain Woodhouse for supplying the digital file of Figure 7.23.
We are, as before, grateful to the holders of the copyrights of material that
we have used; the sources are acknowledged in situ.
Arthur Cracknell
Ladson Hayes
About the Authors
Prof. Arthur Cracknell graduated with a degree in physics from Cambridge
University in 1961. He later earned his doctor of philosophy from Oxford
University, where his dissertation was entitled “Some Band Structure
Calculations for Metals.” Prof. Cracknell worked as a lecturer in physics at
Singapore University (now the National University of Singapore) from 1964
to 1967 and at Essex University from 1967 to 1970 before moving to Dundee
University in 1970 where he became a professor in 1978. He retired from
Dundee University in 2002 and now holds the title of emeritus professor
there. He is currently working on various short-term contracts with several
Far Eastern universities.
After several years of research work on the study of group-theoretical
techniques in solid-state physics, Prof. Cracknell turned his research interests
in the late 1970s to remote sensing. Editor of the International Journal of
Remote Sensing for more than 20 years, Prof. Cracknell, along with his colleagues and research students, has published approximately 250 research
papers and is the author or coauthor of several books, both on theoretical
solid-state physics and on remote sensing. His latest books include The
Advanced Very High Resolution Radiometer (Taylor & Francis, 1997) and Visible
Infrared Imager Radiometer Suite: A New Operational Cloud Imager (CRC Press,
Taylor & Francis, 2006), written with Keith Hutchison, about the VIIRS,
which is planned to be the successor to the Advanced Very High Resolution
Radiometer.
Prof. Ladson Hayes read for a doctor of philosophy under the supervision
of Arthur Cracknell and is now a lecturer in electrical and electronic engineering at the University of Dundee, Scotland.
Table of Contents

Chapter 1  An Introduction to Remote Sensing
  1.1 Introduction
  1.2 Aircraft Versus Satellites
  1.3 Weather Satellites
  1.4 Observations of the Earth’s Surface
  1.5 Communications and Data Collection Systems
    1.5.1 Communications Systems
    1.5.2 Data Collection Systems

Chapter 2  Sensors and Instruments
  2.1 Introduction
  2.2 Electromagnetic Radiation
  2.3 Visible and Near-Infrared Sensors
  2.4 Thermal-Infrared Sensors
  2.5 Microwave Sensors
  2.6 Sonic Sensors
    2.6.1 Sound Navigation and Ranging
    2.6.2 Echo Sounding
    2.6.3 Side Scan Sonar

Chapter 3  Satellite Systems
  3.1 Introduction
  3.2 Meteorological Remote Sensing Satellites
    3.2.1 Polar-Orbiting Meteorological Satellites
    3.2.2 Geostationary Meteorological Satellites
  3.3 Nonmeteorological Remote Sensing Satellites
    3.3.1 Landsat
    3.3.2 SPOT
    3.3.3 Resurs-F and Resurs-O
    3.3.4 IRS
    3.3.5 Pioneering Oceanographic Satellites
    3.3.6 ERS
    3.3.7 TOPEX/Poseidon
    3.3.8 Other Systems
  3.4 Resolution
    3.4.1 Spectral Resolution
    3.4.2 Spatial Resolution
    3.4.3 Frequency of Coverage

Chapter 4  Data Reception, Archiving, and Distribution
  4.1 Introduction
  4.2 Data Reception from the TIROS-N/NOAA Series of Satellites
  4.3 Data Reception from Other Remote Sensing Satellites
  4.4 Archiving and Distribution

Chapter 5  Lasers and Airborne Remote Sensing Systems
  5.1 Introduction
  5.2 Early Airborne Lidar Systems
  5.3 Lidar Bathymetry
  5.4 Lidar for Land Surveys
    5.4.1 Positioning and Direct Georeferencing of Laser Data
    5.4.2 Applications of Airborne Lidar Scanning
  5.5 Laser Fluorosensing
  5.6 Airborne Gamma Ray Spectroscopy

Chapter 6  Ground Wave and Sky Wave Radar Techniques
  6.1 Introduction
  6.2 The Radar Equation
  6.3 Ground Wave Systems
  6.4 Sky Wave Systems

Chapter 7  Active Microwave Instruments
  7.1 Introduction
  7.2 The Altimeter
  7.3 The Scatterometer
  7.4 Synthetic Aperture Radar
  7.5 Interferometric Synthetic Aperture Radar

Chapter 8  Atmospheric Corrections to Passive Satellite Remote Sensing Data
  8.1 Introduction
  8.2 Radiative Transfer Theory
  8.3 Physical Processes Involved in Atmospheric Correction
    8.3.1 Emitted Radiation
      8.3.1.1 Surface Radiance: L1(λ), T1
      8.3.1.2 Upwelling Atmospheric Radiance: L2(λ), T2
      8.3.1.3 Downwelling Atmospheric Radiance: L3(λ), T3
      8.3.1.4 Space Component: L4(λ), T4
      8.3.1.5 Total Radiance: L*(λ), Tb
      8.3.1.6 Calculation of Sea-Surface Temperature
    8.3.2 Reflected Radiation
    8.3.3 Atmospheric Transmission
      8.3.3.1 Scattering by Air Molecules
      8.3.3.2 Absorption by Gases
      8.3.3.3 Scattering by Aerosol Particles
  8.4 Thermal-Infrared Scanners and Passive Microwave Scanners
    8.4.1 The Radiative Transfer Equation
    8.4.2 Thermal-Infrared Scanner Data
    8.4.3 Passive Microwave Scanner Data
  8.5 Visible Wavelength Scanners
    8.5.1 Calibration of the Data
    8.5.2 Atmospheric Corrections to the Satellite-Received Radiance
    8.5.3 Algorithms for the Extraction of Marine Parameters from Water-Leaving Radiance

Chapter 9  Image Processing
  9.1 Introduction
  9.2 Digital Image Displays
  9.3 Image Processing Systems
  9.4 Density Slicing
  9.5 Image Processing Programs
  9.6 Image Enhancement
    9.6.1 Contrast Enhancement
    9.6.2 Edge Enhancement
    9.6.3 Image Smoothing
  9.7 Multispectral Images
  9.8 Principal Components
  9.9 Fourier Transforms

Chapter 10  Applications of Remotely Sensed Data
  10.1 Introduction
  10.2 Applications to the Atmosphere
    10.2.1 Weather Satellites in Forecasting and Nowcasting
    10.2.2 Weather Radars in Forecasting
    10.2.3 Determination of Temperature Changes with Height from Satellites
    10.2.4 Measurements of Wind Speed
      10.2.4.1 Tropospheric Estimations from Cloud Motion
      10.2.4.2 Microwave Estimations of Surface Wind Shear
      10.2.4.3 Sky Wave Radar
    10.2.5 Hurricane Prediction and Tracking
    10.2.6 Satellite Climatology
      10.2.6.1 Cloud Climatology
      10.2.6.2 Global Temperature
      10.2.6.3 Global Moisture
      10.2.6.4 Global Ozone
      10.2.6.5 Summary
  10.3 Applications to the Geosphere
    10.3.1 Geological Information from Electromagnetic Radiation
    10.3.2 Geological Information from the Thermal Spectrum
      10.3.2.1 Thermal Mapping
      10.3.2.2 Engineering Geology
      10.3.2.3 Geothermal and Volcano Studies
      10.3.2.4 Detecting Underground and Surface Coal Fires
    10.3.3 Geological Information from Radar Data
    10.3.4 Geological Information from Potential Field Data
    10.3.5 Geological Information from Sonars
  10.4 Applications to the Biosphere
    10.4.1 Agriculture
    10.4.2 Forestry
    10.4.3 Spatial Information Systems: Land Use and Land Cover Mapping
  10.5 Applications to the Hydrosphere
    10.5.1 Hydrology
    10.5.2 Oceanography and Marine Resources
      10.5.2.1 Satellite Views of Upwelling
      10.5.2.2 Sea-Surface Temperatures
      10.5.2.3 Monitoring Pollution
  10.6 Applications to the Cryosphere
  10.7 Postscript

References
Bibliography
Appendix
Index
1
An Introduction to Remote Sensing

1.1 Introduction
Remote sensing may be taken to mean the observation of, or gathering of
information about, a target by a device separated from it by some distance.
The expression “remote sensing” was coined by geographers at the U.S.
Office of Naval Research in the 1960s at about the time that the use of “spy”
satellites was beginning to move out of the military sphere and into the
civilian sphere. Remote sensing is often regarded as being synonymous with
the use of artificial satellites and, in this regard, may call to mind glossy
calendars and coffee-table books of images of various parts of the Earth (see,
for example, Sheffield [1981, 1983]; Bullard and Dixon-Gough [1985]; and
Arthus-Bertrand [2002]) or the satellite images that are commonly shown on
television weather forecasts. Although satellites do play an important role
in remote sensing, remote sensing activity not only precedes the expression
but also dates from long before the launch of the first artificial satellite. There
are a number of ways of gathering remotely sensed data that do not involve
satellites and that, indeed, have been in use for very much longer than
satellites. For example, virtually all of astronomy can be regarded as being
built upon the basis of remote sensing data. However, this book is concerned
with terrestrial remote sensing. Photogrammetric techniques, using air photos for mapping purposes, were widely used for several decades before
satellite images became available. The idea of taking photographs of the
surface of the Earth from a platform elevated above the surface of the Earth
was originally put into practice by balloonists in the nineteenth century; the
earliest known photograph from a balloon was taken of the village of Petit
Bicêtre near Paris in 1859. Military reconnaissance aircraft in World War I
and, even more so, in World War II helped to substantially develop aerial
photographic techniques. This technology was later advanced by the invention and development of radar and thermal-infrared systems.
Some of the simpler instruments, principally cameras, that are used in
remote sensing also date from long before the days of artificial satellites. The
principle of the pinhole camera and the camera obscura has been known for
centuries, and the photographic process for permanently recording an image
on a plate, film, or paper was developed in the earlier part of the nineteenth
century. If remote sensing is regarded as the acquisition of information about
an object without physical contact with it, almost any use of photography
in a scientific or technical context may be thought of as remote sensing. For
some decades, a great deal of survey work has been done by the interpretation of aerial photography obtained from low-level flights using light
aircraft; sophisticated photogrammetric techniques have come to be applied
in this type of work. It is important to realize, however, that in addition to
conventional photography (photography using cameras with film that is
sensitive to light in the visible wavelength range), other important instruments and techniques are used in remote sensing work. For instance, infrared
photography can be used instead of the conventional visible wavelength
range photography. Color-infrared photography, which was originally
developed as a military reconnaissance tool, was found to be extremely
valuable in scientific studies of vegetation. Alternatively, multispectral scanners may be used in place of cameras. These scanners can be built to operate
in the microwave range as well as in the visible, near-infrared, and thermal-infrared ranges of the electromagnetic spectrum. One can also use active
techniques based on the principles of radar, where the instrument itself
generates the radiation that is used. However, the instruments may differ very
substantially from commercially available radars that are used for navigation
and to ensure the safety of shipping and aircraft.
There are other means of seeking and transmitting information apart from
using electromagnetic radiation as the carrier of the information in remote
sensing activities. One alternative is to use ultrasonic waves. Although these
waves do not travel far in the atmosphere, they travel large distances under
water with only very slight attenuation; this makes them particularly valuable for use in bathymetric work in rivers and seas, for hunting for submerged wrecks, for the inspection of underwater installations and pipelines,
for the detection of fish and submarines, and for underwater communications
purposes (see Cracknell [1980]). Figure 1.1 shows an image of old, flooded
limestone mine workings obtained with underwater ultrasonic equipment.

FIGURE 1.1
Sonar image of part of a flooded abandoned limestone mine in the West Midlands of England. (Cook, 1985.)
Remote sensing involves more than generation and interpretation of data
in the form of images. For instance, data on pressure, temperature, and
humidity at different heights in the atmosphere are routinely gathered by
meteorological services around the world using rockets and balloons carrying expendable instrument packages that are released from the ground at
regular intervals. A great deal of scientific information about the upper layers
of the atmosphere is also gathered by radio sounding methods operated by
stations on the ground and from instruments flown on satellites. Close to
the ground, acoustic sounding methods are often used and weather radars
are used to monitor precipitation (see Section 10.2.2).
Notwithstanding the wide coverage actually implied in the term “remote
sensing,” we shall confine ourselves for the purpose of this book to studying
the gathering of information about the surface of the Earth and events on
the surface of the Earth — that is, we shall confine ourselves to Earth observation. This is not meant to imply that the gathering of data about other
planets in the solar system or the use of ultrasound for subsurface remote
sensing and communications purposes are unimportant. In dealing with the
observation of the Earth’s surface using remote sensing techniques, this book
will be considering a part of science that not only includes many purely
scientific problems but also has important applications in the everyday lives
of mankind. The observation of the Earth’s surface and events thereon
involves using a wide variety of instruments and platforms for the detection
of radiation at a variety of different wavelengths. The radiation itself may
be either radiation originating from the Sun, radiation emitted at the surface
of the Earth, or radiation generated by the remote sensing instruments themselves and reflected back from the Earth’s surface. A quite detailed treatise
and reference book on the subject is the Manual of Remote Sensing (Colwell,
1983; Henderson and Lewis, 1998; Rencz and Ryerson, 1999; Ustin, 2004;
Ryerson, 2006); many details that would not be proper to include in the
present book can be found in that treatise. In addition, a number of general
textbooks on the principles of Earth observation and its various applications
are available; some of these are listed in the Bibliography.
The original initiative behind the space program lay with the military. The
possibilities of aerial photography certainly began to be appreciated during
World War I, whereas in World War II, aerial photographs obtained by reconnaissance pilots, often at very considerable risk, were of enormous importance.
The use of infrared photographic film allowed camouflaged materials to be
distinguished from the air. There is little doubt that without the military
impetus, the whole program of satellite-based remote sensing after World War
II would be very much less developed than it is now. This book will not be
concerned with the military aspects of the subject. But as far as technical details
are concerned, it would be a reasonably safe assumption that any instrument
or facility that is available in the civilian satellite program has a corresponding
instrument or facility with similar or better performance in the military program, if there is any potential or actual military need for it. As has already
been indicated, the term “remote sensing” was coined in the early 1960s at
the time that the rocket and space technology that was developed for military
purposes after World War II was beginning to be transferred to the civilian
domain. The history of remote sensing may be conveniently divided into two
periods: the period prior to the space age (up to 1960) and the period thereafter.
The distinctions between these two periods are summarized in Table 1.1.
TABLE 1.1
Comparison of the Two Major Periods in the History of Remote Sensing

Prior to Space Age (1860–1960):
- Only one kind and date of photography
- Heavy reliance on the human analysis of unenhanced images
- Extensive use of photo interpretation keys
- Relatively good military/civil relations with respect to remote sensing
- Few problems with uninformed opportunists
- Minimal applicability of the “multi” concept
- Simple and inexpensive equipment, readily operated and maintained by resource-oriented workers
- Little concern about the renewability of resources, environmental protection, global resource information systems, and associated problems related to “signature extension,” “complexity of an area’s structure,” and/or the threat imposed by “economic weaponry”
- Heavy resistance to “technology acceptance” by potential users of remote sensing-derived information

Since 1960:
- Many kinds and dates of remote sensing data
- Heavy reliance on the machine analysis and enhancement of images
- Minimal use of photo interpretation keys
- Relatively poor military/civil relations with respect to remote sensing
- Many problems with uninformed opportunists
- Extensive applicability of the “multi” concept
- Complex and expensive equipment, not readily operated and maintained by resource-oriented workers
- Much concern about the renewability of resources, environmental protection, global resource information systems, and associated problems related to “signature extension,” “complexity of an area’s structure,” and/or the threat imposed by “economic weaponry”
- Continuing heavy resistance to “technology acceptance” by potential users of remote sensing-derived information

Adapted from Colwell, 1983.
Remote sensing is far from being a new technique. There was, in fact, a
very considerable amount of remote sensing work done prior to 1960,
although the actual term “remote sensing” had not yet been coined. The
activities of the balloonists in the nineteenth century and the activities of the
military in World Wars I and II have already been mentioned. Following
World War II, enormous advances were made on the military front. Spy
planes were developed that were capable of revealing, for example, the
installation of Soviet rocket bases in Cuba in 1962. Military satellites were
also launched; some were used to provide valuable meteorological data for
defense purposes and others were able to locate military installations and follow
the movements of armies. In the peacetime between World Wars I and II,
substantial advances were made in the use of aerial photography for civilian
applications in areas such as agriculture, cartography, forestry, and geology.
Subsequently, archaeologists began to appreciate its potential as well. Remote
sensing, in its earlier stages at least, was simply a new area in photointerpretation. The advent of artificial satellites gave remote sensing a new dimension.
The first photographs of the Earth taken from space were obtained in the
early 1960s. Man had previously only been able to study small portions of
the surface of the Earth at one time and had painstakingly built up maps
from a large number of local observations. The Earth was suddenly seen as
an entity, and its larger surface features were rendered visible in a way that
captivated people’s imaginations. In 1972, the United States launched its first
Earth Resources Technology Satellite (ERTS-1), which was later renamed
Landsat 1. It was then imagined that remote sensing would solve almost
every remaining problem in environmental science. Initially, there was enormous confidence in remote sensing and a considerable degree of overselling
of the new systems. To some extent, this boom was followed by a period of
disillusionment when it became obvious that, although valuable information
could be obtained, there were substantial difficulties to be overcome and
considerable challenges to be met. A more realistic view now prevails, and people have realized that remote sensing from satellites provides a
tool to be used in conjunction with traditional sources of information, such
as aerial photography and ground observation, to improve the knowledge
and understanding of a whole variety of environmental, scientific, engineering,
and human problems.
An extensive history of the development of remote sensing will be found
in the book by Kramer (2002), and Dr. Kramer has produced an even more comprehensive version, which is available at http://directory.eoportal.org/pres_ObservationoftheEarthanditsEnvironment.html
Before proceeding any further, it is worthwhile commenting on some
points that will be discussed in later sections. First, it is convenient to divide
remotely sensed material according to the wavelength of the electromagnetic
radiation used (optical, near-infrared, thermal-infrared, microwave, and
radio wavelengths). Secondly, it is convenient to distinguish between passive
and active sensing techniques. In a passive system, the remote sensing instrument simply receives whatever radiation happens to arrive and selects the
radiation of the particular wavelength range that it requires. In an active
system, the remote sensing instrument itself generates radiation, transmits
that radiation toward a target, receives the reflected radiation from the target,
and extracts information from the return signal. Thirdly, one or two points
need to be made regarding remote sensing satellites. Manned satellite programs are mentioned because these have often captured the popular imagination. The United States and the former Union of Soviet Socialist Republics
had for many years conducted manned satellite programs that included
cameras in their payloads. Although manned missions may be more spectacular than unmanned missions, they are necessarily of rather short duration and the amount of useful information obtained from them is relatively
small compared with the amount of useful information obtained from
unmanned satellites.
Among unmanned satellites, it is important to distinguish between polar
or near-polar orbiting satellites and geostationary satellites. Suppose that a
satellite of mass m travels in a circular orbit of radius r around the Earth, of
mass M; then it will experience a gravitational force of GMm/r² (G = gravitational constant), which is responsible for causing the acceleration rω² of the satellite in its orbit, where ω is the angular velocity. Thus, using Newton’s
second law of motion:

\[
\frac{GMm}{r^{2}} = mr\omega^{2} \tag{1.1}
\]

or

\[
\omega^{2} = \frac{GM}{r^{3}} \tag{1.2}
\]

and the period of revolution, T, of the satellite is then given by:

\[
T = \frac{2\pi}{\omega} = 2\pi\sqrt{\frac{r^{3}}{GM}} \tag{1.3}
\]

Since π, G, and M are constants, the period of revolution of the satellite
depends only on the radius of the orbit, provided the satellite is high enough
above the surface of the Earth for the air resistance to be negligible. It is very
common to put a remote sensing satellite into a near-polar orbit at about 800
to 900 km above the surface of the Earth; at that height, it has a period of
about 90 to 100 minutes. If the orbit has a larger radius, the period will be
longer. For the Moon, which has a period of about 28 days, the radius of the
orbit is about 384,400 km. Somewhere in between these two radii is one
value of the radius for which the period is exactly 24 hours, or 1 day. This
radius, which is approximately 42,250 km, corresponds to a height of about
35,900 km above the surface of the Earth. If one chooses an orbit of this
radius in the equatorial plane, rather than a polar orbit, and if the sense of
the movement of the satellite in this orbit is the same as the rotation of the
Earth, then the satellite will remain vertically over the same point on the
surface of the Earth (on the equator). This constitutes what is commonly
known as a geosynchronous or geostationary orbit.
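To make these formulas concrete, here is a short Python sketch (our own illustrative addition; the constants are standard values and the function names are arbitrary) that evaluates equation (1.3) for a typical near-polar remote sensing orbit and inverts it to recover the geostationary radius quoted above.

```python
import math

G = 6.674e-11      # gravitational constant (m^3 kg^-1 s^-2)
M = 5.972e24       # mass of the Earth (kg)
R_EARTH = 6.371e6  # mean radius of the Earth (m)

def orbital_period(r: float) -> float:
    """Equation (1.3): T = 2*pi*sqrt(r^3 / (G*M)) for a circular orbit of radius r."""
    return 2.0 * math.pi * math.sqrt(r**3 / (G * M))

def radius_for_period(T: float) -> float:
    """Equation (1.3) inverted: r = (G*M*T^2 / (4*pi^2))^(1/3)."""
    return (G * M * T**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

# Near-polar remote sensing orbit about 850 km above the surface:
r_low = R_EARTH + 850e3
print(f"Period at 850 km altitude: {orbital_period(r_low) / 60:.0f} min")  # ~102 min

# Orbit whose period is exactly 24 hours:
r_geo = radius_for_period(24 * 3600)
print(f"24-hour orbit radius:   {r_geo / 1e3:.0f} km")              # ~42,240 km
print(f"Corresponding altitude: {(r_geo - R_EARTH) / 1e3:.0f} km")  # ~35,870 km
```

The computed values reproduce the figures quoted above: a period of roughly 100 minutes at 800 to 900 km altitude, and a 24-hour orbit radius of roughly 42,250 km, about 35,900 km above the surface.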
1.2 Aircraft Versus Satellites
Remote sensing of the Earth from aircraft and from satellites is already
established in a number of areas of environmental science. Further applications
are constantly being developed as a result of improvements both in the technology itself and in people’s general awareness of the potential of remote
sensing techniques. Table 1.2 lists a number of areas for which remote sensing
is particularly useful. In the applications given, aircraft or satellite data are
used as appropriate to the purpose. There are several advantages of using
remotely sensed data obtained from an aircraft or satellite rather than using
data gathered by conventional methods. The main advantages are that data
can be gathered by aircraft or satellites quite frequently and over large areas.
The major disadvantage is that extraction of the required information from
the remotely sensed data may be difficult or, in some cases, impossible. Various
considerations must be taken into account when deciding between using aircraft or satellite data. The fact that an aircraft flies so much lower than a satellite
means that one can see more detail on the ground from an aircraft than from
a satellite. However, although a satellite can see less detail, it may be more
suitable for many purposes. A satellite has the advantages of regularity of
coverage and an area of coverage (in terms of area on the ground) that could
never be achieved from an aircraft. The frequency of coverage of a given site
by satellite-flown instruments may, however, be too low for some applications.
For a small area, a light aircraft can be used to obtain a large number of images
more frequently. Figure 1.2 illustrates some of the major differences between
satellites and aircraft in remote sensing work.

TABLE 1.2
Uses of Remote Sensing

Archaeology and anthropology
Cartography
Geology
  Surveys
  Mineral resources
Land use
  Urban land use
  Agricultural land use
  Soil survey
  Health of crops
  Soil moisture and evapotranspiration
  Yield predictions
  Rangelands and wildlife
Forestry
  Inventory
  Deforestation, acid rain, disease
Civil engineering
  Site studies
  Water resources
  Transport facilities
Water resources
  Surface water, supply, pollution
  Underground water
  Snow and ice mapping
Coastal studies
  Erosion, accretion, bathymetry
  Sewage, thermal and chemical pollution monitoring
Oceanography
  Surface temperature
  Geoid
  Bottom topography
  Winds, waves, and currents
  Circulation
  Sea ice mapping
  Oil pollution monitoring
Meteorology
  Weather systems tracking
  Weather forecasting
  Heat flux and energy balance
  Input to general circulation models
  Sounding for atmospheric profiles
  Cloud classification
  Precipitation monitoring
Climatology
  Atmospheric minority constituents
  Surface albedo
  Heat flux and energy balance
  Input to climate models
  Desertification
Natural disasters
  Floods
  Earthquakes
  Volcanic eruptions
  Forest fires
  Subsurface coal fires
  Landslides
  Tsunamis
Planetary studies
A number of factors should be considered in deciding whether to use
aircraft or satellite data, including:
• Extent of the area to be covered
• Speed of development of the phenomenon to be observed
• Detailed performance of the instrument available for flying in the
aircraft or satellite
• Availability and cost of the data.
The last point in this list, which concerns the cost to the user, may seem a little
surprising. Clearly, it is much more expensive to build a satellite platform and
sensor system, to launch it, to control it in its orbit, and to recover the data
than it would be to buy and operate a light aircraft and a good camera or
scanner. In most instances, the cost of a remote sensing satellite system has
been borne by the taxpayers of one country or another. In the early days, the
costs charged to the user of the data covered little more than the cost of the
media on which the data were supplied (photographic film, computer data
storage media of the day [i.e. computer compatible tape], and so forth) plus
the postage. Subsequently, with the launch of SPOT-1 in 1986 and a change of
U.S. government policy with regard to Landsat at about the same time, the
cost of satellite data was substantially increased in order to recover some of
the costs of the ground station operation from the users of the data. To try to
recover the development, construction, and launch costs of a satellite system
from the selling of the data to users would make the cost of the data so
expensive that it would kill most possible applications of Earth observation
satellite data stone dead. What seems to have been evolving is a two-tier
system in which data for teaching or academic research purposes are provided
free or at very low cost, whereas data for commercial uses are rather expensive.
Recently, two satellite remote sensing systems have been developed on a
commercial basis (IKONOS and Quickbird); these are very high resolution
systems and their intention is to compete in the lucrative air photography market. On the other hand, data from weather satellites remain free or are available at nominal cost on the basis of the long-standing principle that meteorological data are freely exchanged between countries.

FIGURE 1.2
Causes of differences in scale of aircraft and satellite observations: satellites orbit at ~hundreds of miles (km) above the surface, whereas aircraft fly at ~thousands of feet (m).
The influence of the extent of the area to be studied on the choice of aircraft
or satellite as a source of remote sensing data is closely related to the question
of spatial resolution. Loosely speaking, one can think of the spatial resolution
as the size of the smallest object that can be seen in a remote sensing image.
The angular limit of resolution of an instrument used for remote sensing
work is, in nearly every case, determined by the design and construction of
the instrument. Satellites are flown several hundred kilometers above the
surface of the Earth, whereas aircraft, and particularly light survey aircraft,
may fly very low indeed, possibly only a few hundred meters above the
surface of the Earth. The fact that the aircraft is able to fly so low means that,
with a given instrument, far more detail of the ground can be seen from the
aircraft than could be seen by using the same instrument on a satellite. However, as will be discussed later, there are many purposes for which the lower
resolution that is available from satellite observations is perfectly adequate and,
when compared with an aircraft, a satellite can have several advantages. For
instance, once launched into orbit, a satellite simply continues in that orbit
without consuming fuel for propulsion because air resistance is negligible at
the altitudes concerned. Occasional adjustments to the orbit may be made by
remote command from the ground; these adjustments consume only a very
small amount of fuel. The electrical energy needed to drive the instruments
and transmitters on board satellites is derived from large solar panels.
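The trade-off just described can be made quantitative: for small angles, the smallest resolvable ground element is approximately the platform height multiplied by the instrument's angular resolution. A minimal sketch in Python (our own addition; the 0.1 milliradian figure is an assumed illustrative value, not the specification of any particular instrument):

```python
def ground_resolution(height_m: float, angular_resolution_rad: float) -> float:
    """Small-angle approximation: the smallest resolvable ground element (in m)
    is roughly the platform height (m) times the angular resolution (rad)."""
    return height_m * angular_resolution_rad

IFOV = 0.1e-3  # assumed angular resolution: 0.1 milliradian

# The same instrument flown on a low survey aircraft and on a satellite:
print(ground_resolution(500.0, IFOV))  # aircraft at 500 m altitude   -> 0.05 m
print(ground_resolution(800e3, IFOV))  # satellite at 800 km altitude -> 80.0 m
```

With identical optics, the aircraft in this example resolves ground features over a thousand times smaller than the satellite, which is precisely the difference in detail described above.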
1.3 Weather Satellites
A satellite has a scale of coverage and a regularity of coverage that one could
never reasonably expect to obtain from an aircraft. The exact details of the
coverage obtained depend on the satellite in question. As an example, a
single satellite of the polar orbiting Television InfraRed Observation Satellite
series (TIROS-N series) carries a sensor, the Advanced Very High Resolution
Radiometer (AVHRR), which produces the pictures seen on many television
weather programs and which gives complete coverage of the entire surface
of the Earth daily. A geostationary weather satellite gives images more frequently, in most cases every half hour but for the newest systems every
quarter of an hour. However, it only sees a fixed portion (30% to 40%) of the
surface of the Earth (see Figure 1.3). Global coverage of the surface of the
Earth (apart from the polar regions) is obtained from a chain of geostationary
satellites arranged at intervals around the equator. Satellites have completely
transformed the study of meteorology by providing synoptic pictures of
weather systems such as could never before be obtained, although in pre-satellite days some use was made of photographs from high-flying aircraft.
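The quoted 30% to 40% can be checked with simple geometry (a sketch of our own, not taken from the text): a satellite at orbital radius r sees the spherical cap bounded by its grazing line of sight, and that cap is a fraction (1 − R/r)/2 of the Earth's total surface area. For the geostationary radius this geometric upper bound is about 42%; the usable portion is smaller because viewing is extremely oblique near the limb, hence the 30% to 40% quoted above.

```python
def visible_fraction(r_orbit_m: float, r_earth_m: float = 6.371e6) -> float:
    """Fraction of the Earth's surface within geometric line of sight of a
    satellite in a circular orbit of radius r_orbit_m: (1 - R/r) / 2."""
    return (1.0 - r_earth_m / r_orbit_m) / 2.0

print(f"{visible_fraction(42.164e6):.1%}")         # geostationary: ~42.4%
print(f"{visible_fraction(6.371e6 + 850e3):.1%}")  # 850 km polar orbiter: ~5.9%
```

The second figure shows why a single polar orbiter must rely on repeated passes rather than a continuous view.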
FIGURE 1.3 (See color insert)
An image of the Earth from GOES-E, showing the extent of geostationary satellite coverage.
1.4 Observations of the Earth’s Surface
A satellite may remain in operation for several years unless it experiences
some accidental failure or its equipment is deliberately turned off by mission
control from the ground. Thus, a satellite has the important advantage over
an aircraft in that it gathers information in all weather conditions, including
those in which one might not choose to fly in a light survey aircraft. It must,
of course, be remembered that clouds may obscure the surface of the Earth.
Indeed, for studies of the Earth’s atmosphere, clouds are often of particular
interest. By flying an aircraft completely below the clouds, one may be able
to collect useful information about the Earth’s surface although, because it is
not usual for aircraft remote sensing missions to be flown in less than optimal
conditions, one would try to avoid having to take aerial photographs on
cloudy days. Much useful data can still be gathered by a satellite on the very
large number of days on which there is some, but not complete, cloud cover.
The remotely sensed signals detected by the sensors on a satellite or aircraft
but originating from the ground are influenced by the intervening atmosphere.
The magnitude of the influence depends on the distance between the surface
of the Earth and the platform carrying the sensor and on the atmospheric
conditions prevailing at the time. It also depends very much on the principles
of operation of the sensor, especially on the wavelength of the radiation that
is used. Because the influence of the atmosphere is variable, it may be necessary to make corrections to the data in order to accommodate the variability.
The approach adopted to the question of atmospheric corrections to remotely
sensed data will be determined by the nature of the environmental problem
to which the data are applied, as well as by the properties of the sensor used
and by the processing applied to the data. In land-based applications of
satellite remote sensing data, it may or may not be important to consider
atmospheric effects, depending on the application in question. In meteorological applications, it is the atmosphere that is being observed anyway and,
in most instances, quantitative determinations of, and corrections to, the radiance are relatively unimportant. Atmospheric effects are of greatest concern
to users of remote sensing data where water bodies, such as lakes, rivers, and
oceans, have been studied with regard to the determination of physical or
biological parameters of the water.
1.5 Communications and Data Collection Systems

1.5.1 Communications Systems
Although this book is primarily concerned with remote sensing satellite
platforms that carry instruments for gathering information about the surface
of the Earth, mention should be made of the many satellites that are launched
for use in the field of telecommunications. Many of these satellites belong
to purely commercial telecommunications network operations systems. The
user of these telecommunications facilities is, however, generally unaware
that a satellite is being used; for example, the user simply dials an international telephone number and need never even know whether the call goes
via a satellite. Some remote sensing satellites have no involvement in communications systems apart from the transmission back to ground of the data
that they themselves generate, whereas others have a subsidiary role in
providing a communications facility.
The establishment of a system of geostationary satellites as an alternative to using submarine cables for international communication was foreseen as early as 1945. The first communications satellite, Telstar, was
launched by the United States in 1962. Telstar enabled television pictures
to be relayed across the Atlantic for the short time that the satellite was
in view of the ground receiving stations on both sides of the Atlantic. The
Syncom series, which were truly geostationary satellites, followed in 1963.
The idea involved is basically a larger version of the microwave links that
are commonplace on land. Two stations on the surface communicate via
a geostationary satellite. The path involved is about a thousand times
longer than a direct link between two stations would be on the surface.
As a consequence, the antennae used are much larger, the transmitters are
much more powerful, and the receivers are much more sensitive than
those for direct communication over shorter distances on the surface of
the Earth. Extensive literature now exists on using geostationary satellites
for commercial telecommunications purposes.
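The scale of that engineering challenge follows from the inverse-square law: received power falls as 1/d², so a path a thousand times longer suffers about a million times (60 dB) more spreading loss, which must be made up through larger antennas, higher transmitter power, and more sensitive receivers. A minimal sketch (our own illustration; the hop lengths are round numbers, not the parameters of any particular link):

```python
import math

def extra_spreading_loss_db(d_short: float, d_long: float) -> float:
    """Extra free-space spreading loss (dB) of a path of length d_long relative
    to one of length d_short, from the inverse-square law: 10*log10((d2/d1)^2)."""
    return 10.0 * math.log10((d_long / d_short) ** 2)

# A terrestrial microwave hop (~36 km) versus a geostationary hop (~36,000 km):
print(f"{extra_spreading_loss_db(36e3, 36e6):.0f} dB")  # 60 dB: a factor of one million
```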
For any remote sensing satellite system, some means of transferring the
information that has been gathered by the sensors on the satellite back to
Earth is necessary. In the case of a manned spacecraft, the recorded data
can be brought back by the astronauts in the spacecraft when they return
to Earth. However, the majority of scientific remote sensing data gathered
from space is gathered using unmanned spacecraft. The data from an
unmanned spacecraft must be transmitted back to Earth by radio transmission from the satellite to a suitably equipped ground station. The transmitted radio signals can only be received from the satellite when it is above
the horizon of the ground station. In the case of polar-orbiting satellites,
global coverage could be achieved by having tape recorders on board the
satellite and transmitting the tape-recorded data back to Earth when the
satellite is within range of a ground station. However, in practice, it is
usually only possible to provide tape recording facilities adequate for
recording a small fraction of the data that could, in principle, be gathered
during each orbit of the satellite. Alternatively, global coverage could be
made possible by the construction of a network of receiving stations suitably distributed over the surface of the Earth. This method for obtaining
global coverage was originally intended in the case of the Landsat series
of satellites (see Figure 1.4). However, an alternative approach to securing
global coverage takes the form of a relay system, in which a series of
geostationary satellites link signals from an orbiting remote sensing satellite
with a receiving station at all times.
FIGURE 1.4
Landsat TM ground receiving stations and extent of coverage: Kiruna (Sweden), Prince Albert and Gatineau (Canada), Norman (USA), Fucino (Italy), Maspalomas (Spain), Riyadh (Saudi Arabia), Cotopaxi (Ecuador), Cuiaba (Brazil), Beijing (China), Hatoyama (Japan), Islamabad (Pakistan), Chung-Li (Taiwan), Hyderabad (India), Bangkok (Thailand), Parepare (Indonesia), Johannesburg (South Africa), and Alice Springs (Australia); stations not shown: Argentina, Chile, Kenya, and Mongolia. (http://geo.arc.nasa.gov/sge/landsat/coverage.html)
1.5.2 Data Collection Systems
Although the major part of the data transmitted back to Earth on the communications link from a remote sensing satellite will consist of the data that
the instruments on the satellite have gathered, some of these satellites also
fulfill a communications role. For example, the geostationary satellite Meteosat
(see Section 3.2) serves as a communications satellite to transmit processed
Meteosat data from the European Space Operations Centre (ESOC) in
Darmstadt, Germany, to users of the data; it is also used to retransmit data
from some other geostationary satellites to users who may be out of the
direct line of sight of those satellites. Another aspect of remote sensing
satellites that is of particular relevance to environmental scientists and engineers is that some satellites carry data collection systems. Such systems
enable the satellites to collect data from instruments situated in difficult or
inaccessible locations on the land or sea surface. Such instruments may be
at sea on a moored or drifting buoy or on a weather station in a hostile or
otherwise inaccessible environment, such as the Arctic or a desert.
Several methods of recording and retrieving data from an unmanned data
gathering station, such as a buoy or an isolated weather or hydrological
station, are available. Examples include:
• Cassette tape recorders or computer storage media, which require
occasional visits to collect the data
• A direct radio link to a receiving station conveniently situated on
the ground
• A radio link via a satellite.
The first option may be satisfactory if the amount of data received is relatively
small; however, if the data are substantial and can only be retrieved occasionally, this method may not be very suitable. The second option may be satisfactory over short distances but becomes progressively more difficult over
longer distances. The third option has some attractions and is worth a little
further consideration here. Two satellite-based data collection systems are of
importance. One involves the use of a geostationary satellite, such as Meteosat;
the other, the Argos data collection system, involves the National Oceanic and
Atmospheric Administration (NOAA) polar-orbiting operational environmental satellite (POES) (see Figure 1.5).
Using a satellite has several advantages over using a direct radio transmission from the platform housing the data-collecting instruments to the
user’s own radio receiving station. One of these is simply convenience. It
saves on the cost of reception equipment and of operating staff for a receiving
station of one’s own; it also simplifies problems of frequency allocations.
There may, however, be the more fundamental problem of distance. If the
satellite is orbiting, it can store the messages on board and play them back
later, perhaps on the other side of the Earth. The Argos system accordingly
enables someone in Europe to receive data from buoys drifting in the Pacific
Ocean or in Antarctica, for example. In addition to recovering data from a drifting buoy, the Argos system can also be used to locate the position of the buoy.

FIGURE 1.5
Overview of the Argos data collection and platform location system: data from a TIROS-N series satellite are received at NOAA telemetry stations (Wallops Island and Gilmore Creek, U.S.A.) and at the METEO telemetry station (Lannion, France), processed by the CNES Service Argos data processing centre in France and by NESS (Suitland, U.S.A.), and passed on to users. (System Argos.)
To some extent, the geostationary satellite data collection systems and the
Argos data collection systems are complementary. A data collection system
using a geostationary satellite, such as Meteosat, has the advantage that the
satellite is always overhead and therefore always available, in principle, to
receive data. For the Meteosat system, moored buoys or stationary platforms
on land can be equipped with transmitters to send records of measurements
to the Meteosat satellite; the messages are transmitted to the ESOC and then
relayed to the user. Although data could also be gathered from a drifting
buoy using the Meteosat system, the location of the buoy would be unknown.
A data collection system cannot be used on a geostationary satellite if the
data collection platform is situated in extreme polar regions, such as outside
the circle indicating the telecommunications coverage in Figure 1.6. On the
other hand, a data collection system that uses a polar-orbiting satellite will
perform better in polar regions because the satellite will be in sight of a
platform that is near one of the poles much more frequently than a platform
9255_C001.fm Page 16 Thursday, March 8, 2007 11:14 AM
16
Introduction to Remote Sensing
240°
300°
0°
60°
120°
180°
90°
90°
60°
60°
30°
30°
0°
0°
−30°
−30°
−60°
−60°
−90°
−90°
−150°
−90°
−30°
0°
30°
90°
150°
FIGURE 1.6
Meteosat reception area. (European Space Agency.)
near the equator. A polar-orbiting satellite will, however, be out of sight of
the data collection platform a great deal of the time.
The platform location facility is not particularly interesting for a landbased platform because the location of the platform is known — although
it has occasionally been a useful feature when transmitters have been stolen!
At sea, however, information regarding the location of the data collection
platform is very valuable because it allows data to be gathered from drifting
buoys and provides the position from which the data were obtained. The
locational information is also valuable for moored buoys because it provides
a constant check that the buoy has not broken loose from its mooring. If the
buoy does break loose, then the location facility is able to provide valuable
information to a vessel sent to recover it.
The location of a platform is determined by making use of the Doppler
effect on the frequency of the carrier wave of the transmission from the
platform; this transmitting frequency, f0, is fixed (within the stability of the
transmitter) and is nominally the same for all platforms. The apparent frequency of the signal received by the data collection system on the satellite
is represented by the equation:
 c − v cos θ 
f′ = 
 f0
c

(1.4)
where
c is the velocity of light,
v is the velocity of the satellite, and
q is the angle between the line of sight and the velocity of the satellite.
If c, f0 , and the orbital parameters of the satellite are known, so that v is
known, then f ’ is measured by the receiving system on the satellite; cosq can
9255_C001.fm Page 17 Thursday, March 8, 2007 11:14 AM
17
An Introduction to Remote Sensing
F
Orbit 2
E
1
Orbit 1
C
2
B
1´
D
A
FIGURE 1.7
Diagram to illustrate the principle of the location of platforms with the Argos system.
then be calculated. The position of the satellite is also known from the orbital
parameters so that a field of possible positions of the platform is obtained.
This field takes the form of a cone, with the satellite at its apex and the
velocity vector of the satellite along the axis of symmetry (see Figure 1.7).
A, B, and C denote successive positions of the satellite when transmissions
are received from the given platform. D, E, and F are the corresponding
positions at which messages are received from this platform in the following
orbit, which occurs approximately 100 minutes later. Because the altitude of
the satellite is known, the intersection of several of the cones for one orbit
(each corresponding to a separate measurement) with the altitude sphere
yields the solution for the location of the platform. Actually, this yields two
solutions: points 1 and 1’, which are symmetrically placed relative to the
ground track of the satellite. One of these points is the required solution, the
other is its “image.” This ambiguity cannot be resolved with data from a
single orbit alone, but it can be resolved with data received from two successive orbits and a knowledge of the order of magnitude of the drift velocity
of the platform. In Figure 1.7, point 1’ could thus be eliminated. In practice,
because of the considerable redundancy, one does not need to precisely know
f0; it is enough that the transmitter frequency f0 be stable over the period of
observation. The processing of all the measurements made at A, B, C, D, E,
and F then yields the platform position, its average speed over the interval
between the two orbits, and the frequency of the oscillator.
The Argos platform location and data collection system has been operational since 1978. It was established under an agreement (Memorandum of
Understanding) between the French Centre National d’Etudes Spatiales
(CNES) and two U.S. organizations, the National Aeronautics and Space
Administration (NASA) and the NOAA. The Argos system’s main mission
is to provide an operational environmental data collection service for the
9255_C001.fm Page 18 Thursday, March 8, 2007 11:14 AM
18
Introduction to Remote Sensing
entire duration of the NOAA POES program and its successors. Argos is
currently operated and managed by Collecte, Localisation, Satellites (CLS),
a CNES subsidiary in Toulouse, France, and Service Argos, Inc., a CLS North
American subsidiary, in Largo, Maryland, near Washington D.C. (web sites:
http://www.cls.fr and http://www.argosinc.com). After several years of operational service, the efficiency and reliability of the Argos system has been
demonstrated very successfully and by 2003 there were 8000 Argos transmitters operating around the world.
The Argos system consists of three segments:
The set of all users’ platforms (buoys, rafts, fixed or offshore stations,
animals, birds, etc.), each being equipped with a platform transmitter terminal (PTT)
The space segment composed of the onboard data collection system
(DCS) flown on each satellite of the NOAA POES program
The ground segment for the processing and distribution of data.
These will be considered briefly in turn.
The main characteristics of Argos PTTs can be summarized as follows:
• Transmission frequency: 401.65 MHz
• Messages: less than 1 second duration and transmitted at regular
intervals by any given PTT
• Message capacity for sensor data: up to 32 bytes
• Repetition rate: 45 to 200 s.
Because all Argos PTTs work on the same frequency, they are particularly
easy to operate. They are also moderately priced. The transmitters can be
very small; miniaturized models can be as compact as a small matchbox,
weighing as little as 0.5 oz (15 g), with a tiny power consumption. These
features mean that Argos transmitters can be used to track small animals
and birds.
At any given time, the space segment consists of two satellites equipped
with the Argos onboard DCS. These are satellites of the NOAA POES series
that are in near-circular polar orbits with periods of about 100 minutes. Each
orbit is Sun-synchronous, that is the angle between the orbital plane and the
Sun direction remains constant. The orbital planes of the two satellites are
inclined at 90˚ to one another. Each satellite crosses the equatorial plane at
a fixed (local solar) time each day; these are 1500 hours (ascending node)
and 0300 hours (descending node) for one satellite, and 1930 hours and 0730
hours for the other. These times are approximate as there is, in fact, a slight
precession of the orbits from one day to the next. The PTTs are not interrogated by the DCS on the satellite — they transmit spontaneously. Messages
are transmitted at regular intervals by any given platform. Time-separation
9255_C001.fm Page 19 Thursday, March 8, 2007 11:14 AM
An Introduction to Remote Sensing
19
of messages, to ensure that messages for different PTTs arrive randomly at
the DCS on the satellite, is achieved by assigning slightly different intervals
to different platforms. Transmissions occur every 45 to 60 seconds in the case
of location-type platforms and every 100 to 200 seconds for data-collectiononly platforms. The DCS can handle several messages simultaneously reaching the satellite (four on the earlier versions, eight on the later versions),
provided they are separated in frequency. Frequency separation of messages
will occur because the carrier frequencies of the messages from different
PTTs will be slightly different as a result of the Doppler shifts of the transmissions from different platforms. Nevertheless, some messages may still be
lost; the likelihood of this is kept small by controlling the total number of
PTTs that access the system. At any given time, one of these satellites can
receive messages from platforms within a circle of diameter about 3100 miles
(5000 km) on the ground. The DCS on a satellite acquires and records a
composite signal comprising a mixture of messages received from a number of
PTTs within each satellite’s coverage. Each time a satellite passes over one
of the three telemetry stations (Wallops Island, Virginia; Fairbanks, Alaska; or
Lannion, France), all the Argos message data recorded on tape are read out
and transmitted to that station. As well as being tape recorded on board the
spacecraft, the Argos data messages are multiplexed into the direct readout
transmissions from the satellite. A number of regional receiving stations
receive transmitted data from the satellites in real time whenever a satellite
is above the horizon at that station. The three main ground stations also act
as regional receiving stations. The CLS has global processing centers in
Toulouse, France, and Largo, Maryland, and a number of regional processing
centers as well.
Argos data are distributed to the owners of PTTs by a variety of methods,
including fax, magnetic tape, floppy diskette, CD-ROMs, and networks.
An automatic distribution service supplies results automatically, either at
user-defined fixed times or whenever new data become available. The user
specifies the most appropriate distribution network. For example, many
users are taking advantage of the Internet to receive their data via file transfer
protocol or email. There is no need to interrogate Argos online because data
are delivered automatically to the user’s system. Argos has also established
a powerful Global Telecommunications System (GTS) processing subsystem
to simplify the transmission of data directly onto the GTS of the World
Meteorological Organization (WMO), a worldwide operations system for the
sharing of meteorological and climate data. Meteorological results are distributed as soon as processing is completed or at required times. For each
location obtained from the Argos system, the error associated with it is
calculated. The error is specified as class 3 (error < 150 m), class 2 (150 m <
error < 350 m), class 1 (350 m < error < 1 km), or class 0 (error > 1 km).
The location principle used in the Argos GTS system is quite different
from the principle used in a global positioning system (GPS). But, of course,
a data collection system fitted with an Argos PTT may be equipped with
a GPS receiver and its output transmitted via the Argos PTT. GPS positions
9255_C001.fm Page 20 Thursday, March 8, 2007 11:14 AM
20
Introduction to Remote Sensing
are processed along with Argos locations through the Argos system.
Results are integrated with Argos data and GPS and Argos locations appear
in the same format (a flag indicates whether a location is obtained from
Argos or GPS). Needless to say, the use of a GPS receiver impacts on the
platform’s power requirements and costs.
9255_C002.fm Page 21 Friday, February 16, 2007 10:30 PM
2
Sensors and Instruments
2.1
Introduction
Remote sensing of the surface of the Earth — whether land, sea, or atmosphere
— is carried out using a variety of different instruments. These instruments,
in turn, use a variety of different wavelengths of electromagnetic radiation. This
radiation may be in the visible, near-infrared (or reflected-infrared), thermalinfrared, microwave, or radio wave part of the electromagnetic spectrum.
The nature and precision of the information that it is possible to extract from
a remote sensing system depend both on the sensor that is used and on the
platform that carries the sensor. For example, a thermal-infrared scanner that
is flown on an aircraft at an altitude of 500 m may have an instantaneous field
of view (IFOV), or footprint, of about 1m2 or less. If a similar instrument is
flown on a satellite at a height of 800 to 900 km, the IFOV is likely to be about
1 km2. This chapter is concerned with the general principles of the main sensors
that are used in Earth remote sensing. In most cases, sensors similar to the
ones described in this chapter are available for use in aircraft and on satellites,
and no attempt will be made to draw fine distinctions between sensors
developed for the two different types of platforms. Some of these instruments
have been developed primarily for use on aircraft but are being used on
satellites as well. Other sensors have been developed primarily for use on satellites although satellite-flown sensors are generally tested with flights on
aircraft before being used on satellites. Satellite data products are popular
because they are relatively cheap and because they often yield a new source
of information that was not previously available. For mapping to high accuracy or for the study of rapidly changing phenomena over relatively small
areas, data from sensors flown on aircraft may be much more useful than
satellite data.
In this chapter we shall give a brief account of some of the relevant aspects
of the physics of electromagnetic radiation (see Section 2.2). Electromagnetic
radiation is the means by which information is carried from the surface of the
Earth to a remote sensing satellite. Sensors operating in the visible and infrared
regions of the electromagnetic spectrum will be considered in Sections 2.3 and
2.4, and sensors operating in the microwave region of the electromagnetic
21
9255_C002.fm Page 22 Friday, February 16, 2007 10:30 PM
22
Introduction to Remote Sensing
spectrum will be considered in Section 2.5. The instruments that will be discussed in Sections 2.3 to 2.5 are those commonly used in aircraft or on satellites.
It should be appreciated that other systems that operate with microwaves and
radio waves are available and can be used for gathering Earth remote sensing
data using installations situated on the ground rather than in aircraft or on
satellites; because the physics of these systems is rather different from those
of most of the sensors flown on aircraft or satellites, the discussion of groundbased systems will be postponed until later (see Chapter 6).
It is important to distinguish between passive and active sensors. A passive
sensor is one that simply responds to the radiation that is incident on the instrument. In an active instrument, the radiation is generated by the instrument,
transmitted downward to the surface of the Earth, and reflected back to the
sensor; the received signal is then processed to extract the required information.
As far as satellite remote sensing is concerned, systems operating in the visible
and infrared parts of the electromagnetic spectrum are very nearly all passive,
whereas microwave instruments are either passive or active; all these instruments can be flown on aircraft as well. Active instruments operating in the
visible and infrared parts of the spectrum, while not commonly being flown on
satellites, are frequently flown on aircraft (see Chapter 5). Active instruments
are essentially based on some aspect of radar principles (see Chapters 5 to 7).
Remote sensing instruments can also be divided into imaging and nonimaging instruments. Downward-looking imaging devices produce two-dimensional
pictures of a part of the surface of the Earth or of clouds in the atmosphere.
Variations in the image field may denote variations in the color, temperature,
or roughness of the area viewed. The spatial resolution may range from
about 1 m, as with some of the latest visible-wavelength scanners or synthetic
aperture radars, to tens of kilometers, as with the passive scanning microwave radiometers. Nonimaging devices give information such as the height
of the satellite above the surface of the Earth (the altimeter) or an average
value of a parameter such as the surface roughness of the sea, the wind
speed, or the wind direction averaged over an area beneath the instantaneous
position of the satellite (see Chapter 7 in particular).
From the point of view of data processing and interpretation, the data
from an imaging device may be richer and easier to interpret visually, but
they usually require more sophisticated (digital) image-processing systems
to handle them and present the results to the user. The quantitative handling
of corrections for atmospheric effects is also likely to be more difficult for
imaging than for nonimaging devices.
2.2
Electromagnetic Radiation
The important parameters characterizing any electromagnetic radiation under
study are the wavelength (or frequency), the amplitude, the direction of propagation, and the polarization (see Figure 2.1). Although the wavelength may
104
103
102
101
Electron shifts
1
10−1
Molecular
vibrations
1018 1017
1015
1013
Molecular
rotations
Fluctuations in electric and magnetic fields
Phenomena
detected
Radiometry,
Imaging,
single and spectrometry,
multi-lens thermography
cameras,
various film
emulsions,
Multispectral
photography
Atomic
Total X-ray
gamma imaging absorption
spectroray
photometry,
counts,
mechanical
gamma
line scanning
ray
spectrometry
10−2 10−1
Metres
1010 109
Hertz
10
101
108
107
102
106
103 104
105
105
104
108
101
A.C.
107
102
1000 km
Audio
106
103
1
Principal
techniques for
environmental
remote sensing
Transmission
through
atmosphere
Spectral
regions
Wavelength
Frequency
FIGURE 2.1
The electromagnetic spectrum. The scales give the energy of the photons corresponding to radiation of different frequencies and wavelengths.
(Barrett and Curtis, 1982.)
Passive microwave Electromagnetic
sensing
radiometry,
Radar
imaging
1 mm
1m
1 km
Microwave
Radio
LF
EHF SHF UHF VHF HF MF
Q/Kg Ku XCSL UHF
1 µm
Visible
light Infrared
1 nm
Ultra
Gamma
rays X-rays violet
“Hard” “Soft”
10−3
1012 1011
10−3 10−4 10−5 10−6 10−7 10−8 10−9 10−10 10−11 10−12 10−13 10−14 Photon
energy
Electron volts
23
−
−21
−22
−24
−25
−26
−27
−28
−29
−30
−31
−32
−33
Photon
10
10
10
10
10
10
10
10
10
10
10
10
10
Joules
energy
10−2
10−5 10−4
1014
10−7 10−6
1016
10−11 10−10 10−9 10−8
1020 1019
10−14 10−15 10−16 10−17 10−18 10−19 10−20
105
Dissociation
Heating
9255_C002.fm Page 23 Friday, February 16, 2007 10:30 PM
Sensors and Instruments
23
9255_C002.fm Page 24 Friday, February 16, 2007 10:30 PM
24
Introduction to Remote Sensing
take any value from zero to infinity, radiation from only part of this range of
wavelengths is useful for remote sensing of the surface of the Earth. First of
all, there needs to be a substantial quantity of radiation of the wavelength in
question. A passive system is restricted to radiation that is emitted with a
reasonable intensity from the surface of the Earth or which is present in
reasonable quantity in the radiation that is emitted by the Sun and then
reflected from the surface of the Earth. An active instrument is restricted to
wavelength ranges in which reasonable intensities of the radiation can be
generated by the remote sensing instrument on the platform on which it is
operating. In addition to an adequate amount of radiation, it is also necessary
that the radiation is not appreciably attenuated in its passage through the
atmosphere between the surface of the Earth and the satellite; in other words,
a suitable atmospheric “window” must be chosen.
In addition to these considerations, it must also be possible to recover the
data generated by the remote sensing instrument. In practice this means that
the amount of data generated on a satellite must be able to be accommodated
both by the radio link by which the data are to be transmitted back to the
Earth and by the ground receiving station used to receive the data. These
various considerations restrict one to the use of the visible, infrared, and
microwave regions of the electromagnetic spectrum. The wavelengths
involved are indicated in Figure 2.2.
The visible part of the spectrum of electromagnetic radiation extends from
blue light with a wavelength of about 0.4 µm to red light with a wavelength
of about 0.75 µm. Visible radiation travels through a clean, dry atmosphere
with very little attenuation. Consequently, the visible part of the electromagnetic spectrum is a very important region for satellite remote sensing work.
For passive remote sensing work using visible radiation, the radiation is
usually derived from the Sun, being reflected at the surface of the Earth.
Radar
Microwave
1014
Frequency (Hz)
1015
Wavelength
0.3 µm 3 µm 30 µm 0.3 mm 3 mm 3 cm 30 cm
Ultraviolet
1013
Infrared
Visible
FIGURE 2.2
Sketch to illustrate the electromagnetic spectrum.
1012
1011
1010
Radio
109
108
3m
9255_C002.fm Page 25 Friday, February 16, 2007 10:30 PM
Sensors and Instruments
25
FIGURE 2.3
Nighttime satellite image of Europe showing aurora and the lights of major cities. (Aerospace
Corporation.)
If haze, mist, fog, or dust clouds are present, the visible radiation will be
substantially attenuated in its passage through the atmosphere. At typical
values of land-surface or sea-surface temperature, the intensity of visible
radiation that is emitted by the land or sea is negligibly small. Satellite
systems operating in the visible part of the electromagnetic spectrum therefore usually only gather useful data during daylight hours. Exceptions to
this are provided by aurora, by the lights of major cities, and by the gas
flares associated with oil production and refining activities (see Figure 2.3).
An interesting and important property of visible radiation, by contrast
with infrared and microwave radiation, is that visible radiation, especially
toward the blue end of the spectrum, is capable of penetrating water to a
distance of several meters. Blue light can travel 10 to 20 m through clear
ocean water before becoming significantly attenuated; red light, however,
penetrates very little distance. Thus, with visible radiation, one can probe
the physical and biological properties of the near-surface layers of water
bodies, whereas with infrared and microwave radiation, only the surface
itself can be directly studied with the radiation.
Infrared radiation cannot be detected by the human eye, but it can be
detected photographically or electronically. The infrared region of the spectrum is divided into the near-infrared, with wavelengths from about 0.75 µm
to about 1.5 µm, and the thermal-infrared, with wavelengths from about 3
9255_C002.fm Page 26 Friday, February 16, 2007 10:30 PM
26
Introduction to Remote Sensing
or 4 µm to about 12 or 13 µm. The near-infrared part of the spectrum is
important, at least in agricultural and forestry applications of remote sensing,
because most vegetation reflects strongly in the near-infrared part of the
spectrum. Indeed vegetation generally reflects more strongly in the nearinfrared than in the visible. Water, on the other hand, is an almost perfect
absorber at near-infrared wavelengths. Apart from clouds, the atmosphere
is transparent to near-infrared radiation.
At near-infrared wavelengths, the intensity of the reflected radiation is
considerably greater than the intensity of the emitted radiation; however,
at thermal-infrared wavelengths, the emitted radiation becomes more
important. The relative proportions of reflected and emitted radiation vary
according to the wavelength of the radiation, the emissivities of the surfaces
observed, and the solar illumination of the area under observation. This can
be illustrated using the Planck radiation distribution function; the energy
E(l)dl in the wavelength range l to l + dl for black-body radiation at
temperature T is given by
E(λ )dλ =
8π hc
dλ
λ  exp( hc/k λ T ) − 1
(2.1)
5
where
h = Planck’s constant,
c = velocity of light, and
k = Boltzmann’s constant.
This formula was first put forward by Max Planck as an empirical relation; it
was only justified in terms of quantum statistical mechanics much later. The
value of the quantity E(l), in units of 8phc m–5, is given in Table 2.1 for five
different wavelengths, when T = 300 K, corresponding roughly to radiation
emitted from the Earth. In this table, values of E(l)(r/R)2 are also given for
the same wavelengths, when T = 6,000 K, where r = radius of the Sun and
R = radius of the Earth’s orbit around the Sun. This gives an estimate of the
order of magnitude of the solar radiation reflected at the surface of the Earth,
leaving aside emissivities, atmospheric attenuation, and other factors.
TABLE 2.1
Estimates of Relative Intensities of Reflected Solar Radiation
and Emitted Radiation From the Surface of the Earth
Wavelength (l)
Blue
Red
Infrared
Thermal-infrared
Microwave
0.4 µm
0.7 µm
3.5 µm
12 µm
3 cm
Emitted Intensity
Reflected Intensity
7.7 × 10–20
2.4 × 100
1.6 × 1021
7.5 × 1022
2.6 × 1010
6.1 × 1024
5.1 × 1024
4.7 × 1022
4.5 × 1020
1.3 × 107
Note: Second column corresponds to E(l) in units of 8p hc m–5 for T = 300 K,
third column corresponds to E(l)(r/R)2 in the same units for T = 6000 K.
9255_C002.fm Page 27 Friday, February 16, 2007 10:30 PM
Sensors and Instruments
27
From Table 2.1 it can be seen that at optical and very near-infrared wavelengths the emitted radiation is negligible compared with the reflected radiation. At wavelengths of about 3 or 4 µm, both emitted and reflected radiation
are important, whereas at wavelengths of 11 or 12 µm, the emitted radiation
is dominant and the reflected radiation is relatively unimportant. At microwave wavelengths, the emitted radiation is also dominant over natural
reflected microwave radiation; however, as the use of man-made microwave
radiation for telecommunications increases, the contamination of the signals
from the surface of the land or sea becomes more serious. A strong infrared
radiation absorption band separates the thermal-infrared part of the spectrum
into two regions, or windows, one between roughly 3 µm and 5 µm and the
other between roughly 9.5 µm and 13.5 µm (see Figure 2.1). Assuming the
emitted radiation can be separated from the reflected radiation, satellite remote
sensing data in the thermal-infrared part of the electromagnetic spectrum can
be used to determine the temperature of the surface of the land or sea, provided
the emissivity of the surface is known. The emissivity of water is known; in
fact, it is very close to unity. For land, however, the emissivity varies widely
and its value is not very accurately known. Thus, infrared remotely sensed
data can readily be used for the measurement of sea-surface temperatures, but
their interpretation for land areas is more difficult. Aircraft-flown thermalinfrared scanners are widely used in surveys to study heat losses from roof
surfaces of buildings as well as in the study of thermal plumes from sewers,
factories, and power stations. Figure 2.4 highlights the discharge of warm
sewage into the River Tay. It can be seen that the sewage dispersion is not
particularly effective in the prevailing conditions. Because this is a thermal
image, the tail-off with distance from the outfall is possibly more a measure
of the rate of cooling than the dispersal of the sewage.
In the study of sea-surface temperatures using the 3 to 5 µm range, it is
necessary to restrict oneself to the use of nighttime data in order to avoid
the considerable amount of reflected thermal-infrared radiation that is
present at these wavelengths during the day. This wavelength range is used
for channel (or band) 3 of the Advanced Very High Resolution Radiometer
(AVHRR) (see Section 3.2.1). This channel of the AVHRR can accordingly be
used to study surface temperatures of the Earth at night only. For the 9.5 to
13.5 µm wavelength range, the reflected solar radiation is much less important and so data from this wavelength range can be used throughout the
day. However, even in these two atmospheric windows, the atmosphere is
still not completely transparent and accurate calculations of Earth-surface
temperatures or emissivities from thermal-infrared satellite data must incorporate corrections to allow for atmospheric effects. These corrections are
discussed in Chapter 8. Thermal-infrared radiation does not significantly
penetrate clouds, so one should remember that in cloudy weather it is the
temperature and emissivity of the upper surface of the clouds — not of the
land or sea surface of the Earth — that are being studied.
In microwave remote sensing of the Earth, the range of wavelengths used
is from about 1 mm to several tens of centimeters. The shorter wavelength
9255_C002.fm Page 28 Friday, February 16, 2007 10:30 PM
28
Introduction to Remote Sensing
Tay Estuary
(a)
Land
area
Surface temperature:-
>11.5°C
11.0–11.5°C
Pipe location
10.5–11.0°C
10.1–10.5°C
(b)
FIGURE 2.4
A thermal plume in the Tay Estuary, Dundee: (a) thermal-infrared scanner image; (b) enlarged
and thermally contoured area from within box in (a). (Wilson and Anderson, 1984.)
limit of this range is attributable to atmospheric absorption, whereas the long
wavelength limit may be ascribed to instrumental constraints and the reflective
and emissive properties of the atmosphere and the surface of the Earth. There
are a number of important differences between remote sensing in the microwave part of the spectrum and remote sensing in the visible and infrared parts
of the spectrum. First, microwaves are scarcely attenuated at all in their
passage through the atmosphere, except in the presence of heavy rain. This
means that microwave techniques can be used in almost all weather conditions. The effect of heavy rain on microwave transmission is actually exploited
by meteorologists using ground-based radars to study rainfall. A second
difference is that the intensities of the radiation emitted or reflected by the
surface of the Earth in the microwave part of the electromagnetic spectrum
are very small, with the result being that any passive microwave remote
sensing instrument must necessarily be very sensitive. This creates the
requirement that the passive microwave radiometer gathers radiation from a
large area (i.e., its instantaneous field of view will have to be very large indeed)
in order to preserve the fidelity of the signal received. On the other hand, an
active microwave remote sensing instrument has little background radiation to
9255_C002.fm Page 29 Friday, February 16, 2007 10:30 PM
29
Sensors and Instruments
corrupt the signal that is transmitted from the satellite, reflected at the surface
of the Earth, and finally received back at the satellite. A third difference is that
the wavelengths of the microwave radiation used are comparable in size to
many of the irregularities of the surface of the land or the sea. Therefore, the
remote sensing instrument may provide data that enables one to obtain information about the roughness of the surface that is being observed. This is of
particular importance when studying oceanographic phenomena.
2.3
Visible and Near-Infrared Sensors
A general classification scheme for sensors operating in the visible and infrared
regions of the spectrum is illustrated in Figure 2.5. In photographic cameras,
where an image is formed in a conventional manner by a lens, recordings are
restricted to those wavelengths for which it is possible to manufacture lenses
(i.e., in practice, to wavelengths in the visible and near-infrared regions). The
camera may be an instrument in which the image is captured on film or on a
charged-coupled device (CCD) array. Alternatively, it may be like a television
camera, in which case it would usually be referred to as a return beam vidicon
(RBV) camera, in which the image is converted into a signal that is superimposed on a carrier wave and transmitted to a distant receiver. RBV cameras
have been flown with some success on some of the Landsat satellites. In the
case of nonphotographic sensors, either no image is formed or an image is
formed in a completely different physical manner from the method used in a
camera with a lens. If no lens is involved, the instrument is able to operate at
longer wavelengths in the infrared part of the spectrum.
Visible and thermal
IR sensors
Photographic
(cameras)
Electro-optical
Imaging
Scanning
Non-imaging
Detector
arrays
FIGURE 2.5
Classification scheme for sensors covering the visible and thermal-infrared range of the
electromagnetic spectrum.
9255_C002.fm Page 30 Friday, February 16, 2007 10:30 PM
30
Introduction to Remote Sensing
Multispectral scanners (MSSs) are nonphotographic instruments that are
widely used in remote sensing and are able to operate both in the visible
and infrared ranges of wavelengths. The concept of an MSS involves an
extension of the idea of a simple radiometer in two ways: first by splitting
the beam of received radiation into a number of spectral ranges or “bands”
and secondly by adding the important feature of scanning. The image is not
formed all at once as it is in a camera but is built up by scanning. In most
cases, this scanning is achieved using a rotating mirror; in others, either the
whole satellite spins or a “push-broom” technique using a one-dimensional
CCD array is employed. An MSS consists of a telescope and various other
optical and electronic components. At any given instant, the telescope
receives radiation from a given area, the IFOV, on the surface of the Earth
in the line of sight of the telescope. The radiation is reflected by the mirror
and separated into different spectral bands, or ranges of wavelength. The
intensity of the radiation in each band is then measured by a detector. The
output value from the detector then gives the intensity for one point (picture
element, or pixel) in the image. For a polar-orbiting satellite, scanning is
achieved by having the axis of rotation of the minor along the direction of
motion of the satellite so that the scan lines are at right angles to the direction
of motion of the satellite (see Figure 2.6). At any instant, the instrument
Optics
Scan mirror
6 Detectors
per band
(24 total)
+ 2 for band 8
(Landsat-C)
185 km
6 Lines scan/band
Direction
of flight
FIGURE 2.6
Landsat MSS scanning system. (National Aeronautics and Space Administration [NASA 1976].)
9255_C002.fm Page 31 Friday, February 16, 2007 10:30 PM
31
Sensors and Instruments
Electronically
despun antenna
Toroidal
pattern antennas
Solar panels
VHF
antenna
Cooler
Radiometer
aperture
FIGURE 2.7
The first Meteosat satellite, Meteosat-1.
views a given area beneath it and concentrates the radiation from that IFOV
onto the detecting system; successive pixels in the scan line are generated
by data from successive positions of the mirror as it rotates and receives
radiation from successive IFOVs. For a polar-orbiting satellite, the advance
to the next scan line is achieved by the motion of the satellite. For a geostationary satellite, line-scanning is achieved by having the satellite spinning
about an axis parallel to the axis of rotation of the Earth; the advance to the
next scan line is achieved by adjusting the look direction of the optics —
that is, by tilting the mirror. For example, Meteosat-1, which was launched
into geostationary orbit at the Greenwich Meridian, spins at 100 rpm about
an axis almost parallel to the N-S axis of the Earth (see Figure 2.7). Changes
in inclination, spin rate, and longitudinal position are made, when required,
by using a series of thruster motors that are controlled from the ground.
The push-broom scanner is an alternative scanning system that has no moving parts. It has a one-dimensional array of CCDs that is used in place of a
scanning mirror to achieve cross-track scanning. No mechanical scanning is
involved; a whole scan line is imaged optically onto the CCD array and the
scanning along the line is achieved from the succession of signals from the
responses of the detectors in the array. At a later time, the instrument is moved
forward, the next scan line is imaged on the CCD array, and the responses are
obtained electronically — in other words, the advance from one scan line to
the next is achieved by the motion of the satellite (see Figure 2.8).
9255_C002.fm Page 32 Friday, February 16, 2007 10:30 PM
32
Introduction to Remote Sensing
IFOV for Each Detector = 1 mrad
Scan
Direction
Altitude
10 km
Ground
Resolution
Cell
10 m by 10 m
Dwell Time =
Cell Dimension
Velocity
=
10 m / cell-1
200 m / sec-1
= 5 x 10-2 sec / cell-1
FIGURE 2.8
Sketch of push-broom or along track scanner. (Sabins, 1986.)
An MSS produces several coregistered images, one corresponding to each
of the spectral bands into which the radiation is separated by the detecting
system. In the early days, the number of bands in an MSS was very small, for
example, four bands for the Landsat MSS; however, as technology has
advanced, the number of bands has increased to 20 or 30 bands and, more
recently, to several hundred bands. When the number of bands is very large,
the instrument is referred to as a hyperspectral scanner. In a hyperspectral
scanner, the set of intensities of the various bands for any given pixel, if
plotted against the wavelength of the bands, begins to approach a continuous
spectrum of the radiation reflected from the ground IFOV. Consequently, a
hyperspectral scanner is also referred to as an imaging spectrometer; it
generates a spectrum for each pixel in the image.
The object of using more spectral bands or channels is to achieve greater
discrimination between different targets on the surface of the Earth. The data
collected by an imaging spectrometer for one scene are sometimes referred
to as a hyperspectral cube. The x and y directions represent two orthogonal
directions on the ground, one along the flight line and the other at right
9255_C002.fm Page 33 Friday, February 16, 2007 10:30 PM
Sensors and Instruments
33
angles to the flight line. The z direction represents the band number or, on
a linear scale if the bands are equally spaced, the wavelength. For any given
value of z, the horizontal sheet of intensities corresponds to the image of the
ground at one particular wavelength.
A great deal of information can be extracted from a monochrome image
obtained from one band of an MSS or hyperspectral scanner. The image can
be handled as a photographic product and subjected to the conventional
techniques of photointerpretation. The image can also be handled on a digital,
interactive image-processing system and various image-enhancement
operations, such as contrast enhancement, edge enhancement, and density
slicing, can be applied to the image. These techniques are discussed in
Chapter 9. However, more information can usually be extracted by using
the data from several bands and thereby exploiting the differences in the
reflectivity, as a function of wavelength, of different objects on the ground.
The data from several bands can be combined visually, for example, by using
three bands and putting the pictures from these bands onto the three guns
of a color television monitor or onto the primary-color emulsions of a color
film. The colors that appear in an image that is produced in this way will
not necessarily bear any simple relationship to the true colors of the original
objects on the ground when they are viewed in white light from the Sun.
Examples of such false color composites abound in many coffee-table books
of satellite-derived remote sensing images (see Figure 2.9, for example).
Colored images are widely used in remote sensing work. In many
instances, the use of color enables additional information to be conveyed
visually that could not be conveyed in a black-and-white monochrome
image, although it is not uncommon for color to be added for purely cosmetic
purposes. Combining data from several different bands of an MSS to produce
a false color composite image for visual interpretation and analysis suffers
from the restriction that the digital values of three bands only can be used
as input data for a given pixel in the image. This means that only three bands
can be handled simultaneously; if more bands are used, then combinations
or ratios of bands must be taken before the data are used to produce an
image and, in that case, the information available is not being exploited to
the full. Full use of the information available in all the bands can be made
if the data are analyzed and interpreted with a computer. The numerical
methods that are used for handling multispectral data will be considered in
some detail in Chapter 9. Different surfaces generally have different reflectivities in different parts of the spectrum. Accordingly, an attempt may be
made to identify surfaces from their observed reflectivities. In doing this one
needs to consider not just the fraction of the total intensity of the incident
sunlight that is reflected by the surface but also the distribution of the
reflectivity as a function of wavelength. This reflectivity spectrum can be
regarded as characteristic of the nature of the surface and is sometimes
described as a spectral “signature” by which the nature of the surface may
be identified. However, the data recovered from an MSS do not provide
reflectivity as a continuous function of wavelength; one only obtains a
9255_C002.fm Page 34 Friday, February 16, 2007 10:30 PM
34
Introduction to Remote Sensing
FIGURE 2.9 (See color insert)
A false color composite of southwest Europe and northwest Africa based on National Oceanic
and Atmospheric Administration AVHRR data. (Processed by DLR for the European Space Agency.)
discrete set of numbers corresponding to the integrals of the continuous
reflectivity function integrated over the wavelength ranges of the various
bands of the instrument (see Figure 2.10). Thus, data from an MSS clearly
provide less scope for discrimination among different surfaces than continuous spectra would provide. It has, until recently, not been possible to gather
remotely sensed data to produce anything like a continuous spectrum for
each pixel; however, with a hyperspectral scanner or imaging spectrometer,
where the number of bands available is greater, the discrete set of numbers
constituting the signature of a pixel more closely approaches a continuous
reflectivity function.
9255_C002.fm Page 35 Friday, February 16, 2007 10:30 PM
35
Sensors and Instruments
I
Band
1
0.5
Band
2
0.6
Band
3
0.7
Band
4
0.8
λ(µm)
1.1
FIGURE 2.10
Sketch to illustrate the relation between a continuous reflectivity distribution and the bandintegrated values (broken line histogram).
2.4
Thermal-Infrared Sensors
Airborne thermal-infrared line scanners were developed in the 1960s (see
Figure 2.11). Radiation from the surface under investigation strikes the scan
mirror and is reflected to the surface of the focusing mirrors and then to a
photoelectric detector. The voltage output of the detector is amplified and
activates the output of a light source. The light varies in intensity with the
voltage and is recorded on film. The detectors generally measure radiation
in the 3.5 to 5.5 µm and 8.0 to 14.0 µm atmospheric windows. When the
instrument is operating, the scan mirror rotates about an axis parallel to the
flight path (see Figure 2.11). Instruments of this type have been widely used
in airborne surveys to study, for example, temperature variations associated
with natural geothermal anomalies, heat losses from roof surfaces of buildings, and faults in underground hot water or steam distribution networks
for communal heating systems. A thermal-infrared band or channel was
added to the visible and near-infrared scanners flown on the early polarorbiting meteorological satellites. From 1978 onward, in the AVHRR flown
on the National Oceanic and Atmospheric Administration (NOAA) polarorbiting operational environmental satellites (POES), the output was digitized
on board and transmitted to Earth as digital data. Data from the thermalinfrared channels of scanners flown on polar-orbiting and geostationary meteorological satellites are now routinely used for the determination of sea surface
temperatures all over the world.
The use of thermal-infrared scanning to determine temperatures is a
passive rather than an active process. That is to say it depends on the
radiation originating from the object under observation and does not require
the object to be illuminated by the sensor itself. All objects with temperatures
above absolute zero contain atoms in various states of random thermal
9255_C002.fm Page 36 Friday, February 16, 2007 10:30 PM
36
Introduction to Remote Sensing
(Optional)
direct film
recorder
Magnetic
tape recorder
Modulated
light source
Liquid
nitrogen
container
Recorder
mirror
Motor
Scan
mirror
Signal
Detector
Controlled radiant
temperature sources
(for calibrated imagery)
Instantaneous field
of view (2 to 3 mrad)
Scan pattern
on ground
Amplifier
Focusing
mirrors
Angular
field of
view (90 to
120°)
Aircraft flight
direction
Ground
resolution cell
FIGURE 2.11
Schematic view of an airborne infrared scanning system.
motion and in continuous collision with each other. These motions and
collisions give rise to the emission of electromagnetic radiation over a broad
range of wavelengths. The temperature of an object affects the quantity of
the continuum radiation it emits and determines the wavelength at which
the radiation is a maximum (lmax). The value of this wavelength, lmax, can
actually be derived from the Planck radiation formula in Equation 2.1 by
considering the curve for a constant value of T and differentiating with
respect to l to find the maximum of the curve. The result is expressed as
Wien’s displacement law:
λ maxT = constant
(2.2)
where T is the temperature of the object.
It is not true, however, that all bodies radiate the same quantity of radiation
at the same temperature. The amount depends on a property of the body
called the emissivity, e, the ideal black body (or perfect emitter) having an
9255_C002.fm Page 37 Friday, February 16, 2007 10:30 PM
37
Sensors and Instruments
T = 700 K
E(λ) (109 Wm−3)
E(λ) (106 Wm−3)
30
T = 293 K
20
10
2.0
T = 600 K
1.0
T = 500 K
T = 400 K
0
0
0
5
10
λ(µm)
(a)
15
20
0
5
10
λ(µm)
(b)
15
20
FIGURE 2.12
Planck distribution function for black body radiation at (a) 293 K and (b) a number of other
temperatures; note the change of scale between (a) and (b).
emissivity of unity and all other bodies having emissivities less than unity.
Wien’s displacement law describes the broadband emission properties of an
object. As indicated in Section 2.2, Planck’s radiation law gives the energy
distribution within the radiation continuum produced by a black body.
Using the Planck relationship (Equation 2.1), one can draw the shape of
the energy distribution from a black body at a temperature of 293 K (20°C
[68°F]), the typical temperature of an object viewed by an infrared scanner.
The 5 to 20 µm range is also commonly referred to as the thermal-infrared
region, as it is in this region that objects normally encountered by human
beings radiate their heat. From Figure 2.12 it can also be seen that the energy
maximum occurs at 10 µm, which is fortuitous because an atmospheric
transmission window exists around this wavelength. To explain what is
meant by an atmospheric window, it should be realized that the atmosphere
attenuates all wavelengths of electromagnetic radiation differently due to
the absorption spectra of the constituent atmospheric gases. Figure 2.13 shows
the atmospheric absorption for a range of wavelengths, with some indication
of the gases that account for this absorption. It can be seen then from the
lowest curve in Figure 2.13, which applies to the whole atmosphere, that
there is a region of high atmospheric transmittance between 8 and 14 µm and
it is this waveband that is used for temperature studies with airborne and
satellite-flown radiometers. This region of the spectrum is also the region in
which there is maximum radiation for the range of temperatures seen in
terrestrial objects (for example, ground temperatures, buildings, and roads).
The total radiation emitted from a body at a temperature T is given by the
well-known Stefan-Boltzmann Law:
E = σT 4
(2.3)
Accordingly, if the total radiation emitted is measured, the temperature of
the body may then be determined. Equation 2.3 was originally put forward
as an empirical formula, but it can be derived by integrating the Planck
9255_C002.fm Page 38 Friday, February 16, 2007 10:30 PM
38
Absorption
coefficient
(rel units)
Introduction to Remote Sensing
Wavelength (µm)
1.98 1.99 2.00
Detail
of H2O
spectrum
1
CH4
0
1
1
0
1
0
Absorptivity
N2O
O2 and O3
0
1
CO2
0
1
H2O
0
1
Atmosphere
0
0.1
0.2 0.3 0.4 0.6 0.8 1 1.5 2
3 4 5 6 8 10 20 30
Wavelength (µm)
FIGURE 2.13
Whole atmosphere transmittance.
distribution function in Equation 2.1, for a given temperature, over the whole
range of l, from zero to infinity. This also yields an expression for s.
Airborne infrared surveys are flown along parallel lines at fixed line spacing
and flying height and, because thermal surveys are usually flown in darkness,
a sophisticated navigation system is invariably required. This may take the form
of ground control beacons mounted on vehicles. Predawn surveys are normally
flown because thermal conditions tend to stabilize during the night and temperature differences on the surface are enhanced. During daytime, solar energy heats
the Earth’s surface and may accordingly contaminate the information sought.
The predawn period is also optimal for flying because turbulence that can cause
aircraft instability, and consequently image distortion, is at a minimum. The
results are usually printed like conventional black-and-white photographs,
showing hot surfaces as white and cool surfaces as dark. The term infrared
thermography is commonly applied to the determination of temperatures, using
infrared cameras or scanners, for studying objects at close range or from an
aircraft. This term tends not to be used with thermal-infrared data from satellites.
2.5
Microwave Sensors
The existence of passive microwave scanners was mentioned briefly in Section
2.2, and their advantage over optical and infrared scanners — in that they can
give information about the surface of the Earth in cloudy weather — was
9255_C002.fm Page 39 Friday, February 16, 2007 10:30 PM
39
Sensors and Instruments
alluded to. Passive microwave sensors are also capable of gathering data at
night as well as during the day because they sense emitted radiation rather
than reflected solar radiation. However, the spatial resolution of passive
microwave sensors is very poor compared with that of visible and infrared
scanners. There are two reasons for this. First, the wavelength of microwaves
is much longer than those of visible and infrared radiation and the theoretical
limit to the spatial resolution depends on the ratio of the wavelength of the
radiation to the aperture of the sensing instrument. Secondly, as already
mentioned, the intensity of microwave radiation emitted or reflected from
the surface of the Earth is very low. The nature of the environmental and
geophysical information that can be obtained from a microwave scanner is
complementary to the information that can be obtained from visible and
infrared scanners.
Passive microwave radiometry applied to investigations of the Earth’s
surface involves the detection of thermally generated microwave radiation.
The characteristics of the received radiation, in terms of the variation of
intensity, polarization properties, frequency, and observation angle, depend
on the nature of the surface being observed and on its emissivity. The part
of the electromagnetic spectrum with which passive microwave radiometry
is concerned is from ~1 GHz to ~200 GHz or, in terms of wavelengths, from
~0.15 cm to ~30 cm.
Figure 2.14 shows the principal elements of a microwave radiometer.
Scanning is achieved by movement of the antenna and the motion of the
platform (aircraft or satellite) in the direction of travel. The signal is very
small, and one of the main problems is to reduce the noise level of the receiver
itself to an acceptable level. After detection, the signal is integrated to give
a suitable signal-to-noise value. The signal can then be stored on a tape
recorder on board the platform or, in the case of a satellite, it may then be
transmitted by a telemetry system to a receiving station on Earth.
The spatial resolution of a passive microwave radiometer depends on the
beamwidth of the receiving antenna, the aperture of the antenna, and the
wavelength of the radiation, as represented by the equation:
AG =
λ 2 R 2 sec 2 θ
AA
(2.4)
where
AG is the area viewed (resolved normally),
l is the wavelength,
R is the range,
AA is the area of the receiving aperture, and
q is the scan angle.
The spatial resolution decreases by three or four orders of magnitude for a given
size of antenna from the infrared to the microwave region of the electromagnetic
spectrum. For example, the thermal-infrared channels of the AVHRR flown on
9255_C002.fm Page 40 Friday, February 16, 2007 10:30 PM
40
Introduction to Remote Sensing
Axis of rotation
Offset
reflector
Multi-frequency
feed horn
Drive system
Skyhorn
cluster
FIGURE 2.14
Scanning multichannel (or multifrequency) microwave radiometer (SMMR).
the NOAA POES series of satellites have an instantaneous field of view of a
little more than 1 km2. For the shortest wavelength (frequency 37 GHz) of
the Scanning Multichannel Microwave Radiometer (SMMR) flown on the
Nimbus-7 satellite, the IFOV was about 18 km × 27 km, whereas for the
longest wavelength (frequency 6.6 GHz) on that instrument, it was about
95 km × 148 km. An antenna of a totally unrealistic size would be required to
obtain an IFOV of the order of 1 km2 for microwave radiation. The SMMR ended
operations on July 6, 1988. Its successor was the Special Sensor Microwave
Imager, which has been flown on many of the Defense Meteorological Satellite
Program series of polar-orbiting satellites (see Chapter 3) from 1987 onwards.
Passive scanning microwave radiometers flown on satellites can be used
to obtain frequent measurements of sea-surface temperatures on a global
scale and are thus very suitable for meteorological and climatological studies,
although they are of no use in studying small-scale water-surface temperature features, such as fronts in coastal regions. On the other hand, the spatial
resolution of a satellite-flown thermal-infrared scanner is very appropriate
for the study of small-scale phenomena. It would give far too much detail
for global weather forecasting purposes and would need to be degraded
before it could be used for that purpose. Figure 2.15 shows sea-surface and
ice-surface temperatures derived from the SMMR.
The signal/noise ratio can also be a problem. The signal is the radiated or
reflected brightness of the target (i.e., its microwave temperature). The noise
corresponds to the temperature of the passive receiver. To improve the
9255_C002.fm Page 41 Friday, February 16, 2007 10:30 PM
41
Sensors and Instruments
(a)
(b)
FIGURE 2.15 (See color insert)
Sea ice and ocean surface temperatures derived from Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR); three-day average data for north and south polar regions (a) April
1979 and (b) June 1979. (NASA Goddard Space Flight Center.)
9255_C002.fm Page 42 Friday, February 16, 2007 10:30 PM
42
Introduction to Remote Sensing
signal/noise ratio for weak targets, the receiver temperature must be
proportionately lower. The signal/noise ratio, S/N, is given by
 T 4λ 2 
S
= F  S2 4 
N
 R TR 
(2.5)
where
TS is the brightness temperature of the target,
TR is the temperature of the receiver,
and R is the range.
The received signal in a passive radiometer is also a function of the range,
the intensity of the radiation received being inversely proportional to R2.
This has a considerable effect when passive instruments are flown on satellites rather than aircraft. In practice, another important factor is the presence
of microwave communications transmissions at the surface of the Earth;
these are responsible for substantial contamination of the Earth-leaving
microwave radiance and therefore lead to significant error in satellite-derived sea-surface temperatures.
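A one-line calculation shows the size of this range effect; the aircraft and satellite altitudes below are assumed, illustrative values.

```python
# Received intensity falls off as 1/R^2, so moving a passive radiometer
# from an aircraft (assumed 10 km altitude) to a satellite (assumed 800 km)
# weakens the received signal by a factor of (800/10)^2 = 6400.
aircraft_range_km = 10.0
satellite_range_km = 800.0
print((satellite_range_km / aircraft_range_km) ** 2)   # 6400.0
```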
An active microwave system can improve the poor spatial resolution associated with a passive microwave system. With an active system, it is possible
to measure parameters of the radiation other than just intensity. One can
measure:
• Time for the emitted pulse of radiation to travel from the satellite to
the ground and back to the satellite
• Doppler shift in the frequency of the radiation as a result of relative
motion of the satellite and the ground
• Polarization of the radiation (although polarization can also be measured by passive instruments).
The important types of active microwave instruments that are flown on
satellites include the altimeter, the scatterometer, and the synthetic aperture
radar (SAR).
A radar altimeter is an active device that uses the return time of a pulse
of microwave radiation to determine the height of the satellite above the
surface of the land or sea. It measures the vertical distance straight down
from the satellite to the surface of the Earth. Altimeters have been flown on
various spacecraft, including Skylab, GEOS-3, Seasat, ERS-1, ERS-2, TOPEX/
Poseidon, and ENVISAT and accuracies of the order of ±3 or 4 cm have been
obtained with them. The principal use of the altimeter is for the determination
of the mean level of the surface of the sea after the elimination of tidal effect
and all other motion of the water. By analyzing the shape of the return pulse
received by the altimeter when the satellite is over the sea, it is also possible
to determine the significant wave height of waves on the surface of the sea
and to determine the near-surface wind speed (but not the wind direction).
The relationships used to determine the sea state and wind speed are essentially empirical. These empirical relationships are based originally on measurements obtained with altimeters flown on aircraft and calibrated with
surface data; subsequent refinements of these relationships have been
achieved using satellite data. Accuracies of ±1.5 ms−1 are claimed for the
derived wind speeds.
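The core altimeter calculation is simply h = ct/2 applied to the measured two-way time. The sketch below, which assumes an 800 km orbit for illustration, also shows the picosecond-level timing precision implied by the ±3 cm accuracy quoted above.

```python
C = 2.998e8  # speed of light, m/s

def height_from_return_time(two_way_time_s):
    """Radar altimeter height: the pulse travels down and back, so h = c*t/2."""
    return C * two_way_time_s / 2.0

# An assumed ~800 km orbit gives a two-way time of about 5.3 ms.
t = 2 * 800e3 / C
print(f"two-way time ~ {t*1e3:.2f} ms, height = {height_from_return_time(t)/1e3:.0f} km")

# Timing precision needed for the +/-3 cm accuracy quoted in the text:
print(f"timing precision ~ {2 * 0.03 / C * 1e12:.0f} ps")  # ~200 ps
```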
The scatterometer is another active microwave instrument that can be used
to study sea state. Unlike the altimeter, which uses a single beam directed
vertically downward from the spacecraft, the scatterometer uses a more
complicated arrangement that involves a number of radar beams that enable
the direction as well as the speed of the wind to be determined. It was
possible to determine the wind direction to within ±20° with the scatterometers on the Seasat, ERS-1, and ERS-2 satellites. Further details of active
microwave systems are presented in Chapter 7.
The important imaging microwave instruments are the passive scanning
multichannel, multispectral, or multifrequency microwave radiometers and
the active SARs. It has already been noted that passive radiometry is limited
by its poor spatial resolution, which depends on the range, the wavelength of
the radiation used, the aperture of the antenna, and the signal/noise ratio. The
signal/noise ratio in turn is influenced by the strength of the signal produced
by the target and by the temperature and sensitivity of the receiver. Ideally, a
device is required that can operate in all weather conditions, that can operate
both during the day and during the night, and that has adequate spatial
resolution for the purpose for which the instrument is to be used in an Earth
observation program. For many remote-sensing applications, passive microwave radiometers cannot satisfy the third requirement. An active microwave
instrument, that is some kind of radar device, meets the first two of these
conditions, the conditions concerning all-weather and nighttime operation.
When used on an aircraft, conventional imaging radars are able to give very
useful information about a variety of phenomena on the surface of the Earth.
Accordingly, conventional (side-looking airborne) radars are frequently flown
on aircraft for remote sensing work. However, when it comes to carrying an
imaging radar on board a satellite, calculations of the size of antenna that
would be required to achieve adequate spatial resolution show that one would
need an antenna that was enormously larger than one could possibly hope to
mount on board a satellite. SAR has been introduced to overcome this problem.
In a SAR, reflected signals are received from successive positions of the
antenna as the platform moves along its path. In this way, an image is built
up that is similar to the image one would obtain from a real antenna of several
hundreds of meters or even a few kilometers in length. Whereas in the case
of a radiometer or scanner, an image is produced directly and simply from
the data transmitted back to Earth from the platform, in the case of a SAR, the
reconstruction of an image from the transmitted data is much more complicated. It involves processing the Doppler shifts of the received radiation. (This
will be described further in Chapter 7).
FIGURE 2.16
Displacement of a ship relative to its wake in a SAR image; data from Seasat orbit 834 of
August 24, 1978 processed digitally. (RAE Farnborough.)
It is important not to have too many preconceptions about the images
produced from a SAR. A SAR image need not necessarily be a direct counterpart of an image produced in the optical or infrared part of the spectrum
with a camera or scanner. Perhaps the most obvious difference arises in
connection with moving objects in the target field. Such an object will lead
to a received signal that has two Doppler shifts in it, one from the motion
of the target and one from the motion of the platform carrying the SAR
instrument. In processing the received signals, one cannot distinguish
between these two different contributions to the Doppler shift. Effectively,
the processing regards the Doppler shift arising from the motion of the target
as an extra contribution to the range. Figure 2.16 is a SAR image of a moving
ship in which the ship appears displaced from its wake; similarly SAR
images have been obtained in which a moving train appears displaced
sideways from the track. The principles of SAR are considered in more detail
in Chapter 7.
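The displacement of a moving target is commonly approximated, to first order, as an azimuth shift of (R/V)·v_r, where R is the slant range, V the platform speed, and v_r the target's radial (line-of-sight) velocity. The Seasat-like numbers in the sketch below are assumed for illustration, not taken from Figure 2.16.

```python
def azimuth_shift_m(slant_range_m, platform_speed_ms, radial_velocity_ms):
    """First-order azimuth displacement of a moving target in a SAR image:
    the target's own Doppler shift is misread by the processor as an
    along-track position offset of approximately (R / V) * v_r."""
    return slant_range_m / platform_speed_ms * radial_velocity_ms

# Assumed Seasat-like geometry: ~850 km slant range, ~7.5 km/s platform
# speed, ship steaming with a 5 m/s line-of-sight velocity component.
print(f"{azimuth_shift_m(850e3, 7.5e3, 5.0):.0f} m")   # ~570 m
```

A shift of several hundred meters is easily visible at Seasat resolution, which is why the ship in Figure 2.16 appears well clear of its own wake.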
2.6
Sonic Sensors

2.6.1
Sound Navigation and Ranging
Sound navigation and ranging (sonar) is similar in principle to radar but
uses pulses of sound or of ultrasound instead of pulses of radio waves.
Whereas radio waves propagate freely in the atmosphere but are heavily
attenuated in water, the opposite is true of ultrasound. Radar cannot be used
under water. Sonar is used very extensively for underwater studies: for
ranging, for detecting underwater features, and for mapping seabed topography. The underwater features may include wrecks or, in a military context,
submarines and mines. Two methods are available for observing seabed
topography with sound or ultrasound; these involve vertical sounding with
an echo sounder or scanning with a side-scan sonar.
2.6.2
Echo Sounding
An echo sounder makes discrete measurements of depth below floating
vessels using the return time for pulses of sound or ultrasound transmitted
vertically downwards to the seabed; from profiles of such measurements,
water depth charts can be constructed. This is, essentially, the underwater
analogue of the radar altimeter used to measure the height of a satellite above
the surface of the Earth. The echo sounder method gives a topographic profile
along a section of the sea floor directly beneath the survey ship. Even if a network
of such lines is surveyed, considerable interpolation is required if the echo
sounder data are to be contoured correctly and a meaningful two-dimensional
picture of seabed topography constructed between traversed lines.
Echo sounders do not provide direct measurement of water depth. A pulse
of sound is emitted by the sounder and the echo from the seabed is detected.
What is actually measured is the time interval between the transmission of
the pulse and the detection of the echo. This pulse of sound has traveled to
the seabed and back over a time interval called the two-way travel time.
Thus, the depth d is given by:
$$
d = \frac{1}{2}\,v\,t
\tag{2.6}
$$
where t is the two-way travel time and v is the velocity of sound in water.
The velocity v is not a universal constant but its value depends on such
factors as the temperature and salinity of the water.
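As a minimal sketch of Equation 2.6, the following Python fragment combines it with Medwin's simplified empirical approximation for the speed of sound in seawater; the coefficients are a commonly quoted form of that approximation and should be checked against a standard text before serious use, and the input conditions are assumed values.

```python
def sound_speed(T_c, S_psu, z_m):
    """Medwin's approximation for the speed of sound in seawater (m/s);
    T in deg C, salinity S in psu, depth z in metres."""
    return (1449.2 + 4.6 * T_c - 0.055 * T_c**2 + 0.00029 * T_c**3
            + (1.34 - 0.010 * T_c) * (S_psu - 35.0) + 0.016 * z_m)

def depth(two_way_time_s, v_ms):
    """Equation 2.6: d = v * t / 2."""
    return 0.5 * v_ms * two_way_time_s

# Assumed conditions: 10 deg C, salinity 35 psu, shallow water.
v = sound_speed(10.0, 35.0, 50.0)
print(f"v = {v:.0f} m/s, d = {depth(0.13, v):.0f} m for a 130 ms echo")
```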
The first stage in the production of bathymetric charts from echo soundings
is the transferal of depth values measured for each fix position onto the
survey map, the depth values being termed “posted” values. Depth values
intermediate between fixes are usually posted at this stage, particularly
topographic highs and lows as seen on the echo trace. Once a grid of lines
has been surveyed in an area, the data may be contoured to produce a
bathymetric chart. However, it is first necessary to apply corrections to the
measured depth values to compensate for tidal effects, to adjust to a predefined datum, and to compensate for variation with depth of the velocity
of sound in water.
FIGURE 2.17
Artist’s impression of a side scan sonar transducer beam: A = slant range, B = towfish height
above bottom, C = horizontal range. (Klein Associates Inc.)
2.6.3
Side Scan Sonar
Side scan sonar was developed in the late 1950s from experiments using
echo sounders tilted away from the vertical. Such sounders were studied as
a possible means of detecting shoals of fish, but results also showed the
potential of the method for studying the geology of the seabed and the
detection of wrecks as well as natural features of seabed topography adjacent
to, but not directly beneath, a ship’s course. Modern equipment utilizes
specially designed transducers that emit focused beams of sound having
narrow horizontal beam angles, usually less than 2°, and wide vertical beam
angles, usually greater than 20°; each pulse of sound is of very short duration,
usually less than 1 msec. To maximize the coverage obtained per survey line
sailed, dual-channel systems have been designed, the transducers being
mounted in a towed “fish” so that separate beams are scanned to each side
of the ship (see Figure 2.17). Thus a picture can be constructed of the seabed
ranging from beneath the ship to up to a few hundred meters on either side
of the ship’s course. The range of such a system is closely linked to the
resolution obtainable. Emphasis is given here to high-resolution, relatively
short-range systems (100 m to 1 km per channel) as these systems are more
commonly used. Typically, a high-precision system would be towed some
20 m above the seabed and would survey to a range of 150 m on either side
of the ship.
As with the echo sounder, the basic principle of side scan sonar is that
echoes of a transmitted pulse are detected and presented on a facsimile
record, termed a sonograph, in such a way that the time scan can easily
be calibrated in terms of distance across the seabed. The first echo in any
scan is the bottom echo, with subsequent echoes being reflected from
features ranging across the seabed to the outer limit of the scan. A number
of points should be noted. The range scale shown on a sonograph is usually
not the true range across the seabed but the slant range of the sound beam
(A in Figure 2.17) and, as with the echo sounder, distances indicated on a
record depend on an assumption about the velocity of sound in water
because the distance is taken to be equal to 1/2vt. If properly calibrated,
the sonograph will show the correct value for B, the depth of water beneath
the fish, which is presented as a depth profile on a sonograph. Echoes
reflected across the scan, subsequent to the seabed echo (from points X to
Y in Figure 2.17), are subject to slant range distortion: the actual distance
scanned across the seabed is
$$
C = \sqrt{A^2 - B^2}
\tag{2.7}
$$
Thus, corrections for slant range distortion should be applied if an object is
detected by side scan and a precise measurement of its size and position
relative to a fixed position of a ship is required. If A and B are kept near
constant for a survey, a correction can be made to relate apparent range to
true range.
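Applying Equation 2.7 sample by sample across a scan line might look like the following sketch; the towfish height and slant ranges are assumed values.

```python
import math

def horizontal_range(slant_range_m, fish_height_m):
    """Equation 2.7: true across-track distance C = sqrt(A^2 - B^2).
    Samples with A <= B lie within the water column (or are noise),
    so they map to zero ground range."""
    if slant_range_m <= fish_height_m:
        return 0.0
    return math.sqrt(slant_range_m**2 - fish_height_m**2)

# Assumed survey: towfish 20 m above the seabed, echoes to 150 m slant range.
for a in (25.0, 50.0, 100.0, 150.0):
    print(f"slant {a:5.1f} m -> ground {horizontal_range(a, 20.0):6.1f} m")
```

Note that the distortion is largest close to the track (a 25 m slant range maps to only 15 m on the ground) and becomes negligible at far range.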
Perhaps the most important variable to be considered in side scan sonar
applications is the resolution required. For highest resolution, a high-frequency
sound source, possibly in the range of 50 to 500 kHz, and a very short pulse
length, of the order of 0.1 msec, are required. Such a source gives a range
resolution of 20 to 50 cm and enables detection of small-scale features of
seabed morphology, such as sand ripples of 10 to 20 cm amplitude. However,
the maximum range of such a system is not likely to exceed 400 m. If lower
resolving power is acceptable, systems based on lower frequency sources
are available that can be operated over larger sweep ranges. Thus, if the
object of a survey is to obtain complete coverage of an area, range limitation
will be an important factor in the cost of the undertaking.
The configuration of the main components of the instrument system is
very similar to that of the echo sounder, though with dual channel systems,
each channel constitutes a single-channel subsystem consisting of a transmission unit, transmitting and receiving transducers, a receiving amplifier,
and a signal processor. The function of the receiving amplifier and signal
processor in a side scan sonar is similar to that of the equivalent unit in the
echo sounder but, because signals other than the first-arrival echoes from
the seabed are also of concern, a more complex signal-processing facility
is required.
3
Satellite Systems
3.1
Introduction
In April 1960, less than 3 years after the first man-made satellite orbited the
Earth, the United States began its environmental satellite program with the
launch of TIROS-1, the first satellite in its TIROS (Television InfraRed
Observation Satellite) series. This achievement clearly demonstrated the
possibility of acquiring images of the Earth’s cloud systems from space, and
TIROS became the first in a long series of satellites launched primarily for
the purpose of meteorological research. A detailed account of the first
30 years of meteorological satellite systems is given by Rao et al. (1990). An
enormous number of other satellites have now been launched for a wide
range of environmental remote sensing work.
In this chapter, a few of the important features of some remote sensing
satellite systems are outlined. Rather than attempt to describe every system
that has ever been launched, this chapter will concentrate on those that are
reasonably widely used by scientists and engineers who actually make use
of remote sensing data collected by satellites. A comprehensive survey of Earth
observation satellite systems is given by Kramer (2002) and in Dr Kramer’s
updated version of his book which is available on the following website:
http://directory.eoportal.org/pres_ObservationoftheEarthanditsEnvironment.html.
A lot of information gathered by the Committee on Earth Observation Satellites can also be found on the website http://www.eohandbook.com.
Consideration is given in this chapter to the spatial resolution, spectral
resolution, and frequency of coverage of the different systems. Although it is
also important to consider the atmospheric effects on radiation traveling from
the surface of the Earth to a satellite, as they do influence the design of the
remote-sensing systems themselves, an extensive discussion of this topic is
postponed until Chapter 8, as the consideration of atmospheric effects is of
importance primarily at the data processing and interpretation stages.
In the early days, the main players in space programs for Earth resources
monitoring were the United States and the former Union of Soviet Socialist
Republics (USSR). In the meteorological field, the programs were similar
and, to some extent, complementary. The two countries developed very
similar polar-orbiting meteorological satellite programs. They cooperated in
the international program of geostationary meteorological satellites. However,
when it came to high-resolution programs, principally for land-based
applications, the two countries developed quite different systems. The
United States developed the Landsat program, and the USSR developed
the Resurs-F program.
3.2
Meteorological Remote Sensing Satellites
There now exists a system of operational meteorological satellites comprising
both polar-orbiting and geostationary satellites. These form an operational
system because:
• The U.S. National Oceanic and Atmospheric Administration
(NOAA) has committed itself to an ongoing operational program of
polar-orbiting operational environmental satellites (POESs),
although the future operation of this program will be shared with
the European Organisation for the Exploitation of Meteorological
Satellites' (EUMETSAT's) Meteorological Operational (MetOp) program.
• The international meteorological community has committed itself to
a series of operational geostationary satellites for meteorological
purposes.
In addition to these operational systems, several experimental satellite
systems have provided meteorological data for a period, but with no
guarantee of continuity of supply of that type of data. An overview of
polar-orbiting and geostationary meteorological satellite programs is given
in Tables 3.1 and 3.2, respectively.
3.2.1
Polar-Orbiting Meteorological Satellites
The NOAA POES program has its origins in TIROS-1, which was launched
in 1960 as an experimental satellite. Altogether a series of 10 experimental
spacecraft (TIROS-1 to TIROS-10) were launched by the United States over
the period 1960 to 1965. They were followed by the second-generation
TIROS Operational System (TOS) satellites ESSA-1 to ESSA-9 (ESSA =
Environmental Science Services Administration) between 1966 and 1969
and the third generation improved TIROS Operational system (ITOS)
satellites ITOS-1 and NOAA-2 to NOAA-5 between 1970 and 1978. These
systems were followed by TIROS-N, which was launched on October 13,
1978. TIROS-N was the first spacecraft in the fourth generation TIROS-N/NOAA and Advanced TIROS-N (ATN)/NOAA series, and this system
is still in operation.
TABLE 3.1
Overview of Polar-Orbiting Meteorological Satellite Series

| Satellite Series (Agency) | Launch | Major Instruments | Comments |
|---|---|---|---|
| NOAA-2 to -5 (NOAA) | October 21, 1971; July 29, 1976 | VHRR | 2580-km swath |
| TIROS-N (NOAA POES) | October 13, 1978 | AVHRR | >2600-km swath |
| NOAA-15 and NOAA-L, -M, -N, and -N′ | May 13, 1998 to 2007 | AVHRR/3 | >2600-km swath |
| DMSP Block 5D-1 (DoD) | September 11, 1976, to July 14, 1980 | OLS | 3000-km swath |
| DMSP Block 5D-2 (DoD) | December 20, 1982, to April 4, 1997 | OLS, SSM/I | |
| DMSP Block 5D-3 (DoD) | December 12, 1999 | OLS, SSM/I | SSMIS replaces SSM/I starting with F-16 (2001) |
| Meteor-3 series (ROSHYDROMET) | October 24, 1985 | MR-2000M | 3100-km swath |
| Meteor-3M series (ROSHYDROMET) | 2001 (Meteor-3M-1) | MR-900B | 2600-km swath |
| FY-1A to -1D (Chinese Meteorological Administration) | September 7, 1988; May 15, 2002 | MVISR | 2800-km swath |
| MetOp-1 (EUMETSAT) | * | AVHRR/3, MHS, IASI | PM complement to NOAA POES series |
| NPP (NASA/IPO) | * | VIIRS, CrIS, ATMS | NPOESS Preparatory Project |
| NPOESS (IPO) | * | VIIRS, CMIS, CrIS | Successor to NOAA POES and DMSP series |

(Adapted from Kramer, 2002)
*Not yet launched at time of writing.
The last six spacecraft of the series, the ATN spacecraft, were larger and
had additional solar panels to provide more power. This additional space
and power onboard enabled extra instruments to be carried, such as the
Earth Radiation Budget Experiment (ERBE), the Solar Backscatter Ultraviolet
instrument (SBUV/2), and a search and rescue system. The search and rescue
system uses the same location principles as the Argos system but is separate
from Argos (we follow Kramer [2002] in calling it S&RSAT rather than
SARSAT to avoid confusion with synthetic aperture radar [SAR]). The ERBE
is a National Aeronautics and Space Administration (NASA) research instrument. ERBE data contribute to understanding the total and seasonal planetary albedo and Earth radiation balances, zone by zone. This information is
used for recognizing and interpreting seasonal and annual climate variations
and contributes to long-term climate monitoring, research, and prediction.
The SBUV radiometer is a nonscanning, nadir-viewing instrument designed to
measure scene radiance in the ultraviolet spectral region from 160 to 400 nm.
SBUV data are used to determine the vertical distribution of ozone and the
total ozone in the atmosphere as well as solar spectral irradiance. The
S&RSAT is part of an international program to save lives. The S&RSAT
equipment on POES is provided by Canada and France. Similar Russian
equipment, called COSPAS (Space System for the Search of Distressed
Vessels), is carried on the Russian polar-orbiting spacecraft. The S&RSAT
and COSPAS systems relay emergency radio signals from aviators, mariners,
and land travelers in distress to ground stations, where the location of the
distress signal transmitter is determined. Information on the nature and
location of the emergency is then passed to a mission control center that
alerts the rescue coordination center closest to the emergency. Sketches of
spacecraft of the TIROS-NOAA series are shown in Figure 3.1.

TABLE 3.2
Overview of Geostationary Meteorological Satellites

| Spacecraft Series (Agency) | Launch | Major Instrument | Comment |
|---|---|---|---|
| ATS-1 to ATS-6 (NASA) | December 6, 1966, to August 12, 1969 | SSCC (MSSCC ATS-3) | Technical demonstration |
| GOES-1 to -7 (NOAA) | October 16, 1975, to February 26, 1987 | VISSR | First generation |
| GOES-8 to -12 (NOAA) | April 13, 1994, to July 23, 2001 | GOES Imager, Sounder | Second generation |
| GMS-1 to -5 (JMA) | July 14, 1977; March 18, 1995 | VISSR (GOES heritage) | First generation |
| MTSAT-1 (JMA et al.) | November 15, 1999 (launch failure of H-2 vehicle) | JAMI | Second generation |
| MTSAT-1R (JMA) | February 26, 2005 | JAMI | |
| MTSAT-2 | February 18, 2006 | JAMI | |
| Meteosat-1 to -7 (EUMETSAT) | November 23, 1977; September 3, 1997 | VISSR | First generation |
| MSG-1 (EUMETSAT) | August 28, 2002 | SEVIRI, GERB | Second generation |
| INSAT-1B to -1D (ISRO) | August 30, 1983, to June 12, 1990 | VHRR | |
| INSAT-2A to -2E (ISRO) | July 9, 1992, to April 3, 1999 | VHRR/2 | Starting with -2E |
| INSAT-3B, -3C, -3A, -3E (ISRO) | March 22, 2000, to September 28, 2003 | VHRR/2 | |
| MetSat-1 (ISRO) | September 12, 2002 | VHRR/2 | Weather satellite only |
| GOMS-1 (Russia/Planeta) | October 31, 1994 | STR | First generation |
| FY-2A, -2B (CMA, China) | June 10, 1997; July 26, 2000 | S-VISSR | |

(Adapted from Kramer, 2002)
FIGURE 3.1
Sketches of (a) TIROS-N spacecraft (Schwalb, 1978) and (b) Advanced TIROS-N spacecraft (ITT).
The primary POES mission is to provide daily global observations of
weather patterns and environmental conditions in the form of quantitative
data usable for numerical weather analysis and prediction. Polar-orbiting
spacecraft are used to observe and derive cloud cover, ice and snow coverage,
surface temperatures, vertical temperature and humidity profiles, and other
variables. The POES instrument payload has varied from mission to mission,
based on in-orbit experience and changing requirements.
Each NOAA POES has an atmospheric sounding capability and a high-resolution imaging capability. Before TIROS-N, the imaging capability was
provided by the Very High Resolution Radiometer (VHRR). The VHRR was
a two-channel, cross-track scanner that had an instantaneous field of view
(IFOV) of 0.87 km, a swath width of 2580 km, and two spectral bands.
The first VHRR channel measured reflected visible radiation from cloud tops
or the Earth’s surface in the limited spectral range of 0.6 to 0.7 µm. The
second channel measured thermal-infrared radiation emitted from the Earth,
sea, and cloud tops in the 10.5 to 12.5 µm region. This spectral region
permitted both daytime and nighttime radiance measurements and the
determination of the temperature of the cloud tops and of the sea surface in
cloud-free areas, both during daytime and at night. Improvements were
made through the third and fourth generations and, starting with TIROS-N, the system has delivered digital scanner data rather than analogue data.
TIROS-N had a new set of data gathering instruments. The instruments
flown on TIROS-N and its successors include the TIROS Operational Vertical Sounder (TOVS), the Advanced Very High Resolution Radiometer
(AVHRR), the Argos data collection system (see Section 1.5.2), and the
Space Environment Monitor (SEM). The TOVS is a three-instrument system
consisting of:
• High-Resolution Infrared Radiation Sounder (HIRS/2). The HIRS/2
is a 20-channel instrument for taking atmospheric measurements,
primarily in the infrared region. The data acquired can be used to
compute atmospheric profiles of pressure, temperature, and humidity.
• Stratospheric Sounding Unit (SSU). The SSU is a three-channel
instrument, provided by the United Kingdom, that uses a selective
absorption technique. The pressure in a carbon dioxide gas cell in
the optical path determines the spectral characteristics of each
channel, and the mass of carbon dioxide in each cell determines
the atmospheric level at which the weighting function of each
channel peaks.
• Microwave Sounding Unit (MSU). This four-channel Dicke radiometer makes passive microwave measurements in the 5.5 mm oxygen
band. Unlike the infrared instruments of TOVS, the MSU is little
influenced by clouds in the field of view.
The purpose of the TOVS is to enable vertical profiles of atmospheric
parameters, i.e. pressure, temperature and humidity, to be retrieved. On the
more recent spacecraft in the NOAA series there is an improved version of
TOVS, the Advanced TIROS Operational Vertical Sounder (ATOVS). ATOVS
includes a modified version of HIRS and a very much modified version of
the MSU, the Advanced Microwave Sounding Unit (AMSU) which has many
more spectral channels than the MSU.
The AVHRR is the main imaging instrument; it is the successor of the
VHRR, which was flown on earlier spacecraft. Three generations of AVHRR
cross-track scanning instruments (built by ITT Aerospace of Fort Wayne, IN)
have provided daily global coverage starting from 1978 (TIROS-N) to the
turn of the millennium and beyond. The AVHRR is a five-channel scanning
radiometer, or multispectral scanner (MSS), with a 1.1 km resolution that is
sensitive in the visible, near-infrared, and thermal-infrared window regions.
The spectral channels of the first two generations of the AVHRR are identified
in Table 3.3.

TABLE 3.3
Spectral Channel Wavelengths of the AVHRR

| Channel No. | AVHRR/1: TIROS-N (µm) | AVHRR/1: NOAA-6, -8, -10 (µm) | AVHRR/2: NOAA-7, -9, -11, -12, -14 (µm) | IFOV (mrad) | Principal Use of Channel |
|---|---|---|---|---|---|
| 1 | 0.550–0.90 | 0.550–0.68 | 0.550–0.68 | 1.39 | Day cloud and surface mapping |
| 2 | 0.725–1.10 | 0.725–1.10 | 0.725–1.10 | 1.41 | Surface water delineation and vegetation mapping |
| 3 | 3.550–3.93 | 3.550–3.93 | 3.550–3.93 | 1.51 | Sea surface temperature and fire detection |
| 4 | 10.50–11.50 | 10.50–11.50 | 10.30–11.30 | 1.41 | Sea surface temperature and nighttime cloud mapping |
| 5 | Repeat of channel 4 | Repeat of channel 4 | 11.50–12.50 | 1.30 | Surface temperature and day/night cloud mapping |

(Kramer, 2002)

In the third generation instrument, AVHRR/3, which was first
flown on NOAA-15 (launched on May 13, 1998), an extra spectral channel
with a wavelength range of 1.58 to 1.64 µm was added, as channel 3a. The
old channel 3, of 3.55 to 3.93 µm wavelength range, was redesignated as
channel 3b. However, to avoid disturbing the data transmission format, these
two channels 3a and 3b are not operated simultaneously and at any one time
only one is transmitted in the channel 3 position in the data stream. Channel
3b is valuable for studying certain kinds of clouds and small intensive
sources of heat (see Chapter 6 of Cracknell [1997]). A sketch of the AVHRR
is shown in Figure 3.2.
The data collected by the TIROS-N instruments, like those from all NOAA
POES, were stored onboard the satellite for transmission to the NOAA central processing facility at Suitland, MD, through the Wallops and Fairbanks
command and data acquisition stations. The AVHRR data can be recorded
at 1.1-km resolution (the basic resolution of the AVHRR instrument) or at 4-km
resolution. The stored high-resolution (1.1 km) imagery is known as local
area coverage (LAC) data. Owing to the large number of data bits, only about
11 minutes of LAC data can be accommodated on a single recorder. By
contrast, 115 minutes of the lower resolution (4 km) imagery, called global
area coverage (GAC) data, can be stored on a recorder — enough to cover
an entire orbit of 102 minutes. Satellite data are also transmitted in real time
direct readout at very high frequency (VHF) and S-band frequencies in the
automatic picture transmission (APT) and high-resolution picture transmission (HRPT) modes, respectively. These data can be recovered by local
ground stations while they are in the direct line of sight of the spacecraft.

FIGURE 3.2
Illustration of the AVHRR instrument. (Kramer, 2002.)
The terms LAC and HRPT refer to the same high-resolution data; the only
difference between them is that LAC refers to tape-recorded data and HRPT
refers to data that are downlinked live in the direct readout transmission to
ground stations for which the satellite is above their horizon.
The AVHRR provides data not only for daytime and nighttime imaging
in the visible and infrared but also for sea surface temperature determination,
estimation of heat budget components, and identification of snow and sea ice.
The AVHRR is the spaceborne instrument with the longest service period
and the widest data distribution and data analysis in the history of operational meteorology, oceanography, climatology, vegetation monitoring, and
land and sea ice observation. The instrument provides wide-swath (>2600 km,
scan to ±56°) multispectral imagery of about 1.1 km spatial resolution at
nadir from near-polar orbits (nominal altitude of 833 km). The resolution of
1.1 km is quite suitable for the wide-swath measurement of large-scale meteorological phenomena. The benefit of AVHRR data lies in its high temporal
frequency of global coverage. The AVHRR instrument was initially designed
for meteorological applications. The initial objectives were to develop a
system that would provide a more efficient way to track clouds, estimate
snow cover extent, and estimate sea surface temperature — and for these
purposes it has proved to be enormously successful. However, a few years
after the launch of the first AVHRR instrument, its usefulness in other applications, most especially in monitoring global vegetation, became apparent. Since
then, numerous other nonmeteorological uses for the data from the AVHRR
have been identified (see Chapter 10); an extensive discussion of the nonmeteorological uses of AVHRR data is given by Cracknell (1997).
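The best-known of these nonmeteorological products is the normalized difference vegetation index (NDVI), formed from AVHRR channels 1 (red) and 2 (near-infrared). A minimal sketch, assuming the two channels have already been calibrated to reflectance:

```python
import numpy as np

def ndvi(ch1_red, ch2_nir):
    """Normalized difference vegetation index from AVHRR channels 1 and 2:
    NDVI = (NIR - RED) / (NIR + RED), in the range -1 to +1."""
    red = np.asarray(ch1_red, dtype=float)
    nir = np.asarray(ch2_nir, dtype=float)
    return (nir - red) / np.maximum(nir + red, 1e-6)  # guard against 0/0

# Toy reflectances: water, bare soil, dense vegetation.
print(ndvi([0.05, 0.20, 0.05], [0.02, 0.25, 0.45]))  # ~[-0.43, 0.11, 0.80]
```

Vegetation reflects strongly in the near-infrared and absorbs in the red, so dense canopy gives NDVI values near +1, bare soil values near zero, and water negative values.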
The USSR was the other great world power involved in space from the very
early days. “Meteor” is the generic name for the long series of polar-orbiting
weather satellites that were launched by the USSR, and subsequently by Russia. The agency responsible for them is the Russian Federal Service for
Hydrometeorology and Environmental Monitoring (ROSHYDROMET). Prior
to this series, there was an experimental Cosmos series, of which the first
member with a meteorological objective was Cosmos-44, launched in 1964,
followed by a further nine Cosmos satellites until 1969, when the series was
officially named “Meteor-1.” This was followed by the series Meteor-2 and
Meteor-3.
In parallel with the civilian POES program, the U.S. military services of
the Department of Defense (DOD) built their own polar-orbiting meteorological satellite series, referred to as the Defense Meteorological Satellite
Program (DMSP), with the objective of collecting and disseminating worldwide cloud cover data on a daily basis. The first of the DMSP satellites was
launched on January 19, 1965, and a large number of satellites in this series
have been launched since then, with the satellites being progressively more
sophisticated. Like the NOAA series of satellites, the DMSP satellites are in
Sun-synchronous orbits with a period of about 102 minutes; two satellites
are normally in operation at any one time (one with a morning and one with
a late morning equatorial crossing time). The spacecraft are in orbits with a
nominal altitude of 833 km, giving the instrument a swath width of about
3,000 km. The spacecraft carry the Operational Linescan System, or OLS.
This instrument is somewhat similar to the VHRR, which has already been
described briefly; it is a two-channel across-track scanning radiometer, or
MSS, that was designed to gather daytime and nighttime cloud cover imagery. The wavelength ranges of the two channels are 0.4 to 1.1 µm and 10.0
to 13.4 µm (8 to 13 µm before 1979). The visible channel has a low-light
amplification system that enables intense light sources associated with urban
areas or forest fires to be seen in the nighttime data.
Many of the later DMSP spacecraft (from 1987 onward) have carried the
Special Sensor Microwave Imager (SSM/I), which is a successor to the
Scanning Multichannel Microwave Radiometer (SMMR) flown on Nimbus-7 and Seasat, both of which were launched in 1978. The SSM/I is a four-frequency, seven-channel instrument with frequencies and spatial resolution
similar to those of the SMMR (see Table 3.4). SSM/I is now, in turn, being
succeeded on the latest DMSP spacecraft by the Special Sensor Microwave
Imager Sounder (SSMIS), which incorporates other earlier microwave sounding instruments flown on DMSP spacecraft.
The future U.S. polar-orbiting meteorological satellite system is the National
Polar-Orbiting Operational Environmental Satellite System (NPOESS). This
system represents a merger of the NOAA POES and DMSP programs, with
the objective of providing a single, national remote-sensing capability for
meteorological, oceanographic, climatic, and space environmental data.
TABLE 3.4
Characteristics of the SSM/I

| Wavelength (mm) | Frequency (GHz) | Polarization | Resolution (km along track × km across track) |
|---|---|---|---|
| 15.5 | 19.35 | Vertical | 68.9 × 44.3 |
| 15.5 | 19.35 | Horizontal | 69.7 × 43.7 |
| 13.5 | 22.235 | Vertical | 59.7 × 39.6 |
| 8.1 | 37.0 | Vertical | 35.4 × 29.2 |
| 8.1 | 37.0 | Horizontal | 37.2 × 28.7 |
| 3.5 | 85.0 | Vertical | 15.7 × 13.9 |
| 3.5 | 85.0 | Horizontal | 15.7 × 13.9 |

(Adapted from Kramer, 2002)
The DoD’s DMSP and the NOAA POES convergence is taking place in two
phases:
During the first phase, which began in May 1998, all DMSP satellite
operational command and control functions of Air Force Space
Command (AFSPC) were transferred to a triagency integrated
program office (IPO) established within NOAA. NOAA was given
the sole responsibility of operating both satellite programs, POES
and DMSP (from the National Environmental Satellite, Data, and
Information Service [NESDIS] in Suitland, MD).
During the second phase, the IPO will launch and operate the new
NPOESS satellites that will satisfy the requirements of both the DOD
and the Department of Commerce (of which NOAA is a part) from
about the end of the present decade.
EUMETSAT, the European meteorological satellite data service provider,
has had a long-standing geostationary spacecraft program (see below) and
has been planning a polar-orbiting satellite series since the mid 1980s. The
EUMETSAT Polar System (EPS) consists of the European Space Agency
(ESA)–developed Meteorological Operational (MetOp) series of spacecraft
and an associated ground segment for meteorological and climate monitoring from polar, low-Earth orbits. Since the early 1990s, NOAA and EUMETSAT have been planning a cooperation over polar-orbiting meteorological
satellites. The basic intention is to join the space segment of the emerging
MetOp program of EUMETSAT with the existing POES program of NOAA
into a fully coordinated service, thus sharing the costs. The plans came to a
common baseline and agreement, referred to as the Initial Joint Polar System
(IJPS), in 1998. IJPS comprises two series of independent, but fully coordinated, polar satellite systems, namely POES and MetOp, to provide for the
continuous and timely collection and exchange of environmental data from
space. EUMETSAT plans to include its satellites MetOp-1, MetOp-2, and
MetOp-3 for the morning orbit, while NOAA is starting with its NOAA-N
and NOAA-N′ spacecraft for the afternoon orbit of the coordinated system.
The MetOp program, as successor to the NOAA POES morning series, is
required to provide a continuous direct broadcast of its meteorological data
to the worldwide user community, so that any ground station in any part of
the world can receive local data when the satellite passes over that receiving
station. This implies continued long-term provision of the HRPT and VHF
downlink services.
The Feng-Yun (“Feng Yun” means “wind and cloud”) meteorological satellite program of the People’s Republic of China includes both polar-orbiting
and geostationary spacecraft. The Feng-Yun-1 series are polar-orbiting spacecraft, the first of which were launched in 1988, 1990, and 1999. Further
information on these spacecraft is given by Kramer (2002).
3.2.2
Geostationary Meteorological Satellites
The network of geostationary meteorological spacecraft consists of individual
spacecraft that have been built, launched, and operated by a number of different countries; these spacecraft are placed at intervals of about 60° or 70°
around the equator. Given the horizon that can be seen from the geostationary
height, this gives global coverage of the Earth with the exception of the polar
regions (see Figure 3.3). The objective is to provide the nearly continuous,
repetitive observations needed to predict, detect, and track severe weather.
This series of spacecraft is coordinated by the Co-ordination Group for Meteorological Satellites (CGMS). These spacecraft carry scanners that operate in
the visible and infrared parts of the spectrum. They observe and measure
cloud cover, surface conditions, snow and ice cover, surface temperatures, and
the vertical distributions of pressure and humidity in the atmosphere. Images
are transmitted by each spacecraft at 30-minute intervals, though from the
very latest spacecraft, e.g. MSG (Meteosat Second Generation) and the NOAA-GOES Third Generation, images are transmitted every 15 minutes.
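The polar gap follows from simple geometry: from the geostationary radius r, a satellite sees the Earth out to a great-circle angle of arccos(R_E/r) from the sub-satellite point. A quick sketch using standard values for the Earth radius and geostationary orbit radius:

```python
import math

R_E = 6378.0     # equatorial Earth radius, km
R_GEO = 42164.0  # geostationary orbit radius, km

# Maximum great-circle angle from the sub-satellite point to the horizon.
theta = math.degrees(math.acos(R_E / R_GEO))
print(f"visible out to ~{theta:.0f} deg from the sub-satellite point")  # ~81 deg
```

An equatorial satellite therefore sees, in principle, to about latitude 81°, but the increasingly oblique view makes imagery poleward of roughly 60° to 70° of little quantitative use.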
The first geostationary meteorological satellite was NASA’s Applications
Technology Satellite-1 (ATS-1), which was launched in December 1966. The
first NOAA operational geostationary meteorological satellite, Geostationary
Operational Environmental Satellite-1 (GOES-1), was launched in 1975. The
United States has taken responsibility for providing GOES-East, which is
located over the equator at 75°W, and GOES-West, which is located over the
equator at 135°W. The first generation GOES satellites (up to GOES-7, which
was launched in 1987), carried a two-band scanner called the Visible Infrared
Spin Scan Radiometer (VISSR) (see Table 3.5); the second generation (from
GOES-8, launched in 1994, to GOES-12, launched in 2001) carried a five-band
scanner called the GOES Imager (see Table 3.5).
The Geostationary Operational Meteorological Satellite (GOMS) was
developed by the USSR. GOMS-1 was launched in 1994 and placed in a
geostationary position at 76°E, over the Indian Ocean. GOMS-1, also referred
to as Electro-1, ended operations in November 2000. Russia plans to launch
Electro-M (modified), but until that launch occurs the Russian weather service
is dependent on the services provided by EUMETSAT’S Meteosat for geostationary weather satellite data.
FIGURE 3.3
Coverage of the Earth by the international series of geostationary meteorological satellites: GMS (Japan), GOMS (USSR), METEOSAT (Europe), GOES-E (USA), and GOES-W (USA).
TABLE 3.5
Features of Commonly Used Multispectral Scanners

GOES First Generation: VISSR

| Channel | Wavelength (µm) | IFOV (km) |
|---|---|---|
| 1 | 0.55–0.72 | 0.9 |
| 2 | 10.5–12.6 | 7 |

GOES Second Generation: GOES Imager

| Channel | Wavelength (µm) | IFOV (km) |
|---|---|---|
| 1 | 0.55–0.75 | 1 |
| 2 | 3.8–4.0 | 4 |
| 3 | 6.5–7.0 | 8 |
| 4 | 10.20–11.2 | 4 |
| 5 | 11.5–12.5 | 4 |

Meteosat First Generation

| Channel | Wavelength (µm) | IFOV (km) |
|---|---|---|
| 1 | 0.4–1.1 | ~2.4 |
| 2 | 10.5–12.5 | ~5 |
| 3 | 5.7–7.1 | ~5 |

Landsat-1 to Landsat-5: MSS

| Channel* | Wavelength (µm) | IFOV (m) |
|---|---|---|
| 1 (4) | 0.5–0.6 | 80 |
| 2 (5) | 0.6–0.7 | 80 |
| 3 (6) | 0.7–0.8 | 80 |
| 4 (7) | 0.8–1.1 | 80 |

*The designation of the channels as 4, 5, 6, and 7 applied to Landsat-1, -2, and -3.

Landsat-4 to Landsat-7: Thematic Mapper and Enhanced Thematic Mapper

| Channel | Wavelength (µm) | IFOV (m) |
|---|---|---|
| Blue | 0.45–0.52 | 30 |
| Green | 0.52–0.6 | 30 |
| Red | 0.63–0.69 | 30 |
| Near-infrared | 0.76–0.9 | 30 |
| Mid-infrared | 1.55–1.75 | 30 |
| Infrared | 2.08–2.35 | 30 |
| Thermal infrared | 10.4–12.5 | 120 |
| Pan* | 0.5–0.9 | 13 m × 15 m |

*On Enhanced Thematic Mapper on Landsat-7 only.

CZCS

| Channel | Wavelength (µm) |
|---|---|
| 1 | 0.433–0.453 |
| 2 | 0.51–0.53 |
| 3 | 0.54–0.56 |
| 4 | 0.66–0.68 |
| 5 | 0.7–0.8 |
| 6 | 10.5–12.5 |

Resolution: ~825 m

SeaWiFS

| Channel | Wavelength (µm) |
|---|---|
| 1 | 0.402–0.422 |
| 2 | 0.433–0.453 |
| 3 | 0.480–0.500 |
| 4 | 0.500–0.520 |
| 5 | 0.545–0.565 |
| 6 | 0.660–0.680 |
| 7 | 0.745–0.785 |
| 8 | 0.845–0.885 |

Resolution: 1.13 km (4.5 km in GAC mode)

SPOT Haute Resolution Visible

| Mode | SPOT-1, -2, -3: Wavelength (µm) | IFOV (m) | SPOT-4: Wavelength (µm) | IFOV (m) | SPOT-5: Wavelength (µm) | IFOV (m) |
|---|---|---|---|---|---|---|
| Multispectral | 0.5–0.59 | 20 | 0.50–0.59 | 20 | 0.50–0.59 | 10 |
| | 0.61–0.68 | 20 | 0.61–0.68 | 20 | 0.61–0.68 | 10 |
| | 0.79–0.89 | 20 | 0.78–0.89 | 20 | 0.78–0.89 | 10 |
| | | | 1.58–1.75 | 20 | 1.58–1.75 | 20 |
| Panchromatic | 0.51–0.73 | 10 | 0.48–0.71 | 10 | 0.48–0.71 | 2.5 or 5 |

SPOT-5 VEGETATION

| Channel | Wavelength (µm) | IFOV (km) |
|---|---|---|
| 1 | 0.43–0.47 | 1.15 |
| 2 | 0.61–0.68 | 1.15 |
| 3 | 0.78–0.89 | 1.15 |
| 4 | 1.58–1.75 | 1.15 |

IKONOS-2, Quickbird-2

| Channel | Wavelength (µm) | IKONOS IFOV (m) | Quickbird IFOV (m) |
|---|---|---|---|
| 1 | 0.45–0.52 | ≤4 | 2.5 |
| 2 | 0.52–0.60 | ≤4 | 2.5 |
| 3 | 0.63–0.69 | ≤4 | 2.5 |
| 4 | 0.76–0.90 | ≤4 | 2.5 |
| Panchromatic mode | 0.45–0.90 | ≤1 | 0.61 |
INSAT is a multipurpose operational series of Indian geostationary satellites employed for meteorological observations over India and the Indian
Ocean as well as for domestic telecommunications (such as nationwide direct
television broadcasting, television program distribution, meteorological data distribution). They have been launched into the position 74°E, very close to GOMS.
The first series, INSAT-1A to INSAT-1D, was launched from 1981 to 1990.
The second series started with INSAT-2A, which was launched in 1992. The
prime instrument, the VHRR, has been enhanced several times. With INSAT-2E
(launched in 1999), it provides data with 2-km spatial resolution in the visible
band and 8-km resolution in the near-infrared and thermal-infrared bands. The
INSAT-3 series commenced with the launch of INSAT-3B in 2000.
Meteosat is the European contribution to the international program of geostationary meteorological satellites. It is positioned over the Greenwich meridian and is operated by EUMETSAT. The Meteosat program was initiated by
ESA in 1972, and the launch of Meteosat-1 (a demonstration satellite) occurred
on November 23, 1977. The EUMETSAT convention was signed by 16 countries
on May 24, 1983. On January 1, 1987, responsibility for the operation of the
Meteosat spacecraft was transferred from ESA to EUMETSAT. The main instrument on board the satellite is a scanning radiometer with three spectral bands
(see Table 3.5). The third wavelength range is a little unusual; this band indicates
atmospheric water vapor content. The Meteosat spacecraft (see Figure 2.7) spins
about an axis parallel to the Earth’s axis of rotation, and this spinning provides
scanning in the E-W direction. N-S scanning is provided by a tilt mirror whose
angle of tilt is changed slightly from one scan line to the next. Meteosat is also
used for communications purposes (see Section 1.5). The Meteosat Second
Generation series, which launched its first satellite on August 28, 2002, provides considerable improvements, particularly in generating images more
frequently (every 15 minutes instead of every 30 minutes).
The Japan Meteorological Agency (JMA) and Japan's National Space
Development Agency (NASDA) also have a series of geostationary meteorological satellites, which have been located at 120°E (GMS-3) and 140°E (GMS-4 and GMS-5). Japan started its geostationary meteorological satellite program
with the launch of Geostationary Meteorological Satellite-1 (GMS-1), referred
to as Himawari-1 in Japan, on July 7, 1977. The newest entry into the ring,
Multifunctional Transport Satellite-1 (MTSAT-1), which was launched on
November 15, 1999, was planned to provide the double service of an “aeronautical mission” (providing navigation data to air-traffic control services
in the Asia Pacific region) and a “meteorological mission”; however, a launch
failure of the H-2 vehicle occurred. In the latter function, MTSAT is a successor program to the GMS series. There is a replacement satellite, MTSAT-1R, and the prime instrument of the meteorology mission on MTSAT-1R is
the Japanese Advanced Meteorological Imager (JAMI).
China joined the group of nations with geostationary meteorological satellites with the launch of FY-2A (Feng-Yun-2A) on 10 June 1997. The prime
sensor, the Stretched-Visible and Infrared Spin-Scan Radiometer (S-VISSR), is
an optomechanical system, providing observations in three bands (at resolutions of 1.25 km in the visible and 5 km in the infrared and water vapor bands).
According to Kramer (2002), a U.S. commercial geostationary weather
satellite program is being developed by Astro Vision, Inc. (located at NASA’s
Stennis Space Center in Pearl River, MS). The overall objective is to launch
a series of five AVstar satellites to monitor the weather over North and South
America and provide meteorological data products to a customer base. One
goal is to produce quasilive regional imagery with a narrow-field instrument
to permit researchers to monitor quickly the formation of major weather
patterns. Far more detailed information about the various polar-orbiting and
geostationary meteorological satellites than we have space to include here
can be found in Rao et al. (1990) and Kramer (2002).
3.3
Nonmeteorological Remote Sensing Satellites
We now turn to nonmeteorological Earth-observing satellite systems. A number of different multispectral scanners are carried on these satellites and
some details of many of these are given in Table 3.5.
3.3.1
Landsat
The Landsat program began with the launch by NASA in 1972 of the first
Earth Resources Technology Satellite (ERTS-1), which was subsequently
renamed Landsat-1. Since then, the Landsat program has had a checkered
political history in the United States. The original program was continued as
a research/experimental program, with the launch of two more satellites,
Landsat-2 and Landsat-3, until 1983. The system was then declared to be
operational and was transferred to NOAA. In 1984, the Land Remote Sensing
Commercialization Act authorized a phased commercialization of remote
sensing data from the Landsat system. However, this policy was reversed with
the Land Remote Sensing Policy Act of 1992, which created a Landsat
Program Management under NASA and DoD leadership. In 1994, the
DoD withdrew from the Landsat Program Management and the (by then)
Landsat-7 program was restructured and put under joint NASA/NOAA
management, with NASA having the responsibility for the space segment
(spacecraft building and launch) and NOAA having the responsibility for the
ground segment (spacecraft operation and data distribution). The main instrument that was flown on all the early spacecraft in this program was the MSS,
an across-track scanner with four spectral bands with wavelengths given in
Table 3.5. These bands were originally labeled 4, 5, 6, and 7, although the more logical numbers 1, 2, 3, and 4 were introduced with Landsat-4. The spectral responses for the bands, normalized to a common peak, are sketched in Figure 3.4.

FIGURE 3.4
Landsat MSS wavelength bands.

The size of the IFOV, or ground resolution cell, of the Landsat MSS
is approximately 80 m × 80 m, and the width of the swath scanned on the
ground in each orbit is 185 km. The other important instrument that has been
carried on Landsat-4 and later spacecraft in the program is the thematic mapper (TM), which has six spectral bands in the visible and near-infrared wavelength ranges, with an IFOV of 30 m × 30 m, and one thermal-infrared band
with an IFOV of 120 m × 120 m. The nominal wavelength ranges of the spectral
bands of the TM are given in Table 3.5. An improved version, the enhanced
thematic mapper (ETM), was built for Landsat-6 and Landsat-7. However,
Landsat-6, which was launched in 1993, failed to achieve its orbit, and communication with the satellite was never established; it is now just another
expensive piece of space junk. Landsat-7 was finally launched on April 15,
1999. For several years, the Landsat program provided the only source of highresolution satellite-derived imagery of the surface of the Earth.
Each of the Landsat satellites was placed in a near-polar Sun-synchronous
orbit at a height of about 918 km above the surface of the Earth. Each satellite
travels in a direction slightly west of south and passes overhead at about 10.00
hours local solar time. In a single day, 14 southbound (daytime) passes occur;
northbound passes occur at night (see Figure 3.5).

FIGURE 3.5
Landsat-1, -2, and -3 orbits in 1 day. (NASA)

Because the distance
between successive paths is much greater than the swath width (see
Figure 3.6), not all of the Earth is scanned in any given day. The swath width
is 185 km and, for convenience, the data from each path of the satellite is
divided into frames or scenes corresponding to tracks on the ground of approximately 185 km in length; each of these scenes contains 2,286 scan lines, with
3,200 pixels per scan line. The orbit precesses slowly so that, on each successive
day, all the paths move slightly to the west; on the 18th day, the pattern repeats
itself exactly. Some overlap of orbits occurs, and, in northerly latitudes, this
overlap becomes quite large. After the first three satellites in the series, the
orbital pattern was changed slightly to give a repeat period of 16 days instead
of 18 days. At visible and near-infrared wavelengths, the surface of the Earth
is obscured if clouds are present. Given these factors, the number of useful
Landsat passes per annum over a given area might be fewer than half a dozen.
Nonetheless, data from the MSSs on the Landsat series of satellites have been
used very extensively in a large number of remote sensing programs. As their
name suggests, the Landsat satellites were designed primarily for remote
sensing of the land, but in certain circumstances useful data are also obtained
over the sea and inland water areas.
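The orbital numbers quoted above are consistent with simple Kepler arithmetic; a sketch using standard constants for the Earth:

```python
import math

MU = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6  # mean Earth radius, m

def period_minutes(altitude_m):
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_m
    return 2.0 * math.pi * math.sqrt(a**3 / MU) / 60.0

T = period_minutes(918e3)   # Landsat-1 to -3 altitude quoted in the text
print(f"period ~ {T:.0f} min, ~ {24*60/T:.1f} orbits per day")  # ~103 min, ~14
```

A period of about 103 minutes gives close to 14 orbits per day, matching the 14 daytime passes noted above.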
FIGURE 3.6
Landsat-1, -2, and -3 orbits over a certain area on successive days. (NASA)
3.3.2
SPOT
The Système pour l’Observation de la Terre (SPOT) is a program started by
the French Space Agency (Centre National d’Etudes Spatiales, CNES) in
which Sweden and Belgium also now participate. The first spacecraft in the
series, SPOT-1, was launched in 1986 and several later spacecraft in the series
have followed. The primary instrument on the first three spacecraft in the
series is the Haute Resolution Visible (HRV), an along-track, or push-broom,
scanner with a swath width of 60 km. The HRV can operate in two modes,
a multispectral mode with three spectral bands and 20 m × 20 m IFOV or a
one-band panchromatic mode with a 10 m × 10 m IFOV (see Table 3.5).
Because the SPOT instrument is a push-broom type, it has a longer signal
integration time that serves to reduce instrumental noise. However, it also
gives rise to the need to calibrate the individual detectors across each scan
line. An important feature of the SPOT system is that it contains a mirror
that can be tilted so that the HRV instrument is not necessarily looking
vertically downward but can look sideways at an angle of up to 27°. This
serves two useful purposes. By using data from a pair of orbits looking at
the same area on the ground from two different directions, it is possible to
9255_C003.fm Page 68 Friday, February 16, 2007 11:08 PM
68
Introduction to Remote Sensing
obtain stereoscopic pairs of images; this means that SPOT data can be used
for cartographic work involving height determination. Secondly, it means
that the gathering of data can be programmed so that if some phenomenon
or event of particular interest is occurring, such as flooding, a volcanic
eruption, an earthquake, a tsunami, or an oil spillage, the direction of observation can be adjusted so that images are collected from that area from a
large number of different orbits while the interest remains live. For a system
such as the Landsat MSS or TM, which does not have such a tilting facility,
the gathering of data from a given area on the ground is totally constrained
by the pattern of orbits. An improved version of the HRV was developed
for SPOT-4, which was launched in 1998. Another instrument, named VEGETATION, was also built for SPOT-4; this is a wide-swath (2,200 km), lowresolution (about 1 km) scanner with 4 spectral bands (see Table 3.5). As its
name implies, this instrument is designed for large-scale monitoring of the
Earth’s vegetation.
3.3.3
Resurs-F and Resurs-O
The Resurs-F program of the former USSR is a series of photoreconnaissance
spacecraft with short mission lifetimes of the order of 2 to 4 weeks. The
instruments flown are multispectral film cameras, and the films are returned
to Earth at the end of the missions in small, spherical descent capsules. The
number of spectral bands is three or four, and the spatial resolution varies
from 25 to 30 m to 5 to 10 m. Several spacecraft in this series are launched
each year, according to need. Since October 1990, the data products from the
Resurs-F series have been distributed commercially by the State Center
‘Priroda’ and by various distributors in western countries.
The Resurs-O program is a program of the former USSR that is similar
in function and objectives to the Landsat series of spacecraft. The first
spacecraft in the series was launched in 1985 and several successors have
since been launched.
3.3.4
IRS
In 1988, the Indian Space Research Organization (ISRO) began launching a
series of Indian Remote Sensing Satellites (IRS). IRS-1A carried two MSSs,
the Linear Imaging Self-Scanning Sensor (LISS-I and LISS-II), the first one
having a spatial resolution of 73 m and the second one having a spatial
resolution of 36.5 m. Each instrument had four spectral bands with wavelength ranges that were similar to those of the Landsat MSS. IRS-1B, which
was similar to IRS-1A, was launched in 1991. Subsequently further spacecraft
in the series, carrying improved instruments, have since been launched. In
the early years, when the satellites had no onboard tape recorder and no
ground stations were authorized to receive direct broadcast transmissions
apart from the Indian ground station at Hyderabad, no data were available
9255_C003.fm Page 69 Friday, February 16, 2007 11:08 PM
Satellite Systems
69
except for data on the Indian subcontinent. More recently, other ground
stations have begun to receive and distribute IRS data for other parts of the
world.
3.3.5
Pioneering Oceanographic Satellites
The satellite systems that we have considered in Sections 3.3.1 to 3.3.4 were
developed primarily for the study of the land areas of the Earth’s surface. The
year 1978 was a very important year for what has now come to be called space
oceanography — the study of the oceans from space. Before 1978, the only
impact of satellite technology on oceanography was that oceanographers were
aware of the possible use of satellite thermal-infrared data from meteorological
satellites (see Section 3.2) for the determination of sea surface temperatures.
Two spacecraft, Nimbus-7 and Seasat, changed that by demonstrating conclusively the value of data from the visible, near-infrared, and microwave regions
of the electromagnetic spectrum for oceanographic work. Nimbus-7 carried the
Coastal Zone Color Scanner (CZCS), which was the first instrument to clearly
demonstrate the possible use of satellite data to study ocean color (in general
and not just in coastal waters), and Seasat demonstrated convincingly the
powerful potential of microwave instruments for studying the global oceans.
Nimbus-7 and Seasat were both launched in 1978. Seasat only lasted for about
3 months, but Nimbus-7 continued to operate for nearly 10 years.
The important instruments on Nimbus-7 were the SMMR and the CZCS.
On Seasat, the important instruments were the altimeter, scatterometer, SAR,
and SMMR. The Seasat sensors and the SMMR on Nimbus-7 were all microwave sensors; the SMMR has already been described in Section 2.5 and the
active microwave instruments, the altimeter, scatterometer, and SAR will be
described in Chapter 7. The CZCS on Nimbus-7 was an optical and infrared
MSS, which proved to be extremely important. Similar in many ways to the
Landsat MSS and to the AVHRR, the CZCS was sensitive to the range of
intensities expected in light reflected from water and its response was usually
saturated over the land. The IFOV of the CZCS was comparable with that
of the AVHRR. The CZCS had six spectral channels, including some very
narrow channels in the visible and a thermal-infrared channel (see Table 3.5).
The CZCS spectral bands in the visible region are particularly appropriate
for marine and coastal work, although one might argue that the IFOV is
rather large for near-coastal work. The frequency of coverage of the CZCS
was more like that of the AVHRR than that of the Landsat MSS, but Nimbus7 had power budget limitations and so the CZCS was only switched on for
relatively short periods that fall very far short of the full 100 or so minutes
of the complete orbit. The most immediate successor to the CZCS was the
Sea-Viewing Wide Field-of-View Sensor (SeaWiFS), an eight-channel scanner
(see Table 3.5) flown on Orbview-2 (formerly SeaStar), which was launched
in 1997. The SeaWiFS is a commercial satellite but, subject to some restrictions, data are available to researchers free of charge. GAC, LAC, and HRPT
data (in the terminology of the AVHRR) are generated.
9255_C003.fm Page 70 Friday, February 16, 2007 11:08 PM
70
Introduction to Remote Sensing
Two other important instruments that were carried on Nimbus-7 should also
be mentioned: the SBUV and the Total Ozone Mapping Spectrometer (TOMS).
Both instruments measured the ozone concentration in the atmosphere. These
measurements have been continued with SBUV/2 instruments on board the
NOAA-9, -11, -14, -16, and -17 satellites, and TOMS instruments on the Russian
Meteor-3, Earth Probe, and Japanese ADEOS satellites. The two groups of
instruments, TOMS and SBUV types, differ principally in two ways. First, the
TOMS instruments are scanning instruments and the SBUV instruments are
nadir-looking only. Secondly, the TOMS instruments measure only the total
ozone content of the atmospheric column, whereas the SBUV instruments measure both the vertical profile and the total ozone content. These instruments
have played an important role in the study of ozone depletion, both generally
and in particular in the ozone “hole” that appears in the Antarctic spring.
3.3.6
ERS
Apart from the French development of the SPOT program, Europe (in the
form of the ESA) was quite late in entering the satellite remote sensing
arena, although a number of national agencies and institutions developed
their own airborne scanner and SAR systems. The main European contributions to Earth observation have been through the Meteosat program
(see Section 3.2.2) and the two ESA Remote Sensing (ERS) satellite missions.
The ERS program originated in requirements framed in the early 1970s
and is particularly relevant to marine applications of remote sensing. Since
then, the requirements have become more refined, as has the context within
which these needs have been expressed. Early on in the mission definition
the emphasis was on commercial exploitation. But by the time the mission
configuration was finalized in the early 1980s, the emphasis had changed,
with a realization of the importance of global climate and ocean monitoring programs. More recently, the need to establish a commercial return
on the data has reappeared. The main instruments that have been carried
on both the ERS-1 and ERS-2 satellites are a set of active microwave
instruments similar to those that were flown on Seasat. These comprise
the Active Microwave Instrument (AMI) and a radar altimeter. There is,
however, an additional instrument, the Along Track Scanning Radiometer
(ATSR/M), an infrared imaging instrument with some additional microwave channels. The ATSR/M was designed for accurate sea-surface temperature determination.
The AMI is a C-band instrument capable of operating as a SAR and as a
scatterometer; it makes common use of much of the hardware in order to
reduce the payload. However, a consequence of this shared design is that it
is not possible to collect both types of data at the same time.
The radar altimeter is a Ku-band, nadir-pointing instrument that measures
the delay time of the return echoes from ocean and ice surfaces. These data
can provide information about surface elevation, significant wave heights,
and surface wind speeds (see Section 7.1).
9255_C003.fm Page 71 Friday, February 16, 2007 11:08 PM
Satellite Systems
71
The ATSR/M is a four-channel radiometer designed to provide sea surface
and cloud-top temperatures. It has a spatial resolution of 1 km, a swath width
of 500 km, and a relative temperature accuracy of about 0.1°C. It is therefore
in many ways similar to the AVHRR, but it uses a conical scanning system to
obtain two looks at the surface, at nadir and at about 55° ahead, to permit
atmospheric correction. It also incorporates a microwave sounder, a twochannel passive radiometer whose data are merged with the thermal infrared
data before being transmitted to the ground. In addition to the marine and
meteorological applications for which it was designed, many land uses have
been found (for example, vegetation and snow monitoring) as well as surface/
atmosphere flux measuring.
ERS-1 was launched in 1991. It was kept operational for about 1 year after
ERS-2 was launched in 1995, during which time they operated in tandem,
collecting data for pairs of SAR images for interferometric use (see Section
7.4). Tandem images have been used to generate interferometric SAR images
that are used for determining elevations and elevation changes (such as in
volcanic studies) and structural movements (such as in earthquake monitoring) as well as for creating digital terrain models. ERS-1 was then kept on
stand-by until March 2000, when its onboard attitude control system failed.
3.3.7
TOPEX/Poseidon
The demonstration of various oceanographic applications of data generated
by active microwave instruments flown in space was successfully performed
by the proof-of-concept Seasat satellite. However, Seasat failed after about
3 months in orbit, in 1978, and no plans were made for an immediate
successor to be built and flown in space. TOPEX/Poseidon is an altimetry
mission conducted jointly by CNES and NASA. It can be regarded, as far as
satellite radar altimetry is concerned, as the first successor to Seasat. The
mission was launched in 1992 to study the global ocean circulation from
space and was very much a part of the World Ocean Circulation Experiment.
Because TOPEX/Poseidon started life as two separate altimetry missions,
which were later combined into one, it carries two altimeters. To use a radar
altimeter on a satellite to make precise measurements of the geometry of the
surface of the oceans, the orbit of the spacecraft must be known very precisely; a laser retroreflector is therefore used for accurate positioning. The
Poseidon instrument is an experimental, light-weight, single frequency radar
altimeter operating in the Ku band, whereas the main operational instrument
is a dual-frequency Ku/C-band NASA Radar Altimeter. A microwave radiometer provides atmospheric water content data for the purpose of making
atmospheric corrections to allow for variations in the velocity of the radio
waves in the atmosphere.
3.3.8
Other Systems
Other countries have now begun to build and launch remote-sensing
(Earth-observing) satellites. For some countries, the motivation is to
9255_C003.fm Page 72 Friday, February 16, 2007 11:08 PM
72
Introduction to Remote Sensing
develop indigenous technology; for others, it is to acquire their own Earthobserving capability using established technology. There are too many of
these new systems to allow the inclusion of an exhaustive account of them
here (for full details, see Kramer [2002]). We shall just mention a few
examples. One of these was Japanese Earth Resources Satellite-1 (JERS-1),
which was launched in 1992 and carried a SAR and a visible and nearinfrared scanner. Another is the Japanese Advanced Earth Observing
Satellite (ADEOS-1), which carried the Advanced Visible and Near-Infrared
Radiometer and the Ocean Colour and Temperature Scanner but failed in
1997 after 7 months in space. Its successor, ADEOS-2, was launched in 2002.
The objective of ADEOS-2 is to acquire data to support international global
climate change research and to contribute to applications such as meteorology and providing assistance to fisheries; it is particularly dedicated to
research in water and energy cycling and carbon cycling. ADEOS-2 carries
several instruments, the Advanced Microwave Scanning Radiometer, the
Global Line Imager (GLI), the Improved Limb Atmospheric SpectrometerII (a limb-sounding instrument for monitoring high latitude stratospheric
ozone), Sea Winds (a NASA scatterometer), and a Polarization and Directionality of the Earth’s Reflectances instrument (POLDER), which measures
the polarization, directional, and spectral characteristics of the solar light
reflected by aerosols, clouds, oceans, and land surfaces.
On the world scene, there are currently two trends in instrumentation:
hyperspectral imaging systems and high spatial-resolution instruments.
Hyperspectral scanners, or imaging spectrometers, are similar to MSSs,
which were described in Section 2.3; they just have a larger number of
spectral channels. A number of hyperspectral imagers (with up to 350 narrow, usually selectable, wavebands) have been flown on aircraft (some
details are given in Kramer [2002]). However, because of technical limitations, such instruments have only recently been included as part of any
satellite’s payload. The first was the Moderate-Resolution Imaging Spectroradiometer (MODIS), which is flown on the NASA Terra spacecraft that
was launched in December 1999; MODIS has 36 spectral channels. The next
was the Medium-Resolution Imaging Spectrometer (MERIS); MERIS,
which is flown on Envisat (launched in March 2002), has 15 spectral channels. The GLI, which is carried on the Japanese satellite ADEOS-2, has 36
spectral channels.
Until very recently, the highest spatial resolution available from a civilian
satellite was that from SPOT (10 m in the panchromatic band, but reduced
to 5 m, or even 2.5 m by subtlety, for SPOT-5). Recently, however, a number
of commercial missions have been planned and launched giving spatial
resolutions down to 1 m or better. IKONOS-2, now renamed IKONOS, was
successfully launched on September 24, 1999, and became the world’s first
commercial high-resolution Earth imaging satellite. IKONOS has provided
excellent imagery at 1 m resolution (panchromatic) and 4 m (multispectral).
The Russian SPIN-2 has also been producing 1-m resolution digitized photographs since 1998. Quickbird-2, now renamed Quickbird, which was
9255_C003.fm Page 73 Friday, February 16, 2007 11:08 PM
73
Satellite Systems
launched on October 18, 2001, provides 0.6 m-resolution panchromatic
imagery and 2.5-m multispectral imagery (see Table 3.5). The detail that can
be seen in the images from these high-resolution systems is approaching the
detail that can be seen in an air photograph. Apart from use in small-area
environmental monitoring work, the images from these very-high-resolution
systems can be seen as providing competition for air photographs for cartographic work, including the use of stereoscopic data for spot-height and
contour determination.
3.4
Resolution
In discussing remote sensing systems, three important and related qualities
need to be considered:
Spectral resolution
Spatial resolution (or IFOV)
Frequency of coverage.
Each of these quantities is briefly considered in this section with particular
reference to MSSs (see Table 3.5). Many of the ideas involved apply to other
imaging systems, such as radars, and even to some nonimaging systems as
well. Spectral resolution is determined by the construction of the sensor
system itself, whereas IFOV and frequency of coverage are determined both
by the construction of the sensor system and by the conditions under which
it is flown. To some extent, there is a trade-off between spatial resolution
and frequency of coverage; good spatial resolution (that is, small IFOV) tends
to be associated with low frequency of coverage (see Table 3.6).
TABLE 3.6
Frequency of Coverage versus Spatial Resolution
System
IFOV
SPOT-5 Multispectral
10 m
SPOT-5 Panchromatic
Landsat MSS
5m
80 m
Landsat TM
NOAA AVHRR
Geostationary satellites
30m
~1 km
~1–~2.5 km
Repeat Coverage
Days variable*
Several days‡
Few hours‡
30 minutes/15 minutes
* Pointing capability complicates the situation.
‡ Exact value depends on various circumstances.
9255_C003.fm Page 74 Friday, February 16, 2007 11:08 PM
74
3.4.1
Introduction to Remote Sensing
Spectral Resolution
The ideal objective is to obtain a continuous spectrum of the radiation
received at a satellite from a given area on the ground (the IFOV). However,
until the very recent launch of one or two hyperspectral scanners into space,
all that was obtainable was integrated reflectivities over the very small
number of wavelength bands used in the scanner. For many land-based
applications of MSS data from satellites, the number of visible and nearinfrared spectral bands found on the Landsat MSS or TM is adequate. For
some coastal and marine applications, for example, in the determination of
suspended sediment loads and chlorophyll concentrations, many more spectral channels are required. Other applications, such as sandbank mapping
and sea-surface temperature determinations, do not require a multitude of
spectral channels. For sea-surface temperature determinations, only one
appropriate infrared channel is required and this, by and large, has been
available on existing scanners for many years. However, additional channels
are very valuable in correcting for or eliminating atmospheric effects. For
example, the split channel in the thermal-infrared region on the later versions
of the AVHRR enables atmospheric corrections to be made to the sea surface
temperatures derived from the data from that instrument. The SSM/I has
seven spectral channels, which eliminate atmospheric effects quite successfully. For detecting oil pollution at sea, the panchromatic band of the AVHRR
or the visible bands of Landsat MSS would seem to be adequate from the
spectral point of view.
3.4.2
Spatial Resolution
For land-based applications within large countries, such as the United States,
Canada, China, and Russia, the spatial resolution of the Landsat MSS, with
its IFOV of approximately 80 m, is adequate for many purposes. For landbased applications on a finer scale, however, the spatial resolution of the
Landsat MSS is not as good as one might like, and data from the TM on the
Landsat series and from SPOT, with spatial resolutions of 30 m and 20 m
(or 10 m) respectively, are likely to be more appropriate. The data from the
new commercial satellites (IKONOS and Quickbird), with an IFOV of 1 m
or even less, constitute serious rivals to conventional air photographs for
cartographic work.
In coastal and estuarine work, the spatial resolution of the Landsat MSS
or TM is adequate for many purposes. The spatial resolution of other instruments is not; even a quite wide estuary is quickly crossed within half a
dozen, or fewer, pixels for AVHRR, CZCS, or SeaWiFS.
For oceanographic work, the spatial resolution of the AVHRR, CZCS,
SeaWiFS or the scanners on geostationary satellites is generally adequate.
The IFOV of the AVHRR or the CZCS is of the order of 1 km2. For the first
generation Meteosat radiometer, the spatial resolution is considerably poorer
because the satellite is very much higher above the surface of the Earth; the
9255_C003.fm Page 75 Friday, February 16, 2007 11:08 PM
Satellite Systems
75
IFOV is about 5 km × 5 km for the thermal-infrared channel of Meteosat. At
the other extreme, the IFOV of a thermal-infrared scanner flown in a light
aircraft at a rather low altitude may be only 1 m2. Aerial surveys using such
scanners are now widely used to monitor heat losses from the roofs of large
buildings. In areas of open ocean, the spatial resolution of the AVHRR, CZCS,
or Meteosat radiometer is perfectly adequate. It provides the oceanographer
with synoptic maps of sea-surface temperatures over enormous areas that
could not be obtained on such a scale in any other way before the advent of
remote sensing satellites. Such maps are also beginning to find uses in marine
exploitation and management, for example, in locating fish and in marine
and weather forecast modelling.
3.4.3
Frequency of Coverage
For simple cross-track or push-broom scanners, there is a fairly simple tradeoff between spatial resolution (or IFOV) and frequency of coverage. At a
given stage in the development of the technology, the constraints imposed
by the sensor design, the onboard electronics, and the data link to the ground
limit the total amount of data that can be obtained. Thus, the smaller the
IFOV, the more data there are to be handled for any given area on the ground
and the less frequently data will be available for a given area (see Table 3.6).
However, the situation becomes more complicated when the scanner has a
tilting mirror included in its design, as is the case for the SPOT HRV, for
example (see Section 3.3.2). But it is not just instrument specifications and
orbit considerations that limit the frequency of coverage; platform power
requirements and the actual reception and recovery of data must also be
considered. As previously mentioned, because the CZCS required too much
power to be left switched on for a complete orbit, the instrument needed to
be switched on to obtain data for a particular area. The SAR flown on Seasat
had a similar problem associated with power requirements.
Another example of a feature that limits the frequency of coverage arises
in the case of the AMI on ERS-1 and -2. This instrument functioned as both
a SAR and a scatterometer, but not both at the same time. It was commonly,
but not always, operated as a SAR over land and as a scatterometer over the
sea. Frequency of coverage may also be restricted if a spacecraft has no
onboard recording facility or if the onboard recording facility cannot hold all
the data from a complete orbit. Thus with the AVHRR, for example, the 1-km
resolution data can be recorded on board and downlinked (dumped) at one
of NOAA’s receiving stations in the United States. However, only data from
about 10 minutes of acquisition time per orbit (of about 100 minutes) can be
stored. Mission control determines the part of the orbit from which the data
will be recorded. Because the data are also transmitted live at the time of
acquisition (then described as HRPT data), they can be recovered if the satellite is within range of a direct readout ground station, of which there are
now a large number for the AVHRR all around the world. Some parts of an
orbit may, however, still be out of range of any ground station. Thus NOAA
9255_C003.fm Page 76 Friday, February 16, 2007 11:08 PM
76
Introduction to Remote Sensing
has data coverage of the whole Earth, but not complete coverage from each
orbit. A direct readout ground station may have complete coverage from all
orbits passing over it, but its collection is restricted to the area that is scanned
while the satellite is not out of sight or too low on the horizon. Thus no facility
is able to gather directly all the full 1-km resolution data from all the complete
orbits of the spacecraft. On the other hand, the degraded lower resolution
GAC AVHRR data from each complete orbit can be recorded on board and
downlinked at one of NOAA’s own ground stations.
9255_C004.fm Page 77 Tuesday, February 27, 2007 12:35 PM
4
Data Reception, Archiving,
and Distribution
4.1
Introduction
The philosophy behind the gathering of remote sensing data is rather
different in the case of satellite data than for aircraft data. Aircraft data are
usually gathered in a campaign that is commissioned by, or on behalf of, a
particular user and is carried out in a predetermined area. They are also
usually gathered for a particular purpose, such as making maps or monitoring
some given natural resource. The instruments, wavelengths, and spatial
resolutions used are chosen to suit the purpose for which the data are to be
gathered. The owner of the remotely sensed data may or may not decide to
make the data more generally available.
The philosophy behind the supply and use of satellite remote sensing data,
on the other hand, is rather different, and the data, at least in the early days,
were often gathered on a speculative basis. The organization or agency that
is involved in launching a satellite, controlling the satellite in orbit, and
recovering the data gathered by the satellite is not necessarily the main user
of the data and is unlikely to be operating the satellite system on behalf of
a single user. It has been common practice not to collect data only from the
areas on the ground for which a known customer for the data exists. Rather,
data have been collected over enormous areas and archived for subsequent
supply when users later identify their requirements. A satellite system is
usually established and operated by an agency of a single country or by an
agency involving collaboration among the governments of a number of
countries. In addition to actually building the hardware of the satellite systems and collecting the remotely sensed data, there is the task of archiving
and disseminating the data and, in many cases, of convincing the potential
end-user community of the relevance and importance of the data to their
particular needs.
The approach to the reception, archiving, and distribution of satellite data
has changed very significantly between the launch of the first weather satellite in 1960 and the present time. These changes have been a result of huge
77
9255_C004.fm Page 78 Tuesday, February 27, 2007 12:35 PM
78
Introduction to Remote Sensing
advances in technology and an enormous growth in the community wishing
to make use of satellite data. The main technological advances have been
the increase of computing power, the development of much higher density
electronic data storage media, and the development of telecommunications
and the Internet. On the user side, an enormously greater awareness of the
uses and potential uses of satellite data in a wide range of different contexts
now exists. There is also now a wide appreciation of the role of satellite data
for integration with other data into geographic information systems (GISs).
To illustrate what is involved in the reception and archiving of satellite data,
we shall describe the principles involved in the reception of data from one
particular series of polar-orbiting weather satellites, the Television InfraRed
Observation Satellite (TIROS)–N/National Oceanographic and Atmospheric
Administration (NOAA) series.
4.2
Data Reception from the TIROS-N/NOAA
Series of Satellites
We have chosen the TIROS-N/NOAA series of satellites as an example not
only because they are relatively simple and illustrate the main principles
involved in the reception of data from polar-orbiting satellites, but also
because receiving stations for the data from these satellites are now quite
common and are widely distributed throughout the world. Starting with an
NOAA Advanced Very High Resolution Radiometer (AVHRR) receiving
system, many ground stations have later been enhanced to receive data from
other polar-orbiting satellites.
The problem of recovering from the surface of the Earth the data generated
by a remote sensing system, such as those described in Chapter 3, is a
problem in telecommunications. The output signal from an instrument, or a
number of instruments, on board a spacecraft is superimposed on a carrier
wave and this carrier wave, at radiofrequency, is transmitted back to Earth.
In the case of the TIROS-N/NOAA series of satellites, the instruments
include:
•
•
•
•
•
AVHRR
High-Resolution Infrared Radiation Sounder (HIRS/2)
Stratospheric Sounding Unit (SSU)
Microwave Sounding Unit (MSU)
Space Environment Monitor (SEM)
• Argos data collection and platform location system.
The AVHRR is a multispectral scanner (MSS) that generates images of
enormous areas at a spatial resolution of about 1 km (see Chapter 3).
9255_C004.fm Page 79 Tuesday, February 27, 2007 12:35 PM
79
Spacecraft telemetry
and low bit rate
Instrument data
8.32 kbs
HIRS/2 2880 bps
SSU
480 bps
MSU
320 bps
SEM
160 bps
DCS
720 bps
Spacecraft
& instrument
telemetry
TIROS
information
processor
(TIP)
Switching unit
Low data rate instruments
Data Reception, Archiving, and Distribution
Manipulated
information
rate processor
(MIRP)
VHF beacon
DSB data split-phase
linear polarization
136.77/137.77 MHz
Real-time
HRPT data
1698.0 MHz
1707.0 MHz
Split-phase
Right-hand circular
HRPT
0.66 Mbs
AVHRR
Mbs : Megabits per second
kbs : Kilobits per second
APT analogue data
APT transmitter
137.50/137.62 MHz
FIGURE 4.1
TIROS-N instrumentation. (NOAA.)
Consequently, it generates data at a high rate, namely 665,400 bps or 0.6654
Mbs. All the other instruments produce much smaller quantities of data. The
HIRS/2, SSU, and MSU are known collectively as the TIROS Operational
Vertical Sounder (TOVS), or in later, upgraded versions of the series, the
Advanced TIROS Operational Vertical Sounder (ATOVS). They are used for
atmospheric sounding (to determine the profiles of pressure, temperature, and
humidity and the total ozone concentration in the atmosphere). The SEM
measures solar proton, alpha particle, and electron flux density; the energy
spectrum; and the total particulate energy disposition at the altitude of the
satellite. The Argos data collection system has already been mentioned in
Section 1.5.2. These five instruments generate very small quantities of data in
comparison with the AVHRR (see Figure 4.1) — the data rates range from
2,880 bps to 160 bps, compared with 665,400 bps for the AVHRR.
The TIROS-N/NOAA series of satellites are operated with three separate
transmissions: the Automatic Picture Transmission (APT), the High-Resolution
Picture Transmission (HRPT), and the Direct Sounder Broadcast (DSB).
Figure 4.1 identifies the frequencies used and attempts to indicate the data
included in each transmission. The HRPT is an S-band transmission at 1698.0
or 1707.0 MHz and includes data from all the instruments and the spacecraft
housekeeping data. For the APT transmission, a degraded version of the
AVHRR data is produced, consisting of data from only two of the five
spectral bands and the ground resolution (instantaneous field of view) is
degraded from about 1 km to about 4 km. Although the received picture
9255_C004.fm Page 80 Tuesday, February 27, 2007 12:35 PM
80
Introduction to Remote Sensing
from the APT is of poorer quality than the full-resolution picture obtained
with the HRPT, the APT transmission can be received with simpler equipment than what is required for the HRPT. (For more information on the APT,
see Summers [1989]; Cracknell [1997] and the references cited therein). The
DSB transmission contains only the data from the low data–rate instruments
and does not even include a degraded form of the AVHRR data.
Although the higher-frequency transmissions contain more data, there is
a price to be paid in the sense that both the data-reception equipment and
the data-handling equipment need to be more complicated and are, therefore, more expensive. For example, receiving the S-band HRPT transmission
requires a large and steerable reflector/antenna system instead of just a
simple fixed antenna (i.e., a metal rod or a piece of wire). Typically, the
diameter of the reflector, or “dish”, for a NOAA receiving station is between
1 and 2 m. In addition to having the machinery to move the antenna, one
also needs to have quite accurate information about the orbits of the spacecraft so that the antenna assembly can be pointed in the right direction to
receive transmissions as the satellite comes up over the horizon. Thereafter,
the assembly must be moved so that it continues to point at the satellite as
it passes across the sky. The other important consequence of having a high
data-rate is that more complicated and more expensive equipment are
needed to accept and store the data while the satellite is passing over.
For the TIROS-N/NOAA series of satellites, the details of the transmission
are published. The formats used for arranging the data in these transmissions
and the calibration procedure for the instruments, as well as the values of
the necessary parameters, are also published (Kidwell, 1998). Anyone is free
to set up the necessary receiving equipment to recover the data and then
use them. Indeed, NOAA has for a long time adopted a policy of positively
encouraging the establishment of local receiving facilities for the data from
this series of satellites. A description of the equipment required to receive
HRPT and to extract and archive the data is given by Baylis (1981, 1983)
based on the experience of the facility established a long time ago at Dundee
University (see Figure 4.2). In addition, one can now buy “off-the-shelf”
systems for the reception of satellite data from various commercial suppliers.
It should be appreciated that one can only receive radio transmissions from
a satellite while that satellite is above the horizon as seen from the position
of the ground reception facility. Thus, for the TIROS-N/NOAA series of
satellites, the area of the surface of the Earth for which AVHRR data can be
received by one typical data reception station, namely that of the French
Meteorological Service at Lannion in Northwest France, is shown in
Figure 4.3. For a geostationary satellite, the corresponding area is very much
larger because the satellite is much farther away from the surface of the Earth
(see Figure 1.6). Thus, although one can set up a receiving station to receive
direct readout data, if one wishes to obtain data from an area beyond the
horizon — or to obtain historical data — one has to adopt another approach.
One may try to obtain the data from a reception facility for which the target
area is within range. Alternatively, one may be able to obtain the data via
9255_C004.fm Page 81 Tuesday, February 27, 2007 12:35 PM
81
Data Reception, Archiving, and Distribution
Front end
High density
recorder
Antenna
Bit
conditioner
Receiver
Frame
Synchronizer
Computer and
image processor
Mounting
Decommutator
C.C.T.
Tracking
control
Video
processor
Hard copy
FIGURE 4.2
Block diagram of a receiving station for AVHRR data. (Baylis, 1981.)
60
°
80°
60°
70°
60°
40°
40°
50°
Lannion
20°
20°
0°
40°
30°
20°
FIGURE 4.3
Lannion, France, NOAA polar-orbiting satellites data acquisition zone.
9255_C004.fm Page 82 Tuesday, February 27, 2007 12:35 PM
82
Introduction to Remote Sensing
the reception and distribution facilities provided by the body responsible for
the operation of the satellite system in question for historical data and data
from areas beyond the horizon. In the case of the TIROS-N/NOAA series
of satellites, these satellites carry tape recorders on board and so NOAA is
able to acquire imagery from all over the world. In addition to the real-time,
or direct-readout, transmissions that have just been described, some of the
data obtained in each orbit are tape recorded on board the satellite and
played back while the satellite is within sight of one of NOAA’s own ground
stations (either at Wallops Island, VA, or Gilmore Creek, AK). In this way, it
is only possible to recover a small fraction (about 10%) of all the data obtained
in an orbit. The scheduling and playback are controlled from the NOAA
control room (see Needham, 1983). The data are then archived and distributed in response to requests from users. In a similar way, each Landsat
satellite carries tape recorders that allow global coverage of data; the data
are held by the EROS (Earth Resources Observation and Science) Data Center.
Governmental and intergovernmental space agencies that have launched
remote sensing satellites, such as the National Aeronautics and Space Administration in the United States and the European Space Agency in Europe and
many others around the world, have also established receiving stations, both
for receiving data from their own satellites and from other satellites.
4.3
Data Reception from Other Remote Sensing Satellites
The radio signals transmitted from a remote sensing satellite can, in principle, be received not just by the owner of the spacecraft but by anyone
who has the appropriate receiving equipment and the necessary technical
information. The data transmitted from civilian remote sensing satellites
are not usually encrypted, and the technical information on transmission
frequencies and signal formats is usually available. In the case of the
TIROS-N/NOAA series of satellites, as we have seen already, the necessary
technical information on the transmission and on the formatting and calibration of the data is readily available and there are no restrictions on the
reception, distribution, and use of the data. However, one should not
assume that there are no restrictions on the reception, distribution, and use
of data from all remote sensing satellites, or even from all civilian remote
sensing satellites. For example, the situations with regard to Landsat and
SPOT are quite different from that for the TIROS-N/NOAA series. The
receiving hardware for these systems needs to be more sophisticated
because the data rate is higher than for the meteorological satellites; moreover, to operate a facility for the reception and distribution of Landsat or
SPOT data, one must pay a license fee. Landsat ground receiving stations
are established in various parts of the world, including the United States,
Canada, Europe, Argentina, Australia, Brazil, China, Ecuador, India, Indonesia,
9255_C004.fm Page 83 Tuesday, February 27, 2007 12:35 PM
Data Reception, Archiving, and Distribution
83
Japan, Malaysia, Pakistan, Saudi Arabia, Singapore, South Africa, Taiwan,
and Thailand. Several others are also planned (see Figure 1.4)
Various other polar-orbiting remote sensing satellite systems launched and
operated by a variety of organizations in various countries have been
described in Chapter 3. Each of these organizations has made its own
arrangements for the reception of the data from its spacecraft, either using
the organization’s own ground stations or by negotiating agreements with
other ground station operators around the world.
The reception facility for data from a geostationary meteorological satellite
differs from that for data from a polar-orbiting meteorological satellite. For
example, the reflector or dish needs to be larger because of the higher data
rate and because of the much greater distance between the spacecraft and
the receiver. Typically, the diameter of the reflector is 5 to 6 m or even larger.
However, the antenna system does not need to be steerable to follow a
spacecraft’s overpass because the satellite is nominally in a fixed position.
It does, however, need to be adjustable to allow it to follow the spacecraft’s
position as it drifts slowly around its nominal fixed position. Whereas data
from a polar-orbiting spacecraft can only be received occasionally, when the
spacecraft is above the horizon of the ground station, the data from a geostationary meteorological satellite can be received all the time. Allowing for
the actual duration of the scan, images have traditionally been received every
30 minutes, but the newest systems (such as Meteosat second generation
and the Geostationary Operations Environmental Satellite [GOES] third generation) acquire and transmit images every 15 minutes.
4.4
Archiving and Distribution
Over the years since 1960, ground stations for the reception of remote sensing
data have proliferated and they have become more sophisticated. However,
the basic principles on the reception side have remained much the same.
When it comes to archiving and distributing data, the changes have been
much more radical. In the early days, the philosophy of archiving and
distribution was to store the data in a raw state at the ground station where
it was received, immediately after it was received, and to produce a quicklook
black-and-white image in one of the spectral bands. The archive media used
were magnetic tapes, either 2400 ft (732 m) long and 1/2″ (12.7 mm) wide
holding about 5 MB of data or high-density, 1″ (25.4-mm) wide tapes. The
quicklook images could be used immediately by weather forecasters, or they
could be consulted afterward by research workers involved in a whole
variety of environmental studies. On the basis of the results of a search
through the quicklook images, a scientist could order other photographic
products or digital data (probably on 1/2″-wide magnetic tape); to produce
these, the data would be recovered from the archive and processed to generate the required product. Although the archived data proved to be very
9255_C004.fm Page 84 Tuesday, February 27, 2007 12:35 PM
84
Introduction to Remote Sensing
valuable for research work in meteorology, oceanography, and land-based
studies, the use of the data was in most cases not systematic and, most likely,
a considerable amount of potentially useful information has never been
recovered from the archives. Keeping archived data is important because
they might contain evidence of changes in climate and other factors.
We mentioned at the beginning of this chapter that developments in data
storage media, computing power, and telecommunications, as well as the
development of the Internet, have caused big changes in the archiving and
distribution of satellite data since 1960. We shall now consider these developments in more detail. First, there is the question of data storage. As previously mentioned, a half-inch magnetic tape can hold about 5 MB of data.
A CD-ROM, on the other hand, can hold about 500 MB of data. Switching
from magnetic tapes to CD-ROMs for archive data storage has led to big
savings in storage space and much easier handling, too. In addition, magnetic tapes deteriorate after some years, making them unreadable. Therefore,
in addition to switching to CD-ROMs for the storage of new data, many
long-established ground receiving stations have undertaken programs to
transfer their archived data from magnetic tapes to CD-ROMs. Of course,
no one really knows the lifetime of a CD-ROM as a data storage medium.
The second big change that has occurred since the early 1960s is in computing power. Originally, a ground station would archive raw data or data
to which only a small amount of processing had been applied. The massive
increases in computing power have meant that it is now feasible for a ground
station to apply various amounts of processing routinely to all the new data
as they arrive and to store not only the raw data but also data to which
various levels of processing have been applied. The processed data, or information extracted from those data, can then be supplied to customers or users.
Thus, all the data may be geometrically rectified, i.e. presented in a standard
map projection, or any of several geophysical quantities (such as sea surface
temperature, or vegetation index for example, see sections 10.4 and 10.5)
may be calculated routinely on a pixel-by-pixel basis from that data. The
supply of processed data saves customers and users of the data from having
to process the data themselves. As much information as possible must be
extracted from the data and, in many cases, the information should be
distributed in as close as possible to real time. We can also expect to see an
expansion in the generation of data or information for incorporation into
GISs. Although a few users of satellite data may require raw data for their
research and raw data must remain available, what most users of satellite
data want is not raw data but environmental information or the values of
geophysical quantities. The organization that has gone farther than any other
along the road of providing information or products, rather than raw data,
is NOAA’s National Environmental Satellite, Data, and Information Service
(NESDIS). NESDIS operates the Comprehensive Large Array-Data Stewardship System (CLASS), which is an electronic library of NOAA environmental
data, (http://www.class.noaa.gov). CLASS is NOAA’s premiere online facility
9255_C004.fm Page 85 Tuesday, February 27, 2007 12:35 PM
Data Reception, Archiving, and Distribution
85
for the distribution of NOAA and U.S. Department of Defense Polar-Orbiting
Operational Environmental Satellite (POES) data, NOAA’s GOES data, and
derived data.
The meteorological remote sensing satellites are by far the most successful
of all the various remote sensing satellite systems. They are fully operational
in their own field of applications (i.e., meteorology), but the data that they
(particularly the polar-orbiting satellites) generate have a very wide and
highly successful range of nonmeteorological applications (see Cracknell
[1997]). Thus, whereas 20 years ago, NESDIS archived and distributed raw
data and images simply generated from that data, it now produces and
distributes a very wide range of products, mostly from AVHRR data but
also, in some cases, from Defense Meteorological Satellite Program and
GOES data as well (see Table 4.1). There is, however, still a role for direct
readout stations that can receive all the data generated by satellite passes
over their own reception areas.
The greatest change of all over the last 40 years or so has been in communications. In the early days, if one wanted to examine the quicklooks, one
had either to go in person to inspect a ground station’s quicklook archive
or one had to order photographic hard copies of the quicklooks and wait
for them to be delivered by mail. The required photographic products or
computer tapes could then be ordered and they would be generated and
then dispatched by mail. Thus, any application that depended on near-realtime access to the data was faced with considerable logistical difficulties.
This situation has changed as a result of the Internet. A ground station can
mount its quicklooks on its website almost as soon as the data are received
from the satellite. A user or customer, in principle from anywhere in the
world, can consult the quicklooks and then access or order the data online.
Many ground stations then supply the data quickly over the Internet. This
change has allowed satellite data to be used for a whole range of applications of rapidly changing, dynamic situations that were previously theoretically possible but were logistically impossible. These include, for
example, the monitoring of such events as floods, oil spills, smoke and ash
clouds from volcanic eruptions, and hurricane, tsunami and earthquake
damage.
So, if a person needs some satellite data for a particular piece of work,
how does he or she go about obtaining it? The first thing to do is to decide
which satellite system can be expected to provide suitable data for the
project. This decision depends on many factors, including the nature of the
data (active or passive); wavelength range of the radiation used; spatial,
spectral, and temporal resolution of the data; and cost of the data. Once the
most suitable system has been chosen, the next step is to identify the source
of distribution of the data. In the first edition of this book, we provided a
list of sources of satellite data; however, doing so in this edition is no longer
feasible because the number of satellite systems has greatly proliferated
and communications technology has changed out of all recognition, especially
9255_C004.fm Page 86 Tuesday, February 27, 2007 12:35 PM
86
Introduction to Remote Sensing
TABLE 4.1
NOAA NESDIS Earth Observation Products
Atmosphere products
• National Climatic Data Center satellite resources
• Aerosol products
• Precipitation
• North America Imagery
• Satellite Precipitation Estimates and Graphics
• Satellite Services Division (SSD) Precipitation Product Overview
• Operational Significant Event Imagery (OSEI) Flood Events
• Tropics
• GOES Imagery (Atlantic; East Pacific)
• Defense Meteorological Satellite Program (DMSP)
• SSD Tropical Product Overview
• DMSP Tropical Cyclone Products
• NOAA Hurricanes
• Winds
• High Density Satellite Derived Winds
• CoastWatch Ocean Surface Winds
Land products
• OSEI Imagery: Dust Storms; Flood Events; Severe Weather Events; Storm Systems
Events; Unique Imagery
• Fire
• OSEI Fire Images Sectors (Northwest; West; Southwest; Southeast)
• GOES and POES Imagery (Southwestern U.S.; Northwestern U.S.; Florida)
• Hazard Mapping System Fire and Smoke Product
• Web Based GIS Fire Analysis
• Archive of Available Fire Products
• SSD Fire Product Overview
• NOAA Fire Weather Information Center
• Geology and Climatology
• Bathymetry, Topography, and Relief
• Geomagnetism
• Ecosystems
• Interactive Map
• National Geophysical Data Center (NGDC) Paleoclimatology
• NGDC Terrestrial Geophysics
• Snow and Ice
• OSEI Snow Images
• OSEI Ice Images
• SSD Snow and Ice Product Overview
• National Ice Center (Icebergs)
• Volcanic Ash
• Imagery (Tungurahua; Colima; St. Helens)
• Washington Volcanic Ash Advisory Center
• NGDC Volcano Data
• SSD Volcano Product Overview
• NGDC Natural Hazards Overview
Ocean Products
• Laboratory for Satellite Altimetry
• Sea Floor Topography
9255_C004.fm Page 87 Tuesday, February 27, 2007 12:35 PM
Data Reception, Archiving, and Distribution
87
TABLE 4.1 (Continued)
NOAA NESDIS Earth Observation Products
•
•
•
•
•
•
•
Ocean Surface Current Analyses
Marine Geology and Geophysics
National Ice Center (Icebergs)
National Oceanographic Data Center (NODC)
NODC Satellite Oceanography
Coral Reef Bleaching
CoastWatch (Main)
• Program and Products
• Collaborative Products
• Sea Surface Temperature (SST)
• Ocean Color
• Ocean Surface Winds
• Sea Surface Temperatures
• Operational “Daily” SST Anomaly Charts
• Current “Daily” SST Anomaly Charts
• CoastWatch SST
• Office of Satellite Data Processing & Distribution SST Imagery
(Source:
http://www.nesdis.noaa.gov/sat-products.html)
with the development of the Internet. The best way to find a data source
for any chosen satellite system is to search on the Internet using a powerful
search engine, such as Google (http://www.google.com/ ), and to use appropriate key words. Once the person has found the website of the source, he
or she should follow the instructions for acquiring the needed data.
9255_C004.fm Page 88 Tuesday, February 27, 2007 12:35 PM
9255_C005.fm Page 89 Wednesday, September 27, 2006 5:08 PM
5
Lasers and Airborne Remote
Sensing Systems
5.1
Introduction
As mentioned in Chapter 1, it is convenient to distinguish between active
and passive systems in remote sensing work. This chapter is concerned with
airborne remote sensing systems, most of which are active systems that
involve lasers. In any application of active, optical remote sensing (i.e.,
lasers), one of two principles applies. The first involves the use of the lidar
principle — that is, the radar principle applied in the optical region of the
electromagnetic spectrum. The second involves the study of fluorescence
spectra induced by a laser. These techniques were originally applied in a
marine context, with lidar being used for bathymetric work in rather shallow
waters and fluorosensing being used for hydrocarbon pollution monitoring.
The final section of this chapter is concerned with passive systems that use
gamma rays.
Until recently, no lasers were flown on spacecraft. However, light bounced
off a satellite from lasers situated on the ground was used to carry out
ranging measurements to enable the precise determination of the orbit of a
satellite. The use of lasers mounted on a remote sensing platform above the
surface of the Earth has, until recently, been restricted to aircraft. It is difficult
to use lasers on free-flying satellites because they require large collection
optics and extremely high power.
5.2
Early Airborne Lidar Systems
The lidar principle is very simple. A pulse of light is emitted by a laser
mounted on a platform a distance h above the surface of the Earth; the pulse
travels down and is reflected back and an electronic measurement is made of
the time taken, t, for the round trip for the pulse, covering the distance 2h.
89
9255_C005.fm Page 90 Wednesday, September 27, 2006 5:08 PM
90
Introduction to Remote Sensing
Therefore, because c, the velocity of light, is known, the height, h, can be
determined from the equation:
2h
t
(5.1)
h = 21 ct
(5.2)
c=
or
In the early days, airborne lidars could only be used for differential measurements and these found their application in bathymetric work in shallow
waters — that is, in making charts of the depth of shallow estuarine and
coastal waters.
Three early airborne laser systems — developed by the Canada Centre for
Remote Sensing (CCRS), the U.S. Environmental Protection Agency (EPA),
and the National Aeronautics and Space Administration (NASA) — are
described in general terms by O’Neil et al. (1981). The system developed by
the CCRS was primarily intended for the monitoring of oil pollution and was
backed by a considerable amount of work on laboratory studies of the fluorescence spectra of oils (O’Neil et al., 1980; Zwick et al., 1981). After funding
cuts, the system developed by the CCRS was taken over by the Emergencies
Science Division of Environment Canada. In the present generation of the
system, which is known as the Laser Environmental Airborne Fluorosensor
(LEAF), laser-induced 64 spectral channel fluorescence data are collected at
100 Hz. The LEAF system is normally operated at altitudes between 100 and
166 m and at ground speeds of 100 to 140 knots (about 51 to 77 ms–1). The
LEAF is a nadir-looking sensor that has a footprint of 0.1 m by 0.3 m at 100 m
altitude. At the 100 Hz sampling rate, a new sample is collected approximately
every 60 cm along the flight path. The data are processed on board the aircraft
in real time, and the observed fluorescence spectrum is compared with standard reference fluorescence spectra for light refined, crude, and heavy refined
classes of oil and a standard water reference spectrum, all of which are stored
in the LEAF data analysis computer. When the value of the correlation
coefficient between the observed spectrum and the spectrum of a class of
petroleum product is above a certain threshold, and is greater than the correlation with the water spectrum, the observed spectrum is identified as being
of that class of petroleum. The next generation laser fluorosensor to follow
LEAF, which is known as the Scanning Laser Environmental Airborne Fluorosensor (SLEAF), will be enhanced in various ways (Brown et al., 1997).
The EPA system was developed primarily for the purpose of water-quality
monitoring involving the study of chlorophyll and dissolved organic carbon
(Bristow and Nielsen, 1981; Bristow et al., 1981). The first version of the
Airborne Oceanographic Laser (AOL) was built in 1977, to allow investigation of the potential for an airborne laser sensor in the areas of altimetry,
hydrography, and fluorosensing. NASA has operated the AOL since 1977 and,
during this period, the instrument has undergone considerable modifications,
including several major redesigns. It has remained a state-of-the-art airborne
9255_C005.fm Page 91 Wednesday, September 27, 2006 5:08 PM
Lasers and Airborne Remote Sensing Systems
91
laser remote sensing instrument. The instrument modifications and the
results of investigations with the AOL for various marine and terrestrial
applications have been reported in numerous published papers. These
papers include work on applications in hydrography (Hoge et al., 1980), oil
film thickness measurement (Hoge and Swift, 1980, 1983), dye concentration
mapping (Hoge and Swift, 1981), overland terrain mapping (Krabill et al.,
1984), phytoplankton pigment measurement (Hoge et al., 1986), sea ice thickness estimation (Wadhams et al., 1992), and algorithm development for
satellite ocean color sensors (Hoge et al., 1987). The AOL has also been used
to measure ocean wave profiles from which wave spectral characteristics can
be derived. Airborne laser systems were first used successfully over the oceans
and only subsequently over the land. In 1994, a separate airborne lidar system,
known as the Airborne Topographic Mapper (ATM), dedicated to topographic
mapping, was developed within the NASA program to complement the AOL.
The primary use of the ATM by NASA was to map the surface elevation of the
Greenland Ice Sheet (Krabill et al., 1994) and other Arctic glaciers, in an attempt
to study the effects of global climatic change on net ice accumulation. A second
application was to measure the topography of sea ice in the central Arctic Basin
and to infer the depth distribution of the sea ice from the ice elevation measurements. The inferred ice depth distributions were compared directly with
results from upward-looking submarine ice profiles.
Important developments in the early to mid-1990s were made on two fronts.
First, airborne remote sensing laser systems were originally simply used to
observe vertically downward from the aircraft and so, as the aircraft traveled
along its path, a profile along a line directly below the aircraft’s flight path was
generated. To obtain measurements over a two-dimensional surface, it was
necessary to interpolate between adjacent profiles. Then scanning mechanisms
were introduced so that, rather than merely collecting data along a line, data
could be gathered from a strip or swath. Thus, from a set of adjacent flight
lines, a whole area could be covered. Secondly, all of the applications of airborne
laser remote sensing require a highly precise range measurement capability on
the part of the lidar and highly accurate measurement of the horizontal and
vertical location of the aircraft platform using differential Global Positioning
System (GPS) technology. In addition, the development of more precise and
stable inertial systems based on triads of accelerometers and gyroscopes has
introduced more reliability to the measurements and increased overall accuracy.
The 1990s saw the development of a number of commercial airborne laser
systems for terrestrial applications as well as for marine applications.
5.3 Lidar Bathymetry
The charting of foreshore and inshore shallow-water areas is one of the most
difficult and time-consuming aspects of conventional hydrographic surveying from boats. This is because the process requires closely packed sounding
lines and, therefore, a large amount of data collection (each sounding line
represents a sampling over a very narrow swath). In addition to the constraint of time, shallow-water surveying presents the constant danger of
surveying boats running aground.

FIGURE 5.1
Light intensity reaching a satellite: A, atmospheric haze blue scatter; B, absorption of red light; C, surface reflection of sun and haze; D, white caps; E, reflection, absorption, and scattering in water; F, diffusion of light from the bottom. (Bullard, 1983a.)
Attempts have been made to use passive multispectral scanner (MSS) data
from the Landsat series of satellites for bathymetric work in shallow waters
(Cracknell et al., 1982a; Bullard, 1983a, 1983b); however, a number of problems arise (see, for example, MacPhee et al. [1981]). These problems arise
because there are various contributions to the intensity of the light over a
water surface reaching a scanner flown on a satellite (see Figure 5.1), and
many of these contain no information about the depth of the water. The use
of MSS data for water-depth determination is based on mathematical modelling of the total radiance of all wavelengths received at the scanner minus
the unwanted components, leaving only those attributable to water depth
(see Figure 5.2). By subtracting atmospheric scattering and water-surface
glint, the remaining part of the received radiance is due to what can be called
“water-leaving radiance.” This water-leaving radiance arises from diffuse
reflection at the surface and from radiation that has emerged after traveling
from the surface to the bottom and back again; the contribution of the latter
component depends on the water absorption, the bottom reflectivity, and the
water depth.

FIGURE 5.2
Depth of water penetration represented by a grey scale: the sea bed is visible (light to dark grey on the image) down to the maximum penetration depth, beyond which it is not visible (dark grey). (Bullard, 1983a.)

The feasibility of extracting a measured value of the depth
depends accordingly on being able to separate these factors, which present
serious problems. In addition to the limitation on the depth to which the
technique can be used, the horizontal spatial resolution of the MSS on the
Landsat series of satellites is rather poor for bathymetric work in shallow
waters; the situation is slightly better for the Thematic Mapper and the
Système pour l’Observation de la Terre (SPOT). The problem of spatial
resolution, as well as that of atmospheric correction, can be reduced by using
scanners flown on aircraft instead of satellites but, even so, it seems unlikely
that sufficient accuracy for charting purposes will often be obtainable. A
much more successful system is possible with an airborne lidar.
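Before turning to lidar, the passive approach just described can be made concrete. After subtraction of the deep-water signal (which removes most of the atmospheric and surface-glint terms), a log-linear relation between the residual radiance and the depth is often assumed; a minimal sketch follows, with hypothetical coefficients that would in reality be fitted to soundings of known depth:

    import numpy as np

    def depth_from_radiance(L_obs, L_deep, a=0.5, b=-2.0):
        # Illustrative log-linear depth estimate from a single band:
        # the bottom-reflected signal decays roughly exponentially
        # with depth, so depth is approximately linear in the log of
        # the deep-water-corrected radiance.  a and b are hypothetical
        # calibration coefficients; the model is valid only where
        # L_obs exceeds L_deep.
        return a + b * np.log(np.asarray(L_obs) - L_deep)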
A method for carrying out bathymetric surveys involving conventional
aerial color photography in association with a laser system was developed
by the Canadian Hydrographic Service in cooperation with the CCRS. This
development, which began in 1970, consisted of a photohydrography system
and a laser profiling system that were flown simultaneously. The photohydrography system used color photography, whereas the laser system used
a profiling laser bathymeter. The photography provided 100% bottom
coverage over a depth range of 2 to 10 m for typical seawater, as well as other
information such as shoreline, shoals, rock outcroppings, and other hazards
to navigation. The laser system utilized a single-pulsed laser transmitter and
two separate receivers, one to receive the echoes back from the surface of
the water and the bottom, the other to measure aircraft height. The laser that
was used exploited frequency doubling and transmitted short,
high-power pulses of green light (532 nm) and infrared radiation (1064 nm)
at a repetition rate of 10 Hz. Two optical/electronic receivers, one tuned to
532 nm and the other to 1064 nm, were employed to detect the reflected
pulses (see Figure 5.3 and Figure 5.4). The green light penetrated the water
rather well, whereas the infrared radiation hardly penetrated the water at all.
Echoes from the surface and from the bottom were received by the green
channel and, from these, the water depth was obtained by measuring the
difference in echo arrival times. Aircraft height information was acquired by
the infrared channel, which measured the two-way transit time of each 1064
nm pulse from the aircraft to the water surface.

FIGURE 5.3
A configuration for lidar bathymetry operation. (Muirhead and Cracknell, 1986.)

FIGURE 5.4
Principles of operation of a lidar bathymeter: green and near-infrared pulses are reflected from the water surface, and the green pulse is also reflected from the bottom; the depth is d = t × c/2, where t is the return pulse separation time and c is the velocity of light in water. (O'Neil et al., 1980.)

The lidar bathymeter was used
to provide calibration points along a line or lines so that depths could be
determined over the whole area that was imaged in the color photograph. The
need to combine color photography, essentially to interpolate between the scan
lines of a profiling laser bathymetric system, has declined following the
introduction of scanning lidar systems.
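The depth computation of Figure 5.4 reduces to a one-line calculation. A minimal sketch, taking the velocity of light in water as c0/n with a refractive index of about 1.33 (an assumed round value):

    C0 = 2.998e8        # speed of light in vacuum (m/s)
    N_WATER = 1.33      # approximate refractive index of seawater

    def water_depth(pulse_separation_s):
        # Depth from the separation time t between the surface and
        # bottom echoes of the green (532 nm) pulse: d = t * c / 2,
        # where c is the velocity of light in water.
        c_water = C0 / N_WATER
        return pulse_separation_s * c_water / 2.0

    # a 0.1 microsecond echo separation corresponds to about 11 m of water
    print(water_depth(1.0e-7))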
The flying height for a lidar bathymetry system may be as high as 1500 m,
although it generally does not exceed 350 m. The need to concentrate the
energy as much as possible is the reason for flying at such a low altitude:
once the green light beam penetrates the water, it spreads because of the
abrupt change in optical properties. The beam therefore diverges widely and
its energy is distributed over a rapidly increasing area; an empirical rule is that
the footprint diameter is equal to half the water depth. Even though the energy
distribution within the beam is Gaussian, so that the central part of the beam
carries most of the energy, the divergence introduces some indeterminacy in
the true reflection position.
Lidar bathymetry systems operate at around 1000 soundings per second,
far fewer than laser land survey systems (see Section 5.4); this is due to the
need to generate a much longer laser pulse with higher power. Bathymetric
mapping may be conducted to depths of
up to 50 m in clear water. Data are typically collected at 2–4 m resolution.
The measurable depth is a function of water clarity and will decrease with
increased water turbidity. The derived water depths are used to produce and/
or update nautical charts and to locate potential hazards to navigation, such
as rocks or sunken vessels. A very high density of depth determination is
required, because of the critical importance of locating hazards.
The U.S. Navy uses an Airborne Laser Mine Detection System (ALMDS)
to locate sea mines at or near the surface. The ALMDS offers high area search
rates and the ability to image the entire near-surface volume, unencumbered
by the inherent limitations of towing bulky sonar gear in the water and having
to stop to recover equipment.
Time of day and weather are important lidar bathymetry mission considerations. To maximize depth penetration and minimize glare from the surface,
a Sun angle relative to the horizon of between 18° and 25° is optimal (between
18° and 35° is acceptable). Some new systems operate with a circular-shaped
scan in order to maintain a constant incidence angle. A low sea state of between
0 and 1 on the Beaufort scale is essential. Some wave action is permissible,
but breaking waves are not acceptable. Cloud cover should not exceed 5%. In
many areas with high turbidity in the water, such as areas with high concentrations of suspended material, the primary problem in measuring depth with
a lidar bathymeter arises from the large amount of backscattering from the
water column, which broadens the bottom pulse and produces a high “clutter”
level in the region of the bottom peak. When such a situation arises, no
advantage can be gained by increasing the laser power or by range gating the
receiver because the effective noise level due to this scattering increases along
with the desired signal. A useful parameter for describing the performance of
the sensor is the product of the mean attenuation coefficient and the maximum
recorded depth. Navigational accuracy is important and, in the early days of
lidar bathymetric work, this was a serious problem over open areas of water
that possessed no fixed objects to assist in the identification of position. With
the advent of the GPS, this is no longer a serious problem.
5.4 Lidar for Land Surveys
As indicated in the previous section, airborne lidars were first developed for
bathymetric survey work, for which an accurate knowledge of the height of
the aircraft is not important. The method involves a differential technique, using
two pulses of different wavelength, so that the actual height of the aircraft
cancels out. The introduction of the use of airborne lidars over land areas for
ground cover and land survey work had to wait for developments that enabled
the position and orientation of the aircraft to be determined very accurately.
An airborne lidar system for land survey work is composed of three
separate technologies: a laser scanner, an Inertial Measurement Unit, and a
GPS. These components are configured together with a computer system
which ensures that the data collected are correlated with the same time stamp,
which is extremely important because all of the components require very
accurate timing (to the millisecond). The components for airborne lidar survey
technology have been available for many years. The laser was conceived in
1958 (with the first working laser demonstrated in 1960), inertial navigation
technology has been available for a long time, and GPS has
been available commercially for more than 15 years. The challenge was to
integrate all of these technology components and make them work together —
at the same time ensuring that the system is small enough for use in a light
aircraft or helicopter. This feat was only achieved commercially in the mid
1990s. The major limiting factor for the technology was the airborne GPS,
which has only recently become accurate enough to provide airborne positions
with an error of less than 10 cm.
5.4.1 Positioning and Direct Georeferencing of Laser Data
In order to be able to achieve the required positional accuracy of a lidar
survey aircraft, one must use differential GPS. This requires the use of a
ground GPS station at a known location. The ground station should be
located on or close to the project site where the aircraft is flying to ensure
that the aircraft records the same satellites’ signals as the ground station and
to minimize various other possible errors, such as those arising from inhomogeneities in the atmosphere. The trajectory of the aircraft is computed by
solving the position derived by the solutions computed using the Clear/
Acquisition (C/A) code (also called the Civilian Code or S-Code) and the
9255_C005.fm Page 97 Wednesday, September 27, 2006 5:08 PM
Lasers and Airborne Remote Sensing Systems
97
L1 and L2 carrier frequencies phase information; the trajectory is always
computed in a differential way using a master station. The use of the two
frequency measurements (on L1 and L2) makes it possible to correct for the
ionospheric delay to the radio signals. Because the C/A code is available
every second, the classic GPS solution is based on a timing of 1 second; this
means that if an aircraft moves at a velocity (typical for an acquisition
aircraft) of 120 kts (61.7 ms–1), a point position solution is available approximately every 62 m. Some more recent receivers can measure L1 and L2
carrier phases with a frequency up to 100 Hz (10 Hz is more common),
consequently increasing the number of known positions. It is obvious that
having only such a sparse set of positions is not sufficient to determine the
trajectory of the system accurately enough and using only GPS gives no
information about the attitude of the system. The integration is done using
an inertial measurement unit that provides information about the displacement and attitude of the system over time.
The most common technology used for navigation accelerometers is the
pendulous accelerometer; in this type of sensor, a proof mass with a single
degree of freedom is displaced during acceleration and a rebalance electrical
mechanism is used to maintain a null displacement. The voltage needed to
maintain this balance is proportional to the sensed acceleration.
The displacement measurements are provided through a triad of
accelerometers that measure the acceleration (including that due to gravity)
of the system; the accelerometer-sensed values are sampled (generally every
5 ms, i.e., at 200 Hz), the gravity field is estimated and subtracted, and then
the displacement is computed by double integration in time. Because of this
double integration, the error is also doubly integrated and, therefore, it
accumulates rapidly with time (quadratically for a constant bias); this gives
rise to a growing error in the final position that is called the drift. Because of
this drift, navigation based only on the double integration of signals from
accelerometers cannot be used; therefore, there is a need for ongoing research
to develop more stable inertial units. The angular position in space of the
sensor (i.e., the attitude) is computed by means of a triad of gyroscopes; gyroscopes are sensors that measure the angular velocity with respect
to inertial space. This includes not only the rotation of the laser system but
also the Earth’s angular velocity (15 degrees per hour) and the transport rate
(velocity of the aircraft divided by the radius of the Earth). Once the Earth
rate and the transport rate are removed, integration of the gyroscopes’ output
provides a measurement of the short-term angular displacement of the laser
system with respect to the Earth.
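To see how even a tiny residual accelerometer bias is magnified by the double integration into drift, consider the following sketch; the 200 Hz sampling rate follows the text, while the bias magnitude is an arbitrary illustration:

    import numpy as np

    dt = 1.0 / 200.0                  # 200 Hz sampling interval (5 ms)
    t = np.arange(0.0, 60.0, dt)      # one minute of flight
    bias = 1.0e-3                     # assumed residual bias of 1 mm/s^2
    accel = np.full_like(t, bias)     # true acceleration is zero

    velocity = np.cumsum(accel) * dt          # first integration
    position = np.cumsum(velocity) * dt       # second integration

    # after 60 s the 1 mm/s^2 bias has grown to roughly
    # 0.5 * bias * t^2 = 1.8 m of position error
    print(position[-1])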
The integration between differential GPS solutions and inertial trajectory
is computed by means of complex equations whereby different weights are
attributed to the two elements (GPS-based position and inertial trajectory)
with regard to the relative estimated errors; the core of the integration is a
filtering procedure with a Kalman filter. Once the two solutions are combined, the result is called the smoothed best-estimated trajectory, or SBET.
The SBET is a time series of positions, attitude, and error values that enable
the computation for directly georeferencing the laser scans. The system components are shown diagrammatically in Figure 5.5.

FIGURE 5.5
Representation of an airborne lidar scanning system, showing the GPS satellites, the inertial measurement unit (IMU), the direction of flight, and a single GPS ground station. (Based on Turton and Jonas, 2003.)
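Direct georeferencing then amounts, in essence, to rotating each laser range vector from the sensor frame into the mapping frame using the SBET attitude and adding the SBET position. A simplified sketch follows; it ignores the lever-arm and boresight offsets that a real system must calibrate, and the frame and rotation conventions are illustrative rather than those of any particular system:

    import numpy as np

    def rotation_matrix(roll, pitch, yaw):
        # Body-to-mapping-frame rotation built from the SBET attitude
        # angles (radians); the rotation order is one common convention.
        cr, sr = np.cos(roll), np.sin(roll)
        cp, sp = np.cos(pitch), np.sin(pitch)
        cy, sy = np.cos(yaw), np.sin(yaw)
        Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
        Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
        Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
        return Rz @ Ry @ Rx

    def georeference(platform_xyz, roll, pitch, yaw, scan_angle, laser_range):
        # Ground coordinates of one return: the range vector points
        # downward in the sensor frame, deflected cross-track by the
        # scan angle; rotate it into the mapping frame and translate
        # by the platform position from the SBET.
        v_sensor = laser_range * np.array(
            [0.0, np.sin(scan_angle), -np.cos(scan_angle)])
        return np.asarray(platform_xyz) + rotation_matrix(roll, pitch, yaw) @ v_sensor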
Airborne lidar scanning is an active remote sensing technology: because these
systems emit their own laser signals, they can be operated at any time during
the day or night, and they are commonly used in conjunction with an airborne
digital camera. Unlike lidar bathymetric systems, a single wavelength pulse is
used, usually at a near-infrared wavelength of about 1.5 µm, although many
systems operate at the 1064 nm wavelength because of the possibility of using
highly efficient and stable Nd:YAG (neodymium yttrium aluminum garnet) lasers.
Information from an airborne lidar system is combined with data from a GPS
ground station to produce the x and y coordinates (easting and northing)
and z coordinate (elevation) of the reflecting points. A typical system can
generate these coordinates at a rate of several million per minute; leading-edge
systems can acquire 100,000 laser pulses per second, each giving the positions
of up to four separate returns (therefore a maximum of 400,000 points per second).
Reflections for a given pair of x and y coordinates are then separated automatically into signals reflected from the ground and those reflected from
aboveground features. Aboveground features from which reflections can
occur include high-voltage electricity transmission cables, the upper surface
of the canopy in a forest, and the roofs of buildings (see Figure 5.6). A general
processing scheme is illustrated in Figure 5.7.
5.4.2 Applications of Airborne Lidar Scanning
Airborne lidar scanning is a cost-effective method of acquiring spatial data.
Because precise elevation models are needed in a large number of applications, airborne lidar scanning has many uses. In this sense, “precise” means
that the elevation model is available with an accuracy of at least ±0.5 m in
the x and y coordinates and better than 0.2 m in the z coordinate. A typical
Gaussian error distribution is shown in Figure 5.8.

FIGURE 5.6
Cross-sectional lidar profile obtained over an area of forest under winter conditions during March 1979, plotting the AOL surface return, the AOL waveform bottom return, and photo ground truth elevations (MSL) against along-track distance; two spoil piles and a river are identifiable. (Krabill et al., 1984.)

FIGURE 5.7
Block diagram of the processing scheme for an airborne lidar scanning system: ground and airborne GPS data are acquired and decoded; DGPS processing, calibration data, and INS data feed the Kalman-filtered trajectory computation; and the laser data are then processed and classified. (Dr. Franco Coren.)

FIGURE 5.8
An example of the error distribution of elevation measurements with an airborne laser scanner (number of points against error in meters), fitted by a Gaussian function centered very close to zero error. (Dr. Franco Coren.)

The laser calibration is
performed for every acquisition flight in order to minimize the
systematic errors and therefore to maintain the maximum of the Gaussian
function centered at zero; systematic errors are reflected in this figure as a
lateral shift of the Gaussian function. We shall mention just a few of the
applications of airborne laser scanning, namely in forestry, flood risk mapping, monitoring coastal erosion, and the construction of city models.
Because of its ability to pass between tree branches to record both ground
features and aboveground features, airborne lidar scanning is particularly
suited to forestry applications. Applications include the acquisition of data
to compute average tree heights, the use of terrain data to plan the location
of roads to be used in timber harvesting, and the determination of drainage
locations for the design of retention corridors. Ground points can be used
to construct a digital terrain model (DTM) or relief model, or they can be
converted to contours. The reflections from the vegetation can be used to
determine the heights of trees and to estimate the biomass or even the
expected volume of timber that could be cut in any specific stand.
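Once the returns have been classified, the height computation itself is trivial: the canopy height at a point is the first-return elevation minus the ground (DTM) elevation beneath it. A minimal sketch:

    import numpy as np

    def canopy_heights(first_return_z, ground_z):
        # Per-point canopy height: first-return elevation minus the
        # ground (DTM) elevation beneath it, both in meters.
        return np.asarray(first_return_z) - np.asarray(ground_z)

    # a 28.4 m first return over 12.1 m ground gives a 16.3 m tree
    print(canopy_heights([28.4], [12.1]))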
Airborne lidar scanning is also used in flood risk studies. A survey can of
course be carried out when an area is flooded, but this is not necessary. The
ability of airborne lidar scanning to observe large terrain areas accurately
and quickly makes it particularly suitable for the construction of a DTM for
flood plain mapping. The simulation of floods requires very precise elevation models. The aim of such simulations is to decide which areas
need to be protected, to identify the areas in which water can be allowed to
accumulate without causing a large amount of damage (retention areas), and
to propose suitable engineering works.
An example of the use of airborne lidar scanning in connection with coastal
erosion for the island of Sylt in Germany is discussed by Lohr (2003). The
erosion at the western part of the island amounts to about 1 million m3 per
year. The total cost for coastal erosion prevention of the western part of the
island is more than €10 million per year. Precise lidar elevation models of the
beach area are gathered regularly after the winter storms. A lidar-generated
DTM, in combination with bathymetric measurements taken at the same
time as the lidar survey, allows the determination of the erosion volume as
well as the locations of the areas that have to be filled.
An airborne lidar survey can also enable a relief model of a city to be
constructed. A three-dimensional city model allows for the accurate, precise,
and up-to-date mapping of the road network. The lidar digital surface model,
combined with complementary information (such as street names and
house numbers) in a geographic information system, can provide up-to-date
coverage for vehicle navigation and positioning systems. Of course, building
blocks and road networks may be vectorized to produce a conventional
road map.
Scanning a land area with an airborne lidar system provides a quicker
way of surveying land than does using conventional ground survey methods. In addition, the processing of airborne lidar data is much easier to
automate than the photogrammetric analysis of stereo pairs of air photos,
and the latter still involves considerable operator intervention. However,
the airborne lidar is not without its problems in land survey work. As
previously mentioned, the airborne lidar is likely to encounter multiple
reflections. One must be able to distinguish and identify these different
reflections. Secondly, there may be differences between what is measured
by the lidar and what a land surveyor would measure on the ground. As
previously noted, the lidar survey of a built-up area produces a three-dimensional model of the ground and of all the buildings on it. However,
a land surveyor would normally attempt to map the surface representing
the original undisturbed level of the ground that existed before the buildings were constructed. Figure 5.9 shows two representations, one of the
surface and one of the ground.
5.5 Laser Fluorosensing
Fluorescence occurs when a target molecule absorbs a photon and another
photon is subsequently emitted with a longer wavelength. Although not all
molecules fluoresce, the wavelength spectrum and the decay time spectrum
of emitted photons are characteristics of the target molecules for the specific
wavelength of the absorbed photons.

FIGURE 5.9
Digital model of (a) the surface and (b) the ground derived from laser scanning and classification, with ground resolution of 1 m × 1 m. (Istituto Nazionale di Oceanografia e di Geofisica Sperimentale.)

In a remote sensing context, the source
of excitation photons can be either the Sun or an artificial light source. In the
present context, the active process involves the use of a laser as an artificial
light source. The remote sensing system that both stimulates and analyzes the
fluorescence emission has become known as the laser fluorosensor.
The generalized laser fluorosensor consists of a laser transmitter, operating in the ultraviolet part of the spectrum; an optical receiver; and a data
acquisition system. A laser is used, rather than any other type of light
source, because it can deliver a high radiant flux density at a well-defined
wavelength to the target surface. An ultraviolet wavelength is used in order
to excite fluorescence in the visible region of the spectrum. A pulsed laser
is used to allow daylight operation, target range determination and,
potentially, fluorescence lifetime measurement. A block diagram of the
electro-optical system of an early laser fluorosensor is shown in Figure 5.10.
The characteristics of the laser transmitter, including the collimator, are
summarized in Table 5.1. The induced fluorescence is observed by a
receiver that consists of two main subsystems, a spectrometer and a lidar
altimeter. The receiver characteristics are summarized in Table 5.2. Fluorescence decay times could also be measured with the addition of high-speed detectors, as indicated in the center of Figure 5.10. The telescope
collects light from the point where the laser beam strikes the surface of the
Earth. An ultraviolet blocking filter prevents backscattered laser radiation
from entering the spectrometer. The visible portion of the spectrum, which
includes the laser-induced fluorescence as well as the upwelling background radiance, is dispersed by a concave holographic grating and monitored by gated detectors. Gating of the detectors permits both the
background solar radiance to be removed from the observed signal and
the induced fluorescence emission to be measured only at a specific range
from the sensor; for example, it is possible to measure the fluorescence of
the surface or over a depth interval below the surface.
The first main receiver subsystem is the spectrometer. In the particular
system considered in Figure 5.10, the received light is separated into 16
spectral channels. The first channel is centered on the water Raman line at
381 nm and is 8 nm wide. The spectral range from 400 nm to 660 nm is
covered by 14 channels, each 20 nm wide. The 16th channel is centered at
685 nm in order to observe the chlorophyll-a fluorescence emission, and is
only 7 nm wide. For each laser pulse, the output of each photodiode is
sampled, digitized, and passed to the data acquisition system, which also
notes the lidar altitude, the ultraviolet backscatter amplitude, the laser pulse
power, and the receiver gain.
The second main receiver subsystem in the laser fluorosensor considered
in Figure 5.10 is the lidar altimeter. The lidar altimeter uses the two-way
transit time of the ultraviolet laser pulse to measure the altitude of the
fluorosensor above the terrain. The lidar altitude is required to gate the
receiver and, along with the pulse energy and receiver gain, to normalize
the fluorescence intensity and hence to estimate the fluorescence conversion
efficiency of the target.

FIGURE 5.10
Block diagram of a fluorosensor electro-optical system: a nitrogen laser transmitter; a 20.5 cm f/3.1 Cassegrain telescope with a UV blocking filter and field stop; a fibre image slicer and concave holographic grating feeding proximity-focussed intensifiers and 16 gated photodiodes; a 337.1 nm line-filtered laser backscatter photodiode; a laser power meter; a lidar altimeter; gating/timing electronics; and sample/hold circuitry with background subtraction feeding the data processing system. (O'Neil et al., 1980.)

TABLE 5.1
Laser Transmitter Characteristics
Laser type: nitrogen gas laser
Wavelength: 337 nm
Pulse length: 3 nsec FWHM
Pulse energy: 1 mJ/pulse
Beam divergence: 3 mrad × 1 mrad
Repetition rate: 100 Hz
(From O'Neil et al., 1980.)
Since the early days, various improvements have been made to airborne
laser remote sensing systems, including:
• use of more spectral channels, enabling a much closer approximation
to a continuous fluorescence spectrum to be obtained
• introduction of cross-track scanning
• use of very accurate systems for determining the altitude and attitude (orientation) of the aircraft
• use of more than one laser in a system.
Laser fluorosensing can be used in studying stress in terrestrial vegetation, in
studying chlorophyll concentrations in the aquatic environment, and in oil
spill detection, characterization, and thickness mapping. We shall consider
some of the features of laser fluorosensing systems for each of these situations.
TABLE 5.2
Laser Fluorosensor Receiver Characteristics
Telescope: f/3.1 Dall Kirkham
Clear aperture: 0.0232 m²
Field of view: 3 mrad × 1 mrad
Intensifier on-gate period: 70 nsec
Nominal spectral range: 386–690 nm
Nominal spectral bandpass (channels 2–15): 20 nm/channel
Noise equivalent energy*: ~4.8 × 10⁻¹⁷ J
Lidar altimeter range: 75–750 m
Lidar altimeter resolution: 1.5 m
* This is the apparent fluorescence signal (after background subtraction) collected by the receiver in one wavelength channel for a single laser pulse that equals the noise in the channel. This figure relates to the sensor performance at the time of collection of the data presented by O'Neil et al. (1980). The noise equivalent energy has since been improved significantly.
(From O'Neil et al., 1980.)
If the target observed by a laser fluorosensor is in an aquatic environment,
the excitation photons may undergo Raman scattering by the water molecules.
Part of the energy of the incident photons is absorbed by a vibrational energy
level in the water molecule (the OH bond stretch), and the scattered photons
are shifted to a longer wavelength, the shift corresponding to a change in wavenumber (1/λ) of 3418 cm⁻¹.
The amplitude of the Raman signal is directly proportional to the number of
water molecules in the incident photon beam. This Raman line is a prominent
feature of remotely sensed fluorescence spectra taken over water and is used
to estimate the depth to which the excitation photons penetrate the water.
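This explains the position of the water Raman line quoted earlier: converting the excitation wavelength to wavenumbers, subtracting the 3418 cm⁻¹ shift, and converting back gives the emission wavelength. A quick check for the 337 nm nitrogen laser of Table 5.1:

    def raman_wavelength_nm(excitation_nm, shift_cm1=3418.0):
        # Wavelength of the Raman-scattered light: subtract the
        # OH-stretch wavenumber shift from the excitation wavenumber.
        excitation_cm1 = 1.0e7 / excitation_nm   # nm -> cm^-1
        return 1.0e7 / (excitation_cm1 - shift_cm1)

    # 337 nm excitation gives ~381 nm, the water Raman channel of the
    # spectrometer described above
    print(raman_wavelength_nm(337.0))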
Airborne laser fluorescence has been used quite extensively in terrestrial
studies of vegetation. When green vegetation is illuminated by ultraviolet
radiation, it exhibits a broad fluorescence emission with maxima or shoulders at blue (440 nm) and green (525 nm) wavelengths, as well as the red
and far-red chlorophyll fluorescence with maxima near 685 nm and 740 nm
(Chappelle et al., 1984; Lang et al., 1991; Lichtenthaler et al., 1993; and several
articles in the International Society for Optical Engineering proceedings
edited by Narayan and Kalshoven, 1997). Ratios of the intensities of various
pairs of fluorescence peaks are used as indicators of chlorophyll content and
stress condition in plants and can be used to study the effects of the application of different amounts of nitrogenous fertilizers and postharvest crop
residues (Lang et al., 1996; Lüdeker et al., 1996; McMurtrey et al., 1996;
Narayan and Kalshoven, 1997).
Laser fluorosensing has also been used extensively in work on the aquatic
environment. Fluorescent dyes are often used as tracers for studying the
diffusion and dispersion of, for example, sewage pollution (Valerio, 1981, 1983)
and in certain aspects of hydrology (Smart and Laidlaw, 1977). The advantage
of using a laser system is that, because one can use a well-characterized
chemical dye, one can obtain dye concentration maps without the need for
extensive in situ sampling of the dye concentration. Laser fluorosensing has
also been used very widely to study aquatic primary productivity. Since its
introduction in the 1970s (Kim, 1973), laser fluorosensing has matured from a
research area into a useful operational tool for ecological and biological surveying over large aquatic areas (see, for example, Bunkin and Voliak [2001];
Chekalyuk et al. [1995]; and Hoge [1988]). Chlorophyll-a can be stimulated to
fluoresce at a peak emission wavelength of 685 nm. Generally, fluorometers
for in situ measurements employ an excitation wavelength of 440 nm in the
blue part of the spectrum where chlorophyll-a exhibits a strong absorption
band; however, the conversion of the laser-induced fluorescence measurements into absolute units of chlorophyll concentration and phytoplankton
abundance is complicated because of variability in the quantum yield of chlorophyll fluorescence due to the high temporal and spatial variability of aquatic
phytoplankton strains (Falkowski et al., 1992).
To obtain quantitative measurements of the chlorophyll concentrations
with a laser fluorosensor, rather than just relative measurements, in the early
days required the results of a few in situ measurements of chlorophyll-a
concentration made by conventional means for samples taken simultaneously
from a few points under the flight path. These in situ measurements are
needed for the calibration of the airborne data because the data deal not
with a single chemical substance but rather with a group of chemically
related materials, the relative concentrations of which depend on the specific
mixture of the algal species present. Because the absolute fluorescence conversion efficiency depends not only on the species present but also on the
recent history of photosynthetic activity of the organisms (due to changes
in water temperature, salinity, and nutrient levels as well as the ambient
irradiance), this calibration is essential if data are to be compared from day
to day or from region to region. The development of a more-advanced laser
fluorosensing system to overcome at least some of the need for simultaneous
in situ data using a short-pulse, pump-and-probe technique is described by
Chekalyuk et al. (2000). The basic concept is to saturate the photochemical
activity within the target with a light flash (or a series of ‘flashlets’) while
measuring a corresponding induction rise in the quantum yield of chlorophyll fluorescence (Govindjee, 1995; Kramer and Crofts, 1996).
In common with all optical techniques, the depth to which laser fluorosensor
measurements can be made is limited by the transmission of the excitation
and emission photons through the target and its environment. Any one of
the materials that can be monitored by laser fluorosensing can also be monitored by grab sampling from a ship. While in situ measurements or grab
sample analyses are the accepted standard technique, the spatial coverage
by this technique is so poor that any temporal variations over a large area
are extremely difficult to unravel. For rapid surveys, to monitor changing
conditions, an airborne laser fluorosensor can rapidly cover areas of moderate size and the data can be made available very quickly, with only a few
surface measurements needed for calibration and validation purposes.
One important use of laser fluorosensing from aircraft is oil-spill detection,
characterization, mapping, and thickness contouring. Laboratory studies
have shown that mineral oils fluoresce efficiently enough to be detected by
a laser fluorosensor and that their fluorescence spectra not only allow oil to
be distinguished from a seawater background but also allow classification
of the oil into three groups: light refined (e.g., diesel), crude, and heavy
refined (e.g., bunker fuel). The fluorescence spectra of three oils typical of
these groups are shown in Figure 5.11. When used for oil pollution surveillance, a laser fluorosensor can perform three distinct operations: detect an
anomaly, identify the anomaly as oil and not some other substance, and
classify the oil into one of the three broad categories just mentioned.
There has also long been a need to measure oil-slick thickness, both within
the spill-response community and among academics in the field. However,
although a considerable amount of work has been done, no reliable methods
currently exist, either in the laboratory or the field, for accurately measuring
oil-on-water slick thickness. A three-laser system called the Laser Ultrasonic
Remote Sensing of Oil Thickness (LURSOT) sensor, which has one laser
coupled to an optical interferometer, has been used to measure oil thickness
accurately (Brown et al., 1997).

FIGURE 5.11
Laboratory measured fluorescence spectra (fluorescence efficiency against wavelength) of Merban crude oil (solid line), La Rosa crude oil (dash-dot line), and rhodamine WT dye (1% in water) (dashed line). (O'Neil et al., 1980.)

In this system, the measurement process
is initiated with a thermal pulse created in the oil layer by the absorption of
a powerful infrared carbon dioxide laser pulse. Rapid thermal expansion of
the oil occurs near the surface where the laser beam was absorbed. This
causes a steplike rise of the sample surface as well as the generation of an
ultrasonic pulse. This ultrasonic pulse travels down through the oil until it
reaches the oil-water interface, where it is partially transmitted and partially
reflected back toward the oil-air interface, where it produces a slight displacement of the oil surface. The time required for the ultrasonic pulse to travel
through the oil and back to the surface again is a function of the thickness and
the ultrasonic velocity in the oil. The displacement of the surface is measured
by a second laser probe beam aimed at the surface. The motion of the surface
produces a phase or frequency shift (Doppler shift) in the reflected probe beam
and this is then demodulated with the interferometer; for further details see
Brown et al. (1997).
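The thickness then follows from the two-way ultrasonic transit time, in the same way that water depth follows from the lidar pulse separation in Section 5.3. A sketch, with an assumed representative ultrasonic velocity (the actual value depends on the oil and its temperature):

    def oil_thickness_m(transit_time_s, v_sound_oil=1400.0):
        # Oil layer thickness from the two-way travel time of the
        # laser-generated ultrasonic pulse: d = v * t / 2.
        # v_sound_oil (m/s) is an assumed representative value.
        return v_sound_oil * transit_time_s / 2.0

    # a 1.4 microsecond round trip corresponds to roughly 1 mm of oil
    print(oil_thickness_m(1.4e-6))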
5.6 Airborne Gamma Ray Spectroscopy
The development of sodium iodide scintillation counters in the 1950s led to
the construction of airborne gamma ray spectrometers for detecting and
measuring radioactivity on the ground. A block diagram of such a system
is shown in Figure 5.12 (the magnetic tape drive for the storage of the results
would now be replaced by a more modern data storage system).

FIGURE 5.12
Block diagram of a gamma ray spectrometer: the detector package feeds, through a summing amplifier with its high-voltage supply, an analog-to-digital converter and a computer, which also records navigation, altimeter, pressure, and temperature data. (International Atomic Energy Agency [IAEA], 1991.)

A detector consists of a single crystal of sodium iodide treated with thallium. The sides
of the crystal are coated with magnesium oxide, which is light reflecting. An
incoming gamma ray photon produces fluorescence in the crystal and the
photons that are produced are reflected onto a photomultiplier tube at the
end of the crystal detector. The output from the photomultiplier tube is then
proportional to the energy of the incident gamma ray photon. The pulses
produced by the photomultiplier tube are fed into a pulse height analyzer
which, essentially, produces a histogram of the energies of the incident
gamma rays — that is, it produces a gamma ray spectrum. The system shown
in Figure 5.12 has a bank of detectors, not just a single detector.
A detector takes a finite time to process the output resulting from a given
gamma ray photon; if another photon arrives within that time, it is lost. If
the flux of gamma ray photons is large, then a correction must be applied.
If two pulses arrive at the pulse height analyzer at exactly the same time,
the output is recorded as a single pulse with the sum of the energies of the
two pulses; this also is a problem with large fluxes of gamma ray photons,
and steps have to be taken to overcome it.
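A standard way of handling the first of these problems is a dead-time correction of the measured count rate; a sketch using the common non-paralyzable model, with an assumed processing time per event:

    def true_count_rate(measured_rate_hz, dead_time_s=5.0e-6):
        # Non-paralyzable dead-time correction: n = m / (1 - m * tau),
        # where m is the measured rate and tau is the time taken to
        # process one event.  The 5 microsecond dead time is an
        # assumed value for illustration.
        return measured_rate_hz / (1.0 - measured_rate_hz * dead_time_s)

    # at 10,000 counts/s about 5% of the events are lost
    print(true_count_rate(1.0e4))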
Originally, airborne gamma ray spectroscopy was introduced in the 1960s
for the purpose of exploration for ores of uranium. It was then extended into
more general geological mapping applications. The main naturally occurring
radioactive elements are an isotope of potassium (40K), together with uranium
(238U) and thorium (232Th) and their daughter products. In addition to its use
in studying natural levels of radioactivity for geological mapping, airborne
gamma ray spectroscopy can also be used to study man-made radioactive
contamination of the environment. It is possible to distinguish different radioactive
materials because the energy (or frequency) of the gamma rays emitted by
a radioactive nuclide is characteristic of that nuclide.

FIGURE 5.13
Gamma ray spectra (normalized channel count rate against energy in MeV) of (a) 40K, with its peak at 1.46 MeV; (b) 238U, with peaks from 214Pb (0.35 MeV) and 214Bi (0.61, 1.12, 1.76, and 2.20 MeV); and (c) 232Th, with peaks from 208Tl (0.58 and 2.62 MeV) and 228Ac (0.91 and 0.97 MeV). The positions of the three radioactive elements' windows and the total count are shown. (IAEA, 1991.)

The gamma-ray spectra
of 40K, 238U, and 232Th are shown in Figure 5.13. The spectral lines are broadened as a result of the interaction of the gamma rays with the ground and
the intervening atmosphere between the ground and the aircraft. Background radiation, including cosmic rays, is also present, and there is also
the effect of radioactive dust washed out of the atmosphere onto the ground
or the aircraft, and of radiation from the radioactive gas radon (222Rn), which
occurs naturally in varying amounts in the atmosphere. Moreover, the
gamma rays are attenuated by their passage through the atmosphere;
roughly speaking, about half of the intensity of the gamma rays is lost for
every 100 m of height. For mapping of natural radioactivity using fixed-wing
aircraft, a flying height of 120 m is most commonly used. To fly lower is
hazardous unless the terrain is very flat, and the field of view (sampling area)
is smaller; to fly higher means dealing with a weaker signal. Therefore, for
accurate mapping, one must have an accurate value
of the flying height (from a radar altimeter carried on board the aircraft).
More details of the theory and techniques of airborne gamma ray spectroscopy are given in a report published by the International Atomic Energy
Agency (IAEA, 1991).
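The rule of thumb that about half the gamma ray intensity is lost for every 100 m of height can be written as a simple exponential. A sketch:

    def relative_intensity(height_m, halving_height_m=100.0):
        # Fraction of the ground-level gamma ray intensity reaching
        # the aircraft, using the rule of thumb of one halving of
        # intensity per 100 m of height.
        return 0.5 ** (height_m / halving_height_m)

    # at the usual 120 m survey height about 44% of the signal survives
    print(relative_intensity(120.0))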
Airborne gamma ray spectrometer systems designed for mapping natural
radioactivity can also be used for environmental monitoring surveys. For
instance, mapping of the fallout in Sweden from the accident at the nuclear
power station in Chernobyl on the night of April 25 to 26, 1986, is described in some
detail in the report on gamma ray spectroscopy by the IAEA (1991). That
report also describes the successful use of airborne surveys to locate three
lost radioactive sources (a cobalt [60Co] source lost somewhere in transit
by road between Salt Lake City, UT, and Kansas City, MO, a distance of
1800 km, in June 1968; a U.S. Athena missile carrying two 57Co sources that
crashed in northern Mexico in July 1970; and the Soviet nuclear-powered
satellite COSMOS-954, which disintegrated on re-entry into the atmosphere
and spread radioactive materials over a large area of northern Canada in
January 1978).
6 Ground Wave and Sky Wave Radar Techniques

6.1 Introduction
The original purpose for which radar was developed was the detection of
targets such as airplanes and ships. In remote sensing applications over the
land, radars are used to study spatial variations in the surface of the land
and also the rather slow temporal variations of the land surface. Before the
advent of remote sensing techniques, data on sea state and wind speeds at
sea were obtained from ships and buoys and were accordingly only available
for a sparse array of points. Wave heights were often simply estimated by
an observer standing on the deck of a ship. As soon as radar was invented,
scientists found that, at low elevation angles, surrounding objects and terrain
caused large echoes and often obliterated genuine targets; this is the well-known phenomenon of clutter. Under usual circumstances, of course, the aim
is to reduce this clutter. However, research on the clutter phenomenon
showed that the backscattered echo became larger with increasing wind
speed. This led to the idea of using the clutter, or backscattering, to measure
surface roughness and wind speed remotely. Remote sensing techniques
using aircraft, and more specifically satellites, have the very great advantage
of being able to provide information about enormous areas of the surface of
the Earth simultaneously. However, remote sensing by satellite-flown instruments using radiation from the visible or infrared parts of the electromagnetic spectrum has the serious disadvantage that the surface of the sea is
often obscured by cloud. Although data on wind speeds at cloud height are
obtainable from a succession of satellite images, these would not necessarily
be representative of wind speeds at ground level.
It is, of course, under adverse weather conditions that one is likely to be
particularly anxious to obtain sea state and marine weather data. Aircraft are
expensive to purchase and maintain and their use is restricted somewhat by
adverse weather conditions; satellite remote sensing techniques can provide
a good deal of relevant information at low cost. Satellites are even more
expensive than aircraft; however, this fact may be overlooked if someone else
has paid the large capital costs involved and the user pays only the marginal
costs of the reception, archiving, and distribution of the data.

FIGURE 6.1
Ground and sky wave radars for oceanography, illustrating (not to scale) side-looking radar, altimeter, line-of-sight, ground wave, and sky wave geometries. (Shearman, 1981.)

Satellites, of
course, have the advantage over other remote sensing platforms in that they
provide coverage of large areas. If one is concerned with only a relatively small
area of the surface of the Earth, similar data can be obtained about sea state
and near-surface wind speeds using ground-based or ship-based radar systems. Figure 6.1 is taken from a review by Shearman (1981) and illustrates
(though not to scale) ground wave and sky wave techniques.
A distinction should be made between imaging and nonimaging active microwave systems. Side-looking airborne radars flown on aircraft and synthetic
aperture radars (SARs) flown on aircraft and spacecraft are imaging devices
and can, for instance, give information about wavelengths and about the direction of propagation of waves. A substantial computational effort involving
Fourier analyses of the wave patterns is required to achieve this. In the case of
SAR, this computational effort is additional to the already quite massive computational effort involved in generating an image from the raw data (see
Section 7.4). Other active microwave instruments, such as altimeters and scatterometers, do not form images but give information about wave heights and
wind speeds. This information is obtained from the shapes of the return pulses
received by the instruments. The altimeter (see Section 7.2) operates with short
pulses traveling vertically between the instrument and the ground and is used
to determine the shape of the geoid and the wave height (rms). A scatterometer
uses beams that are offset from the vertical. Calibration data are used to determine wave heights and directions and wind speeds and directions.
Three types of ground-based radar systems for sea-state studies are available
(see Figure 6.1):
Direct line-of-sight systems
Ground wave systems
Sky wave systems
Direct line-of-sight systems use conventional microwave frequencies,
whereas ground wave and sky wave systems use longer wavelength radio
waves, decametric waves, with frequencies corresponding to the conventional
short-wave broadcast bands. Microwave radar is limited to use within the
direct line-of-sight and cannot be used to see beyond the horizon. A radar
mounted on a cliff is unlikely to achieve a range of more than 30 to 50 km. Microwave
radar systems are discussed in Chapter 7.
6.2 The Radar Equation
Before considering any of the different types of radar systems described in
this chapter and the next one, some consideration must be given to what is
known as the radar equation. The radar equation describes the power of the
return signal from the surface that is being observed by the radar (see, for
instance, Section 9.2.1 of Woodhouse [2006]). For a radar transmitter, the
power of the transmitted beam in the direction (θ, φ) is given by:

$$ S_t(R, \theta, \varphi) \;=\; t_\lambda(\theta)\, P_t\, G(\theta, \varphi) \left( \frac{1}{4\pi R^2} \right) \qquad (6.1) $$
where Pt is the power transmitted by the antenna, G(θ, φ) is the gain factor
representing the directional characteristics of the antenna (i.e., PtG(θ, φ) is the
power per unit solid angle transmitted in the direction (θ, φ)), tλ(θ) is the
transmittance and is slightly less than 1, and the factor 1/(4πR²) allows for
the spreading out of the signal over a sphere of radius R, where R is the range.
For a satellite system, tλ(θ) is the transmittance through the whole atmosphere.
Now consider an individual target that is illuminated by a radar beam.
This target may absorb, transmit, or scatter the radiation, but we are only
concerned with the energy that is scattered back toward the radar and we
define the scattering cross section σ as the ratio of the reflected power per
unit solid angle in the direction back to the radar divided by the incident
power density from the radar (per unit area normal to the beam); σ has the
units of area. The scatterer therefore acts as a source of radiation of magnitude σSt(R, θ, φ) and so the power density arriving back at the radar is:
$$ S_r \;=\; \frac{t_\lambda(\theta)\,\sigma\, S_t(R, \theta, \varphi)}{4\pi R^2} \;=\; \frac{t_\lambda^2(\theta)\,\sigma\, P_t\, G(\theta, \varphi)}{(4\pi)^2 R^4} \qquad (6.2) $$
The power, Pr, entering the receiver is therefore SrAe(θ, φ), where Ae(θ, φ)
is the effective antenna area that is related to the gain by:

$$ A_e(\theta, \varphi) \;=\; \frac{\lambda^2 G(\theta, \varphi)}{4\pi} \qquad (6.3) $$
and therefore:

$$ P_r \;=\; \frac{A_e(\theta, \varphi)\, t_\lambda^2(\theta)\,\sigma\, P_t\, G(\theta, \varphi)}{(4\pi)^2 R^4} \;=\; \frac{t_\lambda^2(\theta)\, P_t\, G^2(\theta, \varphi)\,\lambda^2\,\sigma}{(4\pi)^3 R^4} \qquad (6.4) $$
so that:

$$ P_r \;=\; \frac{t_\lambda^2(\theta)\, P_t\, A_e^2(\theta, \varphi)\,\sigma}{4\pi\,\lambda^2 R^4} \qquad (6.5) $$
and therefore, using the form in Equation 6.4, we can write σ as:

$$ \sigma \;=\; \frac{P_r\,(4\pi)^3 R^4}{\lambda^2\, t_\lambda^2(\theta)\, G^2(\theta, \varphi)\, P_t} \qquad (6.6) $$
Note that this process is for what we call the monostatic case — in other
words, when the same antenna is used for the transmitting and receiving of
the radiation. When transmission and reception are performed using different
antennae, which may be in quite different locations as in the case of sky
wave radars, the corresponding equation can be derived in a similar way,
except that it is necessary to distinguish between the different ranges, directions, gains, and areas of the two antennae.
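For orientation, the monostatic form of Equation 6.4 is easily evaluated numerically; all the numbers in the example call below are arbitrary illustrations, not values for any particular radar:

    import numpy as np

    def received_power(P_t, G, wavelength, sigma, R, t_lambda=1.0):
        # Monostatic radar equation (Equation 6.4):
        # P_r = t^2 * P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4)
        return (t_lambda**2 * P_t * G**2 * wavelength**2 * sigma
                / ((4.0 * np.pi)**3 * R**4))

    # arbitrary illustrative values: 1 kW transmitter, gain 1000,
    # 22 m wavelength, 10 m^2 cross section, 200 km range
    print(received_power(1.0e3, 1.0e3, 22.0, 10.0, 2.0e5))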
Equations 6.5 and 6.6 are for the power received from one scattering
element at one instant in time. The measured backscatter is the sum of the
backscatter from all the individual small elements of surface in the area that
is viewed by the radar. Equation 6.4, therefore, can be written for an individual scatterer labeled by i as:
$$ P_{ri} \;=\; \frac{t_\lambda^2(\theta)\, P_t\, G_i^2(\theta, \varphi)\,\lambda^2\,\sigma_i}{(4\pi)^3 R_i^4} \qquad (6.7) $$
and the total received power is then obtained from the summation over i of
all the individual Pri, so that:
$$ P_r \;=\; \sum_{i=1}^{N} P_{ri} \qquad (6.8) $$
In the case of the sea, one must modify the approach because the sea is in
constant motion and therefore the surface is constantly changing. We assume
that there are a sufficiently large number, N, of scatterers, contributing random
phases to the electric field to be able to express the total received power, when
averaged over time and space, as the sum:

$$ \overline{P_r} \;=\; \sum_{i=1}^{N} \overline{P_{ri}} \qquad (6.9) $$
If we assume that the sea surface is divided into elements of size ∆Ai, each
containing a scatterer, the normalized radar cross section σ⁰ can be defined as:

$$ \sigma^0 \;=\; \frac{\sigma_i}{\Delta A_i} \qquad (6.10) $$
The value of σ⁰ depends on the roughness of the surface of the sea and this,
in turn, depends on the near-surface wind speed. However, it should be
fairly clear that one cannot expect to get an explicit expression for the wind
speed in terms of σ⁰; it is a matter of using a model, or models, relating σ⁰
to the wind speed and then fitting the experimental data to the chosen model.
The value of σ⁰ increases with increasing wind speed and decreases with
increasing angle of incidence and depends on the beam azimuth angle relative to the wind direction. Because of the observed different behavior of σ⁰
in the three different regions of incidence angle ([a] 0 to 20°, [b] 20 to 70°,
and [c] above 70°), there are different models for these three regions. In the
case of ground wave and sky wave radars, it is the intermediate angle of
incidence, where Bragg scattering applies, that is relevant. For the altimeter
(see Section 7.2), it is the low incidence angles (i.e., for θ in the range from
0° to about 20°) that apply. In this case, it is assumed that specular reflection
is the dominant factor and so what is done is to use a model where the sea
surface is made up of a large number of small facets oriented at various
angles. Those that are normal, or nearly normal, to the radar beam will give
strong reflections, whereas the other facets will give weak reflections.
If one is concerned with detecting some object, such as a ship or an
airplane, with a radar system, then one makes use of the fact that the object
produces a massively different return signal from the background and therefore the object can be detected relatively easily. However, in remote sensing
of the surface of the Earth, one is not so much concerned with detecting an
object but with studying the variations in the nature or state of the part of
the Earth’s surface that is being observed, whether the land or the sea.
Differences in the nature or state of the surface give rise to differences in the σ_i of the individual scatterers and therefore, through Equation 6.8 or Equation 6.9, to differences in the received power of the return signal. However, inverting Equation 6.8 or Equation 6.9 to use the measured value of the received power to determine the values of σ_i, or even the value of the normalized cross section σ⁰, is not feasible. One is therefore reduced to constructing models of the surface and comparing the values of the calculated received power for the various models with the actually measured value of the received power.
6.3 Ground Wave Systems
The origin of the use of decametric radar for the study of sea state dates
from the work of Crombie (1955), who discovered that with radio waves of
frequency of 13.56 MHz — that is, 22 m wavelength — the radar echo from
the sea detected at a coastal site had a characteristic Doppler spectrum with
one strongly dominant frequency component. The frequency of this component was shifted by 0.376 Hz, which corresponds to the Doppler shift expected from the velocity of sea waves, traveling toward the radar, with a wavelength equal to half the wavelength of the radio wave. This means that
radio waves interact with sea waves of comparable wavelength in a resonant
fashion that is analogous to the Bragg scattering of X-rays by the rows of
atoms in a crystal. Crombie envisaged a coastal-based radar system, using
multifrequency radars with steerable beams, to provide a radar spectrometer
for studying waves on the surface of the sea. Such radars would have greater
range than the direct line-of-sight microwave radars erected on coastal sites
(see Figure 6.1) because they would be operating at longer wavelengths,
namely tens of meters. These waves, which are referred to as ground waves,
bend around the surface of the Earth so that such a ground wave radar
would be expected to have a range of between 100 km and 500 km, depending on the power and frequency of the radar used.
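Crombie's numbers can be checked directly from the deep-water dispersion relation for gravity waves, ω² = gk; the following quick calculation (mine, not the book's) recovers the 0.376 Hz shift:

```python
import math

g = 9.81          # gravitational acceleration (m s^-2)
c = 2.998e8       # speed of light (m s^-1)

f_radar = 13.56e6              # Crombie's radar frequency (Hz)
lam_radio = c / f_radar        # radio wavelength, about 22 m

# Bragg selection at grazing incidence: sea wavelength = half radio wavelength
lam_sea = lam_radio / 2

# Deep-water dispersion omega^2 = g k with k = 2 pi / lam_sea:
f_doppler = math.sqrt(g / (2 * math.pi * lam_sea))

print(f"radio wavelength : {lam_radio:.1f} m")
print(f"Bragg sea wave   : {lam_sea:.1f} m")
print(f"Doppler shift    : {f_doppler:.3f} Hz")   # ~0.376 Hz
```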
If the radio waves strike the sea surface at an angle, say Δ, the Bragg scattering condition is 2λ_s cos Δ = λ, where λ_s is the sea-surface wavelength and λ is the radio wavelength. For ground waves, the radio waves strike the sea surface at grazing incidence and the Bragg scattering condition simplifies to 2λ_s = λ. There is not, of course, a single wavelength alone present in the
waves on the surface of the sea; there is a complicated pattern of waves with
a wind-dependent spectrum of wavelengths and spread of directions. The
importance of the Bragg-scattering mechanism is that the radar can be used
to study a particular selected wavelength component in the chaotic pattern
of waves on the sea surface. Quantities readily derivable from the Bragg
resonant lines are the wind direction, from the relative magnitude of the
approach and recede amplitude (see Figure 6.2a), and the radial component
of the current (see Figure 6.2c). The possibility of determining the current
directly from the Doppler shift does not arise with an SAR because the
Doppler shift associated with the moving target cannot easily be separated
from the Doppler shift associated with the movement of the aircraft or
satellite platform that carries the radar.
It had originally been supposed that the mapping of currents using ground
wave radar would require the use of a large phased-array antenna to resolve
the sea areas in azimuth. Such an array was very costly and demanded at
least 100 m of coast per site. One example of such a system was the Ocean
Surface Current Radar (OSCR) system, which had a receiving antenna of
32 aerials (this system is no longer commercially available). Work at the
National Oceanic and Atmospheric Administration’s (NOAA’s) Wave
[Figure 6.2: a Doppler spectrum, spectral power (dB) against Doppler frequency (Hz), with features (a) to (e) marked.]
FIGURE 6.2
Features of radar spectra used for sea-state measurement and the oceanographic parameters derived from them: (a) ratio of the two first-order Bragg lines — wind direction; (b) −10 dB width of the larger first-order Bragg line — wind speed; (c) Doppler shift of the first-order Bragg lines from expected values — radial component of surface current; (d) magnitudes of the first-order Bragg lines — ocean wave-height spectrum for one wave frequency and direction; and (e) magnitude of the second-order structure — ocean wave-height spectrum for all wave frequencies and directions (sky wave data for 10.00 UT, 23 August 1978; frequency 15 MHz; data window, Hanning; FFT, 1024 points; averages, 10; slant range, 1125 km). (Shearman, 1981.)
Propagation Laboratory in the 1970s demonstrated the feasibility and accuracy of smaller, transportable high-frequency radars for real-time current
mapping up to 60 km from the shore. This finding was incorporated in the NOAA Coastal Ocean Dynamics Applications Radar (CODAR) current-mapping radar (Barrick et al., 1977), which uses a broad-beam transmitter at a high frequency (~26 MHz). The returning radio echoes are received separately on four whip antennae located at the corners of a square. A Doppler spectrum is determined from the signals received at each of the four whip antennae, and the phases of the components of a particular Doppler
shift in each of the spectra are then compared to deduce the azimuthal
direction from which that component has come. With two such radars on
two separate sites, the radial components of the currents can be determined,
with reference to each site, and the two sets of results can then be combined
to yield the current as a vector field. In 1984, the team that invented these
systems left the NOAA research laboratories to form a commercial company,
CODAR Ocean Sensors, developing low-cost commercial versions of the
system. Hundreds of journal papers have now been published that explain
the techniques and establish accuracies by independent comparisons (see
the website http://www.codaros.com/bib.htm).
The original CODAR design of the 1980s has been improved upon over the
last 20 years and is now replaced with the SeaSonde, which has a small antenna
footprint, low power output, and a 360-degree possible viewing angle that
minimizes siting constraints and maximizes coverage area. The SeaSonde can
be remotely controlled from a central computer in an office or laboratory and
set for scheduled automatic data transfers. It is suitable for fine-scale monitoring in ports and small bays, as well as open ocean observation over larger
distances up to 70 km. For extended coverage, a long-range SeaSonde can
observe currents as far as 200 km offshore. The main competitor to SeaSonde
is a German radar called WERA (standing for WEllen Radar, or wave radar). This is a phased-array system sold by the German company Helzel Messtechnik GmbH. The first WERAs operated at 25 to 30 MHz but, with current interest in lower frequencies to obtain longer ranges, they now operate at 12 to 16 MHz. Pisces is another commercially available phased-array system, but it is of higher specification, and therefore higher priced, than WERA and has a longer range. Pisces, WERA, and SeaSonde use frequency-modulated continuous-wave radar technology,
whereas OSCR and the original CODAR were pulsed systems.
Decametric ground wave systems have now been used for over 20 years
to study surface currents over coastal regions. Moreover, these systems have
now developed to the stage that their costs and processing times make it
feasible to provide a near-real-time determination of a grid of surface currents every 20 to 30 minutes. This provides a valuable data set for incorporation into numerical ocean models (Lewis et al., 1998).
6.4 Sky Wave Systems
The sky wave radar (see Figure 6.1) involves decametric waves that are
reflected by the ionosphere and consequently follow the curvature of the Earth
in a manner that is very familiar to short-wave radio listeners. These waves
are able to cover very large distances around the Earth. Sky wave radar is
commonly referred to as over-the-horizon radar (OTHR). Sky wave radar
can be used to study sea-surface waves at distances between 1000 km and
3000 km from the radar installation. The observation of data on sea-state
spectra gathered by sky wave radar was first reported by Ward (1969). As
with the ground-wave spectra, sky wave radar depends on the selectivity
of wavelengths achieved by Bragg scattering at the surface of the sea. There
is, however, a difference between the Bragg scattering of ground waves
and sky waves. In the case of ground waves, the radio waves strike the
sea surface at grazing incidence, but in the case of sky waves, the radio
waves strike the sea surface obliquely, say at angle ∆, and the full Bragg
condition 2lscos ∆ = l applies. In addition, ionospheric conditions vary
with time so that both the value of ∆ and the position at which the radio
waves strike the sea surface also vary.
Sky wave radars can operate at frequencies between about 5 and 28 MHz,
corresponding to wavelengths between 60 and 11 m.
The development and operation of a sky wave radar system
is a large and expensive undertaking. However, there is considerable military
interest in the imaging aspect of the use of sky wave radars, and it is doubtful
whether any nonmilitary operation would have the resources to construct and
operate a sky wave radar. One example of the military significance is provided
by the case of the stealth bomber, a half-billion dollar batlike superplane
developed for the U.S. military to evade detection by radar systems. Stealth
aircraft are coated with special radar absorbing material to avoid detection by
conventional microwave radar; however, sky wave radar uses high-frequency
radio waves, which have much longer wavelengths than microwaves. A sky
wave radar can detect the turbulence in the wake of a stealth aircraft in much
the same way that a weather radar is used to detect turbulent weather ahead
so that modern airliners can divert and avoid danger and inconvenience to
passengers. In addition to observing the turbulent wake, the aircraft itself is more visible to a sky wave radar than it is to a conventional radar. Moreover, stealth aircraft, such as the U.S. Nighthawk F-117A, are designed with sharp leading edges and a flat belly to minimize reflections back toward conventional ground-based radars. A sky wave radar signal bounces down from the ionosphere onto the upper surfaces, which include radar-reflecting protrusions for the cockpit, engine housings, and other equipment. An additional feature of a sky wave
radar is that it is very difficult to jam because of the way the signal is propagated over the ionosphere.
For the waves on the surface of the sea, only the components of the wave
vector directly toward or away from the radar are involved in the Bragg condition. The relative amplitudes of the positively and negatively Doppler-shifted
lines in the spectrum of the radar echo from a particular area of the sea indicate
the ratio of the energy in approaching and receding wind-driven sea waves.
Should there be only a positively shifted line present, the wind is blowing
directly toward the radar; conversely, should there be only a negatively shifted
line, the wind is blowing directly away from the radar. If the polar diagram of
the wind-driven waves about the mean wind direction is known, the measured
ratio of the positive and negative Doppler shifts enables the direction of the
mean wind to be deduced. This is achieved by rotating the wave-energy polar
diagram relative to the direction of the radar beam until the radar beam’s
direction cuts the polar diagram with the correct ratio (see Figure 6.3). Two wind
directions can satisfy this condition; these directions are symmetrically oriented
on the left and right of the direction of the radar beam. This ambiguity can be
resolved using observations from a sector of radar beam directions and making
use of the continuity conditions for wind circulation (see Figure 6.4).
In practice, the observed positive and negative Doppler shifts are not quite
equal in magnitude. This occurs because an extra Doppler shift arises from
the bodily movement of the water surface on which the waves travel, this
movement being the surface current. The radar, however, is only capable of
determining the component of the total surface current along the direction
of the radar beam.
[Figure 6.3: three Doppler spectra, power (dB) against Doppler shift (Hz), for different orientations of the radar beam relative to the wind.]
FIGURE 6.3
Typical spectra obtained for different wind orientations relative to the radar boresight.
(Shearman, 1981.)
Figure 6.2 shows a sky wave radar spectrum labeled with the various
oceanographic and meteorological quantities that can be derived from it. In
addition to the quantities that have already been mentioned, other quantities
can be derived from the second-order features. It should be noted that current
measurements from sky wave radars are contaminated by extra Doppler
shifts due to ionospheric layer height changes. If current measurements are
to be attempted, one must calibrate the ionospheric Doppler shift; this may
be done, for instance, by considering the echoes from an island.
There are a number of practical considerations to be taken into account
for sky wave radars. The most obvious of these is that, because of the huge
distances involved, they require very high transmitted power and a very sensitive receiving system (see Section 10.2.4.3 for a further discussion).
We ought perhaps to consider the behavior and properties of the ionosphere a little more. The lowest part of the atmosphere, called the troposphere, extends to a height of about 10 km. The troposphere contains 90%
of the gases in the Earth’s atmosphere and 99% of the water vapor. It is the
behavior of this part of the atmosphere that constitutes our weather. Above the troposphere is the stratosphere; the boundary between the troposphere and the stratosphere is called the tropopause. The stratosphere extends to a height of about 50 km, and above it the mesosphere reaches to about 80 km above the Earth's surface. The ozone layer, which is so essential
to protect life forms from the effects of ultraviolet radiation, is situated in
the lower stratosphere. Ozone (O3) is formed by the action of the incoming
[Figure 6.4: two wind-direction maps, dated 24.2.82 and 25.2.82.]
FIGURE 6.4
Radar-deduced wind directions (heavy arrows) compared with Meteorological Office analyzed winds. The discrepancies in the lower picture are due to the multiple peak structure on this bearing. (Wyatt, 1983.)
solar ultraviolet radiation on oxygen molecules (O2). At heights above about
80 km, the density of the air is so low that when the molecules in the air
become ionized by incoming solar ultraviolet radiation (or, to a lesser extent,
by cosmic rays or solar wind particles) the ions and electrons will coexist
for a long time before recombination occurs. In this region, the highly rarefied
air has the properties of both a gas and a plasma (i.e., an ionized gas);
therefore, the region is called the ionosphere (short for ionized atmosphere).
The ionosphere stretches from about 80 km to several hundred kilometers above the Earth's surface and has a number of important layers (D, E, F1, and F2, in order of ascending
height). The theory of the propagation of a radio wave in a plasma leads to
a value of the refractive index n given by:

$$ n = \sqrt{1 - \left(\frac{f_p}{f}\right)^2} \qquad (6.11) $$
where
f_p is the plasma frequency, given by \( f_p = (1/2\pi)\sqrt{e_e^2 N_e/(\varepsilon_0 m_e)} \),
e_e is the charge on an electron,
N_e is the density of free electrons,
ε₀ is the permittivity of free space, and
m_e is the mass of an electron.
As height increases through the ionosphere, the recombination time is longer,
the electron density Ne increases, and so fp increases and the refractive index
decreases.
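As a quick numerical sketch (mine, not the book's; the electron density is an assumed, representative daytime F2-layer value), the plasma frequency and Equation 6.11 can be evaluated as follows:

```python
import math

# Physical constants (SI)
E_CHARGE = 1.602e-19   # electron charge (C)
E_MASS = 9.109e-31     # electron mass (kg)
EPS0 = 8.854e-12       # permittivity of free space (F m^-1)

def plasma_frequency(n_e):
    """Plasma frequency f_p (Hz) for electron density n_e (m^-3)."""
    return math.sqrt(E_CHARGE ** 2 * n_e / (EPS0 * E_MASS)) / (2 * math.pi)

def refractive_index(f, n_e):
    """Equation 6.11; below f_p the wave cannot propagate (returns NaN)."""
    x = 1 - (plasma_frequency(n_e) / f) ** 2
    return math.sqrt(x) if x >= 0 else float("nan")

# Assumed daytime F2-layer electron density of ~1e12 m^-3:
print(f"f_p = {plasma_frequency(1e12) / 1e6:.1f} MHz")   # ~9 MHz
print(f"n at 15 MHz: {refractive_index(15e6, 1e12):.3f}")
```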
The ionosphere is not a simple mirror; the radio waves are reflected by
total internal reflection. But this total internal reflection is not that of a plane interface between two homogeneous transparent media, where the radiation travels in a straight path in the optically more-dense medium and the total internal reflection occurs at the interface when the angle of incidence exceeds the critical angle, θ_c (the condition for the critical angle is sin θ_c = 1/n). We have just seen that the refractive index for radio waves in the
ionosphere varies with height, decreasing as height increases. A radio wave
traveling obliquely to the vertical therefore does not travel in a straight line
and then suddenly get reflected; as it rises, it is progressively bent away from
the vertical and travels in a curve until eventually it is traveling horizontally
and then starts on a downward curve. Such a curved path is sketched in
Figure 6.1. A convenient discussion of the ionosphere, especially with reference to sky wave radars, can be found in chapter 6 of Kolawole (2002).
The simple ideas of Bragg scattering that have been previously mentioned
are valuable in identifying the particular wavelength of radio wave that will
be selected to contribute to the return pulse. They do not, however, give a
value for the actual intensity of the backscattered radio waves nor do they
take into account second-order effects. This can be tackled by the extension
and adaptation to electromagnetic scattering given by Rice (1951), Barrick
(1971a, 1971b, 1972a, 1972b, 1977a, 1977b), and Barrick and Weber (1977) of
the treatment, originally due to Lord Rayleigh, of the scattering of sound
from a corrugated surface. This is essentially a perturbation theory argument.
A plane radio wave is considered to be incident on a corrugated or rough
conducting surface, and the vector sum of the incident and scattered waves
at the surface of the conductor must satisfy the boundary conditions on the
electromagnetic fields, in particular that the tangential component of the
electric field is zero. More-complicated boundary conditions apply if one
takes into account the fact that seawater is not a perfect conductor and that
its relative permittivity is not exactly equal to unity.
The resultant electric field of all the scattered waves has a component parallel
to the surface of the water that must cancel out exactly with the component
of the incident wave parallel to the surface. The scattering problem therefore
involves the determination of the phases, amplitudes, and polarizations of the
scattered waves that will satisfy this condition. Consider a plane radio wave
with wavelength λ₀ incident with grazing angle Δ_i on a sea surface with a
[Figure 6.5: geometry sketches for radio-wave scattering from a corrugated sea surface.]
FIGURE 6.5
(a) Scattering from a sinusoidally corrugated surface with H ≪ λ₀; i, r, and s indicate the incident, specularly reflected, and first-order scattered waves, respectively; (b) the backscatter case, Δ_s⁻ = π − Δ_i; (c) the general three-dimensional geometry, with the vector construction for the scattered radio waves.
sinusoidal swell wave of height H (H ≪ λ₀) and wavelength λ_s, traveling with its velocity in the plane of incidence (see Figure 6.5[a]). There will be three scattered waves, with grazing angles of reflection Δ_i, Δ_s⁺, and Δ_s⁻, where:

$$ \cos \Delta_s^{\pm} = \cos \Delta_i \pm \lambda_0/\lambda_s \qquad (6.12) $$
If one of these scattered waves returns along the direction of the incident wave, then Δ_s⁻ = π − Δ_i (see Figure 6.5[b]) so that:

$$ \lambda_0/\lambda_s = \cos \Delta_i - \cos \Delta_s^- = \cos \Delta_i - \cos(\pi - \Delta_i) = 2\cos \Delta_i $$

i.e.,

$$ \lambda_0 = 2\lambda_s \cos \Delta_i \qquad (6.13) $$

which is just the Bragg condition.
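A small numerical sketch of Equation 6.12 (mine; the 30 m radio wave, 100 m swell, and 20° grazing angle are arbitrary example values) also shows that one of the two first-order waves can fail to propagate when the cosine falls outside [−1, 1]:

```python
import math

def scattered_grazing_angles(delta_i_deg, lam_radio, lam_sea):
    """First-order scattered grazing angles from Equation 6.12.

    Returns (delta_s_plus, delta_s_minus) in degrees; None where
    |cos| > 1, meaning that scattered wave does not propagate.
    """
    ci = math.cos(math.radians(delta_i_deg))
    angles = []
    for sign in (+1, -1):
        c = ci + sign * lam_radio / lam_sea
        angles.append(math.degrees(math.acos(c)) if abs(c) <= 1 else None)
    return tuple(angles)

plus, minus = scattered_grazing_angles(20.0, lam_radio=30.0, lam_sea=100.0)
print(f"delta_s+ = {plus}, delta_s- = {minus}")
```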
If the condition H ≪ λ₀ is relaxed, then the scattered-wave spectrum will contain additional scattered waves with grazing angles given by:

$$ \cos \Delta_s = \cos \Delta_i \pm n\lambda_0/\lambda_s \qquad (6.14) $$
where n is a small integer. However, it has been shown that this higher-order
scattering is unimportant in most cases in which decametric radio waves are
incident on the surface of the sea; it only becomes important for very short
radio wavelengths and for very high sea states (Barrick, 1972b).
The condition expressed in Equation 6.12 can be regarded as one component of a vector equation:

$$ \mathbf{k}_s = \mathbf{k}_i \pm \mathbf{K} \qquad (6.15) $$

where k_i, k_s, and K are vectors in the horizontal plane associated with the incident radio wave, the scattered radio wave, and the swell, respectively.
The above discussion supposes that the incident radio wave, the normal
to the reflecting surface, and the wave vector of the swell are in the same
plane. This can be generalized to cover cases in which swell waves are
traveling in a direction that is not in the plane of incidence of the radio wave
(see Figure 6.5[c]).
The relationship between the Doppler shift observed in the scattered radio
wave and the swell wave is:
$$ f_s = f_i \pm f_w \qquad (6.16) $$

where
f_s is the frequency of the scattered wave,
f_i is the frequency of the incident wave, and
f_w is the frequency of the water wave.
An attempt could be made to determine the wave-directional spectrum by
using first-order returns and by using a range of radar frequencies and radar
look directions. This would involve a complicated hardware system. In
practice, it is likely to be easier to retain a relatively simple hardware system
and to use second-order, or multiple scattering, effects to provide wave
directional spectrum data. It is important to understand and quantify the
second-order effects because of the opportunities they provide for measuring
the wave height, the nondirectional wave-height spectrum, and the directional
wave height spectrum by inversion processes (see, for example, Shearman,
1981). The arguments given above can be extended to multiple scattering. For
successive scattering from two water waves with wave vectors K1 and K2, one
would use the following equations in place of Equations 6.15 and 6.16:
$$ \mathbf{k}_s = \mathbf{k}_i \pm \mathbf{K}_1 \pm \mathbf{K}_2 \qquad (6.17) $$
and
$$ f_s = f_i \pm f_{w1} \pm f_{w2} \qquad (6.18) $$

where
f_{w1} and f_{w2} are the frequencies of the two waves.
If we impose the additional constraint that the scattered radio wave must
constitute a freely propagating wave of velocity c, then:
$$ f_s \lambda = \frac{2\pi f_s}{k_s} = c \qquad (6.19) $$
This results in scattering from two sea waves traveling at right angles,
analogous to the corner reflector in optics or microwave radar.
The method used by Barrick involves using a Fourier series expansion of
the sea-surface height and a Fourier series expansion of the electromagnetic
field. The electromagnetic fields at the boundary, and hence the coefficients
in the expansion of the electromagnetic fields, are expanded using perturbation theory subject to the following conditions:
• The height of the waves must be very small compared with the radio
wavelength.
• The slopes at the surface must be small compared with unity.
• The impedance of the surface must be small compared with the
impedance of free space.
The first order in the perturbation series corresponds to the simple Bragg scattering previously described, whereas the second order corresponds to the “corner-reflector” scattering by two waves. A perturbation series expansion of the Fourier coefficients used in the description of the sea surface is
also used and an expression for the second-order scattered electromagnetic
field due to the second-order sea-surface wave field can be obtained. In the
notation used by Wyatt (1983), the backscattering cross section takes the form:

$$ \sigma(\omega) = \sigma_1(\omega) + \sigma_2(\omega) \qquad (6.20) $$

where σ₁(ω) and σ₂(ω) are the first-order and second-order scattering cross sections, respectively, and they are given by:

$$ \sigma_1(\omega) = 2^6 \pi k_0^4 \sum_{m=\pm 1} S(-2m\mathbf{k}_0)\, \delta(\omega - m\omega_B) \qquad (6.21) $$

where
k₀ = the radar wave vector;
ω = Doppler frequency;
ω_B = √(2gk₀) = the Bragg resonant frequency; and
S(k) = the sea-wave directional spectrum
and

$$ \sigma_2(\omega) = 2^6 \pi k_0^4 \sum_{m, m'=\pm 1} \int_0^{\infty} \int_{-\pi}^{\pi} |\Gamma|^2\, S(m\mathbf{k})\, S(m'\mathbf{k}')\, \delta\!\left(\omega - m\sqrt{gk} - m'\sqrt{gk'}\right) k\, \mathrm{d}k\, \mathrm{d}\theta \qquad (6.22) $$

where
k, k′ = wave vectors of the two interacting waves, where k + k′ = −2k₀;
k, θ = polar coordinates of k;
Γ = coupling coefficient = Γ_H + Γ_EM; and
Γ_H = hydrodynamic coupling coefficient:

$$ \Gamma_H = -\frac{i}{2}\left( k + k' - \frac{(kk' - \mathbf{k}\cdot\mathbf{k}')}{mm'\sqrt{kk'}}\, \frac{(\omega^2 + \omega_B^2)}{(\omega^2 - \omega_B^2)} \right) \qquad (6.23) $$

and Γ_EM = electromagnetic coupling coefficient:

$$ \Gamma_{EM} = \frac{1}{2}\, \frac{(\mathbf{k}\cdot\mathbf{k}_0)(\mathbf{k}'\cdot\mathbf{k}_0)/k_0^2 - 2\,\mathbf{k}\cdot\mathbf{k}'}{\sqrt{\mathbf{k}\cdot\mathbf{k}'} + k_0\Delta} \qquad (6.24) $$

where
Δ = the normalized electrical impedance of the sea surface.
The problem then is to invert Equations 6.21 and 6.22 to determine S(k),
the sea-wave directional spectrum, from the measured backscattering cross
section. These equations would enable one to compute the Doppler spectrum
(i.e., the power of the radio wave echo as a function of Doppler frequency)
by both first-order and second-order mechanisms, given the sea-wave height
spectrum in terms of wave vector. However, no simple direct technique is
available to obtain the sea-wave spectrum from the measured radar returns
using inversions of Equations 6.21 and 6.22. One approach is to simplify the
equation and thereby obtain a solution for a restricted set of conditions.
Alternatively, a model can be assumed for S(k), including some parameters, in order to calculate σ₁(ω) and to determine the values of the parameters by fitting to the measured values of σ₂(ω); for further details see Wyatt (1983). This is one example of the general problem mentioned at the end of Section 6.2 in relation to the inversion of Equations 6.8 and 6.9.
7
Active Microwave Instruments
7.1 Introduction
Three important active microwave instruments — the altimeter, the scatterometer, and the synthetic aperture radar (SAR) — are considered in this
chapter. Examples of each of these have been flown on aircraft. The first
successful flight of these instruments in space was on board Seasat. Seasat
was a proof-of-concept mission that only lasted for 3 months before the
satellite failed. Although improved versions of these types of instruments
have been flown on several subsequent satellites, including Geosat, Earth
Remote-Sensing Satellite–1 (ERS-1), ERS-2, TOPEX/Poseidon, Jason-1, and
Envisat, the general principles of what is involved and examples of the
information that can be extracted from the data of each of them are very
well illustrated by the Seasat experience.
7.2 The Altimeter
Satellite altimeters were designed in response to a requirement for accurate
determination of the Earth’s geoid — that is, the long-term mean equilibrium
sea surface. This requires:
• very accurate measurement of the distance from the satellite to the
surface of the sea vertically below it
• very accurate knowledge of the orbit of the satellite.
The principal measurement made by an altimeter is of the time taken for the
round trip of a very short pulse of microwave energy that is transmitted
vertically downward by the satellite, reflected at the surface of the Earth
(by sea or land), and then received back again at the satellite. The distance
of the satellite above the surface of the sea is then given by:
$$ h = \frac{1}{2}\, ct \qquad (7.1) $$

where c is the speed of light and t is the measured round-trip time.
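A back-of-envelope sketch of Equation 7.1 (mine, not the book's) shows the scale of the measurement, and the timing precision that the ±10 cm design goal mentioned below implies:

```python
C = 2.998e8  # speed of light (m s^-1)

def altimeter_height(t_round_trip):
    """Equation 7.1: height from the round-trip time of the radar pulse."""
    return 0.5 * C * t_round_trip

# A Seasat-like orbit of about 800 km altitude gives t of roughly 5.3 ms:
t = 2 * 800e3 / C
print(f"t = {t * 1e3:.3f} ms -> h = {altimeter_height(t) / 1e3:.1f} km")

# A +/-10 cm height accuracy implies timing good to about 0.7 ns:
print(f"required timing accuracy ~ {2 * 0.10 / C * 1e9:.2f} ns")
```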
[Figure 7.1: schematic showing the Seasat orbit, the altimeter height h above the ocean surface, the altitude h* and geoid height h_g above the reference ellipsoid, the ocean surface topography, the bottom topography, the instrument, atmospheric, and geophysical corrections, and a laser tracking site.]
FIGURE 7.1
Schematic of Seasat data collection, modelling and tracking system.
The Seasat altimeter transmitted short pulses at 13.5 GHz with a duration of 3.2 µs and a pulse repetition rate of 1020 Hz using a 1-m diameter antenna
looking vertically downward. The altimeter was designed to achieve an
accuracy of ±10 cm in the determination of the geoid (details of the design
of the instrument are given by Townsend, 1980). In order to achieve this
accuracy, one must determine the distance between the surface of the sea
and the satellite very accurately. The altitude h* measured with respect to a
reference ellipsoid (see Figure 7.1) can be expressed as:
$$ h^* = h + h_{sg} + h_i + h_a + h_s + h_g + h_t + h_0 + \varepsilon \qquad (7.2) $$
where h* is the distance from the center of mass of the satellite to the reference ellipsoid at the subsatellite point; h is the height of the satellite above the surface of the sea as measured by the altimeter; h_sg represents the effects of spacecraft geometry, including the distance from the altimeter feed to the center of mass of the satellite and the effect of not pointing vertically; h_i is the total height equivalent of all instrument delays and residual biases; h_a is the total atmospheric correction; h_s is the correction due to the surface and radar pulse interaction and skewness in the surface wave height distributions; h_g is the subsatellite geoid height; h_t is the correction for solid earth and ocean tides; h_0 is the ocean-surface topography due to such factors as
[Figure 7.2: sea surface height (m) against time (GMT), showing the signatures of the Gregg seamount and Bermuda.]
FIGURE 7.2
Sea surface height over sea mount-type features.
ocean circulation, barometric effects, and wind pile-up; and ε represents random measurement errors.
As far as h_a is concerned, significant atmospheric effects on the velocity of
the microwaves occur in the troposphere, where most of the atmospheric
gas is concentrated, and in the ionosphere, where the propagation through
a plasma (an overall electrically neutral assembly of electrons and positive
ions) has a significant effect on the velocity. We have already considered the
latter effect in considering the refractive index of a plasma in relation to the
previous discussion of sky wave radars (see Equation 6.11). So ha has an
ionospheric contribution. For the troposphere, one must apply a dry tropospheric correction to allow for the effect of the gases in the atmosphere and
a wet tropospheric correction to allow for the effects of water vapor and
liquid water droplets in clouds.
What is actually measured is h, which is measured to an accuracy of
approximately ±5 cm. Prior to the advent of Seasat, the geoid was only
known to an accuracy of about ±1 m; the idea is to measure or to calculate
all the other quantities in Equation 7.2 with sufficient accuracy that this
equation can be used to determine the height of the geoid to an accuracy of
±10 cm. Some examples of results obtained for height measurements with
the Seasat altimeter are shown in Figure 7.2 to Figure 7.4.
Since the days of Seasat, an enormous amount of work has been done to
refine our knowledge of the detailed shape of the geoid. It is not enough to
determine h* accurately. One needs to be able to determine the orbit accurately if one is to extract information about the geoid. To a first approximation, the satellite’s orbit is an ellipse calculated by assuming the Earth to be
a point mass (i.e., a perfect sphere with a spherically symmetrical density).
However, many factors cause this simple scheme to vary. These factors arise
from variations in the gravitational attraction of the Earth (including tidal
effects), the Moon and the Sun, the drag of the very thin atmosphere, and
the pressure of direct solar radiation and radiation reflected from the Earth.
[Figure 7.3: sea surface height (m) against time (GMT), showing Anguilla, the trench at the edge of the Venezuelan basin, and the Puerto Rican trench.]
FIGURE 7.3
Sea surface height over trench-type features.
These effects can be calculated, and improving knowledge of satellite orbits
is being used to improve the accuracy of these calculations. Accurate knowledge of satellite orbits comes from tracking the satellites. Data that describe
the orbit of a satellite are referred to as the orbit ephemeris, and ephemeris
data need to be constantly updated by tracking the satellite. Four methods
can be used for satellite tracking:
• Satellite laser ranging (SLR)
• Global positioning system (GPS)
• Doppler orbitography and radiopositioning integrated by satellite (DORIS)
• Precise range and range-rate equipment (PRARE)
[Figure 7.4: dynamic height h₀ (m) against time (GMT), showing the Gulf Stream.]
FIGURE 7.4
Dynamic height over the Gulf Stream.
The meaning of SLR is fairly obvious; a laser on the ground is pointed at
the satellite, and the transit time for the return pulse is measured. Satellite
GPS is the same system that is used for all sorts of purposes on the ground.
DORIS involves a series of about 50 ground stations that broadcast radio
signals at two frequencies (401.25 and 2036.25 MHz). A receiver on the
satellite detects the signal, measures the Doppler shift, and thus estimates
the rate at which the range (the range-rate) from the ground station is
changing. PRARE involves transmitting two microwave signals (at X-band
and S-band) from a satellite toward the ground. A ground station retransmits
them to the satellite so that the range can be determined; the ground station
also measures the range-rate. The accuracy attainable varies according to
method (see page 577 of Robinson [2004] for details).
Let us suppose that we have determined h* and the satellite orbit accurately; then we can accurately determine the height of the sea surface,
referred to some datum, along the subsatellite track. To determine the geoid,
of course, we have to remove tidal effects and allow for the fact that variations in the atmospheric pressure at the sea surface affect the height of the
sea surface. Having done all this, then we shall have a fairly accurate picture
of the geoid.
Perhaps now is a good time to consider why the geoid is important. It is
important because it provides information about the variations in the densities
of the subsurface rocks and, below the oceans, it provides information about
the height of the ocean floor. Altimetry data have been used to produce maps
or images of the bottom topography of the entire global oceans. One can think
of these variations in the Earth as causing changes in the gravitational
equipotentials — and, of course, the geoid is just that particular equipotential
that would describe the surface of the oceans if the water were completely at
rest. A first approximation to the description of the Earth, after considering
it to be a sphere, is to consider it as a spheroid — that is, an ellipsoid of
rotation about the N-S axis with an equatorial radius of 6378 km and a
polar radius of 6357 km. The deviations of the geoid from the reference
ellipsoid range from –104 m to +64 m. We represent the gravitational
potential, V(r, θ, λ), at a point a distance r from the center of the Earth, as
an expansion in terms of associated Legendre functions, P_l^m(sin θ) (l is an integer and −l ≤ m ≤ l):

$$ V(r, \theta, \lambda) = \frac{GM}{R} \sum_{l=0}^{\infty} \left(\frac{R}{r}\right)^{l+1} \sum_{m=0}^{l} P_l^m(\sin\theta)\left(C_{lm}\cos m\lambda + S_{lm}\sin m\lambda\right) \qquad (7.3) $$
where θ is the co-latitude (i.e., the latitude measured from the North Pole),
λ is the longitude, G is the gravitational constant, M is the mass of the Earth,
and R is the mean equatorial radius of the Earth.
The gravitational potential is therefore described by the values of the
coefficients C_lm and S_lm. It is obviously not feasible to determine these
coefficients for an infinite number of values of l, and so some upper limit,
L, is chosen in practice. Having chosen this representation for the gravitational potential, one can then express the geoid height, N(θ, λ), and the gravitational anomaly, Δg(θ, λ), in terms of the same coefficients C_lm and S_lm:

$$ N(\theta, \lambda) = \sum_{l=2}^{L} \sum_{m=0}^{l} P_l^m(\sin\theta)\left(C_{lm}\cos m\lambda + S_{lm}\sin m\lambda\right) \qquad (7.4) $$
and

$$ \Delta g(\theta, \lambda) = \gamma \sum_{l=2}^{L} (l-1) \sum_{m=0}^{l} P_l^m(\sin\theta)\left(C_{lm}\cos m\lambda + S_{lm}\sin m\lambda\right) \qquad (7.5) $$

where γ is a constant.
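As a sketch of how a truncated expansion such as Equation 7.4 can be evaluated numerically (my illustration only; real gravity models use fully normalized coefficients and normalized Legendre functions, and the coefficient values below are invented), one can sum the series directly with SciPy's associated Legendre function:

```python
import numpy as np
from scipy.special import lpmv

def geoid_height(theta_deg, lon_deg, C, S):
    """Evaluate Equation 7.4 for unnormalized coefficients C[l][m], S[l][m].

    theta_deg follows the book's convention for theta; a real geoid model
    would use fully normalized coefficients and Legendre functions.
    """
    x = np.sin(np.radians(theta_deg))
    lam = np.radians(lon_deg)
    L = len(C) - 1
    N = 0.0
    for l in range(2, L + 1):
        for m in range(0, l + 1):
            N += lpmv(m, l, x) * (C[l][m] * np.cos(m * lam)
                                  + S[l][m] * np.sin(m * lam))
    return N

# Toy degree-3 coefficient set (made-up numbers, in metres):
C = [[0.0] * 4 for _ in range(4)]
S = [[0.0] * 4 for _ in range(4)]
C[2][0], C[2][2], S[2][2] = -30.0, 5.0, -3.0
print(f"N = {geoid_height(45.0, 10.0, C, S):.2f} m")
```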
In addition to the various satellite-flown altimeters previously mentioned
that followed the Seasat altimeter, a number of satellite missions (CHAMP,
GRACE, and GOCE) have recently been dedicated to the study of the Earth’s
gravity (i.e., the study of the geoid); for details, see pages 627 to 630 of the
book by Robinson (2004).
As well as using the time of flight of the radar pulse to determine the
height of the altimeter above the surface of the sea, one can study the shape
of the return pulse to obtain information about the conditions at the surface
of the sea, especially the roughness of the surface and, through that, the near-surface wind speed. For a perfectly flat horizontal sea surface, the leading
edge of the return pulse would be a very sharp square step function corresponding to a time given by Equation 7.1 for radiation that travels vertically;
radiation traveling at an angle inclined to the vertical arrives slightly later
and causes a slight rounding of this leading edge. If large waves are present
on the surface of the sea, some radiation is reflected from the tops of the
waves, corresponding to a slightly smaller value of h and therefore a slightly
smaller value of t; in the same way, an extra delay of the radiation reflected
by the troughs of the waves occurs. Thus, for a rough sea, the leading edge
of the return pulse is considerably less sharp than the leading edge for a
calm sea (see Figure 7.5). Another way to think about this is to consider the
size of the “footprint” of a radar altimeter pulse on the water surface — that
is, the area of the surface of the sea that contributes to the return pulse
received by the altimeter; this depends on the sea state. At a given distance
from the nadir, the probability of a wave having facets that reflect radiation
back to the satellite increases with increasing roughness of the surface of the sea.
The area actually illuminated is the same; it is the area from which reflected
radiation is returned to the satellite that varies with sea state. For a low sea
state, the spot size for the Seasat altimeter was approximately 1.6 km. For a
higher sea state, the spot size increased up to about 12 km.
[Figure 7.5: return signal amplitude (counts) against waveform sample number for SWH = 2.4 m and SWH = 11 m.]
FIGURE 7.5
Return pulse shape as a function of significant wave height.
Section 6.2 discussed the reflection of the radiation transmitted by a radar, and the definition of the normalized scattering cross section σ⁰ of a surface was given in Equation 6.10. The power received at the radar is related to the scattering properties of the surface by the radar equation; Equation 6.6 or 6.7 relates the contribution from a single scatterer to the scattering cross section of that scatterer, whereas Equation 6.8 or 6.9 relates the total received power to the scattering cross sections of all the individual scattering elements. One can obtain the value for σ⁰ from an analysis of the return pulse received by an altimeter. The problem then is to be able to relate σ⁰ to the roughness of the surface — in other words, to the significant wave height (SWH or H_1/3). SWH is defined as the average height of the highest one-third of all the waves; it is usually taken to be four times the root-mean-square wave height (Longuet-Higgins, 1952). Then, to be able to determine the near-surface wind speed, one needs to relate the SWH to the near-surface wind speed. The direction of the wind cannot be determined from altimeter data.
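A minimal numerical illustration of the SWH convention (mine, using a synthetic Gaussian sea surface with an assumed 0.5 m rms elevation):

```python
import numpy as np

def significant_wave_height(eta):
    """SWH (H_1/3) approximated as four times the root-mean-square of the
    surface elevation eta (metres), following Longuet-Higgins (1952)."""
    return 4.0 * np.std(eta)

# Synthetic sea-surface elevations with an assumed 0.5 m rms:
rng = np.random.default_rng(1)
eta = rng.normal(0.0, 0.5, 10_000)
print(f"SWH = {significant_wave_height(eta):.2f} m")   # ~2.0 m
```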
In planning the Seasat mission, the objectives set, in terms of accuracy, were:
• Height measurements ±10 cm
• H1/3 (in the range 1 to 20 m), ±0.5 m or ±10% (whichever is larger)
• Wind speed ±2 ms⁻¹; σ⁰ ±1 dB.
There is no exact theoretical formula which can be used to determine the wind
speed from the shape of the return pulse via the SWH. The relationship
between the change in shape of the return pulse and the value of H1/3 at the
surface was determined empirically beforehand and, in processing the altimeter data from the satellite, a look-up table containing this empirical relationship was used. A comparison between the results obtained from the Seasat
altimeter and from buoy measurements is presented in Figure 7.6. Comparisons
between satellite-derived wave heights and measurements from buoys for
[Figure 7.6: SWH from buoys (m) against SWH from the onboard algorithm (m), for ocean station PAPA and buoys 41001, 42001, 42003, 44004, 46001, and 46005.]
FIGURE 7.6
Scatter diagram comparing SWH estimates from the National Oceanic and Atmospheric Administration (NOAA) buoy network and ocean station PAPA with Seasat altimeter onboard processor estimates (51 observations).
a number of more-recent systems are quoted by Robinson (2004). Figure 7.7
shows the results obtained for H_1/3 from the Seasat altimeter for an orbit that passed very close to a hurricane, Hurricane Fico, on July 16, 1978. In these data, values of H_1/3 of up to 10 m were obtained.
[Figure 7.7: SWH (m) and σ⁰ (dB) against time (GMT), with the point of closest approach (PCA) to Fico marked.]
FIGURE 7.7
Altimeter measurements over Hurricane Fico.
[Figure 7.8: buoy wind (ms⁻¹) against Seasat wind from the Brown algorithm (ms⁻¹).]
FIGURE 7.8
A scatter plot of Seasat radar altimeter inferred wind speeds as a function of the corresponding buoy measurements. (Guymer, 1987.)
The determination of the wind speed is also carried out via σ⁰, the normalized scattering cross section, determined from the received signal, where:

$$ \sigma^0 = a_0 + a_1(\mathrm{AGC}) + a_2(h) + L_P + L_a \qquad (7.6) $$

where
AGC is the automatic gain control attenuation,
h is the measured height,
L_P represents off-nadir pointing losses, and
L_a represents atmospheric attenuation.
For Seasat, the values of a_0, a_1(AGC), and a_2(h) were determined from prelaunch testing and by comparison with the Geodynamics Experimental Ocean Satellite-3 (GEOS-3) satellite altimeter at points where the orbits of the two satellites intersected. The calibration curve used to convert σ⁰ into wind speed was
obtained using the altimeter on the GEOS-3 satellite, which had been calibrated
with in situ data obtained from data buoys equipped to determine wind
speeds. Comparisons between wind speeds derived from the Seasat altimeter
and from in situ measurements using data buoys are shown in Figure 7.8.
As previously mentioned, several satellite-flown altimeters have been used
since the one flown on Seasat, and the accuracy of the derived parameters
has been improved. However, the principles involved in analyzing the data
remain the same.
7.3 The Scatterometer
An altimeter, which has just been described in Section 7.2, uses just one beam
directed vertically downward from the spacecraft and enables the speed of
the wind to be determined to ±2 ms–1, although the direction of the wind
cannot be determined. A scatterometer consists of a more-complicated
arrangement that actually uses four radar beams and enables the direction
as well as the speed of the wind to be determined. The first scatterometer
to be flown in space was flown on Seasat.
As mentioned in Section 6.2, the backscattering cross section varies with
the angle of incidence of the radar beam and different models are used for
the scattering by the sea surface for different ranges of angles of incidence.
For an altimeter, the angle of incidence is very small and a model based on near-normal reflection by an array of small facets was used. In
the case of a scatterometer, as in the case of ground wave and sky wave
radars, the angle of incidence is larger and the situation is assumed to be
described by Bragg scattering, which involves constructive interference
between reflections from successive waves on the sea surface:
$$ \lambda_s \sin\theta_i = \tfrac{1}{2}\, n\lambda \qquad (7.7) $$

where λ_s is the wavelength on the water surface, λ is the microwave wavelength, θ_i is the angle of incidence (measured from the vertical), and n is a small integer.
For typical microwave radiation, the value of λ is about 2 or 3 cm and for
the lowest order of reflection n = 1, so that λs must also be of the order of a
few centimeters. Thus, reflections arise from the capillary waves superimposed on the much longer wavelength gravity waves.
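A quick check of Equation 7.7 (my sketch; the 14.6 GHz frequency is that of the Seasat scatterometer quoted in the caption of Figure 7.10) confirms that the resonant water wavelengths are indeed centimetric:

```python
import math

C = 2.998e8  # speed of light (m s^-1)

def bragg_water_wavelength(f_radar_hz, incidence_deg, n=1):
    """Resonant sea-surface wavelength from Equation 7.7:
    lambda_s = n * lambda / (2 sin(theta_i))."""
    lam = C / f_radar_hz
    return n * lam / (2 * math.sin(math.radians(incidence_deg)))

# Seasat scatterometer: 14.6 GHz, incidence angles of roughly 25-55 deg.
for theta in (25, 40, 55):
    ls = bragg_water_wavelength(14.6e9, theta)
    print(f"theta_i = {theta} deg: lambda_s = {ls * 100:.2f} cm")
```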
The problem of determining the wind speed from the radar backscattering
cross section has already been mentioned in Chapter 6 with regard to ground
wave and sky wave radars and in Section 7.2 with regard to the altimeter.
The difficulty is to establish the detailed relationship between wind speed
and backscattering cross section. A similar problem exists with the extraction
of wind velocities from scatterometer data. The relationship between radar backscattering cross section and wind velocity has been established empirically; it has not been derived theoretically from first principles. This has been done using experimental data from wind-wave tanks and also by calibrating scatterometers on fixed platforms, on aircraft, and on satellites with the aid of simultaneous in situ data gathered at the ocean surface. The backscattering cross section σ⁰ increases with increasing wind speed, decreases with increasing angle of incidence (see Section 6.2), and depends on the beam azimuth angle relative to the wind direction (Schroeder
et al., 1982); it is generally lower for horizontal polarization than for vertical
polarization, and it appears to depend very little on the microwave frequency
in the range 5 to 20 GHz. An empirical formula that is used for the backscattering coefficient is:

$$ (\sigma^0)^2 = a_0(U, \theta_i, P) + a_1(U, P)\cos\varphi + a_2(U, P)\cos 2\varphi \qquad (7.8) $$

or

$$ (\sigma^0)^2 = G(\varphi, \theta_i, P) + H(\varphi, \theta_i, P)\log_{10} U \qquad (7.9) $$

where U is the wind speed, ϕ is the relative wind direction, and P indicates whether the polarization is vertical or horizontal.
The coefficients a_0, a_1, and a_2, or the functions G(ϕ, θ_i, P) and H(ϕ, θ_i, P), are
derived from fitting measured backscattering results with known wind speeds
and directions in calibration experiments. The form of the backscattering
coefficient as a function of wind speed and direction is shown in Figure 7.9.
Originally, these functions were determined with data from scatterometers flown on aircraft but, after the launch of Seasat, the values of these functions have been further refined. Assuming that the functions in Equation 7.8 have been determined, one can then use this equation with measurements of σ⁰ for two or more azimuth angles ϕ to determine both wind speed and wind direction.
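The following toy inversion (entirely my own sketch; the harmonic coefficients are made up, whereas the real a_0, a_1, a_2 are empirical functions of wind speed, incidence angle, and polarization) illustrates why fore- and aft-beam measurements alone can leave several candidate wind vectors:

```python
import numpy as np

# Toy forward model in the spirit of Equation 7.8, linear in wind speed U,
# with invented harmonic coefficients:
def sigma0_per_unit_wind(phi_deg):
    phi = np.radians(phi_deg)
    return 0.020 + 0.004 * np.cos(phi) + 0.008 * np.cos(2 * phi)

beams = np.array([45.0, 135.0])   # fore- and aft-beam azimuths (degrees)
true_U, true_dir = 10.0, 60.0
meas = true_U * sigma0_per_unit_wind(beams - true_dir)

# For each trial direction the best-fit speed follows by least squares;
# local minima of the residual are the candidate (ambiguous) solutions.
dirs = np.arange(0.0, 360.0, 0.5)
speeds, resids = [], []
for d in dirs:
    basis = sigma0_per_unit_wind(beams - d)
    U = float(basis @ meas / (basis @ basis))
    speeds.append(U)
    resids.append(float(np.sum((U * basis - meas) ** 2)))
resids = np.array(resids)
is_min = (resids < np.roll(resids, 1)) & (resids < np.roll(resids, -1))
for d, U, r in zip(dirs[is_min], np.array(speeds)[is_min], resids[is_min]):
    print(f"direction {d:5.1f} deg, speed {U:5.2f} m/s, residual {r:.2e}")
```

Each local minimum of the residual is a candidate (speed, direction) pair; as noted below, as many as four such solutions can occur in practice, and extra information is needed to remove the ambiguity.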
[Figure 7.9: σ⁰_VV (dB) against relative wind direction (°) for wind speeds from 2 to 30 ms⁻¹.]
FIGURE 7.9
Backscatter cross section σ⁰ against relative wind direction for various wind speeds. Vertical polarization at 30° incidence angle. (Offiler, 1983.)
[Figure 7.10: viewing geometry sketch showing the satellite track, antenna beams 1–4, incidence angles of 25° and 55°, Doppler cells, and distances of 400 to 800 km marked.]
FIGURE 7.10
The Seasat scatterometer viewing geometry: section in the plane of beams 1 and 3 (top diagram),
beam illumination pattern and ground swath (bottom diagram). This scatterometer operated
at 14.6 GHz (Ku-band) and a set of Doppler filters defined 15 cells in each antenna beam. Either
horizontal or vertical polarization measurements of backscatter could be made.
The scatterometer on the Seasat satellite used four beams altogether; two
of them pointed forward, at 45° to the direction of flight of the satellite, and two pointed aft, also at 45° to the direction of flight (see Figure 7.10).
Two looks at a given area on the surface of the sea were obtained from the
forward-pointing and aft-pointing beams on one side of the spacecraft; the
change, as a result of Earth rotation, in the area of the surface actually
viewed is quite small. The half-power beam widths were 0.5° in the horizontal plane and about 25° in the vertical plane. This gave a swath width
of about 500 km on each side, going from 200 km to 700 km away from
the subsatellite track. The return signals were separated to give backscattering data from successive areas, or cells, along the strip of sea surface
being illuminated by the transmitted pulse. The spatial resolution was thus
approximately 50 km. The extraction of the wind speed and direction from
the satellite data involves the following steps:
• Identifying the position of each cell on the surface of the Earth and determining the area of the cell and the slant range
• Calculating the ratio of the received power to the transmitted power
• Determining the values of the system losses and the antenna gain in the cell direction from the preflight calibration data
• Calculating σ⁰ from the radar equation and correcting this calculation for atmospheric attenuation derived from the Scanning Multichannel Microwave Radiometer (SMMR) (which was also flown on the Seasat satellite) as well as for other instrumental biases.
It is then necessary to combine the data from the two views of a given cell
from the fore and aft beams and thence determine the wind speed and
direction using look-up tables for the functions G(ϕ, θi, P) and H(ϕ, θi, P). The
answer, however, is not necessarily unique; there can be as many as four
solutions, each with similar values for wind speed but with quite different
directions (see Figure 7.9). The scatterometer on Seasat was designed to
measure the surface wind velocity with an accuracy of ±2 ms–1 or ±10%
(whichever is the greater) in speed and ±20° in direction, over the range of
4 to 24 ms−1 in wind speed.
In spite of the Seasat satellite’s relatively short lifespan, some evaluations of
the derived parameters were obtained by comparing them with in situ measurements over certain test areas. One such exercise was the Gulf of Alaska
Experiment, which involved several oceanographic research vessels and buoys
and an aircraft carrying a scatterometer similar to the one flown on Seasat
(Jones et al., 1979). Comparisons with the results from in situ measurements
showed that the results obtained from the Seasat scatterometer were generally
correct to the level of accuracy specified at the design stage, although systematic errors were detected and this information was used to update the algorithms used for processing the satellite data (Schroeder et al., 1982).
A second example is the Joint Air-Sea Interaction (JASIN) project, which
took place in the north Atlantic between Scotland and Iceland during the
period that Seasat was operational. Results from the JASIN project also showed
that the wind vectors derived from the Seasat scatterometer data were accurate
well within the values specified at the design stage. Again, these results were
used to refine the algorithms used to derive the wind vectors from the scatterometer data for other areas (Jones et al., 1981; Offiler, 1983). The Satellite
Meteorology Branch of the U.K. Meteorological Office made a thorough investigation of the Seasat scatterometer wind measurements, using data for the
JASIN project, which covered a period of 2 months. The Institute of Oceanographic Sciences (then at Wormley, U.K.) had collated much of the JASIN data,
but the wind data measured in situ applied to the actual height of the anemometer that provided the measurements, which varied from 2.5 m above
sea level to 23 m above sea level. For comparison with the Seasat data, the
wind data were corrected to a common height of 19.5 m above sea level. Each
Seasat scatterometer value of the wind velocity was then paired, if possible,
with a JASIN observation within 60 km and 30 minutes; a total of 2724 such
pairs were obtained. Because more than one solution for the direction of the
wind derived from the scatterometer data was possible, the value that was
closest in direction to the JASIN value was chosen. Comparisons between the
[Figure 7.11: (a) SASS wind speed (ms⁻¹) against JASIN wind speed (ms⁻¹), N = 2724, r = 0.84, SASS = 0.2 + 0.97 × JASIN; (b) SASS wind direction (°) against JASIN wind direction (°), N = 2724, r = 0.99, SASS = 2.7 + 0.99 × JASIN.]
FIGURE 7.11
Scatter diagrams of (a) wind speed and (b) wind direction measurements made by the Seasat scatterometer against colocated JASIN observations. The design root-mean-square limits of 2 ms⁻¹ and 20° are indicated by the solid parallel lines and the least-squares regression fit by the dashed line. Key: * = 1 observation pair; 2 = 2 coincident observations; etc.; ‘0’ = 10 and ‘@’ = more than 10. (Offiler, 1983.)
wind speeds obtained from the Seasat scatterometer and the JASIN surface
data are shown in Figure 7.11(a); similar comparisons for the direction are
given in Figure 7.11(b). Overall, the scatterometer-derived wind velocities
agreed with the surface data to within ±1.7 ms–1 in speed and ±17° in direction.
However, data from one particular Seasat orbit suggest that serious errors in
scatterometer-derived wind speeds may be obtained when thunderstorms are
present (Guymer et al., 1981; Offiler, 1983).
One of the special advantages of satellite-derived data is its high spatial
density. This is illustrated rather well by the example of a cold front shown
in Figure 7.12(a) and Figure 7.13. Figure 7.12(a) shows the synoptic situation
at midnight GMT on August 31, 1978, and Figure 7.12(b) shows the wind
field. These images were both derived from the U.K. Meteorological Office’s
10-level model objective analysis on a 100-km grid. Fronts have been added
manually, by subjective analysis. The low pressure over Iceland had been
moving north-eastward, bringing its associated fronts over the JASIN area
by midnight. On August 31, 1978, Seasat passed just south of Iceland at 0050
GMT, enabling the scatterometer to measure winds in this area (see Figure
7.13, which also shows the observations and analysis at 0100 GMT). The
points M and T indicate two stations that happened to be on either side of
the cold front. At most points, there are four possible solutions indicated,
but the front itself shows clearly in the scatterometer-derived winds as a line
of points at which there are only two, rather than four, solutions. With
experience, synoptic features such as fronts, and especially low pressure
centers, can be positioned accurately, even with the level of ambiguity of
solutions. The subjective analysis of scatterometer-derived wind fields has
been successfully demonstrated by, for example, Wurtele et al. (1982).
As previously mentioned, Seasat lasted only 3 months in operation. Since Seasat, various scatterometers have been flown in space. The first scatterometer flown after Seasat was on the European Space Agency's ERS-1 satellite, which was launched in 1991. The next was a U.S. instrument, NSCAT, which was launched in 1996. After the failure of NSCAT, another scatterometer, SeaWinds, was launched on the QuikScat platform in 1999.
Instead of a small number of fixed antennae, this scatterometer uses a
rotating antenna. It is capable of measuring wind speed to ±2 ms–1 in the
range 3 to 20 ms–1 and to 10% accuracy in the range 20 to 30 ms–1 and wind
direction to within 20°. Various empirical models relating σ0 to wind
velocity have been developed for these systems (see for example Robinson
[2004]). The accuracy of the retrieved wind speeds has thus been improved
and scatterometers have become accepted as operational instruments, used
by meteorologists as a source of real-time information on global wind
distribution and invaluable for monitoring the evolution of tropical
cyclones and hurricanes. Oceanographers also have come to rely on the
scatterometer record for forcing ocean models. As operational systems are
developed for ocean forecasting, developers will look to scatterometers to
provide near-real time input in, for example, oil spill dispersion models or
wave forecasting models.
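To make the idea of an empirical model function concrete, the sketch below (in Python) inverts a simple power-law relation between σ0 and wind speed. Both the power-law form and the coefficients are assumptions for illustration only; they are not the operational model functions, which also depend on incidence angle and on the angle between the wind direction and the radar azimuth.

    def sigma0_model(u, a=0.001, b=1.6):
        # Illustrative (assumed) model function: sigma0 = a * u**b
        return a * u ** b

    def retrieve_speed(sigma0, a=0.001, b=1.6):
        # Invert the assumed power law analytically for wind speed
        return (sigma0 / a) ** (1.0 / b)

    measured = sigma0_model(12.0)      # simulate a backscatter measurement
    print(retrieve_speed(measured))    # recovers 12.0 m/s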
FIGURE 7.12
Example of (a) mean sea level pressure (isobars labeled in mbar, with highs H and lows L
marked) and (b) 1000 mbar vector winds for August 31, 1978.
FIGURE 7.13
Cold front example, 0050 GMT, orbit 930 (vertical polarization). JASIN observations of 01Z are
plotted against latitude and west longitude, with isobars labeled in mbar. Key: GE = Gardline
Endurer; H = Hecla; M = Meteor; T = Tydeman; W2 = Buoy W2. (Offiler, 1983.)
7.4 Synthetic Aperture Radar
Conventional remote sensing of the surface of the Earth from aircraft or
spacecraft involves using either cameras or scanners that produce images
in a rather direct manner. These instruments are passive instruments; they
receive the radiation that happens to fall upon them and select the particular
range of wavelengths that have been chosen for the instrument. When these
instruments operate at visible or infrared wavelengths, they are capable of
quite good spatial resolution (see Chapter 3). However, at visible and
infrared wavelengths, these instruments are not able to see through clouds
so that, if the atmosphere is cloudy, they produce images of the top of the
clouds and not the surface of the Earth. By moving into the microwave
part of the electromagnetic spectrum, scanners are able to see through
clouds and hence to obtain images of the surface of the Earth even when
the weather is cloudy, provided that there is not too much precipitation.
Scanners operating in the microwave range of the electromagnetic spectrum
FIGURE 7.14
Imaging radar operations. (Block diagram components: master oscillator, multiplier and
modulator, amplifier, antenna, receiver, and terrain in the real-time path; signal film, CRT,
video amplifier, mixer, optical correlator, and image film delivered to the user in the
post-mission path.)
have very much poorer spatial resolution (from 27 km to 150 km for the
SMMR on Seasat or Nimbus-7 [see Section 2.5]). Better spatial resolution
can be achieved with an active microwave system but, as mentioned in
Section 2.5, a conventional (or real aperture) radar of the size required
cannot be carried on a satellite. SAR provides a solution to the size
constraints.
The reconstruction of an image from SAR data is not trivial or inexpensive in terms of computer time, and the theories involved in the development of the algorithms that have to be programmed are complex. This
means that the use of SAR involves a sophisticated application of radar
system design and signal-processing techniques. Thus, an SAR for remote
sensing work consists of an end-to-end system that contains a conventional
radar transmitter, an antenna, and a receiver together with a processor
capable of making an image out of an uncorrelated Doppler phase history.
A simplified version of an early system is shown in Figure 7.14. As with
any other remote sensing system, the actual design used depends on the
user requirements and on the extent to which it is possible to meet these
requirements with the available technology. The system illustrated is based
on the earliest implementation technique used to produce images from
SAR — that of optical processing. Although optical processing has some
advantages, and was very important for the generation of quicklook images
at the time of Seasat, it has now been replaced by electronic (digital)
processing techniques.
The key to SAR image formation lies in the Doppler effect — in this case,
the shift in frequency of the signal transmitted and received by a moving
radar system. The usual expression for Doppler frequency shift is:
∆f = ±vf/c = ±v/λ   (7.10)
The velocity, v, in this expression is the radial component of the velocity
which, in this case, is the velocity of the platform (aircraft or satellite) that
is carrying the radar. The positive sign corresponds to the case of approach
of source and observer, and the negative sign corresponds to the case of
increasing separation between source and observer. For radar, there is a twoway transit of the radio waves between the transmitter and receiver giving
a shift of:
∆f = ±2v/λ   (7.11)
For SAR then, the surfaces of iso-Doppler shift are cones with their axes
along the line of flight of the SAR antenna and with their vertices at the
current position of the antenna (see Figure 7.15); the corresponding iso-Doppler contours on the ground are shown in Figure 7.16.
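A quick numerical check of Equation 7.11, taking the full platform speed as the radial velocity component (so the figure below is the maximum shift) and assuming round values typical of an L-band satellite SAR:

    # Two-way Doppler shift (Equation 7.11): delta_f = 2*v/lambda
    v = 7500.0           # platform speed (m/s), assumed round value
    wavelength = 0.235   # L-band wavelength (m), roughly that of the Seasat SAR
    delta_f = 2.0 * v / wavelength
    print(f"maximum Doppler shift: {delta_f / 1e3:.1f} kHz")   # about 63.8 kHz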
A few points from the theory of conventional (real aperture) radar should
be reconsidered for airborne radars. For an isotropic antenna radiating
power P, the energy flux density at distance R is:
P/(4πR²)   (7.12)
FIGURE 7.15
Iso-Doppler cone. (Geometry: platform velocity V at height h, look angle θ, range R, with the
iso-Doppler contour on the ground in the x–y plane.)
FIGURE 7.16
Iso-Doppler ground contours, plotted in x/h and y/h coordinates for cone angles θ from 5° to 80°.
If the power is concentrated into a solid angle Ω instead of being spread out
isotropically, the flux will be:
P/(ΩR²)   (7.13)
in the direction of the beam and 0 in other directions. The half-power beam
width of an aperture can be expressed as:
θ = 1/η   (7.14)
where
η is the size of the aperture expressed in wavelengths,
θ can be expressed as
θ = Kλ/D   (7.15)
where λ is the wavelength, D is the aperture dimensions, and K is a numerical
factor, the value of which depends on the characteristics of the particular
antenna in question. K is of the order of unity and is often taken to be equal
to one for convenience.
For an angular resolution θ, the corresponding linear resolution at range
R will be given by Rθ. If the same antenna is used for both transmission and
reception, the angular resolution is reduced to θ/2 and the linear resolution
becomes Rλ/2D. For a radar system mounted on a moving vehicle, this value
is the along-track resolution. From this formula, one can see that for conventional real aperture radar, resolution is better the closer the target is to
the radar. Therefore, a long antenna and a short wavelength are required for
good resolution.

FIGURE 7.17
Angular resolution. (Geometry at time tn: the azimuth resolution RESAZ at a given range, with
the range or cross-track direction perpendicular to the flight path.)
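Substituting representative round numbers into Rλ/2D shows why real aperture imaging from orbit is impractical; all three values below are assumptions for illustration:

    # Along-track resolution of a real aperture radar: R*lambda/(2*D)
    R = 850e3            # slant range from orbit (m), assumed
    wavelength = 0.235   # L-band wavelength (m), assumed
    D = 10.0             # physical antenna length (m), assumed
    print(R * wavelength / (2.0 * D))   # ~10,000 m: far too coarse for imaging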
Now consider the question of the resolution in a direction perpendicular
to the direction of motion of the moving radar system. A high-resolution
radar on an aircraft is mounted so as to be side-looking, rather than looking
vertically downward; the acronym SLAR (side-looking airborne radar)
follows from this. The reason for looking sideways is to remove the problem
of ambiguity, or coalescence, that would arise involving the two returns from
points equidistant from the sub-aircraft track if a vertical-looking radar were
used. The radiation pattern is illustrated in Figure 7.17 (i.e., a narrow beam
directed at right angles to the direction of flight of the aircraft). A pulse of
radiation is transmitted and an image of a narrow strip of the Earth’s surface
can be generated from the returns (see Figure 7.18).

FIGURE 7.18
Pulse ranging. (Geometry showing the flight path, ground track, slant range, and ground range
across the swath from A to B.)

By the time the next
pulse is transmitted and received, the aircraft has moved forward a little and
another strip of the Earth’s surface is imaged. A complete image of the swath
AB is built up by the addition of the images of successive strips. Each strip
is somewhat analogous to a scan line produced by an optical or infrared
scanner. Suppose that the radar transmits a pulse of length L (L = cτ, where τ
is the duration of the pulse); then if the system is to be able to distinguish
between two objects, the reflected pulses must arrive sequentially and not
overlap. The objects must therefore be separated by a distance along the
ground that is greater than L/(2 cos ψ), where ψ is the angle between the
direction of travel of the pulse and the horizontal. The resolution along
the ground in the direction at right angles to the line of flight of the platform,
or the range resolution as it is called, is thus c/(2βcosψ), where β, the pulse
bandwidth, is equal to 1/τ.
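A minimal numerical sketch of the range resolution formula c/(2β cos ψ); the bandwidth and angle below are assumed values for illustration:

    import math

    c = 3.0e8                  # speed of light (m/s)
    beta = 19.0e6              # pulse (chirp) bandwidth (Hz), assumed
    psi = math.radians(67.0)   # pulse direction angle from the horizontal, assumed
    print(c / (2.0 * beta * math.cos(psi)))   # ~20 m ground range resolution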
It is possible to identify limits on the pulse repetition frequency (PRF) that
can be used in an SAR. The Doppler history of a scatterer as the beam passes
over it is not continuous but is sampled at the PRF. The sampling must be
at a frequency that is at least twice the highest Doppler frequency in the
echo, and this sets a lower limit for the PRF. An upper limit is set by the
need to sample the swath unambiguously in the range direction — in other
words, the echoes must not overlap. The PRF limits prove to be:
2v/D ≤ PRF ≤ c/(2W cos ψ)   (7.16)
where W is the swath width along the ground in the range direction.
These are very real limits for a satellite system and effectively limit the swath
width achievable at a given azimuth resolution.
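A sketch of Equation 7.16 with assumed satellite values shows how narrow the allowed PRF window is and why the swath width is constrained:

    import math

    v = 7500.0                 # platform speed (m/s), assumed
    D = 10.0                   # antenna length (m), assumed
    W = 100e3                  # ground swath width (m), assumed
    psi = math.radians(67.0)   # pulse angle from the horizontal, assumed
    c = 3.0e8

    prf_min = 2.0 * v / D                     # azimuth (Doppler) sampling limit
    prf_max = c / (2.0 * W * math.cos(psi))   # range ambiguity limit
    print(prf_min, prf_max)   # 1500 Hz against ~3800 Hz: widening W closes the window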
It is important to realize that an SAR and a conventional real aperture
radar system achieve the same range resolution; the reason for utilizing
aperture synthesis is to improve along-track resolution (also called the angular cross-range resolution or azimuth resolution). It should also be noticed
that the range resolution is independent of the distance between the ground
and the vehicle carrying the radar. The term “range resolution” is used to
mean the resolution on the ground and at right angles to the direction of
flight; it is not a distance along the direction of propagation of the pulses.
To increase the range resolution, for a given angle ψ, the pulse duration τ
has to be made as short as possible. However, it is also necessary to transmit
enough power to give rise to a reflected pulse that, on return to the antenna,
will be large enough to be detected by the instrument. In order to transmit
a given power while shortening the duration of the pulse, the amplitude of
the signal must be increased; however, it is difficult to design and build
equipment to transmit very short pulses of very high energy. A method that
is very widely adopted to cope with this problem involves using a “chirp”
instead of a pulse of a pure single frequency. A chirp consists of a long pulse
with a varying frequency. When the reflected signal is received by the
antenna, it is fed through a “dechirp network” that produces different delays
for the different frequency components of the chirp. This can be thought of
as compressing the long chirp pulse into a much shorter pulse of a correspondingly higher amplitude and therefore increasing the range resolution;
alternatively it can be thought of as dealing with the problem of overlapping
reflected pulses by using the differences in the frequencies to distinguish
between them. Using a pulse with a varying frequency in a detection and
ranging system is not original to radar systems. It is actually used by some
species of bats in an ultrasonic version of a detection and ranging system
(see, for example, Cracknell [1980]).
From the expression Rλ/2D given above for the along-track resolution, it can
be seen that to obtain good along-track resolution for a real aperture radar,
one needs a long antenna, a short wavelength, and a close range. There are
limits to the lengths of the antennae that can reasonably be carried and
stabilized on a satellite or on an aircraft flying at a high altitude; moreover,
the use of shorter wavelengths involves greater attenuation of the radiation
by clouds and the atmosphere and thereby reduces the all-weather capability
of the radar. Whereas conventional radars are primarily used in short-range
operations at low level, SARs were developed to overcome these difficulties
and are now used both in aircraft and in satellites.
An SAR has an antenna that travels in a direction parallel to its length.
The antenna generally moves continuously, but the signals transmitted and
received back are pulsed. The pulses are transmitted at regular intervals
along the flight path; when these individual signals are stored and then
added, an antenna of long effective length is synthesized in space. Of course,
this synthetic antenna is many times longer than the actual antenna and,
therefore, gives a much narrower beam and much better resolution. However,
an important difference between a real aperture antenna and the synthetic
antenna should be noted: for the real aperture antenna, only a single pulse
at a time is transmitted, received, and displayed as a line on the image; for
the synthetic antenna, each target produces a large number of return pulses.
This set of returns from each target must be stored and then combined in
an appropriate manner so that the synthetic antenna can simulate a physical
antenna of the same length. The along-track resolution is then determined
from a theory very similar to the theory of the resolution of an ordinary
diffraction grating. In place of the spacing between lines on the grating, d
represents the distance between successive positions of the transmitting
antenna when pulses are transmitted. Without considering the details of the
theory, we simply quote the result — namely that the along-track resolution
of an SAR is equal to half the real length of the antenna used (see, for
example, section 10.5.1 of Woodhouse [2006]). This may, at first sight, seem
rather surprising. However, the distance from the platform to the surface of
the Earth is not entirely irrelevant; one must remember that it is necessary
to transmit enough power to be able to receive the reflected signal.
If a radar is flown on an aircraft, one can, for practical purposes, ignore
the curvature of the Earth, the rotation of the Earth, and the fact that the
wavefront of the radar system is a spherical wave and not a plane wave.
On the other hand, these factors must be taken into account in a radar
system that is flown on a satellite. The range from the radar to an individual scattering point in the target area on the ground changes as the
beam passes over the scattering point. This is known as “range walk.”
There are two components to this effect: one is a quadratic term resulting
from the curvature of the Earth and the other is a linear term resulting
from the rotation of the Earth. Each point must be tracked through the
aperture synthesis to remove this effect; the actual behavior for a particular point depends on the latitude and the range. In order to compensate
for the curvature of the reflected wave front, one must add a range-dependent
quadratic phase shift along the synthetic aperture. This is equivalent to
focusing the radar at each range gate and, if not carried out, the ground
along-track resolution will be degraded to K√(λR), or approximately √(λR).
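The gap between the focused and unfocused cases is worth a quick numerical comparison, reusing the same assumed geometry as earlier:

    import math

    wavelength = 0.235   # m, assumed
    R = 850e3            # slant range (m), assumed
    D = 10.0             # real antenna length (m), assumed

    focused = D / 2.0                        # focused SAR along-track resolution
    unfocused = math.sqrt(wavelength * R)    # ~K*sqrt(lambda*R) with K of order 1
    print(focused, unfocused)                # 5 m against ~450 m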
FIGURE 7.19
Seasat SAR image of the Tay Estuary, Scotland, from orbit 762 on August 19, 1978, processed
digitally. (RAE Farnborough.)
One very obvious feature of any SAR image is a characteristic “grainy”
or speckled appearance (see Figure 7.19). This feature is common to all
coherent imaging systems, and it arises as a result of scattering from a
rough surface (i.e., from a surface on which the irregularities are large with
respect to the wavelength). This speckle can provide a serious obstacle in
the interpretation of SAR images. A technique known as “multilooking”
is commonly used to reduce the speckle. To obtain the best along-track
resolution, the full Doppler bandwidth of the echoes must be utilized.
However, it is possible to use only a fraction of the available bandwidth
and produce an image over what is effectively a subaperture of the total
possible synthetic aperture; however, this will have a poorer along-track
resolution. By using Doppler bands centered on different Doppler frequencies, so that there is no overlap, one can generate a number of independent
images for a given scene. Because these different images — or “looks” as
they are commonly called — are independent, their speckle patterns are
also statistically independent. The speckle can then be very much reduced
by incoherently averaging over the different “looks” to produce a multilook
image; however, this reduction of the speckle is achieved at the expense
of azimuthal (along-track) resolution. Typically three or four looks are used
to produce a multilook image.
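The statistics behind multilooking can be demonstrated in a few lines: single-look speckle intensity is approximately exponentially distributed, so its standard deviation equals its mean, and incoherent averaging of N independent looks reduces the normalized standard deviation by 1/√N. A minimal simulation (the scene here is just a uniform surface):

    import numpy as np

    rng = np.random.default_rng(0)
    # Four statistically independent looks at a uniform surface with unit backscatter
    looks = rng.exponential(1.0, size=(4, 512, 512))

    single = looks[0]                 # one-look image: full speckle
    multi = looks.mean(axis=0)        # incoherent average over the four looks

    print(single.std() / single.mean())   # ~1.0
    print(multi.std() / multi.mean())     # ~0.5, i.e. 1/sqrt(4)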
Having considered the question of spatial resolution, a little consideration will now be given to the question of image formation using an SAR.
It is not appropriate in this book to consider the details of the theory
involved in the construction of an image from raw SAR data (for details
see, for example, Curlander and McDonough [1991]; Cutrona et al. [1966];
Lodge [1981]; McCord [1962] and Woodhouse [2006]). Having established
the theory, the processing itself can be carried out using either optical or
digital techniques. In the early days of space-borne SAR, such as with Seasat,
optical image-processing was used to produce quicklook imagery; this quicklook imagery was used to select some scenes for digital processing, which
then was very time consuming. However, since that time, computing facilities have improved so that digital techniques are now universally used (see
Figure 7.20 and Figure 7.21).
It is also not within the scope of this book to enter into extensive
discussions of the problems involved in the interpretation of SAR images.
In a radar image, the intensity at each point represents the radar backscattering coefficient for a point, or small area, on the ground. In a photograph
or scanner image in the visible or near-infrared region of the electromagnetic
spectrum, the intensity at a point in the image represents the reflectivity, at
the appropriate wavelength, of the corresponding small area on the ground.
In a thermal-infrared or passive microwave image, the intensity is related
to the temperature and the emissivity of the corresponding area on the
ground. One must remember not to assume that there is necessarily any
simple correlation between images of a given piece of the surface of the Earth
produced by such very different physical processes, even if the data used to
produce the images were generated simultaneously.
FIGURE 7.20
Optically processed Seasat SAR image of the English Channel from orbit 762 on August 19, 1978
(area approximately 51°30'N to 52°30'N, 1°00'E to 1°30'E).
7.5 Interferometric Synthetic Aperture Radar
At each point in the image field of an SAR, the data actually comprise both
the intensity and the phase, although only the intensity is represented in
images, such as in Figure 7.19 to Figure 7.21. In a single image, the phase is
usually ignored. However, if one has two different SAR images of the same
ground area, then one has the capability of forming an interference pattern,
or interferogram. The study and use of interferometric SAR (referred to in
the literature both as InSAR and IFSAR) has developed very fast since the
late 1980s. In practical terms, the conditions are different for airborne SARs
and for satellite-flown SARs. In the case of airborne SAR systems, two SARs
are mounted, one on either side of the aircraft, a fixed distance apart so that
the two images are generated simultaneously.

FIGURE 7.21
Digitally processed data for part of the scene shown in Figure 7.20. (RAE Farnborough.)

In the case of satellite-flown
SAR systems, the two images are generated by the same instrument, or by
two similar instruments, in separate orbits and at times determined by the
orbits. The data for these two images are not gathered at the same time.
Radar interferometry using data from an airborne or satellite-flown system
enables the shape of the target surface, the surface of the Earth, to be determined. Radar interferometry appears to have been used first in studies of
the surfaces of Venus and of the Moon (Rogers and Ingalls, 1969). The first
work on the use of SAR for terrestrial topographic mapping was performed
by Graham (1974). The first work on spaceborne SAR interferometry was
based on the use of data from two Seasat images whose acquisition was
separated by 3 days (Goldstein et al., 1988). A historical review and a brief
introduction to the theory of InSAR, along with an extensive bibliography,
is given by Gens and van Genderen (1996). The early work was done with
airborne data and Shuttle Imaging Radar data, but then enormous use came
to be made of SAR data from ERS-1, ERS-2, and from Radarsat.
Let us consider some of the simplest aspects of the theory of InSAR. Suppose
that a point P on the ground is imaged by two SARs traveling along lines
parallel to one another (or in two satellite orbits where the tangents are
parallel when point P is being imaged). Figure 7.22 shows the construction
of the plane perpendicular to these two parallel lines and containing the
point P.

FIGURE 7.22
Diagram to illustrate the geometry associated with the phase difference for interferometric
SAR: the two antennae at O1 and O2 are separated by the baseline B (components By and Bz,
tilt angle ξ); antenna 1 lies at height H above the origin O and views the ground point P, of
height z, at look angle θ along slant ranges r1 and r2.

Then the path difference between the paths of the two return signals
is 2(r2 – r1) and the corresponding phase difference ϕ is given by:
ϕ = (4π/λ)|r2 – r1|
(7.17)
The baseline B representing the separation of the positions of the two antennae can be written as (0, By , Bz), where each By and Bz may be positive or
negative. Thus r2 = r1 + B and a simple vector calculation shows that:
r2 (= |r2|) = r1 + By sin θ + Bz cos θ
(7.18)
where it is assumed that the baseline B is short compared with r1 and r2, so
that terms of the order of B2 can be neglected. Therefore:
ϕ = (4π/λ)(By sin θ + Bz cos θ)
(7.19)
The height z(x, y) of the point P above the chosen datum is given by:
z(x, y) = H – r1cosθ
(7.20)
and the angle ξ , the baseline tilt angle, can be brought in by writing:
cos θ = cos(ξ + (θ − ξ))
      = cos ξ cos(θ − ξ) − sin ξ sin(θ − ξ)
      = cos ξ √(1 − sin²(θ − ξ)) − sin ξ sin(θ − ξ)   (7.21)
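If the phase has already been unwrapped, Equations 7.19 and 7.20 can be applied directly: recover θ from ϕ, then the height z. Below is a sketch with assumed baseline and orbit values (the bisection relies on ϕ increasing with θ over the search interval, which holds for this geometry):

    import math

    wavelength = 0.056    # radar wavelength (m), assumed C-band value
    By, Bz = 100.0, 30.0  # baseline components (m), assumed
    H = 790e3             # platform height above the datum (m), assumed
    r1 = 850e3            # slant range from antenna 1 (m), assumed

    def phi(theta):
        # Equation 7.19: (unwrapped) phase difference for look angle theta
        return 4.0 * math.pi / wavelength * (By * math.sin(theta) + Bz * math.cos(theta))

    def theta_from_phi(target, lo=0.1, hi=1.0):
        # Simple bisection; valid while phi() increases with theta on [lo, hi]
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if phi(mid) < target else (lo, mid)
        return 0.5 * (lo + hi)

    theta = theta_from_phi(phi(math.radians(23.0)))   # round-trip test at 23 degrees
    z = H - r1 * math.cos(theta)                      # Equation 7.20
    print(math.degrees(theta), z)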
FIGURE 7.23
A simulated noise-free interferogram for a single pyramid on an otherwise flat surface.
(Woodhouse, 2006.)
Consider what happens if the surface that is being observed is perfectly
flat. If point P is moved parallel to the lines of flight of the SARs, there is
no change in θ and therefore no change in phase. If P is moved along a
direction parallel to the y axis, then θ changes (smoothly) and this leads to
corresponding changes in ϕ. Thus the interferogram will consist of a set of
straight fringes parallel to the direction of motion of the SAR. If the surface
is not perfectly flat, then the fringe pattern will be distorted from this simple
form. A simulated noise-free interferogram for a single pyramid on an
otherwise flat surface is shown in Figure 7.23. The straight fringes are then
usually removed so that all that is left is the fringe pattern due to the
variations in the elevation of the point P as it moves around the surface. If
we start with P at a reference point and move around the xy plane then, in
principle, we could use Equation 7.20, with the changes in θ, to determine
the changes in z(x, y) and therefore the elevations of all the other points on
the surface relative to the reference point. However, the calculation is not
that simple because the InSAR does not provide the value of θ, or the changes
in θ, as x and y vary; it only gives the changes in the value of ϕ. The process
of determining the change in θ, as one moves from (x, y) to (x′, y′), from the
change in the value of ϕ (determined from the InSAR) is referred to as phase
unwrapping. Various methods can be used to carry out phase unwrapping;
the details are quite complicated and need not concern us here (see, for instance,
Gens and van Genderen [1996]; Ghiglia and Pritt [1998]; and Gens [2003]). One
complicating feature is the fact that the phase is only defined modulo 2π;
this means that any integer multiple of 2π can be added to, or subtracted
from, the phase difference ϕ.
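In one dimension, the modulo-2π problem and the simplest cure are easy to demonstrate: whenever the wrapped phase jumps by more than π between neighboring samples, a multiple of 2π is added or subtracted. Real two-dimensional unwrapping is far more involved, but the sketch shows the principle:

    import numpy as np

    true_phase = np.linspace(0.0, 20.0, 200)       # smoothly increasing phase (rad)
    wrapped = np.angle(np.exp(1j * true_phase))    # what the interferogram gives: modulo 2*pi
    unwrapped = np.unwrap(wrapped)                 # undo the 2*pi jumps sample by sample

    print(np.allclose(unwrapped, true_phase))      # True: recovered up to an overall constant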
Provided that phase unwrapping can be carried out successfully, InSAR
can be used for topographic mapping and the construction of digital elevation models (DEMs) or digital terrain models. Recently, airborne InSAR
systems dedicated to topographic mapping have been developed so that
these systems, along with the airborne lidar systems described in Section 5.3,
provide strong competition for the more conventional photogrammetric
methods for contour and elevation determination using stereopairs of air
photos. For example, the STAR-3i system (Tighe, 2003), operated by the
Canadian company Intermap Technologies, is a 3-cm wavelength, X-band
interferometer flown on a Learjet commercial aircraft. Typical acquisitions are
for areas of 10 km across-track (range direction) and 50 to 200 km along track
(azimuth direction), with data collected at a coverage rate of up to 100 km2
every minute. High-precision InSAR datasets are produced by combining
onboard laser-based inertial measurement data with differential global
positioning system processing to determine the precise position of the
aircraft. The recently modified version of the system can
achieve an accuracy of 50 cm in the vertical direction and 1.25 m in the two
horizontal directions. The elevation model that is generated corresponds to
the reflections from the first surface that the radar pulses encounter, such as
roofs of buildings or treetops, and this is sometimes referred to as the digital
surface model, or DSM. Further processing is necessary to generate a “bare
earth” DEM from a DSM. It is the bare earth elevation model that is required
for the production of contours for a topographic map.
A further development is that of differential InSAR (DInSAR). The concept
is relatively simple although, like nearly everything else related to SAR, the
details are complicated. Basically, because InSAR can be used for topographic
mapping, two sets of InSAR data for the same area can be used to detect
changes in elevation in the area in the period between the acquisitions of
the two sets of InSAR data. Such changes may be due, for example, to
subsidence, landslides, erosion, ice flow on glaciers, earthquakes, or the build
up of pressure inside a volcano before eruption. Differences of the order of
a few centimeters or less can be measured.
8
Atmospheric Corrections to Passive
Satellite Remote Sensing Data
8.1 Introduction
Distinction should be made between two types of situations in which remote
sensing data are used. In the first type, a complete experiment is designed
and carried out by a team of people who are also responsible for the analysis
and interpretation of the data obtained. Such experiments are usually
intended either to gather geophysical data or to demonstrate the feasibility
of an environmental applications project involving remote sensing techniques. The second type of situation is one in which remotely sensed data
are acquired on a speculative basis by the operator of an aircraft or satellite
and then distributed to potential users at their request. In this second situation,
it is necessary to draw the attention of the users to the fact that atmospheric
corrections may be rather important if they propose to use the data for environmental scientific or engineering work.
Useful information about the target area of the land, sea, or clouds is
contained in the physical properties of the radiation leaving that target
area. Remote sensing instruments measure the properties of the radiation
that arrives at the instrument. This radiation has traveled some distance
through the atmosphere and accordingly has suffered both attenuation and
augmentation in the course of that journey. The problem that faces the user
of remote sensing data is that of accurately regenerating the properties of
the radiation that left the target area from the data
generated by the remote sensing instrument. An attempt to set up the
radiative transfer equation to describe all the various processes that corrupt
the signal that leaves the target area on the land, sea, or cloud from first
principles is a nice exercise in theoretical atmospheric physics and, of
course, is a necessary starting point for any soundly based attempt to apply
atmospheric corrections to satellite data. However, in a real situation, the
problem soon arises that suitable values of various atmospheric parameters
have to be inserted into the radiative transfer equation in order to arrive
at a solution. The atmospheric parameters need to correspond to the actual
conditions of the atmosphere at the time that the remotely sensed data
were gathered.
In this chapter, after a general discussion of radiative transfer theory, we
shall consider atmospheric effects in the contexts of microwave, infrared and
visible radiation.
8.2 Radiative Transfer Theory
Making quantitative calculations of the difference between aircraft- or satellite-received radiance, which is the radiance recorded by a set of remote sensing
instruments, and the Earth-leaving radiance, which is the radiance one is trying
to measure, is problematic. An attempt to solve this problem involves the
use of what is commonly known as radiative transfer theory. In essence, this
consists of studying radiation traveling in a certain direction, specified by
the angle θ between that direction and the vertical axis z, and setting up a
differential equation for a small horizontal element of the transmitting
medium (the atmosphere) with thickness dz. It is necessary to consider:
• Radiation entering the element dz from below
• Attenuation suffered by that radiation within the element dz
• Additional radiation that is either generated within the element dz
or scattered into the direction θ within the element dz
and thence to determine an expression for the intensity of the radiation
leaving the element dz in the direction θ.
The resulting differential equation is called the radiative transfer equation.
Although not particularly difficult to formulate, this general form of the
equation is not commonly used. In practice, the details of the formulation
are simplified to include only the important effects. The equation is therefore
different for different wavelengths of electromagnetic radiation because of
the different relative importance of different physical processes at different
wavelengths. Suitable versions of the radiative transfer equation for optical,
near-infrared, and thermal-infrared wavelengths and for passive microwave
radiation will be presented in the sections that follow.
If the values of the various atmospheric parameters that appear in the
radiative transfer equation are known, this differential equation can be
solved to determine the relation between the aircraft- or satellite-received
radiance and the Earth-leaving radiance. However, the greatest difficulty
in making atmospheric corrections to remotely sensed data lies in the fact
that it is usually impossible to obtain accurate values for the various
atmospheric parameters that appear in the radiative transfer equation. The
atmosphere is a highly dynamic physical system and the various atmospheric parameters will, in general, be functions of the three space variables,
x, y, and z, and of the time variable, t. Because of the paucity of the data,
it is common to assume a horizontally stratified atmosphere — in other
words, the atmospheric parameters are assumed to be functions of the
height z but not the x and y coordinates in a horizontal plane. The situation
may be simplified further by assuming that the atmospheric parameters
are given by some model atmosphere based only on the geographical
location and the time of year. However, this approach is not realistic
because the actual atmospheric conditions differ quite considerably from
such a model. It is clearly much better to try to use values of the atmospheric parameters that apply at the time that the remotely sensed data
are collected. This can be done by using:
• Simultaneous, or nearly simultaneous, data from sounding instruments, either radiosondes or satellite-flown sounders
• A multichannel (multispectral) approach and, effectively, using a
large number of channels of data to determine the atmospheric
parameters
• A multilook approach in which a given element of the surface of the
Earth is viewed in rapid succession from a number of different
directions (i.e., through different atmospheric paths), so that the
atmospheric parameters can either be determined or eliminated
from the calculation of the Earth-leaving radiance.
Examples of these different approaches will be presented in the sections that
follow. It is, however, important to realize that there is a fundamental difficulty
— namely that the problem of solving the radiative transfer equation in the
situations described is an example of an unconstrained inversion problem.
That is, there are many unknowns (the atmospheric parameters for a given
atmospheric path) and a very small number of measurements (the intensities
received in the various spectral bands for the given instantaneous field of
view). The solution will, inevitably, take some information from the mathematical and physical assumptions that have been built into the method of
solution adopted.
A general formalism for atmospheric absorption and transmission is
required. Consider a beam of radiation with wavelength λ and wave number
κ (= 2π/λ) traveling at a direction θ to the normal to the Earth’s surface.
After the radiation has traveled a distance l, the radiant flux (radiance) at
the wavelength λ, ϕλ(l), is related to its initial value ϕλ(0) by:

ϕλ(l) = ϕλ(0) exp[−sec θ ∫0z Kλ(z) dz]   (8.1)

where z = l cos θ and Kλ(z) is the attenuation coefficient.
Notice that the attenuation coefficient is a function of height as well as of
wavelength. These quantities can be expressed in terms of κ instead of λ,
giving:

ϕκ(l) = ϕκ(0) exp[−sec θ ∫0z Kκ(z) dz]   (8.2)

The dimensionless quantity ∫0z Kκ(z) dz is called the optical thickness and is
commonly denoted by τκ(z), and the quantity exp[−∫0z Kκ(z) dz] is called the
beam transmittance and is commonly denoted by Tκ(z).
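A numerical sketch of Equation 8.2: with an assumed exponentially decaying attenuation-coefficient profile, the optical thickness follows by direct integration, and the slant-path transmittance then follows from Equation 8.2 itself.

    import numpy as np

    z = np.linspace(0.0, 30e3, 3001)       # height grid (m)
    K = 1e-4 * np.exp(-z / 8000.0)         # assumed attenuation coefficient profile (1/m)
    theta = np.radians(30.0)               # viewing angle from the vertical

    # Optical thickness: trapezoidal integration of K over height
    tau = float(np.sum(0.5 * (K[1:] + K[:-1]) * np.diff(z)))
    T = float(np.exp(-tau / np.cos(theta)))    # beam transmittance along the slant path
    print(tau, T)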
8.3 Physical Processes Involved in Atmospheric Correction
In atmospheric correction processes, the first distinction to be made is
whether the radiation leaving the surface of the land, sea, or clouds is
radiation emitted by that surface or whether it is reflected solar radiation.
The relative proportions of reflected and emitted radiation vary according
to the wavelength of the radiation and the time and place of observation.
As noted in Section 2.2, at optical and very near-infrared wavelengths, the
emitted radiation is negligible compared with the reflected radiation,
whereas at thermal-infrared and microwave wavelengths, emitted radiation
is more important and reflected radiation is of negligible intensity. Within
the limitations of the accuracy of these estimates it may be seen that at a
wavelength of 3.5 µm, which is actually the wavelength of one of the bands
of the Advanced Very High Resolution Radiometer (AVHRR), emitted and
reflected radiation are both important. The problem is to relate data usually
consisting of, or derived from, the output from a passive scanning instrument
to the properties of the land, sea, or clouds under investigation.
The approach adopted to determine the contribution of the intervening
atmosphere to remotely sensed data is governed both by the characteristics
of the remote sensing system in use and by the nature of the environmental
problem to which the data are to be applied. In work that has been done so
far in land-based applications of remote sensing, atmospheric effects have
rarely been considered, whereas in meteorological applications, the atmosphere is the object of investigation. A great deal of meteorological information can be extracted from remote sensing data without taking any account
of details of the corruption of the signal from the target by intervening
layers of the atmosphere. Thus, images from such systems as the National
Oceanic and Atmospheric Administration (NOAA) series of polar-orbiting satellites or from geostationary satellites such as Meteosat, Geostationary Operational Environmental Satellite-E (GOES-E), and GOES-W can
be used to give synoptic views of whole weather systems and their developments in a manner that was previously completely impossible. If experiments
are being conducted to study the physical properties and motions of a given
layer of the atmosphere, it may be necessary to make allowance for contributions to a remotely sensed signal from other atmospheric layers. The areas
of work in which atmospheric effects have been of greatest concern to the
users of remote sensing data so far have been those in which water bodies,
such as lakes, lochs, rivers, and oceans, have been studied in order to determine their physical or biological parameters.
In most cases, users of remote sensing data are interested in knowing how
important the various atmospheric effects are on the quality of image data
or on the magnitudes of derived physical or biological parameters; users are
not usually interested in the magnitudes of the corrections to the radiance
values per se. However, to assess the relative importance of the various
atmospheric effects, one must devote some attention to:
• Physical processes occurring in the atmosphere
• Magnitudes of the effects of these processes on the radiance reaching
the satellite
• Consequences of these effects on images or on derived physical or
biological parameters.
There are several different approaches that one can take to applying atmospheric
corrections to satellite remote sensing data for the extraction of geophysical
parameters. We note the following options:
• Ignore atmospheric effects completely
• Calibrate with in situ measurements of geophysical parameters
• Use a model atmosphere with parameters determined from historic data
• Use a model atmosphere with parameters determined from simultaneous
meteorological data
• Eliminate or compensate for atmospheric effects on a pixel-by-pixel
basis.
Selection of the appropriate one of these five options is governed by considerations both of the sensor that is being used to gather the data and of the
problem to which the data are being applied:
• One can ignore the atmospheric effects completely, which is not quite
as frivolous or irresponsible as it might first seem. In practice, this
approach is perfectly acceptable for some applications.
• One can calibrate the data with the results of some simultaneous in
situ measurements of the geophysical parameter that one is trying
to map from the satellite data. These in situ measurements may be
obtained for a training area or at a number of isolated points in the
scene. However, the measurements must be made simultaneously
with the gathering of the data by the satellite.
Many examples of the use of this approach can be found for data
from both visible and infrared channels of aircraft- and satellite-flown scanners. Some relevant references are cited in Sections 8.4.2
and 8.5.1. The method involving calibration with simultaneous in
situ data is capable of yielding quite accurate results. It is quite
successful in practice although, of course, the value of remote sensing techniques can be considerably enhanced if the need for simultaneous in situ calibration data can be eliminated. One should not
assume, however, that the calibration for a given geographical area
on one day can be taken to apply to the same geographical area on
another day; the atmospheric conditions may be quite different.
In addition to the problems associated with variations in the atmospheric conditions from day to day, there is also the quite serious
problem that significant variations in the atmospheric conditions are
likely even within a given scene at any one time. To accurately account
for all of these variations, one would need to have available in situ
calibration data for a much finer network of closely packed points
than would be feasible. While it is, of course, necessary to have some
in situ data available for initial validation checks and for subsequent
occasional monitoring of results derived from satellite data, to use a
large network of in situ calibration data largely negates the value of
using remote sensing data anyway, because one important objective
of using remote sensing data is to eliminate costly fieldwork.
Given the difficulty in determining values of geophysical parameters from satellite data for which results of simultaneous in situ
measurements are not available, many people have adopted methods that involve trying to eliminate atmospheric effects rather than
trying to calculate atmospheric corrections. For example:
• One can use a model atmosphere, with the details and parameters of
the model adjusted according to the geographical location and the time
of year. This method is more likely to be successful if one is dealing
with an instrument with low spatial resolution that is gathering data
over wide areas for an application that involves taking a global view
of the surface of the Earth. In this situation, the local spatial irregularities
and rapid temporal variations in the atmosphere are likely to cancel out
and fairly reliable results can be obtained. This approach is also likely
to be relatively successful for situations in which the magnitude of the
atmospheric correction is relatively small compared with the signal
from the target area that is being observed. All these conditions are
satisfied for passive microwave radiometry and so this approach is
moderately successful for Scanning Multichannel Microwave
Radiometer (SMMR) or Special Sensor Microwave Imager (SSM/I)
data (see, for example, Alishouse et al. [1990]; Gloersen et al. [1984];
Hollinger et al. [1990]; Njoku and Swanson [1983]; and Thomas [1981]).
• One can also use a model atmosphere but make use of such simultaneous meteorological data as may actually be available instead of
using only assumed values based on geographical location and time
of year. This simultaneous meteorological data may be obtained
from one of several possible sources. The satellite may, like the
Television InfraRed Observation Satellite-N (TIROS-N) series of satellites, carry other instruments, in addition to the scanner, that are
used for carrying out meteorological sounding through the atmosphere below the satellite (see Section 8.4).
• One can attempt to eliminate atmospheric effects in one of various
ways. For instance, one can use a multilook approach in which a
given target area on the surface of the sea is viewed from two
different directions. Alternatively, one can attempt to eliminate
atmospheric effects by exploiting a number of different spectral
channels to try to cancel out the atmospheric effects between these
channels. These methods will be discussed in Section 8.4.
Most of the options presented here are considered in relation to microwave,
thermal-infrared, and visible-band data. The cases of emitted radiation and
reflected solar radiation are considered separately, with consideration also
being given to atmospheric transmittance.
8.3.1 Emitted Radiation
As previously mentioned, at long wavelengths (i.e., for microwave and
thermal-infrared radiation), it is the emitted radiation, not the reflected solar
radiation, that is important (see Table 2.1). Several factors contribute to the
radiation received at an instrument (see Figure 8.1); these contributions,
identified as T1, T2, T3, and T4, are described in the following sections. Each
can be considered as a radiance L(κ), where κ is the wave number, or as
corresponding to an equivalent black body temperature.
8.3.1.1 Surface Radiance: L1(κ), T1
Surface radiance is the radiation that is generated thermally at the Earth’s
surface and undergoes attenuation as it passes through the atmosphere before
reaching a scanner; this radiance can be written as εB(κ, Ts), where ε is the
emissivity, B(κ, Ts) is the Planck distribution function, and Ts is the temperature
of the surface. In general, the emissivity ε is a function of wave number and
temperature. For example, the emissivity of gases varies very rapidly with wave
number in the neighborhood of the absorption (emission) lines. For seawater,
ε may be treated as constant with respect to κ and Ts. If the presence of any
material that is not part of seawater is ignored (e.g., oil pollution, industrial
waste), then ε may be regarded as a constant. Let p0 be the atmospheric
pressure at the sea surface. By definition, the pressure on the top of the
atmosphere is 0.

FIGURE 8.1
Contributions to satellite-received radiance for emitted radiation.

Thus, the radiance reaching the detector from the view angle θ is:
L1(κ) = εB(κ, Ts)τ(κ, θ; p0, 0)   (8.3)

where τ(κ, θ; p, p1) is the atmospheric transmittance for wave number κ
and direction θ between heights in the atmosphere where the pressures are
p and p1.
8.3.1.2 Upwelling Atmospheric Radiance: L2(κ), T2
The atmosphere emits radiation at all altitudes. As this emitted radiation
travels upward to a scanner, it undergoes attenuation in the overlying atmosphere. It is possible to show (see, for example, Singh and Warren [1983])
that the radiance emitted by a horizontal slab of the atmosphere lying
between heights z and z + dz, where the pressure is p and p + dp respectively,
and arriving in a direction q at a height z1 where the pressure is p1, is given by:
dL2(κ) = B(κ, T(p)) dτ(κ, θ; p, p1)   (8.4)

or

dL2(κ) = B(κ, T(p)) [dτ(κ, θ; p, p1)/dp] dp   (8.5)

The upwelling emitted radiation received at the satellite can thus be written as:

L2(κ) = ∫p0→0 B(κ, T(p)) [dτ(κ, θ; p, 0)/dp] dp   (8.6)
where p0 is the atmospheric pressure at the sea surface and T(p) is the
temperature at the height at which the pressure is p.
This expression is based on the assumption of local thermodynamic equilibrium and the use of Kirchhoff’s law to relate the emissivity to the absorption
coefficient.
8.3.1.3 Downwelling Atmospheric Radiance: L3(κ), T3
The downwelling radiance involves atmospheric emission downward to the
Earth’s surface where the radiation undergoes reflection upward to the
scanner. Attenuation is undergone as the radiation passes through the atmosphere. The total downwelling radiation from the top of the atmosphere, where
p = 0, to the sea surface, where pressure is p0, is given by:
∫0→p0 B(κ, T(p)) [dτ(κ, θ; p, p0)/dp] dp   (8.7)
An amount (1 − e) of this radiation is reflected at the sea surface and after
it passes through the atmosphere from the surface to the top of the atmosphere the radiance reaching the satellite is given by:
L3(κ) = (1 − ε) τ(κ, θ; p0, 0) ∫0→p0 B(κ, T(p)) [dτ(κ, θ; p, p0)/dp] dp   (8.8)

8.3.1.4 Space Component: L4(κ), T4
Space has a background brightness temperature of about 3 K. The space
component passes down through the atmosphere, is reflected at the surface,
and passes up through the atmosphere again to reach the scanner.
8.3.1.5 Total Radiance: L*(κ), Tb

The total radiance L*(κ) received at the satellite can be written as:
L*(κ) = L1(κ) + L2(κ) + L3(κ) + L4(κ)   (8.9)
Alternatively, the same relation can be expressed in terms of the brightness
temperature, Tb , and the equivalent temperatures for each of the contributions already mentioned, or:
Tb = T1 + T2 + T3 + T4
(8.10)
8.3.1.6 Calculation of Sea-Surface Temperature
Sea-surface temperatures are studied quite extensively using both infrared
and passive microwave instruments. In both cases, the problem is to estimate
or eliminate T2, T3, and T4 so that T1 can be determined from the measured
value of Tb. There is a further complication in the case of microwave radiation because, for certain parts of the Earth’s surface at least, a significant
contribution also arises from microwaves generated artificially for telecommunications purposes. It is simplest, from the point of view of the above
scheme, to include this contribution in T2.
Apart from information about the equivalent black body temperature of
the surface of the land, sea, or cloud, the brightness temperature measured
by the sensor contains information on a number of atmospheric parameters
such as water vapor content, cloud liquid water content, and rainfall rate.
Using multichannel data, it may be possible to eliminate T2, T3, and T4 and
hence to calculate T1 from Tb.
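As an illustration of the multichannel idea, split-window algorithms estimate sea-surface temperature from two thermal-infrared brightness temperatures, using the channel difference to correct for water vapor. The linear form below is typical of such algorithms, but the coefficients are purely illustrative assumptions, not operational values:

    def split_window_sst(t11, t12, a0=-1.0, a1=1.0, a2=2.5):
        # Illustrative split-window form: SST = a0 + a1*T11 + a2*(T11 - T12),
        # with T11 and T12 the brightness temperatures (K) in the 11 and 12
        # micrometre channels; coefficients here are assumed, not operational.
        return a0 + a1 * t11 + a2 * (t11 - t12)

    print(split_window_sst(287.5, 286.2))   # 289.75 K for these assumed inputs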
8.3.2 Reflected Radiation
The reflected radiation case concerns radiation that originates from the Sun and
eventually reaches a remote sensing instrument on an aircraft or spacecraft, the
energy of the radiation that arrives at the instrument being measured by the
sensor. Hopefully, the bulk of this radiation will come from the instantaneous
field of view (IFOV) on the target area of land, sea, or cloud that is the observed
object of the remote sensing activity. However, in addition to radiation that has
traveled directly over the path Sun → IFOV → sensor and may contain information about the area seen in the IFOV, some radiation reaches the sensor by
other routes. This radiation does not contain information about the IFOV.
FIGURE 8.2
Contributions to satellite-received radiance for reflected solar radiation; 1, 2, 3, and 4 denote
L1(κ), L2(κ), L3(κ), and L4(κ), respectively.
FIGURE 8.3
Components of the sensor signal in remote sensing of water. (Sturm, 1981.)
Accordingly, various paths between the Sun and the sensor are considered for
reflected radiation reaching the sensor (see Figure 8.2 and Figure 8.3):
• L1(κ): radiation that follows a direct path from the Sun to the target
area and thence to the sensor
• L2(κ): radiation from the Sun that is scattered towards the sensor,
either by single or multiple scattering in the atmosphere, without
the radiation ever reaching the target area
• L3(κ): radiation that does not come directly from the Sun but, rather,
first undergoes a scattering event before reaching the target area and
then passes to the sensor directly
• L4(κ): radiation that is reflected by other target areas of the land, sea,
or clouds and is then scattered by the atmosphere towards the sensor.
These four processes may be regarded, to some extent, as analogues for
reflected radiation of the four processes outlined in Section 8.3.1 for emitted
radiation.
L1(κ) contains the most useful information. L2(κ) and L4(κ) do not contain
useful information about the target area. While, in principle, L3(κ) does
contain some information about the target area, it may be misleading information if the radiation is mistakenly regarded as having traveled directly
from the Sun to the target area.
One cannot assume that the spectral distribution of the radiation reaching
the outer regions of the Earth’s atmosphere, or its intensity integrated over all
wavelengths, is constant. The extraterrestrial solar spectral irradiance, as it is
called, and its integral over wavelength, which is called the solar constant,
have been studied experimentally over the last 50 years or more. The technique
that is used is due originally to Langley and involves the extrapolation of
ground-based irradiance measurements to outside the Earth’s atmosphere. A
review of such measurements, together with recommendations of standard
values, was given by Labs and Neckel (1967, 1968, 1970). Measurements of
the extraterrestrial irradiance have also been made from an aircraft flying at
a height of 11.6 km (Thekaekara et al., 1969). Although various experimenters
acknowledge errors in the region of ±3% following the calibration of their
instruments to radiation standards, the sets of results differ from one another
by considerably more than this; in some parts of the spectrum, they differ by
as much as 10%. Examples of results are shown in Figure 8.4.

FIGURE 8.4
Solar extraterrestrial irradiance E0 (mW/(cm² µm)), averaged over the year, as a function of
wavelength from 0.40 to 0.85 µm (from four different sources). (Sturm, 1981.)

Some of the
discrepancy is explained by the fact that the radiation from the Sun itself varies.
Annual fluctuations in the radiance received at the Earth’s atmosphere associated with the variation of the distance from the Sun to the Earth can be taken
into account mathematically. The eccentricity of the ellipse describing the orbit
of the Earth is 0.0167. The minimum and maximum distances from the Sun
to the Earth occur on January 3 and July 2, respectively. The extraterrestrial
solar irradiance for Julian day D is given by the following expression:
E0(D) = E0{1 + 0.0167 cos[(2π/365)(D − 3)]}²   (8.11)
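Equation 8.11 is simple to evaluate; normalizing E0 to 1 gives the dimensionless seasonal correction factor directly:

    import math

    def irradiance_factor(day):
        # Equation 8.11 with the mean irradiance E0 normalized to 1
        return (1.0 + 0.0167 * math.cos(2.0 * math.pi / 365.0 * (day - 3))) ** 2

    print(irradiance_factor(3))     # ~1.034 near perihelion (January 3)
    print(irradiance_factor(185))   # ~0.967 near aphelion (early July)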
8.3.3 Atmospheric Transmission
The possible origins of the radiation that finally reaches a remote sensing
instrument, and the possible routes that the radiation may take in traveling
from its source to the sensor, were considered in Sections 8.3.1 and 8.3.2. It
is also necessary to consider the scattering mechanisms involved, both in
the atmosphere and at the target area on the surface of the Earth or the
clouds. Although the reflection or scattering at the target area is relevant to
the use of all remotely sensed data, the details of the interaction of the
radiation with the target area are not considered here; rather, this section
focuses on the scattering and absorption of the radiation that occurs during
the passage of radiation through the atmosphere.
Three types of scattering are distinguished depending on the relationship
between a, the diameter of the scattering particle, and λ, the wavelength of
the radiation. If a « λ, Rayleigh scattering is dominant. For Rayleigh scattering,
the scattering cross section is proportional to 1/λ⁴; for visible radiation, this
applies to scattering by gas molecules. Other cases correspond to scattering
by aerosol particles. If a ≈ λ, Mie scattering is dominant. Mie scattering
involves water vapor and dust particles. If a » λ, nonselective scattering is
dominant. This scattering is independent of wavelength; for the visible
range, this involves water droplets with radii of the order of 5 to 100 µm.
The mechanisms involved in scattering or absorption of radiation as it
passes through the atmosphere can be conveniently considered as follows.
The attenuation coefficient Kk(z) mentioned in Section 8.2 can be separated
into two parts:
Kκ ( z) = KκM ( z) + KκA ( z)
(8.12)
where KκM ( z) and KκA ( z) refer to molecular and aerosol attenuation coefficients.
Each of these absorption coefficients can be written as the product of NM(z)
or NA(z), the number of particles per unit volume at height z, and a quantity
skM or skA, known as the effective cross section:
Kκ ( z) = N M ( z)σ κM + N A ( z)σ κA
(8.13)
9255_C008.fm Page 172 Saturday, February 17, 2007 12:43 AM
172
Introduction to Remote Sensing
The quantities
τκM ( z) = σ λM
∫ N (z)dz
(8.14)
τκA ( z) = σ λA
∫ N (z)dz
(8.15)
z
M
0
and
z
A
0
are called the molecular optical thickness and the aerosol optical thickness,
respectively. It is convenient to separate the molecular optical thickness into
a sum of two components:
τκM ( z) = τκMs ( z) + τκMa ( z)
(8.16)
where τκM ( z) corresponds to scattering and τκM ( z) corresponds to absorption.
s
a
Thus the total optical thickness can be written as
τκ ( z) = τκMs ( z) + τκMa ( z) + τκA ( z)
(8.17)
These three contributions are considered briefly in turn.
8.3.3.1 Scattering by Air Molecules
At optical wavelengths, Rayleigh scattering by air molecules occurs. The
Rayleigh scattering cross section is given by a well-known formula:
σ λMs =
8π 3 (n2 − 1)2
3N 2 λ 4
(8.18)
where
n = refractive index,
N = number of air molecules per unit volume, and
l = wavelength.
This contribution to the scattering of the radiation can be calculated in a
relatively straightforward manner. The l−4 behavior of the Rayleigh scattering (molecular scattering) means this mechanism is very important at short
wavelengths but becomes unimportant at long wavelengths. The blue color
of the sky and the red color of sunrises and sunsets are attributable to the
difference between this scattering for blue light and red light. This mechanism becomes negligible for near-infrared wavelengths (see Figure 8.5) and
is of no importance for microwaves.
9255_C008.fm Page 173 Saturday, February 17, 2007 12:43 AM
173
Atmospheric Corrections to Passive Satellite Remote Sensing Data
1.0
Molecular scattering
0.1
1
τλ
2
Aerosol scattering
3
1
0.01
Aerosol absorption
0.001
0.5
1.0
1.5
2.0
2.5
λ (µm)
FIGURE 8.5
Normal optical thickness as a function of wavelength. (Sturm, 1981.)
8.3.3.2 Absorption by Gases
In remote sensing work, it is usual to use radiation of wavelengths that are
not within the absorption bands of the major constituents of the atmosphere.
The gases to be considered are oxygen and nitrogen, the main constituents
of the atmosphere, and carbon dioxide, ozone, and water vapor. At optical
wavelengths, the absorption by oxygen, nitrogen, and carbon dioxide is
negligible. Water vapor has a rather weak absorption band for wavelengths
from about 0.7 to 0.74 µm. The only significant contribution to atmospheric
absorption by molecules is by ozone. This contribution can be calculated
and, although it is small in relation to the Rayleigh and aerosol contribution,
it should be included in any calculations of atmospheric corrections to optical
scanner data (see Table 8.1). For scanners operating in the thermal-infrared
and microwave regions of the electromagnetic spectrum, absorption by gases
constitutes the major absorption mechanism. The attenuation experienced
by the radiation can be calculated using the absorption spectra of the gases
involved, carbon dioxide, ozone, and water vapor (see Figure 2.13). The
relative importance of the contributions from these three gases depends on
the wavelength range under consideration. As indicated, only ozone absorption is significant at visible wavelengths.
9255_C008.fm Page 174 Saturday, February 17, 2007 12:43 AM
174
Introduction to Remote Sensing
TABLE 8.1
Ozone Optical Thickness for Vertical Path Through the Entire Atmosphere
Atmosphere Type
Ozone abs.
1
2
3
4
5
Wavelength Coefficient
l (µm)
k 0l(cm–1) V 0(∞) = 0·23 V 0(∞) = 0·39 V 0(∞) = 0·31 V 0(∞) = 0·34 V 0(∞) = 0·45
0·44
0·52
0·55
0·67
0·75
0·001
0·055
0·092
0·036
0·014
0·0002
0·0128
0·0215
0·0084
0·0033
0·0004
0·0213
0·0356
0·0139
0·0054
0·0003
0·0173
0·0289
0·0113
0·0044
0·0003
0·0187
0·0312
0·0122
0·0048
0·0005
0·0245
0·0409
0·0160
0·0062
V0(∞) is the visibility range parameter which is related to the optical thickness t 0l(∞) and the
absorption coefficient k 0l for ozone by t 0l(∞) = k 0l V0(∞)
(Sturm, 1981)
8.3.3.3 Scattering by Aerosol Particles
The aerosol scattering also decreases with increasing wavelength. It is common to write the aerosol optical thickness as:
τ λA = Aλ − B
(8.19)
where B is referred to as the Ångström exponent.
However, the values of the parameters A and B do vary quite considerably
according to the nature of the aerosol particles. Quoted values of B vary from
0.8 to 1.5 or even higher. At optical wavelengths, the aerosol scattering is
comparable in magnitude with the Rayleigh scattering (see Figure 8.5). In
practice, however, it is more difficult to calculate because of the great variability in the nature and concentration of aerosol particles in the atmosphere.
Indeed, when dealing with data from optical scanners, accounting for aerosol
scattering is the most troublesome part of atmospheric correction calculations. Although being of some importance in near-infrared wavelengths,
aerosol scattering can be ignored in the thermal-infrared region for clear air
(i.e., in the absence of cloud, haze, fog, or smoke), and it can be ignored in
the microwave region.
Estimates of corrections to remotely sensed data are based, ultimately, on
solving the radiative transfer equation although, as indicated in the previous
section, accurate solutions are very hard to obtain and one is forced to adopt
an appropriate level of approximation.
The importance of understanding the effect of the atmosphere on remote
sensing data and of making corrections for atmospheric effects depends
very much on the use that is to be made of the data. There are many
meteorological and land-based applications of remote sensing (listed in
Table 1.2) for which there has been no previous need to carry out any kind
9255_C008.fm Page 175 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
175
of atmospheric correction — either because the information that is being
extracted is purely qualitative or because, though being quantitative, the
remotely sensed data are calibrated by the use of in situ data within a
training area. Nevertheless, in the future, some of these studies will become
more exact, particularly as more careful multitemporal studies of environmental systems that exhibit change are undertaken. This is likely to mean
that including atmospheric corrections for some of these applications will
become increasingly important in the future. In the case of oceanographic
and coastal work, the information for extraction consists of quantitative
values of physical or biological parameters of the water, such as the surface
temperature and concentrations of suspended sediment or chlorophyll.
Although it is interesting to consider the importance of atmospheric effects
in terms of the magnitude of the attenuation relative to the magnitude of
the signal from the target area, these effects should not be considered in
isolation but rather should be considered in conjunction with the use to
which the data are to be applied. It must also be remembered that this
section is only concerned with passive sensors.
8.4 Thermal-Infrared Scanners and Passive Microwave
Scanners
We shall first consider the appropriate form of the radiative transfer equation
and then we shall consider the data from thermal infrared scanners and from
microwave scanners separately.
8.4.1
The Radiative Transfer Equation
For thermal-infrared scanners and passive microwave scanners, we are concerned
with emitted radiation; the radiative transfer equation takes the following form:
dIκ (θ , ϕ )
= −γ κ Iκ (θ , ϕ ) + ψ κ (θ , ϕ )
ds
(8.20)
where Ik(q,j) is the intensity of electromagnetic radiation of wave number
k in the direction (q,j), s is measured in the direction (q,j), and gk is an
extinction coefficient.
The first term on the right-hand side of this equation describes the attenuation of the radiation both by absorption and by scattering out of the
direction (q, j). The second term describes the augmentation of the radiation,
both by emission and by scattering of additional radiation into the direction
(q,j); this term can be written in the form:
ψ κ (θ , ϕ ) = ψ κA (θ , ϕ ) + ψ κS (θ , ϕ )
(8.21)
9255_C008.fm Page 176 Saturday, February 17, 2007 12:43 AM
176
Introduction to Remote Sensing
where ykA (q,f) is the contribution corresponding to the emission and can, in
turn, be written in the form:
ψ κA (θ , ϕ ) = γ κA B(κ , T )
(8.22)
where gkA is an extinction coefficient and B(k, T) is the Planck distribution
function for black-body radiation:
B(κ , T ) =
2 hc 2κ 3
exp( hcκ/kT ) − 1
(8.23)
and where h = Planck’s constant, c = velocity of light in free space, k =
Boltzmann’s constant, and T = absolute temperature.
ykS(q,f) is the contribution to scattering into the direction (q,f) and can be
written in the form:
ψ κS (θ , ϕ ) = γ κS Jκ (θ , ϕ )
(8.24)
where Jk (q,f) is a function that depends on the scattering characteristics of
the medium.
Accordingly, Equation 8.20 can be rearranged to give:
−
1 dIκ (θ , ϕ )
γA
γS
= Iκ (θ , φ ) − κ B(κ , T ) − κ Jκ (θ , ϕ )
ds
γκ
γκ
γκ
(8.25)
dIκ (θ , ϕ )
= Iκ (θ , ϕ ) − (1 − ω )B(κ , T ) − ω Jκ (θ , ϕ )
dτ
(8.26)
or
where dt = – gk ds, t = optical thickness, gk = gkA + gkS, and w = gkS/gk .
The differential equation is then expressed in terms of optical thickness t
rather than the geometrical path length s.
At microwave frequencies, where hck (=hf) « kT (f = frequency) and the
Rayleigh-Jeans approximation can be made, namely that:
B(κ , T ) 2 hc 2κ 3
= 2 cκ 2 kT
(1 + hcκ/kT ) − 1
(8.27)
then Equation 8.27 can be integrated and expressed in terms of equivalent
temperatures for black-body radiation:
τ
TB (θ , ϕ , 0) = TB (θ , ϕ , τ )e
−τ
∫
+ Teff (θ , ϕ , τ ′)e − τ ′ dτ ′
0
(8.28)
9255_C008.fm Page 177 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
177
where
Teff (θ , ϕ , τ ′) = [1 − ω (τ ′)]Tm (τ ′) + ω (τ ′)Tsc (θ , ϕ , τ ′)
(8.29)
and where Tm (τ ′) is the radiation temperature of the medium and
Tsc (θ , ϕ , τ ′) is a temperature equivalent for the total radiation scattered into
the direction (q,f) from all directions.
The radiative transfer equation can be solved in two slightly different
ways. We have introduced it in terms of thinking about it as a means to
correct satellite-received radiances, or aircraft-received radiances, to determine the Earth-surface-leaving radiance. In this case, one must either have
independent data on the physical parameters of the atmosphere or one must
assume some values for these parameters. Alternatively, the radiative transfer
equation may be used in connection with attempts to determine the atmospheric profile or conditions as a function of height. Atmospheric profiles
have been determined for many years by radiosondes that are launched at
regular intervals by weather stations. Each radiosonde consists of a balloon;
a set of instruments to measure parameters such as pressure, temperature,
and humidity; and a radio transmitter to transmit the data back to the
ground. However, because radiosonde stations are relatively sparse, sounding instruments flown on various satellites may also be used for determining
atmospheric profiles. Perhaps the best-known of these sounding instruments
are the TIROS Operational Vertical Sounder (TOVS) series flown on the
TIROS-N NOAA series of weather satellites. The TOVS are essentially microwave and infrared multispectral scanners with extremely low spatial resolution. The TOVS system has three separate instruments that are used to
determine temperature profiles from the surface to the 50-km level (see
Section 3.2.1). The High Resolution Infrared Radiation Sounder (HIRS/2)
operates in 20 spectral channels, 19 in the infrared and 1 in the visible, at a
spatial resolution of 25 km and is mainly used for determining tropospheric
temperature and water vapor variations. The four-channel Microwave
Sounding Unit (MSU) operates at a 54-GHz frequency at which clouds are
essentially transparent, although rain causes attenuation. It has a spatial
resolution of 110 km and is the major source of information when the sky is
overcast. Recently, TOVS has been superseded by ATOVS (Advanced TIROS
Operational Vertical Sounder) in which the MSU replaced by the Advanced
MSU (AMSU) on the NOAA-K, -L, and -M spacecraft. The AMSU has two
components: AMSU-A and AMSU-B. AMSU-A is a 15-channel microwave
radiometer that is used for measuring global atmospheric temperature profiles and provides information on atmospheric water in all of its forms (with
the exception of small ice particles, which are transparent at microwave
frequencies). AMSU-B is a five-channel microwave radiometer, the purpose
of which is to receive and measure radiation from a number of different
layers of the atmosphere in order to obtain global data on humidity profiles.
9255_C008.fm Page 178 Saturday, February 17, 2007 12:43 AM
178
Introduction to Remote Sensing
The TOVS Stratospheric Sounding Unit is a three-channel infrared instrument for measuring temperatures in the stratosphere (25 to 50 km) at a
spatial resolution of 25 to 45 km. These sounding instruments are described
in NOAA reports (Schneider et al., 1981; Werbowetzki, 1981).
The data from such sounding instruments are usually analyzed by neglecting the scattering into the direction (q, j) so that Equations 8.20 and 8.28 can
be simplified by neglecting the scattering — in other words, by setting w or
w( τ ′ ) (all τ ′ ) equal to 0. Thus, on integrating Equation 8.25 from 0 (at the
surface) to infinity (at the satellite) in this approximation we obtain:
∞
∫
Iκ (θ , ϕ ) = B(κ , Ts )τ κ (0, ∞) + B(κ , T( z))
0
dτκ ( z, ∞)
dz
dz
(8.30)
The quantity dtk(z, ∞ )/dz may be written as Kk(z) for convenience. Using
Equation 8.30 to give the intensity of radiation Ik(q,f)dk that is received in
a (narrow) spectral band of width dk we have:
∞
Iκ (θ , ϕ )dκ = B(κ , Ts )τκ (0, ∞)dκ +
∫ B(κ , T(z))K (z)dzdκ
κ
(8.31)
z= 0
Information gathered by a sounding instrument is then used to invert the
set of values of Ik(q,j) dk from the various spectral bands of the instrument
to determine the temperature profile. This involves making use of a given
set of values of Kk(z), which may be regarded as a weighting function. In
practice, it is usual to transform this weighting function to express it in terms
of pressure, p, rather than height, z, and also to express the temperature
profile derived from the sounding measurements as a function of pressure,
T(p), rather than a function of height, T(z).
8.4.2 Thermal-Infrared Scanner Data
Data from a thermal-infrared scanner can be processed to yield the value of
the radiance leaving the surface of the Earth. The intensity of the radiation
leaving the Earth at a given wavelength depends on both the temperature
and emissivity of the surface. Over land, the value of the emissivity varies
widely from one surface to another. Consequently, one has to regard both
the emissivity and the temperature of the land surface or land cover as
unknowns that have to be determined either exclusively from the scanner
data or from the scanner data plus supplementary data. Thus, the recovery
of land surface temperatures from thermal infrared data is still very much
at the research stage. The emissivity of the sea is known to be very close to
unity, about 0.98 in fact, and it varies very little with other factors (such as
salinity and temperature). Consequently, the generation of maps of sea surface temperature is now a routine operation carried out at local and regional
9255_C008.fm Page 179 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
179
levels by NOAA and at direct readout stations all around the world and at
a global level by NOAA using tape-recorded data covering the whole Earth.
We shall describe the determination of sea-surface temperatures from data
from the thermal-infrared channels of the AVHRR because this is by far the
most widely used source of thermal-infrared data from satellites. The left-hand
side of Figure 8.6 illustrates the physical processes involved in the passage of
emitted radiation leaving the surface of the sea and traveling up through the
atmosphere to the satellite where it enters the AVHRR and gives rise to signals
in the detectors. Radiation that arrives at a satellite is incident on detectors
that produce voltages in response to this radiation incident upon them. The
voltages produced are then digitized to create the digital numbers that are
transmitted back to Earth. The data generated by a satellite-borne thermalinfrared scanner are received at a ground receiving station as a stream of digital
numbers — often as 8-bit numbers, but 10-bit numbers in the case of the
AVHRR. The right-hand side of Figure 8.6 illustrates the steps involved in the
procedure applied to the processing of the data, including:
•
•
•
•
Eliminating cloudy areas
Performing geometrical rectification of data
Using in-flight calibration data to calculate satellite-derived radiances
Converting satellite-received radiances into “brightness temperatures”
(i.e., equivalent black body temperatures)
• Evaluating atmospheric correction to determine sea surface temperature.
• Each of these steps will be considered in turn.
• Because there is no point in processing data corresponding to areas
of cloud that are obscuring the surface of the sea, the first step is to
eliminate cloudy areas. Various methods are available for identifying
cloudy areas. Many scenes are just solid cloud and can be rejected
immediately by visual inspection. There would seem to be no need
to improve on this simple technique in these cases. Scenes that are
partially cloudy are much more difficult to handle. One method
involves using an interactive system and outlining the areas of cloud
manually, using a cursor to draw around the cloudy areas and then
having appropriate software organized to reject those areas. Alternatively, one can try to establish automatic methods for the identification of clouds. Several such methods are described and illustrated
for AVHRR data by Saunders (1982). These include the use of the
visible channel with a visible threshold, a local uniformity method,
and a histogram method; the use of 3.7-µm channel data for cloud
identification is also considered. A widely used method is that of
Saunders and Kriebel (1988), but others also exist.
Geometrical rectification of the data is a necessary, though potentially
tedious, process. It can be done using the available ephemeris data (data
describing the satellite orbit); the three-dimensional geometry is complicated
9255_C008.fm Page 180 Saturday, February 17, 2007 12:43 AM
180
Introduction to Remote Sensing
Physical situation
Processing
Digital numbers
Digital numbers
Calibration
Satellite
Satellitereceived
infrared
Satellite-received
infrared
intensity
Invert Planck
distribution
Atmospheric
effects
Brightness
temperature
Water-leaving
infrared
Atmospheric
corrections
Sea
Sea surface
temperature
FIGURE 8.6
Diagram to illustrate the determination of sea surface temperature from satellite thermalinfrared data. (Cracknell, 1997.)
but tractable (see, for example, Section 3.1 of Cracknell [1997]). Or, alternatively, one can choose a set of transformation equations relating the geographical coordinates of a pixel to the scan line and column numbers of the
pixels in the raw data and determine the coefficients in these equations by
a least squares fit to a set of ground control points. Or one can use a combination of both approaches, using the ephemeris data to obtain a first
approximation to the geographical coordinates of a pixel and then using a
very small number of ground control points to refine these values. It is usual
to then resample the data to a standard grid in the geographical projection
system chosen. For further details, see Section 3.1 of Cracknell (1997).
The next step is to convert the digital numbers (in the range of 0 to 1023 in
the case of the AVHRR) output by the scanner and received on the ground
into the values of the satellite-received radiance, L*(k), where k refers to the
spectral channel centered around the wave number k. This involves using
in-flight calibration data to calculate the intensity of the radiation incident on
the instrument. The calibration of the thermal-infrared channels of the AVHRR
is achieved using two calibration sources that are viewed by the scanner
9255_C008.fm Page 181 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
181
between successive scans of the surface of the Earth; these two sources comprise a black-body target of measured temperature on board the spacecraft
and a view of deep space. Taking the scanner data, together with preflight
calibration data supplied by NOAA, the digital data can be converted into
radiances (for details, see Section 2.2 of Cracknell [1997]).
Assuming that the energy distribution of the incident radiation is that
of black-body radiation, one can calculate the temperature corresponding
to that radiation by inverting the Planck radiation formula. This temperature is known as the brightness temperature, Tb. The accuracy that can be
attained in determining the brightness temperature depends on the internal
consistency and stability of the scanner and on the accuracy with which it
can be calibrated. Brightness temperatures can be determined to an accuracy
in the region of 0.1 K from the AVHRR on the NOAA series of polar-orbiting
satellites.
The inversion of the Planck distribution function to obtain the brightness
temperature is then a standard mathematical operation. The satellitereceived radiance, L*(k, Tb), is given by:
n
L * (κ , Tb ) =
∑ B(κ , T )ϕˆ (κ )∆κ
i
b
i
i
(8.32)
i =1
where B(κ i , Tb ) is the Planck distribution function, ϕˆ (κ i ) is the normalized
spectral response function of the detector, n is the number of discrete wave
numbers within the spectral window at which the response of the detector
was measured during the preflight investigation, and ∆k i is the width of the
ith interval within which the response function was measured.
NOAA Technical Memorandum NESS 107 (Lauritson and Porto, 1979)
supplies 60 values of the normalized response function with specified values
of ∆ki. Strictly speaking, the response of a detector may not be a linear
function of the radiance; therefore, one must make a suitable correction for
this (Singh and Warren, 1983). In principle, Equation 8.32 can be solved for
the temperature Tb. However, Tb occurs in the summation on the right-hand
side and the equation cannot be inverted to give an explicit expression for
Tb in terms of L*(k,Tb). Thus, one selects a range of values of Tb that is
appropriate to the sea-surface temperatures likely to be encountered; because
the response function of the detector is known, the value of L*(k,Tb) can be
computed for closely spaced values of Tb within this range and a look-up
table can be constructed. This can then be used to convert the satellitereceived radiance into brightness temperature on a pixel-by-pixel basis
throughout the entire scene that one is analyzing. Or a log-linear relation of
the form:
ln( L * (κ , Tb )) = α +
β
Tb
(8.33)
9255_C008.fm Page 182 Saturday, February 17, 2007 12:43 AM
182
Introduction to Remote Sensing
can be generated, where a and b are parameters that depend on the selected
range of Tb and the absolute value of Tb. This formula can then be used
instead of a look-up table to calculate the brightness temperatures very
quickly for the bulk of the scene (Singh and Warren, 1983).
Having calculated the brightness temperature Tb for the whole of the area
of sea surface in the scene, the atmospheric correction must then be calculated
because the objective of using thermal-infrared scanner data is to obtain information about the temperature or the emissivity of the surface of the land or
sea. It is probably fair to say that the problem of atmospheric corrections has
been studied fairly extensively in relation to the sea but has received little
attention for data obtained from land surface areas. As seen in Section 8.3, in
addition to surface radiance, upwelling atmospheric radiance, downwelling
atmospheric radiance, and radiation from space must be determined. Moreover,
as indicated in Section 8.2, radiation propagating through the atmosphere is
attenuated. These effects are considered separately here.
Figure 8.7 shows the contributions from sea-surface emission, reflected
solar radiation, and upwelling and downwelling emission for the 3.7 µm
channel of the AVHRR; the units of radiance are T.R.U. (where 1 T.R.U. = 1 mW
m–2 sr–1 cm). At this wavelength, the intensity of the reflected solar radiation
is very significant in relation to the radiation emitted from the surface,
whereas atmospheric emission is very small. Figure 8.8 shows data for the
11 µm channel of the AVHRR. It can be seen that the reflected radiation is
of little importance but that the atmospheric emission, though small, is not
entirely negligible.
The data in Figure 8.7 and Figure 8.8 are given for varying values of the
atmospheric transmittance. However, in order to make quantitative corrections to a given set of thermal-infrared scanner data, one must know the
actual value of the atmospheric transmittance or atmospheric attenuation at
the time that the scanner data were collected. Of the three attenuation mechanisms mentioned in Section 8.3.3 — namely, Rayleigh (molecular) scattering,
aerosol scattering, and aerosol absorption by gases — absorption by gases is
the important mechanism in the thermal-infrared region, where water vapor,
carbon dioxide, and ozone are the principal atmospheric absorbers and emitters
(see Figure 2.13). To calculate the correction that must be applied to the
brightness temperature to give the temperature of the surface of the sea, one
must know the concentrations of these substances in the column of atmosphere between the satellite and the target area on the surface of the sea.
Computer programs, such as LOWTRAN, can do this and are based on
solving the radiative transfer equation for a given atmospheric profile. Examples of the results for a number of standard atmospheres are given in Table
8.2. These calculations have been performed for the three wavelengths corresponding to the center wavelengths of channels 3, 4, and 5 of the AVHRR.
Table 8.2 illustrates the variations that can be expected in the atmospheric
correction that needs to be applied to the calculated brightness temperature
for various different atmospheric conditions. In reality, atmospheric conditions, especially the concentration of water vapor, vary greatly both spatially
9255_C008.fm Page 183 Saturday, February 17, 2007 12:43 AM
183
0.36
0.32
0.28
0.24
0.20
0.5
0.6
0.7
0.8
0.9
Contrib. from reflected solar radiation (T.R.U.)
0.40
0.070
0.060
0.050
0.040
0.030
0.020
0.5
0.6
0.7
0.8
0.9
0.6
0.7
0.8
0.9
0.0020
0.20
0.16
0.12
0.08
0.04
Contrib. from reflected atmospheric
emission (T.R.U.)
Contrib. from atmospheric emission (T.R.U.)
Contrib. from sea-surface emission (T.R.U.)
Atmospheric Corrections to Passive Satellite Remote Sensing Data
0.0015
0.0010
0.0005
0
–0.0005
0
–0.04
–0.0010
0.5
0.6
0.7
0.8
0.9
Vertical atmospheric transmittance
0.5
Vertical atmospheric transmittance
FIGURE 8.7
Various components of the satellite-recorded radiance in the 3.7 µm channel of the AVHRR.
(Singh and Warren, 1983.)
and temporally. Considering the temporal variations first, the variation in the
atmospheric conditions with time, at any given place, is implied by the values
given in Table 8.2. It is also illustrated to some extent in Figure 8.9 by the two
lines showing the atmospheric corrections to the AVHRR-derived brightness
temperatures, using radiosonde data 12 hours apart for the weather station
at Lerwick, Scotland. The effect of the spatial variation is also illustrated in
Figure 8.9 by the five lines obtained using simultaneous radiosonde data to
give the atmospheric parameters at five weather stations around the coastline
of the U.K. The calculations were performed using the method of Weinreb
and Hill (1980), which incorporates a version of LOWTRAN. For a seasurface temperature of 15°C (288 K), the correction varies from about 0.5 K
at some stations to about 1.5 K at other stations.
Thus, for a reliable determination of the atmospheric correction, one needs
to use the atmospheric profile that applied at the time and place at which
the thermal-infrared data were collected. The use of a model atmosphere,
9255_C008.fm Page 184 Saturday, February 17, 2007 12:43 AM
Introduction to Remote Sensing
90
80
70
60
50
0.6
0.7
0.8
0.9
50
0.008
0.006
0.004
0.002
Contrib. from reflected atmospheric
emission (T.R.U.)
Contrib. from atmospheric emission (T.R.U.)
0.5
Contrib. from reflected solar radiation (T.R.U.)
Contrib. from sea-surface emission (T.R.U.)
184
40
30
20
10
0.5
0.6
0.7
0.8
0.9
Vertical atmospheric transmittance
0.5
0.6
0.7
0.8
0.9
0.6
0.7
0.8
0.9
0.40
0.30
0.20
0.10
0.5
Vertical atmospheric transmittance
FIGURE 8.8
Various components of the satellite-recorded radiance in the 11 µm channel of the AVHRR.
(Singh and Warren, 1983.)
based on geographical location and season of the year, will not give good
results. Consequently, atmospheric corrections need to be carried out on a
quite closely spaced network of points, if not on a pixel-by-pixel basis. The
ideal source of atmospheric data for this purpose is the TOVS, which is flown
on the same NOAA polar-orbiting meteorological satellites as the AVHRR
and which therefore provides data coincident in both space and time with
the thermal-infrared AVHRR data. For a period, TOVS data were used by
NOAA in the production of their atmospherically corrected sea-surface
temperature maps. However, the use of TOVS data requires a very large
amount of computer time and therefore is expensive. First, it is necessary to
invert the TOVS data to generate an atmospheric profile (using software
based on solving the radiative transfer equation). A calculated atmospheric
9255_C008.fm Page 185 Saturday, February 17, 2007 12:43 AM
185
Atmospheric Corrections to Passive Satellite Remote Sensing Data
TABLE 8.2
Atmospheric Attentuation, Ta, for Various Standard Atmospheres
Atmosphere
Channel
H2O Lines
CO2
03
N2 Cont.
H2O Cont.
(a)
Tropical
1
2(b)
3(c)
1·31
0·86
2·80
0·46
0·30
0·55
0
0
0
0·22
0
0
0·41
3·44
4·89
Midlatitude summer
1
2
3
0·95
0·59
2·05
0·42
0·27
0·49
0
0
0
0·20
0
0
0·26
1·61
2·39
Midlatitude winter
1
2
3
0·38
0·21
0·83
0·36
0·21
0·39
0
0
0
0·17
0
0
0·09
0·23
0·35
Subarctic summer
1
2
3
0·85
0·53
1·87
0·42
0·26
0·47
0
0
0
0·20
0
0
0·24
1·23
1·86
U.S. Standard
1
2
3
0·76
0·48
1·76
0·45
0·29
0·53
0
0
0
0·22
0
0
0·19
0·78
1·20
(a) 1 refers to the 3·7 µm channel
(b) 2 refers to the 11 µm channel
(c) 3 refers to the 12 µm channel
Hemsby 1100
Camborne 1100
20
16
Stornoway 1100
Lerwick 1100
Lerwick 2300
Shanwell 1100
12
Attenuation (K)
08
04
0
–04
276
274
278
280
282
284
286
288
290
292
294
Sea Surface Temperature (K)
–08
FIGURE 8.9
Atmospheric attenuation versus sea surface temperature calculated using radiosonde data for
five stations around the U.K. (Callison and Cracknell, 1984.)
9255_C008.fm Page 186 Saturday, February 17, 2007 12:43 AM
186
Introduction to Remote Sensing
profile is valid for a small cluster of pixels in a scene so that it is not
necessary to recalculate the atmospheric profile for every pixel. However,
it is necessary to recalculate atmospheric profiles many times for a whole
scene. Then it is necessary to use this atmospheric profile to calculate the
atmospheric correction to the brightness temperature (using more software, also based on solving the radiative transfer equation). Finally, as
Figure 8.6 indicates, the programs calculate the brightness temperature
for a given sea surface temperature, whereas what is needed is to determine the sea surface temperature for a given brightness temperature. After
a while, scientists realized that other methods can be applied directly on
a pixel-by-pixel basis, which involve a tiny fraction of the computer time
needed when using satellite sounding data and which give results for the
atmospheric corrections that are certainly no less accurate than those
obtained by using the sounding data. These methods are:
• The multichannel, or two-channel, method
• The multilook method.
The multichannel method seems to have been suggested first by Anding
and Kauth (1970). Although some disagreements are noted in the literature (Maul and Sidran, 1972; Anding and Kauth, 1972), a more complete
theoretical justification for the method was demonstrated by McMillin
(1971). Since then, the method has been applied by a number of workers
(Prabhakara et al., 1974; Rao et al., 1972; and Sidran, 1980; and for further
references see Cracknell [1997] and Robinson [2004]); the advantage of
this method is that all local variations in atmospheric conditions are
eliminated on a pixel-by-pixel basis. The original suggestion of Anding
and Kauth was to use two bands, one between 7 and 9 µm and the other,
on the other side of the ozone band, between 10 and 12 µm. They argued
that since the same physical processes are responsible for absorption in
both of these wave bands, the effect in one ought to be proportional to
the effect in the other. Therefore, one measurement in each wavelength
interval could be used to eliminate the effect of the atmosphere. A more
formal justification of the technique can be obtained by manipulation of
the radiative transfer equation. This involves making several assumptions,
namely:
• That the magnitude of the atmospheric correction to the brightness
temperature is fairly small
• That one only includes the effect of water vapor
• That the transmittance of the atmosphere is a linear function of the
water vapor content.
(For details see, for example, Singh and Warren [1983]). The sea-surface
temperature Ts is then written in the form:
Ts = e0 + e1TB(k1) + e2TB(k2)
(8.34)
9255_C008.fm Page 187 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
187
for a two-channel system, where k1 and k2 refer to the two spectral channels, or:
Ts = e0 + e1TB (κ 1 ) + e2TB (κ 2 ) + e3TB (κ 3 )
(8.35)
for a three-channel system, where k1, k2, and k3 refer to three spectral channels
and e0, e1, e2, and e3 are coefficients that must be determined. In three spectral
intervals, or “windows,” the vertical transmittance of the atmosphere may
be as high as 90%; these windows are from 3 to 5 µm, 7 to 8 µm, and 9.5 to
14 µm (see Figure 2.13). Of these three windows, the 3 to 5 µm window is
the most nearly transparent; unfortunately, this window has proven to be of
less practical value than the others because of the large amount of reflected
solar radiation at this wavelength. The most widely used source of thermalinfrared data from satellites is the AVHRR, which was designed, built, and
flown to obtain operational sea surface temperatures from space. Channel 3
of the AVHRR is in the 3 to 5 µm window and channels 4 and 5 are in the
9.5 to 14 µm window. With the launch of NOAA-7 in 1981, which carried
the first five-channel AVHRR instrument, NOAA implemented the multichannel sea surface temperature algorithms that provided access to global
sea surface temperature fields with an estimated accuracy of 1 K or better
(McMillin and Crosby, 1984). Formulas of the type of Equation 8.35 can be
used with nighttime data from the three thermal-infrared channels. However,
the daytime data from channel 3 contains a mixture of emitted infrared
radiation and reflected solar radiation and, of course, these two components
cannot be separated. Therefore, with daytime data, one can only use a twochannel formula of the type of Equation 8.34. Given that the form of
Equation 8.34 and Equation 8.35 can be justified on theoretical grounds, it is
also possible to derive theoretical expressions for e0, e1, e2, and e3. However,
these expressions inevitably involve some of the parameters that specify the
atmospheric conditions. In practice, therefore, the values of these coefficients
are determined by a multivariate regression analysis fitting to in situ data
from buoys. A tremendous amount of effort has gone into trying to establish
the “best” set of values for the coefficients e0, e1, e2, and e3 (Barton, 1995;
Barton and Cechet, 1989; Emery et al., 1994; Kilpatrick et al., 2001; McClain
et al., 1985; Singh and Warren, 1983; Walton, 1988). It seems that if one relies
on using a universal set of coefficients for all times and places, then the
accuracy that can now be obtained is around 0.6 K. Some more recent work
has been directed toward allowing the values of the coefficients e0, e1, e2, and e3
to vary according to atmospheric conditions and geographical location in an
attempt to improve the results.
We now turn briefly to the multilook method. In this method, one tries to
eliminate the effect of the atmosphere by viewing a given target area on the
surface of the sea from two different directions, instead of in two or three
different wavelength bands, as is done in the multichannel approach just discussed. Attempts have been made to do this by using data from two different
satellites, one being a geostationary satellite and one being a polar-orbiting
9255_C008.fm Page 188 Saturday, February 17, 2007 12:43 AM
188
Introduction to Remote Sensing
satellite (Chedin et al., 1982; Holyer, 1984). However, for various reasons, this
approach was not very accurate. The multilook method has been developed in
the Along Track Scanning Radiometers ATSR and ATSR-2, which have been
flown on the European satellites ERS-1 and ERS-2. Instead of a simple acrosstrack scanning mechanism, which only gives one look at each target area on the
ground, this instrument uses a conical scanning mechanism so that in one
rotation it scans a curved strip in the forward direction (forward by about 55°)
and also scans a curved strip vertically below the spacecraft. Thus it obtains two
looks at each target area on the ground that are almost, but not quite, simultaneous. It is claimed that this instrument produces results with smaller errors
(~0.3 K) than those obtained with AVHRR data using the multichannel methods
(~0.6 K). However, the AVHRR has the great advantage of being an operational
system, of providing easily available data to direct-readout stations all over the
world, and of having a historical archive of more than 25 years of global data.
8.4.3 Passive Microwave Scanner Data
Early passive microwave scanners flown in space include the Nimbus-E
Microwave Spectrometer (NEMS), the Electrically Scanning Microwave
Radiometer (ESMR), and the Scanning Microwave Spectrometer (SCAMS),
which were all flown in the 1970s. They were followed by the SMMR, which
was flown on Seasat and Nimbus-7 (see Section 2.5). Its successor, the Special
Sensor Microwave Imager (SSM/I), was flown on a number of the spacecraft
in the American Defense Meteorological Satellite Program, starting in 1987
(see Section 3.2.1). Subsequently, other microwave radiometers have been
flown in space, including the Tropical Rainfall Monitoring Mission (TRMM)
Microwave Imager (TMI), a joint Japanese-U.S. mission launched in November 1997, and the National Aeronautics and Space Administration’s (NASA’s)
Advanced Microwave Scanning Radiometer (AMSR) and AMSR-E.
One important capability of microwave scanners is the determination of
Earth-surface temperatures, which is based on the same basic principle as
infrared scanners — namely, that they are detecting radiation emitted by the
surface of the Earth. The microwave signal is also governed by the Planck
distribution function, but there are a number of important differences from
the thermal-infrared case.
First, the intensities of the radiation emitted or reflected by the surface of
the Earth in the microwave part of the electromagnetic spectrum are very
small; therefore, any passive microwave remote sensing instrument must be
very sensitive and inevitably have a much larger IFOV than an infrared
scanner. Estimates of relative intensities of reflected solar radiation and emitted radiation from the surface of the Earth are given in Table 2.1. Because
the wavelength of microwaves is very much longer than that for infrared
radiation, the Planck distribution function simplifies for the microwave
region of the spectrum to 2ck2kT for a perfect emitter (see Equation 8.27) or
2eck2kT for a surface with an emissivity of e.
9255_C008.fm Page 189 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
189
Secondly, the spatial resolution of a passive microwave scanner is three or
four orders of magnitude smaller (i.e., the IFOV is three or four times larger)
than for a thermal-infrared scanner. For example, the thermal-infrared channels of the AVHRR flown on the NOAA polar-orbiting series of satellites have
an IFOV of a little more than 1 km2. For the shortest wavelength (frequency
37 GHz) of the SMMR flown on the Nimbus-7 satellite, the IFOV was about
18 km × 27 km, whereas for the longest wavelength (frequency 6.6 GHz) on
that instrument was about 95 km × 148 km. An antenna of a totally unrealistic
size would be required to obtain an IFOV of the order of 1 km2 for microwave
radiation. There are two reasons for this very much larger IFOV. One is that
the signal is very weak. The other is that, unlike for a thermal-infrared scanner,
the theoretical diffraction limit is important for a microwave scanner.
Thirdly, microwave radiation penetrates clouds. The microwaves are
scarcely attenuated at all in their passage through the atmosphere, except in
the presence of heavy rain. This means that microwave techniques can be
used in almost all weather conditions, although one must still apply an
atmospheric correction when extracting sea-surface temperatures. The effect
of heavy rain on microwave transmission is actually exploited by meteorologists using ground-based radars to study rainfall and also in the Tropical
Rainfall Monitoring Mission satellite program.
Fourthly, horizontally and vertically polarized radiation can be received
separately. Thus for the SMMR, for instance, which has five frequency channels, allowing for the two possible polarizations of the radiation that are
received at each frequency, there are ten spectral channels, or bands, altogether. In general, optical and infrared scanners flown on satellites in the
early days were provided with fewer radiance values per pixel than the
SMMR (see Table 8.3).
As discussed in the previous section, the emissivity of land surface varies
depending on the nature of the surface, but the emissivity of water at thermalinfrared wavelengths is not only constant but its value is also very close to
1. For microwave scanning radiometer data, the variability of the emissivity
over land and the very large IFOV mean that the conversion of brightness
temperatures into surface temperatures for land would be extremely difficult. For sea and ice, although the value of the emissivity is quite different
from 1, it is still more or less constant and its value is known. Microwave
TABLE 8.3
Radiance Values per Pixel
Scanner
Number of Channels
Landsat MSS
Landsat TM
AVHRR
Meteosat
SMMR
4 (occasionally 5)
7
5 (sometimes 4)
3
10
9255_C008.fm Page 190 Saturday, February 17, 2007 12:43 AM
190
Introduction to Remote Sensing
scanning radiometer data are therefore used quite widely to provide sea
surface and ice surface temperatures. The value of the emissivity of seawater at microwave frequencies is, unlike in the thermal-infrared case, very
significantly different from 1; there are two values, eH and eV, for horizontally and vertically polarized radiation, respectively. These can actually be
determined theoretically from the Fresnel reflection coefficients, rH and rV,
because εH,V = 1 – rH,V2. The values of rH and rV are:
ρH =
cos θ − e − sin 2 θ
cos θ + e − sin 2 θ
(8.36)
and
ρV =
e cos θ − e − sin 2 θ
e cos θ + e − sin 2 θ
(8.37)
where e is the dielectric constant of seawater and depends on the frequency
and q is the angle of incidence.
There are some significant differences between the derivation of sea
surface temperatures from microwave data and from infrared data. First
of all, there is no need to try to eliminate cloudy areas because the IFOV
is so large that almost all the pixels will contain some cloud. Secondly,
the ephemeris data are adequate for geometrical rectification of the data;
this gives the location of a pixel to within the equivalent of a few kilometers, which is relatively insignificant in relation to the size of the pixels.
Therefore, the use of ground control points, which can be very time consuming, is not necessary. Thirdly, the details of the procedure for the calibration
to determine satellite-received radiances from the raw data are obviously
different, although the principles and importance remain the same.
Fourthly, the conversion of satellite-received radiances into brightness temperatures can be done directly because of the simplification of the Planck
function, which has already been noted. Finally, the atmospheric correction
procedure is different from that in the infrared case. Without going into
details, it is done using the radiative transfer equation (see Section 8.4.1).
To do this, one needs to have information about the atmospheric conditions
(i.e., the profiles of pressure, temperature, and humidity). One could use
model atmospheric profiles for a given area of the world and for a given
season, but they are not likely to yield good results; rather, one must have
the profiles for the actual time of capture of the satellite data. Atmospheric
profiles have been determined for many years by radiosondes but, because
radiosonde stations are relatively sparsely distributed and radiosondes are only
launched at certain fixed times of day, use may be made of sounding instruments
(principally TOVS and ATOVS [see Section 8.4.1]) flown on various satellites for
determining atmospheric profiles.
9255_C008.fm Page 191 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
191
When atmospheric corrections are applied to SMMR data, sea surface
temperatures can be obtained with an estimated error of 1.5 to 2 K (SSM/I
appears to have achieved no better than this). However, with the appearance
of AMSR-E, which was launched in May 2002 on NASA’s AQUA spacecraft,
the possibility of obtaining more-accurate sea surface temperatures is
claimed to be possible.
Because the spatial resolution of passive microwave scanner data is much
lower than that of infrared scanner data, the microwave scanners flown on
satellites are used to obtain frequent measurements of sea surface temperatures on a global scale and are thus very suitable for meteorological and
climatological studies, although they are of no use in studying small-scale
water-surface temperature features in coastal regions. On the other hand,
the spatial resolution of a satellite-flown, thermal-infrared scanner is very
appropriate for the study of small-scale phenomena. It would give far too
much detail for global weather forecasting or climate models and would
need to be degraded before it could be used for that purpose.
The wavelengths of the microwave radiation used in passive radiometry
are comparable in size to many of the irregularities of the surface of the land
or sea. Therefore, remote sensing instruments may provide data that enables
one to obtain information about the roughness of the surface that is being
observed. Passive microwave scanner data can therefore also be used to
study near-surface wind speeds over oceans. The retrieval of wind speed
values is based on empirical models in much the same way that wind speeds
are retrieved from data from active microwave instruments (this has already
been discussed in Chapters 6 and 7).
8.5 Visible Wavelength Scanners
Determination of the values of physical quantities of the Earth’s surface,
whether land or water, from satellite-flown visible and near-infrared data
may involve as many as three stages:
Conversion of the digital data output from the sensors into satellitereceived radiance (in absolute physical units)
Determination of the Earth-surface-leaving radiance from the satellitereceived radiance by performing the appropriate atmospheric
corrections
Use of appropriate models or algorithms relating the surface-leaving
radiance to the required physical quantity.
8.5.1 Calibration of the Data
Calibration of visible and near-infrared data involves determining the absolute values of the radiance incident at a satellite from the digital numbers
9255_C008.fm Page 192 Saturday, February 17, 2007 12:43 AM
192
Introduction to Remote Sensing
in the output data stream. The radiation falls on the scanner, is filtered, and
falls on to detectors; the voltage generated by each detector is digitized to
produce the output digital data, and these data are simply numbers on some
scale (e.g., between 0 and 255 or between 0 and 1023). The problem then is
to convert these digital numbers back into values of the intensity of the
radiation incident on the instrument. The instruments flown in a spacecraft
are, of course, calibrated in a laboratory before they are integrated into a
satellite system prior to the launch of the satellite (see, for instance,
Section 2.2.2 of Cracknell [1997]). Once a spacecraft has been launched, the
calibration may change; this might occur as a result of the violence of the
launch process itself, or it might be caused by the different environment in
space (the absence of any atmosphere, cyclic heating by the Sun and cooling
when on the dark side of the Earth) or to the decline in the sensitivity of the
components with age. Once the satellite is in orbit, the gain of the electronics
(the amplifier and digitizer) can be periodically tested by applying a voltage
ramp (or voltage staircase) to the digitizing electronics. The output is then
transmitted in the data stream. If an instrument has several gain settings,
this test must be done separately for each gain setting. However, the use of
a voltage ramp (or staircase) only checks the digitizing electronics and does
not check the optics and detecting part of the system End-to-end calibration
is achieved by recording the output signal when the scan mirror is pointed
at a standard source of known radiance. For some satellite-flown scanners,
provision has been made for the scanning optics to view a standard source,
either on board the spacecraft or outside the spacecraft (deep space, the Sun,
or the Moon); however, this provision is not always made. For instance,
although in-orbit (in-flight) calibration of the thermal bands of the AVHRR
is available (see Section 8.4.2), it is not available for band 1 and band 2, the
visible and near-infrared bands, of the AVHRR.
Teillet et al. (1990) identified three broad categories of methods for the
postlaunch calibration of satellite-flown scanners that have no on-board or
in-flight calibration facilities. These are:
• Methods based on simultaneous aircraft and satellite observations
of a given area of the ground — The instrument on the aircraft is,
as closely as possible, a copy or imitation of the satellite-flown instrument and can be calibrated before and after the aircraft flight so that
the surface-leaving radiance can be determined and thence the
satellite-received radiance. This method does involve making atmospheric corrections to the data, which is difficult to do accurately,
particularly because the atmospheric paths between the ground and
the aircraft and between the ground and the satellite are quite
different.
• Using a combination of model simulations and satellite measurements — this needs to be done for data collected from a large uniform area of ground that is stable in terms of its physical
9255_C008.fm Page 193 Saturday, February 17, 2007 12:43 AM
Atmospheric Corrections to Passive Satellite Remote Sensing Data
193
characteristics, particularly its reflectance. Suitable areas are either
large areas of desert, such as in New Mexico or Libya, or areas of
open ocean.
• Using statistical procedures on large bodies of data to determine the
trends in the calibration of the scanner — this also involves choosing
data from a large uniform area of ground with stable physical characteristics, particularly its reflectance.
Using these methods, it has been possible to successfully carry out postlaunch calibration of the visible and near-infrared bands of the AVHRR (for
more details, see Section 2.2.6 of Cracknell [1997]).
We now turn to the consideration of Coastal Zone Color Scanner (CZCS)
and SeaWiFS visible and near-infrared data, which are widely used for the
determination of water quality parameters. For the visible bands of CZCS,
in-flight calibration was done by making use of a standard lamp on board
the spacecraft. Every so often the scanner viewed the standard lamp and the
corresponding digital output was included in the data stream; however, this
procedure was not without its problems, particularly those associated with
the deterioration of the output of the lamp (see, for example, Evans and
Gordon [1994] and Singh et al. [1985]). SeaWiFS is a second-generation ocean
color instrument and, in its design, use was made of many of the lessons
learned from its predecessor, the CZCS.
The prelaunch calibration of a scanner such as SeaWiFS involves recording
the output of the instrument when it is illuminated by a standard source in a
laboratory before it is flown in space (see Barnes et al. [1994; 2001] and the
references quoted therein). We have already noted that as time passes while a
scanner is in its orbit, the sensitivity of the detecting system can be expected to
decrease; it is the purpose of the in-orbit calibration to study this decreased
sensitivity. Thus the satellite-received radiance for each band of SeaWiFS is
represented by:
LT (λ ) = (DN − DN0 ){ k2 ( g)α (t0 )[β + γ (t − t0 ) + δ (t − t0 )2 ]−1 }
(8.38)
where l is the wavelength of the band (in nm), DN is the SeaWiFS signal (as
a digital number), DN0 is the signal in the dark, k2(g) is the prelaunch calibration coefficient (in mW cm–2 sr–1 µm–1 DN–1), g is the electronic gain, a (t0)
is a vicarious correction (dimensionless) to the laboratory calibration on the
first day of in-orbit operations (t0), and the coefficients b (dimensionless), l (in
day −1), and d (in day−2) are used to calculate the change in the sensitivity for
the band at a given number of days (t – t0) after the start of operations.
For each scan by SeaWiFS, DN0 is measured as the telescope views the
black interior of the instrument. For SeaWiFS, the start of operations (t0) is
the day of the first Earth image, which was obtained on September 4, 1997.
Values of k2(g) for each band of SeaWiFS were determined from the prelaunch
calibration measurements, and the values of the other coefficients — a(t0),
9255_C008.fm Page 194 Saturday, February 17, 2007 12:43 AM
194
Introduction to Remote Sensing
b, g, and d — were determined from SeaWiFS postlaunch solar and lunar
data (see Barnes et al. [2001] for details). Changes in the sensitivity of the
bands of SeaWiFS are shown in Figure 8.10.
One of the most important lessons learned from the experience gained
from CZCS was the need for a continuous, comprehensive, calibrationevaluation activity throughout the mission (Hooker et al., 1992; McClain
et al., 1992). The processing of the CZCS data set was complicated by the
degradation of the scanner’s radiometric sensitivity, particularly in the visible bands (Evans and Gordon, 1994). For one thing, the internal lamps in
the CZCS did not illuminate the entire optical path of the instrument.
FIGURE 8.10
Changes in the radiometric sensitivity of SeaWiFS as determined from lunar measurements.
(Barnes et al., 2001.)
Therefore, changes in the characteristics of the optical components at the
input aperture of the scanner could not be determined by measurements of
the calibration lamps by the sensor. Moreover, separating changes in the
sensitivity of the sensor from changes in the outputs from the lamps was
difficult. Therefore, Gordon (1987) recommended making frequent observations of the Sun or Moon to determine instrument changes. These two
sources fill the input aperture of the instrument plus all of the elements of
the optical path. Thus the SeaWiFS mission was designed to accommodate
both lunar and solar measurements. When the Sun is used, a diffuser plate
must also be used because the signal would otherwise be so strong that it
would saturate the detectors. Using the Sun also involves one or two problems. First, the Sun’s intensity may vary; however, this can be monitored
from Earth and allowed for. Second, the diffuser plate’s characteristics may
change with age. The solar calibration of SeaWiFS is done daily, whereas
lunar calibration is done on a monthly basis.
8.5.2 Atmospheric Corrections to the Satellite-Received Radiance
There are various situations to be considered regarding atmospheric corrections to data in the visible spectrum. Land-based applications are distinguished from aquatic applications. Much less work has been done on the quantitative importance of atmospheric effects for land-based applications than for aquatic applications, because the atmospheric corrections in land-based studies have previously been regarded as less important than those in aquatic applications. There are two main reasons for this:
• The intensity of the radiation LL(λ) leaving the surface of the land is larger than that leaving the water so that, proportionately, the atmospheric effects are less significant in land-based studies than in aquatic studies that utilize optical scanner data.
• The data used in land-based applications tend to make greater use of the red and near-infrared bands, where the atmospheric effects are less important than at the blue end of the spectrum, which is particularly important in aquatic applications.
Because of this latter reason, many land-based applications of visible and
near-infrared scanner data do not require the values of the data to be converted to absolute physical units. One exception is when visible and near-infrared AVHRR data are used to study long-term global change effects
observed in the normalized difference vegetation index (NDVI) (see for
instance Section 5.4.1 of Cracknell [1997]).
SeaWiFS, which was primarily designed to determine water quality
parameters, has also been used to provide data products from the land
surface and from clouds. For these products, it is much less important to
perform atmospheric corrections because clouds and the surface of the land
are generally much brighter than the oceans, and therefore, the atmospheric
effects are proportionately less important than for the oceans. In addition,
one cannot possibly determine the contribution of atmospheric aerosols to the
top-of-the-atmosphere radiance over the land or clouds, as is done in the
calibration of the instrument for ocean measurements (see below). For the ocean,
the water-leaving radiance is small in the near-infrared region, so that most
of the satellite-received near-infrared radiation comes from the atmosphere
and not from the ocean surface. For land measurements, the near-infrared
surface radiance can be very large, contaminating the radiances that could
be used to determine aerosol type and amount. Therefore, SeaWiFS provides
no information on atmospheric aerosols for regions of land or cloud. The
SeaWiFS Project has developed a partial atmospheric correction for land
measurements that calculates the Rayleigh component of the upwelling
radiance, including a surface pressure dependence for each SeaWiFS band.
Along with this correction, the SeaWiFS Project has incorporated algorithms
to provide land surface properties, using the NDVI and the enhanced vegetation index. In addition, an algorithm has been developed to produce
surface reflectances. Each of these algorithms uses SeaWiFS top-of-the-atmosphere radiances determined by the direct calibration of the instrument. To
date, there is no vicarious calibration for SeaWiFS measurements of the land
or of clouds.
A very important aquatic application of visible and near-infrared scanner
data from satellites is in the study of ocean color and the determination of
the values of chlorophyll concentrations in lakes, rivers, estuaries, and open
seas and of suspended sediment concentrations in rivers and coastal waters.
CZCS was the first visible-band scanner designed for studying ocean color
from space; SeaWiFS was its successor. What makes atmospheric corrections
so important in aquatic studies is the fact that the signal that reaches the
satellite includes a very large component that is due to the atmosphere and
does not come from the water-leaving radiance. Table 8.4 indicates the scale
of the problem. This table shows that, for visible band data obtained over
water, the total atmospheric contribution to the satellite-received radiance
approaches 80% or 90% of the signal received at the satellite. This is a much
more serious problem than for thermal-infrared radiation, which is used to
measure sea surface temperatures (see Section 8.4.3). In the case of thermal-infrared radiation, surface temperatures are in the region of 300 K and the atmospheric effect is equivalent to a few degrees, or to perhaps 1% or 2%, of the signal. As shown in Table 8.4, at optical wavelengths the “noise” or “error” or “correction” comprises 80% to 90% of the signal. It is thus important to make these corrections very carefully.
The various atmospheric corrections to data from optical scanners flown
on satellites have been discussed in general terms in Section 8.3. As mentioned, for visible wavelengths, the absorption by molecules involves ozone
only. In this section, the term “visible” is taken to include, by implication,
the near-infrared region as well, up to a wavelength of about 1.1 µm.
The ozone contribution is relatively small and not too difficult to calculate
to the accuracy required. The most important contributions are Rayleigh and
TABLE 8.4
Typical Contributions to the Signal Received by a Satellite-Flown Visible Wavelength Sensor

             In Clear Water                 In Turbid Water
λ (nm)   TLw (%)   Lp (%)   TLr (%)    TLw (%)   Lp (%)   TLr (%)
440       14.4      84.4      1.2       18.1      80.8      1.1
520       17.5      81.2      1.3       32.3      66.6      1.1
550       14.5      84.2      1.3       34.9      64.1      1.0
670        2.2      96.3      1.5       16.4      82.4      1.2
750        1.1      97.0      1.9        1.1      97.4      1.5

Lw = water-leaving radiance
Lp = atmospheric path radiance
Lr = surface-reflected radiance
T = transmission coefficient
(Adapted from Sturm [1981])
aerosol scattering, both of which are large, particularly toward the blue end
of the optical spectrum. Moreover, because reflected radiation is the concern,
light that reaches the satellite by a variety of other paths has to be considered
in addition to the sunlight reflected from the target area (see Section 8.3.2
and Figure 8.3). It is, therefore, not surprising that the formulation of the
radiative transfer equation for radiation at visible wavelengths is rather
different from the approach used in Section 8.4 for microwave and thermal-infrared wavelengths, where the corrections that have to be made to
the satellite-received radiance to produce the Earth-leaving radiance are
of the order of 1% or 2%. It is accordingly clear that the application of
atmospheric corrections to optical scanner data to recover quantitative
values of the Earth-leaving radiance is very much more difficult and
therefore needs to be performed much more carefully than in the thermal-infrared case.
In aquatic applications, it is the water-leaving radiance that is principally
extracted from the satellite data. From the water-leaving radiance, one can
attempt to determine the distribution and concentrations of chlorophyll and
suspended sediment (see Section 8.5.3). Ideally, the objective is to be able to
do this without any need for simultaneous in situ data for calibration of the
remotely sensed data. The large size of the atmospheric effects (see Table 8.4)
means that the accuracy that can be obtained in the extraction of geophysical
parameters from visible-channel satellite data without simultaneous in situ
calibration data is limited. In the past, some work was done on atmospheric
corrections to satellite data with reference to applications to water bodies —
such as inland lakes, lochs, rivers, estuaries, and coastal waters — using
Landsat or Système pour l’Observation de la Terre (SPOT) data. However,
CZCS was the first visible-band scanner designed for studying ocean color
from space, and SeaWiFS was its first successor.
As previously mentioned, for the visible channels, the atmospheric contribution to the radiance received at a satellite forms a very much greater
percentage of the radiance leaving the target area than is the case for thermal-infrared regions. Thus any attempt to make atmospheric corrections to
visible-channel data using method 3 outlined in Section 8.3, with a model
atmosphere with values of the parameters determined only by the geographical location and the time of the year, is likely to be very inaccurate. To obtain
good results with a model atmosphere with simultaneous meteorological data,
as in method 4 outlined in Section 8.3, the meteorological data would be
required over a much finer network of points than is usually available. It
therefore seems that, although some of the less important contributions to the
atmospheric correction for the visible channels may be estimated reasonably
well using model atmospheres or sparse meteorological data, in order to
achieve the best values for the atmospheric corrections in the visible channels,
one must expect to have to use a multichannel approach for some of the
contributions to the atmospheric correction. Of the various mechanisms mentioned in Section 8.3.3, the aerosol scattering is the most difficult contribution
to determine. We shall summarize, briefly, the approach that has been used
for making atmospheric corrections to CZCS and SeaWiFS data to determine
the value of the water-leaving radiance (Barnes et al., 2001; Cracknell and
Singh, 1980; Eplee et al., 2001; Gordon, 1993; Singh et al., 1983; Sturm, 1981,
1983, 1993; Vermote and El Saleous, 1996; Vermote and Roger, 1996).
We therefore turn to the consideration of the processing of CZCS and
SeaWiFS data for the determination of the water-leaving radiance from satellite-received radiance. The discussion concerns, by implication at least,
large open areas of water and is therefore relevant to marine and coastal
applications. We assume that cloudy areas have been eliminated, either
manually or by using some cloud detection algorithm. One could consider
atmospheric models, but they contain several parameters and the values of
these parameters are unknown for the actual atmospheric conditions that
existed at the time and place that the satellite data were collected. Therefore
an empirical approach must be adopted. The following expression was commonly used in the processing of CZCS data; here, the radiance L(λ) received
by a sensor in a spectral channel with wavelength λ can be expressed as:
L(λ) = {Lw(λ) + Lg(λ)}T(λ) + LpA(λ) + LpR(λ)        (8.39)

where
Lw(λ) = water-leaving radiance
Lg(λ) = Ls(λ) + Ld(λ), with
Ls(λ) = Sun glitter radiance
Ld(λ) = diffuse sky glitter radiance
T(λ) = proper transmittance (the transmittance from the target area to the sensor)
LpA(λ) = aerosol path radiance
LpR(λ) = Rayleigh (molecular) scattering path radiance
Of these quantities, L(λ) is what has been found from the calibration procedure, on a pixel-by-pixel basis. In order to extract Lw(λ), the water-leaving radiance, which is the quantity that contains the useful information about the target area on the surface of the sea, all the other quantities appearing in Equation 8.39 must be evaluated. Methods exist for calculating Ls(λ), Ld(λ), and LpR(λ) directly (see, for example, Sturm [1981]), but the calculation of LpA(λ), the aerosol path radiance, is more difficult; its calculation for CZCS data was considered by Gordon (1978). In this approach, it was
argued that if the water-leaving radiance in the 670 nm wavelength band
from the target area corresponding to the darkest pixel in the scene is assumed
to be 0, then the radiance detected by the remote sensor in that wavelength
channel is due to atmospheric scattering and the glitter only. Then the aerosol
path radiance for this particular wavelength can be evaluated. Moreover, the
aerosol path radiance for any other wavelength can be expressed in terms of
the now known aerosol path radiance in the 670 nm channel (see Equation
8.19). This method, known as the “darkest pixel method,” does have some
problems. For example, the darkest pixel in the extract chosen from the scene
may not be the darkest pixel in the whole scene; unless the choice of the
darkest pixel can be verified each and every time, one cannot be sure of having
found the correct pixel. Moreover, the water-leaving radiance at 670 nm is
not quite 0, even for clear water (see Table 8.4). Obviously, some degree of
arbitrariness exists in defining the darkest pixel in a scene. In spite of this,
Gordon’s darkest-pixel approach, which has subsequently been used by many other workers, gave real hope for the quantitative applicability of CZCS data for marine and coastal waters. Ultimately, the experience of many years of
work on the CZCS data set (in the absence of any follow-on instrument for
about 15 years, i.e., until SeaWiFS was launched in 1997) demonstrated that
this method “was fundamentally flawed, and could never be expected to
deliver reliable results globally” (Robinson, 2004, p. 222). However, with the
limited number of visible and near-infrared bands of CZCS, it was not really
possible to do anything better than this.
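As an illustration of the mechanics of the method (not of the operational CZCS processing chain), the sketch below assumes that the Rayleigh and glitter terms have already been subtracted from the 670 nm band, and that the aerosol path radiance follows an Ångström-type power law in wavelength, standing in for Equation 8.19; the exponent n and the scene values are invented for the example.

```python
import numpy as np

def darkest_pixel_aerosol(l670_residual, wavelengths, n=1.0):
    """Darkest-pixel estimate of the aerosol path radiance.

    l670_residual: 2-D array of 670 nm radiance with the Rayleigh and
    glitter contributions already removed, so that over the darkest pixel
    the remaining signal is attributed entirely to aerosol scattering.
    wavelengths: bands (nm) to which the estimate is extrapolated.
    n: assumed Angstrom-type exponent for the spectral extrapolation.
    """
    lpa_670 = float(l670_residual.min())     # the darkest pixel in the scene
    return {lam: lpa_670 * (670.0 / lam) ** n for lam in wavelengths}

# A toy scene containing one very dark, clear-water pixel:
rng = np.random.default_rng(0)
scene = 0.5 + 0.2 * rng.random((200, 200))
scene[120, 80] = 0.05
print(darkest_pixel_aerosol(scene, [443, 520, 550]))
```

The weaknesses described above are visible even in this sketch: the result depends entirely on whether the scene minimum really corresponds to zero water-leaving radiance.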
CZCS was a pioneering instrument and work done on CZCS data showed
that, in addition to improved arrangements for the in-orbit calibration of the
data, it was also necessary to introduce improvements in the atmospheric
correction procedures. For CZCS, there were simply not enough spectral
bands to enable the atmospheric corrections to be determined very accurately. The result was that, although overall patterns of chlorophyll distribution derived purely from the satellite data itself were generally correct,
one could not determine actual values of the chlorophyll concentrations
much better than to the nearest order of magnitude, except in cases in which
some in situ data was gathered simultaneously with the acquisition of the
satellite data by the scanner. The improvements introduced with SeaWiFS
addressed this problem. With SeaWiFS's increased number of spectral bands,
it became possible to use what is essentially a multichannel technique for
the atmospheric corrections.
For SeaWiFS, a development from the darkest pixel method, using longer
wavelength near-infrared bands and matching with a database of models
using a wide variety of atmospheric aerosols, has been adopted. This
involves using two infrared bands, at 765 nm and 865 nm (i.e., bands 7 and
8), to estimate the aerosol properties of the atmosphere at the time that the
image was generated. These are then extrapolated to the visible bands following Gordon and Wang (1994). The algorithms are expressed in terms of
a reflectance, ρ(λ), defined for any radiance L(λ) at a given wavelength and viewing and solar geometry, as:

ρ(λ) = πL(λ)/(F0 cos θ0)        (8.40)

where F0 is the extraterrestrial solar irradiance and θ0 is the solar zenith angle.
Then the satellite-measured reflectance, ρt(λ), can be written as:

ρt(λ) = ρr(λ) + ρa(λ) + ρra(λ) + T′(λ)ρwc(λ) + T′(λ)ρw(λ)        (8.41)

where the various contributions arise as follows:
ρr(λ) from air molecules (Rayleigh scattering)
ρa(λ) from aerosols
ρra(λ) from molecule-aerosol interactions
ρwc(λ) from the reflectance of sunlight and skylight by whitecaps at the sea surface
ρw(λ) is the water-leaving reflectance, which it is the object of the exercise to determine
T′(λ) is the diffuse transmittance through the atmosphere.
The contributions on the right-hand side of this equation are the same as
those in Equation 8.39, except that an extra term involving molecule-aerosol
interactions has been added. ρr(λ) and ρwc(λ) can be calculated, as was done for the corresponding terms in the CZCS data. Then it is assumed that for bands 7 and 8 (i.e., at 765 nm and 865 nm) the water-leaving reflectance ρw(λ) really is 0 (unlike at 670 nm, where it was only assumed to be 0 in dealing with CZCS data). Thus, the last term on the right-hand side of the equation for ρt(λ) vanishes and the equation can be rearranged to give:
ρa(λ) + ρra(λ) = ρt(λ) − ρr(λ) − T′(λ)ρwc(λ)        (8.42)

If T′(λ) is estimated, the right-hand side of this equation is known and so the value of ρa(λ) + ρra(λ) for the near-infrared can be calculated. At this stage,
a comparison is made with the results of about 25,000 atmospheric radiation
transfer model simulations, using 12 different aerosol models, based on three
types of aerosol with four different values of relative humidity and eight
different aerosol optical thicknesses to generate estimates of {ρa(λ) + ρra(λ)}
corresponding to a range of different Sun and sensor pointing geometries.
The 12 candidate aerosol models have been selected to provide a wide spread
of spectral slopes for:
ε(λ, 865) = ρas(λ)/ρas(865)        (8.43)

where ρas(λ) is the single-scattering aerosol reflectance. ε(410, 865), for example, varies between about 0.7 and 2.5 for different aerosol
types. The best match of the near-infrared spectral slope of the aerosol
reflectance identifies which of the 12 candidate aerosol models is appropriate,
and then its magnitude at 865 nm enables the optical thickness to be estimated. Using these quantities, the model look-up table can be entered again
to provide estimates of the aerosol contributions to all the other bands. Given
the information about aerosol type and optical depth, more accurate estimates of T′(λ) can be made, and ρw(λ) can be obtained for all wavelengths
from Equation 8.41. Finally, from these values, the values of the water-leaving
radiance in each of the bands can be determined, on a pixel-by-pixel basis.
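The model-selection step can be illustrated with a deliberately simplified sketch: here each candidate aerosol model is reduced to a single assumed Ångström-type exponent, whereas the operational lookup table spans twelve models, eight optical thicknesses, and a range of Sun and sensor geometries. All names and numbers below are invented for the illustration.

```python
# Toy candidate aerosol "models", each reduced to one assumed exponent.
CANDIDATES = {"oceanic": 0.1, "maritime": 0.5, "coastal": 0.9, "tropospheric": 1.5}

def select_and_extrapolate(rho_765, rho_865, visible_bands):
    """Match the observed near-infrared spectral slope epsilon(765, 865)
    against the candidates, then extrapolate the aerosol reflectance from
    865 nm down into the visible bands with the winning exponent."""
    eps_observed = rho_765 / rho_865
    best = min(CANDIDATES,
               key=lambda m: abs((865.0 / 765.0) ** CANDIDATES[m] - eps_observed))
    n = CANDIDATES[best]
    rho_aerosol = {lam: rho_865 * (865.0 / lam) ** n for lam in visible_bands}
    return best, rho_aerosol

model, rho_a = select_and_extrapolate(0.012, 0.010, [412, 443, 490, 555])
print(model, {lam: round(r, 4) for lam, r in rho_a.items()})
```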
Since SeaWiFS, a number of other ocean color monitoring systems have been
flown in space (see Robinson [2004]).
To be able to determine accurate values of water quality parameters, principally chlorophyll concentration but also suspended sediment concentration, in lakes, rivers, and coastal waters, both accurate calibration of the
satellite-received data and the atmospheric correction of that data to determine the water-leaving radiance are essential. This combined process is
described as “vicarious calibration” (Eplee et al., 2001). The adoption of a
vicarious calibration of SeaWiFS does not preclude the elements in the original calibration plan (involving prelaunch calibration and in-flight calibration), which is described as the direct calibration of the instrument. The direct
calibration of SeaWiFS (see Section 8.5.1) exists independently of the vicarious calibration. An extensive ongoing program of work on vicarious calibration of SeaWiFS data has been carried out based on the NASA/NOAA
Marine Optical Buoy (MOBY), which is deployed 15 km west of Lanai,
Hawaii (Clark et al., 1997; Hooker and McClain, 2000; McClain, 2000). A
criterion for the choice of location for this work (Gordon, 1998) was that it
should be carried out in a cloud-free air mass with a maritime aerosol that
has an optical thickness of less than 0.1; in addition, water-leaving radiances
over the area must be uniform. The MOBY site was chosen because the
aerosols in the vicinity are marine aerosols that are typical of the open ocean;
the area has proportionally fewer clouds than other typical open-ocean
regions; the waters are typically homogeneous, with low concentrations of
chlorophyll; a sun photometer could be located nearby (on Lanai) to make
in situ aerosol measurements; and logistical support facilities are available
in the vicinity (in Honolulu). Let us assume that the values of the water-leaving radiance in the various SeaWiFS bands have been determined using
the prelaunch and on-board calibration procedures described in Section 8.5.1
and the atmospheric corrections described in this section. The values of the
water-leaving radiances in the visible bands can then be adjusted by comparison with the MOBY measurements of water-leaving radiance. The question arises as to whether the atmospheric parameters derived from
measurements made at the one MOBY site can be applied over the open
oceans globally. The SeaWiFS Calibration and Validation Team’s program
includes testing this application with measured values of the water-leaving
radiance at other sites, and good agreement (to within 5%) has been found
for open-ocean clear waters.
8.5.3 Algorithms for the Extraction of Marine Parameters from Water-Leaving Radiance
In the case of thermal-infrared wavelengths, the rule for extracting the
temperature, albeit only a brightness temperature, was based on a fundamental physical formula — the Planck radiation formula (see Section 8.4.3).
In the case of the extraction of marine physical or biological parameters
from CZCS data, the situation is much less straightforward; it would be
very difficult to obtain from first principles a relationship between the
radiance and the concentration of suspended sediment or of chlorophyll in
the water. Therefore, with CZCS data, no attempt was made to use models
to relate the water-leaving radiance to the values of marine parameters. The
marine parameters commonly studied in this manner are the pigment concentration C (chlorophyll-a and pheophytin-a, mg m⁻³), the total suspended load S (dry mass, g m⁻³), and the diffuse attenuation coefficient K (m⁻¹) for a given λ. Empirical relationships were used and these most commonly took
the form:
M = A(rij)^B        (8.44)

where M is the value of the marine parameter and rij is the ratio of the water-leaving radiances L(λi) and L(λj) in the bands centered at the two wavelengths λi and λj.
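As a concrete example, the sketch below applies Equation 8.44 with the first coefficient set listed in Table 8.5 (A = 0.776, B = −1.329 for the ratio L443/L550); the radiance values fed in are invented.

```python
def pigment_concentration(l443, l550, a=0.776, b=-1.329):
    """Chlorophyll-a plus pheophytin-a concentration (mg m-3) estimated
    from the ratio of water-leaving radiances at 443 nm and 550 nm,
    using the form M = A * (rij)**B of Equation 8.44."""
    return a * (l443 / l550) ** b

# Clear water reflects relatively more blue, so a high ratio implies a low
# pigment concentration, and vice versa:
print(round(pigment_concentration(2.0, 1.0), 3))   # about 0.309
print(round(pigment_concentration(0.5, 1.0), 3))   # about 1.950
```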
Various workers used relations of this type in regression analyses with
log-transformed data for their own data sets. If a significant linear relationship was found, an algorithm of the form in Equation 8.44 was obtained.
Table 8.5 contains a list of such algorithms proposed by several workers
(Sathyendranath and Morel, 1983). In subsequent work during the next few
years, the CZCS data set was worked on very thoroughly by many workers
and further results were obtained (reviews of the work on CZCS are given,
for example, by Gordon and Morel [1983] and Barale and Schlittenhardt
TABLE 8.5
Some Values of Parameters Given by Different Workers for Algorithms for
Chlorophyll and Suspended Sediment Concentrations from CZCS Data

rij            A        B        N      r²

M = Chl a + Pheo a (mg m⁻³)
L443/L550      0.776    −1.329   55     0.91
L443/L520      0.551    −1.806   55     0.87
L520/L550      1.694    −4.449   55     0.91
L520/L670      43.85    −1.372   55     0.88
L440/L550      0.54     −1.13    7      0.96
L440/L550      0.505    −1.269   21     0.978
L440/L520      0.415    −1.795   21     0.941
L520/L550      0.843    −3.975   21     0.941
R440/R560      1.92     −1.80    67     0.97
L443/L550      0.783    −2.12    9      0.94
L443/L520      0.483    −3.08    9      0.88
L520/L550      2.009    −5.93    9      0.95
L443/L550      2.45     −3.89    6      0.61
L443/L550      1.13     −1.71    454
L443/L550      1.216    −2.589

M = Total suspended particles (g m⁻³)
L440/L550      0.4      −0.88           0.92
L440/L520      0.33     −1.09           0.94
L520/L550      0.76     −4.38           0.77
L443/L550      0.24     −0.98           0.86
L520/L550      0.45     −3.30           0.86
L520/L670      5.30     −1.04           0.85

(Data gathered by Sathyendranath and Morel, 1983.)
[1993]). It became apparent that, although one could obtain an algorithm to
fit one set of data, the algorithm and the values of the “constants” A and B
would not be of general validity for other data sets. It became clear that
more work was required in order to better understand the applicability of
the algorithms and, hopefully, determine how to establish the values of the
coefficients in the algorithms for scenes for which simultaneous in situ data
are not available.
Following the flight of CZCS, there was a long period before any other
ocean color scanning instrument was launched into space. Toward the latter
part of this period, attempts were made to develop better algorithms (for a
review, see O’Reilly et al. [1998]). Algorithm development work has taken
two lines. One is to continue to attempt to determine empirical algorithms
of more general applicability than to just one data set. The other is to attempt
to develop some semi-analytical model, which inevitably will contain some
parameters. At present, the empirical models still appear to be more successful (further discussion is given by Robinson [2004]).
9
Image Processing

9.1 Introduction
Much of the data used in remote sensing exists and is used in the form
of images, each image containing a very great deal of information. Image
processing involves the manipulation of images and is used to:
• Extract information
• Emphasize or de-emphasize certain aspects of the information
contained in the image
• Perform statistical or other analyses to extract nonimage information.
Image processing may therefore be regarded as a branch of information
technology. Some of the simpler operations of image processing discussed
in this chapter will be familiar from everyday life; for example, one might be
familiar with contrast enhancement from one’s experience with photography
or television viewing.
9.2 Digital Image Displays
A digital image consists of an array of numbers. Although arrays may be
square, they are quite commonly rectangular. Digital images are likely to
have been generated directly by a digital camera or scanner, or they may
have been generated from an analogue image by a densitometer. Consider
a black-and-white (or grayscale) image first. Each row of the array or matrix
normally corresponds to one scan line. The numbers are almost always
integers and it is common to work with one byte (i.e., one eight-bit number)
for each element of the array, although other numbers of bits are, of course,
possible. This eight-bit number, which must therefore lie within the range 0 to 255, denotes the intensity, or grayscale value, associated with one element (a picture element or pixel) of the image.

FIGURE 9.1
Grayscale wedge.

For a human observer, however, the digital image needs to be converted to analogue form using a chosen
“gray scale” relating the numerical value of the element in the array to the
density on a photographic material or to the brightness of a spot on a screen.
The digital image, when converted to analogue form, consists of an array of
pixels that are a set of identical plane-filling shapes, almost always rectangles,
in which each pixel is all the same shade of gray and has no structure within
it. To produce a display of the image on a screen or to produce a hard copy
on a photographic medium, the intensity corresponding to a given element
of the array is mapped to the shade of gray assigned to the pixel according
to a grayscale wedge, such as that shown in Figure 9.1. A human observer,
however, cannot distinguish 256 different shades of gray; 16 shades would
be a more likely number (see Figure 9.2).

FIGURE 9.2
Image showing (a) 16-level gray scale, (b) 256-level gray scale.

FIGURE 9.3
Sketch of histogram of n(I), the number of pixels with intensity I, against I, for an image with good contrast.

It is possible to produce a “positive” and “negative” image from a given array of digital data, although the
question of which is positive and which is negative is largely a matter of
definition. A picture with good contrast can be obtained if the intensities
associated with the pixels in the image are well distributed over the range
from 0 to 255 — that is, for a histogram such as the one shown in Figure 9.3.
For a color image, it is most convenient to think of three separate digital
arrays, each of the same structure, so that the suffices or coordinates x and y
used to label a pixel are the same in each of the three arrays (i.e., the arrays
are coregistered). Each array is then assigned to one of the three primary
colors (red, green, or blue) of a television or computer display monitor or
to the three primary colors of a photographic film. These three arrays may
have been generated for the three primary colors in a digital camera, in
which case the image is a true-color image. Or they may have been generated
in three separate channels of a multispectral or hyperspectral scanner, in
which case the image is a false-color composite in which the color of a
pixel in the image is not the same as the color on the ground (see Figure 2.9,
for example). The array assigned to red produces a wedge like the one shown
in Figure 9.1, representing the intensity of red to be assigned to the pixels
in the image. A similar wedge occurs for the arrays assigned to green and
blue. This leads to the possibility of assigning any one of 256³ colors, or more
than 16 million colors, to any given pixel. Although many images handled
in remote sensing work are in color, recalling the underlying structure in
terms of the three primary colors is often quite useful because image processing is commonly performed separately on the three arrays assigned to
the three primary colors.
In addition to digital image processing, which is widely practiced these
days, a tradition of analogue image processing also exists. This tradition
dates from the time when photographic techniques were already well
established but computing was still in its infancy.

TABLE 9.1
Common Digital Image Processing Operations

Histogram generation
Contrast enhancement
Histogram equalization
Histogram specification
Density slicing
Classification
Band ratios
Multispectral classification
Neighborhood averaging and filtering
Destriping
Edge enhancement
Principal components
Fourier transforms
High-pass and low-pass filtering

In 1978, the first space-borne synthetic aperture radar (SAR) was flown on Seasat and the output generated
by the SAR was first optically processed or “survey processed” to give
“quicklook” images; digital processing of the SAR data, which is especially
time-consuming, was then carried out later only on useful scenes selected
by inspection of the survey-processed quicklook images. Even now it is still
sometimes useful to make comparisons between digital and optical image
processing techniques. For example, optical systems are often simple in
concept, relatively cheap, and easy to set up and they provide very rapid
image processing. Also, from the point of view of introducing image processing, optical methods often demonstrate some of the basic principles
involved rather more clearly than could be done digitally. This chapter is
concerned only with image processing relating to remote sensing problems.
In the vast majority of cases, remotely sensed image data are processed
digitally and not optically, although in most cases, the final output products
for the user are presented in analogue form. Image processing, using the
same basic ideas, is widely practiced in many other areas of human activity.
Table 9.1 summarizes some of the common digital image processing
operations that are likely to be performed on data input as an array of
numbers from some computer-compatible medium. Most of these processes
have corresponding operations for the processing of analogue images. A
number of formal texts on image processing are also available (see, for
example, Jensen [1996] and Gonzalez and Woods [2002]).
9.3 Image Processing Systems
An image processing system basically consists of a set of computer hardware,
possibly with some specialized peripheral devices, together with a suite of
software to perform the necessary image processing and display operations.
In the early days of digital image processing, some very expensive systems
that used a considerable amount of expensive, purpose-built hardware were
constructed. Now, however, an image processing system is most commonly
a special sophisticated software package constructed to run on standard
personal computer systems. Input is most likely to be in electronic form;
however, digitization of a hard copy image can be done with a flatbed
densitometer or a rotating drum densitometer or by photographing the
image with a digital camera. Hard copy output for ordinary routine work
is commonly produced on a laser printer or an inkjet printer, whereas special
purpose peripherals, such as laser film writers or large plotters, may be used
for the production of very-high-quality output.
9.4 Density Slicing
Density slicing is considered at this stage because it is very closely related
to the discussion on image display in Section 9.2. We consider one digital
array corresponding to a monochrome (black-and-white) image. As previously noted, the number of gray levels that the human eye can distinguish
is quite small. In constructing the 16-level gray scale illustrated in Figure 9.2(a), the range of intensities from 0 to 255, corresponding to the 256 gray levels used in Figure 9.2(b), has been divided into 16 ranges, with each
range having 16 gray levels assigned to it. The objective of density slicing
is to facilitate visualization of features in the image by reducing the number
of gray levels. This is done by redistributing the levels into a given number
of specified slices or bins and then assigning one shade of gray to all the
pixels in each slice or bin. However, because of the difficulty of distinguishing a large number of different shades of gray, it is usual to introduce the
use of color in density slicing. Therefore, in density slicing, one divides the
intensity range (suppose it is 0 to 255) into a number of ranges, or slices,
and assigns a different color to each slice; all pixels within a given slice are
then “painted” in the color assigned to that slice. With 256 gray levels, one
could use 256 different colors, but that would be far too many to be useful.
Once an image has been density sliced there will be, as always, no detail within each pixel; in addition, any distinction between different pixels that have different digital values but fall within the same range, or slice, is lost.
The number of slices, the ranges of pixel values in each slice, and the colors
used depend very much on the nature of the image in question. There may
be some good reasons, associated with the physical origins of the digital
data, for using a different number of ranges or slices and for using unequal
ranges for the different slices. For example, in a Landsat TM image, the
intensity received at the scanner in band 4 (near-infrared) from an area of
the surface of the Earth that is covered by water will be extremely low.
Suppose, for the sake of argument, that all these intensities were lower
than 10. Then a convenient and very simple density slice would be to assign
one color, say blue, to all pixels with intensities in the range 0 to 9 and a second
value, say orange, to all pixels with intensities in the range 10 to 255. In this
way, a simple map that distinguishes land areas or cloud (orange) from water
areas (blue) could be obtained. The scene represented by this image could
then be thought of as having been classified into areas of land or cloud and
areas of water. This example represents a very simple, two-level density slice
and classification scheme. More complicated classification schemes can obviously be envisaged. For example, water could perhaps be classified into
seawater and fresh water and land could be classified into agricultural land,
urban land, and forest. Further subdivisions, both of the water and of the
land area, can be envisaged. However, the chance of achieving a very detailed
classification on the basis of a single TM band is not very good; a much better
classification could be obtained using several bands (see Section 9.7).
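A minimal sketch of this two-level slice in Python (using NumPy) is shown below; it assumes a band-4 array scaled 0 to 255 and the illustrative threshold of 10 used above.

```python
import numpy as np

def two_level_slice(band4, threshold=10):
    """Return an RGB image: blue where band4 < threshold (water),
    orange elsewhere (land or cloud)."""
    blue = np.array([0, 0, 255], dtype=np.uint8)
    orange = np.array([255, 165, 0], dtype=np.uint8)
    return np.where(band4[..., None] < threshold, blue, orange)

band4 = np.random.default_rng(1).integers(0, 256, size=(64, 64))
rgb = two_level_slice(band4)
print(rgb.shape)          # (64, 64, 3)
```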
9.5 Image Processing Programs
In designing an image processing system, a device for displaying an image
on a screen or writing it on a photographic medium would usually be set up
so as to produce a picture with good contrast when there is a good distribution,
or full utilization, of the whole range of intensities from 0 to 255. Thus if the
intensities are all clustered together in one part of the range (for example, as
in Figure 9.4), the image will have very low contrast. The contrast could be
restored by a hardware control, like the contrast control on a television set;
however, it is more convenient to keep the hardware setting fixed and to
perform operations such as contrast enhancement digitally before the final
display or hard copy is produced. For images with histograms such as that
shown in Figure 9.4, the intensities can all be scaled by software to produce a
histogram like the one in Figure 9.3 before producing a display or hard copy.
An important component of any image processing software package is therefore
FIGURE 9.4
Sketch of histogram for a midgray image with very poor contrast.
bound to be a program for generating histograms from the digital data.
Although histograms are important in their own right, they also are very useful
when applied to image enhancement techniques such as contrast stretching,
density slicing, and histogram specification. In addition to constructing a histogram for a complete image, some reason may exist for constructing a histogram either for a small area extracted from the image or for a single scan line.
9.6 Image Enhancement
In this section we discuss three methods to improve the appearance of an
image or to enhance the display of its information content.
9.6.1 Contrast Enhancement
To understand contrast enhancement, it may help to think in terms of a
transfer function that maps the intensities in the original image into intensities in a transformed image with an improved contrast. Suppose that I(x, y)
denotes the intensity associated with a pixel labeled by x (the column number
or pixel number) and y (the row number or scan-line number) in the original
image and that I′(x, y) denotes the intensity associated with the same pixel in the transformed image. Then:

I′(x, y) = T(I(x, y))        (9.1)

where T(I) is the transfer function.
A transfer function might have the appearance shown in Figure 9.5; it is a function of the intensity in the original image, but not of the pixel coordinates x and y.

FIGURE 9.5
Transfer function for contrast enhancement of the image with the histogram shown in Figure 9.4.

FIGURE 9.6
Transfer function for a linear contrast stretch for an image with the histogram shown in Figure 9.4.

This particular function would be suitable for stretching the contrast
of an image for which the histogram was “bunched” between the values
M1 and M2 (see Figure 9.4). It has the effect of stretching the histogram out
much more evenly over the whole range from 0 to 255 to give a histogram
with an appearance such as that of Figure 9.3. The main problem is to decide
on the form to be used for the transfer function T(I). It is very common to
use a simple linear stretch (as shown in Figure 9.6). A function that more
closely resembles the transformation function in Figure 9.5 is shown in
Figure 9.7, where T(I) is made up of a set of straight lines joining the points
that correspond to the intensities M1 and M2 in Figure 9.5.

FIGURE 9.7
A transfer function consisting of three straight-line segments.

These points
can be regarded as parameters that must be specified for any given digital
image; suitable values of these parameters can be determined by inspection
of the histogram for that image.
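The three-segment transfer function of Figure 9.7 reduces, for intensities between M1 and M2, to a simple linear stretch. A minimal NumPy sketch, with M1 and M2 assumed to have been read off the histogram, is:

```python
import numpy as np

def linear_stretch(image, m1, m2):
    """Map intensities below m1 to 0 and above m2 to 255, stretching the
    range m1..m2 linearly over the full 0..255 display range."""
    scaled = (image.astype(float) - m1) * 255.0 / (m2 - m1)
    return np.clip(scaled, 0, 255).astype(np.uint8)

# A synthetic low-contrast image bunched between 90 and 160:
low_contrast = np.random.default_rng(2).integers(90, 161, size=(128, 128))
stretched = linear_stretch(low_contrast, m1=90, m2=160)
print(low_contrast.min(), low_contrast.max(), "->",
      stretched.min(), stretched.max())
```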
The desirability of producing good contrast by having a histogram of the
form of Figure 9.3 has already been mentioned. Contrast stretching can be
regarded as an attempt to produce an enhanced image with a histogram of
the form of Figure 9.3. A completely flat histogram may, however, be produced in a rather more systematic way known as “histogram equalization.”
If I ( = 0, … 255) represents the gray levels in the image to be enhanced, then
the transformation J = T(I) will produce a new level J for every level I in the
original image. We introduce continuous variables i and j, each with range
0, … 1, to replace the discrete variables I and J for the gray levels. The
probability density functions p(i) and p(j) are considered. A continuous transfer function, T(i), can then be thought of in place of the transfer function
T(I), where:
j = T(i)        (9.2)

The graphs of pi(i) against i and of pj(j) against j are simply the histograms of the original image and of the transformed image, respectively. pj(j) and pi(i) are related to each other by

pj(j) dj = pi(i) di        (9.3)

so that

pj(j) = pi(i) di/dj        (9.4)
To achieve histogram equalization, a special transfer function has to be chosen so that pj(j) = 1 for all values of j. Therefore, pi(i) di/dj = 1, or dj/di = pi(i). Integrating this:

    i
j = ∫ pi(w) dw        (9.5)
    0

and comparing this with the definition:

j = T(i)        (9.6)

of the transfer function, T(i), it can be seen that:

       i
T(i) = ∫ pi(w) dw        (9.7)
       0
FIGURE 9.8
Schematic histogram defined by Equation 9.8.
That is, Equation 9.7 defines the particular transfer function that will achieve
histogram equalization. This can be illustrated analytically for a simple
example. Consider pi(i) shown in Figure 9.8, where
pi(i) = 4i,          0 ≤ i ≤ 1/2
      = 4(1 − i),    1/2 ≤ i ≤ 1        (9.8)
The transfer function to achieve histogram equalization is then given by:

T(i) = 2i²,                  0 ≤ i ≤ 1/2
     = −1 + 4i − 2i²,        1/2 ≤ i ≤ 1        (9.9)
This function is plotted in Figure 9.9, and the transformed histogram is
shown in Figure 9.10.
FIGURE 9.9
Transfer function defined in Equation 9.9.
FIGURE 9.10
Transformed histogram obtained from pi(i) in Figure 9.8 using the transfer function shown in Figure 9.9.
With a digital image, a discrete distribution — not a continuous distribution
— is being considered, with i and j replaced by the discrete variables I and J.
An approximation can be made to Equation 9.7:
            I
J = T(I) =  ∑  PI(J)        (9.10)
           J=0
where the original histogram is simply a plot of PI(I) versus I in this notation.
The histogram for the transformed gray levels J generated in this way is the
analogue, for the discrete case, of the uniform histogram produced by
Equation 9.7; it will not be quite uniform, however, because discrete rather
than continuous variables are being used. The degree of uniformity does,
however, increase as the number of gray levels is increased.
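A minimal NumPy sketch of Equation 9.10 follows: the cumulative sum of the normalized histogram provides the transfer function, which is then applied as a lookup table. The test image here is synthetic.

```python
import numpy as np

def equalize(image):
    """Discrete histogram equalization of an eight-bit grayscale image."""
    hist = np.bincount(image.ravel(), minlength=256)
    p = hist / image.size                 # P_I, the normalized histogram
    cdf = np.cumsum(p)                    # the sum in Equation 9.10
    lut = np.round(cdf * 255).astype(np.uint8)
    return lut[image]                     # apply the transfer function

img = np.random.default_rng(3).integers(100, 156, size=(64, 64)).astype(np.uint8)
eq = equalize(img)
print(img.min(), img.max(), "->", eq.min(), eq.max())
```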
The last transformation has been concerned with transforming the histogram of the image under consideration to produce a histogram for which
pj( j) was simply a constant. It is, of course, also possible to define a transfer
function to produce a histogram corresponding to some other given function,
such as a Gaussian or Lorentzian function, rather than just a constant.
9.6.2 Edge Enhancement
Contrast enhancement involves manipulation of the pixel intensities simply
on the basis of the intensities themselves and irrespective of their positions
in the image. Another type of enhancement, called edge enhancement, is
used to sharpen an image by making clearer the positions of boundaries
between moderately large features in the image. There are a variety of
reasons why the edges of a feature in a digital image may not be particularly sharp.

FIGURE 9.11
Neighboring pixels of pixel x, y: the 3 × 3 neighborhood running from (x − 1, y − 1) to (x + 1, y + 1).

To carry out edge enhancement, one must first identify the boundary and then take appropriate action to enhance the boundary. This process
involves what is known as spatial filtering. Edge enhancement is an example
of high-pass filtering — that is, it emphasizes high-spatial-frequency features,
features that involve intensities I(x,y) that change rapidly with x and y (of
which edges are an example). It reduces or blocks low-spatial-frequency
features — that is, features that involve intensities I(x, y) that change only
slowly with x and y. Edges are located by considering the intensity of a pixel
in relation to the intensities of neighboring pixels and a transformation is
made that enhances or sharpens the appearance of the edge. This filtering
in the spatial domain is much simpler mathematically than filtering in the
frequency domain, involving Fourier transforms, which we shall consider
later in this chapter (see Section 9.9).
A linear spatial filter is a filter in which the intensity I(x, y) of a pixel, located
at x, y in the image, is replaced in the output image by some linear combination,
or weighted average, of the intensities of the pixels located in a particular
spatial pattern around the location x, y (see Figure 9.11). This is sometimes
referred to as two-dimensional convolution filtering. Some examples of the
shapes of masks or templates that can be used are shown in Figure 9.12.
For many purposes, a 3 × 3 mask is quite suitable.

FIGURE 9.12
Examples of various convolution masks or templates.

A coefficient is then chosen
for each location in the mask, so that a template is produced; for a 3 × 3 mask,
we therefore have:
Template = | c1  c2  c3 |
           | c4  c5  c6 |
           | c7  c8  c9 |        (9.11)
Therefore, using this template for the pixel (x, y), the intensity I(x, y) would be replaced by:

I′(x, y) = c1 I(x − 1, y + 1) + c2 I(x, y + 1) + c3 I(x + 1, y + 1)
         + c4 I(x − 1, y) + c5 I(x, y) + c6 I(x + 1, y)
         + c7 I(x − 1, y − 1) + c8 I(x, y − 1) + c9 I(x + 1, y − 1)        (9.12)
This mask can then be moved around the image one step at a time and all
the pixel values can be replaced on a pixel-by-pixel basis. At the boundaries
of the image, one needs to decide how to deal with the pixels in the first
and last row and in the first and last column when the mask, so to speak,
spills over the edge of the image. For these rows and columns, it is common
to simply use the values obtained using the mask for the pixels in the adjacent
row or column.
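A minimal NumPy sketch of this procedure is given below. It assumes that array row 0 is the top of the image (so the template rows c1 c2 c3, c4 c5 c6, c7 c8 c9 of Equation 9.11 line up with row offsets −1, 0, +1), and it handles the image boundary by replicating the adjacent row or column, as suggested above.

```python
import numpy as np

def apply_template(image, template):
    """Two-dimensional convolution filtering with a 3 x 3 template,
    replicating edge pixels so the mask never spills over the boundary."""
    rows, cols = image.shape
    padded = np.pad(image.astype(float), 1, mode="edge")
    out = np.zeros((rows, cols))
    for dy in (-1, 0, 1):          # row offset within the mask
        for dx in (-1, 0, 1):      # column offset within the mask
            c = template[dy + 1, dx + 1]
            out += c * padded[1 + dy:1 + dy + rows, 1 + dx:1 + dx + cols]
    return out

mask_a = np.array([[-1, -1, -1], [-1, 9, -1], [-1, -1, -1]])
image = np.random.default_rng(4).integers(0, 256, size=(32, 32))
print(apply_template(image, mask_a).shape)      # (32, 32)
```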
What effect a transformation of this kind has on the image is determined
by the choice of the coefficients in the template. Edge enhancement is only
one of several enhancements that are possible to achieve by spatial filtering
using mask templates. For instance, random noise can be reduced or
removed from an image using spatial filtering (see Section 9.6.3). In order
to identify an edge, the differences between the intensity of a pixel and those
of adjacent pixels are examined by studying the gradient or rate of change
of intensity with change in position. Near an edge, the value of the gradient
is large; whereas some distance from an edge, the gradient is small. Examples
of masks that enhance or sharpen edges in an image include
 −1 −1 −1


Mask A =  −1 9 −1
 −1 −1 −1
(9.13)
 1 −2 1 


Mask B =  −2 5 −2
 1 −2 1 
(9.14)
and
If the pixel intensities in the vicinity of x, y are all of much the same value,
then the value of I′(x, y) after applying one of these masks will only be
altered very slightly; in the extreme case where all nine relevant pixels have
9255_C009.fm Page 218 Tuesday, February 27, 2007 12:39 PM
218
Introduction to Remote Sensing
exactly the same intensity, these masks will cause no change at all in the
pixel intensity at x, y. However, if x, y is at the edge of a feature in the image,
there will be some large differences between I(x, y) and the intensities of
some of the neighboring pixels and the effects of applying mask A or B will
be to cause I′(x, y) to be considerably different from I(x, y) and the appearance of the edge to be enhanced.
These two masks are not sensitive to the direction of the edge; they have
a general effect of enhancing edges. If one is seeking to enhance edges that
are in a particular orientation, then one can be selective. For example, if an
edge is vertical (or north-south [N-S]), then it will be marked by large
differences, in each row, between the intensities of adjacent pixels at the
edge. Thus, at or near the edge, the magnitude of I(x, y) – I(x + 1, y) or
I(x − 1, y) − I(x, y) will be large. Therefore, we can replace I(x, y) by:

I′(x, y) = I(x, y) − I(x + 1, y) + K        (9.15)
The constant K is included to ensure that the intensity remains positive. For
eight-bit data, the value of K is commonly chosen to be 127. If no edge is
present, then I(x, y) and I(x + 1, y) will not be markedly different and the
value of I′(x, y) will be close to the value of K. In the vicinity of an edge, the value of |I(x, y) − I(x + 1, y)| will be larger so that, at the edge, the values of I′(x, y) will be farther away from K (both below and above K). A vertical
edge will thus be enhanced, but a horizontal edge will not. As an alternative
to Equation 9.15, one can use a 3 × 3 mask, such as:
 −1
Mask C =  −1
 −1
0
0
0
1
1
1
(9.16)
to detect a vertical edge. In a similar manner, one can construct masks similar to those in Equation 9.15 and Equation 9.16 to enhance horizontal (E-W) edges or edges running NE-SW or NW-SE. Other templates can be used to
enhance edges that are in particular orientations (see, for example, Jensen
[1996]).
What is behind Equation 9.15 or Equation 9.16 is essentially to look at the
gradient, or the first derivative, of the pixel intensities perpendicular to an
edge. Another possibility is to use a Laplacian filter, which looks at the
second derivative and which, like Mask A, is insensitive to the direction of
the edge. The following are some examples of Laplacian filters:
0
 −1

 0
−1
4
−1
0
−1
0 
 −1
 −1

 −1
−1
8
−1
−1
−1
−1
and
 1 −2 1 


 −2 4 −2 .
 1 −2 1 
9255_C009.fm Page 219 Tuesday, February 27, 2007 12:39 PM
219
Image Processing
All of the above edge-enhancement, high-pass filters are linear filters (i.e., they involve taking only linear combinations of pixel intensities). However, some nonlinear edge detectors also exist. One example is the Sobel Edge Detector, which replaces the intensity I(x, y) of pixel x, y by:

I′(x, y) = √(X² + Y²)        (9.17)

where

X = {I(x − 1, y + 1) + 2I(x, y + 1) + I(x + 1, y + 1)} − {I(x − 1, y − 1) + 2I(x, y − 1) + I(x + 1, y − 1)}        (9.18)

and

Y = {I(x − 1, y + 1) + 2I(x − 1, y) + I(x − 1, y − 1)} − {I(x + 1, y + 1) + 2I(x + 1, y) + I(x + 1, y − 1)}        (9.19)

This detects vertical, horizontal, and diagonal edges. Numerous other nonlinear edge detectors are available (see, for example, Jensen [1996]). An example of an image in which edges have been enhanced is shown in Figure 9.13.
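A minimal NumPy sketch of the Sobel detector of Equations 9.17 to 9.19 is shown below, again taking array row 0 as the top of the image so that y + 1 is the row above the pixel at x, y; the test edge is synthetic.

```python
import numpy as np

def sobel(image):
    """Sobel edge magnitude, Equations 9.17 to 9.19."""
    p = np.pad(image.astype(float), 1, mode="edge")
    r, c = image.shape
    # win(dy, dx) is the array of neighbors I(x + dx, y + dy); with row 0
    # at the top of the image, y + dy corresponds to row index 1 - dy.
    win = lambda dy, dx: p[1 - dy:1 - dy + r, 1 + dx:1 + dx + c]
    # Equation 9.18: the row above minus the row below, weighted 1, 2, 1.
    x = (win(1, -1) + 2 * win(1, 0) + win(1, 1)) \
        - (win(-1, -1) + 2 * win(-1, 0) + win(-1, 1))
    # Equation 9.19: the left column minus the right column, weighted 1, 2, 1.
    y = (win(1, -1) + 2 * win(0, -1) + win(-1, -1)) \
        - (win(1, 1) + 2 * win(0, 1) + win(-1, 1))
    return np.sqrt(x ** 2 + y ** 2)           # Equation 9.17

image = np.zeros((32, 32))
image[:, 16:] = 200.0                         # a sharp vertical edge
print(sobel(image).max())                     # large response at the edge
```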
9.6.3 Image Smoothing
So far in this section, we have considered contrast enhancement and edge
enhancement — that is, ways to improve the quality of an image or the
aesthetic acceptability of an image by accentuating items of detail. Edge
enhancement deals with finding small details in an image and accentuating
or emphasizing them in some way. As noted, edge enhancement is an example
of high-pass filtering. The concept of smoothing an image may then, at first
sight, seem an odd thing to want to do and to be counterproductive in terms
of enhancing the usefulness or acceptability of an image. However, smoothing becomes important when small details in an image are noise rather than
useful information. Indeed, many images contain noise and it is important,
desirable and, sometimes, necessary to remove this noise. There are a number
of different ways of doing this, including
• Spatial filtering
• Frequency filtering using Fourier transforms
• Averaging of multiple images.
In spatial filtering, one can use the mask:
 1 1 1
1

Mask D =  1 1 1
9
 1 1 1
(9.20)
to remove random noise; it may be better to use a larger mask (say, 5 × 5
or even larger).

FIGURE 9.13
Illustration of edge enhancement: (a) original image and (b) enhanced image.

But there is a danger that some useful information may become lost in the smoothing and so a less severe mask could be used,
such as:
1 1
4 2
1 1
1
Mask E =
9 2
 1 1

4
2


1 
2
1 

1
4
4
(9.21)
or

Mask F = (1/10) | 1  1  1 |
                | 1  2  1 |
                | 1  1  1 |        (9.22)
Filters of these types (D, E, and F) can be described as low-pass filters because they preserve the low-spatial-frequency features but remove the high-spatial-frequency, or local, features (i.e., the noise).
The previous discussions assume that one is considering random noise in
the image. If the noise is of a periodic type, such as for instance the six-line
striping found in much of the old Landsat multispectral scanner (MSS) data,
then the idea of neighborhood averaging can be applied on a line-by-line,
rather than on a pixel-by-pixel, basis. However, alternatively, using a Fourierspace filtering technique is particularly appropriate; in this case, the six-line
striping gives rise, in the Fourier transform or spectrum of the image, to a
very strong component at the particular spatial frequency corresponding to
the six lines in the direction of the path of travel of the scanner. By identifying
this component in the Fourier transform spectrum of the image, removing
just this component, and then taking the inverse Fourier transform, one has
a very good method of removing the striping from the image. (Fourier
transforms are discussed in more detail in Section 9.9).
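A minimal NumPy sketch of this Fourier-space destriping is shown below for synthetic six-line striping; the along-track direction is taken as the row axis, and, because a perfectly periodic stripe also has harmonics, multiples of the stripe frequency are notched out as well.

```python
import numpy as np

def destripe(image, period=6):
    """Remove periodic striping by zeroing the stripe frequency (and its
    harmonics) in the row direction of the 2-D Fourier spectrum."""
    spectrum = np.fft.fft2(image.astype(float))
    freqs = np.fft.fftfreq(image.shape[0])          # cycles per line
    for k in range(1, period // 2 + 1):
        notch = np.isclose(np.abs(freqs), k / period)
        spectrum[notch, :] = 0.0                    # leave the mean (f = 0) alone
    return np.fft.ifft2(spectrum).real

striped = np.full((120, 120), 100.0)
striped[::6, :] += 20.0                             # synthetic six-line striping
print(round(striped.std(), 2), "->", round(destripe(striped).std(), 6))
```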
The third method, the averaging of multiple images, is only applicable in
certain rather special circumstances. First of all, one must have a number of
images that are assumed to be identical except for the presence of the random
noise. This means that the images must be coregistered very precisely to one
another. Then what is done for any given pixel, specified in position by
coordinates x and y, is to take an average of the intensities of the pixels in
the position x and y in each of the coregistered images. Because noise is
randomly distributed in the various images, this averaging tends to reduce
the effect of the noise and enhance the useful information. In practice, this
kind of multiple-image averaging is most important in SAR images. A SAR
image contains speckle that is frequently overcome by using subsets of the
raw data to generate a small number of independent images that all contain
speckle. These images will automatically be coregistered and will all contain
speckle, but the speckle in each image will be independent of that in each
of the other images; consequently the averaging of these multiple images
reduces the speckle and enhances the useful information very considerably.
9.7
Multispectral Images
In Section 9.2, we considered a true-color digital image or a false-color
composite of three spectral bands as three coregistered separate arrays. The
intensities corresponding to the three primary colors for a given pixel occupy
9255_C009.fm Page 222 Tuesday, February 27, 2007 12:39 PM
222
Introduction to Remote Sensing
the same position, x, y, in the three arrays. An alternative way of presenting
this image would be to have a single array, in which each element in the
array is a vector {Ir(x, y), Ig(x, y), Ib(x, y),} where the three components Ir(x, y),
Ig(x, y), and Ib(x, y) are the intensities for pixel x, y in the three primary colors
(red, green, and blue) of the display system. The number of intensities
associated with a given pixel need not, however, be restricted to three. Digital
data from MSSs with N bands or channels can be regarded either as a set of
N coregistered arrays or as an array in which the elements of the array are
N-dimensional vectors. If N is greater than 3 and one wishes to display a
false-color composite image, one must choose which three bands to use and
which of these bands to assign to which of the three color guns of the display
or emulsions of the color film.
In the case of data from the MSS on the Landsat series of satellites, for
example, the conventional approach is to assign bands 1, 2, and 4 to the
colors blue, green, and red, respectively. For the Landsat Thematic Mapper,
bands 2, 3, and 4 assigned to these three bands (in the same order, blue,
green, and red) gives a similar representation, as do the Système pour
l’Observation de la Terre bands 1, 2, and 3. In this way, terrain with healthy
vegetation, which has a high intensity of reflected radiation in the nearinfrared (band 4) and very little reflection in the yellow-orange (band 1)
appears red; areas of water with very little reflected radiation in the nearinfrared (band 4) appear blue; and urban areas appear gray. Although this
particular assignment of Landsat bands to colors does not produce an image
with the original color balance of the actual scene, nevertheless, it is widely
used; the origin of this lies in attempting to produce images similar to those
produced in near-infrared color photography. Instead of choosing these three
bands, one could make many other choices. The question of which three bands
to choose to extract the maximum amount of information from the sevenband image is not a question to which there is a unique answer; the answer
may vary from one image to another depending on the surface being
observed (see, for instance, the discussion of the optimum index factor in
Chapter 5 of Jensen [1996]). However, rather than any choice of three particular bands, it is almost always better to find the first three principal
components (see Section 9.8).
Contrast enhancement may be applied separately to each band of a multispectral image. By allowing for separate enhancement of each band (i.e.,
making separate choices of M1 and M2 for each band) in Figure 9.5, it is clear
that an enormous number of different shades of color can be produced in
the image. Operations applied to the individual bands separately, such as
varying the contrast stretch, may be very valuable in extracting information
from the digital data.
The idea of classifying a monochrome image has already been mentioned
in Section 9.4. A multispectral image provides the possibility of obtaining a
more refined classification than is possible with a single spectral band. Different
surfaces on the ground have different spectral reflecting properties. If a
variety of surfaces are considered, the reflected intensities in the different
9255_C009.fm Page 223 Tuesday, February 27, 2007 12:39 PM
223
Image Processing
MSS scan
line
Channel 1 2 3 4 5
cover class
Water
12345
Sand
12345
Forest
12345
Urban
12345
Corn
12345
Hay
FIGURE 9.14
Illustration of the variation of the spectra of different land cover classes in MSS data: band 1,
blue; band 2, green; band 3, red; band 4, near-infrared; band 5, thermal-infrared. (Adapted from
Lillesand and Kiefer, 1987.)
spectral bands will generally be different for a given illumination. This is
illustrated in the sketches shown in Figure 9.14.
Consider, for example, the case of three spectral bands. If the data from
three bands are used to produce a false-color composite image, then a surface
with a given spectral signature can be associated with a particular color in
the image. As previously mentioned, in a conventional Landsat MSS falsecolor composite, healthy vegetation appears red, water appears blue, and
urban areas appear gray. A classification of a Landsat scene could be carried
out on the basis of a visual interpretation of the shade of color and, indeed,
a great deal of environmental work is carried out on this basis.
As an alternative to a visual classification of a false-color image, a classification could be carried out digitally within a computer. In addition to the
advantages associated with using a computer rather than a human to interpret colors, the digital processing approach can handle more than three
bands simultaneously and therefore, hopefully, obtain a more sensitive and
accurate classification. Figure 9.15 represents a three-dimensional space in
which the coordinates along the three axes are the intensities in the three
spectral bands under consideration (bands 1, 2, and 3 of the Landsat MSS).
For any pixel in the scene, a point defined by the values of the intensities in
the three spectral bands for that pixel can be located on this diagram. Ideally,
all the pixels corresponding to a given type of land cover would then be
expected to be represented by a single point in this diagram; in practice,
however, they will be clustered close together. However, these points may
9255_C009.fm Page 224 Tuesday, February 27, 2007 12:39 PM
224
Introduction to Remote Sensing
Band 4
“A”
13 13 13
13 13 13
13
13 131313
10 10
10 10 10
10 10 10
10 10
27
27 27
27 27 27
2727 27
Band 6
Band 5
FIGURE 9.15
Sketch to illustrate the use of cluster diagrams in a three-dimensional feature space for threeband image data.
form a cluster that is distinct from, and quite widely separated from, the
clusters corresponding to pixels associated with other types of land cover.
Therefore, provided the pixels in the scene do group themselves into welldefined clusters that are quite clearly separated from one another, Figure 9.15
can be used as the basis of a classification scheme for the scene. Each cluster
can be identified with a certain land cover, either from a study of a portion
of the scene selected as a training area or from experience. By specifying the
coordinates (i.e., the intensities in the three bands) for each cluster and by
specifying the size and land cover of each cluster, one should be able to
assign any given pixel to the appropriate class. If “training data” are used
from a portion of the scene to be classified, quite good accuracy of classification is obtainable. But any attempt to classify a sequence of scenes obtained
from a given area on a variety of different dates, or a set of scenes from
different areas, with the same training data, should only be made with
extreme caution. This is because, as mentioned, the intensity of the radiation
received in a given spectral band, at a remote sensing platform, depends not
only on the reflecting properties of the surface of the land or sea but also on
the illumination of the surface and on the atmospheric conditions. If satellitereceived radiances, or aircraft-received radiances, are used without conversion to surface reflectances or normalized surface-leaving radiances (where
the normalization takes account of the variation in solar illumination of the
surface), the classification using a diagram of the form of Figure 9.15 is not
immediately transferable from one scene to another.
If more than three spectral bands are available, the computer programs
used to implement the classification that is illustrated in Figure 9.15 can
readily be generalized to work in an N-dimensional space where N is the
number of bands to be used.
9255_C009.fm Page 225 Tuesday, February 27, 2007 12:39 PM
225
Image Processing
9.8
Principal Components
The idea of the principal components transformation follows from the discussion of multispectral images in the previous section. As previously noted,
in general, multispectral images contain more information than images in a
single band. As shown, one can extract information from several bands by
carrying out a multispectral classification procedure. When information from
three bands is combined and represented in a single false-color image, the
image is likely to have a much greater number of distinguishable shades of
the whole spectrum of colors instead of only having different shades of gray.
The information in a multispectral image may be distributed fairly uniformly
among the various bands. The principal components transformation can be
regarded as a transformation of the axes in a diagram such as Figure 9.15.
This principal components transformation may be carried out with the intention of creating a new set of bands in which the information content is not
distributed fairly uniformly among the bands but rather distributed so that
the information content is concentrated as much as possible into a small
number of transformed bands. After carrying out the principal components
transformation, the maximum information content of the image can be found
in the first principal component or transformed band, with decreasing
amounts in subsequent transformed bands. If each transformed band is
viewed as a monochrome image, the first principal component will contain
very high contrast, whereas the last principal component will show virtually
no contrast and be an almost uniform shade of gray.
The principal components transformation was originally proposed by Hotelling (1933) but has been subsequently developed by a number of authors.
The origins of the transformation were in the statistical treatment of data in
psychological problems, long before the possibility of its application to the
treatment of image data in general and remote sensing data in particular
was appreciated. Because many people find the idea of principal components
difficult to understand in the multispectral image case, we shall give a brief
summary of what is involved in the rather simpler psychological case that
was originally considered by Hotelling.
Consider a set of n variables, x1, x2,…xn, attached to each individual of a
population. In Hotelling’s original discussion, he considered the scores
obtained by school children in reading and arithmetic tests. One would
expect that the variables xi will be correlated. Now consider the possibility
of the existence of a more fundamental set of independent variables, that
determine the values the xi will take. These variables are denoted by y1,
y2, …yn to establish a set of relations of the form:
xi = fi(y1, y2 …yn)
where i = 1,2,…n.
(9.23)
9255_C009.fm Page 226 Tuesday, February 27, 2007 12:39 PM
226
Introduction to Remote Sensing
The quantities yi are then called components of the complex depicted by the
tests. Now consider only normally distributed systems of components that
have zero correlations and unit variances; this may be summarized conveniently by writing:
E(yiyj) = dij
(9.24)
where dij is the Kronecker delta.
The argument is simplified by supposing that the functions fi are linear
functions of the components so that:
n
xi =
∑a y
ij
j
(9.25)
j =1
Assuming that the matrix A, with elements aij, is nonsingular, this relationship
can be inverted and the components yk written in terms of the variables xi:
n
yk =
∑b x
ki i
(9.26)
rik = E(xixk)
(9.27)
i =1
If rik is the correlation between xi and xk:
where
rik has the property that
rik = 1 if i = k and for the remaining values rik = rki.
(Hotelling worked in terms of standard measures zi obtained by taking the
deviation of each xi from its mean, xi , and dividing by its standard deviation,
σi, to simplify the formulation.) These conditions on the rik are insufficient
to enable the coefficients aij in the transformation Equation 9.25 to be determined completely. In other words, the choice of components is not completely determined and one has in fact an infinite degree of freedom in
choosing the components yi (or the coefficients aij or the coefficients bij). There
are various methods that enable the coefficients aij to be determined completely. For example, just sufficient of the coefficients might be set equal to
zero to ensure that the remainder are determined, but not overdetermined,
by the conditions imposed by the properties of rik. The method adopted by
Hotelling to resolve the variables into components was this: begin with a
component, y1, whose contribution to the variances of the variables xi is the
greatest possible; then take a second component, y2, that is independent of
y1 and whose contribution to the variances is also as great as possible,
subject to its own independence of y1; then choose y3 to maximize the
variance, subject to y3 being independent of y1 and y2. The remaining
components are determined in a similar manner, with the total not exceeding n in number, although some of the components may be neglected
because their contribution to the total variance is small. This is described
9255_C009.fm Page 227 Tuesday, February 27, 2007 12:39 PM
227
Image Processing
TABLE 9.2
Correlations for Hotelling’s Original Example
i
1
2
3
4
j
1
2
3
4
1.000
0.698
0.264
0.081
0.698
1.000
–0.061
0.092
0.264
–0.061
1.000
0.594
0.081
0.092
0.594
1.000
as the method of principal components. The detailed derivation of the formulae that enable one to determine the principal components, which involves
the use of Lagrange’s undetermined multipliers, is relatively straightforward, though slightly tedious, and is given by Hotelling (1933).
The example given by Hotelling is worth mentioning. This involved taking
some data from the results of tests of 140 schoolchildren and considering
correlations for reading speed (i = 1), reading power (i = 2), arithmetic speed
(i = 3), and arithmetic power (i = 4). The values of the correlations are shown
in Table 9.2. The result of transforming into principal components is given
in Table 9.3. The first principal component seems to measure general ability,
while the second principal component seems to measure a difference
between arithmetical ability on the one hand and reading ability on the other.
Together, these account for 83% of the variance. An additional 13% of the
variance, corresponding to the third principal component, seems to be a
matter of speed versus deliberation. The remaining contribution to the variance, associated with the fourth principal component, is negligible. One
would gain a very good idea of the information content of the results of the
tests on the schoolchildren from only the first two principal components; the
third component contains relatively little information and the fourth component almost nothing at all.
The approach can now be reformulated in terms of multispectral images.
Suppose that Ii (p,q) denotes the intensity, in the band labeled by i, associated
with the pixel in column p of row q of the image. Assuming that the image
is a square N by N image, so that 1 ≤ p ≤ N and 1 ≤ q ≤ N, and that there are
TABLE 9.3
Principal Components for Hotelling’s Original Example
Root
% of total variance
Reading speed
Reading power
Arithmetic speed
Arithmetic power
Y1
Y2
Y3
Y4
Totals
1.846
46.5
0.818
0.695
0.608
0.578
1.465
36.5
–0.438
–0.620
–0.674
.660
0.521
13
–0.292
0.288
–0.376
0.459
0.167
4
0.240
–0.229
–0.193
0.143
3.999
100
9255_C009.fm Page 228 Tuesday, February 27, 2007 12:39 PM
228
Introduction to Remote Sensing
n bands, so that 1 ≤ i ≤ n, image data in the form of two-dimensional arrays
take the place of population parameters. The complete range of subscripts
1 ≤ p ≤ N and 1 ≤ q ≤ N now corresponds to the population and each band
image corresponds to one of the parameters measured for the population.
A set of n intensity values now exists corresponding to the n bands of the
multispectral scanner, for each value of the pair of subscripts p and q. A particular
pair (p,q) in this case is the analogue of one member of the population in the
original psychological formulation of Hotelling.
Each band of the image can be thought of as a one-dimensional array or
vector, xi, or xi(k), where 1 ≤ k ≤ N2, instead of thinking of each band of the
image as a two-dimensional array, Ii(p,q). For the sake of argument, one
can suppose that the first N components of xi are constructed from the first
column of Ii(p, q), the second N components from the second column, and
so on. Thus:
xi = { Ii (1, 1), Ii (1, 2), ... Ii (1, N ), ... Ii ( N , 1), Ii ( N , 2), ... Ii ( N , N )}
(9.28)
All the image data are now contained in this set of n vectors xi, where each
vector xi is of dimension N2.
The covariance matrix of the vectors xi and xj is now defined as
(Cx )ij = E{( xi − xi )( x j − x j )′}
(9.29)
where xi = E(xi), the expectation or mean, of the vector xi, and the prime is
used to denote the transpose.
From the data, the mean and the variance, (Cx)ii, can also be estimated:
N2
∑
1
xi = 2
xi ( k )
N k =1
(9.30)
and
1
(Cx )ii = 2
N
N2
∑
k =1
1
{ xi ( k ) − xi }{ xi ( k ) − xi }′ = 2
N
N2
∑ x (k)x (k)′ − x x ′
i
i
i i
(9.31)
k =1
The mean vector will be of dimension n and the covariance matrix will be
of dimension n × n.
The objective of the Hotelling transformation is to diagonalize the covariance
matrix — that is, to transform from a set of bands that are highly correlated with
one another to a set of uncorrelated bands or principal components. In order to
achieve the required diagonalization of the covariance matrix, a transformation
9255_C009.fm Page 229 Tuesday, February 27, 2007 12:39 PM
229
Image Processing
is performed using a matrix, A, in which the elements are the components of
the normalized eigenvectors of Cx. That is:
 e11

 e21

A =  ...

 ...

 en1
e12
e22
...
...
en 2
... e1n 

... e2 n 
... ... 

.... ... 

... enn 
(9.32)
where eij is the jth component of the ith eigenvector.
The Hotelling transformation then consists of replacing the original vector
xi by a new vector yj, where:
yi = A( x j − x j )
(9.33)
and the transformed covariance matrix Cy , which is now diagonal, is related
to Cx by:
C y = AC x A′
(9.34)
where A′ denotes the transpose of A.
The example of a multispectral image containing areas corresponding to
water, vegetation-covered land, and built-over land might be used to indicate
what is involved in a slightly more concrete fashion. To distinguish among
these three categories, one could attempt to identify each area in the data
from a single band. One could also attempt to carry out a multispectral
classification, in which case, some evidence external to the digital data of
the image itself would be needed to identify the classes. By using the principal
components transformation, the maximum discrimination between different
classes can be achieved without any reference to external evidence outside
the data set of the image data itself.
9.9
Fourier Transforms
The use of Fourier transforms for the removal of noise from images is an
accepted method of image processing. To establish the notation, we write
the Fourier transform F(u) of a function f(x) of one variable x. Let us consider
one row of an image; f(x) denotes the intensity or brightness value of the
9255_C009.fm Page 230 Tuesday, February 27, 2007 12:39 PM
230
Introduction to Remote Sensing
pixel with coordinate x and we suppose that there are M pixels in the row.
The Fourier transform F(u) is then given by:
 1  M −1

 
F(u) =  
f ( x)exp −2π i  ux  
 M
 M  x= 0

(9.35)
 1  M −1
F( k ) =  
f ( x)exp{−2π ik x}
 M  x= 0
(9.36)
∑
or
∑
where k = u/M.
Thus, the function f(x) is composed of, or synthesized from, a set of harmonic
(sine and cosine) waves that are characterized by k. k, which is equal to 1/l
(where l is the wavelength), can be regarded as a spatial frequency; it is also
commonly referred to as the wave number because it is the number of
wavelengths that are found in unit length (i.e., in 1 m). Thus:
k=
u 1
=
M λ
(9.37)
M
.
u
(9.38)
or
λ=
A small value of u (also a small value of k) corresponds to a long wavelength;
thus, it characterizes a component that varies slowly in space (i.e., as one
moves along the row in the image). A large value of u (also a large value of k)
corresponds to a small wavelength; thus, it characterizes a component that
varies rapidly in space (i.e., as one moves along the row in the image). Thus,
F(u) represents the contribution of the waves of spatial frequency u to the
function f(x), in this case to the intensities of the pixels along one row of the
image. The Fourier transform for a very simple function of one variable:
f(x) = 0 – ∞ < x < a and a < x < ∞
=1 –a<x<a
(9.39)
is shown in Figure 9.16; this will be recognized in optical terms as corresponding to the intensity distribution in the diffraction pattern from a single slit.
An important property of Fourier transforms is that they can be inverted.
Thus, if the Fourier transform F(u) is available, one can reconstruct the
function f(x):
M −1
f ( x) =



∑ F(u)exp +2π i  uxM  
u= 0
(9.40)
9255_C009.fm Page 231 Tuesday, February 27, 2007 12:39 PM
231
Image Processing
F(u)
u
0
FIGURE 9.16
Standard diffraction pattern obtained from a single slit aperture; u = spatial frequency, F(u) =
radiance.
where the plus sign is included to emphasize the difference in sign in the
exponent between this equation and Equation 9.35.
An image, of course, is two-dimensional, so one must also consider a
function f(x, y) that denotes the intensity of the pixel with coordinates x and y;
f(x, y) is what we have previously called I(x, y). Then the Fourier transform
F(u, v) of the function f(x, y) is given by:
 1   1  M −1 N −1



F(u, v) =    
f ( x , y)exp −2π i  ux + vy  
 M N 
 M   N  x= 0 y= 0

∑∑
(9.41)
where M is the number of columns and N is the number of rows in the image.
The image can then be reconstructed by taking the inverse Fourier transform:
M −1 N −1
f ( x , y) =



∑ ∑ F(u, v)exp +2π i  uxM + vyN  
(9.42)
u= 0 v = 0
An example of a Fourier transform F(u,v) for a function f(x, y) of two variables, x and y, is shown in Figure 9.17; this is the two-dimensional analogue
F (u, v)
v
FIGURE 9.17
Example of two-dimensional Fourier transform.
u
9255_C009.fm Page 232 Tuesday, February 27, 2007 12:39 PM
232
Introduction to Remote Sensing
of the Fourier transform shown in Figure 9.16. The Fourier transform F(u, v)
contains the spatial frequency information of the original image.
So far, in Equation 9.40 and Equation 9.42, we have achieved nothing
except the regeneration of an exact copy of the original image. We now turn
to filtering. The range of the allowed values of u is from 0 to M – 1 and of
v is from 0 to N – 1; thus, there are M × N pairs of values of u and v. If,
instead of using all the components F(u, v), one selects only some of them to
include in an equation like Equation 9.42, one will obtain a function that is
recognizable as being related to the original image but has been altered in
some way. Low values of u and v correspond to slowly varying components
(i.e., to low-spatial-frequency components); high values of u and v correspond to rapidly varying components (i.e., to high-spatial-frequency components). If instead, of including all the terms F(u,v) in Equation 9.42, one
only includes the low frequency terms, one will obtain an image that is like
the original image, but where the low frequencies are emphasized and the
high frequencies are de-emphasized or removed completely; in other words,
one can achieve low-frequency filtering. Similarly, if one includes only the
high-frequency terms, one will obtain an image that is like the original image
but where the high frequencies are emphasized and the low frequencies are
de-emphasized or removed completely; in other words, one achieves highfrequency filtering. We now take the Fourier transform F(u, v) of the original
image f(x, y) and construct a new function G(u, v) where:
G( u , v ) = H ( u , v ) F( u , v )
(9.43)
and where H(u,v) represents a filter that we propose to apply. Then we
generate a new image g(x, y), where:
M −1 N −1
g( x , y) =



∑ ∑ G(u, v)exp +2π i  uxM + vyN  
u= 0 v = 0
So, to summarize:
f (x, y)
original image
↓
take Fourier transform
F( u , v )
Fourier transform
↓
multiply by filter H ( u , v )
G( u , v ) = H ( u , v ) F( u , v )
↓
take inverse Fourier transform
g( x , y )
processed/filtered image.
(9.44)
9255_C009.fm Page 233 Tuesday, February 27, 2007 12:39 PM
233
Image Processing
H (u, v)
0
d (u, v)
(a)
H (u, v)
1
1/ 2
0
d (u, v)
(b)
H (u, v)
1
0
d (u, v)
(c)
H (u, v)
1
0
d (u, v)
(d)
FIGURE 9.18
Four filter functions: (a) ideal filter; (b) Butterworth filter; (c) exponential filter; and (d) trapezoidal filter.
Sketches of four examples of simple low-pass filters H(u,v) are illustrated
in Figure 9.18, where
d( u , v ) = u 2 + v 2
(9.45)
These are relatively simple functions; more complicated filters can be used
for particular purposes.
9255_C009.fm Page 234 Tuesday, February 27, 2007 12:39 PM
234
Introduction to Remote Sensing
F ( u, v)
v
u
FIGURE 9.19
Fourier transform for a hypothetical noise-free image.
Let us consider the case of random noise in an image. Suppose a certain
hypothetical noise-free image gives rise to the Fourier transform shown in
Figure 9.19. In this transform, the central peak is very much larger than any
other peak and the size of the peaks decreases as one moves further away
from the center. If some random noise is now introduced into the image and
a transform of the noisy image is taken, a transform such as that shown in
Figure 9.20 will be obtained. The size of the off-center peaks has now
increased relative to the size of the central peak, and the peaks do not
necessarily become smaller as one moves further away from the origin. A
suitable filter to apply to this transform to eliminate the noise from the
regenerated image would be a low-pass filter that allows large discrete
maxima (for small values of u and v) to pass but blocks small peaks.
In addition to random noise, one may wish to remove some other blemishes
from an image. One commonly encountered example is that of striping,
which is found in some Landsat MSS images. The Landsat MSS is constructed in such a way that six scan lines of the image are generated
simultaneously using an array of six different detectors for each spectral
band, 24 detectors in all. Although the six detectors for any one band are
F (u, v)
v
u
FIGURE 9.20
Fourier transform shown in Figure 9.19 with the addition of some random noise.
9255_C009.fm Page 235 Tuesday, February 27, 2007 12:39 PM
Image Processing
235
FIGURE 9.21
Example of a Landsat MSS image showing the 6-line striping effect, band 5, path 220, row 21,
of October 24, 1976, of the River Tay, Scotland. (Cracknell et al., 1982.)
nominally identical, they are inevitably not exactly so and, consequently,
there may be a striping with a periodicity of six lines in the image (see Figure
9.21). If one takes the Fourier transform of this image, there will be a very
large single peak in F(u, v) for one particular pair of values (u, v) = (0, N/6).
u = 0 because the wave’s “direction” is in the y-axis direction and v corresponds to a spatial frequency wave with a wavelength of six scan lines or pixel
edges (i.e., = N/v = 6 [see Equation 9.38]) and therefore v = N/6. If a filter H(u,v)
is used to remove this peak, then the reconstructed image g(x, y) will be the same
as the original image except that the striping will have been removed.
Although these days almost all work with Fourier transforms is performed
digitally, it is nevertheless interesting to consider the optical analogue, not
just for historical reasons but also because in some ways it helps one realize
what is happening. If an image is held on a film transparency, the Fourier
transform can be obtained optically. Irrespective of the actual method (digital
or optical) used for performing the Fourier transform, the original function
can be reconstructed by performing the inverse Fourier transform. For a
function f(x, y) of two variables that represents a perfect image, the process
of taking the Fourier transform and then doing a second transformation on
this transform to regenerate the original image can be expected to lead to a
degeneration of the quality of the image. If optical processing is used, the
degradation arises from aberrations in the optical system; if digital processing is used, the degradation arises from rounding errors in the computer
and from truncation errors in the algorithms. It may, therefore, seem strange
that the quality of an image might be enhanced by taking a Fourier transform
9255_C009.fm Page 236 Tuesday, February 27, 2007 12:39 PM
236
Introduction to Remote Sensing
of the image and then taking the inverse Fourier transform of that transform
to regenerate the image again. However, the basic idea is that it may be
easier to identify spurious or undesirable effects in the Fourier transform
than in the original image. These effects can then be removed. This is a form
of filtering but, unlike the filtering discussed in Section 9.6, this filtering is
not carried out on the image itself (i.e., in the spatial domain), but on its
Fourier transform (i.e., in the frequency domain [by implication, the spatial
frequency domain]). Having filtered the transform to remove imperfections,
the image can then be reconstructed by performing the inverse Fourier
transform. An improvement in the quality of the image is then often obtained
in spite of the optical aberrations or numerical errors or approximations.
In taking a Fourier transform of a two-dimensional object, such as a film
image of some remotely sensed scene, one is analyzing the image into its
component spatial frequencies. This is what a converging lens does when an
object is illuminated with a plane parallel beam of coherent light. The complex
field of amplitude and phase distribution in the back focal plane is the Fourier
transform of the field across the object; in observing the diffraction pattern or
in photographing it, one is observing or recording the intensity data and not
the phase data. Actually, to be precise, the Fourier transform relation is only
exact when the object is situated in the front focal plane; for other object
positions, phase differences are introduced, although these do not affect the
appearance of the diffraction pattern. It will be clear that, because rays of light
are reversible, the object is the inverse Fourier transform of the image. The
inverse transform can thus be produced physically by using a second lens. As
already mentioned, the final image produced would, in principle, be identical
to the original object, although it will actually be degraded as a result of the
aberrations in the optical system. This arrangement has the advantage that,
by inserting a filter in the plane of the transform, the effect of that filter on the
reconstructed image can be seen directly and visually (see Figure 9.22).
FIGURE 9.22
Two optical systems suitable for optical filtering. (Wilson, 1981.)
9255_C009.fm Page 237 Tuesday, February 27, 2007 12:39 PM
237
Image Processing
The effects that different types of filters have when the image is reconstructed can thus be studied quickly and effectively. This provides an example of a situation in which it is possible to investigate and demonstrate effects
and principles much more simply and effectively with optical image processing techniques than with digital methods.
The effect of some simple filters can be illustrated with a few examples
that have been obtained by optical methods. A spatial filter is a mask or
transparency that is placed in the plane of the Fourier transform (i.e., at T
in Figure 9.22), and various types of filter can be distinguished:
•
•
•
•
A blocking filter (a filter that is simply opaque over part of its area)
An amplitude filter
A phase filter
A real-valued filter (a combination of an amplitude filter and a phase
filter, where the phase change is either 0 or p)
• A complex-valued filter that can change both the amplitude and the
phase.
A blocking filter is, by far, the easiest type of filter to produce. Figure 9.23(a)
shows an image of an electron microscope grid, and Figure 9.23(b) shows
(a)
(b)
(c)
(d)
(e)
(f )
FIGURE 9.23
(a) The optical transform from an electron microscope grid; (b) image of the grid; (c) filtered
transform; (d) image due to (c); (e) image due to zero-order component and surrounding four
orders; and (f) image when zero-order component is removed. (Wilson, 1981.)
9255_C009.fm Page 238 Tuesday, February 27, 2007 12:39 PM
238
Introduction to Remote Sensing
FIGURE 9.24
Raster removal. (Wilson, 1981.)
its optical Fourier transform. Figure 9.23(c) shows the transform with all the
nonzero ky components removed. Consequently, when the inverse transform
is taken, no structure remains in the y direction (see Figure 9.23[d]). The
effects of two other blocking filters are shown in Figure 9.23(e) and (f). The
six-line striping present in Landsat MSS images has already been mentioned.
By using a blocking filter to remove the component in the transform corresponding to this striping, one can produce a destriped image. The removal
of a raster from a television picture is similar to this (see Figure 9.24). One
might also be able to remove the result of screening a half-tone image; the
diffraction pattern from a half-tone object contains a two-dimensional
arrangement of discrete maxima, with the transform of the picture centered
on each maximum. A filter that blocks out all except one order can produce
an image without the half-tone. This approach can also be applied to the
smoothing of images that were produced on old-fashioned computer output
devices, such as line printers and teletypes (see Figure 9.25).
FIGURE 9.25
Half-tone removal. (Wilson, 1981.)
9255_C009.fm Page 239 Tuesday, February 27, 2007 12:39 PM
Image Processing
239
FIGURE 9.26
Edge enhancement by high-pass filtering. (Wilson, 1981.)
The more high spatial frequencies present in the Fourier transform, the
more fine detail can be accounted for in an image. As previously noted (see
Section 9.6.2), a high-pass filter that allows high spatial frequencies to pass
but blocks the low spatial frequencies leads to edge enhancement of the
original image because the high spatial frequencies are responsible for the
sharp edges (see Figure 9.26).
One final point is worth mentioning before we leave Fourier transforms.
There are many other situations apart from the processing of remotely sensed
images in which Fourier transforms are used in order to try to identify a
periodic feature that is not very apparent from an inspection of the original
image or system itself. This is very much what is done in X-ray and electron
diffraction work in which the diffraction pattern is used to identify or quantify
the periodic structure of the material that is being investigated. Similarly,
Fourier transforms of images of wave patterns on the surface of the sea,
obtained from aerial photography or from a SAR on a satellite, are used to
find the wavelengths of the dominant waves present. Because dealing with
a representation of the Fourier transform as a function of two variables using
three dimensions in space is inconvenient, it is more common to represent
the Fourier transform as a grayscale image in which the value of the transform F(u,v) is represented by the intensity at the corresponding point in the
u,v plane. Such representations of the Fourier transform are very familiar to
physicists and the like who encounter them frequently as films of optical,
X-ray, or electron diffraction patterns.
9255_C009.fm Page 240 Tuesday, February 27, 2007 12:39 PM
9255_C010.fm Page 241 Tuesday, February 27, 2007 12:46 PM
10
Applications of Remotely Sensed Data
10.1 Introduction
Remotely sensed data can be used for a great variety of practical applications,
all of which relate, in general, to Earth resources. For convenience, and
because the innumerable applications are so varied and far reaching, in this
chapter these applications are classed into major categories, each coming
under the purview of some recognized professional discipline or specialty.
These categories include applications to the:
•
•
•
•
•
Atmosphere
Geosphere
Biosphere
Hydrosphere
Cryosphere.
This arrangement is not entirely satisfactory because some disciplines, such
as cartographic mapping, have their own sets of unique applications; however, these disciplines also rely upon observations and measurements that
overlap with, and are of mutual interest to, other disciplines. Because many
examples of applications of remotely sensed data exist, their treatment here
can only be of a cursory nature. Furthermore, the set of applications that is
described in this chapter is by no means exhaustive. Many additional examples exist, both outside and within the categories mentioned (see Table 1.2).
10.2 Applications to the Atmosphere
10.2.1
Weather Satellites in Forecasting and Nowcasting
Weather forecasters need access to information from large areas as quickly
and as often as possible because weather observations rapidly become outdated. Satellites are best able to provide the kinds of data that satisfy these
241
9255_C010.fm Page 242 Tuesday, February 27, 2007 12:46 PM
242
Introduction to Remote Sensing
requirements in terms of both coverage and immediacy. A good description
of the current weather situation is essential to successful short-period
weather forecasting, particularly for forecasting the movement and development of precipitation within 6 hours. Satellite pictorial data are particularly useful in that they provide precision and detail for short-period weather
forecasting. The data allow synoptic observations to be made of the state of
the atmosphere, from which a predicted state may be interpolated on the
basis of physical understanding of, and past experience with, the way in
which the atmosphere behaves.
Meteorologists have been making increasing use of weather satellite data
as aids for analyzing synoptic and smaller-scale weather systems since 1960.
The use and importance of satellite data has increased with the continued
improvement of satellite instrumentation. They have also increased because
of the extra dependence placed on them following the reduction in the
number of ocean weather stations making surface and upper-air observations. Indeed, in regions where more-conventional types of surface and
upper-air observations are few or lacking — for example, oceanic areas away
from main airline routes and interiors of sparsely populated continents —
satellite data at times provide the only current or recent evidence pertaining
to a particular weather system.
Satellite observations are now regularly used in weather forecasting and
what is known as “nowcasting,” alongside observations made from land
stations, ships, and aircraft and by balloon-borne instruments. Commonly,
weather satellites produce visible and infrared images. These are the pictures
normally associated with television presentations of the weather. The relative
importance of the satellite observations depends on the weather situation.
A skilled forecaster has a very good understanding of the relationship
between the patterns in maps of temperature, pressure, and humidity and
the location (or absence) of active weather systems. Although there is not a
unique relationship between a particular cloud system and the distribution
of the prime variables, the relationships are fairly well defined and confined
within certain limits. This means that the forecaster can modify the analyses
maps to be consistent with the cloud systems as revealed by the satellite data.
“Nowcasting” is the real-time synthesis, analysis, and warning of significant
— chiefly hazardous — local and regional weather based on a combination
of observations from satellites, ground-based radar, and dense ground networks reporting through satellites. The trend toward nowcasting, enabled
by remote sensing technologies, is developing as a response to the need for
timely information in disaster avoidance and management and for numerical
models of the atmosphere. The improvement of flash-flood and tornado
warnings and the monitoring of the dispersal of an accidental radioactive
release illustrate the call on immediate weather information.
The first World Meteorological Organization World Weather Research Programme Forecast Demonstration Project (FDP) with a focus on nowcasting was
conducted in Sydney, Australia, during a period associated with the Sydney
2000 Olympic Games. The goal of the Sydney 2000 FDP was to demonstrate
9255_C010.fm Page 243 Tuesday, February 27, 2007 12:46 PM
Applications of Remotely Sensed Data
243
the capability of modern forecast systems and to quantify the associated benefits in the delivery of a real-time nowcast service. The FDP was not just about
providing new and improved systems that could be employed by forecasters;
rather, it demonstrated the benefits to end users by undertaking verification of
nowcasts and impact studies.
10.2.2 Weather Radars in Forecasting
Weather radars make it possible to track even small-scale weather patterns
and individual thunderstorms. All weather radars send out radio waves
from an antenna. Objects in the air, such as raindrops, snow crystals, hailstones, and even insects and dust, scatter or reflect some of the radio waves
back to the antenna. All weather radars, including Doppler radars, electronically convert the reflected radio waves into pictures showing the location
and intensity of precipitation.
Using a radar, one can figure out where it is raining and how heavy the
rain is. However, the range limitations of radar and the curvature of the
Earth mean that a single ground-based weather radar is able to observe a
traveling rainfall system for only a limited period, and then often only part
of that system. A network of weather radars is accordingly used, and a
picture of the rainfall patterns and the derived pictures are often shown on
television forecasts, where the picture is displayed using colors to denote
different intensities that represent different levels of precipitation, including:
•
•
•
•
•
•
•
Downpour (more than 16 mm/hour)
Very heavy (8 to 16 mm/hour)
Heavy (4 to 8 mm/hour)
Moderate II (2 to 4 mm/hour)
Moderate I (1 to 2 mm/hour)
Slight (0.5 to 1 mm/hour)
Very slight (less than 0.5 mm/hour).
In the U.K., a network of 15 weather radars covering the whole of the
British Isles, the Republic of Ireland, and the States of Jersey has been set up
(see Figure 10.1). This network is used to provide up-to-date information on
the distribution of surface rainfall at intervals of 15 minutes. Detailed forecasts for specific locations may be attempted by replaying recent radar image
sequences to reveal the movement of areas of rain, leading to the prediction
of their future movement.
A number of national weather radar networks exist. For example, the U.S.
National Weather Service has installed Doppler radars around the United
States, and the Australian Bureau of Meteorology operates a Weather Watch
Radar network. The Meteorological Service of Canada operates a National
Radar Program website comprising 28 Environment Canada radars and
9255_C010.fm Page 244 Tuesday, February 27, 2007 12:46 PM
244
Introduction to Remote Sensing
FIGURE 10.1 (See color insert)
The radar network operated by the U.K Met Office and the Irish Meteorological Service, Met
Éireann. (U.K. Met Office.)
2 Department of National Defence radars. In addition to national programs,
some television stations and airports have their own Doppler radars. Many
weather services provide real-time national, regional, and local radar
images on the Internet (see Figure 10.2).
Satellite data may be used to extend radar coverage. The U.K. Meteorological
Office’s weather radar network uses 15- to 30-minute Meteosat imagery to
identify rain clouds outside the radar coverage area for inclusion in forecasts.
The radar and satellite pictures are registered to the same map projection
that enables the integration of the two types of remotely sensed weather
data. The relationship between the rain at the surface, as identified by the
radar, and the cloud above, as identified in the Meteosat imagery, may then
be examined by forecasters. Because the correspondence between cloud and
rain is variable, the radar data are used to “calibrate” the satellite images in
terms of rainfall. Because rainfall patterns can be inferred in only a very
broad sense from satellite data, radar data are used instead of satellite data
when both are available.
One of the more widely recognized problems in radar meteorology is that
the variability of the drop size distribution causes the relationship between
returned signal and rainfall intensity to vary. This is because, for the wavelengths commonly used in weather radars (about 5 cm), raindrops behave
as Rayleigh scatterers and their reflectivity depends on the sixth power of
9255_C010.fm Page 245 Tuesday, February 27, 2007 12:46 PM
245
Applications of Remotely Sensed Data
> 300
200
150
100
75
50
30
15
10
7
5
3
2
1
0.50
0.15
--300
200
150
100
75
50
30
15
10
7
5
3
2
1
0.50
Rainfall rate in nm/hr
hk_comp
CAPPI
R_C_030_256Y
10 : 00 : 00
25 Jul 2001
Task: PPIVOL_∗
PRF: 500/375 Hz
Height: 3.0 km
Max range: 256 km
Proj: AED
FIGURE 10.2 (See color insert)
The eye of Typhoon Yutu is clearly revealed by weather radar at about 200 km to the southsouthwest of Hong Kong in the morning on July 25, 2001. (Hong Kong Observatory.)
the drop diameter. However, as far as a forecaster is concerned, the value of
radar lies not so much in its ability or otherwise to measure rainfall accurately at a point as in its ability to define the field of surface rainfall semiquantitatively over extended areas. One important category of error is
caused by the radar beam observing precipitation some distance above the
ground, especially at long ranges. Thus, radar measurements may either
underestimate or overestimate the surface rainfall according to whether the
precipitation accretes or evaporates, or indeed changes from snow to rain,
below the radar beam. Because these errors are caused by variations in the
physical nature of the phenomenon, no purely objective technique can be
used to correct them on all occasions, but they can to some extent be corrected
subjectively in the light of other meteorological information. Observed errors
in the radar-derived rainfall totals depend on the density of the rain gauge
network used in comparison, with widespread and uniform rain leading to
less error than, for example, isolated showers for which there may be poor
agreement because of rain gauge sparcity. Indeed, an isolated downpour
may not even be recorded by conventional methods.
9255_C010.fm Page 246 Tuesday, February 27, 2007 12:46 PM
246
Introduction to Remote Sensing
10.2.3 Determination of Temperature Changes with Height
from Satellites
In pictorial form, weather satellite data are capable of revealing excellent
resolution in the position, extent, and intensity of cloud systems. Pictorial
data, however, have to be interpreted by experienced forecasters and parameterized to enable the mapping of the quasihorizontal fields of pressure,
wind, temperature, and humidity at several discrete levels in the troposphere. Because satellites orbit far above the region of the atmosphere that
contains the weather, techniques for obtaining information about the atmosphere itself are limited by the fact that the observations that are most needed
are not measured directly. As a result, it may not be possible to obtain the
vertical resolution that is desired when measuring temperature profiles or
winds from satellites.
As well as showing the size, location, and shape of areas of cloud, visible
and infrared satellite pictures may, from an examination of the relative
brightness and texture of the images, also provide information on the vertical
structure of clouds. The brightness of a cloud image on a visible picture
depends on the Sun’s illumination, the reflectivity (which is related to the
cloud thickness), and the relative positions of the cloud, Sun, and radiometer.
On an infrared picture, the brightness depends on the temperature of the
emitting surface; in general, the brighter the image, the colder (higher) the
cloud top. Infrared imagery is obtained in regions of the electromagnetic
spectrum where the atmosphere is nearly transparent, so that radiation from
the clouds and surface should reach the satellite relatively unaffected by the
intervening atmosphere. However, the vertical distribution of atmospheric
temperature is inferred by measuring radiation in spectral regions where the
atmosphere is absorbing. If the vertical temperature distribution is known,
the distribution of water vapor may also be inferred. These measurements,
however, are technically difficult to make.
Satellite-derived temperature profiles are currently produced from the data
from Atmospheric InfraRed Sounder (AIRS) system launched on May 4,
2002, aboard the National Aeronautics and Space Administration’s (NASA’s)
AQUA weather and climate research satellite. Whereas the world’s radiosonde network provides some 4,000 soundings per day (see Figure 10.3),
AIRS retrieves 300,000 soundings of Earth’s atmospheric temperature and
water vapor in three dimensions on a global scale every day (see Figure
10.4). The European Centre for Medium-Range Weather Forecasts (ECMWF)
began incorporating data from the AIRS system into its operational forecasts
in October 2003. The ECMWF reported an improvement in forecast accuracy
of 8 hours in southern hemisphere 5-day forecasts, and the National Oceanic
and Atmospheric Administration (NOAA) reported that incorporating AIRS
data into numerical weather prediction models improves the accuracy range
of 6-day northern hemisphere weather forecasts by up to 6 hours.
Together with the Advanced Microwave Sounding Unit (AMSU) and the
Humidity Sounder for Brazil, AIRS measures temperature with an accuracy of
9255_C010.fm Page 247 Tuesday, February 27, 2007 12:46 PM
247
Applications of Remotely Sensed Data
FIGURE 10.3
World radiosonde network providing 4,000 soundings per day. (NASA/JPL/AIRS Science Team,
Chahine, 2005.)
1°C in layers 1 km thick, and humidity with an accuracy of 20% in layers 2
km thick in the troposphere (the lower part of the atmosphere). This accuracy
of temperature and humidity profiles is equivalent to radiosonde accuracy
that is nominally referred to as being 1K/1km (i.e., 1K rms accuracy with
1 km vertical resolution). Each AIRS scan line contains 90 infrared footprints,
Hour
180°E
120°W
60°W
0°
60°E
120°E
180°W
80°N
80°N
40°N
40°N
0°
0°
40°S
40°S
80°S
80°S
180°E
−3
120°W
−2
60°W
0°
60°E
120°E
180°W
−1
0
1
2
3
Hour from January 27, 2003 0000 Z
FIGURE 10.4 (See color insert)
The AIRS system provides 300,000 soundings per day. (NASA/JPL/AIRS Science Team,
Chahine, 2005.)
9255_C010.fm Page 248 Tuesday, February 27, 2007 12:46 PM
248
Introduction to Remote Sensing
with a resolution of 13.5 km at nadir and 41 km × 21.4 km at the scan
extremes. These figures should be compared with those of the earlier Television InfraRed Observation Satellite (TIROS) Operational Vertical Sounder
(TOVS) system, which provides soundings expressed as mean temperatures
through layers of depth 1.5 to 2 km, at 250 km grid spacing, with provision
for 500 km grid spacing for reduced coverage.
The limiting factors of satellite-derived soundings are:
• The raw observations relate to deep layers (defined by the atmospheric
density profile).
• The presence of clouds influences the soundings.
• Global coverage is built up over several hours (i.e., it is not obtained
at the standard times of 0000, 0600, 1200, and 1800 Z).
• Obtaining temperatures and winds in and around frontal zones,
where they are of most importance, is particularly difficult.
• The error characteristics of satellite observations are very different from
those of conventional observations, necessitating different analysis
procedures for the best use to be made of the data.
Even with these limitations, satellites are impacting meteorology — in particular, prediction models — both by providing a considerably increased
total number of observations than could have been obtained by conventional
means and also by providing these observations at increased levels of accuracy and consistency (see Figure 10.5).
10.2.4 Measurements of Wind Speed
10.2.4.1 Tropospheric Estimations from Cloud Motion
Some clouds move with the wind. If these clouds can be seen and their
geographical positions can be determined in two successive satellite pictures,
then the displacement may be used to determine the speed of the wind at
the level of the cloud. This simple principle forms the basis for the derivation
of several thousand wind observations each day from the set of weather
satellites.
Small clouds are most likely to move with the wind, but these are generally too small to be detected by the radiometers on satellites. Moreover,
their life cycle can be shorter than the 15-minute to half-hourly interval
between successive images recorded by a geostationary weather satellite.
Consequently, larger clouds, and more commonly, patterns of cloud distribution 10 to 100 km in size are used. The longer the time interval between
the pair of images, the greater the displacement and, up to a point, the
more accurate the technique. Gauged against radiosonde wind measurements, satellite winds derived by present techniques have an accuracy of
3 ms–1 at low levels. However, at upper levels, the scatter in differences of
wind velocities as measured by satellite and sonde is substantially larger.
9255_C010.fm Page 249 Tuesday, February 27, 2007 12:46 PM
249
Applications of Remotely Sensed Data
10
Pressure (mb)
20
15
100
10
5
1000
180
200
220
240
260
280
Temperature (K)
300
Altitude = H∗LN (P/1000), (H = 6 km)
25
0
320
FIGURE 10.5
Comparison of AIRS retrieval (smooth line) with that of a dedicated radiosonde (detailed line)
obtained for the Chesapeake Platform on Sept 13, 2002. (NASA/JPL/AIRS Science Team.)
It is obviously necessary to determine the height to which the satellitederived wind should be assigned. The only information available from the
satellite for this purpose is measurements of radiation relating to the cloud
top. From these measurements, the temperature of the cloud top may be
deduced and, given a knowledge of the temperature variation with height,
the cloud-top height may be derived. This process is subject to error,
especially if the cloud is not entirely opaque which, unfortunately, is often
the case for cirrus ice-clouds in the upper troposphere. Accordingly, one
must estimate the transmissivity of cirrus clouds to obtain the proper cloud
top heights.
Initially, the cloud motion data used to derive wind measurements were
obtained from geostationary satellites. These instruments obtain images of
the Earth’s surface to roughly 55° north and south of the equator. Any farther
north or south and the satellite image becomes distorted due to the curvature
of the Earth. Consequently, wind data predictions based on cloud motion
have been most accurate at latitudes lower than 55°. Although the predictions
obtained have proven useful in predicting the path and severity of developing
storms, they have been limited in their coverage of vast ocean expanses and
higher latitude regions which are the common birthplaces of many storms.
Large ocean expanses present difficulty because they have no landmarks to
show the exact location of the clouds.
9255_C010.fm Page 250 Tuesday, February 27, 2007 12:46 PM
250
Introduction to Remote Sensing
NASA’s Multi-angle Imaging SpectroRadiometer (MISR) instrument,
launched in 1999 on the Terra satellite, is the first satellite instrument to
simultaneously provide directly measured cloud height and motion data
from pole to pole. The MISR instrument uses cameras fixed at nine different
angles to view reflected light and collect global pole-to-pole images. The
cameras capture images of clouds, airborne particles, and the Earth’s surface,
collecting information about each point in the atmosphere or on the surface
from the nine different angles, providing wind values accurate to within
3 ms−1 at heights accurate to within 400 m.
In addition to being used as input data for short-term weather forecasting,
cloud height and motion data are used for long-term climate studies.
10.2.4.2 Microwave Estimations of Surface Wind Shear
In the past, weather data could be acquired over land, but knowledge of
surface winds over oceans came from infrequent, and sometimes inaccurate,
reports from ships and buoys. The best measurements of surface wind velocity
from satellites are made by radars that observe the scatter of centimeter-wavelength radio waves from small capillary waves on the sea surface. Wind
speed is closely related to the flux of momentum to the sea. Accordingly, the
amount of scatter per unit area of surface, the scattering cross section, is highly
correlated with wind speed and direction at the surface (see Sections 7.2 and 7.3).
Scatterometry has its origins in the early radars used in World War II. Early radar measurements over oceans were corrupted by sea clutter (noise), and it was not known at that time that this clutter was the radar response to the winds over the oceans. The radar response was first related to wind in the late 1960s. A scatterometer is an active satellite sensor that measures the loss of intensity between the transmitted signal and that returned by the ocean surface.
The radar backscatter measurements depend on the ocean surface roughness
and can be related to the ocean surface wind (or surface wind stress). Backscatter measurements are converted to wind vectors through the use of a
transfer function or an empirical algorithm (see Equations 7.8 and 7.9).
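The operational model functions are empirical and depend on incidence angle, polarization, and the azimuth between the radar look and the wind; the following sketch uses a purely hypothetical one-parameter power law simply to show the forward model and its inversion for wind speed:

import math

# Illustrative (not operational) power-law model function relating the
# normalized radar cross-section sigma0 to neutral wind speed U (m/s):
#     sigma0 = A * U ** GAMMA
# A and GAMMA are hypothetical constants standing in for the empirical
# coefficients, which in practice vary with incidence angle, polarization,
# and the azimuth angle between radar look and wind direction.
A = 0.002
GAMMA = 1.6

def sigma0_from_wind(u):
    return A * u ** GAMMA

def wind_from_sigma0(sigma0):
    # Direct inversion of the power law for wind speed
    return (sigma0 / A) ** (1.0 / GAMMA)

measured = sigma0_from_wind(12.0)  # simulate a 12 m/s observation
print(wind_from_sigma0(measured))  # recovers 12.0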
Scatterometers were operated for a few hours as part of the Skylab missions
in 1973 and 1974, demonstrating that spaceborne scatterometers were indeed
feasible. The Seasat-A Satellite Scatterometer (SASS) operated from June to
October 1978 and produced excellent maps of surface wind (see also Section
7.3). Since 1992, wind vector data have been available as a fast-delivery product from the European Space Agency’s (ESA’s) European Remote-Sensing
Satellite (ERS) polar-orbiting satellites. The ERS satellites carry three antennae but have a single-sided (single-swath) look at the surface of the seas.
The data provide wind vectors (speed and direction) over a relatively narrow
swath coverage of 500 km, with a footprint resolution of 50 km at a spacing
of 25 km. Unfortunately, the inversion process that converts satellite backscatter measurements into a wind vector cannot provide a single, unique
solution for the wind vector but provides multiple vector solutions (up to
four). Once ESA fast-delivery wind vector data became available, it became
obvious that there were serious problems with the vectors. The standard
accuracy specified for surface wind speed data is ±2 ms–1 for wind speeds up
to 20 ms–1, and 10% for wind speeds above that, and the accuracy for wind
direction is ±20°. Gemmill et al. (1994) found that, although the wind speed
retrievals met their specifications, the wind direction selections did not. To
improve accuracy, the vector solutions are ranked according to a probability
fit. A background wind field from a numerical model is used to influence the
initial directional probabilities. The vector solutions are then reranked according to probability determined by including the influence of the background
field (Stoffelen and Anderson, 1997). A final procedure (not used by ESA)
may then be carried out on the scatterometer wind swath to ensure that all
the winds present a reasonable and consistent meteorological pattern. This
procedure, the sequential local iterative consistency estimator (SLICE), works
by changing the directional probabilities (and rank) of each valid solution
using wind directions from surrounding cells. The SLICE algorithm was
developed by and is being used by the U.K. Meteorological Office.
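The following toy sketch, which is emphatically not the ESA procedure or the SLICE code, illustrates the general idea of reranking candidate vectors by combining their retrieval probabilities with their agreement with a background model direction:

def select_ambiguity(solutions, background_dir_deg):
    """Pick one of the (up to four) scatterometer wind-vector solutions.
    Each solution is (speed_mps, direction_deg, probability). The ranking
    is nudged toward the direction of a background model wind, loosely
    following the reranking idea described above (this is not the actual
    SLICE or ESA code, just an illustration)."""
    def score(sol):
        speed, direction, prob = sol
        # Angular closeness to the background direction, scaled to [0, 1]
        diff = abs((direction - background_dir_deg + 180.0) % 360.0 - 180.0)
        closeness = 1.0 - diff / 180.0
        return prob * closeness  # combine retrieval probability and background fit
    return max(solutions, key=score)

ambiguities = [(11.8, 40.0, 0.35), (11.5, 220.0, 0.33),
               (12.1, 130.0, 0.17), (11.9, 310.0, 0.15)]
print(select_ambiguity(ambiguities, background_dir_deg=230.0))  # picks the 220-degree solution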
The NASA Scatterometer (NSCAT), which was launched aboard Japan’s
Advanced Earth Observing Satellite (ADEOS; MIDORI in Japan) in August
1996, was the first dual-swath, Ku band scatterometer to fly since Seasat. From
September 1996, when the instrument was first turned on, until premature
termination of the mission due to satellite power loss in June 1997, NSCAT
returned a continuous stream of global sea-surface wind vector measurements.
The NSCAT mission proved so successful that plans for a follow-up mission
were accelerated to minimize the gap in the scatterometer wind database. The
QuikSCAT mission launched the SeaWinds instrument in June 1999, with an
1800-km swath during each orbit, providing approximately 90% coverage of
the Earth’s oceans every day and wind-speed measurements of 3 to 20 ms−1,
with an accuracy of 2 ms−1; a directional accuracy of 20°; and a wind vector
resolution of 25 km (see Figure 10.6). A follow-up SeaWinds instrument is
scheduled for launch on the ADEOS-II platform in August 2006.
Although the altimeters flown on Skylab, Geodynamics Experimental
Ocean Satellite–3, and Seasat were primarily designed to measure the height
of the spacecraft above the Earth’s surface, the surface wind speed, although
not direction, can be inferred from the shape of the returned altimeter pulse.
10.2.4.3 Sky Wave Radar
An attempt to evaluate the use of sky-wave radar techniques for the determination of wind and sea-state parameters (see Chapter 3) was provided by the
Joint Air-Sea Interaction project, which was carried out in the summer of 1978
(see Shearman [1981]). During the 1990s, NOAA worked with the U.S. Air
Force and the U.S. Navy to exploit the unused ocean-monitoring capabilities of
their Cold War early-warning radar systems. This work demonstrated that
ground-based sky-wave, or over-the-horizon (OTH), radars are able to remotely
monitor ocean-surface winds and currents over data-sparse ocean areas that
would otherwise require thousands of widely dispersed in-situ instruments.
FIGURE 10.6 (See color insert)
Tropical Storm Katrina is shown here as observed by NASA’s QuikSCAT satellite on August
25, 2005, at 08:37 UTC (4:37 a.m. in Florida). At this time, the storm had sustained winds of 80 km/hour (43 knots) and did not yet appear to have reached hurricane strength. (NASA/JPL/
QuikSCAT Science Team.)
OTH radar observations of surface wind direction offer a high-resolution
(15-km) resource for synoptic and mesoscale wind field analysis. With horizontal resolution at the lower end of the mesoscale and areal coverage on
the synoptic scale, OTH radar has the potential to contribute significantly to
the amelioration of the data sparseness that has long plagued over-ocean
surface analysis. Because of a twofold ambiguity in the OTH radar algorithm
for determining surface wind direction, however, mapping surface wind
directions unambiguously would normally require two radars with overlapping coverage. Alternatively, the single-radar ambiguity can be resolved by
combining numerical weather prediction model analyses and surface wind
observations with meteorological insight.
There are a number of practical considerations to be taken into account
for sky wave radars (see also Section 6.4). Spatial resolution is coarse; there may be serious interference from other sources of radio waves at the frequencies used; and, above all, the antennae need to be very large, making the whole system big and expensive. The two best-documented systems
are the U.S. Air Force’s over-the-horizon-backscatter (OTH-B) air defense
radar system and the Australian Jindalee system. Other countries, in particular
Russia and China (see http://www.globalsecurity.org/wmd/world/china/oth-b.htm),
are also understood to have developed sky wave systems. These are all military
systems and the feasibility of developing a lower cost system for civil oceanographic work has been examined (Georges and Harlan 1999).
The Australian Jindalee Operational Radar Network, JORN, has evolved
over a period of 30 years at a cost of over $A1.8 billion. It is primarily an
imaging system which enables Australian military commanders to observe
all air and sea activity, including detecting stealth aircraft, north of Australia
to distances of at least 3000 km (see, for example, http://www.defencetalk.com/
forums/archive/index.php/t-1832.html).
The U.S. Air Force’s over-the-horizon-backscatter (OTH-B) air defense
radar system is probably by far the largest radar system in the world. It was
developed to warn against Soviet bomber attacks when the planes were still
thousands of miles from U.S. air space. Six 1 MW OTH radars see far beyond
the range of conventional microwave radars using 5-28 MHz waves reflected
by the ionosphere. With the end of the Cold War (just months after their
deployment), the three OTH radars on the U.S. West Coast were mothballed, but the three radars on the U.S. East Coast were redirected to
counter-narcotics surveillance. In 1991, NOAA recognized their potential
for environmental monitoring and asked the Air Force’s permission to look
at the part of the radar echo that the Air Force throws away—the ocean
clutter. Tropical storms and hurricanes were tracked, and a system was
developed for delivering radar-derived winds to the National Hurricane
Center. The combined coverage of the six OTH-B radars is about 90 million
square kilometres of open ocean where few weather instruments exist. Tests
have also demonstrated the ability of OTH radars to map ocean currents
(Georges and Harlan, 1994a, 1994b, Georges 1995).
Whereas OTH radars map surface wind directions on demand over large,
fixed ocean areas, active and passive microwave instruments on board several polar-orbiting satellites measure wind speeds along narrow swaths
determined by their orbital geometries. Satellites, however, do not measure
wind directions very well. Thus, the capabilities of ground-based and satellite-based ocean-wind sensors complement each other. Figure 10.7 shows 24 hours
of ERS-1 scatterometer coverage over the North Atlantic (color strips). The
wind speed is color coded, with orange indicating highest speeds. The OTH-B
wind directions for the same day are superimposed, filling in the gaps in
the satellite data.
10.2.5 Hurricane Prediction and Tracking
Satellite observations, together with land-based radar, are used extensively to forecast severe weather. Major thunderstorms, which may give
rise to tornadoes and flash-floods, are often identified at a stage where
warnings can be issued early enough to reduce loss of life and damage to
property. In more remote ocean areas, satellite observations may provide
early location of hurricanes (tropical storms) and enable their development and movement to be monitored. In the last few decades, virtually
no hurricane or tropical storm anywhere in the world has gone unnoticed
[Figure 10.7 plot: OTH wind direction and ERS-1 wind speeds, 09 11 1991; wind-speed color scale 0–25 m/s.]
FIGURE 10.7 (See color insert)
North Atlantic wind speed derived from ERS-1 (colored stripes) and OTH data. (Georges et al., 1998.)
by weather satellites, and much has been learned about the structure and
movements of these small but powerful vortices from the satellite evidence. Satellite-viewed hurricane cloud patterns enable the compilation
of very detailed classifications and the determination of maximum wind
speeds. The use of enhanced infrared imagery in tropical cyclone analysis
adds objectivity and simplicity to the task of determining tropical storm
intensity. Satellite observations of tropical cyclones are used to estimate
their potential for spawning hurricanes. The infrared data not only afford
continuous day and night storm surveillance but also provide quantitative
information about cloud features that relate to storm intensity; thus, cloud-top temperature measurements and temperature gradients can be used in
place of qualitative classification techniques employing visible wavebands. The history of cloud pattern evolution and knowledge of the current intensity of tropical storms are very useful for predicting their
developmental trend over a 24-hour period and allow an early warning
capability to be provided for shipping and for areas over land in the paths
of potential hurricanes.
Hurricanes and typhoons exhibit a great variety of cloud patterns, but
most can be described as having a comma configuration. The comma tail is
composed of convective clouds that appear to curve cyclonically into a
center. As the storm develops, the clouds form bands that wrap around the
storm center producing a circular cloud system that usually has a cloud-free,
FIGURE 10.8 (See color insert)
GOES-12 1-km visible image of Hurricane Katrina over New Orleans at 1300 on August 29,
2005. (NOAA.)
dark eye in its mature stage. The intensity of hurricanes is quantifiable either
by measuring the amount by which cold clouds circle the center or by using
surrounding temperature and eye criteria. Large changes in cloud features
are related to the intensity, whereas increased encirclement of the cloud
system center by cold clouds is associated with a decrease in pressure and
increase in wind speed.
Weather satellites have almost certainly justified their expense through the
assistance they have given in hurricane forecasting alone. The damage
caused by a single hurricane in the United States is often of the order of
billions of dollars, the most notable recent storm having been Hurricane
Katrina that devastated a substantial part of New Orleans in August 2005
(see Figure 10.8). As Hurricane Katrina gained strength in the Gulf of Mexico
on Sunday August 28, 2005, the population of New Orleans was ordered to
evacuate. Up to 80% of New Orleans was flooded after defensive barriers
against the sea were overwhelmed.
The tracking of Hurricane Katrina by satellite allowed the advance
evacuation of New Orleans and undoubtedly saved many lives. In 1983,
Hurricane Alicia caused an estimated $2.5 billion of damage and was
responsible for 1,804 reported deaths and injuries. In November 1970, a
tropical cyclone struck the head of the Bay of Bengal and the loss of life
caused by the associated wind, rain, and tidal flooding exceeded 300,000
people. Indirectly, this disaster was the trigger that led to the establishment
of an independent state of Bangladesh. Clearly, timely information about the
behavior of such significant storms may be almost priceless.
10.2.6 Satellite Climatology
The development of climatology as a field of study has been hampered by
the inadequacy of available data. Satellites are helping enormously to correct
this deficiency as they afford more comprehensive and more dynamic views
of global climatology than were previously possible (Kondratyev and
Cracknell, 1998; Cracknell, 2001). In presatellite days, certain components of
the radiation balance, such as short wave (reflected) and long wave
(absorbed and reradiated) energy losses to space, were established by estimation, not measurement. The only comprehensive maps of global cloudiness compiled in presatellite days depended heavily on indirect evidence
and could not be time-specific. Although satellite-derived climatological
products have only been available for a few decades and are accordingly
limited in their use for longer term trend analysis, these products are becoming increasingly interesting and valuable as the databases are built up. These
databases include inventories of parameters used in the determination of:
• Earth/atmosphere energy and radiation budgets, particularly the
net radiation balance at the top of the atmosphere, which is the
primary driving force of the Earth’s atmospheric circulation
• Global moisture distributions in the atmosphere, which relate to the
distribution of energy in the atmosphere
• Global temperature distributions over land and sea, and accordingly
the absorption and radiation of heat
• Distribution of cloud cover, which is a major influence on the albedo
of the earth/atmosphere system and its component parts, and is also
an indicator of horizontal transport patterns of latent heat
• Global ozone distribution, particularly the levels of ozone at high
latitudes
• Sea-surface temperatures, which relate directly to the release of
latent heat through evaporation
• Wind flow and air circulation, which relate to energy transfer within
the earth/atmosphere system
• Climatology of synoptic weather systems, including their frequencies and spatial distribution over extended periods.
The World Climate Research Programme aims to discover how far it is possible
to predict natural climate variation and man’s influence on the climate. Satellites contribute by providing observations of the atmosphere, the land surface,
the cryosphere and the oceans with the advantages of global coverage, accuracy,
and consistency. Quantitative climate models enable the prediction and detection of climate change in response to pollution and the “greenhouse” effect. In
addition to the familiar meteorological satellites, novel meteorological missions
have been established to support the Earth Radiation Budget Experiment
(ERBE) and the International Satellite Cloud Climatology Project (ISCCP).
10.2.6.1 Cloud Climatology
Figure 10.9 shows the global monthly mean cloud amount expressed as
deviations of monthly averages from the average over the short ISCCP time
[Figure 10.9 plot: cloud amount (%), deviations and anomalies of region monthly mean from total period mean, 1983–2005. ISCCP D2 global monthly mean = 66.52 ± 1.52%; global deviation mean = −0.00 ± 1.52%; global anomaly mean = 0.00 ± 1.37%. Vertical scale −6 to +6%.]
FIGURE 10.9
Deviations of global monthly mean cloud amount from long-term total period mean
(1983–2005). (ISCCP.) (http://isccp.giss.nasa.gov/climanal1.html)
record, covering only about 20 years. The month-to-month variations in
globally averaged cloud are very small: cloud amounts vary by about 1% to 3%
compared with a mean value of 66.7%. Cloud amount increased by about
2% during the first 3 years of ISCCP and then decreased by about 4% over
the next decade. ISCCP began right after one of the largest recorded El Niños
of the past century (in 1982–1983) and after the eruption of the El Chichón
volcano, both of which caused some cloud changes. Other, weaker El Niños
(in 1986–1987, 1991–1992, 1994–1995, and 1997–1998) and another significant
volcanic eruption (Mount Pinatubo in 1991) have also occurred. Such variations are referred to as “natural” variability — that is, the climate varies
naturally for reasons that are not fully understood. The problem for understanding climate changes that might be produced by human activities is that
the predicted changes attributable to human activity are similar in magnitude to the natural variability. The difference between natural and human-induced climate change will only appear clearly in much longer (more than
50 years) data records.
Figure 10.10 shows monthly mean cloud amount at local midnight derived
from the 11.5 µm channel of the Temperature-Humidity Infrared Radiometer
on Nimbus-7. Figure 10.10 (a), for January 1980, represents normal conditions,
and Figure 10.10 (b), for January 1983, represents conditions in an El Niño
year. Analysis showed that, during the period from December 1982 to January 1983, the equatorial zonal mean cloud amount was 10% higher than in a non-El Niño year.
[Figure 10.10 panels (a) and (b): global maps, 60°N–60°S, 120°W–120°E; cloud amount scale 0–100%. (Hwang/GSFC.)]
FIGURE 10.10 (See color insert)
(a) Monthly mean cloud amount at midnight in January 1980, a normal year, derived from the
Nimbus-7 Temperature Humidity Infrared Radiometer’s 11.5-µm channel data and (b) in an El
Niño year, 1983. (NASA Goddard Space Flight Center.)
The most significant increases occurred in the eastern
Pacific Ocean.
During the 1982–1983 El-Niño event, significant perturbations in a diverse
set of geophysical fields occurred. Of special interest are the planetary-scale
fields that act to modify the outgoing longwave radiation (OLR) field at the
top of the atmosphere. The most important is the effective “cloudiness”;
specifically, perturbations from the climatological means of cloud cover,
height, thickness, water content, drop/crystal size distribution, and emissivity.
Also important are changes in surface temperature and atmospheric water
vapor content and, to a lesser extent, atmospheric temperature. Changes in
one or more of these parameters, regionally or globally, cause corresponding
anomalies in the broadband OLR at the top of the atmosphere.
To facilitate the examination of the time evolution of the El Niño event
from the perspective of the set of top-of-the-atmosphere OLR fluxes, monthly-averaged time-anomaly fields have been generated from observations
derived from the Nimbus-7 ERBE data. These are defined in terms of departures (Wm–2) from the climatology for that month. The term “climatology”
is used somewhat loosely here to indicate the 2 years between June 1980 and
May 1982. Thus, a 2-year mean pre-El Niño seasonal cycle is removed in the
creation of the anomaly maps. The peak amplitudes of the OLR anomalies
are generally reached in January.
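The anomaly construction itself is straightforward; a minimal sketch (the array shapes and the assumption that the series starts in January are ours) is:

import numpy as np

def monthly_anomalies(monthly_means, climatology_by_month):
    """Subtract the climatological mean for the matching calendar month
    from each monthly-mean field, giving time-anomaly fields in the same
    units (W m-2 for OLR). monthly_means: shape (n_months, ny, nx) for
    consecutive months assumed to start in January; climatology_by_month:
    shape (12, ny, nx), e.g. the June 1980 - May 1982 mean seasonal cycle."""
    n = monthly_means.shape[0]
    month_index = np.arange(n) % 12  # 0 = January, 11 = December
    return monthly_means - climatology_by_month[month_index]

# Tiny synthetic check on a 1x1 "grid": 24 months of data, flat climatology
olr = np.arange(24, dtype=float).reshape(24, 1, 1)
clim = np.zeros((12, 1, 1))
print(monthly_anomalies(olr, clim).ravel()[:3])  # [0. 1. 2.]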
The true global nature of the El Niño event is evident in Figure 10.10. The
negative radiation center in the equatorial Pacific has reached −88 Wm–2. To
its north and south, the accompanying positive anomalies now average half
its magnitude. An interesting large-amplitude pattern exists along the equator. The three areas that are normally quite active, convectively, at this time
of the year are Indonesia, the Amazon river basin, and the Congo river basin.
They now show positive OLR anomalies indicative of reduced convection.
These are replaced, instead, with negative anomalies over the Arabian Sea,
the Indian Ocean, and the central equatorial Pacific Ocean. The center over
Europe has intensified, whereas the center over the United States has moved
into the Gulf of Mexico.
10.2.6.2 Global Temperature
Figure 10.11(a) and (b) are the first global maps ever made of the Earth’s
mean skin temperature for day and night. The images were obtained by a
team of NASA scientists from the Jet Propulsion Laboratory in Pasadena,
CA, and the Goddard Space Flight Center in Greenbelt, MD. The satellite
data were acquired by the High-Resolution Infrared Sounder (HIRS) and
the Microwave Sounding Unit (MSU), both instruments flying on board the
NOAA weather satellites. The surface temperature was derived from the
3.7 µm window channels in combination with additional microwave and
infrared data from the two sounders. The combined data sets were computer
processed using a data analysis method that removed the effects of clouds,
the atmosphere, and the reflection of solar radiation.
The ocean and land temperature values have been averaged spatially over
a grid of 2°30’ latitude by 3° longitude and correspond to the month of
January 1979. The mean temperature values for this month clearly show
several cold regions, such as Siberia and northern Canada, during the northern
hemisphere’s winter, and a hot Australian continent during the southern
hemisphere’s summer. Mountainous areas are visible in Asia, Africa, and
South America. The horizontal gradients of surface temperature displayed
on the map in color contour intervals of 2°C show some of the major features
[Figure 10.11 panels: (a) mean daytime surface temperature for January 1979 from HIRS 2 and MSU data; (b) mean nighttime surface temperature for January 1979 from HIRS 2 and MSU data; (c) day-night mean surface temperature difference for January 1979 from HIRS 2 and MSU data. Color scales in Kelvin: 243–313 for (a) and (b); −9 to 29 for (c). (Chahine and Susskind, JPL/GSFC, 1982.)]
FIGURE 10.11 (See color insert)
Mean day and night surface temperatures derived from satellite sounder data: (a, top) daytime
temperature; (b, center) nighttime temperature; and (c, bottom) mean temperature difference.
(Image provided by Jet Propulsion Laboratory.)
TABLE 10.1
Mean Skin Surface Temperature during January 1979

Area             Temperature (°C)
Global           14.14
N. hemisphere    11.94
S. hemisphere    16.35
of ocean-surface temperature, such as the Gulf Stream, the Kuroshio Current,
and the local temperature minimum in the eastern tropical Pacific Ocean.
The satellite-derived sea-surface temperatures are in very good agreement
with ship and buoy measurements.
Surface temperature data are important in weather prediction and climate
studies. Because the cold polar regions cover a small area of the globe relative
to the warm equatorial regions, the mean surface temperature is dominated
by its value in the tropics. The mean calculated skin-surface temperature
during January 1979 is given in Table 10.1.
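The dominance of the tropics is a consequence of area weighting: in a zonal average, each latitude band contributes in proportion to the cosine of its latitude. A small illustration with a toy pole-to-equator temperature profile:

import numpy as np

def global_mean_temperature(temps_k, lats_deg):
    """Area-weighted global mean of zonally averaged temperatures.
    The weight of each latitude band is proportional to cos(latitude),
    which is why the warm tropics dominate the global figure."""
    weights = np.cos(np.radians(lats_deg))
    return np.average(temps_k, weights=weights)

lats = np.arange(-88.75, 90.0, 2.5)                   # band centers, 2.5-degree spacing
temps = 300.0 - 45.0 * np.sin(np.radians(lats)) ** 2  # toy pole-to-equator profile
print(global_mean_temperature(temps, lats))  # 285 K, closer to the 300 K equator than to the 255 K poles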
Figure 10.11(c) shows the monthly average of the differences between day
and night temperature. This difference map provides striking contrast
between oceans and continents. The white area indicates day-night temperature differences in the range ±1 K; such small differences correspond to ocean areas, which have high heat capacity and a large degree of homogeneity, whereas areas with larger day-night temperature differences are continental landmasses. The outlines of all continents can accordingly be plotted on the basis
of the magnitude of the difference between day and night temperatures. The
day-night temperature differences over land clearly distinguish between arid
and vegetated areas and may indicate soil moisture anomalies.
10.2.6.3 Global Moisture
Global moisture distributions in the atmosphere are investigated as part of
the NASA Water Vapor Project (NVAP), which produced global total-column and layered water vapor data over a 14-year span (1988 to 2001). The total
column (integrated) water vapor data sets comprise a combination of radiosonde observations, TOVS soundings, and data from the Special Sensor
Microwave/Imager (SSM/I) aboard the F8, F10, F11, F13, and F14 Defense
Meteorological Satellite Project (DMSP) satellites. Figure 10.12 shows an
example (for December 2001) of the blended global water vapor and cloud
liquid water data sets with 1 degree × 1 degree resolution produced as daily,
five-daily, and monthly averages for the period 1988 to 2001.
Two problems inherent in all infrared moisture retrievals tend to limit the
dynamic range of the TOVS data. First, the inability to perform retrievals in
areas of thick clouds may give rise to a “dry bias” (Wu et al., 1993). Second,
limitations in infrared radiative transfer theory may lead to significant overestimation of water vapor in regions of large-scale subsidence (Stephens et al., 1994).
[Figure 10.12 plot: NVAP-NG water vapor, December 2001; scale 0–70 mm.]
FIGURE 10.12 (See color insert)
Global total column precipitable water for December 2001 obtained from a combination of radiosonde observations, TOVS, and SSM/I data sets. (NASA Langley Atmospheric Science Data Center.)
For these reasons, SSM/I data are given a higher total column water vapor confidence level than TOVS data.
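A hedged sketch of how such a confidence weighting might be applied when blending the two data sources follows; the 1:3 weights and the convention of falling back to TOVS where SSM/I is missing are illustrative choices, not the actual NVAP processing:

import numpy as np

def blend_water_vapor(tovs_tcwv, ssmi_tcwv, w_tovs=1.0, w_ssmi=3.0):
    """Confidence-weighted blend of total-column water vapor (mm) from
    two sources, giving SSM/I more weight than TOVS as discussed above.
    The 1:3 weighting is purely illustrative, not the NVAP value.
    Where SSM/I is missing (NaN, e.g. over land), fall back to TOVS."""
    blended = (w_tovs * tovs_tcwv + w_ssmi * ssmi_tcwv) / (w_tovs + w_ssmi)
    return np.where(np.isnan(ssmi_tcwv), tovs_tcwv, blended)

tovs = np.array([30.0, 28.0, 55.0])
ssmi = np.array([34.0, np.nan, 60.0])
print(blend_water_vapor(tovs, ssmi))  # [33.  28.  58.75]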
On May 1, 1998, operational stratospheric analyses began using Revised
TOVS (RTOVS) data from the NOAA-14 satellite (TOVS data were produced
by the U.S. National Environmental Satellite Data and Information Service
from the operational NOAA series of satellites for the two decades before
this date). RTOVS was introduced as a transition to the Advanced TOVS
(ATOVS) that became available on subsequent NOAA-series satellites. TOVS
and RTOVS soundings used data from the Stratospheric Sounding Unit, MSU,
and HIRS, whereas ATOVS derives soundings from the HIRS and AMSU.
New knowledge about the SSM/I instrument calibration became available
during the production of the new NVAP-Next Generation products (NVAP
after 1999). The findings of Colton and Poe (1999) have been applied to SSM/I
data to produce a Total Column Water Vapor (TCWV) product that has
reduced the effects of satellite changes. In essence, by working back in time,
the SSM/I retrievals have been normalized to each other so they could be
used over time in a seamless manner.
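One simple way to realize such a normalization, sketched here with synthetic values rather than real SSM/I retrievals, is to regress matched observations from the overlap period of two instruments and apply the resulting linear adjustment:

import numpy as np

def overlap_adjustment(reference, candidate):
    """Derive a linear adjustment mapping one instrument's retrievals onto
    a reference instrument, using matched observations from their overlap
    period (ordinary least squares). Applying such adjustments pairwise,
    working backward in time, chains the record together seamlessly."""
    slope, intercept = np.polyfit(candidate, reference, deg=1)
    return slope, intercept

# Matched overlap samples: the candidate reads about 5% high plus an offset
rng = np.random.default_rng(0)
truth = rng.uniform(5.0, 60.0, 200)  # "reference" TCWV values (mm)
cand = 1.05 * truth + 0.8 + rng.normal(0, 0.3, 200)
slope, intercept = overlap_adjustment(truth, cand)
adjusted = slope * cand + intercept  # candidate put on the reference scale
print(round(slope, 3), round(intercept, 3))  # near 0.952 and -0.76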
10.2.6.4 Global Ozone
The distribution of ozone is a key indicator of atmospheric processes and is
also of vital significance in predicting the amount of damaging solar ultraviolet radiation (UV) reaching the Earth. In order to understand the processes
which determine the physical and the photochemical behaviour of the atmosphere, detailed global measurements of the amount, and of the horizontal
and vertical distribution, of ozone and of the other gases are necessary. There
is a long-established data set of ground-level measurements of the total
amount of ozone in the atmospheric column and there are also some measurements of ozone concentration profiles obtained using ozonesondes. This
dataset has now been augmented by data from several satellite systems.
These are principally the TOVS on the NOAA polar-orbiting satellites, various versions of the TOMS (Total Ozone Mapping Spectrometer) and a number of SBUV (Solar Backscattered UV) instruments.
Channel 9 of the HIRS, one of the TOVS instruments, which is at a wavelength of 9.7 µm, is particularly well suited for monitoring the atmospheric
ozone concentration; this is a (general) “window” (i.e. transparent) channel,
except for absorption by ozone. The radiation emitted from the Earth’s surface
and received by the HIRS instruments in this channel is attenuated by the
ozone in the atmosphere. The less ozone, the greater the amount of radiation
reaching the satellite. TOVS data have been used to determine atmospheric
ozone concentration from 1978 to the present time and images are now regularly produced from TOVS data giving hemispherical daily values of total
ozone. An advantage of TOVS over the other systems that use solar UV
radiation is that TOVS data are available at night time and in the polar regions
in winter. The drawbacks are that when the Earth’s surface is too cold (e.g.
in the high Antarctic Plateau), too hot (e.g. the Sahara desert), or too obscured
(e.g. by heavy tropical cirrus clouds) the accuracy of this method declines.
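In the crudest terms this is a Beer-Lambert attenuation problem. The sketch below inverts such a model using a purely hypothetical effective absorption coefficient, and it ignores the atmospheric emission that a real retrieval must account for:

import math

# Hypothetical effective absorption coefficient for the 9.7 micron ozone
# band, in units of 1/DU; real retrievals use full radiative-transfer
# calculations that include atmospheric emission, not this simple form.
K_OZONE = 0.0012  # per Dobson unit (illustrative value only)

def column_ozone_du(radiance_at_satellite, surface_radiance):
    """Invert a Beer-Lambert attenuation model for total column ozone:
        L_sat = L_surf * exp(-K_OZONE * ozone_column)
    so less ozone means more surface radiation reaches the satellite,
    as described in the text."""
    transmittance = radiance_at_satellite / surface_radiance
    return -math.log(transmittance) / K_OZONE

# If 70% of the surface-emitted radiance survives the ozone layer:
print(round(column_ozone_du(0.70, 1.00)))  # about 297 DU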
The other two groups of instruments, the TOMS and SBUV types, differ
principally in two ways. First, the TOMS instruments are scanning instruments whereas the SBUV instruments are nadir-looking only; second, the
TOMS instruments measure only the total ozone content of the atmospheric
column, while the SBUV instruments measure both the vertical profile and
the total ozone content. At about the same time that the first TOVS was
flown, the Nimbus-7 satellite was launched and this carried, among other
instruments, the first Total Ozone Mapping Spectrometer (TOMS).
The work using instruments which are able to measure the ozone concentration using solar UV radiation began with the Backscatter Ultraviolet
(BUV) instrument flown on Nimbus-4 which was launched in 1970, followed
in 1978 by the Nimbus-7 Solar Backscatter Ultraviolet (SBUV) instrument.
These measurements have been continued from 1994 with SBUV/2 instruments on board the NOAA-9, -11, -14, -16, and -17 satellites, and TOMS
instruments on the Russian Meteor-3, Earth Probe, and Japanese ADEOS
satellites. The European Space Agency’s Global Ozone Monitoring Experiment (GOME) on the ERS-2 satellite, also performing backscattered ultraviolet measurements, complements the US efforts. The primary objective of GOME is the measurement of total column amounts and
profiles of ozone and of other gases involved in ozone photochemistry. Due to
a failure of the ERS-2 tape recorder only GOME observations made while in
direct contact with ground stations have been available since June 22, 2003. A
GOME instrument will also be flown on the ESA Metop-1 mission scheduled
(at the time of writing) to be launched in October 2006, and the Metop-2
mission to follow in 2010. In addition, the Shuttle SBUV (SSBUV) experiment
(conducting eight missions between October 1989 and January 1996) provided regular checks on the individual satellite instruments’ calibrations.
Multiple inter-comparisons with ground-based instruments have improved
data retrieval algorithms and, therefore, satellite ozone measurements have
become compatible with those of the network of ground-based measurements.
Various further instruments are planned.
There is widespread scientific, public and political interest and concern
about losses of ozone in the stratosphere. Ground-based and satellite instruments have measured decreases in the amount of stratospheric ozone in our
atmosphere. Over some parts of Antarctica, up to 60% of the total overhead
amount of ozone (known as the column ozone) is depleted during the Antarctic
spring (September-November). This phenomenon is known as the Antarctic
“ozone hole”. In the Arctic polar regions, similar processes occur that have
also begun to lead to significant depletion of the column ozone in the Arctic
during late winter and early spring in several recent years. Much smaller,
but still significant, stratospheric decreases have been seen at other, more-populated mid-latitude regions of the Earth. Increases in surface UV-B radiation have been observed in association with decreases in stratospheric ozone,
from both ground-based and satellite-borne instruments. Ozone depletion
began in the 1970s and continues now with statistically significant rates,
except over the 20°N-20°S tropical belt. The ozone depletion is mainly due
to the release of man-made chemicals containing chlorine such as CFCs
(chlorofluorocarbons), but also compounds containing bromine, other related
halogen compounds and also nitrogen oxides (NOx). CFCs are a common
industrial product, used in refrigeration systems, air conditioners, aerosols,
solvents and in the production of some types of packaging. Nitrogen oxides
are a by-product of lightning strikes and of combustion processes, including
aircraft emissions.
When the Antarctic ozone hole was detected, it was soon linked to the
presence of the breakdown products of CFCs. The conditions that lead to
the massive loss of ozone, the Antarctic ozone hole, are rather special. During
the winter polar night, sunlight does not reach the south pole. A strong
circumpolar wind develops in the middle to lower stratosphere. These strong
winds are known as the ‘polar vortex’. This has the effect of isolating the air
over the polar region. Since there is no sunlight, the air within the polar
vortex gets very cold and clouds form once the air temperature falls below about −80°C. These clouds are called polar stratospheric clouds (or PSCs
for short), but they are not clouds of ice or water droplets. PSCs first form
as nitric acid trihydrate and, as the temperature gets lower, larger droplets
of water-ice with nitric acid dissolved in them can form. These PSCs are
crucial for ozone loss to occur because heterogeneous chemical reactions
occur on the surfaces of the particles in the PSCs. In these reactions the main
long-lived inorganic carriers (reservoirs) of chlorine, which are formed from the breakdown products of the CFCs, are converted into molecular chlorine, Cl2. No ozone loss occurs until sunlight returns in the spring; then the molecular chlorine is easily photodissociated (split by sunlight) to produce free atoms of chlorine, which are highly reactive and act as a catalyst to destroy large amounts of ozone.
[Figure 10.13 panels: October of each year from 1980 to 1991; total ozone color scale 100–500 DU.]
FIGURE 10.13 (See color insert)
Monthly Southern Hemisphere ozone averages for October, from 1980 to 1991. (Dr. Glenn
Carver, Centre for Atmospheric Science, University of Cambridge, U.K.)
The series of pictures shown in Figure 10.13 was produced using data
from the Total Ozone Mapping Spectrometer (TOMS) instrument on the
Nimbus-7 satellite. The ozone levels computed are ‘column ozone’ or total
ozone and are expressed in Dobson Units or DU for short. One Dobson unit
corresponds to a layer of 0.01 mm of ozone if all the ozone in the column
were to be brought to standard conditions (i.e. air pressure of 1013.25 hPa
and 0° C). Typical total (column) ozone amounts in the atmosphere are of
the order of a few hundred Dobson units. The wavelength bands measured by the TOMS are centred at 312.5, 317.5, 331.3, 339.9, 360.0 and 380.0 nm. The first four wavelengths are absorbed to greater or lesser extents by ozone;
the final two bands are used to assess the reflectivity. The pictures in Figure 10.13 show the progressive development of the ozone hole in the month of October over the Antarctic from 1980 to 1991. It was observed that
the ozone column amount in the centre of the hole decreased by more than
50% in less than five years. Although it is not as dramatic as the Antarctic
ozone hole, there is some evidence of a similar phenomenon occurring in
the northern hemisphere at the end of the Arctic winter, see Figure 10.14.
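These unit definitions are easily checked numerically; the conversion from Dobson units to a molecular column density uses the standard factor of about 2.687 × 10^16 molecules per square centimetre per DU:

# One Dobson unit is a 0.01 mm layer of pure ozone at standard conditions,
# equivalent to about 2.687e16 molecules per square centimetre.
MM_PER_DU = 0.01
MOLECULES_PER_CM2_PER_DU = 2.687e16  # Loschmidt constant (2.687e19 cm^-3) x 0.001 cm

def du_to_layer_mm(du):
    """Thickness (mm) of the ozone column at standard conditions."""
    return du * MM_PER_DU

def du_to_molecules_per_cm2(du):
    """Total ozone molecules in a vertical column of 1 cm^2 cross-section."""
    return du * MOLECULES_PER_CM2_PER_DU

print(du_to_layer_mm(300))           # 3.0 mm -- a typical mid-latitude column
print(du_to_molecules_per_cm2(300))  # about 8.06e18 molecules/cm^2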
It is important to appreciate that the atmosphere behaves differently from
year to year. Even though the same processes that lead to ozone depletion
occur every year, the effect they have on the ozone is altered by the meteorology of the atmosphere above Antarctica. This is known as the ‘variability’
of the atmosphere and this variability gives rise to changes in the amount
of ozone depleted and the dates when the depletion starts and finishes.
The Global Ozone Monitoring by Occultation of Stars (GOMOS) instrument on board Envisat is the newest ESA instrument intended for ozone monitoring. It provides for altitude-resolved global ozone mapping and trend monitoring with improved accuracy, as required for the understanding of ozone chemistry and for model validation.
[Figure 10.14 panels: GOME Northern Hemisphere total ozone (WFDOAS V1) for March of each year, 1996–2005; color scale 100–500 DU.]
FIGURE 10.14 (See color insert)
Monthly Northern Hemisphere ozone averages for March, from 1996 to 2005. (Dr. Mark Weber,
Institute of Environmental Physics, University of Bremen.)
GOMOS employs a novel
measurement technique that uses stars rather than the sun or the moon as
light sources (occultation) for the measurement of stratospheric profiles with
a 1.7-km vertical resolution. GOMOS provides global coverage with typically
more than 600 profile measurements per day and both a daytime and a night-time measurement capability.
Mention should also be made of the Odin satellite, a joint undertaking between Sweden, Canada, France, and Finland, which was launched in February 2001 on a launch vehicle based on decommissioned Russian intercontinental ballistic missiles. Odin is the only satellite to have made continuous measurements of the chlorine chemistry in the ozone layer since 2001. The Odin
satellite carries two instruments: the Optical Spectrograph and Infra-Red
Imaging System and the Sub-Millimetre Radiometer. Supporting both studies
of star formation and the early solar system, and of the mechanisms behind
the depletion of the ozone layer in the Earth’s atmosphere and the effects of
global warming, the Odin satellite combines two scientific disciplines on a
single spacecraft. However, because Odin shares time between astronomy
and atmospheric observations, data are not available for every day. Although
helpful in global mapping of these processes, a few years of observations
are not sufficient to distinguish global change — or whether the ozone layer
is recovering — from the large natural variations. The Ozone Mapping and
Profiler Suite (OMPS), to be flown on the National Polar-Orbiting Operational Environmental Satellite System (NPOESS) Preparatory Project and the
NPOESS, will collect total column and vertical profile ozone data and replace
the daily global data produced by the current ozone monitoring systems,
the SBUV/2, and TOMS, but with higher fidelity.
10.2.6.5 Summary
The chief advantages of satellite remote sensing systems for climatology are:
• Weather satellite data for the whole globe are far more complete
than conventional data.
• Satellite data are more homogeneous than those collected from a
much larger number of surface observatories.
• Satellite data are often spatially continuous, as opposed to point
recordings from the network of surface stations.
• Satellites can provide more frequent observations of some parameters in certain regions, especially over oceans and high latitudes.
• The data from satellites are collected objectively, unlike some conventional observations (e.g., visibility and cloud cover).
• Satellite data are immediately amenable to computer processing.
10.3 Applications to the Geosphere
10.3.1 Geological Information from Electromagnetic Radiation
A gamma-ray spectrometer senses the shortest wavelengths in the radiometric
environment and is able to acquire data on soil composition where conditions
such as moisture content are known, or conversely, moisture content where
the soil composition is known (see Section 5.6). In modern systems, digital
data are collected that can be used to provide a computer map of the soil
and rock environment of an area through the covering vegetation by measuring the relative percentages of uranium, potassium, and thorium (see
Figure 10.15).
Remote sensing instruments acquiring data at wavelengths longer than
gamma-ray wavelengths are usually configured to provide eventual output
in image form. The most familiar electromagnetic sensor is the aerial camera.
Its high resolution and simplicity are balanced by its limitations in spectral
coverage. This ranges from the near-ultraviolet through the visible range
and into the near-infrared.
The traditional approach to geological mapping has involved an on-the-ground or “boots-on” search for rock outcrops. In most terrains, these outcrops
are scattered, isolated, and fairly inaccessible. Mapping of large regions commonly requires years of fieldwork. The process can be accelerated and made
more economical if the geologist is provided with a series of aerial photographs that pinpoint outcrops and reveal structural associations. For many
decades, such photographs have served as the visual base from which maps
are made by tracing the recognizable units where exposed. The geologist is
then able to spot-check the identity of each unit at selected localities and
extrapolate the positions of the units throughout the photographs instead of
FIGURE 10.15 (See color insert)
Computer map of rock exposures determined from gamma-ray spectroscopy. (WesternGeco.)
surveying these units at many sites. Although high-resolution aerial photographs are a prerequisite for detailed mapping, they have certain inherent
limitations, such as geometric distortion and vignetting, where the background
tone falls off outward from the center. These distortions make the joining of
overlapping views into mosaics to provide a regional picture of a large area
difficult. Synoptic overviews are particularly valuable in geology because the
scales of interrelated landforms, structural deformation patterns, and drainage
networks are commonly expressed in tens to hundreds of kilometers, which
is the range typically covered by satellite imagery, eliminating the need to
construct mosaics of air photos. The chief value of satellite imagery to geological applications lies therefore in the regional aspect presented by individual
frames and the mosaics constructed for vast areas extending over entire geological provinces. This allows, for example, whole sections of a continent
subjected to glaciation to be examined as a unified surface on which different
glacial landforms are spatially and genetically interrelated.
Satellite imagery has been found to be very useful for singling out linear
features of structural significance. The extent and continuity of faults and
fractures are frequently misjudged when examined in the field or in individual aerial photographs. Even when displayed in mosaics of aerial photographs, these linear features are often obscured by differences in
illumination and surface conditions that cause irregularities in the aerial
mosaic. With satellite imagery, the trends of these linear features can usually
be followed across diverse terrain and vegetation even though the segments
may not be linked. In many instances these features have been identified
with surface traces of faults and fracture zones that control patterns of
topography, drainage, and vegetation that serve as clues to their recognition.
The importance of finding these features is that lineaments often represent
major fracture systems responsible for earthquakes and for transporting and
localizing mineral solutions as ore bodies at some stage in the past.
A major objective in geological mapping is the identification of rock types
and alteration products. In general, most layered rocks cannot be directly
identified in satellite imagery because of limitations in spatial resolution and
the inherent lack of unique or characteristic differences in color and brightness of rocks whose types are normally distinguished by mineral and chemical content and grain sizes. Nor is it possible to determine the stratigraphic
age of recognizable surface units directly from remotely sensed data unless
the units are able to be correlated with those of known age in the scene or
elsewhere. In exceptional circumstances, certain rocks exposed in broad outcrops can be recognized by their spectral properties and by their distinctive
topographic expressions. However, the presence of covering soil and vegetation tends to mask the properties favorable to recognition.
10.3.2 Geological Information from the Thermal Spectrum
10.3.2.1 Thermal Mapping
The application of thermal imagery in geological mapping is a direct result of
the fact that nonporous rocks are better heat conductors than are unconsolidated soils. At night, therefore, such rocks conduct relatively more of the
Earth’s heat than the surrounding soil-covered areas, producing very marked
heat anomalies that scanners can detect. Porous rocks, on the other hand, do
not show the same intense heat anomalies on night time imagery and after
recent rainfall may actually produce cool anomalies due to their moisture
content. The very strong heat anomalies produced by most rock types permit
the detection of very small outcrops, the tracing of thin outcropping rock units
and, depending on the nature of the soil, their detection below a thin soil cover.
Loose sandy soils permit detection of suboutcrop below at least 20 cm of soil
thickness, but in moist clay soils, virtually no depth penetration occurs.
The identification of individual rock types is based mainly on field checking and on structure and texture, the latter being the result of
the characteristic jointing that particular rock types exhibit. In theory, thermal
inertia (which determines the rate at which particular rock types heat up or
cool down during the night) or narrow-band infrared detectors can be used
for the direct identification of rock type, but the practical application of these
techniques is not well developed.
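One widely used image-based proxy is the apparent thermal inertia, formed from the albedo and the day-night temperature difference. A minimal sketch, with illustrative values only:

def apparent_thermal_inertia(albedo, t_day_k, t_night_k):
    """Apparent thermal inertia (ATI), a common image-based proxy:
        ATI = (1 - albedo) / (T_day - T_night)
    Materials with high thermal inertia (e.g. nonporous rock) show a small
    day-night temperature swing, hence a large ATI; loose dry soils swing
    widely and give a small ATI. The result is a relative index, not SI
    thermal inertia, which requires a full heat-conduction model."""
    return (1.0 - albedo) / (t_day_k - t_night_k)

print(apparent_thermal_inertia(0.20, 305.0, 295.0))  # dense rock: 0.08
print(apparent_thermal_inertia(0.30, 315.0, 285.0))  # dry sandy soil: about 0.023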
10.3.2.2 Engineering Geology
Figure 10.16 is a classic example of far-infrared imaging. The visible spectrum
image on the left provides no indication of the buried stream channel appearing
in the far-infrared image on the right. Not only is the buried stream course
FIGURE 10.16
Visible (left) and thermal-infrared (right) images of a buried stream channel. (WesternGeco.)
evident, but given the knowledge that the infrared image was acquired at
night, certain inferences may be drawn. One is that the horizontal portion
of the stream channel is probably of coarser sand and gravel than the vertical
portion. Its bright signal suggests more readily flowing water, which is
warmer than the night-air-cooled surrounding soil. The very dark signal
adjoining and above the horizontal portion of the stream course, and to both
sides of the vertical portion, probably represents clay, indicating moisture
that cooled down many years ago and remained cold compared with the
surrounding soil because of poor permeability and thermal conductivity.
Such imagery could be invaluable for investigating groundwater, avenues
for pollution movement, exploration for placers, sand, gravel, and clay, and the siting of engineering structures.
Unlike hydrogeological studies or mineral exploration, engineering geological investigations are often confined to the upper 5 m of the Earth’s
surface. Thermal-infrared line scanning has proven to be very effective
within the near-surface zone, and it is becoming increasingly useful in a
variety of engineering geological problems such as those aimed at assessing
the nature of material for foundations, excavations, construction materials
and drainage purposes.
Density of materials and ground moisture content are the two dominant
factors influencing tonal variations recorded in thermal imagery. Subsurface
solid rock geology may be interpreted from one or more of a number of
“indicators,” including vegetation changes, topographic undulations, soil
variations, moisture concentrations, and mineralogical differences. As far as
particle size or grading is concerned, in unconsolidated surface materials,
FIGURE 10.17 (See color insert)
Perspective view of Mount Oyama in Japan created by combining image data from the ASTER,
with an elevation model from the Shuttle Radar Topography Mission. (NASA/JPL/NIMA.)
lighter tones are caused by coarser gradings. Lighter tones also result from
a greater degree of compaction of surface materials.
10.3.2.3 Geothermal and Volcano Studies
Geothermal mapping using remotely sensed data is mainly carried out for
the monitoring of active volcanoes, particularly in the case of eruption prediction studies, or in geothermal energy exploration studies as part of investigations of alternative energy resources.
Figure 10.17 is a perspective view of Mount Oyama, an 820-meter-high
volcano on the island of Miyake-Jima, about 180 km south of Tokyo, Japan,
which was created by combining image data from the Advanced Spaceborne
Thermal Emission and Reflection Radiometer (ASTER) aboard NASA’s Terra
satellite with an elevation model from the Shuttle Radar Topography Mission
(SRTM). Vertical relief is exaggerated, and the image includes cosmetic
adjustments to clouds and image color to enhance clarity of terrain features:
Size of the island: Approximately 8 km (5 miles) in diameter
Location: 34.1° N, 139.5° E
Orientation: View toward the west-southwest
Image data: ASTER visible and near-infrared
Date acquired: February 20, 2000 (SRTM); July 17, 2000 (ASTER)
In late June 2000, a series of earthquakes alerted scientists to possible volcanic activity on Miyake-Jima island. On June 27, the authorities evacuated
2600 people and on July 8, the volcano began erupting; it erupted five times
over the next week. The dark gray blanket covering the green vegetation in
Figure 10.17 is ash deposited by prevailing northeasterly winds between July
8 and 17. Figure 10.18(a) and (b) represent a stereopair of the ground-surface
temperature map of Miyake-Jima island following an earlier eruption in 1983.
Figure 10.18 was generated by computer processing to superimpose an image
of the temperature distribution on a stereo terrain model.
[Figure 10.18 panels: color-coded temperature scale 20–50°C; in-figure title: Miyake Island, 1983.10.5, 19:00, stereo pair, Left: EL. = 80°W, Right: EL. = 80°E.]
FIGURE 10.18 (See color insert)
Stereopair of color-coded temperature maps of Miyake-Jima island on October 5, 1983. (Asia
Air Survey Company.)
Geothermal mapping sites are usually located in mountainous areas, and
repetitive mapping of the same area is often required for monitoring
purposes. Repetitive mapping requires strict geometric rectification of the
overlay imagery. In the rectification process, Digital Terrain Model (DTM)
data provide information for relief displacement correction, and various
effects due to solar radiation, soil moisture, and vegetation may need to
be evaluated for the refinement of the data. A surface temperature map
obtained from night-time thermal-infrared data is generally used as the
overlay. This surface temperature map requires calibration that is usually
carried out using ground measurements and references available in the
sensing instrument. The principal difficulty involves evaluating the atmospheric effects. However, the path lengths between sensor and ground
objects can be calculated from exterior orientation parameters and DTM data,
enabling the relationship between relative differences of path length and
temperature discrepancies to be established. Although the ground measurement data may have greater weight, analysis of path length effects yields
better understanding of the atmospheric effects that may vary locally in the
survey area. Ground-surface temperature obtained from the thermal-infrared data is a result of a combination of heat flow from beneath the ground
surface and contamination from other influences, such as air temperature
and solar heating, which may be filtered out if the data are used in conjunction with elevation, slope, and aspect derived from a DTM. Slope and aspect
of the ground are the main components of the thermal contamination caused
by topography and solar heating. The southward slope is usually warmer
than the northward slope in the northern hemisphere and temperature differences may be clearly seen even on predawn thermal-infrared data. The correction for topographic conditions may be as large as 1°C for night-time data.
Differences of emissivity between objects due to different land cover conditions are a further difficulty in the evaluation of thermal-infrared data.
These differences may have to be taken into account when the data are
analyzed. A land cover map may be compiled on the basis of the multispectral characteristics of objects. Surface temperature may be obtained from
nonvegetated areas identified in the land-cover map. Because of vegetation
cover, the ground-surface temperature data available are usually very
sparsely distributed. If one assumes that the overall distribution pattern of
the ground-surface temperature reflects the underground temperature distribution, which is usually governed by the geological structure of the area,
a trend-surface analysis may be applied to interpolate between the sparse
ground temperature data and therefore make it possible to visualize the
trend of the ground-surface temperature, which is usually obscured by the
high-frequency variations in the observed ground-surface temperatures. Of course,
by combining thermal-infrared data with other survey data, even more useful
information can be drawn and a better understanding of the survey area
achieved. The refined remote sensing data may be cross-correlated by
computer manipulation of multivariable data sets, including geological,
geophysical, and geochemical information. These remote sensing data,
together with other remote and direct sensing measurements, may then be
used to target drill holes to test the geothermal site for geothermal resources.
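The trend-surface interpolation mentioned above can be illustrated in a few lines of Python: a low-order polynomial surface is fitted by least squares to the sparse ground-temperature samples and then evaluated on a regular grid, suppressing the high-frequency variations. This is a generic sketch under our own assumptions (a second-order surface and synthetic sample data), not a prescription.

    import numpy as np

    def trend_surface(x, y, t, order=2):
        # Fit a polynomial trend surface t(x, y) by least squares and
        # return a function that evaluates it anywhere.
        terms = [(i, j) for i in range(order + 1) for j in range(order + 1 - i)]
        A = np.column_stack([x**i * y**j for i, j in terms])
        coef, *_ = np.linalg.lstsq(A, t, rcond=None)

        def surface(xq, yq):
            return sum(c * xq**i * yq**j for c, (i, j) in zip(coef, terms))
        return surface

    # Example: interpolate 30 sparse readings onto a 100 x 100 grid
    rng = np.random.default_rng(0)
    x, y = rng.uniform(0, 1, 30), rng.uniform(0, 1, 30)
    t = 15 + 4 * x - 3 * y**2 + rng.normal(0, 0.3, 30)   # synthetic readings
    fit = trend_surface(x, y, t)
    gx, gy = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
    trend = fit(gx, gy)   # smooth regional trend; local noise is suppressed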
10.3.2.4 Detecting Underground and Surface Coal Fires
Self-combustion is one of the many problems of coal mining. Incipient heating,
both within the mines and in the associated storage, at times gives rise to fires.
Fires can be detrimental to the production and overall quality of coal and can
constitute a major hazard, there being many instances of injuries and fatalities
due to burns or poisoning by noxious gases. It is important that zones of self-combustion or fire be detected at the earliest possible time and, thereafter, continuously monitored until suitable remedial action has been taken.
The traditional method of monitoring self-combustion in coal involves
thermistors. Temperatures are measured at as many points as possible on
the ground or dump, and an isothermal map is constructed. From this
information, areas of relatively higher temperatures, which could relate to
self-combustion, can be delineated. This method is, however, somewhat
conjectural particularly if the data points are sparse. An infrared imaging
system provides a very attractive alternative to the use of thermistors. This
type of system is capable of supplying a continuous record of very small
temperature changes without having to come too close to the source itself.
Figure 10.19 shows a fixed thermal imaging system located at an elevated
viewing point for monitoring a large area coal stockyard.
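The kind of delineation an infrared imaging system permits can be sketched as follows: estimate the local background temperature with a median filter and flag contiguous groups of pixels standing more than a few kelvin above it. The 5 K offset and 31-pixel window are illustrative choices of ours, not values taken from any particular monitoring system.

    import numpy as np
    from scipy import ndimage

    def delineate_hot_zones(thermal, delta=5.0, window=31):
        # thermal: 2-D array of radiometric temperatures (K) from the imager
        background = ndimage.median_filter(thermal, size=window)
        hot = thermal - background > delta      # anomalously warm pixels
        labels, n = ndimage.label(hot)          # group contiguous hot pixels
        return labels, n                        # n candidate combustion zones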
In the case of underground fires, heat produced from burning coal would
not necessarily escape to the surface, but the ground above the fire will be
FIGURE 10.19
Typical thermal coal pile fire detector fitted high up on an automatic pan and tilt scanning
mechanism to provide views of an entire stockyard. (Land Instruments International.)
heated by conduction and be indicated in the data by a thermal anomaly.
Voigt et al. (2004) show how different satellite remote sensing methods can
be used to detect, analyze, and monitor near-surface coal seam fires in arid
and semiarid areas of North China.
10.3.3 Geological Information from Radar Data
Operating at wavelengths beyond the infrared region of the electromagnetic
spectrum are the various radar devices. Radar supplies its own illumination
and accordingly can collect data by day or by night. Because of the long
wavelength used, radar can also generally collect data through cloud cover
and is therefore invaluable for mapping in humid tropic environments,
which are generally characterized by almost perpetual cloud cover.
Figure 10.20 shows an X-band, synthetic aperture radar (SAR) image of
an arid environment in Arizona. The radar, responding to surface cover
texture and geometry, shows the brightest return signals from the bare rock
mountain peaks and talus. The return signals decrease progressively
downslope, through the bajada slopes, and on into the Bolson plain, with
its playa features that are totally specular — that is, they reflect the illuminating
FIGURE 10.20
X-band SAR image of an arid environment in Arizona. (WesternGeco.)
energy away from, rather than back to, the receiver. These “dark” signals
are from the finest material in the area, probably clays and silts. Stringers of
bright signals may also be observed in the Bolson plain. These are probably
caused by coarse sands and gravels and represent the braided courses of the
highest velocity streams entering the area.
Thus radar, in this sense, is useful in the exploration for sand and gravel
construction materials and placers — that is, for emplacing construction on
the good, coarse materials rather than on clays and silts. Radar data is also
particularly effective for the identification of lineaments, depending on the
imaging geometry. Radar has been used for detecting and mapping many
other environmental factors. A major product used for such work is the
precision mosaic, from which geological maps, geomorphological maps, soil
maps, ecological conservation maps, land use potential maps, agricultural
maps, and phytological maps have been generated.
10.3.4 Geological Information from Potential Field Data
The aeromagnetometer is the most generally used potential field sensor.
Millions of line kilometers of aeromagnetic data have been acquired over
land and sea, globally, since its inception. Digitally processed aeromagnetic
data have provided information concerning the structural geology and
lithology of the environments to considerable depths.
The “poor second sister” of the potential field sensors is the airborne gravity
meter. A gravity meter can provide structural geological and lithological information from greater depths than an aeromagnetometer, particularly in
FIGURE 10.21
GGM01 showing geophysical features, July 2003. (University of Texas Center for Space Research
and NASA.)
situations where the latter may be limited in its data acquisition, such as in
the presence of power lines and metal fences, but its operational requirements
are more demanding and costly.
Recent satellite missions equipped with highly precise inter-satellite and
accelerometry instrumentation have been observing the Earth’s gravitational
field and its temporal variability. The CHAllenging Minisatellite Payload
(CHAMP), launched in July 2000, and the Gravity Recovery and Climate
Experiment (GRACE), launched in March 2002, provide gravity fields, and
anomalies, on a routine basis. Figure 10.21 shows the GRACE Gravity
Model 01 (GGM01) that was released on July 21, 2003, based upon a preliminary analysis of 111 days of in-flight data gathered during the commissioning phase of the mission. This model is 10 to 50 times more accurate than
all previous Earth gravity models. The ESA Gravity Field and Steady-State Ocean Circulation Explorer (GOCE) mission is scheduled for launch in 2006.
10.3.5 Geological Information from Sonars
Sonar is the major sensor used in sea-floor mapping. The high-precision,
multibeam echosounder system, jointly developed by the U.S. Navy and
SeaBeam Instruments, Inc., provided the first images of the ocean floor near
the epicenter of the December 26, 2004, Asian tsunami. The Sonar Array
Sounding System (SASS IV) installed aboard the Royal Navy oceanographic
survey vessel, HMS Scott, is a low-frequency, high resolution multibeam
FIGURE 10.22 (See color insert)
SASS IV bathymetry map of the Sumatran subduction zone, showing the ocean floor near the epicenter of the December 26, 2004, Asian tsunami. (SeaBeam Instruments, Inc., Royal Navy, British Geological Survey, Southampton Oceanography Centre, U.K. Hydrographic Office, Government of Indonesia.)
sonar system that collects and processes seafloor depth data. It produces
three-dimensional bathymetric images over a wide swath in near-real time.
Following the 9.2 magnitude earthquake that occurred on December 26, HMS
Scott deployed to the area and quickly collected a significant amount of
bathymetric data. The data were then used to create three-dimensional
images for evaluation to contribute to further understanding of that particular earthquake and to assist in the prediction of such events in the future.
Figure 10.22 shows a bathymetric image of the boundary between the Indian
Ocean and Asian tectonic plates. In the left foreground, at the base of the blue area, is a 100-metre-deep channel, termed ‘The Ditch’, which is believed to have been formed by the earthquake. The deep channel cutting across the image has formed through erosion as convergence between the plates has uplifted the seabed (Henstock et al., 2006).
10.4 Applications to the Biosphere
Remote sensing techniques play an important role in crop identification,
acreage and production estimation, disease and stress detection, and soil and
water resources characterization, and also provide inputs for crop-yield and
crop-weather models, integrated pest management, watershed management,
and agrometeorological services.
In all but the most technologically advanced countries, up-to-date and accurate assessments of total acreage of different crops in production, anticipated
yields, stages of growth, and condition (health and vigor) are often incomplete
or untimely in relation to the information needed by agricultural managers.
These managers are continually faced with decisions on planting, fertilizing,
watering, pest control, disease, harvesting, storage, evaluation of crop quality,
and planning for new cultivation areas. Remotely sensed information is used
to predict marketing factors, evaluate the effects of crop failure, assess damage
from natural disasters, and aid farmers in determining when to plough, water,
spray, or reap. The need for accurate and timely information is particularly
acute in agricultural information systems because of the very rapid changes
in the condition of agricultural crops and the influence of crop yield predictions
on the world market; it is for these reasons that, as remote sensing technology
has developed, the potential for this technology to be used in this field has
received widespread attention. Previously, aircraft surveys were sporadically
used to assist crop and range managers in gathering useful data, but given
the cost and logistics of aircraft campaigns and the advent of multivisit multispectral satellite sensors designed specifically for the monitoring of vegetation, attention has shifted to the use of satellite imagery for agricultural
monitoring. Crop identification, yield analysis, and validation and verification
activities rely on a revisit capability throughout the growing season, which is
now available from repetitive multispectral satellite imagery.
Color-infrared film is sensitive to the green, red, and near-infrared (500 to
900 nm) portions of the electromagnetic spectrum and is widely used in
aerial and space photographic surveys for land use and vegetation analysis.
Living vegetation reflects light in the green portion of the visible spectrum
to which the human eye is sensitive. Additionally, it reflects up to 10 times
as much in the near-infrared (700 to 1100 nm) portion of the spectrum, which
is just beyond the range of human vision. When photosynthesis decreases,
either as a result of normal maturation or stress, a corresponding decrease
in near-infrared reflectance occurs. Living or healthy vegetation appears as
various hues of red in color-infrared film. If diseased or stressed, the color
response shifts to browns or yellows due to the decrease in near-infrared
reflectance. Color-infrared film is also effective for haze penetration because
blue light is eliminated by filtration.
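The contrast between strong near-infrared reflectance in healthy canopies and its decline under stress is commonly condensed into a normalized red/near-infrared ratio such as the NDVI. The text does not single out a particular index, so the following Python lines are simply one standard way of expressing that behavior, with illustrative reflectance values.

    import numpy as np

    def ndvi(nir, red):
        # Normalized difference vegetation index from reflectances in [0, 1]
        nir, red = np.asarray(nir, float), np.asarray(red, float)
        return (nir - red) / np.maximum(nir + red, 1e-6)

    print(ndvi(0.45, 0.05))   # healthy canopy: high NIR, low red   -> ~0.8
    print(ndvi(0.30, 0.10))   # stressed canopy: NIR falls, red up  -> ~0.5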
Crops are best identified from computer-processed digital data that represent quantitative measures of radiance. In general, all leafy vegetation has a
similar reflectance spectrum regardless of plant or crop species. The differences
between crops, by which they are separated and identified, depend on the
degree of maturity and percentage of canopy cover, although differences in
soil type and soil moisture may serve to confuse the differentiation. However,
if certain crops are not separable at one particular time of the year, they may
be separable more readily at a different stage of the season due to differences
in planting, maturing, and harvesting dates. The degree of maturity and the
yield for a given crop may also influence the reflectance at any stage of growth.
This maturity and yield can be assessed as the history of any crop is traced in
terms of its changing reflectances. When a crop is diseased or seriously damaged
(for example, by hail), its reflectances decrease, particularly in the infrared
region, allowing the presence of stress to be recognized. Lack of available
moisture also stresses a crop, the effect of which again shows up as a reduction
of reflected light intensity in the infrared region, usually with a concomitant
drop in reflectance in the green and a rise in the red.
Crop acreage estimation consists of two parts, the mensuration of field
sizes and the categorization of those fields by crop type. The mensuration
process can sometimes be facilitated by manipulating imagery to make field
boundaries more distinct. The categorization of those fields by crop type is
then usually performed by multispectral classification (see Section 9.7).
Similarly, the biomass, or amount of feed available in grasses, bush, and
other forage vegetation of the rangeland, may also be estimated from measurements of relative radiance levels.
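A minimal form of the multispectral classification step is the minimum-distance classifier sketched below: each pixel’s band vector is assigned to the crop class whose training mean (taken from fields of known type) lies nearest in spectral space. The four-band values shown are invented for illustration; operational work would use maximum-likelihood or related methods as described in Section 9.7.

    import numpy as np

    def classify_min_distance(pixels, class_means):
        # pixels: (n, bands) array; class_means: name -> (bands,) mean vector
        names = list(class_means)
        centres = np.stack([class_means[k] for k in names])
        d = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        return [names[i] for i in d.argmin(axis=1)]

    means = {"wheat":  np.array([0.08, 0.12, 0.10, 0.45]),   # illustrative
             "fallow": np.array([0.15, 0.18, 0.22, 0.28])}
    print(classify_min_distance(np.array([[0.09, 0.11, 0.10, 0.42]]), means))
    # -> ['wheat']: nearest training mean in four-band spectral space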
10.4.1 Agriculture
The 3-year Large Area Crop Inventory Experiment (LACIE), using Landsat
MSS imagery, first demonstrated that the global monitoring by satellite of food
and fiber production was possible. Where LACIE’s mission was to prove the
feasibility of Landsat for yield assessment of one crop, wheat, this activity has
now extended to the monitoring of multiple crops on a global scale. The LACIE
experiment highlighted the potential impact that a credible crop yield assessment could have on world food marketing, administration policy, transportation, and other related factors. In 1977, during Phase 3 of LACIE, it was decided
to test the accuracy of Soviet wheat crop yield data by using Landsat-3 to
assess the total production from early season to harvesting in the then Union
of Soviet Socialist Republics (USSR). In January 1977, the USSR officially
announced that it expected a total grain crop of 213.3 million metric tons. This
was about 13% higher than the country’s 1971 to 1976 average. Because Soviet
wheat historically accounted for 48% of its total grain production, the anticipated wheat yield would have been about 102 million metric tons for the year.
LACIE computations, made after the Soviet harvests, but prior to the USSR
release of figures, estimated Russian wheat production at 91.4 million metric
tons. In late January 1978, the USSR announced that its 1977 wheat production
had been 92 million metric tons. The U.S. Department of Agriculture (USDA)
final estimate was 90 million metric tons. Previous USDA assessments of Soviet
wheat yield had had an accuracy of 65/90, meaning that the USDA’s conventionally collected data could have an accuracy of ±10% only 65% of the time.
The LACIE program was designed to provide a crop-yield assessment accuracy of 90/90, or within ±10% in 90% of the years the system was used.
Earth observation satellites are now routinely used for a broad range of
agricultural applications. In developed countries, where producers are often
sophisticated users of agricultural and meteorological information, satellite
data is widely used in many “agri-business” applications. By providing
frequent, site-specific insights into crop conditions throughout the growing
season, derived satellite data products help growers and other agriculture
professionals manage crop production risks efficiently, increasing crop yields
while minimizing environmental impacts.
The USDA’s Foreign Agricultural Service now provides near-real-time
agrometeorology data to the public through its Production Estimates and
Crop Assessment Division. One of the most prominent services has been the
development of a Web-based analytical tool called Crop Explorer that provides timely and accurate crop condition information on a global scale. The
Crop Explorer website (http://www.pecad.fas.usda.gov/cropexplorer/) features
near-real-time global crop condition information based on satellite imagery
and weather data. Thematic maps of major crop growing regions depict
vegetative vigor, precipitation, temperature, and soil moisture. Time-series
charts show growing season data for specific agrometeorological zones.
Regional crop calendars and crop area maps are also available for selected
regions of major agricultural significance. Every 10 days, more than 2,000
maps and 33,000 charts are updated on the Crop Explorer website, including
maps and charts for temperature, precipitation, crop modelling, soil moisture, snow cover, and vegetation indices. Indicators are further defined by
crop type, crop region, and growing season.
In Europe, the Monitoring Agriculture through Remote Sensing Techniques (MARS) project is a long-term endeavor to monitor weather and crop
conditions during the current growing season and to estimate final crop
yields for Europe by harvest time. The MARS project has developed, tested,
and implemented methods and tools specific to agriculture using remote
sensing to support four main activities:
Antifraud measures: This is a multifaceted activity, with measures
to combat fraud related to the implementation of the regulated European Common Agriculture Policy (CAP) as the central theme. Tasks
include the management of agri-environmental subsidies.
Crop-yield monitoring: This activity involves crop-yield monitoring
using agrometeorological models (the Crop Growth Monitoring System), low-resolution remote sensing methods and area estimates,
and high-resolution data combined with ground surveys.
Specific surveys: Area sampling provides rapid and specific information needed for the definition or reform of agricultural policies.
New sensors and methods: This involves embracing technological
developments in new sensors, precision farming and alternative data
collection, and processing techniques for large scale agricultural
applications.
With high-accuracy global satellite navigation systems (such as the Global
Positioning System (GPS) and Galileo) being installed on farm machinery,
new capabilities are being developed that allow mechanized operations such
as tillage, planting, fertilizer application, pesticide and herbicide application,
irrigation, and harvesting to be optimized with the aid of geospatial information provided by imaging satellites.
The latest generation of multispectral, hyperspectral, and SAR sensors
— combined with improved models for interpretation of their data for
various crops and environmental parameters — are increasing the scope
and capabilities of Earth-observing satellites in support of agricultural
businesses. A number of commercial satellite missions (e.g., the RapidEye
system [Germany] and the Tuyuan Technologies “Surveyor” Satellite Constellation [China]) dedicated exclusively to (and funded by) crop monitoring and yield forecasting, are planned, and a sizable industry of service
companies is emerging to exploit such missions.
10.4.2 Forestry
In forestry, multispectral satellite data have proven effective in recognizing
and locating the broadest classes of forest land and timber and in separating deciduous, evergreen, and mixed (deciduous-evergreen) communities. Further possibilities include measurement of the total acreage given
to forests, and changes in these amounts, such as the monitoring of the
deforestation and habitat fragmentation of the tropical rainforest in the
Amazon. The images in Figure 10.23 show the progressive deforestation
of a portion of the state of Rondônia, Brazil. Systematic cutting of the
forest vegetation started along roads and then fanned out to create the
“feather” or “fishbone” pattern shown in the eastern half of the 1986 (b)
and 1992 (c) images.
Approximately 30% (3,562,800 km²) of the world’s tropical forests are in Brazil. The estimated average deforestation rate from 1978 to 1988 was 15,000 km² per year. In 2005, the federal government of Brazil indicated that 26,130 km² of forest were lost in the year up to August 1, 2004. This figure
was produced by the National Institute for Space Research (INPE) in Brazil
on the basis of 103 satellite images covering 93% of the so-called “Deforestation Arc,” the area in which most of the trees are being cut down. INPE
has developed a near-real-time monitoring application for deforestation
detection known as the Real Time Deforestation Monitoring System.
Figure 10.24 shows an overview of the Hayman forest fire burning in the
Pike National Forest 35 miles south of Denver, CO. The images were collected
on June 12, 2002, and June 20, 2002, by Space Imaging’s IKONOS satellite.
Each photo is a composite of several IKONOS images that have been reduced
in resolution and combined to better visualize the extent of the fire’s footprint. In these enhanced color images, the burned area is purple and the
healthy vegetation is green.
According to the U.S. Forest Service, when the June 12 image was taken,
the fire had consumed 86,000 acres and had become Colorado’s worst fire
ever. The burned area on this image measures approximately 32 km × 17 km
(20 miles × 10.5 miles). This type of imagery is used to assess and measure
damage to forest and other types of land cover, for fire modelling, disaster
FIGURE 10.23 (See color insert)
Progressive deforestation in the state of Rondônia, Brazil (scale bars 6 mi), as seen on (a) June 19, 1975 (Landsat-2 MSS bands 4, 2, and 1), (b) August 1, 1986 (Landsat-5 MSS bands 4, 2, and 1), and (c) June 22, 1992 (Landsat-4 TM bands 4, 3, and 2). (USGS.)
preparedness, insurance and risk management, and disaster mitigation
efforts to control erosion or flooding after the fire is out.
The Hazard Mapping System (HMS) operated by the NOAA National Environmental Satellite, Data, and Information Service (NESDIS) is a multiplatform remote sensing approach to detecting fires and smoke over the United
States (including Alaska and Hawaii), Canada, Mexico, and Central America.
FIGURE 10.23 (Continued).
FIGURE 10.24 (See color insert)
IKONOS satellite images of the Hayman forest fire burning in the Pike National Forest south of Denver, CO, collected on June 12, 2002, and June 20, 2002. (Space Imaging.)
The HMS utilizes NOAA’s Geostationary Operational Environmental Satellites (GOES), Polar Operational Environmental Satellites (POES), the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument on NASA’s
Terra and Aqua spacecraft, and the DMSP Operational Linescan System (OLS)
sensor (F14 and F15). Automated detection algorithms are employed for each
of the satellites (except DMSP OLS). Analyst intervention provides for the
deletion of false-positives and the addition of fires missed by the automated
detection algorithms, but this intervention is confined to Canada, the United
States, and U.S. border areas.
The primary fire detection tool for all satellites is an infrared sensor in the
3.7 to 4.0 µm range. GOES infrared imagery has 4 km (at subpoint) sensor
resolution. Smoke is detected through the use of 1 km animated visible
imagery. The coarse GOES infrared resolution is offset by a rapid update
cycle of 15 minutes, which allows for the detection of short-lived fires, and
those that are obscured by clouds for extended periods of time. Data from
each of the NOAA polar-orbiting satellites and MODIS TERRA and AQUA
are available twice a day (more frequently over Alaska). The lower temporal
frequency (when compared to the 15-minute GOES imagery) is offset by the
higher 1 km spatial resolution along the suborbital track. This allows for the
detection of smaller and cooler burning fires.
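The automated detection algorithms referred to above commonly combine an absolute brightness-temperature test in the 3.7 to 4.0 µm channel with a contextual test against the surrounding background. The sketch below is a much-simplified version with illustrative thresholds of our own choosing; the operational GOES and MODIS algorithms include additional cloud, sun-glint, and false-alarm screening.

    import numpy as np
    from scipy import ndimage

    def candidate_fires(bt4, abs_thresh=320.0, k=3.0, window=21):
        # bt4: brightness temperatures (K) in the 3.7-4.0 um channel
        mean = ndimage.uniform_filter(bt4, window)           # local background
        mean_sq = ndimage.uniform_filter(bt4 * bt4, window)
        std = np.sqrt(np.maximum(mean_sq - mean * mean, 0.0))
        # Fire pixel if absolutely hot, or well above its local background
        return (bt4 > abs_thresh) | (bt4 > mean + k * std)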
In Europe, the Forest Focus Regulation (Regulation No 2152/2003 of the
European Council and Parliament) requires the monitoring of forests and
environmental interactions, which can be met, in part, by the use of satellite
services. One of these services is the European Forest Fire Information System
(EFFIS) that implements methods for the evaluation of forest fire risk and
the mapping of burnt areas at the European scale. It has been widely used
for monitoring forest fires in Southern Europe. The EFFIS service aims to
address both prefire and postfire conditions and continuously monitors the
risk level, supplying daily information to Civil Protection Departments and
Forestry Services in the European Union Member States from May 1 until
October 31 each year.
The MODIS Rapid Response System, a collaboration between Goddard
Space Flight Center and the University of Maryland to prototype rapid access
to MODIS products, offers an internet-based mapping tool called Fire Mapper
that delivers the location of active fires in near-real time. An interactive map
showing active fires for a specified time period, combined with a choice of
geographic information system layers and satellite imagery, is provided for
regions and countries selected. Each fire detection represents the center of a
1 km pixel flagged as containing one or more actively burning fires. The
active fires are detected using data from the MODIS instrument on board
NASA’s Aqua and Terra satellites. The Fire Mapper is primarily aimed at
supporting natural resource managers, by helping them understand when
and where fires occur. The Center for Applied Biodiversity at Conservation
International has teamed up with the MODIS Rapid Response System to
develop an e-mail alert system warning of fires in or around nominated areas
and areas of biodiversity sensitivity. This alert system is able to send a range
of products, from text messages with just the coordinates of active fires to
e-mails with a JPEG attachment showing an image of the area with the active
fire. These e-mails may also contain attribute data, such as the geographic
coordinates for the pixel flagged, the time and date of data acquisition, and
a confidence value.
10.4.3 Spatial Information Systems: Land Use and Land Cover Mapping
In recent years, satellite data have been incorporated into sophisticated information systems of a geographical nature, allowing the synthesis of remotely
sensed data with existing information. The growing complexity of society
has increased the demand for timely and accurate information on the spatial
distribution of land and environmental resources, social and economic indicators, land ownership and value, and their various interactions. Land and
geographic information systems attempt to model, in time and space, these
diverse relationships so that, at any location, data on the physical, social,
and administrative environment can be accessed, interrogated, and combined to give valuable information to planners, administrators, resource
scientists, and researchers. Land information systems are traditionally parcel
based and concerned with information on land ownership, tenure, valuation,
and land use and tend to have an administrative bias. Their establishment
depends on a thorough knowledge of the cadastral system and of the horizontal and vertical linkages that occur between and within government
departments that collect, store, and utilize land data. Geographic information
systems have developed from the resource-related needs of society and are
primarily concerned with inventory based on thematic information, particularly in the context of resource and asset management (see Burrough, 1986;
Rhind and Mounsey, 1990). The term “geospatial information system” is
used often to describe systems that encompass and link both land and
geographic information systems, but the distinctions in terminology are
seldom observed in common use.
By their very nature, geographic information systems rely on data from
many sources, such as field surveys, censuses, records from land title-deeds,
and remote sensing. The volume of data required has inevitably linked the
development of these systems to the development of information technology
with its growing capacity to store, manipulate, and display large amounts of
spatial data in both textual and graphical form. Fundamental to these systems
is an accurate knowledge of the location and reliability of the data; accordingly,
remote sensing is able to provide a significant input to these systems, in terms
of both the initial collection and subsequent updating of the data.
“Land use” refers to the current use of the land surface, whereas “land
cover” refers to the state or cover of the land only. Remotely sensed data of
the Earth’s surface generally provide information about land cover, with
interpretation or additional information being needed to ascertain land use.
Land use planning is concerned with achieving the optimum benefits in the
development and management of land, such as food production, housing,
urbanization, manufacture, supply of raw materials, power production, transportation, and recreation. This planning aims to match land use with land
capability and to tie specific uses with appropriate natural conditions so as
to provide adequate food and materials supplies without significant damage
to the environment. Land use planning has previously been severely hampered both by the lack of up-to-date maps, showing which categories are
present and changing in an area or large region, and by the inadequacies of
the means by which the huge quantities of data involved were handled. The
costs involved in producing land cover, land use, and land capability maps
have prohibited their acquisition at useful working scales. Vast areas of Africa,
Asia, and South America remain poorly, and often incorrectly, mapped. A
United Nations Educational, Scientific and Cultural Organization–sponsored
project has completed a series of land use maps at scales of 1:5,000,000 to
1:20,000,000. Although invaluable as a general record of land use (and for
agriculture, hydrology, and geology), these maps have insufficient detail to
assist developers and managers in many of their decisions. Furthermore,
frequent changes in land use are difficult to plot at such small scales. Satellites
are able to contribute significantly to the improvement of this situation.
Two global 1-km land cover data sets have been produced from 1992–1993
Advanced Very High Resolution Radiometer (AVHRR) data: the International
Geosphere-Biosphere Program Data and Information System (IGBP-DIS) DISCover and that of the University of Maryland (UMd) (Hansen and Reed, 2000).
An update to these data sets for the year 2000 (GLC2000) has been produced
by the E.U. Joint Research Centre (JRC), in collaboration with over 30 research
teams from around the world (Bartholomé and Belward, 2005).
The general objective of the GLC2000 initiative was to provide a harmonized land cover database over the whole globe for the year 2000, the year
2000 being considered a reference year for environmental assessment in
relation to various activities, in particular the United Nations’ ecosystem-related international conventions. The GLC2000 data were derived from
14 months of preprocessed daily global data acquired by the VEGETATION
instrument on board the SPOT 4 satellite, the VEGA 2000 dataset (VEGETATION data for Global Assessment in 2000).
More recently, investigators at Boston University have begun using
MODIS data from the NASA Aqua and Terra satellites to enhance and update
the IGBP DISCover and the University of Maryland global 1 km data sets.
Land use can be deduced or inferred indirectly from the identity and distribution patterns of vegetation, surface materials, and cultural features as
interpreted from imagery. With supplementary information, which may be
extracted from appropriate databases or information systems but is usually
simply the accumulated knowledge of the interpreter, specific categories of
surface features can be depicted in map form. Accordingly, through both
satellite and aircraft coverage as the situation requires, it is now possible to
monitor changing land use patterns, survey environmentally critical areas,
and perform land capability inventories on a continuing basis. The repetitive
coverage provided by current satellites allows the continual updating of
information systems and maps, although the frequency of revision depends on
the scale of map involved and the geographical situation of the area in question.
10.5 Applications to the Hydrosphere
10.5.1 Hydrology
The more information there is available about the hydrologic cycle, the better
a water manager is able to make decisions with regard to the allocation of
water resources for consumption, industrial use, irrigation, power generation,
and recreation. In times of excess, flood control may become the primary task;
in times of drought, irrigation and power generation may be the first concern.
The perspective gained by satellite remote sensing adds the aerial dimension
to the conventional hydrologic data collected at point measurement stations
(see, for example, Figure 10.25, which is a simulated Thematic Mapper image of a section of the Ikpikpuk River on the north slope of Alaska). Estimates of the occurrence and distribution of water are greatly facilitated with satellite data, whereas the repetitive coverage provides a first step toward the assessment of the rapid changes associated with the hydrological cycle.

FIGURE 10.25 (See color insert)
Simulated Thematic Mapper image of a section of the Ikpikpuk River on the north slope of Alaska. (NASA Ames Research Centre.)
Fortunately, many of the hydrologic features of interest for improved water
resource management are easily detected and measured with remote sensing
systems. Although water sometimes reflects light in the visible wavelengths in a similar manner to other surface features, it strongly absorbs
light in the near-infrared. As a consequence, standing water is very dark
in the near-infrared, contrasting with soil and vegetation, which both
appear bright in this part of the spectrum. Thus, in the absence of cloud,
surface water can easily be distinguished and monitored in the optical and
near-infrared wavebands.
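Because standing water is so dark in the near-infrared, a single reflectance threshold already yields a workable water mask in cloud-free scenes. A minimal sketch follows; the 0.05 threshold is illustrative and scene dependent.

    import numpy as np

    def water_mask(nir_reflectance, threshold=0.05):
        # True where near-infrared reflectance is low enough to be water;
        # soil and vegetation are bright in this band and fall above it.
        return np.asarray(nir_reflectance) < threshold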
Snow depth and snow-covered area are two important parameters that
determine the extent of water runoff in river basins after the snow has
melted. In many parts of the world, this runoff is important for drinking
water supplies, hydroelectric power supply, and irrigation. The large spatial
variability in snow cover makes it extremely difficult to obtain reliable estimates of how much snow is on the ground. In the visible region, snow
obviously appears very bright, providing marked contrast with non-snow-covered surfaces. However, in many cases, discrimination between cloud
and snow is not at all easy in the optical wavelengths. Accordingly, substantial use has been made of active and passive microwave satellite data to map
snow cover, exploiting the all-weather ability of these systems. Passive
microwave data, which have been available from 1978, provide information
about snow cover, but not depth. Regular daily snow depth measurements
are available at climate and weather observing stations, but these sites tend
to be concentrated in populated areas, at lower elevations, and are only point
estimates, making their use for interpolation of snow depth over wide areas
problematic.
The experimental GRACE mission uses high-precision satellite-to-satellite
tracking to measure changes in the gravity field between two identical spacecraft in the same orbit. Changes in ground-water mass, based on the
extremely precise observation of time-dependent variations in the Earth
gravity field, are reflected in minute gravitational signature changes that are
attributable to soil moisture or snow water equivalent. This application of
photon-less remote sensing is intended to detect changes in mass distribution
equivalent to ±1 cm variation in water storage over a 500 km × 500 km area.
Because the method is essentially gravimetric, no discrimination is possible
between changes in water stored in various reservoirs.
10.5.2 Oceanography and Marine Resources
Efficient management of marine resources and effective management of
activities within the coastal zone depend, to a large extent, on the ability to
identify, measure, and analyze a number of processes and parameters that
operate or react together in the highly dynamic marine environment. In this
regard, measurements are required of the physical, chemical, geometrical,
and optical features of coastal and open zones of the oceans. These measurements include sea ice, temperature, current, suspended sediments, sea state,
bathymetry, and water and bottom color. Different remote sensing capabilities exist for the provision of the required information involving one or a
combination of measurement techniques. The study of sea surface state
(wave heights), surface currents and near-surface windspeed using active
microwave systems has been mentioned already in Chapter 7.
10.5.2.1 Satellite Views of Upwelling
The phenomenon of wind-driven coastal upwelling, and the resulting high
biological productivity, is dramatically revealed in the images of sea-surface
temperature and chlorophyll pigments off the west coast of North America
shown in Figure 10.26. During the summer, northerly winds drive surface
waters offshore and induce a vertical circulation that brings cooler water
rich in plant nutrients to the sunlit surface. Microscopic marine algae called
phytoplankton grow rapidly with the abundant nutrients and sunlight, initiating a rich biological food web of zooplankton, fish, mammals, and birds.
Such coastal upwelling regions support important fisheries around the world
and are found off the coasts of Peru and Ecuador, northwest and southwest
Africa, and California and Oregon in the U.S.
Figure 10.26 shows information derived from satellite observations made
on July 8, 1981, during a period of sustained winds from the north.
Figure 10.26(a), derived from data obtained from the AVHRR on NOAA-6,
shows the cool sea-surface temperatures along the coast (purple), especially
noticeable at Cape Blanco, Cape Mendocino, and Point Arena. The sea-surface temperature in the upwelling centers is about 8°C, compared with 14°C further offshore. Several large eddies are visible, and long filaments
of the cooler water meander hundreds of kilometers offshore from the
upwelling bands in the California Current system.
Figure 10.26(b), which shows phytoplankton chlorophyll pigments, was
derived from data obtained from the Coastal Zone Colour Scanner (CZCS)
on Nimbus-7. The CZCS measures the color of the ocean, which shifts from
blue to green as the phytoplankton and their associated chlorophyll pigments become more abundant. Ocean color measurements can be converted
to pigment concentrations with a surprising degree of accuracy and hence
provide a good estimate of biological productivity using data obtained from
space. This CZCS image shows the enhanced production along the coast due
to the upwelling of the cool, high nutrient water. The image also shows the
entrainment of the phytoplankton in the filaments of water being carried
offshore, indicating that coastal production is an important source of biological material for offshore waters, which do not have a ready source of plant
nutrients.
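The conversion from ocean color to pigment concentration is typically an empirical band-ratio relationship: as chlorophyll rises, blue reflectance falls relative to green, so a power law in the blue/green ratio, with coefficients fitted to in-situ measurements, estimates the concentration. The coefficients below are placeholders for illustration, not the CZCS values.

    import numpy as np

    def chlorophyll(r_blue, r_green, a=0.5, b=-1.5):
        # chl (mg/m3) ~ a * (Rblue/Rgreen)**b; b < 0, so blue water -> low chl
        ratio = np.asarray(r_blue, float) / np.asarray(r_green, float)
        return a * ratio ** b

    print(chlorophyll(0.020, 0.010))   # clear offshore water: ratio 2  -> ~0.18
    print(chlorophyll(0.008, 0.010))   # upwelling zone: ratio 0.8      -> ~0.7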
Data for these two scenes were taken over 8 hours apart and exhibit
noticeable differences in cloud patterns (black and white regions) as a result
FIGURE 10.26 (See color insert)
(a) Sea surface temperature determined using data from the AVHRR on the NOAA-6 satellite
and (b) the corresponding image of phytoplankton chlorophyll pigments made using data from
the CZCS on the Nimbus-7 satellite (NASA Goddard Space Flight Center). These computer
processed images were produced by M. Abbot and P. Zion at the Jet Propulsion Laboratory.
They used satellite data received at the Scripps Institution of Oceanography, and computer-processing routines developed at the University of Miami.
of the time difference. Changes in sea-surface temperature and chlorophyll
patterns also occurred but are not so obvious because the ocean moves much
more slowly than the atmosphere. Enhanced levels of chlorophyll pigments
can be seen in regions where upwelling and temperature signals are not
apparent, such as in the southward spreading plume of the Columbia River
(northwest of Portland) and in the outflow of San Francisco Bay. These high
levels are the result of the addition of nutrients from the rivers and estuaries.
However, due to the suspended sediment content in these areas, the satellite
data may be less accurate here than elsewhere.
The California Current had been thought to be a broad, slow current
flowing uniformly to the south. Analyses of satellite data have revealed a
very complex system of swirls, jets, and eddies, having only an average
southerly flow. The two images in Figure 10.26 demonstrate the complexity
of oceanic processes and, especially, show one aspect of the coupling between
atmospheric processes (wind speed and direction), ocean circulations
(upwelling and offshore transport), and chemical and biological processes
involved in marine ecosystems. The images also illustrate the importance of
satellite observation systems for increasing our understanding of large-scale
oceanic processes.
10.5.2.2 Sea-Surface Temperatures
Figure 10.27(a) shows the sea surface temperature and Figure 10.27(b)
shows the chlorophyll concentration of the Gulf Stream on April 18, 2005.
The Gulf Stream current is one of the strongest ocean currents on Earth
and ferries heat from the tropics far into the North Atlantic. The Gulf
Stream pulls away from the coast of the U.S. Southeast around Cape
Hatteras, NC, where the current widens and heads northeastward. In this
region, the current begins to meander more, forming curves and loops with
swirling eddies on both the colder, northwestern side of the stream and
the warmer, southeastern side. The images in Figure 10.27 were made from
data collected by MODIS on NASA’s AQUA satellite. In general, light grey
tones depict cool areas and dark tones depict warmer areas. The cooler
slope and shelf waters along the east coast of the United States are lighter
in tone, whereas the main core of the warmer Gulf Stream appears darker.
Meanders and eddies (both warm and cold) are easily recognizable. Imagery such as this is used daily by oceanographers to plot the course of the
Gulf Stream and its eddies.
In Figure 10.27(a), the warm waters of the Gulf Stream snake from bottom
left to top right, with several deep bends in their path. Indeed, the northernmost of the two deep bends actually loops back on itself, creating a closed-off eddy. On the northern side of the current, cold waters dip southward
into the Gulf Stream’s warmth. Gray areas in both images indicate clouds.
Chlorophyll, shown in Figure 10.27(b), indicates the presence of marine plant
life and is higher along boundaries between the cool and warm waters, where
currents mix up nutrient-rich water from deep in the ocean. Many of the
temperature boundaries along the loops in the Gulf Stream may be seen to
be mirrored in the chlorophyll image with a stripe of lighter blue, indicating
elevated chlorophyll.
Data in the form of analyzed charts are provided daily to the fisheries and
shipping industries, whereas information on the location of the north wall of
the Gulf Stream and the center of each eddy is broadcast daily over the Marine
Radio Network. Because certain species of commercial and game fish are
indigenous to waters of specific temperature, fishermen can save a great deal
of money in fuel costs and time by being able to locate areas of higher potential.
FIGURE 10.27 (a) (See color insert)
(a) Sea surface temperature (°C; color scale −1 to 23) and (b) chlorophyll concentration of the Gulf Stream on April 18, 2005, from Aqua MODIS; the mapped region spans 32°N to 51°N, 54°W to 76°W. (NASA images courtesy Norman Kuring of the MODIS Ocean Team.)
Because of the relatively strong currents associated with the main core and
eddies, commercial shipping firms and sailors take advantage of these currents, or avoid them, and realize savings in fuel and transit time.
A temperature map obtained from SMMR data averaged over 3 days to
provide the sea ice and ocean-surface temperature, spectral gradient ratio,
and brightness temperature over the polar region at 150 km resolution has
already been given in Figure 2.15. Information on the sea-ice concentration,
spectral gradient, sea-surface wind speed, liquid water over oceans, percent
polarization over terrain, and sea-ice multiyear fractions may also be
obtained from the SMMR.
FIGURE 10.27 (b) (See color insert)
(b) Chlorophyll concentration (mg/m³; logarithmic color scale 0.1 to 60) of the Gulf Stream on April 18, 2005, from Aqua MODIS. (NASA images courtesy Norman Kuring of the MODIS Ocean Team.)
10.5.2.3 Monitoring Pollution
Early detection of oil spills accelerates on-site intervention. Satellite imagery is an important tool for monitoring the surface of
the sea and for the early detection of oil slicks. SAR makes the detection of
oil pollution on the sea surface possible day and night and in most weather
conditions. Oil-spill detection by SAR is based on the dampening effect oil has on capillary and short ocean surface waves, which reduces the microwave backscatter from the ocean surface.
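In its simplest form, slick detection is a dark-spot search: estimate the local ocean backscatter and flag connected regions that fall well below it. The 6 dB drop and window size below are illustrative assumptions; operational chains add shape analysis and wind information to reject look-alikes such as local low-wind areas (see Table 10.2).

    import numpy as np
    from scipy import ndimage

    def dark_spots(sigma0_db, drop_db=6.0, window=101):
        # sigma0_db: calibrated backscatter image in dB
        background = ndimage.median_filter(sigma0_db, size=window)
        mask = sigma0_db < background - drop_db   # oil-damped, dark pixels
        labels, n = ndimage.label(mask)           # connected dark regions
        return labels, n                          # candidates, pre-screening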
The ERS SAR instruments have been found to be the most suitable of the
current SARs for oil spill detection. The JERS-1 SAR is not well suited for
detecting oil slicks, whereas Radarsat-1 provides acceptable results when
TABLE 10.2
Oil Slick Detectability by SAR at Different Wind Speeds (ENVISYS)

Wind Speed (ms−1)   Oil Slick Detectability
0                   No backscatter from the undisturbed sea surface, hence no signature of oil slicks.
0–3                 Excellent perturbation of the slightly roughened sea surface, with no impact from the wind on the oil slick itself. A high probability of false positives due to local wind variations.
3–7                 Reduced false positives attributable to local low-wind areas. The oil slick will still be visible in the data and the background more homogeneous.
7–10                Only thick oil visible. The maximum wind strength for slick detection varies with oil type and slick age; thick oil may remain visible at winds stronger than 10 ms−1.
operating in Narrow-ScanSAR-Near-Range mode, as does the Envisat ASAR
in Wide Swath mode. The ERS SAR instruments operate using principles
similar to the Side-Looking Airborne Radar (SLAR) flown in traditional
surveillance aircraft, producing a grayscale image that represents the radar
backscatter from the ocean surface. With a 6-cm wavelength, VV polarization,
and an incidence angle of 23°, the instrument is very sensitive to the presence
of short gravity waves on the ocean because of Bragg scattering. These waves
are dampened by oil slicks. Thus, oil slicks can be seen as dark spots in ERS
SAR images. However, there are some limitations regarding the weather conditions in which oil slicks are able to be identified; in high winds, oil may be
well mixed into the sea, and no surface effect is observed in a SAR image,
whereas at very low winds, no SAR signal is received from the sea and,
accordingly, no slicks can be seen. As a consequence, ERS-1 and ERS-2 may
only be used for oil slick detection at appropriate wind speeds (see Table 10.2).
Satellite coverage is also an issue; because radar satellites are in polar orbits,
their coverage (i.e., their number of passes per day) depends on their latitude,
coverage is good close to the poles (where maritime traffic is low and the likelihood of a pollution event reduced) and decreases with distance from the poles. For example, coverage is twice as good in the
Norwegian Sea (65° N) as in the Mediterranean Sea (35° N). The ERS
satellites have a swath width of 100 km; this means that while the average
number of satellite passes per day is 0.09 in the Norwegian Sea, the corresponding number for the Mediterranean is 0.04 and it is accordingly possible
that a pollution event may go unobserved for some time until it appears in
the satellite swath. In comparison, the Radarsat swath width when operating
in Narrow-ScanSAR-Near-Range mode is 300 km, providing up to 0.54 satellite
passes per day in the Norwegian Sea and 0.27 passes per day in the Mediterranean Sea. In Wide Swath mode, the Envisat ASAR has a swath width of
400 km, resulting in 0.72 satellite passes per day in the Norwegian Sea and 0.36
passes per day in the Mediterranean Sea. The limited satellite coverage must
FIGURE 10.28
Oil spill in the eastern Atlantic off northwest Spain in November 2002 from the tanker Prestige. (ESA.)
be viewed in light of the lifetime of the oil slicks. A small slick might disperse
in hours, whereas a larger slick might have a lifetime of several days.
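The pass rates quoted above translate directly into expected waiting times, since the mean interval between passes is the reciprocal of the passes-per-day figure:

    for name, passes_per_day in [("ERS, Mediterranean Sea", 0.04),
                                 ("ERS, Norwegian Sea", 0.09),
                                 ("Envisat ASAR, Norwegian Sea", 0.72)]:
        print(f"{name}: one pass every {1.0 / passes_per_day:.1f} days on average")
    # ERS over the Mediterranean: ~25 days between passes, far longer than
    # the hours-to-days lifetime of many slicks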
Figure 10.28 shows an Envisat ASAR image as an example of the environmental utility of satellites for detecting and monitoring oil spills. The image
shows an oil spill in the eastern Atlantic Ocean off northwest Spain that
occurred in November 2002 when the tanker Prestige sank with most of its
cargo of 25 million barrels of oil. The tanker was positioned at the head of
the oil slick in the southwest portion of the image. The Prestige started leaking
fuel on November 14, when she encountered a violent storm about 150 miles
off Spain’s Atlantic coast. For several days, the leaking tanker was pulled
away from the shore, but it split in half on November 19. About 1.5 million
barrels of oil escaped, some reaching coastal beaches in the east portion of
the image. The image also shows that when under tow the crippled tanker
was actually carried southward spreading the oil spill into a long “fuel front”
to the west of the coast, exposing almost the entire Atlantic coastline of
Galicia. The towing operation was considered by some to be a mistake
because it did not take into account that the winds in autumn normally blow
from the west, and forecasts indicated westerly (eastward-flowing) winds
over the area for the period.
10.6 Applications to the Cryosphere
Floating ice appears on many of the world’s navigation routes for only part
of the year. However, in the case of the high Arctic regions, it is present for all
of the year. Ice interferes with, or prevents, a wide variety of marine activities,
including ships in transit, offshore resource exploration and transportation,
and offshore commercial fishing. In addition, ice can be a major cause of
damage, resulting in loss of vessels and equipment, loss of life, and major
ecological disasters. Accordingly, ice services are available to marine users for
a wide variety of applications. These include the navigation of vessels through
ice fields, planning of ship movements and routings, planning of inshore and
offshore fishing activities, extension of operational shipping and offshore drilling seasons through forecasts of ice growth and break-up, and assistance with
offshore drilling feasibility, economy, and safety. These ice services have
resulted in the reduction of maritime insurance rates and have contributed to
the design of marine vessels and structures that are economical, yet safe.
Current ice information charts providing daily up-to-date information on the
position of ice edges, concentration boundaries, ice types, floe sizes, and topographic features are prepared from ice data obtained from aircraft, satellites,
ships, and shore stations. Remote sensing techniques are particularly useful
for gathering this ice information, both from aircraft and satellite platforms.
Aircraft are specially equipped with transparent domes for visual observation,
SLARs for all-weather information gathering capability, and laser profilometers for accurate measurement of surface roughness.
Figure 10.29 is a SLAR image of an exploration platform in sheet ice
surrounded by very clearly defined tracks left by an attendant icebreaker.
The icebreaker is permanently on station in support of the exploration platform to break up the moving ice sheet before it interferes with the platform itself. One can fairly easily deduce the prevalent directions of ice flow, and the resupply lanes to the platform are similarly obvious.
An iceberg-detection service was provided for the 2005 Oryx Quest yacht
race and the 2005–2006 round-the-world Volvo Ocean Race by C-CORE, a
Canadian company providing Earth observation based geoinformation services. C-CORE supplied “pre-leg” reconnaissance and detections immediately
ahead of the race route in the areas of the Southern Ocean notorious for
harboring icebergs (see Figure 10.30). Iceberg detection was based on SAR data
acquired by the Envisat ASAR in Wide-Swath Mode, giving a 400-km swath
at a resolution of 150 m, and on data acquired from the Radarsat ScanSAR in
Narrow Mode which provides a 300-km swath at a resolution of 50 m.
FIGURE 10.29
SLAR image of an icebreaker and drilling ship. (Canada Centre for Remote Sensing.)
FIGURE 10.30
SAR-derived iceberg analysis of the Southern Ocean for February 23, 2006, in support of the
2005–2006 round-the-world Volvo Ocean Race. (C-CORE.)
FIGURE 10.31
SAR image used to produce Figure 10.30 showing icebergs in the Southern Ocean (C-CORE).
Icebergs typically have a stronger radar signal return than the open ocean.
After initial processing to remove “cluttering” effects from ocean waves, the
shape, number of pixels, and intensity of the signal returns were analyzed
to identify icebergs and to differentiate between icebergs and ships, which
can appear similar (see Figure 10.31).
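A schematic version of that target analysis is shown below: threshold the scene well above the ocean clutter, label the connected bright regions, and summarize each one’s size and elongation, elongated returns being more likely ships. The 5-sigma threshold and the elongation rule are illustrative assumptions, not C-CORE’s algorithm.

    import numpy as np
    from scipy import ndimage

    def bright_targets(sigma0, k=5.0):
        # Threshold well above the ocean clutter statistics
        labels, n = ndimage.label(sigma0 > sigma0.mean() + k * sigma0.std())
        targets = []
        for idx, box in enumerate(ndimage.find_objects(labels), start=1):
            h = box[0].stop - box[0].start
            w = box[1].stop - box[1].start
            targets.append({
                "pixels": int((labels[box] == idx).sum()),
                "elongation": max(h, w) / max(min(h, w), 1),  # ship-like if large
            })
        return targets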
Figure 10.32 shows the mean monthly surface emissivity for January 1979
measured at 50.3 GHz as derived from the analysis of HIRS-2/MSU data for
the whole globe. Sea ice extent and snow cover can be determined from this
field. The emissivity of snow-free land is typically 0.9 to 1.0, whereas the
emissivity of a water surface ranges from 0.5 to 0.65, increasing with decreasing surface temperature. Mixed ocean-land areas have intermediate values.
The continents are clearly indicated as well as a number of islands, seas, and
lakes. Snow-covered land has an emissivity of 0.85 or less, with emissivity
decreasing with increasing snow depth. The snow line, clearly visible in
North America and Asia, shows good agreement with that determined from
visible imagery. Newly frozen sea ice has an emissivity of 0.9 or more. Note
for example Hudson Bay, the Sea of Okhotsk, the center of Baffin Bay, and
the Chukchi, Laptev, and East Siberian Seas. Mixed sea ice and open water
have emissivities between 0.69 and 0.90. The onset of significant amounts of
sea ice is indicated by the 0.70 contour. Comparisons of this contour in Baffin Bay,
the Denmark Strait, and the Greenland Sea show excellent agreement with
the 40% sea ice extent determined from the analysis of SMMR data from the
same period. Multiyear ice, such as that found in the Arctic Ocean north of the
Beaufort Sea, is indicated by emissivities less than 0.80.
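These threshold values amount to a simple per-pixel decision rule. The short Python function below is a minimal sketch of how the quoted values might be applied to a gridded emissivity field; the function name is hypothetical, the land/ocean mask is assumed to be available separately, and the overlap between the multiyear-ice and mixed-ice ranges is flagged rather than resolved, since in practice ancillary information such as location and season is needed.

    def classify_surface(emissivity, is_ocean):
        """Assign a surface class from 50.3-GHz emissivity using the
        thresholds quoted in the text (all values are fractional).

        The quoted ranges overlap: multiyear ice (below 0.80) and mixed
        ice/water (0.69 to 0.90) share the band 0.70 to 0.80, so the
        label there is ambiguous without ancillary information.
        """
        if is_ocean:
            if emissivity >= 0.90:
                return "newly frozen sea ice"
            if emissivity >= 0.80:
                return "mixed sea ice and open water"
            if emissivity >= 0.70:
                return "multiyear ice or mixed ice/water"
            return "open water (typically 0.50 to 0.65)"
        if emissivity >= 0.90:
            return "snow-free land"
        return "snow-covered land (0.85 or less)"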
FIGURE 10.32 (See color insert)
Mean monthly surface microwave emissivity (percent, on a scale from 55 to 95) for January 1979, derived from HIRS-2/MSU data at 50.3 GHz by Chahine and Susskind (JPL/GSFC, 1982). (NASA.)
Although there is general acceptance that the Earth’s atmosphere is getting
warmer and that the impact of climate change is expected to be amplified
at the poles, it is extremely difficult to predict what effect this “global warming” will have on the polar ice cover. On the one hand, recent years have
already seen record summer reductions in the extent and concentration of sea
ice in the Arctic. In Antarctica, giant icebergs have calved and part of the
Larsen ice shelf has disintegrated (see Figure 10.33). On the other hand,
ships have been trapped for weeks in unusually heavy Antarctic pack
ice conditions.
NASA’s Ice, Cloud, and Land Elevation Satellite (ICESat) was launched at
the start of 2003 to determine the mass balance of the polar ice sheets and
their contributions to global sea level change and to obtain essential data for
the prediction of future changes in ice volume and sea-level. The Geoscience
Laser Altimeter System (GLAS) on ICESat sends short pulses of green and
infrared light 40 times per second and collects the reflected laser light in a
1-m telescope. The elevation of the Earth’s surface and the heights of clouds
and aerosols in the atmosphere are calculated from precise measurements of
the travel time of the laser pulses, together with ancillary measurements of
the satellite’s orbit and instrument orientation. GLAS is the first satellite
instrument to make such vertical measurements of the Earth using an
onboard laser light source.
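Reduced to its essentials, the altimetric calculation converts the measured round-trip travel time of each pulse into a one-way range and subtracts that range from the satellite's orbital height. The Python sketch below shows this geometry; the function name is hypothetical, the 600-km orbital height in the example is only approximate, and the atmospheric, tidal, and pointing corrections applied in the real GLAS processing are neglected.

    import math

    C = 299_792_458.0  # speed of light in vacuum, m/s

    def surface_elevation(round_trip_time_s, satellite_altitude_m,
                          off_nadir_angle_rad=0.0):
        """Elevation of the reflecting surface, from the two-way laser
        travel time.

        one_way_range = c * t / 2 is the distance to the surface; the
        cosine projects a slightly off-nadir range onto the vertical.
        Atmospheric delay and other corrections are neglected here.
        """
        one_way_range = C * round_trip_time_s / 2.0
        return satellite_altitude_m - one_way_range * math.cos(off_nadir_angle_rad)

    # With an assumed 600-km orbital height, a pulse returning after
    # 4.000 ms implies a surface elevation of about 415 m:
    # surface_elevation(0.004000, 600_000.0)  ->  approximately 415.1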
ESA’s CryoSat mission was lost following the failure of the launch on
October 8, 2005. The failure occurred when the flight control system in the
upper stage did not generate the command to shut down the second stage’s
engines. To meet the challenges of measuring ice, CryoSat carried a sophisticated radar altimeter, the SAR Interferometric Radar
Altimeter (SIRAL). Current radar altimeters deliver data only over the sea
and large-scale homogeneous ice surfaces, but SIRAL’s design was intended
to provide detailed views of the irregular sloping edges of land ice as well as
nonhomogeneous ocean ice. CryoSat would have monitored precise changes
in the thickness of the polar ice sheets and of floating sea ice and should have
provided conclusive evidence of the rates at which ice cover may be diminishing.
A CryoSat-2 replacement mission is expected to be launched in March 2009.
FIGURE 10.33
MODIS image of January 13, 2005, showing sea ice in McMurdo Sound breaking into pieces, the Drygalski ice tongue, and the giant B-15A iceberg, 129 km (80 miles) in length (scale bar: 50 km). (NASA/GSFC/MODIS Rapid Response Team.)
10.7 Postscript
Since the launch of Landsat-1 in 1972, a continuous and growing stream of
satellite-derived Earth resources data have become available. It is certain
that tremendous amounts of additional remote sensing data will become
available, but the extent to which the data will actually be analyzed and
interpreted for solving “real-world” problems is somewhat less certain.
There is a shortage of investigations that interpret and utilize the information
to advantage, because investment in the systems that produce the data has
not been matched by a similar investment in the use made of the data.
Although remotely sensed data have been used extensively in research
programs, space-acquired remote sensing data are being utilized much less
in routine Earth resources investigations than was predicted in early optimistic estimates. Indeed, the short history of remote sensing has been one
of transition from a total research orientation to operational and quasioperational programs. Users have developed applications at their own pace,
and the transition of these applications from a research to an operational
orientation has been gradual. However, impediments that once hindered the
acceptance and development of remote sensing, such as the difficulty of
handling the volumes of data that remote sensors can generate and the
limited precision of measurements made by systems far removed from their
targets, have now largely been overcome: advances in computing have
alleviated many of the data-handling problems, and the sensing technologies
employed on spacecraft have greatly improved.
It is important, as far as it is possible, to continue to develop techniques
that are capable of handling and disseminating remotely sensed data in real
time or very-near real time. Experience suggests that only a small fraction
of the data that are archived for use at a later date is ever actually used in
a meaningful way unless the data are readily accessible. The availability of
Internet access to datasets has contributed significantly to the wider exploitation of these data.
The magnitude and complexity of the problems facing the world require
coordinated planning, often in a regional context. Remote sensing has made
it possible for countries to obtain timely resource data to assist in the planning
of their economic and social development. Remote sensing is accordingly of
particular advantage to developing countries, where such resource data may
not have been available previously. For the present, however, most remote
sensing effort is to be found in
those parts of the world where computing and associated information technologies are already well developed. The use of remote sensing data seems
poised to expand substantially, and the data themselves continue to improve in
both quality and diversity. It is to be hoped that this information can lead
to the improvement of the quality of life of all who live on Earth.
References
Alishouse, J.C., Snyder, S., Vongsathorn, J. and Ferraro, R.R., “Determination of
Oceanic Total Precipitable Water from the SSM/I,” IEEE Transactions on Geoscience and Remote Sensing, 28:811, 1990.
Anding, D., and Kauth, R. “Estimation of Sea Surface Temperature from Space,”
Remote Sensing of Environment, 1:217, 1970.
Anding, D., and Kauth, R. “Reply to Comment by G.A. Maul and M. Sidran,” Remote
Sensing of Environment, 2:171, 1972.
Arthus-Bertrand, Y. The Earth from the Air. London: Thames and Hudson, 2002.
Barale, V., and Schlittenhardt, P.M. Ocean Colour: Theory and Applications in a Decade
of CZCS Experience. Dordrecht: Kluwer, 1993.
Barnes, R.A., Barnes, W.L., Esaias, W.E. and McClain, C.R., Prelaunch Acceptance
Report for the SeaWiFS Radiometer, National Aeronautics and Space Administration (NASA) Tech. Memo. 104566, 22. Greenbelt, MD: NASA Goddard Space
Flight Center, 1994.
Barnes, R.A., Eplee, R.E., Schmidt, G.M., Patt, F.S. and McClain, C.R., “Calibration
of SeaWiFS. I. Direct Techniques,” Applied Optics, 40:6682, 2001.
Barrett, E.C. and Curtis, L.F. Introduction to Environmental Remote Sensing. London:
Chapman and Hall, 1982.
Barrick, D.E. “Theory of HF and VHF Propagation across the Rough Sea. 1, The
Effective Surface Impedance for a Slightly Rough Highly Conducting Medium
at Grazing Incidence,” Radio Science, 6:517, 1971a.
Barrick, D.E. “Theory of HF and VHF Propagation across the Rough Sea. 2, Application to HF and VHF Propagation above the Sea,” Radio Science, 6:527, 1971b.
Barrick, D.E. “First-Order Theory and Analysis of MF/HF/VHF Scatter from the
Sea,” IEEE Transactions on Antennas and Propagation, AP-20:2, 1972a.
Barrick, D.E. “Remote Sensing of Sea State by Radar,” in Remote Sensing of the Troposphere. Edited by Derr, V.E. Washington, DC: U.S. Government Printing Office,
1972b.
Barrick, D.E. “The Ocean Waveheight Nondirectional Spectrum from Inversion of
the HF Sea-echo Doppler Spectrum,” Remote Sensing of Environment, 6:201,
1977a.
Barrick, D.E. “Extraction of Wave Parameters from Measured HF Radar Sea-echo
Doppler Spectra,” Radio Science, 12:415, 1977b.
Barrick, D.E., Evans, M. W. and Weber, B. L., “Ocean Surface Currents Mapped by
Radar,” Science, 197:138, 1977.
Barrick, D.E. and Weber, B.L. “On the Nonlinear Theory for Gravity Waves on the
Ocean’s Surface. Part II. Interpretation and Applications,” Journal of Physical
Oceanography, 7:11, 1977.
Bartholomé, E., and Belward, A.S. “GLC2000: A New Approach to Global Land Cover
Mapping from Earth Observation Data,” International Journal of Remote Sensing,
26:1959, 2005.
Barton, I.J. “Satellite-Derived Sea Surface Temperatures: Current Status,” Journal of
Geophysical Research, 100:8777, 1995.
Barton, I.J., and Cechet, R.P. “Comparison and Optimization of AVHRR Sea Surface
Temperature Algorithms,” Journal of Atmospheric and Oceanic Technology, 6:1083,
1989.
Baylis, P.E. “Guide to the Design and Specification of a Primary User Receiving
Station for Meteorological and Oceanographic Satellite Data,” in Remote Sensing
in Meteorology, Oceanography, and Hydrology. Edited by Cracknell, A.P. Chichester,
U.K.: Ellis Horwood, 1981.
Baylis, P.E. “University of Dundee Satellite Data Reception and Archiving Facility,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: D. Reidel, 1983.
Bernstein, R.L. “Sea Surface Temperature Estimation Using the NOAA-6 Satellite
Advanced Very High Resolution Radiometer,” Journal of Geophysical Research,
87C:9455, 1982.
Bowers, D.G., Crook, P.J.E. and Simpson, J.H., “An Evaluation of Sea Surface
Temperature Estimates from the AVHRR,” Remote Sensing and the Atmosphere:
Proceedings of the Annual Technical Conference of the Remote Sensing Society,
Liverpool, December 1982. Reading: Remote Sensing Society, 1982.
Bristow, M., and Nielsen, D. Remote Monitoring of Organic Carbon in Surface Waters.
Report No. EPA-600/4-81-001, Las Vegas, NV: Environmental Monitoring Systems Laboratory, U.S. Environmental Protection Agency, 1981.
Bristow, M., Nielsen, D., Bundy, D. and Furtek, R., “Use of Water Raman Emission
to Correct Airborne Laser Fluorosensor Data for Effects of Water Optical
Attenuation,” Applied Optics, 20:2889, 1981.
Brown, C.E., Fingas, M.F., and Mullin, J.V., “Laser-Based Sensors for Oil Spill Remote
Sensing,” in Advances in Laser Remote Sensing for Terrestrial and Oceanographic
Applications. Edited by Narayanan R.M., and Kalshoven, J.E. Proceedings of SPIE,
3059:120, 1997.
Bullard, R.K. “Land into Sea Does Not Go,” in Remote Sensing Applications in
Marine Science and Technology. Edited by Cracknell, A.P. Dordrecht: D. Reidel,
1983a.
Bullard, R.K. “Detection of Marine Contours from Landsat Film and Tape,” in Remote
Sensing Applications in Marine Science and Technology. Edited by Cracknell, A.P.
Dordrecht: D. Reidel, 1983b.
Bullard, R.K., and Dixon-Gough, R.W. Britain from Space: An Atlas of Landsat Images.
London: Taylor & Francis, 1985.
Bunkin, A.F., and Voliak, K.I. Laser Remote Sensing of the Ocean. New York: Wiley, 2001.
Burrough, P.A. Principles of Geographical Information Systems for Land Resources Assessment. Oxford: Oxford University Press, 1986.
Callison, R.D., and Cracknell, A.P. “Atmospheric Correction to AVHRR Brightness
Temperatures for Waters around Great Britain,” International Journal of Remote
Sensing, 5:185, 1984.
Chahine, M., “Measuring Atmospheric Water and Energy Profiles from Space,” 5th
International Scientific Conference on the Global Energy and Water Cycle,
GEWEX, 20-24 June, 2005.
Chappelle, E.W., Wood, F.M., McMurtrey, J.E. and Newcombe, W.W., “Laser-Induced
Fluorescence of Green Plants. 1: A Technique for Remote Detection of Plant
Stress and Species Differentiation,” Applied Optics, 23:134, 1984.
Chedin, A., Scott, N. A. and Berroir, A., “A Single-Channel Double-Viewing Angle
Method for Sea Surface Temperature Determination from Coincident Meteosat
and TIROS-N Radiometric Measurements,” Journal of Applied Meteorology,
21:613, 1982.
Chekalyuk, A.M., Demidov, A.A., Fadeev, V.V., and Gorbunov, M.Yu., “Lidar Monitoring of Phytoplankton and Dissolved Organic Matter in the Inner Seas of
Europe,” Advances in Remote Sensing, 3:131, 1995.
Chekalyuk, A.M., Hoge, F.E., Wright, C.W. and Swift, R.N., “Short-Pulse Pump-and-Probe Technique for Airborne Laser Assessment of Photosystem II Photochemical Characteristics,” Photosynthesis Research, 66:33, 2000.
Clark, D.K., Gordon, H.R., Voss, K.J., Ge, Y., Broenkow, W. and Trees, C., “Validation
of Atmospheric Correction over the Oceans,” Journal of Geophysical Research,
102:17209, 1997.
Colton, M.C., and Poe, G.A. “Intersensor Calibration of DMSP SSM/I’s: F-8 to F-14,
1987–1997,” IEEE Transactions on Geoscience and Remote Sensing, 37:418, 1999.
Colwell, R.N. Manual of Remote Sensing. Falls Church, VA: American Society of Photogrammetry, 1983.
Cook, A.F. “Investigating Abandoned Limestone Mines in the West Midlands of
England with Scanning Sonar,” International Journal of Remote Sensing, 6:611,
1985.
Cracknell, A.P. Ultrasonics. London: Taylor & Francis, 1980.
Cracknell, A.P. The Advanced Very High Resolution Radiometer. London: Taylor and
Francis, 1997.
Cracknell, A.P. Remote Sensing and Climate Change: Role of Earth Observation.
Berlin: Springer-Praxis, 2001.
Cracknell, A.P., MacFarlane, N., McMillan, K., Charlton, J. A., McManus, J. and
Ulbricht, K. A., ”Remote Sensing in Scotland Using Data Received from Satellites. A Study of the Tay Estuary Region Using Landsat Multispectral Scanning
Imagery,” International Journal of Remote Sensing, 3:113, 1982.
Cracknell, A.P., and Singh, S.M. “The Determination of Chlorophyll-a and Suspended
Sediment Concentrations for EURASEP Test Site, during North Sea Ocean
Colour Scanner Experiment, from an Analysis of a Landsat Scene of 27th June
1977.” Proceedings of the 14th Congress of the International Society of Photogrammetry, Hamburg, International Archives of Photogrammetry, 23(B7):225, 1980.
Crombie, D.D. “Doppler Spectrum of Sea Echo at 13.56 Mc/s,” Nature, 175:681, 1955.
Curlander, J., and McDonough, R. Synthetic Aperture Radar: Systems and Signal Processing. New York: Wiley, 1991.
Cutrona, L.J., Leith, E. N., Porcello, L. J. and Vivian, W. E., “On the Application of
Coherent Optical Processing Techniques,” Proceedings of the IEEE, 54:1026, 1966.
Emery, W.J., Yu, Y., Wick, G.A., Schlüssel, P. and Reynolds, R.W., “Correcting Infrared
Satellite Estimates of Sea Surface Temperature for Atmospheric Water Vapour
Contamination,” Journal of Geophysical Research, 99:5219, 1994.
Eplee, R.E., et al. “Calibration of SeaWiFS. II. Vicarious Techniques,” Applied Optics,
40:6701, 2001.
Evans, R.H., and Gordon, H.R. “Coastal Zone Color Scanner System Calibration: A
Retrospective Examination,” Journal of Geophysical Research, 99:7293, 1994.
Falkowski, P.G., Greene, R. and Geider, R., “Physiological Limitations on Phytoplankton Productivity in the Ocean,” Oceanography, 5:84, 1992.
Friedl, M.A., et al. “Global Land Cover from MODIS: Algorithms and Early Results,”
Remote Sensing of Environment, 83:135–148, 2002.
Gemmill, W.H., Woiceshyn, P.M., Peters, C.A. and Gerald, V.M., “A Preliminary Evaluation of Scatterometer Wind Transfer Functions for ERS-1 Data.” OPC Cont.
No. 97, Camp Springs, MD: NMC, 1994.
Gens, R. “Two-Dimensional Phase Unwrapping for Radar Interferometry: Developments and New Challenges,” International Journal of Remote Sensing, 24:703,
2003.
Gens, R., and van Genderen, J.L. “SAR Interferometry: Issues, Techniques, Applications,” International Journal of Remote Sensing, 17:1803, 1996.
Georges, T. M. “Costs and benefits of using the Air Force over-the-horizon radar
system for environmental research and services,” NOAA Technical Memorandum
ERL ETL-254, 39, 1995.
Georges, T.M., and Harlan, J.A. “New Horizons for Over-the-Horizon Radar?” IEEE
Antennas and Propagation Magazine, 36:14–24, 1994a.
Georges, T.M. and Harlan, J.A. “Military over-the-horizon radars turn to ocean monitoring,” Marine Technology Society Journal, 27, 31, 1994b.
Georges, T.M., and Harlan J.A. “Mapping Surface Currents Near the Gulf Stream
Using the Air Force Over-the-Horizon Radar,” Proc. IEEE Fifth Working Conf.
on Current Measurements, St. Petersburg, FL. Piscataway, NJ: Institute of Electrical
and Electronics Engineers, Inc., 1995.
Georges, T.M., and Harlan, J.A. “The Case for Building a Current-Mapping Overthe-Horizon Radar,” Proceedings of the IEEE Sixth Working Conference on Current
Measurement, March 11-13, 1999, San Diego, Ca. Piscataway, NJ: Institute of
Electrical and Electronics Engineers, Inc., 1999. http://www.etl.noaa.gov/technology/
archive/othr/ieee_curr99.html.
Georges, T.M., Harlan, J.A., Leben, R.R. and Lematta, R.A., “A test of ocean surface-current mapping with over-the-horizon radar,” IEEE Transactions on Geoscience
and Remote Sensing, 36, 101, 1998.
Ghiglia, D.C., and Pritt, M.D. Two-Dimensional Phase Unwrapping: Theory, Algorithms
and Software. John Wiley: New York, 1998.
Gloersen P., et al. “Summary of Results from the First Nimbus-7 SMMR Observations,” Journal of Geophysical Research, 89:5335, 1984.
Goldstein, R.M., Zebker, H.A. and Werner, C.L., “Satellite Radar Interferometry: Two-Dimensional Phase Unwrapping,” Radio Science, 23:713–720, 1988.
Gonzalez, R.C., Woods, R.E. and Eddins, S.L., Digital Image Processing. New York:
Prentice Hall, 2002.
Gordon, H.R. “Removal of Atmospheric Effects from Satellite Imagery of the Oceans,”
Applied Optics, 17:1631, 1978.
Gordon, H.R. “Calibration Requirements and Methodology for Remote Sensors Viewing the Ocean in the Visible,” Remote Sensing of Environment, 22:103, 1987.
Gordon, H.R. “Radiative Transfer in the Atmosphere for Correction of Ocean Color
Remote Sensors,” in Ocean Colour: Theory and Applications in a Decade of CZCS
Experience. Edited by Barale V., and Schlittenhardt, P.M. Dordrecht: Kluwer,
1993.
Gordon, H.R. “In-Orbit Calibration Strategy for Ocean Color Sensors,” Remote Sensing
of the Environment, 63:265, 1998.
Gordon, H.R., and Morel, A. Remote Assessment of Ocean Color for Interpretation of
Satellite Visible Imagery: A Review. New York: Springer, 1983.
Gordon, H.R., and Wang, M. “Retrieval of Water-Leaving Radiance and Aerosol
Optical Thickness over the Oceans with SeaWiFS: A Preliminary Algorithm,”
Applied Optics, 33:443, 1994.
Govindjee. “Sixty-Three Years since Kautski: Chlorophyll-a Fluorescence,” Australian
Journal of Plant Physiology, 22:131, 1995.
Graham, L.C. “Synthetic Interferometer Radar for Topographic Mapping,” Proceedings of the IEEE, 62:763, 1974.
Guymer, T.H. “Remote Sensing of Sea-Surface Winds,” in Remote Sensing Applications
in Meteorology and Climatology. Edited by Vaughan, R.A. Dordrecht: D. Reidel,
1987.
Guymer, T.H., Businger, J. A., Jones, W. L. and Stewart, R. H., “Anomalous Wind
Estimates from the Seasat Scatterometer,” Nature, 294:735, 1981.
Hansen, M.C., and Reed, B. “A Comparison of the IGBP DISCover and University
of Maryland 1-km Global Land Cover Products,” International Journal of Remote
Sensing, 21:1365, 2000.
Henderson, F.M. and Lewis, A.J., Manual of Remote Sensing, volume 2, Principles and
Applications of Imaging Radar, New York: Wiley, 1998.
Hoge, F.E. “Oceanic and Terrestrial Lidar Measurements,” in Laser Remote Chemical
Analysis. Edited by Measures, R.M. New York: Wiley, 1988.
Hoge, F.E., and Swift, R.N. “Oil Film Thickness Measurement Using Airborne Laser-Induced Water Raman Backscatter,” Applied Optics, 19:3269, 1980.
Hoge, F.E., and Swift, R.N. “Absolute Tracer Dye Concentration Using Airborne
Laser-Induced Water Raman Backscatter,” Applied Optics, 20:1191, 1981.
Hoge, F.E., et al. “Water Depth Measurement Using an Airborne Pulsed Neon Laser
System,” Applied Optics, 19:871, 1980.
Hoge, F.E., et al. “Active-Passive Airborne Ocean Color Measurement. 2: Applications,” Applied Optics, 25:48, 1986.
Hoge, F.E., et al. “Radiance-Ratio Algorithm Wavelengths for Remote Oceanic Chlorophyll Determination,” Applied Optics, 26:2082, 1987.
Hoge, F.E., and Swift, R.N. “Oil Film Thickness Using Airborne Laser-Induced Oil
Fluorescence Backscatter,” Applied Optics, 22:3316, 1983.
Hollinger, J.P., Peirce, J.L. and Poe, G.A, “SSM/I Instrument Evaluation,” IEEE
Transactions on Geoscience and Remote Sensing, 28:781, 1990.
Holyer, R.J. “A Two-Satellite Method for Measurement of Sea Surface Temperature,”
International Journal of Remote Sensing, 5:115, 1984.
Hooker, S.B., and McClain, C.R. “The Calibration and Validation of SeaWiFS Data,”
Progress in Oceanography, 45:427, 2000.
Hooker, S.B., Esaias, W.E., Feldman, G.C., Gregg, W.W. and McClain, C.R., An
Overview of SeaWiFS Ocean Color, National Aeronautics and Space Administration (NASA) Tech. Memo. 104566 1. Edited by Hooker, S.B. and Firestone, E.R.
Greenbelt, MD: NASA Goddard Space Flight Center, 1992.
Hotelling, H. “Analysis of a Complex of Statistical Variables into Principal Components,” Journal of Educational Psychology, 24:417, 1933.
Hutchison, K.D., and Cracknell, A.P., Visible Infrared Imager Radiometer Suite, A New
Operational Cloud Imager, Boca Raton: CRC - Taylor and Francis, 2006.
International Atomic Energy Agency. Airborne Gamma Ray Spectrometer Surveying.
Vienna, International Atomic Energy Agency, 1991.
Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective. Upper
Saddle River, NJ: Prentice Hall, 1996.
Jones, W.L., et al. “Seasat Scatterometer: Results of the Gulf of Alaska Workshop,”
Science, 204:1413, 1979.
Jones, W.L., et al. “Evaluation of the Seasat Wind Scatterometer,” Nature, 294:704, 1981.
Kidwell, K.B. NOAA Polar Orbiter Data User’s Guide (TIROS-N, NOAA-6, NOAA-7,
NOAA-8, NOAA-9, NOAA-10, NOAA-11, NOAA-12, NOAA-13, and NOAA-14),
November 1998 revision. MD: U.S. Department of Commerce, 1998.
Kilpatrick, K.A., et al. “Overview of the NASA/NOAA Advanced Very High Resolution Radiometer Pathfinder Algorithm for Sea Surface Temperature and
Associated Matchup Database,” Journal of Geophysical Research, 106:9179, 2001.
Kim, H.H. “New Algae Mapping Technique by the Use of Airborne Laser Fluorosensor,” Applied Optics, 12:1454, 1973.
Kolawole, M.O. Radar Systems, Peak Detection, and Tracking. Oxford: Newnes, 2002.
Kondratyev, K.Y., and Cracknell, A.P., Observing Global Climate Change, London:
Taylor and Francis, 1998.
Krabill, W.B., Collins, J.G., Link, L.E., Swift, R.N., and Butler, M.L., “Airborne Laser
Topographic Mapping Results,” Photogrammetric Engineering and Remote Sensing, 50:685, 1984.
Krabill, W.B., Thomas, R.H., Martin, L.F., Swift, R.N., and Frederick, E.B., “Accuracy
of Airborne Laser Altimetry over the Greenland Ice Sheet,” International Journal
of Remote Sensing, 16:1211, 1995.
Kramer, D.M., and Crofts, A.R. “Control and Measurement of Photosynthetic Electron
Transport in vivo,” in Photosynthesis and the Environment. Edited by Baker, N.R.
Dordrecht, Kluwer, 1996.
Kramer, H.J. Observation of the Earth and its Environment. Berlin: Springer, 2002. An
updated and even more comprehensive version of this book is available on
the website: http://directory.eoportal.org/pres_ObservationoftheEarthandits
Environment.html.
Labs, D., and Neckel, H. “The Absolute Radiation Intensity of the Centre of the Sun
disc in the Spectral Range 3288-12480 Å,” Zeitschrift für Astrophysik, 65:133, 1967.
Labs, D., and Neckel, H. “The Radiation of the Solar Photosphere,” Zeitschrift für
Astrophysik, 69:1, 1968.
Labs, D., and Neckel, H. “Transformation of the Absolute Solar Radiation Data into
the International Practical Temperature Scale of 1968,” Solar Physics, 15:79, 1970.
Lang, M., Lichtenthaler, H.K., Sowinska, M., Heisel, F. and Miehé, J.A., “Fluorescence
Imaging of Water and Temperature Stress in Plant Leaves,” Journal of Plant
Physiology, 148:613, 1996.
Lang, M., Stober, F. and Lichtenthaler, H.K., “Fluorescence Emission Spectra of Plant
Leaves and Plant Constituents,” Radiation and Environmental Biophysics, 30:333,
1991.
Lauritson, L., Nelson, G.J. and Porto, F.W., Data Extraction and Calibration of TIROS-N/NOAA Radiometers, NOAA Technical Memorandum NESS 107. Washington,
DC: U.S. Department of Commerce, 1979.
Lewis, J.K., Shulman, I. and Blumberg, A.F., “Assimilation of Doppler Radar Current
Data into Numerical Ocean Models,” Continental Shelf Research, 18:541, 1998.
Lichtenthaler, H.K., Stober, F. and Lang, M., Laser-Induced Fluorescence Emission
Signatures and Spectral Fluorescence Ratios of Terrestrial Vegetation, Proceedings
of the International Geoscience and Remote Sensing Symposium, Tokyo, 18–21
August 1993, 1317. IEEE: Piscataway, 1993.
Lillesand, T.M., and Kiefer, R.W. Remote Sensing and Image Interpretation. New York:
Wiley, 1987.
Lodge, D.W.S. “The Seasat-1 Synthetic Aperture Radar: Introduction, Data Reception,
and Processing,” in Remote Sensing in Meteorology, Oceanography, and Hydrology.
Edited by Cracknell, A.P. Chichester, U.K.: Ellis Horwood, 1981.
Lohr, U. “Precision Lidar Data and True-Ortho Images,” in Conference Proceedings of
Map Asia 2003, Putra World Trade Centre, Kuala Lumpur, Malaysia, October
13–15, 2003. www.gisdevelopment.net/proceedings/mapasia/2003/index.htm.
Longuet-Higgins, M.S. “On the Statistical Distribution of the Heights of Sea Waves,”
Journal of Marine Research, 11:245, 1952.
Lüdeker, W., Dahn, H-G and Günther, K.P., “Detection of Fungal Infection of Plants
by Laser-Induced Fluorescence: An Attempt to Use Remote Sensing,” Journal
of Plant Physiology, 148:579, 1996.
McClain, C.R. “SeaWiFS Postlaunch Calibration and Validation Overview,” in SeaWiFS Postlaunch Calibration and Validation Analyses, Part 1, National Aeronautics
and Space Administration (NASA) Tech. Memo. 1999-206892 9. Edited by
Hooker, S.B., and Firestone, E.R. Greenbelt, MD: NASA Goddard Space Flight
Center, 2000.
McClain, C.R., et al. SeaWiFS Calibration and Validation Plan, National Aeronautics
and Space Administration (NASA) Tech. Memo. 104566 3. Edited by Hooker,
S.B., and Firestone, E.R. Greenbelt, MD: NASA Goddard Space Flight Center, 1992.
McClain, E.P., Pichel, W.G. and Walton, C.C., “Comparative Performance of AVHRR-Based Multichannel Sea Surface Temperatures,” Journal of Geophysical Research,
90:11587, 1985.
McCord, H.L. “The Equivalence Among Three Approaches to Deriving Synthetic
Array Patterns and Analysing Processing Techniques,” IRE Transactions on
Military Electronics, MIL-6, 116, 1962.
McKenzie, R.L., and Nisbet, R.M. “Applicability of Satellite-Derived Sea-Surface
Temperatures in the Fiji Region,” Remote Sensing of Environment, 12:349, 1982.
McMillin, L.M. “A Method of Determining Surface Temperatures from Measurements of Spectral Radiance at Two Wavelengths,” PhD dissertation, Iowa State
University, Ames, 1971.
McMillin, L.M., and Crosby, D.S. “Theory and Validation of the Multiple Window
Sea Surface Temperature Technique,” Journal of Geophysical Research, 89:3655,
1984.
McMurtrey, J.E., Chappelle, E.W., Kim, M.S., Corp, L.A. and Daughtry, C.S.T., “Blue-Green Fluorescence and Visible-Infrared Reflectance of Corn (Zea mays L.)
Grain for in situ Field Detection of Nitrogen Supply,” Journal of Plant Physiology,
148:509, 1996.
MacPhee, S.B., Dow, A. J., Anderson, N. M. and Reid, D. B., “Aerial Hydrography
Laser Bathymetry and Air Photo Interpretation Techniques for Obtaining
Inshore Hydrography,” XVIth International Congress of Surveyors, Paper 405.3,
Montreux, August 1981.
Maul, G.A. “Application of GOES Visible-Infrared Data to Quantifying Mesoscale
Ocean Surface Temperatures,” Journal of Geophysical Research, 86:8007, 1981.
Maul, G.A., and Sidran, M. “Comment on Anding and Kauth,” Remote Sensing of
Environment, 2:165, 1972.
Muirhead, K., and Cracknell, A.P. “Review Article: Airborne Lidar Bathymetry,”
International Journal of Remote Sensing, 7:597, 1986.
Narayanan, R.M., and Kalshoven, J.E., eds. Proceedings of SPIE: Advances in Laser
Remote Sensing for Terrestrial and Oceanographic Applications, Orlando, FL, April
21–22, 1997, 3059. Bellingham, WA: SPIE-International Society for Optical
Engineering, 1997.
National Aeronautics and Space Administration (NASA). Landsat Data User’s Handbook. Document No. 76SDS4258. Greenbelt, MD: NASA, 1976.
Needham, B.H. “NOAA’s Activities in the Field of Marine Remote Sensing,” in Remote
Sensing Applications in Marine Science and Technology. Edited by Cracknell, A.P.
Dordrecht: D. Reidel, 1983.
Njoku, E.G., and Swanson, L. “Global Measurements of Sea Surface Temperature,
Wind Speed, and Atmospheric Water Content from Satellite Microwave Radiometry,” Monthly Weather Review, 111:1977, 1983.
Offiler, D. “Surface Wind Vector Measurements from Satellites,” in Remote Sensing
Applications in Marine Science and Technology. Edited by Cracknell, A.P. Dordrecht:
D. Reidel, 1983.
O’Neil, R.A., Buga-Bijunas, L. and Rayner, D. M., ”Field Performance of a Laser
Fluorosensor for the Detection of Oil Spills,” Applied Optics, 19:863, 1980.
O’Neil, R.A., Hoge, F. E. and Bristow, M. P. F., “The Current Status of Airborne Laser
Fluorosensing,” Proceedings of 15th International Symposium on Remote Sensing
of Environment, Ann Arbor, MI, May 1981.
O’Reilly, J.E., et al. “Ocean Colour Chlorophyll Algorithms for SeaWiFS,” Journal of
Geophysical Research, 103:24937, 1998.
Prabhakara, G., Dalu, G. and Kunde, V. G., “Estimation of Sea Surface Temperature
from Remote Sensing in the 11- to 13-µm Window Region,” Journal of Geophysical Research, 79:5039, 1974.
Rao, P.K., Holmes, S.J., Anderson, R.K., Winston, J.S. and Lehr, P.E., Weather Satellites:
Systems, Data and Environmental Applications. Boston: American Meteorological
Society, 1990.
Rao, P.K., Smith, W. L. and Koffler, R., “Global Sea Surface Temperature Distribution
Determined from an Environmental Satellite,” Monthly Weather Review, 100:10,
1972.
Rencz, A.N. and Ryerson, R.A., Manual of Remote Sensing, volume 3, Remote Sensing
for the Earth Sciences. New York: Wiley, 1999.
Rhind, D.W., and Mounsey, H. Understanding Geographic Information Systems. London:
Taylor & Francis, 1991.
Rice, S.O. “Reflection of Electromagnetic Waves from Slightly Rough Surfaces,” in
Theory of Electromagnetic Waves. Edited by Kline, M. New York: Interscience,
1951.
Robinson, I.S. Measuring the Oceans from Space: The Principles and Methods of Satellite
Oceanography. Berlin: Springer-Praxis, 2004.
Rogers, A.E.E., and Ingalls, R.P. “Venus: Mapping the Surface Reflectivity by Radar
Interferometry,” Science, 165:797, 1969.
Ryerson, R.A., Manual of Remote Sensing: Remote Sensing of Human Settlements, Falls
Church: ASPRS, 2006.
Sabins, F.F. Remote Sensing: Principles and Interpretation. New York: John Wiley, 1986.
Sathyendranath, S., and Morel, A. “Light Emerging from the Sea: Interpretation and
Uses in Remote Sensing,” in Remote Sensing Applications in Marine Science and
Technology. Edited by Cracknell, A.P. Dordrecht: D. Reidel, 1983.
Saunders, R.W. “Methods for the Detection of Cloudy Pixels,” Remote Sensing and the
Atmosphere: Proceedings of the Annual Technical Conference of the Remote Sensing
Society, Liverpool, December 1982, Reading: Remote Sensing Society, 1982.
Saunders, R.W., and Kriebel, K.T. “An Improved Method for Detecting Clear Sky
Radiances from AVHRR Data,” International Journal of Remote Sensing, 9:123,
1988.
Schneider, S.R., McGinnis, D. F. and Gatlin, J. A., Use of NOAA/AVHRR Visible and
Near-Infrared Data for Land Remote Sensing. NOAA Technical Report NESS 84.
Washington, DC: U.S. Department of Commerce, 1981.
Schroeder, L.C., et al. “The Relationship Between Wind Vector and Normalised Radar
Cross Section Used to Derive Seasat-A Satellite Scatterometer Winds,” Journal
of Geophysical Research, 87:3318, 1982.
Schwalb, A. The TIROS-N/NOAA A-G Satellite Series (NOAA E-J) Advanced TIROS-N
(ATN). NOAA Technical Memorandum NESS 116. Washington, DC: United
States Department of Commerce, 1978.
Shearman, E.D.R. “Remote Sensing of Ocean Waves, Currents, and Surface Winds
by Dekametric Radar,” in Remote Sensing in Meteorology, Oceanography, and
Hydrology. Edited by Cracknell, A.P. Chichester, U.K.: Ellis Horwood, 1981.
Sheffield, C. Earthwatch: A Survey of the Earth from Space. London: Sidgwick and
Jackson, 1981.
Sheffield, C. Man on Earth. London: Sidgwick and Jackson, 1983.
Sidran, M. “Infrared Sensing of Sea Surface Temperature from Space,” Remote Sensing
of the Environment, 10:101, 1980.
Singh, S.M., Cracknell, A. P. and Charlton, J. A., “Comparison between CZCS Data
from 10 July 1979 and Simultaneous in situ Measurements for Southeastern
Scottish Waters,” International Journal of Remote Sensing, 4:755, 1983.
Singh, S.M., Cracknell, A.P. and Spitzer, D., “Evaluation of Sensitivity
Decay of Coastal Zone Colour Scanner (CZCS) Detectors by Comparison with
in situ Near-Surface Radiance Measurements,” International Journal of Remote
Sensing, 6:749, 1985.
Singh, S.M., and Warren, D.E. “Sea Surface Temperatures from Infrared Measurements,” in Remote Sensing Applications in Marine Science and Technology. Edited
by Cracknell, A.P. Dordrecht: D. Reidel, 1983.
Smart, P.L., and Laidlaw, I.M.S. “An Evaluation of Some Fluorescent Dyes for Water
Tracing,” Water Resources Research, 13:15, 1977.
Stephens, G.L., et al. “A Comparison of SSM/I and TOVS Column Water Vapor Data
over the Global Oceans,” Meteorology and Atmospheric Physics, 54:183, 1994.
Stoffelen, A., and Anderson, D. “Ambiguity Removal and Assimilation of Scatterometer Data,” Quarterly Journal of the Royal Meteorological Society, 123:491, 1997.
Sturm, B. “The Atmospheric Correction of Remotely Sensed Data and the Quantitative Determination of Suspended Matter in Marine Water Surface Layers,” in
Remote Sensing in Meteorology, Oceanography, and Hydrology. Edited by Cracknell,
A.P. Chichester, U.K.: Ellis Horwood, 1981.
Sturm, B. “Selected Topics of Coastal Zone Color Scanner (CZCS) Data Evaluation,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: Kluwer, 1983.
Sturm, B. “CZCS Data Processing Algorithms,” in Ocean Colour: Theory and Applications in a Decade of CZCS Experience. Edited by Barale, V., and Schlittenhardt,
P.M. Dordrecht: Kluwer, 1993.
Summers, R.J. Educator’s Guide for Building and Operating Environmental Satellite Receiving
Stations. NOAA Technical Report NESDIS 44. Washington, DC: United States
Department of Commerce, 1989.
Tapley, B.D., et al. “The Gravity Recovery and Climate Experiment: Mission Overview
and Early Results,” Geophysical Research Letters, 31:L09607, 2004.
Teillet, P.M., Slater, P.N., Ding, Y., Santer, R.P., Jackson, R.D. and Moran, M.S., “Three
methods for the absolute calibration of the NOAA AVHRR sensors in flight,”
Remote Sensing of Environment, 31, 105, 1990.
Thekaekara, M.P., Kruger, R. and Duncan, C. H., “Solar Irradiance Measurements
from a Research Aircraft,” Applied Optics, 8:1713, 1969.
Thomas, D.P. “Microwave Radiometry and Applications,” in Remote Sensing in
Meteorology, Oceanography, and Hydrology. Edited by Cracknell, A.P. Chichester,
U.K.: Ellis Horwood, 1981.
Tighe, M.L. “Topographic Mapping from Interferometric SAR Data is Becoming an
Accepted Mapping Technology,” in Conference Proceedings of Map Asia 2003,
Putra World Trade Centre, Kuala Lumpur, Malaysia, October 13–15, 2003.
www.gisdevelopment.net/proceedings/mapasia/2003/index.htm.
Townsend, W.F. “An Initial Assessment of the Performance Achieved by the Seasat-1
Radar Altimeter,” IEEE Journal of Oceanographical Engineering, OE-5:80, 1980.
Turton, D., and Jonas, D. “Airborne Laser Scanning: Cost-Effective Spatial Data,” in
Conference Proceedings of Map Asia 2003, Putra World Trade Centre, Kuala Lumpur,
Malaysia, October 13-15, 2003. www.gisdevelopment.net/proceedings/mapasia/2003/
index.htm.
Ustin, S., Manual of Remote Sensing, volume 4, Remote Sensing for Natural Resource
Management and Environmental Monitoring, New York: Wiley, 2004.
Valerio, C. “Airborne Remote Sensing Experiments with a Fluorescent Tracer,” in
Remote Sensing in Meteorology, Oceanography, and Hydrology. Edited by Cracknell,
A.P. Chichester, U.K.: Ellis Horwood, 1981.
Valerio, C. “Airborne Remote Sensing and Experiments with Fluorescent Tracers,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: Kluwer, 1983.
Vermote, E., and El Saleous, N. “Absolute Calibration of AVHRR Channels 1 and 2,”
in D’Souza, G., Belward, A.S. and Malingreau, J-P (eds) Advances in the Use of
NOAA AVHRR Data for Land Applications. Dordrecht: Kluwer, 1996.
Vermote, E., and Roger, J.C. “Radiative Transfer Modelling for Calibration and
Atmospheric Correction,” in Advances in the Use of NOAA AVHRR Data for Land
Applications. Edited by D’Souza, G. et al. Dordrecht: Kluwer, 1996.
Voigt, S., et al. “Integrating Satellite Remote Sensing Techniques for Detection and
Analysis of Uncontrolled Coal Seam Fires in North China,” International Journal
of Coal Geology, 59:121, 2004.
Wadhams, P., Tucker, W.B., Krabill, W.B., Swift, R.N., Comiso, J.C. and Davis, N.R.,
“Relationship between Sea Ice Freeboard and Draft in the Arctic Basin, and
Implications for Ice Thickness Monitoring,” Journal of Geophysical Research,
97:20325, 1992.
Walton, C.C. “Nonlinear Multichannel Algorithm for Estimating Sea Surface Temperature with AVHRR Satellite Data,” Journal of Applied Meteorology, 27:115,
1988.
Ward, J.F. “Power Spectra from Ocean Movements Measured Remotely by Ionospheric
Radio Backscatter,” Nature, 223:1325, 1969.
Weinreb, M.P., and Hill, M.L. Calculation of Atmospheric Radiances and Brightness Temperatures in Infrared Window Channels of Satellite Radiometers. NOAA Technical
Report NESS 80. Rockville, MD: U.S. Department of Commerce, 1980.
Werbowetzki, A. Atmospheric Sounding User’s Guide. NOAA Technical Report NESS
83. Washington, DC: U.S. Department of Commerce, 1981.
Wilson, H.R. “Elementary Ideas of Optical Image Processing,” in Remote Sensing in
Meteorology, Oceanography, and Hydrology. Edited by Cracknell, A.P. Chichester,
U.K.: Ellis Horwood, 1981.
Wilson, S.B., and Anderson, J.M. “A Thermal Plume in the Tay Estuary Detected by
Aerial Thermography,” International Journal of Remote Sensing, 5:247, 1984.
Woodhouse, I.H. Introduction to Microwave Remote Sensing. Boca Raton: CRC Press, 2006.
Wu, X., et al. “A Climatology of the Water Vapor Band Brightness Temperatures from
NOAA Operational Satellites,” Journal of Climate, 6:1282, 1993.
Wurtele, M.G., Woiceshyn, P.M., Peteherych, S., Borowski, M. and Appleby, W.S.,
“Wind Direction Alias Removal Studies of Seasat Scatterometer-Derived Wind
Fields,” Journal of Geophysical Research, 87:3365, 1982.
Wyatt, L. “The Measurement of Oceanographic Parameters Using Dekametric Radar,”
in Remote Sensing Applications in Marine Science and Technology. Edited by Cracknell,
A.P. Dordrecht: Kluwer, 1983.
Zwick, H.H., Neville, R. A. and O’Neil, R. A., “A Recommended Sensor Package
for the Detection and Tracking of Oil Spills,” Proceedings of an EARSeL ESA
Symposium, ESA SP-167, 77, Voss, Norway, May 1981.
Bibliography
The following references are not specifically cited in the text but are general
references that readers may find useful as sources of further information or
discussion.
Allan, T.D. Satellite Microwave Remote Sensing. Chichester, U.K.: Ellis Horwood, 1983.
Carter, D.J. The Remote Sensing Sourcebook: A Guide to Remote Sensing Products, Services,
Facilities, Publications and Other Materials. London: Kogan Page, McCarta, 1986.
Cracknell, A.P., ed. Remote Sensing in Meteorology, Oceanography, and Hydrology. Chichester,
U.K.: Ellis Horwood, 1981.
Cracknell, A.P., ed. Remote Sensing Applications in Marine Science and Technology. Dordrecht:
Kluwer, 1983.
Cracknell, A.P., et al. (eds.). Remote Sensing Yearbook. London: Taylor & Francis, 1990.
Curran, P.J. Principles of Remote Sensing. New York: Longman, 1985.
Drury, S.A. Image Interpretation in Geology. London: George Allen & Unwin, 1987.
D’Souza, G., et al. Advances in the Use of NOAA AVHRR Data for Land Applications.
Dordrecht: Kluwer, 1996.
Griersmith, D.C., and Kingwell, J. Planet Under Scrutiny: An Australian Remote Sensing
Glossary. Canberra: Australian Government Publishing Service, 1988.
Hall, D.K., and Martinec, J. Remote Sensing of Ice and Snow. London: Chapman and
Hall, 1985.
Houghton, J.T. The Physics of Atmospheres. Cambridge: Cambridge University Press,
1977.
Houghton, J.T., Taylor, F. W. and Rodgers, C. D., Remote Sounding of Atmospheres.
Cambridge: Cambridge University Press, 1984.
Hyatt, E. Keyguide to Information Sources in Remote Sensing. London: Mansell, 1988.
Kennie, T.J.M., and Matthews, M.C. Remote Sensing in Civil Engineering. Glasgow and
London: Surrey University Press, 1985.
Kidder, S.Q. and Vonder Haar, T.H., An Introduction to Satellite Meteorology. San Diego:
Academic Press, 1995.
Lo, C.P. Applied Remote Sensing. Harlow, U.K.: Longman, 1986.
Martin, S., An Introduction to Ocean Remote Sensing. Cambridge: Cambridge University
Press, 2004.
Mason, B.D. “Meteosat: Europe’s Contribution to the Global Weather Observing
System,” in Remote Sensing in Meteorology, Oceanography, and Hydrology. Edited
by Cracknell, A.P. Chichester, U.K.: Ellis Horwood, 1981.
Mather, P.M., Computer Processing of Remotely-Sensed Images: An Introduction. New
York: Wiley, 1999.
Maul, G.A. Introduction to Satellite Oceanography. Dordrecht: Martinus Nijhoff, 1985.
Muller, J.P. (ed.). Digital Image Processing in Remote Sensing. London: Taylor & Francis,
1988.
Murtha, P.A., and Harding, R.A. Renewable Resources Management: Applications of
Remote Sensing. Falls Church, VA: American Society of Photogrammetry and
Remote Sensing, 1984.
Rabchevsky, G.A. Multilingual Dictionary of Remote Sensing and Photogrammetry. Falls
Church, VA: American Society of Photograrnmetry and Remote Sensing, 1984.
Rees, W.G., The Remote Sensing Data Book. Cambridge: Cambridge University Press,
1999.
Rees, W.G., Physical Principles of Remote Sensing. Cambridge: Cambridge University
Press, 2001.
Reynolds, M. “Meteosat’s Imaging Payload,” ESA Bulletin, 11:28, 1977.
Richards, J.A. and Jia, X., Remote Sensing Digital Image Analysis: An Introduction.
Berlin: Springer, 1999.
Schanda, E. (ed.). Remote Sensing for Environmental Sciences. New York: Springer-Verlag, 1976.
Schanda, E. Physical Fundamentals of Remote Sensing. New York: Springer-Verlag, 1986.
Scorer, R.S., Satellite as Microscope. Chichester: Ellis Horwood, 1990.
Shirvanian, D. European Space Directory. Paris: Sevig Press, 1988.
Siegal, B.S., and Gillespie, A.R. Remote Sensing in Geology. New York: John Wiley, 1980.
Slater, P.N. Remote Sensing Optics and Optical Systems. Reading, MA: Addison-Wesley,
1980.
Stewart, R.H. Methods of Satellite Oceanography. Berkeley, CA: University of California
Press, 1985.
Swain, P.H., and Davis, S.M. Remote Sensing: The Quantitative Approach. New York:
McGraw-Hill, 1978.
Szekielda, K.H. Satellite Remote Sensing for Resources Development. London: Graham
& Trotman, 1986.
Taillefer, Y. A Glossary of Space Terms. Paris: European Space Agency, 1982.
Townshend, J.R.G. Terrain Analysis and Remote Sensing. London: George Allen &
Unwin, 1981.
Trevett, J.W. Imaging Radar for Resources Surveys. London: Chapman and Hall, 1986.
Ulaby, F.T., et al. Microwave Remote Sensing: Active and Passive: Volume 1, MRS Fundamentals and Radiometry. Reading, MA: Addison-Wesley, 1981.
Ulaby, F.T., Moore, R. K. and Fung, A. F., Radar Remote Sensing and Surface Scattering
and Emission Theory. Reading, MA: Addison-Wesley, 1982.
Ulaby, F.T., Moore, R.K. and Fung, A.F., From Theory to Applications. London: Artech,
1986.
Verstappen, H.T. Remote Sensing in Geomorphology. Amsterdam: Elsevier, 1977.
Widger, W.K. Meteorological Satellites. New York: Holt, Rinehart & Winston, 1966.
Yates, H.W., and Bandeen, W.R. “Meteorological Applications of Remote Sensing
from Satellites,” Proceedings IEEE, 63:148, 1975.
Appendix
Abbreviations and Acronyms
This list includes many of the abbreviations and acronyms that one is likely
to encounter in the field of remote sensing and is not limited to those used
in this book. The list has been compiled from a variety of sources including:
Planet under scrutiny — an Australian remote sensing glossary, D. C. Griersmith and
J. Kingwell (Canberra: Australia Government Publishing Service) 1988
Keyguide to information sources in remote sensing, E. Hyatt (London and New York:
Mansell) 1988
Multilingual dictionary of remote sensing and photogrammetry, G.A. Rabchevsky (Falls
Church, VA: American Society of Photogrammetry and Remote Sensing) 1984
Microwave remote sensing for oceanographic and marine weather-forecast models, R.A.
Vaughan (Dordrecht: Kluwer) 1990
Measuring the Oceans from Space: The principles and methods of satellite oceanography.
I.S. Robinson (Berlin: Springer-Praxis) 2004
AARS
AATSR
ACRES
ADF
ADP
AESIS
AFC
AGC
AIAA
AIT
Almaz-1
AMI
AMORSA
AMSR
AMSU
APR
Asian Association on Remote Sensing
Advanced Along-Track Scanning Radiometer
Australian Centre for Remote Sensing
Automatic Direction Finder
Automatic Data Processing
Australian Earth Science Information System
Automatic Frequency Control
Automatic Gain Control
American Institute of Aeronautics and
Astronautics
Asian Institute of Technology (Bangkok,
Thailand)
A Russian satellite carrying a radar
Active Microwave Instrument
Atmospheric and Meteorological Ocean
Remote Sensing Assembly
Advanced Microwave Scanning Radiometer
Advanced Microwave Sounding Unit
Airborne Profile Recorder; Automatic
Pattern Recognition
317
9255_A001.fm Page 318 Friday, February 16, 2007 5:03 PM
318
APT
APU
AQUA
ARIES
ARRSTC
ARSC
ASAR
ASCAT
ASPRS
ASSA
ATM
ATS
ATSR
ATSR/M
AU
AVHRR
AVNIR
BARSC
BCRS
BNSC
BOMEX
bpi
CACRS
CASI
CCD
CCRS
CCT
CEOS
CERES
CGMS
CHRIS
CIAF
CIASER
Introduction to Remote Sensing
Automatic Picture Transmission
Auxiliary Power Unit
NASA EOS (q.v.) afternoon overpass satellite
Australian Resource Information and
Environment Satellite
Asian Regional Remote Sensing Training
Centre
Australasian Remote Sensing Conference
Advanced Synthetic Aperture Radar
Advanced scatterometer
American Society of Photogrammetry and
Remote Sensing
Austrian Space and Solar Agency
Airborne Thematic Mapper
Applications Technology Satellite
Along Track Scanning Radiometer
Along Track Scanning Radiometer and
Microwave Sounder
Astronomical Unit
Advanced Very High Resolution Radiometer
Advanced Visible and Near Infrared
Radiometer
British Association of Remote Sensing
Companies
Netherlands Remote Sensing Board
British National Space Centre
Barbados Oceanographic and Meteorological
Experiment
bits per inch
Canadian Advisory Committee on Remote
Sensing
Compact Airborne Spectral Imager
Charge Coupled Device
Canada Centre for Remote Sensing
Computer Compatible Tape
Committee on Earth Observation Systems
Cloud and Earth Radiant Energy Scanner
Coordination Group for Meteorological
Satellites
Compact High-Resolution Imaging
Spectrometer
Centro Interamericano de Fotointerpretación
Centro de Investigación y Aplicación de
Sensores Remotos
9255_A001.fm Page 319 Friday, February 16, 2007 5:03 PM
319
Appendix
CIR
CLIRSEN
CNES
CNEI
CNR
CNRS
COPUOS
COSPAR
CRAPE
CRS
CRSTI
CRTO
CSIRO
CSRE
CW
CZCS
DCP
DCS
DFVLR (DLR)
DMA
DMSP
DN
DoD
Doran
DORIS
DOS
DSIR
DST
EARSeL
EARTHSAT
EBR
ECA
Colour Infrared Film
Centro de Levantamientos Integrados de
Recursos Naturales por Sensores Remotes
Centre National D’Etudes Spatiales
Comisión Nacional de Investigaciones
Espaciales
Consiglio Nationale delle Richerche
Centre National de la Recherche Scientifique
Committee on the Peaceful Uses of Outer
Space (UN)
Committee, on Space Research
Central Register of Aerial Photography for
England
Committee of Remote Sensing (Vietnam)
Canadian Remote Sensing Training Institute
Centre Régional de Télédétection de
Ouagadougou
Commonwealth Scientific and Research
Organisation (Australia)
Centre of Studies in Resources Engineering
Continuous-Wave Radar
Coastal Zone Color Scanner
Data Collection Platform
Data Collection System
Deutsche Forschungs und Versuchsanstalt
für Luft und Raumfahrt e.v. (German
Aerospace Research Establishment)
Defense Mapping Agency
Defense Meteorological Satellite Programme
(USA)
Digital Number
Department of Defense (USA)
Doppler ranging
Doppler Orbitography and Radio-positioning
Integrated by Satellite
Department of Survey (USA)
Department of Scientific and Industrial
Research (New Zealand)
Direct Sounding Transmission
European Association of Remote Sensing
Laboratories
Earth Satellite Corporation
Electron Beam Recorder
Economic Commission for Africa
9255_A001.fm Page 320 Friday, February 16, 2007 5:03 PM
320
ECMWF
EDM
EEC
EECF
ELV
EMSS
Envisat
EOP
EOPAG
EOPP
EOS
EOSAT
EPO
ERB
EREP
ERIM
EROS
ERS-1, -2
ERTS
ESA
ESIAC
ESMR
ESOC
ESRIN
ESSA
ESTEC
ETM
Eumetsat
Introduction to Remote Sensing
European Centre for Medium Range
Weather Forecasts
Electronic Distance-Measuring Device
European Economic Community
EARTHNET ERS-1 Central Facility
Expendable Launch Vehicle
Emulated Multispectral Scanner
European Space Agency’s Earth observation
satellite
Earth Observation Programme
ERS-1 Operation Plan Advisory Group
Earth Observation Preparatory Programme
Earth Observing System
Earth Observation Satellite Company
EARTHNET Programme Office
Earth Radiation Budget
Earth Resources Experimental Package
Environmental Research Institute of
Michigan
Earth Resources Observation Systems
Earth Resources Satellite 1, -2 (European)
Earth Resources Technology Satellite, later
called Landsat
European Space Agency
Electronic Satellite, Image Analysis Console
Electronically Scanned Microwave
Radiometer
European Space Operations Centre
European Space Research Institute
(Headquarters of EARTHNET Office)
Environmental Survey Satellite
European Space Technology Centre
Enhanced Thematic Mapper
European Meteorological Satellite
Organisation
FAO
FGGE
FLI
FOV
Food and Agriculture Organization (UN)
First GARP Global Experiment
Fluorescence Line Imager
Field of View
GAC
GARP
GCP
GCM
GCOS
Global Area Coverage
Global Atmospheric Research Project
Ground Control Point
General Circulation Model
Global Climate Observing System
9255_A001.fm Page 321 Friday, February 16, 2007 5:03 PM
321
Appendix
GDTA
GPS
GRACE
GRD
GRID
GSFC
GTS
Groupement pour Développement de la
Télédétection Aérospatiale
Global Environmental Monitoring System
Geodetic Satellite
US DoD (Department of Defense) altimetry
satellite mission
Geographic Information Systems
Global Line Imager
Geosynchronous Meteorological Satellite
Gulf of Alaska SEASAT Experiment
Geostationary Operational Environmental
Satellite
Global Ocean Flux Study
Global Ozone Monitoring Experiment
Geostationary Operational Meteorological
Satellite
Global Ocean Observing System
Global Observing System
Global Operational Sea Surface Temperature
Computation
Global Positioning System
Gravity Recovery and Climate Experiment
Ground Resolved Distance
Global Resource Information Database
Goddard Space Flight Centre
Global Telecommunications System
HBR
HCMM
HCMR
HDT
HDDT
HIRS
HIRS/2
HRPI
HRPT
HRV
High Bit Rate
Heat Capacity Mapping Mission
Heat Capacity Mapping Radiometer
High Density Tape
High Density Digital Tape
High-Resolution Infrared Radiation Sounder
Second generation HIRS
High Resolution Pointable Imager
High Resolution Picture Transmission
High Resolution Visible Scanner
ICW
ICSU
IFDA
IFOV
IFP
IFR
IGARSS
Interrupted Continuous Wave
International Council of Scientific Unions
Institute für Angewandte Geodäsie
Instantaneous Field Of View
Institut Français du Pétrole
Instrument Flight Regulations
International Geoscience and Remote
Sensing Society
GEMS
GEOS/3
Geosat
GIS
GLI
GMS
GOASEX
GOES
GOFS
GOME
GOMS
GOOS
GOS
GOSSTCOMP
9255_A001.fm Page 322 Friday, February 16, 2007 5:03 PM
322
IGN
IGU
IIRS
IMW
INPE
INSAT
INTERCOSMOS
I/O
IOC
IR
IRS
ISCCP
ISLSCP
ISO
ISPRS
ISRO
ITC
ITOS
ITU
Introduction to Remote Sensing
Instituto Geográfico Nacional/Institut
Géographique National
International Geophysical Union
Indian Institute of Remote Sensing
International Map of the World
Instituto de Pesquisasa Espaciais
Indian Satellite Programme
International Co-operation in Research and
Uses of Outer Space Council
Input/Output
Intergovernmental Oceanographic
Commission
Infrared
Indian Remote Sensing Satellite
International Satellite Cloud Climatology
Project
International Satellite Land Surface
Climatology Project
Infrared Space Observatory
International Society of Photogrammetry
and Remote Sensing
Indian Space Research Organization
International Institute for Aerospace Survey
and Earth Sciences (Nederland)
Improved TOS Series
International Telecommunications Union
JASIN
Jason
JERS-1
JGOFS
JPL
JRC
JSPRS
Joint Air-Sea Interaction Project
Successor to TOPEX/Poseidon (q.v.)
Japanese Earth Resources Satellite
Joint Global Ocean Flux Study
Jet Propulsion Laboratory (Pasadena, CA, USA)
Joint Research Centre (Ispra, Italy)
Japan Society of Photogrammetry and
Remote Sensing
KOMPSAT
Kosmos
Korean Earth Observing Satellite
SSSR/Russian series of Earth observing
satellites
LAC
LADS
Landsat -1, ...
Local Area Coverage
Laser Airborne Depth Sounder
Series of NASA/NOAA land observation
satellites
Indonesian National Institute of Aeronautics
and Space
LAPAN
9255_A001.fm Page 323 Friday, February 16, 2007 5:03 PM
323
Appendix
LARS: Laboratory for Applications of Remote Sensing (Purdue University)
LBR: Laser Beam Recorder/Low Bit Rate
LFC: Large Format Camera
LFMR: Low Frequency Microwave Radiometer
LIDAR: Light Detection and Ranging
LRSA: Land Remote Sensing Assembly
LTF: Light Transfer Function
MERIS: Medium-Resolution Imaging Spectrometer
MESSR: Multispectral Electronic Self-Scanning Radiometer
Meteor: Soviet/Russian series of meteorological satellites
Meteosat: European series of geostationary meteorological satellites
MIIGAiK: Moscow Institute of Engineers for Geodesy, Aerial Surveying and Cartography
MIMR: Multi-Band Imaging Microwave Radiometer
MLA: Multispectral Linear Array
MODIS: MODerate-resolution Imaging Spectroradiometer
MOMS: Modular Opto-electronic Multispectral Scanner
MOP: Meteosat Operational Programme
MOS: Marine Observation Satellite
MOS-1: Marine Observation Satellite (Japanese)
MSG: Meteosat Second Generation
MSL: Mean Sea Level
MSR: Microwave Scanning Radiometer
MSS: Multispectral Scanner
MSU: Microwave Sounding Unit
MTF: Modulation Transfer Function
MTFF: Man Tended Free Flyer
MTI: Moving Target Indicator
NASA: National Aeronautics and Space Administration (USA)
NASDA: National Space Development Agency (Japan)
NASM: National Air and Space Museum
NE∆T: Noise Equivalent Temperature Difference
NESDIS: National Environmental Satellite, Data, and Information Service (USA)
NHAP: National High Altitude Program
Nimbus: A NASA series of experimental satellites
NLR: Nationaal Lucht- en Ruimtevaartlaboratorium
NOAA: National Oceanic and Atmospheric Administration (USA)
NOAA-1, -2, ...: NOAA series of polar-orbiting meteorological satellites
NPOC: National Point of Contact
NPOESS: National Polar-orbiting Operational Environmental Satellite System (US and Europe)
NSCAT: NASA Scatterometer
NWS: National Weather Service (USA)
OAS: Organization of American States
OBRC: On Board Range Compression
OCI: Ocean Colour Imager (on ROCSAT, Taiwan)
OLS: Operational Linescan System
ORSA: Ocean Remote Sensing Assembly
OTV: Orbital Transfer Vehicle
PAF: Processing and Archiving Facilities
PCM: Pulse Code Modulation
PDUS: Primary Data Users Station
Pixel: Picture Element
PM: Phase Modulation
POES: Polar-orbiting Operational Environmental Satellite
Poseidon: A CNES radar altimeter (see TOPEX/Poseidon)
PPI: Plan Position Indicator
PRARE: Precise Range and Range Rate Equipment
PSS: Packet Switching System
QuikScat: NASA satellite for ocean winds
Radarsat: Canadian radar system
RBV: Return Beam Vidicon
RECTAS: Regional Centre for Training in Aerial Surveys
RESORS: Remote Sensing Online Retrieval System (at CCRS)
RESTEC: Remote Sensing Technology Center of Japan
ROCSAT: Earth observing satellite, Taiwan
ROS: Radarsat Optical Scanner
Roshydromet: Russian (formerly Soviet) Service for Hydrometeorology and Environmental Monitoring
RSAA: Remote Sensing Association of Australia
RSPSoc: Remote Sensing and Photogrammetry Society (UK)
SAC: Space Applications Centre
SAF: Servicio Aerofotogramétrico de la Fuerza Aérea
SAR: Synthetic Aperture Radar
SAR-C: C-Band Synthetic Aperture Radar
SASS: Seasat Scatterometer
SBPTC: Société Belge de Photogrammétrie, de Télédétection et de Cartographie
SCAMS: Scanning Microwave Spectrometer
SCATT-2: Scatterometer derived from ERS-1 instrument
SCS: Soil Conservation Service
SDUS: Secondary Data Users Station
Seasat: NASA proof of concept satellite for microwave ocean remote sensing
Seastar: NASA ocean colour satellite
SeaWiFS: Sea-viewing Wide Field of view Sensor
SeaWinds: NASA wind scatterometer
SELPER: Society of Latin American Specialists in Remote Sensing
SEM: Space Environment Monitor
SEVIRI: Spinning Enhanced Visible and Infrared Imager
Shoran: Short range navigation
SIR-A, -B, -C: Shuttle Imaging Radar (exists as -A, -B, -C)
SLAR: Side-Looking Airborne Radar
SLR: Side Looking Radar
SMMR: Scanning Multichannel (Multifrequency) Microwave Radiometer
S/N (SNR): Signal-to-noise ratio
SPARRSO: Space Research and Remote Sensing Organization (Bangladesh)
SPOT: Satellite Pour l'Observation de la Terre
SSC: Swedish Space Corporation
SSM/I: Special Sensor Microwave Imager
SST: Sea Surface Temperature
SSU: Stratospheric Sounding Unit
SUPARCO: Space and Upper Atmosphere Research Commission (Pakistan)
TDRS: Tracking Data Relay System (USA)
TERRA: NASA EOS (q.v.) morning overpass satellite
TIP: TIROS Information Processor
TIROS: Television and Infra-Red Observation Satellite
TIRS: Thermal Infrared Scanner
TM: Thematic Mapper
TOGA: Tropical Oceans Global Atmosphere
TOMS: Total Ozone Mapping Spectrometer
TOPEX/Poseidon: NASA/CNES Ocean Topography Experiment
TOS: TIROS Operational System
TOVS: TIROS Operational Vertical Sounder
TRF: Technical Reference File
TRMM: Tropical Rainfall Measuring Mission
TRSC: Thailand Remote Sensing Center
UHF: Ultra High Frequency
UN: United Nations
UNEP: United Nations Environment Programme
UNESCO: United Nations Educational, Scientific and Cultural Organisation
URSI: International Union of Radio Science
USAF: United States Air Force
USGS: United States Geological Survey
VAS: VISSR Atmospheric Sounder
Vegetation: Visible and infrared scanner on later SPOT satellites
VHF: Very High Frequency
VHRR: Very High Resolution Radiometer
VIIRS: Visible Infrared Imaging Radiometer Suite
VISSR: Visible and Infrared Spin-Scan Radiometer
VTIR: Visible and Thermal Infrared Radiometer
VTPR: Vertical Temperature Profile Radiometer
WAPI: World Aerial Photographic Index
WCDP: World Climate Data Programme
WCRP: World Climate Research Programme
WEFAX: Weather Facsimile
WISI: World Index of Space Imagery
WMO: World Meteorological Organisation
WOCE: World Ocean Circulation Experiment
WWW: World Weather Watch
X-SAR: SAR flown on the Space Shuttle
Index
A
Active Microwave Instrument (AMI), 70, 75
ADEOS (Japanese Advanced Earth
Observing Satellite), 72, 251
Advanced Microwave Scanning Radiometer,
72
Advanced Microwave Sounding Unit
(AMSU), 54, 177
Advanced Spaceborne Thermal Emission
and Reflection Radiometer (ASTER), 272
Advanced TIROS-N (ATN) spacecraft, 50, 51
Advanced TIROS Operational Vertical
Sounder (ATOVS), 54, 177
Advanced Very High Resolution Radiometer
(AVHRR), 54–55, 56–57, 74–76, 78–81, 85
compared to ATSR/M, 71
compared to CZCS, 69
false color composite, 34
instruments, 56, 78
land cover mapping and, 287
sea surface temperature monitoring,
27, 290
spectral channel wavelengths, 55
spectral resolution, 74
thermal-infrared scanner data, 35, 39,
178–188
weather tracking, 10
Agriculture, satellites and, 280–281
Airborne gamma ray spectroscopy, 108–112
Airborne Laser Mine Detection System
(ALMDS), 95
Airborne Oceanographic Laser (AOL),
90–91
Airborne Topographic Manager (ATM), 91
Aircraft, vs. satellites in remote sensing, 7–10
factors in choosing, 7–8
AIRS (Atmospheric InfraRed Sounder),
246–247, 249
ALMDS (Airborne Laser Mine Detection System), 95
Along Track Scanning Radiometer (ATSR/M),
70–71
Altimeters, 42, 129–137
development of, 129
function, 129–130
AMI (Active Microwave Instrument), 70, 75
AMSU (Advanced Microwave Sounding
Unit), 54, 177
AOL, 90–91
Applications Technology Satellite (ATS),
52, 59
Archiving and distribution, 83–87
evolution of, 83–84
Internet’s effect on, 85
media used, 83–84
Argos data collection system, 14–15, 17–20
creation of, 17–18
data distribution, 19
functionality, 18–19
location principle, 19–20
NOAA and, 14
overview, 14–15
segments, 18
Argos PTT, 18, 19
ASTER (Advanced Spaceborne Thermal Emission and Reflection Radiometer), 272
Astro Vision, Inc., 64
Atmospheric correction processes, 162–175
absorption by gases, 173
atmospheric effects on data collection,
162–164
atmospheric transmission, 171–172
calculation of sea-surface temperature, 168
downwelling atmospheric radiance, 167
emitted radiation, 165
reflected radiation, 168–171
scattering by aerosol particles, 174–175
scattering by air molecules, 172
space component, 167
surface radiance, 165–166
total radiance, 167
upwelling atmospheric radiance, 166–167
Atmospheric InfraRed Sounder (AIRS),
246–247, 249
ATM (Airborne Topographic Manager), 91
ATN (Advanced TIROS-N) spacecraft, 50, 51
ATOVS (Advanced TIROS Operational
Vertical Sounder), 54, 177
ATS (Applications Technology Satellite),
52, 59
ATSR/M (Along Track Scanning Radiometer),
70–71
B
Barale, V., 202
Barrick, D.E., 124, 127
Bragg scattering
ground wave systems and, 118–119
radar equation and, 117
sky wave systems and, 120–121, 124,
125, 127
C
C/A (Clear/Acquisition) code, 96–97
Canada Center for Remote Sensing (CCRS),
90, 93
Canadian Hydrographic Service, 93
CCDs (charge-coupled devices), 29–31
CCRS, 90, 93
CGMS (Co-ordination Group for Meteorological Satellites), 59
CHAllenging Minisatellite Payload (CHAMP), 134, 276
CHAMP (CHAllenging Minisatellite Payload), 134, 276
Charge-coupled devices (CCDs), 29–31
CLASS, 84–85
Clear/Acquisition (C/A) code, 96–97
Clutter, 113
Coastal Ocean Dynamics Application Radar
(CODAR), 119–120
Coastal Zone Color Scanner (CZCS)
atmospheric corrections, 196, 198–200
data calibration, 193–194
extraction of marine parameters, 202–203
features, 62
Nimbus-7 and, 69, 290, 291
spatial resolution, 74–75
CODAR (Coastal Ocean Dynamics
Application Radar), 119–120
Communications systems, 12–13
Comprehensive Large Array-Data
Stewardship System (CLASS), 84–85
Co-ordination Group for Meteorological
Satellites (CGMS), 59
Cosmos satellites, 57, 112
COSPAS, 52
Cracknell, A.P., 57
Crombie, D.D., 118
CZCS (Coastal Zone Color Scanner)
atmospheric corrections, 196, 198–200
data calibration, 193–194
extraction of marine parameters, 202–203
features, 62
Nimbus-7 and, 69, 290, 291
spatial resolution, 74–75
D
Data archiving and distribution, 83–87
evolution of, 83–84
Internet’s effect on, 85
media used, 83–84
Data collection systems (DCS), 14–20
Data reception, from remote sensing
satellites, 82–83
differences in facilities for, 83
restrictions on, 82
Defense Meteorological Satellite Program
(DMSP), 57, 262
DEMs (digital elevation models), 158
Density slicing, 209–210
Digital elevation models (DEMs), 158
Digital image displays, 205–208
analogue image processing, 207–208
color images, 207
overview, 205–207
Digital terrain models (DTMs), 100–101,
272–273
DMSP (Defense Meteorological Satellite
Program), 57, 262
Doppler effect, 16, 19
Doppler orbitography and radiopositioning
integrated by satellite (DORIS), 132–133
Doppler radar, in weather forecasting,
243–244
Doppler shifts
ground wave systems and, 118–119
sky wave systems and, 121–122, 126, 127
DORIS (Doppler orbitography and radiopositioning integrated by satellite), 132–133
DTMs (digital terrain models), 100–101,
272–273
E
Earth Radiation Budget Experiment (ERBE),
51, 53, 257
Earth Resources Technology Satellite
(ERTS-1), see Landsat
Earth’s surface, observations of, 11–12
Echo sounding, 45
ECMWF (European Centre for Medium-Range Weather Forecasts), 246
El Niño, 257–260
Electrically Scanning Microwave Radiometer (ESMR), 188
Electromagnetic radiation; see also Planck's radiation formula
geological information from, 268–270
infrared, 25–26
microwave, 27–29
near-infrared, 26
spectrum, 23, 24
visible, 24–25
wavelengths, 22, 24–29
Environmental Protection Agency (EPA), 90
Envisat, 72
EPA, 90
ERBE (Earth Radiation Budget Experiment),
51, 53, 257
ERS (ESA Remote Sensing) satellites, 129,
143, 155, 188, 254
global ozone and, 264
limitation of coverage frequency, 75
overview, 70–71
pollution monitoring, 294–295
surface wind shear, 250
ERTS-1, see Landsat
ESA Remote Sensing (ERS) satellites,
70–71, 75
ESMR (Electrically Scanning Microwave
Radiometer), 188
EUMETSAT (European Organisation for the Exploitation of Meteorological Satellites), 50, 58–59, 63
EUMETSAT Polar System (EPS), 58
European Centre for Medium-Range Weather Forecasts (ECMWF), 246
European Organisation for the Exploitation of Meteorological Satellites (EUMETSAT), 50, 58–59, 63
European Space Agency (ESA), 143, 250
F
FDP (Forecast Demonstration Project),
242–243
Feng-Yun satellites, 59, 64
Forecast Demonstration Project (FDP),
242–243
Forecasting, weather radars in,
243–245
Forestry, satellites and, 281–285
Fourier series, 114, 127
Fourier transforms, 229–239
filters, 236–239
inversion, 230–231
optical analogue, 235–236
G
GAC (global area coverage), 55, 69, 76
Gamma ray spectroscopy, 108–112
Gens, R., 155, 157
Geoid, measurement of, 129–131, 133–134
Geostationary meteorological satellites, 59–64
Geostationary Operational Environmental Satellite (GOES), 52, 59, 61, 162, 284
Geostationary Operational Meteorological Satellite (GOMS), 52, 59, 63
GLI (Global Line Imager), 72
Global area coverage (GAC), 55, 69, 76
Global Line Imager (GLI), 72
Global Ozone Monitoring by Occultation of
Stars (GOMOS), 267
Global positioning system, see GPS
Global Telecommunications System, see GTS
GOCE, 134
GOES (Geostationary Operational Environmental Satellite), 52, 59, 61, 284
GOMOS (Global Ozone Monitoring by
Occultation of Stars), 267
GOMS (Geostationary Operational Meteorological Satellite), 52, 59, 63
Gordon, H.R., 202
GPS (Global Positioning System), 19–20,
91, 96–99
GRACE (Gravity Recovery and Climate
Experiment), 134, 276, 289
Graham, L.C., 155
Gravity Recovery and Climate Experiment
(GRACE), 134, 276, 289
Ground wave systems, 118–120
GTS, 19
H
Haute Resolution Visible (HRV), 67–68, 75
High-Resolution Infrared Radiation Sounder
(HIRS/2), 54, 177, 260
HIRS/2 (High-Resolution Infrared Radiation
Sounder), 54, 177, 260
Hotelling, H., 225–229
HRV (Haute Resolution Visible), 67–68, 75
Hurricane prediction and tracking, 136, 143,
252, 254–256
Hydrology, 287–289
I
IAEA (International Atomic Energy Agency),
111, 112
ICESat, 300
IFOV (instantaneous field of view), 55, 61–63,
67, 168, 188–190
AVHRR and, 53
CZCS and, 69
Landsat and, 65
resolution, 73, 74–75
IFSAR, see interferometric synthetic
aperture radar
IJPS (Initial Joint Polar System), 58
IKONOS, 9, 63, 72, 74, 283
Image enhancement, 211–221
contrast enhancement, 211–215
edge enhancement, 215–219
image smoothing, 219–221
Image processing programs, 210–211
Image processing systems, 209
Improved Limb Atmospheric
Spectrometer-II, 72
Improved TIROS Operational System (ITOS)
satellites, 50
Indian Remote Sensing Satellites (IRS),
68–69
Infrared photography, 1, 2, 4, 5, 10
military reconnaissance and, 4
remote sensing and, 1, 2
weather satellites and, 10
Initial Joint Polar System (IJPS), 58
InSAR, see interferometric synthetic
aperture radar
INSAT, 52, 63
Instantaneous field of view (IFOV), 55, 61–63,
67, 168, 188–190
AVHRR and, 53
CZCS and, 69
Landsat and, 65
resolution, 73, 74–75
Interferometric synthetic aperture radar
(InSAR), 154–158
development of, 155
differential InSAR, 158
overview, 154–155
theory behind, 155–157
topographic mapping, 158
International Atomic Energy Agency (IAEA),
111, 112
International Satellite Cloud Climatology
Project (ISCCP), 257
Ionosphere, 120–122
IRS (Indian Remote Sensing Satellites),
68–69
ISCCP (International Satellite Cloud
Climatology Project), 257
ITOS (Improved TIROS Operational System)
satellites, 50
J
JAMI (Japanese Meteorological Imager),
52, 64
Japanese Advanced Earth Observing Satellite
(ADEOS), 72
Japanese Earth Resources Satellite-1
(JERS-1), 72
Japanese Meteorological Authority, 64
Japanese Meteorological Imager (JAMI),
52, 64
JASIN (Joint Air-Sea Interaction) project,
141–143
JERS-1 (Japanese Earth Resources Satellite-1),
72
Jindalee Operational Radar Network (JORN),
253
Joint Air-Sea Interaction (JASIN) project,
141–143
JORN (Jindalee Operational Radar Network),
253
K
Kramer, H.J., 5, 64
L
LAC (local area coverage), 55–56, 69
LACIE (Large Area Crop Inventory
Experiment), 280
Landsat
data cost and, 9
density slicing, 209
development of, 50
diagram, 30
features, 61
global coverage and, 13
image smoothing, 221
launch, 5
MSS bands, 32
multispectral images, 222–223
overview, 64–66
RBV cameras and, 29
striping, 234, 238
wavelength bands, 65
Laser Environmental Airborne Fluorosensor (LEAF), 90
Laser fluorosensing, 101–108
components, 103–105
overview, 101–103
uses, 105–108
vegetation studies, 105–106
Laser Ultrasonic Remote Sensing of
Oil Thickness (LURSOT), 107
LEAF (Laser Environmental Airborne Fluorosensor), 90
Lidar
bathymetry, 91–96
early airborne systems, 89–91
land surveys and, 96–101
Line-of-sight radar systems, 114–115, 118
Local area coverage (LAC), 55–56, 69
LOWTRAN, 182, 183
LURSOT (Laser Ultrasonic Remote Sensing
of Oil Thickness), 107
M
Manual of Remote Sensing, 3
Marine Optical Buoy (MOBY), 201–202
Medium-Resolution Imaging Spectrometer (MERIS), 72
MERIS (Medium-Resolution Imaging Spectrometer), 72
Meteorological Operational (MetOp) spacecraft, 51, 58–59
Meteorological remote sensing satellites, 50–64
geostationary meteorological satellites, 59, 63–64
polar-orbiting meteorological satellites, 50–59
MetOp (Meteorological Operational) spacecraft, 51, 58–59
Meteosat satellites
atmospheric correction and, 162
ERS and, 70
features, 61
MSG (Meteosat Second Generation), 52, 59
overview, 63–64
spatial resolution, 74–75
Microwave sensors, 38–44
Microwave Sounding Unit (MSU), 54, 177, 260
MOBY (Marine Optical Buoy), 201–202
Moderate-Resolution Imaging Spectroradiometer (MODIS), 72, 284–285
MODIS (Moderate-Resolution Imaging Spectroradiometer), 72, 284–285
Morel, A., 202
MSU (Microwave Sounding Unit), 54, 177, 260
Multifunctional Transport Satellite-1 (MTSAT-1), 52, 64
Multilooking, 153
Multispectral images, 221–224
contrast enhancement, 222
overview, 221–222
visual classification, 223–224
Multispectral scanners (MSSs)
AVHRR and, 54
common features, 61–63
compared to hyperspectral scanners, 72
image creation, 32–34
IRS and, 68–69
Landsat and, 66, 68
OLS and, 57
overview, 30
resolution, 73–74
wavelength bands, 65
N
NASA
airborne laser systems, 90
AOL and, 91
AQUA satellite, 146, 284, 285, 287
geostationary meteorological satellites,
52, 59, 64
ICESat, 300
Landsat and, 64–65, 66, 67
MISR, 250
NSCAT (NASA Scatterometer), 143, 251
polar-orbiting satellites, 51
SeaWinds and, 72
Terra satellite, 272, 284, 285, 287
TOPEX/Poseidon and, 71
National Environmental Satellite, Data, and
Information Service (NESDIS), 84–85,
86–87
National Oceanic and Atmospheric Administration (NOAA), 14, 15, 17–18
POES program, 50
Wave Propagation Laboratory, 119
National Polar-Orbiting Operational
Environmental Satellite System
(NPOESS), 267
National Space Development Agency
(NASDA), 64
NDVI (normalized difference vegetation
index), 195–196
NdYAG (neodymium yttrium aluminum
garnet) lasers, 98
Near-polar orbiting satellites, 6, 13,
14, 15–16
NEMS (Nimbus-E Microwave Spectrometer),
188
Neodymium yttrium aluminum garnet
(NdYAG) lasers, 98
NESDIS, 84–85, 86–87, 284
Nimbus-E Microwave Spectrometer
(NEMS), 188
Nonmeteorological remote sensing satellites,
64–73
ERS, 70–71
IRS, 68–69
Landsat, 64–66
other systems, 71–73
pioneering oceanographic satellites,
69–70
Resurs-F, 68
Resurs-O, 68
SPOT, 67–68
TOPEX/Poseidon, 71
Normalized difference vegetation index
(NDVI), 195–196
Nowcasting, 241–243
NPOESS (National Polar-Orbiting Operational
Environmental Satellite System), 267
NSCAT (NASA Scatterometer), 143, 251
NVAP (Water Vapor Project), 262
O
Ocean Colour and Temperature Scanner, 72
Ocean Surface Current Radar (OSCR),
118, 120
Oceanographic satellites, 69–70, 289–296
monitoring pollution, 294–296
sea-surface temperatures, 291–294
views of upwelling, 289–291
Odin satellite, 267
OLS (Operational Linescan System), 51, 57
OMPS (Ozone Mapping and Profiler Suite),
267
Operational Linescan System (OLS), 51, 57
OSCR (Ocean Surface Current Radar),
118, 120
OTH-B (over-the-horizon backscatter), 252–254
OTHR (over-the-horizon radar), 120, 251–254
Over-the-horizon backscatter (OTH-B), 252–254
Over-the-horizon radar (OTHR), 120
Ozone layer, 122, 267
Ozone Mapping and Profiler Suite
(OMPS), 267
P
Passive microwave scanner data, 188–191
emissivity and, 189–190
radiation polarization, 189
sea surface temperatures, 190
sensitivity of instruments, 188
spatial resolution, 189, 191
weather conditions, 189
Planck radiation formula, 26, 36–37
Polar stratospheric clouds (PSCs), 265
Polar-Orbiting Operational Environmental
Satellites (POESs), 50, 52–53, 55, 57–59
atmospheric sounding capability, 53–54
data reception, 85
primary function, 53
Polarization and Directionality of the Earth’s
Reflectances (POLDER), 72
POLDER (Polarization and Directionality of
the Earth’s Reflectances), 72
Potential field data, geological information
from, 276–277
PRARE (precise range and range-rate
equipment), 132, 133
Precise range and range-rate equipment
(PRARE), 132, 133
PRF (pulse repetition frequency), 150
Principal components transformation, 225–229
formulas, 225–226
multispectral images, 227–229
origins, 225
Pulse repetition frequency (PRF), 150
Q
QuickBird satellites, 9, 63, 72, 74
QuikSCAT, 143, 251
R
Radar data, geological information from,
274–276
Radar equation, 115–117
Radiative transfer equation, 175–178
explanation, 175–177
methods for solving, 177–178
Radiative transfer theory, 160–162
Radiosondes, 161, 177, 183, 186, 190
Range walk, 152
Rao, P.K., 49, 64
Rayleigh scattering
atmospheric corrections and, 199, 200
atmospheric transmission and,
171, 172
scattering by aerosol particles and, 174
thermal-infrared scanner data and, 182
weather radars and, 244
RBV (return beam vidicon) cameras, 29
Reflected radiation, 168–171
Remote sensing
cameras and, 1–2
explanation, 1
transmission of information, 3
Remotely sensed data, application to
atmosphere, 241–268
determination of temperature changes,
246–248
hurricane prediction and tracking, 254–256
measurements of wind speed, 248–254
satellite climatology, 256–268
weather radars in forecasting, 243–245
weather satellites in forecasting, 241–243
Remotely sensed data, application to
biosphere, 278–287
agriculture, 280–281
forestry, 281–285
spatial information systems, 285–287
Remotely sensed data, application to
cryosphere, 296–301
Remotely sensed data, application to
geosphere, 268–278
electromagnetic radiation, 268–270
potential field data, 276–277
radar data, 274–276
sonars, 277
thermal spectrum, 270–274
Remotely sensed data, application to
hydrosphere, 287–296
hydrology, 287–289
oceanography and marine resources,
289–296
Resolution, 73–76
frequency of coverage, 75–76
spatial resolution, 74–75
spectral resolution, 74
Resurs-F and Resurs-O, 68
Return beam vidicon (RBV) cameras, 29
Rice, S.O., 124
Robinson, I.S., 134, 136, 143
Russian Federal Service for Hydrometeorology and Environmental Monitoring (ROSHYDROMET), 51, 57
S
S&RSAT, 51–52
SAR (synthetic aperture radar), 42, 43–44, 114
antenna, 151
image formation, 146–147
multilooking, 153
overview, 145–146
pulse repetition frequency (PRF), 150
range walk, 152
resolution, 148–151
SASS IV, 250, 277
Satellite climatology, 256–268
cloud climatology, 257–260
global moisture, 262–263
global ozone, 263–267
global temperature, 260–262
summary of, 267–268
Satellite laser ranging (SLR), 132–133
Satellite-received radiance, atmospheric
corrections to, 195–202
aquatic applications, 195–196, 197
CZCS and, 196, 198–200
land-based, 195
ozone, 196
reflected radiation, 197
water-leaving radiance, 197–198
SBET (smoothed best-estimated trajectory),
97–98
SBUV (Solar Backscatter Ultraviolet)
instruments, 51–52, 53, 72
Scanning Laser Environmental Airborne Fluorosensor (SLEAF), 90
Scanning Multichannel Microwave Radiometer (SMMR), 164, 188–189, 191
Scatterometers, 43, 138–143
determining wind speed, 138
JASIN (Joint Air-Sea Interaction)
project, 141–143
Seasat, 139–141, 143
surface wind shear and, 250
Schlittenhardt, P.M., 202
SeaWinds, 72, 143
Seasat
altimeter, 134–137
digital image displays, 208
image formation, 153–155
mission objectives, 135
overview, 129, 130–131
scatterometer and, 138–143
spatial resolution, 146
surface wind shear and, 250–251
wind speed and, 137
SeaSonde, 119–120
SeaWiFS
atmospheric corrections, 195–196, 198, 199–200
prelaunch calibration, 193–194
SEM (Space Environment Monitor), 54
Shuttle Radar Topography Mission (SRTM), 272
Side scan sonar, 46–47
Significant wave height (SWH), 135–136
Sky wave systems, 120–128
SLEAF (Scanning Laser Environmental Airborne Fluorosensor), 90
SLR (satellite laser ranging), 132–133
SMMR (Scanning Multichannel Microwave
Radiometer), 164, 188–189, 191
Smoothed best-estimated trajectory (SBET),
97–98
Solar Backscatter Ultraviolet (SBUV)
instruments, 51–52, 53, 72
Sonars, geological information from, 277
Sonic sensors, 44–47
echo sounding, 45
side scan sonar, 46–47
sound navigation and ranging, 44–45
Sound navigation and ranging, 44–45
Space Environment Monitor (SEM), 54
Spatial information systems, 285–287
Special Sensor Microwave Imager (SSM/I),
51, 57, 58, 74
characteristics of, 58
passive microwave radiometry, 164
spectral channels, 74
Special Sensor Microwave Radiometer
(SSMR), 57
SPOT satellites, 9, 62–63, 67–68, 70, 73, 74,
75, 93
spatial resolution, 72
VEGETATION, 63, 68, 287
SRTM (Shuttle Radar Topography Mission), 272
STAR-3i, 158
Stealth aircraft, 121
Stefan-Boltzmann Law, 37
Stratospheric Sounding Unit (SSU), 54
Stretched-Visible Infrared Spin Scan
Radiometer (S-VISSR), 64
S-VISSR (Stretched-Visible Infrared Spin Scan
Radiometer), 64
SWH (significant wave height), 135–136
Synthetic aperture radar (SAR), 42, 43–44,
114; see also interferometric synthetic
aperture radar
antenna, 151
image formation, 146–147
multilooking, 153
overview, 145–146
pulse repetition frequency (PRF), 150
range walk, 152
resolution, 148–151
T
Telemetry stations, 19
Television InfraRed Observation Satellite (TIROS-N) series, 10, 15, 49
Telstar, 12
Temperature changes, determination with satellites, 246–248
Thematic Mapper, 93
Thermal-infrared scanners, 175–188
airborne, 35, 36–38
AVHRR, 179–188
data processing, 179–181
LOWTRAN, 182, 183
radiative transfer equation, 175–178
Thermal-infrared sensors, 35–38
Thermal spectrum, geological information from, 270–274
detecting coal fires, 274
engineering geology, 270–271
geothermal and volcano studies, 271–274
thermal mapping, 270
TIROS-N (Television InfraRed Observation Satellite) series, 10, 15, 49
TIROS-N/NOAA satellites, 50, 78–82
AVHRR and, 78–79
data, 80
instrumentation, 78, 79
reception, 80–82
transmissions, 79–80
weather forecasting, 248
TIROS Operational Vertical Sounder (TOVS), 54, 177–178, 190
atmospheric corrections, 184
weather forecasting, 248
TOMS (Total Ozone Mapping Spectrometer), 70, 264–265
Topographic mapping, 155, 158
Total Ozone Mapping Spectrometer (TOMS), 70, 264–265
TOVS (TIROS Operational Vertical Sounder), 54, 177–178, 190
atmospheric corrections, 184
weather forecasting, 248
TRMM (Tropical Rainfall Measuring Mission), 188, 189
Tropical Rainfall Measuring Mission (TRMM), 188, 189
Tropopause, 122
U
Unmanned satellites, 6
U.S. Office of Naval Research, 1
V
Van Genderen, J.L., 155, 157
VEGETATION, 63, 68, 287
Very High Resolution Radiometer (VHRR), 53, 54, 57, 64
VHRR (Very High Resolution Radiometer), 53, 54, 57, 64
Visible and near-infrared sensors, 29–34
classification scheme for, 29
multispectral scanners, 30–31, 32–34
push-broom scanners, 31
Visible Infrared Spin Scan Radiometer (VISSR), 52, 59
Visible wavelength scanners, 191–204
atmospheric corrections to data, 195–202
data calibration, 191–195
extraction of marine parameters from water-leaving radiance, 202–204
VISSR (Visible Infrared Spin Scan Radiometer), 52, 59
W
Water-leaving radiance, 202–204
Weather radars, in forecasting, 243–245
Weber, B.L., 124
WERA (Wellen Radar), 120
Wind speed, measuring, 248–254
microwave estimations of surface wind shear, 250–251
sky wave radar, 251–254
tropospheric estimations from cloud motion, 248–250
WMO (World Meteorological Organization), 19
World Meteorological Organization (WMO), 19
Wyatt, L., 127, 128
COLOR FIGURE 1.3
An image of the Earth from GOES-E, showing the extent of geostationary satellite coverage.
COLOR FIGURE 2.9
A false color composite of southwest Europe and northwest Africa based on National Oceanic
and Atmospheric Administration AVHRR data. (Processed by DLR for the European Space Agency.)
COLOR FIGURE 2.15
Sea ice and ocean surface temperatures derived from Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR); three-day average data for north and south polar regions (a) April
1979 and (b) June 1979. (NASA Goddard Space Flight Center.)
COLOR FIGURE 10.1
The radar network operated by the U.K. Met Office and the Irish Meteorological Service, Met Éireann. (U.K. Met Office.)
COLOR FIGURE 10.2
The eye of Typhoon Yutu is clearly revealed by weather radar at about 200 km to the south-southwest of Hong Kong in the morning on July 25, 2001. (Hong Kong Observatory.)
COLOR FIGURE 10.4
The AIRS system provides 300,000 soundings per day. (NASA/JPL/AIRS Science Team, Chahine, 2005.)
COLOR FIGURE 10.6
Tropical Storm Katrina is shown here as observed by NASA’s QuikSCAT satellite on August
25, 2005, at 08:37 UTC (4:37 a.m. in Florida). At this time, the storm had 80 km/hour (43 knots)
sustained winds and did not yet appear to have reached hurricane strength. (NASA/JPL/
QuikSCAT Science Team.)
COLOR FIGURE 10.7
North Atlantic wind speed derived from ERS-1 (colored stripes) and OTH data. (Georges et al., 1998.)
COLOR FIGURE 10.8
GOES-12 1-km visible image of Hurricane Katrina over New Orleans at 1300 on August 29,
2005. (NOAA.)
COLOR FIGURE 10.10
(a) Monthly mean cloud amount at midnight in January 1980, a normal year, derived from the
Nimbus-7 Temperature Humidity Infrared Radiometer’s 11.5-µm channel data and (b) in an El
Niño year, 1983. (NASA Goddard Space Flight Center.)
COLOR FIGURE 10.11
Mean day and night surface temperatures derived from satellite sounder data: (a, top) daytime
temperature; (b, center) nighttime temperature; and (c, bottom) mean temperature difference.
(Image provided by Jet Propulsion Laboratory.)
COLOR FIGURE 10.12
Global total column precipitable water for December 2001 obtained from a combination of radiosonde observations, TOVS, and SSM/I data sets. (NASA Langley Atmospheric Science Data Center.)
COLOR FIGURE 10.13
Monthly Southern Hemisphere ozone averages for October, from 1980 to 1991. (Dr. Glenn
Carver, Centre for Atmospheric Science, University of Cambridge, U.K.)
COLOR FIGURE 10.14
Monthly Northern Hemisphere ozone averages for March, from 1996 to 2005. (Dr. Mark Weber,
Institute of Environmental Physics, University of Bremen.)
COLOR FIGURE 10.15
Computer map of rock exposures determined from gamma-ray spectroscopy. (WesternGeco.)
COLOR FIGURE 10.17
Perspective view of Mount Oyama in Japan created by combining image data from the ASTER
with an elevation model from the Shuttle Radar Topography Mission. (NASA/JPL/NIMA.)
COLOR FIGURE 10.18
Stereopair of color-coded temperature maps of Miyake-Jima island on October 5, 1983. (Asia
Air Survey Company.)
COLOR FIGURE 10.22
SASS IV subsurface map showing bathymetry data of the ocean floor from the Sumatran subduction zone near the epicenter of the 26 December 2004 Asian tsunami. (SeaBeam Instruments, Inc., Royal Navy, British Geological Survey, Southampton Oceanography Centre, U.K. Hydrographic Office, Government of Indonesia.)
COLOR FIGURE 10.23
Progressive deforestation in the state of Rondônia, Brazil, as seen on (a) June 19, 1975 (Landsat-2
MSS bands 4, 2, and 1), (b) August 1, 1986 (Landsat-5 MSS bands 4, 2, and 1), and (c) June 22,
1992 (Landsat-4 TM bands 4, 3, and 2). (USGS.)
COLOR FIGURE 10.24
IKONOS satellite images of the Hayman forest fire burning in the Pike National Forest south
of Denver, CO. (Space Imaging.)
COLOR FIGURE 10.25
Simulated Thematic Mapper image of a section of the Ikpikpuk River on the north slope of
Alaska. (NASA Ames Research Centre.)
COLOR FIGURE 10.26
(a) Sea surface temperature determined using data from the AVHRR on the NOAA-6 satellite
and (b) the corresponding image of phytoplankton chlorophyll pigments made using data from
the CZCS on the Nimbus-7 satellite (NASA Goddard Space Flight Center). These computer-processed images were produced by M. Abbot and P. Zion at the Jet Propulsion Laboratory. They used satellite data received at the Scripps Institution of Oceanography, and computer-processing routines developed at the University of Miami.
COLOR FIGURE 10.27
(a) Sea surface temperature and (b) chlorophyll concentration of the Gulf Stream on April 18, 2005. (NASA images courtesy Norman Kuring of the MODIS Ocean Team.)
COLOR FIGURE 10.32
Mean monthly microwave emissivity for January 1979 derived from HIRS/2 and MSU data. (NASA.)