
Pipeline Risk Management Manual
Ideas, Techniques, and Resources
Third Edition

W. Kent Muhlbauer
Gulf Professional Publishing is an imprint of Elsevier
200 Wheeler Road, Burlington, MA 01803, USA
Linacre House, Jordan Hill, Oxford OX2 8DP, UK
Copyright © 2004, Elsevier Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher.

Permissions may be sought directly from Elsevier's Science & Technology Rights Department in Oxford, UK: phone: (+44) 1865 843830, fax: (+44) 1865 853333, e-mail: [email protected]. You may also complete your request on-line via the Elsevier homepage, by selecting "Customer Support" and then "Obtaining Permissions."
Recognizing the importance of preserving what has been written, Elsevier prints its books on acid-free paper whenever possible.
Library of Congress Cataloging-in-Publication Data
Muhlbauer, W. Kent.
Pipeline risk management manual : a tested and proven system to prevent loss and assess risk / by W. Kent
Muhlbauer.-3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 0-7506-7579-9
1. Pipelines-Safety measures-Handbooks, manuals, etc. 2.
Pipelines-Reliability-Handbooks, manuals, etc. I. Title.
TJ930.M84 2004
621.8'672--dc22
2003058315
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.
ISBN: 0-7506-7579-9
For information on all Gulf Professional Publishing publications visit our Web site at
03 04 05 06 07 08 09   10 9 8 7 6 5 4 3 2

Printed in the United States of America
Contents

Risk Assessment at a Glance
Chapter 1   Risk: Theory and Application
Chapter 2   Risk Assessment Process
Chapter 3   Third-party Damage Index
Chapter 4   Corrosion Index
Chapter 5   Design Index
Chapter 6   Incorrect Operations Index
Chapter 7   Leak Impact Factor
Chapter 8   Data Management and Analyses
Chapter 9   Additional Risk Modules
Chapter 10  Service Interruption Risk
Chapter 11  Distribution Systems
Chapter 12  Offshore Pipeline Systems
Chapter 13  Stations and Surface Facilities
Chapter 14  Absolute Risk Estimates
Chapter 15  Risk Management
Appendix A  Typical Pipeline Products
Appendix B  Leak Rate Determination
Appendix C  Pipe Strength Determination
Appendix D  Surge Pressure Calculations
Appendix E  Sample Pipeline Risk Assessment Algorithms
Appendix F  Receptor Risk Evaluation
Appendix G  Examples of Common Pipeline Inspection and Survey Techniques
Acknowledgments

As in the last edition, the author wishes to express his gratitude to the many practitioners of formal pipeline risk management who have improved the processes and shared their ideas. The author also wishes to thank reviewers of this edition who contributed their time and expertise to improving portions of this book, most notably Dr. Karl Muhlbauer and Mr. Bruce Beighle.
Introduction

The first edition of this book was written at a time when formal risk assessments of pipelines were fairly rare. To be sure, there were some repair/replace models out there, some maintenance prioritization schemes, and the occasional regulatory approval study, but, generally, those who embarked on a formal process for assessing pipeline risks were doing so for very specific needs and were not following a prescribed methodology.
The situation is decidedly different now. Risk management is
increasingly being mandated by regulations. A risk assessment
seems to be the centerpiece of every approval process and every
pipeline litigation. Regulators are directly auditing risk assessment programs. Risk management plans are increasingly coming under direct public scrutiny.
While risk has always been an interesting topic to many, it is
also often clouded by preconceptions of requirements of huge
databases, complex statistical analyses, and obscure probabilistic techniques. In reality, good risk assessments can be
done even in a data-scarce environment. This was the major premise of the earlier editions. The first edition even had a certain sense of being a risk assessment cookbook: "Here are the ingredients and how to combine them." Feedback from readers
indicates that this was useful to them.
Nonetheless, there also seems to be an increasing desire for
more sophistication in risk modeling. This is no doubt the result
of more practitioners than ever before, pushing the boundaries, as well as the more widespread availability of data and
the more powerful computing environments that make it easy
and cost effective to consider many more details in a risk
model. Initiatives are currently under way to generate more
complete and useful databases to further our knowledge and to
support detailed risk modeling efforts.
Given this as a backdrop, one objective of this third edition is
to again provide a simple approach to help a reader put together
some kind of assessment tool with a minimum of aggravation.
However, the primary objective of this edition is to provide a
reference book for concepts, ideas, and maybe a few templates
covering a wider range of pipeline risk issues and modeling
options. This is done with the belief that an idea and reference
book will best serve the present needs of pipeline risk managers
and anyone interested in the field.
While I generally shy away from technical books that get too
philosophical and are weak in specific how-to’s, it is just simply
not possible to adequately discuss risk without getting into
some social and psychological issues. It is also doing a disservice to the reader to imply that there is only one correct risk management approach. Just as an engineer will need to engage in a
give-and-take process when designing the optimum building or
automobile, so too will the designer of a risk assessment/management process.
Those embarking on a pipeline risk management process
should realize that, once some basic understanding is obtained,
they have many options in specific approach. This should be
viewed as an exciting feature, in my opinion. Imagine how mundane the practice of engineering would be if there were little variation in problem solving. So, my advice to the beginner
is simple: arm yourself with knowledge, approach this as you
would any significant engineering project, and then enjoy the
journey!
As with previous editions of this book, the chief objective of
this edition is to make pipelines safer. This is hopefully accomplished by enhancing readers’ understanding of pipeline risk
issues and equipping them with ideas to measure, track, and
continuously improve pipeline safety.
We in the pipeline industry are obviously very familiar with
all aspects of pipelining. This familiarity can diminish our sensitivity to the complexity and inherent risk of this undertaking.
The transportation of large quantities of sometimes very hazardous products over great distances through a pressurized
pipeline system, often with zero-leak tolerance, is not a trivial
thing. It is useful to occasionally step back and re-assess what
a pipeline really is, through fresh eyes. We are placing a very
complex, carefully engineered structure into an enormously
variable, ever-changing, and usually hostile environment. One
might reply, “complex!? It’s just a pipe!” But the underlying
technical issues can be enormous. Metallurgy, fracture mechanics, welding processes, stress-strain reactions, soil-interface mechanics, mechanical properties of the coating as well as their critical electrochemical properties, soil chemistry, every conceivable geotechnical event creating a myriad of forces and
loadings, sophisticated computerized SCADA systems, and
we’re not even to rotating equipment or the complex electrochemical reactions involved in corrosion prevention yet! A
pipeline is indeed a complex system that must coexist with all
of nature’s and man’s frequent lack of hospitality.
The variation in this system is also enormous. Material and
environmental changes over time are of chief concern. The
pipeline must literally respond to the full range of possible
ambient conditions of today as well as events of months and
years past that are still impacting water tables, soil chemistry,
land movements, etc. Out of all this variation, we are seeking
risk 'signals.' Our measuring of risk must therefore identify and
properly consider all of the variables in such a way that we can
indeed pick out risk signals from all of the background ‘noise’
created by the variability.
Underlying most meanings of risk is the key issue of 'probability.' As is discussed in this text, probability expresses a degree of belief. This is the most compelling definition of probability because it encompasses statistical evidence as well as interpretations and judgment. Our beliefs should be firmly rooted in solid, old-fashioned engineering judgment and reasoning. This does not mean ignoring statistics; rather, it means using data appropriately: for diagnosis, to test hypotheses, to uncover new information. Ideally, the degree of belief would also be determined in some consistent fashion so that any two estimators would arrive at the same conclusion given the same evidence.
This is the purpose of this book: to provide frameworks in which a given set of evidence consistently leads to a specific degree of belief regarding the safety of a pipeline.
Some of the key beliefs underpinning pipeline risk management, in this author's view, include:

- Risk management techniques are fundamentally decision support tools.
- We must go through some complexity in order to achieve "intelligent simplification."
- In most cases, we are more interested in identifying locations where a potential failure mechanism is more aggressive, rather than predicting the length of time the mechanism must be active before failure occurs.
- Many variables impact pipeline risk. Among all possible variables, choices are required to strike a balance between a comprehensive model (one that covers all of the important stuff) and an unwieldy model (one with too many relatively unimportant details).
- Resource allocation (or reallocation) towards reduction of failure probability is normally the most effective way to practice risk management.

(The complete list can be seen in Chapter 2.)
The most critical belief underlying this book is that all available information should be used in a risk assessment. There are very few pieces of collected pipeline information that are not useful to the risk model. The risk evaluator should expect any piece of information to be useful until he absolutely cannot see any way that it can be relevant to risk or decides its inclusion is not cost effective.

Any and all experts' opinions and thought processes can and should be codified, thereby demystifying their personal assessment processes. The experts' analysis steps and logic processes can be duplicated to a large extent in the risk model. A very detailed model should ultimately be smarter than any single individual or group of individuals operating or maintaining the pipeline, including that retired guy who knew everything. It is often useful to think of the model building process as 'teaching the model' rather than 'designing the model.' We are training the model to 'think' like the best experts and giving it the collective knowledge of the entire organization and all the years of record-keeping.
Changes from Previous Editions
This edition offers some new example assessment schemes for
evaluating various aspects of pipeline risk. After several years
of use, some changes are also suggested for the model proposed
in previous editions of this book. Changes reflect the input of
pipeline operators, pipeline experts, and changes in technology.
They are thought to improve our ability to measure pipeline
risks in the model. Changes to risk algorithms have always been
anticipated, and every risk model should be regularly reviewed
in light of its ability to incorporate new knowledge and the
latest information.
Today's computer systems are much more robust than in past years, so short-cuts, very general assumptions, and simplistic approximations to avoid costly data integrations are less justifiable. It was more appropriate to advocate a very simple approach when practitioners were picking this up only as a 'good thing' to do, rather than as a mandated and highly scrutinized activity. There is certainly still a place for the simple risk assessment. As with the most robust approach, even the simple techniques support decision making by crystallizing thinking, removing much subjectivity, helping to ensure consistency, and generating a host of other benefits. So, the basic risk assessment model of the second edition is preserved in this edition, although it is tempered with many alternative and supporting evaluation ideas.
The most significant changes for this edition are seen in the
Corrosion Index and Leak Impact Factor (LIF). In the former,
variables have been extensively re-arranged to better reflect
those variables’ relationships and interactions. In the case of
LIF, the math by which the consequence variables are com-
bined has been made more intuitive. In both cases, the variables
to consider are mostly the same as in previous editions.
As with previous editions, the best practice is to assess major risk variables by evaluating and combining many lesser variables, generally available from the operator's records or public domain databases. This allows assessments to benefit from direct use of measurements or at least qualitative evaluations of several small variables, rather than a single, larger variable, thereby reducing subjectivity.
For those who have risk assessment systems in place based on previous editions, the recommendation is simple: retain your current model and all its variables, but build a modern foundation beneath those variables (if you haven't already done so). In other words, bolster the current assessments with more complete consideration of all available information. Work to replace the high-level assessments of 'good,' 'fair,' and 'poor,' with evaluations that combine several data-rich subvariables such as pipe-to-soil potential readings, house counts, ILI anomaly indications, soil resistivities, visual inspection results, and all the many other measurements taken. In many cases, this allows your 'as-collected' data and measurements to be used directly in the risk model, with no extra interpretation steps required. This is straightforward and will be a worthwhile effort, yielding gains in efficiency and accuracy.
As risks are re-assessed with new techniques and new information, the results will often be very similar to previous assessments. After all, the previous higher-level assessments were no doubt based on these same subvariables, only informally. If the new processes do yield different results than the previous assessments, then some valuable knowledge can be gained. This new knowledge is obtained by finding the disconnect, the basis of the differences, and learning why one of the approaches was not 'thinking' correctly. In the end, the risk assessment has been improved.
The user of this book is urged to exercise judgment in the use of the data presented here.
Neither the author nor the publisher provides any guarantee, expressed or implied, with regard to the general or specific application of the data, the range of errors that may be associated with any of the data, or the appropriateness of using any of the data. The author accepts no responsibility for damages, if any, suffered by any reader or user of this book as a result of decisions made or actions taken on information contained herein.
Risk Assessment at a Glance

The following is a summary of the risk evaluation framework described in Chapters 3 through 7. It is one of several approaches to basic pipeline risk assessment in which the main consequences of concern are related to public health and safety, including environmental considerations. Regardless of the risk assessment methodology used, this summary can be useful as a checklist to ensure that all risk issues are addressed.
Figure 0.1  Risk assessment model flowchart.
Relative Risk Rating = (Index Sum) ÷ (Leak Impact Factor)

Index Sum = [(Third-party) + (Corrosion) + (Design) + (Incorrect Operations)]
Third-party Index
Minimum Depth of Cover .................... 0-20 pts
Activity Level ............................ 0-20 pts
Aboveground Facilities .................... 0-10 pts
Line Locating ............................. 0-15 pts
Public Education .......................... 0-15 pts
Right-of-way Condition .................... 0-5 pts
Patrol .................................... 0-15 pts
Total ..................................... 0-100 pts
Corrosion Index
A. Atmospheric Corrosion .................. 0-10 pts
   A1. Atmospheric Exposure ............... 0-5 pts
   A2. Atmospheric Type ................... 0-2 pts
   A3. Atmospheric Coating ................ 0-3 pts
B. Internal Corrosion ..................... 0-20 pts
   B1. Product Corrosivity ................ 0-10 pts
   B2. Internal Protection ................ 0-10 pts
C. Subsurface Corrosion ................... 0-70 pts
   C1. Subsurface Environment ............. 0-20 pts
       Soil Corrosivity ................... 0-15 pts
       Mechanical Corrosion ............... 0-5 pts
   C2. Cathodic Protection ................ 0-25 pts
       Effectiveness ...................... 0-15 pts
       Interference Potential ............. 0-10 pts
   C3. Coating ............................ 0-25 pts
       Fitness ............................ 0-10 pts
       Condition .......................... 0-15 pts
Total ..................................... 0-100 pts
Design Index
A. Safety Factor .......................... 0-35 pts
B. Fatigue ................................ 0-15 pts
C. Surge Potential ........................ 0-10 pts
D. Integrity Verifications ................ 0-25 pts
E. Land Movements ......................... 0-15 pts
Total ..................................... 0-100 pts
Incorrect Operations Index
A. Design ................................. 0-30 pts
   A1. Hazard Identification .............. 0-4 pts
   A2. MAOP Potential ..................... 0-12 pts
   A3. Safety Systems ..................... 0-10 pts
   A4. Material Selection ................. 0-2 pts
   A5. Checks ............................. 0-2 pts
B. Construction ........................... 0-20 pts
   B1. Inspection ......................... 0-10 pts
   B2. Materials .......................... 0-2 pts
   B3. Joining ............................ 0-2 pts
   B4. Backfill ........................... 0-2 pts
   B5. Handling ........................... 0-2 pts
   B6. Coating ............................ 0-2 pts
C. Operation .............................. 0-35 pts
   C1. Procedures ......................... 0-7 pts
   C2. SCADA/Communications ............... 0-3 pts
   C3. Drug Testing ....................... 0-2 pts
   C4. Safety Programs .................... 0-2 pts
   C5. Surveys/Maps/Records ............... 0-5 pts
   C6. Training ........................... 0-10 pts
   C7. Mechanical Error Preventers ........ 0-6 pts
D. Maintenance ............................ 0-15 pts
   D1. Documentation ...................... 0-2 pts
   D2. Schedule ........................... 0-3 pts
   D3. Procedures ......................... 0-10 pts
Total ..................................... 0-100 pts

Total Index Sum ........................... 0-400 pts
Leak Impact Factor
Leak Impact Factor = Product Hazard (PH) x Leak Volume (LV) x Dispersion (D) x Receptors (R)
A. Product Hazard (Acute + Chronic Hazards)  0-22 pts
   A1. Acute Hazards ...................... 0-12 pts
       a. Nf .............................. 0-4 pts
       b. Nr .............................. 0-4 pts
       c. Nh .............................. 0-4 pts
   A2. Chronic Hazard (RQ) ................ 0-10 pts
B. Leak Volume (LV)
C. Dispersion (D)
D. Receptors (R)
   D1. Population Density (Pop)
   D2. Environmental Considerations (Env)
   D3. High-Value Areas (HVA)
   Total (Pop + Env + HVA)
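The arithmetic of this framework can be sketched in a few lines of code. The snippet below is only an illustrative sketch: the variable names and example scores are invented for demonstration and are not taken from any actual assessment. It sums the four index scores (each 0-100 points, where more points mean more safety) and divides by the leak impact factor, so a larger LIF (worse potential consequences) lowers the relative risk rating.

```python
# Illustrative sketch of the relative-risk arithmetic (example scores invented).
# Each index is scored 0-100 points; more points indicate more safety.
index_scores = {
    "third_party": 55,
    "corrosion": 62,
    "design": 70,
    "incorrect_operations": 58,
}

index_sum = sum(index_scores.values())  # 0-400 pts total

# Leak Impact Factor = Product Hazard x Leak Volume x Dispersion x Receptors
product_hazard = 10   # acute + chronic hazard score (0-22 pts)
leak_volume = 1.2     # relative release-volume score
dispersion = 0.8      # relative dispersion score
receptors = 1.5       # population + environment + high-value areas

leak_impact_factor = product_hazard * leak_volume * dispersion * receptors

# A higher index sum (more safety points) and a lower LIF (milder
# consequences) both yield a higher, i.e., safer, relative risk rating.
relative_risk = index_sum / leak_impact_factor
print(round(relative_risk, 1))
```

Because the rating is relative, only comparisons between segments scored by the same model are meaningful, not the absolute numbers.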
Risk: Theory and Application

I.  The science and philosophy of risk
    Embracing paranoia
    The scientific method
    Modeling
II. Basic concepts
    Hazard
    Risk
    Failure
    Probability
    Frequency, statistics, and probability
    Failure rates
    Consequences
    Risk assessment
    Risk management
    Experts
III. Uncertainty
IV. Risk process: the general steps
I. The science and philosophy of risk

One of Murphy's¹ famous laws states that "left to themselves, things will always go from bad to worse." This humorous prediction is, in a way, echoed in the second law of thermodynamics. That law deals with the concept of entropy. Stated simply, entropy is a measure of the disorder of a system. The thermodynamics law states that "entropy must always increase in the universe and in any hypothetical isolated system within it" [34]. Practical application of this law says that to offset the effects of entropy, energy must be injected into any system. Without adding energy, the system becomes increasingly disordered.

¹ Murphy's laws are famous parodies on scientific laws and life, humorously pointing out all the things that can and often do go wrong in science and life.
Although the law was intended to be a statement of a scientific property, it was seized upon by "philosophers" who defined system to mean a car, a house, economics, a civilization, or anything that became disordered. By this extrapolation, the law explains why a desk or a garage becomes increasingly cluttered until a cleanup (injection of energy) is initiated. Gases diffuse and mix in irreversible processes, unmaintained buildings eventually crumble, and engines (highly ordered systems) break down without the constant infusion of maintenance energy.
Here is another way of looking at the concept: “Mother
Nature hates things she didn’t create.” Forces of nature seek to
disorder man’s creations until the creation is reduced to the
most basic components. Rust is an example: metal seeks to
disorder itself by reverting to its original mineral components.
If we indulge ourselves with this line of reasoning, we may soon conclude that pipeline failures will always occur unless an appropriate type of energy is applied. Transport of products in a closed conduit, often under high pressure, is a highly ordered, highly structured undertaking. If nature indeed seeks increasing disorder, forces are continuously at work to disrupt this structured process. According to this way of thinking, a failed pipeline with all its product released into the atmosphere or into the ground, or equipment and components decaying and reverting to their original premanufactured states, represent the less ordered, more natural state of things.
These quasi-scientific theories actually provide a useful way
of looking at portions of our world. If we adopt a somewhat
paranoid view of forces continuously acting to disrupt our
creations, we become more vigilant. We take actions to offset
those forces. We inject energy into a system to counteract the
effects of entropy. In pipelines, this energy takes the forms of
maintenance, inspection, and patrolling; that is, protecting the
pipeline from the forces seeking to tear it apart.
After years of experience in the pipeline industry, experts
have established activities that are thought to directly offset
specific threats to the pipeline. Such activities include patrolling,
valve maintenance, corrosion control, and all of the other
actions discussed in this text. Many of these activities have
been mandated by governmental regulations, but usually only
after their value has been established by industry practice.
Where the activity has not proven to be effective in addressing a threat, it has eventually been changed or eliminated. This evaluation process is ongoing. When new technology or techniques emerge, they are incorporated into operations protocols. The pipeline activity list is therefore being continuously refined.
A basic premise of this book is that a risk assessment
methodology should follow these same lines of reasoning. All
activities that influence, favorably or unfavorably, the pipeline
should be considered, even if comprehensive historical data on the effectiveness of a particular activity are not yet available.
Industry experience and operator intuition can and should be
included in the risk assessment.
The scientific method

This text advocates the use of simplifications to better understand and manage the complex interactions of the many variables that make up pipeline risk. This approach may appear to some to be inconsistent with their notions about scientific process. Therefore, it may be useful to briefly review some pertinent concepts related to science, engineering, and even philosophy.

The results of a good risk assessment are in fact the advancement of a theory. The theory is a description of the expected
behavior, in risk terms, of a pipeline system over some future
period of time. Ideally, the theory is formulated from a risk
assessment technique that conforms with appropriate scientific
methodologies and has made appropriate use of information
and logic to create a model that can reliably produce such theories. It is hoped that the theory is a fair representation of actual
risks. To be judged a superior theory by the scientific community, it will use all available information in the most rigorous
fashion and be consistent with all available evidence. To be
judged a superior theory by most engineers, it will additionally
have a level of rigor and sophistication commensurate with its
predictive capability; that is, the cost of the assessment and its
use will not exceed the benefits derived from its use. If the
pipeline actually behaves as predicted, then everyone’s confidence in the theory will grow, although results consistent with
the predictions will never “prove” the theory.
Much has been written about the generation and use of theories and the scientific method. One useful explanation of the scientific method is that it is the process by which scientists endeavor to construct a reliable and consistent representation of
the world. In many common definitions, the methodology
involves hypothesis generation and testing of that hypothesis:
1. Observe a phenomenon.
2. Hypothesize an explanation for the phenomenon.
3. Predict some measurable consequence that your hypothesis
would have if it turned out to be true.
4. Test the predictions experimentally.
Much has also been written about the fallacy of believing
that scientists use only a single method of discovery and that
some special type of knowledge is thereby generated by this
special method. For example, the classic methodology shown
above would not help much with investigation of the nature of
the cosmos. No single path to discovery exists in science, and
no one clear-cut description can be given that accounts for all
the ways in which scientific truth is pursued [56,88].
Common definitions of the scientific method note aspects such as objectivity and acceptability of results from scientific study. Objectivity indicates the attempt to observe things as they are, without altering observations to make them consistent with some preconceived world view. From a risk perspective, we want our models to be objective and unbiased (see the discussion of bias later in this chapter). However, our data sources often cannot be taken at face value. Some interpretation and, hence, alteration is usually warranted, thereby introducing some subjectivity. Acceptability is judged in terms of the degree to which observations and experimentation can be reproduced. Of course, the ideal risk model will be accurate, but accuracy may only be verified after many years. Reproducibility is another characteristic that is sought and immediately verifiable. If multiple assessors examine the same situation, they should come to similar conclusions if our model is acceptable.
The scientific method requires both inductive reasoning and
deductive reasoning. Induction or inference is the process of
drawing a conclusion about an object or event that has yet to be
observed or occur on the basis of previous observations of similar objects or events. In both everyday reasoning and scientific
reasoning regarding matters of fact, induction plays a central
role. In an inductive inference, for example, we draw conclusions about an entire group of things, or a population, on the
basis of data about a sample of that group or population; or we
predict the occurrence of a future event on the basis of observations of similar past events; or we attribute a property to a
nonobserved thing on the grounds that all observed things of the same kind have that property; or we draw conclusions about causes of an illness based on observations of symptoms. Inductive inference permeates almost all fields, including education, psychology, physics, chemistry, biology, and sociology [56]. The role of induction is central to many of our processes of reasoning.

At least one application of inductive reasoning in pipeline
risk assessment is obvious-using past failures to predict
future performance. A more narrow example of inductive reasoning for pipeline risk assessment would be: “Pipeline ABC is
shallow and fails often, therefore all pipelines that are shallow
fail more often.”
Deduction, on the other hand, reasons forward from established rules: "All shallow pipelines fail more frequently; pipeline ABC is shallow; therefore pipeline ABC fails more frequently."
As an interesting aside to inductive reasoning, philosophers
have struggled with the question of what justification we have
to take for granted the common assumptions used with induction: that the future will follow the same patterns as the past;
that a whole population will behave roughly like a randomly
chosen sample; that the laws of nature governing causes and
effects are uniform; or that we can presume that a sufficiently
large number of observed objects gives us grounds to attribute
something to another object we have not yet observed. In short,
what is the justification for induction itself? Although it is tempting to try to justify induction by pointing out that inductive reasoning is commonly used in both everyday life and science, and its conclusions are, by and large, proven to be correct, this justification is itself an induction and therefore it raises the same problem: Nothing guarantees that simply because induction has worked in the past it will continue to work in the future.
The problem of induction raises important questions for the
philosopher and logician whose concern it is to provide a basis
for assessment of the correctness and the value of methods of
reasoning [56,88].
Beyond the reasoning foundations of the scientific method, there is another important characteristic of a scientific theory or hypothesis that differentiates it from, for example, an act of faith: A theory must be "falsifiable." This means that there must be some experiment or possible discovery that could prove the theory untrue. For example, Einstein's theory of relativity made predictions about the results of experiments. These experiments could have produced results that contradicted Einstein, so the theory was (and still is) falsifiable [56]. On the other hand, the existence of God is an example of a proposition that cannot be falsified by any known experiment. Risk assessment results, or "theories," will predict very rare events and hence not be falsifiable for many years. This implies an element of faith in accepting such results.
Because most risk assessment practitioners are primarily interested in the immediate predictive power of their assessments, many of these issues can largely be left to the philosophers. However, it is useful to understand the implications and underpinnings of our beliefs.

Modeling
As previously noted, the scientific method is a process by
which we create representations or models of our world.
Science and engineering (as applied science) are and always
have been concerned with creating models of how things work.
As it is used here, the term model refers to a set of rules that are
used to describe a phenomenon. Models can range from very
simple screening tools (i.e., “if A and not B, then risk = low”) to
enormously complex sets of algorithms involving hundreds of
variables that employ concepts from expert systems, fuzzy
logic, and other artificial intelligence constructs.
Model construction enables us to better understand our physical world and hence to create better engineered systems.
Engineers actively apply such models in order to build more
robust systems. Model building and model application/evaluation are therefore the foundation of engineering. Similarly,
risk assessment is the application of models to increase the
understanding of risk, as discussed later in this chapter.
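A screening rule of the form quoted above can be sketched in a few lines. The conditions A and B here are abstract placeholders, as in the text; a real model developer would substitute actual risk indicators.

```python
def screening_risk(a: bool, b: bool) -> str:
    """A minimal rule-based screening model.

    Direct transcription of the text's example rule:
    "if A and not B, then risk = low"; anything else is
    flagged for closer review. A and B are placeholders,
    not indicators from any published model.
    """
    if a and not b:
        return "low"
    return "review"
```

Even a trivial rule set like this is a model in the sense used here: a set of rules that describes, however coarsely, a phenomenon.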
In addition to the classical models of logic, logic techniques
are emerging that seek to better deal with uncertainty and incomplete knowledge. Methods of measuring “partial truths”-when
a thing is neither completely true nor completely false-have
been created based on fuzzy logic, originating in the 1960s at
the University of California at Berkeley as a technique to model
the uncertainty of natural language. Fuzzy logic, or fuzzy set theory, resembles human reasoning in the face of uncertainty and
approximate information. Questions such as “To what degree is it
safe?” can be addressed through these techniques. They have
found engineering application in many control systems ranging
from “smart” clothes dryers to automatic trains.
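A fuzzy membership function is the basic building block of such techniques. The sketch below computes a partial truth for “the pipeline is safe from excavation damage” from depth of cover; the variable and the 0.5 m / 1.5 m breakpoints are invented for illustration, not regulatory or published values.

```python
def degree_of_safety(depth_of_cover_m: float) -> float:
    """Fuzzy membership in the set 'safe from excavation damage'.

    Returns a partial truth in [0, 1]: 0.0 = not at all safe,
    1.0 = fully safe. Breakpoints are illustrative only.
    """
    if depth_of_cover_m <= 0.5:
        return 0.0
    if depth_of_cover_m >= 1.5:
        return 1.0
    # Linear ramp between the two breakpoints
    return (depth_of_cover_m - 0.5) / 1.0
```

A classical (crisp) rule would force a yes/no answer at some single threshold; the fuzzy version preserves the in-between cases that human judgment naturally recognizes.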
II. Basic concepts
Underlying the definition of risk is the concept of hazard. The
word hazard comes from al zahr, the Arabic word for “dice,”
which referred to an ancient game of chance [10]. We typically
define a hazard as a characteristic or group of characteristics
that provides the potential for a loss. Flammability and toxicity
are examples of such characteristics.
It is important to make the distinction between a hazard and
a risk because we can change the risk without changing a
hazard. When a person crosses a busy street, the hazard should
be clear to that person. Loosely defined, it is the prospect
that the person must place himself in the path of moving
vehicles that can cause him great bodily harm were he to be
struck by one or more of them. The hazard is therefore injury
or fatality as a result of being struck by a moving vehicle.
The risk, however, is dependent on how that person conducts
himself in the crossing of the street. He most likely realizes that
the risk is reduced if he crosses in a designated traffic-controlled area and takes extra precautions against vehicle
operators who may not see him. He has not changed the hazard-he can still be struck by a vehicle-but his risk of injury
or death is reduced by prudent actions. Were he to encase
himself in an armored vehicle for the trip across the street,
his risk would be reduced even further-he has reduced the
consequences of the hazard.
Several methodologies are available to identify hazards and
threats in a formal and structured way. A hazard and operability
(HAZOP) study is a technique in which a team of system
experts is guided through a formal process in which imaginative scenarios are developed using specific guide words and
analyzed by the team. Event-tree and fault-tree analyses are
other tools. Such techniques underlie the identified threats
to pipeline integrity that are presented in this book.
1/4 Risk: Theory and Application
Identified threats can be generally grouped into two categories: time-dependent failure mechanisms and random failure mechanisms,
as discussed later.
The phrases threat assessment and hazard identification are
sometimes used interchangeably in this book when they refer to
identifying mechanisms that can lead to a pipeline failure with
accompanying consequences.
Risk is most commonly defined as the probability of an event
that causes a loss and the potential magnitude of that loss. By
this definition, risk is increased when either the probability of
the event increases or when the magnitude of the potential
loss (the consequences of the event) increases. Transportation
of products by pipeline is a risk because there is some probability of the pipeline failing, releasing its contents, and causing
damage (in addition to the potential loss of the product itself).
The most commonly accepted definition of risk is often
expressed as a mathematical relationship:
Risk = (event likelihood) x (event consequence)
As such, a risk is often expressed in measurable quantities
such as the expected frequency of fatalities, injuries, or economic
loss. Monetary costs are often used as part of an overall expression of risk; however, the difficult task of assigning a dollar
value to human life or environmental damage is necessary in
using this as a metric.
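The relationship can be exercised directly as an expected monetary loss per year. The likelihood and consequence values below are illustrative only, not data for any actual pipeline.

```python
def risk(event_likelihood_per_year: float, event_consequence_dollars: float) -> float:
    """Risk = (event likelihood) x (event consequence).

    With likelihood in events/year and consequence in dollars/event,
    the result is an expected loss in dollars/year.
    """
    return event_likelihood_per_year * event_consequence_dollars

# Illustrative only: a 1-in-1,000 annual failure chance
# paired with a $2,000,000 consequence.
annual_risk = risk(0.001, 2_000_000)  # $2,000/year expected loss
```

The same arithmetic shows why risk rises when either factor rises: doubling either the likelihood or the consequence doubles the expected loss.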
Related risk terms include acceptable risk, tolerable risk,
risk tolerance, and negligible risk, in which risk assessment
and decision making meet. These are discussed in Chapters 14
and 15.
A complete understanding of the risk requires that three
questions be answered:
1. What can go wrong?
2. How likely is it?
3. What are the consequences?
By answering these questions, the risk is defined.
Answering the question of “What can go wrong?” begins with
defining a pipeline failure. The unintentional release of
pipeline contents is one definition. Loss of integrity is another
way to characterize pipeline failure. However, a pipeline can
fail in other ways that do not involve a loss of contents. A more
general definition is failure to perform its intended function.
In assessing the risk of service interruption, for example, a
pipeline can fail by not meeting its delivery requirements
(its intended purpose). This can occur through blockage,
contamination, equipment failure, and so on, as discussed in
Chapter 10.
Further complicating the quest for a universal definition of
failure is the fact that municipal pipeline systems like water and
wastewater and even natural gas distribution systems tolerate
some amount of leakage (unlike most transmission pipelines).
Therefore, they might be considered to have failed only when
the leakage becomes excessive by some measure. Except in the
case of service interruption discussed in Chapter 10, the general definition of failure in this book will be excessive leakage.
The term leakage implies that the release of pipeline contents
is unintentional. This lets our definition distinguish a failure from a venting, de-pressuring, blow down, flaring, or other
deliberate product release.
Under this working definition, a failure will be clearer in
some cases than others. For most hydrocarbon transmission
pipelines, any leakage (beyond minor, molecular level emissions) is excessive, so any leak means that the pipeline has
failed. For municipal systems, determination of failure will not
be as precise for several reasons, such as the fact that some
leakage is only excessive-that is, a pipe failure-after it has
continued for a period of time.
Failure occurs when the structure is subjected to stresses
beyond its capabilities, resulting in its structural integrity being
compromised. Internal pressure, soil overburden, extreme temperatures, external forces, and fatigue are examples of stresses
that must be resisted by pipelines. Failure or loss of strength
leading to failure can also occur through loss of material by
corrosion or from mechanical damage such as scratches and gouges.
The answers to what can go wrong must be comprehensive in
order for a risk assessment to be complete. Every possible failure mode and initiating cause must be identified. Every threat
to the pipeline, even the more remotely possible ones, must be
identified. Chapters 3 through 6 detail possible pipeline failure
mechanisms grouped into the four categories of Third Party,
Corrosion, Design, and Incorrect Operations. These roughly
correspond to the dominant failure modes that have been
historically observed in pipelines.
By the commonly accepted definition of risk, it is apparent that
probability is a critical aspect of all risk assessments. Some
estimate of the probability of failure will be required in order
to assess risks. This addresses the second question of the risk
definition: “How likely is it?”
Some think of probability as inextricably intertwined with
statistics. That is, “real” probability estimates arise only from
statistical analyses-relying solely on measured data or
observed occurrences. However, this is only one of five definitions of probability offered in Ref. 88. It is a compelling definition since it is rooted in aspects of the scientific process and
the familiar inductive reasoning. However, it is almost always
woefully incomplete as a stand-alone basis for probability estimates of complex systems. In reality, there are no systems
beyond very simple, fixed-outcome-type systems that can be
fully understood solely on the basis of past observations-the
core of statistics. Almost any system of a complexity beyond a
simple roll of a die, spin of a roulette wheel, or draw from a
deck of cards will not be static enough or allow enough trials for
statistical analysis to completely characterize its behavior.
Statistics requires data samples-past observations from which
inferences can be drawn. More interesting systems tend to have
fewer available observations that are strictly representative of
their current states. Data interpretation becomes more and more
necessary to obtain meaningful estimates. As systems become
more complex, more variable in nature, and where trial observations are less available, the historical frequency approach
will often provide answers that are highly inappropriate
estimates of probability.
Even in cases where past frequencies lead to more reliable
estimates of future events for populations, those estimates are
often only poor estimates of individual events. It is relatively
easy to estimate the average adulthood height of a class of third
graders, but more problematic when we try to predict the height
of a specific student solely on the basis of averages. Similarly,
just because the national average of pipeline failures might be 1
per 1,000 mile-years, the 1,000-mile-long ABC pipeline could
be failure free for 50 years or more.
The point is that observed past occurrences are rarely sufficient information on which to base probability estimates. Many
other types of information can and should play an important
role in determining a probability. Weather forecasting is a good
example of how various sources of information come together
to form the best models. The use of historical statistics (climatological data-what the weather has been like historically on
this date) turns out to be a fairly decent forecasting tool (producing probability estimates), even in the absence of any meteorological interpretations. However, a forecast based solely on
what has happened in previous years on certain dates would
ignore knowledge of frontal movements, pressure zones, current conditions, and other information commonly available.
The forecasts become much more accurate as meteorological
information and expert judgment are used to adjust the base
case climatological forecasts [88].
Underlying most of the complete definitions of probability is
the concept of degree of belief: a probability expresses a degree
of belief. This is the most compelling interpretation of probability because it encompasses the statistical evidence as well as
the interpretations and judgment. Ideally, the degree of belief
could be determined in some consistent fashion so that any two
estimators would arrive at the same conclusion given the same
evidence. It is a key purpose of this book to provide a framework by which a given set of evidence consistently leads to a
specific degree of belief regarding the safety of a pipeline.
(Note that the terms likelihood, probability, and chance are
often used interchangeably in this text.)
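This evidence-to-belief process can be sketched with Bayes’ rule, which revises a prior degree of belief as evidence arrives. The corrosion scenario and all numbers below are hypothetical, chosen only to show the mechanics.

```python
def update_belief(prior: float,
                  likelihood_given_h: float,
                  likelihood_given_not_h: float) -> float:
    """Bayes' rule: revise a degree of belief in hypothesis H
    after new evidence.

    prior: P(H) before the evidence
    likelihood_given_h: P(evidence | H)
    likelihood_given_not_h: P(evidence | not H)
    """
    numerator = likelihood_given_h * prior
    denominator = numerator + likelihood_given_not_h * (1.0 - prior)
    return numerator / denominator

# Hypothetical numbers: historical statistics suggest a 10% belief
# that a segment has active corrosion; an inspection then flags it,
# with an assumed 90% true-positive and 20% false-positive rate.
posterior = update_belief(0.10, 0.90, 0.20)  # belief rises to ~0.33
```

Two estimators applying the same rule to the same prior and the same evidence arrive at the same posterior, which is exactly the consistency this book seeks.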
Frequency, statistics, and probability
As used in this book, frequency usually refers to a count of past
observations; statistics refers to the analyses of the past observations; and the definition of probability is “degree of belief,”
which normally utilizes statistics but is rarely based entirely on
them.
A statistic is not a probability. Statistics are only numbers or
methods of analyzing numbers. They are based on observations-past
events. Statistics do not imply anything about
future events until inductive reasoning is employed. Therefore,
a probabilistic analysis is not only a statistical analysis. As previously noted, probability is a degree of belief. It is influenced
by statistics (past observations), but only in rare cases do the
statistics completely determine our belief. Such a rare case
would be where we have exactly the same situation as that from
which the past observations were made and we are making estimates for a population exactly like the one from which the past
data arose-a very simple system.
Historical failure frequencies-and the associated statistical
values-are normally used in a risk assessment. Historical
data, however, are not generally available in sufficient quantity
or quality for most event sequences. Furthermore, when data
are available, it is normally rare-event data-one failure in
many years of service on a specific pipeline, for instance.
Extrapolating future failure probabilities from small amounts
of information can lead to significant errors. However, historical data are very valuable when combined with all other
information available to the evaluator.
Another possible problem with using historical data is the
assumption that the conditions remain constant. This is rarely
true, even for a particular pipeline. For example, when historical data show a high occurrence of corrosion-related leaks, the
operator presumably takes appropriate action to reduce those
leaks. His actions have changed the situation and previous
experience is now weaker evidence. History will foretell the
future only when no offsetting actions are taken. Although
important pieces of evidence, historical data alone are rarely
sufficient to properly estimate failure probabilities.
Failure rates
A failure rate is simply a count of failures over time. It is usually
first a frequency observation of how often the pipeline has
failed over some previous period of time. A failure rate can also
be a prediction of the number of failures to be expected in a
given future time period. The failure rate is normally divided
into rates of failure for each failure mechanism.
The ways in which a pipeline can fail can be loosely categorized according to the behavior of the failure rate over time.
When the failure rate tends to vary only with a changing environment, the underlying mechanism is usually random and should
exhibit a constant failure rate as long as the environment stays
constant. When the failure rate tends to increase with time and is
logically linked with an aging effect, the underlying mechanism
is time dependent. Some failure mechanisms and their respective
categories are shown in Table 1.1. There is certainly an aspect of
randomness in the mechanisms labeled time dependent and the
possibility of time dependency for some of the mechanisms
labeled random. The labels point to the probability estimation
protocol that seems to be most appropriate for the mechanism.
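The distinction matters for estimation: a mechanism with a constant (random) failure rate can be treated as a Poisson process, so the rate converts directly to a probability of one or more failures over an exposure. The rate, mileage, and period below are invented for illustration.

```python
import math

def prob_at_least_one_failure(rate_per_mile_year: float,
                              miles: float,
                              years: float) -> float:
    """For a constant failure rate, failure counts follow a Poisson
    process; the chance of one or more failures over an exposure is
    1 - exp(-rate * exposure)."""
    expected_failures = rate_per_mile_year * miles * years
    return 1.0 - math.exp(-expected_failures)

# Hypothetical rate: 0.0002 failures per mile-year,
# applied to a 100-mile segment over 10 years.
p = prob_at_least_one_failure(0.0002, 100.0, 10.0)  # ~0.18
```

A time-dependent mechanism would not fit this model; its rate grows with age, so the exposure would have to be weighted accordingly rather than multiplied straight through.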
The historical rate of failures on a particular pipeline system
may tell an evaluator something about that system. Figure 1.1 is
a graph that illustrates the well-known “bathtub” shape of failure rate changes over time. This general shape represents the
failure rate for many manufactured components and systems
over their lifetimes. Figure 1.2 is a theorized bathtub curve for
pipelines.

Table 1.1 Failure rates vs. failure mechanisms

Failure mechanism      Nature of failure rate
Third-party damage     Random
Earth movements        Random (except for slow-acting instabilities)
Material degradation   Time dependent
Material defects       Time dependent

Figure 1.1 Common failure rate curve (bathtub curve)

Some pieces of equipment or installations have a high initial
rate of failure. This first portion of the curve is called the burn-in
phase or infant mortality phase. Here, defects that developed
during initial manufacture of a component cause failures. As
these defects are eliminated, the curve levels off into the second
zone. This is the so-called constant failure zone and reflects the
phase where random accidents maintain a fairly constant failure rate. Components that survive the burn-in phase tend to fail
at a constant rate. Failure mechanisms that are more random in
nature-third-party damages or most land movements, for
example-tend to drive the failure rate in this part of the curve.
Far into the life of the component, the failure rate may begin
to increase. This is the zone where things begin to wear out as
they reach the end of their useful service life. Where a time-dependent failure mechanism (corrosion or fatigue) is involved,
its effects will be observed in this wear-out phase of the curve.
An examination of the failure data of a particular system may
suggest such a curve and theoretically tell the evaluator what
stage the system is in and what can be expected. Failure rates
are further discussed in Chapter 14.
Inherent in any risk evaluation is a judgment of the potential
consequences. This is the last of the three risk-defining questions: If something goes wrong, what are the consequences?
Consequence implies a loss of some kind. Many of the
aspects of potential losses are readily quantified. In the case
of a major hydrocarbon pipeline accident (product escaping,
perhaps causing an explosion and fire), we could quantify
losses such as damaged buildings, vehicles, and other property; costs of service interruption; cost of the product lost; cost
of the cleanup; and so on. Consequences are sometimes
grouped into direct and indirect categories, where direct costs
can include:
Property damages
Damages to human health
Environmental damages
Loss of product
Repair costs
Cleanup and remediation costs
Indirect costs can include litigation, contract violations, customer dissatisfaction, political reactions, loss of market share,
and government fines and penalties.
Figure 1.2 Theorized failure rate curve for pipelines (random mechanisms: third party, earth movements; time-dependent mechanisms: corrosion, fatigue).
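The bathtub shape discussed above can be sketched as a piecewise failure-rate function. The breakpoints and rate values below are invented for illustration and are not drawn from any pipeline data.

```python
def bathtub_failure_rate(age_years: float) -> float:
    """Illustrative piecewise bathtub curve: a declining burn-in
    rate, a flat constant-failure zone, then a rising wear-out
    zone. All breakpoints and rates are made up for illustration.
    """
    base = 0.001  # constant-zone rate, failures per mile-year
    if age_years < 2.0:
        # Burn-in: manufacturing defects elevate the early rate
        return base + 0.004 * (2.0 - age_years) / 2.0
    if age_years <= 30.0:
        # Constant zone: random mechanisms dominate
        return base
    # Wear-out: time-dependent mechanisms (corrosion, fatigue) take over
    return base + 0.0002 * (age_years - 30.0)
```

Plotting this function against age reproduces the three zones described in the text: high at the start, flat in the middle, and rising at the end.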
As a common denominator, the monetary value of losses
is often used to quantify consequences. Such “monetizing” of
consequences-assigning dollar values to damages-is straightforward for some damages. For others, such as loss of life and
environmental impacts, it is more difficult to apply. Much has
been written on the topic of the value of human life, and this is
further discussed in absolute risk quantification (see Chapter
14). Placing a value on the consequences of an accident is a key
component in society’s determination of how much it is willing
to spend to prevent that accident. This involves concepts of
acceptable risk and is discussed in Chapter 15.
The hazards that cause consequences and are created by the
loss of integrity of an operating pipeline will include some or
all ofthe following:
Toxicity/asphyxiation threats from released products-contact toxicity or exclusion of air from confined spaces
Contamination/pollution from released products-damage
to flora, fauna, drinking waters, etc.
Mechanical effects from force of escaping product-erosion, washouts, projectiles, etc.
Fire/ignition scenarios involving released products-pool
fires, fireballs, jet fires, explosions
These hazards are fully discussed in following chapters,
beginning with Chapter 7.
Risk assessment
Risk assessment is a measuring process and a risk model is a
measuring tool. Included in most quality and management
concepts is the need for measurement. It has been said that “If
you don’t have a number, you don’t have a fact-you have an
opinion.” While the notion of a “quantified opinion” adds
shades of gray to an absolute statement like this, most would
agree that quantifying something is at least the beginning of
establishing its factual nature. It is always possible to quantify
things we truly understand. When we find it difficult to
express something in numbers, it is usually because we don’t
have a complete understanding of the concept. Risk assessment
must measure both the probability and consequences of all
of the potential events that comprise the hazard. Using the risk
assessment, we can make decisions related to managing those risks.
Note that risk is not a static quantity. Along the length of a
pipeline, conditions are usually changing. As they change, the
risk is also changing in terms of what can go wrong, the likelihood of something going wrong, andor the potential consequences. Because conditions also change with time, risk is not
constant even at a fixed location. When we perform a risk evaluation, we are actually taking a snapshot of the risk picture at a
moment in time.
There is no universally accepted method for measuring risk.
The relative advantages and disadvantages of several
approaches are discussed later in this chapter. It is important to
recognize what a risk assessment can and cannot do, regardless
of the methodology employed. The ability to predict pipeline
failures-when and where they will occur-would obviously
be a great advantage in reducing risk. Unfortunately, this cannot be done at present. Pipeline accidents are relatively rare and
often involve the simultaneous failure of several safety provisions. This makes accurate failure predictions almost impossible. So, modern risk assessment methodologies provide a surrogate for such predictions. Assessment efforts by pipeline
operating companies are normally not attempts to predict how
many failures will occur or where the next failure will occur.
Rather, efforts are designed to systematically and objectively
capture everything that can be known about the pipeline and its
environment, to put this information into a risk context, and
then to use it to make better decisions.
Risk assessments normally involve examining the factors or
variables that combine to create the whole risk picture. A complete list of underlying risk factors-that is, those items that
add to or subtract from the amount of risk-can be identified
for a pipeline system. Including all of these items in an assessment, however, could create a somewhat unwieldy system and
one of questionable utility. Therefore, a list of critical risk indicators is usually selected based on their ability to provide useful risk signals without adding unnecessary complexities. Most
common approaches advocate the use of a model to organize or
enhance our understanding of the factors and their myriad possible interactions. A risk assessment therefore involves trade-offs between the number of factors considered and the ease of
use or cost of the assessment model. The important variables
are widely recognized, but the number to be considered in the
model (and the depth of that consideration) is a matter of choice
for the model developers.
The concept of the signal-to-noise ratio is pertinent here. In
risk assessment, we are interested in measuring risk levels-the
risk is the signal we are trying to detect. We are measuring in a
very “noisy” environment, in which random fluctuations and
high uncertainty tend to obscure the signal. The signal-to-noise
ratio concept tells us that the signal has to be of a certain
strength before we can reliably pick it out of the background
noise. Perhaps only very large differences in risk will be
detectable with our risk models. Smaller differences might be
indistinguishable from the background noise or uncertainty
in our measurements. We must recognize the limitations of
our measuring tool so that we are not wasting time chasing
apparent signals that are, in fact, false positives or false negatives. Statistical quality control processes acknowledge this and employ statistical control charts to determine
which measurements are worth investigating further.
Some variables will intuitively contribute more to the signal,
that is, the risk level. Changes in variables such as population
density, type of product, and pipe stress level will very obviously change the possible consequences or failure probability.
Others, such as flow rate and depth of cover, will also impact the
risk, but perhaps not as dramatically. Still others, such as soil
moisture, soil pH, and type of public education advertising, will
certainly have some effect, but the magnitude of that effect is
arguable. These latter are not arguable in the sense that they
cannot contribute to a failure, because they certainly can in
some imaginable scenarios, but in the sense that they may be
more noise than signal, as far as a model can distinguish. That
is, their contributions to risk may be below the sensitivity
thresholds of the risk assessment.
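A small simulation illustrates the signal-to-noise point; the risk scores, noise level, and sample sizes below are all invented.

```python
import random

def simulate_scores(true_risk: float, noise_sd: float, n: int, seed: int) -> list:
    """Simulate n noisy measurements of a segment's risk score."""
    rng = random.Random(seed)
    return [true_risk + rng.gauss(0.0, noise_sd) for _ in range(n)]

# Two segments whose true risk levels differ by 1 unit (the signal),
# each scored 30 times with measurement noise of standard deviation 10.
a = simulate_scores(50.0, 10.0, 30, seed=1)
b = simulate_scores(51.0, 10.0, 30, seed=2)
observed_gap = sum(b) / len(b) - sum(a) / len(a)
# With noise this large relative to the signal, the observed gap
# between mean scores may bear little relation to the true 1-unit
# difference: the signal is lost in the noise.
```

Rerunning with different seeds shows the observed gap swinging widely around the true difference, which is exactly the false-positive/false-negative hazard described above.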
Risk management
Risk management is a reaction to perceived risks. It is practiced
every day by every individual. In operating a motor vehicle,
compensating for poor visibility by slowing down demonstrates a simple application of risk management. The driver
knows that a change in the weather variable of visibility impacts
the risk because her reaction times will be reduced. Reducing
vehicle speed compensates for the reduced reaction time.
While this example appears obvious, reaching this conclusion
without some mental model of risk would be difficult.
Risk management, for the purposes of this book, is the set of
actions adopted to control risk. It entails a process of first
assessing a level of risk associated with a facility and then
preparing and executing an action plan to address current and
future risks. The assimilation of complex data and the subsequent integration of sometimes competing risk reduction and
profit goals are at the heart of any debate about how best to
manage pipeline risks. Decision making is the core of risk management. Many challenging questions are implied in risk
management:
Where and when should resources be applied?
How much urgency should be attached to any specific risk?
Should only the worst segments be addressed first?
Should resources be diverted from less risky segments in
order to better mitigate risks in higher risk areas?
How much will risk change if we do nothing differently?
An appropriate risk mitigation strategy might involve risk
reductions for very specific areas or, alternatively, improving
the risk situation in general for long stretches of pipeline. Note
also that a risk reduction project may impact many variables for
a few segments or, alternatively, might impact a few variables
but for many segments.
Although the process of pipeline risk management does not
have to be complex, it can incorporate some very sophisticated
engineering and statistical concepts.
A good risk assessment process leads the user directly into
risk management by highlighting specific actions that can
reduce risks. Risk mitigation plans are often developed using
“what-if” scenarios in the risk assessment.
The intention is not to make risk disappear. If we make any
risk disappear, we will likely have sacrificed some other aspect
of our lifestyles that we probably don’t want to give up. As an
analogy, we can eliminate highway fatalities, but are we really
ready to give up our cars? Risks can be minimized however-at
least to the extent that no unacceptablerisks remain.
The term experts as it is used here refers to people most knowledgeable in the subject matter. An expert is not restricted to a
scientist or other technical person. The greatest expertise for a
specific pipeline system probably lies with the workforce
that has operated and maintained that system for many years.
The experience and intuition of the entire workforce should
be tapped as much as is practical when performing a risk
assessment.
Experts bring to the assessment a body of knowledge that
goes beyond statistical data. Experts will discount some data
that do not adequately represent the scenario being judged.
Similarly, they will extrapolate from dissimilar situations that
may have better data available.
The experience factor and the intuition of experts should not
be discounted merely because they cannot be easily quantified.
Normally little disagreement will exist among knowledgeable
persons when risk contributors and risk reducers are evaluated.
If differences arise that cannot be resolved, the risk evaluator
can have each opinion quantified and then produce a compiled
value to use in the assessment.
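One simple way to produce such a compiled value is a confidence-weighted average of the quantified opinions. The experts, estimates, and weights below are hypothetical; the weighting scheme itself is only one of several aggregation choices an evaluator might make.

```python
def compile_opinions(opinions: dict, weights: dict) -> float:
    """Compile differing quantified expert opinions into a single
    value via a weighted average. Weights are judgmental inputs
    reflecting relative confidence in each expert.
    """
    total_weight = sum(weights[name] for name in opinions)
    weighted_sum = sum(opinions[name] * weights[name] for name in opinions)
    return weighted_sum / total_weight

# Hypothetical: three experts estimate an annual failure probability,
# with the long-serving operator's opinion weighted double.
estimate = compile_opinions(
    {"operator": 0.002, "engineer": 0.004, "inspector": 0.003},
    {"operator": 2.0, "engineer": 1.0, "inspector": 1.0},
)  # -> 0.00275
```

The weights make the subjectivity explicit rather than hiding it, which keeps the compiled value auditable.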
When knowledge is incomplete and opinion, experience,
intuition, and other unquantifiable resources are used, the
assessment of risk becomes at least partially subjective. As it
turns out, knowledge is always incomplete and some aspect of
judgment will always be needed for a complete assessment.
Hence, subjectivity is found in any and all risk assessments.
Humans tend to have bias and experts are not immune from
this. Knowledge of possible bias is the first step toward minimizing it. One source [88] identifies many types of bias and
heuristic assumptions that are related to learning based on
experiment or observation. These are shown in Table 1.2.
III. Uncertainty
As noted previously, risk assessment is a measuring process.
Like all measuring systems, measurement error and uncertainty arise as a result of the limitations of the measuring tool,
the process of taking the measurement, and the person performing the measurement. Pipeline risk assessment is also the compilation of many other measurements (depth of cover, wall
thickness, pipe-to-soil voltages, pressure, etc.) and hence
absorbs all of those measurement uncertainties. It makes use
of engineering and scientific models (stress formulas, vapor
dispersion and thermal effects modeling, etc.) that also have
accompanying errors and uncertainties. In the use of past failure rate information, additional uncertainty results from small
sample sizes and comparability, as discussed previously.
Further adding to the uncertainty is the fact that the thing
being measured is constantly changing. It is perhaps useful to
view a pipeline system, including its operating environment, as
a complex entity with behavior similar to that seen in dynamic
or chaotic systems. Here the term chaotic is being used in its
scientific meaning (chaos theory) rather than implying a disorganized or random nature in the conventional sense of the
word. In science, dynamic or chaotic systems refer to the many
systems in our world that do not behave in strictly predictable
or linear fashions. They are not completely deterministic nor
completely random, and things never happen in exactly the
same way. A pipeline, with its infinite combinations of historical, environmental, structural, operational, and maintenance
parameters, can be expected to behave as a so-called dynamic
system-perhaps establishing patterns over time, but never
repetition. As such, we recognize that, as one possible outcome
of the process of pipelining, the risk of pipeline failure is
sensitive to immeasurable or unknowable initial conditions.
In essence, we are trying to find differences in risk out of
all the many sources of variation inherent in a system that
places a man-made structure in a complex and ever-changing
environment. Recall the earlier discussion on signal-to-noise
considerations in risk assessment.
In more practical terms, we can identify all of the threats to
the pipeline. We understand the mechanisms underlying the
Risk process-the general steps 119
Table 1.2
Types of bias and heuristics

Availability heuristic: Judging likelihood by instances most easily or vividly recalled
Availability bias: Overemphasizing available or salient instances
Hindsight bias: Exaggerating in retrospect what was known in advance
Anchoring and adjustment heuristic: Adjusting an initial probability to a final value
Anchoring bias: Insufficiently modifying the initial value
Conjunctive distortion: Misjudging the probability of combined events relative to their individual values
Representativeness heuristic: Judging likelihood by similarity to some reference class
Representativeness bias: Overemphasizing similarities and neglecting other information; confusing "probability of A given B" with "probability of B given A"
Insensitivity to predictability: Exaggerating the predictive validity of some method or indicator
Base-rate insensitivity: Overlooking frequency information
Insensitivity to sample size: Overemphasizing significance of limited data
Overconfidence: Greater confidence than warranted, with probabilities that are too extreme or distributions too narrow about the mean
Underconfidence: Less confidence than warranted in evidence with high weight but low strength
Personal bias: Intentional distortion of assessed probabilities to advance an assessor's self-interest
Motivational bias: Intentional distortion of assessed probabilities to advance a sponsor's interest in achieving an outcome

Source: From Vick, Steven G., Degrees of Belief: Subjective Probability and Engineering Judgment, ASCE Press, Reston, VA, 2002.
threats. We know the options in mitigating the threats. But in
knowing these things, we also must know the uncertainty
involved-we cannot know and control enough of the details to
entirely eliminate risk. At any point in time, thousands of forces
are acting on a pipeline, the magnitudes of which are "unknown
and unknowable."
An operator will never have all of the relevant information he
needs to absolutely guarantee safe operations. There will
always be an element of the unknown. Managers must control
the “right” risks with limited resources because there will
always be limits on the amount of time, manpower, or money
that can be applied to a risk situation. Managers must weigh
their decisions carefully in light of what is known and
unknown. It is usually best to assume that
Uncertainty = increased risks
This impacts risk assessment in several ways. First, when
information is unknown, it is conservatively assumed that
unfavorable conditions exist. This not only encourages the frequent acquisition of information, but it also enhances the risk
assessment's credibility, especially to outside observers.
It also makes sense from an error analysis standpoint. Two
possible errors can occur when assessing a condition-saying it
is “good,” when it is actually “bad,” and saying it is “bad” when
it is actually “good.” If a condition is assumed to be good
when it is actually bad, this error will probably not be discovered until some unfortunate event occurs. The operator will
most likely be directing resources toward suspected deficiencies, not recognizing that an actual deficiency has been hidden
by an optimistic evaluation. At the point of discovery by incident, the ability of the risk assessment to point out any other
deficiency is highly suspect. An outside observer can say,
“Look, this model is assuming that everything is rosy-how
can we believe anything it says?!” On the other hand, assuming
a condition is bad when it is actually good merely has the effect
of highlighting the condition until better information makes
the “red flag” disappear. Consequences are far less with this
latter type of error. The only cost is the effort to get the correct
information. So, this “guilty until proven innocent” approach is
actually an incentive to reduce uncertainty.
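The "guilty until proven innocent" default philosophy can be sketched as a simple lookup that assigns the worst-case score whenever a variable is unknown. The variable names and the 0-10 point scale below are hypothetical illustrations, not values from the text:

```python
from typing import Optional

# Hedged sketch: conservative defaults for missing risk-variable data.
# The variable names and 0-10 scale (10 = worst) are illustrative only.
WORST_CASE = {
    "coating_condition": 10,
    "soil_corrosivity": 10,
    "depth_of_cover": 10,
}

def score_variable(name: str, observed: Optional[int]) -> int:
    """Return the observed score, or the conservative default if unknown."""
    if observed is None:
        # Unknown => assume unfavorable conditions exist
        return WORST_CASE[name]
    return observed

print(score_variable("coating_condition", 3))    # documented inspection -> 3
print(score_variable("soil_corrosivity", None))  # unknown -> worst case 10
```

The "red flag" disappears automatically as soon as real information replaces the `None`.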
Uncertainty also plays a role in inspection information.
Many conditions continuously change over time. As inspection
information gets older, its relevance to current conditions
becomes more uncertain. All inspection data should therefore
be assumed to deteriorate in usefulness and, hence, in its
risk-reducing ability. This is further discussed in Chapter 2.
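One simple way to represent this deterioration of inspection information is to decay its risk-reduction credit with age. The exponential form and the half-life value below are illustrative assumptions, not a prescription from the text:

```python
def aged_credit(initial_credit: float, years_old: float,
                half_life: float = 5.0) -> float:
    """Decay the risk-reduction credit of an inspection as it ages.
    half_life is an assumed tuning parameter (years to lose half the credit)."""
    return initial_credit * 0.5 ** (years_old / half_life)

# A fresh inspection keeps full credit; older data counts for less.
print(round(aged_credit(10.0, 0.0), 2))   # 10.0
print(round(aged_credit(10.0, 5.0), 2))   # 5.0
print(round(aged_credit(10.0, 15.0), 2))  # 1.25
```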
The great promise of risk analysis is its use in decision
support. However, this promise is not without its own element
of risk-the misuse of risk analysis, perhaps through failure
to consider uncertainty. This is discussed as a part of risk
management in Chapter 15. As noted in Ref. [74]:
The primary problem with risk assessment is that the information on
which decisions must be based is usually inadequate. Because the
decisions cannot wait, the gaps in information must be bridged by
inference and belief, and these cannot be evaluated in the same way as
facts. Improving the quality and comprehensiveness of knowledge is
by far the most effective way to improve risk assessment, but some
limitations are inherent and unresolvable, and inferences will always
be required.
IV. Risk process-the general steps
Having defined some basic terms and discussed general risk
issues, we can now focus on the actual steps involved in risk
management. The following are the recommended basic steps.
These steps are all fully detailed in this text.
Step 1: Risk modeling
The acquisition of a risk assessment process, usually in
the form of a model, is a logical first step. A pipeline risk
assessment model is a set of algorithms or rules that use
available information and data relationships to measure levels
of risk along a pipeline. An assessment model can be selected
from some commercially available existing models, customized from existing models, or created "from scratch,"
depending on your requirements. Multiple models can be
run against the same set of data for comparisons and model evaluation.
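A minimal illustration of such a model is a rule set that turns available data into a relative risk score. The threat categories, weights, and 0-10 scale below are hypothetical, not the book's prescribed values:

```python
# Hypothetical index-style model: weighted sum of per-threat scores.
WEIGHTS = {"third_party": 0.3, "corrosion": 0.3, "design": 0.2, "operations": 0.2}

def relative_risk_score(scores: dict) -> float:
    """Combine per-threat scores (each 0 = best, 10 = worst) into one number."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

segment = {"third_party": 6, "corrosion": 2, "design": 4, "operations": 3}
print(round(relative_risk_score(segment), 2))  # 0.3*6 + 0.3*2 + 0.2*4 + 0.2*3 = 3.8
```

The same data set can be fed to alternative weightings or rule sets to compare models, as suggested above.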
Step 2: Data collection and preparation
Data collection entails the gathering of everything that can be
known about the pipeline, including all inspection data, original construction information, environmental conditions, operating and maintenance history, past failures, and so on. Data
preparation is an exercise that results in data sets that are ready
to be read into and used directly by the risk assessment model.
A collection of tools enables users to smooth or enhance data
points into zones of influence, categories, or bands to convert
certain data sets into risk information. Data collection is discussed later in this chapter and data preparation issues are
detailed in Chapter 8.
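The "zones of influence" idea mentioned above can be sketched as spreading a point reading over nearby pipeline stations. The zone radius and the readings are invented for illustration:

```python
def apply_zone_of_influence(points, stations, radius):
    """For each pipeline station, return the worst (max) value among
    point readings within `radius` of it; None if no reading applies."""
    out = []
    for s in stations:
        nearby = [v for (loc, v) in points if abs(loc - s) <= radius]
        out.append(max(nearby) if nearby else None)
    return out

# Hypothetical soil-corrosivity readings (station, value), spread 500 ft either side.
readings = [(1000, 7), (2600, 4)]
print(apply_zone_of_influence(readings, [800, 1400, 2000, 2500], 500))
# [7, 7, None, 4]
```

Taking the maximum within the zone is one conservative choice; averaging or distance-weighting are alternatives.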
Step 3: Segmentation
Because risks are rarely constant along a pipeline, it is advantageous to segment the line into sections with constant
risk characteristics (dynamic segmentation) or otherwise
divide the pipeline into manageable pieces. Segmentation
strategies and techniques are discussed in Chapters 2 and 8.
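Dynamic segmentation, as described above, breaks the line wherever any risk characteristic changes, so every segment has constant properties. A minimal sketch, with invented stationing values:

```python
def dynamic_segments(length, breakpoints):
    """Split a pipeline [0, length] into segments of constant
    characteristics: a new segment starts at every attribute change."""
    cuts = sorted(set(b for b in breakpoints if 0 < b < length))
    edges = [0] + cuts + [length]
    return list(zip(edges[:-1], edges[1:]))

# Hypothetical attribute changes: coating type at 1200 ft, class location at 3000 and 4500 ft.
print(dynamic_segments(6000, [1200, 3000, 4500]))
# [(0, 1200), (1200, 3000), (3000, 4500), (4500, 6000)]
```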
Step 4: Assessing risks

Now the previously selected risk assessment model can be
applied to each segment to get a unique risk "score" for that
segment. These relative risk numbers can later be converted
into absolute risk numbers. Working with results of risk
assessments is discussed in Chapters 8, 14, and 15.

Step 5: Managing risks

Having performed a risk assessment for the segmented
pipeline, we now face the critical step of managing the risks. In
this area, the emphasis is on decision support-providing the
tools needed to best optimize resource allocation.
This process generally involves steps such as the following:

Analyzing data (graphically and with tables and simple statistics)
Calculating cumulative risks and trends
Creating an overall risk management strategy
Identifying mitigation projects
Performing what-ifs

These are fully discussed in subsequent chapters, especially
Chapter 15.
The first two steps in the overall process, (1) risk modeling
and (2) data collection, are sometimes done in reverse order.
An experienced risk modeler might begin with an examination
of the types and quantity of data available and from
that select a modeling approach. In light of this, the discussion
of data collection issues precedes the model-selection discussion.

V. Data collection

Data and information are essential to good risk assessment.
Appendix G shows some typical information-gathering efforts
that are routinely performed by pipeline operators. After several
years of operation, some large databases will have developed. Will
these pieces of data predict pipeline failures? Only
in extreme cases. Will they, in aggregate, tell us where risk hot
spots are? Certainly. We obviously feel that all of this information
is important-we collect it, base standards on it, base regulations
on it, etc. It just needs to be placed into a risk context so
that a picture of the risk emerges and better resource allocation
decisions can be made based on that picture. The risk model
transforms the data into risk knowledge.
Given the importance of data to risk assessment, it is important
to have a clear understanding of the data collection process.
There exists a discipline to measuring. Before the data-gathering
effort is started, four questions should be addressed:

1. What will the data represent?
2. How will the values be obtained?
3. What sources of variation exist?
4. Why are the data being collected?

What will the data represent?

The data are the sum of our knowledge about the pipeline section:
everything we know, think, and feel about it-when it was
built, how it was built, how it is operated, how often it has failed
or come close, what condition it is in now, what threats exist,
what its surroundings are, and so on-all in great detail. Using
the risk model, this compilation of information will be transformed
into a representation of risk associated with that section.
Inherent in the risk numbers will be a complete evaluation
of the section's environment and operation.

How will the values be obtained?

Some rules for data acquisition will often be necessary. Issues
requiring early standardization might include the following:
Who will be performing the evaluations? The data can be
obtained by a single evaluator or team of evaluators who will
visit the pipeline operations offices personally to gather the
information required to make the assessment. Alternatively,
each portion of a pipeline system can be evaluated by those
directly involved in its operations and maintenance. This
becomes a self-evaluation in some respects. Each approach
has advantages. In the former, it is easier to ensure consistency; in the latter, acceptance by the workforce might be greater.
What manuals or procedures will be used? Steps should
be taken to ensure consistency in the evaluations.
How often will evaluations be repeated? Reevaluations
should be scheduled periodically or the operators should be
required to update the records periodically.
Will “hard proof” or documentation be a requirement in
all cases? Or can the evaluator accept “opinion” data in some
circumstances? An evaluator will usually interview pipeline
operators to help assign risk scores. Possibly the most common question asked by the evaluator will be "How do you
know?” This should be asked in response to almost every
assertion by the interviewee(s). Answers will determine the
uncertainty around the item, and item scoring should reflect
this uncertainty. This issue is discussed in many of the
suggested scoring protocols in subsequent chapters.
What defaults are to be used when no information is
available? See the discussion on uncertainty in this chapter
and Chapter 2.
What sources of variation exist?

Typical sources of variation in a pipeline risk assessment include:

Differences in the pipeline section environments
Differences in the pipeline section operation
Differences in the amount of information available on the pipeline section
Evaluator-to-evaluator variation in information gathering and interpretation
Day-to-day variation in the way a single evaluator assigns scores

Every measurement has a level of uncertainty associated
with it. To be precise, a measurement should express this uncertainty:
10 ft ± 1 in., 15.7°F ± 0.2°F. This uncertainty value represents
some of the sources of variation previously listed:
operator effects, instrument effects, day-to-day effects, etc.
These effects are sometimes called measurement "noise," as
noted previously in the signal-to-noise discussion. The variations
that we are trying to measure, the relative pipeline risks,
are hopefully much greater than the noise. If the noise level is
too high relative to the variation of interest, or if the measurement
is too insensitive to the variation of interest, the data
become less meaningful. Reference [92] provides detailed
statistical methods for determining the "usefulness" of the
measurements.
If more than one evaluator is to be used, it is wise to quantify
the variation that may exist between the evaluators. This is easily
done by comparing scoring by different evaluators of the
same pipeline section. The repeatability of the evaluator can
be judged by having her perform multiple scorings of the same
section (this should be done without the evaluator's knowledge
that she is repeating a previously performed evaluation). If
these sources of variation are high, steps should be taken to
reduce the variation. These steps may include:

Improved documentation and procedures
Evaluator training
Refinement of the assessment technique to remove more subjectivity
Changes in the information-gathering activity
Use of only one evaluator

Why are the data being collected?

Clearly defining the purpose for collecting the data is important,
but often overlooked. The purpose should tie back to
the mission statement or objective of the risk management
program. The underlying reason may vary depending on
the user, but it is hoped that the common link will be the
desire to create a better understanding of the pipeline and
its risks in order to make improvements in the risk picture.
Secondary reasons, or reasons embedded in the general purpose,
may include:

Identify relative risk hot spots
Ensure regulatory compliance
Set insurance rates
Define acceptable risk levels
Prioritize maintenance spending
Build a resource allocation model
Assign dollar values to pipeline systems
Track pipelining activities
Having built a database for risk assessment purposes, some
companies find much use for the information other than risk
management. Since the information requirements for comprehensive risk assessment are so encompassing, these databases
often become a central depository and the best reference source
for all pipeline inquiries.
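The evaluator-to-evaluator comparison suggested earlier (scoring the same section repeatedly and independently) can be quantified with simple statistics. The scores below are fabricated for illustration:

```python
from statistics import mean, pstdev

# Hypothetical scores given to the SAME pipeline section, three times each,
# by three evaluators.
scores = {
    "evaluator_A": [42, 44, 43],
    "evaluator_B": [41, 42, 42],
    "evaluator_C": [55, 54, 56],
}

# Repeatability: spread of each evaluator's own repeated scorings.
for name, s in scores.items():
    print(name, "repeatability (std dev):", round(pstdev(s), 2))

# Evaluator-to-evaluator variation: spread between evaluators' averages.
print("between-evaluator spread:", round(pstdev([mean(s) for s in scores.values()]), 2))
```

A between-evaluator spread much larger than the individual repeatability (as evaluator C's averages suggest here) would trigger the corrective steps listed above.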
VI. Conceptualizing a risk assessment approach
Checklist for design
As the first and arguably the most important step in risk
management, an assessment of risk must be performed.
Many decisions will be required in determining a risk assessment
approach. While all decisions do not have to be made during
initial model design, it is useful to have a rather complete
list of issues available early in the process. This might help to
avoid backtracking in later stages, which can result in significant
nonproductive time and cost. For example, is the risk
assessment model to be used only as a high-level screening tool
or might it ultimately be used as a stepping stone to a risk
expressed in absolute terms? The earlier this determination is
made, the more direct will be the path between the model’s
design and its intended use.
The following is a partial list of considerations in the design
of a risk assessment system. Most of these are discussed in
subsequent paragraphs of this chapter.
1. Purpose-A short, overall mission statement including the
objectives and intent of the risk assessment project.
2. Audience-Who will see and use the results of the risk assessment?
General public or special interest groups
Local, state, or federal regulators
Company-all employees
Company-management only
Company-specific departments only
3. Uses-How will the results be used?
Risk identification-the acquisition of knowledge, such
as levels of integrity threats, failure consequences, and
overall system risk, to allow for comparison of pipeline
risk levels and evaluation of risk drivers
Resource allocation-where and when to spend discretionary and/or mandated capital and/or maintenance funds
Design or modify an operating discipline-create an
O&M plan consistent with risk management concepts
Regulatory compliance for risk assessment-if risk
assessment itself is mandated
Regulatory compliance for all required activities-flags
are raised to indicate potential noncompliances
Regulatory compliance waivers-where risk-based justifications provide the basis to request waivers of specific
integrity assessment or maintenance activities
Project approvals-cost/benefit calculations, project
prioritizations and justifications
Preventive maintenance schedules-creating multiyear
integrity assessment plans or overall maintenance priorities and schedules
Due diligence-investigation and evaluation of assets
that might be acquired, leased, abandoned, or sold, from a
risk perspective
Liability reduction-reduce the number, frequency, and
severity of failures, as well as the severity of failure
consequences, to lower current operating and indirect
liability-related costs
Risk communications-present risk information to a
number of different audiences with different interests and
levels of technical abilities
4. Users-This might overlap the audience group:
Internal only
Technical staff only-engineering, compliance, integrity,
and information technology (IT) departments
Planning department-facility expansion, acquisitions,
authorization, technical support, and operations
District-level supervisors-maintenance and operations
Regulators-if regulators are shown the risk model or its results
Other oversight-city council, investment partners,
insurance carrier, etc.-if access is given in order to do
what-ifs, etc.
Public presentations-public hearings for proposed projects
5. Resources-Who and what is available to support the effort:
Data-type, format, and quality of existing data
Software environments-suitability as residence for the risk model
Communications and data management systems
Staff-availability of qualified people to design the
model and populate it with required data
Money-availability of funds to outsource data collection, database and model design, etc.
Industry-access to best industry practices, standards,
and knowledge
6. Design-choices in model features, format, and capabilities:
Failure causes considered-corrosion, sabotage, land
movements, third party, human error, etc.
Consequences considered-public safety only, environment, cost of service interruption, employee safety, etc.
Facilities covered-pipe only, valves, fittings, pumps,
tanks, loading facilities, compressor stations, etc.
Scoring-define scoring protocols, establish point
ranges (resolution)
Direction of scale-higher points can indicate either
more safety or more risk
Point assignments-addition of points only, multiplications, conditionals (if X then Y), category weightings,
independent variables, flat or multilevel structures
Resolution issues-range of diameters, pressures, etc.
Defaults-philosophy of assigning values when little or
no information is available
Zone-of-influence distances-for what distance does a
piece of data provide evidence on adjacent lengths of pipe
Relative versus absolute-choice of presentation format
and possibly model approach
Reporting-types and frequency of output and presentations needed
General beliefs
In addition to basic assumptions regarding the risk assessment
model, some philosophical beliefs underpin this entire book. It
is useful to state these clearly at this point, so the reader may be
alerted to any possible differences from her own beliefs. These
are stated as beliefs rather than facts since they are arguable and
others might disagree to some extent:
Risk management techniques are fundamentally decision
support tools. Pipeline operators in particular will find most
valuable a process that takes available information and
assimilates it into some clear, simple results. Actions can
then be based directly on those simple results.
We must go through some complexity in order to achieve
“intelligent simplification.” Many processes, originating
from sometimes complex scientific principles, are “behind
the scenes” in a good risk assessment system. These must be
well documented and available, but need not interfere with
the casual users of the methodology (everyone does not need
to understand the engine in order to benefit from use of the
vehicle). Engineers will normally seek a rational basis
underpinning a system before they will accept it. Therefore,
the basis must be well documented.
In most cases, we are more interested in identifying locations
where a potential failure mechanism is more aggressive
rather than predicting the length of time the mechanism must
be active before failure occurs.
A proper amount of modeling resolution is needed.
The model should be able to quantify the benefit of
any and all actions, from something as simple as "add
2 new ROW markers" all the way up to "reroute the entire pipeline."
Many variables impact pipeline risk. Among all possible
variables, choices are required that yield a balance between a
comprehensive model (one that covers all of the important
stuff) and an unwieldy model (one with too many relatively
unimportant details). Users should be allowed to determine
their own optimum level of complexity. Some will choose to
Conceptualizing a risk assessment approach 1/13
capture much detailed information because they already
have it available; others will want to get started with a very
simple framework. However, by using the same overall risk
assessment framework, results can still be compared: from
very detailed approaches to overview approaches.
Resource allocation (or reallocation) is normally the most
effective way to practice risk management. Costs must therefore play a role in risk management. Because resources are
finite, the optimum allocation of those scarce resources is essential.
The methodology should “get smarter” as we ourselves
learn. As more information becomes available or as new
techniques come into favor, the methodology should be flexible enough to incorporate the new knowledge, whether that
new knowledge is in the form of hard statistics, new beliefs,
or better ways to combine risk variables.
Methodology should be robust enough to apply to small as
well as large facilities, allowing an operator to divide a large
facility into subsets for comparisons within a system as well
as between systems.
Methodology should have the ability to distinguish between
products handled by including critical fluid properties,
which are derived from easy-to-obtain product information.
Methodology should be easy to set up on paper or in an
electronic spreadsheet and also easy to migrate to more
robust database software environments for more rigorous applications.
Methodology documentation should provide the user with
simple steps, but also provide the background (sometimes
complex) underlying the simple steps.
Administrative elements of a risk management program are
necessary to ensure continuity and consistency of the effort.
Note that if the reader concurs with these beliefs, the bulleted
items above can form the foundation for a model design or
an inquiry to service providers who offer pipeline risk
assessment/risk management products and services.
Scope and limitations
Having made some preliminary decisions regarding the risk
management program's scope and content, some documentation should be established. This should become a part of the
overall control document set as discussed in Chapter 15.
Because a pipeline risk assessment cannot be all things at
once, a statement of the program’s scope and limitations is usually appropriate. The scope should address exactly what portions of the pipeline system are included and what risks are
being evaluated. The following statements are examples of
scope and limitation statements that are common to many
relative risk assessments.
This risk assessment covers all pipe and appurtenances that are a part
of the ABC Pipeline Company from Station Alpha to Station Beta as
shown on system maps.
This assessment is complete and comprehensive in terms of its ability to capture all pertinent information and provide meaningful analyses of current risks. Since the objective of the risk assessment is to
provide a useful tool to support decision making, and since it is
intended to continuously evolve as new information is received, some
aspects of academician-type risk assessment methodologies are intentionally omitted. These are not thought to produce limitations in the
assessment for its intended use but rather are deviations from other
possible risk assessment approaches. These deviations include the following:
Relative risks only: Absolute risk estimations are not included
because of their highly uncertain nature and potential for misunderstanding. Due to the lack of historical pipeline failure data for various
failure mechanisms, and incomplete incident data for a multitude of
integrity threats and release impacts, a statistically valid database is
not thought to be available to adequately quantify the probability of
a failure (e.g., failures/km-year), the monetized consequences of a
failure (e.g., dollars/failure), or the combined total risk of a failure
(e.g., dollars/km-year) on a pipeline-specific basis.
Certain consequences: The focus of this assessment is on risks to public safety and the environment. Other consequences such as cost of
business interruption and risks to company employees are not specifically quantified. However, most other consequences are thought to be
proportional to the public safety and environmental threats so the
results will generally apply to most consequences.
Abnormal conditions: This risk assessment shows the relative risks
along the pipeline during its operation. The focus is on abnormal conditions, specifically the unintentional releases of product. Risks from
normal operations include those from employee vehicle and watercraft operation; other equipment operation; use of tools and cleaning
and maintenance fluids; and other aspects that are considered to add
normal and/or negligible additional risks to the public. Potential construction risks associated with new pipeline installations are also not included.
Insensitivity to length: The pipeline risk scores represent the relative
level of risk that each point along the pipeline presents to its surroundings; that is, the scores are insensitive to length. If two pipeline segments,
100 and 2600 ft, respectively, have the same risk score, then each
point along the 100-ft segment presents the same risk as does each
point along the 2600-ft length. Of course, the 2600-ft length presents
more overall risk than does the 100-ft length, because it has many
more risk-producing points.
Note: With regard to length sensitivity, a cumulative risk calculation
adds the length aspect so that a 100-ft length of pipeline with one risk
score can be compared against a 2600-ft length with a different
risk score.
Use of judgment: As with any risk assessment methodology, some
subjectivity in the form of expert opinion and engineering judgment
is required when "hard" data provide incomplete knowledge. This is
a limitation of this assessment only in that it might be considered a
limitation of all risk assessments. See also discussions in this section
dealing with uncertainty.
Related to these statements is a list of assumptions that
might underlie a risk assessment. An example of documented
assumptions that overlap the above list to some extent is
provided elsewhere.
Formal vs. informal risk management
Although formal pipeline risk management is growing in popularity among pipeline operators and is increasingly mandated
by government regulations, it is important to note that risk management has always been practiced by these pipeline operators.
Every time a decision is made to spend resources in a certain
way, a risk management decision has been made. This informal
approach to risk management has served us well, as evidenced
by the very good safety record of pipelines versus other modes
of transportation. An informal approach to risk management
can have the further advantages of being simple, easy to
comprehend and to communicate, and the product of expert
engineering consensus built on solid experience.
However, an informal approach to risk management does
not hold up well to close scrutiny, since the process is often
poorly documented and not structured to ensure objectivity
and consistency of decision making. Expanding public concerns over human safety and environmental protection have
contributed significantly to raising the visibility of risk
management. Although the pipeline safety record is good, the
violent intensity and dramatic consequences of some accidents,
an aging pipeline infrastructure, and the continued urbanization of formerly rural areas have increased perceived, if not
actual, risks.
Historical (informal) risk management, therefore, has these
pluses and minuses:

Strengths
Simple/intuitive
Consensus is often sought
Utilizes experience and engineering judgment
Successful, based on pipeline safety record

Reasons to change
Consequences of mistakes are more serious
Lack of consistency and continuity in a changing workforce
Need for better evaluation of complicated risk factors and
their interactions

Developing a risk assessment model

In moving toward formal risk management, a structure and
process for assessing risks is required. In this book, this
structure and process is called the risk assessment model. A
risk assessment model can take many forms, but the best ones
will have several common characteristics as discussed later in
this chapter. They will also all generally originate from some
basic techniques that underlie the final model-the building blocks.
It is useful to become familiar with these building blocks
of risk assessment because they form the foundation of
most models and may be called on to tune a model from time
to time. Scenarios, event trees, and fault trees are the core
building blocks of any risk assessment. Even if the model
author does not specifically reference such tools, models
cannot be constructed without at least a mental process that
parallels the use of these tools. They are not, however, risk
assessments themselves. Rather, they are techniques and
methodologies we use to crystallize and document our understanding
of sequences that lead to failures. They form a basis
for a risk model by forcing the logical identification of all risk
variables. They should not be considered risk models themselves,
in this author's opinion, because they do not pass the
tests of a fully functional model, which are proposed later in
this chapter.

Risk assessment building blocks

Eleven hazard evaluation procedures in common use by
the chemical industry have been identified [9]. These are examples
of the aforementioned building blocks that lay the foundation
for a risk assessment model. Each of these tools has
strengths and weaknesses, including costs of the evaluation and
appropriateness to a situation:

Safety review
Relative ranking
Preliminary hazard analysis
"What-if" analysis
FMEA analysis
Fault-tree analysis
Event-tree analysis
Human-error analysis

Some of the more formal risk tools in common use by the
pipeline industry include some of the above and others as
discussed below.
HAZOP. A hazard and operability study is a team technique
that examines all possible failure events and operability
issues through the use of keywords prompting the team for
input in a very structured format. Scenarios and potential
consequences are identified, but likelihood is usually not
quantified in a HAZOP. Strict discipline ensures that all
possibilities are covered by the team. When done properly,
the technique is very thorough but time consuming and
costly in terms of person-hours expended. HAZOP and
failure modes and effects analysis (FMEA) studies are
especially useful tools when the risk assessments include
complex facilities such as tank farms and pump/compressor stations.
Fault-tree/event-tree analysis. Tracing the sequence of
events backward from a failure yields a fault tree. In an event
tree, the process begins from an event and progresses forward through all possible subsequent events to determine
possible failures. Probabilities can be assigned to each
branch and then combined to arrive at complete event probabilities. An example of this application is discussed below
and in Chapter 14.
Scenarios. “Most probable” or “most severe” pipeline failure
scenarios are envisioned. Resulting damages are estimated
and mitigating responses and preventions are designed. This
is often a modified fault-tree or event-tree analysis.
Scenario-based tools such as event trees and fault trees are
particularly common because they underlie every other
approach. They are always used, even if informally or as a
thought process, to better understand the event sequences that
produce failures and consequences. They are also extremely
useful in examining specific situations. They can assist in incident investigation, determining optimum valve siting, safety
system installation, pipeline routing, and other common
pipeline analyses. These are often highly focused applications.
These techniques are further discussed in Chapter 14.
Figure 1.3 is an example of a partial event-tree analysis. The
event tree shows the probability of a certain failure-initiation
event, possible next events with their likelihood, interactions
of some possible mitigating events or features, and, finally,
possible end consequences. This illustration demonstrates
Figure 1.3 Partial event-tree example: third-party damage (one strike per 2 years); (1/100) large rupture; ignition or no ignition; given ignition, (1/600) detonation, (500/600) high thermal damage, (99/600) torch fire only; other branches include (1/20) corrosion and no damage (no event).
how quickly the interrelationships make an event tree very
large and complex, especially when all possible initiating
events are considered. The probabilities associated with events
will also normally be hard to determine. For example, Figure 1.3
suggests that for every 600 ignitions of product from a large
rupture, one will result in a detonation, 500 will result in
high thermal damages, and 99 will result in localized fire
damage only. This only occurs after a 1/10 chance of ignition,
which occurs after a 1/100 chance of a large rupture, and after a
once-every-two-years line strike. In reality, these numbers
will be difficult to estimate. Because the probabilities must
then be combined (multiplied) along any path in this diagram,
inaccuracies will build quickly.
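The path-probability multiplication described above can be sketched in a few lines. The strike frequency, rupture probability, and 600-way ignition-outcome split below follow the Figure 1.3 discussion; the ignition probability itself is an assumed placeholder, not a value from the text.

```python
# Sketch: combining event-tree branch probabilities by multiplying
# along each path, using the Figure 1.3 discussion frequencies.

strike_per_year = 0.5          # third-party strike: once every 2 years
p_large_rupture = 1 / 100      # large rupture, given a strike
p_ignition = 0.10              # ASSUMED chance of ignition, given rupture

# Outcome split given ignition (per 600 ignitions, from the text)
outcomes = {"detonation": 1 / 600,
            "high thermal damage": 500 / 600,
            "torch fire only": 99 / 600}

for outcome, p_branch in outcomes.items():
    freq = strike_per_year * p_large_rupture * p_ignition * p_branch
    print(f"{outcome}: {freq:.2e} events/year")
```

Because every path frequency is a product of several uncertain factors, a small relative error in each branch compounds quickly, which is the point made above.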
Screening analyses. This is a quantitative or qualitative
technique in which only the most critical variables are
assessed. Certain combinations of variable assessments are
judged to represent more risk than others. In this fashion, the
process acts as a high-level screening tool to identify relatively risky portions of a system. It requires elements of subjectivity and judgment and should be carefully documented.
While a screening analysis is a logical process to be used
subsequent to almost any risk assessment, it is noted here as a
possible stand-alone risk tool. As such, it takes on many
characteristics of the more complete models to be described,
especially the scoring-type or indexing method.
VII. Risk assessment issues
In comparing risk assessment approaches, some issues arise
that can lead to confusion. The following subsections discuss
some of those issues.
Absolute vs. relative risks
Risks can be expressed in absolute terms, for example, “number of fatalities per mile year for permanent residents
within one-half mile of pipeline. . . .” Also common is the use
of relative risk measures, whereby hazards are prioritized
such that the examiner can distinguish which portions of the
facilities pose more risk than others. The former is a frequency-based measure that estimates the probability of a specific
type of failure consequence. The latter is a comparative measure of current risks, in terms of both failure likelihood and
consequence.
A criticism of the relative scale is its inability to compare
risks from dissimilar systems (pipelines versus highway
transportation, for example) and its inability to directly provide
failure predictions. However, the absolute scale often fails in
relying heavily on historical point estimates, particularly for
rare events that are extremely difficult to quantify, and in the
unwieldy numbers that often generate a negative reaction from
the public. The absolute scale also often implies a precision that
is simply not available to any risk assessment method. So, the
“absolute scale” offers the benefit of comparability with other
types of risks, while the “relative scale” offers the advantage
of ease-of-use and customizability to the specific risk being
assessed.
In practical applications and for purposes of communication, this is not really an important issue. The two scales are not
mutually exclusive. Either scale can be readily converted to the
other scale if circumstances so warrant. A relative risk scale is
converted to an absolute scale by correlating relative risk scores
with appropriate historical failure rates or other risk estimates
expressed in absolute terms. In other words, the relative scale is
calibrated with some absolute numbers. The absolute scale
is converted to more manageable and understandable (nontechnical) relative scales by simple mathematical relationships.
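The calibration step described above, anchoring a relative scale to historical failure rates, can be as simple as fitting a curve through a few anchor points. The scores, rates, and exponential form below are invented for illustration only; any monotonic curve through the anchors serves the same purpose.

```python
import math

# Sketch: convert relative risk scores to absolute failure rates by
# calibrating against historical anchor points. All numbers invented.

# Anchor points: (relative score, observed failures per mile-year).
# Here a higher score means a safer segment, so the rate falls.
anchors = [(20, 1e-3), (80, 1e-5)]

# Fit rate = exp(a + b * score) through the two anchors.
(s1, r1), (s2, r2) = anchors
b = (math.log(r2) - math.log(r1)) / (s2 - s1)
a = math.log(r1) - b * s1

def absolute_rate(score):
    """Estimated failures per mile-year for a given relative score."""
    return math.exp(a + b * score)

print(absolute_rate(50))   # a mid-range score maps between the anchors
```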
A possible misunderstanding underlying this issue is the
common misconception that a precise-looking number,
expressed in scientific notation, is more accurate than a simple
number. In reality, either method should use the same available
data pool and be forced to make the same number of assumptions when data are not available. The use of subjective judgment is necessary in any risk assessment, regardless of how
results are presented.
Any good risk evaluation will require the generation of scenarios to represent all possible event sequences that lead to possible damage states (consequences). Each event in each
sequence is assigned a probability. The assigned probabilities
are assigned either in absolute terms or, in the case of a relative
risk application, relative to other probabilities. In either case,
the probability assigned should be based on all available information. For a relative model, these event trees are examined,
and critical variables with their relative weightings (based on
probabilities) are extracted. In a risk assessment expressing
results in absolute numbers, the probabilities must be preserved
in order to produce the absolute terms.
Combining the advantages of relative and absolute approaches
is discussed in Chapter 14.
Quantitative vs. qualitative models
It is sometimes difficult to make distinctions between qualitative and quantitative analyses. Most techniques use numbers,
which would imply a quantitative analysis, but sometimes
the numbers are only representations of qualitative beliefs.
For example, a qualitative analysis might use scores of 1, 2, and 3
to replace the labels of “low,” “medium,” and “high.” To some,
these are insufficient grounds to now call the analysis quantitative.
The terms quantitative and qualitative are often used to distinguish the amount of historical failure-related data analyzed
in the model and the amount of mathematical calculations
employed in arriving at a risk answer. A model that exclusively
uses historical frequency data is sometimes referred to as quantitative whereas a model employing relative scales, even if later
assigned numbers, is referred to as qualitative or semi-quantitative. The danger in such labeling is that it implies a level of
accuracy that may not exist. In reality, the labels often tell more
about the level of modeling effort, cost, and data sources than
the accuracy of the results.
Subjectivity vs. objectivity
In theory, a purely objective model will strictly adhere to scientific practice and will have no opinion data. A purely subjective
model implies complete reliance on expert opinion. In practice,
no pipeline risk model fully adheres to either. Objectivity cannot be purely maintained while dealing with the real-world situation of missing data and variables that are highly confounded.
On the other hand, subjective models certainly use objective
data to form or support judgments.
Use of unquantifiable evidence
In any of the many difficult-to-quantify aspects of risk, some
would argue that nonstatistical analyses are potentially damaging. Although this danger of misunderstanding the role of a factor always exists, there is similarly the more immediate danger
of an incomplete analysis by omission of a factor. For example,
public education is seen by most pipeline professionals to be
a very important aspect of reducing the number of third-party damages and improving leak reporting and emergency
response. However, quantifying this level of importance and
correlating it with the many varied approaches to public education is quite difficult. A concerted effort to study this data is
needed to determine how they affect risk. In the absence of such
a study, most would agree that a company that has a strong public education program will achieve some level of risk reduction
over a company that does not. A risk model should reflect this
belief, even if it cannot be precisely quantified. Otherwise,
the benefits of efforts such as public education would not be
supported by risk assessment results.
In summary, all methodologies have access to the same databases (at least when publicly available) and all must address
what to do when data are insufficient to generate meaningful
statistical input for a model. Data are not available for most of
the relevant risk variables of pipelines. Including risk variables
that have insufficient data requires an element of “qualitative”
evaluation. The only alternative is to ignore the variable, resulting in a model that does not consider variables that intuitively
seem important to the risk picture. Therefore, all models
that attempt to represent all risk aspects must incorporate
qualitative evaluations.
VIII. Choosing a risk assessment technique
Several questions to the pipeline operator may direct the choice
of risk assessment technique:
What data do you have?
What is your confidence in the predictive value of the data?
What resources are available in terms of money, person-hours, and time?
What benefits do you expect to accrue in terms of cost savings, reduced regulatory burdens, improved public support,
and operational efficiency?
These questions should be kept in mind when selecting
the specific risk assessment methodology, as discussed further in
Chapter 2. Regardless of the specific approach, some properties
of the ideal risk assessment tool will include the following:
Appropriate costs. The value or benefits derived from the
risk assessment process should clearly outweigh the costs of
setting up, implementing, and maintaining the program.
Ability to learn. Because risk is not constant over the length
of a pipeline or over a period of time, the model must be able
to “learn” as information changes. This means that new data
should be easy to incorporate into the model.
Signal-to-noise ratio. Because the model is in effect a measurement tool, it must have a suitable signal-to-noise ratio, as
discussed previously. This means that the “noise,” the
amount of uncertainty in the measurement (resulting from
numerous causes), must be low enough so that the “signal,”
the risk value of interest, can be read. This is similar to the
accuracy of the model, but involves additional considerations that surround the high level of uncertainty associated
with risk management.
Comparisons can be made against fixed or floating “standards” or benchmarks.
Finally, a view to the next step, risk management, should be
taken. A good risk assessment technique will allow a smooth
transition into the management of the observed risks. This
means that provisions for resource allocation modeling and the
evolution of the overall risk model must be made. The ideal risk
assessment will readily highlight specific deficiencies and
point to appropriate mitigation possibilities.
We noted previously that some risk assessment techniques
are more appropriately considered to be “building blocks”
while others are complete models. This distinction has to do
with the risk assessment’s ability to not only measure risks, but
also to directly support risk management. As it is used here, a
complete model is one that will measure the risks at all points
along a pipeline, readily show the accompanying variables
driving the risks, and thereby directly indicate specific system
vulnerabilities and consequences. A one-time risk analysis-a
study to determine the risk level-may not need a complete
model. For instance, an event-tree analysis can be used to estimate overall risk levels or risks from a specific failure mode.
However, the risk assessment should not be considered to be
a complete model unless it is packaged in such a way that it
efficiently provides input for risk management.
Four tests
Four informal tests are proposed here by which the difference
between the building block and complete model can be seen.
The proposition is that any complete risk assessment model
should be able to pass the following four tests:
Model performance tests
(See also Chapter 8 for discussion of model sensitivity analyses.) In examining a proposed risk assessment effort, it may be
wise to evaluate the risk assessment model to ensure the following:
All failure modes are considered
All risk elements are considered and the most critical ones
Failure modes are considered independently as well as in combination
All available information is being appropriately utilized
Provisions exist for regular updates of information, including new types of data
Consequence factors are separable from probability factors
Weightings, or other methods to recognize the relative
importance of factors, are established
The rationale behind weightings is well documented and
A sensitivity analysis has been performed
The model reacts appropriately to failures of any type
Risk elements are combined appropriately (“and” versus
“or” combinations)
Steps are taken to ensure consistency of evaluation
Risk assessment results form a reasonable statistical distribution (outliers?)
There is adequate discrimination in the measured results
(signal-to-noise ratio)
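Two of the checklist items above (a reasonable statistical distribution of results and adequate discrimination) can be checked mechanically once segment scores exist. The scores below are invented, and the two-standard-deviation outlier rule and the assumed per-score noise are illustrative conventions, not prescriptions from the text.

```python
import statistics

# Sketch: screen segment risk scores for outliers and for adequate
# discrimination (signal-to-noise). Scores are invented examples.
scores = [42, 45, 47, 44, 46, 43, 71, 45, 44, 46]

mean = statistics.mean(scores)
sd = statistics.stdev(scores)

# Outliers: scores more than two standard deviations from the mean.
outliers = [s for s in scores if abs(s - mean) > 2 * sd]

# Crude discrimination check: spread of scores relative to the
# assumed measurement noise in any single score.
assumed_noise = 2.0                     # ASSUMED score uncertainty
discrimination = (max(scores) - min(scores)) / assumed_noise

print("outliers:", outliers)
print("discrimination ratio:", discrimination)
```

Flagged outliers are not necessarily errors; they are the segments whose scores should be interrogated first, in the spirit of the “Why is that?” test below.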
1. The “I didn’t know that!” test
2. The “Why is that?” test
3. The “point to a map” test
4. The “What about ___?” test
Again, these tests are very informal but illustrate some key
characteristics that should be present in any methodology that
purports to be a full risk assessment model. In keeping with the
informality, the descriptions below are written in the familiar,
instructional voice used as if speaking directly to the operator
of a pipeline.
The “I didn’t know that!” test (new knowledge)
The risk model should be able to do more than you can do in
your head or even with an informal gathering of your experts.
Most humans can simultaneously consider a handful of factors
in making a decision. The real-world situation might be influenced by dozens of variables simultaneously. Your model
should be able to simultaneously consider dozens or even
hundreds of pieces of information.
The model should tell you things you did not already know.
Some scenario-based techniques only tend to document what is
already obvious. If there aren’t some surprises in the assessment results, you should be suspicious of the model’s completeness. It is difficult to believe that simultaneous consideration of
many variables will not generate some combinations in certain
locations that were not otherwise intuitively obvious.
Naturally, when given a surprise, you should then be skeptical and ask to be convinced. That helps to validate your model
and leads to the next points.
The “Why is that?” test (drill down)
So let’s say that the new knowledge proposed by your model is
that your pipeline XYZ in Barker County is high risk. You say,
“What?! Why is that high risk?”You should be initially skeptical, by the way, as noted before. Well, the model should be able
to tell you its reasons; perhaps it is because coincident occurrences of population density, a vulnerable aquifer, and state
park lands, coupled with 5 years since a close interval survey,
no ILI, high stress levels, and a questionable coating condition
make for a riskier than normal situation. Your response should
be to say, “Well, okay, looking at all that, it makes sense. . . .” In
other words, you should be able to interrogate the model and
receive acceptable answers to your challenges. If an operator’s
intuition is not consistent with model outputs, then one or the
other is in error. Resolution of the discrepancy will often
improve the capabilities of both operator and model.
The “point to a map” test (location specific)
This test is often overlooked. Basically, it means that you
should be able to pull out a map of your system, put your finger
on any point along the pipeline, and determine the risk at that
point, either relative or absolute. Furthermore, you should be
able to determine specifically the corrosion risk, the third-party
risk, the types of receptors, the spill volume, etc., and quickly
determine the prime drivers of the apparently higher risk. This
may seem an obvious thing for a risk assessment to do, but
many recommended techniques cannot do this. Some have predetermined their risk areas so they know little about other areas
(and one must wonder about this predetermination). Others do
not retain information specific to a given location. Others
do not compile risks into summary judgments. The risk information should be a characteristic of the pipeline at all points,
just like the pipe specification.
The “What about ___?” test (a measure of completeness)
Someone should be able to query the model on any aspect of
risk, such as “What about subsidence risk? What about stress
corrosion cracking?” Make sure all probability issues are
addressed. All known failure modes should be considered, even
if they are very rare or have never been observed for your
particular system. You never know when you will be comparing
your system against one that has that failure mode or will
be asked to perform a due diligence on a possible pipeline acquisition.
IX. Quality and risk management
In many management and industry circles, quality is a popular concept, extending far beyond the most common uses of
the term. As a management concept, it implies a way of thinking and a way of doing business. It is widely believed that attention to quality concepts is a requirement to remain in business
in today’s competitive world markets.
Risk management can be thought of as a method to improve
quality. In its best application, it goes beyond basic safety
issues to address cost control, planning, and customer satisfaction aspects of quality. For those who link quality with competitiveness and survival in the business world, there is an
immediate connection to risk management. The prospect of a
company failure due to poor cost control or poor decisions is a
risk that can also be managed.
Quality is difficult to define precisely. While several different definitions are possible, they typically refer to concepts
such as (1) fitness-for-use, (2) consistency with specifications,
and (3) freedom from defects, all with regard to the product or
service that the company is producing. Central to many of the
quality concepts is the notion of reducing variation. This is
the discipline that may ultimately be the main “secret” of the
most successful companies. Variation normally is evidence
of waste. Performing tasks optimally usually means little
variation is seen.
All definitions incorporate (directly or by inference) some
reference to customers. Broadly defined, a customer is anyone
to whom a company provides a product, service, or information. Under this definition, almost any exchange or relationship
involves a customer. The customer drives the relationship
because he specifies what product, service, or information he
wants and what he is willing to pay for it.
In the pipeline business, typical customers include those
who rely on product movements for raw materials, such as
refineries; those who are end users of products delivered,
such as residential gas users; and those who are affected by
pipelining activities, such as adjacent landowners. As a
whole, customers ask for adequate quantities of products to be delivered:
With no service interruptions (reliability)
With no safety incidents
At lowest cost
This is quite a broad brush approach. To be more accurate,
the qualifiers of “no” and “lowest” in the preceding list must be
defined. Obviously, trade-offs are involved-improved safety
and reliability may increase costs. Different customers will
place differing values on these requirements as was previously
discussed in terms of acceptable risk levels.
For our purposes, we can view regulatory agencies as representing the public since regulations exist to serve the public
interest. The public includes several customer groups with
sometimes conflicting needs. Those vitally concerned with
public safety versus those vitally concerned with costs, for
instance, are occasionally at odds with one another. When a
regulatory agency mandates a pipeline safety or maintenance
program, this can be viewed as a customer requirement originating from that sector of the public that is most concerned with
the safety of pipelines. When increased regulation leads to
higher costs, the segment of the public more concerned with
costs will take notice.
As a fundamental part of the quality process, we must make a
distinction between types of work performed in the name of the
customer:
Value-added work. These are work activities that directly
add value, as defined by the customer, to the product or service. By moving a product from point A to point B, value has
been added to that product because it is more valuable (to the
customer) at point B than it was at point A.
Necessary work. These are work activities that are not
value-added, but are necessary in order to complete the value-added work. Protecting the pipeline from corrosion does not
directly move the product, but it is necessary in order to
ensure that the product movements continue uninterrupted.
Waste. This is the popular name for a category that includes
all activities performed that are unnecessary. Repeating a
task because it was done improperly the first time is called
rework and is included in this category. Tasks that are done
routinely, but really do not directly or indirectly support the
customer needs, are considered to be waste.
Profitability is linked to reducing the waste category while
optimizing the value-added and necessary work categories. A
risk management program is an integral part of this, as will be shown.
The simplified process for quality management goes something like this: The proper work (value added and necessary) is
identified by studying customer needs and creating ideal
processes to satisfy those needs in the most efficient manner.
Once the proper work is identified the processes that make up
that work should be clearly defined and measured. Deviations
from the ideal processes are waste. When the company can produce exactly what the customer wants without any variation in
that production, that company has gained control over waste in
its processes. From there, the processes can be even further
improved to reduce costs and increase output, all the while
measuring to ensure that variation does not return.
This is exactly what risk management should do: identify
needs, analyze cost versus benefit of various choices, establish
an operating discipline, measure all processes, and continuously
improve all aspects of the operation. Because the pipeline capacity is set by system hydraulics, line size, regulated operating limits, and other fixed constraints, gains in pipeline efficiencies are
made primarily by reducing the incremental costs associated
with moving the products. Costs are reduced by spending in
ways that reap the largest benefits, namely, increasing the reliability of the pipeline. Spending to prevent losses and service
interruptions is an integral part of optimizing pipeline costs.
The pipeline risk items considered in this book are all either
existing conditions or work processes. The conditions are characteristics of the pipeline environment and are not normally
changeable. The work processes, however, are changeable and
should be directly linked to the conditions. The purpose of
every work process, every activity, even every individual
motion is to meet customer requirements. A risk management
program should assess each activity in terms of its benefit from
a risk perspective. Because every activity and process costs
something, it must generate some benefit; otherwise it is
waste. Measuring the benefit, including the benefit of loss prevention, allows spending to be prioritized.
Rather than having a broad pipeline operating program to
allow for all contingencies, risk management allows the direction of more energy to the areas that need it more. Pipelining
activities can be fine-tuned to the specific needs of the various
pipeline sections.
Time and money should be spent in the areas where the
return (the benefit) is the greatest. Again, measurement systems are required to track progress, for without measurements,
progress is only an opinion.
The risk evaluation program described here provides a
tool to improve the overall quality of a pipeline operation.
It does not necessarily suggest any new techniques; instead
it introduces a discipline to evaluate all pipeline activities
and to score them in terms of their benefit to customer
needs. When an extra dollar is to be spent, the risk evaluation
program points to where that dollar will do the most good.
Dollars presently being spent on one activity may produce
more value to the customer if they were being spent another
way. The risk evaluation program points this out and measures it.
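The “where will the extra dollar do the most good” idea above amounts to ranking candidate activities by risk reduction per dollar spent. The activities, costs, and benefit figures below are purely illustrative; in practice they would come from the risk model’s measured benefits.

```python
# Sketch: prioritize spending by estimated risk reduction per dollar.
# Activities, costs, and risk-reduction scores are invented examples.
candidates = [
    {"activity": "extra ROW patrol",      "cost": 20_000, "risk_reduction": 4.0},
    {"activity": "close interval survey", "cost": 50_000, "risk_reduction": 12.0},
    {"activity": "public education push", "cost": 10_000, "risk_reduction": 3.0},
]

for c in candidates:
    c["benefit_per_dollar"] = c["risk_reduction"] / c["cost"]

# Spend first where each dollar buys the most risk reduction.
ranked = sorted(candidates, key=lambda c: c["benefit_per_dollar"], reverse=True)
for c in ranked:
    print(f'{c["activity"]}: {c["benefit_per_dollar"]:.2e} per $')
```

Note that the activity with the largest total risk reduction is not necessarily the best first purchase; the ranking rewards efficiency of spending, which is the point of the measurement discipline described above.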
X. Reliability
Reliability is often defined as the probability that equipment,
machinery, or systems will perform their required functions
satisfactorily under specific conditions within a certain time
period. This can also mean the duration or probability of failure-free performance under the stated condition.
As is apparent from this definition, reliability concepts are
identical to risk concepts in many regards. In fact, sometimes
the only differences are the scenarios of interest. Where risk
often focuses on scenarios involving fatality, injury, property
damage, etc., reliability focuses on scenarios that lead to equipment unavailability, repair costs, etc. [45]
Risk analysis is often more of a diagnostic tool, helping
us to better understand and make decisions about an overall
existing system. Reliability techniques are more naturally
applied to new structures or the performance of specific equipment.
Many of the same techniques are used, including FMEA,
root cause analyses, and event-tree/fault-tree analyses. This is
logical since many of the same issues underlie risk and reliability. These include failure rates, failure modes, mitigating or offsetting actions, etc.
Common reliability measurement and control efforts involve
issues of (1) equipment performance, as measured by availability, uptime, MTTF (mean time to failure), MTBF (mean time
between failures), and Weibull analyses; (2) reliability as a
component of operation cost or ownership costs, sometimes
measured by life-cycle cost; and (3) reliability analysis techniques applied to maintenance optimization, including reliability centered maintenance (RCM), predictive preventive
maintenance (PPM), and root cause analysis. Many of these
are, at least partially, risk analysis techniques, the results of
which can feed directly into a risk assessment model.
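The equipment-performance measures listed in item (1) follow directly from run and repair histories. The uptime and repair durations below are invented; taking MTBF as mean uptime between failures and availability as uptime over total time is a common convention, not a definition from the text.

```python
import statistics

# Sketch: basic reliability measures from a run history. The uptime
# and repair durations (hours) are invented examples.
uptimes = [900.0, 1100.0, 1000.0]   # hours of operation between failures
repairs = [10.0, 14.0, 12.0]        # hours down for each repair

mtbf = statistics.mean(uptimes)     # mean time between failures
mttr = statistics.mean(repairs)     # mean time to repair
availability = mtbf / (mtbf + mttr)

print(f"MTBF = {mtbf:.0f} h, MTTR = {mttr:.0f} h, "
      f"availability = {availability:.4f}")
```

Measures like these can feed a risk assessment model directly, for example as inputs to a service interruption evaluation of the kind discussed in Chapter 10.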
This text does not delve deeply into specialized reliability engineering concepts. Chapter 10, Service Interruption Risk, discusses issues of pipeline availability and
delivery failures.
Risk Assessment Process
I. Using this manual
To get answers quick!
Formal risk management can become a useful tool for pipeline
operators, managers, and others interested in pipeline safety
and/or efficiency. Benefits are not only obtained from an enhanced ability to improve safety and reduce risk, but experience has shown that the risk assessment process draws together
so much useful information into a central location that it
becomes a constant reference point and information repository
for decision making all across the organization.
The purpose of the pipeline risk assessment method
described in Chapters 3 through 7 of this book is to evaluate
a pipeline’s risk exposure to the public and to identify ways to
effectively manage that risk. Chapters 8 through 14 discuss
special risk assessment considerations, including special
pipeline facilities and the use of absolute risk results.
Chapter 15 describes the transition from risk assessment to risk
management.
While the topic of pipeline risk management does fill the pages
of this book, the process does not have to be highly complex or
expensive. Portions of this book can be used as a “cookbook” to
quickly implement a risk management system or simply provide ideas to pipeline evaluators. A fairly detailed pipeline risk
assessment system can be set up and functioning in a relatively
short time by just one evaluator.
A reader could adopt the risk assessment framework
described in Chapters 3 through 7 to begin assessing risk immediately. An overview of the base model with suggested weightings of all risk variables is shown in Risk Assessment at a
Glance, with each variable fully described in later chapters.
A risk evaluator with little or no pipeline operating experience
could most certainly adopt this approach, at least initially.
Similarly, an evaluator who wants to assess pipelines covering a
wide range of services, environments, and operators may wish
to use this general approach, since that was the original purpose
of the basic framework.
By using simple computer tools such as a spreadsheet or
desktop database to hold risk data, and then establishing some
administrative processes around the maintenance and use of the
information, the quick-start applicator now has a system to support risk management. Experienced risk managers may balk at
such a simplification of an often complex and time-consuming
process. However, the point is that the process and underlying
ideas are straightforward, and rapid establishment of a very
useful decision support system is certainly possible. It may not
be of sufficient rigor for a very detailed assessment, but the user
will nonetheless have a more formal structure from which to
better ensure consistency and completeness of decisions.
For pipeline operators
Whereas the approach described above is a way to get started
quickly, this tool becomes even more powerful if the user
customizes it, perhaps adding new dimensions to the process
to better suit his or her particular needs. As with any engineered system (the risk assessment system described herein
employs many engineering principles), a degree of due
diligence is also warranted. The experienced pipeline operator should challenge the example point schedules: Do they
match your operating experience? Read the reasoning
behind the schedules: Do you agree with that reasoning?
Invite (or require) input from employees at all levels. Most
pipeline operators have a wealth of practical expertise that can
be used to fine-tune this tool to their unique operating environment. Although customizing can create some new issues,
problems can be avoided for the most part by carefully
planning and controlling the process of model setup and maintenance.
The point here again is to build a useful tool, one that is
regularly used to aid in everyday business and operating decision making, one that is accepted and used throughout the
organization. Refer also to Chapter 1 for ideas on evaluating the
measuring capability of the tool.
II. Beginning risk management
Chapter 1 suggests the following as basic steps in risk management:
Step 1: Acquire a risk assessment model
A pipeline risk assessment model is a set of algorithms or
“rules” that use available information and data relationships to
measure levels of risk along a pipeline. A risk assessment
model can be selected from some commercially available
models, customized from existing models, or created “from
scratch” depending on requirements.
Step 2: Collect and prepare data
Data preparation comprises the processes that result in data sets that
are ready to be read into and used by the risk assessment model.
Step 3: Devise and implement a segmentation strategy
Because risks are rarely constant along a pipeline, it is advantageous to first segment the line into sections with constant risk
characteristics (dynamic segmentation) or otherwise divide the
pipeline into manageable pieces.
Step 4: Assess the risks
After a risk model has been selected and the data have been prepared, risks along the pipeline route can be assessed. This is the
process of applying the algorithm, the rules, to the collected
data. Each pipeline segment will get a unique risk score that
reflects its current condition, environment, and the operating/
maintenance activities. These relative risk numbers can later be
converted into absolute risk numbers. Risk assessment will need
to be repeated periodically to capture changing conditions.
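The mechanics of this step can be sketched in a few lines of code. The attribute names, weights, and point scales below are invented for illustration only and are not taken from any model in this book.

```python
# Minimal sketch of Step 4: apply a rule set (the "algorithm") to prepared
# segment data so that each segment receives a unique relative risk score.
# All attribute names, weights, and scales are hypothetical.

def score_segment(seg):
    """Return a relative score, 0-100; higher means safer conditions here."""
    score = 0.0
    score += min(seg["wall_thickness_in"] / 0.5, 1.0) * 30  # credit for heavier wall
    score += (1.0 - seg["soil_corrosivity"]) * 40           # 0 = benign soil, 1 = severe
    score += (1.0 - seg["population_density"]) * 30         # normalized 0..1
    return round(score, 1)

segments = {
    "MP 0.0-1.2": {"wall_thickness_in": 0.375, "soil_corrosivity": 0.2, "population_density": 0.1},
    "MP 1.2-3.0": {"wall_thickness_in": 0.250, "soil_corrosivity": 0.7, "population_density": 0.6},
}

scores = {name: score_segment(seg) for name, seg in segments.items()}
```

Repeating the assessment to capture changing conditions is then just a matter of re-running the same rules against refreshed data sets.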
Step 5: Manage the risks
This step consists of determining what actions are appropriate given the risk assessment results. This is discussed in
Chapter 15.
Model design and data collection are often the most costly
parts of the process. These steps can be time consuming not
only in the hands-on aspects, but also in obtaining the necessary consensus from all key players. The initial consensus often
makes the difference between a widely accepted and a partially
resisted system. Time and resources spent in these steps can be
viewed as initial investments in a successful risk management
tool. Program management and maintenance are normally
small relative to initial setup costs.
III. Risk assessment models
What is a model?
Armed with an understanding of the scenarios that compose the
hazard (see Chapter 1 discussion of risk model building
blocks), a risk assessment model can be constructed. The model
is the set of rules by which we will predict the future performance of the pipeline from a risk perspective. The model will be
the constructor’s representation of risk.
The goal of any risk assessment model is to quantify the
risks, in either a relative or absolute sense. The risk assessment
phase is the critical first step in practicing risk management. It
is also the most difficult phase. Although we understand engineering concepts about corrosion and fluids flow, predicting
failures beyond the laboratory in a complex “real” environment
can prove impossible. No one can definitively state where or
when an accidental pipeline failure will occur. However, the
more likely failure mechanisms, locations, and frequencies can
be estimated in order to focus risk efforts.
Some make a distinction between a model and a simulation,
where a model is a simplification of the real process and a
simulation is a direct replica. A model seeks to increase our
understanding at the expense of realism, whereas a simulation
attempts to duplicate reality, perhaps at the expense of understandability and usability. Neither is necessarily superior;
Risk assessment models 2/23
either might be more appropriate for specific applications.
Desired accuracy, achievable accuracy, intended use, and availability of resources are considerations in choosing an approach.
Most pipeline risk efforts generally fall into the "model" category, seeking to gain risk understanding in the most efficient
manner possible.
Although not always apparent, the most simple to the most
complex models all make use of probability theory and statistics. In a very simple application, these manifest themselves in
experience factors and engineering judgments that are themselves based on past observations and inductive reasoning; that
is, they are the underlying basis of sound judgments. In the
more mathematically rigorous models, historical failure data
may drive the model almost exclusively.
Especially in the fields of toxicology and medical research,
risk assessments incorporate dose-response and exposure assessments into the overall risk evaluation. Dose-response
assessment deals with the relationship between quantities of
exposure and probabilities of adverse health effects in exposed
populations. Exposure assessment deals with the possible
pathways, the intensity of exposure, and the amount of time a
receptor could be vulnerable. In the case of hazardous materials pipelines, the exposure agents of concern are both chemical
(contamination scenarios) and thermal (fire-related hazards) in
nature. These issues are discussed in Chapters 7 and 14.
Three general approaches
Three general types of models, from simplest to most complex,
are matrix, probabilistic, and indexing models. Each has
strengths and weaknesses, as discussed below.
Matrix models
One of the simplest risk assessment structures is a decision-analysis matrix. It ranks pipeline risks according to the likelihood and the potential consequences of an event by a simple
scale, such as high, medium, or low, or a numerical scale, from 1
to 5, for example. Each threat is assigned to a cell of the matrix
based on its perceived likelihood and perceived consequence.
Events with both a high likelihood and a high consequence
appear higher on the resulting prioritized list. This approach
may simply use expert opinion, or a more complicated application might use quantitative information to rank risks. Figure 2.1
shows a matrix model. While this approach cannot consider all
pertinent factors and their relationships, it does help to crystallize thinking by at least breaking the problem into two parts
(probability and consequence) for separate examination.
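As a sketch of how such a matrix can be mechanized, the following assigns a few hypothetical threats to cells using 1-to-5 likelihood and consequence scales and prioritizes by the cell product. The threat names and ratings are invented for illustration, not recommendations.

```python
# Toy decision-analysis matrix: each threat is placed in a cell by its
# perceived likelihood and consequence (1-5 scales). Ratings are hypothetical.

threats = {
    "third-party damage": (4, 5),   # (likelihood, consequence)
    "external corrosion": (3, 4),
    "ground movement":    (1, 5),
}

# High-likelihood, high-consequence events rise to the top of the list.
ranked = sorted(threats, key=lambda t: threats[t][0] * threats[t][1], reverse=True)
```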
Probabilistic models
The most rigorous and complex risk assessment model is a
modeling approach commonly referred to as probabilistic risk
assessment (PRA) and sometimes also called quantitative
risk assessment (QRA) or numerical risk assessment (NRA).
Note that these terms carry implications that are not necessarily
appropriate as discussed elsewhere. This technique is used in
the nuclear, chemical, and aerospace industries and, to some
extent, in the petrochemical industry.
PRA is a rigorous mathematical and statistical technique that
relies heavily on historical failure data and event-tree/fault-tree
Figure 2.1 Simple risk matrix (consequence versus likelihood, each scaled low to high).
analyses. Initiating events such as equipment failure and safety
system malfunction are flowcharted forward to all possible
concluding events, with probabilities being assigned to each
branch along the way. Failures are backward flowcharted to all
possible initiating events, again with probabilities assigned to
all branches. All possible paths can then be quantified based on
the branch probabilities along the way. Final accident probabilities are obtained by chaining the estimated probabilities of each
event along each path.
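The chaining of branch probabilities can be illustrated with a toy event tree. The initiating frequency and branch probabilities below are invented for illustration only.

```python
# Toy event-tree calculation: the frequency of each concluding event is the
# initiating-event frequency multiplied by the conditional probabilities of
# each branch along its path. All numbers are hypothetical.

initiating_event_freq = 1e-3   # hypothetical release frequency

paths = {
    "ignition and sustained fire":  [0.10, 0.30],  # release -> ignition -> sustained fire
    "ignition, fire not sustained": [0.10, 0.70],
    "no ignition":                  [0.90],
}

def chain(branches):
    p = initiating_event_freq
    for b in branches:
        p *= b
    return p

outcome_freqs = {name: chain(branches) for name, branches in paths.items()}
```

Note that the concluding-event frequencies sum back to the initiating frequency, a useful internal check on any event tree.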
This technique is very data intensive. It yields absolute risk
assessments of all possible failure events. These more elaborate
models are generally more costly than other risk assessments.
They are technologically more demanding to develop, require
trained operators, and need extensive data. A detailed PRA is
usually the most expensive of the risk assessment techniques.
The output of a PRA is usually in a form that can be directly compared to other risks such as motor vehicle fatalities or tornado damages. However, for rare-event
occurrences, historical data present an arguably blurred view.
The PRA methodology was first popularized through opposition to various controversial facilities, such as large chemical
plants and nuclear reactors [88]. In addressing the concerns, the
intent was to obtain objective assessments of risk that were
grounded in indisputable scientific facts and rigorous engineering analyses. The technique therefore makes extensive use
of failure statistics of components as foundations for estimates
of future failure probabilities. However, statistics paints an
incomplete picture at best, and many probabilities must still be
based on expert judgment. In attempts to minimize subjectivity,
applications of this technique became increasingly comprehensive and complex, requiring thousands of probability estimates
and like numbers of pages to document. Nevertheless, variation
in probability estimates remains, and the complexity and cost
of this method do not seem to yield commensurate increases
in accuracy or applicability [MI]. In addition to sometimes
widely differing results from "duplicate" PRAs performed
on the same system by different evaluators, another criticism
2/24 Risk Assessment Process
includes the perception that underlying assumptions and
input data can easily be adjusted to achieve some predetermined result. Of course, this latter criticism can be applied to
any process involving much uncertainty and the need for expert judgment.
PRA-type techniques are required in order to obtain estimates of absolute risk values, expressed in fatalities, injuries,
property damages, etc., per specific time period. This is the
subject of Chapter 14. Some guidance on evaluating the quality
of a PRA-type technique is also offered in Chapter 14.
Indexing models
Perhaps the most popular pipeline risk assessment technique in
current use is the index model or some similar scoring technique. In this approach, numerical values (scores) are assigned
to important conditions and activities on the pipeline system
that contribute to the risk picture. This includes both risk-reducing and risk-increasing items, or variables. Weightings
are assigned to each risk variable. The relative weight reflects
the importance of the item in the risk assessment and is based
on statistics where available and on engineering judgment
where data are not available. Each pipeline section is scored
based on all of its attributes. The various pipe segments may
then be ranked according to their relative risk scores in order to
prioritize repairs, inspections, and other risk-mitigating efforts.
Among pipeline operators today, this technique is widely used
and ranges from a simple one- or two-factor model (where only
factors such as leak history and population density are considered) to models with hundreds of factors considering virtually
every item that impacts risk.
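A toy version of this scheme, with invented variables, weights, and condition scores, shows the basic arithmetic:

```python
# Toy indexing model: each risk variable carries a weight (its relative
# importance, based on statistics or judgment); each segment's condition is
# scored 0 (worst) to 1 (best); the weighted sum is the segment's relative
# index. Higher index = lower risk in this sketch. All values are invented.

weights = {            # relative importance, summing to 100
    "leak_history":       35,
    "population_density": 30,
    "coating_condition":  20,
    "depth_of_cover":     15,
}

def index_score(conditions):
    return sum(weights[v] * conditions[v] for v in weights)

segments = {
    "A": {"leak_history": 1.0, "population_density": 0.8, "coating_condition": 0.5, "depth_of_cover": 1.0},
    "B": {"leak_history": 0.4, "population_density": 0.2, "coating_condition": 0.3, "depth_of_cover": 0.6},
}

# Riskiest (lowest-index) segments first, to prioritize inspections and repairs.
priority = sorted(segments, key=lambda s: index_score(segments[s]))
```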
Although each risk assessment method discussed has its own
strengths and weaknesses, the indexing approach is especially
appealing for several reasons:
Provides immediate answers
Is a low-cost analysis (an intuitive approach using available information)
Is comprehensive (allows for incomplete knowledge and is
easily modified as new information becomes available)
Acts as a decision support tool for resource allocation
Identifies and places values on risk mitigation opportunities
An indexing-type model for pipeline risk assessment is a recommended feature of a pipeline risk management program and
is fully described in this book. It is a hybrid of several of the
methods listed previously. The great advantage of this technique is that a much broader spectrum of information can be
included; for example, near misses as well as actual failures
are considered. A drawback is the possible subjectivity of the
scoring. Extra efforts must be employed to ensure consistency
in the scoring and the use of weightings that fairly represent
real-world risks.
It is reasonable to assume that not all variable weightings will
prove to be correct in any risk model. Actual research and failure data will doubtlessly demonstrate that some were initially
set too high and some too low. This is the result of modelers
misjudging the relative importance of some of the variables.
However, even if the quantification of the risk factors is imperfect, the results nonetheless will usually give a reliable picture
of places where risks are relatively lower (fewer “bad” factors
present) and where they are relatively higher (more “bad”
factors are present).
An indexing approach to risk assessment is the emphasis of
much of this book.
Further discussion on scoring-type risk assessments
Scoring-type techniques are in common use in many applications, ranging from judging sports and beauty contests to
medical diagnosis and credit card fraud detection, as discussed later. Any time we need to consider many factors simultaneously and our knowledge is incomplete, a scoring system
becomes practical. Done properly, it combines the best of all
other approaches because critical variables are identified from
scenario-based approaches and weightings are established
from probabilistic concepts when possible.
The genesis of scoring-type approaches is readily illustrated
by the following example. As operators of motor vehicles, we
generally know the hazards associated with driving as well as
the consequences of vehicle accidents. At one time or another,
most drivers have been exposed to driving accident statistics as
well as pictures or graphic commentary of the consequences of
accidents. Were we to perform a scientific quantitative risk
analysis, we might begin by investigating the accident statistics
of the particular make and model of the vehicle we operate. We
would also want to know something about the crash survivability of the vehicle. Vehicle condition would also have to be
included in our analysis. We might then analyze various roadways for accident history, including the accident severity. We
would naturally have to compensate for newer roads that have
had less opportunity to accumulate an accident frequency base.
To be complete, we would have to analyze driver condition as it
contributes to accident frequency or severity, as well as weather
and road conditions. Some of these variables would be quite
difficult to quantify scientifically.
After a great deal of research and using a number of critical
assumptions, we may be able to build a system model to give us
an accident probability number for each combination of variables. For instance, we may conclude that, for vehicle type A,
driven by driver B, in condition C, on roadway D, during
weather and road conditions E, the accident frequency for an
accident of severity F is once for every 200,000 miles driven.
This system could take the form of a scenario approach or a
scoring system.
Does this now mean that until 200,000 miles are driven, no
accidents should be expected? Does 600,000 miles driven guarantee three accidents? Of course not. What we do believe from
our study of statistics is that, given a large enough data set, the
accident frequency for this set of variables should tend to move
toward once every 200,000 miles on average, if our underlying
frequencies are representative of future frequencies. This may
mean an accident every 10,000 miles for the first 100,000 miles
followed by no accidents for the next 1,900,000 miles; the
average is still once every 200,000 miles.
What we are perhaps most interested in, however, is the relative amount of risk to which we are exposing ourselves during a
single drive. Our study has told us little about the risk of this
drive until we compare this drive with other drives. Suppose we
change weather and road conditions to state G from state F and
find that the accident frequency is now once every 190,000
miles. This finding now tells us that condition G has increased
the risk by a small amount. Suppose we change roadway D to
roadway H and find that our accident frequency is now once
every 300,000 miles driven. This tells us that by using road H
we have reduced the risk quite substantially compared with
using road D. Chances are, however, we could have made these
general statements without the complicated exercise of calculating statistics for each variable and combining them for an
overall accident frequency.
So why use numbers at all? Suppose we now make both variable changes simultaneously. The risk reduction obtained by
road H is somewhat offset by the increased risk associated with
road and weather condition G, but what is the result when we
combine a small risk increase with a substantial risk reduction?
Because all of the variables are subject to change, we need
some method to see the overall picture. This requires numbers,
but the numbers can be relative, showing only that variable H
has a greater effect on the risk picture than does variable G.
Absolute numbers, such as the accident frequency numbers
used earlier, are not only difficult to obtain, they also give a
false sense of precision to the analysis. If we can only be sure of
the fact that change X reduces the risk and it reduces it more
than change Y does, it may be of little further value to say that a
once-in-200,000 frequency has been reduced to a once-in-210,000 frequency by change X and only a once-in-205,000 frequency by change Y. We are ultimately most interested in the
relative risk picture of change X versus change Y.
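The arithmetic behind this argument is simple: express each change as a relative factor against the baseline frequency, then multiply the factors. The sketch below uses the hypothetical frequencies from the example and assumes, for illustration, that the effects of simultaneous changes combine multiplicatively.

```python
# Relative risk factors from the driving example. A factor > 1 increases
# risk; a factor < 1 reduces it. Combining simultaneous changes by
# multiplication is an assumption of this sketch.

BASE = 1 / 200_000                       # baseline accident frequency per mile

weather_change = (1 / 190_000) / BASE    # small risk increase
road_change    = (1 / 300_000) / BASE    # substantial risk reduction

combined = weather_change * road_change  # net effect of both changes at once
```

Here the substantial reduction outweighs the small increase, so the combined factor falls below 1, a net risk reduction; this overall picture is exactly what the relative numbers are meant to reveal.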
This reasoning forms the basis of the scoring risk assessment. The experts come to a consensus as to how a change in a
variable impacts the risk picture, relative to other variables in
the risk picture. If frequency data are available, they are certainly used, but they are used outside the risk analysis system.
The data are used to help the experts reach a consensus on the
importance of the variable and its effects (or weighting) on the
risk picture. The consensus is then used in the risk analysis.
As previously noted, scoring systems are common in many
applications. In fact, whenever information is incomplete
and many aspects or variables must be simultaneously considered, a scoring system tends to emerge. Examples include
sporting events that have some difficult-to-measure aspects
like artistic expression or complexity, form, or aggressiveness. These include gymnastics, figure skating, boxing, and
karate and other martial arts. Beauty contests are another application. More examples are found in the financial world. Many
economic models use scoring systems to assess current conditions and forecast future conditions and market movements.
Credit card fraud assessment is another example where some
purchases trigger a model that combines variables such as
purchase location, the card owner’s purchase history, items
purchased, time of day, and other factors to rate the probability of a fraudulent card use. Scoring systems are also
used for psychological profiles, job applicant screening,
career counseling, medical diagnostics, and a host of other applications.
Choosing a risk assessment approach
Any or all of the above-described techniques might have a place
in risk assessment/management. Understanding the strengths
and weaknesses of the different risk assessment methodologies
gives the decision-maker the basis for choosing one. A case can
be made for using each in certain situations. For example, a
simple matrix approach helps to organize thinking and is a first
step towards formal risk assessment. If the need is to evaluate
specific events at any point in time, a narrowly focused probabilistic risk analysis might be the tool of choice. If the need is to
weigh immediate risk trade-offs or perform inexpensive overall
assessments, indexing models might be the best choice. These
options are summarized in Table 2.1.
It is important that a risk assessment identify the role of uncertainty in its use of assumptions and also identify how the state
of "no information" is assessed. The philosophy behind uncertainty and risk is discussed in Chapter 1. The recommendation
from Chapter 1 is that a risk model generally assumes that
things are "bad" until data show otherwise. So, an underlying
theme in the assessment is that "uncertainty increases risk."
This is a conservative approach requiring that, in the absence of
meaningful data or the opportunity to assimilate all available
data, risk should be overestimated rather than underestimated.
So, lower ratings are assigned, reflecting the assumption of reasonably poor conditions, in order to accommodate the uncertainty. This results in a more conservative overall risk
assessment. As a general philosophy, this approach to uncertainty has the added long-term benefit of encouraging data collection via inspections and testing. Uncertainty also plays a role
in scoring aspects of operations and maintenance.
Information should be considered to have a life span because
users must realize that conditions are always changing and
recent information is more useful than older information.
Eventually, certain information has little value at all in the risk
analysis. This applies to inspections, surveys, and so on.
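These two ideas, conservative defaults for missing data and a finite life span for information, can be sketched as follows. The default values and the five-year life span are assumptions chosen for illustration only.

```python
# Sketch of the uncertainty philosophy: missing data defaults to a
# conservative worst-case value ("uncertainty increases risk"), and the
# evidence value of an inspection decays to zero over an assumed life span.

WORST_CASE = {"soil_corrosivity": 1.0, "coating_condition": 0.0}  # assumed defaults

def get_value(data, variable):
    # Absent meaningful data, overestimate rather than underestimate risk.
    return data.get(variable, WORST_CASE[variable])

def inspection_value(years_old, life_span=5.0):
    # Recent information is worth more than older information.
    return max(0.0, 1.0 - years_old / life_span)

segment = {"coating_condition": 0.8}   # soil corrosivity was never measured
```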
The scenarios shown in Table 2.2 illustrate the relative value
of several knowledge states for purposes of evaluating risk
where uncertainty is involved. Some assumptions and "reasonableness" are employed in setting risk scores in the absence of
Table 2.1 Choosing a risk assessment technique

When the need is to . . .                                A technique to use might be
Study specific events, perform post-incident             Event trees, fault trees, FMEA, PRA, HAZOP
investigations, compare risks of specific failures,
calculate specific event probabilities
Obtain an inexpensive overall risk model, create a       Indexing model
resource allocation model, model the interaction of
many potential failure mechanisms, study or create
an operating discipline
Better quantify a belief, create a simple decision       Matrix model
support tool, combine several beliefs into a single
solution, document choices in resource allocation
Table 2.2 Uncertainty and risk assessment

Inspection results                                          Risk relevance
Timely and comprehensive inspection performed;              Least risk
no risk issues identified
Timely and comprehensive inspection performed;
some risk issues or indications of flaw potential
identified; root cause analysis and proper
follow-up to mitigate risk
Timely and comprehensive inspection performed;              More risk
some risk issues or indications of flaw potential
identified; uncertain reactions, uncertain
mitigation of risk
No timely and comprehensive inspection performed;           Most risk
high uncertainty regarding risk issues
data; in general, however, worst-case conditions are conservatively used for default values.
Uncertainty also arises in using the risk assessment model
since there are inaccuracies inherent in any measuring tool.
A signal-to-noise ratio analogy is a useful way to look at the
tool and highlights precautions in its use. This is discussed in
Chapter 1.
Sectioning or segmenting the pipeline
It is generally recognized that, unlike most other facilities that
undergo a risk assessment, a pipeline usually does not have a
constant hazard potential over its entire length. As conditions
along the line’s route change, so too does the risk picture.
Because the risk picture is not constant, it is efficient to
examine a long pipeline in shorter sections. The risk evaluator
must decide on a strategy for creating these sections in order to
obtain an accurate risk picture. Each section will have its own
risk assessment results. Breaking the line into many short sections increases the accuracy of the assessment for each section,
but may result in higher costs of data collection, handling, and
maintenance (although higher costs are rarely an issue with
modern computing capabilities). Longer sections (fewer in
number) on the other hand, may reduce data costs but also
reduce accuracy, because average or worst case characteristics
must govern if conditions change within the section.
Fixed-length approach
A fixed-length method of sectioning, based on rules such as
“every mile” or “between pump stations” or “between block
valves,” is often proposed. While such an approach may be initially appealing (perhaps for reasons of consistency with existing accounting or personnel systems), it will usually reduce
accuracy and increase costs. Inappropriately chosen and unnecessary
break points limit the model's usefulness and
hide risk hot spots if conditions are averaged in the section, or
exaggerate risks if worst-case conditions are used for
the entire length. Fixed-length sectioning will also interfere with an otherwise efficient ability of the risk model to identify risk mitigation projects. Many pipeline projects are done in very specific locations,
as is appropriate. The risk of such specific locations is often lost
under a fixed-length sectioning scheme.
Dynamic segmentation approach
The most appropriate method for sectioning the pipeline is to
insert a break point wherever significant risk changes occur.
A significant condition change must be determined by the evaluator with consideration given to data costs and desired accuracy. The idea is for each pipeline section to be unique, from a
risk perspective, from its neighbors. So, within a pipeline section, we recognize no differences in risk, from beginning to
end. Each foot of pipe is the same as any other foot, as far as we
know from our data. But we know that the neighboring sections
do differ in at least one risk variable. It might be a change in
pipe specification (wall thickness, diameter, etc.), soil conditions (pH, moisture, etc.), population, or any of dozens of other
risk variables, but at least one aspect is different from section to
section. Section length is not important as long as characteristics remain constant. There is no reason to subdivide a 10-mile
section of pipe if no real risk changes occur within those
10 miles.
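Dynamic segmentation is straightforward to automate: walk along the line and insert a break wherever any tracked risk variable changes. The stationing and attribute values below are invented for illustration.

```python
# Minimal dynamic segmentation: a new section begins wherever at least one
# risk variable differs from the previous station. Data are hypothetical.

stations = [                       # (station in ft, risk variables), in order
    (0,    {"wall_in": 0.375, "soil": "clay"}),
    (500,  {"wall_in": 0.375, "soil": "clay"}),   # nothing changed
    (1000, {"wall_in": 0.375, "soil": "sand"}),   # soil changed -> break
    (1500, {"wall_in": 0.250, "soil": "sand"}),   # wall changed -> break
]

def dynamic_segments(stations):
    breaks = [stations[0][0]]
    for (_, prev), (ft, cur) in zip(stations, stations[1:]):
        if cur != prev:            # at least one variable differs
            breaks.append(ft)
    return breaks

section_starts = dynamic_segments(stations)
```

Note that section length never drives the breaks; only changes in risk characteristics do, so a uniform 10-mile reach remains a single section.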
This type of sectioning is sometimes called dynamic segmentation. It can be done very efficiently using modern computers. It can also be done manually, of course, and the manual
process might be suitable for setting up a high-level screening assessment.
Manually establishing sections
With today's common computing environments, there is
really no reason to follow the relatively inefficient option of
manually establishing pipeline sections. However, envisioning
the manual process of segmentation might be helpful for
obtaining a better understanding of the concept.
The evaluator should first scan Chapters 3 through 7 of this
text to get a feel for the types of conditions that make up the risk
picture. He should note those conditions that are most variable
in the pipeline system being studied and rank those items with
regard to magnitude of change and frequency of change. This
ranking will be rather subjective and perhaps incomplete, but it
will serve as a good starting point for sectioning the line(s). An
example of a short list of prioritized conditions is as follows:
1. Population density
2. Soil conditions
3. Coating condition
4. Age of pipeline
In this example, the evaluator(s) foresees the most significant
changes along the pipeline route to be population density, followed by varying soil conditions, then coating condition, and
pipeline age. This list was designed for an aging 60-mile
pipeline in Louisiana that passes close to several rural communities and alternates between marshland (clay) and sandy soil
conditions. Furthermore, the coating is in various states of deterioration (maybe roughly corresponding to the changing soil
conditions) and the line has had sections replaced with new
pipe during the last few years.
Next, the evaluator should insert break points for the sections
based on the top items on the prioritized list of condition
changes. This produces a trial sectioning of the pipeline. If the
number of sections resulting from this process is deemed to be
too large, the evaluator needs merely to reduce the list (eliminating conditions from the bottom of the prioritized list) until
an appropriate number of sections is obtained. This trial-and-error process is repeated until a cost-effective sectioning has
been completed.
Example 2.1: Sectioning the Pipeline
Following this philosophy, suppose that the evaluator of this
hypothetical Louisiana pipeline decides to section the line
according to the following rules he has developed:
Insert a section break each time the population density along
a 1-mile section changes by more than 10%. These popula-
tion section breaks will not occur more often than each mile,
and as long as the population density remains constant, a
section break is unwarranted.
Insert a section break each time the soil corrosivity changes
by 30%. In this example, data are available showing the
average soil corrosivity for each 500-ft section of line.
Therefore, section breaks may occur a maximum of 10
times (5280 ft per mile divided by 500-ft sections) for each
mile of pipeline.
Insert a section break each time the coating condition
changes significantly. This will be measured by the corrosion engineer’s assessment. Because this assessment is subjective and based on sketchy data, such section breaks may
occur as often as every mile.
Insert a section break each time a difference in age of
the pipeline is seen. This is measured by comparing the
installation dates. Over the total length of the line, six new
sections have been installed to replace unacceptable older pipe.
Following these rules, the evaluator finds that his top listed
condition causes 15 sections to be created. By applying the second condition rule, he has created an additional 8 sections,
bringing the total to 23 sections. The third rule yields an additional 14 sections, and the fourth causes an additional 6 sections. This brings the total to 43 sections in the 60-mile pipeline.
The evaluator can now decide if this is an appropriate number of sections. As previously noted, factors such as the desired
accuracy of the evaluation and the cost of data gathering and
analysis should be considered. If he decides that 43 sections is
too many for the company’s needs, he can reduce the number of
sections by first eliminating the additional sectioning caused by
application of his fourth rule. Elimination of these 6 sections
caused by age differences in the pipe is appropriate because it
had already been established that this was a lower-priority item.
That is, it is thought that the age differences in the pipe are not
as significant a factor as the other conditions on the list.
If the section count (now down to 37) is still too high, the
evaluator can eliminate or reduce sectioning caused by his third
rule. Perhaps combining the corrosion engineer's "good" and
"fair" coating ratings would reduce the number of sections
from 14 to 8.
In the preceding example, the evaluator has roughed out a
plan to break down the pipeline into an appropriate number of
sections. Again, this is an inefficient way to section a pipeline
and leads to further inefficiencies in risk assessment. This
example is provided only for illustration purposes.
Figure 2.2 illustrates a piece of pipeline sectioned based on
population density and soil conditions.
For many items in this evaluation (especially in the incorrect
operations index) new section lines will not be created. Items
such as training or procedures are generally applied uniformly
across the entire pipeline system or at least within a single
Figure 2.2 Sectioning of the pipeline.
operations area. This should not be universally assumed,
however, during the data-gathering step.
Risk algorithms can be designed at many levels of detail and complexity. Appendix E shows some samples of risk
algorithms. Readers will find a review of some database design
concepts to be useful (see Chapter 8).
Persistence of segments
Another decision to make is how often segment boundaries
will be changed. Under a dynamic segmentation strategy,
segments are subject to change with each change of data.
This results in the best risk assessments, but may create problems when tracking changes in risk over time. Difficulties can
be readily overcome by calculating cumulative risks (see
Chapter 15) or tracking specific points rather than tracking segments.
Results roll-ups
The pipeline risk scores represent the relative level of risk
that each point along the pipeline presents to its surroundings. Each score is insensitive to length. If two pipeline segments,
say, 100 and 2600 ft, respectively, have the same risk score,
then each point along the 100-ft segment presents the
same risk as does each point along the 2600-ft length.
Of course, the 2600-ft length presents more overall risk
than does the 100-ft length because it has many more riskproducing points. A cumulative risk calculation adds the
length aspect so that a 100-ft length of pipeline with one
risk score can be compared against a 2600-ft length with a
different risk score.
As noted earlier, dividing the pipeline into segments
based on any criteria other than all risk variables will lead to
inefficiencies in risk assessment. However, it is common
practice to report risk results in terms of fixed lengths such as
“per mile” or “between valve stations,” even if a dynamic
segmentation protocol has been applied. This “rolling up” of
risk assessment results is often thought to be necessary for
summarization and perhaps linking to other administrative
systems such as accounting. To minimize the masking effect
that such roll-ups might create, it is recommended that several
measures be simultaneously examined to ensure a more complete use of information. For instance, when an average risk
value is reported, a worst-case risk value, reflecting the
worst length of pipe in the section, can be simultaneously
reported. Length-weighted averages can also be used to better
capture information, but those too must be used with caution. A
very short, but very risky stretch of pipe is still of concern, even
if the rest of the pipeline shows low risks. In Chapter 15, a system of calculating cumulative risk is offered. This system takes
into account the varying section lengths and offers a way to
examine and compare the effects of various risk mitigation
efforts. Other aspects of data roll-ups are discussed in Chapters
8 and 15.
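The roll-up measures described above can be sketched in a few lines of code. This is an illustrative assumption, not the book's calculation: segment lengths and scores are invented, and the "score-feet" cumulative measure is one simple way to add the length aspect (Chapter 15 presents the book's own cumulative risk system).

```python
# Hypothetical roll-up of per-segment relative risk scores into summary
# measures for a reporting section. Higher score here means higher risk;
# lengths are in feet. All values are illustrative.
segments = [
    {"length_ft": 100,  "score": 72},   # short but relatively risky stretch
    {"length_ft": 2600, "score": 35},
    {"length_ft": 1300, "score": 35},
]

total_len = sum(s["length_ft"] for s in segments)

# Length-weighted average: captures the length aspect, but can mask
# a short, high-risk stretch...
weighted_avg = sum(s["score"] * s["length_ft"] for s in segments) / total_len

# ...so a worst-case score for the section is reported alongside it.
worst_case = max(s["score"] for s in segments)

# A simple cumulative measure ("score-feet") adds the length aspect so
# sections of different lengths can be compared.
cumulative = sum(s["score"] * s["length_ft"] for s in segments)

print(round(weighted_avg, 3), worst_case, cumulative)
```

Reporting the weighted average alone (about 35.9 here) would hide the short 72-score stretch; pairing it with the worst-case value preserves that information.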
IV. Designing a risk assessment model
A good risk model will be firmly rooted in engineering concepts and be consistent with experience and intuition. This
leads to the many similarities in the efforts of many different
modelers examining many different systems at many different
times. Beyond compatibility with engineering and experience,
a model can take many forms, especially in differing levels of detail and complexity.
Data first or framework first?
There are two possible scenarios for beginning a relative risk
assessment. In one, a risk model (or at least a framework for a
model) has already been developed, and the evaluator takes
this model and begins collecting data to populate her model’s
variables. In the second possibility, the modeler compiles a list
of all available information and then puts this information into a
framework from which risk patterns emerge and risk-based
decisions can be made. The difference between these two
approaches can be summarized in a question: Does the model
drive data collection or does data availability drive model
development? Ideally, each will be the driver at various stages
of the process.
One of the primary intents of risk assessment is to capture
and use all available information and identify information gaps.
Having data drive the process ensures complete usage of all
data, while having a predetermined model allows data gaps to
be easily identified. A blend of both is therefore recommended,
especially considering possible pitfalls of taking either approach exclusively. Although a predefined set of risk algorithms defining
how every piece of data is to be used is attractive, it has the
potential to cause problems, such as:
Rigidity of approach. Difficulty is experienced in accepting new data, data in an unexpected format, or information that is loosely structured.
Relative scoring. Weightings are set in relation to types of
information to be used. Weightings would need to be
adjusted if unexpected data become available.
On the other hand, a pure custom development approach
(building a model exclusively from available data) suffers
from lack of consistency and inefficiency. An experienced evaluator or a checklist is required to ensure that significant aspects of the evaluation are not omitted as a result of lack of structure.
Therefore, the recommendation is to begin with lists of
standard higher level variables that comprise all of the critical
aspects of risk. Chapters 3 through 7 provide such lists for
common pipeline components, and Chapters 9 through 13 list
additional variables that might be appropriate for special situations. Then, use all available information to evaluate each
variable. For example, the higher level variable of activity (as
one measure of third-party damage potential) might be created from data such as number of one-call reports, population density, previous third-party damages, and so on. So, higher
level variable selection is standardized and consistent, yet the
model is flexible enough to incorporate any and all information that is available or becomes available in the future. The
experienced evaluator, or any evaluator armed with a comprehensive list of higher level variables, will quickly find many
useful pieces of information that provide evidence on many
variables. She may also see risk variables for which no information is available. Similar to piecing together a puzzle, a
picture will emerge that readily displays all knowledge and
knowledge gaps.
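The puzzle-piecing described above can be illustrated with a sketch. The variable names, normalization values, and weights below are hypothetical assumptions (not from the text); the point is that a standardized higher level variable such as activity accepts whatever lower-level evidence is available and makes data gaps visible.

```python
# Hypothetical derivation of the higher-level "activity" variable (one
# measure of third-party damage potential) from available evidence.
# Normalization maxima and weights are illustrative assumptions.

def activity_score(evidence):
    """Combine available evidence into a 0-10 activity level
    (10 = highest third-party activity). Missing items are skipped,
    so the same model accepts whatever data exists."""
    # (evidence key, raw value at which the input saturates, relative weight)
    inputs = [
        ("one_call_reports_per_yr", 50.0,  0.4),
        ("population_density",      500.0, 0.3),
        ("prior_third_party_hits",  5.0,   0.3),
    ]
    score, weight_used = 0.0, 0.0
    for key, max_val, wt in inputs:
        if key in evidence:                 # absent key = identified data gap
            frac = min(evidence[key] / max_val, 1.0)
            score += 10.0 * frac * wt
            weight_used += wt
    if weight_used == 0:
        return None                         # no evidence at all for this variable
    return score / weight_used              # renormalize over evidence present

print(round(activity_score({"one_call_reports_per_yr": 25,
                            "population_density": 250}), 3))
```

Because the score is renormalized over the weights actually used, the higher level variable stays comparable across segments even when some lower-level data are missing, while the missing keys themselves flag the knowledge gaps.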
Risk factors
Central to the design of a risk model are the risk factors or variables (these terms are used interchangeably in this text) that
will be included in the assessment. A complete list of risk factors, those items that add to or subtract from the amount of
risk, can be readily identified for any pipeline system. There is
widespread agreement on failure mechanisms and underlying
factors influencing those mechanisms.
Setting up a risk assessment model involves trade-offs
between the number of factors to be considered and the ease of
use of the model. Including all possible factors in a decision
support system, however, can create a somewhat unwieldy system. So, the important variables are widely recognized, but the
number to be considered in the model (and the depth of that
consideration) is a matter of choice for the model developers.
In this book, lists of possible risk indicators are offered based
on their ability to provide useful risk signals. Each item’s specific ability to contribute without adding unnecessary complexities will be a function of a user’s specific system, needs,
and ability to obtain the required data. The variables and the
rationale for their possible inclusion are described in the
following chapters.
It is usually the case that some data impact several different
aspects of risk. For example, pipe wall thickness is a factor in
almost all potential failure modes: It determines time to failure
for a given corrosion rate, partly determines ability to survive external forces, and so on. Population density is a consequence variable as well as a third-party damage indicator (as a
possible measure of potential activity). Inspection results yield
evidence regarding current pipe integrity as well as possibly
active failure mechanisms. A single detected defect can yield
much information. It could change our beliefs about coating
condition, CP effectiveness, pipe strength, overall operating
safety margin, and maybe even provide new information about
soil corrosivity, interference currents, third-party activity, and
so on. All of this arises from a single piece of data (evidence).
Many companies now avoid the use of casings. But casings
were put in place for a reason. The presence of a casing is a mitigation measure for external force damage potential, but is
often seen to increase corrosion potential. The risk model
should capture both of the risk implications from the presence
of a casing. Numerous other examples can be shown.
Types of information
A great deal of information is usually available in a pipeline
operation. Information that can routinely be used to update the
risk assessment includes
All survey results such as pipe-to-soil voltage readings, leak
surveys, patrols, depth of cover, population density, etc.
Documentation of all repairs
Documentation of all excavations
Operational data including pressures and flow rates
Results of integrity assessments
Maintenance reports
Updated consequence information
Updated receptor information: new housing, high-occupancy buildings, changes in population density or environmental sensitivities, etc.
Results of root cause analyses and incident investigations
Availability and capabilities of new technologies
Attributes and preventions
Because the ultimate goal of the risk assessment is to provide a
means of risk management, it is sometimes useful to make a
distinction between two types of risk variables. As noted earlier,
there is a difference between a hazard and a risk. We can usually
do little to change the hazard, but we can take actions to affect
the risk. Following this reasoning, the evaluator can categorize
each index risk variable as either an attribute or a prevention.
The attributes correspond loosely to the characteristics of the
hazard, while the preventions reflect the risk mitigation measures. Attributes reflect the pipeline's environment: characteristics that are difficult or impossible to change. They are
characteristics over which the operator usually has little or no
control. Preventions are actions taken in response to that environment. Both impact the risk, but a distinction may be useful,
especially in risk management analyses.
Examples of aspects that are not routinely changed, and are
therefore considered attributes, include
Soil characteristics
Type of atmosphere
Product characteristics
The presence and nature of nearby buried utilities
The other category, preventions, includes actions that the
pipeline designer or operator can reasonably take to offset risks.
Examples of preventions include
Pipeline patrol frequency
Operator training programs
Right-of-way (ROW) maintenance programs
The above examples of each category are pretty clear-cut. The
evaluator should expect to encounter some gray areas of distinction between an attribute and a prevention. For instance,
consider the proximity of population centers to the pipeline.
In many risk assessments, this impacts the potential for
third-party damage to the pipeline. This is obviously not an
unchangeable characteristic because rerouting of the line is
usually an option. But in an economic sense, this characteristic
may be unchangeable due to unrecoverable expenses that may
be incurred to change the pipeline’s location. Another example
would be the pipeline depth of cover. To change this characteristic would mean a reburial or the addition of more cover.
Neither of these is an uncommon action, but the practicality of
such options must be weighed by the evaluator as he classifies a
risk component as an attribute or a prevention. Figure 2.3 illustrates how some of the risk assessment variables are thought to
appear on a scale with preventions at one extreme and attributes
at the other.
The distinction between attributes and preventions is especially useful in risk management policy making. Company
standards can be developed to require certain risk-reducing
actions to be taken in response to certain harsh environments.
For example, more patrols might be required in highly populated areas or more corrosion-prevention verifications might be
required under certain soil conditions. Such a procedure would
provide for assigning a level of preventions based on the level
of attributes. The standards can be predefined and programmed
into a database program to automatically adjust the standards to the environment of the section: harsh conditions require more preventions to meet the standard.
Figure 2.3 Example items on attributes-preventions scale (conditions such as depth of cover at one extreme, actions at the other).
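A standard of this kind, assigning a level of preventions based on the level of attributes, can be sketched as follows. The thresholds, frequencies, and function names are hypothetical assumptions for illustration, not values from the text.

```python
# Hypothetical predefined standards: the required level of preventions
# rises with the harshness of the attributes. All thresholds and
# frequencies below are illustrative assumptions.

def required_patrols_per_month(population_density):
    """More patrols required in more highly populated areas
    (attribute -> required prevention)."""
    if population_density > 300:
        return 4
    if population_density > 50:
        return 2
    return 1

def required_cp_surveys_per_year(soil_corrosivity):
    """More corrosion-prevention verification required under
    harsher soil conditions."""
    return {"low": 1, "moderate": 2, "high": 4}[soil_corrosivity]

# A database program could apply these rules section by section.
print(required_patrols_per_month(400), required_cp_surveys_per_year("high"))
```

Encoded this way, the standard is applied consistently to every section, and a change in an attribute (say, new housing raising population density) automatically raises the required prevention level.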
Model scope and resolution
Assessment scope and resolution issues further complicate
model design. Both involve choices of the ranges of certain risk
variables. The assessment of relative risk characteristics is
especially sensitive to the range of possible characteristics in
the pipeline systems to be assessed. If only natural gas transmission pipelines are to be assessed, then the model does not
necessarily have to capture liquid pipeline variables such as
surge potential. The model designer can either keep this variable and score it as “no threat” or she can redistribute the
weighting points to other variables that do impact the risk.
As another example, earth movements often pose a very localized threat on a relatively few stretches of pipeline. When the vast
majority of a pipeline system to be evaluated is not exposed to any
land movement threats, risk points assigned to earth movements
will not help to make risk distinctions among most pipeline segments. It may seem beneficial to reassign them to other variables,
such as those that warrant full consideration. However, without
the direct consideration for this variable, comparisons with the
small portions of the system that are exposed, or future acquisitions of systems that have the threat, will be difficult.
Model resolution, the signal-to-noise ratio as discussed in Chapter 1, is also sensitive to the characteristics of the systems to be assessed. A model that is built for parameters ranging from, say, a 40-inch, 2000-psig propane pipeline to a 1-inch, 20-psig fuel oil pipeline will not be able to make many risk distinctions between a 6-inch natural gas pipeline and an 8-inch natural gas pipeline. Similarly, a model that is sensitive to differences between a pipeline at 1100 psig and one at 1200 psig might have to treat all lines above a certain pressure/diameter threshold as the same. This is an issue of modeling resolution.
Common risk variables that should have a range established
as part of the model design include
Diameter range
Pressure range
Products to be included
The range should include the smallest to largest values in systems to be studied as well as future systems to be acquired or
other systems that might be used as comparisons.
Given the difficulties in predicting future uses of the model,
a more generic model, widely applicable to many different pipeline systems, might be appropriate.
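The resolution trade-off above can be made concrete with a small sketch. The design ranges used here are invented assumptions: a variable normalized over a wide design range leaves little scoring distance between similar systems, while a narrow range gives finer distinctions.

```python
# Illustrative normalization of a raw value into a model's 0-10 design
# range, clamping values outside the range to the endpoints. Ranges
# below are assumed for illustration.

def normalized(value, lo, hi):
    """Map a raw value onto a 0-10 scale defined by the model's
    design range [lo, hi]."""
    value = min(max(value, lo), hi)          # clamp outside-range values
    return 10.0 * (value - lo) / (hi - lo)

# Wide-scope model: diameters from 1 to 40 inches -- 6-inch and 8-inch
# lines score almost identically (low resolution).
print(round(normalized(6, 1, 40), 2), round(normalized(8, 1, 40), 2))

# Narrow-scope model: diameters from 4 to 12 inches -- the same two
# lines are now clearly distinguished (higher resolution).
print(round(normalized(6, 4, 12), 2), round(normalized(8, 4, 12), 2))
```

The clamping also shows the cost of a narrow range: any future acquisition outside the design range is scored at the endpoint, losing all distinction among such systems.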
Special risk factors
Two possible risk factors deserve special consideration since
they have a general impact on many other risk considerations.
Age as a risk variable
Some risk models use age as a risk
variable. It is a tempting choice since many man-made systems
experience deterioration that is proportional to their years in
service. However, age itself is not a failure mechanism; at most it is a contributing factor. Using it as a stand-alone risk
variable can detract from the actual failure mechanisms and can
also unfairly penalize portions of the system being evaluated.
Recall the discussion on time-dependent failure rates in
Chapter 1, including the concept of the bathtub failure rate
curve. Penalizing a pipeline for its age presupposes knowledge
of that pipeline’s failure rate curve.
Age alone is not a reliable indicator of pipeline risk, as is evidenced by some pipelines found in excellent operating condition even after many decades of service. A perception that age always causes an inevitable, irreversible process of decay is not an appropriate characterization of pipeline failure mechanisms.
Mechanisms that can threaten pipe integrity exist but may or
may not be active at any point on the line. Integrity threats are
well understood and can normally be counteracted with a
degree of confidence. Possible threats to pipe integrity are not
necessarily strongly correlated with the passage of time,
although the “area of opportunity” for something to go wrong
obviously does increase with more time.
The ways in which the age of a pipeline can influence the
potential for failures are through specific failure mechanisms
such as corrosion and fatigue, or in consideration of changes in
manufacturing and construction methods since the pipeline
was built. These age effects are well understood and can
normally be countered by appropriate mitigation measures.
Experts believe that there is no effect of age on the microcrystalline structure of steel such that the strength and ductility
properties of steel pipe are degraded over time. The primary
metal-related phenomena are the potential for corrosion and
development of cracks from fatigue stresses. In the cases of certain other materials, mechanisms of strength degradation might
be present and should be included in the assessment. Examples
include creep and UV degradation possibilities in certain plastics and concrete deterioration when exposed to certain chemical environments. In some situations, a slow-acting earth
movement could also be modeled with an age component. Such
special situations are discussed in Chapters 4 and 5.
Manufacturing and construction methods have changed over time, presumably improving and reflecting learning experiences from past failures. Hence, more recently manufactured
and constructed systems may be less susceptible to failure
mechanisms of the past. This can be included in the risk model
and is discussed in Chapter 5.
The recommendation here is that age not be used as an independent risk variable, unless the risk model is only a very high-level screening application. Preferably, the underlying mechanisms and mitigations should be evaluated to determine if there are any age-related effects.
Some emerging techniques for artificial intelligence systems
seek to make better use of human reasoning to solve problems
involving incomplete knowledge and the use of descriptive
terms. In mirroring human decision making, fuzzy logic interprets and makes use of natural language in ways similar to our
risk models.
Much research can be found regarding transforming verbal
expressions into quantitative or numerical probability values.
Most conclude that there is relatively consistent usage of terms.
This is useful when polling experts, weighing evidence, and
devising quantitative measures from subjective judgments. For example, Table 2.4 shows the results of a study where certain
expressions, obtained from interviews of individuals, were correlated against numerical values. Using relationships like those
shown in Table 2.4 can help bridge the gap between interview or
survey results and numerical quantification of beliefs.
Inspection age
Inspection age should play a role in assessments that use the results of inspections or surveys. Since conditions should not be assumed to be static, inspection data
becomes increasingly less valuable as it ages. One way to
account for inspection age is to make a graduated scale indicating the decreasing usefulness of inspection data over time. This
measure of information degradation can be applied to the
scores as a percentage. After a predetermined time period, scores based on previous inspections degrade to some predetermined value. An example is shown in Table 2.3. In this example, the evaluator has determined that a previous inspection
yields no useful information after 5 years and that the usefulness degrades 20% per year. By this scale, point values based
on inspection results will therefore change by 20% per year.
A more scientific way to gauge the time degradation of
integrity inspection data is shown in Chapter 5.
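The graduated scale described above can be sketched directly. The 20%-per-year rate and 5-year cutoff are the evaluator's choices in this example (per Table 2.3); the function name is an illustrative assumption.

```python
# Sketch of the example degradation scale: inspection-based scores
# degrade 20% per year and yield no useful information after 5 years.

def inspection_degradation_factor(age_years, rate=0.20, cutoff=5.0):
    """Fraction of the original inspection-based score still credited,
    given the age of the inspection data in years."""
    if age_years >= cutoff:
        return 0.0
    return max(0.0, 1.0 - rate * age_years)

# Apply the factor to a score earned from a previous inspection.
fresh_score = 10.0
print(round(fresh_score * inspection_degradation_factor(0), 2))   # fresh data
print(round(fresh_score * inspection_degradation_factor(3), 2))   # 3-year-old data
print(round(fresh_score * inspection_degradation_factor(6), 2))   # past the cutoff
```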
Interview data
Collecting information via an interview will often require the use of qualitative descriptive terms. Such verbal labeling has some advantages, including ease of explanation and familiarity. (In fact, most people prefer verbal responses when replying to rating tasks.) It is therefore useful for capturing expert judgments. However, these advantages are at least partially offset by inferior measurement quality.
Table 2.4 Assigning numbers to qualitative assessments

Expression (listed from highest to lowest median probability)
Almost certain
Very high chance
Very likely
High chance
Very probable
Very possible
Even chance
Medium chance
Low chance
Very low chance
Very unlikely
Very improbable
Almost impossible

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.
Table 2.3 Example of inspection degradations

Inspection age (years) / Degradation factor (%) / Description
0 / 100 / Fresh data; no degradation
1 / 80 / Inspection data is 1 year old and less representative of actual conditions
3 / 40 / Inspection data is now 3 years old and current conditions might now be significantly different
5+ / 0 / Inspection results assumed to no longer yield useful information
Additional studies have yielded similar correlations with
terms relating to quality and frequency. In Tables 2.5 and 2.6,
some test results are summarized using the median numerical
value for all qualitative interpretations along with the standard
deviation. The former shows the midpoint of responses (equal
number of answers above and below this value) and the latter
indicates how much variability there is in the answers. Terms
that have more variability suggest wider interpretations of
their meanings. The terms in the tables relate quality to a 1- to 10-point numerical scale.
The grouping or categorizing of failure modes, consequences,
and underlying factors is a model design decision that must be
made. Use of variables and subvariables helps understandability when variables are grouped in a logical fashion, but also creates intermediate calculations. Some view this as an attractive
Table 2.5 Expressions of quality (median value and standard deviation on a 1- to 10-point scale)

Very good
Not too bad

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.
Table 2.6 Expressions of frequency (median value and standard deviation on a 1- to 10-point scale)

Very often
Fairly often
Moderately often

Source: From Rohrmann, B., "Verbal Qualifiers for Rating Scales: Sociolinguistic Considerations and Psychometric Data," Project report, University of Melbourne, Australia, September 2002.
aspect of a model, while others might merely see it as an unnecessary complication. Without categories of variables, the model takes on the look of a flat file, in a database design analogy. When using categories that look more like those of a relational database design, the interdependencies are more obvious.
The weightings of the risk variables, that is, their maximum
possible point values or adjustment factors, reflect the relative
importance of that item. Importance is based on the variable’s
role in adding to or reducing risk.
The following examples illustrate the way weightings can be
viewed. Suppose that the threat of AC-induced corrosion is
thought to represent 2% of the total threat of corrosion. It is a
relatively rare phenomenon. Suppose further that all corrosion
conditions and activities are thought to be worst case: the pipeline is in a harsh environment with no mitigation (no coatings, no cathodic protection, etc.) and atmospheric, internal, and buried metal corrosion are all thought to be imminent. If we now addressed all AC corrosion concerns only, then we would be adding 2% safety, reducing the threat of corrosion of any kind by 2% (and reducing the threat of AC-induced corrosion by 100%). As another example, if public education is
assumed to carry a weight of 15% of the third-party threat, then doing public education as well as it can be done should reduce the relative failure rate from third-party damage scenarios by 15%.
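The arithmetic behind these two weighting examples is simple enough to write out. The 2% and 15% weightings are the stated assumptions from the examples above; the variable names are illustrative.

```python
# Arithmetic of the weighting examples: fully addressing a sub-threat
# removes at most its weighted share of the overall threat.

corrosion_threat = 100.0        # worst case, in relative units
ac_weight = 0.02                # assumed: AC corrosion is 2% of corrosion threat

# Fully addressing AC corrosion removes 100% of that sub-threat,
# which is only 2% of the overall corrosion threat.
after_ac_fix = corrosion_threat * (1.0 - ac_weight)

third_party_threat = 100.0
education_weight = 0.15         # assumed: public education is 15% of the threat

# Doing public education as well as it can be done reduces the relative
# third-party failure rate by its full 15% weighting.
after_education = third_party_threat * (1.0 - education_weight)

print(round(after_ac_fix, 1), round(after_education, 1))
```

The same arithmetic runs in reverse when calibrating: if statistics show a mitigation's real effect differs from its weighted share, the weighting, not the score, is what should change.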
Weightings should be continuously revisited and modified
whenever evidence shows that adjustments are appropriate.
The weightings are especially important when absolute risk
calculations are being performed. For example, if an extra foot
of cover is assumed, via the weightings assigned, to reduce failure probability by 10% but an accumulation of statistical data
suggests the effect is closer to 20%, obviously the predictive
power of the model is improved by changing the weightings
accordingly. In actuality, it is very difficult to extract the true
influence of a single variable from the confounding influence
of the multitude of other variables that are influencing the scenario simultaneously. In the depth of cover example, the reality
is probably that the extra foot of cover impacts risk by 10% in
some situations, 50% in others, and not at all in still others. (See
also Chapter 8 for a discussion of sensitivity analysis.)
The issue of assigning weightings to overall failure mechanisms also arises in model development. In a relative risk model with failure mechanisms of substantially equivalent orders of magnitude, a simplification can be used. The four indexes shown in Chapters 3 through 6 correspond to common failure modes and have equal 0-100 point scales; all failure modes
are weighted equally. Because accident history (with regard to
cause of failures) is not consistent from one company to
another, it does not seem logical to rank one index over another
on an accident history basis. Furthermore, if index weightings
are based on a specific operator’s experience, that accident
experience will probably change with the operator’s changing
risk management focus. When an operator experiences many
corrosion failures, he will presumably take actions to specifically reduce corrosion potential. Over time, a different mechanism may consequently become the chief failure cause. So, the
weightings would need to change periodically, making the
tracking of risk difficult. Weightings should, however, be used
Designing a risk assessment model 2/33
to reflect beliefs about frequency of certain failure types when
linking relative models to absolute calculations or when there are large variations in expected failure frequencies among the
possible failure types.
Risk scoring
Direction of point scale
In a scoring-type relative risk assessment, one of two point
schemes is possible: increasing scores versus decreasing scores to represent increased risk. Either can be effectively used and each has advantages. As a risk score, it makes sense that higher numbers mean more risk. However, by analogy to a grading system and most sports and games (except golf), others prefer higher numbers being better: more safety and less risk.
Perhaps the most compelling argument for the “increasing
points = increasing safety” protocol is that it instills a mind-set
of increasing safety. “Increasing safety” has a meaning subtly
different from and certainly more positive than “lowering
risks.” The implication is that additional safety is layered
onto an already safe system, as points are acquired. This latter
protocol also has the advantage of corresponding to certain common expressions such as "the risk situation has deteriorated" = "scores have decreased" and "the risk situation has improved" = "scores have increased."
While this book uses an “increasing points = increasing
safety” scale in all examples of failure probability, note that this
choice can cause a slight complication if the relative risk
assessments are linked to absolute risk values. The complication arises since the indexes actually represent relative probability of survival, and in order to calculate a relative probability
of failure and link that to failure frequencies, an additional step
is required. This is discussed in Chapter 14.
Confusion can also arise in some models when the same variable is used in different parts of the model and has a location-specific scoring scheme. For instance, in the offshore environment, water depth is a risk reducer when it makes anchoring damage less likely. It is a risk increaser when it increases the chance for buckling. So the same variable, water depth, is a "good" thing in one part of the model and a "bad" thing somewhere else.
Combining variables
An additional modeling design feature involves the choice of
how variables will be combined. Because some variables will
indicate increasing risk and others decreasing, a sign convention (positive versus negative) must be established. Increasing
levels of preventions should lead to decreased risks while many
attributes will be adding risks (see earlier discussion of preventions and attributes). For example, the prevention of performing additional inspections should improve risk scores, while
risk scores deteriorate as more soil corrosivity indications
(moisture, pH, contaminants, etc.) are found.
Another aspect of combining variables involves the choice
of multiplication versus addition. Each has advantages. Multiplication allows variables to independently have a great impact on a score. Addition better illustrates the layering of adverse conditions or mitigations. In formal probability calculations, multiplication usually represents the and operation: if corrosion prevention = "poor" AND soil corrosivity = "high" then risk = "high." Addition usually represents the or operation: if depth of cover = "good" OR activity level = "low" then risk = "low."
Option 1
Risk variable = (sum of risk increasers) - (sum of risk reducers)
Where to assign weightings
In previous editions of this model, it was suggested that point values be set equal to weightings. That is, when a variable has a point value of 3, it represents 3% of the overall risk. The disadvantage of this system is that the user does not readily see
what possible values that variable could take. Is it a 5-point
variable, in which case a value of 3 means it is scoring
midrange? Or is it a 15-point variable, for which a score of
3 means it is relatively low?
An alternative point assignment scheme scores all variables on a fixed scale such as 0-10 points. This has the advantage of
letting the observer know immediately how “good” or “bad”
the variable is. For example, a 2 always means 20% from the
bottom and a 7 always means 70% of the maximum points that
could be assigned. The disadvantage is that, in this system,
weightings must be used in a subsequent calculation. This adds
another step to the calculation and still does not make the point
scale readily apparent. The observer does not know what the
70% variable score really means until he sees the weightings
assigned. A score of 7 for a variable weighted at 20% is quite
different from a score of 7 for a variable weighted at 5%.
In one case, the user must see the point scale to know that a score of, say, 4 points represents the maximum level of mitigation. In the alternate case, the user knows that 10 always represents the maximum level of mitigation, but does not know how important the risk will be until she sees the weighting of that variable.
For Option 1, the point scales for risk increasers and risk reducers are in the same direction. For example:
Corrosion threat = (environment) - [(coating) + (cathodic protection)]
Option 2
Risk variable = (sum of risk increasers) + (sum of risk reducers)
Point scales for risk increasers are often opposite from the scale
of risk reducers. For example, in an “increasing points means
increasing risk” scheme,
Corrosion threat = (environment) + [(coating) + (cathodic protection)]
where actual point values might be
(corrosion threat) = (24) + (-5) + ...
Option 3
In this approach, we begin with an assessment of the threat level
and then consider mitigation measures as adjustment factors.
So, we begin with a risk and then adjust the risk downward (if
increasing points = increasing risk) as mitigation is added:
Risk variable = (threat) x (sum of % threat reduction through mitigations)
Corrosion threat = (environment) x [(coating) + (cathodic protection)]
Option 3 avoids the need to create codes for interactions of variables. For example, a scoring rule such as “cathodic protection
is not needed = 10 pts” would not be needed in this scheme. It
would be needed in other scoring schemes to account for a case
where risk is low not through mitigation but through absence of threat.
The scoring should also attempt to define the interplay
of certain variables. For example, if one variable can be
done so well as to make certain others irrelevant, then the
scoring protocol should allow for this. For example, if patrol
(perhaps with a nominal weight 20% of the third-party
damage potential) can be done so well that we do not care
about any other activity or condition, then other pertinent
variables (such as public education. activity level, and depth
of‘cover) could be scored as NA (the best possible numerical
score) and the entire index is then based solely on patrol. In
theory, this could be the case for a continuous security presence in some situations. A scoring regime that uses multiplication rather than addition is better suited to capturing this
The variables shown in Chapters 3 through 6 use a variation
of option 2. All variables start at a value of 0, highest risk. Then
safety points are awarded for knowledge of less threatening
conditions and/or the presence of mitigations.
Any of the options can be effective as long as a point assignment manual is available to ensure proper and consistent scoring.
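As an illustration of the three combination options, the sketch below codes each rule as a function. All point values, and the 55% mitigation figure, are invented for this example; they are not prescribed scales.

```python
# Hypothetical sketch of the three scoring options discussed above.
# All point values are invented for illustration only.

def option1(environment, coating, cathodic_protection):
    """Option 1: risk increasers minus risk reducers (same-direction scales)."""
    return environment - (coating + cathodic_protection)

def option2(environment, coating, cathodic_protection):
    """Option 2: increasers plus reducers; reducers carry negative points."""
    return environment + (coating + cathodic_protection)

def option3(environment, pct_reduction):
    """Option 3: start with the raw threat, then scale it by the
    fraction remaining after mitigation."""
    return environment * (1.0 - pct_reduction)

# Example: a raw corrosion environment score of 24 points
raw = 24
score1 = option1(raw, coating=5, cathodic_protection=8)    # 24 - 13 = 11
score2 = option2(raw, coating=-5, cathodic_protection=-8)  # 24 + (-13) = 11
score3 = option3(raw, pct_reduction=0.55)                  # 24 * 0.45 = 10.8
```

Note that options 1 and 2 produce the same number; they differ only in where the sign convention lives (in the formula or in the point scale).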
Variable calculations
Some risk assessment models in use today combine risk variables using only simple summations. Other mathematical relationships might be used to create variables before they are
added to the model. The designer has the choice of where in
the process certain variables are created. For instance, D/t
(pipe diameter divided by wall thickness) is often thought to be
related to crack potential or strength or some other risk issue. A
variable called D/t can be created during data collection and its
value added to other risk variables. This eliminates the need to
divide D by t in the actual model. Alternatively, data for diameter and wall thickness could be made directly available to the
risk model’s algorithm which would calculate the variable D/t
as part of the risk scoring.
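The two places to create a derived variable such as D/t might look like the sketch below; the field names and pipe dimensions are illustrative assumptions.

```python
# Sketch: two equivalent places to create a derived variable such as D/t.
# Values are illustrative (a 12.75-in. pipe with 0.250-in. wall).

diameter = 12.75        # in.
wall_thickness = 0.250  # in.

# Approach 1: create D/t during data collection and store it as its own field
record = {"diameter": diameter, "wall": wall_thickness}
record["d_over_t"] = record["diameter"] / record["wall"]

# Approach 2: let the risk algorithm compute D/t on the fly from raw data
def d_over_t(rec):
    return rec["diameter"] / rec["wall"]

assert record["d_over_t"] == d_over_t(record)  # same number either way
```

The choice is mostly practical: precomputing simplifies the model but adds a field to maintain, while in-model computation keeps the stored data raw.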
Given the increased robustness of computer environments,
the ability to efficiently model more complex relationships is
leading to risk assessment models that take advantage of
this ability. Conditional statements (“IF X THEN Y”), including comparative relationships [“IF (pop density) > 2 THEN (design factor) = 0.6, ELSE (design factor) = 0.72”], are becoming more prevalent.
The use of these more complex algorithms to describe aspects of risk tends to mirror human reasoning and decision-making patterns. They are not unlike very sophisticated efforts to create expert systems and other artificial intelligence applications based on many simple rules that
represent our understanding. Examples of more complex
algorithms are shown in the following chapters and in
Appendix E.
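A minimal sketch of such a conditional rule, using the population-density example quoted above (the threshold and factors follow that example; the function name is an assumption):

```python
# Sketch of the IF ... THEN ... ELSE style rule quoted above.
# Threshold and design factors mirror the example in the text.

def design_factor(pop_density):
    """Apply a lower (more conservative) design factor where
    population density is higher."""
    if pop_density > 2:
        return 0.60
    else:
        return 0.72

assert design_factor(5) == 0.60  # dense area: conservative factor
assert design_factor(1) == 0.72  # sparse area: standard factor
```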
Direct evidence adjustments
Risk evaluation is done primarily through the use of variables
that provide indirect evidence of failure potential. This includes
knowledge of pipe characteristics, measurements of environmental conditions, and results of surveys. From these, we infer
the potential presence of active failure mechanisms or failure
potential. However, active failure mechanisms are directly
detected by in-line inspection (ILI), pressure testing, and/or
visual inspections, including those that might be prompted by a
leak. Pressure testing is included here as a direct means because
it will either verify that failure mechanisms, even if present,
have not compromised structural integrity or it will prompt a
visual inspection.
If direct evidence appears to be in conflict with risk assessment results (based on indirect evidence), then one of three
scenarios is true:
1. The risk assessment model is wrong; an important variable
has been omitted or undervalued or some interaction of
variables has not been properly modeled.
2. The data used in the risk assessment are wrong; actual
conditions are not as thought.
3. There actually is no conflict; the direct evidence is being
interpreted incorrectly or it represents an unlikely, but statistically possible event that the risk assessment had discounted due to its very low probability.
It is prudent to perform an investigation to determine which
scenario is the case. The first two each have significant implications regarding the utility of the risk management process.
The last is a possible learning opportunity.
Any conclusions based on previously gathered indirect
evidence should be adjusted or overridden when appropriate,
by direct evidence. This reflects common practice, especially for time-dependent mechanisms such as corrosion: best efforts produce an assessment of corrosion potential, but that assessment is periodically validated by direct observation.
The recommendation is that, whenever direct evidence of
failure mechanisms is obtained, assessments should assume
that these mechanisms are active. This assumption should
remain in place until an investigation, preferably a root cause
analysis (discussed later in this chapter), demonstrates that the
causes underlying the failure mechanisms are known and have
been addressed. For example, an observation of external corrosion damage should not be assumed to reflect old, already-mitigated corrosion. Rather, it should be assumed to represent active external corrosion unless the investigation concludes otherwise.
Direct or confirmatory evidence includes leaks, breaks,
anomalies detected by ILI, damages detected by visual inspection, and any other information that provides a direct indication
of pipe integrity, if only at a very specific point. The use of ILI
results in a risk assessment is discussed in Chapter 5.
The evidence should be captured in at least two areas of the
assessment: pipe strength and failure potential. If reductions
are not severe enough to warrant repairs, then the wall loss or
strength reduction should be considered in the pipe strength
evaluation (see Chapter 5). If repairs are questionable (use of
nonstandard materials or practices), then the repair itself
should be evaluated. This includes a repair’s potential to cause
unwanted stress concentrations. If complete and acceptable
repairs that restored full component strength have been made,
then risk assessment “penalties” can be removed. Regardless of
repair, evidence still suggests the potential for repeat failures in
the same area until the root cause identification and elimination
process has been completed.
Whether or not a root cause analysis has been completed,
direct evidence can be compiled in various ways for use in
a relative risk assessment. A count of incidences or a density
of incidences (leaks per mile, for example) will be an
appropriate use of information in some cases, while a zone-of-influence or anomaly-specific approach might be better
suited in others.
When such incidences are rather common, occurring regularly or clustering in locations, the density or count approaches can be useful. For example, the density of ILI anomalies of a certain type and size in a transmission pipeline or the density of nuisance leaks in a distribution main are useful risk indications (see Chapters 5 and 11).
When direct evidence is rare in time and/or space, a more compelling approach is to assign a zone of influence around
each incident. For example, a transmission pipe leak incident is
rare and often directly affects only a few square inches of pipe.
However, it yields evidence about the susceptibility of neighboring sections of pipeline. Therefore, a zone of influence, X
number of feet on either side of the leak event, can be assigned
around the leak. The length of pipeline within this zone of
influence is then conservatively treated as having leaked and
containing conditions that might suggest increased leak
susceptibility in the future.
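A zone-of-influence assignment like the one described might be sketched as follows; the 500-ft half-width and the stationing values are assumed purely for illustration (the text leaves X to the operator).

```python
# Sketch: assign a zone of influence around rare leak events.
# Stationing is in feet; the half-width X is an assumed policy value.

HALF_WIDTH_FT = 500  # X feet either side of each leak (assumed)

def in_zone_of_influence(station_ft, leak_stations_ft, half_width=HALF_WIDTH_FT):
    """True if this station falls within X feet of any recorded leak."""
    return any(abs(station_ft - leak) <= half_width for leak in leak_stations_ft)

leaks = [12_000, 45_300]
assert in_zone_of_influence(12_400, leaks)      # 400 ft from a leak
assert not in_zone_of_influence(30_000, leaks)  # far from both leaks
```

Pipe inside the zone is then conservatively scored as if it had leaked, per the text.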
The recommended process incorporating direct evidence
into a relative risk assessment is as follows:
A. Use all available leak history and ILI results, even when root cause investigations are not available, to help evaluate and score appropriate risk variables. Conservatively
assume that damage mechanisms are still active. For example, the detection of pipe wall thinning due to external corrosion implies
• The existence of a corrosive environment
• Failure of both coating and cathodic protection systems, or a special mechanism at work such as AC-induced corrosion or microbially induced corrosion
• A pipe wall thickness that is not as thought; pipe strength must be recalculated
Scores should be assigned accordingly.
The detection of damaged coating, gouges, or dents suggests previous third-party damage or substandard installation practices. This implies that
• Third-party damage activity is significant, or at least was at one time in the past
• Errors occurred during construction
• Pipe strength must be recalculated
Again, scores can be assigned accordingly.
B. Use new direct evidence to directly validate or adjust
risk scores. Compare actual coating condition, pipe wall
thickness, pipe support condition, soil corrosivity, etc., with
the corresponding risk variables’ scores. Compare the relative likelihood of each failure mode with the direct evidence. How does the model’s implied corrosion rate compare with wall loss observations? How does third-party
damage likelihood compare with dents and gouges on the
top or side of pipe? Is the design index measure of land
movement potential consistent with observed support
condition or evidence of deformation?
C. If disagreement is apparent (the direct evidence says something is actually “good” or “bad” while the risk model says the opposite), then perform an investigation. Based on the investigation results, do one or more of the following:
• Modify risk algorithms based on new knowledge.
• Modify previous condition assessments to reflect new knowledge. For example, “coating condition is actually bad, not fair as previously thought” or “cathodic protection levels are actually inadequate, despite 3-year-old close interval survey results.”
• Monitor the situation carefully. For example, “existing
third-party damage preventions are very protective of the
pipe and this recent detection of a top side dent is a rare
exception or old and not representative of the current situation. Rescoring is not appropriate unless additional
evidence is obtained suggesting that third-party damage
potential is actually higher than assumed.” Note that this
example is a nonconservative use of information and is
not generally recommended.
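Step A above, conservatively assuming that detected damage mechanisms are still active, might be sketched as follows. The score names and values are hypothetical, using the safety-points scale described earlier, where 0 is the worst score.

```python
# Sketch: conservatively override indirect-evidence scores when direct
# evidence (e.g., ILI-detected external metal loss) is found. The
# "worst score = 0" convention follows the option-2 scheme described
# earlier; the variable names and numbers are hypothetical.

scores = {"coating": 8, "cathodic_protection": 7, "environment": 6}

def apply_direct_evidence(scores, active_external_corrosion):
    """Assume mitigation has failed until a root cause analysis says otherwise."""
    adjusted = dict(scores)
    if active_external_corrosion:
        adjusted["coating"] = 0              # coating presumed failed
        adjusted["cathodic_protection"] = 0  # CP presumed failed
    return adjusted

adjusted = apply_direct_evidence(scores, active_external_corrosion=True)
assert adjusted["coating"] == 0 and adjusted["cathodic_protection"] == 0
assert adjusted["environment"] == scores["environment"]  # unchanged
```

The penalties stay in place until a root cause analysis shows the mechanism has been addressed, per the text.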
Role of leak history in risk assessment
Pipeline failure data often come at a high cost-an accident
happens. We can benefit from this unfortunate acquisition of
data by refining our model to incorporate the new information.
In actual practice, it is a common belief, which is sometimes
backed by statistical analysis, that pipeline sections that have
experienced previous leaks are more likely to have additional
leaks. Intuitive reasoning suggests that conditions that promote
one leak will most likely promote additional leaks in the same area.
Leak history should be a part of any risk assessment. It is
often the primary basis of risk estimations expressed in
absolute terms (see Chapter 14). A leak is strong evidence of
failure-promoting conditions nearby such as soil corrosivity,
inadequate corrosion prevention, problematic pipe joints, failure of the one-call system, active earth movements, or any of
many others. It is evidence of future leak potential. This evidence should be incorporated into a relative risk assessment
because, hopefully, the evaluator’s “degree of belief” has been
impacted by leaks. Each risk variable should always incorporate the best available knowledge of conditions and possibilities for promoting failure.
Where past leaks have had no root cause analysis and/or corrective action applied, risk scores for the type of failure can be
adjusted to reflect the presence of higher failure probability
factors. A zone of influence around the leak site can be
established (see Chapter 8) to penalize nearby portions of the pipeline.
In some pipelines, such as distribution systems (see Chapter
11) where some leak rate is routinely seen, the determination
as to whether a section of pipeline is experiencing a higher
frequency of leaks must be made on a relative basis. This can be
done by making comparisons with similar sections owned by
the company or with industry-wide leak rates, as well as by
benchmarking against specific other companies or by a combination of these.
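A relative comparison of leak frequency against a company-wide benchmark might be sketched as below; all counts, mileages, and the 2x flagging threshold are invented for illustration.

```python
# Sketch: flag distribution-system sections whose leak rate is high
# relative to a company-wide benchmark. Rates are leaks per mile-year;
# all numbers and the 2x threshold are invented.

def leak_rate(leaks, miles, years):
    return leaks / (miles * years)

company_benchmark = leak_rate(leaks=120, miles=400, years=5)  # 0.06/mi-yr

section = leak_rate(leaks=9, miles=10, years=5)               # 0.18/mi-yr
flagged = section > 2 * company_benchmark                     # relative test

assert flagged  # this section leaks well above the company norm
```

The same structure works for benchmarking against industry-wide rates or specific peer companies, as the text suggests.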
Note that an event history is only useful in predicting future
events to the extent that conditions remain unchanged. When
corrective actions are applied, the event probability changes.
Any adjustment for leak frequency should therefore be reanalyzed periodically.
Visual inspections
A visual inspection of an internal or external pipe surface may
be triggered by an ILI anomaly investigation, a leak, a pressure
test, or routine maintenance. If a visual inspection detects
pipe damage, then the respective failure mode score for
that segment of pipe should reflect the new evidence. Points
can be reassigned only after a root cause analysis has been
done and demonstrates that the damage mechanism has been
permanently removed.
For risk assessment purposes, a visual inspection is often
assumed to reflect conditions for some length of pipe beyond the
portions actually viewed. A conservative zone some distance
either side of the damage location can be assumed. This should
reflect the degree of belief and be conservative. For instance, if
poor coating condition is observed at one site, then poor coating
condition should be assumed for as far as those conditions
(coating type and age, soil conditions, etc.) might extend.
As noted earlier, penalties from visual inspections are
removed through root cause analysis and removal of the root
cause. Historical records of leaks and visual inspections should be included in the risk assessment even if they do not completely
document the inspection, leak cause, or repair as is often the
case. Because root cause analyses for events long ago are problematic, and their value in a current condition assessment is
arguable, the weighting of these events is often reduced,
perhaps in proportion to the event’s age.
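Reducing an old event's weighting in proportion to its age could be sketched as below; the linear scheme and the 20-year horizon are assumptions for illustration, not the text's prescription.

```python
# Sketch: reduce the weight of an old, poorly documented leak record
# in proportion to its age. The linear form and 20-year horizon are
# assumed policy choices.

HORIZON_YEARS = 20  # beyond this, the event no longer influences the score

def event_weight(age_years, horizon=HORIZON_YEARS):
    """Linearly de-weight historical events as they age."""
    return max(0.0, 1.0 - age_years / horizon)

assert event_weight(0) == 1.0   # a fresh event counts fully
assert event_weight(10) == 0.5  # half weight at mid-horizon
assert event_weight(25) == 0.0  # very old events drop out entirely
```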
Root cause analyses
Pipeline damage is very strong evidence of failure mechanisms
at work. This should be captured in the risk assessment.
However, once the cause of the damage has been removed, if it
can be, then the risk assessment should reflect the now safer
condition. Determining and removing the cause of a failure
mechanism is not always easy. Before the evidence provided by
actual damage is discounted, the evaluator should ensure that
the true underlying cause has been identified and addressed.
There are no rules for determining when a thorough and complete investigation has been performed. To help the evaluator
make such a judgment, the following concepts regarding root
cause analyses are offered [32].
A root cause analysis is a specialized type of incident investigation process that is designed to find the lowest level contributing causes to the incident. More conventional investigations often fail to arrive at this lowest level.
For example, assume that a leak investigation reveals that a
failed coating contributed to a leak. The coating is subsequently
repaired and the previously assigned leak penalty is removed
from the risk assessment results. But then, a few years later,
another leak appears at the same location. It turns out that the
main root cause was actually soil movements that will damage
any coating, eventually leading to a repeat leak (discounting the role of other corrosion preventions; see Chapter 3). In this case,
the leak penalty in the risk assessment should have been
removed only after addressing the soil issue, not simply the
coating repair.
This example illustrates that the investigators stopped the
analysis too early by not determining the causes of the
damaged coating. The root is often a system of causes that
should be defined in the analysis step. The very basic understanding of cause and effect is that every effect has causes
(plural). There is rarely only one root cause. The focus of
any investigation or risk assessment is ultimately on effective solutions that prevent recurrence. These effective solutions are found by being very diligent in the analysis step (the cause-and-effect analysis).
A typical indication of an incomplete analysis is missing evidence. Each cause-and-effect relationship should be validated
with evidence. If we do not have evidence, then the cause-and-effect relationship cannot be validated. Evidence must be
added to all causes in the analysis step.
In the previous example, the investigators were missing the additional causes and their evidence to causally explain why the coating was damaged. If the investigators had evidence of coating damage, then the next question should have been “Why was
the coating damaged?” A thorough analysis addresses the system of causes. If investigators cannot explain why the coating
was damaged then they have not completed the investigation.
Simply repairing the coating is not going to be an effective solution.
Technically, there is no end to a cause-and-effect chain; there is no end to the “Why?” questions. Common terminology
includes root cause, direct cause, indirect cause, main cause, primary cause, contributing cause, proximate cause, physical
cause, and so on. It is also true that between any cause-and-effect relationship there are more causes that can be added; we can always ask more “Why?” questions between any cause and
effect. This allows an analysis to dig into whatever level of
detail is necessary.
The critical point here is that the risk evaluator should not
discount strong direct evidence of damage potential unless
there is also compelling evidence that the damage-causing
mechanisms have been permanently removed.
V. Lessons learned in establishing a risk
assessment program
As the primary ingredient in a risk management system, a risk
assessment process or model must first be established. This is
no small undertaking and, as with any undertaking, is best
accomplished with the benefit of experience. The following
paragraphs offer some insights gained through development of
many pipeline risk management programs for many varied circumstances. Of course, each situation is unique and any rules of
thumb are necessarily general and subject to many exceptions
to the rules. To some degree, they also reflect a personal preference, but nonetheless are offered here as food for thought for
those embarking on such programs. These insights include
some key points repeated from the first two chapters of this book.
The general lessons learned are as follows:
Work from general to specific.
Think “organic.”
Avoid complexity.
Use computers wisely.
Build the program as you would build a new pipeline.
Study your results.
Avoid complexity
Every single component of the risk model should yield more benefits than the cost it adds in terms of complexity and data-gathering efforts. Challenge every component of the risk model
for its ability to genuinely improve the risk knowledge at a
reasonable cost. For example:
Don’t include an exotic variable unless that variable is a
useful risk factor.
Don’t use more significant digits than is justified.
Don’t use exponential notation numbers if a relative scale
can be appropriately used.
Don’t duplicate existing databases; instead, access information from existing databases whenever possible. Duplicate
data repositories will eventually lead to data inconsistencies.
Don’t use special factors that are only designed to change
numerical scales. These tend to add more confusion than
their benefit in creating easy-to-use numbers.
Avoid multiple levels of calculations whenever possible.
Don’t overestimate the accuracy of your results, especially in presentations and formal documentation. Remember the high degree of uncertainty associated with this type of effort.
We now take a look at the specifics of these lessons learned.
Work from general to specific
Get the big picture first. This means “Get an overview assessment done for the whole system rather than getting
every detail for only a portion of the system.” This has two advantages:
1. No matter how strongly the project begins, things may
change before project completion. If an interruption does
occur, at least a general assessment has been done and some
useful information has been generated.
2. There are strong psychological benefits to having results
(even if very preliminary; caution is needed here) early in
the process. This provides incentives to refine and improve
preliminary results. So, having the entire system evaluated
to a preliminary level gives timely feedback and should
encourage further work.
It is easy to quickly assess an entire pipeline system by limiting the number of risk variables in the assessment. Use only a
critical few, such as population density, type of product, operating pressure, perhaps incident experience, and a few others.
The model can then later be “beefed up” by adding the variables
that were not used in the first pass. Use readily available information whenever possible.
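A first-pass screening model limited to a critical few variables might look like the sketch below. The variables follow the suggestion above; the weights and scales are entirely invented and would be refined in later passes.

```python
# Sketch of a deliberately coarse first-pass model using only a
# critical few, readily available variables. Weights and scales
# are hypothetical placeholders.

def first_pass_score(pop_density, product_hazard, pressure_psi, past_incidents):
    """Higher score = higher relative risk in this quick screening pass."""
    score = 0.0
    score += 3.0 * pop_density      # people nearby dominate consequence
    score += 2.0 * product_hazard   # e.g., 0 (benign) .. 5 (highly volatile)
    score += pressure_psi / 100.0   # crude stress proxy
    score += 5.0 * past_incidents   # prior leaks are strong evidence
    return score

segment_a = first_pass_score(pop_density=4, product_hazard=3,
                             pressure_psi=900, past_incidents=1)
segment_b = first_pass_score(pop_density=1, product_hazard=3,
                             pressure_psi=400, past_incidents=0)
assert segment_a > segment_b  # screen the riskier segment for detail first
```

The point of such a sketch is only ranking: it identifies where the later, "beefed up" model should be applied first.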
Think “organic”
Imagine that the risk assessment process and even the
model itself are living, breathing entities. They will grow
and change over time. There is the fruit, the valuable answers that are used to directly improve decision making.
The ideal process will continuously produce ready-to-eat fruit
that is easy to “pick” and use without any more processing.
There are also the roots, the behind-the-scenes techniques and knowledge that create the fruit. To ensure the fruit is good, the roots must be properly cared for. Feed and strengthen the roots by using HAZOPS, statistical analysis, FMEA, event trees, fault trees, and other specific risk tools occasionally. Such tools provide the underpinnings for the
risk model.
Allow for growth because new inspection data, new
inspection techniques, new statistical data sets to help determine weightings, missed risk indicators, new operating disciplines, and so on will arise. Plan for the most flexible
environment possible. Make changes easy to incorporate.
Anticipate that regardless of where the program begins and what the initial focus was, eventually, all company personnel might be visiting and “picking the fruit” provided by this process.
Use computers wisely
Too much reliance on computers is probably more dangerous
than too little. In the former, knowledge and insight can be
obscured and even convoluted. In the latter, the chief danger is
that inefficiencies will result, an undesirable, but not critical, event. Regardless of potential misuse, however, computers can greatly increase the strength of the risk assessment process, and no modern program is complete without extensive use of them.
The modern software environment is such that information is
easily moved between applications. In the early stages of a project, the computer should serve chiefly as a data repository.
Then, in subsequent stages, it should house the algorithm: how the raw information such as wall thickness, population
density, soil type, etc., is turned into risk information. In later
stages of the project, data analysis and display routines should be available. Finally, computer routines to ensure ease and consistency of data entry, model tweaking, and generation of required output should be available.
Software use in risk modeling should always follow program
development-not lead it.
Early stage. Use pencil and paper or simple graphics software to sketch preliminary designs of the risk assessment
system. Also use project management tools if desired to plan
the risk management project.
Intermediate stages. Use software environments that can
store, sort, and filter moderate amounts of data and generate
new values from arithmetic and logical (if . . . then . . . else . . .) combinations of input data. Choices include modern spreadsheets and desktop databases.
Later stages. Provide for larger quantity data entry, manipulation, query, display, etc., in a long-term, secure, and user-friendly environment. If spatial linking of information is
desired, consider migrating to geographical information systems (GIS) platforms. If multiuser access is desired, consider
robust database environments.
Computer usage in pipeline risk assessment and management is further discussed in Chapter 8.
Build the program as you would build a new pipeline
A useful way to view the establishment of a risk management
program, and in particular the risk assessment process, is to
consider a direct analogy with new pipeline construction. In
either case, a certain discipline is required. As with new construction, failures in risk modeling occur through inappropriate
expectations and poor planning, while success happens through
thoughtful planning and management.
Below, the project phases of a pipeline construction are
compared to a risk assessment effort.
I. Conceptualization and scope creation phase:
Pipeline: Determine the objective, the needed capacity,
the delivery parameters and schedule.
Risk assessment: Several questions to the pipeline operator may better focus the effort and direct the choice of a
formal risk assessment technique:
What data do you have?
What is your confidence in the predictive value of the data?
What are the resource demands (and availability) in
terms of costs, man-hours, and time to set up and maintain a risk model?
What benefits do you expect to accrue, in terms of cost
savings, reduced regulatory burdens, improved public
support, and operational efficiency?
Subsequent defining questions might include: What portions of your system are to be evaluated? Pipeline only?
Tanks? Stations? Valve sites? Mainlines? Branch lines?
Distribution systems? Gathering systems? Onshore/offshore? To what level of detail?
Estimate the uses for the model, then add a margin of safety
because there will be unanticipated uses. Develop a schedule
and set milestones to measure progress.
II. Route selection/ROW acquisition:
Pipeline: Determine the optimum routing, begin the
process of acquiring needed ROW.
Risk assessment: Determine the optimum location for the
model and expertise. Centrally done from corporate
headquarters? Field offices maintain and use information? Unlike the pipeline construction analogy, this aspect
is readily changed at any point in the process and does not
have to be finally decided at this early stage of the project.
III. Design:
Pipeline: Perform detailed design hydraulic calculations;
specify equipment, control systems, and materials.
Risk assessment: The heart of the risk assessment will be
the model or algorithm, that component which takes
raw information such as wall thickness, population density, soil type, etc., and turns it into risk information.
Successful risk modeling involves a balancing between
various issues including:
Identifying an exhaustive list of contributing factors versus choosing the critical few to incorporate in a model
(complex versus simple)
Hard data versus engineering judgment (how to incorporate widely held beliefs that do not have supporting
statistical data)
Uncertainty versus statistics (how much reliance to
place on predictive power of limited data)
Flexibility versus situation-specific model (ability to
use same model for a variety of products, geographical
locations, facility types, etc.)
It is important that all risk variables be considered even if only
to conclude that certain variables will not be included in the
final model. In fact, many variables will not be included when
such variables do not add significant value but reduce the
usability of the model. These “use or don’t use” decisions
should be done carefully and with full understanding of the role
of the variables in the risk picture.
Note that many simplifying assumptions are often made,
especially in complex phenomena like dispersion modeling,
fire and explosion potentials, etc., in order to make the risk
model easy to use and still relatively robust.
Both probability variables and consequence variables are
examined in most formal risk models. This is consistent with
the most widely accepted definition of risk:
Event risk = (event probability) x (event consequence)
(See also “VI. Commissioning” for more aspects of a successful risk model design.)
IV. Material procurement:
Pipeline: Identify long-delivery-time items, prepare specifications, determine delivery and quality control requirements.
Risk assessment: Identify data needs that will take the
longest to obtain and begin those efforts immediately.
Identify data formats and level of detail. Take steps to
minimize subjectivity in data collection. Prepare data
collection forms or formats and train data collectors to
ensure consistency.
V. Construction:
Pipeline: Determine number of construction spreads,
material staging, critical path schedule, inspection protocols.
Risk assessment: Form the data collection team(s), clearly
define roles and responsibilities, create critical path
schedule to ensure timely data acquisition, schedule
milestones, and take steps to ensure quality assurance/
quality control.
VI. Commissioning:
Pipeline: Testing of all components, start-up programs
Risk assessment: Use statistical analysis techniques
to partially validate model results from a numerical
basis. Perform a sensitivity analysis and some trial
“what-ifs” to ensure that model results are believable
and consistent. Perform validation exercises with experienced and knowledgeable operating and maintenance personnel.
It is hoped that the risk assessment characteristics were earlier specified in the design and concept phase of the project, but here is a final place to check to ensure the following:
All failure modes are considered.
All risk elements are considered and the most critical
ones are included.
Failure modes are considered independently as well as
in aggregate.
All available information is being appropriately utilized.
Provisions exist for regular updates of information, including new types of data.
Consequence factors are separable from probability factors.
Weightings, or other methods to recognize relative
importance of factors, are established.
The rationale behind weightings is well documented
and consistent.
A sensitivity analysis has been performed.
The model reacts appropriately to failures of any type.
Risk elements are combined appropriately (“and” versus “or” combinations).
Steps are taken to ensure consistency of evaluation.
Risk assessment results form a reasonable statistical
distribution (outliers?).
There is adequate discrimination in the measured
results (signal-to-noise ratio).
Comparisons can be made against fixed or floating
standards or benchmarks.
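A simple one-at-a-time sensitivity check on index weightings, as called for in the commissioning checklist, might be sketched as below; the weights, index values, and 10% perturbation are invented for illustration.

```python
# Sketch of a one-at-a-time sensitivity check on index weightings,
# as suggested for the commissioning step. Weights and index values
# are invented; the four names echo the four-index structure.

weights = {"third_party": 0.30, "corrosion": 0.30,
           "design": 0.20, "incorrect_ops": 0.20}
indexes = {"third_party": 60.0, "corrosion": 40.0,
           "design": 80.0, "incorrect_ops": 70.0}

def index_sum(weights, indexes):
    return sum(weights[k] * indexes[k] for k in weights)

base = index_sum(weights, indexes)

# Perturb each weight by +10% (unnormalized, for a quick screen) and
# record how much the total moves; large swings flag influential variables.
sensitivity = {}
for k in weights:
    bumped = dict(weights)
    bumped[k] *= 1.10
    sensitivity[k] = index_sum(bumped, indexes) - base

most_influential = max(sensitivity, key=sensitivity.get)
```

Results that swing wildly under small weight changes signal a model whose rankings may not be believable, which is exactly what this commissioning step is meant to catch.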
VII. Project completion:
Pipeline: Finalize manuals, complete training, ensure
maintenance protocols are in place, and turn system
over to operations.
Risk assessment: Carefully document the risk assessment process and all subprocesses, especially the detailed workings of the algorithm or central model. Set up administrative processes to support an ongoing program. Ensure that control documents cover the details of all aspects of a good administrative program, including:
Defining roles and responsibilities
Performance monitoring and feedback
Process procedures
Management of change
Communication protocols
Study the results
This might seem obvious, but it is surprising how many owners really do not appreciate what they have available after
completing a thorough risk assessment. Remember that your
final risk numbers should be completely meaningful in a
practical, real-world sense. They should represent everything
you know about that piece of pipe (or other system component): all of the collective years of experience of your organization, all the statistical data you can gather, all your gut
feelings, all your sophisticated engineering calculations. If
you can’t really believe your numbers, something is wrong
with the model. When, through careful evaluation and much
experience, you can really believe the numbers, you will find
many ways to use them that you perhaps did not foresee. They
can be used to
Design an operating discipline
Assist in route selection
Optimize spending
Strengthen project evaluation
Determine project prioritization
Determine resource allocation
Ensure regulatory compliance
VI. Examples of scoring algorithms
Sample relative risk model
The relative risk assessment model outlined in Chapters 3
through 7 is designed to be a simple and straightforward
pipeline risk assessment model that focuses on potential consequences to public safety and environment preservation. It provides a framework to ensure that all critical aspects of risk
are captured. Figure 2.4 shows a flowchart of this model.
This framework is flexible enough to accommodate any level
of detail and data availability. For most variables, a sample
point-scoring scheme is presented. In many cases, alternative
scoring schemes are also shown. Additional risk assessment
examples can be found in the case studies of Chapter 14 and in
Appendix E.
The pipeline risk picture is examined in two general parts.
The first part is a detailed itemization and relative weighting of
all reasonably foreseeable events that may lead to the failure of
a pipeline: "What can go wrong?" and "How likely is it to go
wrong?" This highlights operational and design options that
can change the probability of failure (Chapters 3 through 6).
The second part is an analysis of the potential consequences
should a failure occur (Chapter 7). The two general parts correspond to the two factors used in the most commonly accepted
definition of risk:

Risk = (event likelihood) x (event consequence)
The failure potential component is further broken into four
indexes (see Figure 2.4). The indexes roughly correspond to
categories of reported pipeline accident failures. That is, each
index reflects a general area to which, historically, pipeline
accidents have been attributed. By considering each variable in
each index, the evaluator arrives at a numerical value for that
index. The four index values are then summed to a total value
(called the index sum) representing the overall failure probability (or survival probability) for the segment evaluated. The individual variable values, not just the total index score, are
preserved, however, for detailed analysis later.
The primary focus of the probability part of the assessment is
the potential for a particular failure mechanism to be active.
This is subtly different from the likelihood of failure.
Especially in the case of a time-dependent mechanism such as
corrosion, fatigue, or slow earth movements, the time to failure
is related to factors beyond the presence of a failure mechanism. These include the resistance of the pipe material, the
aggressiveness of the failure mechanism, and the time of
exposure. These, in turn, can be further examined. For instance,
the material resistance is a function of material strength;
dimensions, most notably pipe wall thickness; and the stress
level. The additional aspects leading to a time-to-fail estimate
are usually more appropriately considered in specific investigations.
In the second part of the evaluation, an assessment is made of
the potential consequences of a pipeline failure. Product characteristics, pipeline operating conditions, and the pipeline surroundings are considered in arriving at a consequence factor.
The consequence score is called the leak impact factor and
includes acute as well as chronic hazards associated with product releases. The leak impact factor is combined with the index
sum (by dividing the index sum by the leak impact factor) to arrive at a final risk score for each section
of pipeline. The end result is a numerical risk value for each
pipeline section. All of the information incorporated into this
number is preserved for a detailed analysis, if required. The
higher-level variables of the entire process can be seen in the
flowchart in Figure 2.4.
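As a sketch of how these pieces combine, the calculation described above (index sum divided by leak impact factor) might be coded as follows; the index names follow Chapters 3 through 6, but the function name and all scores are illustrative, not from the text:

```python
# Illustrative sketch of the relative risk calculation described above.
# The four indexes follow Chapters 3-6; all point values are hypothetical.

def relative_risk_score(indexes, leak_impact_factor):
    """Combine the four failure-probability indexes with the
    consequence factor. Higher scores indicate lower relative risk."""
    index_sum = sum(indexes.values())      # 0-400 in the sample model
    return index_sum / leak_impact_factor  # divide by consequence factor

segment = {
    "third_party_damage": 55,   # 0-100 pts each (hypothetical scores)
    "corrosion": 62,
    "design": 71,
    "incorrect_operations": 80,
}

score = relative_risk_score(segment, leak_impact_factor=4.0)
print(score)  # 268 / 4.0 = 67.0
```

Because all the individual variable values are preserved, the same structure lets an evaluator drill back down from the final score to the contributing items.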
Basic assumptions
Some general assumptions are built into the relative risk assessment model discussed in Chapters 3 through 7. The user, and
especially the customizer, of this system should be aware of
these and make changes where appropriate.
Independence Hazards are assumed to be additive but independent. Each item that influences the risk picture is considered separately from all other items: it independently
influences the risk. The overall risk assessment combines all of
the independent factors to get a final number. The final number
reflects the "area of opportunity" for a failure mechanism to be
active because the number of independent factors is believed to
be directly proportional to the risk.
For example, if event B can only occur if event A has first
occurred, then event B is given a lower weighting to reflect the
fact that there is a lower probability of both events happening.
However, the example risk model does not normally stipulate
that event B cannot happen without event A.
Worst case When multiple conditions exist within the same
pipeline segment, it is recommended that the worst-case condition for a section govern the point assignment. The rationale for
this is discussed in Chapter 1. For instance, if a 5-mile section
of pipeline has 3 ft of cover for all but 200 ft of its length (which
has only 1 ft of cover), the section is still rated as if the entire
5-mile length has only 1 ft of cover. The evaluator can work
around this through his choice of section breaks (see Sectioning
of the Pipeline section earlier in this chapter). Using modern
segmentation strategies, there is no reason to have differing risk
conditions within the same pipeline segment.
Relative Unless a correlation to absolute risk values has
been established, point values are meaningful only in a relative
sense. A point score for one pipeline section only shows how
that section compares with other scored sections. Higher point
values represent increased safety (decreased probability of
failure) in all index values (Chapters 3 through 6). Absolute
risk values can be correlated to the relative risk values in some
cases, as discussed in Chapter 14.
Judgment based The example point schedules reflect experts'
opinions based on their interpretations of pipeline industry
experience as well as personal pipelining experience. The
relative importance of each item (reflected in the
weighting of the item) is similarly the experts' judgment.
If sound statistical data are available, they are incorporated
into these judgments. However, in many cases, useful frequency-of-occurrence data are not available. Consequently,
there is an element of subjectivity in this approach.
Public Threats to the general public are of most interest
here. Risks specific to pipeline operators and pipeline company
personnel can be included as an expansion to this system, but
only with great care since a careless addition may interfere with
the objectives of the evaluation. In most cases, it is believed that
other possible consequences will be proportional to public
safety risks, so the focus on public safety will usually fairly
represent most risks.
Figure 2.4 Flowchart of relative risk index system.
Mitigations It is assumed that mitigations never completely
erase the threat. This is consistent with the idea that the condition of "no threat" will have less risk than the condition of "mitigated threat," regardless of the robustness of the mitigation
measures. It also shows that even with much prevention in
place, the hazard has not been removed.
Other examples
See Appendix E for examples of other risk scoring algorithms
for pipelines in general. Additional examples are included in
several other chapters, notably in Chapters 9
through 13, where discussions involve the assessments of
special situations.
Third-party Damage
Third-party Damage Index
A. Minimum Depth of Cover      0-20 pts
B. Activity Level              0-20 pts
C. Aboveground Facilities      0-10 pts
D. Line Locating               0-15 pts
E. Public Education Programs   0-15 pts
F. Right-of-way Condition      0-5 pts
G. Patrol Frequency            0-15 pts
This table lists some possible variables and weightings that
could be used to assess the potential for third-party damages to
a typical transmission pipeline (see Figures 3.1 and 3.2).
Pipeline operators usually take steps to reduce the possibility of
damage to their facilities by others. The extent to which mitigating steps are necessary depends on how readily the system
can be damaged and how often the chance for damage occurs.
Third-party damage, as the term is used here, refers to any accidental damage done to the pipe as a result of activities of personnel
not associated with the pipeline. This failure mode is also sometimes called outside force or external force, but those descriptions
would presumably include damaging earth movements. We use
third-party damage as the descriptor here to focus the analyses
specifically on damage caused by people not associated with the
pipeline. Potential earth movement damage is addressed in the
design index discussion of Chapter 5. Intentional damages are
covered in the sabotage module (Chapter 9). Accidental damages
done by pipeline personnel and contractors are covered in the
incorrect operations index chapter (Chapter 6).
U.S. Department of Transportation (DOT) pipeline accident
statistics indicate that third-party intrusions are often the leading cause of pipeline failure. Some 20 to 40 percent of all
pipeline failures in most time periods are attributed to third-party damages. In spite of these statistics, the potential for
third-party damage is often one of the least considered aspects
of pipeline hazard assessment.
The good safety record of pipelines has been attributed in
part to their initial installation in sparsely populated areas and
Figure 3.1 Basic risk assessment model.
Figure 3.2 Assessing third-party damage potential: sample of data used to score the third-party damage index.

Minimum depth of cover: soil cover; type of soil (rock, clay, sand, etc.); pavement type (asphalt, concrete, none, etc.); warning tape or mesh; water depth
Activity level: population density; stability of the area (construction, renovation, etc.); one-calls; other buried utilities; anchoring, dredging
Aboveground facilities: vulnerability (distance, barriers, etc.); threats (traffic volume, traffic type, aircraft, etc.)
One-call system: response by owner; well known and used
Public education: methods (door-to-door, mail, advertisements, etc.)
Right-of-way condition: signs (size, spacing, lettering, phone numbers, etc.); markers (air vs. ground, size, visibility, spacing, etc.)
Patrol: ground patrol frequency; ground patrol effectiveness; air patrol frequency; air patrol effectiveness
their burial 2.5 to 3 feet deep. However, encroachments of population and land development activities are routinely threatening many pipelines today.
In the period from 1983 through 1987, eight deaths, 25
injuries, and more than $14 million in property damage
occurred in the hazardous liquid pipeline industry due solely to
excavation damage by others. These types of pipeline failures
represent 259 accidents out of a total of 969 accidents from all
causes. This means that 26.7% of all hazardous liquid pipeline
accidents were caused by excavation damage [87].
In the gas pipeline industry, a similar story emerges: 430
incidents from excavation damage were reported in the
1984-1987 period. These accidents resulted in 26 deaths, 148
injuries, and more than $18 million in property damage.
Excavation damage is thought to be responsible for 10.5% of
incidents reported for distribution systems, 22.7% of incidents
reported for transmission/gathering pipelines, and 14.6% of all
incidents in gas pipelines [87]. European gas pipeline experience, based on almost 1.2 million mile-years of operations in
nine Western European countries, shows that third-party interference represents approximately 50% of all pipeline failures.
To quantify the risk exposure from excavation damage, an estimate of the total number of excavations that present a chance
for damage can be made. Reference [64] discusses the Gas
Research Institute’s (GRI’s) 1995 study that makes an effort to
determine risk exposure for the gas industry. The study surveyed 65 local distribution companies and 35 transmission
companies regarding line hits. The accuracy of the analysis was
limited by the response: less than half (41%) of the companies
responded, and several major gas-producing states were poorly
represented (only one respondent from Texas and one from
Oklahoma). The GRI estimate was determined by extrapolation
and may be subject to a large degree of error because the data
sample was not representative.
Based on survey responses, however, GRI calculated an
approximate magnitude of exposure. For those companies
that responded, a total of 25,123 hits to gas lines were recorded
in 1993; from that, the GRI estimated total U.S. pipeline
hits in 1993 to be 104,128. For a rate of exposure, this number
can be compared to pipeline miles: For 1993, using a reported
1,778,600 miles of gas transmission, main, and service
lines, the calculated exposure rate was 58 hits per 1000 line
miles. Transmission lines had a substantially lower experience:
a rate of 5.5 hits per 1000 miles, with distribution lines suffering 71 hits per 1000 miles [64]. All rates are based on limited data.
Because the risk of excavation damage is associated with
digging activity rather than system size, “hits per digs” is a
useful measure of risk exposure. For the same year that GRI
conducted its survey, one-call systems collectively received
more than an estimated 20 million calls from excavators.
(These calls generated 300 million work-site notifications
for participating members to mark many different types of
underground systems.) Using GRI's estimate of hits, the risk
exposure rate for 1993 was 5 hits per 1000 notifications to dig.
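As a quick arithmetic check, the exposure rates quoted above can be reproduced from the 1993 figures cited in [64]:

```python
# Reproducing the 1993 exposure-rate arithmetic cited above.
estimated_hits = 104_128             # GRI extrapolated U.S. pipeline hits, 1993
line_miles = 1_778_600               # gas transmission, main, and service lines
one_call_notifications = 20_000_000  # approximate excavator calls to one-call systems

hits_per_1000_miles = estimated_hits / line_miles * 1000
hits_per_1000_calls = estimated_hits / one_call_notifications * 1000

print(round(hits_per_1000_miles, 1))  # ~58.5 (the text reports 58)
print(round(hits_per_1000_calls))     # ~5 hits per 1000 notifications
```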
Risk variables
Many mitigation measures are in place in most Western countries to reduce the threat of third-party damages to pipelines.
Nonetheless, recent experience in most countries shows that
this remains a major threat, despite often mandatory systems
such as one-call systems. Reasons for continued third-party
damage, especially in urban areas, include
Smaller contractors ignorant of permit or notification requirements
No incentive for excavators to avoid damaging the lines when repair cost (to the damaging party) is smaller than avoidance cost
Inaccurate maps/records
Imprecise locations by the operator.
Many of these situations are evaluated as variables in the
suggested risk assessment model.
The pipeline designer and, perhaps to an even greater extent,
the operator can affect the probability of damage from third-party activities. As an element of the total risk picture, the probability of accidental third-party damage to a facility depends on

The ease with which the facility can be reached by a third party
The frequency and type of third-party activities nearby.
Possible offenders include
Excavating equipment
Vehicular traffic
Farming equipment
Seismic charges
Telephone posts
Wildlife (cattle, elephants, birds, etc.)
Factors that affect the susceptibility of the facility include
Depth of cover
Nature of cover (earth, rock, concrete, paving, etc.)
Man-made barriers (fences, barricades, levees, ditches, etc.)
Natural barriers (trees, rivers, ditches, rocks, etc.)
Presence of pipeline markers
Condition of right of way (ROW)
Frequency and thoroughness of patrolling
Response time to reported threats.
The activity level is often judged by items such as:
Population density
Construction activities nearby
Proximity and volume of rail or vehicular traffic
Number of other buried utilities in the area.
Serious damage to a pipeline is not limited to actual punctures of the line. A mere scratch on a coated steel pipeline damages the corrosion-resistant coating. Such damage can lead to
accelerated corrosion and ultimately a corrosion failure perhaps years in the future. If the scratch is deep enough to have
removed enough metal, a stress concentration area (see Chapter
5) could be formed, which again, perhaps years later, may lead
to a failure from fatigue, either alone or in combination with
some form of corrosion-accelerated cracking.
This is one reason why public education plays such an important role in damage prevention. To the casual observer, a minor
dent or scratch in a steel pipeline may appear insignificant, certainly not worthy of mention. A pipeline operator knows the
potential impact of any disturbance to the line. Communicating
this to the general public increases pipeline safety.
Several variables are thought to play a critical role in the
threat of third-party damages. Measuring these variables can
therefore provide an assessment of the overall threat. Note that
in the approach described here, this index measures the potential for third-party damage, not the potential for pipeline failure from third-party damages. This is a subtle but important
distinction. If the evaluator wishes to measure the latter in a single assessment, additional variables such as pipe strength, operating stress level, and characteristics of the potential third-party
intrusions (such as equipment type and strength) would need to
be added to the assessment.
What are believed to be the key variables to consider in
assessing the potential for third-party damage are discussed in
the following sections. Weightings reflect the relative percentage contribution of each variable to the overall threat of third-party damage.
Assessing third-party damage potential
A. Minimum depth of cover (weighting: 20%)
The minimum depth of cover is the amount of earth, or equivalent cover, over the pipeline that serves to protect the pipe from
third-party activities.
A schedule or simple formula can be developed to assign
point values based on depth of cover. In this formula, increasing
points indicate a safer condition; this convention is used
throughout this book. A sample formula for depth of cover is as
(Amount of cover in inches) / 3 = point value, up to a maximum of
20 points

For instance,

42 in. of cover = 42 / 3 = 14 points
24 in. of cover = 24 / 3 = 8 points
Points should be assessed based on the shallowest location
within the section being evaluated. The evaluator should feel
confident that the depth of cover data are current and accurate;
otherwise, the point assessments should reflect the uncertainty.
Experience and logic indicate that less than one foot of
cover may actually do more harm than good. It is enough cover
to conceal the line but not enough to protect the line from even
shallow earth-moving equipment (such as agricultural equipment). Three feet of cover is a common amount of cover
required by many regulatory agencies for new construction.
Credit should also be given for comparable means of protecting the line from mechanical damage. A schedule can
be developed for these other means, perhaps by equating
the mechanical protection to an amount of additional earth
cover that is thought to provide equivalent protection. For example:

2 in. of concrete coating = 8 in. of additional earth cover
4 in. of concrete coating = 12 in. of additional earth cover
Pipe casing = 24 in. of additional cover
Concrete slab (reinforced) = 24 in. of additional cover.
Using the example formula above, a pipe section that has 14
in. of cover and is encased in a casing pipe would have an equivalent earth cover of 14 + 24 = 38 in., yielding a point value of
38 / 3 = 12.7.
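The cover formula and equivalence credits above can be sketched as a simple scoring function; the equivalent-cover values are the examples from the text, while the function and dictionary key names are ours:

```python
# Sketch of the sample depth-of-cover scoring scheme described above.
# Equivalent-cover credits (in inches) are the example values from the text.
EQUIVALENT_COVER = {
    "concrete_coating_2in": 8,
    "concrete_coating_4in": 12,
    "pipe_casing": 24,
    "reinforced_concrete_slab": 24,
    "warning_tape": 6,
    "warning_mesh": 18,
}

def cover_points(cover_inches, protections=()):
    """Points = (actual cover + equivalent cover) / 3, capped at 20."""
    equivalent = cover_inches + sum(EQUIVALENT_COVER[p] for p in protections)
    return min(equivalent / 3, 20)

print(cover_points(42))                             # 14.0
print(round(cover_points(14, ["pipe_casing"]), 1))  # (14 + 24)/3 = 12.7
```

As with the schedules in the text, an evaluator would substitute his own credits and cap to match the chosen weighting.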
Burial of a warning tape, a highly visible strip of material
with warnings clearly printed on it, may help to avert damage
to a pipeline (Figure 3.3). Such flagging or tape is commercially available and is usually installed just beneath the ground
surface directly over the pipeline. Hopefully, an excavator will
discover the warning tape, cease the excavation, and avoid
damage to the line. Although this early warning system
provides no physical protection, its benefit from a failure-prevention standpoint can be included in this model. A derivative of this system is a warning mesh, where instead of a single
strip of low-strength tape, a tough, high-visibility plastic mesh,
perhaps 30 to 36 in. wide, is used. This provides some physical
protection because most excavation equipment will have at
least some minor difficulty penetrating it. It also provides additional protection via the increased width, reducing the likelihood of the excavation equipment striking the pipe before the
warning mesh. Either system can be valued in terms of an
equivalent amount of earth cover. For example:

Warning tape = 6 in. of additional cover
Warning mesh = 18 in. of additional cover.
As with all items in this risk assessment system, the evaluator should use his company’s best experience or other available
information to create his point values and weightings. Common
situations that may need to be addressed include rocks in one
region, sand in another (is the protection value equivalent?), and
pipelines under different roadway types (concrete versus
asphalt versus compacted stone, etc.). The evaluator need only
remember the goal of consistency and the intent of assessing
the amount of real protection from mechanical damage.
If the wall thickness is greater than what is required for anticipated pressures and external loadings, the extra thickness is
available to provide additional protection against failure from
external damage or corrosion. Mechanical protection that may
be available from extra pipe wall material is accounted for in
the design index (Chapter 5).
In the case of pipelines submerged at water crossings, the
intent is the same: Evaluate the ease with which a third party
can physically access and damage the pipe. Credit should be
given for water depth, concrete coatings, depth below seafloor,
extra damage protection coatings, etc.
A point schedule for submerged lines in navigable waterways might look something like the following:
Figure 3.3 Minimum depth of cover.

Depth below water surface:
0-5 ft = 0 pts
5 ft to maximum anchor depth = 3 pts
> Maximum anchor depth = 7 pts

Depth below bottom of waterway (add these points to the points from depth below water surface):
0-2 ft = 0 pts
2-3 ft = 3 pts
3-5 ft = 5 pts
5 ft to maximum dredge depth = 7 pts
> Maximum dredge depth = 10 pts

Concrete coating (add these points to the points assigned for water depth and burial depth):
None = 0 pts
Minimum 1 in. = 5 pts

The total for all three categories may not exceed 20 pts if a
weighting of 20% is used.
The above schedule assumes that water depth offers some
protection against third-party damage. This may not be a valid
assumption in every case; such an assumption should be confirmed by the evaluator. Point schedules might also reflect the
anticipated sources of damage. If only small boats can anchor
in the area, perhaps this results in less vulnerability and the
point scores can reflect this. Reported depths must reflect the
current situation because sea or riverbed scour can rapidly
change the depth of cover.
The use of water crossing surveys to determine the condition of
the line, especially the extent of its exposure to external force damage, indirectly impacts the risk picture (Figure 3.4). Such a survey
may be the only way to establish the pipeline depth and the extent of its
exposure to boat traffic, currents, floating debris, etc. Because conditions can change dramatically when flowing water is involved,
the time since the last survey is also a factor to be considered. Such
surveys are considered in the incorrect operations index chapter
(Chapter 6). Points can be adjusted to reflect the evaluator's confidence that cover information is current, with the recommendation
to penalize (show increased risk) wherever uncertainty is higher.
(See also Chapter 12 on offshore pipeline systems.)
Figure 3.4 River crossing survey.
Example 3.1: Scoring the depth of cover

In this example, a pipeline section has burial depths of 10 and
30 in. In the shallowest portions, a concrete slab has been
placed over and along the length of the line. The 4-in. slab is 3
ft wide and reinforced with steel mesh. Using the above
schedule, the evaluator calculates points for the shallow sections with additional protection and for the sections buried
with 30 in. of cover. For the shallow case: 10 in. of cover + 24
in. of additional (equivalent) cover due to the slab = (10 + 24)/3
pts = 11.3 pts. Second case: 30 in. of cover = 30/3 = 10 pts.
Because the minimum cover (including extra protection)
yields the higher point value, the evaluator uses the 10-pt
score for the pipe buried with 30 in. of cover as the worst case
and, hence, the governing point value for this section.
A better solution to this example would be to separate the
10-inch and 30-inch portions into separate pipeline sections
for independent assessment.
Example 3.2: Scoring a submerged pipeline

In this section, a submerged line lies unburied on a river bottom, 30 ft below the surface at the river midpoint, rising to
the water surface at shore. At the shoreline, the line is buried
with 36 in. of cover. The line has 4 in. of concrete coating
around it throughout the entire section.
Points are assessed as follows: The shore approaches are
very shallow; although boat anchoring is rare, it is possible. No
protection is offered by water depth, so 0 pts are given here. The
4 in. of concrete coating yields 5 pts. Because the pipe is not
buried beneath the river bottom, 0 pts are awarded for cover.

Total score = 0 + 5 + 0 = 5 pts
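A minimal sketch of this kind of submerged-line schedule follows; the point thresholds mirror the sample schedule above, while the function name and the anchor- and dredge-depth limits passed in the example call are hypothetical:

```python
# Sketch of the sample point schedule for submerged lines.
def submerged_line_points(water_depth_ft, burial_ft, concrete_coating_in,
                          max_anchor_depth_ft, max_dredge_depth_ft):
    # Depth below water surface
    if water_depth_ft > max_anchor_depth_ft:
        pts = 7
    elif water_depth_ft > 5:
        pts = 3
    else:
        pts = 0
    # Depth below bottom of waterway (added to the surface-depth points)
    if burial_ft > max_dredge_depth_ft:
        pts += 10
    elif burial_ft > 5:
        pts += 7
    elif burial_ft > 3:
        pts += 5
    elif burial_ft > 2:
        pts += 3
    # Concrete coating (a minimum of 1 in. earns the credit)
    if concrete_coating_in >= 1:
        pts += 5
    return min(pts, 20)  # cap at the 20% weighting

# The shallow, unburied line above with 4 in. of coating:
print(submerged_line_points(0, 0, 4, 15, 10))  # 0 + 0 + 5 = 5 pts
```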
B. Activity level (weighting: 20%)
Fundamental to any risk assessment is the area of opportunity.
For an analysis of third-party damage potential, the area of
opportunity is strongly affected by the level of activity near the
pipeline. It is intuitively apparent that more digging activity near
the line increases the opportunity for a line strike. Excavation
occurs frequently in the United States. The excavation notification system in the state of Illinois recorded more than 100,000
calls during the month of April 1997. New Jersey's one-call system records 2.2 million excavation markings per year, an average of more than 6000 per day [64]. As noted previously, it is
estimated that gas pipelines are accidentally struck at the rate of
5 hits per every 1000 one-call notifications.
DOT accident statistics for gas pipelines indicate that, in the
1984-1987 period, 35% of excavation damage accidents
occurred in Class 1 and 2 locations, as defined by DOT gas
pipeline regulations [87]. These are the less populated areas.
This tends to support the hypothesis that a higher population
density means more accident potential.
Other considerations include nearby rail systems and high
volumes of nearby traffic, especially where heavy vehicles
such as trucks or trains are prevalent or speeds are high.
Aboveground facilities and even buried pipe are at risk because
an automobile or train wreck has tremendous destructive energy potential.
In some areas, wildlife damage is common. Heavy animals
such as elephants, bison, and cattle can damage instrumenta-
tion and pipe coatings, if not the pipe itself. Birds and other
smaller animals and even insects can also cause damage by
their normal activities. Again, coatings and instrumentation of
aboveground facilities are usually most threatened. Where such
activity presents a threat of external force damage to the
pipeline, it can be assessed as a contributor to activity level.
The activity level item is normally a risk variable that may
change over time, but is relatively unchangeable by the pipeline
operator. Relocation is usually the only means for the pipeline
operator to change this variable, and relocation is not normally
a routine risk mitigation option.
The evaluator can create several classifications of activity
levels for risk scoring purposes. She does this by describing
sufficient conditions such that an area falls into one of her classifications. The following example provides a sample of some
of the conditions that may be appropriate. Further explanation
follows the example classifications.
High activity level (0 points) This area is characterized by
one or more of the following:

Class 3 population density (as defined by DOT CFR 49 Part 192)
High population density as measured by some other scale
Frequent construction activities
High volume of one-call or reconnaissance reports (>2 per week)
Rail or roadway traffic that poses a threat
Many other buried utilities nearby
Frequent damage from wildlife
Normal anchoring area when offshore
Frequent dredging near the offshore line.
Medium activity level (8 points) This area is characterized
by one or more of the following:

Class 2 population density (as defined by DOT)
Medium population density nearby, as measured by some
other scale
No routine construction activities that could pose a threat
Few one-call or reconnaissance reports (<5 per month)
Few buried utilities nearby
Occasional wildlife damage.
Low activity level (15 points) This area is characterized by
all of the following:

Class 1 population density (as defined by DOT)
Rural, low population density as measured by some other scale
Virtually no activity reports (<10 per year)
No routine harmful activities in the area (agricultural activities where the equipment cannot penetrate to within 1 ft of
the pipeline depth are sometimes considered harmless).
None (20 points) The maximum point level is awarded when
there is virtually no chance of any digging or other harmful
third-party activities near the line.
The evaluator may assign point values between these categories, but should make an effort to ensure consistency.
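One way to keep such classifications consistent is to let the worst (lowest-point) applicable category govern, in line with the worst-case assumption discussed earlier. The structure below is illustrative only; the category names and function are ours:

```python
# Illustrative sketch: scoring activity level by the worst applicable category.
ACTIVITY_POINTS = {"high": 0, "medium": 8, "low": 15, "none": 20}

def activity_level_points(categories_met):
    """categories_met: set of category names whose defining conditions
    apply to the segment (e.g., {"medium", "low"}). The worst (lowest
    point) applicable category governs, per the worst-case assumption."""
    applicable = [ACTIVITY_POINTS[c] for c in categories_met]
    return min(applicable) if applicable else ACTIVITY_POINTS["none"]

print(activity_level_points({"medium", "low"}))  # 8 (medium governs)
print(activity_level_points(set()))              # 20 (no harmful activity)
```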
In each classification of the above example, population
density is a factor. More people in an area generally means
more activity: fence building, gardening, water well construction, ditch digging or clearing, wall building, shed construction, landscaping, pool installations, etc. Many of these
activities could disturb a buried pipeline.
The disturbance could be so minor as to go unreported by the
offending party. As already mentioned, such unreported disturbances as coating damage or a scratch in the pipe wall are often
the initiating condition for a pipeline failure sometime in the future.
An area that is being renovated or is experiencing a growth
phase will require frequent construction activities. These may
include soil investigation borings, foundation construction,
installation of buried utilities (telephone, water, sewer, electricity, natural gas), and a host of other potentially damaging
activities. Planned or observed development is therefore a good
indicator of increased activity levels. Local community land
development or planning agencies might provide useful information to forecast such activity.
Perhaps one of the best indicators of the activity level is the
frequency of activity reports. These reports may come from
direct observation by pipeline personnel, patrols by air or
ground, and telephone reports by the public or by other construction companies. The one-call systems (these are discussed
in a later section), where they are being used, provide an excellent database for assessing the level of activity, although they
might only be a lagging indicator; that is, they may show where
past activity has occurred but not necessarily be indicative of
future activity.
The presence of other buried utilities logically leads to more
frequent digging activity as these systems are repaired, maintained, and inspected. This increased exposure is perhaps partially offset by a presumption that utility workers are better
versed in potential excavation damages than are some other
industry excavators. If considered credible evidence of
increased risk, the density of nearby buried utilities can be used
as another variable in judging the activity level.
Anchoring, fishing, and dredging activities, along with
dropped objects, pose the greatest third-party threats to submerged pipelines. To a lesser degree, new construction by
open-cut or directional-drill methods may also pose a threat
to existing facilities. Dock and harbor construction and perhaps even offshore drilling activities may also be a threat.

Seismograph activity

Of special note here is seismograph work or other activities
involving underground detonations. As a part of exploratory
work, perhaps searching for oil or gas reservoirs, energy is
transmitted into the ground and measured to determine information about the underlying geology of the area. This usually
involves crews laying shot lines, rows of buried explosives that are later detonated. The detonations supply the energy
source to gather the information sought. Sometimes, instead of
explosive charges, other techniques that impart energy into the
soil are used. Examples include a weight dropped onto the
ground where the resulting shock waves are monitored and a
vibration technique that generates energy waves in certain frequency ranges.
Seismograph activity can be hazardous to pipelines and the
potential for such activity should be included in the risk assessment. The first hazard occurs if holes are drilled to place explosives. Such drilling can, of course, place the pipeline in direct jeopardy. Depth of cover provides little protection because the holes may be drilled to any depth. The second hazard is the shock waves to which the pipeline is exposed. When the explosive(s) is detonated, a mass of soil is accelerated. If there is not
enough backup support for the pipeline, the pipe itself absorbs
the energy of the accelerating soil mass [29]. This adds to the
pipe stresses (Figure 3.5). Conceivably, a charge (or line of
charges) detonated far below the pipeline can be more damaging than a similar charge placed closer to the line but at the
same depth. An analysis must be performed on a case-by-case
basis to determine the extent of the threat.
Figure 3.5 Seismograph activity near pipelines: a detonation below the line accelerates a soil mass against the pipeline.
3/50 Third-party Damage Index
As of this writing, pipeline operators have little authority in
specifying minimum distances for seismograph activity.
Technically, the operator can only forbid activity on the few feet
of ROW that he controls. Cooperation from the seismograph
company is often sought.
As an additional special case of potentially damaging activity, directional boring is not always sensitive to line hits: it is possible for a boring equipment operator to strike a facility without being aware of the hit. The drill bits, designed to go through
rock, experience little change in resistance when going through
plastic pipe or cable and can cause much damage to steel
pipelines. This is another aspect that can be included in the
activity level variable.
C. Aboveground facilities (weighting: 10%)
This is a measure of the susceptibility of aboveground facilities
to third-party disturbance. Aboveground pipeline components
have a different type of third-party damage exposure compared
to the buried sections. Included in this type of exposure are the
threats of vehicular collision and vandalism. The argument can
be made that these threats are partially offset by the benefit of
having the facility in plain sight, thereby avoiding damages
caused by not knowing exactly where the pipeline is (as is the
case for buried sections). The evaluator should adjust the
weighting factor and the point schedule to values consistent
with the company’s judgment and experience, but also recognize that one company’s experience might not accurately represent the threats. One source reports that, due mainly to the
greater chance of impact and increased exposure to the elements, equipment located above ground has a risk of failure
approximately 100 times greater than for facilities underground [67]. This will, of course, be situation specific.
While the presence of aboveground components is often difficult or impossible to change (their location is usually based on strong economic and/or design considerations), preventive measures can be taken to reduce the risk exposure. This risk variable combines the changeable and nonchangeable aspects into a single point value but recognizes that a mitigated threat still poses more risk than a threat that does not exist.
The evaluator can set up a point schedule that gives the
maximum point value for sections with no aboveground
components, where the threat does not exist. For sections that
do have aboveground facilities, point credits should be given
for conditions that reduce the risk of third-party damage
(Figure 3.6). These conditions will often take the form of
vehicle barriers or other obstacles or discouragements to third-party intrusions.
Figure 3.6 Protection for aboveground facilities (e.g., trees, concrete barrier, distance from highway).
Assessing third-party damage potential 3/51
No aboveground facilities    10 pts
Aboveground facilities    0 pts
plus any of the following that apply (total not to exceed 10 pts):
Facilities more than 200 ft from vehicles    5 pts
Area surrounded by 6-ft chain-link fence    2 pts
Protective railing (4-in. steel pipe or better)    3 pts
Trees (12 in. in diameter), wall, or other substantial structure(s) between vehicles and facility    4 pts
Ditch (minimum 4-ft depth/width) between vehicles and facility    3 pts
Signs ("Warning," "No Trespassing," "Hazard," etc.)    1 pt

Credit may be given for security measures that are thought to reduce vandalism (intentional third-party intrusions). The example above allows a small number of points for signs that may discourage the casual mischief-maker or the passing hunter taking target practice. Lighting, barbed wire, video surveillance, sound monitors, motion sensors, alarm systems, etc., may warrant point credits as risk reducers. Beyond minor vandalism potential, the threat of sabotage can be considered in a risk assessment. Chapter 9 explores that aspect of risk.
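The example schedule above is a simple additive credit score with a cap. A minimal sketch of that arithmetic, assuming the sample point values from the text (the condition names are illustrative labels, not from the book):

```python
# Sample point credits from the example schedule; the dictionary keys are
# hypothetical labels chosen here for illustration.
ABOVEGROUND_CREDITS = {
    "distance_over_200ft": 5,    # facilities more than 200 ft from vehicles
    "chain_link_fence": 2,       # area surrounded by 6-ft chain-link fence
    "protective_railing": 3,     # 4-in. steel pipe or better
    "barrier_trees_or_wall": 4,  # trees, wall, or other substantial structure
    "ditch": 3,                  # minimum 4-ft depth/width
    "warning_signs": 1,          # "Warning," "No Trespassing," etc.
}

def aboveground_facility_score(has_aboveground: bool, conditions: set) -> int:
    """Return the section's aboveground-facility point value (0 to 10)."""
    if not has_aboveground:
        return 10  # threat does not exist: maximum points
    credit = sum(pts for name, pts in ABOVEGROUND_CREDITS.items()
                 if name in conditions)
    return min(credit, 10)  # total not to exceed 10 pts
```

For example, a fenced site with warning signs would earn 2 + 1 = 3 points, while a well-protected site whose credits sum past the cap (say 5 + 4 + 3 = 12) is held to 10.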
D. Line locating (weighting: 15%)

A line locating program or procedure, the process of identifying the exact location of a buried pipeline so that third parties can safely excavate nearby, is central to avoiding third-party damages. A one-call system is a service that receives notification of upcoming digging activities and in turn notifies all owners of potentially affected underground facilities. It is the foundation of many pipeline-locating programs. A conventional one-call system is defined by the DOT as "a communication system established by two or more utilities (or pipeline companies), governmental agencies, or other operators of underground facilities to provide one telephone number for excavation contractors and the general public to call for notification and recording of their intent to engage in excavation activities. This information is then relayed to appropriate members of the one-call system, giving them an opportunity to communicate with excavators, to identify their facilities by temporary markings, and to follow up the excavation with inspections of their facilities" [68]. Such systems can also be established by independent entrepreneurs. In this text, one-call generically refers to all such notification systems, although many go by other names such as no-dig, miss utility, or miss-dig.

The first modern one-call system was installed in Rochester, New York, in 1964. As of 1992, there were 88 one-call systems in 47 states and Washington, D.C., plus similar systems operating in Canada, Australia, Scotland, and Taiwan. A report by the National Transportation Safety Board on a study of 16 one-call centers gives evidence of the effectiveness of this service in reducing pipeline accidents. In 10 instances (of the 16 studied), excavation-related accidents were reduced by 20 to 40%. In the remaining six cases, these accidents were reduced by 60 to 70% [68].

One-call systems operate within stated boundaries, usually in urban areas. Participation in and use of a one-call system is mandatory in most states in the United States.
The effectiveness of a one-call system depends on several factors. The evaluator should assess this effectiveness for the pipeline section being evaluated. Here is a sample point schedule (with explanations following):

Proven record of efficiency and reliability    2 pts
Widely advertised and well known in community    2 pts
Meets minimum ULCCA standards    2 pts
Appropriate reaction to calls    5 pts
Maps and records    4 pts

Add points for all applicable characteristics. The best one-call system is characterized by all of the above factors and will have a point value of 15 points.
The first variable is a judgment of the one-call system's effectiveness. To get any points at all, it should be a system mandated by law, especially when noncompliance penalties are severe. Such a system will be more readily accepted and utilized. Beyond that, elements of the one-call system's operation and results can be evaluated.

The next two point categories are more subjective. The evaluator is asked to judge the effectiveness and acceptance of the system. The degree of community acceptance can be assessed by a spot check of local excavators and by the level of advertising of the system. The evaluator may set up a more detailed point schedule to distinguish among differences he perceives. This detailed schedule could be tied to the results of a random sampling of the one-call system.
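A random sampling of this kind can be made quantitative. As a hedged sketch (not from the text), the fraction of spot-checked excavators who know and use the system can be estimated with a simple normal-approximation confidence interval; the sample counts below are hypothetical:

```python
import math

def awareness_interval(aware: int, sampled: int, z: float = 1.96):
    """Approximate 95% confidence interval (normal approximation) for the
    fraction of spot-checked excavators aware of the one-call system."""
    p = aware / sampled
    half = z * math.sqrt(p * (1 - p) / sampled)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical spot check: 40 of 50 local excavators knew the system
low, high = awareness_interval(40, 50)  # roughly (0.69, 0.91)
```

The evaluator could then tie point values to the lower bound of the interval rather than the raw sample fraction, which guards against over-crediting a small sample.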
Another category in this schedule refers to standards established by the Utility Location and Coordination Council of
America (ULCCA) for one-call centers. Local utility location
and coordinating councils (ULCCs) are established by the
American Public Works Association (APWA).
The evaluator may substitute any other appropriate industry
standard. This may overlap the first question of whether the
one-call system is mandated by law. If mandated, certain minimum standards will no doubt have been established. Minimum
standards may address
Hours of operation
Record keeping
Method of notification
Off-hours notification systems
Timeliness of notifications.
The U.S. National Transportation Safety Board (NTSB) [64] reports that there are very practical distinctions between one-call centers:
An assortment of communication methods are used to receive excavators' calls and to issue notification tickets to the centers' participants: centers may use telephone staff operators, voice recorded messages, e-mail, fax machines, Internet bulletin boards, or a combination of methods. Service hours may be seasonally limited to a few hours a day or extend to 24 hours a day. Some locations operate only seasonally because of construction demand; most operate year-round. Most centers have statewide coverage but may not strictly follow State boundaries. A center may cover portions of several States (Miss Utility in Virginia, Maryland, and the District of Columbia) or there may be several centers within a State (Idaho has six different one-call systems; Washington and Wyoming each have nine). Centers may provide
training to the construction community, conduct publicity campaigns
to educate the public to excavation notification requirements, and
work with facility operators to protect their underground facilities.
Other centers do little work in these areas.
Some centers use positive response procedures: members who do not mark facilities in the construction area [instead] confirm with the excavators that they have no facilities in the area rather than just not mark a location; other centers do not have this requirement. A part of the Miss Utility program in the Richmond, Virginia, area uses positive response procedures to notify the excavator when the marking is complete. Facility owners directly inform a voice messaging system of the status of a notification ticket. As a time-saving alternative, the contractor can call the information system anytime to receive an up-to-date status of their marking request. Information indicating that marking has been completed, or that no facilities are located in the area of excavation, allows construction work to proceed as soon as marking is completed rather than waiting the full time period for which marking activity is allowed.
The important elements of an effective one-call notification center have been generally identified by industry organizations. For example, the position of the Associated General Contractors of America on one-call systems is summarized in six elements:

Mandatory participation
Statewide coverage
48-hour marking response
Standard marking requirements
Continuing education
Fair system of liability.
Participants at the Safety Board's 1994 workshop, on the other hand, developed detailed lists of elements they believed are essential for an effective one-call notification center, other elements a center should have, and elements it could have. All agreed, however, that first and foremost was the need for mandatory participation and use of notification centers by all parties. The Safety Board concludes that many essential elements and activities of a one-call notification center have been identified but have not been uniformly implemented. [64]
The last scoring item deals with the pipeline company's response to a report of third-party excavation activity. Obviously, reports that are not properly addressed in a timely manner negate the effects of reporting. The evaluator should look for evidence that all reports are investigated in a timely manner. A sense of professionalism and urgency should exist among the responders. Appropriate response may include

A system to receive and record notification of planned excavation activity
Dispatching of personnel to the site to provide detailed markers of pipeline location
Comprehensive marking and locating procedures and training
Accurate maps and records showing pipeline locations, depths, and specifications
Prejob communications or meetings with the excavators
On-site inspection during the excavation
A system to ensure updating of drawings
Inspection of the pipeline facilities after the excavation.

The evaluator may look for documentation or other evidence to satisfy himself that an appropriate number of these often critical actions are being taken.
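The evaluator's check that "an appropriate number" of these actions is documented can be made explicit. A hypothetical sketch (the action names and the adequacy threshold are illustrative assumptions, not from the text):

```python
# The eight recommended response actions from the list above, encoded as
# illustrative keys chosen here for this sketch.
RESPONSE_ACTIONS = {
    "notification_recording", "site_dispatch", "marking_procedures",
    "accurate_maps", "prejob_meetings", "onsite_inspection",
    "drawing_updates", "post_excavation_inspection",
}

def response_gaps(documented: set) -> set:
    """Return the recommended actions for which no evidence was found."""
    return RESPONSE_ACTIONS - documented

def response_adequate(documented: set, minimum: int = 6) -> bool:
    """Judge adequacy as at least `minimum` of the eight actions being
    documented; the threshold of 6 is an assumption for illustration."""
    return len(RESPONSE_ACTIONS & documented) >= minimum
```

An audit would then report `response_gaps(...)` as the list of items needing corrective action, rather than a bare pass/fail.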
Locating and marking methods
In awarding points, the evaluator may wish to distinguish between methods of direct line location. Methods may range from the use of a detection device (with verification by physical probing by experienced personnel) to merely sighting between aboveground facilities (a method that often leads to errors). Some pipe materials, such as plastic, are difficult to locate when buried, and some sites require expensive excavations to precisely locate the line, regardless of the material. Some materials are also susceptible to damage by the common probing techniques used to locate the line.
Especially in congested areas, the need to determine the
exact location of the pipeline is critical. Modern locating techniques include instrumentation that can detect buried pipe via
electromagnetic signals, impressed electric signals, and ground
penetrating radar. These instruments are designed to determine
line location and depth. Because they are susceptible to extraneous signals and barriers to signal reception, a degree of operator skill is required. For a variety of situation-specific
reasons, such as pipe material, type of cover, and presence of
interfering signals-not all pipelines can be located with this
instrumentation. In some cases, special wires are inserted into
non-conducting pipe materials to aid in line location. These
tracer wires or locator wires are susceptible to damage from
corrosion, lightning surges, and external forces. Another aid to
pipeline location is the installation of small electronic markers
that emit discrete radio-frequency (RF) signals when polled by
surface instrumentation [66].
Line locating can also be accomplished by direct excavation
and/or probing (also called prodding), using a stiff rod to penetrate the ground, sometimes with a water-jet assist, and physically contact the top of the pipe. With some pipe materials and
coatings, these latter methods risk damage to the pipeline.
Some common methods of line locating are listed in Table 3.1.
Practices for marking the underground facilities can have an
impact on the risk of excavation damage. Good practices
include pre-marking of the intended excavation site by the
excavator to clearly identify to the facility owner the area of
digging; positive response by the utility owner to confirm that
underground facilities have been marked or to verify that no
marking was necessary; the use of industry-accepted marking
standards to unambiguously communicate the type of facility
and its location; marking facility locations within the specified
notification time; and responding to requests for emergency
markings, when necessary. The time frame for excavation
marking is usually specified by state damage prevention laws.
Twenty states require underground facility marking to be
accomplished within 48 hours of excavation notification [64].
Some pipes that are difficult to locate from the surface
require expensive excavations to determine their precise location. This is often the case for lines located beneath concrete
sidewalks or roadways and those adjacent to buildings. For
these reasons, modern distribution systems often rely heavily
on accurate records and drawings to show exact piping locations. This reliance allows more potential for human error, as is discussed in the incorrect operations index (Chapter 6).
The use of standard marking colors informs the excavator
about the type of underground facility for which the location
has been marked (Table 3.2). Markings of the appropriate color
for each facility are placed directly over the centerline of the
pipe, conduit, cable, or other feature.
Offset marking procedures are used when direct marking
cannot be accomplished. For most surfaces, including paved
surfaces, spray paint is used for markings; however, stakes or
flags may be used if necessary. A proposed marking standard
Table 3.1 Common methods of locating buried pipelines

RF detection techniques
Description: Conventional underground line detection method. Requires a transmitter and a receiver. Conductive tracing attaches the transmitter directly to the line or tracer wire; inductive tracing does not require direct line connection.
Comments: Oldest, most widely used technology. Inductive signal detection is quicker, but conductive signal reading is more accurate.

Electromagnetic techniques
Description: Records signal differentials of magnetic fields. Similar to RF technology.
Comments: Useful for detecting metal objects or structures that exhibit strong magnetic fields in the ground surface. This type of detector is affected by obstructions between the transmitting signal and the locating receiver.

Magnetic methods
Description: Useful for detecting iron and steel facilities.
Comments: Magnetic flux methods are easy to use and inexpensive, but they are subject to interference from metal surface structures.

Vacuum extraction
Description: Small test holes are dug from the surface by vacuuming out the soil. The activity, usually referred to as potholing, follows more preliminary locating work to identify the general facility location. The pothole then confirms the location and verifies a depth for that specific site.
Comments: Requires preliminary records search to approximate location for potholing and special vacuum equipment. Process can be expensive and labor intensive.

Ground penetrating radar
Description: Radar wave reflections from underground surfaces of different dielectric constants are used to identify subsurface structures.
Comments: This method is relatively expensive compared to other locator methods and does not work well in clay or similar soils.

Terrain conductivity
Description: Detects current measures that differ from average ground surface conductivity.
Comments: This method can be useful in areas of high conductivity, such as marine clay soils, particularly for locating underground storage tanks.

Global positioning system (GPS)
Description: Uses triangulated satellite telemetry to identify latitude/longitude location of ground unit.
Comments: Although not a detection technology, GPS coordinates are frequently used to define geographic location.

Source: National Transportation Safety Board, "Protecting Public Safety through Excavation Damage Prevention," Safety Study NTSB/SS-97/01, Washington, DC: NTSB, 1997.
Table 3.2 Uniform color code of the APWA utility location and coordinating council

Color: Feature identified
Red: Electric power lines, cables, conduit, and lighting cables
Yellow: Gas, oil, steam, petroleum, gaseous materials
Orange: Communications, alarm or signal lines, cable or conduit
Blue: Water, irrigation, and slurry lines
Green: Sewers, drain lines
Pink: Temporary survey markings
Orange: Cable television
White: Proposed excavation
addresses conventions for marking the width of the facility, change of direction, termination points, and multiple lines within the same trench. The standard symbology indicates how to mark construction sites to ensure that excavators know important facts about the underground facilities, for example, hand-dig areas, multiple lines in the same trench, and line termination points [64]. Such conventions help to avoid misinterpretation between locators who designate the position of underground facilities and excavators who work around those facilities.
The points for the preceding categories are added to get a
value for a one-call system. A section that is not participating in
such a program would get zero points.
E. Public education program (weighting: 15%)
Public education programs are thought to play a significant role
in reducing third-party damage to pipelines. Most third-party
damage is unintentional and due to ignorance. This is ignorance
not only of the buried pipeline's exact location, but also ignorance of the aboveground indications of the pipeline's presence
and of pipelines in general. A pipeline company committed to
educating the community on pipeline matters will almost
assuredly reduce its exposure to third-party damage.
Some of the characteristics of an effective public education
program are shown in the following list, along with an example
relative point scale. More explanation is given in the paragraphs that follow.
Meetings with public officials once per year    2 pts
Meetings with local contractors/excavators once per year    2 pts
Regular education programs for community groups    2 pts
Mailouts to adjacent residents once per year    2 pts
Door-to-door contact with adjacent residents    4 pts
Mailouts to contractors/excavators    2 pts
Advertisements in contractor/utility publications once per year    1 pt

Add points for all characteristics that apply. The best public education program will score 15 points according to this schedule.
Regular contact with property owners and residents who live
adjacent to the pipeline is thought to be the first line of defense
in public education. When properly motivated, these people
actually become protectors of the pipeline. They realize that
the pipeline is a neighbor whose fate may be closely linked to
their own. They may also act as good neighbors out of concern
for a company that has taken the time to explain to them the
pipeline’s service and how it relates to them. Although it is
probably the most expensive approach, door-to-door contact is
arguably unsurpassed in effectiveness. This is perhaps especially true where pleasant and direct contact between large corporations and concerned citizens is rare. The door-to-door
contact, when performed at least once per year, rates the highest
points in the example schedule.
Other techniques that emphasize the good neighbor
approach include regular mailouts, presentations at community
groups, and advertisements. Mailouts generally take the form
of an informational pamphlet and perhaps a promotional item
such as a magnet, calendar, memo pad, pen, rain gauge, tape
measure, or key chain with the pipeline company’s name and
24-hour phone number. The pamphlet may contain details on
pipeline safety statistics, the product being transported, and
how the company ensures the pipeline integrity (patrols,
cathodic protection, etc.). Most important perhaps is information that informs the reader of how sensitive the line can be to damage from third-party activities. Along with this is encouragement to the reader to notify the pipeline company if any potentially threatening activities are observed. The tokens often included in the mailout merely serve to attract the reader's interest and to keep the company's name and contact number handy.
Consideration should be given to all languages commonly spoken in the area. Written materials in several languages and
bilingual contact personnel will often be necessary.
Mailouts can be effectively sent to landowners, tenants, other
utilities, excavation contractors, general contractors, emergency response groups, and local and state agencies.
Professional, entertaining presentations are always welcomed at civic group meetings. When such presentations can
also get across a message for public safety through pipeline
awareness, they are doubly welcomed. These activities should
be included in the point schedule. Any regular advertisements
aimed at increasing public awareness of pipeline safety should
similarly be included in the evaluation schedule.
Meetings with public officials and local contractors serve several purposes for the pipeline operator. While advising these people of pipeline interests (and the impact on their interests), a rapport is established with the pipeline company. This rapport can be valuable in terms of early notification of government planning, impending project work, emergency response, and perhaps a measure of consideration and benefit of the doubt for
the pipeline company. Points should be given for this activity to
the extent that the evaluator sees the value of the benefits in
terms of risk reduction.
Advertising can be company specific or can represent the common interests of a number of pipeline companies. Either way, the value is obtained as the audience is made aware or reminded of its role in pipeline safety.
F. Right-of-way condition (weighting: 5%)
This item is a measure of the recognizability and inspectability
of the pipeline corridor. A clearly marked, easily recognized
ROW reduces the susceptibility of third-party intrusions and
aids in leak detection (ease of spotting vapors or dead vegetation from ground or air patrols) (Figure 3.7).
The evaluator can establish a point schedule with clear
parameters. The user of the schedule should be able to tell
exactly what actions will increase the point value. The less subjective the schedule, the greater the consistency in evaluation,
but simplicity is also encouraged. The following example
schedule is written in paragraph form where interpolations
between paragraph point values are allowed.
Excellent 5 pts
Clear and unencumbered ROW route clearly indicated; signs
and markers visible from any point on ROW or from above,
even if one sign is missing; signs and markers at all roads,
railroads, ditches, water crossings; all changes of direction
are marked; air patrol markers are present.
Good 3 pts
Clear route (no overgrowth obstructing the view along the
ROW from ground level or above); well marked: markers
are visible from every point of ROW or above if all are in
place; signs and markers at all roads, railroads, ditches, water crossings.

Average 2 pts
ROW not uniformly cleared; more markers are needed for clear
identification at roads, railroads, waterways.
Below average 1 pt
ROW is overgrown by vegetation in some places; ground is not
always visible from the air or there is not a clear line of sight
along the ROW from ground level; indistinguishable as a
pipeline ROW in some places; poorly marked.
Poor 0 pts

Indistinguishable as a pipeline ROW; no (or inadequate) markers present.
Select the point values corresponding to the closest description of the actual ROW conditions observed in the section.
Descriptions such as those given above should provide the
operator with enough guidance to take corrective action. Point
values can be more specific (markers at 90% of road crossings:
2 pts; at 75% of road crossings: 1 pt; etc.), but this may be an
unnecessary complication.
G. Patrol frequency (weighting: 15%)
Patrolling the pipeline is a proven effective method of reducing
third-party intrusions. The frequency and effectiveness of the
patrol should be considered in assessing the patrol value.
Patrolling becomes more necessary where third-party activities are largely unreported. The amount of unreported activity will depend on many factors, but one source [11] reported the number of unreported excavation works around a pipeline system in the United Kingdom to be 25% of the total number of
Figure 3.7 ROW condition (signs with company name and emergency phone; painted fenceposts).
excavation works. This is estimated to be around 775 unreported
excavations per year on their 10,400-km system [11]. While
unreported excavation does not automatically translate into
pipeline damage, obviously the potential exists for some of those
775 excavations to contact the pipeline. These numbers represent
a U.K. situation that has no doubt changed with the increasing
use of one-call systems, but some countries do not have formal
notification systems and rely on patrols to be the primary means
of identifying third-party activity near their pipelines.
From a reactive standpoint, the patrol is also intended to
detect evidence of a leak such as vapor clouds, unusual dead
vegetation, bubbles from submerged pipelines, etc. As such, it
is a proven leak detection method (see Chapter 7 on the leak
impact factor).
From a proactive standpoint, the patrol also should detect
impending threats to the pipeline. Such threats take the form of
excavating equipment operating nearby, new construction of
buildings or roads, or any other activities that could cause a
pipeline to be struck, exposed, or otherwise damaged. Note that
some activities are only indirect indications of threats. New
building construction several hundred yards from the pipeline
will not pose a threat in itself, but the experienced observer will
investigate where supporting utilities are to be directed. Construction of these utilities at a later time may create the real threat.
The patrol should also seek evidence of activity that has already passed over the line, as well as evidence of land movements. Such evidence is usually present for several days after the activity and may warrant inspection of the pipeline.
Training of observers and possibly the use of checklists are important aspects of patrol effectiveness. Reportable observations should include the following:

Land movements: landslides, subsidence, bank erosion, creek or riverbank instability, etc.
Construction activity, both nearby and likely to move toward the ROW; landscaping changes, gardens, etc., may warrant additional investigation
Unauthorized activities on ROW: off-road vehicles, motorcycles, snowmobiles, etc.
Missing markers/signs
Evidence of vehicular intrusions onto ROW: highway accident, train derailment, etc.
Plantings of trees, gardens
Third-party changes to slope or drainage.
Slope issues can be an important but often overlooked
aspect of pipeline stability, detectable to some extent by patrol.
Slope alterations near, but outside, the right-of-way by third parties should be monitored. Construction activities near or in the
pipeline right-of-way may produce slopes that are not stable and
could put the pipeline at risk. These activities include excavation
for road or railway cuts, removal of material from the toe of a
slope, or adding significant material to the crest of a slope. The
ability to detect potentially damaging land movements is also a
risk mitigation measure discussed in Chapter 5.
One measure of patrol effectiveness would be data showing a number of situations that were missed by the patrollers and/or accompanying observers when the opportunity was there. Indirect measures include observer presence, patroller/observer training, and analysis of the "detection opportunity." This opportunity analysis would look at altitude and speed of aerial patrol and, for ground patrol, perhaps the line of sight
3/56 Third-party Damage Index
along and either side of the ROW. The opportunity for early
discovery lies in the ability to detect activities before the
pipeline ROW is encroached.
Note also the ability of certain aircraft (helicopters) to take
immediate action to interrupt a potentially dangerous activity.
Such interruptions include landing the aircraft or dropping a
container containing a message in order to alert the third party.
The suggested point schedule will award points based on
patrol frequency under the assumption of optimum effectiveness.
If the evaluator judges the effectiveness to be less than optimum,
he can reduce the points to the equivalent of a lower patrol frequency. This is reasonable because lower frequency and lower
effectiveness both reduce the area of opportunity for detection.
If practical, the patrol frequency can be determined based on
a statistical analysis of data. Historical data will often follow a
typical rare-event frequency distribution such as those shown
in Figures 3.8 and 3.9. Figure 3.9 is based on tabulated estimates shown in Table 3.3. Once the distribution is approximated, analysis of the curve will enable some predictive
decisions to be made. An analysis of the “opportunity to detect”
various common excavation activities is presented at the end of
this chapter. Such analyses can be the basis of determining
patrol frequency or assessing the probability of detection for
any given frequency. For example, management may decide
that the appropriate patrol frequency should detect, with a 90%
confidence level, 80% of all threatening events. This might be
based on a cost/benefit analysis. Patrol frequencies at or
slightly above this optimum can receive the highest points.
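The rare-event distribution mentioned above can be approximated with a simple Poisson fit to patrol sighting counts. A minimal sketch; the sighting counts below are hypothetical, not data from this text:

```python
import math

# Hypothetical patrol records: number of potential threats found on each
# of 20 recent flights (illustrative data, not from the text).
sightings = [0, 1, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0, 0, 3, 0, 0, 1, 0, 0, 1]

# A Poisson distribution is a common model for rare-event counts.
lam = sum(sightings) / len(sightings)   # maximum-likelihood rate estimate

def p_poisson(k: int, lam: float) -> float:
    """Probability of observing exactly k threats on one flight."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# Probability of finding at least one threat on a single flight
p_at_least_one = 1.0 - p_poisson(0, lam)
print(f"lambda = {lam:.2f} threats/flight")
print(f"p(>=1 threat on a flight) = {p_at_least_one:.2f}")
```

Once such a curve is approximated, the predictive decisions discussed above (e.g., a target detection confidence) can be read from it.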
Figure 3.8  Typical patrol data: frequency distribution curve based on recent historical data (x-axis: number of potential threats found on a single flight).
An example point schedule is as follows:

Daily                                                      15 pts
Four days per week                                         12 pts
Three days per week                                        10 pts
Two days per week                                           8 pts
Once per week                                               6 pts
Less than four times per month; more than once per month    4 pts
Less than once per month                                    2 pts
Never                                                       0 pts

Figure 3.9  Patrol detection opportunities: probability of detection plotted against days of proximal activity and/or evidence, illustrating patrol as an opportunity to prevent failures caused by third-party damage.

Table 3.3  Spectrum of third-party activities used to produce the “probability of detection” graph (Figure 3.9): highway construction, buried utility crossings, drainage work, land clearing, agricultural work, and fence post installation, each characterized by frequency and by activity duration (including the period during which evidence remains detectable).

Mitigation effectiveness for third-party damage: a scenario-based evaluation

Hypothesis to be examined: At least 9 out of 10 third-party damage failures that would otherwise be expected are avoided through the stringent implementation of the proposed mitigation plan.

Source: URS Radian Corporation, “Environmental Assessment of Longhorn Partners Pipeline,” report prepared for US EPA and DOT, September 2000.
Select the point value corresponding to the actual patrol frequency. This schedule is built for a corridor that has a frequency
of third-party intrusions that calls for a nominal patrol frequency of 3 days per week. In this case, the evaluator feels that
daily patrols are perhaps justified and provide a measurably
greater safety margin. Frequencies greater than once per day
(once per 8-hour shift, for instance) warrant no more points
than daily in this example.
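This selection is a simple lookup. A sketch, with category labels paraphrased from the schedule and the greater-than-daily cap applied as described:

```python
# Patrol-frequency point schedule as a lookup. Labels paraphrase the
# example schedule; frequencies above daily earn no more than the daily
# score, as the text notes.
PATROL_POINTS = {
    "daily": 15,
    "four_per_week": 12,
    "three_per_week": 10,
    "two_per_week": 8,
    "once_per_week": 6,
    "one_to_four_per_month": 4,
    "less_than_monthly": 2,
    "never": 0,
}

def patrol_score(per_week: float) -> int:
    """Map an actual patrol frequency (patrols per week) to points."""
    if per_week >= 7:
        return PATROL_POINTS["daily"]   # cap: more than daily earns no extra
    if per_week >= 4:
        return PATROL_POINTS["four_per_week"]
    if per_week >= 3:
        return PATROL_POINTS["three_per_week"]
    if per_week >= 2:
        return PATROL_POINTS["two_per_week"]
    if per_week >= 1:
        return PATROL_POINTS["once_per_week"]
    per_month = per_week * 52.0 / 12.0
    if per_month > 1:
        return PATROL_POINTS["one_to_four_per_month"]
    if per_month > 0:
        return PATROL_POINTS["less_than_monthly"]
    return PATROL_POINTS["never"]

print(patrol_score(3))    # 10 pts for three patrols per week
```

An evaluator judging effectiveness to be below optimum would simply substitute the points of a lower frequency bucket, as described earlier.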
The evaluator may wish to give point credits for patrols during activities such as close interval surveys (see Chapter 4,
Corrosion Index). Routine drive-bys, however, would need to
be carefully evaluated for their effectiveness before credit is
given. An example of an analysis of “opportunity to detect” potentially damaging third-party activities is shown in the example that follows.
Third-party damage mitigation analysis
A type of scenario risk analysis of third-party damage prevention
effectiveness was done as part of an environmental assessment
for a proposed pipeline [86]. This analysis is mostly an exercise
in logic, testing whether it is plausible that mitigation measures
could markedly reduce third-party damage failures from previous levels. This analysis is interesting not only because it demonstrates a type of analysis, but also because it discusses many
concepts that underlie our beliefs about third-party failure potential. Excerpts from the analysis of Ref. [86] follow.
This failure estimation is suggested by modeling and analyses shown
elsewhere in this environmental assessment. There is a question of
whether such an estimation can be supported by a logical event-tree
type analysis and examination of some of the past failures. Therefore,
the objective is to determine if the proposed mitigation measures
could have interrupted past failure sequences, at least 90 percent of
the time, under some set of reasonable assumptions.
Third-party damage (or “outside force”) is a good candidate
for this examination since this failure category is often viewed
as the most random and, hence, the least controllable through
mitigations. Seven (7) out of twenty-six (26) historical leaks on
the subject pipeline were categorized as being caused by “third-party damage.” It is useful to characterize these incidents based
on some situation specifics. At least six (6) of the incidents
involved heavy equipment such as backhoe, bulldozer, bulldozer with ripper/plow, and ditching machine (the seventh is
not listed). Five (5) of the incidents suggest that a professional
contractor was probably involved since activities are described
as cable installations, water line installations, excavations for
an oil/gas company, land clearing, etc. At least four (4) of these
events occurred before a One-Call system was available in
Texas (beginning in the early 1990s and mandated in late 1997).
So, the opportunities for advance knowledge of the presence of
the pipeline were limited to signs, ROW indications, and perhaps some records if the excavator was exceptionally diligent in
a pre-job investigation. Contractor and public education efforts,
ROW condition, and actual patrol frequency are unknown. Based
on current survey information, depth of cover at these sites
varies from 19 inches to over 48 inches.
Scenarios have been created to address the question: “How
many failures, similar to these past incidents, might occur
today?” These scenarios take into account the proposed mitigation measures. Two tables are offered to show potential failure
sequences and opportunities to interrupt those sequences. Since
the previously discussed incidents occurred despite some prevention measures, the estimates are showing opportunities for damage avoidance above and beyond prevention practices thought to
be prevalent at the time of the incidents. These tables are loosely
using terminology to represent frequency of events and probability of events; this is not a rigorous statistical analysis.
In the first table, the estimated probabilities of various scenario elements are presented. The table begins with the assumption that a potentially damaging third-party activity is already
present in the immediate vicinity of the pipeline.
Given that an activity is present, column 2 of the table characterizes the distribution of likely activities. The distribution
assumes a predominance of heavy equipment involvement in
previous incidents, and is therefore conservative since that category is perhaps the most threatening to the pipeline.
Column 3 examines the possibility, under today’s mandated
and advertised One-Call system, that the system is used and the
process works correctly to interrupt a potential failure
sequence. It is assumed that 60 percent of heavy equipment
operators would have knowledge of and experience with the
one-call process and would therefore utilize it. It is further
assumed that the one-call process “works” 80 percent of the
time it is used. (Both assumptions are thought to conservatively
underestimate the actual effectiveness.) This yields a 48 percent chance (60 percent × 80 percent) that this variable interrupts the sequence for that type of activity. It is assumed that
one in ten potentially damaging events would be similarly interrupted in the case of typical homeowner or farmer/rancher
activity. This is lower than for the heavy equipment operators
since the latter group is thought to be more targeted with training, advertising, and presentations from owners of buried utilities. The interruption rates reflect improvements over one-call
effectiveness at the time period of the incidents, approximately
1969 to 1995, which includes periods when there was either no
one-call system available or it was available but not mandated.
The continuously increasing acceptance of the one-call protocols by the public and the response of the pipeline operator to
notifications combine to create this estimated interruption rate.
Columns 4, 5, and 6 examine the possibility that, given that
an activity has escaped the one-call process, the impending failure sequence will be interrupted by improved ROW condition,
signs, or public/contractor education. Assumptions of likelihood range from five in 100 to 15 in 100, respectively. This
means that out of every group of threatening activities, at least a
few will be interrupted by someone noticing the ROW and/or a
sign or having been briefed on pipeline issues and reacting
appropriately. In the interest of conservatism, relatively small
interruption rates are assigned to the proposed improvements in
these variables although they can realistically prevent an incident in numerous credible scenarios.
Column 7 examines the effect of depth of cover. One reference [Ref. [58] in this book] cites Western European data
(CONCAWE) which suggests that approximately 15 percent
fewer third-party damage failures occur with each foot of cover
over the normal (0.9 meters). Using this, a length-weighted
average depth of cover was calculated for the pipeline. The pipeline shows between 7 percent and 4 percent
improvement, based on the lengths that are covered deeper than
about 0.9 meters. Based on this, a value of 5 percent was
assigned to the cover variable for the “heavy equipment operations” type of activity. This means that five out of every 100
potentially damaging third-party activities would be prevented
from causing damage by an extra amount of cover. For homeowner activities, depth of cover is judged to be a more effective
deterrent, preventing three out of ten potential damages. One
out of ten potentially threatening rancher/farmer activities are
assumed to be rendered non-threatening by depth of cover.
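The CONCAWE-based adjustment can be expressed as a multiplicative reduction per extra foot of cover. A sketch, assuming the 15 percent figure compounds per foot (the excerpt does not specify compounding versus simple reduction):

```python
# 15 percent fewer third-party failures per foot of cover beyond 0.9 m,
# applied here as a compounding reduction (an assumption).
NORMAL_COVER_M = 0.9
REDUCTION_PER_FOOT = 0.15
M_PER_FOOT = 0.3048

def failure_rate_factor(cover_m: float) -> float:
    """Relative third-party failure rate versus normal (0.9 m) cover."""
    extra_feet = max(0.0, (cover_m - NORMAL_COVER_M) / M_PER_FOOT)
    return (1.0 - REDUCTION_PER_FOOT) ** extra_feet

print(failure_rate_factor(0.9))   # 1.0: no extra cover, no reduction
print(failure_rate_factor(1.2))   # roughly 0.85: about one extra foot
```

Length-weighting such factors over pipe segments gives the 4 to 7 percent system-level improvement cited above.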
Finally, the impact of patrolling is examined in column 8.
A table of common third-party activities is presented against a
continuum of opportunity to detect, expressed in days (see
patrol figure in Table 1). The “opportunity” includes an estimate of how long after the activity occurs its presence can still
be detected. Since third-party activities can cause damages that
do not immediately lead to failure, this ability to inspect evidence of recent activity is important. The table is intended to
provide an estimate of the types of activities that can reasonably
be detected in a timely manner by a patrol. The frequency of the
various types of activities will be very location- and time-
specific, so frequencies shown are very rough estimates. It
seems reasonable to assume that activity involving heavy
equipment requires more staging, is of a longer duration, and
leaves more evidence of the activity. All of these promote the
opportunity for detection by patrol.
Statistical theory confirms that, with a few reasonable
assumptions, the probability of detection is directly proportional to the frequency of patrols. For example, calculations
indicate that the probability of detection in two patrols is twice
the probability of detection in one patrol if detection of the
same event cannot occur in both patrols. This condition is
essentially satisfied for these purposes since patrol sightings
subsequent to the initial sighting are no longer considered to be
“detections.” The key point here is that the probability that one
or more events will occur is the sum of their individual probabilities if the events are mutually exclusive.
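This point can be checked numerically: when only the first sighting counts as the “detection,” per-patrol probabilities add linearly; without that condition, the usual complement rule gives a smaller result. A small sketch with an assumed per-patrol detection probability:

```python
p1 = 0.18   # assumed probability that a single patrol detects an activity

# Mutually exclusive detections (only the first sighting counts):
# probabilities add linearly, capped at certainty.
def p_detect_exclusive(n_patrols: int) -> float:
    return min(1.0, n_patrols * p1)

# Independent detections: the standard complement rule.
def p_detect_independent(n_patrols: int) -> float:
    return 1.0 - (1.0 - p1) ** n_patrols

print(p_detect_exclusive(2))              # 0.36: exactly double
print(round(p_detect_independent(2), 4))  # 0.3276: slightly less than double
```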
Discounting patrol errors, as the patrol interval approaches 0
hours (a continuous observation of the ROW), the detection
probability approaches 100 percent. The patrol interval is
changing from a historical maximum interval between patrols
of 336 hours (once every two weeks on average, although it
could be as high as three weeks or 504 hours). The mitigation
plan requires a patrol every 24, 60, or 168 hours, depending on
the location. In theory, this improves the detection probability
by multiples of 2 to 14. On the table of activities, patrol intervals of 24, 60, and 168 hours suggest detections of 93 percent,
75 percent, and 36 percent of activities, respectively. This
means that, with a maximum interval between patrols of 24
hours, only 7 percent of activities would go undetected, given
the assumed distribution of activities. Obviously, the real situation is much more complex than this simple analysis, but the
rationale provides a background for making estimates of patrol
effectiveness.
In order to make conservative estimates (possibly underestimating the patrol benefits), the increased detection probabilities under the proposed mitigation plan are assumed to be 30
percent, 10 percent, and 20 percent for heavy equipment,
homeowner, and ranch/farm operations, respectively. This
means that about one-third of heavy equipment operations,
one in every ten homeowner activities, and one in every
five ranch/farm activities would be detected before damage
occurred or, in the case of no immediate leak, would provide the
operator time to detect and repair damages before a leak occurs.
Homeowner and ranch/farm actions are judged to be more difficult to detect by patrol because such activities tend to appear
with less warning and are often of shorter duration than the
heavy equipment operations.
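Interval-based detection fractions of this kind can be approximated by treating each activity type as visible for a characteristic number of days, as in Table 3.3. A sketch assuming an activity is detected if and only if a patrol occurs while it or its evidence is present; the activity mix and durations below are illustrative assumptions, not the study's tabulated values:

```python
# Illustrative activity spectrum: (fraction of all activities, days the
# activity plus its evidence remain detectable). Values are assumptions.
activities = [
    (0.30, 30.0),   # highway construction
    (0.20, 10.0),   # buried utility crossings
    (0.20, 5.0),    # land clearing / drainage work
    (0.20, 2.0),    # agricultural work
    (0.10, 0.5),    # fence posts, short homeowner jobs
]

def fraction_detected(patrol_interval_hours: float) -> float:
    """Expected fraction of activities seen by at least one patrol."""
    interval_days = patrol_interval_hours / 24.0
    total = 0.0
    for frac, duration_days in activities:
        # If the activity outlasts the interval, detection is certain;
        # otherwise detection probability is duration/interval.
        total += frac * min(1.0, duration_days / interval_days)
    return total

for hours in (24, 60, 168, 336):
    print(hours, round(fraction_detected(hours), 2))
```

Longer-duration heavy equipment work dominates the detected fraction at long intervals, consistent with the discussion above.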
Table 2 converts Table 1 columns 3 through 8 into probabilities of the sequence NOT being interrupted, the “opposite” of
Table 1.
Column 9 of Table 2 estimates the fraction of times that the
line is under enough stress that, in conjunction with powerful
enough equipment, a rupture would occur immediately. This
stress level is a function of many variables, but it is conservatively estimated that 50 percent of the line is under a relatively
high stress level. For the 50 percent of the line that could be
damaged, but not to the extent that immediate leakage occurs,
the mitigation plan’s corrosion control and integrity reverification processes, which specifically factor in third-party damage
potential in determining reinspection intervals, are designed to
detect and remediate such damages before leaks occur.
Table 1  p (interruption of event sequence by . . .): for each activity type (heavy equipment operations, homeowner equipment operations, ranch/agricultural equipment operations), p (activity) and the estimated interruption probabilities for One-Call, ROW condition, signs, education, cover, and patrol.
Column 10 of Table 2 estimates the frequency of a third-party activity involving equipment of enough power to cause an
immediate leak. This may be somewhat correlated to depth of
cover, but no such distinction is made here. Heavy equipment is
assigned a value of 0.9, indicating high probability that the
equipment has enough power to rupture the line. A minor
reduction from a value of 1.0 that would otherwise be assigned
is recognized: it is assumed that such heavy equipment normally is operated by skilled personnel. So, while heavy equipment is certainly capable of rupturing a line, a skilled operator
can usually “feel” when something as unyielding as a steel pipe
is encountered, and will investigate with hand excavation
before extra power is applied. Homeowners and ranchers/farmers are assumed to be using powerful equipment in 30 percent
and 60 percent of their activities, respectively. No credit for
operator skill is assumed in these cases.
Column 11 multiplies all column estimates and shows the
combined frequency for the three types of activities.
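That multiplication can be sketched as an event-tree product. Interruption probabilities below follow the excerpt where stated (e.g., 48 percent one-call interruption for heavy equipment); the combined ROW/signs/education values and the activity-mix weights are assumptions for illustration only:

```python
# Event-tree sketch of "column 11": each mitigation layer independently
# offers a chance to interrupt the failure sequence. Values marked as
# assumptions are hypothetical.
interrupt = {
    "heavy_equipment": {"one_call": 0.48, "row_signs_education": 0.15,
                        "cover": 0.05, "patrol": 0.30},
    "homeowner":       {"one_call": 0.10, "row_signs_education": 0.10,
                        "cover": 0.30, "patrol": 0.10},
    "ranch_farm":      {"one_call": 0.10, "row_signs_education": 0.10,
                        "cover": 0.10, "patrol": 0.20},
}
p_high_stress = 0.5                      # fraction of line highly stressed
p_powerful = {"heavy_equipment": 0.9, "homeowner": 0.3, "ranch_farm": 0.6}
activity_mix = {"heavy_equipment": 0.6, "homeowner": 0.2,
                "ranch_farm": 0.2}       # hypothetical distribution

def p_immediate_failure(kind: str) -> float:
    """P(activity escapes every interruption AND can rupture the line)."""
    p_escape = 1.0
    for p in interrupt[kind].values():
        p_escape *= (1.0 - p)            # survives this interruption chance
    return p_escape * p_high_stress * p_powerful[kind]

p_fail = sum(w * p_immediate_failure(k) for k, w in activity_mix.items())
print(f"p(immediate failure | threatening activity) = {p_fail:.3f}")
```

The complement of this product, aggregated over activity types, is the kind of combined interruption rate the study reports.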
Although not quantified here, the impact of future focus on
the issue of third-party damages can reasonably be included.
The pipeline industry shares this concern with buried utilities
carrying water, sewer, and any of several types of data transmission lines. Interruption of such lines can represent enormous costs. Additional unexamined activities that would
suggest efforts in the future to prevent such damages include
on-going government-industry initiatives addressing the issue.
It is important to note that this analysis is strictly a logic exercise,
to test if the hypothesis could reasonably be supported through
assumed effectiveness of individual mitigation measures.
This analysis suggests that under the proposed mitigation plan,
and assuming modest mitigation benefits from the mitigation
measures, approximately 89 percent of third-party activities not
interrupted under previous mitigation efforts could reasonably be
expected to be interrupted before they cause a pipeline failure. The
initial hypothesis therefore seems reasonable, given the results and
the conservative assumptions employed in this analysis.
These calculations are based on scenarios with assumptions
that are thought to underestimate rather than overestimate
prevention effectiveness. However, since they contain a large
element of randomness, third-party damages are more difficult
to predict and prevent. Scenarios can be envisioned where all
reasonable preventive measures are ineffective and damage
does occur. Such scenarios are usually driven by human error, an element that causes difficulty in making predictions.
Table 2  p (event) = 1 - p (interruption): for each activity type (heavy equipment operations, homeowner equipment operations, ranch/agricultural equipment operations), the probabilities of the sequence not being interrupted, together with p (high stress), p (equipment powerful enough), and the combined frequency.
1 Assume that 60 percent of contractors follow one-call procedure and that marking, etc., is 80 percent effective.
2 Western Europe data suggest 15 percent failure reduction per foot of additional cover (over "normal" depth).
3 Assume cover is more effective against non-heavy-equipment damages.
4 At least six of the seven previous third-party incidents involved heavy equipment used by contractors.
5 Assume percent of line that is in a highly stressed condition; enough to promote leak upon moderate damage.
6 Assume that these percentages are detected prior to incident or soon thereafter (damage assessment opportunity).
7 Previous third-party damage rate allowed 336 hours as maximum interval between detection opportunities; new is 24, 60, or 168 hours maximum.
8 Assumes that homeowner and ranch activities tend to appear faster than most heavy equipment projects.
9 Includes door-to-door in Tier 3 and presentations to excavating contractors everywhere.
10 Chances that equipment is powerful enough that, in conjunction with a higher stress condition in the pipe wall, immediate rupture is likely.
11 p (damage detection before failure) = function of (patrol, CIS, ILI, fatigue, corrosion rate, stress level).
12 No one-call system was available for five out of seven previous third-party leaks.
Corrosion Index
I. Overview

Corrosion Threat = (Atmospheric Corrosion)
                 + (Internal Corrosion)
                 + (Buried Metal Corrosion)

Corrosion Threat
A. Atmospheric Corrosion              0-10 pts
   A1. Atmospheric Exposures          0-5 pts
   A2. Atmospheric Type               0-2 pts
   A3. Atmospheric Coating            0-3 pts
B. Internal Corrosion                 0-20 pts
   B1. Product Corrosivity            0-10 pts
   B2. Preventions                    0-10 pts
C. Subsurface Corrosion               0-70 pts
   C1. Subsurface Environment         0-20 pts
       Soil Corrosivity               0-15 pts
       Mechanical Corrosion           0-5 pts
   C2. Cathodic Protection            0-25 pts
       (includes Interference Potential)
   C3. Coating                        0-25 pts
Overall Threat of Corrosion           0-100 pts
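Under this weighting scheme, the corrosion index is the sum of the three category scores, each clamped to its 0-to-maximum range. A minimal sketch:

```python
# Corrosion index as a sum of category scores (0-100 total).
# Category maxima follow the example weightings in this chapter:
# atmospheric 10%, internal 20%, subsurface 70%.
MAXIMA = {"atmospheric": 10, "internal": 20, "subsurface": 70}

def corrosion_index(scores: dict) -> float:
    """Sum category scores, clamping each to its 0..max range."""
    total = 0.0
    for category, max_pts in MAXIMA.items():
        total += max(0.0, min(scores.get(category, 0.0), max_pts))
    return total

example = {"atmospheric": 8, "internal": 15, "subsurface": 40}
print(corrosion_index(example))   # 63.0
```

An evaluator adjusting the weightings for a specific situation, as the text allows, would simply change the maxima.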
II. Background
The potential for pipeline failure caused by corrosion is perhaps the most familiar hazard associated with steel pipelines.
This chapter discusses how common industry practices of corrosion analysis and mitigation can be incorporated into the risk
assessment model (see Figure 4.1). A detailed discussion of the
complex mechanisms involved in corrosion is beyond the
scope of this text.
Corrosion comes from the Latin word corrodere, meaning
“gnaw to pieces.” Corrosion, as it is used in this text, focuses
mainly on a loss of metal from pipe, although the concepts
apply to many corrosion-like degradation mechanisms. From
previous discussions of entropy and energy flow, we can look at
corrosion from a somewhat esoteric viewpoint. Simply stated,
manufactured metals have a natural tendency to revert to their
original mineral form. While this is usually a very slow process,
Figure 4.1 Basic risk assessment model
it does require the injection of energy to slow or halt the disintegration. Corrosion is of concern because any loss of pipe wall
thickness invariably means a reduction of structural integrity
and hence an increase in risk of failure.
Non-steel pipeline materials are sometimes susceptible to
other forms of environmental degradation. Sulfates and acids
in the soil can deteriorate cement-containing materials such as
concrete and asbestos cement pipe. Some plastics degrade
when exposed to ultraviolet light (sunlight). Polyethylene pipe
can be vulnerable to hydrocarbons. Polyvinyl chloride (PVC)
pipe has been attacked by rodents that actually gnaw through
the pipe wall. Pipe materials can be internally degraded when
transporting an incompatible product. All of these possibilities
can be considered in this index. Even though the focus here is
on steel lines, the evaluator can draw parallels to assess his non-steel lines in a similar fashion.
As with other failure modes, evaluating the potential for corrosion follows logical steps, replicating the thought process
that a corrosion control specialist would employ. This involves
(1) identifying the types of corrosion possible (atmospheric,
internal, subsurface), (2) identifying the vulnerability of the
pipe material, and (3) evaluating the corrosion prevention
measures used at all locations. Corrosion mechanisms are
among the most complex of the potential failure mechanisms.
As such, many more pieces of information are efficiently utilized in assessing this threat.
Some materials used in pipelines are not susceptible to corrosion and are virtually free from any kind of environmental
degradation potential. These are not miracle materials by any
means. Designers have usually traded away some mechanical
properties such as strength and flexibility to obtain this property. Such pipelines obviously carry no risk of corrosion-induced failure, and the corrosion index should reflect that
absence of threat (see Figure 4.2).
The two factors that must be assessed to define the corrosion
threat are the material type and the environment. The environment includes the conditions that impact the pipe wall, internally as well as externally. Because most pipelines pass through
several different environments, the assessment must allow for
this either by sectioning appropriately or by considering each
type of environment within a given section and using the worst
case as the governing condition.
Several types of human errors can increase the risk from corrosion. Incorrect material selection for the environment (both
internal and external exposures) is a possible mistake. Placing
incompatible materials close to each other can create or aggravate corrosion potentials. This includes joining materials such
as bolts, gaskets, and weld metal. Welding processes must be
selected with corrosion potential in mind. Insufficient monitoring or care of corrosion control systems can also be viewed as a
form of human error. These factors are covered in the incorrect
operations index discussion of Chapter 6.
In general, four ingredients are required for the commonly
seen metallic corrosion to progress. There must exist an anode,
a cathode, an electrical connection between the two, and an
electrolyte. Removal of any one of these ingredients will halt
the corrosion process. Corrosion prevention measures are
designed to do just that.
Three types of corrosion
The corrosion index assesses three general types: atmospheric
corrosion, internal corrosion, and subsurface corrosion. This
reflects three general environment types to which the pipe wall
may be exposed.
Atmospheric corrosion deals with pipeline components that
are exposed to the atmosphere. To assess the potential for corrosion here, the evaluator must look at items such as

Atmospheric exposures
Atmospheric type
Painting/coating/inspection program.
For the general risk assessment model described here,
atmospheric corrosion is weighted as 10% of the total corrosion threat. This indicates that atmospheric corrosion is a relatively rare failure mechanism for most pipelines. This is due to
the normally slower atmospheric mechanisms and the fact that
most pipelines are predominantly buried and, hence, not
exposed to the atmosphere. The evaluator must determine if
this is an appropriate weighting for her assessments.

Figure 4.2  Assessing corrosion potential: sample of data used to score the corrosion index.

Atmospheric Corrosion (0-10 pts)
  A1. Atmospheric exposures: ground/soil interface, hot spots
  A2. Atmospheric type
  A3. Atmospheric coating: type, age, application of coating; visual inspection age and results; other inspection age and results
Internal Corrosion (0-20 pts)
  B1. Product corrosivity: flowstream conditions; upset conditions; pH, solids, H2S, CO2, MIC, etc.
  B2. Internal protection: internal coating; operational measures; low-spot accumulations, equipment failure, etc.
Subsurface Corrosion (0-70 pts)
  C1. Subsurface environment
      Soil corrosivity: resistivity, pH, moisture, carbonates, MIC, etc.
      Mechanical corrosion: stress level, stress cycling, temperature, coating, CP, pH, etc.
  C2. Cathodic protection: test lead surveys, age, and results; close spaced surveys, type, age, and results
      Interference potential: DC related; AC related; shielding potential
  C3. Coating: type, age, application of coating; visual inspection age and results; other inspection age and results
Internal corrosion deals with the potential for corrosion originating within the pipeline. Assessment items include
Product corrosivity
Preventive actions.
Internal corrosion is weighted as 20% of the total corrosion
risk in the examples. This indicates that internal corrosion is
often a more significant threat than atmospheric corrosion, but
still a relatively rare failure mechanism for most pipelines.
Nevertheless, some significant pipeline failures have been
attributed to internal corrosion. The evaluator may wish to give
this category a different weighting in certain situations.
Subsurface pipe corrosion is the most complicated of the categories, reflecting the complicated mechanisms underlying this
type of corrosion. Among the items considered in this assessment are a mix of attributes and preventions including:
Cathodic protection
Pipeline coatings
Soil corrosivity
Visual inspection age and results
Presence of other buried metal
Potential for stray currents
Stress corrosion cracking potential
Spacing of test leads
Inspections of rectifiers and interference bonds
Frequency of test lead readings
Frequency and type of coating inspections
Frequency and type of inspections of pipe wall
Close interval surveys
Use of internal inspection tools.
Subsurface corrosion is weighted as 70% of the total corrosion threat in the examples of this chapter. For nonmetal lines,
the evaluator may wish to adjust this weighting to better reflect
the actual hazards.
Note that corrosion threats are very situation specific. The
weightings of the three corrosion types proposed here are
thought to generally apply to many pipelines but might be ill-suited to others. Any of the corrosion types might lead to a failure under the right circumstances, even when weightings
alerts or even conversions to absolute probability scales might
be appropriate, as is addressed in discussions of data analysis
later in this text.
Especially in the case of buried metal, inspection for corrosion is commonly done by indirect methods. Direct inspection
of a pipe wall is often expensive and damaging (excavation and
coating removal are often necessary to directly inspect the pipe
material). Corrosion assessments therefore usually infer corrosion potential by examining a few variables for evidence of corrosion. These inference assessments are then occasionally
confirmed by direct inspection.
Characteristics that may indicate a high corrosion potential
are often difficult to measure. For example, in buried metal corrosion, soil acts as the electrolyte, the environment that supports the electrochemical action necessary to cause this type of
corrosion. Electrolyte characteristics are of critical importance,
but include highly variable items such as moisture content, aeration, bacteria content, and ion concentrations. All of these
characteristics are location specific and time dependent, which
makes them difficult to even estimate accurately. The parameters affecting atmospheric and internal corrosion potentials can
be similarly difficult to estimate.
Because corrosion is often a highly localized phenomenon,
and because indirect inspection provides only general information, uncertainty is usually high. With this difficulty in mind,
the corrosion index reflects the potential for corrosion to occur,
which may or may not mean that corrosion is actually taking place.
The index, therefore, does not directly measure the potential for
failure from corrosion. That would require inclusion of additional variables such as pipe wall thickness and stress levels.
So, the primary focus of this assessment is the potential for
active corrosion. This is a subtle difference from the likelihood
of failure by corrosion. The time to failure is related to the
resistance of the material, the aggressiveness of the failure
mechanism, and the time of exposure. The material resistance
is in turn a function of material strength and dimensions, most
notably pipe wall thickness, and the stress level.
In most cases, we are more interested in identifying locations
where the mechanism is potentially more aggressive rather than
predicting the length of time the mechanism must be active
before failure occurs. An exception to this is found in systems
where leak rate is used as a leading indicator of failure and
where failure is defined as a pipe break (see Chapter 11).
Corrosion rate
Corrosion rate can be measured directly by using actual pipe
samples removed from a pipeline and calculating metal loss over
time. Extrapolating this sample corrosion rate to long lengths of
pipe will usually be very uncertain, given the highly localized
nature of many forms of corrosion. A corrosion rate can also be
measured with coupons (metal samples) or electronic devices
placed near the pipe wall. From these measurements, actual corrosion on a pipeline can be inferred, at least for the portions
close to the measuring devices. In theory, one can also translate
in-line inspection (ILI) or other inspection results into a corrosion rate. Currently, this is seen as a very problematic exercise
given spatial accuracy limitations of continuously changing ILI
technologies and the need for multiple comparative runs over
time. However, as data become more precise, corrosion rate
estimates based on measurements become more useful.
Because the corrosion scores are intended to measure corrosion potential and aggressiveness, it is believed that the scores
relate to corrosion rates. However, the relationship can only be
determined by using actual measured corrosion rates in a vari-
ety of environments. Until the relationship between corrosion
index and corrosion rate can be established from such measurements, a theoretical relationship can be posited instead. An example of this is shown in
Chapter 14.
Information degradation
As discussed in earlier chapters, information has a useful life
span. Because corrosion is a time-dependent phenomenon and
corrosion detection is highly dependent on indirect survey
information, the timing of those surveys plays a role in uncertainty and hence risk.
The date of the information should therefore play a large role
in any determination based on inspections or surveys. One way
to account for inspection age is to make a graduated scale indicating the decreasing usefulness of inspection data over time.
This measure of information degradation can be applied to the
scores as a percentage. After a predetermined time period,
scores based on previous inspections degrade-conservatively
assuming increasing risk-to some predetermined value. An
example is shown in Table 2.2. In that example, the evaluator
has determined that a previous inspection yields no useful
information after 5 years and that the usefulness degrades
uniformly at 20% per year.
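As a sketch, the graduated-scale idea above might be implemented as follows. The 20%-per-year rate, 5-year horizon, and function names are illustrative, mirroring the Table 2.2 example rather than prescribing values:

```python
def inspection_usefulness(age_years, degradation_per_year=0.20, max_age_years=5.0):
    """Fraction of an inspection-based score still credited.

    Usefulness degrades uniformly (20% per year here) until it
    reaches zero at max_age_years, per the Table 2.2 example.
    """
    if age_years <= 0:
        return 1.0
    return max(0.0, 1.0 - degradation_per_year * min(age_years, max_age_years))

def degraded_score(inspection_score, default_score, age_years):
    """Blend an inspection-based score toward a conservative default
    (higher assumed risk) as the inspection ages."""
    f = inspection_usefulness(age_years)
    return f * inspection_score + (1 - f) * default_score
```

A 2.5-year-old survey would thus be credited at half strength, and a 6-year-old survey would contribute nothing beyond the conservative default.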
Changes from previous editions
After several years of use of previous versions of the corrosion
algorithm, some changes have been proposed in this edition of
this book. These changes reflect the input of pipeline operators
and corrosion experts and are thought to enhance the model’s
ability to represent corrosion potential.
The first significant change is the modification of the weightings of the three types of corrosion. In most parts of the world
and in most pipeline systems, subsurface corrosion (previously
called buried metal corrosion) seems to far outweigh the other
types of corrosion in terms of failure mechanisms. This has
prompted the change in weightings as shown in Table 4.1. Note
that these are very generalized weightings and may not fairly
represent any specific situation. A pipeline with above average
exposures to atmospheric and internal corrosion mechanisms
would warrant a change in weightings.
Another significant change is in the groupings of subsurface
corrosion variables. The new suggested scoring scheme makes
use of the previous variables, but changes their arrangements
and suggests new ways to evaluate them. A revised subsurface
corrosion evaluation shows a regrouping of variables to better
reflect their relationships and interactions.
Table 4.1
Changes to corrosion weightings
Corrosion type               Previous weighting    Weighting in current examples
Subsurface (buried metal)
Scoring the corrosion potential 4/65
Scoring the corrosion potential
All variables considered here continue to reflect common
industry practice in corrosion mitigation/prevention. The variable weightings indicate the relative importance of each item in
terms of its contribution to the total corrosion risk. The evaluator must determine if these weightings are most appropriate for
the specific systems being assessed.
In the scoring system presented here, points are usually
assigned to conditions and then added to determine the corrosion threat. This system adds points for safer conditions. For
example, under subsurface corrosion of steel pipelines, three
main aspects are examined: environment, coating, and cathodic
protection. The best combination of environment (very
benign), coating (very effective), and cathodic protection (also
very effective) commands the highest points.
An alternative approach that may be more intuitive in some
ways is to begin with an assessment of the threat level and then
consider mitigation measures as adjustment factors. In this
approach, the evaluator might wish to begin with a rating of
atmosphere type, product corrosivity, or
subsurface conditions, depending on which of the three types of
corrosion is being examined. Then, multipliers are applied to
account for mitigation effectiveness. For example, in a scheme
where an increasing number of points represents increasing
risk, perhaps a subsurface environment of Louisiana swampland warrants a risk score of 90 (very corrosive). A dry Arizona
desert environment has an environmental rating of 20 (very low
corrosion). Then, the best coating system decreases or offsets
the environment by 50% and the best cathodic protection system offsets it by another 50%. So, the Louisiana situation with
very robust corrosion prevention would be 90 x 50% x 50% =
22.5. This is very close to the Arizona desert situation with no
coating or cathodic protection system. This is intuitive since a
very benign environment, from a corrosion rate perspective,
can be seen as roughly equivalent to a corrosive environment
with mitigation.
Further discussion of scoring options such as this can be
found in Chapter 2.
A. Atmospheric corrosion (weighting: 10% of
corrosion threat)
Atmospheric corrosion is basically a chemical change in the
pipe material resulting from the material's interaction with the
atmosphere. Most commonly this interaction causes the oxidation of metal. In the United States alone, the estimated annual
loss due to atmospheric corrosion was more than $2 billion,
according to one 1986 source [31]. Even though cross-country
pipelines are mostly buried they are not completely immune to
this type of corrosion.
The potential for and relative aggressiveness of atmospheric
corrosion is captured in this portion of the model. The evaluator
may also include other types of potential degradations of
exposed pipe such as the effect of ultraviolet light on some plastic materials.
A possible evaluation scheme for atmospheric corrosion
is outlined below and described in the following sections.
Atmospheric Corrosion (10% of corrosion threat = 10 pts)
   Exposures (50% of atmospheric = 5 pts)
   Environment (20% of atmospheric = 2 pts)
   Coatings (30% of atmospheric = 3 pts)
      Fitness (50% of coatings = 1.5 pts)
      Condition (50% of coatings = 1.5 pts)
         Visual inspection (50% of condition)
         Nondestructive testing (NDT) (30% of condition)
         Destructive testing (DT) (20% of condition)
A l . Atmospheric exposure (weighting: 50% of
atmospheric corrosion)
The evaluator must determine the greatest risk from atmospheric corrosion by first locating the portions of the pipeline
that are exposed to the most severe atmospheric conditions.
Protection from this form of corrosion is considered in the next
variable. In this way, the situation is assessed in the most conservative manner. The most severe atmospheric conditions may
be addressed by the best protective measures. However, the
assessment will be the result of the worst conditions and the
worst protective measures found in the section. This conservatism not only helps in accounting for some unknowns, it also
helps in pointing to situations where actions can be taken to
improve the risk picture.
A schedule of descriptions of all atmospheric exposure scenarios should be set up. The evaluator must decide which scenarios offer the most risk. This decision should be based on data
(historical failures or discoveries of problems), when available,
and employee knowledge and experience. The following is an
example of such a schedule for steel pipe:
Air/water interface               0 pts
Casings                           1 pt
Supports/hangers                  2 pts
Ground/air interface              2 pts
Insulation                        3 pts
Other exposures                   4 pts
None                              5 pts
Multiple occurrences detractor   -1 pt
In this schedule, the worst case, the lowest point value, governs the entire section being evaluated.
Air/water interface  The air/water interface is also known as
a splash zone, where the pipe is alternately exposed to water
and air. This could be the result of wave or tide action, for
instance. Sometimes called waterline corrosion, the mechanism at work here is usually oxygen concentration cells.
Differences in oxygen concentration set up anodic and cathodic
regions on the metal. Under this scenario, the corrosion mechanism is enhanced as fresh oxygen is continuously brought to the
corroding area and rust is carried away. If the water happens to
be seawater or brackish (higher salt content), the electrolytic
properties enhance corrosion because the higher ion content
further promotes the electrochemical corrosion process.
Shoreline structures often have significant corrosion damage
due to the air/water interface effect.
Casings  Industry experience points to buried casings as a
prime location for corrosion to occur. Even though the casing
and the enclosed carrier pipe are beneath the ground, atmospheric corrosion can be the prime corrosion mechanism. A vent
pipe provides a path between the casing annular space and the
atmosphere. In casings, the carrier pipe is often electrically
connected to the casing pipe, despite efforts to prevent it. This
occurs either through direct metallic contact or through a
higher resistance connection such as water in the casing. When
this connection is made, it is nearly impossible to control the
direction of the electrochemical reaction, or even to know accurately what is happening in the casing. The worst situation
occurs when the carrier pipeline becomes an anode to the casing pipe, meaning the carrier pipe loses metal as the casing pipe
gains ions. Even without an electrical connection, the carrier
pipe is subject to atmospheric corrosion, especially as the casing becomes filled with water and then later dries out (an
air/water interface). The inability for direct observation or even
reliable inference techniques causes this scenario to rate high
in the risk hierarchy (see Figure 4.3 and The case for/against casings, later in this section).
Ground/air interface  As with the air/water interface, the
ground/air interface can be harsh from a corrosion standpoint.
This is the point at which the pipe enters and leaves the ground
(or is lying on the ground). The harshness is caused in part by
the potential for trapping moisture against the pipe (creating a
water/air interface). Soil movements due to changing moisture
content, freezing, etc., can also damage pipe coating, exposing
bare metal to the electrolyte.
Insulation Insulation on aboveground pipe is notorious for
trapping moisture against the pipe wall, allowing corrosion to
proceed undetected. If the moisture is periodically replaced
with freshwater, the oxygen supply is refreshed and corrosion is
promoted. As with casings, such corrosion activity is usually
not directly observable and, hence, can be potentially more
damaging.
None  If there is no corrodible portion of the pipeline
exposed to the atmosphere, the potential for atmospheric corrosion does not exist.
Supports/hangers  Another hot spot for corrosion, as determined by industry experience, is pipe supports and hangers,
which often trap moisture against the pipe wall and sometimes
provide a mechanism for loss of coating or paint. This occurs as
the pipe expands and contracts, moving against the support and
perhaps scraping away the coating. Mechanical-corrosion
damage is also possible here. This type of damage often goes undetected.
Other exposures The above cases should cover the range of
worst case exposures for steel pipe exposed to the atmosphere.
One of the above situations must exist for any aboveground piping; the pipe is either supported and/or it has one of the listed
interfaces. A situation may exist, however, in which a non-steel
pipe is not subject to degradation by any of the oxidation contributors listed. A plastic pipe may not be affected by any water
or air or even chemical contact and yet may become brittle (and
hence weaker) when exposed to sunlight. Sunlight exposure
should therefore be included in that particular risk assessment.
Multiple occurrences detractor
In this example schedule,
the evaluator deducts 1 point for sections that have multiple
occurrences of a given condition. This reflects the increased
opportunity for mishap because there are more potential corrosion sites. By this reasoning, a section containing many supports would receive 2 - 1 = 1 pt, the equivalent of a section
containing a casing. This says that the risk associated with multiple supports equals the risk associated with one casing. A further distinction could be made by specifying a point deduction
for a given number of occurrences: -1 point for 5 to 10 supports, -2 points for 10 to 20 supports, etc. This may be an
unnecessary complication, however.
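The worst-case-governs rule and the multiple occurrences detractor described above can be sketched as follows. The point schedule is illustrative, consistent with the values used in the text (casing = 1 pt, supports = 2 pts); the dictionary and function names are assumptions:

```python
# Illustrative exposure schedule (lower points = worse condition).
EXPOSURE_POINTS = {
    "air/water interface": 0,
    "casing": 1,
    "supports/hangers": 2,
    "ground/air interface": 2,
    "insulation": 3,
    "other exposures": 4,
    "none": 5,
}

def atmospheric_exposure_score(exposure_counts):
    """Score a section from a dict of {exposure_type: count}.

    The worst case (lowest point value) governs the section; 1 point
    is deducted when any condition occurs more than once (the
    multiple occurrences detractor).
    """
    if not exposure_counts:
        return EXPOSURE_POINTS["none"]
    worst = min(EXPOSURE_POINTS[e] for e in exposure_counts)
    detractor = 1 if any(n > 1 for n in exposure_counts.values()) else 0
    return worst - detractor
```

With this sketch, a section containing many supports scores 2 - 1 = 1 pt, the same as a section containing a single casing, just as the reasoning in the text concludes.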
Figure 4.3  Typical casing installation.
Example 4.1: Scoring road casings
A section of steel pipeline being evaluated has several road
crossings in which the carrier pipe is encased in steel.
There are two aboveground valve stations in this section.
One of these stations has approximately 25 ft of pipe supported
on concrete and steel pedestals. The other one has no supports.
The evaluator assesses the section for atmospheric corrosion
“facilities” as follows:
Casings                 1 pt
Supports/hangers        2 pts
Ground/air interface    2 pts
Picking the worst case, the point value for this section is 1 pt.
The evaluator feels that the number of casings and number of
supports and number of ground/air interfaces are roughly
equivalent and chooses not to use the multiple occurrences
option. If other sections being evaluated have a significantly
different number of occurrences, adjustments would be needed
to show the different risk picture. A distinction between a section with one casing and a section with two casings is needed to
show the increased risk with two casings.
In many modern assessments, segmentation is done so that
sections with atmospheric exposures are distinct from those
that have no such exposures. A cased piece of pipe will often be
an independent section for scoring purposes since it has a distinct risk situation compared with neighboring sections with no
casing. The neighboring sections will often have no atmospheric exposures and hence no atmospheric corrosion threat at
all. This sectioning approach is a more efficient way to perform
risk assessments as is discussed in Chapter 2.
A2. Atmospheric type (weighting: 20% of
atmospheric corrosion)
Certain characteristics of the atmosphere can enhance or accelerate the corrosion of steel. They are thought to promote the
oxidation process. Oxidation is the primary mechanism evaluated in this section. Some of these atmospheric characteristics
and some simplifying generalities about them are as follows:
Chemical composition. Either naturally occurring airborne
chemicals such as salt or CO2 or man-made chemicals such
as chlorine and SOx (which may form H2SO3 and H2SO4)
can accelerate the oxidation of metal.
Humidity. Because moisture can be a primary ingredient of
the corrosion process, higher air moisture content is usually
more corrosive.
Temperature. Higher temperatures tend to promote corrosion.
A schedule should be devised to show not only the effect of a
characteristic, but also the interaction of one or more characteristics. For instance, a cool, dry climate is thought to minimize
atmospheric corrosion. If a local industry produces certain airborne chemicals in this cool, dry climate, however, the atmosphere might now be as severe as a tropical seaside location.
The following is an example schedule with categories for
several different atmospheric types, ranked from most harsh to
most benign, from a corrosion standpoint:
Chemical and marine                   0 pts
Chemical and high humidity            0.5 pt
Marine, swamp, coastal                0.8 pt
High humidity and high temperature    1.2 pts
Chemical and low humidity             1.6 pts
Low humidity and low temperature      2 pts
No exposures                          2 pts
A. Chemical and marine Considered to be the most corrosive atmosphere, this includes certain offshore production
facilities and refining operations, especially if in splash-zone
environments. The pipe components are exposed to airborne
chemicals and salt spray that promote oxidation, as well as
occasional submersion in water.
B. Chemical and high humidity  Also quite a harsh environment, this may include chemical or refining operations in
coastal regions. Airborne chemicals and a high moisture content in the air combine to enhance oxidation of the pipe steel.
C. Marine, swamp, coastal
High levels of salt and moisture
combine to form a corrosive atmosphere here.
D. High humidity and high temperature
Similar to the situation above, this case may be seasonal or in some other way not
as severe as the marine condition.
E. Chemical and low humidity While oxidation-promoting
chemicals are in the air, humidity is low, somewhat offsetting
the effects. Distinctions may be added to account for temperatures here.
F. Low humidity and low temperature  The least corrosive atmosphere will have
no airborne chemicals, low humidity, and low temperatures.
G. No exposures
There are no atmospheric exposures in the
section being evaluated.
In applying this point schedule, the evaluator will probably
need to use judgment. The type of environment being considered will not usually fit specifically into one of these categories,
but will usually be comparable to one of them. Note, however,
that given the low point values suggested here, scoring this variable does not warrant much research and scoring effort.
Example 4.2: Scoring atmospheric conditions
The evaluator is comparing three atmospheric conditions.
The first case is a line that runs along a beach on Louisiana’s
Gulf Coast. This most closely resembles condition C. Because
there are several chemical-producing plants nearby and winds
may occasionally carry chemicals over the line, the evaluator
adjusts the C score down by 50%.
The second case is a steel line in eastern Colorado. While the
line is seasonally exposed to higher temperatures and humidity,
it is also frequently in cold, dry air. The evaluator assigns a point
value based on an adjusted condition D. This is 1.6 pts, equivalent from a risk standpoint to condition E, even though there is
no chemical risk.
The final case is a line in southern Arizona. Experience confirms that this environment does indeed experience only minor,
nonaggressive corrosion. Because the evaluator foresees the
evaluation of a line in a similarly dry, but also cold climate,
he awards points for condition F: 2 pts adjusted for higher temperatures = 1.9 points. (He plans to score the dry, cold climate as
2 pts.)
These evaluations therefore yield the following rank order
and relative magnitude:
Louisiana    0.4 pt
Colorado     1.6 pts
Arizona      1.9 pts
The evaluator sees little difference between conditions in
Colorado and Arizona, from an atmospheric corrosion viewpoint, but feels that conditions around the line in south
Louisiana are roughly four times worse.
A3. Atmospheric coating (weighting: 30% of
atmospheric corrosion)
The third component in this study of the potential for atmospheric corrosion is an analysis of the preventive measures taken
to minimize the threat. Obviously, where the environment is
harsher, more preventive actions are required. From a risk
standpoint, a situation where preventive actions are not
required-a very benign environment-poses less risk than a
situation where preventive actions are being taken to protect a
pipeline from a harsh environment.
The most common form of prevention for atmospheric corrosion is to isolate the metal from the offending environment.
This is usually done with coatings. Coatings include paint, tape
wraps, waxes, asphalts, and other specially designed coatings.
For aboveground components, painting is by far the most common technique.
No coating is defect free, so the corrosion potential will
never be totally removed, only reduced. Note that, at this point,
the evaluator is making no judgments as to whether a high-quality coating or inspection program is needed. That determination is made when the attributes of facilities and atmosphere
type are combined with an assessment of these preventions.
Coating evaluations
Coating effectiveness depends on four factors:
Quality of the coating
Quality of the coating application
Quality of the inspection program
Quality of the defect correction program.
The first two address the fitness of the coating-its ability
to perform adequately in its intended service for the life of
the project. The second two address the current condition of the
coating-how it is actually performing.
For a general, qualitative evaluation, each of these components can be rated on a 4-point scale: good, fair, poor, or absent.
The point values should probably be equivalent unless the evaluator can say that one component is of more importance than
another. A quality coating is of little value if the application is
poor; a good inspection program is incomplete if the defect correction program is poor. Perhaps an argument can be made that
high scores in coating and application place less importance on
inspection and defect correction. This would obviously be a
sliding scale and is probably an unnecessary complication. An
evaluation scale could look like this:
Coating fitness (weighting: 50% of coating evaluation)
Coating quality  Evaluate the coating in terms of its appropriateness in its present application. Where possible, use data
from coating stress tests or actual field experience to rate the
quality. When these data are not available, draw from any similar experience or from judgment.
Good-A high-quality coating designed for its present environment.
Fair-An adequate coating but probably not specifically
designed for its specific environment.
Poor-A coating is in place but is not suitable for long-term
service in its present environment.
Absent-No coating present.
Note: Some of the more important coating properties include
electrical resistance, adhesion, ease of application, flexibility,
impact resistance, flow resistance (after curing), resistance to
soil stresses, resistance to water, resistance to bacteria or other
organism attack. In the case of submerged or partially submerged lines, marine life such as barnacles or borers must be considered.
Coating application  Evaluate the most recent coating application
process and judge its quality in terms of attention to precleaning, coating thickness, the application environment (control of temperature, humidity, dust, etc.), and the curing or
setting process.
Good-Detailed specifications used, careful attention paid to
all aspects of the application; appropriate quality control systems used.
Fair-Most likely a proper application, but without formal
supervision or quality controls.
Poor-Careless, low-quality application performed.
Absent-Application was incorrectly done, steps omitted, environment not controlled.
Coating condition (weighting: 50% of coating evaluation)
Inspection  Evaluate the inspection program for its thoroughness and timeliness. Documentation may also be an integral part of the best possible inspection program.
Good-Formal, thorough inspection performed specifically
for evidence of atmospheric corrosion. Inspections are performed by trained individuals using checklists at appropriate
intervals (as dictated by local corrosion potential).
Scoring the corrosion potential 4/69
Fair-Informal inspections, but performed routinely by qualified individuals.
Poor-Little inspection; reliance is on chance sighting of problem areas.
Absent-No inspection done.
Note: Typical coating faults include cracking, pinholes,
impacts (from sharp objects), compressive loadings (stacking
of coated pipes, for instance), disbondment, softening or flowing, and general deterioration (ultraviolet degradation, for example).
The inspector should pay special attention to sharp corners
and difficult shapes. They are difficult to clean prior to painting, and difficult to adequately coat (paint will flow away from
sharpness). Examples are nuts, bolts, threads, and some valve
components. These are often the first areas to show corrosion
and will give a first indication as to the quality of the paint job.
Correction of defects
Evaluate the program of defect correction in terms of thoroughness and timeliness.
Good-Reported coating defects are immediately documented
and scheduled for timely repair. Repairs are carried out per
application specifications and are done on schedule.
Fair-Coating defects are informally reported and are repaired
at convenience.
Poor-Coating defects are not consistently reported or repaired.
Absent-Little or no attention is paid to coating defects.
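The four-factor rating scheme above can be sketched in code. The good/fair/poor/absent mapping to 3/2/1/0 points is an assumption (consistent with the decimal scores in Examples 4.3 and 4.4), as are the function name and the equal weighting of the two components within each half:

```python
# Assumed point values for the 4-point qualitative scale.
RATING_POINTS = {"good": 3.0, "fair": 2.0, "poor": 1.0, "absent": 0.0}

def atmospheric_coating_score(quality, application, inspection, defect_correction):
    """Combine the four coating factors into one score.

    Quality and application make up coating fitness; inspection and
    defect correction make up coating condition. Fitness and condition
    are weighted 50/50 per the scheme in the text, giving 0 pts (all
    absent) to 3 pts (all good), matching the 3-pt coating weighting.
    """
    fitness = (RATING_POINTS[quality] + RATING_POINTS[application]) / 2
    condition = (RATING_POINTS[inspection] + RATING_POINTS[defect_correction]) / 2
    return 0.5 * fitness + 0.5 * condition
```

Intermediate ratings ("a little above or below fair") could be handled by passing numeric values directly instead of the dictionary keys.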
A more rigorous evaluation of coating condition would
involve specific measurements of defects found adjusted by
the time that has passed since the inspection and the use of special equipment during the inspection.
Nondestructive testing (NDT) performed during the inspection includes a visual inspection. The visual inspection can
quantify and/or characterize the defects observed. Qualitative
scales for such visual assessments can be found in National
Association of Corrosion Engineers (NACE) guidelines. NDT
using special equipment can also quantify the coating thickness
and the extent of holidays. Current thickness can be compared
against design or intended thickness to assess the degradation
or other inconsistency with design intent. If an electrical continuity tester is used, the extent of holidays can be expressed in
terms of the voltage setting and number of indications or other
measurement of number and size of coating defects.
An NDT inspection rating scale can be established for a
more detailed evaluation. Table 4.2 provides an example.
Destructive testing (DT) involves removing a sample of
coating or pipe and performing laboratory tests. Properties
investigated might include
Abrasion resistance
Impact resistance.
In some cases, a more detailed evaluation of coating condition might be warranted. An example list of variables for this
more rigorous evaluation is as follows:
Atmospheric Coating Condition
Visual inspection results
Coating failures per square foot
Date of last visual inspection
NDT inspection results
Thickness versus design thickness
Holidays per square foot
Chalking, cracking, blistering, flaking
Date of NDT inspection
DT inspection results
Abrasion resistance
Impact resistance
Shear strength
Date of DT inspection
Example 4.3: Scoring coating condition (Good)
In this section of aboveground piping, records indicate that a
high-quality paint was applied per NACE specifications. The
operator sends a trained inspector to all aboveground sites once
each quarter, and corrects all reported deficiencies at least
twice per year. The evaluator awards points as follows:
Coating quality-good       3 pts
Application-good           3 pts
Inspection-good            3 pts
Defect correction-good     3 pts
Note: Twice-per-year defect correction is deemed appropriate
for the section's environment.
Example 4.4: Scoring coating condition (Fair)
Here, a section contains several locations of aboveground
pipe components at valve stations and compressor stations.
Touch-up painting is done occasionally at the stations. This is
done by a general contracting company at the request of the
Table 4.2  Example NDT inspection rating scale
Blister size    <1/8 in
pipeline area foreman. No formal specifications exist. The
foreman requests paint work whenever he feels it is needed
(based on his personal inspection of a facility). The evaluator
awards points as follows:
Coating quality-fair                2.0 pts
Application-slightly below fair     1.75 pts
Inspection-slightly above fair      2.25 pts
Defect correction-poor              1.0 pt
Note: In this example, the evaluator wishes to make distinctions between the evaluation scores, so she uses decimals to rate
items a little above or a little below the normal rating. This may
be appropriate in some cases, but it adds a level of complexity
that may not be warranted, given the low point values.
The evaluator feels that choice of paint is probably appropriate though not specified. Application is slightly below
fair because no specifications exist and the contractor’s workforce is usually subject to regular turnovers. Inspection is
slightly above fair because the foreman does make specific
inspections for evidence of atmospheric corrosion and is trained
in spotting this evidence. Defect correction is poor because
defect reporting and correction appear to be sporadic at best.
The case for/against casings
Buried casings show up at several points in this risk assessment: sometimes as risk reducers, sometimes as risk creators.
The following information provides a general discussion of the
use of pipeline casings.
Oversized pipe, called casing pipe, is sometimes placed over
the carrier pipeline to protect it from external loadings and/or to
facilitate repairs to the carrier pipe. Casings have long been
used by the pipeline industry. They are generally placed under
highways, roads, and railroads where higher external loadings
are anticipated or where pipeline leaks might cause damage
to a structure (see Figure 4.3 earlier).
A casing also allows for easier replacement of the pipeline if
a problem should develop. Instead of digging up a roadway, the
pipeline can simply be pulled out of the casing, repaired, and
reinstalled without disrupting traffic.
A third potential benefit from casings is that a slow pipeline
leak can be contained in the casing and detected via the casing
vent pipe rather than slowly undermining the roadway or forming underground pockets of accumulated product.
An industry controversy arises because the benefits casings
provide are at least partially offset by problems caused by their
presence. These problems are primarily corrosion related. It is
probably safe to say that corrosion engineers would rather not
have casings in their systems. It is more difficult to protect an
encased pipe from corrosion. The casing provides an environment in which corrosion can proceed undetected and prevention methods are less effective. Because the pipeline cannot be
directly inspected, indirect methods are used to give indications
of corrosion. These techniques are not comprehensive, sometimes unreliable, and often require expert interpretation.
Several dilemmas/problems are typically encountered
with casings. Atmospheric corrosion can occur if any coating
defects exist, and yet, insertion of the pipeline into the casing is
an easy way to damage the coating and create defects. End seals
are used to keep water, mud, and other possible electrolytes out
of the casing annular space, but are easily defeated by poor
design and/or installation or by minor ground movements. The
presence of electrolyte in the annular space can lead to corrosion cells between the casing and the pipeline, as well as interference problems with the cathodic protection system. Vent
pipes are often installed to release leaked products, but these
vents allow direct communication between the casing annular
space and the atmosphere; consequently, moisture is almost
always present in the annular space.
Cathodic protection is usually employed to protect buried
steel pipelines. The casing pipe can shield the pipeline from the
protective currents if there is no electrical bond between the
casing and the pipeline. If there is such a bond, the casing usually not only shields the pipeline from the current, but also
draws current from it, effectively turning the pipeline into an
anode that is sacrificed to protect the casing pipe, which is now
the cathode!
Several mitigative measures can be employed to reduce corrosion problems in casings. These were illustrated earlier in
Figure 4.3 and are described below:
Test leads. By comparing the pipe-to-soil potentials (voltages) of the pipeline versus the casing pipe, evidence of
bonding between the two is sought. Test leads allow the voltage measurements to be made.
Nonconductive spacers. These are designed to keep the
pipeline physically and electrically separated from the casing
pipe. They also help to protect the pipe coating during insertion into the casing.
End seals. These are designed to keep the annular space
free of substances that can act as an electrolyte (water, mud,
etc.).
Filling the annular space. Use of a dielectric (nonconductive) substance reduces the potential for electrical paths
between the casing and the pipeline. Unfortunately, it also
negates some of the casing benefits listed earlier.
Reflecting the trade-off in benefits, casings can be risk
reducers (protection from external loads, including third-party
damages and land movements) yet at the same time be risk
adders in the corrosion index (promoting atmospheric and subsurface corrosion of metal). It would be nice to say that one
will always outweigh the other, but we do not know that this is
always the case. A risk cost/benefit analysis for casings can be performed by using a risk model to quantify the relative advantages and disadvantages from a risk standpoint.
Other factors must be considered in casing decisions. Often
regulatory agencies leave no choice in the matter. The owner of
the crossing (railroad, highway, etc.) may also mandate a certain design. Economics, of course, always play an important
role. The costs of casings must include ongoing maintenance costs, while the costs of not using a casing must include pipe strong enough to carry all loads, as well as damage to the crossing should pipeline replacement ever be needed.
As an additional benefit of applying a risk management system such as this one to the problem of casings, the pipeline
operator and designer have a rational basis for weighing the
benefits of alternate designs.
Scoring the corrosion potential 4/71
B. Internal corrosion (weighting: 20% of corrosion index)

Internal Corrosion (20%, 20 pts)
  Product corrosivity (50% of internal corrosion, 10 pts)
    From potential upsets (70% of product corrosivity, 7 pts)
      Equipment (30% of 7 pts, 2 pts)
      O&M practices (30% of 7 pts, 2 pts)
      Flow velocity (40% of 7 pts, 3 pts)
    From flow stream characteristics (30% of product corrosivity, 3 pts)
      Solids related (40% of 3 pts, 1 pt)
      Water related (60% of 3 pts, 2 pts)
  Preventions (50% of internal corrosion, 10 pts)
  Measured corrosion rate (adjustments)
In this section, an assessment is made of the potential for
internal corrosion. Internal corrosion is pipe wall loss or damage caused by a reaction between the inside pipe wall and the
product being transported. Such corrosive activity may not be
the result of the product intended to be transported but rather a
result of an impurity in the product stream. Seawater intrusion
into an offshore natural gas stream, for example, is not uncommon. The natural gas (methane) will not harm steel, but saltwater and other impurities can certainly promote corrosion. Other corrosion-promoting substances sometimes found in natural gas include CO2, chlorides, H2S, organic acids, oxygen, free water, solids or precipitates, and sulfur-bearing compounds.
Microorganisms that can indirectly promote corrosion
should also be considered here. Sulfate-reducing bacteria and
anaerobic acid-producing bacteria are sometimes found in oil
and gas pipelines. They produce H2S and acetic acid, respectively, both of which can promote corrosion [79].
Pitting corrosion and crevice corrosion are specialized forms
of galvanic or concentration cell corrosion commonly seen in
cases of internal corrosion. Corrosion set up by an oxygen concentration cell can be accelerated if certain ions are present to play a role in the reactions. The attack against certain stainless steels by saltwater is a classic example. Erosion as a form of internal corrosion is also considered here.
Product reactions that do not harm the pipe material should
not be included here. A good example of this is the buildup of
wax or paraffin in some oil lines. While such buildups cause
operational problems, they do not normally contribute to the
corrosion threat unless they support or aggravate a mechanism
that would otherwise not be present or as severe.
Some of the same measures used to prevent internal corrosion, such as internal coating, are used not only to protect the
pipe, but also to protect the product from impurities that may be
produced by corrosion. Jet fuels and high-purity chemicals are
examples of pipeline products that are often carefully protected
from such contaminants.
The threat from internal corrosion is evaluated by examining the product characteristics and the preventive measures being taken to offset certain product characteristics.
B1. Product corrosivity (weighting: 50% of internal corrosion potential)
This is an assessment of the relative aggressiveness of the
pipeline contents that are in immediate contact with the pipe
wall. The greatest threat exists in systems where the product is
inherently incompatible with the pipe material. Another threat
arises when corrosive impurities can routinely get into the
product. These two scenarios can be scored separately and then
combined for an assessment of product corrosivity:
Product corrosivity = [flow stream characteristics] + [upset conditions]
These components are added because the worst case would be one where both are active in the same pipeline: both a corrosive product and the potential for additional corrosion through upsets. The weighting of the two is situation specific, but because hydrocarbons are inherently noncorrosive and most transportation of hydrocarbons strives for very low product contaminant levels, a weighting emphasizing upset potential might be appropriate for many hydrocarbon transport scenarios. The following example point scores use a 30/70% weighting scheme, emphasizing product corrosivity episodes originating from unintentional contaminations (upsets).
For convenience, the term contaminant is used here to mean
some product component that is corrosive to the pipe wall, even
though some amounts of the component might be allowable
according to the product specification.
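As a rough sketch of the 30/70 combination just described (up to 3 pts for flow stream characteristics plus up to 7 pts for upset potential), the two sub-scores might be combined as follows. The function name and inputs are illustrative, not part of the scoring system itself:

```python
def product_corrosivity(flow_stream_pts: float, upset_pts: float) -> float:
    """Combine the two sub-scores, clamping each to its weighted maximum.

    Assumes the example 30/70 split of the 10-pt product corrosivity
    score: flow stream characteristics worth up to 3 pts, upset
    potential worth up to 7 pts.
    """
    flow_stream_pts = min(max(flow_stream_pts, 0.0), 3.0)  # 30% of 10 pts
    upset_pts = min(max(upset_pts, 0.0), 7.0)              # 70% of 10 pts
    return flow_stream_pts + upset_pts

print(product_corrosivity(2.0, 5.5))  # → 7.5
```

A different pipeline might justify a different split; only the clamping and addition are implied by the formula above.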
Normal flow stream characteristics  The normal flow stream characteristics should represent a measure of the corrosivity of the products transported in the pipeline. This measure assesses corrosion potential from normal contact between flowing product and the pipe wall, based on product specifications and/or product analyses. A "no-flow" condition might aggravate otherwise harmless contact between product and pipe wall. An example is the higher concentrations of dropout contaminants that occur during no-flow or low-flow conditions, such as water accumulation in low spots. These scenarios can be considered here (as normal flow conditions) or they might be more efficiently handled under the evaluation of corrosivity due to upset conditions (where they are considered to be abnormal flow conditions).
In many cases, the flow stream characteristics can be divided into two main categories, water related and solids related, for purposes of evaluating corrosivity [94]. These categories do not precisely reflect the role or transport state of the various contaminants, but might be useful for organizing variables.
Flow stream characteristics = [water related] + [solids related]
Water-related contamination potential might include an assessment of the concentrations of components such as:

Water content

Solids-related contamination potential might include measuring the concentrations of components such as:

Suspended solids (see also discussion of erosion potential)
4/72 Corrosion Index
A detailed assessment of internal corrosion would use the
actual measurements of the concentrations, especially where
such measurements are easily available to the evaluator.
Weightings can be assigned based on the perceived role of the
contaminant in corrosion. Point scales can then be developed
based on the weightings and the expected range of measurements, best case to worst case, for each contaminant.
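A hypothetical illustration of such a point scale: each contaminant measurement is interpolated linearly between an expected best case (full points) and worst case (zero points), scaled by its perceived-role weighting. All ranges, weights, and names below are invented for the sketch:

```python
def contaminant_points(measured, best, worst, weight_pts):
    """Linear point scale: 'best' earns full points, 'worst' earns zero.

    The best-to-worst range and the weight are evaluator-chosen
    assumptions, not values from the scoring system itself.
    """
    frac = (worst - measured) / (worst - best)  # 1.0 at best, 0.0 at worst
    frac = min(max(frac, 0.0), 1.0)             # clamp out-of-range readings
    return weight_pts * frac

# e.g., free water content (hypothetical 0-7 lbs/MMscf range, worth 2 pts)
print(round(contaminant_points(3.5, 0.0, 7.0, 2.0), 2))  # → 1.0
```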
Upset potential This aspect of internal corrosion measures
the potential for increased product corrosivity under abnormal
conditions. This might include unintentional introduction of
contaminants and changes in flow patterns that might aggravate
previously insignificant corrosion potential. The introduction
of contaminants is a function of (1) the processing prior to
delivery into the pipeline, ( 2 )equipment capabilities and failure potential, and (3) operations and maintenance practices of
the facility delivering the product into the pipeline.
Changes in flow patterns, including stagnant flow conditions, can be considered to be "upsets." Low flow rates can lead to
increased chances of liquid or solid dropout and accumulation
at low spots, whereas high flow rates can lead to erosion.
Contaminant dropout may lead to increased contact time
between pipe wall and product, maybe at higher contaminant
concentrations (at low spot accumulation points, for instance).
Anything that leads to increased corrosive contaminant contact
with pipe walls will logically increase corrosion potential and
rate. Note, however, that subsequent higher flow rates might
sweep accumulations and hence be a mitigation measure as
described later.
Erosion is the removal of pipe wall material caused by the
abrasive or scouring effects of substances moving against the
pipe wall. It is a form of corrosion only in the pure definition of
the word, but is considered here as an internal corrosion potential. High velocities and abrasive particles in the product stream
are the normal contributing factors to erosion. Impingement
points such as elbows and valves are the most susceptible erosion points. Gas at high velocities may be carrying entrained
particles of sand or other solid residues and, consequently, can
be especially damaging to the pipe components.
Historical evidence of erosion damage is of course a strong
indicator of susceptibility. Other evidence includes high product stream velocities (perhaps indicated by large pressure
changes in short distances) or abrasive fluids. Combinations of
these factors are, of course, the strongest evidence. If, for
instance, an evaluator is told that sand is sometimes found in
filters or damaged valve seats, and that some valves had to be
replaced recently with more abrasion-resistant seat materials,
he may have sufficient reason to penalize the pipe section for
erosion potential.
The overall assessment of upset potential, as a contributing
factor to internal corrosion potential, can be accomplished
through an evaluation and scoring of the following:
Equipment: an evaluation of the types of equipment used to remove contaminants or prevent contaminant introduction into the pipeline, and the reliability of such equipment. Examples include product filters, dehydrators, and scrubbers. The potential for carryovers due to incorrect operations, improperly sized equipment, or unusual levels of contaminants received should be included in the evaluation.
O&M practices: an evaluation of the actions taken by the operator to prevent introduction of contaminants. This may include the degree of human intervention required and the number of redundancies that can interrupt a sequence of events that might otherwise result in increased contaminant concentrations. See also the discussion under mitigation.
Highest flow velocity/highest profile: an evaluation of the normal and worst case high flowing velocities and an assessment of this effect on erosion potential and contact time between contaminant and pipe wall. Both the average high and the peak velocities should be of interest.
Lowest flow velocity/lowest profile: an evaluation of the normal and worst case low flowing velocities and an assessment of this effect on erosion potential and contact time between contaminant and pipe wall. Both the average low and the lowest velocities should be of interest.
Points can be assigned to these factors based on observed or
reported conditions and can be combined for a final assessment
of upset potential.
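A minimal sketch of combining these upset-potential factors, using the example weighting of 2 pts for equipment, 2 pts for O&M practices, and 3 pts for flow velocity (7 pts total). The 0-to-1 "fraction of best practice" inputs are an assumption of this sketch, standing in for the evaluator's judgment:

```python
# Example weights from the internal corrosion hierarchy; the 0.0-1.0
# judgment inputs are an illustrative convention, not from the text.
WEIGHTS = {"equipment": 2.0, "om_practices": 2.0, "flow_velocity": 3.0}

def upset_potential(scores: dict) -> float:
    """Sum each factor's weight scaled by a clamped 0-1 evaluator judgment."""
    return sum(WEIGHTS[k] * min(max(v, 0.0), 1.0) for k, v in scores.items())

print(round(upset_potential(
    {"equipment": 0.5, "om_practices": 1.0, "flow_velocity": 0.8}), 2))
# → 5.4
```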
Simplified scoring of product corrosivity
In many cases, the amount of detail described above may not be
warranted for scoring internal corrosion potential. This is especially true if the risk evaluation is primarily used as a high-level
screening tool. In this case, the above factors can be considered
more generally and perhaps outside a formal scoring protocol.
These considerations can then be used to assign point values in a more qualitative fashion.
A simple schedule can be devised to assign points to the product corrosivity if a more generalized approach is appropriate:

Strongly corrosive                        0 pts
Mildly corrosive                          3 pts
Corrosive only under special conditions   7 pts
Never corrosive                          10 pts
“Strongly corrosive” suggests that a rapid damaging kind of
corrosion is possible. The product is highly incompatible with
the pipe material. Transportation of brine solutions, water,
products with H2S, and many acidic products are examples of materials that are highly corrosive to steel lines.
“Mildly corrosive” suggests that damage to the pipe wall is
possible but only at a slow rate. Having no knowledge of the
product corrosivity can also fall into this category. It is conservative to assume that any product can do damage, unless we
have evidence to the contrary.
“Corrosive only under special conditions” means that the
product is normally benign, but there exists the chance of introducing a harmful component into the product. CO2 or saltwater excursions in a methane pipeline are a common example. These natural components of natural gas production are usually removed before they can get into the pipeline. However, the equipment used to remove such impurities is subject to failures, and subsequent spillage of impurities into the pipeline is a possibility.
“Never corrosive” means that there are no reasonable possibilities that the product transported will ever be incompatible
with the pipe material.
The evaluator may also wish to interpolate and assign point
values between the ones shown.
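The simplified schedule, including the interpolation the evaluator may apply, could be encoded as follows. This is a sketch: the category strings are illustrative, and the 7-pt value for the "special conditions" category is inferred from Example 4.5 later in this section (7 − 2 = 5 pts):

```python
# Simplified product corrosivity schedule (sketch). The 7-pt entry is
# inferred from Example 4.5; the dictionary-keyed lookup is an
# illustrative convention, not the book's notation.
SCHEDULE = {
    "strongly corrosive": 0,
    "mildly corrosive": 3,
    "corrosive only under special conditions": 7,
    "never corrosive": 10,
}

def corrosivity_points(category: str) -> int:
    return SCHEDULE[category.lower()]

# An evaluator might interpolate between listed values, e.g. halfway
# between "mildly corrosive" (3) and "special conditions" (7):
borderline = (SCHEDULE["mildly corrosive"]
              + SCHEDULE["corrosive only under special conditions"]) / 2
print(corrosivity_points("Never corrosive"), borderline)  # → 10 5.0
```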
B2. Preventions (weighting: 50% of internal corrosion potential)
It is often economically advantageous to transport corrosive
substances in pipe that is susceptible to corrosion by the
substance. In these cases, it is prudent to take actions to reduce
the damage potential.
Having assessed the potential for a corrosive product, the
evaluator can now examine and evaluate mitigation measures
being employed against potential internal corrosion. A point
schedule, based on the probable effectiveness of the measures,
will show how the risk picture is affected. In the following example schedule, points are added for each preventive action that is employed, up to a maximum of 10 points.
Anticorrosion activities being performed:

None                     0 pts
Internal monitoring      2 pts
Inhibitor injection      4 pts
Not needed              10 pts
Internal coating         5 pts
Operational measures     3 pts
Pigging                  3 pts
None  This, of course, means that no actions are taken to reduce the risk of internal corrosion.
Internal monitoring  Normally, this is done in either of two ways: (1) by an electronic probe that can continuously transmit measurements that indicate a corrosion potential or (2) by a coupon that actually corrodes in the presence of the flowing product and is removed and measured periodically. Each of these methods requires an attachment to the pipeline to allow the probe or coupon to be inserted into and extracted from the flowing product.
Another method involves the use of a spool piece, a test piece of pipe that can be removed and carefully inspected for evidence of internal corrosion. Searching for corrosion products in pipeline filters or during pigging operations is yet another method of inspection/monitoring.
To be creditable under this section, an inspection method
requires a well-defined program of monitoring and interpretation of the data at specified intervals. It is further implied that
appropriate actions are taken, based on the analysis from the
monitoring program.
Where a corrosion rate is actually measured, the overall
internal corrosion score can be somewhat calibrated with this
information. Ideally, the scores will reflect the corrosion potential and will correlate well with more direct evidence such as a
measured corrosion rate. Caution must be exercised, however, when assigning favorable scores based solely on the nondetection of internal corrosion at certain times and at limited locations. The potential for corrosion might be high, and worth noting, even when no active corrosion is detected. More is said about corrosion rate later in this chapter and in Chapter 14.
Inhibitor injection  When the corrosion mechanism is fully understood, certain chemicals can be injected into the flowing product stream to reduce or inhibit the reaction. Because oxygen is a chief corroding agent of steel, an "oxygen-scavenging" chemical can combine with the oxygen in the product to prevent this oxygen from reacting with the pipe wall. A more common kind of chemical inhibitor forms a protective barrier between the steel and the product, a coating in effect. Inhibitor is reapplied periodically or continuously injected to replace the inhibitor that is absorbed or displaced by the product stream. In cases where microorganism activity is a problem, biocides can be added to the inhibitor. The evaluator should be confident that the inhibitor injection equipment is well maintained and injects the proper amount of inhibitor at the proper rate. Inhibitor effectiveness is often verified by an internal monitoring program as described above.
A pigging program may be necessary to supplement inhibitor
injection. The pigging would be designed to remove free liquids
or bacteria colony protective coverings, which might otherwise
interfere with inhibitor or biocide performance.
Internal coating Internal coating can take several forms
including spray-on applications of plastics, mortar, or concrete
as well as insertion liners for existing pipelines. New materials
technology allows for the creation of “lined” pipe. This is usually a steel outer pipe that is isolated from a potentially damaging product by a material that is compatible with the product
being transported. Plastics, rubbers, or ceramics are common
isolating materials. They can be installed during initial pipe fabrication, during pipeline construction, or sometimes the material can be added to an existing pipeline. Such two-material composite systems are also discussed in the design
index (Chapter 5). For purposes of this part of the risk assessment, the evaluator should assure himself that the composite
system is effective in protecting the pipeline from damage due
to internal corrosion. A common concern in such systems is the
detection and repair of a leak that may occur in the liner.
The internal coating can be judged by the same criteria as
coatings for protection from atmospheric corrosion and
buried metal corrosion described in this chapter. Note that an internal coating that is applied for purposes of reducing flow resistance might be of limited usefulness in corrosion prevention.
Operational measures  In situations where the product is normally compatible with the pipe material but corrosive impurities can be introduced, operational measures are often used to prevent the introduction of impurities. Systems used to dehydrate or filter a
product stream fall into this classification. A system that strips
sour gas (sulfur compounds) from a product stream is another
example. Maintaining a certain temperature on a system in order to inhibit corrosion would also be a valid operational measure. These systems or measures are termed operational here because the operation of the equipment is often as critical
as the original design. Procedures and mechanical safeties
should be in place to prevent corrosive materials from entering
the pipeline in case of equipment failure or system overloads.
The evaluator should check to see that the conditions for which the equipment was designed are still valid, especially if
the effectiveness of the impurities removal cannot be directly
determined. The evaluator should look for consistency and
effectiveness in any operational measure purported to reduce
internal corrosion potential.
Pigging  A pig is a cylindrical or spherical object designed to
move through a pipeline for various purposes (Figure 4.4). Pigs
are used to clean pipeline interiors (wire brushes may be
attached), separate products, push products (especially liquids), gather data (when fitted with special electronic devices),
detect leaks, etc. A wide variety of special-purpose pigs in
many shapes and configurations is possible. There is even a
bypass pig that is designed with a relief valve to clear debris
from in front of the pig if the debris causes a high differential
pressure across the pig.
A regular program of running cleaning or displacement-type
pigs to remove potentially corrosive materials is a proven effective method of reducing (but not eliminating) damage from
internal corrosion. The program should be designed to remove
liquids or other materials before they can do appreciable damage to the pipe wall. Monitoring of the materials displaced from
the pipeline should include a search for corrosion products
such as iron oxide in steel lines. This will help to assess the
extent of corrosion in the line.
Pigging is partly an experience-driven technique. From a
wide selection of pig types, the knowledgeable operator must
choose an appropriate model, design the pigging protocol
including pig speed, distance, and driving force, and assess the
progress during the operation. The evaluator should be satisfied
that the pigging operation is indeed beneficial and effective in
removing corrosive products from the line in a timely fashion.
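Taken together, the preventions scoring (add points for each activity employed, capped at the 10-pt maximum) can be sketched as follows. The point values mirror the example schedule above; the function and the "not needed" flag are illustrative:

```python
# Example point values per preventive activity (from the example
# schedule); the lookup-and-cap structure is an illustrative sketch.
PREVENTION_PTS = {
    "internal monitoring": 2,
    "inhibitor injection": 4,
    "internal coating": 5,
    "operational measures": 3,
    "pigging": 3,
}

def preventions_score(activities, not_needed=False):
    """Add points per activity employed, capped at the 10-pt maximum."""
    if not_needed:
        return 10  # preventions unnecessary: product can never harm the pipe
    return min(sum(PREVENTION_PTS[a] for a in activities), 10)

print(preventions_score(["inhibitor injection", "internal coating", "pigging"]))
# → 10
```

Note how the cap matters: the three activities above total 12 raw points but earn only the 10-pt maximum.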
Example 4.5: Scoring internal corrosion
A section of natural gas pipeline (steel) is being examined.
The line transports gas from offshore production wells. The gas
is dried and treated (removal of sulfur) offshore, but the offshore treating equipment malfunctions rather routinely. The
operator injects inhibitor to control corrosion from any offshore liquids that escape the dehydration process. Recently, it
was discovered that the inhibitor injector had failed for a period
of 2 weeks before the malfunction was corrected. The operator
also runs pigs once per month to remove any free-standing liquids in the pipeline. Corrosion probes provide continuous data
on the corrosion rate inside the line.
The evaluator assesses the situation as follows:
A. Product corrosivity: 5 pts
The line is exposed to corrosive components only under
upset conditions, but 2 points are deducted because the upset
conditions appear to be rather frequent.
B. Preventions
Internal monitoring      2 pts
Inhibitor injection      2 pts
Operational measures     2 pts
Pigging                  3 pts
Total                    9 pts (out of 10 pts max)
Points were deducted from each of two of the preventive
measures (inhibitor injection and operational measures)
because of known reliability problems with the actions. A
penalty for the offshore operational measures was actually
taken twice in this case, once in the product corrosivity and
once in the preventive actions.
The total score for internal corrosion is then:
A + B = 5 + 9 = 14 pts
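The example's arithmetic can be traced in a short sketch. The individual values come from the text; treating the fourth 3-pt prevention credit as the monthly pigging program is an assumption of this sketch:

```python
# Example 4.5 arithmetic (sketch); scores are taken from the text.
product_corrosivity = 7 - 2           # "special conditions" (7 pts) less a
                                      # 2-pt deduction for frequent upsets
preventions = min(2 + 2 + 2 + 3, 10)  # monitoring 2, inhibitor 2 (reduced),
                                      # operational 2 (reduced), pigging 3
                                      # (pigging attribution is assumed)
total = product_corrosivity + preventions
print(product_corrosivity, preventions, total)  # → 5 9 14
```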
C. Subsurface corrosion (weighting: 70% of corrosion index)

Subsurface Corrosion (70% of overall corrosion threat, 70 pts)
  Subsurface environment (20 pts)
    Soil corrosivity (15 pts)
    Mechanical corrosion (5 pts)
  Coating (25 pts)
    Fitness (10 pts)
    Condition (15 pts)
  Cathodic protection (25 pts)
    Effectiveness (15 pts)
    Interference potential (10 pts)
      AC related (2 pts)
      Shielding (1 pt)
      DC related (7 pts)
        Telluric currents (1 pt)
        DC rail (3 pts)
        Foreign lines (3 pts)
Figure 4.4  Examples of pipeline pigs
This part of the risk assessment will apply to metallic pipe material that is buried or submerged. If the pipeline being evaluated is not vulnerable to subsurface corrosion, as would be the case for a plastic pipeline or a totally aboveground pipeline, the evaluator should use the previous two sections and any other pertinent factors to assess the corrosion risk.
Of the three categories of corrosion, this is usually considered to be the most complex. Several corrosion mechanisms can be at work in the case of buried metals. This situation is further complicated by the fact that corrosion activity is normally deduced only from indirect evidence; direct observation is a rather limited option.
The most common danger is from some form of galvanic
corrosion. Galvanic corrosion occurs when a metal or metals in
an electrolyte (an electrically conductive fluid) form anodic
and cathodic regions. A cathode is a metal region that has a
greater affinity for electrons than the corresponding anodic
region. This affinity for electrons is called electronegativity.
Different metals have different electronegativities, and even different areas on a single piece of metal will have slightly different electronegativities. The greater the difference, the stronger the tendency for electrons to flow. If an electrical connection between anode and cathode exists, allowing this electron flow, metal will dissolve at the anode as metal ions are formed and migrate from the parent metal. Chemical reactions occur at the anode and the cathode as ions are formed and corrosion occurs. Such a system, with anode, cathode, electrolyte, and electrical connection between anode and cathode, is called a galvanic cell and is illustrated in Figure 4.5.
Because soil is often an effective electrolyte, a galvanic corrosion cell can be established between areas along a single pipeline or between a pipeline and another piece of buried metal. When a new piece of pipe is attached to an old piece, a galvanic cell can be established between the two metals. Dissimilar soils with differences in concentrations of ions, oxygen, or moisture can also set up anodic and cathodic regions on the pipe surface. Corrosion cells of this type are called concentration cells. When these cells are established, the anodic region will experience active corrosion. The severity of this corrosion is dictated by variables such as the conductivity of the soil (electrolyte) and the relative electronegativities of the anode and cathode.
Common industry practice is to employ a two-part defense
against galvanic corrosion of a pipeline. The first line of
defense is a coating over the pipeline. This is designed to isolate
the metal from the electrolyte. If this coating is perfect, the galvanic cell is effectively stopped; the electric circuit is blocked
because the electrolyte is no longer in contact with the metal. It
is safe to say, however, that no coating is perfect. If only at the
microscopic level, defects will exist in any coating system.
The second line of defense is called cathodic protection (CP). Through connections with other metals, the pipeline is turned into a cathode, which, according to the galvanic cell model, is not subject to loss of metal (as a matter of fact, the cathode actually gains metal). The theory behind cathodic protection is to ensure that the current flow is directed in such a way that current flows to the pipeline and away from an installed bed of metal that is intended to corrode. The installed metal that is to corrode is appropriately called a sacrificial anode. The sacrificial anode has a lower affinity for electrons than the steel it is protecting.
Depending on electrolyte (soil) type and some economic considerations, a voltage may be imposed on the system to further drive
the current flow. When this is necessary, the system is referred to
as an impressed current system (Figure 4.6).
In an impressed current system, rectifiers are used to drive
the low-voltage current flow between the anode bed and
the pipeline. The amount of current required is dictated by
Figure 4.5  The galvanic corrosion cell
Figure 4.6  Pipeline cathodic protection with impressed current rectifier
variables such as coating condition, soil type, and anode bed design, all of which add resistance to this electric circuit.
In the scoring approach described here, the subsurface corrosion threat is examined in three main categories: subsurface environment, cathodic protection, and coating. The weightings are generally equivalent, with subsurface environment carrying slightly less weight. This slight underweighting reflects a belief that most environmental conditions can be overcome with the right coating and CP system. If the evaluator does not believe this to be true, then she may wish to re-weight the main categories.
C1. Subsurface environment (weighting: 20% of corrosion threat)

In order to better visualize the position of this variable in the overall hierarchy of the corrosion threat assessment, the branch of the risk assessment leading to this variable can be seen as:

Corrosion Index
  Atmospheric Corrosion
  Internal Corrosion
  Subsurface Corrosion
    Subsurface environment (20 pts)
      Soil corrosivity (15 pts)
      Mechanical corrosion (5 pts)
    Coating (25 pts)
    Cathodic protection (25 pts)
A major aspect of assessing subsurface corrosion potential
is an evaluation of the environment surrounding the pipe.
A recommendation is to examine the soil corrosivity as the
most important aspect of the environment. This can then be supplemented with an evaluation of the potential for specialized mechanical corrosion effects such as stress corrosion cracking.
Soil corrosivity (weighting: 15%)
Because a coating system is always considered to be an imperfect barrier, the soil is always assumed to be in contact with the pipe wall at some points. Soil corrosivity is often a qualitative measure of how well the soil can act as an electrolyte to promote galvanic corrosion on the pipe. Additionally, aspects of the soil that may otherwise directly or indirectly promote corrosion mechanisms should also be considered. These include bacterial activity and the presence of other corrosion-enhancing substances.
The possibly damaging interaction between the soil and the pipe coating is not a part of this variable. Soil effects on the coating (mechanical damage, moisture damage, etc.) should be considered when judging the coating effectiveness as a risk variable.
The importance of soil as a factor in the galvanic cell activity
is not widely agreed on. Historically, the soil's resistance to
electrical flow has been the measure used to judge the contribution of soil effects to galvanic corrosion. As with any component of the galvanic cell, the electrical resistances play a role
in the operation of the circuit. Soil resistivity or conductivity
therefore seems to be one of the best and most commonly used
general measures of soil corrosivity. Soil resistivity is a function of interdependent variables such as moisture content,
porosity, temperature, ion concentrations, and soil type. Some of these are seasonal variables, corresponding to rainfall or
atmospheric temperatures. Some researchers report that abrupt
changes in soil resistivity are even more important to assessing
corrosivity than the resistivity value itself. In other words,
strong correlations are reported between corrosion rates and
amount of change in soil resistivity along a pipeline [41].
A schedule can be developed to assess the average or worst case (either could be appropriate; the choice, however, must be consistent across all sections evaluated) soil resistivity.
This is a broad-brush measure of the electrolytic characteristic
of the soil.
Microorganism activity can promote corrosion. This is often termed microbially induced corrosion, or MIC. A family of anaerobic bacteria (no oxygen needed for the bacteria to reproduce), called sulfate-reducing bacteria, can cause the depletion of the hydrogen layer adjacent to the outside pipe wall. This hydrogen layer normally provides a degree of protection from corrosion. As it is removed, corrosion reactions can actually be accelerated. Soils with sulfates or soluble salts are favorable environments for anaerobic sulfate-reducing bacteria. Although the microorganism activity does not actually attack the metal, it tends to produce conditions that accelerate corrosion. The sulfate-reducing bacteria are commonly found in
areas where stagnant water or water-logged soil is in contact
with the steel.
Previous discovery of MIC or at least microorganism presence is often the best indicator of such damage potential. Some
operators train employees to look for signs during any and all
pipe excavation and exposure. On excavation, evidence of bacterial activity is sometimes seen as a layer of black iron-sulfide
on the pipe wall. An oxidation-reduction probe can be used to
test for conditions favorable for bacteria activity. (It does not determine if corrosion is taking place, however.) A normal cure for microorganism-promoted corrosion is increased levels of
cathodic protection currents.
pH  The hydrogen ion concentration of the soil, as measured by pH,
can have a dramatic effect on corrosion potential. A pH
lower than 3 or higher than 9 (either side of the neutral 4 to 8
range) can promote corrosion. For metals, more acidic (lower
pH) soils promote corrosion more than the more alkaline
(higher pH) soils. The soil pH may affect other pipe materials in
other ways.
Data sources
Some publicly available databases have relative soil corrosivity evaluations for steel and concrete. These
correspond to specific geographical regions of the world. They
also show pH, moisture content, sulfates, chlorides, water table
depths, and many other soil characteristics. As of this writing,
these data sets tend to be very coarse, averaging many factors
so that the resolution is not fine enough to distinguish local hot
spots of differing characteristics. In fact, the generalized information might exactly contradict more local information. An
example would be where a large-area evaluation shows a very
low soil moisture content, but in reality, there are several small
areas within the larger area (perhaps near the creeks and
ravines) that have relatively high moisture contents most of the
year. This might be significant information for pipelines traversing such areas. Therefore, very coarse resolution data are
sometimes used only as a default or as a factor to consider in
addition to other, more location-specific information.
Scoring soil corrosivity
A simple soil corrosivity assessment scale might use only soil
resistivity as an indicator. An example is shown in Table 4.3.
A more detailed evaluation might involve several additional
variables as discussed above. Each variable is assessed on its
own scale, either using actual measurements or in relative
terms (such as high, medium, or low). They would then be
combined using some relative weighting scheme in order to
arrive at a final soil corrosivity score. An example is shown in
Table 4.4.
The soil corrosivity score could be the result of summing the
subvariable scores:
Soil corrosivity score = [soil resistivity] + [pH] + [soil moisture] +
[MIC] + [STATSGO steel corrosion]
Weightings are established based on the corrosion expert's
judgments or empirical data showing which factors are more
critical in determining soil corrosivity.
Different pipe materials have differing susceptibilities to
damage by various soil conditions. Sulfates and acids in the soil
can deteriorate cement-containing materials such as concrete
or asbestos-cement pipe. Polyethylene pipe may be vulnerable
to damage by hydrocarbons. Any and all special knowledge of
pipe material susceptibility to soil characteristics should be
incorporated into this section of the corrosion index.
Chapter 11 shows an approach where soil corrosivity is
assessed against various pipe materials.
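The weighted combination described above can be sketched in a few lines of code. This is only an illustration of the technique, not the book's model: the weighting values and the 0-to-10 subscore scale are assumed for the example.

```python
# Illustrative weighted soil corrosivity score. The subvariables follow
# the equation in the text; the weighting values below are hypothetical
# and would come from a corrosion expert or empirical data.

WEIGHTS = {
    "soil_resistivity": 0.30,
    "pH": 0.15,
    "soil_moisture": 0.20,
    "MIC": 0.15,
    "STATSGO_steel_corrosion": 0.20,
}

def soil_corrosivity_score(subscores):
    """Combine subvariable scores (assumed 0-10 scale, higher = less
    corrosive) into a single weighted soil corrosivity score."""
    return sum(WEIGHTS[name] * subscores[name] for name in WEIGHTS)

# Example: a moderately corrosive soil
example = {
    "soil_resistivity": 4.0,
    "pH": 7.0,
    "soil_moisture": 3.0,
    "MIC": 5.0,
    "STATSGO_steel_corrosion": 6.0,
}
print(round(soil_corrosivity_score(example), 2))  # → 4.8
```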
Mechanical corrosion effects (Weighting: 5% of
corrosion threat)
This risk variable involves the potential for damaging phenomena that consist of both a corrosion component and a mechanical component. This includes hydrogen stress corrosion
Table 4.3  Example soil corrosivity assessment scale using only soil resistivity

Low resistivity (high corrosion potential): <1,000 ohm-cm
Medium: 1,000-15,000 ohm-cm, or moderately active corrosion indicated
High resistivity (low corrosion potential): >15,000 ohm-cm and no active corrosion
Do not know

From ASME/ANSI B31.8
4/78 Corrosion Index
Table 4.4  Example of a more detailed soil corrosivity assessment

Variable (each with a relative weighting, %):
Soil resistivity
pH
Soil moisture
MICa
STATSGOb steel corrosivity rating
Soil corrosivity score

aMIC = evaluation of the potential for microbially induced corrosion.
bSTATSGO = State Soil Geographic (STATSGO) soils data compiled
by the Natural Resources Conservation Service of the U.S.
Department of Agriculture.

ASME/ANSI B31.8 notes the following as high-risk factors for
stress corrosion cracking, where further investigation may be
warranted if all are present in a segment:

Operating stress > 60% of specified minimum yield strength
Operating temperature > 100°F
Distance from compressor station < 20 miles
Age > 10 years
Coating system other than fusion bonded epoxy (FBE)

An automatic assessment incorporating these criteria can be
set up in a computer environment.
cracking (HSCC), sulfide stress corrosion cracking (SSCC),
hydrogen-induced cracking (HIC) or hydrogen embrittlement,
corrosion fatigue, and erosion. In the United States, stress corrosion cracking (SCC) reportedly caused more than 250
pipeline failures in the 1965-1985 period [52]. Some failure
investigators think that these numbers represent an underreporting of the actual number of SCC-related failures since
such failures are often very difficult to recognize.
Stress corrosion cracking can occur under certain combinations of physical and corrosive stresses. Evidence shows that
three conditions must be present: tensile stress, a susceptible
pipe material, and a supporting environment at the pipe surface.
SCC is sometimes referred to as an “environmentally assisted
cracking” phenomenon. A breakdown in both coating barrier
and cathodic protection must occur before SCC initiates [63].
Two different forms have been identified: high-pH SCC (classical) and near-neutral, low-pH SCC. These are similar in many
ways and differ in the role of temperature, electrolyte characteristics, and cracking morphology [63]. Both types are characterized by formation of corrosion-accelerated cracking in areas
of the pipe wall subjected to high tensile stress levels. The presence of corrosive substances aggravates the situation. Certain
types of steel are more susceptible than others. In general, a
steel with a higher carbon content is more prone to SCC.
Characteristics of the steel that may have been brought about by
welding or other post-manufacturing processes may also make
the steel more susceptible. Materials that have little fracture
toughness (see the design index discussion in Chapter 5) do not
offer much resistance to brittle failure. Rapid crack propagation
brought on by corrosion and stress is more likely in these materials. Note that SCC is also seen in plastic pipe materials.
Stress corrosion cracking is difficult to detect, and SCC failures are not predictable. The effects can be highly localized.
Even a fairly noncorrosive environment can support an SCC
process. A previous history of this type of process is, of course,
strong evidence of the potential. In the absence of historical
data, the susceptibility of a pipeline to this sometimes violent
failure mechanism should be judged by identifying conditions
that may promote the SCC process. Predictive models have
been developed and have been effective in prioritizing excavations, finding higher occurrences of SCC than would be discovered
under a plan of investigations during routine maintenance [63].
ASME/ANSI B31.8 notes several high-risk factors (listed earlier, near Table 4.4), where further investigation may be warranted if all of them are present in a segment.
Stress  Tensile stress at the pipe surface is thought to be a necessary condition for SCC. The stress might be residual, however,
and hence virtually undetectable. The higher the stress, the more
potential for crack formation and growth. Fluctuations in stress
level are also thought to play an aggravating role since such fluctuations produce fatigue loadings that can increase crack growth.
It is reasonable to assume that all pipelines will be under at least
some amount of stress. Because internal pressure is often the
largest stress contributor, pipelines operating at higher pressures
relative to their wall thickness are thought to have more susceptibility to SCC. Thermally induced stresses and bending stresses
can also contribute to the overall stress level, but, for simplicity's
sake, the evaluator may choose only internal pressure as a factor
in assessing potential for SCC.
Environment  High pH levels close to the steel can be a contributing factor in classic SCC. This may be caused by a high pH
in the soil, in the product, or even in the coating. Chlorides, H2S,
CO2, and high temperatures are additional contributing factors. The
presence of certain bacteria will increase the risk. Persistent
moisture and coating disbondment are also threatening conditions. In general, any environmental characteristic that promotes
corrosion should be considered to be a risk contributor here.
This must include external and internal contributors.
Steel type  A high carbon content (≥0.28%) increases the
likelihood of stress corrosion cracking. Low-ductility materials
with low fracture toughness are more susceptible. Sometimes
the rate of loading determines the fracture toughness: a material may be able to withstand a slow application of stress, but
not a rapid application (see the design index discussion in
Chapter 5). This further complicates the use of material type as
a contributing factor.
A schedule can be developed that employs these contributing
factors in an assessment of the potential for SCC. Low stress in
a benign environment is the best condition, whereas high stress
in a corrosive environment is the most dangerous condition.
Stress level can be expressed as a percentage of maximum
allowable operating pressure (MAOP) or specified minimum
yield strength (SMYS) of the pipe, i.e., the highest normal operating pressure divided by MAOP or SMYS.
A history of stress corrosion cracking should be seen as the
strongest evidence of this risk and should accordingly score the
section at 0 points.
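A schedule like this can be expressed as a simple scoring function. The sketch below is hypothetical: the linear scaling with stress level (as a percentage of SMYS), the 50% penalty for a corrosive environment, and the zero score for an SCC history follow the logic of the text, but the specific numbers are assumptions.

```python
# Hypothetical SCC susceptibility scoring (higher points = lower risk).

def scc_score(max_points, pct_smys, env_corrosive, scc_history):
    """pct_smys: highest normal operating stress as a % of SMYS."""
    if scc_history:
        return 0.0  # a history of SCC is the strongest evidence: 0 points
    score = max_points * (1.0 - min(pct_smys, 100) / 100.0)
    if env_corrosive:
        score *= 0.5  # assumed penalty for a supporting environment
    return round(score, 1)

print(scc_score(10, 72, False, False))  # benign environment → 2.8
print(scc_score(10, 72, True, True))    # SCC history → 0.0
```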
Scoring the corrosion potential 4/79

C2. Cathodic protection (weighting: 25% of
corrosion threat)
The branch of the risk assessment leading to the variable
cathodic protection is as follows:

Corrosion Index
  Subsurface environment (15 pts)
  Cathodic protection (25 pts)
  Interference potential (10 pts)
Cathodic protection is the application of electric currents to a
metal in order to offset the natural electromotive force of corrosion. Chemical reactions occur at the anode and the cathode as
corrosion occurs and ions are formed. Some form of CP system
is normally used to protect a buried or submerged steel pipeline
as one part of the common two-part defense against corrosion:
coating and CP. The exceptions to CP use might be instances
where temporary lines are installed in fairly noncorrosive soil
and where conditions do not warrant cathodic protection.
Nonmetal lines may not require corrosion protection.
In this evaluation, the effectiveness of the CP system is
assessed in general and in terms of possible interferences. The
effectiveness is judged by the existence of a system that meets
the following general criteria:
Enough electromotive force is provided to effectively negate
any corrosion potential.
Enough evidence is gathered, at appropriate times, to ensure
that the system is working properly.
In assessing interference potential, points are awarded based
on the potential for any of three sources of interference and
measures taken to mitigate such interference.
CP system effectiveness
To ensure that the CP system can be effective, the evaluator
should seek records of the initial cathodic protection design.
Are the design parameters appropriate? What was the projected
life span of the system? Is the system functioning according to
design?
The evaluator should then inspect documentation of the most
recent checks on the system. Anode beds can become depleted,
conditions can change, and equipment can malfunction. Will the
operator become aware of serious problems in a timely manner? Although cathodic protection problems can be caught during normal test lead readings, and certainly during close interval
surveys, problems such as malfunctioning rectifiers (or worse,
rectifiers whose electrical connections have been reversed!)
should ideally be found even more quickly.
Effectiveness criteria  The presence of adequate protective
currents is normally determined by measurement of the voltage
(potential) difference between the pipe metal and the electrolyte. By some common practices and regulatory agency
requirements, a pipe-to-soil potential of at least -0.85 volts
(-850 millivolts), as measured by a copper-copper sulfate reference electrode, is the general criterion indicating adequate
protection from corrosion. Another common criterion is a minimum negative polarization voltage shift of 100 millivolts. This
is the amount of shift in potential between the polarized
pipeline (after current has been applied for some time) and the
buried pipeline without a protective current applied, i.e., the native state. Many corrosion control experts believe that the
100-mV shift criterion is the most conclusive measure of CP
effectiveness. Unfortunately, the 100-mV shift is also often the most
costly measurement to obtain, requiring a polarization survey,
whereby the pipeline is depolarized over hours or days and
comparative pipe-to-soil measurements are made. The 0.85-volt
criterion is normally adequate because it encompasses the
100-mV shift in almost every case, since native potentials are
normally less than 700 mV.
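The two criteria just described can be checked mechanically. The sketch below encodes the -850 mV (copper-copper sulfate) criterion and the 100-mV polarization shift criterion; it is a simplification that ignores the IR drop considerations discussed later in this section.

```python
# Check the two common CP adequacy criteria described in the text.
# Potentials are in millivolts vs. a Cu/CuSO4 reference electrode;
# more negative = more cathodic. IR drop effects are ignored here.

def cp_adequate(pipe_to_soil_mv, native_mv=None):
    if pipe_to_soil_mv <= -850:               # -850 mV criterion
        return True
    if native_mv is not None:
        shift = native_mv - pipe_to_soil_mv   # negative-going polarization shift
        if shift >= 100:                      # 100-mV shift criterion
            return True
    return False

print(cp_adequate(-900))        # True: meets the -850 mV criterion
print(cp_adequate(-800, -650))  # True: 150 mV polarization shift
print(cp_adequate(-700, -650))  # False: neither criterion met
```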
A criterion for excessive CP currents is also often appropriate. Excessive currents might cause hydrogen evolution that
can cause coating disbondment.
The actual practice of ensuring adequate levels of cathodic
protection is often more complex than the simple application of
criteria. Readings must be carefully interpreted in light of the
measurement system used. Too much current may damage the
coating. Higher levels of protection may be required when there
is evidence of bacteria-promoted corrosion. A host of other factors must often similarly be considered by the corrosion engineer in determining an adequate level of protection.
In the interpretation of all pipe-to-soil measurements, attention must be paid to the resistances that are part of the pipe-to-soil reading. The reading that is sought, but difficult to obtain,
is the electric potential difference between the outside surface
of the pipe and a point in the adjacent soil a short distance away.
In actual practice, a reading is taken between the pipe surface
(via the test lead) and a point at the ground surface, usually several feet above the pipe. The soil component of the circuit is a
nonmetallic current path. Consequently, this model is not
directly analogous to a simple electrical circuit. The measured
circuit is completed at the ground surface by contacting the soil
with a reference electrode (a half cell, usually a copper electrode
in a copper sulfate solution). Therefore, the normal pipe-to-soil
reading measures not only the piece of information sought, but
also all resistances in the electric circuit, including wires, pipe
steel, instruments, connectors, and, the largest component, the
several feet of soil between the buried pipe wall and the ground
surface. The knowledgeable corrosion engineer will take readings in such a way as to separate the extraneous information
from the needed data. The industry refers to this technique as
compensating for the IR drop.
There is some controversy in the industry as to exactly how
the readings should be interpreted to allow for the IR drop. An
instant-off pipe-to-soil measurement, where the reading is
taken immediately as the current source is interrupted, is often
taken as a reading that is relatively IR free. Therefore, some
operators use a more conservative adequacy criterion of "at
least 850 mV interrupted (or instant-off) pipe-to-soil potential"
instead of "at least 850 mV with the current applied."
In some cases the controversy is more theoretical because
government regulations mandate certain techniques. The evaluator should be satisfied that sufficient expertise exists in the
interpretation of readings to give valid answers.
One aspect of the adequacy of protection will be
the maintenance of the associated cathodic protection equipment. For impressed current CP systems, equipment such as
rectifiers and bonds must be maintained. Inspections of these
pieces of equipment are usually performed at shorter intervals
than the overall check of the potential levels. Because a rectifier
provides the driving force for the cathodic protection systems,
the operator must not allow a rectifier to be out of service for
any length of time. Monthly or at least bimonthly rectifier
inspections are often the norm.
Use of a risk assessment adjustment factor that could be
called something like a rectifier interruption factor is one way
to account for the effects of inconsistent application of CP. An
interruption or outage can be defined as some deviation from
normal rectifier output (probably in amperes) once sufficient
data have been accumulated to establish a baseline or normal
output for a rectifier. The tracking of kilowatt-hours may also
be useful in determining outage periods. The hours of outage
per year can be tracked and accumulated to assign risk assessment penalties for both high outage hours in any year and the
accumulation of outage hours over several years. Each indicates periods during which the pipeline might not be adequately
protected from corrosion.
A potential difficulty in application of such rectifier interruption factors is that each rectifier must be linked to the specific pipeline lengths that are influenced by that rectifier. The
adjustment factor is derived from each rectifier, but the penalty
applies to the actual portions of the pipeline that suffered the
inconsistent application of CP. In a complex system of rectifiers and pipelines, it is often difficult to ascertain which
rectifiers are influencing which portions of pipeline.
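A rectifier interruption factor of this kind might be sketched as follows; the outage-hour thresholds and penalty magnitudes below are invented purely for illustration.

```python
# Illustrative "rectifier interruption factor": 1.0 means no penalty.
# Thresholds and penalty sizes are assumptions for the sketch; a real
# scheme would calibrate them against baseline rectifier output data.

def interruption_factor(outage_hours_by_year,
                        annual_limit=240, cumulative_limit=720):
    factor = 1.0
    for hours in outage_hours_by_year:
        if hours > annual_limit:
            factor -= 0.1          # penalty for high outage hours in a year
    if sum(outage_hours_by_year) > cumulative_limit:
        factor -= 0.2              # penalty for cumulative outage hours
    return max(factor, 0.0)

print(interruption_factor([50, 300, 120]))  # one bad year → 0.9
```

In practice, the factor for each rectifier would then be applied only to the pipeline portions that rectifier influences, which is the linkage difficulty the text describes.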
Tracking the equipment performance might also provide the
ability to better quantify the benefits of remote monitoring
capabilities. If a system is installed to monitor and alarm (in a
control center as part of the SCADA system, perhaps) rectifier
malfunctions and perhaps even pipe-to-soil readings at test
leads, then outage times and inadequate protection times can be
minimized. Depending on the corrosion rates and economic
considerations such as labor costs and availability, adding such
monitoring capabilities might be justified.
Test leads  Often, the primary method for monitoring the
effectiveness of a cathodic protection system is through the use
of test leads: fixed survey points for taking pipe-to-soil voltage
readings. A test lead is normally a wire attached (usually
welded or soldered) to the buried pipeline and extended
above the ground. A test lead allows a worker to attach a voltmeter with a reference electrode and measure the pipe-to-soil
potential.
Placement of test leads at locations where interference is
possible is especially important. The most common points are
metal pipe casings and foreign pipeline crossings. At these
sites, careful attention should be paid to the direction of current
flow to ensure that the pipeline is not anodic to the other metal.
Where pipelines cross, test leads on both lines can show if the
cathodic protection systems are competing.
Several survey types are commonly used to verify
that effectiveness criteria are being met. These include variations in how pipe-to-soil readings are taken and where they are
taken. Examples of the former include on readings, instant-off
readings, voltage gradient, and polarization measurements as
previously described. Variations in where readings are taken
include readings at test lead stations only versus close interval
surveys where readings are taken at short intervals such as
every 3 to 15 feet.
An annual test lead survey is the cornerstone of many operators' CP verification programs. A pipe-to-soil measurement
taken at a test lead indicates the degree of
cathodic protection on the pipe because it indicates the tendency of current flow, both in terms of magnitude and direction
(to the pipe or from the pipe) (see Figure 4.6). Uncertainty
increases with increasing distance from the test lead because
the test lead reading represents the pipe-to-soil potential in only
a localized area. Because galvanic corrosion can be a localized
phenomenon, the test leads provide only limited information
regarding CP levels distant from the test leads. A test lead reading is therefore an indicator of cathodic protection only in the
immediate area around the lead: a lateral distance along
the pipe that is roughly equal to the depth of cover, according to
one rule of thumb. Closer test lead spacings, therefore, yield
more information and less chance of large areas of active corrosion going undetected. Because corrosion is a time-dependent
process, the number of times the test leads are monitored is also
important.
Using these concepts, a coarse point schedule can be developed based on general criteria such as:
All buried metal in the vicinity of the pipeline is monitored
directly by test leads, and test lead spacing is no greater than
1 mile throughout this section: Best
Test leads are spaced at distances of 1 to 2 miles apart (maximum) and all foreign pipeline crossings are monitored via
test leads; not all casings are monitored; there may be other
buried metal that is not monitored: Fair
Test lead spacing is sometimes more than 2 miles; not all potential interference sources are monitored: Poor
A more robust assessment can use actual distances from the
nearest test lead to characterize each point along the pipeline.
This would penalize (show higher risks), on a graduated scale,
those portions of the pipeline that are farther away from a test
lead or other opportunity for a pipe-to-soil potential reading.
The frequency of readings at test leads is rated as follows:
pipe-to-soil readings are taken, with the IR drop understood and
compensated, at intervals of

<6 months
6 months to 1 year

Notes: As previously explained, lack of proper IR drop compensation may negate the effectiveness of readings. For our
purposes here, a test lead can be any place on the pipeline where
an accurate pipe-to-soil potential reading can be taken. This
may include most aboveground facilities, depending on the
type of coating present.
Readings taken at longer intervals, such as greater than one
year, do have some value, but a year's worth of corrosion might
have proceeded undetected between readings.
Close interval surveys  A powerful tool in the corrosion
engineer's tool bag is a variation on test lead monitoring called
close interval surveying (CIS) or close spaced surveying. In this
technique, pipe-to-soil readings are taken (ideally with IR compensation employed) every 2 to 15 feet along the entire
length of the pipeline. In this way, almost all localized inadequate CP can be detected. It also normally yields some coating
effectiveness information.
Any aboveground pipeline attachment, including valves, test
leads, and casing vents, can be used to connect to one side of a
voltmeter. The other side of the voltmeter is connected by a wire
to the reference half-cell that is used to make electrical connection at the ground surface as the surveyor walks along the
pipeline. The voltmeter and data-logging device are therefore
in the circuit between the two electrodes. Results are usually
interpreted from a chart or database of the measurements that
shows peaks and valleys as the current flow changes magnitude
or direction (Figure 4.7).
Several types of CIS are in common use. These include
DCVG (direct current voltage gradient) and various types of
interrupted surveys with various distances between readings.
AC readings can also be taken in conjunction with the DC
readings.
Ideally, such a profile of the pipe-to-soil potential readings
will indicate areas of interference with other pipelines, casings,
etc.; areas of inadequate cathodic protection; and even areas of
bad coating. When needed, excavations are performed to verify
the survey readings. A CIS is repeated periodically to identify
changes in CP along the pipeline route.
The CIS technique is quite robust in monitoring the condition of buried steel pipelines and hence can play a significant
role in risk management. It is also a proactive technique that
can be used to detect potential problems before appreciable
damage is done to the pipeline. The most credit toward risk
reduction can be given for a thorough CIS recently performed
over the entire pipeline section by trained personnel and with
careful interpretations of all readings made by a knowledgeable
corrosion engineer. An accompanying assumption (to be verified by the evaluator) is that corrective actions based on survey
results have been taken or are planned (in a timely fashion).
The survey's role in risk reduction can be quantified at a
coarse level by simply assessing the time since the last survey.
If survey results are thought to be out of date and provide no
useful risk information after, say, 5 years, then the point assignment equation could be:

Points = (maximum points) x [(5 - survey age, years) / 5]
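That age discount can be written as a small function. This is a sketch under the stated assumption of a 5-year cutoff; the maximum-point value of 15 in the example is illustrative.

```python
# Age-discounted CIS credit: full points for a brand-new survey,
# degrading linearly to zero at the assumed 5-year cutoff.

def cis_points(max_points, survey_age_years, cutoff_years=5.0):
    age = min(survey_age_years, cutoff_years)
    return max_points * (cutoff_years - age) / cutoff_years

print(cis_points(15, 2))  # a 2-year-old survey keeps 60% → 9.0
print(cis_points(15, 6))  # older than the cutoff → 0.0
```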
Using CIS in a more detailed risk assessment model will
involve an assessment of the type of survey itself, as discussed
in the next paragraphs.
Scoring of CP effectiveness
The assessment of CP effectiveness should include evaluations
of how much information is available from various survey
types and frequencies. In this regard some surveys could be
judged to be more valuable in terms of uncertainty-reducing
information produced. In the following sample scoring
scheme, the evaluator has weighted various survey techniques
based on their value in ensuring adequate CP effectiveness. The
survey scores are then adjusted by factors that consider the age
of the survey and the prospect of a CP system failure.
In this scheme, the close spaced polarization survey warrants
the highest point score: 55% of the maximum points for CP
effectiveness. It also encompasses the other survey types
because, in effect, it requires that "on" and "interrupted" readings also be captured. Therefore, this survey, done recently and
finding no areas of inadequate CP, leads to the full point value
for the risk variable of CP effectiveness: 100% of the maximum points.
Other surveys are of lesser value, with a simpler CIS "on"
survey being worth 30% and a CIS interrupted survey being
worth 20% (but more often 50%, since the interrupted survey
will normally include an "on" survey, so the points can be combined). The "test lead only" surveys warrant fewer points, given
their reduced ability to confirm adequate CP at locations not
close to a test lead.
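These survey-type weightings could be combined as sketched below. The 30% and 20% values, the 50% combination, and the treatment of a polarization survey as encompassing the other types (hence full credit) follow the text; the function itself is an illustrative assumption.

```python
# Combine recent survey types into a CP effectiveness point value.

SURVEY_WEIGHTS = {
    "cis_polarization": 1.00,  # encompasses "on" and "interrupted" surveys
    "cis_on": 0.30,
    "cis_interrupted": 0.20,
}

def cp_survey_points(max_points, surveys):
    """surveys: collection of recent survey types with adequate readings."""
    weight = sum(SURVEY_WEIGHTS[s] for s in set(surveys))
    return max_points * min(weight, 1.0)   # cap at the maximum points

print(cp_survey_points(15, ["cis_on", "cis_interrupted"]))  # 50% → 7.5
print(cp_survey_points(15, ["cis_polarization"]))           # full → 15.0
```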
Anytime a pipe-to-soil reading does not meet the minimum
criteria, CP effectiveness should be deemed inadequate and
scored accordingly (0 points, by the example scales). Table 4.5
shows an example of a scoring system for CP effectiveness. An
age-of-survey adjustment would be used in arriving at final
point values.

[Figure 4.7  Close interval pipe-to-soil potential survey. The profile of readings versus distance measured along the pipeline shows normal readings (adequately protected pipe), sudden dips (possible interference problems, undetected by test lead readings), and low readings (more current required to protect the pipe).]
According to the above scoring rules, the evaluator has three
options for scoring the annual test lead survey. As one option,
the evaluator can consider each test lead reading to be a pipe-to-soil reading representing about 5 ft along the pipeline on either side of the
reading location. This means that very short sections of pipe
would receive 20, 30, or 55% of the CP effectiveness points
(depending on what type of survey is used), where test lead
readings show adequate CP levels. All pipe sections in between
the reading locations would be penalized for having no pipe-to-soil voltage information at all: 0 points. For example, in an
annual on-reading survey (current applied), where all readings
show adequate CP, the risk assessment will show point values
of (Maximum CP Effectiveness Points) x (Annual on survey
weighting, option 1) or 15 points x 30% = 4.5 points for the 10
ft of pipe around the test lead location, and 0 points elsewhere.
This indicates that the evaluator has no evidence that CP levels
are adequate between test lead locations.
In another option, the evaluator feels that the test lead reading does yield useful information on CP levels between test
lead reading locations, even at distances thousands of feet away.
The weighting, however, must be far less than for a CIS, where
the reading locations are very closely spaced. So, only 1% of
the maximum possible points are awarded, but the points apply
to all locations between test lead locations. In this case, the
annual, on-reading survey where all readings show adequate
CP, will show point values of (Maximum CP Effectiveness
Points) x (Annual on survey weighting, option 2) or 15 points x
1% = 0.15 points everywhere.
In yet another option, the evaluator feels that the test lead
reading yields information about surrounding lengths of pipe in
proportion to their distance from the test lead location. In this
case, the annual, on-reading survey where all readings show
adequate CP, will show point values of (Maximum CP
Effectiveness Points) x (Annual on survey, option 3) x (test lead
adj) yielding point values of 15 points x 10% x 100% = 1.5
points for portions of the pipeline within 1 mile of the test lead
and 15 x 10% x 50% = 0.75 for portions of the pipeline 1.5
miles from a test lead.
This may appear to be a rather complicated scoring scheme,
but it does reflect the reality of the complex corrosion control
choices commonly encountered in pipeline operations. It is not
uncommon for the corrosion specialist to have results of various types of surveys of varying ages and be faced with the
challenge of assimilating all of this data into a format that can
support decision making. The previous scenarios omit
additional adjustments for age of survey and equipment malfunctions. Such adjustments should play a role in scoring
(even though they are not illustrated here) because they are
important considerations in evaluating actual CP effectiveness. The scoring scheme is patterned after the decision
process of the corrosion control engineer, but of course considers only some of the factors that may be important in any
specific situation.
C3. Interference potential (weighting: 10% of
corrosion threat)
Table 4.5  Sample of more detailed scoring for CP effectiveness

Information source (survey type); Weight (to be multiplied by maximum CP effectiveness points); Comments and directions for scoring:

CIS polarization: Polarization survey usually gets to 100% since other survey types are done as part of the polarization survey.
CIS on (current applied): 30%. CIS readings with current applied. If pipe-to-soil criteria are met and survey is recent, then points are 15 x 30% = 4.5 points.
CIS off (current is interrupted): 20%. Establishes static line the first time; can reuse static line with subsequent CIS interrupted surveys; CIS interrupted also gains credit for CIS on survey, so 30 + 20 = 50%.
Annual on or interrupted (at test lead locations only): 30% or 20%. Use survey type weighting (20%, 30%, or combination = 50%) and apply to 5 ft either side of a test lead location; apply to half the distance to the next test lead; multiply also by the test lead adjustment factor.
Annual polarization (at test lead locations only): Same as above. Test is done at test lead stations by interrupting the rectifier and using a static polarization survey measure for comparison.
Test lead spacing: 100% when all parts of the pipeline segment are within 1 mile of a test lead; 0% when any part of the segment is greater than 2 miles. If any part of the segment is > 1 mile from the test lead, then degrade to 0 points as distance reaches 2 miles.
Rectifier out of service: Penalties for equipment outages in any year plus cumulative outages over several years. Penalties removed after ILI or visual confirmation that no damage occurred.
The branch of the risk assessment leading to the variable
interference potential is as follows:

Interference potential (10 pts)
  AC related
  DC related
    Telluric currents
    DC rail
    Foreign lines
  Shielding
Corrosion is an electrochemical process, and corrosion prevention methods are designed to interrupt that process, often
with electrical methods like cathodic protection. However, the
prevention methods themselves are susceptible to defeat by
other electrical effects. The common term for these effects is
interference. Three types of interference are evaluated: AC
related, DC related, and shielding effects.
AC-related interference (weighting: 20% of interference potential) Pipelines near AC power transmission facilities are exposed to a unique threat. Through either a ground fault or an induction process, the pipeline may become electrically charged. Not only is this charge potentially dangerous to people coming into contact with the pipeline, it is also potentially dangerous to the pipeline itself.
The degree of threat that AC presents to pipeline integrity has been debated. Reference [38] presents case histories and an analysis of the phenomena. This study concludes that AC can cause corrosion even on pipelines cathodically protected to industry standards. "The corrosion rate appears to be directly related to the AC density such that corrosion can be expected at AC current densities of 100 A/m2 and may occur at AC current densities greater than 20 A/m2" [38]. Given specific measurable criteria for the threat, the evaluator might be able to develop a threat assessment system around directly measured AC current levels. Otherwise, indirect evidence can lead to an assessment system.
A basic understanding of the AC issue will serve the evaluator in assessing the threat potential. Electric current seeks the path of least resistance. A buried steel conduit such as a coated pipeline may be an ideal path for current flow for some distance. Almost always, though, the current will leave the pipeline for another, more attractive path, especially where the power line and the pipeline diverge after some distance of paralleling. The locations where the current enters or leaves the pipe may suffer severe metal loss as the electrical charge arcs to or from the line. At a minimum, the pipeline coating may be damaged by the AC interference effects.
The ground fault scenario of charging the pipeline includes
the phenomena of conduction, resistive coupling, and electrolytic coupling. It can occur as AC power travels through the
ground from a fallen transmission line, an accidental electrical
connection onto a tower leg, through a lightning strike on the
power system, or from an imbalance in a grounded power system. These are often the more acute cases of AC interference,
but they are also often the more easily detectable cases. The
sometimes high potentials resulting from ground faults expose
the pipe coating to high stress levels. This occurs as the soil surrounding the pipeline becomes charged, setting up a high voltage differential across the coating. Disbondment or arcing may occur. If the potentials are great enough, the arcing may damage the pipe steel itself.
The induction scenario occurs as the pipeline is affected by
either the electrical or magnetic field created by the AC power
transmission. This sets up a current flow or a potential gradient in
the pipeline (Figure 4.8). These cases of capacitive or inductive coupling depend on such factors as the geometric relation of the pipeline to the power transmission line, the magnitude of the power current flow, the frequency of the power system, the coating resistivity, the soil resistivity, and the longitudinal resistivity of the steel [77]. Induced potentials become more severe as soil resistivity and/or coating resistivity increases.
Formulas exist to estimate the potential effects of AC interference under normal and fault conditions. To perform these calculations, some knowledge of the load characteristics of the power system, including steady-state line currents and phase relationships, is required. Estimations and measurements will be needed to generate soil, coating, and steel resistivity values, as well as the distances and configurations between the pipeline and the power transmission facilities. The key factors in assessing the normal effects for most situations will most likely be the characteristics of the AC power and its distance from and configuration with the pipeline. Fault conditions can, of course, encompass a multitude of possibilities.
Induced AC voltage can also be measured by methods similar to those used to measure DC pipe-to-soil voltages for cathodic protection checks. Therefore, an AC survey can be part of a close interval survey (see the earlier section on CIS), thereby generating a profile of AC voltages.
Methods used to minimize AC interference effects, both to protect the pipeline and personnel coming into contact with the line, include [53, 62]:

Electrical shields
Grounding mats or gradient control electrodes
Independent structure grounds
Bonding to existing structures
Supplemental grounding of the pipeline via distributed anodes
Proper use of connectors and conductors
Insulating joints
Electrolytic grounding cells
Polarization cells
Lightning arresters.
Monitoring should be an integral part of the AC mitigation program. Because so many variables are involved in performing accurate calculations and this is a relatively rare threat to most pipelines, a simplified schedule can be set up for this rather complex issue. In terms of risk exposure, one of three possible scenarios can exist and be scored from a risk perspective:

No AC power is within 1000 ft of the pipeline: 3 pts
AC power is nearby, but preventive measures are being used to protect the pipeline: 1-2 pts
AC power is nearby, but no preventive actions are being taken: 0 pts
4/84 Corrosion Index
Figure 4.8 AC power currents on pipeline: a magnetic or electric field sets up current flow in the pipeline, potentially causing coating or metal damage.
Also fitting into the second scenario might be cases such as:

Very low-power AC only
High-power AC present but at least 3000 ft away
AC nearby, but regular surveying confirms no induction

Note, however, that significant inductive interference effects can be seen as far away as 1.2 miles in high-resistivity soils [50].
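The three-scenario schedule above amounts to a simple decision rule. A minimal sketch follows; the choice to award the full 2 points whenever preventive measures or confirming surveys exist (rather than 1) is an assumption the original scheme leaves to the evaluator, and the function name is invented for illustration.

```python
def score_ac_interference(ac_within_1000ft: bool,
                          preventive_measures: bool,
                          survey_confirms_no_induction: bool = False) -> int:
    """Score AC interference exposure on the 0-3 point schedule."""
    if not ac_within_1000ft:
        return 3          # no AC power within 1000 ft of the pipeline
    if preventive_measures or survey_confirms_no_induction:
        return 2          # AC nearby, but mitigated or shown benign (1-2 pts)
    return 0              # AC nearby, no preventive actions taken

print(score_ac_interference(ac_within_1000ft=False, preventive_measures=False))  # 3
print(score_ac_interference(True, True))   # 2
print(score_ac_interference(True, False))  # 0
```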
In some cases, a more thorough investigation of AC effects might be warranted. Before a site-specific analysis is done, a more robust risk-screening tool could be used to highlight the most critical of suspect locations. Breaking the AC interference potential into several items for more detailed scoring might involve the variables shown in Table 4.6.
Preventive measures can be designed for induction cases, for ground fault cases, or for both. As previously mentioned, grounding cells can be designed to safely handle the discharge of current from the pipeline. Close monitoring of the situation would be considered part of the preventive measures taken. The evaluator should be satisfied that the potential AC current problem is well understood and is being seriously addressed before credit is given for preventive measures.
Shielding (weighting: 10% of interference potential) Shielding is the blocking of protective currents. Casing pipe, especially where such pipe is coated, is a common example of the potential to create shielding effects. Certain soil or rock types, concrete coatings, and other buried structures (retaining walls, culverts, foundations, etc.) are also examples. Any structure that is very close to the pipe (perhaps <2 ft) should alert the evaluator to shielding potential.
Where the evaluator sees potential shielding situations, points assigned should show a lowered interference score (higher interference potential). When the operator is sensitive to this potential and takes special precautions, points can be awarded. When there is no potential for shielding, as has been verified by appropriate surveys, maximum points should be awarded.
DC-related interference (weighting: 70% of interference potential) The presence of other buried metal in the vicinity of a buried metal pipeline is a potential concern for corrosion prevention. Other buried metal can short circuit or otherwise interfere with the cathodic protection system of the pipeline. In the absence of a cathodic protection system, the foreign metal can establish a galvanic corrosion cell with the pipeline. This may cause or aggravate corrosion on the pipeline. This can be quite severe: 1 amp of DC current discharging from buried steel can dissolve more than 20 pounds of steel per year.
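The 20-pounds-per-year figure can be checked with Faraday's law, treating the buried steel as iron oxidizing as Fe -> Fe2+ + 2e-. This is a back-of-envelope verification, not a calculation from the original text:

```python
FARADAY = 96485.0                  # coulombs per mole of electrons
M_FE = 55.85                       # g/mol, molar mass of iron
SECONDS_PER_YEAR = 365 * 24 * 3600
GRAMS_PER_POUND = 453.592

def steel_loss_lb_per_year(current_amps: float) -> float:
    """Mass of iron dissolved per year by a steady DC discharge current."""
    charge = current_amps * SECONDS_PER_YEAR       # total charge, coulombs
    grams = charge * M_FE / (2 * FARADAY)          # 2 electrons per Fe atom
    return grams / GRAMS_PER_POUND

print(round(steel_loss_lb_per_year(1.0), 1))  # 20.1
```

The result, roughly 20 lb of steel per ampere-year, agrees with the rule of thumb quoted above.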
The most critical interference situations occur when electrical contact occurs between the pipeline and the other metal.
Table 4.6 Sample variables that can be evaluated to assess AC interference potential (notes are sample scoring protocols)

AC survey: Maximum credit given when AC survey is conducted at least annually at distance no greater than 1 mile; no credit when greater than 1 mile or more than 2 years between surveys.
AC present: If AC is detected on the pipeline, penalties assigned where worst case is >15 volts (consideration of pipeline damage only, not personnel safety issues).
Configuration: Assesses the more problematic configurations between the pipeline and the AC power line; parallel and then diverging represents the highest potential for problems.
Power strength: Higher strength means higher chance of problems.
Distance: Shorter distance means greater chance of problems; 0.5 mile or greater from current source/pipeline is best score (unless low-resistance path exists, such as a waterway).
Soil resistivity: Lower resistivity means higher chance of problems.
Mitigation adjustment: Points are "recovered" based on type of mitigation present.
This is especially critical when the other metal has its own impressed current system. Electric railroads are a good example of systems that can cause special problems for pipelines whether or not physical contact occurs. The danger arises when the other system is competing with the pipeline for electrons. If the other system has a stronger electronegativity, the pipeline will become an anode and, depending on the difference in electron affinity, the pipeline can experience accelerated corrosion. As noted elsewhere, coatings may actually worsen the situation if all anodic metal dissolves from pinhole-sized areas, causing narrow and deep corrosion pits.
Common mitigation measures for interference problems include interference bonds, isolators, and test leads. Interference bonds are direct electrical connections that allow the controlled flow of current from one system to another. By controlling this flow, corrosion effects arising from the foreign systems can be mitigated. Isolators, when properly installed, can similarly control the flow of current. Finally, test leads are used to monitor for problems. By comparing the pipe-to-soil potential readings of the two systems, signs of interference can be found. As with any monitoring system, checks must be done regularly by trained personnel, and corrective actions must be taken when problems are identified.
A reasonable question when assessing interference potential from other buried metal is "How close is too close?" The proximity of the foreign metal obviously is a key factor in the risk potential, but the distance is not strictly measured in feet or meters. Longer distances can be dangerous in low-resistivity soil or in cases where the current levels are relatively high. If the foreign system also has an impressed current CP system, the strength and location of the source are also pertinent. A reasonable rule of thumb might be to consider all buried metal within a certain distance from the pipeline, perhaps 500 ft, if no other calculations or experience-based distances are available. This rule should be tailored to the specific situation, but then held constant for all pipelines evaluated.
Points can be assessed based on how many occurrences of buried metal exist along a section. Again, the greater the area of opportunity, the greater the risk. For pipelines in corridors with foreign pipelines, higher threat levels of interference may exist (although it is not uncommon for pipeline owners in shared corridors to cooperate and thereby reduce interference potentials).
Note that many modern approaches to pipeline segmentation for risk assessment will create smaller, unique sections where counts of occurrences would not be appropriate. For instance, each occurrence of a cased road crossing would be an independent pipeline section for purposes of risk scoring. Such sections would carry the risk of interference (including shielding effects), whereas neighboring sections might not.
Specific subvariables in assessing DC-related interference include those shown in Table 4.7.
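One way to combine the Table 4.7 subvariables into the DC-related interference score (70% of the 10-point interference potential, or 7 points) is a weighted sum with a mitigation adjustment. Only the configuration (25%) and soil resistivity (5%) weights appear in the sample table; the other weights below, and the function and key names, are placeholder assumptions.

```python
# Placeholder weights: only configuration (25%) and soil resistivity (5%)
# come from Table 4.7; the rest are assumed and should be set by the evaluator.
SUBVARIABLE_WEIGHTS = {
    "dc_present":       0.30,
    "configuration":    0.25,   # from Table 4.7
    "source_strength":  0.25,
    "distance":         0.15,
    "soil_resistivity": 0.05,   # from Table 4.7
}

def dc_interference_score(ratings: dict, mitigation_credit: float = 0.0,
                          max_points: float = 7.0) -> float:
    """ratings: each subvariable rated 0 (worst case) to 1 (best case).
    mitigation_credit: fraction of the lost points 'recovered' by
    mitigation, per the table's adjustment row."""
    base = sum(SUBVARIABLE_WEIGHTS[k] * ratings[k] for k in SUBVARIABLE_WEIGHTS)
    recovered = (1.0 - base) * mitigation_credit
    return max_points * min(1.0, base + recovered)

middling = {k: 0.5 for k in SUBVARIABLE_WEIGHTS}
print(round(dc_interference_score(middling, mitigation_credit=0.2), 2))  # 4.2
```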
Table 4.7 Sample variables for assessing DC-related interference

DC present: Investigate for the presence of potentially interfering currents.
Configuration (25%): Parallel and then divergent might be worst-case configurations.
Source strength: This measures CP source strength, if any. Intermittent voltages (for example, electric trains) are often more problematic.
Distance: Use the shorter of distance to the current source or to the structure (foreign pipeline, rail, etc.).
Soil resistivity (5%): Lower soil resistivities might lead to longer distances of interest.
Adjustment: Improve scores where mitigations are present.

C3. Coating (weighting: 25% of corrosion threat)

Corrosion index: cathodic protection (15 pts); coating (25 pts).

Pipeline coatings are one part of the two-part defense against subsurface corrosion of metallic pipe. Commonly used coatings are often a composite of two or more layers of materials. Paints, plastics, rubbers, and other hydrocarbon-based products
such as asphalts and tars are common coating materials. A coating must be able to withstand a certain amount of mechanical stress from initial construction; from subsequent soil, rock, and root movements; and from temperature changes as the pipe moves against the adjacent soil. The coating will be continuously exposed to ground moisture and any damaging substances contained in the soil. Additionally, the coating must adequately serve its main purpose: isolating the steel from the electrolyte. To do so, it must be resistant to the passage of electricity and water. Because pipelines are designed for long life spans, the coating must perform all of these functions without losing its properties over time; that is, it must resist aging.
Typical coating systems include
Cold-applied asphalt mastics
Layered extruded polyethylene
Fusion-bonded epoxy
Coal tar enamel and wrap
Tapes (hot or cold applied).
Any coating system can fail. Factors contributing to failure include:

Mechanical damage from soil movements, rocks, roots, and construction activities
Disbondment caused by hydrogen generation from excessive cathodic protection currents
Incorrect coating type or application for the pipeline operating condition and environment
Water penetration.

Coatings can fail in numerous ways. Some failures result in large defects that are relatively easy to detect and repair. The presence of many small defects, however, indicates active coating degradation mechanisms that may result in massive coating failure unless the mechanisms are addressed [50]. Correction costs here may be considerably more expensive.

One of the main reasons for using cathodic protection systems is that no coating system is defect free. Cathodic protection is designed to compensate for coating defects and deterioration. As such, one way to measure the condition of the coating is to measure how much cathodic protection is needed. Cathodic protection requirements are partially a function of soil conditions and the amount of exposed steel on the pipeline. Coatings with defects allow more steel to be exposed and hence require more cathodic protection. Cathodic protection is generally measured in terms of current consumption. A certain amount of voltage is thought to negate the corrosion effects, so the amount of current generated while maintaining this required voltage is a gauge of cathodic protection. A corrosion engineer can make some estimates of coating condition from these numbers.

One potentially bad situation that is difficult to detect is an area of disbonded coating, where the coating is separated from the steel surface. While the coating still provides a shield of sorts, moisture can often get between the coating and the steel. If this moisture is occasionally replaced, active local corrosion can proceed while showing little change in current requirements. Excessively high CP currents may cause hydrogen generation that can lead to coating disbondment.

Another common type of coating defect is the presence of pinhole-sized defects. These can be especially dangerous not only because they are difficult to detect, but also because they can promote narrow and deep corrosion pits. Because galvanic corrosion is an electrochemical reaction, a given driving force (voltage difference) will cause a certain rate of metal ionization. If the exposed area of metal is large, the corrosion will be wide and shallow, whereas a small exposure will lose the same volume of metal, causing deeper corrosion. Deeper corrosion is potentially more weakening to the pipe wall. A small geometric discontinuity may also cause high stress concentrations (see Chapter 5).

To assess the present coating condition, several things should be considered, including the original installation process. An evaluation similar to the one used to assess the coating for atmospheric corrosion protection may be appropriate.

Again, no coating is defect free; therefore, the corrosion potential will never be totally removed, merely reduced. How effectively the coating is able to reduce corrosion potential depends on four general factors:

Quality of the coating
Quality of the coating application
Quality of the inspection program
Quality of the defect correction program.
Each of these components can be rated on a 4-point scale
equating to the qualitative judgments of good, fair, poor, or
absent. The weighting of each component should probably be
equivalent unless the evaluator can say that one component is of
more importance than another. A quality coating is of little
value if the application is poor; a good inspection program is
incomplete if the defect correction program is poor. Perhaps an
argument can be made that high scores in coating and application place less importance on inspection and defect correction.
This would obviously be a sliding scale and is probably an
unnecessary complication.
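The four-component rating described above can be encoded directly. The mapping of good/fair/poor/absent to 3/2/1/0 on the 4-point scale, and the scaling to the coating variable's 25-point maximum, are illustrative assumptions consistent with the equal weighting suggested in the text.

```python
QUALITY_POINTS = {"good": 3, "fair": 2, "poor": 1, "absent": 0}
COMPONENTS = ("coating", "application", "inspection", "defect_correction")

def coating_score(ratings: dict, max_points: float = 25.0) -> float:
    """Equally weighted four-component coating rating, scaled to max_points."""
    total = sum(QUALITY_POINTS[ratings[c]] for c in COMPONENTS)
    return max_points * total / (3 * len(COMPONENTS))

# Two 'good' and two 'fair' ratings, as in Example 4.6 later in this section
print(round(coating_score({"coating": "good", "application": "good",
                           "inspection": "fair", "defect_correction": "fair"}), 1))  # 20.8
```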
Coating fitness (weighting: 50% of coating)

Coating Evaluate the coating in terms of its appropriateness in its present application. Where possible, use data from coating stress tests to rate the quality. Hardness, elasticity, adhesion to steel, and temperature sensitivity are common properties used to determine the appropriateness. When these data are not available, draw from company experience.
The evaluation should assess the coating's resistance to all anticipated stresses, including a degree of abuse at initial installation, soil movements, chemical and moisture attack, temperature differentials, and gravity.

Good: A high-quality coating designed for its present environment
Fair: An adequate coating, but probably not specifically designed for its specific environment
Poor: A coating in place but not suitable for long-term service in its present environment
Absent: No coating present.
Note: Some of the more important coating properties include electrical resistance, adhesion, ease of application, flexibility, impact resistance, flow resistance (after curing), resistance to soil stresses, resistance to water (moisture uptake), and resistance to bacteria or other organism attack. In the case of submerged lines, marine life such as barnacles or borers must be considered.

Application Evaluate the most recent coating application process and judge its quality in terms of attention to pre-cleaning, coating thickness, the application environment (temperature, humidity, dust, etc.), and the curing or setting process.

Good: Detailed specifications are used; careful attention is paid to all aspects of the application; appropriate quality control systems are used.
Fair: Most likely a proper application is done, but without formal supervision or quality controls.
Poor: A careless, low-quality application is performed.
Absent: Application was incorrectly done, steps omitted, environment not controlled.
An alternate approach to scoring coating fitness is to begin with a score for the type of coating, reflecting the coating's perceived future performance. This might be based on historical performance information or laboratory tests. Then, adjustments are applied to this score for any conditions that might affect the coating's ability to perform. The magnitude of an adjustment should reflect its possible impact on coating performance. Examples of adjustments are shown in Table 4.8.
Coating condition (weighting: 50% of coating)

Inspection Evaluate the inspection program for its thoroughness and timeliness. Documentation will also be an integral part of the best possible inspection program. Inspection of underground coating can take several forms. Opportunities for visual inspection will occasionally present themselves as the pipe is exposed for various reasons. When this happens, the operator should take advantage of the situation to have trained personnel evaluate the coating condition and record the findings.
A second inspection method, less direct than visual inspection, impresses a radio or electric signal onto the pipe and measures this signal strength at points along the pipeline (Figure 4.9). The signal strength should decrease linearly in direct proportion to the distance from the signal source. Peaks and unexpected changes in the signal indicate areas of non-uniform coating, perhaps damaged coating. This technique is called a holiday detection survey. Based on the initial survey, test holes are dug for visual inspection of the coating in order to correlate actual coating condition with signal readings.
Another indirect method was mentioned in this section's introduction. A measure of the cathodic protection requirements, especially the change in these requirements over time, gives an indication of the coating condition (see Figure 4.7 earlier).
The methods discussed above and other indirect observation methods require a degree of skill on the part of the operator and the analyzer. Industry opinion is divided on the effectiveness of some of these techniques. The evaluator should satisfy himself that the operator understands the technique and can demonstrate some success in its use for coating inspection.

Good: A formal, thorough inspection is performed specifically for evidence of coating deterioration. Inspections are performed by trained individuals at appropriate intervals (as dictated by local corrosion potential). Full use is made of visual inspection opportunities, in addition to one or more indirect techniques.
Fair: Inspections are informal, but performed routinely by qualified individuals. Perhaps an indirect technique is used, but maybe not to its full potential.
Poor: Little inspection is done; reliance is on chance sighting of problem areas. Informal visual inspections occur when there is the opportunity.
Absent: No inspection is done.
Note: Typical coating faults include cracking, pinholes, impacts (sharp objects), compressive loadings (stacking of coated pipes), disbondment, softening or flowing, and general deterioration (ultraviolet degradation, for example).
Correction of defects Evaluate the program of defect correction in terms of thoroughness and timeliness.

Good: Reported coating defects are immediately documented and scheduled for timely repair. Repairs are carried out per application specifications and are done on schedule.
Fair: Coating defects are informally reported and are repaired at convenience.
Poor: Coating defects are not consistently reported or repaired.
Absent: Little or no attention is paid to coating defects.
The coating condition assessment can be made more data driven if accurate measurements of cathodic protection current requirements exist. These measurements are usually in the form of milliamperes per square foot of pipeline surface area.
Table 4.8 Adjustments to coating performance scores

Coating type: Adjustment that conservatively assumes ongoing coating deterioration.
Application: Adjustment that penalizes field-applied coating because application conditions and surface preparation are more difficult to control.
Damage potential / coating environment: Examines the potential harm to the coating from its immediate environment. Mechanical damage is the main consideration: rock impingement, soil stress, wave action, etc. Aseismic faulting and subsidence are areas of active soil movements that can damage coating.
Coating protection: Assesses the response to a potentially harmful environment; the forms of mitigation in place to minimize coating damage potential.

Note: Points are assigned based on experience with the life cycles of various coating types.
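The adjustment approach of Table 4.8 can be sketched as a base score modified by penalties and credits. All adjustment magnitudes below are illustrative assumptions; the table deliberately leaves them to the evaluator's experience with coating life cycles.

```python
def adjusted_coating_score(base_score: float,
                           years_since_inspection: float,
                           field_applied: bool,
                           damage_potential: float,
                           protection_credit: float) -> float:
    """base_score: score for the coating type's expected performance.
    damage_potential: 0 (benign environment) to 1 (harsh environment).
    protection_credit: 0 to 1, fraction of the damage threat mitigated."""
    score = base_score
    score -= 0.1 * years_since_inspection         # assumed ongoing deterioration
    if field_applied:
        score -= 1.0                              # harder-to-control application
    score -= 2.0 * damage_potential               # environmental damage threat
    score += 2.0 * damage_potential * protection_credit  # mitigation recovers points
    return max(0.0, score)

print(adjusted_coating_score(10.0, years_since_inspection=5, field_applied=True,
                             damage_potential=0.5, protection_credit=0.5))  # 8.0
```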
Figure 4.9 Example of coating survey results: signal strength falls along a normal slope with distance from the signal source (along the pipeline); an unexpected rise in signal strength marks a possible coating damage area.
A schedule such as that shown in Table 4.9 could be used. This type of scale would depend on soil corrosivity. Again, lowered electric current requirements mean less exposed metal and better electrical isolation from the electrolyte.
A special situation, such as evidence of high microorganism activity or unusually low pH that promotes steel oxidation, should be accounted for by reducing the point value (but not below 0 points). Not knowing the soil corrosion potential would conservatively warrant a score of 0 points.
A more detailed approach to scoring coating conditions would be to use inspection results directly. Adjustments are applied to account for the age of the inspection; it is conservatively assumed that coating deteriorates until inspection verifies actual condition. In the case of a visual inspection, a zone of influence might be appropriate. Even if only a short length of pipe is excavated and inspected, this provides some evidence of coating condition for some distance past the excavation if there are no known environmental changes. Zone-of-influence applications are discussed in Chapter 8.
Table 4.10 shows inspection-related risk variables from a scoring system that considers three types of inspection: visual, nondestructive testing (NDT), and destructive testing (DT). Each variable can be weighted and/or can be further divided into specific measurements.
Table 4.9 Quantitative measure of coating condition

CP current requirements: 0.0003 mA/sq ft; 0.003 mA/sq ft; 0.1 mA/sq ft (the coating condition rating improves as the current requirement falls).
Example 4.6: Scoring coating

A buried oil pipeline in dry, sandy soil is cathodically protected by sacrificial anodes attached to the line at a spacing of about 500 ft. A pipe-to-soil voltage measurement is taken twice each year over the whole section to ensure that cathodic protection is adequate. Records indicate that the line was initially coated 22 years ago with a plant-applied coal tar enamel material that was applied over the sand-blasted and primed pipe. An inspector supervised the coating process. The pipe-to-ground potential has not changed measurably since original installation. This section of line has not been exposed for 10 years.
The evaluator assesses the coating condition as follows:

Coating condition parameter (rating):
Coating (good)
Application (good)
Inspection (fair)
Defect correction (fair)
The coating type in this environment has a good track record. However, no confirmatory information is available. The evaluator feels that the semiannual pipe-to-soil voltage readings give a good indirect indication of coating condition. Full points would be awarded if this were confirmed by visual inspection (cracked or disbonded coating may not be found by the potential readings alone). Defect correction is an unknown at this point. Three points are awarded based on the thoroughness with which the operator runs other aspects of his operations; in other words, some benefit of the doubt is given here. Coating selection and application processes appear to be high quality, based on records and conversations with the operator, but
Table 4.10 Three types of coating inspection

Visual inspection
Percent failure area: Derived from pipe inspection reports; inspection reports could be generated for each diameter length (a 36-in. pipe needs a report for every 36 in. of exposed pipe); the inspector notes the beginning and end station of all anomalies; measures the percent failure area per square foot of surface area.
Disbonding failures: Treated separately from other failure types, since this failure mode is more problematic than others and difficult to detect.
Inspection age: Uses a rate-of-decay of inspection results; older inspections yield less useful risk information.

NDT inspection
Dry film thickness versus design: A measure of remaining coating.
In situ coating surveys: An inferential measure of coating coverage and integrity.
Holidays as detected by 100 volt/mil intensity: A direct measure of coating integrity available during excavation and visual inspection.
Inspection age: Uses a rate-of-decay of inspection results; older inspections yield less useful risk information.

DT inspection
Impact resistance: Data obtained via laboratory or on-site tests of coating samples obtained from field investigations.
Inspection age: Uses a rate-of-decay of inspection results; older inspections yield less useful risk information.
again, scores are conservatively assigned in the absence of
more evidence.
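The "inspection age" rows of Table 4.10 apply a rate-of-decay to inspection results, so that older inspections yield less useful risk information. One simple form is exponential decay; the five-year half-life below is an assumed parameter, not a value from the text, and the function name is invented for illustration.

```python
def aged_inspection_credit(inspection_score: float,
                           years_old: float,
                           half_life_years: float = 5.0) -> float:
    """Discount an inspection-based score as the inspection ages:
    full credit when new, half credit after one half-life, and so on."""
    return inspection_score * 0.5 ** (years_old / half_life_years)

print(aged_inspection_credit(10.0, 0))    # 10.0
print(aged_inspection_credit(10.0, 5))    # 5.0
print(aged_inspection_credit(10.0, 10))   # 2.5
```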
Several adjustments to previously assigned scores are appropriate. Some have already been discussed, such as adjustments for
age of surveys or equipment malfunction potential. Additional
adjustments will often be warranted when direct evidence is
included. The corrosion variables use mostly indirect evidence
to infer corrosion potential, as is consistent with the historical
practice of corrosion control in the industry. Because the scores
will ideally correlate to corrosion rate, any detection of corrosion damages or direct measurements of actual corrosion rate
can be used to calibrate the scores and/or tune the risk model.
Where a corrosion rate is actually measured, the overall corrosion score can be calibrated with this information. The reverse
is not always implied, however. Caution must be exercised in
assigning favorable scores based solely on the non-detection of
corrosion at certain times and at limited locations. It is important to note that the potential for corrosion can be high even
when no active corrosion is detected.
Previous damages

Results from an in-line inspection (ILI) or other inspections may detect previous corrosion damage. When there is actual corrosion damage, but risk assessment scores for corrosion potential do not indicate a high potential, then a conflict seemingly exists between the direct and the indirect evidence. Such conflicts are discussed in Chapter 2.
Sometimes we will not know exactly where the inconsistency lies until complete investigations have been performed. The conflict could reflect an overly optimistic assessment of the effectiveness of mitigation measures (coatings, CP, etc.) or it could reflect an underestimate of the harshness of the environment. Another possibility is that some of the information might be inappropriately used by the risk model. For example, detection of corrosion damage might not reflect active corrosion.
As a temporary measure to ensure that the corrosion scores always reflect the best available direct inspection information, limitations can be placed on corrosion scores in proportion to the direct inspection results. This will force the risk model to preferentially use recent direct evidence over previous assumptions, until the conflicts between the two are investigated. Techniques to assimilate ILI and other direct inspection information into risk scores are discussed in Chapter 5. If such direct inspection scores are created, they can be used as input to the corrosion scores. Basically, a 'ceiling' is created that uses the inspection information (adjusted for age and accuracy) to override scores derived from the more indirect evidence. This is illustrated in Table 4.11.
In this sample, the worst ILI scores, indicating the most
extensive corrosion damage, limit the risk scores for one or
more corrosion types, depending on whether the ILI indications are internal or external wall loss. For example, suppose
that, prior to the ILI, an evaluator had assessed the coating condition, CP effectiveness, etc., and had assigned the segment a
subsurface corrosion score of 55 out of 70 (higher points indicate more safety). If the ILI score, based on the recent inspection, indicates that some damage might have occurred
(suspicious indications), then the subsurface corrosion score
would be capped at 60% × 70 (maximum points possible) = 42, and the previously assigned 55 would be temporarily reduced to 42, pending an investigation. In other words, the previous assessment based on indirect evidence has been overridden by the results of the ILI. The segment would be reassessed after an investigation had determined the cause of the damage: how the mitigation measures may have failed and how the risk assessment may be incorrect.
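The capping arithmetic in this worked example can be sketched as follows. This is a minimal illustration only; the function name is hypothetical, and the 60% cap for ‘suspicious indications’ follows the numbers worked above.

```python
def capped_corrosion_score(assessed_score: float,
                           max_points: float,
                           ili_cap_fraction: float) -> float:
    """Limit an indirectly assessed corrosion score by a ceiling
    derived from recent direct (ILI) inspection results."""
    ceiling = ili_cap_fraction * max_points
    # The lower (more conservative) of the two values governs until
    # the conflict between direct and indirect evidence is resolved.
    return min(assessed_score, ceiling)

# Worked example from the text: a subsurface corrosion score of 55
# out of 70, capped at 60% of maximum after suspicious ILI
# indications, is temporarily reduced to 42.
score = capped_corrosion_score(55, 70, 0.60)
```

A score already below the ceiling would pass through unchanged, which matches the intent that the cap only overrides overly optimistic indirect assessments.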
An ILI score that indicates no damages puts no limitations on corrosion scores. As discussed in Chapter 5, if the direct inspection score is based upon unverified ILI results, it can eventually be improved through ‘pig digs’, that is, excavation, inspection, and verification that anomalies are indeed damages. The limitation on corrosion scores can also be reduced even if the direct inspection score does not improve through damage repair. This can happen if a root cause analysis of the detected
damages concludes that active corrosion is not present, despite
a poor inspection score. For example, the root cause analysis
might use previous ILI results to demonstrate that corrosion
damage is old and corrosion has been mitigated.
A critical aspect is the determination of whether the damages represent active corrosion or are past damages whose progress has been halted through increased corrosion prevention measures. Replacing anode beds, increasing current output from
rectifiers, eliminating interferences, and recoating are all
actions that could halt previously active corrosion.
This type of adjustment should be employed only temporarily. It will not give satisfactory long-term support for the risk model since it is, in effect, overriding risk information rather than finding and correcting discrepancies of evidence.
Table 4.11 Temporarily limiting corrosion scores on the basis of recent inspections

Interpretation of direct inspection score (each is assigned a % of maximum*):
Severe corrosion damages are identified.
Significant corrosion damages are identified.
Possibility of some damages has been identified.
Suspicious results suggest that damages might have occurred.
Direct evidence has verified that no corrosion has occurred.

*Use this value or the current corrosion score, whichever indicates the higher corrosion threat.
Design Index
Design Risk
A. Safety Factor
B. Fatigue
C. Surge Potential
D. Integrity
E. Land Movements
0-15 pts
0-100 pts
The probability of pipeline failure is assessed, in this model, via
an evaluation of four failure mechanisms. In three of those (third party, corrosion, and incorrect operations), the assessment focuses on the probability that the failure mechanism is active. The potential presence of the failure mechanism is assumed to directly correlate with failure potential, under the assumption that all failure mechanisms can eventually precipitate a failure, even in very strong pipelines. In the fourth (the design index), the assessment looks not only at the potential for an active failure mechanism, but also at the ability of the pipeline to withstand failure mechanisms. This resistance to failure (the safety factor and integrity verification variables) will play a role in absolute risk calculations when a time-to-fail consideration is required.
A significant element in the risk picture is the relationship
between how a pipeline was originally designed and how it is
presently being operated: its safety margin. Although this may seem straightforward, it is actually quite a complex relationship. All original designs are based on calculations that must, for practical reasons, incorporate assumptions. These assumptions deal with the variable material strengths and anticipated stresses over the life of the pipeline. Safety factors and conservativeness in design are incorporated with the assumptions in the interest of safety, but further cloud the view of the actual margin. Further complications arise with the uncertainties in estimating the long-term interactions of many variables such as pipe support, activity of time-dependent failure mechanisms, and actual stress loadings imposed on the structure. In aggregate, then, the evaluator will always be uncertain in his estimation of the margin of safety.
This uncertainty should be acknowledged, but not necessarily quantified. An evaluation system should incorporate all
known information and treat all unknowns (and “unknowables”) consistently. Because a relative risk picture is sought,
Figure 5.1 Basic risk assessment model
the consistency in treatment of design variables provides a consistent base with which to perform risk comparisons.
Design is used as an index title here because most, if not all,
of the risk variables here are normally addressed directly in the
system’s basic structural design. They all have to do with structural integrity against all anticipated loads: internal, external, time dependent, and random. This chapter, therefore, provides
guidance on evaluating the pipeline’s environment against the
critical design parameters.
Load vs. resistance-to-load curves
Conservatism in design and specifications is not the result
of insensitivity to costs and wastes of efforts and materials.
Rather, it is an acknowledgment, after centuries of experience, of the inherent unpredictability of the real world. Safety factors, or allowances for margins of error, in any structural design are only prudent. The safety margin implies a level of risk tolerance, further discussed in Chapter 14. As many modern design efforts move toward limit-state design approaches, the historical notions of safety margins and extra robustness in a design are being quantified and re-evaluated.
A visual model to better appreciate the relationships among
pipeline integrity management, safety factors, and risk is
shown in Figure 5.3, which illustrates the uncertainty involved in engineering design in general. We generally use single numbers to represent the material strength (load resistance) and the anticipated load (internal pressure plus external loads) at any point along the pipeline. However, we should not lose sight of the fact that our single numbers really are representations or simplifications of underlying distributions such as those shown in Figure 5.3A.
The actual loads or forces on a pipeline are not constant
either over time or space. They vary as we move along the
pipeline and they vary at a single point on the pipeline over
time. In the first distribution, we assume that the distributions
shown represent changing loads and resistances along the
pipeline. So, some pipe segments are exposed to relatively low
loads, some relatively high, and most are in a midrange.
Similarly, each pipe segment will have a different ability to
resist the loads. To find the differences, we might have to go
down to a microscopic level in the case of a new pipeline
with very consistent manufacturing processes, but there will be
some joints with at least minor weaknesses, allowing failure at
lower loadings; some with no weaknesses, allowing higher load
resistance; and the vast majority behaving as predicted.
The two distributions are initially separated by a generous distance (the safety factor), so that even if the tails are a bit longer than expected (that is, if some segments are exposed to more loads than expected and/or some segments have even less strength than expected), there is still no threat of failure. This
represents the as-designed risk condition and is illustrated in
Figure 5.3A.
It is conservative and prudent to assume that any system’s
resistance will be weakened over time, despite best efforts to
prevent it. Weaknesses are caused by time-dependent deterioration mechanisms and repeated stresses. It is also conservative to assume that loads might increase, perhaps due to an external force such as earth movements or increasing traffic loadings. In reality, actual loads are often less than the conservative design assumptions, leaving more safety margin.
This conservatively assumed movement of the curves toward each other (the reduction in safety margin) leads us to assume a new risk state. This is shown in Figure 5.3B. This is of course what we are trying to avoid: an overlap whereby a low-resistance segment is exposed to a high load and a failure occurs.
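The load/resistance interference idea above can be sketched numerically with a small Monte Carlo experiment. The distributions and their parameters below are purely illustrative assumptions, not values from the text:

```python
import random

def failure_probability(load_mean, load_sd,
                        resist_mean, resist_sd,
                        trials=100_000, seed=42):
    """Estimate the probability that a randomly drawn load exceeds
    a randomly drawn resistance (i.e., the curves overlap)."""
    rng = random.Random(seed)
    failures = sum(
        1 for _ in range(trials)
        if rng.gauss(load_mean, load_sd) > rng.gauss(resist_mean, resist_sd)
    )
    return failures / trials

# As designed (Figure 5.3A): wide separation, essentially no overlap.
p_new = failure_probability(load_mean=40, load_sd=5,
                            resist_mean=80, resist_sd=5)
# After assumed deterioration (Figure 5.3B): curves move together
# and spread out, so the overlap (failure) probability grows.
p_aged = failure_probability(load_mean=50, load_sd=7,
                             resist_mean=65, resist_sd=7)
```

The effect of integrity verification (Figure 5.3C) could be mimicked by truncating the low tail of the resistance samples at a detection limit, which drives the overlap probability back down.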
The role of integrity verification, a key component of risk assessment/management, is to figuratively stop the movement of the curves long before any overlap occurs. Integrity re-verification can reshape the resistance distribution so that we have less variability and fewer weak points. The integrity verification in effect removes weaknesses, even if the only weakness was lack of knowledge (uncertainty = increased risk, as discussed in Chapters 1 and 2). Figure 5.3C illustrates the risk situation after integrity verification. The knowledge gained assures us that no weaknesses beyond a certain detection limit exist. This has the effect of at least truncating the resistance-to-load curve, if not providing enough evidence that the curve has not
Maximum pressure
Normal pressure
Material strength
Pipe wall thickness
External loadings
Strength of fittings, valves, components
Pressure cycle magnitude
Pressure cycle frequency
Material toughness
Diameter/wall thickness ratio
Safety factor
Surge potential
Fluid bulk modulus
Pipe modulus of elasticity
Rate of flow stoppage
Flow rates
Integrity verifications
Verification date
Pressure test level
In-line inspection technique
In-line inspection accuracy
Land movements
Seismic shaking
Fault movement
Water bank erosion
Figure 5.2 Assessing threats related to design aspects: sample of data used to score the design index.
changed shape at all. The distance between the load curve and
this truncation point reestablishes our safety factor.
Because we are uncertain about how exactly and how quickly
the curves are changing, we will be uncertain as to how much
time we can take between weakness removal efforts. The weakness removal interval selected has implications for failure probability as discussed in Chapter 14.
New pipelines
Evaluators will often need to perform a risk assessment on a proposed pipeline based on design documents. This should be considered a preliminary assessment. A preliminary risk assessment will be based on the best available preconstruction information such as route surveys and soil investigations. During actual installation, new information will usually arise that might be pertinent to the risk assessment. This information might include:

Unexpected subsurface conditions encountered
Use of different pipe components (elbows versus field bends, etc.)
Results of quality control inspections and tests.
Often, as-built information will be required before a detailed
risk assessment can be completed or a preliminary risk assessment can be confirmed.
New construction, followed immediately by integrity verification, decreases the chance of failure from design-related issues and from time-dependent failure mechanisms. After all, the design process itself is an exercise in risk management. Where conditions are judged to be more threatening, offsetting measures are employed. These include deeper burial, provisions for land stabilization, increased pipe wall thicknesses, and use of casings and anchors where appropriate. Theoretically, these responses to changing conditions should keep the probability of failure constant along the length of the line. Differences in failure probability occur when responses are more or less than required for the conditions. An over-response often occurs for economic reasons; standardization of materials designed for worst-case conditions provides a benefit when conditions are not worst case. An under-response often occurs because of an inability to completely respond to a low-frequency, high-consequence event such as a landslide.
The challenge in a risk assessment of a new facility is to first
establish the baseline risk level, then to identify areas where
such under- and over-responses have changed the risk picture,
and then relate both of these to the consequences at specific
areas along the pipeline route.
The terms maximum operating pressure (MOP), maximum
allowable operating pressure (MAOP), maximum permissible
pressure, and design pressure are often used interchangeably, and indeed they are used interchangeably in this text. They all imply an internal pressure level that comports with design intent and safety considerations, whether the latter stem from regulatory requirements, industry standards, or a company’s internal policies.
MOP is normally calculated. For purposes of risk assessment, MOP can incorporate any and all design safety factors, or it may exclude the operating safety factors that are mandated by government regulations. It should not exclude engineering safety factors that reflect the uncertainty and variability of material strengths and the simplifying assumptions of design formulas, since these are technically based limitations on operating pressure. These include adjustment factors for temperature, joint types, and other considerations. Regulatory operating safety factors, however, usually go beyond this to allow for errors and omissions, deterioration of facilities, and extra safety margins in general. Such allowances are certainly needed in pipeline operation, but can be confusing if they are included in the risk assessment. The actual margin of safety exists between the maximum stress level caused by the highest pressure and the stress tolerance of the pipeline. Measuring this margin directly, rather than the margin between a regulated stress level and the stress tolerance, makes the assessment more intuitive and more useful when differing regulatory requirements make comparisons complicated.
Regulatory safety factors may therefore be omitted from the
MOP calculations for risk assessment purposes. As with all
elements of this risk assessment tool, such distinctions are
ultimately left to the evaluator. Because a picture of risk relative
to other pipelines is sought, any consistent definition of MOP
will work.
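This distinction can be sketched with the Barlow hoop-stress formula, which gives an internal pressure limit from pipe geometry and material strength; including or omitting a regulatory design factor changes the MOP used in the assessment. The pipe properties and the 0.72 factor below are illustrative assumptions, not values from the text:

```python
def barlow_pressure(smys_psi: float, wall_in: float,
                    od_in: float, design_factor: float = 1.0) -> float:
    """Internal pressure limit from the Barlow formula,
    P = 2 * S * t / D, optionally de-rated by a design factor."""
    return 2.0 * smys_psi * wall_in / od_in * design_factor

# Illustrative: 16-in OD, 0.250-in wall, X52 pipe (SMYS = 52,000 psi).
p_engineering = barlow_pressure(52_000, 0.250, 16.0)        # no regulatory factor
p_regulated = barlow_pressure(52_000, 0.250, 16.0, 0.72)    # with a 72% design factor
```

Either value can serve as the MOP in the model, so long as the same definition is applied consistently across all segments being compared.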
Surge (water hammer) pressures may be included in the maximum pressure determination or, alternatively, can be part of a separate risk variable, as shown in this proposed model. Surge potential is discussed in Appendix D. Pipe wall damages or suspected weaknesses (anomalies) may impact pipe strength and hence allowable pressures or safety margins. Anomalies are discussed in Appendix C. Reductions of MOP resulting from pipeline anomalies are normally based on remaining effective wall thickness calculations and conform to approaches described in industry standards such as ASME/ANSI B31G, Manual for Determining the Remaining Strength of Corroded Pipelines, or AGA Pipeline Research Committee Project PR-3-805, A Modified Criterion for Evaluating the Remaining Strength of Corroded Pipe.
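Surge potential depends on the fluid bulk modulus and the rate of flow stoppage, as listed among the Figure 5.2 variables and treated in Appendix D. As an illustrative first-order sketch (not the book's own procedure), the Joukowsky relation ties the surge pressure rise to the velocity change; the fluid properties below are assumed, water-like values:

```python
import math

def joukowsky_surge(bulk_modulus_pa: float, density_kg_m3: float,
                    delta_velocity_m_s: float) -> float:
    """Instantaneous surge pressure rise (Pa) for a sudden flow
    change, ignoring pipe-wall elasticity: dP = rho * a * dV,
    where the pressure-wave speed a = sqrt(K / rho)."""
    wave_speed = math.sqrt(bulk_modulus_pa / density_kg_m3)
    return density_kg_m3 * wave_speed * delta_velocity_m_s

# Water-like fluid (K ~ 2.2 GPa, rho = 1000 kg/m3), 2 m/s sudden stoppage.
dp = joukowsky_surge(2.2e9, 1000.0, 2.0)
```

Accounting for pipe-wall elasticity lowers the effective wave speed and hence the surge, so this simple form is conservative.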
It may also be important to distinguish a safety-system-protected MOP from one that is impossible to exceed due to the absence of adequate pressure production, that is, where it is physically impossible to exceed the MOP because there is no pressure source (including static head or temperature effects) that can cause an exceedance. This is covered more in Chapter 6, but the evaluator should carefully define MOP for purposes of risk assessment.
V. Risk variables and scoring
The Design Index is more technically complex than most of the other components of the evaluation. If the evaluator does not possess expertise in matters of pipeline design, outside help may be beneficial. This is not a requirement, though. By making some conservative assumptions and being consistent, a nonexpert can do a credible job here. He must, however, be able to obtain some calculated values. Where original design calculations are available, few additional calculations are needed.
The following paragraphs describe a risk assessment model
that captures and evaluates design-related risk variables. All
variables are listed together at the beginning of this chapter for
quick reference.
A. Safety factor (weighting: 35%)
In this part of the assessment, the overall strength of the
pipeline segment and its stress levels are considered. This
includes an assessment of loads, stresses, and component
strengths. Known and foreseeable weaknesses in pipe due to previous damage or suspect manufacturing processes are also considered here. In effect, we are calculating a safety factor, or a margin of safety, comparing what the pipeline can do (design) versus what it is currently being asked to do (operations).
The evaluation process involves an evaluation of loadings:
Internal pressure
External loadings
Special loadings
System strength (resistance to loadings) is also evaluated:
Pipe wall thickness
Pipe material strength
Pipe structural strength
Possible weaknesses in pipe
Other components.
When calculating stresses due to internal pressure, evaluators
may use either the maximum (design) pressures or the normal
operating pressures, depending on the type of risk assessment
being performed (see the previous discussion of MOP definitions). The former is the most conservative and is appropriate for characterizing the maximum stress levels to which all portions of the pipeline might be subjected, even if the normal operating pressures for most of the pipe are far below this level. This use of design pressure or MOP might be more appropriate when characterizing an entire pipeline as one unit. It also avoids the potential criticism that the assessment is not appropriately conservative.
The second alternative, using normal operating pressures,
provides a more realistic view of stress levels along the
pipeline. Portions immediately downstream of pumps or compressors would routinely see higher pressures, and downstream
Figure 5.3 Load and resistance distributions: (A) risk 1, situation as designed; (B) risk 2, assumed changes over time; (C) risk 3, after integrity verification.
portions might never see pressures even close to design limits.
This approach might be more appropriate for operational risk assessments where differences along the pipeline are of most interest.
Pressure cycling should be a part of the assessment since the
magnitude and frequency of cycling can contribute to fatigue
failure mechanisms. This is discussed elsewhere.
Calculating pipe stresses from internal pressure is discussed
in Appendix C.
External loadings
External loadings include the weight of the soil over the buried line, the loadings caused by traffic moving over the line, possible soil movements (settling, faults, etc.), external pressures
and buoyancy forces for submerged lines, temperature effects
(these could also be internally generated), lateral forces due to
water flow, and pipe weight. Stress equations for some of these
are shown in Appendix C.
The diameter and wall thickness of the pipe combine to
determine the structural strength of the pipeline against most
external loadings. Pipe flexibility is also a factor. Rigid pipe
generally requires more wall thickness to support external
loads than does flexible pipe. This chapter focuses on steel pipe
design. See Chapter 11 for a discussion of other commonly
used pipe materials.
The weight of the soil or other cover and anything moving over
the pipeline comprises the overburden load. In an offshore environment, this would also include the pressure due to water
depth. Uncased pipe under roadways may require additional
wall thickness to handle the increased loads.
Often, casing pipe is installed to carry anticipated external loads. A casing pipe is merely a pipe larger in diameter than the carrier pipe, whose purpose is to protect the carrier pipe from external loads (see Figure 4.2). Casing pipe often causes difficulties in establishing cathodic protection to prevent corrosion. The effect of casings on the risk picture from a corrosion standpoint is covered in the corrosion index (see Chapter 4). The impact on the design index is found here, when the casing carries the external load and produces a higher pipe safety factor for the section being evaluated (see The case for/against casings in Chapter 4).
An unsupported pipe is subject to additional stresses compared with a uniformly supported pipe. An unsupported condition can arise intentionally (an aerial crossing of a ditch or stream, for instance) or unintentionally (the result of erosion or subsidence, for example). From a risk perspective, the evaluator should be interested in a verification that all aboveground pipeline spans are identified, adequately supported vertically, and restrained laterally from credible loading scenarios, including those due to gravity, internal pressure, and externally applied loads. Especially in an offshore or submerged environment, this must include lateral loads such as current flow and debris impingement. Resistance to stresses from unsupported spans is generally modeled using beam formulas (see Appendix C).
Each aboveground span can be visually inspected to verify the existence of an adequate number of pipe supports, such that pipe spans do not exceed precalculated lengths based on applied dead load (i.e., load due to gravity) and the internal pressure. Pipe coating and pipe supports can also be inspected for integrity. Historical floodwater elevations can be identified based on field inspections and/or against floodplain maps.
The maximum allowable pipe spans can be calculated based on accepted industry standards such as ASME B31.4, Liquid Transportation Systems for Hydrocarbons, Liquid Petroleum Gas, Anhydrous Ammonia, and Alcohols, which specifies requirements for gravity loads and internal pressure. Allowable span lengths can conservatively be based on the assumption of a beam fully restrained against rotation at its supports when calculating the applied stresses in the span.
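The span check described above can be sketched as follows, using the fully restrained beam assumption (maximum bending moment M = w*L^2/12 for a uniformly loaded, fixed-end beam). The allowable stress, section modulus, and load values below are illustrative assumptions only:

```python
import math

def max_allowable_span(allow_stress_psi: float,
                       section_modulus_in3: float,
                       dead_load_lb_per_in: float) -> float:
    """Maximum span (inches) for a uniformly loaded beam fully
    restrained at its supports: M = w*L^2/12 and stress = M/Z,
    so L = sqrt(12 * S_allow * Z / w)."""
    return math.sqrt(12.0 * allow_stress_psi * section_modulus_in3
                     / dead_load_lb_per_in)

# Illustrative: 20,000 psi allowable bending stress, Z = 10 in^3,
# dead load of 5 lb per inch of span.
span_ft = max_allowable_span(20_000, 10.0, 5.0) / 12.0  # convert in -> ft
```

A field-observed span longer than this computed limit would flag the segment for closer evaluation in the safety factor variable.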
Third-party damage
Loadings from third-party strikes are not normally included in pipe design calculations, but the pipe’s design certainly influences the pipe’s ability to withstand such forces. According to one study, pipe wall thickness in excess of 11.9 mm could only be punctured by 5% of excavator types in service as of 1995, and none could cause a hole greater than 80 mm. Furthermore, no holes greater than 80 mm have occurred in pipelines operating at design factors of 0.3 or less with a wall thickness greater than 9.1 mm [%]. These types of statistics can be useful in assessing risks, either in a relative sense or in absolute terms (see Chapter 14).
Pipe buckling or crushing is most often a consideration for
offshore pipelines in deep water. Calculations can estimate
the pressure level required for buckling initiation and buckling
propagation. It is usually appropriate to evaluate buckle potential when the pipeline is in the depressured state and, thereby,
most susceptible to a uniformly applied external force (see
Appendix C). A distinction between buckling initiation pressure and buckling propagation pressure is sometimes made, since less pressure is needed to propagate a buckle than to initiate one.
The pipe’s ability to resist land movement forces such as those
generated in seismic events (fault movement, liquefaction,
shaking, ground accelerations, etc.) can also be included here.
Soil movements associated with changing moisture conditions
and temperatures can also cause longitudinal stresses to the
pipeline and, in extreme cases, can cause a lack of support
around the pipe.
The potential for damaging land movements is considered in
a later variable, but whether or not such forces are “damaging”
depends on the pipeline’s strength. The diameter and wall thickness are often good measures of the pipeline’s ability to resist
land movements.
Loss of support is covered in the discussion of spans as well as in the evaluation of potential land movements.
Hydrodynamic forces can occur offshore or in any situation where the pipeline is exposed to forces from moving water,
including water-borne debris.
Buoyancy and buoyancy mitigation measures can introduce
new stresses into the pipeline.
Cyclic loadings and fatigue from external forces should be a
consideration in material selection and wall thickness determination as discussed elsewhere.
Temperature effects can occur through internal or external
changes in temperature. The maximum allowable material
stress depends on the temperature. Hence, temperature extremes
may require different wall thicknesses. Such changes introduce
longitudinal stresses as discussed in Appendix C.
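For a fully restrained line, the longitudinal stress from a temperature change can be sketched with the familiar relation sigma = E * alpha * dT (the detailed treatment is in Appendix C). The material values below are typical for carbon steel but are assumptions here, not values from the text:

```python
def thermal_longitudinal_stress(e_modulus_psi: float,
                                alpha_per_degf: float,
                                delta_t_degf: float) -> float:
    """Longitudinal stress (psi) induced in a fully restrained pipe
    by a temperature change: sigma = E * alpha * dT."""
    return e_modulus_psi * alpha_per_degf * delta_t_degf

# Carbon steel: E ~ 29e6 psi, alpha ~ 6.5e-6 per degF; 50 degF change.
sigma = thermal_longitudinal_stress(29e6, 6.5e-6, 50.0)
```

Even modest temperature swings can thus add thousands of psi of longitudinal stress, which is why temperature extremes may drive wall thickness selection.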
In composite pipelines, such as a PE liner in a steel pipe, many more complexities are introduced. Often used to handle more corrosive materials, such composites may have a layer of corrosion-resistant or chemical-degradation-resistant material and a layer of higher strength (structural) material. Because two or more materials are involved, the stresses in each and the interaction effects must be understood. Such calculations are not easily done. Original design calculations must be used (or re-created when not available) to determine minimum required wall thicknesses. The evaluator must then be sure that the additional wall thickness of one or more of the materials will indeed add to the pipe strength and corrosion resistance, and not detract from it. It is conceivable that an increase in wall thickness in one layer may have an undesirable effect on the overall pipe structure. Further, some materials may allow diffusion of the product; when this occurs, composite designs may be exposed to additional stresses.
Accounting for external loads
If detailed calculations are not deemed to be cost effective, the
evaluator may choose to use a standard percentage to add to the
wall thickness required for internal pressure to account for all
other loadings combined. For instance, 10 or 20% additional
wall thickness, beyond requirements for internal pressure
alone, would be conservative for most steel pipe under normal loading conditions. This percentage should of course
be increased for sections that may be subjected to additional
loadings or where ‘diameter to wall thickness’ ratios suggest
diminished structural strength.
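The flat-percentage approach above can be sketched as follows; the function name is illustrative, and the 10-20% range comes from the text's suggestion for steel pipe under normal loading:

```python
def wall_with_external_allowance(t_internal_in: float,
                                 allowance_fraction: float = 0.10) -> float:
    """Required wall thickness: thickness needed for internal
    pressure plus a flat percentage allowance for all external
    loads combined (10-20% is suggested as conservative for most
    steel pipe under normal loading conditions)."""
    return t_internal_in * (1.0 + allowance_fraction)

# Illustrative: 0.250-in wall required for internal pressure,
# with a 20% allowance for all other loadings combined.
t = wall_with_external_allowance(0.250, 0.20)
```

A larger allowance fraction would be substituted for segments under roadways, in unstable soils, or with high diameter-to-wall-thickness ratios.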
Pipe wall thickness
The role of increased wall thickness in risk reduction is intuitive
and verified by experimental work. Some general conclusions
from this work can be incorporated into a risk analysis. Pipe wall thickness is assumed to be proportional to structural strength: greater wall thickness leads to greater structural strength (not always linearly), with the accompanying assumption of uniform material properties and absence of defects.
Most pipeline systems have incorporated some “extra” wall thickness, beyond that required for anticipated loads, in the pipe, and hence have extra strength. This is normally because of the availability of standard manufactured pipe wall thicknesses. Such “off-the-shelf” pipe is often more economical even though it may contain more material than may be required for the intended service. This extra thickness will provide some additional protection against corrosion, external damage, and most other failure mechanisms. Excess strength (and increased margin of safety) also occurs when a pipeline is operated below its design limits. This is quite common in the industry and happens for a variety of reasons:

Downstream portions are intended to operate at lower stresses, even though they are designed for the system maximum.
Diminishing supplies reduce flows and pressures.
Changes in service result in decreased stresses.

Regardless of the cause, any extra strength beyond the current operational requirements can be considered in the risk evaluation (Figure 5.4).
Research indicates that at design factors of 0.5 and below, a typical corrosion or material defect flaw will fail in the through-wall direction, leading to a leak rather than a rupture. At design factors of 0.3 and below, all flaws, even those with sharp edges, will likewise fail in this manner [58]. A through-wall failure mode leading to less product release (and hence lower consequences) is discussed in Chapter 7.
However, research also indicates that increased wall thickness is not a cure-all. Increased brittleness, greater difficulties
in detecting material defects, and installation challenges are
cited as factors that might offset the desired increase in damage
resistance [58].
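A rough screening sketch of these research findings follows, using the 0.5 and 0.3 design-factor thresholds cited above [58]. The function names and the pipe properties in the example are illustrative assumptions, not part of the cited study:

```python
def design_factor(pressure_psi: float, od_in: float,
                  wall_in: float, smys_psi: float) -> float:
    """Operating design factor = hoop stress / SMYS, with hoop
    stress from the Barlow relation P*D/(2*t)."""
    hoop_stress = pressure_psi * od_in / (2.0 * wall_in)
    return hoop_stress / smys_psi

def likely_failure_mode(df: float) -> str:
    """Screen per the cited research: below ~0.5, typical corrosion
    or material flaws tend to leak rather than rupture; below ~0.3,
    even sharp-edged flaws behave this way."""
    if df <= 0.3:
        return "leak (all flaw types)"
    if df <= 0.5:
        return "leak (blunt flaws)"
    return "rupture possible"

# Illustrative: 720 psi on 16-in OD, 0.250-in wall, X52 pipe.
mode = likely_failure_mode(design_factor(720, 16.0, 0.250, 52_000))
```

Such a screen could feed the leak-versus-rupture distinction that Chapter 7 uses on the consequence side.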
As previously noted, certain wall thicknesses are also thought to reduce the chances of failure from excavating equipment. Some wall thickness-internal pressure combinations provide enough strength (safety margin) that most conventional excavating equipment cannot puncture them (see page 96 and Chapter 14). However, avoidance of immediate failure is only part of the threat reduction: nonlethal damages can still precipitate future failures through fatigue and/or corrosion.
When evaluating a variety of pipe materials, distinctions in material strengths and toughness can be made. In terms of external damage protection, a tenth of an inch of steel offers more than does a tenth of an inch of fiberglass. The evaluator must make this distinction if she wants to compare the risks associated with pipelines constructed of different materials.
An important consideration is the difference between nominal
or “specified” wall thickness and actual wall thickness. Pipe
strength calculations assume a uniform pipe wall, free from any
defect that might reduce the material strength. This includes possible reductions in effective wall thickness caused by defects such
as cracks, laminations. hard spots, etc. Pipeline integrity assessments are designed to identify areas of weaknesses that might
have originated from any of the several causes. Differences
between nominal and effective wall thickness include:
Allowable manufacturing tolerances
Manufacturing defects
Installation/construction damages
Damages suffered during operation.
Manufacturing issues
Strength It is commonly accepted that older manufacturing
and construction methods do not match today’s standards.
Technological and quality-control advances have improved
quality and consistency of both manufactured components and
construction techniques. These improvements have varying
Figure 5.4 Cross section of pipe wall illustrating the pipe safety factor (wall thickness required for internal pressure plus additional wall thickness for external loads).
degrees of importance in a risk assessment. In a more extreme
case, depending on the method and age of manufacture, the
assumption of uniform material may not be valid. If this is the
case, the maximum allowable stress value must reflect the true
strength of the material.
Modern pipe purchasing specifications address specified
minimum yield stress (SMYS) and toughness criteria, among
other properties, as critical measures of material strength.
These properties are normally documented with certifications from a steel mill, as discussed later. The risk evaluator should ensure that specifications were appropriate, adhered to, and relevant to the current properties of the pipe or component.
A history of failures that are attributable in part or in whole
to a specific pipe manufacture process is sufficient reason
to question the allowable stress level of the pipe material,
regardless of pipe specifications or pressure test results. In
some risk models, pipe materials received from certain steel
mills over certain periods of time are penalized due to known problems.
Manufacturing tolerances The actual pipe wall thickness is
not usually the nominal wall thickness specified in the purchase
agreement. Nominal wall thicknesses designate a wall thickness that can vary, plus or minus, by some specified manufacturing tolerance. For the purposes of a detailed risk assessment,
the lowest effective wall thickness in the section would ideally
be used. If actual thickness measurement data are not available,
the nominal wall thickness minus the specified maximum manufacturing tolerance can be used. Note, however, that some
stress formulas are based on nominal wall thickness rather than minimum wall thickness.
In the case of longitudinally welded steel pipe, the
weld seam and area around it are often metallurgically different
from the parent steel. If such seams are thought to weaken the
pipe wall, this should be taken into account when assessing
pipe strength.
A higher susceptibility to certain failure mechanisms has
been identified in older electric resistance welding (ERW) pipe.
This applies to pipe manufactured with a low-frequency ERW
process, typically seen in pipe manufactured prior to 1970. This
type of weld seam is more vulnerable to failure mechanisms
such as:
Lack of fusion
Nonmetallic inclusions
Excessive trim
Selective corrosion (crevice corrosion)
Fatigue at lamination/ERW interface.
These mechanisms, failure databases, and supporting metallurgical investigations are more fully described in technical
literature references. Since 1970, the use of high-frequency
ERW techniques, coupled with improved inspection and testing
techniques, has resulted in a more reliable pipe product.
U.S. government agencies issued advisories regarding
the low-frequency ERW pipe issue, but did not recommend
derating the pipe or other special standards. The increased
defect susceptibility of this type of pipe is generally mitigated
through integrity verification processes.
Laminations and blistering. A lamination is a metal separation within the pipe wall. Laminations are not uncommon
in older pipelines and generally pose no integrity concerns
unless they contribute to the formation of a blister. Hydrogen
blistering occurs when atomic hydrogen penetrates the
pipe steel to a lamination and forms hydrogen molecules,
which cannot then diffuse through the steel. A continuing
buildup of hydrogen pressure can separate the layers of steel
at the lamination, causing a visible bulging at the ID and OD.
Hydrogen blistering at laminations is a potential contributing cause of failure when there is an aggravating presence of
hydrogen, such as from a product like sour crude oil service.
Although hydrogen generation is possible from cathodic protection under certain circumstances, this is not thought to be a
common failure mechanism.
There is no proven method of predicting the failure pressure
level of a preexisting blister and no proven method to calculate
its crack-driving potential from the standpoint of fatigue [86].
The potential for laminations surviving pressure tests, adding
weaknesses to the pipe wall, and contributing to a future failure
can be considered by the evaluator when deemed appropriate.
Construction issues
Similar to the discussion on pipe manufacturing techniques, the
methods for welding pipe joints have improved over the years.
Girth welds today must pass a more stringent inspection than
welds from the original construction of the pipeline. Welding
standards such as API 1104 (incorporated by reference into
U.S. regulations) specify additional and different potential
weld defects to be repaired than the standards from previous eras.
It is not certain that girth weld defects, as defined by today’s
welding inspection standards, increase the probability of
weld failure in an inspected and tested pipeline. However, this
issue illustrates an improving safety and risk-awareness evolution over time, presumably rooted in actual experience and
supported by engineering calculations.
Arc burns, created during welding, are of concern due to the
possibility of tiny cracks forming around the “hard spot” that
can be created from the arc burn. A common procedure among
pipeline operators is to remove arc burns.
Some previous construction techniques might have permitted miter joints, wrinkles in field bends, certain branch reinforcement designs, certain repair methods, and other aspects
not currently acceptable for most pipeline construction. These
should be considered in evaluating the strength of the system.
Offsetting these concerns to some extent might be the evidence of a pipeline system in continuous and reliable operation
for many years. In other words, incorporating “withstood the
test of time” evidence may be appropriate.
Damages during operations
Failure modes and potential damage can occur when the
pipeline is in operation. These include damage from corrosion,
dents, gouging, ovality, cracking, stress corrosion cracking
(SCC), and selective seam corrosion. These are generally rare
phenomena and involve simultaneous and coincident failure
mechanisms. Potential corrosion damage and SCC are
addressed in Chapter 4.
Selective seam corrosion is a possible, but rare, phenomenon
on low-frequency ERW pipe. However, the possibility cannot
be dismissed entirely. It is an aggressive form of localized
corrosion that has no known predictive models associated
with it. Not all low-frequency ERW pipe is vulnerable since,
apparently, special metallurgy is required for increased susceptibility [86].
Damages can be detected by visual inspection or through
integrity verification techniques. Until an evaluation has shown
that an indication detected on a pipe wall is potentially serious,
it is normally called an anomaly. It is only a defect if it reduces
pipe strength significantly, impairing its ability to be used as intended.
Many anomalies will be of a size that does not require repair
because they have not reduced the pipe strength from required
levels. However, a risk assessment that examines available
pipe strength should probably treat anomalies as evidence of
reduced strength and possible active failure mechanisms.
A complete assessment of remaining pipe strength in consideration of an anomaly requires accurate characterization of the
anomaly: its dimensions and shape. In the absence of detailed
remaining strength calculations, the evaluator can reduce pipe
strength by a percentage based on the severity of the anomaly.
Higher priority anomalies are discussed in Appendix C.
Stress calculations
Calculation of the required wall thickness for a pipeline to
withstand anticipated loads involves several steps. First,
Barlow’s formula for circumferential stress is used to determine the minimum wall thickness required for internal pressure alone. This calculation is demonstrated in Appendix C.
Barlow’s calculation assumes a uniform material thickness and
strength and requires the input of a maximum allowable stress
for the pipe material. It yields a stress value for the extreme
fibers of the pipe wall (for the stress due solely to internal pressure). By starting with a maximum allowable material stress,
the wall thickness needed to contain a given pressure is calculated. Alternately, inputting a wall thickness into the equation
yields the maximum internal pressure that the pipe can withstand. These calculations assume that there are no weaknesses
in the pipe.
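The Barlow relationship described above can be sketched as a short calculation. This is a minimal illustration, not the book's own worksheet; the 24-in. diameter below is a hypothetical value chosen for the example (the text does not state one), while the 2000-psig MOP and 35,000-psi allowable stress echo Example 5.1.

```python
# Barlow's formula for hoop (circumferential) stress in thin-walled pipe:
#   stress = P * D / (2 * t)
# Rearranged either to find the minimum wall thickness for a given internal
# pressure, or the maximum pressure a given wall thickness can contain.

def required_wall_thickness(pressure_psig: float, diameter_in: float,
                            allowable_stress_psi: float) -> float:
    """Minimum wall thickness (in.) for internal pressure alone."""
    return pressure_psig * diameter_in / (2 * allowable_stress_psi)

def max_internal_pressure(wall_in: float, diameter_in: float,
                          allowable_stress_psi: float) -> float:
    """Maximum internal pressure (psig) a given wall can contain."""
    return 2 * allowable_stress_psi * wall_in / diameter_in

# Hypothetical 24-in. pipe at 2000 psig with 35,000-psi allowable stress:
t_req = required_wall_thickness(2000, 24, 35_000)   # ~0.686 in.
```

Note that these relationships assume uniform material thickness and strength and no weaknesses in the pipe, as the text states.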
Allowable material stress levels are normally specified in
pipe purchase agreements and verified by material test reports
accompanying the purchase. These reports are usually called
mill certifications of pipe material composition and properties
and are issued by the pipe manufacturing steel mill.
In the absence of mill certificates, reliable pipe specification
documents (or recent pressure test data, especially if the material ratings are questioned) regarding the maximum pressure to
which the pipe has been subjected (usually the preservice
hydrostatic test) can be used to calculate a material allowable
stress. That is, we input the maximum internal pressure into
Barlow’s formula to calculate a material allowable stress value.
From this allowable stress value, we can then calculate a minimum required wall thickness.
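The back-calculation just described (allowable stress from a survived test pressure, then required wall at MOP) can be sketched as follows. The diameter and nominal wall used here are hypothetical illustrations, not values from the text; the 2200-psig test and 1400-psig MOP echo Example 5.2.

```python
# Back-calculate an allowable stress from a pressure the pipe is known to
# have withstood (e.g., the preservice hydrostatic test), then use that
# stress to find the minimum wall thickness required at MOP.

def stress_from_pressure(pressure_psig: float, diameter_in: float,
                         wall_in: float) -> float:
    """Barlow's formula solved for hoop stress (psi)."""
    return pressure_psig * diameter_in / (2 * wall_in)

test_pressure = 2200   # psig, survived hydrostatic test
diameter = 16.0        # in. (hypothetical)
wall = 0.65            # in. nominal (hypothetical)

allowable = stress_from_pressure(test_pressure, diameter, wall)  # ~27,077 psi
mop = 1400             # psig
t_req = mop * diameter / (2 * allowable)   # required wall at MOP, ~0.414 in.
```

Because the diameter cancels, the required wall reduces to the nominal wall scaled by the pressure ratio (wall × MOP ÷ test pressure), which is a useful sanity check on the calculation.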
Scoring the pipe safety factor
The procedure recommended here is to calculate the required
pipe wall thickness and compare it to the actual wall thickness
(see Figure 5.4), adjusted by any integrity assessment information available. The required wall thickness calculation is more
straightforward if it does not include standard safety factors.
This is not only in the interest of simplicity, but also because
some of the reasons for the safety factors are addressed in other
sections of this risk analysis. For instance, regulations often
base design safety factors on nearby population density.
Population density is part of the consequences section (see
Chapter 7, Leak Impact Factor) in this evaluation system and
would cloud the issue of pipe strength if considered here also.
Consequences are examined in detail separately from probability-of-failure considerations, for purposes of risk assessment
clarity and risk management efficiency.
The comparison between the actual and the required wall
thickness is most easily done by using a ratio of the two numbers. Using a ratio provides a numerical scale from which
points can be assigned. If this ratio is less than one, the pipe
does not meet the design criteria-there is less actual wall
thickness than is required by design calculations. The pipeline
system has not failed either because it has not yet been exposed
to the maximum design conditions or because some error in the
calculations or associated assumptions has been made. A ratio
greater than one means that extra wall thickness (above design
requirements) exists. For instance, a ratio of 1.1 means that
there is 10% more pipe wall material than is required by design
and 1.25 means 25% more material.
The actual wall thickness should account for all possible
weaknesses, as discussed earlier and again in the integrity verification variable. This can be done using detailed stress calculations (see Appendix C) or through derating factors devised by
the evaluator.
When all issues have been considered, a simple point schedule such as that shown in Table 5.1 can be employed to award
points based on how much extra wall thickness exists. This
schedule uses the ratio of actual pipe wall to pipe wall required
and calls this ratio t.
A simple equation can also be used instead of Table 5.1. The equation

(t - 1) × 35 = point value

yields approximately the same values and has the benefit of
more discrimination between differences in t.
Table 5.1 Point schedule based on extra wall thickness (the ratio t of actual to required wall, in ranges such as 1.11-1.20, mapped to point values)
Some examples to illustrate the pipe component of the safety
factor follow.
Example 5.1: Calculating the Safety Factor
A cross-country steel pipeline is being evaluated. The
pipeline transports natural gas. Original design calculations are
available. Pipe is the only type of pipeline component in the
segment being assessed. The evaluator feels that no extraordinary conditions exist on the line and proceeds as follows:
1. He uses information from the design file to determine the
required wall thickness. A MOP of 2000 psig using a grade
of steel rated for 35,000-psi maximum allowable stress
yields a required wall thickness of 0.60 in. for this diameter
of pipe (see Appendix C). External load calculations show
the need for an additional 0.08 in. in thickness to handle the
additional stresses anticipated. Surge pressures, extreme
temperatures, or other loadings are extremely unlikely.
The total required wall thickness is therefore 0.60 + 0.08 =
0.68 in.
2. The actual pipe wall thickness installed is a nominal 0.88 in.
Manufacturing tolerances allow this nominal to actually be
as thin as 0.79 in. No documented thickness readings indicate that the line is any thinner than this 0.79-in. value and
recent integrity verifications indicate no defects, so the
evaluator uses 0.79 as the actual wall thickness.
3. The ratio of actual to required wall thickness is therefore
0.79 ÷ 0.68 = 1.16. Therefore, 16% additional protection
against external damage or corrosion exists.
4. The point value for 16% extra wall thickness is 5.6, using the
equation given earlier.
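The safety-factor scoring in Example 5.1 can be sketched in a few lines. This is a minimal illustration of the (t - 1) × 35 equation from the text, assuming (as the example does) that the ratio is rounded to two decimals before scoring; the function name is mine.

```python
# Safety-factor points from the ratio of actual to required wall thickness,
# scored with (t - 1) * 35 as in the text.

def safety_factor_points(actual_wall_in: float, required_wall_in: float) -> float:
    t = round(actual_wall_in / required_wall_in, 2)  # 0.79/0.68 -> 1.16
    return (t - 1) * 35

# Example 5.1: actual 0.79 in., required 0.68 in. -> 1.16 -> 5.6 points
points = safety_factor_points(0.79, 0.68)
```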
Example 5.2: Calculating the safety factor
Another section of cross-country steel pipeline is being evaluated. Hydrocarbon liquids are being transported here. In this
case, original design calculations are not available. The line is
35 years old and is exposed to varying external loadings. The
evaluator proceeds as follows:
1. Because of the age of the line and the absence of original
documents, the most recent hydrostatic test pressure is used
to determine the maximum allowable stress for the pipe
material. Using the test pressure of 2200 psig, the stress
level is calculated to be 27,000 psi (see Appendix C). The
evaluator is thus reasonably sure that the pipeline can withstand a stress level of 27,000 psi. The maximum operating
pressure of the line is 1400 psig. Using this value and a
stress level of 27,000 psi, the required wall thickness (for
internal pressure only) is calculated to be 0.38 in.
2. Using some general calculations and the opinions of the
design department, the evaluator feels that an additional
10% must be added to the wall thickness to allow for external loadings for most conditions. This is an additional
0.04 in. He adds an additional 5% (total of 15% above
requirements for internal pressure alone) for situations
where the line crosses beneath roadways. This 5% is thought
to account for fatigue loadings at all types of uncased road
crossings, regardless of pipeline depth, soil type, roadway
design, and traffic speed and type. In other words, 15%
wall thickness above that required for internal pressure only
is the requirement for the worst case situation. This is an
additional 0.06 in. for sections that have uncased road crossings.
3. Water hammer effects can produce surge pressures up to 100
psig. Such surges could lead to an internal pressure as
high as 1500 psig (100 psig above MOP). This additional
pressure requires an additional 0.02 in. of wall thickness.
4. The required minimum wall thicknesses are therefore 0.38 +
0.06 + 0.02 = 0.46 in. for sections with uncased crossings,
and 0.38 + 0.04 + 0.02 = 0.44 in. for all other sections.
5. The evaluator next determines the actual wall thickness.
Records indicate that the original purchased pipe had a
nominal wall thickness of 0.65 in. When the manufacturing
tolerance is subtracted from this, the wall thickness is 0.58
in. Field personnel, however, mention that wall thickness
Risk variables and scoring 5/101
checks have revealed thicknesses as low as 0.55 in. This is
confirmed by documents in the files. Additionally, there
may be weaknesses related to the low-frequency ERW pipe
manufacturing process. No integrity verifications have been
performed recently, so there is justification for a conservative assumption of pipe weaknesses. The evaluator chooses
to apply a somewhat arbitrary 12% de-rating of pipe
strength due to possible weaknesses and uses 0.48 in. as the
actual ‘effective’ wall thickness.
6. The actual-to-required wall thickness ratios are therefore
0.48 ÷ 0.46 = 1.04 and 0.48 ÷ 0.44 = 1.09 for sections with
and without uncased road crossings, respectively. These
ratios yield point values of 1.4 and 3.2, respectively.
Conservatism requires that the evaluator assign a value of
1.4 points for this section of pipeline.
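The bookkeeping of Example 5.2 (per-case required wall, a derated "effective" actual wall, and the conservative governing score) can be restated as a short sketch. All numbers come from the example; the rounding of intermediate ratios to two decimals is my assumption about how the example's 1.04 and 1.09 were obtained, and for the second case the equation gives 3.15, which the text reports as 3.2.

```python
# Example 5.2 restated: required wall per case, 12%-derated effective wall,
# and the governing (lowest) point score.

def points_from_ratio(t: float) -> float:
    return (round(t, 2) - 1) * 35

base = 0.38   # in., internal pressure only
surge = 0.02  # in., water-hammer allowance
required = {
    "uncased road crossings": base + 0.06 + surge,   # 0.46 in.
    "all other sections":     base + 0.04 + surge,   # 0.44 in.
}

measured_min = 0.55                                  # in., lowest documented reading
effective = round(measured_min * (1 - 0.12), 2)      # 12% derating -> 0.48 in.

scores = {k: points_from_ratio(effective / v) for k, v in required.items()}
governing = min(scores.values())   # conservatism: lowest score governs (~1.4)
```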
Alternative pipe strength scoring
An alternative scoring approach for the safety factor could add
pipe diameter as a variable to further consider structural
strength. Pipe strength. from an external loading standpoint, is
related to the pipe’s wall thickness and diameter. In general,
larger diameter and thicker walled pipes have stronger load-bearing capacities and should be more resistant to external
loadings. A thinner wall thickness and smaller diameter will
logically increase a pipe’s susceptibility to damage [48].
Some risk evaluators have used D/t as a variable for both
resistance against external loadings and as a susceptibility-to-cracking indicator. As D/t gets larger, stress levels increase, increasing failure potential and risk.
Another risk measure of pipe strength has been proposed
[38] as a pipe geometry score, derived from a relationship
where failure probability is estimated to be proportional to a function of
t = pipe wall thickness (in.)
d = pipe diameter (in.)
As this number gets higher, the relative risk of failure from
external forces increases.
Either of these relationships is readily converted into a risk
scoring scheme similar to the one described using simple wall
thickness ratios.
Non-pipe components
The evaluation of safety factor should also include non-pipe
components whenever they are part of a segment being
assessed. If a non-pipe component is the weakest part of the
pipeline segment being evaluated, its point score should govern.
Components include flanges, valve bodies, fittings, filters,
pumps, flow measurement devices, pressure vessels, and others.
Each pipeline component has a specified maximum operating pressure. This value is given by the manufacturer or
determined by calculations. The lowest pressure rating in the
system determines the weakest component and is used to set
the design pressure. Ideally, the design pressure as it is
used here should not include safety factors for the individual
components for the same reasons it is recommended that
regulatory safety factors be removed from MOP calculations
(see page 00). It may be difficult, however, to separate the
safety factor from the actual pressure-containing capabilities of
the component.
A flange, for instance, may be rated by the manufacturer to
operate at a pressure of 1400 psig. It can be safely tested for
short periods at pressures up to 2160 psig, as certified by the
manufacturer. It is not obvious exactly how much pressure the
flange can withstand from these numbers and it is a nontrivial
matter to calculate it. For purposes of this risk assessment, the
value of 1400 psig should probably be used as the maximum
flange pressure even though this value certainly has a safety
factor built in. The separation of the safety factor would most
likely not be worth the effort. It also makes the comparison to
pipe strength (MOP) more valid when safety factors are
removed from each.
On the other hand, the design calculations for a pressure vessel are usually available. This would allow easy separation of
the safety factor. Again, if these calculations are not available,
the best course is probably to use the rated operating pressure.
This will yield the most conservative answer. Again, consistency is important.
As in the pipe analysis, a ratio can be used to show the difference between what a system component can do and what it is
presently being asked to do. This can be the pressure rating of
the weakest component divided by the system maximum operating pressure. When this ratio is equal to 1, there is no safety
factor present (discounting some component safety factors that
were not separated). This means that the system is being operated at its limit. If the ratio is less than 1, the system can theoretically fail at any time because there is a component of the
system that is not rated to operate at the system MOP. A ratio
greater than 1 means that there is a safety factor present; the
system is being operated below its limit.
A simple schedule can now be developed to assign points. It
may look something like this:
Design-to-MOP ratio mapped to points (the highest ratios earning up to 35 pts; ratios below 1 earning negative scores such as -10 pts)
An equation can also be used instead of the point schedule:

[(Design-to-MOP ratio) - 1] × 35 = points
The steps for the evaluator are therefore:
1. Determine the pressure rating of the weakest system component.
2. Divide this pressure rating (from step 1) by the systemwide MOP.
3. Assign points based on the schedule.
This is equivalent to the previous pipe strength evaluation
but uses pressure instead of wall thickness. Because pressure
and wall thickness are proportional in a stress calculation,
pressure could also be used in the pipe strength analysis.
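The non-pipe component check can be sketched as below. This is a minimal illustration using the numbers of Example 5.4 (pipe rated 2300 psig, valve bodies 1400 psig, MOP 800 psig); the equation gives 26.25 points, which the text rounds to 26.3. The function name and dictionary layout are mine.

```python
# Design-to-MOP scoring for non-pipe components: the pressure rating of the
# weakest component divided by system MOP, scored with [(ratio) - 1] * 35.

def design_to_mop_points(ratings_psig: dict, mop_psig: float):
    """Return (ratio, points) for the weakest component in the segment."""
    weakest = min(ratings_psig.values())
    ratio = weakest / mop_psig
    return ratio, (ratio - 1) * 35

# Example 5.4: the valve bodies (1400 psig long-term rating) govern.
ratio, pts = design_to_mop_points({"pipe": 2300, "valve bodies": 1400}, 800)
# ratio = 1.75, pts = 26.25 (~26.3 in the text)
```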
Note that no credit is given for weaker components that are
protected from overpressure by other means. These scenarios
are examined in detail in the incorrect operations index
(Chapter 6). The reasoning here is that the entire risk picture is
being examined in small pieces. The fact that there exists a
weak component contributes to this piece of the risk picture,
regardless of protective actions taken. Even though a pressure
vessel is protected by a relief valve, or a thin-walled pipe section is protected by an automatic valve, the presence of such
weak components in the section being evaluated causes the
lower design-to-MOP ratio and hence the lower point values.
Of course, the evaluator may insert a section break if she feels
that a higher pressure section is being penalized by a lower
rated item when there is adequate isolation between the two.
Regardless of her choice, the adequacy of the isolation will be
evaluated in the incorrect operations index (Chapter 6).
Example 5.3: Calculating the safety factor for
non-pipe components
The evaluator is examining a section of a jet fuel pipeline.
The MOP of the pipeline is 1200 psig. This particular section
has an aboveground storage tank that is rated for 1000 psig
maximum. The tank is the weakest component in this section. It
is located on the low-pressure end of the pipeline and is protected by relief systems and redundant control valves such that
it never experiences more pressure than 950 psig. This effectively isolates the tank from the pipeline system and does not
require that the pipeline be down-rated to a lower operating
pressure. These safety measures, however, are not considered
for this item and the design-to-MOP ratio is as follows:
Weakest component ÷ system MOP = 1000/1200 = 0.83
This is based on the fact that the weakest component can
withstand only 1000 psig. This rates a point score of -10 points.
Example 5.4: Calculating the safety factor for
non-pipe components
In this section, the only components are pipe and valves. The
pipe is designed to operate at 2300 psig by appropriate design
calculations. The overall system is rated for a MOP of 800 psig.
The valve bodies are nominally rated for maximum pressures
of 1400 psig, with permissible hydrostatic test pressures of
2200 psig. The evaluator rates the weakest component, the
valve bodies, to be 1400 psig. Because he has no exact information as to the strength of the valve bodies, he uses the pressure
rating that is guaranteed by the manufacturer for long-term
service. The design-to-MOP ratio is, therefore,
1400/800 = 1.75, which yields a point value of 26.3 points
Example 5.5: Calculating the safety factor for
non-pipe components
Here, a section has valves, meters, and pipe. The MOP is
900 psig. The pipe strength is calculated to be 1700 psig.
The valve bodies and meters can all withstand pressure tests
of 2700 psig and are rated for 1800 psig in normal operation.
Again, the evaluator has no knowledge of the exact strength of
the valves and meters, so he uses the normal operation rating
of 1800 psig.
The weakest component, the pipe, governs; therefore,
1700/900 = 1.89, which yields a point value of 31.2 points
Note that in the preceding examples, the pipeline segments
being evaluated have a mixture of components. An alternative
and often-preferable segmentation strategy would create a separate pipeline segment, to be independently scored, for each
component present. This avoids the blending of dissimilar risks
within a segment's scores. It has the further benefit of allowing
similar components to be grouped and compared, "apples to
apples." See discussions of segmentation strategy (Chapter 2) and
also risk evaluations of station facilities (Chapter 13).
B. Fatigue (weighting: 15%)
Fatigue failure has been identified as the largest single cause
of metallic material failure [47]. Historical pipeline failure data
do not indicate that this is a dominant failure mechanism in
pipelines, but it is nonetheless an aspect of risk. Because a
fatigue failure is a brittle failure, it can occur with no warning
and with disastrous consequences.
Fatigue is the weakening of a material due to repeated cycles
of stress. The amount of weakening depends on the number
and the magnitude of the cycles. Higher stresses, occurring
more often, can cause more damage to the material. Factors
such as surface conditions, geometry, material processes, fracture toughness, temperature, type of stress applied, and welding
processes influence susceptibility to fatigue failure (see
Cracking: a deeper look, in this chapter).
Predicting the failure of a material when fatigue loadings
are involved is an inexact science. Theory holds that all materials have flaws (cracks, laminations, other imperfections), if only at a microscopic level. Such flaws are generally too
small to cause a structural failure, even under the higher
stresses of a pressure test. These flaws can grow though,
enlarging in length and depth as loads (and, hence, stress) are
applied and then released. After repeated episodes of stress
increase and reduction (sometimes hundreds of thousands of
these episodes are required), the flaw can grow to a size large
enough to fail at normal operating pressures. Unfortunately,
predicting flaw growth accurately is not presently possible
from a practical standpoint. Some cracks may grow at a controlled, rather slow rate, while others may grow literally at the
speed of sound through the material. The relationship
between crack growth and pressure cycles is based on fracture
mechanics principles, but the mechanisms involved are not
completely understood.
For the purposes of risk analysis, the evaluator need not be
able to predict fatigue failures. He must only be able to identify,
in a relative way, pipeline structures that are more susceptible to
such failures.
Because it is conservative to assume that any amount of
cycling is potentially damaging, a schedule can be set up to
compare numbers and magnitudes of cycles. Stress magnitudes
should be based on a percentage of the normal operating pres-
Riskvariables and scoring 5/103
sures. A 100-psi pressure cycle will have a potentially greater
effect on a system rated for 150 psi MOP than on one rated for
1500 psi. Most research points to the requirement of large numbers of cycles at all but the highest stress levels, before serious
fatigue damage occurs.
In many pipeline instances, the cycles will be due to changes
in internal pressure. Pumps, compressors, control valves, and
pigging operations are possible causes of internal pressure
cycles. The following example schedule is therefore based on
internal pressures as percentages of MOP. If another type of
loading is more severe, a similar schedule can be developed.
Stresses caused by vehicle traffic over a buried pipeline would
be an example of a cyclic loading that may be more severe than
the internal pressure cycles.
This is admittedly an oversimplification of this complex
issue. Fatigue depends on many variables as noted previously.
At certain stress levels, even the frequency of cycles (how fast
they are occurring) is found to affect the failure point. For
purposes of this assessment, however, the fatigue failure risk
is being reduced to the two variables of stress magnitude
and number. The following schedule is offered as a possible simple way to evaluate fatigue's contribution to the risk picture.
One cycle is defined as going from the starting pressure to a
peak pressure and back down to the starting pressure. The cycle
is measured as a percentage of MOP.
In this example of assessing fatigue potential, the evaluator
uses the scoring protocol illustrated in Table 5.2 to analyze
various combinations of pressure magnitudes and cycles. The
point value is obtained by finding the worst case combination
of pressures and cycles. This worst case is the situation with
the lowest point value. Note the “equivalents” in this table;
9000 cycles at 90% of MOP is thought to be the equivalent of
9 million cycles at 5% of MOP; 5000 cycles at 50% MOP is
equal to 50,000 cycles at 10% of MOP, etc. In moving around
in this table, the upper right corner is the condition with the
greatest risk, and the lower left is the least risky condition.
The upper left corner and the lower right corner are roughly equivalent.
Note also that Table 5.2 is not linear. The designer of the table
did not change point values proportionately with changes in
either the magnitude or frequency of cycles. This indicates a
belief that changes within certain ranges have a greater impact
on the risk picture. The following example illustrates further
the use of this table.
Table 5.2 Fatigue scores based on various combinations of pressure magnitudes and cycles (columns: lifetime cycles; rows: cycle magnitude as a percentage of MOP)
Example 5.6: Scoring fatigue potential
The evaluator has identified two types of cyclic loadings in a
specific pipeline section: (1) a pressure cycle of about 200 psig
caused by the start of a compressor about twice a week and (2)
vehicle traffic causing a 50-psi external stress at a frequency of
about 100 vehicles per day. The section is approximately 4
years old and has an MOP of 1000 psig. The traffic loadings
and the compressor cycles have both been occurring since the
line was installed.
For the first case, the evaluator enters the table at (2
starts/week × 52 weeks/year × 4 years) = 416 cycles across the
horizontal axis, and (200 psig/1000 psig) = 20% of MAOP on
the vertical axis. This combination yields a point score of about
13 points.
For the second case, the lifetime cycles are equal to (100
vehicles/day × 365 days/year × 4 years) = 146,000. The magnitude is equal to (50 psig/1000 psig) = 5%. Using these two
values, the schedule assigns a point score of 7 points.
The worst case, 7 points, is assigned to the section.
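The cycle-counting arithmetic of Example 5.6 can be sketched as below. Only the two table inputs (lifetime cycles and cycle magnitude as a percentage of MOP) are computed; the Table 5.2 lookup itself is not reproduced. The 50-psi traffic cycle is assumed here because it is what the example's reported 5% of a 1000-psig MOP implies.

```python
# Example 5.6 bookkeeping: the two inputs to the fatigue scoring table.

def lifetime_cycles(events_per_year: float, years: float) -> float:
    return events_per_year * years

def magnitude_pct_of_mop(cycle_psi: float, mop_psig: float) -> float:
    return 100 * cycle_psi / mop_psig

# Case 1: compressor starts, twice a week for 4 years; 200-psig swing, MOP 1000
c1 = lifetime_cycles(2 * 52, 4)        # 416 cycles
m1 = magnitude_pct_of_mop(200, 1000)   # 20% of MOP

# Case 2: vehicle traffic, 100/day for 4 years; 50-psi stress (see lead-in)
c2 = lifetime_cycles(100 * 365, 4)     # 146,000 cycles
m2 = magnitude_pct_of_mop(50, 1000)    # 5% of MOP
```

The worst case is then whichever (cycles, magnitude) pair yields the lowest point value in the table.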
Cracking: a deeper look
All materials have flaws and defects, if only at the microscopic
level. Given enough stress, any crack will enlarge, growing in
depth and width. Crack growth is not predictable under real-world conditions. It may occur gradually or literally at the
speed of sound through the material. (See also discussions on
possible failure hole sizes in Chapters 7 and 14.)
As contributors to fatigue failures, several common crack-enhancing mechanisms have been identified. Hydrogen-induced cracking (HIC), stress corrosion cracking (SCC), and
sulfide stress corrosion cracking (SSCC) are recognized flaw-creating or flaw-propagating phenomena (see Chapter 4). The
susceptibility of a material to these mechanisms depends on
several variables. The material composition is one of the more
important variables. Alloys, added in small quantities to iron-carbon mixtures, create steels with differing properties.
Toughness is the material property that resists fatigue failure. A
trade-off often occurs as material toughness is increased but
other important properties such as corrosion resistance, weldability, and brittle-ductile transitions may be adversely
affected. The fracture toughness of a material is a measure of
the degree of plastic deformation that can occur before full failure. This plays a significant role in fatigue failures. Much more
energy is required to fail a material that has a lot of fracture
toughness, because the material can absorb some of the energy
that may otherwise be contributing directly to a failure. A larger
defect is required to fail a material having greater fracture
toughness. Compare glass (low fracture toughness) with copper (high fracture toughness). In general, as yield strength goes
up, fracture toughness goes down. Therefore, flaw tolerance
often decreases in higher strength materials.
Another contributor to fatigue failures is the presence of
stress concentrators. Any geometric discontinuity such as a
hole, a crack, or a notch, can amplify the stress level in the
material. Coupled with the presence of fatigue loadings, the situation can be further aggravated and make the material even
more susceptible to this type of failure.
The process of heating and cooling of steel during initial
formation and also during subsequent heating (welding) plays
a large role in determining the microstructure of the steel.
The microstructure of two identical compositions that
were heat treated in different manners may be completely different. One may be brittle (lacks toughness), and the other
might be ductile at normal temperatures. The welding process
forms what is known as the heat-affected zone (HAZ). This is
the portion of the parent metal adjacent to the weld that has an
altered microstructure due to the heat of the welding operation.
The HAZ is often a more brittle area in which a crack might form.
Because the HAZ is an important element in the structural
strength of the pipe, special attention must be paid to the welding process that creates this HAZ. The choice of welding temperature, speed of welding, preheating, post-heating, weld
metal type, and even the type of weld flux, all affect the creation of the HAZ. Improper welding procedures, either
because of the design or execution of the welding, can create a
pipeline that is much more susceptible to failure due to
cracking. This element of the risk picture is considered in the
potential for human error in the incorrect operations index
discussion in Chapter 6.
So-called “avalanche” or “catastrophic” fractures, where
crack propagation extends literally for miles along the pipeline,
have been seen in large-diameter, high-pressure gas lines. In
these “rapid-crack-growth” scenarios, the speed of the crack
growth exceeds the pipeline depressurization wave. This can
lead to a violent pipe failure where the steel is literally flattened
out or radically distorted for great distances. From a risk standpoint, such a rupture extends the release point along the
pipeline, but probably does not materially affect the amount of
gaseous product released. An increased threat of damage due to
flying debris is present. Preventive actions for this type of failure include crack arresters (sleeves or other attachments to the pipe designed to slow the crack propagation until the depressurization wave can pass) and the use of more crack-resistant materials, including multilayer wall pipe.
If the evaluator is particularly concerned with this type of
failure and feels that it can increase the risk picture in her systems, she can adjust the spill score in the leak impact factor
(Chapter 7) by giving credit for crack arrester installations and by recognizing the increased susceptibility of large-diameter, high-pressure gas lines (particularly those lacking material toughness).
C. Surge potential (weighting: 10%)
The potential for pressure surges, or water hammer effects, is
assessed here. The common mechanism for surges is the sudden conversion of kinetic energy to potential energy. A mass
of flowing fluid in a pipeline, for instance, has a certain
amount of kinetic energy associated with it. If this mass of
fluid is suddenly brought to a halt, the kinetic energy is converted to potential energy in the form of pressure. A sudden
valve closure or pump stoppage is a common initiator of such
a pressure surge or, as it is sometimes called, a pressure spike.
A moving product stream contacting a stationary mass of
fluid (while starting and stopping pumps, perhaps) is another
possible initiator.
This pressure spike is not isolated to the region of the initiator. In a fluid-filled pipeline, a positive pressure wave is propagated upstream of the point where the fluid flow is interrupted.
A negative pressure wave travels downstream from the point of
interruption. The pressure wave that travels back upstream
along the pipeline adds to the static pressure already in the
pipeline. A pipeline with a high upstream pressure might be
overstressed as this pressure wave arrives, causing the total
pressure to exceed the MOP.
The magnitude of the pressure surge depends on the fluid
modulus (density and elasticity), the fluid velocity, and the
speed of flow stoppage. In the case of a valve closure as the flow stoppage event, the critical aspect of the speed of closure might
not be the total time it takes to close the valve. Most of the
pressure spike occurs from the last 10% of the closing of a gate
valve, for instance.
From a risk standpoint, the situation can be improved
through the use of surge protection devices or devices that prevent quick flow stoppages (such as valves being closed too
quickly). The operator must understand the hazard and all possible initiating actions before corrective measures can be correctly employed. The evaluator should be assured that the
operator does indeed understand surge potential (see Appendix
D for calculations). He can then assign points to the section
based on the chances of a hazardous surge occurring.
To simplify this process, a hazardous surge can be defined as
one that is greater than 10% of the pipeline MOP. It may be
argued in some cases that a line, in its present service, may
operate far below MOP and hence, a 10% surge will still not
endanger the line. A valid argument, perhaps, but perhaps also an unnecessary complication (removing a risk variable that might become important as operations change) in the risk assessment. The evaluator should decide on a method and then
apply it uniformly to all sections being evaluated.
The point schedule can be set up with three general categories and room for interpolation between the categories. For
instance, evaluate the chances of a pressure surge of magnitude
greater than 10% of system MOP:
High probability     0 pts
Low probability      5 pts
Impossible           10 pts
High probability exists where closure devices, equipment,
fluid modulus, and fluid velocity all support the possibility of a
pressure surge. No mechanical preventers are in place.
Operating procedures to prevent surges may or may not be in place.
Low probability exists when surges can happen (fluid modulus and velocity can produce the surge), but are safely dealt
with by mechanical devices such as surge tanks, relief valves,
and slow valve closures, in addition to operating protocol. Low
probability also exists when the chance for a surge to occur is
only through a rather unlikely chain of events.
Impossible means that the fluid properties cannot, under
any reasonable circumstances, produce a pressure surge of
magnitude greater than 10% MOP.
Example 5.7: Scoring surge potential
A crude oil pipeline has flow rates and product characteristics that are supportive of pressure surges in excess of 10% of
MOP. The only identified initiation scenario is the rapid closure
of a mainline gate valve. All of these valves are equipped with
automatic electric openers that are geared to operate at a rate
less than the critical closure time (see Appendix D). If a valve
must be closed manually, it is still not possible to close the valve too quickly; many turns of the valve handwheel are required
for each 5% valve closure. Points for this scenario are assessed
at 5.
D. Integrity verifications (weighting: 25%)
Pipeline integrity is ensured by two main efforts: (1) the detection and removal of any integrity-threatening anomalies and (2)
the avoidance of future threats to the integrity (protecting the
asset). The latter is addressed by the many risk mitigation measures commonly employed by a pipeline operator, as discussed in Chapters 3 through 6.
The former effort involves inspection and testing and is fundamental to ensuring pipeline integrity, given the uncertainty
surrounding the protection efforts. The purpose of inspection
and testing is to validate the structural integrity of the pipeline
and its ability to sustain the operating pressures and other anticipated loads. The goal is to test and inspect the pipeline system
at frequent enough intervals to ensure pipeline integrity and
maintain the margin of safety. This was discussed earlier and
illustrated by Figure 5.3.
A defect is considered to be any undesirable pipe anomaly, such as a crack, gouge, dent, or metal loss, that could later lead to a leak or spill. Note that not all anomalies are defects. Some dents, gouges, metal loss, and even cracks will not affect the service life of a pipeline. Possible defects include seam weaknesses associated with low-frequency ERW and electric flash welded pipe, dents or gouges from past excavation damage or other external forces, external corrosion wall losses, internal corrosion wall losses, laminations, pipe body cracks, and circumferential weld defects and hard spots.
A conservative assumption underlying integrity verification
is that defects are present in the pipeline and are growing at
some rate, despite preventive measures. By inspecting or testing the pipeline at certain intervals, this growth can be interrupted before any defect reaches a failure size. Defects will
theoretically be at their largest size immediately before the next
integrity verification. This estimated size can be related to a
failure probability by considering uncertainty in measurements
and calculations. Therefore, the integrity re-verification interval is implicitly establishing a maximum probability of failure
for each failure mode.
The absence of any defect of sufficient size to compromise
the integrity of the pipeline is most commonly proven through pressure testing and/or ILI, the two most comprehensive
integrity validation techniques used in the hydrocarbon transmission pipeline industry today. Integrity is also sometimes
inferred through absence of leaks and verifications of protective systems. For instance, CP counteracts external corrosion
of steel pipe and its potential effectiveness is determined
through pipe-to-soil voltage surveys along the length of the
pipeline, as described in Chapter 4. All of these measurement-based inspections and tests are occasionally supported by visual inspections of the system. Each of these components of
inspection and testing of the pipeline can be-and usually
should be-a part of the risk assessment.
Common methods of pipeline survey, inspection, and testing
are shown in Appendix G. Pipe wall inspections include non-
destructive testing (NDT) techniques such as ultrasonic, magnetic particle, dye penetrant, etc., to find pipe wall flaws that
are difficult or impossible to detect with the naked eye.
Evaluating the integrity verification
For purposes of risk assessment, the age and robustness of the
most recent integrity verification should drive the score assignment. A series of inspections, especially in-line inspections whose results can be overlaid so that even minor changes are detected, is more valuable still.
Age of verification
The age consideration can be a simple proportional scoring approach using a predetermined information deterioration rate (see discussions in Chapter 2). Note that information deterioration refers to the diminishing usefulness of past data in determining current pipe condition. The past data should be used to characterize the current effective wall thickness until better information replaces it. Five- or 10-year information deterioration periods (after which the inspection or test data no longer provide meaningful evidence of current integrity) are common defaults, but these can be set more scientifically.
An inspection interval is best established on the basis of two
factors: (1) the largest defect that could have survived or been undetected in the last test or inspection and (2) an assumed
defect growth rate.
A failure size must be estimated in order to calculate a time
to failure. For cracklike defects, fracture mechanics and estimates of stress cycles (frequency and magnitude) are required
to determine this. For metal loss from corrosion, the failure size
for purposes of probability calculations can be determined
by two criteria: (1) the depth of the anomaly and (2) a calculated
remaining pressure-containing capacity of the defect configuration. Two criteria are advisable since the accepted calculations for remaining strength (see Appendix C) are not
considered as reliable when anomaly depths exceed 80% of the
wall thickness. Likewise, depth alone is not a good indicator of
failure potential because stress level and defect configuration
are also important variables [86].
These defect growth rates can be estimated after successive integrity evaluations or, when such information is unavailable, based on conservative assumptions. With knowledge of
maximum surviving defect size, defect rate of growth, and
defect failure size, all of the ingredients are available to establish an optimum integrity verification schedule. This in turn
sets the information deterioration scale. Unfortunately, most
of these parameters are difficult to estimate with any degree of confidence, and the resulting schedules will also be rather uncertain.
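The interval logic described above (largest surviving defect, assumed growth rate, and defect failure size) can be sketched as a small calculation. The depths, growth rate, and safety factor below are illustrative assumptions, not values from this manual.

```python
# Hedged sketch of a reinspection-interval calculation: time for the
# largest defect that could have survived the last inspection to grow
# to failure size, reduced by a safety factor. Values are illustrative.

def reinspection_interval_yr(surviving_depth_mm: float,
                             failure_depth_mm: float,
                             growth_rate_mm_per_yr: float,
                             safety_factor: float = 2.0) -> float:
    """Years until the assumed largest surviving defect reaches the
    assumed failure depth, divided by a safety factor."""
    if failure_depth_mm <= surviving_depth_mm:
        return 0.0  # defect may already be at failure size; act now
    remaining = failure_depth_mm - surviving_depth_mm
    return remaining / growth_rate_mm_per_yr / safety_factor

# Example: 2.0 mm deepest defect that could have survived the last
# inspection, failure assumed at 80% of a 6.4 mm wall, and an assumed
# 0.2 mm/yr corrosion growth rate.
interval = reinspection_interval_yr(2.0, 0.8 * 6.4, 0.2)
```

The safety factor stands in for the uncertainty in each parameter noted above; a more formal treatment would vary each input probabilistically.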
Robustness of verification
Integrity verifications vary in terms of their accuracy and
ability to detect all types of potential integrity threats. The
robustness consideration for a pressure test can simply be the
pressure level above the maximum operating pressure. This
establishes the largest theoretical surviving defect. The role of
pressure level and a possible scoring protocol are discussed later in this chapter.
Visual and NDT inspections (see Appendix G) performed
on exposed pipe can be very thorough, but are very localized
assessments. A zone-of-influence approach (see Chapter 8) or
ideas taken from statistical sampling techniques can be used
to extrapolate integrity information from such localized inspections.
For an ILI, the assessment should ideally quantify the ability
of the ILI to detect all possible defects and characterize them to
a given accuracy. Given the myriad of possible defects, ILI
tools, interpretation software, and postinspection excavation
programs, this can be a complex undertaking. In the final
analysis, it is again the largest theoretical undetected defect that
best characterizes the robustness.
One approach is to characterize the ILI program (tool accuracy, data interpretation accuracy, excavation verification protocol) against all possible defect types.
When both a pressure test and an ILI have been done, the
scores can be additive up to the maximum allowed by the
variable weighting.
Pressure test
A pipeline pressure test is usually a hydrostatic pressure
test in which the pipeline is filled with water, then pressurized to a predetermined pressure, and held at this test pressure for a predetermined length of time. It is a destructive
testing technique because defects are discovered by pipe
failures during the test. Other test media such as air are
also sometimes used. Tests with compressible gases carry
greater damage potential since they can precipitate failures
and cause more extensive damage than testing with an incompressible fluid.
The test pressure exceeds the anticipated operational maximum internal pressure to prove that the system has a margin of
safety above that pressure. It is a powerful technique in that it
proves the strength of the entire system. It provides virtually
indisputable evidence as to the system integrity (within the test
parameters). However, pressure testing does not provide information on defects or damages present below its detection
threshold. Such surviving defects might later worsen and cause
a failure.
As noted previously, all materials have flaws and defects, if
only at the microscopic level. Given enough stress, any crack
will enlarge, growing in depth and width. Under the constant
stress of a pressure test, it is reasonable to assume that a
group of flaws beyond some minimum size will grow. Below
this minimum size, cracks will not grow unless the stress
level is increased. If the stress level is rather low, only the
largest of cracks will be growing. At higher stresses, smaller
and smaller cracks will begin to grow, propagating through
the material. When a crack reaches a critical size at a
given stress level, rapid, brittle failure of the structure is likely.
(See previous explanations of fracture toughness and crack
propagation in this chapter.) Certain configurations of relatively large defects can survive a hydrostatic test. A very narrow and deep groove can theoretically survive a hydrostatic
test and, due to very little remaining wall thickness, is more
susceptible to failure from any subsequent wall loss (perhaps
occurring through corrosion). Such defect configurations are
rare and their failure potential at a pressure lower than the test
pressure would require ongoing corrosion or crack growth.
However, the inability to detect such flaws is a limitation of
pressure testing.
By conducting a pressure test at high pressures, the pipeline
is being subjected to stress levels higher than it should ever
encounter in everyday operation. Ideally, then, when the
pipeline is depressured from the hydrostatic test, the only
cracks left in the material are of a size that will not grow under
the stresses of normal operations. All cracks that could have
grown to a critical size under normal pressure levels would
have already grown and failed under the higher stress levels of
the hydrostatic test.
Research suggests that the length of time that a test pressure
is maintained is not a critical factor. This is based on the
assumption that there is always crack growth and whenever the
test is stopped, a crack might be on the verge of its critical size
and, hence, close to failure.
The pressure level, however, is an important parameter. The
term pressure reversal refers to a scenario in which, after a successful pressure test, the pipeline fails at a pressure lower than
the test pressure. This occurs when a defect survives the test
pressure but is damaged by the test so that it later fails at a lower
pressure when the pipeline is repressurized. The higher the test
pressure relative to the normal operating pressure, the greater
the safety margin. The chances of a pressure reversal become
increasingly remote as the margin between test and operating
pressures increases. This is explained by the theory of critical
crack size discussed earlier.
Immediately after the pressure test, uncertainty about
pipeline integrity begins to grow again. Because a new defect
could be introduced at any time or defect growth could accelerate in a very localized region, the test’s usefulness is tied to
other operational aspects of the pipeline. Introduction of new
defects could come from a variety of sources, such as corrosion, third-party damages, soil movements, pressure cycles,
etc., all of which are contributing to the constantly changing
risk picture. For this reason, pressure test data have a finite
lifetime as a measure of pipeline integrity. A pipeline can
be retested at appropriate intervals to prove its structural integrity.
Interpretation of pressure test results is often a nontrivial
exercise. Although time duration of the test may not be critical,
the pressure is normally maintained for at least 4 hours for practical reasons, if not for compliance with applicable regulations.
During the test time (which is often between 4 and 24 hours),
temperature and strain will be affecting the pressure reading.
This requires a knowledgeable test engineer to properly interpret pressure fluctuations and to distinguish between a transient
effect and a small leak on the system or the inelastic expansion
of a component.
The evaluation point schedule for pressure testing can confirm proper test methods and assess the impact on risk on the
basis of time since the last test and the test level (in relation to
the normal maximum operating pressures). An example schedule follows:
(1) Calculate H, where H = (test pressure/MOP):

H < 1.10 (1.10 = test pressure 10% above MOP)    0 pts
1.11 < H < 1.25                                   5 pts
1.26 < H < 1.40                                   10 pts
H > 1.41                                          15 pts

or a simple equation can be used:

(H - 1) x 30 = point score (up to a maximum of 15 points)

and where the minimum = 0 points.
(2) Time since last test: Points = 10 - (years since test)
(minimum = 0 points)

A test 4 years ago     6 pts
A test 11 years ago    0 pts
Add points from (1) and (2) above to obtain the total hydrostatic test score. In this schedule, maximum points are given to
a test that occurred within the last year and that was conducted
to a pressure greater than 40% above the maximum operating pressure.
Example 5.8: Scoring hydrostatic pressure tests
The evaluator is studying a natural gas line whose MAOP is
1000 psig. This section of line was hydrostatically tested 6
years ago to a pressure of 1400 psig. Documentation on hand
indicates that the test was properly performed and analyzed.
Points are awarded as follows:
H = 1400/1000 = 1.4

(1) (1.4 - 1) x 30 = 12 pts
(2) 10 - 6 = 4 pts

Total = 16 pts
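The schedule and Example 5.8 above can be sketched as a small function. The function name and the clamping of each term to its floor and cap are mine; the point formulas are from the schedule in the text.

```python
# Sketch of the example hydrostatic test point schedule above.

def pressure_test_score(test_pressure: float, mop: float,
                        years_since_test: float) -> float:
    """Score = (H - 1) x 30, floored at 0 and capped at 15 points,
    plus 10 - (years since test), floored at 0."""
    h = test_pressure / mop
    ratio_pts = min(max((h - 1.0) * 30.0, 0.0), 15.0)
    time_pts = max(10.0 - years_since_test, 0.0)
    return ratio_pts + time_pts

# Example 5.8: a 1400 psig test on a 1000 psig MAOP line, 6 years ago
score = pressure_test_score(1400.0, 1000.0, 6.0)   # 12 + 4 = 16 pts
```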
In-line inspection
The use of instrumented pigs to inspect a pipeline from the
inside is a rapidly maturing technology. In-line inspection, also called smart pigging or intelligent pigging, refers to the use of
an electronically instrumented device traveling inside the
pipeline that measures characteristics of the pipe wall. Any
change in pipe wall can theoretically be detected. These devices
can also detect pipe wall cracks, laminations, and other material defects. Coating defects may someday also be detected in
this fashion. The pipe conditions found that require further evaluation are referred to as anomalies.
The industry began to use these tools in the 1980s, but ILI
presently benefits from advancements in electronics and computing technology that make it much more useful to the
pipeline industry. State-of-the-art ILI has advanced to the point
that many pipeline companies are basing extensive integrity
management programs around such inspection. A wealth of
information is expected from such inspections when a high-quality in-line device is used and supported by knowledgeable
data analysis. It is widely believed that pipe anomalies that are
of a size not detected through failure under a normal pressure
test can be detected through ILI.
While increasingly valuable, the technology is arguably
inexact, requiring experienced personnel to obtain most meaningful results. The ILI tools cannot accommodate all pipeline
system designs: there are currently restrictions on minimum pipe diameter, pipe shape, and radius of bends. All current ILI tools have difficulties in detecting certain types of problems; sometimes a combination of tools is needed for full defect detection. In-line inspection is also relatively costly. Precleaning
of the pipeline, possible service interruptions, risks of unnecessary repairs, and possible blockages caused by the instrument
are all possible additional costs to the operation. The ILI
process often involves trade-offs between more sensitive tools
(and the accompanying more expensive analyses) requiring
fewer excavation verifications and less expensive tools that
generate less accurate results and hence require more excavation verifications. Because this technique discovers existing
defects only, it is a lagging indicator of active failure mechanisms. ILI must be performed at sufficient intervals to detect
serious defect formations before they become critical.
General types of anomalies that can be detected to varying
degrees by ILI include
Geometric anomalies (dents, wrinkles, out-of-round pipe)
Metal loss (gouging and general, pitting, and channeling corrosion)
Laminations, cracks, or cracklike features.
Some examples of available ILI devices are caliper tools,
magnetic flux leakage low- and high-resolution tools, ultrasonic wall thickness tools, ultrasonic crack detection tools, and
elastic wave crack detection tools. Each of these tools has specific applications. Most tools can detect previous third-party
damage or impacts from other outside forces. Caliper tools are
used to locate pipe deformations such as dents or out-of-round
areas. Magnetic flux leakage tools identify areas of metal loss
with the size of the detectable area dependent on the degree of
resolution of the tool. Ultrasonic wall thickness tools detect
general wall thinning and laminations. So-called “crack tools”
are specifically designed to detect cracks, especially those
whose orientation is difficult to detect by other means.
Currently, no single tool is superior in detecting all types of
anomalies and not all ILI technologies are available for smaller
pipeline sizes.
Depending on vendor specifications and ILI tool type, detection thresholds can vary. The degree of resolution (the ability to
characterize an anomaly) also depends on anomaly size, shape,
and orientation in the pipe. The probability of detecting an
anomaly using ILI increases with increasing anomaly size.
Smaller anomalies as well as certain anomaly shapes and orientations have lower detection thresholds than others.
The most common tools employ either an ultrasonic or a
magnetic flux technology to perform the inspection. The ultrasonic devices use sound waves to continuously measure the
wall thickness around the entire circumference of the pipe as
the pig travels down the line. The thickness measurement is
obtained by measuring the difference in travel time between
sound pulses reflected from the inner pipe wall and the outer
pipe wall. A liquid couplant is often required to transmit the
ultrasonic waves from the transducer to the pipe wall.
The magnetic flux pig sets up a magnetic field in the pipe
wall and then measures this field. Changes in the pipe wall will
change the magnetic field. This device emphasizes the detection of anomalies rather than measurement of wall thickness,
although experienced personnel can closely estimate defect
sizes and wall thicknesses.
In either case, all data are recorded. Both types of pigs are
composed of several sections to accommodate the measuring
instruments, the recording instruments, a power supply, and
cups used for propulsion of the pig.
After receiving an ILI indication of an anomaly, an excavation is usually required to more accurately inspect the pipe, using visual and NDT techniques (see Appendix G), and
make repairs. Sample excavating to inspect the pipe is also used
to validate the ILI results. The process of selecting appropriate
excavation sites from the ILI results can be challenging. The
most severe anomalies are obviously inspected, but depending
on the resolution of the ILI tool and the skills of the data analyst, significant uncertainty surrounds a range of anomalies,
which may or may not be serious. Some inaccuracies also exist
in current ILI technology such as with distance measuring and
errors in pig data interpretation. These inaccuracies make
locating anomalies problematic.
Probability calculations can be performed to predict anomaly size survivability based on ILI tool detection capabilities,
measurement accuracy, and follow-up validation inspections.
These, combined with loading conditions and material science
concepts, would theoretically allow a probabilistic analysis of
future failure rates. Such calculations depend on many assumptions and hence carry significant uncertainty.
Several industry-accepted methods exist for determining
corrosion-flaw severity and for evaluating the remaining
strength in corroded pipe. ASME B31G, ASME B31G
Modified, and RSTRENG are examples of available methodologies. Several proprietary calculation methodologies are also
used by pipeline companies. These calculation routines require
measurements of the depth, geometry, and configuration of
corroded areas. Depending on the depths and proximity to one
another, some areas will have sufficient remaining strength
despite the corrosion damage. The calculation determines
whether the area must be repaired.
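These methods share a common pattern: measured flaw depth and length feed a bulging (Folias) factor that discounts a flow stress to yield a predicted failure pressure. The sketch below is in the spirit of the original ASME B31G parabolic-area criterion, simplified and written from memory; consult the standard itself for the authoritative equations, units, and applicability limits. All example values are assumptions.

```python
import math

# Hedged sketch in the spirit of the original ASME B31G remaining
# strength criterion for blunt corrosion flaws. Simplified; not a
# substitute for the standard.

def b31g_failure_pressure(smys_psi: float, d_out_in: float, wall_in: float,
                          flaw_depth_in: float, flaw_len_in: float) -> float:
    """Estimated failure pressure (psi) of a part-wall corrosion flaw."""
    z = flaw_len_in ** 2 / (d_out_in * wall_in)
    dt = flaw_depth_in / wall_in
    flow_stress = 1.1 * smys_psi
    if z <= 20.0:                      # "short" flaw, parabolic flaw area
        m = math.sqrt(1.0 + 0.8 * z)   # Folias bulging factor (approx.)
        sf = flow_stress * (1.0 - (2.0 / 3.0) * dt) / (1.0 - (2.0 / 3.0) * dt / m)
    else:                              # "long" flaw, rectangular flaw area
        sf = flow_stress * (1.0 - dt)
    return 2.0 * sf * wall_in / d_out_in  # Barlow conversion to pressure

# Assumed example: X52 pipe, 12.75 in OD, 0.250 in wall, with a
# 0.100 in deep x 3.0 in long corroded area.
p_fail = b31g_failure_pressure(52000.0, 12.75, 0.250, 0.100, 3.0)
```

Comparing the predicted failure pressure against the operating pressure times a safety factor is what determines whether the area must be repaired.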
Scoring the ILI process. As previously noted, ILI robustness
should be a part of the evaluation. It should ideally quantify the
ability of the ILI to detect all possible defects and characterize
them to a given accuracy. It is the largest theoretical surviving
defect that best characterizes the robustness.
A complete evaluation of the ILI process can be part of the
risk assessment to ensure that the integrity verification is
robust. This will require an examination of the services and
capabilities of the ILI provider, including
Tool types, performance, and tolerances
Analysis procedures and processes (human interpretations
and computer analyses of pig outputs)
Identification and reporting of immediate threats to the pipeline
Overall report content and analysis such as corrosion type,
defect type, and minimum defect criteria
Performance specifications and variance approval processes
Vendor personnel qualifications.
An example of scoring the ILI program (tool accuracy, data interpretation accuracy, excavation verification protocol) against all possible defect types is shown in Table 5.3. In this
example, the evaluator has identified five general types of
defects that are of concern. Each is assigned a weighting with
the relative weights summing to 100% of the integrity threats.
The weights are set based on each defect’s expected frequency
and severity. Historical failure rate data or expert judgment can
be used to set these. The third column, Possible points, is simply
each defect’s weighting multiplied by the integrity verification
variable’s maximum point value (35 points).
The next two columns reflect the capabilities of (1) the ILI tool and data interpretation accuracies and (2) the excavation verification program, respectively. These two capabilities are added together and then multiplied by the defect's point value to get the score for each defect. In the example values shown in the table, the ILI program is judged to be 40% effective in detecting significant cracking: 20% of that effectiveness comes from the ILI tool and 20% from the follow-up excavation program.
Similarly, the program is judged to be 95% effective in detecting significant corrosion metal loss: 90% from the tool capability and 5% from the excavation program. No follow-up excavation occurs for the remaining defect types (in this example), so the effectiveness comes entirely from the tool capability.
The sum of these scores is the assessment of the ILI robustness:
ILI robustness = sum{[defect weight] x [max points] x ([ILI capability]
+ [excavation verification capability])}
Adding the capabilities captures the belief that increased
capabilities of either the tool or the excavation program can offset limitations of the other. The sum is always less than 1.0, since a score of 1.0 represents 100% detection capability, which is not realistic.
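The formula above can be sketched directly. Only the cracking (20% + 20%) and corrosion (90% + 5%) capabilities come from the example in the text; the weights and the other capability values below are illustrative assumptions.

```python
# Sketch of the ILI robustness formula:
#   sum{ weight x max points x (ILI capability + excavation capability) }
# Weights and most capabilities are illustrative assumptions.

MAX_POINTS = 35.0

# defect: (weight, ILI capability, excavation verification capability)
defects = {
    "corrosion/metal loss":     (0.50, 0.90, 0.05),  # 95% from the text
    "cracking":                 (0.15, 0.20, 0.20),  # 40% from the text
    "third-party dents/gouges": (0.15, 0.85, 0.00),  # assumed
    "manufacturing defects":    (0.10, 0.80, 0.00),  # assumed
    "earth movement/ovality":   (0.10, 0.90, 0.00),  # assumed
}

robustness = sum(w * MAX_POINTS * min(ili + exc, 1.0)
                 for w, ili, exc in defects.values())
```

Because each combined capability stays below 1.0, the robustness score always lands below the variable's maximum point value.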
In the example of Table 5.3, the ILI tool and data interpretation are very capable in terms of detecting metal loss and geometry issues. Little excavation verification is needed. Because those defects represent the bulk of anticipated integrity problems, the integrity verification variable receives a score of 30.5 out of 35 possible points, based on this ILI program.

Table 5.3 Sample ILI robustness scoring program. Columns: failure mode/defect, weight, possible points, ILI capability, excavation verification capability, and score. Failure modes assessed: corrosion/metal loss; third-party damage/dents/gouges; manufacturing defects/laminations/H2 blisters; earth movement/ovality/buckling; and cracking.

These points will be reduced over time until either the information has aged to the point of little value or the ILI is repeated. For instance, if a fixed 5-year deterioration is assumed, the score after 3 years will be:

(5 - 3)/5 x 30.5 = 12.2 points
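The linear information-deterioration adjustment can be sketched as follows; the function name and the default period are mine.

```python
# Sketch of a linear information-deterioration adjustment: an
# inspection score decays to zero over the deterioration period.

def aged_score(score: float, years_since: float,
               deterioration_period_yr: float = 5.0) -> float:
    """Linearly reduce an inspection score to zero over the
    deterioration period."""
    remaining = max(deterioration_period_yr - years_since, 0.0)
    return remaining / deterioration_period_yr * score

# A 30.5-point ILI score, 3 years old, 5-year deterioration period
s = aged_score(30.5, 3.0)   # (5 - 3)/5 x 30.5 = 12.2 points
```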
Scoring ILI results. The previous discussion focused on scoring the ILI process: how timely and robust was it? It did not take into account the use of the results of the ILI. That aspect is discussed here.
ILI results provide direct evidence of damages and, by inference, of damage potential. Such evidence should be included in a risk assessment. The specific use of direct evidence in evaluating risk variables is discussed in Chapter 2.
ILI results provide evidence about possibly active failure
mechanisms, as illustrated in Table 5.4.
Integrity assessment and pipe strength. When integrity assessment information becomes available, it can and should become
a part of the pipe strength calculation. All defects left uncorrected should reduce calculated pipe strength in accordance
with standard engineering stress calculations described in this
chapter. Defects that are repaired should impact other risk
model variables as direct evidence of failure mechanisms (see
Appendix C). Even if no defects are detected, uncertainty has
been reduced with a corresponding reduction in perceived risk.
If the information is from very specific portions of the pipeline, such as after a visual or NDT inspection of an excavated section of pipe, a zone-of-influence approach (see Chapter 8) or ideas taken from statistical sampling techniques can be used to expand integrity information for scoring longer stretches of pipeline.
Full characterization of the impact of ILI indications on pipe strength would involve statistical analysis of anomaly measurements, considering tool accuracies. But even without detailed calculations, the effective actual wall thickness should be reduced depending on the nature of the anomalies detected in the pipeline segment being scored. For example, a severe corrosion indication might warrant a 50 to 70% reduction in effective pipe wall thickness.
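A rough illustration of this kind of adjustment follows. The anomaly severity classes and the reduction fractions are assumptions made for the sketch (the text only suggests 50 to 70% for a severe corrosion indication); an actual model would derive them from engineering stress calculations:

```python
# Assumed fraction of nominal wall "lost" for scoring purposes, by worst
# anomaly class in the segment. Only the severe value is anchored to the
# text's 50-70% suggestion; the rest are illustrative placeholders.
ASSUMED_WALL_REDUCTION = {
    "none": 0.0,
    "minor": 0.10,
    "moderate": 0.30,
    "severe": 0.60,   # mid-range of the 50-70% noted above
}

def effective_wall(nominal_wall_in: float, worst_anomaly: str) -> float:
    """Reduce nominal wall thickness based on the worst ILI anomaly class."""
    return nominal_wall_in * (1.0 - ASSUMED_WALL_REDUCTION[worst_anomaly])

print(round(effective_wall(0.250, "severe"), 3))  # 0.1
```

A 0.250-in. nominal wall with a severe indication would thus be scored as if only 0.100 in. of effective wall remained.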
This direct consideration of ILI results presumes that specific anomalies have been mapped to specific pipeline segments and that anomalies are few enough to consider
individually. If this is not the case, ILI results can also be used to
generally characterize the current integrity condition. This can
be done either as a preliminary step pending full investigations
or as stand-alone evidence.
One challenge often faced by evaluators is the requirement
that they use results from different types of ILI tools. Different
tools will often have different detection capabilities and accuracies. Even similar tools used at different times can have significant variations due to the evolving technologies. To make use of
all available information, it might be necessary to establish
equivalencies between indications from various tools. An indication from a low-resolution tool should be weighted differently from one from a high-resolution tool, given the different
uncertainties involved in each.
Approach 1
An example system for generalizing ILI results is outlined here.
Under this scoring scheme, pipeline segments are characterized in terms of past damages that might reduce pipe strength
and indicate possibly active failure mechanisms. Data from the
most recent ILI runs for each pipeline are collected. The
pipelines are divided into fixed-length segments, perhaps 100 or 1000 ft long. For each segment, all ILI indications are accumulated and characterized based on their frequency and severity. Each type of anomaly is counted and weighted and then
used in setting five variables, discussed in the following subsections, that characterize the relative amount and severity of
damage to the pipe wall.
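The segmentation and accumulation step just described can be sketched as follows. The segment length, anomaly types, and weights are illustrative assumptions, not values from the text:

```python
from collections import defaultdict

SEG_LEN_FT = 1000.0  # assumed fixed segment length
WEIGHTS = {"dent": 1.0, "dent_on_weld": 3.0, "metal_loss": 1.5}  # assumed weights

def score_segments(indications):
    """Accumulate weighted ILI indication counts per fixed-length segment.

    `indications` is an iterable of (station_ft, anomaly_type) pairs, i.e.,
    each ILI indication mapped to its stationing along the line.
    """
    scores = defaultdict(float)
    for station_ft, kind in indications:
        seg = int(station_ft // SEG_LEN_FT)   # which fixed-length segment
        scores[seg] += WEIGHTS[kind]
    return dict(scores)

print(score_segments([(120, "dent"), (850, "dent_on_weld"), (1500, "metal_loss")]))
# {0: 4.0, 1: 1.5}
```

Each segment's accumulated total would then feed the five damage-characterization variables discussed in the following subsections.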
External damage This variable represents the relative quantity and severity of dents, gouges, and other indications of outside force damage. It is created by using the counts of dents,
dents on welds, and top side indications from recent ILI results.
Each is weighted according to its possible impact on pipe
strength. Higher weightings are assigned to anomalies on welds
and/or those more likely to be related to third-party damage and, hence, possibly involving a gouge or a more severe contour or dent. As an overall adjustment to risk scores, this variable reduces the previously calculated third-party index by up to 10%.
Corrosion remaining strength This variable represents the relative remaining strength, from a pressure-containing viewpoint, of the pipe after allowing for metal losses due to corrosion. It represents the relative severity of metal loss by
accumulating the lengths and depths of metal loss indications
in each pipeline segment. Greater emphasis is given to lengths
in keeping with commonly accepted formulas for calculating
the remaining strength of pipe. As an adjustment to risk scores, this variable reduces the previously calculated safety factor by up to 30%.
Table 5.4 Interpretation of ILI results
Geometric anomalies (dents, wrinkles, out-of-round pipe): third-party damages (normally on top and sides); improper support or bedding (normally on bottom); excessive external loads
Metal loss (gouging and general, pitting, and channeling corrosion): gouge = third-party damage; metal loss = external or internal corrosion
Laminations, cracks, or cracklike features: fatigue and/or manufacturing defects

5/110 Design Index

Corrosion metal loss This variable represents the relative quantity and severity of corrosion damages. It measures the relative volume of metal losses from corrosion, either internal or external. The volume of each metal loss indication is approximated by assuming a parabolic shape for the metal loss configuration. As an adjustment to risk scores, this variable reduces the previously calculated corrosion index by up to 50%.
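The parabolic approximation just mentioned can be sketched numerically. Treating the longitudinal loss profile as parabolic gives a cross-sectional loss area of 2/3 × depth × length (the commonly used parabolic-area assumption); extending that by the indication width to approximate a volume is an assumption of this sketch:

```python
def metal_loss_volume(length_in: float, width_in: float, depth_in: float) -> float:
    """Approximate the volume of a metal-loss indication assuming a
    parabolic profile: area = 2/3 * depth * length, times the width."""
    return (2.0 / 3.0) * depth_in * length_in * width_in

# A 3-in. long, 1-in. wide indication, 0.1 in. deep:
print(round(metal_loss_volume(3.0, 1.0, 0.1), 3))  # 0.2 (cubic inches)
```

Summing these volumes over all indications in a segment yields the relative metal-loss measure that drives the corrosion index adjustment.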
Crack This variable represents the relative quantity and
severity of cracking and crack-like indications. As an adjustment to risk scores, this variable reduces the previously calculated safety factor by up to 90%, recognizing the relative
unpredictability and severity of cracking.
Pipe wall flaws This is the combination of the other four variables described above. As an adjustment to risk scores, this variable reduces the previously calculated safety factor by up to 90%, in addition to previous reductions.
After this analysis, each pipeline segment has been characterized in terms of the five defect-type variables shown above.
Those five variables each impact a previously determined risk
score, as noted. In other words, the pipeline segment is penalized for having damages that are evidence of inadequate corrosion control, weakened pipe wall, etc. The amount of the penalty is proportional to the ILI score and the maximum possible value of the risk variable. This worst-case penalty is set
on the basis of how much influence that factor could have on
failure probability.
Default values are set for missing information, usually due to
a lack of inspection information. Therefore, the default value
represents a condition where no current inspection information
is available and the presence of some level of anomalies will be
conservatively assumed.
In this particular application, it was conservatively assumed
that an ILI yields no useful information after 5 years from the
inspection date. ILI scores will therefore be assumed to worsen
by 20% each year until the default value is reached.
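One way to read the 20%-per-year worsening rule is as a linear move from the ILI-based score toward the default value over 5 years. The function below is a sketch under that interpretation; the score values used in the example are illustrative:

```python
def aged_score(ili_score: float, default_score: float, years: float) -> float:
    """Move an ILI-based score toward the default by 20% of the original
    gap per year, reaching the default value at 5 years.

    Assumes 'worsen by 20% each year' means 20% of the distance between
    the fresh ILI score and the no-information default.
    """
    fraction_aged = min(years * 0.20, 1.0)   # capped once default is reached
    return ili_score + (default_score - ili_score) * fraction_aged

# Illustrative values: fresh score 90, default 40.
print(aged_score(90.0, 40.0, 2))  # 70.0 after two years
```

After 5 or more years the function simply returns the default, matching the stated assumption that the ILI yields no useful information beyond that point.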
The ILI score is improved through visual inspections and
removal of any damages present. A pipeline segment that is partially replaced or repaired will show an improvement under this
scoring protocol since the anomaly count will have been
reduced which reduces the corresponding defect penalty. The
penalty can also be reduced even if the ILI score does not
improve by anomaly removal. This can happen if a root cause
analysis of the ILI anomalies concludes that active mechanisms
are not present, despite a poor ILI score. For example, the root
cause analysis might use previous ILI results to demonstrate
that corrosion damage is old and corrosion has been halted.
This is a rather complex approach and is not fully detailed
here. It is included to demonstrate one possible method to more
fully consider evidence from previous ILI in a general (not
anomaly-specific) manner.
Approach 2
Another example of an ILI scoring application where corrosion
evaluations are adjusted by recent ILI results is presented here.
First, an ILI score is generated that characterizes the overall corrosion metal loss in the pipeline segment. This characterization could be based on a system similar to that of Approach 1 or
it could simply involve a scale for accumulating frequency and
severity of wall loss damages in a segment.
When an ILI score indicates actual corrosion damage, but
risk assessment scores for corrosion potential do not indicate a
high potential, then a conflict may exist between the direct and
the indirect evidence. It will sometimes not be known exactly
where the inconsistency lies until complete investigations are
performed. The conflict could reflect an overly optimistic
assessment of effectiveness of mitigation measures (coatings,
CP, etc.) or it could reflect an underestimate of the harshness of
the environment. It could also be old damage from corrosion
that has since been mitigated.
To ensure that the corrosion scores always reflect the best
available information, limitations could be placed on corrosion
scores, in proportion to the ILI scores, pending results of the
full investigation. This is illustrated in Table 4.11 of Chapter 4.
E. Land movements (15% weighting in example)
A pipeline may be subjected to stresses due to land movements
and/or geotechnical events of various kinds. These movements
may be sudden and catastrophic or they may be long-term
deformations that induce stresses on the pipeline over a period
of years. These can cause immediate failures or add considerable stresses to the pipeline and should be carefully considered
in a risk analysis.
A common categorization of failure causes is external
forces. This category blends several failure causes and makes it
difficult to separate land movements from third-party damages
as a root cause of the failure. Since this separation is critical in
risk management efforts, this risk assessment model isolates
land movements as a specific failure mode under the Design
Index. The land movement threat is very location specific.
Many miles of pipeline are located in regions where potentially
damaging land movements are virtually impossible. On the
other hand, land movements are the primary cause of failures,
outweighing all other failure modes, for other pipelines. All of
these issues make the assignment of a weighting difficult. It
often becomes an issue of model scope, as discussed in
Chapter 2. The suggested weighting presented here should be
examined in consideration of all pipelines to be assessed with
the risk model. Where land movements are a very high threat, a
fifth failure probability index can be established specifically
for the land movement failure mode.
Land movement, or geotechnical issues in general, can be
categorized in various ways. One method is proposed whereby
such events are referred to as natural hazards and categorized
as shown in Table 5.6.
In the following paragraphs, land movements are examined as the potential for landslides, soil movements, tsunamis, seismic events, aseismic faulting, and scour and erosion. Additional
threats such as sand dune formation and movement or iceberg
scour (see Chapter 12) can be included in an existing category
or evaluated independently. Land movements specific to the
offshore environment are discussed in Chapter 12.
Many of the potentially dangerous land movement scenarios have a slope involved (Figure 5.5). The presence of a slope adds the force of gravity. Landslides, rockslides, mudflows, and creep are the more well-known downslope movement phenomena. Another movement involving freezing, thawing, and gravity is solifluction, a cold-regions phenomenon distinct from the more common movements [93].

Table 5.6 Possible categorization of natural hazards
Specific events: on-ROW instability; soil erosion; debris flows; fault rupture; channel degradation; bank erosion
Source: Porter, M., and K. W. Savigny, "Natural Hazard and Risk Management for South American Pipelines," Proceedings of IPC 2002: 4th International Pipeline Conference, Calgary, Canada, September 2002.
Landslides can occur from heavy rain, especially on slopes
or hillsides with heavy cutting of vegetation or loadings from
construction or other activities that disturb the land. Slides can
also be caused by seismic activity. Landslide displacement of
pipe can cause structural damage and leaks by increased external force loading if the pipeline is buried under displaced soil.
Landslides can happen offshore also, with rockfall damage to pipelines possible.
Slope issues can be an important but often overlooked aspect
of changing pipeline stability. Slope alterations near, but outside, the right of way by third parties should be monitored.
Construction activities near or in the pipeline right of way may
produce slopes that are not stable and could put the pipeline at
risk. These activities include excavation for road or railway
cuts, removal of material from the toe of a slope, or adding significant material to the crest of a slope. Given that maintenance
activity involving excavation could potentially occur without
engineering supervision, standard procedures may be warranted to require notification of an engineer should such conditions be found to exist.
In soil sliding analyses, a pipeline experiences axial and
bending loads depending on the direction of sliding movement
with respect to the pipe axis. Axial strains in the pipeline are
caused by soil sliding normal to the pipe axis. If the sliding
movement is 90 degrees to the pipe axis, the pipeline will predominantly experience tensile strain with small compressive
bending strains present at the transition zones of the liquefied
and nonliquefied soil sections. If the sliding movement is 45
degrees to the pipeline, both compressive and tensile axial
strains increase significantly due to the combination of axial
and bending loads.
Impact loadings are also possible, especially involving rockslides and aboveground pipeline components. An evaluation for rockfall hazards to railroads has identified some key variables to assess. These are shown in Table 5.7. An evaluation methodology like this is readily modified to be applicable to pipelines.
Some available databases provide rankings for landslide
potential. As with soils data, these are very coarse rankings
and are best supplemented with field surveys or local
knowledge.
Soils (shrink, swell, subsidence, settling)
Effects that are not slope oriented include changes in soil vol-
ume causing shrinkage, swelling, or subsidence. These can be
caused by differential heating, cooling, or moisture contents.
Sudden subsidence or settling can cause shear forces as well as
bending stresses.
Figure 5.5 Sudden slope failure over pipeline. The figure labels show the line of the original slope, the slope profile after the slope failure, and the pipe position after the failure; the displacement has added bending stresses to the pipe.
Table 5.7 Rockfall hazard assessment variables

Source volumes: Volume of rock that could fall during any one event. Uses three categories of potential volume; highest category is >3 m3.
Likelihood of source volume detaching and reaching railroad track: "Favorable" or "unfavorable" geological orientations.
Effective mitigation: Use of measures to either hold source volumes in place (anchors, dowels, etc.) or protect the track (ditches, berms, etc.). Measures are judged as either "effective" or "ineffective."
Natural barriers: "Effective" aprons, dense vegetation, larger distances, etc., that prevent contact with track.
Rock size: Probability of certain dimensions and fragmentation of falling rock; characterizes resultant rubble.

Source: Porter, M., A. Baumgard, and K. W. Savigny, "A Hazard and Risk Management System for Large Rock Slope Hazards Affecting Pipelines in Mountainous Terrain," Proceedings of IPC 2002: 4th International Pipeline Conference, Calgary, Canada, September 2002.
Many pipelines traverse areas of highly expansive clays that
are particularly susceptible to swelling and shrinkage due
to moisture content changes. These effects can be especially
pronounced if the soil is confined between nonyielding surfaces. Such movements of soil against the pipe can damage the
pipe coating and induce stresses in the pipe wall. Good installation practice avoids embedding pipes directly in such soils. A
bedding material is used to surround the line to protect the coating and the pipe. Again, rigid pipes are more susceptible to
structural damage from expansive soils.
Shrink or swell behavior of pipeline foundation soils can lead to excessive pipe deflections. The potential
for excessive stresses is often seen in locations where the
pipeline connects with a facility (pump station or terminal) on a
foundation. In this circumstance, the difference in loading on
foundation soils below the pipeline and below the facility could
lead to differences in settlement and stresses on connections.
Frost heave is a cold-region phenomenon involving temperature and moisture effects that cause soil movements. As ice
or ice lenses are formed in the soil, the soil expands due to the
freezing of the moisture. This expansion can cause vertical or
uplift pressure on a buried pipeline. The amount of increased
load on the pipe is partially dependent on the depth of frost penetration and the pipe characteristics. Rigid pipes are more easily
damaged by this phenomenon. Pipelines are generally placed at
depths below the frost lines to avoid frost loading problems.
Previous mining (coal for example) operations might
increase the threat of subsidence in some areas. Changes in
groundwater can also contribute to the subsidence threat.
Ground surface subsidence can be a regional phenomenon. It
may be a consequence of excessive rates of pumpage of water
from the ground and occasionally from production of oil and
gas at shallow depths. This phenomenon occurs where fluids
are produced from unconsolidated strata that compact as pore
fluid pressures are reduced.
Seismic events

Seismic events pose another threat to pipelines. Aboveground facilities are generally considered to be more vulnerable than buried facilities; however, high-stress mechanisms can be at
work in either case. Liquefaction fluidizes sandy soils to a level
at which they may no longer support the pipeline. Strong
ground motions can damage aboveground structures. Fault
movements sometimes cause severe stresses in buried pipe. A
landslide can overstress both aboveground and buried facilities.
Threats from seismic events include

Pipeline seismic shaking due to the propagation of seismic waves
Pipeline transverse and longitudinal sliding due to soil liquefaction
Pipeline flotation and settlement due to soil liquefaction
Failure of surface soils (soil raveling)
Seismic-induced tsunami loads that can adversely affect the pipeline
Key variables that influence a pipe’s vulnerability to seismic
events include
Pipeline characteristics
Diameter (empirical evidence, i.e., data from past seismic events, indicates that larger diameters have lower failure rates)
Material (cast iron and other more brittle pipe materials tend to perform worse)
Age (under the presumption that age is correlated to level of deterioration, older systems might have more weaknesses and, hence, be more vulnerable to damage)
Joining (continuous pipelines, such as welded steel, tend to perform better than systems with joints such as flanges or couplings)
Branches (presence of connections and branches tends to concentrate stresses, leading to more failures)
Seismic event characteristics
Peak ground velocity
Peak ground deformation
Landslide potential
To design a pipeline to withstand seismic forces, earthquake
type and frequency parameters must be defined. This is often
done in terms of probability of exceedance. For instance, a common building code requirement in the U.S. is to design for an earthquake event with a probability of exceedance of 10% in 50 years:

Probability of exceedance = 1 - (1 - 1/T)^t

where
t = design life
T = return period
For example, a 10% probability of exceedance in 50 years equates to an annual probability of 1 in 475 of a certain ground motion being exceeded each year. A ground motion noted as having a 10% probability of exceedance in 50 years means that the level of ground motion has a low chance of being exceeded in the next 50 years. In fact, there is a 90% chance that these ground motions will not be exceeded.
requires engineers to design structures for larger, rarer ground
motions than those expected to occur during a 50-year interval.
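The exceedance relationship above can be checked numerically; the function names below are illustrative:

```python
def prob_exceedance(design_life_yr: float, return_period_yr: float) -> float:
    """P = 1 - (1 - 1/T)^t, the probability that the design ground motion
    is exceeded at least once during the design life."""
    return 1.0 - (1.0 - 1.0 / return_period_yr) ** design_life_yr

def return_period(design_life_yr: float, p_exceed: float) -> float:
    """Invert the relationship: the return period T that gives the target
    probability of exceedance over the design life."""
    return 1.0 / (1.0 - (1.0 - p_exceed) ** (1.0 / design_life_yr))

# 10% probability of exceedance over a 50-year design life:
print(round(return_period(50, 0.10)))  # 475, i.e., the "1 in 475" annual probability
```

Running the forward calculation with T = 475 years recovers approximately the 10% figure, confirming the "1 in 475" equivalence quoted above.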
Fault displacement is another potential threat to a pipeline.
The relative displacement of the ground on opposite sides of an
assumed fault rupture will produce strains in a pipeline that
crosses the rupture.
Several types of fault movements are possible. Each produces
a different load scenario on the pipeline crossing the fault.
Generally, normal fault displacement leads to bending and elongation of the pipeline (tension dominant loading), whereas
reverse fault displacement leads to bending and compression of
the pipeline (compression dominant loading). Strike-slip fault
displacement will either stretch or compress the pipeline
depending on the angle at which the pipeline crosses the fault.
Oblique faulting is a combination of normal or reverse movement combined with strike-slip movement. Oblique faulting will result in either tension-dominant loading or compression-dominant loading of the pipeline, depending on the pipeline's fault crossing angle and the direction of the fault movements.
Fault displacement resulting in axial compression of the
pipeline is generally a more critical condition because it can
result in upheaval buckling. Upheaval buckling causes the
pipeline to bend or bow in an upward direction.
In typical settlement/flotation analyses, the pipeline is subjected to bending where it passes through the liquefied soil section, and the bending is maximum at the transition of liquefied
and nonliquefied soil zones. When bending occurs, axial strains
are compressive in the inner fibers of the bend and tensile in the
outer fibers of the bend relative to the neutral axis of the pipeline.
Calculations of maximum tensile and compressive strains
for known faults can be made and incorporated into the assessment. Similar calculations can also be made for maximum
strains in areas of seismic-induced soil liquefaction. These calculations require the use of assumptions such as maximum displacement, maximum slip angle, amount of pipeline cover, and
intensity of the seismic event. Ideally, such assumptions are
also captured in the risk assessment since they indicate the
amount of conservatism in the calculations.
Aseismic faulting
Aseismic faulting refers to shearing-type ground movements that
are too small and too frequent to cause measurable earth tremors.
Aseismic faults can be of a type that is not a discrete fracture in the earth. Rather, they can be zones of intensely sheared
ground. In the Houston, Texas, area, such zones exist, measure
a few tens of feet wide, and are oriented in a horizontal direction perpendicular to the trend of the fault [86]. Evidence of
aseismic faulting includes visible damages to streets (often
with sharp, faultlike displacements) and foundations, although not all such damage is the result of this phenomenon.
Aseismic faulting threatens pipe and pipe coatings because
soil mass is moving in a manner that can produce shear, bending, and buckling stresses on the pipeline. A monitoring program and stress calculations would be expected where a
pipeline is threatened by this phenomenon. The risk evaluator
can seek evidence that the operator is aware of the potential and
has either determined that there is no threat or is taking prudent
steps to protect the system.
Tsunamis

Tsunamis are high-velocity waves, often triggered by offshore seismic events or landslides. A seiche is a similar event that occurs in a deep lake [70b]. These events are of less concern in
deep water, but have the potential to cause rapid erosion and
scour in shallow areas. Most tsunamis are caused by a major
abrupt displacement of the seafloor. This hazard can be evaluated by considering the potential for seismic events, and the
beach geometry, pipeline depth, and other site-specific factors.
Often a history of such events is used to assess the threat.
Scour and erosion
Erosion is a common threat for shallow or above-grade pipelines, especially when near stream banks or areas subject to high-velocity flood flows. Even buried pipelines are exposed to threats from scour in certain situations. One possibility is for the depth of cover to erode during flood flows, exposing the pipeline. If a lateral force were sufficiently large, the pipeline could become
overstressed. Overstressing can also occur through loss of
support if the pipeline is undermined.
At pipeline crossings where the streambed is composed of
rock, the pipeline will often have been placed within a trench
cut into the rock. During floods at crossings where flow velocities are extremely high, the potential exists for pressure differences across the top of the pipeline to raise an exposed length
of pipeline into the flow, unless a concrete cap has been
installed or the overburden is otherwise sufficient to prevent
this. Calculations can be performed to estimate the lengths of
pipeline that could potentially be uplifted from a rock trench
into flows of varying velocities.
Fairly detailed scour studies have been performed on
some pipelines. These studies can be based on procedures
commonly used for highway structure evaluations such as
“Stream Stability at Highway Structures.” A scour and bank
stability study might involve the following steps:
Review the history of scour-related leaks and repairs for the pipeline system.
Perform hydraulic calculations to identify crossings with
potentially excessive flood flow velocities.
Obtain current and historic aerial photographs for each of the
crossings of potential concern to identify crossings that show
evidence of channel instability.
Perform site-specific geomorphic studies for specific crossings. These studies may suggest mitigation measures (if any)
to address scour.
Perform studies to address the issue of uplift of the pipeline
at high-velocity rock bed crossings.
The flood flow velocities for a crossing can be estimated
using cross-sections derived from the best available mapping,
flow rates derived from region-specific regression equations,
and channel/floodplain roughness values derived from a review
of vegetation from photography or site visits.
Upstream and downstream comparisons can be made to
identify any significant changes in stream flow regime or
visual evidence of scour that would warrant a site-specific geomorphic study.
Potential impact by foreign bodies on the pipeline after a
scour event can be considered, as well as stresses caused by
buoyancy, lateral water movements, pipe oscillations in the current, etc. The maximum allowable velocity against an exposed
pipe span can be estimated and compared to potential velocities, as one means of quantifying the threat.
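The comparison described here is straightforward to automate. In the sketch below, the milepost names and velocity values are placeholders; both velocities would come from the site-specific hydraulic and span studies described above:

```python
def span_at_risk(estimated_velocity_fps: float, allowable_velocity_fps: float) -> bool:
    """Flag an exposed span when the estimated flood flow velocity exceeds
    the maximum allowable velocity for that span."""
    return estimated_velocity_fps > allowable_velocity_fps

# Hypothetical crossings: name -> (estimated velocity, allowable velocity), ft/s
crossings = {"MP 12.4": (9.5, 8.0), "MP 30.1": (5.2, 8.0)}

flagged = [name for name, (est, allow) in crossings.items()
           if span_at_risk(est, allow)]
print(flagged)  # ['MP 12.4']
```

The flagged crossings would then be candidates for the site-specific geomorphic studies listed earlier.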
The potential for wind erosion, including dune formation
and movement, can also be evaluated here.
Evaluating land movement potential
The evaluator can establish a point schedule for assessing the
risk of pipeline failure due to land movements. The point scale
should reflect the relative risk among the pipeline sections evaluated. If the evaluations cover everything from pipelines in the
mountains of Alaska to the deserts of the Middle East, the range
of possible point values should similarly cover all possibilities.
Evaluations performed on pipelines in a consistent environment may need to incorporate more subtleties to distinguish the
differences in risk.
As noted, public databases are available that show relative
rankings for landslides, seismic peak ground accelerations, soil
shrink and swell behavior, scour potential, and other land
movement-related issues. These are often available at no cost
through government agencies. However, they are often on a
very coarse scale and will fail to pick up some very localized,
high-potential areas that are readily identified in a field survey
or are already well known.
Scoring of land movement
It is often advantageous to develop scoring scales for each type
of land movement. This helps to ensure that each potential
threat is examined individually. These can be added so that multiple threats in one location are captured. Directly using the relative ranking scales from the available databases, and then
supplementing this with local information, can make this a very
straightforward exercise.
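Combining per-threat scales might be sketched as follows. The threat names and point values are illustrative, and the convention that each threat contributes a penalty subtracted from the 15-point maximum (matching this section's example weighting, where more points mean lower risk) is an assumption of this sketch:

```python
MAX_POINTS = 15.0   # the example weighting for land movements in this index

def land_movement_score(threat_penalties: dict) -> float:
    """Sum the penalty points assessed for each land movement threat at a
    location and subtract from the maximum, so that multiple simultaneous
    threats are captured. Higher result = safer, floored at zero."""
    penalty = sum(threat_penalties.values())
    return max(0.0, MAX_POINTS - penalty)

# Hypothetical location with moderate landslide and scour exposure:
print(land_movement_score({"landslide": 5.0, "scour": 3.0, "seismic": 0.0}))  # 7.0
```

The floor at zero keeps a location with several severe threats from driving the variable below the bottom of its scale.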
The threat can alternatively be examined in a more qualitative fashion and for all threats simultaneously. The following
schedule is designed to cover pipeline evaluations in which the
pipelines are in moderately differing environments.
Potential for significant (damaging) soil movements:

High (0 pts) Areas where damaging soil movements are common or can be quite severe. Regular fault movements, landslides, subsidence, creep, or frost heave are seen. The pipeline is exposed to these movements. A rigid pipeline in an area of less frequent soil movements should also be classified here due to the increased susceptibility of rigid pipe to soil movement damage. Active earthquake faults in the immediate vicinity of the pipeline should be included in this category.

Medium (5 pts) Damaging soil movements are possible but rare or unlikely to affect the pipeline due to its depth or position. Topography and soil types are compatible with soil movements, although no damage in this area has been recorded.

Low (10 pts) Evidence of soil movements is rarely if ever seen. Movements and damage are not likely. There are no recorded episodes of structural damage due to soil movements. All rigid pipelines should fall into this category as a minimum, even when movements are rare.

None (15 pts) No evidence of any kind is seen to indicate potential threat due to soil movements.

Unknown (0 pts) In keeping with an "uncertainty = increased risk" bias, having no knowledge should register as high risk, pending the acquisition of information that suggests otherwise.
Initial investigation and ongoing monitoring are often the first
choices in mitigation of potentially damaging land movements.
Beyond that, many geotechnical and a few pipeline-specific
remedies are possible.
A geotechnical evaluation is the best method to determine
the potential for significant ground movements. In the absence
of such an evaluation, however, the evaluator should seek evidence in the form of operator experience. Large cracks in the
ground during dry spells, sink holes or sloughs that appear during periods of heavy rain, foundation problems on buildings
nearby, landslide or earthquake potential, observation of soil
movements over time or on a seasonal cycle, and displacements
of buried structures discovered during routine inspections are
all indicators that the area is susceptible. Even a brief survey of
the topography together with information as to the soil type and
the climatic conditions should either readily confirm the operator's experience or establish doubt in the evaluator's mind.
Anticipated soil movements are often confirmed by actual
measurements. Instruments such as inclinometers and extensometers can be used to detect even slight soil movements.
Although these instruments reveal soil movements, they are not
necessarily a direct indication of the stresses induced on the
pipe. They only indicate increased probability of additional
pipe stress. In areas prone to soil movements, these instruments
can be set to transmit alarms to warn when more drastic
changes have occurred.
Movements of the pipe itself are the best indication of
increased stress. Strain gauges attached to the pipe wall can be
used to monitor the movements of the pipeline, but must be
placed to detect the areas of greatest pipe strain (largest deflections). This requires knowledge of the most sensitive areas of
the pipe wall and the most likely movement scenarios. Use of
these gauges provides a direct measure of pipeline strain that
can be used to calculate increased stress levels.
Corrective actions can sometimes be performed to the point where the potential for significant movements is "none".
Examples include dewatering of the soil using surface and subsurface drainage systems and permanently moving the
pipeline. While changing the moisture content of the soil does
indeed change the soil movement picture, the evaluator should
assure herself that the potential has in fact been eliminated and
not merely reduced, before she assigns the “none” classification. Moving the pipeline includes burial at a depth below the
movement depth (determined by geotechnical study; usually
applies to slope movements), moving the line out of the area
where the potential exists, and placing the line aboveground
(may not be effective if the pipe supports are subject to soil
movement damage).
Earthquake monitoring systems tell the user when and where
an earthquake has occurred and what its magnitude is, often
only moments after the occurrence. This is very useful
information because areas that are likely to be damaged can be
immediately investigated. Specific pipeline designs to withstand seismic loadings are another mitigation measure.
Scour and erosion threats can be reduced through armoring
of the pipeline and/or reducing the potential through diversions or stabilizations. These can range from placements of
gravel or sandbags over the pipeline to installations of full
scale river diversion or sediment deposition structures to deep
pipeline installation via horizontal directional drill. The evaluator must evaluate such mitigations carefully, given the relatively high rate of failure of scour and erosion prevention measures.
Where a land movement potential exists and the operator has
taken steps to reduce the threat, point values may be adjusted by
judging the effectiveness of threat-mitigation actions, including the acts of monitoring, site evaluations, or other information gathering. Monitoring implies that corrective actions are
taken as needed. Continuous monitoring offers the benefit of
immediate indication of potential problems and should probably reflect lowered risk compared with occasional monitoring.
Continuous monitoring can be accomplished by transmitting a
signal from a soil movement indicator or from strain gauges
placed on the pipeline. Proper interpretation of and response to
these signals is implied in awarding the point values. Periodic
surveys are also commonly used to detect movements.
However, surveying cannot be relied on to detect sudden movements in a timely fashion.
In the case of landslide potential, especially a slow-acting
movement, stress relieving is a potential situation-specific remedy and can be accomplished by opening a trench parallel to or
over the pipeline. This effectively unloads the line from soil
movement pressures that may have been applied. Another
method is to excavate the pipeline and leave it aboveground.
Either of these is normally only a short-term solution. Installing
the pipeline aboveground on supports can be a permanent solution but, as already pointed out, may not be a good solution if
the supports are susceptible to soil movement damage. The use
of barriers to prevent landslide damage, for example, can also
be scored as stress relieving.
Example 5.9: Scoring potential for earth movements
In the section being evaluated, a brine pipeline traverses a
relatively unstable slope. There is substantial evidence of slow
downslope movements along this slope although sudden,
severe movements have not been observed. The line is thoroughly surveyed annually, with special attention paid to potential movements. The evaluator scores the hazard as somewhere
between “high” and “medium” because potentially damaging
movements can occur but have not yet been seen. This equates
to a point score of 3 points. The annual monitoring increases the
point score by 3 points, so the final score is 6 points.
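The arithmetic of Example 5.9 can be sketched as a minimal scoring function. The function name and the framing of "hazard points plus mitigation credit" are mine, offered only to make the point assignment explicit; the values come from the example above.

```python
# Minimal sketch of the Example 5.9 scoring logic: a hazard judged
# between "high" and "medium" earns 3 points, and the annual survey
# monitoring adds a 3-point mitigation credit. (The function and its
# names are illustrative, not an API from this manual.)

def earth_movement_score(hazard_points: int, monitoring_points: int) -> int:
    """Combine hazard severity points with mitigation credits."""
    return hazard_points + monitoring_points

# Example 5.9: hazard = 3 pts, annual monitoring = +3 pts
print(earth_movement_score(3, 3))  # 6
```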
Incorrect Operations Index

A. Design                              0-30 pts
   A1. Hazard Identification           0-4 pts
   A2. MOP Potential                   0-12 pts
   A3. Safety Systems                  0-10 pts
   A4. Material Selection              0-2 pts
   A5. Checks                          0-2 pts
B. Construction                        0-20 pts
   B1. Inspection                      0-10 pts
   B2. Materials                       0-2 pts
   B3. Joining                         0-2 pts
   B4. Backfill                        0-2 pts
   B5. Handling                        0-2 pts
   B6. Coating                         0-2 pts
C. Operations                          0-35 pts
   C1. Procedures                      0-7 pts
   C2. SCADA/Communications            0-3 pts
   C3. Drug Testing                    0-2 pts
   C4. Safety Programs                 0-2 pts
   C5. Surveys/Maps/Records            0-5 pts
   C6. Training                        0-10 pts
   C7. Mechanical Error Preventers     0-6 pts
D. Maintenance                         0-15 pts
   D1. Documentation                   0-2 pts
   D2. Schedule                        0-3 pts
   D3. Procedures                      0-10 pts

Total                                  0-100 pts
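The point breakdown above can be checked mechanically. The sketch below is illustrative only (the data structure and variable names are mine); the point values are those listed in the index breakdown.

```python
# Check that the sub-item maximum points sum to each category's
# stated maximum, and that the four categories sum to the
# 100-point index. (Structure and names are illustrative; the
# point values are taken from the index breakdown above.)

index = {
    "A. Design": (30, [4, 12, 10, 2, 2]),
    "B. Construction": (20, [10, 2, 2, 2, 2, 2]),
    "C. Operations": (35, [7, 3, 2, 2, 5, 10, 6]),
    "D. Maintenance": (15, [2, 3, 10]),
}

for name, (stated_max, items) in index.items():
    assert sum(items) == stated_max, name  # each category is internally consistent

total = sum(stated for stated, _ in index.values())
print(total)  # the index totals 100 points
```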
Human error potential
It has been reported that 80% of all accidents are due to human
fallibility. “In structures, for example, only about 10% of failures are due to a statistical variation in either the applied load or
the member resistance. The remainder are due to human error
or abuse” [57]. Human errors are estimated to have caused 62%
of all hazardous materials accidents in the United States [85].
In the transportation industry, pipelines are comparatively
insensitive to human interactions. Processes of moving products by rail, highway, or marine transport are usually more manpower
intensive and, hence, more error prone. However, human error
has played a direct or indirect role in most pipeline accidents.
Although one of the most important aspects of risk, the
potential for human error is perhaps the most difficult aspect to
quantify. Safety professionals emphasize that identification of
incorrect human behavior may be the key to a breakthrough in
accident prevention. The factors underlying behavior and attitude cross into areas of psychology, sociology, biology, etc.,
and are far beyond the simple assessment technique that is
being built here. The role of worker stress is discussed in
Chapter 9 and can be an addition to the basic risk assessment
proposed here.
This index assesses the potential for pipeline failure caused
by errors committed by the pipeline personnel in designing,
building, operating, or maintaining a pipeline. Human error
can logically impact any of the previous probability-of-failure
indexes-active corrosion, for example, could indicate an error
in corrosion control activities. Scoring error potential in a separate index has the advantage of avoiding duplicate assessments
for many of the pertinent risk variables. For instance, assessments of training programs and use of written procedures will
generally apply to all failure modes. Capturing such assessments in a central location is a modeling convenience and further facilitates identification of risk mitigation opportunities in
the risk management phase. If the evaluator feels that there are
differences in human error potential for each failure mode, he
can base his score on the worst case or evaluate human error
variables separately for each failure mode.
Sometimes an action deemed to be correct at the time later
proves to be an error, or at least regrettable. Examples are
found in the many design and construction techniques that
have changed over the years, presumably because previous techniques were discovered not to work well or newer techniques proved superior. Low frequency ERW pipe manufacturing
processes (see Chapter 5) and the use of certain mechanical
couplings (see Chapter 13) are specific examples. These kinds
Figure 6.1 Basic risk assessment model.

Figure 6.2 Assessing human error potential: sample of data used to score the incorrect operations index (hazard identification, MAOP potential, safety systems, material selection, and mechanical error preventers).
of issues are really not errors since they presumably were determined based on best industry practices at the time. For a risk
assessment, they are normally better assessed in the Design
Index if they relate to strength (wrinkle bends, low frequency
ERW pipe, etc.) or in the Corrosion Index if related to periods
with no cathodic protection, incomplete pipe-to-soil reading
techniques, etc.
Actions such as vandalism, sabotage, or accidents caused
by the public are not considered here. These are addressed to
some extent in the third-party damage index and in the optional
sabotage module discussed in Chapter 9.
Many variables thought to impact human error potential are
identified here. The risk evaluator should incorporate additional knowledge and experience into this index as such knowledge becomes available. If data, observations, or expert
judgment demonstrate correlations between accidents and
variables such as years of experience, time of day, level
of education, diet, or salary, then these variables can be
included in the risk picture. It is not thought that the state of the
art has advanced to that point yet.
Human interaction can be either positive (preventing or
mitigating failures) or negative (exacerbating or initiating
failures). Where efforts are made to improve human performance, risk reduction is achieved. Improvements may come
through better designs of the pipeline system, development
of better employees, and/or through improved management
programs. Such improvements are a component of risk management.
An important concept in assessing human error risk is the
supposition that small errors at any point in a process can
leave the system vulnerable to failure at a later stage. With
this in mind, the evaluator must assess the potential for
human error in each of the four phases of a pipeline's life:
design, construction, operation, and maintenance. A slight
design or construction error may not be apparent for years
until it is suddenly a contributor to a failure. By viewing the
entire pipelining process as a chain of interlinked steps, we
can also identify possible intervention points, where checks
or inspections or special equipment can be inserted to avoid a
human error-type failure. Because many pipeline accidents
are the result of more than one thing going wrong, there
are often several opportunities to intervene in the failure sequence.
Specific items and actions that are thought to minimize the
potential for errors should be identified and incorporated into
the risk assessment. A point schedule can be used to weigh the
relative impact of each item on the risk picture. Many of these
variables will require subjective evaluations. The evaluator
should take steps to ensure consistency by specifying, if
only qualitatively, conditions that lead to specific point
assignments. The point scores for many of these items will
usually be consistent across many pipeline sections if not
entire systems.
Ideally, the evaluator will find information relating to the
pipeline's design, construction, and maintenance on which
risk scores can be based. However, it is not unusual, especially
in the case of older systems, for such information to be partially
or wholly unavailable. In such a case, the evaluator can take
steps to obtain more information about the pipeline's history.
Metallurgical analysis of materials, depth-of-cover surveys,
and research of manufacturers' records are some ways in which
information can be reconstructed. In the absence of data, a philosophy regarding level of proof can be adopted. Perhaps more
so than in other failure modes, hearsay and employee testimony
might be available and appropriate to varying degrees. The conservative and recommended approach is to assume higher risks
when uncertainty is high. As always, consistency in assigning
points is important.
This portion of the assessment involves many variables with
low point values. So, most variables will not have a large impact
on risk individually, but in aggregate, the scores are thought to
present a picture of the relative potential for human error leading directly to a pipeline failure.
Because the potential for human error on a pipeline is related
to the operation of stations, Chapter 13 should also be reviewed
for ideas regarding station risk assessment.
A. Design (weighting: 30%)
Design and planning processes are often not well defined
or documented and are often highly variable. Consequently,
they are perhaps the most difficult aspect to assess for an
existing pipeline.
The suggested approach is for the evaluator to ask for evidence that certain error-preventing actions were taken during
the design phase. It would not be inappropriate to insist on documentation for each item. If design documents are available, a
check or certification of the design can be done to verify that no
obvious errors have been made.
Aspects that can be scored in this portion of the assessment
are as follows:

Hazard identification      4 pts
MOP potential             12 pts
Safety systems            10 pts
Material selection         2 pts
Checks                     2 pts
A1. Hazard identification (0-4 pts)
Here, the evaluator checks to see that efforts were made to
identify all credible hazards associated with the pipeline
and its operation. A hazard must be clearly understood
before appropriate risk reduction measures can be employed.
This would include all possible failure modes in a pipeline
risk assessment. Thoroughness is important, as is timeliness:
Does the assessment reflect current conditions? Have all initiating events been considered, even the rarer events
such as temperature-induced overpressure, fire around the
facilities, or safety device failure? (HAZOP studies and other
appropriate hazard identification techniques are discussed in
Chapter 1.)
Ideally, the evaluator should see some documentation that
shows that a complete hazard identification was performed. If
documentation is not available, she can interview system
experts or explore other ways to verify that at least the more
obvious scenarios have been addressed.
Points are awarded (maximum of 4 points) based on the
thoroughness of the hazard studies, with a documented, current,
and formal hazard identification process getting the highest
score.
A2. MOP potential (0-12 pts)
The possibility of exceeding the pressure for which the system
was designed is an element of the risk picture. Obviously, a system where it is not physically possible to exceed the design
pressure is inherently safer than one where the possibility
exists. This often occurs when a pipeline system is operated at
levels well below its original design intent. This is a relatively
common occurrence as pipeline systems change service or
ownership or as throughputs turn out to be less than intended.
The ease with which design limits might be exceeded is
assessed here. The first things required for this assessment are
knowledge of the source pressure (pump, compressor, connecting pipelines, tank, well, etc.) and knowledge of the system
strength. Then the evaluator must determine the ease with
which an overpressure event could occur. Would it take only the
inadvertent closure of one valve to rapidly build a pressure that
is too high? Or would it take many hours and many missed
opportunities before pressure levels were raised to a dangerous level?
Structural failure can be defined (in a simplified way) as the
point at which the material changes shape under stress and does
not return to its original form when the stress is removed. When
this “inelastic” limit is reached, the material has been structurally altered from its original form and its remaining strength
might have changed as a result. The structure’s ability to resist
inelastic deformation is one important measure of its strength.
The most readily available measure of a pipeline’s strength
will normally be the documented maximum operating pressure
or MOP. The MOP is the theoretical maximum internal pressure to which the pipeline can be subjected, reduced by appropriate safety factors. The safety factors allow for uncertainties
in material properties and construction. MOP is determined
from stress calculations, with internal pressure normally causing the largest stresses in the wall of the pipe. Material stress
limits are theoretical values, confirmed (or at least evidenced)
by testing, that predict the point at which the material will fail
when subjected to high stress.
External forces also add stress to the pipe. These external
stresses can be caused by the weight of the soil over a buried
line, the weight of the pipe itself when it is unsupported, temperature changes, etc. In general, any external influence that
tries to change the shape of the pipe will cause a stress. Some of
these stresses are additive to the stresses caused by internal
pressure. As such, they must be allowed for in the MOP calculations. Hence, care must be taken to ensure that the pipeline
will never be subjected to any combination of internal pressures
and external forces that will cause the pipe material to be overstressed.
Note that MOP limits include safety factors. If pipeline segments with different safety factors are being compared, a different measure of pipe strength might be more appropriate.
Appendix C discusses pipe strength calculations.
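Appendix C is not reproduced here, but the most common pipe-strength calculation of this kind is Barlow's formula reduced by a design (safety) factor. The sketch below is a hedged illustration; the example pipe properties are mine, not taken from this text.

```python
# Barlow's formula for internal design pressure, as commonly used
# in pipeline codes: P = 2 * S * t / D, reduced by a design (safety)
# factor. The example pipe properties below are illustrative only.

def design_pressure_psi(smys_psi: float, wall_in: float,
                        od_in: float, design_factor: float) -> float:
    """Internal pressure limit per Barlow's formula with a safety factor."""
    return 2.0 * smys_psi * wall_in / od_in * design_factor

# Example: 24-in OD, 0.500-in wall, 52,000-psi SMYS, 0.72 design factor
print(design_pressure_psi(52_000, 0.500, 24.0, 0.72))
```

For this illustrative case the result is about 1,560 psi; segments designed with different factors would show why a factor-free strength measure can be the better basis for comparison.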
To define the ease of reaching MOP (whichever definition
of MOP is used), a point schedule can be designed to cover
the possibilities. Consider this example point-assignment schedule:
A. Routine (0 pts)
Definition: Where routine, normal operations could allow
the system to reach MOP. Overpressure would occur
fairly rapidly due to incompressible fluid or rapid introduction of relatively high volumes of compressible fluids.
Overpressure is prevented only by procedure or single-level
safety device.
B. Unlikely (5 pts)
Definition: Where overpressure can occur through a combination of procedural errors or omissions, and failure of safety
devices (at least two levels of safety). For example, a pump
running in a “deadheaded” condition by the accidental
closing of a valve, and two levels of safety system (a primary
safety and one redundant level of safety) failing, would overpressure the pipeline.
C. Extremely Unlikely (10 pts)
Definition: Where overpressure is theoretically possible
(sufficient source pressure), but only through an extremely
unlikely chain of events including errors, omissions, and
safety device failures at more than two levels of redundancy.
For example, a large diameter gas line would experience
overpressure if a mainline valve were closed and communications (SCADA) failed and downstream vendors did not
communicate problems and local safety shutdowns failed,
and the situation went undetected for a matter of hours.
Obviously, this is an unlikely scenario.
D. Impossible (12 pts)
Definition: Where the pressure source cannot, under any
conceivable chain of events, overpressure the pipeline.
In studying the point schedule for ease of reaching MOP, the
“routine” description implies that MOP can be reached rather
easily. The only preventive measure may be procedural, where
the operator is relied on to operate 100% error free, or a simple
safety device that is designed to close a valve, shut down a pressure source, or relieve pressure from the pipeline.
If perfect operator performance and one safety device are
relied on, the pipeline owner is accepting a high level of risk of
reaching MOP. Error-free work techniques are not realistic and
industry experience shows that reliance on a single safety shutdown device, either mechanical or electronic, allows for some
periods of no overpressure protection. Few points should be
awarded to such situations.
Note that the evaluator is making no value judgments at this
stage as to whether or not reaching MOP poses a serious threat
to life or property. Such judgments will be made when the “consequence” factor is evaluated.
The “unlikely” description, category B, implies a pressure
source that can overpressure the segment and protection via
redundant levels of safety devices. These may be any combination of relief valves; rupture disks; mechanical, electrical, or
pneumatic shutdown switches; or computer safeties (programmable logic controllers, supervisory control and data acquisition systems, or any kind of logic devices that may trigger an
overpressure prevention action). The requirement is that at least
two independently operated devices be available to prevent
overpressure of the pipeline. This allows for the accidental
failure of at least one safety device, with backup provided by the remaining level(s).
Operator procedures must also be in place to ensure the
pipeline is always operated at a pressure level below the MOP.
In this sense, any safety device can be thought of as a backup to
proper operating procedures. The point value of category B
should reflect the chances, relative to the other categories, of a
procedural error coincident with the failure of two or more
levels of safety. Industry experience shows that this is not as
unlikely an occurrence as it may first appear.
Category C, “extremely unlikely,” should be used for situations where sufficient pressure could be introduced and the
pipeline segment could theoretically be overpressured but the
scenario is even more unlikely than category B. An example of
a difference between categories B and C would be a more compressible fluid or a larger volume pipeline segment in category
C, requiring longer times to reach critical pressures. As this
chance becomes increasingly remote, points awarded should
come closer to a category D score.
The “impossible” description of category D is fairly
straightforward. The pressure source is deemed to be incapable of exceeding the MOP of the pipeline under any circumstances. Potential pressure sources must include pumps,
compressors, wellhead pressure, connecting pipelines, and
the often overlooked thermal sources. A pump that, when
operated in a deadheaded condition, can produce 1000-psig
pressure cannot, theoretically, overpressure a line whose
MOP is 1400 psig. In the absence of any other pressure
source, this situation should receive the maximum points. The
potential for thermal overpressure must not be overlooked
however. A section of liquid-full pipe may be pressured
beyond its MOP by a heat source such as sun or fire if the
liquid has no room to expand.
Further, in examining the pressure source, the evaluator may
have to obtain information from connecting pipelines as to the
maximum pressure potential of their facilities. It is sometimes
difficult to obtain the maximum pressure value as it must be
defined for this application, assuming failure of all safety and
pressure-limiting devices. In the next section, a distinction is
made between safety systems controlled by the pipeline
operator and those outside his direct control.
A3. Safety systems (0-10 pts)
Safety devices, as a component of the risk picture, are included
here in the incorrect operations index (Figure 6.2) rather than
the design index of Chapter 5. This is done under the premise
that safety systems exist as a backup for situations in which
human error causes or allows MOP to be reached. As such,
they reduce the possibility of a pipeline failure due to human error.
The risk evaluator should carefully consider any and all
safety systems in place. A safety system or device is a mechanical, electrical, pneumatic, or computer-controlled device that
prevents the pipeline from being overpressured. Prevention
may take the form of shutting down a pressure source or relieving pressurized pipeline contents. Common safety devices
include relief valves, rupture disks, and switches that may close
valves, shut down equipment, etc., based on sensed conditions.
A level of safety is considered to be any device that unilaterally
and independently causes an overpressure prevention action to
be taken. When more than one level of safety exists, with each
level independent of all other devices and their power
sources, redundancy is established (Figure 6.3). Redundancy
provides backup protection in case of failure of a safety device
for any reason. Two, three, and even four levels of safety are not
uncommon for critical situations.
In some instances, safety systems exist that are not under
the direct control of the pipeline operator. When another
pipeline or perhaps a producing well is the pressure source,
control of that source and its associated safeties may rest with
Figure 6.3 Safety systems. (Diagrams contrast pump overpressure protection with one level of safety and with two levels of safety; labeled elements include the pump motor, vent line, and safety relief valve.)
the other party. In such cases, allowances must be made for
the other party’s procedures and operating discipline.
Uncertainty may be reduced when there is direct inspection
or witnessing of the calibration and maintenance of the other
party’s safety equipment, but this does not replace direct
control of the equipment.
There is some redundancy between this variable and the previously assessed MOP potential since safety systems are noted
there also. A point schedule should be designed to accommodate all situations on the pipeline system. [Note: The evaluator
must decide if she will be considering the pipeline system as a
whole (ignoring section breaks) for this item. A safety system
will often be physically located outside of the pipeline segments it is protecting (see Example 6.3 later).] An example
schedule follows:
A. No safety devices present          0 pts
B. On site, one level only            3 pts
C. On site, two or more levels        6 pts
D. Remote, observation only           1 pt
E. Remote, observation and control    3 pts
F. Non-owned, active witnessing      -2 pts
G. Non-owned, no involvement         -3 pts
H. Safety systems not needed         10 pts
In this example schedule, more than one safety system “condition” may exist at the same time. The evaluator defines the
safety system and the overpressure scenarios. He then assigns
points for every condition that exists. Safety systems that are
not thought to adequately address the overpressure scenarios
should not be included in the evaluation. Note that some conditions cause points to be subtracted.
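The tallying just described, where several conditions can apply at once and some carry negative points, can be sketched as follows. The schedule values come from the example table above; the function, its condition names, and the clamp to the variable's 0-10 range are my assumptions, not stated in the text.

```python
# Sketch of summing safety-system condition points per the example
# schedule above. Multiple conditions may apply simultaneously, and
# some conditions subtract points. Clamping the result to the
# variable's 0-10 range is an assumption, not stated in the text.

SCHEDULE = {
    "no_devices": 0,
    "on_site_one_level": 3,
    "on_site_two_plus_levels": 6,
    "remote_observation": 1,
    "remote_observation_and_control": 3,
    "non_owned_witnessed": -2,
    "non_owned_no_involvement": -3,
    "not_needed": 10,
}

def safety_system_score(conditions: list) -> int:
    """Sum points for every condition the evaluator judges to exist."""
    raw = sum(SCHEDULE[c] for c in conditions)
    return max(0, min(10, raw))  # assumed clamp to the 0-10 range

# E.g., two on-site levels plus remote observation and control:
print(safety_system_score(["on_site_two_plus_levels",
                           "remote_observation_and_control"]))  # 9
```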
A. No safety devices present. In this case, reaching MOP is
possible, and no safety devices are present to prevent overpressure. Inadequate or improperly designed devices would
also fall into this category. A relief valve that cannot relieve
enough to offset the pressure source is an example of an ineffective device. Lack of thermal overpressure protection where
the need exists is another example of a situation that should
receive 0 pts.
B. On site, one level. For this condition, a single device,
located at the site, offers protection from overpressure. The site
can be the pipeline or the pressure source. A pressure switch
that closes a valve to isolate the pipeline segment is an example.
A properly sized relief valve on the pipeline itself is another example.
C. On site, two or more levels. Here, more than one safety
device is installed at the site. Each device must be independent
of all others and be powered by a power source different from
the others. This means that each device provides an independent level of safety. More points should be awarded for this situation because redundancy of safety devices obviously reduces the chance of overpressure.
D. Remote, observation only. In this case, the pressure is
monitored from a remote location. Remote control is not
possible and automatic overpressure protection is not present.
While not a replacement for an automatic safety system, such
remote observation provides some additional backup: the
monitoring personnel can at least notify field personnel to take corrective action.
Points can be given for such systems when such observation
is reliable 95 to 100% of the time. An example would be a pressure that is monitored and alarmed (visible and/or audible signal to observer) in a control room that is manned 24 hours a day
and that has a communication reliability rate of more than 95%.
On notification of an abnormal condition, the observer can dispatch personnel to correct the situation.
E. Remote, observation and control. This is the same situation as the previous one, with the added feature of remote control capabilities. On notification of rising pressure levels, the
observer is able to remotely take action to prevent overpressure.
This may mean stopping a pump or compressor and opening or
closing valves. Remote control capability can significantly
impact the risk picture only if communications are reliable (95% or better) for both receiving of the pressure signal and
transmission of the control signal. Remote control generally
takes the form of opening or closing valves and stopping pumps
or compressors. This condition receives more points because
more immediate corrective action is made possible by the addition of the remote control capabilities.
F. Non-owned, active witnessing. Here, overpressure prevention devices exist but are not owned, maintained, or controlled by the owner of the equipment that is being protected.
The pipeline owner takes steps to ensure that the safety
device(s) is properly calibrated and maintained by witnessing
such activities. Review of calibration or inspection reports
without actually witnessing the activities may, in the evaluator’s judgment, also earn points. Points awarded here should
reflect the uncertainties arising from not having direct control
of the devices. By assigning negative points here, identical
safety systems under different ownerships would have different point values. This reflects a difference in the risk picture
caused by the different levels of operator control and involvement.
G. Non-owned, no involvement. Here again, the overpressure
devices are not owned, operated, or maintained by the owner of
the equipment that is being protected. The equipment owner is
relying on another party for her overpressure protection. Unlike
the previous category, here the pipeline owner is taking no
active role in ensuring that the safety devices are indeed kept in
a state of readiness. As such, points are subtracted-the safety
system effectiveness has been reduced by the added uncertainty.
H. Safety systems not needed. In the previous item, MOP
potential, the most points were awarded for the situation in
which it is impossible for the pipeline to reach MOP. Under this
scenario, the highest level of points is also awarded for this variable because no safety systems are needed.
For all safety systems, the evaluator should examine the status of the devices under a loss of power scenario. Some valves
and switches are designed to “fail closed” on loss of their power
supplies (electric or pneumatic, usually). Others are designed
to “fail open,” and a third class remains in its last position: “fail
last.” The important thing is that the equipment fails in a mode
that leaves the system in the least vulnerable condition.
Three examples follow of the application of this point schedule.
Example 6.1: Scoring safety systems (Case A)
In the pipeline section considered here, a pump station is
present. The pump is capable of overpressuring the pipeline. To
prevent this, safety devices are installed. A pressure-sensitive
switch will stop the pump and allow product to flow around
the station in a safe manner. Should the pressure switch
fail to stop the pump, a relief valve will open and vent the
entire pumped product stream to a flare in a safe manner. This
station is remotely monitored by the transmission of appropriate data (including pressures) to a control room that is manned
24 hours per day. Remote shutdown of the pump from this
control room is possible. Communications are deemed to be
98% reliable.
Note that two levels of safety are present (pressure switch
and relief valve) and that full credit is given to the remote capabilities only after communication effectiveness is assessed.

Example 6.2: Scoring safety systems (Case B)

For this example, a section of a gas transmission pipeline has
a supplier interconnect. This interconnect leads directly to a
producing gas well that can produce pressures and flow rates
which can overpressure the transmission pipeline. Several levels of safety are present at the well site and under the control of
the producer. The producer has agreed by contract to ensure
that the transmission pipeline owner is protected from any damaging pressures due to the well operation. The pipeline owner
monitors flow rates from the producer as well as pressures on
the pipeline. This monitoring is on a 24-hour basis, but no
remote control is possible.

Example 6.3: Scoring safety systems (Case C)

In this example, a supplier delivers product via a high-pressure pump into a pipeline section that relies on a downstream
section's relief valve to prevent overpressure. The supplier has a
pressure switch at the pump site to stop the pump in the event of
high pressure. The pipeline owner inspects the pump station
owner's calibration and inspection records for this pressure
switch. The pump station owner remotely monitors the pump
station operation 24 hours per day.

On site, one level               3 pts
Non-owned, active witnessing  -2.5 pts
Total points = 0.5

Note that in this case credit is not given for a relief valve not
in the section being evaluated. The evaluator has decided that
the downstream relief valve does not adequately protect the
pipeline section being assessed.

Note also that no points are given for the supplier's remote
monitoring. Again, the evaluator has made the decision to simplify: he does not wish to be evaluating suppliers' systems
beyond the presence of direct overpressure shutdown devices
located at the site. Finally, note that the evaluator has awarded
points for the pipeline owner's inspection of the suppliers'
maintenance records. He feels that, in this case, an amount of
risk reduction is achieved by such inspections.

A4. Material selection (0-2 pts)
The evaluator should look for evidence that proper materials
were identified and specified with due consideration to all
stresses reasonably expected. This may appear to be an obvious
point, but when coupled with ensuring that the proper material
is actually installed in the system, a number of historical failures could have been prevented by closer consideration of this
variable. The evaluator should find design documents that consider all anticipated stresses in the pipeline components. This
would include concrete coatings, internal and external coatings, nuts and bolts, all connecting systems. supports, and the
structural (load-bearing) members of the system. Documents
should show that the corrosion potential, including incompatible material problems and welding-related problems, was considered in the design.
Most importantly, a set of control documents should
exist. These control documents, normally in the form of pipeline
specifications, give highly detailed data on all system components, from the nuts and bolts to the most complex instrumentation. The specifications will address component sizes, material
compositions, paints and other protective coatings. and any special installation requirements. Design drawings specify the location and assembly parameters of each component.
When any changes to the pipeline are contemplated the control
documents should be consulted.All new and replacementmaterials should conform to the original specifications or the specifications must be formally reviewed and revised to allow different
materials. By rigidly adhering to these documents, the chance
of mistakenly installing incompatible materials is reduced.
A management-of-change(MOC) process should be in place.
Condition\ present
Total points = 9
Conditionspresen I
Total points = 4
Note that credit is given for condition C even though the
pipeline owner has no safety devices of his own in this section.
The fact that the devices are present warrants points; the fact
that they are not under the owner’s control negates some of
those points (condition G).Also, while contractual agreements
may be useful in determining liabilities ufier an accident, they
are not thought to have much impact on the risk picture. If the
owner takes an active role in ensuring that the safety devices are
properly maintained, condition F would replace G. yielding a
total point score of 5.
6/124 Incorrect Operations Index
Awarding of points for this item should be based on the
existence and use of control documents and procedures that
govern all aspects of pipeline material selection and installation. Two points are awarded for the best use of controls, 0
points if controls are not used.
A5. Checks (0-2 pts)
Here, the evaluator determines if design calculations and decisions were checked at key points during the design process. In the U.S., a licensed professional engineer often certifies designs. This is a possible intervention point in the design process. Design checks by qualified professionals can help to prevent errors and omissions by the designers. Even the most routine designs require a degree of professional judgment and are consequently prone to error. Design checks can be performed at any stage in the life of the system. It is probably impossible to accurately gauge the quality of the checks; evidence that they were indeed performed will probably have to suffice.
Two points are awarded for sections whose design process
was carefully monitored and checked.
B. Construction (suggested weighting:
Ideally, construction processes would be well defined, invariant
from site to site, and benefit from a high pride of workmanship
among all constructors. This would, of course, ensure the
highest quality and consistency in the finished product and
inspection would not be needed.
Unfortunately, this is not the present state of pipeline construction practice. Conformance specifications are kept wide to
allow for a myriad of conditions that may be encountered in the
field. Workforces are often transient and awarding of work
contracts is often done solely on the basis of lowest price. This
makes many projects primarily price driven; shortcuts are
sought and speed is often rewarded over attention to detail.
For the construction phase, the evaluator should find evidence
that reasonable steps were taken to ensure that the pipeline
section was constructed correctly. This includes checks on the
quality of workmanship and, ideally, another check on the design.
While the post-construction pressure test verifies the system
strength, improper construction techniques could cause problems far into the future. Residual stresses, damage to corrosion
prevention systems, improper pipe support, and dents or
gouges causing stress risers are some examples of construction
defects that may pass an initial pressure test, but contribute to a
later failure.
Variables that can be scored in the assessment are as follows:

Inspection 10 pts
Materials 2 pts
Joining 2 pts
Backfill 2 pts
Handling 2 pts
Coating 2 pts
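Aggregating the construction variables is a capped sum: inspection carries up to 10 points, and each of the other five variables up to 2, for a possible 20 points. The sketch below is illustrative only; the dictionary keys are assumed names, while the maxima follow the point values just listed.

```python
# Hypothetical sketch of construction-phase scoring. The maxima follow the
# point list above (Inspection 10; Materials, Joining, Backfill, Handling,
# and Coating 2 each); the key names are assumptions.
CONSTRUCTION_MAX = {"inspection": 10, "materials": 2, "joining": 2,
                    "backfill": 2, "handling": 2, "coating": 2}

def construction_score(awarded):
    """Sum the awarded points, clamping each variable to its 0..max range."""
    return sum(max(0, min(awarded.get(v, 0), cap))
               for v, cap in CONSTRUCTION_MAX.items())

# Example: strong inspection, weak backfill evidence, coating unknown
print(construction_score({"inspection": 9, "materials": 2, "joining": 2,
                          "backfill": 1, "handling": 2, "coating": 0}))  # 16
```

Clamping keeps a data-entry error in one variable from inflating the section score beyond the variable's intended weight.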
These same variables can also apply to ongoing construction
practices on an existing pipeline. This might include repairs,
adjustments to route or depth, and addition of valves or connections. The stability of the buried pipeline during modifications
is often a critical consideration. Construction activities near or
in the pipeline right of way may produce slopes that are not stable and could put the pipeline at risk. These activities include
excavation for road or railway cuts, removal of material from
the toe of a slope, or adding significant material to the crest of a
slope, in addition to construction activities on the pipeline
itself. Slope alterations near, but outside, the right of way by
third parties should be monitored and the responsible parties
notified and consulted about their project's effect on the pipeline.
The evaluator can assess the potential for human error in the
construction phase by examining each of the variables listed
above and discussed in more detail next.
B1. Inspection (0-10 pts)
Maximum points can be awarded when a qualified and conscientious inspector was present to oversee all aspects of the construction and the inspection provided was of the highest quality.
A check of the inspector’s credentials, notes during construction, work history, and maybe even the constructor’s opinion of
the inspector could be used in assessing the performance. The
scoring of the other construction variables may also hinge on
the inspector’s perceived performance.
If inspection is a complete unknown, 0 points can be
awarded. This variable commands the most points under the
construction category because current pipeline construction
practices rely so heavily on proper inspection.
B2. Materials (0-2 pts)
Ideally, all materials and components were verified as to their
authenticity and conformance to specifications prior to their
installation. Awareness of potential counterfeit materials should
be high for recent construction. Requisition of proper materials
is probably not sufficient for this variable. An on-site material
handler should be taking reasonable steps to ensure that the right
material is indeed being installed in the right location.
Evidence that this was properly done warrants 2 points.
B3. Joining (0-2 pts)
Pipe joints are sometimes seen as having a higher failure potential than the pipe itself. This is reasonable since joining normally occurs under uncontrolled field conditions. Highest
points are awarded when high quality of workmanship is seen
in all methods of joining pipe sections, and when welds were
inspected by appropriate means (X-ray, ultrasound, dye penetrant, etc.) and all were brought into compliance with governing
specifications. Where weld acceptance or rejection is determined by two inspectors, thereby reducing bias and error,
assurances are best. Point values should be decreased for less
than 100% weld inspection, questionable practices, or other
uncertainties. Other joining methods (flanges, screwed connections, polyethylene fusion welds, etc.) are similarly scored based on the quality of the workmanship and the inspection.

100% inspection of all joints by industry-accepted practices
warrants 2 points in this example.
Operation 6/125
B4. Backfill (0-2 pts)
The type of backfill used and backfilling procedures are often
critical to a pipeline’s long-term structural strength and ability
to resist corrosion. It is important that no damage to the coating
occurred during pipeline installation. Uniform and (sometimes) compacted bedding material is usually necessary to
properly support the pipe. Stress concentration points may
result from improper backfill or bedding material.
Knowledge and practice of good backfill/support techniques
during construction warrants 2 points.
B5. Handling (0-2 pts)
For this variable, the evaluator should check that components, especially longer sections of pipe, were handled in ways that minimize stresses and that cold-working of steel components for purposes of fit or line-up was minimized. Cold-working can cause high levels of residual stresses, which in turn can be a contributing factor to stress corrosion phenomena. Handling includes storage of materials prior to installation. Protecting materials from harmful elements should be a part of the evaluation for proper handling during construction.
The evaluator should award 2 points when he sees evidence
of good materials handling practices and storage techniques
during and prior to construction.
B6. Coating (0-2 pts)
This variable examines field-applied coatings (normally
required for joining) and provides an additional evaluation
opportunity for precoated components. Field-applied coatings
are problematic because effects of ambient conditions are difficult to control. Depending on the coating system, careful control of temperature and moisture might be required. All coating
systems will be sensitive to surface preparation.
Ideally, the coating application was carefully controlled and
supervised by trained individuals and preapplied coating was
carefully inspected and repaired prior to final installation of
pipe. Coating assessment in terms of its appropriateness for the
application and other factors is done in the corrosion index
also, but at the construction stage, the human error potential is
relatively high. Proper handling and backfilling directly impact
the final condition of the coating. The best coating system can
be defeated by simple errors in the final steps of installing the pipe.

The maximum points can be awarded when the evaluator is
satisfied that the constructors exercised exceptional care in
applying field coatings and caring for the preapplied coating.
The evaluator must be careful in judging all of the variables just discussed, especially for systems constructed many years ago. System owners may have strong beliefs about how well these error-prevention activities were carried out, but may have little evidence to verify those beliefs. Evaluations of pipeline sections must reflect a consistency in awarding points and not be unduly influenced by unsubstantiated beliefs. A "documentation-required" rule would help to ensure this consistency.

Excavations, even years after initial installation, provide evidence of how well construction techniques were carried out. Findings such as damaged coatings, debris (temporary wood supports, weld rods, tools, rocks, etc.) buried with the pipeline,
low-quality coating applications over weld joints, etc., will
still be present years later to indicate that perhaps insufficient
attention was paid during the construction process.
C. Operation (suggested weighting: 35%)
Having considered design and construction, the third phase,
operations, is perhaps the most critical from a human error
standpoint. This is the phase in which an error can produce an
immediate failure since personnel may be routinely operating
valves, pumps, compressors, and other equipment. Emphasis
therefore is on error prevention rather than error detection.
Most hazardous substance pipelines have redundant safety
systems and are designed with generous safety factors.
Therefore, it often takes a rather unlikely chain of events to
cause a pipeline to fail by the improper use of components.
However, history has demonstrated that the unlikely event
sequences occur more often than would be intuitively predicted. Unlike the other phases, intervention opportunities here
may be less common. But a system can also be made to be more
insensitive to human error through physical means.
As a starting point, the evaluator can look for a sense of
professionalism in the way operations are conducted. A
strong safety program is also evidence of attention being paid
to error prevention. Both of these, professionalism and safety
programs, are among the items believed to reduce errors.
The variables considered in this section are somewhat redundant with each other, but are still thought to stand on their own
merit. For example, better procedures enhance training;
mechanical devices complement training; better training and
professionalism usually mean less supervision is required.
Operations is the stage where observability and controllability should be maximized. Wherever possible, intervention
points should be established. These are steps in any process
where actions contemplated or just completed can be reviewed
for correctness. At an intervention point, it is still possible to
reverse the steps and place the system back in its prior (safe)
condition. For instance, a simple lock on a valve causes the
operator to take an extra step before the valve can be operated, perhaps leading to more consideration of the action about to be taken.
This is also the place in the assessment where special product
reaction issues can be considered. For example, hydrate formation (production of ice as water vapor precipitates from a hydrocarbon flow stream, under special conditions) has been
identified as a service interruption threat and also. under special
conditions, an integrity threat. The latter occurs if formed ice
travels down the pipeline with high velocity, possibly causing
damage. Because such special occurrences are often controlled
through operational procedures, they warrant attention here.
A suggested point schedule to evaluate the operations phase
is as follows:
Procedures 7 pts
SCADA/communications 3 pts
Drug testing 2 pts
Safety programs 2 pts
Surveys/maps/records 5 pts
Training 10 pts
Mechanical error preventers 6 pts
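The operations-phase variable maxima (procedures 7, SCADA/communications 3, drug testing 2, safety programs 2, surveys/maps/records 5, training 10, mechanical error preventers 6) sum to 35, the same number as the suggested 35% weighting. A minimal tallying sketch, with assumed key names:

```python
# Sketch of tallying the operations-phase point schedule. Key names are
# assumptions; the per-variable maxima follow the suggested schedule.
OPERATIONS_MAX = {
    "procedures": 7,
    "scada_communications": 3,
    "drug_testing": 2,
    "safety_programs": 2,
    "surveys_maps_records": 5,
    "training": 10,
    "mechanical_error_preventers": 6,
}

def operations_score(awarded):
    """Sum awarded points, clamping each variable to its 0..max range."""
    return sum(max(0, min(awarded.get(v, 0), cap))
               for v, cap in OPERATIONS_MAX.items())

# The maxima sum to 35, matching the suggested 35% phase weighting
print(sum(OPERATIONS_MAX.values()))  # 35
```

Because the maxima already total 35, the phase score can be read directly as percentage points of the index without rescaling.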
C1. Procedures (0-7 pts)
The evaluator should be satisfied that written procedures covering all aspects of pipeline operation exist. There should be evidence that these procedures are actively used, reviewed, and
revised. Such evidence might include filled-in checklists and
copies of procedures in field locations or with field personnel.
Ideally, use of procedures and checklists reduces variability.
More consistent operations imply less opportunity for human
error. Examples of job procedures include:
Mainline valve checks and maintenance
Safety device inspection and calibration
Pipeline shutdown or startup
Pump/compressor operations
Product movement changes
Right-of-way maintenance
Flow meter calibrations
Instrument maintenance
Safety device testing
Management of change
Corrosion control
Control center actions
Lock-out and equipment isolation
Emergency response
and many others. Note that work near the line, but not actually involving the pipeline, is also included because such activities may affect the line. Unique or rare procedures should be developed and communicated with great care. A protocol should exist that covers these procedures: who develops them, who approves them, how training is done, how compliance is verified, and how often they are reviewed. A document management system should be in place to ensure version control and proper access to the most current documents. This is commonly done in a computer environment, but can also be done with paper filing systems.

The evaluator can check to see if procedures are in place for the most critical operations first: starting and stopping of major pieces of equipment, valve operations, changes in flow parameters, instruments taken out of service, etc. The nonroutine activity is often the most dangerous. However, routine operations can lead to complacency. The mandated use of pre-flight checklists by pilots prior to every flight is an example of avoiding reliance on memory or habits.

A strong procedures program is an important part of reducing operational errors, as is seen by the point level. Maximum points should be awarded where procedure quality and use are the highest. More is said about procedures in the training variable and in Chapter 13.

C2. SCADA/communications (0-3 pts)
Supervisory control and data acquisition (SCADA) refers to the transmission of pipeline operational data (such as pressures, flows, temperatures, and product compositions) at sufficient points along the pipeline to allow monitoring of the line from a single location (Figure 6.4). In many cases, it also includes the transmission of data from the central monitoring location to points along the line to allow for remote operation of valves, pumps, motors, etc. Devices called remote terminal units (RTUs) provide the interface between the pipeline data-gathering instruments and the conventional communication paths such as telephone lines, satellite transmission links, fiber optic cables, radio waves, or microwaves. So, a SCADA system is normally composed of all of these components: measuring instrumentation (for flow, pressure, temperature, density, etc.), transmitters, control equipment, RTUs, communication pathways, and a central computer. Control logic exists either in local equipment (programmable logic controllers, PLCs) or in the central computer.

Figure 6.4 Pipeline SCADA systems.
SCADA systems usually are designed to provide an overall
view of the entire pipeline from one location. In so doing,
system diagnosis, leak detection, transient analysis, and work
coordination can be enhanced.
The main contribution of SCADA to human error avoidance is the fact that another set of eyes is watching pipeline
operations and is hopefully consulted prior to field operations. A possible detractor is the possibility of errors emerging from the pipeline control center. More humans involved
may imply more error potential, both from the field and
from the control center. The emphasis should therefore be
placed on how well the two locations are cooperating and
cross-checking each other.
Protocol may specify the procedures in which both locations
are involved. For example, the operating discipline could
require communication between technicians in the field and the
control center immediately before
Valves opened or closed
Pumps and compressors started or stopped
Vendor flows started or stopped
Instruments taken out of service
Any maintenance that may affect the pipeline operation.
Two-way communications between the field site and the
control center should be a minimum condition to justify points
in this section. Strictly for purposes of scoring this variable, a
control center need not employ a SCADA system. The important aspect is that another source is consulted prior to any
potentially upsetting actions. Telephone or radio communications, when properly applied, can also be effective in preventing human error.
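The cross-check discipline described above amounts to a simple gate: a critical field action proceeds only after a second party (the control center) has confirmed it. A sketch, with all names illustrative assumptions:

```python
# Illustrative sketch (all names assumed) of the two-party cross-check
# discipline: critical field actions proceed only when independently
# confirmed by the control center; routine actions proceed freely.
CRITICAL_ACTIONS = {"open_valve", "close_valve", "start_pump", "stop_pump",
                    "instrument_out_of_service"}

def may_proceed(action, control_center_confirmed):
    """Return True if the field technician may perform the action."""
    if action not in CRITICAL_ACTIONS:
        return True  # routine work needs no second set of eyes
    return control_center_confirmed

print(may_proceed("start_pump", False))  # False: wait for the control center
```

The point of the gate is that the confirmation comes from a source with a system-wide view, not that the field technician is distrusted.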
Maximum points should be awarded when the cross-checking is seen to be properly performed.

Alternative approach

This subsection describes an alternative approach to evaluating the role of SCADA in human error avoidance. In this approach, a more detailed assessment of SCADA capabilities is made part of the risk assessment. Choice of approaches may be at least partially impacted by the perceived value of SCADA capabilities in error prevention.

A SCADA system can impact risk in several ways:

Human error avoidance
Leak detection
Emergency response
Operational efficiencies.

As with any system, the SCADA system is only as effective and reliable as its weakest component. A thorough assessment of a SCADA system would ideally involve an examination of the entire reporting process, from first indication of an abnormal condition, all the way to the final actions and associated system response. This assessment would therefore involve an evaluation of the following aspects:

Detection of abnormal conditions; for instance, what types of events can be detected? What is the detection sensitivity and reliability in terms of 100% of event type A occurrences being found, 72% of event type B occurrences being found, etc.? This includes assessment of redundant detection opportunities (by pressure loss and flow increase, for instance), instrument calibration and sensitivities, etc.
Speed, error rate, and outage rate of the communications pathways; number of points of failure; weather sensitivity; third-party services; average refresh time for data; amount of error checking during transmission; report-by-exception
Redundancy in communication pathways; outage time until backup system is engaged
Type and adequacy of automatic logic control; local (PLCs) versus central computer; ability to handle complex input scenarios
Human response, if required, as a function of time to recognize problem; ability to set alarm limits; effectiveness of man/machine interface (MMI); operator training; support from logic, graphic, and tabular tools
Adequacy of remote and/or automatic control actions; valve closing or opening; instrument power supply.

A list of characteristics that could be used to assess a specific SCADA system can be created. These characteristics are thought to provide a representative indication of the effectiveness in reducing risks:

Local automatic control
Local remote control (on-site control room)
Remote control as primary system
Remote control as backup to local control
Automatic backup communications with indication of
24-hour-per-day monitoring
Regular testing and calibration per formal procedures
Remote, on-site monitoring and control of all critical activities
Remote, off-site monitoring and control of all critical activities
Enforced protocol requiring real-time interface between field operations and control room; two sources involved in critical activities; an adequate real-time communications system is assumed
Interlocks or logic constraints that prevent incorrect operations; critical operations are linked to pressure, flow, temperature, etc., indications, which are set as "permissives" before the action can occur
Coverage of data points; density appropriate to complexity of operations
Number of independent opportunities to detect incidents
Diagnostics capabilities including data retrieval, trending charts, temporary alarms, correlations, etc.
Many of these characteristics impact the leak detection and
emergency response abilities of the system. These impacts are
assessed in various consequence factors in Chapter 7.
As one variable in assessing the probability of human error,
the emphasis here is on the SCADA role in reducing human
error-type incidents. Therefore, only a few characteristics are
selected to use in evaluating the role of a specific SCADA
system. From the human error perspective only, the major
considerations are that a second “set of eyes” is monitoring all
critical activities and that a better overview of the system is
provided. Although human error potential exists in the
SCADA loop itself, it is thought that, in general, the cross-checking opportunities offered by SCADA can reduce the probability of human error in field operations. The following are selected as indicators of SCADA effectiveness as an error preventer:

1. Monitoring of all critical activities and conditions
2. Reliability of SCADA system
3. Enforced protocol requiring real-time communications
between field operations and control room; two sources
involved in critical activities; an adequate real-time communications system is assumed
4. Interlocks or logic constraints that prevent incorrect operations; critical operations are linked to pressure, flow, temperature, etc., indications, which are set as "permissives" before the action can occur.
Note the following assumptions:
Critical activities include pump start/stop; tank transfers; and any significant changes in flows, pressures, temperatures, or equipment status.
Monitoring is seen to be critical for human error prevention, but control capability is mostly a response consideration.
Remote monitoring is neither an advantage nor a disadvantage over local (on-site control room) monitoring.
Proper testing and calibration are implied as part of reliability.
Because item 4 above (interlocks or logic constraints) is already captured in the "Computer Permissives Program" part of the variable mechanical error preventers, the remaining three considerations can be "scored" in the assessment for probability of human error as shown in Table 6.1.
Table 6.1 Evaluation of SCADA role in human error reduction

Level 1: No SCADA system exists or is not used in a manner that promotes human error reduction.
Level 2: Some critical activities are monitored; field actions are informally coordinated through a control room; system is at least 80% operational.
Level 3: Most critical activities are monitored; field actions are usually coordinated through a control room; system uptime exceeds 95%.
Level 4: All critical activities are monitored; all field actions are coordinated through a control room; SCADA system reliability (measured in uptime) exceeds 99.9%.
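Table 6.1 can be read as a simple classification rule. In this sketch the uptime thresholds (80%, 95%, 99.9%) come from the table, but the input encodings are assumptions: coverage as a fraction of critical activities monitored (with "most" interpreted here as at least half), and coordination as a category.

```python
# Sketch of Table 6.1 as a decision rule. Thresholds are from the table;
# the input encodings (fractions and category names) are assumptions.
def scada_level(coverage, coordination, uptime):
    """coverage: fraction of critical activities monitored (0..1);
    coordination: 'none', 'informal', 'usual', or 'all';
    uptime: fraction of time the SCADA system is operational (0..1)."""
    if coverage >= 1.0 and coordination == "all" and uptime >= 0.999:
        return 4  # all monitored, all coordinated, 99.9%+ uptime
    if coverage >= 0.5 and coordination in ("usual", "all") and uptime > 0.95:
        return 3  # "most" monitored, usually coordinated, uptime > 95%
    if coverage > 0 and coordination != "none" and uptime >= 0.80:
        return 2  # "some" monitored, informal coordination, 80%+ operational
    return 1      # no SCADA, or not used to reduce human error

print(scada_level(coverage=0.9, coordination="usual", uptime=0.97))  # 3
```

Ordering the tests from strictest to loosest ensures a system is credited with the highest level whose conditions it fully meets.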
Other aspects of the SCADA role in risk reduction can be captured in the consequence section, under "Spill Reduction Factors." The more technical aspects of kind and quality of data and control (incident detection) and the use of that capability (emergency response) can be assessed there.
C3. Drug testing (0-2 pts)
Government regulations in the United States currently require
drug and alcohol testing programs for certain classes of
employees in the transportation industry. The intent is to reduce
the potential for human error due to an impairment of an individual. Company testing policies often include
Random testing
Testing for cause
Pre-employment testing
Post-accident testing
From a risk standpoint, finding and eliminating substance abuse in the pipeline workplace reduces the potential for substance-abuse-related human errors.
A functioning drug testing program for pipeline employees
who play substantial roles in pipeline operations should warrant maximum points.
In cultures where drug and substance abuse is not a problem,
a practice of employee health screening may be a substitute
item to score.
C4. Safety programs (0-2 pts)
A safety program is one of the nearly intangible factors in the
risk equation. It is believed that a company-wide commitment
to safety reduces the human error potential. Judging this level
of commitment is difficult. At best the evaluator should look for
evidence of a commitment to safety. Such evidence may take
the form of some or all of the following:
Written company statement of safety philosophy
Safety program designed with high level of employee participation (evidence of high participation is found)
Strong safety performance record (recent history)
Good attention to housekeeping
Signs, slogans, etc., to show an environment tuned to safety
Full-time safety personnel.
Most will agree that a company that promotes safety to a
high degree will have an impact on human error potential. A
strong safety program should warrant maximum points.
C5. Surveys/maps/records (0-5 pts)
While also covered in the risk indexes they specifically impact,
surveys as a part of routine pipeline operations are again considered here. Examples of typical pipeline surveys include:
Close interval (pipe-to-soil voltage) surveys
Coating condition surveys
Water crossing surveys
Deformation detection by pigging
Population density surveys
Depth of cover surveys
Sonar (subsea) surveys
Thermographic surveys
Leak detection
Each item is intended to identify areas of possible threat to
the pipeline. A formal program of surveying, including proper
documentation, implies a professional operation and a measure
of risk reduction. Routine surveying further indicates a more proactive, rather than reactive, approach to the operation. For the pipeline section being evaluated, points can be awarded
based on the number of surveys performed versus the number
of useful surveys that could be performed there. Survey
information should become a part of maps and records
whereby the survey results are readily available to operations
and maintenance personnel.
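The performed-versus-useful-surveys idea above can be sketched as a proportional award of the 0-5 points. The function name and the behavior when no surveys are applicable are assumptions.

```python
# Sketch (assumed name) of proportional survey scoring: scale the available
# points by the share of useful surveys actually performed on the section.
def survey_points(performed, applicable, max_pts=5):
    """Award max_pts * (performed / applicable), capped at max_pts."""
    if applicable <= 0:
        return float(max_pts)  # assumption: no useful survey exists, no penalty
    return max_pts * min(performed, applicable) / applicable

print(survey_points(performed=4, applicable=8))  # 2.5
```

Basing the denominator on surveys that are *useful* for the specific section keeps an offshore line, for example, from being penalized for skipping depth-of-cover surveys that do not apply to it.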
Maps and records document critical information about the
pipeline systems and therefore play a role in error reduction.
That role can be evaluated here.
As discussed in the third-party damage index discussion (Chapter 3), there is often a need to routinely locate a pipeline
to protect it from pending excavations. When indirect means of
line locating, such as drawings and other records, are used,
there is an increased opportunity for incorrect locating. This is
due to the human error potential in the creation and use of
maps, including:
Incorrect initial measurements of the line location during construction
Errors in recording of these measurements
Errors in creation of the record documents
Failure to update documents
Incorrect filing and retrieval of the documents
Incorrect interpretation and communication of the data from
the document.
While some pipe movement after construction is possible,
this is normally not an important factor in line location.
Maps and records are increasingly being stored on and
retrieved from computers. Whether in digital or paper form,
and similar to the evaluation of procedures discussed previously, the scoring of surveys/maps/records can be based on
aspects such as:
Amount of the system covered by maps and records
Detail: level of detail shown (depth, landmarks, pipe specifications, leak history, current condition, etc.)
Clarity: ease of reading; chance of misinterpretation
Timeliness of updates
Document management system: ensuring version control and ready access to information.
Examples of common pipeline survey techniques are shown
in Appendix H. The following information on maps and records
is excerpted from a 1997 study, Ref. [64]:
Maps and Records
In general, facility records maintained by the utility owners or pipeline operators are the most widely used sources of information about the underground infrastructure. In the U.S., operators are required to identify facilities in environmentally sensitive areas and in densely populated areas. In many pipeline environments, however, there is no specific requirement for system operators to maintain a comprehensive system map of their underground facilities. Nevertheless, many do maintain this information to facilitate their business operations.
System records developed prior to the widespread use of computer technology most likely exist as architectural and engineering diagrams. For some systems, these diagrams have been electronically imaged so that they are easier to reference, update, and store. Digitized versions of early maps do not always reflect the uncertainty of information that may have been inherent on the hand-drafted version. Structural references and landmarks that define the relative locations of underground facilities also change over time and may not be reflected on maps.

Many system maps lack documentation of abandoned facilities. Abandoned facilities result when the use of segments of the underground system is discontinued, when replaced lines run in new locations, or when entire systems are upgraded. Without accurate records of abandoned facilities, excavators run the risk of mistaking the abandoned line for an active one, thereby increasing the likelihood of hitting the active line.
In addition to documenting the location of a facility. utility map
records may also contain informationon the age of the facility, type and
dimensions of the material. history of leakage and maintenance. status
of cathodic protection, soil content, and activity related to pending construction. However, the quality ofthis information varies widely.
Excavators, locators, and utility operators can use GPS information to
identify field locations (longitude and latitude coordinates), and they
can use this information to navigate to the sites. With the added capability of differential GPS, objects can be located to an accuracy of better
than 1 meter (1.1 yards). This degree of accuracy makes differential GPS
appropriate for many aspects of mapping underground facilities.
Subsurface utility engineering (SUE) is a process for identifying, verifying, and documenting underground facilities. Depending on the
information available and the technologies employed to verify facility
locations, a level of quality can be associated with the information on
underground facilities. These levels, shown in Table 1, indicate the
degree of uncertainty associated with the information; level A is the
most reliable and level D the least reliable. This categorization is a direct
result of the source of information and the technologies used to verify
the information.
C6. Training (0-10 pts)
Training should be seen as the first line of defense against
human error and for accident reduction. For purposes of this
risk assessment, training that concentrates on failure prevention is the most vital. This is in contrast to training that emphasizes protective equipment, first aid, injury prevention, and
even emergency response. Such training is unquestionably critical, but its impact on the pipeline probability of failure is indirect at best. This should be kept in mind as the training program
is assessed for its contribution to risk reduction.
Obviously, different training is needed for different job
functions and different experience levels. An effective training
program, however, will have several key aspects, including
Table 1 Quality level of the information

Level D: Information is collected from existing utility records without field activities to verify the information. The accuracy or comprehensiveness of the information cannot be guaranteed; consequently, this least certain set of data is the lowest quality level.
Level C: Adds aboveground survey data (such as manholes, valve boxes, posts, and meters) to existing utility records. The Federal Highway Administration Office of Engineering estimates that 15-30 percent of level C facility information pertinent to highway construction is omitted or plotted with an error of more than 2 feet.
Level B: Confirmed existence and horizontal position of facilities are mapped using surface geophysical techniques. The two-dimensional, plan-view map is useful in the construction planning phase when slight changes to avoid conflicts can produce substantial cost savings by eliminating the relocation of utilities.
Level A: Vacuum excavation is used to positively verify both the horizontal and vertical depth location of underground facilities.
common topics in which all pipeline employees should be
trained. A point schedule can be developed to credit the program for each aspect that has been incorporated. An example
(with detailed explanations afterwards) follows.
Documented minimum requirements 2 pts
Testing 2 pts
Topics covered:
  Product characteristics 0.5 pts
  Pipeline material stresses 0.5 pts
  Pipeline corrosion 0.5 pts
  Control and operations 0.5 pts
  Maintenance 0.5 pts
  Emergency drills 0.5 pts
Job procedures (as appropriate) 2 pts
Scheduled retraining 1 pt
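The tally itself is simple arithmetic: credit each element the program has incorporated and sum the points. The sketch below assumes the point schedule as laid out above (element names are illustrative labels, not from the text):

```python
# Sketch: tallying the C6 training score (0-10 pts) by crediting each
# schedule element the program incorporates. Point values follow the
# schedule above; the key names are illustrative.

TRAINING_POINTS = {
    "documented_minimum_requirements": 2.0,
    "testing": 2.0,
    "topic_product_characteristics": 0.5,
    "topic_material_stresses": 0.5,
    "topic_corrosion": 0.5,
    "topic_control_and_operations": 0.5,
    "topic_maintenance": 0.5,
    "topic_emergency_drills": 0.5,
    "job_procedures": 2.0,
    "scheduled_retraining": 1.0,
}

def training_score(elements_present):
    """Sum the points for every schedule element the program has."""
    return sum(TRAINING_POINTS[e] for e in elements_present)

# A program with documented requirements, testing, three topics, and retraining:
partial = training_score([
    "documented_minimum_requirements", "testing",
    "topic_product_characteristics", "topic_corrosion",
    "topic_emergency_drills", "scheduled_retraining",
])  # -> 6.5
```

A complete program earns the full 10 points; gaps in topic coverage reduce the score in half-point steps.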
Documented minimum requirements A document that specifically describes the body of knowledge that is expected of
pipeline workers is a good start for a program. This document
will ideally state the minimum knowledge requirements for
each pipeline job position. Mastery of this body of knowledge
will be verified before that position is worked by an employee.
For example, a pump station operator will not be allowed to
operate a station until she has demonstrated a command of all
of the minimum requirements of that job. This should include
station shutdowns, alarms, monitors, procedures, and the
ability to recognize any abnormal conditions at the station.
Testing A formal program should verify operator knowledge
and identify deficiencies before they pose a threat to the
pipeline system. Tests that can be passed with less than 100%
correctness may be failing to identify training weaknesses.
Ideally, the operator should know exactly what knowledge he is
expected to possess. The test should confirm that he does
indeed possess this knowledge. If the test indicates deficiencies, he may be retested (within reasonable limits) until he has
mastered the body of knowledge required for his job. Testing
programs vary greatly in technique and effectiveness. It is left
to the risk evaluator to satisfy himself that the testing achieves
the desired results.
Topics Covered Regardless of their specific jobs, all pipeline
operators (and arguably, all pipeline employees) should have
some basic common knowledge. Some of these common areas
may include the following:
Product characteristics. Is the product transported flammable, toxic, reactive, carcinogenic? What are the safe exposure
limits? If released, does it form a cloud? Is the cloud heavier
or lighter than air? Such knowledge decreases the chances of
an operator making an incorrect decision due to ignorance
about the product she is handling.
Pipeline material stresses. How does the pipeline material
react to stresses? What are indications of overstressing?
What is the failure mode of the material? What is the weakest
component in the system? Such basic knowledge must not be
confused with engineering in the minds of the operators. All
operators should understand these fundamental concepts to
help them understand and avoid errors only--not to replace
engineering decisions. With this knowledge, though, an operator may find (and recognize the significance of) a bulge in
the pipe indicating that yielding has occurred. All trainees may
gain a better appreciation of the consequences of a pipeline failure.
Pipeline corrosion. As in the above topic, a basic understanding of pipeline corrosion and anticorrosion systems may
reduce the chances of errors. With such training, a field operator would be more alert to coating damage, the presence of
other buried metal, or overhead power lines as potential
threats to the pipeline. Office personnel may also have the
opportunity to recognize a threat and bring it to the attention
of the corrosion engineer, given a fundamental understanding of corrosion. A materials handler may spot a situation of
incompatible metals that may have been overlooked in the
design phase.
Control and operations. This is most critical to the employees who actually perform the product movements, but all
employees should understand how product is moved and
controlled, at least in a general way. An operator who understands what manner of control is occurring upstream and
downstream of his area of responsibility is less likely to make
an error due to ignorance of the system. An operator who
understands the big picture of the pipeline system will be
better able to anticipate all ramifications of changes to the system.
Maintenance. A working knowledge of what is done and why
it is being done may be valuable in preventing errors. A
worker who knows how valves operate and why maintenance
is necessary to their proper operation will be able to spot
deficiencies in a related program or procedure. Inspection
and calibration of instruments, especially safety devices, will
usually be better done by a knowledgeable employee. Given
that many maintenance activities involving excavation could
occur without engineering supervision, safety training of
maintenance crews should include education on the conditions potentially leading to slope failure or other
stability/support issues. Standard procedures should be written to require notification of an engineer should such conditions be found to exist.
Emergency drills The role of emergency drills as a proactive
risk reducer may be questioned. Emergency response in general
is thought to play a role only after a failure has occurred and
consequently is considered in the leak impact factor (Chapter
7). Drills, however, may play a role in human error reduction as
employees think through a simulated failure. The ensuing
analysis and planning should lead to methods to further reduce
risks. The evaluator must decide what effect emergency drills
have on the risk picture in a specific case.

Job procedures As required by specific employee duties, the
greatest training emphasis should probably be placed on job
procedures. The first step in avoiding improper actions of
employees is to document the correct way to do things. Written
and regularly reviewed procedures should cover all aspects of
pipeline operation both in the field and in the control centers.
The use of procedures as a training tool is being measured
here. Their use as an operational tool is covered in an earlier
section.

Scheduled retraining Finally, experts agree that training is
not permanent. Habits form, steps are bypassed, things are forgotten. Some manner of retraining and retesting is essential
when relying on a training program to reduce human error. The
evaluator should be satisfied that the retraining schedule is
appropriate and that the periodic retesting adequately verifies
employee skills.

C7. Mechanical error preventers (0-6 pts)
Sometimes facetiously labeled as "idiot-proofing," installing
mechanical devices to prevent operator error may be an effective risk reducer. Credit toward risk reduction should be given
to any such device that impedes the accomplishment of an
error. The premise here is that the operator is properly trained; the mechanical preventer serves to help avoid inattention
errors. A simple padlock and chain can fit in this category,
because such locks cause an operator to pause and, it is hoped,
consider the action about to be taken. A more complex error
prevention system is computer logic that will prevent certain
actions from being performed out of sequence.
The point schedule for this category can reflect not only the
effectiveness of the devices being rated, but also the possible
consequences that are being prevented by the device. Judging
this may need to be subjective, in the absence of much experiential data. An example of a schedule with detailed explanations follows:

Three-way valves with dual instrumentation 4 pts
Lock-out devices 2 pts
Key-lock sequence programs 2 pts
Computer permissives 2 pts
Highlighting of critical instruments 1 pt

In this schedule, points may be added for each application up
to a maximum point value of 5 points. An application is valid
only if the mechanical preventer is used in all instances of the
scenario it is designed to prevent. If the section being evaluated
has no possible applications, award the maximum points (5
points) because there is no potential for this type of human error.

Three-way valves It is common industry practice to install
valves between instruments and pipeline components. The ability to isolate the instrument allows for maintenance of the
instrument without taking the whole pipeline section out of
service. Unfortunately, it also allows the opportunity for an
instrument to be defeated if the isolating valve is left closed
after the instrument maintenance is complete. Obviously, if the
instrument is a safety device such as a relief valve or pressure
switch, it must not be isolated from the pipeline that it is protecting.
Three-way valves have one inlet and two outlets. By closing
one outlet, the other is automatically opened. Hence, there is
always an unobstructed outlet. When pressure switches, for
instance, are installed at each outlet of a three-way valve, one
switch can be taken out of service and the other will always be
operable. Both pressure switches cannot be simultaneously isolated. This is a prime example of a very effective mechanical
preventer that reduces the possibility of a potentially quite serious error. Points are awarded accordingly.
Lock-out devices These are most effective if they are not the
norm. When an operator encounters a lock routinely, the attention-grabbing effect is lost. When the lock is an unusual feature,
signifying unusual seriousness of the operation about to be
undertaken, the operator is more likely to give the situation
more serious attention.

Key-lock sequence programs These are used primarily to
avoid out-of-sequence type errors. If a job procedure calls for
several operations to be performed in a certain sequence, and
deviations from that prescribed sequence may cause serious
problems, a key-lock sequence program may be employed to
prevent any action from being taken prematurely. Such programs require an operator to use certain keys to unlock specific
instruments or valves. Each key unlocks only a certain instrument and must then be used to get the next key. For instance, an
operator uses her assigned key to unlock a panel of other keys.
From this panel she can initially remove only key A. She uses
key A to unlock and close valve X. When valve X is closed, key
B becomes available to the operator. She uses key B to unlock
and open valve Y. This makes key C available, and so on. At the
end of the sequence, she is able to remove key A and use it to
retrieve her assigned key. These elaborate sequencing schemes
involving operators and keys are being replaced by computer
logic, but where they are used, they can be quite effective. It is
important that the keys be nondefeatable to force operator
adherence to the procedure.
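The sequencing logic behind such programs can be sketched in a few lines: each action is locked by a key that only becomes available once the previous step in the prescribed sequence is complete. The key and action names here (key_A, close_valve_X, and so on) are illustrative, matching the example in the text:

```python
# Sketch of a key-lock sequence: actions can only be performed in the
# prescribed order, because each key is released only by completing the
# previous step. Names are illustrative, not a real control system API.

class KeyLockSequence:
    def __init__(self, steps):
        # steps: ordered list of (key, action) pairs
        self.steps = steps
        self.next_index = 0  # only this step's key is currently released

    def perform(self, key, action):
        expected = self.steps[self.next_index]
        if (key, action) != expected:
            # out-of-sequence attempt: the needed key is still locked up
            raise PermissionError(
                f"key {key!r} is not released yet for action {action!r}")
        self.next_index += 1
        return f"{action} done with {key}"

seq = KeyLockSequence([("key_A", "close_valve_X"),
                       ("key_B", "open_valve_Y")])
```

Attempting to open valve Y before valve X is closed raises PermissionError, mirroring the requirement that the keys be nondefeatable.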
Computer permissives These are the electronic equivalent to
the key-locks described in the last section. By means of software logic ladders, the computer prevents improper actions
from being taken. A pump start command will not be executed
if the valve line-up (proper upstream and downstream valves
open or closed as required) is not correct. A command to open a
valve will not execute if the pressure on either side of the valve
is not in an acceptable range. Such electronic permissives are
usually software programs that may reside in on-site or
remotely located computers. A computer is not a minimum
requirement, however, because simple solenoid switches or
wiring arrangements may perform similar functions. The evaluator should assess the adequacy of such permissives to
perform the intended functions. Furthermore, they should be
regularly tested and calibrated to warrant the maximum point
award.

Highlighting of critical instruments This is merely another
method of bringing attention to critical operations. By painting
a critical valve the color red or by tagging an instrument with a
special designation, the operator will perhaps pause and consider his action again. Such pauses to reconsider may well prevent serious mistakes. Points should be awarded based on how
effective the evaluator deems the highlighting to be.

D. Maintenance (suggested weighting: 15 pts)
Improper maintenance is a type of error that can occur at several levels in the operation. Lack of management attention to
maintenance, incorrect maintenance requirements or procedures, and mistakes made during the actual maintenance activities are all errors that may directly or indirectly lead to a
pipeline failure. The evaluator should again look for a sense of
professionalism, as well as a high level of understanding of
maintenance requirements for the equipment being used.
Note that this item does not command a large share of the
risk assessment points. However, many items in the overall
pipeline risk assessment are dependent on items in this section.
A valve or instrument that, due to improper maintenance, will
not perform its intended function negates any risk reduction
that the device might have contributed. If the evaluator has
concerns about proper operator actions in this area, she may
need to adjust (downward) all maintenance-dependent variables in the overall risk evaluation. Therefore, if this item scores
low, it should serve as a trigger to initiate a reevaluation of
those variables.
Routine maintenance should include procedures and schedules for operating valves, inspecting cathodic protection equipment, testing/calibrating instrumentation and safety devices,
corrosion inspections, painting, component replacement, lubrication of all moving parts, engine/pump/compressor maintenance, tank testing, etc.
Maintenance must also be done in a timely fashion.
Maintenance frequency should be consistent with regulatory
requirements and industry standards as a minimum. Modern
maintenance practices often revolve around concepts of predictive preventive maintenance (PPM) programs. In these programs, systematic collection and analyses of data are
emphasized so that maintenance actions are more proactive and
less reactive. Based on statistical analysis of past failures and
the criticality of the equipment, part replacement and maintenance schedules are developed that optimize the operation: not
wasting money on premature part replacement or unnecessary
activities, but minimizing downtime of equipment. These programs can be quite sophisticated in terms of the
rigor of the data analysis. Use of even rudimentary aspects of
PPM provides at least some evidence to the evaluator that maintenance is playing a legitimate role in the company's risk reduction efforts.
The evaluator may wish to judge the strength of the maintenance program based on the following items:

D1. Documentation 2 pts
D2. Schedule 3 pts
D3. Procedures 10 pts

D1. Documentation (0-2 pts)
The evaluator should check that a formal program of retaining
all paperwork or databases dealing with all aspects of maintenance exists. This may include a file system or a computer database in active use. Any serious maintenance effort will have
associated documentation. The ideal program will constantly
adjust its maintenance practices based on accurate data collection through a formal PPM approach or at least by employing
PPM concepts. Ideally, the data collected during maintenance,
as well as all maintenance procedures and other documentation, will be under a document management system to ensure
version control and ready access to information.
D2. Schedule (0-3 pts)
A formal schedule for routine maintenance based on operating
history, government regulations, and accepted industry practices will ideally exist. Again, this schedule will ideally reflect
actual operating history and, within acceptable guidelines, be
adjusted in response to that history through the use of formal
PPM procedures or at least the underlying concepts.
D3. Procedures (0-10 pts)
The evaluator should verify that written procedures dealing
with repairs and routine maintenance are readily available. Not
only should these exist, it should also be clear that they are in
active use by the maintenance personnel. Look for checklists,
revision dates, and other evidence of their use. Procedures
should help to ensure consistency. Specialized procedures are
required to ensure that original design factors are still considered long after the designers are gone. A prime example is
welding, where material properties such as hardness, fracture
toughness, and corrosion resistance can be seriously affected
by subsequent maintenance activities involving welding.
Incorrect operations index
This is the last of the failure mode indexes in the relative risk
model (see Figure 6.1). This value is combined with the other
indexes discussed in Chapters 3 through 6 and then divided by
the leak impact factor, which is discussed in Chapter 7, to arrive
at the final risk score. This final risk score is ready to be used in
risk management applications as discussed in Chapter 15.
Chapters 8 through 14 discuss some specialized applications of
risk techniques. If these are not pertinent to the systems being
evaluated, the reader can move directly to Chapter 15.
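The final arithmetic described above is a sum divided by a factor. In this model, index points are awarded for risk-reducing attributes, while a higher leak impact factor represents greater consequences, so a higher final score corresponds to lower relative risk. The index values below are illustrative only:

```python
# Sketch of the final relative-risk calculation described above: the four
# failure-mode index scores (Chapters 3-6) are summed and divided by the
# leak impact factor (Chapter 7). All numeric values are illustrative.

def relative_risk_score(index_scores, leak_impact_factor):
    """(sum of index scores) / LIF; higher result = lower relative risk."""
    if leak_impact_factor <= 0:
        raise ValueError("leak impact factor must be positive")
    return sum(index_scores) / leak_impact_factor

# Example: third-party, corrosion, design, and incorrect operations
# index scores (hypothetical values), with a hypothetical LIF of 2.5:
score = relative_risk_score([70, 55, 60, 65], leak_impact_factor=2.5)  # -> 100.0
```

Doubling the LIF halves the final score, so consequence reductions (such as secondary containment) pay off exactly as strongly as index improvements in this formulation.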
Leak Impact Factor
I. Changes in LIF Calculations 7/135
II. Background 7/135
III. Product Hazard 7/136
  Acute Hazards 7/136
  Chronic Hazards 7/138
IV. Leak Volume 7/142
  Hole Size 7/142
  Materials 7/143
  Stresses 7/144
  Initiating Mechanisms 7/145
  Release Models 7/146
  Hazardous Vapor Releases 7/146
  Hazardous Liquid Spills 7/147
  HVL Releases 7/147
V. Dispersion 7/148
  Jet Fire 7/149
  Vapor Cloud 7/149
  Vapor Cloud Ignition 7/149
  Overpressure Wave 7/150
  Vapor Cloud Size 7/150
  Cloud Modeling 7/150
  Liquid Spill Dispersion 7/151
  Physical extent of spill 7/151
  Thermal effects 7/152
  Contamination Potential 7/153
  Spill Migration 7/153
  Spill and Leak Mitigation 7/154
  Secondary Containment 7/154
  Emergency Response 7/154
VI. Scoring Releases 7/154
  Scoring Hazardous Liquid Releases 7/155
  Scoring Hazardous Vapor Releases 7/156
  ing 7/158
  Scores 7/159
  Emergency Response 7/162
VIII. Receptors 7/165
  Population Density 7/165
  Environmental Issues 7/166
  Environmental Sensitivity 7/167
  High-Value Areas 7/168
  Equivalencies of Receptors 7/170
Leak Impact Factor Overview

Leak impact factor (LIF) = product hazard (PH) x leak volume (LV) x dispersion (D) x receptors (R)

A. Product Hazard (PH) (Acute + Chronic Hazards) 1-22 pts
  A1. Acute Hazards
    a. Nf 0-4 pts
    b. Nr 0-4 pts
    c. Nh 0-4 pts
    Total (Nf + Nr + Nh) 0-12 pts
  A2. Chronic Hazard (RQ) 0-10 pts
B. Leak/Spill Volume (LV)
C. Dispersion (D)
D. Receptors (R)
  D1. Population Density (Pop)
  D2. Environmental Considerations (Env)
  D3. High-Value Areas (HVA)
  Total Receptors = (Pop + Env + HVA)
Note: The leak impact factor is used to adjust the index
scores to reflect the consequences of a failure. A higher point
score for the leak impact factor represents higher consequences
and a higher risk.
Figure 7.1 Relative risk model.
Figure 7.2 Assessing potential consequences: samples of data used to calculate the leak impact factor. [Data items shown in the figure include acute hazard (aquatic toxicity, mammalian toxicity), chronic hazard, high-value areas, spill size (product state: gas, liquid, or combination; flow rate; product characteristics; failure size; leak detection), surface flow resistance, volume released, and emergency response.]
Changes in LIF calculations
Some changes to the leak impact factor (LIF), relative to the
first and second editions of this text, are recommended. The
elements of the LIF have not changed, but the protocol by
which these ingredients are mathematically combined has
been made more transparent and realistic in this discussion.
Additional scoring approaches are also presented. Given the
increasing role of risk evaluations in many regulatory and
highly scrutinized applications, there is often the need to consider increasing detail in risk assessment, especially consequence quantification. There is no universally agreed upon
method to do this. This edition of this book seeks to provide the
risk assessor with an understanding of the sometimes complex
underlying concepts and then some ideas on how an optimum
risk assessment model can be created. The final complexity and
comprehensiveness of the model will be a matter of choice for
the designer, in consideration of factors such as intended application, required accuracy, and resources that can be applied to
the effort.
Up to this point, possible pipeline failure initiators have been
assessed. These initiators define what can go wrong. Actions or
devices that are designed to prevent these failure initiators have
also been considered. These preventions affect the "How likely
is it?" follow-up question to "What can go wrong?"
The last portion of the risk assessment addresses the question "What are the consequences?" This is answered by estimating the probabilities of certain damages occurring. The
consequence factor begins at the point of pipeline failure. The
title of this chapter, Leak Impact Factor, emphasizes this. What
is the potential impact of a pipeline leak? The answer primarily
depends on two pipeline condition factors: (1) the product and
(2) the surroundings. Unfortunately, the interaction between
these two factors can be immensely complex and variable.
The possible leak rates, weather conditions, soil types, populations nearby, etc., are in and of themselves highly variable
and unpredictable. When the interactions between these and
the product characteristics are also considered, the problem
becomes reasonably solvable only by making assumptions and
simplifications.
The leak impact factor is calculated from an analysis of the
potential product hazard, spill or leak size, release dispersion,
and receptor characteristics. Although simplifying assumptions are used, enough distinctions are made to ensure that
meaningful risk assessments result.
The main focus of the LIF here is on consequences to public
health and safety from a pipeline loss of containment integrity.
This includes potential consequences to the environment.
Additional consequence considerations such as service interruption costs can be included as discussed in later chapters.
The LIF can be seen as the product of four variables:

LIF = PH x LV x D x R

where

LIF = leak impact factor (higher values represent higher consequences)
PH = product hazard (as previously defined)
LV = leak volume (relative quantity of the liquid or vapor released)
D = dispersion (relative range of the leak)
R = receptors (all things that could be damaged).
Because each variable is multiplied by all others, any individual variable can drastically impact the final LIF. This better
represents real-world situations. For instance, this equation
shows that if any one of the four components is zero, then the
consequence (and the risk) is zero. Therefore, if the product is
absolutely nonhazardous (including pressurization effects),
there is no risk. If the leak volume or dispersion is zero, either
because there is no leak or because some type of secondary
containment is used, then again there is no risk. Similarly, if
there are no receptors (human or environmental or property
values) to be endangered from a leak, then there is no risk. As
each component increases, the consequence and overall risks
increase.
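The multiplicative behavior just described can be seen directly in a short sketch of LIF = PH x LV x D x R. The numeric values below are illustrative only:

```python
# Sketch of LIF = PH x LV x D x R as defined above. The multiplicative
# form means any zero component drives consequence (and risk) to zero.
# All numbers below are illustrative, not scores from the text.

def leak_impact_factor(product_hazard, leak_volume, dispersion, receptors):
    return product_hazard * leak_volume * dispersion * receptors

base = leak_impact_factor(10, 0.8, 0.5, 2.0)       # -> 8.0
# Full secondary containment modeled as zero dispersion:
contained = leak_impact_factor(10, 0.8, 0.0, 2.0)  # -> 0.0
```

Doubling any one component doubles the LIF, so no single low component can be "averaged away" by the others, which is the point of using a product rather than a sum.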
The full range of hazard potential from loss of integrity of
any operating pipeline includes the following:

1. Toxicity/asphyxiation--contact toxicity or exclusion of air
from confined spaces.
2. Contamination/pollution--acute and chronic damage to
property, flora, fauna, drinking waters, etc.
3. Mechanical effects--erosion, washouts, projectiles, etc.,
from force of escaping product.
4. Fire/ignition scenarios:
a. Fireballs--normally caused by boiling liquid expanding
vapor explosion (BLEVE) episodes, in which a vessel,
usually engulfed in flames, violently explodes, creating a
large fireball with the generation of intense radiant heat
b. Flame jets--occur when an ignited stream of material
leaving a pressurized vessel creates a long flame jet with
associated radiant heat hazards and the possibility of a
direct impingement of flame on nearby receptors
c. Vapor cloud fire--occurs when a cloud encounters an
ignition source and causes the entire cloud to combust as
air and fuel are drawn together in a flash fire situation
d. Vapor cloud explosion--occurs when a cloud ignites and
the combustion process leads to detonation of the cloud,
generating blast waves
e. Liquid pool fires--a liquid pool of flammable material
forms, ignites, and creates radiant heat hazards
Naturally, not all of these hazards accompany all pipeline
operations. The product being transported is the single largest
determinant of hazard type. A water pipeline will often have
only the hazard of "mechanical effects" (and possibly drowning). A gasoline pipeline, on the other hand, carries almost all of
the above hazards.
Hazard zones, that is, distances from a pipeline release
where a specified level of damage might occur, are more fully
discussed in Chapter 14. Example calculation routines are also
provided there as well as later in this chapter. Figure 7.8, presented later in this chapter, illustrates the relative hazard zones
of typical flammable pipeline products.
There is a range of possible outcomes (consequences) associated with most pipeline failures. This range can be seen
as a distribution of possible consequences, from a minor nuisance leak to a catastrophic event. Point estimates of the more
severe potential consequences are often used as a surrogate for
the distribution in a relative risk model. When absolute risk
values are sought, the consequence distribution must be better
characterized, as is described in later chapters.
A comprehensive consequence assessment sequence might
follow these steps:

1. Determine damage states of interest (see Chapter 14)
2. Calculate hazard distances associated with damage states of interest
3. Estimate hazard areas based on hazard distances and source
location (burning pools, vapor cloud centroid, etc.) (see
particle trace element in Table 7.6)
4. Characterize receptor vulnerabilities within the hazard areas
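Steps 2 through 4 can be sketched under deliberately crude screening assumptions: a circular hazard zone around the release point and a uniform population density. Both assumptions, and all numbers below, are illustrative simplifications, not methods prescribed by this text:

```python
# Screening sketch of steps 2-4: hazard distance -> circular hazard area
# -> exposed receptors. Circular zones and uniform population density are
# illustrative simplifying assumptions for this sketch only.

import math

def hazard_area_m2(hazard_distance_m):
    """Area of a circular hazard zone centered on the release source."""
    return math.pi * hazard_distance_m ** 2

def exposed_population(hazard_distance_m, people_per_km2):
    """Receptor estimate assuming uniform population density."""
    area_km2 = hazard_area_m2(hazard_distance_m) / 1.0e6
    return people_per_km2 * area_km2

# A 200-m hazard distance in an area of 1,000 people per square kilometer:
exposed = exposed_population(200.0, 1000.0)  # roughly 126 people
```

Detailed analyses would replace the circle with modeled cloud or pool footprints and the average density with actual receptor locations, as the text goes on to discuss.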
Limited modeling resources often require some shortcuts
to this process, leading to the use of screening simplifications
and detailed analyses at only critical points. Such simplifications, and the use of conservative assumptions for modeling
convenience, are discussed in this chapter.
A. Product hazard
The primary factor in determining the nature of the hazard is
the characteristics of the product being transported in the
pipeline. It is the product that to a large degree determines the
nature of the hazard.
In studying the impact of a leak, it is often useful to make a
distinction between acute and chronic hazards. Acute can mean
sudden onset, or demanding urgent attention, or of short duration. Hazards such as fire, explosion, or contact toxicity are
considered to be acute hazards. They are immediate threats
caused by a leak.
Chronic means marked by long duration. A time variable is
therefore implied. Hazards such as groundwater contamination, carcinogenicity, and other long-term health effects are
considered to be chronic hazards. Many releases that can cause
damage to the environment are chronic hazards because they
can cause long-term effects and have the potential to worsen
with the passage of time.
The primary difference between acute and chronic hazards is
the amount of time involved. An immediate hazard, created
instantly upon initiation of an event, growing to its worst case
level within a few minutes and then improving, is an acute hazard. The hazard that potentially grows worse with the passage
of time is a chronic hazard.
For example, a natural gas release poses mostly an acute hazard. The largest possible gas cloud normally forms immediately, creating a fire/explosion hazard, and then begins to shrink
as pipeline pressure decreases. If the cloud does not find an
ignition source, the hazard is reduced as the vapor cloud
shrinks. (If the natural gas vapors can accumulate inside a
building, the hazard may become more severe as time passes;
it then becomes a chronic hazard.)
The spill of crude oil is more chronic in nature because the
potential for ignition and accompanying thermal effects is
more remote, but in the long term environmental damages are
possible.
A gasoline spill contains both chronic and acute hazard characteristics. It is easily ignited, leading to thermal damage scenarios, and it also has the potential to cause short- and long-term
environmental damages.
Many products will have some acute hazard characteristics and some chronic hazard characteristics. The evaluator should imagine where his product would fit on a scale such as that shown in Figure 7.3, which shows a hypothetical scale to illustrate where some common pipeline products may fit in relation to each other. A product's location on this scale depends on how readily it disperses (the persistence) and how much long-term hazard and short-term hazard it presents. Some product hazards are almost purely acute in nature, such as natural gas. These are shown on the left edge of the scale. Others, such as brine, may pose little immediate (acute) threat, but cause environmental harm as a chronic hazard. These appear on the far right side of the scale.
A1. Acute hazards
Both gaseous and liquid pipeline products should be assessed in terms of their flammability, reactivity, and toxicity. These are the acute hazards. One industry-accepted scale for rating product hazards comes from the National Fire Protection Association (NFPA). This scale rates materials based on the threat to emergency response personnel (acute hazards).
If the product is a mixture of several components, the mixture itself could be rated. However, a conservative alternative
might be to base the assessment on the most hazardous component, because NFPA data might be more readily available for
the components individually.
Unlike the previous point scoring systems described in this
book, the leak impact factor reflects increasing hazard with
increasing point values.
Flammability, Nf

Many common pipeline products are very flammable. The greatest hazard from most hydrocarbons is from flammability. The symbol Nf is used to designate the flammability rating of a substance according to the NFPA scale. The five-point scale shows, in a relative way, how susceptible the product is to combustion. The flash point is one indicator of this flammability.
Figure 7.3 Relative acute-chronic hazard scale for common pipeline products
The flash point is defined as the minimum temperature at which the vapor over a flammable liquid will "flash" when exposed to a free flame. It tells us what temperature is required to release enough flammable vapors to support a flame. Materials with a low flash point (<100°F) ignite and burn readily and are deemed to be flammable. If this material also has a boiling point less than 100°F, it is considered to be in the most flammable class. This includes methane, propane, ethylene, and ethane. The next highest class of substances has flash points of less than 100°F and boiling points greater than 100°F. In this class, less product vaporizes and forms flammable mixtures with the air. This class includes gasoline, crude petroleum, naphtha, and certain jet fuels.
A material is termed combustible if its flash point is greater than 100°F and it will still burn. This class includes diesel and kerosene. Examples of non-combustibles include bromine.
Use the following list or Appendix A to determine the NFPA Nf value (FP = flash point; BP = boiling point) [26]:

Nf = 1    FP > 200°F
Nf = 2    100°F < FP < 200°F
Nf = 3    FP < 100°F and BP > 100°F
Nf = 4    FP < 73°F and BP < 100°F
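As a quick sketch, the list above can be expressed as a lookup function (temperatures in °F; this is an illustrative simplification of the scale as summarized here, not the full NFPA 704 definition, and the function name is an assumption):

```python
def nfpa_flammability(flash_f, boil_f=None):
    """Approximate NFPA flammability rating Nf from flash point (FP)
    and boiling point (BP), both in degrees F, per the list above."""
    if flash_f < 73 and boil_f is not None and boil_f < 100:
        return 4  # most flammable class (e.g., methane, propane, ethane)
    if flash_f < 100:
        return 3  # e.g., gasoline, crude petroleum, naphtha
    if flash_f < 200:
        return 2  # combustible, e.g., diesel, kerosene
    return 1      # FP > 200 F

# A gasoline-like liquid (very low FP, BP above 100 F) falls in class 3
print(nfpa_flammability(-45, 150))  # 3
```

Note that a substance with a very low flash point is only placed in the top class when its boiling point is also below 100°F, matching the prose distinction between the two most flammable classes.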
More will be said about flammability in the discussion of
vapor cloud dispersion later in this chapter.
Reactivity, Nr

Occasionally, a pipeline will transport a material that is unstable under certain conditions. A reaction with air, water, or with itself could be potentially dangerous. To account for this possible increase in hazard, a reactivity rating should be included in the assessment of the product. The NFPA value Nr is used to do this.
Although a good beginning point, the Nr value should be modified when the pipeline operator has evidence that the substance is more reactive than the rating implies. An example of this might be ethylene. A rather common chain of events in pipeline operations can initiate a destructive series of detonations inside the line. This is a type of reactivity that should indicate to the handler that ethylene is unstable under certain conditions and presents an increased risk due to that instability. The published Nr value of 2 might not adequately cover this special hazard for ethylene in pipelines.
Use the following list or Appendix A to determine the Nr value [26]:

Nr = 0  Substance is completely stable, even when heated under fire conditions
Nr = 1  Mild reactivity on heating with pressure
Nr = 2  Significant reactivity, even without heating
Nr = 3  Detonation possible with confinement
Nr = 4  Detonation possible without confinement
Note that reactivity includes self-reactivity (instability) and
reactivity with water.
The reactivity value (Nr) can be obtained more objectively by using the peak temperature of the lowest exotherm value as follows [26]:

Exotherm, °C    Nr
>305            1
215–305         2
125–215         3
<125            4
The immediate threat from the potential energy of a pressurized pipeline is also considered here. This acute threat includes
debris and pipe fragments that could become projectiles in the
event of a catastrophic pipeline failure. Accounting for internal
pressure in this item quantifies the intuitive belief that a
pressurized container poses a threat that is not present in a nonpressurized container.
The increased hazard due solely to the internal pressure is
thought to be rather small because the danger zone is usually
very limited for a buried pipeline. When the evaluator sees an
increased threat, such as an aboveground section in a populated
area, she may wish to adjust the reactivity rating upward in
point value. In general, a compressed gas will have the greater
potential energy and hence the greater chance to do damage.
This is in comparison to an incompressible fluid.
The pressure hazard is directly proportional to the amount of
internal pressure in the line. Although the MOP could be used
here, this would not differentiate between the upstream sections
(often higher pressures) and the downstream sections (usually
lower pressures). One approach would be to create a hypothetical pressure profile of the entire line and, from this, identify
normal maximum pressures in the section being evaluated.
Using these pressures, points can be assessed to reflect the risk
due to pressure.
So, to the Nr value determined above, a pressure factor can be added as follows:

Incompressible Fluids (Liquids)     Pressure Factor
0–100 psig internal pressure        0 pts
>100 psig                           1 pt

Compressible Fluids (Gases)
0–50 psig                           0 pts
51–200 psig                         1 pt
>200 psig                           2 pts
Total point values for Nr should not be increased beyond 4 points, however, because that would minimize the impact of the flammability and toxicity factors, Nf and Nh, whose maximum point scores are 4 points.
Example 7.1: Product hazard scoring

A natural gas pipeline is being evaluated. In this particular section, the normal maximum pressure is 500 psig. The evaluator determines from Appendix A that the Nr for methane is 0. To this, he adds 2 points to account for the high pressure of this compressible fluid. The total score for reactivity is therefore 2 points.
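The pressure adjustment and the 4-point cap can be sketched as a small function (a minimal illustration of the point schedule above; the function name and argument names are assumptions):

```python
def reactivity_score(nr, pressure_psig, compressible):
    """Add a pressure factor to the base NFPA Nr value, capped at
    4 points, per the pressure factor schedule above."""
    if compressible:  # gases carry more potential energy
        if pressure_psig > 200:
            factor = 2
        elif pressure_psig > 50:
            factor = 1
        else:
            factor = 0
    else:             # incompressible fluids (liquids)
        factor = 1 if pressure_psig > 100 else 0
    return min(nr + factor, 4)  # do not exceed the 4-point maximum

# Example 7.1: methane (Nr = 0) at 500 psig, a compressible fluid
print(reactivity_score(0, 500, compressible=True))  # 2
```

The `min(..., 4)` cap reflects the rule that the adjusted reactivity score should not exceed the maximum possible scores of the flammability and toxicity factors.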
Toxicity, Nh

The NFPA rating for a material's health factor is Nh. The Nh value only considers the health hazard in terms of how that hazard complicates the response of emergency personnel. Long-term exposure effects must be assessed using an additional scale. Long-term health effects will be covered in the assessment of chronic hazards associated with product spills. Toxicity is covered in more detail in the following section.
As defined in NFPA 704, the toxicity of the pipeline product is scored on the following scale [26]:
Nh = 0 No hazard beyond that of ordinary combustibles.
Nh = 1 Only minor residual injury is likely.
Nh = 2 Prompt medical attention required to avoid temporary incapacitation.
Nh = 3 Materials causing serious temporary or residual injury.
Nh = 4 Short exposure causes death or major injury.
Appendix A lists the Nh value for many substances commonly transported by pipeline.
Acute hazard score

The acute hazard is now obtained by adding the scores as follows:

Acute hazard (0–12 pts) = (Nf + Nr + Nh)

A score of 12 points represents a substance that poses the most severe hazard in all three of the characteristics studied. Note that the possible point values are low, but this is part of a multiplying factor. As such, it will have a substantial effect on the total risk score.
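The summation above is trivial to compute; a sketch, using NFPA 704 ratings commonly cited for gasoline (Nf = 3, Nr = 0, Nh = 1) as illustrative inputs that should be confirmed against Appendix A:

```python
def acute_hazard(nf, nr, nh):
    """Sum the three NFPA-based ratings into the 0-12 point
    acute hazard score."""
    for v in (nf, nr, nh):
        assert 0 <= v <= 4, "each rating runs 0-4"
    return nf + nr + nh

# Gasoline (no pressure adjustment applied here): Nf=3, Nr=0, Nh=1
print(acute_hazard(3, 0, 1))  # 4
```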
Few preventive actions are able to substantially reduce acute
hazards. To be effective, a preventive action would have to
change the characteristics of the hazard itself. Quenching a
vapor release instantly or otherwise preventing the formation of
a hazardous cloud would be one example of how the hazard
could be changed. While the probability and the consequences
of the hazardous event can certainly be managed, the state of
the art is not thought to be so advanced as to change the acute
hazard of a substance as it is being released.
Direct measurement of acute hazards
Acute hazards are often measured directly in terms fire and
explosion effects when contact toxicity is not an issue. In the
case of fire, the possible damages extend beyond the actual
flame impingement area, as is readily recognizable from
approaching a large campfire. Heat levels are normally measured as thermal radiation (or heatflux or radiant heat) and are
expressed in units of Btu/ft2-hr or kW/m2. Certain doses of
thermal radiation can cause fatality, injury, andor property
damage, depending on the vulnerability of the exposed subject
and the time of exposure. Thermal radiation effects are discussed in this chapter and quantified in Chapter 14 (see also
Figure 7.8 later in this chapter).
Explosion potential is another possible acute hazard in the case of vapor releases. Explosion intensity is normally characterized by the blast wave, measured as overpressure and expressed in psig or kPa. Mechanisms leading to detonation are discussed in this chapter and a discussion of quantification of overpressure levels can be found in Chapter 14.
The amount of harm potentially caused by either of these threats depends on the distance and shielding of the exposed receptors.
A2. Chronic hazard
A very serious threat from a pipeline is the potential loss of life
caused by a release of the pipeline contents. This is usually
considered to be an acute, immediate threat. Another quite
serious threat that may also ultimately lead to loss of life is the
contamination of the environment due to the release of the
pipeline contents. Though not usually as immediate a threat as
toxicity or flammability, environmental contamination ultimately affects life, with possible far-reaching consequences.
This section offers a method to rate those consequences that
are of a more chronic nature. We build on the material presented
in the previous section to do this. From the acute leak impact
consequences model, we can rank the hazard from fire and
explosion for the flammables and from direct contact for the
toxic materials. These hazards were analyzed as short-term
threats only. We are now ready to examine the longer term
hazards associated with pipeline releases.
Figure 7.4 illustrates how the chronic product hazard associated with pipeline spills can be assessed. The first criterion is whether or not the pipeline product is considered to be hazardous. To make this determination, U.S. government regulations are used. The regulations loosely define a hazardous substance as a substance that can potentially cause harm to humans or to the environment. Hazardous substances are more specifically defined in a variety of regulations including the Clean Water Act (CWA), the Clean Air Act (CAA), the Resource Conservation and Recovery Act (RCRA), and the Comprehensive Environmental Response, Compensation and Liability Act (CERCLA, also known as Superfund). If the pipeline product is considered by any of these sources to be hazardous, a reportable spill quantity (RQ) category designation is assigned under CERCLA (Figure 7.4). These RQ designations will be used in our pipeline risk assessment to help rate hazardous products from a chronic standpoint.
The more hazardous substances have smaller reportable spill
quantities. Larger amounts of more benign substances may be
spilled before the environment is damaged. Less hazardous
substances, therefore, have larger reportable spill quantities.
The designations are categories X, A, B, C, and D, corresponding to spill quantities of 1, 10, 100, 1000, and 5000 pounds,
respectively. Class X, a 1-pound spill, is the category for substances posing the most serious threat. Class D, a 5000-pound
spill, is the category for the least harmful substances.
The EPA clearly states that its RQ designations are not
created as agency judgments of the degree of hazard of
specific chemical spills. That is, the system is not intended to
say that a 9-pound spill of a class A substance is not a problem, while a 10-pound spill is. The RQ is designed to be a
trigger point at which the government can investigate a spill
to assess the hazards and to gauge its response to the spill.
The criteria used in determining the RQ are, however, appropriate for our purposes in ranking the relative environmental
hazards of spills.
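The category-to-quantity mapping and the trigger-point behavior described above (a 10-pound class A spill triggers; a 9-pound one does not) can be sketched as follows (category names and pound values are from the text; the function name is illustrative):

```python
# CERCLA reportable-quantity categories (pounds), most to least hazardous
RQ_POUNDS = {"X": 1, "A": 10, "B": 100, "C": 1000, "D": 5000}

def exceeds_rq(spill_lb, category):
    """True if a spill of `spill_lb` pounds meets or exceeds the
    reportable quantity for the given CERCLA category."""
    return spill_lb >= RQ_POUNDS[category]

# A class A substance: 9 lb does not trigger, 10 lb does
print(exceeds_rq(9, "A"), exceeds_rq(10, "A"))  # False True
```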
Classifying a chemical into one of these reportable quantity categories is a nontrivial exercise outlined in U.S. Regulations, 40 CFR Parts 117 and 302. The primary criteria considered include aquatic toxicity, mammalian toxicity (oral, dermal, inhalation), ignitability and reactivity, chronic toxicity, and potential carcinogenicity. The lowest of these criteria (the worst case) will determine the initial RQ of the chemical.
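Since the worst (lowest) of the primary criteria sets the initial RQ, the determination reduces to a minimum over the per-criterion RQs (a sketch; the criterion names and example values are hypothetical, and the per-criterion RQs would come from tables such as Tables 7.1 and 7.2):

```python
def initial_rq(criteria_rq):
    """Initial RQ (lb) is the lowest, i.e., most severe, RQ among the
    primary criteria: aquatic toxicity, mammalian toxicity,
    ignitability/reactivity, chronic toxicity, and carcinogenicity."""
    return min(criteria_rq.values())

# Hypothetical chemical: severe aquatic toxicity drives the rating
print(initial_rq({"aquatic": 10, "mammalian": 100,
                  "ignitability": 1000, "chronic": 100,
                  "carcinogenicity": 5000}))  # 10
```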
Figure 7.4 Determination of RQ
The initial RQ may then be adjusted by analysis of the secondary criteria of biodegradation, hydrolysis, and photolysis. These secondary characteristics provide evidence as to how quickly the chemical can be safely assimilated into the environment. A chemical that is quickly converted into harmless compounds poses less risk to the environment. So-called "persistent" chemicals receive higher hazard ratings.
The CERCLA reportable quantity list has been revised since its inception and will probably continue to be revised. One weakness of the system is that the best available knowledge may not always be included in the most current version. An operator who is intimately familiar with a substance may be in a better position to rate that product relative to some others. When operator experience suggests that the substance is worse than the published CERCLA RQ implies, the evaluator should probably revise the number to a more severe rating. This can be done with the understanding that the CERCLA rating is subject to periodic review and will most likely be updated as better information becomes available. If the operator, on the other hand, feels that the substance is being rated too severely, the evaluator should recognize that the operator may not realize all aspects of the risk. It is recommended that RQ ratings not be reduced in severity based solely on operator opinion.
Use of the RQ factor incorporates some redundancy into the already assigned NFPA ratings for acute hazards. However, the overlap is not complete. The RQ factor adds information on chronic toxicity, carcinogenicity, persistence, and toxicity to nonhumans, none of which is included in the NFPA ratings. The overlap does specifically occur in acute toxicity, flammability, and reactivity. This causes no problems for a relative risk assessment.
Primary criteria

The following is a brief summary of each of the CERCLA primary criteria [14]:
1. Aquatic toxicity. Originally developed under the Clean Water Act, the scale for aquatic toxicity is based on LC50, the concentration of chemical that is lethal to one-half of the test population of aquatic animals on continuous exposure for 96 hours (see Table 7.1; also see the Notes on toxicity section later in this chapter).
2. Mammalian toxicity. This is a five-level scale for oral, dermal, and inhalation toxicity for mammals. It is based on LC50 data as well as LD50 (the dose required to cause the death of 50% of the test population) data and is shown in Table 7.2.
3. Ignitability and reactivity. Ignitability is based on flash point and boiling point in the same fashion as the acute characteristic, Nf. Reactivity is based on a substance's reactivity with water and with itself. For our purposes, it also includes pressure effects in the assessment of acute hazards.
4. Chronic toxicity. To evaluate the toxicity, a scoring methodology assigns values based on the minimum effective dose for repeated exposures and the severity of the effects caused by exposure. This scoring is a function of prolonged exposure, as opposed to the acute factor, Nh, which deals with short-term exposure only. The score determination methodology is found in U.S. regulations (48 CFR …).
5. Potential carcinogenicity. This scoring is based on a high weight-of-evidence designation (either a "known," "probable," or "possible" human carcinogen) coupled with a potency rating. The potency rating reflects the relative strength of a substance to elicit a carcinogenic response. The net result is a high, medium, or low hazard ranking that corresponds to RQs of 1, 10, and 100 pounds, respectively [30].
Secondary criteria
As previously stated, the final RQ rating may be adjusted by
evaluating the persistence of the substance in the environment.
Table 7.1 Aquatic toxicity

Aquatic toxicity (LC50 range) (mg/L)    RQ (lb)
<0.1                                    1
0.1–1.0                                 10
1–10                                    100
10–100                                  1000
100–500                                 5000

The susceptibility to biodegradation, hydrolysis, and photolysis allows certain substances to have their RQ ratings lowered
one category (e.g., from RQ 10 to RQ 100). To be considered for the adjustment, the substance has to pass initial criteria dealing with the tendency to bioaccumulate, environmental persistence, presence of unusual hazards (such as high reactivity), and the existence of hazardous degradation or transformation products. If the substance is not excluded because of these items, it may be adjusted downward one RQ category if it shows a very low persistence.
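The adjustment logic reads as a small rule (a sketch with hypothetical flag names; the actual screening criteria are detailed in 40 CFR Parts 117 and 302):

```python
RQ_LADDER = [1, 10, 100, 1000, 5000]  # categories X, A, B, C, D (pounds)

def adjusted_rq(initial_rq, excluded, low_persistence):
    """Lower the RQ one category (toward 5000 lb, i.e., less hazardous)
    only if the substance is not excluded by the screening criteria
    (bioaccumulation, persistence, unusual hazards, hazardous
    degradation products) and shows very low persistence."""
    i = RQ_LADDER.index(initial_rq)
    if not excluded and low_persistence and i < len(RQ_LADDER) - 1:
        return RQ_LADDER[i + 1]
    return initial_rq

# A readily degraded RQ-10 substance may drop to RQ 100
print(adjusted_rq(10, excluded=False, low_persistence=True))  # 100
```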
Unfortunately, petroleum, petroleum feedstocks, natural
gas, crude oil, and refined petroleum products are specifically
excluded from the EPA’s reportable quantity requirements
under CERCLA. Because these products comprise a high percentage of substances transported by pipeline, an alternative
scoring system must be used. This requires a deviation from the
direct application of the EPA rating system when petroleum
products are evaluated. For our purposes here, however, we can
extend the spirit of the EPA system to encompass all common
pipeline products. This is done by assigning RQ equivalent
classifications to substances that are not assigned an RQ classification by the EPA.
For the products not specifically listed as hazardous by EPA regulations, a general definition is offered. If any one of the following four properties is present, the substance is considered to be hazardous [14]:
1. Ignitability. Defined as a liquid with a flash point of less than 60°C or a nonliquid that can spontaneously cause a fire through friction, absorption of moisture, or spontaneous chemical changes and will burn vigorously and persistently.
2. Corrosivity. Defined as liquids with pH ≤ 2 or ≥ 12.5, or with the ability to corrode steel at a rate of 6.35 millimeters per year at 55°C.
3. Reactivity. Defined as a substance that is normally unstable, reacts violently with water, forms potentially violent mixtures with water, generates toxic fumes when mixed with water, is capable of detonation or explosion, or is classified as an explosive under DOT regulations.
4. Extraction procedure toxicity. This is defined by a special test procedure that looks for concentrations of materials listed as contaminants in the Safe Drinking Water Act's list of National Interim Primary Drinking Water Regulation contaminants [14].
Although petroleum products are specifically excluded from regulatory control, these definitions would obviously include most pipeline hydrocarbon products. This then becomes the second criterion to be made in the evaluation of pipeline products.

Table 7.2 Mammalian toxicity

RQ (lb)   Oral LD50 range (mg/kg)   Dermal LD50 range (mg/kg)   Inhalation LC50 range (ppm)
1         <0.1                      <0.04                       <0.4
10        0.1–1                     0.04–0.4                    0.4–4
100       1–10                      0.4–4                       4–40
1000      10–100                    4–40                        40–400
5000      100–500                   40–200                      400–2000
Products that are not specifically listed with an EPA-assigned RQ but do fit the definition of hazardous are now divided into categories of volatile or nonvolatile. Products that do not meet the definition of "hazardous substance" set forth above, OR that are not volatile AND do not require a formal cleanup, are assumed to have an RQ designation of "none" (see Figure 7.4).
Following the "hazardous substance" AND volatile branch of the flowchart (Figure 7.4), we now assess these volatile substances. Highly volatile products of concern produce vapors which, when released into the atmosphere, cause potential acute hazards, but usually only minimal chronic hazards. Common pipeline products that will fall into this category include methane, ethane, propane, ethylene, propylene, and other liquefied petroleum gases. These products also meet the definition of "hazardous substances" set forth above.
We can assume that the bulk of the hazard from highly volatile substances occurs in leaks to the atmosphere. We assume that all leaks of such products into any of the three possible environmental media (air, soil, water) will ultimately cause a release to the air. We can then surmise that the hazard from these highly volatile liquids is mostly addressed in the atmospheric dispersion modeling analysis that will be performed in the acute leak impact consequences analysis. The chronic part of this leak scenario is thought to be in the potential for (1) residual hydrocarbons to be trapped in soil or buildings and pose a later flammability threat, and (2) the so-called "greenhouse" gases that are thought to be harmful to the ozone layer of the atmosphere. These threats warrant an RQ equivalent of 5000 pounds in this ranking system.
This leaves the less volatile hazardous substances, which also need an assigned RQ. Included here are petroleum products such as kerosene, jet fuel, gasoline, diesel oil, and crude oils. For spills of these substances, the acute hazards are already addressed in the flammability, toxicity, and reactivity assessment. Now, the chronic effects such as pollution of surface waters or groundwater and soil contamination are taken into account.
Spills of nonvolatile substances must be assessed as much from an environmental insult basis as from an acute hazard basis. This in no way minimizes the hazard from flammability, however. The acute threat from spilled flammable liquids is addressed in the acute portion of the leak impact. The longer term impact of spilled petroleum products is obtained by assigning an RQ number to these spills. It is recommended that these products be classified as category B spills (reportable quantities of 100 pounds) unless strong evidence places them in another category. This means the RQ equivalent is 100 pounds. An example of evidence sufficient to move the product down one category (more hazardous) would be the presence of a significant amount of category X or category A material (such as methylene chloride, a category X substance). This is discussed further below. Evidence that could move the petroleum product into a category C or category D (less hazardous) would be high volatility or high biodegradation rates.
To make further distinctions within this group, more complex determinations must be made. The value of these additional determinations is not thought to outweigh the additional
costs. For instance, it can perhaps be generally stated that the
heavier petroleum products will biodegrade at a slower rate
than the lighter substances. This is because the degradability is
linked to the solubility, and the lighter products are usually
more soluble. However, it can also be generally stated that
the lighter petroleum substances may more easily penetrate
the soil and reach deeper groundwater regions. This is also a
solubility phenomenon. We now have conflicting results of
a single property. To adequately include the property of density (or solubility), we would have to balance the benefits of
quicker degradation with the potential of more widespread
environmental harm.
We have now established a methodology to assign a ranking, in the form of an RQ category, for each pipeline product. An important exception to the general methodology is noted. If the quantity spilled is great enough to trigger the RQ of some trace component, this RQ should govern. This scenario may occur often because we are using complete line rupture as the main leak quantity determinant. For example, a crude oil product that has 1% benzene would reach the benzene RQ number on any spill greater than 1000 pounds. This is because the benzene RQ is 10 pounds, and 1% of a 1000-pound spill of product containing 1% benzene means that 10 pounds of benzene was spilled.
To easily account for this general exception to the RQ assignment, the evaluator should start with the leak quantity calculation. She can then work from the CERCLA list and determine the maximum percentage of each trace component that must be present in the product stream before that component governs the RQ determination. Comparing this to an actual product analysis will point out the worst case component that will determine the final RQ rating. An example illustrates this.
Example 7.2: Calculating the RQ

An 8-in. pipeline that transports a gasoline that is known to contain the CERCLA hazardous substances benzene, toluene, and xylene is being evaluated. The leak quantity is calculated from the line size and the normal operating pressure (normal pressures instead of maximum allowable pressures are used throughout this company's evaluations) to be 10,000 pounds. This calculated leak quantity is now used to determine component percentages that will trigger their respective RQs for this spill:

Benzene (RQ = 10):     10/10,000 = 0.001 = 0.1%
Toluene (RQ = 1000):   1000/10,000 = 0.1 = 10%
Xylene (RQ = 1000):    1000/10,000 = 0.1 = 10%
The evaluator can now look at an actual analysis to see if the actual product stream exceeds any of these weight percentages. If the benzene concentration is less than 0.1% and the toluene and xylene concentrations are each less than 10%, then the RQ is set at 100 pounds, the default value for gasoline. If, however, actual analysis shows the benzene concentration to be 0.7%, then the benzene RQ set at 10 pounds governs. This is because more than 10 pounds of benzene will be spilled in a 10,000-pound spill of this particular gasoline stream.
Gasolines generally are rich in benzene, but they are also
fairly volatile. Heating oils, diesel, and kerosene are more persistent, but may contain fewer toxicants and suspected carcinogens. Crude oils, of course, cover a wide range of viscosities
and compositions. The pipeline operator will no doubt be
familiar with his products and their properties.
Note that there is a 2-point spread between each RQ classification. The evaluator may pick the midpoint between two
RQs if she has special information that makes it difficult to
strictly follow the suggested scoring. Once again, she must be
consistent in her scoring.
Product hazard score
We arrive at a total product hazard score by using this equation:
Product hazard score = acute hazard score + chronic hazard score
Notes on toxicity
An important part of the degree of consequences, both acute and chronic, is toxicity. The degree of toxic hazard is usually expressed in terms of exposure limits to humans. Exposure is only an estimate of the more meaningful measure, which is dosage. The dose is the amount of the product that gets into the human body. Health experts have established dosage limits beyond which permanent damage to humans may occur. Because the intake (dose) is a quantity that is difficult to measure, it is estimated by measuring the opportunity for ingesting a given dose. This intake estimate is the exposure.
There are three recognized exposure pathways: inhalation,
ingestion, and dermal contact. Breathing contaminated air,
eating contaminated foods, or coming into skin contact
with the contaminant can all lead to an increased dose level
within the body. Some of the exposure pathways can extend
for long distances, over long periods of time from the point
of contaminant release. Plants and animals that absorb the
contaminant may reach humans only after several levels of
the food chain. Groundwater contamination may spread
over great distances and remain undetected for long periods.
Calculations are performed to estimate dosages for each exposure pathway.
EPA ingestion route calculations include approximate consumption rates for drinking water, fruits and vegetables, beef and dairy products, fish and shellfish, and soil ingestion (by children). These consumption rates, based on age and sex of the population affected, are multiplied by the contaminant concentration and by the exposure duration. This value, divided by the body weight and life span, yields the lifetime average ingestion exposure.
In a similar calculation, the lifetime average inhalation exposure yields an estimate of the inhalation route exposure. This is
based on studies of movement of gases into and out of the lungs
(pulmonary ventilation). The calculation includes considerations for activity levels, age, and sex.
The dermal route dose is obtained by estimating the dermal exposure and then adjusting for the absorption of the contaminant. Included in this determination are estimates of body surface area (which is, in turn, dependent on age and sex) and typical clothing of the exposed population.
In each of these determinations, estimates are made of activity times in outdoor play/work, showering, driving, etc. Life spans are similarly estimated for the population under study.
We are not proposing that all of these parameters be individually estimated for purposes of a risk assessment. The evaluator should realize the simplifications he is making, however, in rating spills here. Because we are only concerned with relative hazards, accuracy is not lost, but absolute risk determination often requires more formal methods.
B. Leak volume
For purposes here, the terms leak, spill, and release are used
interchangeably and can apply to unintentional episodes of
product escaping from a pipeline system, whether that product
is in the form of liquid, gas, or a combination. The total spill
quantity is the sum of leak volumes prior to system isolation
(includes detection and reaction times), the leak volume after
facility isolation (drain and/or depressure time), and mitigated
leak volume (secondary containment). The following paragraphs discuss pipeline spills and suggest ways to model spill
size for a relative risk assessment.
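The total spill quantity described above is a simple sum of the three contributions (a sketch; the variable names and the barrel quantities in the example are illustrative):

```python
def total_spill_volume(pre_isolation, post_isolation, mitigated):
    """Total spill quantity = volume released before system isolation
    (detection + reaction time) + volume released after isolation
    (drain and/or depressure time) + mitigated leak volume
    (secondary containment)."""
    return pre_isolation + post_isolation + mitigated

# e.g., 500 bbl before isolation, 200 bbl draindown, 50 bbl into containment
print(total_spill_volume(500, 200, 50))  # 750
```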
Leakedvolume or spill size is a function of leak rate, reaction
time, and facility capacities. It is a critical determinant of
damage to receptors under the assumption that hazard zone size
is proportional to spill size. This assumption is a modeling
convenience and will not hold precisely true for all scenarios.
Some leaks have a negative impact that far exceeds the impacts predicted by a simple proportion to leak rate. For example, in a contamination scenario, a 1 gal/day leak rate corrected after 100 days is often far worse than a 100 gal/day leak rate corrected in 1 day, even though the same amount of product is spilled in either case. Unknown and complex interactions between small spills, subsurface transport, and groundwater contamination, as well as the increased ground transport opportunity, account for the increased chronic hazard. On the other hand, from an acute hazard perspective, such as thermal radiation, the slower leak is preferable.
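The trade-off above can be sketched numerically. The chronic-hazard weighting function below is purely hypothetical, included only to illustrate how leak duration might be penalized in a relative scoring scheme:

```python
# Illustrative sketch (not from the text): two leak scenarios with equal
# total volume but very different durations. The chronic-hazard weight
# is a hypothetical function, chosen only to show the modeling idea.

def total_volume(rate_gal_per_day: float, days: float) -> float:
    """Total spilled volume in gallons."""
    return rate_gal_per_day * days

def chronic_weight(days: float) -> float:
    """Hypothetical weight that grows with exposure duration, reflecting
    greater subsurface transport opportunity for long-lived leaks."""
    return 1.0 + 0.05 * days  # assumed form, for illustration only

slow_leak = total_volume(1.0, 100.0)   # 1 gal/day for 100 days
fast_leak = total_volume(100.0, 1.0)   # 100 gal/day for 1 day
assert slow_leak == fast_leak == 100.0  # same quantity spilled either way

slow_weighted = slow_leak * chronic_weight(100.0)
fast_weighted = fast_leak * chronic_weight(1.0)
assert slow_weighted > fast_weighted  # long-duration leak scores worse chronically
```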
The overall equation for LIF recommends breaking the spill
and dispersion variables into separate components. This facilitates the assessment of possible spill mitigations. For example,
dispersion potential can be affected by secondary containment
where the released contents are fully contained in a leak recovery system, or at least limited in their spread by in-station
berms or natural barriers. So, even if the potential volume
released has not changed, risk can be reduced by preventing the
spread of the spill.
However, in many of the sample approaches discussed below, it is a modeling convenience to select variables that impact both the spill size and dispersion potential and use them simultaneously to assess the overall spill scenario. Therefore, this leak evaluation section is organized into separate discussions for leak size, mitigation, and dispersion potential, but the actual scoring examples usually blend the three.
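As a rough sketch of this separation (all scores and the containment credit are hypothetical values, not from the text), spill size and dispersion can be kept as independent factors so that a mitigation credit applies to only one of them:

```python
# Hypothetical sketch: keep spill size and dispersion as separate factors
# so that mitigations can be credited independently.

def relative_consequence(spill_score: float,
                         dispersion_score: float,
                         containment_credit: float = 1.0) -> float:
    """Combine spill and dispersion components of a leak impact factor.

    containment_credit < 1.0 models secondary containment (berms,
    leak-recovery systems) that limits spread without changing the
    potential volume released.
    """
    return spill_score * dispersion_score * containment_credit

base = relative_consequence(8.0, 6.0)            # no mitigation
mitigated = relative_consequence(8.0, 6.0, 0.5)  # berms limit the spread
assert mitigated < base  # risk reduced even though released volume is unchanged
```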
Hole size
As a critical component of assessing the volume or rate of a release, the failure opening size (hole size) through which the release occurs must be estimated. A criterion must be established for choosing a leak rate scenario for a release from a pipeline. It is reasonable to assume that virtually any size leak may form in any pipeline. The evaluator could simply choose a
Leak volume 7/143
1-in.-diameter hole as the leak size. However, this would not adequately distinguish between a 36-in.-diameter pipeline and a 4-in.-diameter pipeline. While a 1-in. hole in either might cause approximately the same spill or release (initially, at least), we intuitively believe that a 36-in.-diameter pipeline presents a greater hazard than does a 4-in.-diameter pipeline, all other factors being equal. This is no doubt because a much greater release can occur from the 36-in.-diameter pipeline than from the 4-in. line.
The hole size is determined by the failure mode, which in
turn is a function of pipe material, stress conditions, and the
failure initiator.
As an extreme example of failure mode, an avalanche failure is characterized by rapid crack propagation, sometimes for thousands of feet along a pipeline, which completely opens the pipe. Main contributing factors to an avalanche failure include low material toughness (a more brittle material that allows crack formation and growth), high stress level in the pipe wall (usually at the base of a crack), and an energy source that can promote rapid crack growth (usually a gas compressed under high pressure).
In many applications, a risk assessment model does not attempt to distinguish among likely failure modes; a worst case scenario is often assumed for simplicity. Distinguishing between leak types adds a degree of complexity to the model; however, the added information that is provided can be useful in some cases. In a general sense, the leak size probabilities can somewhat offset an otherwise higher consequence event. For example, a smaller diameter line more prone to large breakage can equal the consequences of a larger line that is prone to small pinhole leaks.
A spill size probability distribution can be developed from
an examination of past releases. This is further discussed in
Chapter 14.
We will assume, if only for the sake of simplification, that a larger hole size leads to a larger leak and that a larger leak has the potential for more severe consequences. Of course, under the right circumstances, a large- or small-area failure can be equally consequential in the pipeline system. For example, a more ductile failure that allows only a minor pipe wall tear can leak undetected for long periods, allowing widespread migration of leaked product. A more violent break of the pipe wall, on the other hand, may cause a rapid depressurization and quick detection of the problem. See also the discussion of leak detection in Chapters 7 and 11 for more details regarding possible leak volumes.
Because of the many different materials and conditions that may need to be compared when studying some pipeline systems, a consideration can be included to allow for a higher or lower anticipated incidence of large openings in a pipe failure. One intent is to make a distinction between pipes more likely to fail in a catastrophic fashion. This is highly dependent on pipe material toughness. Where pipe material toughness is constant, changing pipe stress levels or initiating mechanisms will determine the likely failure mode.
Figure 7.5 shows a model of the interrelationships among
some of the many factors that determine the type of pipeline
leak that is likely. Initiating mechanisms that promote cracks
are more likely to lead to a large leak than are mechanisms that
cause a pinhole-type leak. (See Chapter 5 for further discussion
on fracture mechanics and crack propagation.)
When different materials and various likely failure modes are to be included in the risk analysis, the spill size factor of the leak impact factor can be adjusted. Although such an adjustment is intended primarily to address widely different materials often encountered in a single distribution system, it can also be used to address more subtle differences in pipelines of basically the same material but operated under different conditions. For example, a higher strength steel pipeline usually has slightly less ductility than Grade B steel and, when combined with factors such as changing stress levels and crack initiators, this raises the likelihood of an avalanche-type line failure.
An important difference lies in materials that are prone to more consequential failure modes. A large leak area is usually characterized by the action of a crack in the pipe wall. A crack is more able to propagate in a brittle material; that is, a brittle pipe material is more likely to fail in a fashion that creates a large leak area (equal to or greater than the pipe cross-sectional area). This problem is covered in more detail in a discussion of fracture mechanics in Chapter 5.
The brittleness or ductility of a material is often expressed in combination with its strength as a material toughness or fracture toughness. Important material factors influencing toughness in pipeline steels include chemical composition (percentage of carbon, manganese, phosphorus, sulfur, silicon, columbium, and vanadium), deoxidization practices, cold work, and heat treatments [65]. The challenge of gauging the likelihood of a more catastrophic failure mode is further complicated by the fact that some materials may change over time. Given the right conditions, a ductile material can become more brittle.
Material toughness is an important variable in the potential for certain failure modes. Even in the same material, slight differences in chemical composition and manufacture can cause significant differences in toughness. The most common method used to assess material toughness is the Charpy V-notch impact test. This test has been shown to correlate well with fracture mechanics in that test results above certain values ensure that fatigue-cracked specimens will exhibit plastic behavior in failure. Charpy-Izod test results for some common pipeline materials are shown in Table 7.3.
The ASTM has reported [7] that the tensile stress behavior of steel is not well correlated with its behavior in notched impact tests such as the Charpy test. In other words, acceptable ductile behavior seen in tension failures sometimes becomes unacceptable brittle behavior under notch impact failure conditions. Therefore, specifying minimum material behavior under tensile stress will not ensure adequate material properties from a fracture mechanics standpoint. Impact testing or some equivalent of this is needed to ensure that material toughness properties are adequate. Until the last decade or so, material toughness or material ductility was not normally specified when pipe was purchased.
The rate of loading and the temperature are important parameters in assessing toughness. The likelihood of brittle failure
increases with increasing speed of deformation and with
decreasing temperature. Below a certain temperature, brittle
fracture will always occur in any material.
Figure 7.5 Sample of factors that influence failure hole size (factors shown include crack formation, crack growth, internal and external loading, rate of stressing, stress concentrators, and the resulting failure types)
High stress levels in a pipe wall are one of the most important contributing factors to a catastrophic failure. High stress
levels are a function of internal pressure, external loadings,
wall thickness, and exact pipe geometry. Mechanical pipe
damage (e.g., dents, gouges, buckles) and improper use of
some pipe fittings (e.g., sleeve taps, branch connections) can
dramatically impact stress levels by causing stress concentration points.
The energy source should also be considered here. A compressed gas, due to the higher energy potential of the compressible fluid, can promote significantly larger crack growth and, consequently, leak size. For relatively incompressible fluids, decompression wave speed will usually exceed crack propagation speed and hence will not promote large crack growth. In other words, on initiation of the leak, the pipeline depressures quickly with an incompressible fluid. This means that usually insufficient energy is remaining at the failure point to support continued crack propagation.
The use of crack arrestors can also impact the risk picture.
A crack arrestor is designed to slow the crack propagation sufficiently to allow the depressurization wave to pass. Once past
the crack area, the reduced pressure can no longer drive crack
growth. More ductile or thicker material (stress levels are
Table 7.3 Charpy-Izod tests

Material                       Tensile strength (psi)    Charpy-Izod test results (ft-lb)
Low-density polyethylene
Gray cast iron
Ductile cast iron
Carbon steel (0.2% carbon)
Carbon steel (0.45% carbon)

Source: Keyser, C. A., Materials Science in Engineering, 3rd ed., Columbus, OH: Charles E. Merrill Publishing Company, 1980, pp. 75-101, 131-159.
Note: The Charpy-Izod impact test is an accepted method for gauging material resistance to impact loadings when a flaw (a notch) is present. The test is temperature dependent and is limited in some ways, but can serve as a method to distinguish materials with superior resistance to avalanche-type failures.
reduced as wall thickness increases) can act as a crack arrestor.
Allowances for these can be made in the material toughness
scoring or in the stress level scoring.
As with other pressure-related aspects of this risk assessment, it is left to the evaluator to choose stress levels representing either normal operating conditions (routine pressures and
loadings) or extreme conditions (MOP or rare loading scenarios). The appropriateness of either option will depend on the
intended uses of the assessment. The choices made should
be consistent across all sections evaluated and across all risk
variables that involve pressure.
Initiating mechanisms
Another consideration in the failure initiator is the type of damage to the pipe that has initiated the failure. For instance, some analyses suggest that corrosion effects are more likely to lead to pinhole-type failures, whereas third-party damage initiators often have a relatively higher chance of leading to catastrophic failures. Note that such statements are broad generalizations. Many mechanisms, and their possible interactions, might contribute to a failure. The first contributor to the formation of a crack might be very different from the contributor that ultimately leads to the crack propagation and pipe failure. When crack formation and growth are very low possibilities, the likelihood of a tear or a pinhole instead of a larger failure is increased.
When a strong correlation between initiator and failure mode is thought to exist, a scale can be devised that relates a consequence adjustment factor to the probability score. The probability score should capture the type of initiating event. For example, when the third-party damage index is higher (by some defined percentage, perhaps) than the corrosion index, the spill score is decreased, reflecting a larger possible leak size. When corrosion index scores are higher, the effective spill size can be decreased, reflecting a smaller likely hole size. This could similarly be done in the design index to capture stress and earth movement influences. It is left to the evaluator to more fully develop this line of reasoning when it is deemed prudent to do so. An example of the application of this reasoning to an absolute risk assessment is given in Chapter 14.
As an example of assessing the higher potential of avalanche failures, an adjustment factor can be applied to a previously calculated spill score (see following section). In this sample scheme, the two key variables used to determine the adjustment factor to be applied, especially for compressed gas pipelines, are (1) stress level and (2) material toughness. Consideration for initiating mechanism can also be added to this adjustment factor.
As a base case, pipe failures are modeled as complete failures where the leak area is equal to the cross-sectional area of the pipe. This allows a simple and consistent way to compare the hazards related to pipes of varying sizes and operating pressures. To incorporate the adjustment factor, the base case should be further defined as some "normal" situation, perhaps the case of a Grade B steel line operating at 60% of the specified minimum yield strength of the material. This base or reference case will have a certain probability of failing in such a way that the leak area is greater than the pipe cross-sectional area (the base case). In situations where the probability of this type of failure is significantly higher or lower than the base case, an adjustment factor can be employed (see Table 7.4).
This adjustment factor will make a real change in the risk
values, but, since it is a measure of likelihood only, it does
not override the diameter and pressure factors that play the
largest role in determining spill size. The final spill score is as follows:

Final spill score = (effective spill size score) × (adjustment factor for a larger failure opening)
where the effective spill size score is based on a small failure
opening. Alternatively, the adjustment factor can decrease the
effective spill size when the preliminary assessment assumes a
full-bore rupture scenario.
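One way such an adjustment might be coded is sketched below. The factor values, toughness categories, and the stress threshold are assumptions patterned on the discussion and on the structure of Table 7.4, not the text's prescribed numbers:

```python
# Sketch of the adjustment-factor idea, assuming a base case of Grade B
# steel at 60% SMYS. All factor values and thresholds are hypothetical.

def spill_adjustment(toughness: str, pct_smys: float) -> float:
    """Return a multiplier for the effective spill size score.

    toughness: 'low' (brittle, e.g., cast iron), 'base' (Grade B steel),
               'high' (tough material in low-stress service).
    pct_smys:  operating stress as a percentage of SMYS.
    """
    factors = {'low': 2.0, 'base': 1.0, 'high': 0.8}  # assumed values
    factor = factors[toughness]
    if pct_smys > 60.0:        # above the assumed base-case stress level
        factor *= 1.25         # assumed penalty for higher wall stress
    return factor

# A brittle pipe at high stress scores worse than the base case,
# mimicking a large increase in effective leak size.
final_score = 10.0 * spill_adjustment('low', 72.0)
assert final_score > 10.0 * spill_adjustment('base', 60.0)
```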
Therefore, when a material failure distinction is desired, the evaluator can create a scale of adjustment factors that will cover the range of pipe materials and operating stresses that will be encountered. When stress levels and material toughness values reach certain levels, the effective spill size can be adjusted. For instance, a material with lower toughness, operated at high stress levels, might cause the score to double. This is the same effect as a large increase in leak size (normally caused by an increase in pipe diameter or pressure in the basic risk assessment model). Table 7.4 provides an example of an adjustment scale. As mentioned earlier, such scales are normally more
Table 7.4 Effective spill size adjustment factors based on % SMYS of high-pressure gas pipeline (a)

Material toughness                          % of SMYS
Lowest (PVC)
Low (cast iron)
Medium (PE, API 5LX 60, or higher steel)
Base case (A53 Grade B steel)

(a) Use smaller values when evaluating a liquid pipeline.
useful in gas pipelines, given the higher energy level (and,
hence, the higher possibility for catastrophic failures) associated with compressed gases.
Release models
An underlying premise in risk assessment is that larger spill
quantities lead to greater consequences. In addition to spill
size, the evaluator must identify what kinds of hazards might be
involved since some are more sensitive to release rate, while
others are more sensitive to total volume released. The rate
of release is the dominant mechanism for most short-term
thermal damage potential scenarios, whereas the volume of
release is the dominant mechanism for many contamination-potential scenarios. Table 7.5 shows some common pipeline products and how the consequences should probably be modeled. Each of the modeling types shown in Table 7.5 is discussed in this chapter.
Because potential spill sizes are so variable and because
different spill characteristics will be of interest depending on
the product type, it is useful to create a spill score to represent
the relative spill threat. To assess a spill score, the evaluator
must first determine which state (vapor or liquid) will be
present after a pipeline failure. If both states exist, the more
severe hazard should govern or the spill can be modeled as a
combination of vapor and liquid (see Appendix B).
Even though the difficult-to-predict dispersion characteristics of a vapor release appear more complex than a liquid spill, the liquid spill is actually more challenging to model. The vapor release scenarios lend themselves to some simplifying assumptions and the use of a few variables as substitutes for the complex dispersion models. Liquid spills, on the other hand, are more difficult to generalize because there are infinite possibilities of variables such as terrain, topography, groundwater, and other characteristics that dramatically impact the severity of the spill.
In both release cases, liquid or vapor, leak detection can play a role in potential risks. This is discussed briefly under Dispersion in a subsequent section and also in Chapter 11.
Hazardous vapor releases
In an initial risk assessment for most general purposes, it is suggested that a leak scenario of a complete line failure (a guillotine-type shear failure) should be used to model the worst case leak rate. This type of failure causes the leak rate to be calculated based on the line diameter and pressure. Even though this type of line failure is rare, the risk assessment is still valid. With consistency of application, we can choose any hole size and leak rate. We are simply choosing one here that serves the dual role of incorporating the factors of pipe size and line pressure directly into rating vapor release potential.
Alternatively, several scenarios of failure hole sizes can be
evaluated and then combined. These scenarios could represent
the distribution of all possible scenarios and would require that
the relative probability of each hole size be estimated. This
requires additional complexity of analysis. However, because this
approach incorporates the fact that the larger hole size scenarios
are usually rare, it better represents the range of possibilities.
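A minimal sketch of this probability-weighted combination follows. The hole sizes, probabilities, and the area-proportional consequence measure are all hypothetical:

```python
# Sketch of combining several hole-size scenarios, each weighted by its
# estimated relative probability. All values are hypothetical.

holes = [
    # (hole diameter in inches, relative probability)
    (0.25, 0.80),   # pinhole leaks dominate the frequency
    (2.0,  0.15),
    (12.0, 0.05),   # full-bore ruptures are rare
]

# Probabilities across the scenario set should sum to 1.
assert abs(sum(p for _, p in holes) - 1.0) < 1e-9

# Take release rate (and hence consequence) as proportional to hole
# area, a simplification for illustration.
expected_consequence = sum(p * (d ** 2) for d, p in holes)
worst_case_only = 12.0 ** 2

# The weighted view is far less severe than assuming rupture every time,
# better representing the range of possibilities.
assert expected_consequence < worst_case_only
```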
Having determined the failure hole sizes to be used in the
assessment, the vapor release scenario also needs estimates for
the characteristics that will determine the potential consequences from the release. As discussed previously, the threats
from a vapor release are generally more dependent on release
rate than release volume, because the immediate vapor cloud
formation and thermal effects from jet fires are of most concern. Exceptions exist, of course, most notably scenarios that
involve accumulation of vapors in confined areas.
Table 7.5 Common pipeline products and modeling of consequences

Product                                              Hazard type                  Dominant hazard model
Flammable gas (methane, etc.)                        Thermal and blast            Jet fire; thermal radiation
Toxic gas (chlorine, H2S, etc.)                      Acute and chronic            Vapor cloud dispersion modeling
Highly volatile liquids (propane,
  butane, ethylene, etc.)                                                         Vapor cloud dispersion modeling; jet fire;
                                                                                  overpressure (blast) event
Flammable liquid (gasoline, etc.)                    Thermal and contamination    Pool fire; contamination
Relatively nonflammable liquid
  (diesel, fuel oil, etc.)
As one approach to assessing relative release rate impacts, the leak volume can be approximated by calculating how much vapor will be released in 10 minutes. Our interest in the leak volume under this approach is not a contradiction to the earlier statement of primary dependence on leak rate. The conversion of the leak rate into a volume is merely a convenience under this approach that allows a combined vapor-liquid release to be modeled in a similar fashion (see Appendix B).
The highest leak rate occurs when the pressure is the highest and the escape orifice is the largest. This leads to the assumption that, in most cases, the worst case leak rate happens near the instant of pipeline rupture, while the internal pressure is still the highest and after the opening has reached its largest area. As will be discussed later, the highest leak rate generally produces the largest cloud. As the leak rate decreases, the cloud shrinks. As an exception, for the case of a dense cloud, vapors may "slump" and collect in low-lying areas or "roll" downhill and continue to accumulate as the cloud seeks its equilibrium size. In modeling the 10-minute release scenario, we are conservatively assuming that all the vapor stays together in one cloud for the full 10-minute release. We are also conservatively neglecting the depressuring effect of 10 minutes' worth of product leakage. This is done to keep the calculation simple. The 10-minute interval is chosen to allow a reasonable time for the cloud to reach maximum size, but not long enough to be counting an excessive mass of well-dispersed material as part of the cloud. The amount of product released and the cloud size will almost always be overestimated using the above assumptions. Again, for purposes of the relative risk assessment, overestimation is not a problem as long as consistency is ensured. See Appendix B for more discussion of leak rate determinations.
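The 10-minute convention might be sketched as follows, conservatively assuming (as described above) that the initial release rate persists with no depressuring; the rate value is hypothetical:

```python
# Sketch of the 10-minute vapor release convenience. The initial
# (highest) leak rate is conservatively held constant for the full
# interval, with no credit for depressuring. The 50 lb/s rate is a
# hypothetical example, not a value from the text.

def ten_minute_release(initial_rate_lb_per_s: float) -> float:
    """Vapor mass assumed to form one cloud over a 10-minute release (lb)."""
    return initial_rate_lb_per_s * 10.0 * 60.0

cloud_mass = ten_minute_release(50.0)  # e.g., 50 lb/s at rupture
assert cloud_mass == 30000.0  # overestimates the cloud, but consistently so
```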
An alternative approach that avoids some of the calculation
complexities associated with estimating release quantities is to
use pressure and diameter as proxies for the release quantities.
Using a fixed damage threshold (thermal radiation levels; see
page 308), it has been demonstrated that the extent of the threat
from a burning release of gas is proportional to pressure and
diameter [83]. Therefore, pressure and diameter are suitable
variables for assessing at least one critical aspect of the potential consequences from a gas release. As in the first approach,
this can incorporate conservative assumptions regarding cloud
formation and dispersion.
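As a sketch of this proxy idea, a widely published potential impact radius form for natural gas makes the pressure-diameter dependence explicit. Treat the 0.69 coefficient (radius in feet, pressure in psig, diameter in inches) as an assumed stand-in for the reference [83] relationship, not a reproduction of it:

```python
# Sketch of pressure and diameter as proxies for gas-release threat
# extent, using the widely published potential impact radius form
# r = 0.69 * sqrt(p * d^2). Assumed stand-in, not the text's formula.
import math

def hazard_radius_ft(p_psig: float, d_in: float) -> float:
    """Approximate threat radius (ft) for a burning gas release."""
    return 0.69 * math.sqrt(p_psig * d_in ** 2)

big_line = hazard_radius_ft(1000.0, 36.0)   # 36-in. high-pressure line
small_line = hazard_radius_ft(1000.0, 4.0)  # 4-in. line, same pressure

# At equal pressure, the radius scales linearly with diameter.
assert abs(big_line / small_line - 9.0) < 1e-9
```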
Because the immediate hazards from vapor releases are
mostly influenced by leak rate, leak detection will not normally
play a large role in risk reduction. One notable exception is a
scenario where leak detection could minimize vapor accumulation in a confined space.
Hazardous liquid spills
Potential liquid spill size is a variable that depends on factors
such as hole size, system hydraulics, and the reliability and
reaction times of safety equipment and pipeline personnel.
Safety equipment and operation protocol are covered in other
sections of the assessment, so the system hydraulics alone are
used here to rank spill size.
Based on the expected potential hazards from a liquid spill,
including pool fire and contamination potential, the spill volume is a critical variable. Potential spill volume is estimated
from potential leak rates and leak times.
Leak rate is determined with a worst case line break scenario: a full-bore rupture. As with the atmospheric dispersion, choosing this scenario allows us to incorporate the line size and pressure into the hazard evaluation. A 36-in. diameter high-pressure gasoline line poses a greater threat than a 4-in. diameter high-pressure gasoline line, all other factors being equal. This is because the larger line can potentially create the larger spill.
Alternatively, several scenarios of failure hole sizes can be
evaluated and then combined, as was noted for the vapor release
modeling. The scenarios would represent the distribution of
all possible scenarios and would require that the relative probability of each hole size be estimated. This requires additional
complexity of analysis. However, because this approach incorporates the fact that the larger hole size scenarios are usually
more rare, it better represents the range of possibilities. It also
can evaluate scenarios where small amounts, below detection
capabilities, are leaked for very long periods and result in larger
total volume spills. This requires evaluation of leak detection
capabilities and the construction of representative scenarios.
Because the release of a relatively small volume of an incompressible liquid can depressure the pipeline quickly, the longer term driving force to feed the leak may be gravity and siphoning effects or pumping equipment limitations. A leak in a low-lying area may be fed for some time by the draining of the rest of the pipeline, so the evaluator should find the worst case leak location for the section being assessed. The leak rate should include product flow from pumping equipment. Reliability of pump shutdown following a pipeline failure is considered elsewhere.
Based on the worst case leak rate and leak location for the section, the relative spill size can be scored according to how much product is spilled in a fixed time period of, say, 1 hour. Leaks can be (and have been) allowed to continue for more than 1 hour, but leaks can also be isolated and contained in much shorter periods. The 1-hour period is therefore arbitrary, but will serve our purposes for a relative ranking. This approach will distinguish the more hazardous situations such as high-throughput, large-diameter liquid pipelines in low-lying areas.
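A sketch of the 1-hour scoring convention, combining pumping rate with gravity draindown at the section's worst case low point; all numbers are hypothetical:

```python
# Sketch of a worst-case 1-hour spill estimate for a liquid line. The
# draindown term represents product fed to a low-lying leak location by
# the rest of the pipeline. All values are hypothetical.

def one_hour_spill_bbl(pump_rate_bbl_per_hr: float,
                       draindown_bbl: float) -> float:
    """Volume released in the fixed 1-hour ranking period (bbl)."""
    return pump_rate_bbl_per_hr * 1.0 + draindown_bbl

# High-throughput, large-diameter line in a low-lying area vs. a small
# line on flat ground: the convention separates the two cases.
severe = one_hour_spill_bbl(4000.0, 1200.0)
mild = one_hour_spill_bbl(300.0, 50.0)
assert severe > mild
```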
In many scenarios, reaction to a liquid spill plays a larger role
in consequence minimization than does reaction to a gas
release. An adjustment to the spill score can be applied when it
can be shown that special capabilities exist that will reliably
reduce the potential spill size by at least 50%, as is detailed in
later sections.
The consequences arising from various liquid spill volumes are closely intertwined with the dispersion potential of those volumes. Therefore, evaluating these scenarios is often best done by simultaneously considering spill volume and dispersion, as is discussed later in this chapter. Some modeling options are shown in Table 7.6.
Highly volatile liquid releases
Calculating the quantity of material released under flashing conditions is a very complex task, due to the complex phenomena that take place during such a release. The process is nonlinear and out of equilibrium for at least part of the episode. Beyond the quantity calculation, the vapor
cloud generation calculation adds further complications. Many
variables such as weather conditions, heat transfer through soil,
Table 7.6 Liquid spill size analysis options

Variables used in the analysis and comments:

Flow rate only: Assumes high volume; full line rupture at MOP. Higher level screening to assess differences in consequence potential among different liquid pipeline systems (different locations, diameters, products, etc.). Generally assumes that things get worse uniformly in proportion to increasing leak size.

Flow rate and draindown potential: Adds more resolution to identify potential consequence differences along a single pipeline route since relative low spots are penalized for greater stabilization release volumes after section isolation.

Add basic terrain considerations: Improves evaluations of potential consequences in general since site-specific variables are included. Examples of terrain variables include slope, surface flow resistance, and waterway considerations obtained from maps or from general field surveys. Water body intersects can be determined and characterized based on the water body's flow (slope at intersect point can be a proxy for water flow rate).

Hole size and pressure: More realistic, but must include probabilities of various hole sizes. Provides for an estimate of pinhole leak volumes over several years.

Particle trace: Also called flow path modeling, this is normally a computer application (GIS) that determines the path of a hypothetical leaked drop of liquid. Includes topography and sometimes surface flow resistance. The computer routine is sometimes called a costing function. Accumulation points and water body intersects are determined. Arbitrary stop points of potential flow paths may need to be set.

Particle trace with release volume: Adds the aspect of ground penetration (soil permeability) and the driving force of the volume released in order to better characterize both the depth and lateral spread distance of the leak. Flow path stop points are automatically determined. Volumes may be determined based on worst case releases or probabilistic scenarios.

Add aquifer characteristics: Adds a hydrogeologic subsurface component to surface flow analyses to model groundwater transport of portions of a leak that may contact the aquifer over time. More important for toxic and/or environmentally persistent contamination scenarios.
wind patterns, rate of air entrainment, and temperature changes
must be considered. In simple terms, a gaseous cloud of highly
volatile liquids (HVLs) will be formed from the initial leak,
which in turn transitions into a combination of secondary
sources. The secondary sources include the quantity of immediately flashing material, the vapor generation from a liquid
pool, and the evaporation of airborne droplets.
The initial release rate will be the highest release rate of the
event. This rate will decrease almost instantly after the rupture.
As the depressurization wave from a pipeline rupture moves
from the rupture site, pressures inside the pipeline quickly drop
to vapor pressure. At vapor pressure, the pipeline contents will
vaporize (boil), releasing smaller quantities of vapor.
Releases of a highly volatile liquid are similar in many respects to the vapor release scenario. Key differences include the following:

- HVLs have multiphase characteristics near the release point. Product escapes in liquid, vapor, and aerosol form, increasing the vapor generation rate in the immediate area of the release.
- Liquid pools might form, also generating vapors.
- As the pipeline reaches vapor pressure, the remaining liquid contents vaporize through flashing and boiling, until the release is purely gaseous.
- Lower vapor pressures (compared to pure gases) generally lead to heavier vapors (negatively buoyant), more cohesive clouds with more concentrated product, and possibly higher energy potential.
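The immediately flashing portion is sometimes approximated with a simple energy balance (flash fraction ≈ cp(T − Tb)/hvap). The sketch below uses approximate propane properties and is a deliberate simplification of the multiphase behavior just described:

```python
# Sketch of an initial flash-fraction estimate for an HVL release, using
# the common energy-balance approximation. Property values are
# approximate and the formula neglects aerosol and pool contributions.

def flash_fraction(cp: float, t_storage: float, t_boil: float,
                   h_vap: float) -> float:
    """Fraction of released liquid flashing immediately to vapor.

    cp: liquid specific heat, kJ/(kg K); temperatures in deg C;
    h_vap: latent heat of vaporization, kJ/kg.
    """
    f = cp * (t_storage - t_boil) / h_vap
    return min(max(f, 0.0), 1.0)  # clamp to a physical fraction

# Propane near 20 C: cp ~2.5 kJ/(kg K), boiling point ~-42 C,
# h_vap ~426 kJ/kg (approximate property values).
f = flash_fraction(2.5, 20.0, -42.0, 426.0)
assert 0.3 < f < 0.45  # roughly a third flashes; aerosols add to this
```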
C. Dispersion
A release of pipeline contents can impact a very specific area,
determined by a host of pipeline and site characteristics. The
relative size of that impacted area is the subject of this portion
ofthe consequence assessment.
As modeled by physics and thermodynamics, spilled product will always seek a lower energy state. The laws of entropy tell us that systems tend to become increasingly disordered. The product will mix and intersperse itself with its new environment in a nonreversible process. The spill has also introduced stress into the system. The system will react to relieve the stress by spreading the new energy throughout the system until a new equilibrium is established.
The characteristics of the spilled product and the spill site determine the movement of the spill. The possibilities are spills into the atmosphere, surface water, soil, groundwater, and man-made structures (buildings, sewers, etc.). Accurately predicting these movements can be an enormously complex modeling process. For releases into the atmosphere, product movement is covered in the discussion of vapor dispersion. Liquid dispersion scenarios cover releases into other media. Some spill scenarios involve both the spill of a liquid and vapor generation from the spilled liquid as it evaporates.
For purposes of many assessments, accurate modeling of the dispersion of spilled product will not be necessary. It is the propensity to do harm that is of interest. A substance that causes great damage even at low concentrations, released into an area that allows rapid and wide-ranging spreading, creates the greatest hazard.
If a product escapes from the pipeline, it is released as a
gas and/or a liquid. As a gas, the product has more degrees of
freedom and will disperse more readily. This may increase or
decrease the hazard, since the product may cover more area,
but in a less concentrated form. A flammable gas will entrain
oxygen as it disperses, becoming an ignitable mixture. A toxic
gas may quickly be reduced to safe exposure levels as its
concentration decreases.
Dispersion 7/149
The relative density of the gas in the atmosphere will partly
determine its dispersion characteristics. A heavier gas will generally stay more concentrated and accumulate in low-lying
areas. A lighter gas should rise due to its buoyancy in the air.
Every density of gas will be affected to some extent by air
temperature, wind currents, and terrain.
A product that stays in liquid form when released from the
pipeline poses different problems. Environmental insult,
including groundwater contamination, and flammability are
the most immediate problems, although toxicity can play a role
in both short- and long-term scenarios.
For purposes of risk assessment, dispersion goes beyond
the physical movement of leaked product. Thermal and blast
effects can range far beyond the distance that the leaked molecules have traveled. The calculation of a hazard zone expands
the concept of dispersion to include these additional ramifications. Dispersion is normally the determining factor of a hazard
zone. Dispersion and, hence, hazard zone are also intuitively
closely intertwined with spill quantity. This risk analysis
assesses dispersion somewhat separately from spill size in the
interest of risk management, because there are risk mitigation opportunities to reduce spill size or dispersion independently.
Reductions in dispersion are assumed to reduce the potential
consequences. From a risk standpoint, the degree of dispersion
impacts the area of opportunity because more wide-ranging
effects offer greater chances to harm sensitive receptors.
Reductions in the amount or range of the spill may occur
through natural processes of evaporation and mixing and
thereby reduce the potential consequences. Similarly, reductions in the harmful properties of the substance reduce the risk.
This may occur through natural processes such as biodegradation, photolysis, and hydrolysis. If the by-products of these
reactions are less harmful than the original substance, which
they often are, the hazard is proportionally reduced. Barriers
that either limit dispersion or protect receptors from hazards
also reduce risks.
Several dispersion mechanisms, the underlying processes
that create the dispersion or hazard zone area, are examined
in this section. The hazard zone for a gas release is established
through either a jet fire or a vapor cloud. The hazard zone for
a liquid release arises from either a pool fire or a contamination scenario. HVL hazard zones can arise from any of these mechanisms.
Jet fire
Release of a flammable gas carries the threat of ignition and
subsequent fire. Thermal radiation from a sustained jet or torch
fire, potentially preceded by a fireball, is a primary hazard to
people and property in the immediate vicinity of a gas pipeline
failure. In the event of a line rupture, a vapor cloud will form,
grow in size as a function of release rate, and usually rise due to
discharge momentum and buoyancy. This cloud will normally
disperse rapidly and an ignited gas jet, or unignited plume, will
be established. If ignition occurs before the initial cloud
disperses, the gas may burn as a rising and expanding fireball.
A trench fire is a special type of jet fire. It can occur if a discharging gas jet impinges on the side of the rupture crater or
some other obstacle. This impingement redirects the gas jet,
reducing its momentum and length while increasing its width,
and possibly producing a horizontal profile fire. The affected
area of a trench fire can be greater than for an unobstructed jet
fire because more of the heat-radiating flame surface may be
concentrated near the ground surface [83].
Calculating hazard zones from jet fires is discussed later in
this chapter and in Chapter 14. Those discussions illustrate that
pressure, diameter, and energy content of the escaping gas are
critical determinants in the thermal effects distances.
Vapor clouds (vapor spills)
Of great interest to risk evaluators are the characteristics of
vapor cloud formation and dispersion following a product
release. Vapor can be formed from product that is initially in a
gaseous state or from a product that vaporizes as it escapes or as
it accumulates in pools on the ground. The amount of vapor put
into the air and the vapor concentrations at varying distances
from the source are the subject of many modeling efforts.
At least two potential hazards are created by a vapor cloud.
One occurs if the product in the cloud is toxic or displaces oxygen (that is, acts as an asphyxiant). The threat is then to any susceptible life forms that come into contact with the cloud. Larger
clouds or low-lying clouds provide a greater area of opportunity for this contact to occur and hence carry a greater hazard.
The second hazard occurs if the cloud is flammable. The
threat then is that the cloud will find an ignition source, causing
fire and/or explosion. Larger clouds logically have a greater
chance of finding an ignition source and also increase the
damage potential because more flammable material may be
involved in the fire event.
Of course, the vapor cloud can also present both hazards:
toxicity and flammability.
Vapor cloud ignition
When an escaping pipeline product forms a vapor cloud, the
entire range of possible concentrations of the product/air mixture exists. Within a specific fuel-to-air ratio range, the vapor
cloud will be flammable. This is the range between the upper
flammability limit (UFL) and the lower flammability limit
(LFL), which are the threshold concentration levels of interest
(also called explosion limits) representing the concentration of
the vapors in the air that support combustion. Ignition is only
possible for concentrations of vapors mixed with air that fall
between these limits. Outside these limits, the mixture is either
too rich or too lean to ignite and burn. Because mixing is by no
means constant, the LFL distance will vary in any release event.
A flammable gas will therefore be ignitable at this point in the cloud.
Although ignition is not necessarily inevitable, there is often
a reasonable probability of ignition due to the large number of
possible ignition sources: cigarettes, engines, open flames,
residential heaters, and sparks, to name just a few. It is not
uncommon during gaseous product release events for the ignition source to be created by the release of energy, including
static electricity arcing (created from high dry gas velocities),
contact sparking (e.g., metal to metal, rock to rock, rock to
metal), or electric shorts (e.g., movement of overhead power lines).
It is conservative to assume, then, that an ignition source will
come into contact with the proper fuel-to-air ratio at some point
during the release. The consequences of this contact range from
a jet fire to a massive fireball and detonation. On ignition, a
flame propagates through the cloud, entraining surrounding air
and fuel from the cloud. If the flame propagation speed
becomes high enough, a fireball and possibly a detonation can
occur. The fireball can radiate damaging heat far beyond the
actual flame boundaries, causing skin and eye damage and secondary fires. If the cloud is large enough, a “fire storm” can be
created, generating its own winds and causing far-reaching
secondary fires and radiant heat damage.
Ignition probabilities are discussed in Chapter 14.
In rare cases, a vapor cloud ignition can lead to an explosion.
An explosion involves a detonation and the generation of blast
waves, commonly measured as overpressure in psig. An unconfined vapor cloud explosion, in which a cloud is ignited and the
flame front travels through the cloud quickly enough to generate a shock wave, is a rare phenomenon. Such a phenomenon is
called an overpressure wave. A confined cloud is more likely to
explode, but confinement is difficult to accurately model for an
open-terrain release. The intensity of the overpressure event is
inversely proportional to the distance from the explosion
point; the intensity is less at greater distances. Various overpressure levels can be related to various damages. An overpressure level of 10 psi generally results in injuries (eardrum
damage) among an exposed population. Higher overpressure
levels cause more damages but would only occur closer to the
explosion point. It is conservatively assumed that an unconfined vapor cloud explosion can originate in any part of the
cloud. Therefore, the overpressure distance is conservatively
added to the LFL distance (the ignition distance) for purposes
of hazard zone estimation.
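The conservative summation described above can be sketched numerically. In this illustrative fragment, the inverse-distance decay and the 10-psi injury threshold follow the text; the function names, the p(d) = p_ref * d_ref / d form, and the example distances are hypothetical.

```python
def overpressure_distance(p_ref_psi, d_ref, threshold_psi=10.0):
    """Distance at which blast overpressure decays to threshold_psi.

    The text states that overpressure intensity is inversely
    proportional to distance from the explosion point; p(d) =
    p_ref * d_ref / d is the simplest form consistent with that and
    is an assumption here.  The 10-psi default is the injury
    (eardrum damage) level cited in the text.
    """
    return p_ref_psi * d_ref / threshold_psi


def vapor_cloud_hazard_zone(lfl_distance, p_ref_psi, d_ref):
    # Conservative: the explosion may originate anywhere in the
    # cloud, so the overpressure distance is added to the LFL
    # (ignition) distance.
    return lfl_distance + overpressure_distance(p_ref_psi, d_ref)
```

For example, if 100 psi were estimated at 50 ft from the explosion point, the 10-psi contour would lie at 500 ft; added to a 300-ft LFL distance, the hazard zone would extend 800 ft.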
The manner in which an ignited vapor cloud potentially
transforms from a burning event to an exploding event is not
well understood. It rarely occurs when the weight of airborne
vapor is less than 1000 pounds [83].
Should a detonation occur, widespread damage is possible. A
detonation can generate powerful blast waves reaching far
beyond the actual cloud boundaries. Most hydrocarbon/air
mixtures have heats of combustion greater than the heat of
explosion of TNT [8], making them very high energy substances. The possibility of vapor cloud explosions is enhanced
by closed areas, including partial enclosures created by trees or
buildings. Unconfined vapor cloud explosions are rare but
nonetheless a real danger. Certain experimental military bombs
are designed to take advantage of the increased blast potential
created by the ignition of an unconfined cloud of hydrocarbon/air vapor.
Damages that could result from overpressure (blast) events
are discussed in Chapter 14.
Vapor cloud size
The predicted vapor cloud size is a function of variables such as
the release rate, release duration, product characteristics,
threshold concentrations of interest, and surrounding environment (e.g., weather, containment barriers, ignition source proximity) at the release site.
A cloud of lighter-than-air vapors such as natural gas or
hydrogen will normally be buoyant; it will tend to rise quickly
with minimum lateral spreading. An HVL cloud will normally
be negatively buoyant (heavier than air) due mostly to the evaporative cooling of the material. These dense vapors tend to slump
and flow to low points in the immediate topography. Typical
cloud configurations are roughly cigar or pancake shaped.
A stable atmospheric condition is usually chosen for release
modeling, in order to generate scenarios closer to worst case
scenarios. Atmospheric stability classes are discussed in
Chapter 14 and shown in Table 14.31. This stability class represents some fraction of possible weather type days in any year.
Under very favorable conditions, unignited cloud drift may lead
to extended hazard zone distances, but such events are seen to be
rare, difficult to estimate, and generally considered within the
conservative assumptions already included in these estimates.
Many variables affect the dispersion of vapor clouds. In
general, these include
Release rate and duration
Prevailing atmospheric conditions
Limiting concentration
Elevation of source
Surrounding terrain
Source geometry
Initial density of release [5].
Release duration is not as critical in estimating maximum
cloud size since the release rate will diminish almost instantly
as the pipeline rapidly depressures under the pipeline rupture
scenario. Smaller size leaks could create vapor clouds that
would be more dependent on release duration, especially under
weather conditions that support cloud cohesiveness, but these
scenarios are not thought to produce maximum cloud sizes.
The extreme complexities surrounding a vapor release scenario make the problem only approximately solvable for even a
relatively closed system. An example of a somewhat closed
system is a well-defined leak from a fixed location where the
terrain is known and constant and where weather conditions
can be reasonably estimated from real-time data. A cross-country pipeline, on the other hand, complicates the problem by
adding variables such as soil conditions (moisture content, temperature, heat transfer rates, etc.), topography (elevation profile, drainage pathways, waterways, etc.), and often constantly
changing terrain and weather patterns (amount of sunshine,
wind speed and direction, humidity, elevation, etc.).
Even though it vaporizes quickly, a highly volatile pipeline
product can form a liquid pool immediately after release. This
could be the case with products such as butane or ethylene. The
pool would then become a secondary source of the vapors.
Vapor generation would be dictated by the temperature of the
pool surface, which in turn is controlled by the air temperature,
the wind speed over the pool, the amount of sunshine to reach
the pool, and the heat transfer from the soil (Figure 7.6). The
soil heat transfer is in turn governed by soil moisture content,
soil type, and both recent and current weather. Even if all of
these factors could be accurately measured, the system is still a
nonlinear relationship that cannot be exactly solved.
Cloud modeling
A vapor cloud that covers more ground surface area, either due
to its size or its cohesiveness, has a greater area of opportunity
to find an ignition source or to harm living creatures. This
should be reflected in the risk assessment.

Figure 7.6  Vapor cloud from pipeline rupture (showing ground level and depressure wave)
To fully characterize the maximum dispersion potential,
numerous scenarios run on complex models would be required.
Even with much analysis, such models can only provide bounding estimates, given the numerous variables and possible
permutations of variables. So again, we turn to a few easily
obtained parameters that may allow us to determine a relative
risk ranking of some scenarios. An exact numerical solution is
not always needed or justified.
Dispersion studies have revealed a few simplifying truths
that can be used in this risk assessment. In general, the rate
of vapor generation rather than the total volume of released
vapor is a more important determinant of the cloud size.
A cloud reaches an equilibrium state for a given set of
atmospheric conditions. At this equilibrium, the amount of
vapor added from the source theoretically exactly equals the
amount of vapor that leaves the cloud boundary (the cloud
boundary can be defined as any vapor concentration level).
So when the surface area of the cloud reaches a size whereby
the rate of vapor escaping the cloud equals the rate entering the
cloud, the surface area will not grow any larger (see Figure
7.6). The vapor escape rate at the cloud boundary is governed
by atmospheric conditions. The cloud will therefore remain
this size until the atmospheric conditions or the source rate
change. This fact thus yields one quantifiable risk variable:
leak rate.
So, given a constant weather condition, the cloud size is most
sensitive to the leak rate. The cloud will reach an equilibrium
where the rate of material added to the cloud balances the rate
of material leaving the cloud, thereby holding the cloud size
constant. The sensitivity is not linear, however. A 10-fold
increase in leak rate is seen as only a 3-fold increase in cloud
size, in some models.
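That sensitivity can be expressed as a power law for quick relative comparisons. The power-law form, with exponent log10(3) (about 0.48), is an assumption chosen to reproduce the 10-fold/3-fold behavior quoted above; it is not a formula from this text.

```python
import math


def relative_cloud_size(leak_rate, reference_rate=1.0):
    """Equilibrium cloud size relative to a reference leak rate.

    Assumes size scales as rate**log10(3), which reproduces the
    model behavior quoted in the text: a 10-fold increase in leak
    rate gives roughly a 3-fold increase in cloud size.
    """
    exponent = math.log10(3.0)  # ~0.477
    return (leak_rate / reference_rate) ** exponent
```

Under this assumed scaling, a 100-fold increase in leak rate implies roughly a 9-fold increase in cloud size.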
A second simplifying parameter is the effect of molecular
weight on dispersion. Molecular weight is inversely proportional to the rate of dispersion. A higher molecular weight tends
to produce a denser cloud that has a slower dispersion rate.
A denser cloud is less impacted by buoyancy effects and air
turbulence (caused by temperature differences, wind, etc.) than
a lighter cloud. Using this fact yields another risk variable:
product molecular weight.
In the absence of more exact data, it is therefore proposed
that the increased amount of risk due to a vapor cloud can be
assessed based on two key variables: leak rate and product
molecular weight. Meteorological conditions, terrain, chemical
properties, and the host of other important variables may be
intentionally omitted for many applications. The omission may
be justifiable for two reasons: First, the additional factors are
highly variable in themselves and consequently difficult to
model or measure. Second, they add much complexity and,
arguably, little additional accuracy for purposes of relative risk assessment.
Therefore, measures of relative leak rate and molecular
weight can be used to characterize the relative dispersion of a
released gas.
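As a minimal sketch of that two-variable approach, the fragment below combines the variables into a single relative score. Both functional forms (sub-linear growth with leak rate, inverse dependence on molecular weight) are illustrative assumptions, not a model prescribed by this text.

```python
import math


def relative_dispersion(leak_rate, molecular_weight):
    """Relative dispersion measure from leak rate and molecular weight.

    Illustrative only: dispersion grows sub-linearly with leak rate
    (a 10x rate giving roughly a 3x cloud size, per the text) and
    falls as molecular weight rises (heavier vapors form denser,
    slower-dispersing clouds).  Intended for ranking scenarios
    against each other, not for absolute cloud sizing.
    """
    rate_term = leak_rate ** math.log10(3.0)
    return rate_term / molecular_weight
```

Used consistently across scenarios, such a score ranks a methane release (molecular weight 16) above a comparable propane release (molecular weight 44) on relative dispersion, consistent with the buoyancy discussion above.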
Liquid spill dispersion
Physical extent ofspill
The physical extent of the liquid spill threat depends on the
extent of the spill dispersion, which in turn depends on the size
of the spill, the type of product spilled, and the characteristics of
the spill site. The size of the spill is a function of the rate of
release and the duration. Slow leaks that go undetected for long
periods can sometimes be more damaging than massive leaks
that are quickly detected and addressed.
To fully analyze a liquid spill scenario, a host of variables
must be assessed:
Product characteristics
Product viscosity
Product vapor pressure
Product flow rate
Product pressure
Product solubility
Product miscibility
Evapotranspiration rate.
Pipeline characteristics
Root cause of failure
Hole dimensions
Proximity to isolation valves
Time to recognize event
Time to confirm release
Time to close block valves
Initial release volume
Stabilization release volume.
Environment characteristics
Soil infiltration rate
Drainage pathways
Weather patterns
Proximity to ignition sources
Vegetative cover effects
Slope effects
Groundwater flow patterns
Proximity to surface waters.
In identifying all possible liquid leak impact ranges, it
may not be necessary to fully evaluate all of the potential
interplays among each of these variables. The added complexities and modeling costs often outweigh the benefits of such
detailed calculations. A range of leak analysis options is available, each of which might be appropriate for a certain type of assessment.
Topographic aspects will be a critical determinant in most liquid spill scenarios. It is difficult to generate a universally applicable scoring table for topography. Which is preferable: rapid,
wide surface dispersion or limited surface transport but more
rapid ground penetration? The unfortunate (from a modeling
perspective) answer is that “it depends.”
In some cases, a concentrated spill (limited dispersion) poses
less risk, while in other cases, even at the same location, the
opposite is true. A rapid and wide dispersion might reduce ignition probability and burn time, should ignition occur. In other
cases, ignition might be preferable, thereby eliminating contamination potential or preventing migration of the spill to
other receptors. The possible receptor interactions are critical
elements of topographical considerations. These include groundwater and surface water receptors, in addition to other environmental receptors, population, and property.
It is difficult to find simplifying assumptions to use in ranking potential liquid spill scenarios, given the widely varying
threats accompanying the many differences in terrain and
topography and product characteristics. A range of analysis
options is available, of which several methods are listed in Table
7.6, from the simpler to the more complex.
Adding leak detection and emergency response considerations impacts the volumes released and adds a level of resolution to any of the above analyses. It is especially important to
consider leak detection capabilities for scenarios involving
toxic or environmentally persistent products. In those cases, a
full line rupture might not be the worst case scenario. Slow
leaks that go undetected for long periods can be more damaging
than massive leaks that are quickly detected and addressed. A
leak detection capability curve (see Figure 7.7) can be used to
establish the largest potential volume release.
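A leak detection capability curve can be reduced to a worst-case volume with a few lines of arithmetic. Here the curve is represented as hypothetical (leak rate, time-to-detection) pairs; the actual curve shape in Figure 7.7 is not reproduced.

```python
def worst_case_release(capability_curve):
    """Largest potential release volume from a detection capability curve.

    capability_curve: iterable of (leak_rate, detection_time) pairs,
    where smaller leaks typically take longer to detect.  The volume
    released before detection is rate * time; the worst case is the
    maximum over the curve.  Units are the caller's choice (e.g.,
    bbl/hr and hr).
    """
    return max(rate * time for rate, time in capability_curve)
```

For a hypothetical curve [(1, 720), (10, 24), (100, 1)], the slow 1-unit/hr leak dominates at 720 units released, illustrating why a full rupture is not always the worst case.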
The more complex analyses are becoming more commonplace given the increased availability of powerful computing
environments and topographical information in electronic
databases. An important benefit of the more complex analysis
approaches is the ability to better characterize the receptors that
are potentially exposed to a spill: those that are actually "in
harm's way." In many cases, receptors may be relatively close
to, but upslope of, the pipeline and hence at much less risk.
Focusing on the locations that are more at risk is obviously an
advantage in risk management.
Spills in soil or water are the most common pipeline environmental concern. Such spills also carry the potential for groundwater contamination. Product movement through the soil
depends on such soil factors as adsorption, percolation, moisture content, and bacterial content. Soil characteristics can be
best assessed by using one of the common soil classification
systems, such as the USDA soil classification system, which
incorporates physical, chemical, and biological properties of
the soil. For simplicity, only one soil characteristic, permeability, is considered in some risk evaluations. This is also the
soil characteristic that is used in the EPA hazard ranking system
(HRS): permeability of geologic materials [14].
Releases into surface waters are the second potential type of
environmental insult and pathway to population receptors. The
size of the body of water and its uses determine the severity of
the hazard. If the water is used for swimming, fishing, livestock
watering, irrigation, or drinking water, pollution concentrations must be kept quite low. Spills into water should take into
account the miscibility of the substance with water and the
water movement. A spill of immiscible material into stagnant
water would be the equivalent of a relatively impermeable soil.
A highly miscible material spilled into a flowing stream is the
equivalent of a highly permeable soil. (See later section dealing
with spills into waterways.)
Thermal effects
Adding to the physical extent of the spilled product are the potential thermal effect distances arising from pools of ignited and
burning product. These are more fully discussed in Chapter 14
under the calculation of hazard zones. Potential thermal effects
are largely dependent on the size of the pool created from the
spilled product. Pool growth can be simulated using a calculation
method specified by the EPA, the Federal Emergency Management Agency, or the U.S. Department of Transportation (DOT) [5,
86]. This correlation relates the release size to the pool area:
log(A) = 0.492 log(M) + 1.617
where M represents the total liquid mass spilled in pounds
and A is the pool area in square feet.
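Assuming base-10 logarithms (the correlation is quoted without a base), the relation can be evaluated directly; the function name is mine:

```python
import math


def pool_area_ft2(mass_lb):
    """Pool area (ft^2) from spilled liquid mass (lb), per the
    correlation quoted in the text:

        log(A) = 0.492 log(M) + 1.617

    Base-10 logarithms are assumed here.
    """
    return 10.0 ** (0.492 * math.log10(mass_lb) + 1.617)
```

A 1000-lb spill then gives an area of about 1240 ft2.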
Alternatively, a simple geometry expression can be used to
calculate pool radius, once a pool depth is assumed. In either
case, assumptions must be made regarding rate of penetration
into the soil, evaporation, and other considerations.
Thermal radiation is related to the emissivity and transmissivity. In accounting for shielding by surrounding layers of smoke,
emissivity is related to the normal boiling point of the material.
Higher boiling point fluids tend to burn with sooty flames.
Emissivity has been correlated to boiling point by means of
the following relationship:

Ep = -0.313 × Tb + 117

where Ep is the effective emissive power (kW/m2) and Tb
is the normal boiling point in degrees Fahrenheit [5].
Transmissivity is a measure of how much of the emitted radiation is transmitted to a potential receptor. It is mainly a function of the path to the receptor: distance, relative humidity, and
flame temperature. Water and carbon dioxide tend to reduce the transmissivity.
With assumptions like constant transmissivity, the thermal
radiation from a pool fire is related to spill size and boiling
point. The emissivity value can be used in an inverse square
relationship to calculate thermal radiation levels at certain distances from the fire.
Equations for pool growth and emissivity are shown here
because they offer the opportunity to extract some simplifying
assumptions, as will be shown later. The calculation scheme
could be
Estimate spill size → calculate pool area → add boiling point →
calculate relative hazard distance
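A sketch of that calculation scheme, combining the pool-area and emissive-power correlations above with a simple inverse-square attenuation. The circular-pool assumption, the floor on emissive power, and the 5 kW/m2 threshold are illustrative choices rather than values from this text.

```python
import math


def pool_area_ft2(mass_lb):
    # log(A) = 0.492 log(M) + 1.617, base-10 assumed (correlation
    # quoted in the text)
    return 10.0 ** (0.492 * math.log10(mass_lb) + 1.617)


def emissive_power_kw_m2(boiling_point_f):
    # Ep = -0.313 Tb + 117, Tb in deg F (correlation from the text);
    # floored at 10 kW/m^2 for high-boiling, sooty-burning fluids
    # (the floor is an assumption, not part of the correlation)
    return max(-0.313 * boiling_point_f + 117.0, 10.0)


def relative_hazard_distance_ft(mass_lb, boiling_point_f,
                                threshold_kw_m2=5.0):
    """Relative hazard distance per the scheme in the text: spill
    size -> pool area -> boiling point -> distance at which radiant
    flux falls to the threshold.

    Uses flux(d) = Ep * (R/d)**2, an assumed inverse-square
    attenuation from the pool edge; suitable only for ranking
    scenarios, not for absolute hazard-zone sizing.
    """
    radius_ft = math.sqrt(pool_area_ft2(mass_lb) / math.pi)
    ep = emissive_power_kw_m2(boiling_point_f)
    return radius_ft * math.sqrt(ep / threshold_kw_m2)
```

Larger spills and lower-boiling (hotter-burning) products both push the relative hazard distance outward, matching the qualitative discussion above.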
Contamination potential
Most spills of hydrocarbon liquids will present hazards related
to both fire and contamination. Potential damages from each
hazard type tend to overlap, and are interchangeable in some
cases and additive in others. Contamination potential sometimes depends on the thermal radiation potential: if the product burns on release, then the contamination potential is
diminished or eliminated.
The environment can be very sensitive to certain substances.
Contaminations in the few parts per billion or even parts per
trillion are potentially of concern. If contamination is defined
as 10 parts per billion, a 10-gallon spill of a solvent can contaminate a billion gallons of groundwater. A 5000-gallon spill from
a pipeline can contaminate 500 billion gallons of groundwater
to 10 ppb. The potential contamination is determined by the
simple formula:
Vgw = Vs × (Cs / Cgw)

where
Vs = volume of spill
Vgw = volume of groundwater contaminated
Cs = average concentration of contaminant in spilled material
Cgw = average concentration of contaminant in groundwater.
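The dilution arithmetic can be checked directly. In this sketch, concentrations are expressed as volume fractions (10 ppb = 1e-8) and a pure spilled solvent is taken as Cs = 1.0; the function name is mine.

```python
def contaminated_volume(spill_volume, c_spill, c_groundwater):
    """Volume of groundwater contaminated to c_groundwater.

    Implements Vgw = Vs * (Cs / Cgw), the dilution relation implied
    by the text's examples.  Both concentrations must use the same
    units (volume fractions here).
    """
    return spill_volume * (c_spill / c_groundwater)
```

This reproduces the figures in the text: a 10-gallon spill of pure solvent contaminates a billion gallons to 10 ppb, and a 5000-gallon spill contaminates 500 billion gallons.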
It is very difficult to generalize a contamination area esti-
mate. Any estimate is highly dependent on volume released,
rate of release, soil permeability, surface flow resistance,
groundwater movement, surface water intersects, etc., all of
which are very location specific.
As one possible generalization, the contamination can be
modeled as being proportional to the potential pool size. Pool
size can be estimated as described previously. Some multiple of
this pool size (2× to 10×, perhaps) can be used as a standard relative measure of contamination distance. This will of course
routinely over- and underestimate the true distances but, when
used consistently, can help rank the damage potential.
Spill migration
While a full topographical, hydrogeological analysis is the best
way to estimate contamination potential, some basic concepts
of migration of hydrocarbons through a medium such as soil
and water can be used to model the range of a spill. Depending
on numerous factors, a hydrocarbon spill will spread laterally
as well as penetrate the soil. Quantities of lighter-than-water
hydrocarbons that penetrate the soil will often form a pancake
shape at some level below the surface, as gravity and buoyancy
forces are balanced and the spill spreads laterally.
The surface spread and soil penetration depth and movement
through soil are generally related to the product and soil characteristics captured in a variable termed hydraulic conductivity.
An additional consideration that impacts the depth of penetration and the spread is the soil retention capacity (or residual
saturation capacity, hydrocarbon retained by soil particles)
as shown in Table 7.7. Increasing soil retention reduces the
spread of the spill. The product viscosity is the chief product
characteristic to be considered. The soil permeability is well
correlated with both the hydraulic conductivity and the soil
retention capacity, so it is a valuable variable for the station
risk model. For example, a scoring regime can be set up for
which the contamination potential is partly a function of the
soil retention and hydraulic conductivity. In Table 7.7, higher
scores represent higher spread of contaminants.
The phenomenon of source strength, the intensity with
which dissolved chemicals may be released from a spilled
hydrocarbon into water, is considered in assessing the product
hazard component of this model (where that assessment should
consider the presence of trace amounts of hazardous components) and in the use of groundwater depth as a variable.
Deeper groundwater affords more opportunity for soil retention and may minimize the lateral spread of the spill. Any
changes to the hydraulic conductivity of the soil due to the
spilled hydrocarbon are beyond the resolution of this model.
Table 7.7  Soil retention and hydraulic conductivities for various
types of soil: coarse to medium sand; medium to fine sand; fine
sand to silt
A drawback to this scale appears for the special case in which
a low-penetration soil promotes a wider spill surface area and
hence places additional receptors at risk. In very general terms,
a spill of a more acutely hazardous product might generate less
risk with greater soil penetration. This is especially true when
the product is less persistent in the soil, as is often the case with
higher flammability products. The counter to this reasoning is
the increased cleanup costs and decreased volatility as soil
penetration increases.
Note also that natural factors such as wind strength and
direction or topography can protect a receptor from damage,
even if that receptor is fairly close to the leak site.
Spill and leak mitigation
There are sometimes opportunities to reduce the volume or dispersion of released pipeline contents after a failure. The
pipeline operator’s ability to seize these opportunities can be
included in the risk assessment. Secondary containment and
emergency response, especially leak detection/reaction, are
considered to be risk mitigation measures that minimize potential consequences by minimizing product leak volumes and/or
dispersion. The effectiveness of each varies depending on the
type of system being evaluated.
Secondary containment
Opportunities to fully contain or limit the spread of a liquid
release can be considered here. These opportunities include
Natural barriers or accumulation points
Berms or levees
Containment systems.
Most secondary containment opportunities are found at
stations and are discussed in Chapter 13.
Although waterways are often areas of special environmental and population concern, they also sometimes offer an environment in which a liquid release is readily isolated. This may
be the case when the spill occurs in a stable water body such as a
pond or lake, which offers limited transport mechanisms and in
which the spilled product is relatively immiscible and insoluble. This can enable more rapid and complete cleanup, including the possible (and controversial) choice to burn off a layer of
spilled hydrocarbon from the water surface. A more damaging
scenario involves water bodies with more rapid transport mechanisms and spills that reach the more sensitive receptors that are
typically found on shorelines.
Where secondary containment exists, or it is recognized that
special natural containment exists, the evaluator can adjust the
spill score accordingly.
A system for evaluating secondary containment for pipeline
stations is shown in Chapter 13.
Emergency response
Emergency response, and especially leak detection and reaction, is appropriately considered as a mitigation measure to
minimize potential consequences by minimizing spill volumes
and/or dispersion. The effectiveness varies depending on the
type of system being evaluated. Emergency response and leak
detection evaluation methods are more fully discussed later in
this chapter.
Leak detection and vapor dispersion Leak detection plays a
relatively minor role in minimizing hazards to the public in
most scenarios of gas transmission pipelines. Therefore, many
vapor dispersion analyses will not be significantly impacted by
any assumptions relative to leak detection capabilities. This is
especially true when defined damage states (see Chapter 14)
use short exposure times to thermal radiation, as is often the case.
Reference [83] illustrates that gas pipeline release hazards
depend on release rates which in turn are governed by pressure.
In the case of larger releases, the pressure diminishes more quickly than would be affected by any actions that could
be taken by a control center. In the case of smaller leaks, pressures decline more slowly but ignition probability is much
lower and hazard areas are much smaller. In general, there are
few opportunities to evacuate a pressurized gas pipeline more
rapidly than occurs through the leak process itself, especially
when the leak rate is significant. A notable exception to this
case is that of possible gas accumulation in confined spaces.
This is a common hazard associated with urban gas distribution
systems and is covered in Chapter 11.
Another, less common exception would be a rather remote
scenario involving the ignition of a small leak that causes
immediate localized damages and then more widespread damages as more combustible surroundings are ignited over time as
the fire spreads. In that scenario, leak detection might be more
useful in minimizing potential impacts to the public.
Leak detection and liquid dispersion Leak detection capabilities play a larger role in liquid spills compared to gas releases.
Long after a leak has occurred, liquid products can be detected
because they have more opportunities for accumulation and are
usually more persistent in the environment. A small, difficult-to-detect leak that is allowed to continue for a long period of
time can cause widespread contamination damages, especially
to aquifers. Therefore, the ability to quickly locate and identify
even small leaks is critical for some liquid pipelines.
Scoring releases
Once she has an understanding of release mechanisms and risk
implications, the evaluator will next need to model potential
releases for the risk assessment. This is often done by assigning
a score to various release scenarios. To score the relative dispersion area or hazard zone of a spill or release, the relative measures of quantity released and dispersion potential can be
combined and then adjusted for mitigation measures. When the
quantity and dispersion components use the same variables, it
might be advantageous to score the two components in one step.
As more and more variables are added to the assessment
in order to more accurately distinguish relative consequence
potential, the benefits of the scoring approach diminish.
Eventually the evaluator should consider performing the
detailed calculations-estimating actual hazard zones using
Scoring releases 7/155
models for dispersion and thermal effects. As is noted in the
introductory chapters of this book, the challenge when constructing a risk assessment model is to fully understand the
mechanisms at work and then to identify the optimum number
of variables for the model's intended use. For instance, Table
7.8 implies that overpressure (blast effects from a detonation) is
not a consideration for natural gas. This is a modeling simplification. Unconfined vapor cloud explosions involving methane
have not been recorded, but confined vapor cloud explosions
are certainly possible.
Table 7.8 lists the range of possible pipeline product types
and shows the hazard type and nature. The type of model and
some choices for key variables that are probably best suited to a
hazard evaluation of each product are also shown.
Assessment resolution issues further complicate model
design, as discussed in Chapter 2. The assessment of relative
spill characteristics is especially sensitive to the range of possible products, pipe sizes, and pressures. As noted in Chapter 2, a
model that is built for parameters ranging from a 40-in., 2000-psig propane pipeline to a 1-in., 20-psig fuel oil pipeline will not be able to make many risk distinctions between a 6-in. natural gas pipeline and an 8-in. natural gas pipeline. Similarly, a
model that is sensitive to differences between a pipeline at 1100
psig and one at 1200 psig might have to treat all lines above a
certain pressure/diameter threshold as the same. This is an issue
of modeling resolution.
In most cases, the scoring of a pipeline release will closely
parallel the estimation of a hazard area or hazard zone. This is
reasonable since the spill score is ranking consequence potential, which in turn is a function of hazard area. The hazard zone
is a function of the damage state of interest, where the damage
state is a function of the type of threat (thermal, overpressure,
contamination, toxicity) and the vulnerabilities of the receptors, as discussed later. In the scoring examples presented
below, it is important to recognize that the hazard zone is a
measure of the distance from the source where a receptor is
threatened. The source might not be at the pipeline failure
location. Especially in the case of hazardous liquids, whose
hazard zones often are a function of pool size, the location of
the pool can be some distance from the leak site. Envision a
steeply sloped topography where the spilled liquid will accumulate some distance from the leak site. Note also that a receptor can be very close to a leak site and not suffer any damages,
depending on variables such as wind strength and direction,
topography, or the presence of barriers.
Scoring hazardous liquid releases
As discussed, a relative assessment of potential consequences
from a liquid spill should include relative measures of contamination and thermal effects potential, both of which are a function of spill volume. Contamination area is normally assumed
to be proportional to the extent of the spill. Thermal effects are
normally assumed to be a function of pool size and the energy
content of the spilled liquid.
Three possible approaches to evaluate relative hazard areas,
independent of topographic considerations, are discussed next.
Additional examples of algorithms used to evaluate relative liquid spill consequences are shown in Chapter 14. Since there are
many tradeoffs in risk modeling, and there is no absolutely correct procedure, the intention here is to provide some ideas to the
designer of the risk assessment model.
Scoring approach A One simple (and coarse) scheme to
assess the potential liquid spill hazard area in a relative fashion
is as follows:
Contamination potential = (spill volume score) x (RQ)

Thermal hazard = (spill volume score) x (Nf)
Here the relative consequence area is assessed in two components: contamination hazard and thermal hazard. The spill volume score is critical in both hazards. It can be based on the
relative pumping rate and maximum drain volume and its scale
should be determined based on the full range of possible flow
rates and drain volumes.
The spill score is then multiplied by the pertinent product
hazard component. We noted previously that, in many scenarios, an area of contamination can be more widespread than a
thermally-impacted area. Some multiplier applied to the estimated pool size might be representative of the relative contamination potential. Because RQ is on a 12-point scale and Nf is
on a 4-point scale, this scheme is consistent with that belief
and possibly avoids the need for a pool size multiplier. In other
Table 7.8 Dominant hazards and variables for various products transported

Flammable gas (methane, etc.)
  Dominant hazard model: torch fire; thermal radiation; vapor cloud dispersion
  Key variables impacting hazard area: molecular weight (MW), pressure, diameter

Toxic gas (chlorine, H2S, etc.)
  Dominant hazard model: vapor dispersion modeling
  Key variables impacting hazard area: MW, pressure, diameter, weather, toxicity level

HVL (propane, butane, ethylene, etc.)
  Hazard nature: thermal and blast
  Dominant hazard model: vapor dispersion modeling; torch fire; overpressure (blast) event
  Key variables impacting hazard area: MW, pressure, diameter, weather, Hc, Cp

Flammable liquid (gasoline, etc.)
  Hazard nature: acute and chronic; thermal and contamination
  Dominant hazard model: pool fire; contamination
  Key variables impacting hazard area: MW, boiling point, specific gravity, topography, ground surface

Relatively nonflammable liquid (diesel, fuel oil, etc.)
  Hazard nature: chronic; contamination
  Dominant hazard model: contamination
  Key variables impacting hazard area: topography, ground surface, toxicity, environmental persistence
words, this approach uses the scales for RQ and Nf as a simplification to show the perceived relationships between consequence area and product characteristics.
This scheme is based on an understanding of the underlying
variables and seems intuitively valid as a mechanism for relative comparisons. It captures, for example, the idea that a gasoline and a fuel oil spill of the same quantity have equivalent
contamination potential but the gasoline potentially produces
more thermal effects. However, this or any proposed algorithm
should be tested against various spill scenarios before being
adopted as a fair measure of relative consequence potential.
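Scoring Approach A can be sketched in code as follows. This is an illustrative sketch only: the spill-volume normalization, the assumed fleet maxima, and the example RQ and Nf values are assumptions introduced here, not values from the text.

```python
def spill_volume_score(pump_rate_bph, drain_volume_bbl,
                       max_rate_bph=10_000.0, max_drain_bbl=5_000.0):
    """Relative spill volume on a 0-10 scale, built from pumping rate and
    maximum drain volume, each normalized to assumed fleet-wide maxima."""
    rate_part = min(pump_rate_bph / max_rate_bph, 1.0)
    drain_part = min(drain_volume_bbl / max_drain_bbl, 1.0)
    return 10.0 * (rate_part + drain_part) / 2.0

def approach_a(pump_rate_bph, drain_volume_bbl, rq_scale, nf):
    """Approach A: contamination = volume score x RQ (12-point scale);
    thermal = volume score x Nf (NFPA flammability, 0-4 scale)."""
    vol = spill_volume_score(pump_rate_bph, drain_volume_bbl)
    return {"contamination": vol * rq_scale, "thermal": vol * nf}

# Gasoline vs. fuel oil for the same spill size (hypothetical ratings):
# equal contamination basis, but the higher-Nf gasoline scores higher
# on thermal effects, matching the intuition described in the text.
gasoline = approach_a(5_000, 2_000, rq_scale=6, nf=3)
fuel_oil = approach_a(5_000, 2_000, rq_scale=6, nf=2)
```

As the text cautions, any such algorithm should be exercised against a range of spill scenarios before the relative scores it produces are trusted.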
This approach produces two non-dimensional scores, representing the relative consequences of contamination and thermal hazards from a liquid spill. Depending on the application,
the contamination and thermal effects potentials might be
combined for an overall score. In other applications, it might be
advantageous to keep the more chronic contamination scenario
score separate from the more acute thermal effects score.
If an equivalency is to be established, the relative consequence "value" of each hazard type must be determined. When
contamination potential is judged to be a less serious consequence than thermal effects (or vice versa), weightings can be
used to adjust the numerical impacts of each relative to the
other. Perhaps, from a cost and publicity perspective, the following relationship is perceived:
Thermal hazard = 2 x (contamination potential)
This implies that potential thermal effects should play a
larger role (double) in risk assessment and therefore in risk
management. This may not be appropriate in all cases.
Scoring approach B Another example approach that focuses
only on a thermal hazard zone, combines the relative spill volume and thermal effects in an algorithm that relates some key
variables. For example, the spill score for liquids can be based
on pool growth and effective thermal emissivity models as previously described:
Liquid spill score = LOG[(spill mass) x 0.5] / [(boiling point)^0.5]
This relationship was created by examination of the underlying thermal effects formulas and a trial-and-error process of
establishing equivalencies among various thermal effects hazard zones. It provided satisfactory differentiation capabilities
for the specific scenarios for which it was applied. However,
this algorithm has not been extensively tested to ensure that it
fairly represents most scenarios.
Pressure is not a main determinant in spill volume in this
algorithm since the product is assumed to be relatively incompressible. Except for a scenario involving spray of liquids, the
potential damage area is not thought to be very dependent on
pressure in any other regard.
Potential contamination impacts are not specifically
included in this relationship. It may be assumed that contamination areas are encompassed by the thermal effects or, alternatively, a separate contamination assessment can be performed.
Scoring approach C Scoring Approach C might be suitable
for a simple relative assessment where potential contamination
consequences are seen to be the only threat. It assumes that the
main threat is to groundwater, so soil permeability is a key factor.
The problem is simplified here to two factors: leak volume
and soil permeability (or its equivalent if a release into water is
being studied). Points can be assessed based on the quantity of
product spilled, under a worst case scenario. The worst-case
scenario can range from a large volume, sudden spill to a very
slow, below-detection-limits spill.
Pounds spilled      Point score
< 1,000
1,001-10,000
This is an example of a score-assignment table that is
designed for a certain range of possible spills. The range of the
table should reflect the range of spill quantities that is expected
from all systems to be evaluated. This will usually be the largest
diameter, highest pressure pipeline as the worst case, and the
smallest, lowest pressure pipeline as the best case. Some trial
calculations may be needed to determine the worst and best
cases. If the range is too small or too large, comparisons among
spills from different lines may not be possible.
Table 7.9 can then be used to score the soil permeability for
liquid spills into soil. This assignment of points implies that
more or faster liquid movements into the soil increase the range
of the spill. Of course, greater soil penetration will decrease
surface flows and vice versa. Either surface or subsurface flow
might be the main determinant of contamination area, depending on site-specific conditions. Since groundwater contamination is the greater perceived threat here, this point scale shows
greater consequences with increasing soil permeability. When
this is not believed to be the case, the evaluator can modify the
awarding of points to better reflect actual conditions.
The soil permeability score from Table 7.9 is the second of
the two parts of the liquid spill score. The point values from
Tables 7.8 and 7.9 are added or averaged to yield the relative
score for contamination area. This score represents the belief
that a larger volume, spilled in a higher permeability soil,
leads to a proportionally greater consequence area. Ultimately,
a scoring of the spilled substance’s hazards and persistence
(considering biodegradation, hydrolysis, and photolysis) will
combine with this number in evaluating the consequences of
the spill.
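The two-factor Approach C can be sketched as below. The specific point bands are illustrative placeholders: the original point assignments did not survive in this text, so the band limits and point values here are assumptions.

```python
# Approach C sketch: contamination-only relative score from worst-case
# spill quantity and soil permeability. Point tables are hypothetical.

def quantity_points(pounds_spilled):
    """Assign points by worst-case quantity spilled (assumed bands)."""
    bands = [(1_000, 1), (10_000, 2), (100_000, 3)]
    for limit, pts in bands:
        if pounds_spilled <= limit:
            return pts
    return 4

def permeability_points(perm_cm_per_sec):
    """More permeable soil implies a greater perceived groundwater
    threat, so points increase with permeability (cm/sec)."""
    if perm_cm_per_sec < 1e-7:
        return 1   # clay, unfractured rock
    if perm_cm_per_sec < 1e-5:
        return 2   # silt, loess, sandstone
    if perm_cm_per_sec < 1e-3:
        return 3   # fine sand, moderately fractured rock
    return 4       # gravel, highly fractured rock

def contamination_score(pounds_spilled, perm_cm_per_sec):
    """Average the two point values, as the text suggests, to get the
    relative contamination-area score."""
    return (quantity_points(pounds_spilled)
            + permeability_points(perm_cm_per_sec)) / 2.0
```

For example, `contamination_score(50_000, 1e-4)` combines a large spill with a fairly permeable soil; as the text notes, the point orientation should be inverted if surface flow rather than groundwater is the dominant concern.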
Adjustments As an additional consideration to any method of
scoring the liquid hazard zone, adjustments can be made to
account for local features that might act as dispersion amplifiers or reducers. These might include sloping terrain, streams,
ravines, water bodies, natural pooling areas, sewer systems, and
other topographical features that tend to extend or minimize a
hazard area.
Scoring hazardous vapor releases

If the model is intended only to assess risks of natural gas pipelines (or another application with only one type of gas being transported), then a simple approach is to use only the pipeline's pressure and diameter to characterize the relative hazard zone. This assumes that there is a fixed thermal radiation level of interest, as is discussed in Chapter 14, but that level does not necessarily need to be identified for purposes of a relative risk assessment.

Some modeling or scoring approaches to obtain relative consequence scores are presented next. Other examples can be found in Appendix E.

Scoring approach A  A direct approach for evaluating the potential consequences from a natural gas release can be based on the hazard zone generated by a jet fire from such a release:

r = [2,348 x p x d^2 / I]^0.5

where
r = radius from pipe release point for given radiant heat intensity (ft)
I = radiant heat intensity (Btu/hr/ft^2)
p = maximum pipeline pressure (psi)
d = pipeline diameter (in.).

For natural gas, when a radiant heat intensity of 5000 Btu/hr/ft^2 is used as the potential damage threshold of interest, this equation simplifies to:

r = 0.685 x (p x d^2)^0.5

where
r = radius from pipe release point for given radiant heat intensity (ft)
p = maximum pipeline pressure (psi)
d = pipeline diameter (in.) [83].

In either case, the gas spill score can be related directly to the hazard radius:

Gas spill score = r

This can be normalized so that scores range from 0 to 10 or 0 to 100, based on the largest radius calculated for the worst case scenario evaluated.

Note that these thermal radiation intensity levels only imply damage states. Actual damages depend on the quantity and types of receptors that are potentially exposed to these levels. A preliminary assessment of structures has been performed, identifying the types of buildings and distances from the pipeline. See Chapter 14 for more discussion of these equations.

Scoring approach B  When a model is needed to evaluate risks from various flammable gases such as methane, hydrogen, or even chlorine, then an additional variable is needed to distinguish among gases. Density might be appropriate when the consequences are thought to be more sensitive to release rate. MW or heat of combustion might be more appropriate for consequences more sensitive to thermal radiation. If a gas to be included is thought to have the potential for an unconfined vapor cloud explosion, then the model should also include overpressure (explosion) effects as discussed for HVL scenarios. One of the equations from Approach A above can be modified with some measures of energy content and dispersion content. The scoring could also be simplified to a relationship such as this one:

Gas spill score = (p x d^2)^0.5 x MW

This algorithm is based on the previous thermal radiation relationship [83] and the supposition that dispersion, thermal radiation, and vapor cloud explosive potential are proportional to MW.

This score can also be normalized as described in Approach A.

Scoring approach C  As an even simpler approach to scoring gas releases, a point schedule can be designed to quantify the increase in hazard as the dispersion characteristics of molecular weight and leak rate are combined (see Table 7.10).

Table 7.10 is an example of a table that is designed for a certain range of possible spills. The range of the table should reflect the range of spill quantities expected. This will usually be the largest diameter, highest pressure pipeline as the worst case, and the smallest, lowest pressure pipeline as the best case. Some trial calculations may be needed to determine the worst and best cases. If the range is too small or too large, comparisons between spills from different lines may not be possible. See Appendix B for a discussion of leak size determination.

Table 7.9 Soil permeability score

Soil type                                          Permeability (cm/sec)
Impervious barrier
Clay, compact till, unfractured rock
Silt, silty clay, loess, clay loams, sandstone     10^-5 to 10^-7
Fine sand, silty sand, moderately fractured rock
Gravel, sand, highly fractured rock                > 10^-3

Table 7.10 Point schedule for quantifying hazards based on molecular weight and leak rate
(columns: product released after 10 minutes (lb); cell values range from 1 pt to 4 pts)
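The natural gas jet-fire relations of Approach A can be sketched directly in code. The 0.685 coefficient corresponds to the 5000 Btu/hr/ft^2 threshold cited in the text (with the general constant inferred from that coefficient); the 0-to-10 normalization follows the text, while the example pressure and diameter are arbitrary.

```python
import math

def hazard_radius_ft(p_psi, d_in, intensity_btu_hr_ft2=5000.0):
    """Hazard radius (ft) for a natural gas jet fire at a given radiant
    heat intensity threshold; reduces to r = 0.685 * sqrt(p * d^2) at
    the 5000 Btu/hr/ft^2 threshold used in the text [83]."""
    return math.sqrt(2348.0 * p_psi * d_in ** 2 / intensity_btu_hr_ft2)

def gas_spill_score(p_psi, d_in, worst_case_radius_ft, scale=10.0):
    """Gas spill score = hazard radius, normalized 0-10 against the
    largest radius calculated for the worst case scenario evaluated."""
    return scale * hazard_radius_ft(p_psi, d_in) / worst_case_radius_ft

# Example: a 30-in., 1000-psi line yields a radius of roughly 650 ft.
r = hazard_radius_ft(1000.0, 30.0)
```

The worst-case line in the evaluated system scores 10 by construction; every other segment falls proportionally below it.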
These points are the vapor spill score. In Table 7.10, the
upper right corner reflects the greatest hazard, while the lower
left is the lowest hazard. By the way in which the dispersion
factor is used to adjust the acute or chronic hazard, a higher spill
score will yield a safer condition.
By using only these two variables, several generalizations
are being implied. For instance, the release of 1000 pounds of material in 10 min potentially creates a larger cloud than the release of 4000 pounds in an hour. Remember, it is the rate of release that determines cloud size, not the total volume released. The 1000-pound release therefore poses a greater hazard than the 4000-pound release. Also, a 1000-pound release of a MW 16 material such as methane is less of a hazard than a 1000-pound release of a MW 28 material such as ethylene. The schedule must now represent the evaluator's view of the relative
risks of a slow 4000-pound, MW 28 release versus a quick
1000-pound, MW 16 release. Fortunately, this need not be a
very sensitive ranking. Orders of magnitude are sufficiently
close for the purposes of this assessment.
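The generalizations above can be expressed as a small scoring routine. The quantity bands and point values below are illustrative stand-ins for Table 7.10 and are oriented here so that more points mean more hazard; an evaluator's own schedule may use different bands or the opposite orientation.

```python
def vapor_spill_points(lb_released_10min, mol_weight):
    """Illustrative point schedule combining 10-minute release quantity
    with molecular weight; heavier gas plus faster release earns more
    points. Bands and weights are hypothetical."""
    qty_pts = 1
    for threshold in (100, 1_000, 10_000):  # assumed quantity bands (lb)
        if lb_released_10min > threshold:
            qty_pts += 1
    # e.g., methane (MW 16) vs. ethylene (MW 28)
    mw_pts = 1 if mol_weight < 20 else 2
    return qty_pts * mw_pts

# A quick 1000-lb ethylene release outranks a slow methane seep:
fast_ethylene = vapor_spill_points(1_000, 28)
slow_methane = vapor_spill_points(100, 16)
```

As the text notes, this ranking need not be very sensitive; getting the order of magnitude right is sufficient for a relative assessment.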
Combined scoring
When hazardous liquid and vapor releases are to be assessed
with the same model, some equivalencies must be established.
Such equivalencies are difficult given the different types of
hazards and potential damages (thermal versus overpressure
versus contamination damages, for example).
A basis of equivalency must first be determined: Thermal
radiation only? Total hazard area? Extent of contamination versus thermal effects area? Equivalencies become more problematic when applied across hazard types since different types of
potential damages must be compared. For instance, 10,000
square feet of contaminated soil or even groundwater is a
different damage state than a 10,000-square-foot burn radius.
Valuing the respective damage scenarios adds another level of
complexity and uncertainty.
One method to establish a relative equivalency between the
two types of spills (gas and liquid), at least in terms of acute
hazard is to examine and compare the calculated hazard
zones for human fatality and serious injury. Using some very
specific assumptions, some damage zones involving multiple
products, diameters, pressures, and flow rates were calculated
to generate Table 7.11. The reader is cautioned against using
these tabulated hazard distances because they are based on
very specific assumptions that often will not apply to other
scenarios. The specific assumptions are intentionally omitted
to further discourage the use of these values beyond the intention here.
The estimates from Table 7.11 are based on very rudimentary dispersion and thermal effects analyses and are generated
only to compare some specific scenarios. It is assumed that
pumping (flow) rate is the determining factor for liquid releases
and orifice flow to atmosphere (sonic velocity) determines
vapor release rates at MAOP.
Using the Table 7.11 list of “actual” hazard distances and the
simplifying rules shown later, possible equivalencies are shown
in Table 7.12, ordered by spill score rank. The rank merely normalizes all spill scores against the maximum spill score, which is assigned a value of 6.
The spill scores in Table 7.12 were generated with some simple relationships, unlike the hazard zones which required rather
Table 7.11 Sample hazard radius calculations
(columns: product, flow rate, and hazard radius (ft); products include natural gas, propane, gasoline, and fuel oil)
Note: Propane cases include 0.4-psi overpressure from midpoint distance of UFL and LFL.
complex computations. The spill score for the natural gas and
propane scenarios used this equation:
Gas spill score = (p x d^2)^0.5 x MW
based on thermal radiation relationship [83] and supposition
that dispersion, vapor cloud explosive potential, and thermal
radiation from a jet fire are proportional to MW.
Spill score for liquids (gasoline and fuel oil) were determined with this equation:
Liquid spill score = LOG[(spill mass) x 0.5] / [(boiling point)^0.5]

based on the pool growth model, the effective emissivity model [86], and a constant (20,000) to numerically put the liquid spill scores on par with the vapor spill scores.
This comparison between spill scores and hazard zones for
various product release scenarios indicates that the spill score
was fairly effective in ordering these scenarios-from largest
hazard area to smallest. It is not perfect since some actual
hazard radii are not consistent with the relative rank. Note,
however, that many assumptions go into the actual calculations,
especially where vapor cloud explosions are a potential (the
propane cases in Table 7.12). So, the actual hazard zone calculations are themselves uncertain.
The table ranking seems to be plausible from an intuitive
standpoint also. A large-diameter, high-pressure gas line poses
the largest consequence potential, followed by an HVL system,
and then a large-volume gasoline transport scenario. Of course,
any ofthese could be far worse than the others in a specific scenario so the ordering can be argued. On the lower end of
the scale are the lower volume pipelines and the less flammable
liquid (fuel oil).
Table 7.12 Sample spill scores compared to hazard radii
(columns: product, pipe diameter (in.), pressure (psig), flow rate (lb liquid/hr), hazard radius (ft), and spill score rank; rows include natural gas, propane, gasoline, and fuel oil scenarios)

Other observations include the fact that liquid spill cases are relatively insensitive to pressure: flow rate is the main determinant of hazard zone. Except for a scenario involving sprayed
material, this is plausible. Another observation is that the relative contamination potential is modeled as being equivalent to
the relative spill score. As previously noted, this incorporates
the assumption that for a liquid spill, the thermal damages and
contamination damages offset each other to some extent: as one
increases, the other decreases. This is, of course, a modeling
convenience only and real-world scenarios can be envisioned
where this is not the case.
VII. Adjustments to scores

As noted earlier in this chapter, two pipeline activities that can contribute to consequence reduction are secondary containment and emergency response. Both are useful only as consequence reducers since both are reactions to a release that has already occurred and neither provides an opportunity to prevent a failure. There is little argument that, especially in scenarios involving more chronic consequences, secondary containment and emergency response can indeed minimize damages. They are therefore included as modifiers to the dispersion portion of the leak impact factor. The amount of the contribution to the overall risk picture is arguable, however, and must be carefully evaluated.

Chronic hazards have a time factor implied: potential damage levels increase with passing time. Actions that can influence what occurs during the time period of the spill will therefore impact the consequences.

Acute hazard scenarios offer much less opportunity to intervene in the potentially consequential chain of events. The most probable pipeline leak scenarios involving acute hazards suggest that the consequences would not increase over time because the driving force (pressure) is being reduced immediately after the leak event begins and dispersion of spilled product occurs rapidly. This means that reaction times swift enough to impact the immediate degree of hazard are not very likely. We emphasize immediate here so as not to downplay the importance of emergency response. Emergency response can indeed influence the final outcome of an acute event in terms of loss of life, injuries, and property damage. This is not thought to impact the acute hazard, however. A spill with chronic characteristics, where the nature of the hazard causes it to increase in severity as time passes, can be impacted by emergency response. In these cases, emergency response actions such as evacuation, blockades, and rapid pipeline shutoff are effective in reducing the hazard.

Consequence-reducing actions must do at least one of three things:

1. Limit the amount of spilled product.
2. Limit the area of opportunity for consequences.
3. Otherwise limit the loss or damage caused by the spill.

Limiting the amount of product spilled is done by isolating the pipeline quickly or changing some transport parameter (pressure, flow rate, type of product, etc.). The area of opportunity is limited by protecting or removing vulnerable receptors, by removing possible ignition sources, or by limiting the extent of the spill. Other loss is limited by prompt medical attention, quick containment, avoidance of secondary damages, and cleanup of the spill.
The following consequence-reducing opportunities are
discussed in this section:
Leak detection
Emergency response
Spill limiting actions
“Area of opportunity” limiting actions
Loss limiting actions
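One way to apply these consequence reducers is as multiplicative credits on the dispersion component of the leak impact factor. This is a sketch under assumed values: the credit fractions, the cap, and the structure are illustrative choices, not the book's calibrated weightings.

```python
# Sketch: adjust a dispersion/spill score for mitigation credits.
# Each hypothetical credit represents the judged effectiveness of one
# consequence-reducing activity; values here are placeholders.

MITIGATION_CREDITS = {
    "leak_detection": 0.10,         # earlier notification of a release
    "emergency_response": 0.15,     # evacuation, blockades, rapid shutoff
    "secondary_containment": 0.20,  # berms, levees, containment systems
}

def adjusted_dispersion_score(base_score, measures_in_place):
    """Reduce the dispersion score by the combined credit of the
    mitigation measures present, capped at a 40% total reduction to
    reflect their limited role against acute hazards."""
    credit = sum(MITIGATION_CREDITS[m] for m in measures_in_place)
    credit = min(credit, 0.40)
    return base_score * (1.0 - credit)

score = adjusted_dispersion_score(8.0, ["leak_detection",
                                        "secondary_containment"])
```

The cap encodes the chapter's caution that post-release actions chiefly mitigate chronic, not acute, consequences; an evaluator would tune both credits and cap to the system being assessed.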
Leak detection
Leak detection can be seen as a critical part of emergency
response. It provides early notification of a potentially consequential event, and hence allows more rapid response to that
event. Given the complexity of the topic, leak detection is
examined independently of other emergency response actions,
but can be considered a spill-reducing aspect of emergency response.
The role of leak detection can be evaluated either in the
determination of spill size and dispersion or as a stand-alone
element that is then used to adjust previous consequence
estimates. The former approach is logical and consistent with
the real-world scenarios. The benefit of leak detection is indeed
its potential impact on spill size and dispersion. The latter
approach, evaluating leak detection capabilities separately,
offers the advantage of a centralized location for all leak detection issues and, therefore, a modeling efficiency.
As discussed on pages 142-146, leak size is at least partially
dependent on failure mode. Small leak rates tend to occur due
to corrosion (pinholes) or some design (mechanical connections) failure modes. The most damaging leaks occur below
detection levels for long periods of time. Larger leak rates tend
to occur under catastrophic failure conditions such as external
force (e.g., third party, ground movement) and avalanche crack propagation.

Larger leaks can be detected more quickly and located more
precisely. Smaller leaks may not be found at all by some methods due to sensitivity limitations. The trade-offs involved
between sensitivity and leak size are usually expressed in terms
of uncertainty.
The method of leak detection chosen depends on a variety of
factors including the type of product, flow rates, pressures, the
amount of instrumentation available, the instrumentation characteristics, the communications network, the topography, the
soil type, and economics. Especially when sophisticated instrumentation is involved, there is often a trade-off between the sensitivity and the number of false alarms, especially in "noisy" systems with high levels of transients.
At this time, instrumentation and methodology designed to
detect pipeline leaks impact a relatively narrow range of the
risk picture. Detection of a leak obviously occurs after the leak
has occurred. As is the case with other aspects of post-incident
response, leak detection is thought to normally play a minor
role, if any, in reducing the hazard, reducing the probability of
the hazard, or reducing the acute consequences. Leak detection
can, however, play a larger role in reducing the chronic consequences of a release. As such, its importance in risk management for chronic hazards may be significant.
This is not to say that leak detection benefits that mitigate
acute risks are not possible. One can imagine a scenario in
which a smaller leak, rapidly detected and corrected, averted
the creation of a larger, more dangerous leak. This would theoretically reduce the acute consequences of the potential larger
leak. We can also imagine the case where rapid leak detection
coupled with the fortunate happenstance of pipeline personnel
being close by might cause reaction time to be swift enough
to reduce the extent of the hazard. This would also impact the
acute consequences. These scenarios are obviously unreliable
and it is conservative to assume that leak detection has limited
ability to reduce the acute impacts from a pipeline break.
Increasing use of leak detection methodology is to be expected
as techniques become more refined and instrumentation
becomes more accurate. As this happens, leak detection may
play an increasingly important role.
As noted previously, leak quantity is a critical determinant of
dispersion and hence of hazard zone size. Leak quantity is
important under the assumption that larger amounts cause more
spread of hazardous product (more acute impacts), whereas
lower rates impact detectability (more chronic impacts). The
rate of leakage multiplied by the time the leak continues is often
the best estimate of total leak quantity. However, some potential
spill sizes are more volume dependent than leak-rate dependent. Spills from catastrophic failures or those occurring at
pipeline low points are more volume dependent than leak-rate
dependent. Such spill events are not best estimated by leak rates
because the entire volume of a pipeline segment will often be
involved, regardless of response actions.
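The rate-versus-volume distinction above can be captured by taking the larger of the two estimates. The function below is a sketch under that assumption; the parameter names and the example figures are hypothetical.

```python
def estimated_spill_lb(leak_rate_lb_hr, response_time_hr,
                       segment_volume_lb, volume_dependent=False):
    """Estimate total spill quantity. Rate x time suits small, ongoing
    leaks; catastrophic failures or spills at pipeline low points
    release the segment contents largely regardless of response time."""
    rate_based = leak_rate_lb_hr * response_time_hr
    if volume_dependent:
        # Entire segment volume is often involved despite response actions.
        return max(rate_based, segment_volume_lb)
    return rate_based

# A small seep found after a day vs. a rupture draining the segment:
small_leak = estimated_spill_lb(50.0, 24.0, 100_000.0)
rupture = estimated_spill_lb(20_000.0, 1.0, 100_000.0,
                             volume_dependent=True)
```

Under these assumed inputs the rupture estimate is governed by segment volume, not by how quickly the line is shut in, which is the point the text is making.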
Detection methodologies
Pipeline leak detection can take a variety of forms, several of
which have been previously discussed. Common methods used
for pipeline leak detection include
Direct observation by patrol
Happenstance direct observation (by the public or pipeline personnel)
SCADA-based computational methods
SCADA-based alarms for unusual pressure, flows, temperatures, pump/compressor status, etc.
Flow balancing
Direct burial detection systems
Acoustical methods
Pressure point analysis (negative pressure wave detection)
Pig-based monitoring
Each has its strengths and weaknesses and an associated
spectrum of capabilities.
Despite advances in sophisticated pipeline leak detection
technologies, the most common detection method might still be
direct observation. Leak sightings by pipeline employees,
neighbors, and the general public as well as sightings while
patrolling or surveying the pipeline are examples of direct
observation leak detection. Overline leak detection by hand-held instrumentation (sniffers) or even by trained dogs (which reportedly have detection thresholds far below instrument capabilities) is a technique used for distribution systems.
Pipeline patrolling or surveying can be made more sensitive by adjusting observer training, speed of survey or patrol, equipment carried (which may include gas detectors, infrared sensors, etc.), altitude/speed of air patrol, topography, ROW conditions, product characteristics, etc. Although direct observation techniques are sometimes inexact, experience shows them to be rather consistent leak detection methods.
More sophisticated leak detection methods require more
instrumentation and computer analysis. A mainstay of pipeline
leak detection includes SCADA-based capabilities such as
monitoring of pressures, flows, temperatures, equipment status, etc. For instance, (1) procedures might call for a leak detection investigation when abnormally low pressures or an abnormal rate of change of pressure is detected; and (2) a flow rate analysis, in which flow rates into a pipeline section are compared with flow rates out of the section and discrepancies are detected, might be required. SCADA-based alarms can be set to alert the operator to such unusual pressure levels, differences between flow rates, abnormal temperatures, or equipment status (such as unexplained pump/compressor stops).
Alarms set to detect unusual rates of changes in measured flow
parameters add an additional level of sensitivity to the leak
detection. These methods may have sensitivity problems because they must not give leak indications when normal pipeline transients (unsteady flows or pressures, sometimes temporary as the system stabilizes after some change is made) are causing pressure swings and flow rate changes. Generally, the operator must work around the inevitable trade-off between many false alarms and low sensitivity to actual leaks. Because pipeline leaks are, fortunately, rare events, the latter is often chosen.
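A minimal sketch of the flow-balance idea, with an alarm deadband representing the false-alarm/sensitivity trade-off described above. The 2% threshold and the function name are assumptions for illustration only:

```python
def flow_imbalance_alarm(flow_in, flow_out, threshold_frac=0.02):
    """Flag a possible leak when metered inflow exceeds outflow by more
    than a deadband fraction of throughput.

    A wide deadband (high threshold_frac) suppresses false alarms from
    normal transients but makes the system blind to small leaks; a
    narrow deadband does the opposite.
    """
    imbalance = flow_in - flow_out
    return imbalance > threshold_frac * flow_in

# Steady line: 1,000 bbl/hr in, 995 bbl/hr out -> 0.5% imbalance, no alarm
print(flow_imbalance_alarm(1000.0, 995.0))   # False
# 1,000 in, 960 out -> 4% imbalance, alarm
print(flow_imbalance_alarm(1000.0, 960.0))   # True
```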
SCADA-based capabilities are commonly enhanced by
computational techniques that use SCADA data in conjunction
with mathematical algorithms to analyze pipeline flows and
pressures on a real-time basis. Some use only relatively simple mass-balance calculations, perhaps with corrections for line fill. More robust versions add conservation of momentum calculations, conservation of energy calculations, fluid properties, instrument performance, and a host of sophisticated equations to characterize flows, including transient flow analyses. The nature of the operations will impact leak detection capabilities, with less steady flows and more compressible fluids reducing the capabilities.
The more instruments (and the more optimized the instrument locations) that are accurately transmitting data into the
SCADA-based leak detection model, the higher the accuracy
of the model and the confidence level of leak indications.
Ideally, the model would receive data on flows, temperatures,
pressures, densities, viscosities, etc., along the entire pipeline
length. By tuning the computer model to simulate mathematically all flowing conditions along the entire pipeline and
then continuously comparing this simulation to actual data, the
model tries to distinguish between instrument errors, normal
transients, and leaks. Reportedly, depending on the system, relatively small leaks can often be accurately located in a timely
fashion. How small a leak and how swift a detection are specific to the situation, given the large number of variables to consider. References [3] and [4] discuss these leak detection systems and methodologies for evaluating their capabilities.
Another computer-based method is designed to detect pressure waves. A leak will cause a negative pressure wave at the leak site. This wave will travel in both directions from the leak at high speed through the pipeline product (much faster in liquids than in gases). By simply detecting this wave, leak size and location can be estimated. A technique called pressure point analysis (PPA) detects this wave and also statistically analyzes all changes at a single pressure or flow monitoring point.
By statistically analyzing all of these data, the technique can
reportedly, with a higher degree of confidence, distinguish
between leaks and many normal transients as well as identify
instrument drift and reading errors.
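Under the simplifying assumptions of a known, constant wave speed and time-synchronized sensors, the leak position follows from the difference in wave arrival times at two stations. This is a generic sketch of the principle, not a description of any particular commercial system:

```python
def locate_leak(sensor_spacing, wave_speed, t_upstream, t_downstream):
    """Estimate leak position from negative-pressure-wave arrival times.

    The wave leaves the leak simultaneously in both directions, so for
    sensors a distance L apart with arrival times t1 (upstream) and t2
    (downstream):  x = (L + a * (t1 - t2)) / 2
    measured from the upstream sensor, where a is the wave speed.
    """
    return (sensor_spacing + wave_speed * (t_upstream - t_downstream)) / 2.0

# 10,000 m between sensors, ~1,000 m/s wave speed in a liquid line.
# The wave reaches the upstream sensor 4 s before the downstream one:
print(locate_leak(10_000, 1_000, 0.0, 4.0))  # 3000.0 m from upstream sensor
```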
A final method of leak detection involves various methods of
direct detection of leaks immediately adjacent to a pipeline.
One variation of this method is the installation of a secondary
conduit along the entire pipeline length. This secondary conduit is designed to sense leaks originating from the pipeline.
The secondary conduit may take the form of a small-diameter
perforated tube, installed parallel to the pipeline, which allows
air samples to be drawn into a sensor that can detect the product
leaks. The conduit may also totally enclose the product pipeline (pipe-in-pipe design) and allow the annular space to be tested for leaks. Obviously, these systems can cause a host of logistical problems and are usually not employed except on short lines.
The method of leak detection chosen depends on a variety of factors including the type of product, flow rates, pressures, the amount of instrumentation available, the instrumentation characteristics, the communications network, the topography, the soil type, and economics. As previously mentioned, when highly sophisticated instruments are required, a trade-off often takes place between sensitivity and the number of false alarms, especially in "noisy" systems with high levels of normal flow and pressure fluctuations.
Evaluation of leak detection capabilities
The evaluator should assess the nature of leak detection abilities in the pipeline section he is evaluating. The assessment
should include
What size leak can be reliably detected?
How long before a leak is positively detected?
How accurately can the leak location be determined?
Note that many leak detection systems perform best for only a certain range of leak sizes. However, many overlapping leak detection capabilities are often present in a pipeline. A leak detection capability can be defined as the relationship between leak rate and time to detect. This relationship encompasses both volume-dependent and leak-rate-dependent scenarios. The former is the dominant consideration as product containment size increases (larger diameter pipe at higher pressures), but the latter becomes dominant as smaller leaks continue for long periods.
As shown in Figure 7.7, this relationship can be displayed as a curve with axes of "Time to Detect Leak" versus "Leak Size." The area under such a curve represents the worst case spill volume prior to detection. The shape of this curve is logically asymptotic to each axis because some leak rate level is never detectable and an instant release of large volumes approaches an infinite leak rate.
A leak detection capability curve can be developed by estimating the leak detection capabilities of each available method for a variety of leak rates. A table of leak rates is first selected, as illustrated in Table 7.13. For each leak rate, each system's time to detect is estimated.
In assessing leak detection capabilities, all opportunities to
detect should be considered. Therefore, all leak detection systems available should be evaluated in terms of their respective
abilities to detect various leak rates. A matrix such as that
shown in Table 7.14 can be used for this.
References [3] and [4] discuss SCADA-based leak detection
systems and offer methodologies for evaluating their capabilities. Other techniques will likely have to be estimated based on
time between observations and the time for visual, olfactory, or auditory indications to appear. The latter will be situation dependent and include considerations for spill migration and evidence (soil penetration, dead vegetation, sheen on water, etc.). The total leak time will involve detection, reaction, and isolation time.
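The matrix idea behind Table 7.14 can be sketched computationally: for each leak rate, the earliest detection across all available systems sets the worst-case spill before detection, in the area-under-the-curve spirit of Figure 7.7. All detection times below are hypothetical placeholders, not values from the text:

```python
# Hypothetical times to detect (hours) for each system at each leak
# rate; None means "not detectable by this system".
detection_times = {
    100:    {"SCADA mass balance": None, "patrol": 336, "passerby": 720},
    1000:   {"SCADA mass balance": 72,   "patrol": 168, "passerby": 240},
    10000:  {"SCADA mass balance": 2,    "patrol": 168, "passerby": 24},
}

for rate_gal_per_day, systems in detection_times.items():
    times = [t for t in systems.values() if t is not None]
    best = min(times)  # first opportunity to detect, across all systems
    worst_case_spill = rate_gal_per_day / 24.0 * best  # gallons
    print(rate_gal_per_day, best, round(worst_case_spill))
```

Note how the overlapping systems complement each other: continuous SCADA monitoring catches large leaks quickly, while slow seeps below its threshold are eventually caught by patrol or passerby observation.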
As a further evaluation step, an additional column can be added to Table 7.14 for estimates of reaction time for each detection system. This assumes that there are differences in reactions, depending on the source of the leak indication. A series of
SCADA alarms will perhaps generate more immediate reaction
Figure 7.7 Leak detection capabilities (curve of leak size versus time to detect).
than a passerby report that is lacking in details and/or credibility. The former scenario has an additional advantage in reaction, since steps involving telephone or radio communications may not be part of the reaction sequence.
Table 7.13 Detection of various leak volumes (leak rate Q, gal/day, versus notes and detection times)

Very slow, very difficult to detect leak; T = a few days to several months
Slow leak, difficult to detect; T = hours to days
Significant leak, readily apparent to eyes, nose, ears; T = minutes to hours
Large leak, immediately apparent; T = minutes
Total pipeline volume between valves that are predicted to be used to isolate the leak

Table 7.14 Matrix for evaluating ability of leak detection systems to detect various leak rates

Leak detection system
Mass balance for facility (SCADA and manual)
Overline surveys
Acoustic monitoring
SCADA alarms
SCADA-based computational
SCADA-based mass balance
Staffing of surface facilities
Passerby reporting

Emergency response

Spill volume limiting actions

This opportunity for consequence reduction includes leak detection/reaction and is often the most realistic way for the operator to reduce the consequences of a pipeline failure. Some common approaches to limiting spill volumes are discussed below.

One of the theoretically fastest detection and response scenarios could be valves that automatically isolate a leaking pipeline section based on some continuously monitored parameter that has indicated a leak. However, in real applications, the value of such valves and the practicality of such automation is questionable. In addition to the previously noted limitations to leak detection capabilities, there are limitations and issues regarding other aspects of an automated detection and reaction system:

A. Automatic valves. Set to close automatically, these valves are often triggered on low pressure, high pressure, high flow, or rate of change of pressure or flow. Regular maintenance is required to ensure proper operation. Experience warns that this type of equipment is often plagued by false trips that are sometimes cured by setting relatively insensitive response trigger points.

Check valves are another form of automatic valves and play a spill-reducing role. A check valve might be especially useful for liquid lines with elevation changes. Strategically placed check valves may reduce the draining or siphoning to a spill at a lower elevation. Included in this section should be automatic shutoffs of pumps, wells, and other pressure sources. Redundancy should be included in all such systems before risk-reducing credit is awarded (see Chapter 6, Incorrect Operations Index).

B. Valve spacing. Close valve spacing may provide a benefit in reducing the spill amount. This must be coupled with the most probable reaction time in closing those valves. Discounting failure opening size and pressure, the two components of a release volume from a liquid line are (1) the continued pumping that occurs before the line can be shut down and (2) the liquid that drains from the pipe after the line has been shut down. The former is only minimally impacted by additional valves, perhaps only helping to stop momentum effects from pumping if a valve is rapidly closed (but potentially generating threatening pressure waves). The main role of additional valving, therefore, seems to be in reducing drain volumes. Because a pipeline is a closed system, hydraulic head and/or a displacement gas is needed to effect line drainage. Hilly terrain can create natural check valves that limit hydraulic head and gas displacement of pipeline liquids.

Concerns with the use of additional block valves include costs and increased system vulnerabilities from malfunctioning components and/or accidental closures, especially where automatic or remote capabilities are included. For unidirectional pipelines, check valves (preventing backflow) can provide some consequence minimization benefits. Check valves respond almost immediately to reverse flow and are not subject to most of the incremental risks associated with block valves since they have less chance of accidental closure due to human error or, in the case of automatic/remote valves, failure due to system malfunctions. Their failure rate (failure in a closed position) is uncertain.

Studies of possible benefits of shorter distances between valves of any type produce mixed conclusions. Evaluations of previous accidents can provide insight into possible benefits of closer valve spacing in reducing consequences of specific scenarios. By one study of 336 liquid pipeline accidents, such valves could, at best, have provided a 37% reduction in damage [76]. Offsetting potential benefits are the often substantial costs of additional valves and the increased potential for equipment malfunction, which may increase certain risks (surge potential, customer interruption, etc.). Rusin and Savvides-Gellerson [76] calculate that the costs (installation and ongoing maintenance) of additional valves would far outweigh the possible benefits, and also imply that such valves may actually introduce new hazards.
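The two release-volume components for a liquid line can be written out directly. The function and the example quantities are illustrative assumptions:

```python
def release_volume(pump_rate, shutdown_time, drain_volume):
    """Release from a liquid line (failure size and pressure aside):
    product pumped before the line is shut down, plus product that
    drains by gravity or gas displacement afterward.  Closer valve
    spacing mainly reduces the second term.
    """
    return pump_rate * shutdown_time + drain_volume

# 2,000 bbl/hr line, 0.25 hr to detect and shut down, 900 bbl drains
# from the elevation profile between the nearest isolating valves:
print(release_volume(2_000, 0.25, 900))  # 1400.0 bbl
```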
C. Sensing devices. Part of the equation in response time is the first opportunity to take action. This opportunity depends on the sensitivity of the leak detection. All leak detection will have an element of uncertainty, from the possibility of crank phone calls to the false alarms generated by instrumentation failures or instrument reactions to pipeline transients. This uncertainty must also be included in the following item.
D. Reaction times. If an operator intervention is required to initiate the proper response, this intervention must be assessed in terms of timeliness and appropriateness. A control room operator must often diagnose the leak based on instrument readings transmitted to him. How quickly he can make this diagnosis depends on his training, his experience, and the level of instrumentation that is supporting his diagnosis. Probable reaction times can be judged from mock emergency drill records when available. The evaluator can incorporate his incorrect operations index ratings (training, SCADA, etc.) into this section also. If the control room can remotely operate equipment to reduce the leak, the reaction time is obviously improved. Travel time by first responders must otherwise be factored in. If the pipeline operator has provided enough training and communications to public emergency response personnel so that they may operate pipeline equipment, response time may be improved, but possibly at the expense of increased human error potential. Public emergency response personnel are probably not able to devote much training time to a rare event such as a pipeline failure. If the reaction is automatic (computer-generated valve closure, for instance), some insensitivity is necessarily built in to eliminate false alarms. The time it takes before the shutdown device is certain of a leak must be considered.
Area of opportunity limiting actions
As noted previously, the area of opportunity can sometimes
be limited by protecting or removing vulnerable receptors, by
removing possible ignition sources, or by limiting the extent of
the spilled product.
A. Evacuation. Under the right conditions, emergency response personnel may be able to safely evacuate people from the spill area. To do this, they must be trained in pipeline emergencies. This includes having pipeline maps, knowledge of the product characteristics, communications equipment, and the proper equipment for entering the danger area (breathing apparatus, fire-retardant clothing, hazardous material clothing, etc.). Obviously, entering a dangerous area in an attempt to evacuate people is a situation-specific action. The evaluator should look for evidence that emergency responders are properly trained and equipped to exercise any reasonable options after the situation has been assessed. Again, the criteria must include the time factor. Credit is given when the risk can be reliably reduced by 50% due to appropriate emergency response actions.
B. Blockades. Another limiting action in this category is to
limit the possible ignition sources. Preventing vehicles from
entering the danger zone has the double benefit of reducing
human exposure and reducing ignition potential.
C. Containment. Especially in the case of restricting the movement of hazardous materials into the groundwater, quick containment can reduce the consequences of the spill. The evaluator should look for evidence that the response team can indeed reduce the spreading potential by actions taken during emergency response. This is usually in the form of secondary containment. Permanent forms of secondary containment are discussed in Chapter 11.
Loss limiting actions
Proper medical care of persons affected by the spilled product
may reduce losses. Again, product knowledge, proper equipment, proper training, and quick action on the part of the
responders are necessary factors.
Other items that play a role in achieving the consequence-limiting benefits include the following:
Emergency drills
Communications equipment
Proper maintenance of emergency equipment
Updated phone numbers readily available
Extensive training including product characteristics
Regular contacts and training information provided to fire
departments, police, sheriff, highway patrol, hospitals, emergency response teams, government officials.
These can be thought of as characteristics that help to increase the chances of correct and timely responses to pipeline leaks. Perhaps the first item, emergency drills, is the single most important characteristic. It requires the use of many other list items and demonstrates the overall degree of preparedness of the response efforts.
Equipment that may need to be readily available includes
Hazardous waste personnel suits
Breathing apparatus
Containers to store recovered product
Vacuum trucks
Absorbent materials
Surface-washing agents
Dispersing agents
Freshwater or a neutralizing agent to rinse contaminants
Wildlife treatment facilities.
The evaluator/operator should look for evidence that such
equipment is properly inventoried, stored, and maintained.
Expertise is assessed by the thoroughness of response plans
(each product should be addressed), the level of training of
response personnel, and the results of the emergency drills.
Note that environmental cleanup is often contracted to companies with specialized capabilities.
Assessing emergency response capabilities
Many emergency response considerations have been mentioned here. The evaluator should examine the response possibilities and the most probable response scenario. The best evaluations of effectiveness will be situation specific: the role
of emergency response in limiting spill size or dispersion for
specific segments of pipeline. The next step is to incorporate
those evaluations into the relative risk model.
By most methods of assessing the role of spill size in risk, an 8-in. diameter pipeline presents a greater hazard than does a 6-in. diameter pipeline (all other factors held constant). When the leak detection/emergency response actions can limit the spill size from an 8-in. line to the maximum spill size from a 6-in. line, some measure of risk reduction has occurred. For simplicity's sake, risk reduction could be assumed to be directly proportional to reductions in spill size and/or extent.
Alternatively, and as a further assessment convenience, a
threshold level of consequence-reduction capabilities can be
established. Below this threshold, credit would not be given in
the risk assessment for emergency response capabilities. For
instance, the threshold could be: “reliable reduction of consequences by at least 50% in the majority of pipeline failure scenarios.” When response activities can reliably be expected to
reduce consequences by 50% compared to consequences that
would otherwise occur, the spill or dispersion score can be
adjusted accordingly. Failure to meet this threshold (in the
eyes of the evaluator) warrants no reduction in the previously
calculated spill or dispersion scores.
At first look, it may appear that an operator has many emergency response systems in place and that they are functioning at a high level. Realistically, however, it is difficult to meet a criterion such as a 50% reduction in the effective spill size. The spill and dispersion scores assess the amount of product spilled, assuming worst case scenarios. To reduce either of these, emergency actions would have to always take place quickly and effectively enough to cut either the volume released or the extent of the spill in half.
The evaluator can take the following approach to tie this together to calculate the liquid spill score. An example follows.

Step 1: The evaluator uses the worst case pipeline spill scenario or a combination of scenarios from which to work. She calculates the worst case as a spill score based on a 1-hour, full-bore rupture.

Step 2: The evaluator determines, with operator input, methods to attain a 50% risk reduction, such as reduce spill amount by 50%, reduce population exposure by 50% (number of people or duration of exposure), contain 50% of spill before it can cause damage, or reduce health impact by 50%.

Step 3: The evaluator determines if any action or combination of actions can reliably reduce the risk by 50%. This is done with consideration given to the degree of response preparedness.

If she decides that the answer in Step 3 is yes, she improves the liquid spill score calculated earlier to show only one-half of the previously assumed spill volume.
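Steps 1 through 3 reduce to a simple threshold rule, assuming (as the text suggests for simplicity) that the spill score scales in proportion to spill volume. The function name and score value are hypothetical:

```python
def adjusted_spill_score(worst_case_spill_score, reliable_50pct_reduction):
    """Apply the 50% threshold rule: only when response actions can
    reliably halve the consequences is the worst-case spill score
    improved; otherwise it stands unchanged."""
    if reliable_50pct_reduction:
        return worst_case_spill_score / 2.0
    return worst_case_spill_score

print(adjusted_spill_score(8.0, True))   # 4.0
print(adjusted_spill_score(8.0, False))  # 8.0
```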
Example 7.3: Adjustments to the liquid spill score (Case A)

The evaluator is assessing a section of gasoline pipeline through the town of Smithville. The scenario he is using involves a leak of the full pipeline flow rate. This hypothetical leak occurs at a low point in the line profile, in the center of Smithville. He recognizes the acute
hazard of flammability and the chronic hazards of toxicity (high benzene component), residual flammability (from pockets of liquid), and environmental insult. He feels that a 50% reduction in risk can be attained if the spill size is reduced by 50%, if 50% of the spilled product is contained quickly, or if 50% of the potentially affected residents can be evacuated before they are exposed to the acute hazard.

He has determined that the leak detection and emergency response activities are in place to warrant an adjustment of the leak impact factor.
This determination is based on the following items, observed or ascertained from interviews with the operators:
Automatic valves are set to isolate pipeline sections around the town of Smithville. The valves trigger if a pressure drop of more than 20% from normal operating conditions occurs. The valves are thoroughly tested every 6 months and have a good operating history. A 20% drop in pressure would occur very soon after a substantial leak.
Annual emergency drills are held, involving all emergency
response personnel from Smithville. The drills are well documented and reflect a high degree of response preparedness.
The presence of the automatic valves should limit the spill to 50% of what it would be without the valves. This alone would have been sufficient to adjust the chronic leak impact factor. The strong emergency response program should limit exposure due to residual flammability and ensure proper handling of the gasoline during cleanup. Containment is not seen as an option, but by limiting the spill size, the environmental insult is minimized also. The evaluator sees no relief from the acute hazard but feels an adjustment for the chronic hazard is warranted.
Example 7.4: Adjustments to the liquid spill score (Case B)
The evaluator is assessing a section of brine pipeline in a remote, unpopulated area. The leak scenario she is using involves a complete line rupture. The hazards are only chronic in nature; that is, there are no immediate threats to public or responders. The chronic threat is the exposure to the groundwater table, which is shallow in this area.

The best chance to reduce the chronic risk by 50% is thought to be limiting the spill size by 50%. Emergency response will not reliably occur quickly enough to isolate the leaking pipeline before line depressurization and pump shutoffs slow the leak anyway. Containment in a timely fashion is not possible.

No adjustments to the chronic leak impact factor are made.
D. Receptors
Of critical importance to any risk assessment is an evaluation of the types and quantities of receptors that may be exposed to a hazard from the pipeline. For these purposes, the term receptor refers to any creature, structure, land area, etc., that could "receive" damage from a pipeline rupture. The intent is to capture relative vulnerabilities of various receptors as part of the consequence assessment.
Possible pipeline rupture impacts on the surrounding environmental and population receptors are highly location specific due to the potential for ignition and/or vapor cloud explosion. Variables include the migration of the spill or leak, the sensitivity of the receptor, the nature of the thermal event, the amount of shelter and barriers, and the time of exposure.
Because gaseous product release from a pipeline is a temporary excursion, the pollution potential beyond immediate toxicity or flammability is not specifically addressed here for releases into the air. This discounts the accumulative damage that can be done by many small releases of atmosphere-damaging substances (such as possible ozone damages from greenhouse gases). Such chronic hazards are considered in the assignment of the equivalent reportable release quantity (RQ equivalent) for volatile hydrocarbons.
Ideally, a damage threshold would lead to a hazard area estimation that would lead to a characterization of receptor vulnerability within that hazard area. Damage threshold levels for
thermal radiation and overpressure effects are discussed in
Chapter 14.
D1. Population density
As part of the consequence analysis, a most critical parameter is the proximity of people to the pipeline failure. Population proximity is a factor here because the area of opportunity for harm is increased as human activity occurs closer to the leak site. In addition to potential thermal effects, the potential for ingesting contaminants through drinking water, vegetation, fish, or other ingestion pathways is higher when the leak site is nearby. Less dilution has occurred and there is less opportunity for detection and remediation before the normal pathways are contaminated. The other pathways, inhalation and dermal contact, are similarly affected.
A full evaluation of human health effects from pipeline failures is often unnecessary when the pipeline's products are common and epidemiological effects are well known (see discussion of product hazard, this chapter). In assessing absolute probabilities of injury or fatality from thermal effects, the time and intensity of exposure must be estimated. This is discussed earlier in this chapter, and methods for quantifying these effects are shown in Chapter 14. Shielding and ability to evacuate are critical assumptions in such calculations. Most general risk assessment methods will produce satisfactory results with the simple and logical premise that risks increase as nearby population density increases.
Population density can be taken into account by using any of the published population density scales, such as the DOT Part 192 class locations 1, 2, 3, and 4 (see Table 7.15). These range from rural to urban areas, respectively. The class locations are determined by examining the area 660 ft on either side of the pipeline centerline, and 1 mile along the pipeline. This 1-mile by 1320-ft rectangle, centered over the pipeline, is the defined area in which to conduct counts of dwellings.

If any 1-mile stretch of pipeline has more than 46 dwellings inside this defined area, that section is designated a Class 3 area. A section with fewer than 46 dwellings but more than 10 dwellings in the defined area is a Class 2 area. Each mile with fewer than 10 dwellings is a Class 1 area. A Class 4 area exists when the defined area has a prevalence of multistory buildings.
A Class 3 area is also defined as a section of pipeline that has a high-occupancy building or well-defined outside meeting area within the defined area. Buildings such as churches, schools, and shopping centers that are regularly occupied (5 days per week or 10 weeks per year) by 20 or more people are deemed to be high-occupancy areas. The presence of one of these within 660 ft of the pipeline is a sufficient condition to classify the pipeline section as Class 3.
The population density, as measured by class location, is admittedly an inexact method of estimating the number of people likely to be impacted by a pipeline failure. A thorough analysis would necessarily require estimates of people density (instead of house density), people's away-from-home patterns, nearby road traffic, evacuation potential, time of day, day of week, and a host of other factors. This approach is further discussed in Chapter 14. The class location, however, is thought to be reasonably correlated with "potential" population density and, as such, will serve the purposes of this risk assessment. Table 7.15 shows some possible population estimates based on the class locations.
For more discrimination within class location categories, a continuous scale can be devised in which an actual house count would yield a score: 6 houses = 1.6, 32 houses = 2.7, etc. More qualitative estimates such as "high-density Class 2 = 2.7" and "low-density Class 2 = 2.1" would also serve to provide added discrimination. Additional provisions can be added to make distinctions between the four classifications.
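The class-location rules, and one possible continuous refinement, can be sketched as follows. The dot_class logic follows the dwelling counts given above; the interpolation in continuous_score is an assumption chosen only to approximate the text's examples (it reproduces 1.6 for 6 houses but yields 2.6, not 2.7, for 32 houses):

```python
def dot_class(house_count, multistory_prevalent=False,
              high_occupancy_building=False):
    """DOT Part 192 class location from a one-mile dwelling count
    (660 ft either side of the centerline), per the rules above."""
    if multistory_prevalent:
        return 4
    if high_occupancy_building or house_count > 46:
        return 3
    if house_count > 10:
        return 2
    return 1

def continuous_score(house_count):
    """Illustrative continuous scale interpolating within class bands;
    the band mapping itself is an assumption, not part of the DOT
    definition."""
    if house_count <= 10:
        return 1 + house_count / 10.0          # 1.0-2.0 across Class 1
    if house_count <= 46:
        return 2 + (house_count - 10) / 36.0   # 2.0-3.0 across Class 2
    return 3.0

print(dot_class(6), round(continuous_score(6), 1))    # 1 1.6
print(dot_class(32), round(continuous_score(32), 1))  # 2 2.6
```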
Another population density classification example, expanding on and loosely linked to the DOT classifications, is shown in Table 7.16.
The U.S. DOT also creates categories of populated areas
for use in determining high consequence areas for certain
pipelines. As defined in a recent regulation (49 CFR Part 192),
High Population Area (HPA) means "an urbanized area, as
defined and delineated by the Census Bureau, which contains
50,000 or more people and has a population density of at least
1,000 people per square mile." Other Populated Area (OPA)
means "a place, as defined and delineated by the Census
Bureau that contains a concentrated population, such as an
incorporated or unincorporated city, town, village, or other designated residential or commercial area."

Table 7.15 DOT classifications of house counts and equivalent densities

DOT class location | One-mile house count | One-mile population count (estimated)a
1 | < 10 | < 30
2 | 10-46 | --
3 | > 46 or high-occupancy buildings | > 400
4 | Multistory buildings prevalent | --

a Not part of DOT definition; estimates only.

Table 7.16 Example of population density scoring

Population type
Residential suburban
Semi-rural
Isolated, very remote

Table 7.17 Scale that characterizes population densities on a scale of 1 to 2 points

Population type | Score
Commercial light | --
Residential light future | --
Residential medium future | 1.2

Considerations for restricted mobility populations (nursing
homes, rehabilitation centers, etc.) and difficult-to-evacuate
populations might also be appropriate. Additional modeling
efforts might also assess traffic volumes and occupancies based
on time of day, day of week, and/or season.

Table 7.17 provides an example of a scale characterizing population densities on a relative 1- to 2-point scale. By this scale,
proximity or exposure to a light residential area would raise risk
levels 1.5 times higher than those for a rural area. A heavy commercial (shopping center, business complex, etc.) or residential
exposure would double the risk, compared to a rural area.

Another scoring approach that combines a general population classification with special considerations into a 20-point
scale (where more points represent more consequences) is
shown in Table 7.18.

Receptors 7/167

Table 7.18 Sample scoring of population density (20-point scale)

Population score = [general population] + [special population]

General population category
High density

Special population category
Multifamily, trailer park
Residential backyards
Residential backyards (fenced)

These methods of incorporating population density into the
risk analysis support scoring or categorizing approaches. If
absolute risk values in terms of injury or fatality rates are
required, then additional analyses are usually needed. Other
methods of quantifying populations are discussed in Chapter 14.

D2. Environmental issues

In most parts of the world today, there is a strong motivation to
ensure that industrial operations are compatible with their environments. Ideally, an operation will have no adverse impact
whatsoever on the natural surroundings: air, water, and
ground. Realistically, some trade-offs are involved and an
amount of environmental risk must be accepted. It is practical
to acknowledge that, as with public safety risks, environmental
risks cannot completely be eliminated. They can and should,
however, be understood and managed. As with other pipeline
risk aspects, it is important to establish a framework in which
all environmental risk factors can be quantified and analyzed,
at least in a relative fashion.

The assessment of environmental risk is best begun with a
clear understanding of pertinent environmental issues and the
pipeline company's position with regard to that understanding. A
company will often write a document that states its policy on protecting the environment. This document serves to clarify and
focus the company's position. The policy should clearly state
what environmental protection is and how the company, in its
everyday activities, plans to practice it. The policy is often a statement of intent to comply with all regulations and accepted industry practices. A more customized policy statement will more
exactly define environmentally sensitive areas that apply to its
pipelines. Here the phrase unusually sensitive is implied because
every area is sensitive to some degree to impact from a pipeline
failure. A comprehensive policy will also address the company's
position with regard to environmental issues that do not involve a
pipeline leak. These issues will be defined later in this section.

Assessing the environmental risk is the object of this discussion. Environmental risk factors will overlap public safety risk
factors to a large extent.

The most dramatic negative environmental impact will usually occur from a pipeline spill. However, some impact can
occur from the installation and the long-term presence of the
pipeline. The pipeline can cause changes in natural drainage
and vegetation, groundwater movements, erosion patterns, and
even animal migratory routes. The presence of aboveground
facilities can impact wildlife (and neighbors) in a variety of
non-spill ways, such as these:

Noise (pumps, compressors, product movements through
valves and piping, maintenance and operations activities)
Vehicular traffic
Special activities such as flaring, pigging, and painting
Barriers to animal movements and other natural activities
Rainwater runoff/drainage from facilities
Visual/aesthetic impact.
These non-spill impacts are normally considered in the
design phase of a new pipeline. Sensitivity to such issues in the
design phase can save large remediation/modification costs at a
later stage. Because these consequences can be fairly accurately predicted, the non-spill impacts can be seen more as
anticipated damages than risks. Risk assessment plays a larger
role where there is more uncertainty as to the consequences.
Another non-spill environmental risk category that is not
directly scored in this module is the risk of careless pipelining
activities. There are many opportunities to cause environmental
harm through misuse of common chemicals and through
improper operation and maintenance activities. In pipeline
operations, some potentially harmful substances commonly
used include
Paint removers
Cleaning agents
Vehicle fluids
Biocides/corrosion inhibitors.
Some activities generate potentially dangerous waste products, including
Truck loading/unloading
Pigging/internal cleaning
Valve maintenance.
Even common excavating can disrupt natural drainage,
allow water migrations between previously separated areas,
promote erosion, and hinder vegetation and related wildlife
propagation. As evidence of an operator’s seriousness about
environmental issues, the evaluator can examine these common
practices and procedures to gauge the environmental sensitivity
of the operation.
It is worth noting that pipeline construction can be and has been
designed to incorporate environmental improvements. An
example is a pipeline river crossing project that included, as an
ancillary part of the scope, the installation of special rock structures that facilitated a ninefold increase in fish population [81].
Other examples can be cited where natural habitats have been
improved or other environmental aspects enhanced as part of a
pipeline installation or operation.
D3. Environmental sensitivity
For the initial phases of risk management, a strict definition
of environmentally sensitive areas might not be absolutely
necessary. A working definition by which most people would
recognize a sensitive area might suffice. Such a working definition would need to address rare plant and animal habitats, fragile ecosystems, impacts on biodiversity, and situations where
conditions are predominantly in a natural state, undisturbed
by man. To more fully distinguish sensitive areas, the definition
should also address the ability of such areas to absorb or
recover from contamination episodes.
The environmental effects of a leak are partially addressed in
the product hazard score. The chronic component of this value
scores the hazard potential of the product by assessing characteristics such as aquatic toxicity, mammalian toxicity, chronic
toxicity, potential carcinogenicity, and environmental persistence (volatility, hydrolysis, biodegradation, photolysis). When
the RQ score is above a certain level, the pipeline surroundings
might need to be evaluated for environmental sensitivity.
Liquid spills are generally more apt to be associated with
chronic hazards. The modeling of liquid dispersions is a very
complex undertaking and is approximated for risk modeling
purposes. Areas more prone to damage and/or more difficult to remediate can be identified and included in the risk assessment.
Since the RQ definition includes an evaluation of environmental persistence,
spills of substances whose chronic component,
RQ, is greater than 3 are normally the types of products that can
cause contamination damage. The threshold value of RQ ≥ 3
eliminates most gases and includes most non-HVL hydrocarbon substances transported by pipelines. Some exceptions
exist, such as H2S, where the chronic component is 6 and yet the
environmental impact of an H2S leak may not be significant.
The evaluator should eliminate from this analysis substances
that will not cause environmental harm. Accumulation effects
such as greenhouse gas effects may need to be considered in
environmental sensitivity scoring.
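A minimal sketch of the RQ-based screening described above, with hypothetical product names and an evaluator-chosen exclusion list:

```python
# Sketch of the RQ-based screening described above: products whose
# chronic hazard component is 3 or higher trigger an environmental
# sensitivity evaluation, minus any products the evaluator has
# specifically excluded (e.g., H2S, whose chronic score of 6 overstates
# its likely environmental impact). Product names and scores here are
# illustrative examples, not values from the text.

RQ_THRESHOLD = 3

def needs_env_review(product: str, chronic_rq: float,
                     exclusions=("H2S",)) -> bool:
    """True if the product's chronic component warrants a sensitivity review."""
    return chronic_rq >= RQ_THRESHOLD and product not in exclusions

print(needs_env_review("crude oil", 4))  # True
print(needs_env_review("H2S", 6))        # False -- evaluator exclusion
```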
In the United States, a definition for high environmental sensitivity includes intake locations for community water systems,
wetlands, riverine or estuarine systems, national and state parks
or forests, wilderness and natural areas, wildlife preservation
areas and refuges, conservation areas, priority natural heritage
areas, wild and scenic rivers, land trust areas, designated critical
habitat for threatened or endangered species, and federal and
state lands that are research natural areas [81]. These area labels
fit specific definitions in the U.S. regulatory world. In other
countries, similar areas, perhaps labeled differently, will no
doubt exist.
Shorelines can be especially sensitive to pipeline spills.
Specifically for oil spills, a ranking system for impact to shoreline habitats has been developed for estuarine (where river currents meet tidewaters), lacustrine (lake shorelines), and riverine
(river banks) regions. Ranking sensitivity is based on the
following [20]:

Relative exposure to wave, tidal, and river flow energy
Shoreline type (rocky cliffs, beaches, marshes)
Substrate type (grain size, mobility, oil penetration, and burial)
Biological productivity and sensitivity.
The physical and biological characteristics of the shoreline
environment, not just the substrate properties, are ideally
used to gauge sensitivity. Many of the environmental rankings
shown in Table 7.22 are taken from Ref. [20], which in turn
modified the National Oceanic and Atmospheric Administration
(NOAA) Guidelines for Developing Environmental Sensitivity
Index (ESI) Atlases and Databases (April 1993), other NOAA
guidance for freshwater environments, and the FWS National
Wetlands Research Center.
In spills over water, the spilled material's behavior is critical in
determining the vulnerability of the water biota and the potential
migration of the spill to sensitive shorelines. Table 7.19 relates
some properties of spilled substances to their expected behavior
in water. This can be used to develop a scoring protocol for
offshore product dispersion based on the material's properties.
(See also Chapter 12 for offshore pipeline risk assessments.)
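As a sketch of such a scoring protocol, the qualitative property-to-behavior mapping of Table 7.19 can be coded as simple rules; the category labels below paraphrase the table's behavior descriptions and are not the book's exact wording:

```python
# Rule-based sketch of a Table 7.19-style screening: map a spilled
# product's gross properties to its expected behavior in water. The
# three property dimensions follow the table (boiling point vs.
# ambient, solubility class, specific gravity vs. water); the returned
# category labels paraphrase the table's behavior descriptions.

def water_behavior(boils_below_ambient: bool, solubility: str,
                   specific_gravity: str) -> str:
    """solubility: 'insoluble' | 'partial' | 'high';
    specific_gravity: 'lighter' | 'near' | 'heavier' (vs. water)."""
    if boils_below_ambient:
        return {"insoluble": "rapidly boils off",
                "partial": "mostly boils off, some dissolves",
                "high": "boils off and dissolves"}[solubility]
    if solubility == "high":
        # Highly soluble materials dissolve regardless of specific gravity.
        return "rapidly dissolves"
    base = {"lighter": "floats (surface slick)",
            "near": "floats or disperses in water column",
            "heavier": "sinks to bottom"}[specific_gravity]
    if solubility == "partial":
        return base + ", then dissolves over time"
    return base

print(water_behavior(False, "insoluble", "lighter"))  # floats (surface slick)
```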
As an example of an assessment approach, an evaluation of a
gasoline pipeline in the United Kingdom identified, weighted,
and scored several critical factors for each pipeline segment.
The environmental rating factors that were part of the risk
assessment included
Distance to nearest permanent surface water
Required surface water quality to sustain current land use
Conservation value
Habitat preserves
Habitats with longer lived biota (woods, vineyards,
orchards, gardens)
Rock type and likelihood of aquifer
Depth to bedrock
Distance to groundwater extraction points.
This assessment included consideration of costs and difficulties associated with responding to a leak event. Points were
assigned for each characteristic and then grouped into qualitative descriptors (low, moderate, high, very high) [%I.
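The weight-then-bin approach just described can be sketched generically. The factor names, weights, scores, and bin thresholds below are hypothetical stand-ins; the referenced UK assessment's actual values are not given in the text:

```python
# Generic sketch of the weight-and-bin environmental rating described
# above: score each factor per segment, apply a weight, sum, and bin
# the total into a qualitative descriptor. All names, weights, and
# thresholds here are hypothetical stand-ins.

WEIGHTS = {"distance_to_surface_water": 3, "aquifer_likelihood": 2,
           "conservation_value": 1}

def rate_segment(scores: dict) -> str:
    """Weighted sum of factor scores, binned into a qualitative label."""
    total = sum(WEIGHTS[f] * s for f, s in scores.items())
    for threshold, label in [(20, "very high"), (12, "high"), (6, "moderate")]:
        if total >= threshold:
            return label
    return "low"

print(rate_segment({"distance_to_surface_water": 3,
                    "aquifer_likelihood": 2,
                    "conservation_value": 1}))  # high (total = 14)
```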
Another example of assessing environmental sensitivities is
shown in Appendix E.
D4. High-value areas
For both gas and liquid pipelines, some areas adjacent to a
pipeline can be identified as "high-value" areas. A high-value
area (HVA) can be loosely defined as a location that would suffer unusually high damages or generate exceptional consequences for the pipeline owner in the event of a pipeline failure.
In making this distinction, pipeline sections traversing or otherwise potentially exposing these areas to damage should be
scored as more consequential pipeline sections. HVAs might
also bring an associated higher possibility of significant legal
costs and compensations to damaged parties. Characteristics
that may justify the high-value definition include the following:

Higher property values. A spill or leak that causes damages
in areas where land values are higher or more expensive
structures are prevalent will be more costly to repair or
replace. Another example of this might be agricultural land
where more valuable crops or livestock could be damaged,
and especially where such damage precludes the use of the
area for some time.
Table 7.19 Spill behavior in water
(Boiling point vs. ambient; solubility; specific gravity vs. water → expected behavior in water)

1. Below ambient; insoluble: All of the liquid will rapidly boil from the surface of the water. An underwater spill will most often result in the liquid boiling and bubbles rising to the surface.
2. Below ambient; low or partial solubility: Most of the liquid will rapidly boil off, but some portion will dissolve in the water. Some of the dissolved material will evaporate with time from the water. An underwater spill will result in more dissolution in water than surface spills.
3. Below ambient; very high solubility: As much as 50% or more of the liquid may rapidly boil off the water while the rest dissolves in water. Some of the dissolved material will evaporate with time from the water. Underwater spills will result in more dissolution in water than surface spills. Indeed, little vapor may escape the surface if the discharge is sufficiently deep.
4. Above ambient; insoluble; less than water: The liquid or solid will float on the water. Liquids will form surface slicks. Substances with significant vapor pressures will evaporate with time.
5. Above ambient; low or partial solubility; less than water: The liquid or solid will float on water as above, but will dissolve over a period of time. Substances with significant vapor pressures may simultaneously evaporate with time.
6. Above ambient; very high solubility; less than water: These materials will rapidly dissolve in water up to the limit (if any) of their solubility. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.
7. Above ambient; insoluble; near water: Difficult to assess. Because they will not dissolve, and because specific gravities are close to water, they may float on or beneath the surface of the water or disperse as blobs of liquid or solid particles through the water column. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.
8. Above ambient; low or partial solubility; near water: Although a material with these properties will behave at first like the material described directly above, it will eventually dissolve in the water. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.
9. Above ambient; very high solubility; near water: These materials will rapidly dissolve in water up to the limit (if any) of their solubility. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.
10. Above ambient; insoluble; greater than water: Heavier-than-water insoluble substances will sink to the bottom and stay there. Liquids may collect in deep water pockets.
11. Above ambient; low or partial solubility; greater than water: These materials will sink to the bottom and then dissolve over a period of time.
12. Above ambient; very high solubility; greater than water: These materials will rapidly dissolve in water up to the limit (if any) of their solubility. Some evaporation of the chemical may take place from the water surface with time if its vapor pressure is significant.

Source: ARCHIE (Automated Resource for Chemical Hazard Incident Evaluation), prepared for the Federal Emergency Management Agency, Department of Transportation, and Environmental Protection Agency, for Handbook of Chemical Hazard Analysis Procedures (approximate date 1989) and software for dispersion modeling, thermal, and overpressure impacts.
Areas that are more difficult to remediate. If a spill occurs
where access is difficult or conditions promote more
widespread damage, costs of remediation might be higher.
Examples might be terrain difficult for equipment to access
(steep slopes, swamps, dense vegetation growth); topography that widely and quickly disperses a spilled product, perhaps into sensitive areas such as streams; damages to surface
areas that are disruptive to repair; and damages to agricultural activities where damages would preclude the use of the
area for long periods of time. The reader should realize that
some remediation efforts can continue literally for decades.
Structures or facilities that are more difficult to replace.
An example would be a hospital or university with specialized equipment that is not adequately reflected in property values.
Higher associated costs. If a spill occurs in a marina, a harbor, an airport, or other locations where access interruption
could be potentially very costly to the local industry, this
could justify the high-value area score. If a business is interrupted by a spill (for example, a resort area where beaches
are made inaccessible), higher damages and legal costs can
be anticipated.
Historical areas. Areas valuable to the public, especially
when they are irreplaceable due to historical significance,
may carry a high price if damaged due to a pipeline leak. This
high price might be seen indirectly in terms of public opinion
against the company (or the industry in general) or increased
regulatory actions. Archaeological sites may fit into this category.
High-use areas. These areas are generally covered by population density classifications (high-occupancy buildings such
as churches, schools, and stores cause the class location to
rise to Class 3 in U.S. regulations, if not already there) or by
environmentally sensitive areas such as state and national
parks. Evaluators may wish to designate other high-use areas
such as marinas, beaches, picnic areas, and boating and fishing areas as high-value areas due to the negative publicity
that a leak in such areas would generate.
Identification and scoring of HVAs can be done by
determining the most consequential conditions that exist
and scoring according to the following scale (or according to
the scale of Table 7.21, shown later). Note that the probability
of a leak, fire, and explosion is not evaluated here; only
potential consequences should such an event occur. Interpolations between the classifications should be done. The following classifications use qualitative descriptions of HVAs
and environmental sensitivities to score potential receptor
impacts.
Neutral (default)
No extraordinary environmental or high-value considerations.
Because all pipeline leaks have the potential for environmental harm and property damage, the neutral classification
indicates that there are no special conditions that would
significantly increase the consequences of a leak, fire, or
explosion.
Some environmental sensitivity. A spill has a fair chance of
causing an unusual amount of environmental harm. Values of
surrounding residential properties are in the top 10% of the
community. High-value commercial, public, or industrial
facilities could be impacted by a leak’s fire or explosion.
Remediation costs are estimated to be about halfway
between a normal remediation and the most extreme remediation.
Extreme environmental sensitivity. Nearly any spill will cause
immediate and serious harm. High-cost remediation is anticipated. High-value facilities would almost certainly be damaged by a leak, fire, or explosion. Widespread community
disruptions would occur, as well as long-term or permanent
environmental damage.
Another sample of scoring HVAs is shown in Table 7.20. In
this scheme, various high-value areas are "valued" on a 0- to 5-point
scale, with higher points representing more consequential
or vulnerable receptors.
Attempts to gauge all property values and land uses along the
pipeline may not be a worthwhile effort, especially since such
evaluations must be constantly updated. The HVA designation
can be reserved for extraordinary situations. Experienced
pipeline personnel will normally have a good feel for extraordinary conditions along the lines that merit special treatment in a
risk assessment.

Table 7.20 Sample high-value area scoring

HVA description
Historic site
Busy harbor
Airport (major)
Airport (minor)
Industrial center
Interstate highway
Recreational areas/parks
Special agriculture
Water treatment/source
Equivalencies of receptors
A difficulty in all risk assessments is the determination of a
damage state on which to base frequency-of-occurrence estimates. This is further complicated by the normal presence of
several types of receptors, each with different vulnerabilities to
a threat such as thermal radiation or contamination. The overall
difficulty is sometimes addressed by running several risk
assessments in parallel, each corresponding to a certain receptor or receptor-damage state. In this approach, separate risk values would be generated for, as an example, fatalities, injuries,
groundwater contamination, property damage values, etc. The
advantage of this approach is in estimating absolute risk values.
The disadvantage is the additional complexity in modeling and
subsequent decision making. An example of this type of approach
is shown in Appendix E.
Another approach is to let any special vulnerability of any
threatened receptor govern the risk assessment. An example of
this approach is shown in Appendix E. Appendix F presents a
protocol for grouping various receptor impacts into three sensitivity areas: normal, sensitive, and hypersensitive. This was
developed to perform an environmental assessment (EA) of a
proposed gasoline pipeline. Receptors considered and the basis
of their evaluation in this EA are shown in Table 7.21. Under
this categorization, an area was judged to be sensitive or hypersensitive if any one of the receptors is defined to be sensitive or
hypersensitive. This conservatively uses the worst-case element, but does not consider cumulative effects when multiple
sensitive or hypersensitive elements are present.
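A minimal sketch of this worst-case grouping, assuming the three-tier normal/sensitive/hypersensitive scheme described above (the receptor inputs are illustrative):

```python
# Sketch of the worst-case receptor grouping described above: an area
# takes the most severe category of any receptor present. The category
# names follow the three-tier scheme (normal / sensitive /
# hypersensitive); cumulative effects of multiple sensitive receptors
# are deliberately not modeled, as in the text.

LEVELS = {"normal": 0, "sensitive": 1, "hypersensitive": 2}

def area_category(receptor_categories):
    """Return the governing (worst-case) sensitivity category."""
    return max(receptor_categories, key=lambda c: LEVELS[c])

print(area_category(["normal", "sensitive", "normal"]))  # sensitive
```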
A third option in combining various receptor types into a risk
assessment is to establish equivalencies among the receptors.
The following scheme is an example scoring of receptors for a
hazardous liquid pipeline:

Population: 0-10 pts
High-value areas: 0-10 pts
Surface water: 0-5 pts
Threatened and endangered species: 0-5 pts
Recreational areas: 0-5 pts
Public lands (national parks and forests): 0-5 pts
Water intakes: 0-5 pts
Commercially navigable waterways: 0-5 pts

Table 7.21 Bases for evaluation of various receptors

Receptor | Basis of categorization
Human populations | House/building counts
Water intakes | Distance to public drinking water facilities in various geologic formations; special hydrogeological considerations
Surface water | Stream characterization; flow path modeling; criticality of water; scoring model
Threatened and endangered species | Government agencies; studies; field spot checks
Recreational areas and public lands (national parks and forests) | Parklands; rivers/streams upstream of parklands; aquifers feeding parklands
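A minimal sketch of how such an equivalency scheme can be rolled up into a single consequence score; the receptor keys and input values are illustrative:

```python
# Minimal sketch of the receptor-equivalency scheme above: each
# receptor is scored on its own point range and the points are summed
# into a single consequence score. The maximum ranges mirror the
# example list; the receptor keys and input values are illustrative.

MAX_POINTS = {
    "population": 10, "high_value_areas": 10, "surface_water": 5,
    "threatened_species": 5, "recreational_areas": 5, "public_lands": 5,
    "water_intakes": 5, "navigable_waterways": 5,
}

def total_receptor_points(points: dict) -> int:
    """Sum receptor scores, capping each at its allowed maximum."""
    return sum(min(v, MAX_POINTS[k]) for k, v in points.items())

print(total_receptor_points({"population": 7, "surface_water": 3}))  # 10
```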
This approach might be more controversial because judgments are made that directly value certain types of receptor
damages more than others. Note, however, that the other
approaches are also faced with such judgments although they
might be pushed to the decision phase rather than the assessment phase of risk management.
Table 7.22 Scoring for environmental sensitivity and/or high-value areas

Environmental sensitivity descriptions:
Nesting grounds or nursing areas of endangered species; vital sites for species propagation; high concentration of individuals of an endangered species.
Freshwater swamps and marshes; saltwater marshes; mangroves; vulnerable water intakes for community water supplies (surface or groundwater intakes); very serious damage potential.
Significant additional damages expected due to difficult access or extensive remediation; serious harm is done by a pipeline leak.
Shorelines with riprap structures or gravel beaches; gently sloping gravel riverbanks.
Mixed sand and gravel beaches; gently sloping sand and gravel river banks; topography that promotes wider dispersion (slopes, soil conditions, water currents, etc.); more serious damage potential.
Coarse-grained sand beaches; sandy river bars; gently sloping sandy river banks; national and state parks and forests.
Fine-grained sand beaches; eroding scarps; exposed, eroding river banks; difficulties expected in remediation; higher than "normal" spill dispersal.
Wave-cut platforms in bedrock; bedrock river banks; minor increase in environmental damage potential.
Shoreline with rocky shores, cliffs, or banks.
No extraordinary environmental damages.

High-value descriptions:
Rare equipment; hard-to-replace facilities; extensive associated damages would be felt on loss of facilities; major costs of business interruptions anticipated; most serious repercussions are anticipated; high degree of public outcry; national/international news.
Very high property values; high costs and high likelihood of business interruption; expensive industry shutdowns required; widespread community disruptions are expected; high publicity regionally, some national coverage.
Moderate business interruptions anticipated; well-known or important historical or archaeological sites; a degree of public outrage is anticipated.
Long-term (one growing season or more) damage to agriculture; other associated costs; some community disruption; regional news stories.
Low-profile historical and archaeological sites; high-expense cleanup area due to access, equipment needs, or other factors unique to this area; high level of local public concern would be seen.
Unusual public interest in this site; high-profile locations such as recreation areas; some industry interruption (without major costs); local news coverage.
Some level of associated costs, higher than normal, is anticipated; limited-use buildings (warehouses, storage facilities, small offices, etc.) might have access restricted.
Picnic grounds, gardens, high-use public areas; increasing property values.
Property values are higher than normal.
Potential damages are normal for this class location; no extraordinary damage expected.

Table 7.22 presents another possible scoring scheme for
some environmental issues and HVAs. In this scheme, the
higher scores represent higher consequences. This table establishes some equivalencies among various environmental and
other receptors, including population density. These equivalencies may not be appropriate in all cases. This table was designed
to be used with a 4-point population density classification (the
4 classes defined by DOT). It proposes a 1- to 5-point scale to
include scores not only for population density, but also for environmental sensitivity and high-value areas (HVAs). Scores are
determined based on qualitative descriptions and are to be
added to the population class number. The worst case (highest
number) in each column should govern. When conditions from
both columns coexist, both scores can be added to the population class number.
The extremes of this consequence scale will be intuitively
obvious-the most environmentally sensitive area and the
highest population class and the highest value areas simultaneously occurring in the same section would be the highest consequence section. The scale midrange, however, might
discomfort some people in that a certain amount of environmental sensitivity (or value of the surroundings) is said to equal
a certain population increase. In other words, environmental
loss and economic loss are being equated to loss of life. In Table
7.22, the highest environmental sensitivity and the greatest
HVA can each change the surroundings score by the equivalent
of one population class designation.
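The combination rule described above (worst case in each column, added to the population class number) can be sketched as follows; the 0- to 1.0-point adjustment values are hypothetical judgments of the kind used in the worked examples that follow:

```python
# Sketch of the Table 7.22 adjustment rule described above: take the
# worst-case (highest) score observed in each column -- environmental
# sensitivity and high-value area -- and add both to the DOT
# population class number. The 0-1.0 score values are hypothetical
# evaluator judgments, as in the worked examples in this section.

def receptor_score(pop_class: int, env_scores, hva_scores) -> float:
    """Population class plus governing env and HVA adjustments."""
    env = max(env_scores, default=0.0)   # worst case in env column
    hva = max(hva_scores, default=0.0)   # worst case in HVA column
    return pop_class + env + hva

# A Class 2 segment near high-value housing on a slope (cf. Example 7.6):
print(receptor_score(2, [], [0.5]))  # 2.5
```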
Assessment schemes such as the one shown in Table 7.22 are
of course very general and contain value judgments that might
be controversial. They can, however, be useful screening or
high-level tools in a risk assessment. The following examples
illustrate some general (high-level) consequence scoring for
various receptors, based on Table 7.22.
Example 7.5: Neutral consequences
A natural gas pipeline traverses an agricultural area of Class 1
and Class 2 (low and medium) population densities. Soil conditions are organic clay and sand. Nearby housing and commercial
buildings are consistent with most comparable class locations.
There is no known endangered species that could be impacted
by a leak in this area. Leaked natural gas is lighter than air and
would have minimal chronic impact, as shown by the product
hazard score (chronic component) of 2. No environmental or
high-value receptors are vulnerable from these sections.
Example 7.6: Higher consequences
Outside a major metropolitan area, a subdivision of very
expensive mansions has recently been constructed within 1800
feet of a 6-in., 400-psig fuel oil line. The class location is 2. The
pipeline is located on a slope above the new houses. The soil is
sandy. Groundwater contamination is a possibility, but there are
no intake locations for community water supplies nearby. Spill
remediation would be higher than normal due to the slope effects,
the highly permeable soil, and the anticipated problems with long-term remediation equipment operating near the residential area.
The housing is judged to be far enough from the pipeline and the
thermal effects from burning fuel oil are limited enough that the
immediate impact to that community is a remote possibility. The
evaluator scores this situation 0.5 on a 0- to 1.0-point scale, in consideration of the topography and the high house values. He adds
this to the population class to get 2.5 as the receptor score.
Example 7.7: Extreme consequences
A high-pressure, 30-in. natural gas pipeline is in a corridor
that runs within 300 ft of a major university including a
research/teaching hospital. By population density, the class
location is 4 (very high). Cleanup costs for leaked natural gas
are expected to be minimal. If a fire or explosion occurs, damage could be extensive. Given the unique nature of the structures nearby and the value of the contents within (specialized
equipment, research in progress, records, and files), the evaluator feels the surroundings represent a higher value and scores
the additional consequences for pipeline operations in this area
as 0.9 on a 0- to 1.0-point scale. He adds the environmental/
HVA score to the population class to get 4.9 as the receptor
score. Emergency response to a gas leak would not always be quick
enough to reduce potential damages. No spill score adjustments are made.
Example 7.8: Extreme consequences
A 24-in. crude oil pipeline traverses a wetlands area and parallels a stream for over a mile within the wetlands. This is a
Class 1 area (low population). Cleanup of a spill in this
freshwater marsh would involve much damage associated
with heavy equipment and long-term remediation activities
(temporary roads, establishment of pumping stations, etc.).
Immediately adjacent to the wetlands area, and within one-quarter mile of the pipeline, a small community removes water
from the stream to supplement its groundwater intakes. Noting
the immediate wetlands threat from any spill, the high cost of
remediation, and the threat to a community water supply, the
evaluator scores the conditions as 0.8 on a 0- to 1.0-point scale.
If the water intake were the community's only water supply and
if endangered species were involved, the evaluator would have
scored the situation as 0.85 or 0.9. The receptor score is then 1 +
0.8 = 1.8.
The operator has a very strong environmental program that
includes a detailed, well-practiced response plan. Company-owned equipment and contract equipment are on standby and
can be quickly placed in this area through the use of a helicopter
that is also on 24-hour-per-day standby. Trained, equipped personnel can be at this site within 1 hour. A manned control room
should be able to detect a significant leak here within a very few
minutes. The evaluator judges that this level of response can
indeed reduce spill consequences by 50% (a threshold established for modeling purposes) and, hence, he adjusts the spill
score, effectively reducing the assumed quantity spilled.
Hazard zones
Hazard zones are defined to be distances from a pipeline
release point at which significant damages could occur to a
receptor (see also Chapter 14). Therefore, a hazard zone is usually a function of how far potential thermal and overpressure
effects may extend from the release point. Note that when a hazard zone is calculated as a distance from a source, such as a burning liquid pool or vapor cloud centroid, that source might
not be at the pipeline failure location. In fact, the source can be
some distance from the leak site. Relative hazard zones for a
vapor release are illustrated in Figure 7.8. A hazard zone might
also include potential liquid contamination distances for vulnerable receptors such as water intakes and sensitive environments. The thermal and overpressure distances are themselves a function of many factors, including release rate, release volume, flammability limits, threshold levels of thermal/overpressure effects, product characteristics, and weather conditions.
A hazard zone must be defined in terms of specific damage
thresholds that could occur under defined scenarios. An example of a damage threshold is a thermal radiation (heat flux) level
that causes injury or fatality in a certain percentage of humans
exposed for a specified period of time. Another example is the
overpressure level that causes human injury or specific damage
levels to certain kinds of structures. Such damage threshold
criteria are further discussed in Chapter 14.
Receptors falling within the hazard zones are considered to
be vulnerable to damage from a pipeline release. In the case of a
gas release, receptors that lie between the release point and the
lower flammable concentration boundary of the cloud are
considered to be at risk directly from fire. Receptors that lie
between the release point and the explosive damage boundary
may additionally be at risk from direct overpressure effects.
Some receptors within the hazard zone would also be at risk
from thermal radiation effects from a jet fire as well as from any
secondary fires resulting from the rupture-ignition event. See
Chapter 14 and Figure 7.8. In the case of liquid spills, migration
of spilled product, thermal radiation from a potential pool fire,
and potential contamination would define the hazard zone.
Because an infinite number of release scenarios (and subsequent hazard zones) are possible, some simplifying assumptions are required. A very unlikely combination of events is
often chosen to represent maximum hazard zone distances. The
assumptions underlying such event combinations produce
very conservative (highly unlikely) scenarios that typically
overestimate the actual hazard zone distances. This is done
intentionally in order to ensure that hazard zones encompass
the vast majority of possible pipeline release scenarios. A further benefit of such conservatism is the increased ability of
such estimations to weather close scrutiny and criticism from
outside reviewers.
As an example of a conservative hazard zone estimation, the
calculations might be based on the distance at which a full
pipeline rupture, at maximum operating pressure with subsequent ignition, could expose receptors to significant thermal
damages, plus the additional distance at which blast (overpressure) injuries could occur in the event of a subsequent vapor
cloud explosion. The resulting hazard zone would then represent the distances at which damages could occur, but would
exceed the actual distances that the vast majority of pipeline
release scenarios would impact.
More specifically, the calculations could be based on conservative assumptions generating distances to the LFL boundary, doubling this distance to account for inconsistent mixing, and adding the overpressure distance for a scenario where the ignition and epicenter of the blast occur at the farthest point.
However, such conservatism may also be excessive, leading to inefficient and costly repercussions (in the case of land-use decisions, for example).
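The conservative scheme just described can be sketched numerically. This is a minimal illustration only, not a formula from the text; the function name and the example distances are hypothetical:

```python
def conservative_hazard_zone(lfl_distance_ft, overpressure_distance_ft):
    # Distance to the LFL boundary is doubled to account for inconsistent
    # mixing, then the blast (overpressure) distance is added for an
    # ignition at the farthest point.
    return 2 * lfl_distance_ft + overpressure_distance_ft

# Hypothetical distances: 300-ft LFL boundary, 150-ft overpressure distance
zone_ft = conservative_hazard_zone(300, 150)  # 750 ft
```

In practice the two input distances would come from dispersion and blast modeling under the worst-case rupture scenario described above.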
An estimation of potential hazard zones from pipeline
releases is an aspect of the previously discussed spill score. The same issues that are used to establish relative consequences are used to estimate hazard zones.

Figure 7.8 Thermal and overpressure damage zones (thermal effects, ignitability range, and overpressure zones).

Such estimates are an important
part of modeling when absolute risk estimates are sought (see
Chapter 14). The calculations underlying the estimates are
important in a relative risk assessment because they identify the
critical variables that make one release potentially more consequential than another. They also help the evaluator to better
understand the threat from the pipeline and more appropriately
characterize receptors that are potentially damaged.
Damage states that can be used to define hazard zones are
discussed in Chapter 14.
IX. Leak impact factor sample
Many approaches are possible for evaluating the relative consequences of a pipeline failure. For each component of the LIF
that should be considered, some sample scoring protocols have
been presented. Some additional algorithm samples can be
found in Appendix E and the case studies of Chapter 14.
Leak Impact Factor Samples
In this sample LIF algorithm, a liquid pipeline operator uses the relationships shown in Table 7.23 to evaluate the LIF. A brief
description of the variables used is as follows:

PH = product hazard, scored as described in this chapter
Spill = a score ranging from 0 to 1.0, proportional to relative volume of potential release; 1.0 reflects the largest volume spill possible in this risk model
V1 = volume lost to leak prior to system shutdown
V2 = volume lost to leak from detection to system shutdown
V3 = volume lost to leak due to drainage of isolated pipeline section
Spread = measure of relative dispersion range
Overland = measure of relative dispersion due to surface flows
Subsurface = measure of relative dispersion due to subsurface flows
Drain = surrogate for drain volume; this is actually the upstream and downstream lengths of pipe that would contribute to a specific leak location, scaled from 0 to 10
Pipe diameter = pipe diameter
Max flow rate = pumped flowrate of product
HVA = high value area, as defined in this chapter, susceptible to spill damage
Public lands = binary measure of presence of national parks, wildlife refuges, etc. susceptible to spill damage
Waterways = binary measure of presence of water body susceptible to spill damage
Population = 0-10 point scale indicating relative population density susceptible to spill damage
Water intake = binary measure of presence of drinking water intake structure susceptible to spill damage
LIF = leak impact factor, as defined in this chapter
This sample algorithm is a high-level screening tool used to
identify changes in consequence along the route of a specific
pipeline. The relative consequences are measured by the LIF,
whose main components are
Product hazard (PH)
Receptors (R)
Spill volume (S)
Spread range or dispersion (D)
S is a function of pumping rate, leak detection capabilities,
drain volume, and emergency response capabilities; and
LIF = PH × R × S × D
This model is applied to a pipeline transporting butadiene, whose product hazard is greater than for most hydrocarbons (about twice as high as for butane or propane). A higher health hazard score (Nh per NFPA), a higher reactivity score (Nr per NFPA), and a lower CERCLA reportable quantity create the higher
hazard level.
In the initial application of this algorithm, changes in consequence are thought to be driven solely by changes in operating
pressure and population density along this pipeline. Other
variables are included in the model but are not used initially.
Table 7.23 Algorithms for scoring the leak impact factor

LIF = [(prod_haz) × (spill) × (spread) × (receptors)]
Prod_haz = product hazard, calculated elsewhere and stored as a database variable
Spill = {[(V1) + (V2) + (V3)]/23,000}/10 + 0.2 (the three components of total spill volume are adjusted by scaling factors)
Spread = [(overland)/3 + (subsurface)/8] (variables adjusted by scaling factors)
Receptors = [(population) + (HVA) + (public_lands) + (wetlands) + (water_intake) + (waterways)] (total receptor score is the sum of individual receptor scores, weighted elsewhere)
V1 = [(max flow rate)/12] (spill volume contributed by pumping flow rate)
V2 = volume contributed by leak detection and response time; to be included later
V3 = [(drain) × (pipe_diameter)²] (contributing lengths of upstream and downstream pipe are adjusted by pipe diameter as a surrogate volume calculation)
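The relationships in Table 7.23 can be sketched as a short routine. This is one plausible reading of the table, not the author's implementation; V2 is held at zero here because the leak detection and response volume is noted as "to be included later," and all input values in the example call are hypothetical:

```python
def leak_impact_factor(prod_haz, max_flow, drain, pipe_diameter,
                       overland, subsurface,
                       population, hva, public_lands, wetlands,
                       water_intake, waterways):
    # V1: spill volume contributed by pumping flow rate
    v1 = max_flow / 12.0
    # V2: leak detection and response volume ("to be included later")
    v2 = 0.0
    # V3: drain-volume surrogate (contributing pipe lengths scaled by diameter^2)
    v3 = drain * pipe_diameter ** 2
    spill = ((v1 + v2 + v3) / 23000.0) / 10.0 + 0.2
    spread = overland / 3.0 + subsurface / 8.0
    receptors = (population + hva + public_lands + wetlands
                 + water_intake + waterways)
    return prod_haz * spill * spread * receptors

# Hypothetical inputs for one pipeline segment
lif = leak_impact_factor(prod_haz=10, max_flow=1200, drain=5, pipe_diameter=2,
                         overland=3, subsurface=8, population=5, hva=1,
                         public_lands=0, wetlands=0, water_intake=1, waterways=1)
```

Because every term is multiplicative, a zero for any main component (for example, no receptors of any kind) drives the LIF toward its minimum, which is consistent with the LIF = PH × R × S × D structure described above.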
Table 7.24 Sample LIF algorithm

Spill size = square root of (pressure); spill size is modeled as being a function of (proportional to the square root of) internal pipe pressure
Dispersion: no differences in dispersion potential are recognized
Product hazard: constant for this pipeline; for comparisons with other pipelines, natural gas = 7, gasoline = 10
Receptors = (general population density) + (special population areas); two variables are used to quantify the receptor of interest (population density); see Table 7.18
Leak detection: constant; no changes along pipeline
Emergency response: constant; no changes along pipeline
Table 7.24 shows the LIF variables used in the model. Future
improvements to the analyses could include evaluations of leak
detection, drain volumes, emergency response, additional
receptors, and dispersion potential.
Data Management
and Analyses
Contents
I. Background
II. Introduction
III. Risk management process
Definitions
IV. Data preparation
Data collection and format
Point events and continuous data
Eliminating unnecessary segments
Creating categories of measurements
Assigning zones of influence
Countable events
Spatial analyses
Data quality/uncertainty
V. Segmentation
VI. Scoring
Computer environments
Measures of central tendency
Measures of variation
Graphs and charts
Examples
I. Background
Although subsequent chapters discuss possible additions to the
risk assessment, many will, at least initially, want to work solely
with the results of the risk assessment methodology described
in Chapters 3 through 7. Therefore, this chapter discusses some data management issues and then begins the natural progression from risk assessment to risk management.
Risk assessment is, at its core, a measurement process. As
noted in Chapter 1, there is a discipline, perhaps even an “art”
to measuring. This involves a philosophy with a clear understanding of the intent of the measuring. Furthermore, it requires defined processes and structure for performing the measurements, including all associated data handling efforts.
Having accumulated some risk assessment data, the next
step is to prepare that data for decision making. Guidance on
data management issues that will often arise in the risk assessment process is offered here. Some of these issues may not be apparent until the later stages of the effort, so advance planning will help ensure an efficient process. The numerical techniques that help extract information from data are discussed
here. Using that information in decision making is also
discussed and then more fully detailed in Chapter 15, Risk Management.
II. Introduction
Risk assessment is a data-intensive process. It synthesizes all available information into new information: risk values, either relative or absolute. The philosophy behind data collection is discussed in Chapters 1 and 2.

The risk values themselves become information that must be managed. All through this book the risk “model” has meant the
set of algorithms used to define risk. To many, the term model
refers to the software that holds data and presents risk results.
This is actually the environment in which the model resides, not
the model itself, by the terminology of this book.
The environment housing and supporting the risk model is
critical to the success of the risk management process.
III. Risk management process
This section describes the mechanics of performing a risk
assessment using the common software tools of a spreadsheet
and desktop database. The risk assessment process is designed
to capture pertinent information in a format that can be used to
first create segments with constant risk characteristics and then
assign risk scores to those segments.
Many of the data processing steps in a risk assessment
can appear complex when first studied, especially when those
steps are described rather than demonstrated. Most processes
are, however, fairly self-evident once the risk assessment
efforts are under way. As such, the reader is advised not to be deterred from embarking on the effort by the apparently significant issues of the process, but rather to begin the
effort and use the following sections as a reference document
as issues arise.
Chapter 1 presents an overall process for risk management.
This process can be revisited when considering potential software environments since the software will ideally fully support
each step in the process.
Step 1: risk modeling
As previously noted, a pipeline risk assessment model is a set
of algorithms or “rules” that use available information and
data relationships to measure levels of risk along a pipeline.
A model can be selected from some existing and commercially
available models, customized from existing models, or created
“from scratch” depending on your requirements. Algorithms
can be created to use data directly from a database environment
to calculate risk scores. Several common software environments will support efficient data storage, retrieval, and algorithm calculations.
Step 2: data preparation
Data preparation or conditioning produces data sets that
are ready to be loaded into and used by the risk assessment
model. Data preparation includes processes to smooth or
enhance data into zones of influence, categories, or bands as
may be appropriate. Computer routines greatly facilitate these processes.
Step 3: segmentation
Because risks are rarely constant along a pipeline, it is advantageous to first segment the line into sections with constant risk
characteristics (dynamic segmentation) or otherwise divide
the pipeline into manageable pieces. This might be a one-time
or rare event, to ensure consistent segments. Alternatively, it
might change every time the underlying data change. Again,
computer routines facilitate this.
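Dynamic segmentation as described in this step might be sketched as follows. This is an illustrative routine, not a published algorithm from the text, and the event data are hypothetical:

```python
def dynamic_segments(events):
    # events: (begin_station, end_station, event_name, condition) tuples.
    # A new segment begins wherever any event's condition changes.
    breakpoints = sorted({s for b, e, _, _ in events for s in (b, e)})
    segments = []
    for begin, end in zip(breakpoints, breakpoints[1:]):
        # Conditions of all events covering this span
        conditions = {name: cond for b, e, name, cond in events
                      if b <= begin and e >= end}
        segments.append((begin, end, conditions))
    return segments

events = [(0, 1000, "depth_of_cover", "36in"),
          (0, 400, "population", "low"),
          (400, 1000, "population", "high")]
segments = dynamic_segments(events)
# Two iso-risk segments result: 0-400 ("low") and 400-1000 ("high")
```

Each resulting segment carries a constant set of risk characteristics, so a scoring algorithm can be applied to it as a unit.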
Step 4: assessing risks
After a risk model has been selected and the data prepared, risks
along the pipeline route can be assessed. The previously
selected risk assessment model can be applied to each segment
to get a unique risk score for that segment. These relative risk
numbers can later be converted into cumulative risk values
and/or absolute risk numbers.
Moving from overviews of risk down to the smallest details of a specific piece of pipe will often be necessary. Software that supports rapid tabularization and perhaps map overlays is useful. Spill flowpath or dispersion modeling, soil penetration, and hazard area determinations are aspects of the more robust risk assessments.
Step 5: managing risks
Having performed a risk assessment for the segmented
pipeline, now comes the critical step of managing the risks.
In this area, the emphasis is on decision support: providing the tools needed by the risk manager to best optimize resource allocation.

This process generally involves steps such as these:
Analyzing data (graphically and with tables and simple statistics)
Calculating cumulative risks and trends
Creating an overall risk management strategy
Identifying mitigation projects
Performing “what-if” scenarios.
Patterns, trends, and relationships among data sets can
become an important part of managing risks. Software that
supports analytical graphics routines will be useful.
Definitions

Several terms in this discussion might be used in a manner that
is unfamiliar to the reader. Terminology is not consistent among
all risk modelers so these definitions are more for convenience
in describing subsequent steps here.
Each record must have an identifier that relates that record to
a specific portion of the overall pipeline system, that is, an ID.
This identifier, along with a beginning station and ending
station, uniquely identifies a specific point on the pipeline system. It is important that the identifier-stationing combination
does indeed locate one and only one point on the system. An
alphanumeric identification system, perhaps related to the
pipeline’s name, geographic position, line size, or other common identifying characteristics, is sometimes used to increase
the utility of the ID field.
Risk variables are also commonly called events in keeping with modern GIS terminology. The current characteristics of each event are called conditions (also sometimes called attributes or codes). For example, for the event (population), possible conditions include residential high, residential low, commercial high, etc. For the event maps/records, possible conditions are excellent, fair, poor, none. The event depth of cover could have a number or numerical ranges such as 24", 19", >48", or 12-24" as its possible conditions. Events, as variables in the
risk assessment, can be named using standardized labels.
Several industry database design standards are emerging as
of this writing. Adhering to a standard model facilitates the
efficient exchange of data with vendors and service providers (ILI, CIS, etc.), as well as with other pipeline companies and governmental databases.
Each event must have a condition assigned. Some conditions
can be assigned as general defaults or as a system-wide characteristic. Each event-condition combination defines a risk characteristic for a portion of the system.
A restricted vocabulary is enforced in the most robust
software applications. Only predefined terms can be used
to characterize events. This eliminates typos and the use of
different conditions to mean the same thing. For instance, for the event pipe manufacturer = "Republic Steel Corp," and not "Republic" or "Republic Steel" or "republic" or "RSC"; coating condition = "fair" and not "F" or "ok," "medium" or "med," etc.
The data dictionary is a document that lists all events and
their underlying source, as well as all risk variables. It should
also show all conditions used for each event along with the full
description of each condition and its corresponding point values. The data dictionary is designed to be a reference and control document for the risk assessment. It should specify the
owner (the person responsible for the data) as well as update
frequency, accuracy, and other pertinent information about
each piece of data, sometimes called metadata.
In common database terminology, each row of data is called
a record and each column is called a field. So, each record is
composed of several fields of information and each field contains information related to each record. A collection of records
and fields can be called a database, a data set, or a table.
Information will usually be collected and put into a database (a
spreadsheet can be a type of database). Results of risk assessments will also normally be put into a database environment.
A GIS is a geographical information system that combines
database capabilities with graphics (especially maps) capabilities. GIS is increasingly the software environment of choice for
assets that span large geographic areas. Most GIS environments have a programming language that can extract data and
combine them according to the rules of an algorithm. Common
applications for more detailed risk assessments will be modeling for flowpath or dispersion distances and directions, surface
flow resistance, soil penetration, and hazard zone calculations.
It can also be the calculating “engine” for producing risk scores.
SQL refers to Structured Query Language, a software language recognized by most database software. Using SQL, a
query can be created to extract certain information from the
database or to combine or present information in a certain way.
Therefore, SQL can take individual pieces of data from the database and apply the rules of the algorithm to generate risk scores.
IV. Data preparation
Data collection and format
Pertinent risk data will come from a variety of sources. Older
data will be in paper form and will probably need to be put into
electronic format. It is not uncommon to find many different
identification systems, with some linked to original alignment sheets, some based on linear measurements from fixed
points, and some based on coordinate systems, such as the
Global Positioning System (GPS). Alignment sheets normally
use stationing equations to capture adjustments and changes in
the pipeline route. These equations often complicate identifiers
since a stationing shown on an alignment sheet will often be
inconsistent with the linear measurements taken in most surveys. Information will need to be in a standard format, or translation routines can be used to switch between alignment sheet
stationing and linear measurements.
All input information should be collected in a standard data
format with common field (column) names. A standard data
format can be specified for collection or reformatting.
Consider this example:

ID = identifier relating to a specific length of pipeline
Begstation = the beginning point for a specific event and condition, using a consistent distance measuring system
Endstation = the end point for a specific event and condition, using the same measurement system
Event = the name of the event
Condition = the condition.
Each record in the initial events database therefore
corresponds to an event that reports a condition for some risk variable over a specific distance along a specific pipeline.
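Records in this standard format might look like the following. The field names follow the example above; the line ID, stations, events, and conditions are hypothetical:

```python
# Each record reports one condition for one event over one stretch of pipe.
events_db = [
    {"ID": "ML-101", "Begstation": 0, "Endstation": 5280,
     "Event": "pipe_grade", "Condition": "X52"},
    {"ID": "ML-101", "Begstation": 5280, "Endstation": 9500,
     "Event": "pipe_grade", "Condition": "X60"},
    {"ID": "ML-101", "Begstation": 0, "Endstation": 9500,
     "Event": "coating_condition", "Condition": "fair"},
]

def records_for(events_db, line_id, station):
    # All event records covering a given station on a given line
    return [r for r in events_db
            if r["ID"] == line_id
            and r["Begstation"] <= station < r["Endstation"]]

matches = records_for(events_db, "ML-101", 6000)
```

Note that the ID-plus-stationing combination locates one and only one stretch of the system, as required by the definitions discussed earlier.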
In data collection and compilation, an evaluator may wish to
keep separate data sets-perhaps a different data set for each
event or each event in each operating area-for ease of editing
and maintenance during the data collection process. The number of separate data sets that are created to contain all the information is largely a matter of preference. Having few data sets
makes tracking of each easier, but makes each one rather large
and slow to process and also may make it more difficult to find
specific pieces of information. Having many data sets means
each is smaller and quicker to process and contains only a few
information types. However, managing many smaller data sets
may be more problematic. Especially in cases where the number of event records is not huge, maintaining separate data sets
might not be beneficial.
Separate data sets will need to be combined for purposes of
segmentation and assignment of risk scores. The combining of
data sets can be done efficiently through the use of certain
queries in the SQL of most common database software.
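As a sketch of such a combining query, here is an example using the sqlite3 module bundled with Python; the table names, columns, and data are hypothetical, and a production system would likely use its own database software:

```python
import sqlite3

con = sqlite3.connect(":memory:")
for table in ("depth_events", "soil_events"):
    con.execute(f"CREATE TABLE {table} (ID TEXT, Begstation REAL, "
                "Endstation REAL, Event TEXT, Condition TEXT)")
con.execute("INSERT INTO depth_events VALUES "
            "('ML-101', 0, 500, 'depth_of_cover', '36in')")
con.execute("INSERT INTO soil_events VALUES "
            "('ML-101', 0, 800, 'soil_ph', '6.5')")

# A UNION ALL query combines the per-event data sets into one events
# table, ready for segmentation.
rows = con.execute("""
    SELECT ID, Begstation, Endstation, Event, Condition FROM depth_events
    UNION ALL
    SELECT ID, Begstation, Endstation, Event, Condition FROM soil_events
    ORDER BY ID, Begstation
""").fetchall()
```

Because all data sets share the standard field names, the same query pattern works regardless of how many separate event tables were maintained during data collection.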
A scoring assessment requires the assignment of a numerical
value corresponding to each condition. For example, the event
environ sensitivity is scored as "high," which equals a value of 3 points in a certain risk model. It is also useful to preserve the more descriptive condition (high, med, low, etc.).
Point events and continuous data
There is a distinction between data representing a specific point
versus data representing a continuous condition over a length of
pipeline. Continuous data always have a beginning and ending
station number. A condition that stays generally constant over
longer distances is clearly continuous data. Point event data
have a beginning station number but no ending station; that is, an event with no length. The distinction often has more to do
with how the data are collected. For instance, depth of cover is
normally measured at specific points and then the depth is
inferred between the measurements. So even though the depth
itself is often rather constant, the way in which it is collected
causes it to be treated as point data. These types of data are generally converted into continuous
bands by assuming that each reading extends one-half the
distance to the next reading.
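The half-distance rule can be sketched as follows. The treatment of the first and last readings (extended to the survey ends) is an assumption, since the text does not address endpoints:

```python
def points_to_bands(points):
    # points: [(station, value), ...]; each reading extends one-half the
    # distance to its neighbors.  First/last readings extend to the survey
    # ends (an assumption; the text does not cover endpoints).
    points = sorted(points)
    bands = []
    for i, (sta, val) in enumerate(points):
        begin = sta if i == 0 else (points[i - 1][0] + sta) / 2
        end = sta if i == len(points) - 1 else (sta + points[i + 1][0]) / 2
        bands.append((begin, end, val))
    return bands

bands = points_to_bands([(0, 30), (100, 32), (300, 28)])
# bands: (0, 50, 30), (50, 200, 32), (200, 300, 28)
```

Each band then behaves like any other continuous event when the pipeline is later segmented.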
Examples of Point Event Data
A pipe-to-soil measurement
Soil pH measurements at specific points
Depth of cover-actual measurements
Drain volume calculations at specific points
Elevation data.
Examples of Continuous Data
Pipe specifications
Depth of cover (when estimated)
Procedures score
Training score
Maintenance score
Earth movement potential
Waterways crossings
Wetlands crossings.
Some of these continuous data examples are evaluation scores, such as "Procedures score," which is described elsewhere in this book.
Inferring continuous data
Because the risk model requires variables to be characterized
continuously along the pipeline, all data must eventually be in
continuous format. Special software routines can be used to convert point event data into continuous data, or it can be done manually.
Some data are generated as point events, even though they
would seem to be continuous by their nature. In effect, the continuous condition is sampled at regular intervals, producing
point event data. There are an infinite number of possible measurement points along any stretch of pipeline. The measurements taken are therefore spot readings or samples, which are
then used to characterize one or more conditions along the
length of the pipeline. This includes point measurements taken
at specific points, such as depth of cover, pipe-to-soil voltage,
or soil pH. In these cases, measurements are assumed to represent the condition for some length along the line.
Other point event data are not direct measurements but
rather the result of calculations. An example is a drain volume calculated based on the pipeline’s elevation profile.
These can theoretically be calculated at every inch along the
pipeline. It is common practice to select some spacing,
perhaps every 100 ft or 500 ft, to do a calculation. These calculated points are then turned into continuous data by assuming
the calculated value extends half the distance to the next
calculation point. Other examples include internal pressure
and population density. Internal pressure changes continuously
as a function of flowrate and distance from the pressure
source. Similarly, as one moves along the pipeline, the population density theoretically changes with every meter, since each meter represents a new point from which a circle or rectangle can be drawn to determine population density around the pipeline.
Eliminating unnecessary segments

Data that are collected at regular intervals along the pipeline are
often unchanging, or barely changing, for long stretches.
Examples of closely spaced measurements that often do not
change much from measurement to measurement include CIS
pipe-to-soil potential readings, depth of cover survey readings,
and soil pH readings. Unless this is taken into account, the
process that breaks the pipeline into iso-risk segments will create many more segments than necessary. A string of relatively
consistent measurements can be treated as a single band of
information, rather than as many separate short bands. It is inefficient to create new risk segments based on very minor
changes in readings since, realistically, the risk model should
not react to those minor differences. It is more efficient for a
knowledgeable individual to first determine how much of a
change from point to point is significant from a risk standpoint.
For example, the corrosion specialist might see no practical difference in pipe-to-soil readings of 910 and 912 millivolts.
Indeed, this is probably within the uncertainty of the survey
equipment and process. Therefore, the risk model should not
distinguish between the two readings. However, the corrosion
specialist is concerned with a reading of 910 mV versus a reading of 840 mV, and the risk model should therefore react differently to the two readings. The use of normal operating pressures
is another example. The pipeline pressure is continuously
changing along the pipeline, but smaller changes are normally
not of interest to the risk assessment.
Creating categories of measurements
To eliminate the unnecessary break points in the event bands, a
routine can be used to create categories or “bins” into which
readings will be placed. For instance, all pipe-to-soil readings
can be categorized into a value of 1 to 10. There will still be
sharp delineations at the break points between categories. If a
reading of -0.89 volts falls into category = 4 and -0.90 volts
falls into category = 5, then some unnecessary segments will
still be created (assuming the difference is not of interest).
However, the quantity of segments will be reduced, perhaps vastly, depending on the number of categories used. The user
sets the level of resolution desired by choosing the number of
categories and the range of each. A statistical analysis of actual
readings, coupled with an understanding of the significance of
the measurements, can be used to establish representative categories. A frequency distribution of all actual readings will assist
in this categorization process.
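A simple binning routine illustrates the idea; the bin edges and readings below are hypothetical and would, as noted, be chosen by a knowledgeable individual:

```python
def categorize(reading, bin_edges):
    # Place a reading into category 1..len(bin_edges)+1; edges are chosen
    # based on the risk significance of differences between readings.
    category = 1
    for edge in bin_edges:
        if reading >= edge:
            category += 1
    return category

edges = [850, 900, 950, 1000]        # five categories (hypothetical, in mV)
cats = [categorize(r, edges) for r in (910, 912, 840)]
# 910 and 912 land in the same category; 840 does not
```

This matches the corrosion example above: the model does not distinguish 910 from 912 mV, but it does react to the difference between 910 and 840 mV.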
Assigning zones of influence
A special case of converting point data into continuous data
involves assigning a zone of influence. Some data are very
location specific but provide some information about the surrounding lengths of pipe. These data are different from the
sample data previously discussed since the event of interest is
Segmentation 8/181
not a sample measurement but rather represents an event or
condition that is tied to a specific point on the pipeline.
However, it will be assumed to be representing some distance
either side of the location specified. An example is leak or
break data. A leak usually affects only a few inches of pipe, but
depending on the type of leak, it yields evidence about the susceptibility of neighboring sections of pipeline. Therefore, a
zone of influence, x number of feet either side of the leak event,
is reasonably assigned around the leak. The whole length of
pipeline in the zone of influence is then conservatively treated
as having leaked and containing conditions that might suggest
increased leak susceptibility in the future. Considerations will
be necessary for overlapping zones of influence, when the zone
for one event overlaps the zone for another, leaving the overlap
region to be doubly influenced.
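Assigning zones of influence and handling the overlaps might be sketched as follows; the half-width and leak stations are hypothetical:

```python
def leak_influence_zones(leak_stations, half_width_ft):
    # Assign a zone of influence x ft either side of each leak and merge
    # overlapping zones into single continuous bands.
    zones = sorted((s - half_width_ft, s + half_width_ft)
                   for s in leak_stations)
    merged = []
    for begin, end in zones:
        if merged and begin <= merged[-1][1]:      # overlaps previous zone
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((begin, end))
    return merged

zones = leak_influence_zones([1000, 1150, 5000], half_width_ft=100)
# two merged bands: (900, 1250) and (4900, 5100)
```

Merging, as shown here, is one simple way to treat a doubly influenced overlap region; a model could instead score such regions more severely.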
Countable events
Some point events may be treated not as sample measurements but rather as countable events. Examples are foreign line crossings, one-call reports, and ILI anomalies (when an anomaly-specific evaluation is not warranted). The count or density of
such events might be of interest, rather than a zone of influence.
The number of these events in each section can be converted
into a density. However, the density calculation derived after a
segmentation process can be misleading because section length
is highly variable under a dynamic segmentation scheme. A
density might need to be predetermined and then used as an
event prior to segmentation.
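Predetermining such a density might be sketched as follows; the window and event stations are hypothetical:

```python
def event_density(event_stations, begin, end, per_ft=5280):
    # Count of point events (foreign line crossings, one-call reports,
    # etc.) in [begin, end), normalized to events per mile.
    count = sum(1 for s in event_stations if begin <= s < end)
    return count * per_ft / (end - begin)

# Ten crossings spread over a two-mile stretch -> 5 per mile
density = event_density(list(range(0, 10560, 1056)), 0, 10560)
```

Computing the density over a fixed window before segmentation, as here, avoids the misleading values that arise when densities are derived from the highly variable section lengths of a dynamic segmentation.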
Data quality/uncertainty

As discussed in Chapters 1 and 2, there is much uncertainty surrounding any pipeline risk assessment. A portion of that uncertainty comes from the data itself. It might be appropriate to characterize collected data in terms of its quality and age, both of which should influence the evaluator's perception of risk and, hence, the risk model. A "rate of decay" for information age is discussed in Chapter 2. Adding to the decay aspect, a distinction can also be made regarding the origin and confidence surrounding the collected data. It is entirely appropriate to gather some data as a simple table-top exercise (for example, field personnel indicating on an alignment sheet their knowledge of ROW condition or depth of cover), with field verification to come later. However, it is useful to distinguish this type of assumed information from actual measurements taken in the field. A soil resistivity measured near the pipeline should usually have a greater impact on risk perception than an assumed regional level of soil corrosivity. Increasing uncertainty should be shown as increasing risk, for reasons detailed in earlier chapters.

One way to account for variations in data quality is to "penalize" risk variables that are not derived from direct measurement or observation. This not only shows increasing risk with increasing uncertainty, but also helps to value (show the benefits of) the direct measurements and justify the costs of such activities, which most agree intuitively are a risk mitigation measure. Table 8.1 shows an example of adjustments for data quality. The adjustment factor can then be used along with an age (decay) adjustment as follows:

Variable score x (Quality Adjustment Factor) x (Age Adjustment Factor)

to ensure that less certain information leads to higher risk.

Table 8.1  Sample adjustments for data quality

Actual measured value or direct observation.
Based on knowledge of the variable, nearby readings, etc.; confident of this condition, but not confirmed by actual measurement; value proposed will be correct 99% of the time.
Informed guess: based on some knowledge and expert judgment, but less confident; value proposed will be correct 90% of the time.
Worst case: applied where no reliable information is available.

V. Segmentation

Spatial analyses

The most robust risk assessments will carefully model spill footprints and, from those, estimate hazard areas and receptor vulnerabilities within those areas. These footprints require sophisticated calculation routines to consider even a portion of the many factors that impact liquid spill migration or vapor cloud dispersion. These factors are discussed in Chapter 7. Establishing a hazard area and then examining that area for receptors requires extra data handling steps. The hazard area will be constantly changing with changing conditions along the pipeline, so the distances from the pipeline within which to perform house counts, look for environmental sensitivities, etc., will also be constantly changing, complicating the data collection and formatting efforts. For instance, a liquid pipeline located on steep terrain would prompt an extensive examination of downslope receptors and perhaps disregard of upslope receptors. Modern GIS environments greatly facilitate these spatial analyses, but still require additional data collection, formatting, and modeling efforts. The risk assessor must determine if the increased risk assessment accuracy warrants the additional efforts.

As detailed in Chapter 2, an underlying principle of most pipeline risk models is that conditions constantly change along the length of the pipelines. A mechanism is required to measure these changes and assess their impact on failure probability and consequence. For practical reasons, lengths of pipe with similar characteristics are grouped so that each length can be assessed and later compared to other lengths.

Two options for grouping lengths of pipe with similar characteristics are fixed-length segmentation and dynamic segmentation. In the first, some predetermined length such as 1 mile or 1000 ft is chosen as the length of pipeline that will be evaluated as a single entity. A new pipeline segment is created at these lengths regardless of the pipeline characteristics. Under this approach, then, each pipeline segment will usually have non-uniform characteristics. For example, the pipe wall thickness, soil type, depth of cover, and population density might all change within a segment. Because the segment is to be evaluated as a single entity, the non-uniformity must be eliminated. This is done by using the average or worst case condition within the segment.

An alternative is dynamic segmentation. This is an efficient way of evaluating risk since it divides the pipeline into segments of similar risk characteristics: a new segment is created when any characteristic changes. Since the risk variables measure unique conditions along the pipeline, they can be visualized as bands of overlapping information. Under dynamic segmentation, a new segment is created every time any condition changes, so each pipeline segment has a set of conditions unique from its neighbors. Section length is entirely dependent on how often the conditions change. The smallest segments are only a few feet in length where one or more variables are changing rapidly. The longest segments are several hundred feet or even miles long where variables are fairly constant.
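The banding idea behind dynamic segmentation can be sketched as follows: each data layer is a list of (begin, end, value) records, and a new segment boundary is created wherever any layer changes. The layer names and station values below are illustrative only.

```python
# Sketch of dynamic segmentation: each data "band" is a list of
# (begin_station, end_station, value) records; a new segment starts
# wherever ANY band changes value. Bands are assumed to span the
# full line with no gaps; names and stations are invented.

def dynamic_segments(bands):
    """bands: dict of layer_name -> [(beg, end, value), ...]."""
    # Collect every station where any condition changes.
    breaks = sorted({s for recs in bands.values()
                     for beg, end, _ in recs for s in (beg, end)})
    segments = []
    for beg, end in zip(breaks, breaks[1:]):
        mid = (beg + end) / 2.0
        # Each resulting segment carries one value from every band.
        conditions = {name: next(v for b, e, v in recs if b <= mid < e)
                      for name, recs in bands.items()}
        segments.append((beg, end, conditions))
    return segments

if __name__ == "__main__":
    bands = {
        "wall_thickness": [(0, 1500, 0.250), (1500, 3000, 0.312)],
        "cover_depth":    [(0, 800, "36in"), (800, 3000, "48in")],
    }
    for beg, end, cond in dynamic_segments(bands):
        print(beg, end, cond)
```

With these two invented bands, three segments result (0-800, 800-1500, 1500-3000), each with a condition set unique from its neighbors, exactly as described above.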
Creating segments
A computer routine can replace a rather tedious manual method of creating segments under a dynamic segmentation strategy. Related issues such as persistence of segments and cumulative risks are also more efficiently handled with software routines. A software program should be assessed for its handling of these aspects. Segmentation issues are fully discussed in Chapter 2.
VI. Scoring
The algorithms or equations are the "rules" by which risk scores will be calculated from input data. Various approaches to algorithm scoring are discussed in earlier chapters, and some algorithm examples are shown in Chapters 3 through 7 and also in Appendix E. The algorithm list is often best created and maintained in a central location where relationships between equations can be easily seen and changes can be tracked. The rules must often be examined and adjusted in consideration of other rules. If weightings are adjusted, all weightings must be viewed together. If algorithm changes are made, the central list can be set up to track the evolution of the algorithms over time. Alternate algorithms can be proposed and shown alongside current versions. The algorithms should be reviewed periodically, both as part of a performance-measuring feedback loop and as an opportunity to tune the risk model for new information availability or changes in how information should be used.
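One way to keep the rules in a central, trackable location is a simple versioned registry of named scoring equations. The index name, weights, and variable names below are placeholders, not the book's algorithms.

```python
# Sketch: a central registry of scoring algorithms so relationships
# between equations are visible and prior versions remain alongside
# current ones. Weights and variable names are invented placeholders.

ALGORITHMS = {
    ("third_party", "v2"): lambda d: 0.4 * d["patrol"] + 0.6 * d["cover"],
    ("third_party", "v1"): lambda d: 0.5 * d["patrol"] + 0.5 * d["cover"],
}

CURRENT_VERSIONS = {"third_party": "v2"}  # which rule is currently adopted

def score(index_name, data):
    """Score using the currently adopted version of an algorithm."""
    version = CURRENT_VERSIONS[index_name]
    return ALGORITHMS[(index_name, version)](data)

if __name__ == "__main__":
    segment = {"patrol": 8.0, "cover": 6.0}
    print(score("third_party", segment))                # current rule
    print(ALGORITHMS[("third_party", "v1")](segment))   # prior rule, for comparison
```

Keeping superseded versions in the same registry is one way to support the periodic reviews and side-by-side comparisons of alternate algorithms mentioned above.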
Assigning defaults
In some cases, no information about a specific event at a specific point will be available. For example, it is not unusual to have no confirmatory evidence regarding depth of cover in many locations of an older pipeline. This can be seen as an information gap. Prior to calculating risk scores, it is necessary to fill as many information gaps as possible. Otherwise, the final scores will also have gaps that will impact decision making.

At every point along the pipeline, each event needs to have a condition assigned. If data are missing, risk calculations cannot be completed unless some value is provided for the missing data. Defaults are the values that are to be assigned in the absence of any other information. There are implications in the choice of default values, and an overall risk assessment default philosophy should be established.

Note that some variables cannot reasonably have a default assigned. An example is pipe diameter, for which any kind of default would be problematic. In these cases, the data will be absent and might lead to a non-scoring segment when risk scores are calculated.

It is useful to capture and maintain all assigned defaults in one list. Defaults might need to be periodically modified. A central repository of default information makes retrieval, comparison, and maintenance of default assignments easier.

Note that assignment of defaults might also be governed by rules. Conditional statements ("if X is true, then Y should be used") are especially useful:

If (land-use type) = "residential high" then (population density) =

Other special equations by which defaults will be assigned may also be desired. These might involve replacing a certain fixed value, converting the data type, special considerations for a date format, or other special assignments.
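A default-assignment philosophy of this kind can be captured as an ordered list of conditional rules, tried in order until one matches. The land-use codes and default values below are hypothetical, not recommendations.

```python
# Sketch: rule-based default assignment for information gaps.
# Rules are tried in order; the first matching rule supplies the
# default. Codes and values are hypothetical illustrations.

DEFAULT_RULES = [
    # (variable, condition on the record, default value)
    ("population_density", lambda r: r.get("land_use") == "residential high", 90),
    ("population_density", lambda r: r.get("land_use") == "rural", 5),
    ("depth_of_cover", lambda r: True, "worst case: 12 in"),
]

def fill_defaults(record):
    """Fill missing variables; items with no rule (e.g., diameter) stay absent."""
    out = dict(record)
    for var, condition, value in DEFAULT_RULES:
        if out.get(var) is None and condition(out):
            out[var] = value
    return out

if __name__ == "__main__":
    print(fill_defaults({"land_use": "residential high", "diameter": None}))
```

Note that, consistent with the text, a variable such as diameter is deliberately given no rule, so it remains empty and will surface later as a non-scoring segment rather than being silently guessed.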
VII. Quality assurance and quality control
Several opportunities arise to apply quality assurance and quality control (QA/QC) at key points in the risk assessment process. Prior to creating segments, the following checks can be made by using queries against the event data set (or in spreadsheets) as the data are collected:

Ensure that all IDs are included, to make sure that the entire pipeline is included and that no portion of the system(s) to be evaluated has been unintentionally omitted.
Ensure that only correct IDs are used, to find errors and typos in the ID field.
Ensure that all records are within the appropriate beginning and ending stations for the system ID, to find errors in stationing, sometimes created when converting from field-gathered data.
Ensure that the sum of all distances (end station - beg station) for each event does not exceed the total length of that ID; the sum might be less than the total length if some conditions are to be later added as default values.
Ensure that the end station of each record is exactly equal to the beginning station of the next record. This check can also be done during segmentation, since data gaps become apparent in that step; however, corrections will generally need to be made to the events tables, so the check might be appropriate here as well.
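Several of these checks can be expressed directly as queries against the events table. A plain-Python sketch follows; the ID names, extents, and record layout are illustrative assumptions.

```python
# Sketch: pre-segmentation QA/QC checks on an events table.
# Each record is (line_id, beg_station, end_station). The valid IDs
# and true extents below are invented for illustration.

VALID_IDS = {"LINE-A"}
TRUE_EXTENTS = {"LINE-A": (0, 5280)}   # true begin/end stations per ID

def qa_checks(records):
    findings = []
    # Only correct IDs are used (catches typos in the ID field).
    findings += [f"unknown ID: {r[0]}" for r in records if r[0] not in VALID_IDS]
    # Records fall within the true stationing extents of their ID.
    for lid, beg, end in records:
        lo, hi = TRUE_EXTENTS.get(lid, (None, None))
        if lo is not None and (beg < lo or end > hi):
            findings.append(f"{lid}: {beg}-{end} outside {lo}-{hi}")
    # End station of each record equals begin station of the next (no gaps).
    recs = sorted((r for r in records if r[0] in VALID_IDS), key=lambda r: r[1])
    for a, b in zip(recs, recs[1:]):
        if a[2] != b[1]:
            findings.append(f"gap/overlap at station {a[2]} -> {b[1]}")
    # Cumulative length per ID must not exceed the true line length.
    for lid, (lo, hi) in TRUE_EXTENTS.items():
        total = sum(e - b for i, b, e in records if i == lid)
        if total > hi - lo:
            findings.append(f"{lid}: cumulative length {total} exceeds {hi - lo}")
    return findings

if __name__ == "__main__":
    events = [("LINE-A", 0, 2000), ("LINE-A", 2100, 5280), ("LINE-X", 0, 100)]
    for f in qa_checks(events):
        print(f)
```

In a database environment these same checks would typically be a handful of SQL queries; the logic is identical.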
Several opportunities for QA/QC also arise after each section has been scored. The following checks can be made by using queries against the database of scores:

Find places where scores are not being calculated. This will usually be the result of an information gap in some event required by the algorithm. After the default assignment, there should not be any more gaps unless it is an event for which a default cannot logically be assigned (such as "diameter" or "product type"). Common causes of non-scoring segments include misnamed events or conditions, incorrect condition values, and missing default assignments.

Find places where score limits are being exceeded. This is usually a problem with the algorithm not functioning as intended, especially when more complex "if . . . then" conditional equations are used. Other common causes include date formats not working as intended and changes made to either an algorithm or a condition without corresponding changes made to the other.

Ensure that scores are calculating properly. This is often best done by setting up queries to show variables, intermediate calculations, and final scores, especially for the more complex scores. Scanning the results of these queries provides a good opportunity to find errors such as incorrect data formats (dates seem to cause issues in many calculations) or point assignments that are not working as intended.

These QA/QC opportunities and others are summarized below.
Common input data errors include:

1. Use of codes that are not exactly correct, i.e., "high" when "H" is required, or misspelled codes.
2. Wrong station numbers, i.e., a digit left off, such as entering 21997 when 219997 is correct.
3. Conflicting information, i.e., assigning different conditions to the same stretch of pipeline, sometimes caused by overlap of the beginning and ending stations of two entries.

Some QA/QC checks that are useful to perform include the following:

1. Ensure that all pipeline segment identifiers are included in the assessment.
2. Ensure that only listed IDs are included.
3. Find data sets whose cumulative lengths are too long or too short, compared to the true length of an ID.
4. Find individual records within a data set whose beginning station and/or ending station are outside the true beginning and ending points of the ID.
5. Ensure that all codes or conditions used in the data set are included in the codes or conditions list.
6. Ensure that the end station of each record is exactly equal to the beginning station of the next record when data are intended to be continuous.
7. Ensure that correct/consistent ID formats are being used.
Common errors associated with risk score calculations include:

1. Problems with dates. The algorithms are generally set up to accommodate either a day-month-year format, a month-year format, or a year-only format, but not more than one of these at a time. The algorithm can be made more accommodating (perhaps at the expense of more processing time) or the input data can be standardized.
2. Missing or incorrect codes. Non-scoring values (nulls) or errors are often generated when input data are missing or incorrect. These create gaps in the final scores.
3. Data gaps. As noted in item 2, these generally represent non-scoring values. Errors are easily traced to the initiating problem by following the calculation path backward. For example, using the algorithms detailed in Chapters 3 through 6, an error in IndexSum means there is an error somewhere in one of the four underlying index calculations (thdpty, corr, design, or incops). That error, in turn, can be traced to an error in some subvariable within that index.
4. Maximum or minimum values exceeded. Maximum and minimum queries or filters can be used to identify variables that are not calculating correctly.
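The backward trace described in item 3 can be sketched as a simple query over the calculation path: if IndexSum is null, report which of the four indices, and which subvariable within it, caused the gap. The subvariable names and values here are invented.

```python
# Sketch: tracing a non-scoring result backward through the calculation
# path. IndexSum is null if any underlying index is null; this check
# reports which index, and which subvariable, caused the gap.
# Subvariable names and values are invented for illustration.

def trace_nulls(segment):
    """Return (index, [null subvariables]) pairs behind a missing IndexSum."""
    problems = []
    for index in ("thdpty", "corr", "design", "incops"):
        subvars = segment.get(index, {})
        nulls = [name for name, value in subvars.items() if value is None]
        if nulls:
            problems.append((index, nulls))
    return problems

if __name__ == "__main__":
    segment = {
        "thdpty": {"patrol": 7, "cover": None},   # missing depth-of-cover score
        "corr":   {"coating": 8, "cp": 9},
        "design": {"safety_factor": 6},
        "incops": {"training": 8},
    }
    for index, nulls in trace_nulls(segment):
        print(f"index '{index}' cannot score; null subvariables: {nulls}")
```

Run against every non-scoring segment, a query like this turns a mysterious gap in the final scores into a specific missing input.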
VIII. Computer environments
The computer is obviously an indispensable tool in a data-intensive process such as pipeline risk management. Because a great deal of information can be gathered for each pipeline section evaluated, it does not take many evaluations before the total amount of data becomes significant. The computer is the most logical way to store and, more importantly, organize and retrieve the data. The potential for errors in number handling is reduced if the computer performs repetitive actions such as the calculations to arrive at risk values.

Many different software environments could be used to handle the initial data input and calculations. As the database grows, the need for programs or routines that can quickly and easily (from the user standpoint) search a database and display the results of the search becomes more important. More sophisticated risk assessment models will require more robust software applications. A model that requires spatial analyses of information, perhaps to determine spill migration or hazard zone perimeters, requires special software capabilities. Additional desired capabilities might include automatic segmentation, assignment of zones of influence, or calculation of intermediate pressures based on source strength and location.
Use computers wisely
An interesting nuance to computer usage is that too much reliance on computers is potentially more dangerous than too little. Too much reliance can degrade knowledge and cause insight to be obscured and even convoluted: the acceptance of "black box" results with little application of engineering judgment. Underutilization of computers might merely result in inefficiencies, an undesirable but not critical outcome. Regardless of potential misuse, however, computers can obviously greatly increase the strength of the risk assessment process, and no modern risk management process is complete without extensive use of them. The modern software environment is such that information is usually easily moved between various programs.

In the early stages of a project, the computer should serve chiefly as a data repository. Then, in subsequent stages, it should house the algorithms: how the raw information, such as wall thickness, population density, soil type, etc., is turned into risk information. In later stages of the project, data analysis and display routines should be available. Finally, computer routines to ensure ease and consistency of data entry, model tweaking, and generation of required output should be provided.
Software use in risk modeling should always follow program development, not lead it. Software should be viewed as a tool, and different tools are appropriate for different phases of the project:

Early stage. Use pencil and paper or simple graphics software to sketch preliminary designs of the risk assessment system. Also use project management tools, if desired, to plan the project.

Intermediate stages. Use software environments that can store, sort, and filter moderate amounts of data and generate new values from arithmetic and logical (if . . . then . . . else) combinations of input data. The simplest choices include modern spreadsheets and desktop databases.

Later stages. Provide for larger quantity data entry, manipulation, query, display, etc., in a long-term, secure, and user-friendly environment. If spatial linking of information is desired, consider migrating to a GIS platform. If multiuser access is desired, consider robust database environments. At this stage, specialized software acquisition or development may be beneficial.
A decision matrix can be set up to help evaluate software options. An example (loosely based on an actual project evaluation) is shown in Table 8.2.

The costs shown in Table 8.2 will not be relevant for many applications; they are for illustration only. Many variables will impact the costs of any alternative. These options should be better defined and fully developed with software developers, programmers, IT resources, GIS providers, and other available resources.
Table 8.2  Decision matrix of software options

Option 1: Use spreadsheet tools only.
Advantages: Inexpensive; completely flexible and user customizable; data easily (but manually) moved to other applications; in-house maintenance is possible.
Disadvantages: Requires a knowledgeable user; relatively fragile environment; some maintenance required; lacks some features of a modern database environment.

Option 2: Modify/upgrade spreadsheet tools to increase security and user friendliness.
Advantages: As above, plus makes information and capabilities accessible to more users; automates some data handling.
Disadvantages: Some costs; performance limitations in using spreadsheets.
Estimated costs: 20-80 person-hours.

Option 3: Migrate model to custom desktop database program with user-friendly front end, linked to GIS environment.
Advantages: Increased performance; more robust data handling capabilities; more secure and user friendly; network and GIS compatible.
Disadvantages: More costly; might have to rely on outside programming expertise for changes.
Estimated costs: 100-200 person-hours for stand-alone program, plus some hours to build links to GIS (~$10,000-$20,000).

Option 4: Purchase commercial software, option A.
Advantages: Existing customer base; vendor support of product; strong graphics modules included; uses common database engine.
Disadvantages: Costly; reduced flexibility and capabilities; some data conversion needed; limited data analysis capabilities; outside programming support needed for modifications.
Estimated costs: ~$50,000 per user plus maintenance fees, plus 80-200 hours of data conversion and entry effort.

Option 5: Purchase commercial software, option B.
Advantages: Inexpensive; directly compatible with existing data; secure and user friendly; strong data analysis routines; uses common Microsoft Access database engine.
Disadvantages: Reduced flexibility; outside programming support needed for modifications.
Estimated costs: <$10,000 per user plus some data formatting and entry effort.

Option 6: GIS module programmed directly into GIS software.
Advantages: Seamless integration with GIS environment is possible.
Disadvantages: Possible high costs; possibly less common software; outside programming support may be needed for modifications.

Option 7: Modify/upgrade commercial software to link directly to existing spreadsheet tools.
Advantages: Keeps flexible spreadsheet environment; adds power of the existing application and desktop database environment.
Disadvantages: Custom application; outside programming support needed for modifications.
Estimated costs: $2,000 plus 50-100 hours (~$5,000-$10,000); costs assume (and do not include original cost of) an existing software program.
Applications of risk management
A critical part of risk assessment is, of course, its role in risk management. Some potential user applications are discussed in the following subsections.

Application 1: risk awareness

This is most likely the driving force behind performing risk evaluations on a pipeline system. Owners and/or operators want to know how portions of their systems compare from a risk standpoint. This comparison is perhaps best presented in the form of a rank-ordered list. The rating or ranking list should include some sort of reference point, a baseline or standard to be used for comparisons. The reference point, or standard, gives a sense of scale to the rank ordering of the company's pipeline sections.

The standards may be based on:

Governing regulations, either from local government agencies or from company policies. Here, the standard is the risk score of a hypothetical pipeline in some common environment that exactly meets the minimum requirements of the regulations.
A pipeline or sections that are intuitively thought to be safer than the other sections.
A fictitious pipeline section: perhaps a low-pressure nitrogen or water pipeline in an uninhabited area for a low-risk score, or a high-pressure hydrogen cyanide (very flammable and toxic) pipeline through a large metropolitan area for a high-risk score.

By including a standard, the user sees not only a rank-ordered list of his facilities, but also how the whole list compares to a reference point that he can understand.
Ideally, the software program to support Application 1 will
run something like this:
Data are input for the standard and for each section evaluated. The computer program calculates numerical values for
each index, the leak impact factor (product hazards and spill
scores), and the final risk rating for every section. Any of these
calculations may later be required for detailed comparisons to
standards or to other sections evaluated. Consequently, all data
and intermediate calculations must be preserved and available
to search routines. The program will likely be called on to produce displays of pipeline sections in rank order. Sections may
be grouped by product handled, by geographic area, by index,
by risk rating, etc.
Examples of risk data analyses. There are countless ways in which the risk picture may need to be presented. Four examples of common applications are:

1. Pipeline company management wants to see the 20 pipeline sections that present the most risk to the community. A list is generated, ranking all sections by their final relative risk number. A bar chart provides a graphic display of the 20 sections and their relative magnitude to each other.
2. Pipeline company management wants to see the 20 highest risk pipeline sections in natural gas service in the state of Oklahoma. A rank-ordered list for natural gas lines in Oklahoma is generated.
3. The corrosion control department wants to see a rank ordering of all sections, ranked by corrosion index, lowest to highest. All pipeline sections are ranked strictly by corrosion index score.
4. A pipeline company wants to compare risks for LPG pipelines in Region 1 with crude oil pipelines in Region 2. Distributions of pertinent risk scores are generated. From the distributions, analysts see the relative average risks, the variability in risks, and the relative highest risks between the two pipeline types.
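The rank-ordered displays in the examples above reduce to a sort with a reference standard embedded in the list. A minimal sketch follows; the section names and scores are invented, and the convention that a higher score means higher risk is assumed only for illustration (some models use the opposite convention).

```python
# Sketch: rank-order pipeline sections by relative risk score, with a
# reference "standard" embedded so the user can see where sections
# fall against it. Names and scores are invented; higher score is
# taken to mean higher risk here, purely for illustration.

def rank_with_standard(sections, standard_name, standard_score, top_n=None):
    rows = list(sections.items()) + [(standard_name, standard_score)]
    rows.sort(key=lambda kv: kv[1], reverse=True)   # highest risk first
    return rows[:top_n] if top_n else rows

if __name__ == "__main__":
    scores = {"Seg 12": 78, "Seg 3": 64, "Seg 7": 91, "Seg 22": 55}
    for name, score in rank_with_standard(scores, "STANDARD (regulatory minimum)", 70):
        print(f"{score:>4}  {name}")
```

Filtering the input dictionary first (by product, geographic area, or index) yields the grouped rankings described for Application 1.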
Application 2: compliance

Another anticipated application of this program is a comparison to determine compliance with local regulations or with company policy. In this case, a standard is developed based on the company's interpretation of government regulations and on the company policy for the operation of pipelines (if that differs from regulatory requirements). The computer program will most likely be called on to search the database for instances of noncompliance with the standard(s).

To highlight these instances of noncompliance, the program must be able to make correct comparisons between standards and sections evaluated. Liquid lines must be compared with liquid regulations; Texas pipelines must be compared with Texas regulations, etc.

If the governing policies are performance based (". . . corrosion must be prevented . . .," ". . . all design loadings anticipated and allowed for . . .," etc.), the standard may change with differing pipeline environments. It is a useful technique to predefine the pipeline company's interpretations of regulatory requirements and company policy. These definitions will be the prevention items in the risk evaluation. They can be used to have the computer program automatically create standards for each section based on that specific section's characteristics.

Using the distinction between attributes and preventions, a floating standard can be developed. In the floating standard, the standard changes with changing attributes. The program is designed so that a pipeline section's attributes are identified and then preventions are assigned to those attributes based on company policies. The computer can thus generate standards based on the attributes of the section and the level of preventions required according to company interpretations. The standard changes, or floats, with changes in attributes or company policy.
Example 8.1: Compliance
A company has decided that an appropriate level of public education is mailouts, advertisements, and speaking engagements for urban areas, and mailouts with annual landowner/tenant visits for rural areas. With this definition, the computer program can assign a different level of preventions for the urban areas compared with the rural areas. The program generates these standards by simply identifying the population density value and assigning the points accordingly.
By having the appropriate level of preventions pre-assigned
into the computer, consistency is ensured. When policy is
changed, the standards can be easily updated. All comparisons
between actual pipeline sections and standards will be instantly
updated and, hence, based on the most current company policy.
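The floating standard of Example 8.1 can be sketched as a rule that maps a section's attributes to the required prevention set, which is then compared with the section's actual preventions. The population-density threshold and point structure below are invented; only the urban/rural prevention lists come from the example.

```python
# Sketch of a "floating standard" (Example 8.1): required preventions
# are generated from each section's attributes per company policy,
# then compared to actual preventions. The density threshold is an
# invented placeholder; the prevention lists follow the example.

def required_public_education(population_density):
    """Company policy from Example 8.1 expressed as a rule (threshold invented)."""
    if population_density >= 50:   # treated as urban, for illustration
        return {"mailouts", "advertisements", "speaking engagements"}
    return {"mailouts", "annual landowner/tenant visits"}  # rural

def compliance_gaps(section):
    """Return the preventions required by the floating standard but not in place."""
    required = required_public_education(section["population_density"])
    return sorted(required - section["preventions"])

if __name__ == "__main__":
    section = {"population_density": 80,
               "preventions": {"mailouts", "advertisements"}}
    print(compliance_gaps(section))
```

Because the standard is generated from the attributes rather than stored per section, a policy change updates every comparison at once, which is the consistency benefit noted above.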
It is reasonable to assume that whenever an instance of noncompliance is found, a detailed explanation will be required. The program can be designed to retrieve the whole record and highlight the specific item(s) that caused the noncompliance.

As policies and regulations change, it will be necessary to change the standards. Routines that allow easy changes will be needed.
Application 3: what-if trials

A useful feature in the computer program will undoubtedly be the ability to perform "what-if" trials. Here, the user can change items within each index to see the effect on the risk picture. For example, if air patrol frequency is increased, how much risk reduction is obtained? What if an internal inspection device is run in this section? If we change our public education program to include door-to-door visits, how does that influence the risk of third-party damage?

It will be important to preserve the original data during the what-if trial. The trial will most likely need to be done outside the current database. A secondary database of proposed actions and the resulting risk ratings could be built and saved using the what-if trials. This second database might be seen as a target or goal database, and it could be used for planning purposes.

The program should allow specific records to be retrieved as well as general groups of records. The whole record or group of records will need to be easily modified while preserving the original data. Comparisons or before-and-after studies will probably be desirable. Graphic displays will enhance these presentations.

Application 4: spending prioritization

As an offshoot to the ranking list for relative risk assessment, it will most likely be desirable to create rank-order lists for prioritizing spending on pipeline maintenance and upgrades. The list of lowest scored sections from a corrosion risk standpoint should receive the largest share of the corrosion control budget, for instance. The spending priority lists will most likely be driven by the rank-ordered relative risk lists, but there may be a need for some flexibility. Spending priority lists for only natural gas pipelines may be needed, for example. The program could allow for the rearrangement of records to facilitate this.

A special column, or field in the database, may be added to tabulate the projected and actual costs associated with each upgrade. Costs associated with a certain level of maintenance (prevention) activities could also be placed into this field. This will help establish the values of certain activities to further assist in decision making.

The user may want to analyze spending for projects on specific pipeline sections. Alternatively, she may wish to perform cost/benefit analyses on the effects of certain programs across the whole pipeline system. For instance, if the third-party damage index is to be improved, the user may study the effects of increasing the patrol frequency across the whole system. The costs of the increased patrol could be weighed against the aggregate risk reduction, perhaps expressed as a percentage reduction in the sum or the average of all risk values, but better evaluated using the cumulative risk techniques discussed in Chapter 15. This could then be judged against the effects of spending the same amount of money on, say, close interval surveys or new operator training programs.

The cost/benefit analyses will not initially produce absolute values because this risk assessment program yields only relative answers. For a given pipeline system, relative answers are usually adequate. The program should help the user decide where his dollar spent has the greatest impact on risk reduction. Where absolute levels of spending are to be calculated, techniques described in Chapters 14 and 15 will be needed.

Application 5: detailed comparisons

In some of the above applications, and as a stand-alone application, comparisons among records will almost always be requested. A user may wish to make a detailed comparison between a standard and a specific record. She may wish to see all risk variables that exceed the standard or all variables that are less than their corresponding standard value.

Groups of records may also need to be compared. For example, the threat of damaging land movements for all Texas pipelines could be compared with that for all Louisiana pipelines, or the internal corrosion potential of natural gas pipelines could be compared with that of crude oil pipelines. Graphics would enhance the presentation of the comparisons.
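The before-and-after comparison at the heart of a what-if trial (Application 3) can be sketched as follows: rescore a copy of the data with one input changed, leaving the original preserved, and report the score change per segment. The scoring rule and values are invented for illustration.

```python
# Sketch of a what-if trial: copy the data (preserving the original),
# change one input, rescore, and diff against the baseline.
# The scoring rule, variable names, and values are invented.
import copy

def third_party_score(seg):
    """Invented placeholder scoring rule, not a published algorithm."""
    return 0.5 * seg["patrol_freq"] + 0.5 * seg["public_education"]

def what_if(segments, variable, new_value):
    trial = copy.deepcopy(segments)          # original data preserved
    for seg in trial.values():
        seg[variable] = new_value
    return {name: third_party_score(trial[name]) - third_party_score(segments[name])
            for name in segments}

if __name__ == "__main__":
    segments = {"Seg 3": {"patrol_freq": 4, "public_education": 6}}
    print(what_if(segments, "patrol_freq", 8))   # score change per segment
```

Saving the modified copy alongside its deltas gives the secondary "target or goal" database described above, while the baseline remains untouched for audit.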
Additional applications
Embedded or implied in some of the above applications are the following tasks, which may also need to be supported by risk management software:

Due diligence: investigation and analysis of assets that might be acquired.
Project approvals: as part of a regulatory process or a company-internal review, an examination of the levels of risk related to a proposed project and a judgment of the acceptability of those risks.
Alternative route analyses: a comparison, on a risk basis, of alternative routes for a proposed pipeline.
Budget setting: a determination of the value and optimum timing of a potential project or group of projects from a risk perspective.
Risk communications: presenting risk results to a number of different audiences with different interests and levels of technical ability.
Properties of the software program
The risk assessment processes are often very dynamic. They
must continuously respond to changing information if they are
to play a significant role in all planning and decision making.
The degree of use of this risk assessment is often directly
related to the user friendliness and robustness of the software
that supports it.
Properties of a complete risk assessment model are discussed
in Chapter 2 along with some simple tests that can be used as
measures of completeness and utility. Those same tests can be
used to assess the environment that houses the risk model.
If suitable off-the-shelf software is not available, custom software development is often an attractive alternative. Software
design is a complex process and many reference books discussing the issues are available. It is beyond the scope of this
book to delve into the design process itself, but the following
paragraphs offer some ideas to the designer or to the potential
purchaser of s o h a r e .
Before risk data are collected or compiled, a computer programmer could be participating in the creation of design specifications. He must be given a good understanding of how the
program is going to be used and by whom-software should
always be designed with the user in mind. Programs often get
used in ways slightly different from the original intentions. The
most powerful software has successfully anticipated the user’s
needs, even if the user himself has not anticipated every need.
Data input and the associated calculations are usually straightforward. Database searches, comparisons, and displays are
highly use-specific. The design process will benefit from an
investment in planning and anticipating user needs.
A complete risk management software package will be
called on to support several general functions. These functions
can be identified and supported in various ways. The following
is an example of one possible grouping of functionality:
1. Risk algorithms
2. Preparation ofpipeline data
3. Risk calculations
a. Decide on and apply a segmenting scheme
b. Run the risk assessment model against the data to calculate the risks for each segment
4. Risk management.
Risk management is supported by generating segment rank-order lists and running "what-if" scenarios to generate work plans.
Many specific capabilities and characteristics of the best
software environment can be listed. A restricted vocabulary
will normally be useful to control data input. Error-checking
routines at various points in the process will probably be desirable. There will most likely be several ways in which the data
will have to be sorted and displayed-reporting and charting
capabilities will probably be desired. This is again dependent
on the intended use.
Data entry and extraction should be simple-required keystrokes should be minimized, use of menus and selection tools
optimized and the need for redundant operations eliminated.
Some other important software capabilities are discussed below.
Dynamic environment
Because the risk assessment tool is designed to be dynamic, changing with changing conditions and new information, the
software program must easily facilitate these changes. New
regulations may require corresponding changes to the model.
Maintenance and upgrade activities will be continuously generating new data. Changes in operating philosophies or the use
of new techniques will affect risk variables. New pipeline construction will require that new records be built. Increases in
population densities and other receptors will affect consequence potential. The relative weighting of index variables
might also be subject to change after regular reviews.
The ability to quickly and easily make changes will be a critical characteristic of the tool. As soon as updates are no longer
being made, the tool loses its usefulness. For instance, suppose
new data are received concerning the condition of coating for a
section of a pipeline. The user should be able to input the data in
one place and easily mark all records that are to be adjusted
with the new information. With only one or two keystrokes, the
marked records should be updated and recalculated. The date
and author of the change should be noted somewhere in the
database for documentation purposes.
As noted in Chapter 2 and also in this chapter, segmentation
strategies and issues can become challenging and software
assistance is often essential. Issues include initial segmentation
options, persistence of segments, calculating cumulative
(length-sensitive) risks and risk values, and tracking risk performance in a dynamic segmentation environment.
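One of the length-sensitive calculations mentioned above can be sketched in a few lines. The example below computes a length-weighted average risk score across segments; the station values, scores, and function name are invented for illustration and are not part of the model described in this book.

```python
# Hypothetical sketch of a length-weighted (cumulative) risk calculation
# across dynamic segments. Station values and scores are invented.

def cumulative_risk(segments):
    """segments: list of (start_station, end_station, risk_score) tuples.
    Returns the length-weighted average risk score over all segments."""
    total_length = sum(end - start for start, end, _ in segments)
    weighted_sum = sum((end - start) * score for start, end, score in segments)
    return weighted_sum / total_length

# Three segments of unequal length along a 10-mile line
segments = [(0.0, 2.5, 120.0), (2.5, 3.0, 60.0), (3.0, 10.0, 95.0)]
print(cumulative_risk(segments))  # 99.5
```

Weighting by segment length prevents a short, high-risk segment from dominating a simple (unweighted) average of segment scores.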
In most applications, it will be necessary to find specific
records or groups of records. Most database routines make this
easy. Normally the user specifies the characteristics of the
record or records she is seeking. These characteristics are the
search parameters the computer will use to find the record(s) of
interest. User choices are made within fields or categories of
the data. For instance, some fields that will be frequently used
in database searches include:
Geographical area
Leak impact factor
Index values.
When the user performs searches, he chooses specifics
within each field. It is important to show what the possible
choices are in each non-numeric field. The choices must
usually be exact matches with the database entries. Menus and
other selection tools are useful here.
The user may also wish to do specific searches for a single
item within an index such as
Find (depth of cover scores) > 1.6
Find (public education programs) = 15 pts
It is useful if the user can specify ranges when she is searching for numerical values, for example, (hydrotest scores) from 5
to 15 points or (hydrotest scores) < 20 pts.
Searches can become complex:
Find all records where (product) = "natural gas" AND (location) =
"South Texas" AND [(pipe diameter) > "4 in." AND < "12 in." OR
(construct date) < 1970] AND (corrosion index) < 50.
Ideally, the user would be able to perform searches by
defining search parameters in general fields, but still have the
option of defining specific items. It would be cumbersome to
prompt the user to specify search criteria in every field prior
to a search. He should be able to quickly bypass fields in which
he is not interested. An acceptable solution would be to have
more than one level of fields. An upper, general level would
prompt the user to choose one or more search parameters,
perhaps from the example list above. He may also then choose
the next level of fields if he wishes to specify more detailed search criteria.
Users may want the program to be designed so that it can automatically track certain items. Overall changes in the risk picture, changes in indexes, or changes in the scoring of specific
items may be of interest. Tracking of risk results over time
shows deterioration or progress toward goals. Following and
quantifying risk changes over time has special challenges in
a dynamic segmentation environment. This is discussed in
Chapters 2 and 15.
Pictures reveal things about the data that may otherwise go
unnoticed. Bar graphs, histograms, pie charts, correlations, and
run charts illustrate and compare the data in different ways.
Routines should be built to automatically produce these pictures. Graphics routines can also put information in geographically referenced format such as a map overlay showing risk
values or hazard zones in relation to streets, water bodies, populated areas, etc.
Graphics are very powerful tools-they can and should be
used for things like data analysis (trends, histograms, frequency
distributions, etc.) and for presentations. A distinction should
be made between analytical graphics and presentation graphics. The former denotes a primary risk management tool, while
the latter denotes a communication tool.
Presentation graphics can and should be very impressive, incorporating map overlays, color-coded risk values, spill dispersion plumes spreading across the topography, colorful
charts showing risk variables along the ROW, etc. These are
effective communication tools but not normally effective
analysis or management tools. It is usually impossible to manage risks from presentation graphics. A pipeline is a long, linear
facility that cannot be shown with any resolution on a single
picture. To manage risks, the user must be able to efficiently
sort, filter, query, correlate, prioritize, and drill into the often
enormous amount of data and risk results. That cannot realistically be done in a presentation environment where information
is either very high level or spread across many drawing pages or
many screen views.
In simplistic terms, capabilities that involve charting and
comparing data and results will be analysis tools. Capabilities
that involve maps and alignment sheet style drawings will be
presentation tools. Note that presentation tools often enhance
the ability to investigate, research, and validate information.
This is part of their role as communications tools. The analyses
tools will normally be used first in risk management. They will
identify areas of special interest. Their use will lead to the subsequent use of the presentation tools to better assess or communicate the specific areas of interest identified.
In evaluating or designing graphics capabilities in a software
environment, the relative value of each type of graphics tool
should be established. The inexperienced risk manager will be
very attracted to presentation graphics and will be tempted to
direct a lot of resources toward those. When this is done at the
expense of analytical tools, the risk effort suffers.
Search capabilities (as previously described) facilitate comparisons by grouping records that support meaningful analysis.
For example, when investigating internal corrosion, it is probably useful to examine records with similar pipeline products. In
examining consequence potential, it might be useful to group
records with similar receptor types.
Comparisons between groups of records may require the
program to calculate averages, sums, or standard deviations for
records obtained by searches. Detailed comparisons, such as side-by-side comparison of each risk variable or even all underlying
data, might also be needed.
The program should be able to display two or more records or
groups of records for direct comparison purposes. The program
may be designed to highlight differences between records of
certain magnitudes, for instance, highlight a risk variable
when it differs by more than 10% from some corresponding
“standard” value.
Records being compared will need to be accessible to the
graphics routines, since the graph is often the most powerful
method of illustrating the comparisons. A distribution of risk
scores tells more about the nature of the risk of those pipeline
segments than any one or even two statistics. Correlations, both
graphic and quantitative, will be useful.
Accessibility and protection
The risk model and/or its results may need to be accessed
by multiple users in different locations. Network or Internet
deployment options are often a part of risk management
software design.
The database should be protected from tampering. Access to
the data can generally be given to all potential users, while
withholding change privileges. Because all users will normally
be encouraged to understand and use the program, they must be
allowed to manipulate data, but this should probably be done
exclusive of the main database. An individual or department
can be responsible for the main database. Changes to this main
database should only be made by authorized personnel, perhaps
through some type of formal change-order system.
Modern software has many protection features available,
requiring certain authorization privileges before certain operations can be completed.
The ability to generate the general statistics discussed on pages
189-192 should be a part of the software features. Note that
most of risk management decision making will be supported by
data analysis-usually involving statistical tools-rather than
by graphical tools.
If a commercial risk model is purchased, it is imperative that the
full explanation of the risk model be obtained. Consistent with
all engineering practice, the user will be responsible for the
results of the risk assessment and should understand and agree
with all underlying assumptions, calculations, and protocols.
This book may provide some of the background documentation necessary for a software program that incorporates a model
similar to the one described here. It contains explanations as to
why and how certain variables are given more points than others and why certain variables are considered at all. Where the
book may provide the rationale behind the risk assessment, the
software documentation must additionally note the workings of
all routines, the structure of the data, and all aspects of the program. A data dictionary is normally included in the software documentation.
IX. Data analysis
An earlier chapter made a connection between the quality
process (total quality management, continuous improvement,
etc.) and risk management. In striving to truly understand work
processes, measurement becomes increasingly important.
Once measurement is done, analysis of the resulting data is the
next step. Here again, the connection between quality and risk
is useful. Quality processes provide guidance on data analysis.
This section presents some straightforward techniques to assist
in interpreting and responding to the information that is contained in the risk assessment data.
In using any risk assessment technique, we must recognize
that knowledge is incomplete. This was addressed in Chapter 1
in a discussion of rare occurrence events and predictions of
future events using historical data. Risk weightings, interactions, consequences, and scores are by necessity based on
assumptions. Ideally, the assumptions are supported by
sound engineering judgment and hundreds of person-years of
pipeline experience. Yet in the final analysis, high levels of
uncertainty will be present. Uncertainty is present to some
degree in any measurement. Chapter 1 provides some guidance
in minimizing the measurement inconsistencies. Recognizing
and compensating for the uncertainty is critical in proper data analysis.
The data set to be analyzed will normally represent only a
small sample of the whole “population” of data in which we are
really interested. If we think of the population of data as all risk
scores, past, present, and future, then the data sample to be
analyzed can be seen as a “snapshot.” This snapshot is to be
used to predict future occurrences and make resource allocation decisions accordingly.
The objective of data analyses is to obtain and communicate
information about the risk of a given pipeline. A certain disservice is done when a single risk score is offered as the answer.
A risk score is meaningful only in relation to other risk scores or
to some correlated absolute risk value. Even if scores are
closely correlated to historical accident data, the number only
represents one possibility in the context of all other numbers
representing slightly different conditions. This necessitates the
use of multiple values to really understand the risk picture.
The application of some simple graphical and statistical
techniques changes columns and rows of numbers into trends,
central tendencies, and action/decision points. More information is extracted from numbers by proper data analysis, and the
common mistake of “imagining information when none exists”
is avoided. Although very sophisticated analysis techniques are
certainly available, the reader should consider the costs of such
techniques, their applicability to this type of data, and the incremental benefit (if any) from their use. As with all aspects of risk
management, the benefits of the data analysis must outweigh
the costs of the analysis.
When presented with almost any set of numbers, the logical
first step is to make a “picture” of the numbers. It is sometimes
wise to do this even before summary statistics (average, standard deviation, etc.) are calculated. A single statistic, such as
the average, is rarely enough to draw meaningful conclusions
about a data set. At a minimum, a calculated measure of central
tendency and a measure of variation are both required. On the
other hand, a chart or graph can at a glance give the viewer a feel
for how the numbers are “behaving.” The use of graphs and
charts to better understand data sets is discussed in a following section.
To facilitate the discussion of graphs and statistics, a few
simple statistical measures will be reviewed. To help analyze
the data, two types of measurements will be of most use: measures of central tendency and measures of variation.
Measure of central tendency
This class of measurements tells us where the "center of the
data" lies. The two most common measures are the average (or
arithmetic mean, or simply mean) and the median. These are
often confused. The average is the sum of all the values divided
by the number of values in the data set. The mean is often used
interchangeably with the average, but is better reserved for use
when the entire population is being modeled. That is, the average is a calculated value from an actual data set while the mean
is the average for the entire population of data. Because we will
rarely have perfect knowledge of a population, the population
mean is usually estimated from the average of the sample data.
There is a useful rule of thumb regarding the average and a
histogram (histograms are discussed in a following section):
The average will always be the balance point of a histogram.
That is, if the x axis were a board and the frequency bars were
stacks of bricks on the board, the point at which the board would
balance horizontally is the average. The application of this relationship is discussed later.
The second common measure of central tendency, the
median, is often used in data such as test scores, house prices,
and salaries. The median yields important information, especially when used with the average. The median is the point at
which there are just as many values above as below. Unlike the
average, the median is insensitive to extreme values, either
very high or very low numbers. The average of a data set can be
dramatically affected by one or two values being very high or
very low. The median will not be affected.
A third, less commonly used measure of central tendency is
the mode. The mode is simply the most frequently occurring
value. From a practical viewpoint, the mode is often the best
predictor of the value that may occur next.
An important concept for beginners to remember is that
these three values are not necessarily the same. In a normal or
bell-shaped distribution, possibly the most commonly seen distribution, they are all the same, but this is not the case for other
common distributions. If all three are known, then the data set is
already more interpretable than if only one or two are known.
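The distinction among the three measures is easy to demonstrate in code. The sketch below uses Python's standard statistics module on an invented set of scores; note how one extreme value pulls the average away from the median.

```python
# Illustration of average, median, and mode on an invented score set.
import statistics

scores = [60, 70, 70, 80, 90, 150]  # one extreme value at 150

avg = statistics.mean(scores)    # pulled upward by the extreme value
med = statistics.median(scores)  # insensitive to the extreme value
mod = statistics.mode(scores)    # the most frequently occurring value

print(round(avg, 1), med, mod)  # 86.7 75.0 70
```

Removing the 150 would move the average close to the median, while the median and mode would barely change.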
Measures of variation
Also called measures of dispersion, this class of measurements
tells us how the data organize themselves in relation to a central
point. Do they tend to clump together near a point of central
tendency? Or do they spread uniformly in either direction from
the central point?
The simplest method to define variation is with a calculation
of the range. The range is the difference between the largest and
smallest values of the data set. Used extensively in the 1920s
(calculations being done by hand) as an easy approximation for
variation, the range is still widely used in creating statistical
control charts.
Another common measure is the standard deviation. This is
a property of the data set that indicates, on average, how far
away each data value is from the average of the data. Some subtleties are involved in standard deviation calculations, and
some confusion is seen in the applications of formulas to calculate standard deviations for data samples or estimate standard
deviations for data populations. For the purposes of this text, it
is important for the reader merely to understand the underlying
concept of standard deviation. Study Figure 8.1 in which each
dot represents a data value and the solid horizontal line represents the average of all of the data values. If the distances from
each dot to the average line are measured, and these distances
are then averaged, the result is the standard deviation: the average distance of the data points from the average (centerline) of
the data set. Therefore, a standard deviation of 2.8 means that,
on average, the data falls 2.8 units away from the average line.
A higher standard deviation means that the data are more scattered, farther away from the center (average) line. A lower standard deviation would be indicated by data values "hugging" the
center (average) line.
The standard deviation is considered to be a more robust
measure of dispersion than the range. This is because, in the
range calculation, only two data points are used: the high and
the low. No indication is given as to what is happening to the
other points (although we know that they lie between the high
and the low). The standard deviation, on the other hand, uses
information from every data point in measuring the amount of
variation in the data.
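The point is easy to demonstrate: the two invented data sets below share the same range, yet their standard deviations differ because the standard deviation uses every data point. (The code uses the conventional population standard deviation formula rather than the simplified "average distance" description above.)

```python
# Two invented data sets with identical ranges but different standard
# deviations; the range looks only at the high and low points.
import statistics

a = [10, 50, 50, 50, 50, 90]  # clustered at the center, two extremes
b = [10, 30, 45, 55, 70, 90]  # spread more evenly

print(max(a) - min(a), max(b) - min(b))  # 80 80 -- same range
print(round(statistics.pstdev(a), 1))    # 23.1
print(round(statistics.pstdev(b), 1))    # 26.0 -- more dispersion overall
```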
With calculated values indicating central tendency and variation, the data set is much more interpretable. These still do not,
however, paint a complete picture of the data. For example, data
symmetry is not considered. One can envision data sets with
identical measures of central tendency and variation, but quite
different shapes. While calculations for shape parameters such
as skewness and kurtosis can be performed to better define
aspects of the data set's shape, there is really no substitute for a
picture of the data.

Figure 8.1 Concept of standard deviation.
Graphs and charts
This section will highlight some common types of graphs and
charts that help extract information from data sets. Experience
will show what manner of picture is ultimately the most useful
for a particular data set, but a good place to start is almost
always the histogram.
In the absence of other indications, the recommendation is to
first create a histogram of the data. A histogram is a graph of the
number of times certain values appear. It is often used as a surrogate for a frequency distribution. A histogram uses data intervals (called bins), usually on the horizontal x axis, and the
number of data occurrences, usually on the vertical y axis (see
Figure 8.2). By such an arrangement, the histogram shows the
quantity of data contained in each bin. The supposition is that
future data will distribute itself in similar patterns.
The histogram provides insight into the shape of the frequency distribution. The frequency distribution is the idealized
histogram of the entire population of data, where number of
occurrences is replaced by frequency of occurrence,
usually on the vertical axis. The frequency versus value relationship is shown as a single line, rather than bars. This represents the distribution of the entire population of data.
The most common shape of frequency distributions is the
normal or bell curve distribution (Figure 8.3). Many, many naturally occurring data sets form a normal distribution. If a graph
is made of the weights of apples harvested from an orchard, the
weights would be normally distributed. A graph of the heights
of the apple trees would show a bell curve. Test scores or
measures of human intelligence are usually normally distributed, as are vehicle speeds along an interstate, measurements of physical properties (temperature, weight, etc.), and so
on. Much of the pipeline risk assessment data should be normally distributed. When a data set appears to be normally distributed, several things can be immediately and fairly reliably
assumed about the data:
The data are symmetrical. There should always be about the
same number of values above the average point as below that
point. The average equals the median.
The average point is equal to both the median and the mode.
This means that the average represents a value that should
occur more often than any other value. Values closer to the
average occur more frequently; those farther away, less frequently.
Approximately 68% of the data will fall within one standard
deviation either side of the average.
Approximately 99.7% of the data will fall within three standard deviations either side of the average.
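These percentages can be checked empirically. The sketch below draws simulated values from a normal distribution (the mean of 100, standard deviation of 15, and sample size are arbitrary choices) and counts how many fall within one and three standard deviations of the average.

```python
# Empirical check of the 68% / 99.7% figures using simulated normal data.
import random
import statistics

random.seed(42)  # seeded so the run is repeatable
data = [random.gauss(100, 15) for _ in range(10000)]

avg = statistics.mean(data)
sd = statistics.pstdev(data)

within_1sd = sum(abs(x - avg) <= sd for x in data) / len(data)
within_3sd = sum(abs(x - avg) <= 3 * sd for x in data) / len(data)
print(round(within_1sd, 2), round(within_3sd, 3))  # close to 0.68 and 0.997
```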
Other possible shapes commonly seen with risk-related data
include the uniform distribution, exponential, and Poisson
distribution. In the uniform (or rectangular) distribution (see
Figure 8.3), the following can be assumed:
Figure 8.2 Histogram of risk scores.
The data set is symmetrical.
The average point is also the median point, but there is not a
mode. All values have an equal chance of occurring.
Exponential and Poisson distributions (see Figure 8.3), often
seen in rare events, can have the following characteristics:
The data are nonsymmetrical. Data values below the average
are more likely than those above the average. Often zero is
the most likely value in this distribution.
The average and median and mode are not the same. The
relationship between these values provides information
relating to the data.
Bimodal distribution (or trimodal, etc.)
When the histogram shows two or more peaks (see Figure 8.3),
the data set has multiple modes. This is usually caused by two or
more distinct populations in the data set, each corresponding to
one of the peaks. For each peak there is a variable(s) unique to
some of the data that causes that data to shift from the general
distribution. A better analysis is probably done by separating
the populations. In the case of the risk data, the first place to
look for a variable causing the shift is in the leak impact factor.
Because of its multiplying effect, slight differences in the LIF
can easily cause differing clumping of data points. Look for
variations in product characteristics, pipe size and pressure,
population density, etc. A more subtle shift might be caused by
any other risk variable.
A caution regarding the use of histograms and most other
graphical methods is in order. The shape of a graph can often be
radically changed by the choice of axes scales. In the case of the
histogram, part of the scaling is the choice of bin width. A width
too wide conceals the actual data distribution. A width too narrow can show too much unimportant, random variation (noise).
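The effect of bin width can be seen without drawing anything, simply by counting occurrences per bin. In the hypothetical sketch below, a wide bin hides the two clusters in the invented scores, while a narrower bin reveals them.

```python
# Counting occurrences per bin for two bin widths; the scores are invented
# and contain two clusters (low 30s-40s and 60s-70s).

def bin_counts(data, bin_width, lo, hi):
    """Count how many values fall into each bin of width bin_width
    between lo and hi (values at hi are clamped into the last bin)."""
    n_bins = int((hi - lo) / bin_width)
    counts = [0] * n_bins
    for x in data:
        counts[min(int((x - lo) / bin_width), n_bins - 1)] += 1
    return counts

scores = [31, 35, 38, 42, 44, 47, 61, 65, 68, 72]

print(bin_counts(scores, 40, 20, 100))  # [6, 4] -- too wide, clusters hidden
print(bin_counts(scores, 10, 20, 100))  # [0, 3, 3, 0, 3, 1, 0, 0] -- bimodal
```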
Figure 8.3 Examples of distributions.

Run charts
When a time series is involved, an obvious choice of graphing
technique is the run chart. In this chart, the change in a value
over time is shown. Trends can therefore be spotted; that is, "In
which direction and by what magnitude are things changing
over time?" Used in conjunction with the histogram, where the
evaluator can see the shape of the data, information and patterns of behavior become more available.
Correlation charts
Of special interest to the risk manager are the relationships
between risk variables. With risk variables including attributes,
preventions, and costs, the interactions are many. A correlation
chart (Figure 8.4) is one way to qualitatively analyze the extent
of the interaction between two variables.
Correlation can be quantified but for a rough analysis, the
two variables can be plotted as coordinates on an x,y set of axes.
If the data are strongly related (highly correlated), a single line
of plotted points is expected. In the highest correlation, for each
value of x, there is one unique corresponding value of y. In such
high correlation situations, values of y can be accurately predicted from values of x.
If the data are weakly correlated, scatter is seen in the plotted points. In this situation, there is not a unique y for every x.
A given value of x might provide an indication for the corresponding y if there is some correlation present, but the predictive capability of the chart diminishes with increasing scatter of
the data points. The degree of correlation can also be quantified
with numerical techniques.
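One common way to quantify the degree of correlation is Pearson's correlation coefficient, sketched below from its textbook definition. The paired coating and corrosion values are invented for illustration; a coefficient near 1 indicates a strong linear relationship.

```python
# Pearson's correlation coefficient from its textbook definition.
# The paired (coating, corrosion) scores are invented for illustration.
import statistics

def pearson_r(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = (sum((x - mx) ** 2 for x in xs) *
           sum((y - my) ** 2 for y in ys)) ** 0.5
    return num / den

coating = [1, 2, 3, 4, 5, 6]
corrosion = [12, 15, 21, 24, 30, 33]
print(round(pearson_r(coating, corrosion), 3))  # near 1: strong correlation
```

Scattered data of the kind described above would drive the coefficient toward zero.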
There are many examples of expected high correlation: coating condition versus corrosion potential, activity level versus
third-party damage, product hazard versus leak consequences,
etc. Both the presence and absence of a correlation can be informative.
HLC charts
A charting technique borrowed from stock market analysis, the
high-low-close (HLC) chart (Figure 8.5) is often used to show
daily stock share price performance. For purposes of risk score
analysis, the average will be substituted for the “close” value.
This chart simultaneously displays a measure of central tendency and the variation. Because both central tendency and
variation are best used together in data analysis, this chart provides a way to compare data sets at a glance.
One way to group the data would be by system name, as
shown in Figure 8.5. Each system name contains the scores of
all pipeline sections within that system. Other grouping options
include population density, product type, geographic area, or
any other meaningful slicing of the data. These charts will visually call attention to central tendencies or variations that are not
consistent with other data sets being compared. In Figure 8.5,
the AB Pipeline system has a rather narrow range and a relatively high average. This is usually a good condition. The
Frijole Pipeline has a large variation among its section scores,
and the average seems to be relatively low.
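The numbers behind such an HLC chart are simple to compute: for each grouping, the maximum, minimum, and average of its section scores. The sketch below uses invented scores that mimic the pattern described above (a narrow, high-scoring system and a wide, lower-scoring one).

```python
# High, low, and average of section risk scores grouped by system name.
# System names and scores are invented to mimic the pattern described.
import statistics

sections = [
    ("AB Pipeline", 118), ("AB Pipeline", 124), ("AB Pipeline", 121),
    ("Frijole Pipeline", 55), ("Frijole Pipeline", 130), ("Frijole Pipeline", 82),
]

systems = {}
for name, score in sections:
    systems.setdefault(name, []).append(score)

for name, scores in systems.items():
    high, low = max(scores), min(scores)
    avg = round(statistics.mean(scores), 1)
    print(f"{name}: high={high} low={low} avg={avg}")
```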
Because the average can be influenced by just one low
score, a HLC chart using the median as the central tendency
measure might also be useful. The observed averages and
variations might be easily explained by consideration of product type, geographical area, or other causes. An important
finding may occur when there is no easy explanation for an observed difference.
We now look at some examples of data analysis.
Figure 8.4 Correlation chart: risk score versus costs of operation.
Figure 8.5 HLC chart of risk scores.
Example 8.2: Initial analysis
The pipeline system evaluated in this example was broken
into 21 distinct sections as the initial analysis began. Each section was scored in each index and the corresponding LIF. The
evaluator places the overall risk scores on a histogram as shown
in Figure 8.6. Normally, it takes around 30 data points to define
the histogram shape, so it is recognized that using only these 21
data points might present an incomplete picture of the actual
shape. Nonetheless, the histogram reveals some interesting
aspects of the data. The data appear to be bimodal, indicating
two distinct groups of data. Each set of data might form a normal distribution (at least there is no strong indication that the
data sets are not normally distributed). Rather than calculating
summary statistics at this point, the evaluator chooses to investigate the cause of the bimodal distribution. Suspecting the LIF
as a major source of the bimodal behavior, a histogram of LIF
scores is created as shown in Figure 8.6. A quick check of the
raw data shows that the difference in the LIF scores is indeed
mostly due to two population densities existing in this system:
Class 1 and Class 3 areas. This explains the bimodal behavior
and prompts the analyst to examine the two distributions independently for some issues.
The data set is now broken into two parts for further analysis.
The seven records for the Class 1 area are examined separately
from the Class 3 records. Figure 8.7 shows an analysis by index
of the risk scores for each data set. There do not appear to be any
major differences in index values within a data set (an item-by-item comparison would be the most accurate way to verify this).
Some quick calculations yield the following preliminary
analysis: For this system, and similar systems yet to be evaluated, Class 1 area sections are expected to score between 70 and
140, with the average scores falling around 120. Class 3 area
Figure 8.6 Example 8.2 analysis.
Figure 8.7 Example 8.2 index comparison.
scores should range from 30 to 90 with the average scores
falling around 60. In either case, every 10 points of risk reduction (index sum increases) will improve the overall safety
picture by about 5%.
From such a small overview data set, it is probably not yet
appropriate to establish decision points or identify trends.
Example 8.3: Initial comparisons
In this example, the evaluating company performed risk
assessments on four different pipeline systems. Each system
was sectioned into five or more sections. For an initial comparison of the risk scores, the evaluator wants to compare both central tendency and variation. The average and the range are
chosen as summary statistics for each data set.
Figure 8.8 shows a graphical representation of this information on a HLC chart. Each vertical bar represents the risk
scores of a corresponding pipeline system. The top and bottom
tick marks on the bar show the highest and lowest risk score;
the middle tick mark shows the average risk score. Variability
is highest in system 2. This would most likely indicate differences in the LIF within that set of records. Such differences are
most commonly caused by changes in population density, but
common explanations also include differences in operating
pressures, environmental sensitivity, or spreadability. Index
items such as pipe wall thickness, depth of cover, and coating
condition also introduce variability, but unless such items
are cumulative, they do not cause as much variability as the LIF.
The lowest overall average of risk scores occurs in system 4.
Because scores are also fairly consistent (low variability) here,
the lower scores are probably due to the LIF. A more hazardous
product or a wider potential impact area (greater dispersion)
would cause overall lower scores.
In general, such an analysis provides some overall insight
into the risk analysis. Pipeline system 4 appears to carry the
highest risk. More risk reduction efforts should be directed
there. Pipeline system 2 shows higher variability than other
systems. This variability should be investigated because it may
indicate some inconsistencies in operating discipline.
Figure 8.8 Example 8.3 analysis.
As always, when using summary scores like these, the evaluator must ensure that the individual index scores are appropriate.
Example 8.4: Verification of operating discipline
In this example, the corrosion indexes of 32 records are
extracted from the database. The evaluator hypothesizes that in
pipeline sections where coating is known to be in poor condition, more corrosion-preventive actions are being taken. To verify this hypothesis, a correlation chart is created that compares
the coating condition score with the overall corrosion index
score. Initially, this chart (Figure 8.9a) shows low correlation;
that is, the data are scattered and a change in coating condition
is not always mirrored by a corresponding change in corrosion index score.
To ensure that the correlation is being fairly represented, the
evaluator looks for other variables that might introduce scatter
into the chart. Attribute items such asproduct [email protected],presence ofACpower nearby, and atmospheric condition might be
skewing the correlation data. Creating several histograms of
these other corrosion index items yields more information.
Seven of the records represent pipeline sections where internal
corrosion is a significant potential problem. Two records have an
unusually high risk from the presence of AC power lines nearby.
Because internal corrosion potential and AC power influences are not of interest in this hypothesis test, these records are
removed from the study set. This eliminates their influence on
the correlation investigation and leaves 23 records that are
thought to be fairly uniform. The resulting correlation of the 23
records is shown in Figure 8.9b.
Figure 8.9b shows that a correlation does appear. However,
there are two notable exceptions to the trend. In these cases, a
poor coating condition score is not being offset by higher corrosion index scores. Further investigation shows that the two
records in question do indeed have poor coating scores, but
have not been recently surveyed by a close interval pipe-to-soil
voltage test. The other sections are on a regular schedule for
such surveys.
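The Example 8.4 workflow — drop records whose corrosion score is dominated by influences outside the hypothesis, then correlate coating condition against the overall corrosion index — can be sketched as follows. The record fields are hypothetical stand-ins for database fields.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def coating_correlation(records):
    # Remove records whose corrosion score is driven by influences outside
    # the hypothesis (internal corrosion potential, nearby AC power), as in
    # Example 8.4; field names here are hypothetical.
    uniform = [r for r in records
               if not r["internal_corrosion"] and not r["ac_power_nearby"]]
    return pearson_r([r["coating_score"] for r in uniform],
                     [r["corrosion_index"] for r in uniform])
```

Applied to the 32-record set of the example, the filter would remove the seven internal-corrosion records and the two AC-power records, leaving the 23 "fairly uniform" records on which the correlation appears.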
X. Risk model performance
Given enough time and analyses, a given risk model can be validated by measuring predicted pipeline failures against actual.
The current state-of-the-art does not allow such validation for
Figure 8.9 Example 8.4 analysis: (a) coating condition score (32 records); (b) coating condition score (23 records).
reasons, including: models have not existed long enough, data
collection has not been consistent enough, and pipeline failures
on any specific system are not frequent enough. In most cases,
model validation is best done by ensuring that risk results are
consistent with all available information (such as actual
pipeline failures and near-failures) and consistent with the
experiences and judgments of the most knowledgeable experts.
The latter can be at least partially tested via structured model
testing sessions and/or model sensitivity analyses (discussed
later). Additionally, the output of a risk model can be carefully
examined for the behavior of the risk values compared with our
knowledge of behavior of numbers in general.
Therefore, part of data analysis should be to assess the capabilities of the risk model itself, in addition to the results produced from the risk model. A close examination of the risk
results may provide insight into possible limitations of the risk
model, including biases, inadequate discrimination, discontinuities, and imbalances.
Some sophisticated routines can be used to evaluate algorithm outputs. A Monte Carlo simulation uses random numbers
to produce distributions of all possible outputs from a set of risk
algorithms. The shape of the distribution might help evaluate
the “fairness” of the algorithms. In many cases a normal, or
bell-shaped, distribution would be expected since this is a very
common distribution of material properties and properties of
engineered structures as well as many naturally occurring characteristics (height and weight of populations, for instance).
Alternative distributions are possible, but should be explainable. Excessive tails or gaps in the distributions might indicate
discontinuities or biases in the scoring possibilities.
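A minimal Monte Carlo check of this kind might look like the following sketch. The four-index structure mirrors this framework, but the triangular input distributions and their parameters are purely illustrative assumptions.

```python
import random

# Sketch of a Monte Carlo check on a scoring algorithm's output distribution.
# Four 0-100 indexes are summed, following this book's framework; the
# triangular input distributions (mode 60) are an illustrative assumption.
def simulate_index_sums(trials=10_000, seed=42):
    rng = random.Random(seed)
    sums = []
    for _ in range(trials):
        # One draw per index: third-party, corrosion, design, incorrect operations.
        indexes = [rng.triangular(0, 100, 60) for _ in range(4)]
        sums.append(sum(indexes))
    return sums

sums = simulate_index_sums()
mean = sum(sums) / len(sums)
# A roughly bell-shaped pile-up near 4 * (0 + 100 + 60) / 3, about 213, would
# be expected here; heavy tails or gaps in a histogram of these sums would
# hint at discontinuities or biases in the scoring possibilities.
```

In practice the inputs would be drawn from the actual scoring rules of each index rather than a generic distribution; the point of the exercise is to see whether the resulting shape is explainable.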
Sensitivity analyses can be set up to measure the effect of
changes in any variables on the changes in the risk results. This
is akin to signal-to-noise discussions from earlier chapters
because we are evaluating how sensitive the results are to small
changes in underlying data. Because some changes will be "noise" (uncertainty in the measurements), the analysis will help us decide which changes might really be telling us there is a significant risk change and which might only be responding to natural variations in the overall system (background noise).
Sensitivity analysis
The overall algorithm that underlies a risk model must react
appropriately-neither too much nor too little-to changes in
any and all variables. In the absence of reliable data, this appropriate reaction is gauged to a large extent by expert judgment as to how the real-world risk is really impacted by a variable.
Sensitivity analysis generally refers to an evaluation of the
relative change in results due to a change in inputs-the sensitivity of outputs to changes in inputs. Sensitivity analysis can be
a very statistically rigorous process if advanced techniques
such as ANOVA (analysis of variance), factorial design, or
other statistical design of experiments techniques are used to
quantify the influence of specific variables. However, some
simple mathematical and logical techniques can alternatively
be used to gauge the impact on results caused by changing certain inputs. Some of the previously discussed graphical tools
can be useful here. For example, a correlation chart can help
verify expected relationships among variables or alert the analyst to possible model weaknesses when expectations are not met.
From the mathematical formula behind the risk algorithm
presented in Chapters 3 through 7, the effect of changes on any
risk variable can be readily seen. Any percentage change in an
index value represents a change in the probability of failure and
hence, the overall risk. For example, an increase (improvement)
in the corrosion index translates to some percentage reduction
in risk of that type of failure. This improvement could be
achieved through changes in a risk activity or condition such as
in-line inspection, close-interval surveys, or coating condition
or through some combination of changes in multiple variables.
Similarly, a change in the consequences (the leak impact factor,
LIF) correlates to the same corresponding change in the overall
risk score.
Some variables such as pressure and population density
impact both the probability and consequence sides of the risk
algorithm. In these cases, the impact is not obvious.
A spreadsheet can be developed to allow "what-if" comparisons and sensitivity analyses for specific changes in risk variables. An example of such comparisons for a specific risk
model is shown in Table 8.3. The last column of this table indicates the impact of the change shown in the first column. For
instance, the first row shows that this risk model predicts a 10%
overall risk reduction for each 10% increase in pipe wall thickness, presumably in a linearly proportional fashion. (Note that
any corrosion-related benefit from increased wall thickness is
not captured in this model since corrosion survivability is not
being considered.)
Table 8.3 reflects changes from a specific set of variables
that represent a specific risk situation along the pipeline.
Results for different sets of variables might be different. This
type of "what-if" scenario generation also serves as a risk management tool.
Table 8.3 "What-if" comparisons and analyses of changes in risk variables

Change | Risk variables affected | Change in overall risk (%)
Increase pipe wall thickness by 10%. | Pipe factor | -10
Reduce pipeline operating pressure by 10%. | Pipe factor, leak size, MAOP potential, etc. |
Improve leak detection from 20 min to 10 min (including reaction). | Leak size (LIF) | -0.1
Population increases from density of 22 per mile to 33 per mile (50% increase). | |
Increase air patrol frequency. | Air patrol (third-party index) |
Increase pipe diameter by 10%. | Pipe factor, leak size (LIF) |
Improve depth-of-cover score by 10%. | Cover (third-party index) | Possibly -5, depending on initial and end states
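Such what-if comparisons can be sketched in code. This assumes the relative risk score takes the index-sum-divided-by-LIF form used in this framework; the baseline index values and LIF below are illustrative, not taken from Table 8.3.

```python
# "What-if" sketch in the spirit of Table 8.3. The relative risk score is
# assumed to be the index sum divided by the leak impact factor (higher =
# safer); all baseline numbers are illustrative, not from a real pipeline.
def risk_score(indexes, lif):
    return sum(indexes.values()) / lif

baseline = {"third_party": 55, "corrosion": 60, "design": 70, "incorrect_ops": 65}
lif = 2.0

def what_if(change_fn, label):
    """Apply a change to a copy of the baseline and report the % risk-score change."""
    indexes, new_lif = change_fn(dict(baseline), lif)
    before = risk_score(baseline, lif)
    after = risk_score(indexes, new_lif)
    pct = 100 * (after - before) / before
    print(f"{label}: {pct:+.1f}% change in risk score")
    return pct

# Example change: a hypothetical 5-point corrosion index improvement.
def improve_corrosion(indexes, lif):
    indexes["corrosion"] += 5
    return indexes, lif

what_if(improve_corrosion, "Corrosion index +5 pts")  # -> +2.0%
```

As the text cautions, results for a different starting set of variables might be different, so the baseline should represent the specific risk situation being examined.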
Additional Risk Modules
This chapter offers some ideas for considering two additional topics in the basic risk assessment model:
Stress and human errors: measurable variables that indicate a more stressful workplace, possibly leading to higher error rates
Sabotage: variables to consider when the threat of intentional attacks against a pipeline facility is to be assessed.
Where either is seen to be a significant contributor to failure
potential, inclusion of additional risk variables into the risk
assessment might be warranted. However, for many pipelines,
issues regarding operator stress levels and sabotage potential
are either not significant or so uniform as to make distinctions
impossible. So, either of these can be a part of the risk assessment but should be added only when the evaluatorjudges that
its benefit exceedsthe cost ofthe complexitythat is added by its
I. Stress and human errors
The incorrect operations index is largely a measure of the
potential for human errors. When there is no knowledge
deficiency, human error is almost exclusively caused by distraction. That is, when the person knows what to do and how
to do it but inadvertently does it incorrectly, that incorrect
action is the result of at least a momentary loss of focus, a distraction.
Stress is a known contributor to loss of focus. Many studies
have explored the relationship between stress and accidents.
A general consensus is that there is indeed a strong correlation
between the two. Stress can also be a beneficial condition
because it creates the desire to change something. Some
experts therefore make a distinction between positive and negative stress. For purposes of this discussion, the focus will be on negative stress: that set of human reactions that has a potentially destructive effect on health and safety.
Stress is a highly subjective phenomenon in that equal external conditions do not initiate equal stress states among all people. It is not the external condition that causes the stress; it is the manner in which the external condition is viewed by an individual that determines the reaction. More and more, stress is being
viewed as a matter of personal choice, indicating that people
can control their reaction to external stimulus to a greater
degree than was previously thought. Nonetheless, experience
shows that certain external stimuli can be consistently linked
with higher stress states in many individuals.
Because the stress level in an individual is so subjective, it is
nearly impossible to estimate the impact of a stressor (the external stimulus) on a person's job functioning ability. For example, the fear of job loss might be a significant cause of concern
in one employee but have virtually no impact on another.
The differences might be due to present financial condition,
financial responsibilities, confidence in obtaining alternative
employment, history of job losses, fear of rejection, presence of
any stigmas attached to loss of employment, etc., all of which
are highly subjective interpretations.
It is beyond the scope of this text (and perhaps beyond present scientific capabilities) to accurately quantify the level of
stress in a given work group and relate that to accident frequency. A thorough psychological screening of every individual in the workplace would be the most exacting method to
identify the ability to handle stress and the ability to avoid focus
errors. This might give a snapshot indication of the propensity
for human errors in the work group. The benefits of such a
study, however, considering the associated high levels of uncertainty, may not outweigh the costs of the effort.
For purposes of risk assessment, however, we can identify
some common influences that historically have been linked to
higher levels of stress as well as some widespread stress reducers. This is useful in distinguishing groups that may be more
prone to human error during a specified time interval.
Adjustments to the risk score can be made when strong indications of higher or lower than normal stress levels exist.
Physical stressors
Noise, temperature, humidity, vibration, and other conditions
of the immediate environment are physical contributors to
stress. These are thought to be aggravating rather than initiating
causes. These stimuli tend to cause an increase in arousal level
and reduce the individual’s ability to deal with other stresses.
The time and intensity of exposure will play a role in the impact
of physical stressors.
Job stressors
Working relationships Examples of these stressors include
roles and responsibilities not clearly defined, personality conflicts, and poor supervisory skills.
Promotions Examples include no opportunity for advancement, poorly defined and executed promotion policies, highly
competitive work relationships.
Job security Indicators that this might be a stress issue include
recent layoffs, rumors of takeovers, and/or workforce reductions.
Changes This is a potential problem in that there may be
either too many changes (new technology, constantly changing
policies, pressures to learn and adapt) or too few, leading to
monotony and boredom.
Workload Again, either too much or too little can cause stress
problems. Ideally, employees are challenged (beneficial stress)
but not overstressed.
Office politics When favoritism is shown and there is poor
policy definition or execution, people can sense a lack of fairness, and teamwork often breaks down with resulting stress.
Organizational structure and culture Indicators of more
stressful situations include the individual’s inability to influence aspects of his or her job, employee’s lack of control, and
lack of communication.
Perception of hazards associated with the job If a job is perceived to be dangerous, stress can increase. An irony here is that
continued emphasis on the hazards and need for safety might
increase stress levels among employees performing the job.
Other common stressors
Shift work A nonroutine work schedule can lead to sleep disorders, biological and emotional changes, and social problems.
Shift work schedules can be designed to minimize these effects.
Family relationships When the job requires time away from
home, family stresses might be heightened. Family issues in
general are occasional sources of stress.
Social demands Outside interests, church, school, community obligations, etc., can all be stress reducers or stress
enhancers, depending on the individual.
Isolation Working alone when the individual’s personality is
not suited to this can be a stressor.
Undesirable living conditions Stress can increase when an
individual or group is stationed at a facility, has undesirable
housing accommodations near the work assignment, or lives in
a geographical area that is not of their choosing.
Assessing stress levels
Even if the evaluator is highly skilled in human psychology, it
will be difficult to accurately quantify the stress level of a work
group. A brief visit to a work group may not provide a representative view of actual, long-term conditions. On any given day or
week, stress indicators might be higher or lower than normal. A
certain amount of job dissatisfaction will sometimes be voiced
even among the most stress-free group. Because this is a difficult area to quantify, point changes due to this factor must
reflect the high amount of uncertainty. It is recommended that
the evaluator accept the default value for a neutral condition,
unless he finds strong indications that the actual stress levels
are indeed higher or lower than normal.
In adjusting previously assigned risk assessment scores, it
has been theorized that a very low stress level can bolster existing error-mitigation systems and lead to a better incorrect operations index score. A workforce free from distractions is better
able to focus on tasks. Employees who feel satisfied in their
jobs and are part of a team are normally more interested in their
work, more conscientious and less error prone. Therefore,
when evidence supports a conclusion of “very low stress,”
additional points can be added.
On the other hand, it is theorized that a high stress level
or high level of distraction can undermine existing error-
mitigation systems and lead to increased chances of human
error. A higher negative stress level leading to a shortened
attention span can subvert many of the items in the incorrect
operations index. Training, use of procedures, inspections,
checklists, etc., all depend on the individual dedicating attention to the activity. Any loss of focus will reduce effectiveness. It
will be nearly impossible to accurately assess the stress level
during times of design and construction of older pipelines.
Therefore, the assessments will generally apply to human error
potential for operations and maintenance activities of existing
pipelines and all aspects of planned pipelines.
Stress levels can, of course, impact the potential of other failure modes, as can many aspects of the incorrect operations
index. As a modeling convenience and consistent with the use
of the incorrect operations index, only that index is adjusted by
the human stress issue in this example risk model.
Indications of higher stress and/or distraction levels can be
identified and prioritized. The following list groups indicators
into three categories, arranged in priority order. The first categories provide more compelling evidence of a potentially
higher future error rate:
Category I Negative Indicators
High current accident rate
High current rate of errors

Category II Negative Indicators
High substance abuse
High absenteeism
High rate of disciplinary actions

Category III Negative Indicators
Low motivation, general dissatisfaction
Low teamwork and cooperation (evidence of conspiracies, unhealthy competition, "politics")
Much negativity in employee surveys or interviews
High employee turnover
Low degree of control and autonomy among most employees
Low (or very negative) participation in suggestion systems.

Interpreting these signs is best done in the context of historical data collected from the workplace being evaluated and other similar workplaces. The adjective high is, of course, relative. The evaluator will need some comparative measures, either from other work groups within the company or from published industry-wide or country-wide data, or perhaps even from experience in similar evaluations. Care should be exercised in accepting random opinions for these items. Although most of these indicators are selected partly because they are quantifiable measures, the data are not always readily available. In the absence of such data, it is suggested that no point adjustments be made. Where indications exist, a relative point or percentage adjustment scale for the incorrect operations index can be set up as shown in Table 9.1. In this example table, a previously calculated incorrect operations index score would be reduced by up to 20 points or 25% when significant indicators of negative stress exist.

Table 9.1 Example adjustment scale for the three negative indicator categories

Condition | Point change from previously calculated Inc Ops score | Percent change applied to previously calculated Inc Ops score
Presence of any Category I negative indicators | -12 |
Presence of any Category II negative indicators | |
Presence of any two Category III negative indicators | |
Combined maximum | -20 | -25%

There is also the possibility that a workforce has unusually low stress levels, presumably leading to a low error rate. Indications of lower stress levels might be:

Category I Positive Indicators
Low accident rate
Low rate of errors

Category II Positive Indicators
Low substance abuse
Low absenteeism
Low rate of disciplinary actions

Category III Positive Indicators
High motivation, general satisfaction
Strong sense of teamwork and cooperation
Much positive feedback in employee surveys or interviews
Low employee turnover
High degree of control and autonomy among most employees
High participation in suggestion systems.

As with the negative indicators, comparative data will be required and opinions should be only very carefully used. For instance, a low incidence of substance abuse should only warrant points if this was an unusual condition for this type of work group in this culture. Where indications exist, a relative point or percentage adjustment scale for the incorrect operations index can be set up as shown in Table 9.2.

Table 9.2 Example adjustments to incorrect operations index for the three positive indicator categories

Condition | Point change from previously calculated Inc Ops score | Percent change applied to previously calculated Inc Ops score
Presence of any Category I positive indicators | |
Presence of any Category II positive indicators | |
Presence of any two Category III positive indicators | |
Combined maximum | +20 | +25%

In the examples given in Tables 9.1 and 9.2, the results of the stress/distraction analysis would be as follows: When one or more of the indicators shows clear warning signals, the evaluator can reduce the overall incorrect operations index score by up to 20 points or 25%. When these signs are reversed and clearly show a better work environment than other similar operations, up to 20 points or 25% can be added to the incorrect operations index. These are intended only to capture unusual situations. Points should be added or deducted only when strong indications of a unique situation are present.

High stress: -20 pts or -25%; neutral: 0 pts; low stress: +20 pts or +25%.
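The adjustment logic behind Tables 9.1 and 9.2 can be sketched as follows. Only the combined caps (plus or minus 20 points, or 25%) come from the text; the per-category point values in the example call are hypothetical.

```python
# Sketch of the stress/distraction adjustment applied to a previously
# calculated incorrect operations index score. Only the combined caps
# (+/-20 points) come from the text; category point values are hypothetical.
def adjust_inc_ops(score, category_points):
    """Apply stress-indicator point adjustments, honoring the combined maximum."""
    total = sum(category_points)
    total = max(-20, min(20, total))          # combined maximum per the tables
    return max(0, min(100, score + total))    # keep within the 0-100 index scale

# An Example 9.2 style case: one Category II and two Category III negative
# indicators (hypothetical values -3, -2, -2) reduce a score of 80 to 73.
print(adjust_inc_ops(80, [-3, -2, -2]))  # -> 73
```

The percentage-adjustment option would work the same way, with the change expressed as a fraction of the previously calculated score and capped at 25%.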
The following example scoring scenarios use the point
adjustment option (rather than percentage adjustment) from the
previous adjustment tables.
Example 9.1: Neutral stress conditions
In the work environment being scored, the evaluator sees a
few indications of overall high stress. Specifically, she
observes an increase in accident/error rate in the last 6 months,
perhaps due to a high workload recently and loss of some
employees through termination. On the other hand, she
observes a high sense of teamwork and cooperation, an overall
high motivation level, and low absenteeism. Although the accident rate must be carefully monitored, the presence of positive as well as negative indicators does not support a situation unusual enough to warrant point adjustments for stress.
Example 9.2: Higher stress conditions
In this workplace being scored, the evaluator assesses conditions at a major pumping station and control room. There are
some indications that a higher than normal level of stress
exists. In the last year, many organizational changes have
occurred, including the dismissal of some employees. This is
not a normal occurrence in this company. Job security concerns seem to be widespread, leading to some competitive
pressures within work teams. Upper management reported
many employee complaints regarding supervisors at these
sites during the last 6 months. There is no formal suggestion
system in place; employees have taken it on themselves to
report dissatisfactions. In light of job security issues, the evaluator feels that this is an important fact. Records show that in
the last 6 months, absenteeism has risen by 5% (even after
adjusting for seasonality), a figure that, taken alone, is not
statistically significant. The evaluator performs informal, random interviews of three employees. After allowing for an
expected amount of negative feedback, along with a reluctance
to “tell all” in such interviews, the evaluator nonetheless
feels that an undercurrent of unusually high stress presently
exists. Accident frequencies in the last year have not increased.
The evaluator identifies no Category I items, possibly one Category II item (the uncertain absenteeism number), and two Category III items (general negativity, high complaints). He reduces the incorrect operations index by 7 points in consideration of these conditions.
Example 9.3: Lower stress conditions
At this site, the evaluator finds an unusual openness and
communication level among the employees. Reporting relationships seem to be informal and cordial. Almost everyone at a
meeting participates enthusiastically; there seems to be no
reluctance to speak freely. A strong sense of teamwork and
cooperation is evidenced by posters, bulletin boards, and direct
observation of employees. There appears to be a high level of
expertise and professionalism in all levels, as shown in the audit
for other risk items. Absenteeism is very low; the unit has been
accident free for 9 years, a noteworthy achievement considering the amount of vehicle driving, hands-on maintenance, and
other exposures of the work group.
The evaluator identifies Category I, II, and III items,
assesses this as an unusually low stress situation, and adds 18
points to the incorrect operations index. The full score of 20
points is not applied because the evaluator is not as familiar
with the work group as she could be and therefore decides that
an element of uncertainty exists.
II. Sabotage module
The threat of vandalism, sabotage, and other wanton acts of mischief is addressed to a limited degree in various sections of this risk assessment, such as the third-party damage and design indexes. This potential threat may need to be more fully considered when the pipeline is in areas of political instability or public unrest. When more consideration is warranted, the results of this module can be incorporated into the risk assessment. For purposes here, the term sabotage will be used to encompass all
intentional acts designed to upset the pipeline operation.
Sabotage is primarily considered to be a direct attack against
the pipeline owner. Because of the strategic value of pipelines and
their vulnerable locations, pipelines are also attacked for other
reasons. Secondary motivations may include pipeline sabotage as
An indirect attack against a government that supports the pipeline
A means of drawing attention to an unrelated cause
A protest for political, social, or environmental reasons
A way to demoralize the public by undermining public confidence in its government’s ability to provide basic services
and security.
Sabotage module 9/201
It would be naive to rule out the possibility of attack completely in any part of the world. However, this module is
designed to be used when the threat is more than merely a theoretical potential. Inclusion of this module should be prompted
by any of the following conditions in the geographical area
being evaluated:
Previous acts directed against an owned facility have occurred
Random acts impacting owned or similar facilities are occurring
The company has knowledge of individuals or groups that
have targeted it.
Because the kinds of conditions that promote sabotage can
change quickly, the potential for future episodes is difficult to
predict. For some applications, the evaluator may wish to
always include the sabotage module for consistency reasons.
An important first step in sabotage assessment is to understand the target opportunities from the attackers’ point of view.
It is useful to develop "what-if" scenarios of possible sabotage
and terrorist attacks. A team of knowledgeable personnel can
be assembled to develop sabotage strategies that they would
use, should they wish to cause maximum damage. The scenarios should be as specific as possible, noting all of the following:
What pipeline would be targeted?
Where on the pipeline should the failure occur?
What time of year, day of week, time of day?
How would the failure be initiated?
How would ignition be ensured, if ignition was part of the scenario?
What would be the expected damages? Best case? Worst case?
What would be the probability of each scenario?
As seen in the leak impact factor development discussion,
the most damaging scenarios could involve unconfined vapor
cloud explosions, toxic gases, or rapidly dispersed flammable
liquids (via roadways, sewer systems, etc), all in “target-rich”
environments. Fortunately, these are also very rare scenarios.
Even if a careful orchestration of such an event were attempted,
the practical difficulties in optimizing the scenario for maximum impact would be challenging even for knowledgeable saboteurs.
The threat assessment team should use these scenarios as
part of a vulnerability assessment. Existing countermeasures
and sequence-interruption opportunities should be identified.
Additional prevention measures should be proposed and discussed. Naturally, care should be exercised in documenting
these exercises and protecting such documentation.
The nature of the sabotage threat is quite different from all threats previously considered. A focused human effort to cause
a failure weighs more on the risk picture than the basically random or slower acting forces of nature. Because any aspect of
the pipeline operation is a potential target, all failure modes can
theoretically be used to precipitate a failure but the fast-acting
failure mechanisms will logically be the saboteur’s first choice.
It must be conservatively assumed that a dedicated intruder will
eventually find a way to cause harm to a facility. This implies
that, eventually, a pipeline failure will occur as long as the
attacks continue. It is recommended that the sabotage threat be
included as a stand-alone assessment. It represents a unique
type of threat that is independent and additive to other threats.
To be consistent with other failure threat assessments (discussed in Chapters 3 through 6), a 100-point scale, with
increasing points representing increasing safety, can be used in
evaluations. Specific point values are not always suggested
here because a sabotage threat can be so situation specific.
The evaluator should review all of the variables suggested,
add others as needed, and determine the initial weightings
based on an appropriate balance between all variables.
Variables with a higher potential impact on risk should have
higher weightings.
The overall potential for a sabotage event can first be assessed based on the current sociopolitical environment, where lower points reflect lower safety (greater threat levels). A score of
100 points indicates no threat of sabotage.
Attack potential: 0-100 pts
Then points can be added to the “attack potential” score
based on the presence of mitigating measures. In the sample list
of considerations below, seven mitigating measures are
assessed as are portions of the previously discussed Incorrect
Operations index:
Attack Potential
A. Community Partnering
B. Intelligence
C. Security Forces
D. Resolve
E. Threat of Punishment
F. Industry Cooperation
G. Facility Accessibility (barrier preventions, detection preventions)
Incorrect Operations Index:
A. Design
B. Construction
C. Operations
D. Maintenance
Finally, some modifications to the Leak Impact Factor
detailed in Chapter 7 might also be appropriate, as is discussed.
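The scoring approach just outlined (an attack potential score of 0 to 100, improved by points for mitigating measures, on the same 100-point safety scale) can be sketched as follows. The individual mitigation values are placeholders, since the text deliberately leaves the weightings to the evaluator.

```python
# Sketch of the sabotage threat score: an attack potential score (0-100,
# lower = greater threat) plus points for mitigating measures, clamped to
# the 100-point safety scale. Mitigation point values are placeholders;
# the text leaves the actual weightings to the evaluator.
def sabotage_score(attack_potential, mitigations):
    total = attack_potential + sum(mitigations.values())
    return max(0, min(100, total))

mitigations = {
    "community_partnering": 5,
    "intelligence": 4,
    "security_forces": 6,
    "resolve": 2,
    "threat_of_punishment": 2,
    "industry_cooperation": 3,
    "facility_accessibility": 8,
}
print(sabotage_score(55, mitigations))  # -> 85
```

Higher weightings would go to the measures judged to have the greater potential impact on risk, as the text recommends; contributions from the incorrect operations index items could be added as further terms.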
Attack potential
Anticipation of attacks is the first line of defense. Indications
that the potential for attack is significant include (in roughly
priority order):
A history of such attacks on this facility
A history of attacks on similar facilities
Presence of a group historically responsible for attacks
High-tension situations involving conflict between the operating (or owner) company and other groups such as:
Activists (political, environmental, labor, religious extremists, etc.)
Former employees
Hostile labor unions
Local residents.
In many cases, the threat from within the local community is
greatest. An exception would be a more organized campaign
that can direct its activities toward sites in different geographic
areas. An organized guerrilla group is intuitively a more potent
threat than individual actions.
An aspect of sabotage, probably better termed vandalism,
includes wanton mischief by individuals who may damage
facilities. Often an expression of frustration, these acts are generally spontaneous and directed toward targets of convenience.
While not as serious a threat as genuine sabotage, vandalism
can nonetheless be included in this assessment.
Experience in the geographic area is probably the best gauge
to use in assessing the threat. If the area is new to the operator,
intelligence can be gained via government agencies (state
department, foreign affairs, embassies, etc.) and local government activities (city hall, town meetings, public hearings, etc.).
The experience of other operators is valuable. Other operators
are ideally other pipeline companies, but can also be operators
of production facilities or other transportation modes such as
railroad, truck, and marine.
To assess the attack potential, a point adjustment scale can be
set up as follows:
Low attack probability (situation is very safe) . . . 50-80 pts
Although something has happened to warrant the inclusion of
this module in the risk assessment, indications of impending
threats are very minimal. The intent or resources of possible
perpetrators are such that real damage to facilities is only a
very remote possibility. No attacks other than random (not
company or industry specific) mischief have occurred in recent
history. Simple vandalism such as spray painting and occasional
theft of non-strategic items (building materials, hand tools,
chains, etc.) would score in this range.
Medium probability . . . 20-50 pts
This module is being included in the risk assessment because
a real threat exists. Attacks on this company or similar
operations have occurred in the past year and/or conditions
exist that could cause a flare-up of attacks at any time.
Attacks may tend to be perpetrated by individuals
rather than organizations or otherwise lack the full measure
of resources that a well-organized and resourced saboteur
may have.
High probability (threat is significant) . . . 0-20 pts
Attacks are an ongoing concern. There is a clear and present
danger to facilities or personnel. Conditions under which
attacks occur continue to exist (no successful negotiations,
no alleviation of grievances that are prompting the hostility).
Attacks are seen to be the work of organized guerrilla
groups or other well-organized, resourced, and experienced
attackers.
Assigning points between the values shown is encouraged
because actual situations will always be more complex than
what is listed in these very generalized probability descriptions.
A more rigorous assessment can be done by examining and
scoring specific aspects of attack potential.
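The banded scale above can be represented programmatically. The band boundaries come from the text; the band names and the `severity` interpolation helper are hypothetical devices for assigning intermediate points, which the text explicitly encourages.

```python
# Point bands from the text (higher points = safer). Band membership
# and the exact score within a band remain the evaluator's judgment.
ATTACK_POTENTIAL_BANDS = {
    "high": (0, 20),     # clear and present danger; organized groups
    "medium": (20, 50),  # real threat; attacks within the past year
    "low": (50, 80),     # minimal indications; random mischief only
}

def attack_potential_points(band, severity):
    """Return a score within a band. severity is in [0, 1], where 1
    represents the worst case for that band (assumed helper, not a
    formula from the source)."""
    lo, hi = ATTACK_POTENTIAL_BANDS[band]
    # Linear interpolation: higher severity pulls the score toward
    # the low (less safe) end of the band.
    return hi - severity * (hi - lo)
```

A medium-probability situation judged halfway severe, for example, would score 35 points under this sketch.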
Sabotage mitigations
As the potential for an attack increases, preventive measures
should escalate. However, any mitigating measure can be overcome by determined saboteurs. Therefore, the risk can only be
reduced by a certain amount for each probability level.
Awarding of points and/or weightings is difficult to generalize.
Most anti-sabotage measures will be highly situation specific.
The designer of the threat assessment model should assign
weightings based on experience, judgment, and data, when
available. Insisting that all weightings sum to 100 (representing 100% of the mitigation potential) helps in assigning
weights and balancing the relative benefits of all measures. In a
sense, evaluating the potential for sabotage also assesses the
host country’s ability to assist in preventing damage. The following sabotage threat reduction measures are generally available to the pipeline owner/operator in addition to any support
provided by the host country.
A. Community partnering
One strategy for reducing the threat of sabotage and vandalism
is to “make allies from adversaries.” The possibility of attack is
reduced when “neighbors” are supportive of the pipeline activities. This support is gained to some extent through general public education. People feel less threatened by things that they
understand. Support of pipeline operations is best fostered,
however, through the production of benefits to those neighbors.
Benefits may include jobs for the community, delivery of
needed products (an immediate consumable such as heating oil
or gas for cooking is more important than intermediate products such as ethylene or crude oil), or the establishment of infrastructure by the company. Threat of attack is reduced if pipeline
operators establish themselves as contributing members of a
community. In developing countries, this strategy has led to
agricultural assistance, public health improvements, and the
construction of roads, schools, hospitals, etc. Improvements of
roads, telephone service, and other infrastructure not only
improve the quality of life, they also have the secondary benefit
of aiding in the prevention and response to sabotage. An appreciative community will not only be less inclined to cause damage to the facilities of such a company, but will also tend to
intervene to protect the company interests when those interests
benefit the community.
Such a program should not be thought of (and definitely not
be labeled) as a bribe or extortion payment by the operating
company. In some cases, the program may be thought of as fair
compensation for disrupting a community. In other cases where
the pipeline is merely used as a convenient target in a regional
dispute that does not involve the operation at all, assistance programs can be seen as the cost of doing business or as an additional local tax to be paid. Whatever the circumstances, a
strategy of partnering with a community will be more effective
if the strategy is packaged as the “right thing to do” rather than
as a defensive measure. The way the program is presented internally will affect company employees and will consequently
spill over into how the community views the actions. Employee
interaction with the locals might be a critical aspect of how the
program is received. If the pipeline company or sponsoring
government is seen as corrupt or otherwise not legitimate, this
assistance might be seen as a temporary payoff without long-term commitment and will not have the desired results. It might
be a difficult task to create the proper alliances to win public
support, and it will usually be a slow process. (See also the
“Intelligence” section next.)
Community partnering can theoretically yield the most benefit as a risk mitigator because removal of the incentive to
attack is the most effective way to protect the pipeline. When
such a program is just beginning, its effectiveness will be hard
to measure. For risk assessment purposes, the evaluator might
assess the program initially and then modify the attack potential variable as evidence suggests that the program is achieving
its intended outcome.
Various elements of a community partnering program can be
identified and valued, in order to assess the benefits from the
program:
Significant, noticeable, positive impact of program
Regular meetings with community leaders to determine how
and where money is best spent
Good publicity as a community service.
These elements are listed in priority order, from most important to least, and can be additive: add points for all that are
present, using a point assignment scale consistent with the perceived benefit of this mitigation. In many cases, this variable
should command a relatively high percentage of possible mitigation benefits, perhaps 20-70%.
B. Intelligence
Forewarning of intended attacks is the next line of defense.
Intelligence gathering can be as simple as overhearing conversations or as sophisticated as the use of high-resolution spy
satellites, listening devices, and other espionage techniques.
Close cooperation with local and national law enforcement
may also provide access to vital intelligence. Local police
forces are normally experienced in tracking subversives. They
know the citizens, they are familiar with civilian leaders, they
can have detailed information on criminals and subversive
groups, and their support is important in an active anti-sabotage
program. However, some local police groups may themselves
be corrupt or less than effective. When the local police force is
seen as a government protection arm (rather than protection for
the people), a close alliance might be counterproductive and
even impact the effectiveness of a damage prevention program.
The evaluator should be aware that effectiveness of intelligence gathering is difficult to gauge and can change quickly as
fragile sources of information appear and disappear. Maximum
value should be awarded when the company is able to reliably
and regularly obtain information that is valuable in preventing
or reducing acts of sabotage. As a rough way of scoring this
item, a simple ratio can be used:
(Number of acts thwarted through intelligence gathering efforts) / (number of acts attempted)
Hence, if it is believed that three acts were avoided (due to
forewarning) and eight acts occurred (even if unsuccessful,
they should be counted), then award 3/8 of the maximum point
value.
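The scoring ratio above can be sketched as a small helper. The 3-out-of-8 example comes from the text; the function name and the zero-attempt behavior (full credit when nothing has been attempted) are assumptions.

```python
def intelligence_points(acts_thwarted, acts_attempted, max_points):
    """Rough scoring ratio from the text: award the fraction of the
    maximum points equal to acts thwarted / acts attempted."""
    if acts_attempted == 0:
        # Assumption: with no attempts to judge against, award full
        # credit rather than divide by zero.
        return max_points
    return max_points * acts_thwarted / acts_attempted
```

Using the text's example with a hypothetical 40-point maximum: three acts avoided against eight attempted awards 3/8 of 40, or 15 points.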
C. Security forces
The effectiveness of a security force will be situation specific.
Rarely can enough security personnel be deployed to protect
the entire length of a pipeline. If security is provided from a
government that is presently unpopular, the security forces
themselves might be targets and bring the risk of damage closer
to the pipeline. It is not uncommon in some areas for pipeline
owners to deploy private security personnel. The evaluator
should look for evidence of professionalism and effectiveness
in such situations. Maximum value should be awarded when
the security force presents a strong deterrent to sabotage.
D. Resolve
A well-publicized intention to protect the company’s facilities
is a deterrent in itself. When the company demonstrates unwavering resolve to defend facilities and prosecute perpetrators,
the casual mischief-maker is often dissuaded. Such resolve can
be partially shown by large, strongly worded warning signs.
These warnings should be reinforced by decisive action should
an attack occur. A high-visibility security force also demonstrates resolve. Maximum value should be awarded for a high-profile display that might include signs, guards, patrols, and
publicized capture and prosecution of offenders.
E. Threat of punishment
Fear of punishment can be a deterrent to attacks, to some extent.
A well-publicized policy and good success in prosecution of
perpetrators is a line of defense. The assessed value of this
aspect can be increased when the threat of punishment is
thought to play a significant role. The evaluator should be
aware that a government that is not seen as legitimate might be
deemed hypocritical in punishing saboteurs harshly while its
own affairs are not in order. In such cases, the deterrent effect of
punishment might actually foster support for the saboteurs
[12]. In many cases, threat of punishment (arguably) has a minimal impact on reducing attacks.
F. Industry cooperation
Sharing of intelligence, training employees to watch neighboring facilities (and, hence, multiplying the patrol effectiveness),
sharing of special patrols or guards, sharing of detection
devices, etc., are benefits derived from cooperation between
companies. Particularly when the companies are engaged in
similar operations, this cooperation can be inexpensive and
effective. Maximum value should be awarded when a pipeline
company’s anti-sabotage efforts are truly expanded by these
cooperative efforts.
G. Facility accessibility
Attacks will normally occur at the easiest (most vulnerable) targets and, as a secondary criterion, at those targets that will be the
most troublesome to repair. Such sites include the
remote, visible stations along the pipeline route (especially
pump and compressor stations), the exposed piping on supports
and bridges, and locations that will be difficult to repair (steep
mountain terrain, swampland, heavy jungle, etc.).
The absence of such facilities is in itself a measure of protection and would be scored as the safest condition. The underlying premise is that a buried pipeline is not normally an
attractive target to a would-be saboteur, due to the difficulty in
access. Line markers might bring unwanted attention to the line
location. Of course, this must be weighed against the benefits
of reducing unintentional damage by having more signage. The
evaluator may wish to score incidences of line markers or
even cleared ROW as aspects of sabotage threat if deemed
appropriate.
Where surface facilities do exist, points should be subtracted
for each occurrence in the section evaluated. The magnitude of
this point penalty should be determined based on how much
such facilities are thought to increase the attack potential and
vulnerability for the pipeline segment. Different facilities
might warrant different penalties depending on their attractiveness to attackers.
Surface facilities such as pump and compressor stations are
often the most difficult and expensive portions of the pipeline
system to repair. Use of more sophisticated and complex equipment often entails delays in obtaining replacement
parts, skilled labor, and specialized equipment to effect repairs.
This is further reason for a stronger defensive posture at these
facilities.
Preventive measures for unintentional third-party intrusions
(scored in the third-party damage index) offer some overlap
with mischief-preventing activities (fences around aboveground facilities, for example) and are sometimes reconsidered
in this module. More points should be awarded for devices and
installations that are not easily defeated. The presence of such
items better discourages the casual intruder. Preventive measures at each facility can bring the point level nearly to the point
of having no such facilities, but not as high as the score for “no
vulnerable facilities present.” This is consistent with the idea
that “no threat” (in this case “no facility”) will have less risk
than “mitigated threat,” regardless of the robustness of the mitigation measures. From a practical standpoint, this allows the
pipeline owner to minimize the risk in a number of ways
because several means are available to achieve the highest level
of preventive measures to offset the point penalty for the surface facility. However, it also shows that even with many preventions in place, the hazard has not been removed. Mitigations
can be grouped into two categories: barrier-type preventions,
where physical barriers protect the facility, and detection-type
preventions, where detection and response are a deterrent.
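The penalty-and-mitigation logic described above (a point penalty per surface facility, partially offset by barrier-type and detection-type preventions, but never fully recovered) can be sketched as follows. The scoring shape and the 0.95 mitigation cap are illustrative assumptions, not values from the text.

```python
def facility_accessibility_points(max_points, facilities):
    """Sketch of the surface-facility penalty logic.

    facilities: list of (penalty, mitigation_fraction) pairs, where
    mitigation_fraction in [0, 1] reflects barrier- and detection-type
    preventions in place at that facility.
    """
    points = max_points  # no vulnerable surface facilities = safest
    for penalty, mitigation in facilities:
        # Cap mitigation credit below 100% so a well-protected facility
        # never scores as high as having no facility at all ("no
        # threat" beats "mitigated threat").
        mitigation = min(mitigation, 0.95)
        points -= penalty * (1 - mitigation)
    return max(points, 0)
```

Under this sketch, a section with no surface facilities keeps the full score; an unmitigated facility subtracts its full penalty; and even a fully mitigated facility still costs a small residual penalty.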
The “penalty” assigned for the presence of surface facilities
can be reduced for all mitigative conditions at each facility
within the pipeline section evaluated. Some common mitigation measures or conditions, in roughly priority order from
most effective to least, are listed here:
Barrier-Type Preventions
Electrified fence in proper working condition
Strong fence/gate designed to prevent unauthorized entry by
humans (barbed wire, anti-scaling attachments, heavy-gauge
wire, thick wood, or other anti-penetration barrier)
Normal fencing (chain link, etc.)
Strong locks, not easily defeated
Guards (professional, competent) or guard dogs (trained)
Alarms, deterrent type, designed to drive away intruders with
lights, sounds, etc.
Staffing (value dependent on hours manned and number of
personnel)
High visibility (difficult to approach the site undetected;
good possibility exists of “friendly eyes” observing an intrusion and taking intervening action)
Barriers to prevent forcible entry by vehicles (These may be
appropriate in extreme cases. Ditches and other terrain
obstacles provide a measure of protection. Barricades that do
not allow a direct route into the facility, but instead force a
slow, twisting maneuver around the barricades, prevent rapid
penetration by a vehicle.)
Dense, thorny vegetation (This type of vegetation provides a
barrier to unauth