ADVANCED SEMINAR ON
COMMON CAUSE FAILURE ANALYSIS
IN PROBABILISTIC SAFETY ASSESSMENT
ISPRA COURSES
ON RELIABILITY AND RISK ANALYSIS
A series devoted to the publication of courses and educational seminars given at the
Joint Research Centre, Ispra Establishment, as part of its education and training program.
Published for the Commission of the European Communities,
Directorate-General Telecommunications, Information Industries and Innovation,
Scientific and Technical Communications Service.
The publisher will accept continuation orders for this series which may be cancelled at any time and
which provide for automatic billing and shipping of each title in the series upon publication.
Please write for details.
ADVANCED SEMINAR ON
COMMON CAUSE
FAILURE ANALYSIS
IN PROBABILISTIC
SAFETY ASSESSMENT
Proceedings of the ISPRA Course held at the Joint Research Centre,
Ispra, Italy, 16-19 November 1987
Edited by
ANIELLO AMENDOLA
Commission of the European Communities,
Joint Research Centre, Ispra Establishment, Ispra, Italy
SPRINGER-SCIENCE+BUSINESS MEDIA, B.V.
Library of Congress Cataloging in Publication Data
Advanced Seminar on Common Cause Failure Analysis in Probabilistic
Safety Assessment (1987 : Ispra, Italy)
Advanced Seminar on Common Cause Failure Analysis in Probabilistic
Safety Assessment : proceedings of the ISPRA Course held at the
Joint Research Centre, Ispra, Italy, 16-19 November 1987 / edited by
Aniello Amendola.
p. cm. -- (ISPRA courses on reliability and risk analysis)
Bibliography: p.
Includes index.
ISBN 978-90-481-4045-9 ISBN 978-94-017-0629-2 (eBook)
DOI 10.1007/978-94-017-0629-2
1. System failures (Engineering)--Congresses. 2. Reliability
(Engineering)--Congresses. 3. Probabilities--Congresses.
I. Amendola, Aniello, 1939- . II. Commission of the European
Communities. Joint Research Centre. Ispra Establishment.
III. Title. IV. Series.
TA169.5.A34 1987
620'.00452--dc20    89-8000
ISBN 978-90-481-4045-9
Commission of the European Communities, Joint Research Centre, Ispra (Varese), Italy
Publication arrangements by
Commission of the European Communities,
Directorate-General Telecommunications, Information Industries and Innovation, Scientific and
Technical Communications Service, Luxembourg
EUR 11760
© 1989 Springer Science+Business Media Dordrecht
Originally published by Kluwer Academic Publishers in 1989
All Rights Reserved
No part of the material protected by this copyright notice may be reproduced or
utilized in any form or by any means, electronic or mechanical,
including photocopying, recording or by any information storage and
retrieval system, without written permission from the copyright owner.
TABLE OF CONTENTS
Foreword vii
Introduction (A. Amendola) 1
Treatment of Common Cause Failures. The Nordic Perspective
(S. Hirschberg) 9
Classification of Multiple Related Failures (A. Amendola) 31
Design Defences against Multiple Related Failures (P. Humphreys) 47
Design-Related Defensive Measures against Dependent Failures - ABB
ATOM's Approach (S. Hirschberg and L. I. Tiren) 71
Design Defences against Common Cause/Multiple Related Failure
(P. Doerre and R. Schilling) 101
Measures Taken at Design Level to Counter Common Cause Failures.
A Few Comments Concerning the Approach of EDF (T. Meslin) 107
Analysis Procedures for Identification of MRF's (P. Humphreys) 113
Dependent Failure Modelling by Fault Tree Technique (S. Contini) 131
Treatment of Multiple Related Failures by MARKOV Method
(J. Cantarella) 145
Parametric Models for Common Cause Failures Analysis
(K. N. Fleming) 159
Estimation of Parameters of Common Cause Failure Models (A. Mosleh) 175
Pitfalls in Common Cause Failure Data Evaluation (P. Doerre) 205
Experience and Results of the CCF-RBE (A. Poucet) 221
Analysis of CCF-DATA-Identification. The Experience from the
Nordic Benchmark (K. E. Petersen) 235
Some Comments on CCF-Quantification. The Experience from the
Nordic Benchmark (K. Porn) 243
Analysis of Common Cause Failures Based on Operating Experience:
Possible Approaches and Results (T. Meslin) 257
Multiple Related Failures from the Nordic Operating Experience
(K. U. Pulkkinen) 277
The Use of Abnormal Event Data from Nuclear Power Reactors for
Dependent Failure Analysis (H. W. Kalfsbeek) 289
MRF's from the Analysis of Component Data (P. Humphreys,
A. M. Games, N. J. Holloway) 303
Index 343
FOREWORD
There is today a wide range of publications available on the theory of
reliability and the technique of Probabilistic Safety Analysis (PSA).
To place this work properly in this context, we must recall a basic
concept underlying both theory and technique, that of redundancy.
Reliability is something which can be designed into a system, by the
introduction of redundancy at appropriate points. John Von Neumann's
historic paper of 1952, "Probabilistic Logics and the Synthesis of
Reliable Organisms from Unreliable Components", has served as
inspiration for all subsequent work on systems reliability. This paper
sings the praises of redundancy as a means of designing reliability
into systems, or, to use Von Neumann's words, of minimising error.
Redundancy, then, is a fundamental characteristic which a designer
seeks to build in by using appropriate structural characteristics of
the "model" or representation which he uses for his work. But any
model is established through a process of delimitation and
decomposition. Firstly, a "Universe of Discourse" is delineated; its
component elements are then separated out; and moreover in a
probabilistic framework for each element each possible state is
defined and assigned an appropriate possibility measure called
probability.
This process of delimitation and decomposition is at the root of many
problems of modern technology - divergence between the model and
reality, disagreements among experts, and the consequent reluctance on
the part of the public to trust the experts. But the fact is that some
such process is necessary; it is the essence of modern scientific
method, and as such is the basis of our modern industrial
civilization.
In my opinion, the reluctance among politico-technical circles to
accept and understand the value of Probabilistic Safety Analysis stems
from the fact that a PSA has to carry this method of delimitation and
decomposition to the extreme. But we must remember that nonetheless
the PSA methodology succeeds in capturing and modelling a larger and
more consistent segment of reality than any previous method of safety
analysis.
In the cases we are considering, this analysis can throw up fictitious
redundancies, redundancies which appear in the model but are not
reflected in reality. The solution to the problem of Common Cause
Failure, Common Mode Failures, of dependencies, is in essence to avoid
creating fictitious redundancies in a model, or at least to eliminate
them in subsequent elaboration of the model.
I must emphasise that this problem lies upstream of any mathematical
manipulation of the model; it is a problem of representation of
reality. Any solutions will therefore be closely linked to the reality
being modelled and the type of model chosen.
The importance of this book lies in its overall approach to this
problem. It starts from the well-known decomposition methodology of
fault tree analysis, based on familiar Boolean two-valued logic. It
then goes on to demonstrate, by the use of many concrete examples, how
this process can be guided, refined, and corrected by successive
approximations, in order to arrive at a model which is both
technically correct and practically useful.
This work is full of practical guidance and useful heuristics;
indeed, it turns out that many so-called rules should be seen as
useful heuristics! These offer considerable help in the specific tasks
faced by the analyst trying to model a complex system. The book also
represents an important, if not unique, synthesis of experience and
collection of field data.
Altogether, we have here an extremely useful review of the "state of
the art" for anyone involved in the technical work of establishing a
PSA, as well as a cultural achievement which will appeal to those
interested in evaluating the methodology of probabilistic
representation of complex systems.
Dr. Giuseppe Volta
Commission of the European Communities
Joint Research Centre - Ispra Site
Director of the Institute for Systems Engineering
INTRODUCTION
A. Amendola
CEC-JRC
Institute for Systems Engineering
Systems Engineering and Reliability Division
21020 Ispra (Va) - Italy
For reliability assessments, complex systems are normally decomposed
into a number of constituent items. These do not coincide
with the "hardware" components only. Indeed, a physical system
necessarily interacts with human operators according to control,
emergency, repair and maintenance procedures, technical
specifications, etc., so that such "software" elements and human
operators are further constituents of the system. At a higher system
level, even other factors such as the conceptual design or the overall
organizational management of the macrosystem, of which the physical
system is a part, can be considered as further generalized components
interacting with the physical system.
After the necessary decomposition, a major problem for correct
modelling and assessment is the reconstruction of the actual
dependency structures among the items, which must be identified and
made explicit: this is the only way of achieving a model as close
as possible to the real system.
The assumption of "independence" of the items and that of purely
random failure processes are very helpful because of the easy
probabilistic calculations they make possible, but, of course, they
are very far from reality. Even when, by a well-structured analysis
procedure, functional links among the different items have been
identified, dependency structures provoked by less perceivable
"software" elements, or by events occurring outside the physical
boundary of the operating systems (for instance, events which occurred
during the conceptual design or the manufacturing), can become
dramatically evident at a certain time of system operation and only
under particular demands.
The unavoidable existence of more or less hidden dependency
structures is the limiting factor which prevents a system from achieving
unlimited reliability. The assignment of probabilities for failures
provoked by dependencies, which can only be hypothesized from
statistical evidence on systems different from those under
investigation, is the crucial problem of any risk assessment project.
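A purely numerical illustration, with assumed figures that are not taken from the text, may help to show why hidden dependencies set a floor on the reliability achievable through redundancy. For a two-train redundant system in which each train fails on demand with probability $q_I$ independently, while a shared cause fails both trains with probability $q_{CC}$ per demand,
$$
Q_{sys} \;\approx\; q_I^{2} + q_{CC}, \qquad
q_I = 10^{-3},\; q_{CC} = 10^{-4} \;\Rightarrow\;
Q_{sys} \approx 10^{-6} + 10^{-4} \approx 10^{-4} .
$$
Although in this sketch only one train failure in ten is of common cause, the dependent term dominates the system unavailability by two orders of magnitude, and no amount of additional identical redundancy can bring $Q_{sys}$ below the level set by $q_{CC}$.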
The awareness of this problem and the need to implement adequate
defences in a system against the occurrence of common cause failures
(as such dependency structures are usually labelled) have
encouraged several proposals for analysis procedures and mathematical
models as well as significant international collaborative projects.
As early as November 1975, a Task Force on Problems of Rare Events
in the Reliability Analysis of Nuclear Power Plants was set up on
the basis of a recommendation of CSNI (the Committee on the Safety of
Nuclear Installations of the Nuclear Energy Agency of the OECD). A
subsequent CSNI Research Group placed emphasis on protective systems
for nuclear reactors and elaborated a classification system (1) for
common mode failures (terminology problems are discussed elsewhere in
the book (2)), especially directed towards defences against CCFs.
Further insights from the CSNI project and from UKAEA-SRS
researchers in the field came from the Watson Review in 1981 (3).
In the meantime a number of probabilistic models were proposed:
Fleming's model based on the ratio of CCF to total failure rate (4),
the shock model by Apostolakis (5), the common load model by Mankamo (6),
the multivariate model by Vesely (7) and the binomial failure rate
model by Atwood (8), which were followed by other models described
elsewhere in the book.
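To make the nature of such parameterizations concrete, the defining relation commonly associated with the first of these models (the beta-factor approach) can be sketched as follows; the symbols $\lambda_I$, $\lambda_{CC}$ and $\lambda_T$ are introduced here only for illustration and are not taken from the text:
$$
\beta \;=\; \frac{\lambda_{CC}}{\lambda_I + \lambda_{CC}} \;=\; \frac{\lambda_{CC}}{\lambda_T},
\qquad \lambda_{CC} = \beta\,\lambda_T, \qquad \lambda_I = (1-\beta)\,\lambda_T ,
$$
where $\lambda_T$ is a component's total failure rate, $\lambda_I$ its independent failure rate and $\lambda_{CC}$ the rate of failures shared by all members of the common cause group; a single parameter $\beta$, estimated from observed event ratios, thus apportions the total failure rate between independent and common cause contributions.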
Together with these statistical parameter models to predict CCF
rates, several procedures have been proposed to identify CCF events
and to include them in the system logical diagrams (see a non-exhaustive
but relevant literature list in Refs. (9-19)). Some data
on diesel generators (20) and pumps (21) have also been estimated and
published since then.
Despite this significant theoretical effort, after WASH-1400,
which used a boundary model for including CCFs, the CCF problem was for
many years either ignored in practice (the first NPP PSAs did not
include CCFs) or poorly approached. This was indeed evidenced in
Europe by the first Systems Reliability Benchmark Exercise (S-RBE)
(22) and in the USA by the identified need to agree on a consistent
classification system (23) as a basis for establishing an adequate
data base of CCF occurrences. Furthermore, problems connected with
sound statistical estimation procedures were identified and gave rise to
discussions which have continued in recent papers (24-29).
The S-RBE project was aimed at assessing the complete procedure
of a reliability evaluation of a complex system by starting from the
basic documentation and familiarization with the reference system.
This was the EDF Auxiliary Feedwater System of the Paluel Unit. It
consisted of two redundant trains, each in turn with a double and
diverse redundancy (motor-driven and turbo-driven pumps). Therefore,
it presented interesting challenges to the expert teams involved in a
CCF analysis.
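As a purely illustrative sketch, under the simplifying assumption (ours, not a statement about the actual Paluel design) that any one of the four pumps can fulfil the feedwater function on demand, the failure logic of such a configuration reads
$$
F_{sys} \;=\; \bigl(M_1 \wedge T_1\bigr) \wedge \bigl(M_2 \wedge T_2\bigr),
$$
where $M_i$ and $T_i$ denote failure of the motor-driven and turbo-driven pump of train $i$. Diversity within each train protects against causes acting on a single pump type, while the identical pumps across trains ($M_1, M_2$ and $T_1, T_2$) form natural common cause component groups; this combination is what made the system an instructive test case for CCF analysis.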
Participation in the exercise included representatives of major
partners involved in NPP safety assessment in Europe, i.e.
authorities, vendors, utilities and research institutes from EEC
member countries and Sweden.
S-RBE included both a structured qualitative analysis and
reliability modelling and evaluation; furthermore, it was subdivided
into several phases in order to separate the effects of the different
contributors upon the overall spread of the results. During all