


Opening remarks and introduction
W. Kent Muhlbauer, WKM Consultancy



Exploratory Data Analysis
Joel Anderson, RSI Pipeline Solutions



Risk Decisions Under Uncertainty
Justin Raimondi, New Century Software



Quantitative Risk Assessment following an ILI Survey (ILI-Based Risk Assessment)
Jane Dawson, Baker Hughes Process & Pipeline Services



Risk‐Based Decision‐Making Supported by Machine Learning
Mike Gloven, Pipeline-Risk



Closing remarks
W. Kent Muhlbauer, WKM Consultancy


End of Webinar


Organized by:

Clarion Technical Conferences

Pipeline risk algorithms can take in hundreds of variables and perform hundreds more calculations to determine the probability of failure for each segment of pipe. A moderately sized system can generate a million or more dynamic segments, and the end product is typically a table dozens of columns wide by a million or more rows. Once a risk run is completed, an equally critical step is to QC and analyze the risk estimates. Given the size of the data, however, this can be the most overwhelming part of the process and the point where a risk analysis can easily break down. Making sense of these tens of millions of numbers is a daunting task for the unprepared.

This presentation demonstrates several techniques for exploring large data sets. Topics covered include outlier detection, correlations, and exploratory data plotting and interpretation, along with common pitfalls to avoid in an analysis. These techniques apply to any data set and are not limited to risk analysis. Since resources are finite, it is necessary to be able to quickly extract meaning from data, enabling better-informed decisions about how to manage risks and make pipelines safer.
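As a concrete illustration of the outlier screening and correlation ranking mentioned above, a minimal sketch in Python. The table, column names (pof, depth_pct, age_yr), and distributions are invented for illustration and are not from the presentation:

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a segment-level risk table
rng = np.random.default_rng(0)
n = 10_000
df = pd.DataFrame({
    "age_yr": rng.uniform(1, 60, n),          # segment age, years
    "depth_pct": rng.beta(2, 8, n) * 100,     # metal-loss depth, % of wall
})
# Toy probability-of-failure estimate driven by age and depth
df["pof"] = 1e-6 * df["age_yr"] * np.exp(df["depth_pct"] / 25)

# Flag candidate outliers with the interquartile-range rule
q1, q3 = df["pof"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[df["pof"] > q3 + 1.5 * iqr]
print(f"{len(outliers)} segments flagged for QC review")

# Rank input variables by correlation with the risk estimate
print(df.corr(numeric_only=True)["pof"].sort_values(ascending=False))
```

On a real risk run the same two steps, an outlier screen followed by a correlation ranking, point the analyst at the handful of segments and drivers worth a closer look.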

Our risk assessments are strongly informed by inspections such as ILI. It is often impractical and/or inefficient, from a risk management standpoint, to remediate every anomaly called by an ILI tool. Therefore, some anomalies will always be left in the pipe until the next assessment, and this has a risk implication.

The problem is how to determine when "enough is enough" in a dig program and where to draw the line on what is left in the ground until the next reassessment. Multiple levels of variability in pipe properties, defect dimensions, and corrosion rate lead to large uncertainty about the true risk of these unexamined anomalies. Assuming all pipe properties are nominal and the ILI tool is without error would not be prudent risk management in the presence of this uncertainty. Alternatively, assuming "worst case" for everything is certainly conservative but leads to unnecessary expenditure of finite resources (time and money), which in turn increases risk elsewhere, where those resources could have been applied for greater benefit.

In this paper we examine a Monte Carlo solution for dealing with the uncertainty surrounding these critical integrity management decisions. This approach accommodates variability in any input and allows for better-informed risk decisions. While these methods might seem exotic, they are more approachable than they appear, and they can be applied to any type of risk-based decision process where uncertainty is inherent.
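The Monte Carlo idea can be sketched in a few lines: sample the uncertain inputs, grow each sampled defect to the next reassessment, and count how often it exceeds a failure criterion. All distributions and the 80%-of-wall criterion below are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

# Uncertain inputs for one unexamined anomaly (assumed distributions)
wall_in = rng.normal(0.250, 0.005, trials)               # wall thickness, in
depth_in = rng.normal(0.100, 0.015, trials)              # true depth given the ILI call, in (tool error)
rate_in_yr = rng.lognormal(np.log(0.005), 0.5, trials)   # corrosion growth rate, in/yr
years = 7                                                # reassessment interval, yr

# Grow the defect and test it against an assumed 80%-of-wall criterion
future_depth = depth_in + rate_in_yr * years
fails = future_depth > 0.8 * wall_in
pof = fails.mean()
print(f"Estimated probability of exceeding critical depth: {pof:.4f}")
```

Repeating this per anomaly yields a probability for each one left in the ground, which can then be compared against a tolerable-risk threshold to decide where the dig program stops.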

Pipeline risk assessment is a foundational component of effective pipeline integrity management. The Federal pipeline safety integrity management regulations require pipeline operators to use risk assessments. Many operators adopted primarily qualitative and relative risk models to initially meet the need to prioritize baseline integrity assessments and identify pipeline threats. However, the use of risk assessment required by the pipeline safety regulations goes beyond the prioritization of pipeline segments for baseline assessment and includes (but is not limited to):

As pipeline operators attempt to meet the above requirements and progressively adopt an operating strategy of continual risk reduction whilst minimizing total expenditures within safety, environmental, and reliability constraints, the need for a more quantitative approach to risk assessment is becoming increasingly evident. This view is supported by a recent PHMSA review recommending that pipeline operators develop and use risk models consistent with the need to support risk management decisions that reduce risk, rather than choosing a risk model based on the perceived initial quality and completeness of the input data. The use of quantitative risk models will enable superior understanding of the risks from pipeline systems and improve the critical safety information feeding into the overall integrity and risk management processes.

While a wholesale move to quantitative risk modelling is likely to be a daunting task for many operators, it is possible to achieve a meaningful quantitative risk assessment for what are likely to be the key threats affecting pipeline integrity management without a major investment in new software and associated databases. Such a risk assessment can be achieved for many threats by using the quantitative data provided by an ILI survey of the subject pipeline, augmented by relatively few other key data attributes.

This paper describes an ILI-based risk model, developed initially for gas pipelines, that provides a quantitative risk assessment of multiple pipeline threats (probability of failure and consequences of failure). Results are presented to show the value of quantitative risk outputs and how they can be used to support decisions in the overall integrity and risk management processes.
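A minimal sketch of how the two quantitative outputs named above combine into a segment ranking: risk as probability of failure times consequence of failure. The segment IDs, units, and numbers below are invented for illustration:

```python
# Hypothetical per-segment outputs of a quantitative risk model
segments = [
    {"id": "A", "pof_per_mi_yr": 2.1e-5, "cof_usd": 4.0e6},
    {"id": "B", "pof_per_mi_yr": 8.3e-6, "cof_usd": 2.5e7},
    {"id": "C", "pof_per_mi_yr": 1.2e-4, "cof_usd": 6.0e5},
]

# Risk = probability of failure x consequence of failure (expected loss)
for s in segments:
    s["risk_usd_per_mi_yr"] = s["pof_per_mi_yr"] * s["cof_usd"]

# Rank segments by expected loss to prioritize mitigation spend
for s in sorted(segments, key=lambda s: s["risk_usd_per_mi_yr"], reverse=True):
    print(s["id"], f'{s["risk_usd_per_mi_yr"]:.1f}')
```

Note that the ranking can differ from a ranking on probability alone: here segment B carries the highest expected loss despite the lowest failure probability, which is precisely the kind of decision support quantitative outputs enable.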

Machine learning is a process for revealing useful patterns in data, built on common methods from linear algebra, descriptive and inferential statistics, and calculus. As data continues to become more available and accessible, machine learning is emerging as a foundational practice supporting the validation of risk beliefs, the measurement of root-cause data, and the optimization of options to mitigate risk. Further, machine learning addresses the constraint that most, if not all, risk data are a sample of a larger population of potentially influencing data. Since inferential statistics underpin machine learning, it is useful for the practitioner to understand confidence intervals and hypothesis testing to more fully grasp the uncertainty of results inferred from that larger population.
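The confidence point above can be made concrete with a small example: a confidence interval on a proportion estimated from a sample of dig results. The sample counts are synthetic and the normal-approximation interval is one of several standard choices:

```python
import math

# Hypothetical sample: 60 anomalies examined, 21 confirmed as external corrosion
digs = 60
ec_confirmed = 21

p_hat = ec_confirmed / digs
# Normal-approximation 95% confidence interval for a proportion
se = math.sqrt(p_hat * (1 - p_hat) / digs)
lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
print(f"Sample proportion {p_hat:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

The width of the interval, roughly 0.23 to 0.47 here, is the practical reminder that a susceptibility rate learned from 60 digs is an estimate about the larger anomaly population, not a fact about it.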

This paper will present the fundamental elements of the machine learning process using external corrosion as a case study. The process will demonstrate how threat susceptibility and severity can be learned and validated through actual observations by applying shallow and deep learning methods to data. The results are learned models which will be shown to support assessment of un-piggable pipelines, dig prioritization, inference of missing data, prioritization of data, optimization of inspection intervals, and mitigative decision-making. The case study will also show how learned models are explicitly validated through actual observations, an often-overlooked concept in existing risk practices.
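A minimal example of the "shallow learning" idea: logistic regression fit by gradient descent to classify segment susceptibility to external corrosion. The two features (coating age, soil resistivity), the synthetic ground truth, and the coefficients are all invented for illustration and are not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
coating_age = rng.uniform(0, 50, n)   # years
soil_res = rng.uniform(1, 100, n)     # ohm-m; lower = more corrosive soil

# Synthetic ground truth: old coating in low-resistivity soil => susceptible
logit_true = 0.08 * coating_age - 0.03 * soil_res - 1.0
y = rng.random(n) < 1 / (1 + np.exp(-logit_true))

def z(x):  # standardize features so gradient descent converges quickly
    return (x - x.mean()) / x.std()

X = np.column_stack([np.ones(n), z(coating_age), z(soil_res)])
w = np.zeros(3)
for _ in range(500):                   # batch gradient descent on logistic loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

pred = 1 / (1 + np.exp(-X @ w)) > 0.5
accuracy = (pred == y).mean()
print(f"Training accuracy: {accuracy:.2f}")
```

The learned weights are also directly interpretable: a positive coefficient on coating age and a negative coefficient on soil resistivity recover the assumed physical relationship, which is the kind of check against observations the paper describes.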