PIPELINE RISK MANAGEMENT
NOVEMBER 18-19, 2020 | HOUSTON
Plus TRAINING COURSES
NOVEMBER 16-17, 2020
8.15   Opening remarks & introduction
8.30   1. Dancing with the Swans
       Joel Anderson, Enable Midstream Partners, LP
9.00   2. Risk Decisions Under Uncertainty
       Justin Raimondi, New Century Software (Speaker)
       Joel Anderson, Enable Midstream Partners, LP
       Kent Muhlbauer, WKM Consultancy
9.30   3. Agile Approaches for Transitioning to Quantitative Risk Assessment Modeling
       Kevin Spencer, Dynamic Risk (Speaker)
       Armando Borjas, Dynamic Risk
       David Oliver, Dynamic Risk
       Dan Williams, Dynamic Risk
11.00  4. Threat, Risk and Consequence
       Randy Vaughan, R&F Pipecon Resources, Inc.
11.30  5. The Value of Quantifying the Risk-Reduction Benefits of the Pipeline Integrity Program
       Aleksandar Tomic, TC Energy (Speaker)
       Shahani Kariyawasam, TC Energy
       Brenn Snider, TC Energy
1.45   6. Quantitative Risk Assessment following an ILI Survey (ILI-Based Risk Assessment)
       Jane Dawson, Baker Hughes, Process & Pipeline Services (Speaker)
       T Hoffmann, Baker Hughes, Process & Pipeline Services
       I Yablonskikh, Baker Hughes, Process & Pipeline Services
2.15   7. Internal Corrosion Risk Evaluation
       David Wint, Nalco Champion (Speaker)
       Anji Bordelon, Nalco Champion
3.30   8. The Value That a System-Wide Risk Assessment Can Bring in the 21st Century: TC Energy’s Approach
       Aleksandar Tomic, TC Energy (Speaker)
       Shahani Kariyawasam, TC Energy
4.00   9. Continuous Improvement of Risk Aggregation and Integration with “The Matrix”
       Kelly Thompson, Williams Companies
4.30   10. On Setting Integrity Reliability & Risk Targets for Gas Transmission Pipelines
       Sherif Hassanien, Enbridge Gas Transmission & Midstream (Speaker)
       Andy Drake, Enbridge Gas Transmission & Midstream
5.00   End of Day, Cocktail Reception

8.30   11. Exploratory Data Analysis
       Joel Anderson, Enable Midstream Partners, LP
9.00   12. Risk-Based Decision-Making Supported by Machine Learning
       Mike Gloven, Expert Infrastructure Solutions, Inc.
9.30   13. Refining Fatality Consequence Estimation for Gas Pipeline Risk Modeling
       Pablo Lugo, Williams Companies
11.00  14. The Evolution of Quantitative Risk Assessments: Integrating Environmental Consequence Analysis to Refine Results
       Landon Lucadou, Dynamic Risk (Speaker)
       Jeremy Fontenault, RPS Group (Speaker)
       Mark Brimacombe, Pembina Pipeline Corporation
       Tyler Klashinsky, Dynamic Risk
       Kevin Spencer, Dynamic Risk
11.30  15. Risk-Based Optimization of an ILI Program for Corrosion
       Shahani Kariyawasam, TC Energy (Speaker)
       Dongliang Lu, TC Energy
1.30   16. AC Interference Threat Assessment
       John Skates, American Innovations (Speaker)
       Andy Florence, Pipeline Integrity Management Services
2.00   17. Quantitative Risk Assessment of Underground Gas Storage System Using Bayesian Network Models
       Francois Ayello, DNV GL USA (Speaker)
       Arun Agarwal, DNV GL USA
       Vincent DeMay, DNV GL USA
       Vijay Raghunathan, DNV GL USA
       Narasi Sridhar, DNV GL USA
3.00   18. Bayesian Networks for Geohazard Risk Assessment
       Smitha Koduru, C-FER Technologies
3.30   19. Proposed Environmental Risk Acceptance Criteria for Canadian Pipelines
       Mark Stephens, C-FER Technologies
4.00   20. Proposed Life Safety Risk Acceptance Criteria for Canadian Pipelines
       Shahani Kariyawasam, TC Energy (Speaker)
       Dongliang Lu, TC Energy
       Aleksandar Tomic, TC Energy
4.30   End of Conference
In catastrophic risk, the question is, “how bad can bad be?” To attempt to answer this question, people will often ask, “how bad has it been in the past?” The weakness of this approach is that catastrophic events are often too rare to apply traditional statistics in a reasonable manner, and the consequences have the potential to be business killers. Additionally, the span of time for which data is available can be too short to have witnessed one of these events. Even though these types of events are rare, the potential consequences to a company can be thousands of times higher than a “typical” incident, and a single event can outweigh the sum of every other event up to that point.
Most traditional statistical assumptions work well when you are near the average, but catastrophic events live out on the tails of the distribution, where those assumptions would infer that the probability of something of that magnitude happening is effectively zero. This is what is referred to as “tails risk”: the amount of risk that lives in the extremes. Although the consideration of this is relatively new to pipeline risk, it is a comparatively old and well-studied problem in fields such as finance, flood design and insurance. This presentation will cover many real-life examples of tails risk and how to account for it in an integrity management program.
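The gap between thin-tailed and heavy-tailed assumptions can be made concrete with a small sketch (hypothetical numbers, not figures from the presentation): a normal model and a heavy-tailed Pareto model are each asked for the probability of an incident ten times a “typical” one.

```python
from statistics import NormalDist

# Hypothetical incident-cost model: typical incident ~ $1M, spread ~ $0.5M.
mu, sigma = 1.0, 0.5          # $M, illustrative only
catastrophic = 10.0           # a $10M event, 18 sigma beyond the mean

# Thin-tailed (normal) model: the tail probability is effectively zero.
p_normal = 1.0 - NormalDist(mu, sigma).cdf(catastrophic)

# Heavy-tailed (Pareto, alpha = 1.5) model: P(X > x) = (xm / x) ** alpha.
xm, alpha = 1.0, 1.5
p_pareto = (xm / catastrophic) ** alpha

print(f"P(cost > $10M), normal model: {p_normal:.2e}")
print(f"P(cost > $10M), Pareto model: {p_pareto:.2e}")
```

Under the normal model the event is unthinkable; under the Pareto model it is merely uncommon. That gap is the essence of tails risk.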
Our risk assessments are strongly informed by inspections such as ILI. It is often impractical and/or inefficient, from a risk management standpoint, to remediate every anomaly called by an ILI tool. Therefore, there are always going to be anomalies left in the pipe until the next assessment. This has a risk implication.
The problem is how to determine when “enough is enough” in your dig program and where to draw the line on what is left in the ground until the next reassessment. The multiple levels of variability in pipe properties, defect dimensions, and corrosion rate lead to a large amount of uncertainty about the true risk of these unexamined anomalies. Assuming all the pipe properties are nominal and the ILI tool is without error would not be prudent risk management in the presence of this uncertainty. Alternatively, assuming “worst case” for everything is certainly conservative but leads to unnecessary expenditure of finite resources (time and money). This leads to increased risk elsewhere, where the resources could have been applied for greater benefit.
In this paper we examine a Monte Carlo solution for dealing with the uncertainty surrounding these critical integrity management decisions. This solution allows for the variability in any inputs and allows for better informed risk decisions. While these methods might seem exotic, they are not as difficult as they may seem. These techniques can be applied to any type of risk-based decision process where uncertainty is inherent.
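As a minimal sketch of the kind of Monte Carlo treatment described here (all distributions and parameters below are hypothetical placeholders, not the paper’s actual inputs), uncertainty in wall thickness, ILI sizing error, and corrosion rate can be propagated into a probability that an unremediated anomaly grows past a depth limit before the next assessment:

```python
import random

random.seed(42)

def prob_exceedance(n_trials=100_000, years_to_next_ili=5.0):
    """Monte Carlo sketch: probability that an unremediated anomaly grows
    past 80% of wall thickness before the next ILI (hypothetical inputs)."""
    exceed = 0
    for _ in range(n_trials):
        wt = random.gauss(0.312, 0.005)           # wall thickness, in
        reported = 0.35                           # ILI call: 35% of WT deep
        sizing_err = random.gauss(0.0, 0.078)     # tool tolerance, fraction of WT
        depth = (reported + sizing_err) * wt      # "true" depth, in
        rate = random.lognormvariate(-4.0, 0.6)   # corrosion growth rate, in/yr
        if depth + rate * years_to_next_ili > 0.8 * wt:
            exceed += 1
    return exceed / n_trials

print(f"Estimated exceedance probability: {prob_exceedance():.3%}")
```

The same loop structure extends to any risk decision where inputs are uncertain: replace the sampled quantities and the limit-state test, and the output remains a defensible probability rather than a single nominal or worst-case number.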
Pipeline operators are increasingly transitioning their pipeline risk assessment approaches to more objective, quantitative risk models that provide for improved decision making, a better ability to manage risk levels against acceptability targets, and ultimately support for their programs to eliminate high-impact events.
Many operators have existing index-based or relative risk ranking models that, although characterized as “qualitative” in terms of the risk measure or output, actually utilize a significant amount of data that could also support a more quantitative expression of risk. Industry trending for risk modeling also indicates regulatory and standards alignment with an expectation of more robust, quantitative approaches that can be used with risk acceptance criteria.
Risk acceptance guidelines for the pipeline industry are either being explored (PHMSA) or developed (CSA Z662). Successfully implementing the transition to quantitative risk assessment requires an agile plan and roadmap, key stakeholder engagement, risk communication strategy and alignment of integrity planning activities to best leverage the new risk assessment outputs. There are a number of approach options for quantitative risk modeling including reliability approaches (based on limit states), failure frequency approaches, physics‐based approaches and fault trees for estimating the likelihood of failure.
On the consequence side, the use of digitized structure information and occupancies along the pipeline system for safety considerations and the use of detailed outflow and overland spill modeling analysis for environmental considerations lends itself to a more robust approach. The presentation walks through case studies and lessons learned for operators successfully transitioning from an index‐based relative risk approach to a more robust, objective, quantitative approach. The overall roadmap with key steps in the transition is first laid out. Opportunities for leveraging existing data inputs and strategies for overcoming data gaps are explored. Methods employed for aligning the risk assessment approach and outputs to an existing corporate risk matrix, where applicable, are also discussed.
The means of applying best modeling practices while testing in an iterative and agile manner using software, and of incorporating uncertainties and data quality levels (TVC) into the modeling approach, are presented. In closing, examples of using the new quantitative risk outputs to inform integrity management decision making, mitigation planning and integrity budgeting activities are presented.
Background: It seems these days that whenever you mention risk, risk analysis, threat, consequences, etc., people in the pipeline industry (both operators and inspectors) begin to look for a means of escape. It’s like the airline commercial: you can tell by the look on their faces and their body language that they develop the urge to “Wanna Get Away.” It becomes very apparent they really don’t want to get into that conversation; they’ve heard it all so many times before. It is even more problematic to gather an audience and give a talk or speech on the subjects: risk, risk analysis, threat, consequences, etc.
New Approach: A new approach would be to talk about risk, risk analysis, threat, consequences, etc. in everyday street language at first. You know, maybe not mention the Integrity Management Program (IMP) until you’re well into the discussion. Everyone attending the discussion would certainly be able to relate to everyday risk taking and the consequences involved, especially when it is pointed out that this is an everyday occurrence, and a lot of the time without even a second thought.
Hopeful Conclusion: Given a new perspective, such as the one we’ve tried here, concerning the subjects at hand, the audience should then be more attentive to the discussion as it relates to the IMP rules. Also, paraphrasing the rules and regulations while keeping their meanings correct should help the audience better understand what is being said. Certainly, a new perspective, if possible without losing the definition, is always another means of getting something across better. When it’s all said and done, the hope is that the audience will realize that the subject matter is not meant to be so scary, ominous, or so difficult to perform correctly. It’s just threat, risk and consequence – that’s it.
Pipeline operators have faced increasing public scrutiny in recent years. This has created a growing need to prevent any failure and to ensure that the pipeline integrity budget is effectively utilized. Therefore, it is extremely beneficial to prioritize projects that ensure the greatest risk reduction for a given integrity budget. This presentation will cover the basics of an effective cost-benefit analysis of a system-wide integrity budget, accounting for compliance-driven as well as risk-based work and quantifying the risk-reduction benefits from multiple perspectives, including safety, commercial, and regulatory risk measures. It will also cover effective visualization techniques to represent the cumulative risk-reduction benefit for the entire integrity program through a cost-benefit curve. Interpretations of the results and how they enable optimization will also be presented. Lastly, an example of a cost-benefit analysis will be discussed in the form of make-piggable prioritization for uninspected pipelines, and how this analysis was used to prioritize launcher and receiver installations at TC Energy.
Pipeline risk assessment is a foundational component of effective pipeline integrity management. The Federal pipeline safety integrity management regulations require pipeline operators to use risk assessments. Many operators adopted primarily qualitative and relative risk models to initially meet the need to prioritize baseline integrity assessments and identify pipeline threats. However, the use of risk assessment required by the pipeline safety regulations goes beyond the prioritization of pipeline segments for baseline assessment and includes (but is not limited to):
As pipeline operators attempt to meet the above requirements and progressively adopt an operating strategy of continual risk reduction whilst minimizing total expenditures within safety, environmental, and reliability constraints, the need for a more quantitative approach to risk assessment is becoming increasingly evident. This view is supported by a recent PHMSA review recommending that pipeline operators develop and use risk models consistent with the need to support risk management decisions that reduce risks, rather than focusing the choice of risk model on the perceived initial quality and completeness of the input data. The use of quantitative risk models will enable superior understanding of the risks from pipeline systems and improve the critical safety information feeding into the overall integrity and risk management processes.
While a wholescale move to quantitative risk modelling is likely to be a daunting task for many operators, it is possible to achieve a meaningful quantitative risk assessment for what are likely to be the key threats affecting pipeline integrity management without the need for a major investment to cover implementation of new software and associated databases. Such a risk assessment can be achieved for many threats by using the quantitative data provided by an ILI survey of the subject pipeline, augmented by relatively few other key data attributes.
This paper describes an ILI based risk model developed initially for gas pipelines that provides a quantitative risk assessment of multiple pipeline threats (probability of failure and consequences of failure). Results are presented to show the value of quantitative risk outputs and how they can be used to support decisions in the overall integrity and risk management processes.
Nalco Champion has developed a systematic approach that enables both hazardous liquid and natural gas pipeline operators to evaluate their internal corrosion risk threats. Pipeline Risk IQ integrates internal corrosion data with pipeline operating conditions to equip an operator with a comprehensive view of the threats and risk levels along the pipeline route. Pipeline Risk IQ is utilized to determine whether a corrosive environment exists and to accurately identify the contributing internal corrosion threats. This process enhances an operator’s ability to focus on problematic pipeline segments while improving their internal corrosion mitigation programs and regulatory compliance.
This process aligns existing pipeline GIS data, which may include High Consequence Areas, class location information, and/or in-line inspection data, with internal corrosion test data. This test data is collected in the field with a mobile device and uploaded to Nalco Champion’s server. Industry-based algorithms are employed to dynamically segment the system by risk threat level. Through a secure web portal, Pipeline Risk IQ provides a visual display of these line segments to easily identify the areas with the highest risk threat and susceptibility to internal corrosion attack. Pipeline Risk IQ can store historical internal corrosion data to track and trend the effectiveness of treatment programs, pigging programs, and changes in pipeline operating conditions that have been implemented. This insight aids operators in establishing any additional preventive and mitigative measures that may be required to maintain flow efficiency and internal corrosion control and to meet regulatory requirements.
US regulations and Canadian standards require a System Wide Risk Assessment (SWRA) to be performed for all pipelines. Typically, an annual SWRA is performed by operators and used to identify high-risk sections. Appropriate identification of these high-risk sections is expected to avoid significant failures. Are SWRAs developed with the intention of avoiding these failures? How can we ensure an SWRA achieves these expectations?
This presentation examines the purpose of SWRA and takes a data driven approach to critically assess its effectiveness. In the 21st century, where vast amounts of data are being generated through inspections, patrolling, monitoring, and management systems, TC Energy’s approach seeks to leverage all the evidence or leading indicators of high risk and imminent failures, as available and as applicable. Furthermore, this presentation will describe the six‐year evolution of a quantitative SWRA approach with a built‐in continuous improvement cycle. Examples of learning from failures, assessments, and analytical studies and how they were incorporated into the SWRA are demonstrated. Additionally, the appropriate risk measures are covered, with the focus on how these risk measures assure safety and how they are applied across the system.
In recent years, the transition from an indexing model to a quantitative model and increased use of risk results in decision making have heightened focus on the importance of pipeline risk model results. Decision makers need actionable, credible risk results which can be rolled up to support decisions at the appropriate level. That may be the trap‐to‐trap piggable segment; it may be the valve‐to‐valve isolation segment if evaluating automatic shutoff valve retrofits.
We will review aggregation of risk from different threats, and aggregation of risk from multiple dynamic segments to a larger segment to support decision makers in decisions about assessment, remediation, and other preventive and mitigative activities.
This presentation will review different explored aggregation strategies and contrast them with our current methodology. We will also review how our aggregated risk results are then categorized to integrate with the company risk matrix decision methodology. We share our experiences in the spirit of openness to foster collaboration and sharing of best practices in this and other areas.
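As a simple illustration of the kind of roll-up discussed (one possible aggregation strategy with made-up numbers, not the authors’ actual methodology), per-threat annual failure rates on dynamic segments can be summed into a valve-to-valve expected-failure figure:

```python
# Hypothetical dynamic segments on one valve-to-valve section:
# (length_mi, {threat: annual failure rate per mile}); numbers are made up.
segments = [
    (0.5, {"ext_corrosion": 1.0e-4, "third_party": 2.0e-5}),
    (1.2, {"ext_corrosion": 5.0e-5, "third_party": 8.0e-5}),
    (0.3, {"scc": 3.0e-4}),
]

# Aggregate across threats and across dynamic segments by summing expected
# annual failures (failure rates are additive under a small-probability,
# independent-threats approximation).
by_threat = {}
total = 0.0
for length, rates in segments:
    for threat, rate in rates.items():
        ef = rate * length
        by_threat[threat] = by_threat.get(threat, 0.0) + ef
        total += ef

print("expected failures/yr by threat:", by_threat)
print(f"valve-to-valve total: {total:.2e}")
```

The `total` figure is what would then be binned against a corporate risk matrix category; other aggregation choices (maximum over segments, length-normalized rates) trade off differently and are the kind of alternatives the presentation contrasts.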
Presentation Area: Risk-appetite thresholds: acceptable risk, tolerable risk, ‘safe enough’, reliability targets & ALARP

Integrating quantitative reliability and risk (QRA) sciences into pipeline integrity management programs (PIMP) is growing rapidly as part of the path-to-zero strategy set by operators. A cornerstone of QRA is defining reliability/risk-appetite thresholds, along with what is meant by ALARP in this context. Recent publications by key operators, international standards, and the updated CSA Z662 Annex O 2019 offer useful guidelines for defining such thresholds.
This presentation discusses briefly a summary of these guidelines, the importance of setting operators’ specific integrity target reliability & risk levels, how to choose such targets, and how to determine the safety of a pipeline asset using QRA while keeping an eye on the estimated annual expected number of failures. Moreover, the presentation provides ideas to improve the communication of integrity risk & reliability along with selected targets.
Case studies representing historical failures against recommended targets are discussed in order to confirm the adequacy of such safety thresholds. The ALARP concept is introduced within the recommended scheme of targets, actions are defined for pipes within the ALARP region, and the augmentation of a risk-informed decision-making framework for safe operations is introduced. Finally, a real-life application of system-wide integrity assessment is discussed.
Pipeline risk algorithms can take in hundreds of variables and perform hundreds more calculations to determine the probability of failure for each segment of pipe. The number of dynamic segments generated on a moderately sized system can be a million or more. Typically, the end product is a table of information dozens of columns wide by possibly a million or more rows. Once a risk run is completed, an equally critical step is to QC and analyze the risk estimates. However, given the size of the data, this can be the most overwhelming part of the process and the point where a risk analysis can easily break down. Making sense of these tens of millions of numbers can be a daunting task for the unprepared.
In this presentation, several techniques will be demonstrated for exploring large data sets. Some of the topics covered will be outlier detection, correlations, and exploratory data plotting and interpretation, along with common pitfalls to avoid in an analysis. These techniques apply to any data set and are not limited to risk analysis. Since resources are finite, it is necessary to be able to quickly extract meaning from data. This allows for better informed decisions about how to manage risks to make pipelines safer.
Machine learning is a process to reveal useful patterns in data and is based on common methods found in linear algebra, descriptive and inferential statistics, and calculus. As data continues to become more available and accessible, machine learning is emerging as a foundational practice to support the validation of risk beliefs, measurement of root cause data and optimization of options to mitigate risk. Further, machine learning addresses the constraint that most if not all risk data is a sample of the larger population of potentially influencing data. Thus, since inferential statistics are a foundational component of machine learning, it is useful for the practitioner to understand confidence and hypothesis testing to more fully grasp the uncertainty of results which are based on data inferred from the larger population.
This paper will present the fundamental elements of the machine learning process using external corrosion as a case study. The process will demonstrate how threat susceptibility and severity can be learned and validated through actual observations by applying shallow and deep learning methods to data. The results are learned models which will be shown to support assessment of un-piggable pipelines, dig prioritization, inference of missing data, prioritization of data, optimization of inspection intervals, and mitigative decision-making. The case study will also show how learned models are explicitly validated through actual observations, an often-overlooked concept in existing risk practices.
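Since the paper stresses confidence and hypothesis testing as foundations, here is a minimal, hypothetical sketch (not the paper’s method) of a confidence interval on a corrosion-susceptibility rate estimated from a dig sample:

```python
from statistics import NormalDist

def wald_interval(successes, n, confidence=0.95):
    """Wald confidence interval for a proportion: the kind of inferential
    check needed when risk beliefs are learned from a sample of data."""
    p_hat = successes / n
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)
    half = z * (p_hat * (1.0 - p_hat) / n) ** 0.5
    return p_hat - half, p_hat + half

# Hypothetical dig-verification sample: 18 of 60 excavated joints showed
# active external corrosion (made-up numbers for illustration).
lo, hi = wald_interval(18, 60)
print(f"point estimate 0.30, 95% CI ({lo:.3f}, {hi:.3f})")
```

The width of the interval is the point: a model validated on 60 observations carries materially more uncertainty than the single point estimate suggests.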
Our gas pipeline risk model has used the potential impact radius (PIR) as described in ASME B31.8S Paragraph 3.2 to estimate fatalities, assuming all occupants of structures in the radius were fatalities in the event of rupture with ignition. Given that the PIR was set at the 1% chance of fatality distance, use of the PIR in this way provides little discrimination and adds another layer of conservatism to model results. We revisited the equations and concepts presented in GRI-00/0189, A Model for Sizing High Consequence Areas Associated with Natural Gas Pipelines to look for refinement opportunities in our consequence estimates.
After reviewing assumptions and inputs to the equation, we used two additional heat flux values to calculate radii for different mortality rates. Taken with the original PIR heat flux value, this yielded equations for r1, r50, and r100, radii for 1%, 50%, and 100% mortality rates. Using graphical analysis and our three radii, we set mortality estimates for three regions around each dynamic segment: the fatality circle, a 75% ring, and a 12% ring. Applying these estimations to populations around our assets allows for greater discrimination of consequence and more credible fatality estimates. Use of these refined figures is part of our consequence estimation continual improvement effort. Peer review and discussion of these approximations with underlying data presented is expected to drive further refinement.
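The radius calculation can be sketched from the GRI-00/0189 heat-intensity relation, r = sqrt(2348·p·d²/I), with p in psi, d in inches, I in Btu/hr/ft² and r in feet. The 5,000 Btu/hr/ft² threshold recovers the familiar 0.69·d·√p PIR; the two higher thresholds below are placeholders, since the abstract does not state the flux values actually used for r50 and r100:

```python
from math import sqrt

def impact_radius_ft(p_psi, d_in, flux_btu_hr_ft2):
    """Radius (ft) at which a given heat flux is reached after a gas
    pipeline rupture, per the GRI-00/0189 form r = sqrt(2348*p*d^2/I)."""
    return sqrt(2348.0 * p_psi * d_in ** 2 / flux_btu_hr_ft2)

p, d = 1000.0, 30.0   # MAOP (psi) and diameter (in), illustrative

# 5,000 Btu/hr/ft^2 is the ~1%-mortality threshold behind the standard PIR.
# The two higher flux thresholds are hypothetical placeholders.
for label, flux in [("r1", 5000.0), ("r50", 12000.0), ("r100", 30000.0)]:
    print(f"{label}: {impact_radius_ft(p, d, flux):.0f} ft")

# Sanity check: at 5,000 Btu/hr/ft^2 this closely matches the familiar
# PIR = 0.69 * d * sqrt(p) of ASME B31.8S.
print(f"0.69 * d * sqrt(p) = {0.69 * d * sqrt(p):.0f} ft")
```

Because r scales as 1/√I, the higher-mortality radii are substantially smaller than the PIR, which is what gives the ringed approach its added discrimination.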
Pipeline risk assessments are an essential tool for pipeline operators to effectively manage and mitigate hazards along their pipeline assets. Historically, risk assessments have used generalized approaches to quantify the consequence of a potential loss of containment from a pipeline. This approach has included numerous assumptions, including: simplified estimates of the volume of oil that could be released from the pipeline, not accounting for the migration of the oil overland and in surface water features, and simplified estimates for cleanup costs that don’t account for the trajectory and fate (behavior and weathering) of the released product and the environments impacted. With an increased focus on the environmental impacts of a release, more sophisticated approaches have been developed that utilize advanced outflow calculations and spill modeling to reduce the uncertainties associated with potential releases.
This paper describes the evolution of an environmental consequence model and the implementation within a North American liquid operator’s risk program. A multi‐phased outflow calculation process is used to determine the volume of potential product releases by utilizing specific pipeline parameters, including: the time to detect that a release has occurred while oil is flowing at the pumped flow rate, the time to shut down the pumps, the valve closure times and the overall volume predicted to drain under gravity once the ruptured pipeline segment has been completely isolated by valves. These outflow volumes are then utilized in a spill model to predict the trajectory and fate of hypothetical accidental releases over the land surface as well as in surface watercourses and waterbodies. Outputs from the model include the pathway of the release, the time duration of the release along that path, and the volume of oil reaching or remaining in each location within the environment at the end of the simulation. Results from the oil spill modeling are used to estimate the cleanup costs for each hypothetical release based on the environments impacted and volume of oil remaining on the land surface and reaching a watercourse.
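A deliberately simplified sketch of the multi-phased outflow arithmetic described above (the phase structure and all numbers are illustrative assumptions; real models integrate the detailed hydraulics):

```python
def release_volume_bbl(flow_bbl_per_min, t_detect_min, t_shutdown_min,
                       valve_closure_min, drain_down_bbl):
    """Simplified multi-phase outflow: full-rate release until detection and
    pump shutdown, a reduced-rate phase while valves close, then gravity
    drain-down of the isolated segment (phase structure is an assumption)."""
    full_rate_phase = flow_bbl_per_min * (t_detect_min + t_shutdown_min)
    # Placeholder: assume roughly half-rate outflow during valve closure;
    # real models integrate the decaying hydraulic profile.
    valve_phase = 0.5 * flow_bbl_per_min * valve_closure_min
    return full_rate_phase + valve_phase + drain_down_bbl

# Illustrative numbers only.
vol = release_volume_bbl(flow_bbl_per_min=20.0, t_detect_min=10.0,
                         t_shutdown_min=5.0, valve_closure_min=3.0,
                         drain_down_bbl=500.0)
print(f"estimated release: {vol:.0f} bbl")
```

Even in this crude form, the decomposition makes clear which levers (detection time, valve placement, elevation-driven drain-down) dominate the outflow volume that the spill model then traces overland and downstream.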
The improved analytics ultimately provide a greater granularity on the potentially impacted environmental receptors, leading to a more accurate risk profile and a framework for risk informed decision making. In addition to the improvements to risk management, results from the consequence assessment can also be used to improve integrity management and emergency response planning.
An in-line inspection (ILI) based threat management program (referred to as an “ILI-based program” hereafter) is the most successful path to managing the time-dependent threats that are prevalent on the majority of existing pipeline systems. It has proven to reduce industry-wide failure rates but has not prevented all post-ILI failures.
ILI-based programs can give very different failure performance and mitigation results depending on the ILI tools, assessment models, safety factors, and assessment methodologies. Historical evidence and data analytics show these differences in both performance and mitigation results. TC Energy (TCE) has improved its corrosion management program, its largest threat management program, to be safer and more economical by understanding and accounting for uncertainties through reliability-based methods and incorporating learning from incidents, as evidenced by the continuous reduction in failure rates.
In conjunction, TCE has also demonstrated that using more accurate and precise methodologies, such as improved defect assessment models (e.g. Psqr model for corrosion assessment) and defect‐specific assessment methods, can reduce the unnecessary remediation actions without compromising safety. These developments make ILI based programs safer and more economical.
TCE has also developed methodologies to monitor performance (or effectiveness) of ILI based programs, including failure rate trending, post ILI failure examination, and monitoring digs. Therefore, the performance of the ILI based program can be monitored to enable continuous learning and improvement. All these steps drive cost efficiency, unnecessary cost avoidance, and thus efficient programs that avoid failures with reduced costs. How each of these choices prudently optimize programs will be demonstrated through data analytics.
AC interference is a well-known and documented threat to people and assets, but it is gaining far more attention these days as our country becomes more urbanized and power lines and pipelines are forced to operate in ever closer proximity to each other. Proper consideration of AC interference within your risk model should be more than just a single risk factor within the external corrosion threat. It requires its own focused program and set of risk factors driving a comprehensive monitoring and mitigation strategy. We’ll show you how to build a proper ACI program as required by NACE’s recently published standard: SP 21424-2018.
An AC Interference Threat Assessment allows operators to identify segments of their pipelines that may be at a higher risk of AC Interference due to inductive and resistive coupling, which are the main contributors to AC corrosion. At a high level, the process has the following steps:
The California Code of Regulations, Title 14, Chapter 4, Section 1726.3 requires a quantitative risk assessment of the probability of threats and hazards. Gas utilities are already using a variety of qualitative and semi-quantitative risk assessment approaches. However, factors such as the aging of the storage system, greater demand for gas, a changing regulatory climate, and public perception have necessitated a renewed look at risk management methods. We will present a quantitative risk assessment methodology that combines Bow Ties (BT) and Bayesian Networks (BN). The advantage of a bow tie approach is its ease of communication to stakeholders, but it lacks the ability to quantify risk. The advantage of the BN approach is its ability to quantify interactive threats, but it can be very complex to communicate. The combined methodology creates a Barrier Based (i.e. BT) Quantitative (i.e. BN) Risk Management Approach for UGS that is easy to communicate while capturing the complexity of the underlying system.
The methodology was developed, tested and demonstrated using inputs from two Californian gas utility partners for depleted-reservoir UGS systems and was sponsored by the California Energy Commission. The system is divided into two major components: wellhead and sub-surface. The modeling approaches for these two sub-systems are somewhat different because the wellhead consists mostly of discrete components (valves, etc.), whereas the sub-surface consists of continuous components, such as production tubing and casing, in combination with discrete components such as packers. The boundary of the sub-surface system considered stops at the borehole (i.e. the geosphere is not modeled). The boundary of the wellhead terminates with the surface Christmas tree (i.e. pipelines, gas compressors and treatment units are not considered). The model focuses mainly on failure probability, as this is considered to be the weakest aspect of current quantitative risk assessment of UGS. The presentation will describe the overall approach and demonstrate the methodology using several hypothetical scenarios. The BN model demonstrates the interactions among different factors that may be difficult to integrate into a single analytical model. For example, the probability of wellhead mechanical damage is increased by wall thinning from internal corrosion, which is driven by the oxygen concentration in the gas stream. The second advantage of the Bayesian network model is its ability to deal with sparse data sets, which are unfortunately the norm for most UGS systems.
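The oxygen/wall-thinning example can be caricatured as a three-node chain computed by exact enumeration (all probabilities below are hypothetical placeholders, not values from the study), showing how conditioning on upstream evidence raises the failure probability:

```python
from itertools import product

# Toy three-node chain inspired by the wellhead example above:
# O2 (oxygen in gas) -> T (wall thinning by internal corrosion) -> F (failure).
# All probabilities are hypothetical placeholders.
p_o2 = {True: 0.1, False: 0.9}                  # P(O2 high)
p_thin_given_o2 = {True: 0.4, False: 0.05}      # P(T = true | O2)
p_fail_given_thin = {True: 0.02, False: 0.001}  # P(F = true | T)

def p_failure(o2_evidence=None):
    """Exact enumeration of P(F = true), optionally with evidence on O2."""
    total = 0.0
    for o2, thin in product([True, False], repeat=2):
        if o2_evidence is not None and o2 != o2_evidence:
            continue
        w = 1.0 if o2_evidence is not None else p_o2[o2]
        w *= p_thin_given_o2[o2] if thin else 1.0 - p_thin_given_o2[o2]
        total += w * p_fail_given_thin[thin]
    return total

print(f"P(failure)           = {p_failure():.5f}")
print(f"P(failure | O2 high) = {p_failure(True):.5f}")
```

A production BN for a UGS system would have many more nodes and learned conditional tables, but the mechanics are the same: evidence entered anywhere in the network propagates to the failure probability.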
A quantitative risk-based approach has been shown to provide a rational framework for making integrity management decisions that improve safety and reduce repair costs for buried pipelines, and there is a significant incentive to apply such methods to all threats, including geohazards. A consistent approach applied to all threats increases the ability to identify and mitigate dominant threats and supports allocating resources where they are expected to achieve the highest reduction in risk. However, geohazards are not amenable to the standard simulation-based methods that are typically used for other threats such as corrosion and cracking.
Bayesian networks (BNs) offer an approach that can overcome the challenges specific to geohazards while being consistent with the probabilistic methods used for other threats. They provide a rational framework to estimate failure probabilities and provide a means for incorporating information from varied sources, such as subject matter experts (SMEs), slope monitoring measurements (e.g., slope inclinometers, strain gauges), depth of cover surveys, inertial tool inline inspections, and results from finite element analyses. This presentation focuses on a simple BN that was developed in a study to demonstrate the integration of multiple data sources in order to predict the probability of pipeline failure due to slope movement.
Risk-based methods are increasingly being used to support pipeline design and integrity management decisions, and quantitative analysis methods are considered to provide the most objective and defensible basis for risk-informed decision making. In this regard, the decision-making process is greatly facilitated by having consensus-based risk acceptance criteria against which the results of a line-specific risk assessment can be compared.
This presentation describes a consequence measure that has been developed to quantitatively scale the impact severity of releases from pipelines transporting low-vapor-pressure hydrocarbon liquids for which the dominant concern is environmental damage and the socioeconomic impact on people in the affected area. The presentation also introduces environmental risk acceptance criteria that have been developed for pipelines based on a calibration exercise carried out using the consequence measure.
The proposed environmental risk acceptance criteria, and the pipeline spill consequence measure upon which they are based, are currently being considered for inclusion in the next edition of the Canadian Pipeline Design Standard Z662. This presentation is intended as a companion to “Proposed Life Safety Risk Acceptance Criteria for Canadian Pipelines,” which outlines proposed safety risk criteria that are also under consideration for inclusion in CSA Z662.
Risk‐based methods are being used to ensure safety and optimize pipeline design and integrity management decisions. In order to achieve a goal, it needs to be defined and measurable. The goal of safety needs a clear definition and measure so that safety can be achieved purposefully. Quantitative risk assessment methods and related criteria provide the clear definition and measure for safety. The decision‐making process is greatly facilitated by having consensus‐based risk acceptance criteria against which the results of a line‐specific risk assessment can be compared. This presentation describes the risk measures chosen to represent individual and societal safety considerations and also introduces related risk acceptance criteria.
The proposed life safety risk acceptance criteria are currently being considered for inclusion in the next edition of the Canadian Pipeline Design Standard Z662. This presentation is intended as a companion to “Proposed Environmental Risk Acceptance Criteria for Canadian Pipelines,” which outlines proposed environmental risk acceptance criteria that are also under consideration for inclusion in CSA Z662.