The Digital Pipeline Solutions Forum
DPSF
May 18-19, 2022 | Houston

Program

Opening Perspectives

8:30 1. Unlocking Transformational Value
    Shahani Kariyawasam, TC Energy
9:00 2. Addressing Digital – the VirtualTDC
    Cliff Johnson, PRCI, John O’Brien, itcSkills
9:30   Coffee break

Automation of Assessment Processes

10:30 3. Neural Networks for Pipeline Joint Detection
    Mike Byington, INGU
11:00 4. How Automated Data Normalization & Alignment Helps Operators Make Better Decisions
    Jordan Dubuc, Michael Murray, OneBridge Solutions
11:30 5. Pipeline Vintage Prediction Using Machine Learning
    Joel Anderson, RSI, Peter Veloo, PG&E
12:00   Lunch

Evaluating & Qualifying Machine Learning Processes

1:30 6. Value Added Pipeline Applications Using High Fidelity Fiber Optic Monitoring & Machine Learning
    Mike Hooper, Steven Koles, Ehsan Jalilian, John Hull, Hifi Engineering
2:00 7. Considerations for Developing and Deploying Machine Learning Models for Pipeline Risk and Integrity Assessment
    Daryl Bandstra, Shawn Smith, Integral Engineering
2:30 8. The Benefits & Challenges of Implementing Machine Learning Processes in Support of the Digital Pipeline
    Mike Gloven, Pipeline-Risk
3:00   Coffee
3:30 9. Predicting External Corrosion Growth Rate Distributions for Onshore Pipelines for Input Into Probabilistic Integrity Assessments
    Steven Carrell, James White, Katherine Taylor, Jonathan Martin, Simon Irvine, Roland Palmer-Jones, ROSEN Group
4:00 10. Predictive Corrosion Modelling – Rise of the Machines
    Mike Westlund, Parth Iyer, Dynamic Risk
4:30 11. Preventing Common Pitfalls with Monte Carlo Simulations in Pipeline Reliability Engineering
    Gabriel Langlois‐Rahme, Miaad Safari, Vincent Lacobellis and Lawrence Cheung, Enbridge Gas
5:00   Cocktail reception in the Exhibition Hall
     

DAY 2

Opening Perspective – Day 2

8:30 12. Digital Transformations Required to Support the Industry 4.0 Era
    Brad Eichelberger, DNV

Flow Modeling

9:00 13. Real Time Networks
    David Gower, Anthony Gilbert, DNV
9:30   Coffee

Building the Digital Twin

10:30 14. Driving and Optimizing Integrity Performance Using a Comprehensive Digital Twin
    Sergiu Lucut et al, TC Energy
11:00 15. Geospatial Technology – The Foundation for Creating a Pipeline Digital Twin
    Jeff Allen, Esri
11:30 16. Digitizing Pipeline Construction with the Latest in Mobile, Machine Learning and Blockchain Technologies
    Ahbay Chand, Petro IT
12:00   Lunch

Building the Digital Twin - 2

1:30 17. Relational Database Framework for Integrity Management Programs
    Martin Glomski, Jeffrey Kornuta, Exponent, Isabelle Poulin, Peter Veloo, PG&E
2:00 18. Improved Pipeline Monitoring, Leak Detection and Compliance using Automated Analysis of Advanced Satellite Data
    Peter Weaver, OSK
2:30   Coffee
3:30   Building the Digital Twin – Round Table Discussion
4:30   End of conference

 

Organized by:

Clarion Technical Conferences     

Our world is transforming and rapidly becoming digital, and digital twins are being used to abstract and model everything. For pipeline operators, they improve business processes, reduce risk, optimize operational efficiency, and enhance decision-making by using automation to predict outcomes.

Digital twins provide greater context to solve business challenges by creating relationships and streamlining workflows. Geographic information system (GIS) technology is foundational for any digital twin. This presentation will explore how geospatial technology interconnects information, systems, models, and behaviors with spatial context, creating holistic digital representations of pipeline assets that support operational efficiencies and drive bottom-line savings for the organization.

Data capture and integration

Real-time visualization

Sharing and collaboration

Analysis and prediction

Though the method of recording as-built data during pipeline construction has evolved over the years, it still falls far short of what technology can deliver today. If we are honest, “paper” still plays a significant role in data collection and inspection, and what the industry at times refers to as digital is a PDF document. However, some organizations have recently taken the lead in changing this status quo and moving towards a true digital twin of the asset. To make such a digital twin possible, they have realized it is not enough to organize data on the asset once it has been fully constructed and commissioned; instead, processes need to be put in place up front. For best results, these processes need to be part of detailed engineering and the procedures laid out therein. Most importantly, all stakeholders in the construction of an asset need to benefit from the digital twin and the process of creating it; in other words, there needs to be an incentive for all teams involved to make such an initiative successful. This paper aims to deliver a prescription for exactly how this can be achieved, with examples from some of the largest major capital projects in recent times. It will explain how mobile technologies and machine learning can be applied in the field to provide real-time decision support and traceability. Data recorded on every minute quality detail of material, fabrication, and installation leads to a record set that demonstrates the digital twin’s true usability. The paper will also explain how blockchain technology (commonly associated only with cryptocurrency) can play a pivotal role in construction projects today and in the future.


Distributed fiber optic sensing has been gaining significant momentum in the pipeline industry. The primary application of this technology has been preventative leak detection. However, new value-added applications are now emerging with the potential to deliver extra value to pipeline operators, over and above leak detection.

Different fiber optic sensing technologies exist and can be positioned appropriately for various applications. One of these specialized approaches, known as high fidelity distributed sensing (HDS), uses a different interferometric technique to achieve a very high signal-to-noise ratio (SNR). Along with high SNR, HDS also provides integrated acoustic, temperature, and strain/vibration sensing, and is optimized to do so over long distances without degradation of fidelity. This makes HDS technology particularly appropriate for preventative pipeline leak detection as well as a number of other value-added applications.

Case studies will be provided to showcase the value of using artificial intelligence and supervised/unsupervised machine learning to explore new frontiers in pipeline monitoring, including a variety of “value added” applications such as flow anomaly detection and the estimation of flow rate, pressure, and density. Other applications, such as pig, vehicle, and train detection/filtering, tracking, and analysis, will also be presented.
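
As a rough illustration of the kind of unsupervised machine learning referred to above (not Hifi Engineering's actual processing chain), the sketch below applies an anomaly detector to summary features computed from windows of a single sensing channel; the synthetic data, the three features, and the contamination level are all assumptions.

```python
# Illustrative sketch only: unsupervised anomaly detection on windowed
# distributed-sensing data, in the spirit of the applications described above.
# The window features and thresholds are assumptions, not the authors' actual
# pipeline-monitoring pipeline.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic stand-in for one acoustic channel sampled along the fiber:
# 10,000 windows x 3 summary features (e.g. RMS energy, peak frequency, kurtosis).
X = rng.normal(size=(10_000, 3))
X[::500] += rng.normal(loc=4.0, size=(20, 3))    # inject sparse "flow anomalies"

model = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = model.predict(X)                          # -1 = anomalous window
print(f"{(flags == -1).sum()} windows flagged for analyst review")
```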

Effective pipeline risk assessment and the associated integrity management requires a lot of information, but in our experience there are two key inputs. The first is an understanding of the current condition of the pipeline. The second is reliable information on the rate of deterioration. Corrosion is one of the main integrity threats that the industry has to manage. In-line inspection that collects data on metal loss has allowed the industry to effectively manage both internal and external corrosion for many pipelines. However, challenges remain – two of which are: understanding the condition of pipelines that are very difficult to inspect, and predicting how rapidly corrosion defects may grow to a critical size.

We have previously investigated data-driven methods for predicting the condition of pipelines that have not been inspected and developed the concept of ‘Virtual ILI’, with low-resolution and high-resolution options, using predictor variables such as age, rainfall, coating, slope, soil type, etc. More recently, we have again turned our attention to corrosion growth rates (CGRs). In earlier studies, we looked into best practice in the prediction and application of CGRs based on historical inspection data, and concluded that per-segment upper-bound rates offered the best compromise of safety and efficiency, with per-feature rates being clearly unreliable and unsafe. In our latest work we are aiming to predict CGRs where there is little past inspection data. This work involves generating pipeline corrosion growth rate probability distributions on a per-segment basis as inputs into probabilistic assessments, which are considered best practice within the industry. In this study, we utilise data from ROSEN’s Integrity DataWarehouse (IDW), including associated geospatial and environmental metadata. The work leverages applied data science practices and begins by refining the segmentation of the pipelines using unsupervised clustering techniques, followed by a detailed Exploratory Data Analysis (EDA) examining the CGR distributions and the impact the aforementioned predictor variables may have on them.

The study aims to provide generalizable CGR parameters for use in the probabilistic assessment of pipelines where previous inspection data is either limited or not available.
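
A minimal sketch of the general idea, not ROSEN's actual workflow: joints are clustered on environmental covariates and a CGR distribution is fitted per cluster for use as a probabilistic-assessment input. The feature set, the cluster count, the lognormal choice, and the data are all assumptions for illustration.

```python
# Hedged sketch: cluster joints on environmental covariates, then fit a
# per-cluster corrosion-growth-rate (CGR) distribution suitable as an input
# to a probabilistic assessment. All numbers below are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from scipy import stats

rng = np.random.default_rng(1)
n = 5_000
features = np.column_stack([
    rng.uniform(0, 60, n),       # pipeline age, years
    rng.uniform(300, 1500, n),   # annual rainfall, mm
    rng.integers(0, 4, n),       # coded soil type
])
cgr_mmpy = rng.lognormal(mean=-2.5, sigma=0.6, size=n)    # observed CGRs, mm/yr

labels = KMeans(n_clusters=4, n_init=10, random_state=1).fit_predict(features)

for k in range(4):
    sample = cgr_mmpy[labels == k]
    shape, loc, scale = stats.lognorm.fit(sample, floc=0)
    p95 = stats.lognorm.ppf(0.95, shape, loc=loc, scale=scale)
    print(f"segment cluster {k}: mu={np.log(scale):.2f}, sigma={shape:.2f}, "
          f"95th percentile CGR = {p95:.3f} mm/yr")
```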

How do gas distribution companies make best use of smart meter data and produce an approach to network modelling that supports real-time operation and meets the challenges of understanding flows across all conditions? Collecting detailed data, at a suitable granularity, for all consumers may be costly both in terms of metering and the handling of data. The Real-Time Networks project offers an alternative practical approach to the problem.

Real Time Networks is an innovation project that involved the collection of a large volume of detailed demand data for an optimal number of individual gas consumers, and the use of this data to create a novel real-time modelling approach that will enable a gas network for the future that is flexible, secure, cost effective and safe. The objective was to provide a detailed understanding of demand patterns for all typical gas consumers, and their application in network modelling, to provide the basis for optimal gas network design and operation.

A key part of the project was the installation of flow loggers (recording data at a 6-minute granularity) at approximately 1200 domestic and non-domestic sites in the South-East region of the UK gas network, together with the installation of modern and novel flow and gas quality sensors to create the Medway Test Area network (a sub-area of the full South-East zone). To pass data from the sensors and loggers, a cloud-based communications system was developed as a proof of concept for real-time data logging technologies and interfaces.

Consumer logger data was collected for a period of 2+ years, and this detailed information was used to develop the Real Time Networks Demand Model. This model facilitates peak, off-peak and real-time modelling at any level, from individual consumers to the full gas network for an entire area or country, and it has been optimised through millions of targeted test runs on actual data collected for the project to ensure its accuracy across the whole range of site types and group sizes. In particular, the Demand Model was designed to carry out the following functions:

The models developed in the projects were implemented in a working prototype delivery system, which incorporates both demand modelling and network modelling under any conditions specified by the user (both peak and off-peak). Models were created and run within this system for the Medway Test Area and a number of other networks in the South-East region.
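
As a small, hedged illustration of working with 6-minute logger data of this kind (not the project's Demand Model itself), the sketch below builds a mean diurnal profile per site and compares a group's coincident peak with the sum of individual site peaks; the file name and column names are assumptions.

```python
# Hedged sketch: aggregate 6-minute logger records into diurnal demand
# profiles and a diversified peak for a group of consumers.
import pandas as pd

# Assumed columns: site_id, timestamp (6-minute cadence), flow_m3h
df = pd.read_csv("logger_data.csv", parse_dates=["timestamp"])

# Mean diurnal profile per site (240 six-minute slots per day).
df["slot"] = df["timestamp"].dt.hour * 10 + df["timestamp"].dt.minute // 6
profile = df.groupby(["site_id", "slot"])["flow_m3h"].mean().unstack("slot")

# Diversified group demand: sum the sites first, then take the peak slot,
# which is lower than simply adding each site's individual peak.
group_peak = profile.sum(axis=0).max()
sum_of_peaks = profile.max(axis=1).sum()
print(f"group coincident peak: {group_peak:.1f} m3/h "
      f"(vs {sum_of_peaks:.1f} m3/h if peaks were simply added)")
```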

Pipeline operators regularly employ risk-based inspection (RBI) decision making to prioritize expenditures. The goal when using an RBI process to drive asset integrity is always reduction of risk to as low as reasonably practicable (ALARP). Traditional corrosion risk modelling is done in one of two ways – qualitative or quantitative. The qualitative algorithm involves assigning numerical scores which represent failure susceptibility to factors that are known to influence internal and external corrosion propagation on pipelines. This includes factors such as external coating type, soil type, maintenance pigging, chemical inhibition programs, and so on. The quantitative algorithm typically evaluates pipeline inspection data using a probability of exceedance (POE) method to generate a probability of failure (POF) for each metal loss anomaly reported by the pipeline inspection tool.

This paper presents a machine learning (ML) approach to predict areas and extent of metal loss corrosion in an effort to quantify qualitative risk factors such as prevention and mitigation (P&M) activities. The results showed promise, with high accuracy and 90% confidence for the axial location and depth of both internal and external metal loss anomalies. This, in turn, combined with corrosion growth analysis, can help pipeline operators develop robust yet accurate long-term mitigation plans for their pipeline assets while prioritizing the risk reduction achieved by implementing additional P&M measures. Supporting cases are discussed to help explain the intended use of this algorithm and the interpretation of the results.
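
For readers unfamiliar with the probability-of-exceedance style of calculation mentioned above, the sketch below shows one simple Monte Carlo formulation for a single metal-loss anomaly. The limit state (depth exceeding 80% of wall thickness at the next inspection) and all input distributions are illustrative assumptions, not the paper's model.

```python
# Illustrative POE-style sketch for one reported metal-loss anomaly.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
wt_mm = 7.1                                                  # nominal wall thickness
depth_now = rng.normal(0.35 * wt_mm, 0.08 * wt_mm, n)        # ILI sizing uncertainty
cgr_mmpy = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=n)  # growth rate, mm/yr
years = 7.0                                                  # reinspection interval

depth_future = depth_now + cgr_mmpy * years
poe = np.mean(depth_future > 0.8 * wt_mm)
print(f"POE (depth > 80% WT in {years:.0f} years) ~ {poe:.2e}")
```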

Conventional ILI tools use odometer wheels to determine the location of identified defects. In addition, above-ground markers (AGMs) are typically used to confirm and potentially correct for odometer wheel slippage. This paper will address how to accurately determine defect locations with a free-floating unconventional ILI tool relying solely on the data acquired by the tool.

Although multiple sensors and methods play a role in the defect localization, this paper will specifically focus on a neural network that uses the data from a 3-axis MEMS magnetometer as input to identify joints with high probability. The results from a neural network trained with 90,898 joints from 111 pipelines and applied to a pipeline outside of the training set will be presented.
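
A hedged sketch of the general approach, not INGU's trained network: a small neural network classifies fixed-length windows of 3-axis magnetometer data as "contains a joint" or "pipe body". The synthetic data, window size, and network shape are assumptions for illustration.

```python
# Toy joint-detection sketch on synthetic magnetometer windows.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n_win, win_len = 4_000, 64             # windows of 64 samples x 3 axes
X = rng.normal(size=(n_win, win_len, 3))
y = rng.integers(0, 2, n_win)          # 1 = window contains a joint
X[y == 1, win_len // 2, :] += 3.0      # joints leave a localized magnetic signature

X_flat = X.reshape(n_win, -1)          # flatten each window for a dense network
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, test_size=0.25, random_state=3)

clf = MLPClassifier(hidden_layer_sizes=(128, 32), max_iter=300, random_state=3)
clf.fit(X_tr, y_tr)
print(f"hold-out accuracy: {clf.score(X_te, y_te):.2f}")
```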

Data continues to revolutionize the asset integrity management process at TCE. Digitization, efficient storage, accurate alignment, and improved accessibility of expansive data sets have paved the way for next-generation data science and advanced analytic methods to optimize and improve asset integrity performance.

TCE safely operates a network of over 60,000 miles of natural gas and liquids pipelines. Every segment (e.g. joint) within the system has a series of records and data associated with it. To ignite innovation and improve performance, TCE created a fit-for-purpose digital twin of the pipeline focusing on the essential parameters and overlays required for safe and effective pipeline integrity management. Each pipe segment in the digital twin contains approximately 300 data fields, including continuous and discrete information such as comprehensive pipeline properties, environmental characteristics, spatial overlays, design information, construction records, operational data, and maintenance and assessment records.

Integrating all this data into a usable solution is a challenge due to its size, availability, heterogeneity, and level of digitization. All raw data used for the digital twin had to undergo several processing steps, such as extract, transform, and load (ETL) processes designed specifically for each parameter to validate, cleanse, manipulate, and align the data into a usable format. Once processed, the pipeline data was integrated using a dynamic segmentation process in which each piece of pipe with a differing essential parameter is delineated, creating a referenceable and useful digital twin.
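
A minimal sketch of dynamic segmentation on a linearly referenced pipeline, not TCE's production ETL: each attribute is an interval event (begin, end, value), and the pipe is re-cut wherever any attribute changes. Attribute names and values below are placeholders.

```python
# Dynamic segmentation sketch: merge interval-event tables into unique segments.
events = {
    "wall_thickness_mm": [(0, 1200, 7.1), (1200, 5000, 8.7)],
    "coating":           [(0, 3000, "FBE"), (3000, 5000, "coal tar")],
    "soil_type":         [(0, 800, "clay"), (800, 5000, "sand")],
}

# Collect every breakpoint, then build the merged segments between them.
breaks = sorted({m for spans in events.values() for (b, e, _) in spans for m in (b, e)})
segments = []
for begin, end in zip(breaks[:-1], breaks[1:]):
    mid = (begin + end) / 2
    attrs = {name: next(v for (b, e, v) in spans if b <= mid < e)
             for name, spans in events.items()}
    segments.append({"begin_m": begin, "end_m": end, **attrs})

for seg in segments:
    print(seg)
```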

TCE continues to successfully harness the power of the digital twin by building new analysis and assessment tools to derive integrity and business intelligence, such as TCE’s fully quantitative pipeline risk assessment program, suite of risk-reduction and risk-forecasting tools, and reliability-based threat assessment programs. The resulting actionable knowledge from these programs has been tested and applied, proving exceptionally valuable. This has led to a strong data-driven decision-making culture across the organization, guiding and optimizing TCE’s integrity performance and ensuring continued safe operation of TCE’s pipeline system.

Digitalization is on the doorstep of an industry already facing challenges from aging assets, lean operating budgets, tighter regulations, new fuel mixes, and a generational change in the workforce. Operators are faced with complex investment decisions built on emerging technologies. The purpose of my presentation is to highlight some key elements to be considered during this transformation.

Inline Inspection (ILI) reports are the basis for much integrity work, providing a snapshot of the pipeline at one point in time. While some tasks can be accomplished with just one ILI, much more information can be gained by a detailed alignment of current and past ILIs. Today’s most advanced alignment software will automatically align a full history of ILI files to each other, compensating for any repair sections, routing changes, or changes in flow direction. Additionally, complete pit-to-pit matching of every single anomaly call provides a detailed history of each defect on a pipe.

With this complete alignment and matching, whole layers of information and error correction become available instantly that would have been impossible in the past. Growth or apparent nucleation trends can be examined in granular detail across multiple ILIs, instead of just the coarse difference between two ILIs in an area. Complete, automated alignment across multiple ILIs extracts much more information from users’ existing ILIs than has previously been possible.
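
A hedged sketch of pit-to-pit matching between two already-aligned ILIs, not OneBridge's algorithm: anomalies are matched greedily on distance from the upstream girth weld and o'clock orientation, within a tolerance. The record layout and tolerances are assumptions, and clock wrap-around at 12 is ignored for brevity.

```python
# Toy pit-to-pit matching between two inspections of the same joint.
anoms_2015 = [  # (id, metres from upstream weld, clock hours, depth %WT)
    ("A1", 2.31, 6.0, 18), ("A2", 7.80, 3.5, 24), ("A3", 11.02, 12.0, 9),
]
anoms_2022 = [
    ("B1", 2.35, 6.1, 25), ("B2", 7.74, 3.4, 31), ("B3", 14.60, 9.0, 11),
]
AXIAL_TOL_M, CLOCK_TOL_H = 0.15, 1.0

matches, used = [], set()
for aid, ax_a, ck_a, d_a in anoms_2015:
    best = None
    for bid, ax_b, ck_b, d_b in anoms_2022:
        if bid in used or abs(ax_a - ax_b) > AXIAL_TOL_M or abs(ck_a - ck_b) > CLOCK_TOL_H:
            continue
        if best is None or abs(ax_a - ax_b) < abs(ax_a - best[1]):
            best = (bid, ax_b, d_b)
    if best:
        used.add(best[0])
        matches.append((aid, best[0], best[2] - d_a))

for old, new, growth in matches:
    print(f"{old} -> {new}: apparent growth {growth:+d} %WT between inspections")
```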

A run comparison (runcom) of two successive ILI inspections is the most common way to infer corrosion growth rates. The runcom sequence matching problem aims to zero out the odometry error of known construction feature locations - typically welds - so that corrosion anomalies can be compared pipe joint by pipe joint. This is a tedious, iterative sequence matching process developed in the era of paper records with the weakness that it provides no affirmative measure of correctness. To maximize the confidence in a runcom, one must examine as many pairs as possible and minimize the likelihood of mismatches.

This paper presents a novel way to compare pipeline construction features using pipe tallies. It uses the accumulated bias in the position estimate to develop an affirmative measure of correctness called the “distortion curve”, generated with a machine learning algorithm. With this curve, the correctness of a run comparison can be quickly evaluated to facilitate locating mismatched pairs. This approach is beneficial to pipeline operators because it directs attention to areas with a high variance, where features are more likely to be mismatched, and supplies confidence that the sections with low variance have been correctly matched.

In this paper, an example 13-mile pipeline with 4,000+ construction features will be used to demonstrate that feature matching in advance of anomaly matching can be completed in a short amount of time with a high degree of confidence.
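
A rough sketch of the idea behind an odometer bias curve (the paper's machine-learning "distortion curve" is more sophisticated): after pairing girth welds from two pipe tallies, the running difference in odometer position should vary smoothly, so abrupt jumps point to likely mismatched pairs. The tallies below are synthetic and the flagging rule is a simple stand-in.

```python
# Synthetic demonstration of flagging suspect weld pairs from accumulated bias.
import numpy as np

rng = np.random.default_rng(4)
true_pos = np.cumsum(rng.uniform(11.0, 13.0, 400))               # ~12 m joints
odo_run1 = true_pos + np.cumsum(rng.normal(0, 0.01, 400))         # drifting odometer
odo_run2 = true_pos + np.cumsum(rng.normal(0, 0.01, 400)) + 0.02 * np.arange(400)
odo_run2[250:] += 12.0       # simulate a one-joint mismatch from weld 250 onward

bias = odo_run2 - odo_run1                                        # "distortion" proxy
jump = np.abs(np.diff(bias))
suspect = (np.where(jump > jump.mean() + 4 * jump.std())[0] + 1).tolist()
print(f"weld pairs flagged for review: {suspect}")
```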

As part of the regulations published in October of 2019, PHMSA requires operators that do not have sufficient records of pipeline material properties to conduct materials verification in accordance with §192.607. The materials verification regulations in §192.607(b) stipulate that the records established under this section “must be maintained for the life of the pipeline and be traceable, verifiable, and complete [(TVC)].” Therefore, operators should seek to implement a comprehensive and robust data management solution to store the massive amount and varied types of data that are collected during this process.

As part of its materials verification program, Pacific Gas and Electric Company (PG&E) has collected more than 25,000 materials verification data points. This presentation will describe PG&E’s database and data management solution for archiving relevant materials data as it pertains to nondestructive examination (NDE), flaw detection NDE, and laboratory testing data for transmission assets. This implementation increases the capabilities of the data, enables long-term data retention in support of regulatory requirements, and aligns with industry-wide data-stewardship efforts for integrity management. Details of the relational database architecture will be provided, along with specific examples of how this architecture both supports materials verification efforts and facilitates data science approaches for integrity management programs.
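
The sketch below shows a hypothetical, heavily simplified relational layout (not PG&E's actual database) in which each NDE result links back to the excavation and pipe feature it came from, keeping records traceable to their source documents. Table and column names are illustrative only.

```python
# Minimal relational-schema sketch for materials verification records.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE pipe_feature (
    feature_id    INTEGER PRIMARY KEY,
    line_name     TEXT NOT NULL,
    station_m     REAL NOT NULL,            -- linear reference
    nominal_od_in REAL,
    install_year  INTEGER
);
CREATE TABLE excavation (
    dig_id        INTEGER PRIMARY KEY,
    feature_id    INTEGER NOT NULL REFERENCES pipe_feature(feature_id),
    dig_date      TEXT NOT NULL,
    record_ref    TEXT                       -- pointer to the TVC source document
);
CREATE TABLE nde_measurement (
    meas_id       INTEGER PRIMARY KEY,
    dig_id        INTEGER NOT NULL REFERENCES excavation(dig_id),
    method        TEXT NOT NULL,             -- e.g. in-situ NDE, lab test
    quantity      TEXT NOT NULL,             -- e.g. 'yield_strength_psi'
    value         REAL NOT NULL
);
""")

# Example query: every measured yield strength with its line and station.
rows = conn.execute("""
    SELECT pf.line_name, pf.station_m, nm.value
    FROM nde_measurement nm
    JOIN excavation e    ON e.dig_id = nm.dig_id
    JOIN pipe_feature pf ON pf.feature_id = e.feature_id
    WHERE nm.quantity = 'yield_strength_psi'
""").fetchall()
print(rows)
```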

Machine learning is emerging as an important practice for pipeline integrity management and risk, as well as supporting the fundamental processes which comprise the digital twin. Studies have shown that less than 1% of machine learning projects actually make their way into production. This presentation discusses five areas of focus to improve the success rate of machine learning models - data cost, model performance, model transparency, actionable results and business adoption.

As we know, the pandemic had a tremendous impact on the oil and gas industry. One issue in particular that has been highlighted in this time is the need for better monitoring of oil and gas pipelines to prevent and detect spills and leaks.  

Historically, hyperspectral imaging technology has been expensive to deploy, limiting it to government and military users. However, the continuation of oil spills and gas leaks demonstrates the need for the industry to be better prepared. This highlights the opportunity for satellite-based technology to increase the efficiency of pipeline monitoring in order to protect civilians, wildlife, and plant life.

What makes hyperspectral imaging different from traditional monitoring practices?

The current state of oil and gas pipeline monitoring involves using small planes to visually monitor pipelines, which leads to inefficient monitoring and does not provide an accurate means of preventing spills and leaks from occurring. With hyperspectral imaging, companies are provided with hundreds of spectral bands that allow them to effectively see the unseen. Therefore, while regular remote sensing technology can see pipelines, hyperspectral imaging can tell us whether a pipeline is leaking, as well as the material that is leaking.

What is most important to monitor when it comes to oil and gas pipeline monitoring and what can hyperspectral imaging see that is not apparent to the human eye?  

There is currently a need for utilizing technology to focus resources where they really matter in order to detect and prevent spills and leaks. Hyperspectral imaging allows companies to focus resources on:  

Hyperspectral imaging also helps us see what cannot be seen through human eyes including:

The future of using hyperspectral imaging from space to prevent and detect oil and gas leaks before they create a destructive path  

Hyperspectral imaging from space has the ability to prevent oil and gas leaks through the use of pixel data. Each pixel provides hundreds of data points for chemical identification, and these chemical signatures can then be matched to identify elements, compounds, and objects. Algorithms can then rapidly detect and classify where a pipeline is, the potential for a leak, and, if it is leaking, exactly what it is leaking, before a destructive path is created.
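
As a simple illustration of the "chemical ID" step described above (OSK's actual processing chain is not detailed in this abstract), the sketch below matches each pixel spectrum to a small reference library using the spectral angle; the band count and library entries are made up.

```python
# Toy spectral-angle matching of hyperspectral pixels to reference signatures.
import numpy as np

rng = np.random.default_rng(5)
n_bands = 200
library = {                                    # reference spectra
    "methane": rng.random(n_bands),
    "crude_oil": rng.random(n_bands),
    "vegetation": rng.random(n_bands),
}
library = {k: v / np.linalg.norm(v) for k, v in library.items()}

pixels = rng.random((1_000, n_bands))          # stand-in for a scene's pixels

def best_match(spectrum):
    s = spectrum / np.linalg.norm(spectrum)
    # Spectral angle: arccos of cosine similarity; smaller angle = closer match.
    angles = {name: np.arccos(np.clip(s @ ref, -1, 1)) for name, ref in library.items()}
    return min(angles, key=angles.get)

labels = [best_match(p) for p in pixels]
print({name: labels.count(name) for name in library})
```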

With the increase in computational power in modern times and more pipeline operators shifting towards reliability-based methods for integrity management, Monte Carlo (MC) simulations enable pipeline operators and reliability engineers to quickly perform complex simulations without the need for highly advanced analytical solutions. This gives operators a robust method to understand different pipeline threats and uncertainties. MC methods use many simulated experiments in which input variables with known distributions and statistical parameters are randomly sampled to calculate a target variable distribution. There are common pitfalls that can occur when this method is applied incorrectly, and the empirical analysis of MC simulations requires a correct interpretation of the results. To the authors’ knowledge, there are no research works in pipeline engineering explicitly describing these common pitfalls or demonstrating how to interpret MC simulation results with classical statistics for pipeline reliability engineering problems.

This study guides operators using a practical example in which three different Inline Inspection matches of the same corrosion defect are used to simulate probabilistic corrosion growth rate (PCGR) using linear regression. One common pitfall involves performing MC simulations without calculating the final target variable in the sample simulation. For PCGR, this means storing the CGR slope and the intercept of a model as distributions to be used later in a new simulation; when this is done without also storing the covariance, it can lead to incorrect projections of corrosion and can have a significant impact on pipeline safety. Another pitfall concerns time selection for the PCGR problem: measuring time relative to the AD calendar year increases model sensitivity to rounding and the influence of covariance. The last pitfall compares MC forecast distributions to statistical forecast distributions, and how to correctly interpret the similarities and differences between the two. Overall, this study provides important guidelines that operators can follow to confidently use MC simulations, providing better data for decisions around pipeline safety.
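
The sketch below is a hedged, synthetic illustration of the covariance pitfall described above (not the paper's case study): the slope and intercept of a depth-versus-time fit are strongly correlated, so sampling them independently in a later Monte Carlo step distorts the projected corrosion depth. Note the time axis is measured in years since the first ILI rather than calendar years.

```python
# Joint vs independent sampling of regression parameters in a PCGR projection.
import numpy as np

rng = np.random.default_rng(6)

t = np.array([0.0, 4.0, 9.0])           # years since first ILI (not calendar years)
d = np.array([1.1, 1.6, 2.4])           # matched anomaly depth, mm

X = np.column_stack([np.ones_like(t), t])
beta = np.linalg.solve(X.T @ X, X.T @ d)             # [intercept, slope]
resid = d - X @ beta
sigma2 = resid @ resid / (len(t) - 2)
cov = sigma2 * np.linalg.inv(X.T @ X)                # full parameter covariance

n, horizon = 100_000, 15.0
# Correct: sample intercept and slope jointly with their covariance.
joint = rng.multivariate_normal(beta, cov, size=n)
depth_joint = joint[:, 0] + joint[:, 1] * horizon

# Pitfall: store only the marginal distributions and sample independently.
indep = np.column_stack([
    rng.normal(beta[0], np.sqrt(cov[0, 0]), n),
    rng.normal(beta[1], np.sqrt(cov[1, 1]), n),
])
depth_indep = indep[:, 0] + indep[:, 1] * horizon

for name, s in [("joint", depth_joint), ("independent", depth_indep)]:
    print(f"{name:>11}: P50={np.percentile(s, 50):.2f} mm, "
          f"P95={np.percentile(s, 95):.2f} mm at {horizon:.0f} years")
```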

Machine learning (ML) methods have enabled significant technical achievements in recent years; however, these methods are still in the early stages of application in conventional and regulated industries. This paper discusses three considerations that should be reviewed when developing or deploying machine learning models in a pipeline risk or integrity context. With each consideration, an illustrative pipeline industry example is provided as a demonstration.

The first consideration introduces model constraints to restrict predictions for inputs that are outside the range of the training data. This avoids the potential for a model to report a high-confidence prediction for a set of inputs that is well outside the range of any previously seen example observation, effectively flagging the case for an engineer to review. Outside the field of machine learning, applying model constraints is common: empirical models are generally limited to the ranges of attributes observed in verification experiments to prevent extrapolation.
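
One simple way to implement this constraint, sketched below under assumed feature names: a wrapper records the training range of each input and flags any prediction request that falls outside it, so the flagged rows can be routed to an engineer instead of silently extrapolating.

```python
# Range-guarded prediction sketch (the feature names and data are assumptions).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

class RangeGuardedModel:
    def __init__(self, model):
        self.model = model

    def fit(self, X, y):
        self.lo_, self.hi_ = X.min(axis=0), X.max(axis=0)   # training range per feature
        self.model.fit(X, y)
        return self

    def predict(self, X):
        out_of_range = np.any((X < self.lo_) | (X > self.hi_), axis=1)
        preds = self.model.predict(X)
        return preds, out_of_range        # caller routes flagged rows to review

rng = np.random.default_rng(7)
X_train = rng.uniform([0, 300], [60, 1500], size=(500, 2))   # age (yr), rainfall (mm)
y_train = 0.02 * X_train[:, 0] + 0.001 * X_train[:, 1] + rng.normal(0, 0.1, 500)

guarded = RangeGuardedModel(RandomForestRegressor(random_state=7)).fit(X_train, y_train)
X_new = np.array([[30, 800], [95, 2500]])                    # second row is extrapolation
preds, flags = guarded.predict(X_new)
print(list(zip(preds.round(2), flags)))
```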

The second consideration introduces common open-source interpretability tools such as SHAP and LIME, which are used to understand how a model makes a certain prediction. These tools identify which features make a considerable contribution to a model’s output and in which direction the output is affected. The results of this analysis of a model’s performance may identify areas where a particular predictor is causing overfitting, or where adding a constraint, such as monotonicity, may help enforce a known causal relationship.
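
A brief sketch of this consideration using the open-source shap package (assuming it is installed; LIME would be used in a similar way). The model and its four features are toy stand-ins, not a real integrity model.

```python
# SHAP-based feature attribution sketch on a toy tree model.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(8)
X = rng.normal(size=(1_000, 4))                 # e.g. age, rainfall, CP reading, cover depth
y = 2.0 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(0, 0.1, 1_000)

model = RandomForestRegressor(n_estimators=200, random_state=8).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])    # per-feature contribution to each prediction

# Mean absolute SHAP value per feature: which inputs drive the model's output.
print(np.abs(shap_values).mean(axis=0))
```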

The third and final consideration introduces the difference between in-sample and out-of-sample prediction problems when estimating the performance of a model. Pipeline industry problems may require an ML model to provide predictions for a pipeline or region that is not represented in the dataset (spatial out-of-sample) or for a future time period (temporal out-of-sample). Failure to consider these spatial or temporal dependencies when validating and evaluating an ML model may lead to unconservative estimates of the model’s real-world performance. This scenario is demonstrated with an example dataset in which corrosion density is predicted on a set of pipelines and various performance estimation methods are applied.
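
The synthetic sketch below illustrates the spatial out-of-sample issue (assumed feature names, not the paper's dataset): scoring a corrosion-density model with random cross-validation versus grouping folds by pipeline, so that no pipeline appears in both training and test sets.

```python
# Random K-fold vs pipeline-grouped cross-validation on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, GroupKFold, cross_val_score

rng = np.random.default_rng(9)
n_pipelines, joints_per_line = 40, 50
groups = np.repeat(np.arange(n_pipelines), joints_per_line)

# Features are partly shared within a pipeline (coating era, soil corridor),
# so random splits leak information between training and test sets.
line_effect = rng.normal(size=n_pipelines)[groups]
X = np.column_stack([line_effect + rng.normal(0, 0.3, groups.size),
                     rng.normal(size=groups.size)])
y = 2.0 * line_effect + rng.normal(0, 0.5, groups.size)     # corrosion density proxy

model = RandomForestRegressor(random_state=9)
r2_random = cross_val_score(model, X, y, cv=KFold(5, shuffle=True, random_state=9)).mean()
r2_grouped = cross_val_score(model, X, y, cv=GroupKFold(5), groups=groups).mean()
print(f"random K-fold R^2: {r2_random:.2f}   pipeline-grouped R^2: {r2_grouped:.2f}")
```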

We are in a world where all the talk is about digital. For 70 years PRCI has been at the forefront of physical pipeline research and development, with a premier, world-class physical facility based in Houston, Texas. Over that 70-year period, like most of our members, we have generated vast amounts of data. We have facilitated new and improved sensing, and in the current era we are at the cutting edge of Pipelines 4.0 and beyond.

In this presentation we will discuss the evolving Virtual Technology Development Center (VTDC) and how it will create a virtual data environment offering advanced analytics on standardized data sets to enhance learning, a repository of lessons learned across the industry, and the ability to model and assess emerging solutions in a virtual world before expensive commitments are made.

By creating a vast, safe, and collaborative sharing entity, we create value for our members and the industry. It will also allow us to receive feedback and interact with the industry.

About the Speakers

Cliff Johnson, CAE, joined Pipeline Research Council International (PRCI) as President in September 2010. He previously served as Director of Public Affairs at NACE International. Mr. Johnson’s legislative career includes positions with Texas State Senator Gregory Luna, US Congresswoman Connie Morella, US Senator Kay Bailey Hutchison, and Texas Governor Ann Richards. Under Mr. Johnson’s leadership, PRCI’s membership has grown to over 75 members across the globe. In 2015, he led the creation of the Technology Development Center, a state-of-the-art pipeline research facility in Houston. In 2019, he developed the Strategic Research Priority (SRP) to enable PRCI members to drive the execution of strategic research efforts. In 2021, he launched the Emerging Fuels Institute, a consortium of industry members driving research to enable the energy transition.
Mr. Johnson earned his Master of Public Affairs from the Lyndon Baines Johnson School of Public Affairs at the University of Texas at Austin, and a Bachelor of Arts in Political Science from Austin College in Sherman, Texas. He has obtained his Certified Association Executive (CAE) designation and is active in the American Society of Association Executives. He resides in Northern Virginia with his wife, Courtney, and five children (Emma, Oleg, Vivian, Lena, and Hana).

John O’Brien has 40 years of industry experience in the design, construction, operation, and asset integrity management of onshore and offshore pipelines across the globe. As an R&D funding leader at a major operator, he has identified needs and translated them into value-adding solutions for the industry. In recent years he has worked at the cutting edge of the so-called Industry 4.0 and the generation of new sensing and digitally integrated solutions. He has worked extensively on ‘SMART Facilities’ concepts and on overcoming barriers to change.

In addition to consulting, he is currently Vice Chair of the TWI Council (UK), one of the world’s premier welding and joining institutes; a Board Advisor to ASNT on AI/ML in NDE; and a Board Advisor to the Subsea Institute, Houston. He has 33 years of involvement with standards development at API. John is a Digital Solutions Advisor to PRCI.
