Using Artificial Intelligence to Keep the Oil and Gas Industry Safe

Credit: DNV GL

By MarEx 2018-08-30 19:51:11

As artificial intelligence (AI) systems begin to control safety-critical infrastructure across a growing number of industries, DNV GL has released a position paper on the responsible use of AI. The paper asserts that data-driven models alone may not be sufficient to ensure safety and calls for a combination of data and causal models to mitigate risk.

Entitled AI + Safety, the position paper details the advance of AI and how autonomous, self-learning systems are taking responsibility for ever more safety-critical decisions. It argues that as engineering systems grow more complex, interconnected and computer-controlled, human operators are hard pressed to cope with, and understand, the enormous and dynamic complexity involved. “In fact, it seems likely that we will be unable to apply human oversight to many of these systems at the timescale required to ensure safe operation. Thus machines need to make safety-critical decisions in real-time, and we, the industry, have the ultimate responsibility for designing artificially intelligent systems that are safe!”

The operation of many safety-critical systems has traditionally been automated through control theory: decisions are made according to a predefined set of rules and the current state of the system. AI, by contrast, tries to learn reasonable rules automatically from previous experience.
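
To make the distinction concrete, here is a minimal sketch contrasting the two approaches for a hypothetical separator pressure-control loop; the thresholds, action names and the scikit-learn-style `model` object are illustrative assumptions, not anything prescribed by the paper.

```python
# Hypothetical pressure-control loop: all thresholds and action
# names below are placeholder values for illustration only.

def rule_based_controller(pressure_bar: float) -> str:
    """Classical automation: a predefined rule maps the current
    system state directly to an action; nothing is learned."""
    if pressure_bar > 35.0:      # hard safety limit (placeholder)
        return "open_relief_valve"
    if pressure_bar > 30.0:      # approach the limit conservatively
        return "throttle_inlet"
    return "hold"

def learned_controller(pressure_bar: float, model) -> str:
    """AI-style automation: the state-to-action mapping has been
    learned from logged operating experience rather than hand-coded."""
    actions = ["hold", "throttle_inlet", "open_relief_valve"]
    return actions[int(model.predict([[pressure_bar]])[0])]
```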

Because major incidents in the oil and gas industry are rare, such scenarios are poorly captured by data-driven models alone: there is simply not enough failure data to learn from. AI and machine-learning algorithms, which currently rely on data-driven models to predict and act upon future scenarios, may therefore not be sufficient to assure safe operations and protect lives.
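
The sketch below makes this concrete with invented numbers: a trivial model that always predicts “safe” scores near-perfect accuracy on rare-failure data while catching none of the incidents that actually matter.

```python
import numpy as np

# Illustrative only: 3 recorded incidents among 10,000 operating records.
y = np.zeros(10_000, dtype=int)
y[:3] = 1                                   # 1 = major incident (rare)

always_safe = np.zeros_like(y)              # trivial model: predict "safe"
accuracy = (always_safe == y).mean()        # fraction of correct predictions
recall = (always_safe[y == 1] == 1).mean()  # fraction of incidents caught

print(f"accuracy: {accuracy:.2%}, incidents caught: {recall:.0%}")
# accuracy: 99.97%, incidents caught: 0% -- high accuracy, no safety value
```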

The position paper stresses that if the industry can supplement these data-driven models by generating physics-based causal data, it will be significantly closer to the safe implementation of AI in safety-critical systems.
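
One plausible reading of that idea, sketched below under a toy assumption (a pressure-vessel failure criterion with placeholder constants): a physics-based causal model labels simulated operating conditions, producing the failure examples that the historical record cannot supply.

```python
import numpy as np

rng = np.random.default_rng(42)

def physics_failure_model(pressure_bar, wall_thickness_mm):
    """Hypothetical causal model: the vessel fails when hoop stress
    (proportional to pressure / thickness) exceeds a material limit.
    The 0.9 factor and 2.5 limit are placeholders, not real data."""
    hoop_stress = 0.9 * pressure_bar / wall_thickness_mm
    return (hoop_stress > 2.5).astype(int)

# Sample operating conditions well beyond the historical envelope and
# label them with the causal model, yielding failure examples that the
# scarce field data cannot provide.
pressure = rng.uniform(10.0, 120.0, size=5_000)
thickness = rng.uniform(8.0, 25.0, size=5_000)
synthetic_X = np.column_stack([pressure, thickness])
synthetic_y = physics_failure_model(pressure, thickness)

print(f"synthetic failure examples: {synthetic_y.sum()} of {len(synthetic_y)}")
# These rows can be pooled with real operating data so a classifier
# has actually "seen" failure conditions during training.
```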

The paper cites the example of SpaceX’s successful launch of the world’s biggest rocket, Falcon Heavy, in February 2018. The two first-stage side boosters landed safely back at the launch site in the upright position – ready for refurbishing and new flights. No full-system test was possible prior to the launch, so the team of engineers relied on computer models, together with previous experience of similar systems and advanced machine learning models, to simulate how the launch would play out and to determine how to guide the actual boosters back to the launch pad. “This is a perfect example of what can be achieved through extensive use of models based on both causal knowledge and data-driven methods, making autonomous real-time adjustments.”
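
The hybrid pattern described in the quote might look something like the following sketch: a causal physics model predicts the next state, a data-driven residual model learned from previous flights corrects it, and a controller issues a real-time adjustment. The dynamics, gains and the `residual_model` interface are all invented for illustration.

```python
def control_step(altitude_m, velocity_ms, residual_model, dt=0.1):
    """One real-time control step combining a causal model with a
    learned correction; all constants are toy placeholder values."""
    # Causal model: velocity after dt under gravity alone.
    predicted_v = velocity_ms - 9.81 * dt
    # Data-driven correction for effects the physics model omits
    # (e.g. aerodynamic forces), learned from previous flights.
    corrected_v = predicted_v + residual_model(altitude_m, velocity_ms)
    # Toy guidance law: track a descent rate proportional to altitude.
    target_v = -0.2 * altitude_m
    thrust_accel = max(0.0, 9.81 + 2.0 * (target_v - corrected_v))
    return thrust_accel

# Example call with a stand-in residual model that applies no correction.
zero_residual = lambda alt, vel: 0.0
cmd = control_step(altitude_m=150.0, velocity_ms=-40.0,
                   residual_model=zero_residual)
print(f"commanded thrust acceleration: {cmd:.1f} m/s^2")
```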

However, not every booster landed safely on that first attempt. The Falcon Heavy core booster was supposed to land upright on a drone ship in the Atlantic, but missed by 90 meters and crashed into the ocean. This illustrates two important aspects, says DNV GL. First, even when we have designed and tested something thoroughly and in extreme detail, there is always an element of stochastic variation that is difficult to foresee. Six days after the launch, Elon Musk tweeted: “Not enough ignition fluid to light the outer two engines after several three engine relights. Fix is pretty obvious.”

There are many possible reasons why the core booster had too little ignition fluid. It could be on the provisioning side – the fuel tanks not being filled sufficiently before launch; it could be related to environmental loads – perhaps winds caused the controls to use more fuel than anticipated; or it could be any of a multitude of other reasons. This underpins a key point: it is “easy” to make something work, but it can be near impossible to ensure that it will not fail!

“When our system might fail, we need to understand and address the associated risk. That is why the Falcon Heavy was launched (eastbound, to utilize Earth’s rotation) out of Cape Canaveral Air Force Station on the east coast of Florida. This ensured that any failures resulting in a crash of the massive rockets would mean that only ocean would be hit.”

DNV GL has joined forces with Norway’s largest universities and companies, including Equinor, Kongsberg Group and Telenor, to establish a Norwegian “powerhouse” for AI. The Norwegian Open AI Lab aims to improve the quality and capacity for research, education and innovation in AI, machine learning and big data.

source: www.maritime-executive.com