A View from Manufacturing and Operations

Martin R. Gonzalez, PhD, Senior Manager, Refining Technology, bp

Marty Gonzalez is a seasoned researcher at bp with a strong focus on applying advanced technologies to solve real-world challenges in energy. A University of Wisconsin–Madison graduate, he blends technical expertise with practical insight to deliver impact. Passionate about innovation, he champions AI, causal modeling, and hybrid approaches that move beyond prediction to support confident, data-driven action in complex environments. He is also a trustee of the AI Applied Consortium, a non-profit organization composed of members from industry and academia, brought together to develop AI-based solutions for society’s biggest problems.

The hype around large language models (LLMs) like ChatGPT has created strong demand for AI across virtually all industries. LLMs hold the promise of tapping into vast knowledge repositories to uncover insights about almost any topic.

However, after a few years of broad GenAI deployment, companies like Johnson & Johnson have not realized the business value they expected. Gartner’s hype cycle even shows LLMs falling from the “peak of inflated expectations” into the “trough of disillusionment.”

Much of this challenge can be attributed to an LLM’s tendency to fabricate. Even when grounded through techniques like retrieval-augmented generation or prompt engineering, LLM-based solutions often fail to generate business results.

The Problem with LLMs

It is great that pre-trained generative models have created a thirst for all things AI. Widespread chatbot availability makes it easy for anyone to get started. Still, I believe a singular LLM approach is not what most businesses need. Delivering insights through a chatbot often doesn’t meet the threshold needed to incite real action.

In the oil and gas industry, predictive machine learning models have spread with moderate success. Adding an LLM interface to explain those predictions may improve clarity over vague alerts, but often it still falls short.

What businesses truly need is clarity and confidence to act decisively. That is why AI must go beyond prediction and explanation, even beyond decision intelligence, to focus on intervention.

An AI that inspires action must answer: “What options do I have right now?” and “What will happen, good or bad, if I pursue each option?” Only then can engineers, technicians, or operators be confident in taking action and getting buy-in from leadership.

AI’s near-term goal should be to help people think more effectively, not replace their thinking. Consider a common example in manufacturing: troubleshooting a clogged reactor.

What’s Clogging My Reactor?

In process industries like refining, engineers are assigned to troubleshoot when issues arise. These are smart, experienced people with strong ideas on root causes and potential fixes. I used to be one of them.

Let us say a reactor under an engineer’s care is experiencing pluggage that threatens throughput. The engineer might review pressure trends, run simulations, or work through calculations to determine whether the problem evolved gradually or was triggered by a discrete event.

She might consult an LLM to retrieve data from previous incidents or best-practice manuals, perhaps written by long-retired experts.

Assume an ML model deployed to catch anomalies early has flagged the issue. Once she rules out a sensor glitch, she faces a tough call: recommend a costly unit shutdown or wait it out. Her supervisor may tell her to consult headquarters, where a formal root cause analysis gets underway. Meanwhile, weeks pass, the issue worsens, and other components like a compressor or heat exchanger start acting up.

Yes, it is easier now to detect anomalies and extract insights from document repositories, thanks to deep learning and LLMs. But rather than using one model to detect and another to interpret, what you really need is an AI that helps explore possibilities, one that inherently understands cause and effect.

Yes, I am making the case for causal AI, but more importantly, I am calling on the industry to help bring this idea into full maturity.

Causal? To What Effect?

Causal AI is not a new type of AI, but a strategic application of known technologies to guide interventions. Its roots lie in fields like sociology, economics, and epidemiology, where researchers must figure out cause and effect even when experiments are messy or uncontrolled.

To explore interventions, causal models identify counterfactuals—what might have happened under different conditions or choices. This reframes statistical inference to focus on how to make things better.
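
To make this concrete, here is a deliberately toy sketch of a “what if” query. Every variable, coefficient, and reading below is invented for illustration; a full counterfactual analysis would also condition on what was actually observed, but the spirit is the same: compare the outcome of the choice that was made against the outcome of a choice that could have been made.

```python
# Toy structural relationship for pressure drop across a reactor bed.
# All names and numbers are hypothetical and for illustration only.
def pressure_drop(feed_contaminant_ppm: float, inlet_temp_c: float) -> float:
    # Illustrative assumption: more contaminants and lower inlet temperature
    # both increase pressure drop (a proxy for fouling severity).
    return 0.02 * feed_contaminant_ppm + 0.5 * max(0.0, 180.0 - inlet_temp_c)

observed = pressure_drop(feed_contaminant_ppm=40, inlet_temp_c=165)   # what we actually ran
what_if = pressure_drop(feed_contaminant_ppm=40, inlet_temp_c=185)    # "what if we had run hotter?"
print(f"observed dP ~ {observed:.1f}, counterfactual dP ~ {what_if:.1f}")
```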

Correlation is not causation, and causal AI acknowledges this. It requires human input, typically through a knowledge graph mapping cause-and-effect relationships. A subject matter expert refines it by removing false links, correcting misdirected edges, and adding missing logic.
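
What might that expert refinement look like in practice? Here is a minimal sketch under invented assumptions: the fouling variables, the edges, and the use of networkx are my own illustrative choices, not a prescribed implementation.

```python
import networkx as nx

# Hypothetical causal knowledge graph for reactor fouling.
# Edges point from cause to effect; every variable name is illustrative.
g = nx.DiGraph()
g.add_edges_from([
    ("feed_contaminants", "coke_formation"),
    ("low_inlet_temperature", "wax_deposition"),
    ("coke_formation", "pressure_drop"),
    ("wax_deposition", "pressure_drop"),
    ("pressure_drop", "throughput_loss"),
    ("ambient_temperature", "coke_formation"),   # spurious link learned from correlated data
])

# Subject matter expert review: remove the false link and add a mechanism
# the data alone did not reveal.
g.remove_edge("ambient_temperature", "coke_formation")
g.add_edge("low_inlet_temperature", "coke_formation")

assert nx.is_directed_acyclic_graph(g)
print(sorted(g.predecessors("pressure_drop")))   # candidate causes to confirm or rule out
```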

I see this need for human input as a feature, not a flaw. Hardcoding known truths helps protect against hallucinations. In many disciplines, physical laws are often blended with empirical models. A sweet spot—robust and accurate—emerges when data and expert knowledge work together. (More on that in my future blog on hybrid modeling.)

These models can be applied in several ways: running algorithms on graphs (like Netflix’s recommender engine), extracting rules to build expert systems, or encoding relationships into matrices to leverage linear algebra.
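
For the last of those routes, here is a small sketch, again with invented variables: the same cause-and-effect links are written into an adjacency matrix, and ordinary linear algebra answers which upstream variables can ultimately reach the outcome we care about.

```python
import numpy as np

# The same hypothetical cause-effect links, encoded as an adjacency matrix.
nodes = ["feed_contaminants", "coke_formation", "wax_deposition",
         "pressure_drop", "throughput_loss"]
idx = {n: i for i, n in enumerate(nodes)}

A = np.zeros((len(nodes), len(nodes)))
for cause, effect in [("feed_contaminants", "coke_formation"),
                      ("coke_formation", "pressure_drop"),
                      ("wax_deposition", "pressure_drop"),
                      ("pressure_drop", "throughput_loss")]:
    A[idx[cause], idx[effect]] = 1.0

# Reachability: (A + I)^n is nonzero wherever a directed path exists, so one
# matrix power tells us every variable that can influence throughput.
reach = np.linalg.matrix_power(A + np.eye(len(nodes)), len(nodes)) > 0
influencers = [n for n in nodes
               if n != "throughput_loss" and reach[idx[n], idx["throughput_loss"]]]
print(influencers)
```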

Though rooted in “softer sciences,” causal AI holds real promise for manufacturing.

Providing the Confidence to Act

Let us return to our engineer and her plugged reactor. Instead of relying on a reactive patchwork of tools, she now has a model built on expert knowledge. It incorporates all potential causes of reactor fouling and connects them to the precise data needed to confirm or eliminate each one. Perhaps an LLM even helped build the knowledge graph using archived best practices.

The model performs real-time calculations, just as she would, but follows logic branches based on data. It includes recommended actions tied to real-world constraints like current fuel margins, maintenance lead times, and cost implications.
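
To give a flavor of what “following logic branches based on data” could mean, here is a heavily simplified, purely hypothetical sketch; a production system would pull live plant tags and current economics rather than hard-coded values.

```python
# Hypothetical diagnostic branch: start from the symptom, keep only the causes
# the live data supports, and map each surviving cause to a candidate action.
# Variable names, evidence, and actions are all illustrative.
causes_of = {
    "pressure_drop": ["coke_formation", "wax_deposition"],
    "throughput_loss": ["pressure_drop"],
}
action_for = {
    "coke_formation": "plan a controlled outage to clean the bed",
    "wax_deposition": "raise inlet temperature within operating limits",
}

def recommend(symptom: str, evidence: dict[str, bool]) -> list[str]:
    """Return candidate actions for the causes that real-time checks support."""
    supported = [c for c in causes_of.get(symptom, []) if evidence.get(c, False)]
    return [action_for[c] for c in supported if c in action_for]

evidence = {"coke_formation": True, "wax_deposition": False}
print(recommend("pressure_drop", evidence))
```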

With this information, the engineer confidently builds a case for the best course of action—a controlled outage after peak gasoline demand subsides. The damage is manageable until then, and the root cause is being addressed.

Her plant manager sees a data-backed scenario analysis comparing options and signs off immediately. The planning team mobilizes early to reduce commercial impact.

This kind of intelligent intervention doesn’t yet exist at scale, but many are working to make it happen. The AI Applied Consortium is a not-for-profit connecting experts, tech providers, and academia to advance technologies like these.

Let me leave you with a quote from Aristotle:

“Excellence is never an accident… Choice, not chance, determines your destiny.”