Ask the Expert: Artificial Intelligence in EHS

May 22, 2019

Last week we announced that Enablon AI for Safe Operations was named as a SIIA CODiE Award finalist in the “Best Artificial Intelligence Enabled Solution” category.

The SIIA CODiE Awards are the premier awards for the software and information industries, and have been recognizing product excellence for over 30 years.

I spoke recently with Martin Vauthier, Head of Artificial Intelligence Analytics at Enablon. Here are Martin’s answers to five questions, which give you a great idea of what’s happening with AI, especially regarding the use of AI in workplace safety.

Artificial Intelligence is another buzzword we hear often. Sometimes there’s a disconnect between how much we hear about a technological innovation, and how much it is actually used. Where do you think AI is in the technology adoption phase?

I would say that AI is somewhere between the early adopter and early majority phases. It's a sweet spot, because AI is now used widely enough that organizations can be confident it works. Like any piece of software or technology it's not perfect, but we're not talking about some futuristic gadget that remains to be proven. And the fact that it is gradually making its way into the early majority stage means that companies are in a great position to gain a competitive edge by adopting AI-enabled solutions, compared to risk-averse competitors who may still be waiting.

There are other indications as well that AI is being used. First, according to the Verdantix roadmap for EHS technologies, predictive technologies are well into the "Launch" phase and getting close to the "Growth" phase. One of the main benefits of AI is precisely to give organizations a predictive capability. Second, according to a Forbes article on indicators of the state of AI, "25% of businesses surveyed have implemented cognitive technologies such as AI or machine learning, either as pilot projects or as long-term strategies", and "81% predicted growth in AI".

Can you provide a tangible example or two of how AI can improve safety? What can AI do in your examples that safety professionals could not do on their own?

At Enablon, we’re working on many solutions that leverage AI, but two are already available: Enablon Pulse and Enablon Juno. Both capabilities aim to unlock insights from safety data and put the right corrective or preventive actions in place.

Enablon Pulse is the industry's first out-of-the-box predictive analytics application. It calculates risk levels on an ongoing basis, i.e. the potential severity and likelihood of an incident occurring at a specific site, based on data collected across the Enablon suite. The solution alerts employees in real time when a risk threshold has been reached at any facility, or section of a facility. This is key to preventing incidents and reducing risks.
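The general idea described here – scoring risk from likelihood and severity per site, and alerting when a threshold is crossed – can be sketched in a few lines. This is a hypothetical illustration only, not Enablon's actual model; the scoring function, the `RISK_THRESHOLD` value and the site data are all assumptions:

```python
# Hypothetical sketch of threshold-based risk alerting (not Enablon's model).
# Each site's risk is scored as likelihood x severity; an alert fires when
# the score crosses a configured threshold.

RISK_THRESHOLD = 0.6  # assumed alert threshold on a 0-1 scale


def risk_score(likelihood: float, severity: float) -> float:
    """Combine likelihood (0-1) and severity (0-1) into a single score."""
    return likelihood * severity


def check_sites(sites: dict[str, tuple[float, float]]) -> list[str]:
    """Return the names of sites whose risk score exceeds the threshold."""
    return [
        name
        for name, (likelihood, severity) in sites.items()
        if risk_score(likelihood, severity) > RISK_THRESHOLD
    ]


alerts = check_sites({
    "Plant A": (0.9, 0.8),  # score 0.72 -> exceeds threshold, alert
    "Plant B": (0.3, 0.5),  # score 0.15 -> no alert
})
print(alerts)  # ['Plant A']
```

A real system would of course derive likelihood and severity from collected data rather than take them as inputs, but the alerting logic reduces to a comparison like this.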

Predicting an incident is not enough if you don't do something to reduce the risk. This is where Enablon Juno, our context-aware assistant, comes into the picture. Leveraging natural language processing (NLP), the solution suggests specific action plans or controls that have proven effective in very similar contexts in the past, to reduce the risk of incidents. The organization can then apply the same corrective or preventive action plans across all locations with similar conditions, and therefore similar risks.

Imagine if a safety manager had to dynamically identify risk areas, and analyze a large volume of historic data to determine the best response, all in a small amount of time. It would be an impossible task. With AI, the impossible becomes possible.

What do you say to safety professionals who are nervous that AI may threaten their jobs?

It’s the complete opposite! If you’re a safety professional, AI will make your job easier and, I believe, even more effective: it drastically increases traditional data processing capabilities, allowing you to mine a much – MUCH – greater volume of data in far less time. AI frees up safety professionals to focus on preventive safety programs, because it spares them the time-consuming part of the job: pulling, analyzing and interpreting information about lagging events from many different data sets.

Of course, reaching this level requires a lot of effort from companies. They need to focus on the “four Vs”: Volume, Variety, Velocity and Veracity – collecting quality data as frequently as possible, on many different aspects of their operations. But they also need to be able to connect these data sets together.

This is what big data analytics is about, and AI is really here to help get the most out of all the efforts companies put in data collection. Safety professionals can’t just keep pulling information out of their many systems anymore: it has to be pushed to them.

Also, AI and other technological innovations will free safety professionals to spend more time on advancing shared beliefs, values, attitudes and customs regarding safety. They will be free to focus on what matters most: safety culture.

AI is made up of algorithms. Humans program these algorithms. Humans can make mistakes. Is there a risk that sometimes AI may not produce the intended outcome?

AI is software, and software is not perfect. Even technology giants like Microsoft, Apple and Google have to issue updates once in a while to address various issues. So I wouldn’t single out AI for concern or criticism. It’s in the same boat as other technologies.

Having said that, there are steps you can take to make sure AI is producing the intended outcomes, and the most common is to apply a model validation framework. A common methodology is to look at your model’s recall – that is, run it against past data and see how many actual incidents it is able to capture or predict.
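Recall-based validation of this kind amounts to replaying the model against historical outcomes and measuring the share of actual incidents it flagged. A minimal sketch – the data below is illustrative, not from any real validation run:

```python
# Sketch of recall-based validation: replay a model's predictions against
# historical outcomes and measure the share of actual incidents it caught.


def recall(actual: list[bool], predicted: list[bool]) -> float:
    """Recall = true positives / (true positives + false negatives)."""
    true_positives = sum(a and p for a, p in zip(actual, predicted))
    actual_positives = sum(actual)
    return true_positives / actual_positives if actual_positives else 0.0


# Illustrative history: True = an incident occurred (actual) or was
# flagged in advance (predicted). The model caught 3 of the 4 incidents.
actual    = [True, True, False, True, False, True]
predicted = [True, False, False, True, False, True]

print(f"Recall: {recall(actual, predicted):.2f}")  # Recall: 0.75
```

A model with high recall misses few real incidents; in a safety context that is usually the metric to protect, even at the cost of some false alarms.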

Another significant aspect, if you ask me, is to keep humans in the loop – an AI model is like a teenager: it needs to learn.

Steve Wozniak, one of the co-founders of Apple, said recently that “We should strive for machines that can do what humans do, but I don’t think we’re ever going to make it”. While machines can run tests faster, humans still have the advantage when it comes to strategy and deductive reasoning, Wozniak said. It’s like he was making the case for AI and human thinking to work together. How do you think this applies to workplace safety? Where do you see the “border” between AI and humans?

Wozniak makes a great point: AI will not replace humans. There are still things that only humans can do, and it will remain that way. As we saw earlier, promoting a safety culture is one of them. But there’s more.

With AI, millions of data points can be analyzed, and the following questions can be answered: What happened? What is likely to happen? What should be done next? But there’s one question AI cannot answer: Why did a person think that way? What fears, emotions, anxieties, etc., led a person to think in a way that led to an incident? This is where I see the “border” between AI and humans.

It’s important that safety managers continue to talk to people and not dismiss what they have to say because of AI and data analytics. They need to talk to workers and understand how they feel, think, act and react.

Ultimately, AI and humans will have to work together. Think about an airplane. With autopilot and other technological advances, it’s theoretically possible for a plane to fly on its own or be flown remotely. But do you still want a pilot crew in the cockpit of a jet with 300 passengers on board? I do!

That’s our Q&A with Martin. He shared a lot of interesting perspectives, and I’m glad we could pass them along to you.



Jean-Grégoire Manoukian

Content Thought Leader