10 Questions to Consider About Artificial Intelligence Risks
“Is artificial intelligence less than our intelligence?” ~Spike Jonze
Every month, we publish three roundups on Enablon Insights: the EHS Roundup, the Sustainability Roundup, and the Risk Roundup. Each roundup highlights ten articles or online resources that caught our attention and deserve a second look.
Sometimes an article is so interesting that we write a separate post to highlight it. This is such a post. The article, from Deloitte, was included in the December Risk Roundup and is titled “On the Board’s Agenda: Board Oversight of Algorithmic Risk”.
The Deloitte article is relevant because of the growing use of Artificial Intelligence (AI) in various business processes. AI combines algorithms to allow computers, machines, and software programs to operate intelligently and solve problems, achieve goals, or provide actionable insights through the use of advanced analytics and data.
Some examples of the use of AI include:
- Early detection and warning of malfunctioning assets or equipment that could lead to failure, and potential incidents or production delays.
- Suggestion of action plans and controls in the event of an incident, based on the context of the incident, including data on similar, past incidents.
- Dynamic identification of workplace areas with high risks of incidents based on hazards, likelihood and severity of adverse events, and effectiveness of controls. This includes warnings to workers through mobile devices.
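To make the first example concrete, here is a minimal sketch of an early-warning check on equipment sensor data. It is purely illustrative (the function name, window size, and threshold are assumptions, not anything from the Deloitte article): a reading is flagged when it deviates sharply from the recent baseline, which is one simple way such a detection system could surface a malfunctioning asset before it fails.

```python
from statistics import mean, stdev

def flag_anomalies(readings, window=10, threshold=3.0):
    """Flag sensor readings that deviate sharply from the recent baseline.

    A reading is flagged when it lies more than `threshold` standard
    deviations from the mean of the preceding `window` readings.
    """
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append((i, readings[i]))
    return alerts

# Stable vibration readings followed by a sudden spike.
sensor = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 1.02, 5.0]
print(flag_anomalies(sensor))  # → [(11, 5.0)] — the spike is flagged
```

A real system would use far richer models, but even this toy version illustrates the risk theme of this post: the thresholds and assumptions are chosen by humans, so the warnings are only as good as the choices behind them.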
There’s a reason why I selected the quote at the top. One of the appeals of AI, beyond its ability to simulate human intelligence, is that it may be less prone to errors and flaws than human reasoning. Or is it? Since AI is a combination of algorithms, and algorithms are programmed by humans, there is a risk that AI may not produce the desired outcomes. Eventually AI technology may become so smart that it can program itself (Mark Cuban even speculates that this is a reason why studying philosophy may soon be worth more than computer science), but for now humans program AI, and that introduces an element of risk.
This does not undermine the benefits of AI, but it does imply that organizations should consider AI risk, ask themselves some key questions, and even audit the outcomes of AI. Here are 10 key questions to consider regarding AI risk, and that are based on the Deloitte article on board oversight of algorithmic risk:
1) Where and how is AI used in the organization?
2) What are the potential impacts of unintended outcomes of AI?
3) Is the organization aware of any AI cases with incorrect outcomes? Has it received any complaints about them from stakeholders such as customers, suppliers, or employees?
4) If yes, what kinds of problems have those incorrect AI outcomes created, and how have they been addressed or resolved?
5) What monitoring systems are in place to provide indications of problems with AI?
6) Who oversees the use of AI and related risks?
7) What processes does the organization have in place to monitor and test AI?
8) Is AI independently reviewed? By whom? How often?
9) How secure is the programming code behind AI from cyber-theft or hacking?
10) Has management developed an inventory of tested and “risk-rated” AI uses so that the organization can rely upon them and focus on other AI uses that may pose greater risks?
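As a purely illustrative sketch of question 10, such an inventory could start as a structured list that oversight can filter for untested or high-risk uses. Every name, field, and rating below is hypothetical, not from the Deloitte article:

```python
from dataclasses import dataclass

@dataclass
class AIUse:
    """One entry in a hypothetical inventory of AI uses (all values illustrative)."""
    name: str
    business_process: str
    tested: bool
    risk_rating: str  # "low", "medium", or "high"

def needs_attention(inventory):
    """Return the uses that are untested or high-risk, so oversight can focus there."""
    return [u for u in inventory if not u.tested or u.risk_rating == "high"]

inventory = [
    AIUse("Asset failure early warning", "Maintenance", tested=True, risk_rating="low"),
    AIUse("Incident action-plan suggestions", "EHS", tested=True, risk_rating="medium"),
    AIUse("Dynamic workplace risk mapping", "EHS", tested=False, risk_rating="high"),
]
flagged = needs_attention(inventory)
print([u.name for u in flagged])  # → ['Dynamic workplace risk mapping']
```

The point is not the code but the discipline it represents: once AI uses are cataloged, tested, and risk-rated, the organization can rely on the vetted ones and direct its monitoring toward the rest.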
Emerging technologies can produce many positive impacts, including improved workplace safety, better environmental performance, more effective supply chain monitoring, increased productivity, and better product quality. But emerging technologies, such as AI, can also create risks. By recognizing and managing AI risks, companies can leverage the full power and benefits of AI more effectively.
View the recording of our webinar with LNS Research to learn how advanced analytics capabilities can enable a more proactive, predictive approach to EHS and risk management to achieve operational excellence: