Can Artificial Intelligence be safe?

Since the release of ChatGPT, everyone has been talking about Artificial Intelligence. But can clever algorithms, which generate texts for example, also be groundbreaking in safe automation? Yes, they can. However, the question that remains is: how?

There is actually no magic or wizardry behind the term “Artificial Intelligence” (AI). It is pure mathematics. Put simply, it is programmed code that attempts to mimic human behaviour, the aim being to automate human decisions. The foundations for this technology were laid back in the 1960s. However, it is only now – with suitably powerful hardware, new concepts and new algorithms – that the subject has seen a huge upturn. These developments make AI technologies usable for industry, to the extent that experts are predicting a revolution in the global economy.

“Artificial Intelligence is a synonym for a revolution in data processing.”

Lukas Elwinger, AI Engineer at Pilz

“Artificial Intelligence is a synonym for a revolution in data processing”, explains Lukas Elwinger, AI Engineer at Pilz. “It will enable us to overcome challenges that until now could not be resolved with conventional algorithms and technologies. That in turn opens up space for new business areas, business models, products and services.” The supreme discipline is to bring Artificial Intelligence and machinery safety together, because AI is what is called a black box: “You cannot predict or comprehend how such a complex system makes a decision”, Lukas Elwinger explains. “But that is absolutely essential for functional safety. Each reaction of the safety system must be justifiable – even when it involves AI.” Three components are relevant when implementing AI in machinery safety: safety and security, transparency, and reliability.

Requirements for the use of AI in machinery safety

Only when the safety, transparency and reliability factors have been adequately researched can Artificial Intelligence be implemented in machinery safety. © Pilz GmbH & Co. KG, Ostfildern

Enabling AI for machinery safety

Where transparency is concerned, research is currently dealing with precisely this aspect: the ability to explain why and how an AI makes a decision. Experts are using Explainable AI to make the complex system of AI understandable, and therefore transparent, for people. With regard to reliability, experts are using algorithms and methods to quantify uncertainty. As a result, AI systems indicate how sure or unsure they are of a decision, recognise invalid inputs and react accordingly. Reliability can be further increased through appropriate concepts and architectures. In the area of safety, research uses formal verification methods to verify whether an AI is safe and error-free. All three components contribute towards the maximum possible reliability and safety of AI in the industrial environment, and towards the protection of humans. An all-encompassing Artificial Intelligence that is equal to, or even exceeds, human intelligence – often referred to in the media as “strong AI” – does not yet exist. That means an AI system will only work reliably for tasks that have been defined by humans, within pre-defined framework conditions.
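To make the idea of quantified uncertainty concrete: one common textbook method (a minimal sketch, not Pilz’s implementation) is to measure the entropy of a classifier’s softmax output and reject the input when the entropy exceeds a threshold. The function names and the threshold value of 0.5 below are purely illustrative assumptions.

```python
import math

def predictive_entropy(probs):
    """Shannon entropy of a softmax output; higher means less certain."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def classify_with_rejection(probs, max_entropy=0.5):
    """Return the predicted class index, or None if the model is too uncertain.

    A safety system can treat None as 'invalid input' and fall back
    to a safe state instead of acting on an unreliable prediction.
    """
    if predictive_entropy(probs) > max_entropy:
        return None  # reject: uncertainty too high for a safety decision
    return max(range(len(probs)), key=lambda i: probs[i])

# A confident prediction is accepted ...
print(classify_with_rejection([0.97, 0.02, 0.01]))  # -> 0
# ... while an ambiguous one is rejected.
print(classify_with_rejection([0.4, 0.35, 0.25]))   # -> None
```

Rejecting rather than guessing is what allows the system to “react accordingly”: the downstream safety logic decides what a rejection means, for example triggering a safe stop.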

What does the legislation say?

1. EU AI Act

The EU AI Act, a proposed EU regulation on Artificial Intelligence, provides harmonised rules for the development, deployment and use of AI systems within the EU, and also safeguards the safety and fundamental rights of individuals. The AI Act goes beyond the requirements at the point of deployment and considers the whole product lifecycle. One sensitive issue is the definition of an AI system, as this governs the whole scope of the regulation. As currently worded, even algorithms that are already in general use can fall under the scope of the AI Act. However, it remains to be seen what the final definition will be, because the EU AI Act is still in the draft stage.

2. EU Machinery Regulation

The EU Machinery Regulation already establishes that a conformity assessment procedure must be carried out when AI systems are used in functional safety. Accordingly, safety-relevant AI systems are listed as “high-risk machinery”. For manufacturers such as Pilz, this means that the conformity assessment procedure can only be carried out with the involvement of a notified body, even if the relevant harmonised standards are applied.

3. ISO/IEC CD TR 5469 “Artificial Intelligence — Functional safety and AI systems”

Internationally, work is underway on the development of an initial standard: ISO/IEC CD TR 5469 “Artificial Intelligence — Functional safety and AI systems”. This deals explicitly with “AI & Safety”. Experts from Pilz will collaborate on this standard in both national and international committees.

“In the short term, we expect AI to achieve increased efficiency in production. In the long term, we hope to be able to use Artificial Intelligence profitably, even in a safety-relevant environment”, says Matthias Holzäpfel, Vice President Advance Development at Pilz, looking to the future. AI-based object recognition, object tracking and sensor data fusion are elementary components for implementing safe person detection, for example. Currently, such applications can only be implemented at great engineering expense. In the future, AI systems will be a necessary tool.
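To illustrate what sensor data fusion for person detection can mean at its simplest: two independent sensors each report a confidence that a person is present, and the system only keeps operating when the fused probability stays low. This is a toy sketch assuming independent sensors and a uniform prior; the function names, sensors and thresholds are hypothetical, not a Pilz product design.

```python
def fuse_detections(p_camera, p_lidar):
    """Combine two independent detector confidences by multiplying
    their odds (naive Bayesian fusion with a uniform prior)."""
    odds = (p_camera / (1 - p_camera)) * (p_lidar / (1 - p_lidar))
    return odds / (1 + odds)

def safe_to_operate(p_camera, p_lidar, threshold=0.1):
    """Demand a safe stop unless the fused person probability is low."""
    return fuse_detections(p_camera, p_lidar) < threshold

print(safe_to_operate(0.02, 0.05))  # both sensors see no person -> True
print(safe_to_operate(0.90, 0.80))  # both report a person -> False
```

Note the asymmetry built into the threshold: agreement between sensors that a person is present drives the fused probability up sharply, so the machine errs on the side of stopping.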

Pilz has Artificial Intelligence in its sights. Our experts are working hands-on with AI technology and with current research on AI and machinery safety. As a reliable partner for safe automation, whose focus is on the protection of human and machine, Pilz will not leave the use of Artificial Intelligence to chance, but will keep a critical eye on the legislation and the development of standards.
