Since the release of ChatGPT, everyone has been talking about Artificial Intelligence. But can clever algorithms, which generate texts for example, also be groundbreaking in safe automation? Yes, they can. The question that remains is: how?
There is actually no magic or wizardry behind the term “Artificial Intelligence” (AI). It is pure mathematics. Put simply, it is programmed code that attempts to mimic human behaviour, with the aim of automating human decisions. The foundations for this technology were laid back in the 1960s. Only now, however – with suitably powerful hardware, new concepts and new algorithms – has the subject seen a huge upturn. These developments make AI technologies usable for industry, to the extent that experts are predicting a revolution in the global economy.
“Artificial Intelligence is a synonym for a revolution in data processing”, Lukas Elwinger, AI Engineer at Pilz, explains. “It will enable us to overcome challenges that until now could not be resolved by conventional algorithms and technologies. That in turn offers space for new business areas, business models, products and services.” The ultimate discipline is to bring Artificial Intelligence and machinery safety together, because AI is what is known as a black box: “You cannot predict or comprehend how a complex system makes a decision”, Lukas Elwinger explains. “But that is absolutely essential for functional safety. Each reaction of the safety system must be justifiable – even when it involves AI.” Three components are relevant when implementing AI in machinery safety: safety and security, transparency, and reliability.
Enabling AI for machinery safety
Where transparency is concerned, research is currently addressing precisely this aspect: the ability to explain why and how an AI makes a decision. With Explainable AI, experts make the complex system understandable, and therefore transparent, for people.

With regard to reliability, experts use algorithms and methods to quantify uncertainty. As a result, an AI system can indicate how sure or unsure it is of its decision, recognise invalid inputs and react accordingly. Reliability can be increased further through appropriate concepts and architectures.

In the area of safety, research uses formal verification methods to check whether an AI system is safe and error-free.

All three components contribute towards the maximum possible reliability and safety of AI in the industrial environment, and towards the protection of humans. An all-encompassing Artificial Intelligence that equals or even exceeds human intelligence – often referred to in the media as “strong AI” – does not yet exist. That means an AI system will only work reliably for tasks that have been defined by humans, under pre-defined framework conditions.
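To illustrate the uncertainty-quantification idea described above, here is a minimal Python sketch. It is not Pilz's implementation, and all names and threshold values are illustrative assumptions: a classifier reports its softmax confidence and abstains – so that a safety system can fall back to a conservative action such as stopping the machine – whenever that confidence is too low.

```python
import math

def softmax(logits):
    """Convert raw model outputs (logits) to a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decide(logits, confidence_threshold=0.9):
    """Return (class_index, confidence), or (None, confidence) to abstain.

    Abstaining lets the surrounding safety system trigger a safe
    fallback instead of acting on an unsure prediction. The threshold
    of 0.9 is an illustrative choice, not a normative requirement.
    """
    probs = softmax(logits)
    confidence = max(probs)
    label = probs.index(confidence)
    if confidence < confidence_threshold:
        return None, confidence  # too unsure: hand over to the fallback
    return label, confidence

# A clear-margin prediction is acted on; a near-tie is rejected.
print(decide([8.0, 0.5, 0.1]))  # confident -> class accepted
print(decide([1.0, 0.9, 0.8]))  # ambiguous -> abstain (None)
```

A real safety-relevant system would use far richer uncertainty measures (e.g. ensembles or calibrated probabilities), but the principle is the same: the decision carries a confidence value, and low confidence routes the system into a defined safe state.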
What does the legislation say?
1. EU AI Act
The EU AI Act, a proposed EU regulation on Artificial Intelligence, provides harmonised rules for the development, deployment and use of AI systems within the EU, and is also intended to protect the safety and fundamental rights of individuals. The AI Act goes beyond requirements at the point of deployment and considers the whole product lifecycle. One sensitive issue is the definition of an AI system, as this governs the entire scope of the regulation: as currently drafted, even algorithms that are already in general use can fall within its scope. However, it remains to be seen what the final definition will be, because the EU AI Act is still at the draft stage.
2. EU Machinery Regulation
The EU Machinery Regulation already establishes that a conformity assessment procedure must be carried out when AI systems are used in functional safety. Accordingly, safety-relevant AI systems are listed as “high-risk machinery”. For manufacturers such as Pilz, this means that a conformity assessment procedure can only be carried out with the involvement of a notified body, even if the relevant harmonised standards are applied.
3. ISO/IEC CD TR 5469 “Artificial Intelligence — Functional safety and AI systems”
Internationally, work is underway on an initial technical report: ISO/IEC CD TR 5469 “Artificial Intelligence — Functional safety and AI systems”. This deals explicitly with “AI & Safety”. Experts from Pilz are collaborating on this document in both national and international committees.
“In the short term, we expect AI to achieve increased efficiency in production. In the long term, we hope to be able to use Artificial Intelligence profitably, even in a safety-relevant environment”, says Matthias Holzäpfel, Vice President Advance Development at Pilz, looking to the future. AI-based object recognition, object tracking and sensor data fusion are fundamental components for implementing safe person detection, for example. Currently, such applications can only be implemented at great engineering expense. In the future, AI systems will be a necessary tool for them.
Pilz has Artificial Intelligence firmly in its sights. Our experts are working specifically with AI technology and with current research on AI and machinery safety. As a reliable partner for safe automation, whose focus is on the protection of both human and machine, Pilz will not leave the use of Artificial Intelligence to chance, however, but will keep a critical eye on legislation and the development of standards.