European regulation on Artificial Intelligence

Artificial intelligence (AI) is increasingly present in our lives. Whether it’s ChatGPT, Midjourney, movie recommendations on Netflix, or even diagnostics in hospitals, AI is all around us. But with this rise in power, questions arise: are these technologies safe? Do they respect our rights? Can they be used unfairly? It is to address these concerns that the European Union has proposed the AI Act, a law that aims to regulate the use of AI.
The AI Act is a set of rules that aims to ensure that AI is used responsibly. In other words, the law wants AI technologies to respect people’s rights, be transparent, and most importantly, not pose unacceptable risks. The goal is to make AI more reliable and trustworthy for everyone.
Why regulation is necessary
AI can do amazing things, but it is not without its flaws. For example, if an AI is poorly designed or uses poor-quality data, it can make biased or unfair decisions. This is called data bias. Imagine an AI used to hire people in a company. If this AI was trained on data where the majority of candidates were men, it could unintentionally favor men and discriminate against women. This is a problem because it could make the hiring process unfair.
Another challenge is so-called “black boxes.” This term refers to AI systems that make decisions without anyone really understanding how they arrive at them. This can be dangerous, especially in sensitive areas such as justice or healthcare. For example, if an AI decides on the medical treatment for a patient without doctors knowing how it arrived at this conclusion, this could lead to serious errors.
It is to avoid these problems that the AI Act is needed. It is about ensuring that AI is used in a fair, ethical, and transparent manner.
The different levels of risk
The AI Act classifies AI systems into three categories based on their level of risk:
Banned AI: Some uses of AI are deemed too dangerous and are therefore banned. For example, systems that manipulate human behavior unconsciously, such as subliminal persuasion techniques, are not allowed. It’s a bit like banning subliminal advertising that influences our behavior without us being aware of it.
High-risk AI: Some AIs have a major impact on our lives and must therefore comply with very strict rules. This concerns, for example, systems used in the fields of health, education, or justice. These AIs must be rigorously tested, monitored, and documented to ensure that they work correctly and fairly. For example, an AI used to diagnose diseases must be very reliable and not make mistakes that could endanger patients’ lives.
Limited-risk AI: This category includes AIs that are considered less dangerous, but that still have to follow certain transparency rules. For example, an AI that recommends videos on YouTube must inform the user that it uses algorithms to make its suggestions. This allows the user to understand how these recommendations are made.
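As a rough illustration of the three tiers described above, here is a minimal sketch in Python. The tier names, example uses, and obligation lists are simplified assumptions for the sake of the example, not an official mapping from the regulation:

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical encoding of the AI Act's three risk tiers."""
    BANNED = "banned"              # e.g. subliminal manipulation techniques
    HIGH_RISK = "high_risk"        # e.g. health, education, justice systems
    LIMITED_RISK = "limited_risk"  # e.g. recommendation systems

# Illustrative obligations per tier (simplified, not legal advice)
OBLIGATIONS = {
    RiskTier.BANNED: ["prohibited"],
    RiskTier.HIGH_RISK: ["rigorous testing", "monitoring", "documentation"],
    RiskTier.LIMITED_RISK: ["inform users that algorithms are used"],
}

def obligations_for(tier: RiskTier) -> list:
    """Return the illustrative obligation list for a given risk tier."""
    return OBLIGATIONS[tier]
```

The point of the sketch is simply that obligations scale with risk: the higher the tier, the heavier the duties attached to it.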
Rules for companies developing AI systems
Companies that develop, sell, or use AI have a big responsibility. They must ensure that their technologies are safe and compliant with the AI Act. To do this, they must follow several steps:
Assess the risks: The first thing to do is to see if their AI falls into one of the high-risk categories. If so, they need to assess the risks the AI could pose and take steps to minimize them. For example, a company developing an AI to diagnose diseases needs to ensure that its system is tested on a variety of data to avoid errors.
Document the process: Companies should keep track of every step of AI development and implementation. This includes testing, auditing, and reporting on AI performance. If an AI causes a problem, these documents will help understand what happened and correct the errors.
Continuously improve: Compliance with the AI Act is not a one-time effort. Companies must regularly update their AI systems to ensure they continue to meet established standards, especially when new data or technologies emerge.
Rules for users of AI systems
The AI Act does not only apply to AI developers and vendors, but also to users, such as companies that purchase AI systems for their own use. For example, a hospital that uses AI to diagnose patients, or a bank that uses AI to assess the creditworthiness of its customers, must follow certain rules:
Monitor operation: If users detect a malfunction, they must suspend the AI’s use and alert the relevant authorities. For example, if an AI starts producing inconsistent or dangerous results, its use must be stopped to avoid any risk to the fundamental rights of individuals.
Log retention: Users should retain the logs generated by AI systems, including logs of the decisions made. These logs help track the AI’s actions, ensuring transparency and accountability. For example, in a medical context, decisions made by an AI regarding diagnoses should be recorded so that they can be verified later.
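As a minimal sketch of what such decision logging might look like in practice, the snippet below appends each AI decision as a JSON line so it can be audited later. The field names and record format are assumptions for illustration, not requirements taken from the AI Act:

```python
import json
from datetime import datetime, timezone

def log_decision(logfile, system_id: str, inputs: dict, decision: str) -> dict:
    """Append one AI decision to an audit log as a single JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,   # which AI system made the decision
        "inputs": inputs,         # the data the decision was based on
        "decision": decision,     # the outcome, e.g. a diagnosis
    }
    logfile.write(json.dumps(record) + "\n")
    return record
```

Writing one self-describing record per decision means an auditor can later reconstruct what the system decided, when, and on what inputs, which is exactly the traceability the retention rule is after.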