Ten Answers on European Artificial Intelligence Law

Artificial intelligence (AI) systems and programs are capable of performing tasks typical of human intelligence, such as reasoning, learning (machine learning), perceiving and understanding natural language, and solving problems. It is already present in all areas of our lives, from routine applications such as shopping or watching movies to the development of new drug formulations or the organization of manufacturing processes. It makes it possible to automate tasks, make decisions, improve efficiency and deliver solutions in areas as diverse as medicine, industry, robotics and financial services. European AI legislation is gradually beginning to apply, with the aim of ensuring that this development follows ethical and legal criteria. Here are ten answers to questions raised by a pioneering initiative worldwide:

Why does Europe regulate it?

Artificial intelligence delivers societal benefits, spurs economic growth, and improves innovation and competitiveness. Most applications pose little or no risk. But others can create situations that contravene rights and freedoms, such as the use of artificial intelligence to create unwanted pornographic images, the use of biometric data to classify people by features of their appearance, or its use in employment, education, health care or predictive policing.

What are the types of risk?

Minimal risk: Most systems fall into this category. For these applications, the provider may voluntarily follow ethical requirements and adhere to a code of conduct. General-purpose AI with potential systemic risk is considered to be any model trained with a cumulative computing power of more than 10²⁵ floating-point operations (FLOPs), a measure of the total amount of computation used in training, which the Commission treats as a threshold for potential systemic risks. The EU considers OpenAI's GPT-4 (the model behind ChatGPT) and Google DeepMind's Gemini to be in this range, and the threshold could be revised through a delegated act (see the illustrative estimate after this list of risk types).

High risk: These are models that may affect people's safety or their rights. The list is open to permanent review, but the regulation already identifies areas of application that fall into this category, such as critical communications and distribution infrastructure, education, workforce management and access to essential services.

Unacceptable risk: Systems in this category are prohibited because they violate fundamental rights. The list includes social classification or scoring, systems that exploit people's vulnerabilities, and systems that infer race, opinion, belief, sexual orientation or emotional reaction. Exceptions are made for police use of biometric identification to investigate 16 specific offenses, such as the search for missing persons, kidnapping, human trafficking and sexual exploitation, threats to life or safety, or responding to a current or foreseeable threat of terrorist attack. In urgent cases, exceptional use may be approved, but if authorization is denied, all data and information must be deleted. In non-urgent situations, the impact on fundamental rights must be assessed in advance, and the relevant market surveillance authority and data protection authority must be notified.

Specific transparency risk: This refers to the risk of manipulation posed by fabricated content that appears genuine (deepfakes) or by conversational applications. The regulation requires that it be made unmistakably clear that the content is an artificial creation or that the user is interacting with a machine.

Systemic risk: The regulation takes into account that the widespread use of large-capacity systems could cause massive or far-reaching damage, for example through cyberattacks, financial fraud or the spread of bias.
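
The 10²⁵ figure mentioned above refers to the total number of floating-point operations used to train a model. As a purely illustrative sketch, not part of the regulation and using hypothetical hardware figures, a provider could estimate whether a training run approaches that threshold by multiplying the number of accelerators, their effective throughput and the training time:

```python
# Back-of-the-envelope estimate of cumulative training compute, to illustrate
# the 10^25 FLOPs systemic-risk threshold. All hardware figures are hypothetical.

ACCELERATORS = 10_000          # hypothetical number of GPUs in the cluster
PEAK_FLOPS_PER_GPU = 1e15      # hypothetical peak throughput per GPU, in FLOP/s
UTILIZATION = 0.4              # fraction of peak throughput actually achieved
TRAINING_DAYS = 90             # hypothetical length of the training run

seconds = TRAINING_DAYS * 24 * 3600
total_flops = ACCELERATORS * PEAK_FLOPS_PER_GPU * UTILIZATION * seconds

THRESHOLD = 1e25               # threshold mentioned in the AI Act
print(f"Estimated training compute: {total_flops:.2e} FLOPs")
print("Above the 10^25 threshold" if total_flops > THRESHOLD else "Below the 10^25 threshold")
```

With these hypothetical numbers the estimate comes to roughly 3 × 10²⁵ FLOPs, above the threshold; any real figure depends entirely on the actual hardware and duration of training.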

Who must obey the law?

All public and private actors using artificial intelligence systems within the EU, whether European or not, must comply with the law. This affects program providers, those who use them, and those who buy them. Everyone must ensure that their system is secure and compliant with the law. High-risk systems must undergo a conformity assessment covering data quality, traceability, transparency, human oversight, accuracy, cybersecurity and robustness, both before and after being placed on the market. If the system or its purpose is changed significantly, the assessment must be repeated. Unless they are used for law enforcement and migration purposes, high-risk AI systems, or the organizations acting on their behalf, must also be registered in a public EU database. Providers of models with systemic risk (training compute greater than 10²⁵ FLOPs) are obliged to assess and mitigate those risks, report serious incidents, perform advanced testing and evaluation, ensure cybersecurity, and provide information on the energy consumption of their models.

What should a conformity assessment include?

Procedures, duration and frequency of use, the types of individuals and groups affected, specific risks, human oversight measures and an action plan in case the risks materialize.

How does a supplier know the effects of their product?

Large companies already have their own systems in place to comply with the standard. For small companies and users of open-source systems, the law creates regulatory sandboxes: controlled environments for testing and trialing innovative technologies under real-world conditions for six months. These may be subject to inspections.

Who is exempt?

Providers of free and open-source models are exempt from commercialization obligations, but not from the obligation to avoid risks. The regulation also does not affect research, development and prototyping activities, or developments intended for defense or national security purposes. General-purpose AI systems must meet transparency requirements, such as producing technical documentation, complying with EU copyright law, and publishing detailed summaries of the content used to train the system.

Who monitors compliance?

The law establishes a European Artificial Intelligence Office, a scientific advisory panel and national supervisory authorities responsible for monitoring and accrediting systems and applications. AI agencies and offices must have access to the information they need to fulfill their duties.

When will the AI Act fully apply?

Following its adoption, the AI Act enters into force 20 days after publication and becomes fully applicable gradually over 24 months. In the first six months, Member States must phase out the banned systems. Within a year, the obligations for general-purpose AI will apply. Within two years, all high-risk systems must be compliant.

What are the penalties for violations?

Where artificial intelligence systems that do not meet the requirements of the Regulation are placed on the market or used, Member States must establish effective, proportionate and dissuasive sanctions for infringements and notify them to the Commission. Depending on the infringement, fines range up to €35 million or 7% of global annual revenue of the previous financial year, up to €15 million or 3% of revenue, and up to €7.5 million or 1.5% of turnover. For each type of violation, the limit is the lower of the two amounts for SMEs and the higher for other entities.
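
Purely as an illustration of the rule described above, and with hypothetical turnover figures, the applicable cap for one infringement category can be computed by comparing the fixed amount with the turnover-based amount and taking the lower of the two for an SME or the higher for any other entity:

```python
def fine_cap(fixed_eur: float, pct_of_turnover: float,
             annual_turnover_eur: float, is_sme: bool) -> float:
    """Maximum fine for one infringement category: the lower of the two
    amounts for SMEs, the higher of the two for other entities."""
    turnover_based = pct_of_turnover * annual_turnover_eur
    return min(fixed_eur, turnover_based) if is_sme else max(fixed_eur, turnover_based)

# Hypothetical large company with 2 billion euros in global annual revenue,
# for the most serious category (35 million euros or 7% of revenue).
print(fine_cap(35_000_000, 0.07, 2_000_000_000, is_sme=False))  # 140,000,000 euros
# The same category for a hypothetical SME with 10 million euros in revenue.
print(fine_cap(35_000_000, 0.07, 10_000_000, is_sme=True))      # 700,000 euros
```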

What can a victim of a breach do?

The AI Act provides a right to complain to a national authority and makes it easier for individuals to seek compensation for damages caused by high-risk artificial intelligence systems.

You can follow EL PAÍS Technology on Facebook and X, or sign up here to receive our weekly newsletter.
