How Brussels wants to promote artificial intelligence while regulating risks
In February 2020, the European Commission presented the white paper setting out its artificial intelligence strategy. More than a year later, on Wednesday, April 21, it unveiled its proposed regulation at a press conference led by Margrethe Vestager, executive vice-president, and Thierry Breton, commissioner in charge of the internal market.
The main objective of this future legislation is to "strengthen Europe's position as a global hub of excellence in the field of AI, from the laboratory to the market, to ensure that, in Europe, AI respects our values and our rules, and to exploit its potential for industrial purposes," said Thierry Breton on the occasion.
The Commission provides a definition of what it means by "artificial intelligence": "software (…) capable, for a given set of human-defined objectives, of generating results such as content, predictions, recommendations or decisions influencing the environments with which they interact." This is a deliberately broad definition, intended to cover a wide range of technologies.
The Digital Factory has identified five key points to take away from this text, which, before becoming final, will have to be approved by the Parliament and the Council, the body that brings together the Member States' ministers by policy area.
1) To whom does this new framework apply?
These new rules will apply both to public-sector actors and to companies of any size, whether based inside or outside the European Union, for any AI system placed on the market in the Union or whose use has an "impact" on persons located in the EU. They will not, however, apply to non-professional private use.
In other words, a foreign company, American or Chinese for example, will have to comply with this framework if it wants to market its algorithm within the EU. Notably, Microsoft applauded the adoption of the white paper during its "Data Science and Law Forum" event organized in March 2020 in Brussels, a highly symbolic city.
2) Facial recognition prohibited in principle
Title II of the proposed regulation is devoted to prohibited uses. These include the placing on the market or use of an AI system that deploys "techniques" to manipulate a person's behavior in a way that causes them harm, one that exploits the vulnerabilities of a group through discriminatory practices, or one that aims to score individuals according to their social behavior.
The Commission also wants to ban the use of "remote biometric identification systems 'in real time' in spaces accessible to the public for the purpose of maintaining order", which in other words refers to facial recognition systems. But it provides for three exceptions to this principle: the search for victims, the prevention of a specific threat, and the detection or identification of a suspect for an offense punishable by a custodial sentence of at least three years.
From a procedural point of view, each use for law enforcement purposes must be subject to prior authorization granted by "a judicial authority or an independent administrative authority" of the Member State in which the use is to take place. However, in a "duly justified" emergency, use may begin without authorization, which must then be requested during or after the use of facial recognition.
3) A risk-based approach
As foreseen in its white paper, the European Commission adopts a risk-based approach, distinguishing between "high risk" AI systems and all others.
“High risk” AI systems
For an AI system to be considered "high risk", two cumulative conditions must be met: it must be used in a sector where significant risks are to be expected, and it must be used in that sector in a way that makes those risks likely to arise. Examples include technologies used in critical infrastructure, such as transport, which could endanger people's lives; those used in education and vocational training; those integrated into safety components; and those used in the management of border control or in the administration of justice and democratic processes.
To be placed on the market, these "high risk" systems will have to feature "adequate risk assessment and mitigation systems" and robust data sets. They will also have to log their activity to ensure the traceability of results, come with detailed documentation on how they operate and for what purposes, and provide clear information to the end user.
Human oversight must also be at the heart of these systems. It aims to "prevent or reduce to a minimum the risks to health, safety or fundamental rights", the text states. The person responsible for this oversight must, for example, be able to fully understand "the capabilities and limitations of the high-risk AI system" and to monitor its operation, so that "signs of abnormalities, malfunctions and unexpected performance" can be detected and addressed as soon as possible.
“Limited” or “minimal” risk systems
Systems presenting a "limited" risk, such as chatbots, must comply with a series of transparency obligations. People must be informed that they are interacting with an algorithm unless this is obvious, Article 52 specifies. "Deepfakes" also fall into this category: the authors of these strikingly lifelike video effects will be required to state that the content has been artificially generated or manipulated.
"Minimal" risk systems are the default category, covering all "other AI systems". They are not subject to any additional legal obligation. However, the providers of these systems may voluntarily adhere to codes of conduct, Brussels specifies.
4) Creation of a European artificial intelligence committee
At European level, a new body is to be created: the European Artificial Intelligence Committee, provided for in Article 56 of the proposal. It will be responsible for contributing to cooperation between national authorities and helping them ensure consistent application of the future rules. It is the equivalent, for this regulation, of the European Data Protection Board (EDPB) under the General Data Protection Regulation (GDPR).
The future committee will be composed of one representative from each Member State, drawn from a market surveillance authority. It will be chaired by the Commission, which will be responsible for convening meetings and preparing the agenda.
At national level, it is the market surveillance authorities that will ensure compliance with these new rules. Each Member State will therefore have to designate such a body, which must have the "financial and human resources" needed to fulfill its monitoring and control missions. Its members will need specialized expertise in AI and data as well as in fundamental rights.
5) Sanctions for violation of the rules
The Commission provides for a graduated scale of penalties. Companies or public actors using banned algorithms risk a fine of up to 30 million euros or up to 6% of annual worldwide turnover. Failure to comply with the other obligations, apart from those relating to explicitly prohibited algorithms, is punishable by a fine of up to 20 million euros or up to 4% of annual worldwide turnover.
Finally, supplying incorrect, incomplete or misleading information to notified bodies and competent national authorities in response to a request is punishable by administrative fines of up to 10 million euros or up to 2% of total annual worldwide turnover.