European Union reaches a deal on the world’s first artificial intelligence law


We often question whether there is a limit, and what that limit is, to the scope of the tests launched by companies investing in the new capabilities of AI.

A few days ago, the European Union reached a deal on the world's first Artificial Intelligence Law: a provisional agreement, to be applied in stages, that was struck after 36 hours of negotiations and defines duties and rules.

In short, the law allows or prohibits uses of the technology according to the risk they pose to people. It also seeks to protect the EU from giants such as China and the United States.

Although both the European Parliament and the EU Council still have to ratify the deal, the countries involved managed to reach an agreement after several months of negotiations, which is no small feat.

One of the flashpoints of the negotiation was the regulation of the use of biometric identification in public spaces in the name of citizen security.

Another relevant issue concerns AI-based identification of emotions: artificial intelligence systems that can identify emotions and manipulate human behaviour will be prohibited in schools and workplaces.

Finally, one of the main topics was the use of ChatGPT: although its use is not forbidden, its developers are now bound by stricter rules requiring them to contextualize and clarify the data and information the platform provides.

Many Europeans screamed blue murder, arguing that this Act will allow the United States to grow exponentially in AI development while leaving Europe doomed to failure.

In any case, a few weeks ago Joe Biden issued what we consider a good effort by the U.S. government to address the biggest risks arising from current technology. It is an executive order that seeks to harness the potential of artificial intelligence while regulating its risks. In essence, it proposes to compel companies to demonstrate that their most powerful systems are safe before they are deployed.