Europe’s Efforts to Regulate AI Risk Unintended Consequences

Key Takeaways:
– The EU Parliament has given its nod to the Artificial Intelligence Act.
– Originating in 2021, the legislation aims to classify AI risks and prohibit certain unacceptable applications.
– An AI specialist voiced concerns over the possibility of “AI policy tax havens,” where countries might relax their regulations to entice investment.

The European Union is taking significant strides towards the regulation of artificial intelligence. This Wednesday marked a pivotal moment as the European Parliament endorsed the Artificial Intelligence Act with a majority vote—523 in favor, 46 opposed, and 49 abstentions.

Thierry Breton, the European Commissioner for the Internal Market, celebrated this development on social media, proclaiming, “Europe is NOW a global standard-setter in AI. We are regulating as little as possible — but as much as needed!” This legislation represents the first comprehensive effort by a major regulatory body to mitigate the potential dangers AI poses to its citizens, setting a precedent that other nations, including China, have begun to follow with rules targeting specific AI uses.

Despite the positive reception, some experts, like AI and deepfakes authority Henry Ajder, have their reservations. Ajder praised the ambition of the act but cautioned that it might render Europe less attractive on the global stage. He expressed concerns over companies deliberately avoiding development in regions with stringent regulations, fearing that some countries might become “AI policy tax havens” by not enforcing strict laws to draw in businesses.

The journey of the Artificial Intelligence Act began in 2021 and reached a provisional agreement among member states in December 2023. The legislation seeks to categorize AI applications based on their risk levels, outright banning those deemed to pose unacceptable risks.

Neil Serebryany, CEO of Calypso AI, views the act as a “key milestone in the evolution of AI,” despite acknowledging the potential initial burden of compliance on businesses. He sees it as an opportunity for the advancement of AI in a responsible and transparent manner, encouraging companies to integrate social values into their products from the outset.

The regulation is slated to take effect starting in May, pending final approvals, with a phased implementation extending into 2025. The specifics of how these rules will affect businesses remain somewhat unclear.

Avani Desai, CEO of cybersecurity firm Schellman, suggests that the act could mirror the impact of the EU’s General Data Protection Regulation (GDPR), requiring U.S. companies to meet certain standards in order to operate within Europe. As the EU Commission sets up the AI Office and begins establishing standards, Marcus Evans of Norton Rose Fulbright advises companies to start preparing immediately to navigate the new regulations effectively, with some obligations taking effect this year and others over the next three years.