EU AI Act Takes Another Step Forward

With a full vote expected in April, two European Parliament committees endorsed a provisional agreement on the future governance of artificial intelligence.

Shane Snider, Senior Writer, InformationWeek

February 15, 2024

[Image: Sign reading "AI-Act" in front of an EU flag, behind a barbed wire fence. Credit: Imago via Alamy Stock]

European lawmakers on Tuesday ratified a provisional agreement that paves the way for landmark legislation for artificial intelligence that will likely have worldwide implications.

While the Biden Administration’s AI executive order provides some regulation in the US, the EU’s AI Act will make the bloc the first major world power to codify AI protections into law. The landmark rules will establish regulations for AI systems like the popular OpenAI chatbot, ChatGPT, and will rein in governments’ use of biometric surveillance.

The European Parliament’s civil liberties (LIBE) and internal market (IMCO) committees approved a draft by a vote of 71-8. The regulations were originally proposed in April 2021 and moved ahead quickly last year after the explosive growth of AI sparked by the success of ChatGPT.

If passed in April, the AI Act would roll out in phases between 2024 and 2027, with increasing levels of legal requirements targeting “high-risk” AI applications.

Despite the overwhelming support, there were some dissenters. In a statement, LIBE committee member Patrick Breyer said the rules did not go far enough to offer safeguards. “The EU’s AI Act opens the door to permanent facial surveillance in real time: Over 7,000 people are wanted by European arrest warrant for the offences listed in the AI Act. Any public space in Europe can be placed under permanent biometric mass surveillance on these grounds.”


He added, “This law legitimizes and normalizes a culture of mistrust. It leads Europe into a dystopian future of a mistrustful high-tech surveillance state.”

Earlier this month, EU member states endorsed the AI Act deal reached in December.

Margrethe Vestager, the EU’s digital chief, said the AI Act was urgently needed considering the recent spread of fake sexually explicit images of pop star Taylor Swift on social media.

“What happened to @taylorswift13 tells it all: the #harm that #AI can trigger if badly used, the responsibility of #platforms, & why it is so important to enforce #tech regulation,” she posted on X (formerly Twitter).

Var Shankar, Responsible AI Institute’s executive director, tells InformationWeek in an email interview that the EU AI Act may complement other AI regulatory initiatives. “The EU AI Act represents a thoughtful and comprehensive approach to AI governance and positions the EU as a leader in setting global rules for AI use,” he says. “At the same time, it is not clear whether we will see a ‘Brussels effect’ like we did with GDPR [the EU’s General Data Protection Regulation], since the US and China both have well-developed AI governance models and host the largest AI companies.”

Shankar adds, “Organizations are also looking to international AI standards and to efforts like the G7’s Hiroshima Code of Conduct for Advanced AI Systems to help guide an international consensus on what constitutes responsible AI implementation.”


About the Author(s)

Shane Snider

Senior Writer, InformationWeek

Shane Snider is a veteran journalist with more than 20 years of industry experience. He started his career as a general assignment reporter and has covered government, business, education, technology and much more. He was a reporter for the Triangle Business Journal, Raleigh News and Observer and most recently a tech reporter for CRN. He was also a top wedding photographer for many years, traveling across the country and around the world. He lives in Raleigh with his wife and two children.
