Lawmakers who helped shape the European Union’s landmark AI Act are worried that the 27-member bloc is considering watering down aspects of the AI rules in the face of lobbying from U.S. technology companies and pressure from the Trump administration.
The EU’s AI Act was approved just over a year ago, but its rules for general-purpose AI models like OpenAI’s GPT-4o will only come into effect in August. Ahead of that, the European Commission—which is the EU’s executive arm—has tasked its new AI Office with preparing a code of practice for the big AI companies, spelling out how exactly they will need to comply with the legislation.
But now a group of European lawmakers, who helped to refine the law’s language as it passed through the legislative process, is voicing concern that the AI Office will blunt the impact of the EU AI Act in “dangerous, undemocratic” ways. The leading American AI vendors have amped up their lobbying against parts of the EU AI Act recently, and the lawmakers are also concerned that the Commission may be looking to curry favor with the Trump administration, which has already made it clear it sees the AI Act as anti-innovation and anti-American.
The EU lawmakers say the third draft of the code, which the AI Office published earlier this month, takes obligations that are mandatory under the AI Act and inaccurately presents them as “entirely voluntary.” These obligations include testing models to see how they might allow things like wide-scale discrimination and the spread of disinformation.
In a letter sent Tuesday to European Commission vice president and tech chief Henna Virkkunen, first reported by the Financial Times but published in full for the first time below, current and former lawmakers said making these model tests voluntary could allow AI providers who “adopt more extreme political positions” to warp European elections, restrict freedom of information, and disrupt the EU economy.
“In the current geopolitical situation, it is more important than ever that the EU rises to the challenge and stands strong on fundamental rights and democracy,” they wrote.
Brando Benifei, who was one of the European Parliament’s lead negotiators on the AI Act text and the first signatory on this week’s letter, told Fortune Wednesday that the political climate may have something to do with the watering-down of the code of practice. The second Trump administration is antagonistic toward European tech regulation; Vice President JD Vance warned in a fiery speech at the Paris AI Action Summit in February that “tightening the screws on U.S. tech companies” would be a “terrible mistake” for European countries.
“I think there is pressure coming from the United States, but it would be very naive [to think] that we can make the Trump administration happy by going in this direction, because it would never be enough,” noted Benifei, who currently chairs the European Parliament’s delegation for relations with the U.S.
Benifei said he and other former AI Act negotiators had met with the Commission’s AI Office experts, who are drafting the code of practice, on Tuesday. On the basis of that meeting, he expressed optimism that the offending changes could be rolled back before the code is finalized.
“I think the issues we raised have been considered, and so there is space for improvement,” he said. “We will see that in the next weeks.”
Virkkunen had not provided a response to the letter, nor to Benifei’s comment about U.S. pressure, at the time of publication. However, she has previously insisted that the EU’s tech rules are fairly and consistently applied to companies from any country. Competition Commissioner Teresa Ribera has also maintained that the EU “cannot transact on human rights [or] democracy and values” to placate the U.S.
Shifting obligations
The key part of the AI Act here is Article 55, which places significant obligations on the providers of general-purpose AI models that come with “systemic risk”—a term that the law defines as meaning the model could have a major impact on the EU economy or has “actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale.”
The act says that a model can be presumed to have systemic risk if the computational power used in its training “measured in floating point operations [FLOPs] is greater than 10²⁵.” This likely includes many of today’s most powerful AI models, though the European Commission can also designate any general-purpose model as having systemic risk if its scientific advisors recommend doing so.
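To give a rough sense of scale, the sketch below estimates training compute against that 10²⁵ FLOP presumption threshold using the common “6 × parameters × training tokens” rule of thumb for dense transformer training. The approximation, the figures, and the function names are illustrative assumptions for this article, not values drawn from the Act or from any provider’s disclosures.

```python
# Hypothetical back-of-the-envelope check against the AI Act's 10^25 FLOP
# presumption threshold. The "6 * parameters * tokens" estimate is a common
# community rule of thumb, not a formula from the Act itself.

FLOP_THRESHOLD = 1e25  # compute level above which systemic risk is presumed


def estimated_training_flops(num_parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * num_parameters * training_tokens


def presumed_systemic_risk(num_parameters: float, training_tokens: float) -> bool:
    """True if the estimate exceeds the 10^25 FLOP presumption threshold."""
    return estimated_training_flops(num_parameters, training_tokens) > FLOP_THRESHOLD


if __name__ == "__main__":
    # Illustrative example only: a 1-trillion-parameter model trained on
    # 10 trillion tokens would land at roughly 6e25 FLOPs.
    params, tokens = 1e12, 1e13
    print(f"Estimated training compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
    print(f"Presumed systemic risk: {presumed_systemic_risk(params, tokens)}")
```

Under these assumed numbers the estimate comfortably exceeds the threshold, which is why many frontier-scale models are expected to fall under the presumption even before any case-by-case designation by the Commission.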
Under the law, providers of such models have to evaluate them “with a view to identifying and mitigating” any systemic risks. This evaluation has to include adversarial testing—in other words, trying to get the model to do bad things, to figure out what needs to be safeguarded against. They then have to tell the European Commission’s AI Office about the evaluation and what it found.
This is where the third version of the draft code of practice becomes problematic.
The first version of the code was clear that AI companies need to treat large-scale disinformation or misinformation as systemic risks when evaluating their models, because of their threat to democratic values and their potential for election interference. The second version didn’t specifically talk about disinformation or misinformation, but still said that “large-scale manipulation with risks to fundamental rights or democratic values,” such as election interference, was a systemic risk.
Both the first and second versions were also clear that model providers should consider the possibility of large-scale discrimination as a systemic risk.
But the third version only lists risks to democratic processes, and to fundamental European rights such as non-discrimination, as being “for potential consideration in the selection of systemic risks.” The official summary of changes in the third draft maintains that these are “additional risks that providers may choose to assess and mitigate in the future.”
In this week’s letter, the lawmakers who negotiated with the Commission over the final text of the law insisted that “this was never the intention” of the agreement they struck.
“Risks to fundamental rights and democracy are systemic risks that the most impactful AI providers must assess and mitigate,” the letter read. “It is dangerous, undemocratic and creates legal uncertainty to fully reinterpret and narrow down a legal text that co-legislators agreed on, through a Code of Practice.”
This story was originally featured on Fortune.com