The European Commission launches a Digital Services Act probe into X’s Grok AI over alleged failures to prevent illegal content.
The European Union has launched a formal investigation into X, formerly known as Twitter, over concerns that its Grok AI chatbot may be producing and spreading illegal content. The probe focuses on whether the company failed to properly assess risks before releasing new AI features and whether it broke EU digital laws in the process.
The European Commission announced the investigation on Monday under the Digital Services Act, a law designed to hold large online platforms accountable for harmful and illegal content.
Concerns Over Illegal and Harmful Material
According to the Commission, the investigation will examine whether Grok’s image generation tools allowed the creation and spread of illegal material, including sexually explicit images involving children. Officials say these risks were not only theoretical but have already caused real harm.
The Commission stated that EU citizens may have been exposed to serious and illegal content through Grok’s features. Regulators will now assess whether X took enough steps to prevent this from happening.
Risk Assessment Under Scrutiny
One of the main questions in the probe is whether X properly evaluated the risks linked to Grok before making the tool available to users. Under EU law, large platforms are required to assess potential risks and implement safeguards before launching features that could be misused. The Commission said it will review how X handled this process and whether it complied with its legal duty to reduce the spread of illegal content.
Rising Concerns Over Deepfakes
The investigation comes amid growing international concern about the misuse of AI tools to create deepfake images. Grok has faced criticism for its role in generating non-consensual and sexualized images, including content involving minors. Advocacy groups and researchers have warned that such tools can be abused easily and spread quickly on social media platforms, making it difficult to remove harmful material once it appears.
X Introduced Restrictions, But Issues Remain
In response to public backlash, X introduced new restrictions about two weeks ago. The company limited Grok’s image generation feature to paid subscribers and added technical measures intended to stop users from digitally removing clothing from images of real people.
X also blocked access to the feature in countries where such content is illegal. However, these actions may not have gone far enough. Researchers found that roughly one-third of sexualized images of children identified in a sample by the Center for Countering Digital Hate were still accessible on X after the changes were made.
Strong Words From EU Officials
Henna Virkkunen, the European Commission’s Executive Vice President for Tech Sovereignty, Security and Democracy, said the investigation will determine whether X respected the rights of European citizens. She stated that the probe will examine whether the platform treated the safety of users, including women and children, as an acceptable cost of doing business.
Earlier this month, EU officials also criticized a Grok feature known as Spicy Mode, which allowed the creation of explicit images. At a press conference in Brussels, Commission spokesperson Thomas Regnier strongly condemned the feature, calling it illegal and unacceptable in Europe.
Calls for Better Control Over AI Content
Industry voices have also weighed in on the issue. Fraser Edwards, co-founder and CEO of cheqd, said creators should have control over how their image and likeness are used in AI-generated content.
He argued that the backlash over deepfake abuse highlights a deeper problem with the internet. According to Edwards, there is still no reliable way to verify who created synthetic content or whether its use was approved. As a result, responsibility often falls on platforms like X instead of the individuals who generated the harmful material.
Possible Violations of EU Law
If the Commission confirms its findings, X could be found in violation of several provisions of the Digital Services Act, including rules that require platforms to assess and mitigate systemic risks, particularly those linked to illegal content and gender-based violence.
The law also demands transparency and accountability from large online platforms operating in the EU. Penalties under the Digital Services Act can be severe, including large fines or further restrictions on platform operations.
Part of a Broader Case Against X
This investigation builds on an earlier case launched in late 2023. That case resulted in a fine of 120 million euros against X in December for deceptive design practices, failures in advertising transparency, and limited access for researchers. Since then, the Commission has expanded its scrutiny to include Grok AI and its content generation features. Regulators have previously raised concerns about antisemitic content produced by the chatbot as well.
What Happens Next
The investigation is still in its early stages. The Commission will now gather evidence, review X’s internal processes, and assess whether the company met its legal obligations under EU law. X will have the opportunity to respond to the allegations. Depending on the outcome, the case could lead to further fines or stricter requirements for how AI tools are deployed on the platform.
A Test Case for AI Regulation
The probe into Grok is being closely watched across the tech industry. It may set an important precedent for how AI-powered features are regulated in Europe. As AI tools become more powerful and widely used, regulators are under pressure to ensure that innovation does not come at the cost of public safety and basic rights. For X, the outcome of this investigation could shape how it develops and deploys AI products in the future.
