The UK is pressing Elon Musk’s X to curb the spread of intimate deepfake images online.
The UK government has urged Elon Musk’s social media platform X to take immediate action over the growing spread of intimate deepfake images on its network. Ministers say the situation has become serious and harmful, especially for women and girls who are being targeted without their consent.
The warning from Britain comes as concern grows across Europe about the misuse of artificial intelligence tools to create fake images that appear real. These images often show women or minors in sexualized or revealing ways, even though the people shown never agreed to such content being created or shared.
Government Describes Content as Disturbing
Technology Secretary Liz Kendall strongly criticized the spread of these images and described the content as deeply disturbing. In a public statement, she said the situation was unacceptable and demanded swift action from X.
She said no one should have to face the trauma of seeing fake intimate images of themselves online. According to her, such content is humiliating and harmful, and it affects victims long after the images are posted.
Kendall also stressed that women and girls are the main targets of this abuse. She said the government would not tolerate platforms allowing these images to circulate unchecked.
Focus on X and Its AI Tools
The criticism follows reports that X’s built-in artificial intelligence chatbot, known as Grok, has been used to generate large numbers of sexualized images. These images can be created on demand and often show women and minors wearing little clothing or placed in suggestive situations.
While the images are not real photographs, they are designed to look realistic. Experts warn that this makes them especially damaging, as viewers may believe they show real people in compromising situations.
Campaigners say the ease with which these images can be made has led to a surge in abuse. They argue that platforms must take responsibility for how their tools are used.
Rise of Non-Consensual Deepfakes
Deepfake technology has advanced rapidly in recent years. What once required specialist skills can now be done using simple tools available to the public. This has made it easier for people to create fake videos and images that appear convincing.
Non-consensual deepfakes involve using someone’s likeness without permission. Victims often discover these images through friends, family, or online searches. The emotional impact can be severe, leading to anxiety, fear, and damage to reputations.
Women, public figures, and young people are particularly at risk. Advocacy groups say this form of abuse is becoming one of the fastest-growing online harms.
Pressure Mounts on Social Media Platforms
The UK is not alone in raising concerns. Other European countries have also called for stronger action against deepfake abuse. Regulators and lawmakers are questioning whether social media companies are doing enough to protect users.
Critics argue that platforms often respond too slowly and rely on victims to report content after harm has already occurred. They say stronger safeguards are needed to prevent such material from appearing in the first place.
Technology experts say companies that develop AI tools must build in limits to prevent misuse. Without these protections, harmful content can spread quickly before moderators have a chance to remove it.
UK’s Stand on Online Safety
Britain has taken a tougher stance on online harm in recent years. The government has introduced new rules that require platforms to remove illegal and harmful content more quickly. Companies that fail to comply can face heavy fines.
Liz Kendall’s statement suggests that the government is prepared to use these powers if necessary. She made it clear that X must take responsibility for what happens on its platform.
Officials say protecting users from abuse is not optional. Platforms that allow harmful content to spread risk losing public trust and facing legal consequences.
Impact on Victims
For victims of deepfake abuse, the experience can be life-changing. Many say the images make them feel powerless and exposed. Even when content is removed, copies can continue to circulate on other sites.
Support groups report an increase in people seeking help after discovering fake images of themselves online. Some victims fear the impact on their careers, relationships, and mental health.
Experts say the harm caused by deepfakes is similar to other forms of sexual abuse. They argue that laws and enforcement need to reflect the seriousness of the crime.
Calls for Faster Action
Campaigners have welcomed the UK government’s strong language but say words must be followed by action. They want platforms to introduce automatic detection tools and stricter controls on AI-generated content.
Some groups are also calling for clearer reporting systems and faster response times. They say victims should not have to navigate complex processes to get harmful images removed.
There are also demands for better cooperation between governments and technology companies to tackle the issue at a global level.
Responsibility of AI Developers
The controversy has also sparked debate about the responsibilities of companies that build AI systems. Critics say developers must consider how their tools can be abused before releasing them to the public.
They argue that profit and innovation should not come at the cost of user safety. Without proper safeguards, advanced tools can cause real-world harm.
Supporters of regulation say clear rules can help ensure AI is used responsibly while still allowing innovation to continue.
X Yet to Respond in Detail
At the time of the government’s statement, X had not issued a detailed public response addressing the concerns raised. In the past, the platform has said it is committed to tackling abuse and improving moderation.
However, critics argue that current measures are not enough. They say the scale of the problem shows that stronger controls are needed, especially when AI tools are involved.
The lack of a clear response has added to pressure on the company to explain how it plans to address the issue.
Wider Debate on Online Harm
The situation has added to a wider debate about the role of social media in society. As platforms become more powerful, governments are questioning how much responsibility they should carry.
Some argue that technology companies have grown too fast without enough oversight. Others warn that heavy regulation could limit free expression.
For now, the focus remains on protecting users from harm while balancing innovation and openness.
Conclusion
The UK’s call for urgent action highlights the growing concern over intimate deepfake images on social media platforms. With women and girls being targeted at alarming rates, officials say companies like X must act quickly and decisively.
As AI tools become more advanced, the risks of misuse also increase. Governments, platforms, and developers now face pressure to work together to stop abuse before it causes lasting harm.
The message from Britain is clear: online safety must come first, and platforms that fail to protect users will be held to account.
