
An alarming rise in artificial intelligence (AI)-generated intimate deepfakes has prompted global backlash against chatbots and raised serious concerns over “nudification” applications. Elon Musk’s chatbot Grok has most recently come under fire for generating sexually explicit images of women and children. The issue surfaced in December 2025, when Grok Imagine, Musk’s image and video generation tool, was used on a large scale to non-consensually alter images of X users. These images are readily available to all users on X. Trends over the past several weeks suggest that Grok currently generates roughly one non-consensual intimate deepfake image every minute.
Grok is not the only culprit in the generation of intimate deepfakes. Nudification apps, accessible through social media and search engines, use generative AI to produce nude images of real individuals. The democratization of AI image generation has made deepfake tools increasingly accessible. At the same time, AI service providers such as OpenAI have been working to greatly improve the quality of their image generation technology, meaning intimate deepfakes are often highly realistic and jeopardize the privacy of non-consenting subjects. An estimated 98 per cent of generated deepfakes depict nude subjects, and an estimated 99 per cent of nude deepfakes depict women and girls. An October 2024 study by the non-profit Internet Matters found that, as the volume of deepfakes has rapidly increased, the evidence suggests that most are used for harmful purposes.
In response to this problem, government committees are urging lawmakers to clarify how they will address AI-generated intimate deepfakes.
In 2024, the Australian government announced its plan to impose a “digital duty of care” on tech companies, aiming to reduce online harms. Generally, a duty of care forms the basis of the law of negligence, placing an onus on parties to ensure the safety of the beneficiary by avoiding foreseeable harms. The proposed duty would require tech companies to carry out regular risk assessments to proactively identify and address harmful content. Since the proposal was announced, the intimate deepfake problem in Australia has worsened.
In June 2025, Australia’s eSafety Commissioner reported that complaints involving deepfakes and other digitally manipulated images of individuals under 18 had more than doubled over the preceding 18 months. Despite this, it is not a crime to develop, host or promote nudification tools, whether they are used to create adult or child content. Although a bill has been introduced to criminalize and ban the solicitation and use of these AI abuse apps, the Australian government has retained its focus on introducing the digital duty of care, and it has yet to provide a firm time frame for when the ban will come into effect.
The UK government has followed suit in identifying intimate deepfake production as a pressing issue. In June 2025, it enacted the Data (Use and Access) Act (the Act) to comprehensively protect citizens’ data, amending several pieces of existing data and privacy legislation. Despite the Act’s enactment, the government has warned that changes could take upwards of 12 months to take effect, and commencement regulations will determine the exact date each measure comes into force.
The Act includes a provision that would make it an offence to use AI to generate intimate images without the subject’s consent. This provision has yet to come into force, and the UK government has not provided any timeframe for its proposed ban on nudification tools.
Responding to the intimate deepfake issue in the UK, Technology Secretary Liz Kendall said this past week that she would support the Office of Communications (Ofcom) in blocking UK access to Elon Musk’s X platform.
Australia and the UK are not the only countries responding: France, Ireland, Brazil and India have all indicated investigations or possible regulatory action against X over Grok’s AI-generated intimate deepfakes.
At home, the Canadian government has previously identified deepfakes as “a real threat to a Canadian future” and a threat to Canadian democracy. Yet Canada has remained silent amid the current deepfake scandal.
Canada is not presently considering additional regulation of X in response to the backlash, and Canadian ministers and departments remain active on the platform. Notably, Canada has no legislation governing AI models, and its privacy legislation has not been updated in the last 40 years.
The rapid proliferation of AI-generated intimate deepfakes exposes significant gaps in existing legal and regulatory frameworks worldwide.
