AI Deepfake Nude Services Soar in Popularity: Research

courtesy of cointelegraph.com

Social Media Analytics Company Reports Alarming Increase in "AI Undressing"

A recent report from Graphika, a social media analytics company, reveals a significant surge in the use of "AI undressing" services — generative artificial intelligence (AI) tools designed to remove clothing from user-submitted images. The report measures the number of comments and posts on Reddit and X containing referral links to websites and Telegram channels that offer synthetic Non-Consensual Intimate Images (NCII) services. The volume of these links has increased by 2,408% compared to last year, with over 32,100 instances recorded so far in 2023.

The Dark Side of Synthetic NCII

Synthetic NCII services create explicit content using AI tools without the consent of those depicted. Graphika warns that the accessibility and affordability of these tools make it easy for providers to generate and distribute realistic explicit content at scale. Without such services, would-be customers would have to host and manage their own custom image diffusion models, a time-consuming and potentially costly task. The growing use of AI undressing tools raises concerns about fake explicit content and its potential contribution to targeted harassment, sextortion, and the production of child sexual abuse material (CSAM).

Expanding Beyond Images: AI in Video Deepfakes

While AI undressing tools primarily target images, AI has also been used to create video deepfakes of celebrities such as YouTube personality MrBeast and Hollywood actor Tom Hanks. This expansion into video further highlights the risks and challenges posed by AI-generated content.

The Threat of AI-Generated Child Pornography

A separate report from the UK-based Internet Watch Foundation (IWF) documented 20,254 images of child abuse on a single dark web forum in just one month. The IWF warns that advances in generative AI imaging could allow AI-generated child sexual abuse material to overwhelm the internet. Distinguishing deepfake pornography from authentic images has become increasingly difficult, posing a serious threat to information integrity, especially on social media platforms.

International Concerns and Regulatory Actions

The United Nations has highlighted the urgent and significant threat posed by AI-generated media to information integrity, particularly on social media platforms. In response, negotiators from the European Parliament and Council have agreed on rules governing the use of AI in the European Union. These actions reflect the growing recognition of the need to address the risks and dangers associated with AI technology.
