
FBI warns of increasing use of AI-generated deepfakes in sextortion schemes


The FBI on Monday warned of the increasing use of artificial intelligence to generate phony videos for use in sextortion schemes that attempt to harass minors and non-consenting adults or coerce them into paying ransoms or complying with other demands.

The scourge of sextortion has existed for decades. It involves an online acquaintance or stranger tricking a person into providing a payment, an explicit or sexually themed photo, or some other inducement by threatening to share compromising images already in the scammer’s possession. In some cases, the images are real and were obtained from someone the victim knows or from an account that was breached. Other times, the scammers only claim to have explicit material without providing any proof.

After convincing victims their explicit or compromising pictures are in the scammers’ possession, the scammers demand some form of payment in return for not sending the content to family members, friends, or employers. In the event victims send sexually explicit images as payment, scammers often use the new content to keep the scam going for as long as possible.

In recent months, the FBI said in an alert published Monday, the use of AI to generate fake videos that appear to show real people engaged in sexually explicit activities has grown.

“The FBI continues to receive reports from victims, including minor children and non-consenting adults, whose photos or videos were altered into explicit content,” officials wrote. “The photos or videos are then publicly circulated on social media or pornographic websites for the purpose of harassing victims or sextortion schemes.”


They went on to write:

As of April 2023, the FBI has observed an uptick in sextortion victims reporting the use of fake images or videos created from content posted on their social media sites or web postings, provided to the malicious actor upon request, or captured during video chats. Based on recent victim reporting, the malicious actors typically demanded:

1. Payment (e.g., money, gift cards) with threats to share the images or videos with family members or social media friends if funds were not received; or
2. The victim send real sexually themed images or videos.

Software and cloud-based services for creating so-called deepfake videos are abundant online and run the gamut from freely available open source offerings to subscription accounts. With advances in AI in recent years, the quality of these offerings has drastically improved, to the point where a single image of a person’s face is all that’s needed to create a realistic video that uses the person’s likeness.

Most deepfake offerings at least ostensibly include protections against abuse, for instance a built-in check that prevents the program from working on “inappropriate media.” In practice, these guardrails are often easy to skirt, and services available in underground markets come without the restrictions.

Scammers often obtain victims’ photos from social media or elsewhere and use them to create “sexually themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites,” FBI officials warned. “Many victims, which have included minors, are unaware their images were copied, manipulated, and circulated until it was brought to their attention by someone else. The photos are then sent directly to the victims by malicious actors for sextortion or harassment, or until it was self-discovered on the Internet. Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the Internet.”

The FBI urged people to take precautions to prevent their images from being used in deepfakes.

“Although seemingly innocuous when posted or shared, the images and videos can provide malicious actors an abundant supply of content to exploit for criminal activity,” officials stated. “Advancements in content creation technology and accessible personal images online present new opportunities for malicious actors to find and target victims. This leaves them vulnerable to embarrassment, harassment, extortion, financial loss, or continued long-term re-victimization.”

People who have received sextortion threats should retain all available evidence, particularly any screenshots, texts, tape recordings, and emails that document usernames, email addresses, websites or names of platforms used for communication, and IP addresses. They can report sextortion immediately to their local FBI field office or to the FBI at tips.fbi.gov.


Source: Ars Technica
