The alarming rise of AI ‘nudify’ apps that create explicit images of real people

In June of last year, Jessica Guistolise received a text message that would change her life.

While the technology consultant was dining with colleagues on a work trip in Oregon, her phone alerted her to a text from an acquaintance named Jenny, who said she had urgent information to share about her estranged husband, Ben.

After a nearly two-hour conversation with Jenny later that night, Guistolise recalled, she was dazed and in a state of panic. Jenny told her she’d found pictures on Ben’s computer of more than 80 women whose social media photos were used to create deepfake pornography — videos and photos of sexual activities made using artificial intelligence to merge real photos with pornographic images. All the women in Ben’s images lived in the Minneapolis area.

Jenny used her phone to snap pictures of images on Ben’s computer, Guistolise said. The screenshots, some of which were viewed by CNBC, revealed that Ben used a site called DeepSwap to create the deepfakes. DeepSwap falls into a category of “nudify” sites that have proliferated since the emergence of generative AI less than three years ago. 

CNBC decided not to use Jenny’s surname in order to protect her privacy and withheld Ben’s surname due to his assertion of mental health struggles. Jenny and Ben are now divorced.

Guistolise said that after talking to Jenny, she was desperate to cut her trip short and rush home.

In Minneapolis, the women’s experiences would soon spark growing opposition to AI deepfake tools and those who use them.

One of the manipulated photos Guistolise saw upon her return was generated using a photo from a family vacation. Another was from her goddaughter’s college graduation. Both had been taken from her Facebook page.  

“The first time I saw the actual images, I think something inside me shifted, like fundamentally changed,” said Guistolise, 42.

CNBC interviewed more than two dozen people — including victims, their family members, attorneys, sexual-abuse experts, AI and cybersecurity researchers, trust and safety workers in the tech industry, and lawmakers — to learn how nudify websites and apps work and to understand their real-life impact on people.

“It’s not something that I would wish for on anybody,” Guistolise said.

Jessica Guistolise, Megan Hurley and Molly Kelley talk with CNBC in Minneapolis, Minnesota, on July 11, 2025, about fake pornographic images and videos depicting their faces made by their mutual friend Ben using AI site DeepSwap.

Jordan Wyatt | CNBC

Nudify apps represent a small but rapidly growing corner of the new AI universe, which exploded following the arrival of OpenAI’s ChatGPT in late 2022. Since then, Meta, Alphabet, Microsoft, Amazon and others have collectively spent hundreds of billions of dollars investing in AI and pursuing artificial general intelligence, or AGI — technology that could rival and even surpass the capabilities of humans. 

For consumers, most of the excitement to date has been around chatbots and image generators that allow users to perform complex tasks with simple text prompts. There’s also the burgeoning market of AI companions, and a host of agents designed to enhance productivity. 

But victims of nudify apps are experiencing the flip side of the AI boom. Thanks to generative AI, products such as DeepSwap are so easy to use — requiring no coding ability or technical expertise — that they can be accessed by just about anyone. Guistolise and others said they worry that it’s only a matter of time before the technology spreads widely, leaving many more people to suffer the consequences.

Guistolise filed a police report about the case and obtained a restraining order against Ben. But she and her friends quickly realized there was a problem with that strategy.

Ben’s actions may have been legal. 

The women involved weren’t underage. And as far as they were aware, the deepfakes hadn’t been distributed, existing only on Ben’s computer. While they feared that the videos and images were on a server somewhere and could end up in the hands of bad actors, there was nothing of that sort that they could pin on Ben. 

One of the other women involved was Molly Kelley, a law student who would spend the ensuing year helping the group navigate AI’s uncharted legal maze. 

“He did not break any laws that we’re aware of,” Kelley said, referring to Ben’s behavior. “And that is problematic.”

Ben admitted to creating the deepfakes, and told CNBC by email that he feels guilty and ashamed of his behavior.

Jenny described Ben’s actions as “horrific, inexcusable, and unforgivable,” in an emailed statement.

“From the moment I learned the truth, my loyalty has been with the women affected, and my focus remains on how best to support them as they navigate their new reality,” she wrote. “This is not an issue that will resolve itself. We need stronger laws to ensure accountability — not only for the individuals who misuse this technology, but also for the companies that enable its use on their platforms.”

Readily available

Experts say that, like other new and simple-to-use AI tools, many apps offering nudify services advertise on Facebook and are available to download from the Apple App Store and Google Play Store.

Haley McNamara, senior vice president at the National Center on Sexual Exploitation, said nudify apps and sites have made it “very easy to create realistic sexually explicit, deepfake imagery of a person based off of one photo in less time than it takes to brew a cup of coffee.”

Two photos of Molly Kelley’s face and one of Megan Hurley’s appear on a screenshot taken from a computer belonging to their mutual friend Ben, who used the women’s Facebook photos without their consent to make fake pornographic images and videos using the AI site DeepSwap, July 11, 2025.

A spokesperson from Meta, Facebook’s owner, said in a statement that the company has strict rules barring ads that contain nudity and sexual activity and that it shares information it learns about nudify services with other companies through an industrywide child-safety initiative. Meta characterized the nudify ecosystem as an adversarial space and said it’s improving its technology to try to prevent bad actors from running ads. 

Apple told CNBC that it regularly removes and rejects apps that violate its app store guidelines related to content deemed offensive, misleading and overtly sexual and pornographic. 

Google declined to comment.

The issue extends well beyond the U.S.

In June 2024, around the same time the women in Minnesota discovered what was happening, an Australian man was sentenced to nine years in prison for creating deepfake content of 26 women. That same month, media reports detailed an investigation by Australian authorities into a school incident in which a teenager allegedly created and distributed deepfake content of nearly 50 female classmates.

“Whatever the worst potential of any technology is, it’s almost always exercised against women and girls first,” said Mary Anne Franks, professor at the George Washington University Law School.

Security researchers from the University of Florida and Georgetown University wrote in a research paper presented in August that nudify tools are taking design cues from popular consumer apps and using familiar subscription models. DeepSwap charges users $19.99 a month for “premium” benefits, which include credits that can be used for AI video generation, faster processing and higher-quality images.

The researchers said the “nudification platforms have gone fully mainstream” and are “advertised on Instagram and hosted in app stores.”

Guistolise said she knew that people could use AI to create nonconsensual porn, but she didn’t realize how easy and accessible the apps were until she saw a synthetic version of herself participating in raunchy, explicit activity. 

According to the screenshots of Ben’s DeepSwap page, the faces of Guistolise and the other Minnesota women sit neatly in rows of eight, like in a school yearbook. Clicking on the photos, Jenny’s pictures show, leads to a collection of computer-generated clones engaged in a variety of sexual acts. The women’s faces had been merged with the nude bodies of other women.

DeepSwap’s privacy policy states that users have seven days to look at the content from the time they upload it to the site, and that the data is stored for that period on servers in Ireland. DeepSwap’s site says it deletes the data at that point, but users can download it in the interim onto their own computer. 

The site also has a terms of service page, which says users shouldn’t upload any content that “contains any private or personal information of a third party without such third party’s consent.” Based on the experiences of the Minnesota women, who provided no consent, it’s unclear whether DeepSwap has any enforcement mechanism. 

DeepSwap provides little publicly by way of contact information and didn’t reply to multiple CNBC requests for comment.

CNBC reporting found AI site DeepSwap, shown here, was used by a Minneapolis man to create fake pornographic images and videos depicting the faces of more than 80 of his friends and acquaintances.

In a press release published in July, DeepSwap used a Hong Kong dateline and included a quote attributed to a person the release identified as CEO and co-founder Penyne Wu. The media contact on the release was listed as marketing manager Shawn Banks. 

CNBC was unable to find information online about Wu, and sent multiple emails to the address provided for Banks, but received no response. 

DeepSwap’s website currently lists “MINDSPARK AI LIMITED” as its company name, provides an address in Dublin, and states that its terms of service are “governed by and construed in accordance with the laws of Ireland.”

However, in July, the same DeepSwap page had no mention of Mindspark, and references to Ireland instead said Hong Kong. 

Psychological trauma

Kelley, 42, found out about her inclusion in Ben’s AI portfolio after receiving a text message from Jenny. She invited Jenny over that afternoon.

After learning what happened, Kelley, who was six months pregnant at the time, said it took her hours to muster the strength to view the photos captured from Jenny’s phone. Kelley said what she saw was her face “very realistically on someone else’s body, in images and videos.” 

Kelley said her stress level spiked to a degree that it soon started to affect her health. Her doctor warned her that too much cortisol, brought on by stress, would cause her body not “to make any insulin,” Kelley recalled. 

“I was not enjoying life at all like this,” said Kelley, who, like Guistolise, filed a police report on the matter.

Kelley said that in Jenny’s photos she recognized some of her good friends, including many she knew from the service industry in Minneapolis. She said she notified those women and purchased facial-recognition software to help identify the other victims so they could be informed. About half a dozen victims have yet to be identified, she said.

“It was incredibly time consuming and really stressful because I was trying to work,” she said. 
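Kelley didn’t say which software she used, and the matching step she describes is now commodity technology. Below is a minimal sketch of how that kind of face matching works, assuming the open-source Python `face_recognition` library and hypothetical file names; it is illustrative, not the tool the women used.

```python
# A minimal sketch of off-the-shelf face matching, assuming the open-source
# `face_recognition` library (pip install face_recognition). File names are
# hypothetical; this illustrates the technique, not the specific software used.
import face_recognition

# Encode the face in a known reference photo of the person being searched for.
reference = face_recognition.load_image_file("reference_photo.jpg")
reference_encoding = face_recognition.face_encodings(reference)[0]

def photo_contains_person(photo_path: str, tolerance: float = 0.6) -> bool:
    """Return True if any face detected in the photo matches the reference face."""
    image = face_recognition.load_image_file(photo_path)
    # One 128-dimensional encoding per face detected in the photo.
    encodings = face_recognition.face_encodings(image)
    # Compare each detected face against the single reference encoding.
    matches = face_recognition.compare_faces(encodings, reference_encoding,
                                             tolerance=tolerance)
    return any(matches)

# Screen a set of screenshots; any hit still needs human review.
for path in ["screenshot_01.jpg", "screenshot_02.jpg"]:
    if photo_contains_person(path):
        print(f"Possible match in {path}")
```

A lower tolerance makes matching stricter and reduces false positives, but every automated match still has to be confirmed by a person, which is part of why the identification work was so time consuming.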

Victims of nudify tools can experience significant trauma, leading to suicidal thoughts, self-harm and a deep distrust of others, said Ari Ezra Waldman, a law professor at the University of California, Irvine, who testified at a 2024 House committee hearing on the harms of deepfakes.

Waldman said even when nudified images haven’t been posted publicly, subjects can fear that the images may eventually be shared, and “now someone has this dangling over their head like a sword of Damocles.” 

“Everyone is subject to being objectified or pornographied by everyone else,” he said. 

Three victims showed CNBC explicit, AI-created deepfake images depicting their faces as well as those of other women, during an interview in Minneapolis, Minnesota, on July 11, 2025.

Megan Hurley, 42, said she was trying to enjoy a cruise last summer off the western coast of Canada when she received an urgent text message from Kelley. Her vacation was ruined. 

Hurley described instant feelings of deep paranoia after returning home to Minneapolis. She said she had awkward conversations with an ex-boyfriend and other male friends, asking them to take screenshots if they ever saw AI-generated porn online that looked like her. 

“I don’t know what your porn consumption is like, but if you ever see me, could you please screencap and let me know where it is?” Hurley said, describing the kinds of messages she sent at the time. “Because we’d be able to prove dissemination at that point.”

Hurley said she contacted the FBI but never heard back. She also filled out an online FBI crime report, which she shared with CNBC. The FBI confirmed that it received CNBC’s request for comment, but didn’t provide a response.

The group of women began searching for help from lawmakers. They were led to Minnesota state Sen. Erin Maye Quade, a Democrat who had previously sponsored a bill that became a state statute criminalizing the “nonconsensual dissemination of a deep fake depicting intimate parts or sexual acts.”  

Kelley landed a video call with the senator in early August 2024. 

In the virtual meeting, several women from the group told their stories, and explained their frustrations about the limited legal recourse available. Maye Quade went to work on a new bill, which she announced in February, that would compel AI companies to shut down apps using their technology to create nudify services. 

The bill, which is still being considered, would fine tech companies that offer nudify services $500,000 for every nonconsensual, explicit deepfake that they generate in the state of Minnesota.

Maye Quade told CNBC in an interview that the bill is the modern equivalent of longstanding laws that make it illegal for a person to peep into someone else’s window and snap explicit photos without consent. 

“We just haven’t grappled with the emergence of AI technology in the same way,” Maye Quade said.

Minnesota state Sen. Erin Maye Quade, at left, talks to CNBC’s Jonathan Vanian and Katie Tarasov in Minneapolis on July 11, 2025, about her efforts to pass state legislation that would fine tech companies that offer nudify services $500,000 for every nonconsensual, explicit deepfake image they generate in her state.

Jordan Wyatt | CNBC

But Maye Quade acknowledged that enforcing the law against companies based overseas presents a significant challenge. 

“This is why I think a federal response is more appropriate,” she said. “Because actually having a federal government, a country could take far more actions with companies that are based in other countries.”

Kelley, who gave birth to her son in September 2024, characterized one of her late October meetings with Maye Quade and the group as a “blur,” because she said she was “mentally and physically unwell due to sleep deprivation and stress.”

She said she now avoids social media. 

“I never announced the birth of my second child,” Kelley said. “There’s plenty of people out there who have no idea that I had a baby. I just didn’t want to put it online.”

The early days of deepfake pornography

The rise of deepfakes can be traced back to 2018. That’s when videos showing former President Barack Obama giving speeches he never delivered, and actor Jim Carrey appearing in “The Shining” in place of Jack Nicholson, started going viral.

Lawmakers sounded the alarm. Sites such as Pornhub and Reddit responded by pledging to take down nonconsensual content from their platforms. Reddit said at the time that it removed a large deepfake-related subreddit as part of enforcing a policy banning “involuntary pornography.”

The community congregated elsewhere. One popular place was MrDeepFakes, which hosted explicit AI-generated videos and served as an online discussion forum. 

By 2023, MrDeepFakes had become the top deepfake site on the web, hosting 43,000 sexualized videos depicting nearly 4,000 individuals, according to a 2025 study of the site by researchers from Stanford University and the University of California San Diego.

MrDeepFakes claimed to host only “celebrity” deepfakes, but the researchers found “that hundreds of targeted individuals have little to no online or public presence.” The researchers also discovered a burgeoning economy, with some users agreeing to create custom deepfakes for others at an average cost of $87.50 per video, the paper said.

Some ads for nudify services have gone to more mainstream locations. Alexios Mantzarlis, an AI security expert at Cornell Tech, earlier this year discovered more than 8,000 ads on the Meta ad library across Facebook and Instagram for a nudify service called CrushAI. 

AI apps and sites like Undress, DeepNude and CrushAI are some of the “nudify” tools that can be used to create fake pornographic images and videos depicting real people’s faces pulled from innocuous online photos.

Emily Park | CNBC

At least one DeepSwap ad ran on Instagram in October, according to the social media company’s ad library. The account associated with running the ad does not appear to be officially tied to DeepSwap, but Mantzarlis said he suspects the account could have been an affiliate partner of the nudify service.

Meta said it reviewed ads associated with the Instagram account in question and didn’t find any violations.

Top nudify services are often found on third-party affiliate sites such as ThePornDude that earn money by mentioning them, Mantzarlis said. 

In July, Mantzarlis co-authored a report analyzing 85 nudify services. The report found that the services receive 18.6 million monthly unique visitors in aggregate, though Mantzarlis said that figure doesn’t take into account people who share the content in places such as Discord and Telegram.

As a business, nudify services are a small part of the generative AI market. Mantzarlis estimates their combined annual revenue at about $36 million, though he said that’s a conservative figure that counts only AI-generated content from sites that specifically promote nudify services.

MrDeepFakes abruptly shut down in May, shortly after its key operator was publicly identified in a joint investigative report from Canada’s CBC News, Danish news sites Politiken and Tjekdet, and online investigative outlet Bellingcat.

CNBC reached out by email to an address associated with the person the CBC report named as the operator, but received no reply.

With MrDeepFakes going dark, Discord has emerged as an increasingly popular meeting spot, experts said. Known mostly for its use in the online gaming community, Discord has roughly 200 million global monthly active users who access its servers to discuss shared interests. 

CNBC identified several public Discord servers, including one associated with DeepSwap, where users appeared to be asking others in the forum to create sexualized deepfakes based on photos they shared. 

Leigh Cassidy Gibson, a researcher at the University of Florida, co-authored the 2025 paper that looked at “20 popular and easy-to-find nudification websites.” She confirmed to CNBC that while DeepSwap wasn’t named, it was one of the sites she and her colleagues studied to understand the market. More recently, she said, they’ve turned their attention to various Discord servers where users seek tutorials and how-to guides on creating AI-generated, sexual content.

Discord declined to comment.

‘It’s insane to me that this is legal right now’

At the federal level, the government has at least taken note. 

In May, President Donald Trump signed the “Take It Down Act” into law; its platform takedown requirements take effect in May 2026. The law bans online publication of nonconsensual sexual images and videos, including those that are inauthentic and generated by AI.

“A person who violates one of the publication offenses pertaining to depictions of adults is subject to criminal fines, imprisonment of up to two years, or both,” according to the law’s text.

Experts told CNBC that the law still doesn’t address the central issue facing the Minnesota women, because there’s no evidence that the material was distributed online. 

Maye Quade’s bill in Minnesota emphasizes that the creation of the material is the core problem and requires a legal response. 

Some experts are concerned that the Trump administration’s plans to bolster the AI sector will undercut states’ efforts. In late July, Trump signed executive orders as part of the White House’s AI Action Plan, underscoring AI development as a “national security imperative.” 

Trump’s proposed spending bill earlier this year included a provision that would have deterred states from regulating AI for a 10-year period by putting certain federal subsidies related to AI infrastructure at risk. The Senate struck the provision in July, keeping it out of the bill Trump signed that same month.

“I would not put it past them trying to resurrect the moratorium,” said Waldman, of UC Irvine, regarding the tech industry’s continued influence on AI policy.

A White House official told CNBC that the Take It Down Act, which was supported by the Trump administration and signed months prior to the AI Action Plan, criminalizes nonconsensual deepfakes. The official said the AI Action Plan encourages states to allow federal laws to override individual state laws.

In San Francisco, home to OpenAI and other highly valued AI startups, the city can pursue civil cases against nudify services under California consumer protection laws. Last year, San Francisco sued 16 companies associated with nudify apps.

The San Francisco City Attorney’s office said in June that an investigation related to the lawsuits had led to 10 of the most-visited nudify websites being taken offline or no longer being accessible in California. One of the companies that was sued, Briver LLC, settled with the city and has agreed to pay $100,000 in civil penalties. Additionally, Briver no longer operates websites that can create nonconsensual deepfake pornography, the city attorney’s office said.


Further south, in Silicon Valley, Meta in June sued Hong Kong-based Joy Timeline HK, the company behind CrushAI. Meta said that Joy Timeline attempted to “circumvent Meta’s ad review process and continue placing these ads, after they were repeatedly removed for breaking our rules.”

Still, Mantzarlis, who has been publishing his research on Indicator, said he continues to find nudify-related ads on Meta’s platforms. 

Mantzarlis and a colleague from the American Sunlight Project discovered 4,215 ads for 15 AI nudifier services that ran on Facebook and Instagram since June 11, they wrote in a joint report on Sept. 10. Mantzarlis said Meta eventually removed the ads, some of which were more subtle than others in implying nudifying capabilities.  

Meta told CNBC earlier this month that it removed thousands of ads linked to companies offering nudify services and sent the entities cease-and-desist letters for violating the company’s ad guidelines.

In Minnesota, the group of friends is trying to get on with their lives while continuing to advocate for change.

Guistolise said she wants people to realize that AI is potentially being used to harm them in ways they never imagined.

“It’s so important that people know that this really is out there and it’s really accessible and it’s really easy to do, and it really needs to stop,” Guistolise said. “So here we are.”

Survivors of sexual violence can seek confidential support from the National Sexual Assault Hotline at 1-800-656-4673.

Applied Materials lays off 4% of workforce

Signage outside Applied Materials headquarters in Santa Clara, California, U.S., on Thursday, May 13, 2021.

David Paul Morris | Bloomberg | Getty Images

Chip equipment manufacturer Applied Materials is laying off 4% of its workforce.

The company on Thursday began notifying affected employees around the world “across all levels and groups,” it said in a filing. Applied Materials provides equipment, services and software to the semiconductor and related industries.

Applied Materials had approximately 36,100 full-time employees, according to an August 2025 filing. A layoff of 4% would represent about 1,444 employees.

“Automation, digitalization and geographic shifts are redefining our workforce needs and skill requirements,” the company wrote in the filing. “With this in mind, we have been focused for some time on building high-velocity, high-productivity teams, adopting new technologies and simplifying organizational structures.”

The move comes at the end of the company’s fiscal year. Earlier this month, Applied Materials forecast a $600 million hit to fiscal 2026 revenue after the U.S. expanded its restricted export list, news that sent the company’s shares down 3% in extended trading.

As a result of the workforce reduction, Applied Materials expects to incur charges of approximately $160 million to $180 million, consisting primarily of severance and other one-time employment termination benefits to be paid in cash, the filing states.

The company said the cuts are a way to position itself “as a more competitive and productive organization.”

Microsoft AI chief says company won’t build chatbots for erotica

Mustafa Suleyman, CEO and co-founder of Inflection AI, speaks during the Axios BFD event in New York City, U.S., October 12, 2023.

Brendan Mcdermid | Reuters

Microsoft AI CEO Mustafa Suleyman said the software giant won’t build artificial intelligence services that provide “simulated erotica,” distancing itself from longtime partner OpenAI.

“That’s just not a service we’re going to provide,” Suleyman said on Thursday at the Paley International Council Summit in Menlo Park, California. “Other companies will build that.”

Suleyman’s comments come a week after OpenAI CEO Sam Altman said his company plans to allow verified adults to use ChatGPT for erotica. Altman said that OpenAI is “not the elected moral police of the world.”

Microsoft has for years been a major investor and cloud partner to OpenAI, and the two companies have used their respective strengths to build big AI businesses. But the relationship has shown signs of tension of late, with OpenAI partnering with Microsoft rivals like Google and Oracle, and Microsoft focusing more on its own AI services.

Earlier on Thursday, Microsoft announced a series of new features for its Copilot AI chatbot, including an AI companion called Mico that can respond to users through a call feature and express itself by changing its color.

Suleyman in August penned an essay titled “We must build AI for people; not to be a person.” He argued that tech companies should not build “seemingly conscious” services that can give humans the impression that they may be capable of suffering, and wrote that conscious AIs could create another “axis of division” for humanity.

On Thursday, Suleyman said the creation of seemingly conscious AI is already happening, primarily with erotica-focused services. He referenced Altman’s comments as well as Elon Musk’s Grok, which in July launched its own companion features, including a female anime character.

“You can already see it with some of these avatars and people leaning into the kind of sexbot erotica direction,” Suleyman said. “This is very dangerous, and I think we should be making conscious decisions to avoid those kinds of things.”

OpenAI didn’t immediately respond to requests for comment, while xAI responded saying, “Legacy Media Lies.”


Apple begins shipping American-made AI servers from Texas

Workers at a factory in Houston, Texas, build servers for Apple.

Apple

Apple has started shipping advanced servers for artificial intelligence applications out of a factory in Houston, Texas, the company announced on Thursday.

These servers are a core part of Apple’s commitment to spend $600 billion in the U.S. on advanced manufacturing, suppliers, and other initiatives, and the milestone could please President Donald Trump, who has called for Apple and other technology companies to do more manufacturing on U.S. shores.

Apple’s plan to assemble servers in the U.S. was first revealed in February.

Apple Chief Operating Officer Sabih Khan said on Thursday that the servers will power the company’s Apple Intelligence and Private Cloud Compute services. Apple is using its own silicon in its Apple Intelligence servers.

“Our teams have done an incredible job accelerating work to get the new Houston factory up and running ahead of schedule and we plan to continue expanding the facility to increase production next year,” Khan said in a statement.

The Houston factory is on track to create thousands of jobs, Apple said. The Apple servers were previously manufactured overseas.


In August, Apple CEO Tim Cook met with Trump to announce additional U.S. spending, especially on semiconductor companies under a program it calls the American Manufacturing Program.

Cook gave Trump a gift based on the U.S.-made Corning glass used on iPhones and Apple Watches.

Apple also opened a manufacturing academy in partnership with Michigan State in July.

While Trump has praised Cook and Apple for the company’s U.S. spending commitments, he has also at times pushed Apple to make its iPhones in the U.S., a process that experts say could take years and would be costly.

The Trump administration has separately called for and canceled tariffs that could hurt Apple, which imports its computers and phones to the U.S. from China, India and Vietnam.

In September, Cook said in a CNBC interview that Apple is contributing to U.S. manufacturing by doing business with U.S.-based semiconductor suppliers, and that its spending and expertise is enabling chips to be fabricated and packaged entirely in the U.S.

“You can add a lot by making it global and then stitching together the end-to-end supply chain in semiconductors,” Cook said. “I can’t stress how important this is and how much that will add to what we’re doing.”

