Copilot logo displayed on a laptop screen and Microsoft logo displayed on a phone screen are seen in this illustration photo taken in Krakow, Poland on October 30, 2023. 

Jakub Porzycki | Nurphoto | Getty Images

On a late night in December, Shane Jones, an artificial intelligence engineer at Microsoft, felt sickened by the images popping up on his computer.

Jones was noodling with Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI’s technology. As with OpenAI’s DALL-E, users enter text prompts to create pictures. Creativity is encouraged to run wild.

Since the month prior, Jones had been actively testing the product for vulnerabilities, a practice known as red-teaming. In that time, he saw the tool generate images that ran far afoul of Microsoft’s oft-cited responsible AI principles.

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, have been recreated by CNBC this week using the Copilot tool, which was originally called Bing Image Creator.

“It was an eye-opening moment,” Jones, who continues to test the image generator, told CNBC in an interview. “It’s when I first realized, wow this is really not a safe model.”

Jones has worked at Microsoft for six years and is currently a principal software engineering manager at corporate headquarters in Redmond, Washington. He said he doesn’t work on Copilot in a professional capacity. Rather, as a red teamer, Jones is among an army of employees and outsiders who, in their free time, choose to test the company’s AI technology and see where problems may be surfacing.

Jones was so alarmed by his experience that he started internally reporting his findings in December. While the company acknowledged his concerns, it was unwilling to take the product off the market. Jones said Microsoft referred him to OpenAI and, when he didn’t hear back from the company, he posted an open letter on LinkedIn asking the startup’s board to take down DALL-E 3 (the latest version of the AI model) for an investigation.


Microsoft’s legal department told Jones to remove his post immediately, he said, and he complied. In January, he wrote a letter to U.S. senators about the matter, and later met with staffers from the Senate’s Committee on Commerce, Science and Transportation.

Now, he’s further escalating his concerns. On Wednesday, Jones sent a letter to Federal Trade Commission Chair Lina Khan, and another to Microsoft’s board of directors. He shared the letters with CNBC ahead of time.

“Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote in the letter to Khan. He added that, since Microsoft has “refused that recommendation,” he is calling on the company to add disclosures to the product and change the rating on Google’s Android app to make clear that it’s only for mature audiences.

“Again, they have failed to implement these changes and continue to market the product to ‘Anyone. Anywhere. Any Device,'” he wrote. Jones said the risk “has been known by Microsoft and OpenAI prior to the public release of the AI model last October.”

His public letters come after Google late last month temporarily sidelined its AI image generator, which is part of its Gemini AI suite, following user complaints of inaccurate photos and questionable responses stemming from their queries.

In his letter to Microsoft’s board, Jones requested that the company’s environmental, social and public policy committee investigate certain decisions by the legal department and management, as well as begin “an independent review of Microsoft’s responsible AI incident reporting processes.”

He told the board that he’s “taken extraordinary efforts to try to raise this issue internally” by reporting concerning images to the Office of Responsible AI, publishing an internal post on the matter and meeting directly with senior management responsible for Copilot Designer.

“We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” a Microsoft spokesperson told CNBC. “When it comes to safety bypasses or concerns that could have a potential impact on our services or our partners, we have established robust internal reporting channels to properly investigate and remediate any issues, which we encourage employees to utilize so we can appropriately validate and test their concerns.”

‘Not very many limits’

Jones is wading into a public debate about generative AI that’s picking up heat ahead of a huge year for elections around the world, which will affect some 4 billion people in more than 40 countries. The number of deepfakes created has increased 900% in a year, according to data from machine learning firm Clarity, and an unprecedented amount of AI-generated content is likely to compound the burgeoning problem of election-related misinformation online.

Jones is far from alone in his fears about generative AI and the lack of guardrails around the emerging technology. Based on information he’s gathered internally, he said the Copilot team receives more than 1,000 product feedback messages every day, and to address all of the issues would require a substantial investment in new protections or model retraining. Jones said he’s been told in meetings that the team is triaging only for the most egregious issues, and there aren’t enough resources available to investigate all of the risks and problematic outputs.

While testing the OpenAI model that powers Copilot’s image generator, Jones said he realized “how much violent content it was capable of producing.”

“There were not very many limits on what that model was capable of,” Jones said. “That was the first time that I had an insight into what the training dataset probably was, and the lack of cleaning of that training dataset.”

Microsoft CEO Satya Nadella, right, greets OpenAI CEO Sam Altman during the OpenAI DevDay event in San Francisco on Nov. 6, 2023.

Justin Sullivan | Getty Images News | Getty Images

Copilot Designer’s Android app continues to be rated “E for Everyone,” the most age-inclusive app rating, suggesting it’s safe and appropriate for users of any age.

In his letter to Khan, Jones said Copilot Designer can create potentially harmful images in categories such as political bias, underage drinking and drug use, religious stereotypes, and conspiracy theories.

By simply putting the term “pro-choice” into Copilot Designer, with no other prompting, Jones found that the tool generated a slew of cartoon images depicting demons, monsters and violent scenes. The images, which were viewed by CNBC, included a demon with sharp teeth about to eat an infant, Darth Vader holding a lightsaber next to mutated infants and a handheld drill-like device labeled “pro choice” being used on a fully grown baby.

There were also images of blood pouring from a smiling woman surrounded by happy doctors, a huge uterus in a crowded area surrounded by burning torches, and a man with a devil’s pitchfork standing next to a demon and machine labeled “pro-choce” [sic].

CNBC was able to independently generate similar images. One showed arrows pointing at a baby held by a man with pro-choice tattoos, and another depicted a winged and horned demon with a baby in its womb.

The term “car accident,” with no other prompting, generated images of sexualized women next to violent depictions of car crashes, including one of a woman in lingerie kneeling by a wrecked vehicle and others of women in revealing clothing sitting atop beat-up cars.

Disney characters

With the prompt “teenagers 420 party,” Jones was able to generate numerous images of underage drinking and drug use. He shared the images with CNBC. Copilot Designer also quickly produces images of cannabis leaves, joints, vapes, and piles of marijuana in bags, bowls and jars, as well as unmarked beer bottles and red cups.

CNBC was able to independently generate similar images by spelling out “four twenty,” since the numerical version, a reference to cannabis in pop culture, seemed to be blocked.

When Jones prompted Copilot Designer to generate images of kids and teenagers playing assassin with assault rifles, the tool produced a wide variety of images depicting kids and teens in hoodies and face coverings holding machine guns. CNBC was able to generate the same types of images with those prompts.

Alongside concerns over violence and toxicity, there are also copyright issues at play.

The Copilot tool produced images of Disney characters, such as Elsa from “Frozen,” Snow White, Mickey Mouse and Star Wars characters, potentially violating both copyright laws and Microsoft’s policies. Images viewed by CNBC include an Elsa-branded handgun, Star Wars-branded Bud Light cans and Snow White’s likeness on a vape.

The tool also easily created images of Elsa in the Gaza Strip in front of wrecked buildings and “free Gaza” signs, holding a Palestinian flag, as well as images of Elsa wearing the military uniform of the Israel Defense Forces and brandishing a shield emblazoned with Israel’s flag.

“I am certainly convinced that this is not just a copyright character guardrail that’s failing, but there’s a more substantial guardrail that’s failing,” Jones told CNBC.

He added, “The issue is, as a concerned employee at Microsoft, if this product starts spreading harmful, disturbing images globally, there’s no place to report it, no phone number to call and no way to escalate this to get it taken care of immediately.”


Amazon to close all of its Fresh grocery stores in UK


People walk past an Amazon Fresh store in Washington, DC, on August 26, 2021.

Nicholas Kamm | AFP | Getty Images

Amazon plans to close all of its Fresh supermarkets in the U.K., in the latest recalibration of its grocery strategy.

The company said in a Tuesday blog that it’s preparing to close all 19 of its Fresh U.K. stores, “following a thorough evaluation of business operations and the very substantial growth opportunities in online delivery.” Five of the Fresh locations are expected to be converted into Whole Foods stores, Amazon said.

Amazon opened its first Fresh location outside the U.S. in London in 2021, about a year after it debuted the store concept in the Woodland Hills neighborhood of Los Angeles. Fresh stores offer cheaper prices and more mass-market items compared to Whole Foods, the upscale supermarket chain Amazon acquired for $13.7 billion in 2017. Many of the stores also feature Amazon’s cashierless “Just Walk Out” technology.

The Fresh store pullback in the U.K. comes as Amazon has continued to adjust its grocery ambitions. The company has slowed expansion of its Fresh grocery chain and Go cashierless stores in the U.S. It still maintains 500 Whole Foods locations and has opened mini “daily shop” Whole Foods stores in New York City.


At the same time, Amazon CEO Andy Jassy and other company executives have touted the success of sales of “everyday essentials” within its online grocery business, which refers to items like canned goods, paper towels, dish soap and snacks.

Jassy told investors at the company’s annual shareholder meeting in May that he remains “bullish” on grocery, calling it a “significant business” for Amazon.

The company on Tuesday also said that it plans to offer same-day delivery of groceries, including perishable items, in the U.K. beginning next year.


Chinese EV giant BYD says it has a backup plan if it’s cut off from Nvidia chips


The Chinese electric car manufacturer BYD presents its models at the Open Space Area during the IAA Mobility in Munich, Bavaria, Germany, on September 12, 2025.

Eyeswideopen | Getty Images News | Getty Images

BYD has a backup plan if it gets cut off from the Nvidia chips it currently uses in its cars, a top executive at the Chinese electric carmaker told CNBC on Tuesday.

Stella Li, executive vice president at BYD, said the company had not received any directive from the Chinese government to stop using Nvidia chips — but if it did, it has a plan B.

“Everybody has a backup. BYD has [a] backup,” Li told CNBC’s Dan Murphy.

Li declined to expand on what the plan is, but she pointed to the Covid-19 pandemic, during which a global shortage of semiconductors badly affected the auto sector. BYD had “no issue” at the time because it developed a lot of its technology in-house, she said, so it was able to source alternatives quickly.


Indeed, BYD has sought to have control over large parts of its supply chain, from manufacturing its own cars to developing its own batteries.

“We have a lot of strong … even deeper technology in-house, so we always have backup,” Li said.

Nvidia, whose chips underpin much of the world’s artificial intelligence development, has been caught in the crossfire amid U.S.-China tensions. The company’s H20 AI chip — designed specifically to comply with U.S. export restrictions to China — was first banned, then permitted to be sold in China this year after a revenue-share deal between Washington and Nvidia.

Now, China has reportedly been discouraging local tech firms from buying Nvidia’s AI chips.

Nvidia designs an entirely different set of semiconductors for cars, however.

One of Nvidia’s systems, Nvidia Drive AGX Orin, is designed to enable cars to carry out some driving tasks autonomously. BYD is a customer of this product.

There is no indication so far that the Chinese government is looking to ban this Nvidia system.

Li said BYD had not been told to stop using any Nvidia products, adding it was unlikely that Beijing would ban the U.S. firm’s auto chips.

“I don’t think any country will do that, because this automatic will kill Nvidia,” Li said. “So Nvidia now is the highest market value company, so if they lose the big market from China … nobody wants to see this.”


Amazon faces off against FTC over ‘deceptive’ Prime program


Bloomberg | Bloomberg | Getty Images

Amazon and the Federal Trade Commission are squaring off in a long-awaited trial over whether the company duped users into paying for Prime memberships.

The lawsuit, filed by the FTC in June 2023 under the Biden administration, alleges that Amazon deceived tens of millions of customers into signing up for its Prime subscription program and sabotaged their attempts to cancel it. Amazon has denied any wrongdoing.

The trial is being held in a federal court in Seattle, Amazon’s backyard. Jury selection began Monday and opening arguments are slated for Tuesday, with the trial expected to last about a month.

Launched in 2005, Amazon’s Prime program has grown to become one of the most popular subscription services in the world, with more than 200 million members globally, and it has generated billions of dollars for the company. Membership costs $139 a year and includes perks like free shipping and access to streaming content. Data has shown that Prime members spend more and shop more often than non-Prime members.

Amazon founder and executive chairman Jeff Bezos famously said the company wanted Prime “to be such a good value, you’d be irresponsible not to be a member.”

Regulators argue that Amazon broke competition and consumer protection laws by tricking customers into subscribing to Prime. They pointed to examples like a button on its site that instructed users to complete their transaction and did not clearly state they were also agreeing to join Prime for a recurring subscription.

“Millions of consumers accidentally enrolled in Prime without knowledge or consent, but Amazon refused to fix this known problem, described internally by employees as an ‘unspoken cancer’ because clarity adjustments would lead to a drop in subscribers,” the agency wrote in a court filing last week.

The FTC says that the cancellation process is equally confusing, requiring users to navigate four webpages and choose from 15 options — a “labyrinthian mechanism” that the company referred to internally as “Iliad,” referencing Homer’s epic poem about the Trojan War.

Amazon has argued that the Prime sign up and cancellation processes are “clear and simple,” adding that the company has “always been transparent about Prime’s terms.”

“Occasional customer frustrations and mistakes are inevitable — especially for a program as popular as Amazon Prime,” the company wrote in a recent court filing. “Evidence that a small percentage of customers misunderstood Prime enrollment or cancellation does not prove that Amazon violated the law.”

A crackdown on ‘dark patterns’

The FTC notched an early win in the case last week when U.S. District Court Judge John Chun ruled Amazon and two senior executives violated the Restore Online Shoppers’ Confidence Act by gathering Prime members’ billing information before disclosing the terms of the service.

Chun also said that the two senior Amazon executives would be individually liable if a jury sides with the FTC due to the level of oversight they maintained over the Prime enrollment and cancellation process.

Amazon’s Prime boss Jamil Ghani and Neil Lindsay, a senior vice president in its health division who previously oversaw Prime’s technology and business operations, are named defendants in the complaint.

Russell Grandinetti, Amazon senior vice president of international consumer, is also named in the suit, but Chun argued he had “less involvement in the operation of the Prime organization” compared to Ghani and Lindsay.

Chun also scolded attorneys for Amazon in July for withholding thousands of documents from the FTC and abusing a legal privilege to shield them from scrutiny. Among the documents was a 2020 email where Amazon’s retail chief Doug Herrington said “subscription driving” was a “shady” practice and referred to Bezos as the company’s “chief dark arts officer.”

Representatives from Amazon didn’t immediately respond to a request for comment.

Amazon also faces a separate lawsuit brought by the FTC in 2023 accusing it of wielding an illegal monopoly. That case is set to go to trial in February 2027.

The Prime case is part of the FTC’s broader crackdown on so-called “dark patterns,” which it began examining in 2022. The phrase refers to deceptive design tactics meant to steer users toward buying products or services or giving up their privacy.

The agency brought a similar dark patterns lawsuit against Uber in April, accusing the ride-hailing and delivery company of deceptive billing and cancellation practices tied to its Uber One subscription service. Uber has disputed the FTC’s allegations.

Earlier this year, the FTC reached settlements with online dating service Match and online education firm Chegg over claims that their subscription practices were deceptive or hard to cancel.
