

Apple on Monday defended its new system for scanning iCloud for illegal child sexual abuse material (CSAM), amid an ongoing controversy over whether the system reduces Apple user privacy and could be used by governments to surveil citizens.

Last week, Apple announced it had started testing a system that uses sophisticated cryptography to identify when users upload collections of known child pornography to its cloud storage service. It says it can do this without learning about the contents of a user’s photos stored on its servers.

Apple reiterated on Monday that its system is more private than those used by companies like Google and Microsoft because it relies on both Apple’s servers and software running on iPhones.

Privacy advocates and technology commentators are worried Apple’s new system, which includes software that will be installed on people’s iPhones through an iOS update, could be expanded in some countries through new laws to check for other types of images, like photos with political content, instead of just child pornography.

Apple said in a document posted to its website on Sunday that governments cannot force it to add non-CSAM images to the hash list, the file of numbers corresponding to known child abuse images that Apple will distribute to iPhones to enable the system.

“Apple will refuse any such demands. Apple’s CSAM detection capability is built solely to detect known CSAM images stored in iCloud Photos that have been identified by experts at NCMEC and other child safety groups,” Apple said in the document. “We have faced demands to build and deploy government-mandated changes that degrade the privacy of users before, and have steadfastly refused those demands. We will continue to refuse them in the future.”

It continued: “Let us be clear, this technology is limited to detecting CSAM stored in iCloud and we will not accede to any government’s request to expand it.”

Some cryptographers are worried about what could happen if a country like China were to pass a law saying the system has to also include politically sensitive images. Apple CEO Tim Cook has previously said that the company follows laws in every country where it conducts business.

Companies in the U.S. are required to report CSAM to the National Center for Missing & Exploited Children and face fines of up to $300,000 if they discover illegal images and don’t report them.

A reputation for privacy

Apple’s reputation for defending privacy has been cultivated for years through its actions and marketing. In 2016, Apple faced off against the FBI in court to protect the integrity of its on-device encryption systems in the investigation of a mass shooter.

But Apple has also faced significant pressure from law enforcement officials about the possibility of criminals “going dark,” or using privacy tools and encryption to prevent messages or other information from being within the reach of law enforcement.

The controversy over Apple’s new system, and whether it’s surveilling users, threatens Apple’s public reputation for building secure and private devices, which the company has used to break into new markets in personal finance and healthcare.

Critics are concerned the system will partially operate on an iPhone, instead of only scanning photos that have been uploaded to the company’s servers. Apple’s competitors typically only scan photos stored on their servers.

“It’s truly disappointing that Apple got so hung up on its particular vision of privacy that it ended up betraying the fulcrum of user control: being able to trust that your device is truly yours,” technology commentator Ben Thompson wrote in a newsletter on Monday.

Apple continues to defend its systems as a genuine improvement that protects children and will reduce the amount of CSAM being created while still protecting iPhone user privacy.

Apple said its system is significantly stronger and more private than previous systems by every privacy metric the company tracks and that it went out of its way to build a better system to detect these illegal images.

Unlike current systems, which run in the cloud and can’t be inspected by security researchers, Apple’s system can be inspected through its distribution in iOS, an Apple representative said. By moving some processing onto the user’s device, the company can derive stronger privacy properties, such as the ability to find CSAM matches without running software on Apple servers that checks every single photo.

Apple said on Monday its system doesn’t scan private photo libraries that haven’t been uploaded to iCloud.

Apple also confirmed it will process photos that have already been uploaded to iCloud. The changes will roll out through an iPhone update later this year, after which users will be alerted that Apple is beginning to check photos stored on iCloud against a list of fingerprints that correspond to known CSAM, Apple said.
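To make the fingerprint idea concrete, here is a minimal, hypothetical sketch of what checking a photo against a distributed hash list can look like. It is written in Swift and assumes a plain SHA-256 digest and an in-memory set of known fingerprints; Apple’s actual system uses a perceptual NeuralHash and private set intersection, so the snippet illustrates only the general shape of the check, not Apple’s implementation.

```swift
import Foundation
import CryptoKit

// Hypothetical sketch only. Apple's real system uses a perceptual
// "NeuralHash" and private set intersection, not plain SHA-256 lookups;
// this merely shows the general idea of comparing image fingerprints
// against a distributed list of known values.

// Stand-in for the hash list Apple says it will ship inside iOS.
let knownFingerprints: Set<String> = []  // empty placeholder for illustration

/// Computes a hex-encoded fingerprint for an image's raw bytes.
func fingerprint(of imageData: Data) -> String {
    SHA256.hash(data: imageData)
        .map { String(format: "%02x", $0) }
        .joined()
}

/// Returns true if the photo's fingerprint appears on the known list.
func matchesKnownList(_ imageData: Data) -> Bool {
    knownFingerprints.contains(fingerprint(of: imageData))
}
```

Per Apple’s description, any such check applies only to photos being uploaded to iCloud Photos, not to libraries kept solely on the device.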


OpenAI stops Sora videos of Martin Luther King Jr. after users made ‘disrespectful’ deepfakes


Dr. Martin Luther King Jr. addressing crowd of demonstrators outside the Lincoln Memorial during the March on Washington for Jobs and Freedom.

Francis Miller/The LIFE Picture Collection via Getty Images

OpenAI halted artificial intelligence-generated videos of Martin Luther King Jr. after users of its short-form video tool Sora created “disrespectful depictions” of the civil rights leader.

“While there are strong free speech interests in depicting historical figures, OpenAI believes public figures and their families should ultimately have control over how their likeness is used,” OpenAI said in a post to social media platform X.

The ChatGPT maker said it will work to toughen “guardrails” for historical figures and that public figures or representatives can ask to not appear in Sora videos.

OpenAI did not immediately respond to CNBC’s request for comment.


Other public figures have also called out the use of AI deepfakes.

Last week, Zelda Williams, the daughter of late comedian Robin Williams, asked that people stop sending her AI videos of her father.

Last year, actress Scarlett Johansson said the company used a voice that sounded “eerily similar” to her performance in the movie “Her” on ChatGPT. OpenAI later pulled the voice from its platform.

OpenAI launched Sora at the end of September. The tool allows users to create AI-generated short videos using a text prompt. Sora head Bill Peebles said the tool amassed over 1 million downloads in less than five days, hitting the milestone faster than ChatGPT.

Its ascent and the rise of AI-generated videos have also raised questions and concerns over the spread of misinformation, copyright infringement and the proliferation of AI slop, quickly produced videos that flood social feeds.



Meta announces new AI parental controls following FTC inquiry


Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Meta Connect event in Menlo Park, California, US, on Wednesday, Sept. 17, 2025.

David Paul Morris | Bloomberg | Getty Images

Meta on Friday announced new safety features that will allow parents to see and manage how their teenagers are interacting with artificial intelligence characters on the company’s platforms.

Parents will have the option to turn off one-on-one chats with AI characters completely, Meta said. They will also be able to block specific AI characters and get insight into the topics their children are discussing with them.

Meta is still building the controls, and the company said they will start to roll out early next year.

“Making updates that affect billions of users across Meta platforms is something we have to do with care, and we’ll have more to share soon,” Meta said in a blog post.

Meta has long faced criticism over its handling of child safety and mental health on its apps. The company’s new parental controls come after the Federal Trade Commission launched an inquiry into several tech companies, including Meta, over how AI chatbots could potentially harm children and teenagers.


The agency said it wants to understand what steps these companies have taken to “evaluate the safety of these chatbots when acting as companions,” according to a release.

In August, Reuters reported that Meta allowed its chatbots to have romantic and sensual conversations with kids. Reuters found that a chatbot was able to have a romantic conversation with an eight-year-old, for instance.

Meta made changes to its AI chatbot policies following the report and now prevents its bots from discussing subjects like self-harm, suicide and eating disorders with teens. The AI is also supposed to avoid potentially inappropriate romantic conversations.

The company announced additional AI safety updates earlier this week. Meta said its AIs should not respond to teens with “age-inappropriate responses that would feel out of place in a PG-13 movie,” and it’s already releasing those changes across the U.S., the U.K., Australia and Canada.

Parents can already set time limits on app use and see if their teenagers are chatting with AI characters, Meta said. Teens can only interact with a select group of AI characters, the company added.

OpenAI, which is also named in the FTC inquiry, has made similar enhancements to its safety features for teens in recent weeks. The company officially rolled out its own parental controls late last month, and it’s developing technology to better predict a user’s age.

Earlier this week, OpenAI announced a council of eight experts who will advise the company and provide insight into how AI affects users’ mental health, emotions and motivation.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.



‘Cockroach’ hunting, Bolton indicted, Apple eyes F1 and more in Morning Squawk


An automatic teller machine at the Zions Bank headquarters in Salt Lake City, Utah, on July 10, 2023.

Kim Raff | Bloomberg | Getty Images

This is CNBC’s Morning Squawk newsletter.

Here are five key things investors need to know to start the trading day:

1. Calling the exterminator

Stocks dropped yesterday amid mounting fears on Wall Street about the prevalence of bad loans, and what it means for a slew of regional banks. That’s led to some “cockroach” hunting, as investors race to assess the health of financial institutions’ lending businesses.

Let’s break this down:

  • Earlier this week, JPMorgan CEO Jamie Dimon warned that there could be more “cockroaches” out there, in reference to the collapses of auto parts maker First Brands and subprime car lender Tricolor Holdings.
  • Dimon appeared to be invoking the cockroach theory, which suggests that bad news for one company can lead to several other negative disclosures.
  • Shares of Jefferies, which has exposure to First Brands, dropped more than 10% yesterday. Zions, which earlier in the week said it had to take a large charge on bad loans, closed down 13%. Western Alliance said a borrower committed fraud and ended the session down nearly 11%.
  • Regional bank stocks tanked yesterday as a result, in turn driving down the broader market. Bank credit concerns also dragged on the European markets today.
  • The closely followed 10-year U.S. Treasury yield plunged to levels last seen in early April, when President Donald Trump unveiled his steep tariff policy.
  • Beyond banking, investors continued to monitor the U.S.-China trade dispute. China’s Ministry of Commerce accused the U.S. of creating “panic” over its rare earth export controls and said it was open to trade talks.
  • U.S. stock futures fell this morning, but are well off their lows. Follow live markets updates here.

2. Bolton indicted

John Bolton, former national security advisor, speaks during a Senate briefing hosted by the Organization of Iranian American Communities to discuss U.S. policy on Iran, in Washington, D.C., March 16, 2023.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

John Bolton, a former national security advisor to President Donald Trump, was indicted yesterday by a federal grand jury on charges of mishandling classified information. Bolton is the third Trump adversary to face criminal charges in recent weeks, following the indictments of former FBI Director James Comey and New York Attorney General Letitia James.

Meanwhile in Washington, a bill to fund the military during the government shutdown failed in the Senate yesterday. The vote came hours after senators voted down funding legislation for the 10th time. United Airlines CEO Scott Kirby told CNBC yesterday that bookings could start slowing if the government doesn’t reopen soon.

3. Paying the piper

In an aerial view, a container ship arrives at the Port of Oakland on Oct. 10, 2025 in Oakland, California.

Justin Sullivan | Getty Images

You’re likely already feeling the economic impact of Trump’s tariff policy, according to S&P Global.

The firm’s analysis found the levies will run global businesses nearly $1.2 trillion (yes, trillion) this year. Even under conservative estimates, S&P said two-thirds of that cost is expected to be passed down to consumers.

While we’re on the subject of tariffs’ economic impact: The U.S. budget deficit in fiscal 2025 shrank by slightly more than 2% compared with the 2024 fiscal year. As CNBC’s Jeff Cox notes, revenue from Trump’s tariffs helped offset some government spending. Still, the federal government’s shortfall sits at $1.78 trillion.

4. Apple’s rights race

George Russell of Great Britain, driving the (63) Mercedes AMG Petronas F1 Team W16, leads Max Verstappen of the Netherlands in the (1) Oracle Red Bull Racing RB21, Lando Norris of Great Britain in the (4) McLaren MCL39 Mercedes, Oscar Piastri of Australia in the (81) McLaren MCL39 Mercedes and the rest of the field at the start of the F1 Grand Prix of Singapore at Marina Bay Street Circuit on Oct. 5, 2025, in Singapore.

Mark Thompson | Getty Images Sport | Getty Images

Apple will soon announce a deal valued at $140 million annually for F1’s U.S. media rights, sources told CNBC’s Alex Sherman. The partnership will help the technology giant build out its sports streaming portfolio, which already includes Major League Soccer and MLB content.

In an interview this week, Eddy Cue, Apple’s senior vice president of services, said Apple has “love” for F1. Cue also said the modern sports watching experience has “gone backwards” as so many different streaming services get in the game.


5. Bright future

Meta Ray-Ban Gen 2 AI glasses during the Meta Connect event in Menlo Park, California, US, on Wednesday, Sept. 17, 2025.

David Paul Morris | Bloomberg | Getty Images

The parent company of sunglasses maker Ray-Ban has a specific company to thank for its recent performance: Meta.

EssilorLuxottica said a sizable amount of its revenue growth in the third quarter was tied to its partnership with the big tech company to develop and sell smart glasses. Stefano Grassi, EssilorLuxottica’s finance chief, called the Meta products a “lift” for the business.

Speaking of Meta, Oracle’s shares were able to buck yesterday’s market downturn after the company confirmed a cloud deal with the Facebook parent.

The Daily Dividend

Here are some stories we’d recommend making time for over the weekend:

CNBC’s Hugh Son, Sarah Min, Spencer Kimball, Jordan Novet, Jonathan Vanian, Ari Levy, Alex Sherman, Jeff Cox, Leslie Josephs, Dan Mangan and Lillian Rizzo contributed to this report. Josephine Rozzelle edited this edition.
