The Microsoft 365 website on a laptop arranged in New York, US, on Tuesday, June 25, 2024. 

Bloomberg | Bloomberg | Getty Images

The beginning of the year is a great time to do some basic cyber hygiene. We’ve all been told to patch, change passwords, and update software. But one concern that has increasingly crept to the forefront is the sometimes quiet integration of potentially privacy-invading AI into the programs we use every day.

“AI’s rapid integration into our software and services has and should continue to raise significant questions about privacy policies that preceded the AI era,” said Lynette Owens, vice president of global consumer education at cybersecurity company Trend Micro. Many programs we use today — whether email, bookkeeping, productivity tools, social media, or streaming apps — may be governed by privacy policies that lack clarity on whether our personal data can be used to train AI models.

“This leaves all of us vulnerable to uses of our personal information without the appropriate consent. It’s time for every app, website, or online service to take a good hard look at the data they are collecting, who they’re sharing it with, how they’re sharing it, and whether or not it can be accessed to train AI models,” Owens said. “There’s a lot of catch up needed to be done.”

Where AI is already inside our daily online lives

Owens said the potential issues overlap with most of the programs and applications we use on a daily basis.

“Many platforms have been integrating AI into their operations for years, long before AI became a buzzword,” she said. 

As an example, Owens points out that Gmail has used AI for spam filtering and predictive text with its “Smart Compose” feature. “And streaming services like Netflix rely on AI to analyze viewing habits and recommend content,” Owens said. Social media platforms like Facebook and Instagram have long used AI for facial recognition in photos and personalized content feeds.

“While these tools offer convenience, consumers should consider the potential privacy trade-offs, such as how much personal data is being collected and how it is used to train AI systems. Everyone should carefully review privacy settings, understand what data is being shared, and regularly check for updates to terms of service,”  Owens said.

One tool that has come in for particular scrutiny is Microsoft’s connected experiences, which has been around since 2019 and is turned on by default, with the option to opt out. It was recently highlighted in press reports — inaccurately, according to the company as well as some outside cybersecurity experts who have examined the issue — as a feature that is new or whose settings had been changed. Sensational headlines aside, privacy experts do worry that advances in AI could allow data and words in programs like Microsoft Word to be used in ways that privacy settings do not adequately cover.

“When tools like connected experiences evolve, even if the underlying privacy settings haven’t changed, the implications of data use might be far broader,” Owens said. 

A spokesman for Microsoft wrote in a statement to CNBC that Microsoft does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models. He added that in certain instances, customers may consent to using their data for specific purposes, such as custom model development explicitly requested by some commercial customers. Additionally, the setting enables cloud-backed features many people have come to expect from productivity tools such as real-time co-authoring, cloud storage and tools like Editor in Word that provide spelling and grammar suggestions.

Default privacy settings are an issue

Ted Miracco, CEO of security software company Approov, said features like Microsoft’s connected experiences are a double-edged sword — the promise of enhanced productivity but the introduction of significant privacy red flags. The setting’s default-on status could, Miracco said, opt people into something they aren’t necessarily aware of, primarily related to data collection, and organizations may also want to think twice before leaving the feature on.

“Microsoft’s assurance provides only partial relief, but still falls short of mitigating some real privacy concern,” Miracco said.

Perception can be its own problem, according to Kaveh Vahdat, founder of RiseOpp, an SEO marketing agency.

“Having the default be enablement shifts the dynamic significantly,” Vahdat said. “Automatically enabling these features, even with good intentions, inherently places the onus on users to review and modify their privacy settings, which can feel intrusive or manipulative to some.”

His view is that companies need to be more transparent, not less, in an environment where there is a lot of distrust and suspicion regarding AI.

Companies including Microsoft should make opt-in the default rather than requiring users to opt out, he said, and might provide more granular, non-technical information about how personal content is handled, because perception can become reality.

“Even if the technology is completely safe, public perception is shaped not just by facts but by fears and assumptions — especially in the AI era where users often feel disempowered,” he said.


Default settings that enable sharing make sense for business reasons but are bad for consumer privacy, according to Jochem Hummel, assistant professor of information systems and management at Warwick Business School at the University of Warwick in England.

Companies are able to enhance their products and maintain competitiveness with more data sharing as the default, Hummel said. However, from a user standpoint, prioritizing privacy by adopting an opt-in model for data sharing would be “a more ethical approach,” he said. And as long as the additional features offered through data collection are not indispensable, users can choose which aligns more closely with their interests.

There are real benefits to the current tradeoff between AI-enhanced tools and privacy, Hummel said, based on what he is seeing in the work turned in by students. Students who have grown up with web cameras, lives broadcast in real-time on social media, and all-encompassing technology, are often less concerned about privacy, Hummel said, and are embracing these tools enthusiastically. “My students, for example, are creating better presentations than ever,” he said.  

Managing the risks

In areas such as copyright law, fears about massive copying by LLMs have been overblown, according to Kevin Smith, director of libraries at Colby College, but AI’s evolution does intersect with core privacy concerns.

“A lot of the privacy concerns currently being raised about AI have actually been around for years; the rapid deployment of large language model trained AI has just focused attention on some of those issues,” Smith said. “Personal information is all about relationships, so the risk that AI models could uncover data that was more secure in a more ‘static’ system is the real change we need to find ways to manage,” he added.

In most programs, turning off AI features is an option buried in the settings. With connected experiences, for instance, open a document, click “File,” go to “Account,” and find the privacy settings. From there, go to “Manage Settings” and scroll down to connected experiences, then uncheck the box to turn it off. Microsoft warns: “If you turn this off, some experiences may not be available to you.” Microsoft says leaving the setting on will allow for more communication, collaboration, and AI-powered suggestions.

In Gmail, open the app, tap the menu, go to settings, select the account you want to change, scroll to the “General” section, and uncheck the boxes next to the various smart features and personalization options.

As cybersecurity vendor Malwarebytes put it in a blog post about the Microsoft feature: “turning that option off might result in some lost functionality if you’re working on the same document with other people in your organization. … If you want to turn these settings off for reasons of privacy and you don’t use them much anyway, by all means, do so. The settings can all be found under Privacy Settings for a reason. But nowhere could I find any indication that these connected experiences were used to train AI models.”

While these instructions are easy enough to follow, and learning more about what you have agreed to is probably a good option, some experts say the onus should not be on the consumer to deactivate these settings. “When companies implement features like these, they often present them as opt-ins for enhanced functionality, but users may not fully understand the scope of what they’re agreeing to,” said Wes Chaar, a data privacy expert.

“The crux of the issue lies in the vague disclosures and lack of clear communication about what ‘connected’ entails and how deeply their personal content is analyzed or stored,” Chaar said. “For those outside of technology, it might be likened to inviting a helpful assistant into your home, only to learn later they’ve taken notes on your private conversations for a training manual.”

The decision to manage, limit, or even revoke access to data underscores the imbalance in the current digital ecosystem. “Without robust systems prioritizing user consent and offering control, individuals are left vulnerable to having their data repurposed in ways they neither anticipate nor benefit from,” Chaar said.


Technology

Palantir has worst month in two years as AI stocks sell off


CEO of Palantir Technologies Alex Karp attends the Pennsylvania Energy and Innovation Summit, at Carnegie Mellon University in Pittsburgh, Pennsylvania, U.S., July 15, 2025.

Nathan Howard | Reuters

It’s been a tough November for Palantir.

Shares of the software analytics provider dropped 16% for their worst month since August 2023 as investors dumped AI stocks due to valuation fears. Meanwhile, famed investor Michael Burry doubled down on his skepticism of the artificial intelligence trade and bet against the company.

Palantir started November off on a high note.

The Denver-based company topped Wall Street’s third-quarter earnings and revenue expectations. Palantir also posted its second-straight $1 billion revenue quarter, but high valuation concerns contributed to a post-print selloff.

In a note to clients, Jefferies analysts called Palantir’s valuation “extreme” and argued investors would find better risk-reward in AI names such as Microsoft and Snowflake. Analysts at RBC Capital Markets raised concerns about the company’s “increasingly concentrated growth profile,” while Deutsche Bank called the valuation “very difficult to wrap our heads around.”

Adding fuel to the post-earnings selloff was the revelation that Burry is betting against Palantir and AI chipmaker Nvidia. Burry, widely known for predicting the 2008 housing crisis and for his portrayal in the film “The Big Short,” later accused hyperscalers of artificially boosting earnings.

Palantir CEO Alex Karp vocally hit the front lines, appearing twice in one week on CNBC, where he accused Burry of “market manipulation” and called the investor’s actions “egregious.”

“The idea that chips and ontology is what you want to short is bats— crazy,” Karp told CNBC’s “Squawk Box.”

Despite the vicious selloff, Palantir has notched some deal wins this month. That included a multiyear contract with consulting firm PwC to speed up AI adoption in the U.K. and a deal with aircraft engine maintenance company FTAI.

But those announcements did little to shake off valuation worries that have haunted all AI-tied companies in November.

Across the board, investors have viciously ditched the high-priced group, citing fears of stretched valuations and a bubble.

In November, Nvidia pulled back more than 12%, while Microsoft and Amazon dropped about 5% each. Quantum computing names such as Rigetti Computing and D-Wave Quantum have shed more than a third of their value.

Apple and Alphabet were the only Magnificent 7 stocks to end the month with gains.

Still, questions linger over Palantir’s valuation, and those worries aren’t new.

Even after its steep price drop, the company’s stock trades at 233 times forward earnings. By comparison, Nvidia and Alphabet traded at about 38 times and 30 times, respectively, at Friday’s close.

Karp, who has long defended the company, didn’t miss an opportunity to clap back at his critics, arguing in a letter to shareholders that the company is making it feasible for everyday investors to attain rates of return once “limited to the most successful venture capitalists in Palo Alto.”

“Please turn on the conventional television and see how unhappy those that didn’t invest in us are,” Karp said during an earnings call. “Enjoy, get some popcorn. They’re crying. We are every day making this company better, and we’re doing it for this nation, for allied countries.”

Palantir declined to comment for this story.

WATCH: Palantir CEO Alex Karp: We’ve printed venture results for the average American



Technology

CME disruption, Black Friday, the K-beauty boom and more in Morning Squawk


CME Group sign at NYMEX in New York.

Adam Jeffery | CNBC

This is CNBC’s Morning Squawk newsletter. Subscribe here to receive future editions in your inbox.

Here are five key things investors need to know to start the trading day:

1. Down and out

Stock futures trading was halted this morning after a data center “cooling issue” took down several Chicago Mercantile Exchange services. Individual stocks were still trading before the bell, while the CME said futures indexes and options trading would open fully at 8:30 a.m. Follow live markets updates here.

The stock market has rebounded during the holiday-shortened trading week. But the three major indexes are still on pace to end November’s trading month — which ends with today’s closing bell — in the red. The Dow and S&P 500 are poised to snap six-month winning streaks, while the Nasdaq Composite is on track to see its first negative month in eight.

Today’s trading session ends early at 1 p.m. ET.

2. Shopping and dropping

A Black Friday sale sign is displayed in a shop window at an outlet mall in Carlsbad, California, U.S., Nov. 25, 2025.

Mike Blake | Reuters

Black Friday was once considered the biggest in-person shopping day of the year, drawing huge crowds to stores in search of bargains. But while millions are still expected to partake in the occasion, it’s not what it used to be.

Here’s what to know:

  • In the past six years, online sales have outpaced brick-and-mortar spending on Black Friday. Data shows in-person foot traffic has been mostly flat over the last few years, as well.
  • No matter where they make their purchases, shoppers are also skeptical that they’re getting the best deals.
  • As CNBC’s Gabrielle Fonrouge reports, the shift has meant a change in strategy for many of the retail industry’s biggest names. Some have started offering their holiday sales earlier in the season, while others are spacing out their promotions.
  • Deloitte reported that the average consumer will shell out $622 between Nov. 27 and Dec. 1, a decrease of 4% from last year.
  • Even as the day of deals loses its allure, AT&T found that Gen Z participates the most, while their older counterparts do their shopping closer to Christmas.

3. AI comeback

Cfoto | Future Publishing | Getty Images

Alphabet has been a notable exception to the recent tech downturn. Shares of the Google parent have surged more than 13% this month as Wall Street sees the company as an AI leader.

Alphabet began the month by announcing its latest tensor processing units, or TPUs, called Ironwood. Last week, the company launched its latest AI model, Gemini 3, which caught positive attention from Silicon Valley heavyweights.

Shares of the stock are now up close to 70% this year, making it the best-performer within megacap tech. But experts told CNBC’s Jennifer Elias that Alphabet’s lead in the competitive AI market is marginal and could be hard to hold onto.


4. Tech’s tug of wars

Alibaba announced plans to release a pair of smart glasses powered by its AI models. The Quark AI Glasses are Alibaba’s first foray into the smart glasses product category.

Alibaba

The Alphabet-Nvidia AI race isn’t the only tech rivalry that has heated up in recent days.

Alibaba’s AI-powered smart glasses went on sale yesterday. With its new wearable tech offering, the Chinese tech company is going up against major players — namely Meta, which unveiled its smart glasses with Ray-Ban in September.

Meanwhile, Counterpoint Research found Apple is poised to ship more smartphones than Samsung this year for the first time in 14 years. Apple is also poised to boast a larger market share, driven by strong iPhone 17 sales.

5. From Seoul to Los Angeles

Carly Xie looks over facial mask items at the Face Shop, which specializes in Korean cosmetics, in San Francisco, April 15, 2015.

Avila Gonzalez | San Francisco Chronicle | Hearst Newspapers | Getty Images

American shoppers are increasingly looking to South Korea for their cosmetics. NielsenIQ found U.S. sales of so-called “K-beauty” products are slated to surge more than 37% this year to above $2 billion.

Retailers ranging from beauty product hubs Ulta and Sephora to big-box chains Walmart and Costco are jumping on the trend. On top of that, Olive Young — aka the “Sephora of Seoul” — is opening its first U.S. store in Los Angeles next year.

The Daily Dividend

Here are some stories worth circling back to over the weekend:

CNBC’s Chloe Taylor, Gabrielle Fonrouge, Laya Neelakandan, Jessica Dickler, Sarah Min, Sean Conlon, Jennifer Elias, Arjun Kharpal and Luke Fountain contributed to this report. Josephine Rozzelle edited this edition.
