The Microsoft 365 website on a laptop arranged in New York, US, on Tuesday, June 25, 2024. 

Bloomberg | Bloomberg | Getty Images

The beginning of the year is a great time to do some basic cyber hygiene. We’ve all been told to patch, change passwords, and update software. But one concern increasingly coming to the forefront is the sometimes quiet integration of potentially privacy-invading AI into programs.

“AI’s rapid integration into our software and services has and should continue to raise significant questions about privacy policies that preceded the AI era,” said Lynette Owens, vice president, global consumer education at cybersecurity company Trend Micro. Many programs we use today — whether email, bookkeeping and productivity tools, or social media and streaming apps — may be governed by privacy policies that lack clarity on whether our personal data can be used to train AI models.

“This leaves all of us vulnerable to uses of our personal information without the appropriate consent. It’s time for every app, website, or online service to take a good hard look at the data they are collecting, who they’re sharing it with, how they’re sharing it, and whether or not it can be accessed to train AI models,” Owens said. “There’s a lot of catch up needed to be done.”

Where AI is already inside our daily online lives

Owens said the potential issues overlap with most of the programs and applications we use on a daily basis.

“Many platforms have been integrating AI into their operations for years, long before AI became a buzzword,” she said. 

As an example, Owens points out that Gmail has used AI for spam filtering and predictive text with its “Smart Compose” feature. “And streaming services like Netflix rely on AI to analyze viewing habits and recommend content,” Owens said. Social media platforms like Facebook and Instagram have long used AI for facial recognition in photos and personalized content feeds.

“While these tools offer convenience, consumers should consider the potential privacy trade-offs, such as how much personal data is being collected and how it is used to train AI systems. Everyone should carefully review privacy settings, understand what data is being shared, and regularly check for updates to terms of service,” Owens said.

One tool that has come in for particular scrutiny is Microsoft’s connected experiences, which has been around since 2019 and is turned on by default, with the option to opt out. It was recently highlighted in press reports — inaccurately, according to the company as well as some outside cybersecurity experts who have looked at the issue — as a feature that is new or whose settings had been changed. Sensational headlines aside, privacy experts do worry that advances in AI create the potential for data and words in programs like Microsoft Word to be used in ways that privacy settings do not adequately cover.

“When tools like connected experiences evolve, even if the underlying privacy settings haven’t changed, the implications of data use might be far broader,” Owens said. 

A spokesman for Microsoft wrote in a statement to CNBC that Microsoft does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models. He added that in certain instances, customers may consent to using their data for specific purposes, such as custom model development explicitly requested by some commercial customers. Additionally, the setting enables cloud-backed features many people have come to expect from productivity tools such as real-time co-authoring, cloud storage and tools like Editor in Word that provide spelling and grammar suggestions.

Default privacy settings are an issue

Ted Miracco, CEO of security software company Approov, said features like Microsoft’s connected experiences are a double-edged sword — the promise of enhanced productivity but the introduction of significant privacy red flags. The setting’s default-on status could, Miracco said, opt people into something they aren’t necessarily aware of, primarily related to data collection, and organizations may also want to think twice before leaving the feature on.

“Microsoft’s assurance provides only partial relief, but still falls short of mitigating some real privacy concern,” Miracco said.

Perception can be its own problem, according to Kaveh Vahdat, founder of RiseOpp, an SEO marketing agency.

“Having the default to enablement shifts the dynamic significantly,” Vahdat said. “Automatically enabling these features, even with good intentions, inherently places the onus on users to review and modify their privacy settings, which can feel intrusive or manipulative to some.”

His view is that companies need to be more transparent, not less, in an environment where there is a lot of distrust and suspicion regarding AI.

Companies including Microsoft should make such features opt-in rather than opt-out by default, he said, and might provide more granular, non-technical information about how personal content is handled, because perception can become reality.

“Even if the technology is completely safe, public perception is shaped not just by facts but by fears and assumptions — especially in the AI era where users often feel disempowered,” he said.

Default settings that enable sharing make sense for business reasons but are bad for consumer privacy, according to Jochem Hummel, assistant professor of information systems and management at Warwick Business School at the University of Warwick in England.

Companies are able to enhance their products and maintain competitiveness with more data sharing as the default, Hummel said. However, from a user standpoint, prioritizing privacy by adopting an opt-in model for data sharing would be “a more ethical approach,” he said. And as long as the additional features offered through data collection are not indispensable, users can choose the option that aligns more closely with their interests.

There are real benefits to the current tradeoff between AI-enhanced tools and privacy, Hummel said, based on what he is seeing in the work turned in by students. Students who have grown up with web cameras, lives broadcast in real-time on social media, and all-encompassing technology, are often less concerned about privacy, Hummel said, and are embracing these tools enthusiastically. “My students, for example, are creating better presentations than ever,” he said.  

Managing the risks

In areas such as copyright law, fears about massive copying by LLMs have been overblown, according to Kevin Smith, director of libraries at Colby College, but AI’s evolution does intersect with core privacy concerns.

“A lot of the privacy concerns currently being raised about AI have actually been around for years; the rapid deployment of large language model trained AI has just focused attention on some of those issues,” Smith said. “Personal information is all about relationships, so the risk that AI models could uncover data that was more secure in a more ‘static’ system is the real change we need to find ways to manage,” he added.

In most programs, turning off AI features is an option buried in the settings. With connected experiences, for instance, open a document, click “File,” go to “Account,” and find Privacy Settings. From there, click “Manage Settings” and scroll down to connected experiences, then uncheck the box to turn it off. Once you do, Microsoft warns: “If you turn this off, some experiences may not be available to you.” Microsoft says leaving the setting on allows for more communication, collaboration, and suggestions served up by AI.
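
For Windows users who would rather script the change than click through menus, the same toggle can in principle be set through Office’s policy keys in the registry. The sketch below is a minimal Python example, not an official Microsoft tool; the registry path, the value names (controllerconnectedservicesenabled, usercontentdisabled, downloadcontentdisabled) and the convention that 2 means disabled reflect Microsoft’s published privacy-control documentation as best understood here, so verify them against the current documentation before running anything.

# Minimal sketch (Windows only, current user): disable Office connected
# experiences via the policy registry values. The value names and the
# "2 = disabled" convention are assumptions drawn from Microsoft's
# privacy-controls documentation; confirm before relying on this.
import winreg

POLICY_KEY = r"Software\Policies\Microsoft\office\16.0\common\privacy"

settings = {
    "controllerconnectedservicesenabled": 2,  # optional connected experiences
    "usercontentdisabled": 2,      # experiences that analyze your content
    "downloadcontentdisabled": 2,  # experiences that download online content
}

# Create (or open) the policy key and write each value as a DWORD.
with winreg.CreateKeyEx(winreg.HKEY_CURRENT_USER, POLICY_KEY) as key:
    for name, value in settings.items():
        winreg.SetValueEx(key, name, 0, winreg.REG_DWORD, value)
        print(f"set {name} = {value}")

For organizations, Group Policy remains the supported way to manage these settings centrally; the script above is simply the per-user registry equivalent of the same switches.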

In Gmail, open the menu, go to settings, click the account you want to change, then scroll to the “general” section and uncheck the boxes next to the various “Smart features” and personalization options.

As cybersecurity vendor Malwarebytes put it in a blog post about the Microsoft feature: “turning that option off might result in some lost functionality if you’re working on the same document with other people in your organization. … If you want to turn these settings off for reasons of privacy and you don’t use them much anyway, by all means, do so. The settings can all be found under Privacy Settings for a reason. But nowhere could I find any indication that these connected experiences were used to train AI models.”

While these instructions are easy enough to follow, and learning more about what you have agreed to is probably a good idea, some experts say the onus should not be on the consumer to deactivate these settings. “When companies implement features like these, they often present them as opt-ins for enhanced functionality, but users may not fully understand the scope of what they’re agreeing to,” said Wes Chaar, a data privacy expert.

“The crux of the issue lies in the vague disclosures and lack of clear communication about what ‘connected’ entails and how deeply their personal content is analyzed or stored,” Chaar said. “For those outside of technology, it might be likened to inviting a helpful assistant into your home, only to learn later they’ve taken notes on your private conversations for a training manual.”

The decision to manage, limit, or even revoke access to data underscores the imbalance in the current digital ecosystem. “Without robust systems prioritizing user consent and offering control, individuals are left vulnerable to having their data repurposed in ways they neither anticipate nor benefit from,” Chaar said.

CNBC Daily Open: Some hope after last week’s U.S. market rout

Traders work on the floor of the New York Stock Exchange (NYSE) on Nov. 21, 2025 in New York City.

Spencer Platt | Getty Images

Last week on Wall Street, two forces dragged stocks lower: a set of high-stakes numbers from Nvidia and the U.S. jobs report that landed with more heat than expected. But the leaves that remained after hot tea scalded investors seemed to augur good tidings.

Even though Nvidia’s third-quarter results easily breezed past Wall Street’s estimates, they couldn’t quell worries about lofty valuations and an unsustainable bubble inflating in the artificial intelligence sector. The “Magnificent Seven” cohort — save Alphabet — had a losing week.

The U.S. Bureau of Labor Statistics added to the pressure. September payrolls rose far more than economists expected, prompting investors to pare back their bets on a December interest rate cut. The timing didn’t help matters, as the report had been delayed and hit just as markets were already on edge.

By Friday’s close, the S&P 500 and Dow Jones Industrial Average lost roughly 2% for the week, while the Nasdaq Composite tumbled 2.7%.

Still, a flicker of hope appeared on the horizon.

On Friday, New York Federal Reserve President John Williams said that he sees “room” for the central bank to lower interest rates, describing current policy as “modestly restrictive.” His comments caused traders to increase their bets on a December cut to around 70%, up from 44.4% a week ago, according to the CME FedWatch tool.

And despite a broad sell-off in AI stocks last week, Alphabet shares bucked the trend. Investors seemed impressed by its new AI model, Gemini 3, and hopeful that its development of custom chips could rival Nvidia’s in the long run.

Meanwhile, Eli Lilly’s ascent into the $1 trillion valuation club served as a reminder that market leadership doesn’t belong to tech alone. In a market defined by narrow concentration, any sign of broadening strength is a welcome change.

Diversification, even within AI’s sprawling ecosystem, might be exactly what this market needs now.

And finally…

The Beijing music venue DDC was one of the latest to have to cancel a performance by a Japanese artist on Nov. 20, 2025, in the wake of escalating bilateral tensions.

Screenshot

Japanese concerts in China are getting abruptly canceled as tensions simmer

China’s escalating dispute with Japan reinforces Beijing’s growing economic influence — and penchant for abrupt actions that can create uncertainty for businesses.

Hours before Japanese jazz quintet The Blend was due to perform in Beijing on Thursday, a plainclothesman walked into the DDC music club during a sound check. Then, “the owner of the live house came to me and said: ‘The police has told me tonight is canceled,’” said Christian Petersen-Clausen, a music agent.

— Evelyn Cheng

Correction: This report has been updated to correct the spelling of Eli Lilly.

Meta halted internal research suggesting social media harm, court filing alleges

Meta halted internal research that purportedly showed that people who stopped using Facebook became less depressed and anxious, according to a legal filing that was released on Friday.

The social media giant was alleged to have initiated the study, dubbed Project Mercury, in late 2019 as a way to help it “explore the impact that our apps have on polarization, news consumption, well-being, and daily social interactions,” according to the legal brief, filed in the United States District Court for the Northern District of California.

The filing contains newly unredacted information pertaining to Meta.

The newly released legal brief is related to high-profile multidistrict litigation from a variety of plaintiffs, such as school districts, parents and state attorneys general against social media companies like Meta, Google’s YouTube, Snap and TikTok.

The plaintiffs claim that these businesses were aware that their respective platforms caused various mental health-related harms to children and young adults, but failed to take action and instead misled educators and authorities, among other allegations.

“We strongly disagree with these allegations, which rely on cherry-picked quotes and misinformed opinions in an attempt to present a deliberately misleading picture,” Meta spokesperson Andy Stone said in a statement. “The full record will show that for over a decade, we have listened to parents, researched issues that matter most, and made real changes to protect teens—like introducing Teen Accounts with built-in protections and providing parents with controls to manage their teens’ experiences.”

A Google spokesperson said in a statement that “These lawsuits fundamentally misunderstand how YouTube works and the allegations are simply not true.”

“YouTube is a streaming service where people come to watch everything from live sports to podcasts to their favorite creators, primarily on TV screens, not a social network where people go to catch up with friends,” the Google spokesperson said. “We’ve also developed dedicated tools for young people, guided by child safety experts, that give families control.”

Snap and TikTok did not immediately respond to a request for comment.

The 2019 Meta research was based on a random sample of consumers who stopped their Facebook and Instagram usage for a month, the filing said. It alleged that Meta was disappointed that initial tests of the study showed that people who stopped using Facebook “for a week reported lower feelings of depression, anxiety, loneliness, and social comparison.”

Meta allegedly chose not to “sound the alarm,” but instead stopped the research, the lawsuit said.

“The company never publicly disclosed the results of its deactivation study,” according to the suit. “Instead, Meta lied to Congress about what it knew.”

The lawsuit cites an unnamed Meta employee who allegedly said, “If the results are bad and we don’t publish and they leak, is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

Stone, in a series of social media posts, pushed back on the lawsuit’s implication that Meta shuttered the internal research after it allegedly showed a causal relationship between its apps and adverse mental-health effects.

Stone characterized the 2019 study as flawed and said it was the reason that the company expressed disappointment. The study, Stone said, merely found that “people who believed using Facebook was bad for them felt better when they stopped using it.”

“This is a confirmation of other public research (“deactivation studies”) out there that demonstrates the same effect,” Stone said in a separate post. “It makes intuitive sense but it doesn’t show anything about the actual effect of using the platform.”

CNBC’s Lora Kolodny contributed reporting.
