Charred remains of buildings are pictured following the Palisades Fire in the Pacific Palisades neighborhood in Los Angeles, California, U.S., Jan. 15, 2025. (Mike Blake | Reuters)

Google and YouTube will donate $15 million to support the Los Angeles community and content creators impacted by wildfires, YouTube CEO Neal Mohan announced in a blog post Wednesday.

The contributions will flow to local relief organizations including Emergency Network Los Angeles, the American Red Cross, the Center for Disaster Philanthropy and the Institute for Nonprofit News, the blog said. When the company’s LA offices can safely reopen, impacted creators will also be able to use YouTube’s production facilities “to recover and rebuild their businesses” as well as access community events.

“To all of our employees, the YouTube creator community, and everyone in LA, please stay safe and know we’re here to support,” Google CEO Sundar Pichai posted on X.

The move comes days before Sunday’s impending TikTok ban, which has already prompted content creators to ask fans to follow them on other social platforms. YouTube Shorts, a short-form video feature within YouTube, competes with TikTok, along with Meta’s Instagram Reels and the fast-growing Chinese app Rednote, otherwise known as Xiaohongshu.


“In moments like these, we see the power of communities coming together to support each other — and the strength and resilience of the YouTube community is like no other,” Mohan wrote.

YouTube’s contributions are in line with those of a host of other companies pledging multimillion-dollar donations aimed at assisting employees and residents impacted by the LA fires. Meta announced a $4 million donation split between CEO Mark Zuckerberg and the company, while Netflix and Comcast each pledged $10 million donations to multiple aid groups.

Disclosure: Comcast owns NBCUniversal, the parent company of CNBC.

WATCH: TikTok: What creators would do if the short-form video app goes dark


TikTok’s U.S. operations could be worth as much as $50 billion if ByteDance decides to sell

Jakub Porzycki | Nurphoto | Getty Images

Business moguls such as Elon Musk should be prepared to spend tens of billions of dollars for TikTok’s U.S. operations should parent company ByteDance decide to sell. 

TikTok is staring at a potential ban in the U.S. if the Supreme Court upholds a national security law under which service providers such as Apple and Google would be penalized for hosting the app after the Sunday deadline. ByteDance has not indicated that it will sell the app’s U.S. unit, but the Chinese government has considered a plan in which X owner Musk would acquire the operations, one of several scenarios under consideration, Bloomberg News reported Monday.

If ByteDance decides to sell, potential buyers may have to spend between $40 billion and $50 billion. That’s the valuation that CFRA Research Senior Vice President Angelo Zino has estimated for TikTok’s U.S. operations. Zino based his valuation on estimates of TikTok’s U.S. user base and revenue in comparison to rival apps. 

TikTok has about 115 million monthly mobile users in the U.S., which is slightly behind Instagram’s 131 million, according to an estimate by market intelligence firm Sensor Tower. That puts TikTok ahead of Snapchat, Pinterest and Reddit, which have U.S. monthly mobile user bases of 96 million, 74 million and 32 million, according to Sensor Tower.

Zino’s estimate, however, is down from the more than $60 billion that he estimated for the unit in March 2024, when the House passed the initial national security bill that President Joe Biden signed into law the following month.

The lowered estimate is due to TikTok’s current geopolitical predicament and because “industry multiples have come in a bit” since March, Zino told CNBC in an email. Zino’s estimate doesn’t include TikTok’s valuable recommendation algorithms, which a U.S. acquirer would not obtain as part of a deal, with the algorithms and their alleged ties to China being central to the U.S. government’s case that TikTok poses a national security threat.

Analysts at Bloomberg Intelligence have their estimate for TikTok’s U.S. operations pegged in the range of $30 billion to $35 billion. That’s the estimate they published in July, saying at the time that the value of the unit would be “discounted due to it being a forced sale.”  

Bloomberg Intelligence analysts noted that finding a buyer for TikTok’s U.S. operations that can both afford the transaction and deal with the accompanying regulatory scrutiny on data privacy makes a sale challenging. It could also make it difficult for a buyer to expand TikTok’s ads business, they wrote. 

A consortium of businesspeople including billionaire Frank McCourt and O’Leary Ventures Chairman Kevin O’Leary put in a bid to buy TikTok from ByteDance. O’Leary has previously said the group would be willing to pay up to $20 billion to acquire the U.S. assets without the algorithm.

Unlike a bid from Musk, the offer from O’Leary’s group would be free from regulatory scrutiny, O’Leary said in a Monday interview with Fox News.

O’Leary said that he’s “a huge Elon Musk fan,” but added “the idea that the regulator, even under Trump’s administration, would allow this is pretty slim.”

TikTok, X and O’Leary Ventures did not respond to requests for comment.

Watch: Chinese TikTok alternative surges


Bitcoin approaches $100,000 again as a cool inflation reading fuels risk appetite

Mustafa Ciftci | Anadolu via Getty Images

Bitcoin extended its rebound on Wednesday, hovering just below $100,000 after another encouraging inflation report fueled investors’ risk appetite.

The price of the flagship cryptocurrency was last higher by more than 3% at $99,444.43, bringing its two-day gain to about 7%, according to Coin Metrics.

The CoinDesk 20 index, which measures the broader market of cryptocurrencies, gained 6%.


Bitcoin approaches $100,000 after Wednesday’s CPI data

Shares of Coinbase gained 6%. Bitcoin proxies MicroStrategy and Mara Holdings each gained about 4%.

Wednesday’s move followed the release of the December consumer price index, which showed core inflation unexpectedly slowed. A day earlier, the market got another bright inflation reading in the producer price index, which showed wholesale prices rose less than expected in December.

The post-election crypto rally fizzled into the end of 2024 after Federal Reserve Chair Jerome Powell sounded an inflation warning on Dec. 18, and bitcoin suffered even steeper losses last week as a spike in bond yields prompted investors to dump growth-oriented risk assets. This Monday, bitcoin briefly dipped below $90,000.

The price of bitcoin has been taking its cue from the equities market in recent weeks, thanks in part to the popularity of bitcoin ETFs, which have led to the institutionalization of the asset. Bitcoin’s correlation with the S&P 500 has climbed in the past week, while its correlation with gold has dropped sharply since the end of December.


From Gmail to Word, your privacy settings and AI are entering into a new relationship


The Microsoft 365 website on a laptop arranged in New York, U.S., on Tuesday, June 25, 2024. (Bloomberg | Getty Images)

The beginning of the year is a great time to do some basic cyber hygiene. We’ve all been told to patch, change passwords, and update software. But one concern that has been increasingly creeping to the forefront is the sometimes quiet integration of potentially privacy-invading AI into programs.   

“AI’s rapid integration into our software and services has and should continue to raise significant questions about privacy policies that preceded the AI era,” said Lynette Owens, vice president of global consumer education at cybersecurity company Trend Micro. Many programs we use today — whether email, bookkeeping and productivity tools, or social media and streaming apps — may be governed by privacy policies that lack clarity on whether our personal data can be used to train AI models.

“This leaves all of us vulnerable to uses of our personal information without the appropriate consent. It’s time for every app, website, or online service to take a good hard look at the data they are collecting, who they’re sharing it with, how they’re sharing it, and whether or not it can be accessed to train AI models,” Owens said. “There’s a lot of catch up needed to be done.”

Where AI is already inside our daily online lives

Owens said the potential issues overlap with most of the programs and applications we use on a daily basis.

“Many platforms have been integrating AI into their operations for years, long before AI became a buzzword,” she said. 

As an example, Owens points out that Gmail has used AI for spam filtering and predictive text with its “Smart Compose” feature. “And streaming services like Netflix rely on AI to analyze viewing habits and recommend content,” Owens said. Social media platforms like Facebook and Instagram have long used AI for facial recognition in photos and personalized content feeds.

“While these tools offer convenience, consumers should consider the potential privacy trade-offs, such as how much personal data is being collected and how it is used to train AI systems. Everyone should carefully review privacy settings, understand what data is being shared, and regularly check for updates to terms of service,”  Owens said.

One tool that has come in for particular scrutiny is Microsoft’s connected experiences, which has existed since 2019 and is enabled by default with an option to opt out. Recent press reports characterized it as a new feature, or one whose settings had changed — inaccurately, according to the company as well as some outside cybersecurity experts who have examined the issue. Sensational headlines aside, privacy experts do worry that advances in AI could allow the data and words in programs like Microsoft Word to be used in ways that privacy settings do not adequately cover.

“When tools like connected experiences evolve, even if the underlying privacy settings haven’t changed, the implications of data use might be far broader,” Owens said. 

A Microsoft spokesman said in a statement to CNBC that the company does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models. He added that in certain instances, customers may consent to using their data for specific purposes, such as custom model development explicitly requested by some commercial customers. Additionally, the setting enables cloud-backed features many people have come to expect from productivity tools, such as real-time co-authoring, cloud storage and Editor in Word, which provides spelling and grammar suggestions.

Default privacy settings are an issue

Ted Miracco, CEO of security software company Approov, said features like Microsoft’s connected experiences are a double-edged sword: they promise enhanced productivity but raise significant privacy red flags. Because the setting is on by default, Miracco said, it could opt people into data collection they aren’t necessarily aware of, and organizations may want to think twice before leaving the feature on.

“Microsoft’s assurance provides only partial relief, but still falls short of mitigating some real privacy concern,” Miracco said.

Perception can be its own problem, according to Kaveh Vahdat, founder of RiseOpp, an SEO marketing agency.

“Having the default to enablement shifts the dynamic significantly,” Vahdat said. “Automatically enabling these features, even with good intentions, inherently places the onus on users to review and modify their privacy settings, which can feel intrusive or manipulative to some.”

His view is that companies need to be more transparent, not less, in an environment where there is a lot of distrust and suspicion regarding AI.

Companies including Microsoft should make such features opt-in rather than enabled by default, and might provide more granular, non-technical information about how personal content is handled, because perception can become reality.

“Even if the technology is completely safe, public perception is shaped not just by facts but by fears and assumptions — especially in the AI era where users often feel disempowered,” he said.


Default settings that enable sharing make sense for business reasons but are bad for consumer privacy, according to Jochem Hummel, assistant professor of information systems and management at Warwick Business School at the University of Warwick in England.

Companies are able to enhance their products and maintain competitiveness with more data sharing as the default, Hummel said. However, from a user standpoint, prioritizing privacy by adopting an opt-in model for data sharing would be “a more ethical approach,” he said. And as long as the additional features offered through data collection are not indispensable, users can choose which aligns more closely with their interests.

There are real benefits to the current tradeoff between AI-enhanced tools and privacy, Hummel said, based on what he is seeing in the work turned in by students. Students who have grown up with web cameras, lives broadcast in real time on social media, and all-encompassing technology are often less concerned about privacy, Hummel said, and are embracing these tools enthusiastically. “My students, for example, are creating better presentations than ever,” he said.

Managing the risks

In areas such as copyright law, fears about massive copying by LLMs have been overblown, according to Kevin Smith, director of libraries at Colby College, but AI’s evolution does intersect with core privacy concerns.

“A lot of the privacy concerns currently being raised about AI have actually been around for years; the rapid deployment of large language model trained AI has just focused attention on some of those issues,” Smith said. “Personal information is all about relationships, so the risk that AI models could uncover data that was more secure in a more ‘static’ system is the real change we need to find ways to manage,” he added.

In most programs, turning off AI features is an option buried in the settings. For connected experiences, for instance, open a document, click “File,” go to “Account,” and find Privacy Settings. From there, go to “Manage Settings” and scroll down to connected experiences, then click the box to turn it off. After doing so, Microsoft warns: “If you turn this off, some experiences may not be available to you.” Microsoft says leaving the setting on allows for more communication, collaboration, and suggestions served up by AI.

In Gmail, open the app, tap the menu, go to settings, select the account you want to change, then scroll to the “general” section and uncheck the boxes next to the various “smart features” and personalization options.

As cybersecurity vendor Malwarebytes put it in a blog post about the Microsoft feature: “turning that option off might result in some lost functionality if you’re working on the same document with other people in your organization. … If you want to turn these settings off for reasons of privacy and you don’t use them much anyway, by all means, do so. The settings can all be found under Privacy Settings for a reason. But nowhere could I find any indication that these connected experiences were used to train AI models.”

While these instructions are easy enough to follow, and learning more about what you have agreed to is probably a good option, some experts say the onus should not be on the consumer to deactivate these settings. “When companies implement features like these, they often present them as opt-ins for enhanced functionality, but users may not fully understand the scope of what they’re agreeing to,” said Wes Chaar, a data privacy expert.

“The crux of the issue lies in the vague disclosures and lack of clear communication about what ‘connected’ entails and how deeply their personal content is analyzed or stored,” Chaar said. “For those outside of technology, it might be likened to inviting a helpful assistant into your home, only to learn later they’ve taken notes on your private conversations for a training manual.”

The decision to manage, limit, or even revoke access to data underscores the imbalance in the current digital ecosystem. “Without robust systems prioritizing user consent and offering control, individuals are left vulnerable to having their data repurposed in ways they neither anticipate nor benefit from,” Chaar said.
