How the Supreme Court could soon change free speech on the internet
Published 3 years ago
Bloomberg Creative | Bloomberg Creative Photos | Getty Images
When Elon Musk announced his offer to buy Twitter for more than $40 billion, he told the public his vision for the social media site was to make sure it’s “an inclusive arena for free speech.”
Musk’s actions since closing the deal last year have illuminated how he sees the balance internet platforms must strike between protecting free expression and user safety. While he’s lifted restrictions on many previously suspended accounts, including former President Donald Trump’s, he’s also placed new limitations on journalists’ and others’ accounts for posting publicly available flight information that he’s equated to doxxing.
The saga of Musk’s Twitter takeover has underscored the complexity of determining what speech is truly protected. That question is particularly difficult when it comes to online platforms, which create policies that impact wide swaths of users from different cultures and legal systems across the world.
This year, the U.S. justice system, including the Supreme Court, will take on cases that will help determine the bounds of free expression on the internet in ways that could force the hand of Musk and other platform owners who determine what messages get distributed widely.
The boundaries they will consider include the extent of platforms’ responsibility to remove terrorist content and prevent their algorithms from promoting it, whether social media sites can take down messaging on the basis of viewpoint and whether the government can impose online safety standards that some civil society groups fear could lead to important resources and messages being stifled to avoid legal liability.
“The question of free speech is always more complicated than it looks,” said David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. “There’s a freedom to speak freely. But there’s also the freedom to be free from harassment, to be free from discrimination.”
Brody said whenever the parameters of content moderation get tweaked, people need to consider “whose speech gets silenced when that dial gets turned? Whose speech gets silenced because they are too fearful to speak out in the new environment that is created?”
Tech’s liability shield under threat
Facebook’s new rebrand logo Meta is seen on a smartphone in front of the displayed logos of Facebook, Messenger, Instagram, WhatsApp and Oculus in this illustration picture taken October 28, 2021.
Dado Ruvic | Reuters
Section 230 of the Communications Decency Act has been a bedrock of the tech industry for more than two decades. The law grants a liability shield to internet platforms that protects them from being held responsible for their users’ posts, while also allowing them to decide what stays up or comes down.
But while industry leaders say it’s what has allowed online platforms to flourish and innovate, lawmakers on both sides of the aisle have increasingly pushed to diminish its protections for the multibillion-dollar companies, with many Democrats wanting platforms to remove more hateful content and Republicans wanting them to leave up more posts that align with their views.
Section 230 protection makes it easier for platforms to allow users to post their views without the companies themselves fearing they could be held responsible for those messages. It also gives the platforms peace of mind that they won’t be penalized if they want to remove or demote information they deem to be harmful or objectionable in some way.
These are the cases that threaten to undermine Section 230’s force:
- Gonzalez v. Google: This is the Supreme Court case with the potential to alter the most popular business models of the internet that currently allow for a largely free-flowing stream of posts. The case, brought by the family of an American who was killed in a 2015 terrorist attack in Paris, seeks to determine whether Section 230 can shield Google from liability under the Anti-Terrorism Act (ATA) for allegedly aiding and abetting ISIS by promoting videos created by the terrorist organization through its recommendation algorithm. If the court significantly increases the liability risk for platforms using algorithms, the services may choose to abandon them or greatly diminish their use, thereby changing the way content can be found or go viral on the internet. It will be heard by the Supreme Court in February.
- Twitter v. Taamneh: This Supreme Court case doesn’t directly involve Section 230, but its outcome could still affect how platforms choose to moderate information on their services. The case, also brought under the ATA and set to be heard by the Supreme Court in February, deals with the question of whether Twitter should have taken more aggressive moderating action against terrorist content because it already moderates posts on its site. Jess Miers, legal advocacy counsel at the tech-backed group Chamber of Progress, said a ruling against Twitter in the case could create an “existential question” for tech companies by forcing them to rethink whether monitoring for terrorist content at all creates legal knowledge of its existence, which could later be used against them in court.
- Challenges to Florida and Texas social media laws: Another set of cases deals with the question of whether services should be required to host more content of certain kinds. Two tech industry groups, NetChoice and the Computer & Communications Industry Association, filed suit against the states of Florida and Texas over their laws seeking to prevent online platforms from discriminating on their services based on viewpoint. The groups argue that the laws effectively violate the businesses’ First Amendment rights by forcing them to host objectionable messages even if those messages violate the companies’ own terms of service, policies or beliefs. The Supreme Court has yet to decide if or when to hear the cases, though many expect it will take them up at some point.
- Tech challenge to California’s kids online safety law: Separately, NetChoice also filed suit against California for a new law there that aims to make the internet safer for kids, but that the industry group says would unconstitutionally restrict speech. The Age-Appropriate Design Code requires internet platforms that are likely to be accessed by kids to mitigate risks to those users. But in doing so, NetChoice has argued the state imposed an overly vague rule subject to the whims of what the attorney general deems to be appropriate. The group said the law will create “overwhelming pressure to over-moderate content to avoid the law’s penalties for content the State deems harmful,” which will “stifle important resources, particularly for vulnerable youth who rely on the Internet for life-saving information.” This case is still at the district court level.
The tension between the cases
Getty Images
The variety in these cases involving speech on the internet underscores the complexity of regulating the space.
“On the one hand, in the NetChoice cases, there’s an effort to get platforms to leave stuff up,” said Jennifer Granick, surveillance and cybersecurity counsel at the ACLU Speech, Privacy, and Technology Project. “And then the Taamneh and the Gonzalez case, there’s an effort to get platforms to take more stuff down and to police more thoroughly. You kind of can’t do both.”
If the Supreme Court ultimately decides to hear arguments in the Texas or Florida social media law cases, it could face tricky questions about how to square its decision with the outcome in the Gonzalez case.
For example, if the court decides in the Gonzalez case that platforms can be held liable for hosting certain types of user posts or promoting them through their algorithms, that would be “in some tension” with laws like Florida’s and Texas’s, which would force providers to carry third-party content, said Samir Jain, vice president of policy at the Center for Democracy and Technology, a nonprofit that has received funding from tech companies including Google and Amazon.
“Because if on the one hand, you say, ‘Well, if you carry terrorist-related content or you carry certain other content, you’re potentially liable for it.’ And they then say, ‘But states can force you to carry that content.’ There’s some tension there between those two kinds of positions,” Jain said. “And so I think the court has to think of the cases holistically in terms of what kind of regime overall it’s going to be creating for online service providers.”
The NetChoice cases against red states Florida and Texas, and the blue state of California, also show how disagreements over how speech should be regulated on the internet are not constrained by ideological lines. The laws threaten to divide the country into states that require more messages to be left up and others that require more posts to be taken down or restricted in reach.
Under such a system, tech companies “would be forced to go to any common denominator that exists,” according to Chris Marchese, counsel at NetChoice.
“I have a feeling though that what really would end up happening is that you could probably boil down half the states into a, ‘we need to remove more content regime,’ and then the other half would more or less go into, ‘we need to leave more content up’ regime,” Marchese said. “Those two regimes really cannot be harmonized. And so I think that to the extent that it’s possible, we could see an internet that does not function the same from state to state.”
Critics of the California law have also warned that at a time when access to resources for LGBTQ youth is already limited (through measures like Florida’s Parental Rights in Education law, referred to by critics as the Don’t Say Gay law, which limits how schools can teach about gender identity or sexual orientation in young grades), the legislation threatens to further cut off vulnerable kids and teens from important information based on the whims of the state’s enforcement.
NetChoice alleged in its lawsuit against the California law that blogs and discussion forums for mental health, sexuality, religion and more could be considered under the scope of the law if likely to be accessed by kids. It also claimed the law would violate platforms’ own First Amendment right to editorial discretion and “impermissibly restricts how publishers may address or promote content that a government censor thinks unsuitable for minors.”
Jim Steyer, CEO of Common Sense Media, which has advocated for the California law and other measures to protect kids online, criticized arguments from tech-backed groups against the legislation. Though he acknowledged critiques from outside groups as well, he warned that it’s important not to let “perfect be the enemy of the good.”
“We’re in the business of trying to get stuff done concretely for kids and families,” Steyer said. “And it’s easy to make intellectual arguments. It’s a lot tougher sometimes to get stuff done.”
How degrading 230 protections could change the internet
A YouTube logo seen at the YouTube Space LA in Playa Del Rey, Los Angeles, California, United States October 21, 2015.
Lucy Nicholson | Reuters
Although the courts could rule in a variety of ways in these cases, any chipping away at Section 230 protections will likely have tangible effects on how internet companies operate.
Google, in its brief filed with the Supreme Court on Jan. 12, warned that denying Section 230 protections to YouTube in the Gonzalez case “could have devastating spillover effects.”
“Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user,” Google wrote. It added that if tech platforms were able to be sued without Section 230 protection for how they organize information, “the internet would devolve into a disorganized mess and a litigation minefield.”
Google said such a change would also make the internet less safe and less hospitable to free expression.
“Without Section 230, some websites would be forced to overblock, filtering content that could create any potential legal risk, and might shut down some services altogether,” General Counsel Halimah DeLaine Prado wrote in a blog post summarizing Google’s position. “That would leave consumers with less choice to engage on the internet and less opportunity to work, play, learn, shop, create, and participate in the exchange of ideas online.”
Miers of Chamber of Progress said that even if Google technically wins at the Supreme Court, it’s possible justices try to “split the baby” in establishing a new test of when Section 230 protections should apply, like in the case of algorithms. A result like that would effectively undermine one of the main functions of the law, according to Miers, which is the ability to swiftly end lawsuits against platforms that involve hosting third-party content.
If the court tries to draw such a distinction, Miers said, “now we’re going to get in a situation where every case plaintiffs bringing their cases against internet services are going to always try to frame it as being on the other side of the line that the Supreme Court sets up. And then there’s going to be a lengthy discussion of the courts asking, well does Section 230 even apply in this case? But once we get to that lengthy discussion, the entire procedural benefits of 230 have been mooted at that point.”
Miers added that platforms could also opt to display mostly posts from professional content creators, rather than amateurs, to maintain a level of control over the information they could be at risk for promoting.
The impact on online communities could be especially profound for marginalized groups. Civil society groups who spoke with CNBC doubted that for-profit companies would spend on increasingly complex models to navigate a risky legal field in a more nuanced way.
“It’s much cheaper from a compliance point of view to just censor everything,” said Brody of the Lawyers’ Committee. “I mean, these are for-profit companies, they’re going to look at, what is the most cost-effective way for us to reduce our legal liability? And the answer to that is not going to be investing billions and billions of dollars into trying to improve content moderation systems that are frankly already broken. The answer is going to be, let’s just crank up the dial on the AI that automatically censors stuff so that we have a Disneyland rule. Everything’s happy and nothing bad ever happens. But to do that, you’re going to censor a lot of underrepresented voices in a way that is really going to have outsized censorship impacts on them.”
The Supreme Court of the United States building is seen in Washington D.C., United States on December 28, 2022.
Celal Gunes | Anadolu Agency | Getty Images
The idea that some business models will become simply too risky to operate under a more limited liability shield is not theoretical.
After Congress passed SESTA-FOSTA, which carved out an exception for liability protection in cases of sex trafficking, options to advertise sex work online became more limited due to the liability risk. While some might view that as a positive change, many sex workers have argued it removed a safer option for making money compared to soliciting work in person.
Lawmakers who’ve sought to alter Section 230 seem to think there is a “magical lever” they can pull that will “censor all the bad stuff from the internet and leave up all the good stuff,” according to Evan Greer, director of Fight for the Future, a digital rights advocacy group.
“The reality is that when we subject platforms to liability for user-generated content, no matter how well-intentioned the effort is or no matter how it’s framed, what ends up happening is not that platforms moderate more responsibly or more thoughtfully,” Greer said. “They moderate in whatever way their risk-averse lawyers tell them to, to avoid getting sued.”
Jain of CDT pointed to Craigslist’s decision to take down its personal ads section altogether in the wake of SESTA-FOSTA’s passage “because it was just too difficult to sort of make those fine-grained distinctions” between legal services and illegal sex trafficking.
“So if the court were to say that you could be potentially liable for quote, unquote, recommending third-party content or for your algorithms displaying third-party content, because it’s so difficult to moderate in a totally perfect way, one response might be to take down a lot of speech or to block a lot of speech,” Jain said.
Miers fears that if different states enact their own laws seeking to place limits on Section 230 as Florida and Texas have, companies will end up adhering to the strictest state’s law for the rest of the country. That could result in restrictions on the kind of content most likely to be considered controversial in that state, such as resources for LGBTQ youth when such information isn’t considered age-appropriate, or reproductive care in a state that has abortion restrictions.
Should the Supreme Court end up degrading 230 protections and allowing a fragmented legal system to persist for content moderation, Miers said it could be a spark for Congress to address the new challenges, noting that Section 230 itself came out of two bipartisan lawmakers’ recognition of new legal complexities presented by the existence of the internet.
“Maybe we have to sort of relive that history and realize that oh, well, we’ve made the regulatory environment so convoluted that it’s risky again to host user-generated content,” Miers said. “Yeah, maybe Congress needs to act.”
WATCH: The big, messy business of content moderation on Facebook, Twitter and YouTube

From Llamas to Avocados: Meta’s shifting AI strategy is causing internal confusion
Published December 9, 2025

Meta CEO Mark Zuckerberg makes a keynote speech at the Meta Connect annual event at the company’s headquarters in Menlo Park, Calif., on Sept. 25, 2024.
Manuel Orbegozo | Reuters
Meta CEO Mark Zuckerberg was so optimistic last year about his company’s Llama family of artificial intelligence models that he predicted they would become the “most advanced in the industry” and “bring the benefits of AI to everyone.”
But after including a whole section on Llama in his opening remarks during Meta’s earnings call in January of this year, he mentioned the brand name only once on the latest call in October. The company’s obsession with its open-source large language model has given way to a very different approach to AI, one centered on a multibillion-dollar hiring spree to bring in top industry talent that could help Meta take on the likes of OpenAI, Google and Anthropic.
As 2025 comes to a close, Meta’s strategy remains scattershot, according to insiders and industry experts, feeding the perception that the company has fallen further behind its top AI rivals, whose models are rapidly gaining adoption in the consumer and enterprise markets.
Meta is pursuing a new Llama successor and frontier AI model, codenamed Avocado, CNBC has learned. People with knowledge of the matter said many within the company were expecting the model to be released before the end of this year, but that the plan now is for that to happen in the first quarter of 2026. The model is going through various rounds of training-related performance testing intended to ensure the system is well received when it eventually debuts, said the people, who asked not to be named because they weren’t authorized to speak on the matter.
“Our model training efforts are going according to plan and have had no meaningful timing changes,” a Meta spokesperson said in a statement.
With its stock underperforming the broader tech sector this year and badly trailing Google parent Alphabet, Wall Street is looking for a sense of direction and a path to a return on investment after Meta spent $14.3 billion in June to hire Scale AI founder Alexandr Wang and a handful of his top engineers and researchers. Four months after that announcement, which included Meta purchasing a big stake in Scale, the social media company raised its 2025 guidance for capital expenditures to between $70 billion and $72 billion from a prior range of $66 billion to $72 billion.
“In many ways, Meta has been the opposite of Alphabet, where it entered the year as an AI winner and now faces more questions around investment levels and ROI,” analysts at KeyBanc Capital Markets wrote in a November note to clients. The firm recommends buying both stocks.

At the heart of Meta’s challenge is the sustained dominance of its core business: digital advertising.
Even with annual sales in excess of $160 billion, Meta’s ad targeting business, driven by massive improvements in AI and the popularity of Instagram, is growing revenue north of 20% a year. Investors have lauded the company for using AI to bolster the strength of its cash cow and to make the organization more efficient and less bloated.
But Zuckerberg has much grander ambitions, and the new guard he’s brought in to push the future vision of AI has no background in online ads. The 41-year-old founder, with a net worth of more than $230 billion, has suggested that if Meta doesn’t take big swings, it risks becoming an afterthought in a world that’s poised to be defined by AI.
Until recently, Meta’s unique position in AI was the open-source nature of its Llama models. Unlike other AI models, Meta’s technology was made freely available so third-party researchers and others could access the tools and ultimately improve them.
“Today, several tech companies are developing leading closed models,” Zuckerberg wrote in a blog post in July 2024. “But open source is quickly closing the gap.”
He’s since started changing his tune. Zuckerberg hinted over the summer that Meta was considering shaking up its approach to open source after the April release of Llama 4, which failed to captivate developers. Zuckerberg said in July: “We’ll need to be rigorous about mitigating these risks and careful about what we choose to open source.”
Avocado, when it’s eventually made available, could be a proprietary model, according to people familiar with the matter. That means outside developers wouldn’t be able to freely download its so-called weights and related software components.
Some at Meta were upset that the R1 model released by Chinese AI lab DeepSeek earlier this year incorporated pieces of Llama’s architecture, the people said, further underscoring the risks of open source and hammering home the idea that the company should overhaul its strategy.
The company’s high-priced AI hires and leaders of the recently launched Meta Superintelligence Labs, or MSL, have also questioned the open-source AI strategy and favored creating a more powerful proprietary AI model, CNBC reported in July. A Meta spokesperson said at the time that the company’s “position on open source AI is unchanged.”
The Llama 4 flub was a significant catalyst in Zuckerberg’s pivot, the people said, and also led to a major internal shake-up. Chris Cox, Meta’s chief product officer and a 20-year company veteran who was hired as its 13th software engineer, no longer oversees the AI division, formally known as the GenAI unit, after the botched release, the people said.
Zuckerberg went on a spending spree to retool Meta’s AI leadership.
He landed on Wang, then Scale AI’s 28-year-old CEO, who was named Meta’s new chief AI officer and, in August, became the head of an elite unit called TBD Lab. Avocado is being developed inside TBD, people familiar with the matter said.
Alexandr Wang, CEO of ScaleAI speaks on CNBC’s Squawk Box outside the World Economic Forum in Davos, Switzerland on Jan. 23, 2025.
Gerry Miller | CNBC
OpenAI CEO Sam Altman said in June that Meta was trying to lure talent from his company with gigantic pay packages, including sky-high $100 million signing bonuses, which Meta said at the time was a misrepresentation.
Along with Wang came other tech bigwigs, including former GitHub CEO Nat Friedman, who heads the product and applied research arm of MSL, and Shengjia Zhao, who was a ChatGPT co-creator. They’ve brought with them modern methods that have become the standard for frontier AI development in Silicon Valley, and have upended the traditional software development process inside Meta, the people said.
Meta’s AI culture shift
Wang is now under pressure to deliver a top-tier AI model that helps the company regain momentum against rivals like OpenAI, Anthropic and Google, the people said.
That pressure has only increased as competitors stepped up their game. Google’s Gemini 3, unveiled last month, has drawn solid reviews from users and analysts. OpenAI recently announced new updates to its GPT-5 AI model, while Anthropic debuted its Claude Opus 4.5 model in November shortly after releasing two other major models.
Analysts previously told CNBC that there’s no clear leading AI model, because some perform better on certain tasks like conversations or coding. But the one constant is that all of the major model creators have to spend a lot of money on AI to maintain any competitive edge, they said.
A hefty dose of that spending lines the pockets of Nvidia, the leading developer of AI graphics processing units. Nvidia CEO Jensen Huang laid out the state of play during his company’s earnings call in November, after the chipmaker reported 62% year-over-year revenue growth. He highlighted a number of model developers as big customers, including Elon Musk’s xAI.
“We run OpenAI. We run Anthropic. We run xAI because of our deep partnership with Elon and xAI,” Huang said. “We run Gemini. We run Thinking Machines. Let’s see, what else do we run? We run them all.”
At no point did Huang reference Llama, although elsewhere on the call he said Meta’s Gem, “a foundation model for ad recommendations trained on large-scale GPU clusters,” drove an improvement in ad conversions at Meta in the second quarter.
Wang isn’t the only Meta exec feeling the heat.
Friedman has also been tasked with producing a breakout AI product, the people said. He was responsible for Meta’s September launch of Vibes, a feed of AI-generated short videos, which is widely viewed as inferior to OpenAI’s Sora 2, they said. Former employees and creators told CNBC that the product was rushed to market and lacked key features, like the ability to generate realistic lip-synched audio.
Although Vibes has attracted more interest to the company’s stand-alone Meta AI app, it trails the Sora app as measured by downloads, according to data provided to CNBC by Appfigures.
Pressure is being felt across Meta’s AI organizations, where 70-hour workweeks have become the norm, the people said, while teams have also been hit with layoffs and restructurings throughout the year.
In October, Meta cut 600 jobs in MSL to reduce layers and operate more quickly. Those layoffs impacted employees in areas like the Fundamental Artificial Intelligence Research unit, or FAIR, and played a key role in chief AI scientist Yann LeCun’s decision to leave the company to launch a startup, according to people with knowledge of the matter.
LeCun declined to comment.
Yann LeCun, Meta’s former chief AI scientist, says people move on.
Getty Images
Zuckerberg’s high-stakes decision to turn to outsiders like Wang and Friedman to lead the company’s AI efforts represented a major change for a company that’s historically promoted long-tenured workers to top posts, the people said.
In Wang and Friedman, Zuckerberg has handed the controls to experts in infrastructure and related systems, rather than consumer apps. The new leaders also brought a different management style, one that’s unfamiliar inside Meta.
In particular, insiders told CNBC that Wang and Friedman are more cloistered in their communications, a contrast to the company’s historically open approach of sharing work and chatting on its Workplace internal social network.
Members of Wang’s TBD Lab, who work near Zuckerberg’s office, don’t use Workplace, people familiar with the matter said, adding that they’re not even on the network and that the group operates like a separate startup.
However, Zuckerberg isn’t giving the new AI leadership team complete autonomy. Aparna Ramani, engineering vice president, who has been with Meta for nearly a decade, has been put in charge of overseeing the distribution of computing resources for MSL, the people said.
And in October, Vishal Shah was moved from leading the company’s metaverse initiatives within Reality Labs, where he’d been for four years, to a new role as vice president of AI Products, working with Friedman. Shah is considered a loyal lieutenant who has helped act as a bridge between the company’s traditional social apps like Instagram and newer projects like Reality Labs, the people said.
Meta confirmed last week that it plans to cut resources to its virtual reality and related metaverse initiatives, shifting its attention to its popular AI-infused glasses developed with EssilorLuxottica.
‘Demo, don’t memo’
One of the biggest points of tension between the old and the new is in the realm of software development, people familiar with the matter said.
In creating products, Meta has traditionally sought input from numerous groups responsible for areas like front-end user interface, design, algorithmic feeds and privacy, the people said. The multistep process was intended to ensure some level of uniformity among the company’s apps that attract billions of users each day.
But the many internal tools built over the years to help coders create software and features weren’t developed to accommodate foundation models. Meta’s new AI leaders, notably Friedman, view them as bottlenecks slowing down what should be a rapid-fire development process, the people said.
Friedman has called for MSL to use newer tools that have been calibrated to incorporate multiple AI models and various kinds of coding automation software often called AI agents, the people said.
“They have this mantra now saying ‘Demo, don’t memo,'” Lovable CEO Anton Osika said in October at the Masters of Scale Summit in San Francisco, about Meta’s new development process.
Osika said Meta employees have been using Lovable’s tools to more quickly build internal apps, specifically referencing the company’s finance teams, which have turned to Lovable to create software for tracking head count and planning.
An illustration photo shows the event of Meta launching the Vibes platform, Suqian City, Jiangsu Province, China on September 26, 2025.
Cfoto | Future Publishing | Getty Images
While Meta continues retooling its app development methods and pushes toward releasing Avocado, the company has been experimenting with other AI models on its products. Vibes, for instance, relied on AI models from Black Forest Labs and Midjourney, a startup that counts Friedman as an advisor.
Meta is also altering its approach to infrastructure, and is increasingly turning to third-party cloud computing services like CoreWeave and Oracle for developing and testing AI features as it builds out its own massive data centers, the people said.
The social media giant announced in October that it signed a joint venture agreement with Blue Owl Capital as part of a $27 billion deal to help fund and develop the gargantuan Hyperion data center in Richland Parish, Louisiana. The company said at the time that the partnership provides the “speed and flexibility” Meta needs to build the data center and support its “long-term AI ambitions.”
Despite the company’s challenges in 2025, Zuckerberg’s message to employees and investors is that he’s more committed than ever to winning. At the top of the company’s earnings call in October, Zuckerberg said MSL is “off to a strong start.”
“I think that we’ve already built the lab with the highest talent density in the industry,” Zuckerberg said. “We’re heads down developing our next generation of models and products and I’m looking forward to sharing more on that front over the coming months.”

Paramount’s hostile Warner Bros. bid, Meta’s AI course correction, McDonald’s value crackdown and more in Morning Squawk
Published December 9, 2025

David Ellison, chairman and chief executive officer of Paramount Skydance Corp., center, outside the New York Stock Exchange (NYSE) in New York, US, on Monday, Dec. 8, 2025.
Michael Nagle | Bloomberg | Getty Images
This is CNBC’s Morning Squawk newsletter. Subscribe here to receive future editions in your inbox.
Here are five key things investors need to know to start the trading day:
1. One battle after another
Paramount Skydance CEO David Ellison isn’t taking his loss to Netflix in the bidding war for Warner Bros. on the chin. Paramount announced yesterday that it’s going directly to WBD shareholders with a $30 per share, all-cash hostile bid, with Ellison telling CNBC that he wants “to finish what we started.”
Here’s what to know:
- Paramount’s offer is the same one that Warner Bros. Discovery executives passed over in favor of Netflix’s last week. But this time, the decision will rest in the hands of WBD shareholders.
- President Donald Trump over the weekend said the Netflix-WBD deal “could be a problem,” citing the streamer’s potential market share should the deal go through. Trump also said he’d “be involved” in the approval process.
- Paramount’s hostile bid is backed by Jared Kushner — Trump’s son-in-law — according to a regulatory filing.
- Meanwhile, Comcast President Mike Cavanagh said he believed his company’s proposal was “light” on cash compared to the other two bids.
- Paramount shares surged 9% in yesterday’s session while shares of Warner Bros. Discovery jumped more than 4%. Netflix shares pulled back by more than 3%.
- Follow live market updates here.
Disclosure: Comcast is the parent company of NBCUniversal, which owns CNBC. Versant would become the new parent company of CNBC upon Comcast’s planned spinoff of Versant.
2. DC’s AI moves
Nvidia H200 chips in an eight-GPU Nvidia HGX system.
Nvidia
Trump announced yesterday that Nvidia will be allowed to ship its H200 artificial intelligence chips to “approved customers” in China and other countries. The caveat: Only if the U.S. gets a 25% cut.
The Department of Commerce is finalizing the details, Trump said in a social media post, adding that “the same approach will apply to AMD, Intel” and other U.S. firms. Shares of Nvidia, AMD and Intel all rose in overnight trading. Trump also said that Chinese President Xi Jinping “responded positively” to the plan.
Meanwhile, House Democrats are creating a commission on AI, hoping to position themselves as leaders on the issue. As CNBC’s Emily Wilkins notes, the move comes as the tech industry ramps up its presence in D.C. and its campaign spending.
3. From Llamas to Avocados
Meta CEO Mark Zuckerberg makes a keynote speech during the Meta Connect annual event, at the company’s headquarters in Menlo Park, California, on Sept. 25, 2024.
Manuel Orbegozo | Reuters
Meta has poured billions of dollars into overhauling its AI strategy. But as CNBC’s Jonathan Vanian reports, the shift has led to internal confusion and a haphazard strategy.
CEO Mark Zuckerberg began the year by touting Meta’s Llama family of AI models, which he said would become the “most advanced in the industry.” But CNBC has learned that Meta is now focused on a new AI model codenamed Avocado that could be proprietary instead of open source.
Elsewhere in Big Tech, Apple has seen significant churn among its top brass in recent days, including the departures of its head of AI and its top lawyer. The iPhone maker’s chip leader, Johny Srouji, reassured staff in a memo yesterday that he isn’t planning to leave “anytime soon,” following a report that he was considering departing.
4. Farm aid
Dan Duffy uses a tractor to plant soybeans on land he farms with his brother on April 28, 2025 near Dwight, Illinois.
Scott Olson | Getty Images
Trump announced a $12 billion aid package for farmers impacted by tariffs yesterday, saying the funds would come from revenues generated by the tariffs.
A White House official told CNBC that up to $11 billion of that sum will go to the Agriculture Department’s new Farmer Bridge Assistance program to distribute one-time payments to row crop farmers. The other $1 billion will be held as the department evaluates the market, the official said.
Trump, meanwhile, suffered a blow in court yesterday. A federal judge overturned his ban on new wind power projects, saying it was “arbitrary and capricious and contrary to law.”
5. McDonald’s New Year’s resolution
A customer waits to order food at a McDonalds fast food restaurant on July 26, 2022 in Miami, Florida.
Joe Raedle | Getty Images
McDonald’s is putting its franchisees under a more intense microscope in 2026. The fast-food titan said it will look at how their prices align with value goals as McDonald’s aims to woo more price-conscious consumers, according to a memo viewed by CNBC.
McDonald’s will update its standards for franchisees — who run about 95% of McDonald’s restaurants — and “holistically assess” their pricing, the memo shows. If franchise owners do not comply with the new standards, they could face penalties such as being barred from opening additional stores or having their agreements with the company terminated.
The Daily Dividend
IBM CEO Arvind Krishna joined CNBC’s “Squawk on the Street” yesterday to discuss the company’s acquisition of data streaming platform Confluent in an $11 billion deal. Confluent shares soared 29% following the announcement.

— CNBC’s Alex Sherman, David Faber, Lillian Rizzo, Sean Conlon, Emily Wilkins, Dan Mangan, Kevin Breuninger, Jonathan Vanian, Kif Leswing, Chris Eudaily, Steve Kovach, Spencer Kimball and Amelia Lucas contributed to this report. Josephine Rozzelle edited this edition.
SF mayor’s downtown revival project has reeled in $60 million from Google, OpenAI and others
Published December 9, 2025

San Francisco Mayor Daniel Lurie speaks during a press conference at San Francisco City Hall on Oct. 23, 2025 in San Francisco, California.
Justin Sullivan | Getty Images
San Francisco’s Downtown Development Corporation, launched in April by Mayor Daniel Lurie, said on Tuesday that it’s received over $60 million in early commitments from donors including Google and OpenAI to help revive the city’s center.
“I think people view this as a generational moment,” Shola Olatoye, CEO of the SFDDC, told CNBC in an interview. “San Francisco has captured the world’s, and the country’s, imagination as a global hub of innovation and industry. The folks who want to build businesses, raise their families here, and visit, recognize the important work that is underway and want to see it continue.”
In October, Lurie said the group, a nonprofit public benefit corporation, had raised $50 million for its efforts, up from $40 million at the time of its debut. When campaigning for mayor last year, Lurie touted his ability to fundraise, drawing on his past experience at the anti-poverty nonprofit Tipping Point Community and laying the groundwork for public-private partnerships to help revitalize San Francisco.
In addition to Google and OpenAI, SFDDC has raised money from backers including Visa, Thoma Bravo, Ripple, Salesforce, Amazon, Emerson Collective, Sixth Street and Gap. The funds will help support Lurie’s Heart of the City initiative, which prioritizes street safety and cleanliness, small business support and more.

Olatoye said some of the funding will also be deployed to fill vacant spaces in key retail spots such as along Powell and Stockton streets.
“We’re going to provide direct grants to these businesses to provide business support, marketing support and legal support,” Olatoye said. “And then actual below market capital from some of our lending partners to go in and actually fix up these spaces and get those businesses in there, get people spending money and generating economic activity for the city of San Francisco.”
Money will also be dedicated to a new Embarcadero Park, inspired by New York City’s Bryant Park. Lurie has often cited Michael Bloomberg’s efforts as mayor of New York as inspiration for his work, and the DDC is drawing on models used in New York as well as Detroit.
While a number of metrics show that San Francisco has bounced back dramatically from its pandemic lull, the city has a lot of work to do to prepare for an active 2026. Super Bowl LX is coming to the area in February, along with the Pro Bowl Games. In the summer, people will pack into the Bay Area for some of the FIFA World Cup.
“When downtown thrives, our residents, families and small business owners all benefit,” Lurie said in a statement. “By strengthening public safety, cutting red tape and leaning into our arts and culture, we are bringing people back to our streets.”
The first-term mayor notched a significant political win in October as President Donald Trump reversed his decision to deploy the National Guard in downtown San Francisco, saying Lurie was making “substantial progress” on crime in the city. Trump also said he was swayed by Nvidia CEO Jensen Huang and Salesforce CEO Marc Benioff.
The city has been boosted over the last year by a surge in investment and activity related to artificial intelligence. CBRE data on venture funding show 2025 is expected to surpass the record reached in 2021, thanks in large part to AI investments in San Francisco and Silicon Valley.
In addition, crime rates are down 30% from 2024, with event bookings and tourism on the rise, and residential and commercial real estate heating up.
“There’s no doubt that there is a lot of attention on us and we are super focused on outcomes and using data to ensure we can hold ourselves accountable,” Olatoye said.
