
When Elon Musk announced his offer to buy Twitter for more than $40 billion, he told the public his vision for the social media site was to make sure it’s “an inclusive arena for free speech.”

Musk’s actions since closing the deal last year have illuminated how he sees the balance internet platforms must strike between protecting free expression and protecting user safety. While he’s lifted restrictions on many previously suspended accounts, including former President Donald Trump’s, he’s also placed new limitations on journalists’ and others’ accounts for posting publicly available flight information he’s equated to doxxing.

The saga of Musk’s Twitter takeover has underscored the complexity of determining what speech is truly protected. That question is particularly difficult when it comes to online platforms, which create policies that impact wide swaths of users from different cultures and legal systems across the world.

This year, the U.S. justice system, including the Supreme Court, will take on cases that will help determine the bounds of free expression on the internet in ways that could force the hand of Musk and other platform owners who determine what messages get distributed widely.

The boundaries they will consider include the extent of platforms’ responsibility to remove terrorist content and prevent their algorithms from promoting it; whether social media sites can take down messaging on the basis of viewpoint; and whether the government can impose online safety standards that some civil society groups fear could lead to important resources and messages being stifled to avoid legal liability.

“The question of free speech is always more complicated than it looks,” said David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. “There’s a freedom to speak freely. But there’s also the freedom to be free from harassment, to be free from discrimination.”

Brody said whenever the parameters of content moderation get tweaked, people need to consider “whose speech gets silenced when that dial gets turned? Whose speech gets silenced because they are too fearful to speak out in the new environment that is created?”

Tech’s liability shield under threat

Facebook’s new rebrand logo Meta is seen on a smartphone in front of the displayed logos of Facebook, Messenger, Instagram, WhatsApp and Oculus in this illustration picture taken October 28, 2021.

Dado Ruvic | Reuters

Section 230 of the Communications Decency Act has been a bedrock of the tech industry for more than two decades. The law grants a liability shield to internet platforms that protects them from being held responsible for their users’ posts, while also allowing them to decide what stays up or comes down.

But while industry leaders say it’s what has allowed online platforms to flourish and innovate, lawmakers on both sides of the aisle have increasingly pushed to diminish its protections for the multibillion-dollar companies, with many Democrats wanting platforms to remove more hateful content and Republicans wanting to leave up more posts that align with their views.

Section 230 protection makes it easier for platforms to allow users to post their views without the companies themselves fearing they could be held responsible for those messages. It also gives the platforms peace of mind that they won’t be penalized if they want to remove or demote information they deem to be harmful or objectionable in some way.

These are the cases that threaten to undermine Section 230’s force:

  • Gonzalez v. Google: This Supreme Court case, brought under the Anti-Terrorism Act (ATA) by the family of an American killed in the 2015 terrorist attacks in Paris, asks whether Section 230 shields YouTube when its algorithms recommend terrorist content to users. A ruling against Google could narrow the liability protection for any platform that ranks or recommends third-party posts. The court is set to hear arguments in February.
  • Twitter v. Taamneh: This Supreme Court case doesn’t directly involve Section 230, but its outcome could still affect how platforms choose to moderate information on their services. The case, also brought under the ATA and set to be heard in February, deals with the question of whether Twitter should have taken more aggressive action against terrorist content, given that it already moderates posts on its site. Jess Miers, legal advocacy counsel at the tech-backed group Chamber of Progress, said a ruling against Twitter in the case could create an “existential question” for tech companies by forcing them to rethink whether monitoring for terrorist content at all creates legal knowledge that it exists, which could later be used against them in court.
  • Challenges to Florida and Texas social media laws: Another set of cases deals with the question of whether services should be required to host more content of certain kinds. Two tech industry groups, NetChoice and the Computer & Communications Industry Association, filed suit against the states of Florida and Texas over their laws seeking to prevent online platforms from discriminating on their services based on viewpoint. The groups argue that the laws effectively violate the businesses’ First Amendment rights by forcing them to host objectionable messages even if they violate the company’s own terms of service, policies or beliefs. The Supreme Court has yet to decide if or when to hear the cases, though many expect it will take them up at some point.
  • Tech challenge to California’s kids online safety law: Separately, NetChoice also filed suit against California for a new law there that aims to make the internet safer for kids, but that the industry group says would unconstitutionally restrict speech. The Age-Appropriate Design Code requires internet platforms that are likely to be accessed by kids to mitigate risks to those users. But in doing so, NetChoice has argued the state imposed an overly vague rule subject to the whims of what the attorney general deems to be appropriate. The group said the law will create “overwhelming pressure to over-moderate content to avoid the law’s penalties for content the State deems harmful,” which will “stifle important resources, particularly for vulnerable youth who rely on the Internet for life-saving information.” This case is still at the district court level.

The tension between the cases


The variety in these cases involving speech on the internet underscores the complexity of regulating the space.

“On the one hand, in the NetChoice cases, there’s an effort to get platforms to leave stuff up,” said Jennifer Granick, surveillance and cybersecurity counsel at the ACLU Speech, Privacy, and Technology Project. “And then the Taamneh and the Gonzalez case, there’s an effort to get platforms to take more stuff down and to police more thoroughly. You kind of can’t do both.” 

If the Supreme Court ultimately decides to hear arguments in the Texas or Florida social media law cases, it could face tricky questions about how to square its decision with the outcome in the Gonzalez case.

For example, if the court decides in the Gonzalez case that platforms can be held liable for hosting some types of user posts or promoting them through their algorithms, that holding would be “in some tension” with laws like Florida’s and Texas’ that seek to force providers to carry third-party content, said Samir Jain, vice president of policy at the Center for Democracy and Technology, a nonprofit that has received funding from tech companies including Google and Amazon.

“Because if on the one hand, you say, ‘Well, if you carry terrorist-related content or you carry certain other content, you’re potentially liable for it.’ And they then say, ‘But states can force you to carry that content.’ There’s some tension there between those two kinds of positions,” Jain said. “And so I think the court has to think of the cases holistically in terms of what kind of regime overall it’s going to be creating for online service providers.”

The NetChoice cases against red states Florida and Texas, and the blue state of California, also show how disagreements over how speech should be regulated on the internet are not constrained by ideological lines. The laws threaten to divide the country into states that require more messages to be left up and others that require more posts to be taken down or restricted in reach.

Under such a system, tech companies “would be forced to go to any common denominator that exists,” according to Chris Marchese, counsel at NetChoice.

“I have a feeling though that what really would end up happening is that you could probably boil down half the states into a, ‘we need to remove more content regime,’ and then the other half would more or less go into, ‘we need to leave more content up’ regime,” Marchese said. “Those two regimes really cannot be harmonized. And so I think that to the extent that it’s possible, we could see an internet that does not function the same from state to state.”

Critics of the California law have also warned that in a period when access to resources for LGBTQ youth is already limited (through measures like Florida’s Parental Rights in Education law, referred to by critics as the Don’t Say Gay law, which limits how schools can teach about gender identity or sexual orientation in early grades), the legislation threatens to further cut off vulnerable kids and teens from important information based on the whims of the state’s enforcement.

NetChoice alleged in its lawsuit against the California law that blogs and discussion forums for mental health, sexuality, religion and more could fall within the scope of the law if they are likely to be accessed by kids. It also claimed the law would violate platforms’ own First Amendment right to editorial discretion and “impermissibly restricts how publishers may address or promote content that a government censor thinks unsuitable for minors.”

Jim Steyer, CEO of Common Sense Media, which has advocated for the California law and other measures to protect kids online, criticized arguments from tech-backed groups against the legislation. Though he acknowledged critiques from outside groups as well, he warned that it’s important not to let “perfect be the enemy of the good.”

“We’re in the business of trying to get stuff done concretely for kids and families,” Steyer said. “And it’s easy to make intellectual arguments. It’s a lot tougher sometimes to get stuff done.”

How degrading 230 protections could change the internet

A YouTube logo seen at the YouTube Space LA in Playa Del Rey, Los Angeles, California, United States October 21, 2015.

Lucy Nicholson | Reuters

Although the courts could rule in a variety of ways in these cases, any chipping away at Section 230 protections will likely have tangible effects on how internet companies operate.

Google, in its brief filed with the Supreme Court on Jan. 12, warned that denying Section 230 protections to YouTube in the Gonzalez case “could have devastating spillover effects.”

“Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user,” Google wrote. It added that if tech platforms were able to be sued without Section 230 protection for how they organize information, “the internet would devolve into a disorganized mess and a litigation minefield.”

Google said such a change would also make the internet less safe and less hospitable to free expression.

“Without Section 230, some websites would be forced to overblock, filtering content that could create any potential legal risk, and might shut down some services altogether,” General Counsel Halimah DeLaine Prado wrote in a blog post summarizing Google’s position. “That would leave consumers with less choice to engage on the internet and less opportunity to work, play, learn, shop, create, and participate in the exchange of ideas online.”

Miers of Chamber of Progress said that even if Google technically wins at the Supreme Court, it’s possible justices try to “split the baby” in establishing a new test of when Section 230 protections should apply, like in the case of algorithms. A result like that would effectively undermine one of the main functions of the law, according to Miers, which is the ability to swiftly end lawsuits against platforms that involve hosting third-party content.

If the court tries to draw such a distinction, Miers said, “now we’re going to get in a situation where plaintiffs bringing their cases against internet services are going to always try to frame it as being on the other side of the line that the Supreme Court sets up. And then there’s going to be a lengthy discussion of the courts asking, well, does Section 230 even apply in this case? But once we get to that lengthy discussion, the entire procedural benefits of 230 have been mooted at that point.”

Miers added that platforms could also opt to display mostly posts from professional content creators, rather than amateurs, to maintain a level of control over the information they could be at risk for promoting.

The impact on online communities could be especially profound for marginalized groups. Civil society groups who spoke with CNBC doubted that for-profit companies would spend on increasingly complex models to navigate a risky legal field in a more nuanced way.

“It’s much cheaper from a compliance point of view to just censor everything,” said Brody of the Lawyers’ Committee. “I mean, these are for-profit companies, they’re going to look at, what is the most cost-effective way for us to reduce our legal liability? And the answer to that is not going to be investing billions and billions of dollars into trying to improve content moderation systems that are frankly already broken. The answer is going to be, let’s just crank up the dial on the AI that automatically censors stuff so that we have a Disneyland rule. Everything’s happy and nothing bad ever happens. But to do that, you’re going to censor a lot of underrepresented voices in a way that is really going to have outsized censorship impacts on them.” 

The Supreme Court of the United States building is seen in Washington, D.C., United States on December 28, 2022.

Celal Gunes | Anadolu Agency | Getty Images

The idea that some business models will become simply too risky to operate under a more limited liability shield is not theoretical.

After Congress passed SESTA-FOSTA, which carved out an exception to the liability shield for cases involving sex trafficking, options to advertise sex work online became more limited due to the liability risk. While some might view that as a positive change, many sex workers have argued it removed a safer option for making money compared with soliciting work in person.

Lawmakers who’ve sought to alter Section 230 seem to think there is a “magical lever” they can pull that will “censor all the bad stuff from the internet and leave up all the good stuff,” according to Evan Greer, director of Fight for the Future, a digital rights advocacy group.

“The reality is that when we subject platforms to liability for user-generated content, no matter how well-intentioned the effort is or no matter how it’s framed, what ends up happening is not that platforms moderate more responsibly or more thoughtfully,” Greer said. “They moderate in whatever way their risk-averse lawyers tell them to, to avoid getting sued.”

Jain of CDT pointed to Craigslist’s decision to take down its personal ads section altogether in the wake of SESTA-FOSTA’s passage “because it was just too difficult to sort of make those fine-grained distinctions” between legal services and illegal sex trafficking.

“So if the court were to say that you could be potentially liable for quote, unquote, recommending third-party content or for your algorithms displaying third-party content, because it’s so difficult to moderate in a totally perfect way, one response might be to take down a lot of speech or to block a lot of speech,” Jain said.

Miers fears that if different states enact their own laws seeking to place limits on Section 230 as Florida and Texas have, companies will end up adhering to the strictest state’s law for the rest of the country. That could result in restrictions on the kind of content most likely to be considered controversial in that state, such as resources for LGBTQ youth when such information isn’t considered age-appropriate, or reproductive care in a state that has abortion restrictions.

Should the Supreme Court end up degrading 230 protections and allowing a fragmented legal system to persist for content moderation, Miers said it could be a spark for Congress to address the new challenges, noting that Section 230 itself came out of two bipartisan lawmakers’ recognition of new legal complexities presented by the existence of the internet.

“Maybe we have to sort of relive that history and realize that oh, well, we’ve made the regulatory environment so convoluted that it’s risky again to host user-generated content,” Miers said. “Yeah, maybe Congress needs to act.”

Inside a Utah desert facility preparing humans for life on Mars

Hidden among the majestic canyons of the Utah desert, about 7 miles from the nearest town, is a small research facility meant to prepare humans for life on Mars.

The Mars Society, a nonprofit organization that runs the Mars Desert Research Station, or MDRS, invited CNBC to shadow one of its analog crews on a recent mission.

“MDRS is the best analog astronaut environment,” said Urban Koi, who served as health and safety officer for Crew 315. “The terrain is extremely similar to the Mars terrain and the protocols, research, science and engineering that occurs here is very similar to what we would do if we were to travel to Mars.”

SpaceX CEO and Mars advocate Elon Musk has said his company can get humans to Mars as early as 2029.

The five-person Crew 315 spent two weeks living at the research station following the same procedures that they would on Mars.

David Laude, who served as the crew’s commander, described a typical day.

“So we all gather around by 7 a.m. around a common table in the upper deck and we have breakfast,” he said. “Around 8:00 we have our first meeting of the day where we plan out the day. And then in the morning, we usually have an EVA of two or three people and usually another one in the afternoon.”

An EVA refers to extravehicular activity. In NASA speak, EVAs refer to spacewalks, when astronauts leave the pressurized space station and must wear spacesuits to survive in space.

“I think the most challenging thing about these analog missions is just getting into a rhythm. … Although here the risk is lower, on Mars performing those daily tasks are what keeps us alive,” said Michael Andrews, the engineer for Crew 315.


Apple scores big victory with ‘F1,’ but AI is still a major problem in Cupertino

Formula One F1 – United States Grand Prix – Circuit of the Americas, Austin, Texas, U.S. – October 23, 2022. Tim Cook waves the chequered flag to the race winner, Red Bull’s Max Verstappen.

Mike Segar | Reuters

Apple had two major launches last month. They couldn’t have been more different.

First, Apple revealed some of the artificial intelligence advancements it had been working on in the past year when it released developer versions of its operating systems to muted applause at its annual developer’s conference, WWDC. Then, at the end of the month, Apple hit the red carpet as its first true blockbuster movie, “F1,” debuted to over $155 million — and glowing reviews — in its first weekend.

While “F1” was a victory lap for Apple, highlighting the strength of its long-term outlook, the growth of its services business and its ability to tap into culture, Wall Street’s reaction to the company’s AI announcements at WWDC suggests there’s some trouble under the hood.

“F1” showed Apple at its best — in particular, its ability to invest in new, long-term projects. When Apple TV+ launched in 2019, it had only a handful of original shows and one movie, a film festival darling called “Hala” that didn’t even share its box office revenue.

Despite Apple TV+ being written off as a costly side-project, Apple stuck with its plan over the years, expanding its staff and operation in Culver City, California. That allowed the company to build up Hollywood connections, especially for TV shows, and build an entertainment track record. Now, an Apple Original can lead the box office on a summer weekend, the prime season for blockbuster films.

The success of “F1” also highlights Apple’s significant marketing machine and ability to get big-name talent to appear with its leadership. Apple pulled out all the stops to market the movie, including using its Wallet app to send a push notification with a discount for tickets to the film. To promote “F1,” Cook appeared with movie star Brad Pitt at an Apple store in New York and posted a video with actual F1 racer Lewis Hamilton, who was one of the film’s producers.

(L-R) Brad Pitt, Lewis Hamilton, Tim Cook, and Damson Idris attend the World Premiere of “F1: The Movie” in Times Square on June 16, 2025 in New York City.

Jamie McCarthy | Getty Images Entertainment | Getty Images

Although Apple services chief Eddy Cue said in a recent interview that Apple needs its film business to be profitable to “continue to do great things,” “F1” isn’t just about the bottom line for the company.

Apple’s Hollywood productions are perhaps the most prominent face of the company’s services business, a profit engine that has been an investor favorite since the iPhone maker started highlighting the division in 2016.

Films will only ever be a small fraction of the services unit, which also includes payments, iCloud subscriptions, magazine bundles, Apple Music, game bundles, warranties, fees related to digital payments and ad sales. Plus, even the biggest box office smashes would be small on Apple’s scale — the company does over $1 billion in sales on average every day.

But movies are the only services component that can get celebrities like Pitt or George Clooney to appear next to an Apple logo — and the success of “F1” means that Apple could do more big popcorn films in the future.

“Nothing breeds success or inspires future investment like a current success,” said Comscore senior media analyst Paul Dergarabedian.

But if “F1” is a sign that Apple’s services business is in full throttle, the company’s AI struggles are a “check engine” light that won’t turn off.

Replacing Siri’s engine

At WWDC last month, Wall Street was eager to hear about the company’s plans for Apple Intelligence, its suite of AI features that it first revealed in 2024. Apple Intelligence, which is a key selling point of the company’s hardware products, had a rollout marred by delays and underwhelming features.

Apple spent most of WWDC going over smaller machine learning features, but did not reveal what investors and consumers increasingly want: A sophisticated Siri that can converse fluidly and get stuff done, like making a restaurant reservation. In the age of OpenAI’s ChatGPT, Anthropic’s Claude and Google’s Gemini, the expectation of AI assistants among consumers is growing beyond “Siri, how’s the weather?”

The company had previewed a significantly improved Siri in the summer of 2024, but earlier this year, those features were delayed to sometime in 2026. At WWDC, Apple didn’t offer any updates about the improved Siri beyond that the company was “continuing its work to deliver” the features in the “coming year.” Some observers reduced their expectations for Apple’s AI after the conference.

“Current expectations for Apple Intelligence to kickstart a super upgrade cycle are too high, in our view,” wrote Jefferies analysts this week.

Siri should be an example of how Apple’s ability to improve products and projects over the long term makes it tough to compete with.

It beat nearly every other voice assistant to market when it first debuted on iPhones in 2011. Fourteen years later, Siri remains essentially the same one-off, rigid, question-and-answer system that struggles with open-ended questions and dates, even after the invention in recent years of sophisticated voice bots based on generative AI technology that can hold a conversation.

Apple’s strongest rivals, including Android parent Google, have done way more to integrate sophisticated AI assistants into their devices than Apple has. And Google doesn’t have the same reflex against collecting data and cloud processing as privacy-obsessed Apple.

Some analysts have said they believe Apple has a few years before the company’s lack of competitive AI features will start to show up in device sales, given the company’s large installed base and high customer loyalty. But Apple can’t get lapped before it re-enters the race, and its former design guru Jony Ive is now working on new hardware with OpenAI, ramping up the pressure in Cupertino.

“The three-year problem, which is within an investment time frame, is that Android is racing ahead,” Needham senior internet analyst Laura Martin said on CNBC this week.

Apple’s services success with projects like “F1” is an example of what the company can do when it sets clear goals in public and then executes on them over extended time frames.

Its AI strategy could use a similar long-term plan, as customers and investors wonder when Apple will fully embrace the technology that has captivated Silicon Valley.

Wall Street’s anxiety over Apple’s AI struggles was evident this week after Bloomberg reported that Apple was considering replacing Siri’s engine with Anthropic or OpenAI’s technology, as opposed to its own foundation models.

The move, if it were to happen, would contradict one of Apple’s most important strategies in the Cook era: Apple wants to own its core technologies, like the touchscreen, processor, modem and maps software, not buy them from suppliers.

Using external technology would be an admission that Apple Foundation Models aren’t good enough yet for what the company wants to do with Siri.

“They’ve fallen farther and farther behind, and they need to supercharge their generative AI efforts,” Martin said. “They can’t do that internally.”

Apple might even pay billions for the use of Anthropic’s AI software, according to the Bloomberg report. If Apple were to pay for AI, it would be a reversal from current services deals, like the search deal with Alphabet where the Cupertino company gets paid $20 billion per year to push iPhone traffic to Google Search.

The company didn’t confirm the report and declined to comment, but Wall Street welcomed the news and Apple shares rose.

In the world of AI in Silicon Valley, signing bonuses for the kinds of engineers that can develop new models can range up to $100 million, according to OpenAI CEO Sam Altman.

“I can’t see Apple doing that,” Martin said.

Earlier this week, Meta CEO Mark Zuckerberg sent a memo bragging about hiring 11 AI experts from companies such as OpenAI, Anthropic, and Google’s DeepMind. That came after Zuckerberg hired Scale AI CEO Alexandr Wang to lead a new AI division as part of a $14.3 billion deal.

Meta’s not the only company to spend hundreds of millions on AI celebrities to get them in the building. Google spent big to hire away the founders of Character.AI, Microsoft got its AI leader by striking a deal with Inflection and Amazon hired the executive team of Adept to bulk up its AI roster.

Apple, on the other hand, hasn’t announced any big AI hires in recent years. While Cook rubs shoulders with Pitt, the actual race may be passing Apple by.

Musk backs Sen. Paul’s criticism of Trump’s megabill in first comment since it passed

Tesla CEO Elon Musk speaks to reporters alongside U.S. President Donald Trump in the Oval Office of the White House on May 30, 2025 in Washington, DC.

Kevin Dietsch | Getty Images

Tesla CEO Elon Musk, who blasted President Donald Trump’s signature spending bill for weeks, on Friday made his first comments since the legislation passed.

Musk backed a post on X by Sen. Rand Paul, R-Ky., who said the bill’s budget “explodes the deficit” and continues a pattern of “short-term politicking over long-term sustainability.”

The House of Representatives narrowly passed the One Big Beautiful Bill Act on Thursday, sending it to Trump to sign into law.

Paul and Musk have been vocal opponents of Trump’s tax and spending bill, and repeatedly called out the potential for the spending package to increase the national debt.

On Monday, Musk called it the “DEBT SLAVERY bill.”

The independent Congressional Budget Office has said the bill could add $3.4 trillion to the $36.2 trillion of U.S. debt over the next decade. The White House has labeled the agency as “partisan” and repeatedly disputed its estimates.


The bill includes trillions of dollars in tax cuts, increased spending for immigration enforcement and large cuts to funding for Medicaid and other programs.

It also cuts tax credits and support for solar and wind energy and electric vehicles, a particularly sore spot for Musk, who has several companies that benefit from the programs.

“I took away his EV Mandate that forced everyone to buy Electric Cars that nobody else wanted (that he knew for months I was going to do!), and he just went CRAZY!” Trump wrote in a social media post in early June as the pair traded insults and threats.

Shares of Tesla plummeted as the feud intensified, with the company losing $152 billion in market cap on June 5 and putting the company below $1 trillion in value. The stock has largely rebounded since, but is still below where it was trading before the ruckus with Trump.

Tesla one-month stock chart.

— CNBC’s Kevin Breuninger and Erin Doherty contributed to this article.
