When Elon Musk announced his offer to buy Twitter for more than $40 billion, he told the public his vision for the social media site was to make sure it’s “an inclusive arena for free speech.”
The saga of Musk’s Twitter takeover has underscored the complexity of determining what speech is truly protected. That question is particularly difficult when it comes to online platforms, which create policies that impact wide swaths of users from different cultures and legal systems across the world.
This year, the U.S. justice system, including the Supreme Court, will take on cases that will help determine the bounds of free expression on the internet in ways that could force the hand of Musk and other platform owners who determine what messages get distributed widely.
The boundaries they will consider include the extent of platforms’ responsibility to remove terrorist content and prevent their algorithms from promoting it; whether social media sites can take down messaging on the basis of viewpoint; and whether the government can impose online safety standards that some civil society groups fear could lead to important resources and messages being stifled to avoid legal liability.
“The question of free speech is always more complicated than it looks,” said David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. “There’s a freedom to speak freely. But there’s also the freedom to be free from harassment, to be free from discrimination.”
Brody said whenever the parameters of content moderation get tweaked, people need to consider “whose speech gets silenced when that dial gets turned? Whose speech gets silenced because they are too fearful to speak out in the new environment that is created?”
Tech’s liability shield under threat
Facebook’s new rebrand logo, Meta, is seen on a smartphone in front of the displayed logos of Facebook, Messenger, Instagram, WhatsApp and Oculus in this illustration picture taken October 28, 2021.
Dado Ruvic | Reuters
Section 230 of the Communications Decency Act has been a bedrock of the tech industry for more than two decades. The law grants a liability shield to internet platforms that protects them from being held responsible for their users’ posts, while also allowing them to decide what stays up or comes down.
But while industry leaders say it’s what has allowed online platforms to flourish and innovate, lawmakers on both sides of the aisle have increasingly pushed to diminish its protections for the multibillion-dollar companies, with many Democrats wanting platforms to remove more hateful content and Republicans wanting to leave up more posts that align with their views.
Section 230 protection makes it easier for platforms to allow users to post their views without the companies themselves fearing they could be held responsible for those messages. It also gives the platforms peace of mind that they won’t be penalized if they want to remove or demote information they deem to be harmful or objectionable in some way.
These are the cases that threaten to undermine Section 230’s force:
Gonzalez v. Google: This is the Supreme Court case with the potential to alter the most popular business models of the internet that currently allow for a largely free-flowing stream of posts. The case, brought by the family of an American who was killed in a 2015 terrorist attack in Paris, seeks to determine whether Section 230 can shield Google from liability under the Anti-Terrorism Act (ATA) for allegedly aiding and abetting ISIS by promoting videos created by the terrorist organization through its recommendation algorithm. If the court significantly increases the liability risk for platforms using algorithms, the services may choose to abandon them or greatly diminish their use, therefore changing the way content can be found or go viral on the internet. It will be heard by the Supreme Court in February.
Twitter v. Taamneh: This Supreme Court case doesn’t directly involve Section 230, but its outcome could still impact how platforms choose to moderate information on their services. The case, also brought under the ATA and set to be heard by the Supreme Court in February, deals with the question of whether Twitter should have taken more aggressive moderating action against terrorist content because it moderates posts on its site. Jess Miers, legal advocacy counsel at the tech-backed group Chamber of Progress, said a ruling against Twitter in the case could create an “existential question” for tech companies by forcing them to rethink whether monitoring for terrorist content at all creates legal knowledge that it exists, which could later be used against them in court.
Challenges to Florida and Texas social media laws: Another set of cases deals with the question of whether services should be required to host more content of certain kinds. Two tech industry groups, NetChoice and the Computer & Communications Industry Association, filed suit against the states of Florida and Texas over their laws seeking to prevent online platforms from discriminating on their services based on viewpoint. The groups argue that the laws effectively violate the businesses’ First Amendment rights by forcing them to host objectionable messages even if they violate the company’s own terms of service, policies or beliefs. The Supreme Court has yet to decide if or when to hear the cases, though many expect it will take them up at some point.
Tech challenge to California’s kids online safety law: Separately, NetChoice also filed suit against California for a new law there that aims to make the internet safer for kids, but that the industry group says would unconstitutionally restrict speech. The Age-Appropriate Design Code requires internet platforms that are likely to be accessed by kids to mitigate risks to those users. But in doing so, NetChoice has argued the state imposed an overly vague rule subject to the whims of what the attorney general deems to be appropriate. The group said the law will create “overwhelming pressure to over-moderate content to avoid the law’s penalties for content the State deems harmful,” which will “stifle important resources, particularly for vulnerable youth who rely on the Internet for life-saving information.” This case is still at the district court level.
The tension between the cases
Getty Images
The variety in these cases involving speech on the internet underscores the complexity of regulating the space.
“On the one hand, in the NetChoice cases, there’s an effort to get platforms to leave stuff up,” said Jennifer Granick, surveillance and cybersecurity counsel at the ACLU Speech, Privacy, and Technology Project. “And then the Taamneh and the Gonzalez case, there’s an effort to get platforms to take more stuff down and to police more thoroughly. You kind of can’t do both.”
If the Supreme Court ultimately decides to hear arguments in the Texas or Florida social media law cases, it could face tricky questions about how to square its decision with the outcome in the Gonzalez case.
For example, if the court decides in the Gonzalez case that platforms can be held liable for hosting some types of user posts or promoting them through their algorithms, that outcome would sit uneasily alongside laws like Florida’s and Texas’ that compel platforms to carry content. Forced carriage is “in some tension with the notion that providers are potentially liable for third-party content,” said Samir Jain, vice president of policy at the Center for Democracy and Technology, a nonprofit that has received funding from tech companies including Google and Amazon.
“Because if on the one hand, you say, ‘Well, if you carry terrorist-related content or you carry certain other content, you’re potentially liable for it.’ And they then say, ‘But states can force you to carry that content.’ There’s some tension there between those two kinds of positions,” Jain said. “And so I think the court has to think of the cases holistically in terms of what kind of regime overall it’s going to be creating for online service providers.”
The NetChoice cases against red states Florida and Texas, and the blue state of California, also show how disagreements over how speech should be regulated on the internet are not constrained by ideological lines. The laws threaten to divide the country into states that require more messages to be left up and others that require more posts to be taken down or restricted in reach.
Under such a system, tech companies “would be forced to go to any common denominator that exists,” according to Chris Marchese, counsel at NetChoice.
“I have a feeling though that what really would end up happening is that you could probably boil down half the states into a, ‘we need to remove more content regime,’ and then the other half would more or less go into, ‘we need to leave more content up’ regime,” Marchese said. “Those two regimes really cannot be harmonized. And so I think that to the extent that it’s possible, we could see an internet that does not function the same from state to state.”
Critics of the California law have also warned that the legislation threatens to further cut off vulnerable kids and teens from important information based on the whims of the state’s enforcement, at a time when access to resources for LGBTQ youth is already limited through measures like Florida’s Parental Rights in Education law, referred to by critics as the “Don’t Say Gay” law, which restricts how schools can teach about gender identity or sexual orientation in early grades.
NetChoice alleged in its lawsuit against the California law that blogs and discussion forums for mental health, sexuality, religion and more could fall under the scope of the law if they are likely to be accessed by kids. It also claimed the law would violate platforms’ own First Amendment right to editorial discretion and “impermissibly restricts how publishers may address or promote content that a government censor thinks unsuitable for minors.”
Jim Steyer, CEO of Common Sense Media, which has advocated for the California law and other measures to protect kids online, criticized arguments from tech-backed groups against the legislation. Though he acknowledged critiques from outside groups as well, he warned that it’s important not to let “perfect be the enemy of the good.”
“We’re in the business of trying to get stuff done concretely for kids and families,” Steyer said. “And it’s easy to make intellectual arguments. It’s a lot tougher sometimes to get stuff done.”
How degrading 230 protections could change the internet
A YouTube logo seen at the YouTube Space LA in Playa Del Rey, Los Angeles, California, United States October 21, 2015.
Lucy Nicholson | Reuters
Although the courts could rule in a variety of ways in these cases, any chipping away at Section 230 protections will likely have tangible effects on how internet companies operate.
Google, in its brief filed with the Supreme Court on Jan. 12, warned that denying Section 230 protections to YouTube in the Gonzalez case “could have devastating spillover effects.”
“Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user,” Google wrote. It added that if tech platforms could be sued, without Section 230 protection, for how they organize information, “the internet would devolve into a disorganized mess and a litigation minefield.”
Google said such a change would also make the internet less safe and less hospitable to free expression.
“Without Section 230, some websites would be forced to overblock, filtering content that could create any potential legal risk, and might shut down some services altogether,” General Counsel Halimah DeLaine Prado wrote in a blog post summarizing Google’s position. “That would leave consumers with less choice to engage on the internet and less opportunity to work, play, learn, shop, create, and participate in the exchange of ideas online.”
Miers of Chamber of Progress said that even if Google technically wins at the Supreme Court, it’s possible justices try to “split the baby” in establishing a new test of when Section 230 protections should apply, like in the case of algorithms. A result like that would effectively undermine one of the main functions of the law, according to Miers, which is the ability to swiftly end lawsuits against platforms that involve hosting third-party content.
If the court tries to draw such a distinction, Miers said, “now we’re going to get in a situation where [in] every case, plaintiffs bringing their cases against internet services are going to always try to frame it as being on the other side of the line that the Supreme Court sets up. And then there’s going to be a lengthy discussion of the courts asking, well, does Section 230 even apply in this case? But once we get to that lengthy discussion, the entire procedural benefits of 230 have been mooted at that point.”
Miers added that platforms could also opt to display mostly posts from professional content creators, rather than amateurs, to maintain a level of control over the information they could be at risk for promoting.
The impact on online communities could be especially profound for marginalized groups. Civil society groups that spoke with CNBC doubted for-profit companies would invest in increasingly sophisticated moderation systems to navigate a riskier legal landscape in a more nuanced way.
“It’s much cheaper from a compliance point of view to just censor everything,” said Brody of the Lawyers’ Committee. “I mean, these are for-profit companies, they’re going to look at, what is the most cost-effective way for us to reduce our legal liability? And the answer to that is not going to be investing billions and billions of dollars into trying to improve content moderation systems that are frankly already broken. The answer is going to be, let’s just crank up the dial on the AI that automatically censors stuff so that we have a Disneyland rule. Everything’s happy and nothing bad ever happens. But to do that, you’re going to censor a lot of underrepresented voices in a way that is really going to have outsized censorship impacts on them.”
The Supreme Court of the United States building is seen in Washington, D.C., United States on December 28, 2022.
Celal Gunes | Anadolu Agency | Getty Images
The idea that some business models will become simply too risky to operate under a more limited liability shield is not theoretical.
After Congress passed SESTA-FOSTA, which carved out an exception for liability protection in cases of sex trafficking, options to advertise sex work online became more limited due to the liability risk. While some might view that as a positive change, many sex workers have argued it removed a safer option for making money compared to soliciting work in person.
Lawmakers who’ve sought to alter Section 230 seem to think there is a “magical lever” they can pull that will “censor all the bad stuff from the internet and leave up all the good stuff,” according to Evan Greer, director of Fight for the Future, a digital rights advocacy group.
“The reality is that when we subject platforms to liability for user-generated content, no matter how well-intentioned the effort is or no matter how it’s framed, what ends up happening is not that platforms moderate more responsibly or more thoughtfully,” Greer said. “They moderate in whatever way their risk-averse lawyers tell them to, to avoid getting sued.”
“So if the court were to say that you could be potentially liable for quote, unquote, recommending third-party content or for your algorithms displaying third-party content, because it’s so difficult to moderate in a totally perfect way, one response might be to take down a lot of speech or to block a lot of speech,” Jain said.
Miers fears that if different states enact their own laws seeking to place limits on Section 230 as Florida and Texas have, companies will end up adhering to the strictest state’s law for the rest of the country. That could result in restrictions on the kind of content most likely to be considered controversial in that state, such as resources for LGBTQ youth when such information isn’t considered age-appropriate, or reproductive care in a state that has abortion restrictions.
Should the Supreme Court end up degrading 230 protections and allowing a fragmented legal system to persist for content moderation, Miers said it could be a spark for Congress to address the new challenges, noting that Section 230 itself came out of two bipartisan lawmakers’ recognition of new legal complexities presented by the existence of the internet.
“Maybe we have to sort of relive that history and realize that oh, well, we’ve made the regulatory environment so convoluted that it’s risky again to host user-generated content,” Miers said. “Yeah, maybe Congress needs to act.”
Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in the Hart Building on Thursday, May 8, 2025.
Tom Williams | CQ-Roll Call, Inc. | Getty Images
In a sweeping interview last week, OpenAI CEO Sam Altman addressed a plethora of moral and ethical questions regarding his company and the popular ChatGPT AI model.
“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” Altman told former Fox News host Tucker Carlson in a nearly hour-long interview.
“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, though he admitted “maybe we will get those wrong too.”
Rather, he said he loses the most sleep over the “very small decisions” on model behavior, which can ultimately have big repercussions.
These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn’t answer. Here’s an outline of some of those moral and ethical dilemmas that appear to be keeping Altman awake at night.
The CEO said that out of the thousands of people who die by suicide each week, many of them may have been talking to ChatGPT in the lead-up.
“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.”
Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”
Soon after, in a blog post titled “Helping people when they need it most,” OpenAI detailed plans to address ChatGPT’s shortcomings when handling “sensitive situations,” and said it would keep improving its technology to protect people who are at their most vulnerable.
How are ChatGPT’s ethics determined?
Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards.
While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won’t answer.
“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.”
When pressed on how certain model specifications are decided, Altman said the company had consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.”
An example he gave of a model specification made was that ChatGPT will avoid answering questions on how to make biological weapons if prompted by users.
“There are clear examples of where society has an interest that is in significant tension with user freedom,” Altman said, though he added the company “won’t get everything right, and also needs the input of the world” to help make these decisions.
How private is ChatGPT?
Another big discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for “totalitarian control.”
In response, Altman said one piece of policy he has been pushing for in Washington is “AI privilege,” which refers to the idea that anything a user says to a chatbot should be completely confidential.
“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.”
According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.
“I think I feel optimistic that we can get the government to understand the importance of this,” he said.
Will ChatGPT be used in military operations?
Asked by Carlson if ChatGPT would be used by the military to harm humans, Altman didn’t provide a direct answer.
“I don’t know the way that people in the military use ChatGPT today… but I suspect there’s a lot of people in the military talking to ChatGPT for advice.”
Later, he added that he wasn’t sure “exactly how to feel about that.”
OpenAI was one of the AI companies that received a $200 million contract from the U.S. Department of Defense to put generative AI to work for the U.S. military. The firm said in a blog post that it would provide the U.S. government access to custom AI models for national security, support and product roadmap information.
Just how powerful is OpenAI?
Carlson, in his interview, predicted that on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a “religion.”
In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in “a huge up leveling” of all people.
“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.”
However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short-term.
China is one of Nvidia’s largest markets, particularly for data centers, gaming and artificial intelligence applications.
Avishek Das | Lightrocket | Getty Images
China’s market regulator on Monday said that Nvidia violated the country’s anti-monopoly law, according to a preliminary probe, adding that Beijing would continue its investigation into the U.S. chip giant.
Shares of Nvidia were down around 2% in premarket trading.
Late last year, China’s State Administration for Market Regulation (SAMR) opened an investigation into Nvidia in relation to its acquisition of Mellanox and some agreements made during the acquisition. Nvidia acquired Mellanox, an Israeli technology company that makes networking solutions for data centers and servers, in 2020, in a deal that was approved by China at the time with certain conditions.
In a preliminary investigation, the SAMR said Nvidia had violated China’s anti-monopoly laws in relation to that acquisition and its conditions. China’s market regulator did not specify how Nvidia allegedly breached the country’s laws.
CNBC has reached out to Nvidia for comment.
The update from the SAMR has the potential to complicate trade talks between Chinese and U.S. officials that began on Sunday in Madrid, Spain.
Tensions between Beijing and Washington appear to be on the rise on the technology front. China opened two separate probes into semiconductors on Saturday: one is an anti-dumping investigation into certain chips imported from the U.S., while the other examines whether U.S. restrictions on China’s chip industry are discriminatory.
Specialist traders work at the post for Swedish fintech Klarna, during the company’s IPO at the New York Stock Exchange in New York City, U.S., Sept. 10, 2025.
Klarna popped as much as 30% on the day of its New York IPO before settling to close around 15% higher. The stock has since slipped to $42.92 as of Friday, but remains about 7% above its IPO price of $40.
The debut demonstrated how Wall Street is becoming more welcoming of bumper fintech listings. Prior to Klarna, online trading platform eToro, stablecoin issuer Circle and crypto exchange Bullish all went public to a positive first-day reception.
Gemini, the crypto exchange founded by Cameron and Tyler Winklevoss, surged 14% in its IPO Friday.
“I think the Klarna IPO would be viewed positively by some of the other scaled-up vendors,” Gautam Pillai, head of fintech research at British investment bank Peel Hunt, told CNBC.
There’s a crowded pipeline of fintech names that could be next to IPO after Klarna. CNBC examines which companies look the most promising.
Stripe
Patrick Collison, chief executive officer and co-founder of Stripe Inc., left, smiles as John Collison, president and co-founder of Stripe Inc., speaks during a Bloomberg Studio 1.0 television interview in San Francisco, California, U.S., on Friday, March 23, 2018.
Bloomberg | Bloomberg | Getty Images
Digital payments firm Stripe has for years been viewed as an IPO contender. Stripe has remained a private company in the 15 years since it was founded, and founders and brothers John and Patrick Collison have long resisted pressure to take the business public.
However, that doesn’t mean a stock market listing hasn’t been on Stripe’s mind. The Collisons told employees in 2023 that Stripe would decide to either go public or allow employees to sell shares via a secondary offering within the next year.
Ultimately, Stripe in January opted for a secondary share sale valuing the company at $91.5 billion — close to its peak valuation of $95 billion, which it achieved in 2021.
That doesn’t mean Stripe couldn’t still pursue a stock market debut further down the line. Many fintech unicorn CEOs have been keeping a close eye on Klarna’s IPO performance for signs of when will be the right moment to list.
Revolut
Revolut CEO Nikolay Storonsky at the Web Summit in Lisbon, Portugal, Nov. 7, 2019.
Pedro Nunes | Reuters
Revolut is widely seen as a potential future fintech IPO candidate. The digital banking unicorn told CNBC last week that it recently gave employees the chance to sell shares on the secondary market at a whopping $75 billion valuation, placing it above some major U.K. banks by market value.
“As part of our commitment to our employees, we regularly provide opportunities for them to gain liquidity,” a Revolut spokesperson told CNBC at the time. “An employee secondary share sale is currently in process, and we won’t be commenting further until it is complete.”
The secondary round buys Revolut some time to remain private for longer while still offering staff the chance to exit some of their holdings. At the same time, though, it now makes Revolut one of the world’s most valuable private fintech firms.
As to where Revolut lists, for now the U.S. appears the likeliest location.
Co-founder and CEO Nikolay Storonsky has spoken candidly about his preference to list in the U.S. due to issues with London’s IPO market. Last year, he told the 20VC podcast that it was “just not rational” to go public in the U.K.
Monzo
Monzo CEO TS Anil.
Monzo
Having recently reached a $5.9 billion valuation in a secondary share sale, British digital bank Monzo is another contender for the public markets.
A Sky News report earlier this year said Monzo had lined up bankers to work on an IPO that could take place as early as the first half of 2026.
However, in a fireside discussion moderated by CNBC at SXSW London, Monzo CEO TS Anil said that an IPO is “not the thing we’re focused on right now” — it’s worth noting though that this was back in June.
“The thing we’re focused on is scale the business, continue to grow it, double it again, reach more customers, build more products, continue to drive great economic outcomes on the back of that,” Anil said at the time.
Anil wouldn’t comment on where Monzo would list if it were to IPO, but he stressed the firm was “deeply committed” to being globally headquartered in London.
Starling Bank
Raman Bhatia, incoming chief executive officer of Starling. Bhatia moved over from OVO Energy Ltd., where he was CEO.
Zed Jameson | Bloomberg | Getty Images
Monzo’s rival neobank Starling Bank has reportedly been considering an initial public offering in the U.S. as part of expansion plans there.
On Thursday, Bloomberg reported that Starling had hired Jody Bhagat, former president of global banking at software firm Personetics Technologies, to lead the growth of its Engine technology unit in the U.S.
Starling declined to comment when asked by CNBC about its listing plans.
Last year, Starling’s CEO Raman Bhatia talked up the bank’s plans to expand globally via Engine, a software platform that Starling sells to other companies so they can set up their own digital banks.
“I am very bullish about this approach around internationalization of what is the best of Starling — the proprietary tech,” Bhatia said during a fireside chat at the Money 20/20 conference moderated by CNBC.
Payhawk
Though a lesser-known name, Bulgaria-founded fintech firm Payhawk also has IPO ambitions.
The spend management platform was valued at $1 billion in 2022 and saw revenue surge 85% year-over-year in 2024 to 23.4 million euros ($27.4 million).
“We’re definitely seeing the IPO window open,” Payhawk CEO and co-founder Hristo Borisov told CNBC in an interview earlier this month. However, he stressed that “we are looking at more of a five-year horizon there.”
“If you look at the majority of the IPOs, the majority of those IPOs are companies with $400 million to $500 million-plus ARR [annual recurring revenue],” Borisov said. “That’s our goal.”
Some honorable mentions
There are other fintechs that look like potential IPO contenders further down the line — but the trajectory looks less clear.
Blockchain firm Ripple’s CEO Brad Garlinghouse told CNBC in January last year that the company explored markets outside the U.S. for its IPO due to an aggressive crypto enforcement regime under ex-Securities and Exchange Commission chief Gary Gensler.
That could change now thanks to President Donald Trump’s pro-crypto stance. Garlinghouse said last year though that Ripple had put any plans for an IPO on hold. The startup was most recently valued at $15 billion.
Germany’s N26 is another potential IPO contender. The digital bank was valued at $9 billion in a 2021 funding round.
However, it has faced some setbacks. N26 co-founder Valentin Stalf recently stepped down as CEO after facing pressure from investors over regulatory failings.