When Elon Musk announced his offer to buy Twitter for more than $40 billion, he told the public his vision for the social media site was to make sure it’s “an inclusive arena for free speech.”
The saga of Musk’s Twitter takeover has underscored the complexity of determining what speech is truly protected. That question is particularly difficult when it comes to online platforms, which create policies that impact wide swaths of users from different cultures and legal systems across the world.
This year, the U.S. justice system, including the Supreme Court, will take on cases that will help determine the bounds of free expression on the internet in ways that could force the hand of Musk and other platform owners who determine what messages get distributed widely.
The boundaries they will consider include the extent of platforms’ responsibility to remove terrorist content and prevent their algorithms from promoting it, whether social media sites can take down messaging on the basis of viewpoint and whether the government can impose online safety standards that some civil society groups fear could lead to important resources and messages being stifled to avoid legal liability.
“The question of free speech is always more complicated than it looks,” said David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. “There’s a freedom to speak freely. But there’s also the freedom to be free from harassment, to be free from discrimination.”
Brody said whenever the parameters of content moderation get tweaked, people need to consider “whose speech gets silenced when that dial gets turned? Whose speech gets silenced because they are too fearful to speak out in the new environment that is created?”
Tech’s liability shield under threat
Facebook’s new rebranded logo, Meta, is seen on a smartphone in front of the displayed logos of Facebook, Messenger, Instagram, WhatsApp and Oculus in this illustration picture taken October 28, 2021.
Dado Ruvic | Reuters
Section 230 of the Communications Decency Act has been a bedrock of the tech industry for more than two decades. The law grants a liability shield to internet platforms that protects them from being held responsible for their users’ posts, while also allowing them to decide what stays up or comes down.
But while industry leaders say it’s what has allowed online platforms to flourish and innovate, lawmakers on both sides of the aisle have increasingly pushed to diminish its protections for the multi-billion dollar companies, with many Democrats wanting platforms to remove more hateful content and Republicans wanting to leave up more posts that align with their views.
Section 230 protection makes it easier for platforms to allow users to post their views without the companies themselves fearing they could be held responsible for those messages. It also gives the platforms peace of mind that they won’t be penalized if they want to remove or demote information they deem to be harmful or objectionable in some way.
These are the cases that threaten to undermine Section 230’s force:
Gonzalez v. Google: This is the Supreme Court case with the potential to alter the most popular business models of the internet that currently allow for a largely free-flowing stream of posts. The case, brought by the family of an American who was killed in a 2015 terrorist attack in Paris, seeks to determine whether Section 230 can shield Google from liability under the Anti-Terrorism Act (ATA) for allegedly aiding and abetting ISIS by promoting videos created by the terrorist organization through its recommendation algorithm. If the court significantly increases the liability risk for platforms using algorithms, the services may choose to abandon them or greatly diminish their use, thereby changing the way content can be found or go viral on the internet. It will be heard by the Supreme Court in February.
Twitter v. Taamneh: This Supreme Court case doesn’t directly involve Section 230, but its outcome could still impact how platforms choose to moderate information on their services. The case, also brought under the ATA and likewise scheduled for argument before the Supreme Court in February, deals with the question of whether Twitter should have taken more aggressive action against terrorist content given that it already moderates posts on its site. Jess Miers, legal advocacy counsel at the tech-backed group Chamber of Progress, said a ruling against Twitter in the case could create an “existential question” for tech companies by forcing them to rethink whether monitoring for terrorist content at all creates legal knowledge that it exists, which could later be used against them in court.
Challenges to Florida and Texas social media laws: Another set of cases deals with the question of whether services should be required to host more content of certain kinds. Two tech industry groups, NetChoice and the Computer & Communications Industry Association, filed suit against the states of Florida and Texas over their laws seeking to prevent online platforms from discriminating on their services based on viewpoint. The groups argue that the laws effectively violate the businesses’ First Amendment rights by forcing them to host objectionable messages even if they violate the company’s own terms of service, policies or beliefs. The Supreme Court has yet to decide if or when to hear the cases, though many expect it will take them up at some point.
Tech challenge to California’s kids online safety law: Separately, NetChoice also filed suit against California for a new law there that aims to make the internet safer for kids, but that the industry group says would unconstitutionally restrict speech. The Age-Appropriate Design Code requires internet platforms that are likely to be accessed by kids to mitigate risks to those users. But in doing so, NetChoice has argued the state imposed an overly vague rule subject to the whims of what the attorney general deems to be appropriate. The group said the law will create “overwhelming pressure to over-moderate content to avoid the law’s penalties for content the State deems harmful,” which will “stifle important resources, particularly for vulnerable youth who rely on the Internet for life-saving information.” This case is still at the district court level.
The tension between the cases
Getty Images
The variety in these cases involving speech on the internet underscores the complexity of regulating the space.
“On the one hand, in the NetChoice cases, there’s an effort to get platforms to leave stuff up,” said Jennifer Granick, surveillance and cybersecurity counsel at the ACLU Speech, Privacy, and Technology Project. “And then the Taamneh and the Gonzalez case, there’s an effort to get platforms to take more stuff down and to police more thoroughly. You kind of can’t do both.”
If the Supreme Court ultimately decides to hear arguments in the Texas or Florida social media law cases, it could face tricky questions about how to square its decision with the outcome in the Gonzalez case.
For example, if the court decides in the Gonzalez case that platforms can be held liable for hosting some types of user posts or promoting them through their algorithms, that holding would be in some tension with the Florida and Texas laws, which would compel providers to carry third-party content, said Samir Jain, vice president of policy at the Center for Democracy and Technology, a nonprofit that has received funding from tech companies including Google and Amazon.
“Because if on the one hand, you say, ‘Well, if you carry terrorist-related content or you carry certain other content, you’re potentially liable for it.’ And they then say, ‘But states can force you to carry that content.’ There’s some tension there between those two kinds of positions,” Jain said. “And so I think the court has to think of the cases holistically in terms of what kind of regime overall it’s going to be creating for online service providers.”
The NetChoice cases against red states Florida and Texas, and the blue state of California, also show how disagreements over how speech should be regulated on the internet are not constrained by ideological lines. The laws threaten to divide the country into states that require more messages to be left up and others that require more posts to be taken down or restricted in reach.
Under such a system, tech companies “would be forced to go to any common denominator that exists,” according to Chris Marchese, counsel at NetChoice.
“I have a feeling though that what really would end up happening is that you could probably boil down half the states into a, ‘we need to remove more content regime,’ and then the other half would more or less go into, ‘we need to leave more content up’ regime,” Marchese said. “Those two regimes really cannot be harmonized. And so I think that to the extent that it’s possible, we could see an internet that does not function the same from state to state.”
Critics of the California law have also warned that, at a time when access to resources for LGBTQ youth is already limited through measures like Florida’s Parental Rights in Education law (referred to by critics as the “Don’t Say Gay” law, which limits how schools can teach about gender identity or sexual orientation in young grades), the legislation threatens to further cut off vulnerable kids and teens from important information based on the whims of the state’s enforcement.
NetChoice alleged in its lawsuit against the California law that blogs and discussion forums covering mental health, sexuality, religion and more could fall within the law’s scope if they are likely to be accessed by kids. It also claimed the law would violate platforms’ own First Amendment right to editorial discretion and “impermissibly restricts how publishers may address or promote content that a government censor thinks unsuitable for minors.”
Jim Steyer, CEO of Common Sense Media, which has advocated for the California law and other measures to protect kids online, criticized arguments from tech-backed groups against the legislation. Though he acknowledged critiques from outside groups as well, he warned that it’s important not to let “perfect be the enemy of the good.”
“We’re in the business of trying to get stuff done concretely for kids and families,” Steyer said. “And it’s easy to make intellectual arguments. It’s a lot tougher sometimes to get stuff done.”
How degrading 230 protections could change the internet
A YouTube logo seen at the YouTube Space LA in Playa Del Rey, Los Angeles, California, United States October 21, 2015.
Lucy Nicholson | Reuters
Although the courts could rule in a variety of ways in these cases, any chipping away at Section 230 protections will likely have tangible effects on how internet companies operate.
Google, in its brief filed with the Supreme Court on Jan. 12, warned that denying Section 230 protections to YouTube in the Gonzalez case “could have devastating spillover effects.”
“Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user,” Google wrote. It added that if tech platforms were able to be sued without Section 230 protection for how they organize information, “the internet would devolve into a disorganized mess and a litigation minefield.”
Google said such a change would also make the internet less safe and less hospitable to free expression.
“Without Section 230, some websites would be forced to overblock, filtering content that could create any potential legal risk, and might shut down some services altogether,” General Counsel Halimah DeLaine Prado wrote in a blog post summarizing Google’s position. “That would leave consumers with less choice to engage on the internet and less opportunity to work, play, learn, shop, create, and participate in the exchange of ideas online.”
Miers of Chamber of Progress said that even if Google technically wins at the Supreme Court, it’s possible the justices will try to “split the baby” by establishing a new test for when Section 230 protections should apply, such as in the case of algorithms. A result like that would effectively undermine one of the main functions of the law, according to Miers: the ability to swiftly end lawsuits against platforms that involve hosting third-party content.
If the court tries to draw such a distinction, Miers said, “now we’re going to get in a situation where plaintiffs bringing their cases against internet services are going to always try to frame it as being on the other side of the line that the Supreme Court sets up. And then there’s going to be a lengthy discussion of the courts asking, well, does Section 230 even apply in this case? But once we get to that lengthy discussion, the entire procedural benefits of 230 have been mooted at that point.”
Miers added that platforms could also opt to display mostly posts from professional content creators, rather than amateurs, to maintain a level of control over the information they could be at risk for promoting.
The impact on online communities could be especially profound for marginalized groups. Civil society groups who spoke with CNBC doubted that for-profit companies would spend on increasingly complex models to navigate a risky legal field in a more nuanced way.
“It’s much cheaper from a compliance point of view to just censor everything,” said Brody of the Lawyers’ Committee. “I mean, these are for-profit companies, they’re going to look at, what is the most cost-effective way for us to reduce our legal liability? And the answer to that is not going to be investing billions and billions of dollars into trying to improve content moderation systems that are frankly already broken. The answer is going to be, let’s just crank up the dial on the AI that automatically censors stuff so that we have a Disneyland rule. Everything’s happy and nothing bad ever happens. But to do that, you’re going to censor a lot of underrepresented voices in a way that is really going to have outsized censorship impacts on them.”
The Supreme Court of the United States building is seen in Washington, D.C., United States on December 28, 2022.
Celal Gunes | Anadolu Agency | Getty Images
The idea that some business models will become simply too risky to operate under a more limited liability shield is not theoretical.
After Congress passed SESTA-FOSTA, which carved out an exception for liability protection in cases of sex trafficking, options to advertise sex work online became more limited due to the liability risk. While some might view that as a positive change, many sex workers have argued it removed a safer option for making money compared to soliciting work in person.
Lawmakers who’ve sought to alter Section 230 seem to think there is a “magical lever” they can pull that will “censor all the bad stuff from the internet and leave up all the good stuff,” according to Evan Greer, director of Fight for the Future, a digital rights advocacy group.
“The reality is that when we subject platforms to liability for user-generated content, no matter how well-intentioned the effort is or no matter how it’s framed, what ends up happening is not that platforms moderate more responsibly or more thoughtfully,” Greer said. “They moderate in whatever way their risk-averse lawyers tell them to, to avoid getting sued.”
“So if the court were to say that you could be potentially liable for quote, unquote, recommending third-party content or for your algorithms displaying third-party content, because it’s so difficult to moderate in a totally perfect way, one response might be to take down a lot of speech or to block a lot of speech,” Jain said.
Miers fears that if different states enact their own laws seeking to place limits on Section 230 as Florida and Texas have, companies will end up adhering to the strictest state’s law for the rest of the country. That could result in restrictions on the kind of content most likely to be considered controversial in that state, such as resources for LGBTQ youth when such information isn’t considered age-appropriate, or reproductive care in a state that has abortion restrictions.
Should the Supreme Court end up degrading 230 protections and allowing a fragmented legal system to persist for content moderation, Miers said it could be a spark for Congress to address the new challenges, noting that Section 230 itself came out of two bipartisan lawmakers’ recognition of new legal complexities presented by the existence of the internet.
“Maybe we have to sort of relive that history and realize that oh, well, we’ve made the regulatory environment so convoluted that it’s risky again to host user-generated content,” Miers said. “Yeah, maybe Congress needs to act. ”
Silicon Valley executives and financiers publicly opened their wallets in support of President Donald Trump’s 2024 presidential run. The early returns in 2025 aren’t great, to say the least.
Following Trump’s sweeping tariff plan announced Wednesday, the Nasdaq suffered steep consecutive daily drops to finish 10% lower for the week, the index’s worst performance since the beginning of the Covid pandemic in 2020.
The tech industry’s leading CEOs rushed to contribute to Trump’s inauguration in January and paraded to Washington, D.C., for the event. Since then, it’s been a slog.
The market can always turn around, but economists and investors aren’t optimistic, and concerns are building of a potential recession. The seven most valuable U.S. tech companies lost a combined $1.8 trillion in market cap in two days.
Apple slid 14% for the week, its biggest drop in more than five years. Tesla, led by top Trump adviser Elon Musk, plunged 9.2% and is now down more than 40% for the year. Musk contributed close to $300 million to help propel Trump back to the White House.
Nvidia, Meta and Amazon all suffered double-digit drops for the week. For Amazon, a ninth straight weekly decline marks its longest such losing streak since 2008.
With Wall Street selling out of risky assets on concern that widespread tariff hikes will punish the U.S. and global economy, the fallout has drifted down to the IPO market. Online lender Klarna and ticketing marketplace StubHub delayed their IPOs due to market turbulence, just weeks after filing with the Securities and Exchange Commission, and fintech company Chime is also reportedly delaying its listing.
CoreWeave, a provider of artificial intelligence infrastructure, last week became the first venture-backed company to raise more than $1 billion in a U.S. IPO since 2021. But the company slashed its offering, and trading has been very volatile in its opening days on the market. The stock plunged 12% on Friday, leaving it 17% above its offer price but below the bottom of its initial range.
“You couldn’t create a worse market and macro environment to go public,” said Phil Haslett, co-founder of EquityZen, a platform for investing in private companies. “Way too much turbulence. All flights are grounded until further notice.”
CoreWeave investor Mark Klein of SuRo Capital previously told CNBC that the company could be the first in an “IPO parade.” Now he’s backtracking.
“It appears that the IPO parade has been temporarily halted,” Klein told CNBC by email on Friday. “The current tariff situation has prompted these companies to pause and assess its impact.”
‘Cave rapidly’
During last year’s presidential campaign, prominent venture capitalists like Marc Andreessen backed Trump, expecting that his administration would usher in a boom and eliminate some of the hurdles to startup growth set up by the Biden administration. Andreessen and his partner, Ben Horowitz, said in July that their financial support of the Trump campaign was due to what they called a better “little tech agenda.”
A spokesperson for Andreessen Horowitz declined to comment.
Some techies who supported Trump in the campaign have taken to social media to defend their positions.
Venture capitalist Keith Rabois, a managing director at Khosla Ventures, posted on X on Thursday that “Trump Derangement Syndrome has morphed into Tariff Derangement Syndrome.” He said tariffs aren’t inflationary, are effective at reducing fentanyl imports, and he expects that “most other countries will cave and cave rapidly.”
That was before China’s Finance Ministry said on Friday that it will impose a 34% tariff on all goods imported from the U.S. starting on April 10.
At Sequoia Capital, which is the biggest investor in Klarna, outspoken Trump supporter Shaun Maguire wrote on X, “The first long-term thinking President of my lifetime,” and said in a separate post that “The price of stocks says almost nothing about the long term health of an economy.”
However, Allianz Chief Economic Advisor Mohamed El-Erian warned on Friday that Trump’s extensive raft of import tariffs are putting the U.S. economy at risk of recession.
“You’ve had a major repricing of growth prospects, with a recession in the U.S. going up to 50% probability, you’ve seen an increase in inflation expectations, up to 3.5%,” he told CNBC’s Silvia Amaro on the sidelines of the Ambrosetti Forum in Cernobbio, Italy.
Former Microsoft CEOs Bill Gates, left, and Steve Ballmer, center, pose for photos with CEO Satya Nadella during an event celebrating the 50th Anniversary of Microsoft on April 4, 2025 in Redmond, Washington.
Stephen Brashear | Getty Images
Meanwhile, executives at tech’s megacap companies were largely silent this week, and their public relations representatives declined to provide comments about their thinking.
Microsoft CEO Satya Nadella was in the awkward position on Friday of celebrating his company’s 50th anniversary at corporate headquarters in Redmond, Washington. Alongside Microsoft’s prior two CEOs, Bill Gates and Steve Ballmer, Nadella sat down with CNBC’s Andrew Ross Sorkin for a televised interview that was planned well before Trump’s tariff announcement.
When asked about the tariffs at the top of the interview, Nadella effectively dodged the question and avoided expressing his views about whether the new policies will hamper Microsoft’s business.
Ballmer, who was succeeded by Nadella in 2014, acknowledged to Sorkin that “disruption is very hard on people” and that, “as a Microsoft shareholder, this kind of thing is not good.” Ballmer and Gates are two of the 12 wealthiest people in the world thanks to their Microsoft fortunes.
C-suites may not be able to stay quiet for long, especially if the recent turmoil spills into next week.
Lise Buyer, who previously helped guide Google through its IPO and now works as an adviser to companies going public, said there’s no appetite for risk in the market under these conditions. But there is risk that staffers get jittery, and they’ll surely look to their leaders for some reassurance.
“Until markets settle out and we have the opportunity to assess valuation levels, public company CEOs should work to calm potentially distressed employees,” Buyer said in an email. “And private company managements should refine plans to get by on dollars already in the treasury.”
— CNBC’s Hayden Field, Jordan Novet, Leslie Picker, Annie Palmer and Samantha Subin contributed to this report.
Elon Musk has been promising investors for about a decade that Tesla’s cars are on the verge of turning into robotaxis, capable of driving themselves cross-country, after one big software update.
That hasn’t happened yet.
What Tesla offers is a sophisticated, but only partially automated, driving system that’s marketed in the U.S. as its Full Self-Driving (Supervised) option, though many Tesla fans refer to it as FSD. In China, Tesla recently changed the system’s name to “intelligent assisted driving.”
Full Self-Driving, as it was previously called, relies on cameras and software to enable features like automatic navigation on highways and city streets, or automatic braking and slowing in response to traffic lights and stop signs.
Tesla owner’s manuals warn users that FSD “is a hands-on feature” that requires them to pay attention to the road at all times. “Keep your hands on the steering wheel at all times, be mindful of road conditions and surrounding traffic,” the manuals say.
But many of Tesla’s customers ignore the fine print and use the system hands-free anyway.
Tesla’s partially automated driving systems have been a source of inspiration for its stalwart fans. But they’ve also caused controversy and concern for public safety after reports of injurious and fatal collisions where Tesla’s standard Autopilot or premium FSD systems were known to be in use.
FSD does a lot of things “amazingly well,” said Guy Mangiamele, a professional test driver for automotive consulting firm AMCI Testing, during a recent long drive in Los Angeles. But he added that “the times that it trips up, you could kill somebody or you could hurt yourself.”
The pressure has never been higher on Tesla to elevate the technology and deliver on Musk’s long-delayed promises.
The Tesla CEO is the wealthiest person in the world and was the biggest financial backer of President Donald Trump’s 2024 campaign. Since Trump’s January inauguration, Musk has been leading the administration’s Department of Government Efficiency effort to drastically slash the federal workforce and government spending.
The DOGE team has been connected to more than 280,000 layoff plans for federal workers and contractors impacting 27 agencies over the last two months, according to data tracked by Challenger, Gray & Christmas, the executive outplacement firm.
Musk’s work with DOGE – along with his frequently incendiary political rhetoric and endorsement of Germany’s far-right, anti-immigrant party AfD – has led to a tremendous backlash against Tesla.
Protests, boycotts and even criminal acts of vandalism have targeted the electric vehicle maker in recent months and led many prospective Tesla customers to turn to other brands. Meanwhile, existing Tesla owners have been trading in their EVs at record levels, according to data from Edmunds.
Tesla’s stock dropped 36% through the first three months of 2025, representing its steepest decline since 2022 and third-biggest slide for any quarter since the EV maker went public in June 2010. Tesla also reported 336,681 vehicle deliveries in the first quarter of 2025, a 13% decline from the same period a year ago.
Product unveilings and a “robotaxi launch” expected from Tesla in Austin, Texas, this year could revitalize investors’ sentiment about the company and hopefully lift its share price, Piper Sandler analysts wrote in a note following the worse-than-expected deliveries report.
On Tesla’s last earnings call, Musk promised investors that Tesla will finally start its driverless ride-hailing service in Austin in June.
To see whether the company’s FSD technology is anywhere close to a robotaxi-ready release, CNBC spent months riding along with Tesla owners who use Full Self-Driving (Supervised) and speaking with automotive safety experts about their impressions.
Auto-tech enthusiast and Tesla owner Chris Lee, host of the YouTube channel EverydayChris, told CNBC that Tesla’s system “definitely has a ways to go, but the fact that it’s able to go from where it was three years ago to today, is insane.”
Many experts, including Telemetry Vice President of Market Research Sam Abuelsamid, remain skeptical. There’s been “no evidence” that FSD is “anywhere close to being ready to be used in an unsupervised form” by June, said Abuelsamid, whose firm specializes in automotive intelligence.
Tesla FSD will “often work really well, particularly in daytime conditions” but then “randomly, in a scenario where it did fine previously, it will fail,” said Abuelsamid, adding that those scenarios can be unpredictable and dangerous.
Watch the video to learn more about the evolution of Tesla’s Full Self-Driving (Supervised) and whether it will be robotaxi-ready this June.
Microsoft owns lots of Nvidia graphics processing units, but it isn’t using them to develop state-of-the-art artificial intelligence models.
There are good reasons for that position, Mustafa Suleyman, the company’s CEO of AI, told CNBC’s Steve Kovach in an interview on Friday. Waiting to build models that are “three or six months behind” offers several advantages, including lower costs and the ability to concentrate on specific use cases, Suleyman said.
It’s “cheaper to give a specific answer once you’ve waited for the first three or six months for the frontier to go first. We call that off-frontier,” he said. “That’s actually our strategy, is to really play a very tight second, given the capital-intensiveness of these models.”
Suleyman made a name for himself as a co-founder of DeepMind, the AI lab that Google bought in 2014, reportedly for $400 million to $650 million. Suleyman arrived at Microsoft last year alongside other employees of the startup Inflection, where he had been CEO.
More than ever, Microsoft counts on relationships with other companies to grow.
It gets AI models from San Francisco startup OpenAI and supplemental computing power from newly public CoreWeave in New Jersey. Microsoft has repeatedly enriched Bing, Windows and other products with OpenAI’s latest systems for writing human-like language and generating images.
Microsoft’s Copilot will gain “memory” to retain key facts about people who repeatedly use the assistant, Suleyman said Friday at an event in Microsoft’s Redmond, Washington, headquarters to commemorate the company’s 50th birthday. That feature came first to OpenAI’s ChatGPT, which has 500 million weekly users.
Through ChatGPT, people can access top-flight large language models such as the o1 reasoning model that takes time before spitting out an answer. OpenAI introduced that capability in September — only weeks later did Microsoft bring a similar capability called Think Deeper to Copilot.
Microsoft occasionally releases open-source small-language models that can run on PCs. They don’t require powerful server GPUs, making them different from OpenAI’s o1.
OpenAI and Microsoft have had a tight relationship since shortly after the startup launched its ChatGPT chatbot in late 2022, effectively kicking off the generative AI race. In total, Microsoft has invested $13.75 billion in the startup, but more recently, fissures in the relationship between the two companies have begun to show.
Microsoft added OpenAI to its list of competitors in July 2024, and OpenAI in January announced that it was working with rival cloud provider Oracle on the $500 billion Stargate project. That came after years of OpenAI exclusively relying on Microsoft’s Azure cloud. Despite OpenAI partnering with Oracle, Microsoft in a blog post announced that the startup had “recently made a new, large Azure commitment.”
“Look, it’s absolutely mission-critical that long-term, we are able to do AI self-sufficiently at Microsoft,” Suleyman said. “At the same time, I think about these things over five and 10 year periods. You know, until 2030 at least, we are deeply partnered with OpenAI, who have [had an] enormously successful relationship for us.”
Microsoft is focused on building its own AI internally, but the company is not pushing itself to build the most cutting-edge models, Suleyman said.
“We have an incredibly strong AI team, huge amounts of compute, and it’s very important to us that, you know, maybe we don’t develop the absolute frontier, the best model in the world first,” he said. “That’s very, very expensive to do and unnecessary to cause that duplication.”