When Elon Musk announced his offer to buy Twitter for more than $40 billion, he told the public his vision for the social media site was to make sure it’s “an inclusive arena for free speech.”
The saga of Musk’s Twitter takeover has underscored the complexity of determining what speech is truly protected. That question is particularly difficult when it comes to online platforms, which create policies that impact wide swaths of users from different cultures and legal systems across the world.
This year, the U.S. justice system, including the Supreme Court, will take on cases that will help determine the bounds of free expression on the internet in ways that could force the hand of Musk and other platform owners who determine what messages get distributed widely.
The boundaries they will consider include how far platforms’ responsibility extends to remove terrorist content and to keep their algorithms from promoting it, whether social media sites can take down messaging on the basis of viewpoint, and whether the government can impose online safety standards that some civil society groups fear could lead platforms to stifle important resources and messages to avoid legal liability.
“The question of free speech is always more complicated than it looks,” said David Brody, managing attorney of the Digital Justice Initiative at the Lawyers’ Committee for Civil Rights Under Law. “There’s a freedom to speak freely. But there’s also the freedom to be free from harassment, to be free from discrimination.”
Brody said whenever the parameters of content moderation get tweaked, people need to consider “whose speech gets silenced when that dial gets turned? Whose speech gets silenced because they are too fearful to speak out in the new environment that is created?”
Tech’s liability shield under threat
Facebook’s new rebrand logo Meta is seen on a smartphone in front of the displayed logos of Facebook, Messenger, Instagram, WhatsApp and Oculus in this illustration picture taken October 28, 2021.
Dado Ruvic | Reuters
Section 230 of the Communications Decency Act has been a bedrock of the tech industry for more than two decades. The law grants a liability shield to internet platforms that protects them from being held responsible for their users’ posts, while also allowing them to decide what stays up or comes down.
But while industry leaders say it’s what has allowed online platforms to flourish and innovate, lawmakers on both sides of the aisle have increasingly pushed to diminish its protections for the multibillion-dollar companies, with many Democrats wanting platforms to remove more hateful content and Republicans wanting to leave up more posts that align with their views.
Section 230 protection makes it easier for platforms to allow users to post their views without the companies themselves fearing they could be held responsible for those messages. It also gives the platforms peace of mind that they won’t be penalized if they want to remove or demote information they deem to be harmful or objectionable in some way.
These are the cases that threaten to undermine Section 230’s force:
Gonzalez v. Google: This is the Supreme Court case with the potential to alter the most popular business models of the internet that currently allow for a largely free-flowing stream of posts. The case, brought by the family of an American who was killed in a 2015 terrorist attack in Paris, seeks to determine whether Section 230 can shield Google from liability under the Anti-Terrorism Act (ATA) for allegedly aiding and abetting ISIS by promoting videos created by the terrorist organization through its recommendation algorithm. If the court significantly increases the liability risk for platforms using algorithms, the services may choose to abandon them or greatly diminish their use, therefore changing the way content can be found or go viral on the internet. It will be heard by the Supreme Court in February.
Twitter v. Taamneh: This Supreme Court case doesn’t directly involve Section 230, but its outcome could still impact how platforms choose to moderate information on their services. The case, also brought under the ATA and set to be heard by the Supreme Court in February, deals with the question of whether Twitter should have taken more aggressive moderating action against terrorist content because it moderates posts on its site. Jess Miers, legal advocacy counsel at the tech-backed group Chamber of Progress, said a ruling against Twitter in the case could create an “existential question” for tech companies by forcing them to rethink whether monitoring for terrorist content at all creates legal knowledge that it exists, which could later be used against them in court.
Challenges to Florida and Texas social media laws: Another set of cases deals with the question of whether services should be required to host more content of certain kinds. Two tech industry groups, NetChoice and the Computer & Communications Industry Association, filed suit against the states of Florida and Texas over their laws seeking to prevent online platforms from discriminating on their services based on viewpoint. The groups argue that the laws effectively violate the businesses’ First Amendment rights by forcing them to host objectionable messages even if they violate the company’s own terms of service, policies or beliefs. The Supreme Court has yet to decide if or when to hear the cases, though many expect it will take them up at some point.
Tech challenge to California’s kids online safety law: Separately, NetChoice also filed suit against California for a new law there that aims to make the internet safer for kids, but that the industry group says would unconstitutionally restrict speech. The Age-Appropriate Design Code requires internet platforms that are likely to be accessed by kids to mitigate risks to those users. But in doing so, NetChoice has argued the state imposed an overly vague rule subject to the whims of what the attorney general deems to be appropriate. The group said the law will create “overwhelming pressure to over-moderate content to avoid the law’s penalties for content the State deems harmful,” which will “stifle important resources, particularly for vulnerable youth who rely on the Internet for life-saving information.” This case is still at the district court level.
The tension between the cases
The variety in these cases involving speech on the internet underscores the complexity of regulating the space.
“On the one hand, in the NetChoice cases, there’s an effort to get platforms to leave stuff up,” said Jennifer Granick, surveillance and cybersecurity counsel at the ACLU Speech, Privacy, and Technology Project. “And then the Taamneh and the Gonzalez case, there’s an effort to get platforms to take more stuff down and to police more thoroughly. You kind of can’t do both.”
If the Supreme Court ultimately decides to hear arguments in the Texas or Florida social media law cases, it could face tricky questions about how to square its decision with the outcome in the Gonzalez case.
For example, if the court decides in the Gonzalez case that platforms can be held liable for hosting some types of user posts or promoting them through their algorithms, that holding would be “in some tension” with laws like Florida’s and Texas’s that would force providers to carry third-party content, said Samir Jain, vice president of policy at the Center for Democracy & Technology, a nonprofit that has received funding from tech companies including Google and Amazon.
“Because if on the one hand, you say, ‘Well, if you carry terrorist-related content or you carry certain other content, you’re potentially liable for it.’ And they then say, ‘But states can force you to carry that content.’ There’s some tension there between those two kinds of positions,” Jain said. “And so I think the court has to think of the cases holistically in terms of what kind of regime overall it’s going to be creating for online service providers.”
The NetChoice cases against red states Florida and Texas, and the blue state of California, also show how disagreements over how speech should be regulated on the internet are not constrained by ideological lines. The laws threaten to divide the country into states that require more messages to be left up and others that require more posts to be taken down or restricted in reach.
Under such a system, tech companies “would be forced to go to any common denominator that exists,” according to Chris Marchese, counsel at NetChoice.
“I have a feeling though that what really would end up happening is that you could probably boil down half the states into a, ‘we need to remove more content regime,’ and then the other half would more or less go into, ‘we need to leave more content up’ regime,” Marchese said. “Those two regimes really cannot be harmonized. And so I think that to the extent that it’s possible, we could see an internet that does not function the same from state to state.”
Critics of the California law have also warned that, at a time when access to resources for LGBTQ youth is already limited (through measures like Florida’s Parental Rights in Education law, referred to by critics as the Don’t Say Gay law, which limits how schools can teach about gender identity or sexual orientation in early grades), the legislation threatens to further cut off vulnerable kids and teens from important information based on the whims of the state’s enforcement.
NetChoice alleged in its lawsuit against the California law that blogs and discussion forums for mental health, sexuality, religion and more could be considered under the scope of the law if likely to be accessed by kids. It also claimed the law would violate platforms’ own First Amendment right to editorial discretion and “impermissibly restricts how publishers may address or promote content that a government censor thinks unsuitable for minors.”
Jim Steyer, CEO of Common Sense Media, which has advocated for the California law and other measures to protect kids online, criticized arguments from tech-backed groups against the legislation. Though he acknowledged critiques from outside groups as well, he warned that it’s important not to let “perfect be the enemy of the good.”
“We’re in the business of trying to get stuff done concretely for kids and families,” Steyer said. “And it’s easy to make intellectual arguments. It’s a lot tougher sometimes to get stuff done.”
How degrading 230 protections could change the internet
A YouTube logo seen at the YouTube Space LA in Playa Del Rey, Los Angeles, California, United States October 21, 2015.
Lucy Nicholson | Reuters
Although the courts could rule in a variety of ways in these cases, any chipping away at Section 230 protections will likely have tangible effects on how internet companies operate.
Google, in its brief filed with the Supreme Court on Jan. 12, warned that denying Section 230 protections to YouTube in the Gonzalez case “could have devastating spillover effects.”
“Websites like Google and Etsy depend on algorithms to sift through mountains of user-created content and display content likely relevant to each user,” Google wrote. It added that if tech platforms lost Section 230 protection and could be sued for how they organize information, “the internet would devolve into a disorganized mess and a litigation minefield.”
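Google’s filing doesn’t describe any particular algorithm, but a toy sketch of the kind of relevance ranking it is referring to might look like the following. The data and scoring rule are purely illustrative and are not YouTube’s actual system.

```python
# Purely illustrative sketch of tag-overlap relevance ranking.
# Not Google's or YouTube's actual algorithm; all names are hypothetical.

def rank_for_user(user_interests, posts, top_n=3):
    """Score each post by how many of its tags overlap the user's interests,
    then return the highest-scoring posts first."""
    def score(post):
        return len(set(post["tags"]) & set(user_interests))
    return sorted(posts, key=score, reverse=True)[:top_n]

posts = [
    {"id": 1, "tags": ["cooking", "video"]},
    {"id": 2, "tags": ["politics", "news"]},
    {"id": 3, "tags": ["cooking", "baking"]},
]

print(rank_for_user(["cooking", "baking"], posts))
# Posts 3 and 1 surface first; that ordering step is what the Gonzalez
# plaintiffs argue should fall outside Section 230's protection.
```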
Google said such a change would also make the internet less safe and less hospitable to free expression.
“Without Section 230, some websites would be forced to overblock, filtering content that could create any potential legal risk, and might shut down some services altogether,” General Counsel Halimah DeLaine Prado wrote in a blog post summarizing Google’s position. “That would leave consumers with less choice to engage on the internet and less opportunity to work, play, learn, shop, create, and participate in the exchange of ideas online.”
Miers of Chamber of Progress said that even if Google technically wins at the Supreme Court, it’s possible the justices will try to “split the baby” by establishing a new test for when Section 230 protections should apply, such as in the case of algorithms. A result like that would effectively undermine one of the main functions of the law, according to Miers: the ability to swiftly end lawsuits against platforms that involve hosting third-party content.
If the court tries to draw such a distinction, Miers said, “now we’re going to get in a situation where every case plaintiffs bringing their cases against internet services are going to always try to frame it as being on the other side of the line that the Supreme Court sets up. And then there’s going to be a lengthy discussion of the courts asking, well does Section 230 even apply in this case? But once we get to that lengthy discussion, the entire procedural benefits of 230 have been mooted at that point.”
Miers added that platforms could also opt to display mostly posts from professional content creators, rather than amateurs, to maintain a level of control over the information they could be at risk for promoting.
The impact on online communities could be especially profound for marginalized groups. Civil society groups who spoke with CNBC doubted that for-profit companies would spend on increasingly complex models to navigate a risky legal field in a more nuanced way.
“It’s much cheaper from a compliance point of view to just censor everything,” said Brody of the Lawyers’ Committee. “I mean, these are for-profit companies, they’re going to look at, what is the most cost-effective way for us to reduce our legal liability? And the answer to that is not going to be investing billions and billions of dollars into trying to improve content moderation systems that are frankly already broken. The answer is going to be, let’s just crank up the dial on the AI that automatically censors stuff so that we have a Disneyland rule. Everything’s happy and nothing bad ever happens. But to do that, you’re going to censor a lot of underrepresented voices in a way that is really going to have outsized censorship impacts on them.”
The Supreme Court of the United States building is seen in Washington, D.C., United States on December 28, 2022.
Celal Gunes | Anadolu Agency | Getty Images
The idea that some business models will become simply too risky to operate under a more limited liability shield is not theoretical.
After Congress passed SESTA-FOSTA, which carved out an exception for liability protection in cases of sex trafficking, options to advertise sex work online became more limited due to the liability risk. While some might view that as a positive change, many sex workers have argued it removed a safer option for making money compared to soliciting work in person.
Lawmakers who’ve sought to alter Section 230 seem to think there is a “magical lever” they can pull that will “censor all the bad stuff from the internet and leave up all the good stuff,” according to Evan Greer, director of Fight for the Future, a digital rights advocacy group.
“The reality is that when we subject platforms to liability for user-generated content, no matter how well-intentioned the effort is or no matter how it’s framed, what ends up happening is not that platforms moderate more responsibly or more thoughtfully,” Greer said. “They moderate in whatever way their risk-averse lawyers tell them to, to avoid getting sued.”
“So if the court were to say that you could be potentially liable for quote, unquote, recommending third-party content or for your algorithms displaying third-party content, because it’s so difficult to moderate in a totally perfect way, one response might be to take down a lot of speech or to block a lot of speech,” Jain said.
Miers fears that if different states enact their own laws seeking to place limits on Section 230 as Florida and Texas have, companies will end up adhering to the strictest state’s law for the rest of the country. That could result in restrictions on the kind of content most likely to be considered controversial in that state, such as resources for LGBTQ youth when such information isn’t considered age-appropriate, or reproductive care in a state that has abortion restrictions.
Should the Supreme Court end up degrading 230 protections and allowing a fragmented legal system to persist for content moderation, Miers said it could be a spark for Congress to address the new challenges, noting that Section 230 itself came out of two bipartisan lawmakers’ recognition of new legal complexities presented by the existence of the internet.
“Maybe we have to sort of relive that history and realize that oh, well, we’ve made the regulatory environment so convoluted that it’s risky again to host user-generated content,” Miers said. “Yeah, maybe Congress needs to act.”
UK Finance Minister Rachel Reeves makes a speech during the Labour Party Conference, held at the ACC Liverpool Convention Center in Liverpool, UK, on September 23, 2024.
Anadolu | Getty Images
LONDON — British tech bosses and venture capitalists are questioning whether the country can deliver on its bid to become a global artificial intelligence hub after the government set out plans to increase taxes on businesses.
On Wednesday, Finance Minister Rachel Reeves announced a move to hike capital gains tax (CGT) — a levy on the profit investors make from the sale of an investment — as part of a far-reaching announcement on the Labour government’s fiscal spending and tax plans.
The lower capital gains tax rate was increased to 18% from 10%, while the higher rate climbed to 24% from 20%. Reeves said the increases will help bring in £2.5 billion ($3.2 billion) of additional revenue for the public purse.
It was also announced that the lifetime limit for business asset disposal relief (BADR) — which offers entrepreneurs a reduced rate on the level of tax paid on capital gains resulting from the sale of all or part of a company — would sit at £1 million.
She added that the rate of CGT applied to entrepreneurs using the BADR scheme will increase to 14% in 2025 and to 18% a year later. Still, Reeves said the U.K. would still have the lowest capital gains tax rate of any European G7 economy.
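As a rough illustration of what those rate changes mean in practice, the sketch below applies them to a hypothetical £500,000 gain. It ignores the annual exempt amount, the £1 million BADR lifetime limit and other reliefs, and assumes the current 10% BADR rate, a figure not cited in Reeves’ remarks above.

```python
# Illustrative only: tax due on a hypothetical £500,000 gain, ignoring the
# annual exempt amount, the BADR lifetime limit and other reliefs.
# Rates are those cited above; the current 10% BADR rate is an assumption.

gain = 500_000

higher_rate_old, higher_rate_new = 0.20, 0.24        # higher CGT rate
badr_now, badr_2025, badr_2026 = 0.10, 0.14, 0.18    # BADR rate path

print(f"Higher-rate taxpayer, before budget: £{gain * higher_rate_old:,.0f}")
print(f"Higher-rate taxpayer, after budget:  £{gain * higher_rate_new:,.0f}")
print(f"Entrepreneur using BADR today:       £{gain * badr_now:,.0f}")
print(f"BADR in 2025:                        £{gain * badr_2025:,.0f}")
print(f"BADR in 2026:                        £{gain * badr_2026:,.0f}")
```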
The hikes were less severe than previously feared — but the push toward a higher tax environment for corporates stoked the concern of several tech executives and investors, with many suggesting the move would lead to higher inflation and a slowdown in hiring.
On top of increases to CGT, the government also raised the rate of National Insurance (NI) contributions, a tax on earnings. Reeves forecast the move would raise £25 billion per year, by far the largest revenue-raising measure in a raft of pledges made Wednesday.
Paul Taylor, CEO and co-founder of fintech firm Thought Machine, said the hike to NI rates would lead to an additional £800,000 in payroll spending for his business.
“This is a significant amount for companies like us, which rely on investor capital and already face cost pressures and targets,” he noted.
“Nearly all emerging tech businesses run on investor capital, and this increase sets them back on their path to profitability,” added Taylor, who sits on the lobbying group Unicorn Council for U.K. FinTech. “The U.S. startup and entrepreneurial environment is a model of where the U.K. needs to be.”
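An estimate like Taylor’s £800,000 is straightforward to sanity-check. The sketch below assumes the changes as reported elsewhere (the employer rate rising to 15% from 13.8% and the per-employee threshold falling to £5,000 from £9,100) together with a hypothetical workforce; none of these inputs come from Thought Machine itself.

```python
# Rough sketch of how an employer National Insurance (NI) change feeds through
# to payroll costs. The 15% rate and £5,000 threshold are assumptions based on
# reported changes, not figures taken from this article; the workforce is hypothetical.

def employer_ni(salary, rate, threshold):
    """Employer NI due on a single salary above the secondary threshold."""
    return max(salary - threshold, 0) * rate

old = lambda s: employer_ni(s, rate=0.138, threshold=9_100)
new = lambda s: employer_ni(s, rate=0.150, threshold=5_000)

salary, headcount = 70_000, 500          # hypothetical average salary and headcount
extra_per_head = new(salary) - old(salary)
print(f"Extra NI per employee: £{extra_per_head:,.0f}")
print(f"Extra NI for {headcount} employees: £{extra_per_head * headcount:,.0f}")
```

With a few hundred employees on salaries in that range, the extra cost quickly reaches several hundred thousand pounds a year, which is consistent with the order of magnitude Taylor describes.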
Chances of building ‘the next Nvidia’ grow slimmer
The government also announced another tax increase by way of a rise in the rate on carried interest, the tax applied to the share of profit a fund manager makes from a private equity investment.
Reeves announced that the rate of tax on carried interest, which is charged on capital gains, would rise to 32%, up from 28% currently.
Haakon Overli, co-founder of European venture capital firm Dawn Capital, said that increases to capital gains tax could make it harder for the next Nvidia to be built in the U.K.
“If we are to have the next NVIDIA built in the UK, it will come from a company born from venture capital investment,” Overli said by email.
“The tax returns from creating such a company, which is worth more than the FTSE 100 put together, would dwarf any gains from increasing the take from venture capital today.”
The government is carrying out further consultation with industry stakeholders on plans to up taxes on carried interest. Anne Glover, CEO of Amadeus Capital, an early investor in Arm, said this was a good thing.
“The Chancellor has clearly listened to some of the concerns of investors and business leaders,” she said, adding that talks on carried interest reforms must be “equally as productive and engaged.”
Britain also committed to mobilizing £70 billion of investment through the recently formed National Wealth Fund — a state-backed investment platform modelled on sovereign wealth vehicles such as Norway’s Government Pension Fund Global and Saudi Arabia’s Public Investment Fund.
This, Glover added, “aligns with our belief that investment in technology will ultimately lead to long term growth.”
She nevertheless urged the government to look seriously at mandating that pension funds diversify their allocation to riskier assets like venture capital — a common ask from VCs to boost the U.K. tech sector.
Clarity welcomed
Steve Hare, CEO of accounting software firm Sage, said the budget would mean “significant challenges for UK businesses, especially SMBs, who will face the impact of rising employer National Insurance contributions and minimum wage increases in the months ahead.”
Even so, he added that many firms would still welcome the “longer-term certainty and clarity provided, allowing them to plan and adapt effectively.”
Meanwhile, Sean Reddington, founder and CEO of educational technology firm Thrive, said that higher CGT rates mean tech entrepreneurs will face “greater costs when selling assets,” while the rise in employer NI contributions “could impact hiring decisions.”
“For a sustainable business environment, government support must go beyond these fiscal changes,” Reddington said. “While clearer tax communication is positive, it’s unlikely to offset the pressures of heightened taxation and rising debt on small businesses and the self-employed.”
He added, “The crucial question is how businesses can maintain profitability with increased costs. Government support is essential to offset these new burdens and ensure the UK’s entrepreneurial spirit continues to thrive.”
Apple CEO Tim Cook (C) joins customers during Apple’s iPhone 16 launch in New York on September 20, 2024.
Timothy A. Clary | Afp | Getty Images
Apple’s second-largest division after the iPhone has turned into a $100 billion a year business that Wall Street loves.
In Apple’s earnings report on Thursday, the company said it reached just under $25 billion in services revenue, an all-time high for the category, and 12% growth on an annual basis.
“It’s an important milestone,” Apple CFO Luca Maestri said on a call with analysts. “We’ve got to a run rate of $100 billion. You look back just a few years ago and the growth has been phenomenal.”
Apple first broke out its services revenue in the December quarter of 2014. At the time, it was $4.8 billion.
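The “$100 billion” figure is simply the latest quarter annualized. A quick back-of-the-envelope check using the approximate figures above:

```python
# Back-of-the-envelope numbers behind the "$100 billion run rate" milestone.
# Quarterly figures are the approximations cited above, in billions of USD.

latest_quarter = 25.0      # services revenue, most recent quarter (approx.)
dec_2014_quarter = 4.8     # services revenue when first broken out

run_rate = latest_quarter * 4
years = 10                 # roughly a decade between the two readings
cagr = (latest_quarter / dec_2014_quarter) ** (1 / years) - 1

print(f"Annualized run rate: ~${run_rate:.0f}B")
print(f"Implied compound annual growth: ~{cagr:.0%}")   # roughly 18% a year
```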
Apple’s services unit has become a critical part of the company’s appeal to investors over the past decade. Its gross margin was 74% in the September quarter, compared with Apple’s overall margin of 46.2%.
Services contains a wide range of different offerings. According to the company’s SEC filings, it includes advertising, search licensing revenue from Google, warranties called AppleCare, cloud subscription services such as iCloud, content subscriptions such as the company’s Apple TV+ service, and payments from Apple Pay.
On a January 2016 earnings call, when the reporting segment was relatively new, Apple CEO Tim Cook told investors to pay attention.
“I do think that the assets that we have in this area are huge, and I do think that it’s probably something that the investment community would want to and should focus more on,” Cook said.
Over the years, Apple has compared its services business to the size of Fortune 500 companies, which are ranked by sales, to give a sense of its scale. After Thursday, Apple’s services business alone, based on its most recent run rate, would land around 40th on the Fortune 500, topping Morgan Stanley and Johnson & Johnson.
Services appeals to investors because many of the subscriptions contained in it are billed on a recurring basis. That can be more reliably modeled than hardware sales, which will increase or decrease based on a given iPhone model’s demand.
“Yes, the recurring portion is growing faster than the transactional one,” Maestri said on Thursday.
Apple’s fourth-quarter results beat Wall Street expectations for revenue and earnings on Thursday, but net income slumped after a one-time charge as part of a tax decision in Europe. The stock fell as much as 2% in extended trading.
Apple boasts to investors that its sales from Services will grow alongside its installed base. After someone buys an iPhone, they’re likely to sign up for Apple’s subscriptions, use Safari to search Google, or buy an extended warranty.
Apple also cites a “subscription” figure that includes both its first-party services, such as Apple TV+ subscriptions, and users who sign up to be billed by an App Store app on a recurring basis.
The company said the installed base and subscriptions hit all-time highs, but didn’t give updated figures. Apple said it had 2.2 billion active devices in February, and in August said it had topped 1 billion paid subscriptions.
Still, Apple faces questions about how long its services business can continue growing at such a rapid rate. Between 2016 and 2021, the unit sported significantly higher growth, reaching 27.3% at the end of that stretch.
In fiscal 2023, services growth dropped to 9.1% for the year, before recovering to about 13% the next year. Apple told investors that it expected services growth in the December quarter to be about what it was in fiscal 2024.
Cook was asked on Thursday what Apple could do to make some of its services and its Apple One subscription bundle grow faster.
“There’s lots of customers to try to convince to take advantage of it,” Cook said. “We’re going to continue investing in the services and adding new features. Whether it’s News+ or Music or Arcade, that’s what we’re going to do.”
Amazon CEO Andy Jassy speaking with CNBC’s Jim Cramer on Mad Money in Seattle, Washington, on Dec. 6, 2023.
CNBC
Amazon CEO Andy Jassy is trying to reassure investors who may be worried about the future payoff of the company’s massive investments in generative artificial intelligence.
On a conference call with analysts following the company’s third-quarter earnings report on Thursday, Jassy pointed to the success of Amazon’s cloud computing business, Amazon Web Services, which has become a crucial profit engine despite the extreme costs associated with building data centers.
“I think we’ve proven over time that we can drive enough operating income and free cash flow to make this a very successful return on invested capital business,” Jassy said. “We expect the same thing will happen here with generative AI.”
Amazon spent $22.6 billion on property and equipment during the quarter, up 81% from the year before. Jassy said Amazon plans to spend $75 billion on capex in 2024 and expects an even higher number in 2025.
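Those figures also imply the size of the year-ago quarter and how much of the full-year plan the latest quarter represents; a rough check (all amounts in billions of dollars, as cited above):

```python
# Quick check on the capital-spending figures cited above (billions of USD).

q3_capex = 22.6                     # property and equipment, latest quarter
yoy_increase = 0.81                 # "up 81% from the year before"

prior_year_quarter = q3_capex / (1 + yoy_increase)
print(f"Same quarter a year earlier: ~${prior_year_quarter:.1f}B")   # ~$12.5B

full_year_plan = 75.0               # planned 2024 capex
print(f"Latest quarter as share of 2024 plan: {q3_capex / full_year_plan:.0%}")  # ~30%
```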
The jump in spending is primarily being driven by generative AI investments, Jassy said. The company is rushing to invest in data centers, networking gear and hardware to meet vast demand for the technology, which has exploded in popularity since OpenAI released its ChatGPT assistant almost two years ago.
“It is a really unusually large, maybe once-in-a-lifetime type of opportunity,” Jassy said. “And I think our customers, the business and our shareholders will feel good about this long term that we’re aggressively pursuing it.”
AI spending was a big topic on tech earnings calls this week. Meta on Wednesday raised its capital expenditures guidance, and CEO Mark Zuckerberg said he was “quite happy” with the team’s execution. Meanwhile, Microsoft‘s investment in OpenAI weighed on its fiscal first-quarter earnings released on Wednesday, and the company said capital spending would continue to rise. A day earlier, Alphabet CFO Anat Ashkenazi warned the company expects capital spending to grow in 2025.
Amazon has said its cloud unit has picked up more business from companies that need infrastructure to deploy generative AI models. It’s also launched several AI products for enterprises, third-party sellers on its marketplace and advertisers in recent months. The company is expected to announce a souped-up version of its Alexa voice assistant that incorporates generative AI, something Jassy said will arrive “in the near future.”
Amazon hasn’t disclosed its revenue from generative AI, but Jassy said Thursday it’s become a “multi-billion-dollar revenue run rate” business within AWS that “continues to grow at a triple-digit year-over-year percentage.”
“It’s growing more than three times faster at this stage of its evolution as AWS itself grew, and we felt like AWS grew pretty quickly,” he added.