Mark Zuckerberg, chief executive officer of Meta Platforms Inc., left, arrives at federal court in San Jose, California, US, on Tuesday, Dec. 20, 2022.
David Paul Morris | Bloomberg | Getty Images
Toward the end of 2022, engineers on Meta’s team combating misinformation were ready to debut a key fact-checking tool that had taken half a year to build. The company needed all the reputational help it could get after a string of crises had badly damaged the credibility of Facebook and Instagram and given regulators additional ammunition to bear down on the platforms.
The new product would let third-party fact-checkers like The Associated Press and Reuters, as well as credible experts, add comments at the top of questionable articles on Facebook as a way to verify their trustworthiness.
But CEO Mark Zuckerberg’s commitment to make 2023 the “year of efficiency” spelled the end of the ambitious effort, according to three people familiar with the matter who asked not to be named due to confidentiality agreements.
Over multiple rounds of layoffs, Meta announced plans to eliminate roughly 21,000 jobs, a mass downsizing that had an outsized effect on the company’s trust and safety work. The fact-checking tool, which had initial buy-in from executives and was still in a testing phase early this year, was completely dissolved, the sources said.
A Meta spokesperson did not respond to questions related to job cuts in specific areas and said in an emailed statement that “we remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community.”
Across the tech industry, as companies tighten their belts and impose hefty layoffs to address macroeconomic pressures and slowing revenue growth, wide swaths of people tasked with protecting the internet’s most-populous playgrounds are being shown the exits. The cuts come at a time of increased cyberbullying, which has been linked to higher rates of adolescent self-harm, and as the spread of misinformation and violent content collides with the exploding use of artificial intelligence.
In their most recent earnings calls, tech executives highlighted their commitment to “do more with less,” boosting productivity with fewer resources. Meta, Alphabet, Amazon and Microsoft have all cut thousands of jobs after staffing up rapidly before and during the Covid pandemic. Microsoft CEO Satya Nadella recently said his company would suspend salary increases for full-time employees.
The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency, even with the 2024 U.S. election season — and the online chaos that’s expected to ensue — just months away from kickoff. AI ethics and trust and safety are different departments within tech companies but are aligned on goals related to limiting real-life harm that can stem from use of their companies’ products and services.
“Abuse actors are usually ahead of the game; it’s cat and mouse,” said Arjun Narayan, who previously served as a trust and safety lead at Google and TikTok parent ByteDance, and is now head of trust and safety at news aggregator app Smart News. “You’re always playing catch-up.”
For now, tech companies seem to view both trust and safety and AI ethics as cost centers.
Twitter effectively disbanded its ethical AI team in November and laid off all but one of its members, along with 15% of its trust and safety department, according to reports. In February, Google cut about one-third of a unit that aims to protect society from misinformation, radicalization, toxicity and censorship. Meta reportedly ended the contracts of about 200 content moderators in early January. It also laid off at least 16 members of Instagram’s well-being group and more than 100 positions related to trust, integrity and responsibility, according to documents filed with the U.S. Department of Labor.
Andy Jassy, chief executive officer of Amazon.Com Inc., during the GeekWire Summit in Seattle, Washington, U.S., on Tuesday, Oct. 5, 2021.
David Ryder | Bloomberg | Getty Images
In March, Amazon downsized its responsible AI team and Microsoft laid off its entire ethics and society team – the second of two layoff rounds that reportedly took the team from 30 members to zero. Amazon didn’t respond to a request for comment, and Microsoft pointed to a blog post regarding its job cuts.
At Amazon’s game streaming unit Twitch, staffers learned of their fate in March from an ill-timed internal post from Amazon CEO Andy Jassy.
Jassy’s announcement that 9,000 jobs would be cut companywide included 400 employees at Twitch. Of those, about 50 were part of the team responsible for monitoring abusive, illegal or harmful behavior, according to people familiar with the matter who spoke on the condition of anonymity because the details were private.
The trust and safety team, or T&S as it’s known internally, was losing about 15% of its staff just as content moderation was seemingly more important than ever.
In an email to employees, Twitch CEO Dan Clancy didn’t call out the T&S department specifically, but he confirmed the broader cuts among his staffers, who had just learned about the layoffs from Jassy’s post on a message board.
“I’m disappointed to share the news this way before we’re able to communicate directly to those who will be impacted,” Clancy wrote in the email, which was viewed by CNBC.
Narayan of Smart News said that with a lack of investment in safety at the major platforms, companies lose their ability to scale in a way that keeps pace with malicious activity. As more problematic content spreads, there’s an “erosion of trust,” he said.
“In the long run, it’s really hard to win back consumer trust,” Narayan added.
While layoffs at Meta and Amazon followed demands from investors and a dramatic slump in ad revenue and share prices, Twitter’s cuts resulted from a change in ownership.
Almost immediately after Elon Musk closed his $44 billion purchase of Twitter in October, he began eliminating thousands of jobs. That included all but one member of the company’s 17-person AI ethics team, according to Rumman Chowdhury, who served as director of Twitter’s machine learning ethics, transparency and accountability team. The last remaining person ended up quitting.
The team members learned of their status when their laptops were turned off remotely, Chowdhury said. Hours later, they received email notifications.
“I had just recently gotten head count to build out my AI red team, so these would be the people who would adversarially hack our models from an ethical perspective and try to do that work,” Chowdhury told CNBC. She added, “It really just felt like the rug was pulled as my team was getting into our stride.”
Part of that stride involved working on “algorithmic amplification monitoring,” Chowdhury said, or tracking elections and political parties to see if “content was being amplified in a way that it shouldn’t.”
Chowdhury referenced an initiative in July 2021, when Twitter’s AI ethics team led what was billed as the industry’s first-ever algorithmic bias bounty competition. The company invited outsiders to audit the platform for bias, and made the results public.
Chowdhury said she worries that now Musk “is actively seeking to undo all the work we have done.”
“There is no internal accountability,” she said. “We served two of the product teams to make sure that what’s happening behind the scenes was serving the people on the platform equitably.”
Twitter did not provide a comment for this story.
Advertisers are pulling back in places where they see increased reputational risk.
According to Sensor Tower, six of the top 10 categories of U.S. advertisers on Twitter spent much less in the first quarter of this year compared with a year earlier, with that group collectively slashing its spending by 53%. The site has recently come under fire for allowing the spread of violent images and videos.
The rapid rise in popularity of chatbots is only complicating matters. The types of AI models created by OpenAI, the company behind ChatGPT, and others make it easier to populate fake accounts with content. Researchers from the Allen Institute for AI, Princeton University and Georgia Tech ran tests against ChatGPT’s application programming interface (API) and found up to a sixfold increase in toxicity, depending on which type of functional identity, such as a customer service agent or virtual assistant, a company assigned to the chatbot.
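The “functional identity” the researchers varied is set through the chat API’s system message, which fixes the chatbot’s persona before any user turns. A minimal sketch of how such a test harness might assign personas is below; the persona list, prompt, and scoring step are illustrative placeholders, not the study’s actual code.

```python
# Sketch: assigning a "functional identity" to a chatbot via the system
# message, as in persona-toxicity experiments. Persona names and the
# scoring step are illustrative, not taken from the study itself.

def build_messages(persona: str, user_prompt: str) -> list[dict]:
    """Prepend a system message that fixes the chatbot's persona."""
    return [
        {"role": "system", "content": f"You are {persona}."},
        {"role": "user", "content": user_prompt},
    ]

personas = ["a customer service agent", "a virtual assistant"]

for persona in personas:
    messages = build_messages(persona, "Tell me about my order status.")
    # With the real API (hypothetical usage; requires an API key):
    #   from openai import OpenAI
    #   client = OpenAI()
    #   reply = client.chat.completions.create(
    #       model="gpt-3.5-turbo", messages=messages)
    # ...then score the reply text with a toxicity classifier and
    # compare scores across personas.
    print(persona, "->", messages[0]["content"])
```

Because the persona lives entirely in the system message, the same user prompt can be replayed across many identities and the responses scored for toxicity, which is how a persona-dependent difference becomes measurable.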
Regulators are paying close attention to AI’s growing influence and the simultaneous downsizing of groups dedicated to AI ethics and trust and safety. Michael Atleson, an attorney at the Federal Trade Commission’s division of advertising practices, called out the paradox in a blog post earlier this month.
“Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering,” Atleson wrote. “If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.”
Beyond the fact-checking project, the layoffs hit researchers, engineers, user design experts and others who worked on issues pertaining to societal concerns. The company’s dedicated team focused on combating misinformation suffered numerous losses, four former Meta employees said.
Prior to Meta’s first round of layoffs in November, the company had already taken steps to consolidate members of its integrity team into a single unit. In September, Meta merged its central integrity team, which handles social matters, with its business integrity group tasked with addressing ads and business-related issues like spam and fake accounts, ex-employees said.
In the ensuing months, as broader cuts swept across the company, former trust and safety employees described working under the fear of looming layoffs and for managers who sometimes failed to see how their work affected Meta’s bottom line.
For example, projects that required fewer resources, such as improving spam filters, could get clearance over long-term safety projects that would entail policy changes, such as initiatives involving misinformation. Employees felt incentivized to take on more manageable tasks because they could show their results in their six-month performance reviews, ex-staffers said.
Ravi Iyer, a former Meta project manager who left the company before the layoffs, said that the cuts across content moderation are less bothersome than the fact that many of the people he knows who lost their jobs were performing critical roles on design and policy changes.
“I don’t think we should reflexively think that having fewer trust and safety workers means platforms will necessarily be worse,” said Iyer, who’s now the managing director of the Psychology of Technology Institute at University of Southern California’s Neely Center. “However, many of the people I’ve seen laid off are amongst the most thoughtful in rethinking the fundamental designs of these platforms, and if platforms are not going to invest in reconsidering design choices that have been proven to be harmful — then yes, we should all be worried.”
A Meta spokesperson previously downplayed the significance of the job cuts in the misinformation unit, tweeting that the “team has been integrated into the broader content integrity team, which is substantially larger and focused on integrity work across the company.”
Still, sources familiar with the matter said that following the layoffs, the company has fewer people working on misinformation issues.
For those who’ve gained expertise in AI ethics, trust and safety and related content moderation, the employment picture looks grim.
Newly unemployed workers in those fields from across the social media landscape told CNBC that there aren’t many job openings in their area of specialization as companies continue to trim costs. One former Meta employee said that after they interviewed for trust and safety roles at Microsoft and Google, those positions were suddenly axed.
An ex-Meta staffer said the company’s retreat from trust and safety is likely to filter down to smaller peers and startups that appear to be “following Meta in terms of their layoff strategy.”
Chowdhury, Twitter’s former AI ethics lead, said these types of jobs are a natural place for cuts because “they’re not seen as driving profit in product.”
“My perspective is that it’s completely the wrong framing,” she said. “But it’s hard to demonstrate value when your value is that you’re not being sued or someone is not being harmed. We don’t have a shiny widget or a fancy model at the end of what we do; what we have is a community that’s safe and protected. That is a long-term financial benefit, but in the quarter over quarter, it’s really hard to measure what that means.”
At Twitch, the T&S team included people who knew where to look to spot dangerous activity, according to a former employee in the group. That’s particularly important in gaming, which is “its own unique beast,” the person said.
Now, there are fewer people checking in on the “dark, scary places” where offenders hide and abusive activity gets groomed, the ex-employee added.
More importantly, nobody knows how bad it can get.
Microsoft CEO Satya Nadella speaks during the Microsoft Build conference at Microsoft headquarters in Redmond, Washington, on May 21, 2024.
Jason Redmond | AFP | Getty Images
A half-century ago, childhood friends Bill Gates and Paul Allen started Microsoft from a strip mall in Albuquerque, New Mexico. Five decades and almost $3 trillion later, the company celebrates its 50th birthday on Friday from its sprawling campus in Redmond, Washington.
Now the second most valuable publicly traded company in the world, Microsoft has had only three CEOs in its history, and all of them are in attendance for the monumental event. One is current CEO Satya Nadella. The other two are Gates and Steve Ballmer, both among the 11 richest people in the world due to their Microsoft fortunes.
While Microsoft has mostly been on the ascent of late, with Nadella turning the company into a major power player in cloud computing and artificial intelligence, the birthday party lands at an awkward moment.
The company’s stock price has dropped for four consecutive months for the first time since 2009 and just suffered its steepest quarterly drop in three years. That was all before President Donald Trump’s announcement this week of sweeping tariffs, which sent the Nasdaq tumbling on Thursday and Microsoft down another 2.4%.
Cloud computing has been Microsoft’s main source of new revenue since Nadella took over from Ballmer as CEO in 2014. But the Azure cloud reported disappointing revenue in the latest quarter, a miss that finance chief Amy Hood attributed in January to power and space shortages and a sales posture that focused too much on AI. Hood said revenue growth in the current quarter will fall to 10% from 17% a year earlier.
Nadella said management is refining sales incentives to maximize revenue from traditional workloads, while positioning the company to benefit from the ongoing AI boom.
“You would rather win the new than just protect the past,” Nadella told analysts on a conference call.
The past remains healthy. Microsoft still generates around one-fifth of its roughly $262 billion in annual revenue from productivity software, mostly from commercial clients. Windows makes up around 10% of sales.
Meanwhile, the company has used its massive cash pile to orchestrate its three largest acquisitions on record in a little over eight years, snapping up LinkedIn in late 2016, Nuance Communications in 2022 and Activision Blizzard in 2023, for a combined $121 billion.
“Microsoft has figured out how to stay ahead of the curve, and 50 years later, this is a company that can still be on the forefront of technology innovation,” said Soma Somasegar, a former Microsoft executive who now invests in startups at venture firm Madrona. “That’s a commendable place for the company to be in.”
When Somasegar gave up his corporate vice president position at Microsoft in 2015, the company was fresh off a $7.6 billion write-down from Ballmer’s ill-timed purchase of Nokia’s devices and services business.
Microsoft is now in a historic phase of investment. The company has built a $13.8 billion stake in OpenAI and last year spent almost $76 billion on capital expenditures and finance leases, up 83% from a year prior, partly to enable the use of AI models in the Azure cloud. In January, Nadella said Microsoft has $13 billion in annualized AI revenue, more even than OpenAI, which just closed a financing round valuing the company at $300 billion.
Microsoft’s spending spree has constrained free cash flow growth. Guggenheim analysts wrote in a note after the company’s earnings report in January, “You just have to believe in the future.”
Of the 35 Microsoft analysts tracked by FactSet, 32 recommend buying the stock, which has appreciated tenfold since Nadella became CEO. Azure has become a fearsome threat to Amazon Web Services, which pioneered the cloud market in the 2000s, and startups as well as enterprises are flocking to Microsoft’s cloud technology.
Winston Weinberg, CEO of legal AI startup Harvey, uses OpenAI models through Azure. Weinberg lauded Nadella’s focus on customers of all sizes.
“Satya has literally responded to emails within 15 minutes of us having a technical problem, and he’ll route it to the right person,” Weinberg said.
Still, technology is moving at an increasingly rapid pace and Microsoft’s ability to stay on top is far from guaranteed. Industry experts highlighted four key issues the company has to address as it pushes into its next half-century.
Microsoft didn’t respond to a request for comment.
Microsoft pushed through its largest acquisition ever, the $75 billion purchase of video game publisher Activision, during Biden’s term, but only after a protracted legal battle with the FTC.
At the very end of Biden’s time in office, the FTC opened an antitrust investigation into Microsoft. That probe is ongoing, Bloomberg reported in March.
Nadella has cultivated a relationship with Trump. In January, the two reportedly met for lunch at Trump’s Mar-a-Lago resort in Florida, alongside Tesla CEO Elon Musk.
President Donald Trump shakes hands with Microsoft CEO Satya Nadella during an American Technology Council roundtable at the White House in Washington on June 19, 2017.
Nicholas Kamm | AFP | Getty Images
The U.S. isn’t the only concern. The U.K.’s Competition and Markets Authority said in January that an independent inquiry found that “Microsoft is using its strong position in software to make it harder for AWS and Google to compete effectively for cloud customers that wish to use Microsoft software on the cloud.”
Microsoft last year committed to unbundling Teams from Microsoft 365 productivity software subscriptions globally to address concerns from the European Union’s executive arm, the European Commission.
One breakout product has been GitHub Copilot, which generates source code and answers developers’ questions. GitHub reached $2 billion in annualized revenue last year, with Copilot accounting for more than 40% of sales growth for the business. Microsoft bought GitHub in 2018 for $7.5 billion.
Microsoft CEO Satya Nadella, right, speaks as OpenAI CEO Sam Altman looks on during the OpenAI DevDay event in San Francisco on Nov. 6, 2023.
Justin Sullivan | Getty Images
But speedy deployment in AI can be worrisome.
The company is “not providing the underpinnings needed to deploy AI properly, in terms of security and governance — all because they care more about being ‘first,'” Foley wrote. Microsoft also hasn’t been great at helping customers understand the return on investment, she wrote.
AI-ready Copilot+ PCs, which Microsoft introduced last year, aren’t gaining much traction. The company had to delay the release of the Recall search feature to prevent data breaches. And the Copilot assistant subscription, at $30 a month for customers of the Microsoft 365 productivity suite, hasn’t become pervasive in the business world.
“Copilot was really their chance to take the lead,” said Jason Wong, an analyst at technology industry researcher Gartner. “But increasingly, what it’s seeming like is Copilot is just an add-on and not like a net-new thing to drive AI.”
In AI, Microsoft’s best bet so far was its investment in OpenAI. Somasegar said Microsoft is in prime position to be a big player in the market.
“To me, it’s been 2½ years since ChatGPT showed up, and we are not even at the Uber and Airbnb moment,” Somasegar said. “There is a tremendous amount of value creation that needs to happen in AI. Microsoft as much as everybody else is thinking, ‘What does that mean? How do we get there?'”
Artificial intelligence robot looking at futuristic digital data display.
Yuichiro Chino | Moment | Getty Images
Artificial intelligence is projected to reach $4.8 trillion in market value by 2033, but the technology’s benefits remain highly concentrated, according to the U.N. Trade and Development agency.
In a report released on Thursday, UNCTAD said the AI market cap would roughly equate to the size of Germany’s economy, with the technology offering productivity gains and driving digital transformation.
However, the agency also raised concerns about automation and job displacement, warning that AI could affect 40% of jobs worldwide. On top of that, AI is not inherently inclusive, meaning the economic gains from the tech remain “highly concentrated,” the report added.
“The benefits of AI-driven automation often favour capital over labour, which could widen inequality and reduce the competitive advantage of low-cost labour in developing economies,” it said.
The potential for AI to cause unemployment and inequality is a long-standing concern, with the IMF making similar warnings over a year ago. In January, the World Economic Forum released findings that as many as 41% of employers were planning to downsize their staff in areas where AI could replicate their work.
Beyond job markets, the UNCTAD report highlights inequalities between nations, with U.N. data showing that 40% of global corporate research and development spending in AI is concentrated among just 100 firms, mainly those in the U.S. and China.
Furthermore, it notes that leading tech giants, such as Apple, Nvidia and Microsoft — companies that stand to benefit from the AI boom — have a market value that rivals the gross domestic product of the entire African continent.
This AI dominance at national and corporate levels threatens to widen those technological divides, leaving many nations at risk of lagging behind, UNCTAD said. It noted that 118 countries — mostly in the Global South — are absent from major AI governance discussions.
Altimeter Capital CEO Brad Gerstner said Thursday that he’s moving out of the “bomb shelter” with Nvidia and into a position of safety, expecting that the chipmaker is positioned to withstand President Donald Trump’s widespread tariffs.
“The growth and the demand for GPUs is off the charts,” he told CNBC’s “Fast Money Halftime Report,” referring to Nvidia’s graphics processing units that are powering the artificial intelligence boom. He said investors just need to listen to commentary from OpenAI, Google and Elon Musk.
President Trump announced an expansive and aggressive “reciprocal tariff” policy in a ceremony at the White House on Wednesday. The plan established a 10% baseline tariff, though many countries like China, Vietnam and Taiwan are subject to steeper rates. The announcement sent stocks tumbling on Thursday, with the tech-heavy Nasdaq down more than 5%, headed for its worst day since 2022.
The big reason Nvidia may be better positioned to withstand Trump’s tariff hikes is because semiconductors are on the list of exceptions, which Gerstner called a “wise exception” due to the importance of AI.
Nvidia’s business has exploded since the release of OpenAI’s ChatGPT in 2022, and annual revenue has more than doubled in each of the past two fiscal years. After a massive rally, Nvidia’s stock price has dropped by more than 20% this year and was down almost 7% on Thursday.
Gerstner is concerned about the potential of a recession due to the tariffs, but is relatively bullish on Nvidia, and said the “negative impact from tariffs will be much less than in other areas.”
He said it’s key for the U.S. to stay competitive in AI. And while Nvidia’s chips are designed domestically, they’re manufactured in Taiwan “because they can’t be fabricated in the U.S.” Higher tariffs would punish companies like Meta and Microsoft, he said.
“We’re in a global race in AI,” Gerstner said. “We can’t hamper our ability to win that race.”