Mark Zuckerberg, chief executive officer of Meta Platforms Inc., left, arrives at federal court in San Jose, California, US, on Tuesday, Dec. 20, 2022.
David Paul Morris | Bloomberg | Getty Images
Toward the end of 2022, engineers on Meta’s team combating misinformation were ready to debut a key fact-checking tool that had taken half a year to build. The company needed all the reputational help it could get after a string of crises had badly damaged the credibility of Facebook and Instagram and given regulators additional ammunition to bear down on the platforms.
The new product would let third-party fact-checkers like The Associated Press and Reuters, as well as credible experts, add comments at the top of questionable articles on Facebook as a way to verify their trustworthiness.
But CEO Mark Zuckerberg’s commitment to make 2023 the “year of efficiency” spelled the end of the ambitious effort, according to three people familiar with the matter who asked not to be named due to confidentiality agreements.
Over multiple rounds of layoffs, Meta announced plans to eliminate roughly 21,000 jobs, a mass downsizing that had an outsized effect on the company’s trust and safety work. The fact-checking tool, which had initial buy-in from executives and was still in a testing phase early this year, was completely dissolved, the sources said.
A Meta spokesperson did not respond to questions related to job cuts in specific areas and said in an emailed statement that “we remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community.”
Across the tech industry, as companies tighten their belts and impose hefty layoffs to address macroeconomic pressures and slowing revenue growth, wide swaths of people tasked with protecting the internet’s most-populous playgrounds are being shown the exits. The cuts come at a time of increased cyberbullying, which has been linked to higher rates of adolescent self-harm, and as the spread of misinformation and violent content collides with the exploding use of artificial intelligence.
In their most recent earnings calls, tech executives highlighted their commitment to “do more with less,” boosting productivity with fewer resources. Meta, Alphabet, Amazon and Microsoft have all cut thousands of jobs after staffing up rapidly before and during the Covid pandemic. Microsoft CEO Satya Nadella recently said his company would suspend salary increases for full-time employees.
The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency, even with the 2024 U.S. election season — and the online chaos that’s expected to ensue — just months away from kickoff. AI ethics and trust and safety are different departments within tech companies but are aligned on goals related to limiting real-life harm that can stem from use of their companies’ products and services.
“Abuse actors are usually ahead of the game; it’s cat and mouse,” said Arjun Narayan, who previously served as a trust and safety lead at Google and TikTok parent ByteDance, and is now head of trust and safety at news aggregator app Smart News. “You’re always playing catch-up.”
For now, tech companies seem to view both trust and safety and AI ethics as cost centers.
Twitter effectively disbanded its ethical AI team in November and laid off all but one of its members, along with 15% of its trust and safety department, according to reports. In February, Google cut about one-third of a unit that aims to protect society from misinformation, radicalization, toxicity and censorship. Meta reportedly ended the contracts of about 200 content moderators in early January. It also laid off at least 16 members of Instagram’s well-being group and more than 100 positions related to trust, integrity and responsibility, according to documents filed with the U.S. Department of Labor.
Andy Jassy, chief executive officer of Amazon.com Inc., during the GeekWire Summit in Seattle, Washington, U.S., on Tuesday, Oct. 5, 2021.
David Ryder | Bloomberg | Getty Images
In March, Amazon downsized its responsible AI team and Microsoft laid off its entire ethics and society team – the second of two layoff rounds that reportedly took the team from 30 members to zero. Amazon didn’t respond to a request for comment, and Microsoft pointed to a blog post regarding its job cuts.
At Amazon’s game streaming unit Twitch, staffers learned of their fate in March from an ill-timed internal post from Amazon CEO Andy Jassy.
Jassy’s announcement that 9,000 jobs would be cut companywide included 400 employees at Twitch. Of those, about 50 were part of the team responsible for monitoring abusive, illegal or harmful behavior, according to people familiar with the matter who spoke on the condition of anonymity because the details were private.
The trust and safety team, or T&S as it’s known internally, was losing about 15% of its staff just as content moderation was seemingly more important than ever.
In an email to employees, Twitch CEO Dan Clancy didn’t call out the T&S department specifically, but he confirmed the broader cuts among his staffers, who had just learned about the layoffs from Jassy’s post on a message board.
“I’m disappointed to share the news this way before we’re able to communicate directly to those who will be impacted,” Clancy wrote in the email, which was viewed by CNBC.
‘Hard to win back consumer trust’
A current member of Twitch’s T&S team said the remaining employees in the unit are feeling “whiplash” and worry about a potential second round of layoffs. The person said the cuts caused a big hit to institutional knowledge, adding that there was a significant reduction in Twitch’s law enforcement response team, which deals with physical threats, violence, terrorism groups and self-harm.
A Twitch spokesperson did not provide a comment for this story, instead directing CNBC to a blog post from March announcing the layoffs. The post didn’t include any mention of trust and safety or content moderation.
Narayan of Smart News said that with a lack of investment in safety at the major platforms, companies lose their ability to scale in a way that keeps pace with malicious activity. As more problematic content spreads, there’s an “erosion of trust,” he said.
“In the long run, it’s really hard to win back consumer trust,” Narayan added.
While layoffs at Meta and Amazon followed demands from investors and a dramatic slump in ad revenue and share prices, Twitter’s cuts resulted from a change in ownership.
Almost immediately after Elon Musk closed his $44 billion purchase of Twitter in October, he began eliminating thousands of jobs. That included all but one member of the company’s 17-person AI ethics team, according to Rumman Chowdhury, who served as director of Twitter’s machine learning ethics, transparency and accountability team. The last remaining person ended up quitting.
The team members learned of their status when their laptops were turned off remotely, Chowdhury said. Hours later, they received email notifications.
“I had just recently gotten head count to build out my AI red team, so these would be the people who would adversarially hack our models from an ethical perspective and try to do that work,” Chowdhury told CNBC. She added, “It really just felt like the rug was pulled as my team was getting into our stride.”
Part of that stride involved working on “algorithmic amplification monitoring,” Chowdhury said, or tracking elections and political parties to see if “content was being amplified in a way that it shouldn’t.”
Chowdhury referenced an initiative in July 2021, when Twitter’s AI ethics team led what was billed as the industry’s first-ever algorithmic bias bounty competition. The company invited outsiders to audit the platform for bias, and made the results public.
Chowdhury said she worries that now Musk “is actively seeking to undo all the work we have done.”
“There is no internal accountability,” she said. “We served two of the product teams to make sure that what’s happening behind the scenes was serving the people on the platform equitably.”
Twitter did not provide a comment for this story.
Advertisers are pulling back in places where they see increased reputational risk.
According to Sensor Tower, six of the top 10 categories of U.S. advertisers on Twitter spent much less in the first quarter of this year compared with a year earlier, with that group collectively slashing its spending by 53%. The site has recently come under fire for allowing the spread of violent images and videos.
The rapid rise in popularity of chatbots is only complicating matters. The types of AI models created by OpenAI, the company behind ChatGPT, and others make it easier to populate fake accounts with content. Researchers from the Allen Institute for AI, Princeton University and Georgia Tech ran tests using ChatGPT’s application programming interface, or API, and found that toxicity could increase as much as sixfold depending on which type of functional identity, such as a customer service agent or virtual assistant, a company assigned to the chatbot.
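The researchers’ finding hinged on the persona, or system prompt, given to the model. As a rough sketch of the mechanism being tested, not the study’s actual code, assigning a functional identity through OpenAI’s chat API looks roughly like this; the model name, persona wording and user prompt are illustrative assumptions, and the toxicity scoring the researchers applied to the outputs is omitted.

```python
# Minimal sketch: assigning a "functional identity" to a chatbot via a
# system message, using the OpenAI Python SDK. Persona text, model choice
# and prompt are hypothetical; this is not the researchers' methodology.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

PERSONA = "You are a customer service agent for an online retailer."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {"role": "system", "content": PERSONA},  # the assigned identity
        {"role": "user", "content": "My package never arrived. What now?"},
    ],
)

print(response.choices[0].message.content)
```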
Regulators are paying close attention to AI’s growing influence and the simultaneous downsizing of groups dedicated to AI ethics and trust and safety. Michael Atleson, an attorney at the Federal Trade Commission’s division of advertising practices, called out the paradox in a blog post earlier this month.
“Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering,” Atleson wrote. “If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.”
Meta as a bellwether
For years, as the tech industry was enjoying an extended bull market and the top internet platforms were flush with cash, Meta was viewed by many experts as a leader in prioritizing ethics and safety.
The company spent years hiring trust and safety workers, including many with academic backgrounds in the social sciences, to help avoid a repeat of the 2016 presidential election cycle, when disinformation campaigns, often operated by foreign actors, ran rampant on Facebook. The embarrassment culminated in the 2018 Cambridge Analytica scandal, which exposed how a third party was illicitly using personal data from Facebook.
But following a brutal 2022 for Meta’s ad business — and its stock price — Zuckerberg went into cutting mode, winning plaudits along the way from investors who had complained of the company’s bloat.
Beyond the fact-checking project, the layoffs hit researchers, engineers, user design experts and others who worked on issues pertaining to societal concerns. The company’s dedicated team focused on combating misinformation suffered numerous losses, four former Meta employees said.
Prior to Meta’s first round of layoffs in November, the company had already taken steps to consolidate members of its integrity team into a single unit. In September, Meta merged its central integrity team, which handles social matters, with its business integrity group tasked with addressing ads and business-related issues like spam and fake accounts, ex-employees said.
In the ensuing months, as broader cuts swept across the company, former trust and safety employees described working under the fear of looming layoffs and for managers who sometimes failed to see how their work affected Meta’s bottom line.
For example, quick fixes that required fewer resources, such as improving spam filters, could get clearance over long-term safety projects that would entail policy changes, such as initiatives involving misinformation. Employees felt incentivized to take on more manageable tasks because they could show their results in their six-month performance reviews, ex-staffers said.
Ravi Iyer, a former Meta project manager who left the company before the layoffs, said that the cuts across content moderation are less bothersome than the fact that many of the people he knows who lost their jobs were performing critical roles on design and policy changes.
“I don’t think we should reflexively think that having fewer trust and safety workers means platforms will necessarily be worse,” said Iyer, who’s now the managing director of the Psychology of Technology Institute at University of Southern California’s Neely Center. “However, many of the people I’ve seen laid off are amongst the most thoughtful in rethinking the fundamental designs of these platforms, and if platforms are not going to invest in reconsidering design choices that have been proven to be harmful — then yes, we should all be worried.”
A Meta spokesperson previously downplayed the significance of the job cuts in the misinformation unit, tweeting that the “team has been integrated into the broader content integrity team, which is substantially larger and focused on integrity work across the company.”
Still, sources familiar with the matter said that following the layoffs, the company has fewer people working on misinformation issues.
For those who’ve gained expertise in AI ethics, trust and safety and related content moderation, the employment picture looks grim.
Newly unemployed workers in those fields from across the social media landscape told CNBC that there aren’t many job openings in their area of specialization as companies continue to trim costs. One former Meta employee said that after they interviewed for trust and safety roles at Microsoft and Google, those positions were suddenly axed.
An ex-Meta staffer said the company’s retreat from trust and safety is likely to filter down to smaller peers and startups that appear to be “following Meta in terms of their layoff strategy.”
Chowdhury, Twitter’s former AI ethics lead, said these types of jobs are a natural place for cuts because “they’re not seen as driving profit in product.”
“My perspective is that it’s completely the wrong framing,” she said. “But it’s hard to demonstrate value when your value is that you’re not being sued or someone is not being harmed. We don’t have a shiny widget or a fancy model at the end of what we do; what we have is a community that’s safe and protected. That is a long-term financial benefit, but in the quarter over quarter, it’s really hard to measure what that means.”
At Twitch, the T&S team included people who knew where to look to spot dangerous activity, according to a former employee in the group. That’s particularly important in gaming, which is “its own unique beast,” the person said.
Now, there are fewer people checking in on the “dark, scary places” where offenders hide and abusive activity gets groomed, the ex-employee added.
More importantly, nobody knows how bad it can get.
Elon Musk’s business empire is sprawling. It includes electric vehicle maker Tesla, social media company X, artificial intelligence startup xAI, computer interface company Neuralink, tunneling venture Boring Company and aerospace firm SpaceX.
Some of his ventures already benefit tremendously from federal contracts. SpaceX has received more than $19 billion from contracts with the federal government, according to research from FedScout. Under a second Trump presidency, more lucrative contracts could come its way. SpaceX is on track to take in billions of dollars annually from prime contracts with the federal government for years to come, according to FedScout CEO Geoff Orazem.
Musk, who has frequently blamed the government for stifling innovation, could also push for less regulation of his businesses. Earlier this month, Musk and former Republican presidential candidate Vivek Ramaswamy were tapped by Trump to lead a government efficiency group called the Department of Government Efficiency, or DOGE.
In a recent commentary piece in the Wall Street Journal, Musk and Ramaswamy wrote that DOGE will “pursue three major kinds of reform: regulatory rescissions, administrative reductions and cost savings.” They went on to say that many existing federal regulations were never passed by Congress and should therefore be nullified, which President-elect Trump could accomplish through executive action. Musk and Ramaswamy also championed the large-scale auditing of agencies, calling out the Pentagon for failing its seventh consecutive audit.
“The number one way Elon Musk and his companies would benefit from a Trump administration is through deregulation and defanging, you know, giving fewer resources to federal agencies tasked with oversight of him and his businesses,” says CNBC technology reporter Lora Kolodny.
Elon Musk attends the America First Policy Institute gala at Mar-A-Lago in Palm Beach, Florida, Nov. 14, 2024.
Carlos Barria | Reuters
X’s new terms of service, which took effect Nov. 15, are driving some users off Elon Musk’s microblogging platform.
The new terms include expansive permissions requiring users to allow the company to use their data to train X’s artificial intelligence models, while also making users liable for as much as $15,000 in damages if they access an unusually large volume of content on the platform.
The terms are prompting some longtime users of the service, both celebrities and everyday people, to post that they are taking their content to other platforms.
“With the recent and upcoming changes to the terms of service — and the return of volatile figures — I find myself at a crossroads, facing a direction I can no longer fully support,” actress Gabrielle Union posted on X the same day the new terms took effect, while announcing she would be leaving the platform.
“I’m going to start winding down my Twitter account,” a user with the handle @mplsFietser said in a post. “The changes to the terms of service are the final nail in the coffin for me.”
It’s unclear just how many users have left X due specifically to the company’s new terms of service, but since the start of November, many social media users have flocked to Bluesky, a microblogging startup whose origins stem from Twitter, the former name for X. Some users with new Bluesky accounts have posted that they moved to the service due to Musk and his support for President-elect Donald Trump.
Bluesky’s U.S. mobile app downloads have skyrocketed 651% since the start of November, according to estimates from Sensor Tower. In the same period, X and Meta’s Threads are up 20% and 42%, respectively.
X and Threads have much larger monthly user bases. Although Musk said in May that X has 600 million monthly users, market intelligence firm Sensor Tower estimates X had 318 million monthly users as of October. That same month, Meta said Threads had nearly 275 million monthly users. Bluesky told CNBC on Thursday it had reached 21 million total users this week.
Here are some of the noteworthy changes in X’s new service terms and how they compare with those of rivals Bluesky and Threads.
Artificial intelligence training
X has come under heightened scrutiny because of its new terms, which say that any content on the service can be used royalty-free to train the company’s artificial intelligence large language models, including its Grok chatbot.
“You agree that this license includes the right for us to (i) provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type,” X’s terms say.
Additionally, any “user interactions, inputs and results” shared with Grok can be used for what it calls “training and fine-tuning purposes,” according to the Grok section of the X app and website. This specific function, though, can be turned off manually.
X’s terms do not specify whether users’ private messages can be used to train its AI models, and the company did not respond to a request for comment.
“You should only provide Content that you are comfortable sharing with others,” read a portion of X’s terms of service agreement.
Though X’s new terms may be expansive, Meta’s policies aren’t that different.
The maker of Threads uses “information shared on Meta’s Products and services” to get its training data, according to the company’s Privacy Center. This includes “posts or photos and their captions.” There is also no direct way for users outside of the European Union to opt out of Meta’s AI training. Meta keeps training data “for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely and efficiently,” according to its Privacy Center.
Under Meta’s policy, private messages with friends or family aren’t used to train AI unless one of the users in a chat chooses to share it with the models, which can include Meta AI and AI Studio.
Bluesky, which has seen a user growth surge since Election Day, doesn’t do any generative AI training.
“We do not use any of your content to train generative AI, and have no intention of doing so,” Bluesky said in a post on its platform Friday, confirming the same to CNBC as well.
Liquidated damages
Another unusual aspect of X’s new terms is its “liquidated damages” clause. The terms state that if users request, view or access more than 1 million posts – including replies, videos, images and others – in any 24-hour period, they are liable for damages of $15,000.
While most individual users won’t easily approach that threshold, the clause is concerning for some, including digital researchers, who rely on analyzing large numbers of public posts from services like X to do their work.
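Some back-of-the-envelope arithmetic, sketched below, shows why the cap is effectively unreachable through ordinary browsing but easy for automated collection to hit; the crawler rate used here is purely hypothetical.

```python
# Rough arithmetic on X's stated threshold: 1 million posts accessed in any
# 24-hour window. The crawler rate below is a hypothetical example.
THRESHOLD_POSTS = 1_000_000
WINDOW_SECONDS = 24 * 3600

sustained_rate = THRESHOLD_POSTS / WINDOW_SECONDS
print(f"Hitting the cap requires ~{sustained_rate:.1f} posts per second, nonstop")  # ~11.6

crawler_rate = 100  # posts per second, assumed for a research crawler
hours_to_cap = THRESHOLD_POSTS / crawler_rate / 3600
print(f"A crawler at {crawler_rate} posts/second crosses the cap in ~{hours_to_cap:.1f} hours")  # ~2.8
```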
X’s new terms of service are a “disturbing move that the company should reverse,” said Alex Abdo, litigation director for the Knight First Amendment Institute at Columbia University, in an October statement.
“The public relies on journalists and researchers to understand whether and how the platforms are shaping public discourse, affecting our elections, and warping our relationships,” Abdo wrote. “One effect of X Corp.’s new terms of service will be to stifle that research when we need it most.”
Neither Threads nor Bluesky has anything similar to X’s liquidated damages clause.
Meta and X did not respond to requests for comment.
A recent Chinese cyber-espionage attack inside the nation’s major telecom networks, which may have reached as high as the communications of President-elect Donald Trump and Vice President-elect J.D. Vance, was described this week by one U.S. senator as “far and away the most serious telecom hack in our history.”
The U.S. has yet to determine the full scope of what China accomplished, or whether its spies are still inside U.S. communication networks.
“The barn door is still wide open, or mostly open,” Senator Mark Warner of Virginia, chairman of the Senate Intelligence Committee, told The New York Times on Thursday.
The revelations highlight the rising cyberthreats tied to geopolitics and nation-state rivals of the U.S. But inside the federal government, there’s disagreement on how to fight back, with some advocates calling for the creation of an independent federal U.S. Cyber Force. In September, the Department of Defense formally appealed to Congress, urging lawmakers to reject that approach.
One of the most prominent voices advocating for the new branch is the Foundation for Defense of Democracies (FDD), a national security think tank, but the issue extends far beyond any single group. In June, defense committees in both the House and Senate approved measures calling for independent evaluations of the feasibility of creating a separate cyber branch as part of the annual defense policy deliberations.
Drawing on insights from more than 75 active-duty and retired military officers experienced in cyber operations, the FDD’s 40-page report highlights what it says are chronic structural issues within the U.S. Cyber Command (CYBERCOM), including fragmented recruitment and training practices across the Army, Navy, Air Force, and Marines.
“America’s cyber force generation system is clearly broken,” the FDD wrote, citing comments made in 2023 by then-leader of U.S. Cyber Command, Army General Paul Nakasone, who took over the role in 2018 and described current U.S. military cyber organization as unsustainable: “All options are on the table, except the status quo,” Nakasone had said.
Concern within Congress and a changing White House
The FDD analysis points to “deep concerns” that have existed within Congress for a decade — among members of both parties — about the military being able to staff up to successfully defend cyberspace. Talent shortages, inconsistent training and misaligned missions are undermining CYBERCOM’s capacity to respond effectively to complex cyber threats, it says. Creating a dedicated branch, proponents argue, would better position the U.S. in cyberspace. The Pentagon, however, warns that such a move could disrupt coordination, increase fragmentation and ultimately weaken U.S. cyber readiness.
As the Pentagon doubles down on its resistance to the establishment of a separate U.S. Cyber Force, the incoming Trump administration could play a significant role in shaping whether America leans toward a centralized cyber strategy or reinforces the current integrated framework that emphasizes cross-branch coordination.
Trump, known for his assertive national security measures, released a 2018 National Cyber Strategy that emphasized embedding cyber capabilities across all elements of national power and focused on cross-departmental coordination and public-private partnerships rather than creating a standalone cyber entity. At that time, the Trump administration emphasized centralizing civilian cybersecurity efforts under the Department of Homeland Security while tasking the Department of Defense with addressing more complex, defense-specific cyber threats. Trump’s pick for Secretary of Homeland Security, South Dakota Governor Kristi Noem, has talked up her, and her state’s, focus on cybersecurity.
Former Trump officials believe that a second Trump administration will take an aggressive stance on national security, fill gaps at the Energy Department, and reduce regulatory burdens on the private sector. They anticipate a stronger focus on offensive cyber operations, tailored threat vulnerability protection, and greater coordination between state and local governments. Changes will be coming at the top of the Cybersecurity and Infrastructure Security Agency, which was created during Trump’s first term and where current director Jen Easterly has announced she will leave once Trump is inaugurated.
Cyber Command 2.0 and the U.S. military
John Cohen, executive director of the Program for Countering Hybrid Threats at the Center for Internet Security, is among those who share the Pentagon’s concerns. “We can no longer afford to operate in stovepipes,” Cohen said, warning that a separate cyber branch could worsen existing silos and further isolate cyber operations from other critical military efforts.
Cohen emphasized that adversaries like China and Russia employ cyber tactics as part of broader, integrated strategies that include economic, physical, and psychological components. To counter such threats, he argued, the U.S. needs a cohesive approach across its military branches. “Confronting that requires our military to adapt to the changing battlespace in a consistent way,” he said.
In 2018, CYBERCOM certified its Cyber Mission Force teams as fully staffed, but the FDD and others have expressed concerns that personnel were shifted between teams to meet staffing goals — a move they say masked deeper structural problems. Nakasone has called for a CYBERCOM 2.0, saying in comments early this year, “How do we think about training differently? How do we think about personnel differently?” He added that a major issue has been the approach to military staffing within the command.
Austin Berglas, a former head of the FBI’s cyber program in New York who worked on consolidation efforts inside the Bureau, believes a separate cyber force could enhance U.S. capabilities by centralizing resources and priorities. “When I first took over the [FBI] cyber program … the assets were scattered,” said Berglas, who is now the global head of professional services at supply chain cyber defense company BlueVoyant. Centralization brought focus and efficiency to the FBI’s cyber efforts, he said, and it’s a model he believes would benefit the military’s cyber efforts as well. “Cyber is a different beast,” Berglas said, emphasizing the need for specialized training, advancement, and resource allocation that isn’t diluted by competing military priorities.
Berglas also pointed to the ongoing “cyber arms race” with adversaries like China, Russia, Iran, and North Korea. He warned that without a dedicated force, the U.S. risks falling behind as these nations expand their offensive cyber capabilities and exploit vulnerabilities across critical infrastructure.
Nakasone said in his comments earlier this year that a lot has changed since 2013, when U.S. Cyber Command began building out its Cyber Mission Force to take on missions like counterterrorism and to combat financial cybercrime coming from Iran. “Completely different world in which we live in today,” he said, citing the threats from China and Russia.
Brandon Wales, a former executive director of CISA, said there is a need to bolster U.S. cyber capabilities, but he cautioned against major structural changes during a period of heightened global threats.
“A reorganization of this scale is obviously going to be disruptive and will take time,” said Wales, who is now vice president of cybersecurity strategy at SentinelOne.
He cited China’s preparations for a potential conflict over Taiwan as a reason the U.S. military needs to maintain readiness. Rather than creating a new branch, Wales supports initiatives like Cyber Command 2.0 and its aim to enhance coordination and capabilities within the existing structure. “Large reorganizations should always be the last resort because of how disruptive they are,” he said.
Wales says it’s important to ensure any structural changes do not undermine integration across military branches and recognize that coordination across existing branches is critical to addressing the complex, multidomain threats posed by U.S. adversaries. “You should not always assume that centralization solves all of your problems,” he said. “We need to enhance our capabilities, both defensively and offensively. This isn’t about one solution; it’s about ensuring we can quickly see, stop, disrupt, and prevent threats from hitting our critical infrastructure and systems,” he added.