
Mark Zuckerberg, chief executive officer of Meta Platforms Inc., arrives at federal court in San Jose, California, US, on Tuesday, Dec. 20, 2022.

David Paul Morris | Bloomberg | Getty Images

Toward the end of 2022, engineers on Meta’s team combating misinformation were ready to debut a key fact-checking tool that had taken half a year to build. The company needed all the reputational help it could get after a string of crises had badly damaged the credibility of Facebook and Instagram and given regulators additional ammunition to bear down on the platforms.

The new product would let third-party fact-checkers like The Associated Press and Reuters, as well as credible experts, add comments at the top of questionable articles on Facebook as a way to verify their trustworthiness.


But CEO Mark Zuckerberg’s commitment to make 2023 the “year of efficiency” spelled the end of the ambitious effort, according to three people familiar with the matter who asked not to be named due to confidentiality agreements.

Over multiple rounds of layoffs, Meta announced plans to eliminate roughly 21,000 jobs, a mass downsizing that had an outsized effect on the company’s trust and safety work. The fact-checking tool, which had initial buy-in from executives and was still in a testing phase early this year, was completely dissolved, the sources said.

A Meta spokesperson did not respond to questions related to job cuts in specific areas and said in an emailed statement that “we remain focused on advancing our industry-leading integrity efforts and continue to invest in teams and technologies to protect our community.”

Across the tech industry, as companies tighten their belts and impose hefty layoffs to address macroeconomic pressures and slowing revenue growth, wide swaths of people tasked with protecting the internet’s most-populous playgrounds are being shown the exits. The cuts come at a time of increased cyberbullying, which has been linked to higher rates of adolescent self-harm, and as the spread of misinformation and violent content collides with the exploding use of artificial intelligence.

In their most recent earnings calls, tech executives highlighted their commitment to “do more with less,” boosting productivity with fewer resources. Meta, Alphabet, Amazon and Microsoft have all cut thousands of jobs after staffing up rapidly before and during the Covid pandemic. Microsoft CEO Satya Nadella recently said his company would suspend salary increases for full-time employees.

The slashing of teams tasked with trust and safety and AI ethics is a sign of how far companies are willing to go to meet Wall Street demands for efficiency, even with the 2024 U.S. election season — and the online chaos expected to ensue — just months away. AI ethics and trust and safety are separate departments within tech companies, but they share the goal of limiting the real-life harm that can stem from the use of their products and services.

“Abuse actors are usually ahead of the game; it’s cat and mouse,” said Arjun Narayan, who previously served as a trust and safety lead at Google and TikTok parent ByteDance, and is now head of trust and safety at news aggregator app SmartNews. “You’re always playing catch-up.”

For now, tech companies seem to view both trust and safety and AI ethics as cost centers.

Twitter effectively disbanded its ethical AI team in November and laid off all but one of its members, along with 15% of its trust and safety department, according to reports. In February, Google cut about one-third of a unit that aims to protect society from misinformation, radicalization, toxicity and censorship. Meta reportedly ended the contracts of about 200 content moderators in early January. It also laid off at least 16 members of Instagram’s well-being group and more than 100 positions related to trust, integrity and responsibility, according to documents filed with the U.S. Department of Labor.

Andy Jassy, chief executive officer of Amazon.com Inc., during the GeekWire Summit in Seattle, Washington, U.S., on Tuesday, Oct. 5, 2021.

David Ryder | Bloomberg | Getty Images

In March, Amazon downsized its responsible AI team and Microsoft laid off its entire ethics and society team – the second of two layoff rounds that reportedly took the team from 30 members to zero. Amazon didn’t respond to a request for comment, and Microsoft pointed to a blog post regarding its job cuts.

At Amazon’s game streaming unit Twitch, staffers learned of their fate in March from an ill-timed internal post from Amazon CEO Andy Jassy.

Jassy’s announcement that 9,000 jobs would be cut companywide included 400 employees at Twitch. Of those, about 50 were part of the team responsible for monitoring abusive, illegal or harmful behavior, according to people familiar with the matter who spoke on the condition of anonymity because the details were private.

The trust and safety team, or T&S as it’s known internally, was losing about 15% of its staff just as content moderation was seemingly more important than ever.

In an email to employees, Twitch CEO Dan Clancy didn’t call out the T&S department specifically, but he confirmed the broader cuts among his staffers, who had just learned about the layoffs from Jassy’s post on a message board.

“I’m disappointed to share the news this way before we’re able to communicate directly to those who will be impacted,” Clancy wrote in the email, which was viewed by CNBC.

‘Hard to win back consumer trust’

A current member of Twitch’s T&S team said the remaining employees in the unit are feeling “whiplash” and worry about a potential second round of layoffs. The person said the cuts caused a big hit to institutional knowledge, adding that there was a significant reduction in Twitch’s law enforcement response team, which deals with physical threats, violence, terrorism groups and self-harm.

A Twitch spokesperson did not provide a comment for this story, instead directing CNBC to a blog post from March announcing the layoffs. The post didn’t include any mention of trust and safety or content moderation.

Narayan of SmartNews said that with a lack of investment in safety at the major platforms, companies lose their ability to scale in a way that keeps pace with malicious activity. As more problematic content spreads, there’s an “erosion of trust,” he said.

“In the long run, it’s really hard to win back consumer trust,” Narayan added.

While layoffs at Meta and Amazon followed demands from investors and a dramatic slump in ad revenue and share prices, Twitter’s cuts resulted from a change in ownership.

Almost immediately after Elon Musk closed his $44 billion purchase of Twitter in October, he began eliminating thousands of jobs. That included all but one member of the company’s 17-person AI ethics team, according to Rumman Chowdhury, who served as director of Twitter’s machine learning ethics, transparency and accountability team. The last remaining person ended up quitting.

The team members learned of their status when their laptops were turned off remotely, Chowdhury said. Hours later, they received email notifications. 

“I had just recently gotten head count to build out my AI red team, so these would be the people who would adversarially hack our models from an ethical perspective and try to do that work,” Chowdhury told CNBC. She added, “It really just felt like the rug was pulled as my team was getting into our stride.”

Part of that stride involved working on “algorithmic amplification monitoring,” Chowdhury said, or tracking elections and political parties to see if “content was being amplified in a way that it shouldn’t.”

Chowdhury referenced an initiative in July 2021, when Twitter’s AI ethics team led what was billed as the industry’s first-ever algorithmic bias bounty competition. The company invited outsiders to audit the platform for bias, and made the results public. 

Chowdhury said she worries that now Musk “is actively seeking to undo all the work we have done.”

“There is no internal accountability,” she said. “We served two of the product teams to make sure that what’s happening behind the scenes was serving the people on the platform equitably.”

Twitter did not provide a comment for this story.


Advertisers are pulling back in places where they see increased reputational risk.

According to Sensor Tower, six of the top 10 categories of U.S. advertisers on Twitter spent much less in the first quarter of this year compared with a year earlier, with that group collectively slashing its spending by 53%. The site has recently come under fire for allowing the spread of violent images and videos.

The rapid rise in popularity of chatbots is only complicating matters. The types of AI models created by OpenAI, the company behind ChatGPT, and others make it easier to populate fake accounts with content. Researchers from the Allen Institute for AI, Princeton University and Georgia Tech ran tests using ChatGPT’s application programming interface (API) and found up to a sixfold increase in toxicity, depending on which type of functional identity, such as a customer service agent or virtual assistant, a company assigned to the chatbot.

Regulators are paying close attention to AI’s growing influence and the simultaneous downsizing of groups dedicated to AI ethics and trust and safety. Michael Atleson, an attorney at the Federal Trade Commission’s division of advertising practices, called out the paradox in a blog post earlier this month.

“Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering,” Atleson wrote. “If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look.” 

Meta as a bellwether

For years, as the tech industry was enjoying an extended bull market and the top internet platforms were flush with cash, Meta was viewed by many experts as a leader in prioritizing ethics and safety.

The company spent years hiring trust and safety workers, including many with academic backgrounds in the social sciences, to help avoid a repeat of the 2016 presidential election cycle, when disinformation campaigns, often operated by foreign actors, ran rampant on Facebook. The embarrassment culminated in the 2018 Cambridge Analytica scandal, which exposed how a third party was illicitly using personal data from Facebook.

But following a brutal 2022 for Meta’s ad business — and its stock price — Zuckerberg went into cutting mode, winning plaudits along the way from investors who had complained of the company’s bloat.

Beyond the fact-checking project, the layoffs hit researchers, engineers, user design experts and others who worked on issues pertaining to societal concerns. The company’s dedicated team focused on combating misinformation suffered numerous losses, four former Meta employees said.

Prior to Meta’s first round of layoffs in November, the company had already taken steps to consolidate members of its integrity team into a single unit. In September, Meta merged its central integrity team, which handles social matters, with its business integrity group tasked with addressing ads and business-related issues like spam and fake accounts, ex-employees said.

In the ensuing months, as broader cuts swept across the company, former trust and safety employees described working under the fear of looming layoffs and for managers who sometimes failed to see how their work affected Meta’s bottom line.

For example, lower-effort tasks like improving spam filters could get clearance over long-term safety projects that would entail policy changes, such as initiatives involving misinformation. Employees felt incentivized to take on more manageable tasks because they could show results in their six-month performance reviews, ex-staffers said.

Ravi Iyer, a former Meta project manager who left the company before the layoffs, said that the cuts across content moderation are less bothersome than the fact that many of the people he knows who lost their jobs were performing critical roles on design and policy changes.

“I don’t think we should reflexively think that having fewer trust and safety workers means platforms will necessarily be worse,” said Iyer, who’s now the managing director of the Psychology of Technology Institute at University of Southern California’s Neely Center. “However, many of the people I’ve seen laid off are amongst the most thoughtful in rethinking the fundamental designs of these platforms, and if platforms are not going to invest in reconsidering design choices that have been proven to be harmful — then yes, we should all be worried.”

A Meta spokesperson previously downplayed the significance of the job cuts in the misinformation unit, tweeting that the “team has been integrated into the broader content integrity team, which is substantially larger and focused on integrity work across the company.”

Still, sources familiar with the matter said that following the layoffs, the company has fewer people working on misinformation issues.


For those who’ve gained expertise in AI ethics, trust and safety and related content moderation, the employment picture looks grim.

Newly unemployed workers in those fields from across the social media landscape told CNBC that there aren’t many job openings in their area of specialization as companies continue to trim costs. One former Meta employee said that after interviewing for trust and safety roles at Microsoft and Google, those positions were suddenly axed.

An ex-Meta staffer said the company’s retreat from trust and safety is likely to filter down to smaller peers and startups that appear to be “following Meta in terms of their layoff strategy.”

Chowdhury, Twitter’s former AI ethics lead, said these types of jobs are a natural place for cuts because “they’re not seen as driving profit in product.”

“My perspective is that it’s completely the wrong framing,” she said. “But it’s hard to demonstrate value when your value is that you’re not being sued or someone is not being harmed. We don’t have a shiny widget or a fancy model at the end of what we do; what we have is a community that’s safe and protected. That is a long-term financial benefit, but in the quarter over quarter, it’s really hard to measure what that means.” 

At Twitch, the T&S team included people who knew where to look to spot dangerous activity, according to a former employee in the group. That’s particularly important in gaming, which is “its own unique beast,” the person said.

Now, there are fewer people checking in on the “dark, scary places” where offenders hide and abusive activity gets groomed, the ex-employee added.

More importantly, nobody knows how bad it can get.

WATCH: CNBC’s interview with Elon Musk



OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it


OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a source familiar with the situation confirmed to CNBC on Friday.

The person, who spoke on condition of anonymity, said that some of the team members are being reassigned to multiple other teams within the company.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

The news was first reported by Wired.

OpenAI’s Superalignment team, announced last year, focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

Sutskever and Leike on Tuesday announced their departures on X, hours apart, but on Friday, Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike did not immediately respond to a request for comment, and OpenAI did not immediately provide a comment.

The high-profile departures come months after OpenAI went through a leadership crisis involving co-founder and CEO Sam Altman.

In November, OpenAI’s board ousted Altman, claiming in a statement that Altman had not been “consistently candid in his communications with the board.”

The issue seemed to grow more complex each following day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations – or threats of resignations – including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but no longer in his capacity as a board member. Adam D’Angelo, who had also voted to oust Altman, remained on the board.

When Altman was asked about Sutskever’s status on a Zoom call with reporters in March, he said there were no updates to share. “I love Ilya… I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

On Tuesday, Altman shared his thoughts on Sutskever’s departure.

“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.” Altman said research director Jakub Pachocki, who has been at OpenAI since 2017, will replace Sutskever as chief scientist.

News of Sutskever’s and Leike’s departures, and the dissolution of the superalignment team, come days after OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface, the company’s latest effort to expand the use of its popular chatbot.

The update brings the GPT-4 model to everyone, including OpenAI’s free users, technology chief Mira Murati said Monday in a livestreamed event. She added that the new model, GPT-4o, is “much faster,” with improved capabilities in text, video and audio.

OpenAI said it eventually plans to allow users to video chat with ChatGPT. “This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said.


BlackRock funds are ‘crushing shareholder rights,’ says activist Boaz Weinstein


Boaz Weinstein, founder and chief investment officer of Saba Capital Management, during the Bloomberg Invest event in New York, US, on Wednesday, June 7, 2023. 

Jeenah Moon | Bloomberg | Getty Images

Boaz Weinstein, the hedge fund investor on the winning side of JPMorgan Chase’s $6.2 billion “London Whale” trading loss in 2012, is now taking on index fund giant BlackRock.

On Friday, Weinstein’s Saba Capital detailed its plans, in a presentation seen by CNBC, to push for change at 10 closed-end BlackRock funds that trade at a significant discount to the value of their underlying assets compared with their peers. Saba says the underperformance is a direct result of BlackRock’s management.

The hedge fund wants board control at three BlackRock funds and a minority slate at seven others. It also seeks to oust BlackRock as the manager of six of those 10 funds.

“In the last three years, nine of the ten funds that we’re even talking about have lost money for investors,” Weinstein said on CNBC’s “Squawk Box” earlier this week.

At the heart of Saba’s “Hey BlackRock” campaign is an argument around governance. Saba says in its presentation that BlackRock runs those closed-end funds the “exact opposite” way it expects companies to run themselves.

BlackRock “is talking out of both sides of its mouth” by doing this, Saba says. That’s cost retail investors $1.4 billion in discounts, by Saba’s math, on top of the management fees it charges.

BlackRock, Saba says in the deck, “considers itself a leader in governance, but is crushing shareholder rights.” At certain BlackRock funds, for example, if an investor doesn’t submit their vote in a shareholder meeting, their shares will automatically go to support BlackRock. Saba is suing to change that.

A BlackRock spokesperson called that assertion “very misleading” and said those funds “simply require that most shareholders vote affirmatively in favor.”

The index fund manager’s rebuttal, “Defend Your Fund,” describes Saba as an activist hedge fund seeking to “enrich itself.”

The problem and the solution

Closed-end funds have a finite number of shares. Investors who want to sell their positions have to find an interested buyer, which means they may not be able to sell at a price that reflects the value of a fund’s holdings.

In open-ended funds, by contrast, an investor can redeem its shares with the manager in exchange for cash. That’s how many index funds are structured, like those that track the S&P 500.

Saba says it has a solution. BlackRock should buy back shares from investors at the price they’re worth, not where they currently trade.
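The mechanics of that gap can be made concrete with a toy calculation (the figures below are invented for illustration and are not from Saba’s deck or BlackRock’s filings):

```python
# Hypothetical illustration of a closed-end fund's discount to net asset
# value (NAV). All numbers are made up for the example.

def discount_to_nav(market_price: float, nav_per_share: float) -> float:
    """Return the gap between market price and NAV as a fraction of NAV.

    Negative values are a discount; positive values are a premium.
    """
    return (market_price - nav_per_share) / nav_per_share

# A fund holding $20.00 of assets per share but trading at $17.00
# sits at a 15% discount:
print(f"{discount_to_nav(17.00, 20.00):.1%}")
```

Under those assumed numbers, a buyback at NAV would return the full $20.00 per share, closing a 15% gap that a seller on the open market would otherwise forfeit.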

“Investors who want to come out come out, and those who want to stay will stay for a hundred years, if they want,” Weinstein told CNBC earlier this week.

Weinstein, who founded Saba in 2009, made a fortune in 2012, when he noticed that a relatively obscure credit derivatives index was behaving abnormally. Saba began buying up the underlying derivatives that, unbeknownst to him, were being sold by JPMorgan’s Bruno Iksil. For a time, Saba took tremendous losses on the position, until Iksil’s bet turned sour, costing JPMorgan billions and netting Saba huge profits.

Saba said in its investor deck that the changes at BlackRock could take the form of a tender offer or a restructuring. The presentation noted that BlackRock previously cast its shares in support of a tender at another closed-end fund where an activist was pushing for similar change.

At the worst-performing funds relative to their peer group, Saba is seeking shareholder approval to fire the manager. In total, Saba wants new management at six funds, including the BlackRock California Municipal Income Trust (BFZ), the BlackRock Innovation and Growth Term Trust (BIGZ) and the BlackRock Health Sciences Term Trust (BMEZ).

“BlackRock is failing as a manager by delivering subpar performance compared to relevant benchmarks and worst-in-class corporate governance,” the deck says.

If Saba were to win shareholder approval to fire BlackRock as manager at the six funds, the newly constituted boards would then run a review process over at least six months. Saba says that in addition to offering liquidity to investors, its board nominees would push for reduced fees and for other unspecified governance fixes.

A BlackRock spokesperson told CNBC that the firm has historically taken steps to improve returns at closed-end funds when necessary.

“BlackRock’s closed-end funds welcome constructive engagement with thoughtful shareholders who act in good faith with the shared goal of enhancing long-term value for all,” the spokesperson said.

Weinstein said Saba has run similar campaigns at roughly 60 closed-end funds in the past decade but has taken over a fund’s management only twice. The hedge fund sued BlackRock last year to remove the so-called “vote-stripping provision” at certain funds and filed another lawsuit earlier this year.

BlackRock has pitched shareholders via mailings and advertisements. “Your dependable, income-paying investment,” BlackRock has told investors, is under threat from Saba.

Saba plans to host a webinar for shareholders on Monday but says BlackRock has refused to provide the shareholder list for several of the funds. The BlackRock spokesperson said that it has “always acted in accordance with all applicable laws” when providing shareholder information, and that it “never blocked Saba’s access to shareholders.”

“What we want is for shareholders, which we are the largest of but not in any way the majority, to make that $1.4 billion, which can be done at the press of a button,” Weinstein told CNBC earlier this week.

WATCH: CNBC’s full interview with Saba Capital’s Boaz Weinstein



As Tesla layoffs continue, here are 600 jobs the company cut in California


As part of Tesla’s massive restructuring, the electric-vehicle maker notified the California Employment Development Department this week that it’s cutting approximately 600 more employees at its manufacturing facilities and engineering offices in Fremont and Palo Alto.

The latest round of layoffs eliminated roles across the board — from entry-level positions to directors — and hit an array of departments, impacting factory workers, software developers and robotics engineers.

The cuts were reported in a Worker Adjustment and Retraining Notification, or WARN, Act filing that CNBC obtained through a public records request.

Facing both weakening demand for Tesla electric vehicles and increased competition, the company has been slashing its headcount since at least January. CEO Elon Musk told employees in a memo in April that the company would cut more than 10% of its global workforce, which totaled 140,473 employees at the end of 2023.

Previous filings revealed that Tesla would cut more than 6,300 jobs across California; Austin, Texas; and Buffalo, New York.

Musk said on Tesla’s quarterly earnings call on April 23 that the company had built up a 25% to 30% “inefficiency” over the past several years, implying the layoffs underway could impact tens of thousands more employees than the 10% number would suggest.

According to the WARN filing, the 378 job cuts in Fremont, home to Tesla’s first U.S. manufacturing plant, included people involved in staffing and running vehicle assembly. There were 65 cuts at the company’s Kato Rd. battery development center.

Tesla didn’t respond to a request for comment.

Among the highest-level roles eliminated in Fremont were an environmental health and safety director and a user experience design director.

In Palo Alto, home to the company’s engineering headquarters, 233 more employees, including two directors of technical programs, lost their jobs.

Tesla has also terminated a majority of employees involved in designing and improving apps made for customers and employees, according to two former employees directly familiar with the matter. The WARN filing shows that to be the case, with many cut from the team at Tesla’s Hanover Street location in Palo Alto.

Tesla faces reduced demand for cars it makes in Fremont, including its older Model S and X vehicles and Model 3 sedan. Total deliveries dropped in the first quarter from a year earlier, and Tesla reported its steepest year-over-year revenue decline since 2012.

An onslaught of competition, especially in China, has continued to pressure Tesla’s sales in the second quarter. Xiaomi and Nio have each launched new EV models, which undercut the price of Tesla’s most popular vehicles.

Tesla’s stock price has tumbled about 30% so far this year, while the S&P 500 is up 11%.

Musk has been trying to convince investors not to focus on vehicle sales and instead to back Tesla’s potential to finally deliver self-driving software, a robotaxi, and a “sentient” humanoid robot. Musk and Tesla have long promised customers self-driving software that would turn their existing EVs into robotaxis, but the company’s systems still require constant human supervision.

Other recent job cuts at Tesla included the team responsible for building out the Supercharger, or electric-vehicle fast-charging network, in the U.S.

Tesla disclosed plans in its annual filing for 2023 to grow and optimize its charging infrastructure “to ensure cost effectiveness and customer satisfaction.” Tesla said in the filing that it needed to expand its “network in order to ensure adequate availability to meet customer demands,” after other auto companies announced plans to adopt the North American Charging Standard.

Since cutting most of its Supercharger team, Tesla has reportedly started to rehire at least some members, a move reminiscent of the job cuts Musk made at Twitter after he bought the company and later rebranded it as X. Musk told CNBC’s David Faber last year that he wanted to rehire some of those he let go.

