People using their mobile phones outside the offices of Meta, the parent company of Facebook and Instagram, in King’s Cross, London.

Joshua Bratt | Pa Images | Getty Images

Lauren Wagner knows a lot about disinformation. Heading into the 2020 U.S. presidential election, she worked at Facebook, focusing on information integrity and overseeing products designed to make sure content was moderated and fact-checked.

She can’t believe what she’s seeing now. Since war erupted last month between Israel and Hamas, the constant deluge of misinformation and violent content spreading across the internet is hard for her to comprehend. Wagner left Facebook parent Meta last year, and her work in trust and safety feels like it was from a prior era.

“When you’re in a situation where there’s such a large volume of visual content, how do you even start managing that when it’s like long video clips and there’s multiple points of view?” Wagner said. “This idea of live-streaming terrorism, essentially at such a deep and in-depth scale, I don’t know how you manage that.”

The problem is even more pronounced because Meta, Google parent Alphabet, and X, formerly Twitter, have all eliminated jobs tied to content moderation and trust and safety as part of broader cost-cutting measures that began late last year and continued through 2023. Now, as people post and share out-of-context videos of previous wars, fabricated audio in news clips, and graphic videos of terrorist acts, the world’s most trafficked websites are struggling to keep up, experts have noted.

As the founder of a new venture capital firm, Radium Ventures, Wagner is in the midst of raising her first fund dedicated solely to startup founders working on trust and safety technologies. She said many more platforms that think they are “fairly innocuous” are seeing the need to act.

“Hopefully this is shining a light on the fact that if you house user-generated content, there’s an opportunity for misinformation, for charged information or potentially damaging information to spread,” Wagner said.

In addition to the traditional social networks, the highly polarized nature of the Israel-Hamas war affects internet platforms that weren’t typically known for hosting political discussions but now have to take precautionary measures. Popular online messaging and discussion channels such as Discord and Telegram could be exploited by terrorist groups and other bad actors who are increasingly using multiple communication services to create and conduct their propaganda campaigns.

A Discord spokesperson declined to comment. Telegram didn’t respond to a request for comment.

A demonstrator places flowers on white-shrouded body bags representing victims in the Israel-Hamas conflict, in front of the White House in Washington, DC, on November 15, 2023.

Mandel Ngan | AFP | Getty Images

On kids gaming site Roblox, thousands of users recently attended pro-Palestinian protests held within the virtual world. That has required the company to closely monitor for posts that violate its community standards, a Roblox spokesperson told CNBC in a statement.

Roblox has thousands of moderators and “automated detection tools in place to monitor,” the spokesperson said, adding that the site “allows for expressions of solidarity,” but does “not allow for content that endorses or condones violence, promotes terrorism or hatred against individuals or groups, or calls for supporting a specific political party.”

When it comes to looking for talent in the trust and safety space, there’s no shortage. Many of Wagner’s former colleagues at Meta lost their jobs and remain dedicated to the cause.

One of her first investments was in a startup called Cove, which was founded by former Meta trust and safety staffers. Cove is among a handful of emerging companies developing technology that they can sell to organizations, following an established enterprise software model. Other Meta veterans have recently started Cinder and Sero AI to go after the same general market.

“It adds some more coherence to the information ecosystem,” Wagner, who is also a senior advisor at the Responsible Innovation Labs nonprofit, said regarding the new crop of trust and safety tools. “They provide some level of standardized processes across companies where they can access tools and guidelines to be able to manage user-generated content effectively.”

‘Brilliant people out there’

It’s not just ex-Meta staffers who recognize the opportunity.

The founding team of startup TrustLab came from companies including Google, Reddit and TikTok parent ByteDance. And the founders of Intrinsic previously worked on trust and safety-related issues at Apple and Discord.

For the TrustCon conference in July, tech policy wonks and other industry experts headed to San Francisco to discuss the latest hot topics in online trust and safety, including their concerns about the potential societal effects of layoffs across the industry.

Several startups showcased their products in the exhibition hall, promoting their services, talking to potential clients and recruiting talent. ActiveFence, which describes itself as a “leader in providing Trust & Safety solutions to protect online platforms and their users from malicious behavior and content,” had a booth at the conference. So did Checkstep, a content moderation platform.

Cove also had an exhibit at the event.

“I think the cost-cutting has definitely obviously affected the labor markets and the hiring market,” said Cove CEO Michael Dworsky, who co-founded the company in 2021 after more than three years at Facebook. “There are a bunch of brilliant people out there that we can now hire.”

Cove has developed software to help manage a company’s content policy and review process. The management platform works alongside various content moderation systems, or classifiers, to detect issues such as harassment, so businesses can protect their users without needing expensive engineers to develop the code. The company, which counts anonymous social media apps YikYak and Sidechat as customers, says on its website that Cove is “the solution we wish we had at Meta.”

“When Facebook started really investing in trust and safety, it’s not like there were tools on the market that they could have bought,” said Cove technology chief Mason Silber, who previously spent seven years at Facebook. “They didn’t want to build, they didn’t want to become the experts. They did it more out of necessity than desire, and they built some of the most robust, trusted safety solutions in the world.”

A Meta spokesperson declined to comment for this story.

Wagner, who left Meta in mid-2022 after about two and a half years at the company, said that earlier content moderation was more manageable than it is today, particularly with the current Middle East crisis. In the past, for instance, a trust and safety team member could analyze a picture and determine whether it contained false information through a fairly routine scan, she said.

But the quantity and speed of photos and videos being uploaded, along with people’s ability to manipulate details, especially as generative AI tools become more mainstream, have created a whole new challenge.

Social media sites are now dealing with a swarm of content related to two simultaneous wars, one in the Middle East and another between Russia and Ukraine. On top of that, they have to get ready for the 2024 presidential election in less than a year. Former President Donald Trump, who is under criminal indictment in Georgia for alleged interference in the 2020 election, is the front-runner to become the Republican nominee.

Manu Aggarwal, a partner at research firm Everest Group, said trust and safety is among the fastest-growing segments of a part of the market called business process services, which includes the outsourcing of various IT-related tasks and call centers.

By 2024, Everest Group projects the overall business process services market to be about $300 billion, with trust and safety representing about $11 billion of that figure. Companies such as Accenture and Genpact, which offer outsourced trust and safety services and contract workers, currently capture the bulk of spending, primarily because Big Tech companies have been “building their own” tools, Aggarwal said.

As startups focus on selling packaged and easy-to-use technology to a wider swath of clients, Everest Group practice director Abhijnan Dasgupta estimates that spending on trust and safety tools could be between $750 million and $1 billion by the end of 2024, up from $500 million in 2023. This figure is partly dependent on whether companies adopt more AI services, thus requiring them to potentially abide by emerging AI regulations, he added.

Tech investors are circling the opportunity. Venture capital firm Accel is the lead investor in Cinder, a two-year-old startup whose founders helped build much of Meta’s internal trust and safety systems and also worked on counterterrorism efforts.

“What better team to solve this challenge than the one that played a major role in defining Facebook’s Trust and Safety operations?” Accel’s Sara Ittelson said in a press release announcing the financing in December.

Ittelson told CNBC that she expects the trust and safety technology market to grow as more platforms see the need for greater protection and as the social media market continues to fragment.

New content policy regulations have also spurred investment in the area.

The European Commission is now requiring large online platforms with big audiences in the EU to document and detail how they moderate and remove illegal and violent content on their services or face fines of up to 6% of their annual revenue.

Cinder and Cove are promoting their technologies as ways that online businesses can streamline and document their content moderation procedures to comply with the EU’s new regulations, called the Digital Services Act.

‘Frankenstein’s monster’

In the absence of specialized tech tools, Cove’s Dworsky said, many companies have tried to customize Zendesk, which sells customer support software, and Google Sheets to capture their trust and safety policies. That can result in a “very manual, unscalable approach,” he said, describing the process for some companies as “rebuilding and building a Frankenstein’s monster.”

Still, industry experts know that even the most effective trust and safety technologies aren’t a panacea for a problem as big and seemingly uncontrollable as the spread of violent content and disinformation. According to a survey published last week by the Anti-Defamation League, 70% of respondents said that on social media, they’d been exposed to at least one of several types of misinformation or hate related to the Israel-Hamas conflict.

As the problem expands, companies are dealing with the constant struggle over determining what constitutes free speech and what crosses the line into unlawful, or at least unacceptable, content.

Alex Goldenberg, the lead intelligence analyst at the Network Contagion Research Institute, said that in addition to doing their best to maintain integrity on their sites, companies should be honest with their users about their content moderation efforts.

“There’s a balance that is tough to strike, but it is strikable,” he said. “One thing I would recommend is transparency at a time where third-party access and understanding to what is going on at scale on social platforms is what is needed.”

Noam Bardin, the former CEO of navigation firm Waze, now owned by Google, founded the social news-sharing and real-time messaging service Post last year. Bardin, who’s from Israel, said he’s been frustrated with the spread of misinformation and disinformation since the war began in October.

“The whole perception of what’s going on is fashioned and managed through social media, and this means there’s a tremendous influx of propaganda, disinformation, AI-generated content, bringing content from other conflicts into this conflict,” Bardin said.

Bardin said that Meta and X have struggled to manage and remove questionable posts, a challenge that’s become even greater with the influx of videos.

At Post, which is most similar to Twitter, Bardin said he’s been incorporating “all these moderation tools, automated tools and processes” since his company’s inception. He uses services from ActiveFence and OpenWeb, which are both based in Israel.

“Basically, anytime you comment or you post on our platform, it goes through it,” Bardin said regarding the trust and safety software. “It looks at it from an AI perspective to understand what it is and to rank it in terms of harm, pornography, violence, etc.”

Post is an example of the kinds of companies that trust and safety startups are focused on. Active online communities with live-chatting services have also emerged on video game sites, online marketplaces, dating apps and music streaming sites, opening them up to potentially harmful content from users.

Brian Fishman, co-founder of Cinder, said “militant organizations” rely on a network of services to spread propaganda, including platforms like Telegram, and sites such as Rumble and Vimeo, which have less advanced technology than Facebook.

Representatives from Rumble and Vimeo didn’t respond to requests for comment.

Fishman said customers are starting to see trust and safety tools as almost an extension of their cybersecurity budgets. In both cases, companies have to spend money to prevent possible disasters.

“Some of it is you’re paying for insurance, which means that you’re not getting full return on that investment every day,” Fishman said. “You’re investing a little bit more during black times, so that you got capability when you really, really need it, and this is one of those moments where companies really need it.”

WATCH: Lawmakers ask social media and AI companies to crack down on misinformation

Super Micro plans to ramp up manufacturing in Europe to capitalize on AI demand

CEO of Supermicro Charles Liang speaks during the Reuters NEXT conference in New York City, U.S., December 10, 2024. 

Mike Segar | Reuters

PARIS — Super Micro plans to increase its investment in Europe, including ramping up manufacturing of its AI servers in the region, CEO Charles Liang told CNBC in an interview that aired on Wednesday.

The company sells servers packed with Nvidia chips, which are key for training and deploying huge AI models. It has manufacturing facilities in the Netherlands but could expand to other locations.

“But because the demand in Europe is growing very fast, so I already decided, indeed, [there’s] already a plan to invest more in Europe, including manufacturing,” Liang told CNBC at the Raise Summit in Paris, France.

“The demand is global, and the demand will continue to improve in [the] next many years,” Liang added.

Liang’s comments come less than a month after Nvidia CEO Jensen Huang visited various parts of Europe, signing infrastructure deals and urging the region to ramp up its computing capacity.

Growth to be ‘strong’

Super Micro rode the growth wave after OpenAI’s ChatGPT boom boosted demand for Nvidia’s chips, which underpin big AI models. The server maker’s stock hit a record high in March 2024, but it has since fallen around 60% from that peak amid concerns about the company’s accounting and financial reporting. Those fears were assuaged in February, when the company filed its delayed financial report for its 2024 fiscal year.

In May, the company reported weaker-than-expected guidance for the current quarter, raising concerns about demand for its product.

However, Liang dismissed those fears. “Our growth rate continues to be strong, because we continue to grow our fundamental technology, and we [are] also expanding our business scope,” Liang said.

“So the room … to grow will be still very tremendous, very big.”

Apple says COO Jeff Williams will retire from company later this year

Jeff Williams, chief operating officer of Apple Inc., during the Apple Worldwide Developers Conference (WWDC) at Apple Park campus in Cupertino, California, US, on Monday, June 9, 2025.

David Paul Morris | Bloomberg | Getty Images

Apple said on Tuesday that Chief Operating Officer Jeff Williams, a 27-year company veteran, will be retiring later this year.

Current operations leader Sabih Khan will take over much of the COO role later this month, Apple said in a press release. For his remaining time with the company, Williams will continue to head up Apple’s design team, Apple Watch, and health initiatives, reporting to CEO Tim Cook.

Williams becomes the latest longtime Apple executive to step down as key employees, who were active in the company’s hyper-growth years, reach retirement age. Williams, 62, previously headed Apple’s formidable operations division, which is in charge of manufacturing millions of complicated devices like iPhones, while keeping costs down.

He also led important teams inside Apple, including the company’s fabled industrial design team, after longtime leader Jony Ive retired in 2019. When Williams retires, Apple’s design team will report to CEO Tim Cook, Apple said.

“He’s helped to create one of the most respected global supply chains in the world; launched Apple Watch and overseen its development; architected Apple’s health strategy; and led our world class team of designers with great wisdom, heart, and dedication,” Cook said in the statement.

Williams said he plans to spend more time with friends and family.

“June marked my 27th anniversary with Apple, and my 40th in the industry,” Williams said in the release.

Williams is leaving Apple at a time when its famous supply chain is under significant pressure, as the U.S. imposes tariffs on many of the countries where Apple sources its devices, and White House officials publicly pressure Apple to move more production to the U.S.

Khan was added to Apple’s executive team in 2019, taking an executive vice president title. Apple said on Tuesday that he will lead supply chain, product quality, planning, procurement, and fulfillment at Apple.

The operations leader joined Apple’s procurement group in 1995, and before that worked as an engineer and technical leader at GE Plastics. He has a bachelor’s degree from Tufts University and a master’s degree in mechanical engineering from Rensselaer Polytechnic Institute in upstate New York.

Khan has worked closely with Cook. Once, during a meeting when Cook said that a manufacturing problem was “really bad,” Khan stood up, drove to the airport and immediately booked a flight to China to fix it, according to an anecdote published in Fortune.

WATCH: Jefferies upgrades Apple to ‘Hold’

Elon Musk lashes out at Tesla bull Dan Ives over board proposals: ‘Shut up’

Elon Musk, chief executive officer of SpaceX and Tesla, attends the Viva Technology conference at the Porte de Versailles exhibition center in Paris, June 16, 2023.

Gonzalo Fuentes | Reuters

Tesla CEO Elon Musk told Wedbush Securities’ Dan Ives to “Shut up” on Tuesday after the analyst offered three recommendations to the electric vehicle company’s board in a post on X.

Ives has been one of the most bullish Tesla observers on Wall Street. With a $500 price target on the stock, he has the highest projection of any analyst tracked by FactSet.

But on Tuesday, Ives took to X with critical remarks about Musk’s political activity after the world’s richest person said over the weekend that he was creating a new political party called the America Party to challenge Republican candidates who voted for the spending bill that was backed by President Donald Trump.

Ives’ post followed a nearly 7% slide in Tesla’s stock Monday, which wiped out $68 billion in market cap. Ives called for Tesla’s board to create a new pay package for Musk that would get him 25% voting control and clear a path to merge with xAI, establish “guardrails” for how much time Musk has to spend at Tesla, and provide “oversight on political endeavors.”

Ives published a lengthier note with other analysts from his firm headlined, “The Tesla board MUST Act and Create Ground Rules For Musk; Soap Opera Must End.” The analysts said that Musk’s launching of a new political party created a “tipping point in the Tesla story,” necessitating action by the company’s board to rein in the CEO.

Still, Wedbush maintained its price target and its buy recommendation on the stock.

“Shut up, Dan,” Musk wrote in response on X, even though the first suggestion would hand the CEO the voting control he has long sought at Tesla.

In an email to CNBC, Ives wrote, “Elon has his opinion and I get it, but we stand by what the right course of action is for the Board.”

Musk’s historic 2018 CEO pay package, which had been worth around $56 billion and has since gone up in value, was voided last year by the Delaware Court of Chancery. Judge Kathaleen McCormick ruled that Tesla’s board members had lacked independence from Musk and failed to properly negotiate at arm’s length with the CEO.

Tesla has appealed that case to the Delaware state Supreme Court and is trying to determine what Musk’s next pay package should entail.

Ives isn’t the only Tesla bull to criticize Musk’s continued political activism.

Analysts at William Blair downgraded the stock to the equivalent of a hold from a buy on Monday, because of Musk’s political plans and rhetoric as well as the negative impacts that the spending bill passed by Congress could have on Tesla’s margins and EV sales.

“We expect that investors are growing tired of the distraction at a point when the business needs Musk’s attention the most and only see downside from his dip back into politics,” the analysts wrote. “We would prefer this effort to be channeled towards the robotaxi rollout at this critical juncture.”

Trump supporter James Fishback, CEO of hedge fund Azoria Partners, said Saturday that his firm postponed the listing of an exchange-traded fund, the Azoria Tesla Convexity ETF, that would invest in the EV company’s shares and options. He began his post on X saying, “Elon has gone too far.”

“I encourage the Board to meet immediately and ask Elon to clarify his political ambitions and evaluate whether they are compatible with his full-time obligations to Tesla as CEO,” Fishback wrote.

Musk said Saturday that he has formed the America Party, which he claimed will give Americans “back your freedom.” He hasn’t shared formal details, including where the party may be registered, how much funding he will provide for it and which candidates he will back.

Tesla’s stock is now down about 25% this year, badly underperforming U.S. indexes and by far the worst performance among tech’s megacaps.

Musk spent much of the first half of the year working with the Trump administration and leading an effort to massively downsize the federal government. His official work with the administration wrapped up at the end of May, and his exit preceded a public spat between Musk and Trump over the spending bill and other matters.

Musk, Tesla’s board chair Robyn Denholm and investor relations representative Travis Axelrod didn’t immediately respond to requests for comment.

WATCH: Musk-backed party would be doomed by his unfavorability, says Big Technology’s Alex Kantrowitz
