People using their mobile phones outside the offices of Meta, the parent company of Facebook and Instagram, in King’s Cross, London.
Joshua Bratt | Pa Images | Getty Images
Lauren Wagner knows a lot about disinformation. Heading into the 2020 U.S. presidential election, she worked at Facebook, focusing on information integrity and overseeing products designed to make sure content was moderated and fact-checked.
She can’t believe what she’s seeing now. Since war erupted last month between Israel and Hamas, the constant deluge of misinformation and violent content spreading across the internet is hard for her to comprehend. Wagner left Facebook parent Meta last year, and her work in trust and safety feels like it was from a prior era.
“When you’re in a situation where there’s such a large volume of visual content, how do you even start managing that when it’s like long video clips and there’s multiple points of view?” Wagner said. “This idea of live-streaming terrorism, essentially at such a deep and in-depth scale, I don’t know how you manage that.”
The problem is even more pronounced because Meta, Google parent Alphabet, and X, formerly Twitter, have all eliminated jobs tied to content moderation and trust and safety as part of broader cost-cutting measures that began late last year and continued through 2023. Now, as people post and share out-of-context videos of previous wars, fabricated audio in news clips, and graphic videos of terrorist acts, the world’s most trafficked websites are struggling to keep up, experts have noted.
As the founder of a new venture capital firm, Radium Ventures, Wagner is in the midst of raising her first fund dedicated solely to startup founders working on trust and safety technologies. She said many more platforms that think they are “fairly innocuous” are seeing the need to act.
“Hopefully this is shining a light on the fact that if you house user-generated content, there’s an opportunity for misinformation, for charged information or potentially damaging information to spread,” Wagner said.
Beyond the traditional social networks, the highly polarized Israel-Hamas war is affecting internet platforms that weren’t typically known for hosting political discussions but now have to take precautionary measures. Popular online messaging and discussion channels such as Discord and Telegram could be exploited by terrorist groups and other bad actors, who increasingly use multiple communication services to create and conduct their propaganda campaigns.
A Discord spokesperson declined to comment. Telegram didn’t respond to a request for comment.
A demonstrator places flowers on white-shrouded body bags representing victims in the Israel-Hamas conflict, in front of the White House in Washington, DC, on November 15, 2023.
Mandel Ngan | AFP | Getty Images
On kids gaming site Roblox, thousands of users recently attended pro-Palestinian protests held within the virtual world. That has required the company to closely monitor for posts that violate its community standards, a Roblox spokesperson told CNBC in a statement.
Roblox has thousands of moderators and “automated detection tools in place to monitor,” the spokesperson said, adding that the site “allows for expressions of solidarity,” but does “not allow for content that endorses or condones violence, promotes terrorism or hatred against individuals or groups, or calls for supporting a specific political party.”
When it comes to looking for talent in the trust and safety space, there’s no shortage. Many of Wagner’s former colleagues at Meta lost their jobs and remain dedicated to the cause.
One of her first investments was in a startup called Cove, which was founded by former Meta trust and safety staffers. Cove is among a handful of emerging companies developing technology that they can sell to organizations, following an established enterprise software model. Other Meta veterans have recently started Cinder and Sero AI to go after the same general market.
“It adds some more coherence to the information ecosystem,” Wagner, who is also a senior advisor at the Responsible Innovation Labs nonprofit, said regarding the new crop of trust and safety tools. “They provide some level of standardized processes across companies where they can access tools and guidelines to be able to manage user-generated content effectively.”
‘Brilliant people out there’
It’s not just ex-Meta staffers who recognize the opportunity.
The founding team of startup TrustLab came from companies including Google, Reddit and TikTok parent ByteDance. And the founders of Intrinsic previously worked on trust and safety-related issues at Apple and Discord.
For the TrustCon conference in July, tech policy wonks and other industry experts headed to San Francisco to discuss the latest hot topics in online trust and safety, including their concerns about the potential societal effects of layoffs across the industry.
Several startups showcased their products in the exhibition hall, promoting their services, talking to potential clients and recruiting talent. ActiveFence, which describes itself as a “leader in providing Trust & Safety solutions to protect online platforms and their users from malicious behavior and content,” had a booth at the conference. So did Checkstep, a content moderation platform.
Cove also had an exhibit at the event.
“I think the cost-cutting has definitely obviously affected the labor markets and the hiring market,” said Cove CEO Michael Dworsky, who co-founded the company in 2021 after more than three years at Facebook. “There are a bunch of brilliant people out there that we can now hire.”
Cove has developed software to help manage a company’s content policy and review process. The management platform works alongside various content moderation systems, or classifiers, to detect issues such as harassment, so businesses can protect their users without needing expensive engineers to develop the code. The company, which counts anonymous social media apps YikYak and Sidechat as customers, says on its website that Cove is “the solution we wish we had at Meta.”
“When Facebook started really investing in trust and safety, it’s not like there were tools on the market that they could have bought,” said Cove technology chief Mason Silber, who previously spent seven years at Facebook. “They didn’t want to build, they didn’t want to become the experts. They did it more out of necessity than desire, and they built some of the most robust, trusted safety solutions in the world.”
A Meta spokesperson declined to comment for this story.
Wagner, who left Meta in mid-2022 after about two and a half years at the company, said that earlier content moderation was more manageable than it is today, particularly with the current Middle East crisis. In the past, for instance, a trust and safety team member could analyze a picture and determine whether it contained false information through a fairly routine scan, she said.
But the quantity and speed of photos and videos being uploaded, and the ability of people to manipulate details, especially as generative AI tools become more mainstream, have created a whole new hassle.
Social media sites are now dealing with a swarm of content related to two simultaneous wars, one in the Middle East and another between Russia and Ukraine. On top of that, they have to get ready for the 2024 presidential election in less than a year. Former President Donald Trump, who is under criminal indictment in Georgia for alleged interference in the 2020 election, is the front-runner to become the Republican nominee.
Manu Aggarwal, a partner at research firm Everest Group, said trust and safety is among the fastest-growing segments of a part of the market called business process services, which includes the outsourcing of various IT-related tasks and call centers.
By 2024, Everest Group projects the overall business process services market to be about $300 billion, with trust and safety representing about $11 billion of that figure. Companies such as Accenture and Genpact, which offer outsourced trust and safety services and contract workers, currently capture the bulk of spending, primarily because Big Tech companies have been “building their own” tools, Aggarwal said.
As startups focus on selling packaged and easy-to-use technology to a wider swath of clients, Everest Group practice director Abhijnan Dasgupta estimates that spending on trust and safety tools could be between $750 million and $1 billion by the end of 2024, up from $500 million in 2023. This figure is partly dependent on whether companies adopt more AI services, thus requiring them to potentially abide by emerging AI regulations, he added.
Tech investors are circling the opportunity. Venture capital firm Accel is the lead investor in Cinder, a two-year-old startup whose founders helped build much of Meta’s internal trust and safety systems and also worked on counterterrorism efforts.
“What better team to solve this challenge than the one that played a major role in defining Facebook’s Trust and Safety operations?” Accel’s Sara Ittelson said in a press release announcing the financing in December.
Ittelson told CNBC that she expects the trust and safety technology market to grow as more platforms see the need for greater protection and as the social media market continues to fragment.
The European Commission is now requiring large online platforms with big audiences in the EU to document and detail how they moderate and remove illegal and violent content on their services or face fines of up to 6% of their annual revenue.
Cinder and Cove are promoting their technologies as ways that online businesses can streamline and document their content moderation procedures to comply with the EU’s new regulations, called the Digital Services Act.
‘Frankenstein’s monster’
In the absence of specialized tech tools, Cove’s Dworsky said, many companies have tried to customize Zendesk, which sells customer support software, and Google Sheets to capture their trust and safety policies. That can result in a “very manual, unscalable approach,” he said, describing the process for some companies as “rebuilding and building a Frankenstein’s monster.”
Still, industry experts know that even the most effective trust and safety technologies aren’t a panacea for a problem as big and seemingly uncontrollable as the spread of violent content and disinformation. According to a survey published last week by the Anti-Defamation League, 70% of respondents said that on social media, they’d been exposed to at least one of several types of misinformation or hate related to the Israel-Hamas conflict.
As the problem expands, companies are dealing with the constant struggle over determining what constitutes free speech and what crosses the line into unlawful, or at least unacceptable, content.
Alex Goldenberg, the lead intelligence analyst at the Network Contagion Research Institute, said that in addition to doing their best to maintain integrity on their sites, companies should be honest with their users about their content moderation efforts.
“There’s a balance that is tough to strike, but it is strikable,” he said. “One thing I would recommend is transparency at a time where third-party access and understanding to what is going on at scale on social platforms is what is needed.”
Noam Bardin, the former CEO of navigation firm Waze, now owned by Google, founded the social news-sharing and real-time messaging service Post last year. Bardin, who’s from Israel, said he’s been frustrated with the spread of misinformation and disinformation since the war began in October.
“The whole perception of what’s going on is fashioned and managed through social media, and this means there’s a tremendous influx of propaganda, disinformation, AI-generated content, bringing content from other conflicts into this conflict,” Bardin said.
Bardin said that Meta and X have struggled to manage and remove questionable posts, a challenge that’s become even greater with the influx of videos.
At Post, which is most similar to Twitter, Bardin said he’s been incorporating “all these moderation tools, automated tools and processes” since his company’s inception. He uses services from ActiveFence and OpenWeb, which are both based in Israel.
“Basically, anytime you comment or you post on our platform, it goes through it,” Bardin said regarding the trust and safety software. “It looks at it from an AI perspective to understand what it is and to rank it in terms of harm, pornography, violence, etc.”
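Bardin doesn’t spell out the internals, but the flow he describes, scoring every new post against a set of harm categories and routing it by severity, maps to a common moderation pattern. Here is a minimal sketch of that flow in Python; the category names, thresholds and stubbed classifier are hypothetical illustrations, not the actual APIs of ActiveFence or OpenWeb.

```python
# Minimal sketch of the score-then-route moderation flow Bardin describes.
# Category names, thresholds and the keyword "classifier" are hypothetical;
# a real platform would call a vendor or in-house ML model to get scores.

from dataclasses import dataclass

CATEGORIES = ("harm", "pornography", "violence")
BLOCK_THRESHOLD = 0.9   # auto-remove content scoring above this
REVIEW_THRESHOLD = 0.6  # send to human review above this

@dataclass
class Verdict:
    action: str               # "allow", "review" or "block"
    scores: dict[str, float]  # per-category risk, 0.0 to 1.0

def score_content(text: str) -> dict[str, float]:
    """Stand-in for a real classifier call; flags a single keyword."""
    scores = {category: 0.0 for category in CATEGORIES}
    if "attack" in text.lower():
        scores["violence"] = 0.95
    return scores

def moderate(text: str) -> Verdict:
    """Score a post, then route it based on its worst category score."""
    scores = score_content(text)
    worst = max(scores.values())
    if worst >= BLOCK_THRESHOLD:
        return Verdict("block", scores)
    if worst >= REVIEW_THRESHOLD:
        return Verdict("review", scores)
    return Verdict("allow", scores)

print(moderate("great photo!").action)            # -> allow
print(moderate("we will attack at dawn").action)  # -> block
```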
Post is an example of the kinds of companies that trust and safety startups are focused on. Active online communities with live-chatting services have also emerged on video game sites, online marketplaces, dating apps and music streaming sites, opening them up to potentially harmful content from users.
Brian Fishman, co-founder of Cinder, said “militant organizations” rely on a network of services to spread propaganda, including platforms like Telegram, and sites such as Rumble and Vimeo, which have less advanced technology than Facebook.
Representatives from Rumble and Vimeo didn’t respond to requests for comment.
Fishman said customers are starting to see trust and safety tools as almost an extension of their cybersecurity budgets. In both cases, companies have to spend money to prevent possible disasters.
“Some of it is you’re paying for insurance, which means that you’re not getting full return on that investment every day,” Fishman said. “You’re investing a little bit more during black times, so that you got capability when you really, really need it, and this is one of those moments where companies really need it.”
A representation of cryptocurrency Ethereum is placed on a PC motherboard in this illustration taken on June 16, 2023.
Dado Ruvic | Reuters
Stocks tied to the price of ether, better known as ETH, were higher on Wednesday, reflecting renewed enthusiasm for the crypto asset amid a surge of interest in stablecoins and tokenization.
“We’re finally at the point where real use cases are emerging, and stablecoins have been the first version of that at scale but they’re going to open the door to a much bigger story around tokenizing other assets and using digital assets in new ways,” said Devin Ryan, head of financial technology research at Citizens.
On Tuesday, as bitcoin ETFs snapped a 15-day streak of inflows, ether ETFs saw $40 million in inflows led by BlackRock’s iShares Ethereum Trust. ETH ETFs came back to life in June after much concern that they were becoming zombie funds.
The price of the coin itself was last higher by 5%, according to Coin Metrics, though it’s still down 24% this year.
Ethereum has been struggling with an identity crisis fueled by uncertainty about the network’s value proposition, weaker revenue since its last big technical upgrade and increasing competition from Solana. Market volatility, driven by geopolitical uncertainty this year, has not helped.
The Ethereum network’s smart contracts capability makes it a prominent platform for the tokenization of traditional assets, which includes U.S. dollar-pegged stablecoins. Fundstrat’s Tom Lee this week called Ethereum “the backbone and architecture” of stablecoins. Both Tether (USDT) and Circle’s USD Coin (USDC) are issued on the network.
BlackRock’s tokenized money market fund (known as BUIDL, which stands for USD Institutional Digital Liquidity Fund) also launched on Ethereum last year before expanding to other blockchain networks.
Tokenization is the process of issuing digital representations on a blockchain network of publicly traded securities, real world assets or any other form of value. Holders of tokenized assets don’t have outright ownership of the assets themselves.
The latest wave of interest in ETH-related assets follows an announcement by Robinhood this week that it will enable trading of tokenized U.S. stocks and ETFs across Europe, after a groundswell of interest in stablecoins throughout June following Circle’s IPO and the Senate’s passage of the proposed stablecoin bill, the GENIUS Act.
Ether, which turns 10 years old at the end of July, is sitting about 75% off its all-time high.
Honor launched the Honor Magic V5 on Wednesday, July 2, as it looks to challenge Samsung in the foldable space.
Honor
Honor on Wednesday touted the slimness and battery capacity of its newly launched thin foldable phone, as it lays down a fresh challenge to market leader Samsung.
The Honor Magic V5 will initially go on sale in China, but the Chinese tech firm will likely bring the device to international markets later this year.
Honor said the Magic V5 is 8.8 mm to 9 mm thick when folded, depending on the color choice. The phone’s predecessor, the Magic V3 — Honor skipped the Magic V4 name — was 9.2 mm when folded. Honor said the Magic V5 weighs 217 grams to 222 grams, again depending on the color model. The previous version was 226 grams.
In China, Honor will launch a special 1-terabyte storage version of the Magic V5, which it says will have a battery capacity of more than 6,000 milliampere-hours — among the highest for foldable phones.
Honor has been eager to tout these features as competition in foldables ramps up, even though such devices still hold only a small share of the overall smartphone market.
Honor vs. Samsung
Foldables represented less than 2% of the overall smartphone market in 2024, according to International Data Corporation. Samsung was the biggest player with 34% market share followed by Huawei with just under 24%, IDC added. Honor took the fourth spot with a nearly 11% share.
Honor is looking to get a head start on Samsung, which has its own foldable launch next week on July 9.
Francisco Jeronimo, a vice president at the International Data Corporation, said the Magic V5 is a strong offering from Honor.
“This is the dream foldable smartphone that any user who is interested in this category will think of,” Jeronimo told CNBC, pointing to features such as the battery.
“This phone continues to push the bar forward, and it will challenge Samsung as they are about to launch their seventh generation of foldable phones,” he added.
At its event next week, Samsung is expected to release a foldable that is thinner than its predecessor and could come close to challenging Honor’s offering by way of size, analysts said. If that happens, then Honor will be facing more competition, especially against Samsung, which has a bigger global footprint.
“The biggest challenge for Honor is the brand equity and distribution reach vs Samsung, where the Korean vendor has the edge,” Neil Shah, co-founder of Counterpoint Research, told CNBC.
Honor’s push into international markets beyond China is still fairly young, with the company looking to build up its brand.
“Further, if Samsung catches up with a thinner form-factor in upcoming iterations, as it has been the real pioneer in foldables with its vertical integration expertise from displays to batteries, the differentiating factor might narrow for Honor,” Shah added.
Vertical integration refers to a company owning several parts of a product’s supply chain. Samsung has display and battery businesses that provide the components for its foldables.
In March, Honor pledged a $10 billion investment in AI over the next five years, with part of that going toward the development of next-generation agents that are seen as more advanced personal assistants.
Honor said its AI assistant Yoyo can interact with other AI models, such as those created by DeepSeek and Alibaba in China, to create presentation decks.
The company also flagged that its AI agent can hail a taxi across multiple apps in China, automatically accepting the quickest ride to arrive and canceling the rest.
One of the most popular gaming YouTubers is named Bloo and has bright blue wavy hair and dark blue eyes. But he isn’t a human — he’s a fully virtual personality powered by artificial intelligence.
“I’m here to keep my millions of viewers worldwide entertained and coming back for more,” said Bloo in an interview with CNBC. “I’m all about good vibes and engaging content. I’m built by humans, but boosted by AI.”
Bloo is a virtual YouTuber, or VTuber, who has built a massive following of 2.5 million subscribers and more than 700 million views through videos of him playing popular games like Grand Theft Auto, Roblox and Minecraft. VTubers first gained traction in Japan in the 2010s. Now, advances in AI are making it easier than ever to create VTubers, fueling a new wave of virtual creators on YouTube.
The virtual character – whose bright colors and 3D physique look like something out of a Pixar film or the video game Fortnite – was created by Jordi van den Bussche, a longtime YouTuber also known as kwebbelkop. Van den Bussche created Bloo after finding himself unable to keep up with the demands of content creation. The work he put in no longer matched the output.
“Turns out, the flaw in this equation is the human, so we need to somehow remove the human,” said van den Bussche, a 29-year-old from Amsterdam, in an interview. “The only logical way was to replace the human with either a photorealistic person or a cartoon. The VTuber was the only option, and that’s where Bloo came from.”
Jordi Van Den Bussche, YouTuber known as Kwebbelkop.
Courtesy: Jordi Van Den Bussche
Bloo has already generated more than seven figures in revenue, according to van den Bussche. Many VTubers like Bloo are “puppeteered,” meaning a human controls the character’s voice and movements in real time using motion capture or face-tracking technology. Everything else, from video thumbnails to voice dubbing in other languages, is handled by AI technology from ElevenLabs, OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude. Van den Bussche’s long-term goal is for Bloo’s entire personality and content creation process to be run by AI.
Van den Bussche has already tested fully AI-generated videos on Bloo’s channel, but says the results have not yet been promising. The content doesn’t perform as well because the AI still lacks the intuition and creative instincts of a human, he said.
“When AI can do it better, faster or cheaper than humans, that’s when we’ll start using it permanently,” van den Bussche said.
The technology might not be far away.
Startup Hedra offers a product that uses AI technology to generate videos that are up to five minutes long. It raised $32 million in a funding round in May led by Andreessen Horowitz’s Infrastructure fund.
Hedra’s product, Character-3, allows users to create AI-generated characters for videos and can add dialogue and other characteristics. CEO Michael Lingelbach told CNBC that Hedra is working on a product that will allow users to create self-sustaining, fully automated characters.
Hedra’s product Character-3 allows users to make figures powered by AI that can be animated in real-time.
Hedra
“We’re doing a lot of research accelerating models like Character-3 to real time, and that’s going to be a really good fit for VTubers,” Lingelbach said.
Character-3’s technology is already being used by a growing number of creators who are experimenting with new formats, and many of their projects are going viral. One of those is comedian Jon Lajoie’s Talking Baby Podcast, which features a hyper-realistic animated baby talking into a microphone. Another is Milla Sofia, a virtual singer and artist whose AI-generated music videos attract thousands of views.
Talking Baby Podcast
Source: Instagram | Talking Baby Podcast
These creators are using Character-3 to produce content that stands out on social media, helping them reach wide audiences without the cost and complexity of traditional production.
AI-generated video is a rapidly evolving technology that is reshaping how content is made and shared online, making it easier than ever to produce high-quality video without cameras, actors or editing software. In May, Google announced Veo 3, a tool that creates AI-generated videos with audio.
Google said it uses a subset of YouTube content to train Veo 3, CNBC reported in June. While many creators said they were unaware of the training, experts said it has the potential to create an intellectual property crisis on the platform.
Faceless AI YouTubers
Creators are increasingly finding profitable ways to capitalize on the generative AI technology ushered in by the launch of OpenAI’s ChatGPT in late 2022.
One growing trend is the rise of faceless AI channels, run by creators who use these tools to produce videos with AI-generated images and voiceovers. Such channels can sometimes earn thousands of dollars a month without their creators ever appearing on camera.
“My goal is to scale up to 50 channels, though it’s getting harder because of how YouTube handles new channels and trust scores,” said GoldenHand, a Spain-based creator who declined to share his real name.
Working with a small team, GoldenHand said he publishes up to 80 videos per day across his network of channels. Some maintain a steady few thousand views per video, while others suddenly go viral and rack up millions of views, mostly from viewers over the age of 65.
GoldenHand said his content is audio-driven storytelling. He describes his YouTube videos as audiobooks that are paired with AI-generated images and subtitles. Everything after the initial idea is created entirely by AI.
He recently launched a new platform, TubeChef, which gives creators access to his system to automatically generate faceless AI videos starting at $18 a month.
“People think using AI means you’re less creative, but I feel more creative than ever,” he said. “Coming up with 60 to 80 viral video ideas a day is no joke. The ideation is where all the effort goes now.”
AI Slop
As AI-generated content becomes more common online, concerns about its impact are growing. Some users worry about the spread of misinformation, especially as it becomes easier to generate convincing but entirely AI-fabricated videos.
“Even if the content is informative and someone might find it entertaining or useful, I feel we are moving into a time where … you do not have a way to understand what is human made and what is not,” said Henry Ajder, founder of Latent Space Advisory, which helps businesses navigate the AI landscape.
Others are frustrated by the sheer volume of low-effort AI content flooding their feeds. This kind of material is often referred to as “AI slop”: low-quality, randomly generated content made using artificial intelligence.
Google DeepMind Veo 3.
Courtesy: Google DeepMind
“The age of slop is inevitable,” said Ajder, who is also an AI policy advisor at Meta, which owns Facebook and Instagram. “I’m not sure what we do about it.”
While it’s not new, the surge in this type of content has led to growing criticism from users who say it’s harder to find meaningful or original material, particularly on apps like TikTok, YouTube and Instagram.
“I am actually so tired of AI slop,” said one user on X. “AI images are everywhere now. There is no creativity and no effort in anything relating to art, video, or writing when using AI. It’s disappointing.”
However, the creators of this AI content tell CNBC that it comes down to supply and demand. As the AI-generated content continues to get clicks, there’s no reason to stop creating more of it, said Noah Morris, a creator with 18 faceless YouTube channels.
Some argue that AI videos still have inherent artistic value, and though such content has become much easier to create, slop-like material has always existed on the internet, Lingelbach said.
“There’s never been a barrier to people making uninteresting content,” he said. “Now there’s just more opportunity to create different kinds of uninteresting content, but also more kinds of really interesting content too.”