People using their mobile phones outside the offices of Meta, the parent company of Facebook and Instagram, in King’s Cross, London.
Joshua Bratt | Pa Images | Getty Images
Lauren Wagner knows a lot about disinformation. Heading into the 2020 U.S. presidential election, she worked at Facebook, focusing on information integrity and overseeing products designed to make sure content was moderated and fact-checked.
She can’t believe what she’s seeing now. Since war erupted last month between Israel and Hamas, the constant deluge of misinformation and violent content spreading across the internet is hard for her to comprehend. Wagner left Facebook parent Meta last year, and her work in trust and safety feels like it was from a prior era.
“When you’re in a situation where there’s such a large volume of visual content, how do you even start managing that when it’s like long video clips and there’s multiple points of view?” Wagner said. “This idea of live-streaming terrorism, essentially at such a deep and in-depth scale, I don’t know how you manage that.”
The problem is even more pronounced because Meta, Google parent Alphabet, and X, formerly Twitter, have all eliminated jobs tied to content moderation and trust and safety as part of broader cost-cutting measures that began late last year and continued through 2023. Now, as people post and share out-of-context videos of previous wars, fabricated audio in news clips, and graphic videos of terrorist acts, the world’s most trafficked websites are struggling to keep up, experts have noted.
As the founder of a new venture capital firm, Radium Ventures, Wagner is in the midst of raising her first fund dedicated solely to startup founders working on trust and safety technologies. She said many more platforms that think they are “fairly innocuous” are seeing the need to act.
“Hopefully this is shining a light on the fact that if you house user-generated content, there’s an opportunity for misinformation, for charged information or potentially damaging information to spread,” Wagner said.
In addition to the traditional social networks, the highly polarized nature of the Israel-Hamas war affects internet platforms that weren’t typically known for hosting political discussions but now have to take precautionary measures. Popular online messaging and discussion channels such as Discord and Telegram could be exploited by terrorist groups and other bad actors who are increasingly using multiple communication services to create and conduct their propaganda campaigns.
A Discord spokesperson declined to comment. Telegram didn’t respond to a request for comment.
A demonstrator places flowers on white-shrouded body bags representing victims in the Israel-Hamas conflict, in front of the White House in Washington, DC, on November 15, 2023.
Mandel Ngan | AFP | Getty Images
On kids gaming site Roblox, thousands of users recently attended pro-Palestinian protests held within the virtual world. That has required the company to closely monitor for posts that violate its community standards, a Roblox spokesperson told CNBC in a statement.
Roblox has thousands of moderators and “automated detection tools in place to monitor,” the spokesperson said, adding that the site “allows for expressions of solidarity,” but does “not allow for content that endorses or condones violence, promotes terrorism or hatred against individuals or groups, or calls for supporting a specific political party.”
When it comes to looking for talent in the trust and safety space, there’s no shortage. Many of Wagner’s former colleagues at Meta lost their jobs and remain dedicated to the cause.
One of her first investments was in a startup called Cove, which was founded by former Meta trust and safety staffers. Cove is among a handful of emerging companies developing technology that they can sell to organizations, following an established enterprise software model. Other Meta veterans have recently started Cinder and Sero AI to go after the same general market.
“It adds some more coherence to the information ecosystem,” Wagner, who is also a senior advisor at the Responsible Innovation Labs nonprofit, said regarding the new crop of trust and safety tools. “They provide some level of standardized processes across companies where they can access tools and guidelines to be able to manage user-generated content effectively.”
‘Brilliant people out there’
It’s not just ex-Meta staffers who recognize the opportunity.
The founding team of startup TrustLab came from companies including Google, Reddit and TikTok parent ByteDance. And the founders of Intrinsic previously worked on trust and safety-related issues at Apple and Discord.
For the TrustCon conference in July, tech policy wonks and other industry experts headed to San Francisco to discuss the latest hot topics in online trust and safety, including their concerns about the potential societal effects of layoffs across the industry.
Several startups showcased their products in the exhibition hall, promoting their services, talking to potential clients and recruiting talent. ActiveFence, which describes itself as a “leader in providing Trust & Safety solutions to protect online platforms and their users from malicious behavior and content,” had a booth at the conference. So did Checkstep, a content moderation platform.
Cove also had an exhibit at the event.
“I think the cost-cutting has definitely obviously affected the labor markets and the hiring market,” said Cove CEO Michael Dworsky, who co-founded the company in 2021 after more than three years at Facebook. “There are a bunch of brilliant people out there that we can now hire.”
Cove has developed software to help manage a company’s content policy and review process. The management platform works alongside various content moderation systems, or classifiers, to detect issues such as harassment, so businesses can protect their users without needing expensive engineers to develop the code. The company, which counts anonymous social media apps YikYak and Sidechat as customers, says on its website that Cove is “the solution we wish we had at Meta.”
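In practice, a policy management layer like the one Cove describes typically sits between a platform and one or more classifiers, translating raw risk scores into decisions such as automatic removal or human review. The snippet below is a minimal sketch of that general pattern; the labels, thresholds and functions are hypothetical and do not represent Cove’s actual product, API or policy values.

```python
# Illustrative sketch only: a simplified policy layer that routes user-generated
# content through moderation classifiers and applies per-policy thresholds.
# All names, scores and thresholds here are hypothetical.
from dataclasses import dataclass


@dataclass
class PolicyRule:
    label: str          # e.g. "harassment", "violence"
    threshold: float    # score above which content is queued for human review
    auto_remove: float  # score above which content is removed automatically


POLICIES = [
    PolicyRule(label="harassment", threshold=0.7, auto_remove=0.95),
    PolicyRule(label="violence", threshold=0.6, auto_remove=0.9),
]


def classify(text: str) -> dict[str, float]:
    """Stand-in for third-party or in-house classifiers that score content.

    A real system would call hosted models; this placeholder returns neutral scores.
    """
    return {rule.label: 0.0 for rule in POLICIES}


def moderate(text: str) -> str:
    """Return an action for a piece of content: 'remove', 'review' or 'allow'."""
    scores = classify(text)
    for rule in POLICIES:
        score = scores.get(rule.label, 0.0)
        if score >= rule.auto_remove:
            return "remove"
        if score >= rule.threshold:
            return "review"  # queue for a human moderator
    return "allow"


print(moderate("example user post"))  # -> "allow" with the neutral stand-in scores
```

A platform built this way can swap in different classifiers without rewriting its policy logic, which is roughly the kind of standardized process Wagner describes the new tools providing.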
“When Facebook started really investing in trust and safety, it’s not like there were tools on the market that they could have bought,” said Cove technology chief Mason Silber, who previously spent seven years at Facebook. “They didn’t want to build, they didn’t want to become the experts. They did it more out of necessity than desire, and they built some of the most robust, trusted safety solutions in the world.”
A Meta spokesperson declined to comment for this story.
Wagner, who left Meta in mid-2022 after about two and a half years at the company, said that earlier content moderation was more manageable than it is today, particularly with the current Middle East crisis. In the past, for instance, a trust and safety team member could analyze a picture and determine whether it contained false information through a fairly routine scan, she said.
But the quantity and speed of photos and videos being uploaded, and the ability of people to manipulate details, especially as generative AI tools become more mainstream, have created a whole new challenge.
Social media sites are now dealing with a swarm of content related to two simultaneous wars, one in the Middle East and another between Russia and Ukraine. On top of that, they have to get ready for the 2024 presidential election in less than a year. Former President Donald Trump, who is under criminal indictment in Georgia for alleged interference in the 2020 election, is the front-runner to become the Republican nominee.
Manu Aggarwal, a partner at research firm Everest Group, said trust and safety is among the fastest-growing segments of a part of the market called business process services, which includes the outsourcing of various IT-related tasks and call centers.
By 2024, Everest Group projects the overall business process services market to be about $300 billion, with trust and safety representing about $11 billion of that figure. Companies such as Accenture and Genpact, which offer outsourced trust and safety services and contract workers, currently capture the bulk of spending, primarily because Big Tech companies have been “building their own” tools, Aggarwal said.
As startups focus on selling packaged and easy-to-use technology to a wider swath of clients, Everest Group practice director Abhijnan Dasgupta estimates that spending on trust and safety tools could be between $750 million and $1 billion by the end of 2024, up from $500 million in 2023. This figure is partly dependent on whether companies adopt more AI services, thus requiring them to potentially abide by emerging AI regulations, he added.
Tech investors are circling the opportunity. Venture capital firm Accel is the lead investor in Cinder, a two-year-old startup whose founders helped build much of Meta’s internal trust and safety systems and also worked on counterterrorism efforts.
“What better team to solve this challenge than the one that played a major role in defining Facebook’s Trust and Safety operations?” Accel’s Sara Ittelson said in a press release announcing the financing in December.
Ittelson told CNBC that she expects the trust and safety technology market to grow as more platforms see the need for greater protection and as the social media market continues to fragment.
The European Commission is now requiring large online platforms with big audiences in the EU to document and detail how they moderate and remove illegal and violent content on their services or face fines of up to 6% of their annual revenue.
Cinder and Cove are promoting their technologies as ways that online businesses can streamline and document their content moderation procedures to comply with the EU’s new regulations, called the Digital Services Act.
‘Frankenstein’s monster’
In the absence of specialized tech tools, Cove’s Dworsky said, many companies have tried to customize Zendesk, which sells customer support software, and Google Sheets to capture their trust and safety policies. That can result in a “very manual, unscalable approach,” he said, describing the process for some companies as “rebuilding and building a Frankenstein’s monster.”
Still, industry experts know that even the most effective trust and safety technologies aren’t a panacea for a problem as big and seemingly uncontrollable as the spread of violent content and disinformation. According to a survey published last week by the Anti-Defamation League, 70% of respondents said that on social media, they’d been exposed to at least one of several types of misinformation or hate related to the Israel-Hamas conflict.
As the problem expands, companies are dealing with the constant struggle over determining what constitutes free speech and what crosses the line into unlawful, or at least unacceptable, content.
Alex Goldenberg, the lead intelligence analyst at the Network Contagion Research Institute, said that in addition to doing their best to maintain integrity on their sites, companies should be honest with their users about their content moderation efforts.
“There’s a balance that is tough to strike, but it is strikable,” he said. “One thing I would recommend is transparency at a time where third-party access and understanding to what is going on at scale on social platforms is what is needed.”
Noam Bardin, the former CEO of navigation firm Waze, now owned by Google, founded the social news-sharing and real-time messaging service Post last year. Bardin, who’s from Israel, said he’s been frustrated with the spread of misinformation and disinformation since the war began in October.
“The whole perception of what’s going on is fashioned and managed through social media, and this means there’s a tremendous influx of propaganda, disinformation, AI-generated content, bringing content from other conflicts into this conflict,” Bardin said.
Bardin said that Meta and X have struggled to manage and remove questionable posts, a challenge that’s become even greater with the influx of videos.
At Post, which is most similar to Twitter, Bardin said he’s been incorporating “all these moderation tools, automated tools and processes” since his company’s inception. He uses services from ActiveFence and OpenWeb, which are both based in Israel.
“Basically, anytime you comment or you post on our platform, it goes through it,” Bardin said regarding the trust and safety software. “It looks at it from an AI perspective to understand what it is and to rank it in terms of harm, pornography, violence, etc.”
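Conceptually, the check Bardin describes is a pre-publication gate: every post or comment is scored across risk categories before it goes live, and high-risk items are rejected or held. The sketch below illustrates the general idea only; the endpoint URL, response fields and thresholds are placeholders and are not the actual ActiveFence or OpenWeb APIs.

```python
# Hedged illustration of a pre-publication moderation hook. The endpoint,
# response format and thresholds are all hypothetical.
import requests

MODERATION_URL = "https://moderation.example.com/v1/score"  # placeholder endpoint
BLOCK_THRESHOLD = 0.9   # reject outright above this score
REVIEW_THRESHOLD = 0.6  # hold for human review above this score


def gate_post(text: str) -> str:
    """Send a post to a moderation service and decide whether to publish it."""
    resp = requests.post(MODERATION_URL, json={"text": text}, timeout=5)
    resp.raise_for_status()
    # Assume the service returns per-category risk scores between 0 and 1,
    # e.g. {"harm": 0.1, "pornography": 0.0, "violence": 0.2}.
    scores = resp.json()
    worst = max(scores.values(), default=0.0)
    if worst >= BLOCK_THRESHOLD:
        return "reject"
    if worst >= REVIEW_THRESHOLD:
        return "hold_for_review"
    return "publish"
```

A real deployment would typically also log the scores for audits and feed reviewer decisions back into the thresholds over time.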
Post is an example of the kinds of companies that trust and safety startups are focused on. Active online communities with live-chatting services have also emerged on video game sites, online marketplaces, dating apps and music streaming sites, opening them up to potentially harmful content from users.
Brian Fishman, co-founder of Cinder, said “militant organizations” rely on a network of services to spread propaganda, including platforms like Telegram, and sites such as Rumble and Vimeo, which have less advanced technology than Facebook.
Representatives from Rumble and Vimeo didn’t respond to requests for comment.
Fishman said customers are starting to see trust and safety tools as almost an extension of their cybersecurity budgets. In both cases, companies have to spend money to prevent possible disasters.
“Some of it is you’re paying for insurance, which means that you’re not getting full return on that investment every day,” Fishman said. “You’re investing a little bit more during black times, so that you got capability when you really, really need it, and this is one of those moments where companies really need it.”
An employee walks past a quilt displaying Etsy Inc. signage at the company’s headquarters in Brooklyn, New York.
Victor J. Blue/Bloomberg via Getty Images
Etsy is trying to make it easier for shoppers to purchase products from local merchants and avoid the extra cost of imports as President Donald Trump’s sweeping tariffs raise concerns about soaring prices.
In a post to Etsy’s website on Thursday, CEO Josh Silverman said the company is “surfacing new ways for buyers to discover businesses in their countries” via shopping pages and by featuring local sellers on its website and app.
“While we continue to nurture and enable cross-border trade on Etsy, we understand that people are increasingly interested in shopping domestically,” Silverman said.
Etsy operates an online marketplace that connects buyers and sellers with mostly artisanal and handcrafted goods. The site, which had 5.6 million active sellers as of the end of December, competes with e-commerce juggernaut Amazon, as well as newer entrants that have ties to China like Temu, Shein and TikTok Shop.
By highlighting local sellers, Etsy could relieve some shoppers from having to pay higher prices induced by President Trump’s widespread tariffs on trade partners. Trump has imposed tariffs on most foreign countries, with China facing a rate of 145%, and other nations facing 10% rates after he instituted a 90-day pause to allow for negotiations. Trump also signed an executive order that will end the de minimis provision, a loophole for low-value shipments often used by online businesses, on May 2.
Temu and Shein have already announced they plan to raise prices late next week in response to the tariffs. Sellers on Amazon’s third-party marketplace, many of whom source their products from China, have said they’re considering raising prices.
Silverman said Etsy has provided guidance for its sellers to help them “run their businesses with as little disruption as possible” in the wake of tariffs and changes to the de minimis exemption.
Before Trump’s “Liberation Day” tariffs took effect, Silverman said on the company’s fourth-quarter earnings call in late February that he expects Etsy to benefit from the tariffs and de minimis restrictions because it “has much less dependence on products coming in from China.”
“We’re doing whatever work we can do to anticipate and prepare for come what may,” Silverman said at the time. “In general, though, I think Etsy will be more resilient than many of our competitors in these situations.”
Still, American shoppers may face higher prices on Etsy as U.S. businesses that source their products or components from China pass some of those costs on to consumers.
Etsy shares are down 17% this year, slightly more than the Nasdaq.
Google CEO Sundar Pichai testifies before the House Judiciary Committee at the Rayburn House Office Building on December 11, 2018 in Washington, DC.
Alex Wong | Getty Images
Google’s antitrust woes are continuing to mount, just as the company tries to brace for a future dominated by artificial intelligence.
On Thursday, a federal judge ruled that Google held illegal monopolies in online advertising markets due to its position between ad buyers and sellers.
The ruling, which followed a September trial in Alexandria, Virginia, represents a second major antitrust blow for Google in under a year. In August, a judge determined the company has held a monopoly in its core market of internet search, the most significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago.
Google is in a particularly precarious spot as it tries to simultaneously defend its primary business in court while fending off an onslaught of new competition due to the emergence of generative AI, most notably OpenAI’s ChatGPT, which offers users alternative ways to search for information. Revenue growth has cooled in recent years, and Google also now faces the added potential of a slowdown in ad spending due to economic concerns from President Donald Trump’s sweeping new tariffs.
Parent company Alphabet reports first-quarter results next week. Alphabet’s stock price dipped more than 1% on Thursday and is now down 20% this year.
In Thursday’s ruling, U.S. District Judge Leonie Brinkema said Google’s anticompetitive practices “substantially harmed” publishers and users on the web. The trial featured 39 live witnesses, depositions from an additional 20 witnesses and hundreds of exhibits.
Judge Brinkema ruled that Google unlawfully controls two of the three parts of the advertising technology market: the publisher ad server market and ad exchange market. Brinkema dismissed the third part of the case, determining that tools used for general display advertising can’t clearly be defined as Google’s own market. In particular, the judge cited the purchases of DoubleClick and Admeld and said the government failed to show those “acquisitions were anticompetitive.”
“We won half of this case and we will appeal the other half,” Lee-Anne Mulholland, Google’s vice president of regulatory affairs, said in an emailed statement. “We disagree with the Court’s decision regarding our publisher tools. Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective.”
Attorney General Pam Bondi said in a press release from the DOJ that the ruling represents a “landmark victory in the ongoing fight to stop Google from monopolizing the digital public square.”
Potential ad disruption
If regulators force the company to divest parts of the ad-tech business, as the Justice Department has requested, it could open up opportunities for smaller players and other competitors to fill the void and snap up valuable market share. Amazon has been growing its ad business in recent years.
Meanwhile, Google is still defending itself against claims that its search has acted as a monopoly by creating strong barriers to entry and a feedback loop that sustained its dominance. Google said in August, immediately after the search case ruling, that it would appeal, meaning the matter can play out in court for years even after the remedies are determined.
The remedies trial in the search case, which will lay out the consequences of that ruling, begins next week. The Justice Department is pushing for Google to divest its Chrome browser and to eliminate exclusive agreements, like its deal with Apple for search on iPhones. The judge is expected to rule by August.
Google CEO Sundar Pichai (L) and Apple CEO Tim Cook (R) listen as U.S. President Joe Biden speaks during a roundtable with American and Indian business leaders in the East Room of the White House on June 23, 2023 in Washington, DC.
Anna Moneymaker | Getty Images
After the ad market ruling on Thursday, Gartner’s Andrew Frank said Google’s “conflicts of interest” are apparent in how the market runs.
“The structure has been decades in the making,” Frank said, adding that “untangling that would be a significant challenge, particularly since lawyers don’t tend to be system architects.”
However, the uncertainty that comes with a potentially years-long appeals process means many publishers and advertisers will be waiting to see how things shake out before making any big decisions given how much they rely on Google’s technology.
“Google will have incentives to encourage more competition possibly by loosening certain restrictions on certain media it controls, YouTube being one of them,” Frank said. “Those kind of incentives may create opportunities for other publishers or ad tech players.”
A date for the remedies trial in the ad tech case hasn’t been set.
Damian Rollison, senior director of market insights for marketing platform Soci, said the revenue hit from the ad market case could be more dramatic than the impact from the search case.
“The company stands to lose a lot more in material terms if its ad business, long its main source of revenue, is broken up,” Rollison said in an email. “Whereas divisions like Chrome are more strategically important.”
Jason Citron, CEO of Discord in Washington, DC, on January 31, 2024.
Andrew Caballero-Reynolds | AFP | Getty Images
The New Jersey attorney general sued Discord on Thursday, alleging that the company misled consumers about child safety features on the gaming-centric social messaging app.
The lawsuit, filed in the New Jersey Superior Court by Attorney General Matthew Platkin and the state’s division of consumer affairs, alleges that Discord violated the state’s consumer fraud laws.
Discord did so, the complaint said, by allegedly “misleading children and parents from New Jersey” about safety features, “obscuring” the risks children face on the platform and failing to enforce its minimum age requirement.
“Discord’s strategy of employing difficult to navigate and ambiguous safety settings to lull parents and children into a false sense of safety, when Discord knew well that children on the Application were being targeted and exploited, are unconscionable and/or abusive commercial acts or practices,” lawyers wrote in the legal filing.
They alleged that Discord’s acts and practices were “offensive to public policy.”
A Discord spokesperson said in a statement that the company disputes the allegations and that it is “proud of our continuous efforts and investments in features and tools that help make Discord safer.”
“Given our engagement with the Attorney General’s office, we are surprised by the announcement that New Jersey has filed an action against Discord today,” the spokesperson said.
One of the lawsuit’s allegations centers on Discord’s age-verification process, which the plaintiffs describe as flawed, saying children under 13 can easily lie about their age to bypass the app’s minimum age requirement.
The lawsuit also alleges that Discord misled parents to believe that its so-called Safe Direct Messaging feature “was designed to automatically scan and delete all private messages containing explicit media content.” The lawyers claim that Discord misrepresented the efficacy of that safety tool.
“By default, direct messages between ‘friends’ were not scanned at all,” the complaint stated. “But even when Safe Direct Messaging filters were enabled, children were still exposed to child sexual abuse material, videos depicting violence or terror, and other harmful content.”
The New Jersey attorney general is seeking unspecified civil penalties against Discord, according to the complaint.
The filing marks the latest lawsuit brought by various state attorneys general around the country against social media companies.
In 2023, a bipartisan coalition of over 40 state attorneys general sued Meta over allegations that the company knowingly implemented addictive features across apps like Facebook and Instagram that harm the mental well-being of children and young adults.
The New Mexico attorney general sued Snap in September 2024 over allegations that Snapchat’s design features have made it easy for predators to target children through sextortion schemes.
The following month, a bipartisan group of over a dozen state attorneys general filed lawsuits against TikTok over allegations that the app misleads consumers into believing it’s safe for children. In one lawsuit filed by the District of Columbia’s attorney general, lawyers allege that the ByteDance-owned app maintains a virtual currency that “substantially harms children” and a livestreaming feature that “exploits them financially.”
In January 2024, executives from Meta, TikTok, Snap, Discord and X were grilled by lawmakers during a Senate hearing over allegations that the companies failed to protect children on their respective social media platforms.