People using their mobile phones outside the offices of Meta, the parent company of Facebook and Instagram, in King’s Cross, London.

Joshua Bratt | Pa Images | Getty Images

Lauren Wagner knows a lot about disinformation. Heading into the 2020 U.S. presidential election, she worked at Facebook, focusing on information integrity and overseeing products designed to make sure content was moderated and fact-checked.

She can’t believe what she’s seeing now. Since war erupted last month between Israel and Hamas, the constant deluge of misinformation and violent content spreading across the internet has been hard for her to comprehend. Wagner left Facebook parent Meta last year, and her work in trust and safety feels like it was from a prior era.

“When you’re in a situation where there’s such a large volume of visual content, how do you even start managing that when it’s like long video clips and there’s multiple points of view?” Wagner said. “This idea of live-streaming terrorism, essentially at such a deep and in-depth scale, I don’t know how you manage that.”

The problem is even more pronounced because Meta, Google parent Alphabet, and X, formerly Twitter, have all eliminated jobs tied to content moderation and trust and safety as part of broader cost-cutting measures that began late last year and continued through 2023. Now, as people post and share out-of-context videos of previous wars, fabricated audio in news clips, and graphic videos of terrorist acts, the world’s most trafficked websites are struggling to keep up, experts have noted.

As the founder of a new venture capital firm, Radium Ventures, Wagner is in the midst of raising her first fund dedicated solely to startup founders working on trust and safety technologies. She said many more platforms that think they are “fairly innocuous” are seeing the need to act.

“Hopefully this is shining a light on the fact that if you house user-generated content, there’s an opportunity for misinformation, for charged information or potentially damaging information to spread,” Wagner said.

In addition to the traditional social networks, the highly polarized nature of the Israel-Hamas war affects internet platforms that weren’t typically known for hosting political discussions but now have to take precautionary measures. Popular online messaging and discussion channels such as Discord and Telegram could be exploited by terrorist groups and other bad actors who are increasingly using multiple communication services to create and conduct their propaganda campaigns.

A Discord spokesperson declined to comment. Telegram didn’t respond to a request for comment.

A demonstrator places flowers on white-shrouded body bags representing victims in the Israel-Hamas conflict, in front of the White House in Washington, DC, on November 15, 2023.

Mandel Ngan | AFP | Getty Images

On kids gaming site Roblox, thousands of users recently attended pro-Palestinian protests held within the virtual world. That has required the company to closely monitor for posts that violate its community standards, a Roblox spokesperson told CNBC in a statement.

Roblox has thousands of moderators and “automated detection tools in place to monitor,” the spokesperson said, adding that the site “allows for expressions of solidarity,” but does “not allow for content that endorses or condones violence, promotes terrorism or hatred against individuals or groups, or calls for supporting a specific political party.”

When it comes to looking for talent in the trust and safety space, there’s no shortage. Many of Wagner’s former colleagues at Meta lost their jobs and remain dedicated to the cause.

One of her first investments was in a startup called Cove, which was founded by former Meta trust and safety staffers. Cove is among a handful of emerging companies developing technology that they can sell to organizations, following an established enterprise software model. Other Meta veterans have recently started Cinder and Sero AI to go after the same general market.

“It adds some more coherence to the information ecosystem,” Wagner, who is also a senior advisor at the Responsible Innovation Labs nonprofit, said regarding the new crop of trust and safety tools. “They provide some level of standardized processes across companies where they can access tools and guidelines to be able to manage user-generated content effectively.”

‘Brilliant people out there’

It’s not just ex-Meta staffers who recognize the opportunity.

The founding team of startup TrustLab came from companies including Google, Reddit and TikTok parent ByteDance. And the founders of Intrinsic previously worked on trust and safety-related issues at Apple and Discord.

For the TrustCon conference in July, tech policy wonks and other industry experts headed to San Francisco to discuss the latest hot topics in online trust and safety, including their concerns about the potential societal effects of layoffs across the industry.

Several startups showcased their products in the exhibition hall, promoting their services, talking to potential clients and recruiting talent. ActiveFence, which describes itself as a “leader in providing Trust & Safety solutions to protect online platforms and their users from malicious behavior and content,” had a booth at the conference. So did Checkstep, a content moderation platform.

Cove also had an exhibit at the event.

“I think the cost-cutting has definitely obviously affected the labor markets and the hiring market,” said Cove CEO Michael Dworsky, who co-founded the company in 2021 after more than three years at Facebook. “There are a bunch of brilliant people out there that we can now hire.”

Cove has developed software to help manage a company’s content policy and review process. The management platform works alongside various content moderation systems, or classifiers, to detect issues such as harassment, so businesses can protect their users without needing expensive engineers to develop the code. The company, which counts anonymous social media apps Yik Yak and Sidechat as customers, says on its website that Cove is “the solution we wish we had at Meta.”

“When Facebook started really investing in trust and safety, it’s not like there were tools on the market that they could have bought,” said Cove technology chief Mason Silber, who previously spent seven years at Facebook. “They didn’t want to build, they didn’t want to become the experts. They did it more out of necessity than desire, and they built some of the most robust, trusted safety solutions in the world.”

A Meta spokesperson declined to comment for this story.


Wagner, who left Meta in mid-2022 after about two and a half years at the company, said that earlier content moderation was more manageable than it is today, particularly with the current Middle East crisis. In the past, for instance, a trust and safety team member could analyze a picture and determine whether it contained false information through a fairly routine scan, she said.

But the quantity and speed of photos and videos being uploaded, and the ability of people to manipulate details, especially as generative AI tools become more mainstream, have created a whole new challenge.

Social media sites are now dealing with a swarm of content related to two simultaneous wars, one in the Middle East and another between Russia and Ukraine. On top of that, they have to get ready for the 2024 presidential election in less than a year. Former President Donald Trump, who is under criminal indictment in Georgia for alleged interference in the 2020 election, is the front-runner to become the Republican nominee.

Manu Aggarwal, a partner at research firm Everest Group, said trust and safety is among the fastest-growing segments of a part of the market called business process services, which includes the outsourcing of various IT-related tasks and call centers.

By 2024, Everest Group projects the overall business process services market to be about $300 billion, with trust and safety representing about $11 billion of that figure. Companies such as Accenture and Genpact, which offer outsourced trust and safety services and contract workers, currently capture the bulk of spending, primarily because Big Tech companies have been “building their own” tools, Aggarwal said.

As startups focus on selling packaged and easy-to-use technology to a wider swath of clients, Everest Group practice director Abhijnan Dasgupta estimates that spending on trust and safety tools could be between $750 million and $1 billion by the end of 2024, up from $500 million in 2023. This figure is partly dependent on whether companies adopt more AI services, thus requiring them to potentially abide by emerging AI regulations, he added.

Tech investors are circling the opportunity. Venture capital firm Accel is the lead investor in Cinder, a two-year-old startup whose founders helped build much of Meta’s internal trust and safety systems and also worked on counterterrorism efforts.

“What better team to solve this challenge than the one that played a major role in defining Facebook’s Trust and Safety operations?” Accel’s Sara Ittelson said in a press release announcing the financing in December.

Ittelson told CNBC that she expects the trust and safety technology market to grow as more platforms see the need for greater protection and as the social media market continues to fragment.

New content policy regulations have also spurred investment in the area.

The European Commission is now requiring large online platforms with big audiences in the EU to document and detail how they moderate and remove illegal and violent content on their services or face fines of up to 6% of their annual revenue.

Cinder and Cove are promoting their technologies as ways that online businesses can streamline and document their content moderation procedures to comply with the EU’s new regulations, called the Digital Services Act.

‘Frankenstein’s monster’

In the absence of specialized tech tools, Cove’s Dworsky said, many companies have tried to customize Zendesk, which sells customer support software, and Google Sheets to capture their trust and safety policies. That can result in a “very manual, unscalable approach,” he said, describing the process for some companies as “rebuilding and building a Frankenstein’s monster.”

Still, industry experts know that even the most effective trust and safety technologies aren’t a panacea for a problem as big and seemingly uncontrollable as the spread of violent content and disinformation. According to a survey published last week by the Anti-Defamation League, 70% of respondents said that on social media, they’d been exposed to at least one of several types of misinformation or hate related to the Israel-Hamas conflict.

As the problem expands, companies are dealing with the constant struggle over determining what constitutes free speech and what crosses the line into unlawful, or at least unacceptable, content.

Alex Goldenberg, the lead intelligence analyst at the Network Contagion Research Institute, said that in addition to doing their best to maintain integrity on their sites, companies should be honest with their users about their content moderation efforts.

“There’s a balance that is tough to strike, but it is strikable,” he said. “One thing I would recommend is transparency at a time where third-party access and understanding to what is going on at scale on social platforms is what is needed.”


Noam Bardin, the former CEO of navigation firm Waze, now owned by Google, founded the social news-sharing and real-time messaging service Post last year. Bardin, who’s from Israel, said he’s been frustrated with the spread of misinformation and disinformation since the war began in October.

“The whole perception of what’s going on is fashioned and managed through social media, and this means there’s a tremendous influx of propaganda, disinformation, AI-generated content, bringing content from other conflicts into this conflict,” Bardin said.

Bardin said that Meta and X have struggled to manage and remove questionable posts, a challenge that’s become even greater with the influx of videos.

At Post, which is most similar to Twitter, Bardin said he’s been incorporating “all these moderation tools, automated tools and processes” since his company’s inception. He uses services from ActiveFence and OpenWeb, which are both based in Israel.

“Basically, anytime you comment or you post on our platform, it goes through it,” Bardin said regarding the trust and safety software. “It looks at it from an AI perspective to understand what it is and to rank it in terms of harm, pornography, violence, etc.”
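The kind of scoring pass Bardin describes can be illustrated with a toy sketch: each post receives a harm score per category, and the worst score determines what happens to it. The category names, thresholds and scores below are hypothetical illustrations, not the actual APIs of ActiveFence or OpenWeb.

```python
# Toy illustration of a trust-and-safety scoring pass: every post gets a
# per-category harm score, and the maximum score decides the action.
# All names and thresholds here are hypothetical.
from typing import Dict

REVIEW_THRESHOLD = 0.6   # route to a human moderator
REMOVE_THRESHOLD = 0.9   # block automatically

def decide_action(scores: Dict[str, float]) -> str:
    """Map per-category harm scores to a moderation action."""
    worst = max(scores.values(), default=0.0)
    if worst >= REMOVE_THRESHOLD:
        return "remove"
    if worst >= REVIEW_THRESHOLD:
        return "review"
    return "publish"

print(decide_action({"violence": 0.2, "pornography": 0.1}))   # publish
print(decide_action({"violence": 0.95, "pornography": 0.1}))  # remove
```

In practice, production systems layer many classifiers, appeal queues and human review on top of a pass like this; the sketch only shows the basic ranking-and-thresholding shape Bardin alludes to.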

Post is an example of the kinds of companies that trust and safety startups are focused on. Active online communities with live-chatting services have also emerged on video game sites, online marketplaces, dating apps and music streaming sites, opening them up to potentially harmful content from users.

Brian Fishman, co-founder of Cinder, said “militant organizations” rely on a network of services to spread propaganda, including platforms like Telegram, and sites such as Rumble and Vimeo, which have less advanced technology than Facebook.

Representatives from Rumble and Vimeo didn’t respond to requests for comment.

Fishman said customers are starting to see trust and safety tools as almost an extension of their cybersecurity budgets. In both cases, companies have to spend money to prevent possible disasters.

“Some of it is you’re paying for insurance, which means that you’re not getting full return on that investment every day,” Fishman said. “You’re investing a little bit more during black times, so that you got capability when you really, really need it, and this is one of those moments where companies really need it.”

WATCH: Lawmakers ask social media and AI companies to crack down on misinformation

Microsoft set to unveil its vision for AI PCs at Build developer conference

Microsoft Chief Executive Officer (CEO) Satya Narayana Nadella speaks at a live Microsoft event in the Manhattan borough of New York City, October 26, 2016.

Lucas Jackson | Reuters

Microsoft’s Build developer conference kicks off on Tuesday, giving the company the opportunity to showcase its latest artificial intelligence projects, following high-profile events this month hosted by OpenAI and Google.

One area where Microsoft has a distinct advantage over others in the AI race is its ownership of Windows, which gives the company a massive PC user base.

Microsoft CEO Satya Nadella said in January that 2024 will mark the year when AI will become the “first-class part of every PC.”

The company already offers its Copilot chatbot assistant in the Bing search engine and, for a fee, in Office productivity software. Now, PC users will get to hear more about how AI will be embedded in Windows and what they can do with it on new AI PCs.

Build comes days after Google I/O, where the search giant unveiled its most powerful AI model yet and showed how its Gemini AI will work on computers and phones. Prior to Google’s event, OpenAI announced its new GPT-4o model. Microsoft is OpenAI’s lead investor, and its Copilot technology is based on OpenAI’s models.

For Microsoft, the challenge is twofold: keeping a prominent position in AI and bolstering PC sales, which have been in the doldrums for the past two years following an upgrade cycle during the pandemic.

In a recent note on Dell to investors, Morgan Stanley analyst Erik Woodring wrote that he remains “bullish on the PC market recovery” due to commentary from customers and recent “upward revisions to notebook” original design manufacturer (ODM) builds.

Technology industry researcher Gartner estimated that PC shipments increased 0.9% in the quarter after a multi-year slump. Demand for PCs was “slightly better than expected,” Microsoft CFO Amy Hood said on the company’s quarterly earnings call last month.


New AI tools from Microsoft could offer another reason for enterprise and consumer customers to upgrade their aging computers, whether they’re made by HP, Dell or Lenovo.

“While Copilot for Windows does not directly drive monetization it should, we believe, drive up usage of Windows, stickiness of Windows, customers to higher priced more powerful PCs (and therefore more revenue to Microsoft per device), and likely search revenue,” Bernstein analysts wrote in a note to investors on April 26, the day after Microsoft reported earnings.

While Microsoft will provide the software to handle some of the AI tasks sent to the internet, the new AI PCs will be powered by chips from AMD, Intel and Qualcomm for offline AI jobs. That could include, for example, using your voice to ask Copilot to summarize a transcription without a connection.

What’s an AI PC?

The key hardware addition to an AI PC is what’s called a neural processing unit. NPUs go beyond the capabilities of traditional central processing units (CPUs) and are designed to specifically handle artificial intelligence tasks. Traditionally, they’ve been used by companies like Apple to improve photos and videos or for speech recognition.

Microsoft hasn’t said what AI PCs will be capable of yet without an internet connection. But Google’s Pixel 8 Pro phone, which doesn’t have a full computer processor, can summarize and transcribe recordings, recommend text message responses and more using its Gemini Nano AI.

Computers with Intel’s latest Lunar Lake chips with a dedicated NPU are expected to arrive in late 2024. Qualcomm’s Snapdragon X Elite chip with an NPU will be available in the middle of this year, while AMD’s latest Ryzen Pro is expected sometime during the quarter.

Intel says the chips allow for things like “real-time language translation, automation inferencing, and enhanced gaming environments.”

Apple has been using NPUs for years and recently highlighted them in its new M4 chip for the iPad Pro. The M4 chip is expected to launch in the next round of Macs sometime this year.

Windows on Arm

Qualcomm, unlike Intel and AMD, offers chips powered by Arm-based architecture. One of Microsoft’s sessions will talk about “the Next Generation of Windows on Arm,” which will likely cover how Windows runs on Qualcomm chips and how that’s different from Intel and AMD versions of Windows.

Intel still controls 78% of the PC chip market, followed by AMD at 13%, according to recent data from Canalys.

In the past, Qualcomm has promoted Snapdragon Arm-based computers by touting their longer battery life, thinner designs and other benefits like cellular connections. But earlier versions of Qualcomm’s chips were limited in what they offered consumers. In 2018, for example, the company’s Snapdragon 835 chip couldn’t run most Windows applications.

Microsoft has since improved Windows to handle traditional apps on Arm, but questions remain. The company even has an FAQ page dedicated to computers running on Arm hardware.


OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it

OpenAI has disbanded its team focused on the long-term risks of artificial intelligence just one year after the company announced the group, a source familiar with the situation confirmed to CNBC on Friday.

The person, who spoke on condition of anonymity, said that some of the team members are being re-assigned to multiple other teams within the company.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

The news was first reported by Wired.

OpenAI’s Superalignment team, announced last year, focused on “scientific and technical breakthroughs to steer and control AI systems much smarter than us.” At the time, OpenAI said it would commit 20% of its computing power to the initiative over four years.

Sutskever and Leike on Tuesday announced their departures on X, hours apart, but on Friday, Leike shared more details about why he left the startup.

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact.

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.”

Leike added that OpenAI must become a “safety-first AGI company.”

“Building smarter-than-human machines is an inherently dangerous endeavor,” he wrote. “OpenAI is shouldering an enormous responsibility on behalf of all of humanity. But over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike did not immediately respond to a request for comment, and OpenAI did not immediately provide a comment.

The high-profile departures come months after OpenAI went through a leadership crisis involving co-founder and CEO Sam Altman.

In November, OpenAI’s board ousted Altman, claiming in a statement that Altman had not been “consistently candid in his communications with the board.”

The issue seemed to grow more complex each following day, with The Wall Street Journal and other media outlets reporting that Sutskever trained his focus on ensuring that artificial intelligence would not harm humans, while others, including Altman, were instead more eager to push ahead with delivering new technology.

Altman’s ouster prompted resignations – or threats of resignations – including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company, and board members Helen Toner, Tasha McCauley and Ilya Sutskever, who had voted to oust Altman, were out. Sutskever stayed on staff at the time but no longer in his capacity as a board member. Adam D’Angelo, who had also voted to oust Altman, remained on the board.

When Altman was asked about Sutskever’s status on a Zoom call with reporters in March, he said there were no updates to share. “I love Ilya… I hope we work together for the rest of our careers, my career, whatever,” Altman said. “Nothing to announce today.”

On Tuesday, Altman shared his thoughts on Sutskever’s departure.

“This is very sad to me; Ilya is easily one of the greatest minds of our generation, a guiding light of our field, and a dear friend,” Altman wrote on X. “His brilliance and vision are well known; his warmth and compassion are less well known but no less important.” Altman said research director Jakub Pachocki, who has been at OpenAI since 2017, will replace Sutskever as chief scientist.

News of Sutskever’s and Leike’s departures, and the dissolution of the superalignment team, come days after OpenAI launched a new AI model and desktop version of ChatGPT, along with an updated user interface, the company’s latest effort to expand the use of its popular chatbot.

The update brings the GPT-4 model to everyone, including OpenAI’s free users, technology chief Mira Murati said Monday in a livestreamed event. She added that the new model, GPT-4o, is “much faster,” with improved capabilities in text, video and audio.

OpenAI said it eventually plans to allow users to video chat with ChatGPT. “This is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said.

BlackRock funds are ‘crushing shareholder rights,’ says activist Boaz Weinstein

Boaz Weinstein, founder and chief investment officer of Saba Capital Management, during the Bloomberg Invest event in New York, US, on Wednesday, June 7, 2023. 

Jeenah Moon | Bloomberg | Getty Images

Boaz Weinstein, the hedge fund investor on the winning side of JPMorgan Chase’s $6.2 billion “London Whale” trading loss in 2012, is now taking on index fund giant BlackRock.

On Friday, Weinstein’s Saba Capital detailed in a presentation seen by CNBC its plans to push for change at 10 closed-end BlackRock funds that trade at a significant discount to the value of their underlying assets compared to their peers. Saba says the underperformance is a direct result of BlackRock’s management.

The hedge fund wants board control at three BlackRock funds and a minority slate at seven others. It also seeks to oust BlackRock as the manager of six of those 10 funds.

“In the last three years, nine of the ten funds that we’re even talking about have lost money for investors,” Weinstein said on CNBC’s “Squawk Box” earlier this week.

At the heart of Saba’s “Hey BlackRock” campaign is an argument around governance. Saba says in its presentation that BlackRock runs those closed-end funds the “exact opposite” way it expects companies to run themselves.

BlackRock “is talking out of both sides of its mouth” by doing this, Saba says. That’s cost retail investors $1.4 billion in discounts, by Saba’s math, on top of the management fees it charges.

BlackRock, Saba says in the deck, “considers itself a leader in governance, but is crushing shareholder rights.” At certain BlackRock funds, for example, if an investor doesn’t submit their vote in a shareholder meeting, their shares will automatically go to support BlackRock. Saba is suing to change that.

A BlackRock spokesperson called that assertion “very misleading” and said those funds “simply require that most shareholders vote affirmatively in favor.”

The index fund manager’s rebuttal, “Defend Your Fund,” describes Saba as an activist hedge fund seeking to “enrich itself.”

The problem and the solution

Closed-end funds have a finite number of shares. Investors who want to sell their positions have to find an interested buyer, which means they may not be able to sell at a price that reflects the value of a fund’s holdings.

In open-ended funds, by contrast, an investor can redeem its shares with the manager in exchange for cash. That’s how many index funds are structured, like those that track the S&P 500.
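The gap Saba is targeting can be expressed as a simple discount-to-NAV calculation: the difference between where a closed-end fund’s shares trade and the per-share value of its holdings. The numbers below are hypothetical, for illustration only.

```python
# Hypothetical illustration of a closed-end fund's discount to net asset
# value (NAV): shares trading below the per-share value of the holdings.
def discount_to_nav(market_price: float, nav_per_share: float) -> float:
    """Return the fund's discount (negative) or premium (positive) to NAV."""
    return (market_price - nav_per_share) / nav_per_share

# A fund holding $20.00 of assets per share but trading at $17.00
# sits at a 15% discount to NAV.
print(f"{discount_to_nav(17.00, 20.00):.0%}")  # -15%
```

Closing that gap — for example via the tender offer or buyback Saba proposes — is what would return the difference to shareholders.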

Saba says it has a solution. BlackRock should buy back shares from investors at the price they’re worth, not where they currently trade.

“Investors who want to come out come out, and those who want to stay will stay for a hundred years, if they want,” Weinstein told CNBC earlier this week.

Weinstein, who founded Saba in 2009, made a fortune two years later, when he noticed that a relatively obscure credit derivatives index was behaving abnormally. Saba began buying up the underlying derivatives that, unbeknownst to him, were being sold by JPMorgan’s Bruno Iksil. For a time, Saba took tremendous losses on the position, until Iksil’s bet turned sour on him, costing JPMorgan billions and netting Saba huge profits.

Saba said in its investor deck that the changes at BlackRock could take the form of a tender offer or a restructuring. The presentation noted that BlackRock previously cast its shares in support of a tender at another closed-end fund where an activist was pushing for similar change.

At the worst-performing funds relative to their peer group, Saba is seeking shareholder approval to fire the manager. In total, Saba wants new management at six funds, including the BlackRock California Municipal Income Trust (BFZ), the BlackRock Innovation and Growth Term Trust (BIGZ) and the BlackRock Health Sciences Term Trust (BMEZ).

“BlackRock is failing as a manager by delivering subpar performance compared to relevant benchmarks and worst-in-class corporate governance,” the deck says.

If Saba were to win shareholder approval to fire BlackRock as manager at the six funds, the newly constituted boards would then run a review process over at least six months. Saba says that in addition to offering liquidity to investors, its board nominees would push for reduced fees and for other unspecified governance fixes.

A BlackRock spokesperson told CNBC that the firm has historically taken steps to improve returns at closed-end funds when necessary.

“BlackRock’s closed-end funds welcome constructive engagement with thoughtful shareholders who act in good faith with the shared goal of enhancing long-term value for all,” the spokesperson said.

Weinstein said Saba has run similar campaigns at roughly 60 closed-end funds in the past decade but has only taken over a fund’s management twice. The hedge fund sued BlackRock last year to remove that so-called “vote-stripping provision” at certain funds and filed another lawsuit earlier this year.

BlackRock has pitched shareholders via mailings and advertisements. “Your dependable, income-paying investment,” BlackRock has told investors, is under threat from Saba.

Saba plans to host a webinar for shareholders on Monday but says BlackRock has refused to provide the shareholder list for several of the funds. The BlackRock spokesperson said that it has “always acted in accordance with all applicable laws” when providing shareholder information, and that it “never blocked Saba’s access to shareholders.”

“What we want is for shareholders, which we are the largest of but not in any way the majority, to make that $1.4 billion, which can be done at the press of a button,” Weinstein told CNBC earlier this week.

WATCH: CNBC’s full interview with Saba Capital’s Boaz Weinstein
