

Nina Jankowicz, a disinformation expert and vice president at the Centre for Information Resilience, gestures during an interview with AFP in Washington, DC, on March 23, 2023.

Bastien Inzaurralde | AFP | Getty Images

Nina Jankowicz’s dream job has turned into a nightmare.

For the past 10 years, she’s been a disinformation researcher, studying and analyzing the spread of Russian propaganda and internet conspiracy theories. In 2022, she was appointed to lead the Disinformation Governance Board, which was created to help the Department of Homeland Security fend off online threats.

Now, Jankowicz’s life is filled with government inquiries, lawsuits and a barrage of harassment, all the result of an extreme level of hostility directed at people whose mission is to safeguard the internet, particularly ahead of presidential elections.

Jankowicz, the mother of a toddler, says her anxiety has run so high, in part due to death threats, that she recently had a dream that a stranger broke into her house with a gun. She threw a punch in the dream that, in reality, grazed her bedside baby monitor. Jankowicz said she tries to stay out of public view and no longer publicizes when she’s going to events.

“I don’t want somebody who wishes harm to show up,” Jankowicz said. “I have had to change how I move through the world.”

In prior election cycles, researchers like Jankowicz were heralded by lawmakers and company executives for their work exposing Russian propaganda campaigns, Covid conspiracies and false voter fraud accusations. But 2024 has been different, marred by the threat of litigation from powerful people like X owner Elon Musk, as well as congressional investigations conducted by far-right politicians and an ever-increasing number of online trolls.

Alex Abdo, litigation director of the Knight First Amendment Institute at Columbia University, said the constant attacks and legal expenses have “unfortunately become an occupational hazard” for these researchers. Abdo, whose institute has filed amicus briefs in several lawsuits targeting researchers, said the “chill in the community is palpable.” 

Jankowicz is one of more than two dozen researchers who spoke to CNBC about the changing environment of late and the safety concerns they now face for themselves and their families. Many declined to be named to protect their privacy and avoid further public scrutiny. 

Whether they agreed to be named or not, the researchers all spoke of a more treacherous landscape this election season than in the past. The researchers said that conspiracy theories claiming that internet platforms try to silence conservative voices began during Donald Trump’s first campaign for president nearly a decade ago and have steadily increased since then.

SpaceX and Tesla founder Elon Musk speaks at a town hall with Republican candidate U.S. Senate Dave McCormick at the Roxain Theater on October 20, 2024 in Pittsburgh, Pennsylvania. 

Michael Swensen | Getty Images

‘Those attacks take their toll’

The chilling effect is of particular concern because, some researchers say, online misinformation is more prevalent than ever and, particularly with the rise of artificial intelligence, often even more difficult to recognize. It’s the internet equivalent of taking cops off the streets just as robberies and break-ins are surging.

Jeff Hancock, president of the Stanford Internet Observatory, said we’re in a “trust and safety winter.” He’s experienced it firsthand. 

After the SIO’s work looking into misinformation and disinformation during the 2020 election, the institute was sued three times in 2023 by conservative groups, who alleged that the organization’s researchers colluded with the federal government to censor speech. Stanford spent millions of dollars to defend its staff and students fighting the lawsuits. 

During that time, SIO downsized significantly.

“Many people have lost their jobs or worse and especially that’s the case for our staff and researchers,” said Hancock, during the keynote of his organization’s third annual Trust and Safety Research Conference in September. “Those attacks take their toll.”

SIO didn’t respond to CNBC’s inquiry about the reason for the job cuts. 

Google last month laid off several employees, including a director, in its trust and safety research unit just days before some of them were scheduled to speak at or attend the Stanford event, according to sources close to the layoffs who asked not to be named. In March, the search giant laid off a handful of employees on its trust and safety team as part of broader staff cuts across the company.

Google didn’t specify the reason for the cuts, telling CNBC in a statement that, “As we take on more responsibilities, particularly around new products, we make changes to teams and roles according to business needs.” The company said it’s continuing to grow its trust and safety team. 

Jankowicz said she began to feel the hostility two years ago after her appointment to the Biden administration’s Disinformation Governance Board. 

She and her colleagues say they faced repeated attacks from conservative media and Republican lawmakers, who alleged that the group limited free speech. After just four months in operation, the board was shuttered. 

In an August 2022 statement announcing the termination of the board, DHS didn’t provide a specific reason for the move, saying only that it was following the recommendation of the Homeland Security Advisory Council. 

Jankowicz was then subpoenaed as part of an investigation by a subcommittee of the House Judiciary Committee intended to discover whether the federal government was colluding with researchers to “censor” Americans and conservative viewpoints on social media.

“I’m the face of that,” Jankowicz said. “It’s hard to deal with.”


Since being subpoenaed, Jankowicz said she’s also had to deal with a “cyberstalker,” who repeatedly posted about her and her child on social media site X, resulting in the need to obtain a protective order. Jankowicz has spent more than $80,000 in legal bills on top of the constant fear that online harassment will lead to real-world dangers.

On notorious online forum 4chan, Jankowicz’s face graced the cover of a munitions handbook, a manual teaching others how to build their own guns. Another person used AI software and a photo of Jankowicz’s face to create deep-fake pornography, essentially putting her likeness onto explicit videos.

“I have been recognized on the street before,” said Jankowicz, who wrote about her experience in a 2023 story in The Atlantic with the headline, “I Shouldn’t Have to Accept Being in Deepfake Porn.”

One researcher, who spoke on condition of anonymity due to safety concerns, said she’s experienced more online harassment since Musk’s late 2022 takeover of Twitter, now known as X. 

In a direct message that was shared with CNBC, a user of X threatened the researcher, saying they knew her home address and suggested the researcher plan where she, her partner and their “little one will live.” 

Within a week of receiving the message, the researcher and her family relocated. 

Misinformation researchers say they are getting no help from X. Rather, Musk’s company has launched several lawsuits against researchers and organizations for calling out X for failing to mitigate hate speech and false information. 

In November, X filed a suit against Media Matters after the nonprofit media watchdog published a report showing that hateful content on the platform appeared next to ads from companies including Apple, IBM and Disney. Those companies paused their ad campaigns following the Media Matters report, which X’s attorneys described as “intentionally deceptive.” 

Then there’s House Judiciary Chairman Jim Jordan, R-Ohio, who continues investigating alleged collusion between large advertisers and the nonprofit Global Alliance for Responsible Media (GARM), which was created in 2019 in part to help brands avoid having their promotions show up alongside content they deem harmful. In August, the World Federation of Advertisers said it was suspending GARM’s operations after X sued the group, alleging it organized an illegal ad boycott. 

GARM said at the time that the allegations “caused a distraction and significantly drained its resources and finances.”

Abdo of the Knight First Amendment Institute said billionaires like Musk can use those types of lawsuits to tie up researchers and nonprofits until they go bankrupt.

Representatives from X and the House Judiciary Committee didn’t respond to requests for comment.

Less access to tech platforms

X’s actions aren’t limited to litigation.

Last year, the company altered how its data library can be used and, instead of offering it for free, started charging researchers $42,000 a month for the lowest tier of the service, which allows access to 50 million tweets.

Musk said at the time that the change was needed because the “free API is being abused badly right now by bot scammers & opinion manipulators.” 

Kate Starbird, an associate professor at the University of Washington who studies misinformation on social media, said researchers relied on Twitter because “it was free, it was easy to get, and we would use it as a proxy for other places.”

“Maybe 90% of our effort was focused on just Twitter data because we had so much of it,” said Starbird, who was subpoenaed for a House Judiciary congressional hearing in 2023 related to her disinformation studies. 

A more stringent policy takes effect on Nov. 15, shortly after the election: under X’s new terms of service, users risk a $15,000 penalty for accessing more than 1 million posts in a day.

“One effect of X Corp.’s new terms of service will be to stifle that research when we need it most,” Abdo said in a statement. 

Meta CEO Mark Zuckerberg attends the Senate Judiciary Committee hearing on online child sexual exploitation at the U.S. Capitol in Washington, D.C., on Jan. 31, 2024.

Nathan Howard | Reuters

It’s not just X. 

In August, Meta shut down a tool called CrowdTangle, used to track misinformation and popular topics on its social networks. It was replaced with the Meta Content Library, which the company says provides “comprehensive access to the full public content archive from Facebook and Instagram.”

Researchers told CNBC that the change represented a significant downgrade. A Meta spokesperson said that the company’s new research-focused tool is more comprehensive than CrowdTangle and is better suited for election monitoring.

In addition to Meta, other apps like TikTok and Google-owned YouTube provide scant data access, researchers said, limiting how much content they can analyze. They say their work now often consists of manually tracking videos, comments and hashtags.

“We only know as much as our classifiers can find and only know as much as is accessible to us,” said Rachele Gilman, director of intelligence for The Global Disinformation Index. 

In some cases, companies are even making it easier for falsehoods to spread. 

For example, YouTube said in June of last year it would stop removing false claims about 2020 election fraud. And ahead of the 2022 U.S. midterm elections, Meta introduced a new policy allowing political ads to question the legitimacy of past elections. 

YouTube today works with hundreds of academic researchers from around the world through its YouTube Researcher Program, which allows access to its global data API “with as much quota as needed per project,” a company spokeswoman told CNBC in a statement. She added that increasing access to new areas of data for researchers isn’t always straightforward due to privacy risks.

A TikTok spokesperson said the company offers qualifying researchers in the U.S. and the EU free access to various, regularly updated tools to study its service. The spokesperson added that TikTok actively engages researchers for feedback.

Not giving up

As this year’s election hits its home stretch, one particular concern for researchers is the period between Election Day and Inauguration Day, said Katie Harbath, CEO of tech consulting firm Anchor Change. 

Fresh in everyone’s mind is Jan. 6, 2021, when rioters stormed the U.S. Capitol while Congress was certifying the results, an event that was organized in part on Facebook. Harbath, who was previously a public policy director at Facebook, said the certification process could again be messy. 

“There’s this period of time where we might not know the winner, so companies are thinking about ‘what do we do with content?'” Harbath said. “Do we label, do we take down, do we reduce the reach?” 

Despite their many challenges, researchers have scored some legal victories in their efforts to keep their work alive.

In March, a California federal judge dismissed a lawsuit by X against the nonprofit Center for Countering Digital Hate, ruling that the litigation was an attempt to silence X’s critics.

Three months later, a ruling by the Supreme Court allowed the White House to urge social media companies to remove misinformation from their platforms.

Jankowicz, for her part, has refused to give up. 

Earlier this year, she founded the American Sunlight Project, which says its mission is “to ensure that citizens have access to trustworthy sources to inform the choices they make in their daily lives.” Jankowicz told CNBC that she wants to offer support to those in the field who have faced threats and other challenges.

“The uniting factor is that people are scared about publishing the sort of research that they were actively publishing around 2020,” Jankowicz said. “They don’t want to deal with threats, they certainly don’t want to deal with legal threats and they’re worried about their positions.”



Instacart ends AI-driven pricing tests that drove up costs for some shoppers


FILE PHOTO: Instacart shopper, Loralyn Geggatt makes a delivery to a customer’s home in Falmouth, MA on April 7, 2020.

David L. Ryan | Boston Globe | Getty Images

Instacart said Monday it will cease the use of artificial intelligence-driven pricing tests on its grocery delivery platform after the practice was scrutinized in a wide-ranging study and rebuked by lawmakers.

The company said in a blog post that retailers will no longer be able to use its Eversight technology to run pricing experiments on its platform, effective immediately.

“We understand that the tests we ran with a small number of retail partners that resulted in different prices for the same item at the same store missed the mark for some customers,” the company wrote. “At a time when families are working exceptionally hard to stretch every grocery dollar, those tests raised concerns, leaving some people questioning the prices they see on Instacart. That’s not okay – especially for a company built on trust, transparency, and affordability.”

Instacart acquired Eversight for $59 million in 2022. Eversight’s software allows retailers to carry out pricing tests to gauge shoppers’ reactions to higher or lower prices on certain items.

Instacart said at the time that the technology would help retailers improve sales and growth, while “also surfacing the best deals for customers.”


Earlier this month, a study by Consumer Reports and other organizations found that Instacart’s algorithmic pricing tools caused shoppers to pay different prices for identical items from the same store.

The total cost for the same basket of goods at a single store varied by about 7%, which can result in over $1,000 in extra annual costs for customers. Instacart responded by saying that retailers determine prices listed on the app.

The company also rejected characterizations of the technology as surveillance pricing or dynamic pricing, and said the tests were never based on personal, demographic or individual-level user data.

Reuters reported last week that the Federal Trade Commission had sent a civil investigative demand to Instacart about its pricing practices.

Separately, Instacart last week was ordered to pay $60 million in refunds to customers to settle claims raised by the FTC that it used deceptive tactics in its subscription sign-up, “satisfaction guarantee” advertising and other processes.

Instacart denied any allegations of wrongdoing. The company said it answered questions from the FTC about its AI pricing tools as part of that settlement.



Tech stocks rebound, Google’s boomerang strategy, Xbox’s slump and more in Morning Squawk


Wall Street and Broad St. signs are seen as New York Stock Exchange building decorated for Christmas at the Financial District in New York City, United States on December 16, 2020.

Tayfun Coskun | Anadolu Agency | Getty Images

This is CNBC’s Morning Squawk newsletter.

Here are five key things investors need to know to start the trading day:

1. Here comes Santa Claus?

Technology stocks rebounded to end last week, helping assuage the latest worries about the artificial intelligence trade. The question now is whether Santa is coming to town — which, in this case, means Wall Street.

Here’s what to know:

2. Epstein files

This photo illustration taken in Washington, DC, on December 19, 2025 shows redacted documents after the US Justice Department began releasing the long-awaited records from the investigation into the politically explosive case of convicted sex offender Jeffrey Epstein.

Mandel Ngan | AFP | Getty Images

The Justice Department released some of its investigative files tied to sexual predator Jeffrey Epstein on Friday. The release came on the deadline set by the Epstein Files Transparency Act, but it did not include all files as was instructed by the legislation.

The DOJ’s website now has an “Epstein Library” with a search box for keywords in the newly released files. However, CNBC found the search box did not immediately work as intended.

A number of documents were reportedly removed from the Justice Department’s site. A photo featuring President Donald Trump was later reposted after backlash.

3. Job search

Josh Woodward, VP of Google Labs, addresses the crowd during Google’s annual I/O developers conference in Mountain View, California on May 20, 2025.

Camille Cohen | AFP | Getty Images

Google hasn’t been looking far to staff up its AI teams. CNBC’s Jennifer Elias reported that about one-fifth of all AI software engineers hired by the tech giant this year were boomerangs, a term used for ex-employees who return.

That comes as 16-year Google veteran Josh Woodward has taken the helm of Gemini, the crown jewel of Alphabet’s AI ambitions, this year. The 42-year-old Oklahoma native also kept his role managing Google Labs.

Alphabet also made the news this weekend for a different business: its driverless ride-hailing service Waymo. The company had temporarily suspended operations in the San Francisco Bay Area following power outages.


4. (Dis)like

In this photo illustration, iPhone screens display various social media apps on the screens on February 9, 2025 in Bath, England.

Anna Barclay | Getty Images News | Getty Images

C-suite leaders are learning a lesson that their younger, rank-and-file staffers grew up knowing: There’s a dark side to social media.

Executives and founders have been told that active social media usage is good for their personal brands and company awareness. But a growing body of anecdotes over recent years has shown that missteps can leave them — and sometimes the businesses they represent — in hot water.

Still, that doesn’t mean there aren’t benefits to being online, even if the reaction can be negative. As one founder put it to CNBC: “As long as your name is in their mouth, you’re doing something right.”

5. Glow up

While supplements tend to be more popular around New Year’s resolution season, Grüns is hoping its sales get a holiday bump. It’s selling some packs of gummies with holiday flair, including a Grinch-inspired sour punch flavor.

Courtesy of Gruns

Make room, candy bars. Supplements may be making their way into stockings this year.

The wellness category is slated to gain ground this shopping season, with retailers giving this sector shelf space as consumers make resolutions for the new year. CNBC’s Melissa Repko reported that brands like Grüns and Neom Wellbeing are aiming to win shoppers’ interest this season by selling holiday-themed items.

On the other hand, Microsoft’s Xbox may not be a hot item under trees as the gaming console remains in a slump. Between company layoffs, studio closures and price increases, CNBC’s Jaures Yip found that some are wondering if the Xbox is finally dead.

The Daily Dividend

Here are some of the events we’re keeping an eye on this holiday-shortened week:

  • Tuesday: Real GDP and consumer confidence data
  • Wednesday: Early stock market close for Christmas Eve
  • Thursday: Stock market closed for Christmas

CNBC Pro subscribers can see a calendar and rundown for the week here.

CNBC’s Jonathan Vanian, Julia Boorstin, Laya Neelakandan, Chloe Taylor, Liz Napolitano, Dan Mangan, Sean Conlon, Jennifer Elias, Lora Kolodny, Mike Winters, Melissa Repko and Jaures Yip contributed to this report. Melodie Warner edited this edition.


Uber, Lyft set to trial robotaxis in the UK in partnership with China’s Baidu


A Baidu Apollo RT6 robotaxi during Baidu’s Apollo Day in Wuhan, China, on Wednesday, May 15, 2024.

Bloomberg | Bloomberg | Getty Images

Chinese tech giant Baidu has announced plans to bring robotaxis to London starting next year through its partnerships with Lyft and Uber, as the UK emerges as a growing autonomous vehicle battleground.

The announced collaborations will bring Baidu’s Apollo Go autonomous vehicles to the British capital through the Uber and Lyft platforms, the companies said on their respective social media accounts. 

Lyft’s testing of Baidu’s initial fleet of dozens of vehicles will begin in 2026, pending regulatory approval, “with plans to scale to hundreds from there,” Lyft CEO David Risher said in a post on social media platform X on Monday.

Meanwhile, Uber said that its first pilot is expected to start in the first half of 2026. “We’re excited to accelerate Britain’s leadership in the future of mobility, bringing another safe and reliable travel option to Londoners next year,” the company added.

The moves add to Baidu’s growing global footprint, which it says includes 22 cities and more than 250,000 weekly trips, as it races against other Chinese players like WeRide and Western giants like Alphabet’s Waymo.

The UK, in particular, has seen a wave of interest from driverless taxi companies, following the government’s announcement in June that it would accelerate its plans to allow autonomous vehicle tech on public roads. 

The government now aims to begin permitting robotaxis to operate in small-scale pilots starting in spring 2026, with Baidu likely aiming to be among the first.

The city of London has also established a “Vision Zero” goal to eliminate all serious injuries and deaths in its transportation systems by 2041, with autonomous driving technology expected to play a large role. 

News of Baidu pilots comes as its competitor Waymo also looks to begin testing in London, with plans for a full service launch in 2026. Waymo currently operates or plans to launch a service or test its fleet in 26 markets, including major cities like Tokyo and New York City.

Baidu, for its part, has been aggressively expanding globally, with testing rolling out in international markets like the United Arab Emirates and Switzerland.
