Nina Jankowicz, a disinformation expert and vice president at the Centre for Information Resilience, gestures during an interview with AFP in Washington, DC, on March 23, 2023.
Bastien Inzaurralde | AFP | Getty Images
Nina Jankowicz’s dream job has turned into a nightmare.
For the past 10 years, she’s been a disinformation researcher, studying and analyzing the spread of Russian propaganda and internet conspiracy theories. In 2022, she was appointed to the White House’s Disinformation Governance Board, which was created to help the Department of Homeland Security fend off online threats.
Now, Jankowicz’s life is filled with government inquiries, lawsuits and a barrage of harassment, all the result of an extreme level of hostility directed at people whose mission is to safeguard the internet, particularly ahead of presidential elections.
Jankowicz, the mother of a toddler, says her anxiety has run so high, in part due to death threats, that she recently had a dream that a stranger broke into her house with a gun. She threw a punch in the dream that, in reality, grazed her bedside baby monitor. Jankowicz said she tries to stay out of public view and no longer publicizes when she’s going to events.
“I don’t want somebody who wishes harm to show up,” Jankowicz said. “I have had to change how I move through the world.”
In prior election cycles, researchers like Jankowicz were heralded by lawmakers and company executives for their work exposing Russian propaganda campaigns, Covid conspiracies and false voter fraud accusations. But 2024 has been different, marred by the potential threat of litigation by powerful people like X owner Elon Musk, as well as congressional investigations conducted by far-right politicians and an ever-increasing number of online trolls.
Alex Abdo, litigation director of the Knight First Amendment Institute at Columbia University, said the constant attacks and legal expenses have “unfortunately become an occupational hazard” for these researchers. Abdo, whose institute has filed amicus briefs in several lawsuits targeting researchers, said the “chill in the community is palpable.”
Jankowicz is one of more than two dozen researchers who spoke to CNBC about the changing environment of late and the safety concerns they now face for themselves and their families. Many declined to be named to protect their privacy and avoid further public scrutiny.
Whether they agreed to be named or not, the researchers all spoke of a more treacherous landscape this election season than in the past. The researchers said that conspiracy theories claiming that internet platforms try to silence conservative voices began during Donald Trump’s first campaign for president nearly a decade ago and have steadily increased since then.
SpaceX and Tesla founder Elon Musk speaks at a town hall with Republican candidate U.S. Senate Dave McCormick at the Roxain Theater on October 20, 2024 in Pittsburgh, Pennsylvania.
Michael Swensen | Getty Images
‘Those attacks take their toll’
The chilling effect is of particular concern because online misinformation is more prevalent than ever and, particularly with the rise of artificial intelligence, often even more difficult to recognize, according to the observations of some researchers. It’s the internet equivalent of taking cops off the streets just as robberies and break-ins are surging.
Jeff Hancock, president of the Stanford Internet Observatory, said we’re in a “trust and safety winter.” He’s experienced it firsthand.
After the SIO’s work looking into misinformation and disinformation during the 2020 election, the institute was sued three times in 2023 by conservative groups, who alleged that the organization’s researchers colluded with the federal government to censor speech. Stanford spent millions of dollars to defend its staff and students fighting the lawsuits.
During that time, SIO downsized significantly.
“Many people have lost their jobs or worse and especially that’s the case for our staff and researchers,” said Hancock, during the keynote of his organization’s third annual Trust and Safety Research Conference in September. “Those attacks take their toll.”
SIO didn’t respond to CNBC’s inquiry about the reason for the job cuts.
Google last month laid off several employees, including a director, in its trust and safety research unit just days before some of them were scheduled to speak at or attend the Stanford event, according to sources close to the layoffs who asked not to be named. In March, the search giant laid off a handful of employees on its trust and safety team as part of broader staff cuts across the company.
Google didn’t specify the reason for the cuts, telling CNBC in a statement that, “As we take on more responsibilities, particularly around new products, we make changes to teams and roles according to business needs.” The company said it’s continuing to grow its trust and safety team.
Jankowicz said she began to feel the hostility two years ago after her appointment to the Biden administration’s Disinformation Governance Board.
She and her colleagues say they faced repeated attacks from conservative media and Republican lawmakers, who alleged that the group limited free speech. After just four months in operation, the board was shuttered.
In an August 2022 statement announcing the termination of the board, DHS didn’t provide a specific reason for the move, saying only that it was following the recommendation of the Homeland Security Advisory Council.
Jankowicz was then subpoenaed as a part of an investigation by a subcommittee of the House Judiciary Committee intended to discover whether the federal government was colluding with researchers to “censor” Americans and conservative viewpoints on social media.
“I’m the face of that,” Jankowicz said. “It’s hard to deal with.”
Since being subpoenaed, Jankowicz said she’s also had to deal with a “cyberstalker,” who repeatedly posted about her and her child on social media site X, resulting in the need to obtain a protective order. Jankowicz has spent more than $80,000 in legal bills on top of the constant fear that online harassment will lead to real-world dangers.
On notorious online forum 4chan, Jankowicz’s face graced the cover of a munitions handbook, a manual teaching others how to build their own guns. Another person used AI software and a photo of Jankowicz’s face to create deepfake pornography, essentially putting her likeness onto explicit videos.
“I have been recognized on the street before,” said Jankowicz, who wrote about her experience in a 2023 story in The Atlantic with the headline, “I Shouldn’t Have to Accept Being in Deepfake Porn.”
One researcher, who spoke on condition of anonymity due to safety concerns, said she’s experienced more online harassment since Musk’s late 2022 takeover of Twitter, now known as X.
In a direct message that was shared with CNBC, a user of X threatened the researcher, saying they knew her home address and suggested the researcher plan where she, her partner and their “little one will live.”
Within a week of receiving the message, the researcher and her family relocated.
Misinformation researchers say they are getting no help from X. Rather, Musk’s company has launched several lawsuits against researchers and organizations for calling out X for failing to mitigate hate speech and false information.
In November, X filed a suit against Media Matters after the nonprofit media watchdog published a report showing that hateful content on the platform appeared next to ads from companies including Apple, IBM and Disney. Those companies paused their ad campaigns following the Media Matters report, which X’s attorneys described as “intentionally deceptive.”
Then there’s House Judiciary Chairman Jim Jordan, R-Ohio, who continues investigating alleged collusion between large advertisers and the nonprofit Global Alliance for Responsible Media (GARM), which was created in 2019 in part to help brands avoid having their promotions show up alongside content they deem harmful. In August, the World Federation of Advertisers said it was suspending GARM’s operations after X sued the group, alleging it organized an illegal ad boycott.
GARM said at the time that the allegations “caused a distraction and significantly drained its resources and finances.”
Abdo of the Knight First Amendment Institute said billionaires like Musk can use those types of lawsuits to tie up researchers and nonprofits until they go bankrupt.
Representatives from X and the House Judiciary Committee didn’t respond to requests for comment.
Less access to tech platforms
X’s actions aren’t limited to litigation.
Last year, the company altered how its data library can be used and, instead of offering it for free, started charging researchers $42,000 a month for the lowest tier of the service, which allows access to 50 million tweets.
Musk said at the time that the change was needed because the “free API is being abused badly right now by bot scammers & opinion manipulators.”
Kate Starbird, an associate professor at the University of Washington who studies misinformation on social media, said researchers relied on Twitter because “it was free, it was easy to get, and we would use it as a proxy for other places.”
“Maybe 90% of our effort was focused on just Twitter data because we had so much of it,” said Starbird, who was subpoenaed for a House Judiciary congressional hearing in 2023 related to her disinformation studies.
A more stringent policy will take effect on Nov. 15, shortly after the election, when X says that under its new terms of service, users risk a $15,000 penalty for accessing over 1 million posts in a day.
“One effect of X Corp.’s new terms of service will be to stifle that research when we need it most,” Abdo said in a statement.
Meta CEO Mark Zuckerberg attends the Senate Judiciary Committee hearing on online child sexual exploitation at the U.S. Capitol in Washington, D.C., on Jan. 31, 2024.
Nathan Howard | Reuters
It’s not just X.
In August, Meta shut down a tool called CrowdTangle, used to track misinformation and popular topics on its social networks. It was replaced with the Meta Content Library, which the company says provides “comprehensive access to the full public content archive from Facebook and Instagram.”
Researchers told CNBC that the change represented a significant downgrade. A Meta spokesperson said that the company’s new research-focused tool is more comprehensive than CrowdTangle and is better suited for election monitoring.
In addition to Meta, other apps like TikTok and Google-owned YouTube provide scant data access, researchers said, limiting how much content they can analyze. They say their work now often consists of manually tracking videos, comments and hashtags.
“We only know as much as our classifiers can find and only know as much as is accessible to us,” said Rachele Gilman, director of intelligence for The Global Disinformation Index.
In some cases, companies are even making it easier for falsehoods to spread.
For example, YouTube said in June of last year it would stop removing false claims about 2020 election fraud. And ahead of the 2022 U.S. midterm elections, Meta introduced a new policy allowing political ads to question the legitimacy of past elections.
YouTube works with hundreds of academic researchers from around the world today through its YouTube Researcher Program, which allows access to its global data API “with as much quota as needed per project,” a company spokeswoman told CNBC in a statement. She added that increasing access to new areas of data for researchers isn’t always straightforward due to privacy risks.
A TikTok spokesperson said the company offers qualifying researchers in the U.S. and the EU free access to various, regularly updated tools to study its service. The spokesperson added that TikTok actively engages researchers for feedback.
Not giving up
As this year’s election hits its home stretch, one particular concern for researchers is the period between Election Day and Inauguration Day, said Katie Harbath, CEO of tech consulting firm Anchor Change.
Fresh in everyone’s mind is Jan. 6, 2021, when rioters stormed the U.S. Capitol while Congress was certifying the results, an event that was organized in part on Facebook. Harbath, who was previously a public policy director at Facebook, said the certification process could again be messy.
“There’s this period of time where we might not know the winner, so companies are thinking about ‘what do we do with content?'” Harbath said. “Do we label, do we take down, do we reduce the reach?”
Despite their many challenges, researchers have scored some legal victories in their efforts to keep their work alive.
In March, a California federal judge dismissed a lawsuit by X against the nonprofit Center for Countering Digital Hate, ruling that the litigation was an attempt to silence X’s critics.
Three months later, a ruling by the Supreme Court allowed the White House to urge social media companies to remove misinformation from their platforms.
Jankowicz, for her part, has refused to give up.
Earlier this year, she founded the American Sunlight Project, which says its mission is “to ensure that citizens have access to trustworthy sources to inform the choices they make in their daily lives.” Jankowicz told CNBC that she wants to offer support to those in the field who have faced threats and other challenges.
“The uniting factor is that people are scared about publishing the sort of research that they were actively publishing around 2020,” Jankowicz said. “They don’t want to deal with threats, they certainly don’t want to deal with legal threats and they’re worried about their positions.”
Social media giant Reddit has launched a lawsuit against artificial intelligence company Perplexity, alleging that it illegally scraped user posts to train its AI model, marking the latest data-rights clash between content owners and the AI industry.
The complaint filed in New York federal court on Wednesday also named three defendants, which Reddit says helped Perplexity collect its data: Lithuanian data scraper Oxylabs, “former Russian botnet” AWMProxy, and Texas startup SerpApi.
Reddit alleged that the three smaller entities were able to extract its copyrighted content “by masking their identities, hiding their locations and disguising their web scrapers as regular people.”
Perplexity, which runs an AI-powered search engine, denied the allegations and accused Reddit of “extortion” and opposition to an open internet, while SerpApi told CNBC it “strongly disagrees” with Reddit’s claims and intends to defend itself in court.
The case represents one of many filed by content owners accusing AI firms of using copyrighted material without permission to train their large language models. Reddit, in particular, has been on the front lines of that battle, having launched a similar ongoing lawsuit against AI startup Anthropic in June. CNBC was unable to reach Oxylabs and AWMProxy.
In a statement shared with CNBC, Ben Lee, chief legal officer at Reddit, said that AI companies are “locked in an arms race for quality human content” and that this pressure has fueled an “industrial-scale ‘data laundering’ economy.”
Scrapers bypass technological protections to steal data, then sell it to clients hungry for training material, he said, and Reddit is a prime target because it’s one of the largest and most dynamic collections of human conversation ever created.
Reddit — which hosts over 100,000 interest-based “subreddit” communities — said in its lawsuit that its user posts had become the most commonly cited source for AI-generated answers on Perplexity.
It added that it sent Perplexity a cease-and-desist letter, after which Perplexity increased the volume of its citations to Reddit “forty-fold.”
AI researchers have previously noted that Reddit’s large volume of moderated conversations can help make AI chatbots produce more natural-sounding responses.
In the age of artificial intelligence, Reddit has worked to leverage its massive data pool, permitting access to it only through AI-related licensing agreements. The social media company has signed such agreements with OpenAI and Alphabet‘s Google.
In a response to the lawsuit posted on the Reddit platform, Perplexity argued that it does not train AI models on Reddit content but merely summarizes and cites public Reddit discussions, and that it is therefore “impossible” for it to sign a license agreement.
“A year ago, after explaining this, Reddit insisted we pay anyway, despite lawfully accessing Reddit data. Bowing to strong arm tactics just isn’t how we do business,” the statement read, going on to describe the suit as a “show of force in Reddit’s training data negotiations with Google and OpenAI.”
“Perplexity believes this is a sad example of what happens when public data becomes a big part of a public company’s business model,” Perplexity added, noting that data licensing has become an increasingly important source of revenue for Reddit.
In February, Reddit’s COO Jen Wong told the trade publication Adweek that AI licensing deals with Google and OpenAI made up nearly 10% of Reddit’s revenue.
Elon Musk listens as reporters ask U.S. President Donald Trump and South Africa President Cyril Ramaphosa questions during a press availability in the Oval Office at the White House on May 21, 2025 in Washington, DC.
Chip Somodevilla | Getty Images
There was a lot missing from Tesla’s third-quarter earnings call.
CEO Elon Musk said nothing about demand for the company’s electric vehicles after a key federal tax credit expired last month. There was no mention of the Cybertruck or the impact of tariffs on auto parts. Investors got no sign for how the fourth quarter is shaping up.
That all helps explain why the stock sank almost 4% in extended trading.
Rather than focus on sales, margins and earnings (which missed estimates), Musk took a familiar path, making bold promises and laying out his futuristic vision for the business. It starts with robotaxis, and Musk’s view that skeptical investors and much of the public fail to see what’s coming.
“People just don’t quite appreciate the degree to which this will take off — where it’s honestly — it’s going to be like a shock wave,” Musk said in his opening remarks. “We have millions of cars out there that, with a software update, become full self-driving cars and, you know, we’re making a couple million a year.”
Musk has for years promised that Tesla’s EVs will be able to do work for their owners, making them money while they sleep by ferrying passengers or goods around without a driver. But while Alphabet’s Waymo is aggressively entering new markets with its commercial robotaxi service, and Baidu’s Apollo Go is taking off in China and elsewhere, Tesla is still limited to a few pilot projects.
During Tesla’s prior earnings call in July, Musk predicted that the company would have autonomous ride hailing available to “probably half the population of the U.S. by the end of the year.” The company still doesn’t produce or sell cars that are safe to use without a human ready to steer or brake at all times.
On Wednesday, Musk said Tesla would have its robotaxi service operating without human drivers in Austin by the end of the year and that it would be running in eight to 10 cities by the close of 2025, at least with drivers on board.
As for its current fleet of cars, finance chief Vaibhav Taneja said on the call that the customer base for FSD Supervised, Tesla’s partially automated driving system, “is still small,” with 12% of users paying for the system. Taneja didn’t offer an average sale price that subscribers are paying after Tesla ran a number of promotions to drive uptake.
Tesla said in its investor deck that FSD revenue was lower than in the year-ago period, when the figure was $326 million. That means FSD accounted for less than 2% of total revenue in the latest quarter.
After robotaxis, Musk turned to humanoid robots, repeating his prediction that Optimus has the “potential to be the biggest product of all time.”
Optimus is Tesla’s bipedal humanoid robot that’s in development but not yet commercially deployed. Musk has previously said the robots will be so sophisticated that they can serve as factory workers or babysitters.
Now he’s raising the bar.
“Optimus will be an incredible surgeon,” Musk said on Wednesday. He said that with Optimus and self driving, “you can actually create a world where there is no poverty, where everyone has access to the finest medical care.”
Musk said Tesla will likely demo a new version of Optimus, which he called V3, in the first quarter of 2026.
At the end of the call, Musk kept the focus on robots but combined it with another topic of importance: his pay package.
A Tesla Optimus robot scoops popcorn and waves at attendees during the opening of the Tesla Diner and drive-in restaurant and supercharger on Santa Monica Blvd. in the Hollywood neighborhood of Los Angeles on July 21, 2025.
Patrick T. Fallon | Afp | Getty Images
In September, Tesla introduced a new pay plan that could be worth $1 trillion and increase Musk’s stake in the company by 12%. Tesla will hold its annual shareholder meeting in early November, when the plan will be up for a vote.
“If we build this robot army, do I have at least a strong influence over that robot army?” Musk said on the call. “I don’t feel comfortable building that robot army if I don’t have at least a strong influence.”
He also took aim at proxy advisors Institutional Shareholder Services and Glass Lewis after the firms recommended shareholders vote against approving his new pay plan.
Musk said ISS and Glass Lewis “have no freaking clue,” and described them as “corporate terrorists.”
Representatives from the two firms didn’t immediately respond to requests for comment.
In the meantime, Tesla still relies on auto sales for the vast majority of its revenue. And while revenue increased 12% in the third quarter from a year earlier, that followed two straight year-over-year declines, and analysts expect a drop of about 2% in the fourth quarter.
Absent from the call was any discussion of what Tesla may be doing in the near term to restore consumer enthusiasm.
Tesla’s brand ranking declined to the 25th spot on the Interbrand 2025 Best Global Brands list out earlier this month, from 12th in 2024. The report said that “Tesla was once the main disruptive force in the automotive industry,” but Musk’s political activities along with a lack of new products “has led to concerns about Tesla’s ability to sustain high margins.”
Through Tesla’s online forum, investors submitted questions about new products in the pipeline. But on the call, investor relations lead Travis Axelrod twice refused to read them.
“This is not the appropriate venue to cover that,” he said.
The New Jersey attorney general sued Amazon on Wednesday, alleging the company has violated the rights of thousands of pregnant employees and staffers with disabilities who work in several of its facilities in the state.
The complaint, filed in Essex County Superior Court by the office of Attorney General Matthew Platkin, alleges Amazon violated state anti-discrimination law in how it treats pregnant employees and employees with disabilities when they request a work accommodation.
The state said the lawsuit follows a years-long investigation by its civil rights division into Amazon’s treatment of workers at warehouses across New Jersey.
According to the suit, the state’s investigation found that since October 2015, Amazon allegedly violated pregnant and disabled employees’ rights by placing them on unpaid leave when they request accommodations, denied them reasonable accommodations and “unreasonably” delayed its responses to workers’ requests.
It also alleged that Amazon “unlawfully” retaliates against these workers when they seek an accommodation, including by firing them. After workers are granted an accommodation, Amazon allegedly fired some employees for “failing to meet the company’s rigid productivity requirements.”
“There is no excuse for Amazon’s shameful treatment of pregnant workers and workers with disabilities,” Platkin said in a statement. “Amazon’s egregious conduct has caused enormous damage to pregnant workers and workers with disabilities in our state, and it must stop now.”
Amazon spokesperson Kelly Nantel said in a statement that accusations it doesn’t follow federal and state laws like New Jersey’s anti-discrimination law are “simply not true.”
“Ensuring the health and well-being of our employees is our top priority, and we’re committed to providing a safe and supportive environment for everyone,” Nantel said.
The company said it approves more than 99% of pregnancy accommodation requests submitted by workers. Amazon also denied placing pregnant workers automatically on leave, as well as claims that it unjustifiably rejects accommodation requests.
The complaint seeks to require that Amazon pay unspecified compensatory damages and civil fines, as well as court orders requiring the company to adjust its policies and to submit to monitoring and reporting requirements for five years, among other remedies.
One incident described in the complaint states that an unnamed pregnant employee received an accommodation that permitted her to take additional breaks and restricted her from lifting items heavier than 15 pounds.
Less than a month after the accommodation was approved, she was allegedly terminated for “not meeting packing numbers,” the lawsuit states, even though her accommodation required her to pack fewer items each shift.
In another case, a pregnant employee’s accommodation request was closed due to a lack of medical paperwork when the requested documents weren’t required. While the worker tried to resubmit her request, she allegedly received three warnings for “poor productivity,” and was ultimately fired for “not making rate,” according to the complaint.
Amazon’s internal investigation of her case didn’t confirm that the employee was fired due to her pregnancy, but the company ultimately reinstated her with backpay, the lawsuit says.
“Amazon’s discriminatory practices and systemic failure to accommodate pregnant workers and workers with disabilities have the effect of pushing these employees out of Amazon’s workforce — the precise outcome the [Law Against Discrimination] was intended to prevent,” according to the lawsuit.
Amazon’s treatment of pregnant employees and others in its sprawling front-line workforce has come under scrutiny in the past.
The company, which is the nation’s second-largest private employer, has faced lawsuits from workers at its warehouses, who alleged the company failed to accommodate them once they were pregnant, then fired them for failing to meet performance standards, CNET reported.
The Equal Employment Opportunity Commission last year opened a probe into Amazon’s treatment of pregnant workers in its warehouses after six senators urged it to do so, citing a “concerning pattern of mistreatment.”
New York’s Division of Human Rights in 2022 filed a complaint against Amazon alleging it discriminates against pregnant workers and workers with disabilities at its facilities.
Amazon said it doesn’t comment on ongoing litigation.