
Voters cast ballots on election day at the Fairfax County Government Center polling location in Fairfax, Virginia, on November 2, 2021.

Andrew Caballero-Reynolds | AFP | Getty Images

Social media platforms including Meta’s Facebook and Instagram, Twitter, TikTok and Google’s YouTube are readying themselves for another heated Election Day this week.

The companies now regularly come under close scrutiny around election time, something that accelerated following findings that Russian agents used social media to sow division in the run-up to the 2016 election. During the last presidential election in 2020, the platforms faced the challenge of moderating election denialism as an outgoing president stoked the false claims himself, leading several of them to at least temporarily suspend him after the Jan. 6 insurrection.

This year, the platforms are using all of those experiences to prepare for threats to democracy and safety as voters decide who will represent them in Congress, governor’s offices and state legislatures.

Here’s how all the major platforms are planning to police their services on Election Day.

Meta

Onur Dogman | Lightrocket | Getty Images

Meta’s Facebook has been one of the most scrutinized platforms when it comes to misinformation. In response to years of criticism, it has bolstered its approach to election integrity, and it has said it will use many of the same policies and safeguards this year that it had in 2020.

Meta has stood up its Elections Operations Center, which it likened to a command center, to bring together different teams throughout the company to monitor and quickly address threats they see on the platform. It’s used this model dozens of times worldwide since 2018.

Facebook and Instagram also share reliable information with users about how to vote (including in languages other than English). The company said it’s already sent more than 80 million election notifications this year on the two platforms.

The company uses third-party fact-checkers to help label false posts so they can be demoted in the algorithm before they go viral. Meta said it’s investing an additional $5 million in fact-checking and media literacy efforts before Election Day.

Meta said it’s prepared to seek out threats and coordinated harassment against election officials and poll workers, who were the subject of misinformation campaigns and threats during the last election.

The company is once again banning new political ads in the week before the election, as it did in 2020. While ads submitted before the blackout period can still run, political advertisers have expressed frustration about the policy since it’s often helpful to respond to last-minute attacks and polling with fresh messaging. Facebook already has extra screening for those who sign up as political advertisers and maintains information about political ads in a database available to the public.

Meta has pledged to remove posts that seek to suppress voting, like misinformation about how and when to vote. It also said it would reject ads that discourage voting or question the legitimacy of the upcoming election.

In a study by New York University’s Cybersecurity for Democracy and international NGO Global Witness testing election integrity ad screens across social media platforms, the groups found Facebook was mostly successful in blocking ads they submitted with election disinformation. Still, 20% to 50% of the ads tested were approved, depending on what language they were in and whether they were submitted from inside or outside the U.S.

The researchers also violated Facebook’s policies about who is allowed to place ads, with one of the test accounts placing ads from the U.K. In addition, the researchers did not go through Facebook’s authorization process, which is supposed to provide extra scrutiny for political advertisers.

The researchers did not run the ads once they were approved, so it’s not clear whether Facebook would have blocked them during that step.

A Meta spokesperson said in a statement published with the study that it was “based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world.”

“We invest significant resources to protect elections, from our industry-leading transparency efforts to our enforcement of strict protocols on ads about social issues, elections, or politics – and we will continue to do so,” a Meta spokesperson said in a statement to CNBC. The New York Times first reported the statement.

TikTok

TikTok owner ByteDance has launched a women’s fashion website called If Yooou. Pinduoduo launched an e-commerce site in the U.S. called Temu. The two companies are the latest Chinese tech giants looking to crack the international e-commerce market dominated by Amazon.

Mike Kemp | In Pictures | Getty Images

TikTok has become an increasingly important platform for all sorts of discussion, but it’s tried to keep its service at arm’s length from the most heated political discussions.

TikTok does not allow political ads and has stated its desire for the service to be “a fun, positive and joyful experience.”

“TikTok is first and foremost an entertainment platform,” the company said in a September blog post. It added that it wants to “foster and promote a positive environment that brings people together, not divide them.”

Still, the NYU and Global Witness study found TikTok performed the worst of the platforms it tested in blocking election-related misinformation in ads. The only ad rejected, submitted in both English and Spanish, falsely claimed Covid vaccines were required to vote; ads promoting the wrong date for the election or encouraging voters to vote twice were approved.

TikTok did not provide a comment on the report but told the researchers in a statement that it values “feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”

The service said that while it doesn’t “proactively encourage politicians or political parties to join TikTok,” it welcomes them to do so. The company announced in September that it would try out mandatory verification for government, politician and political party accounts in the U.S. through the midterms and disable those types of accounts from running ads.

TikTok said it would allow those accounts to run ads in limited circumstances, like public health and safety campaigns, but that they’d have to work with a TikTok representative to do so.

TikTok also barred these accounts from other ways to make money on the platform, like through tipping and e-commerce. Politician and political party accounts are also not allowed to solicit campaign donations on their pages.

TikTok has said it’s committed to stemming the spread of misinformation, including by working with experts to strengthen its policies and outside fact-checkers to verify election-related posts.

It’s also sought to build on its experiences from the last election, like by surfacing its election center with information about how to vote earlier in the cycle. It’s also tried to do more to educate creators on the platform about what kinds of paid partnerships are and are not allowed and how to disclose them.

Twitter

A video grab taken from a video posted on the Twitter account of billionaire Tesla chief Elon Musk on October 26, 2022 shows himself carrying a sink as he enters the Twitter headquarters in San Francisco. Elon Musk changed his Twitter profile to “Chief Twit” and posted video of himself walking into the social network’s California headquarters carrying a sink, days before his contentious takeover of the company must be finalized.

– | Afp | Getty Images

Twitter is in a unique position this Election Day, after billionaire Elon Musk bought the platform and took it private less than two weeks before voters headed to the polls.

Musk has expressed a desire to loosen Twitter’s content moderation policies. He’s said decisions on whether to reinstate banned users, a group that includes former President Donald Trump, would take a few weeks at least.

But shortly after the deal, Bloomberg reported the team responsible for content moderation lost access to some of their tools. Twitter’s head of safety and integrity, Yoel Roth, characterized that move as a normal measure for a recently acquired company to take and said Twitter’s rules were still being enforced at scale.

But the timing shortly before the election is particularly stark. Musk said teams would have access to all the necessary tools by the end of the week before the election, according to a civil society group leader who was on a call with Musk earlier in the week.

Before Musk’s takeover, Twitter laid out its election integrity plans in an August blog post. Those included activating its civic integrity policy, which allows it to label and demote misleading information about the election, sharing “prebunks,” or proactively debunked false claims about the election, and surfacing relevant news and voting information in a dedicated tab. Twitter has not allowed political ads since 2019.

Google/YouTube

People walk past a billboard advertisement for YouTube on September 27, 2019 in Berlin, Germany.

Sean Gallup | Getty Images

Outside of Facebook, Google and its video platform YouTube are also important venues where advertisers seek to get their campaign messages out.

The platforms require advertisers running election messages to become verified and to disclose who paid for the ad. Political ads, including information on how much money was behind them and how often they were viewed, are included in the company’s transparency report.

Prior to the last election, Google made it so users could no longer be targeted quite as narrowly with political ads, limiting targeting to certain general demographic categories.

The NYU and Global Witness study found YouTube performed the best out of the platforms it tested in blocking ads with election misinformation. The site ultimately blocked all the misinformation-packed ads the researchers submitted through an account that hadn’t gone through its advertiser verification process. The platform also blocked the YouTube channel hosting the ads, though a Google Ads account remained active.

Like other platforms, Google and YouTube highlight authoritative sources and information on the election high up in related searches. The company said it would remove content that violates its policies by misleading users about the voting process or encouraging interference with the democratic process.

YouTube has also sought to help users learn how to spot manipulative messages on their own through educational content.

Google said it’s helped train campaign and election officials on security practices.

Clarification: This story has been updated to clarify that Meta’s statement on the study was first reported by The New York Times.




Tesla must pay portion of $329 million in damages after fatal Autopilot crash, jury says


A jury in Miami has determined that Tesla should be held partly liable for a fatal 2019 Autopilot crash, and must compensate the family of the deceased and an injured survivor a portion of $329 million in damages.

Tesla’s payout is based on $129 million in compensatory damages and $200 million in punitive damages assessed against the company.

The jury determined Tesla should be held 33% responsible for the fatal crash. That means the automaker would be responsible for about $42.5 million in compensatory damages. In cases like these, punitive damages are typically capped at three times compensatory damages.

The plaintiffs’ attorneys told CNBC on Friday that because punitive damages were only assessed against Tesla, they expect the automaker to pay the full $200 million, bringing total payments to around $242.5 million.
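The figures above can be sanity-checked with a quick calculation. This is only a sketch of the arithmetic reported in the coverage ($129 million compensatory, $200 million punitive, 33% fault apportioned to Tesla), not a statement of the court's final award:

```python
# Sketch of the damages arithmetic reported from the verdict.
compensatory = 129_000_000    # total compensatory damages
punitive = 200_000_000        # punitive damages, assessed only against Tesla
tesla_fault_share = 0.33      # jury's apportionment of fault to Tesla

# Tesla pays only its share of compensatory damages...
tesla_compensatory = compensatory * tesla_fault_share   # about $42.5 million

# ...but, per the plaintiffs' attorneys, the full punitive award.
tesla_total = tesla_compensatory + punitive             # about $242.5 million

print(f"Tesla compensatory share: ${tesla_compensatory:,.0f}")
print(f"Expected total payout:    ${tesla_total:,.0f}")
```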

Tesla said it plans to appeal the decision.

Attorneys for the plaintiffs had asked the jury to award $345 million in total damages. The trial in the Southern District of Florida started on July 14.

The suit centered around who shouldered the blame for the deadly crash in Key Largo, Florida. A Tesla owner named George McGee was driving his Model S electric sedan while using the company’s Enhanced Autopilot, a partially automated driving system.

While driving, McGee dropped his mobile phone that he was using and scrambled to pick it up. He said during the trial that he believed Enhanced Autopilot would brake if an obstacle was in the way. His Model S accelerated through an intersection at just over 60 miles per hour, hitting a nearby empty parked car and its owners, who were standing on the other side of their vehicle.

Naibel Benavides, who was 22, died on the scene from injuries sustained in the crash. Her body was discovered about 75 feet away from the point of impact. Her boyfriend, Dillon Angulo, survived but suffered multiple broken bones, a traumatic brain injury and psychological effects.

“Tesla designed Autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans,” Brett Schreiber, counsel for the plaintiffs, said in an e-mailed statement on Friday. “Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm’s way.”

Following the verdict, the plaintiffs’ families hugged each other and their lawyers, and Angulo was “visibly emotional” as he embraced his mother, according to NBC.

Here is Tesla’s response to CNBC:

“Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial.

Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash.

This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.”

The verdict comes as Musk, Tesla’s CEO, is trying to persuade investors that his company can pivot into a leader in autonomous vehicles, and that its self-driving systems are safe enough to operate fleets of robotaxis on public roads in the U.S.

Tesla shares dipped 1.8% on Friday and are now down 25% for the year, the biggest drop among tech’s megacap companies.

The verdict could set a precedent for Autopilot-related suits against Tesla. About a dozen active cases are underway focused on similar claims involving incidents where Autopilot or Tesla’s FSD, or Full Self-Driving (Supervised), had been in use just before a fatal or injurious crash.

The National Highway Traffic Safety Administration initiated a probe in 2021 into possible safety defects in Tesla’s Autopilot systems. During the course of that investigation, Tesla made changes, including a number of over-the-air software updates.

The agency then opened a second probe, which is ongoing, evaluating whether Tesla’s “recall remedy” to resolve issues with the behavior of its Autopilot, especially around stationary first responder vehicles, had been effective.

The NHTSA has also warned Tesla that its social media posts may mislead drivers into thinking its cars are capable of functioning as robotaxis, even though owners’ manuals say the cars require hands-on steering and a driver attentive to steering and braking at all times.

A site that tracks Tesla-involved collisions, TeslaDeaths.com, has reported at least 58 deaths resulting from incidents where Tesla drivers had Autopilot engaged just before impact.



Crypto wobbles into August as Trump’s new tariffs trigger risk-off sentiment


A screen showing the price of various cryptocurrencies against the US dollar displayed at a Crypto Panda cryptocurrency store in Hong Kong, China, on Monday, Feb. 3, 2025. 

Lam Yik | Bloomberg | Getty Images

The crypto market slid Friday after President Donald Trump unveiled his modified “reciprocal” tariffs on dozens of countries.

The price of bitcoin showed relative strength, hovering at the flat line while ether, XRP and Binance Coin fell 2% each. Overnight, bitcoin dropped to a low of $114,110.73.

The descent triggered a wave of long liquidations, in which exchanges forcibly close leveraged bullish positions and sell them at market price, pushing prices lower. Bitcoin saw $172 million in liquidations across centralized exchanges in the past 24 hours, according to CoinGlass, and ether saw $210 million.
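To illustrate why a price drop can cascade into forced selling, here is a minimal sketch of how a liquidation price works for a leveraged long position. This is a simplified model under stated assumptions; real exchanges use tiered maintenance margins and fees, and the numbers below are hypothetical, not CoinGlass data:

```python
def long_liquidation_price(entry_price: float, leverage: float,
                           maintenance_margin_rate: float = 0.005) -> float:
    """Approximate price at which an isolated leveraged long is liquidated.

    With 10x leverage, roughly a 10% drop wipes out the trader's margin,
    so the exchange force-sells the position slightly before that point.
    """
    return entry_price * (1 - 1 / leverage + maintenance_margin_rate)

# Hypothetical example: a 10x long opened near $120,000 gets liquidated
# after a drop of just under 10%, adding sell pressure at that level.
print(round(long_liquidation_price(120_000, 10), 2))
```

The higher the leverage, the closer the liquidation price sits to the entry price, which is why clusters of leveraged longs can turn a modest dip into a wave of forced sales.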

Crypto-linked stocks suffered deeper losses. Coinbase led the way, down 15% following its disappointing second-quarter earnings report. Circle fell 4%, Galaxy Digital lost 2%, and ether treasury company Bitmine Immersion was down 8%. Bitcoin proxy MicroStrategy was down by 5%.

Chart: Bitcoin falls below $115,000

The stock moves came amid a new wave of risk-off sentiment after President Trump issued new tariffs ranging between 10% and 41%, triggering worries about rising inflation and the Federal Reserve’s ability to cut interest rates. In periods of broad-based de-risking, crypto tends to get hit as investors pull out of the most speculative and volatile assets, though technical resilience and institutional demand for bitcoin and ether are helping support their prices.

“After running red hot in July, this is a healthy strategic cooldown. Markets aren’t reacting to a crisis, they’re responding to the lack of one,” said Ben Kurland, CEO at crypto research platform DYOR. “With no new macro catalyst on the horizon, capital is rotating out of speculative assets and into safer ground … it’s a calculated pause.”

Crypto is coming off a winning month but could soon hit the brakes amid the new macro uncertainty, and in a month usually characterized by lower trading volumes and increased volatility. Bitcoin gained 8% in July, according to Coin Metrics, while ether surged more than 49%.

Ether ETFs saw more than $5 billion in inflows in July alone (with just a single day of outflows, $1.8 million on July 2), bringing their total cumulative inflows to $9.64 billion to date. Bitcoin ETFs saw $114 million in outflows in the final trading session of July but still notched monthly inflows of about $6 billion, out of a cumulative $55 billion.



Google has dropped more than 50 DEI-related organizations from its funding list


Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California, on May 20, 2025.

David Paul Morris | Bloomberg | Getty Images

Google has purged more than 50 organizations related to diversity, equity and inclusion, or DEI, from a list of organizations that the tech company provides funding to, according to a new report.

The company has removed a total of 214 groups from its funding list while adding 101, according to a new report from tech watchdog organization The Tech Transparency Project. The watchdog group cites the most recent public list of organizations that receive the most substantial contributions from Google’s U.S. Government Affairs and Public Policy team.

The largest category of purged groups was DEI-related, with a total of 58 groups removed from Google’s funding list, TTP found. The dropped groups had mission statements that included the words “diversity,” “equity,” “inclusion,” “race,” “activism,” and “women.” Those are also terms that Trump administration officials have reportedly told federal agencies to limit or avoid.

In response to the report, Google spokesperson José Castañeda told CNBC that the list reflects contributions made in 2024 and that it does not reflect all contributions made by other teams within the company.

“We contribute to hundreds of groups from across the political spectrum that advocate for pro-innovation policies, and those groups change from year to year based on where our contributions will have the most impact,” Castañeda said in an email.

Organizations that were removed from Google’s list include the African American Community Service Agency, which seeks to “empower all Black and historically excluded communities”; the Latino Leadership Alliance, which is dedicated to “race equity affecting the Latino community”; and Enroot, which creates out-of-school experiences for immigrant kids. 

The funding purge is the latest move as Google has walked back some of its DEI commitments over the last couple of years. That pullback came amid cost-cutting to prioritize investment in artificial intelligence, as well as a shifting political and legal landscape marked by increasing national anti-DEI policies.

Over the past decade, Silicon Valley and other industries used DEI programs to root out bias in hiring, promote fairness in the workplace and advance the careers of women and people of color — demographics that have historically been overlooked in the workplace.

However, the U.S. Supreme Court’s 2023 decision to end affirmative action at colleges led to additional backlash against DEI programs in conservative circles.

President Donald Trump signed an executive order upon taking office in January to end the government’s DEI programs and directed federal agencies to combat what the administration considers “illegal” private-sector DEI mandates, policies and programs. Shortly after, Google’s Chief People Officer Fiona Cicconi told employees that the company would end DEI-related hiring “aspirational goals” due to new federal requirements and Google’s categorization as a federal contractor.

Despite DEI becoming such a divisive term, many companies are continuing the work but using different language or rolling the efforts under less-charged terminology, like “learning” or “hiring.”

Even Google CEO Sundar Pichai affirmed the importance of diversity to the company’s workforce at an all-hands meeting in March.

“We’re a global company, we have users around the world, and we think the best way to serve them well is by having a workforce that represents that diversity,” Pichai said at the time.

One of the groups dropped from Google’s contributions list is the National Network to End Domestic Violence, which provides training, assistance, and public awareness campaigns on the issue of violence against women, the TTP report found. The group had been on Google’s list of funded organizations for at least nine years and continues to name the company as one of its corporate partners.

Google said it still gave $75,000 to the National Network to End Domestic Violence in 2024 but did not say why the group was removed from the public contributions list.

