Voters cast ballots on election day at the Fairfax County Government Center polling location in Fairfax, Virginia, on November 2, 2021.
Andrew Caballero-Reynolds | AFP | Getty Images
Social media platforms including Meta’s Facebook and Instagram, Twitter, TikTok and Google’s YouTube are readying themselves for another heated Election Day this week.
The companies now regularly come under close scrutiny around election time, something that accelerated following findings that Russian agents used social media to sow division in the run-up to the 2016 election. During the last presidential election in 2020, the platforms faced the challenge of moderating election denialism as an outgoing president stoked the false claims himself, leading several of them to at least temporarily suspend him after the Jan. 6 insurrection.
This year, the platforms are using all of those experiences to prepare for threats to democracy and safety as voters decide who will represent them in Congress, governor’s offices and state legislatures.
Here’s how all the major platforms are planning to police their services on Election Day.
Meta
Onur Dogman | Lightrocket | Getty Images
Meta’s Facebook has been one of the most scrutinized platforms when it comes to misinformation. In response to years of criticism, it has bolstered its approach to election integrity. It’s said it will use many of the same policies and safeguards this year that it had in 2020.
Meta has stood up its Elections Operations Center, which it likened to a command center, to bring together different teams throughout the company to monitor and quickly address threats they see on the platform. It’s used this model dozens of times worldwide since 2018.
Facebook and Instagram also share reliable information with users about how to vote (including in languages other than English). The company said it’s already sent more than 80 million election notifications this year on the two platforms.
The company uses third-party fact-checkers to help label false posts so they can be demoted in the algorithm before they go viral. Meta said it’s investing an additional $5 million in fact-checking and media literacy efforts before Election Day.
Meta said it’s prepared to seek out threats and coordinated harassment against election officials and poll workers, who were the subject of misinformation campaigns and threats during the last election.
The company is once again banning new political ads in the week before the election, as it did in 2020. While ads submitted before the blackout period can still run, political advertisers have expressed frustration about the policy since it’s often helpful to respond to last-minute attacks and polling with fresh messaging. Facebook already has extra screening for those who sign up as political advertisers and maintains information about political ads in a database available to the public.
Meta has pledged to remove posts that seek to suppress voting, like misinformation about how and when to vote. It also said it would reject ads that discourage voting or question the legitimacy of the upcoming election.
In a study by New York University’s Cybersecurity for Democracy and international NGO Global Witness testing election integrity ad screens across social media platforms, the groups found Facebook was mostly successful in blocking ads they submitted with election disinformation. Still, 20% to 50% of the ads tested were approved, depending on what language they were in and whether they were submitted from inside or outside the U.S.
The researchers also violated Facebook’s policies about who is allowed to place ads, with one of the test accounts placing ads from the U.K. They did not go through Facebook’s authorization process, which is supposed to provide extra scrutiny for political advertisers.
The researchers did not run the ads once they were approved, so it’s not clear whether Facebook would have blocked them during that step.
A Meta spokesperson said in a statement published with the study that it was “based on a very small sample of ads, and are not representative given the number of political ads we review daily across the world.”
“We invest significant resources to protect elections, from our industry-leading transparency efforts to our enforcement of strict protocols on ads about social issues, elections, or politics – and we will continue to do so,” a Meta spokesperson said in a statement to CNBC. The New York Times first reported the statement.
TikTok
TikTok owner ByteDance has launched a women’s fashion website called If Yooou. Pinduoduo launched an e-commerce site in the U.S. called Temu. The two companies are the latest Chinese tech giants to look to crack the international e-commerce market dominated by Amazon.
Mike Kemp | In Pictures | Getty Images
TikTok has become an increasingly important platform for all sorts of discussion, but it’s tried to keep its service at arm’s length from the most heated political discussions.
TikTok does not allow political ads and has stated its desire for the service to be “a fun, positive and joyful experience.”
“TikTok is first and foremost an entertainment platform,” the company said in a September blog post. It added that it wants to “foster and promote a positive environment that brings people together, not divide them.”
Still, the NYU and Global Witness study found TikTok performed the worst out of the platforms it tested in blocking election-related misinformation in ads. Only one ad it submitted in both English and Spanish falsely claiming Covid vaccines were required to vote was rejected, while ads promoting the wrong date for the election or encouraging voters to vote twice were approved.
TikTok did not provide a comment on the report but told the researchers in a statement that it values “feedback from NGOs, academics, and other experts which helps us continually strengthen our processes and policies.”
The service said that while it doesn’t “proactively encourage politicians or political parties to join TikTok,” it welcomes them to do so. The company announced in September that it would try out mandatory verification for government, politician and political party accounts in the U.S. through the midterms and disable those types of accounts from running ads.
TikTok said it would allow those accounts to run ads in limited circumstances, like public health and safety campaigns, but that they’d have to work with a TikTok representative to do so.
TikTok also barred these accounts from other ways to make money on the platform, like through tipping and e-commerce. Politician and political party accounts are also not allowed to solicit campaign donations on their pages.
TikTok has said it’s committed to stemming the spread of misinformation, including by working with experts to strengthen its policies and outside fact-checkers to verify election-related posts.
It’s also sought to build on its experiences from the last election, like by surfacing its election center with information about how to vote earlier in the cycle. It’s also tried to do more to educate creators on the platform about what kinds of paid partnerships are and are not allowed and how to disclose them.
Twitter
A video grab taken from a video posted on the Twitter account of billionaire Tesla chief Elon Musk on October 26, 2022 shows himself carrying a sink as he enters the Twitter headquarters in San Francisco. Elon Musk changed his Twitter profile to “Chief Twit” and posted video of himself walking into the social network’s California headquarters carrying a sink, days before his contentious takeover of the company must be finalized.
– | AFP | Getty Images
Twitter is in a unique position this Election Day, after billionaire Elon Musk bought the platform and took it private less than two weeks before voters headed to the polls.
Musk has expressed a desire to loosen Twitter’s content moderation policies. He’s said decisions on whether to reinstate banned users, a group that includes former President Donald Trump, would take a few weeks at least.
But shortly after the deal, Bloomberg reported the team responsible for content moderation lost access to some of their tools. Twitter’s head of safety and integrity, Yoel Roth, characterized that move as a normal measure for a recently acquired company to take and said Twitter’s rules were still being enforced at scale.
But the timing shortly before the election is particularly stark. Musk said teams would have access to all the necessary tools by the end of the week before the election, according to a civil society group leader who was on a call with Musk earlier in the week.
Before Musk’s takeover, Twitter laid out its election integrity plans in an August blog post. Those included activating its civic integrity policy, which allows it to label and demote misleading information about the election, sharing “prebunks,” or proactively debunked false claims about the election, and surfacing relevant news and voting information in a dedicated tab. Twitter has not allowed political ads since 2019.
Google/YouTube
People walk past a billboard advertisement for YouTube on September 27, 2019 in Berlin, Germany.
Sean Gallup | Getty Images
Google and its video platform YouTube are also important platforms outside of Facebook where advertisers seek to get their campaign messages out.
The platforms require advertisers running election messages to become verified and disclose the ad’s backing. Political ads, including information on how much money was behind them and how much they were viewed, are included in the company’s transparency report.
Prior to the last election, Google limited how narrowly users could be targeted with political ads, restricting targeting to certain general demographic categories.
The NYU and Global Witness study found YouTube performed the best out of the platforms it tested in blocking ads with election misinformation. The site ultimately blocked all the misinformation-packed ads the researchers submitted through an account that hadn’t gone through its advertiser verification process. The platform also blocked the YouTube channel hosting the ads, though a Google Ads account remained active.
Like other platforms, Google and YouTube highlight authoritative sources and information on the election high up in related searches. The company said it would remove content violating its policies by misleading about the voting process or encouraging interference with the democratic process.
YouTube also has sought to help users learn how to spot manipulative messages on their own using education content.
Google said it’s helped train campaign and election officials on security practices.
Clarification: This story has been updated to clarify that Meta’s statement on the study was first reported by The New York Times.
OpenAI CEO Sam Altman visits “Making Money With Charles Payne” at Fox Business Network Studios in New York on Dec. 4, 2024.
Mike Coppola | Getty Images
OpenAI CEO Sam Altman’s sister, Ann Altman, filed a lawsuit on Monday, alleging that her brother sexually abused her regularly between the years of 1997 and 2006.
The lawsuit, which was filed in U.S. District Court in the Eastern District of Missouri, alleges that the abuse took place at the family’s home in Clayton, Missouri, and began when Ann, who goes by Annie, was three and Sam was 12. The filing claims that the abusive activities took place “several times per week,” beginning with oral sex and later involving penetration.
The lawsuit claims that “as a direct and proximate result of the foregoing acts of sexual assault,” the plaintiff has experienced “severe emotional distress, mental anguish, and depression, which is expected to continue into the future.”
The younger Altman has publicly made similar sexual assault allegations against her brother in the past on platforms like X, but this is the first time she’s taken him to court. She’s being represented by Ryan Mahoney, whose Illinois-based firm specializes in matters including sexual assault and harassment.
The lawsuit requests a jury trial and damages in excess of $75,000.
In a joint statement on X with his mother, Connie, and his brothers Jack and Max, Sam Altman denied the allegations.
“Annie has made deeply hurtful and entirely untrue claims about our family, and especially Sam,” the statement said. “We’ve chosen not to respond publicly, out of respect for her privacy and our own. However, she has now taken legal action against Sam, and we feel we have no choice but to address this.”
Their response says “all of these claims are utterly untrue,” adding that “this situation causes immense pain to our entire family.” They said that Ann Altman faces “mental health challenges” and “refuses conventional treatment and lashes out at family members who are genuinely trying to help.”
Sam Altman has gained international prominence since OpenAI’s debut of the artificial intelligence chatbot ChatGPT in November 2022. Backed by Microsoft, the company was most recently valued at $157 billion, with funding coming from Thrive Capital, chipmaker Nvidia, SoftBank and others.
Altman was briefly ousted from the CEO role by OpenAI’s board in November 2023, but was quickly reinstated due to pressure from investors and employees.
This isn’t the only lawsuit the tech exec faces.
In March, Tesla and SpaceX CEO Elon Musk sued OpenAI and co-founders Altman and Greg Brockman, alleging breach of contract and fiduciary duty. Musk, who now runs a competing AI startup, xAI, was a co-founder of OpenAI when it began as a nonprofit in 2015. Musk left the board in 2018 and has publicly criticized OpenAI for allegedly abandoning its original mission.
Musk is suing to keep OpenAI from turning into a for-profit company. In June, Musk withdrew the original complaint filed in a San Francisco state court and later refiled in federal court.
Last month, OpenAI clapped back against Musk, claiming in a blog post that in 2017 Musk “not only wanted, but actually created, a for-profit” to serve as the company’s proposed new structure.
This photo illustration created on January 7, 2025, in Washington, DC, shows an image of Mark Zuckerberg, CEO of Meta, and an image of the Meta logo.
Drew Angerer | AFP | Getty Images
Meta employees took to their internal forum on Tuesday, criticizing the company’s decision to end third-party fact-checking on its services two weeks before President-elect Donald Trump’s inauguration.
Company employees voiced their concern after Joel Kaplan, Meta’s new chief global affairs officer and former White House deputy chief of staff under former President George W. Bush, announced the content policy changes on Workplace, the in-house communications tool.
“We’re optimistic that these changes help us return to that fundamental commitment to free expression,” Kaplan wrote in the post, which was reviewed by CNBC.
The content policy announcement follows a string of decisions that appear targeted to appease the incoming administration. On Monday, Meta added new members to its board, including UFC CEO Dana White, a longtime friend of Trump, and the company confirmed last month that it was contributing $1 million to Trump’s inauguration.
Among the latest changes, Kaplan announced that Meta will scrap its fact-checking program and shift to a user-generated system like X’s Community Notes. Kaplan, who took over his new role last week, also said that Meta will lift restrictions on certain topics and focus its enforcement on illegal and high-severity violations while giving users “a more personalized approach to political content.”
One worker wrote they were “extremely concerned” about the decision, saying it appears Meta is “sending a bigger, stronger message to people that facts no longer matter, and conflating that with a victory for free speech.”
Another employee commented that by “simply absolving ourselves from the duty to at least try to create a safe and respective platform is a really sad direction to take.” Other comments expressed concern about the impact the policy change could have on the discourse around topics like immigration, gender identity and gender, which, according to one employee, could result in an “influx of racist and transphobic content.”
A separate employee said they were scared that “we’re entering into really dangerous territory by paving the way for the further spread of misinformation.”
The changes weren’t universally criticized, as some Meta workers congratulated the company’s decision to end third-party fact checking. One wrote that X’s Community Notes feature has “proven to be a much better representation of the ground truth.”
Another employee commented that the company should “provide an accounting of the worst outcomes of the early years” that necessitated the creation of a third-party fact-checking program, and whether the new policies would prevent the same type of fallout from happening again.
As part of the company’s massive layoffs in 2023, Meta also scrapped an internal fact-checking project, CNBC reported. That project would have let third-party fact checkers like the Associated Press and Reuters, in addition to credible experts, comment on flagged articles in order to verify the content.
Although Meta announced the end of its fact-checking program on Tuesday, the company had already been pulling it back. In September, a spokesperson for the AP told CNBC that the news agency’s “fact-checking agreement with Meta ended back in January” 2024.
Dana White, CEO of the Ultimate Fighting Championship gestures as he speaks during a rally for Republican presidential nominee and former U.S. President Donald Trump at Madison Square Garden, in New York, U.S., Oct. 27, 2024.
Andrew Kelly | Reuters
After the announcement of White’s addition to the board on Monday, employees also posted criticism, questions and jokes on Workplace, according to posts reviewed by CNBC.
White, who has led UFC since 2001, became embroiled in controversy in 2023 after a video published by TMZ showed him slapping his wife at a New Year’s Eve party in Mexico. White issued a public apology, and his wife, Anne White, issued a statement to TMZ, calling it an isolated incident.
Commenters on Workplace made jokes asking whether performance reviews would now involve mixed martial arts style fights.
In addition to White, John Elkann, the CEO of Italian auto holding company Exor, was named to Meta’s board.
Some employees asked what value autos and entertainment executives could bring to Meta, and whether White’s addition reflects the company’s values. One post suggested the new board appointments would help with political alliances that could be valuable but could also change the company culture in unintended or unwanted ways.
Comments in Workplace alluding to White’s personal history were flagged and removed from the discussion, according to posts from the internal app read by CNBC.
An employee who said he was with Meta’s Internal Community Relations team posted a reminder to Workplace about the company’s “community engagement expectations” policy, or CEE, for using the platform.
“Multiple comments have been flagged by the community for review,” the employee posted. “It’s important that we maintain a respectful work environment where people can do their best work.”
The internal community relations team member added that “insulting, criticizing, or antagonizing our colleagues or Board members is not aligned with the CEE.”
Several workers responded to that note saying that even respectful posts, if critical, had been removed, amounting to a corporate form of censorship.
One worker said that because critical comments were being removed, the person wanted to voice support for “women and all voices.”
Meta declined to comment.
— CNBC’s Salvador Rodriguez contributed to this report.
Bitcoin slumped on Tuesday as a spike in Treasury yields weighed on risk assets broadly.
The price of the flagship cryptocurrency was last lower by 4.8% at $97,183.80, according to Coin Metrics. The broader market of cryptocurrencies, as measured by the CoinDesk 20 index, dropped more than 5%.
The moves followed a sudden increase in the 10-year U.S. Treasury yield after data released by the Institute for Supply Management reflected faster-than-expected growth in the U.S. services sector in December, adding to concerns about stickier inflation. Rising yields tend to pressure growth-oriented risk assets.
Bitcoin traded above $102,000 on Monday and is widely expected to roughly double from that level this year. Investors are hopeful that clearer regulation will support digital asset prices and in turn benefit stocks like Coinbase and Robinhood.
However, uncertainty about the path of Federal Reserve interest rate cuts could put bumps in the road for crypto prices. In December, the central bank signaled that although it was cutting rates a third time, it may do fewer rate cuts in 2025 than investors had anticipated. Historically, rate cuts have had a positive effect on bitcoin price while hikes have had a negative impact.
Bitcoin is up more than 3% since the start of the year. It posted a 120% gain for 2024.