The Washington Post | Getty Images
President-elect Donald Trump was early to warn about the national security dangers posed by TikTok during his first term in office, with rhetoric and policy discussions that framed the social media app within his aggressive anti-China stance. But during the 2024 campaign, Trump seemed to do an about-face.
In an interview on CNBC’s “Squawk Box” last March, Trump said banning TikTok would make young people “go crazy” and would also benefit Meta Platforms’ Facebook.
“There’s a lot of good and there’s a lot of bad with TikTok,” Trump said. “But the thing I don’t like is that without TikTok, you can make Facebook bigger, and I consider Facebook to be an enemy of the people, along with a lot of the media.”
Trump’s transition team hasn’t commented on TikTok specifically, but has said the election results give the president a mandate to follow through on the promises he made on the campaign trail, and there are some big deadlines coming up related to TikTok’s fate.
Before Trump even takes office, the U.S. Court of Appeals for the D.C. Circuit is expected to issue a ruling by Friday on a challenge to the new law requiring ByteDance, TikTok’s Chinese parent company, to divest its U.S. operations by Jan. 19. The case has broad implications, touching on national security concerns, constitutional questions about free speech, and the future of foreign-owned tech platforms in the U.S.
Courts generally defer to the executive and legislative branches on national security matters, but the outcome may depend on whether the court frames the issue solely as a national security question or also considers First Amendment concerns. The balance likely favors the government given Congress’s clear constitutional authority to regulate foreign commerce, which supports the legislation requiring ByteDance divestment. Regardless, this case is likely headed to the Supreme Court.
With Trump set to be sworn in on Jan. 20, one day after the federal ban on TikTok is scheduled to take effect, his comments have intensified concerns about the influence major donors will have in a second Trump administration and the extent to which private financial interests will be prioritized over national security and public welfare. In fact, it may be the first major decision Trump makes that tells us just how far his administration is willing to go in prioritizing the donor wish list.
At the center of this controversy is Jeff Yass, a major Republican donor with significant financial ties to ByteDance, TikTok’s parent company. Yass, who contributed over $46 million to Republican causes during the 2024 election cycle, reportedly met with Trump in March, though the details of their conversation remain unclear. What is clear, however, is that Yass’s ownership stake in ByteDance has fueled concerns in Washington about whether Trump’s reversal was influenced by donor priorities rather than a pure devotion to market competition.
The Wall Street Journal recently reported that TikTok’s CEO has been personally lobbying Elon Musk, who now has a close relationship with the president-elect, on his company’s behalf. Meanwhile, Meta’s Mark Zuckerberg dined with Trump at Mar-a-Lago last week.
The optics of a TikTok ban reversal are troubling. Imagine the backlash if a prominent Democratic donor like George Soros, frequently vilified by Republicans, had similarly positioned himself to influence major policy decisions tied to his personal financial interests. The accusations of corruption and undue influence, if not worse, would be deafening. Yet figures like Yass, and particularly Elon Musk, who has duct-taped himself and his entangled financial interests to Trump’s transition team and many of its personnel and policy decisions, face little scrutiny from the same critics who level conspiracy theories against Soros.
This selective outrage underscores a systemic problem: a political system where major donors wield significant influence over policymaking, often without bipartisan expressions of concern or actions that force transparency or accountability.
TikTok’s weaponized influence
Concerns about donor influence are amplified when considering the risks associated with TikTok itself. The app’s meteoric rise has sparked bipartisan alarm over its ties to the Chinese government. Lawmakers and intelligence officials have consistently warned about its potential for data harvesting, espionage, and propaganda. These concerns are not abstract. During the last congressional push to ban TikTok, the app demonstrated its ability to weaponize its platform by rapidly mobilizing its user base to flood lawmakers with calls and emails opposing the ban.
This real-time demonstration of TikTok’s ability to influence public sentiment, amplify social narratives, and pressure lawmakers underscores its unparalleled capacity as a tool for shaping public policy and national opinions. When coupled with ByteDance’s links to the Chinese government, TikTok’s potential for misuse or mischief is alarming.
Another concern around a TikTok ban reversal is the fact that there is already a law addressing TikTok: the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA), enacted in April 2024 as part of Public Law 118-50. This bipartisan legislation mandates that foreign adversary-controlled applications, like TikTok, must be divested or face a U.S. ban. As federal law, PAFACA cannot simply be reversed by presidential decree. A U.S. president cannot legally bypass Congress to nullify or override an existing law. Laws passed by Congress remain binding until they are repealed or amended by Congress or struck down by the courts.
Instead of bypassing Congress or undermining existing law, any changes to TikTok’s status should be addressed through the framework that PAFACA provides. Such a transparent process would ensure that decisions are made in public and on behalf of the public interest, not in the backrooms at Mar-a-Lago. With Republicans controlling both the House and Senate during the newly elected Congress, they have the power to amend or repeal PAFACA. However, doing so would require navigating a highly involved legislative process that would inevitably bring more scrutiny to Yass.
Trump’s options
Given Trump’s influence over the federal courts at the highest level, litigation is one route he could pursue. But short of the courts, the president’s authority in this context is limited: any effort by Trump to unilaterally overturn a TikTok ban as president would be difficult to execute based on how the system is supposed to work.
Two options Trump would have are enforcement discretion and executive orders. The president has considerable discretion in how federal laws are enforced. For instance, executive agencies might prioritize certain aspects of a law over others, effectively scaling down enforcement in particular areas. While executive orders cannot override existing laws, they can guide how the executive branch implements them, potentially narrowing their scope. Presidents have historically used enforcement discretion to achieve policy objectives without openly violating the law.
But addressing TikTok through the existing legal framework already established by PAFACA would allow for the consideration of balanced alternatives, such as requiring stricter data security measures, local data storage, or divestiture that places TikTok’s operations under U.S. ownership. These options could protect users’ access to the app while addressing legitimate security risks.
Many of these alternatives have been explored in public discussions and through proposals like “Project Texas,” and some have found their way into law. They have also drawn criticism and challenges, largely over insufficient follow-through, the perception that they are incomplete or inadequate to address security concerns, or doubts that the Chinese government would ever agree to them. But consideration of these remedies should continue; to date, the problem has been nonexistent execution rather than outright failure of the proposals.
The broader implications of donor-driven policy
Trump’s March comments on TikTok get one thing right: the app’s immense popularity creates a unique dilemma. With over 150 million users in the U.S., TikTok is more than just a platform for entertainment; it has become a key tool for creativity, connection, and commerce, particularly among younger Americans and small businesses. This widespread use complicates the conversation, as any decision about TikTok’s future will inevitably affect millions of people who rely on it for various purposes.
However, the app’s popularity should not outweigh the national security concerns it poses. ByteDance’s well-documented connections to the Chinese government have heightened fears in Washington about the potential misuse of TikTok’s data collection capabilities. These risks are not speculative; they reflect patterns of behavior consistent with Chinese state-sponsored cyber activities. Allowing donor-driven priorities to eclipse these legitimate security concerns undermines public trust in the policymaking process and erodes confidence in government institutions.
This situation raises a critical question: What other national priorities might be sacrificed to appease donors with outsized influence? If decisions about TikTok — an app that elicits bipartisan concerns about its national security implications — can be swayed, what does this mean for other pressing issues like energy policy, defense, or trade? The stakes are far too high to let financial interests dictate public policy outcomes.
Americans deserve a government that treats national security as a top priority, not one that treats it as negotiable or secondary to the interests of wealthy private donors.
— By Dewardric McNeal, managing director and senior policy analyst at Longview Global and a CNBC contributor, who served as an Asia policy specialist at the Defense Department during the Obama administration.
U.S. Attorney General Pam Bondi speaks during a roundtable on “Antifa,” an anti-fascist movement President Donald Trump designated a domestic “terrorist organization” via executive order on Sept. 22, at the White House in Washington, D.C., Oct. 8, 2025.
Evelyn Hockstein | Reuters
Meta on Tuesday removed a Facebook group page that was allegedly used to “dox and target” U.S. Immigration and Customs Enforcement agents in Chicago, after the company was contacted by the Department of Justice.
Attorney General Pam Bondi revealed the Facebook takedown in an X post, and said that the DOJ “will continue engaging tech companies to eliminate platforms where radicals can incite imminent violence against federal law enforcement.”
A Meta spokesperson confirmed that the tech giant removed the Facebook group page, but declined to comment about its size and the specific details that warranted its removal.
“This Group was removed for violating our policies against coordinated harm,” the Meta spokesperson said in a statement that also referred to the company’s policies pertaining to “Coordinating Harm and Promoting Crime.”
Meta’s removal of the Facebook group page follows similar moves from rivals like Apple and Google, which have recently removed apps that could be used to anonymously report sightings of ICE agents and other law enforcement.
Apple took down the ICEBlock app nearly two weeks ago following pressure from Bondi, who said at the time that the app was “designed to put ICE agents at risk just for doing their jobs.”
Apple said at the time in a statement that it removed the ICEBlock app based on information provided by law enforcement about alleged “safety risks.”
Google, which did not carry the ICEBlock app on its app store, said in October that while the DOJ never contacted the search giant, the company removed “similar apps for violations of our policies.”
ICEBlock creator Joshua Aaron criticized both Apple and the White House in an interview with CNBC, and compared his app to others like Waze, which let drivers report when they see law enforcement officers in order to avoid getting ticketed for speeding.
“This is about our fundamental constitutional rights in this country being stripped away by this administration, and the powers that be who are capitulating to their requests,” Aaron said.
OpenAI’s EMEA startups head Laura Modiano spoke at the Sifted Summit on Wednesday, Oct. 8.
Nurphoto | Getty Images
OpenAI on Tuesday announced a council of eight experts who will advise the company and provide insight into how artificial intelligence could affect users’ mental health, emotions and motivation.
The group, which is called the Expert Council on Well-Being and AI, will initially guide OpenAI’s work on its chatbot ChatGPT and its short-form video app Sora, the company said. Through check-ins and recurring meetings, OpenAI said the council will help it define what healthy AI interactions look like.
OpenAI has been expanding its safety controls in recent months as the company has faced mounting scrutiny over how it protects users, particularly minors.
In September, the Federal Trade Commission launched an inquiry into several tech companies, including OpenAI, over how chatbots like ChatGPT could negatively affect children and teenagers. OpenAI is also embroiled in a wrongful death lawsuit from a family who blames ChatGPT for their teenage son’s death by suicide.
The company is building an age prediction system that will automatically apply teen-appropriate settings for users under 18, and it launched a series of parental controls late last month. Parents can now get notified if their child is showing signs of acute distress, for instance.
OpenAI said it began informally consulting with members of its new expert council as it was building its parental controls. The company brought on additional experts in psychiatry, psychology and human-computer interaction as it formalized the council, which officially launched with an in-person session last week.
In addition to its expert council, OpenAI said it is also working with researchers and mental health clinicians within its Global Physician Network who will help test ChatGPT and establish company policies.
Here are the members of OpenAI’s Expert Council on Well-Being and AI:
Andrew Przybylski, a professor of human behavior and technology at the University of Oxford.
David Bickham, a research scientist in the Digital Wellness Lab at Boston Children’s Hospital.
David Mohr, the director of Northwestern University’s Center for Behavioral Intervention Technologies.
Mathilde Cerioli, the chief scientist at Everyone.AI, a nonprofit that explores the risks and benefits of AI for children.
Munmun De Choudhury, a professor at Georgia Tech’s School of Interactive Computing.
Dr. Robert Ross, a pediatrician by training and the former CEO of The California Endowment, a nonprofit that aims to expand access to affordable health care.
Dr. Sara Johansen, a clinical assistant professor at Stanford University who founded its Digital Mental Health Clinic.
Tracy Dennis-Tiwary, a professor of psychology at Hunter College.
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.
Oracle Cloud Infrastructure on Tuesday announced it will deploy 50,000 Advanced Micro Devices graphics processors starting in the second half of 2026.
AMD shares climbed about 2%. Oracle shares sank 4%, while Nvidia shares were more than 3% lower.
The move is the latest sign that cloud companies are increasingly offering AMD’s graphics processing units as an alternative to Nvidia’s market-leading GPUs for artificial intelligence.
“We feel like customers are going to take up AMD very, very well — especially in the inferencing space,” said Karan Batta, senior vice president of Oracle Cloud Infrastructure.
Oracle will use AMD’s Instinct MI450 chips, which were announced earlier this year.
They are AMD’s first AI chips that can be assembled into a larger rack-sized system that enables 72 of the chips to work as one, which is needed to create and deploy the most advanced AI algorithms.
OpenAI CEO Sam Altman appeared with AMD CEO Lisa Su at a company event in June to announce the product.
Earlier this month, OpenAI announced a deal with AMD for processors requiring 6 gigawatts of power over multiple years, with a 1-gigawatt rollout starting in 2026. As part of the deal, and if the deployment goes well, OpenAI may end up owning as many as 160 million shares of AMD, or about 10% of the company.
OpenAI has historically been closely linked with Nvidia, whose chips were used to develop ChatGPT. Nvidia’s chips dominate the market for data center GPUs with more than 90% market share. Nvidia also invested in OpenAI in September.
But OpenAI leaders say the company needs as much computing power as possible, which means it needs AI chips from multiple suppliers. OpenAI also has plans to design its own AI chips with Broadcom.
“I think AMD has done a really fantastic job, just like Nvidia, and I think both of them have their place,” Batta said.
Tuesday at Oracle AI World, founder and Chairman Larry Ellison is set to take the stage and share his views on the latest OpenAI deal and what his company is doing to stay ahead of its main cloud competitors: Microsoft, Amazon and Google.
“Oracle has already shown it is willing to place big bets and go all in to meet the AI moment. The company must now prove that beyond capacity, it can capitalize on its massive underlying data and enterprise capabilities … to add meaningful value to the enterprise AI wave,” said Daniel Newman, CEO of The Futurum Group, on the sidelines of Oracle’s conference.