President-elect Donald Trump was early to warn about the national security dangers posed by TikTok during his first term in office, with rhetoric and policy discussions that framed the social media app within his aggressive anti-China stance. But during the 2024 campaign, Trump seemed to do an about-face.

In an interview on CNBC’s “Squawk Box” last March, Trump said banning TikTok would make young people “go crazy” and would also benefit Meta Platforms’ Facebook.

“There’s a lot of good and there’s a lot of bad with TikTok,” Trump said. “But the thing I don’t like is that without TikTok, you can make Facebook bigger, and I consider Facebook to be an enemy of the people, along with a lot of the media.”

Trump’s transition team hasn’t commented on TikTok specifically, but has said the election results give the president a mandate to follow through on the promises he made on the campaign trail, and there are some big deadlines coming up related to TikTok’s fate.

Before Trump is even president, the U.S. Court of Appeals for the D.C. Circuit is expected to issue a ruling by Friday on a challenge to the new law requiring ByteDance, TikTok’s Chinese parent company, to divest its U.S. operations by January 19. This case has broad implications, touching on national security concerns, constitutional questions about free speech, and the future of foreign-owned tech platforms in the U.S.  

Courts generally defer to the executive and legislative branches on national security matters, but the outcome may depend on whether the court frames the issue solely as a national security question or also considers First Amendment concerns. The balance likely favors the government given Congress’s clear constitutional authority to regulate foreign commerce, which supports the legislation requiring ByteDance divestment. Regardless, this case is likely headed to the Supreme Court.

Trump will be sworn in on Jan. 20, one day after the federal ban on TikTok is scheduled to take effect. His reversal has intensified deep concerns about the influence major donors will have in a second Trump administration, and about the extent to which private financial interests will be prioritized over national security and public welfare. In fact, TikTok may be the first major decision that tells us just how far his administration is willing to go in prioritizing the donor wish list.

At the center of this controversy is Jeff Yass, a major Republican donor with significant financial ties to ByteDance, TikTok’s parent company. Yass, who contributed over $46 million to Republican causes during the 2024 election cycle, reportedly met with Trump in March, though the details of their conversation remain unclear. What is clear, however, is that Yass’s ownership stake in ByteDance has fueled concerns in Washington about whether Trump’s reversal was influenced by donor priorities rather than a pure devotion to market competition.

The Wall Street Journal recently reported that TikTok’s CEO has been personally lobbying Elon Musk, who now has a close relationship with the president-elect, on his company’s behalf. Meanwhile, Meta’s Mark Zuckerberg dined with Trump at Mar-a-Lago last week.

The optics of a TikTok ban reversal are troubling. Imagine the backlash if a prominent Democratic donor like George Soros — frequently vilified by Republicans — had similarly positioned himself to influence major policy decisions tied to his personal financial interests. The accusations of corruption and undue influence, if not worse, would be deafening. Yet figures like Yass and particularly Elon Musk — who has duct-taped himself, and his entangled financial interests, to Trump’s transition team and many of its personnel and policy decisions — face little scrutiny from the same critics who level conspiracy theories against Soros.

This selective outrage underscores a systemic problem: a political system where major donors wield significant influence over policymaking, often without bipartisan expressions of concern or actions that force transparency or accountability.

TikTok’s weaponized influence

Concerns about donor influence are amplified when considering the risks associated with TikTok itself. The app’s meteoric rise has sparked bipartisan alarm over its ties to the Chinese government. Lawmakers and intelligence officials have consistently warned about its potential for data harvesting, espionage, and propaganda. These concerns are not abstract. During the last congressional push to ban TikTok, the app demonstrated its ability to weaponize its platform by rapidly mobilizing its user base to flood lawmakers with calls and emails opposing the ban.

This real-time demonstration of TikTok’s ability to influence public sentiment, amplify social narratives, and pressure lawmakers underscores its unparalleled capacity as a tool for shaping public policy and national opinions. When coupled with ByteDance’s links to the Chinese government, TikTok’s potential for misuse or mischief is alarming.

Another concern around a TikTok ban reversal is the fact that there is already a law addressing TikTok: the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA), enacted in April 2024 as part of Public Law 118-50. This bipartisan legislation mandates that foreign adversary-controlled applications, like TikTok, must be divested or face a U.S. ban. As federal law, PAFACA cannot simply be reversed by presidential decree. A U.S. president cannot legally bypass Congress to nullify or override an existing law. Laws passed by Congress remain binding until they are repealed or amended by Congress or struck down by the courts.

Instead of bypassing Congress or undermining existing law, any changes to TikTok’s status should be addressed through the framework that PAFACA provides. Such a transparent process would ensure that decisions are made in public and on behalf of the public interest, not in the backrooms at Mar-a-Lago. With Republicans controlling both the House and Senate during the newly elected Congress, they have the power to amend or repeal PAFACA. However, doing so would require navigating a highly involved legislative process that would inevitably bring more scrutiny to Yass.

Trump’s options

Given Trump’s influence over the federal courts at the highest level, the judicial route could work in his favor. But short of the courts, the president’s authority in this context is limited: any effort to unilaterally overturn a TikTok ban would be difficult to execute based on how the system is supposed to work.

Two options Trump would have are enforcement discretion and executive orders. The president has considerable discretion in how federal laws are enforced. For instance, executive agencies might prioritize certain aspects of a law over others, effectively scaling down enforcement in particular areas. While executive orders cannot override existing laws, they can guide how the executive branch implements them, potentially narrowing their scope. Presidents have historically used enforcement discretion to achieve policy objectives without openly violating the law. 

But addressing TikTok through the existing legal framework already established by PAFACA would allow for the consideration of balanced alternatives, such as requiring stricter data security measures, local data storage, or divestiture that places TikTok’s operations under U.S. ownership. These options could protect users’ access to the app while addressing legitimate security risks.

Many of these alternatives have been explored in public discussions and through proposals like “Project Texas,” and some have found their way into law. They have also drawn criticism — that the efforts lack follow-through, that the Chinese government would never agree to them, or that they are simply incomplete or inadequate to address security concerns. But consideration of these remedies should continue; to date, the problem has been nonexistent execution rather than outright failure of the proposals.

The broader implications of donor-driven policy

Trump’s March comments on TikTok get one thing right. It is important to acknowledge that TikTok’s immense popularity creates another unique dilemma. With over 150 million users in the U.S., the app is more than just a platform for entertainment — it has become a key tool for creativity, connection, and commerce, particularly among younger Americans and small businesses. This widespread use complicates the conversation, as any decision about TikTok’s future will inevitably affect millions of people who rely on it for various purposes.

However, the app’s popularity should not outweigh the national security concerns it poses, particularly given its ties to the Chinese government. ByteDance’s well-documented connections to the Chinese government have heightened fears in Washington about the potential misuse of TikTok’s data collection capabilities. These risks are not speculative — they reflect patterns of behavior consistent with Chinese state-sponsored cyber activities. Allowing donor-driven priorities to eclipse these legitimate security concerns undermines public trust in the policymaking process and erodes confidence in government institutions.

This situation raises a critical question: What other national priorities might be sacrificed to appease donors with outsized influence? If decisions about TikTok — an app that elicits bipartisan concerns about its national security implications — can be swayed, what does this mean for other pressing issues like energy policy, defense, or trade? The stakes are far too high to let financial interests dictate public policy outcomes.

Americans deserve a government that treats national security as a top priority and not one that is negotiable or secondary to the interests of private wealthy donors.

—By Dewardric McNeal, managing director & senior policy analyst at Longview Global and CNBC contributor, who served as an Asia policy specialist at the Defense Department during the Obama administration.

Technology

Global movement to protect kids online fuels a wave of AI safety tech

The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.

In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.

Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.

This push from regulators is increasingly causing something of a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.

Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.

Such regulatory measures have been met with criticism from the tech industry — not least due to concerns that they may infringe on internet users’ privacy.

Digital ID tech flourishing

At the heart of all these age verification measures is one company: Yoti.

Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the age of 13- to 24-year-olds to within two years.

The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.

“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”

Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.

“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”

Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.

“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”

Child-safe smartphones

The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.

Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.

The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.

“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”

The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.

Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.

The tech giants have for years been accused of worsening mental health in children and teens due to the rise of online bullying and social media addiction. They in return argue they’ve taken steps to address these issues through increased parental controls and privacy features.

“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”

Technology

‘AI may eat software,’ but several tech names just wrapped a huge week

MongoDB’s stock just closed out its best week on record, leading a rally in enterprise technology companies that are seeing tailwinds from the artificial intelligence boom.

In addition to MongoDB’s 44% rally, Pure Storage soared 33%, its second-sharpest gain ever, while Snowflake jumped 21%. Autodesk rose 8.4%.

Since generative AI started taking off in late 2022 following the launch of OpenAI’s ChatGPT, the big winners have been Nvidia, for its graphics processing units, as well as the cloud vendors like Microsoft, Google and Oracle, and companies packaging and selling GPUs, such as Dell and Super Micro Computer.

For many cloud software vendors and other enterprise tech companies, Wall Street has been waiting to see if AI will be a boon to their business, or if it might displace it.

Quarterly results this week and commentary from company executives may have eased some of those concerns, showing that the financial benefits of AI are making their way downstream.

MongoDB CEO Dev Ittycheria told CNBC’s “Squawk Box” on Wednesday that enterprise rollouts of AI services are happening, but slowly.

“You start to see deployments of agents to automate back office, maybe automate sales and marketing, but it’s still not yet kind of full force in the enterprise,” Ittycheria said. “People want to see some wins before they deploy more investment.”

Revenue at MongoDB, which sells cloud database services, rose 24% from a year earlier to $591 million, sailing past the $556 million average analyst estimate, according to LSEG. Earnings also exceeded expectations, as did the company’s full-year forecast for profit and revenue.

MongoDB said in its earnings report that it’s added more than 5,000 customers year-to-date, “the highest ever in the first half of the year.”

“We think that’s a good sign of future growth because a lot of these companies are AI native companies who are coming to MongoDB to run their business,” Ittycheria said.

Pure Storage enjoyed a record pop on Thursday, when the stock jumped 32% to an all-time high.

The data storage management vendor reported quarterly results that topped estimates and lifted its guidance for the year. But what’s exciting investors the most is early returns from Pure’s recent contract with Meta. Pure will help the social media company efficiently manage the massive storage demands created by AI.

Pure said it started recognizing revenue from its Meta deployments in the second quarter, and finance chief Tarek Robbiati said on the earnings call that the company is seeing “increased interest from other hyperscalers” looking to replace their traditional storage with Pure’s technology.

‘Banger of a report’

Reports from MongoDB and Pure landed the same week that Nvidia announced quarterly earnings, and said revenue soared 56% from a year earlier, marking a ninth-straight quarter of growth in excess of 50%.

Nvidia has emerged as the world’s most-valuable company by selling advanced AI processors to all of the infrastructure providers and model developers.

While growth at Nvidia has slowed from its triple-digit rate in 2023 and 2024, it’s still expanding at a much faster pace than its megacap peers, indicating that there’s no end in sight when it comes to the expansive AI buildouts.

“It was a banger of a report,” said Brad Gerstner, CEO of Altimeter Capital, in an interview with CNBC’s “Halftime Report” on Thursday. “This company is accelerating at scale.”

Data analytics vendor Snowflake talked up its Snowflake AI data cloud in its quarterly earnings report on Wednesday.

Snowflake shares popped 20% following better-than-expected earnings and revenue. The company also boosted its guidance for the year for product revenue, and said it has more than 6,100 customers using Snowflake AI, up from 5,200 during the prior quarter.

“Our progress with AI has been remarkable,” Snowflake CEO Sridhar Ramaswamy said on the earnings call. “Today, AI is a core reason why customers are choosing Snowflake, influencing nearly 50% of new logos won in Q2.”

Autodesk, founded in 1982, has been around much longer than MongoDB, Pure Storage or Snowflake. The company is known for its AutoCAD software used in architecture and construction.

The company has underperformed the broader tech sector of late, and last year activist investor Starboard Value jumped into the stock to push for improvements in operations and financial performance, including cost cuts. In February, Autodesk slashed 9% of its workforce, and two months later the company settled with Starboard, adding two newcomers to its board.

The stock is still trailing the Nasdaq for the year, but climbed 9.1% on Friday after Autodesk reported results that exceeded Wall Street estimates and increased its full-year revenue guidance.

Last year, Autodesk introduced Project Bernini to develop new AI models and create what it calls “AI‑driven CAD engines.”

On Thursday’s earnings call, CEO Andrew Anagnost was asked what he’s most excited about across his company’s product portfolio when it comes to AI.

Anagnost touted the ability of Autodesk to help customers simplify workflow across products and promoted the Autodesk Assistant as a way to enhance productivity through simple prompts.

He also addressed the elephant in the room: The existential threat that AI presents.

“AI may eat software,” he said, “but it’s not gonna eat Autodesk.”

Technology

Meta changes teen AI chatbot responses as Senate begins probe into ‘romantic’ conversations

Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations.

The social media giant is now training its AI chatbots so that they do not generate responses to teenagers about subjects like self-harm, suicide and disordered eating, and so that they avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed.

The company said AI chatbots will instead point teenagers to expert resources when appropriate.

“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.

Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.

The company said it’s unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of the company’s longer-term measures on teen safety.

TechCrunch was first to report the change.

Last week, Sen. Josh Hawley, R-Mo., said that he was launching an investigation into Meta following a Reuters report about the company permitting its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.

The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that staff and contract workers should take into account when developing and training the software.  

In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”

A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”

Most recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday, concluding that it should not be used by anyone under the age of 18 because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support,” the nonprofit said in a statement.

“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” said Common Sense Media CEO James Steyer in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”

A separate Reuters report published on Friday found “dozens” of flirty AI chatbots based on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.

The report said that when prompted, the AI chatbots would generate “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”

A Meta spokesperson told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”

“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the Meta spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”
