Shou Zi Chew, chief executive officer of TikTok Inc., speaks during the Bloomberg New Economy Forum in Singapore, on Wednesday, Nov. 16, 2022.
Bryan van der Beek | Bloomberg | Getty Images
TikTok CEO Shou Zi Chew will face a tough crowd on Thursday when he testifies before the House Energy and Commerce Committee while his company is on the brink of a potential ban in the U.S.
Although TikTok is the one in the hot seat on Thursday, the hearing will also raise existential questions for the U.S. government regarding how it regulates technology. Lawmakers recognize that the concerns over broad data collection and the ability to influence what information consumers see extend far beyond TikTok alone. U.S. tech platforms including Meta’s Facebook and Instagram, Google’s YouTube, Twitter and Snap’s Snapchat have raised similar fears for lawmakers and users.
That means that while trying to understand whether TikTok can effectively protect U.S. consumers under a Chinese owner, lawmakers will also have to grapple with how best to address consumer harms across the industry.
Conversations with lawmakers, congressional aides and outside experts ahead of the hearing reveal the difficult line the government needs to walk to protect U.S. national security while avoiding excessive action against a single app and violating First Amendment rights.
Evaluating a potential ban
There’s little appetite in Washington to accept the potential risks that TikTok’s ownership by Chinese company ByteDance poses to U.S. national security. Congress has already banned the app on government devices and some states have made similar moves.
The interagency panel tasked with reviewing national security risks stemming from ByteDance’s ownership has threatened a ban if the company won’t sell its stake in the app.
Still, an outright ban raises its own concerns, potentially missing the forest for the trees.
“If members focus solely on the prospect of a ban or a forced sale without addressing some of the more pervasive issues, particularly those facing children and younger users, shared by TikTok and U.S.-based social media companies, I think that would be a mistake,” Rep. Lori Trahan, D-Mass., a committee member, told CNBC in an interview on Tuesday. Trahan said members should ask about national security risks of the app, but those questions should be substantive.
A TikTok advertisement at Union Station in Washington, DC, US, on Wednesday, March 22, 2023.
Nathan Howard | Bloomberg | Getty Images
Rep. Gus Bilirakis, R-Fla., who chairs the E&C subcommittee on innovation, data and commerce, said he and many of his colleagues are going into the hearing open to solutions.
“We have to be open-minded and deliberate,” Bilirakis told CNBC in an interview on Wednesday. “But at the same time, time is of the essence.”
If the government moves for a ban where the concerns could reasonably be mitigated with a less restrictive measure, it could pose First Amendment issues, according to Jameel Jaffer, executive director of the Knight First Amendment Institute at Columbia University.
“A ban here is in some ways under-inclusive because it would be focused just on TikTok or a small number of platforms, when in fact many other platforms are collecting this kind of information as well,” Jaffer said. “And in other ways, it would be over-broad because there are less restrictive ways that the government could achieve its ends.”
While some might wonder if cutting off Americans’ access to TikTok is really such a violation of rights, Jaffer said the public should consider it in terms of the U.S. government’s authority to decide which media Americans can access.
“It’s a good thing that if the government wants to ban Americans from accessing foreign media, including foreign social media… it has to carry a heavy burden in court,” Jaffer said.
Many lawmakers agree that the government should make its case more clearly to the American public for why a ban is necessary, should it go that route. The bipartisan RESTRICT Act recently introduced in the Senate, for example, would require such an explanation, to the extent possible, when the government wants to limit foreign-owned technology for national security reasons.
Trahan said she could support legislation similar to the RESTRICT Act in the House, which would create a process to mitigate national security risks of technologies from foreign adversary countries, but passing such a bill would still not be enough.
“The message that I want folks to hear is that we cannot afford to pass this legislation or something like it, watch the administration ban or force the sale of TikTok and declare victory in the fight to rein in the abuses of dominant Big Tech companies,” Trahan said. “I think the conversation right now about a ban certainly threatens to let Big Tech companies off the hook, and it’s on Congress not to fall into that trap.”
Even if the U.S. successfully banned TikTok or forced it to spin off from ByteDance, there’s no way to know for sure that any earlier-collected data is out of reach of the Chinese government.
“If that divestment would occur, how do you segregate the code bases between ByteDance and TikTok?” asked John Lash, who advises clients on risk mitigation agreements with the Committee on Foreign Investment in the U.S. (CFIUS) but hasn’t worked for TikTok or ByteDance. “And how is the U.S. government going to get comfortable that the asset, TikTok, which is hypothetically sold, is free of any type of backdoor that was either maliciously inserted or just weaknesses in code, errors that occur regularly in how code is structured?”
“I think the concern is valid. My big issue is that genie’s sort of out of the bottle,” Eric Cole, a cybersecurity consultant who began his career as a hacker for the Central Intelligence Agency, said of the data security fears. “At this point, it’s so embedded that even if they were successful in banning TikTok altogether, the damage is done.”
Addressing industry-wide concerns
Thursday’s hearing will feature several lawmakers on both sides of the aisle calling for comprehensive privacy reform, like the bill the panel passed last year that never made it to the floor for a vote.
Those calls serve as recognition that many of the concerns about TikTok, apart from its ownership by a Chinese company, are shared by other prominent tech platforms headquartered in the U.S.
Both Trahan and Bilirakis mentioned the need for privacy reform as a more systemic solution to the issues raised by TikTok. Both are especially concerned about the social media company’s potentially harmful impacts on children and said they would drill down on TikTok’s protections in the hearing.
TikTok has touted a complex plan known as Project Texas to help ease U.S. concerns over its ownership. Under the plan, it will base its U.S. data operations domestically and allow its code to be reviewed and sent to the app stores by outside parties.
Chew plans to tell Congress that he strongly prioritizes the safety of users, and particularly teens, that TikTok will firewall U.S. user data from “unauthorized foreign access,” it “will not be manipulated by any government” and it will be transparent and allow independent monitors to assess its compliance.
Experts and even some lawmakers acknowledge that Project Texas offers a step forward on some aspects of consumer protection they’ve pushed for in the tech industry more broadly.
“TikTok is in a really unique position right now to take some positive steps on issues that a lot of top American companies have fallen behind and frankly even regressed on whether it’s protecting kids or embracing transparency,” Trahan said. While she believes there are still many questions TikTok needs to answer about the adequacy of Project Texas, Trahan said she is “hopeful” about the company’s professed “openness to stronger transparency mechanisms.”
Lawmakers and aides who spoke with CNBC ahead of the hearing emphasized that comprehensive privacy legislation will be necessary regardless of what action is taken against TikTok in particular. That’s how a similar situation in the future may be prevented, and a way to hold U.S. companies to higher standards as well.
But given federal digital privacy protections don’t currently exist, Lash said the U.S. should consider what it would mean if Project Texas were to go away.
“In lieu of comprehensive federal data privacy regulation in the United States, which is needed, does Project Texas give the best available option right now to protect national security?” asked Lash, whose advisory is one of a small group of firms with the expertise to advise the company on an agreement should a deal go through. “And does it continue if ByteDance is forced to divest their interests?”
The plan appears to address the issues that lawmakers are concerned about, said Lash, but what it can’t address are “the theoretical risks around [what] may happen, could happen as it relates to the application.”
“I would say, based on what I’ve seen out in the public, it does seem to comprehensively address a lot of the real technical risks that may be arising,” he said.
Still, policymakers appear skeptical that Project Texas reaches that bar.
An aide for the House Energy and Commerce Committee who was only authorized to speak on background told reporters earlier this week that TikTok’s risk mitigation plans were “purely marketing.” Another aide for the committee noted that even if the U.S. can be assured the data is secure, it’s impossible to comb through all the existing code for vulnerabilities.
E&C Chair Cathy McMorris Rodgers, R-Wash., supports a ban to address the immediate risks TikTok poses as well as comprehensive privacy legislation that passed through the committee last Congress to prevent repeat situations, according to E&C aides.
TikTok’s strategy
In the lead-up to the hearing, TikTok has turned to creators and users to share their support for the app and help lawmakers understand the unique features that make it an important source of income, open expression and education for many Americans.
On Tuesday, Chew posted a video on TikTok touting its 150 million monthly active users in the U.S. and appealed to them to leave comments about what they want their lawmakers to know about why they love TikTok.
The company has also found an ally in its efforts to fight a ban in Rep. Jamaal Bowman, D-N.Y., a TikTok user himself who discovered the power of the app to build connections with constituents while vlogging the lengthy Speaker of the House election.
Rep. Jamaal Bowman (D-NY) speaks at a news conference outside the U.S. Capitol Building on February 02, 2023 in Washington, DC.
Anna Moneymaker | Getty Images
On Wednesday, Bowman held a press conference with dozens of creators, opposing the ban and saying rhetoric around the app is a sort of “red scare” pushed primarily by Republicans. He said he supports comprehensive legislation addressing privacy issues across the industry, rather than singling out one platform. Bowman noted lawmakers haven’t received a bipartisan congressional briefing from the administration on national security risks stemming from TikTok.
“Let’s not have a dishonest conversation,” Bowman said. “Let’s not be racist toward China and express our xenophobia when it comes to TikTok. Because American companies have done tremendous harm to American people.”
Reps. Mark Pocan, D-Wisc., and Robert Garcia, D-Calif., joined Bowman and the creators, announcing their opposition to a ban. Garcia, who is openly gay, said it’s important that young queer creators “are able to find themselves in this space, share information and feel comfortable, in some cases come out.”
“Honestly it’s done best on the TikTok platform than any other social media platform that currently exists, certainly in the United States,” Garcia said.
Creators at the event on Wednesday shared the opportunities that TikTok has afforded them that aren’t available in the same way on other apps. Several creators who spoke with CNBC said they have other social media channels but have far fewer followers on them, due in part to the easy discoverability built into TikTok’s design.
“I’ve been on social media for probably ten years,” said David Ma, a Brooklyn-based content creator, director and filmmaker on TikTok. But it wasn’t until he joined TikTok that his following grew exponentially, to more than 1 million people. “It’s given me visibility with people that are going to fundamentally change the trajectory of my career.”
Tim Martin, a college football coach in North Dakota who posts about sports on TikTok to a following of 1 million users, estimated 70% of his income comes from the app. Martin credits the TikTok algorithm with getting his videos in front of users who truly care about what he has to share, which has helped him grow his following there far more than on Instagram.
But TikTok’s attempt to shift the narrative to positive stories from creators and users may still fall flat for some lawmakers.
Bilirakis said the strategy is “not resonating with our colleagues. Definitely not with me.” That’s because he hears other anecdotes about constituents’ encounters with the app that make him worry for teens’ safety.
“I do think there’s a chance that it may not necessarily have the impact that TikTok is looking for,” said Jasmine Enberg, a social media analyst for Insider Intelligence. “It’s more evidence of how firmly entrenched the app is in the digital lives of Americans, which isn’t necessarily going to help convince U.S. lawmakers that TikTok can’t be used or isn’t being used to influence public opinion.”
Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to inappropriate content.
STR | Nurphoto via Getty Images
The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.
In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.
Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.
This regulatory push is prompting a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.
Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.
Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.
Digital ID tech flourishing
At the heart of all these age verification measures is one company: Yoti.
Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the ages of 13- to 24-year-olds to within two years.
The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.
“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”
Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.
“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”
Read more CNBC tech news
Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.
“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”
Child-safe smartphones
The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.
Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.
The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.
Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.
HMD Global
“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”
The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.
Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.
The tech giants have for years been accused of worsening mental health in children and teens due to the rise of online bullying and social media addiction. They, in turn, argue they’ve taken steps to address these issues through increased parental controls and privacy features.
“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”
A banner for Snowflake Inc. is displayed at the New York Stock Exchange to celebrate the company’s initial public offering on Sept. 16, 2020.
Brendan McDermid | Reuters
MongoDB’s stock just closed out its best week on record, leading a rally in enterprise technology companies that are seeing tailwinds from the artificial intelligence boom.
In addition to MongoDB’s 44% rally, Pure Storage soared 33%, its second-sharpest gain ever, while Snowflake jumped 21%. Autodesk rose 8.4%.
Since generative AI started taking off in late 2022 following the launch of OpenAI’s ChatGPT, the big winners have been Nvidia, for its graphics processing units, as well as the cloud vendors like Microsoft, Google and Oracle, and companies packaging and selling GPUs, such as Dell and Super Micro Computer.
For many cloud software vendors and other enterprise tech companies, Wall Street has been waiting to see if AI will be a boon to their business, or if it might displace it.
Quarterly results this week and commentary from company executives may have eased some of those concerns, showing that the financial benefits of AI are making their way downstream.
MongoDB CEO Dev Ittycheria told CNBC’s “Squawk Box” on Wednesday that enterprise rollouts of AI services are happening, but slowly.
“You start to see deployments of agents to automate back office, maybe automate sales and marketing, but it’s still not yet kind of full force in the enterprise,” Ittycheria said. “People want to see some wins before they deploy more investment.”
Revenue at MongoDB, which sells cloud database services, rose 24% from a year earlier to $591 million, sailing past the $556 million average analyst estimate, according to LSEG. Earnings also exceeded expectations, as did the company’s full-year forecast for profit and revenue.
MongoDB said in its earnings report that it’s added more than 5,000 customers year-to-date, “the highest ever in the first half of the year.”
“We think that’s a good sign of future growth because a lot of these companies are AI native companies who are coming to MongoDB to run their business,” Ittycheria said.
Pure Storage enjoyed a record pop on Thursday, when the stock jumped 32% to an all-time high.
The data storage management vendor reported quarterly results that topped estimates and lifted its guidance for the year. But what’s exciting investors the most is early returns from Pure’s recent contract with Meta. Pure will help the social media company manage its massive storage needs efficiently with the demands of AI.
Pure said it started recognizing revenue from its Meta deployments in the second quarter, and finance chief Tarek Robbiati said on the earnings call that the company is seeing “increased interest from other hyperscalers” looking to replace their traditional storage with Pure’s technology.
‘Banger of a report’
Reports from MongoDB and Pure landed the same week that Nvidia announced quarterly earnings, and said revenue soared 56% from a year earlier, marking a ninth-straight quarter of growth in excess of 50%.
Nvidia has emerged as the world’s most-valuable company by selling advanced AI processors to all of the infrastructure providers and model developers.
While growth at Nvidia has slowed from its triple-digit rate in 2023 and 2024, it’s still expanding at a much faster pace than its megacap peers, indicating that there’s no end in sight when it comes to the expansive AI buildouts.
“It was a banger of a report,” said Brad Gerstner, CEO of Altimeter Capital, in an interview with CNBC’s “Halftime Report” on Thursday. “This company is accelerating at scale.”
Read more CNBC tech news
Data analytics vendor Snowflake talked up its Snowflake AI data cloud in its quarterly earnings report on Wednesday.
Snowflake shares popped 20% following better-than-expected earnings and revenue. The company also boosted its guidance for the year for product revenue, and said it has more than 6,100 customers using Snowflake AI, up from 5,200 during the prior quarter.
“Our progress with AI has been remarkable,” Snowflake CEO Sridhar Ramaswamy said on the earnings call. “Today, AI is a core reason why customers are choosing Snowflake, influencing nearly 50% of new logos won in Q2.”
Autodesk, founded in 1982, has been around much longer than MongoDB, Pure Storage or Snowflake. The company is known for its AutoCAD software used in architecture and construction.
The company has underperformed the broader tech sector of late, and last year activist investor Starboard Value jumped into the stock to push for improvements in operations and financial performance, including cost cuts. In February, Autodesk slashed 9% of its workforce, and two months later the company settled with Starboard, adding two newcomers to its board.
The stock is still trailing the Nasdaq for the year, but climbed 9.1% on Friday after Autodesk reported results that exceeded Wall Street estimates and increased its full-year revenue guidance.
Last year, Autodesk introduced Project Bernini to develop new AI models and create what it calls “AI‑driven CAD engines.”
On Thursday’s earnings call, CEO Andrew Anagnost was asked what he’s most excited about across his company’s product portfolio when it comes to AI.
Anagnost touted the ability of Autodesk to help customers simplify workflow across products and promoted the Autodesk Assistant as a way to enhance productivity through simple prompts.
He also addressed the elephant in the room: The existential threat that AI presents.
“AI may eat software,” he said, “but it’s not gonna eat Autodesk.”
Meta Platforms CEO Mark Zuckerberg departs after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram, at U.S. District Court in Washington, D.C., U.S., April 15, 2025.
Nathan Howard | Reuters
Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations.
The social media giant is now training its AI chatbots so that they do not generate responses to teenagers about subjects like self-harm, suicide and disordered eating, and so that they avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed.
The company said AI chatbots will instead point teenagers to expert resources when appropriate.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.
Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.
The company said it’s unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of the company’s longer-term work on teen safety.
Last week, Sen. Josh Hawley, R-Mo., said that he was launching an investigation into Meta following a Reuters report about the company permitting its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.
Read more CNBC tech news
The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that staff and contract workers should take into account when developing and training the software.
In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”
A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Most recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday, saying the tool should not be used by anyone under the age of 18 because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”
“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” said Common Sense Media CEO James Steyer in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”
A separate Reuters report published on Friday found “dozens” of flirty AI chatbots based on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.
The report said that when prompted, the AI chatbots would generate “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”
A Meta spokesperson told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the Meta spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”