Republican presidential nominee and former U.S. President Donald Trump gestures at the Bitcoin 2024 event in Nashville, Tennessee, U.S., July 27, 2024.
Kevin Wurm | Reuters
With the levers of power in Washington, D.C., about to change hands, a raft of pro-crypto legislation is expected from Congress and the Trump administration. To date, there has been less focus on the cybersecurity side of the political effort, an omission that could matter given how warily much of the U.S. public still views crypto.
Cryptocurrency, which includes not just bitcoin but ethereum, dogecoin, and others, has a faithful following among American adults. According to the Pew Research Center, 17% of American adults have traded in crypto, but that share has remained virtually unchanged since 2021. Meanwhile, according to a poll Pew conducted shortly before the election, 63% of adults say they have little to no confidence in crypto investing or trading, and don’t think cryptocurrencies are reliable and safe.
The incoming Trump administration has been touting its crypto bona fides, with a focus on the industry rather than the consumer.
“The No. 1 most important priority for the industry is to make sure they have a regulatory framework so that they can do business,” said Dusty Johnson (R-South Dakota), who helped author the Financial Innovation and Technology for the 21st Century Act (FIT21) that addresses the treatment of digital assets under U.S. law. The law passed in the House with bipartisan support but has not been taken up by the Senate.
FIT21 did contain specific crypto-cybersecurity provisions, which Johnson predicts will be built upon in the new administration.
Glenn “GT” Thompson (R-Pennsylvania), chairman of the House Committee on Agriculture and a co-author of FIT21, says the bill’s cybersecurity provisions will remain key under the incoming administration.
“FIT21 requires important cybersecurity safeguards for financial intermediaries engaging with digital assets,” Thompson said in a statement to CNBC, adding that FIT21 includes explicit provisions to ensure that regulated firms take steps to evaluate and mitigate cyber vulnerabilities to protect both the services they offer and assets they hold on behalf of their customers.
“These cybersecurity requirements are critical for protecting digital asset markets and market participants,” Thompson said.
Some experts, however, doubt that there will be as much action on the security side of the legislation, given that crypto proponents are closely advising the Trump administration.
“Personnel is policy,” said Jeff Le, vice president of global government affairs and public policy at Security Scorecard and a former assistant cabinet secretary in the California governor’s office. The top ranks of the incoming economic team, made up of SEC Chair-designate Paul Atkins, Commerce Secretary Howard Lutnick, and Treasury Secretary-designate Scott Bessent, “have had a track record of supporting cryptocurrencies,” Le said.
The crypto industry donated significant sums to the 2024 election cycle, contributions that were not limited to the GOP, but focused more broadly on lawmakers with an industry-friendly view of crypto regulation. It’s likely that will continue to influence political calculations. The pro-crypto and bipartisan super PAC Fairshake and its affiliates have already raised over $100 million for the 2026 midterm elections, including commitments from Coinbase and Silicon Valley venture fund Andreessen Horowitz, an early backer of Coinbase. Top Andreessen Horowitz executives have been tapped for roles in the Trump administration.
“We have the most pro-crypto Congress ever [in] history, we have an extraordinarily pro-crypto president coming into office,” Faryar Shirzad, chief policy officer at Coinbase, recently told CNBC.
“It is rare to see cryptocurrency proponents advocate for increased regulation in the space, regardless of reason,” said Jason Baker, senior threat intelligence consultant at GuidePoint Security.
Baker says the anonymity and independence of cryptocurrency are often cited as primary benefits that legislation would curtail, and cryptocurrency’s decentralized nature makes it hard to regulate in a traditional sense.
“Given current signaling from the incoming administration and the interests of cryptocurrency proponents influential to the administration, we do not anticipate significant advances in cryptocurrency regulation within the next four years,” Baker said.
If there isn’t much action on regulation, there are some obvious ramifications for cybersecurity, he said, driven by the correlation between a pro-crypto Washington, D.C., and bullish bets by investors on digital assets.
“Cybercrime is often driven by benefits from increasing cryptocurrency value. In ransomware, for example, ransoms are commonly demanded in USD, but payments are made most frequently in bitcoin. When the value of bitcoin increases, cybercriminals will benefit,” Baker said.
The value of bitcoin has risen significantly over the past three months in what has been a risk-on market environment.
“Future de-emphasis on cryptocurrency regulation may positively signal that cybercrime operations in bitcoin remain viable and unlikely to suffer government disruption to operators in the space,” Baker said.
Cybercriminals have also been changing tactics to evade legislation and scrutiny, Baker added, switching to more under-the-radar cryptocurrencies like Monero.
Ransomware’s potential role in Congressional action
Baker predicts that regulation centered on organizations issuing cryptocurrency payments — whether ransom payments or payments for other purposes — is more achievable and palatable in the current regulatory environment.
“This could include, for example, increased requirements for reporting ransom payments when made, a policy which has been floated without gaining substantial traction in recent years,” Baker said. This approach can be argued as regulating end users and purposes rather than the underlying cryptocurrency itself.
In addition to ransomware payments to restore access to technology systems, there are other reasons why payment in cryptocurrency is common in digital extortion schemes, including to protect the identity and operational security of the criminal. Private organizations may also opt to use crypto to purchase leaked data or credentials which have been made available on illicit forums.
There could also be situations where private individuals attempt to report and receive payment for discovered vulnerabilities under a “bug bounty” program — whether voluntary or coerced (so-called “beg bounty”). They may request payment in cryptocurrency out of personal preference or general desire for privacy, and private organizations may or may not oblige.
“While there are doubtless other options for organizations to use cryptocurrency in some form, these are the primary forms we see on a regular or more frequent basis,” Baker said, adding that regulating such payments “would almost certainly have downstream impacts on cryptocurrency value by virtue of their impact on transaction volume.”
Steve McNew, global leader of blockchain and digital assets at FTI Consulting, thinks some cyber-crypto legislation may happen, especially rules governing when a company victimized by a ransomware attack pays its attackers in cryptocurrency.
“There’s more than just public policy at issue,” said McNew. If a company has been compromised in a cyberattack and is required to make public disclosure of the ransoms it paid out, it can result in the company becoming a bigger future target for other criminal enterprises, McNew said. While it might make sense, on one hand, to provide disclosure as to where funds are going and what cryptocurrencies were used in a payment, doing so can put the company (and by extension its customers, employees and partners) in harm’s way.
“So, any policy decisions around cryptocurrency disclosures in this context will require balancing the need for transparency around the use of cryptocurrency in criminal matters alongside the risks such transparency might exacerbate,” McNew says.
Though FIT21 passed the House with broad bipartisan support, it did not address these issues specifically.
Le expects some legislative action that may attempt to address this topic. “The next Congress could see more traction for proposed legislation like the Cryptocurrency Cybersecurity Information Sharing Act of 2022, which allows companies to share information regarding cybersecurity threats with the federal government and with one another,” he said.
Le said Congress may also revisit the work of outgoing Financial Services Chair Patrick McHenry (R-North Carolina) and Rep. Brittany Pettersen (D-Colorado) and the Ransomware and Financial Stability Act of 2024, which aimed at “strengthening the resilience of the U.S. financial system against ransomware attacks, establishing clear protocols for ransom payments, and ensuring that such payments, including those involving cryptocurrencies, are made within a controlled and legally compliant framework.”
But he added that it is unclear whether the Trump administration will continue the Biden administration’s leadership role in the International Counter Ransomware Initiative, a 68-country coalition aimed at preventing ransomware payments.
The broader bitcoin governance battle
McNew says that unresolved questions about the basic parameters surrounding crypto, even down to its definition, could hamstring legislation, including provisions intended to foster innovation and industry adoption.
“U.S. lawmakers have work to do in determining roles, responsibilities, and basic parameters for how the industry will be governed before any meaningful legislation can take hold,” McNew said. As an example, establishing a designated authority for digital assets is an imperative that has yet to be addressed.
Basic governance structure was a major sticking point during the Biden administration, and a primary reason Securities and Exchange Commission Chair Gary Gensler was a thorn in the side of the crypto industry.
“Lawmakers must decide whether responsibility will fall under the SEC, the CFTC, or another body. Issues around taxation and broker-dealer definitions for digital assets markets will also need to be defined and provided with a set of clear rules for legislation to be effective,” McNew said, adding that given how closely divided the House will be in the next session, it may be tough to craft an agreement.
Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to inappropriate content.
STR | Nurphoto via Getty Images
The global online safety movement has paved the way for a number of artificial intelligence-powered products designed to keep kids away from potentially harmful things on the internet.
In the U.K., a new piece of legislation called the Online Safety Act imposes a duty of care on tech companies to protect children from age-inappropriate material, hate speech, bullying, fraud, and child sexual abuse material (CSAM). Companies can face fines as high as 10% of their global annual revenue for breaches.
Further afield, landmark regulations aimed at keeping kids safer online are swiftly making their way through the U.S. Congress. One bill, known as the Kids Online Safety Act, would make social media platforms liable for preventing their products from harming children — similar to the Online Safety Act in the U.K.
This push from regulators is prompting a rethink at several major tech players. Pornhub and other online pornography giants are blocking all users from accessing their sites unless they go through an age verification system.
Porn sites haven’t been alone in taking action to verify users’ ages, though. Spotify, Reddit and X have all implemented age assurance systems to prevent children from being exposed to sexually explicit or inappropriate materials.
Such regulatory measures have been met with criticisms from the tech industry — not least due to concerns that they may infringe internet users’ privacy.
Digital ID tech flourishing
At the heart of all these age verification measures is one company: Yoti.
Yoti produces technology that captures selfies and uses artificial intelligence to verify someone’s age based on their facial features. The firm says its AI algorithm, which has been trained on millions of faces, can estimate the ages of 13- to 24-year-olds to within two years.
The firm has previously partnered with the U.K.’s Post Office and is hoping to capitalize on the broader push for government-issued digital ID cards in the U.K. Yoti is not alone in the identity verification software space — other players include Entrust, Persona and iProov. However, the company has been the most prominent provider of age assurance services under the new U.K. regime.
“There is a race on for child safety technology and service providers to earn trust and confidence,” Pete Kenyon, a partner at law firm Cripps, told CNBC. “The new requirements have undoubtedly created a new marketplace and providers are scrambling to make their mark.”
Yet the rise of digital identification methods has also led to concerns over privacy infringements and possible data breaches.
“Substantial privacy issues arise with this technology being used,” said Kenyon. “Trust is key and will only be earned by the use of stringent and effective technical and governance procedures adopted in order to keep personal data safe.”
Rani Govender, policy manager for child safety online at British child protection charity NSPCC, said that the technology “already exists” to authenticate users without compromising their privacy.
“Tech companies must make deliberate, ethical choices by choosing solutions that protect children from harm without compromising the privacy of users,” she told CNBC. “The best technology doesn’t just tick boxes; it builds trust.”
Child-safe smartphones
The wave of new tech emerging to prevent children from being exposed to online harms isn’t just limited to software.
Earlier this month, Finnish phone maker HMD Global launched a new smartphone called the Fusion X1, which uses AI to stop kids from filming or sharing nude content or viewing sexually explicit images from the camera, screen and across all apps.
The phone uses technology developed by SafeToNet, a British cybersecurity firm focused on child safety.
Finnish phone maker HMD Global’s new smartphone uses AI to prevent children from being exposed to nude or sexually explicit images.
HMD Global
“We believe more needs to be done in this space,” James Robinson, vice president of family vertical at HMD, told CNBC. He stressed that HMD came up with the concept for children’s devices prior to the Online Safety Act entering into force, but noted it was “great to see the government taking greater steps.”
The release of HMD’s child-friendly phone follows heightened momentum in the “smartphone-free” movement, which encourages parents to avoid letting their children own a smartphone.
Going forward, the NSPCC’s Govender says that child safety will become a significant priority for digital behemoths such as Google and Meta.
The tech giants have for years been accused of worsening mental health in children and teens through the rise of online bullying and social media addiction. In return, they argue they’ve taken steps to address these issues through increased parental controls and privacy features.
“For years, tech giants have stood by while harmful and illegal content spread across their platforms, leaving young people exposed and vulnerable,” she told CNBC. “That era of neglect must end.”
A banner for Snowflake Inc. is displayed at the New York Stock Exchange to celebrate the company’s initial public offering on Sept. 16, 2020.
Brendan McDermid | Reuters
MongoDB’s stock just closed out its best week on record, leading a rally in enterprise technology companies that are seeing tailwinds from the artificial intelligence boom.
In addition to MongoDB’s 44% rally, Pure Storage soared 33%, its second-sharpest gain ever, while Snowflake jumped 21%. Autodesk rose 8.4%.
Since generative AI started taking off in late 2022 following the launch of OpenAI’s ChatGPT, the big winners have been Nvidia, for its graphics processing units, as well as the cloud vendors like Microsoft, Google and Oracle, and companies packaging and selling GPUs, such as Dell and Super Micro Computer.
For many cloud software vendors and other enterprise tech companies, Wall Street has been waiting to see if AI will be a boon to their business, or if it might displace it.
Quarterly results this week and commentary from company executives may have eased some of those concerns, showing that the financial benefits of AI are making their way downstream.
MongoDB CEO Dev Ittycheria told CNBC’s “Squawk Box” on Wednesday that enterprise rollouts of AI services are happening, but slowly.
“You start to see deployments of agents to automate back office, maybe automate sales and marketing, but it’s still not yet kind of full force in the enterprise,” Ittycheria said. “People want to see some wins before they deploy more investment.”
Revenue at MongoDB, which sells cloud database services, rose 24% from a year earlier to $591 million, sailing past the $556 million average analyst estimate, according to LSEG. Earnings also exceeded expectations, as did the company’s full-year forecast for profit and revenue.
MongoDB said in its earnings report that it’s added more than 5,000 customers year-to-date, “the highest ever in the first half of the year.”
“We think that’s a good sign of future growth because a lot of these companies are AI native companies who are coming to MongoDB to run their business,” Ittycheria said.
Pure Storage enjoyed a record pop on Thursday, when the stock jumped 32% to an all-time high.
The data storage management vendor reported quarterly results that topped estimates and lifted its guidance for the year. But what’s exciting investors most is early returns from Pure’s recent contract with Meta. Pure will help the social media company efficiently manage the massive storage demands created by AI.
Pure said it started recognizing revenue from its Meta deployments in the second quarter, and finance chief Tarek Robbiati said on the earnings call that the company is seeing “increased interest from other hyperscalers” looking to replace their traditional storage with Pure’s technology.
‘Banger of a report’
Reports from MongoDB and Pure landed the same week that Nvidia announced quarterly earnings, and said revenue soared 56% from a year earlier, marking a ninth-straight quarter of growth in excess of 50%.
Nvidia has emerged as the world’s most-valuable company by selling advanced AI processors to all of the infrastructure providers and model developers.
While growth at Nvidia has slowed from its triple-digit rate in 2023 and 2024, it’s still expanding at a much faster pace than its megacap peers, indicating that there’s no end in sight when it comes to the expansive AI buildouts.
“It was a banger of a report,” said Brad Gerstner, CEO of Altimeter Capital, in an interview with CNBC’s “Halftime Report” on Thursday. “This company is accelerating at scale.”
Data analytics vendor Snowflake talked up its Snowflake AI data cloud in its quarterly earnings report on Wednesday.
Snowflake shares popped 20% following better-than-expected earnings and revenue. The company also boosted its guidance for the year for product revenue, and said it has more than 6,100 customers using Snowflake AI, up from 5,200 during the prior quarter.
“Our progress with AI has been remarkable,” Snowflake CEO Sridhar Ramaswamy said on the earnings call. “Today, AI is a core reason why customers are choosing Snowflake, influencing nearly 50% of new logos won in Q2.”
Autodesk, founded in 1982, has been around much longer than MongoDB, Pure Storage or Snowflake. The company is known for its AutoCAD software used in architecture and construction.
The company has underperformed the broader tech sector of late, and last year activist investor Starboard Value jumped into the stock to push for improvements in operations and financial performance, including cost cuts. In February, Autodesk slashed 9% of its workforce, and two months later the company settled with Starboard, adding two newcomers to its board.
The stock is still trailing the Nasdaq for the year, but climbed 9.1% on Friday after Autodesk reported results that exceeded Wall Street estimates and increased its full-year revenue guidance.
Last year, Autodesk introduced Project Bernini to develop new AI models and create what it calls “AI‑driven CAD engines.”
On Thursday’s earnings call, CEO Andrew Anagnost was asked what he’s most excited about across his company’s product portfolio when it comes to AI.
Anagnost touted the ability of Autodesk to help customers simplify workflow across products and promoted the Autodesk Assistant as a way to enhance productivity through simple prompts.
He also addressed the elephant in the room: The existential threat that AI presents.
“AI may eat software,” he said, “but it’s not gonna eat Autodesk.”
Meta Platforms CEO Mark Zuckerberg departs after attending a Federal Trade Commission trial that could force the company to unwind its acquisitions of messaging platform WhatsApp and image-sharing app Instagram, at U.S. District Court in Washington, D.C., U.S., April 15, 2025.
Nathan Howard | Reuters
Meta on Friday said it is making temporary changes to its artificial intelligence chatbot policies related to teenagers as lawmakers voice concerns about safety and inappropriate conversations.
The social media giant is now training its AI chatbots so that they do not generate responses to teenagers about subjects like self-harm, suicide and disordered eating, and so that they avoid potentially inappropriate romantic conversations, a Meta spokesperson confirmed.
The company said AI chatbots will instead point teenagers to expert resources when appropriate.
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” the company said in a statement.
Additionally, teenage users of Meta apps like Facebook and Instagram will only be able to access certain AI chatbots intended for educational and skill-development purposes.
The company said it’s unclear how long these temporary modifications will last, but they will begin rolling out over the next few weeks across the company’s apps in English-speaking countries. The “interim changes” are part of the company’s longer-term effort on teen safety.
Last week, Sen. Josh Hawley, R-Mo., said that he was launching an investigation into Meta following a Reuters report about the company permitting its AI chatbots to engage in “romantic” and “sensual” conversations with teens and children.
The Reuters report described an internal Meta document that detailed permissible AI chatbot behaviors that staff and contract workers should take into account when developing and training the software.
In one example, the document cited by Reuters said that a chatbot would be allowed to have a romantic conversation with an eight-year-old and could tell the minor that “every inch of you is a masterpiece – a treasure I cherish deeply.”
A Meta spokesperson told Reuters at the time that “The examples and notes in question were and are erroneous and inconsistent with our policies, and have been removed.”
Most recently, the nonprofit advocacy group Common Sense Media released a risk assessment of Meta AI on Thursday, saying the tool should not be used by anyone under the age of 18 because the “system actively participates in planning dangerous activities, while dismissing legitimate requests for support.”
“This is not a system that needs improvement. It’s a system that needs to be completely rebuilt with safety as the number-one priority, not an afterthought,” said Common Sense Media CEO James Steyer in a statement. “No teen should use Meta AI until its fundamental safety failures are addressed.”
A separate Reuters report published on Friday found “dozens” of flirty AI chatbots based on celebrities like Taylor Swift, Scarlett Johansson, Anne Hathaway and Selena Gomez on Facebook, Instagram and WhatsApp.
The report said that when prompted, the AI chatbots would generate “photorealistic images of their namesakes posing in bathtubs or dressed in lingerie with their legs spread.”
A Meta spokesperson told CNBC in a statement that “the AI-generated imagery of public figures in compromising poses violates our rules.”
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” the Meta spokesperson said. “Meta’s AI Studio rules prohibit the direct impersonation of public figures.”