US Vice President Kamala Harris applauds as US President Joe Biden signs an executive order after delivering remarks on advancing the safe, secure, and trustworthy development and use of artificial intelligence, in the East Room of the White House in Washington, DC, on October 30, 2023.

Brendan Smialowski | AFP | Getty Images

After the Biden administration unveiled the first-ever executive order on artificial intelligence on Monday, a frenzy of lawmakers, industry groups, civil rights organizations, labor unions and others began digging into the 111-page document — making note of the priorities, specific deadlines and, in their eyes, the wide-ranging implications of the landmark action.

One core debate centers on a question of AI fairness. Many civil society leaders told CNBC the order does not go far enough to recognize and address real-world harms that stem from AI models — especially those affecting marginalized communities. But they say it’s a meaningful step along the path.

Many civil society groups and several tech industry groups praised the executive order’s roots — the White House’s blueprint for an AI bill of rights, released last October — but called on Congress to pass laws codifying protections and to better account for training and developing models that prioritize AI fairness instead of addressing harms after the fact.

“This executive order is a real step forward, but we must not allow it to be the only step,” Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights, said in a statement. “We still need Congress to consider legislation that will regulate AI and ensure that innovation makes us more fair, just, and prosperous, rather than surveilled, silenced, and stereotyped.”

U.S. President Joe Biden and Vice President Kamala Harris arrive for an event about their administration’s approach to artificial intelligence in the East Room of the White House on October 30, 2023 in Washington, DC.

Chip Somodevilla | Getty Images

Cody Venzke, senior policy counsel at the American Civil Liberties Union, believes the executive order is an “important next step in centering equity, civil rights and civil liberties in our national AI policy” — but that the ACLU has “deep concerns” about the executive order’s sections on national security and law enforcement.

In particular, the ACLU is concerned about the executive order’s push to “identify areas where AI can enhance law enforcement efficiency and accuracy,” as is stated in the text.

“One of the thrusts of the executive order is definitely that ‘AI can improve governmental administration, make our lives better and we don’t want to stand in way of innovation,'” Venzke told CNBC.

“Some of that stands at risk to lose a fundamental question, which is, ‘Should we be deploying artificial intelligence or algorithmic systems for a particular governmental service at all?’ And if we do, it really needs to be preceded by robust audits for discrimination and to ensure that the algorithm is safe and effective, that it accomplishes what it’s meant to do.”

Margaret Mitchell, researcher and chief ethics scientist at AI startup Hugging Face, said she agreed with the values the executive order puts forth — privacy, safety, security, trust, equity and justice — but is concerned about the lack of focus on ways to train and develop models to minimize future harms before an AI system is deployed.

“There was a call for an overall focus on applying red-teaming, but not other more critical approaches to evaluation,” Mitchell said.

“‘Red-teaming’ is a post-hoc, hindsight approach to evaluation that works a bit like whack-a-mole: Now that the model is finished training, what can you think of that might be a problem? See if it’s a problem and fix it if so.”

Mitchell wished she had seen “foresight” approaches highlighted in the executive order, such as disaggregated evaluation approaches, which can analyze a model as data is scaled up.

Dr. Joy Buolamwini, founder and president of the Algorithmic Justice League, said Tuesday at an event in New York that she felt the executive order fell short in terms of the notion of redress, or penalties when AI systems harm marginalized or vulnerable communities.

Even experts who praised the executive order’s scope believe the work will be incomplete without action from Congress.

“The President is trying to extract extra mileage from the laws that he has,” said Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists.

For example, the order seeks to work within existing immigration law to make it easier to retain high-skilled AI workers in the U.S. But immigration law has not been updated in decades, said Kaushik, who was involved in collaborative efforts with the administration in crafting elements of the order.

It falls on Congress, he added, to increase the number of employment-based green cards awarded each year and avoid losing talent to other countries.

Industry worries about stifling innovation

On the other side, industry leaders expressed wariness or even stronger feelings that the order had gone too far and would stifle innovation in a nascent sector.

Andrew Ng, longtime AI leader and cofounder of Google Brain and Coursera, told CNBC he is “quite concerned about the reporting requirements for models over a certain size,” adding that he is “very worried about overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”

In Ng’s view, thoughtful AI regulation can help advance the field, but over-regulation of aspects of the technology, such as AI model size, could hurt the open-source community, which would in turn likely benefit tech giants.

Vice President Kamala Harris and US President Joe Biden depart after delivering remarks on advancing the safe, secure, and trustworthy development and use of artificial intelligence, in the East Room of the White House in Washington, DC, on October 30, 2023.

Chip Somodevilla | Getty Images

Nathan Benaich, founder and general partner of Air Street Capital, also had concerns about the reporting requirements for large AI models, telling CNBC that the compute threshold and stipulations mentioned in the order are a “flawed and potentially distorting measure.”

“It tells us little about safety and risks discouraging emerging players from building large models, while entrenching the power of incumbents,” Benaich told CNBC.

NetChoice’s Vice President and General Counsel Carl Szabo was even more blunt.

“Broad regulatory measures in Biden’s AI red tape wishlist will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation,” said Szabo, whose group counts Amazon, Google, Meta and TikTok among its members. “Thus, this order puts any investment in AI at risk of being shut down at the whims of government bureaucrats.”

But Reggie Townsend, a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden, told CNBC that he feels the order doesn’t stifle innovation.

“If anything, I see it as an opportunity to create more innovation with a set of expectations in mind,” said Townsend.

David Polgar, founder of the nonprofit All Tech Is Human and a member of TikTok’s content advisory council, had similar takeaways: In part, he said, it’s about speeding up responsible AI work instead of slowing technology down.

“What a lot of the community is arguing for — and what I take away from this executive order — is that there’s a third option,” Polgar told CNBC. “It’s not about either slowing down innovation or letting it be unencumbered and potentially risky.”

WATCH: We have to try to engage China in AI safety conversation, UK tech minister says

CNBC Daily Open: Debt worries continue to weigh on AI-related stocks

Traders work on the floor at the New York Stock Exchange in New York City, U.S., Dec. 15, 2025.

Brendan McDermid | Reuters

U.S. stocks of late have been shaky as investors turn away from artificial intelligence shares, especially those related to AI infrastructure, such as Oracle, Broadcom and CoreWeave.

The worry is that those companies are taking on high levels of debt to finance their multibillion-dollar deals.

Oracle, for instance, said Wednesday it would need to raise capital expenditure by an additional $15 billion for its current fiscal year and increase its lease commitments for data centers. The company is turning to debt to finance all that.

The stock lost 2.7% on Monday, while shares of CoreWeave, its fellow player in the AI data center trade, dropped around 8%. Broadcom also retreated amid concerns over margin compression, sliding about 5.6%.

That said, the broader market was not affected too adversely as investors continued rotating into sectors such as consumer discretionary and industrials. The S&P 500 slipped 0.16%, the Dow Jones Industrial Average ticked down just 0.09% and the Nasdaq Composite, comprising more tech firms, fell 0.59%.

The broader market performance suggests that the fears are mostly contained within the AI infrastructure space.

“It definitely requires the ROI [return on investment] to be there to keep funding this AI investment,” Matt Witheiler, head of late-stage growth at Wellington Management, told CNBC’s “Money Movers” on Monday. “From what we’ve seen so far that ROI is there.”

Witheiler said the bullish side of the story is that, “every single AI company on the planet is saying if you give me more compute I can make more revenue.”

The ready availability of clients, according to that argument, means those companies that provide the compute — Oracle and CoreWeave — just need to make sure their finances are in order.

— CNBC’s Ari Levy contributed to this report.

What you need to know today

U.S. stocks edged down Monday. All major indexes slid as AI-related stocks continued to weigh down markets. Europe’s regional Stoxx 600 climbed 0.74%. The continent’s defense stocks fell as Ukraine offered to give up on joining NATO.

Tesla testing driverless Robotaxis in Austin, Texas. “Testing is underway with no occupants in the car,” CEO Elon Musk wrote in a post on his social network X over the weekend. Shares of Tesla rose 3.6% on Monday to close at their highest this year.

U.S. collects $200 billion in tariffs. The country’s Customs and Border Protection agency said Monday that the tally comprises only new tariffs, including “reciprocal” and “fentanyl” levies, imposed by U.S. President Trump in his second term.

Ukraine-Russia peace deal is nearly complete. That’s according to U.S. officials, who held talks with Ukraine President Volodymyr Zelenskyy beginning Sunday. Ukraine has offered to give up its NATO bid, while Russia is open to Ukraine joining the EU, officials said.

[PRO] Wall Street’s favorite stocks for 2026. These S&P 500 stocks have a consensus buy rating and an upside to average price target of at least 35%, based on CNBC Pro’s screening of data from LSEG.

Merriam-Webster declares ‘slop’ its word of the year in nod to growth of AI

The logos of Google Gemini, ChatGPT, Microsoft Copilot, Claude by Anthropic, Perplexity, and Bing apps are displayed on the screen of a smartphone in Reno, United States, on November 21, 2024.

Jaque Silva | Nurphoto | Getty Images

Merriam-Webster declared “slop” its 2025 word of the year on Monday, a sign of growing wariness around artificial intelligence.

Slop is now defined as “digital content of low quality that is produced usually in quantity by means of artificial intelligence,” according to Merriam-Webster’s dictionary. The word has previously been used primarily to connote a “product of little value” or “food waste fed to animals.”

Mainstream social networks saw a flood of AI-generated content, including what 404 Media described as a “video of a bizarre creature turning into a spider, turning into a nightmare giraffe inside of a busy mall,” which the publication reported had been viewed more than 362 million times on Meta apps.

In September, Meta launched Vibes, a separate feed for AI-generated videos. Days later, OpenAI released its Sora app. Those services, along with TikTok, YouTube and others, are increasingly rife with AI slop, which can often generate revenue with enough engagement.

Spotify said in September that it had to remove more than 75 million AI-generated, “spammy” tracks from its service and roll out formal policies to protect artists from AI impersonation and deception. The streaming company faced widespread criticism after The Velvet Sundown racked up 1 million monthly listeners without initially making it clear that its songs were produced with generative AI. The artist later clarified on its bio page that it’s a “synthetic music project.”

According to CNBC’s latest All-America Economic Survey, published Dec. 15, fewer respondents have been using AI platforms, such as ChatGPT, Microsoft Copilot and Google Gemini, in the last two to three months compared to the summer months.

Just 48% of those surveyed said they had used AI platforms recently, down from 53% in August.

WATCH: OpenAI’s Sora 2 sparks AI ‘slop’ backlash

PayPal applies to form bank that can offer small business loans and savings accounts

PayPal CEO Alex Chriss speaks at the Global Fintech Fest in Mumbai, India, on Oct. 7, 2025.

Indranil Aditya | Nurphoto | Getty Images

PayPal said Monday that it has applied for approval to form PayPal Bank, which would be able to offer loans to small businesses.

“Establishing PayPal Bank will strengthen our business and improve our efficiency, enabling us to better support small business growth and economic opportunities across the U.S.,” PayPal CEO Alex Chriss said in a statement.

PayPal said the U.S. Federal Deposit Insurance Corporation, along with Utah’s Department of Financial Institutions, will review the application proposing the establishment of PayPal Bank.

The company, which owns popular payment app Venmo, hopes to also offer interest-bearing savings accounts to its customers, the statement said. PayPal already makes credit lines available to consumers and has been trying to expand its roster of banking-like services as it competes with a growing number of fintech companies that are aiming to take business from traditional brick-and-mortar banks.

Shares of PayPal rose 1.5% in extended trading following the announcement.

In October, PayPal said quarterly revenue increased 7% year over year to $8.42 billion, more than analysts had expected. But in 2025 the stock has slumped about 29%, while the S&P 500 index has gained almost 16% in the same period.

WATCH: E-commerce consumption could bump 20% because of agentic AI, says Mizuho’s Dan Dolev
