
After the Biden administration unveiled the first-ever executive order on artificial intelligence on Monday, a frenzy of lawmakers, industry groups, civil rights organizations, labor unions and others began digging into the 111-page document — making note of the priorities, specific deadlines and, in their eyes, the wide-ranging implications of the landmark action.

One core debate centers on a question of AI fairness. Many civil society leaders told CNBC the order does not go far enough to recognize and address real-world harms that stem from AI models — especially those affecting marginalized communities. But they say it’s a meaningful step along the path.

Many civil society groups and several tech industry groups praised the executive order’s roots — the White House’s blueprint for an AI bill of rights, released last October — but called on Congress to pass laws codifying protections, and to better account for training and developing models that prioritize AI fairness up front rather than addressing harms after the fact.

“This executive order is a real step forward, but we must not allow it to be the only step,” Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights, said in a statement. “We still need Congress to consider legislation that will regulate AI and ensure that innovation makes us more fair, just, and prosperous, rather than surveilled, silenced, and stereotyped.”


Cody Venzke, senior policy counsel at the American Civil Liberties Union, believes the executive order is an “important next step in centering equity, civil rights and civil liberties in our national AI policy” — but that the ACLU has “deep concerns” about the executive order’s sections on national security and law enforcement.

In particular, the ACLU is concerned about the executive order’s push to “identify areas where AI can enhance law enforcement efficiency and accuracy,” as is stated in the text.

“One of the thrusts of the executive order is definitely that ‘AI can improve governmental administration, make our lives better and we don’t want to stand in the way of innovation,'” Venzke told CNBC.

“Some of that stands at risk to lose a fundamental question, which is, ‘Should we be deploying artificial intelligence or algorithmic systems for a particular governmental service at all?’ And if we do, it really needs to be preceded by robust audits for discrimination and to ensure that the algorithm is safe and effective, that it accomplishes what it’s meant to do.”

Margaret Mitchell, researcher and chief ethics scientist at AI startup Hugging Face, said she agreed with the values the executive order puts forth — privacy, safety, security, trust, equity and justice — but is concerned about the lack of focus on ways to train and develop models to minimize future harms, before an AI system is deployed.

“There was a call for an overall focus on applying red-teaming, but not other more critical approaches to evaluation,” Mitchell said.

“‘Red-teaming’ is a post-hoc, hindsight approach to evaluation that works a bit like whack-a-mole: Now that the model is finished training, what can you think of that might be a problem? See if it’s a problem and fix it if so.”

Mitchell wished she had seen “foresight” approaches highlighted in the executive order, such as disaggregated evaluation approaches, which can analyze a model as data is scaled up.

Dr. Joy Buolamwini, founder and president of the Algorithmic Justice League, said Tuesday at an event in New York that she felt the executive order fell short in terms of the notion of redress, or penalties when AI systems harm marginalized or vulnerable communities.

Even experts who praised the executive order’s scope believe the work will be incomplete without action from Congress.

“The President is trying to extract extra mileage from the laws that he has,” said Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists.

For example, the order seeks to work within existing immigration law to make it easier to retain high-skilled AI workers in the U.S. But immigration law has not been updated in decades, said Kaushik, who was involved in collaborative efforts with the administration in crafting elements of the order.

It falls on Congress, he added, to increase the number of employment-based green cards awarded each year and avoid losing talent to other countries.

Industry worries about stifling innovation

On the other side, industry leaders expressed wariness — and in some cases outright objection — arguing that the order goes too far and would stifle innovation in a nascent sector.

Andrew Ng, longtime AI leader and cofounder of Google Brain and Coursera, told CNBC he is “quite concerned about the reporting requirements for models over a certain size,” adding that he is “very worried about overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”

In Ng’s view, thoughtful AI regulation can help advance the field, but over-regulation of aspects of the technology, such as AI model size, could hurt the open-source community, which would in turn likely benefit tech giants.


Nathan Benaich, founder and general partner of Air Street Capital, also had concerns about the reporting requirements for large AI models, telling CNBC that the compute threshold and stipulations mentioned in the order are a “flawed and potentially distorting measure.”

“It tells us little about safety and risks discouraging emerging players from building large models, while entrenching the power of incumbents,” Benaich told CNBC.

NetChoice’s Vice President and General Counsel Carl Szabo was even more blunt.

“Broad regulatory measures in Biden’s AI red tape wishlist will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation,” said Szabo, whose group counts Amazon, Google, Meta and TikTok among its members. “Thus, this order puts any investment in AI at risk of being shut down at the whims of government bureaucrats.”

But Reggie Townsend, a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden, told CNBC that he feels the order doesn’t stifle innovation.

“If anything, I see it as an opportunity to create more innovation with a set of expectations in mind,” said Townsend.

David Polgar, founder of the nonprofit All Tech Is Human and a member of TikTok’s content advisory council, had similar takeaways: In part, he said, it’s about speeding up responsible AI work instead of slowing technology down.

“What a lot of the community is arguing for — and what I take away from this executive order — is that there’s a third option,” Polgar told CNBC. “It’s not about either slowing down innovation or letting it be unencumbered and potentially risky.”

WATCH: We have to try to engage China in AI safety conversation, UK tech minister says



AI could affect 40% of jobs and widen inequality between nations, UN warns



Artificial intelligence is projected to reach $4.8 trillion in market value by 2033, but the technology’s benefits remain highly concentrated, according to the U.N. Trade and Development agency.

In a report released on Thursday, UNCTAD said the AI market cap would roughly equate to the size of Germany’s economy, with the technology offering productivity gains and driving digital transformation. 

However, the agency also raised concerns about automation and job displacement, warning that AI could affect 40% of jobs worldwide. On top of that, AI is not inherently inclusive, meaning the economic gains from the tech remain “highly concentrated,” the report added. 

“The benefits of AI-driven automation often favour capital over labour, which could widen inequality and reduce the competitive advantage of low-cost labour in developing economies,” it said. 

The potential for AI to cause unemployment and inequality is a long-standing concern, with the IMF making similar warnings over a year ago. In January, the World Economic Forum released findings that as many as 41% of employers were planning to downsize their staff in roles that AI could replicate.

However, the UNCTAD report also highlights inequalities between nations, with U.N. data showing that 40% of global corporate research and development spending in AI is concentrated among just 100 firms, mainly those in the U.S. and China. 

Furthermore, it notes that leading tech giants, such as Apple, Nvidia and Microsoft — companies that stand to benefit from the AI boom — have a market value that rivals the gross domestic product of the entire African continent. 

This AI dominance at national and corporate levels threatens to widen those technological divides, leaving many nations at risk of lagging behind, UNCTAD said. It noted that 118 countries — mostly in the Global South — are absent from major AI governance discussions. 

UN recommendations 

But AI is not just about job replacement, the report said, noting that it can also “create new industries and empower workers” — provided there is adequate investment in reskilling and upskilling.

For developing nations not to fall behind, however, they must “have a seat at the table” when it comes to AI regulation and ethical frameworks, it said.

In its report, UNCTAD makes a number of recommendations to the international community for driving inclusive growth. They include an AI public disclosure mechanism, shared AI infrastructure, the use of open-source AI models and initiatives to share AI knowledge and resources. 

Open-source generally refers to software in which the source code is made freely available on the web for possible modification and redistribution.

“AI can be a catalyst for progress, innovation, and shared prosperity – but only if countries actively shape its trajectory,” the report concludes. 

“Strategic investments, inclusive governance, and international cooperation are key to ensuring that AI benefits all, rather than reinforcing existing divides.”


Nvidia positioned to weather Trump tariffs, chip demand ‘off the charts,’ says Altimeter’s Gerstner


Altimeter Capital CEO Brad Gerstner said Thursday that he’s moving out of the “bomb shelter” with Nvidia and into a position of safety, expecting that the chipmaker is positioned to withstand President Donald Trump’s widespread tariffs.

“The growth and the demand for GPUs is off the charts,” he told CNBC’s “Fast Money Halftime Report,” referring to Nvidia’s graphics processing units that are powering the artificial intelligence boom. He said investors just need to listen to commentary from OpenAI, Google and Elon Musk.

President Trump announced an expansive and aggressive “reciprocal tariff” policy in a ceremony at the White House on Wednesday. The plan established a 10% baseline tariff, though many countries like China, Vietnam and Taiwan are subject to steeper rates. The announcement sent stocks tumbling on Thursday, with the tech-heavy Nasdaq down more than 5%, headed for its worst day since 2022.

The big reason Nvidia may be better positioned to withstand Trump’s tariff hikes is that semiconductors are on the list of exceptions, which Gerstner called a “wise exception” due to the importance of AI.

Nvidia’s business has exploded since the release of OpenAI’s ChatGPT in 2022, and annual revenue has more than doubled in each of the past two fiscal years. After a massive rally, Nvidia’s stock price has dropped by more than 20% this year and was down almost 7% on Thursday.

Gerstner is concerned about the potential of a recession due to the tariffs, but is relatively bullish on Nvidia, and said the “negative impact from tariffs will be much less than in other areas.”

He said it’s key for the U.S. to stay competitive in AI. And while the company’s chips are designed domestically, they’re manufactured in Taiwan “because they can’t be fabricated in the U.S.” Higher tariffs would punish companies like Meta and Microsoft, he said.

“We’re in a global race in AI,” Gerstner said. “We can’t hamper our ability to win that race.”

WATCH: Brad Gerstner is buying Nvidia


YouTube announces Shorts editing features amid potential TikTok ban


YouTube on Thursday announced new video creation tools for Shorts, its short-form video feed that competes against TikTok. 

The features come at a time when TikTok, which is owned by Chinese company ByteDance, is at risk of an effective ban in the U.S. if it’s not sold to an American owner by April 5.

Among the new tools are an updated video editor that allows creators to make precise adjustments and edits, a feature that automatically syncs video cuts to the beat of a song, and AI stickers.

The creator tools will become available later this spring, said YouTube, which is owned by Google.

Along with the new features, YouTube last week said it was changing the way view counts are tabulated on Shorts. Under the new guidelines, Shorts views will count the number of times the video is played or replayed with no minimum watch time requirement. 

Previously, views were only counted if a video was played for a certain number of seconds. This new tabulation method is similar to how views are counted on TikTok and Meta’s Reels, and will likely inflate view counts.

“We got this feedback from creators that this is what they wanted. It’s a way for them to better understand when their Shorts have been seen,” YouTube Chief Product Officer Johanna Voolich said in a YouTube video. “It’s useful for creators who post across multiple platforms.”

WATCH: TikTok is a digital Trojan horse, says Hayman Capital’s Kyle Bass

