US Vice President Kamala Harris applauds as US President Joe Biden signs an executive order after delivering remarks on advancing the safe, secure, and trustworthy development and use of artificial intelligence, in the East Room of the White House in Washington, DC, on October 30, 2023.

Brendan Smialowski | AFP | Getty Images

After the Biden administration unveiled the first-ever executive order on artificial intelligence on Monday, a frenzy of lawmakers, industry groups, civil rights organizations, labor unions and others began digging into the 111-page document — making note of the priorities, specific deadlines and, in their eyes, the wide-ranging implications of the landmark action.

One core debate centers on a question of AI fairness. Many civil society leaders told CNBC the order does not go far enough to recognize and address real-world harms that stem from AI models — especially those affecting marginalized communities. But they say it’s a meaningful step along the path.

Many civil society groups and several tech industry groups praised the executive order’s roots — the White House’s blueprint for an AI bill of rights, released last October — but called on Congress to pass laws codifying those protections, and for developers to train and build models that prioritize AI fairness from the start rather than addressing harms after the fact.

“This executive order is a real step forward, but we must not allow it to be the only step,” Maya Wiley, president and CEO of The Leadership Conference on Civil and Human Rights, said in a statement. “We still need Congress to consider legislation that will regulate AI and ensure that innovation makes us more fair, just, and prosperous, rather than surveilled, silenced, and stereotyped.”


Cody Venzke, senior policy counsel at the American Civil Liberties Union, called the executive order an “important next step in centering equity, civil rights and civil liberties in our national AI policy,” but said the ACLU has “deep concerns” about its sections on national security and law enforcement.

In particular, the ACLU is concerned about the executive order’s push to “identify areas where AI can enhance law enforcement efficiency and accuracy,” as is stated in the text.

“One of the thrusts of the executive order is definitely that ‘AI can improve governmental administration, make our lives better and we don’t want to stand in the way of innovation,’” Venzke told CNBC.

“Some of that stands at risk to lose a fundamental question, which is, ‘Should we be deploying artificial intelligence or algorithmic systems for a particular governmental service at all?’ And if we do, it really needs to be preceded by robust audits for discrimination and to ensure that the algorithm is safe and effective, that it accomplishes what it’s meant to do.”

Margaret Mitchell, researcher and chief ethics scientist at AI startup Hugging Face, said she agreed with the values the executive order puts forth — privacy, safety, security, trust, equity and justice — but is concerned about the lack of focus on ways to train and develop models that minimize future harms before an AI system is deployed.

“There was a call for an overall focus on applying red-teaming, but not other more critical approaches to evaluation,” Mitchell said.

“‘Red-teaming’ is a post-hoc, hindsight approach to evaluation that works a bit like whack-a-mole: Now that the model is finished training, what can you think of that might be a problem? See if it’s a problem and fix it if so.”

Mitchell wished she had seen “foresight” approaches highlighted in the executive order, such as disaggregated evaluation approaches, which can analyze a model as data is scaled up.

Dr. Joy Buolamwini, founder and president of the Algorithmic Justice League, said Tuesday at an event in New York that she felt the executive order fell short in terms of the notion of redress, or penalties when AI systems harm marginalized or vulnerable communities.

Even experts who praised the executive order’s scope believe the work will be incomplete without action from Congress.

“The President is trying to extract extra mileage from the laws that he has,” said Divyansh Kaushik, associate director for emerging technologies and national security at the Federation of American Scientists.

For example, it seeks to work within existing immigration law to make it easier to retain high-skilled AI workers in the U.S. But immigration law has not been updated in decades, said Kaushik, who was involved in collaborative efforts with the administration in crafting elements of the order.

It falls on Congress, he added, to increase the number of employment-based green cards awarded each year and avoid losing talent to other countries.

Industry worries about stifling innovation

On the other side, industry leaders expressed wariness or even stronger feelings that the order had gone too far and would stifle innovation in a nascent sector.

Andrew Ng, longtime AI leader and cofounder of Google Brain and Coursera, told CNBC he is “quite concerned about the reporting requirements for models over a certain size,” adding that he is “very worried about overhyped dangers of AI leading to reporting and licensing requirements that crush open source and stifle innovation.”

In Ng’s view, thoughtful AI regulation can help advance the field, but over-regulation of aspects of the technology, such as AI model size, could hurt the open-source community, which would in turn likely benefit tech giants.


Nathan Benaich, founder and general partner of Air Street Capital, also had concerns about the reporting requirements for large AI models, telling CNBC that the compute threshold and stipulations mentioned in the order are a “flawed and potentially distorting measure.”

“It tells us little about safety and risks discouraging emerging players from building large models, while entrenching the power of incumbents,” Benaich told CNBC.

NetChoice’s Vice President and General Counsel Carl Szabo was even more blunt.

“Broad regulatory measures in Biden’s AI red tape wishlist will result in stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation,” said Szabo, whose group counts Amazon, Google, Meta and TikTok among its members. “Thus, this order puts any investment in AI at risk of being shut down at the whims of government bureaucrats.”

But Reggie Townsend, a member of the National Artificial Intelligence Advisory Committee (NAIAC), which advises President Biden, told CNBC that he feels the order doesn’t stifle innovation.

“If anything, I see it as an opportunity to create more innovation with a set of expectations in mind,” said Townsend.

David Polgar, founder of the nonprofit All Tech Is Human and a member of TikTok’s content advisory council, had similar takeaways: In part, he said, it’s about speeding up responsible AI work instead of slowing technology down.

“What a lot of the community is arguing for — and what I take away from this executive order — is that there’s a third option,” Polgar told CNBC. “It’s not about either slowing down innovation or letting it be unencumbered and potentially risky.”

WATCH: We have to try to engage China in AI safety conversation, UK tech minister says



Too early to bet against AI trade, State Street suggests 



State Street is reiterating its bullish stance on the artificial intelligence trade despite the Nasdaq’s worst week since April.

Chief Business Officer Anna Paglia said momentum stocks still have legs because investors are reluctant to step away from the growth story that’s driven gains all year.

“How would you not want to participate in the growth of AI technology? Everybody has been waiting for the cycle to change from growth to value. I don’t think it’s happening just yet because of the momentum,” Paglia told CNBC’s “ETF Edge” earlier this week. “I don’t think the rebalancing trade is going to happen until we see a signal from the market indicating a slowdown in these big trends.”

Paglia, who has spent 25 years in the exchange-traded funds industry, sees a higher likelihood that the space will cool off early next year.

“There will be much more focus about the diversification,” she said.

Her firm manages several ETFs with exposure to the technology sector, including the SPDR NYSE Technology ETF, which has gained 38% so far this year as of Friday’s close.

The fund, however, pulled back more than 4% over the past week as investors took profits in AI-linked names. The fund’s second top holding as of Friday’s close is Palantir Technologies, according to State Street’s website. Its stock tumbled more than 11% this week after the company’s earnings report on Monday.

Despite the decline, Paglia reaffirmed her bullish tech view in a statement to CNBC later in the week.

Meanwhile, Todd Rosenbluth suggests a rotation is already starting to grip the market. He points to a renewed appetite for health-care stocks.

“The Health Care Select Sector SPDR Fund… which has been out of favor for much of the year, started a return to favor in October,” the firm’s head of research said in the same interview. “Health care tends to be a more defensive sector, so we’re watching to see if people continue to gravitate towards that as a way of diversifying away from some of those sectors like technology.”

The Health Care Select Sector SPDR Fund, which has underperformed the technology sector this year, is up 5% since Oct. 1. It was also the second-best performing S&P 500 group this week.


People with ADHD, autism, dyslexia say AI agents are helping them succeed at work


Neurodiverse professionals may see unique benefits from artificial intelligence tools and agents, research suggests. With AI agent creation booming in 2025, people with conditions like ADHD, autism, dyslexia and more report a more level playing field in the workplace thanks to generative AI.

A recent study from the UK’s Department for Business and Trade found that neurodiverse workers were 25% more satisfied with AI assistants than neurotypical respondents, and were more likely to recommend the tools.

“Standing up and walking around during a meeting means that I’m not taking notes, but now AI can come in and synthesize the entire meeting into a transcript and pick out the top-level themes,” said Tara DeZao, senior director of product marketing at enterprise low-code platform provider Pega. DeZao, who was diagnosed with ADHD as an adult, has combination-type ADHD, which includes both inattentive symptoms (time management and executive function issues) and hyperactive symptoms (increased movement).

“I’ve white-knuckled my way through the business world,” DeZao said. “But these tools help so much.”

AI tools in the workplace run the gamut and can have hyper-specific use cases, but solutions like note takers, schedule assistants and in-house communication support are common. Generative AI happens to be particularly adept at skills like communication, time management and executive functioning, creating a built-in benefit for neurodiverse workers who’ve previously had to find ways to fit in among a work culture not built with them in mind.

Because of the skills that neurodiverse individuals can bring to the workplace — hyperfocus, creativity, empathy and niche expertise, just to name a few — some research suggests that organizations prioritizing inclusivity in this space generate nearly one-fifth higher revenue.

AI ethics and neurodiverse workers

“Investing in ethical guardrails, like those that protect and aid neurodivergent workers, is not just the right thing to do,” said Kristi Boyd, an AI specialist with the SAS data ethics practice. “It’s a smart way to make good on your organization’s AI investments.”

Boyd referred to an SAS study which found that companies investing the most in AI governance and guardrails were 1.6 times more likely to see at least double ROI on their AI investments. But Boyd highlighted three risks that companies should be aware of when implementing AI tools with neurodiverse and other individuals in mind: competing needs, unconscious bias and inappropriate disclosure.

“Different neurodiverse conditions may have conflicting needs,” Boyd said. For example, while people with dyslexia may benefit from document readers, people with bipolar disorder or other mental health neurodivergences may benefit from AI-supported scheduling to make the most of productive periods. “By acknowledging these tensions upfront, organizations can create layered accommodations or offer choice-based frameworks that balance competing needs while promoting equity and inclusion,” she explained.

Regarding AI’s unconscious biases, algorithms can be (and have been) unintentionally taught to associate neurodivergence with danger, disease or negativity, as outlined in Duke University research. And even today, neurodiversity can still be met with workplace discrimination, making it important for companies to provide safe ways to use these tools without forcing workers to disclose an individual diagnosis.

‘Like somebody turned on the light’

As businesses take accountability for the impact of AI tools in the workplace, Boyd says it’s important to remember to include diverse voices at all stages, implement regular audits and establish safe ways for employees to anonymously report issues.

The work to make AI deployment more equitable, including for neurodivergent people, is just getting started. The nonprofit Humane Intelligence, which focuses on deploying AI for social good, released in early October its Bias Bounty Challenge, where participants can identify biases with the goal of building “more inclusive communication platforms — especially for users with cognitive differences, sensory sensitivities or alternative communication styles.”

For example, emotion AI (when AI identifies human emotions) can help people with difficulty identifying emotions make sense of their meeting partners on video conferencing platforms like Zoom. Still, this technology requires careful attention to bias by ensuring AI agents recognize diverse communication patterns fairly and accurately, rather than embedding harmful assumptions.

DeZao said her ADHD diagnosis felt like “somebody turned on the light in a very, very dark room.”

“One of the most difficult pieces of our hyper-connected, fast world is that we’re all expected to multitask. With my form of ADHD, it’s almost impossible to multitask,” she said.

DeZao says one of AI’s most helpful features is its ability to receive instructions and do its work while the human employee can remain focused on the task at hand. “If I’m working on something and then a new request comes in over Slack or Teams, it just completely knocks me off my thought process,” she said. “Being able to take that request and then outsource it real quick and have it worked on while I continue to work [on my original task] has been a godsend.”
