

Artificial intelligence algorithms are increasingly being used in financial services — but they come with some serious risks around discrimination.


AMSTERDAM — Artificial intelligence has a racial bias problem.

From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.

And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.

Deloitte notes that AI systems are ultimately only as good as the data they’re given: Incomplete or unrepresentative datasets could limit AI’s objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.

A.I. can be dumb

Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.

“The thing about how good an AI product is, there’s kind of two variables,” Manji told CNBC in an interview. “One is the data it has access to, and second is how good the large language model is. That’s why the data side, you see companies like Reddit and others, they’ve come out publicly and said we’re not going to allow companies to scrape our data, you’re going to have to pay us for that.”

As for financial services, Manji said a lot of the backend data systems are fragmented in different languages and formats.

“None of it is consolidated or harmonized,” he added. “That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data.”


Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.

However, he added that banks — being the heavily regulated, slow-moving institutions that they are — are unlikely to move with the same speed as their more nimble tech counterparts in adopting new AI tools.

“You’ve got Microsoft and Google, who like over the last decade or two have been seen as driving innovation. They can’t keep up with that speed. And then you think about financial services. Banks are not known for being fast,” Manji said.

Banking’s A.I. problem

Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.

“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying those [loans] to primarily Black neighborhoods.”

In the 1930s, Chicago was known for the discriminatory practice of “redlining,” in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.

“There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans,” she added.

“Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone’s race, it is implicitly picked up.”
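Chowdhury's point, that race is "implicitly picked up" even when it is excluded as a feature, can be shown with a minimal sketch. The data here is synthetic and the two-zone setup is a deliberate simplification: because historical segregation correlates neighborhood with race, a scoring rule that only looks at neighborhood still reproduces the disparity.

```python
import random

random.seed(0)

# Synthetic population: race ("group") is never shown to the model,
# but neighborhood ("zone") correlates with it, echoing redlining.
applicants = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    if group == "B":
        zone = 2 if random.random() < 0.8 else 1  # group B mostly in zone 2
    else:
        zone = 1 if random.random() < 0.8 else 2  # group A mostly in zone 1
    applicants.append({"group": group, "zone": zone})

def approve(applicant):
    """A 'race-blind' rule learned from historical data that denied zone 2."""
    return applicant["zone"] == 1  # decision uses zone only, never race

def approval_rate(group):
    members = [a for a in applicants if a["group"] == group]
    return sum(approve(a) for a in members) / len(members)

print(f"group A approval: {approval_rate('A'):.0%}")  # ~80%
print(f"group B approval: {approval_rate('B'):.0%}")  # ~20%
```

Dropping the protected attribute changes nothing here, because the zone feature carries nearly the same information, which is why audits measure outcomes by group rather than checking which columns a model ingests.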

Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization that aims to empower Black women in the sector, told CNBC that when AI systems are used for loan approval decisions, there is a risk of replicating the biases present in the historical data used to train the algorithms.

“This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities,” Bush added.

“It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination,” she said.

Frost Li, a developer who has been working in AI and machine learning for over a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.

“What’s interesting in AI is how we select the ‘core features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features unrelated to the results we want to predict.”

When AI is applied to banking, Li says, it becomes harder to identify the "culprit" behind a biased outcome, because everything is entangled in the calculation.

“A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won’t be able to get any credit cards even if he works at Google; yet a person can easily get one from a community college credit union because bankers know the local schools better,” Li added.

Generative AI is not usually used for creating credit scores or risk-scoring consumers.

“That is not what the tool was built for,” said Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making.

Instead, Guske said the most powerful applications are in pre-processing unstructured data such as text files — like classifying transactions.

“Those signals can then be fed into a more traditional underwriting model,” said Guske. “Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes.”
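The division of labor Guske describes, a generative model pre-processing unstructured text into signals that a conventional underwriting model then consumes, can be sketched roughly as follows. The keyword classifier below is a stand-in for an LLM call, and every function name, category label, and score weight is hypothetical, chosen only to show the shape of the pipeline.

```python
def classify_transaction(description: str) -> str:
    """Stand-in for a generative model labeling free-text statement lines."""
    desc = description.lower()
    if "payroll" in desc or "salary" in desc:
        return "income"
    if "casino" in desc or "betting" in desc:
        return "gambling"
    if "rent" in desc or "mortgage" in desc:
        return "housing"
    return "other"

def underwriting_score(transactions: list[str]) -> int:
    """Traditional rule-based scorer consuming the classified signals."""
    labels = [classify_transaction(t) for t in transactions]
    score = 600                          # illustrative base score
    score += 50 * labels.count("income")
    score -= 40 * labels.count("gambling")
    return score

statement = ["ACME PAYROLL JUN", "CITY RENT JUN", "LUCKY CASINO"]
print(underwriting_score(statement))  # 610
```

The point of the split is that the generative step only improves the quality of the inputs; the actual lending decision still comes from a conventional, auditable scoring step.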


But it’s also difficult to prove. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card. But these claims were dismissed by the New York Department of Financial Services after the regulator found no evidence of discrimination based on sex. 

The problem, according to Kim Smouter, director of anti-racism group European Network Against Racism, is that it can be challenging to substantiate whether AI-based discrimination has actually taken place.

“One of the difficulties in the mass deployment of AI,” he said, “is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination.”

“Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. Accordingly, it’s also difficult to detect specific instances where things have gone wrong,” he added.

Smouter cited the example of the Dutch childcare benefits scandal, in which thousands of benefit claimants were wrongly accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims were “treated with an institutional bias.”

This, Smouter said, “demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered and in the meantime significant, often irreversible damage is done.”

Policing A.I.’s biases

Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.

Though AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology’s moral and ethical soundness. Among the top worries industry insiders expressed are misinformation; racial and gender bias embedded in AI algorithms; and “hallucinations” generated by ChatGPT-like tools.

“I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy — not any of the text, not any of the video, not any of the audio, but then how do we get our information? And how do we ensure that information has a high amount of integrity?” Chowdhury said.

Now is the time for meaningful regulation of AI to come into force — but knowing the amount of time it will take regulatory proposals like the European Union’s AI Act to take effect, some are concerned this won’t happen fast enough.

“We call for more transparency and accountability of algorithms and how they operate and a layman’s declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, independent complaints process, periodic audits and reporting, involvement of racialized communities when tech is being designed and considered for deployment,” Smouter said.

The AI Act, the first regulatory framework of its kind, has incorporated a fundamental rights approach and concepts like redress, according to Smouter, adding that the regulation will be enforced in approximately two years.

“It would be great if this period can be shortened to make sure transparency and accountability are in the core of innovation,” he said.


AI could affect 40% of jobs and widen inequality between nations, UN warns


Artificial intelligence is projected to reach $4.8 trillion in market value by 2033, but the technology’s benefits remain highly concentrated, according to the U.N. Trade and Development agency.

In a report released on Thursday, UNCTAD said the AI market cap would roughly equate to the size of Germany’s economy, with the technology offering productivity gains and driving digital transformation. 

However, the agency also raised concerns about automation and job displacement, warning that AI could affect 40% of jobs worldwide. On top of that, AI is not inherently inclusive, meaning the economic gains from the tech remain “highly concentrated,” the report added. 

“The benefits of AI-driven automation often favour capital over labour, which could widen inequality and reduce the competitive advantage of low-cost labour in developing economies,” it said. 

The potential for AI to cause unemployment and inequality is a long-standing concern, with the IMF making similar warnings over a year ago. In January, the World Economic Forum released findings that as many as 41% of employers planned to downsize their workforce in areas where AI could replicate their roles.

However, the UNCTAD report also highlights inequalities between nations, with U.N. data showing that 40% of global corporate research and development spending in AI is concentrated among just 100 firms, mainly those in the U.S. and China. 

Furthermore, it notes that leading tech giants, such as Apple, Nvidia and Microsoft — companies that stand to benefit from the AI boom — have a market value that rivals the gross domestic product of the entire African continent. 

This AI dominance at national and corporate levels threatens to widen those technological divides, leaving many nations at risk of lagging behind, UNCTAD said. It noted that 118 countries — mostly in the Global South — are absent from major AI governance discussions. 

UN recommendations 

But AI is not just about job replacement, the report said, noting that it can also “create new industries and empower workers” — provided there is adequate investment in reskilling and upskilling.

But in order for developing nations not to fall behind, they must “have a seat at the table” when it comes to AI regulation and ethical frameworks, it said.

In its report, UNCTAD makes a number of recommendations to the international community for driving inclusive growth. They include an AI public disclosure mechanism, shared AI infrastructure, the use of open-source AI models and initiatives to share AI knowledge and resources. 

Open-source generally refers to software in which the source code is made freely available on the web for possible modification and redistribution.

“AI can be a catalyst for progress, innovation, and shared prosperity – but only if countries actively shape its trajectory,” the report concludes. 

“Strategic investments, inclusive governance, and international cooperation are key to ensuring that AI benefits all, rather than reinforcing existing divides.”

Nvidia positioned to weather Trump tariffs, chip demand ‘off the charts,’ says Altimeter’s Gerstner


Altimeter Capital CEO Brad Gerstner said Thursday that he’s moving out of the “bomb shelter” with Nvidia and into a position of safety, expecting that the chipmaker is positioned to withstand President Donald Trump’s widespread tariffs.

“The growth and the demand for GPUs is off the charts,” he told CNBC’s “Fast Money Halftime Report,” referring to Nvidia’s graphics processing units that are powering the artificial intelligence boom. He said investors just need to listen to commentary from OpenAI, Google and Elon Musk.

President Trump announced an expansive and aggressive “reciprocal tariff” policy in a ceremony at the White House on Wednesday. The plan established a 10% baseline tariff, though many countries like China, Vietnam and Taiwan are subject to steeper rates. The announcement sent stocks tumbling on Thursday, with the tech-heavy Nasdaq down more than 5%, headed for its worst day since 2022.

The big reason Nvidia may be better positioned to withstand Trump’s tariff hikes is because semiconductors are on the list of exceptions, which Gerstner called a “wise exception” due to the importance of AI.

Nvidia’s business has exploded since the release of OpenAI’s ChatGPT in 2022, and annual revenue has more than doubled in each of the past two fiscal years. After a massive rally, Nvidia’s stock price has dropped by more than 20% this year and was down almost 7% on Thursday.

Gerstner is concerned about the potential of a recession due to the tariffs, but is relatively bullish on Nvidia, and said the “negative impact from tariffs will be much less than in other areas.”

He said it’s key for the U.S. to stay competitive in AI. And while the company’s chips are designed domestically, they’re manufactured in Taiwan “because they can’t be fabricated in the U.S.” Higher tariffs would punish companies like Meta and Microsoft, he said.

“We’re in a global race in AI,” Gerstner said. “We can’t hamper our ability to win that race.”


YouTube announces Shorts editing features amid potential TikTok ban

YouTube on Thursday announced new video creation tools for Shorts, its short-form video feed that competes against TikTok. 

The features come at a time when TikTok, which is owned by Chinese company ByteDance, is at risk of an effective ban in the U.S. if it’s not sold to an American owner by April 5.

Among the new tools are an updated video editor that allows creators to make precise adjustments and edits; a feature that automatically syncs video cuts to the beat of a song; and AI stickers.

The creator tools will become available later this spring, said YouTube, which is owned by Google.

Along with the new features, YouTube last week said it was changing the way view counts are tabulated on Shorts. Under the new guidelines, Shorts views will count the number of times the video is played or replayed with no minimum watch time requirement. 

Previously, views were only counted if a video was played for a certain number of seconds. This new tabulation method is similar to how views are counted on TikTok and Meta’s Reels, and will likely inflate view counts.
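The tabulation change can be reduced to a toy tally. The three-second minimum used below is an assumed placeholder, since YouTube never published its exact old cutoff, but the structural difference between the two rules is what matters.

```python
def views_old(play_durations, min_watch_seconds=3):
    """Old rule: a play counts only after a minimum watch time (threshold assumed)."""
    return sum(1 for s in play_durations if s >= min_watch_seconds)

def views_new(play_durations):
    """New rule: every play or replay counts, with no minimum watch time."""
    return len(play_durations)

plays = [1, 5, 10, 2]  # seconds watched on each play of one Short
print(views_old(plays), views_new(plays))  # 2 4
```

Because every skipped-past or instantly-replayed impression now counts, the new rule can only report a number greater than or equal to the old one, which is why the change is expected to inflate view counts.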

“We got this feedback from creators that this is what they wanted. It’s a way for them to better understand when their Shorts have been seen,” YouTube Chief Product Officer Johanna Voolich said in a YouTube video. “It’s useful for creators who post across multiple platforms.”

