Artificial intelligence algorithms are increasingly being used in financial services — but they come with some serious risks around discrimination.
AMSTERDAM — Artificial intelligence has a racial bias problem.
From biometric identification systems that disproportionately misidentify the faces of Black people and other minorities, to voice recognition software that fails to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.
And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.
Deloitte notes that AI systems are ultimately only as good as the data they’re given: Incomplete or unrepresentative datasets could limit AI’s objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.
A.I. can be dumb
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.
“The thing about how good an AI product is, there’s kind of two variables,” Manji told CNBC in an interview. “One is the data it has access to, and second is how good the large language model is. That’s why the data side, you see companies like Reddit and others, they’ve come out publicly and said we’re not going to allow companies to scrape our data, you’re going to have to pay us for that.”
As for financial services, Manji said a lot of the backend data systems are fragmented in different languages and formats.
“None of it is consolidated or harmonized,” he added. “That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data.”
Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.
However, he added that banks — being the heavily regulated, slow-moving institutions that they are — are unlikely to move with the same speed as their more nimble tech counterparts in adopting new AI tools.
“You’ve got Microsoft and Google, who like over the last decade or two have been seen as driving innovation. They can’t keep up with that speed. And then you think about financial services. Banks are not known for being fast,” Manji said.
Banking’s A.I. problem
Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.
“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying those [loans] to primarily Black neighborhoods.”
In the 1930s, Chicago was known for the discriminatory practice of “redlining,” in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.
“There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans,” she added.
“Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone’s race, it is implicitly picked up.”
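To make that concrete, here is a minimal synthetic sketch (an illustration, not anything described by the panelists; all names, numbers and data below are made up): even when a protected attribute is withheld from a lending model, a correlated proxy such as a neighborhood code can carry the same signal, so the model reproduces the historical disparity.

```python
# Illustrative sketch with synthetic data: the model never sees "group",
# but a correlated proxy (neighborhood) lets it reproduce biased outcomes.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute -- deliberately excluded from the model's inputs.
group = rng.integers(0, 2, size=n)

# Neighborhood matches group 90% of the time (a stand-in for redlined districts).
neighborhood = np.where(rng.random(n) < 0.9, group, 1 - group)

# Historical approvals were biased against group 1, independent of income.
income = rng.normal(50, 10, size=n)
approved = (income + rng.normal(0, 5, size=n) > 45) & ~((group == 1) & (rng.random(n) < 0.5))

# Train only on income and neighborhood; group is never a feature.
X = np.column_stack([income, neighborhood])
model = LogisticRegression().fit(X, approved)

# The disparity survives: predicted approval rates differ sharply by group.
preds = model.predict(X)
for g in (0, 1):
    print(f"group {g}: predicted approval rate {preds[group == g].mean():.2f}")
```

The point mirrors Chowdhury's warning: dropping the race column does not remove the signal when another feature encodes it.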
Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization aiming to empower Black women in the AI sector, told CNBC she has found that when AI systems are used for loan approval decisions, there is a risk of replicating the existing biases present in the historical data used to train the algorithms.
“This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities,” Bush added.
“It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination,” she said.
Frost Li, a developer who has been working in AI and machine learning for over a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.
“What’s interesting in AI is how we select the ‘core features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features unrelated to the results we want to predict.”
When AI is applied to banking, Li says, it becomes harder to identify the “culprit” behind biases, because everything is convoluted in the calculation.
“A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won’t be able to get any credit cards even if he works at Google; yet a person can easily get one from community college credit union because bankers know the local schools better,” Li added.
Generative AI is not usually used for creating credit scores or in the risk-scoring of consumers.
“That is not what the tool was built for,” said Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making.
Instead, Guske said the most powerful applications are in pre-processing unstructured data such as text files — like classifying transactions.
“Those signals can then be fed into a more traditional underwriting model,” said Guske. “Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes.”
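As a rough sketch of the pattern Guske describes (hypothetical code, not Taktile's actual system): a generative model classifies free-text transaction descriptions into categories, and those category signals become features for a conventional underwriting model. The `classify_transaction` stub below stands in for an LLM call; all names and data are invented for illustration.

```python
# Hedged sketch: generative AI pre-processes unstructured transaction text
# into categories; a traditional model still makes the underwriting decision.
import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["rent", "payroll", "gambling", "groceries", "other"]

def classify_transaction(description: str) -> str:
    """Stand-in for a generative-AI classifier; a real system would call an LLM here."""
    keywords = {"rent": "rent", "payroll": "salary", "gambling": "casino", "groceries": "market"}
    for category, kw in keywords.items():
        if kw in description.lower():
            return category
    return "other"

def to_features(transactions: list[str]) -> np.ndarray:
    """Convert one customer's transaction texts into category-frequency features."""
    labels = [classify_transaction(t) for t in transactions]
    return np.array([labels.count(c) / len(labels) for c in CATEGORIES])

# Toy training set: per-customer transaction histories and repayment outcomes.
histories = [
    ["Monthly rent payment", "Salary deposit ACME", "Fresh market purchase"],
    ["Casino Lisboa", "Online casino top-up", "Salary deposit"],
    ["Salary deposit", "Fresh market purchase", "Monthly rent payment"],
    ["Casino night out", "Casino Lisboa", "Fresh market purchase"],
]
repaid = [1, 0, 1, 0]

X = np.vstack([to_features(h) for h in histories])
underwriting_model = LogisticRegression().fit(X, repaid)

# Score a new applicant from their (classified) transaction history.
new_customer = ["Salary deposit", "Monthly rent payment", "Supermarket groceries"]
print(underwriting_model.predict_proba([to_features(new_customer)])[0, 1])
```

In this pattern, the scoring decision still comes from a traditional, auditable model; the generative step only cleans up the unstructured inputs, which is the division of labor Guske outlines.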
But such discrimination can also be difficult to prove. Apple and Goldman Sachs, for example, were accused of giving women lower credit limits for the Apple Card. The claims were dismissed, however, by the New York Department of Financial Services after the regulator found no evidence of discrimination based on sex.
The problem, according to Kim Smouter, director of anti-racism group European Network Against Racism, is that it can be challenging to substantiate whether AI-based discrimination has actually taken place.
“One of the difficulties in the mass deployment of AI,” he said, “is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination.”
“Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. Accordingly, it’s also difficult to detect specific instances where things have gone wrong,” he added.
Smouter cited the example of the Dutch child welfare scandal, in which thousands of benefit claimants were wrongfully accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims were “treated with an institutional bias.”
This, Smouter said, “demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered, and in the meantime significant, often irreversible damage is done.”
Policing A.I.’s biases
Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.
Though AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology’s moral and ethical soundness. Among the top worries industry insiders expressed are misinformation; racial and gender bias embedded in AI algorithms; and “hallucinations” generated by ChatGPT-like tools.
“I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy — not any of the text, not any of the video, not any of the audio, but then how do we get our information? And how do we ensure that information has a high amount of integrity?” Chowdhury said.
Now is the time for meaningful regulation of AI to come into force — but knowing the amount of time it will take regulatory proposals like the European Union’s AI Act to take effect, some are concerned this won’t happen fast enough.
“We call upon more transparency and accountability of algorithms and how they operate and a layman’s declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, independent complaints process, periodic audits and reporting, involvement of racialized communities when tech is being designed and considered for deployment,” Smouter said.
The AI Act, the first regulatory framework of its kind, has incorporated a fundamental rights approach and concepts like redress, according to Smouter, adding that the regulation will be enforced in approximately two years.
“It would be great if this period can be shortened to make sure transparency and accountability are in the core of innovation,” he said.
Google on Friday made the latest splash in the AI talent wars, announcing an agreement to bring in Varun Mohan, co-founder and CEO of artificial intelligence coding startup Windsurf.
As part of the deal, Google will also hire other senior Windsurf research and development employees. Google is not investing in Windsurf, but the search giant will take a nonexclusive license to certain Windsurf technology, according to a person familiar with the matter. Windsurf remains free to license its technology to others.
“We’re excited to welcome some top AI coding talent from Windsurf’s team to Google DeepMind to advance our work in agentic coding,” a Google spokesperson wrote in an email. “We’re excited to continue bringing the benefits of Gemini to software developers everywhere.”
The deal between Google and Windsurf comes after the AI coding startup had been in talks with OpenAI for a $3 billion acquisition deal, CNBC reported in April. OpenAI did not immediately respond to a request for comment.
The move ratchets up the AI talent war, particularly among prominent companies. Meta has made lucrative job offers to several OpenAI employees in recent weeks. Most notably, the Facebook parent added Scale AI founder Alexandr Wang to lead its AI strategy as part of a $14.3 billion investment in his startup.
Douglas Chen, another Windsurf co-founder, will be among those joining Google in the deal, Jeff Wang, the startup’s new interim CEO and its head of business for the past two years, wrote in a post on X.
“Most of Windsurf’s world-class team will continue to build the Windsurf product with the goal of maximizing its impact in the enterprise,” Wang wrote.
Windsurf has become more popular this year as an option for so-called vibe coding, the process of using new-age AI tools to write code. Developers and non-developers have embraced the concept, leading to more revenue for Windsurf and competitors such as Cursor, which OpenAI also looked at buying. All the interest has led investors to assign higher valuations to the startups.
This isn’t the first time Google has hired select people out of a startup. It did the same with Character.AI last summer. Amazon and Microsoft have also absorbed AI talent in this fashion, with the Adept and Inflection deals, respectively.
Microsoft is pushing an agent mode in its Visual Studio Code editor for vibe coding. In April, Microsoft CEO Satya Nadella said AI is composing as much as 30% of his company's code.
The Verge reported the Google-Windsurf deal earlier on Friday.
Jensen Huang, CEO of Nvidia, holds a motherboard as he speaks during the Viva Technology conference dedicated to innovation and startups at Porte de Versailles exhibition center in Paris, France, on June 11, 2025.
Nvidia CEO Jensen Huang has sold another batch of his company's stock. The sale, which totals 225,000 shares, comes as part of a plan Huang adopted in March to unload up to 6 million Nvidia shares through the end of the year. He sold his first batch of stock under the agreement in June, worth about $15 million.
Last year, the tech executive sold about $700 million worth of shares as part of a prearranged plan. Nvidia stock climbed about 1% Friday.
Huang’s net worth has skyrocketed as investors bet on Nvidia’s AI dominance and its graphics processing units, which power large language models.
The 62-year-old’s wealth has grown by more than a quarter, or about $29 billion, since the start of 2025 alone, based on Bloomberg’s Billionaires Index. His net worth last stood at $143 billion in the index, putting him neck-and-neck with Berkshire Hathaway‘s Warren Buffett at $144 billion.
Shortly after the market opened Friday, Fortune‘s analysis of net worth had Huang ahead of Buffett, with the Nvidia CEO at $143.7 billion and the Oracle of Omaha at $142.1 billion.
The company has also achieved its own notable milestones this year as it prospers from the AI boom.
On Wednesday, the Santa Clara, California-based chipmaker became the first company to top a $4 trillion market capitalization, beating out both Microsoft and Apple. The chipmaker closed above that milestone Thursday as CNBC reported that the technology titan met with President Donald Trump.
Brooke Seawell, a venture partner at New Enterprise Associates, also sold about $24 million worth of Nvidia shares, according to an SEC filing. Seawell has served on the company’s board since 1997, according to Nvidia.
Huang still holds more than 858 million shares of Nvidia, directly and indirectly through various partnerships and trusts.
Elon Musk meets with Indian Prime Minister Narendra Modi at Blair House in Washington, D.C., on February 13, 2025.
Tesla will open a showroom in Mumbai, India, next week, marking the U.S. electric carmaker’s first official foray into the country.
The one-and-a-half-hour launch event for the Tesla “Experience Center” will take place on July 15 at the Maker Maxity Mall in the Bandra Kurla Complex in Mumbai, according to an event invitation seen by CNBC.
Along with the showroom display, which will feature the company’s cars, Tesla is also likely to officially launch direct sales to Indian customers.
The automaker has had its eye on India for a while and now appears to have stepped up efforts to launch locally.
In April, Tesla boss Elon Musk spoke with Indian Prime Minister Narendra Modi to discuss collaboration in areas including technology and innovation. That same month, the EV-maker’s finance chief said the company has been “very careful” in trying to figure out when to enter the market.
Tesla has no manufacturing operations in India, even though the country’s government is likely keen for the company to establish a factory there. Instead, the cars it sells in India will need to be imported from Tesla’s other manufacturing locations in places like Shanghai, China, and Berlin, Germany.
As Tesla begins sales in India, it will come up against challenges from long-time Chinese rival BYD, as well as local player Tata Motors.
One potential challenge for Tesla comes by way of India’s import duties on electric vehicles, which stand at around 70%. India has tried to entice investment in the country by offering companies a reduced duty of 15% if they commit to invest $500 million and set up manufacturing locally.
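To put those rates in rough perspective with a hypothetical price: a vehicle imported at a cost of $40,000 would attract around $28,000 in duty at the 70% rate, versus about $6,000 at the concessional 15% rate.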
HD Kumaraswamy, India’s minister for heavy industries, told reporters in June that Tesla is “not interested” in manufacturing in the country, according to a Reuters report.
Tesla is looking to fill a number of roles in Mumbai, according to job listings posted on LinkedIn. These include advisors working in showrooms, security staff, vehicle operators to collect data for its Autopilot feature, and service technicians.
There are also roles being advertised in the Indian capital of New Delhi, including for store managers. It’s unclear if Tesla is planning to launch a showroom in the city.