Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.
“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying those [loans] to primarily Black neighborhoods.”
In the 1930s, Chicago was known for the discriminatory practice of “redlining,” in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.
“There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans,” she added.
“Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone’s race, it is implicitly picked up.”
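The mechanism Chowdhury describes can be sketched in a few lines. In this illustrative example (entirely synthetic data, not any real lender's model), a scoring rule trained on historical approval rates never sees an applicant's race, yet a proxy feature, the district, carries the old redlining pattern straight through:

```python
# Illustrative sketch with synthetic data: a model that never sees race can
# still reproduce historical bias when a proxy feature (here, the district)
# encodes it. District "A" stands in for a formerly redlined neighborhood.
history = (
    [("A", 0)] * 80 + [("A", 1)] * 20   # district A: 20% historical approvals
    + [("B", 1)] * 80 + [("B", 0)] * 20  # district B: 80% historical approvals
)

# "Training": the historical approval rate per district becomes the score.
rates = {}
for district in ("A", "B"):
    outcomes = [y for d, y in history if d == district]
    rates[district] = sum(outcomes) / len(outcomes)

def approve(district, threshold=0.5):
    # The model's only input is the district -- race never appears as a
    # feature, yet the historical disparity is reproduced exactly.
    return rates[district] >= threshold

print(rates)         # {'A': 0.2, 'B': 0.8}
print(approve("A"))  # False: the formerly redlined district is denied again
print(approve("B"))  # True
```

Real underwriting models are far more complex, but the failure mode is the same: any feature correlated with a protected attribute can smuggle it into the prediction.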
Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization aiming to empower Black women in the AI sector, tells CNBC that when AI systems are used specifically for loan approval decisions, there is a risk of replicating the biases present in the historical data used to train the algorithms.
“This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities,” Bush added.
“It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination,” she said.
Frost Li, a developer who has been working in AI and machine learning for over a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.
“What’s interesting in AI is how we select the ‘core features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features unrelated to the results we want to predict.”
When AI is applied to banking, Li says, it becomes harder to identify the “culprit” behind a biased outcome, because everything is entangled in the calculation.
“A good example is how many fintech startups cater especially to foreigners, because a Tokyo University graduate won’t be able to get any credit cards even if he works at Google; yet a person can easily get one from a community college credit union, because bankers know the local schools better,” Li added.
Generative AI is not usually used for creating credit scores or in the risk-scoring of consumers.
“That is not what the tool was built for,” said Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making.
Instead, Guske said the most powerful applications lie in pre-processing unstructured data such as text files, for example by classifying transactions.
“Those signals can then be fed into a more traditional underwriting model,” said Guske. “Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes.”
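The pipeline Guske describes can be sketched as follows. In this hypothetical example, a generative-AI classifier turns free-text transaction descriptions into categorical signals that feed a traditional scorecard; the keyword matcher below is only a stand-in for a real model call, and the weights are illustrative, not actual underwriting rules:

```python
# Hypothetical stand-in for a generative-AI transaction classifier: it maps
# raw transaction text to a category, which a traditional model then scores.
CATEGORIES = {
    "salary": ["payroll", "salary"],
    "rent": ["rent", "landlord"],
    "gambling": ["casino", "betting"],
}

def classify(description):
    """Stand-in for the generative-AI step: text -> category signal."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

# Traditional scorecard consuming the classified signals (weights invented
# for illustration only).
WEIGHTS = {"salary": 30, "rent": 5, "gambling": -25, "other": 0}

def underwrite(transactions, base_score=600):
    signals = [classify(t) for t in transactions]
    return base_score + sum(WEIGHTS[s] for s in signals)

score = underwrite(["ACME PAYROLL JUNE", "RENT TO LANDLORD", "LUCKY CASINO"])
print(score)  # 600 + 30 + 5 - 25 = 610
```

The generative model only improves the quality of the inputs; the approval decision itself still comes from the conventional scoring step, which matches Guske’s point that it augments rather than replaces common scoring processes.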