Artificial intelligence algorithms are increasingly being used in financial services — but they come with some serious risks around discrimination.
AMSTERDAM — Artificial intelligence has a racial bias problem.
From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.
And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.
Deloitte notes that AI systems are ultimately only as good as the data they’re given: Incomplete or unrepresentative datasets could limit AI’s objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.
A.I. can be dumb
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.
“The thing about how good an AI product is, there’s kind of two variables,” Manji told CNBC in an interview. “One is the data it has access to, and second is how good the large language model is. That’s why the data side, you see companies like Reddit and others, they’ve come out publicly and said we’re not going to allow companies to scrape our data, you’re going to have to pay us for that.”
As for financial services, Manji said a lot of the backend data systems are fragmented in different languages and formats.
“None of it is consolidated or harmonized,” he added. “That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data.”
Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.
However, he added that banks — being the heavily regulated, slow-moving institutions that they are — are unlikely to move with the same speed as their more nimble tech counterparts in adopting new AI tools.
“You’ve got Microsoft and Google, who like over the last decade or two have been seen as driving innovation. They can’t keep up with that speed. And then you think about financial services. Banks are not known for being fast,” Manji said.
Banking’s A.I. problem
Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.
“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying those [loans] to primarily Black neighborhoods.”
In the 1930s, Chicago was known for the discriminatory practice of “redlining,” in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.
“There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans,” she added.
“Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone’s race, it is implicitly picked up.”
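The dynamic Chowdhury describes can be sketched in a few lines of code. In the purely synthetic example below, a lending model is trained without any race column, yet a correlated proxy feature (a made-up ZIP-code variable) carries the same signal, and predicted approval rates still diverge by group. All data, column names and thresholds here are illustrative assumptions, not drawn from any real lender.

```python
# Synthetic illustration of proxy discrimination: the model never sees the
# protected attribute, but a correlated feature carries the same signal.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)                 # protected attribute, withheld from the model
zip_code = group + rng.binomial(1, 0.1, n)    # proxy feature strongly correlated with group
income = rng.normal(55, 10, n)                # independent of group in this toy setup

# Historical decisions were biased: at identical incomes, group 1 had lower approval odds.
p_approve = 1 / (1 + np.exp(-(income - 55) / 5)) * np.where(group == 1, 0.5, 1.0)
approved = rng.random(n) < p_approve

X = np.column_stack([zip_code, income])       # note: no race/group column in the features
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
print("predicted approval rate, group 0:", round(pred[group == 0].mean(), 3))
print("predicted approval rate, group 1:", round(pred[group == 1].mean(), 3))
# The gap persists because zip_code acts as a stand-in for the withheld attribute.
```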
Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization aiming to empower Black women in the AI sector, told CNBC that when AI systems are used specifically for loan approval decisions, there is a risk of replicating existing biases present in the historical data used to train the algorithms.
“This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities,” Bush added.
“It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination,” she said.
Frost Li, a developer who has been working in AI and machine learning for over a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.
“What’s interesting in AI is how we select the ‘core features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features unrelated to the results we want to predict.”
When AI is applied to banking, Li said, it becomes harder to identify the “culprit” behind biases because everything is convoluted in the calculation.
“A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won’t be able to get any credit cards even if he works at Google; yet a person can easily get one from community college credit union because bankers know the local schools better,” Li added.
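As a rough illustration of how one might start hunting for that “culprit,” the hypothetical audit sketch below checks how strongly each candidate feature correlates with a withheld protected attribute and compares approval rates across groups. The DataFrame and column names are assumptions for illustration, not a description of any real lender’s system.

```python
# Hypothetical bias-audit sketch: correlate candidate features with a withheld
# protected attribute and compare outcomes by group.
import pandas as pd

def audit(df: pd.DataFrame, features: list[str], protected: str, outcome: str) -> None:
    # How strongly does each candidate feature track the protected attribute?
    for col in features:
        corr = df[col].corr(df[protected])
        print(f"{col:>15}: correlation with {protected} = {corr:+.2f}")

    # Crude disparate-impact check: approval rates by group.
    print("\napproval rate by group:")
    print(df.groupby(protected)[outcome].mean())

# Hypothetical usage; `df` would hold the lender's historical decisions.
# audit(df, features=["zip_code", "income", "tenure"], protected="group", outcome="approved")
```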
Generative AI is not usually used for creating credit scores or for risk-scoring consumers.
“That is not what the tool was built for,” said Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making.
Instead, Guske said the most powerful applications are in pre-processing unstructured data such as text files — like classifying transactions.
“Those signals can then be fed into a more traditional underwriting model,” said Guske. “Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes.”
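A minimal sketch of the kind of pipeline Guske describes might look like the following: a classifier over free-text transaction descriptions produces per-category signals, which are then fed into a conventional scoring model. A trivial keyword matcher stands in for the generative model here, and all categories, labels and data are illustrative assumptions.

```python
# Sketch: classify unstructured transaction text, then feed the aggregated
# signals into a traditional underwriting model.
from collections import Counter

import numpy as np
from sklearn.linear_model import LogisticRegression

CATEGORIES = ["rent", "payroll", "gambling", "other"]

def classify(description: str) -> str:
    # Stand-in for a generative model that labels free-text bank-statement lines.
    text = description.lower()
    if "rent" in text:
        return "rent"
    if "payroll" in text or "salary" in text:
        return "payroll"
    if "casino" in text or "bet" in text:
        return "gambling"
    return "other"

def transaction_features(descriptions: list[str]) -> list[float]:
    # Turn classified transactions into per-category frequency features.
    counts = Counter(classify(d) for d in descriptions)
    total = max(len(descriptions), 1)
    return [counts[c] / total for c in CATEGORIES]

# Toy statements for two applicants and their (made-up) historical outcomes.
applicants = [
    ["ACME payroll June", "Rent - Main St", "Grocery store"],
    ["Casino Royale", "Sports bet", "ATM withdrawal"],
]
X = np.array([transaction_features(a) for a in applicants])
y = np.array([1, 0])  # 1 = repaid, 0 = defaulted (illustrative only)

# The "more traditional underwriting model" could be as simple as a logistic regression.
model = LogisticRegression().fit(X, y)
print(model.predict_proba(X)[:, 1])  # predicted repayment probabilities
```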
Proving such discrimination, however, can be difficult. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card, but those claims were dismissed by the New York Department of Financial Services after the regulator found no evidence of discrimination based on sex.
The problem, according to Kim Smouter, director of anti-racism group European Network Against Racism, is that it can be challenging to substantiate whether AI-based discrimination has actually taken place.
“One of the difficulties in the mass deployment of AI,” he said, “is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination.”
“Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. Accordingly, it’s also difficult to detect specific instances where things have gone wrong,” he added.
Smouter cited the example of the Dutch child welfare scandal, in which thousands of benefit claimants were wrongfully accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims were “treated with an institutional bias.”
This, Smouter said, “demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered and in the meantime significant, often irreversible damage is done.”
Policing A.I.’s biases
Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.
Though AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology’s moral and ethical soundness. Among the top worries industry insiders expressed are misinformation; racial and gender bias embedded in AI algorithms; and “hallucinations” generated by ChatGPT-like tools.
“I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy — not any of the text, not any of the video, not any of the audio, but then how do we get our information? And how do we ensure that information has a high amount of integrity?” Chowdhury said.
Now is the time for meaningful AI regulation to come into force, but given how long it will take for regulatory proposals like the European Union’s AI Act to take effect, some are concerned this won’t happen fast enough.
“We call upon more transparency and accountability of algorithms and how they operate and a layman’s declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, independent complaints process, periodic audits and reporting, involvement of racialized communities when tech is being designed and considered for deployment,” Smouter said.
The AI Act, the first regulatory framework of its kind, incorporates a fundamental rights approach and concepts like redress, according to Smouter, who added that the regulation is expected to be enforced in approximately two years.
“It would be great if this period can be shortened to make sure transparency and accountability are in the core of innovation,” he said.
Google CEO Sundar Pichai testifies before the House Judiciary Committee at the Rayburn House Office Building on December 11, 2018 in Washington, DC.
Google’s antitrust woes are continuing to mount, just as the company tries to brace for a future dominated by artificial intelligence.
On Thursday, a federal judge ruled that Google held illegal monopolies in online advertising markets due to its position between ad buyers and sellers.
The ruling, which followed a September trial in Alexandria, Virginia, represents a second major antitrust blow for Google in under a year. In August, a judge determined the company has held a monopoly in its core market of internet search, the most significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago.
Google is in a particularly precarious spot as it tries to simultaneously defend its primary business in court while fending off an onslaught of new competition due to the emergence of generative AI, most notably OpenAI’s ChatGPT, which offers users alternative ways to search for information. Revenue growth has cooled in recent years, and Google also now faces the added potential of a slowdown in ad spending due to economic concerns from President Donald Trump’s sweeping new tariffs.
Parent company Alphabet reports first-quarter results next week. Alphabet’s stock price dipped more than 1% on Thursday and is now down 20% this year.
In Thursday’s ruling, U.S. District Judge Leonie Brinkema said Google’s anticompetitive practices “substantially harmed” publishers and users on the web. The trial featured 39 live witnesses, depositions from an additional 20 witnesses and hundreds of exhibits.
Judge Brinkema ruled that Google unlawfully controls two of the three parts of the advertising technology market: the publisher ad server market and ad exchange market. Brinkema dismissed the third part of the case, determining that tools used for general display advertising can’t clearly be defined as Google’s own market. In particular, the judge cited the purchases of DoubleClick and Admeld and said the government failed to show those “acquisitions were anticompetitive.”
“We won half of this case and we will appeal the other half,” Lee-Anne Mulholland, Google’s vice president of regulatory affairs, said in an emailed statement. “We disagree with the Court’s decision regarding our publisher tools. Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective.”
Attorney General Pam Bondi said in a press release from the DOJ that the ruling represents a “landmark victory in the ongoing fight to stop Google from monopolizing the digital public square.”
Potential ad disruption
If regulators force the company to divest parts of the ad-tech business, as the Justice Department has requested, it could open up opportunities for smaller players and other competitors to fill the void and snap up valuable market share. Amazon has been growing its ad business in recent years.
Meanwhile, Google is still defending itself against claims that its search has acted as a monopoly by creating strong barriers to entry and a feedback loop that sustained its dominance. Google said in August, immediately after the search case ruling, that it would appeal, meaning the matter can play out in court for years even after the remedies are determined.
The remedies trial in the search case, which will lay out the consequences, begins next week. The Justice Department is seeking a divestiture of Google’s Chrome browser and the elimination of exclusive agreements, like its deal with Apple for search on iPhones. The judge is expected to rule by August.
Google CEO Sundar Pichai (L) and Apple CEO Tim Cook (R) listen as U.S. President Joe Biden speaks during a roundtable with American and Indian business leaders in the East Room of the White House on June 23, 2023 in Washington, DC.
After the ad market ruling on Thursday, Gartner’s Andrew Frank said Google’s “conflicts of interest” are apparent in how the market runs.
“The structure has been decades in the making,” Frank said, adding that “untangling that would be a significant challenge, particularly since lawyers don’t tend to be system architects.”
However, the uncertainty that comes with a potentially years-long appeals process means many publishers and advertisers will be waiting to see how things shake out before making any big decisions, given how much they rely on Google’s technology.
“Google will have incentives to encourage more competition possibly by loosening certain restrictions on certain media it controls, YouTube being one of them,” Frank said. “Those kind of incentives may create opportunities for other publishers or ad tech players.”
A date for the remedies trial in the ad tech case hasn’t been set.
Damian Rollison, senior director of market insights for marketing platform Soci, said the revenue hit from the ad market case could be more dramatic than the impact from the search case.
“The company stands to lose a lot more in material terms if its ad business, long its main source of revenue, is broken up,” Rollison said in an email. “Whereas divisions like Chrome are more strategically important.”
Jason Citron, CEO of Discord in Washington, DC, on January 31, 2024.
The New Jersey attorney general sued Discord on Thursday, alleging that the company misled consumers about child safety features on the gaming-centric social messaging app.
The lawsuit, filed in the New Jersey Superior Court by Attorney General Matthew Platkin and the state’s division of consumer affairs, alleges that Discord violated the state’s consumer fraud laws.
Discord did so, the complaint said, by allegedly “misleading children and parents from New Jersey” about safety features, “obscuring” the risks children face on the platform and failing to enforce its minimum age requirement.
“Discord’s strategy of employing difficult to navigate and ambiguous safety settings to lull parents and children into a false sense of safety, when Discord knew well that children on the Application were being targeted and exploited, are unconscionable and/or abusive commercial acts or practices,” lawyers wrote in the legal filing.
They alleged that Discord’s acts and practices were “offensive to public policy.”
A Discord spokesperson said in a statement that the company disputes the allegations and that it is “proud of our continuous efforts and investments in features and tools that help make Discord safer.”
“Given our engagement with the Attorney General’s office, we are surprised by the announcement that New Jersey has filed an action against Discord today,” the spokesperson said.
One of the lawsuit’s allegations centers on Discord’s age-verification process, which the plaintiffs say is flawed because children under 13 can easily lie about their age to bypass the app’s minimum age requirement.
The lawsuit also alleges that Discord misled parents to believe that its so-called Safe Direct Messaging feature “was designed to automatically scan and delete all private messages containing explicit media content.” The lawyers claim that Discord misrepresented the efficacy of that safety tool.
“By default, direct messages between ‘friends’ were not scanned at all,” the complaint stated. “But even when Safe Direct Messaging filters were enabled, children were still exposed to child sexual abuse material, videos depicting violence or terror, and other harmful content.”
The New Jersey attorney general is seeking unspecified civil penalties against Discord, according to the complaint.
The filing marks the latest lawsuit brought by various state attorneys general around the country against social media companies.
In 2023, a bipartisan coalition of over 40 state attorneys general sued Meta over allegations that the company knowingly implemented addictive features across apps like Facebook and Instagram that harm the mental well-being of children and young adults.
The New Mexico attorney general sued Snap in September 2024 over allegations that Snapchat’s design features have made it easy for predators to target children through sextortion schemes.
The following month, a bipartisan group of over a dozen state attorneys general filed lawsuits against TikTok over allegations that the app misleads consumers into believing it is safe for children. In one particular lawsuit filed by the District of Columbia’s attorney general, lawyers allege that the ByteDance-owned app maintains a virtual currency that “substantially harms children” and a livestreaming feature that “exploits them financially.”
In January 2024, executives from Meta, TikTok, Snap, Discord and X were grilled by lawmakers during a Senate hearing over allegations that the companies failed to protect children on their respective social media platforms.
Signage at 23andMe headquarters in Sunnyvale, California, U.S., on Wednesday, Jan. 27, 2021.
The House Committee on Energy and Commerce is investigating 23andMe‘s decision to file for Chapter 11 bankruptcy protection and has expressed concern that its sensitive genetic data is “at risk of being compromised,” CNBC has learned.
Rep. Brett Guthrie, R-Ky., Rep. Gus Bilirakis, R-Fla., and Rep. Gary Palmer, R.-Ala., sent a letter to 23andMe’s interim CEO Joe Selsavage on Thursday requesting answers to a series of questions about its data and privacy practices by May 1.
The congressmen are the latest government officials to raise concerns about 23andMe’s commitment to data security, as the House Committee on Oversight and Government Reform and the Federal Trade Commission have sent the company similar letters in recent weeks.
23andMe exploded into the mainstream with its at-home DNA testing kits that gave customers insight into their family histories and genetic profiles. The company was once valued at a peak of $6 billion, but has since struggled to generate recurring revenue and establish a lucrative research and therapeutics business.
Since 23andMe filed for bankruptcy in Missouri federal court in March, its assets, including its vast genetic database, have been up for sale.
“With the lack of a federal comprehensive data privacy and security law, we write to express our great concern about the safety of Americans’ most sensitive personal information,” Guthrie, Bilirakis and Palmer wrote in the letter.
23andMe did not immediately respond to CNBC’s request for comment.
23andMe has been inundated with privacy concerns in recent years after hackers accessed the information of nearly 7 million customers in 2023.
DNA data is particularly sensitive because each person’s sequence is unique, meaning it can never be fully anonymized, according to the National Human Genome Research Institute. If genetic data falls into the hands of bad actors, it could be used to facilitate identity theft, insurance fraud and other crimes.
The House Committee on Energy and Commerce has jurisdiction over issues involving data privacy. Guthrie serves as the chairman of the committee, Palmer serves as the chairman of the Subcommittee on Oversight and Investigations and Bilirakis serves as the chairman of the Subcommittee on Commerce, Manufacturing and Trade.
The congressmen said that while Americans’ health information is protected under legislation like the Health Insurance Portability and Accountability Act, or HIPAA, direct-to-consumer companies like 23andMe are typically not covered under that law. They said they feel “great concern” about the safety of the company’s customer data, especially given the uncertainty around the sale process.
23andMe has repeatedly said it will not change how it manages or protects consumer data throughout the transaction. Similarly, in a March release, the company said all potential buyers must agree to comply with its privacy policy and applicable law.
“To constitute a qualified bid, potential buyers must, among other requirements, agree to comply with 23andMe’s consumer privacy policy and all applicable laws with respect to the treatment of customer data,” 23andMe said in the release.
23andMe customers can still delete their account and accompanying data through the company’s website. But Guthrie, Bilirakis and Palmer said there are reports that some users have had trouble doing so.
“Regardless of whether the company changes ownership, we want to ensure that customer access and deletion requests are being honored by 23andMe,” the congressmen wrote.