Artificial intelligence algorithms are increasingly being used in financial services — but they come with some serious risks around discrimination.
AMSTERDAM — Artificial intelligence has a racial bias problem.
From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.
And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.
Deloitte notes that AI systems are ultimately only as good as the data they’re given: Incomplete or unrepresentative datasets could limit AI’s objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.
A.I. can be dumb
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.
“The thing about how good an AI product is, there’s kind of two variables,” Manji told CNBC in an interview. “One is the data it has access to, and second is how good the large language model is. That’s why the data side, you see companies like Reddit and others, they’ve come out publicly and said we’re not going to allow companies to scrape our data, you’re going to have to pay us for that.”
As for financial services, Manji said a lot of the backend data systems are fragmented in different languages and formats.
“None of it is consolidated or harmonized,” he added. “That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data.”
Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.
However, he added that banks — being the heavily regulated, slow-moving institutions that they are — are unlikely to move with the same speed as their more nimble tech counterparts in adopting new AI tools.
“You’ve got Microsoft and Google, who like over the last decade or two have been seen as driving innovation. They can’t keep up with that speed. And then you think about financial services. Banks are not known for being fast,” Manji said.
Banking’s A.I. problem
Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.
“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying those [loans] to primarily Black neighborhoods.”
In the 1930s, Chicago was known for the discriminatory practice of “redlining,” in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.
“There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans,” she added.
“Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone’s race, it is implicitly picked up.”
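The mechanism Chowdhury describes is often called proxy discrimination, and a toy sketch makes it concrete. The code below is purely illustrative (not any bank's actual model): race never appears as a feature, but a correlated proxy — here, a hypothetical neighborhood code — carries the historical disparity into the model's risk scores anyway.

```python
# Toy illustration of proxy discrimination: the protected attribute is
# excluded from training, yet a correlated feature reproduces the bias.

def train_mean_risk_by_neighborhood(records):
    """Compute the average historical default label per neighborhood code."""
    totals = {}
    for r in records:
        n = r["neighborhood"]
        s, c = totals.get(n, (0, 0))
        totals[n] = (s + r["defaulted"], c + 1)
    return {n: s / c for n, (s, c) in totals.items()}

# Hypothetical historical data shaped by past redlining: residents of
# neighborhood "A" were denied credit, so they show more recorded defaults.
history = [
    {"neighborhood": "A", "defaulted": 1},
    {"neighborhood": "A", "defaulted": 1},
    {"neighborhood": "A", "defaulted": 0},
    {"neighborhood": "B", "defaulted": 0},
    {"neighborhood": "B", "defaulted": 0},
    {"neighborhood": "B", "defaulted": 1},
]

scores = train_mean_risk_by_neighborhood(history)
# If neighborhood correlates with race, the score gap implicitly
# "picks up" race even though it was never a feature.
print(scores["A"] > scores["B"])  # → True
```

The point is not the arithmetic but the pattern: any model trained on outcomes shaped by past discrimination can reproduce that discrimination through correlated features.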
Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization aiming to empower Black women in the AI sector, tells CNBC that when AI systems are used for loan approval decisions, there is a risk of replicating the biases present in the historical data used to train the algorithms.
“This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities,” Bush added.
“It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination,” she said.
Frost Li, a developer who has been working in AI and machine learning for over a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.
“What’s interesting in AI is how we select the ‘core features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features unrelated to the results we want to predict.”
When AI is applied to banking, Li says, it’s harder to identify the “culprit” in biases when everything is convoluted in the calculation.
“A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won’t be able to get any credit cards even if he works at Google; yet a person can easily get one from community college credit union because bankers know the local schools better,” Li added.
Generative AI is not usually used for creating credit scores or in the risk-scoring of consumers.
“That is not what the tool was built for,” said Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making.
Instead, Guske said the most powerful applications are in pre-processing unstructured data such as text files — like classifying transactions.
“Those signals can then be fed into a more traditional underwriting model,” said Guske. “Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes.”
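The two-stage pipeline Guske describes — a generative model pre-processing unstructured text, a traditional model making the decision — can be sketched as follows. This is a hedged illustration: the keyword lookup stands in for the generative-AI classification step, and the category names are invented for the example, not drawn from any real underwriter.

```python
# Sketch of the pipeline: classify free-text transaction descriptions,
# then aggregate the categories into features for a traditional scorer.
# The keyword classifier is a stand-in for an LLM-based classification step.

CATEGORIES = {
    "salary": ["payroll", "salary"],
    "gambling": ["casino", "bet"],
    "rent": ["rent", "landlord"],
}

def classify(description):
    """Map a raw transaction description to a spending category."""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return category
    return "other"

def underwriting_features(transactions):
    """Aggregate classified transactions into counts a scoring model can use."""
    counts = {}
    for t in transactions:
        c = classify(t)
        counts[c] = counts.get(c, 0) + 1
    return counts

feats = underwriting_features([
    "ACME CORP PAYROLL 0423",
    "LUCKY CASINO ONLINE",
    "RENT PAYMENT - LANDLORD LLC",
])
print(feats)  # → {'salary': 1, 'gambling': 1, 'rent': 1}
```

In Guske's framing, only the first stage would use generative AI; the resulting category counts feed a conventional underwriting model, improving its input data rather than replacing it.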
But it’s also difficult to prove. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card. But these claims were dismissed by the New York Department of Financial Services after the regulator found no evidence of discrimination based on sex.
The problem, according to Kim Smouter, director of anti-racism group European Network Against Racism, is that it can be challenging to substantiate whether AI-based discrimination has actually taken place.
“One of the difficulties in the mass deployment of AI,” he said, “is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination.”
“Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. Accordingly, it’s also difficult to detect specific instances where things have gone wrong,” he added.
Smouter cited the example of the Dutch child welfare scandal, in which thousands of benefit claimants were wrongfully accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims were “treated with an institutional bias.”
This, Smouter said, “demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered, and in the meantime significant, often irreversible damage is done.”
Policing A.I.’s biases
Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.
Though AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology’s moral and ethical soundness. Among the top worries industry insiders expressed are misinformation; racial and gender bias embedded in AI algorithms; and “hallucinations” generated by ChatGPT-like tools.
“I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy — not any of the text, not any of the video, not any of the audio, but then how do we get our information? And how do we ensure that information has a high amount of integrity?” Chowdhury said.
Now is the time for meaningful regulation of AI to come into force — but knowing the amount of time it will take regulatory proposals like the European Union’s AI Act to take effect, some are concerned this won’t happen fast enough.
“We call upon more transparency and accountability of algorithms and how they operate and a layman’s declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, independent complaints process, periodic audits and reporting, involvement of racialized communities when tech is being designed and considered for deployment,” Smouter said.
The AI Act, the first regulatory framework of its kind, has incorporated a fundamental rights approach and concepts like redress, according to Smouter, adding that the regulation will be enforced in approximately two years.
“It would be great if this period can be shortened to make sure transparency and accountability are in the core of innovation,” he said.
Mark Zuckerberg’s announcement this week that Meta would pivot its moderation policies to allow more “free expression” was widely viewed as the company’s latest effort to appease President-elect Donald Trump.
More than any of its Silicon Valley peers, Meta has taken numerous public steps to make amends with Trump since his election victory in November.
That follows a highly contentious four years between the two during Trump’s first term in office, which ended with Facebook — similar to other social media companies — banning Trump from its platform.
As recently as March, Trump was using his preferred nickname of “Zuckerschmuck” when talking about Meta’s CEO and declaring that Facebook was an “enemy of the people.”
With Meta now positioning itself to be a key player in artificial intelligence, Zuckerberg recognizes the need for White House support as his company builds data centers and pursues policies that will allow it to fulfill its lofty ambitions, according to people familiar with the company’s plans who asked not to be named because they weren’t authorized to speak on the matter.
“Even though Facebook is as powerful as it is, it still had to bend the knee to Trump,” said Brian Boland, a former Facebook vice president, who left the company in 2020.
Meta declined to comment for this article.
In Tuesday’s announcement, Zuckerberg said Meta will end third-party fact-checking, remove restrictions on topics such as immigration and gender identity and bring political content back to users’ feeds. Zuckerberg pitched the sweeping policy changes as key to stabilizing Meta’s content-moderation apparatus, which he said had “reached a point where it’s just too many mistakes and too much censorship.”
The policy change was the latest strategic shift Meta has taken to buddy up with Trump and Republicans since Election Day.
A day earlier, Meta announced that UFC CEO Dana White, a longtime Trump friend, is joining the company’s board.
And last week, Meta announced that it was replacing Nick Clegg, its president of global affairs, with Joel Kaplan, who had been the company’s policy vice president. Clegg previously had a career in British politics with the Liberal Democrats party, including as a deputy prime minister, while Kaplan was a White House deputy chief of staff under former President George W. Bush.
Kaplan, who joined Meta in 2011 when it was still known as Facebook, has longstanding ties to the Republican Party and once worked as a law clerk for the late conservative Supreme Court Justice Antonin Scalia. In December, Kaplan posted photos on Facebook of himself with Vice President-elect JD Vance and Trump during their visit to the New York Stock Exchange.
Joel Kaplan, Facebook’s vice president of global policy, on April 17, 2018.
Many Meta employees criticized the policy change internally, with some saying the company is absolving itself of its responsibility to create a safe platform. Current and former employees also expressed concern that marginalized communities could face more online abuse due to the new policy, which is set to take effect over the coming weeks.
Despite the backlash from employees, people familiar with the company’s thinking said Meta is more willing to make these kinds of moves after laying off 21,000 employees, or nearly a quarter of its workforce, in 2022 and 2023.
Those cuts affected much of Meta’s civic integrity and trust and safety teams. The civic integrity group was the closest thing the company had to a white-collar union, with members willing to push back against certain policy decisions, former employees said. Since the job cuts, Zuckerberg faces less friction when making broad policy changes, the people said.
Zuckerberg’s overtures to Trump began in the months leading up to the election.
Following the first assassination attempt on Trump in July, Zuckerberg called the photo of Trump raising his fist with blood running down his face “one of the most badass things I’ve ever seen in my life.”
A month later, Zuckerberg penned a letter to the House Judiciary Committee alleging that the Biden administration had pressured Meta’s teams to censor certain Covid-19 content.
“I believe the government pressure was wrong, and I regret that we were not more outspoken about it,” he wrote.
After Trump’s presidential victory, Zuckerberg joined several other technology executives who visited the president-elect’s Mar-a-Lago resort in Florida. Meta also donated $1 million to Trump’s inaugural fund.
On Friday, Meta revealed to its workforce in a memo obtained by CNBC that it intends to shutter several internal programs related to diversity and inclusion in its hiring process, representing another Trump-friendly move.
The previous day, some details of the company’s new relaxed content-moderation guidelines were published by the news site The Intercept, showing the kind of offensive rhetoric that Meta’s new policy would now allow, including statements such as “Migrants are no better than vomit” and “I bet Jorge’s the one who stole my backpack after track practice today. Immigrants are all thieves.”
Recalibrating for Trump
Zuckerberg, who has been dragged to Washington eight times to testify before congressional committees during the last two administrations, wants to be perceived as someone who can work with Trump and the Republican Party, people familiar with the matter said.
Though Meta’s content-policy updates caught many of its employees and fact-checking partners by surprise, a small group of executives were formulating the plans in the aftermath of the U.S. election results. By New Year’s Day, leadership began planning the public announcements of its policy change, the people said.
Meta typically undergoes major “recalibrations” after prominent U.S. elections, said Katie Harbath, a former Facebook policy director and CEO of tech consulting firm Anchor Change. When the country undergoes a change in power, Meta adjusts its policies to best suit its business and reputational needs based on the political landscape, Harbath said.
“In 2028, they’ll recalibrate again,” she said.
After the 2016 election and Trump’s first victory, for example, Zuckerberg toured the U.S. to meet people in states he hadn’t previously visited. He published a 6,000-word manifesto emphasizing the need for Facebook to build more community.
The social media company faced harsh criticism about fake news and Russian election interference on its platforms after the 2016 election.
Following the 2020 election, during the heart of the pandemic, Meta took a harder stand on Covid-19 content, with a policy executive saying in 2021 that the “amount of COVID-19 vaccine misinformation that violates our policies is too much by our standards.” Those efforts may have appeased the Biden administration, but it drew the ire of Republicans.
Meta is once again reacting to the moment, Harbath said.
“There wasn’t a business risk here in Silicon Valley to be more right-leaning,” Harbath said.
While Trump has offered few specific policy proposals for his second administration, Meta has plenty at stake.
The White House could create more relaxed AI regulations compared with those in the European Union, where Meta says harsh restrictions have prevented the company from releasing some of its more advanced AI technologies. Meta, like other tech giants, also needs massive data centers and cutting-edge computer chips to train and run its advanced AI models.
“There’s a business benefit to having Republicans win, because they are traditionally less regulatory,” Harbath said.
Meta’s CEO Mark Zuckerberg reacts as he testifies during the Senate Judiciary Committee hearing on online child sexual exploitation at the U.S. Capitol in Washington, U.S., January 31, 2024.
Meta isn’t alone in trying to cozy up to Trump. But the extreme measures the company is taking reflect a particular level of animus expressed by Trump over the years.
Trump has accused Meta of censorship and has expressed resentment over the company’s two-year suspension of his Facebook and Instagram accounts following the Jan. 6 attack on the Capitol.
In July 2024, Trump posted on Truth Social that he intended to “pursue Election Fraudsters at levels never seen before, and they will be sent to prison for long periods of time,” adding “ZUCKERBUCKS, be careful!” Trump reiterated that statement in his book, “Save America,” writing that Zuckerberg plotted against him during the 2020 election and that the Meta CEO would “spend the rest of his life in prison” if it happened again.
Meta spends $14 million annually on providing personal security for Zuckerberg and his family, according to the company’s 2024 proxy statement. As part of that security, the company analyzes any threats or perceived threats against its CEO, according to a person familiar with the matter. Those threats are cataloged, analyzed and dissected by Meta’s multitude of security teams.
After Trump’s comments, Meta’s security teams analyzed how Trump could weaponize the Justice Department and the country’s intelligence agencies against Zuckerberg and what it would cost the company to defend its CEO against a sitting president, said the person, who asked not to be named because of confidentiality.
Meta’s efforts to appease the incoming president bring their own risks.
After Zuckerberg announced the new speech policy Tuesday, Boland, the former executive, was among a number of users who took to Meta’s Threads service to tell their followers that they were quitting Facebook.
“Last post before deleting,” Boland wrote in his post.
Before the post could be seen by any of his Threads followers, Meta’s content moderation system had taken it down, citing cybersecurity reasons.
Boland told CNBC in an interview that he couldn’t help but chuckle at the situation.
“It’s deeply ironic,” Boland said.
— CNBC’s Salvador Rodriguez contributed to this report.
Apple is losing market share in China due to declining iPhone shipments, supply chain analyst Ming-Chi Kuo wrote in a report on Friday. The stock slid 2.4%.
“Apple has adopted a cautious stance when discussing 2025 iPhone production plans with key suppliers,” Kuo, an analyst at TF Securities, wrote in the report. He added that despite the expected launch of the new iPhone SE 4, shipments are expected to decline 6% year over year for the first half of 2025.
Kuo expects Apple’s market share to continue to slide, as two of the coming iPhones are so thin that they likely will only support eSIM, which the Chinese market currently does not promote.
“These two models could face shipping momentum challenges unless their design is modified,” he wrote.
Kuo wrote that in December, overall smartphone shipments in China were flat from a year earlier, but iPhone shipments dropped 10% to 12%.
There is also “no evidence” that Apple Intelligence, the company’s on-device artificial intelligence offering, is driving hardware upgrades or services revenue, according to Kuo. He wrote that the feature “has not boosted iPhone replacement demand,” according to a supply chain survey he conducted, and added that in his view, the feature’s appeal “has significantly declined compared to cloud-based AI services, which have advanced rapidly in subsequent months.”
Apple’s estimated iPhone shipments total about 220 million units for 2024 and between about 220 million and 225 million for this year, Kuo wrote. That is “below the market consensus of 240 million or more,” he wrote.
Apple did not immediately respond to CNBC’s request for comment.
Amazon said it is halting some of its diversity and inclusion initiatives, joining a growing list of major corporations that have made similar moves in the face of increasing public and legal scrutiny.
In a Dec. 16 internal note to staffers that was obtained by CNBC, Candi Castleberry, Amazon’s VP of inclusive experiences and technology, said the company was in the process of “winding down outdated programs and materials” as part of a broader review of hundreds of initiatives.
“Rather than have individual groups build programs, we are focusing on programs with proven outcomes — and we also aim to foster a more truly inclusive culture,” Castleberry wrote in the note, which was first reported by Bloomberg.
Castleberry’s memo doesn’t say which programs the company is dropping as a result of its review. The company typically releases annual data on the racial and gender makeup of its workforce, and it also operates Black, LGBTQ+, indigenous and veteran employee resource groups, among others.
In 2020, Amazon set a goal of doubling the number of Black employees in vice president and director roles. It announced the same goal in 2021 and also pledged to hire 30% more Black employees for product manager, engineer and other corporate roles.
Meta on Friday made a similar retreat from its diversity, equity and inclusion initiatives. The social media company said it’s ending its approach of considering qualified candidates from underrepresented groups for open roles and its equity and inclusion training programs. The decision drew backlash from Meta employees, including one staffer who wrote, “If you don’t stand by your principles when things get difficult, they aren’t values. They’re hobbies.”
Amazon, which is the nation’s second-largest private employer behind Walmart, also recently made changes to its “Our Positions” webpage, which lays out the company’s stance on a variety of policy issues. Previously, there were separate sections dedicated to “Equity for Black people,” “Diversity, equity and inclusion” and “LGBTQ+ rights,” according to records from the Internet Archive’s Wayback Machine.
The current webpage has streamlined those sections into a single paragraph. The section says that Amazon believes in creating a diverse and inclusive company and that inequitable treatment of anyone is unacceptable. The Information earlier reported the changes.
Amazon spokesperson Kelly Nantel told CNBC in a statement: “We update this page from time to time to ensure that it reflects updates we’ve made to various programs and positions.”
Read the full memo from Amazon’s Castleberry:
Team,
As we head toward the end of the year, I want to give another update on the work we’ve been doing around representation and inclusion.
As a large, global company that operates in different countries and industries, we serve hundreds of millions of customers from a range of backgrounds and globally diverse communities. To serve them effectively, we need millions of employees and partners that reflect our customers and communities. We strive to be representative of those customers and build a culture that’s inclusive for everyone.
In the last few years we took a new approach, reviewing hundreds of programs across the company, using science to evaluate their effectiveness, impact, and ROI — identifying the ones we believed should continue. Each one of these addresses a specific disparity, and is designed to end when that disparity is eliminated. In parallel, we worked to unify employee groups together under one umbrella, and build programs that are open to all. Rather than have individual groups build programs, we are focusing on programs with proven outcomes — and we also aim to foster a more truly inclusive culture. You can read more about this on our Together at Amazon page on A to Z.
This approach — where we move away from programs that were separate from our existing processes, and instead integrating our work into existing processes so they become durable — is the evolution to “built in” and “born inclusive,” instead of “bolted on.” As part of this evolution, we’ve been winding down outdated programs and materials, and we’re aiming to complete that by the end of 2024. We also know there will always be individuals or teams who continue to do well-intentioned things that don’t align with our company-wide approach, and we might not always see those right away. But we’ll keep at it.
We’ll continue to share ongoing updates, and appreciate your hard work in driving this progress. We believe this is important work, so we’ll keep investing in programs that help us reflect those audiences, help employees grow, thrive, and connect, and we remain dedicated to delivering inclusive experiences for customers, employees, and communities around the world.