Artificial intelligence algorithms are increasingly being used in financial services — but they come with some serious risks around discrimination.
AMSTERDAM — Artificial intelligence has a racial bias problem.
From biometric identification systems that disproportionately misidentify the faces of Black people and minorities, to applications of voice recognition software that fail to distinguish voices with distinct regional accents, AI has a lot to work on when it comes to discrimination.
And the problem of amplifying existing biases can be even more severe when it comes to banking and financial services.
Deloitte notes that AI systems are ultimately only as good as the data they’re given: Incomplete or unrepresentative datasets could limit AI’s objectivity, while biases in development teams that train such systems could perpetuate that cycle of bias.
A.I. can be dumb
Nabil Manji, head of crypto and Web3 at Worldpay by FIS, said a key thing to understand about AI products is that the strength of the technology depends a lot on the source material used to train it.
“The thing about how good an AI product is, there’s kind of two variables,” Manji told CNBC in an interview. “One is the data it has access to, and second is how good the large language model is. That’s why the data side, you see companies like Reddit and others, they’ve come out publicly and said we’re not going to allow companies to scrape our data, you’re going to have to pay us for that.”
As for financial services, Manji said a lot of the backend data systems are fragmented in different languages and formats.
“None of it is consolidated or harmonized,” he added. “That is going to cause AI-driven products to be a lot less effective in financial services than it might be in other verticals or other companies where they have uniformity and more modern systems or access to data.”
Manji suggested that blockchain, or distributed ledger technology, could serve as a way to get a clearer view of the disparate data tucked away in the cluttered systems of traditional banks.
However, he added that banks — being the heavily regulated, slow-moving institutions that they are — are unlikely to move with the same speed as their more nimble tech counterparts in adopting new AI tools.
“You’ve got Microsoft and Google, who like over the last decade or two have been seen as driving innovation. They can’t keep up with that speed. And then you think about financial services. Banks are not known for being fast,” Manji said.
Banking’s A.I. problem
Rumman Chowdhury, Twitter’s former head of machine learning ethics, transparency and accountability, said that lending is a prime example of how an AI system’s bias against marginalized communities can rear its head.
“Algorithmic discrimination is actually very tangible in lending,” Chowdhury said on a panel at Money20/20 in Amsterdam. “Chicago had a history of literally denying those [loans] to primarily Black neighborhoods.”
In the 1930s, Chicago was known for the discriminatory practice of “redlining,” in which the creditworthiness of properties was heavily determined by the racial demographics of a given neighborhood.
“There would be a giant map on the wall of all the districts in Chicago, and they would draw red lines through all of the districts that were primarily African American, and not give them loans,” she added.
“Fast forward a few decades later, and you are developing algorithms to determine the riskiness of different districts and individuals. And while you may not include the data point of someone’s race, it is implicitly picked up.”
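Chowdhury's point about race being "implicitly picked up" can be shown with a toy sketch. All data, feature names and thresholds below are invented for illustration: even a model that never sees race as an input can reproduce discriminatory outcomes when a correlated feature, such as living in a historically redlined ZIP code, stands in for it.

```python
# Hypothetical illustration of proxy discrimination in lending.
# A "race-blind" scoring rule still produces unequal approval rates
# because neighborhood correlates with race in the synthetic data.
import random

random.seed(0)

# Synthetic applicants: in this toy world, neighborhood correlates with
# race because of historical segregation (the redlining legacy above).
applicants = []
for _ in range(1000):
    race = random.choice(["A", "B"])
    # Group B applicants mostly live in historically redlined ZIP codes.
    zip_redlined = random.random() < (0.8 if race == "B" else 0.1)
    income = random.gauss(60, 15)
    applicants.append({"race": race, "zip_redlined": zip_redlined, "income": income})

def model_score(applicant):
    # The "race-blind" model: only income and neighborhood are inputs.
    # Historical data taught it to penalize redlined ZIP codes.
    return applicant["income"] - (25 if applicant["zip_redlined"] else 0)

def approval_rate(group):
    members = [a for a in applicants if a["race"] == group]
    return sum(model_score(a) > 50 for a in members) / len(members)

print(f"approval rate, group A: {approval_rate('A'):.2f}")
print(f"approval rate, group B: {approval_rate('B'):.2f}")
# Race was never an input, yet the approval rates diverge sharply.
```

The model here is deliberately crude; the point is only that removing the protected attribute from the feature set does not remove the disparity when a proxy remains.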
Indeed, Angle Bush, founder of Black Women in Artificial Intelligence, an organization aiming to empower Black women in the AI sector, tells CNBC that when AI systems are used specifically for loan approval decisions, there is a risk of replicating the existing biases present in the historical data used to train the algorithms.
“This can result in automatic loan denials for individuals from marginalized communities, reinforcing racial or gender disparities,” Bush added.
“It is crucial for banks to acknowledge that implementing AI as a solution may inadvertently perpetuate discrimination,” she said.
Frost Li, a developer who has been working in AI and machine learning for over a decade, told CNBC that the “personalization” dimension of AI integration can also be problematic.
“What’s interesting in AI is how we select the ‘core features’ for training,” said Li, who founded and runs Loup, a company that helps online retailers integrate AI into their platforms. “Sometimes, we select features unrelated to the results we want to predict.”
When AI is applied to banking, Li says, it’s harder to identify the “culprit” behind biases when everything is entangled in the calculation.
“A good example is how many fintech startups are especially for foreigners, because a Tokyo University graduate won’t be able to get any credit cards even if he works at Google; yet a person can easily get one from community college credit union because bankers know the local schools better,” Li added.
Generative AI is not typically used to create credit scores or to risk-score consumers.
“That is not what the tool was built for,” said Niklas Guske, chief operating officer at Taktile, a startup that helps fintechs automate decision-making.
Instead, Guske said the most powerful applications are in pre-processing unstructured data such as text files — like classifying transactions.
“Those signals can then be fed into a more traditional underwriting model,” said Guske. “Therefore, Generative AI will improve the underlying data quality for such decisions rather than replace common scoring processes.”
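A minimal sketch of the pipeline Guske describes: free-text transaction descriptions are first classified into categories, and the aggregated signals then feed a conventional, auditable scorecard. The category keywords, weights and scoring rule below are illustrative assumptions, not a real underwriting model; in practice, the classification step is where a generative model or text classifier would sit.

```python
# Illustrative pipeline: unstructured transaction text -> categories ->
# structured features -> traditional scorecard. All names and numbers
# are hypothetical.

CATEGORIES = {
    "gambling": ["casino", "betting", "poker"],
    "salary": ["payroll", "salary"],
    "overdraft": ["overdraft fee", "nsf fee"],
}

def classify(description: str) -> str:
    """Step 1: map a free-text transaction description to a category.
    (In practice, an LLM or trained text classifier would do this.)"""
    text = description.lower()
    for category, keywords in CATEGORIES.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

def underwriting_features(transactions: list) -> dict:
    """Step 2: aggregate category counts into structured features."""
    counts = {category: 0 for category in [*CATEGORIES, "other"]}
    for transaction in transactions:
        counts[classify(transaction)] += 1
    return counts

def traditional_score(features: dict) -> int:
    """Step 3: a plain scorecard consumes the features. This is the
    'more traditional underwriting model' the signals feed into."""
    score = 600
    score += 40 * features["salary"]
    score -= 30 * features["gambling"]
    score -= 50 * features["overdraft"]
    return score

txns = ["ACME PAYROLL DEPOSIT", "LUCKY CASINO ONLINE", "OVERDRAFT FEE"]
print(traditional_score(underwriting_features(txns)))  # → 560
```

The design point matches Guske's claim: the generative step only improves the quality of the inputs, while the scoring decision itself stays in a simple, explainable model.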
Discrimination can also be difficult to prove. Apple and Goldman Sachs, for example, were accused of giving women lower limits for the Apple Card. But the claims were dismissed by the New York Department of Financial Services after the regulator found no evidence of discrimination based on sex.
The problem, according to Kim Smouter, director of anti-racism group European Network Against Racism, is that it can be challenging to substantiate whether AI-based discrimination has actually taken place.
“One of the difficulties in the mass deployment of AI,” he said, “is the opacity in how these decisions come about and what redress mechanisms exist were a racialized individual to even notice that there is discrimination.”
“Individuals have little knowledge of how AI systems work and that their individual case may, in fact, be the tip of a systems-wide iceberg. Accordingly, it’s also difficult to detect specific instances where things have gone wrong,” he added.
Smouter cited the example of the Dutch child welfare scandal, in which thousands of benefit claimants were wrongly accused of fraud. The Dutch government was forced to resign after a 2020 report found that victims were “treated with an institutional bias.”
This, Smouter said, “demonstrates how quickly such dysfunctions can spread and how difficult it is to prove them and get redress once they are discovered and in the meantime significant, often irreversible damage is done.”
Policing A.I.’s biases
Chowdhury says there is a need for a global regulatory body, like the United Nations, to address some of the risks surrounding AI.
Though AI has proven to be an innovative tool, some technologists and ethicists have expressed doubts about the technology’s moral and ethical soundness. Among the top worries industry insiders expressed are misinformation; racial and gender bias embedded in AI algorithms; and “hallucinations” generated by ChatGPT-like tools.
“I worry quite a bit that, due to generative AI, we are entering this post-truth world where nothing we see online is trustworthy — not any of the text, not any of the video, not any of the audio, but then how do we get our information? And how do we ensure that information has a high amount of integrity?” Chowdhury said.
Now is the time for meaningful regulation of AI to come into force — but given how long it will take regulatory proposals like the European Union’s AI Act to take effect, some are concerned this won’t happen fast enough.
“We call upon more transparency and accountability of algorithms and how they operate and a layman’s declaration that allows individuals who are not AI experts to judge for themselves, proof of testing and publication of results, independent complaints process, periodic audits and reporting, involvement of racialized communities when tech is being designed and considered for deployment,” Smouter said.
The AI Act, the first regulatory framework of its kind, has incorporated a fundamental rights approach and concepts like redress, according to Smouter, adding that the regulation will be enforced in approximately two years.
“It would be great if this period can be shortened to make sure transparency and accountability are in the core of innovation,” he said.
Elon Musk’s business empire is sprawling. It includes electric vehicle maker Tesla, social media company X, artificial intelligence startup xAI, computer interface company Neuralink, tunneling venture Boring Company and aerospace firm SpaceX.
Some of his ventures already benefit tremendously from federal contracts. SpaceX has received more than $19 billion from contracts with the federal government, according to research from FedScout. Under a second Trump presidency, more lucrative contracts could come its way. SpaceX is on track to take in billions of dollars annually from prime contracts with the federal government for years to come, according to FedScout CEO Geoff Orazem.
Musk, who has frequently blamed the government for stifling innovation, could also push for less regulation of his businesses. Earlier this month, Musk and former Republican presidential candidate Vivek Ramaswamy were tapped by Trump to lead a government efficiency group called the Department of Government Efficiency, or DOGE.
In a recent commentary piece in the Wall Street Journal, Musk and Ramaswamy wrote that DOGE will “pursue three major kinds of reform: regulatory rescissions, administrative reductions and cost savings.” They went on to say that many existing federal regulations were never passed by Congress and should therefore be nullified, which President-elect Trump could accomplish through executive action. Musk and Ramaswamy also championed the large-scale auditing of agencies, calling out the Pentagon for failing its seventh consecutive audit.
“The number one way Elon Musk and his companies would benefit from a Trump administration is through deregulation and defanging, you know, giving fewer resources to federal agencies tasked with oversight of him and his businesses,” says CNBC technology reporter Lora Kolodny.
Elon Musk attends the America First Policy Institute gala at Mar-A-Lago in Palm Beach, Florida, Nov. 14, 2024.
X’s new terms of service, which took effect Nov. 15, are driving some users off Elon Musk’s microblogging platform.
The new terms include expansive permissions requiring users to allow the company to use their data to train X’s artificial intelligence models while also making users liable for as much as $15,000 in damages if they use the platform too much.
The terms are prompting some longtime users of the service, both celebrities and everyday people, to post that they are taking their content to other platforms.
“With the recent and upcoming changes to the terms of service — and the return of volatile figures — I find myself at a crossroads, facing a direction I can no longer fully support,” actress Gabrielle Union posted on X the same day the new terms took effect, while announcing she would be leaving the platform.
“I’m going to start winding down my Twitter account,” a user with the handle @mplsFietser said in a post. “The changes to the terms of service are the final nail in the coffin for me.”
It’s unclear just how many users have left X due specifically to the company’s new terms of service, but since the start of November, many social media users have flocked to Bluesky, a microblogging startup whose origins stem from Twitter, the former name for X. Some users with new Bluesky accounts have posted that they moved to the service due to Musk and his support for President-elect Donald Trump.
Bluesky’s U.S. mobile app downloads have skyrocketed 651% since the start of November, according to estimates from Sensor Tower. In the same period, X and Meta’s Threads are up 20% and 42%, respectively.
X and Threads have much larger monthly user bases. Although Musk said in May that X has 600 million monthly users, market intelligence firm Sensor Tower estimates X had 318 million monthly users as of October. That same month, Meta said Threads had nearly 275 million monthly users. Bluesky told CNBC on Thursday it had reached 21 million total users this week.
Here are some of the noteworthy changes in X’s new service terms and how they compare with those of rivals Bluesky and Threads.
Artificial intelligence training
X has come under heightened scrutiny because of its new terms, which say that any content on the service can be used royalty-free to train the company’s artificial intelligence large language models, including its Grok chatbot.
“You agree that this license includes the right for us to (i) provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type,” X’s terms say.
Additionally, any “user interactions, inputs and results” shared with Grok can be used for what it calls “training and fine-tuning purposes,” according to the Grok section of the X app and website. This specific function, though, can be turned off manually.
X’s terms do not specify whether users’ private messages can be used to train its AI models, and the company did not respond to a request for comment.
“You should only provide Content that you are comfortable sharing with others,” read a portion of X’s terms of service agreement.
Though X’s new terms may be expansive, Meta’s policies aren’t that different.
The maker of Threads uses “information shared on Meta’s Products and services” to get its training data, according to the company’s Privacy Center. This includes “posts or photos and their captions.” There is also no direct way for users outside of the European Union to opt out of Meta’s AI training. Meta keeps training data “for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely and efficiently,” according to its Privacy Center.
Under Meta’s policy, private messages with friends or family aren’t used to train AI unless one of the users in a chat chooses to share it with the models, which can include Meta AI and AI Studio.
Bluesky, which has seen a user growth surge since Election Day, doesn’t do any generative AI training.
“We do not use any of your content to train generative AI, and have no intention of doing so,” Bluesky said in a post on its platform Friday, confirming the same to CNBC as well.
Liquidated damages
Another unusual aspect of X’s new terms is its “liquidated damages” clause. The terms state that if users request, view or access more than 1 million posts – including replies, videos, images and others – in any 24-hour period they are liable for damages of $15,000.
While most individual users won’t easily approach that threshold, the clause is concerning for some, including digital researchers. They rely on the analysis of larger numbers of public posts from services like X to do their work.
X’s new terms of service are a “disturbing move that the company should reverse,” said Alex Abdo, litigation director for the Knight First Amendment Institute at Columbia University, in an October statement.
“The public relies on journalists and researchers to understand whether and how the platforms are shaping public discourse, affecting our elections, and warping our relationships,” Abdo wrote. “One effect of X Corp.’s new terms of service will be to stifle that research when we need it most.”
Neither Threads nor Bluesky has anything similar to X’s liquidated damages clause.
Meta and X did not respond to requests for comment.
A recent Chinese cyber-espionage attack inside the nation’s major telecom networks that may have reached as high as the communications of President-elect Donald Trump and Vice President-elect J.D. Vance was designated this week by one U.S. senator as “far and away the most serious telecom hack in our history.”
The U.S. has yet to figure out the full scope of what China accomplished, and whether or not its spies are still inside U.S. communication networks.
“The barn door is still wide open, or mostly open,” Senator Mark Warner of Virginia, chairman of the Senate Intelligence Committee, told the New York Times on Thursday.
The revelations highlight the rising cyberthreats tied to geopolitics and nation-state actor rivals of the U.S., but inside the federal government, there’s disagreement on how to fight back, with some advocates calling for the creation of an independent federal U.S. Cyber Force. In September, the Department of Defense formally appealed to Congress, urging lawmakers to reject that approach.
Among the most prominent voices advocating for the new branch is the Foundation for Defense of Democracies, a national security think tank, but the issue extends far beyond any single group. In June, defense committees in both the House and Senate approved measures calling for independent evaluations of the feasibility of creating a separate cyber branch, as part of the annual defense policy deliberations.
Drawing on insights from more than 75 active-duty and retired military officers experienced in cyber operations, the FDD’s 40-page report highlights what it says are chronic structural issues within the U.S. Cyber Command (CYBERCOM), including fragmented recruitment and training practices across the Army, Navy, Air Force, and Marines.
“America’s cyber force generation system is clearly broken,” the FDD wrote, citing comments made in 2023 by then-leader of U.S. Cyber Command, Army General Paul Nakasone, who took over the role in 2018 and described current U.S. military cyber organization as unsustainable: “All options are on the table, except the status quo,” Nakasone had said.
Concern with Congress and a changing White House
The FDD analysis points to “deep concerns” that have existed within Congress for a decade — among members of both parties — about the military’s ability to staff up to successfully defend cyberspace. Talent shortages, inconsistent training and misaligned missions are undermining CYBERCOM’s capacity to respond effectively to complex cyber threats, it says. Creating a dedicated branch, proponents argue, would better position the U.S. in cyberspace. The Pentagon, however, warns that such a move could disrupt coordination, increase fragmentation, and ultimately weaken U.S. cyber readiness.
As the Pentagon doubles down on its resistance to establishment of a separate U.S. Cyber Force, the incoming Trump administration could play a significant role in shaping whether America leans toward a centralized cyber strategy or reinforces the current integrated framework that emphasizes cross-branch coordination.
Trump is known for his assertive national security measures, and his 2018 National Cyber Strategy emphasized embedding cyber capabilities across all elements of national power, focusing on cross-departmental coordination and public-private partnerships rather than creating a standalone cyber entity. At that time, the Trump administration emphasized centralizing civilian cybersecurity efforts under the Department of Homeland Security while tasking the Department of Defense with addressing more complex, defense-specific cyber threats. Trump’s pick for Secretary of Homeland Security, South Dakota Governor Kristi Noem, has talked up her, and her state’s, focus on cybersecurity.
Former Trump officials believe that a second Trump administration will take an aggressive stance on national security, fill gaps at the Energy Department, and reduce regulatory burdens on the private sector. They anticipate a stronger focus on offensive cyber operations, tailored threat vulnerability protection, and greater coordination between state and local governments. Changes will be coming at the top of the Cybersecurity and Infrastructure Security Agency, which was created during Trump’s first term and where current director Jen Easterly has announced she will leave once Trump is inaugurated.
Cyber Command 2.0 and the U.S. military
John Cohen, executive director of the Program for Countering Hybrid Threats at the Center for Internet Security, is among those who share the Pentagon’s concerns. “We can no longer afford to operate in stovepipes,” Cohen said, warning that a separate cyber branch could worsen existing silos and further isolate cyber operations from other critical military efforts.
Cohen emphasized that adversaries like China and Russia employ cyber tactics as part of broader, integrated strategies that include economic, physical, and psychological components. To counter such threats, he argued, the U.S. needs a cohesive approach across its military branches. “Confronting that requires our military to adapt to the changing battlespace in a consistent way,” he said.
In 2018, CYBERCOM certified its Cyber Mission Force teams as fully staffed, but concerns have been expressed by the FDD and others that personnel were shifted between teams to meet staffing goals — a move they say masked deeper structural problems. Nakasone has called for a CYBERCOM 2.0, saying in comments early this year “How do we think about training differently? How do we think about personnel differently?” and adding that a major issue has been the approach to military staffing within the command.
Austin Berglas, a former head of the FBI’s cyber program in New York who worked on consolidation efforts inside the Bureau, believes a separate cyber force could enhance U.S. capabilities by centralizing resources and priorities. “When I first took over the [FBI] cyber program … the assets were scattered,” said Berglas, who is now the global head of professional services at supply chain cyber defense company BlueVoyant. Centralization brought focus and efficiency to the FBI’s cyber efforts, he said, and it’s a model he believes would benefit the military’s cyber efforts as well. “Cyber is a different beast,” Berglas said, emphasizing the need for specialized training, advancement, and resource allocation that isn’t diluted by competing military priorities.
Berglas also pointed to the ongoing “cyber arms race” with adversaries like China, Russia, Iran, and North Korea. He warned that without a dedicated force, the U.S. risks falling behind as these nations expand their offensive cyber capabilities and exploit vulnerabilities across critical infrastructure.
Nakasone said in his comments earlier this year that a lot has changed since 2013 when U.S. Cyber Command began building out its Cyber Mission Force to combat issues like counterterrorism and financial cybercrime coming from Iran. “Completely different world in which we live in today,” he said, citing the threats from China and Russia.
Brandon Wales, a former executive director of CISA, said there is a need to bolster U.S. cyber capabilities, but he cautions against major structural changes during a period of heightened global threats.
“A reorganization of this scale is obviously going to be disruptive and will take time,” said Wales, who is now vice president of cybersecurity strategy at SentinelOne.
He cited China’s preparations for a potential conflict over Taiwan as a reason the U.S. military needs to maintain readiness. Rather than creating a new branch, Wales supports initiatives like Cyber Command 2.0 and its aim to enhance coordination and capabilities within the existing structure. “Large reorganizations should always be the last resort because of how disruptive they are,” he said.
Wales says it’s important to ensure any structural changes do not undermine integration across military branches and recognize that coordination across existing branches is critical to addressing the complex, multidomain threats posed by U.S. adversaries. “You should not always assume that centralization solves all of your problems,” he said. “We need to enhance our capabilities, both defensively and offensively. This isn’t about one solution; it’s about ensuring we can quickly see, stop, disrupt, and prevent threats from hitting our critical infrastructure and systems,” he added.