It’s a scenario most people have encountered: you try to make a big or unexpected purchase on your credit card, and, at the moment you need it the most, the card gets declined.
Sometimes, it’s as simple as confirming the purchase via text message, and you can quickly complete the transaction. Other times, it’s a days-long process that involves confirmation codes, mailed letters and waiting on hold with the card company to validate that it was indeed you who wanted to buy the product.
The rate of fraud alerts is “absolutely” going up, according to Satish Lalchand, a principal in Deloitte’s U.S. risk and financial advisory practice.
It can’t be ignored, because many of the alerts are not false alarms.
About 60% of credit card holders in 2023 experienced some sort of attempted fraud, according to Experian.
“Fraud in general across all channels, whether it’s check fraud, credit card fraud payments, the peer-to-peer payments, everything, is significantly increasing at a very rapid pace,” Lalchand said.
Global card losses attributed to fraud reached $33 billion in 2022, according to payments industry research firm Nilson Report, with the U.S. market representing roughly 40% of losses. The firm forecasts a persistent threat, with cumulative card fraud losses that could reach nearly $400 billion over the decade to 2032.
“What’s driving a lot of this type of fraud is the fraudsters themselves are using AI in general,” Lalchand said. “So, they are able to now move much faster.”
In the past, cybercriminals could open five to 10 accounts a day. Now it’s hundreds, if not thousands, of accounts, thanks to advancements in artificial intelligence.
At the same time, AI is helping card issuers detect potentially problematic transactions, though many of the cases it flags turn out to be false alarms.
“When we come down to credit cards, financial institutions are investing more in the concept of fraud and fraud modernization, replacing older technology and having better fraud detection capabilities, and retuning their alerts,” Lalchand said. “That’s also causing a lot more on the detection side to go up.”
More personal data is being stolen
Michael Bruemmer, Experian vice president and head of its global data breach resolution and consumer protection division, said much more fraud is now committed in ways that go beyond stealing your credit card number, using other pieces of your financial background, identity records and Social Security number.
Just in the past five months, there have been four major data breaches, including those affecting Ticketmaster, Change Healthcare, AT&T and National Public Data. More data breaches can lead to more scrutiny and more preemptive alert protocols, although they are often not the main reason for alerts, according to Experian.
There is some good news: the rate of fraudulent purchases on credit cards is actually decreasing, according to Experian. There have been 416,582 cases of credit card fraud perpetrated in 2024, down 5.4% from 2023.
AI’s ability to detect patterns based on previous behavior has helped. While you may still get credit card blocks on purchases that seem out of the ordinary, technology has improved fraud alerts in other ways. Mastercard said it has observed on average a 20% increase in its ability to detect fraud thanks to AI, and up to a 300% increase in its ability to detect fraud without producing more false alerts. Mastercard declined to provide statistics on the absolute level of fraud and the overall accuracy of its fraud detection.
“We’ve come such a long way to actually reduce the friction out there,” said Johan Gerber, Mastercard executive vice president and head of security solutions.
Take, for example, travel plans and purchases made in a foreign country. Before, people would have to call the credit card company. Now, card companies automatically note vacations and travel patterns based on past purchase behavior. Technology has also made it faster to identify and clear flagged fraud alerts when they are indeed false alarms. Instead of having to call and wait on hold, in many cases verification can be done in a matter of minutes through authorized related accounts or through information only the individual cardholder would know.
Tips to cut down on unnecessary alerts
Today, some scenarios will raise concerns within current security parameters. Experian notes that while data breaches may turn up the dial on fraud alerts, it’s actually changes in shopping patterns that are guaranteed to set off red flags. If you’re buying something at a new store or purchasing a big-ticket item that you don’t usually buy, that’s typically something that will be noted. Mastercard also said attempting multiple transactions in quick succession will always alert its systems. So you can expect these scenarios will usually garner some sort of temporary block.
“It’s a balance,” Gerber said. “Do I want to be inconvenienced? Do you potentially want a transaction that [Mastercard] may get wrong because [we] declined you? Or do I want to sit on the other side of the loss of trust in that [we] actually did let a transaction through and you should have known it’s not me.”
Other things you can do to ensure that you get mostly accurate fraud alerts are to sign up for monitoring services and to personally set limit alerts on your accounts. Most institutions will let you set monetary thresholds that determine when you get notified about big transactions. Freezing your credit file, using a password manager and enabling two-factor authentication with a biometric passcode for your financial accounts can also be beneficial.
“Try to shop on regular, reputable shopping sites, and if you’re going to use a credit card, have a low-level limit credit card that’s only used for those shopping sites,” Bruemmer said. “I would also recommend using a tap-to-pay or a mobile app and then make sure you’re not shopping on a public Wi-Fi network.”
And even if the alerts are annoying, never ignore them. Even though it may seem like you get notice of a data breach every day, that doesn’t mean you won’t eventually be affected.
“Consumers should pay attention to all of this, because it’s just a matter of time … they will be impacted,” Lalchand said.
Zahra Bahrololoumi, CEO of U.K. and Ireland at Salesforce, speaking during the company’s annual Dreamforce conference in San Francisco, California, on Sept. 17, 2024.
LONDON — The U.K. chief executive of Salesforce wants the Labour government to regulate artificial intelligence — but says it’s important that policymakers don’t tar all technology companies developing AI systems with the same brush.
Speaking to CNBC in London, Zahra Bahrololoumi, CEO of U.K. and Ireland at Salesforce, said the American enterprise software giant takes all legislation “seriously.” However, she added that any British proposals aimed at regulating AI should be “proportional and tailored.”
Bahrololoumi noted that there’s a difference between companies developing consumer-facing AI tools — like OpenAI — and firms like Salesforce making enterprise AI systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-grade products, which have to meet higher privacy standards and comply with corporate guidelines.
“What we look for is targeted, proportional, and tailored legislation,” Bahrololoumi told CNBC on Wednesday.
“There’s definitely a difference between those organizations that are operating with consumer facing technology and consumer tech, and those that are enterprise tech. And we each have different roles in the ecosystem, [but] we’re a B2B organization,” she said.
A spokesperson for the U.K.’s Department for Science, Innovation and Technology (DSIT) said that planned AI rules would be “highly targeted to the handful of companies developing the most powerful AI models,” rather than applying “blanket rules on the use of AI.”
That indicates the rules might not apply to companies like Salesforce, which don’t make their own foundational models the way OpenAI does.
“We recognize the power of AI to kickstart growth and improve productivity and are absolutely committed to supporting the development of our AI sector, particularly as we speed up the adoption of the technology across our economy,” the DSIT spokesperson added.
Data security
Salesforce has been heavily touting the ethics and safety considerations embedded in its Agentforce AI technology platform, which allows enterprise organizations to spin up their own AI “agents” — essentially, autonomous digital workers that carry out tasks for different functions, like sales, service or marketing.
For example, one feature called “zero retention” means no customer data can ever be stored outside of Salesforce. As a result, generative AI prompts and outputs aren’t stored in Salesforce’s large language models — the programs that form the bedrock of today’s genAI chatbots, like ChatGPT.
With consumer AI chatbots like ChatGPT, Anthropic’s Claude or Meta’s AI assistant, it’s unclear what data is being used to train them or where that data gets stored, according to Bahrololoumi.
“To train these models you need so much data,” she told CNBC. “And so, with something like ChatGPT and these consumer models, you don’t know what it’s using.”
Even Microsoft’s Copilot, which is marketed at enterprise customers, comes with heightened risks, Bahrololoumi said, citing a Gartner report calling out the tech giant’s AI personal assistant over the security risks it poses to organizations.
OpenAI and Microsoft were not immediately available for comment when contacted by CNBC.
AI concerns ‘apply at all levels’
Bola Rotibi, chief of enterprise research at analyst firm CCS Insight, told CNBC that, while enterprise-focused AI suppliers are “more cognizant of enterprise-level requirements” around security and data privacy, it would be wrong to assume regulations wouldn’t scrutinize both consumer and business-facing firms.
“All the concerns around things like consent, privacy, transparency, data sovereignty apply at all levels no matter if it is consumer or enterprise as such details are governed by regulations such as GDPR,” Rotibi told CNBC via email. GDPR, or the General Data Protection Regulation, became law in the U.K. in 2018.
However, Rotibi said that regulators may feel “more confident” in AI compliance measures adopted by enterprise application providers like Salesforce, “because they understand what it means to deliver enterprise-level solutions and management support.”
“A more nuanced review process is likely for the AI services from widely deployed enterprise solution providers like Salesforce,” she added.
Bahrololoumi spoke to CNBC at Salesforce’s Agentforce World Tour in London, an event designed to promote the use of the company’s new “agentic” AI technology by partners and customers.
Her remarks come after U.K. Prime Minister Keir Starmer’s Labour government refrained from introducing an AI bill in the King’s Speech, which is written by the government to outline its priorities for the coming months. The government said at the time that it plans to establish “appropriate legislation” for AI, without offering further details.
After a decade of unfulfilled promises about driverless vehicles, Tesla CEO Elon Musk hyped the company’s Cybercab concept on Thursday night, showing off a low, silver two-seater with no steering wheel or pedals.
Rolling up to the stage in a Cybercab almost an hour after the company’s “We, Robot” event was supposed to begin, Musk said the company had 21 of these vehicles, and a total of 50 “autonomous” cars, on location at the Warner Bros. studio in Burbank, California, where Tesla hosted its invitation-only event.
Musk offered no details about exactly where Tesla plans to produce the cars, but said consumers would be able to buy a Tesla Cybercab for below $30,000. He said the company hopes to be producing the Cybercab before 2027.
He also said he expects Tesla to have “unsupervised FSD” up and running in Texas and California next year in the company’s Model 3 and Model Y electric vehicles.
FSD, which stands for Full Self-Driving, is Tesla’s premium driver assistance system, available today in a “supervised” version for Tesla electric vehicles. FSD currently requires a human driver at the wheel, ready to steer or brake at any time. Earlier this year, Tesla tacked “supervised” onto the product name.
“It’s going to be a glorious future,” Musk said on Thursday night.
Musk also revealed plans to produce an autonomous, electric Robovan that can carry up to 20 people, or be used to transport goods. He said it will “solve for high density,” transporting a sports team, for example.
He said the Cybercab and Robovan would employ inductive charging, meaning these autonomous vehicles could roll up to a station to recharge, with no plugging in required.
Tesla unveils its Robovan at the “We, Robot” event on Oct. 10, 2024.
Musk has spent years touting Tesla’s work in autonomous cars and promising that they would hit the market. Along the way, he’s repeatedly woven a fantastical vision for shareholders, setting and missing his own deadlines.
In 2015, Musk told shareholders that Tesla cars would achieve “full autonomy” within three years. They didn’t. In 2016, Musk said a Tesla car would be able to make a cross-country drive without requiring any human intervention before the end of 2017. That never happened. And in 2019, on a call with institutional investors that would help him raise more than $2 billion, Musk said Tesla would have 1 million robotaxi-ready vehicles on the road in 2020, able to complete 100 hours of driving work per week each, making money for their owners.
In April this year, Musk was still telling investors that autonomy is the company’s future.
“If somebody doesn’t believe Tesla’s going to solve autonomy, I think they should not be an investor in the company,” he said on a call with analysts. “We will, and we are.”
At Thursday night’s event, which he previously characterized as a “product launch,” Musk welcomed attendees to the “party,” and said they would be able to take test rides in the autonomous vehicles on location, in the closed environment of the movie studio lots.
It was Tesla’s first product unveiling since the company showed off the design for its Cybertruck in 2019. The angular steel pickup began shipping to customers in late 2023 and has been the subject of five voluntary recalls in the U.S. since then.