Zahra Bahrololoumi, CEO of U.K. and Ireland at Salesforce, speaking during the company’s annual Dreamforce conference in San Francisco, California, on Sept. 17, 2024.
David Paul Morris | Bloomberg | Getty Images
LONDON — The UK chief executive of Salesforce wants the Labour government to regulate artificial intelligence — but says it’s important that policymakers don’t tar all technology companies developing AI systems with the same brush.
Speaking to CNBC in London, Zahra Bahrololoumi, CEO of UK and Ireland at Salesforce, said the American enterprise software giant takes all legislation “seriously.” However, she added that any British proposals aimed at regulating AI should be “proportional and tailored.”
Bahrololoumi noted that there’s a difference between companies developing consumer-facing AI tools — like OpenAI — and firms like Salesforce making enterprise AI systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-grade products, which have to meet higher privacy standards and comply with corporate guidelines.
“What we look for is targeted, proportional, and tailored legislation,” Bahrololoumi told CNBC on Wednesday.
“There’s definitely a difference between those organizations that are operating with consumer-facing technology and consumer tech, and those that are enterprise tech. And we each have different roles in the ecosystem, [but] we’re a B2B organization,” she said.
A spokesperson for the UK’s Department of Science, Innovation and Technology (DSIT) said that planned AI rules would be “highly targeted to the handful of companies developing the most powerful AI models,” rather than applying “blanket rules on the use of AI.”
That indicates the rules might not apply to companies like Salesforce, which, unlike OpenAI, doesn’t develop its own foundational AI models.
“We recognize the power of AI to kickstart growth and improve productivity and are absolutely committed to supporting the development of our AI sector, particularly as we speed up the adoption of the technology across our economy,” the DSIT spokesperson added.
Data security
Salesforce has been heavily touting the ethics and safety considerations embedded in its Agentforce AI technology platform, which allows enterprise organizations to spin up their own AI “agents” — essentially, autonomous digital workers that carry out tasks for different functions, like sales, service or marketing.
For example, one feature called “zero retention” means no customer data can ever be stored outside of Salesforce. As a result, generative AI prompts and outputs aren’t stored in Salesforce’s large language models — the programs that form the bedrock of today’s genAI chatbots, like ChatGPT.
With consumer AI chatbots like ChatGPT, Anthropic’s Claude or Meta’s AI assistant, it’s unclear what data is being used to train them or where that data gets stored, according to Bahrololoumi.
“To train these models you need so much data,” she told CNBC. “And so, with something like ChatGPT and these consumer models, you don’t know what it’s using.”
Even Microsoft’s Copilot, which is marketed at enterprise customers, comes with heightened risks, Bahrololoumi said, citing a Gartner report calling out the tech giant’s AI personal assistant over the security risks it poses to organizations.
OpenAI and Microsoft were not immediately available for comment when contacted by CNBC.
AI concerns ‘apply at all levels’
Bola Rotibi, chief of enterprise research at analyst firm CCS Insight, told CNBC that, while enterprise-focused AI suppliers are “more cognizant of enterprise-level requirements” around security and data privacy, it would be wrong to assume regulations wouldn’t scrutinize both consumer and business-facing firms.
“All the concerns around things like consent, privacy, transparency, data sovereignty apply at all levels no matter if it is consumer or enterprise as such details are governed by regulations such as GDPR,” Rotibi told CNBC via email. GDPR, or the General Data Protection Regulation, became law in the UK in 2018.
However, Rotibi said that regulators may feel “more confident” in AI compliance measures adopted by enterprise application providers like Salesforce, “because they understand what it means to deliver enterprise-level solutions and management support.”
“A more nuanced review process is likely for the AI services from widely deployed enterprise solution providers like Salesforce,” she added.
Bahrololoumi spoke to CNBC at Salesforce’s Agentforce World Tour in London, an event designed to promote the use of the company’s new “agentic” AI technology by partners and customers.
Her remarks come after U.K. Prime Minister Keir Starmer’s Labour refrained from introducing an AI bill in the King’s Speech, which is written by the government to outline its priorities for the coming months. The government at the time said it plans to establish “appropriate legislation” for AI, without offering further details.
OpenAI CEO Sam Altman speaks during the Federal Reserve’s Integrated Review of the Capital Framework for Large Banks Conference in Washington, D.C., U.S., July 22, 2025.
Ken Cedeno | Reuters
OpenAI is detailing its plans to address ChatGPT’s shortcomings when handling “sensitive situations” following a lawsuit from a family who blamed the chatbot for their teenage son’s death by suicide.
“We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable,” OpenAI wrote on Tuesday, in a blog post titled, “Helping people when they need it most.”
Earlier on Tuesday, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16, NBC News reported. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”
The company did not mention the Raine family or lawsuit in its blog post.
OpenAI said that although ChatGPT is trained to direct people to seek help when they express suicidal intent, its safeguards can erode over long exchanges, and the chatbot may eventually offer answers that go against them.
The company said it’s also working on an update to its GPT-5 model released earlier this month that will cause the chatbot to deescalate conversations, and that it’s exploring how to “connect people to certified therapists before they are in an acute crisis,” including possibly building a network of licensed professionals that users could reach directly through ChatGPT.
Additionally, OpenAI said it’s looking into how to connect users with “those closest to them,” like friends and family members.
When it comes to teens, OpenAI said it will soon introduce controls that will give parents options to gain more insight into how their children use ChatGPT.
Jay Edelson, lead counsel for the Raine family, told CNBC on Tuesday that nobody from OpenAI has reached out to the family directly to offer condolences or discuss any effort to improve the safety of the company’s products.
“If you’re going to use the most powerful consumer tech on the planet — you have to trust that the founders have a moral compass,” Edelson said. “That’s the question for OpenAI right now, how can anyone trust them?”
Raine’s story isn’t isolated.
Writer Laura Reiley earlier this month published an essay in The New York Times detailing how her 29-year-old daughter died by suicide after discussing the idea extensively with ChatGPT. And in a case in Florida, 14-year-old Sewell Setzer III died by suicide last year after discussing it with an AI chatbot on the app Character.AI.
As AI services grow in popularity, a host of concerns are arising around their use for therapy, companionship and other emotional needs.
But regulating the industry may also prove challenging.
On Monday, a coalition of AI companies, venture capitalists and executives, including OpenAI President and co-founder Greg Brockman, announced Leading the Future, a political operation that “will oppose policies that stifle innovation” when it comes to AI.
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.
Okta CEO Todd McKinnon appears on CNBC in September 2018.
Anjali Sundaram | CNBC
Okta shares rose 4% in extended trading on Tuesday after the identity software maker reported fiscal results that exceeded Wall Street projections.
Here’s how the company did in comparison with LSEG consensus:
Earnings per share: 91 cents adjusted vs. 84 cents expected
Revenue: $728 million vs. $711.8 million expected
Okta’s revenue grew about 13% year over year in the fiscal second quarter, which ended on July 31, according to a statement. Net income of $67 million, or 37 cents per share, was up from $29 million, or 15 cents per share, in the same quarter last year.
In May, Okta adjusted its guidance to reflect macroeconomic uncertainty. But business has been going well, said Todd McKinnon, Okta’s co-founder and CEO, in an interview with CNBC on Tuesday.
“It was much better than we thought,” McKinnon said. “Yeah, the results speak for themselves.”
U.S. government customers are being more careful about signing up for deals after President Donald Trump launched the Department of Government Efficiency in January.
“But even under that additional review, we did really well,” McKinnon said.
Net retention rate, a metric to show growth with existing customers, came to 106% in the quarter, unchanged from three months ago.
Companies will need to buy software to manage the identities of artificial intelligence agents working in their environments, which should lead to expansions with customers, McKinnon said. Selling suites of several kinds of Okta software should also boost revenue growth, he said.
Management called for 74 cents to 75 cents in adjusted earnings per share and $728 million to $730 million in revenue for the fiscal third quarter. Analysts surveyed by LSEG had expected earnings of 75 cents per share, with $722.9 million in revenue. Okta expects $2.260 billion to $2.265 billion in current remaining performance obligation, a measurement of subscription backlog to be recognized in the next 12 months, just above StreetAccount’s $2.26 billion consensus.
The company bumped up its fiscal 2026 forecast. It sees $3.33 to $3.38 in full-year adjusted earnings per share, with $2.875 billion to $2.885 billion in revenue. The LSEG consensus showed $3.28 in adjusted earnings per share on $2.86 billion in revenue. Okta’s full fiscal year guidance from May included $3.23 to $3.28 per share and $2.850 billion to $2.860 billion in revenue.
McKinnon also pushed back on rival Palo Alto Networks’ pitch that customers should consolidate on a single security platform.
“Palo Alto is going to be like, ‘You have to buy security from us, and your endpoint from us and your SIEM [security information and event management] from us and your network from us,’ ” McKinnon said. “We just think that’s wrong, because customers need choice. It’s very unlikely they’re going to get every piece of technology or every piece of security from one vendor.”
A Palo Alto spokesperson did not immediately respond to a request for comment.
Earlier on Tuesday, Okta said it had agreed to acquire Israeli startup Axiom Security, which sells software for managing data access. The companies did not disclose terms of the deal.
As of Tuesday’s close, Okta shares were up 16%, while the technology-heavy Nasdaq was up 11%.
Executives will discuss the results with analysts on a conference call starting at 5 p.m. ET.
Apple on Tuesday sent invites to the media and analysts for a launch event at its campus on Sept. 9 at 10 a.m. Pacific time.
The tagline on the invite is: “Awe dropping.”
Apple is expected to release new iPhones, as it usually does in September; this year’s model would be the iPhone 17. The company also often announces new Apple Watch models at the event.
While Apple’s launch events used to be held live, with executives demonstrating features on stage, they have been pre-recorded videos since 2020. Apple said it would stream the event on its website.
Analysts expect Apple to release a lineup of new phones with updated processors and specs, including a new slim version that trades battery life and cameras for a lighter weight and thinner design.