Zahra Bahrololoumi, CEO of U.K. and Ireland at Salesforce, speaking during the company’s annual Dreamforce conference in San Francisco, California, on Sept. 17, 2024.
David Paul Morris | Bloomberg | Getty Images
LONDON — The U.K. chief executive of Salesforce wants the Labour government to regulate artificial intelligence — but says it’s important that policymakers don’t tar all technology companies developing AI systems with the same brush.
Speaking to CNBC in London, Zahra Bahrololoumi, CEO of U.K. and Ireland at Salesforce, said the American enterprise software giant takes all legislation “seriously.” However, she added that any British proposals aimed at regulating AI should be “proportional and tailored.”
Bahrololoumi noted that there’s a difference between companies developing consumer-facing AI tools — like OpenAI — and firms like Salesforce making enterprise AI systems. She said consumer-facing AI systems, such as ChatGPT, face fewer restrictions than enterprise-grade products, which have to meet higher privacy standards and comply with corporate guidelines.
“What we look for is targeted, proportional, and tailored legislation,” Bahrololoumi told CNBC on Wednesday.
“There’s definitely a difference between those organizations that are operating with consumer-facing technology and consumer tech, and those that are enterprise tech. And we each have different roles in the ecosystem, [but] we’re a B2B organization,” she said.
A spokesperson for the U.K.’s Department for Science, Innovation and Technology (DSIT) said that planned AI rules would be “highly targeted to the handful of companies developing the most powerful AI models,” rather than applying “blanket rules on the use of AI.”
That suggests the rules might not apply to companies like Salesforce, which don’t build their own foundation models the way OpenAI does.
“We recognize the power of AI to kickstart growth and improve productivity and are absolutely committed to supporting the development of our AI sector, particularly as we speed up the adoption of the technology across our economy,” the DSIT spokesperson added.
Data security
Salesforce has been heavily touting the ethics and safety considerations embedded in its Agentforce AI technology platform, which allows enterprise organizations to spin up their own AI “agents” — essentially, autonomous digital workers that carry out tasks for different functions, like sales, service or marketing.
For example, one feature called “zero retention” means no customer data can ever be stored outside of Salesforce. As a result, generative AI prompts and outputs aren’t stored in Salesforce’s large language models — the programs that form the bedrock of today’s genAI chatbots, like ChatGPT.
With consumer AI chatbots like ChatGPT, Anthropic’s Claude or Meta’s AI assistant, it’s unclear what data is being used to train them or where that data gets stored, according to Bahrololoumi.
“To train these models you need so much data,” she told CNBC. “And so, with something like ChatGPT and these consumer models, you don’t know what it’s using.”
Even Microsoft’s Copilot, which is marketed at enterprise customers, comes with heightened risks, Bahrololoumi said, citing a Gartner report calling out the tech giant’s AI personal assistant over the security risks it poses to organizations.
OpenAI and Microsoft were not immediately available for comment when contacted by CNBC.
AI concerns ‘apply at all levels’
Bola Rotibi, chief of enterprise research at analyst firm CCS Insight, told CNBC that, while enterprise-focused AI suppliers are “more cognizant of enterprise-level requirements” around security and data privacy, it would be wrong to assume regulators wouldn’t scrutinize both consumer- and business-facing firms.
“All the concerns around things like consent, privacy, transparency, data sovereignty apply at all levels no matter if it is consumer or enterprise as such details are governed by regulations such as GDPR,” Rotibi told CNBC via email. GDPR, or the General Data Protection Regulation, became law in the U.K. in 2018.
However, Rotibi said that regulators may feel “more confident” in AI compliance measures adopted by enterprise application providers like Salesforce, “because they understand what it means to deliver enterprise-level solutions and management support.”
“A more nuanced review process is likely for the AI services from widely deployed enterprise solution providers like Salesforce,” she added.
Bahrololoumi spoke to CNBC at Salesforce’s Agentforce World Tour in London, an event designed to promote the use of the company’s new “agentic” AI technology by partners and customers.
Her remarks come after U.K. Prime Minister Keir Starmer’s Labour government refrained from introducing an AI bill in the King’s Speech, which is written by the government to outline its priorities for the coming months. The government said at the time that it plans to establish “appropriate legislation” for AI, without offering further details.
Amazon has rolled out a new storefront featuring apparel, home goods, electronics and other items priced below $20, to fend off growing competition from discount upstarts Temu and Shein.
Called “Amazon Haul,” the storefront is accessible through the company’s mobile app, and promises “crazy low prices” on a plethora of goods. Shoppers can buy $1 eyelash curlers and oven gloves, or a $3 nail dryer. The company is offering free shipping on orders over $25, or a $3.99 shipping fee on orders below that threshold.
An Amazon spokesperson didn’t immediately respond to a request for comment.
CNBC previously reported that Amazon planned to launch its own discount webstore with goods shipped directly from China.
Amazon is betting shoppers will wait longer for products in exchange for rock-bottom prices. The company noted that most purchases made in Amazon Haul will be delivered in under two weeks, “although shipping times may vary and are dependent on a customer’s delivery location.”
That’s a shift for Amazon, which has partly cemented its dominance in e-commerce by offering faster delivery speeds than its competitors. The company upended the online shopping world when it first offered free two-day delivery, and it’s been speeding up delivery times since then. Amazon now offers same- or next-day delivery, and in some parts of the U.S. it promises delivery within a few hours of an order being placed.
In this photo illustration, the OpenAI logo is displayed on a mobile phone screen with a photo of Sam Altman, CEO of OpenAI.
Didem Mente | Anadolu | Getty Images
OpenAI’s official “blueprint for U.S. AI infrastructure” involves artificial intelligence economic zones, tapping the U.S. Navy’s nuclear power experience and government projects funded by private investors, according to a document viewed by CNBC, which the company plans to present on Wednesday in Washington, D.C.
The blueprint also outlines a North American AI alliance to compete with China’s initiatives and a National Transmission Highway Act “as ambitious as the 1956 National Interstate and Defense Highways Act.”
In the document, OpenAI outlines a rosy future for AI, calling it “as foundational a technology as electricity, and promising similarly distributed access and benefits.” The company wrote that investment in U.S. AI will lead to tens of thousands of jobs, GDP growth, a modernized grid that includes nuclear power, a new group of chip manufacturing facilities and billions of dollars in investment from global funds.
Now that Donald Trump is President-elect, OpenAI has made clear its intention to work with the new administration on AI policy, and the company’s Wednesday presentation outlines those plans.
Trump plans to repeal President Biden’s executive order on AI, according to his campaign platform, which states that the order “hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology” and that “in its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”
OpenAI’s presentation outlines AI economic zones co-created by state and federal governments “to give states incentives to speed up permitting and approvals for AI infrastructure.” The company envisions building new solar arrays and wind farms and getting unused nuclear reactors cleared for use.
“States that provide subsidies or other support for companies launching infrastructure projects could require that a share of the new compute be made available to their public universities to create AI research labs and developer hubs aligned with their key commercial sectors,” OpenAI wrote.
OpenAI also wrote that it foresees a “National Transmission Highway Act” that could expand power, fiber connectivity and natural gas pipeline construction. The company wrote it needs “new authority and funding to unblock the planning, permitting, and payment for transmission,” and that existing procedures aren’t keeping pace with AI-driven demand.
The blueprint says, “The government can encourage private investors to fund high-cost energy infrastructure projects by committing to purchase energy and other means that lessen credit risk.”
A North American AI Alliance and investment in more U.S. data centers
OpenAI also foresees a North American AI alliance of Western countries that could eventually expand to a global network, such as a “Gulf Cooperation Council with the UAE and others in that region.”
The company also outlined its vision for nuclear power, writing that although China “has built as much nuclear power capacity in 10 years as the US built in 40,” the U.S. Navy operates about 100 small modular reactors (SMRs) to power naval submarines, and leveraging the Navy’s expertise could lead to building more civilian SMRs.
OpenAI’s infrastructure blueprint aligns with what Chris Lehane, OpenAI’s head of global policy, told CNBC in a recent interview. He sees the Midwest and Southwest as potential core areas for AI investment.
“Parts of the country that have been ‘left behind,’ as we enter the digital age, where so much of the economics and particularly economic benefits flow to the two coasts… Areas like the midwest and the southwest are going to be the types of places where you have the land and ability to do wind farms and to do solar facilities, and potentially to do some part of the energy transition — potentially do nuclear facilities,” Lehane said.
The infrastructure, Lehane explained, is contingent on the U.S. maintaining a lead over China in AI.
“[In] Kansas and Iowa, which sits on top of an enormous amount of agricultural data, think about standing up a data center,” Lehane said. “One gigawatt, which is a lot, taking, you know, 200-250 megawatts, a quarter of that, and doing something with their public university systems to create an agricultural-based LLM or inference model that would really serve their community but also make them a center of agricultural AI.”
Lehane cited an estimate that the U.S. will need 50 gigawatts of power by 2030 to support the AI industry’s needs and to compete against China, especially as China has approved 20 nuclear reactors over the past two years and 11 more for next year.
“We don’t have a choice,” Lehane said. “We do have to compete with that.”
Denmark on Wednesday laid out a framework that can help EU member states use generative artificial intelligence in compliance with the European Union’s strict new AI Act — and Microsoft‘s already on board.
A government-backed alliance of major Danish corporates, led by IT consultancy Netcompany, launched the “Responsible Use of AI Assistants in the Public and Private Sector” white paper, a blueprint that sets out “best-practice examples” for how firms should use AI systems, and support employees in deploying them, in a regulated environment.
The guide also aims to encourage delivery of “secure and reliable services” by businesses to consumers. Denmark’s Agency for Digital Government, the country’s central business registry CVR and pensions authority ATP are among the founding partners adopting the framework.
This includes guidelines on public-private collaboration, deploying AI in society, complying with both the AI Act and the General Data Protection Regulation (GDPR), mitigating risks and reducing bias, scaling AI implementation, storing data securely, and training staff.
Netcompany CEO André Rogaczewski said the provisions laid out in the white paper were primarily aimed at companies in heavily regulated industries, such as financial services. He told CNBC he’s aiming to address one core question: “How can we scale the responsible usage of AI?”
What is the EU AI Act?
The EU AI Act is a landmark law that aims to govern the way companies develop, use and apply AI. It came into force in August, after receiving final approval from EU member states, lawmakers and the European Commission — the executive body of the EU — in May.
The law applies a risk-based approach to governing AI, meaning various applications of the technology are treated differently depending on the risk level they pose. It’s been touted as the world’s first major AI law that will give firms clarity under a harmonized, EU-wide regulatory framework.
Though the rules are technically in effect, implementing them is a lengthy process. Most of the Act’s provisions — including rules for general-purpose AI systems like OpenAI’s ChatGPT — won’t apply until at least 2026, at the end of a two-year transition period.
“It is absolutely vital for the competitiveness of our businesses and future progress of Europe that both the private and public sector will succeed in developing and using AI in the years to come,” Caroline Stage Olsen, Denmark’s minister of digital affairs, told CNBC, calling the white paper a “helpful step” toward that goal.
Netcompany’s Rogaczewski told CNBC that he pitched the idea for a white paper to some of Denmark’s biggest banks and insurance firms some months ago. He found that, though each organization was “experimenting” with AI, institutions lacked a “common standard” for getting the most out of the tech.
Rogaczewski hopes the Danish white paper will also offer a blueprint for other countries and businesses seeking to simplify compliance with the EU AI Act.
Microsoft’s decision to sign up to the guidelines is of particular note. “Getting Microsoft involved was important since generative AI solutions often involve algorithms and global tech,” said Rogaczewski, adding that the tech giant’s involvement underlines how responsible digitization is possible across borders.
The U.S. tech giant is a major backer of ChatGPT developer OpenAI, which secured a $157 billion valuation this year. Microsoft also licenses OpenAI’s technology to enterprise firms via its Azure cloud computing platform.