European Union flags flutter outside the EU Commission headquarters, in Brussels, Belgium, February 1, 2023
Yves Herman | Reuters
When Gerard de Graaf moved from Europe to San Francisco almost a year ago, his job had a very different feel to it.
De Graaf, a 30-year veteran of the European Commission, was tasked with resurrecting the EU office in the Bay Area. His title is senior envoy for digital to the U.S., and since September his main job has been to help the tech industry prepare for new legislation called the Digital Services Act (DSA), which goes into effect Friday.
At the time of his arrival, the metaverse trumped artificial intelligence as the talk of the town, tech giants and emerging startups were cutting thousands of jobs, and the Nasdaq was headed for its worst year since the financial crisis in 2008.
Within de Graaf’s purview, companies including Meta, Google, Apple and Amazon have had since April to get ready for the DSA, which takes inspiration from banking regulations. They face fines of as much as 6% of annual revenue if they fail to comply with the act, which was introduced in 2020 by the EC (the executive arm of the EU) to reduce the spread of illegal content online and provide more accountability.
Coming in as an envoy, de Graaf has seen more action than he expected. In March, there was the sudden implosion of the iconic Silicon Valley Bank, the second-largest bank failure in U.S. history. At the same time, OpenAI’s ChatGPT service, launched late last year, was setting off an arms race in generative AI, with tech money pouring into new chatbots and the large language models (LLMs) powering them.
It was a “strange year in many, many ways,” de Graaf said, from his office, which is co-located with the Irish Consulate on the 23rd floor of a building in downtown San Francisco. The European Union hasn’t had a formal presence in Silicon Valley since the 1990s.
De Graaf spent much of his time meeting with top executives, policy teams and technologists at the major tech companies to discuss regulations, the impact of generative AI and competition. Although regulations are enforced by the EC in Brussels, the new outpost has been a useful way to foster a better relationship between the U.S. tech sector and the EU, de Graaf said.
“I think there’s been a conversation that we needed to have that did not really take place,” said de Graaf. With a hint of sarcasm, de Graaf said that somebody with “infinite wisdom” decided the EU should step back from the region during the internet boom, right “when Silicon Valley was taking off and going from strength to strength.”
The thinking at the time within the tech industry, he said, was that the internet is a “different technology that moves very fast” and that “policymakers don’t understand it and can’t regulate it.”
Facebook Chairman and CEO Mark Zuckerberg arrives to testify before the House Financial Services Committee on “An Examination of Facebook and Its Impact on the Financial Services and Housing Sectors” in the Rayburn House Office Building in Washington, DC on October 23, 2019.
Mandel Ngan | AFP | Getty Images
However, some major leaders in tech have shown signs that they’re taking the DSA seriously, de Graaf said. He noted that Meta CEO Mark Zuckerberg met with Thierry Breton, the EU commissioner for internal market, to go over some of the specifics of the rules, and that X owner Elon Musk has publicly supported the DSA after meeting with Breton.
De Graaf said he’s seeing “a bit more respect and understanding for the European Union’s position, and I think that has accelerated after generative AI.”
‘Serious commitment’
X, formerly known as Twitter, had withdrawn from the EU’s voluntary guidelines for countering disinformation. There was no penalty for not participating, but X must now comply with the DSA, and Breton said after his meeting with Musk that “fighting disinformation will be a legal obligation.”
“I think, in general, we’ve seen a serious commitment of big companies also in Europe and around the world to be prepared and to prepare themselves,” de Graaf said.
The new rules require platforms with at least 45 million monthly active users in the EU to provide risk assessment and mitigation plans. They must also give certain researchers access to inspect their services for harms, and provide users with more transparency about their recommendation systems, even allowing people to tweak their settings.
Timing could be a challenge. As part of their cost-cutting measures implemented early this year, many companies laid off members of their trust and safety teams.
“You ask yourself the question, will these companies still have the capacity to implement these new regulations?” de Graaf said. “We’ve been assured by many of them that in the process of layoffs, they have a renewed sense of trust and safety.”
The DSA doesn’t require that tech companies maintain a certain number of trust and safety workers, de Graaf said, just that they comply with the law. Still, he said one social media platform that he declined to name gave an answer “that was not entirely reassuring” when asked how it plans to monitor for disinformation in Poland during the upcoming October elections, as the company has only one person in the region.
That’s why the rules include transparency about what exactly the platforms are doing.
“There’s a lot we don’t know, like how these companies moderate content,” de Graaf said. “And not just their resources, but also how their decisions are made with which content will stay and which content is taken down.”
De Graaf, a Dutchman who’s married with two kids, has spent the past three decades going deep on regulatory issues for the EC. He previously worked on the Digital Services Act and Digital Markets Act, European legislation targeted at consumer protection and rights and enhancing competition.
This isn’t his first stint in the U.S. From 1997 to 2001, he worked in Washington, D.C., as “trade counsellor at the European Commission’s Delegation to the United States,” according to his bio.
For all the talk about San Francisco’s “doom loop,” de Graaf said he sees a different level of energy in the city as well as further south in Silicon Valley.
There’s still “so much dynamism” in San Francisco, he said, adding that it’s filled with “such interesting people and objective people that I find incredibly refreshing.”
“I meet very, very interesting people here in Silicon Valley and in San Francisco,” he said. “And it’s not just the companies that are avant-garde but the people behind them, so the conversations you have here with people are really rewarding.”
The generative AI boom
Generative AI was a virtually foreign concept when de Graaf arrived in San Francisco last September. Now, it’s about the only topic of conversation at tech conferences and cocktail parties.
The rise and rapid spread of generative AI has led a number of big tech companies and high-profile executives to call for regulations, citing the technology’s potential influence on society and the economy. In June, the European Parliament took a major step toward passing the EU AI Act, the bloc’s package of AI regulations. It’s still a long way from becoming law.
De Graaf noted the irony in the industry’s attitude. Tech companies that have for years criticized the EU for overly aggressive regulations are now asking, “Why is it taking you so long?” de Graaf said.
“We will hopefully have an agreement on the text by the end of this year,” he said. “And then we always have these transitional periods where the industry needs to prepare, and we need to prepare. That might be two years or a year and a half.”
The rapidly changing landscape of generative AI makes it tricky for the EU to quickly formulate regulations.
“Six months ago, I think our big concern was to legislate the handful of companies — the extremely powerful, resource rich companies — that are going to dominate,” de Graaf said.
But as more powerful LLMs become freely available, the technology is spreading, making regulation more challenging as it’s no longer just a matter of dealing with a few big companies. De Graaf has been meeting with local universities such as Stanford to discuss transparency into LLMs, how researchers can access the technology and what kind of data companies could provide to lawmakers about their software.
One proposal being floated in Europe is the idea of publicly funded AI models, so control isn’t all in the hands of big U.S. companies.
“These are questions that policymakers in the U.S. and all around the world are asking themselves,” de Graaf said. “We don’t have a crystal ball where we can just predict everything that’s happening.”
Even if there are ways to expand how AI models are developed, there’s little doubt about where the money is flowing for processing power. Nvidia, which just reported blowout earnings for the latest quarter and has seen its stock price triple in value this year, is by far the leader in providing the kind of chips needed to power generative AI systems.
“That company, they have a unique value proposition,” de Graaf said. “It’s unique not because of scale or a network effect, but because their technology is so advanced that it has no competition.”
He said that his team meets “quite regularly” with Nvidia and its policy team and they’ve been learning “how the semiconductor market is evolving.”
“That’s a useful source of information for us, and of course, where the technology is going,” de Graaf said. “They know where a lot of the industries are stepping up and are on the ball or are going to move more quickly than other industries.”
An employee works at Shopify’s headquarters in Ottawa, Ontario in Canada.
Chris Wattie | Reuters
Shopify on Tuesday reported better-than-expected sales for the fourth quarter but missed on earnings. Shares whipsawed in premarket trading.
Here’s how the company did:
Earnings: 39 cents per share vs. 43 cents per share expected by LSEG
Revenue: $2.81 billion vs. $2.73 billion expected by LSEG
Shopify forecast first-quarter revenue growth in the mid-20s on a percentage basis, roughly in line with analysts’ expectations of 24.4% growth, according to LSEG.
“We expect the strong merchant momentum from Q4 to carry over into Q1, recognizing that Q1 is consistently our lowest [gross merchandise volume] quarter seasonally,” the company said in its earnings release.
The fourth quarter includes the results of the holiday shopping season. Online spending jumped nearly 9% to $241.1 billion in November and December, according to data from Adobe Analytics, which tracks sales on retailers’ websites. That was slightly higher than analysts’ forecast of $240.8 billion.
The company said it expects operating expense as a percentage of revenue to be 41% to 42% in the current quarter. That’s a step up from 31.5% in the fourth quarter.
Net income nearly doubled to $1.3 billion, or 99 cents per share, from $657 million, or 51 cents per share, a year ago.
Revenue in the fourth quarter jumped 31% from $2.14 billion in the same quarter a year earlier.
Gross merchandise volume, or the total volume of merchandise sold on the platform, came in at $94.5 billion. Analysts surveyed by FactSet were looking for GMV of $93 billion.
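The year-over-year growth and implied share count fall out of simple arithmetic on the quoted figures. Here is an illustrative sanity check (rounded inputs from the article, not CNBC’s or Shopify’s methodology):

```python
# Illustrative check of the reported Shopify figures.
q4_revenue = 2.81e9        # latest fourth-quarter revenue, dollars
year_ago_revenue = 2.14e9  # same quarter a year earlier

growth = q4_revenue / year_ago_revenue - 1
print(f"Year-over-year revenue growth: {growth:.0%}")  # ~31%, as reported

net_income = 1.3e9  # reported net income, dollars
eps = 0.99          # reported earnings per share, dollars
implied_shares = net_income / eps
print(f"Implied share count: {implied_shares / 1e9:.2f} billion")
```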
Shopify sells software for merchants who run online businesses as well as services such as advertising and payment processing tools. The company has made its name as a platform for small businesses and direct-to-consumer brands to launch online storefronts. More recently, it has looked to attract bigger customers, such as Reebok, Mattel and Barnes & Noble, as a way to boost its growth.
While the details on just how DeepSeek did it remain incomplete, and its success doesn’t mean export controls don’t have a place in markets and national security policy, it does show that a focus on stopping the competition can’t keep pace with innovation. Now, the debate is underway over just how far the U.S. government should go in the future in blocking access to U.S. chip technology.
President Biden’s Department of Commerce issued its rules to “regulate the global diffusion” of AI chips and models in the administration’s waning days. The rules have already been heavily criticized by tech companies, including Nvidia, as well as policy experts. A Brookings analysis argues that the AI diffusion rules seek to create “a centrally planned global computing economy.”
“A decade from now, we will look back and recognize how quixotic it was for the U.S. government of the mid-2020s to attempt to limit the ability of people in 150 countries to perform fast multiplications,” wrote John Villasenor, a nonresident senior fellow at Brookings and professor of electrical engineering, law, public policy, and management at UCLA.
In any technology war, questions about what countermove the U.S. should make next inevitably run up against the awareness that any notion of controlling innovation through measures like restricting exports is not guaranteed to work – and may even backfire. Among the risks cited by Brookings: spurring the development of a global AI ecosystem anchored outside the U.S.; pushing more nations into building stronger technology ties with China; and allowing non-U.S. makers of advanced chips to grow global market share at the expense of the U.S. companies behind the original innovations.
“I worry that we will have a knee-jerk response to ratchet up controls heavily, before we fully think through the trade-offs,” said Martin Chorzempa, senior fellow at the Peterson Institute for International Economics.
The AI diffusion rules are subject to a 120-day comment period that ends on May 15, unless Trump reverses or revises them before then. While the president has spoken in general about the need to protect the U.S. technological lead, he has not specifically addressed this rule. It’s unknown what stance the current administration will take – expanding, curtailing or overturning the chip export rules already in place.
“Some authoritarian regimes have stolen and used AI to strengthen their military intelligence and surveillance capabilities, capture foreign data and create propaganda to undermine other nations’ national security,” Vice President JD Vance said in an address at France’s AI Action Summit in Paris. “I want to be clear, this administration will block such efforts, full stop,” Vance said. “We will safeguard American AI and chip technologies from theft and misuse, work with our allies and partners to strengthen and extend these protections and close pathways to adversaries attaining AI capabilities that threaten all of our people,” he added.
Trump’s first-day signing of an executive order to “identify and eliminate loopholes in existing export controls” suggests he could take a hard line. It said the government will “assess and make recommendations regarding how to maintain, obtain, and enhance our Nation’s technological edge and how to identify and eliminate loopholes in existing export controls – especially those that enable the transfer of strategic goods, software, services, and technology to strategic rivals and their proxies.”
The tech sector was quick to reach out to the new administration, with several major CEOs attending the inauguration and Nvidia CEO Jensen Huang meeting with President Trump at the White House in recent weeks for a discussion that included chip restrictions on China.
Trump also called DeepSeek a “wake-up call for our industries that we need to be laser-focused on competing to win.”
Particularly relevant to DeepSeek in the AI diffusion rules are controls surrounding closed AI model weights, which are essential to the training process that shapes how AI systems think and respond to queries.
“In part, DeepSeek was able to get around the speed limit imposed on chips allowed for sale to China in 2022, but banned in 2023, when the U.S. realized that the limit imposed was the wrong one,” said Chorzempa.
When the U.S. put controls on China in 2022, Chorzempa explained, they set a specific parameter concerning the speed of communication between chips. It was thought that if you control the power of an individual chip that might not be enough, because if you bring enough less powerful chips together, it’s possible to have supercomputer-like capabilities at a level the U.S. government didn’t want China to obtain. It appears from what DeepSeek described in its R1 paper that the company was able to overcome that speed limit.
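Chorzempa’s point about aggregation can be sketched with toy numbers (hypothetical values, not actual export-control thresholds): if only per-chip performance is capped, total cluster capability still scales with chip count, which is why chip-to-chip communication speed became a control parameter.

```python
# Toy model: idealized aggregate throughput, ignoring interconnect overhead.
# All numbers are hypothetical, not real export-control thresholds.

def cluster_tflops(num_chips: int, per_chip_tflops: float) -> float:
    """Aggregate compute if chips could communicate without bottleneck."""
    return num_chips * per_chip_tflops

capped = cluster_tflops(num_chips=3000, per_chip_tflops=100)    # export-capped chips
uncapped = cluster_tflops(num_chips=1000, per_chip_tflops=300)  # unrestricted chips

# With enough weaker chips, the idealized capability is identical --
# hence the 2022 rules also limited chip-to-chip communication speed.
print(capped == uncapped)  # True
```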
“Experts in the technical community in at least early 2023 were pointing out that other restrictions were required to have an effective control as the technology evolved,” Chorzempa said.
In 2023, the U.S. government added additional layers of restriction that made the Nvidia chips DeepSeek says it trained the model on no longer legal for export. Tightened controls could be further strengthened by subsequent initiatives from the Trump administration.
But through a combination of having a limited number of advanced chips available and innovation spurred on by that limit, DeepSeek was able to build a better, and potentially cheaper, mousetrap.
“DeepSeek seems to have optimized heavily with clever software and hardware engineering to sort of neuter the speed limit meant to hold those chips back,” Chorzempa said.
AI rivals will continue to do more with less
There are other aspects to the evolving AI race which show gaps that are narrowing for other reasons.
“The story is really about the gap being closed between open source and closed source models,” said Alexandra Mousavizadeh, CEO of Evident, an AI consulting firm. “Now the open source models are getting much closer to the capabilities of the closed ones, and we see the price driving down to zero,” Mousavizadeh said.
DeepSeek has already shown that you don’t need maximum computing power and that you can use open source as an alternative when building a viable LLM. In fact, according to Mousavizadeh, these constraints can be a driver of innovation.
“We’re seeing that limits forced them to use scientific methods and systems that compress data onto a much smaller pool that uses much less power using mixed expert models,” she said. AI rivals can “do more with less,” Mousavizadeh added.
“You can’t really gatekeep,” Mousavizadeh said, noting that there is lots of sharing that occurs in the open source environment, “regardless of governmental policy.”
Even if DeepSeek’s success leads to stricter export controls on the advanced chips intended to slow Chinese AI efforts, it should also be clear that such controls are no silver bullet. “They’re not a way to duck the competition between the US and China,” wrote Dario Amodei, CEO of gen AI startup Anthropic, in a blog post last week. “In the end, AI companies in the US and other democracies must have better models than those in China if we want to prevail. But we shouldn’t hand the Chinese Communist Party technological advantages when we don’t have to.”
His issue isn’t with the AI researchers in China, but the government to which they are ultimately beholden. “In interviews they’ve done, they seem like smart, curious researchers who just want to make useful technology,” Amodei wrote about DeepSeek. “But they’re beholden to an authoritarian government that has committed human rights violations, has behaved aggressively on the world stage, and will be far more unfettered in these actions if they’re able to match the US in AI.”
To be sure, there are many reasons to be wary of doing anything to contribute to China’s AI advances and successes like DeepSeek, from national security concerns about data sharing with the Chinese government, to ongoing hacking risks, to Chinese AI apps becoming popular enough to be used by Chinese intelligence to learn about Americans and American industries, and to sow division among the public.
Palantir Technologies CEO Alex Karp told CNBC’s Sara Eisen in a recent interview that “we have to run harder, run faster, have an all-country effort.”
“The second-mover can move very quickly, especially if we’ve already done the innovation,” Karp said, describing DeepSeek as derivative of U.S. models with “improvements at the margins.”
He expects a “huge policy discussion” to make sure innovations are not exported, but Karp added that in the end, “the real advantage goes to the first mover as long as the first mover is running hard. … We have the lead, we have to focus on making sure we keep it. Our adversaries are gonna copy anything they can.”
Palantir’s rise — its shares soared 340% last year to lead the S&P 500 — didn’t come by trying to stop others, and in that there may be a lesson. “We don’t focus on the competition,” Karp said. “We focus on how do we execute.”
The Google Calendar logo is displayed on a tablet.
Igor Golovniov | Sopa Images | Lightrocket | Getty Images
Google’s popular online and mobile calendars no longer include references to the first day of Black History Month or Women’s History Month, among other holidays and events.
The company’s calendar previously had those days marked at the start of February and March, respectively, but they don’t appear for 2025.
The Verge first reported on the removals from Google Calendar late last week, after users flagged the changes.
A Google spokesperson said the changes took place in the middle of last year.
“Some years ago, the Calendar team started manually adding a broader set of cultural moments in a wide number of countries around the world,” the spokesperson said in an email. “We got feedback that some other events and countries were missing — and maintaining hundreds of moments manually and consistently globally wasn’t scalable or sustainable,” the spokesperson added.
Google has made numerous changes lately that align with an altered political environment in the U.S. The company recently began scrapping its diversity hiring goals, becoming the latest tech giant to change its approach to hiring and promotions following the election of President Donald Trump. One of Trump’s first acts as president after taking office in January was to sign an executive order ending the government’s DEI programs and putting federal officials overseeing those initiatives on leave.
In late January, the company said it would change the name of the Gulf of Mexico to the “Gulf of America” in Google Maps after the Trump administration updates its “official government sources.” Google also said it would follow Trump and start using the name “Mount McKinley” for the mountain in Alaska currently called Denali.
On Google Calendar, the company has removed other events as well. It previously had Nov. 1 as the first day of Indigenous Peoples Month and June 1 as the start of LGBTQ+ Pride month.
The company spokesperson said that in mid-2024, the company “returned to showing only public holidays and national observances from timeanddate.com globally, while allowing users to manually add other important moments.” The timeanddate.com website says its company has 40 employees and is based in Norway.
Google Calendar users noticed the changes and left comments in the user support web pages and on social media. The user support site previously received comments from people upset about the company adding such observances.