
European Union flags flutter outside the EU Commission headquarters, in Brussels, Belgium, February 1, 2023

Yves Herman | Reuters

When Gerard de Graaf moved from Europe to San Francisco almost a year ago, his job had a very different feel to it.

De Graaf, a 30-year veteran of the European Commission, was tasked with resurrecting the EU office in the Bay Area. His title is senior envoy for digital to the U.S., and since September his main job has been to help the tech industry prepare for new legislation called the Digital Services Act (DSA), which goes into effect Friday.

At the time of his arrival, the metaverse trumped artificial intelligence as the talk of the town, tech giants and emerging startups were cutting thousands of jobs, and the Nasdaq was headed for its worst year since the financial crisis in 2008.

Within de Graaf’s purview, companies including Meta, Google, Apple and Amazon have had since April to get ready for the DSA, which takes inspiration from banking regulations. They face fines of as much as 6% of annual revenue if they fail to comply with the act, which was introduced in 2020 by the EC (the executive arm of the EU) to reduce the spread of illegal content online and provide more accountability.

Coming in as an envoy, de Graaf has seen more action than he expected. In March, there was the sudden implosion of the iconic Silicon Valley Bank, the second-largest bank failure in U.S. history. At the same time, OpenAI’s ChatGPT service, launched late last year, was setting off an arms race in generative AI, with tech money pouring into new chatbots and the large language models (LLMs) powering them.

It was a “strange year in many, many ways,” de Graaf said, from his office, which is co-located with the Irish Consulate on the 23rd floor of a building in downtown San Francisco. The European Union hasn’t had a formal presence in Silicon Valley since the 1990s.

De Graaf spent much of his time meeting with top executives, policy teams and technologists at the major tech companies to discuss regulations, the impact of generative AI and competition. Although regulations are enforced by the EC in Brussels, the new outpost has been a useful way to foster a better relationship between the U.S. tech sector and the EU, de Graaf said.

“I think there’s been a conversation that we needed to have that did not really take place,” said de Graaf. With a hint of sarcasm, de Graaf said that somebody with “infinite wisdom” decided the EU should step back from the region during the internet boom, right “when Silicon Valley was taking off and going from strength to strength.”

The thinking at the time within the tech industry, he said, was that the internet is a “different technology that moves very fast” and that “policymakers don’t understand it and can’t regulate it.”

Facebook Chairman and CEO Mark Zuckerberg arrives to testify before the House Financial Services Committee on “An Examination of Facebook and Its Impact on the Financial Services and Housing Sectors” in the Rayburn House Office Building in Washington, DC on October 23, 2019.

Mandel Ngan | AFP | Getty Images

However, some major leaders in tech have shown signs that they’re taking the DSA seriously, de Graaf said. He noted that Meta CEO Mark Zuckerberg met with Thierry Breton, the EU commissioner for internal market, to go over some of the specifics of the rules, and that X owner Elon Musk has publicly supported the DSA after meeting with Breton.

De Graaf said he’s seeing “a bit more respect and understanding for the European Union’s position, and I think that has accelerated after generative AI.”

‘Serious commitment’

X, formerly known as Twitter, had withdrawn from the EU’s voluntary guidelines for countering disinformation. There was no penalty for not participating, but X must now comply with the DSA, and Breton said after his meeting with Musk that “fighting disinformation will be a legal obligation.”

“I think, in general, we’ve seen a serious commitment of big companies also in Europe and around the world to be prepared and to prepare themselves,” de Graaf said.

The new rules require platforms with at least 45 million monthly active users in the EU to provide risk assessment and mitigation plans. They must also give vetted researchers access to inspect their services for harms and be more transparent with users about their recommendation systems, even allowing people to adjust their settings.

Timing could be a challenge. As part of their cost-cutting measures implemented early this year, many companies laid off members of their trust and safety teams.

“You ask yourself the question, will these companies still have the capacity to implement these new regulations?” de Graaf said. “We’ve been assured by many of them that in the process of layoffs, they have a renewed sense of trust and safety.”

The DSA doesn’t require that tech companies maintain a certain number of trust and safety workers, de Graaf said, just that they comply with the law. Still, he said one social media platform that he declined to name gave an answer “that was not entirely reassuring” when asked how it plans to monitor for disinformation in Poland during the upcoming October elections, as the company has only one person in the region.

That’s why the rules include transparency about what exactly the platforms are doing.

“There’s a lot we don’t know, like how these companies moderate content,” de Graaf said. “And not just their resources, but also how their decisions are made with which content will stay and which content is taken down.”


De Graaf, a Dutchman who’s married with two kids, has spent the past three decades going deep on regulatory issues for the EC. He previously worked on the Digital Services Act and Digital Markets Act, European legislation targeted at consumer protection and rights and enhancing competition.

This isn’t his first stint in the U.S. From 1997 to 2001, he worked in Washington, D.C., as “trade counsellor at the European Commission’s Delegation to the United States,” according to his bio.

For all the talk about San Francisco’s “doom loop,” de Graaf said he sees a different level of energy in the city as well as further south in Silicon Valley.

There’s still “so much dynamism” in San Francisco, he said, adding that it’s filled with “such interesting people and objective people that I find incredibly refreshing.”

“I meet very, very interesting people here in Silicon Valley and in San Francisco,” he said. “And it’s not just the companies that are kind of avant-garde as the people behind them, so the conversations you have here with people are really rewarding.”

The generative AI boom

Generative AI was a virtually foreign concept when de Graaf arrived in San Francisco last September. Now, it’s about the only topic of conversation at tech conferences and cocktail parties.

The rise and rapid spread of generative AI has led to a number of big tech companies and high-profile executives calling for regulations, citing the technology’s potential influence on society and the economy. In June, the European Parliament took a major step toward passing the EU AI Act, which would become the bloc’s package of AI regulations. It’s still a long way from becoming law.

De Graaf noted the irony in the industry’s attitude. Tech companies that have for years criticized the EU for overly aggressive regulations are now asking, “Why is it taking you so long?” de Graaf said.

“We will hopefully have an agreement on the text by the end of this year,” he said. “And then we always have these transitional periods where the industry needs to prepare, and we need to prepare. That might be two years or a year and a half.”

The rapidly changing landscape of generative AI makes it tricky for the EU to quickly formulate regulations.

“Six months ago, I think our big concern was to legislate the handful of companies — the extremely powerful, resource rich companies — that are going to dominate,” de Graaf said.

But as more powerful LLMs become freely available, the technology is spreading, making regulation more challenging: it’s no longer just a matter of dealing with a few big companies. De Graaf has been meeting with local universities like Stanford to learn about transparency into LLMs, how researchers can access the technology and what kind of data companies could provide to lawmakers about their software.

One proposal being floated in Europe is the idea of publicly funded AI models, so control isn’t all in the hands of big U.S. companies.

“These are questions that policymakers in the U.S. and all around the world are asking themselves,” de Graaf said. “We don’t have a crystal ball where we can just predict everything that’s happening.”

Even if there are ways to expand how AI models are developed, there’s little doubt about where the money is flowing for processing power. Nvidia, which just reported blowout earnings for the latest quarter and has seen its stock price triple in value this year, is by far the leader in providing the kind of chips needed to power generative AI systems.

“That company, they have a unique value proposition,” de Graaf said. “It’s unique not because of scale or a network effect, but because their technology is so advanced that it has no competition.”

He said that his team meets “quite regularly” with Nvidia and its policy team and they’ve been learning “how the semiconductor market is evolving.”

“That’s a useful source of information for us, and of course, where the technology is going,” de Graaf said. “They know where a lot of the industries are stepping up and are on the ball or are going to move more quickly than other industries.”



Tesla must pay portion of $329 million in damages after fatal Autopilot crash, jury says


A jury in Miami has determined that Tesla should be held partly liable for a fatal 2019 Autopilot crash, and must compensate the family of the deceased and an injured survivor a portion of $329 million in damages.

Tesla’s payout is based on $129 million in compensatory damages, and $200 million in punitive damages against the company.

The jury determined Tesla should be held 33% responsible for the fatal crash. That means the automaker would be responsible for about $42.5 million in compensatory damages. In cases like these, punitive damages are typically capped at three times compensatory damages.

The plaintiffs’ attorneys told CNBC on Friday that because punitive damages were only assessed against Tesla, they expect the automaker to pay the full $200 million, bringing total payments to around $242.5 million.
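The damages arithmetic above can be sketched in a few lines, using the figures reported from the verdict. Note that treating the punitive award as unreduced by the fault split is the plaintiffs' attorneys' reading, not a settled figure:

```python
# Figures from the Miami jury's verdict, per the article.
compensatory = 129_000_000   # total compensatory damages awarded
punitive = 200_000_000       # punitive damages, assessed against Tesla alone
tesla_fault = 0.33           # share of responsibility the jury assigned to Tesla

# Tesla's compensatory share is reduced by the comparative-fault split.
tesla_compensatory = compensatory * tesla_fault   # ~ $42.6 million

# Per the plaintiffs' attorneys, the punitive award is paid in full by Tesla.
tesla_total = tesla_compensatory + punitive       # ~ $242.6 million

print(f"Tesla compensatory share: ${tesla_compensatory:,.0f}")
print(f"Estimated total payout:   ${tesla_total:,.0f}")
```

This reproduces the article's "about $42.5 million" and "around $242.5 million" figures, with small differences due to rounding of the 33% fault share.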

Tesla said it plans to appeal the decision.

Attorneys for the plaintiffs had asked the jury to award damages based on $345 million in total damages. The trial in the Southern District of Florida started on July 14.

The suit centered around who shouldered the blame for the deadly crash in Key Largo, Florida. A Tesla owner named George McGee was driving his Model S electric sedan while using the company’s Enhanced Autopilot, a partially automated driving system.

While driving, McGee dropped his mobile phone that he was using and scrambled to pick it up. He said during the trial that he believed Enhanced Autopilot would brake if an obstacle was in the way. His Model S accelerated through an intersection at just over 60 miles per hour, hitting a nearby empty parked car and its owners, who were standing on the other side of their vehicle.

Naibel Benavides, who was 22, died on the scene from injuries sustained in the crash. Her body was discovered about 75 feet away from the point of impact. Her boyfriend, Dillon Angulo, survived but suffered multiple broken bones, a traumatic brain injury and psychological effects.

“Tesla designed Autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans,” Brett Schreiber, counsel for the plaintiffs, said in an e-mailed statement on Friday. “Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm’s way.”

Following the verdict, the plaintiffs’ families hugged each other and their lawyers, and Angulo was “visibly emotional” as he embraced his mother, according to NBC.

Here is Tesla’s response to CNBC:

“Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial.

Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash.

This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.”

The verdict comes as Musk, Tesla’s CEO, is trying to persuade investors that his company can pivot into a leader in autonomous vehicles, and that its self-driving systems are safe enough to operate fleets of robotaxis on public roads in the U.S.

Tesla shares dipped 1.8% on Friday and are now down 25% for the year, the biggest drop among tech’s megacap companies.

The verdict could set a precedent for Autopilot-related suits against Tesla. About a dozen active cases are underway involving similar claims, in which Autopilot or Tesla’s Full Self-Driving (Supervised), known as FSD, had been in use just before a fatal or injurious crash.

The National Highway Traffic Safety Administration initiated a probe in 2021 into possible safety defects in Tesla’s Autopilot systems. During the course of that investigation, Tesla made changes, including a number of over-the-air software updates.

The agency then opened a second probe, which is ongoing, evaluating whether Tesla’s “recall remedy” to resolve issues with the behavior of its Autopilot, especially around stationary first responder vehicles, had been effective.

The NHTSA has also warned Tesla that its social media posts may mislead drivers into thinking its cars are capable of functioning as robotaxis, even though owners manuals say the cars require hands-on steering and a driver attentive to steering and braking at all times.

A site that tracks Tesla-involved collisions, TeslaDeaths.com, has reported at least 58 deaths resulting from incidents where Tesla drivers had Autopilot engaged just before impact.



Crypto wobbles into August as Trump’s new tariffs trigger risk-off sentiment


A screen showing the price of various cryptocurrencies against the US dollar displayed at a Crypto Panda cryptocurrency store in Hong Kong, China, on Monday, Feb. 3, 2025. 

Lam Yik | Bloomberg | Getty Images

The crypto market slid Friday after President Donald Trump unveiled his modified “reciprocal” tariffs on dozens of countries.

The price of bitcoin showed relative strength, hovering at the flat line while ether, XRP and Binance Coin fell 2% each. Overnight, bitcoin dropped to a low of $114,110.73.

The descent triggered a wave of long liquidations, which force traders to sell their assets at market price to settle their debts, pushing prices lower. Bitcoin saw $172 million in liquidations across centralized exchanges in the past 24 hours, according to CoinGlass, and ether saw $210 million.
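The mechanics behind those liquidations can be sketched simply: a leveraged long posts margin, and once the price falls far enough to exhaust that margin, the exchange force-sells the position, adding to the downward pressure. A minimal sketch with hypothetical numbers (real exchanges also apply maintenance margins, funding rates and fees):

```python
# Simplified model of where a leveraged long gets liquidated.
# All numbers are hypothetical illustrations, not actual market data.
entry_price = 118_000            # price at which the long was opened
leverage = 10                    # 10x leveraged position
margin = entry_price / leverage  # collateral posted per unit

# With 10x leverage, a ~10% adverse move wipes out the margin,
# so the exchange force-sells (liquidates) the position near:
liquidation_price = entry_price * (1 - 1 / leverage)

print(f"approximate liquidation price: ${liquidation_price:,.0f}")
```

Each forced sale pushes the price toward the next trader's liquidation level, which is why a moderate drop can cascade into hundreds of millions of dollars in liquidations.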

Crypto-linked stocks suffered deeper losses. Coinbase led the way, down 15% following its disappointing second-quarter earnings report. Circle fell 4%, Galaxy Digital lost 2%, and ether treasury company Bitmine Immersion was down 8%. Bitcoin proxy MicroStrategy was down by 5%.

Bitcoin falls below $115,000

The stock moves came amid a new wave of risk-off sentiment after President Trump issued new tariffs ranging between 10% and 41%, triggering worries about rising inflation and the Federal Reserve’s ability to cut interest rates. In periods of broad-based derisking, crypto tends to get hit as investors pull out of the most speculative and volatile assets. Technical resilience and institutional demand for bitcoin and ether are helping support their prices.

“After running red hot in July, this is a healthy strategic cooldown. Markets aren’t reacting to a crisis, they’re responding to the lack of one,” said Ben Kurland, CEO at crypto research platform DYOR. “With no new macro catalyst on the horizon, capital is rotating out of speculative assets and into safer ground … it’s a calculated pause.”

Crypto is coming off a winning month but could soon hit the brakes amid the new macro uncertainty, and in a month usually characterized by lower trading volumes and increased volatility. Bitcoin gained 8% in July, according to Coin Metrics, while ether surged more than 49%.

Ether ETFs saw more than $5 billion in inflows in July alone (with just a single day of outflows, $1.8 million on July 2), bringing their total cumulative inflows to $9.64 billion to date. Bitcoin ETFs saw $114 million in outflows in the final trading session of July, bringing their monthly inflows to about $6 billion out of a cumulative $55 billion.


Google has dropped more than 50 DEI-related organizations from its funding list


Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California, on May 20, 2025.

David Paul Morris | Bloomberg | Getty Images

Google has purged more than 50 organizations related to diversity, equity and inclusion, or DEI, from a list of organizations that the tech company provides funding to, according to a new report.

The company has removed a total of 214 groups from its funding list while adding 101, according to a new report from tech watchdog organization The Tech Transparency Project. The watchdog group cites the most recent public list of organizations that receive the most substantial contributions from Google’s U.S. Government Affairs and Public Policy team.

The largest category of purged groups was DEI-related, with a total of 58 groups removed from Google’s funding list, TTP found. The dropped groups had mission statements that included words such as “diversity,” “equity,” “inclusion,” “race,” “activism,” and “women.” Those are also terms that Trump administration officials have reportedly told federal agencies to limit or avoid.

In response to the report, Google spokesperson José Castañeda told CNBC that the list reflects contributions made in 2024 and that it does not reflect all contributions made by other teams within the company.

“We contribute to hundreds of groups from across the political spectrum that advocate for pro-innovation policies, and those groups change from year to year based on where our contributions will have the most impact,” Castañeda said in an email.

Organizations that were removed from Google’s list include the African American Community Service Agency, which seeks to “empower all Black and historically excluded communities”; the Latino Leadership Alliance, which is dedicated to “race equity affecting the Latino community”; and Enroot, which creates out-of-school experiences for immigrant kids. 

The funding purge is the latest development as Google has walked back some of its DEI commitments over the last couple of years. The pullback came amid cost cutting to prioritize investments in artificial intelligence technology, as well as a changing political and legal landscape as national anti-DEI policies mount.

Over the past decade, Silicon Valley and other industries used DEI programs to root out bias in hiring, promote fairness in the workplace and advance the careers of women and people of color — demographics that have historically been overlooked in the workplace.

However, the U.S. Supreme Court’s 2023 decision to end affirmative action at colleges led to additional backlash against DEI programs in conservative circles.

President Donald Trump signed an executive order upon taking office in January to end the government’s DEI programs and directed federal agencies to combat what the administration considers “illegal” private-sector DEI mandates, policies and programs. Shortly after, Google’s Chief People Officer Fiona Cicconi told employees that the company would end DEI-related hiring “aspirational goals” due to new federal requirements and Google’s categorization as a federal contractor.

Despite DEI becoming such a divisive term, many companies are continuing the work but using different language or rolling the efforts under less-charged terminology, like “learning” or “hiring.”

Even Google CEO Sundar Pichai affirmed the importance of diversity in the company’s workforce at an all-hands meeting in March.

“We’re a global company, we have users around the world, and we think the best way to serve them well is by having a workforce that represents that diversity,” Pichai said at the time.

One of the groups dropped from Google’s contributions list is the National Network to End Domestic Violence, which provides training, assistance, and public awareness campaigns on the issue of violence against women, the TTP report found. The group had been on Google’s list of funded organizations for at least nine years and continues to name the company as one of its corporate partners.

Google said it still gave $75,000 to the National Network to End Domestic Violence in 2024 but did not say why the group was removed from the public contributions list.
