A Tesla Model Y equipped with the FSD system, with three front-facing cameras under the windshield near the rearview mirror.

Mark Leong | The Washington Post | Getty Images

Tesla drivers in the U.S. were involved in accidents at a higher rate than drivers of any other brand of vehicle over the past year, according to a new study of 30 automotive brands by LendingTree.

The researchers analyzed quotes from people looking to insure their own vehicles, and did not include accident or incident data involving drivers of rental cars, a spokesperson for LendingTree told CNBC by email on Tuesday.

The study said, “It’s hard to nail down why certain brands may have higher accident rates than others. However, there are indications that certain types of vehicles attract riskier drivers than others.”

With 24 accidents per 1,000 drivers from mid-November 2022 to mid-November 2023, Tesla drivers had the worst accident rate in the U.S., followed by Ram drivers at about 23 accidents and Subaru drivers at about 21 accidents per 1,000 drivers over the year.

By contrast, drivers of Pontiac, Mercury and Saturn vehicles were all involved in fewer than 10 accidents per 1,000 drivers during the period of the study.

BMW drivers were the most likely to engage in driving under the influence, the researchers found. They were involved in about 3 DUIs per 1,000 drivers in a year, about twice the rate of DUIs among Ram drivers, who were the second worst drivers in this regard.

For driving incidents overall, which included not only accidents but also DUIs, speeding, and other citations, Ram drivers had the highest incident rate, while Tesla drivers had the second-highest incident rate in the U.S.

Accidents, DUIs, speeding and other citations can all lead to higher insurance rates for drivers. LendingTree found that one speeding ticket can bump up the price of vehicle insurance by 10% to 20%, that accidents can increase rates by around 40%, and that DUIs can lead to a rate increase of 60% or more.

The LendingTree findings about drivers with the highest rates of accidents and incidents by vehicle brand followed an Autopilot software recall by Tesla in the U.S. that affects some 2 million of the company’s electric vehicles.

Tesla EVs come standard with an advanced driver assistance system (ADAS) marketed as Autopilot. In the U.S., the company also sells more extensive driver assistance packages called Enhanced Autopilot and Full Self-Driving (FSD). Those who pay for FSD can also test software features on public roads that are not yet fully debugged.

Tesla’s ADAS technology is meant to help drivers with steering, acceleration and braking. CEO Elon Musk claimed in 2021 that a Tesla driver using Autopilot was about 10 times less likely to crash than a driver of the average car. While Tesla publishes its own safety reports, the company has not allowed third-party researchers to evaluate its data to confirm or debunk such claims.

Musk has also touted Tesla’s systems as if they are already, or will soon be, safe to use hands-free — yet Autopilot and Full Self-Driving systems still require Tesla drivers to remain attentive to the road and ready to steer or brake in response at all times.

A two-year investigation by the National Highway Traffic Safety Administration (or NHTSA) found that Tesla’s Autosteer feature, which is part of Autopilot and FSD, had safety defects that may cause an “increased risk of a collision.” NHTSA said it found that Tesla drivers can too easily misuse the cars’ Autosteer feature and may not even know whether it is engaged or switched off.

According to filings with the federal vehicle safety regulator, Tesla did not concur with NHTSA’s findings but agreed to conduct a voluntary software recall, and promised to make safety improvements to Autosteer with “over-the-air” updates. The updated software will nag drivers to pay attention to the road more often, and lock drivers out of using Autopilot if Tesla’s systems detect irresponsible use.

Tesla did not respond to a request for comment about the LendingTree study and why the accident and incident rates may have been so high among Tesla drivers in the U.S. over the past year.

Read the full LendingTree study of the best and worst drivers in the U.S. by auto brand here.


EA going private in $55 billion deal that will pay shareholders $210 a share


Electronic Arts to be taken private by PIF, Silver Lake and Affinity Partners for $55B

Electronic Arts said Monday that it has agreed to be acquired by the Public Investment Fund of Saudi Arabia, Silver Lake and Affinity Partners in an all-cash deal worth $55 billion.

Shareholders of the company will receive $210 per share in cash.

EA stock climbed 5% Monday. Shares gained about 15% Friday, closing at $193.35, after the Wall Street Journal reported that the company was nearing a deal to go private.

PIF is rolling over its existing 9.9% stake in the company and will, by far, be the majority investor in the new structure, people close to the deal told CNBC’s David Faber.

Affinity CEO Jared Kushner, who is President Donald Trump’s son-in-law, touted EA’s “bold vision for the future” in a release announcing the deal.

“I’ve admired their ability to create iconic, lasting experiences, and as someone who grew up playing their games – and now enjoys them with his kids – I couldn’t be more excited about what’s ahead,” Kushner said in a statement.

The group of companies is making a total $36 billion equity investment, with $20 billion in debt financing from JPMorgan, according to the release. JPMorgan was brought in a couple of weeks ago, people familiar with the deal told Faber.


The take-private deal for the maker of popular games like Battlefield, The Sims and the Madden series of NFL games, among others, is set to be the largest leveraged buyout in Wall Street history.

In a note to employees, EA CEO Andrew Wilson said he is “excited to continue as CEO.”

“Our new partners bring deep experience across sports, gaming, and entertainment,” he wrote. “They are committed with conviction to EA – they believe in our people, our leadership, and the long-term vision we are now building together.”

The deal is expected to close in the first quarter of fiscal year 2027.

There is a 45-day window to allow for other proposals, people familiar with the terms of the deal told Faber. The deal talks started in the spring, the people said.

Silver Lake, which is led by co-CEOs Egon Durban and Greg Mondre, is also one of the key investors in Trump’s push to get TikTok under U.S. control.

CNBC has reached out to EA for further comment and information on the deal.


EA year-to-date stock chart.



A look at OpenAI’s tangled web of dealmaking


OpenAI CEO Sam Altman speaks to media following a Q&A at the OpenAI data center in Abilene, Texas, U.S., Sept. 23, 2025.

Shelby Tauber | Reuters

OpenAI CEO Sam Altman is everywhere.

His artificial intelligence startup, now valued at $500 billion, has been inking deals valued in the tens to hundreds of billions of dollars with infrastructure partners, even as it continues to burn mounds of cash.

Those expenditures are driving the market.

The Nasdaq and S&P 500 rose to record highs this week after Nvidia agreed to invest up to $100 billion in OpenAI. That followed a $300 billion deal between OpenAI and Oracle in July as part of the Stargate program, a $500 billion infrastructure project that’s also being funded by SoftBank.

Its commitments don’t stop there. CoreWeave on Thursday said it’s agreed to provide OpenAI up to $22.4 billion in AI infrastructure, an increase from the $11.9 billion it initially announced in March. Earlier this month, chipmaker Broadcom said it had secured a new $10 billion customer, and analysts were quick to point to OpenAI. 

While OpenAI says that scaling is key to driving innovation and future AI breakthroughs, investors and analysts are beginning to raise their eyebrows over the mind-boggling sums, as well as OpenAI’s reliance on an increasingly interconnected web of infrastructure partners.

OpenAI took a $350 million stake in CoreWeave ahead of its IPO in March, for instance. Nvidia formalized its financial stake in OpenAI by participating in a $6.6 billion funding round in October. Oracle is spending about $40 billion on Nvidia chips to power one of OpenAI’s Stargate data centers, according to a May report from the Financial Times. Earlier this month, CoreWeave disclosed an order worth at least $6.3 billion from Nvidia. 

And through its $100 billion investment in OpenAI, Nvidia will get equity in the startup and earn revenue at the same time.

OpenAI is only expected to generate $13 billion in revenue this year, according to the company’s CFO Sarah Friar. She told CNBC that technology booms require bold bets on infrastructure. 

“When the internet was getting started, people kept feeling like, ‘Oh, we’re over-building, there’s too much,'” Friar said. “Look where we are today, right?”

Altman told CNBC in August that he’s willing to run the company at a loss in order to prioritize growth and its investments. 

‘Troubling signal’

But some analysts are raising red flags, arguing that OpenAI’s deal with Nvidia is reminiscent of vendor financing patterns that helped burst the dot-com bubble in the early 2000s.

Nvidia has been the biggest winner of the AI boom so far because it produces the graphics processing units (GPUs) that are necessary to train models and run large AI workloads. Nvidia’s investment in OpenAI, which will be paid out in installments over several years, will help the startup build out data centers that are based around its GPUs. 

“You don’t have to be a skeptic about AI technology’s promise in general to see this announcement as a troubling signal about how self-referential the entire space has become,” Bespoke Investment Group wrote in a note to clients on Tuesday. “If NVDA has to provide the capital that becomes its revenues in order to maintain growth, the whole ecosystem may be unsustainable.” 

Sam Altman, CEO of OpenAI (left), and Jensen Huang, CEO of Nvidia.

Reuters

Peter Boockvar, chief investment officer at One Point BFG Wealth Partners, said names of companies from the late 1990s were ringing in his ears after the OpenAI-Nvidia deal was announced.

A key difference, however, is that this transaction is “so much bigger in terms of dollars,” he wrote in a note.

“For this whole massive experiment to work without causing large losses, OpenAI and its peers now have got to generate huge revenues and profits to pay for all the obligations they are signing up for and at the same time provide a return to its investors,” Boockvar said.

An OpenAI spokesperson referred CNBC to comments from Altman and Friar this week, adding that the company is pursuing “a once-in-a-century opportunity that demands ambition equal to the moment.”

The total amount of demand for compute could reach a staggering 200 gigawatts by 2030, according to Bain & Company’s 2025 Technology Report. Building enough data centers to meet this anticipated demand would cost about $500 billion a year, meaning AI companies would have to generate a combined $2 trillion in annual revenue to cover those costs.

Even if companies throw their whole weight behind investing in the cloud and data centers, “the amount would still fall $800 billion short of the revenue needed to fund the full investment,” Bain said.
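To make the arithmetic behind those Bain figures concrete, here is a minimal back-of-the-envelope sketch. It simply restates the numbers cited above; the roughly $1.2 trillion in revenue implied as already accounted for is inferred from the $2 trillion requirement and the $800 billion gap, not a figure quoted directly from the report.

```python
# Back-of-the-envelope restatement of the Bain & Company figures cited above.
# The "implied covered" revenue is inferred (needed minus shortfall), not quoted from the report.

annual_buildout_cost = 500e9   # ~$500 billion per year to build data centers for ~200 GW of demand
revenue_needed = 2e12          # ~$2 trillion in combined annual AI revenue needed to cover those costs
shortfall = 800e9              # Bain's estimated gap even with full cloud/data center reinvestment

implied_covered = revenue_needed - shortfall
print(f"Implied revenue already accounted for: ${implied_covered / 1e12:.1f} trillion")
print(f"Shortfall as a share of revenue needed: {shortfall / revenue_needed:.0%}")
```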

There’s a clear uphill battle ahead, but OpenAI’s Altman brushed off concerns on Tuesday, rejecting the idea that the infrastructure spending spree is overkill.

“This is what it takes to deliver AI,” Altman told CNBC. “Unlike previous technological revolutions or previous versions of the internet, there’s so much infrastructure that’s required, and this is a small sample of it.”

–CNBC’s Yun Li and MacKenzie Sigalos contributed to this report

WATCH: OpenAI’s Sam Altman defends Stargate expansion as demand for AI soars



5 takeaways from CNBC’s investigation into ‘nudify’ apps and sites


Jessica Guistolise, Megan Hurley and Molly Kelley talk with CNBC in Minneapolis, Minnesota, on July 11, 2025, about fake pornographic images and videos depicting their faces made by their mutual friend Ben using AI site DeepSwap.

Jordan Wyatt | CNBC

In the summer of 2024, a group of women in the Minneapolis area learned that a male friend used their Facebook photos mixed with artificial intelligence to create sexualized images and videos.   

Using an AI site called DeepSwap, the man secretly created deepfakes of the friends and over 80 women in the Twin Cities region. The discovery created emotional trauma and led the group to seek the help of a sympathetic state senator.

As a CNBC investigation shows, the rise of “nudify” apps and sites has made it easier than ever for people to create nonconsensual, explicit deepfakes. Experts said these services are all over the internet, with many being promoted via Facebook ads, available for download on the Apple and Google app stores and easily accessed using simple web searches.

“That’s the reality of where the technology is right now, and that means that any person can really be victimized,” said Haley McNamara, senior vice president of strategic initiatives and programs at the National Center on Sexual Exploitation.

CNBC’s reporting shines a light on the legal quagmire surrounding AI, and how a group of friends became key figures in the fight against nonconsensual, AI-generated porn.

Here are five takeaways from the investigation.

The women lack legal recourse

Because the women weren’t underage and the man who created the deepfakes never distributed the content, there was no apparent crime.

“He did not break any laws that we’re aware of,” said Molly Kelley, one of the Minnesota victims and a law student. “And that is problematic.”

Now, Kelley and the women are advocating for a local bill in their state, proposed by Democratic state Senator Erin Maye Quade, intended to block nudify services in Minnesota. Should the bill become law, it would levy fines on the entities enabling the creation of the deepfakes.

Maye Quade said the bill is reminiscent of laws that prohibit peeping into windows to snap explicit photos without consent.

“We just haven’t grappled with the emergence of AI technology in the same way,” Maye Quade said in an interview with CNBC, referring to the speed of AI development.

The harm is real

Jessica Guistolise, one of the Minnesota victims, said she continues to suffer from panic and anxiety stemming from the incident last year.

Sometimes, she said, a simple click of a camera shutter can cause her to lose her breath and begin trembling, her eyes swelling with tears. That’s what happened at a conference she attended a month after first learning about the images.

“I heard that camera click, and I was quite literally in the darkest corners of the internet,” Guistolise said. “Because I’ve seen myself doing things that are not me doing things.”

Mary Anne Franks, professor at the George Washington University Law School, compared the experience to the feelings victims describe when talking about so-called revenge porn, or the posting of a person’s sexual photos and videos online, often by a former romantic partner.

“It makes you feel like you don’t own your own body, that you’ll never be able to take back your own identity,” said Franks, who is also president of the Cyber Civil Rights Initiative, a nonprofit organization dedicated to combating online abuse and discrimination.

Deepfakes are easier to create than ever

Less than a decade ago, a person would need to be an AI expert to make explicit deepfakes. Thanks to nudify services, all that’s required is an internet connection and a Facebook photo.

Researchers said new AI models have helped usher in a wave of nudify services. The models are often bundled within easy-to-use apps, so that people lacking technical skills can create the content.

And while nudify services can contain disclaimers about obtaining consent, it’s unclear whether there is any enforcement mechanism. Additionally, many nudify sites market themselves simply as so-called face-swapping tools.

“There are apps that present as playful and they are actually primarily meant as pornographic in purpose,” said Alexios Mantzarlis, an AI security expert at Cornell Tech. “That’s another wrinkle in this space.”

Nudify service DeepSwap is hard to find

The site that was used to create the content is called DeepSwap, and there’s not much information about it online.

In a press release published in July, DeepSwap used a Hong Kong dateline and included a quote from Penyne Wu, who was identified in the release as CEO and co-founder. The media contact on the release was Shawn Banks, who was listed as marketing manager. 

CNBC was unable to find information online about Wu, and sent multiple emails to the address provided for Banks, but received no response.

DeepSwap’s website currently lists “MINDSPARK AI LIMITED” as its company name, provides an address in Dublin, and states that its terms of service are “governed by and construed in accordance with the laws of Ireland.”

However, in July, the same DeepSwap page had no mention of Mindspark, and references to Ireland instead said Hong Kong. 
