Two Waymo autonomous vehicles drive themselves down Central Avenue in Phoenix, Arizona, U.S., March 18, 2024. 

Caitlin O’Hara | Reuters

Uber Eats customers may start receiving orders delivered by a Waymo self-driving car for the first time in the Phoenix metropolitan area.

The new service, which rolls out on Wednesday, marks the official launch of Uber’s delivery partnership with Waymo and is part of a broader multiyear collaboration between the two companies unveiled last year. In October, Uber began offering rides in Waymo’s self-driving vehicles in Phoenix.

For Uber Eats, Phoenix is the seventh site with autonomous deliveries, but the first location where the delivery app will use Waymo’s vehicles. Uber Eats has already teamed up with robotics companies Cartken, Motional, Nuro and Serve Robotics to pilot autonomous deliveries in other markets.

A view of a Waymo delivery on the Uber Eats app.

Credit: Uber Eats x Waymo

The latest offering in Arizona is limited to Uber Eats users in Phoenix, Chandler, Mesa and Tempe, with ordering available from select merchants in those cities. An Uber spokesperson told CNBC that the list of service areas is growing, and more restaurants will be eligible for autonomous deliveries in the coming weeks.

Orders will be delivered via Waymo’s Jaguar I-PACE electric vehicles, the spokesperson said, adding that Waymo, which is owned by Alphabet, doesn’t disclose its fleet size.

Uber Eats users in areas serviced by Waymo can opt out of autonomous delivery and have their items delivered by a human courier instead. With Waymo deliveries, standard fees will apply, but customers will not be charged for tips, the Uber spokesperson said.

“The addition of food delivery to Uber’s ongoing partnership with Waymo reflects both companies’ mission to encourage zero-emission trips and unlock greater innovation for consumers and merchants in Phoenix and beyond,” Uber said in a blog post.

Uber did not say whether it plans to bring Waymo deliveries to more cities in the future. Beyond Phoenix, Waymo’s core ride-hailing service is available in parts of Los Angeles and San Francisco.



Tesla robotaxi event comes after a decade of unfulfilled promises from Elon Musk


Elon Musk, Chief Executive Officer of SpaceX and Tesla and owner of X looks on during the Milken Conference 2024 Global Conference Sessions at The Beverly Hilton in Beverly Hills, California, U.S., May 6, 2024. 

David Swanson | Reuters

With Tesla’s hotly anticipated robotaxi event just hours away, investors will soon get a glimpse of what CEO Elon Musk has called the CyberCab.

After a decade of unfulfilled promises to deliver autonomous vehicles capable of traveling reasonable distances safely without a human at the wheel, there’s a hefty dose of skepticism about what Tesla can do technologically, and about when its robotaxi might actually hit the market.

The robotaxi day, or “We, Robot,” event is scheduled to begin at 7:00 p.m. Pacific time at a Warner Bros. studio in Burbank, California, and will be livestreamed via X.

Garrett Nelson, an analyst at CFRA, cautioned in a preview on Oct. 4 that conditions at a closed course on a movie studio lot could make a Tesla robotaxi look more advanced than it would be in normal traffic and on public roads. CFRA has a hold rating on the stock.

Tesla shares dipped about 1% on Thursday to $238.77. They’re now down almost 4% for the year and more than 40% below their record reached in 2021.

The event comes a week after Tesla reported third-quarter deliveries of 462,890, lifting the number to 1.35 million for the year so far. For all of last year, Tesla reported deliveries of 1.81 million.

Bullish analysts at firms including Wedbush, ARK and RBC Capital Markets expressed optimism in their reports about the company’s ability to keep growing sales long-term, while delivering higher-tech products, including a long-delayed autonomous vehicle, humanoid robotics and other AI-driven products and services.

Gene Munster of Deepwater Asset Management told CNBC’s “Fast Money” on Wednesday that he’ll be at the event and expects to test the robotaxi.

Munster, a longtime Tesla bull, said he thinks the company will roll out robotaxis in some cities by the end of 2025. He’s also expecting Tesla to announce plans to produce an affordable EV, possibly just a stripped-down version of its Model 3, and an electric van.

He said that while he expects the stock to be down after the event, it could “make new highs” over the next two years as deliveries start to accelerate.

Tesla was once seen as a pioneer in autonomous vehicle development, but has never managed to deliver or demonstrate robotaxi technology. The company is now considered a laggard.

Alphabet’s Waymo in the U.S. and a number of Chinese firms are operating commercial robotaxi services today.

Morgan Stanley analysts wrote in a report on Wednesday that if Tesla can launch a “level 4” robotaxi, meaning it can operate without a driver at the wheel, using its current “suite of hardware and software,” it would result in a cost-per-mile advantage relative to peers.


In addition to missed deadlines, Tesla has had safety issues with its driver assistance systems, which are currently marketed as the standard Autopilot and premium Full Self-Driving (Supervised) options.

Missy Cummings, a professor at George Mason University and director of the Mason Autonomy and Robotics Center, said Tesla leaders should be able to say how they’re solving a problem known as “phantom braking,” which refers to instances when vehicles equipped with ADAS apply their brakes unexpectedly, even while driving at highway speeds, with no visible obstacles around them.

Tesla’s phantom braking problems are the subject of an ongoing investigation by the National Highway Traffic Safety Administration (NHTSA). Cummings, who previously served as a senior safety advisor to the regulator, told CNBC, “If they can’t solve phantom braking for a level 2 car, they can’t solve it for level 4 or 5 vehicle.” Level 2 refers to vehicles with driver assistance systems that still require an attentive human driver.

According to data tracked by NHTSA starting in 2021, there have been 1,399 crashes in which Tesla driver assistance systems were engaged within 30 seconds of the crash, and 31 of these collisions resulted in reported fatalities.

Sam Abuelsamid, an analyst at Guidehouse Insights, said Musk or other Tesla executives should be able to say exactly how they plan for their vehicles to operate in different weather, such as fog, rain and snow, and in difficult lighting conditions, such as dark tunnels.

He also wants Tesla executives to say whether they will accept full liability for the operation of these vehicles, which he calls “table stakes for a true robotaxi without human controls.”

Finally, Abuelsamid wants to know if Tesla plans to own and operate its robotaxis or lease or sell them to consumers and fleet operators.

“Many companies have made progress on the automated driving technology side,” Abuelsamid said. “But they’ve faltered when it came to figuring out a business model that could be profitable. Tesla has a lot of challenges to overcome and I want to know how all the pieces fall into place.”

WATCH: Will be another five years before we see a ‘Waymo-like’ car from Tesla, says Roth MKM’s Craig Irwin


Microsoft announces new AI tools to help ease workload for doctors and nurses


Microsoft on Thursday announced new health-care data and artificial intelligence tools, including a collection of medical imaging models, a health-care agent service and an automated documentation solution for nurses. 

The tools aim to help health-care organizations build AI applications quicker and save clinicians time on administrative tasks, a major cause of industry burnout. Nurses spend as much as 41% of their time on documentation, according to a report from the Office of the Surgeon General. 

“By integrating AI into health care, our goal is to reduce the strain on medical staff, foster collective health team collaboration and enhance the overall efficiency of health-care systems across the country,” Mary Varghese Presti, vice president of portfolio evolution and incubation at Microsoft Health and Life Sciences, said in a prerecorded briefing with reporters.

The new tools are the latest example of Microsoft’s efforts to establish itself as a leader in health-care AI. Last October, the company unveiled a series of health features across its Azure cloud and Fabric analytics platform. It also acquired Nuance Communications, which offers speech-to-text AI solutions for health care and other sectors, in a $16 billion deal in 2021.

Many of the solutions Microsoft announced on Thursday are in the early stages of development or only available in preview. Health-care organizations will test and validate them before the company rolls them out more broadly. Microsoft declined to share what these new tools will cost.

Health-care AI models 

Microsoft’s model catalog

Courtesy of Microsoft

Roughly 80% of hospital and health system visits include an imaging exam because doctors often rely on images to help treat patients.

Microsoft is launching a collection of open-source multimodal AI models that can analyze data types beyond just text, such as medical images, clinical records and genomic data. Health-care organizations can use the models to build new applications and tools.

For example, digitizing a single pathology slide can require more than a gigabyte of storage, so many existing AI pathology models have been trained on small pieces of slides at a time. Microsoft and Providence Health & Services built a whole-slide model that improves mutation prediction and cancer subtyping, according to a paper published in the peer-reviewed journal Nature.

Now, health systems can build on it and fine-tune it to meet their needs. 
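To make the storage constraint concrete, here is a minimal sketch of the conventional patch-based workaround described above, in which a huge whole-slide image is cut into small tiles a model can handle. It assumes the open-source openslide-python library; the tile size and file name are illustrative, and this is not Microsoft's or Providence's pipeline.

```python
import openslide

TILE = 256  # illustrative patch edge length, in pixels

def iter_tiles(path: str):
    """Yield (x, y, tile) patches from the full-resolution level of a slide."""
    slide = openslide.OpenSlide(path)
    width, height = slide.dimensions  # level-0 (full-resolution) size
    for y in range(0, height - TILE + 1, TILE):
        for x in range(0, width - TILE + 1, TILE):
            # read_region returns an RGBA PIL image; drop the alpha channel
            tile = slide.read_region((x, y), 0, (TILE, TILE)).convert("RGB")
            yield x, y, tile

# for x, y, tile in iter_tiles("slide.svs"):
#     ...score each small patch separately with a patch-level model...
```

A whole-slide foundation model removes the need to reason about each tile in isolation, which is what makes it useful for slide-level tasks such as cancer subtyping.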

“Getting a whole-slide foundation model for pathology has been a challenge in the past … and now we’re actually able to do it,” Sara Vaezy, chief strategy and digital officer at Providence, told CNBC in an interview. “It was really sort of a game changer.” 

The models are available in the model catalog within Azure AI Studio, which serves as Microsoft’s generative AI development hub. 

Health-care agent service

Microsoft’s health-care agent service.

Courtesy of Microsoft

Microsoft also announced a new way for health systems to build AI agents.

AI agents vary in complexity, but they can help users answer questions, automate processes and perform specific tasks. 

Through Microsoft Copilot Studio, these organizations can create agents equipped with health-care-specific safeguards. When an answer contains a reference to clinical evidence, for instance, the source is shown, and a note indicates if the answer is AI-generated. Fabrications and omissions are also flagged, Microsoft said. 

For example, a health-care organization could build an AI agent to help doctors identify relevant clinical trials for a patient. Microsoft said a physician could type the question, “What clinical trials for a male 55-year-old with diabetes and interstitial lung disease?” and receive a list of potential options. It would save the doctor the time and effort of finding each trial. 
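As a toy illustration of the structured matching such an answer requires (this is not Microsoft's agent service; the trial data and field names are invented), the agent ultimately has to filter trials against eligibility criteria such as age, sex and diagnosed conditions:

```python
from dataclasses import dataclass, field

@dataclass
class Trial:
    name: str
    sex: str                      # "male", "female" or "any"
    min_age: int
    max_age: int
    conditions: set[str] = field(default_factory=set)

def matching_trials(trials, sex: str, age: int, conditions: set[str]):
    """Return trials whose eligibility criteria cover the patient."""
    return [
        t for t in trials
        if t.sex in (sex, "any")
        and t.min_age <= age <= t.max_age
        and conditions >= t.conditions  # patient has every required condition
    ]

trials = [
    Trial("Trial A", "any", 40, 70, {"diabetes", "interstitial lung disease"}),
    Trial("Trial B", "female", 18, 65, {"diabetes"}),
]
print(matching_trials(trials, "male", 55, {"diabetes", "interstitial lung disease"}))
# -> only Trial A matches
```

In practice, a language model would first parse the physician's free-text question into structured criteria like these before any filtering happens.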

AI agents that can help patients answer basic questions have been popular among the health systems that have already begun testing the service, Hadas Bitran, general manager of health AI at Microsoft Health and Life Sciences, said in a Q&A with reporters. Agents that can help doctors answer questions about recent guidelines and patients’ history are also common, she added.

Microsoft’s health-care agent service is available in a preview capacity starting Thursday.

Bringing automated documentation to nurses


In August, Microsoft announced that the next phase of its partnership with Epic Systems would be dedicated to building an AI-powered documentation tool for nurses, and the company detailed those plans on Thursday. 

Epic is a health-care software vendor that houses the electronic health records of more than 280 million people in the U.S. It has a yearslong relationship with Microsoft. 

Microsoft’s Nuance already offers an automated documentation tool for doctors called DAX Copilot, which it unveiled last year. It allows doctors to record their visits with patients, with the patient’s consent, and AI automatically transforms the recordings into clinical notes and summaries.

Ideally, this means doctors don’t have to spend time typing out these notes themselves every time they see a patient. 

The technology has exploded in popularity this year. Nuance announced in January that DAX Copilot was generally available within Epic’s electronic health record, a coveted stamp of approval within the health-care industry. Integrating a tool like DAX Copilot directly into doctors’ EHR workflow means they won’t need to switch apps to access it, which helps save time and reduce administrative workload.

But so far, DAX Copilot has only been available to doctors. Microsoft said that’s changing. It’s building a similar tool optimized for nurses.

“The nursing workflow is very different from that of physicians, and any solution developed for nurses needs to integrate with the way they work,” Presti said during the briefing. “Our team has spent hours shadowing nurses during their shifts to see how they carry out their tasks and to discover where the greatest points of friction exist throughout their day.”  

Microsoft is working with organizations like Stanford Health Care, Northwestern Medicine and Tampa General Hospital to develop it.



AMD launches AI chip to rival Nvidia’s Blackwell


AMD launched a new artificial-intelligence chip on Thursday that is taking direct aim at Nvidia’s data center graphics processors, known as GPUs.

The Instinct MI325X, as the chip is called, will start production before the end of 2024, AMD said Thursday during an event announcing the new product. If AMD’s AI chips are seen by developers and cloud giants as a close substitute for Nvidia’s products, it could put pricing pressure on Nvidia, which has enjoyed roughly 75% gross margins while its GPUs have been in high demand over the past year.

Advanced generative AI such as OpenAI’s ChatGPT requires massive data centers full of GPUs in order to do the necessary processing, which has created demand for more companies to provide AI chips.

In the past few years, Nvidia has dominated the data center GPU market, with AMD historically in second place. Now, AMD is aiming to take share from its Silicon Valley rival, or at least to capture a big chunk of a market that it says will be worth $500 billion by 2028.

“AI demand has actually continued to take off and actually exceed expectations. It’s clear that the rate of investment is continuing to grow everywhere,” AMD CEO Lisa Su said at the event.

AMD didn’t reveal new major cloud or internet customers for its Instinct GPUs at the event, but the company has previously disclosed that both Meta and Microsoft buy its AI GPUs and that OpenAI uses them for some applications. The company also did not disclose pricing for the Instinct MI325X, which is typically sold as part of a complete server.

With the launch of the MI325X, AMD is accelerating its product roadmap, moving to an annual release cadence for new chips to better compete with Nvidia and take advantage of the boom in AI chips. The new AI chip is the successor to the MI300X, which started shipping late last year. AMD’s 2025 chip will be called MI350, and its 2026 chip will be called MI400, the company said.

The MI325X’s rollout will pit it against Nvidia’s upcoming Blackwell chips, which Nvidia has said will start shipping in significant quantities early next year.

A successful launch for AMD’s newest data center GPU could draw interest from investors that are looking for additional companies that are in line to benefit from the AI boom. AMD is only up 20% so far in 2024 while Nvidia’s stock is up over 175%. Most industry estimates say Nvidia has over 90% of the market for data center AI chips.

AMD stock fell 3% during trading on Thursday.

AMD’s biggest obstacle in taking market share is that its rival’s chips use their own programming language, CUDA, which has become standard among AI developers. That essentially locks developers into Nvidia’s ecosystem.

In response, AMD this week said that it has been improving its competing software, called ROCm, so that AI developers can more easily switch more of their AI models over to AMD’s chips, which it calls accelerators.
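One concrete example of what that portability looks like in practice: PyTorch's ROCm builds expose the same torch.cuda interface as its CUDA builds, so device-agnostic model code can often run on AMD accelerators unchanged. A minimal sketch, assuming a ROCm build of PyTorch; the model here is a placeholder, not any particular workload.

```python
import torch

# On a ROCm build of PyTorch with an AMD GPU present,
# torch.cuda.is_available() returns True even though the
# backend underneath is ROCm rather than CUDA.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = torch.nn.Linear(1024, 1024).to(device)  # placeholder model
x = torch.randn(8, 1024, device=device)

with torch.no_grad():
    y = model(x)

print(y.device)  # "cuda:0" on either vendor's GPU, "cpu" otherwise
```

Whether a given model actually runs well still depends on ROCm supporting the kernels it needs, which is the software gap AMD says it has been closing.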

AMD has framed its AI accelerators as more competitive for inference, the use cases where trained AI models create content or make predictions, than for training, when a model processes terabytes of data to improve. That’s partially due to the advanced memory AMD is using on its chip, it said, which allows it to serve Meta’s Llama AI model faster than some Nvidia chips.

“What you see is that MI325 platform delivers up to 40% more inference performance than the H200 on Llama 3.1,” said Su, referring to Meta’s large language model.

Taking on Intel, too

While AI accelerators and GPUs have become the most intensely watched part of the semiconductor industry, AMD’s core business has been central processors, or CPUs, which lie at the heart of nearly every server in the world.

AMD’s data center sales in the June quarter more than doubled from a year earlier to $2.8 billion, with AI chips accounting for only about $1 billion, the company said in July.

AMD takes about 34% of total dollars spent on data center CPUs, the company said. That’s still less than Intel, which remains the market leader with its Xeon line of chips. AMD is aiming to change that with a new line of CPUs, called EPYC 5th Gen, which it also announced on Thursday.

Those chips come in a number of different configurations ranging from a low-cost and low-power 8-core chip that costs $527 to 192-core, 500-watt processors intended for supercomputers that cost $14,813 per chip.

The new CPUs are particularly good for feeding data into AI workloads, AMD said. Nearly all GPUs require a CPU on the same system in order to boot up the computer.

“Today’s AI is really about CPU capability, and you see that in data analytics and a lot of those types of applications,” Su said.

WATCH: Tech trends are meant to play out over years, we’re still learning with AI, says AMD CEO Lisa Su
