OpenAI’s official “blueprint for U.S. AI infrastructure” involves artificial intelligence economic zones, tapping the U.S. Navy’s nuclear power expertise and government projects funded by private investors, according to a document viewed by CNBC that the company plans to present on Wednesday in Washington, D.C.
The blueprint also outlines a North American AI alliance to compete with China’s initiatives and a National Transmission Highway Act “as ambitious as the 1956 National Interstate and Defense Highways Act.”
In the document, OpenAI outlines a rosy future for AI, calling it “as foundational a technology as electricity, and promising similarly distributed access and benefits.” The company wrote that investment in U.S. AI will lead to tens of thousands of jobs, GDP growth, a modernized grid that includes nuclear power, a new group of chip manufacturing facilities and billions of dollars in investment from global funds.
Now that Donald Trump is President-elect, OpenAI has made clear that it plans to work with the new administration on AI policy, and Wednesday’s presentation lays out its proposals.
Trump plans to repeal President Joe Biden’s executive order on AI. His campaign platform states that the order “hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology” and that “in its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”
OpenAI’s presentation outlines AI economic zones co-created by state and federal governments “to give states incentives to speed up permitting and approvals for AI infrastructure.” The company envisions building new solar arrays and wind farms and getting unused nuclear reactors cleared for use.
“States that provide subsidies or other support for companies launching infrastructure projects could require that a share of the new compute be made available to their public universities to create AI research labs and developer hubs aligned with their key commercial sectors,” OpenAI wrote.
OpenAI also wrote that it foresees a “National Transmission Highway Act” that could expand power, fiber connectivity and natural gas pipeline construction. The company wrote it needs “new authority and funding to unblock the planning, permitting, and payment for transmission,” and that existing procedures aren’t keeping pace with AI-driven demand.
The blueprint says, “The government can encourage private investors to fund high-cost energy infrastructure projects by committing to purchase energy and other means that lessen credit risk.”
A North American AI Alliance and investment in more U.S. data centers
OpenAI also foresees a North American AI alliance of Western countries that could eventually expand to a global network, such as a “Gulf Cooperation Council with the UAE and others in that region.”
The company also outlined its vision for nuclear power, writing that although China “has built as much nuclear power capacity in 10 years as the US built in 40,” the U.S. Navy operates about 100 small modular reactors (SMRs) to power naval submarines, and leveraging the Navy’s expertise could lead to building more civilian SMRs.
OpenAI’s infrastructure blueprint aligns with what Chris Lehane, OpenAI’s head of global policy, told CNBC in a recent interview. He sees the Midwest and Southwest as potential core areas for AI investment.
“Parts of the country that have been ‘left behind,’ as we enter the digital age, where so much of the economics and particularly economic benefits flow to the two coasts… Areas like the midwest and the southwest are going to be the types of places where you have the land and ability to do wind farms and to do solar facilities, and potentially to do some part of the energy transition — potentially do nuclear facilities,” Lehane said.
The infrastructure, Lehane explained, is contingent on the U.S. maintaining a lead over China in AI.
“[In] Kansas and Iowa, which sits on top of an enormous amount of agricultural data, think about standing up a data center,” Lehane said. “One gigawatt, which is a lot, taking, you know, 200-250 megawatts, a quarter of that, and doing something with their public university systems to create an agricultural-based LLM or inference model that would really serve their community but also make them a center of agricultural AI.”
Lehane cited an estimate that the U.S. will need 50 gigawatts of power by 2030 to support the AI industry’s needs and to compete against China, which has approved 20 nuclear reactors over the past two years and 11 more for next year.
“We don’t have a choice,” Lehane said. “We do have to compete with that.”
President Donald Trump called TSMC’s newly announced $100 billion U.S. investment a “tremendous move by the most powerful company in the world.” The new capital brings TSMC’s total investment in the U.S. to $165 billion and will go toward building five new fabrication facilities in Arizona.
The announcement from TSMC, which supplies semiconductors to the likes of Nvidia and Apple for artificial intelligence use, supports the Trump administration’s ongoing efforts to make the U.S. an artificial intelligence hub.
Last month, Trump announced a multibillion-dollar AI infrastructure project with Oracle, OpenAI and SoftBank. He’s also made numerous calls to bring semiconductor production back to the U.S. after much of the manufacturing industry moved abroad. Advancing semiconductor production in the U.S. is a matter of economic and national security, Trump said Monday.
TSMC had already made strides to expand its footprint in the U.S. prior to Monday’s announcement. The company committed $12 billion in 2020 to build its first U.S. chip factory in Arizona, later raising that investment to $65 billion with a third factory. It has also gained U.S. government support through a $6.6 billion subsidy from the Commerce Department.
Microsoft is giving its health-care artificial intelligence tools a makeover.
The company on Monday unveiled a new voice-activated AI assistant that combines capabilities from its dictation solution, Dragon Medical One, and ambient listening solution, DAX Copilot, into one tool.
“Dragon Copilot” will be able to help doctors quickly pull information from medical sources and automatically draft clinical notes, referral letters, post-visit summaries and more, according to the company. It’s Microsoft’s latest effort to help health-care workers cut down their daunting clerical workloads, which are a major source of burnout in the industry.
Clinicians spend nearly 28 hours a week on administrative tasks such as documentation, according to an October study from Google Cloud.
“Through this technology, clinicians will have the ability to focus on the patient rather than the computer, and this is going to lead to better outcomes and ultimately better health care for all,” Dr. David Rhew, global chief medical officer at Microsoft, said Thursday in a briefing with reporters.
Microsoft acquired Nuance Communications, the company behind Dragon Medical One and DAX Copilot, for about $16 billion in 2021. As a result, Microsoft has become a major player in the fiercely competitive AI scribing market, which has exploded in popularity as health systems have been looking for tools to help address burnout.
AI scribes like DAX Copilot allow doctors to draft clinical notes in real time as they consensually record their visits with patients. DAX Copilot has been used in more than 3 million patient visits across 600 health-care organizations in the last month, Microsoft said.
Other companies like Abridge, which has raised more than $460 million according to PitchBook, and Suki, which has raised nearly $170 million, have developed similar scribing tools. Microsoft’s updated interface could help it stand out from its competitors.
Dragon Copilot is accessible through a mobile app, browser or desktop, and it integrates directly with several different electronic health records, the company said.
Clinicians will still be able to draft clinical notes with the assistant as they could with DAX Copilot, but they’ll also be able to use natural language to edit their documentation and prompt it further, Kenn Harper, general manager of Dragon products at Microsoft, told reporters on the call.
For instance, a doctor could ask questions like, “Was the patient experiencing ear pain?” or “Can you add the ICD-10 codes to the assessment and plan?” Physicians can also ask broader treatment-related queries such as, “Should this patient be screened for lung cancer?” and get an answer with links to resources like the Centers for Disease Control and Prevention.
WellSpan Health, which treats patients across 250 locations and nine hospitals throughout central Pennsylvania and northern Maryland, has been testing out Dragon Copilot with a group of clinicians in recent months.
One of those clinicians is Dr. David Gasperack, chief medical officer of primary care services at WellSpan. It’s still early days, but Gasperack told CNBC the assistant is easy to use and has been more accurate than Microsoft’s existing offerings.
“We’ve been asked more and more over time to do more administrative tasks that pull us away from the patient relationship and medical decision making,” Gasperack said. “This allows us to get back to that so we can focus on the patient, truly think about what’s needed.”
Microsoft declined to share the cost of Dragon Copilot but said the pricing structure is “competitive.” It will be easy for existing customers to upgrade to the new offering, the company added.
Dragon Copilot will be generally available in the U.S. and Canada starting in May, Microsoft said. The rollout will expand to the U.K., the Netherlands, France and Germany in the months following.
“Our goal remains to restore the joy of practicing medicine for clinicians and provide a better experience for patients globally,” Rhew said.
Anthropic on Monday closed its latest funding round at a $61.5 billion post-money valuation, the company confirmed to CNBC.
The $3.5 billion round was led by Lightspeed Venture Partners, and other investors included Salesforce Ventures, Cisco Investments, Fidelity Management & Research Company, General Catalyst, D1 Capital Partners and Jane Street, among others.
Anthropic, the artificial intelligence startup backed heavily by Amazon, was founded by former OpenAI research executives. It launched Claude in March 2023, and like OpenAI’s ChatGPT and Google’s Gemini, Claude has exploded in popularity as businesses incorporate generative AI chatbots across sales, marketing and customer service functions.
The startup plans to use the latest funding to advance its development of next-generation AI, particularly to “expand its compute capacity, deepen its research in mechanistic interpretability and alignment, and accelerate its international expansion in Asia and Europe,” according to a release.
In December, Anthropic’s revenue hit an annualized $1 billion, which was an increase of roughly 10x year over year, a source told CNBC at the time. The company’s revenue comes primarily from enterprise sales, and its clients currently include startups like Cursor, Codeium and Replit, as well as larger businesses like Zoom, Snowflake, Pfizer, Thomson Reuters and Novo Nordisk, the company behind Ozempic, according to a release.
Anthropic also highlighted in its release about the funding round that its technology now fuels Amazon’s Alexa+, “bringing Claude to millions of households and Prime members.”
Krishna Rao, Anthropic’s CFO, said in a release that the latest investment “fuels our development of more intelligent and capable AI systems that expand what humans can achieve” and that “continued advances in scaling across all aspects of model training are powering breakthroughs in intelligence and expertise.”
News of the latest funding round comes after Google in January agreed to a new investment of more than $1 billion in Anthropic, a source familiar with the situation confirmed to CNBC at the time. The fresh funding built on Google’s past investments of $2 billion in Anthropic and its 10% ownership stake in the startup, as well as a large cloud contract between the two companies. Anthropic is best known for its Claude AI chatbot.
Amazon announced that it would invest an additional $4 billion in Anthropic in November. That brought Amazon’s total investment in the startup to $8 billion. Amazon remains a minority investor, Anthropic confirmed to CNBC at the time, and does not have a board seat.
As part of the November investment, Amazon Web Services became Anthropic’s “primary cloud and training partner.” Since then, Anthropic has used Amazon Web Services’ Trainium and Inferentia chips to train and deploy its largest AI models.
Anthropic ramped up its technology development throughout last year, and in October, the startup said that its AI agents were able to use computers like humans can to complete complex tasks. Anthropic’s Computer Use capability allows its technology to interpret what’s on a computer screen, select buttons, enter text, navigate websites and execute tasks through any software and real-time internet browsing, the startup said.