Lisa Su, CEO of AMD, left, and Jensen Huang, CEO of Nvidia


In the 1990s, when Intel dominated the PC chip market, the semiconductor maker needed Advanced Micro Devices to exist as a viable No. 2 to help avoid being charged with monopolistic behavior. 

Almost three decades later, AMD may be serving a similar role for Nvidia, which controls over 90% of the market for graphics processing units used for artificial intelligence workloads. 

When AMD announced a deal on Monday to sell many billions of dollars' worth of GPUs to OpenAI, it established itself as a serious rival that can pick up share in the quickly growing market for AI chips, analysts said.

“Right now, Nvidia almost has a monopoly, with AMD having a low-single-digit share in the $250 billion market” for AI data center silicon, said Mandeep Singh, senior analyst at Bloomberg Intelligence.

Up to this point, Nvidia and OpenAI have defined the new era of AI.

Nvidia’s GPU sales have pushed the company’s market cap to $4.5 trillion. OpenAI’s private market valuation has climbed to $500 billion, driven by the popularity of ChatGPT and the company’s hyper-aggressive plans for building out data centers.

Nvidia is a significant investor in OpenAI, and last month agreed to pour up to $100 billion into the AI startup’s infrastructure buildouts.

While AMD is a very distant challenger, the stock has also been a Wall Street darling because of the company’s promises in AI and expectations that its GPUs will be enthusiastically snapped up by customers. But until its announcement with OpenAI this week, AMD’s rally had largely been built on hope.

AMD’s stock soared 24% on Monday, its biggest gain since 2002. It’s up 89% this year compared to Nvidia’s 40% gain.


Nvidia’s control of the burgeoning market has been so vast that in September of last year, during the waning days of the Biden administration, the company was reportedly subpoenaed by the Justice Department, though it denied the report. Sen. Elizabeth Warren, D-Mass., sent a letter to the DOJ’s antitrust unit at the time supporting a probe.

The company’s growth, she wrote, “has been supercharged by Nvidia’s use of anticompetitive tactics that have choked off competition and chilled innovation.” Nvidia said at the time that it wins on merit.

The deal OpenAI and AMD announced on Monday could change the competitive dynamic.

The tie-up is expected to bring “double digit billions” in revenue to AMD starting in the second half of next year. OpenAI could also end up owning 10% of AMD if the stock hits price targets over a period of years.

AMD CEO Lisa Su described the agreement as a “win-win” on a call with reporters, and said it’s proof that her company’s chips are fast enough and priced to compete with those from Nvidia.

She described OpenAI’s commitment as a “clear signal” that AMD’s GPUs and software offer the performance and economic value “required for the most demanding at-scale deployments.”

Nvidia CEO Jensen Huang said on CNBC’s Squawk Box on Wednesday that the OpenAI-AMD deal was “unique and surprising.”

“I’m surprised that they would give away 10% of the company before they even built it,” Huang said. “It’s clever, I guess.”

The pact also allows OpenAI to show that its contracts and investments with suppliers like Nvidia aren’t exclusive, to avoid any potential antitrust ramifications. OpenAI CEO Sam Altman said on social media that any AMD chips would be “incremental” to its Nvidia purchases, and that the “world needs much more compute.”

“None of these things are, as far as I’m aware, exclusive contracts tying up avenues to other competitors,” said Alden Abbott, senior research fellow at Mercatus Center and a former general counsel at the Federal Trade Commission. “I don’t see any argument that in the near term that shows monopolization or cartelization of AI suppliers.”

Representatives from Nvidia, AMD and OpenAI declined to comment.

‘Committed to build’

When it comes to Washington, D.C., antitrust regulators aren’t the chip industry’s biggest concern. Those pressures have seemingly diminished this year under the Trump administration’s DOJ.

Rather, semiconductor investors are worried about potential tariffs, specifically Section 232 tariffs focused on chips. President Donald Trump has said that the tariffs, which have yet to go into effect, will double the price of imported chips. But in August, the president introduced a big carve-out.

“If you’re building in the United States or have committed to build — without question committed to build in the United States —there will be no charge,” Trump said at an event to announce Apple investments. The Trump administration’s AI Action Plan pushes for the U.S. to export “full-stack” AI technology abroad so it can become the global standard.

Ed Mills, Washington policy analyst at Raymond James, said it’s not entirely clear what will qualify for the exemption, adding that OpenAI’s investment in AMD may end up being an “off ramp” for the company.

Nvidia and OpenAI have already played a big role in Trump’s AI ambitions, as they joined with Oracle in January, when the president announced Project Stargate, a plan to invest up to $500 billion in U.S. AI infrastructure.

CEO Dr. Lisa Su, AMD executives, and industry luminaries unveil the AMD vision for Advancing AI.


In the AMD deal, OpenAI will be using the company’s Instinct MI450 systems, which will start shipping next year. It’s the first time AMD has offered a “rack-scale” system, not just individual chips, and will mean AMD is the only company besides Nvidia offering a full stack of AI hardware technologies.

“By having OpenAI purchase as much as they are from AMD, now we have a multiplayer race that seems to be kind of dominated by Nvidia,” Mills said. “So we’re expanding the number of U.S. companies that are going to be able to compete in producing that U.S. tech stack.”

There’s also the China issue.

Both Nvidia and AMD have China-specific AI products that have been barred by the U.S. government for shipment to the world’s second-largest economy, which is a major center of AI research. The Trump administration reversed course over the summer, and said the companies could export chips if they paid the U.S. government 15% of the revenue, but they still need export licenses.

Trump is expected to meet with China’s president, Xi Jinping, at the Asia-Pacific Economic Cooperation forum later this month. Recent reports suggest China could commit to investing $1 trillion in the U.S., and Mills said high-priced AI chips could be part of the deal.

AMD has historically downplayed competition with Nvidia, instead pointing to the potential opportunity in AI. The company recently said the AI chip market could be worth $500 billion by 2028, and this week said the OpenAI deal equates to at least “tens of billions of dollars of revenue.”

“I think they can get to 15% to 20% market share in a $500 billion market, whereas previously they had no chance,” said Bloomberg’s Singh.

The Trump administration may not be so concerned about antitrust matters, but Nvidia and AMD are at the early stages of a battle that’s expected to play out over many years, and there’s no telling who will be in the White House after Trump’s second term ends.

Antitrust regulators have paid close attention to the market in the past. The last time AMD played second fiddle in chips, Intel was the industry behemoth.

The FTC opened an inquiry into Intel in 1991, looking into potential anticompetitive practices in the PC market, and AMD filed a $2 billion antitrust suit against the company that year. The FTC never brought charges, and AMD and Intel ultimately settled their case.

Now AMD is worth about twice as much as Intel. And, after a spate of dealmaking, Intel’s largest shareholder is the U.S. government, followed not far behind by Nvidia.



Top Hollywood agencies slam OpenAI’s Sora as ‘exploitation’ and a risk to clients


An illustration photo shows Sora 2 logo on a smartphone.


The Creative Artists Agency on Thursday slammed OpenAI’s new video creation app Sora for posing “significant risks” to its clients and intellectual property.

The talent agency, which represents artists including Doja Cat, Scarlett Johansson, and Tom Hanks, questioned whether OpenAI believed that “humans, writers, artists, actors, directors, producers, musicians, and athletes deserve to be compensated and credited for the work they create.”

“Or does OpenAI believe they can just steal it, disregarding global copyright principles and blatantly dismissing creators’ rights, as well as the many people and companies who fund the production, creation, and publication of these humans’ work? In our opinion, the answer to this question is obvious,” the CAA wrote.

OpenAI did not immediately respond to CNBC’s request for comment.

The CAA said that it was “open to hearing” solutions from OpenAI and is working with IP leaders, unions, legislators and global policymakers on the matter.

“Control, permission for use, and compensation is a fundamental right of these workers,” the CAA wrote. “Anything less than the protection of creators and their rights is unacceptable.”

Sora, which launched last week and has quickly reached 1 million downloads, allows users to create AI-generated clips often featuring popular characters and brands.


OpenAI launched with an “opt-out” system, which allowed the use of copyrighted material unless studios or agencies requested that their IP not be used.

CEO Sam Altman later said in a blog post that OpenAI would give rightsholders “more granular control over generation of characters.”

Talent agency WME sent a memo to agents on Wednesday saying it has “notified OpenAI that all WME clients be opted out of the latest Sora AI update, regardless of whether IP rights holders have opted out IP our clients are associated with,” the LA Times reported.

United Talent Agency also criticized Sora’s use of copyrighted property as “exploitation, not innovation,” in a statement on Thursday.

“There is no substitute for human talent in our business, and we will continue to fight tirelessly for our clients to ensure that they are protected,” UTA wrote. “When it comes to OpenAI’s Sora or any other platform that seeks to profit from our clients’ intellectual property and likeness, we stand with artists.”

In a letter written to OpenAI last week, Disney said it did not authorize OpenAI and Sora to copy, distribute, publicly display or perform any image or video that features its copyrighted works and characters, according to a person familiar with the matter.

Disney also wrote that it did not have an obligation to “opt-out” of appearing in Sora or any OpenAI system to preserve its rights under copyright law, the person said.

The Motion Picture Association issued a statement on Tuesday, urging OpenAI to take “immediate and decisive action” against videos using Sora to produce content infringing on its copyrighted material.

Entertainment companies have expressed numerous copyright concerns as generative AI has surged.

Universal and Disney sued creator Midjourney in June, alleging that the company used and distributed AI-generated characters from their movies despite requests to stop. Disney also sent a cease-and-desist letter to AI startup Character.AI in September, warning the company to stop using its copyrighted characters without authorization.



YouTube will give banned creators a ‘second chance’ after rule rollback


People walk past a billboard advertisement for YouTube in Berlin, Germany, on Sept. 27, 2019.


YouTube is offering creators who were banned from the platform a second chance.

On Thursday, the Google-owned platform announced it is rolling out a feature that lets previously terminated creators apply to open a new channel. Under the previous rules, termination amounted to a lifetime ban.

“We know many terminated creators deserve a second chance,” wrote the YouTube Team in a blog post. “We’re looking forward to providing an opportunity for creators to start fresh and bring their voice back to the platform.”

Tech companies have faced months of scrutiny from House Republicans and President Donald Trump, who have accused the platforms of political bias and overreach in content moderation.

Last week, YouTube agreed to pay $24.5 million to settle a lawsuit involving the suspension of Trump’s account following the U.S. Capitol riots on Jan. 6, 2021.

YouTube said this new option is separate from its already existing appeals process. If an appeal is unsuccessful, creators now have the option to apply for a new channel.

Approved creators under the new process will start from scratch, with no prior videos, subscribers or monetization privileges carried over.


Over the next several weeks, eligible creators logging into YouTube Studio will see an option to request a new channel. Creators are only eligible to apply one year after their original channel was terminated.

YouTube said it will review requests based on the severity and frequency of past violations.

The company also said it will consider off-platform behavior that could harm the community, such as activity endangering child safety.

The program excludes creators terminated for copyright infringement, violations of its Creator Responsibility policy or those who deleted their accounts.

YouTube’s ‘second chance’ process fits with a broader trend at Google and other major platforms to ease strict content moderation rules imposed in the wake of the pandemic and the 2020 election.

In September, Alphabet lawyer Daniel Donovan sent a letter to House Judiciary Chair Jim Jordan, R-Ohio, that announced the platform had made changes to its community guidelines for content containing Covid-19 or election-related misinformation.

The letter also claimed that senior Biden administration officials pressed the company to remove certain Covid-related videos, saying the pressure was “unacceptable and wrong.”

YouTube ended its stand-alone Covid misinformation rules in December 2024, according to Donovan’s letter.



Ex-Google CEO Eric Schmidt warns AI models can be hacked: ‘They learn how to kill someone’


Google’s former CEO Eric Schmidt spoke at the Sifted Summit on Wednesday, Oct. 8.


Google‘s former CEO Eric Schmidt has issued a stark reminder about the dangers of AI and how susceptible it is to being hacked.

Schmidt, who served as Google’s chief executive from 2001 to 2011, warned about “the bad stuff that AI can do” when asked whether AI is more destructive than nuclear weapons during a fireside chat at the Sifted Summit.

“Is there a possibility of a proliferation problem in AI? Absolutely,” Schmidt said Wednesday. The proliferation risks of AI include the technology falling into the hands of bad actors and being repurposed and misused.

“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said.

“All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”

AI systems are vulnerable to attack, with some methods including prompt injections and jailbreaking. In a prompt injection attack, hackers hide malicious instructions in user inputs or external data, like web pages or documents, to trick the AI into doing things it’s not meant to do — such as sharing private data or running harmful commands.

Jailbreaking, on the other hand, involves manipulating the AI’s responses so it ignores its safety rules and produces restricted or dangerous content.

In 2023, a few months after OpenAI’s ChatGPT was released, users employed a “jailbreak” trick to circumvent the safety instructions embedded in the chatbot.

This included creating a ChatGPT alter-ego called DAN, an acronym for “Do Anything Now,” which involved threatening the chatbot with death if it didn’t comply. The alter-ego could provide answers on how to commit illegal activities or list the positive qualities of Adolf Hitler.

Schmidt said that there isn’t a good “non-proliferation regime” yet to help curb the dangers of AI.

