Generative artificial intelligence has developed so quickly in the past two years that massive breakthroughs seemed more a question of when rather than if. But in recent weeks, Silicon Valley has become increasingly concerned that advancements are slowing.  

One early indication is the lack of progress between successive models released by the biggest players in the space. The Information reports OpenAI is facing a significantly smaller boost in quality for its next model, GPT-5, while Anthropic has delayed the release of its most powerful model, Opus, according to wording that was removed from its website. Even at tech giant Google, Bloomberg reports that an upcoming version of Gemini is not living up to internal expectations.

“Remember, ChatGPT came out at the end of 2022, so now it’s been close to two years,” said Dan Niles, founder of Niles Investment Management. “You had initially a huge ramp up in terms of what all these new models can do, and what’s happening now is you really trained all these models and so the performance increases are kind of leveling off.”

If progress is plateauing, it would call into question a core assumption that Silicon Valley has treated as religion: scaling laws. The idea is that adding more computing power and more data will keep yielding better models indefinitely. But recent developments suggest they may be more theory than law.
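For context, these scaling laws are empirical curve fits rather than derivations from first principles. One widely cited form, the Chinchilla parameterization from Hoffmann et al. (2022), is shown here purely as an illustration of the general idea, not as a description of any company's internal models. It predicts a model's training loss from its size and its training data:

\[
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
\]

Here N is the number of model parameters, D is the number of training tokens, and E, A, B, \alpha and \beta are constants fit to experimental runs; lower loss means a better model. The formula only promises improvement while N and D keep growing, which is why the "data wall" described below matters: if D stalls, one of the two sources of gains dries up.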

The key problem could be that AI companies are running out of data for training models, hitting what experts call the “data wall.” Instead, they’re turning to synthetic data, or AI-generated data. But that’s a band-aid solution, according to Scale AI founder Alexandr Wang.  

“AI is an industry which is garbage in, garbage out,” Wang said. “So if you feed into these models a lot of AI gobbledygook, then the models are just going to spit out more AI gobbledygook.”

But some leaders in the industry are pushing back on the idea that the rate of improvement is hitting a wall.  

“Foundation model pre-training scaling is intact and it’s continuing,” Nvidia CEO Jensen Huang said on the chipmaker’s latest earnings call. “As you know, this is an empirical law, not a fundamental physical law. But the evidence is that it continues to scale.”

OpenAI CEO Sam Altman posted on X simply, “there is no wall.” 

OpenAI and Anthropic didn’t respond to requests for comment. Google says it’s pleased with its progress on Gemini and has seen meaningful performance gains in capabilities like reasoning and coding. 

If AI acceleration is tapped out, the next phase of the race is the search for use cases – consumer applications that can be built on top of existing technology without the need for further model improvements. The development and deployment of AI agents, for example, is expected to be a game-changer. 

“I think we’re going to live in a world where there are going to be hundreds of millions, billions of AI agents, eventually probably more AI agents than there are people in the world,” Meta CEO Mark Zuckerberg said in a recent podcast interview.  


Palo Alto tops earnings expectations, announces Chronosphere acquisition


Nikesh Arora, chief executive officer of Palo Alto Networks Inc., attends the 9th edition of the VivaTech trade show at the Parc des Expositions de la Porte de Versailles on June 11, 2025, in Paris.

Chesnot | Getty Images

Palo Alto Networks beat Wall Street’s fiscal first-quarter estimates after the bell on Wednesday and announced plans to buy cloud observability platform Chronosphere for $3.35 billion.

The stock fell about 3%.

Here’s how the company did versus LSEG estimates:

  • Earnings per share: 93 cents adjusted vs. 89 cents expected
  • Revenue: $2.47 billion vs. $2.46 billion expected

Revenues grew 16% from $2.1 billion a year ago. Net income fell to $334 million, or 47 cents per share, from $351 million, or 49 cents per share in the year-ago period.

Palo Alto’s Chronosphere deal is slated to close in the second half of its fiscal 2026. The cybersecurity provider is also in the process of buying Israeli identity security firm CyberArk for $25 billion as part of CEO Nikesh Arora’s acquisition spree.

He told investors on the company’s earnings call that Palo Alto is pursuing the acquisitions simultaneously to address the fast-moving AI cycle.

“This large surge towards building AI compute is causing a lot of the AI players to think about newer models for software stacks and infrastructure stacks in the future,” he said.

Palo Alto guided for revenues between $2.57 billion and $2.59 billion in the second quarter, the midpoint of which was in line with a $2.58 billion estimate. For the full year, the company expects $10.50 billion to $10.54 billion, versus a $10.51 billion estimate.

Capital expenditures of $84 million during the period came in well above the $58.1 million StreetAccount estimate. Remaining performance obligations, a measure of backlog, grew to $15.5 billion, topping the $15.43 billion estimate.

The rise of artificial intelligence has stirred up increasingly sophisticated cyberattacks while also feeding into the tools the company sells to customers. The Santa Clara, California-based company has infused AI into its products and in October launched automated AI agents that help fend off attacks.



Elon Musk’s xAI will be first customer for Nvidia-backed data center in Saudi Arabia


Tesla CEO Elon Musk (L) talks with Nvidia CEO Jensen Huang during the U.S.-Saudi Investment Forum at the Kennedy Center on Nov. 19, 2025 in Washington, DC.

Win McNamee | Getty Images

Nvidia and xAI said on Wednesday that a large data center facility being built in Saudi Arabia and equipped with hundreds of thousands of Nvidia chips will count Elon Musk’s artificial intelligence startup as its first customer.

Musk and Nvidia CEO Jensen Huang were both in attendance at the U.S.-Saudi Investment Forum in Washington, D.C.

The announcement builds on a partnership from May, when Nvidia said it would provide Saudi Arabia’s Humain with chips that use 500 megawatts of power. On Wednesday, Humain said the project would include about 600,000 Nvidia graphics processing units.

Humain was launched earlier this year and is owned by the Saudi Public Investment Fund. The plan to build the data center was initially announced when Huang visited Saudi Arabia alongside President Donald Trump.

“Could you imagine, a startup company approximately zero billion dollars in revenues, now going to build a data center for Elon,” Huang said.

The facility is one of the most prominent examples of what Nvidia calls “sovereign AI.” The chipmaker has said that nations will increasingly need to build data centers for AI in order to protect national security and their culture. It’s also a potentially massive market for Nvidia’s pricey AI chips beyond a handful of hyperscalers.

Huang’s appearance at an event supported by President Trump is another sign of the administration’s focus on AI. Huang has become friendly with the president as Nvidia lobbies to gain licenses to ship future AI chips to China.

When announcing the agreement, Musk, who was a major figure in the early days of the second Trump administration, briefly mixed up the size of the data center, which is measured in megawatts, a unit of power. He joked that plans for a data center that would be 1,000 times larger would have to wait.

“That will be eight bazillion, trillion dollars,” Musk joked.

Humain won’t just use Nvidia chips. Advanced Micro Devices and Qualcomm will also sell chips and AI systems to Humain. AMD CEO Lisa Su and Qualcomm CEO Cristiano Amon both attended a state dinner on Tuesday to honor Saudi Crown Prince Mohammed bin Salman.

AMD will provide chips that may require as much as 1 gigawatt of power by 2030. The company said the chips that it would provide are its Instinct MI450 GPUs for AI. Cisco will provide additional infrastructure for the data center, AMD said.

Qualcomm will sell Humain its new data center chips that were first revealed in October, called the AI200 and AI250. Humain will deploy 200 megawatts of Qualcomm chips, the company said.


Meta chief AI scientist Yann LeCun is leaving to create his own startup


Yann LeCun, known as one of the godfathers of modern artificial intelligence and one of the first AI visionaries to join the company then known as Facebook, is leaving Meta.

LeCun said in a LinkedIn post on Wednesday that he plans to create a startup specializing in a kind of AI technology that researchers have described as world models, which analyze information beyond web data in order to better represent the physical world and its properties.

“I am creating a startup company to continue the Advanced Machine Intelligence research program (AMI) I have been pursuing over the last several years with colleagues at FAIR, at NYU, and beyond,” LeCun wrote. “The goal of the startup is to bring about the next big revolution in AI: systems that understand the physical world, have persistent memory, can reason, and can plan complex action sequences.”

Meta will partner with LeCun’s startup.

The departure comes at a time of disarray within Meta’s AI unit, which was dramatically overhauled this year after the company released the fourth version of its Llama open-source large language model to a disappointing response from developers. That spurred CEO Mark Zuckerberg to spend billions of dollars recruiting top AI talent, including a June $14.5 billion investment in Scale AI to lure the startup’s 28-year-old CEO Alexandr Wang, now Meta’s new chief AI officer.

LeCun, 65, joined Facebook in 2013 to be director of the FAIR AI research division while maintaining a part-time professorial position at New York University. He said in the LinkedIn post that the “creation of FAIR is my proudest non-technical accomplishment.”

“I am extremely grateful to Mark Zuckerberg, Andrew Bosworth, Chris Cox, and Mike Schroepfer for their support of FAIR, and for their support of the AMI program over the last few years,” LeCun said. “Because of their continued interest and support, Meta will be a partner of the new company.”

At the time, Facebook and Google were heavily recruiting high-level academics like LeCun to spearhead their efforts to produce cutting-edge computer science research that could potentially benefit their core businesses and products.

LeCun and other AI luminaries like Yoshua Bengio and Geoffrey Hinton centered their academic research on a kind of AI technique known as deep learning, which involves training enormous software systems called neural networks so they can discover patterns within reams of data. The researchers helped popularize the deep learning approach, and in 2019 won the prestigious Turing Award, presented by the Association for Computing Machinery.

Since then, LeCun’s approach to AI development has drifted from the direction taken by Meta and the rest of Silicon Valley.

Meta and other tech companies like OpenAI have spent billions of dollars developing so-called foundation models, particularly LLMs, as part of their efforts to advance state-of-the-art computing. However, LeCun and other deep-learning experts have said that these current AI models, while powerful, have a limited understanding of the world, and that new computing architectures are needed for researchers to create software that is on par with or surpasses humans on certain tasks, a notion known as artificial general intelligence.

“As I envision it, AMI will have far-ranging applications in many sectors of the economy, some of which overlap with Meta’s commercial interests, but many of which do not,” LeCun said in the post. “Pursuing the goal of AMI in an independent entity is a way to maximize its broad impact.”

Besides Wang, other recent notables that Zuckerberg brought in to revamp Meta’s AI unit include former GitHub CEO Nat Friedman, who heads the unit’s product team, and ChatGPT co-creator Shengjia Zhao, the group’s chief scientist.

In October, Meta laid off 600 employees from its Superintelligence Labs division, including some who were part of the FAIR unit that LeCun helped get off the ground. Those layoffs and other cuts to FAIR over the years, coupled with a new AI leadership team, played a major role in LeCun’s decision to leave, according to people familiar with the matter who asked not to be named because they weren’t authorized to speak publicly.

Additionally, LeCun rarely interacted with Wang or the TBD Labs unit, which is composed of many of the headline-grabbing hires Zuckerberg made over the summer. TBD Labs oversees the development of Meta’s Llama AI models, which were originally developed within FAIR, the people said.

While LeCun has long championed sharing AI research and related technologies with the open-source community, Wang and his team favor a more closed approach amid intense competition from rivals like OpenAI and Google, the people said.

