Silicon Valley’s earliest stage companies are getting a major boost from artificial intelligence.
Startup accelerator Y Combinator — known for backing Airbnb, Dropbox and Stripe — this week held its latest demo day in San Francisco, where founders pitched their startups to an auditorium of potential venture capital investors.
Y Combinator CEO Garry Tan told CNBC that this group is growing significantly faster than past cohorts and with actual revenue. For the last nine months, the entire batch of YC companies in aggregate grew 10% per week, he said.
“It’s not just the number one or two companies — the whole batch is growing 10% week on week,” said Tan, who is also a Y Combinator alum. “That’s never happened before in early-stage venture.”
That growth spurt is thanks to leaps in artificial intelligence, Tan said.
App developers can now offload or automate more repetitive tasks, and they can generate new code using large language models. Tan called it “vibe coding,” a term for letting models take the wheel and generate software. In some cases, AI can code entire apps.
The ability of AI to shoulder an otherwise heavy workload has allowed these companies to build with fewer people. For about a quarter of the current YC startups, 95% of their code was written by AI, Tan said.
“That sounds a little scary, but on the other hand, what that means for founders is that you don’t need a team of 50 or 100 engineers,” said Tan, adding that companies are reaching as much as $10 million in revenue with teams of fewer than 10 people. “You don’t have to raise as much. The capital goes much longer.”
The growth-at-all-costs mindset of Silicon Valley during the zero-interest-rate era has gone “out the window,” said Tan, pointing to a renewed focus on profitability. That focus on the bottom line also applies to megacap tech companies. Google, Meta and Amazon have gone through multiple rounds of layoffs and pulled back on hiring.
While that’s shaken some engineers, Tan described it as an opportunity.
It’s easier to build a startup, and the top people in tech don’t have to prove their worth by going to work at big tech companies, he said.
“There’s a lot of anxiety in the job market, especially from young software engineers,” Tan said. “Maybe it’s that engineer who couldn’t get a job at Meta or Google who actually can build a standalone business making $10 million or $100 million a year with ten people — that’s such a powerful moment in software.”
About 80% of the YC companies that presented this week were AI-focused, with a handful of robotics and semiconductor startups. This group has demonstrated commercial traction earlier than previous generations of companies, Tan said.
“There’s a ton of hype, but what’s unique about this moment is that people are actually getting commercial validation,” he said. “If you’re an investor at demo day, you’ll be able to call a real customer, and that person will say, ‘Yeah, we use the software every single day.'”
Y Combinator was founded in 2005 by Paul Graham, Jessica Livingston, Robert Morris and Trevor Blackwell. The firm invests $500,000 in startups in exchange for an equity stake. Those founders then enter a three-month program at the San Francisco headquarters and get guidance from partners and YC alumni. Demo day is a way to attract additional capital.
The firm has funded more than 5,300 companies, which it says are worth more than $800 billion in total. Over a dozen of them are public, and more than 100 are valued at $1 billion or more. More than 15,000 companies apply to get into the accelerator, with about a 1% acceptance rate.
More venture capital incubators have popped up over the past decade, and more capital has flocked to early-stage startups. Despite the competition, Tan argued that Y Combinator has an edge thanks to its strong network. He pointed to the rising number of highly valued portfolio companies and pushed back on the idea that specialized incubators were taking business away.
“About 20 to 30% of the companies during YC change their idea and sometimes their industry entirely. And if you end up with an incubator that is very specialized, you might not be able to change into the thing that you were supposed to,” Tan said. “We think that the network effects and the advantages of doing YC have only become more bold.”
Internet firm Cloudflare will start blocking artificial intelligence crawlers from accessing content without website owners’ permission or compensation by default, in a move that could significantly impact AI developers’ ability to train their models.
Starting Tuesday, every new web domain that signs up to Cloudflare will be asked whether it wants to allow AI crawlers, effectively giving site owners the ability to prevent bots from scraping data from their websites.
Cloudflare is what’s called a content delivery network, or CDN. It helps businesses deliver online content and applications faster by caching the data closer to end users. CDNs play a significant role in making sure people can access web content seamlessly every day.
Roughly 16% of global internet traffic goes directly through Cloudflare’s CDN, the firm estimated in a 2023 report.
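To make the caching idea concrete, here is a minimal sketch in Python, assuming a toy in-memory cache rather than Cloudflare’s actual edge software: repeat requests for the same URL are answered from a local cache with a short time-to-live instead of going back to the origin server each time. The 60-second TTL, the example URL and the fetch_from_origin helper are illustrative assumptions, not details from Cloudflare.

import time

CACHE_TTL_SECONDS = 60   # assumed time-to-live; real CDNs honor cache headers set by the site
cache = {}               # maps url -> (fetched_at, body)

def fetch_from_origin(url: str) -> str:
    # Placeholder standing in for a real HTTP request to the origin server.
    return f"<html>content of {url}</html>"

def serve(url: str) -> str:
    entry = cache.get(url)
    if entry and time.time() - entry[0] < CACHE_TTL_SECONDS:
        return entry[1]                      # cache hit: no round trip to the origin
    body = fetch_from_origin(url)            # cache miss: fetch from the origin and store it
    cache[url] = (time.time(), body)
    return body

print(serve("https://example.com/"))   # first request: fetched from the origin
print(serve("https://example.com/"))   # second request: served from the edge cache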
“AI crawlers have been scraping content without limits. Our goal is to put the power back in the hands of creators, while still helping AI companies innovate,” said Matthew Prince, co-founder and CEO of Cloudflare, in a statement Tuesday.
“This is about safeguarding the future of a free and vibrant Internet with a new model that works for everyone,” he added.
What are AI crawlers?
AI crawlers are automated bots designed to extract large quantities of data from websites, databases and other sources of information to train the large language models built by the likes of OpenAI and Google.
Whereas the internet previously rewarded creators by directing users to original websites, according to Cloudflare, today AI crawlers are breaking that model by collecting text, articles and images and using them to generate responses to queries, so that users no longer need to visit the original source.
This, the company adds, is depriving publishers of vital traffic and, in turn, revenue from online advertising.
Tuesday’s move builds on a tool Cloudflare launched in September last year that gave publishers the ability to block AI crawlers with a single click. Now, the company is going a step further by making this the default for every website it serves.
OpenAI says it declined to participate when Cloudflare previewed its plan to block AI crawlers by default on the grounds that the content delivery network is adding a middleman to the system.
The Microsoft-backed AI lab stressed its role as a pioneer in the use of robots.txt, a plain-text file that tells automated crawlers which parts of a website they may access, and said its crawlers respect publisher preferences.
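For context, robots.txt is a convention rather than an enforcement mechanism: compliant crawlers read the file and skip whatever it disallows. The sketch below, using Python’s standard urllib.robotparser module, shows how a well-behaved crawler might check a page before fetching it; the site address and the “ExampleAIBot” user agent are hypothetical, and this is not OpenAI’s or Cloudflare’s actual code.

from urllib import robotparser

SITE = "https://example.com"     # hypothetical site
USER_AGENT = "ExampleAIBot"      # hypothetical crawler user agent

parser = robotparser.RobotFileParser()
parser.set_url(f"{SITE}/robots.txt")
parser.read()                    # download and parse the site's robots.txt

page = f"{SITE}/articles/some-story"
if parser.can_fetch(USER_AGENT, page):
    print(f"{USER_AGENT} may crawl {page}")
else:
    print(f"{USER_AGENT} is disallowed from crawling {page}")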
“AI crawlers are typically seen as more invasive and selective when it comes to the data they consume. They have been accused of overwhelming websites and significantly impacting user experience,” Matthew Holman, a partner at U.K. law firm Cripps, told CNBC.
“If effective, the development would hinder AI chatbots’ ability to harvest data for training and search purposes,” he added. “This is likely to lead to a short term impact on AI model training and could, over the long term, affect the viability of models.”
Elon Musk announced his new company xAI, which he says has the goal of understanding the true nature of the universe.
xAI, the artificial intelligence startup run by Elon Musk, raised a combined $10 billion in debt and equity, Morgan Stanley said.
Half of that sum was raised through secured notes and term loans, while the other $5 billion came through strategic equity investment, the bank said on Monday.
The funding gives xAI more firepower to build out infrastructure and develop its Grok AI chatbot as it looks to compete with bitter rival OpenAI, as well as with a swathe of other players including Amazon-backed Anthropic.
In May, Musk told CNBC that xAI has already installed 200,000 graphics processing units (GPUs) at its Colossus facility in Memphis, Tennessee. Colossus is xAI’s supercomputer that trains the firm’s AI. Musk at the time said that his company will continue buying chips from semiconductor giants Nvidia and AMD and that xAI is planning a 1-million-GPU facility outside of Memphis.
Addressing the latest funds raised by the company, Morgan Stanley said that “the proceeds will support xAI’s continued development of cutting-edge AI solutions, including one of the world’s largest data center and its flagship Grok platform.”
xAI continues to release updates to Grok and unveiled the Grok 3 AI model in February. Musk has sought to boost the use of Grok by integrating the AI model with the X social media platform, formerly known as Twitter. In March, xAI acquired X in a deal that valued the site at $33 billion and the AI firm at $80 billion. It’s unclear if the new equity raise has changed that valuation.
xAI was not immediately available for comment.
Last year, xAI raised $6 billion at a valuation of $50 billion, CNBC reported.
Morgan Stanley said the latest debt offering was “oversubscribed and included prominent global debt investors.”
Competition among American AI startups is intensifying, with companies raising huge amounts of funding to buy chips and build infrastructure.
Musk has called Grok a “maximally truth-seeking” AI that is also “anti-woke,” in a bid to set it apart from its rivals. But this has not come without its fair share of controversy. Earlier this year, Grok responded to unrelated user queries with comments about the controversial claim of “white genocide” in South Africa.
Musk has also clashed with fellow AI leaders, including OpenAI’s Sam Altman. Most famously, Musk claimed that OpenAI, which he co-founded, has deviated from its original mission of developing AI to benefit humanity as a nonprofit and is instead focused on commercial success. In February, Musk, alongside a group of investors, put in a bid of $97.4 billion to buy control of OpenAI. Altman swiftly rejected the offer.
— CNBC’s Lora Kolodny and Jonathan Vanian contributed to this report.
Huawei has open-sourced two of its artificial intelligence models — a move tech experts say will help the U.S.-blacklisted firm continue to build its AI ecosystem and expand overseas.
The Chinese tech giant announced on Monday the open-sourcing of the AI models under its Pangu series, as well as some of its model reasoning technology.
Tech experts told CNBC that Huawei’s latest announcements not only highlight how it is solidifying itself as an open-source LLM player, but also how it is strengthening its position across the entire AI value chain as it works to overcome U.S.-led AI chip export restrictions.
In recent years, the company has transformed from a competent private sector telecommunications firm into a “muscular technology juggernaut straddling the entire AI hardware and software stack,” said Paul Triolo, partner and senior vice president for China at advisory firm DGA-Albright Stonebridge Group.
In its announcement Monday, Huawei called the open-source moves another key measure in its “Ascend ecosystem strategy,” one it said would help speed up the adoption of AI across “thousands of industries.”
The Ascend ecosystem refers to AI products built around the company’s Ascend AI chip series, which are widely considered to be China’s leading competitor to products from American chip giant Nvidia. Nvidia is restricted from selling its advanced products to China.
A Google-like strategy?
Making Pangu available as open source allows developers and businesses to test the models and customize them for their needs, said Lian Jye Su, chief analyst at Omdia. “The move is expected to incentivize the use of other Huawei products,” he added.
According to experts, the coupling of Huawei’s Pangu models with the company’s AI chips and related products gives the company a unique advantage, allowing it to optimize its AI solutions and applications.
While competitors like Baidu have LLMs with broad capabilities, Huawei has focused on specialized AI models for sectors such as government, finance and manufacturing.
“Huawei is not as strong as companies like DeepSeek and Baidu at the overall software level – but it doesn’t need to be,” said Marc Einstein, research director at Counterpoint Research.
“Its objective is to ultimately use open source products to drive hardware sales, which is a completely different model from others. It also collaborates with DeepSeek, Baidu and others and will continue to do so,” he added.
Ray Wang, principal analyst at Constellation Research, said the chip-to-model strategy is similar to that of Google, a company that is also developing AI chips and AI models like its open-source Gemma models.
Huawei’s announcement on Monday could also help with its international ambitions. Huawei, along with players like Zhipu AI, has been slowly making inroads into new overseas markets.
In its announcement Monday, Huawei invited developers, corporate partners and researchers around the world to download and use its new open-source products in order to gather feedback and improve them.
“Huawei’s open-source strategy will resonate well in developing countries where enterprises are more price-sensitive as is the case with [Huawei’s] other products,” Einstein said.
As part of its global strategy, the company has also been looking to bring its latest AI data center solutions to new countries.