Stable Diffusion’s web interface, DreamStudio

Screenshot/Stable Diffusion

Computer programs can now create never-before-seen images in seconds.

Feed one of these programs some words, and it will usually spit out a picture that actually matches the description, no matter how bizarre.

The pictures aren’t perfect. They often feature hands with extra fingers or digits that bend and curve unnaturally. Image generators have issues with text, coming up with nonsensical signs or making up their own alphabet.

But these image-generating programs — which look like toys today — could be the start of a big wave in technology. Technologists call them generative models, or generative AI.

“In the last three months, the words ‘generative AI’ went from, ‘no one even discussed this’ to the buzzword du jour,” said David Beisel, a venture capitalist at NextView Ventures.

In the past year, generative AI has gotten so much better that it’s inspired people to leave their jobs, start new companies and dream about a future where artificial intelligence could power a new generation of tech giants.

The field of artificial intelligence has been booming for the past half-decade or so, but most of those advances have involved making sense of existing data. AI models have grown efficient enough to recognize whether there’s a cat in a photo you just took on your phone, and reliable enough to power Google’s search results billions of times per day.

But generative AI models can produce something entirely new that wasn’t there before — in other words, they’re creating, not just analyzing.

“The impressive part, even for me, is that it’s able to compose new stuff,” said Boris Dayma, creator of the Craiyon generative AI. “It’s not just creating old images, it’s new things that can be completely different to what it’s seen before.”

Sequoia Capital, one of the most successful firms in venture capital history with early bets on companies like Apple and Google, says in a blog post on its website that “Generative AI has the potential to generate trillions of dollars of economic value.” The VC firm predicts that generative AI could change every industry that requires humans to create original work, from gaming to advertising to law.

In a twist, Sequoia also notes in the post that the message was partially written by GPT-3, a generative AI that produces text.

How generative AI works

Kif Leswing/Craiyon

Image generation uses techniques from a subset of machine learning called deep learning, which has driven most of the advancements in the field of artificial intelligence since a landmark 2012 paper about image classification ignited renewed interest in the technology.

Deep learning uses models trained on large sets of data until the program understands relationships in that data. Then the model can be used for applications, like identifying if a picture has a dog in it, or translating text.

Image generators work by turning this process on its head. Instead of translating from English to French, for example, they translate an English phrase into an image. They usually have two main parts: one that processes the initial phrase and a second that turns that data into an image.
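The two-part design can be sketched in a few lines of Python. This is purely an illustrative toy, not any real model’s code: the “encoder” below just hashes words into a vector, and the “decoder” deterministically expands that vector into pixels, standing in for the learned neural networks a real system would use.

```python
import hashlib
import numpy as np

def encode_prompt(prompt: str, dim: int = 64) -> np.ndarray:
    """Toy text encoder: map each word to a pseudo-random vector and
    average them. A real system would use a learned language model."""
    vectors = []
    for word in prompt.lower().split():
        # Derive a deterministic seed from the word so the same prompt
        # always produces the same embedding.
        seed = int.from_bytes(hashlib.sha256(word.encode()).digest()[:4], "big")
        rng = np.random.default_rng(seed)
        vectors.append(rng.standard_normal(dim))
    return np.mean(vectors, axis=0)

def decode_to_image(embedding: np.ndarray, size: int = 32) -> np.ndarray:
    """Toy image decoder: deterministically expand the embedding into an
    RGB array with values in [0, 1]. A real system would use a learned
    generator network here."""
    rng = np.random.default_rng(abs(int(embedding.sum() * 1e6)) % (2**32))
    return rng.random((size, size, 3))

image = decode_to_image(encode_prompt("a cat sitting on the moon"))
print(image.shape)  # (32, 32, 3)
```

The point of the sketch is the division of labor: the same prompt always yields the same embedding and hence the same image, which is why real systems add a random seed to vary outputs.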

The first wave of generative AIs was based on generative adversarial networks, or GANs. GANs were famously used in a tool that generates photos of people who don’t exist. Essentially, they pit two AI models against each other, one creating images and the other judging them, so the generator gets better at producing images that fit a goal.
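The adversarial idea can be illustrated with a deliberately tiny example: a two-parameter generator learns to produce numbers resembling a target Gaussian, while a logistic-regression discriminator tries to tell real samples from fakes. This is a sketch of the training dynamic only, with gradients worked out by hand; production GANs use deep networks and far more compute.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(v):
    # Clip to avoid overflow warnings for extreme logits
    return 1.0 / (1.0 + np.exp(-np.clip(v, -60.0, 60.0)))

# Generator: x = wg * z + bg, turning noise z into samples
wg, bg = 1.0, 0.0
# Discriminator: D(x) = sigmoid(wd * x + bd), a logistic regression
wd, bd = 0.1, 0.0

lr, batch = 0.05, 64
for _ in range(2000):
    real = rng.normal(4.0, 1.0, batch)   # "real" data drawn from N(4, 1)
    z = rng.standard_normal(batch)
    fake = wg * z + bg

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    # g_* are gradients of the binary cross-entropy w.r.t. the logits.
    g_real = sigmoid(wd * real + bd) - 1.0
    g_fake = sigmoid(wd * fake + bd)
    wd -= lr * float(np.mean(g_real * real + g_fake * fake))
    bd -= lr * float(np.mean(g_real + g_fake))

    # Generator step: push D(fake) toward 1 (non-saturating GAN loss),
    # back-propagating through the discriminator by hand.
    g = (sigmoid(wd * (wg * z + bg) + bd) - 1.0) * wd
    wg -= lr * float(np.mean(g * z))
    bg -= lr * float(np.mean(g))

samples = wg * rng.standard_normal(1000) + bg
print(float(np.mean(samples)))  # generator mean, pulled toward the real mean of 4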

Newer approaches generally use transformers, a technique first described in a 2017 Google paper. Transformers can take advantage of much bigger training datasets, though the resulting models can cost millions of dollars to train.

The first image generator to gain a lot of attention was DALL-E, a program announced in 2021 by OpenAI, a well-funded startup in Silicon Valley. OpenAI released a more powerful version this year.

“With DALL-E 2, that’s really the moment when we sort of crossed the uncanny valley,” said Christian Cantrell, a developer focusing on generative AI.

Another commonly used AI-based image generator is Craiyon, formerly known as Dall-E Mini, which is available on the web. Users can type in a phrase and see it illustrated in minutes in their browser.

Since launching in July 2021, Craiyon has grown to generate about 10 million images a day, adding up to about 1 billion images that never existed before, according to Dayma. He made Craiyon his full-time job after usage skyrocketed earlier this year, and says he’s focused on using advertising to keep the website free for users because server costs are high.

A Twitter account dedicated to the weirdest and most creative images on Craiyon has over 1 million followers and regularly serves up images of increasingly improbable or absurd scenes, such as an Italian sink with a tap that dispenses marinara sauce or Minions fighting in the Vietnam War.

But the program that has inspired the most tinkering is Stable Diffusion, which was released to the public in August. The code for it is available on GitHub and can be run on computers, not just in the cloud or through a programming interface. That has inspired users to tweak the program’s code for their own purposes, or build on top of it.

For example, Stable Diffusion was integrated into Adobe Photoshop through a plug-in. Users can generate backgrounds and other parts of images, then manipulate them directly inside the application using layers and other Photoshop tools. That turns generative AI from something that produces finished images into a tool professionals can fold into their work.

“I wanted to meet creative professionals where they were and I wanted to empower them to bring AI into their workflows, not blow up their workflows,” said Cantrell, developer of the plug-in.

Cantrell, who was a 20-year Adobe veteran before leaving his job this year to focus on generative AI, says the plug-in has been downloaded tens of thousands of times. Artists tell him they use it in myriad ways that he couldn’t have anticipated, such as animating Godzilla or creating pictures of Spider-Man in any pose the artist could imagine.

“Usually, you start from inspiration, right? You’re looking at mood boards, those kinds of things,” Cantrell said. “So my initial plan with the first version, let’s get past the blank canvas problem, you type in what you’re thinking, just describe what you’re thinking and then I’ll show you some stuff, right?”

An emerging art to working with generative AIs is how to frame the “prompt,” or string of words that lead to the image. A search engine called Lexica catalogs Stable Diffusion images and the exact string of words that can be used to generate them.

Guides have popped up on Reddit and Discord describing tricks that people have discovered to dial in the kind of picture they want.

Startups, cloud providers, and chip makers could thrive

Image generated by DALL-E with prompt: A cat sitting on the moon, in the style of Pablo Picasso, detailed, stars

Screenshot/OpenAI

Some investors are looking at generative AI as a potentially transformative platform shift, like the smartphone or the early days of the web. These kinds of shifts greatly expand the total addressable market of people who might be able to use the technology, moving from a few dedicated nerds to business professionals — and eventually everyone else.

“It’s not as though AI hadn’t been around before this — and it wasn’t like we hadn’t had mobile before 2007,” said Beisel, the seed investor. “But it’s like this moment where it just kind of all comes together. That real people, like end-user consumers, can experiment and see something that’s different than it was before.”

Cantrell sees generative machine learning as akin to an even more foundational technology: the database. Originally pioneered by companies like Oracle in the 1970s as a way to store and organize discrete bits of information in clearly delineated rows and columns (think of an enormous Excel spreadsheet), databases have been re-envisioned to store every type of data for every conceivable computing application, from the web to mobile.

“Machine learning is kind of like databases, where databases were a huge unlock for web apps. Almost every app you or I have ever used in our lives is on top of a database,” Cantrell said. “Nobody cares how the database works, they just know how to use it.”

Michael Dempsey, managing partner at Compound VC, says moments where technologies previously limited to labs break into the mainstream are “very rare” and attract a lot of attention from venture investors, who like to make bets on fields that could be huge. Still, he warns that this moment in generative AI might end up being a “curiosity phase” closer to the peak of a hype cycle. And companies founded during this era could fail because they don’t focus on specific uses that businesses or consumers would pay for.

Others in the field believe that startups pioneering these technologies today could eventually challenge the software giants that currently dominate the artificial intelligence space, including Google, Facebook parent Meta and Microsoft, paving the way for the next generation of tech giants.

“There’s going to be a bunch of trillion-dollar companies — a whole generation of startups who are going to build on this new way of doing technologies,” said Clement Delangue, the CEO of Hugging Face, a developer platform like GitHub that hosts pre-trained models, including those for Craiyon and Stable Diffusion. Its goal is to make AI technology easier for programmers to build on.

Some of these firms are already sporting significant investment.

Hugging Face was valued at $2 billion after raising money earlier this year from investors including Lux Capital and Sequoia. OpenAI, the most prominent startup in the field, has received over $1 billion in funding from Microsoft and Khosla Ventures.

Meanwhile, Stability AI, the maker of Stable Diffusion, is in talks to raise venture funding at a valuation of as much as $1 billion, according to Forbes. A representative for Stability AI declined to comment.

Cloud providers like Amazon, Microsoft and Google could also benefit because generative AI can be very computationally intensive.

Meta and Google have hired some of the most prominent talent in the field in hopes that advances might be able to be integrated into company products. In September, Meta announced an AI program called “Make-A-Video” that takes the technology one step further by generating videos, not just images.

“This is pretty amazing progress,” Meta CEO Mark Zuckerberg said in a post on his Facebook page. “It’s much harder to generate video than photos because beyond correctly generating each pixel, the system also has to predict how they’ll change over time.”

On Wednesday, Google matched Meta, announcing and releasing code for a program called Phenaki that also does text-to-video and can generate minutes of footage.

The boom could also bolster chipmakers like Nvidia, AMD and Intel, which make the kind of advanced graphics processors that are ideal for training and deploying AI models.

At a conference last week, Nvidia CEO Jensen Huang highlighted generative AI as a key use for the company’s newest chips, saying these kind of programs could soon “revolutionize communications.”

Profitable end uses for generative AI are currently rare. A lot of today’s excitement revolves around free or low-cost experimentation. For example, some writers have experimented with using image generators to make images for articles.

One example of Nvidia’s work is the use of a model to generate new 3D images of people, animals, vehicles or furniture that can populate a virtual game world.

Ethical issues

Prompt: “A cat sitting on the moon, in the style of picasso, detailed”

Screenshot/Craiyon

Ultimately, everyone developing generative AI will have to grapple with some of the ethical issues that come up from image generators.

First, there’s the jobs question. Even though many programs require a powerful graphics processor, computer-generated content is still going to be far less expensive than the work of a professional illustrator, which can cost hundreds of dollars per hour.

That could spell trouble for artists, video producers and other people whose job it is to generate creative work. For example, a person whose job is choosing images for a pitch deck or creating marketing materials could be replaced by a computer program very shortly.

“It turns out, machine-learning models are probably going to start being orders of magnitude better and faster and cheaper than that person,” said Compound VC’s Dempsey.

There are also complicated questions around originality and ownership.

Generative AIs are trained on huge amounts of images, and it’s still being debated in the field and in courts whether the creators of the original images have any copyright claims on images generated to be in the original creator’s style.

One artist won an art competition in Colorado using an image largely created by a generative AI called Midjourney, although he said in interviews after he won that he chose the image from hundreds he generated and then tweaked it in Photoshop.

Some images generated by Stable Diffusion appear to contain watermarks, suggesting that part of the training data was copyrighted. Some prompt guides recommend using specific living artists’ names in prompts in order to get better results that mimic that artist’s style.

Last month, Getty Images banned users from uploading generative AI images into its stock image database, because it was concerned about legal challenges around copyright.

Image generators can also be used to create new images of trademarked characters or objects, such as the Minions, Marvel characters or the throne from Game of Thrones.

As image-generating software gets better, it also has the potential to be able to fool users into believing false information or to display images or videos of events that never happened.

Developers also have to grapple with the possibility that models trained on large amounts of data may have biases related to gender, race or culture included in the data, which can lead to the model displaying that bias in its output. For its part, Hugging Face, the model-sharing website, publishes materials such as an ethics newsletter and holds talks about responsible development in the AI field.

“What we’re seeing with these models is one of the short-term and existing challenges is that because they’re probabilistic models, trained on large datasets, they tend to encode a lot of biases,” Delangue said, offering an example of a generative AI drawing a picture of a “software engineer” as a white man.


Oracle set to report quarterly results after the bell


Larry Ellison, Oracle’s co-founder and chief technology officer, appears at the Formula One British Grand Prix in Towcester, U.K., on July 6, 2025.

Jay Hirano | Sopa Images | Lightrocket | Getty Images

Oracle is scheduled to report fiscal second-quarter results after market close on Wednesday.

Here’s what analysts are expecting, according to LSEG:

  • Earnings per share: $1.64 adjusted
  • Revenue: $16.21 billion

Wall Street expects revenue to increase 15% in the quarter that ended Nov. 30, from $14.1 billion a year earlier. Analysts polled by StreetAccount are looking for $7.92 billion in cloud revenue and $6.06 billion from software.
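The expected growth figure follows directly from the two revenue numbers quoted above; quick arithmetic confirms it:

```python
expected = 16.21   # consensus revenue estimate, $ billions
year_ago = 14.10   # revenue in the year-earlier quarter, $ billions

growth_pct = (expected - year_ago) / year_ago * 100
print(f"{growth_pct:.1f}%")  # → 15.0%
```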

The report lands at a critical moment for Oracle, which has tried to position itself at the center of the artificial intelligence boom by committing to massive build-outs. While the move has been a boon for Oracle’s revenue and its backlog, investors have grown concerned about the amount of debt the company is raising and the risks it faces should the AI market slow.

The stock plummeted 23% in November, its worst monthly performance since 2001, and as of Tuesday’s close was 33% below the record it reached in September. Still, the shares are up 33% for the year, outperforming the Nasdaq, which has gained 22% over that stretch.

Over the past decade, Oracle has diversified its business beyond databases and enterprise software and into cloud infrastructure, where it competes with Amazon, Microsoft and Google. Those companies are all vying for big AI contracts and are investing heavily in data centers and hardware necessary to meet expected demand.

OpenAI, which sparked the generative AI rush with the launch of ChatGPT three years ago, has committed to spending more than $300 billion on Oracle’s infrastructure services over five years.

“Oracle’s job is not to imagine gigawatt-scale data centers. Oracle’s job is to build them,” Larry Ellison, the company’s co-founder and chairman, told investors in September.

Oracle raised $18 billion during the period, one of the biggest issuances on record for a tech company. Skeptical investors have been buying five-year credit default swaps, driving them to multiyear highs. Credit default swaps are like insurance for investors, with buyers paying for protection in case the borrower can’t repay its debt.

“Customer concentration is a major issue here, but I think the bigger thing is, How are they going to pay for this?” said RBC analyst Rishi Jaluria, who has the equivalent of a hold rating on Oracle’s stock.

During the quarter, Oracle named executives Clay Magouyrk and Mike Sicilia as the company’s new CEOs, succeeding Safra Catz. Oracle also introduced AI agents for automating various facets of finance, human resources and sales.

Executives will discuss the results and issue guidance on a conference call starting at 5 p.m. ET.

WATCH: Oracle’s debt concerns loom large ahead of quarterly earnings



Nvidia disputes report that China’s DeepSeek is using its banned Blackwell AI chips


Jensen Huang, chief executive officer of Nvidia Corp., outside the US Capitol in Washington, DC, US, on Wednesday, Dec. 3, 2025.

Bloomberg | Bloomberg | Getty Images

Nvidia on Wednesday disputed a report that the Chinese artificial intelligence startup DeepSeek has been using smuggled Blackwell chips to develop its upcoming model.

The U.S. has banned the export of Nvidia’s Blackwell chips, which are considered the company’s most advanced offerings, to China in an effort to stay ahead in the AI race.

DeepSeek is reportedly using chips that were snuck into the country without authorization, according to The Information.

“We haven’t seen any substantiation or received tips of ‘phantom datacenters’ constructed to deceive us and our OEM partners, then deconstructed, smuggled, and reconstructed somewhere else,” an Nvidia spokesperson said in a statement. “While such smuggling seems farfetched, we pursue any tip we receive.”


Nvidia has been one of the biggest winners of the AI boom so far because it develops the graphics processing units (GPUs) that are key for training models and running large workloads.

Since the hardware is so crucial for advancing AI technology, Nvidia’s relationship with China has become a political flashpoint among U.S. lawmakers.

President Donald Trump on Monday said Nvidia can ship its H200 chips to “approved customers” in China and elsewhere on the condition that the U.S. will get 25% of those sales.

The announcement was met with pushback from some Republicans.

DeepSeek spooked the U.S. tech sector in January when it released a reasoning model, called R1, that rocketed to the top of app stores and industry leaderboards. R1 was also created at a fraction of the cost of other models in the U.S., according to some analyst estimates.

In August, DeepSeek hinted that China will soon have its own “next generation” chips to support its AI models.

WATCH: Nvidia selling H200 AI chips to China is net positive, says Patrick Moorhead


– CNBC’s Kristina Partsinevelos contributed to this report.


‘Greetings, earthlings’: Nvidia-backed Starcloud trains first AI model in space as orbital data center race heats up


The Starcloud-1 satellite is launched into space from a SpaceX rocket on November 2, 2025.

Courtesy: SpaceX | Starcloud

Nvidia-backed startup Starcloud trained an artificial intelligence model from space for the first time, signaling a new era for orbital data centers that could alleviate Earth’s escalating digital infrastructure crisis.

Last month, the Washington-based company launched a satellite carrying an Nvidia H100 graphics processing unit, sending into orbit a chip roughly 100 times more powerful than any GPU flown in space before. Now, the company’s Starcloud-1 satellite is running Gemma, an open large language model from Google, and querying it in orbit, marking the first time an LLM has run on a high-powered Nvidia GPU in outer space, CNBC has learned.

“Greetings, Earthlings! Or, as I prefer to think of you — a fascinating collection of blue and green,” reads a message from the recently launched satellite. “Let’s see what wonders this view of your world holds. I’m Gemma, and I’m here to observe, analyze, and perhaps, occasionally offer a slightly unsettlingly insightful commentary. Let’s begin!” the model wrote.

Starcloud’s output from Gemma in space. Gemma is a family of open models built from the same technology used to create Google’s Gemini AI models.

Starcloud

Starcloud wants to show outer space can be a hospitable environment for data centers, particularly as Earth-based facilities strain power grids, consume billions of gallons of water annually and produce hefty greenhouse gas emissions. The electricity consumption of data centers is projected to more than double by 2030, according to data from the International Energy Agency.

Starcloud CEO Philip Johnston told CNBC that the company’s orbital data centers will have 10 times lower energy costs than terrestrial data centers.

“Anything you can do in a terrestrial data center, I’m expecting to be able to be done in space. And the reason we would do it is purely because of the constraints we’re facing on energy terrestrially,” Johnston said in an interview.

Johnston, who co-founded the startup in 2024, said Starcloud-1’s operation of Gemma is proof that space-based data centers can exist and operate a variety of AI models in the future, particularly those that require large compute clusters.

“This very powerful, very parameter-dense model is living on our satellite,” Johnston said. “We can query it, and it will respond in the same way that when you query a chat from a database on Earth, it will give you a very sophisticated response. We can do that with our satellite.”

In a statement to CNBC, Google DeepMind product director Tris Warkentin said that “seeing Gemma run in the harsh environment of space is a testament to the flexibility and robustness of open models.”

In addition to Gemma, Starcloud was able to train NanoGPT, an LLM created by OpenAI founding member Andrej Karpathy, on the H100 chip using the complete works of Shakespeare. This led the model to speak in Shakespearean English.

Starcloud — a member of the Nvidia Inception program and a graduate of Y Combinator and the Google for Startups Cloud AI Accelerator — plans to build a 5-gigawatt orbital data center with solar and cooling panels that measure roughly 4 kilometers in both width and height. A compute cluster of that size would produce more power than the largest power plant in the U.S. and would be substantially smaller and cheaper than a terrestrial solar farm of the same capacity, according to Starcloud’s white paper.

These data centers in space would capture constant solar energy to power next-generation AI models, unhindered by the Earth’s day and night cycles and weather changes. Starcloud’s satellites should have a five-year lifespan given the expected lifetime of the Nvidia chips on its architecture, Johnston said.

Orbital data centers would have real-world commercial and military use cases. Already, Starcloud’s systems can enable real-time intelligence and, for example, spot the thermal signature of a wildfire the moment it ignites and immediately alert first responders, Johnston said.

“We’ve linked in the telemetry of the satellite, so we linked in the vital signs that it’s drawing from the sensors — things like altitude, orientation, location, speed,” Johnston said. “You can ask it, ‘Where are you now?’ and it will say, ‘I’m above Africa and in 20 minutes, I’ll be above the Middle East.’ And you could also say, ‘What does it feel like to be a satellite?’ and it will say, ‘It’s kind of a bit weird’ … It’ll give you an interesting answer that you could only have with a very high-powered model.”

Starcloud is working on customer workloads by running inference on satellite imagery from observation company Capella Space, which could help spot lifeboats from capsized vessels at sea and forest fires in a certain location. The company will include several Nvidia H100 chips and integrate Nvidia’s Blackwell platform onto its next satellite launch in October 2026 to offer greater AI performance. The satellite launching next year will feature a module running a cloud platform from cloud infrastructure startup Crusoe, allowing customers to deploy and operate AI workloads from space.

“Running advanced AI from space solves the critical bottlenecks facing data centers on Earth,” Johnston told CNBC.

“Orbital compute offers a way forward that respects both technological ambition and environmental responsibility. When Starcloud-1 looked down, it saw a world of blue and green. Our responsibility is to keep it that way,” he added.

The risks

Risks in operating orbital data centers remain, however. Analysts from Morgan Stanley have noted that orbital data centers could face hurdles such as harsh radiation, difficulty of in-orbit maintenance, debris hazards and regulatory issues tied to data governance and space traffic.

Still, tech giants are pursuing orbital data centers given the prospect of nearly limitless solar energy and greater, gigawatt-sized operations in space.

Along with Starcloud and Nvidia’s efforts, several companies have announced space-based data center missions. On Nov. 4, Google unveiled a “moonshot” initiative titled Project Suncatcher, which aims to put solar-powered satellites into space with Google’s tensor processing units. Privately owned Lonestar Data Holdings is working to put the first-ever commercial lunar data center on the moon’s surface.

OpenAI CEO Sam Altman has explored an acquisition or partnership with a rocket maker, suggesting a desire to compete against Elon Musk‘s SpaceX, according to The Wall Street Journal. SpaceX is a key launch partner for Starcloud.

Referring to Starcloud’s launch in early November, Nvidia senior director of AI infrastructure Dion Harris said: “From one small data center, we’ve taken a giant leap toward a future where orbital computing harnesses the infinite power of the sun.”
