The Samsung Galaxy S24 Ultra. The device looks very similar to Samsung’s Galaxy S23 Ultra. The main difference this time round is what’s inside: Samsung is going big on artificial intelligence.

Samsung

Samsung announced its new flagship Galaxy S24 smartphone range Wednesday, earlier than expected, touting new artificial intelligence features as the company looks to kick off 2024 with a bang.

As is now standard for Samsung’s flagship line, the S24 range comes in three versions: the S24, S24+ and S24 Ultra. The S24 Ultra starts at $1,300, the S24+ will cost $1,000, and the S24 will retail at $800.

The South Korean electronics giant showed off the new gadgets at its King’s Cross offices in London earlier this week, ahead of the announcement. At a briefing with reporters, Samsung talked up the phone’s AI capabilities, showing how it can edit pictures and search for items using AI.

For Samsung’s top-tier S24 Ultra, the biggest of the three devices and the one with punchier specs and features, the company is using a version of Qualcomm’s latest Snapdragon 8 Gen 3 chipset optimized for Galaxy. For the S24 and S24+ models, it is using a mix of Qualcomm systems-on-chips (SoCs) and its own Exynos chipsets.

“The Galaxy S24 series devices, together with Google’s Pixel range, mark the dawn of the consumerisation of AI in smartphones,” Ben Wood, chief analyst at CCS Insight, told CNBC. “This is a trend that will be echoed by all smartphone makers, including Apple, as they increasingly add a growing number of AI-powered capabilities to their new devices.”

“This launch sees Samsung betting on features powered by artificial intelligence to reignite consumers’ interest in smartphones at a time when incremental hardware updates have seen sales slow. Google has been the trailblazer with its Pixel devices and there is little question this is going to be a recurring theme going forward, not just for smartphones but across all consumer electronics.”

AI is the name of the game

The Samsung Galaxy S24 Ultra is the main event for most gadget enthusiasts, and, for the most part, it doesn’t look much different from the Galaxy S23 Ultra.

That’s because Samsung isn’t changing an awful lot with the hardware. It still comes in the same size as its predecessor — the display is 6.8 inches, measured diagonally, though the phone is flatter this time round. The S23 Ultra had more curvature to it.

The big upgrade to the external hardware with this model is that it’s encased in titanium, making it a lot sturdier than the S23 Ultra.

The main difference this time round is what’s inside: Samsung is going big on artificial intelligence. A key focus for Samsung, as for other smartphone makers, is now on-device AI: the ability to carry out AI workloads directly on a device rather than over the cloud.

The Samsung Galaxy S24 Ultra has a display that can reach 2,600 nits at peak brightness, making it the brightest on a Samsung phone to date, according to the company.

Samsung

Samsung said its new Galaxy S24 Ultra will come with a raft of new AI features, many of which are powered by Qualcomm’s Snapdragon 8 Gen 3 chipset for mobile, which is tailored for AI devices.

It signals something that a lot of smartphone makers have been focusing on recently: consumers aren’t getting as excited by new smartphone upgrades as they used to, so phone makers have had to come up with ideas to grab people’s attention again and rev up excitement in the market.

One feature Samsung is loading into the Galaxy S24 range is the ability to circle locations or items the camera is pointed at, or objects in a picture already taken, and then look up results on what those things are.

So, for instance, if you see a landmark or a shoe you want to buy, you can circle that object and the AI will show you relevant results on Google.

Another feature Samsung touted is the ability to use AI to edit photos. Users can edit reflections out of pictures they’ve taken, for instance a photo of themselves in front of a window, or move a person from one side of the room to the other by dragging them from left to right.

Samsung also showcased live transcription features with its latest smartphones.

When calling someone who’s speaking in French, for instance, a user can pull up a transcription that’s fed through to them in real time. Users can also record a conversation between two people and get it transcribed, with the AI assigning a label to each speaker, similar to transcription products like Otter.ai.

Paolo Pescatore, analyst at PP Foresight, told CNBC that Samsung “must focus its efforts on retaining its core loyal premium base.”

“Arguably [Samsung] has done more than enough with the new features powered by its own AI platform,” Pescatore said. “This potentially could be the start of a new era for smartphone[s], representing a key super cycle for Samsung.”

“With this in mind, Samsung will have to entice users with a range of competitor offers to suit everyone; this includes older Samsung owners who will inevitably be looking for a much needed upgrade.”

AI watermarking

Another thing Samsung’s had to think about is what its AI features mean for things like privacy and copyright infringement.

The past year has seen countless examples of people using AI to create images and other creative media and pass them off as their own work, even when, in some cases, the output is derivative of, or even looks identical to, artists’ work.

So when a Galaxy S24 user uses AI to modify a photo, Samsung will keep a log of what was changed with AI and store it in the metadata. It’ll also add an icon in the bottom left corner to show that the image has been edited using AI, kind of like a watermark.

At the Samsung briefing in King’s Cross, some analysts and reporters were able to crop this icon out just by using Samsung’s in-app cropping feature — though the icon is still kept in the metadata.

“AI-powered image and video manipulation raises some ethical questions, particularly given the recent media attention around deep fake content,” Wood told CNBC. “The addition of a watermark and updated metadata for altered content is a constructive step by Samsung and I’m sure others will follow.”

“The success of Samsung’s AI-based features will largely depend on Samsung’s ability to raise consumer awareness and engagement via its marketing for the Galaxy S24 portfolio,” he added. “Success will require crisp communication of the benefits and continued expansion of the use cases.”

OpenAI takes stake in Thrive Holdings to help accelerate enterprise AI adoption

Sam Altman, CEO of OpenAI, attends the annual Allen and Co. Sun Valley Media and Technology Conference at the Sun Valley Resort in Sun Valley, Idaho, on July 8, 2025.

David A. Grogan | CNBC

OpenAI on Monday announced it is taking an ownership stake in Thrive Holdings, a company that was launched by one of its major investors, Thrive Capital, in April.

The startup said it will embed engineering, research and product teams within Thrive Holdings’ companies to help accelerate their AI adoption and boost cost efficiency.

Thrive Holdings buys, owns and runs companies that it believes could benefit from technologies like artificial intelligence. It operates in sectors that are “core to the real economy,” starting with accounting and IT services, according to its website.

OpenAI, which is valued at $500 billion, did not disclose the financial terms of the agreement.

“We are excited to extend our partnership with OpenAI to embed their frontier models, products, and services into sectors we believe have tremendous potential to benefit from technological innovation and adoption,” Joshua Kushner, CEO and founder of Thrive Capital and Thrive Holdings, said in a statement.

It’s the latest example of OpenAI’s circular dealmaking.

In recent months, the company has taken stakes in infrastructure partners like Advanced Micro Devices and CoreWeave.

The partnership is structured in a way that aligns the incentives of OpenAI and Thrive Holdings long term, according to a person familiar with the deal, who asked not to be named because the details are private.

If Thrive Holdings’ companies succeed, the size of OpenAI’s stake will grow.  

It also acts as a way for OpenAI to get compensated for its services, according to another person familiar with the agreement who declined to be named because the details are confidential.

“This partnership with Thrive Holdings is about demonstrating what’s possible when frontier AI research and deployment are rapidly deployed across entire organizations to revolutionize how businesses work and engage with customers,” OpenAI COO Brad Lightcap said in a statement.

OpenAI also announced a collaboration with the consulting firm Accenture on Monday.

The startup said its business offering, ChatGPT Enterprise, will roll out to “tens of thousands” of Accenture employees.

WATCH: OpenAI taps Foxconn to build AI hardware in the U.S.

Runway rolls out new AI video model that beats Google, OpenAI in key benchmark

Mustafa Hatipoglu | Anadolu | Getty Images

Artificial intelligence startup Runway on Monday announced Gen 4.5, a new video model that outperforms similar models from Google and OpenAI in an independent benchmark.

Gen 4.5 allows users to generate high-definition videos based on written prompts that describe the motion and action they want. Runway said the model is good at understanding physics, human motion, camera movements and cause and effect.

The model holds the No. 1 spot on the Video Arena leaderboard, which is maintained by the independent AI benchmarking and analysis company Artificial Analysis. To determine the text-to-video model rankings, people compare two different model outputs and vote for their favorite without knowing which companies are behind them.

Google’s Veo 3 model holds second place on the leaderboard, and OpenAI’s Sora 2 Pro model is in seventh place.  

“We managed to out-compete trillion-dollar companies with a team of 100 people,” Runway CEO Cristóbal Valenzuela told CNBC in an interview. “You can get to frontiers just by being extremely focused and diligent.”

Runway was founded in 2018 and earned a spot on CNBC’s Disruptor 50 list this year. It conducts AI research and builds video and world models, which are models that are trained on video and observational data to better reflect how the physical world works.

The startup’s customers include media organizations, studios, brands, designers, creatives and students. Its valuation has swelled to $3.55 billion, according to PitchBook.

Valenzuela said Gen 4.5 was codenamed “David” in a nod to the biblical story of David and Goliath. The model was “an overnight success that took like seven years,” he said. 

“It does feel like a very interesting moment in time where the era of efficiency and research is upon us,” Valenzuela said. “[We’re] excited to be able to make sure that AI is not monopolized by two or three companies.” 

Gen 4.5 is rolling out gradually, but it will be available to all of Runway’s customers by the end of the week. Valenzuela said it’s the first of several major releases that the company has in store.

“It will be available through Runway’s platform, its application programming interface and through some of the company’s partners,” he said.

WATCH: We tested OpenAI’s Sora 2 AI-video app to find out why Hollywood is worried

Nvidia takes $2 billion stake in Synopsys with expanded computing power partnership

Nvidia on Monday announced it has purchased $2 billion of Synopsys‘ common stock as part of a strategic partnership to accelerate computing and artificial intelligence engineering solutions.

As part of the multiyear partnership, Nvidia will help Synopsys accelerate its portfolio of compute-intensive applications, advance agentic AI engineering, expand cloud access and develop joint go-to-market initiatives, according to a release. Nvidia said it purchased Synopsys’ stock at $414.79 per share.

“Our partnership with Synopsys harnesses the power of Nvidia accelerated computing and AI to reimagine engineering and design — empowering engineers to invent the extraordinary products that will shape our future,” Nvidia CEO Jensen Huang said in the release.

Synopsys stock climbed 3%. Nvidia shares rose slightly.

Nvidia has been one of the biggest beneficiaries of the AI boom because it makes the graphics processing units, or GPUs, that are key to building and training AI models and running large workloads.

Synopsys offers services including silicon design and electronic design automation that help its customers build AI-powered products.

“The complexity and cost of developing next-generation intelligent systems demands engineering solutions with a deeper integration of electronics and physics, accelerated by AI capabilities and compute,” Synopsys CEO Sassine Ghazi said in a statement.

The partnership is not exclusive, which means that Nvidia and Synopsys can still work with other companies in the ecosystem.

Both companies will hold a press conference to discuss the announcement at 10 a.m. ET.
