A team at Google has proposed using AI technology to create a “bird’s-eye” view of users’ lives from mobile phone data such as photographs and searches.

Dubbed “Project Ellmann,” after biographer and literary critic Richard David Ellmann, the idea would be to use LLMs like Gemini to ingest search results, spot patterns in a user’s photos, create a chatbot, and “answer previously impossible questions,” according to a copy of a presentation viewed by CNBC. Ellmann’s aim, it states, is to be “Your Life Story Teller.”

It’s unclear if the company has plans to produce these capabilities within Google Photos, or any other product. Google Photos has more than one billion users and four trillion photos and videos, according to a company blog post.

Project Ellmann is just one of many ways Google is proposing to create or improve its products with AI technology. On Wednesday, Google launched Gemini, its latest and “most capable” AI model yet, which in some cases outperformed OpenAI’s GPT-4. The company plans to license Gemini to a wide range of customers through Google Cloud for use in their own applications. One of Gemini’s standout features is that it’s multimodal, meaning it can process and understand information beyond text, including images, video and audio.

A product manager for Google Photos presented Project Ellmann alongside Gemini teams at a recent internal summit, according to documents viewed by CNBC. They wrote that the teams had spent the past few months determining that large language models are the ideal technology to make this bird’s-eye approach to one’s life story a reality.

Ellmann could pull in context using biographies, previous moments and subsequent photos to describe a user’s photos more deeply than “just pixels with labels and metadata,” the presentation states. It proposes identifying a series of moments such as university years, Bay Area years and years as a parent.

“We can’t answer tough questions or tell good stories without a bird’s-eye view of your life,” one description reads alongside a photo of a small boy playing with a dog in the dirt.

“We trawl through your photos, looking at their tags and locations to identify a meaningful moment,” a presentation slide reads. “When we step back and understand your life in its entirety, your overarching story becomes clear.”
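
As a rough illustration of that kind of trawling, the sketch below groups photos into candidate “moments” by clustering them on time and coarse location. It is only a minimal sketch of the idea as described in the slides, not Google’s implementation; the Photo fields and the two-day gap threshold are assumptions for illustration.

```python
# Hypothetical sketch: cluster photos into candidate "moments" by time and place.
# The Photo fields and the gap threshold are illustrative assumptions, not
# details of Google's actual system.
from dataclasses import dataclass
from datetime import datetime, timedelta


@dataclass
class Photo:
    timestamp: datetime
    location: str        # coarse place label derived from photo metadata
    tags: list[str]      # labels such as "dog", "beach", "birthday cake"


def group_into_moments(photos: list[Photo],
                       gap: timedelta = timedelta(days=2)) -> list[list[Photo]]:
    """Group photos taken close together in time at the same coarse location."""
    moments: list[list[Photo]] = []
    for photo in sorted(photos, key=lambda p: p.timestamp):
        if (moments
                and photo.location == moments[-1][-1].location
                and photo.timestamp - moments[-1][-1].timestamp <= gap):
            moments[-1].append(photo)   # continue the current moment
        else:
            moments.append([photo])     # start a new moment
    return moments
```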

The presentation said large language models could infer moments like a user’s child’s birth. “This LLM can use knowledge from higher in the tree to infer that this is Jack’s birth, and that he’s James and Gemma’s first and only child.” 

“One of the reasons that an LLM is so powerful for this bird’s-eye approach, is that it’s able to take unstructured context from all different elevations across this tree, and use it to improve how it understands other regions of the tree,” a slide reads, alongside an illustration of a user’s various life “moments” and “chapters.”
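
To make the “tree” idea concrete, here is a minimal, hypothetical sketch of how chapters and the moments beneath them might be flattened into text an LLM receives as context before answering a question. The Chapter and Moment structures are assumptions for illustration; the presentation does not describe an actual data format.

```python
# Hypothetical sketch: serialize a life-story "tree" of chapters and moments
# into plain-text context for an LLM prompt. Not Google's implementation.
from dataclasses import dataclass, field


@dataclass
class Moment:
    title: str                # e.g. "Hospital visit, March 2021"
    details: list[str]        # free-text notes derived from photo tags and metadata


@dataclass
class Chapter:
    title: str                # e.g. "Bay Area years"
    moments: list[Moment] = field(default_factory=list)


def build_context(chapters: list[Chapter]) -> str:
    """Walk the tree top-down so the model sees both the high-level chapters
    and the unstructured detail underneath them."""
    lines: list[str] = []
    for chapter in chapters:
        lines.append(f"Chapter: {chapter.title}")
        for moment in chapter.moments:
            lines.append(f"  Moment: {moment.title}")
            lines.extend(f"    - {detail}" for detail in moment.details)
    return "\n".join(lines)


# The resulting string would be prepended to a question such as
# "Whose birth do these photos show?" before it is sent to the model.
```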

Presenters gave another example of the system determining that one user had recently attended a class reunion. “It’s exactly 10 years since he graduated and is full of faces not seen in 10 years so it’s probably a reunion,” the team inferred in its presentation.

The team also demonstrated “Ellmann Chat,” with the description: “Imagine opening ChatGPT but it already knows everything about your life. What would you ask it?”

It displayed a sample chat in which a user asks, “Do I have a pet?” The chatbot answers that yes, the user has a dog that wore a red raincoat, then offers the dog’s name and the names of the two family members it’s most often seen with.

In another example for the chat, a user asked when their siblings last visited; in yet another, a user asked it to list towns similar to where they live because they were thinking of moving. Ellmann offered answers to both.

Ellmann also presented a summary of the user’s eating habits, other slides showed. “You seem to enjoy Italian food. There are several photos of pasta dishes, as well as a photo of a pizza.” It also said that the user seemed to enjoy new food because one of their photos had a menu with a dish it didn’t recognize.

The technology also determined what products the user was considering purchasing, as well as their interests, work and travel plans, based on the user’s screenshots, the presentation stated. It also suggested it would be able to identify their favorite websites and apps, citing Google Docs, Reddit and Instagram as examples.

A Google spokesperson told CNBC, “Google Photos has always used AI to help people search their photos and videos, and we’re excited about the potential of LLMs to unlock even more helpful experiences. This is a brainstorming concept a team is at the early stages of exploring. As always, we’ll take the time needed to ensure we do it responsibly, protecting users’ privacy as our top priority.”

Big Tech’s race to create AI-driven ‘Memories’

The proposed Project Ellmann could help Google in the arms race among tech giants to create more personalized life memories.

Google Photos and Apple Photos have for years served “memories” and generated albums based on trends in photos.

In November, Google announced that with the help of AI, Google Photos can now group together similar photos and organize screenshots into easy-to-find albums.

Apple announced in June that its latest software update will include the ability for its photos app to recognize people, dogs and cats in users’ photos. It already sorts faces and allows users to search for them by name.

Apple also announced an upcoming Journal App, which will use on-device AI to create personalized suggestions to prompt users to write passages that describe their memories and experiences based on recent photos, locations, music and workouts.

But Apple, Google and other tech giants are still grappling with the complexities of displaying and identifying images appropriately.

For instance, Apple and Google still avoid labeling gorillas after reports in 2015 that Google had mislabeled Black people as gorillas in its Photos app. A New York Times investigation this year found that Apple’s software and Google’s Android software, which underpins most of the world’s smartphones, had turned off the ability to visually search for primates for fear of labeling a person as an animal.

Companies including Google, Facebook and Apple have over time added controls to minimize unwanted memories, but users have reported that such memories still sometimes surface and that suppressing them requires toggling through several settings.

AI is disrupting the advertising business in a big way — industry leaders explain how

An AI assistant on display at Mobile World Congress 2024 in Barcelona.

Angel Garcia | Bloomberg | Getty Images

Artificial intelligence is shaking up the advertising business and “unnerving” investors, one industry leader told CNBC.

“I think this AI disruption … unnerving investors in every industry, and it’s totally disrupting our business,” Mark Read, the outgoing CEO of British advertising group WPP, told CNBC’s Karen Tso on Tuesday.

The advertising market is under threat from emerging generative AI tools that can produce content at a rapid pace. The past couple of years have seen the rise of a number of AI image and video generators, including OpenAI’s DALL-E, Google’s Veo and Midjourney.

In his first interview since announcing he would step down as WPP boss, Read said that AI is “going to totally revolutionize our business.”

“AI is going to make all the world’s expertise available to everybody at extremely low cost,” he said at London Tech Week. “The best lawyer, the best psychologist, the best radiologist, the best accountant, and indeed, the best advertising creatives and marketing people often will be an AI, you know, will be driven by AI.”

Read said that 50,000 WPP employees now use WPP Open, the company’s own AI-powered marketing platform.

“That, I think, is my legacy in many ways,” he added.

Structural pressure on creative parts of the ad business is driving industry consolidation, Read also noted, adding that companies would need to “embrace” the way in which AI will impact everything from creating briefs and media plans to optimizing campaigns.

A report from Forrester released in June last year showed that more than 60% of U.S. ad agencies are already making use of generative AI, with a further 31% saying they’re exploring use cases for the technology.

‘Huge transformation’

Read is not alone in this view. Advertising is undergoing a “huge transformation” due to the disruptive effects of AI, Maurice Levy, chairman emeritus of French advertising giant Publicis Groupe, told CNBC at the Viva Tech conference in Paris.

He noted that AI image and video generation tools are speeding up content production drastically, while automated messaging systems can now achieve “personalization at scale like never before.”

However, the Publicis chief stressed that AI should only be considered a tool that people can use to augment their lives.

“We should not believe that AI is more than a tool,” he added.

And while AI is likely to impact some jobs, Levy ultimately thinks it will create more roles than it destroys.

“Will AI replace me, and will AI kill some jobs? I think that AI, yes, will destroy some jobs,” Levy conceded. However, he added that, “more importantly, AI will transform jobs and will create more jobs. So the net balance will be probably positive.”

This, he says, would be in keeping with the labor impacts of previous technological inventions like the internet and smartphones.

“There will be more autonomous work,” Levy added.

Still, Nicole Denman Greene, an analyst at Gartner, warns that brands should be wary of provoking a negative reaction from consumers who are skeptical of AI’s impact on human creativity.

According to a Gartner survey from September, 82% of consumers said firms using generative AI should prioritize preserving human jobs, even if it means lower profits.

“Pivot from what AI can do to what it should do in advertising,” Greene told CNBC.

“What it should do is help create groundbreaking insights, unique execution to reach diverse and niche audiences, push boundaries on what ‘marketing’ is and deliver more brand differentiated, helpful and relevant personalized experiences, including deliver on the promise of hyper-personalization.”

Nvidia-mania took over Europe this week. Here’s what I learned from Jensen Huang

Jensen Huang, co-founder and chief executive officer of Nvidia Corp., left, and Emmanuel Macron, France’s president at the 2025 VivaTech conference in Paris, France, on Wednesday, June 11, 2025.

Nathan Laine | Bloomberg | Getty Images

Nvidia boss Jensen Huang has been on a tour of Europe this week, bringing excitement and intrigue everywhere he visited.

His message was clear — Nvidia is the company that can help Europe build its artificial intelligence infrastructure so the region can take control of its own destiny with the transformative technology.

I’ve been in London and Paris this week following Huang around as he met with U.K. Prime Minister Keir Starmer, French President Emmanuel Macron, journalists, fans and analysts, and gave a keynote at Nvidia’s GTC event in the French capital.

Here’s what I saw and the key things I learned.

The draw of Huang is huge

Huang is truly the current rockstar of the tech world.

At London Tech Week, the lines were long and the auditorium packed to hear him speak.

The GTC event in Paris was full too. It was like going to a music concert or sporting event. There were GTC Paris T-shirts on the back of every chair and even a merchandise store.

Nvidia GTC in Paris on 11 June 2025

Arjun Kharpal

The aura of Huang really struck me when, after a question-and-answer session with him and a room full of attendees, most people lined up to take pictures or selfies with him.

Macron and Starmer both wanted to be seen on stage with him.

Nvidia positions itself as Europe’s AI hope

Nvidia’s key products are its graphics processing units (GPUs), which are used to train and run AI applications.

But Huang has positioned Nvidia as more than a chip company. During the week, he described Nvidia as an infrastructure firm. He also said AI should be seen as infrastructure like electricity.

His pitch to each country was that Nvidia could be the company to help it build out that infrastructure.

“We believe that in order to compete, in order to build a meaningful ecosystem, Europe needs to come together and build capacity that is joint,” Huang said during a speech at the Viva Tech conference in Paris on Wednesday.

Jensen Huang, CEO of Nvidia, speaks during the Viva Technology conference dedicated to innovation and startups at Porte de Versailles exhibition center in Paris, France, June 11, 2025.

Gonzalo Fuentes | Reuters

One of the most significant partnerships announced this week is between French startup Mistral and Nvidia to build a so-called AI cloud using the latter’s GPUs.

Huang spoke a lot during the week about “sovereign AI,” the concept of building data centers within a country’s borders that serve its population rather than relying on servers located overseas. This has been an important topic among European policymakers and companies.

Huang also heaped praise on the U.K., France and Europe more broadly when it came to their potential in the AI industry.

China still behind but catching up

On Thursday, Huang decided to do a tour of Nvidia’s booth and I managed to catch him to get a few words on CNBC’s “Squawk Box Europe.”

A key topic of that discussion was China. Nvidia has not been able to sell its most advanced chips to China because of U.S. export controls, and even less sophisticated semiconductors are being blocked. In its last quarterly results, Nvidia took a $4.5 billion hit on unsold inventory.

I asked Huang about how China was progressing with AI chips, in particular referencing Huawei, the Chinese tech giant that is trying to make semiconductor products to rival Nvidia.

Huang said Huawei is a generation behind Nvidia, but because China has plenty of energy, Huawei can simply use more chips to get results.

“If the United States doesn’t want to partake, participate in China, Huawei has got China covered, and Huawei has got everybody else covered,” Huang said.

Huang is also concerned about the strategic implications of U.S. companies not having access to China.

“It’s even more important that the American technology stack is what AI developers around the world build on,” Huang said.

Reading between the lines somewhat: Huang sees a world where Chinese AI technology advances. Some countries may decide to build their AI infrastructure with Chinese companies rather than American ones, which in turn could give Chinese companies a chance to be in the AI race.

Quantum, robotics and driverless tech are the future

Nvidia boss Jensen Huang delivers a speech on stage talking about robotics.

Arjun Kharpal | CNBC

During his keynote at GTC Paris on Wednesday, Huang also addressed quantum computing, saying the technology is reaching “an inflection point.”

Quantum computers are widely believed to be able to solve complex problems that classical computers can’t. This could include things like discovering new drugs or materials.

Tesla faces protests in Austin over Musk’s robotaxi plans

In an aerial view, a Tesla showroom at 12845 N. US 183 Highway Service Road is seen after police were called for a suspicious device in Austin, Texas, on March 24, 2025.

Brandon Bell | Getty Images

With Elon Musk looking to June 22 as his tentative start date for Tesla’s pilot robotaxi service in Austin, Texas, protesters are voicing their opposition.

Public safety advocates and political protesters, upset with Musk’s work with the Trump administration, joined together in downtown Austin on Thursday to express their concerns about the robotaxi launch. Members of the Dawn Project, Tesla Takedown and Resist Austin say that Tesla’s partially automated driving systems have safety problems.

In the U.S., Tesla sells its cars with a standard Autopilot package or a premium Full Self-Driving option (also known as FSD, or FSD Supervised). Automobiles with these systems, which include features like automatic lane keeping, steering and parking, have been involved in dozens of collisions, some fatal, according to data tracked by the National Highway Traffic Safety Administration.

Tesla’s robotaxis, which Musk showed off in a video clip on X earlier this week, are new versions of the company’s popular Model Y vehicles, equipped with a future release of Tesla’s FSD software. That “unsupervised” FSD, or robotaxi technology, is not yet available to the public.

Tesla critics with The Dawn Project, which calls itself a tech-safety and security education business, brought a version of the Model Y with relatively recent FSD software (version 2025.14.9) to show residents of Austin how it works.

In their demonstration on Thursday, they showed how a Tesla with FSD engaged zoomed past a school bus with its stop sign extended and ran over a child-sized mannequin placed in front of the vehicle.

Dawn Project CEO Dan O’Dowd also runs Green Hills Software, which sells technology to Tesla competitors, including Ford and Toyota.

Stephanie Gomez, who attended the demonstration, told CNBC that she didn’t like the role Musk had been playing in the government. Additionally, she said she has no confidence in Tesla’s safety standards and said there’s been a lack of transparency from Tesla regarding how its robotaxis will work.

Another protester, Silvia Revelis, said she also opposed Musk’s political activity, but that safety is the biggest concern.

“Citizens have not been able to get safety testing results,” she said. “Musk believes he’s above the law.”

Tesla didn’t immediately respond to a request for comment.

— Todd Wiseman contributed to this report.
