The Amazon logo displayed on a smartphone and a PC screen.
Pavlo Gonchar | LightRocket via Getty Images

Search for “toothpaste” on Amazon, and the top of the web page will show you a mix of popular brands like Colgate, Crest and Sensodyne. Try a separate search for “deodorant” and you’ll first see products from Secret, Dove and Native.

Look a little closer, though, and you’ll notice that those listings are advertisements with the “sponsored” label affixed to them. Amazon is generating hefty revenue from the top consumer brands because getting valuable placement on the biggest e-commerce site comes with a rising price tag.

“There’s fewer organic search results on the page, so that increasingly means the only way to get on the page is to buy your way on there,” said Jason Goldberg, chief commerce strategy officer at advertising firm Publicis.

For consumers looking for toothpaste on Amazon, getting to unpaid results requires two full swipes up on the mobile app.

An example of a mobile search for “toothpaste” on Amazon shows a sponsored brand ad at the top of results.

Until recently, Amazon put two or three sponsored products at the top of search results. Now, there may be as many as six sponsored products that appear ahead of any organic results, with more promotions elsewhere on the page, said Juozas Kaziukenas, who runs e-commerce research firm Marketplace Pulse.

The number of ads that appear varies depending on the exact search term and other factors, such as whether users are shopping on desktop, on mobile or in the Amazon app, Amazon says.

While Amazon doesn’t break out advertising revenue, ads account for the majority of the company’s “other” sales. That category was the fastest-growing part of Amazon’s overall business in the second quarter, with revenue soaring 87% from a year earlier to more than $7.9 billion.

In 2018, Amazon leapfrogged Microsoft to become the third-largest ad platform in the U.S., trailing only Google and Facebook. Amazon is capitalizing on its market control, knowing that its website or app is where many consumers begin their online shopping journey.

Kaziukenas said Amazon and founder Jeff Bezos have completely transformed from being anti-advertising. It’s become such a lucrative business that ads “have replaced most of the functionality on the site,” he said.

An Amazon spokesperson said there are no dedicated ad slots within search results, meaning that a user may see one ad, multiple ads or none at all. The company said advertising is an optional service for brands and sellers, but that using it can improve visibility of their products.

“Like all retailers, we design our store to help customers easily find and discover the right brands and products, and sponsored ads is one of the many ways we do this,” the spokesperson said in an email. “In all cases we work back from the most useful customer experience and the relevance of the results surfaced, regardless of how they’re presented to the shopper.”

Big consumer products makers aren’t the only ones taking up the most valuable virtual real estate. Amazon is also populating search results with its own products. For example, a search for “shampoo” pulls up a promotion for a bottle of Amazon brand Solimo before ads for products from Pantene, Nexxus, L’Oreal and others.

Sponsored product ads accounted for roughly 73% of retailers’ ad spend on Amazon in the second quarter, according to digital marketing agency Merkle. Last year, Amazon began replacing product recommendations in listings with product ads.

Amazon has also added new ad formats like video ads and sponsored brands posts, which feature a single brand and several product listings in a banner at the top of the page.

Ad prices going up

For brand owners, the price of doing business on Amazon is surging as the company expands its dominance in online commerce.

The cost per click for Amazon search advertising was $1.27 in August, up from 86 cents a year ago, according to a survey of more than 300 Amazon sellers conducted by Canopy Management, an agency that helps manage businesses on Amazon.

Companies that don’t pay the toll are finding their listings buried in search results. At the same time, sellers are paying more overall to Amazon for things like transaction fees and fulfillment services.

“It’s not uncommon now for brands to be spending 50% or more of their product price on various fees to be selling on Amazon,” Kaziukenas said.

Competition has also intensified as a result of the rise of Amazon aggregators, venture-backed companies that are raising big money from outside investors to acquire independent sellers. Some smaller sellers are concerned they may not be able to compete against deep-pocketed aggregators, which are bringing “massive budgets to be spent on Amazon, also in the form of advertising,” Kaziukenas said.

“They’re going from competing against other, smaller sellers to now competing against massive and well-funded sellers,” he said.

WATCH: Inside the rapid growth of Amazon Logistics and how it’s taking on third-party shipping


OpenAI introduces safety models that other sites can use to classify harms


Sam Altman, CEO of OpenAI, attends the annual Allen and Co. Sun Valley Media and Technology Conference at the Sun Valley Resort in Sun Valley, Idaho, on July 8, 2025.

David A. Grogan | CNBC

OpenAI on Wednesday announced two reasoning models that developers can use to classify a range of online safety harms on their platforms. 

The artificial intelligence models are called gpt-oss-safeguard-120b and gpt-oss-safeguard-20b, and their names reflect their sizes. They are fine-tuned, or adapted, versions of OpenAI’s gpt-oss models, which the company announced in August. 

OpenAI is introducing them as so-called open-weight models, which means their parameters, or the elements that improve the outputs and predictions during training, are publicly available. Open-weight models can offer transparency and control, but they are different from open-source models, whose full source code becomes available for users to customize and modify.

Organizations can configure the new models to their specific policy needs, OpenAI said. And since they are reasoning models that show their work, developers will have more direct insight into how they arrive at a particular output. 

For instance, a product reviews site could develop a policy and use gpt-oss-safeguard models to screen reviews that might be fake, OpenAI said. Similarly, a video game discussion forum could classify posts that discuss cheating.
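To make that workflow concrete, here is a minimal sketch of how a developer might run such a policy check, assuming the smaller model's weights are pulled from Hugging Face with the transformers library. The repo id, the free-form prompt layout and the output handling shown below are illustrative assumptions, not OpenAI's documented interface.

```python
# Minimal sketch (assumptions: the Hugging Face repo id, the free-form prompt
# layout and suitable local hardware). Not OpenAI's documented interface.
from transformers import pipeline

# Load the smaller open-weight safeguard model as a text-generation pipeline.
classifier = pipeline("text-generation", model="openai/gpt-oss-safeguard-20b")

# A site-specific policy, written by the platform rather than by OpenAI.
policy = (
    "Policy: flag product reviews that appear fake, for example reviews that "
    "admit the author never used the product or that copy marketing text "
    "verbatim. Respond with 'violation' or 'allowed' plus a short rationale."
)
review = "Best toothpaste ever!!! Five stars. (Haven't actually opened it.)"

# Because these are reasoning models, the generated text should include the
# model's rationale alongside the final label.
output = classifier(f"{policy}\n\nReview to classify: {review}", max_new_tokens=200)
print(output[0]["generated_text"])
```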

OpenAI developed the models in partnership with Robust Open Online Safety Tools, or ROOST, an organization dedicated to building safety infrastructure for AI. Discord and SafetyKit also helped test the models. They are initially available in a research preview, and OpenAI said it will seek feedback from researchers and members of the safety community.

As part of the launch, ROOST is establishing a model community for researchers and practitioners who are using AI models to protect online spaces.

The announcement could help OpenAI placate some critics who have accused the startup of commercializing and scaling too quickly at the expense of AI ethics and safety. The startup is valued at $500 billion, and its consumer chatbot, ChatGPT, has surpassed 800 million weekly active users. 

On Tuesday, OpenAI said it’s completed its recapitalization, cementing its structure as a nonprofit with a controlling stake in its for-profit business. OpenAI was founded in 2015 as a nonprofit lab, but has emerged as the most valuable U.S. tech startup in the years since releasing ChatGPT in late 2022.

“As AI becomes more powerful, safety tools and fundamental safety research must evolve just as fast — and they must be accessible to everyone,” ROOST President Camille François said in a statement.

Eligible users can download the model weights on Hugging Face, OpenAI said.

WATCH: OpenAI finalizes recapitalization plan



Fiserv stock craters 44%, on pace for worst day ever after company slashes guidance


Cheng Xin | Getty Images News | Getty Images

Fiserv's stock plummeted 44% Wednesday and headed for its worst day ever after the fintech company cut its earnings outlook and shook up some of its leadership team.

“Our current performance is not where we want it to be nor where our stakeholders expect it to be,” wrote CEO Mike Lyons in a release.

For the full year, Fiserv now expects adjusted earnings of $8.50 to $8.60 a share, down from a previous forecast of $10.15 to $10.30. Revenue is expected to grow 3.5% to 4%, versus a prior estimate of 10%.

For the quarter, adjusted earnings came in at $2.04 per share, falling short of the LSEG estimate of $2.64. Revenue rose about 1% from a year ago to $4.92 billion, missing the $5.36 billion forecast. Net income grew to $792 million from $564 million in the year-ago period.

Along with the results, Fiserv announced a slew of executive and board changes.


Beginning in December, operating chief Takis Georgakopoulos will serve as co-president alongside Dhivya Suryadevara, who was most recently CEO of Optum Financial Services and Optum Insight at UnitedHealth Group. Fiserv also promoted Paul Todd to finance chief.

“We also have opportunities in front of us to improve our results and execution, and I am confident that these are the right leaders to help guide Fiserv to long-term success,” Lyons wrote in a separate release.

Fiserv also announced that Gordon Nixon, Céline Dufétel and Gary Shedlin would join its board at the beginning of 2026, with Nixon serving as independent chairman of the board. Shedlin is slated to lead the audit committee.

The Milwaukee, Wisconsin-based company also announced an action plan that Lyons said would better situate the company to “drive sustainable, high-quality growth” and reach its “full potential.”

Fiserv said it will move its stock from the NYSE to the Nasdaq next month, where it will trade under the ticker symbol “FISV.”

Fiserv did not immediately respond to CNBC’s request for comment.


Character.AI to block romantic AI chats for minors a year after teen’s suicide


Cfoto | Future Publishing | Getty Images

Character.AI on Wednesday announced that it will soon shut off the ability for minors to have free-ranging chats, including romantic and therapeutic conversations, with the startup’s artificial intelligence chatbots.

The Silicon Valley startup, which allows users to create and interact with character-based chatbots, announced the move as part of an effort to make its app safer and more age-appropriate for those under 18.

Last year, 14-year-old Sewell Setzer III committed suicide after forming sexual relationships with chatbots on Character.AI's app. Many AI developers, including OpenAI and Facebook parent Meta, have come under scrutiny after users who formed relationships with chatbots committed suicide or otherwise died.

As part of its safety initiatives, Character.AI said on Wednesday that it will limit users under 18 to two hours of open-ended chats per day, and will eliminate those types of conversations for minors by Nov. 25.

“This is a bold step forward, and we hope this raises the bar for everybody else,” Character.AI CEO Karandeep Anand told CNBC.

In October 2024, Character.AI introduced changes to prevent minors from engaging in sexual dialogues with its chatbots. The same day, Sewell's family filed a wrongful death lawsuit against the company.

To enforce the policy, the company said it’s rolling out an age assurance function that will use first-party and third-party software to monitor a user’s age. The company is partnering with Persona, the same firm used by Discord and others, to help with verification.

In 2024, Character.AI's founders and certain members of its research team joined Google DeepMind, the search giant's AI unit. It's one of a number of such deals announced by leading tech companies to speed their development of AI products and services. The agreement called for Character.AI to provide Google with a non-exclusive license for its current large language model, or LLM, technology.

Since Anand took over as CEO in June, 10 months after the Google deal, Character.AI has added more features to diversify its offering from chatbot conversations. Those features include a feed for watching AI-generated videos as well as storytelling and roleplay formats.

Although Character.AI will no longer allow teenagers to engage in open-ended conversations on its app, those users will still have access to the app’s other offerings, said Anand, who was previously an executive at Meta.

Of the startup’s roughly 20 million monthly active users, about 10% are under 18. Anand said that percentage has declined as the app has shifted its focus toward storytelling and roleplaying.

The app makes money primarily through advertising and a $10 monthly subscription. Character.AI is on track to end the year with a run rate of $50 million, Anand said.

Additionally, the company on Wednesday announced that it will establish and fund an independent AI Safety Lab dedicated to safety research for AI entertainment. Character.AI didn’t say how much it will provide in funding, but the startup said it’s inviting other companies, academics, researchers and policy makers to join the nonprofit effort.

Regulatory pressure

Character.AI is one of many AI chatbot companies facing regulatory scrutiny on the matter of teens and AI companions.

In September, the Federal Trade Commission issued an order to seven companies, including Character.AI's parent, as well as Alphabet, Meta, OpenAI and Snap, seeking to understand their chatbots' potential effects on children and teenagers.

On Tuesday, Sens. Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., announced legislation to ban AI chatbot companions for minors. California Gov. Gavin Newsom signed a law earlier this month requiring chatbots to disclose that they are AI and to tell minors to take a break every three hours.


Rival Meta, which also offers AI chatbots, announced safety features in October that will allow parents to see and manage how their teenagers are interacting with AI characters on the company’s platforms. Parents have the option to turn off one-on-one chats with AI characters completely and can block specific AI characters.

The matter of sexualized conversations with AI chatbots has come into focus as tech companies announce different approaches to dealing with the issue.

Earlier this month, Sam Altman announced that OpenAI would allow adult users to engage in erotica with ChatGPT later this year, saying that his company is “not the elected moral police of the world.”

Microsoft AI CEO Mustafa Suleyman said last week that the software company will not provide "simulated erotica," describing sexbots as "very dangerous." Microsoft is a key investor in and partner of OpenAI.

The race to develop more realistic, human-like AI companions has been intensifying in Silicon Valley since ChatGPT's launch in late 2022. While some people are forming deep connections with AI characters, the speedy development presents ethical and safety concerns, especially for children and teenagers.

“I have a six-year-old as well, and I want to make sure that she grows up in a safe environment with AI,” Anand said.

If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.
