
With about 100 million tracks available and over 600 million subscribers, helping listeners find the music they will love has become a navigational challenge for Spotify. The promise of personalization and meaningful recommendations is what makes that vast catalog navigable, and it is central to Spotify’s mission.

The streaming audio giant’s suite of recommendation tools has grown over the years: Spotify Home feed, Discover Weekly, Blend, Daylist, and Made for You Mixes. And in recent years, there have been signs that it is working. According to data released by Spotify at its 2022 Investor Day, artist discoveries every month on Spotify had reached 22 billion, up from 10 billion in 2018, “and we’re nowhere near done,” the company stated at that time.

Over the past decade or more, Spotify has been investing in AI and, in particular, in machine learning. Its recently launched AI DJ may be its biggest bet yet that technology will let subscribers better personalize listening sessions and discover new music. The AI DJ mimics the vibe of radio by announcing song names and providing lead-ins to tracks, a touch aimed in part at easing listeners beyond their comfort zones. An existing pain point for AI algorithms — which can be excellent at giving listeners what they already like — is anticipating when you want to break out of that comfort zone.

The AI DJ combines personalization technology, generative AI, and a dynamic AI voice. Listeners can tap the DJ button when they want to hear something new, something less directly derived from their established likes. Behind the dulcet tones of an AI DJ there are people, tech experts and music experts, who aim to improve the recommendation capacity of Spotify’s tools. The company has hundreds of music editors and experts across the globe. A Spotify spokesperson said the generative AI tool allows the human experts to “scale their innate knowledge in ways never before possible.”

The data on a particular song or artist captures a few attributes: particular musical features, and which songs or artists it has typically been paired with across the millions of listening sessions the AI algorithm can access. Gathering information about a song is fairly easy, including release year, genre, and mood — from happy to danceable or melancholic. Various musical attributes, such as tempo, key, and instrumentation, are also identified. Combining this data across millions of listening sessions and other users’ preferences helps generate new recommendations, making the leap from aggregated data to assumptions about individual listeners.

In its simplest formulation, “Users who liked Y also liked Z. We know you like Y, so you might like Z,” is how an AI finds matches. And Spotify says it’s working. “Since launching DJ, we’ve found that when DJ listeners hear commentary alongside personal music recommendations, they’re more willing to try something new (or listen to a song they may have otherwise skipped),” the spokesperson said. 
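That “users who liked Y also liked Z” formulation is the core of item-based collaborative filtering. As a rough illustration (not Spotify’s actual system — the track names and the raw co-occurrence scoring here are invented for the sketch), it can be reduced to counting which tracks appear together across listening sessions:

```python
from collections import defaultdict
from itertools import combinations

# Toy listening sessions: each set holds tracks one user liked.
# All names are illustrative; this is not Spotify's real data or algorithm.
sessions = [
    {"song_y", "song_z", "song_a"},
    {"song_y", "song_z"},
    {"song_y", "song_b"},
    {"song_z", "song_a"},
]

# Count how often each pair of tracks co-occurs across sessions.
co_counts = defaultdict(int)
for session in sessions:
    for a, b in combinations(sorted(session), 2):
        co_counts[(a, b)] += 1

def recommend(liked_track, top_n=2):
    """Rank other tracks by how often they co-occur with liked_track."""
    scores = {}
    for (a, b), count in co_counts.items():
        if liked_track == a:
            scores[b] = scores.get(b, 0) + count
        elif liked_track == b:
            scores[a] = scores.get(a, 0) + count
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("song_y"))  # tracks most often paired with song_y
```

Production recommenders replace raw co-occurrence counts with learned embeddings and blend in content features like tempo, key, and mood, but the underlying intuition is the same.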

If successful, it’s not just listeners who get relief from a pain point. A great discovery tool is just as beneficial to artists seeking to build connections with new fans.

Julie Knibbe, founder & CEO of Music Tomorrow — which aims to help artists connect with more listeners by understanding how algorithms work and how to better work with them — says everyone is trying to figure out how to balance familiarity and novelty in a meaningful way, and everyone is leaning on AI algorithms to make this possible. But she says the balance between discovering new music and staying with established patterns remains a central unresolved issue for all involved, from Spotify to listeners to the artists themselves.

“Any AI is only good at what you tell them to do,” Knibbe said. “These recommender systems have been around for over a decade and they’ve become very good at predicting what you will like. What they can’t do is know what’s in your head, specifically when you want to venture out into a new musical terrain or category.” 

Spotify’s Daylist is an attempt to use generative AI to take into account established tastes, but also the varying contexts that can shape and reshape a listener’s tastes across the course of a day, and to make new recommendations that fit various moods, activities and vibes. Knibbe says it’s possible that improvements like these will continue, and that the AI will get better at finding the formula for how much novelty a listener wants, but she added, “the assumption that people want to discover new music all the time is not true.”

Most people still return, fairly happily, to familiar musical terrain and listening patterns. 

“You have various profiles of listeners, curators, experts … people put different demands on the AI,” Knibbe said. “Experts are more difficult to surprise, but they aren’t the majority of listeners, who tend to be more casual,” and whose Spotify usage, she says, often amounts to creating a “comfortable background” to daily life.

Technology optimists often speak in terms of an era of “abundance.” With 100 million songs available, but many listeners preferring the same 100 songs a million times, it’s easy to understand why a new balance is being sought. But Ben Ratliff, a music critic and author of “Every Song Ever: Twenty Ways to Listen in an Age of Musical Plenty,” says algorithms are less a solution to this problem than a further entrenchment of it.

“Spotify is good at catching onto popular sensibilities and creating a soundtrack for them,” Ratliff said. “Its Sadgirl Starter Pack playlist, for instance, has a great name and about a million and a half likes. Unfortunately, under the banner of a gift, the SSP simplifies the oceanic complexity of young-adult depression into a small collection of dependably ‘yearny’ music acts, and makes hard clichés of music and sensibility form more quickly.” 

Works of curation that are clearly made by actual people with actual preferences remain Ratliff’s preference. Even a good playlist, he says, might have been made without much intention and conscience, but just a developed sense of pattern recognition, “whether it’s patterns of obscurity or patterns of the broadly known,” he said.

Depending on the individual, AI may prove either a utopian or a dystopian tool within the 100-million-track universe. Ratliff says most users should keep it simpler in their streaming music journeys. “As long as you realize that the app will never know you in the way you want to be known, and as long as you know what you’re looking for, or have some good prompts at the ready, you can find lots of great music on Spotify.”

Reddit challenges Australia’s under-16 social media ban in High Court filing, says law curbs political speech


Reddit, the popular community-focused forum, has launched a legal challenge against Australia’s social media ban for teens under 16, arguing that the newly enacted law is ineffective and goes too far by restricting political discussion online.

In its application to Australia’s High Court, the social news and aggregation platform said the law is “invalid on the basis of the implied freedom of political communication”, saying that it burdens political communication.

Canberra’s ban came into effect on Wednesday and targeted 10 major services, including Alphabet’s YouTube, Meta’s Instagram, ByteDance’s TikTok, Reddit, Snapchat and Elon Musk’s X. All targeted platforms had agreed to comply with the policy to varying degrees.

Australia’s Prime Minister’s office, Attorney-General’s Department and other social media platforms did not immediately reply to requests for comment.

Under the law, the targeted platforms will have to take “reasonable steps” to prevent underage access, using age-verification methods such as inference from online activity, facial age estimation via selfies, uploaded IDs, or linked bank details.

Reddit’s application to the courts seeks to either declare the law invalid or exclude the platform from the provisions of the law.

In a statement to CNBC, Reddit said that while it agrees with the importance of protecting persons under 16, the law could isolate teens “from the ability to engage in age-appropriate community experiences (including political discussions).”

It also said in its application that the law “burdens political communication,” saying “the political views of children inform the electoral choices of many current electors, including their parents and their teachers, as well as others interested in the views of those soon to reach the age of maturity.”

The platform also argued that it should not be subject to the law, saying it operates more as a forum for adults facilitating “knowledge sharing” between users than as a traditional social network, and noting that it does not import contact lists or address books.

“Reddit is significantly different from other sites that allow for users to become ‘friends’ with one another, or to post photos about themselves, or to organise events,” the platform said in its application.

Reddit further said in its court filing that most content on its platform is accessible without an account, and pointed out that a person under the age of 16 “can be more easily protected from online harm if they have an account, being the very thing that is prohibited.”

“That is because the account can be subject to settings that limit their access to particular kinds of content that may be harmful to them,” it adds.

Despite its objections, Reddit said that the challenge was not an attempt to avoid complying with the law, nor was it an effort to retain young users for business reasons.

“There are more targeted, privacy-preserving measures to protect young people online without resorting to blanket bans,” the platform said.

— CNBC’s Dylan Butts contributed to this story.

Altman and Musk launched OpenAI as a nonprofit 10 years ago. Now they’re rivals in a trillion-dollar market

OpenAI CEO Sam Altman speaks during a talk session with SoftBank Group CEO Masayoshi Son at an event titled “Transforming Business through AI” in Tokyo, Japan, on February 3, 2025.

On Dec. 11, 2015, OpenAI launched as a nonprofit research lab after Elon Musk and a group of prominent techies, including Peter Thiel and Reid Hoffman, pledged $1 billion to develop artificial intelligence for the benefit of humanity. The idea was for the project to be free of commercial pressures and the pursuit of money.

A decade later, that founding mission is all but forgotten.

Musk, now the world’s richest person, is long gone, having created rival startup xAI. And he’s been engaged in a heated legal and public relations fight with OpenAI CEO and co-founder Sam Altman.

Far from the nonprofit realm, OpenAI has emerged as one of the fastest-growing commercial entities on the planet, zooming to a $500 billion private market valuation, with almost all of that value accruing since the company’s launch of ChatGPT three years ago. More than 800 million people now use the chatbot every week.

Musk’s xAI, meanwhile, is expected to close a $15 billion round at a $230 billion pre-money valuation this month, sources familiar with the matter told CNBC’s David Faber in late November.

OpenAI and xAI are two of the main companies, along with Google, Anthropic and Meta, pouring money into AI models, as the market rapidly evolves from text-based chatbots to AI-generated videos and more advanced compute-intensive forms of content, as well as into agentic AI, with large enterprises customizing tools to enhance productivity.

For OpenAI, the price tag is almost incomprehensible: $1.4 trillion and growing. That’s primarily for the mammoth data centers and high-powered chips required to meet what the company sees as insatiable demand for its technology. For now, OpenAI is a cash-burning machine going up against tech’s megacaps and their chip suppliers, drawing comparisons to earlier waves of high-growth tech firms that spent heavily for years to challenge behemoth incumbents, but to mixed results.

“OpenAI has a very big role in the history of the development of artificial intelligence, and will forever have that role,” said Gil Luria, an equity analyst at D.A. Davidson, in an interview. “Now, will that role be Netscape, or will it be Google? We’ve yet to find out.”

Nvidia CEO Jensen Huang speaks at an event ahead of the COMPUTEX forum, in Taipei, Taiwan, June 2, 2024.

It’s a position that would’ve been hard to imagine in 2016, when Nvidia CEO Jensen Huang hauled a black DGX-1 supercomputer up to OpenAI’s offices in San Francisco’s Mission District. The $300,000 machine had cost Nvidia “a few billion dollars” to develop, and there were no other buyers, Huang recalled recently on Joe Rogan’s podcast.

Musk, at OpenAI, was the only one who wanted it.

When Musk told him it was for “a nonprofit company,” Huang said all the blood drained from his face at the thought of parking such a costly box inside an organization that wasn’t meant to make money.

Behind the scenes, though, the nonprofit ideal was already under intense strain, and Musk didn’t like what he saw.

“Guys, I’ve had enough. This is the final straw,” Musk wrote in an email to his co-founders in 2017. He warned that he would “no longer fund OpenAI” if it turned into a tech startup instead of a nonprofit. Altman wrote back the next morning: “i remain enthusiastic about the non-profit structure!”

Altman vs. Musk

In February of the following year, Musk left the OpenAI board, and said at the time the move was to avoid a potential conflict of interest as his car company, Tesla, dove deeper into AI.

The story was more complicated.

Musk sued OpenAI and Altman in early 2024, alleging they abandoned the company’s founding mission to develop AI “for the benefit of humanity broadly,” and he’s regularly criticized OpenAI’s close ties to Microsoft, its principal backer. He also went to court to try and keep OpenAI from converting into a for-profit entity and, earlier this year, went so far as to try and acquire the AI lab for $97.4 billion.

In October, OpenAI announced it had completed a recapitalization, cementing its structure as a nonprofit with a controlling stake in its for-profit business, which is now a public benefit corporation called OpenAI Group PBC.


Musk isn’t the only early OpenAI team member who’s turned into a bitter rival. Siblings Dario and Daniela Amodei left OpenAI in late 2020 to form Anthropic, which said last month that Microsoft and Nvidia would invest in the company. The valuation from the funding round could reach as high as $350 billion.

Anthropic’s Claude family of large language models is one of the biggest competitors to OpenAI’s GPT models.

Altman is wagering that he can win the race by outspending the competition. While his company has sketched out plans for a trillion-dollar-plus AI infrastructure outlay, Anthropic has made roughly $100 billion in recent compute commitments, spaced out at various intervals over the next few years.

It all amounts to a giant bet that demand for AI services will continue apace.

“We’ve got all the various AI vendors making these huge capital investments,” said David Menninger, executive director of software research at ISG. “There’s a question as to how long those capital investments continue and whether or not they all pan out.”

Luria says Anthropic and others are making reasonable commitments based on their current growth trajectory and the funding they’ve already secured. But he said OpenAI’s approach has been based on a “fantastical set of commitments” with a “faint belief that those numbers are even possible.”

‘Pretty extreme’

Altman told CNBC in an interview on Thursday that OpenAI is already seeing enough demand to justify its spending plans, which “makes us confident that we will be able to significantly ramp revenue.”

“It’s obviously unusual to be growing this fast at this kind of scale, but it is what we see in our current data,” Altman said, adding that “the demand in the market is pretty extreme.”

Altman said last month that he expects annualized revenue to hit $20 billion by the end of this year and to reach hundreds of billions by 2030. Its historic pace of growth has been a big boon for major tech companies.

Oracle signed a roughly $500 billion deal to sell infrastructure services to OpenAI over five years. Chipmakers Advanced Micro Devices and Broadcom have woven OpenAI-linked demand into multi-year forecasts.

But Oracle’s shares plunged 11% on Thursday after the software vendor reported weaker-than-expected revenue, a miss that dragged down Nvidia, CoreWeave and other AI-related stocks. Despite a surge in long-term contract commitments from companies like OpenAI, Meta, and Nvidia, investors are growing concerned about Oracle’s debt load that’s fueling its buildout.


Still, venture capitalist Matt Murphy of Menlo Ventures said that in his 25 years in the venture business, “this is the mother of all waves.”

Murphy, an early investor in Anthropic, said the combination of AI models, custom chips and hyperscale data centers adds up to the potential for trillion-dollar outcomes. That explains the eye-popping level of capital expenditures and the astronomical valuations, he said.

Altman recently declared a “code red” inside his company, and shuffled resources to focus on making ChatGPT faster, more reliable and more personal, while delaying work on ads, health and shopping agents and a personal assistant called Pulse. His declaration came after Google released its Gemini 3 model last month, further accelerating the search giant’s ascent in the market.

On Thursday, OpenAI unveiled ChatGPT-5.2, a faster, more capable reasoning model that the company says is its best system yet for everyday professional use. It also struck a three-year, $1 billion content and equity deal with Disney around the Sora AI video generator.

Altman downplayed the threat from Google, telling CNBC that Gemini had less of an impact on the company’s metrics than OpenAI initially feared.

“I believe that when a competitive threat happens, you want to focus on it, deal with it quickly,” Altman said.

He said he expects the company to exit code red by January.

— CNBC’s Kif Leswing contributed to this report.

