CEO of Alphabet and Google Sundar Pichai in Warsaw, Poland on March 29, 2022.
Mateusz Wlodarczyk | Nurphoto | Getty Images
The Department of Justice’s latest challenge to Google’s tech empire is an ambitious swing at the company with the potential to rearrange the digital advertising market. But alongside the possibility of great reward comes significant risk in seeking to push the boundaries of antitrust law.
“DOJ is going big or going home here,” said Daniel Francis, who teaches antitrust at NYU School of Law and previously worked as deputy director of the Federal Trade Commission’s Bureau of Competition, where he worked on the agency’s monopoly case against Facebook.
The DOJ’s antitrust chief Jonathan Kanter has indicated he’s comfortable with taking risks, often saying in public remarks that it’s important to bring cases that seek to challenge current conventions in antitrust law. He said he prefers more permanent remedies like breakups compared to promises to change behavior. That sentiment comes through in the DOJ’s request in its latest lawsuit for the court to force Google to spin off parts of its ad business.
Antitrust experts say the Justice Department paints a compelling story about the ways Google allegedly used acquisitions and exclusionary strategies to fend off rivals and maintain monopoly power in the digital advertising space. It’s one that, if the government gets its way, would break apart a business that’s generated more than $50 billion in revenue for Google in the last quarter, potentially opening up an entire market in which Google is currently one of the most important players.
But, they warn, the government will face significant challenges in proving its case in a court system that progressive antitrust enforcers and many lawmakers believe has taken on a myopic view of the scope of antitrust law, especially when it comes to digital markets.
“If they prove the violations they allege, they’re going to get a remedy that’s going to shake up the market,” said Doug Melamed, a scholar-in-residence at Stanford Law School who served at the Antitrust Division, including as acting assistant attorney general, from 1996-2001 during the landmark case against Microsoft. “But it’s not obvious they’re going to win this case.”
Challenges and strengths
Experts interviewed for this article said the DOJ will face the challenge of charting relatively underexplored areas of antitrust law in proving to the court that Google’s conduct violated the law and harmed competition without benefitting consumers. Though that’s a tall order, it could come with a huge upside if the agency succeeds, possibly expanding the scope of antitrust law for digital monopoly cases to come.
“All antitrust cases are an uphill battle for plaintiffs, thanks to 40 years of case law,” said Rebecca Haw Allensworth, an antitrust professor at Vanderbilt Law School. “This one’s no exception.”
But, Allensworth added, the government’s challenges may be different than those in many other antitrust cases.
“Usually the difficulty, especially in cases involving platforms, is market definition,” she said. In this case, the government argued the relevant market is publisher ad servers, ad exchanges, and advertiser ad networks — the three sides of the advertising stack Google has its hand in, which the DOJ said it’s leveraged to box out rivals. “And here, I think that that is relatively straightforward for the DOJ.”
“One way to look at the latest complaint is that it is the newest and most complete draft of a critique that antitrust agencies in the U.S. and abroad have been building against Google for over a decade,” William Kovacic, who served on the Federal Trade Commission from 2006 to 2011 and is now a professor at George Washington Law, said in an email.
Google, for its part, has said the latest DOJ lawsuit “tries to rewrite history at the expense of publishers, advertisers and internet users.” It claims the government is trying to “pick winners and losers” and that its products have expanded options for publishers and advertisers.
Compared to the DOJ’s earlier lawsuit, which argued Google maintained its monopoly over search services through exclusionary contracts with phone manufacturers, this one advances more nontraditional theories of harm, according to Francis, the NYU Law professor and former FTC official. That also makes it more likely that Google will move to dismiss the case to at least narrow the claims it may have to fight later on — a move it did not take in the earlier suit, he added.
“This case breaks much more new ground and it articulates theories, or it seems to articulate theories, that are right out on the border of what existing antitrust prohibits,” Francis said. “And we’re going to find out, when all is said and done, where the boundaries of digital monopolization really lie.”
High risk, high reward?
DOJ took a gamble with this case. But if it wins, the rewards could match the risk.
“In terms of the potential impact of the remedy, this could be a bigger case than Microsoft,” said Melamed.
Still, Francis cautioned, a court could order a less disruptive remedy, like paying damages if it finds the government was harmed as an advertising purchaser, or simply requiring Google to stop the allegedly illegal conduct, even if it rules in the DOJ’s favor.
Like all antitrust cases, this one is unlikely to be concluded anytime soon. Still, a key decision by the Justice Department could make it speedier than otherwise expected. The agency filed the case in the Eastern District of Virginia, which has gained a reputation as the “rocket docket” for its relatively efficient pace in moving cases along.
“What that signals to me is that, given the timeframe for antitrust litigation is notoriously slow, DOJ is doing everything that they can in their choice of venue to ensure that this litigation moves forward before technological and commercial changes make it obsolete,” Francis said.
He added that the judge who has been assigned the trial, Clinton appointee Leonie Brinkema, is regarded as smart and fair and has handled antitrust cases before, including one Francis litigated years ago.
“I could imagine that both sides will feel pretty good about having drawn Judge Brinkema as a fair, efficient and sophisticated judge who will move the case along in an expeditious way,” Francis said.
Still, there are hardly any judges who have experience with a case like this one, simply because there haven’t been that many digital monopolization cases decided in court.
“So any judge who would be hearing this case is going to be confronting frontier issues of antitrust theory and principle,” Francis said.
Immediate impact
Outside of the courts, the case could have a more immediate impact in other ways.
“From the point of view of strategy, the case adds a major complication to Google’s defense by increasing the multiplicity and seriousness of public agency antitrust enforcement challenges,” said Kovacic, the former FTC commissioner. “The swarming of enforcement at home and abroad is forcing the company to defend itself in multiple fora in the US and in jurisdictions such as the EU and India.”
Regardless of outcomes, Kovacic said the sheer volume of lawsuits and regulation can create a distraction for top management and will likely lead Google to more carefully consider its actions.
“That can be a serious drag on company performance,” Kovacic wrote.
The suit could also lend credence to lawmakers’ efforts to legislate around digital ad markets. One proposal, the Competition and Transparency in Digital Advertising Act, would prohibit large companies like Google from owning more than one part of the digital advertising system, so it couldn’t own tools on both the buy and sell side as it currently does.
Importantly, the bill is sponsored by Sen. Mike Lee, R-Utah, the ranking member of the Senate Judiciary subcommittee on antitrust. Lee has remained skeptical of some other digital market antitrust reforms, but his leadership on this bill suggests there may be a broader group of Republicans willing to support this kind of measure.
“An antitrust lawsuit is good, but will take a long time and apply to only one company,” Lee tweeted following the DOJ’s announcement, saying he would soon reintroduce the measure. “We need to make sure competition works for everyone, and soon.”
Rep. Ken Buck, R-Colo., who has backed the House version of the bill, called the digital ad legislation “The most important bill we can move forward” in a recent interview with The Washington Post.
“This is clearly the blockbuster case so far from the DOJ antitrust division,” Francis said. “And I think it represents a flagship effort to establish new law on the borders of monopolization doctrine. And at the end of it — win, lose or draw — it’s really going to contribute to our understanding of what the Sherman Act actually prohibits in tech markets.”
Sam Altman, CEO of OpenAI, attends the annual Allen and Co. Sun Valley Media and Technology Conference at the Sun Valley Resort in Sun Valley, Idaho, on July 8, 2025.
David A. Grogan | CNBC
OpenAI on Wednesday announced two reasoning models that developers can use to classify a range of online safety harms on their platforms.
The artificial intelligence models are called gpt-oss-safeguard-120b and gpt-oss-safeguard-20b, and their names reflect their sizes. They are fine-tuned, or adapted, versions of OpenAI’s gpt-oss models, which the company announced in August.
OpenAI is introducing them as so-called open-weight models, which means their parameters, or the elements that improve the outputs and predictions during training, are publicly available. Open-weight models can offer transparency and control, but they are different from open-source models, whose full source code becomes available for users to customize and modify.
Organizations can configure the new models to their specific policy needs, OpenAI said. And since they are reasoning models that show their work, developers will have more direct insight into how they arrive at a particular output.
For instance, a product reviews site could develop a policy and use gpt-oss-safeguard models to screen reviews that might be fake, OpenAI said. Similarly, a video game discussion forum could classify posts that discuss cheating.
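Because the models are configured with a site’s own policy text, the developer’s job is largely prompt construction. Here is a minimal sketch of that pattern in Python, with the policy wording, labels and message layout all assumed for illustration; OpenAI’s actual prompt format for gpt-oss-safeguard may differ.

```python
# Hypothetical sketch: pairing a site-specific policy with content for a
# policy-following safety classifier like gpt-oss-safeguard. The policy text,
# labels, and message layout are illustrative assumptions, not OpenAI's
# documented format.

POLICY = """Fake Review Policy
- VIOLATES: the review appears paid, templated, or bot-written.
- OK: the review reads as a genuine first-hand experience.
Respond with exactly one label: VIOLATES or OK."""

def build_messages(policy: str, content: str) -> list[dict]:
    """Attach the policy as the system prompt and the content as the user turn."""
    return [
        {"role": "system", "content": policy},
        {"role": "user", "content": f"Classify this review:\n{content}"},
    ]

messages = build_messages(POLICY, "AMAZING!!! Five stars! Click my link for a discount!")
```

The resulting message list would then be passed to the downloaded model, for example through Hugging Face’s chat-template tooling; swapping in a different policy string re-targets the same classifier to a new harm category, such as the video game forum’s cheating policy mentioned above.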
OpenAI developed the models in partnership with Robust Open Online Safety Tools, or ROOST, an organization dedicated to building safety infrastructure for AI. Discord and SafetyKit also helped test the models. They are initially available in a research preview, and OpenAI said it will seek feedback from researchers and members of the safety community.
As part of the launch, ROOST is establishing a model community for researchers and practitioners that are using AI models in an effort to protect online spaces.
The announcement could help OpenAI placate some critics who have accused the startup of commercializing and scaling too quickly at the expense of AI ethics and safety. The startup is valued at $500 billion, and its consumer chatbot, ChatGPT, has surpassed 800 million weekly active users.
On Tuesday, OpenAI said it’s completed its recapitalization, cementing its structure as a nonprofit with a controlling stake in its for-profit business. OpenAI was founded in 2015 as a nonprofit lab, but has emerged as the most valuable U.S. tech startup in the years since releasing ChatGPT in late 2022.
“As AI becomes more powerful, safety tools and fundamental safety research must evolve just as fast — and they must be accessible to everyone,” ROOST President Camille François said in a statement.
Eligible users can download the model weights on Hugging Face, OpenAI said.
Fiserv’s stock plummeted 44% Wednesday and headed for its worst day ever after the fintech company cut its earnings outlook and shook up some of its leadership team.
“Our current performance is not where we want it to be nor where our stakeholders expect it to be,” wrote CEO Mike Lyons in a release.
For the full year, Fiserv now expects adjusted earnings of $8.50 to $8.60 a share, down from a previous forecast of $10.15 to $10.30. Revenue is expected to grow 3.5% to 4%, versus a prior estimate of 10%.
Adjusted earnings came in at $2.04 per share, falling short of the LSEG estimate of $2.64. Revenues rose about 1% from a year ago to $4.92 billion, missing the $5.36 billion forecast. Net income grew to $792 million from $564 million in the year-ago period.
Along with the results, Fiserv announced a slew of executive and board changes.
Beginning in December, operating chief Takis Georgakopoulos will serve as co-president alongside Dhivya Suryadevara, who was most recently CEO of Optum Financial Services and Optum Insight at UnitedHealth Group. Fiserv also promoted Paul Todd to finance chief.
“We also have opportunities in front of us to improve our results and execution, and I am confident that these are the right leaders to help guide Fiserv to long-term success,” Lyons wrote in a separate release.
Fiserv also announced that Gordon Nixon, Céline Dufétel and Gary Shedlin would join its board at the beginning of 2026, with Nixon serving as independent chairman of the board. Shedlin is slated to lead the audit committee.
The Milwaukee, Wisconsin-based company also announced an action plan that Lyons said would better situate the company to “drive sustainable, high-quality growth” and reach its “full potential.”
Fiserv said it will move its stock from the NYSE to the Nasdaq next month, where it will trade under the ticker symbol “FISV.”
Fiserv did not immediately respond to CNBC’s request for comment.
Character.AI on Wednesday announced that it will soon shut off the ability for minors to have free-ranging chats, including romantic and therapeutic conversations, with the startup’s artificial intelligence chatbots.
The Silicon Valley startup, which allows users to create and interact with character-based chatbots, announced the move as part of an effort to make its app safer and more age-appropriate for those under 18.
Last year, 14-year-old Sewell Setzer III committed suicide after forming sexual relationships with chatbots on Character.AI’s app. Many AI developers, including OpenAI and Facebook parent Meta, have come under scrutiny after users died by suicide or were otherwise harmed after forming relationships with chatbots.
As part of its safety initiatives, Character.AI said on Wednesday that it will limit users under 18 to two hours of open-ended chats per day, and will eliminate those types of conversations for minors by Nov. 25.
“This is a bold step forward, and we hope this raises the bar for everybody else,” Character.AI CEO Karandeep Anand told CNBC.
Character.AI introduced changes to prevent minors from engaging in sexual dialogues with its chatbots in October 2024. The same day, Sewell’s family filed a wrongful death lawsuit against the company.
To enforce the policy, the company said it’s rolling out an age assurance function that will use first-party and third-party software to monitor a user’s age. The company is partnering with Persona, the same firm used by Discord and others, to help with verification.
In 2024, Character.AI’s founders and certain members of its research team joined Google DeepMind, the company’s AI unit. It’s one of a number of such deals announced by leading tech companies to speed their development of AI products and services. The agreement called for Character.AI to provide Google with a non-exclusive license for its current large language model, or LLM, technology.
Since Anand took over as CEO in June, 10 months after the Google deal, Character.AI has added more features to diversify its offerings beyond chatbot conversations. Those features include a feed for watching AI-generated videos as well as storytelling and roleplay formats.
Although Character.AI will no longer allow teenagers to engage in open-ended conversations on its app, those users will still have access to the app’s other offerings, said Anand, who was previously an executive at Meta.
Of the startup’s roughly 20 million monthly active users, about 10% are under 18. Anand said that percentage has declined as the app has shifted its focus toward storytelling and roleplaying.
The app makes money primarily through advertising and a $10 monthly subscription. Character.AI is on track to end the year with a run rate of $50 million, Anand said.
Additionally, the company on Wednesday announced that it will establish and fund an independent AI Safety Lab dedicated to safety research for AI entertainment. Character.AI didn’t say how much it will provide in funding, but the startup said it’s inviting other companies, academics, researchers and policy makers to join the nonprofit effort.
Regulatory pressure
Character.AI is one of many AI chatbot companies facing regulatory scrutiny on the matter of teens and AI companions.
In September, the Federal Trade Commission issued an order to seven companies, including Character.AI’s parent, as well as Alphabet, Meta, OpenAI and Snap, seeking to understand the potential effects of AI chatbots on children and teenagers.
On Tuesday, Senators Josh Hawley, R-Mo., and Richard Blumenthal, D-Conn., announced legislation to ban AI chatbot companions for minors. California Gov. Gavin Newsom signed a law earlier this month requiring chatbots to disclose that they are AI and to tell minors to take a break every three hours.
Rival Meta, which also offers AI chatbots, announced safety features in October that will allow parents to see and manage how their teenagers are interacting with AI characters on the company’s platforms. Parents have the option to turn off one-on-one chats with AI characters completely and can block specific AI characters.
The matter of sexualized conversations with AI chatbots has come into focus as tech companies announce different approaches to dealing with the issue.
Earlier this month, Sam Altman announced that OpenAI would allow adult users to engage in erotica with ChatGPT later this year, saying that his company is “not the elected moral police of the world.”
Microsoft AI CEO Mustafa Suleyman said last week that the software company will not provide “simulated erotica,” describing sexbots as “very dangerous.” Microsoft is a key investor and partner to OpenAI.
The race to develop more realistic human-like AI companions has been growing in Silicon Valley since ChatGPT’s launch in late 2022. While some people are creating deep connections with AI characters, the speedy development presents ethical and safety concerns, especially for children and teenagers.
“I have a six-year-old as well, and I want to make sure that she grows up in a safe environment with AI,” Anand said.
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.