On Wednesday, Google previewed what could be one of the largest changes to the search engine in its history.
Google will use AI models to combine and summarize information from around the web in response to search queries, a product it calls Search Generative Experience.
Instead of “ten blue links,” the phrase that describes Google’s usual search results, Google will show some users paragraphs of AI-generated text and a handful of links at the top of the results page.
The new AI-based search is being tested now for a select group of users and isn’t widely available yet. But website publishers are already worried that if it becomes Google’s default way of presenting search results, it could hurt them by sending fewer visitors to their sites and keeping them on Google.com.
The controversy highlights a long-running tension between Google and the websites it indexes, with a new artificial intelligence twist. Publishers have long worried that Google repurposes their verbatim content in snippets on its own website, but now Google is using advanced machine learning models that scrape large parts of the web to “train” the software to spit out human-like text and responses.
Rutledge Daugette, CEO of TechRaptor, a site focused on gaming news and reviews, said that Google made the move without considering the interests of publishers, and that Google's AI amounts to lifting content.
“Their focus is on zero-click searches that use information from publishers and writers who spend time and effort creating quality content, without offering any benefit other than the potential of a click,” Daugette told CNBC. “Thus far, AI has been quick to reuse others’ information with zero benefit to them, and in cases like Google Bard doesn’t even offer attribution as to where the information it’s using came from.”
Luther Lowe, a longtime Google critic and chief of public policy at Yelp, said that Google’s update is part of a decades-long strategy to keep users on the site for longer, instead of sending them to the sites that originally hosted the information.
“The exclusionary self-preferencing of Google’s ChatGPT clone into search is the final chapter of bloodletting the web,” Lowe told CNBC.
According to Search Engine Land, a news website that closely tracks changes to Google’s search engine, the AI-generated results are displayed above the organic search results in testing so far.
SGE comes in a differently colored box — green in the example — and includes boxed links to three websites on the right side. In Google’s primary example, all three of the website headlines were cut off.
Google says that the information isn’t taken from the websites but is instead corroborated by the links. Search Engine Land said the SGE approach was an improvement and a “healthier” way to link than Google’s Bard chatbot, which rarely linked to publisher websites.
Some publishers are wondering if they can prevent AI firms such as Google from scraping their content to train their models. Companies such as the firm behind Stable Diffusion are already facing lawsuits from data owners, but the right to scrape web data for AI remains an undecided frontier. Other companies, such as Reddit, have announced plans to charge for access to their data.
Leading the charge in the publishing world is Barry Diller, Chairman of IAC, which owns websites including All Recipes, People Magazine and The Daily Beast.
“If all the world’s information is able to be sucked up into this maw, and then essentially repackaged in declarative sentences, in what’s called chat, but it isn’t chat — as many grafs as you want, 25 on any subject — there will be no publishing, because it will be impossible,” Diller said last month at a conference.
“What you have to do is get the industry to say that you cannot scrape our content, until you work out systems where the publisher gets some avenue towards payment,” Diller continued, saying that Google will face this problem.
Diller says he believes publishers can sue AI firms under copyright law and that current “fair use” restrictions need to be redefined. The Financial Times reported on Wednesday that Diller is leading a group of publishers “that is going to say we are going to change copyright law if necessary.” An IAC spokesperson declined to make Diller available for an interview.
One challenge facing publishers is confirming that their content is being used by an AI. Google has not revealed the training sources for PaLM 2, the large language model that underpins SGE. Daugette says that while he’s seen examples of quotes and review scores from competitors repurposed on Bard without attribution, it’s hard to tell when the information is from his site without directly linked sources.
Google didn’t respond to a request for comment. But at a media briefing earlier this week, Google VP of Research Zoubin Ghahramani said: “PaLM 2 is trained on a wide range of openly available data on the internet and we obviously value the health of the web ecosystem. And that’s really part of the way we think about how we build our products, to ensure that we have a healthy ecosystem where creators are a part of that thriving ecosystem.”
Daugette says that Google’s moves make being an independent publisher tough.
“I think it’s really frustrating for our industry to have to worry about our hard work being taken, when so many colleagues are being laid off,” Daugette said. “It’s just not okay.”
The first day of sale of the iPhone 15 smartphone in Mumbai, India, on Sept. 22, 2023.
Apple has filed a case in the Delhi High Court against the country’s antitrust body over its use of global turnover when calculating penalties.
The iPhone maker, which is among the fastest-growing smartphone brands in India, is challenging India’s new antitrust law, under which the U.S. company could incur fines of up to $38 billion, according to a report by Reuters.
Apple argued it was “unconstitutional, grossly disproportionate, unjust” for the Competition Commission of India (CCI) to use turnover when calculating penalties.
Apple did not immediately respond to a request for comment from CNBC.
The CCI has been investigating complaints made by an alliance of Indian startups and Tinder-owner Match Group that accuse Apple of “abusive conduct” which forces developers to pay high commissions for in-app purchases.
Apple denied the charges.
The CCI’s final verdict is still pending but it said its “prima facie view [is] that mandatory use of Apple’s IAP for paid apps & in-app purchases restrict the choice available to the app developers to select a payment processing system of their choice”, in an order in December 2021.
Apple recorded its highest-ever quarterly shipments in India of 5 million units in the third quarter of 2025, according to data from IDC.
The company is expected to sell about 15 million iPhones in India this year and could rank among the top five smartphone companies there, Navkendar Singh, associate vice president at IDC India, said on CNBC’s “Inside India” on Nov. 18.
Apple is among the global companies diversifying their manufacturing supply chains from China to India. In 2024, Apple’s exports from India hit a record $12.8 billion, up more than 42% from a year earlier.
Alibaba announced plans to release a pair of smart glasses powered by its AI models. The Quark AI Glasses are Alibaba’s first foray into the smart glasses product category.
Alibaba’s artificial intelligence-powered smart glasses went on sale on Thursday as the Chinese tech giant looks to ramp up its focus on consumer AI in an increasingly competitive market.
The Quark AI Glasses, first announced in July, come in two variants: the S1, which starts at 3,799 Chinese yuan ($536), and the G1, which starts at 1,899 yuan.
The tech giant has integrated its Qwen AI models — Alibaba’s version of ChatGPT — into the device, which also links to its newly launched Qwen app. This means users can use voice control to get the glasses to carry out tasks.
The lenses of the glasses are effectively screens and the device has a camera built into the frame. The main difference between the S1 and G1 is the display, Alibaba said.
The company said that features include on-the-go translation, AI-generated meeting notes and the ability to ask the virtual assistant questions. Users can take a picture of a product with the camera in the lens, and the device will show the price of that product on Taobao, Alibaba’s main shopping app in China.
Alibaba, like other technology companies such as U.S. giant Meta, is betting that smart glasses could be the next big consumer device after the smartphone.
In September, Meta unveiled the $799 Meta Ray-Ban Display glasses, the social media company’s first consumer-ready smart glasses with a built-in display. Users can control the device via hand gestures with a special wristband.
Alibaba’s glasses will initially go on sale in China and compete with domestic rivals, including consumer electronics maker Xiaomi and startup Xreal.
The smart glasses market is still small but growing rapidly. Shipments of AI glasses are expected to exceed 10 million units by 2026, doubling from 2025, according to a forecast from Omdia.
For Alibaba, the glasses are its latest play in the consumer AI market as it looks to build on its recent successes. The company’s ChatGPT-style Qwen app got 10 million downloads in the first week of its public beta launch. Meanwhile, Alibaba’s cloud computing business, where it books much of its AI-related revenue, saw an acceleration of growth in the last quarter.
The Hangzhou-headquartered company is one of the leaders in China’s AI space, investing heavily alongside rival giants like Baidu and Tencent and aggressively launching new models.
Europe, with its fragmented markets, is often said to be operating in the shadow of the U.S. and China when it comes to scaling AI.
But the very factors that challenge its growth as a major player may yet give it an edge when it comes to future-proofing the critical warehouses that power the AI boom.
The world is racing to double, if not triple, the entire data center capacity built over the last 40 years, Pankaj Sachdeva, a senior partner in technology at McKinsey, told CNBC, with the firm estimating that the build-out will cost up to $7 trillion by 2030.
He expects the U.S. to account for the lion’s share of activity, but Europe will “continue to build at a pretty meaningful rate” to nearly double its existing capacity.
“Europe is actually participating in this infrastructure build out, and is actually keeping pace, or we think that it will keep pace,” Sachdeva added.
To get there, the bloc must overcome major bottlenecks in access to power and in regulation, experts told CNBC.
Winners and losers
The defining bottleneck for Europe is access to electricity, with energy cost and availability shaping the flow of investment across the region. The Nordics and Spain have seen increased appetite for data center builds given their surplus in energy thanks to hydropower and renewables, while Germany and the U.K. may be less attractive due to energy supply constraints.
When it comes to grid congestion, Italy is on the winning side: it has a connection time of up to three years, compared with the European average of four years, according to energy think tank Ember.
On the losing side are, again, Germany, the U.K., Ireland and the Netherlands, “where, basically, either we just don’t have the grid capacity right now or we’ve got such a shortage in the system that there’s effectively a moratorium for the foreseeable future,” Jags Walia, head of global listed infrastructure at Van Lanschot Kempen, told CNBC.
While differences between European countries are significant, it’s ultimately “going to be hard” to catch up with the U.S. in the short term — where deregulation and huge investment are enabling a much quicker build-out — Walia said. Most European countries have around 200 to 300 data centers, he added, but “the U.S. has like 5,400.”
Constraints are driving some diversification away from the traditional FLAP-D markets of Frankfurt, London, Amsterdam, Paris and Dublin, pushing investment toward data centers in locations where resources are plentiful and stable.
There have also been some efforts to develop projects faster. In the U.K., for example, the central government has in some instances overruled local authorities to approve data centers that were previously denied. Last year the country designated data centers as Critical National Infrastructure, highlighting their importance to its economic agenda.
A powerful bottleneck
Energy consumption from power-hungry data centers could more than double to 1,000 terawatt-hours (TWh) in 2026, up from 460 TWh in 2022 and largely driven by AI, per the International Energy Agency.
A data center’s largest cost component is electricity, though newer, state-of-the-art facilities could have a reduced burden, according to Walia.
This is a particularly sticky problem for Europe, which saw its energy bills skyrocket when Russia invaded Ukraine. The U.K. has the highest energy costs in Europe, around 75% higher than before the full-scale invasion.
While this can be a deterrent for setting up shop in a particular location, operators aim to balance it with grid congestion times.
Grid congestion has also instigated discussions about how to procure power in Europe, according to CBRE’s European data center research lead Kevin Restivo.
“You get a lot of speculators in the queue, and those speculators make it more difficult because they have no intention of building data centers. They just want the power, perhaps, to flip it to somebody else,” Restivo told CNBC.
The U.K., for example, has operated on a first-come, first-served basis, meaning a project’s significance was not factored into the decision of who receives power first.
However, the system is being transitioned to a ‘first ready, first connected’ process, in which finished projects can jump ahead in the connection queue, a reform designed in part to tackle speculation. The changes show how energy and infrastructure build-outs are forcing old systems to evolve, and they set the stage for further innovation.
At the same time, the steady pace of change allows developers to be more deliberate about what they build, where, and how — meaning Europe could put greater emphasis on state-of-the-art facilities.
The quickest way for Europe to get around these challenges is not to wait on new grid connections but to ask, ‘Where do I currently have a good grid connection to an industry in decline?’ Walia said, as such sites can be repurposed from industrial to tech hubs.
The opportunity in AI inference
It’s unlikely that Europe will lead in building facilities for AI hyperscalers or for the training of AI — that race is considered all but won — but the general consensus is that it could excel in smaller, cloud-focused and connectivity-style facilities that require huge amounts of fiber running in and out of them, as well as those designed for AI inference.
Indeed, the continent has few foundational model developers, with France’s Mistral being the most well-known, but McKinsey sees 70% of all AI demand coming from inference.
As such, the continent isn’t seeing “too many” massive data center sites being announced relating to AI, nor “the slightly overpriced nature” of them, according to Seb Dooley, senior fund manager at Principal Asset Management.
“So, actually, you are finding these areas, from our perspective, are well protected from that potential oversupply bubble that could come through,” he added, as cloud is well established.
Principal Asset Management expects AI inference to take place in the same facilities as cloud, which has already happened at some of its U.S. cloud sites. This gives investors “quite a nice upside” without the speculative risk that comes with other AI investments, the fund manager said.
It’s also an opportunity for Europe. Inference likely will have to exist within European borders, Dooley said, driven by the broader push for sovereign AI. However, it has different technical requirements; density tends to be higher than the 20 kilowatts a rack for traditional cloud, meaning data centers that want to do both must factor that in. Inference also requires different cooling systems.
“That just means that you have to design these facilities to be sort of flexible and robust so that you can change between the two different systems as requirements change,” Dooley added.
The joy of a slower and more considered pace in Europe, therefore, is that there is time to think about such things.
The risk of stranded assets
The pace of AI development has led to widespread chatter of a bubble, which would result in piles of stranded assets if it were to pop. If AI keeps its cadence, which many believe it will, there is still a risk that data centers built today won’t be suitable in the future as AI’s technical needs will change.
To help, investors are focusing on securing customers before ground is broken. Speculatively built data centers are “a relic of the past, for the most part,” said Restivo. Developer-operators often lock customers into 10-to-15-year terms, he added, which also hedges against obsolescence.
It’s a different case, however, if the tenant is a startup or young company. Neo-cloud providers, for example, carry “significant risk” and sign shorter terms of five to seven years, Restivo said.
“These are companies that have not returned capital to shareholders, they have unproven business models, and they have a great need for capacity in a shorter period of time,” he said, adding that there is “a lot of skin in the game for developer-operators” working with neo-clouds. However, some debt financiers and developers are “increasingly comfortable” with these terms, Restivo added.
There may be issues with repurposing brownfield sites, however, if data centers replace industrial plants that are still running, meaning job losses. European policy requires developers to report the energy and water usage of data centers, as well as justification for the particular location.
Some member states go further. Walia pointed to proposed sustainability requirements in Spain, which would see data center developers report socio-economic impact. “Nobody asks about that in the U.S.,” he said.
But Dooley expects that tight regulations will work in Europe’s favor in the long run, as data centers will be integrated into local communities “rather than just being a complete blight on everyone’s life that they can sometimes be,” he said, noting that sustainability is one area where the bloc has been “very good at innovating.”
“Where Europe, from my perspective, stands out as quite interesting is it feels like a much more safer investment case if we’re looking more from the capital market side compared to the U.S.,” Dooley said.
“A lot of that comes from the fact that it’s difficult to build in Europe. We’ve got a lot of constraints, but, actually, the more difficult something is to replicate, the more long-term value what you’ve got has, the more likely people are to reuse, to come up with creative solutions to repurpose assets,” he added.
Ultimately, investors and developers may have no choice in the matter but to back Europe thanks to sovereign AI — an “underestimated” driver of the data center build, Jim Wright, manager of the Premier Miton Global Infrastructure Income Fund, told CNBC.
In all, Europe has the opportunity to innovate and create long-term value for both investors and citizens. Scarcity increases profitability and resilience for the former, while regulation encourages sustainable and constructive build outs for the latter.
However, there is not going to be a one-size-fits-all approach to building data centers in Europe. “The industry is still very much in ‘figuring out what exactly it needs’ phase at the moment,” Dooley added.