Prabhakar Raghavan, senior vice president at Google, speaks during the US Conference of Mayors Winter Meeting in Washington, DC, US, on Wednesday, Jan. 17, 2024.
Julia Nikhinson | Bloomberg | Getty Images
Wearing a hoodie with the words “We use Math” on the front, Google search boss Prabhakar Raghavan had an important message for employees at an all-hands meeting last month. But he first wanted them to settle in and get comfortable.
“Grab your boba teas,” Raghavan told the crowd, gathered in a theater at the company’s headquarters in Mountain View, California.
Raghavan, who reports directly to CEO Sundar Pichai and leads key groups including search, ads, maps and commerce, was addressing Google’s knowledge and information organization, which consists of more than 25,000 full-time employees.
“I think we can agree that things are not like they were 15-20 years ago, things have changed,” Raghavan said, according to audio of the event obtained by CNBC. He was referring to the search industry, which Google has dominated for two decades, emerging as one of the most profitable and valuable companies on the planet along the way.
Raghavan said Google’s digital ad business had become “the envy of the world.” He noted that over the last three years, annual revenue has grown by more than $100 billion, exceeding Starbucks, Mazda and TikTok combined.
At a company long known across Silicon Valley for its free, gourmet lunches and endless on-campus perks, Raghavan’s comments serve as the latest warning to employees that growth for Google is getting harder.
“It’s not like life is going to be hunky-dory, forever,” he said.
Over roughly 35 minutes, Raghavan peppered his reality check address with sports metaphors and rallying cries.
“If there’s a clear and present market reality, we need to twitch faster, like the athletes twitch faster,” he said.
He referenced heightened competition and a more challenging regulatory environment. Though he didn’t name specific rivals, Google is facing pressure from the likes of Microsoft and OpenAI in generative artificial intelligence.
“People come to us because we are trusted,” Raghavan said. “They may have a new gizmo out there that people like to play with but they still come to Google to verify what they see there because it is the trusted source and it becomes more critical in this era of generative AI.”
Raghavan had some tangible changes to announce. He said the company plans to build teams closer to users in key markets, including India and Brazil, and revealed that he’s shortening the amount of time that his reports have to complete certain projects in an effort to move faster.
“There is something to be learned from that faster-twitch, shorter wavelength execution,” he said.
Google’s cloud business has also instructed employees to move within shorter timelines despite having fewer resources after cost cuts, sources with knowledge of the matter told CNBC.
“With a huge opportunity ahead, we’re moving with velocity and focus,” a Google spokesperson told CNBC, when asked to comment on Raghavan’s address. The spokesperson highlighted the addition of generative AI to search and improvements in search quality, adding, “There’s lots more to come.”
In March, Google named company veteran Elizabeth Reid to the role of vice president, leading search and reporting to Raghavan.
‘High highs and low lows’
In many respects, Raghavan’s tone was nothing new. Google has been in cost-cutting mode since early 2023, when parent Alphabet announced plans to eliminate about 12,000 jobs, or 6% of the company’s workforce. Job cuts have continued this year, with more layoffs in early 2024, and CFO Ruth Porat said in a memo last week that the company is restructuring its finance organization, a move that will involve additional downsizing.
But Raghavan is making clear that what’s happening now isn’t just a continuation of 2023. He noted that his group’s last all-hands meeting was three months ago, though for some it felt like three years.
“We’ve had a lot go on in these last three months,” consisting of “really high highs and low lows,” he said.
In that time, Google introduced its AI image generator. After users discovered inaccuracies that went viral online, the company pulled the feature in February. Google has been reorganizing to try to stay ahead in the AI arms race as more users move away from traditional internet search to find information online.
In Alphabet’s upcoming earnings report on Thursday, Wall Street is expecting a second straight quarter of year-over-year revenue growth in the low teens. While that marks an acceleration from the few quarters prior, the numbers are also in comparison to some of Google’s weakest reports on record.
Even though Alphabet reported better-than-expected revenue and profit for the fourth quarter, ad revenue trailed analysts’ projections, causing the company’s shares to drop more than 6%. Meanwhile, the AI boom is forcing a renewed focus on investments.
“We’re in a new cost reality,” Raghavan said. With generative AI, the company is “spending a ton more on machines,” he said.
Organic growth is slowing and the number of new devices coming into the world “is not what it used to be,” Raghavan said.
“What that means is our growth in this new operating reality has to be hard earned,” he added.
A smart phone displaying Google with Google Gemini in the background is being featured in this photo illustration in Brussels, Belgium, on February 8, 2024.
Jonathan Raa | Nurphoto | Getty Images
Raghavan said that additional challenges are emerging as the company is “navigating a regulatory environment unlike anything we’ve seen before.”
He cited the European Union’s Digital Markets Act and said the company is still learning what its obligations will be from the European Commission. The DMA, which officially became enforceable last month, aims to clamp down on anti-competitive practices among tech companies.
“That does have its impact on us,” Raghavan said.
Raghavan urged employees to “meet this moment” and “act with urgency based on market conditions.”
“It won’t be easy,” he said. “But these are the moments and the history of industries that will define us.”
120 hours a week
Raghavan said Google has to address its “systemic” challenges and build “new muscles that maybe we have let fall off for a bit.”
He praised the teams working on Gemini, the company’s main group of AI models. He said they’ve stepped up from working 100 hours a week to 120 hours to correct Google’s image recognition tool in a timely manner. That helped the team fix roughly 80% of the issues in just 10 days, he said.
However, Google still hasn’t brought back the ability to generate images of people. Demis Hassabis, Google’s AI leader, said in February after the tool was taken down that it would be re-released in weeks.
Raghavan clarified that the failure in image generation wasn’t due to a lack of effort.
“I want to be clear, this wasn’t some case of somebody slacking off and dropping the ball,” he said.
Raghavan said the company has shown the ability to move quickly on important matters. As an example, he highlighted an effort in 2023, when the Bard team (now Gemini) and Magi team, which focuses on AI-powered search, launched products within a matter of months.
It was something the company couldn’t have accomplished, he suggested, with bigger numbers.
“The realization was ‘gosh, if we had thrown 2,000 engineers at these projects, we wouldn’t have got it done,'” he said, indicating that the company would be paying close attention to the size and scope of teams.
Raghavan also spoke to critics of the company’s bureaucracy.
Employees have complained for years that Google’s growing bureaucracy has crippled their ability to launch products quickly. That worsened as the company rapidly expanded its workforce during the pandemic.
In 2022, in addition to Google’s annual survey called Googlegeist, Pichai launched a “Simplicity Sprint” to gather employee feedback on efficiency.
“The number of agreements and approvals it takes to bring a good idea to market — that’s not the Google way,” Raghavan said. “That’s not the way we should be functioning.”
Raghavan said leaders are actively working on removing unnecessary layers in the hierarchy, echoing prior comments from Pichai.
“We’ve learned a lot the last few quarters,” Raghavan said. “I cannot tell you that all the stumbles are behind us. What matters is how we respond and what we learn.”
Bitcoin extended its rebound on Wednesday, hovering just below $100,000 after another encouraging inflation report fueled investors’ risk appetite.
The price of the flagship cryptocurrency was last higher by more than 3% at $99,444.43, bringing its 2-day gain to about 7%, according to Coin Metrics.
The CoinDesk 20 index, which measures the broader market of cryptocurrencies, gained 6%.
Bitcoin approaches $100,000 after Wednesday’s CPI data
Wednesday’s move followed the release of the December consumer price index, which showed core inflation unexpectedly slowed in December. A day earlier, the market got another bright inflation reading in the producer price index, which showed wholesale prices rose less in December than expected.
The post-election crypto rally fizzled into the end of 2024 after Federal Reserve Chair Jerome Powell sounded an inflation warning on Dec. 18, and bitcoin suffered even steeper losses last week as a spike in bond yields prompted investors to dump growth-oriented risk assets. This Monday, bitcoin briefly dipped below $90,000.
The price of bitcoin has been taking its cue from the equities market in recent weeks, thanks in part to the popularity of bitcoin ETFs, which have led to the institutionalization of the asset. Bitcoin’s correlation with the S&P 500 has climbed in the past week, while its correlation with gold has dropped sharply since the end of December.
The Microsoft 365 website on a laptop arranged in New York, US, on Tuesday, June 25, 2024.
Bloomberg | Bloomberg | Getty Images
The beginning of the year is a great time to do some basic cyber hygiene. We’ve all been told to patch, change passwords, and update software. But one concern that has been increasingly creeping to the forefront is the sometimes quiet integration of potentially privacy-invading AI into programs.
“AI’s rapid integration into our software and services has and should continue to raise significant questions about privacy policies that preceded the AI era,” said Lynette Owens, vice president of global consumer education at cybersecurity company Trend Micro. Many programs we use today — whether email, bookkeeping, or productivity tools, or social media and streaming apps — may be governed by privacy policies that lack clarity on whether our personal data can be used to train AI models.
“This leaves all of us vulnerable to uses of our personal information without the appropriate consent. It’s time for every app, website, or online service to take a good hard look at the data they are collecting, who they’re sharing it with, how they’re sharing it, and whether or not it can be accessed to train AI models,” Owens said. “There’s a lot of catch up needed to be done.”
Where AI is already inside our daily online lives
Owens said the potential issues overlap with most of the programs and applications we use on a daily basis.
“Many platforms have been integrating AI into their operations for years, long before AI became a buzzword,” she said.
As an example, Owens points out that Gmail has used AI for spam filtering and predictive text with its “Smart Compose” feature. “And streaming services like Netflix rely on AI to analyze viewing habits and recommend content,” Owens said. Social media platforms like Facebook and Instagram have long used AI for facial recognition in photos and personalized content feeds.
“While these tools offer convenience, consumers should consider the potential privacy trade-offs, such as how much personal data is being collected and how it is used to train AI systems. Everyone should carefully review privacy settings, understand what data is being shared, and regularly check for updates to terms of service,” Owens said.
One tool that has come in for particular scrutiny is Microsoft’s connected experiences, which has been around since 2019 and is turned on by default, with an option to opt out. It was recently highlighted in press reports — inaccurately, according to the company as well as some outside cybersecurity experts who have looked at the issue — as a feature that is new or whose settings have changed. Leaving the sensational headlines aside, privacy experts do worry that advances in AI can lead to the potential for data and words in programs like Microsoft Word to be used in ways that privacy settings do not adequately cover.
“When tools like connected experiences evolve, even if the underlying privacy settings haven’t changed, the implications of data use might be far broader,” Owens said.
A spokesman for Microsoft wrote in a statement to CNBC that Microsoft does not use customer data from Microsoft 365 consumer and commercial applications to train foundational large language models. He added that in certain instances, customers may consent to using their data for specific purposes, such as custom model development explicitly requested by some commercial customers. Additionally, the setting enables cloud-backed features many people have come to expect from productivity tools such as real-time co-authoring, cloud storage and tools like Editor in Word that provide spelling and grammar suggestions.
Default privacy settings are an issue
Ted Miracco, CEO of security software company Approov, said features like Microsoft’s connected experiences are a double-edged sword — the promise of enhanced productivity but the introduction of significant privacy red flags. The setting’s default-on status could, Miracco said, opt people into something they aren’t necessarily aware of, primarily related to data collection, and organizations may also want to think twice before leaving the feature on.
“Microsoft’s assurance provides only partial relief, but still falls short of mitigating some real privacy concern,” Miracco said.
Perception can be its own problem, according to Kaveh Vahdat, founder of RiseOpp, an SEO marketing agency.
“Having the default to enablement shifts the dynamic significantly,” Vahdat said. “Automatically enabling these features, even with good intentions, inherently places the onus on users to review and modify their privacy settings, which can feel intrusive or manipulative to some.”
His view is that companies need to be more transparent, not less, in an environment where there is a lot of distrust and suspicion regarding AI.
Companies including Microsoft should emphasize default opt-out rather than opt-in, and might provide more granular, non-technical information about how personal content is handled because perception can become a reality.
“Even if the technology is completely safe, public perception is shaped not just by facts but by fears and assumptions — especially in the AI era where users often feel disempowered,” he said.
Default settings that enable sharing make sense for business reasons but are bad for consumer privacy, according to Jochem Hummel, assistant professor of information systems and management at Warwick Business School at the University of Warwick in England.
Companies are able to enhance their products and maintain competitiveness with more data sharing as the default, Hummel said. However, from a user standpoint, prioritizing privacy by adopting an opt-in model for data sharing would be “a more ethical approach,” he said. And as long as the additional features offered through data collection are not indispensable, users can choose which aligns more closely with their interests.
There are real benefits to the current tradeoff between AI-enhanced tools and privacy, Hummel said, based on what he is seeing in the work turned in by students. Students who have grown up with web cameras, lives broadcast in real-time on social media, and all-encompassing technology, are often less concerned about privacy, Hummel said, and are embracing these tools enthusiastically. “My students, for example, are creating better presentations than ever,” he said.
Managing the risks
In areas such as copyright law, fears about massive copying by LLMs have been overblown, according to Kevin Smith, director of libraries at Colby College, but AI’s evolution does intersect with core privacy concerns.
“A lot of the privacy concerns currently being raised about AI have actually been around for years; the rapid deployment of large language model trained AI has just focused attention on some of those issues,” Smith said. “Personal information is all about relationships, so the risk that AI models could uncover data that was more secure in a more ‘static’ system is the real change we need to find ways to manage,” he added.
In most programs, turning off AI features is an option buried in the settings. For instance, with connected experiences, open a document, click “File,” go to “Account,” and find the privacy settings. From there, go to “Manage Settings,” scroll down to connected experiences, and click the box to turn it off. Once you do, Microsoft warns: “If you turn this off, some experiences may not be available to you.” Microsoft says leaving the setting on will allow for more communication, collaboration, and AI-generated suggestions.
In Gmail, open the app, tap the menu, go to settings, select the account you want to change, scroll to the “general” section, and uncheck the boxes next to the various “Smart features” and personalization options.
As cybersecurity vendor Malwarebytes put it in a blog post about the Microsoft feature: “turning that option off might result in some lost functionality if you’re working on the same document with other people in your organization. … If you want to turn these settings off for reasons of privacy and you don’t use them much anyway, by all means, do so. The settings can all be found under Privacy Settings for a reason. But nowhere could I find any indication that these connected experiences were used to train AI models.”
While these instructions are easy enough to follow, and learning more about what you have agreed to is probably a good option, some experts say the onus should not be on the consumer to deactivate these settings. “When companies implement features like these, they often present them as opt-ins for enhanced functionality, but users may not fully understand the scope of what they’re agreeing to,” said Wes Chaar, a data privacy expert.
“The crux of the issue lies in the vague disclosures and lack of clear communication about what ‘connected’ entails and how deeply their personal content is analyzed or stored,” Chaar said. “For those outside of technology, it might be likened to inviting a helpful assistant into your home, only to learn later they’ve taken notes on your private conversations for a training manual.”
The decision to manage, limit, or even revoke access to data underscores the imbalance in the current digital ecosystem. “Without robust systems prioritizing user consent and offering control, individuals are left vulnerable to having their data repurposed in ways they neither anticipate nor benefit from,” Chaar said.
Microsoft Chairman and CEO Satya Nadella speaks during the Microsoft May 20 Briefing event at Microsoft in Redmond, Washington, on May 20, 2024. Nadella unveiled a new category of PC on Monday that features generative artificial intelligence tools built directly into Windows, the company’s world-leading operating system.
Jason Redmond | AFP | Getty Images
Microsoft on Wednesday announced a tier of its Copilot assistant for corporate users with a consumption-based pricing model. The new Microsoft 365 Copilot Chat option represents an alternative to the Microsoft 365 Copilot, which organizations have been able to pay for based on the number of employees with access to it.
The introduction shows Microsoft’s determination to popularize generative artificial intelligence software in the workplace. Several companies have adopted the Microsoft 365 Copilot since it became available for $30 per person per month in November 2023, but one group of analysts recently characterized the product push as “slow/underwhelming.”
Copilot Chat can be an on-ramp to Microsoft 365 Copilot, with a lower barrier to entry, Jared Spataro, Microsoft’s chief marketing officer for AI at work, said in a CNBC interview this week. Both offerings rely on artificial intelligence models from Microsoft-backed OpenAI.
Copilot Chat can fetch information from the web and summarize text in uploaded documents, and people using it can create agents that perform tasks in the background. It can enrich answers with information from customers’ files and third-party sources.
Unlike Microsoft 365 Copilot, Copilot Chat can’t be found in Office applications such as Word and Excel. People can reach Copilot Chat starting today in the Microsoft 365 Copilot app for Windows, Android and iOS. The app is formerly known as Microsoft 365 (Office). It’s also available from the web at m365copilot.com, a spokesperson said.
Some management teams have resisted paying Microsoft to give the 365 Copilot to thousands of employees because they weren’t sure how helpful it would be at the $30 monthly price. Costs will vary for the Copilot Chat depending on what employees do with it, but at least organizations won’t end up paying for nonuse.
“As one customer said to me, this model lets the business value prove itself,” Spataro said.
Microsoft charges for Copilot Chat based on the number of “messages” a client uses. Each “message” costs a penny, according to a blog post. Responses that draw on the client’s proprietary files cost 30 “messages” each, and every action an agent takes on behalf of employees costs 25 “messages.”
“We’re talking a cent, 2 cents, 30 cents, and that is a very easy way for people to get started,” Spataro said.
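The consumption pricing described above can be sketched as a quick back-of-the-envelope calculation. This is a hypothetical illustration: the per-message rates follow the figures reported here, but the usage counts and the `monthly_cost` helper are invented for the example.

```python
# Sketch of Copilot Chat consumption pricing, per the reported rates:
# one "message" costs $0.01; a response grounded in the client's own
# files costs 30 messages; each agent action costs 25 messages.
MESSAGE_RATE_USD = 0.01
WEB_QUERY_MESSAGES = 1
GROUNDED_RESPONSE_MESSAGES = 30
AGENT_ACTION_MESSAGES = 25

def monthly_cost(web_queries: int, grounded_responses: int, agent_actions: int) -> float:
    """Estimate a month's bill in dollars from usage counts."""
    messages = (web_queries * WEB_QUERY_MESSAGES
                + grounded_responses * GROUNDED_RESPONSE_MESSAGES
                + agent_actions * AGENT_ACTION_MESSAGES)
    return messages * MESSAGE_RATE_USD

# Hypothetical month: 200 web queries, 40 grounded responses, 10 agent actions
# = 200 + 1,200 + 250 = 1,650 messages, or $16.50.
print(f"${monthly_cost(200, 40, 10):.2f}")
```

Under this model, a light user might cost an organization a few dollars a month, rather than the flat $30 per seat of Microsoft 365 Copilot — which is the "business value prove itself" dynamic Spataro describes.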
Salesforce charges $2 per conversation for its Agentforce AI chat service, where employees can set up automated sales and customer service processes.
The number of people using Microsoft 365 Copilot every day more than doubled quarter over quarter, CEO Satya Nadella said in October, although he did not disclose how many were using it. But sign-ups have been mounting. UBS said in October that it had 50,000 Microsoft 365 Copilot licenses, and in November, Accenture committed to having 200,000 users of the tool.