Ex-Meta staffers see booming business targeting online disinformation after two wars break out
Published 1 year ago by admin
People using their mobile phones outside the offices of Meta, the parent company of Facebook and Instagram, in King’s Cross, London.
Joshua Bratt | Pa Images | Getty Images
Lauren Wagner knows a lot about disinformation. Heading into the 2020 U.S. presidential election, she worked at Facebook, focusing on information integrity and overseeing products designed to make sure content was moderated and fact-checked.
She can’t believe what she’s seeing now. Since war erupted last month between Israel and Hamas, the constant deluge of misinformation and violent content spreading across the internet is hard for her to comprehend. Wagner left Facebook parent Meta last year, and her work in trust and safety feels like it was from a prior era.
“When you’re in a situation where there’s such a large volume of visual content, how do you even start managing that when it’s like long video clips and there’s multiple points of view?” Wagner said. “This idea of live-streaming terrorism, essentially at such a deep and in-depth scale, I don’t know how you manage that.”
The problem is even more pronounced because Meta, Google parent Alphabet, and X, formerly Twitter, have all eliminated jobs tied to content moderation and trust and safety as part of broader cost-cutting measures that began late last year and continued through 2023. Now, as people post and share out-of-context videos of previous wars, fabricated audio in news clips, and graphic videos of terrorist acts, the world’s most trafficked websites are struggling to keep up, experts have noted.
As the founder of a new venture capital firm, Radium Ventures, Wagner is in the midst of raising her first fund dedicated solely to startup founders working on trust and safety technologies. She said many more platforms that think they are “fairly innocuous” are seeing the need to act.
“Hopefully this is shining a light on the fact that if you house user-generated content, there’s an opportunity for misinformation, for charged information or potentially damaging information to spread,” Wagner said.
In addition to the traditional social networks, the highly polarized nature of the Israel-Hamas war affects internet platforms that weren’t typically known for hosting political discussions but now have to take precautionary measures. Popular online messaging and discussion channels such as Discord and Telegram could be exploited by terrorist groups and other bad actors who are increasingly using multiple communication services to create and conduct their propaganda campaigns.
A Discord spokesperson declined to comment. Telegram didn’t respond to a request for comment.
A demonstrator places flowers on white-shrouded body bags representing victims in the Israel-Hamas conflict, in front of the White House in Washington, DC, on November 15, 2023.
Mandel Ngan | AFP | Getty Images
On kids gaming site Roblox, thousands of users recently attended pro-Palestinian protests held within the virtual world. That has required the company to closely monitor for posts that violate its community standards, a Roblox spokesperson told CNBC in a statement.
Roblox has thousands of moderators and “automated detection tools in place to monitor,” the spokesperson said, adding that the site “allows for expressions of solidarity,” but does “not allow for content that endorses or condones violence, promotes terrorism or hatred against individuals or groups, or calls for supporting a specific political party.”
When it comes to looking for talent in the trust and safety space, there’s no shortage. Many of Wagner’s former colleagues at Meta lost their jobs and remain dedicated to the cause.
One of her first investments was in a startup called Cove, which was founded by former Meta trust and safety staffers. Cove is among a handful of emerging companies developing technology that they can sell to organizations, following an established enterprise software model. Other Meta veterans have recently started Cinder and Sero AI to go after the same general market.
“It adds some more coherence to the information ecosystem,” Wagner, who is also a senior advisor at the Responsible Innovation Labs nonprofit, said regarding the new crop of trust and safety tools. “They provide some level of standardized processes across companies where they can access tools and guidelines to be able to manage user-generated content effectively.”
‘Brilliant people out there’
It’s not just ex-Meta staffers who recognize the opportunity.
The founding team of startup TrustLab came from companies including Google, Reddit and TikTok parent ByteDance. And the founders of Intrinsic previously worked on trust and safety-related issues at Apple and Discord.
For the TrustCon conference in July, tech policy wonks and other industry experts headed to San Francisco to discuss the latest hot topics in online trust and safety, including their concerns about the potential societal effects of layoffs across the industry.
Several startups showcased their products in the exhibition hall, promoting their services, talking to potential clients and recruiting talent. ActiveFence, which describes itself as a “leader in providing Trust & Safety solutions to protect online platforms and their users from malicious behavior and content,” had a booth at the conference. So did Checkstep, a content moderation platform.
Cove also had an exhibit at the event.
“I think the cost-cutting has definitely obviously affected the labor markets and the hiring market,” said Cove CEO Michael Dworsky, who co-founded the company in 2021 after more than three years at Facebook. “There are a bunch of brilliant people out there that we can now hire.”
Cove has developed software to help manage a company’s content policy and review process. The management platform works alongside various content moderation systems, or classifiers, to detect issues such as harassment, so businesses can protect their users without needing expensive engineers to develop the code. The company, which counts anonymous social media apps YikYak and Sidechat as customers, says on its website that Cove is “the solution we wish we had at Meta.”
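Cove’s internals aren’t public, but the general pattern such tools implement — running user content through per-category classifiers and routing it according to policy thresholds, with no custom engineering required of the customer — can be sketched roughly as follows. All names and numbers here are hypothetical, not Cove’s actual API.

```python
# Hypothetical sketch of a classifier-plus-policy moderation pipeline.
# None of these names come from Cove; they only illustrate the pattern
# of scoring content per category and routing it by policy thresholds.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Policy:
    category: str            # e.g. "harassment"
    review_threshold: float  # scores above this go to human review
    remove_threshold: float  # scores above this are removed outright

def moderate(text: str,
             classifiers: dict[str, Callable[[str], float]],
             policies: list[Policy]) -> str:
    """Return 'remove', 'review', or 'allow' for a piece of content."""
    decision = "allow"
    for policy in policies:
        score = classifiers[policy.category](text)
        if score >= policy.remove_threshold:
            return "remove"      # hard violation: stop immediately
        if score >= policy.review_threshold:
            decision = "review"  # flag, but keep checking other policies
    return decision

# Toy classifier standing in for a real ML model.
def harassment_score(text: str) -> float:
    return 0.9 if "insult" in text.lower() else 0.1

policies = [Policy("harassment", review_threshold=0.5, remove_threshold=0.95)]
classifiers = {"harassment": harassment_score}

print(moderate("a friendly comment", classifiers, policies))  # allow
print(moderate("a vile insult", classifiers, policies))       # review
```

The design point is that the platform’s policy lives in data (the thresholds), not in code, so a trust and safety team can tune it without engineers.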
“When Facebook started really investing in trust and safety, it’s not like there were tools on the market that they could have bought,” said Cove technology chief Mason Silber, who previously spent seven years at Facebook. “They didn’t want to build, they didn’t want to become the experts. They did it more out of necessity than desire, and they built some of the most robust, trusted safety solutions in the world.”
A Meta spokesperson declined to comment for this story.
Wagner, who left Meta in mid-2022 after about two and a half years at the company, said that earlier content moderation was more manageable than it is today, particularly with the current Middle East crisis. In the past, for instance, a trust and safety team member could analyze a picture and determine whether it contained false information through a fairly routine scan, she said.
But the quantity and speed of photos and videos being uploaded, and the ability of people to manipulate details, especially as generative AI tools become more mainstream, have created a whole new challenge.
Social media sites are now dealing with a swarm of content related to two simultaneous wars, one in the Middle East and another between Russia and Ukraine. On top of that, they have to get ready for the 2024 presidential election in less than a year. Former President Donald Trump, who is under criminal indictment in Georgia for alleged interference in the 2020 election, is the front-runner to become the Republican nominee.
Manu Aggarwal, a partner at research firm Everest Group, said trust and safety is among the fastest-growing segments of a part of the market called business process services, which includes the outsourcing of various IT-related tasks and call centers.
By 2024, Everest Group projects the overall business process services market to be about $300 billion, with trust and safety representing about $11 billion of that figure. Companies such as Accenture and Genpact, which offer outsourced trust and safety services and contract workers, currently capture the bulk of spending, primarily because Big Tech companies have been “building their own” tools, Aggarwal said.
As startups focus on selling packaged and easy-to-use technology to a wider swath of clients, Everest Group practice director Abhijnan Dasgupta estimates that spending on trust and safety tools could be between $750 million and $1 billion by the end of 2024, up from $500 million in 2023. This figure is partly dependent on whether companies adopt more AI services, thus requiring them to potentially abide by emerging AI regulations, he added.
Tech investors are circling the opportunity. Venture capital firm Accel is the lead investor in Cinder, a two-year-old startup whose founders helped build much of Meta’s internal trust and safety systems and also worked on counterterrorism efforts.
“What better team to solve this challenge than the one that played a major role in defining Facebook’s Trust and Safety operations?” Accel’s Sara Ittelson said in a press release announcing the financing in December.
Ittelson told CNBC that she expects the trust and safety technology market to grow as more platforms see the need for greater protection and as the social media market continues to fragment.
New content policy regulations have also spurred investment in the area.
The European Commission is now requiring large online platforms with big audiences in the EU to document and detail how they moderate and remove illegal and violent content on their services or face fines of up to 6% of their annual revenue.
Cinder and Cove are promoting their technologies as ways that online businesses can streamline and document their content moderation procedures to comply with the EU’s new regulations, called the Digital Services Act.
‘Frankenstein’s monster’
In the absence of specialized tech tools, Cove’s Dworsky said, many companies have tried to customize Zendesk, which sells customer support software, and Google Sheets to capture their trust and safety policies. That can result in a “very manual, unscalable approach,” he said, describing the process for some companies as “rebuilding and building a Frankenstein’s monster.”
Still, industry experts know that even the most effective trust and safety technologies aren’t a panacea for a problem as big and seemingly uncontrollable as the spread of violent content and disinformation. According to a survey published last week by the Anti-Defamation League, 70% of respondents said that on social media, they’d been exposed to at least one of several types of misinformation or hate related to the Israel-Hamas conflict.
As the problem expands, companies are dealing with the constant struggle over determining what constitutes free speech and what crosses the line into unlawful, or at least unacceptable, content.
Alex Goldenberg, the lead intelligence analyst at the Network Contagion Research Institute, said that in addition to doing their best to maintain integrity on their sites, companies should be honest with their users about their content moderation efforts.
“There’s a balance that is tough to strike, but it is strikable,” he said. “One thing I would recommend is transparency at a time where third-party access and understanding to what is going on at scale on social platforms is what is needed.”
Noam Bardin, the former CEO of navigation firm Waze, now owned by Google, founded the social news-sharing and real-time messaging service Post last year. Bardin, who’s from Israel, said he’s been frustrated with the spread of misinformation and disinformation since the war began in October.
“The whole perception of what’s going on is fashioned and managed through social media, and this means there’s a tremendous influx of propaganda, disinformation, AI-generated content, bringing content from other conflicts into this conflict,” Bardin said.
Bardin said that Meta and X have struggled to manage and remove questionable posts, a challenge that’s become even greater with the influx of videos.
At Post, which is most similar to Twitter, Bardin said he’s been incorporating “all these moderation tools, automated tools and processes” since his company’s inception. He uses services from ActiveFence and OpenWeb, which are both based in Israel.
“Basically, anytime you comment or you post on our platform, it goes through it,” Bardin said regarding the trust and safety software. “It looks at it from an AI perspective to understand what it is and to rank it in terms of harm, pornography, violence, etc.”
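ActiveFence’s and OpenWeb’s actual APIs aren’t shown in the story; as a rough, hypothetical illustration of the kind of scoring Bardin describes — every new post ranked across harm categories before it appears — the logic might look like this:

```python
# Hypothetical illustration of pre-publication harm ranking, as Bardin
# describes: each post is scored per category, and the worst-scoring
# category determines its harm rank. The scoring function is a
# hard-coded stand-in for a real vendor API call.

HARM_CATEGORIES = ("pornography", "violence", "hate")

def score_post(text: str) -> dict[str, float]:
    """Stand-in for a vendor API that returns per-category scores."""
    lowered = text.lower()
    return {
        "pornography": 0.0,
        "violence": 0.8 if "attack" in lowered else 0.05,
        "hate": 0.05,
    }

def rank_harm(text: str) -> tuple[str, float]:
    """Return the worst-scoring category and its score for a post."""
    scores = score_post(text)
    worst = max(HARM_CATEGORIES, key=lambda c: scores[c])
    return worst, scores[worst]

print(rank_harm("plans to attack the convoy"))  # ('violence', 0.8)
```

A platform would then gate publication on that rank, for example holding anything above a chosen score for review.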
Post is an example of the kinds of companies that trust and safety startups are focused on. Active online communities with live-chatting services have also emerged on video game sites, online marketplaces, dating apps and music streaming sites, opening them up to potentially harmful content from users.
Brian Fishman, co-founder of Cinder, said “militant organizations” rely on a network of services to spread propaganda, including platforms like Telegram, and sites such as Rumble and Vimeo, which have less advanced technology than Facebook.
Representatives from Rumble and Vimeo didn’t respond to requests for comment.
Fishman said customers are starting to see trust and safety tools as almost an extension of their cybersecurity budgets. In both cases, companies have to spend money to prevent possible disasters.
“Some of it is you’re paying for insurance, which means that you’re not getting full return on that investment every day,” Fishman said. “You’re investing a little bit more during black times, so that you got capability when you really, really need it, and this is one of those moments where companies really need it.”
OpenAI says it needs ‘more capital than we’d imagined’ as it lays out for-profit plan
Published December 27, 2024, by admin
OpenAI said Friday that in moving toward a new for-profit structure in 2025, the company will create a public benefit corporation to oversee commercial operations, removing some of its nonprofit restrictions and allowing it to function more like a high-growth startup.
“The hundreds of billions of dollars that major companies are now investing into AI development show what it will really take for OpenAI to continue pursuing the mission,” OpenAI’s board wrote in a blog post. “We once again need to raise more capital than we’d imagined. Investors want to back us but, at this scale of capital, need conventional equity and less structural bespokeness.”
The pressure on OpenAI is tied to its $157 billion valuation, achieved in the two years since the company launched its viral chatbot, ChatGPT, and kicked off the boom in generative artificial intelligence. OpenAI closed its latest $6.6 billion round in October, gearing up to aggressively compete with Elon Musk’s xAI as well as Microsoft, Google, Amazon and Anthropic in a market that’s predicted to top $1 trillion in revenue within a decade.
Developing the large language models at the heart of ChatGPT and other generative AI products requires an ongoing investment in high-powered processors, provided largely by Nvidia, and cloud infrastructure, which OpenAI largely receives from top backer Microsoft.
OpenAI expects about $5 billion in losses on $3.7 billion in revenue this year, CNBC confirmed in September. Those numbers are increasing rapidly.
By transforming into a Delaware PBC “with ordinary shares of stock,” OpenAI says it can pursue commercial operations, while separately hiring a staff for its nonprofit arm and allowing that wing to take on charitable activities in health care, education and science.
The nonprofit will have a “significant interest” in the PBC “at a fair valuation determined by independent financial advisors,” OpenAI wrote.
OpenAI’s complicated structure as it exists today is the result of its creation as a nonprofit in 2015. It was founded by CEO Sam Altman, Musk and others as a research lab focused on artificial general intelligence, or AGI, which was an entirely futuristic concept at the time.
In 2019, OpenAI aimed to move past its role as solely a research lab in hopes of functioning more like a startup, so it created a so-called capped-profit model, with the nonprofit still controlling the overall entity.
“Our current structure does not allow the Board to directly consider the interests of those who would finance the mission and does not enable the nonprofit to easily do more than control the for-profit,” OpenAI wrote in Friday’s post.
OpenAI added that the change would “enable us to raise the necessary capital with conventional terms like our competitors.”
Musk’s opposition
OpenAI’s efforts to restructure face some major hurdles. The most significant is Musk, who is in the midst of a heated legal battle with Altman that could have a significant impact on the company’s future.
In recent months, Musk has sued OpenAI and asked a court to stop the company from converting to a for-profit corporation from a nonprofit. In posts on X, he described that effort as a “total scam” and claimed that “OpenAI is evil.” Earlier this month, OpenAI clapped back, alleging that in 2017 Musk “not only wanted, but actually created, a for-profit” to serve as the company’s proposed new structure.
In addition to its face-off with Musk, OpenAI has been dealing with an outflow of high-level talent, due in part to concerns that the company has focused on taking commercial products to market at the expense of safety.
In late September, OpenAI Chief Technology Officer Mira Murati announced she would depart the company after 6½ years. That same day, research chief Bob McGrew and Barret Zoph, a research vice president, also announced they were leaving. A month earlier, co-founder John Schulman said he was leaving for rival startup Anthropic.
Altman said during a September interview at Italian Tech Week that recent executive departures were not related to the company’s potential restructuring: “We have been thinking about that — our board has — for almost a year independently, as we think about what it takes to get to our next stage,” he said.
Those weren’t the first big-name exits. In May, OpenAI co-founder Ilya Sutskever and former safety leader Jan Leike announced their departures, with Leike also joining Anthropic.
Leike wrote in a social media post at the time that disagreements with leadership about company priorities drove his decision.
“Over the past years, safety culture and processes have taken a backseat to shiny products,” he wrote.
One employee, who worked under Leike, quit soon after him, writing on X in September that “OpenAI was structured as a non-profit, but it acted like a for-profit.” The employee added, “You should not believe OpenAI when it promises to do the right thing later.”
Google CEO Pichai struggled to navigate a pressure-filled year
Published December 27, 2024, by admin
NEW YORK, NY – NOVEMBER 01: Sundar Pichai, C.E.O., Google Inc. speaks at the New York Times DealBook conference on November 1, 2018 in New York City. (Photo by Stephanie Keith/Getty Images)
Stephanie Keith | Getty Images News | Getty Images
Google’s blowout earnings report in April, which sparked the biggest rally in Alphabet shares since 2015 and pushed its market cap past $2 trillion for the first time, tempered fear that the company was falling behind in artificial intelligence.
As executives enthusiastically talked about the results with Google’s employees at an all-hands meeting the following week, it was clear that Wall Street viewed things differently than the company’s workforce.
“We’ve noticed a significant decline in morale, increased distrust and a disconnect between leadership and the workforce,” one employee wrote in a comment that was read by executives at the meeting. “How does leadership plan to address these concerns and regain the trust, morale and cohesion that have been foundational to our company’s success?”
The comment was highly rated on an internal forum.
“Despite the company’s stellar performance and record earnings, many Googlers have not received meaningful compensation increases,” another top-rated employee question read.
That meeting set the stage for what would be a year of contrasting takes from the company’s vocal workforce. As Google faced some of the most intense pressure it’s experienced since going public two decades ago, so too did CEO Sundar Pichai, who took the helm in 2015.
Pichai oversaw a steady stream of revenue growth this year in key areas like search ads and cloud. The company rolled out groundbreaking technologies, rounded out its AI strategy despite a slew of embarrassing product incidents and saw its stock price rise more than 40% as of Thursday’s close, ahead of the S&P 500 but trailing rivals Meta and Amazon.
Over the course of 2024, many staffers questioned Pichai’s vision following product mishaps in the first half of the year as well as internal shake-ups and layoffs, according to conversations with more than a dozen employees, audio recordings and internal correspondence.
As the second half of the year progressed and Google rolled out a number of eye-catching AI products, Pichai’s standing improved, though some skepticism remains, sources told CNBC.
Google DeepMind chief Demis Hassabis (L) and Google chief executive Sundar Pichai open the tech titan’s annual I/O developers conference focusing on how artificial intelligence is being woven into search, email, virtual meetings and more.
Glenn Chapman | AFP | Getty Images
The AI race pressure cooker
After the introduction of ChatGPT in late 2022, the tech industry saw an influx of AI products from Microsoft, with its Copilot AI assistant, and Meta, which placed its Meta AI chatbot in the search functions of its apps, as well as from hot startups like OpenAI and Perplexity.
The popularity of those tools has eaten into Google’s grip on U.S. search. The company’s share of the search advertising market is expected to dip below 50% in 2025, which would be its first time below that mark in more than a decade, according to research firm eMarketer.
Google responded to the pressures from new AI tools with offerings of its own. The company in 2024 rebranded its family of AI models as Gemini and released a number of products that were well received. But in its scramble to play catch-up, the company also released a pair of AI products that initially proved embarrassing.
In February, Google launched Imagen 2, which turned user prompts into AI-generated images. Immediately after it was introduced, the product came under scrutiny for historical inaccuracies discovered by users. Notably, when one user asked it to show a German soldier in 1943, the tool depicted a racially diverse set of soldiers wearing German military uniforms of the era.
The company pulled the feature, and Pichai told employees the company had “offended our users and shown bias,” according to a memo. Google said it would take a few weeks to relaunch Imagen 2, but it ended up being six months before it was revived as Imagen 3 in August.
“We definitely messed up on the image generation,” Google co-founder Sergey Brin told a small crowd at a hacker house in March, in a video posted to YouTube. “It was mostly due to just not thorough testing.”
The launch of AI Overview in May caused a similar reaction.
That product showed users AI summaries atop Google’s traditional search results. Pichai hyped the product, calling it the biggest change to search in 25 years. Once again, users were quick to find problems.
When asked “How many rocks should I eat each day,” the tool said, “According to UC Berkeley geologists, people should eat at least one small rock a day.” AI Overview also listed the vitamins and digestive benefits of rocks.
Google responded by saying it would add more guardrails to AI Overview for health-related queries, but said the mistakes weren’t hallucinations but rather rare edge cases. Search Vice President Liz Reid told employees at an all-hands meeting in June that AI Overview’s launch shouldn’t discourage them from taking risks.
“We should act with urgency,” Reid said. “When we find new problems, we should do the extensive testing but we won’t always find everything and that just means that we respond.”
Beyond its AI blunders, Google also saw its greatest regulatory challenges to date in 2024.
In August, a federal judge ruled that the company illegally holds a monopoly in the search market. The Justice Department in November asked that Google be forced to divest its Chrome internet browser unit as a remedy for the ruling.
The DOJ’s request represents the agency’s most aggressive attempt to break up a tech company since its antitrust case against Microsoft, which reached a settlement in 2001.
The remedies are expected to be decided next summer, and Google has said it will appeal, likely dragging out the situation a couple more years, but the company faces more antitrust hurdles.
In a separate case, the DOJ accused the company of illegally dominating online ad technology. That trial closed in September and awaits a judge’s ruling. In October, a U.S. judge issued a permanent injunction that will force Google to offer alternatives to its Google Play app store on Android phones, though Google later won a temporary pause on that ruling, meaning it won’t have to open up Android to more app stores yet.
A search for vision
Amid the external pressure, Google notched some notable victories particularly toward the end of 2024, leading to a more positive sentiment from people within and outside the company.
Google successfully launched its most powerful suite of new Gemini models that underpin all of the company’s AI products, including its lightweight model Gemini Flash, which has been popular among developers. YouTube’s combined ad and subscription revenue over the past four quarters surpassed $50 billion.
In the third quarter, Google saw the fastest-growing cloud business across the big tech players, up 35% over last year, with operating margins of 17%. The company has also seen double-digit revenue growth for each of the past four quarters and launched Trillium, its powerful sixth-generation Tensor Processing Units, or TPUs, which were also found to have powered Apple’s AI models.
Despite the blunders, AI Overview reached nearly 1 billion monthly users by the end of October. Demand for AI software has also driven consistent growth for the company’s cloud infrastructure. And Google launched an impressive video generation product, Veo 2, this month as well as an updated AI note-taking product, NotebookLM.
Beyond AI, Google in December announced Willow, a chip the company calls its biggest step in the march toward commercially viable quantum computing. The Waymo self-driving car unit was also a bright spot, expanding its robotaxi service to three cities and laying the groundwork for even more expansion in 2025. The company has delivered 4 million fully autonomous rides this year, with plans to commercially launch in Austin, Texas, and Atlanta next year.
A Google quantum processor, “Sycamore,” is held up to the camera by a hand wearing blue gloves. In 2019, Google made a breakthrough in quantum computing.
Peter Kneffel | Picture Alliance | Getty Images
But as Pichai approaches a decade running Google and starts his sixth year as CEO of parent Alphabet, questions remain about his ability to guide the company into the future.
Internally, employees routinely criticize leadership on the company’s Memegen messaging board, and some have aired their grievances publicly.
“Google does not have one single visionary leader,” a Google software engineer wrote in a LinkedIn post earlier this year that received more than 8,500 reactions. “Not a one. From the C-suite to the SVPs to the VPs, they are all profoundly boring and glassy-eyed.”
In October, Google announced it would shake up the leadership of its ads and search division.
The company replaced longtime search boss Prabhakar Raghavan with Nick Fox, a deputy of Raghavan’s and a career Google employee. Raghavan was given the title of “chief scientist,” but internally, he is now listed as an “IC,” or individual contributor.
Google also shifted the team working on its Gemini AI app to the Google DeepMind division, under AI head Demis Hassabis. Employees praised Pichai’s leadership shuffle, but some complained that the moves should’ve happened sooner.
Notably, some employees were perturbed when Raghavan addressed employees at an all-hands meeting in April, when he urged them to move faster, according to several people who spoke with CNBC. Raghavan noted that the staffers working to fix the failed Imagen 2 tool had increased their workloads from 100 hours a week to 120 hours to correct it in a timely manner.
Pichai has made efforts to get Google back to its nimble startup-like culture.
When addressing employees, Pichai often name-checked co-founders Sergey Brin and Larry Page to remind them of Google’s scrappy roots. He’s flattened the company, removing 10% of middle management, according to audio of a December all-hands meeting. And in the spring, Pichai greenlit a hackathon, allowing employees to build using Google products that have yet to be announced. Pichai has also personally joined meetings with Google’s Labs team and enabled them to move quickly on products like NotebookLM, one of the company’s hit AI products in 2024.
Google Co-Founder Sergey Brin speaks during a press conference after the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo at a hotel in Seoul on March 12, 2016.
Jung Yeon-Je | AFP | Getty Images
After Brin’s hacker house appearance in March, some employees internally joked he should retake the helm, nostalgic for what they perceived as a visionary leader devoid of corporate speak.
Brin co-founded Google with Page in 1998, but he stepped down as president of Alphabet in 2019. Brin, who remains a board member and a principal shareholder with a stake worth more than $140 billion, began appearing more frequently on campus starting in 2023, as part of an effort to help ramp up Google’s position in the hypercompetitive AI market. Employees, particularly those working in AI and DeepMind, said they’ve seen Brin walking around the company’s Mountain View, California, headquarters throughout the year and have been able to ask him questions about projects they’re pursuing.
Despite Brin’s reemergence, several employees told CNBC they’re doubtful he could adequately run what has become an increasingly larger and complex corporation.
Employees said that although Pichai didn’t strike them as particularly visionary or as a wartime leader, it’s hard to find someone better suited for the job, given all the complexities of Alphabet. The key quandary remains: move too early and risk widespread criticism; move too late and risk missing the boat.
‘Culture clashes’
Through the year, morale inside Google wavered. Efforts to cut costs across the company in order to invest more in AI left parts of the workforce feeling bifurcated and created yet another challenge for Pichai.
Within the company’s AI and DeepMind divisions, morale is mostly high, according to employees, boosted by hefty investments. Elsewhere, the vibes have been marred by cost cuts, bureaucracy and declining trust in leadership, employees said.
DeepMind and AI teams have held off-sites and team-building activities, and have much bigger travel and recruiting budgets, people familiar with the matter said. In the spring, the company moved employees out of an eight-story office on San Francisco’s waterfront Embarcadero and replaced them with AI and AI-adjacent teams.
Google DeepMind co-founder and Chief Executive Officer Demis Hassabis speaks during the Mobile World Congress (MWC), the telecom industry’s biggest annual gathering, in Barcelona on February 26, 2024.
Pau Barrena | Afp | Getty Images
A meme posted internally in November summed it up.
The meme featured a photo of “Wicked” cast members: one, labeled “execs,” looked longingly at a fellow actor labeled “Gemini” while ignoring another beside her labeled “users.”
A Google spokesperson contested the idea that AI workers are receiving favorable treatment and said higher travel and recruiting budgets are not exclusive to AI teams or DeepMind.
“Most Googlers, regardless of team, continue to feel positively about our mission and the company’s future, and are proud to work here,” the spokesperson said.
A few employees say they’re no longer motivated by the prospect of landing a promotion, which has become harder to achieve, but rather by the hope of avoiding layoffs.
Despite slashing 12,000 jobs, or roughly 6% of its workforce, in 2023, Google has continued eliminating roles this year. In her first public statements as Google’s CFO, Anat Ashkenazi told Wall Street in October that one of her top priorities would be to drive more “cost efficiencies” across the company in order to invest more in AI.
“I think any organization can always push a little further and I’ll be looking at additional opportunities,” Ashkenazi said.
That month, Google posted a job listing for a “Central Reorg Support Team Partner.” The listing, for a fixed-term contract position, said responsibilities would include consulting with local HR teams and noted the need for the “ability to operate with empathy and diffuse/de-escalate challenging conversations/situations.”
“Hire the smartest people so they can tell us what to do,” one employee wrote on the internal forum in meme-style font atop the images of Brin and Page. “Hire a reorg consultant so they can tell us how to layoff the smartest people,” another said.
Google ultimately took the job listing down.
Pro-Palestinian protesters block the entrance to the Google I/O developer conference to protest Google’s Project Nimbus and Israeli attacks on Gaza and Rafah, at the company’s headquarters in Mountain View, California, on May 14, 2024.
Tayfun Coskun | Anadolu | Getty Images
Touting its AI technology to clients, Pichai’s leadership team has aggressively pursued federal government contracts, which has strained relations with parts of the outspoken workforce since the beginning of the year.
Google terminated more than 50 employees after a series of protests against Project Nimbus, a $1.2 billion joint contract with Amazon that provides the Israeli government and military with cloud computing and AI services. Executives repeatedly said the contract didn’t violate any of the company’s “AI principles.”
However, documents and reports show the company’s agreement allowed for giving Israel AI tools that included image categorization and object tracking, as well as provisions for state-owned weapons manufacturers. Earlier this month, a New York Times report found that four months before signing on to Nimbus, officials at the company worried that the deal would harm Google’s reputation and that “Google Cloud services could be used for, or linked to, the facilitation of human rights violations.”
At an all-hands meeting in April, a highly rated question asked why employees who did not participate in the protests were also fired, a claim later cited in a National Labor Relations Board complaint filed by affected employees. Chris Rackow, Google’s security chief, took the stage at the all-hands and rebutted those claims.
“This was a very clear case of employees disrupting and occupying work spaces, and making other employees feel unsafe,” a Google spokesperson told CNBC, adding that the company “carefully confirmed” that every person terminated was involved in the protests. “By any standard, their behavior was completely unacceptable.”
That round of job eliminations underscored Google’s clampdown on internal discussions of hot-button topics, including politics and geopolitical conflicts, the kind of discussion executives had encouraged several years prior.
One internal meme, which got more than 2,000 likes, compared Google to Star Wars’ Anakin Skywalker. It showed an image of a smiling child Skywalker framed by one of the company’s original, colorful employee badges, with two later versions of the badge showing Skywalker aging.
The final badge shows Darth Vader working for “Google,” spelled out in the font of IBM’s logo.
Technology
Larry Ellison wraps up banner year as Oracle’s stock rallies most since dot-com boom
Published December 26, 2024
By admin
Larry Ellison and Monica Seles and Bill Gates (back row) watch Carlos Alcaraz of Spain play against Alexander Zverev of Germany in their Quarterfinal match during the BNP Paribas Open in Indian Wells, California, on March 14, 2024.
Clive Brunskill | Getty Images
It’s been a good year for Larry Ellison.
Oracle’s co-founder has gained roughly $75 billion in paper wealth as the software company he started in 1979 enjoyed its biggest stock rally since 1999 and the dot-com boom.
While the S&P 500 index has gained 27% in 2024, Oracle shares have shot up 63%, lifting Ellison’s net worth to more than $217 billion, according to Forbes, behind only Tesla CEO Elon Musk and Amazon founder Jeff Bezos among the world’s richest people.
At 80, Ellison is a senior citizen in the tech industry, where his fellow billionaire founders are generally decades younger. Meta CEO Mark Zuckerberg, whose net worth has also ballooned past $200 billion, is half his age.
But Ellison has found the fountain of youth both personally and professionally. After being divorced several times, Ellison was reported this month to be involved with a 33-year-old woman. And at a meeting with analysts in Las Vegas in September, Ellison was as engaged as ever, mentioning offhand that the night before, he and his son were having dinner with his good friend Musk, who’s advising President-elect Donald Trump (then the Republican nominee) while running Tesla and his other ventures.
His big financial boon has come from Oracle, which has maneuvered its way into the artificial intelligence craze with its cloud infrastructure technology and has made its databases more accessible.
ChatGPT creator OpenAI said in June that it will use Oracle’s cloud infrastructure. Earlier this month, Oracle said it had also picked up business from Meta.
Startups, which often opt for market leader Amazon Web Services when picking a cloud, have been engaging Oracle as well. Last year, video generation startup Genmo set up a system to train an AI model with Nvidia graphics processing units, or GPUs, in Oracle’s cloud, CEO Paras Jain said. Genmo now relies on the Oracle cloud to produce videos based on the prompts that users type in on its website.
“Oracle produced a different product than what you can get elsewhere with GPU computing,” Jain said. The company offers “bare metal” computers that can sometimes yield better performance than architectures that employ server virtualization, he said.
In its latest earnings report earlier this month, Oracle came up short of analysts’ estimates and issued a forecast that was also weaker than Wall Street was expecting. The stock had its worst day of 2024, falling almost 7% and eating into the year’s gains.
Still, Ellison was bullish for the future.
“Oracle Cloud Infrastructure trains several of the world’s most important generative AI models because we are faster and less expensive than other clouds,” Ellison said in the earnings release.
For the current fiscal year, which ends in May, Oracle is expected to record revenue growth of about 10%, which would mark its second-strongest year of expansion since 2011.
Jain said that when Genmo has challenges, he communicates with Oracle sales executives and engineers through a Slack channel. The collaboration has resulted in better reliability and performance, he said. Jain said Oracle worked with Genmo to ensure that developers could launch the startup’s Mochi open-source video generator on Oracle’s cloud hardware with a single click.
“Oracle was also more price-competitive than these large hyperscalers,” Jain said.
‘That’s going to be so easy’
Three months before its December earnings report, at the analyst event in Las Vegas, Oracle had given a rosy outlook for the next three years. Executive Vice President Doug Kehring declared that the company would produce more than $66 billion in revenue in the 2026 fiscal year, and over $104 billion in fiscal 2029. The numbers suggested acceleration, with a compound annual growth rate of over 16%, compared with 9% in the latest quarter.
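The growth rate implied by those projections can be checked with a quick calculation (a simple sketch using only the figures quoted above):

```python
# Implied compound annual growth rate (CAGR) from Oracle's stated targets:
# more than $66 billion in fiscal 2026 to over $104 billion in fiscal 2029,
# i.e. three years of compounded growth.
def cagr(start: float, end: float, years: int) -> float:
    """Return the compound annual growth rate as a fraction."""
    return (end / start) ** (1 / years) - 1

rate = cagr(66, 104, 3)
print(f"{rate:.1%}")  # prints 16.4%, consistent with the "over 16%" figure
```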
After Kehring and CEO Safra Catz spoke, it was Ellison’s turn. The company’s chairman, technology chief and top shareholder strutted onto the stage in a black sweater and jeans, waved to the analysts, licked his lips and sat down. For the next 74 minutes, he answered questions from seven analysts.
“Did — did he say $104 billion?” Ellison said, referring to Kehring’s projection. Some in the crowd giggled. “That’s going to be so easy. It is kind of crazy.”
Oracle’s revenue in fiscal 2023 was just shy of $50 billion.
The new target impressed Eric Lynch, managing director of Scharf Investments, which held $167 million in Oracle shares at the end of September.
“For a company doing single digits for a decade or so, that’s unbelievable,” Lynch told CNBC in an interview.
Oracle co-founder and Chairman Larry Ellison delivers a keynote address during the Oracle OpenWorld on October 22, 2018 in San Francisco, California.
Justin Sullivan | Getty Images
Oracle is still far behind in cloud infrastructure. In 2023, Amazon controlled 39% of the market, followed by Microsoft at 23% and Google at 8.2%, according to industry researcher Gartner. That left Oracle with 1.4%.
But in database software, Oracle remains a stalwart. Gartner estimated that the company had 17% market share in database management systems in 2023.
Ellison’s challenge is to find opportunities for expansion.
Last year, he visited Microsoft headquarters in Redmond, Washington, for the first time to announce a partnership that would enable organizations to use Oracle’s database through Microsoft’s Azure cloud. Microsoft even installed Oracle hardware in its data centers.
In June, Oracle rolled out a similar announcement with Google. Then, in September, Oracle finally partnered with Amazon, introducing its database on AWS.
Oracle and Amazon had exchanged barbs for years. AWS introduced a database called Aurora in 2014, and Amazon worked hard to move itself off Oracle. Following a CNBC report on the effort, Ellison expressed doubt about Amazon’s ability to reach its goal. But the project succeeded.
In 2019, Amazon published a blog post titled, “Migration Complete – Amazon’s Consumer Business Just Turned off its Final Oracle Database.”
Friendlier vibe
Ellison looked back on the history between the two companies at the analyst meeting in September.
“I kind of got cute, commenting about Amazon uses Oracle, doesn’t use AWS, blah, blah,” he said. “And that hurt some people’s feelings. I probably shouldn’t have said it.”
He said a friend at a major New York bank had asked him to make sure the Oracle database works on AWS.
“I said, ‘Great. It makes sense to me,'” Ellison said.
The multi-cloud strategy should deliver gains in database market share, said analyst Siti Panigrahi of Mizuho, which has the equivalent of a buy rating on Oracle shares. Cloud deals related to AI will also help Oracle deliver on its promise for faster revenue growth, he said.
“Oracle right now has an end-to-end stack for enterprises to build their AI strategy,” said Panigrahi, who worked on applications at Oracle in the 2000s.
So far, Oracle has been mainly cutting high-value AI deals with the likes of OpenAI and Musk’s X.ai. Of Oracle’s $97 billion in remaining performance obligations, or revenue that hasn’t yet been recognized, 40% or 50% of it is tied to renting out GPUs, Panigrahi said.
Oracle didn’t respond to a request for comment.
Panigrahi predicts that a wider swath of enterprises will begin adopting AI, which will be a boon to Oracle given its hundreds of thousands of big customers.
There’s also promise in Oracle Health, the segment that came out of the company’s $28.2 billion acquisition of electronic health record software vendor Cerner in 2022.
Yoshiki Hayashi, Marc Benioff and Larry Ellison attend the Transformative Medicine of USC: Rebels with a Cause Gala in Santa Monica, California, on Oct. 24, 2019.
Joshua Blanchard | Getty Images
Unlike rival Epic, Oracle Health lost U.S. market share in 2023, according to estimates from KLAS Research. But Ellison’s connection to Musk, who is set to co-lead Trump’s Department of Government Efficiency, might benefit Oracle Health “if there is a bigger push towards modernizing existing healthcare systems,” analysts at Evercore said in a note last week. They recommend buying the stock.
For now, Oracle is busy using AI to rewrite Cerner’s entire code base, Ellison said at the analyst event.
“This is another pillar for growth,” he said. “I think you haven’t quite seen it yet.”
Hours earlier, Ellison had put in a call to Marc Benioff, co-founder and CEO of Salesforce. Benioff knows Ellison as well as anyone, having worked for him for 13 years before starting the cloud software company that’s now a big competitor.
“It was awesome,” Benioff said in a wide-ranging interview the next day, regarding his chat with Ellison.
Benioff spoke about his former boss’s latest run of fortune.
“Larry really deeply wants this,” Benioff said. “This is very important to him, that he is building a great company, what he believes is one of the most important companies in the world, and also, wealth is very important to him.”