Amazon on Wednesday began laying off some employees in its cloud computing and human resources divisions.
Amazon Web Services CEO Adam Selipsky and human resources head Beth Galetti sent notes to staffers in the U.S., Canada and Costa Rica informing them of the job cuts.
“It is a tough day across our organization,” Selipsky wrote in the memo.
The layoffs are part of the previously announced job cuts that are expected to affect 9,000 employees. Last week, Amazon laid off some employees in its advertising unit, and it has let go of staffers in its video games and Twitch livestreaming units in recent weeks.
Amazon wrapped up a separate round of cuts earlier this year that affected approximately 18,000 employees. Combined with the cuts this month, it marks the largest layoffs in Amazon’s 29-year history.
Amazon CEO Andy Jassy has been aggressively slashing costs across the company as the e-retailer reckons with an economic downturn and slowing growth in its core retail business. Amazon froze hiring in its corporate workforce, axed some experimental projects and slowed warehouse expansion.
By announcing layoffs in ads and AWS, Jassy has shown that two of Amazon’s biggest and most profitable businesses aren’t immune to the cost-cutting. Both AWS and ads have experienced slowing growth in recent months as companies trim their spending amid a challenging economic environment.
Some teams within AWS were included in the earlier round of layoffs. A portion of the cuts on Wednesday is expected to land in AWS’ professional services arm, which helps customers troubleshoot issues with their cloud infrastructure, according to a current employee, who asked to remain anonymous because they weren’t authorized to speak on the matter.
Head count in AWS ballooned during the Covid pandemic, which proved to be a massive boon for Amazon and other cloud providers, as companies, government agencies and schools sped their transition to the cloud.
“Given this rapid growth, as well as the overall business and macroeconomic climate, it is critical that we focus on identifying and putting our resources behind our top priorities — those things that matter most to customers and that will move the needle for our business,” Selipsky wrote in the memo. “In many cases this means team members are shifting the projects, initiatives or teams on which they work; however, in other cases it has resulted in these role eliminations.”
Amazon is scheduled to report first-quarter earnings after the bell Thursday. Investors will look for any insight into whether Jassy’s cost-cutting efforts have improved profitability, and when Amazon executives expect AWS growth to reaccelerate.
Shares of Amazon surged more than 3% in afternoon trading Wednesday.
Here’s the full memo from Selipsky:
AWS team,
As you know, we recently made the difficult decision to eliminate some roles across Amazon globally, including within AWS. I wanted to let you know that conversations with impacted AWS employees started today, with notification messages sent to all impacted employees in the U.S., Canada, and Costa Rica. In other regions, we are following local processes, which may include time for consultation with employee representative bodies and possibly result in longer timelines to communicate with impacted employees.
It is a tough day across our organization. I fully realize the impact on every person and family who is affected. We are working hard to treat everyone impacted with respect, and to provide a number of resources and touchpoints to aid in this transition. This also includes packages that include a separation payment, transitional health insurance benefits, and external job placement support.
To those to whom we are saying goodbye today, thank you for everything you have done for this business and our customers. I am truly grateful. To all AWS builders, thank you for your compassion and empathy for your colleagues.
Both the size of our business and the size of our team have grown significantly over recent years, driven by customer demand for the cloud and for the unique value AWS provides. This growth has come quickly as we’ve moved as fast as we could to build what customers have needed. Given this rapid growth, as well as the overall business and macroeconomic climate, it is critical that we focus on identifying and putting our resources behind our top priorities—those things that matter most to customers and that will move the needle for our business. In many cases this means team members are shifting the projects, initiatives or teams on which they work; however, in other cases it has resulted in these role eliminations.
The fundamentals and the outlook for our business are strong, and we are very confident in our long-term prospects. We are the leading cloud provider by a wide range of benchmarks, from our feature set to our security capabilities to our operational performance. We are focused on continuing to innovate in the areas that matter most to our customers as we help them minimize expense, innovate rapidly, and transform their organizations.
I am optimistic about the future. We’ll tackle our opportunities and our challenges, and continue to change the world.
Thank you,
Adam
And here’s the full memo from Galetti:
PXT Team,
As Andy shared a few weeks ago, leaders across the company have worked closely with their teams to decide what investments they are going to make for the future, prioritizing what matters most to customers and the long-term health of our businesses. Given PXT’s close partnership with the business, these shifts impact our OP2 plans as well, and we have made the difficult decision to eliminate additional roles within the PXT organization.
Today we shared this update with our PXT colleagues whose roles were impacted across the U.S., Canada, and Costa Rica. In other regions, we are following local processes, which may include time for consultation with employee representative bodies and possibly result in longer timelines to communicate with impacted employees.
These decisions are not taken lightly, and I recognize the impact it will have across both those transitioning out of the company as well as our colleagues who remain.
To those leaving, I want to say thank you for your contributions. You’ve helped build Amazon into the extraordinary company it is today, and we are here to support you during this difficult time. In the U.S., we are providing packages that include a 60-day, non-working transitional period with full pay and benefits, plus an additional several weeks of severance depending on tenure, a separation payment, transitional benefits, and external job placement support.
While this moment is hard, I remain energized by the important work that lies ahead of us. Together, we are building a workplace that helps fuel how Amazonians invent and deliver for customers. From making it easier for employees to find the information and help they need, to expanding our benefits, I am proud of the progress we’ve made over the last few years. This meaningful work is a direct reflection of PXT’s perseverance, resilience, and leadership. Thank you.
Please know that the entire PXTLT, including myself, is here to answer your questions and support you.
An illustration photo shows Sora 2 logo on a smartphone.
Cfoto | Future Publishing | Getty Images
The Creative Artists Agency on Thursday slammed OpenAI’s new video creation app Sora for posing “significant risks” to their clients and intellectual property.
The talent agency, which represents artists including Doja Cat, Scarlett Johanson, and Tom Hanks, questioned whether OpenAI believed that “humans, writers, artists, actors, directors, producers, musicians, and athletes deserve to be compensated and credited for the work they create.”
“Or does Open AI believe they can just steal it, disregarding global copyright principles and blatantly dismissing creators’ rights, as well as the many people and companies who fund the production, creation, and publication of these humans’ work? In our opinion, the answer to this question is obvious,” the CAA wrote.
OpenAI did not immediately respond to CNBC’s request for comment.
The CAA said that it was “open to hearing” solutions from OpenAI and is working with IP leaders, unions, legislators and global policymakers on the matter.
“Control, permission for use, and compensation is a fundamental right of these workers,” the CAA wrote. “Anything less than the protection of creators and their rights is unacceptable.”
Sora, which launched last week and has quickly reached 1 million downloads, allows users to create AI-generated clips often featuring popular characters and brands.
Read more CNBC tech news
OpenAI launched with an “opt-out” system, which allowed the use of copyrighted material unless studios or agencies requested that their IP not be used.
CEO Sam Altman later said in a blog post that they would give rightsholders “more granular control over generation of characters.”
Talent agency WME sent a memo to agents on Wednesday that it has “notified OpenAI that all WME clients be opted out of the latest Sora AI update, regardless of whether IP rights holders have opted out IP our clients are associated with,” the LA Times reported.
United Talent Agency also criticized Sora’s use of copyrighted property as “exploitation, not innovation,” in a statement on Thursday.
“There is no substitute for human talent in our business, and we will continue to fight tirelessly for our clients to ensure that they are protected,” UTA wrote. “When it comes to OpenAI’s Sora or any other platform that seeks to profit from our clients’ intellectual property and likeness, we stand with artists.”
In a letter written to OpenAI last week, Disney said it did not authorize OpenAI and Sora to copy, distribute, publicly display or perform any image or video that features its copyrighted works and characters, according to a person familiar with the matter.
Disney also wrote that it did not have an obligation to “opt-out” of appearing in Sora or any OpenAI system to preserve its rights under copyright law, the person said.
The Motion Picture Association issued a statement on Tuesday, urging OpenAI to take “immediate and decisive action” against videos using Sora to produce content infringing on its copyrighted material.
Entertainment companies have expressed numerous copyright concerns as generative AI has surged.
Universal and Disney sued creator Midjourney in June, alleging that the company used and distributed AI-generated characters from their movies despite requests to stop. Disney also sent a cease-and-desist letter to AI startup Character.AI in September, warning the company to stop using its copyrighted characters without authorization.
People walk past a billboard advertisement for YouTube in Berlin, Germany, on Sept. 27, 2019.
Sean Gallup | Getty Images
YouTube is offering creators who were banned from the platform a second chance.
On Thursday, the Google-owned platform announced it is rolling out a feature for previously terminated creators to apply to create a new channel. Previous rules led to a lifetime ban.
“We know many terminated creators deserve a second chance,” wrote the YouTube Team in a blog post. “We’re looking forward to providing an opportunity for creators to start fresh and bring their voice back to the platform.”
Tech companies have faced months of scrutiny from House Republicans and President Donald Trump, who have accused the platforms of political bias and overreach in content moderation.
Last week, YouTube agreed to pay $24.5 million to settle a lawsuit involving the suspension of Trump’s account following the U.S. Capitol riots on Jan. 6, 2021.
YouTube said this new option is separate from its already existing appeals process. If an appeal is unsuccessful, creators now have the option to apply for a new channel.
Approved creators under the new process will start from scratch, with no prior videos, subscribers or monetization privileges carried over.
Read more CNBC tech news
Over the next several weeks, eligible creators logging into YouTube Studio will see an option to request a new channel. Creators are only eligible to apply one year after their original channel was terminated.
YouTube said it will review requests based on the severity and frequency of past violations.
The company also said it will consider off-platform behavior that could harm the community, such as activity endangering child safety.
The program excludes creators terminated for copyright infringement, violations of its Creator Responsibility policy or those who deleted their accounts.
YouTube’s ‘second chance’ process fits with a broader trend at Google and other major platforms to ease strict content moderation rules imposed in the wake of the pandemic and the 2020 election.
In September, Alphabet lawyer Daniel Donovan sent a letter to House Judiciary Chair Jim Jordan, R-Ohio, that announced the platform had made changes to its community guidelines for content containing Covid-19 or election-related misinformation.
The letter also claimed that senior Biden administration officials pressed the company to remove certain Covid-related videos, saying the pressure was “unacceptable and wrong.”
YouTube ended its stand-alone Covid misinformation rules in December 2024, according to Donovan’s letter.
Google’s former CEO Eric Schmidt spoke at the Sifted Summit on Wednesday 8, October.
Bloomberg | Bloomberg | Getty Images
Google‘s former CEO Eric Schmidt has issued a stark reminder about the dangers of AI and how susceptible it is to being hacked.
Schmidt, who served as Google’s chief executive from 2001 to 2011, warned about “the bad stuff that AI can do,” when asked whether AI is more destructive than nuclear weapons during a fireside chat at the Sifted Summit
“Is there a possibility of a proliferation problem in AI? Absolutely,” Schmidt said Wednesday. The proliferation risks of AI include the technology falling into the hands of bad actors and being repurposed and misused.
“There’s evidence that you can take models, closed or open, and you can hack them to remove their guardrails. So in the course of their training, they learn a lot of things. A bad example would be they learn how to kill someone,” Schmidt said.
“All of the major companies make it impossible for those models to answer that question. Good decision. Everyone does this. They do it well, and they do it for the right reasons. There’s evidence that they can be reverse-engineered, and there are many other examples of that nature.”
AI systems are vulnerable to attack, with some methods including prompt injections and jailbreaking. In a prompt injection attack, hackers hide malicious instructions in user inputs or external data, like web pages or documents, to trick the AI into doing things it’s not meant to do — such as sharing private data or running harmful commands
Jailbreaking, on the other hand, involves manipulating the AI’s responses so it ignores its safety rules and produces restricted or dangerous content.
In 2023, a few months after OpenAI’s ChatGPT was released, users employed a “jailbreak” trick to circumvent the safety instructions embedded in the chatbot.
This included creating a ChatGPT alter-ego called DAN, an acronym for “Do Anything Now,” which involved threatening the chatbot with death if it didn’t comply. The alter-ego could provide answers on how to commit illegal activities or list the positive qualities of Adolf Hitler.
Schmidt said that there isn’t a good “non-proliferation regime” yet to help curb the dangers of AI.
AI is ‘underhyped’
Despite the grim warning, Schmidt was optimistic about AI more broadly and said the technology doesn’t get the hype it deserves.
“I wrote two books with Henry Kissinger about this before he died, and we came to the view that the arrival of an alien intelligence that is not quite us and more or less under our control is a very big deal for humanity, because humans are used to being at the top of the chain. I think so far, that thesis is proving out that the level of ability of these systems is going to far exceed what humans can do over time,” Schmidt said.
“Now the GPT series, which culminated in a ChatGPT moment for all of us, where they had 100 million users in two months, which is extraordinary, gives you a sense of the power of this technology. So I think it’s underhyped, not overhyped, and I look forward to being proven correct in five or 10 years,” he added.
His comments come amid growing talk of an AI bubble, as investors pour money into AI-focused firms and valuations look stretched, with comparisons being made to the dot-com bubble collapse of the early 2000s.
Schmidt said he doesn’t think history will repeat itself, however.
“I don’t think that’s going to happen here, but I’m not a professional investor,” he said.
“What I do know is that the people who are investing hard-earned dollars believe the economic return over a long period of time is enormous. Why else would they take the risk?”