
WASHINGTON, DC – SEPTEMBER 13: OpenAI CEO Sam Altman speaks with reporters on his arrival to the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, DC, on September 13, 2023. (Photo by Elizabeth Frantz for The Washington Post via Getty Images)

The Washington Post | The Washington Post | Getty Images

More than a year after ChatGPT’s introduction, the biggest AI story of 2023 may have turned out to be less the technology itself than the drama in the OpenAI boardroom over its rapid advancement. With the ousting, and subsequent reinstatement, of Sam Altman as CEO, the underlying tension for generative artificial intelligence going into 2024 is clear: AI is at the center of a huge divide between those who are fully embracing its rapid pace of innovation and those who want it to slow down because of the many risks involved.

The debate — known within tech circles as e/acc vs. decels — has been making the rounds in Silicon Valley since 2021. But as AI grows in power and influence, it’s increasingly important to understand both sides of the divide.

Here’s a primer on the key terms and some of the prominent players shaping AI’s future.

e/acc and techno-optimism

The term “e/acc” stands for effective accelerationism.

In short, those who are pro-e/acc want technology and innovation to be moving as fast as possible.

“Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness,” the backers of the concept explained in the first-ever post about e/acc.

In terms of AI, it is “artificial general intelligence,” or AGI, that underlies the debate. AGI is a super-intelligent AI so advanced that it can do things as well as or better than humans. AGIs can also improve themselves, creating an endless feedback loop with limitless possibilities.

Some think AGIs will have the capability to bring about the end of the world, becoming so intelligent that they figure out how to eradicate humanity. But e/acc enthusiasts choose to focus on the benefits that an AGI can offer. “There is nothing stopping us from creating abundance for every human alive other than the will to do it,” the founding e/acc substack explained.

The founders of the e/acc movement have been shrouded in mystery. But @basedbeffjezos, arguably the biggest proponent of e/acc, recently revealed himself to be Guillaume Verdon after his identity was exposed by the media.

Verdon, who formerly worked for Alphabet, X, and Google, is now working on what he calls the “AI Manhattan project” and said on X that “this is not the end, but a new beginning for e/acc. One where I can step up and make our voice heard in the traditional world beyond X, and use my credentials to provide backing for our community’s interests.”

Verdon is also the founder of Extropic, a tech startup which he described as “building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics.”

An AI manifesto from a top VC

One of the most prominent e/acc supporters is venture capitalist Marc Andreessen of Andreessen Horowitz, who previously called Verdon the “patron saint of techno-optimism.”

Techno-optimism is exactly what it sounds like: believers think more technology will ultimately make the world a better place. Andreessen wrote the Techno-Optimist Manifesto, a 5,000-plus word statement that explains how technology will empower humanity and solve all of its material problems. Andreessen even goes as far as to say that “any deceleration of AI will cost lives,” and it would be a “form of murder” not to develop AI enough to prevent deaths.

Another techno-optimist piece he wrote, called Why AI Will Save the World, was reposted by Yann LeCun, chief AI scientist at Meta, who is known as one of the “godfathers of AI” after winning the prestigious Turing Award for his breakthroughs in AI.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

LeCun labels himself on X as a “humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism.”

LeCun, who recently said that he doesn’t expect AI “super-intelligence” to arrive for quite some time, has served as a vocal counterpoint in public to those who he says “doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good.”

Meta’s embrace of open-source AI underlies LeCun’s belief that the technology will offer more potential than harm, while others point to the dangers of a business model like Meta’s, which pushes to put widely available generative AI models in the hands of many developers.

AI alignment and deceleration

In March, an open letter by Encode Justice and the Future of Life Institute called for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter was endorsed by prominent figures in tech, such as Elon Musk and Apple co-founder Steve Wozniak.

OpenAI CEO Sam Altman addressed the letter back in April at an MIT event, saying, “I think moving with caution and an increasing rigor for safety issues is really important. The letter I don’t think was the optimal way to address it.”

Altman was caught up in the battle anew when the OpenAI boardroom drama played out and the original directors of the nonprofit arm of OpenAI grew concerned about the rapid rate of progress in light of its stated mission “to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

Some of the ideas from the open letter are key to decels, supporters of AI deceleration. Decels want progress to slow down because the future of AI is risky and unpredictable, and one of their biggest concerns is AI alignment.

The AI alignment problem tackles the idea that AI will eventually become so intelligent that humans won’t be able to control it.

“Our dominance as a species, driven by our relatively superior intelligence, has led to harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future — chimps are in zoos. Advanced AI systems could similarly impact humanity,” said Malo Bourgon, CEO of the Machine Intelligence Research Institute.

AI alignment research, such as MIRI’s, aims to train AI systems to “align” them with the goals, morals, and ethics of humans, which would prevent any existential risks to humanity. “The core risk is in creating entities much smarter than us with misaligned objectives whose actions are unpredictable and uncontrollable,” Bourgon said.

Government and AI’s end-of-the-world issue

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations. She recently told CNBC that the “mass scale death” AI could cause if used to oversee nuclear weapons makes it an issue requiring immediate attention.

But “staring at the problem” won’t do any good, she stressed. “The whole point is addressing the risks and finding solution sets that are most effective,” she said. “It’s dual-use tech at its purest,” she added. “There is no case where AI is more of a weapon than a solution.” For example, large language models will become virtual lab assistants and accelerate medicine, but they will also help nefarious actors identify the best and most transmissible pathogens to use for attack. This is among the reasons AI can’t be stopped, she said. “Slowing down is not part of the solution set,” Parthemore said.

Earlier this year, her former employer, the Department of Defense, said there will always be a human in the loop in its use of AI systems. That’s a protocol she says should be adopted everywhere. “The AI itself cannot be the authority,” she said. “It can’t just be, ‘the AI says X.’ … We need to trust the tools, or we should not be using them, but we need to contextualize. … There is enough general lack of understanding about this toolset that there is a higher risk of overconfidence and overreliance.”

Government officials and policymakers have started taking note of these risks. In July, the Biden-Harris administration announced that it secured voluntary commitments from AI giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to “move towards safe, secure, and transparent development of AI technology.”

Just a few weeks ago, President Biden issued an executive order establishing new standards for AI safety and security, though stakeholder groups across society are concerned about its limitations. Similarly, the U.K. government introduced the AI Safety Institute in early November, the first state-backed organization focused on navigating AI safety.

Britain’s Prime Minister Rishi Sunak (L) attends an in-conversation event with X (formerly Twitter) CEO Elon Musk (R) in London on November 2, 2023, following the UK Artificial Intelligence (AI) Safety Summit. (Photo by Kirsty Wigglesworth / POOL / AFP) (Photo by KIRSTY WIGGLESWORTH/POOL/AFP via Getty Images)

Kirsty Wigglesworth | Afp | Getty Images

Amid the global race for AI supremacy, and links to geopolitical rivalry, China is implementing its own set of AI guardrails.

Responsible AI promises and skepticism

OpenAI is currently working on Superalignment, which aims to “solve the core technical challenges of superintelligent alignment in four years.”

At its recent Amazon Web Services re:Invent 2023 conference, Amazon announced new capabilities for AI innovation alongside the implementation of responsible AI safeguards across the organization.

“I often say it’s a business imperative, that responsible AI shouldn’t be seen as a separate workstream but ultimately integrated into the way in which we work,” said Diya Wynn, the responsible AI lead for AWS.

According to a study commissioned by AWS and conducted by Morning Consult, responsible AI is a growing business priority for 59% of business leaders, with about half (47%) planning on investing more in responsible AI in 2024 than they did in 2023.

Although factoring in responsible AI may slow down AI’s pace of innovation, teams like Wynn’s see themselves as paving the way towards a safer future. “Companies are seeing value and beginning to prioritize responsible AI,” Wynn said, and as a result, “systems are going to be safer, secure, [and more] inclusive.”

Bourgon isn’t convinced and says actions like those recently announced by governments are “far from what will ultimately be required.”

He predicts that AI systems could advance to catastrophic levels as early as 2030, and that governments need to be prepared to indefinitely halt AI systems until leading AI developers can “robustly demonstrate the safety of their systems.”

Netflix earnings, Anthropic’s ‘woke’ problem, Travis Kelce’s Six Flags stake and more in Morning Squawk

Dario Amodei, Anthropic CEO, speaking on CNBC’s Squawk Box outside the World Economic Forum in Davos, Switzerland on Jan. 21st, 2025.

Gerry Miller | CNBC

This is CNBC’s Morning Squawk newsletter. Subscribe here to receive future editions in your inbox.

Here are five key things investors need to know to start the trading day:

1. To be, or not to be

Buzzy artificial intelligence startup Anthropic has found itself at odds with the White House over regulatory policy for the AI industry. CEO Dario Amodei jumped into the discourse yesterday to push back on claims that the company is “woke.”

Here’s what to know:

  • Anthropic has largely struck a different tone on AI regulation than its competitor OpenAI. The company opposed a proposed amendment to President Donald Trump’s “One Big Beautiful Bill Act” that would have suspended state-level AI laws.
  • As a result, David Sacks — the venture capitalist serving as Trump’s AI and crypto czar — has chastised Anthropic. He said the company is running its regulatory strategy around “fear mongering” and has positioned “itself consistently as a foe of the Trump administration.”
  • LinkedIn co-founder Reid Hoffman came to Anthropic’s defense on Monday, calling the company “one of the good guys.” Hoffman’s vote of confidence is particularly noteworthy given his investments in rival OpenAI.
  • Sacks shot back at Hoffman, writing on social media that Anthropic is looking to “backdoor Woke AI and other AI regulations.”
  • Anthropic’s Amodei said yesterday that the company is aligned with the White House on “key areas of AI policy” and shares goals with the administration and lawmakers on both sides of the aisle.

2. Tax troubles

In an aerial view, the Netflix logo is displayed above Netflix corporate offices on October 7, 2025 in Los Angeles, California.

Mario Tama | Getty Images

Netflix missed analysts’ earnings per share estimates for the third quarter, pushing shares down more than 7% in overnight trading. The streamer placed blame for its weaker-than-expected report on an expense stemming from a dispute with Brazilian tax authorities.

The California-based company’s report comes after it announced on Tuesday that it will bring the hit animated film “KPop Demon Hunters” to the toy market. Netflix said it will partner with toymakers Hasbro and Mattel on various items tied to the movie.

Stock futures are slightly lower this morning after Netflix’s slide. Follow live market updates here.

3. A numbers game

An American flag flies at Warner Bros. Studio in Burbank, California, on Sept. 12, 2025.

Mario Tama | Getty Images

Warner Bros. Discovery said yesterday that it’s open to a sale as the media giant gears up for a corporate split. Investors appeared to like the news, with shares jumping 11% in the session.

The HBO and CNN parent said it will review all of its options after getting “unsolicited interest” from multiple parties. While the company previously announced plans to break its business in two, it has also seen takeover interest from fellow industry titan Paramount Skydance.

Speaking of HBO, Warner Bros. Discovery announced yesterday that it is hiking prices for the network’s streaming platform.

4. Confessions of a shopaholic

People look for discounts in a local store, in New York, U.S., December 25, 2023. 

Eduardo Munoz | Reuters

Shoppers are feeling “discount burnout” heading into Black Friday and Cyber Monday, according to consulting firm AlixPartners.

On average, the more than 9,000 U.S. consumers surveyed by the firm said price was less important to them than a year ago when deciding to buy new clothes. Additionally, fewer consumers listed sales and finding the top deal as “very important” compared to last year.

Overall, AlixPartners’ data shows fashion prices have risen $17 from last year on average. Some categories, including jackets and outerwear, saw larger price hikes than others, such as swimwear.

5. Activist investor era

Taylor Swift (L) and Travis Kelce are seen in the Meatpacking District on Dec. 28, 2024 in New York City.

TheStewartofNY | GC Images | Getty Images

Activist investor firm Jana Partners linked up with an unexpected teammate for a stake in Six Flags: NFL star Travis Kelce. (You might also know Kelce as Taylor Swift’s fiancé.)

Jana and Kelce are part of an investment group that now holds an economic interest of around 9% in the amusement park operator. The group said it wants to work with the company’s board to improve shareholder value and guest experience.

Kelce said in a statement that he is a “lifelong” Six Flags fan and wants to ensure the company is “special for the next generation.” Shares of Six Flags are slightly lower before the bell this morning after rallying more than 17% yesterday.

CNBC’s MacKenzie Sigalos, Ashley Capoot, Sarah Whitten, Luke Fountain, Alex Sherman, Sara Salinas, Gabrielle Fonrouge, Yun Li, Sean Conlon and Sarah Min contributed to this report. Josephine Rozzelle edited this edition.

AI is already taking white-collar jobs. Economists warn there’s ‘much more in the tank’

Marc Benioff, chief executive officer of Salesforce Inc., speaks during the 2025 Dreamforce conference in San Francisco, California, US, on Tuesday, Oct. 14, 2025.

Michael Short | Bloomberg | Getty Images

JPMorgan Chase and Goldman Sachs are harnessing it to employ fewer people. Ford CEO Jim Farley warned that it will “replace literally half of all white-collar workers.” Salesforce’s Marc Benioff claimed it’s already doing up to 50% of the company’s workload. Walmart CEO Doug McMillon told The Wall Street Journal that it “is going to change literally every job.”

The “it” that’s on corporate America’s lips is artificial intelligence.

Less than three years into the generative AI boom, executives across every major industry are loudly telling employees and shareholders that, due to the technological revolution underway, the size and shape of their workforce is about to dramatically change, if it hasn’t already.

What started with the launch of OpenAI’s ChatGPT and a novel new way for consumers to use chatbots has rapidly made its way into the enterprise, with companies employing customized AI agents to automate functions in customer support, marketing, coding, content creation and elsewhere.

Recent estimates from Goldman Sachs suggest that 6% to 7% of U.S. workers could lose their jobs because of AI adoption. The Stanford Digital Economy Lab, using ADP employment data, found that entry-level hiring in “AI exposed jobs” has dropped 13% since large language models started proliferating. The report said software development, customer service and clerical work are the types of jobs most vulnerable to AI today.

“We are at the beginning of a multi-decade progress development that will have a major impact on the labor market,” said Gad Levanon, chief economist at the Burning Glass Institute, a research firm that focuses on changes in the economy and workforce.

Automation, of course, is nothing new. Every era has its printing press, ATM, self-checkout kiosk or online booking agency that’s replaced human labor with some form of technology. In the process, new jobs emerge and economies adapt and evolve.

A report from the World Economic Forum earlier this year estimated that the onslaught of AI, robotics and automation could displace 92 million jobs by 2030, while adding 170 million new roles. AI development, research, safety and implementation are all areas of growth, along with robotics.

Erik Brynjolfsson, director of the Stanford research group, said that, in addition to new types of roles emerging, workers in physical jobs such as health aides and construction workers are so far shielded from AI disruption.

“There’s going to be more turbulence in both directions in the coming months and years,” Brynjolfsson said in an interview. “We need to prepare our workforce.”

The high-level data isn’t yet showing massive changes.

The U.S. government is three weeks into a shutdown, so the Bureau of Labor Statistics has gone dark. But alternative reports from organizations like the Chicago Fed have shown an economy that’s plodding along. Employment growth is meek, but the labor market is holding steady.

The unemployment rate held flat at 4.3% in September, according to the Chicago Fed, as did the rate for layoffs and other separations at 2.1%.

A recent study published by the Budget Lab at Yale found no “discernible disruption” caused by ChatGPT. Martha Gimbel, co-founder of the lab, called the upheaval from AI “minimal” and “incredibly concentrated,” although that could shift as technological changes work through the broader economy.

“The rest of the economy often moves more slowly than Silicon Valley,” she said.

The New York Fed found in a survey last month that only 1% of services firms reported laying off workers because of AI in the last six months. The Society for Human Resource Management said its data shows that 6% of U.S. jobs have been automated by 50% or more, a number that rises to 32% for computer and math-related professions.

‘Scrappier teams’

It doesn’t take much prying to get corporate executives to talk about what’s coming.

Amazon CEO Andy Jassy said in June that his company’s corporate workforce will shrink from AI over the next few years, and encouraged employees to learn how to use AI tools to eventually “get more done with scrappier teams.”

The New York Times published an investigative piece on Tuesday, showing that Amazon’s automation team expects that it can avoid hiring more than 160,000 people in the U.S. by 2027, equaling savings of about 30 cents on every item that Amazon packs and delivers. The report was based on interviews and internal strategy documents, the Times said.

Palantir CEO Alex Karp told CNBC in August that his data analytics company, which has seen its market cap soar more than elevenfold in the past two years, aims to grow revenue by 10 times and reduce its head count by about 12%. He didn’t provide a timeframe for reaching that goal.

The message is making its way across the tech industry.

Benioff, Salesforce’s CEO, said last month that his software company has cut the number of customer support roles from 9,000 to 5,000 “because I need less heads.” Swedish fintech firm Klarna said it has downsized its workforce by 40% as it adopts AI. Shopify CEO Tobi Lutke told employees in April that they’ll be expected to prove why they “cannot get what they want done using AI” before asking for more head count and resources.

Mustafa Suleyman, CEO of Microsoft AI, speaks during an event commemorating the 50th anniversary of the company at Microsoft headquarters in Redmond, Washington, on April 4, 2025.

David Ryder | Bloomberg | Getty Images

Coding assistants have been some of the early winners of the generative AI rush, becoming the first real application type to attract a hefty number of paying users. The Information reported last week that Anysphere, the parent of Cursor, is in talks to raise funds at a $27 billion valuation, as it takes on Microsoft’s GitHub and other startups, including Replit, in an increasingly crowded market.

Software development is just the beginning.

In banking, JPMorgan’s managers have been told to avoid hiring people as the firm deploys AI across its businesses, CFO Jeremy Barnum told analysts last week. Goldman Sachs CEO David Solomon said that as his bank incorporates AI, it will be “taking a front-to-back view of how we organize our people, make decisions, and think about productivity and efficiency.”

Then there’s the auto sector.

When Ford CEO Farley told Walter Isaacson in an interview in July that “AI will leave a lot of white-collar people behind,” he was reflecting a sentiment that’s growing across his industry. According to a survey of 500 U.S. car dealers conducted by marketing solutions firm Phyron, half of respondents said they expect AI to sell vehicles autonomously by 2027.

“That means AI creating the marketing assets, handling listings, answering buyer questions, negotiating deals, arranging finance, and completing the sale — all without human input,” Phyron said in the report on its survey results last month.

The topic will likely get a lot of attention in the next couple weeks as the world’s biggest tech companies issue quarterly results and update investors on their AI deployments. Tesla kicks off tech earnings season on Wednesday, followed next week by Alphabet, Meta, Microsoft, Apple and Amazon.

Baidu’s Apollo Go plans to launch taxis with no steering wheels in Switzerland as the race for robotaxis in Europe heats up

Chinese tech company Baidu announced Wednesday its Apollo Go robotaxi arm has entered a strategic partnership with PostBus in Switzerland.

Baidu

BEIJING — Chinese tech giant Baidu announced Wednesday that its robotaxi unit will start test drives in Switzerland in December, as firms race to get their vehicles on European roads.

The company’s Apollo Go unit will work with Swiss public transit operator PostBus through a strategic partnership, Baidu said.

By the first quarter of 2027, the companies aim to begin operating a public-facing fully driverless taxi service called “AmiGo” that uses Apollo Go’s RT6 electric vehicles, the press release said. Baidu added that once the robotaxis are up and running, the operators plan to remove the cars’ steering wheels.

Plans to start tests in December are the most concrete steps Baidu has announced so far in getting its robotaxis on public roads in Europe.

The Chinese tech company said in August that it would partner with U.S. ride-hailing company Lyft to deploy robotaxis in the U.K. and Germany starting in 2026. A month earlier, Baidu announced a partnership with Uber to deploy Apollo Go robotaxis on the ride-hailing platform outside the U.S. and mainland China later in the year.

Other robotaxi companies are also racing to expand into Europe and the Middle East, after building up operations in the U.S. and China.

On Friday, Chinese robotaxi operator Pony.ai announced it will work with Stellantis to begin tests in Luxembourg in the coming months, before expanding to other European cities next year.

U.S. rival Waymo, owned by Google parent Alphabet, last week also announced plans to start tests in London before launching the self-driving taxi service there next year. Uber in June said it would start trials in spring 2026 of fully autonomous rides in the U.K. with SoftBank-backed self-driving tech startup Wayve.

— CNBC’s Arjun Kharpal contributed to this report.
