WASHINGTON, DC – SEPTEMBER 13: OpenAI CEO Sam Altman speaks with reporters on his arrival to the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, DC, on September 13, 2023. (Photo by Elizabeth Frantz for The Washington Post via Getty Images)

The Washington Post | Getty Images

More than a year after ChatGPT’s introduction, the biggest AI story of 2023 may have turned out to be less the technology itself than the drama in the OpenAI boardroom over its rapid advancement. With the ousting, and subsequent reinstatement, of Sam Altman as CEO, the underlying tension for generative artificial intelligence going into 2024 is clear: AI is at the center of a huge divide between those who are fully embracing its rapid pace of innovation and those who want it to slow down because of the many risks involved.

The debate — known within tech circles as e/acc vs. decels — has been making the rounds in Silicon Valley since 2021. But as AI grows in power and influence, it’s increasingly important to understand both sides of the divide.

Here’s a primer on the key terms and some of the prominent players shaping AI’s future.

e/acc and techno-optimism

The term “e/acc” stands for effective accelerationism.

In short, those who are pro-e/acc want technology and innovation to move as fast as possible.

“Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness,” the backers of the concept explained in the first-ever post about e/acc.

In terms of AI, the debate here centers on “artificial general intelligence,” or AGI: a super-intelligent AI so advanced that it can do things as well as or better than humans. AGIs could also improve themselves, creating an endless feedback loop with limitless possibilities.

Some think that AGIs will have the capability to bring about the end of the world, becoming so intelligent that they figure out how to eradicate humanity. But e/acc enthusiasts choose to focus on the benefits that an AGI can offer. “There is nothing stopping us from creating abundance for every human alive other than the will to do it,” the founding e/acc substack explained.

The founders of the e/acc movement have been shrouded in mystery. But @basedbeffjezos, arguably the biggest proponent of e/acc, recently revealed himself to be Guillaume Verdon after his identity was exposed by the media.

Verdon, who formerly worked for Alphabet, X, and Google, is now working on what he calls the “AI Manhattan project” and said on X that “this is not the end, but a new beginning for e/acc. One where I can step up and make our voice heard in the traditional world beyond X, and use my credentials to provide backing for our community’s interests.”

Verdon is also the founder of Extropic, a tech startup which he described as “building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics.”

An AI manifesto from a top VC

One of the most prominent e/acc supporters is venture capitalist Marc Andreessen of Andreessen Horowitz, who previously called Verdon the “patron saint of techno-optimism.”

Techno-optimism is exactly what it sounds like: believers think more technology will ultimately make the world a better place. Andreessen wrote the Techno-Optimist Manifesto, a 5,000-plus word statement that explains how technology will empower humanity and solve all of its material problems. Andreessen even goes as far as to say that “any deceleration of AI will cost lives,” and it would be a “form of murder” not to develop AI enough to prevent deaths.

Another techno-optimist piece he wrote, called “Why AI Will Save the World,” was reposted by Yann LeCun, chief AI scientist at Meta, who is known as one of the “godfathers of AI” after winning the prestigious Turing Award for his breakthroughs in AI.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

LeCun labels himself on X as a “humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism.”

LeCun, who recently said that he doesn’t expect AI “super-intelligence” to arrive for quite some time, has served as a vocal counterpoint in public to those who he says “doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good.”

Meta’s embrace of open-source AI underlies LeCun’s belief that the technology offers more potential than harm, while others have pointed to the dangers of a business model like Meta’s, which pushes for widely available gen AI models to be placed in the hands of many developers.

AI alignment and deceleration

In March, an open letter by Encode Justice and the Future of Life Institute called for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter was endorsed by prominent figures in tech, such as Elon Musk and Apple co-founder Steve Wozniak.

OpenAI CEO Sam Altman addressed the letter back in April at an MIT event, saying, “I think moving with caution and an increasing rigor for safety issues is really important. The letter I don’t think was the optimal way to address it.”

Altman was caught up in the battle anew when the OpenAI boardroom drama played out and original directors of the nonprofit arm of OpenAI grew concerned about the rapid rate of progress and its stated mission “to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

Some of the ideas from the open letter are key to decels, supporters of AI deceleration. Decels want progress to slow down because the future of AI is risky and unpredictable, and one of their biggest concerns is AI alignment.

The AI alignment problem tackles the idea that AI will eventually become so intelligent that humans won’t be able to control it.

“Our dominance as a species, driven by our relatively superior intelligence, has led to harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future — chimps are in zoos. Advanced AI systems could similarly impact humanity,” said Malo Bourgon, CEO of the Machine Intelligence Research Institute.

AI alignment research, such as MIRI’s, aims to train AI systems to “align” them with the goals, morals, and ethics of humans, which would prevent any existential risks to humanity. “The core risk is in creating entities much smarter than us with misaligned objectives whose actions are unpredictable and uncontrollable,” Bourgon said.

Government and AI’s end-of-the-world issue

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations, and she recently told CNBC that when we consider the “mass scale death” AI could cause if used to oversee nuclear weapons, it is an issue that requires immediate attention.

But “staring at the problem” won’t do any good, she stressed. “The whole point is addressing the risks and finding solution sets that are most effective,” she said. “It’s dual-use tech at its purest,” she added. “There is no case where AI is more of a weapon than a solution.” For example, large language models will become virtual lab assistants and accelerate medicine, but will also help nefarious actors identify the best and most transmissible pathogens to use for attack. This is among the reasons AI can’t be stopped, she said. “Slowing down is not part of the solution set,” Parthemore said.

Earlier this year, her former employer, the Department of Defense, said there will always be a human in the loop in its use of AI systems. That’s a protocol she says should be adopted everywhere. “The AI itself cannot be the authority,” she said. “It can’t just be, ‘the AI says X.’ … We need to trust the tools, or we should not be using them, but we need to contextualize. … There is enough general lack of understanding about this toolset that there is a higher risk of overconfidence and overreliance.”

Government officials and policymakers have started taking note of these risks. In July, the Biden-Harris administration announced that it secured voluntary commitments from AI giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to “move towards safe, secure, and transparent development of AI technology.”

Just a few weeks ago, President Biden issued an executive order establishing new standards for AI safety and security, though stakeholder groups across society are concerned about its limitations. Similarly, the U.K. government introduced the AI Safety Institute in early November, the first state-backed organization focused on navigating AI.

Britain’s Prime Minister Rishi Sunak (L) attends an in-conversation event with X (formerly Twitter) CEO Elon Musk (R) in London on November 2, 2023, following the UK Artificial Intelligence (AI) Safety Summit. (Photo by Kirsty Wigglesworth / POOL / AFP via Getty Images)

Kirsty Wigglesworth | Afp | Getty Images

Amid the global race for AI supremacy, and links to geopolitical rivalry, China is implementing its own set of AI guardrails.

Responsible AI promises and skepticism

OpenAI is currently working on Superalignment, which aims to “solve the core technical challenges of superintelligent alignment in four years.”

At its recent AWS re:Invent 2023 conference, Amazon announced new capabilities for AI innovation alongside the implementation of responsible AI safeguards across the organization.

“I often say it’s a business imperative, that responsible AI shouldn’t be seen as a separate workstream but ultimately integrated into the way in which we work,” says Diya Wynn, the responsible AI lead for AWS.

According to a study commissioned by AWS and conducted by Morning Consult, responsible AI is a growing business priority for 59% of business leaders, with about half (47%) planning on investing more in responsible AI in 2024 than they did in 2023.

Although factoring in responsible AI may slow down AI’s pace of innovation, teams like Wynn’s see themselves as paving the way towards a safer future. “Companies are seeing value and beginning to prioritize responsible AI,” Wynn said, and as a result, “systems are going to be safer, secure, [and more] inclusive.”

Bourgon isn’t convinced and says actions like those recently announced by governments are “far from what will ultimately be required.”

He predicts that AI systems could advance to catastrophic levels as early as 2030, and says governments need to be prepared to indefinitely halt AI systems until leading AI developers can “robustly demonstrate the safety of their systems.”

Elon Musk’s X temporarily down for tens of thousands of users

Elon Musk looks on as U.S. President Donald Trump meets South African President Cyril Ramaphosa in the Oval Office of the White House in Washington, D.C., U.S., May 21, 2025.

Kevin Lamarque | Reuters

The Elon Musk-owned social media platform X experienced a brief outage on Saturday morning, with tens of thousands of users reportedly unable to use the site.

About 25,000 users reported issues with the platform, according to the analytics platform Downdetector, which gathers data from users to monitor issues with various platforms.

Roughly 21,000 users reported issues just after 8:30 a.m. ET, per the analytics platform.

The issues appeared to be largely resolved by around 9:55 a.m., when about 2,000 users were reporting issues with the platform.

X did not immediately respond to CNBC’s request for comment. Additional information on the outage was not available.

Musk, the billionaire owner of SpaceX and Tesla, acquired X, formerly known as Twitter, in 2022.

The site has had a number of widespread outages since the acquisition.

The site experienced another outage in March, which Musk attributed at the time to a “massive cyberattack.”

“We get attacked every day, but this was done with a lot of resources,” Musk wrote in a post at the time.

Companies turn to AI to navigate Trump tariff turbulence

Artificial intelligence robot looking at futuristic digital data display.

Yuichiro Chino | Moment | Getty Images

Businesses are turning to artificial intelligence tools to help them navigate real-world turbulence in global trade.

Several tech firms told CNBC they’re deploying the nascent technology to visualize businesses’ global supply chains — from the materials that are used to form products, to where those goods are being shipped from — and understand how they’re affected by U.S. President Donald Trump’s reciprocal tariffs.

Last week, Salesforce said it had developed a new import specialist AI agent that can “instantly process changes for all 20,000 product categories in the U.S. customs system and then take action on them” as needed, to help navigate changes to tariff systems.

Engineers at the U.S. software giant used the Harmonized Tariff Schedule, a 4,400-page document of tariffs on goods imported to the U.S., to inform answers generated by the agent.

“The sheer pace and complexity of global tariff changes make it nearly impossible for most businesses to keep up manually,” Eric Loeb, executive vice president of government affairs at Salesforce, told CNBC. “In the past, companies might have relied on small teams of in-house experts to keep pace.”

Firms say that AI systems are enabling them to make decisions on adjustments to their global supply chains much faster.

Andrew Bell, chief product officer of supply chain management software firm Kinaxis, said that manufacturers and distributors looking to inform their response to tariffs are using his firm’s machine learning technology to assess their products and the materials that go into them, as well as external signals like news articles and macroeconomic data.

“With that information, we can start doing some of those simulations of, here is a particular part that is in your build material that has a significant tariff. If you switched to using this other part instead, what would the impact be overall?” Bell told CNBC.

‘AI’s moment to shine’

Trump’s tariffs list — which covers dozens of countries — has forced companies to rethink their supply chains and pricing, with the likes of Walmart and Nike already raising prices on some products. The U.S. imported about $3.3 trillion of goods in 2024, according to census data.

Uncertainty from the U.S. tariff measures “actually probably presents AI’s moment to shine,” Zack Kass, a futurist and former head of OpenAI’s go-to-market strategy, told CNBC’s Silvia Amaro at the Ambrosetti Forum in Italy last month.

“If you wonder how hard things could get without AI vis-a-vis automation, and what would happen in a world where you can’t just employ a bunch of people overnight, AI presents this alternative proposal,” he added.

Nagendra Bandaru, managing partner and global head of technology services at Indian IT giant Wipro, said clients are using the company’s agentic AI solutions “to pivot supplier strategies, adjust trade lanes, and manage duty exposure dynamically as policy landscapes evolve.”

Wipro says it uses a range of AI systems — both proprietary and supplied by third parties — from large language models to traditional machine learning and computer vision techniques to inspect physical assets in cross-border transit.

‘Not a silver bullet’

While it preferred to keep company names confidential, Wipro said that firms using its AI products to navigate Trump’s tariffs range from a Fortune 500 electronics manufacturer with factories in Asia to an automotive parts supplier exporting to Europe and North America.

“AI is a powerful enabler — but not a silver bullet,” Bandaru told CNBC. “It doesn’t replace trade policy strategy, it enhances it by transforming global trade from a reactive challenge into a proactive, data-driven advantage.”

AI was already a key investment priority for global firms prior to Trump’s sweeping tariff announcements in April. Nearly three-quarters of business leaders ranked AI and generative AI in their top three technologies for investment in 2025, according to a report by Capgemini published in January.

“There are a number of ways AI can assist companies dealing with the tariffs and resulting uncertainty. But any AI solution’s success will be predicated on the quality of the data it has access to,” Ajay Agarwal, partner at Bain Capital Ventures, told CNBC.

The venture capitalist said that one of his portfolio companies, FourKites, uses supply chain network data with AI to help firms understand the logistics impacts of adjusting suppliers due to tariffs.

“They are working with a number of Fortune 500 companies to leverage their agents for freight and ocean to provide this level of visibility and intelligence,” Agarwal said.

“Switching suppliers may reduce tariffs costs, but might increase lead times and transportation costs,” he added. “In addition, the volatility of the tariffs [has] severely impacted the rates and capacity available in both the ocean and the domestic freight networks.”


Amazon’s Zoox robotaxi unit issues second software recall in a month after San Francisco crash

A Zoox autonomous robotaxi in San Francisco, California, US, on Wednesday, Dec. 4, 2024.

David Paul Morris | Bloomberg | Getty Images

Amazon‘s Zoox robotaxi unit issued a voluntary recall of its software for the second time in a month following a recent crash in San Francisco.

On May 8, an unoccupied Zoox robotaxi was turning at low speed when it was struck by an electric scooter rider after braking to yield at an intersection. The person on the scooter declined medical attention after sustaining minor injuries as a result of the collision, Zoox said.

“The Zoox vehicle was stopped at the time of contact,” the company said in a blog post. “The e-scooterist fell to the ground directly next to the vehicle. The robotaxi then began to move and stopped after completing the turn, but did not make further contact with the e-scooterist.”

Zoox said it submitted a voluntary software recall report to the National Highway Traffic Safety Administration on Thursday.

A Zoox spokesperson said the notice should be published on the NHTSA website early next week. The recall affected 270 vehicles, the spokesperson said.

The NHTSA said in a statement it had received the recall notice and that the agency “advises road users to be cautious in the vicinity of vehicles because drivers may incorrectly predict the travel path of a cyclist or scooter rider or come to an unexpected stop.”

If an autonomous vehicle continues to move after contact with any nearby vulnerable road user, it risks causing harm or further harm. In the AV industry, General Motors-backed Cruise exited the robotaxi business after a collision in which one of its vehicles injured a pedestrian who had been struck by a human-driven car and was then rolled over by the Cruise AV.

Zoox’s May incident comes roughly two weeks after the company announced a separate voluntary software recall following a recent Las Vegas crash. In that incident, an unoccupied Zoox robotaxi collided with a passenger vehicle, resulting in minor damage to both vehicles.

The company issued a software recall for 270 of its robotaxis in order to address a defect with its automated driving system that could cause it to inaccurately predict the movement of another car, increasing the “risk of a crash.”

Amazon acquired Zoox in 2020 for more than $1 billion, announcing at the time that the deal would help bring the self-driving technology company’s “vision for autonomous ride-hailing to reality.”

While Zoox is in a testing and development stage with its AVs on public roads in the U.S., Alphabet’s Waymo is already operating commercial, driverless ride-hailing services in Phoenix, San Francisco, Los Angeles and Austin, Texas, and is ramping up in Atlanta.

Tesla is promising it will launch its long-delayed robotaxis in Austin next month, and, if all goes well, plans to expand after that to San Francisco, Los Angeles and San Antonio, Texas.

— CNBC’s Lora Kolodny contributed to this report.

