
WASHINGTON, DC – SEPTEMBER 13: OpenAI CEO Sam Altman speaks with reporters on his arrival to the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, DC, on September 13, 2023. (Photo by Elizabeth Frantz for The Washington Post via Getty Images)

The Washington Post | Getty Images

Now more than a year after ChatGPT’s introduction, the biggest AI story of 2023 may have turned out to be less the technology itself than the drama in the OpenAI boardroom over its rapid advancement. After the ousting, and subsequent reinstatement, of Sam Altman as CEO, the underlying tension for generative artificial intelligence going into 2024 is clear: AI is at the center of a huge divide between those who are fully embracing its rapid pace of innovation and those who want it to slow down due to the many risks involved.

The debate — known within tech circles as e/acc vs. decels — has been making the rounds in Silicon Valley since 2021. But as AI grows in power and influence, it’s increasingly important to understand both sides of the divide.

Here’s a primer on the key terms and some of the prominent players shaping AI’s future.

e/acc and techno-optimism

The term “e/acc” stands for effective accelerationism.

In short, those who are pro-e/acc want technology and innovation to move as fast as possible.

“Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness,” the backers of the concept explained in the first-ever post about e/acc.

In terms of AI, it is “artificial general intelligence,” or AGI, that underlies the debate here. AGI is a super-intelligent AI so advanced that it can do things as well as or better than humans. AGIs could also improve themselves, creating an endless feedback loop with limitless possibilities.


Some think that AGIs will have the capability to bring about the end of the world, becoming so intelligent that they figure out how to eradicate humanity. But e/acc enthusiasts choose to focus on the benefits that an AGI can offer. “There is nothing stopping us from creating abundance for every human alive other than the will to do it,” the founding e/acc substack explained.

The founders of the e/acc movement have been shrouded in mystery. But @basedbeffjezos, arguably the biggest proponent of e/acc, recently revealed himself to be Guillaume Verdon after his identity was exposed by the media.

Verdon, who formerly worked for Alphabet, X, and Google, is now working on what he calls the “AI Manhattan project” and said on X that “this is not the end, but a new beginning for e/acc. One where I can step up and make our voice heard in the traditional world beyond X, and use my credentials to provide backing for our community’s interests.”

Verdon is also the founder of Extropic, a tech startup which he described as “building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics.”

An AI manifesto from a top VC

One of the most prominent e/acc supporters is venture capitalist Marc Andreessen of Andreessen Horowitz, who previously called Verdon the “patron saint of techno-optimism.”

Techno-optimism is exactly what it sounds like: believers think more technology will ultimately make the world a better place. Andreessen wrote the Techno-Optimist Manifesto, a 5,000-plus word statement that explains how technology will empower humanity and solve all of its material problems. Andreessen even goes as far as to say that “any deceleration of AI will cost lives,” and it would be a “form of murder” not to develop AI enough to prevent deaths.

Another techno-optimist piece he wrote, called Why AI Will Save the World, was reposted by Yann LeCun, chief AI scientist at Meta, who is known as one of the “godfathers of AI” after winning the prestigious Turing Award for his breakthroughs in the field.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

LeCun labels himself on X as a “humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism.”

LeCun, who recently said that he doesn’t expect AI “super-intelligence” to arrive for quite some time, has served as a vocal counterpoint in public to those who he says “doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good.”

Meta’s embrace of open-source AI underlies LeCun’s belief that the technology offers more potential than harm, while others point to the dangers of a business model like Meta’s, which puts widely available generative AI models in the hands of many developers.

AI alignment and deceleration

In March, an open letter by Encode Justice and the Future of Life Institute called for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter was endorsed by prominent figures in tech, such as Elon Musk and Apple co-founder Steve Wozniak.

OpenAI CEO Sam Altman addressed the letter back in April at an MIT event, saying, “I think moving with caution and an increasing rigor for safety issues is really important. The letter I don’t think was the optimal way to address it.”


Altman was caught up in the battle anew when the OpenAI boardroom drama played out and original directors of the nonprofit arm of OpenAI grew concerned about the rapid rate of progress and its stated mission “to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

Some of the ideas from the open letter are key to decels, supporters of AI deceleration. Decels want progress to slow down because the future of AI is risky and unpredictable, and one of their biggest concerns is AI alignment.

The AI alignment problem tackles the idea that AI will eventually become so intelligent that humans won’t be able to control it.

“Our dominance as a species, driven by our relatively superior intelligence, has led to harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future — chimps are in zoos. Advanced AI systems could similarly impact humanity,” said Malo Bourgon, CEO of the Machine Intelligence Research Institute.

AI alignment research, such as MIRI’s, aims to train AI systems to “align” them with the goals, morals, and ethics of humans, which would prevent any existential risks to humanity. “The core risk is in creating entities much smarter than us with misaligned objectives whose actions are unpredictable and uncontrollable,” Bourgon said.

Government and AI’s end-of-the-world issue

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations, and she recently told CNBC that when we consider the “mass scale death” AI could cause if used to oversee nuclear weapons, it is an issue that requires immediate attention.

But “staring at the problem” won’t do any good, she stressed. “The whole point is addressing the risks and finding solution sets that are most effective,” she said. “It’s dual-use tech at its purest,” she added. “There is no case where AI is more of a weapon than a solution.” For example, large language models will become virtual lab assistants and accelerate medicine, but also help nefarious actors identify the best and most transmissible pathogens to use for attack. This is among the reasons AI can’t be stopped, she said. “Slowing down is not part of the solution set,” Parthemore said.


Earlier this year, her former employer, the Department of Defense, said that in its use of AI systems there will always be a human in the loop. That’s a protocol she says should be adopted everywhere. “The AI itself cannot be the authority,” she said. “It can’t just be, ‘the AI says X.’ … We need to trust the tools, or we should not be using them, but we need to contextualize. … There is enough general lack of understanding about this toolset that there is a higher risk of overconfidence and overreliance.”

Government officials and policymakers have started taking note of these risks. In July, the Biden-Harris administration announced that it secured voluntary commitments from AI giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to “move towards safe, secure, and transparent development of AI technology.”

Just a few weeks ago, President Biden issued an executive order establishing new standards for AI safety and security, though stakeholder groups across society are concerned about its limitations. Similarly, the U.K. government introduced the AI Safety Institute in early November, the first state-backed organization focused on navigating AI.

Britain’s Prime Minister Rishi Sunak (L) attends an in-conversation event with X (formerly Twitter) CEO Elon Musk (R) in London on November 2, 2023, following the UK Artificial Intelligence (AI) Safety Summit. (Photo by Kirsty Wigglesworth / POOL / AFP via Getty Images)

Kirsty Wigglesworth | Afp | Getty Images

Amid the global race for AI supremacy, and links to geopolitical rivalry, China is implementing its own set of AI guardrails.

Responsible AI promises and skepticism

OpenAI is currently working on Superalignment, which aims to “solve the core technical challenges of superintelligent alignment in four years.”

At its recent AWS re:Invent 2023 conference, Amazon announced new capabilities for AI innovation alongside the implementation of responsible AI safeguards across the organization.

“I often say it’s a business imperative, that responsible AI shouldn’t be seen as a separate workstream but ultimately integrated into the way in which we work,” said Diya Wynn, the responsible AI lead for AWS.

According to a study commissioned by AWS and conducted by Morning Consult, responsible AI is a growing business priority for 59% of business leaders, with about half (47%) planning on investing more in responsible AI in 2024 than they did in 2023.

Although factoring in responsible AI may slow down AI’s pace of innovation, teams like Wynn’s see themselves as paving the way towards a safer future. “Companies are seeing value and beginning to prioritize responsible AI,” Wynn said, and as a result, “systems are going to be safer, secure, [and more] inclusive.”

Bourgon isn’t convinced and says actions like those recently announced by governments are “far from what will ultimately be required.”

He predicts that AI systems are likely to advance to catastrophic levels as early as 2030, and that governments need to be prepared to indefinitely halt AI systems until leading AI developers can “robustly demonstrate the safety of their systems.”



Spotify paid over $100 million to podcasts in the first quarter, including Joe Rogan, Alex Cooper and Theo Von


Pavlo Gonchar | Lightrocket | Getty Images

Spotify said Monday it paid more than $100 million to podcast publishers and podcasters worldwide in the first quarter of 2025.

The figure includes all creators on the platform across all formats and agreements, including the platform’s biggest fish, Joe Rogan, Alex Cooper and Theo Von, the company said.

Rogan’s “The Joe Rogan Experience,” Cooper’s “Call Her Daddy” and Von’s “This Past Weekend w/ Theo Von” were among the top podcasts on Spotify globally in 2024.

Rogan and Cooper’s exclusivity deals with Spotify have ended, and while Rogan signed a new Spotify deal last year worth up to $250 million, including revenue sharing and the ability to post on YouTube, Cooper inked a SiriusXM deal in August.

Read more CNBC tech news

Even when shows are no longer exclusive to Spotify, they are still uploaded to the platform and qualify for the Spotify Partner Program, which launched in January in the U.S., U.K., Canada and Australia.

The program allows creators to earn revenue every time an ad monetized by Spotify plays in the episode, as well as revenue when Premium subscribers watch dynamic ads on videos.

Competing platform Patreon said it paid out over $472 million to podcasters from over 6.7 million paid memberships in 2024.

YouTube’s payouts are massive by comparison but include more than just podcasts. The company paid $70 billion to creators between 2021 and 2024, with payouts rising each year, according to YouTube CEO Neal Mohan.

Spotify reports first-quarter earnings on Tuesday.


Palo Alto Networks acquiring Protect AI to boost artificial intelligence tools


Palo Alto Networks signage displays on the screen at the Nasdaq Market in New York City, U.S., March 25, 2025.

Jeenah Moon | Reuters

Palo Alto Networks announced on Monday its intent to acquire Protect AI, a startup specializing in securing artificial intelligence and machine learning applications, for an undisclosed sum.

The deal is set to close by the first quarter of fiscal year 2026.

“By extending our AI security capabilities to include Protect AI’s innovative solutions for Securing for AI, businesses will be able to build AI applications with comprehensive security,” said Anand Oswal, senior vice president and general manager of network security at Palo Alto Networks, in a release.

Palo Alto has been steadily bolstering its artificial intelligence systems to confront increasingly sophisticated cyber threats. The use of rapidly built ecosystems of AI models by large enterprises and government organizations has created new vulnerabilities. The company said those risks require purpose-built defenses beyond conventional cybersecurity.

Read more CNBC tech news

The acquisition would fold Protect AI’s solutions and team into Palo Alto’s newly announced Prisma AIRS platform. Palo Alto said Protect AI has established itself as a key player in what it called a “critical new area of security.”

Protect AI’s CEO Ian Swanson said joining Palo Alto would allow the company to “scale our mission of making the AI landscape more secure for users and organizations of all sizes.”

The company’s stock price is up 23% in the past year, lifting its market cap to nearly $120 billion. Palo Alto reports third-quarter earnings on May 21.

Year-to-date stock chart for Palo Alto Networks


Cloud software vendors Atlassian, Snowflake and Workday are betting on security startup Veza


From left, Veza founders Rob Whitcher, Tarun Thakur and Maohua Lu.

Veza

Tech giants like Google, Amazon, Microsoft and Nvidia have captured headlines in recent years for their massive investments in artificial intelligence startups like OpenAI and Anthropic.

But when it comes to corporate investing by tech companies, cloud software vendors are getting aggressive as well. And in some cases they’re banding together.

Veza, whose software helps companies manage the various internal technologies that employees can access, has just raised $108 million in a financing round that included participation from software vendors Atlassian, Snowflake and Workday.

New Enterprise Associates led the round, which values Veza at just over $800 million, including the fresh capital.

For two years, Snowflake’s managers have used Veza to check who has read and write access, Harsha Kapre, director of the data analytics software company’s venture group, told CNBC. It sits alongside a host of other cloud solutions the company uses.

“We have Workday, we have Salesforce — we have all these things,” Kapre said. “What Veza really unlocks for us is understanding who has access and determining who should have access.”

Kapre said that “over-provisioning,” or allowing too many people access to too much stuff, “raises the odds of an attack, because there’s just a lot of stuff that no one is even paying attention to.”

With Veza, administrators can check which employees and automated accounts have authorization to see corporate data, while managing policies for new hires and departures. Managers can approve or reject existing permissions in the software.

Veza says it has built hooks into more than 250 technologies, including Snowflake.


The funding lands at a challenging time for traditional venture firms. Since inflation started soaring in late 2021 and was followed by rising interest rates, startup exits have cooled dramatically, meaning venture firms are struggling to generate returns.

Wall Street was banking on a revival in the initial public offering market with President Donald Trump’s return to the White House, but the president’s sweeping tariff proposals led several companies to delay their offerings.

That all means startup investors have to preserve their cash as well.

In the first quarter, venture firms made 7,551 deals, down from more than 11,000 in the same quarter a year ago, according to a report from researcher PitchBook.

Corporate venture operates differently as the capital comes from the parent company and many investments are strategic, not just about generating financial returns.

Atlassian’s standard agreement asks that portfolio companies disclose each quarter the percentage of a startup’s customers that integrate with Atlassian. Snowflake looks at how much extra product consumption of its own technology occurs as a result of its startup investments, Kapre said, adding that the company has increased its pace of deal-making in the past year.

‘Sleeping industry’

Within the tech startup world, Veza is also in a relatively advantageous spot, because the proliferation of cyberattacks has lifted the importance of next-generation security software.

On the public markets, the First Trust Nasdaq Cybersecurity ETF, which includes CrowdStrike and Palo Alto Networks, is up 3% so far this year, compared with a 10% drop in the Nasdaq.

Veza’s technology runs across a variety of security areas tied to identity and access. In access management, Microsoft is the leader, and Okta is the challenger. Veza isn’t directly competing there, and is instead focused on visibility, an area where other players in and around the space lack technology, said Brian Guthrie, an analyst at Gartner.

Tarun Thakur, Veza’s co-founder and CEO, said his company’s software has become a key part of the ecosystem as other security vendors have started seeing permissions and entitlements as a place to gain broad access to corporate networks.

“We have woken up a sleeping industry,” Thakur, who helped start the company in 2020, said in an interview.

Thakur’s home in Los Gatos, California, doubles as headquarters for the startup, which employs 200 people. It isn’t disclosing revenue figures but says sales more than doubled in the fiscal year that ended in January. Customers include AMD, CrowdStrike and Intuit.

Guthrie said enterprises started recognizing that they needed stronger visibility about two years ago.

“I think it’s because of the number of identities,” he said. Companies realized they had an audit problem or “an account that got compromised,” Guthrie said.

AI agents create a new challenge. Last week Microsoft published a report that advised organizations to figure out the proper ratio of agents to humans.

Veza is building enhancements to enable richer support for agent identities, Thakur said. The new funding will also help Veza expand in the U.S. government and internationally and build more integrations, he said.

Peter Lenke, head of Atlassian’s venture arm, said his company isn’t yet a paying Veza client.

“There’s always potential down the road,” he said. Lenke said he heard about Veza from another investor well before the new round and decided to pursue a stake when the opportunity arose.

Lenke said that startups benefit from Atlassian investments because the company “has a large footprint” inside of enterprises.

“I think there’s a great symbiotic match there,” he said.

