WASHINGTON, DC – SEPTEMBER 13: OpenAI CEO Sam Altman speaks with reporters on his arrival to the Senate bipartisan Artificial Intelligence (AI) Insight Forum on Capitol Hill in Washington, DC, on September 13, 2023. (Photo by Elizabeth Frantz for The Washington Post via Getty Images)

The Washington Post | Getty Images

Now, more than a year after ChatGPT’s introduction, the biggest AI story of 2023 may have turned out to be less the technology itself than the drama in the OpenAI boardroom over its rapid advancement. Through the ousting, and subsequent reinstatement, of Sam Altman as CEO, the underlying tension for generative artificial intelligence going into 2024 became clear: AI is at the center of a huge divide between those who are fully embracing its rapid pace of innovation and those who want it to slow down because of the many risks involved.

The debate — known within tech circles as e/acc vs. decels — has been making the rounds in Silicon Valley since 2021. But as AI grows in power and influence, it’s increasingly important to understand both sides of the divide.

Here’s a primer on the key terms and some of the prominent players shaping AI’s future.

e/acc and techno-optimism

The term “e/acc” stands for effective accelerationism.

In short, those who are pro-e/acc want technology and innovation to be moving as fast as possible.

“Technocapital can usher in the next evolution of consciousness, creating unthinkable next-generation lifeforms and silicon-based awareness,” the backers of the concept explained in the first-ever post about e/acc.

In terms of AI, it is “artificial general intelligence,” or AGI, that underlies the debate. AGI is a super-intelligent AI so advanced it can do things as well as or better than humans. AGIs could also improve themselves, creating an endless feedback loop with limitless possibilities.


Some think AGIs will bring about the end of the world, becoming so intelligent that they figure out how to eradicate humanity. But e/acc enthusiasts choose to focus on the benefits an AGI can offer. “There is nothing stopping us from creating abundance for every human alive other than the will to do it,” the founding e/acc substack explained.

The founders of the e/acc movement have been shrouded in mystery. But @basedbeffjezos, arguably the biggest proponent of e/acc, recently revealed himself to be Guillaume Verdon after his identity was exposed by the media.

Verdon, who formerly worked for Alphabet, X, and Google, is now working on what he calls the “AI Manhattan project” and said on X that “this is not the end, but a new beginning for e/acc. One where I can step up and make our voice heard in the traditional world beyond X, and use my credentials to provide backing for our community’s interests.”

Verdon is also the founder of Extropic, a tech startup which he described as “building the ultimate substrate for Generative AI in the physical world by harnessing thermodynamic physics.”

An AI manifesto from a top VC

One of the most prominent e/acc supporters is venture capitalist Marc Andreessen of Andreessen Horowitz, who previously called Verdon the “patron saint of techno-optimism.”

Techno-optimism is exactly what it sounds like: believers think more technology will ultimately make the world a better place. Andreessen wrote the Techno-Optimist Manifesto, a 5,000-plus-word statement that explains how technology will empower humanity and solve all of its material problems. Andreessen even goes so far as to say that “any deceleration of AI will cost lives,” and that it would be a “form of murder” not to develop AI enough to prevent deaths.

Another techno-optimist piece he wrote, Why AI Will Save the World, was reposted by Yann LeCun, chief AI scientist at Meta, who is known as one of the “godfathers of AI” after winning the prestigious Turing Award for his breakthroughs in AI.

Yann LeCun, chief AI scientist at Meta, speaks at the Viva Tech conference in Paris, June 13, 2023.

Chesnot | Getty Images News | Getty Images

LeCun labels himself on X as a “humanist who subscribes to both Positive and Normative forms of Active Techno-Optimism.”

LeCun, who recently said that he doesn’t expect AI “super-intelligence” to arrive for quite some time, has served as a vocal counterpoint in public to those who he says “doubt that current economic and political institutions, and humanity as a whole, will be capable of using [AI] for good.”

Meta’s embrace of open-source AI underlies LeCun’s belief that the technology will offer more potential than harm, while others have pointed to the dangers of a business model like Meta’s, which pushes to place widely available generative AI models in the hands of many developers.

AI alignment and deceleration

In March, an open letter by Encode Justice and the Future of Life Institute called for “all AI labs to immediately pause for at least six months the training of AI systems more powerful than GPT-4.”

The letter was endorsed by prominent figures in tech, such as Elon Musk and Apple co-founder Steve Wozniak.

OpenAI CEO Sam Altman addressed the letter back in April at an MIT event, saying, “I think moving with caution and an increasing rigor for safety issues is really important. The letter I don’t think was the optimal way to address it.”


Altman was caught up in the battle anew when the OpenAI boardroom drama played out and the original directors of the nonprofit arm of OpenAI grew concerned about the rapid rate of progress and the company’s stated mission “to ensure that artificial general intelligence — AI systems that are generally smarter than humans — benefits all of humanity.”

Some of the ideas from the open letter are key to decels, supporters of AI deceleration. Decels want progress to slow down because the future of AI is risky and unpredictable, and one of their biggest concerns is AI alignment.

The AI alignment problem centers on the idea that AI will eventually become so intelligent that humans won’t be able to control it.

“Our dominance as a species, driven by our relatively superior intelligence, has led to harmful consequences for other species, including extinction, because our goals are not aligned with theirs. We control the future — chimps are in zoos. Advanced AI systems could similarly impact humanity,” said Malo Bourgon, CEO of the Machine Intelligence Research Institute.

AI alignment research, such as MIRI’s, aims to train AI systems to “align” with the goals, morals, and ethics of humans, which would prevent existential risks to humanity. “The core risk is in creating entities much smarter than us with misaligned objectives whose actions are unpredictable and uncontrollable,” Bourgon said.

Government and AI’s end-of-the-world issue

Christine Parthemore, CEO of the Council on Strategic Risks and a former Pentagon official, has devoted her career to de-risking dangerous situations, and she recently told CNBC that when we consider the “mass scale death” AI could cause if used to oversee nuclear weapons, it is an issue that requires immediate attention.

But “staring at the problem” won’t do any good, she stressed. “The whole point is addressing the risks and finding the solution sets that are most effective,” she said. “It’s dual-use tech at its purest,” she added. “There is no case where AI is more of a weapon than a solution.” For example, large language models will become virtual lab assistants and accelerate medicine, but they will also help nefarious actors identify the best and most transmissible pathogens to use for attack. This is among the reasons AI can’t be stopped, she said. “Slowing down is not part of the solution set,” Parthemore said.


Earlier this year, her former employer, the Department of Defense, said that in its use of AI systems there will always be a human in the loop. That’s a protocol she says should be adopted everywhere. “The AI itself cannot be the authority,” she said. “It can’t just be, ‘the AI says X.’ … We need to trust the tools, or we should not be using them, but we need to contextualize. … There is enough general lack of understanding about this toolset that there is a higher risk of overconfidence and overreliance.”

Government officials and policymakers have started taking note of these risks. In July, the Biden-Harris administration announced that it secured voluntary commitments from AI giants Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI to “move towards safe, secure, and transparent development of AI technology.”

Just a few weeks ago, President Biden issued an executive order that established new standards for AI safety and security, though stakeholder groups across society are concerned about its limitations. Similarly, the U.K. government introduced the AI Safety Institute in early November, the first state-backed organization focused on navigating the risks of AI.

Britain’s Prime Minister Rishi Sunak (L) attends an in-conversation event with X (formerly Twitter) CEO Elon Musk (R) in London on November 2, 2023, following the UK Artificial Intelligence (AI) Safety Summit.

Kirsty Wigglesworth | AFP | Getty Images

Amid the global race for AI supremacy, and links to geopolitical rivalry, China is implementing its own set of AI guardrails.

Responsible AI promises and skepticism

OpenAI is currently working on Superalignment, which aims to “solve the core technical challenges of superintelligent alignment in four years.”

At its recent AWS re:Invent 2023 conference, Amazon announced new capabilities for AI innovation alongside the implementation of responsible AI safeguards across the organization.

“I often say it’s a business imperative, that responsible AI shouldn’t be seen as a separate workstream but ultimately integrated into the way in which we work,” says Diya Wynn, the responsible AI lead for AWS.

According to a study commissioned by AWS and conducted by Morning Consult, responsible AI is a growing business priority for 59% of business leaders, with about half (47%) planning on investing more in responsible AI in 2024 than they did in 2023.

Although factoring in responsible AI may slow down AI’s pace of innovation, teams like Wynn’s see themselves as paving the way towards a safer future. “Companies are seeing value and beginning to prioritize responsible AI,” Wynn said, and as a result, “systems are going to be safer, secure, [and more] inclusive.”

Bourgon isn’t convinced and says actions like those recently announced by governments are “far from what will ultimately be required.”

He predicts AI systems could advance to catastrophic levels as early as 2030, and says governments need to be prepared to halt AI systems indefinitely until leading AI developers can “robustly demonstrate the safety of their systems.”


How Elon Musk’s plan to slash government agencies and regulation may benefit his empire

Elon Musk’s business empire is sprawling. It includes electric vehicle maker Tesla, social media company X, artificial intelligence startup xAI, computer interface company Neuralink, tunneling venture Boring Company and aerospace firm SpaceX. 

Some of his ventures already benefit tremendously from federal contracts. SpaceX has received more than $19 billion from contracts with the federal government, according to research from FedScout. Under a second Trump presidency, more lucrative contracts could come its way. SpaceX is on track to take in billions of dollars annually from prime contracts with the federal government for years to come, according to FedScout CEO Geoff Orazem.

Musk, who has frequently blamed the government for stifling innovation, could also push for less regulation of his businesses. Earlier this month, Musk and former Republican presidential candidate Vivek Ramaswamy were tapped by Trump to lead a government efficiency group called the Department of Government Efficiency, or DOGE.

In a recent commentary piece in the Wall Street Journal, Musk and Ramaswamy wrote that DOGE will “pursue three major kinds of reform: regulatory rescissions, administrative reductions and cost savings.” They went on to say that many existing federal regulations were never passed by Congress and should therefore be nullified, which President-elect Trump could accomplish through executive action. Musk and Ramaswamy also championed the large-scale auditing of agencies, calling out the Pentagon for failing its seventh consecutive audit. 

“The number one way Elon Musk and his companies would benefit from a Trump administration is through deregulation and defanging, you know, giving fewer resources to federal agencies tasked with oversight of him and his businesses,” says CNBC technology reporter Lora Kolodny.

To learn how else Elon Musk and his companies may benefit from having the ear of the president-elect, watch the video.

Why X’s new terms of service are driving some users to leave Elon Musk’s platform

Elon Musk attends the America First Policy Institute gala at Mar-A-Lago in Palm Beach, Florida, Nov. 14, 2024.

Carlos Barria | Reuters

X’s new terms of service, which took effect Nov. 15, are driving some users off Elon Musk’s microblogging platform. 

The new terms include expansive permissions requiring users to allow the company to use their data to train X’s artificial intelligence models, while also making users liable for as much as $15,000 in damages if they access unusually high volumes of content, such as requesting more than 1 million posts in a 24-hour period.

The terms are prompting some longtime users of the service, both celebrities and everyday people, to post that they are taking their content to other platforms. 

“With the recent and upcoming changes to the terms of service — and the return of volatile figures — I find myself at a crossroads, facing a direction I can no longer fully support,” actress Gabrielle Union posted on X the same day the new terms took effect, while announcing she would be leaving the platform.

“I’m going to start winding down my Twitter account,” a user with the handle @mplsFietser said in a post. “The changes to the terms of service are the final nail in the coffin for me.”

It’s unclear just how many users have left X due specifically to the company’s new terms of service, but since the start of November, many social media users have flocked to Bluesky, a microblogging startup whose origins stem from Twitter, the former name for X. Some users with new Bluesky accounts have posted that they moved to the service due to Musk and his support for President-elect Donald Trump.

Bluesky’s U.S. mobile app downloads have skyrocketed 651% since the start of November, according to estimates from Sensor Tower. In the same period, X and Meta’s Threads are up 20% and 42%, respectively. 

X and Threads have much larger monthly user bases. Although Musk said in May that X has 600 million monthly users, market intelligence firm Sensor Tower estimates X had 318 million monthly users as of October. That same month, Meta said Threads had nearly 275 million monthly users. Bluesky told CNBC on Thursday it had reached 21 million total users this week.

Here are some of the noteworthy changes in X’s new service terms and how they compare with those of rivals Bluesky and Threads.

Artificial intelligence training

X has come under heightened scrutiny because of its new terms, which say that any content on the service can be used royalty-free to train the company’s artificial intelligence large language models, including its Grok chatbot.

“You agree that this license includes the right for us to (i) provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type,” X’s terms say.

Additionally, any “user interactions, inputs and results” shared with Grok can be used for what it calls “training and fine-tuning purposes,” according to the Grok section of the X app and website. This specific function, though, can be turned off manually. 

X’s terms do not specify whether users’ private messages can be used to train its AI models, and the company did not respond to a request for comment.

“You should only provide Content that you are comfortable sharing with others,” read a portion of X’s terms of service agreement.

Though X’s new terms may be expansive, Meta’s policies aren’t that different. 

The maker of Threads uses “information shared on Meta’s Products and services” to get its training data, according to the company’s Privacy Center. This includes “posts or photos and their captions.” There is also no direct way for users outside of the European Union to opt out of Meta’s AI training. Meta keeps training data “for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely and efficiently,” according to its Privacy Center. 

Under Meta’s policy, private messages with friends or family aren’t used to train AI unless one of the users in a chat chooses to share it with the models, which can include Meta AI and AI Studio.

Bluesky, which has seen a user growth surge since Election Day, doesn’t do any generative AI training. 

“We do not use any of your content to train generative AI, and have no intention of doing so,” Bluesky said in a post on its platform Friday, confirming the same to CNBC as well.

Liquidated damages


The Pentagon’s battle inside the U.S. for control of a new Cyber Force

A recent Chinese cyber-espionage attack inside the nation’s major telecom networks, one that may have reached as high as the communications of President-elect Donald Trump and Vice President-elect J.D. Vance, was described this week by one U.S. senator as “far and away the most serious telecom hack in our history.”

The U.S. has yet to figure out the full scope of what China accomplished, and whether or not its spies are still inside U.S. communication networks.

“The barn door is still wide open, or mostly open,” Senator Mark Warner of Virginia, chairman of the Senate Intelligence Committee, told the New York Times on Thursday.

The revelations highlight the rising cyberthreats tied to geopolitics and nation-state actor rivals of the U.S., but inside the federal government, there’s disagreement on how to fight back, with some advocates calling for the creation of an independent federal U.S. Cyber Force. In September, the Department of Defense formally appealed to Congress, urging lawmakers to reject that approach.

Among the most prominent voices advocating for the new branch is the Foundation for Defense of Democracies, a national security think tank, but the issue extends far beyond any single group. In June, defense committees in both the House and Senate approved measures calling for independent evaluations of the feasibility of creating a separate cyber branch, as part of the annual defense policy deliberations.

Drawing on insights from more than 75 active-duty and retired military officers experienced in cyber operations, the FDD’s 40-page report highlights what it says are chronic structural issues within the U.S. Cyber Command (CYBERCOM), including fragmented recruitment and training practices across the Army, Navy, Air Force, and Marines.

“America’s cyber force generation system is clearly broken,” the FDD wrote, citing comments made in 2023 by Army General Paul Nakasone, then the leader of U.S. Cyber Command, who took over the role in 2018 and described the current U.S. military cyber organization as unsustainable: “All options are on the table, except the status quo,” Nakasone said.

Concern within Congress and a changing White House

The FDD analysis points to “deep concerns” that have existed within Congress for a decade — among members of both parties — about the military’s ability to staff up to successfully defend cyberspace. Talent shortages, inconsistent training, and misaligned missions are undermining CYBERCOM’s capacity to respond effectively to complex cyber threats, it says. Creating a dedicated branch, proponents argue, would better position the U.S. in cyberspace. The Pentagon, however, warns that such a move could disrupt coordination, increase fragmentation, and ultimately weaken U.S. cyber readiness.

As the Pentagon doubles down on its resistance to the establishment of a separate U.S. Cyber Force, the incoming Trump administration could play a significant role in shaping whether America leans toward a centralized cyber strategy or reinforces the current integrated framework that emphasizes cross-branch coordination.

Trump, known for his assertive national security measures, issued a 2018 National Cyber Strategy that emphasized embedding cyber capabilities across all elements of national power, focusing on cross-departmental coordination and public-private partnerships rather than creating a standalone cyber entity. At that time, the Trump administration emphasized centralizing civilian cybersecurity efforts under the Department of Homeland Security while tasking the Department of Defense with addressing more complex, defense-specific cyber threats. Trump’s pick for Secretary of Homeland Security, South Dakota Governor Kristi Noem, has talked up her, and her state’s, focus on cybersecurity.

Former Trump officials believe that a second Trump administration will take an aggressive stance on national security, fill gaps at the Energy Department, and reduce regulatory burdens on the private sector. They anticipate a stronger focus on offensive cyber operations, tailored threat vulnerability protection, and greater coordination between state and local governments. Changes will be coming at the top of the Cybersecurity and Infrastructure Security Agency, which was created during Trump’s first term and where current director Jen Easterly has announced she will leave once Trump is inaugurated.

Cyber Command 2.0 and the U.S. military

John Cohen, executive director of the Program for Countering Hybrid Threats at the Center for Internet Security, is among those who share the Pentagon’s concerns. “We can no longer afford to operate in stovepipes,” Cohen said, warning that a separate cyber branch could worsen existing silos and further isolate cyber operations from other critical military efforts.

Cohen emphasized that adversaries like China and Russia employ cyber tactics as part of broader, integrated strategies that include economic, physical, and psychological components. To counter such threats, he argued, the U.S. needs a cohesive approach across its military branches. “Confronting that requires our military to adapt to the changing battlespace in a consistent way,” he said.

In 2018, CYBERCOM certified its Cyber Mission Force teams as fully staffed, but the FDD and others have expressed concerns that personnel were shifted between teams to meet staffing goals, a move they say masked deeper structural problems. Nakasone has called for a CYBERCOM 2.0, asking in comments early this year, “How do we think about training differently? How do we think about personnel differently?” and adding that a major issue has been the approach to military staffing within the command.

Austin Berglas, a former head of the FBI’s cyber program in New York who worked on consolidation efforts inside the Bureau, believes a separate cyber force could enhance U.S. capabilities by centralizing resources and priorities. “When I first took over the [FBI] cyber program … the assets were scattered,” said Berglas, who is now the global head of professional services at supply chain cyber defense company BlueVoyant. Centralization brought focus and efficiency to the FBI’s cyber efforts, he said, and it’s a model he believes would benefit the military’s cyber efforts as well. “Cyber is a different beast,” Berglas said, emphasizing the need for specialized training, advancement, and resource allocation that isn’t diluted by competing military priorities.

Berglas also pointed to the ongoing “cyber arms race” with adversaries like China, Russia, Iran, and North Korea. He warned that without a dedicated force, the U.S. risks falling behind as these nations expand their offensive cyber capabilities and exploit vulnerabilities across critical infrastructure.

Nakasone said in his comments earlier this year that a lot has changed since 2013 when U.S. Cyber Command began building out its Cyber Mission Force to combat issues like counterterrorism and financial cybercrime coming from Iran. “Completely different world in which we live in today,” he said, citing the threats from China and Russia.

Brandon Wales, a former executive director of CISA, said there is a need to bolster U.S. cyber capabilities, but he cautions against major structural changes during a period of heightened global threats.

“A reorganization of this scale is obviously going to be disruptive and will take time,” said Wales, who is now vice president of cybersecurity strategy at SentinelOne.

He cited China’s preparations for a potential conflict over Taiwan as a reason the U.S. military needs to maintain readiness. Rather than creating a new branch, Wales supports initiatives like Cyber Command 2.0 and its aim to enhance coordination and capabilities within the existing structure. “Large reorganizations should always be the last resort because of how disruptive they are,” he said.

Wales says it’s important to ensure any structural changes do not undermine integration across military branches and recognize that coordination across existing branches is critical to addressing the complex, multidomain threats posed by U.S. adversaries. “You should not always assume that centralization solves all of your problems,” he said. “We need to enhance our capabilities, both defensively and offensively. This isn’t about one solution; it’s about ensuring we can quickly see, stop, disrupt, and prevent threats from hitting our critical infrastructure and systems,” he added.
