Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict its use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to understand AI lingo like an insider


It’s not surprising that the debate around AI has developed its own lingo: the field began as a technical academic discipline.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
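To make the two terms concrete, here is a minimal sketch of the inference step, assuming the open-source Hugging Face transformers library and the small public gpt2 model (both illustrative stand-ins; frontier models like GPT-4 are served through proprietary APIs). The far more expensive training phase happens beforehand and produces the weights that inference relies on.

```python
# A minimal sketch of "inference": a pretrained language model predicts
# a statistically likely continuation of a prompt. Uses the open-source
# Hugging Face transformers library and the small public gpt2 model as
# illustrative stand-ins, not the frontier models named in the article.
from transformers import pipeline

# Load weights produced by the earlier, far more expensive "training" phase.
generator = pipeline("text-generation", model="gpt2")

# Inference: sample a likely continuation of the prompt.
result = generator("Artificial general intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```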

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they’re worried about being turned into paper clips. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible and logically decide to kill humans to make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” the idea that an AGI, once built, would improve itself so quickly that it would already be too late to save humanity.

Sometimes, this idea is described with an onomatopoeia — “foom” — especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.
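A toy sketch makes the parrot analogy concrete. The hypothetical bigram model below (far simpler than any LLM, and purely illustrative, not the paper’s method) emits statistically likely next words from raw counts, with no notion of what any word means.

```python
# A toy "stochastic parrot": a bigram model that samples statistically
# likely next words from observed counts, with no grasp of meaning.
# Purely illustrative; real LLMs are vastly more sophisticated.
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat saw the parrot".split()

# Record which words follow which in the training text.
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Parrot" a plausible-sounding sequence by sampling from those counts.
word = "the"
output = [word]
for _ in range(8):
    word = random.choice(next_words.get(word, corpus))
    output.append(word)
print(" ".join(output))
```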

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. When researchers and practitioners cannot point to the exact numbers and path of operations that larger AI models use to derive their output, inherent biases in the LLMs can stay hidden.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
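Masood’s contrast between classical algorithms and black-box models can be sketched in a few lines of code. The example below uses scikit-learn (an assumed illustration library, not one named in the article) to train a small decision tree, a classical model whose exact decision rules can be printed; no comparable readout exists for a billion-parameter LLM.

```python
# A classical, explainable model: a decision tree whose exact decision
# rules can be printed. Illustrative only; scikit-learn is an assumed
# example library, not one named in the article.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# "Why am I making that decision?" The tree answers in plain text.
print(export_text(clf, feature_names=list(iris.feature_names)))
```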

Another important term is “guardrails,” which encompasses the software and policies that Big Tech companies are currently building around AI models to ensure they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
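The guardrail pattern itself is simple to sketch. The toy wrapper below is a hypothetical illustration (not Nvidia’s NeMo Guardrails API): it screens a request and a model’s draft reply against a policy list before anything reaches the user.

```python
# A toy sketch of the guardrail pattern: screen the user's request and
# the model's draft reply against policy before returning anything.
# Hypothetical illustration only; real products like NeMo Guardrails
# are far more sophisticated.
BLOCKED_TOPICS = ("password", "credit card")  # hypothetical policy list

def guarded_reply(user_input: str, model_reply: str) -> str:
    text = (user_input + " " + model_reply).lower()
    if any(topic in text for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    return model_reply

# A sensitive request is deflected; an ordinary one passes through.
print(guarded_reply("what's my password?", "Your password is hunter2"))
print(guarded_reply("tell me a joke", "Why did the GPU overheat?"))
```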

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.

But the term can also describe what happens when simple rules interact at very large scale — like the patterns birds make when flying in flocks or, in AI’s case, what happens when ChatGPT and similar products are used by millions of people, such as widespread spam or disinformation.


How Elon Musk’s plan to slash government agencies and regulation may benefit his empire

Elon Musk’s business empire is sprawling. It includes electric vehicle maker Tesla, social media company X, artificial intelligence startup xAI, computer interface company Neuralink, tunneling venture Boring Company and aerospace firm SpaceX. 

Some of his ventures already benefit tremendously from federal contracts. SpaceX has received more than $19 billion from contracts with the federal government, according to research from FedScout. Under a second Trump presidency, more lucrative contracts could come its way. SpaceX is on track to take in billions of dollars annually from prime contracts with the federal government for years to come, according to FedScout CEO Geoff Orazem.

Musk, who has frequently blamed the government for stifling innovation, could also push for less regulation of his businesses. Earlier this month, Musk and former Republican presidential candidate Vivek Ramaswamy were tapped by Trump to lead a government efficiency group called the Department of Government Efficiency, or DOGE.

In a recent commentary piece in the Wall Street Journal, Musk and Ramaswamy wrote that DOGE will “pursue three major kinds of reform: regulatory rescissions, administrative reductions and cost savings.” They went on to say that many existing federal regulations were never passed by Congress and should therefore be nullified, which President-elect Trump could accomplish through executive action. Musk and Ramaswamy also championed the large-scale auditing of agencies, calling out the Pentagon for failing its seventh consecutive audit. 

“The number one way Elon Musk and his companies would benefit from a Trump administration is through deregulation and defanging, you know, giving fewer resources to federal agencies tasked with oversight of him and his businesses,” says CNBC technology reporter Lora Kolodny.

To learn how else Elon Musk and his companies may benefit from having the ear of the president-elect, watch the video.

Why X’s new terms of service are driving some users to leave Elon Musk’s platform

Elon Musk attends the America First Policy Institute gala at Mar-A-Lago in Palm Beach, Florida, Nov. 14, 2024.

Carlos Barria | Reuters

X’s new terms of service, which took effect Nov. 15, are driving some users off Elon Musk’s microblogging platform. 

The new terms include expansive permissions requiring users to allow the company to use their data to train X’s artificial intelligence models, while also making users liable for as much as $15,000 in damages if they view or scrape an excessive number of posts.

The terms are prompting some longtime users of the service, both celebrities and everyday people, to post that they are taking their content to other platforms. 

“With the recent and upcoming changes to the terms of service — and the return of volatile figures — I find myself at a crossroads, facing a direction I can no longer fully support,” actress Gabrielle Union posted on X the same day the new terms took effect, while announcing she would be leaving the platform.

“I’m going to start winding down my Twitter account,” a user with the handle @mplsFietser said in a post. “The changes to the terms of service are the final nail in the coffin for me.”

It’s unclear just how many users have left X due specifically to the company’s new terms of service, but since the start of November, many social media users have flocked to Bluesky, a microblogging startup whose origins stem from Twitter, the former name for X. Some users with new Bluesky accounts have posted that they moved to the service due to Musk and his support for President-elect Donald Trump.

Bluesky’s U.S. mobile app downloads have skyrocketed 651% since the start of November, according to estimates from Sensor Tower. In the same period, X and Meta’s Threads are up 20% and 42%, respectively. 

X and Threads have much larger monthly user bases. Although Musk said in May that X has 600 million monthly users, market intelligence firm Sensor Tower estimates X had 318 million monthly users as of October. That same month, Meta said Threads had nearly 275 million monthly users. Bluesky told CNBC on Thursday it had reached 21 million total users this week.

Here are some of the noteworthy changes in X’s new service terms and how they compare with those of rivals Bluesky and Threads.

Artificial intelligence training

X has come under heightened scrutiny because of its new terms, which say that any content on the service can be used royalty-free to train the company’s artificial intelligence large language models, including its Grok chatbot.

“You agree that this license includes the right for us to (i) provide, promote, and improve the Services, including, for example, for use with and training of our machine learning and artificial intelligence models, whether generative or another type,” X’s terms say.

Additionally, any “user interactions, inputs and results” shared with Grok can be used for what it calls “training and fine-tuning purposes,” according to the Grok section of the X app and website. This specific function, though, can be turned off manually. 

X’s terms do not specify whether users’ private messages can be used to train its AI models, and the company did not respond to a request for comment.

“You should only provide Content that you are comfortable sharing with others,” read a portion of X’s terms of service agreement.

Though X’s new terms may be expansive, Meta’s policies aren’t that different. 

The maker of Threads uses “information shared on Meta’s Products and services” to get its training data, according to the company’s Privacy Center. This includes “posts or photos and their captions.” There is also no direct way for users outside of the European Union to opt out of Meta’s AI training. Meta keeps training data “for as long as we need it on a case-by-case basis to ensure an AI model is operating appropriately, safely and efficiently,” according to its Privacy Center. 

Under Meta’s policy, private messages with friends or family aren’t used to train AI unless one of the users in a chat chooses to share it with the models, which can include Meta AI and AI Studio.

Bluesky, which has seen a user growth surge since Election Day, doesn’t do any generative AI training. 

“We do not use any of your content to train generative AI, and have no intention of doing so,” Bluesky said in a post on its platform Friday, confirming the same to CNBC as well.

Liquidated damages


The Pentagon’s battle inside the U.S. for control of a new Cyber Force

A recent Chinese cyber-espionage attack inside the nation’s major telecom networks that may have reached as high as the communications of President-elect Donald Trump and Vice President-elect J.D. Vance was described this week by one U.S. senator as “far and away the most serious telecom hack in our history.”

The U.S. has yet to figure out the full scope of what China accomplished, and whether or not its spies are still inside U.S. communication networks.

“The barn door is still wide open, or mostly open,” Senator Mark Warner of Virginia, chairman of the Senate Intelligence Committee, told the New York Times on Thursday.

The revelations highlight the rising cyberthreats tied to geopolitics and nation-state actor rivals of the U.S., but inside the federal government, there’s disagreement on how to fight back, with some advocates calling for the creation of an independent federal U.S. Cyber Force. In September, the Department of Defense formally appealed to Congress, urging lawmakers to reject that approach.

One of the most prominent voices advocating for the new branch is the Foundation for Defense of Democracies, a national security think tank, but the issue extends far beyond any single group. In June, defense committees in both the House and Senate approved measures calling for independent evaluations of the feasibility of creating a separate cyber branch as part of the annual defense policy deliberations.

Drawing on insights from more than 75 active-duty and retired military officers experienced in cyber operations, the FDD’s 40-page report highlights what it says are chronic structural issues within the U.S. Cyber Command (CYBERCOM), including fragmented recruitment and training practices across the Army, Navy, Air Force, and Marines.

“America’s cyber force generation system is clearly broken,” the FDD wrote, citing comments made in 2023 by then-leader of U.S. Cyber Command, Army General Paul Nakasone, who took over the role in 2018 and described current U.S. military cyber organization as unsustainable: “All options are on the table, except the status quo,” Nakasone had said.

Concern with Congress and a changing White House

The FDD analysis points to “deep concerns” that have existed within Congress for a decade — among members of both parties — about the military’s ability to staff up to successfully defend cyberspace. Talent shortages, inconsistent training and misaligned missions are undermining CYBERCOM’s capacity to respond effectively to complex cyber threats, it says. Creating a dedicated branch, proponents argue, would better position the U.S. in cyberspace. The Pentagon, however, warns that such a move could disrupt coordination, increase fragmentation, and ultimately weaken U.S. cyber readiness.

As the Pentagon doubles down on its resistance to establishment of a separate U.S. Cyber Force, the incoming Trump administration could play a significant role in shaping whether America leans toward a centralized cyber strategy or reinforces the current integrated framework that emphasizes cross-branch coordination.

Trump, known for his assertive national security measures, released a 2018 National Cyber Strategy that emphasized embedding cyber capabilities across all elements of national power, focusing on cross-departmental coordination and public-private partnerships rather than creating a standalone cyber entity. At that time, the Trump administration emphasized centralizing civilian cybersecurity efforts under the Department of Homeland Security while tasking the Department of Defense with addressing more complex, defense-specific cyber threats. Trump’s pick for Secretary of Homeland Security, South Dakota Governor Kristi Noem, has talked up her, and her state’s, focus on cybersecurity.

Former Trump officials believe that a second Trump administration will take an aggressive stance on national security, fill gaps at the Energy Department, and reduce regulatory burdens on the private sector. They anticipate a stronger focus on offensive cyber operations, tailored threat vulnerability protection, and greater coordination between state and local governments. Changes will be coming at the top of the Cybersecurity and Infrastructure Security Agency, which was created during Trump’s first term and where current director Jen Easterly has announced she will leave once Trump is inaugurated.

Cyber Command 2.0 and the U.S. military

John Cohen, executive director of the Program for Countering Hybrid Threats at the Center for Internet Security, is among those who share the Pentagon’s concerns. “We can no longer afford to operate in stovepipes,” Cohen said, warning that a separate cyber branch could worsen existing silos and further isolate cyber operations from other critical military efforts.

Cohen emphasized that adversaries like China and Russia employ cyber tactics as part of broader, integrated strategies that include economic, physical, and psychological components. To counter such threats, he argued, the U.S. needs a cohesive approach across its military branches. “Confronting that requires our military to adapt to the changing battlespace in a consistent way,” he said.

In 2018, CYBERCOM certified its Cyber Mission Force teams as fully staffed, but the FDD and others have expressed concerns that personnel were shifted between teams to meet staffing goals — a move they say masked deeper structural problems. Nakasone has called for a CYBERCOM 2.0, saying in comments early this year: “How do we think about training differently? How do we think about personnel differently?” He added that a major issue has been the approach to military staffing within the command.

Austin Berglas, a former head of the FBI’s cyber program in New York who worked on consolidation efforts inside the Bureau, believes a separate cyber force could enhance U.S. capabilities by centralizing resources and priorities. “When I first took over the [FBI] cyber program … the assets were scattered,” said Berglas, who is now the global head of professional services at supply chain cyber defense company BlueVoyant. Centralization brought focus and efficiency to the FBI’s cyber efforts, he said, and it’s a model he believes would benefit the military’s cyber efforts as well. “Cyber is a different beast,” Berglas said, emphasizing the need for specialized training, advancement, and resource allocation that isn’t diluted by competing military priorities.

Berglas also pointed to the ongoing “cyber arms race” with adversaries like China, Russia, Iran, and North Korea. He warned that without a dedicated force, the U.S. risks falling behind as these nations expand their offensive cyber capabilities and exploit vulnerabilities across critical infrastructure.

Nakasone said in his comments earlier this year that a lot has changed since 2013 when U.S. Cyber Command began building out its Cyber Mission Force to combat issues like counterterrorism and financial cybercrime coming from Iran. “Completely different world in which we live in today,” he said, citing the threats from China and Russia.

Brandon Wales, a former executive director of CISA, said there is a need to bolster U.S. cyber capabilities, but he cautioned against major structural changes during a period of heightened global threats.

“A reorganization of this scale is obviously going to be disruptive and will take time,” said Wales, who is now vice president of cybersecurity strategy at SentinelOne.

He cited China’s preparations for a potential conflict over Taiwan as a reason the U.S. military needs to maintain readiness. Rather than creating a new branch, Wales supports initiatives like Cyber Command 2.0 and its aim to enhance coordination and capabilities within the existing structure. “Large reorganizations should always be the last resort because of how disruptive they are,” he said.

Wales said it’s important to ensure that any structural changes do not undermine integration across military branches, and to recognize that coordination across existing branches is critical to addressing the complex, multidomain threats posed by U.S. adversaries. “You should not always assume that centralization solves all of your problems,” he said. “We need to enhance our capabilities, both defensively and offensively. This isn’t about one solution; it’s about ensuring we can quickly see, stop, disrupt, and prevent threats from hitting our critical infrastructure and systems,” he added.
