Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction.
Eric Lee | Bloomberg | Getty Images
This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about potential risks of artificial intelligence at a Senate hearing.
After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.
“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”
In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well or better than most humans, including improving itself.
“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.
Most people agree that there need to be laws governing AI as the pace of development accelerates.
“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it is it can do.”
But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”
When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.
“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”
But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.
From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict the use of AI in areas subject to anti-discrimination law, like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.
This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.
“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.
It’s not surprising the debate around AI has developed its own lingo: the field started as a technical academic discipline.
Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.
For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom that posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.
OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.
Another concept in AI safety is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.
Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.
“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.
AI ethics has its own lingo, too.
When describing the limitations of the current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”
The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.
When these LLMs invent incorrect facts in responses, they’re “hallucinating.”
One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. When researchers and practitioners cannot point to the exact numbers and paths of operations that larger AI models use to derive their output, the concern is that the models could conceal inherent biases.
“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
Another important term is “guardrails,” which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”
It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.
Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”
A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphics.
But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.
The logo for the Food and Drug Administration is seen ahead of a news conference on removing synthetic dyes from America’s food supply, at the Health and Human Services Headquarters in Washington, DC on April 22, 2025.
Nathan Posner | Anadolu | Getty Images
The U.S. Food and Drug Administration on Tuesday published a warning letter addressed to the wrist wearable company Whoop, alleging it is marketing a new blood pressure feature without proper approvals.
The letter centers around Whoop’s Blood Pressure Insights (BPI) feature, which the company introduced alongside its latest hardware launch in May.
Whoop said its BPI feature uses blood pressure information to offer performance and wellness insights that inform consumers and improve athletic performance.
But the FDA said Tuesday that Whoop’s BPI feature is intended to diagnose, cure, treat or prevent disease — a key distinction that would reclassify the wellness tracker as a “medical device” that has to undergo a rigorous testing and approval process.
“Providing blood pressure estimation is not a low-risk function,” the FDA said in the letter. “An erroneously low or high blood pressure reading can have significant consequences for the user.”
A Whoop spokesperson said the company’s system offers only a single daily estimated range and midpoint, which distinguishes it from medical blood pressure devices used for diagnosis or management of high blood pressure.
Whoop users who purchase the $359 “Whoop Life” subscription tier can use the BPI feature to get daily insights about their blood pressure, including estimated systolic and diastolic ranges, according to the company.
Whoop also requires users to log three traditional cuff readings to serve as a baseline in order to unlock the BPI feature.
Additionally, the spokesperson said the BPI data is not unlike other wellness metrics that the company deals with. Just as heart rate variability and respiratory rate can have medical uses, the spokesperson said, they are permitted in a wellness context too.
“We believe the agency is overstepping its authority in this case by attempting to regulate a non-medical wellness feature as a medical device,” the Whoop spokesperson said.
High blood pressure, also called hypertension, is the number one risk factor for heart attacks, strokes and other types of cardiovascular disease, according to Dr. Ian Kronish, an internist and co-director of Columbia University’s Hypertension Center.
Kronish told CNBC that wearables like Whoop are a big emerging topic of conversation among hypertension experts, in part because there’s “concern that these devices are not yet proven to be accurate.”
If patients don’t get accurate blood pressure readings, they can’t make informed decisions about the care they need.
At the same time, Kronish said wearables like Whoop present a “big opportunity” for patients to take more control over their health, and that many professionals are excited to work with these tools.
Understandably, it can be confusing for consumers to navigate. Kronish encouraged patients to talk with their doctor about how they should use wearables like Whoop.
“It’s really great to hear that the FDA is getting more involved around informing consumers,” Kronish said.
FILE PHOTO: The headquarters of the U.S. Food and Drug Administration (FDA) is seen in Silver Spring, Maryland November 4, 2009.
Jason Reed | Reuters
Whoop is not the only wearable manufacturer that’s exploring blood pressure monitoring.
Omron and Garmin both offer medical blood pressure monitoring with on-demand readings that fall under FDA regulation. Samsung also offers blood-pressure-reading technology, but it is not available in the U.S. market.
Apple has also been teasing a blood pressure sensor for its watches, but has not been able to deliver. In 2024, the tech giant received FDA approval for its sleep apnea detection feature.
Whoop has previously received FDA clearance for its ECG feature, which is used to record and analyze a heart’s electrical activity to detect potential irregularities in rhythm. But when it comes to blood pressure, Whoop believes the FDA’s perspective is antiquated.
“We do not believe blood pressure should be considered any more or less sensitive than other physiological metrics like heart rate and respiratory rate,” a spokesperson said. “It appears that the FDA’s concerns may stem from outdated assumptions about blood pressure being strictly a clinical domain and inherently associated with a medical diagnosis.”
The FDA said Whoop could be subject to regulatory actions like seizure, injunction, and civil money penalties if it fails to address the violations that the agency identified in its letter.
Whoop has 15 business days to respond with steps the company has taken to address the violations, as well as how it will prevent similar issues from happening again.
“Even accounting for BPI’s disclaimers, they do not change this conclusion, because they are insufficient to outweigh the fact that the product is, by design, intended to provide a blood pressure estimation that is inherently associated with the diagnosis of a disease or condition,” the FDA said.
United Launch Alliance Atlas V rocket carrying the first two demonstration satellites for Amazon’s Project Kuiper broadband internet constellation stands ready for launch on pad 41 at Cape Canaveral Space Force Station on October 5, 2023 in Cape Canaveral, Florida, United States.
Paul Hennessey | Anadolu Agency | Getty Images
As Amazon chases SpaceX in the internet satellite market, the e-commerce and computing giant is now counting on Elon Musk’s rival company to get its next batch of devices into space.
On Wednesday, weather permitting, 24 Kuiper satellites will hitch a ride on one of SpaceX’s Falcon 9 rockets from a launchpad on Florida’s Space Coast. A 27-minute launch window for the mission, dubbed “KF-01,” opens at 2:18 a.m. ET.
The launch will be livestreamed on X, the social media platform also owned by Musk.
The mission marks an unusual alliance. SpaceX’s Starlink is currently the dominant provider of low Earth orbit satellite internet, with a constellation of roughly 8,000 satellites and about 5 million customers worldwide.
Amazon launched Project Kuiper in 2019 with an aim to provide broadband internet from a constellation of more than 3,000 satellites. The company is working under a tight deadline imposed by the Federal Communications Commission that requires it to have about 1,600 satellites in orbit by the end of July 2026.
Amazon’s first two Kuiper launches came in April and June, sending 27 satellites each time aboard rockets supplied by United Launch Alliance.
Assuming Wednesday’s launch is a success, Amazon will have a total of 78 satellites in orbit. In order to meet the FCC’s tight deadline, Amazon needs to rapidly manufacture and deploy satellites, securing a hefty amount of capacity from rocket providers. Kuiper has booked up to 83 launches, including three rides with SpaceX.
Space has emerged as a battleground between Musk and Amazon founder Jeff Bezos, two of the world’s richest men. Aside from Kuiper, Bezos also competes with Musk via his rocket company Blue Origin.
Blue Origin in January sent up its massive New Glenn rocket, which is intended to rival SpaceX’s reusable Falcon 9 rockets, for the first time. While Blue Origin currently trails SpaceX, Bezos last year predicted his latest venture will one day be bigger than Amazon, which he started in 1994.
Kuiper has become one of Amazon’s biggest bets, with more than $10 billion earmarked for the project. The company may need to spend as much as $23 billion to build its full constellation, analysts at Bank of America wrote in a note to clients last week. That figure doesn’t include the cost of building terminals, which consumers will use to connect to the service.
The analysts estimate Amazon is spending $150 million per launch this year, while satellite production costs are projected to total $1.1 billion by the fourth quarter.
Amazon is going after a market that’s expected to grow to at least $40 billion by 2030, the analysts wrote, citing estimates by Boston Consulting Group. The firm estimated that Amazon could generate $7.1 billion in sales from Kuiper by 2032 if it claims 30% of the market.
“With Starlink’s solid early growth, our estimates could be conservative,” the analysts wrote.
The price of bitcoin was last down 2.8% at $116,516.00, according to Coin Metrics. That marks a pullback from the day’s high of $120,481.86.
Bitcoin/USD, 1-day chart (Coin Metrics)
The drop comes on the heels of multiple crypto-related bills failing to overcome a procedural hurdle in the House, with 13 Republicans voting with Democrats to block the motion in a 196-223 vote.
Stocks linked to crypto also came under pressure in late afternoon trading. Shares of bitcoin miners Riot Platforms and Mara Holdings closed down 3.3% and 2.3%, respectively. Crypto trading platform Coinbase slid 1.5%. All were under pressure in extended trading.