Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction.
Eric Lee | Bloomberg | Getty Images
This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about potential risks of artificial intelligence at a Senate hearing.
After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.
“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”
In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.
“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.
Most people agree that there need to be laws governing AI as the pace of development accelerates.
“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it is it can do.”
But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”
When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.
“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”
But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.
From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict its use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.
This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.
“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.
It’s not surprising the debate around AI has developed its own lingo. It started as a technical academic field.
Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.
For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom that posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans to make paper clips out of their remains.
OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.
Another concept in AI safety is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.
Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.
“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.
AI ethics has its own lingo, too.
When describing the limitations of the current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”
The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.
When these LLMs invent incorrect facts in responses, they’re “hallucinating.”
One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. That refers to the fact that researchers and practitioners often cannot point to the exact numbers and path of operations that larger AI models use to derive their output, which can hide inherent biases in the LLMs.
“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
Another important term is “guardrails,” which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”
It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.
Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”
A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.
But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.
Jensen Huang is interviewed by media during a reception for the 2025 Queen Elizabeth Prize for Engineering, at St James’ Palace on Nov. 5, 2025, in London, England.
Yui Mok | Getty Images Entertainment | Getty Images
Uneasy lies the head that wears the crown.
Shares of artificial intelligence czar Nvidia fell 2.6% on Tuesday as signs of unrest continued rippling through its kingdom.
Over the past month, Nvidia has been contending with concerns over lofty valuations and an argument from “The Big Short” investor Michael Burry that companies may be overestimating the lifespan of Nvidia’s chips. That accounting choice inflates profits, he alleged.
The pressure intensified last week in the form of a potential challenger to the crown. Google on Nov. 18 announced the release of its new AI model Gemini 3 — so far so good, given that Nvidia isn’t in the business of designing large language models — powered by its in-house AI chips — uh-oh.
And on Monday stateside, Meta, a potential kingmaker, appeared to signal that it is considering not just leasing Google’s custom AI chips, but also using them for its own data centers. It seemed like Nvidia felt the need to address some of those rumblings.
The chipmaker said on the social media platform X that its technology is more powerful and versatile than other types of AI chips, including the so-called ASIC chips, such as Google’s TPUs. Separately, Nvidia issued a private memo to Wall Street that disputed Burry’s allegations.
Power, whether in politics or semiconductors, requires a delicate balance.
Remaining silent may shroud those in power in a cloak of untouchability, projecting confidence in their authority — but also aloofness. Deigning to address unrest can soothe uncertainty, but also, paradoxically, signal insecurity.
For now, the crown is Nvidia’s to wear — and the weight of it is, too.
What you need to know today
The UK Autumn Budget 2025 is here. Britain prepares for a “smorgasbord” of tax hikes to be unveiled Wednesday. Follow CNBC’s coverage of the Budget throughout the day on our live blog here.
Taiwan President pledges $40 billion more for defense. Lai Ching-te, Taiwan’s leader, on Wednesday said the self-governing island will improve its self-defense capabilities in the face of “unprecedented military buildup” by China.
[PRO] What to watch as UK budget is unveiled. Strategists told CNBC they will be monitoring the budget’s effects on interest rates, economic growth and the British pound — and one “rabbit out of the hat” from U.K. Finance Minister Rachel Reeves.
And finally…
Lights on in skyscrapers and commercial buildings on the skyline of the City of London, UK, on Tuesday, Nov. 18, 2025. U.K. business chiefs urged Chancellor of the Exchequer Rachel Reeves to ease energy costs and avoid raising the tax burden on corporate Britain as she prepares this year’s budget.
The run-up to this year’s U.K. Autumn Budget has been different from the norm because so many tax proposals have been floated, flagged, leaked and retracted in the weeks and months leading up to Wednesday’s statement.
It has also made it harder to gauge what we’re actually going to get when Finance Minister Rachel Reeves finally unveils her spending and taxation plans for the year ahead.
Uber on Wednesday rolled out fully driverless rides in its fourth market, launching the service in Abu Dhabi in partnership with WeRide, a Chinese autonomous vehicle company.
The ride-hailing company said the launch in the United Arab Emirates capital represents the first driverless robotaxi service in the Middle East. In the U.S., Uber already offers robotaxi services in Austin, Phoenix and Atlanta through Alphabet’s Waymo.
Riders in Abu Dhabi can book a WeRide robotaxi when requesting an UberX or Uber Comfort ride, the ride-hailing company said.
WeRide, which is listed on the Nasdaq, formed its partnership with Uber in September 2024 and began offering autonomous rides with an operator on board in Abu Dhabi last December. Uber and WeRide also debuted robotaxi rides with a safety operator on board in Riyadh, Saudi Arabia, in October. In May, Uber said it plans to roll out the WeRide service to 15 more cities, including in Europe, over the next five years.
In recent years, Uber has bet big on autonomous vehicle technology through partnerships.
Uber started offering a robotaxi service in Austin and Atlanta earlier this year, and in Phoenix in late 2023. In July, the company landed a six-year robotaxi deal with electric vehicle maker Lucid and AV startup Nuro.
WeRide, meanwhile, has launched fully driverless robotaxi services in Beijing and Guangzhou in China, according to its website.
Uber has not said how it splits revenue from robotaxi rides with its partners.
Competitors have also readily adopted the technology, with Lyft announcing a deal with Waymo in September to launch robotaxis in Nashville next year.
Uber said the driverless vehicles in Abu Dhabi will operate in certain areas of Yas Island. Riders can boost their chances of getting a robotaxi by selecting the autonomous option. On-board support is available during the ride through the app and an in-vehicle tablet.
Amazon’s new MK30 Prime Air drone is displayed during Amazon’s “Delivering the Future” event at the company’s BFI1 Fulfillment Center, Robotics Research and Development Hub in Sumner, Washington on Oct. 18, 2023.
Jason Redmond | AFP | Getty Images
Amazon is facing a federal probe after one of its delivery drones downed an internet cable in central Texas last week.
The probe comes as Amazon vies to expand drone deliveries to more pockets of the U.S., more than a decade after it first conceived the aerial distribution program, and faces stiffer competition from Walmart, which has also begun drone deliveries.
The incident occurred on Nov. 18 around 12:45 p.m. Central in Waco, Texas. After dropping off a package, one of Amazon’s MK30 drones was ascending out of a customer’s yard when one of its six propellers got tangled in a nearby internet cable, according to a video of the incident viewed and verified by CNBC.
The video shows the Amazon drone shearing the wire line. The drone’s motor then appeared to shut off and the aircraft landed itself, with its propellers windmilling slightly on the way down, the video shows. The drone appeared to remain intact beyond some damage to one of its propellers.
The Federal Aviation Administration is investigating the incident, a spokesperson confirmed. The National Transportation Safety Board said the agency is aware of the incident but has not opened a probe into the matter.
Amazon confirmed the incident to CNBC, saying that after clipping the internet cable, the drone performed a “safe contingent landing,” referring to the process that allows its drones to land safely in unexpected conditions.
“There were no injuries or widespread internet service outages. We’ve paid for the cable line’s repair for the customer and have apologized for the inconvenience this caused them,” an Amazon spokesperson told CNBC, noting that the drone had completed its package delivery.
The incident comes after federal investigators last month opened a separate probe into a crash involving two of Amazon’s Prime Air drones in Arizona. The two aircraft collided with a construction crane in Tolleson, a city west of Phoenix, prompting Amazon to temporarily halt drone deliveries in the area.
For over a decade, Amazon has been working to realize founder Jeff Bezos’ vision of drones whizzing toothpaste, books and other goods to customers’ doorsteps in 30 minutes or less. The company began drone deliveries in 2022 in College Station, Texas, and Lockeford, California.
But progress has been slowed by a mix of regulatory hurdles, missed deadlines and layoffs in 2023 that coincided with broader cost-cutting efforts by Amazon CEO Andy Jassy.
The company has previously said its goal is to deliver 500 million packages by drone per year by the end of the decade.
The hexacopter-shaped MK30, the latest generation of Amazon’s Prime Air drone, is meant to be quieter, smaller and lighter than previous versions.
Amazon says the drones are equipped with a sense-and-avoid system that enables them to “detect and stay away from obstacles in the air and on the ground.” The company recommends that customers maintain “about 10 feet of open space” on their property so drones can complete deliveries.
The company began drone deliveries in Waco earlier this month for customers within a certain radius of its same-day delivery site who order eligible items weighing 5 pounds or less. The drone deliveries are supposed to drop packages off in under an hour.
Amazon has brought other locations online in recent months, including Kansas City, Missouri; Pontiac, Michigan; San Antonio, Texas; and Ruskin, Florida. Amazon has also announced plans to expand drone deliveries to Richardson, Texas.
Walmart began offering drone deliveries in 2021, and currently partners with Alphabet’s Wing and venture-backed startup Zipline to make drone deliveries in a number of states, including in Texas.