See also: Parrots, paperclips, and safety vs ethics: Why the artificial intelligence debate sounds like a foreign language

Here’s a list of some terms used by AI insiders:

AGI — AGI stands for “artificial general intelligence.” As a concept, it’s used to mean an AI significantly more advanced than what is currently possible, one that can do most things as well as or better than most humans, including improving itself.

Example: “For me, AGI is the equivalent of a median human that you could hire as a coworker, and they could say do anything you would be happy with a remote coworker doing behind a computer,” Sam Altman said at a recent Greylock VC event.

AI ethics describes the desire to prevent AI from causing immediate harm, and often focuses on questions like how AI systems collect and process data and the possibility of bias in areas like housing or employment.

AI safety describes the longer-term fear that AI will progress so suddenly that a super-intelligent AI might harm or even eliminate humanity.

Alignment is the practice of tweaking an AI model so that it produces the outputs its creators desire. In the short term, alignment refers to practical work such as building software and moderating content. But it can also refer to the much larger and still theoretical task of ensuring that any AGI would be friendly towards humanity.

Example: “What these systems get aligned to — whose values, what those bounds are — that is somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset, it could be, an AI constitution, whatever it is, that has got to come very broadly from society,” Sam Altman said last week during the Senate hearing.
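The short-term version can be pictured as a feedback loop: generate outputs, collect human judgments, and nudge the model toward the approved ones. Below is a deliberately toy sketch of that loop with made-up responses and weights; real alignment work tunes neural networks with techniques like reinforcement learning from human feedback, not a lookup table.

```python
import random

# Toy "model": a weighted table of canned responses (all invented for this sketch).
responses = {"helpful answer": 1.0, "rude answer": 1.0, "off-topic answer": 1.0}

def sample(weights):
    # Pick a response with probability proportional to its weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for text, w in weights.items():
        r -= w
        if r <= 0:
            return text
    return text

# Simulated human feedback: raters approve only the helpful answer.
for _ in range(100):
    choice = sample(responses)
    approved = choice == "helpful answer"
    responses[choice] *= 1.2 if approved else 0.8  # nudge toward approved outputs

print(max(responses, key=responses.get))  # after feedback: "helpful answer"
```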

Emergent behavior — Emergent behavior is the technical way of saying that some AI models show abilities that weren’t initially intended. It can also describe surprising results from AI tools being deployed widely to the public.

Example: “Even as a first step, however, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and exhibits emergent behaviors and capabilities whose sources and mechanisms are, at this moment, hard to discern precisely,” Microsoft researchers wrote in Sparks of Artificial General Intelligence.

Fast takeoff or hard takeoff — A phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.

Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” said OpenAI CEO Sam Altman in a blog post.

Foom — Another way to say “hard takeoff.” It’s an onomatopoeia, and has also been described as an acronym for “Fast Onset of Overwhelming Mastery” in several blog posts and essays.

Example: “It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun.

GPU — The chips used to train AI models and run inference, descendants of the chips designed for advanced computer games. The most commonly used chip at the moment is Nvidia’s A100.

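As a minimal sketch of how software targets these chips, assuming the PyTorch library is installed, the code simply moves the model and data to whatever accelerator is available; on an Nvidia A100 that device is "cuda":

```python
import torch

# Use the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

model = torch.nn.Linear(1024, 1024).to(device)  # a stand-in "model"
batch = torch.randn(32, 1024, device=device)    # a stand-in input batch
output = model(batch)                           # the matrix math GPUs accelerate
print(device, output.shape)
```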

Guardrails are software and policies that big tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.” It can also refer to specific applications that protect the AI from going off topic, like Nvidia’s “NeMo Guardrails” product.

Example: “The moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” Christina Montgomery, the chair of IBM’s AI ethics board and VP at the company, said in Congress this week.
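A guardrail can be as simple as checking text on its way into and out of a model. The sketch below uses a made-up generate() stand-in and an illustrative blocklist; real products such as Nvidia’s NeMo Guardrails use classifiers and dialogue rules rather than keyword matching.

```python
BLOCKED_TOPICS = ("weapons", "self-harm")        # illustrative policy list
REFUSAL = "Sorry, I can't help with that topic."

def generate(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"Model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Check the prompt before it reaches the model ("input rail") ...
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL
    response = generate(prompt)
    # ... and check the response before it reaches the user ("output rail").
    if any(topic in response.lower() for topic in BLOCKED_TOPICS):
        return REFUSAL
    return response

print(guarded_generate("How do I bake bread?"))
```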

Inference — The act of using an AI model to make predictions or generate text, images, or other content. Inference can require a lot of computing power.

Example: “The problem with inference is if the workload spikes very rapidly, which is what happened to ChatGPT. It went to like a million users in five days. There is no way your GPU capacity can keep up with that,” Sid Sheth, founder of D-Matrix, previously told CNBC.
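In code, a single inference step can be this small. The sketch assumes the Hugging Face transformers library and the small public GPT-2 checkpoint; production services like ChatGPT run far larger models across many GPUs, which is why spiking workloads are hard to keep up with.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("The chips used to train AI models", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)  # the inference step
print(tokenizer.decode(outputs[0]))
```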

Large language model — A kind of AI model that underpins ChatGPT and Google’s new generative AI features. Its defining feature is that it uses terabytes of data to find the statistical relationships between words, which is how it produces text that seems like a human wrote it.

Example: “Google’s new large language model, which the company announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks,” CNBC reported earlier this week.
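The “statistical relationships between words” can be pictured with a toy model that just counts which word tends to follow which. Real large language models learn vastly richer patterns with neural networks trained on terabytes of text, but the counting intuition carries over.

```python
from collections import Counter, defaultdict

corpus = "the model writes text and the model learns text".split()

# Count, for each word, how often each other word follows it.
following = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    following[word][next_word] += 1

# The most likely word after "the" in this toy corpus:
print(following["the"].most_common(1))  # [('model', 2)]
```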

Paperclips are an important symbol for AI safety proponents because they symbolize the chance that an AGI could destroy humanity. The term refers to a thought experiment published by philosopher Nick Bostrom about a “superintelligence” given the mission to make as many paperclips as possible. It decides to turn all humans, Earth, and increasing parts of the cosmos into paperclips. OpenAI’s logo is a reference to this tale.

Example: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,” Bostrom wrote in his thought experiment.
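The thought experiment can be rendered as a toy loop over an invented world state: because the agent’s only value is the paperclip count, nothing in its objective stops it from consuming everything else.

```python
# Purely illustrative; no real system works this way.
resources = {"factories": 10, "forests": 5, "cities": 3}  # made-up world state
paperclips = 0

while any(resources.values()):
    # The objective has no term for human welfare, so the agent
    # converts whatever resource is largest into paperclips.
    target = max(resources, key=resources.get)
    paperclips += resources[target] * 1_000_000
    resources[target] = 0

print(f"{paperclips:,} paperclips; resources left: {resources}")
```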

Singularity is an older term that’s not used often anymore, but it refers to the moment that technological change becomes self-reinforcing, or the moment of creation of an AGI. It’s a metaphor — literally, singularity refers to the point of a black hole with infinite density.

Example: “The advent of artificial general intelligence is called a singularity because it is so hard to predict what will happen after that,” Tesla CEO Elon Musk said in an interview with CNBC this week.

Silicon Valley’s new defense tech ‘neoprimes’ are pulling billions in funding to challenge legacy giants

A wave of defense tech startups in Silicon Valley is drawing billions in funding and reshaping America’s national security.

Anduril Industries, recently valued at $30.5 billion following its latest funding round, is among the so-called “neoprimes” — companies challenging the dominance of legacy contractors, dubbed “primes,” such as Lockheed Martin, Northrop Grumman, Boeing, General Dynamics, and RTX (formerly Raytheon).

“There’s more money than ever going to what we call the ‘neoprimes,’” Jameson Darby, co-founder and director of autonomy at investment syndicate MilVet Angels, or MVA, told CNBC. “It’s still a fraction of the overall budget, but the trend is all positive.”

Other examples of defense tech startups challenging the incumbents include SpaceX and Palantir Technologies, said Darby, who is also a founding member of the U.S. Department of Defense’s Defense Innovation Unit.

Unlike the primes, these startups are faster, leaner and software-first — with many of them building things that can help close “critical technology gaps that are really important to national security,” said Ernestine Fu Mak, co-founder of MVA and founder of Brave Capital, a venture capital firm.

Venture funding for U.S.-based defense tech startups totaled about $38 billion through the first half of 2025, and could exceed its 2021 peak if the pace remains constant for the rest of the year, according to JPMorgan.
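The annualization implied there is simple doubling of the half-year figure; the article does not state the 2021 peak, so this sketch shows only the run rate.

```python
h1_2025_funding_billion = 38                      # JPMorgan figure for H1 2025
full_year_run_rate = h1_2025_funding_billion * 2  # same pace for the rest of 2025
print(f"~${full_year_run_rate}B annualized")      # ~$76B
```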

‘The battlefield is changing’

As the global war landscape has changed over the past few decades, the U.S. Department of Defense has identified several technologies that are critical to national security, including hypersonics, energy resilience, space technology, integrated sensing and cyber.

“In a post-9/11 world, the entire Department of Defense effectively focused on … the global war on terrorism. It was our military versus insurgents, guerrillas, asymmetric warfare, relatively low-tech fighters in most cases,” said Darby.

But war today is more focused on “great power competition,” said Mak.


“The focus is more on deterring and competing with [adversaries] in these very high-tech, multi-domain conflicts,” Mak added. “The battlefield is changing and new technologies are needed… warfare no longer being limited to land, sea, air. There’s also cyber and space domains that have become contested.”

Today, some of these Silicon Valley “neoprimes” are developing not just weapons, but also dual-use technologies that can be applied both commercially and by militaries.

“So things like artificial intelligence and autonomy have broad, sweeping commercial applications, but they’re also clearly a force multiplier in a military context,” said Darby. “[The] Department of War is rapidly assessing and adopting these dual-use technologies … they’re sending signals to the investment world, to the defense industrial base, that the U.S. government needs these things.”

That direction from the government has, in turn, provided a clear and strategic roadmap for both investors and entrepreneurs, said Mak.

The ‘new guard’

On Sept. 17, MVA came out of stealth mode after quietly backing some leading defense tech startups since 2021.

Today, Mak says the syndicate’s roughly 250 members include tech founders, Wall Street financiers, company executives, intelligence officials, former military leaders and Navy SEALs. Together, they’ve invested in companies like Anduril Industries, Shield AI, Hermeus, Ursa Major and Aetherflux.

“Overall, we believe that ‘neoprimes’ cannot exist in the abstract. They require people — individuals who bring technical expertise, who carry a deep sense of mission, and who contribute complementary voices and talents. Together, this coalition forms what we are convening and calling the ‘new guard,'” said Mak.

She added that modern national security requires both the “warrior’s insight on the battlefield” and the “builder’s drive for innovation.”

“Working together with engaged, informed patriots whose participation strengthens our defense ecosystem and reinforces the very fabric of national security,” Mak said.

Mak and Darby both agree that as new technologies develop and make their way onto battlefields globally, they are changing the way militaries fight, which can also pose new threats.

“You’re seeing these technologists, these builders … building defense tech, and the reason why they’re doing so, is not to initiate conflict, but rather to create a credible deterrent that discourages aggression,” said Mak.

“No one in defense tech is looking to wage war, rather, it’s looking to deter it and wanting adversaries to think twice before threatening peace and stability,” Mak added.

Amazon faces FAA, NTSB probe after two delivery drones crashed into crane in Arizona

Two Amazon Prime Air MK30 drones collided with a crane on Oct. 2, 2025 in Tolleson, Arizona.

Amazon is facing federal probes after two of its Prime Air delivery drones collided with a crane in Arizona, prompting the company to temporarily pause drone service in the area.

The incident occurred on Wednesday around 1 p.m. EST in Tolleson, Arizona, a city west of Phoenix. Two MK30 drones crashed into the boom of a stationary construction crane that was in a commercial area just a few miles away from an Amazon warehouse.

One person was evaluated on the scene for possible smoke inhalation, said Sergeant Erik Mendez of the Tolleson Police Department.

“We’re aware of an incident involving two Prime Air drones in Tolleson, Arizona,” Amazon spokesperson Terrence Clark said in a statement. “We’re currently working with the relevant authorities to investigate.”

Both drones sustained “substantial” damage from the collision on Wednesday, which occurred when the aircraft were mid-route, according to preliminary FAA crash reports.

The Federal Aviation Administration and National Transportation Safety Board are investigating the incident. The NTSB didn’t immediately respond to a request for comment.


The drones were believed to be flying northeast back-to-back when they collided with the crane that was being used for roof work on a distribution facility, Tolleson police said in a release. The drones landed in the backyard of a nearby building, according to the release.

The probes come just a few months after Amazon temporarily paused drone deliveries in Tolleson and in College Station, Texas, in January, following two crashes at its Pendleton, Oregon, test site. Those crashes also prompted investigations by the FAA and NTSB. The company resumed deliveries in March after it said it had resolved issues with the drone’s software, CNBC previously reported.

Amazon says its delivery drones are equipped with a sense-and-avoid system that enables them to “detect and stay away from obstacles in the air and on the ground.” The system also allows the aircraft to operate without visual observers over greater distances, the company said.

For over a decade, Amazon has been working to bring to life founder Jeff Bezos’ vision of drones whizzing toothpaste, books and batteries to customers’ doorsteps in 30 minutes or less. But progress has been slow, as Prime Air has only been made available in a handful of U.S. cities.

Amazon has set a goal to deliver 500 million packages by drone per year by the end of the decade.


Intel stock is up 50% over the last month, putting U.S. stake at $16 billion

Shares of U.S. chipmaker Intel climbed 3% Thursday, putting the monthly gain over 50%.

The surge pushed the stock past $37, hiking the value of the U.S. government’s 10% stake in Intel to roughly $16 billion.

The Trump administration negotiated an $8.9 billion investment in Intel common stock in August, purchasing 433.3 million shares at $20.47 per share.
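Those figures check out arithmetically against the share count and prices in the article:

```python
shares = 433.3e6        # shares purchased by the U.S. government
purchase_price = 20.47  # dollars per share in August
current_price = 37      # the stock pushed "past $37"

print(f"Cost:  ${shares * purchase_price / 1e9:.1f}B")  # ~$8.9B investment
print(f"Value: ${shares * current_price / 1e9:.1f}B")   # ~$16.0B stake
```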

Press secretary Karoline Leavitt celebrated the surge on X, sharing a post from the Association of Mature American Citizens, a conservative organization.

Intel shares jumped 7% on Wednesday after news that the company is in early talks with AMD to add the hardware maker as a customer.
