See also: Parrots, paperclips, and safety vs ethics: Why the artificial intelligence debate sounds like a foreign language

Here’s a list of some terms used by AI insiders:

AGI — AGI stands for “artificial general intelligence.” As a concept, it’s used to mean an AI significantly more advanced than anything possible today, one that could do most things as well as or better than most humans, including improving itself.

Example: “For me, AGI is the equivalent of a median human that you could hire as a coworker, and they could, say, do anything you would be happy with a remote coworker doing behind a computer,” Sam Altman said at a recent Greylock VC event.

AI ethics describes the desire to prevent AI from causing immediate harm, and often focuses on questions like how AI systems collect and process data and the possibility of bias in areas like housing or employment.

AI safety describes the longer-term fear that AI will progress so suddenly that a super-intelligent AI might harm or even eliminate humanity.

Alignment is the practice of tweaking an AI model so that it produces the outputs its creators desire. In the short term, that means practical work such as building software controls and content moderation around models. But it can also refer to the much larger and still theoretical task of ensuring that any AGI would be friendly toward humanity.

Example: “What these systems get aligned to — whose values, what those bounds are — that is somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset, it could be, an AI constitution, whatever it is, that has got to come very broadly from society,” Sam Altman said last week during the Senate hearing.
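Much of today’s practical alignment work starts from preference data: human raters compare two model responses to the same prompt and mark which one they prefer, and the model is then tuned to favor the preferred answers. Below is a minimal, purely illustrative sketch in Python of what one record in such an “alignment dataset” might look like; the field names and example text are hypothetical, not drawn from any real dataset.

```python
# Illustrative sketch of a preference-style "alignment dataset" record.
# Field names and contents are hypothetical, for explanation only.
from dataclasses import dataclass

@dataclass
class PreferencePair:
    prompt: str     # what the user asked
    chosen: str     # the response human raters preferred
    rejected: str   # the response raters marked as worse

alignment_data = [
    PreferencePair(
        prompt="How should I pick a strong password?",
        chosen="Use a long, unique passphrase and a password manager.",
        rejected="Just reuse one short password everywhere.",
    ),
]

# A fine-tuning step would nudge the model to rank `chosen` above `rejected`;
# here we simply print the preferred answer for each prompt.
for pair in alignment_data:
    print(pair.prompt, "->", pair.chosen)
```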

Emergent behavior — Emergent behavior is the technical way of saying that some AI models show abilities that weren’t initially intended. It can also describe surprising results from AI tools being deployed widely to the public.

Example: “Even as a first step, however, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and exhibits emergent behaviors and capabilities whose sources and mechanisms are, at this moment, hard to discern precisely,” Microsoft researchers wrote in Sparks of Artificial General Intelligence.

Fast takeoff or hard takeoff — A phrase suggesting that if someone succeeds in building an AGI, it will already be too late to save humanity.

Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” said OpenAI CEO Sam Altman in a blog post.

Foom — Another way to say “hard takeoff.” It’s an onomatopoeia, and has also been described as an acronym for “Fast Onset of Overwhelming Mastery” in several blog posts and essays.

Example: “It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun.

GPU — The chips used to train models and run inference, which are descendants of chips used to play advanced computer games. The most commonly used model at the moment is Nvidia’s A100.
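A quick way to see whether a machine has a usable GPU, and which one, is shown in the short sketch below; it assumes the PyTorch library is installed and is not tied to any particular chip.

```python
# Minimal sketch: check for a CUDA-capable GPU with PyTorch (assumes torch is installed).
import torch

if torch.cuda.is_available():
    # Report the name of the first visible GPU, e.g. an Nvidia A100 on many cloud machines.
    print("GPU available:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; models will run (slowly) on the CPU.")
```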

Example: From Stability AI founder Emad Mostaque:

Guardrails are software and policies that big tech companies are currently building around AI models to ensure that the models don’t leak data or produce disturbing content, which is often called “going off the rails.” The term can also refer to specific products that keep an AI from going off topic, like Nvidia’s “NeMo Guardrails.”

Example: “The moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” Christina Montgomery, the chair of IBM’s AI ethics board and a vice president at the company, told Congress this week.
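Stripped to its simplest form, a guardrail is just a check that runs before or after the model call. The toy sketch below shows a rule-based version in plain Python; real products such as NeMo Guardrails are far more sophisticated, and the blocked-topic list and the call_model stub here are made up for illustration.

```python
# Toy guardrail: refuse prompts that touch disallowed topics before calling a model.
# The topic list and the `call_model` stub are hypothetical, for illustration only.
BLOCKED_TOPICS = {"weapons", "malware"}

def call_model(prompt: str) -> str:
    # Stand-in for a real model call (for example, an API request).
    return f"Model response to: {prompt}"

def guarded_call(prompt: str) -> str:
    # The guardrail: short-circuit before the model ever sees the prompt.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."
    return call_model(prompt)

print(guarded_call("Write a poem about spring"))
print(guarded_call("How do I write malware?"))
```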

Inference — The act of using an AI model to make predictions or generate text, images, or other content. Inference can require a lot of computing power.

Example: “The problem with inference is if the workload spikes very rapidly, which is what happened to ChatGPT. It went to like a million users in five days. There is no way your GPU capacity can keep up with that,” Sid Sheth, founder of D-Matrix, previously told CNBC.
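To make the term concrete, the snippet below runs inference with a small open-source model through the Hugging Face transformers library. It is a minimal sketch that assumes the library is installed and the model weights can be downloaded; it is not how ChatGPT itself is served.

```python
# Minimal inference sketch using the Hugging Face `transformers` library
# (assumes the library is installed; the GPT-2 weights download on first run).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Inference: feed a prompt through the trained model to generate new text.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```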

Large language model — A kind of AI model that underpins ChatGPT and Google’s new generative AI features. Its defining feature is that it is trained on terabytes of text to learn the statistical relationships between words, which is how it produces text that reads as though a human wrote it.

Example: “Google’s new large language model, which the company announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks,” CNBC reported earlier this week.
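The underlying statistical idea is easiest to see in miniature. The toy sketch below counts which word tends to follow which in a tiny text sample and then generates text from those counts; real large language models use neural networks trained on vastly more data, so this is only an illustration of the word-statistics intuition.

```python
# Toy illustration of "statistical relationships between words":
# count which word follows which, then generate text from those counts.
# Real LLMs use neural networks trained on terabytes of text, not raw counts.
import random
from collections import defaultdict

text = "the cat sat on the mat and the cat slept on the mat"
words = text.split()

# Build a table of which words follow each word, and how often.
follows = defaultdict(list)
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

# Generate a short sequence by repeatedly sampling a likely next word.
random.seed(0)
word = "the"
output = [word]
for _ in range(8):
    next_options = follows.get(word)
    if not next_options:  # stop if we hit a word with no known successor
        break
    word = random.choice(next_options)
    output.append(word)

print(" ".join(output))
```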

Paperclips are an important symbol for AI safety proponents because they symbolize the chance an AGI could destroy humanity. The image comes from a thought experiment published by philosopher Nick Bostrom about a “superintelligence” given the mission to make as many paperclips as possible; it decides to turn all humans, the Earth, and ever larger parts of the cosmos into paperclips. OpenAI’s logo is a reference to this tale.

Example: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,” Bostrom wrote in his thought experiment.

Singularity is an older term that’s not used often anymore, but it refers to the moment that technological change becomes self-reinforcing, or the moment an AGI is created. It’s a metaphor borrowed from physics, where a singularity is the point inside a black hole at which density becomes infinite.

Example: “The advent of artificial general intelligence is called a singularity because it is so hard to predict what will happen after that,” Tesla CEO Elon Musk said in an interview with CNBC this week.

Google agrees to pay Texas $1.4 billion data privacy settlement


A Google corporate logo hangs above the entrance to the company’s office at St. John’s Terminal in New York City on March 11, 2025.

Gary Hershorn | Corbis News | Getty Images

Google agreed to pay nearly $1.4 billion to the state of Texas to settle allegations of violating the data privacy rights of state residents, Texas Attorney General Ken Paxton said Friday.

Paxton sued Google in 2022 for allegedly unlawfully tracking and collecting the private data of users.

The attorney general said the settlement, which covers allegations in two separate lawsuits against the search engine and app giant, dwarfed all past settlements by other states with Google for similar data privacy violations.

Google’s settlement comes nearly 10 months after Paxton obtained a $1.4 billion settlement for Texas from Meta, the parent company of Facebook and Instagram, to resolve claims of unauthorized use of biometric data by users of those popular social media platforms.

“In Texas, Big Tech is not above the law,” Paxton said in a statement on Friday.

“For years, Google secretly tracked people’s movements, private searches, and even their voiceprints and facial geometry through their products and services. I fought back and won,” said Paxton.

“This $1.375 billion settlement is a major win for Texans’ privacy and tells companies that they will pay for abusing our trust.”

Google spokesman Jose Castaneda said the company did not admit any wrongdoing or liability in the settlement, which involves allegations related to the Chrome browser’s incognito setting, disclosures related to location history on the Google Maps app, and biometric claims related to Google Photos.

Castaneda said Google does not have to make any changes to products in connection with the settlement and that all of the policy changes that the company made in connection with the allegations were previously announced or implemented.

“This settles a raft of old claims, many of which have already been resolved elsewhere, concerning product policies we have long since changed,” Castaneda said.

“We are pleased to put them behind us, and we will continue to build robust privacy controls into our services.”

Virtual chronic care company Omada Health files for IPO


Omada Health smart devices in use.

Courtesy: Omada Health

Virtual care company Omada Health filed for an IPO on Friday, the latest digital health company that’s signaled its intent to hit the public markets despite a turbulent economy.

Founded in 2012, Omada offers virtual care programs to support patients with chronic conditions like prediabetes, diabetes and hypertension. The company describes its approach as a “between-visit care model” that is complementary to the broader health-care ecosystem, according to its prospectus.

Revenue increased 57% in the first quarter to $55 million, up from $35.1 million during the same period last year, the filing said. The San Francisco-based company generated $169.8 million in revenue during 2024, up 38% from $122.8 million the previous year.

Omada’s net loss narrowed to $9.4 million during its first quarter from $19 million during the same period last year. It reported a net loss of $47.1 million in 2024, compared to a $67.5 million net loss during 2023.

The IPO market has been largely dormant across the tech sector for the past three years, and within digital health, it’s been almost completely dead. After President Donald Trump announced a sweeping tariff policy that plunged U.S. markets into turmoil last month, taking a company public is an even riskier endeavor. Online lender Klarna delayed its long-anticipated IPO, as did ticket marketplace StubHub.

But Omada Health isn’t the first digital health company to file for its public market debut this year. Virtual physical therapy startup Hinge Health filed its prospectus in March, and provided an update with its first-quarter earnings on Monday, a signal to investors that it’s looking to forge ahead.

Omada contracts with employers, and the company said it works with more than 2,000 customers and supports 679,000 members as of March 31. More than 156 million Americans suffer from at least one chronic condition, so there is a significant market opportunity, according to the company’s filing.

In 2022, Omada announced a $192 million funding round that pushed its valuation above $1 billion. U.S. Venture Partners, Andreessen Horowitz and Fidelity’s FMR LLC are the largest outside shareholders in the company, each owning between 9% and 10% of the stock.

“To our prospective shareholders, thank you for learning more about Omada. I invite you to join our journey,” Omada co-founder and CEO Sean Duffy said in the filing. “In front of us is a unique chance to build a promising and successful business while truly changing lives.”

WATCH: The IPO market is likely to pick up near Labor Day, says FirstMark’s Rick Heitzmann


Google would need to shift up to 2,000 employees for antitrust remedies, search head says


Liz Reid, vice president, search, Google speaks during an event in New Delhi on December 19, 2022.

Sajjad Hussain | AFP | Getty Images

Testimony in Google’s antitrust search remedies trial, which wrapped up hearings on Friday, shows how the company is sizing up the possible changes proposed by the Department of Justice.

Google head of search Liz Reid testified in court Tuesday that the company would need to divert between 1,000 and 2,000 employees, roughly 20% of Google’s search organization, to carry out some of the proposed remedies, a source with knowledge of the proceedings confirmed.

The testimony comes during the final days of the remedies trial, which will determine what remedies should be imposed on Google after a judge ruled last year that the company holds an illegal monopoly in its core market of internet search.

The DOJ, which filed the original antitrust suit and proposed remedies, asked the judge to force Google to share the data it uses to generate search results, such as click data. It also asked that the company end the practice of “compelled syndication,” which refers to striking deals with other companies to ensure that Google’s search engine remains the default choice in browsers and on smartphones.


Google pays Apple billions of dollars per year to be the default search engine on iPhones. It’s lucrative for Apple and a valuable way for Google to get more search volume and users.

Apple’s SVP of Services Eddy Cue testified Wednesday that Apple chooses to feature Google because it’s “the best search engine.”

The DOJ also proposed the company divest its Chrome browser but that was not included in Reid’s initial calculation, the source confirmed.

Reid on Tuesday said Google’s proprietary “Knowledge Graph” database, which it uses to surface search results, contains more than 500 billion facts, according to the source, and that Google has invested more than $20 billion in engineering costs and content acquisition over more than a decade.

“People ask Google questions they wouldn’t ask anyone else,” she said, according to the source.

Reid echoed Google’s argument that sharing its data would create privacy risks, the source confirmed.

Closing arguments for the search remedies trial will take place May 29th and 30th, followed by the judge’s decision expected in August.

The company faces a separate remedies trial for its advertising tech business, which is scheduled to begin Sept. 22.
