Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it is it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict their use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to understand AI lingo like an insider


It’s not surprising that the debate around AI has developed its own lingo. The field started as a technical academic discipline.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
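To make the two phases concrete, here is a deliberately tiny sketch in Python. It is a toy bigram word counter, not a real LLM (no GPUs, no neural network), but the split between training (analyzing data to store statistics) and inference (predicting the statistically likely next item) is the same in spirit:

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# "Training": analyze the data and store statistics, here just bigram counts.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(prev: str) -> str:
    # "Inference": emit the statistically most likely next word.
    return counts[prev].most_common(1)[0][0]

print(predict("the"))  # -> "cat", the most frequent word after "the"
```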

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” the idea that if someone succeeds at building an AGI, it will already be too late to save humanity.

Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of the current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results: whether researchers and practitioners can point to the exact numbers and path of operations that a model uses to derive its output. When they cannot, as is the case with today’s largest models, inherent biases in the LLMs can stay hidden.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
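For a sense of what that contrast looks like in practice, here is a minimal sketch (assuming scikit-learn is installed; the loan data is made up for illustration). A small classical model can print its entire decision logic, which a model with billions of parameters cannot:

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical toy data: [income, debt] -> loan approved (1) or denied (0).
X = [[50, 10], [20, 30], [80, 5], [30, 40]]
y = [1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The classical model can answer "Why am I making that decision?" directly:
# its full decision path fits in a few readable lines. A large LLM cannot.
print(export_text(tree, feature_names=["income", "debt"]))
```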

Another important term is “guardrails,” which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
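As an illustration only (this is not Nvidia’s actual API), a guardrail in its simplest form is a policy layer wrapped around the model’s output. The topic list and refusal message below are hypothetical placeholders; production systems like NeMo Guardrails are far more sophisticated:

```python
BLOCKED_TOPICS = ("weapons", "self-harm")  # hypothetical placeholder list

def guarded_reply(model_reply: str) -> str:
    # Screen the model's output against policy before it reaches the user.
    if any(topic in model_reply.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."  # refuse rather than go off the rails
    return model_reply
```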

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.

But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.

How quantum could supercharge Google’s AI ambitions

Inside a secretive set of buildings in Santa Barbara, California, scientists at Alphabet are working on one of the company’s most ambitious bets yet. They’re attempting to develop the world’s most advanced quantum computers.

“In the future, quantum and AI, they could really complement each other back and forth,” said Julian Kelly, director of hardware at Google Quantum AI.

Google has been viewed by many as late to the generative AI boom, because OpenAI broke into the mainstream first with ChatGPT in late 2022.

Late last year, Google made clear that it wouldn’t be caught on the back foot again. The company unveiled a breakthrough quantum computing chip called Willow, which it says can solve a benchmark problem unimaginably faster than what’s possible with a classical computer, and demonstrated that adding more quantum bits to the chip reduced errors exponentially.
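The shape of that result can be sketched with rough numbers. In a surface-code-style error-correcting scheme, growing the code distance d costs roughly d × d physical qubits per logical qubit but suppresses the logical error rate by a roughly constant factor at each step. The starting error rate and suppression factor below are illustrative assumptions, not Google’s published figures:

```python
# Illustrative only: an assumed logical error rate at code distance 3, and an
# assumed constant suppression factor each time the distance grows by 2.
base_error = 3e-3
suppression = 2.0

for d in (3, 5, 7, 9):
    physical_qubits = d * d  # rough surface-code qubit cost per logical qubit
    error = base_error / suppression ** ((d - 3) / 2)
    print(f"distance {d}: ~{physical_qubits} physical qubits, "
          f"logical error ~{error:.1e} per cycle")
```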

“That’s a milestone for the field,” said John Preskill, director of the Caltech Institute for Quantum Information and Matter. “We’ve been wanting to see that for quite a while.”

Willow may now give Google a chance to take the lead in the next technological era. It also could be a way to turn research into a commercial opportunity, especially as AI hits a data wall. Leading AI models are running out of high-quality data to train on after already scraping much of the data on the internet.

“One of the potential applications that you can think of for a quantum computer is generating new and novel data,” said Kelly. 

He uses the example of AlphaFold, an AI model developed by Google DeepMind that helps scientists study protein structures. Its creators won the 2024 Nobel Prize in Chemistry. 

“[AlphaFold] trains on data that’s informed by quantum mechanics, but that’s actually not that common,” said Kelly. “So a thing that a quantum computer could do is generate data that AI could then be trained on in order to give it a little more information about how quantum mechanics works.” 

Kelly has said he believes Google is only about five years away from a breakout practical application: a problem that can be solved only on a quantum computer. But for Google to win the next big platform shift, it would have to turn a breakthrough into a business.

Nintendo Switch 2 retail preorder to begin April 24 following tariff delays

An attendee wearing a Super Mario costume uses a Nintendo Switch 2 game console while playing a video game during the Nintendo Switch 2 Experience at the ExCeL London international exhibition and convention centre in London, Britain, April 11, 2025. 

Isabel Infantes | Reuters

Nintendo on Friday announced that retail preorders for its Nintendo Switch 2 gaming system will begin on April 24, with pricing starting at $449.99.

Preorders for the hotly anticipated console were initially slated for April 9, but Nintendo delayed the date to assess the impact of the far-reaching, aggressive “reciprocal” tariffs that President Donald Trump announced earlier this month.

Most electronics companies, including Nintendo, manufacture their products in Asia. Nintendo’s Switch 1 consoles were made in China and Vietnam, Reuters reported in 2019. Trump has imposed a 145% tariff rate on China and a 10% rate on Vietnam. The latter is down from 46%, after he instituted a 90-day pause to allow for negotiations.

Nintendo said Friday that the Switch 2 will cost $449.99 in the U.S., which is the same price the company first announced on April 2.

“We apologize for the retail pre-order delay, and hope this reduces some of the uncertainty our consumers may be experiencing,” Nintendo said in a statement. “We thank our customers for their patience, and we share their excitement to experience Nintendo Switch 2 starting June 5, 2025.”

The Nintendo Switch 2 and “Mario Kart World” bundle will cost $499.99, the digital version of “Mario Kart World” will cost $79.99 and the digital version of “Donkey Kong Bananza” will cost $69.99, Nintendo said. All of those prices remain unchanged from the company’s initial announcement.

However, accessories for the Nintendo Switch 2 will “experience price adjustments,” the company said, and other future changes in costs are possible for “any Nintendo product.”

It will cost gamers $10 more to buy the dock set, $1 more to buy the controller strap and $5 more to buy most other accessories, for instance.

Etsy touts ‘shopping domestically’ as Trump tariffs threaten price increases for imports

An employee walks past a quilt displaying Etsy Inc. signage at the company’s headquarters in Brooklyn.

Victor J. Blue/Bloomberg via Getty Images

Etsy is trying to make it easier for shoppers to purchase products from local merchants and avoid the extra cost of imports as President Donald Trump’s sweeping tariffs raise concerns about soaring prices.

In a post to Etsy’s website on Thursday, CEO Josh Silverman said the company is “surfacing new ways for buyers to discover businesses in their countries” via shopping pages and by featuring local sellers on its website and app.

“While we continue to nurture and enable cross-border trade on Etsy, we understand that people are increasingly interested in shopping domestically,” Silverman said.

Etsy operates an online marketplace that connects buyers and sellers with mostly artisanal and handcrafted goods. The site, which had 5.6 million active sellers as of the end of December, competes with e-commerce juggernaut Amazon, as well as newer entrants that have ties to China like Temu, Shein and TikTok Shop.

By highlighting local sellers, Etsy could relieve some shoppers from having to pay higher prices induced by President Trump’s widespread tariffs on trade partners. Trump has imposed tariffs on most foreign countries, with China facing a rate of 145%, and other nations facing 10% rates after he instituted a 90-day pause to allow for negotiations. Trump also signed an executive order that will end the de minimis provision, a loophole for low-value shipments often used by online businesses, on May 2.

Temu and Shein have already announced they plan to raise prices late next week in response to the tariffs. Sellers on Amazon’s third-party marketplace, many of whom source their products from China, have said they’re considering raising prices.

Silverman said Etsy has provided guidance for its sellers to help them “run their businesses with as little disruption as possible” in the wake of tariffs and changes to the de minimis exemption.

Before Trump’s “Liberation Day” tariffs took effect, Silverman said on the company’s fourth-quarter earnings call in late February that he expects Etsy to benefit from the tariffs and de minimis restrictions because it “has much less dependence on products coming in from China.”

“We’re doing whatever work we can do to anticipate and prepare for come what may,” Silverman said at the time. “In general, though, I think Etsy will be more resilient than many of our competitors in these situations.”

Still, American shoppers may face higher prices on Etsy as U.S. businesses that source their products or components from China pass some of those costs on to consumers.

Etsy shares are down 17% this year, slightly more than the Nasdaq.
