Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it is it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict its use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to understand AI lingo like an insider


It’s not surprising the debate around AI has developed its own lingo. The field started as a technical academic discipline.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
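For readers who want to see what “inference” with an already-trained model looks like in practice, here is a minimal sketch in Python. It assumes the open-source Hugging Face transformers library and the small, freely available GPT-2 model; frontier models like GPT-4 work on the same next-word-prediction principle at a vastly larger scale.

```python
# Minimal sketch of "inference": asking a pretrained language model to
# predict statistically likely next words. Assumes the Hugging Face
# `transformers` library and the small GPT-2 model are available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")  # "training" already happened; this just loads weights
output = generator("Artificial intelligence is", max_new_tokens=20)
print(output[0]["generated_text"])
```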

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible and logically decide to kill humans to make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.

Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. The term refers to whether researchers and practitioners can point to the exact numbers and paths of operations that large AI models use to derive their output; when they can’t, the models may hide inherent biases.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
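To illustrate Masood’s point about classical algorithms, here is a minimal Python sketch using scikit-learn and a hypothetical toy dataset: a small logistic regression exposes one weight per input feature, so you can point to what drove a decision, whereas a large language model has billions of parameters with no such direct map.

```python
# Minimal, hypothetical example of an "explainable" classical model:
# each input feature gets a single weight you can inspect directly.
from sklearn.linear_model import LogisticRegression

X = [[0, 1], [1, 0], [1, 1], [0, 0]]   # toy, made-up feature values
y = [1, 0, 1, 0]                        # toy, made-up yes/no decisions
model = LogisticRegression().fit(X, y)

for name, weight in zip(["feature_a", "feature_b"], model.coef_[0]):
    print(f"{name}: weight {weight:+.2f}")  # larger magnitude = more influence on the decision
```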

Another important term is “guardrails,” which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
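As a purely illustrative sketch — the function and keyword list below are hypothetical, not the API of NeMo Guardrails or any other product — a guardrail can be as simple as a check that runs on a model’s output before it reaches the user:

```python
# Hypothetical, simplified guardrail: screen model output before showing it.
# Real guardrail products are far more sophisticated than a keyword list.
BLOCKED_TOPICS = ["violence", "self-harm"]  # made-up policy list for illustration

def apply_guardrail(model_output: str) -> str:
    lowered = model_output.lower()
    if any(topic in lowered for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that topic."  # refuse rather than go "off the rails"
    return model_output
```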

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.

But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are used by millions of people, such as widespread spam or disinformation.



Google hires Windsurf CEO Varun Mohan, others in latest AI talent deal


Chief executive officer of Google Sundar Pichai.

Marek Antoni Iwanczuk | Sopa Images | Lightrocket | Getty Images

Google on Friday made the latest splash in the AI talent wars, announcing an agreement to bring in Varun Mohan, co-founder and CEO of artificial intelligence coding startup Windsurf.

As part of the deal, Google will also hire other senior Windsurf research and development employees. Google is not investing in Windsurf, but the search giant will take a nonexclusive license to certain Windsurf technology, according to a person familiar with the matter. Windsurf remains free to license its technology to others.

“We’re excited to welcome some top AI coding talent from Windsurf’s team to Google DeepMind to advance our work in agentic coding,” a Google spokesperson wrote in an email. “We’re excited to continue bringing the benefits of Gemini to software developers everywhere.”

The deal between Google and Windsurf comes after the AI coding startup had been in talks with OpenAI for a $3 billion acquisition deal, CNBC reported in April. OpenAI did not immediately respond to a request for comment.

The move ratchets up the talent war in AI, particularly among prominent companies. Meta has made lucrative job offers to several employees at OpenAI in recent weeks. Most notably, the Facebook parent added Scale AI founder Alexandr Wang to lead its AI strategy as part of a $14.3 billion investment into his startup.

Douglas Chen, another Windsurf co-founder, will be among those joining Google in the deal, Jeff Wang, the startup’s new interim CEO and its head of business for the past two years, wrote in a post on X.

“Most of Windsurf’s world-class team will continue to build the Windsurf product with the goal of maximizing its impact in the enterprise,” Wang wrote.

Windsurf has become more popular this year as an option for so-called vibe coding, which is the process of using new age AI tools to write code. Developers and non-developers have embraced the concept, leading to more revenue for Windsurf and competitors, such as Cursor, which OpenAI also looked at buying. All the interest has led investors to assign higher valuations to the startups.

This isn’t the first time Google has hired select people out of a startup. It did the same with Character.AI last summer. Amazon and Microsoft have also absorbed AI talent in this fashion, with the Adept and Inflection deals, respectively.

Microsoft is pushing an agent mode in its Visual Studio Code editor for vibe coding. In April, Microsoft CEO Satya Nadella said AI is composing as much as 30% of his company’s code.

The Verge reported the Google-Windsurf deal earlier on Friday.


Nvidia’s Jensen Huang sells more than $36 million in stock, catches Warren Buffett in net worth


Jensen Huang, CEO of Nvidia, holds a motherboard as he speaks during the Viva Technology conference dedicated to innovation and startups at Porte de Versailles exhibition center in Paris, France, on June 11, 2025.

Gonzalo Fuentes | Reuters

Nvidia CEO Jensen Huang unloaded roughly $36.4 million worth of stock in the leading artificial intelligence chipmaker, according to a U.S. Securities and Exchange Commission filing.

The sale, which totals 225,000 shares, comes as part of a plan Huang adopted in March to unload up to 6 million shares of Nvidia through the end of the year. He sold his first batch of stock under the plan in June, worth about $15 million.

Last year, the tech executive sold about $700 million worth of shares as part of a prearranged plan. Nvidia stock climbed about 1% Friday.

Huang’s net worth has skyrocketed as investors bet on Nvidia’s AI dominance and graphics processing units powering large language models.

The 62-year-old’s wealth has grown by more than a quarter, or about $29 billion, since the start of 2025 alone, based on Bloomberg’s Billionaires Index. His net worth last stood at $143 billion in the index, putting him neck-and-neck with Berkshire Hathaway‘s Warren Buffett at $144 billion.

Shortly after the market opened Friday, Fortune‘s analysis of net worth had Huang ahead of Buffett, with the Nvidia CEO at $143.7 billion and the Oracle of Omaha at $142.1 billion.


The company has also achieved its own notable milestones this year, as it prospers off the AI boom.

On Wednesday, the Santa Clara, California-based chipmaker became the first company to top a $4 trillion market capitalization, beating out both Microsoft and Apple. The chipmaker closed above that milestone Thursday as CNBC reported that the technology titan met with President Donald Trump.

Brooke Seawell, venture partner at New Enterprise Associates, sold about $24 million worth of Nvidia shares, according to an SEC filing. Seawell has been on the company’s board since 1997, according to the company.

Huang still holds more than 858 million shares of Nvidia, both directly and indirectly, in different partnerships and trusts.


Tesla to officially launch in India with planned showroom opening


Elon Musk meets with Indian Prime Minister Narendra Modi at Blair House in Washington DC, USA on February 13, 2025.

Anadolu | Anadolu | Getty Images

Tesla will open a showroom in Mumbai, India, next week, marking the U.S. electric carmaker’s first official foray into the country.

The one-and-a-half-hour launch event for the Tesla “Experience Center” will take place on July 15 at the Maker Maxity Mall in Bandra Kurla Complex in Mumbai, according to an event invitation seen by CNBC.

Along with the showroom display, which will feature the company’s cars, Tesla is also likely to officially launch direct sales to Indian customers.

The automaker has had its eye on India for a while and now appears to have stepped up efforts to launch locally.

In April, Tesla boss Elon Musk spoke with Indian Prime Minister Narendra Modi to discuss collaboration in areas including technology and innovation. That same month, the EV-maker’s finance chief said the company has been “very careful” in trying to figure out when to enter the market.

Tesla has no manufacturing operations in India, even though the country’s government is likely keen for the company to establish a factory. Instead, the cars sold in India will need to be imported from Tesla’s other manufacturing locations in places like Shanghai, China, and Berlin, Germany.

As Tesla begins sales in India, it will come up against challenges from long-time Chinese rival BYD, as well as local player Tata Motors.

One potential challenge for Tesla comes by way of India’s import duties on electric vehicles, which stand at around 70%. India has tried to entice investment in the country by offering companies a reduced duty of 15% if they commit to invest $500 million and set up manufacturing locally.

HD Kumaraswamy, India’s minister for heavy industries, told reporters in June that Tesla is “not interested” in manufacturing in the country, according to a Reuters report.

Tesla is looking to fill roles in Mumbai, according to job listings posted on LinkedIn. These include advisors working in showrooms, security staff, vehicle operators to collect data for its Autopilot feature, and service technicians.

There are also roles being advertised in the Indian capital of New Delhi, including for store managers. It’s unclear if Tesla is planning to launch a showroom in the city.
