
Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it is it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict their use in areas subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to understand AI lingo like an insider


It’s not surprising that the debate around AI has developed its own lingo: the field started as a technical academic discipline.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
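For readers who want to see what “inference” looks like in practice, here is a minimal sketch in Python. The Hugging Face “transformers” library and the small GPT-2 checkpoint are illustrative assumptions for the example, not tools named in this article.

```python
# A minimal sketch of LLM "inference": the model repeatedly predicts the
# statistically most likely next token given everything generated so far.
# The library and model below are illustrative assumptions. "Training" --
# building the model in the first place -- is the far more expensive step
# and is not shown here.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Frontier models are"
inputs = tokenizer(prompt, return_tensors="pt")

# Generate a short continuation, one predicted token at a time.
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```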

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom that posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.

Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results: when researchers and practitioners cannot point to the exact numbers and paths of operations that larger AI models use to derive their output, inherent biases in the LLMs can go unnoticed.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
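To make the contrast Masood describes concrete, here is a toy sketch of what explainability looks like with a classical model: a small decision tree can print the exact rules behind each prediction, which is precisely what a billion-parameter “black box” LLM cannot do. The scikit-learn library and the iris dataset below are illustrative assumptions, not something referenced in the article.

```python
# A toy contrast for the explainability point: a classical model such as a
# small decision tree can show the exact rules behind each decision, while
# large "black box" models offer no comparable trace. The dataset and
# library here are illustrative assumptions only.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)

# The full decision path is human-readable: "why am I making that decision?"
print(export_text(tree, feature_names=list(data.feature_names)))
```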

Another important term is “guardrails,” which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.

But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.




OpenAI CEO Sam Altman denies sexual abuse allegations made by his sister in lawsuit



OpenAI CEO Sam Altman visits “Making Money With Charles Payne” at Fox Business Network Studios in New York on Dec. 4, 2024.

Mike Coppola | Getty Images

OpenAI CEO Sam Altman’s sister, Ann Altman, filed a lawsuit on Monday, alleging that her brother sexually abused her regularly between the years of 1997 and 2006.

The lawsuit, which was filed in U.S. District Court in the Eastern District of Missouri, alleges that the abuse took place at the family’s home in Clayton, Missouri, and began when Ann, who goes by Annie, was three and Sam was 12. The filing claims that the abusive activities took place “several times per week,” beginning with oral sex and later involving penetration.

The lawsuit claims that “as a direct and proximate result of the foregoing acts of sexual assault,” the plaintiff has experienced “severe emotional distress, mental anguish, and depression, which is expected to continue into the future.”

The younger Altman has publicly made similar sexual assault allegations against her brother in the past on platforms like X, but this is the first time she’s taken him to court. She’s being represented by Ryan Mahoney, whose Illinois-based firm specializes in matters including sexual assault and harassment.

The lawsuit requests a jury trial and damages in excess of $75,000.

In a joint statement on X with his mother, Connie, and his brothers Jack and Max, Sam Altman denied the allegations.

“Annie has made deeply hurtful and entirely untrue claims about our family, and especially Sam,” the statement said. “We’ve chosen not to respond publicly, out of respect for her privacy and our own. However, she has now taken legal action against Sam, and we feel we have no choice but to address this.”

Their response says “all of these claims are utterly untrue,” adding that “this situation causes immense pain to our entire family.” They said that Ann Altman faces “mental health challenges” and “refuses conventional treatment and lashes out at family members who are genuinely trying to help.”

Sam Altman has gained international prominence since OpenAI’s debut of the artificial intelligence chatbot ChatGPT in November 2022. Backed by Microsoft, the company was most recently valued at $157 billion, with funding coming from Thrive Capital, chipmaker Nvidia, SoftBank and others.

Altman was briefly ousted from the CEO role by OpenAI’s board in November 2023, but was quickly reinstated due to pressure from investors and employees.

This isn’t the only lawsuit the tech exec faces.

In March, Tesla and SpaceX CEO Elon Musk sued OpenAI and co-founders Altman and Greg Brockman, alleging breach of contract and fiduciary duty. Musk, who now runs a competing AI startup, xAI, was a co-founder of OpenAI when it began as a nonprofit in 2015. Musk left the board in 2018 and has publicly criticized OpenAI for allegedly abandoning its original mission.

Musk is suing to keep OpenAI from turning into a for-profit company. In June, Musk withdrew the original complaint filed in a San Francisco state court and later refiled in federal court. 

Last month, OpenAI clapped back against Musk, claiming in a blog post that in 2017 Musk “not only wanted, but actually created, a for-profit” to serve as the company’s proposed new structure.

WATCH: OpenAI unveils for-profit plans




Meta employees criticize Zuckerberg decisions to end fact-checking, add Dana White to board



This photo illustration created on January 7, 2025, in Washington, DC, shows an image of Mark Zuckerberg, CEO of Meta, and an image of the Meta logo. 

Drew Angerer | AFP | Getty Images

Meta employees took to their internal forum on Tuesday, criticizing the company’s decision to end third-party fact-checking on its services two weeks before President-elect Donald Trump’s inauguration.

Company employees voiced their concern after Joel Kaplan, Meta’s new chief global affairs officer and former White House deputy chief of staff under former President George W. Bush, announced the content policy changes on Workplace, the in-house communications tool. 

“We’re optimistic that these changes help us return to that fundamental commitment to free expression,” Kaplan wrote in the post, which was reviewed by CNBC. 

The content policy announcement follows a string of decisions that appear targeted to appease the incoming administration. On Monday, Meta added new members to its board, including UFC CEO Dana White, a longtime friend of Trump, and the company confirmed last month that it was contributing $1 million to Trump’s inauguration.

Among the latest changes, Kaplan announced that Meta will scrap its fact-checking program and shift to a user-generated system like X’s Community Notes. Kaplan, who took over his new role last week, also said that Meta will lift restrictions on certain topics and focus its enforcement on illegal and high-severity violations while giving users “a more personalized approach to political content.”

One worker wrote they were “extremely concerned” about the decision, saying it appears Meta is “sending a bigger, stronger message to people that facts no longer matter, and conflating that with a victory for free speech.”

Another employee commented that “simply absolving ourselves from the duty to at least try to create a safe and respective platform is a really sad direction to take.” Other comments expressed concern about the impact the policy change could have on the discourse around topics like immigration and gender identity, which, according to one employee, could result in an “influx of racist and transphobic content.”

A separate employee said they were scared that “we’re entering into really dangerous territory by paving the way for the further spread of misinformation.”

The changes weren’t universally criticized, as some Meta workers applauded the company’s decision to end third-party fact-checking. One wrote that X’s Community Notes feature has “proven to be a much better representation of the ground truth.”

Another employee commented that the company should “provide an accounting of the worst outcomes of the early years” that necessitated the creation of a third-party fact-checking program, and should say whether the new policies would prevent the same type of fallout from happening again.

As part of the company’s massive layoffs in 2023, Meta also scrapped an internal fact-checking project, CNBC reported. That project would have let third-party fact checkers like the Associated Press and Reuters, in addition to credible experts, comment on flagged articles in order to verify the content.

Although Meta announced the end of its fact-checking program on Tuesday, the company had already been pulling it back. In September, a spokesperson for the AP told CNBC that the news agency’s “fact-checking agreement with Meta ended back in January” 2024. 

Dana White, CEO of the Ultimate Fighting Championship, gestures as he speaks during a rally for Republican presidential nominee and former U.S. President Donald Trump at Madison Square Garden in New York, Oct. 27, 2024.

Andrew Kelly | Reuters

After the announcement of White’s addition to the board on Monday, employees also posted criticism, questions and jokes on Workplace, according to posts reviewed by CNBC.

White, who has led UFC since 2001, became embroiled in controversy in 2023 after a video published by TMZ showed him slapping his wife at a New Year’s Eve party in Mexico. White issued a public apology, and his wife, Anne White, issued a statement to TMZ, calling it an isolated incident.

Commenters on Workplace made jokes asking whether performance reviews would now involve mixed martial arts style fights.

In addition to White, John Elkann, the CEO of Italian auto holding company Exor, was named to Meta’s board.

Some employees asked what value autos and entertainment executives could bring to Meta, and whether White’s addition reflects the company’s values. One post suggested the new board appointments would help with political alliances that could be valuable but could also change the company culture in unintended or unwanted ways.

Comments in Workplace alluding to White’s personal history were flagged and removed from the discussion, according to posts from the internal app read by CNBC.

An employee who said he was with Meta’s Internal Community Relations team posted a reminder to Workplace about the company’s “community engagement expectations” policy, or CEE, for using the platform.

“Multiple comments have been flagged by the community for review,” the employee posted. “It’s important that we maintain a respectful work environment where people can do their best work.” 

The internal community relations team member added that “insulting, criticizing, or antagonizing our colleagues or Board members is not aligned with the CEE.”

Several workers responded to that note saying that even respectful posts, if critical, had been removed, amounting to a corporate form of censorship.

One worker said that because critical comments were being removed, the person wanted to voice support for “women and all voices.”

Meta declined to comment.

— CNBC’s Salvador Rodriguez contributed to this report.

WATCH: Meta adds Dana White, John Elkann, and Charlie Songhurst to board of directors.




Bitcoin drops below $98,000 as Treasury yields pressure risk assets



Nicolas Economou | Nurphoto | Getty Images

Bitcoin slumped on Tuesday as a spike in Treasury yields weighed on risk assets broadly.

The price of the flagship cryptocurrency was last lower by 4.8% at $97,183.80, according to Coin Metrics. The broader market of cryptocurrencies, as measured by the CoinDesk 20 index, dropped more than 5%.

Crypto stocks Coinbase and MicroStrategy fell more than 7% and 9%, respectively. Bitcoin miners Mara Holdings and Core Scientific were down about 5% each.

Chart: Bitcoin drops below $98,000

The moves followed a sudden increase in the 10-year U.S. Treasury yield after data released by the Institute for Supply Management reflected faster-than-expected growth in the U.S. services sector in December, adding to concerns about stickier inflation. Rising yields tend to pressure growth-oriented risk assets.

Bitcoin traded above $102,000 on Monday and is widely expected to roughly double from that level this year. Investors are hopeful that clearer regulation will support digital asset prices and, in turn, benefit stocks like Coinbase and Robinhood.

However, uncertainty about the path of Federal Reserve interest rate cuts could put bumps in the road for crypto prices. In December, the central bank signaled that although it was cutting rates a third time, it may make fewer cuts in 2025 than investors had anticipated. Historically, rate cuts have had a positive effect on bitcoin’s price, while hikes have had a negative impact.

Bitcoin is up more than 3% since the start of the year. It posted a 120% gain for 2024.


