Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it is it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict AI’s use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to understand AI lingo like an insider


It’s not surprising the debate around AI has developed its own lingo. It started as a technical academic field.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
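To make the distinction concrete, here is a minimal, illustrative sketch in Python. It is a toy word-pair counter, nothing like a real large language model: “training” fits the model by counting which words follow which in a tiny corpus, and “inference” then samples statistically likely next words from those counts.

```python
# A minimal sketch (not how GPT-4 works) of the two phases described above:
# "training" fits a model to data; "inference" uses it to predict likely text.
from collections import Counter, defaultdict
import random

corpus = "the cat sat on the mat and the cat slept".split()

# Training: count which word tends to follow which, a toy stand-in for the
# statistical patterns a large language model learns from its training data.
model = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    model[prev][nxt] += 1

# Inference: starting from a prompt word, repeatedly sample a likely next word.
word = "the"
output = [word]
for _ in range(5):
    candidates = model.get(word)
    if not candidates:
        break
    word = random.choices(list(candidates), weights=list(candidates.values()))[0]
    output.append(word)

print(" ".join(output))
```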

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom that posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” a phrase suggesting that if someone succeeds at building an AGI, it will already be too late to save humanity.

Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of the current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. That means researchers and practitioners cannot point to the exact numbers and path of operations that larger AI models use to derive their output, which could hide some inherent biases in the LLMs.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
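To illustrate the contrast Masood describes, here is a minimal sketch; it assumes the scikit-learn library, which the article does not mention. A small “classical” model such as a decision tree can print the exact rules behind every prediction, while a large black-box model offers no comparably readable trace.

```python
# Minimal illustration of "explainability" with a classical model
# (scikit-learn assumed; not mentioned in the article). A small decision
# tree can show the exact thresholds behind each decision; a
# billion-parameter LLM offers no comparably readable rule set.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every prediction can be traced to explicit, human-readable rules:
print(export_text(tree, feature_names=["sepal_len", "sepal_wid",
                                       "petal_len", "petal_wid"]))
```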

Another important term is “guardrails,” which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “sparks of artificial general intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.

But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are being used by millions of people, such as widespread spam or disinformation.



Creators say they didn’t know Google uses YouTube to train AI


Silhouettes of laptop and mobile device users are seen next to a screen projection of the YouTube logo.

Dado Ruvic | Reuters

Google is using its expansive library of YouTube videos to train its artificial intelligence models, including Gemini and the Veo 3 video and audio generator, CNBC has learned.

The tech company is turning to its catalog of 20 billion YouTube videos to train these new-age AI tools, according to a person who was not authorized to speak publicly about the matter. Google confirmed to CNBC that it relies on its vault of YouTube videos to train its AI models, but the company said it only uses a subset of its videos for the training and that it honors specific agreements with creators and media companies.

“We’ve always used YouTube content to make our products better, and this hasn’t changed with the advent of AI,” said a YouTube spokesperson in a statement. “We also recognize the need for guardrails, which is why we’ve invested in robust protections that allow creators to protect their image and likeness in the AI era — something we’re committed to continuing.”

Such use of YouTube videos has the potential to lead to an intellectual property crisis for creators and media companies, experts said.

While YouTube says it has shared this information previously, experts who spoke with CNBC said it’s not widely understood by creators and media organizations that Google is training its AI models using its video library.

YouTube didn’t say how many of the 20 billion videos on its platform or which ones are used for AI training. But given the platform’s scale, training on just 1% of the catalog would amount to 2.3 billion minutes of content, which experts say is more than 40 times the training data used by competing AI models.
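As a rough check of that figure, the arithmetic works out as sketched below; the average video length is an assumption chosen to match the cited total, not a YouTube statistic.

```python
# Back-of-envelope check of the "1% of the catalog" figure cited above.
total_videos = 20_000_000_000          # YouTube catalog size cited in the article
sampled_videos = 0.01 * total_videos   # 1% of the catalog = 200 million videos
assumed_avg_minutes = 11.5             # assumed average video length (illustrative)
minutes = sampled_videos * assumed_avg_minutes
print(f"{minutes / 1e9:.1f} billion minutes")  # ≈ 2.3 billion minutes
```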

The company shared in a blog post published in September that YouTube content could be used to “improve the product experience … including through machine learning and AI applications.” Users who have uploaded content to the service have no way of opting out of letting Google train on their videos. 

“It’s plausible that they’re taking data from a lot of creators that have spent a lot of time and energy and their own thought to put into these videos,” said Luke Arrigoni, CEO of Loti, a company that works to protect digital identity for creators. “It’s helping the Veo 3 model make a synthetic version, a poor facsimile, of these creators. That’s not necessarily fair to them.”

CNBC spoke with multiple leading creators and IP professionals; none were aware, or had been informed by YouTube, that their content could be used to train Google’s AI models.

Google DeepMind Veo 3.

Courtesy: Google DeepMind

The revelation that YouTube is training on its users’ videos is noteworthy after Google in May announced Veo 3, one of the most advanced AI video generators on the market. In its unveiling, Google showcased cinematic-level video sequences, including a scene of an old man on a boat and another showing Pixar-like animals talking with one another. The scenes, both the visuals and the audio, were entirely AI-generated.

According to YouTube, an average of 20 million videos are uploaded to the platform each day by independent creators and nearly every major media company. Many creators say they are now concerned they may be unknowingly helping to train a system that could eventually compete with or replace them.

“It doesn’t hurt their competitive advantage at all to tell people what kind of videos they train on and how many they trained on,” Arrigoni said. “The only thing that it would really impact would be their relationship to creators.”

Even if Veo 3’s final output does not directly replicate existing work, the generated content fuels commercial tools that could compete with the creators who made the training data possible, all without credit, consent or compensation, experts said.

When uploading a video to the platform, the user is agreeing that YouTube has a broad license to the content.

“By providing Content to the Service, you grant to YouTube a worldwide, non-exclusive, royalty-free, sublicensable and transferable license to use that Content,” the terms of service read.

“We’ve seen a growing number of creators discover fake versions of themselves circulating across platforms — new tools like Veo 3 are only going to accelerate the trend,” said Dan Neely, CEO of Vermillio, which helps individuals protect their likeness from being misused and also facilitates secure licensing of authorized content.

Neely’s company has challenged AI platforms for generating content that allegedly infringes on its clients’ intellectual property, both individual and corporate. Neely says that although YouTube has the right to use this content, many of the content creators who post on the platform are unaware that their videos are being used to train video-generating AI software.

Vermillio uses a proprietary tool called Trace ID to assess whether an AI-generated video has significant overlap with a human-created video. Trace ID assigns scores on a scale of zero to 100. Any score over 10 for a video with audio is considered meaningful, Neely said.


In one example cited by Neely, a video from YouTube creator Brodie Moss closely matched content generated by Veo 3. Trace ID attributed a score of 71 to the original video with the audio alone scoring over 90.

Some creators told CNBC they welcome the opportunity to use Veo 3, even if it may have been trained on their content.

“I try to treat it as friendly competition more so than these are adversaries,” said Sam Beres, a creator with 10 million subscribers on YouTube. “I’m trying to do things positively because it is the inevitable — but it’s kind of an exciting inevitable.”

Google includes an indemnification clause for its generative AI products, including Veo, which means that if a user faces a copyright challenge over AI-generated content, Google will take on legal responsibility and cover the associated costs.

YouTube announced a partnership with Creative Artists Agency in December to develop access for top talent to identify and manage AI-generated content that features their likeness. YouTube also has a tool for creators to request a video to be taken down if they believe it abuses their likeness.

However, Arrigoni said that the tool hasn’t been reliable for his clients.

YouTube also allows creators to opt out of third-party training from select AI companies including Amazon, Apple and Nvidia, but users are not able to stop Google from training its own models on their videos.

The Walt Disney Company and Universal filed a joint lawsuit last Wednesday against the AI image generator Midjourney, alleging copyright infringement, the first lawsuit of its kind out of Hollywood.

“The people who are losing are the artists and the creators and the teenagers whose lives are upended,” said Sen. Josh Hawley, R-Mo., in May at a Senate hearing about the use of AI to replicate the likeness of humans. “We’ve got to give individuals powerful, enforceable rights in their images, in their property, in their lives back again, or this is just never going to stop.”

Disclosure: Universal is part of NBCUniversal, the parent company of CNBC.



Samsung aims to catch up to Chinese rivals for thin foldable phones as Apple said to enter the fray


Samsung launched the Galaxy Z Fold6 at its Galaxy Unpacked event in Paris. The tech giant said the foldable device is thinner and lighter than its predecessor.

Arjun Kharpal | CNBC

Samsung will unveil a thinner version of its flagship foldable smartphone at a launch likely set to take place next month, as it battles Chinese rivals to deliver the slimmest devices to the market.

Folding phones, which have a single screen that can fold in half, came into focus when Samsung first launched such a device in 2019. But Chinese players, in particular Honor and Oppo, have since aggressively released foldables that are thinner and lighter than Samsung’s offerings.

Why are slim foldables important?

“With foldables, thinness has become more critical than ever because people aren’t prepared to accept the compromise for a thicker and heavier phone to get the real estate that a folding phone can deliver,” Ben Wood, chief analyst at CCS Insight, told CNBC on Thursday.

Honor, Oppo and other Chinese players have used their slim designs to differentiate themselves from Samsung.

Let’s look at a comparison: Samsung’s last foldable from 2024, the Galaxy Z Fold6, is 12.1 millimeters (0.48 inches) thick when folded and weighs 239 grams (8.43 oz). Oppo’s Find N5, which was released earlier this year, is 8.93 millimeters thick when closed and weighs 229 grams. The Honor Magic V3, which was launched last year, is 9.2 millimeters when folded and weighs 226 grams.

“Samsung needs to step up” in foldables, Wood said.

And that’s what the South Korean tech giant is planning to do at its upcoming launch, which is likely to take place next month.

“The newest Galaxy Z series is the thinnest, lightest and most advanced foldable yet – meticulously crafted and built to last,” Samsung said in a preview blog post about the phone earlier this month.

But the competition is not letting up. Honor is planning a launch on July 2 in China for its latest folding phone, the Magic V5.

“The interesting thing for Samsung is, if they can approach the thinness that Honor has achieved, it will be a significant step up from its predecessor, it will be a tangible step up in design,” Wood said.

Despite these advances in foldables, the market for the devices has not been as exciting as many had hoped.

CCS Insight said that foldables will account for just 2% of the overall smartphone market this year. Thinner phones may be one way to address the sluggish market, but consumer preferences would also need to change.

“There is a chance that by delivering much thinner foldables that are more akin to the traditional monoblock phone, it will provide an opportunity to turn consumer heads and get them to revisit the idea of having a folding device,” Wood said.

“However, I would caution foldables do remain problematic because in many cases consumers struggle to see why they need a folding device.”

Although the market for foldables remains small compared to traditional smartphones, noted analyst Ming-Chi Kuo of TF International Securities said Wednesday that Apple — which has been notably absent from this product line-up — plans to make a folding iPhone starting next year.


Google looks likely to lose appeal against record $4.7 billion EU fine



Google suffered a setback Thursday after an advisor to the European Union’s top court recommended it dismiss the tech giant’s appeal against a record 4.1-billion-euro ($4.7 billion) antitrust fine.

Juliane Kokott, advocate general at the European Court of Justice, advised the court to throw out Google’s appeal and confirm the fine, which was reduced in 2022 to 4.125 billion euros from 4.34 billion euros previously by the EU’s General Court.

“In her Opinion delivered today, Advocate General Kokott proposes that the Court of Justice dismiss Google’s appeal and, therefore, uphold the judgment of the General Court,” the Luxembourg-based ECJ said in a press release Thursday.

The fine relates to a long-running antitrust case surrounding Google’s Android operating system.

In 2018, the European Commission slapped Google with the record-breaking penalty on the grounds that it abused Android’s mobile dominance to give unfair advantage to its own apps via pre-installation deals with smartphone makers. The Commission is the executive body of the EU.

Google said it was “disappointed” with the ECJ advocate general’s opinion, adding it “would discourage investment in open platforms and harm Android users, partners and app developers.”

“Android has created more choice for everyone and supports thousands of successful businesses in Europe and around the world,” a spokesperson for the company told CNBC via email.

Though the advocate general’s proposal is non-binding, judges tend to follow four out of five such non-binding opinions. The ECJ is expected to deliver a final ruling in the coming months.

