The news industry just gained a powerful ally in its effort to take on OpenAI.
The Center for Investigative Reporting, the country’s oldest nonprofit newsroom, sued OpenAI and lead backer Microsoft in federal court on Thursday for alleged copyright infringement, following similar suits from publications including The New York Times, Chicago Tribune and the New York Daily News.
The CIR alleged in the suit, filed in the Southern District of New York, that OpenAI “copied, used, abridged, and displayed CIR’s valuable content without CIR’s permission or authorization, and without any compensation to CIR.”
Since its public release in late 2022, OpenAI’s ChatGPT chatbot has been crawling the web to provide answers to user queries, often relying heavily on copy pulled directly from news stories.
“When they populated their training sets with works of journalism, Defendants had a choice: to respect works of journalism, or not,” the plaintiffs wrote in the lawsuit. “Defendants chose the latter.”
In a press release on Thursday, Monika Bauerlein, CEO of the nonprofit, accused the defendants of “free rider behavior.”
“OpenAI and Microsoft started vacuuming up our stories to make their product more powerful, but they never asked for permission or offered compensation, unlike other organizations that license our material,” Bauerlein said.
The CIR, which is home to Mother Jones and the audio program Reveal, also alleged in the suit that OpenAI “trained ChatGPT not to acknowledge or respect copyright. And they did this all without permission.”
The group said it’s seeking “actual damages and Defendants’ profits, or statutory damages of no less than $750 per infringed work and $2,500 per DMCA violation,” referring to the Digital Millennium Copyright Act.
OpenAI and Microsoft didn’t immediately respond to requests for comment.
With the news industry broadly struggling to maintain sufficient advertising and subscription revenue to pay for its costly newsgathering operations, many publications are aggressively trying to protect their businesses as AI-generated content becomes more prevalent.
In December, The New York Times filed a suit against Microsoft and OpenAI, alleging intellectual property violations related to its journalistic content appearing in ChatGPT training data. The Times said it seeks to hold Microsoft and OpenAI accountable for “billions of dollars in statutory and actual damages” related to the “unlawful copying and use of the Times’s uniquely valuable works,” according to a filing in the U.S. District Court for the Southern District of New York. OpenAI disagreed with the Times’ characterization of events.
The Chicago Tribune, along with seven other newspapers, followed with a similar suit in April.
Outside of news, a group of prominent U.S. authors, including Jonathan Franzen, John Grisham, George R.R. Martin and Jodi Picoult, sued OpenAI last year, alleging copyright infringement in using their work to train ChatGPT.
But not all news organizations are gearing up for a fight, and some are instead joining forces with OpenAI. Earlier on Thursday, OpenAI and Time magazine announced a “multi-year content deal” that will allow OpenAI to access current and archived articles from more than 100 years of Time’s history.
OpenAI will be able to display Time’s content within its ChatGPT chatbot in response to user questions, according to a press release, and to use Time’s content “to enhance its products,” or, likely, to train its artificial intelligence models.
OpenAI announced a similar partnership in May with News Corp., allowing OpenAI to access current and archived articles from The Wall Street Journal, MarketWatch, Barron’s, the New York Post and other publications. Reddit also announced in May that it will partner with OpenAI, allowing the company to train its AI models on Reddit content.
A group of prominent figures, including artificial intelligence and technology experts, has called for an end to efforts to create “superintelligence” — a form of AI that would surpass human intellect.
More than 800 people, including Apple cofounder Steve Wozniak and former U.S. National Security Advisor Susan Rice, signed a statement published Wednesday calling for a pause on the development of superintelligence.
The list of signatories notably includes prominent AI leaders, among them scientists Yoshua Bengio and Geoffrey Hinton, who are widely considered “godfathers” of modern AI. Leading AI safety researchers such as UC Berkeley’s Stuart Russell also signed on.
Superintelligence has become a buzzword in the AI world, as companies from xAI to OpenAI compete to release more advanced large language models. Meta has notably gone so far as to name its AI division Meta Superintelligence Labs.
But signatories of the recent statement warn that the prospect of superintelligence has “raised concerns, ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”
The statement calls for a prohibition on superintelligence development until there is strong public buy-in and a broad scientific consensus that it can be done safely and controllably.
In addition to the AI figures, the names behind the statement come from a broad coalition of academics, media personalities, religious leaders and ex-politicians.
Other prominent names include Virgin founder Richard Branson, former chairman of the Joint Chiefs of Staff Mike Mullen, and British royal family member Meghan Markle. Prominent media allies of U.S. President Donald Trump, including Steve Bannon and Glenn Beck, also signed on.
As of Wednesday, the list of signatories was still growing.
Netflix is “all in” on leveraging generative artificial intelligence on its streaming platform, according to the company, as AI continues to make its way into mainstream entertainment.
The comments came in Netflix’s earnings report Tuesday, which highlighted AI as a major focus for the world’s largest streaming service by subscriber count.
“For many years now, [machine learning] and AI have been powering our title recommendations as well as production and promotion technology,” Netflix said in a letter to shareholders.
It added that generative AI presents a “significant opportunity” across its streaming platform, including improving its recommendations, ads business, and movies and TV content.
“We’re empowering creators with a broad set of GenAI tools to help them achieve their visions and deliver even more impactful titles for members,” the company said.
Netflix pointed to recent examples, noting that its film Happy Gilmore 2 used generative AI tools to help de-age characters, while producers of the Netflix series Billionaires’ Bunker used various generative AI tools during pre-production to explore wardrobe and set designs.
Concerns about AI replacement
Netflix’s comments come amid broader concerns in the entertainment and art world regarding the potential for AI to replace human workers and the technology’s use of human-made content.
Speaking during an earnings call, Netflix CEO Ted Sarandos seemingly addressed those issues, noting that AI can enhance the overall TV and movie experience, but “can’t automatically make you a great storyteller if you’re not.”
“We’re confident that AI is going to help us and help our creative partners tell stories better, faster and in new ways — we’re all in on that,” Sarandos said. He added: “We’re not worried about AI replacing creativity.”
However, many in the entertainment industry remain skeptical of AI and its growing presence in media.
An upstart production studio called Particle6 recently faced massive backlash, including from the media union SAG-AFTRA, for its plan to create, design, manage and monetize AI-generated actors and talent.
SAG-AFTRA previously led a significant actors’ strike in July 2023, amid a broader series of Hollywood labor disputes that saw concerns about the use of artificial intelligence brought to the forefront.
The strike lasted over 100 days before a tentative agreement was reached between SAG-AFTRA and the Alliance of Motion Picture and Television Producers, which included the establishment of contractual AI protections for film and TV performers for the first time.
To further encourage the responsible use of such AI tools, Netflix recently released new AI-focused production guidance aimed at its creators.
It’s been more than 17 years since the modern smartphone era began with the launch of the iPhone, and tech companies have been obsessed with trying to disrupt it ever since.
The most common approach has been mixed reality, or XR, headsets: computerized goggles that put all of your apps and other digital content right in front of your face.
Samsung is the latest to take on the category with the Galaxy XR, which goes on sale Tuesday night for $1,800, about half the price of Apple’s Vision Pro.
Early adopters will also get a suite of digital freebies, like free access to the paid version of Google’s Gemini AI assistant and YouTube Premium for a year.
The headset was made in partnership with Google for the software and Qualcomm, which makes the chip powering the Galaxy XR.
Samsung’s Galaxy XR lets you enter an immersive, virtual computing experience where your apps and other content appear to float in your field of view. External cameras project the real world onto the tiny 4K displays in the headset, meaning you can walk around a room while wearing the Galaxy XR without bumping into anything.
You control everything with hand gestures, your voice or a mix of both.
As for the headset itself, you’d be forgiven for thinking you were looking at an Apple Vision Pro.
From the curved glass on the front of the Galaxy XR, to the metal trim and the external battery pack that dangles from the headset by a cable, it’s almost as if Samsung and Google spent the last two years reverse-engineering the Vision Pro.
And in those two years, we’ve learned a lot about these computers for your face.
They’re niche, expensive products that most people don’t want to use, and there’s still no killer app or enough immersive content to keep you consistently entertained and justify the $2,000 or more you’re spending.
The promise of the metaverse evaporated as soon as ChatGPT came on the scene in late 2022 and the tech industry shifted its focus to artificial intelligence. Even Mark Zuckerberg, who changed his company’s name to “Meta” in 2021, barely talks about the metaverse anymore.
But Samsung has a different pitch for the Galaxy XR.
It may come with all the drawbacks of Apple’s and Meta’s headsets, but Samsung and Google say the Galaxy XR is really a stepping stone to AI glasses currently in development with eyewear brands Warby Parker and Gentle Monster.
Those devices will rely on Google’s AI assistant Gemini, which is also central to the experience on the Galaxy XR.
Google showed an early demo of those glasses at its annual I/O event in May, but there are no details on when such a device will launch. Google also has a long track record of announcing products at I/O that never actually go on sale to the public.
Remember Google Glass? What about the Nexus Q?
But Google and Samsung are acting like things are different this time, and that’s why Gemini is such a big part of the Galaxy XR.
You can control everything in the headset using hand gestures, and Samsung even mimicked the gestures Apple came up with for the Vision Pro. Still, the Gemini controls were the most impressive portion of the Galaxy XR demo Samsung held in New York last week.
I could use Gemini to organize floating windows of apps in my virtual workspace, ask it questions about landmarks I was looking at in Google Maps, or prompt it to generate a goofy video using Veo, Google’s AI video generator that’s like OpenAI’s Sora.
Overall, the Gemini demo was flawless. It understood everything I said, even in a noisy conference room, and executed my commands quickly.
It wasn’t exactly revolutionary, but it was a step beyond the capabilities of the Vision Pro, which doesn’t have generative AI features at all.
I could see how Gemini will evolve to fit into a more comfortable and stylish form factor, like Meta has done with its Ray-Ban AI glasses. And I can now understand why Apple has reportedly shifted its plans away from a new version of the Vision Pro in favor of AI glasses that are expected to launch in 2026.
Now for the major downside.
Gemini runs in the cloud, meaning you must give it permission to “see” everything you do on your headset by transmitting it over the internet to Google’s servers. Google doesn’t have the same private cloud technology Apple has for its AI systems, so you risk sharing a lot of personal information about what you do on your device with the company. That’s going to be a nonstarter for many people.
Even if you can see the promise of AI-powered glasses, they remain even more of a niche product than immersive headsets, with a market far smaller than that for smartphones, laptops or tablets.
Meta, the market leader in the category, sold only 2 million pairs of its Ray-Ban glasses in their first two years. By comparison, Apple sells well over 200 million iPhones a year. We’re a long way off from glasses becoming a must-have accessory to your phone, like wireless earbuds or a smartwatch.
And as impressive as Gemini is so far, a future where the smartphone is replaced by an AI device like glasses has never felt further away.