Four federal U.S. agencies issued a warning on Tuesday that they already have the authority to tackle harms caused by artificial intelligence bias and they plan to use it.

The warning comes as Congress is grappling with how it should take action to protect Americans from potential risks stemming from AI. The urgency behind that push has increased as the technology has rapidly advanced with tools that are readily accessible to consumers, like OpenAI’s chatbot ChatGPT. Earlier this month, Senate Majority Leader Chuck Schumer, D-N.Y., announced he’s working toward a broad framework for AI legislation, indicating it’s an important priority in Congress.

But even as lawmakers attempt to write targeted rules for the new technology, regulators asserted they already have the tools to pursue companies abusing or misusing AI in a variety of ways.

In a joint announcement from the Consumer Financial Protection Bureau, the Department of Justice, the Equal Employment Opportunity Commission and the Federal Trade Commission, regulators laid out some of the ways existing laws would allow them to take action against companies for their use of AI.

For example, the CFPB is looking into so-called digital redlining, or housing discrimination that results from bias in lending or home-valuation algorithms, according to Rohit Chopra, the agency’s director. CFPB also plans to propose rules to ensure AI valuation models for residential real estate have safeguards against discrimination.

“There is not an exemption in our nation’s civil rights laws for new technologies and artificial intelligence that engages in unlawful discrimination,” Chopra told reporters during a virtual press conference Tuesday.

“Each agency here today has legal authorities to readily combat AI-driven harm,” FTC Chair Lina Khan said. “Firms should be on notice that systems that bolster fraud or perpetuate unlawful bias can violate the FTC Act. There is no AI exemption to the laws on the books.”

Khan added that the FTC stands ready to hold companies accountable for claims about what their AI technology can do, noting that enforcement against deceptive marketing has long been part of the agency’s expertise.

The FTC is also prepared to take action against companies that unlawfully seek to block new entrants to AI markets, Khan said.

“A handful of powerful firms today control the necessary raw materials, not only the vast stores of data but also the cloud services and computing power, that startups and other businesses rely on to develop and deploy AI products,” Khan said. “And this control could create the opportunity for firms to engage in unfair methods of competition.”

Kristen Clarke, assistant attorney general for the DOJ Civil Rights Division, pointed to a prior settlement with Meta over allegations that the company had used algorithms that unlawfully discriminated on the basis of sex and race in displaying housing ads.

“The Civil Rights Division is committed to using federal civil rights laws to hold companies accountable when they use artificial intelligence in ways that prove discriminatory,” Clarke said.

EEOC Chair Charlotte Burrows noted the use of AI for hiring and recruitment, saying it can result in biased decisions if trained on biased datasets. In practice, that can mean screening out all candidates who don’t resemble those in the select group the AI was trained to identify.

Still, regulators also acknowledged there’s room for Congress to act.

“I do believe that it’s important for Congress to be looking at this,” Burrows said. “I don’t want in any way the fact that I think we have pretty robust tools for some of the problems that we’re seeing to in any way undermine those important conversations and the thought that we need to do more as well.”

“Artificial intelligence poses some of the greatest modern day threats when it comes to discrimination today and these issues warrant closer study and examination by policymakers and others,” said Clarke, adding that in the meantime agencies have “an arsenal of bedrock civil rights laws” to “hold bad actors accountable.”

“While we continue with enforcement on the agency side, we’ve welcomed work that others might do to figure out how we can ensure that we are keeping up with the escalating threats that we see today,” Clarke said.

Tesla faces U.S. auto safety probe after reports FSD ran red lights, caused collisions


The tablet of the new Tesla Model 3.

Matteo Della Torre | Nurphoto | Getty Images

Tesla is facing a federal investigation into possible safety defects with FSD, its partially automated driving system that is also known as Full Self-Driving (Supervised).

Reports to the National Highway Traffic Safety Administration from media outlets, vehicle owners and others described 44 separate incidents in which Tesla drivers using FSD said the system caused them to run a red light, steer into oncoming traffic or commit other traffic safety violations that led to collisions, including some that injured people.

In a notice posted to the agency’s website on Thursday, NHTSA said the investigation concerns “all Tesla vehicles that have been equipped with FSD (Supervised) or FSD (Beta),” which is an estimated 2,882,566 of the company’s electric cars.

Tesla cars, even with FSD engaged, require a human driver ready to brake or steer at any time.

The NHTSA Office of Defects Investigation opened a Preliminary Evaluation to “assess whether there was prior warning or adequate time for the driver to respond to the unexpected behavior” by Tesla’s FSD, or “to safely supervise the automated driving task,” among other things.

The ODI’s review will also assess “warnings to the driver about the system’s impending behavior; the time given to drivers to respond; the capability of FSD to detect, display to the driver, and respond appropriately to traffic signals; and the capability of FSD to detect and respond to lane markings and wrong-way signage.”

Tesla did not respond to a request for comment on the new federal probe. The company released an updated version of FSD this week, version 14.1, to customers.

For years, Tesla CEO Elon Musk has promised investors that Tesla would someday be able to turn their existing electric vehicles into robotaxis, capable of generating income for owners while they sleep or go on vacation, with a simple software update.

That hasn’t happened yet, and Tesla has since informed owners that future upgrades will require new hardware as well as software releases.

Tesla is testing a Robotaxi-brand ride-hailing service in Texas and elsewhere, but it includes human safety drivers or valets on board who either conduct the drives or manually intervene as needed.

In February this year, Musk and President Donald Trump slashed NHTSA staff as part of a broader effort to reduce the federal workforce, impacting the agency’s ability to investigate vehicle safety and regulate autonomous vehicles, The Washington Post first reported.

Read NHTSA’s Tesla FSD traffic safety violations investigation filings here.

Trump meets with Jared Isaacman about top NASA job after pulling nomination


Commander Jared Isaacman of Polaris Dawn, a private human spaceflight mission, speaks at a press conference at the Kennedy Space Center in Cape Canaveral, Florida, U.S. August 19, 2024. 

Joe Skipper | Reuters

President Donald Trump has met with Jared Isaacman to discuss another nomination to lead NASA, a source familiar with the talks confirmed to CNBC’s Morgan Brennan.

Isaacman, who has close ties with SpaceX CEO Elon Musk, was at the White House in September for Trump’s dinner for tech power players. Musk did not attend.

Trump and Isaacman have had multiple in-person meetings in recent weeks to talk about the Shift4 founder’s vision for the space program, according to Bloomberg, citing a person familiar with the meetings.

After a fiery back-and-forth between Musk and Trump over government spending, the president pulled Isaacman’s nomination for the post, saying he was a “blue blooded Democrat, who had never contributed to a Republican before.”

“I also thought it inappropriate that a very close friend of Elon, who was in the Space Business, run NASA, when NASA is such a big part of Elon’s corporate life,” Trump wrote in a Truth Social post on June 6.

Trump named Transportation Secretary Sean Duffy interim head of NASA in July.

Isaacman, who declined to comment, was initially nominated in December to lead the space agency.

Isaacman is a seasoned space traveler, having led two private spaceflights with SpaceX in 2021 and 2024. Shift4 has invested $27.5 million in SpaceX, according to a 2021 filing.

Isaacman stepped down as CEO from Shift4, the payments company he founded in 1999 at the age of 16, after his nomination was pulled, and now serves as executive chairman.

“Even knowing the outcome, I would do it all over again,” Isaacman wrote about the NASA nomination process in a letter to investors announcing the Shift4 change.

Now, it looks like he gets to do it all over again.

Tensions between Musk and Trump have cooled in the months since, but the U.S. space program faces big challenges.

Trump has proposed cutting more than $6 billion from NASA’s budget.

As a result of Trump’s Department of Government Efficiency initiative, which Musk led in the first half of 2025, around 4,000 NASA employees took deferred resignation program offers, cutting the space agency’s staff of 18,000 by about one-fifth.

During the October government shutdown, NASA has made exceptions that allow employees to keep working on missions involving Musk’s SpaceX and Jeff Bezos’ Blue Origin.

Top Hollywood agencies slam OpenAI’s Sora as ‘exploitation’ and a risk to clients


An illustration photo shows Sora 2 logo on a smartphone.

Cfoto | Future Publishing | Getty Images

Creative Artists Agency on Thursday slammed OpenAI’s new video creation app Sora for posing “significant risks” to its clients and their intellectual property.

The talent agency, which represents artists including Doja Cat, Scarlett Johansson, and Tom Hanks, questioned whether OpenAI believed that “humans, writers, artists, actors, directors, producers, musicians, and athletes deserve to be compensated and credited for the work they create.”

“Or does Open AI believe they can just steal it, disregarding global copyright principles and blatantly dismissing creators’ rights, as well as the many people and companies who fund the production, creation, and publication of these humans’ work? In our opinion, the answer to this question is obvious,” the CAA wrote.

OpenAI did not immediately respond to CNBC’s request for comment.

The CAA said that it was “open to hearing” solutions from OpenAI and is working with IP leaders, unions, legislators and global policymakers on the matter.

“Control, permission for use, and compensation is a fundamental right of these workers,” the CAA wrote. “Anything less than the protection of creators and their rights is unacceptable.”

Sora, which launched last week and has quickly reached 1 million downloads, allows users to create AI-generated clips often featuring popular characters and brands.

OpenAI launched Sora with an “opt-out” system, which allowed the use of copyrighted material unless studios or agencies requested that their IP not be used.

CEO Sam Altman later said in a blog post that OpenAI would give rightsholders “more granular control over generation of characters.”

Talent agency WME sent a memo to agents on Wednesday saying it has “notified OpenAI that all WME clients be opted out of the latest Sora AI update, regardless of whether IP rights holders have opted out IP our clients are associated with,” the LA Times reported.

United Talent Agency also criticized Sora’s use of copyrighted property as “exploitation, not innovation,” in a statement on Thursday.

“There is no substitute for human talent in our business, and we will continue to fight tirelessly for our clients to ensure that they are protected,” UTA wrote. “When it comes to OpenAI’s Sora or any other platform that seeks to profit from our clients’ intellectual property and likeness, we stand with artists.”

In a letter written to OpenAI last week, Disney said it did not authorize OpenAI and Sora to copy, distribute, publicly display or perform any image or video that features its copyrighted works and characters, according to a person familiar with the matter.

Disney also wrote that it did not have an obligation to “opt-out” of appearing in Sora or any OpenAI system to preserve its rights under copyright law, the person said.

The Motion Picture Association issued a statement on Tuesday, urging OpenAI to take “immediate and decisive action” against videos using Sora to produce content infringing on its copyrighted material.

Entertainment companies have expressed numerous copyright concerns as generative AI has surged.

Universal and Disney sued creator Midjourney in June, alleging that the company used and distributed AI-generated characters from their movies despite requests to stop. Disney also sent a cease-and-desist letter to AI startup Character.AI in September, warning the company to stop using its copyrighted characters without authorization.
