OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology, and the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, U.S., May 16, 2023.

Elizabeth Frantz | Reuters

At most tech CEO hearings in recent years, lawmakers have taken a contentious tone, grilling executives over their data-privacy practices, competitive methods and more.

But at Tuesday’s hearing on AI oversight, which featured OpenAI CEO Sam Altman, lawmakers seemed notably more welcoming toward the ChatGPT maker. One senator even went so far as to ask whether Altman would be qualified to administer rules regulating the industry.

Altman’s warm welcome on Capitol Hill, which included a dinner discussion with dozens of House lawmakers the night before and a separate speaking event Tuesday afternoon attended by House Speaker Kevin McCarthy, R-Calif., raised concerns among some AI experts who were not in attendance this week.

These experts caution that lawmakers’ decision to learn about the technology from a leading industry executive could unduly sway the solutions they seek to regulate AI. In conversations with CNBC in the days after Altman’s testimony, AI leaders urged Congress to engage with a diverse set of voices in the field to ensure a wide range of concerns are addressed, rather than focus on those that serve corporate interests.

OpenAI did not immediately respond to a request for comment on this story.

A friendly tone

For some experts, the tone of the hearing and Altman’s other engagements on the Hill raised alarm.

Lawmakers’ praise for Altman at times sounded almost like “celebrity worship,” according to Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at New York University.

“You don’t ask the hard questions to people you’re engaged in a fandom about,” she said.

“It doesn’t sound like the kind of hearing that’s oriented around accountability,” said Sarah Myers West, managing director of the AI Now Institute. “Saying, ‘Oh, you should be in charge of a new regulatory agency’ is not an accountability posture.”

West said the “laudatory” tone of some representatives following the dinner with Altman was surprising. She acknowledged it may “signal that they’re just trying to sort of wrap their heads around what this new market even is.”

But she added, “It’s not new. It’s been around for a long time.”

Safiya Umoja Noble, a professor at UCLA and author of “Algorithms of Oppression: How Search Engines Reinforce Racism,” said lawmakers who attended the dinner with Altman seemed “deeply influenced to appreciate his product and what his company is doing. And that also doesn’t seem like a fair deliberation over the facts of what these technologies are.”

“Honestly, it’s disheartening to see Congress let these CEOs pave the way for carte blanche, whatever they want, the terms that are most favorable to them,” Noble said.

Real differences from the social media era?


At Tuesday’s Senate hearing, lawmakers made comparisons to the social media era, noting their surprise that industry executives showed up asking for regulation. But experts who spoke with CNBC said industry calls for regulation are nothing new and often serve an industry’s own interests.

“It’s really important to pay attention to specifics here and not let the supposed novelty of someone in tech saying the word ‘regulation’ without scoffing distract us from the very real stakes and what’s actually being proposed, the substance of those regulations,” said Whittaker.

“Facebook has been using that strategy for years,” Meredith Broussard, New York University professor and author of “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” said of the call for regulation. “Really, what they do is they say, ‘Oh, yeah, we’re definitely ready to be regulated.’… And then they lobby [for] exactly the opposite. They take advantage of the confusion.”

Experts cautioned that the kinds of regulation Altman suggested, like an agency to oversee AI, could actually stall regulation and entrench incumbents.

“That seems like a great way to completely slow down any progress on regulation,” said Margaret Mitchell, researcher and chief ethics scientist at AI company Hugging Face. “Government is already not resourced enough to well support the agencies and entities they already have.”

Ravit Dotan, who leads an AI ethics lab at the University of Pittsburgh as well as AI ethics at generative AI startup Bria.ai, said that while it makes sense for lawmakers to take Big Tech companies’ opinions into account since they are key stakeholders, they shouldn’t dominate the conversation.

“One of the concerns that is coming from smaller companies generally is whether regulation would be something that is so cumbersome that only the big companies are really able to deal with [it], and then smaller companies end up having a lot of burdens,” Dotan said.

Several researchers said the government should focus on enforcing the laws already on the books and applauded a recent joint agency statement that asserted the U.S. already has the power to enforce against discriminatory outcomes from the use of AI.

Dotan said there were bright spots in the hearing when she felt lawmakers were “informed” in their questions. But in other cases, she said she wished lawmakers had pressed Altman for deeper explanations or commitments.

For example, when asked about the likelihood that AI will displace jobs, Altman said that eventually it will create more quality jobs. While Dotan said she agreed with that assessment, she wished lawmakers had asked Altman for more potential solutions to help displaced workers find a living or gain skills training in the meantime, before new job opportunities become more widely available.

“There are so many things that a company with the power of OpenAI backed by Microsoft has when it comes to displacement,” Dotan said. “So to me, to leave it as, ‘Your market is going to sort itself out eventually,’ was very disappointing.”

Diversity of voices

A key message AI experts have for lawmakers and government officials is to include a wider array of voices, both in personal background and field of experience, when considering regulating the technology.

“I think that community organizations and researchers should be at the table; people who have been studying the harmful effects of a variety of different kinds of technologies should be at the table,” said Noble. “We should have policies and resources available for people who’ve been damaged and harmed by these technologies … There are a lot of great ideas for repair that come from people who’ve been harmed. And we really have yet to see meaningful engagement in those ways.”

Mitchell said she hopes Congress engages more specifically with people involved in auditing AI tools and experts in surveillance capitalism and human-computer interactions, among others. West suggested that people with expertise in fields that will be affected by AI should also be included, like labor and climate experts.

Whittaker pointed out that there may already be “more hopeful seeds of meaningful regulation outside of the federal government,” pointing to the Writers Guild of America strike as an example, in which demands include job protections from AI.

Government should also pay greater attention and offer more resources to researchers in fields like social sciences, who have played a large role in uncovering the ways technology can result in discrimination and bias, according to Noble.

“Many of the challenges around the impact of AI in society has come from humanists and social scientists,” she said. “And yet we see that the funding that is predicated upon our findings, quite frankly, is now being distributed back to computer science departments that work alongside industry.”

Noble said she was “stunned” to see that the White House’s announcement of funding for seven new AI research centers seemed to have an emphasis on computer science.

“Most of the women that I know who have been the leading voices around the harms of AI for the last 20 years are not invited to the White House, are not funded by [the National Science Foundation and] are not included in any kind of transformative support,” Noble said. “And yet our work does have and has had tremendous impact on shifting the conversations about the impact of these technologies on society.”

Noble pointed to the White House meeting earlier this month that included Altman and other tech CEOs, such as Google’s Sundar Pichai and Microsoft’s Satya Nadella. Noble said the photo of that meeting “really told the story of who has put themselves in charge. …The same people who’ve been the makers of the problems are now somehow in charge of the solutions.”

Bringing in independent researchers to engage with government would give those experts opportunities to make “important counterpoints” to corporate testimony, Noble said.

Still, other experts noted that they and their peers have engaged with government about AI, albeit without the same media attention Altman’s hearing received and perhaps without a large event like the dinner Altman attended with a wide turnout of lawmakers.

Mitchell worries lawmakers are now “primed” from their discussions with industry leaders.

“They made the decision to start these discussions, to ground these discussions in corporate interests,” Mitchell said. “They could have gone in a totally opposite direction and asked them last.”

Mitchell said she appreciated Altman’s comments on Section 230, the law that helps shield online platforms from being held responsible for their users’ speech. Altman conceded that outputs of generative AI tools would not necessarily be covered by the legal liability shield and a different framework is needed to assess liability for AI products.

“I think, ultimately, the U.S. government will go in a direction that favors large tech corporations,” Mitchell said. “My hope is that other people, or people like me, can at least minimize the damage, or show some of the devil in the details to lead away from some of the more problematic ideas.”

“There’s a whole chorus of people who have been warning about the problems, including bias along the lines of race and gender and disability, inside AI systems,” said Broussard. “And if the critical voices get elevated as much as the commercial voices, then I think we’re going to have a more robust dialogue.”


World’s biggest chipmaker TSMC posts record 2024 revenue as AI boost continues


The logo for Taiwan Semiconductor Manufacturing Company is displayed on a screen on the floor of the New York Stock Exchange on Sept. 26, 2023.

Brendan Mcdermid | Reuters

Taiwan Semiconductor Manufacturing Co. posted December quarter revenue that topped analyst estimates, as the company continues to get a boost from the AI boom.

The world’s largest chip manufacturer reported fourth-quarter revenue of 868.5 billion New Taiwan dollars ($26.3 billion), according to CNBC calculations, up 38.8% year-on-year.

That beat Refinitiv consensus estimates of 850.1 billion New Taiwan dollars.

For 2024, TSMC’s revenue totaled 2.9 trillion New Taiwan dollars, its highest annual sales since the company went public in 1994.

TSMC manufactures semiconductors for some of the world’s biggest companies, including Apple and Nvidia.

TSMC is seen as the most advanced chipmaker in the world, given its ability to manufacture leading-edge semiconductors. The company has been helped along by the strong demand for AI chips, particularly from Nvidia, as well as ever-improving smartphone semiconductors.

“TSMC has benefited significantly from the strong demand for AI,” Brady Wang, associate director at Counterpoint Research, told CNBC.

Wang said “capacity utilization” for TSMC’s 3 nanometer and 5 nanometer processes — the most advanced chips — “has consistently exceeded 100%.”

AI graphics processing units (GPUs), such as those designed by Nvidia, and other artificial intelligence chips are driving this demand, Wang said.

Taiwan-listed shares of TSMC have risen 88% over the last 12 months.

TSMC’s latest sales figures may also give investors hope that demand for artificial intelligence chips and services will continue into 2025.

Foxconn, which assembles Apple’s iPhones, reported its highest-ever fourth-quarter revenue this week on the back of strong demand for AI servers.

Meanwhile, Microsoft this month said that it plans to spend $80 billion in its fiscal year to June on the construction of data centers that can handle artificial intelligence workloads.

CNBC’s Jordan Novet contributed to this report.


Supreme Court set to hear oral arguments on challenge to TikTok ban


TikTok creators gather before a press conference to voice their opposition to the “Protecting Americans from Foreign Adversary Controlled Applications Act,” pending crackdown legislation on TikTok in the House of Representatives, on Capitol Hill in Washington, U.S., March 12, 2024.

Craig Hudson | Reuters

The Supreme Court on Friday will hear oral arguments in the case involving the future of TikTok in the U.S., which could ban the popular app as soon as next week.

The justices will consider whether the Protecting Americans from Foreign Adversary Controlled Applications Act, the law that would ban TikTok unless ByteDance divests it and that imposes harsh civil penalties on “entities” that continue to carry the app after Jan. 19, violates the U.S. Constitution’s free speech protections.

It’s unclear when the court will hand down a decision, and if China’s ByteDance continues to refuse to divest TikTok to an American company, it faces a complete ban nationwide.

What will change about the user experience?

The roughly 115 million U.S. TikTok monthly active users could face a range of scenarios depending on when the Supreme Court hands down a decision.

If no word comes before the law takes effect on Jan. 19 and the ban goes through, it’s possible that users would still be able to post or engage with the app if they already have it downloaded. However, those users would likely be unable to update or redownload the app after that date, multiple legal experts said.

Thousands of short-form video creators who generate income from TikTok through ad revenue, paid partnerships, merchandise and more will likely need to transition their businesses to other platforms, like YouTube or Instagram.

“Shutting down TikTok, even for a single day, would be a big deal, not just for people who create content on TikTok, but everyone who shares or views content,” said George Wang, a staff attorney at the Knight First Amendment Institute who helped write the institute’s amicus briefs on the case. 

“It sets a really dangerous precedent for how we regulate speech online,” Wang said.

Who supports and opposes the ban?

Dozens of high-profile amicus briefs from organizations, members of Congress and President-elect Donald Trump were filed supporting both the government and ByteDance.

The government, led by Attorney General Merrick Garland, alleges that until ByteDance divests TikTok, the app remains a “powerful tool for espionage” and a “potent weapon for covert influence operations.”

Trump’s brief did not voice support for either side, but it did ask the court to oppose banning the platform and allow him to find a political resolution that allows the service to continue while addressing national security concerns. 

The short-form video app played a notable role in both Trump and Democratic nominee Kamala Harris’ presidential campaigns in 2024, and it’s one of the most common news sources for younger voters.

In a September Truth Social post, Trump wrote in all caps that Americans who want to save TikTok should vote for him. The post was quoted in his amicus brief.



Nvidia’s tiny $3,000 computer steals the show at CES


Nvidia CEO Jensen Huang speaks about the Project Digits personal AI supercomputer for researchers and students during a keynote address at the Consumer Electronics Show (CES) in Las Vegas, Nevada, on January 6, 2025.

Patrick T. Fallon | Afp | Getty Images

Nvidia CEO Jensen Huang was greeted as a rock star this week at CES in Las Vegas, following an artificial intelligence boom that’s made the chipmaker the second most valuable company in the world.

At his nearly two-hour keynote on Monday kicking off the annual conference, Huang packed a 12,000-seat arena, drawing comparisons to the way Steve Jobs would reveal products at Apple events.

Huang concluded with an Apple-like trick: a surprise product reveal. He presented one of Nvidia’s server racks and, using some stage magic, held up a much smaller version, which looked like a tiny cube of a computer.

“This is an AI supercomputer,” Huang said while wearing an alligator-skin leather jacket. “It runs the entire Nvidia AI stack. All of Nvidia’s software runs on this.”

Huang said the computer is called Project Digits and runs on a relative of the Grace Blackwell graphics processing units (GPUs) that currently power the most advanced AI server clusters. The GPU is paired with an ARM-based Grace central processing unit (CPU). Nvidia worked with Taiwanese semiconductor company MediaTek to create the system-on-a-chip, called GB10.

Formerly known as the Consumer Electronics Show, CES is typically the spot to launch flashy and futuristic consumer gadgets. At this year’s show, which started on Tuesday and wraps up on Friday, several companies announced AI integrations with appliances, laptops and even grills. Other major announcements included a laptop from Lenovo with a rollable screen that expands vertically, as well as new robots, including a Roomba competitor with a robotic arm.


Unlike Nvidia’s traditional GPUs for gaming, Project Digits isn’t targeting consumers. Instead, it’s aimed at machine learning researchers, smaller companies and universities that want to develop advanced AI but don’t have the billions of dollars to build massive data centers or buy enough cloud credits.

“There’s a gaping hole for data scientists and ML researchers and who are actively working, who are actively building something,” Huang said. “Maybe you don’t need a giant cluster. You’re just developing the early versions of the model, and you’re iterating constantly. You could do it in the cloud, but it just costs a lot more money.”

The supercomputer will cost about $3,000 when it becomes available in May, Nvidia said, and will be available from the company itself as well as some of its manufacturing partners. Huang said Project Digits is a placeholder name, indicating it may change by the time the computer goes on sale.

“If you have a good name for it, reach out to us,” Huang said.

Diversifying its business

The Nvidia Project Digits supercomputer during the 2025 CES event in Las Vegas, Nevada, US, on Wednesday, Jan. 8, 2025. 

Bridget Bennett | Bloomberg | Getty Images

“It was a little scary to see Nvidia come out with something so good for so little in price,” Melius Research analyst Ben Reitzes wrote in a note this week. He said Nvidia may have “stolen the show” due to Project Digits as well as other announcements, including graphics cards for gaming, new robot chips and a deal with Toyota.

Project Digits, which runs Linux and the same Nvidia software used on the company’s GPU server clusters, represents a huge increase in capabilities for researchers and universities, said David Bader, director of the Institute for Data Science at New Jersey Institute of Technology.

Bader, who has worked on research projects with Nvidia in the past, said the computer appears to be able to handle enough data and information to train the biggest and most cutting-edge models. He told CNBC that Anthropic, Google, Amazon and others “would pay $100 million to build a super computer for training” to get a system with these sorts of capabilities.

For $3,000, users can soon get a product they can plug into a standard electrical outlet in their home or office, Bader said. It’s particularly exciting for academics, who have often left for private industry in order to access bigger and more powerful computers, he said.

“Any student who is able to have one of these systems that cost roughly the same as a high-end laptop or gaming laptop, they’ll be able to do the same research and build the same models,” Bader said.

Reitzes said the computer may be Nvidia’s first move into the $50 billion market for PC and laptop chips.

“It’s not too hard to imagine it would be easy to just do it all themselves and allow the system to run Windows someday,” Reitzes wrote. “But I guess they don’t want to step on too many toes.”

Huang didn’t rule out that possibility when asked about it by Wall Street analysts on Tuesday.

He said that MediaTek may be able to sell the GB10 chip to other computer makers in the market. He made sure to leave some mystery in the air.

“Obviously, we have plans,” Huang said.
