OpenAI CEO Sam Altman testifies before a Senate Judiciary Privacy, Technology, and the Law Subcommittee hearing titled ‘Oversight of A.I.: Rules for Artificial Intelligence’ on Capitol Hill in Washington, U.S., May 16, 2023. REUTERS/Elizabeth Frantz
At most tech CEO hearings in recent years, lawmakers have taken a contentious tone, grilling executives over their data-privacy practices, competitive methods and more.
But at Tuesday’s hearing on AI oversight, which featured OpenAI CEO Sam Altman, lawmakers seemed notably more welcoming toward the ChatGPT maker. One senator even went as far as asking whether Altman would be qualified to administer rules regulating the industry.
Altman’s warm welcome on Capitol Hill, which included a dinner discussion the night prior with dozens of House lawmakers and a separate speaking event Tuesday afternoon attended by House Speaker Kevin McCarthy, R-Calif., has raised concerns from some AI experts who were not in attendance this week.
These experts caution that lawmakers’ decision to learn about the technology from a leading industry executive could unduly sway the solutions they seek to regulate AI. In conversations with CNBC in the days after Altman’s testimony, AI leaders urged Congress to engage with a diverse set of voices in the field to ensure a wide range of concerns are addressed, rather than focus on those that serve corporate interests.
OpenAI did not immediately respond to a request for comment on this story.
A friendly tone
For some experts, the tone of the hearing and Altman’s other engagements on the Hill raised alarm.
Lawmakers’ praise for Altman at times sounded almost like “celebrity worship,” according to Meredith Whittaker, president of the Signal Foundation and co-founder of the AI Now Institute at New York University.
“You don’t ask the hard questions to people you’re engaged in a fandom about,” she said.
“It doesn’t sound like the kind of hearing that’s oriented around accountability,” said Sarah Myers West, managing director of the AI Now Institute. “Saying, ‘Oh, you should be in charge of a new regulatory agency’ is not an accountability posture.”
West said the “laudatory” tone of some representatives following the dinner with Altman was surprising. She acknowledged it may “signal that they’re just trying to sort of wrap their heads around what this new market even is.”
But she added, “It’s not new. It’s been around for a long time.”
Safiya Umoja Noble, a professor at UCLA and author of “Algorithms of Oppression: How Search Engines Reinforce Racism,” said lawmakers who attended the dinner with Altman seemed “deeply influenced to appreciate his product and what his company is doing. And that also doesn’t seem like a fair deliberation over the facts of what these technologies are.”
“Honestly, it’s disheartening to see Congress let these CEOs pave the way for carte blanche, whatever they want, the terms that are most favorable to them,” Noble said.
Real differences from the social media era?
At Tuesday’s Senate hearing, lawmakers made comparisons to the social media era, noting their surprise that industry executives showed up asking for regulation. But experts who spoke with CNBC said industry calls for regulation are nothing new and often serve an industry’s own interests.
“It’s really important to pay attention to specifics here and not let the supposed novelty of someone in tech saying the word ‘regulation’ without scoffing distract us from the very real stakes and what’s actually being proposed, the substance of those regulations,” said Whittaker.
“Facebook has been using that strategy for years,” Meredith Broussard, New York University professor and author of “More Than a Glitch: Confronting Race, Gender, and Ability Bias in Tech,” said of the call for regulation. “Really, what they do is they say, ‘Oh, yeah, we’re definitely ready to be regulated.’… And then they lobby [for] exactly the opposite. They take advantage of the confusion.”
Experts cautioned that the kinds of regulation Altman suggested, like an agency to oversee AI, could actually stall regulation and entrench incumbents.
“That seems like a great way to completely slow down any progress on regulation,” said Margaret Mitchell, researcher and chief ethics scientist at AI company Hugging Face. “Government is already not resourced enough to well support the agencies and entities they already have.”
Ravit Dotan, who leads an AI ethics lab at the University of Pittsburgh as well as AI ethics at generative AI startup Bria.ai, said that while it makes sense for lawmakers to take Big Tech companies’ opinions into account since they are key stakeholders, they shouldn’t dominate the conversation.
“One of the concerns that is coming from smaller companies generally is whether regulation would be something that is so cumbersome that only the big companies are really able to deal with [it], and then smaller companies end up having a lot of burdens,” Dotan said.
Several researchers said the government should focus on enforcing the laws already on the books and applauded a recent joint agency statement that asserted the U.S. already has the power to enforce against discriminatory outcomes from the use of AI.
Dotan said there were bright spots in the hearing when she felt lawmakers were “informed” in their questions. But in other cases, she said she wished lawmakers had pressed Altman for deeper explanations or commitments.
For example, when asked about the likelihood that AI will displace jobs, Altman said that eventually it will create more quality jobs. While Dotan said she agreed with that assessment, she wished lawmakers had asked Altman for more potential solutions to help displaced workers find a living or gain skills training in the meantime, before new job opportunities become more widely available.
“There are so many things that a company with the power of OpenAI backed by Microsoft has when it comes to displacement,” Dotan said. “So to me, to leave it as, ‘Your market is going to sort itself out eventually,’ was very disappointing.”
Diversity of voices
A key message AI experts have for lawmakers and government officials is to include a wider array of voices, both in personal background and field of experience, when considering regulating the technology.
“I think that community organizations and researchers should be at the table; people who have been studying the harmful effects of a variety of different kinds of technologies should be at the table,” said Noble. “We should have policies and resources available for people who’ve been damaged and harmed by these technologies … There are a lot of great ideas for repair that come from people who’ve been harmed. And we really have yet to see meaningful engagement in those ways.”
Mitchell said she hopes Congress engages more specifically with people involved in auditing AI tools and experts in surveillance capitalism and human-computer interactions, among others. West suggested that people with expertise in fields that will be affected by AI should also be included, like labor and climate experts.
Whittaker pointed out that there may already be “more hopeful seeds of meaningful regulation outside of the federal government,” pointing to the Writers Guild of America strike as an example, in which demands include job protections from AI.
Government should also pay greater attention and offer more resources to researchers in fields like social sciences, who have played a large role in uncovering the ways technology can result in discrimination and bias, according to Noble.
“Many of the challenges around the impact of AI in society has come from humanists and social scientists,” she said. “And yet we see that the funding that is predicated upon our findings, quite frankly, is now being distributed back to computer science departments that work alongside industry.”
“Most of the women that I know who have been the leading voices around the harms of AI for the last 20 years are not invited to the White House, are not funded by [the National Science Foundation and] are not included in any kind of transformative support,” Noble said. “And yet our work does have and has had tremendous impact on shifting the conversations about the impact of these technologies on society.”
Noble pointed to the White House meeting earlier this month that included Altman and other tech CEOs, such as Google’s Sundar Pichai and Microsoft’s Satya Nadella. Noble said the photo of that meeting “really told the story of who has put themselves in charge. …The same people who’ve been the makers of the problems are now somehow in charge of the solutions.”
Bringing in independent researchers to engage with government would give those experts opportunities to make “important counterpoints” to corporate testimony, Noble said.
Still, other experts noted that they and their peers have engaged with government about AI, albeit without the same media attention Altman’s hearing received and perhaps without a large event like the dinner Altman attended with a wide turnout of lawmakers.
Mitchell worries lawmakers are now “primed” from their discussions with industry leaders.
“They made the decision to start these discussions, to ground these discussions in corporate interests,” Mitchell said. “They could have gone in a totally opposite direction and asked them last.”
Mitchell said she appreciated Altman’s comments on Section 230, the law that helps shield online platforms from being held responsible for their users’ speech. Altman conceded that outputs of generative AI tools would not necessarily be covered by the legal liability shield and a different framework is needed to assess liability for AI products.
“I think, ultimately, the U.S. government will go in a direction that favors large tech corporations,” Mitchell said. “My hope is that other people, or people like me, can at least minimize the damage, or show some of the devil in the details to lead away from some of the more problematic ideas.”
“There’s a whole chorus of people who have been warning about the problems, including bias along the lines of race and gender and disability, inside AI systems,” said Broussard. “And if the critical voices get elevated as much as the commercial voices, then I think we’re going to have a more robust dialogue.”
Tesla launched a revamped version of its Model Y in China.
Tesla
Tesla on Friday announced a revamped version of its popular Model Y in China, as the U.S. electric car giant looks to fend off challenges from domestic rivals.
The Model Y will start at 263,500 Chinese yuan ($35,935), with deliveries set to begin in March. That is 5.4% more expensive than the starting price of the previous Model Y.
A spokesperson for Tesla China said that the new Model Y is only open for pre-sale in the Chinese market, rather than being launched globally.
Elon Musk’s electric vehicle firm is facing heightened competition around the world, from startups and traditional carmakers in Europe. In China, the company continues to face an onslaught of rivals from BYD to newer players like Xpeng and Nio.
Jason Low, principal analyst at Canalys, notes that the Tesla Model Y was the best-selling EV in China in 2024 and that the popularity of the car “remains high.” However, he noted that competition in the sport utility vehicle (SUV) segment, among vehicles priced between 250,000 yuan and 350,000 yuan, “has been fierce.”
“Tesla must showcase compelling smart features, particularly a unique but well localized cockpit and services ecosystem,” as well as “effective” semi-autonomous driver assistance features “to ensure its competitiveness in the market,” Low added.
Tesla is offering a number of incentives for customers to buy the Model Y including a five-year 0% interest financing plan.
The new Model Y can accelerate from 0 to 100 kilometers per hour in 4.3 seconds, Tesla said, quicker than the previous version. The Model Y Long Range also has a longer driving range on a single charge than its predecessor.
Tesla has not introduced a new model since it began delivering the Cybertruck in late 2023, which starts at nearly $80,000.
Investors have been yearning for a new mass-market model to reinvigorate sales. Tesla has previously hinted that a new affordable model could be launched in the first half of 2025.
Despite Tesla’s headwinds, the company’s stock is up nearly 70% over the last 12 months, partly due to CEO Musk’s close relationship with U.S. President-elect Donald Trump.
The logo for Taiwan Semiconductor Manufacturing Company is displayed on a screen on the floor of the New York Stock Exchange on Sept. 26, 2023.
Brendan Mcdermid | Reuters
Taiwan Semiconductor Manufacturing Co. posted December quarter revenue that topped analyst estimates, as the company continues to get a boost from the AI boom.
The world’s largest chip manufacturer reported fourth-quarter revenue of 868.5 billion New Taiwan dollars ($26.3 billion), according to CNBC calculations, up 38.8% year-on-year.
That beat Refinitiv consensus estimates of 850.1 billion New Taiwan dollars.
For 2024, TSMC’s revenue totaled 2.9 trillion New Taiwan dollars, its highest annual sales since going public in 1994.
TSMC manufactures semiconductors for some of the world’s biggest companies, including Apple and Nvidia.
TSMC is seen as the most advanced chipmaker in the world, given its ability to manufacture leading-edge semiconductors. The company has been helped along by the strong demand for AI chips, particularly from Nvidia, as well as ever-improving smartphone semiconductors.
“TSMC has benefited significantly from the strong demand for AI,” Brady Wang, associate director at Counterpoint Research told CNBC.
Wang said “capacity utilization” for TSMC’s 3 nanometer and 5 nanometer processes — the most advanced chips — “has consistently exceeded 100%.”
AI graphics processing units (GPUs), such as those designed by Nvidia, and other artificial intelligence chips are driving this demand, Wang said.
Taiwan-listed shares of TSMC have risen 88% over the last 12 months.
TSMC’s latest sales figures may also give investors hope that the demand for artificial intelligence chips and services will continue into 2025.
Meanwhile, Microsoft this month said that it plans to spend $80 billion in its fiscal year to June on the construction of data centers that can handle artificial intelligence workloads.
TikTok creators gather before a press conference to voice their opposition to the “Protecting Americans from Foreign Adversary Controlled Applications Act,” pending crackdown legislation on TikTok in the House of Representatives, on Capitol Hill in Washington, U.S., March 12, 2024.
Craig Hudson | Reuters
The Supreme Court on Friday will hear oral arguments in the case over the future of TikTok in the U.S., the outcome of which could see the popular app banned as soon as next week.
The justices will consider whether the Protecting Americans from Foreign Adversary Controlled Applications Act, the law that mandates TikTok’s ban and imposes harsh civil penalties on app “entities” that continue to carry the service after Jan. 19, violates the U.S. Constitution’s free speech protections.
It’s unclear when the court will hand down a decision, and if China’s ByteDance continues to refuse to divest TikTok to an American company, it faces a complete ban nationwide.
What will change about the user experience?
The roughly 115 million U.S. TikTok monthly active users could face a range of scenarios depending on when the Supreme Court hands down a decision.
If no word comes before the law takes effect on Jan. 19 and the ban goes through, it’s possible that users would still be able to post or engage with the app if they already have it downloaded. However, those users would likely be unable to update or redownload the app after that date, multiple legal experts said.
Thousands of short-form video creators who generate income from TikTok through ad revenue, paid partnerships, merchandise and more will likely need to transition their businesses to other platforms, like YouTube or Instagram.
“Shutting down TikTok, even for a single day, would be a big deal, not just for people who create content on TikTok, but everyone who shares or views content,” said George Wang, a staff attorney at the Knight First Amendment Institute who helped write the institute’s amicus briefs on the case.
“It sets a really dangerous precedent for how we regulate speech online,” Wang said.
Who supports and opposes the ban?
Dozens of high-profile amicus briefs from organizations, members of Congress and President-elect Donald Trump were filed supporting both the government and ByteDance.
The government, led by Attorney General Merrick Garland, alleges that until ByteDance divests TikTok, the app remains a “powerful tool for espionage” and a “potent weapon for covert influence operations.”
Trump’s brief did not voice support for either side, but it did ask the court to hold off on banning the platform and allow him to pursue a political resolution that would let the service continue while addressing national security concerns.
The short-form video app played a notable role in both Trump and Democratic nominee Kamala Harris’ presidential campaigns in 2024, and it’s one of the most common news sources for younger voters.
In a September Truth Social post, Trump wrote in all caps that Americans who want to save TikTok should vote for him. The post was quoted in his amicus brief.
What comes next?
It’s unclear when the Supreme Court will issue its ruling, but the case’s expedited hearing has some predicting a quick decision.
The case will have “enormous implications” since TikTok’s user base in the U.S. is so large, said Erwin Chemerinsky, dean of Berkeley Law.
“It’s unprecedented for the government to prohibit platforms for speech, especially one so many people use,” Chemerinsky said. “Ultimately, this is a tension between free speech issues on the one hand and claims of national security on the other.”