FTC Chairwoman Lina Khan testifies during the House Energy and Commerce Subcommittee on Innovation, Data, and Commerce hearing on the “FY2024 Federal Trade Commission Budget,” in Rayburn Building on Tuesday, April 18, 2023.
Tom Williams | Cq-roll Call, Inc. | Getty Images
The Federal Trade Commission is on alert for the ways that rapidly advancing artificial intelligence could be used to violate the antitrust and consumer protection laws it’s charged with enforcing, Chair Lina Khan wrote in a New York Times op-ed on Wednesday.
“Although these tools are novel, they are not exempt from existing rules, and the F.T.C. will vigorously enforce the laws we are charged with administering, even in this new market,” Khan wrote, echoing a theme the agency shared in a joint statement with three other enforcers last week.
In the op-ed, Khan detailed several ways AI might be used to harm consumers or the market that she believes federal enforcers should be looking for. She also compared the current inflection point around AI to the mid-2000s era in tech, when companies like Facebook and Google forever changed communications, with substantial implications for data privacy that weren’t fully realized until years later.
“What began as a revolutionary set of technologies ended up concentrating enormous private power over key services and locking in business models that come at extraordinary cost to our privacy and security,” Khan wrote.
But, she said, “The trajectory of the Web 2.0 era was not inevitable — it was instead shaped by a broad range of policy choices. And we now face another moment of choice. As the use of A.I. becomes more widespread, public officials have a responsibility to ensure this hard-learned history doesn’t repeat itself.”
One possible effect enforcers should look out for, according to Khan, is the impact of only a few firms controlling the raw materials needed to deploy AI tools. That’s because that type of control could enable dominant companies to leverage their power to exclude rivals, “picking winners and losers in ways that further entrench their dominance.”
Khan also warned that AI tools used to set prices “can facilitate collusive behavior that unfairly inflates prices — as well as forms of precisely targeted price discrimination.”
“The F.T.C. is well equipped with legal jurisdiction to handle the issues brought to the fore by the rapidly developing A.I. sector, including collusion, monopolization, mergers, price discrimination and unfair methods of competition,” she wrote.
Khan also warned that generative AI “risks turbocharging fraud” by creating authentic-sounding messages. When it comes to scams and deceptive business practices, Khan said the FTC would look not only at “fly-by-night scammers deploying these tools but also at the upstream firms that are enabling them.”
Finally, Khan said that existing laws about improper collection or use of personal data will apply to the massive datasets on which AI tools are trained, and laws prohibiting discrimination will also apply in cases where AI was used to make decisions.
An Amazon worker moves boxes on Amazon Prime Day in the East Village of New York City, July 11, 2023.
Spencer Platt | Getty Images
Amazon is extending its Prime Day discount bonanza, announcing that the annual sale will run four days this year.
The 96-hour event will start at 12:01 a.m. PT on July 8, and continue through July 11, Amazon said in a release.
For the first time, the company will roll out themed “deal drops” that change daily and are available “while supplies last.” Amazon has in recent years toyed with adding more limited-run and invite-only deals during Prime Day events to create a feeling of urgency or scarcity.
Amazon launched Prime Day in 2015 as a way to secure new members for its $139-a-year loyalty program, and to promote its own products and services while providing a sales boost in the middle of the year. In 2019, the company made Prime Day a 48-hour event, and it’s since added a second Prime Day-like event in the fall.
Prime Day is also a significant revenue driver for other retailers, which often host competing discount events.
Illustration of the SK Hynix company logo seen displayed on a smartphone screen.
Sopa Images | Lightrocket | Getty Images
Shares in South Korea’s SK Hynix extended gains to hit their highest level in more than two decades on Tuesday, following reports over the weekend that SK Group plans to build the country’s largest AI data center.
SK Hynix shares, which have surged almost 50% so far this year on the back of an AI boom, were up nearly 3%, following gains on Monday.
The company’s parent, SK Group, plans to build the AI data center in partnership with Amazon Web Services in Ulsan, according to domestic media. SK Telecom and SK Broadband are reportedly leading the initiative, with support from other affiliates, including SK Hynix.
SK Hynix is a leading supplier of dynamic random access memory, or DRAM, a type of semiconductor memory found in PCs, workstations and servers that is used to store data and program code.
The company’s DRAM rival, Samsung, was also trading up 4% on Tuesday. However, its growth has fallen behind that of SK Hynix.
On Friday, Samsung Electronics’ market cap reportedly slid to a 9-year low of 345.1 trillion won ($252 billion) as the chipmaker struggles to capitalize on AI-led demand.
SK Hynix, on the other hand, has become a leader in high bandwidth memory — a type of DRAM used in artificial intelligence servers — supplying to clients such as AI behemoth Nvidia.
A report from Counterpoint Research in April said that SK Hynix had captured 70% of the HBM market by revenue share in the first quarter.
This HBM strength helped it overtake Samsung in the overall DRAM market for the first time, with a 36% global market share compared with Samsung’s 34%.
OpenAI has been awarded a $200 million contract to provide the U.S. Defense Department with artificial intelligence tools.
The department announced the one-year contract on Monday, months after OpenAI said it would collaborate with defense technology startup Anduril to deploy advanced AI systems for “national security missions.”
“Under this award, the performer will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” the Defense Department said. It’s the first contract with OpenAI listed on the Department of Defense’s website.
Anduril received a $100 million defense contract in December. Weeks earlier, OpenAI rival Anthropic said it would work with Palantir and Amazon to supply its AI models to U.S. defense and intelligence agencies.
Sam Altman, OpenAI’s co-founder and CEO, said in a discussion with OpenAI board member and former National Security Agency leader Paul Nakasone at a Vanderbilt University event in April that “we have to and are proud to and really want to engage in national security areas.”
OpenAI did not immediately respond to a request for comment.
The Defense Department specified that the contract is with OpenAI Public Sector LLC, and that the work will mostly occur in the National Capital Region, which encompasses Washington, D.C., and several nearby counties in Maryland and Virginia.
Meanwhile, OpenAI is working to build additional computing power in the U.S. In January, Altman appeared alongside President Donald Trump at the White House to announce the $500 billion Stargate project to build AI infrastructure in the U.S.
The new contract will represent a small portion of revenue at OpenAI, which is generating over $10 billion in annualized sales. In March, the company announced a $40 billion financing round at a $300 billion valuation.
In April, Microsoft, which supplies cloud infrastructure to OpenAI, said the U.S. Defense Information Systems Agency has authorized the use of the Azure OpenAI service with secret classified information.