House Speaker Nancy Pelosi (D-CA) holds her weekly press conference in the United States Capitol in Washington, May 13, 2021.
Evelyn Hockstein | Reuters
The Trump Justice Department’s reported decision to subpoena tech companies for account data of U.S. lawmakers was a step that “goes even beyond Richard Nixon,” House Speaker Nancy Pelosi, D-Calif., said in an interview on CNN’s “State of the Union” on Sunday.
“Richard Nixon had an enemies list,” Pelosi said. “This is about undermining the rule of law.”
The New York Times reported Thursday that the Justice Department under the former president in 2017 and 2018 subpoenaed Apple for information from the accounts of at least a dozen people tied to the House Intelligence Committee, including two Democratic lawmakers: Chairman Adam Schiff, D-Calif., and Rep. Eric Swalwell, D-Calif. Microsoft acknowledged Friday it had received a similar request.
The investigation reportedly sought the source of leaks about contact between Trump associates and Russia. A gag order initially prevented Apple and Microsoft from notifying the owners of the affected accounts about the subpoenas, the companies said. Apple said it didn’t know the probe involved the metadata of lawmakers when it complied.
The Justice Department’s internal watchdog said it would investigate the probe. While that step is important, Pelosi said, “it is not a substitute for what we must do in the Congress,” adding that she would ensure a review of the situation in the House.
Pelosi expressed disbelief in the claims by former Attorneys General Bill Barr and Jeff Sessions that they were unaware of the probes into lawmakers. She said they must testify under oath, though she did not say whether she would subpoena their testimony should they not voluntarily comply.
“How could it be that there could be an investigation of members in the other branch of government and the press and the rest too and the attorneys general did not know?” she said. “So who are these people and are they still in the Justice Department?”
Google CEO Sundar Pichai speaks in conversation with Emily Chang during the APEC CEO Summit at Moscone West on November 16, 2023 in San Francisco, California. The APEC summit is being held in San Francisco and runs through November 17.
Google is launching what it considers its largest and most capable artificial intelligence model Wednesday, as pressure mounts on the company to show how it will monetize AI.
The large language model Gemini will come in three sizes: Gemini Ultra, the largest and most capable version; Gemini Pro, which scales across a wide range of tasks; and Gemini Nano, which will handle specific tasks and run on mobile devices.
For now, the company is planning to license Gemini to customers through Google Cloud for them to use in their own applications. Starting Dec. 13, developers and enterprise customers can access Gemini Pro via the Gemini API in Google AI Studio or Google Cloud Vertex AI. Android developers will also be able to build with Gemini Nano. Gemini will also be used to power Google products like its Bard chatbot and Search Generative Experience, which tries to answer search queries with conversational-style text (SGE is not widely available yet).
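For developers, that access takes the form of an ordinary API call. As a rough sketch only (it assumes the google-generativeai Python SDK and an API key from Google AI Studio; model names and method signatures may differ between releases), a request to Gemini Pro could look something like this:

```python
# Minimal sketch of calling Gemini Pro through the Google AI Studio API.
# Assumes the google-generativeai SDK is installed (pip install google-generativeai)
# and that YOUR_API_KEY is a placeholder for a real AI Studio key.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

model = genai.GenerativeModel("gemini-pro")  # model name used at the Dec. 2023 launch
response = model.generate_content("Explain the difference between Gemini Ultra, Pro and Nano.")
print(response.text)
```

Enterprise customers going through Google Cloud Vertex AI would use that platform’s own client libraries instead, but the basic pattern of sending a prompt and reading back generated text is the same.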
Gemini Ultra is the first model to outperform human experts on MMLU (massive multitask language understanding), which uses a combination of 57 subjects such as math, physics, history, law, medicine and ethics for testing both world knowledge and problem-solving abilities, the company said in a blog post Wednesday. It can supposedly understand nuance and reasoning in complex subjects.
Sundar Pichai, chief executive officer of Alphabet Inc., during the Google I/O Developers Conference in Mountain View, California, US, on Wednesday, May 10, 2023.
David Paul Morris | Bloomberg | Getty Images
“Gemini is the result of large-scale collaborative efforts by teams across Google, including our colleagues at Google Research,” wrote CEO Sundar Pichai in a blog post Wednesday. “It was built from the ground up to be multimodal, which means it can generalize and seamlessly understand, operate across and combine different types of information including text, code, audio, image and video.”
Starting today, Google’s chatbot Bard will use Gemini Pro for advanced reasoning, planning, understanding and other capabilities. Early next year, the company will launch “Bard Advanced,” which will use Gemini Ultra, executives said on a call with reporters Tuesday. They described it as the biggest update yet to the ChatGPT-like chatbot.
The update comes eight months after the search giant first launched Bard and one year after OpenAI launched ChatGPT on GPT-3.5. In March of this year, the Sam Altman-led startup launched GPT-4. Executives said Tuesday that Gemini Pro outperformed GPT-3.5 but dodged questions about how it stacked up against GPT-4.
When asked if Google has plans to charge for access to “Bard Advanced,” Google’s general manager for Bard, Sissie Hsiao, said it is focused on creating a good experience and doesn’t have any monetization details yet.
When asked during a press briefing whether Gemini has any novel capabilities compared with current-generation LLMs, Eli Collins, vice president of product at Google DeepMind, answered, “I suspect it does,” but said the team is still working to understand Gemini Ultra’s novel capabilities.
Google reportedly postponed the launch of Gemini because it wasn’t ready, bringing back memories of the company’s rocky rollout of its AI tools at the beginning of the year.
Multiple reporters asked about the delay, and Collins answered that testing more advanced models takes longer. He said Gemini is the most extensively tested AI model the company has built and that it has undergone “the most comprehensive safety evaluations” of any Google model.
Collins said that despite being Google’s largest model, Gemini Ultra is significantly cheaper to serve. “It’s not just more capable, it’s more efficient,” he said. “We still require significant compute to train Gemini but we’re getting much more efficient in terms of our ability to train these models.”
Collins said the company will release a technical white paper with more details on the model Wednesday but said it won’t be releasing the parameter count. Earlier this year, CNBC found Google’s PaLM 2 large language model, its latest AI model at the time, used nearly five times as much text data for training as its predecessor.
Also on Wednesday, Google introduced its next-generation tensor processing unit for training AI models. The TPU v5p chip, which Salesforce and startup Lightricks have begun using, offers better performance for the price than the TPU v4 announced in 2021, Google said. But the company didn’t provide information on performance compared with market leader Nvidia.
The chip announcement comes weeks after cloud rivals Amazon and Microsoft showed off custom silicon targeting AI.
During Google’s third-quarter earnings conference call in October, investors pressed executives on how the company plans to turn AI into actual profit.
In August, Google launched an “early experiment” called Search Generative Experience, or SGE, which lets users see what a generative AI experience would look like when using the search engine — search is still a major profit center for the company. The result is more conversational, reflecting the age of chatbots. However, it is still considered an experiment and has yet to launch to the general public.
Investors have been asking for a timeline for SGE since May, when the company first announced the experiment at its annual developer conference, Google I/O. The Gemini announcement Wednesday hardly mentioned SGE, and executives were vague about plans for a general rollout, saying only that Gemini would be incorporated into it “in the next year.”
“This new era of models represents one of the biggest science and engineering efforts we’ve undertaken as a company,” Pichai said in Wednesday’s blog post. “I’m genuinely excited for what’s ahead, and for the opportunities Gemini will unlock for people everywhere.”
Facebook co-founder and Meta CEO Mark Zuckerberg sits in his seat inside a bipartisan Artificial Intelligence Insight Forum for all U.S. senators hosted by Senate Majority Leader Chuck Schumer at the U.S. Capitol in Washington, D.C., on Sept. 13, 2023.
Leah Millis | Reuters
Facebook and Instagram created “prime locations” for sexual predators that enabled child sexual abuse, solicitation, and trafficking, New Mexico’s attorney general alleged in a civil suit filed Wednesday against Meta and CEO Mark Zuckerberg.
The suit was brought after an “undercover investigation” allegedly revealed numerous instances of sexually explicit content being served to minors, of child sexual coercion, and of the sale of child sexual abuse material, or CSAM, New Mexico Attorney General Raúl Torrez said in a press release.
The suit alleges that “certain child exploitative content” is ten times “more prevalent” on Facebook and Instagram than on pornography site PornHub and adult content platform OnlyFans, according to the release.
“Child exploitation is a horrific crime and online predators are determined criminals,” Meta said in a statement to CNBC. A spokesperson said the company deploys sophisticated technology, hires child safety experts, reports content to the National Center for Missing and Exploited Children, and shares information and tools with other companies and law enforcement, including state attorneys general, to help root out predators.
The New Mexico suit follows coordinated legal actions against Meta by 42 other attorneys general in October. Those actions alleged that Facebook and Instagram directly targeted and were addictive to children and teens.
New Mexico’s suit, by contrast, alleges Meta and Zuckerberg violated the state’s Unfair Practices Act. The four-count suit alleges that the company and Zuckerberg engaged in “unfair trade practices” by facilitating the distribution of CSAM and the trafficking of minors, and undermined the health and safety of New Mexican children.
The lawsuit alleges that Meta’s algorithms promote sexual and exploitative content to users and that Facebook and Instagram lack “effective” age verification. The suit also alleges that the company failed to identify child sexual exploitation “networks” and failed to fully prevent users suspended for those violations from rejoining the platforms under new accounts.
“In one month alone, we disabled more than half a million accounts for violating our child safety policies,” a Meta spokesperson said in a statement.
“Mr. Zuckerberg and other Meta executives are aware of the serious harm their products can pose to young users, and yet they have failed to make sufficient changes to their platforms that would prevent the sexual exploitation of children,” Torrez said in the release.
New Mexico is seeking civil penalties and an order requiring Meta to implement effective age verification, improve its systems for detecting and removing CSAM, and address the features that allegedly “amplify” CSAM.
Dubai, UNITED ARAB EMIRATES — A global rush for the next wave of generative artificial intelligence is increasing public scrutiny on an often-overlooked but critically important environmental issue: Big Tech’s expanding water footprint.
Tech giants including Microsoft and Alphabet-owned Google have recently reported a substantial upswing in their water consumption, and researchers say one of the main culprits is the race to capitalize on the next wave of AI.
Shaolei Ren, a researcher at the University of California, Riverside, published a study in April investigating the resources needed to run buzzy generative AI models, such as OpenAI’s ChatGPT.
Ren and his colleagues found that ChatGPT gulps 500 milliliters of water (roughly the amount of water in a standard 16-ounce bottle) for every 10 to 50 prompts, depending on when and where the AI model is deployed.
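Taken at face value, those figures work out to somewhere between 10 and 50 milliliters of water per prompt. A quick back-of-the-envelope calculation, using only the numbers reported in the study, makes the range explicit:

```python
# Back-of-the-envelope estimate based on the UC Riverside study's reported figures:
# roughly 500 ml of water consumed per 10 to 50 ChatGPT prompts.
water_ml = 500
prompts_low, prompts_high = 10, 50

per_prompt_max = water_ml / prompts_low    # 50 ml per prompt (less favorable case)
per_prompt_min = water_ml / prompts_high   # 10 ml per prompt (more favorable case)

print(f"Estimated water use: {per_prompt_min:.0f} to {per_prompt_max:.0f} ml per prompt")
```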
The study’s authors warned that if the growing water footprint of AI models is not sufficiently addressed, the issue could become a major roadblock to the socially responsible and sustainable use of AI in the future.
People take part in a protest called by Uruguay’s Central Union (PIT-CNT) in “defense of water,” directed at the national authorities’ handling of the country’s shrinking drinking water reserves, in Montevideo on May 31, 2023.
Eitan Abramovich | Afp | Getty Images
ChatGPT creator OpenAI, part owned by Microsoft, did not respond to a request to comment on the study’s findings.
“In general, the public is getting more knowledgeable and aware of the water issue, and if they learn that Big Tech is taking away their water resources and they are not getting enough water, nobody will like it,” Ren told CNBC via videoconference.
“I think we are going to see more clashes over the water usage in the coming years as well, so this type of risk will have to be taken care of by the companies,” he added.
‘A hidden cost’
Data centers are part of the lifeblood of Big Tech — and a lot of water is required to keep the power-hungry servers cool and running smoothly.
In July, protesters took to the streets of Uruguay’s capital to push back against Google’s plan to build a data center. The proposal sought to use vast quantities of water at a time when the South American country was suffering its worst drought in 74 years.
Google reportedly said at the time the project was still at an exploratory phase and stressed that sustainability remained at the heart of its mission.
In Microsoft’s latest environmental sustainability report, the U.S. tech company disclosed that its global water consumption rose by more than a third from 2021 to 2022, climbing to nearly 1.7 billion gallons.
That means Microsoft’s annual water use would be enough to fill more than 2,500 Olympic-sized swimming pools.
For Google, meanwhile, total water consumption at its data centers and offices came in at 5.6 billion gallons in 2022, a 21% increase on the year before.
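Those totals are easier to picture as pool equivalents. As a rough sanity check, and assuming an Olympic-sized pool holds about 2,500 cubic meters (roughly 660,000 U.S. gallons), the reported figures convert as follows:

```python
# Rough conversion of the reported 2022 water figures into Olympic-pool equivalents.
# Assumes an Olympic-sized pool holds about 2,500 cubic meters (~660,000 US gallons).
GALLONS_PER_POOL = 2_500 * 264.172  # cubic meters to US gallons, ~660,430 per pool

microsoft_gallons = 1.7e9  # nearly 1.7 billion gallons, per Microsoft's report
google_gallons = 5.6e9     # 5.6 billion gallons, per Google's report

print(f"Microsoft: ~{microsoft_gallons / GALLONS_PER_POOL:,.0f} Olympic pools")
print(f"Google:    ~{google_gallons / GALLONS_PER_POOL:,.0f} Olympic pools")
```

That puts Microsoft at roughly 2,600 pools, consistent with the “more than 2,500” figure above, and Google at well over 8,000.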
Both companies are working to reduce their water footprint and become “water positive” by the end of the decade, meaning that they aim to replenish more water than they use.
It’s notable, however, that their latest water consumption figures were disclosed before the launch of their own respective ChatGPT competitors. The computing power needed to run Microsoft’s Bing Chat and Google Bard could mean significantly higher levels of water use over the coming months.
“With AI, we’re seeing the classic problem with technology in that you have efficiency gains but then you have rebound effects with more energy and more resources being used,” said Somya Joshi, head of division: global agendas, climate and systems at the Stockholm Environment Institute.
“And when it comes to water, we’re seeing an exponential rise in water use just for supplying cooling to some of the machines that are needed, like heavy computation servers, and large-language models using larger and larger amounts of data,” Joshi told CNBC during the COP28 climate summit in the United Arab Emirates.
“So, on one hand, companies are promising to their customers more efficient models … but this comes with a hidden cost when it comes to energy, carbon and water,” she added.
How are tech firms reducing their water footprint?
A spokesperson for Microsoft told CNBC that the company is investing in research to measure the energy and water use and carbon impact of AI, while working on ways to make large systems more efficient.
“AI will be a powerful tool for advancing sustainability solutions, but we need a plentiful clean energy supply globally to power this new technology, which has increased consumption demands,” a spokesperson for Microsoft told CNBC via email.
“We will continue to monitor our emissions, accelerate progress while increasing our use of clean energy to power datacenters, purchasing renewable energy, and other efforts to meet our sustainability goals of being carbon negative, water positive and zero waste by 2030,” they added.
Aerial view of the proposed site of the Meta Platforms Inc. data center outside Talavera de la Reina, Spain, on Monday, July 17, 2023. Meta is planning to build a €1 billion ($1.1 billion) data center, which it expects will use about 665 million liters (176 million gallons) of water a year, and up to 195 liters per second during “peak water flow,” according to a technical report.
Paul Hanna | Bloomberg | Getty Images
Separately, a Google spokesperson told CNBC that research shows that while AI computing demand has dramatically increased, the energy needed to power this technology is rising “at a much slower rate than many forecasts have predicted.”
“We are using tested practices to reduce the carbon footprint of workloads by large margins; together these principles can reduce the energy of training a model by up to 100x and emissions by up to 1000x,” the spokesperson said.
“Google data centers are designed, built and operated to maximize efficiency – compared with five years ago, Google now delivers around 5X as much computing power with the same amount of electrical power,” they continued.
“To support the next generation of fundamental advances in AI, our latest TPU v4 [supercomputer] is proven to be one of the fastest, most efficient, and most sustainable ML [machine learning] infrastructure hubs in the world.”