

Elon Musk announced his new company xAI, which he says aims to understand the true nature of the universe.

Jaap Arriens | Nurphoto | Getty Images

Elon Musk’s artificial intelligence startup, xAI, is being accused by environmental and health advocates of adding to the pollution problem in Memphis, Tennessee, by using natural gas burning turbines at its new data center, and doing so without a permit.

The company said it was opening the data center in June in a former Electrolux factory, shortly after announcing it had raised $6 billion at a $24 billion valuation. In a post on X last month, Musk boasted that xAI had begun training its AI models at the facility using 100,000 of Nvidia’s H100 processors.

The Southern Environmental Law Center sent a letter this week to the Health Department in Shelby County, where Memphis is located, and to a regional office of the Environmental Protection Agency on behalf of several local groups, asking regulators to investigate xAI for its unpermitted use of the turbines and the pollution they create.

The letter notes that xAI “has installed at least 18 gas combustion turbines over the last several months (with more potentially on the way).”

The company has been using the turbines to power the facility, but its long-term plan is to use power from the local utility, Memphis Light, Gas and Water and the Tennessee Valley Authority.

MLGW told CNBC that it started providing 50 megawatts of power to xAI at the beginning of August. However, the xAI facility requires an additional 100 megawatts. The utility has installed more circuit breakers, and started making improvements to transmission lines in the area to prepare for the added power consumption by xAI, as well.

Musk, who is also the CEO of Tesla and SpaceX and owner of social media company X, started xAI in 2023 to develop large language models and AI products that aim to compete with those from Google, Microsoft and OpenAI. The company’s initial product is a chatbot called Grok, billed as a politically incorrect alternative to OpenAI’s ChatGPT. AI models generally require massive amounts of power for data training and processing.

“This plant requires an enormous amount of electricity,” the advocates wrote in the letter.

Some of the 18 turbines are visible from the road around the property and, according to the advocates’ letter, emit air pollutants called nitrogen oxides (NOx) that add to a longstanding smog problem in the area. Shelby County has been given an “F” grade by the American Lung Association for its smog.


According to the Centers for Disease Control and Prevention’s website, even low levels of nitrogen oxides in the air can irritate a person’s eyes, nose, throat and lungs, causing them to cough, experience shortness of breath, tiredness and nausea. Breathing high levels of nitrogen oxides can cause “rapid burning, spasms, and swelling of tissues in the throat and upper respiratory tract,” and other serious health problems, the agency says.

Businesses in Tennessee are typically required to obtain permits to operate the types of turbines used by xAI. The permits would establish the allowable concentration of emissions, and determine efficiency requirements for the engines.

‘Significant health and environmental impact’

A permit would also mandate air quality testing to make sure users aren’t polluting more than they had planned to in the area due to issues like poor engine maintenance.

“The overarching concern remains that there has been very little transparency and opportunity for public input for the xAI project,” Amanda Garcia, a senior attorney with the Tennessee office of the Southern Environmental Law Center, told CNBC. The added concern, she said, is that it’s “already having significant health and environmental impact on the surrounding community.”

The groups wrote in the letter that the xAI turbines already in place have the capacity to emit an estimated 130 tons of nitrogen oxides annually, which would rank them as the ninth-largest source of the pollutants in the county. Their combined capacity could power around 50,000 homes.

Musk-led companies have a history of building facilities or operating high-emissions equipment without obtaining permits first.

CNBC reported earlier this month that SpaceX operated a water deluge and cooling system at its launch facility in Boca Chica, Texas, repeatedly discharging industrial wastewater there without a permit, a violation of the Clean Water Act.

Musk’s tunneling venture, The Boring Co., was also fined by Texas environmental regulators for a similar issue — discharging wastewater into the Colorado River in Bastrop, Texas, without applying for permits or installing appropriate pollution controls.

Tesla was cited by a California air pollution regulator in 2021 for installing and modifying paint shop equipment that emitted hazardous air pollutants, without a permit and reviews as required by the Clean Air Act.

The EPA regional office covering Memphis didn’t respond to a request for comment. Nor did xAI.



How AI regulation could shake out in 2025


U.S. President-elect Donald Trump and Elon Musk watch the launch of the sixth test flight of the SpaceX Starship rocket in Brownsville, Texas, on Nov. 19, 2024.

Brandon Bell | Via Reuters

The U.S. political landscape is set to undergo some shifts in 2025 — and those changes will have some major implications for the regulation of artificial intelligence.

President-elect Donald Trump will be inaugurated on Jan. 20. Joining him in the White House will be a raft of top advisors from the world of business — including Elon Musk and Vivek Ramaswamy — who are expected to influence policy thinking around nascent technologies such as AI and cryptocurrencies.

Across the Atlantic, a tale of two jurisdictions has emerged, with the U.K. and European Union diverging in regulatory thinking. While the EU has taken more of a heavy hand with the Silicon Valley giants behind the most powerful AI systems, Britain has adopted a more light-touch approach.

In 2025, the state of AI regulation globally could be in for a major overhaul. CNBC takes a look at some of the key developments to watch — from the evolution of the EU’s landmark AI Act to what a Trump administration could do for the U.S.

Musk’s U.S. policy influence

Elon Musk walks on Capitol Hill on the day of a meeting with Senate Republican Leader-elect John Thune (R-SD), in Washington, U.S. December 5, 2024. 

Benoit Tessier | Reuters

Although it’s not an issue that featured very heavily during Trump’s election campaign, artificial intelligence is expected to be one of the key sectors set to benefit from the next U.S. administration.

For one, Trump appointed Musk, CEO of electric car manufacturer Tesla, to co-lead his “Department of Government Efficiency” alongside Ramaswamy, an American biotech entrepreneur who dropped out of the 2024 presidential election race to back Trump.

Matt Calkins, CEO of Appian, told CNBC that Trump’s close relationship with Musk could put the U.S. in a good position when it comes to AI, citing the billionaire’s experience as a co-founder of OpenAI and CEO of xAI, his own AI lab, as positive indicators.

“We’ve finally got one person in the U.S. administration who truly knows about AI and has an opinion about it,” Calkins said in an interview last month. Musk was one of Trump’s most prominent endorsers in the business community, even appearing at some of his campaign rallies.

There is currently no confirmation on what Trump has planned in terms of possible presidential directives or executive orders. But Calkins thinks it’s likely Musk will look to suggest guardrails to ensure AI development doesn’t endanger civilization — a risk he’s warned about multiple times in the past.

“He has an unquestioned reluctance to allow AI to cause catastrophic human outcomes – he’s definitely worried about that, he was talking about it long before he had a policy position,” Calkins told CNBC.

Currently, there is no comprehensive federal AI legislation in the U.S. Rather, there’s been a patchwork of regulatory frameworks at the state and local level, with numerous AI bills introduced across 45 states plus Washington D.C., Puerto Rico and the U.S. Virgin Islands.

The EU AI Act

The European Union is so far the only jurisdiction globally to drive forward comprehensive rules for artificial intelligence with its AI Act.

Jaque Silva | Nurphoto | Getty Images

The European Union has so far been the only jurisdiction globally to push forward with comprehensive statutory rules for the AI industry. Earlier this year, the bloc’s AI Act — a first-of-its-kind AI regulatory framework — officially entered into force.

The law isn’t fully in force yet, but it’s already causing tension among large U.S. tech companies, which are concerned that some aspects of the regulation are too strict and may quash innovation.

In December, the EU AI Office, a newly created body overseeing models under the AI Act, published a second-draft code of practice for general-purpose AI (GPAI) models, which refers to systems like OpenAI’s GPT family of large language models, or LLMs.

The second draft included exemptions for providers of certain open-source AI models. Such models are typically available to the public to allow developers to build their own custom versions. It also includes a requirement for developers of “systemic” GPAI models to undergo rigorous risk assessments.

The Computer & Communications Industry Association — whose members include Amazon, Google and Meta — warned it “contains measures going far beyond the Act’s agreed scope, such as far-reaching copyright measures.”

The AI Office wasn’t immediately available for comment when contacted by CNBC.

It’s worth noting the EU AI Act is far from reaching full implementation.

As Shelley McKinley, chief legal officer of popular code repository platform GitHub, told CNBC in November, “the next phase of the work has started, which may mean there’s more ahead of us than there is behind us at this point.”

For example, in February, the first provisions of the Act will become enforceable. These provisions cover “high-risk” AI applications such as remote biometric identification, loan decisioning and educational scoring. A third draft of the code on GPAI models is slated for publication that same month.

European tech leaders are concerned about the risk that punitive EU measures on U.S. tech firms could provoke a reaction from Trump, which might in turn cause the bloc to soften its approach.

Take antitrust regulation, for example. The EU’s been an active player taking action to curb U.S. tech giants’ dominance — but that’s something that could result in a negative response from Trump, according to Swiss VPN firm Proton’s CEO Andy Yen.

“[Trump’s] view is he probably wants to regulate his tech companies himself,” Yen told CNBC in a November interview at the Web Summit tech conference in Lisbon, Portugal. “He doesn’t want Europe to get involved.”

UK copyright review

Britain’s Prime Minister Keir Starmer gives a media interview while attending the 79th United Nations General Assembly at the United Nations Headquarters in New York, U.S. September 25, 2024.

Leon Neal | Via Reuters

One country to watch is the U.K. Britain has previously shied away from introducing statutory obligations for AI model makers out of concern that new legislation could prove too restrictive.

However, Keir Starmer’s government has said it plans to draw up legislation for AI, although details remain thin for now. The general expectation is that the U.K. will take a more principles-based approach to AI regulation, as opposed to the EU’s risk-based framework.

Last month, the government dropped its first major indicator for where regulation is moving, announcing a consultation on measures to regulate the use of copyrighted content to train AI models. Copyright is a big issue for generative AI and LLMs, in particular.

Most LLMs use public data from the open web to train their AI models. But that often includes examples of artwork and other copyrighted material. Artists and publishers like the New York Times allege that these systems are unfairly scraping their valuable content without consent to generate original output.

To address this issue, the U.K. government is considering making an exception to copyright law for AI model training, while still allowing rights holders to opt out of having their works used for training purposes.

Appian’s Calkins said that the U.K. could end up being a “global leader” on the issue of copyright infringement by AI models, adding that the country isn’t “subject to the same overwhelming lobbying blitz from domestic AI leaders that the U.S. is.”

U.S.-China relations a possible point of tension

U.S. President Donald Trump, right, and Xi Jinping, China’s president, walk past members of the People’s Liberation Army (PLA) during a welcome ceremony outside the Great Hall of the People in Beijing, China, on Thursday, Nov. 9, 2017.  

Qilai Shen | Bloomberg | Getty Images

Lastly, as world governments seek to regulate fast-growing AI systems, there’s a risk geopolitical tensions between the U.S. and China may escalate under Trump.

In his first term as president, Trump enforced a number of hawkish policy measures on China, including a decision to add Huawei to a trade blacklist restricting it from doing business with American tech suppliers. He also launched a bid to ban TikTok, which is owned by Chinese firm ByteDance, in the U.S. — although he’s since softened his position on TikTok.

China is racing to beat the U.S. for dominance in AI. At the same time, the U.S. has taken measures to restrict China’s access to key technologies, mainly chips like those designed by Nvidia, which are required to train more advanced AI models. China has responded by attempting to build its own homegrown chip industry.

Technologists worry that a geopolitical fracturing between the U.S. and China on artificial intelligence could result in other risks, such as the potential for one of the two to develop a form of AI smarter than humans.

Max Tegmark, founder of the nonprofit Future of Life Institute, believes the U.S. and China could in the future create a form of AI that can improve itself and design new systems without human supervision, potentially forcing both countries’ governments to individually come up with rules around AI safety.

“My optimistic path forward is the U.S. and China unilaterally impose national safety standards to prevent their own companies from doing harm and building uncontrollable AGI, not to appease the rival superpowers, but just to protect themselves,” Tegmark told CNBC in a November interview.

Governments are already trying to work together to figure out how to create regulations and frameworks around AI. In 2023, the U.K. hosted a global AI safety summit, which both the U.S. and Chinese governments attended, to discuss potential guardrails around the technology.

– CNBC’s Arjun Kharpal contributed to this report


Amit Yoran, chair and CEO of cybersecurity firm Tenable, dies unexpectedly after cancer battle


Amit Yoran, CEO and chairman of Tenable

H/O Tenable

Amit Yoran, who ushered cybersecurity company Tenable into the public market as chief executive, died on Friday. He was 54.

Yoran’s passing was confirmed by Tenable in a Saturday press release. While the company said his death was unexpected, Yoran went on medical leave early last month as he battled cancer.

Funeral details have not yet been announced, the company said on Saturday.

Yoran took the helm of Tenable in 2016, his latest leadership role in the cybersecurity field. He previously served as president of RSA Security from 2014 to 2016. Yoran founded and led NetWitness as CEO between 2006 and 2011 before it was acquired by RSA, according to his LinkedIn page.

His decadeslong career in cybersecurity also included government and nonprofit work. Yoran was National Cybersecurity Director for the U.S. Department of Homeland Security from 2003 to 2004, and had served on the board of the Center for Internet Security since 2019.

Two years into Yoran’s tenure, Tenable went public on the Nasdaq. At the time, the IPO was seen as a success story for cybersecurity companies on Wall Street.

Yoran described the company’s focus on the vulnerabilities of businesses’ technology as unique in the market, while also noting its successful shift to a subscription model. By 2018, Yoran said, more than half of Fortune 500 companies were Tenable customers.

“We’ve become one of the most trusted and beloved brands in cybersecurity,” he told CNBC at the time of Tenable’s IPO. “Only the best and highest-performing private companies have the opportunity to go public. And that gives us a spot on a much larger stage to be able to tell our story.”

Tenable CFO Steve Vintz and Chief Operating Officer Mark Thurmond have acted as co-CEOs since Yoran went on medical leave in December. They will continue sharing the role while its board of directors looks for a permanent successor, the company said.

Yoran had expected his leave to last only a few months and said his condition was a “treatable situation,” according to a note to employees published on his LinkedIn page. He had “complete trust” in Vintz and Thurmond to lead the company in his absence.

“We have much to do and there is no time to waste,” Yoran wrote. “As I take a brief pause to prioritize my health, I will stay as connected as I can while giving myself the space to heal fully. I am deeply grateful for each of you, not only for the dedication you bring to your work but for the sense of community we’ve built together.”

Yoran was also the chair of Tenable’s board, a position that now will be held by Art Coviello, the company’s lead independent director. In a statement, Coviello called Yoran an “extraordinary” leader, colleague and friend.

“His passion for cybersecurity, his strategic vision, and his ability to inspire those around him have shaped Tenable’s culture and mission,” Coviello said. “His legacy will continue to guide us as we move forward.”


Microsoft expects to spend $80 billion on AI-enabled data centers in fiscal 2025


Microsoft Vice Chair and President Brad Smith participates in the first day of Web Summit in Lisbon, Portugal, on Nov. 12, 2024.

Nurphoto | Getty Images

Microsoft plans to spend $80 billion in fiscal 2025 on the construction of data centers that can handle artificial intelligence workloads, the company said in a Friday blog post.

Over half of the expected AI infrastructure spending will take place in the U.S., Microsoft Vice Chair and President Brad Smith wrote. Microsoft’s 2025 fiscal year ends in June. 

“Today, the United States leads the global AI race thanks to the investment of private capital and innovations by American companies of all sizes, from dynamic start-ups to well-established enterprises,” Smith said. “At Microsoft, we’ve seen this firsthand through our partnership with OpenAI, from rising firms such as Anthropic and xAI, and our own AI-enabled software platforms and applications.”

Several top-tier technology companies are rushing to spend billions on Nvidia graphics processing units for training and running AI models. The fast spread of OpenAI’s ChatGPT assistant, which launched in late 2022, kicked off the AI race for companies to deliver their own generative AI capabilities. Having invested more than $13 billion in OpenAI, Microsoft provides cloud infrastructure to the startup and has incorporated its models into Windows, Teams and other products.

Microsoft reported $20 billion in capital expenditures and assets acquired under finance leases worldwide, with $14.9 billion spent on property and equipment, in the first quarter of fiscal 2025. Capital expenditures will increase sequentially in the fiscal second quarter, Microsoft Chief Financial Officer Amy Hood said in October.

The company’s revenue from Azure and other cloud services grew 33% year over year, with 12 percentage points of that growth stemming from AI services.

Smith called on President-elect Donald Trump‘s incoming administration to protect the country’s leadership in AI through education and the promotion of U.S. AI technologies abroad.

“China is starting to offer developing countries subsidized access to scarce chips, and it’s promising to build local AI data centers,” Smith wrote. “The Chinese wisely recognize that if a country standardizes on China’s AI platform, it likely will continue to rely on that platform in the future.”

He added, “The best response for the United States is not to complain about the competition but to ensure we win the race ahead. This will require that we move quickly and effectively to promote American AI as a superior alternative.”

