The unoccupied space-facing port on the International Space Station’s Harmony module is pictured several hours before the SpaceX Dragon Freedom spacecraft would relocate there after undocking from Harmony’s forward port.

NASA Johnson Space Center

In the race to conquer the cosmos, the greatest challenge to space exploration might be the vastness of the unknown, but that distance from planet Earth isn’t dissuading the invisible hands of cybercriminals aiming to sabotage missions from thousands of miles below.

Spacecraft, satellites, and space-based systems all face cybersecurity threats that are becoming increasingly sophisticated and dangerous. With interconnected technologies controlling everything from navigation to anti-ballistic missiles, a security breach could have catastrophic consequences.

“There are unique constraints to operating in space where you do not have physical access to spacecraft for repairs or updates after launch,” said William Russell, director of contracting and national security acquisitions at the U.S. Government Accountability Office. “The consequences of malicious cyber activities include loss of mission data, decreased lifespan or capability of space systems or constellations, or the control of space vehicles.”

Critical space infrastructure is susceptible to threats across three key segments: in space, on the ground, and in the communication links between the two. A breach in one segment can cascade into failures across all three, said Wayne Lonstein, co-founder and CEO at VFT Solutions, and co-author of Cyber-Human Systems, Space Technologies, and Threats. “In many ways, the threats to critical infrastructure on Earth can cause vulnerabilities in space,” Lonstein said. “Internet, power, spoofing and so many other vectors that can cause havoc in space,” he added.

AI risks in mission critical systems

The integration of artificial intelligence into space projects has heightened the risk of sophisticated cyber attacks orchestrated by state actors and individual hackers. AI integration into space exploration allows more decision-making with less human oversight.

For example, NASA is using AI to target scientific specimens for planetary rovers. However, reduced human oversight could make these missions more prone to unexplained and potentially calamitous cyberattacks, said Sylvester Kaczmarek, chief technology officer at OrbiSky Systems, which specializes in the integration of AI, robotics, cybersecurity, and edge computing in aerospace applications.

Data poisoning, where attackers feed corrupted data to AI models, is one example of what could go wrong, Kaczmarek said. Another threat, he said, is model inversion, where adversaries reverse-engineer AI models to extract sensitive information, potentially compromising mission integrity. If compromised, AI systems could be used to interfere with or take control of strategically important national space missions.
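Data poisoning can be illustrated with a toy example (not drawn from the article, and far simpler than any real mission system): flipping a handful of training labels is enough to move a simple classifier's decision boundary.

```python
# Minimal illustration of data poisoning: injecting a few mislabeled
# points shifts a nearest-centroid classifier's decision boundary.
# The data and classifier here are hypothetical toy constructions.

def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def classify(x, c0, c1):
    # Assign x to whichever class centroid is closer (squared distance).
    d0 = sum((a - b) ** 2 for a, b in zip(x, c0))
    d1 = sum((a - b) ** 2 for a, b in zip(x, c1))
    return 0 if d0 < d1 else 1

def train_and_classify(data, x):
    c0 = centroid([p for p, label in data if label == 0])
    c1 = centroid([p for p, label in data if label == 1])
    return classify(x, c0, c1)

# Clean training set: class 0 clusters near (0, 0), class 1 near (10, 10).
clean = [((0, 0), 0), ((1, 1), 0), ((0, 1), 0),
         ((10, 10), 1), ((9, 10), 1), ((10, 9), 1)]

# Attacker injects class-1-looking samples mislabeled as class 0.
poisoned = clean + [((10, 10), 0)] * 4

probe = (6, 6)
print(train_and_classify(clean, probe))     # -> 1 (correct)
print(train_and_classify(poisoned, probe))  # -> 0 (boundary has moved)
```

The same probe point is classified differently once the corrupted samples are absorbed into training, which is the essence of the attack Kaczmarek describes: the model's behavior is altered without the attacker ever touching the deployed system.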

“AI systems may be susceptible to unique types of cyberattacks, such as adversarial attacks, where malicious inputs are designed to deceive the AI into making incorrect decisions or predictions,” Lonstein said. AI could also enable adversaries to “carry out sophisticated espionage or sabotage operations against space systems, potentially altering mission parameters or stealing sensitive information,” he added.

The Quetzal-1 CubeSat is seen as it deploys from the JEM Small Satellite Orbital Deployer aboard the International Space Station.

NASA Johnson Space Center

Worse yet, AI can be weaponized — used to develop advanced space-based weapons or counter-space technologies that could disrupt or destroy satellites and other space assets.

The U.S. government is tightening up the integrity and security of AI systems in space. The 2023 Cyberspace Solarium Commission report stressed the importance of designating outer space as a critical infrastructure sector, urging enhanced cybersecurity protocols for satellite operators.

Lonstein recommends rigorous testing of AI systems in simulated space conditions before deployment, and redundancy as a way to safeguard against an unexpected breach. “Implement redundant systems to ensure that if one AI component fails, others can take over, thus maintaining mission integrity and functionality,” he said.

Use of strict access controls, authentication, and error-correction mechanisms can further ensure that AI systems operate on accurate information. There are also reactive measures for when those defenses are breached: AI systems can be designed with fail-safe mechanisms that revert to a “safe state” or “default mode” in the event of a malfunction or unexpected behavior, Lonstein said. Manual override is important, too. “Ensure that ground control can manually override or intervene in AI decision-making, when necessary, providing an additional layer of safety,” he added.
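The fail-safe and manual-override pattern Lonstein describes can be sketched in a few lines. This is a hypothetical illustration, not actual flight software; all names and thresholds are invented.

```python
# Toy sketch of a fail-safe autonomous controller: anomalous input
# triggers a reversion to a safe state, and a ground-control override
# always takes precedence over the automated decision.

SAFE_STATE = "hold_position"

class AutonomousController:
    def __init__(self):
        self.manual_override = None  # set by ground control when needed

    def decide(self, sensor_reading):
        # Layer 1: ground control override beats any automated output.
        if self.manual_override is not None:
            return self.manual_override
        # Layer 2: fail-safe -- out-of-range telemetry reverts to the
        # safe state instead of feeding the decision logic.
        if not (0.0 <= sensor_reading <= 100.0):
            return SAFE_STATE
        # Normal automated decision path.
        return "proceed" if sensor_reading < 50.0 else "slow_down"

ctrl = AutonomousController()
print(ctrl.decide(30.0))    # -> proceed
print(ctrl.decide(999.0))   # -> hold_position (fail-safe triggered)
ctrl.manual_override = "abort"
print(ctrl.decide(30.0))    # -> abort (ground control wins)
```

The ordering matters: the override check sits above both the fail-safe and the normal decision path, so ground control can always intervene, matching the "additional layer of safety" Lonstein recommends.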

U.S.-China competition

The rivalry between the U.S. and China includes the new battleground of space. As both nations ramp up their space ambitions and militarized capabilities beyond Earth’s atmosphere, the threat of cyberattacks targeting critical orbital assets has become an increasingly pressing concern.

“The competition between the U.S. and China, with Russia as a secondary player, heightens the risk of cyberattacks as these nations seek to gain technological superiority,” Kaczmarek said.

Though they don’t garner as much attention in the mainstream press as consumer, crypto or even nation-state hacks against key U.S. private and government infrastructure on the ground, notable cyberattacks have targeted critical space-based technologies in recent years. With the U.S., China, Russia and India intensifying their push for space dominance, the stakes have never been higher.

There were repeated cyberattacks this year on Japan’s space agency JAXA. In 2022, there were hacks on SpaceX’s Starlink satellite system, which Elon Musk attributed to Russia after the satellites were supplied to Ukraine. In August 2023, the U.S. government issued a warning that Russian and Chinese spies were aiming to steal sensitive technology and data from U.S. space companies such as SpaceX and Blue Origin. China has been implicated in numerous cyber-espionage campaigns dating back as far as a decade, such as the 2014 breach of the U.S. National Oceanic and Atmospheric Administration weather systems, jeopardizing space-based environmental monitoring.

“Nations like China and Russia target U.S. space assets to disrupt operations or steal intellectual property, potentially leading to compromised missions and a loss of technological edge,” Kaczmarek said.

Space-based systems increasingly support critical infrastructure back on Earth, and any cyberattacks on these systems could undermine national security and economic interests. Last year, the U.S. government let hackers break into a government satellite as a way to test vulnerabilities that could be exploited by the Chinese. That came amid growing concerns at the highest levels of the government that China is attempting to “deny, exploit or hijack” enemy satellites — revelations that became public in the leak of classified documents by U.S. Air National Guardsman Jack Teixeira in 2023.

“The ongoing space race and the associated technologies will continue to be impacted by Viasat-like cyberattacks,” said GAO’s Russell, referring to a 2022 cyberattack against the satellite company attributed by U.S. and U.K. intelligence to Russia as part of its war against Ukraine.

Big Tech’s space-based cloud

Private companies and the government will need to use all the cybersecurity tools at their disposal, including encryption, intrusion detection systems, and collaboration with government agencies like the Cybersecurity and Infrastructure Security Agency for intelligence sharing and coordinated defense.

“These collaborations can also involve developing cybersecurity frameworks specifically tailored to space systems,” Kaczmarek said.

At the same time, Silicon Valley-based tech companies have been making rapid advancements in the field of cybersecurity, including those designed to secure space technologies. Companies like Microsoft, Amazon, Google, and Nvidia are increasingly being enlisted by the U.S. Space Force and Department of Defense for their specialized resources and advanced cyber capabilities.

Notably, Microsoft is a founding member of the Space Information Sharing and Analysis Center and has been an active participant since its formation several years ago. “Microsoft has partnered with the U.S. Space Force to support their growth as a fully digital service, bringing the latest technologies to ensure Space Force Guardians are prepared for space-based conflicts,” said a Microsoft spokesperson via email.

As part of the $19.8 million contract, Microsoft provides its Azure cloud computing infrastructure, simulations, augmented reality, and data management tools to support and secure a wide range of Space Force missions. “Microsoft is playing a key role in defending against cyber threats in space,” the spokesperson wrote.

Google Cloud, Amazon Web Services and defense contractor General Dynamics also offer cloud infrastructure for storing and processing vast amounts of data generated by satellites and space missions.

Nvidia’s powerful GPUs can be used for processing and analyzing satellite imagery and data. According to Lonstein, the chipmaker’s AI chips can enhance image processing, anomaly detection, and predictive analytics for space missions. But there are limits to relying on technology in space operations, where it can become an added layer of risk rather than a safety benefit.

“High dependency on automated systems can lead to catastrophic failures if those systems malfunction or encounter unexpected scenarios,” Lonstein said.

A single point of failure could compromise the entire mission. Moreover, extensive use of technology could be detrimental to human operators’ skills and knowledge, which might atrophy if not regularly exercised.

“This could lead to challenges in manual operation during emergencies or system failures,” Lonstein added.

Figure AI sued by whistleblower who warned that startup’s robots could ‘fracture a human skull’

Startup Figure AI is developing general-purpose humanoid robots.

Figure AI

Figure AI, an Nvidia-backed developer of humanoid robots, was sued by the startup’s former head of product safety who alleged that he was wrongfully terminated after warning top executives that the company’s robots “were powerful enough to fracture a human skull.”

Robert Gruendel, a principal robotic safety engineer, is the plaintiff in the suit filed Friday in a federal court in the Northern District of California. Gruendel’s attorneys describe their client as a whistleblower who was fired in September, days after lodging his “most direct and documented safety complaints.”

The suit lands two months after Figure was valued at $39 billion in a funding round led by Parkway Venture Capital. That’s a 15-fold increase in valuation from early 2024, when the company raised a round from investors including Jeff Bezos, Nvidia, and Microsoft.

In the complaint, Gruendel’s lawyers say the plaintiff warned Figure CEO Brett Adcock and Kyle Edelberg, chief engineer, about the robot’s lethal capabilities, and said one “had already carved a ¼-inch gash into a steel refrigerator door during a malfunction.”

The complaint also says Gruendel warned company leaders not to “downgrade” a “safety road map” that he had been asked to present to two prospective investors who ended up funding the company.

Gruendel worried that a “product safety plan which contributed to their decision to invest” had been “gutted” the same month Figure closed the investment round, a move that “could be interpreted as fraudulent,” the suit says.

The plaintiff’s concerns were “treated as obstacles, not obligations,” and the company cited a “vague ‘change in business direction’ as the pretext” for his termination, according to the suit.

Gruendel is seeking economic, compensatory and punitive damages and demanding a jury trial.

Figure didn’t immediately respond to a request for comment. Nor did attorneys for Gruendel.

The humanoid robot market remains nascent today, with companies like Tesla and Boston Dynamics pursuing futuristic offerings, alongside Figure, while China’s Unitree Robotics is preparing for an IPO. Morgan Stanley said in a report in May that adoption is “likely to accelerate in the 2030s” and could top $5 trillion by 2050.
