
The Cisco logo is on display at the Mobile World Congress in Barcelona, Spain, on February 26, 2024. 


Enterprise technology titan Cisco Systems on Thursday unveiled a new security architecture product aimed at securing data centers, clouds, and other IT environments with the help of AI.

Called HyperShield, the product uses AI to protect applications, devices, and data across public and private data centers, clouds, and physical locations, according to a company press release.

HyperShield follows the company’s $28 billion acquisition of Splunk last year, a cybersecurity company competing with the likes of Datadog, Elastic, SolarWinds, and Dynatrace. Its launch also builds on Cisco’s partnership with Nvidia on managing and securing AI infrastructure.

This is Cisco’s big chance to prove itself as a serious AI player at a time when technology giants like Microsoft, Google, and Amazon are spending billions to become leaders in artificial intelligence.

“This is not a product, but a new architecture – the first version of something new,” Jeetu Patel, Cisco’s executive vice president and general manager of security and collaboration, told CNBC in an interview this week.

Other brands are also moving in a similar direction. Hewlett Packard Enterprise recently announced new large AI model integrations for its Aruba networking division, while Broadcom’s VMware launched a tool that allows companies to use generative AI products in a way that protects data privacy.

How it works

HyperShield serves as a “shield for security,” Patel said, explaining that it takes security directly to the things that need to be secured.

The technology acts like a “fabric,” rather than a “fence,” giving cyber workers better visibility of software vulnerabilities across applications, according to Patel.

The product has an autonomous segmentation feature aimed at helping businesses avoid vulnerabilities and breaches. It allows Cisco’s AI to divide a computer network into smaller parts to improve performance and security.

Another feature, called self-qualifying upgrades, lets organizations automate the process of testing and deploying upgrades.

Patel said organizations dealing with critical infrastructure — such as oil rigs, internet of things (IoT) devices, and MRI machines in hospitals — need to take particular care when upgrading their systems.

Designed with AI in mind

Patel said Cisco’s HyperShield technology was designed with a new world of digital AI assistants – like ChatGPT, Google Gemini, and other advanced tools – in mind.

“We’re moving from a world of scarcity to a world of abundance, with digital AI assistants for everything,” Patel told CNBC. “Those assistants live in data centers.”


“So when you consider the increase in requirements that this places on the data center, and how we build for that, there is a need to rearchitect, not build more of the same,” said Patel.

He noted that a security architecture like HyperShield hadn’t been built previously because most of the architectures across the industry were created at a time when modern applications and technologies like generative AI didn’t exist.

It currently takes roughly four days for a network vulnerability to be exploited after it’s discovered, while patching it takes even longer: an average of 45 days, according to Patel.

He said new technologies like AI and machine learning are needed to compress the time it takes to identify and patch vulnerabilities from days to minutes.

“Previously you had to work on the assumption that a breach had happened, [and that] once someone was in, there was lateral movement that you had to identify before you could respond,” Patel told CNBC.

“We need to move to a position where we can predict and respond.”

Why it matters for investors

Cisco shares have underperformed the Nasdaq in the last 12 months, falling nearly 5% year-over-year while the tech-heavy index has jumped over 30%.

Over the past five years, it’s been an even worse investment relative to the broader sector. The stock is down 14% over that stretch, trailing the Nasdaq’s 95% gain.


Cisco share price performance year-over-year, compared with the performance of the Nasdaq Composite over the same period.

Cisco has long been the world’s largest maker of computer networking equipment, like switches, modems, and routers. It’s been boosting its cybersecurity business to meet customer demands and fuel growth.

That’s where the company’s blockbuster acquisition of Splunk comes in: Splunk’s technology helps businesses monitor and analyze their data to minimize the risk of hacks and resolve technical issues faster.

As the public cloud has gobbled up more of Cisco’s traditional back-end business, the company has needed to find new and bigger revenue streams — with cybersecurity emerging as a key bet.

– CNBC’s Rohan Goswami and Jordan Novet contributed to this report


World’s first major law for artificial intelligence gets final EU green light


European Union member states on Tuesday approved the world’s first major law regulating artificial intelligence, as institutions around the world race to introduce curbs on the technology.

The EU Council said it had given final approval to the AI Act — a ground-breaking piece of regulation that aims to introduce the first comprehensive set of rules for artificial intelligence.

“The adoption of the AI act is a significant milestone for the European Union,” Mathieu Michel, Belgium’s secretary of state for digitization, said in a Tuesday statement.

“With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel added.

The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the threats they pose to society.

The law prohibits applications of AI considered “unacceptable” in terms of their risk level. These include so-called “social scoring” systems that rank citizens based on aggregation and analysis of their data, predictive policing, and emotion recognition in the workplace and schools.

High-risk AI systems include autonomous vehicles and medical devices, which are evaluated on the risks they pose to the health, safety, and fundamental rights of citizens. The category also covers applications of AI in financial services and education, where there is a risk of bias embedded in AI algorithms.


Tech giants pledge AI safety commitments — including a ‘kill switch’ if they can’t mitigate risks


A slew of major tech companies, including Microsoft, Amazon, and OpenAI, on Tuesday signed on to a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit.

The agreement will see companies from countries including the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates make voluntary commitments to ensure the safe development of their most advanced AI models.

Where they have not done so already, AI model makers will each publish safety frameworks laying out how they’ll measure risks of their frontier models, such as examining the risk of misuse of the technology by bad actors.

These frameworks will include “red lines” for the tech firms that define the kinds of risks associated with frontier AI systems which would be considered “intolerable” — these risks include but aren’t limited to automated cyberattacks and the threat of bioweapons.

In those sorts of extreme circumstances, companies say they will implement a “kill switch” that would see them cease development of their AI models if they can’t guarantee mitigation of these risks.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Rishi Sunak, the U.K.’s prime minister, said in a statement Tuesday.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” he added.

The pact agreed Tuesday expands on a previous set of commitments made by companies involved in the development of generative AI software at the U.K.’s AI Safety Summit in Bletchley Park, England, last November.

The companies have agreed to take input on these thresholds from “trusted actors,” including their home governments as appropriate, before releasing them ahead of the next planned AI summit — the AI Action Summit in France — in early 2025.

The commitments agreed Tuesday only apply to so-called “frontier” models. This term refers to the technology behind generative AI systems like OpenAI’s GPT family of large language models, which powers the popular ChatGPT AI chatbot.

Ever since ChatGPT was first introduced to the world in November 2022, regulators and tech leaders have become increasingly worried about the risks surrounding advanced AI systems capable of generating text and visual content on par with, or better than, humans.


The European Union has sought to clamp down on unfettered AI development with the creation of its AI Act, which was approved by the EU Council on Tuesday.

The U.K. hasn’t proposed formal laws for AI, instead opting for a “light-touch” approach to regulation that entails regulators applying existing laws to the technology.

The government recently said it will consider legislating for frontier models at some point in the future but has not committed to a timeline for introducing formal laws.


Amazon, Meta back Scale AI in $1 billion funding deal that values firm at $14 billion


Artificial intelligence startup Scale AI said Tuesday that it has raised $1 billion in a Series F funding round that values the enterprise tech company at $13.8 billion — almost double its last reported valuation. The San Francisco-based company, ranked No. 12 on this year’s CNBC Disruptor 50 list, has now raised $1.6 billion to date.

Its latest funding round is being led by Accel, and includes Cisco Investments, DFJ Growth, Intel Capital, ServiceNow Ventures, AMD Ventures, WCM, Amazon, Elad Gil (co-founder of Color Genomics and serial tech investor), and Meta, all of which are new investors in the company.

Existing investors including Y Combinator, Nat Friedman, Index Ventures, Founders Fund, Coatue, Thrive Capital, Spark Capital, Nvidia, Tiger Global Management, Greenoaks, and Wellington Management also participated in the round.

Scale AI is playing a key role in the rise of generative artificial intelligence and large language models: data — whether text, images, video, or voice recordings — must be labeled correctly before it can be digested and used effectively by AI technology. The company has evolved from labeling data used to train models that powered autonomous driving to helping improve and fine-tune the underlying data for nearly any organization looking to implement AI, powering some of the most advanced models in use.

“Our calling is to build the data foundry for AI, and with today’s funding, we’re moving into the next phase of that journey – accelerating the abundance of frontier data that will pave our road to AGI,” founder and CEO Alexandr Wang said in a statement announcing the news.


Scale AI is also increasingly working with the public sector.

In August, the company was awarded a contract by the Department of Defense’s Chief Digital and Artificial Intelligence Office, which it said will help boost the DoD’s efforts to advance AI capabilities across the entire military, spanning projects in the Army, Marine Corps, Navy, Air Force, Space Force and Coast Guard.

In May, Scale AI launched Donovan, an AI-powered decision-making platform that is the first LLM deployed to a U.S. government classified network.

Wang spoke at December’s AI Insight Forum in Washington, D.C., about the role Scale AI is playing in helping support the U.S. and its allies.

“The race for AI global leadership is well underway, and our nation’s ability to efficiently adopt and implement AI will define the future of warfare,” he said. “I firmly believe that the United States has the ability to lead the world in AI adoption to support U.S. national security. The world is not slowing down, and we must rise to the occasion.”

The company is also looking to play a role in AI development globally. It announced in May that it will open a London office as its European headquarters and will look to support and partner with the U.K. government on its AI initiatives.

