
Soaring investment from big tech companies in artificial intelligence and chatbots — amid massive layoffs and a growth decline — has left many chief information security officers in a whirlwind.

With OpenAI’s ChatGPT, Microsoft’s Bing AI, Google’s Bard and Elon Musk’s plan for his own chatbot making headlines, generative AI is seeping into the workplace, and chief information security officers need to approach the technology with caution and prepare the necessary security measures.

GPT stands for generative pre-trained transformer, a type of large language model (LLM): the class of algorithms that produces a chatbot’s human-like conversations. But not every company has its own GPT, so companies need to monitor how workers use the technology.

People are going to use generative AI if they find it useful for their work, says Michael Chui, a partner at the McKinsey Global Institute, comparing it to the way workers use personal computers or phones.

“Even when it’s not sanctioned or blessed by IT, people are finding [chatbots] useful,” Chui said.

“Throughout history, we’ve found technologies which are so compelling that individuals are willing to pay for it,” he said. “People were buying mobile phones long before businesses said, ‘I will supply this to you.’ PCs were similar, so we’re seeing the equivalent now with generative AI.”

As a result, there’s “catch up” for companies in terms of how they are going to approach security measures, Chui added.

Whether it’s a standard business practice like monitoring what information is shared with an AI platform or integrating a company-sanctioned GPT into the workplace, experts point to certain areas where CISOs and companies should start.

Start with the basics of information security

CISOs — already combating burnout and stress — deal with enough problems, like potential cybersecurity attacks and increasing automation needs. As AI and GPT move into the workplace, CISOs can start with the security basics.

Chui said companies can license use of an existing AI platform, so they can monitor what employees say to a chatbot and make sure that the information shared is protected.

“If you’re a corporation, you don’t want your employees prompting a publicly available chatbot with confidential information,” Chui said. “So, you could put technical means in place, where you can license the software and have an enforceable legal agreement about where your data goes or doesn’t go.”
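To make that idea concrete, below is a minimal sketch of the kind of “technical means” Chui describes: a screening step that redacts confidential material from an employee’s prompt before it reaches an external chatbot. The patterns and the “Project Atlas” code name are illustrative assumptions, and a real deployment would rely on a dedicated data-loss-prevention engine rather than a handful of regular expressions.

```python
import re

# Illustrative patterns for data a company may not want sent to a public
# chatbot. A real deployment would use a data-loss-prevention engine,
# not a handful of regexes.
CONFIDENTIAL_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "code_name": re.compile(r"\bproject\s+atlas\b", re.IGNORECASE),  # hypothetical
}

def screen_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact confidential matches and report which rules fired."""
    hits = []
    for name, pattern in CONFIDENTIAL_PATTERNS.items():
        if pattern.search(prompt):
            hits.append(name)
            prompt = pattern.sub("[REDACTED]", prompt)
    return prompt, hits

if __name__ == "__main__":
    raw = "Summarize Project Atlas. The customer's SSN is 123-45-6789."
    cleaned, hits = screen_prompt(raw)
    print(cleaned)  # Summarize [REDACTED]. The customer's SSN is [REDACTED].
    print(hits)     # ['ssn', 'code_name']
```

Screening of this kind complements, rather than replaces, the licensing agreement: the contract governs where data may go, and the tooling enforces it at the point of use.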

Licensing use of software comes with additional checks and balances, Chui said. Protection of confidential information, regulation of where the information gets stored, and guidelines for how employees can use the software — all are standard procedure when companies license software, AI or not.

“If you have an agreement, you can audit the software, so you can see if they’re protecting the data in the ways that you want it to be protected,” Chui said.

Most companies that store information with cloud-based software already do this, Chui said, so getting ahead and offering employees a company-sanctioned AI platform means a business is already in line with existing industry practices.

How to create or integrate a customized GPT

One security option for companies is to develop their own GPT, or hire companies that create this technology to make a custom version, says Sameer Penakalapati, chief executive officer at Ceipal, an AI-driven talent acquisition platform.

For specific functions like HR, there are multiple platforms, from Ceipal to Beamery’s TalentGPT, and companies may consider Microsoft’s plan to offer customizable GPT models. Despite the high cost, some companies may also want to build the technology themselves.

If a company builds its own GPT, the software will contain exactly the information it wants employees to have access to, and the company can safeguard the information employees feed into it, Penakalapati said. Even hiring an AI company to build the platform lets a business feed and store its information safely, he added.
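As a rough illustration of that design, the sketch below limits an assistant to a curated, company-approved document store, so employees can only get answers drawn from sanctioned material. The documents and the keyword matching are hypothetical stand-ins; a production system would retrieve with embeddings and a vector store, and would pass the retrieved text to the model as its only source material.

```python
# Hypothetical curated corpus: the only material the assistant may use.
APPROVED_DOCS = {
    "pto_policy": "Employees accrue 1.5 days of paid time off per month.",
    "expense_policy": "Meals over $75 require a manager's approval.",
}

def retrieve(question: str) -> list[str]:
    """Return approved documents that share words with the question.
    Toy keyword overlap; real systems use embeddings and a vector store."""
    words = set(question.lower().split())
    return [
        text
        for text in APPROVED_DOCS.values()
        if words & set(text.lower().split())
    ]

def answer(question: str) -> str:
    context = retrieve(question)
    if not context:
        # Nothing sanctioned covers the question, so the assistant refuses
        # rather than guessing from unapproved sources.
        return "No approved document covers this question."
    # In a real build, this context would be handed to the model as the
    # only source material it is allowed to answer from.
    return " ".join(context)

print(answer("How many paid days off do employees get?"))
```

The refusal branch matters as much as the retrieval: bounding what the assistant can draw on is what keeps a custom GPT inside the data the company chose to teach it.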

Whatever path a company chooses, Penakalapati said that CISOs should remember that these machines perform based on how they have been taught. It’s important to be intentional about the data you’re giving the technology.

“I always tell people to make sure you have technology that provides information based on unbiased and accurate data,” Penakalapati said. “Because this technology is not created by accident.”

Michael Dell says ‘at some point there’ll be too many’ AI data centers, but not yet


Dell Technologies CEO Michael Dell said Tuesday that while demand for computing power is “tremendous,” the production of artificial intelligence data centers will eventually top out.

“I’m sure at some point there’ll be too many of these things built, but we don’t see any signs of that,” Dell said on “Closing Bell: Overtime.”

The hardware maker’s servers and networking business grew 58% last year and was up 69% last quarter, Dell said. As large language models have evolved into multimodal and multi-agent systems, demand for AI processing power and capacity has remained strong.

Dell’s AI servers are powered by Nvidia‘s Blackwell Ultra chips. The company then sells its devices to customers like cloud service provider CoreWeave and xAI, Elon Musk’s startup.

Dell shares rose over 3% Tuesday after the company raised its long-term revenue and profit growth targets at an analyst meeting.

The computer maker raised its expected annual revenue growth to 7% to 9%, up from its previous target of 3% to 4%, and now expects diluted earnings per share to grow 15% annually, up from its previous 8% target.

The company reported strong second-quarter earnings in August, and said it planned to ship $20 billion worth of AI servers in fiscal 2026. That is double what it sold last year.

Dell year-to-date stock chart.

OpenAI’s Sora 2 must stop allowing copyright infringement, Motion Picture Association says


The Motion Picture Association on Monday urged OpenAI to “take immediate and decisive action” over its new video creation model, Sora 2, which the trade group says is being used to produce content that infringes on copyrighted media.

Following the Sora app’s rollout last week, users have been swarming the platform with AI-generated clips featuring characters from popular shows and brands.

“Since Sora 2’s release, videos that infringe our members’ films, shows, and characters have proliferated on OpenAI’s service and across social media,” MPA CEO Charles Rivkin said in a statement.

OpenAI CEO Sam Altman clarified in a blog post that the company will give rightsholders “more granular control” over how their characters are used.

But Rivkin said that OpenAI “must acknowledge it remains their responsibility – not rightsholders’ – to prevent infringement on the Sora 2 service,” and that “well-established copyright law safeguards the rights of creators and applies here.”

OpenAI did not respond to a request for comment.

Concerns erupted immediately after Sora videos were created last week featuring everything from James Bond playing poker with Altman to body cam footage of cartoon character Mario evading the police.

OpenAI previously operated an opt-out system, which placed the burden on studios to request that their characters not appear on Sora. But Altman’s follow-up blog post said the platform was changing to an opt-in model, meaning Sora would not allow the use of copyrighted characters without permission.

However, Altman noted that the company may not be able to prevent all IP from being misused.

“There may be some edge cases of generations that get through that shouldn’t, and getting our stack to work well will take some iteration,” Altman wrote.

Copyright concerns have emerged as a major issue during the generative AI boom.

Disney and Universal sued AI image creator Midjourney in June, alleging that the company used and distributed AI-generated characters from their films and disregarded requests to stop. Disney also sent a cease-and-desist letter to AI startup Character.AI in September, warning the company to stop using its copyrighted characters without authorization.

WATCH: OpenAI’s Sora 2 sparks AI ‘slop’ backlash

Billionaire tech investor Orlando Bravo says ‘valuations in AI are at a bubble’


Thoma Bravo co-founder Orlando Bravo said that valuations for artificial intelligence companies are “at a bubble,” comparing the current market to the dotcom era.

But one key difference in the market now, he said, is that large companies with “healthy balance sheets” are financing AI businesses.

Bravo’s private equity firm boasts more than $181 billion in assets under management as of June, and focuses on buying and selling enterprise tech companies, with a significant chunk of its portfolio invested in cybersecurity.

Bravo told CNBC’s “Squawk on the Street” on Tuesday that investors can’t value a $50 million annual recurring revenue company at $10 billion.

“That company is going to have to produce a billion dollars in free cash flow to double an investor’s money, ultimately,” he said. “Even if the product is right, even if the market’s right, that’s a tall order, managerially.”

The implied math: doubling a $10 billion entry price requires roughly a $20 billion outcome, and at a typical multiple of about 20 times free cash flow, that works out to on the order of $1 billion in annual free cash flow.

OpenAI recently finalized a secondary share sale that would value the ChatGPT-maker at $500 billion. The company is projected to make $13 billion in revenue for 2025.

Nvidia recently said it would invest up to $100 billion in OpenAI, in part, to help the ChatGPT maker lease its chips and build out supercomputing facilities in the coming years.

Other public companies have soared on AI promises, with Palantir’s market cap climbing to $437 billion, putting it among the 20 most valuable publicly traded companies in the U.S., and AppLovin now worth $213 billion.

Even early-stage valuations are massive in AI, with Thinking Machines Lab notching a $12 billion valuation on a $2 billion seed round.

Despite the inflated numbers, Bravo emphasized that there’s a “big difference” between the dotcom collapse and the current landscape of AI.

“Now you have some really big companies and some big balance sheets and healthy balance sheets financing this activity, which is different than what happened roughly 25 years ago,” he said.
