AI giving Big Tech 'inordinate power' over our lives, Signal president says

Tech execs have voiced concern that the development of artificial intelligence is concentrated in the hands of too few companies, potentially giving them excessive control over the rapidly evolving technology.

An explosion of interest in AI was sparked by OpenAI’s ChatGPT late last year thanks to the novel way in which the chatbot can answer user prompts.

Its popularity contributed to the start of what many in the tech industry have called an AI arms race, as tech giants including Microsoft and Google seek to develop and launch their own artificial intelligence models. These require huge amounts of computing power as they are trained on massive amounts of data.

“Right now, there are only a handful of companies with the resources needed to create these large-scale AI models and deploy them at scale. And we need to recognize that this is giving them inordinate power over our lives and institutions,” Meredith Whittaker, president of encrypted messaging app Signal, told CNBC in an interview last week.

“We should really be concerned about, again, a handful of corporations driven by profit and shareholder returns making such socially consequential decisions.”

Whittaker previously spent 13 years at Google but became disillusioned in 2017 when she found out the search giant was working on a controversial contract with the Department of Defense known as Project Maven. Whittaker grew concerned Google’s AI could potentially be used for drone warfare and helped organize a walkout at the company that involved thousands of employees.

“AI, as we understand it today, is fundamentally a technology that is derivative of centralized corporate power and control,” Whittaker said.

“It is built on the concentrated resources that accrued to a handful of large tech corporations, largely based in the U.S. and China via the surveillance advertising business model, which gave them powerful computational infrastructure and huge amounts of data; large markets from which to pull that data; and the ability to process and structure that data in ways useful for creating new technologies.”

Whittaker is not alone in this view.

Frank McCourt, the former owner of the Los Angeles Dodgers baseball team, now runs Project Liberty, an organization looking to motivate technologists and policymakers “to build a more responsible approach to technology development,” according to its website.

McCourt also thinks AI could give too much power to tech giants. He said there are “basically five companies that have all the data,” although he didn’t name the firms.

“Large language models require massive amounts of data. If we don’t make changes here, the game is over … Only these same platforms will prevail. And they’ll be the beneficiaries,” McCourt told CNBC in an interview last week.

“Sure, people will come and build small things on those big platforms. But it’s the big underlying platforms that control this data that will be the winners.”

Whittaker and McCourt are among those who feel users have lost control of their data online and that it is being harnessed by technology giants to feed their profits.

“Big tech and social media giants are inflicting profound damage on our society,” McCourt’s Project Liberty manifesto says. And he believes AI could make this worse.

“Let’s not be fooled, generative AI is a fancy name for a more powerful usage of our data,” McCourt said in his CNBC interview.

Generative AI is the technology behind applications like ChatGPT. The models underpinning these apps are trained on vast amounts of data.

“Generative AI built with large language models are basically enhanced, or more powerful versions, of the technology we have now, given a fancy name. It is centralized, autocratic surveillance technology. And that, I’m against. And I think it’s doing a lot of harm in the world right now,” McCourt said.

The inventor of the web, Tim Berners-Lee, has also raised concerns about the concentration of power among the tech giants.

For Jimmy Wales, the founder of Wikipedia, it is the state of social media that is of particular concern right now.

On AI, however, he feels that while the technology giants now are leading the way, there is space for disruption.

In an interview with CNBC last week, Wales pointed to a leaked Google memo this year in which a researcher at the U.S. tech giant said the company has “no moat” in the AI industry, referring to a threat from open-source models. These are AI models that are not owned by a single entity, such as Google or Microsoft, and can instead be developed and added to by anyone. They could enable the creation of competing AI applications without the massive resources it currently takes.

“The models that are out there, and open source models that anybody can download and run on a few machines that a startup can spend [just] $50,000 training … that’s not a big deal at all. It’s really impressive,” Wales added.

Technology

Alibaba shares soar after Chinese tech giant unveils new DeepSeek rival

The Alibaba office building is seen in Nanjing, Jiangsu province, China, on Aug 28, 2024.

Alibaba shares surged on Thursday after the Chinese behemoth revealed a new reasoning model it claims can rival DeepSeek’s global blockbuster R1.

Hong Kong-listed shares of Alibaba ended the Thursday session up 8.39% — hitting a new 52-week high — with the company’s New York-traded stock rising around 2.5% in premarket trading. Alibaba shares have gained nearly 71% in Hong Kong in the year to date.

The Chinese giant on Thursday unveiled QwQ-32B, its latest AI reasoning model, which it said “rivals cutting-edge reasoning model, e.g., DeepSeek-R1.”

Alibaba’s QwQ-32B operates with 32 billion parameters, compared with DeepSeek R1’s 671 billion parameters, of which 37 billion are actively engaged during inference — the process of running live data through a trained AI model in order to generate a prediction or tackle a task.

Parameters are variables that large language models (LLMs) — AI systems that can understand and generate human language — pick up during training and use in prediction and decision-making. A lower volume of parameters typically signals higher efficiency amid increasing demand for optimized AI that consumes fewer resources.
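As a loose illustration (a simplification for this article, not how Alibaba or DeepSeek actually report their figures), a model’s parameter count is simply the total number of trainable values across its layers:

```python
# Illustrative only: a "model" here is reduced to a list of weight-matrix
# shapes. A real LLM also has biases, embeddings and other tensors, but the
# principle is the same: parameter count = total number of trainable values.

def count_parameters(layer_shapes):
    """Sum the weights over all layers, each given as a (rows, cols) shape."""
    return sum(rows * cols for rows, cols in layer_shapes)

# A toy two-layer network: an 8x4 weight matrix followed by a 4x2 one.
tiny_model = [(8, 4), (4, 2)]
print(count_parameters(tiny_model))  # 40

# At LLM scale the same bookkeeping yields tens or hundreds of billions
# of such values, e.g. the 32 billion in QwQ-32B's name.
```

The efficiency point follows directly: fewer parameters generally means less memory and compute per inference step, which is why a 32-billion-parameter model rivaling a far larger one is notable.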

Alibaba said its new model achieved “impressive results” and the company can “continuously improve the performance especially in math and coding.”

Both established and emerging AI players around the world are racing to produce more efficient and higher-performance models since the unexpected launch of DeepSeek’s revolutionary R1 earlier this year.

Chinese firms have been doubling down on the technology, with Alibaba investing in AI since debuting its first model in 2023. The strength of the company’s Cloud Intelligence unit was a key contributor to Alibaba’s sharp profit jump in the December quarter.

“Looking ahead, revenue growth at Cloud Intelligence Group driven by AI will continue to accelerate,” Alibaba CEO Eddie Wu said at the time.

Optimism surrounding AI developments could lead to large gains for Alibaba stock and set the company’s earnings “on a more upwardly-pointing trajectory,” Bernstein analysts said.

“The pace of innovation is incredibly fast right now. It’s really good for the world to see this happening,” Futurum Group CEO Dan Newman told CNBC’s “Squawk Box Europe” on Thursday. “When DeepSeek came out, it made everyone sort of question, was OpenAI the final answer? Would the incumbents, the Microsofts, the Googles, or the Amazons that have all made massive investments win?”

He stressed that large language models were increasingly “becoming commoditized” as developers look to drive down costs and improve access for users.

“As we see this more efficiency, this cost coming down, we’re also going to see use going off. The training era, which is what Nvidia really built its initial AI boom off, was a big moment,” Newman said. “But the inference, the consumption of AI, is really the future and this is going to exponentially increase that volume.”

It’s ‘never been easier’ to become an online scammer as cybercrime markets flourish, security experts warn

An expanding network of cybercrime marketplaces is making it easier than ever to become a professional fraudster, posing unprecedented cybersecurity threats worldwide, experts warn.

Cybercriminals are often portrayed in popular media as rogue and highly skilled individuals, wielding coding and hacking abilities from a dimly lit room. But such stereotypes are becoming outdated. 

“Looking back to the 1990s and early 2000s, you needed to have a reasonable level of technical competence to pull off these types of crimes,” Nicholas Court, assistant director of Interpol’s Financial Crime and Anti-Corruption Centre, told CNBC.

Today, the barriers to entry have come down “quite significantly,” Court said. For example, obtaining personal data such as email addresses and sending out spam messages en masse — one of the oldest online scams in the book — has never been easier.

Cybersecurity experts say the change is due to advances in scam technology and the growth of organized online markets where cybercrime expertise and resources are bought and sold. 

A growing cybercrime economy 

“The last decade or so has seen an evolution of rogue cybercriminals into organized groups and networks, all of which are part of a thriving underground economy,” said Tony Burnside, vice president and head of Asia-Pacific at Netskope, a cloud security company.

Driving that trend has been the emergence of global underground markets that offer “cybercrime-as-a-service” or “CaaS,” through which vendors charge customers for different types of malicious tools and cybercrime services, he added.

Examples of CaaS include ransomware and hacking tools, botnets for rent, stolen data, and anything else that may aid cybercriminals in their illicit activities.

“The availability of these services certainly helps in enabling more cybercriminals, allowing them to scale up and sophisticate their crime while reducing the technical expertise required,” Burnside said. 

CaaS is often hosted on markets in the “darknet” — a part of the internet that uses encryption technology to protect the anonymity of users.

Examples include Abacus Market, Torzon Market and Styx, though the top markets often change as authorities shut them down and new ones emerge. 

Burnside added that the criminal gangs operating CaaS services and markets have begun to operate like “legitimate organizations in their structure and processes.”

Meanwhile, vendors on these illicit exchanges tend to accept payments only in cryptocurrency in attempts to remain anonymous, obscure proceeds and evade detection. 

Silk Road, an infamous dark web marketplace that was shut down by law enforcement in 2013, is recognized by many as one of the earliest large-scale applications of cryptocurrency.

Darknet emerges from shadows 

Though the use of cryptocurrencies in the cybercrime market can help obscure the identities of participants, it can also make their activities more traceable on the blockchain, according to Chainalysis, a blockchain research firm that traces illicit crypto transactions. 

According to Chainalysis data, while darknet markets remain a major factor in the global cybercrime ecosystem, more activity is moving to the public internet and secure messaging services like Telegram. 

The largest of those marketplaces identified by Chainalysis is Huione Guarantee — a platform affiliated with Cambodian conglomerate Huione Group — which the firm says acts as a “one-stop shop for nearly every form of cybercrime.”

The Chinese-language platform operates as a peer-to-peer marketplace where vendors offer services Chainalysis says are linked to illicit activity like money laundering and crypto-based scams.

Vendors pay to advertise on the Huione website, often directing interested parties into private Telegram groups. If a sale is made, Huione appears to act as an escrow and dispute intermediary to “guarantee” the exchange.

Chainalysis data shows that vendors on Huione Guarantee have processed a staggering $70 billion in crypto transactions since 2021. Meanwhile, Elliptic, another blockchain analytics firm, estimates that Huione Group entities have received at least $89 billion in crypto assets, making it “the largest ever illicit online marketplace.”

The platform advertises and directs potential buyers to vendor groups on Telegram that offer everything from scam technology and money laundering to escort services and illicit goods. 

Judging from the scale and volume of the transactions on Huione Guarantee, it is likely leveraged by numerous organized criminal groups, according to Andrew Fierman, head of national security intelligence at Chainalysis.

However, he added that many of the services don’t cost much money, providing a low barrier to entry and an access point into cybercrime for “anyone with internet connection.”

According to Chainalysis, individuals looking to facilitate “romance” or investment scams may be able to purchase the necessary tools and services on Huione for just a couple of hundred dollars. Costs can reach thousands of dollars, depending on the complexity of the scheme they are looking to execute.

Investment or romance scams involve a fraudster building a relationship with a victim via social media or dating apps, intending to con them out of money through a sham investment opportunity.

A scammer attempting to pull off this type of scam might shop Huione Guarantee for a portfolio of potential victims’ data, such as phone numbers; old social media accounts that appear to be from real people; and AI-powered facial and voice manipulation software, which can be used by a scammer to digitally disguise themselves. 

Other vendors on the site offer services related to the creation of fake investment and gambling platforms. Fierman says scammers often deceive victims into depositing money on such platforms.

In a disclaimer on its website, the platform says it does not participate in or understand its customers’ specific businesses and is responsible only for guaranteeing payments between buyers and sellers, according to a CNBC translation of the Chinese-language statement.

According to Fierman, Huione Guarantee’s activity appears to be concentrated in Cambodia and China, but there’s evidence that other platforms are emerging. 

‘Child’s play’

As CaaS and cybercrime markets continue to grow, the technology offered and leveraged by criminal vendors has also advanced, enabling more sophisticated scams at scale — and with less effort, experts say.

AI-generated deepfake videos and voice cloning look increasingly real, with previously infeasible attacks now viable thanks to advances in generative AI, according to Kim-Hock Leow, Asia CEO of cybersecurity company Wizlynx Group.

Last year, Hong Kong police reported that a finance worker at a multinational firm had been tricked into paying out $25 million to fraudsters using deepfake technology to pose as the company’s chief financial officer in a video conference call.

“This would have been completely impossible to pull off just a few years ago, even for criminals with technical skills, and now it is a viable attack even for those without,” added Netskope’s Burnside.

Meanwhile, cybersecurity experts told CNBC that AI tools can be used to enhance phishing and social engineering scams, helping to write more personalized and human-like messages. 

“It has become child’s play to create really convincing fake emails, audio notes, images or videos designed to scam and trick victims,” said Burnside, noting that dark variants of legitimate generative AI tools continue to find their way into dark markets. 

Prevention efforts

Because of the global and anonymous nature of CaaS vendors and cybercrime marketplaces, they are very difficult to police, cybersecurity experts told CNBC, noting that markets that are shut down often resurface under different names or are replaced.

For that reason, Interpol’s Nicholas Court says cybercrime isn’t the type of activity “you can arrest your way out of.” 

“The volume of criminality is going up so fast that it is actually harder for law enforcement to catch the same proportion of cybercriminals,” he said, adding that this calls for a significant focus on prevention and public awareness campaigns to warn about the rapid sophistication of scams and AI tools.

“Almost everybody receives scam messages these days. While it used to be enough to tell people not to send money to someone that refuses to video call, that’s not enough anymore.” 

On the enterprise level, Wizlynx Group’s Leow says that as cybercriminals become more tech- and AI-savvy, so must companies’ cybersecurity protocols.

For example, AI tools can be used to help automate security systems on the enterprise level, lowering the threshold for detection and accelerating response times, he added.

Meanwhile, new tools are emerging, such as “dark web monitoring,” which can track cybercrime markets and underground forums for leaked or stolen data, including credentials, financial data, and intellectual property.

It’s “never been easier” to commit cybercrime, so it’s crucial to prioritize cybersecurity by investing in technological solutions and enhancing employee awareness, Leow said. 

MongoDB shares sink after company issues weak guidance

Dev Ittycheria, CEO of MongoDB

MongoDB shares sank 16% in extended trading on Wednesday after the database software maker issued disappointing guidance.

Here’s how the company did in comparison with LSEG consensus:

  • Earnings per share: $1.28 adjusted vs. 66 cents expected
  • Revenue: $548.4 million vs. $519.6 million expected

Revenue increased about 20% from a year ago in the quarter that ended on Jan. 31, according to a statement. The company generated $15.8 million in net income, or 19 cents per share, which factors in stock-based compensation. In the same quarter a year ago, MongoDB had registered a net loss of $55.5 million, or 77 cents per share.

MongoDB added 1,900 customers in the quarter, bringing the total to 54,500. But the company ended the quarter with about $360 million in deferred revenue, below the StreetAccount consensus of $370.4 million.

MongoDB is seeing slower growth than it had hoped for in new applications using its Atlas cloud-based database service, Srdjan Tanjga, MongoDB’s interim finance chief, said on a conference call with analysts. Meanwhile, MongoDB is hiring rapidly to pursue more deals with large companies, while pulling back on mid-sized businesses, Tanjga said.

During the quarter, MongoDB acquired artificial intelligence startup Voyage for an undisclosed sum.

“We want to capitalize on a once-in-a-generation opportunity,” CEO Dev Ittycheria said.

For the fiscal first quarter, MongoDB called for 63 cents to 67 cents in adjusted earnings per share on $524 million to $529 million in revenue. Analysts surveyed by LSEG had expected 62 cents of per-share earnings and revenue of $526.8 million.

MongoDB said it expects adjusted earnings per share of $2.44 to $2.62 and revenue of $2.24 billion to $2.28 billion for fiscal 2026. That implies 12.7% revenue growth, which would be the slowest rate at least since the company went public in 2017. Analysts were anticipating $3.34 per share of earnings and $2.32 billion in revenue.
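As a quick sanity check of the implied growth figure, the arithmetic works out as follows (the fiscal 2025 revenue of roughly $2.005 billion used below is an assumption inferred from the article’s numbers, not stated in it):

```python
# Check the ~12.7% implied growth in MongoDB's fiscal 2026 revenue guidance.
# Assumption: fiscal 2025 revenue of about $2.005 billion (inferred from the
# quoted growth rate, not stated in the article).

low, high = 2.24, 2.28             # guided FY2026 revenue range, $ billions
midpoint = (low + high) / 2        # 2.26
prior_year_revenue = 2.005         # assumed FY2025 revenue, $ billions

implied_growth = midpoint / prior_year_revenue - 1
print(f"{implied_growth:.1%}")     # 12.7%
```

Taking the midpoint of a guided range is the usual convention when a single implied growth rate is quoted against a revenue band.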

Prior to Wednesday’s after-hours move, MongoDB shares were up 13%, while the S&P 500 was down about 1%.
