Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well as or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested his chief concern is AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict their use in areas subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to understand AI lingo like an insider


It’s not surprising that the debate around AI has developed its own lingo: it started as a technical academic field.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
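To make the training-versus-inference split concrete, here is a minimal sketch that runs inference only. It assumes the open-source Hugging Face transformers library and uses the small, openly available GPT-2 model as a stand-in, since frontier models like GPT-4 are not freely downloadable.

# A minimal inference sketch: asking an already-trained language model to
# continue a prompt. Assumes the Hugging Face "transformers" library and the
# small GPT-2 model (a stand-in, not a frontier model).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Training already happened elsewhere; this call only runs inference.
result = generator("Artificial general intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])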

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom that posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” the idea that if someone succeeds at building an AGI, it will already be too late to save humanity.

Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic-seeming text, the software doesn’t understand the concepts behind the language — like a parrot.

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. When researchers and practitioners cannot point to the exact numbers and path of operations that larger AI models use to derive their output, the models’ inherent biases can stay hidden.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
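For a sense of what Masood means by a classical algorithm that can explain itself, here is a hedged sketch assuming the scikit-learn library: a shallow decision tree exposes which input features drove its predictions, a kind of inspection that is not directly available for a large language model.

# An explainability sketch with a classical model (assumes scikit-learn).
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# A classical model can "tell you why": each feature's weight in its decisions
# is directly inspectable, unlike the opaque parameters of a large language model.
top = sorted(zip(X.columns, model.feature_importances_), key=lambda p: -p[1])[:5]
for name, importance in top:
    print(f"{name}: {importance:.3f}")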

Another important term is “guardrails,” which encompasses software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
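NeMo Guardrails is open source, and a rough usage sketch, based on the project’s published quickstart and assuming a local “config” folder that defines the rails, looks like this:

# A rough sketch of wrapping an LLM with NVIDIA's open-source NeMo Guardrails
# toolkit (based on its published quickstart; the config contents are assumed).
from nemoguardrails import LLMRails, RailsConfig

# Assumes "./config" holds the guardrail definitions (topics to refuse, model settings).
config = RailsConfig.from_path("./config")
rails = LLMRails(config)

response = rails.generate(messages=[
    {"role": "user", "content": "Tell me something outside your allowed topics."}
])
print(response["content"])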

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphs.

But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks, or, in AI’s case, what happens when ChatGPT and similar products are used by millions of people, such as widespread spam or disinformation.



Shares in Chinese chipmaker SMIC drop nearly 7% after earnings miss


A logo hangs on the building of the Beijing branch of Semiconductor Manufacturing International Corporation (SMIC) on December 4, 2020 in Beijing, China.

VCG | Visual China Group | Getty Images

Shares of Semiconductor Manufacturing International Corporation, China’s largest contract chip maker, fell nearly 7% Friday after its first-quarter earnings missed estimates.

After trading closed on Thursday, the company reported first-quarter revenue of $2.24 billion, up about 28% from a year earlier. Meanwhile, profit attributable to shareholders surged 162% year on year to $188 million.

However, both figures missed LSEG mean estimates of $2.34 billion in revenue and $225.1 million in net income, as well as the company’s own forecasts.

During an earnings call Friday, an SMIC representative said the earnings missed original guidance due to “production fluctuations” which sent blended average selling prices falling. This impact is expected to extend into the second quarter, they added.

For the current quarter, the chipmaker forecast revenue to fall 4% to 6% sequentially. Gross margin is also expected to fall within the range of 18% to 20%, compared to 22.5% in the first quarter.

Still, the first quarter saw SMIC’s wafer shipments increase by 15% from the previous quarter and by about 28% year-on-year.

In the earnings call, SMIC attributed that growth to customers pulling shipments forward amid geopolitical shifts, as well as to increased demand driven by government policies such as domestic trade-in programs and consumption subsidies.

In another positive sign for the company, its first-quarter capacity utilization, the percentage of total available manufacturing capacity in use at any given time, reached 89.6%, up 4.1 percentage points quarter on quarter.


“SMIC’s nearly 90% utilization rate reflects strong domestic demand for semiconductors, likely driven by smartphone and consumer electronics production,” said Ray Wang, a Washington-based semiconductor and technology analyst, adding that the demand was also reflected in the company’s strong quarterly revenue growth.

Meanwhile, the company said in the earnings call that it is “currently in an important period of capacity construction, roll out, and continuously increasing market share.”

However, SMIC’s first-quarter research and development spending decreased to $148.9 million, down from $217 million in the previous quarter.

Amid increased demand, it will be crucial for SMIC to continue ramping up its capacity, Simon Chen, principal analyst of semiconductor manufacturing at Informa Tech, told CNBC.

SMIC generates most of its revenue from older-generation semiconductors, often referred to as “mature-node” or “legacy” chips, which are commonly found in consumer electronics and industrial equipment.

The state-backed chipmaker is critical to Beijing’s ambitions to build a self-sufficient semiconductor supply chain, with the government pumping billions into such efforts. Over 84% of its first-quarter revenue was derived from customers in China.

“The localization transformation of the supply chain has been strengthened, and more manufacturing demand has shifted back domestically,” a representative said Friday.

However, chip analysts say the chipmaker’s ability to increase capacity in advanced chips — used in applications that demand higher levels of computing performance and efficiency at higher yields — is limited.

This is due to U.S.-led export controls, which prevent it from accessing some of the world’s most advanced chip-making equipment from Netherlands-based ASML.

Nevertheless, the chipmaker appears to be making some breakthroughs. Advanced chips manufactured by SMIC have reportedly appeared in various Huawei products, notably in the Mate 60 Pro smartphone and some AI processors.

In the earnings call, the company also said it would closely monitor the potential impacts of the U.S.-China trade war on its demand, noting a lack of visibility for the second half of the year.

Phelix Lee, an equity analyst for Morningstar focused on semiconductors, told CNBC that the impacts of U.S. tariffs on SMIC are limited due to most of its revenue coming from Chinese customers.

While U.S. customers make up about 8% to 15% of revenue on a quarterly basis, the chips usually remain in China and are consumed in products for Chinese end users, he said.

“There could be some disruption to chemical, gas, and equipment supply; but the firm is working on alternatives in China and other non-U.S. regions,” he added.

SMIC’s Hong Kong-listed shares have gained more than 32% year-to-date.


Amazon adds pet prescriptions to its online pharmacy


Close-up of a hand holding a cellphone displaying the Amazon Pharmacy system, Lafayette, California, September 15, 2021. 

Smith Collection | Gado | Getty Images

Amazon is expanding its online pharmacy to fill prescription pet medications, the company announced Thursday.

The company said it has added “hundreds of commonly prescribed pet medications” to its U.S. site, ranging from flea and tick solutions to treatments for chronic conditions.

Prescriptions are purchased via Amazon’s storefront and must be approved by a veterinarian. Online pet pharmacy Vetsource will oversee the dispensing and delivery of medications, said Amazon, adding that items are typically delivered within two to six days.

Amazon launched its digital drugstore in 2020 with the added perk of discounts and free delivery for Prime members. The company has been working to speed up prescription shipments over the past year, bringing same-day delivery to a handful of U.S. cities. Last October, Amazon set a goal to make speedy medicine delivery available in nearly half of the U.S. in 2025.

The new pet medication offerings put Amazon into more direct competition with online pet pharmacy Chewy, as well as Walmart, which offers pet prescription delivery.

Amazon Pharmacy is part of the company’s growing stable of healthcare offerings, which also includes One Medical, the primary care provider it acquired for roughly $3.9 billion in July 2022. Amazon’s online pharmacy was born out of the company’s 2018 acquisition of online pharmacy PillPack.



Coinbase acquires crypto derivatives exchange Deribit for $2.9 billion


The Coinbase logo is displayed on a smartphone with stock market percentages on the background.

Omar Marques | SOPA Images | Lightrocket | Getty Images

Coinbase agreed to acquire Dubai-based Deribit, a major crypto derivatives exchange, for $2.9 billion, the largest deal in the crypto industry to date.

The company said Thursday that the cost comprises $700 million in cash and 11 million shares of Coinbase class A common stock. The transaction is expected to close by the end of the year.

Shares of Coinbase rose nearly 6%.

The acquisition positions Coinbase as an international leader in crypto derivatives by open interest and options volume, Greg Tusar, vice president of institutional product, said in a blog post – which could allow it to take on big players like Binance. Coinbase operates the largest marketplace for buying and selling cryptocurrencies within the U.S., but has a smaller share of the global crypto market, where activity largely takes place on Binance.

Deribit facilitated more than $1 trillion in trading volume last year and has about $30 billion of current open interest on the platform.

“We’re excited to join forces with Coinbase to power a new era in global crypto derivatives,” Deribit CEO Luuk Strijers said in a statement. “As the leading crypto options platform, we’ve built a strong, profitable business, and this acquisition will accelerate the foundation we laid while providing traders with even more opportunities across spot, futures, perpetuals, and options – all under one trusted brand. Together with Coinbase, we’re set to shape the future of the global crypto derivatives market.”

Tusar also noted that Deribit has a “consistent track record” of generating positive adjusted EBITDA, which the company believes will grow as part of the combined entity.

“One of the things we liked most about this deal is that it’s not just a game changer for our international expansion plans — it immediately diversifies our revenue and enhances profitability,” Tusar told CNBC.

The deal comes at a time when the crypto industry is riding regulatory tailwinds from the first-ever pro-crypto White House. Support for the industry has fueled crypto M&A activity in recent weeks. In March, crypto exchange Kraken agreed to acquire NinjaTrader for $1.5 billion, and last month Ripple agreed to buy prime broker Hidden Road.

