See also: Parrots, paperclips, and safety vs ethics: Why the artificial intelligence debate sounds like a foreign language

Here’s a list of some terms used by AI insiders:

AGI — AGI stands for “artificial general intelligence.” As a concept, it’s used to mean an AI significantly more advanced than anything possible today, one that can do most things as well as or better than most humans, including improving itself.

Example: “For me, AGI is the equivalent of a median human that you could hire as a coworker, and they could say do anything you would be happy with a remote coworker doing behind a computer,” Sam Altman said at a recent Greylock VC event.

AI ethics describes the desire to prevent AI from causing immediate harm, and often focuses on questions like how AI systems collect and process data and the possibility of bias in areas like housing or employment.

AI safety describes the longer-term fear that AI will progress so suddenly that a super-intelligent AI might harm or even eliminate humanity.

Alignment is the practice of tweaking an AI model so that it produces the outputs its creators desire. In the short term, alignment refers to the practical work of building software safeguards and moderating content. But it can also refer to the much larger and still theoretical task of ensuring that any AGI would be friendly toward humanity.

Example: “What these systems get aligned to — whose values, what those bounds are — that is somehow set by society as a whole, by governments. And so creating that dataset, our alignment dataset, it could be, an AI constitution, whatever it is, that has got to come very broadly from society,” Sam Altman said last week during the Senate hearing.

Emergent behavior — Emergent behavior is the technical way of saying that some AI models show abilities that weren’t initially intended. It can also describe surprising results from AI tools being deployed widely to the public.

Example: “Even as a first step, however, GPT-4 challenges a considerable number of widely held assumptions about machine intelligence, and exhibits emergent behaviors and capabilities whose sources and mechanisms are, at this moment, hard to discern precisely,” Microsoft researchers wrote in Sparks of Artificial General Intelligence.

Fast takeoff or hard takeoff — A phrase suggesting that if someone succeeds at building an AGI, its progress will be so rapid that it will already be too late to save humanity.

Example: “AGI could happen soon or far in the future; the takeoff speed from the initial AGI to more powerful successor systems could be slow or fast,” said OpenAI CEO Sam Altman in a blog post.

Foom — Another way to say “hard takeoff.” It’s an onomatopoeia, and has also been described as an acronym for “Fast Onset of Overwhelming Mastery” in several blog posts and essays.

Example: “It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun.

GPU — The chips used to train AI models and run inference, which are descendants of chips used to play advanced computer games. The most commonly used GPU for this work at the moment is Nvidia’s A100.

Example: From Stability AI founder Emad Mostaque:

Guardrails are software and policies that big tech companies are currently building around AI models to ensure that the models don’t leak data or produce disturbing content, which is often called “going off the rails.” The term can also refer to specific applications that keep an AI from going off topic, like Nvidia’s “NeMo Guardrails” product.

Example: “The moment for government to play a role has not passed us by. This period of focused public attention on AI is precisely the time to define and build the right guardrails to protect people and their interests,” Christina Montgomery, the chair of IBM’s AI ethics board and VP at the company, said in Congress this week.
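
To make the idea concrete, here is a minimal, hypothetical sketch of what an application-level guardrail can look like: a model’s draft reply is checked against a short list of off-limits topics before it is shown to the user. The topic list and function names are invented for illustration, and real products such as Nvidia’s NeMo Guardrails are far more sophisticated than this.

```python
# Hypothetical sketch of an application-level guardrail: screen a model's
# draft reply against a simple blocklist before showing it to the user.
# The topics and function names here are illustrative only.

BLOCKED_TOPICS = {"weapons", "self-harm", "passwords"}

def violates_policy(text: str) -> bool:
    """Return True if the draft reply touches an off-limits topic."""
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def guarded_reply(draft_reply: str) -> str:
    """Swap a policy-violating reply for a refusal; otherwise pass it through."""
    if violates_policy(draft_reply):
        return "Sorry, I can't help with that."
    return draft_reply

print(guarded_reply("Here is a list of common passwords to try..."))
# -> "Sorry, I can't help with that."
```

Production systems typically layer many such checks on both the user’s prompt and the model’s output, often using another model rather than a simple keyword list.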

Inference — The act of using an AI model to make predictions or generate text, images, or other content. Inference can require a lot of computing power.

Example: “The problem with inference is if the workload spikes very rapidly, which is what happened to ChatGPT. It went to like a million users in five days. There is no way your GPU capacity can keep up with that,” Sid Sheth, founder of D-Matrix, previously told CNBC.
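
As a rough illustration of what an inference call looks like in code, the sketch below loads a small open-source model through the Hugging Face transformers library and generates a short continuation. The model choice and prompt are arbitrary; the point is that every such call consumes compute, which is why demand spikes like the one Sheth describes are hard to keep up with.

```python
# Rough sketch of inference: load a small open-source language model and
# generate a continuation of a prompt. The model and prompt are arbitrary
# examples; every call like this consumes GPU (or CPU) compute.

from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial general intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```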

Large language model — A kind of AI model that underpins ChatGPT and Google’s new generative AI features. Its defining feature is that it uses terabytes of data to find the statistical relationships between words, which is how it produces text that seems like a human wrote it.

Example: “Google’s new large language model, which the company announced last week, uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks,” CNBC reported earlier this week.
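
To give a toy sense of what “statistical relationships between words” means, the sketch below counts which word follows which in a tiny made-up corpus and predicts the most frequent successor. Real large language models learn far richer relationships with neural networks trained on terabytes of text; this is only meant to convey the flavor.

```python
# Toy illustration of statistical relationships between words: tally which
# word follows which in a tiny corpus, then predict the most frequent
# successor. Real LLMs do something far richer at vastly larger scale.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat ate and the cat slept".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the word most often seen after `word`, if any."""
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> "cat" ("cat" follows "the" three times, "mat" once)
```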

Paperclips are an important symbol for AI safety proponents because they symbolize the chance that an AGI could destroy humanity. The term refers to a thought experiment published by philosopher Nick Bostrom about a “superintelligence” given the mission to make as many paperclips as possible. It decides to turn all humans, the Earth, and increasing parts of the cosmos into paperclips. OpenAI’s logo is a reference to this tale.

Example: “It also seems perfectly possible to have a superintelligence whose sole goal is something completely arbitrary, such as to manufacture as many paperclips as possible, and who would resist with all its might any attempt to alter this goal,” Bostrom wrote in his thought experiment.

Singularity is an older term that’s not used often anymore, but it refers to the moment that technological change becomes self-reinforcing, or to the moment an AGI is created. It’s a metaphor borrowed from physics, where a singularity is the point of infinite density at the center of a black hole.

Example: “The advent of artificial general intelligence is called a singularity because it is so hard to predict what will happen after that,” Tesla CEO Elon Musk said in an interview with CNBC this week.

Apple’s market share slides in China as iPhone shipments decline, analyst Kuo says

Apple is losing market share in China due to declining iPhone shipments, supply chain analyst Ming-Chi Kuo wrote in a report on Friday. The stock slid 2.4%.

“Apple has adopted a cautious stance when discussing 2025 iPhone production plans with key suppliers,” Kuo, an analyst at TF Securities, wrote in the post. He added that despite the expected launch of the new iPhone SE 4, shipments are expected to decline 6% year over year for the first half of 2025.

Kuo expects Apple’s market share to continue to slide, as two of the coming iPhones are so thin that they likely will only support eSIM, which the Chinese market currently does not promote.

“These two models could face shipping momentum challenges unless their design is modified,” he wrote.

Kuo wrote that in December, overall smartphone shipments in China were flat from a year earlier, but iPhone shipments dropped 10% to 12%.

There is also “no evidence” that Apple Intelligence, the company’s on-device artificial intelligence offering, is driving hardware upgrades or services revenue, according to Kuo. He wrote that the feature “has not boosted iPhone replacement demand,” according to a supply chain survey he conducted, and added that in his view, the feature’s appeal “has significantly declined compared to cloud-based AI services, which have advanced rapidly in subsequent months.”

Apple’s estimated iPhone shipments total about 220 million units for 2024 and between about 220 million and 225 million for this year, Kuo wrote. That is “below the market consensus of 240 million or more,” he wrote.

Apple did not immediately respond to CNBC’s request for comment.

Amazon to halt some of its DEI programs: Internal memo

Amazon said it is halting some of its diversity and inclusion initiatives, joining a growing list of major corporations that have made similar moves in the face of increasing public and legal scrutiny.

In a Dec. 16 internal note to staffers that was obtained by CNBC, Candi Castleberry, Amazon’s VP of inclusive experiences and technology, said the company was in the process of “winding down outdated programs and materials” as part of a broader review of hundreds of initiatives.

“Rather than have individual groups build programs, we are focusing on programs with proven outcomes — and we also aim to foster a more truly inclusive culture,” Castleberry wrote in the note, which was first reported by Bloomberg.

Castleberry’s memo doesn’t say which programs the company is dropping as a result of its review. The company typically releases annual data on the racial and gender makeup of its workforce, and it also operates Black, LGBTQ+, indigenous and veteran employee resource groups, among others.

In 2020, Amazon set a goal of doubling the number of Black employees in vice president and director roles. It announced the same goal in 2021 and also pledged to hire 30% more Black employees for product manager, engineer and other corporate roles.

Meta on Friday made a similar retreat from its diversity, equity and inclusion initiatives. The social media company said it’s ending its approach of considering qualified candidates from underrepresented groups for open roles and its equity and inclusion training programs. The decision drew backlash from Meta employees, including one staffer who wrote, “If you don’t stand by your principles when things get difficult, they aren’t values. They’re hobbies.”

Other companies, including McDonald’s, Walmart and Ford, have also made changes to their DEI initiatives in recent months. Rising conservative backlash and the Supreme Court’s ruling against affirmative action in 2023 spurred many corporations to alter or discontinue their DEI programs.

Amazon, which is the nation’s second-largest private employer behind Walmart, also recently made changes to its “Our Positions” webpage, which lays out the company’s stance on a variety of policy issues. Previously, there were separate sections dedicated to “Equity for Black people,” “Diversity, equity and inclusion” and “LGBTQ+ rights,” according to records from the Internet Archive’s Wayback Machine.

The current webpage has streamlined those sections into a single paragraph. The section says that Amazon believes in creating a diverse and inclusive company and that inequitable treatment of anyone is unacceptable. The Information earlier reported the changes.

Amazon spokesperson Kelly Nantel told CNBC in a statement: “We update this page from time to time to ensure that it reflects updates we’ve made to various programs and positions.”

Read the full memo from Amazon’s Castleberry:

Team,

As we head toward the end of the year, I want to give another update on the work we’ve been doing around representation and inclusion.

As a large, global company that operates in different countries and industries, we serve hundreds of millions of customers from a range of backgrounds and globally diverse communities. To serve them effectively, we need millions of employees and partners that reflect our customers and communities. We strive to be representative of those customers and build a culture that’s inclusive for everyone.

In the last few years we took a new approach, reviewing hundreds of programs across the company, using science to evaluate their effectiveness, impact, and ROI — identifying the ones we believed should continue. Each one of these addresses a specific disparity, and is designed to end when that disparity is eliminated. In parallel, we worked to unify employee groups together under one umbrella, and build programs that are open to all. Rather than have individual groups build programs, we are focusing on programs with proven outcomes — and we also aim to foster a more truly inclusive culture. You can read more about this on our Together at Amazon page on A to Z.

This approach — where we move away from programs that were separate from our existing processes, and instead integrating our work into existing processes so they become durable — is the evolution to “built in” and “born inclusive,” instead of “bolted on.” As part of this evolution, we’ve been winding down outdated programs and materials, and we’re aiming to complete that by the end of 2024. We also know there will always be individuals or teams who continue to do well-intentioned things that don’t align with our company-wide approach, and we might not always see those right away. But we’ll keep at it.

We’ll continue to share ongoing updates, and appreciate your hard work in driving this progress. We believe this is important work, so we’ll keep investing in programs that help us reflect those audiences, help employees grow, thrive, and connect, and we remain dedicated to delivering inclusive experiences for customers, employees, and communities around the world.

#InThisTogether,

Candi

Tesla recalling 239,000 vehicles in U.S. over rearview camera failures

New Tesla Model 3 vehicles on a truck at a logistics drop zone in Seattle, Washington, on Aug. 22, 2024. (M. Scott Brauer | Bloomberg | Getty Images)

Tesla is voluntarily recalling about 239,000 of its electric vehicles in the U.S. to fix an issue that can cause its rearview cameras to fail, the company disclosed in filings posted Friday to the National Highway Traffic Safety Administration’s website.

“A rearview camera that does not display an image reduces the driver’s rear view, increasing the risk of a crash,” Tesla wrote in a letter to the regulator. The recall applies to Tesla’s 2024-2025 Model 3 and Model S sedans, and to its 2023-2025 Model X and Model Y SUVs.

The company also said in the acknowledgement letter that it has already “released an over-the-air (OTA) software update, free of charge” that can fix some of the vehicles’ camera issues.

In 2024, Tesla issued 16 recalls in the U.S. that applied to 5.14 million of its EVs, according to NHTSA data. The recall remedies included a mix of over-the-air software updates and parts replacements. More than 40% of last year’s recalls pertained to issues with the newest vehicle in the company’s lineup, the Cybertruck, an angular steel pickup that Tesla began delivering to customers in late 2023.

Regarding the latest recall, the company said it had received 887 warranty claims and dozens of field reports, but it told NHTSA that it was not aware of any crashes, injuries or deaths resulting from the rearview camera failures.

Other customers with vehicles that “experienced a circuit board failure or stress that may lead to a circuit board failure,” which can cause the backup camera to fail, can have their vehicles’ computers replaced by Tesla free of charge, the company said.

Tesla did not immediately respond to CNBC’s request for comment.
