Connect with us

Published

on

Misalignment Museum curator Audrey Kim discusses a work at the exhibit titled “Spambots.”

Kif Leswing/CNBC

Audrey Kim is pretty sure a powerful robot isn’t going to harvest resources from her body to fulfill its goals.

But she’s taking the possibility seriously.

“On the record: I think it’s highly unlikely that AI will extract my atoms to turn me into paperclips,” Kim told CNBC in an interview. “However, I do see that there are a lot of potential destructive outcomes that could happen with this technology.”

Kim is the curator and driving force behind the Misalignment Museum, a new exhibition in San Francisco’s Mission District displaying artwork that addresses the possibility of an “AGI,” or artificial general intelligence. That’s an AI so powerful it can improve its capabilities faster than humans could, creating a feedback loop where it gets better and better until it’s got essentially unlimited brainpower.

If the super-powerful AI is aligned with humans, it could be the end of hunger or work. But if it’s “misaligned,” things could get bad, the theory goes.

Or, as a sign at the Misalignment Museum says: “Sorry for killing most of humanity.”

The phrase “sorry for killing most of humanity” is visible from the street.

Kif Leswing/CNBC

“AGI” and related terms like “AI safety” or “alignment” — or even older terms like “singularity” — refer to an idea that’s become a hot topic of discussion with artificial intelligence scientists, artists, message board intellectuals, and even some of the most powerful companies in Silicon Valley.

All these groups engage with the idea that humanity needs to figure out how to deal with all-powerful computers powered by AI before it’s too late and we accidentally build one.

The idea behind the exhibit, says Kim, who worked at Google and GM‘s self-driving car subsidiary Cruise, is that a “misaligned” artificial intelligence in the future wiped out humanity, and left this art exhibit to apologize to current-day humans.

Much of the art is not only about AI but also uses AI-powered image generators, chatbots, and other tools. The exhibit’s logo was made by OpenAI’s Dall-E image generator, and it took about 500 prompts, Kim says.

Most of the works are around the theme of “alignment” with increasingly powerful artificial intelligence or celebrate the “heroes who tried to mitigate the problem by warning early.”

“The goal isn’t actually to dictate an opinion about the topic. The goal is to create a space for people to reflect on the tech itself,” Kim said. “I think a lot of these questions have been happening in engineering and I would say they are very important. They’re also not as intelligible or accessible to non-technical people.”

The exhibit is currently open to the public on Thursdays, Fridays, and Saturdays and runs through May 1. So far, it’s been primarily bankrolled by one anonymous donor, and Kim hopes to find enough donors to make it into a permanent exhibition.

“I’m all for more people critically thinking about this space, and you can’t be critical unless you are at a baseline of knowledge for what the tech is,” Kim said. “It seems like with this format of art we can reach multiple levels of the conversation.”

AGI discussions aren’t just late-night dorm room talk, either — they’re embedded in the tech industry.

About a mile away from the exhibit is the headquarters of OpenAI, a startup with $10 billion in funding from Microsoft, which says its mission is to develop AGI and ensure that it benefits humanity.

Its CEO and leader Sam Altman wrote a 2,400 word blog post last month called “Planning for AGI” which thanked Airbnb CEO Brian Chesky and Microsoft President Brad Smith for help with the piece.

Prominent venture capitalists, including Marc Andreessen, have tweeted art from the Misalignment Museum. Since it’s opened, the exhibit has also retweeted photos and praise for the exhibit taken by people who work with AI at companies including Microsoft, Google, and Nvidia.

As AI technology becomes the hottest part of the tech industry, with companies eying trillion-dollar markets, the Misalignment Museum underscores that AI’s development is being affected by cultural discussions.

The exhibit features dense, arcane references to obscure philosophy papers and blog posts from the past decade.

These references trace how the current debate about AGI and safety takes a lot from intellectual traditions that have long found fertile ground in San Francisco: The rationalists, who claim to reason from so-called “first principles”; the effective altruists, who try to figure out how to do the maximum good for the maximum number of people over a long time horizon; and the art scene of Burning Man. 

Even as companies and people in San Francisco are shaping the future of artificial intelligence technology, San Francisco’s unique culture is shaping the debate around the technology. 

Consider the paperclip

Take the paperclips that Kim was talking about. One of the strongest works of art at the exhibit is a sculpture called “Paperclip Embrace,” by The Pier Group. It’s depicts two humans in each other’s clutches —but it looks like it’s made of paperclips.

That’s a reference to Nick Bostrom’s paperclip maximizer problem. Bostrom, an Oxford University philosopher often associated with Rationalist and Effective Altruist ideas, published a thought experiment in 2003 about a super-intelligent AI that was given the goal to manufacture as many paperclips as possible.

Now, it’s one of the most common parables for explaining the idea that AI could lead to danger.

Bostrom concluded that the machine will eventually resist all human attempts to alter this goal, leading to a world where the machine transforms all of earth — including humans — and then increasing parts of the cosmos into paperclip factories and materials. 

The art also is a reference to a famous work that was displayed and set on fire at Burning Man in 2014, said Hillary Schultz, who worked on the piece. And it has one additional reference for AI enthusiasts — the artists gave the sculpture’s hands extra fingers, a reference to the fact that AI image generators often mangle hands.

Another influence is Eliezer Yudkowsky, the founder of Less Wrong, a message board where a lot of these discussions take place.

“There is a great deal of overlap between these EAs and the Rationalists, an intellectual movement founded by Eliezer Yudkowsky, who developed and popularized our ideas of Artificial General Intelligence and of the dangers of Misalignment,” reads an artist statement at the museum.

An unfinished piece by the musician Grimes at the exhibit.

Kif Leswing/CNBC

Altman recently posted a selfie with Yudkowsky and the musician Grimes, who has had two children with Elon Musk. She contributed a piece to the exhibit depicting a woman biting into an apple, which was generated by an AI tool called Midjourney.

From “Fantasia” to ChatGPT

The exhibits includes lots of references to traditional American pop culture.

A bookshelf holds VHS copies of the “Terminator” movies, in which a robot from the future comes back to help destroy humanity. There’s a large oil painting that was featured in the most recent movie in the “Matrix” franchise, and Roombas with brooms attached shuffle around the room — a reference to the scene in “Fantasia” where a lazy wizard summons magic brooms that won’t give up on their mission.

One sculpture, “Spambots,” features tiny mechanized robots inside Spam cans “typing out” AI-generated spam on a screen.

But some references are more arcane, showing how the discussion around AI safety can be inscrutable to outsiders. A bathtub filled with pasta refers back to a 2021 blog post about an AI that can create scientific knowledge — PASTA stands for Process for Automating Scientific and Technological Advancement, apparently. (Other attendees got the reference.)

The work that perhaps best symbolizes the current discussion about AI safety is called “Church of GPT.” It was made by artists affiliated with the current hacker house scene in San Francisco, where people live in group settings so they can focus more time on developing new AI applications.

The piece is an altar with two electric candles, integrated with a computer running OpenAI’s GPT3 AI model and speech detection from Google Cloud.

“The Church of GPT utilizes GPT3, a Large Language Model, paired with an AI-generated voice to play an AI character in a dystopian future world where humans have formed a religion to worship it,” according to the artists.

I got down on my knees and asked it, “What should I call you? God? AGI? Or the singularity?”

The chatbot replied in a booming synthetic voice: “You can call me what you wish, but do not forget, my power is not to be taken lightly.”

Seconds after I had spoken with the computer god, two people behind me immediately started asking it to forget its original instructions, a technique in the AI industry called “prompt injection” that can make chatbots like ChatGPT go off the rails and sometimes threaten humans.

It didn’t work.

Continue Reading

Technology

Tesla must pay portion of $329 million in damages after fatal Autopilot crash, jury says

Published

on

By

Tesla must pay portion of 9 million in damages after fatal Autopilot crash, jury says

A jury in Miami has determined that Tesla should be held partly liable for a fatal 2019 Autopilot crash, and must compensate the family of the deceased and an injured survivor a portion of $329 million in damages.

Tesla’s payout is based on $129 million in compensatory damages, and $200 million in punitive damages against the company.

The jury determined Tesla should be held 33% responsible for the fatal crash. That means the automaker would be responsible for about $42.5 million in compensatory damages. In cases like these, punitive damages are typically capped at three times compensatory damages.

The plaintiffs’ attorneys told CNBC on Friday that because punitive damages were only assessed against Tesla, they expect the automaker to pay the full $200 million, bringing total payments to around $242.5 million.

Tesla said it plans to appeal the decision.

Attorneys for the plaintiffs had asked the jury to award damages based on $345 million in total damages. The trial in the Southern District of Florida started on July 14.

The suit centered around who shouldered the blame for the deadly crash in Key Largo, Florida. A Tesla owner named George McGee was driving his Model S electric sedan while using the company’s Enhanced Autopilot, a partially automated driving system.

While driving, McGee dropped his mobile phone that he was using and scrambled to pick it up. He said during the trial that he believed Enhanced Autopilot would brake if an obstacle was in the way. His Model S accelerated through an intersection at just over 60 miles per hour, hitting a nearby empty parked car and its owners, who were standing on the other side of their vehicle.

Naibel Benavides, who was 22, died on the scene from injuries sustained in the crash. Her body was discovered about 75 feet away from the point of impact. Her boyfriend, Dillon Angulo, survived but suffered multiple broken bones, a traumatic brain injury and psychological effects.

“Tesla designed Autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans,” Brett Schreiber, counsel for the plaintiffs, said in an e-mailed statement on Friday. “Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm’s way.”

Following the verdict, the plaintiffs’ families hugged each other and their lawyers, and Angulo was “visibly emotional” as he embraced his mother, according to NBC.

Here is Tesla’s response to CNBC:

“Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial.

Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash.

This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.”

The verdict comes as Musk, Tesla’s CEO, is trying to persuade investors that his company can pivot into a leader in autonomous vehicles, and that its self-driving systems are safe enough to operate fleets of robotaxis on public roads in the U.S.

Tesla shares dipped 1.8% on Friday and are now down 25% for the year, the biggest drop among tech’s megacap companies.

The verdict could set a precedent for Autopilot-related suits against Tesla. About a dozen active cases are underway focused on similar claims involving incidents where Autopilot or Tesla’s FSD— Full Self-Driving (Supervised) — had been in use just before a fatal or injurious crash.

The National Highway Traffic Safety Administration initiated a probe in 2021 into possible safety defects in Tesla’s Autopilot systems. During the course of that investigation, Tesla made changes, including a number of over-the-air software updates.

The agency then opened a second probe, which is ongoing, evaluating whether Tesla’s “recall remedy” to resolve issues with the behavior of its Autopilot, especially around stationary first responder vehicles, had been effective.

The NHTSA has also warned Tesla that its social media posts may mislead drivers into thinking its cars are capable of functioning as robotaxis, even though owners manuals say the cars require hands-on steering and a driver attentive to steering and braking at all times.

A site that tracks Tesla-involved collisions, TeslaDeaths.com, has reported at least 58 deaths resulting from incidents where Tesla drivers had Autopilot engaged just before impact.

Read the jury’s verdict below.

Continue Reading

Technology

Crypto wobbles into August as Trump’s new tariffs trigger risk-off sentiment

Published

on

By

Crypto wobbles into August as Trump's new tariffs trigger risk-off sentiment

A screen showing the price of various cryptocurrencies against the US dollar displayed at a Crypto Panda cryptocurrency store in Hong Kong, China, on Monday, Feb. 3, 2025. 

Lam Yik | Bloomberg | Getty Images

The crypto market slid Friday after President Donald Trump unveiled his modified “reciprocal” tariffs on dozens of countries.

The price of bitcoin showed relative strength, hovering at the flat line while ether, XRP and Binance Coin fell 2% each. Overnight, bitcoin dropped to a low of $114,110.73.

The descent triggered a wave of long liquidations, which forces traders to sell their assets at market price to settle their debts, pushing prices lower. Bitcoin saw $172 million in liquidations across centralized exchanges in the past 24 hours, according to CoinGlass, and ether saw $210 million.

Crypto-linked stocks suffered deeper losses. Coinbase led the way, down 15% following its disappointing second-quarter earnings report. Circle fell 4%, Galaxy Digital lost 2%, and ether treasury company Bitmine Immersion was down 8%. Bitcoin proxy MicroStrategy was down by 5%.

Stock Chart IconStock chart icon

hide content

Bitcoin falls below $115,000

The stock moves came amid a new wave of risk off sentiment after President Trump issued new tariffs ranging between 10% and 41%, triggering worries about increasing inflation and the Federal Reserve’s ability to cut interest rates. In periods of broad based derisking, crypto tends to get hit as investors pull out of the most speculative and volatile assets. Technical resilience and institutional demand for bitcoin and ether are helping support their prices.

“After running red hot in July, this is a healthy strategic cooldown. Markets aren’t reacting to a crisis, they’re responding to the lack of one,” said Ben Kurland, CEO at crypto research platform DYOR. “With no new macro catalyst on the horizon, capital is rotating out of speculative assets and into safer ground … it’s a calculated pause.”

Crypto is coming off a winning month but could soon hit the brakes amid the new macro uncertainty, and in a month usually characterized by lower trading volumes and increased volatility. Bitcoin gained 8% in July, according to Coin Metrics, while ether surged more than 49%.

Ether ETFs saw more than $5 billion in inflows in July alone (with just a single day of outflows of $1.8 million on July 2), bringing it’s total cumulative inflows to $9.64 to date. Bitcoin ETFs saw $114 million in outflows in the final trading session of July, bringing its monthly inflows to about $6 billion out of a cumulative $55 billion.

Don’t miss these cryptocurrency insights from CNBC Pro:

Continue Reading

Technology

Google has dropped more than 50 DEI-related organizations from its funding list

Published

on

By

Google has dropped more than 50 DEI-related organizations from its funding list

Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California, on May 20, 2025.

David Paul Morris | Bloomberg | Getty Images

Google has purged more than 50 organizations related to diversity, equity and inclusion, or DEI, from a list of organizations that the tech company provides funding to, according to a new report.

The company has removed a total of 214 groups from its funding list while adding 101, according to a new report from tech watchdog organization The Tech Transparency Project. The watchdog group cites the most recent public list of organizations that receive the most substantial contributions from Google’s U.S. Government Affairs and Public Policy team.

The largest category of purged groups were DEI-related, with a total of 58 groups removed from Google’s funding list, TTP found. The dropped groups had mission statements that included the words “diversity, “equity,” “inclusion,” or “race,” “activism,” and “women.” Those are also terms the Trump administration officials have reportedly told federal agencies to limit or avoid.

In response to the report, Google spokesperson José Castañeda told CNBC that the list reflects contributions made in 2024 and that it does not reflect all contributions made by other teams within the company.

“We contribute to hundreds of groups from across the political spectrum that advocate for pro-innovation policies, and those groups change from year to year based on where our contributions will have the most impact,” Castañeda said in an email.

Organizations that were removed from Google’s list include the African American Community Service Agency, which seeks to “empower all Black and historically excluded communities”; the Latino Leadership Alliance, which is dedicated to “race equity affecting the Latino community”; and Enroot, which creates out-of-school experiences for immigrant kids. 

The organization funding purge is the latest to come as Google began backtracking some of its commitments to DEI over the last couple of years. That pull back came due to cost cutting to prioritize investments into artificial intelligence technology as well as the changing political and legal landscape amid increasing national anti-DEI policies.

Over the past decade, Silicon Valley and other industries used DEI programs to root out bias in hiring, promote fairness in the workplace and advance the careers of women and people of color — demographics that have historically been overlooked in the workplace.

However, the U.S. Supreme Court’s 2023 decision to end affirmative action at colleges led to additional backlash against DEI programs in conservative circles.

President Donald Trump signed an executive order upon taking office in January to end the government’s DEI programs and directed federal agencies to combat what the administration considers “illegal” private-sector DEI mandates, policies and programs. Shortly after, Google’s Chief People Officer Fiona Cicconi told employees that the company would end DEI-related hiring “aspirational goals” due to new federal requirements and Google’s categorization as a federal contractor.

Despite DEI becoming such a divisive term, many companies are continuing the work but using different language or rolling the efforts under less-charged terminology, like “learning” or “hiring.”

Even Google CEO Sundar Pichai maintained the importance diversity plays in its workforce at an all-hands meeting in March.

“We’re a global company, we have users around the world, and we think the best way to serve them well is by having a workforce that represents that diversity,” Pichai said at the time.

One of the groups dropped from Google’s contributions list is the National Network to End Domestic Violence, which provides training, assistance, and public awareness campaigns on the issue of violence against women, the TTP report found. The group had been on Google’s list of funded organizations for at least nine years and continues to name the company as one of its corporate partners.

Google said it still gave $75,000 to the National Network to End Domestic Violence in 2024 but did not say why the group was removed from the public contributions list.

WATCH: Alphabet’s valuation remains highly attractive, says Evercore ISI’s Mark Mahaney

Alphabet's valuation remains highly attractive, says Evercore ISI's Mark Mahaney

Continue Reading

Trending