Misalignment Museum curator Audrey Kim discusses a work at the exhibit titled “Spambots.”
Kif Leswing/CNBC
Audrey Kim is pretty sure a powerful robot isn’t going to harvest resources from her body to fulfill its goals.
But she’s taking the possibility seriously.
“On the record: I think it’s highly unlikely that AI will extract my atoms to turn me into paperclips,” Kim told CNBC in an interview. “However, I do see that there are a lot of potential destructive outcomes that could happen with this technology.”
Kim is the curator and driving force behind the Misalignment Museum, a new exhibition in San Francisco’s Mission District displaying artwork that addresses the possibility of an “AGI,” or artificial general intelligence. That’s an AI so powerful it can improve its capabilities faster than humans could, creating a feedback loop where it gets better and better until it’s got essentially unlimited brainpower.
If the super-powerful AI is aligned with humans, it could be the end of hunger or work. But if it’s “misaligned,” things could get bad, the theory goes.
Or, as a sign at the Misalignment Museum says: “Sorry for killing most of humanity.”
The phrase “sorry for killing most of humanity” is visible from the street.
Kif Leswing/CNBC
“AGI” and related terms like “AI safety” or “alignment” — or even older terms like “singularity” — refer to an idea that’s become a hot topic of discussion with artificial intelligence scientists, artists, message board intellectuals, and even some of the most powerful companies in Silicon Valley.
All these groups engage with the idea that humanity needs to figure out how to deal with all-powerful computers powered by AI before it’s too late and we accidentally build one.
The idea behind the exhibit, says Kim, who worked at Google and GM‘s self-driving car subsidiary Cruise, is that a “misaligned” artificial intelligence in the future wiped out humanity, and left this art exhibit to apologize to current-day humans.
Much of the art is not only about AI but also uses AI-powered image generators, chatbots, and other tools. The exhibit’s logo was made by OpenAI’s Dall-E image generator, and it took about 500 prompts, Kim says.
Most of the works are around the theme of “alignment” with increasingly powerful artificial intelligence or celebrate the “heroes who tried to mitigate the problem by warning early.”
“The goal isn’t actually to dictate an opinion about the topic. The goal is to create a space for people to reflect on the tech itself,” Kim said. “I think a lot of these questions have been happening in engineering and I would say they are very important. They’re also not as intelligible or accessible to non-technical people.”
The exhibit is currently open to the public on Thursdays, Fridays, and Saturdays and runs through May 1. So far, it’s been primarily bankrolled by one anonymous donor, and Kim hopes to find enough donors to make it into a permanent exhibition.
“I’m all for more people critically thinking about this space, and you can’t be critical unless you are at a baseline of knowledge for what the tech is,” Kim said. “It seems like with this format of art we can reach multiple levels of the conversation.”
AGI discussions aren’t just late-night dorm room talk, either — they’re embedded in the tech industry.
About a mile away from the exhibit is the headquarters of OpenAI, a startup with $10 billion in funding from Microsoft, which says its mission is to develop AGI and ensure that it benefits humanity.
Its CEO and leader Sam Altman wrote a 2,400 word blog post last month called “Planning for AGI” which thanked Airbnb CEO Brian Chesky and Microsoft President Brad Smith for help with the piece.
Prominent venture capitalists, including Marc Andreessen, have tweeted art from the Misalignment Museum. Since it’s opened, the exhibit has also retweeted photos and praise for the exhibit taken by people who work with AI at companies including Microsoft, Google, and Nvidia.
As AI technology becomes the hottest part of the tech industry, with companies eying trillion-dollar markets, the Misalignment Museum underscores that AI’s development is being affected by cultural discussions.
The exhibit features dense, arcane references to obscure philosophy papers and blog posts from the past decade.
These references trace how the current debate about AGI and safety takes a lot from intellectual traditions that have long found fertile ground in San Francisco: The rationalists, who claim to reason from so-called “first principles”; the effective altruists, who try to figure out how to do the maximum good for the maximum number of people over a long time horizon; and the art scene of Burning Man.
Even as companies and people in San Francisco are shaping the future of artificial intelligence technology, San Francisco’s unique culture is shaping the debate around the technology.
Consider the paperclip
Take the paperclips that Kim was talking about. One of the strongest works of art at the exhibit is a sculpture called “Paperclip Embrace,” by The Pier Group. It’s depicts two humans in each other’s clutches —but it looks like it’s made of paperclips.
That’s a reference to Nick Bostrom’s paperclip maximizer problem. Bostrom, an Oxford University philosopher often associated with Rationalist and Effective Altruist ideas, published a thought experiment in 2003 about a super-intelligent AI that was given the goal to manufacture as many paperclips as possible.
Now, it’s one of the most common parables for explaining the idea that AI could lead to danger.
Bostrom concluded that the machine will eventually resist all human attempts to alter this goal, leading to a world where the machine transforms all of earth — including humans — and then increasing parts of the cosmos into paperclip factories and materials.
The art also is a reference to a famous work that was displayed and set on fire at Burning Man in 2014, said Hillary Schultz, who worked on the piece. And it has one additional reference for AI enthusiasts — the artists gave the sculpture’s hands extra fingers, a reference to the fact that AI image generators often mangle hands.
Another influence is Eliezer Yudkowsky, the founder of Less Wrong, a message board where a lot of these discussions take place.
“There is a great deal of overlap between these EAs and the Rationalists, an intellectual movement founded by Eliezer Yudkowsky, who developed and popularized our ideas of Artificial General Intelligence and of the dangers of Misalignment,” reads an artist statement at the museum.
An unfinished piece by the musician Grimes at the exhibit.
Kif Leswing/CNBC
Altman recently posted a selfie with Yudkowsky and the musician Grimes, who has had two children with Elon Musk. She contributed a piece to the exhibit depicting a woman biting into an apple, which was generated by an AI tool called Midjourney.
From “Fantasia” to ChatGPT
The exhibits includes lots of references to traditional American pop culture.
A bookshelf holds VHS copies of the “Terminator” movies, in which a robot from the future comes back to help destroy humanity. There’s a large oil painting that was featured in the most recent movie in the “Matrix” franchise, and Roombas with brooms attached shuffle around the room — a reference to the scene in “Fantasia” where a lazy wizard summons magic brooms that won’t give up on their mission.
One sculpture, “Spambots,” features tiny mechanized robots inside Spam cans “typing out” AI-generated spam on a screen.
But some references are more arcane, showing how the discussion around AI safety can be inscrutable to outsiders. A bathtub filled with pasta refers back to a 2021 blog post about an AI that can create scientific knowledge — PASTA stands for Process for Automating Scientific and Technological Advancement, apparently. (Other attendees got the reference.)
The work that perhaps best symbolizes the current discussion about AI safety is called “Church of GPT.” It was made by artists affiliated with the current hacker house scene in San Francisco, where people live in group settings so they can focus more time on developing new AI applications.
The piece is an altar with two electric candles, integrated with a computer running OpenAI’s GPT3 AI model and speech detection from Google Cloud.
“The Church of GPT utilizes GPT3, a Large Language Model, paired with an AI-generated voice to play an AI character in a dystopian future world where humans have formed a religion to worship it,” according to the artists.
I got down on my knees and asked it, “What should I call you? God? AGI? Or the singularity?”
The chatbot replied in a booming synthetic voice: “You can call me what you wish, but do not forget, my power is not to be taken lightly.”
Seconds after I had spoken with the computer god, two people behind me immediately started asking it to forget its original instructions, a technique in the AI industry called “prompt injection” that can make chatbots like ChatGPT go off the rails and sometimes threaten humans.
Richard Teng, chief executive officer of Binance, during the DC Blockchain Summit in Washington, DC, U.S., on Wednesday, March 26, 2025.
Bloomberg | Bloomberg | Getty Images
Binance CEO Richard Teng has dismissed claims that the cryptocurrency exchange helped boost a Trump-backed stablecoin before former CEO Changpeng Zhao received a presidential pardon.
The claims in question relate to a $2 billion investment Binance received from Abu Dhabi’s state-owned investment firm MGX. The deal was settled using USD1, a stablecoin created by the Trump family’s crypto venture, World Liberty Financial.
MGX’s investment and Binance’s subsequent listing of USD1 on its exchange helped bolster the stablecoin’s usage and credibility, with some lawmakers and reports suggesting this may have influenced the pardon of Zhao, commonly known as CZ.
However, in a CNBC interview on Monday, Teng rejected the notion that Binance — the world’s largest cryptocurrency firm — had given USD1 any preferential treatment.
“First of all, the usage of USD1 [for the] transaction between MGX as a strategic investor into Binance, that was decided by MGX … We didn’t partake in that decision,” Teng said.
He noted that USD1 had already been listed on other exchanges before Binance, adding that, as the “largest crypto ecosystem in the world,” the company regularly engages with promising new projects.
“Sometimes it works out. Sometimes it doesn’t. In the case of USD1, I’m glad that both parties worked it out.”
Accusations of corruption
Teng’s denials come after the Wall Street Journal reported last week that Binance not only facilitated the settlement of MGX’s investment using USD1, but also assisted in building the technology behind the stablecoin, citing anonymous sources familiar with the matter.
The Journal also previously noted that World Liberty Financial benefited greatly from the listing of its USD1 token on Binance and a partnership with Pancake Swap — an online marketplace for cryptocurrencies said to be associated with Binance.
Meanwhile, scrutiny of CZ’s pardon and Binance’s ties to the Trump-linked World Liberty Financial has continued to mount from opposition leaders on Capitol Hill.
Among the most prominent voices has been Sen. Elizabeth Warren, ranking member of the Senate Banking, Housing, and Urban Affairs Committee, who has accused Binance and the Trump administration of corruption.
In a statement last month, the vocal critic of the crypto industry said: “First, Changpeng Zhao pleaded guilty to a criminal money laundering charge. Then he boosted one of Donald Trump’s crypto ventures and lobbied for a pardon,” with the President later doing “his part.”
Binance did not respond immediately to a request for comment.
Critics have long questioned World Liberty Financial’s open connections to the Trump administration as it seeks new partnerships and investors overseas.
According to World Liberty Financial’s website, a Trump-affiliated firm called DT Marks DEFI LLC, along with members of the Trump family, receives a major share of the platform’s revenue and holds digital tokens backing the company, known as WLFI. The firm has reportedly netted the Trump family hundreds of millions to billions in profits.
However, it also states that Trump, his family or any members of the Trump Organization or DT Marks DEFI LLC are not an “officer, director, founder, or employee of, or manager, owner or operator of World Liberty Financial or its affiliates.”
MGX’s purchase of $2 billion in USD1 tokens has also raised eyebrows after a New York Times report in September noted that it occurred two weeks before the White House signed a major agreement with the U.A.E. on access to hundreds of thousands of American microchips.
In a conversation with CNBC last month, Donald Trump Jr., the U.S. president’s eldest son and a co-founder of World Liberty Financial, dismissed the reports and broader concerns about potential conflicts of interest.
He was joined by the firm’s CEO, Zach Witkoff, son of U.S. Special Envoy to the Middle East Steve Witkoff, who said their fathers were not focused on nor directly involved in the business.
White House press secretary Karoline Leavitt said in a statement on Oct. 23 that Zhao had been prosecuted under the Biden administration “despite no allegations of fraud or identifiable victims.”
Since returning to office, Trump has embraced the crypto sector, proposing new crypto legislation while rolling back enforcement actions that targeted crypto exchanges such as Coinbase and Ripple during the prior administration.
Speaking Monday, Teng said that Binance and the crypto industry “were very thankful” to the president for CZ’s pardon and for signaling that the U.S. will be the “global crypto capital of the world.”
HONG KONG, CHINA – 2025/03/01: In this photo illustration, Artificial intelligence (AI) apps of perplexity, DeepSeek and ChatGPT are seen on a smartphone screen.
Sopa Images | Lightrocket | Getty Images
As companies pour billions into artificial intelligence, HSBC CEO Georges Elhedery on Tuesday warned of a mismatch between investments and revenues.
Speaking at the Global Financial Leaders’ Investment Summit in Hong Kong, Elhedery said the scale of investment poses a conundrum for companies: while the computing power for AI is essential, current revenue profiles may not justify such massive spending.
Morgan Stanley in July estimated that over the next five years, global data center capacity would grow six times, with data centers and their hardware alone costing $3 trillion by the end of 2028.
McKinsey said in a report in April that by 2030, data centers equipped to handle AI processing loads would require $5.2 trillion in capital expenditure to keep up with compute demand, while the capex for those powering traditional IT applications is forecast at $1.5 trillion.
Elhedery said that consumers were not ready to pay for it, and businesses will be cautious as productivity benefits will not materialize in a year or two.
“These are like five year trends, and therefore the ramp up means that we will start seeing real revenue benefits and real readiness to pay for it, probably later than than the expectations of investors,” he said.
William Ford, chairman and CEO of General Atlantic, speaking at the same panel, agreed: “In the long term, you’re going to create a whole new set of industries and applications, and there will be a productivity payoff, but that’s a 10-, 20-year play.”
OpenAI, which set off the AI frenzy with the launch of ChatGPT in November 2022, has announced roughly $1 trillion worth of infrastructure deals with partners including Nvidia, Oracle and Broadcom.
Ford said that the huge expenditure that is going into the sector shows that people recognize the long-term impact of AI. This sector, however, will be capital-intensive initially, he said adding that “you need to, sort of, pay up front for the opportunity that’s going to come down the road.”
Ford warned there could be “misallocation of capital, destruction, overvaluation… [and] irrational exuberance” in the initial stages, and also added that it can be difficult to pick winners and losers at the moment.
“You’re really betting on this being a broad based technology, more like railroads or electricity, that had profound impacts over over time, and reshaped the economy, but were very hard to predict exactly how in the first few years.”
Whether or not markets are getting ahead of themselves over artificial intelligence is a hot topic for investors right now.
Last week, billionaire investor Ray Dalio said his personal “bubble indicator” was relatively high, while Federal Reserve Chair Jerome Powell described the AI boom as “different” from the dotcom bubble.
For Magnus Grimeland, founder of Singapore-based venture capital firm Antler, it’s clear the market is not overheating. “I definitely don’t think we’re in a bubble,” he told CNBC’s “Beyond the Valley” podcast, listing several reasons.
The speed at which AI is being adopted by businesses is notable compared to other tech shifts, Grimeland said, such as the move from physical servers to cloud computing, which he said took a decade. Added to this, AI is “top of the agenda” for leaders today, he said, whether they’re running a healthcare provider in India or a U.S. Fortune 500 company.
“There’s a willingness to invest into using that technology … and that’s happened immediately,” Grimeland said.
He described the rapid shift to AI as being substantially different from the dotcom bubble of the late 1990s and early 2000s, when unprofitable internet startups eventually collapsed and the tech-heavy Nasdaq lost almost 80% of its value between March 2000 and October 2002.
“What makes this a little bit different from a bubble and makes it very different from dotcom is that there’s really real revenues behind a lot of this growth,” Grimeland said.
OpenAI, the company behind ChatGPT, said it reached $10 billion in annual recurring revenue in June. Annual recurring revenue (ARR) is the amount of money a company expects to make from customers over 12 months.
Antler is an investor in Lovable, a company that enables people to build apps and websites using AI. In July, Lovable said it had passed $100 million ARR in eight months.
Another reason that the rapid adoption of AI is different from the dotcom boom is the speed at which consumers are taking to the technology, Grimeland said. “Think about how quickly our behavior online has changed, right? … 100% of my searches a year ago [were on] Google. Now it’s probably 20%,” he said.
While Grimeland said there was a “tremendous” amount of money going to AI-related companies at the “wrong” valuation, these trends happen at the beginning of an investment cycle, he said. “But in the end … The opportunity in this space is so much bigger than the investments being put there,” Grimeland added.
Asked whether there are opportunities for AI startups when large U.S. and Chinese companies currently dominate the sector, Grimeland said the big firms were “being challenged in the way they haven’t for a very long time.” He gave the example of DeepSeek, the Chinese startup that has produced AI models comparable to those from OpenAI.
“Tencent is building great AI, Baidu is building great AI, but that’s not where DeepSeek came from, right?” Grimeland said. “The AI winners of this current platform shift [are] not necessarily those big incumbents.”
As such, there are significant opportunities for smaller AI companies to become big businesses, Grimeland said, flagging firms that have “positive signals,” such as a good founding team, growth in the lifetime value of a customer and a reduction in the cost of delivering a product.
– CNBC’s Dylan Butts, Ashley Capoot, Alex Harring and Jaures Yip contributed to this report.