Sam Altman, CEO of OpenAI, and Lisa Su, CEO of Advanced Micro Devices, testify during the Senate Commerce, Science and Transportation Committee hearing titled “Winning the AI Race: Strengthening U.S. Capabilities in Computing and Innovation,” in the Hart Building on Thursday, May 8, 2025.

Tom Williams | CQ-Roll Call, Inc. | Getty Images

In a sweeping interview last week, OpenAI CEO Sam Altman addressed a plethora of moral and ethical questions regarding his company and the popular ChatGPT AI model.  

“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model,” Altman told former Fox News host Tucker Carlson in a nearly hour-long interview. 

“I don’t actually worry about us getting the big moral decisions wrong,” Altman said, though he admitted “maybe we will get those wrong too.” 

Rather, he said he loses the most sleep over the “very small decisions” on model behavior, which can ultimately have big repercussions.

These decisions tend to center around the ethics that inform ChatGPT, and what questions the chatbot does and doesn’t answer. Here’s an outline of some of those moral and ethical dilemmas that appear to be keeping Altman awake at night.

How does ChatGPT address suicide?

According to Altman, the most difficult issue the company is grappling with recently is how ChatGPT approaches suicide, in light of a lawsuit from a family who blamed the chatbot for their teenage son’s suicide.

The CEO said that of the thousands of people who die by suicide each week, many may have been talking to ChatGPT in the lead-up.

“They probably talked about [suicide], and we probably didn’t save their lives,” Altman said candidly. “Maybe we could have said something better. Maybe we could have been more proactive. Maybe we could have provided a little bit better advice about, hey, you need to get this help.” 

Last month, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”

Soon after, in a blog post titled “Helping people when they need it most,” OpenAI detailed plans to address ChatGPT’s shortcomings when handling “sensitive situations,” and said it would keep improving its technology to protect people who are at their most vulnerable. 

How are ChatGPT’s ethics determined?

Another large topic broached in the sit-down interview was the ethics and morals that inform ChatGPT and its stewards. 

While Altman described the base model of ChatGPT as trained on the collective experience, knowledge and learnings of humanity, he said that OpenAI must then align certain behaviors of the chatbot and decide what questions it won’t answer. 

“This is a really hard problem. We have a lot of users now, and they come from very different life perspectives… But on the whole, I have been pleasantly surprised with the model’s ability to learn and apply a moral framework.” 

When pressed on how certain model specifications are decided, Altman said the company had consulted “hundreds of moral philosophers and people who thought about ethics of technology and systems.”

One example of such a specification, he said, is that ChatGPT will refuse to answer questions about how to make biological weapons.

“There are clear examples of where society has an interest that is in significant tension with user freedom,” Altman said, though he added the company “won’t get everything right, and also needs the input of the world” to help make these decisions.

How private is ChatGPT?

Another big discussion topic was the concept of user privacy regarding chatbots, with Carlson arguing that generative AI could be used for “totalitarian control.”

In response, Altman said one piece of policy he has been pushing for in Washington is “AI privilege,” which refers to the idea that anything a user says to a chatbot should be completely confidential. 

“When you talk to a doctor about your health or a lawyer about your legal problems, the government cannot get that information, right?… I think we should have the same concept for AI.” 

According to Altman, that would allow users to consult AI chatbots about their medical history and legal problems, among other things. Currently, U.S. officials can subpoena the company for user data, he added.

“I think I feel optimistic that we can get the government to understand the importance of this,” he said. 

Just how powerful is OpenAI?

Carlson predicted in the interview that, on its current trajectory, generative AI, and by extension Sam Altman, could amass more power than any other person, going so far as to call ChatGPT a “religion.”

In response, Altman said he used to worry a lot about the concentration of power that could result from generative AI, but he now believes that AI will result in “a huge up leveling” of all people. 

“What’s happening now is tons of people use ChatGPT and other chatbots, and they’re all more capable. They’re all kind of doing more. They’re all able to achieve more, start new businesses, come up with new knowledge, and that feels pretty good.”

However, the CEO said he thinks AI will eliminate many jobs that exist today, especially in the short term.

Google launches Nano Banana Pro, an updated AI image generator powered by Gemini 3

Sopa Images | Lightrocket | Getty Images

Google on Thursday rolled out Nano Banana Pro, its latest image editing and generation tool, continuing the company’s momentum after launching its new Gemini artificial intelligence model earlier this week.

The product is built on Gemini 3 Pro, which was announced on Tuesday and contributed to record-breaking stock highs.

Alphabet’s stock was up 4% Thursday.

Josh Woodward, vice president of Google Labs and Gemini, told CNBC’s Deirdre Bosa that Nano Banana Pro’s capabilities expand beyond those of the original iteration, which launched in late August.

“It’s incredible at infographics. It can make slide decks. It can take up to 14 different images, or five different characters, and sort of keep that character consistency,” he said.

He added that internal users have experimented with the feature by inputting code snippets and even LinkedIn resumes to create infographics.

“I think this ability to visualize things that were previously maybe not something you would think of as a visual medium that tends to be one of the magic things people are finding with it,” Woodward said.

The original Nano Banana went viral on social media as users turned photos of themselves or their pets into hyperrealistic 3D figurines. Woodward wrote in an X post in September that the product helped add 13 million new users to the Gemini app in the span of four days.

Nano Banana Pro is currently available in the Gemini app with limited free quotas, in NotebookLM, Google’s research and writing assistant, and in the company’s developer, enterprise and advertising products.

Google AI Pro and Ultra subscribers will also have access to the product in AI Mode, Google’s AI-powered search feature.

The feature will also roll out later in Flow, Google’s AI filmmaking tool, with Ultra subscribers getting access first.

Google introduced another feature in the Gemini app that allows users to upload any image to find out if it was generated by Google AI.

Images generated on free Nano Banana accounts will have a watermark, but it will be removed for Google AI Ultra tier subscribers.

Google has been working to gain ground on OpenAI in the generative AI race, which ignited after the release of ChatGPT in 2022.

Last week, OpenAI announced two updates to its GPT-5 model to make it “warmer by default and more conversational” as well as “more efficient and easier to understand in everyday use,” the company said.

ChatGPT currently tops the list of free apps on Apple’s App Store, with Gemini in the second spot.

The Gemini app currently has over 650 million monthly active users, and Gemini-powered AI Overviews has 2 billion monthly users, Google said in a release. OpenAI CEO Sam Altman said in October that ChatGPT had reached 800 million weekly active users.

Woodward said demand for Google AI products has been growing, with many users signing up for Gemini’s subscription plan to get “higher limits with some of these advanced models.”

“We’re seeing high numbers of people coming to lots of these products,” he said. “That’s really the best problem to have, is there’s a lot of demand, and we’re trying to figure out actually how to serve it.”

The company is looking to continue scaling its AI offerings, Woodward said, highlighting Flow, Google’s AI filmmaking tool, and Genie, a “world building” model that is currently available as a limited research preview.

U.S. greenlights AI chip exports to Gulf tech giants after Saudi Crown Prince’s Washington visit

U.S. President Donald Trump and Crown Prince and Prime Minister Mohammed bin Salman of Saudi Arabia stand for a photo with Tesla CEO Elon Musk, Nvidia CEO Jensen Huang and other participants at the U.S.-Saudi Investment Forum at the Kennedy Center on Nov. 19, 2025 in Washington, DC.

Win McNamee | Getty Images

The U.S. has approved sales of advanced Nvidia chips to Saudi Arabia’s HUMAIN and the United Arab Emirates’ G42, authorizing the state-backed firms to buy up to 35,000 chips, worth an estimated $1 billion.

The approval of these chip exports marks a major reversal for the U.S., which had previously balked at the idea of direct exports to state-backed AI companies in the Gulf. Export controls were put in place to prevent advanced American technology from reaching China through the back door of Gulf Arab states.

Before former President Joe Biden left office in January, he imposed a final round of export restrictions on advanced AI chips, targeting companies like Nvidia, in a sweeping effort to keep that cutting-edge U.S. intellectual property out of China’s reach.

Now, President Donald Trump is moving to expand the reach of such advanced technology in order to “promote continued American AI dominance and global technological leadership,” the U.S. Commerce Department said in a statement published on Wednesday. 

The U.S. Commerce Department approved the chip exports on the condition that the state-backed AI firms agree to “rigorous security and reporting requirements,” overseen by the department’s Bureau of Industry and Security.

Saudi’s Victory Lap

The export approval follows Saudi Crown Prince Mohammed bin Salman’s trip to Washington this week where the Kingdom pledged to spend $1 trillion in the U.S., up from $600 billion originally committed during Trump’s Gulf tour in May.

“Even if we don’t get to that, both sides have skin in the game,” Afshin Molavi, senior fellow at the Foreign Policy Institute of the Johns Hopkins University School of Advanced International Studies, told CNBC’s Dan Murphy.

Saudi Arabia’s AI company HUMAIN, backed by the Kingdom’s nearly $1 trillion Public Investment Fund, signed a long list of partnerships with Adobe, Qualcomm, AMD, Cisco, GlobalAI, Groq, Luma and xAI at the U.S.-Saudi Investment Forum held in Washington, D.C., this week. Notably, HUMAIN will team up with Elon Musk’s xAI to build a 500 megawatt data center in the Kingdom.

“What we want to do in 2026 is to build the capacity equivalent to what Saudi has built in the last 20 years, in one year,” Tareq Amin, CEO of HUMAIN, said at the summit. HUMAIN hopes to position Saudi Arabia as the third-biggest global AI hub, behind only the U.S. and China.
