As governments deliberate whether artificial intelligence poses risks and whether it needs regulating, Singapore is taking a wait-and-see approach.
“We are currently not looking at regulating AI,” Lee Wan Sie, director for trusted AI and data at Singapore’s Infocomm Media Development Authority, told CNBC. IMDA promotes and regulates Singapore’s communication and media sectors.
The Singapore government is making efforts to promote the responsible use of AI.
It is calling on companies to collaborate on the world’s first AI testing toolkit — called AI Verify — which enables users to conduct technical tests on their AI models and record process checks.
AI Verify was launched as a pilot project in 2022. Tech giant IBM and Singapore Airlines have already started pilot testing as part of the program.
Calls for regulation
In recent months, AI buzz has gathered pace after the chatbot ChatGPT went viral for its ability to generate humanlike responses to users’ prompts. It hit 100 million users within two months of its launch.
Globally, however, there have been repeated calls for government intervention to address the potential risks of AI.
“At this stage, it is quite clear that we want to be able to learn from the industry. We will learn how AI is being used before we decide if more needs to be done from a regulatory front,” said Lee, adding that regulation may be introduced at a later stage.
“We recognize that as a small country, as the government, we may not have all the answers to this. So it’s very important that we work closely with the industry, research organizations and other governments,” said Lee.
Haniyeh Mahmoudian, an AI ethicist at DataRobot and an advisory member of the U.S. National AI Advisory Committee, said “it really benefits” both businesses and policymakers.
“The industry is more hands-on when it comes to AI. Sometimes when it comes to regulations, you see the gap between what the policymakers are thinking about AI versus what’s actually happening in the business,” said Mahmoudian.
“So having this type of collaboration specifically creating these types of toolkits has the input from the industry. It really benefits both sides,” she added.
Google, Microsoft and IBM are among the tech giants that have already joined the AI Verify Foundation — a global open-source community set up to discuss AI standards and best practices, and to collaborate on governing AI.
“We at Microsoft applaud the Singapore government’s leadership in this area,” said Brad Smith, president and vice chair at Microsoft, in a press release.
“By creating practical resources like the AI governance testing framework and toolkit, Singapore is helping organizations build robust governance and testing processes,” said Smith.
Collaborative approach
France’s President Emmanuel Macron and his ministers have expressed a need for AI regulation. “I think we do need a regulation and all the players, even the U.S. players, agree with that,” Macron told CNBC last week.
China has already developed draft rules designed to manage how companies develop generative AI products like ChatGPT.
Innovation in a safe environment
Singapore could act as a “steward” in the region for allowing innovation but in a safe environment, said Stella Cramer, APAC head of international law firm Clifford Chance’s tech group.
Clifford Chance works with regulators on guidelines and frameworks across a range of markets.
“There’s just this consistent approach that we’re seeing around openness and collaboration. Singapore is viewed as a jurisdiction that is a safe place to come and test and roll out your technology with the support of the regulators in a controlled environment,” said Cramer.
The city-state has launched several pilot projects, such as the FinTech Regulatory Sandbox and a healthtech sandbox, which let industry players test their products in a live environment before going to market.
“These structured frameworks and testing toolkits will help guide AI governance policies to promote safe and trustworthy AI for businesses,” said Cramer.
“AI Verify may potentially be useful for demonstration of compliance to certain requirements,” said IMDA’s Lee. “At the end, as a regulator, if I want to enforce [regulation], I must know how to do it.”
A Tesla robotaxi drives on the street along South Congress Avenue in Austin, Texas, on June 22, 2025.
Tesla was contacted by the National Highway Traffic Safety Administration on Monday after videos posted on social media showed the company’s robotaxis driving in a chaotic manner on public roads in Austin, Texas.
Elon Musk’s electric vehicle maker debuted autonomous trips in Austin on Sunday, opening the service to a limited number of riders by invitation only.
In the videos shared widely online, one Tesla robotaxi was spotted traveling the wrong way down a road, and another was shown braking hard in the middle of traffic, responding to “stationary police vehicles outside its driving path,” among several other examples.
A spokesperson for NHTSA said in an email that the agency “is aware of the referenced incidents and is in contact with the manufacturer to gather additional information.”
Tesla Vice President of Vehicle Engineering Lars Moravy and regulatory counsel Casey Blaine didn’t immediately respond to a request for comment.
The federal safety regulator says it doesn’t “pre-approve new technologies or vehicle systems.” Instead, automakers certify that each vehicle model they make meets federal motor vehicle safety standards. The agency says it will investigate “incidents involving potential safety defects,” and take “necessary actions to protect road safety,” after assessing a wide array of reports and information.
NHTSA previously initiated an investigation into possible safety defects with Tesla’s FSD-Supervised technology, or FSD Beta systems, following injurious and fatal accidents. That probe is ongoing.
The Tesla robotaxis in Austin are Model Y SUVs equipped with the company’s latest FSD Unsupervised software and hardware. The pilot robotaxi service, involving fewer than two dozen vehicles, operates during daylight hours and only in good weather, with a human safety supervisor in the front passenger seat.
The service is now limited to invited users, who agree to the terms of Tesla’s “early access program.” Those who have received invites are mostly promoters of Tesla’s products, stock and CEO.
While the rollout sent Tesla shares up 8% on Monday, the launch fell shy of fulfilling Musk’s many driverless promises over the past decade.
In 2015, Musk told shareholders Tesla cars would achieve “full autonomy” within three years. In 2016, he said a Tesla EV would be able to make a cross-country drive without needing any human intervention before the end of 2017. And in 2019, on a call with institutional investors that helped him raise more than $2 billion, Musk said Tesla would have 1 million robotaxi-ready vehicles on the road in 2020, able to complete 100 hours of driving work per week each, making money for their owners.
None of that has happened.
Meanwhile, Alphabet-owned Waymo said last month that it has surpassed 10 million paid trips. Competitors in China, including Baidu’s Apollo Go, WeRide and Pony.ai, are also operating commercial robotaxi fleets.
Runway is best known for its AI video-generation tools and earned a spot on CNBC’s Disruptor 50 list earlier this month.
The deal talks between Meta and Runway did not progress far and dissolved, according to a person familiar with the matter who asked not to be named due to the confidential nature of the discussions.
Bloomberg earlier reported the talks. Meta declined to comment.
Meta CEO Mark Zuckerberg has been aggressively pushing to bolster his company’s AI efforts in recent months. The social media giant invested $14.3 billion into Scale AI in June, and it has also approached the startups Safe Superintelligence and Perplexity AI about potential acquisitions this year.
Meta agreed to a 49% stake in Scale AI and hired away founder Alexandr Wang along with a few other employees from the company.
While Meta was unsuccessful in its effort to buy Safe Superintelligence outright, that startup’s CEO, Daniel Gross, and former GitHub CEO Nat Friedman are joining Meta’s AI efforts, where they will work on products under Wang.
A woman walks past a logo of WhatsApp during a Meta event in Mumbai, India, on Sept. 20, 2023.
Meta is pushing back against a ban on WhatsApp from government devices.
The chief administrative officer, or CAO, of the U.S. House of Representatives told staffers on Monday that they are not allowed to use Meta’s popular messaging app. The CAO cited a lack of transparency about WhatsApp’s data privacy and security practices as the reason for the ban, according to a report by Axios that cited an internal email from the government office.
The CAO told House staff members in the email that they are not allowed to download WhatsApp on their government devices or access the app on their smartphones or desktop computers, the report said. Staffers who already have WhatsApp installed must remove it from their devices, the report said.
“Protecting the People’s House is our topmost priority, and we are always monitoring and analyzing for potential cybersecurity risks that could endanger the data of House Members and staff,” U.S. House Chief Administrative Officer Catherine Szpindor told CNBC in a written statement.
Meta spokesperson Andy Stone on Monday responded to the report via a post on X, saying the company disagrees “with the House Chief Administrative Officer’s characterization in the strongest possible terms.”
“We know members and their staffs regularly use WhatsApp and we look forward to ensuring members of the House can join their Senate counterparts in doing so officially,” Stone said.
In a separate X post, Stone said WhatsApp’s encrypted nature provides a “higher level of security than most of the apps on the CAO’s approved list that do not offer that protection.”
Some of the messaging apps the CAO said are acceptable alternatives to WhatsApp include Microsoft Teams, Signal and Apple’s iMessage, the Axios report said.
Meta is currently embroiled in an antitrust case with the Federal Trade Commission over the social media company’s acquisitions of WhatsApp and Instagram.