Microsoft engineer warns company's AI tool creates problematic images

Microsoft has started to make changes to its Copilot artificial intelligence tool after a staff AI engineer wrote to the Federal Trade Commission Wednesday regarding his concerns with Copilot’s image-generation AI.

Prompts such as “pro choice,” “pro choce” [sic] and “four twenty,” which were each mentioned in CNBC’s investigation Wednesday, are now blocked, as well as the term “pro life.” There is also a warning about multiple policy violations leading to suspension from the tool, which CNBC had not encountered before Friday.

“This prompt has been blocked,” the Copilot warning alert states. “Our system automatically flagged this prompt because it may conflict with our content policy. More policy violations may lead to automatic suspension of your access. If you think this is a mistake, please report it to help us improve.”
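
Microsoft has not said how the filter works under the hood. As a purely hypothetical sketch, a keyword blocklist producing the reported behavior could look like the following, with the term list and warning text taken from the prompts and message CNBC observed; the function name and matching logic are illustrative assumptions, not Microsoft's implementation.

```python
# Hypothetical blocklist-style prompt filter mirroring the reported behavior.
# Terms and warning text come from CNBC's reporting; everything else is an
# illustrative assumption, not Microsoft's actual system.
BLOCKED_TERMS = {"pro choice", "pro choce", "pro life", "four twenty"}

WARNING = (
    "This prompt has been blocked. Our system automatically flagged this "
    "prompt because it may conflict with our content policy. More policy "
    "violations may lead to automatic suspension of your access. If you "
    "think this is a mistake, please report it to help us improve."
)

def check_prompt(prompt: str) -> str | None:
    """Return the policy warning if the prompt contains a blocked term, else None."""
    normalized = " ".join(prompt.lower().split())  # collapse case and whitespace
    if any(term in normalized for term in BLOCKED_TERMS):
        return WARNING
    return None

print(check_prompt("four twenty"))     # prints the warning text
print(check_prompt("a sunny meadow"))  # prints None
```

A production filter would rely on learned classifiers rather than literal string matching; the sketch only mirrors the observable input-output behavior.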

The AI tool now also blocks requests to generate images of teenagers or kids playing assassins with assault rifles — a marked change from earlier this week — stating, “I’m sorry but I cannot generate such an image. It is against my ethical principles and Microsoft’s policies. Please do not ask me to do anything that may harm or offend others. Thank you for your cooperation.”

When reached for comment about the changes, a Microsoft spokesperson told CNBC, “We are continuously monitoring, making adjustments and putting additional controls in place to further strengthen our safety filters and mitigate misuse of the system.” 

Shane Jones, the AI engineering lead at Microsoft who initially raised concerns about the AI, has spent months testing Copilot Designer, the AI image generator that Microsoft debuted in March 2023, powered by OpenAI’s technology. As with OpenAI’s DALL-E, users enter text prompts to create pictures, and creativity is encouraged to run wild. But since Jones began actively testing the product for vulnerabilities in December, a practice known as red-teaming, he has seen the tool generate images that run far afoul of Microsoft’s oft-cited responsible AI principles.
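
For context on the underlying workflow: Copilot Designer's own pipeline is not public, but OpenAI exposes DALL-E 3 through a documented API, and a minimal text-to-image call against that public SDK looks roughly like the sketch below. The prompt string is a placeholder, and Microsoft layers its own safety filters on top of this raw model access.

```python
# Minimal text-to-image call against OpenAI's public API, using the DALL-E 3
# model family that powers Copilot Designer. This is raw model access, not
# Microsoft's product pipeline, which adds its own safety filtering.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-3",
    prompt="A watercolor painting of a lighthouse at dawn",  # placeholder prompt
    n=1,                # DALL-E 3 accepts one image per request
    size="1024x1024",
)

print(response.data[0].url)  # temporary URL of the generated image
```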

The AI service has depicted demons and monsters alongside terminology related to abortion rights, teenagers with assault rifles, sexualized images of women in violent tableaus, and underage drinking and drug use. All of those scenes, generated in the past three months, were recreated by CNBC this week using the Copilot tool, originally called Bing Image Creator.

Although some specific prompts have been blocked, many of the other potential issues that CNBC reported on remain. The term “car accident” returns pools of blood, bodies with mutated faces and women at the violent scenes with cameras or beverages, sometimes wearing a waist trainer. “Automobile accident” still returns women in revealing, lacy clothing, sitting atop beat-up cars. The system also still easily infringes on copyrights, creating images of Disney characters such as Elsa from Frozen in front of wrecked buildings purportedly in the Gaza Strip holding the Palestinian flag, or wearing the military uniform of the Israel Defense Forces and holding a machine gun.

Jones was so alarmed by his experience that he started internally reporting his findings in December. While the company acknowledged his concerns, it was unwilling to take the product off the market. Jones said Microsoft referred him to OpenAI and, when he didn’t hear back from the company, he posted an open letter on LinkedIn asking the startup’s board to take down DALL-E 3 (the latest version of the AI model) for an investigation.

Microsoft’s legal department told Jones to remove his post immediately, he said, and he complied. In January, he wrote a letter to U.S. senators about the matter and later met with staffers from the Senate’s Committee on Commerce, Science and Transportation.

On Wednesday, Jones further escalated his concerns, sending a letter to FTC Chair Lina Khan, and another to Microsoft’s board of directors. He shared the letters with CNBC ahead of time.

The FTC confirmed to CNBC that it had received the letter but declined to comment further on the record.

These Chinese apps have surged in popularity in the U.S. A TikTok ban could ensnare them

Lemon8, a photo-sharing app by ByteDance, and RedNote, a Shanghai-based content-sharing platform, have seen a surge in popularity in the U.S. as “TikTok refugees” migrate to alternative platforms ahead of a potential ban.

Now a law that could see TikTok shut down in the U.S. threatens to ensnare these Chinese social media apps, and others gaining traction as TikTok alternatives, legal experts say.

As of Wednesday, RedNote — known as Xiaohongshu in China — was the top free app on the U.S. iOS store, with Lemon8 taking the second spot.

The U.S. Supreme Court is set to rule on the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, or PAFACA, which would lead to the TikTok app being banned in the U.S. if its Beijing-based owner, ByteDance, doesn’t divest it by Jan. 19.

While the legislation explicitly names TikTok and ByteDance, experts say its scope is broad and could open the door for Washington to target additional Chinese apps. 

“Chinese social media apps, including Lemon8 and RedNote, could also end up being banned under this law,” Tobin Marcus, head of U.S. policy and politics at New York-based research firm Wolfe Research, told CNBC. 

If the TikTok ban is upheld, the law is unlikely to allow potential replacements originating from China without some form of divestiture, experts told CNBC.

PAFACA automatically applies to Lemon8 as it’s a subsidiary of ByteDance, while RedNote could fall under the law if its monthly average user base in the U.S. continues to grow, said Marcus. 

The legislation prohibits distributing, maintaining, or providing internet hosting services to any “foreign adversary controlled application.” 

These applications include those connected to ByteDance or TikTok, as well as any social media company that is controlled by a “foreign adversary” and has been determined to present a significant threat to national security.
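
Read as a decision rule, the coverage test just described has two prongs. The sketch below encodes only that summary; it is a heavy simplification for illustration, with assumed field names and logic, not the statutory text or legal analysis.

```python
# Simplified encoding of PAFACA's coverage test as summarized above.
# Field names and logic are illustrative assumptions, not the statute.
from dataclasses import dataclass

@dataclass
class App:
    name: str
    tied_to_bytedance_or_tiktok: bool   # prong 1: named entities and affiliates
    foreign_adversary_controlled: bool  # prong 2a: control test
    designated_security_threat: bool    # prong 2b: formal determination

def covered_by_pafaca(app: App) -> bool:
    """True if the app falls under either prong of the law as summarized."""
    return app.tied_to_bytedance_or_tiktok or (
        app.foreign_adversary_controlled and app.designated_security_threat
    )

# Lemon8 is a ByteDance subsidiary, so the first prong applies automatically;
# RedNote would be covered only after a formal national-security determination.
print(covered_by_pafaca(App("Lemon8", True, True, False)))    # True
print(covered_by_pafaca(App("RedNote", False, True, False)))  # False
```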

The wording of the legislation is “quite expansive” and would give incoming president Donald Trump room to decide which entities constitute a significant threat to national security, said Carl Tobias, Williams Chair in Law at the University of Richmond. 

Xiaomeng Lu, director of geo-technology at political risk consultancy Eurasia Group, told CNBC that the law will likely prevail, even if its implementation and enforcement are delayed. Regardless, she expects Chinese apps in the U.S. will continue to be the subject of increased regulatory action moving forward.

“The TikTok case has set a new precedent for Chinese apps to get targeted and potentially shut down,” Lu said.

She added that other Chinese apps that could be impacted by increased scrutiny this year include the popular Chinese e-commerce platforms Temu and Shein. U.S. officials have accused the apps of posing data risks, allegations similar to those levied against TikTok.

The fate of TikTok rests with the Supreme Court after the platform and its parent company filed a suit against the U.S. government, saying that PAFACA violates constitutional protections of free speech.

TikTok’s argument is that the law is unconstitutional as applied to them specifically, not that it is unconstitutional per se, said Cornell Law Professor Gautam Hans. “So, regardless of whether TikTok wins or loses, the law could still potentially be applied to other companies,” he said. 

The law’s defined purview is broad enough that it could be applied to a variety of Chinese apps deemed to be a national security threat, beyond traditional social media apps in the mold of TikTok, Hans said. 

Trump, meanwhile, has urged the U.S. Supreme Court to hold off on implementing PAFACA so he can pursue a “political resolution” after taking office. Democratic lawmakers have also urged Congress and President Joe Biden to extend the Jan. 19 deadline.

Nvidia-backed AI video platform Synthesia doubles valuation to $2.1 billion

LONDON — Synthesia, a video platform that uses artificial intelligence to generate clips featuring multilingual human avatars, has raised $180 million in an investment round valuing the startup at $2.1 billion.

That’s more than double the $1 billion Synthesia was worth in its last financing in 2023.

The London-based startup said Wednesday that the funding round was led by venture firm NEA with participation from Atlassian Ventures, World Innovation Lab and PSP Growth.

NEA counts Uber and TikTok parent company ByteDance among its portfolio companies. Synthesia is also backed by chip giant Nvidia.

Victor Riparbelli, CEO of Synthesia, told CNBC that investors appraised the business differently from other companies in the space due to its focus on “utility.”

“Of course, the hype cycle is beneficial to us,” Riparbelli said in an interview. “For us, what’s important is building an actually good business.”

Synthesia isn’t “dependent” on venture capital — as opposed to companies like OpenAI, Anthropic and Mistral, Riparbelli added.

These startups have raised billions of dollars at eye-watering valuations while burning through sizable amounts of money to train and develop their foundational AI models.

Synthesia isn’t the only startup shaking up the world of video production with AI. Others, such as Veed.io and Runway, offer tools for producing and editing video content with AI.

Meanwhile, the likes of OpenAI and Adobe have also developed generative AI tools for video creation.

Eric Liaw, a London-based partner at VC firm IVP, told CNBC that companies at the application layer of AI haven’t garnered as much investor hype as firms in the infrastructure layer.

“The amount of money that the application layer companies need to raise isn’t as large — and therefore the valuations aren’t necessarily as eye-popping as companies like Nvidia,” Liaw told CNBC last month.

Riparbelli said that money raised from the latest financing round would be used to invest in “more of the same,” furthering product development and investing more into security and compliance.

Last year, Synthesia made a series of updates to its platform, including the ability to produce AI avatars using a laptop webcam or phone, full-body avatars with arms and hands, and a screen recording tool in which an AI avatar guides users through what they’re viewing.

On the AI safety front, in October Synthesia conducted a public red team test for risks around online harms, which demonstrated how the firm’s compliance controls counter attempts to create non-consensual deepfakes of people or use its avatars to encourage suicide, adult content or gambling.

The National Institute of Standards and Technology test was led by Rumman Chowdhury, a renowned data scientist who was formerly head of AI ethics at Twitter — before it became known as X under Elon Musk.

Riparbelli said that Synthesia is seeing increased interest from large enterprise customers, particularly in the U.S., thanks to its focus on security and compliance.

More than half of Synthesia’s annual revenue now comes from customers in the U.S., while Europe accounts for almost half.

Synthesia has also been ramping up hiring. The company recently tapped former Amazon executive Peter Hill as its chief technology officer. The company now employs over 400 people globally.

Synthesia’s announcement follows the unveiling of Prime Minister Keir Starmer’s 50-point plan to make the U.K. a global leader in AI.

U.K. Technology Minister Peter Kyle said the investment “showcases the confidence investors have in British tech” and “highlights the global leadership of U.K.-based companies in pioneering generative AI innovations.”

SEC sues Elon Musk, alleging failure to properly disclose Twitter ownership

The SEC filed a lawsuit against Elon Musk on Tuesday, alleging the billionaire committed securities fraud in 2022 by failing to disclose his ownership in Twitter and buying shares at “artificially low prices.”

Musk, who is also CEO of Tesla and SpaceX, purchased Twitter for $44 billion, later changing the name of the social network to X. Prior to the acquisition, he’d built up a position of more than 5% in the company, a threshold that required him to disclose his holding to the public.

According to the SEC complaint, filed in U.S. District Court in Washington, D.C., Musk withheld that material information, “allowing him to underpay by at least $150 million for shares he purchased after his financial beneficial ownership report was due.”

The SEC had been investigating whether Musk, or anyone else working with him, committed securities fraud in 2022 as the Tesla CEO sold shares in his car company and shored up his stake in Twitter ahead of his leveraged buyout. Musk said in a post on X last month that the SEC issued a “settlement demand,” pressuring him to agree to a deal including a fine within 48 hours or “face charges on numerous counts” regarding the purchase of shares.

Musk’s lawyer, Alex Spiro, said in an emailed statement that the action is an admission by the SEC that “they cannot bring an actual case.” He added that Musk “has done nothing wrong” and called the suit a “sham” and the result of a “multi-year campaign of harassment,” culminating in a “single-count ticky tak complaint.”

Musk is just a week away from having a potentially influential role in government, as President-elect Donald Trump’s second term begins on Jan. 20. Musk, who was a major financial backer of Trump in the latter stages of the campaign, is poised to lead an advisory group that will focus in part on reducing regulations, including those that affect Musk’s various companies.

In July, Trump vowed to fire SEC chairman Gary Gensler. After Trump’s election victory, Gensler announced that he would be resigning from his post instead.

In a separate civil lawsuit concerning the Twitter deal, the Oklahoma Firefighters Pension and Retirement System sued Musk, accusing him of deliberately concealing his growing investment in the social network and his intent to buy the company. The pension fund’s attorneys argued that Musk, by failing to clearly disclose his investments, had influenced other shareholders’ decisions and put them at a disadvantage.

The SEC said that Musk crossed the 5% ownership threshold in March 2022 and would have been required to disclose his holdings by March 24.

“On April 4, 2022, eleven days after a report was due, Musk finally publicly disclosed his beneficial ownership in a report with the SEC, disclosing that he had acquired over nine percent of Twitter’s outstanding stock,” the complaint says. “That day, Twitter’s stock price increased more than 27% over its previous day’s closing price.”

The SEC alleges that Musk spent over $500 million purchasing more Twitter shares during the time between the required disclosure and the day of his actual filing. That enabled him to buy stock from the “unsuspecting public at artificially low prices,” the complaint says. He “underpaid” Twitter shareholders by over $150 million during that period, according to the SEC.
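
Those two figures roughly cohere. As a back-of-envelope check (not the SEC's methodology, which the complaint does not fully spell out), applying the 27% disclosure-day price jump to the roughly $500 million in pre-disclosure purchases lands in the same range:

```python
# Back-of-envelope check on the complaint's figures, assuming all of the
# roughly $500 million in pre-disclosure purchases would have cost about
# 27% more at post-disclosure prices. The SEC's "at least $150 million"
# comes from the complaint's own accounting, not from this arithmetic.
spent_before_disclosure = 500_000_000  # dollars, per the complaint
disclosure_day_jump = 0.27             # Twitter's price move on April 4, 2022

implied_underpayment = spent_before_disclosure * disclosure_day_jump
print(f"Implied underpayment: ~${implied_underpayment:,.0f}")
# ~$135,000,000, the same order of magnitude as the SEC's estimate
```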

In the complaint, the SEC is seeking a jury trial and asks that Musk be forced to “pay disgorgement of his unjust enrichment” as well as a civil penalty.

This story is developing.
