A woman walks past tents for the homeless lining a street in Los Angeles, Calif. on Feb. 1, 2021.

FREDERIC J. BROWN | AFP | Getty Images

In December of last year, single mom Courtney Peterson was laid off from her job working for a now-shuttered inpatient transitional living program. Aside from the flexibility it allowed her to sometimes bring her seven-year-old son to work, it paid enough to cover rent in a studio apartment in the Van Nuys neighborhood in Los Angeles, where they had lived for a year and a half. 

Peterson said she began to research potential avenues for help, immediately concerned about making January’s rent. When her son was an infant, they lived in a travel trailer, she said, a situation she did not want to return to.

“I started to reach out to local churches or places that said they offered rent assistance,” Peterson told CNBC. “But a lot of them wanted me to have active eviction notices in order to give me assistance. I felt like I was running out of options. I’d reached out to pretty much everyone I could possibly think of with no luck.”

Instead of an eviction notice, Peterson received a letter from the Homelessness Prevention Unit within the Los Angeles County Department of Health Services, offering a lifeline. The pilot program uses predictive artificial intelligence to identify individuals and families at risk of becoming homeless, offering aid to help them stabilize and remain housed.

In 2023, California had more than 181,000 homeless individuals, up more than 30 percent since 2007, according to data from the U.S. Department of Housing and Urban Development. A report from the Auditor of the State of California found the state spent $24 billion on homelessness from 2018 through 2023.

Launched in 2021, the technology has helped the department serve nearly 800 individuals and families at risk of becoming homeless, with 86 percent of participants retaining permanent housing when they leave the program, according to Dana Vanderford, associate director of homelessness prevention at the county’s Department of Health Services. 

Individuals and families have access to between $4,000 and $8,000, she said, with the majority of the funding for the program coming from the American Rescue Plan Act. Tracking down individuals to help and convincing them that the offer is real and not a scam can be a challenge, but once contact is established, aid is quickly put into motion.

“We often meet our clients within days of a loss of housing, or days after they’ve had a medical emergency. The timing with which we meet people feels critical,” Vanderford said. “Our ability to appear out of nowhere, cold-call a person, provide them with resources and prevent that imminent loss of housing for 86 percent of the people that we’ve worked with feels remarkable.”

Peterson said she and her son received some $8,000 to cover rent, utilities and basic needs, allowing her to stay put in her apartment while she looks for a new job. The program works with clients for four months and then follows up with them at the six-month mark and the 12-month mark, as well as 18 months after discharge. Case workers like Amber Lung, who helped Peterson, say they can see how important preventative work is firsthand.

“Once folks do lose that housing, it feels like there’s so many more hurdles to get back to [being] housed, and so if we can fill in just a little bit of a gap there might be to help them retain that housing, I think it’s much easier to stabilize things than if folks end up in a shelter or on the streets to get them back into that position,” Lung said.


Predicting Risk

The AI model was developed by the California Policy Lab at UCLA over the course of several years, using data provided by Los Angeles County’s Chief Information Office. The CIO integrated data from seven different county departments, de-identified for privacy, including emergency room visits, behavioral health care, and large public benefits programs from food stamps to income support and homeless services, according to Janey Rountree, executive director of the California Policy Lab. The program also pulled data from the criminal justice system.

Those data, linked together over many years, were used to make predictions about who would go on to experience homelessness. The model was developed on a period for which the policy lab already knew the outcomes, so it could test the model’s accuracy.

Once the model identified patterns in who experienced homelessness, the lab used it to attempt to make predictions about the future, creating an anonymized list of individuals ranked from highest risk to lowest. The lab provided the list to the county so it could reach out to people who may be at risk of losing housing before it happened.
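The pipeline described above, scoring linked administrative records and ranking them from highest risk to lowest, can be sketched in a few lines. This is a hypothetical illustration only: the feature names, weights and scoring function are invented for the example and are not the California Policy Lab’s actual model.

```python
# Hypothetical sketch of a risk-ranking step: score de-identified records
# and sort them from highest predicted risk to lowest. All feature names
# and weights below are invented for illustration.

def risk_score(record, weights):
    """Weighted sum over features linked from several county systems."""
    return sum(weights[k] * record.get(k, 0) for k in weights)

# Toy de-identified records (counts drawn from linked county data).
records = {
    "id_001": {"er_visits": 4, "benefits_programs": 2, "jail_bookings": 1},
    "id_002": {"er_visits": 0, "benefits_programs": 1, "jail_bookings": 0},
    "id_003": {"er_visits": 7, "benefits_programs": 3, "jail_bookings": 2},
}

# Illustrative weights a trained model might have learned.
weights = {"er_visits": 0.5, "benefits_programs": 0.3, "jail_bookings": 0.8}

# Anonymized list ranked from highest risk to lowest.
ranked = sorted(records, key=lambda i: risk_score(records[i], weights), reverse=True)
print(ranked)  # ['id_003', 'id_001', 'id_002']
```

In practice the county receives only such a ranked, anonymized list and then re-links it internally to reach out to the people at the top.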

However, past research has found that anonymized data can be traced back to individuals based on demographic information. A sweeping study on data privacy, based on 1990 U.S. Census data, found that 87% of Americans could be identified using ZIP code, birth date and gender.
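The mechanism behind that finding is a simple join on quasi-identifiers. The sketch below, which uses entirely invented data and field names, shows how an “anonymized” record can be matched to a named public record when both share a ZIP code, birth date and gender:

```python
# Illustrative only: re-identifying "anonymized" records by joining
# quasi-identifiers (ZIP, birth date, gender) against a named public
# dataset. All names and values here are fabricated for the example.

anonymized = [
    {"zip": "90012", "dob": "1985-03-02", "gender": "F", "diagnosis": "asthma"},
    {"zip": "91405", "dob": "1990-11-17", "gender": "M", "diagnosis": "diabetes"},
]

voter_roll = [  # a public record carrying names plus the same quasi-identifiers
    {"name": "Jane Doe", "zip": "90012", "dob": "1985-03-02", "gender": "F"},
    {"name": "John Roe", "zip": "91405", "dob": "1990-11-17", "gender": "M"},
]

def reidentify(anon_rows, named_rows):
    """Join the two datasets on (zip, dob, gender) to recover identities."""
    key = lambda r: (r["zip"], r["dob"], r["gender"])
    names = {key(r): r["name"] for r in named_rows}
    return {names[key(r)]: r["diagnosis"] for r in anon_rows if key(r) in names}

print(reidentify(anonymized, voter_roll))
# {'Jane Doe': 'asthma', 'John Roe': 'diabetes'}
```

When the three fields uniquely pin down a person, as the cited study found they do for most Americans, stripping names alone does not anonymize a dataset.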

“We have a deep, multi-decade long housing shortage in California, and the cost of housing is going up, increasingly, and that is the cause of our people experiencing homelessness,” Rountree said. “The biggest misperception is that homelessness is caused by individual risk factors, when in fact it’s very clear that the root cause of this is a structural economic issue.”

The Policy Lab provided the software to the county for free, Rountree said, and does not plan to monetize it. Using AI in close partnership with people who have relevant subject matter expertise from teachers to social workers can help to promote positive social outcomes, she said. 

“I just want to emphasize how important it is for every community experiencing homelessness to test and innovate around prevention,” she said. “It’s a relatively new strategy in the lifespan of homeless services. We need more evidence. We need to do more experiments around how to find people at risk. I think this is just one way to do that.”

The National Alliance to End Homelessness found in 2017 a chronically homeless person costs the taxpayer an average of $35,578 per year, and those costs are reduced by an average of nearly half when they are placed in supportive housing.

Los Angeles County has had initial conversations with Santa Clara County about the program, and San Diego County is also exploring a similar approach, Vanderford said.

Government Use of Artificial Intelligence

AI in the hands of government agencies has faced scrutiny due to potential outcomes. Police reliance on AI technology has led to wrongful arrests, and in California, voters rejected a plan to repeal the state’s bail system in 2020 and replace it with an algorithm to determine individual risk, over concerns it would increase bias in the justice system.

Broadly speaking, Margaret Mitchell, chief ethics scientist at AI startup Hugging Face, said ethics around the government use of AI hinge on context of use and safety of identifiable information, even if anonymized. Mitchell also points to how important it is to receive informed consent from people seeking help from government programs.

 “Are the people aware of all the signals that are being collected and the risk of it being associated to them and then the dual use concerns for malicious use against them?” Mitchell said. “There’s also the issue of how long this data is being kept and who might eventually see it.”

While the technology aims to provide aid to those in need before their housing is lost in Los Angeles County, which Mitchell said is a positive thing to do from a “virtue ethics” perspective, there are broader questions from a utilitarian viewpoint.

 “Those would be concerns like, ‘What is the cost to the taxpayer and how likely is this system to actually avoid houselessness?'” she said.

As for Peterson, she’s in the process of looking for work, hoping for a remote position that will allow her flexibility. Down the road, she’s hoping to obtain her licensed vocational nursing certification and one day buy a home where her son has his own room.

“It has meant a lot just because you know my son hasn’t always had that stability. I haven’t always had that stability,” she said of the aid from the program. “To be able to call this place home and know that I’m not going to have to move out tomorrow, my son’s not going to have to find new friends right away… It’s meant a lot to both me and my son.”

Microsoft launches consumption-based 365 Copilot Chat option for corporate users

Microsoft Chairman and CEO Satya Nadella speaks during the Microsoft May 20 Briefing event at Microsoft in Redmond, Washington, on May 20, 2024. Nadella unveiled a new category of PC on Monday that features generative artificial intelligence tools built directly into Windows, the company’s world leading operating system.

Jason Redmond | AFP | Getty Images

Microsoft on Wednesday announced a tier of its Copilot assistant for corporate users with a consumption-based pricing model. The new Microsoft 365 Copilot Chat option represents an alternative to the Microsoft 365 Copilot, which organizations have been able to pay for based on the number of employees with access to it.

The introduction shows Microsoft’s determination to popularize generative artificial intelligence software in the workplace. Several companies have adopted the Microsoft 365 Copilot since it became available for $30 per person per month in November 2023, but one group of analysts recently characterized the product push as “slow/underwhelming.”

Copilot Chat can be an on-ramp to Microsoft 365 Copilot, with a lower barrier to entry, Jared Spataro, Microsoft’s chief marketing officer for AI at work, said in a CNBC interview this week. Both offerings rely on artificial intelligence models from Microsoft-backed OpenAI.

Copilot Chat can fetch information from the web and summarize text in uploaded documents, and people using it can create agents that perform tasks in the background. It can enrich answers with information from customers’ files and third-party sources.

Unlike Microsoft 365 Copilot, Copilot Chat can’t be found in Office applications such as Word and Excel. People can reach Copilot Chat starting today in the Microsoft 365 Copilot app for Windows, Android and iOS. The app was formerly known as Microsoft 365 (Office). It’s also available on the web at m365copilot.com, a spokesperson said.

Some management teams have resisted paying Microsoft to give the 365 Copilot to thousands of employees because they weren’t sure how helpful it would be at the $30 monthly price. Costs will vary for the Copilot Chat depending on what employees do with it, but at least organizations won’t end up paying for nonuse.

“As one customer said to me, this model lets the business value prove itself,” Spataro said.

Microsoft charges for Copilot Chat based on the number of “messages” a client uses. Each message costs a penny, according to a blog post. Responses that draw on the client’s proprietary files cost 30 messages each. Every action that an agent takes on behalf of employees costs 25 messages.

“We’re talking a cent, 2 cents, 30 cents, and that is a very easy way for people to get started,” Spataro said.
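The metered pricing translates into simple arithmetic. The sketch below, with an invented function name and invented usage numbers, shows how an organization might estimate a bill under the rates quoted above; it is not a Microsoft API.

```python
# Back-of-the-envelope cost model for Copilot Chat's metered pricing,
# using the rates reported above: 1 cent per message, 30 messages per
# response grounded in proprietary files, 25 messages per agent action.

CENTS_PER_MESSAGE = 1
MESSAGES_PER_GROUNDED_RESPONSE = 30
MESSAGES_PER_AGENT_ACTION = 25

def copilot_chat_cost_usd(plain_messages, grounded_responses, agent_actions):
    """Total estimated cost in U.S. dollars for a given mix of usage."""
    messages = (plain_messages
                + grounded_responses * MESSAGES_PER_GROUNDED_RESPONSE
                + agent_actions * MESSAGES_PER_AGENT_ACTION)
    return messages * CENTS_PER_MESSAGE / 100

# 1,000 plain messages, 50 grounded responses, 20 agent actions:
print(copilot_chat_cost_usd(1000, 50, 20))  # 3,000 billable messages -> 30.0 dollars
```

Under this model a light user costs pennies while heavy agent use adds up quickly, which is the trade-off against the flat $30-per-seat Microsoft 365 Copilot license.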

Salesforce charges $2 per conversation for its Agentforce AI chat service, where employees can set up automated sales and customer service processes.

The number of people using Microsoft 365 Copilot every day more than doubled quarter over quarter, CEO Satya Nadella said in October, although he did not disclose how many were using it. But sign-ups have been mounting. UBS said in October that it had 50,000 Microsoft 365 Copilot licenses, and in November, Accenture committed to having 200,000 users of the tool.


These Chinese apps have surged in popularity in the U.S. A TikTok ban could ensnare them

Lemon8, a photo-sharing app by ByteDance, and RedNote, a Shanghai-based content-sharing platform, have seen a surge in popularity in the U.S. as “TikTok refugees” migrate to alternative platforms ahead of a potential ban.

Now a law that could see TikTok shut down in the U.S. threatens to ensnare these Chinese social media apps, and others gaining traction as TikTok-alternatives, legal experts say. 

As of Wednesday, RedNote, known as Xiaohongshu in China, was the top free app on the U.S. iOS store, with Lemon8 taking the second spot.

The U.S. Supreme Court is set to rule on the constitutionality of the Protecting Americans from Foreign Adversary Controlled Applications Act, or PAFACA, which would lead to the TikTok app being banned in the U.S. if its Beijing-based owner, ByteDance, doesn’t divest it by Jan. 19.

While the legislation explicitly names TikTok and ByteDance, experts say its scope is broad and could open the door for Washington to target additional Chinese apps. 

“Chinese social media apps, including Lemon8 and RedNote, could also end up being banned under this law,” Tobin Marcus, head of U.S. policy and politics at New York-based research firm Wolfe Research, told CNBC. 

If the TikTok ban is upheld, it is unlikely the law would allow potential replacements to originate from China without some form of divestiture, experts told CNBC.

PAFACA automatically applies to Lemon8 as it’s a subsidiary of ByteDance, while RedNote could fall under the law if its monthly average user base in the U.S. continues to grow, said Marcus. 

The legislation prohibits distributing, maintaining, or providing internet hosting services to any “foreign adversary controlled application.” 

These applications include those connected to ByteDance or TikTok or a social media company that is controlled by a “foreign adversary” and has been determined to present a significant threat to national security.

The wording of the legislation is “quite expansive” and would give incoming president Donald Trump room to decide which entities constitute a significant threat to national security, said Carl Tobias, Williams Chair in Law at the University of Richmond. 

Xiaomeng Lu, director of geo-technology at political risk consultancy Eurasia Group, told CNBC that the law will likely prevail, even if its implementation and enforcement are delayed. Regardless, she expects Chinese apps in the U.S. will continue to be the subject of increased regulatory action moving forward.

“The TikTok case has set a new precedent for Chinese apps to get targeted and potentially shut down,” Lu said.

She added that other Chinese apps that could face increased scrutiny this year include the popular Chinese e-commerce platforms Temu and Shein. U.S. officials have accused the apps of posing data risks, allegations similar to those levied against TikTok.

The fate of TikTok rests with the Supreme Court after the platform and its parent company filed a suit against the U.S. government, arguing that PAFACA violates constitutional protections of free speech.

TikTok’s argument is that the law is unconstitutional as applied to them specifically, not that it is unconstitutional per se, said Cornell Law Professor Gautam Hans. “So, regardless of whether TikTok wins or loses, the law could still potentially be applied to other companies,” he said. 

The law’s defined purview is broad enough that it could be applied to a variety of Chinese apps deemed to be a national security threat, beyond traditional social media apps in the mold of TikTok, Hans said. 

Trump, meanwhile, has urged the U.S. Supreme Court to hold off on implementing PAFACA so he can pursue a “political resolution” after taking office. Democratic lawmakers have also urged Congress and President Joe Biden to extend the Jan. 19 deadline.

Nvidia-backed AI video platform Synthesia doubles valuation to $2.1 billion

Synthesia is a platform that lets users create AI-generated clips with human avatars that can speak in multiple languages.

Synthesia

LONDON — Synthesia, a video platform that uses artificial intelligence to generate clips featuring multilingual human avatars, has raised $180 million in an investment round valuing the startup at $2.1 billion.

That’s more than double the $1 billion Synthesia was worth in its last financing in 2023.

The London-based startup said Wednesday that the funding round was led by venture firm NEA with participation from Atlassian Ventures, World Innovation Lab and PSP Growth.

NEA counts Uber and TikTok parent company ByteDance among its portfolio companies. Synthesia is also backed by chip giant Nvidia.

Victor Riparbelli, CEO of Synthesia, told CNBC that investors appraised the business differently from other companies in the space due to its focus on “utility.”

“Of course, the hype cycle is beneficial to us,” Riparbelli said in an interview. “For us, what’s important is building an actually good business.”

Synthesia isn’t “dependent” on venture capital — as opposed to companies like OpenAI, Anthropic and Mistral, Riparbelli added.

These startups have raised billions of dollars at eye-watering valuations while burning through sizable amounts of money to train and develop their foundational AI models.

Read more CNBC reporting on AI

Synthesia’s not the only startup shaking up the world of video production with AI. Other startups offer solutions for producing and editing video content with AI, like Veed.io and Runway.

Meanwhile, the likes of OpenAI and Adobe have also developed generative AI tools for video creation.

Eric Liaw, a London-based partner at VC firm IVP, told CNBC that companies at the application layer of AI haven’t garnered as much investor hype as firms in the infrastructure layer.

“The amount of money that the application layer companies need to raise isn’t as large — and therefore the valuations aren’t necessarily as eye-popping as companies like Nvidia,” Liaw told CNBC last month.

Riparbelli said that money raised from the latest financing round would be used to invest in “more of the same,” furthering product development and investing more into security and compliance.

Last year, Synthesia made a series of updates to its platform, including the ability to produce AI avatars using a laptop webcam or phone, full-body avatars with arms and hands and a screen recording tool that has an AI avatar guide users through what they’re viewing.

On the AI safety front, in October Synthesia conducted a public red team test for risks around online harms, which demonstrated how the firm’s compliance controls counter attempts to create non-consensual deepfakes of people or use its avatars to encourage suicide, adult content or gambling.

The National Institute of Standards and Technology test was led by Rumman Chowdhury, a renowned data scientist who was formerly head of AI ethics at Twitter — before it became known as X under Elon Musk.

Riparbelli said that Synthesia is seeing increased interest from large enterprise customers, particularly in the U.S., thanks to its focus on security and compliance.

More than half of Synthesia’s annual revenue now comes from customers in the U.S., while Europe accounts for almost half.

Synthesia has also been ramping up hiring. The company recently tapped former Amazon executive Peter Hill as its chief technology officer. The company now employs over 400 people globally.

Synthesia’s announcement follows the unveiling of Prime Minister Keir Starmer’s 50-point plan to make the U.K. a global leader in AI.

U.K. Technology Minister Peter Kyle said the investment “showcases the confidence investors have in British tech” and “highlights the global leadership of U.K.-based companies in pioneering generative AI innovations.”
