As the person in charge of Airbnb’s worldwide ban on parties, Naba Banerjee has spent more than three years figuring out how to battle party “collusion” by users, flag “repeat party houses” and, most of all, design an anti-party AI system with enough training data to halt high-risk reservations before the offender even gets to the checkout page.
It’s been a bit like a game of whack-a-mole: Whenever Banerjee’s algorithms flag some concerns, new ones pop up.
Airbnb defines a party as a gathering that occurs at an Airbnb listing and “causes significant disruption to neighbors and the surrounding community,” according to a company rep. To determine violations, the company considers whether the gathering is an open-invite one, and whether it involves excessive noise, trash, visitors, parking issues for neighbors, and other factors.
Banerjee joined the company’s trust and safety team in May 2020 and now runs that group. In her short time at the company, she’s overseen a ban on high-risk reservations by users under the age of 25, a pilot program for anti-party AI in Australia, heightened defenses on holiday weekends, a host insurance policy worth millions of dollars and, this summer, a global rollout of Airbnb’s reservation screening system.
Some measures have worked better than others, but the company says party reports dropped 55% between August 2020 and August 2022 — and since the worldwide launch of Banerjee’s system in May, more than 320,000 guests have been blocked or redirected from booking attempts on Airbnb.
Overall, the company’s business is getting stronger even as the post-pandemic travel boom starts to fade. Last month, the company reported earnings that beat analysts’ expectations on earnings per share and revenue, with the latter growing 18% year over year, despite a lower-than-expected number of nights and experiences booked via the platform.
Turning parental party radar into an algorithm
Airbnb says the pandemic and hosts’ fears of property damage are the main drivers behind its anti-party push, but there have been darker incidents as well.
A Halloween party at an Airbnb in 2019 left five people dead. This year, between the Memorial Day and Labor Day weekends, at least five people were killed at parties hosted at Airbnbs. In June, the company was sued by a family who lost their 18-year-old son in a shooting at a 2021 Airbnb party.
When Banerjee first joined Airbnb’s trust team in summer 2020, she recalls people around her asking, “How do you solve this problem?” The stream of questions, from people above and below her on the corporate ladder, contributed to her anxiety. Airbnb’s party problem was complex, and in some ways, she didn’t know where to start.
As a mother of five, Banerjee knows how to sniff out a secretive shindig.
Last summer, Banerjee’s 17-year-old daughter had a friend who wanted to throw an 18th birthday party – and the friend was thinking about booking an Airbnb to do it. Banerjee recalls her daughter telling her about the plan and asking whether she should warn the friend not to book an Airbnb because of the AI safeguards. The friend ended up throwing the party at her own home.
“Being a mother of teenagers and seeing teenage friends of my kids, your antenna is especially sharp and you have a radar for, ‘Oh my God, okay, this is a party about to happen,’” Banerjee said. “Between our data scientists and our machine learning engineers and us, we started looking at these signals.”
For Banerjee, it was about translating that antenna into a usable algorithm.
In an April 2020 meeting with Nate Blecharczyk, the company’s co-founder and chief strategy officer, Banerjee recalls strategizing about ways to fix Airbnb’s party problem on three different time scales: “right now,” within a year, and in the general future.
For the “right now” scale, they talked about looking at platform data, studying the patterns and signals for current party reports, and seeing how those puzzle pieces align.
The first step, in July 2020, was rolling out a ban on high-risk reservations by users under the age of 25, especially those who either didn’t have much history on the platform or who didn’t have good reviews from hosts. Although Airbnb says that ban blocked or redirected “thousands” of guests globally, Banerjee still saw users trying to get around it by having an older friend or relative book the reservation for them. Two months later, Airbnb announced a “global party ban,” but that was mostly lip service – at least until the company had the technology to back it up.
Around the same time, Banerjee sent out a series of invitations. Rather than to a party, they were invites to attend party risk reduction workshops, sent to Airbnb designers, data scientists, machine learning engineers and members of the operations and communications teams. In Zoom meetings, they looked at results from the booking ban for guests under age 25 and started putting further plans in motion: Banerjee’s team created a 24/7 safety line for hosts, rolled out a neighborhood support line, and decided to staff up the customer support call center.
One of the biggest takeaways, though, was to remove the option for hosts to list their home as available for gatherings of more than 16 people.
Now that they had a significant amount of data on how potential partiers might act, Banerjee had a new goal: Build the AI equivalent of a neighbor checking on the house when a high-schooler’s parents leave them home alone for the weekend.
Around January 2021, Banerjee recalled hearing from Airbnb’s Australia offices that disruptive parties at Airbnbs were on the rise, just as they were in North America, as travel had come to a relative standstill and Covid was in full swing. Banerjee considered rolling out the under-25 ban in Australia, but after chatting with Blecharczyk, she decided to experiment with a party-banning machine learning model instead.
But Banerjee was nervous. Soon after, she phoned her father in Kolkata, India – it was between 10 p.m. and 11 p.m. for her, which was mid-morning for him. Banerjee is the first female engineer in her family, and her father is one of her biggest supporters, she said, and typically the person she calls during the most difficult moments of her life.
Banerjee said, “I remember talking to him saying, ‘I’m just very scared – I feel like I’m on the verge of doing one of the most important things of my career, but I still don’t know if we are going to succeed, like we have the pandemic going on, the business is hurting… We have something that we think is going to be great, but we don’t know yet. I’m just on this verge of uncertainty, and it just makes me really nervous.'”
Banerjee recalled her father telling her that she had faced moments like this before and that she’d succeed again. He’d be more worried, he told her, if she were overconfident.
In October 2021, Banerjee’s team rolled out the pilot program for their reservation screening AI in Australia. The company saw a 35% drop in parties in regions of the country that had the program compared with those that did not. The team spent months analyzing the results and upgraded the system with more data, including safety and property damage incident reports and records of user collusion.
How the AI system works to stop parties
Imagine you’re a 21-year-old planning a Halloween party in your hometown. Your plan: Book an Airbnb house for one night, send out the “BYOB” texts and try to avoid posting cliched Instagram captions.
There’s just one problem: Airbnb’s AI system is working against you from the second you sign on.
The party-banning algorithm looks at hundreds of factors: the reservation’s closeness to the user’s birthday, the user’s age, length of stay, the listing’s proximity to where the user is based, how far in advance the reservation is being made, weekend vs. weekday, the type of listing and whether the listing is located in a heavily crowded location rather than a rural one.
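Airbnb hasn’t published its feature set, but the factors described above translate naturally into a feature vector. Here is a minimal sketch of what computing those signals might look like, assuming hypothetical user and listing objects – none of these names come from Airbnb’s actual code:

```python
import math
from datetime import date

def haversine_km(a, b):
    """Great-circle distance in km between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371 * math.asin(math.sqrt(h))

def reservation_features(user, listing, checkin: date, checkout: date, booked_on: date) -> dict:
    """Toy versions of the signals described above; all field names are hypothetical."""
    birthday = user.birthday.replace(year=checkin.year)  # naive: ignores year wrap-around
    return {
        "days_to_user_birthday": abs((birthday - checkin).days),
        "user_age": user.age,
        "length_of_stay_nights": (checkout - checkin).days,
        "distance_home_to_listing_km": haversine_km(user.home_location, listing.location),
        "lead_time_days": (checkin - booked_on).days,   # last-minute bookings stand out
        "is_weekend_checkin": checkin.weekday() >= 4,   # Friday or Saturday check-in
        "is_entire_home": listing.room_type == "entire_home",
        "is_urban": listing.people_per_sq_km > 1_000,   # crowded vs. rural location
    }
```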
Deep learning is a subset of machine learning that uses neural networks – that is, the systems process information in a way inspired by the human brain. The systems are certainly not functionally comparable to the human brain, but they do follow the pattern of learning by example. In the case of Airbnb, one model focuses specifically on the risk of parties, while another focuses on property damage, for instance.
“When we started looking at the data, we found that in most cases, we were noticing that these were bookings that were made extremely last-minute, potentially by a guest account that was created at the last minute, and then a booking was made for a potential party weekend such as New Year’s Eve or Halloween, and they would book an entire home for maybe one night,” Banerjee told CNBC. “And if you looked at where the guest actually lived, that was really in close proximity to where the listing was getting booked.”
After the models do their analysis, the system assigns every reservation a party risk. Depending on the risk tolerance that Airbnb has assigned for that country or area, the reservation will either be banned or greenlit. The team also introduced “heightened party defenses” for holiday weekends such as the Fourth of July, Halloween and New Year’s Eve.
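Airbnb hasn’t disclosed its actual thresholds, but the decision logic described above boils down to comparing a model score against a per-region risk tolerance that tightens on party-prone weekends. A hedged sketch, with made-up numbers and names:

```python
HOLIDAY_WEEKENDS = {"fourth_of_july", "halloween", "new_years_eve"}

# Hypothetical risk tolerances per market; Airbnb has not published real values.
RISK_TOLERANCE = {"US": 0.6, "CA": 0.6, "AU": 0.7, "default": 0.8}

def screen_reservation(party_risk: float, region: str, holiday: str | None = None) -> str:
    """Decide what to do with one reservation given a model risk score in [0, 1]."""
    threshold = RISK_TOLERANCE.get(region, RISK_TOLERANCE["default"])
    if holiday in HOLIDAY_WEEKENDS:
        threshold *= 0.75  # "heightened party defenses": less risk tolerated on holidays
    if party_risk >= threshold:
        return "block"          # guest can still book a hotel or private room
    if party_risk >= 0.8 * threshold:
        return "human_review"   # borderline cases go to trained agents
    return "allow"
```

In this sketch, a booking that scores 0.5 would be allowed outright in Australia (review only starts at 0.56 there) but blocked in the U.S. on Halloween weekend, where the effective threshold drops to 0.45.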
In some cases, like when the right decision isn’t quite clear, reservation requests are flagged for human review, and those human agents can look at the message thread to gauge party risk. But the company is also “starting to invest in a huge way” in large language models for content understanding, to help understand party incidents and fraud, Banerjee said.
“The LLM trend is something that if you are not on that train, it’s like missing out on the internet,” Banerjee told CNBC.
Banerjee said her team has seen a higher risk of parties in the U.S. and Canada, and the next-riskiest would probably be Australia and certain European countries. In Asia, reservations seem to be considerably less risky.
The algorithms are trained partly on tickets labeled as parties or property damage, as well as hypothetical incidents and past ones that occurred before the system went live to see if it would have flagged them. They’re also trained on what “good” guest behavior looks like, such as someone who checks in and out on time, leaves a review on time, and has no incidents on the platform.
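In classifier terms, each historical stay becomes a labeled example. A minimal sketch of assembling those training pairs, reusing the hypothetical reservation_features helper from the earlier sketch (the stay fields are likewise invented for illustration):

```python
def build_training_set(stays):
    """Turn historical stays into (features, label) pairs for a risk model.
    Positives come from support tickets labeled as parties or property damage;
    negatives from clean stays: on-time check-in/out, timely review, no incidents."""
    examples = []
    for stay in stays:
        label = 1 if stay.ticket_type in ("party", "property_damage") else 0
        features = reservation_features(stay.user, stay.listing,
                                        stay.checkin, stay.checkout, stay.booked_on)
        examples.append((features, label))
    return examples
```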
But like many forms of AI training data, the idea of “good” guests is ripe for bias. Airbnb has introduced anti-discrimination experiments in the past, such as hiding guests’ photos, preventing hosts from viewing a guest’s full name before the booking is confirmed, and introducing a Smart Pricing tool to help address earnings disparities, although the latter unwittingly ended up widening the gap.
Airbnb said its reservation-screening AI has been evaluated by the company’s anti-discrimination team and that the company regularly tests the system on measures such as precision and recall.
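Precision and recall are the standard yardsticks for a screening system like this: precision asks what share of blocked bookings were genuinely party risks, and recall asks what share of actual party bookings were caught. A quick illustration of the arithmetic:

```python
def precision_recall(flagged, actual):
    """flagged and actual are parallel lists of 0/1 values per reservation."""
    tp = sum(1 for f, a in zip(flagged, actual) if f and a)      # blocked, real risk
    fp = sum(1 for f, a in zip(flagged, actual) if f and not a)  # blocked, harmless
    fn = sum(1 for f, a in zip(flagged, actual) if not f and a)  # missed party
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Example: 3 of 4 blocked bookings were real risks (precision 0.75),
# and the model caught 3 of 5 actual party bookings (recall 0.6).
print(precision_recall([1, 1, 1, 1, 0, 0, 0], [1, 1, 1, 0, 1, 1, 0]))
```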
Going global
Almost exactly one year ago, Banerjee was at a plant nursery with her husband and mother-in-law when she received a call from Airbnb CEO Brian Chesky.
She thought he’d be calling about the results of the Australia pilot program, but instead he asked her about trust in the platform. For all her talk about machine learning models and features, she recalled him asking, would she feel safe sending one of her college-bound kids to stay at an Airbnb – and if not, what would make her feel safe?
That phone call ultimately resulted in the decision to expand Banerjee’s team’s reservation screening AI worldwide the following spring.
Things kicked into high gear, with TV appearances for Banerjee – some of which she caught between pull-ups on the gym television. She asked her daughter for advice on what to wear. The next thing she knew, the team was getting ready for a live demo of the reservation screening AI with Chesky. Banerjee was nervous.
Last fall, the team sat down with Chesky after working with front-end engineers to create a fake party risk, showing someone booking an entire mansion during a holiday weekend at the last minute and seeing if the model would flag it in real time. It worked.
Chesky’s only feedback, Banerjee recalled, was to change the existing message – “Your reservation cannot be completed at this point in time because we detect a party risk” – to be more customer-friendly, potentially offering an option to appeal or book a different weekend. They followed his advice. Now, the message reads, “The details of this reservation indicate it could lead to an unauthorized party in the home. You still have the option to book a hotel or private room, or you can contact us with any questions.”
Over the next few months, Banerjee remembers a frenzy of activity but also feeling calm and confident. She went to visit her family in India in April 2023 for the first time in about a year and told her father about the excitement around the rollout, which happened in batches the following month.
This past Labor Day, Banerjee was visiting her son in Texas as the algorithm blocked or redirected 5,000 potential party bookings.
But no matter how quickly the AI models learn, Banerjee and her team will need to continue to monitor and change the systems as party-inclined users figure out ways around the barriers.
“The interesting part about the world of trust and safety is that it never stays static,” Banerjee said. “As soon as you build a defense, some of these bad actors out there who are potentially trying to buck the system and throw a party, they will get smarter and they’ll try to do something different.”
A jury in Miami has determined that Tesla should be held partly liable for a fatal 2019 Autopilot crash, and must compensate the family of the deceased and an injured survivor a portion of $329 million in damages.
Tesla’s payout is based on $129 million in compensatory damages and $200 million in punitive damages against the company.
The jury determined Tesla should be held 33% responsible for the fatal crash. That means the automaker would be responsible for about $42.5 million in compensatory damages. In cases like these, punitive damages are typically capped at three times compensatory damages.
The plaintiffs’ attorneys told CNBC on Friday that because punitive damages were only assessed against Tesla, they expect the automaker to pay the full $200 million, bringing total payments to around $242.5 million.
Tesla said it plans to appeal the decision.
Attorneys for the plaintiffs had asked the jury to award $345 million in total damages. The trial in the Southern District of Florida started on July 14.
The suit centered on who shouldered the blame for the deadly crash in Key Largo, Florida. A Tesla owner named George McGee was driving his Model S electric sedan while using the company’s Enhanced Autopilot, a partially automated driving system.
While driving, McGee dropped the mobile phone he was using and scrambled to pick it up. He said during the trial that he believed Enhanced Autopilot would brake if an obstacle was in the way. His Model S accelerated through an intersection at just over 60 miles per hour, hitting a nearby empty parked car and its owners, who were standing on the other side of their vehicle.
Naibel Benavides, who was 22, died on the scene from injuries sustained in the crash. Her body was discovered about 75 feet away from the point of impact. Her boyfriend, Dillon Angulo, survived but suffered multiple broken bones, a traumatic brain injury and psychological effects.
“Tesla designed Autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans,” Brett Schreiber, counsel for the plaintiffs, said in an e-mailed statement on Friday. “Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm’s way.”
Following the verdict, the plaintiffs’ families hugged each other and their lawyers, and Angulo was “visibly emotional” as he embraced his mother, according to NBC.
Here is Tesla’s response to CNBC:
“Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial.
Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash.
This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.”
The verdict comes as Musk, Tesla’s CEO, is trying to persuade investors that his company can pivot into a leader in autonomous vehicles, and that its self-driving systems are safe enough to operate fleets of robotaxis on public roads in the U.S.
Tesla shares dipped 1.8% on Friday and are now down 25% for the year, the biggest drop among tech’s megacap companies.
The verdict could set a precedent for Autopilot-related suits against Tesla. About a dozen active cases are underway focused on similar claims involving incidents where Autopilot or Tesla’s FSD – Full Self-Driving (Supervised) – had been in use just before a fatal or injurious crash.
The National Highway Traffic Safety Administration initiated a probe in 2021 into possible safety defects in Tesla’s Autopilot systems. During the course of that investigation, Tesla made changes, including a number of over-the-air software updates.
The agency then opened a second probe, which is ongoing, evaluating whether Tesla’s “recall remedy” to resolve issues with the behavior of its Autopilot, especially around stationary first responder vehicles, had been effective.
The NHTSA has also warned Tesla that its social media posts may mislead drivers into thinking its cars are capable of functioning as robotaxis, even though owner’s manuals say the cars require hands-on steering and a driver attentive to steering and braking at all times.
A site that tracks Tesla-involved collisions, TeslaDeaths.com, has reported at least 58 deaths resulting from incidents where Tesla drivers had Autopilot engaged just before impact.
A screen showing the price of various cryptocurrencies against the US dollar displayed at a Crypto Panda cryptocurrency store in Hong Kong, China, on Monday, Feb. 3, 2025.
The crypto market slid Friday after President Donald Trump unveiled his modified “reciprocal” tariffs on dozens of countries.
The price of bitcoin showed relative strength, hovering at the flat line while ether, XRP and Binance Coin fell 2% each. Overnight, bitcoin dropped to a low of $114,110.73.
The descent triggered a wave of long liquidations, which force traders to sell their assets at market price to settle their debts, pushing prices lower. Bitcoin saw $172 million in liquidations across centralized exchanges in the past 24 hours, according to CoinGlass, and ether saw $210 million.
Crypto-linked stocks suffered deeper losses. Coinbase led the way, down 15% following its disappointing second-quarter earnings report. Circle fell 4%, Galaxy Digital lost 2%, and ether treasury company Bitmine Immersion was down 8%. Bitcoin proxy MicroStrategy was down by 5%.
The stock moves came amid a new wave of risk-off sentiment after President Trump issued new tariffs ranging between 10% and 41%, triggering worries about increasing inflation and the Federal Reserve’s ability to cut interest rates. In periods of broad-based de-risking, crypto tends to get hit as investors pull out of the most speculative and volatile assets, though technical resilience and institutional demand for bitcoin and ether are helping support their prices.
“After running red hot in July, this is a healthy strategic cooldown. Markets aren’t reacting to a crisis, they’re responding to the lack of one,” said Ben Kurland, CEO at crypto research platform DYOR. “With no new macro catalyst on the horizon, capital is rotating out of speculative assets and into safer ground … it’s a calculated pause.”
Crypto is coming off a winning month but could soon hit the brakes amid the new macro uncertainty, and in a month usually characterized by lower trading volumes and increased volatility. Bitcoin gained 8% in July, according to Coin Metrics, while ether surged more than 49%.
Ether ETFs saw more than $5 billion in inflows in July alone (with just a single day of outflows, of $1.8 million, on July 2), bringing their total cumulative inflows to $9.64 billion to date. Bitcoin ETFs saw $114 million in outflows in the final trading session of July, bringing their monthly inflows to about $6 billion out of a cumulative $55 billion.
Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California, on May 20, 2025.
Google has purged more than 50 organizations related to diversity, equity and inclusion, or DEI, from a list of organizations that the tech company provides funding to, according to a new report.
The company has removed a total of 214 groups from its funding list while adding 101, according to a new report from tech watchdog organization The Tech Transparency Project. The watchdog group cites the most recent public list of organizations that receive the most substantial contributions from Google’s U.S. Government Affairs and Public Policy team.
The largest category of purged groups was DEI-related, with a total of 58 groups removed from Google’s funding list, TTP found. The dropped groups had mission statements that included words such as “diversity,” “equity,” “inclusion,” “race,” “activism,” and “women.” Those are also terms that Trump administration officials have reportedly told federal agencies to limit or avoid.
In response to the report, Google spokesperson José Castañeda told CNBC that the list reflects contributions made in 2024 and that it does not reflect all contributions made by other teams within the company.
“We contribute to hundreds of groups from across the political spectrum that advocate for pro-innovation policies, and those groups change from year to year based on where our contributions will have the most impact,” Castañeda said in an email.
Organizations that were removed from Google’s list include the African American Community Service Agency, which seeks to “empower all Black and historically excluded communities”; the Latino Leadership Alliance, which is dedicated to “race equity affecting the Latino community”; and Enroot, which creates out-of-school experiences for immigrant kids.
The funding purge is the latest move as Google has backtracked on some of its commitments to DEI over the last couple of years. That pullback came amid cost cutting to prioritize investments in artificial intelligence, as well as a changing political and legal landscape marked by increasing national anti-DEI policies.
Over the past decade, Silicon Valley and other industries used DEI programs to root out bias in hiring, promote fairness in the workplace and advance the careers of women and people of color — demographics that have historically been overlooked in the workplace.
However, the U.S. Supreme Court’s 2023 decision to end affirmative action at colleges led to additional backlash against DEI programs in conservative circles.
President Donald Trump signed an executive order upon taking office in January to end the government’s DEI programs and directed federal agencies to combat what the administration considers “illegal” private-sector DEI mandates, policies and programs. Shortly after, Google’s Chief People Officer Fiona Cicconi told employees that the company would end DEI-related hiring “aspirational goals” due to new federal requirements and Google’s categorization as a federal contractor.
Despite DEI becoming such a divisive term, many companies are continuing the work, just using different language or folding the efforts under less-charged terminology like “learning” or “hiring.”
Even Google CEO Sundar Pichai maintained the importance of diversity in the company’s workforce at an all-hands meeting in March.
“We’re a global company, we have users around the world, and we think the best way to serve them well is by having a workforce that represents that diversity,” Pichai said at the time.
One of the groups dropped from Google’s contributions list is the National Network to End Domestic Violence, which provides training, assistance, and public awareness campaigns on the issue of violence against women, the TTP report found. The group had been on Google’s list of funded organizations for at least nine years and continues to name the company as one of its corporate partners.
Google said it still gave $75,000 to the National Network to End Domestic Violence in 2024 but did not say why the group was removed from the public contributions list.