As the person in charge of Airbnb’s worldwide ban on parties, she’s spent more than three years figuring out how to battle party “collusion” by users, flag “repeat party houses” and, most of all, design an anti-party AI system with enough training data to halt high-risk reservations before the offender even gets to the checkout page.
It’s been a bit like a game of whack-a-mole: Whenever Banerjee’s algorithms flag some concerns, new ones pop up.
Airbnb defines a party as a gathering that occurs at an Airbnb listing and “causes significant disruption to neighbors and the surrounding community,” according to a company rep. To determine violations, the company considers whether the gathering is an open-invite one, and whether it involves excessive noise, trash, visitors, parking issues for neighbors, and other factors.
Banerjee joined the company’s trust and safety team in May 2020 and now runs that group. In her short time at the company, she’s overseen a ban on high-risk reservations by users aged 25 and under, a pilot program for anti-party AI in Australia, heightened defenses on holiday weekends, a host insurance policy worth millions of dollars, and, this summer, a global rollout of Airbnb’s reservation screening system.
Some measures have worked better than others, but the company says party reports dropped 55% between August 2020 and August 2022 — and since the worldwide launch of Banerjee’s system in May, more than 320,000 guests have been blocked or redirected from booking attempts on Airbnb.
Overall, the company’s business is getting stronger as the post-pandemic travel boom starts to fade. Last month, the company reported earnings that beat analysts’ expectations on earnings per share and revenue, with the latter growing 18% year over year, despite a lower-than-expected number of nights and experiences booked via the platform.
Turning parental party radar into an algorithm
Airbnb says the pandemic and hosts’ fears of property damage are the main drivers behind its anti-party push, but there have been darker incidents as well.
A Halloween party at an Airbnb in 2019 left five people dead. This year, between the Memorial Day and Labor Day weekends, at least five people were killed at parties hosted at Airbnbs. In June, the company was sued by a family who lost their 18-year-old son in a shooting at a 2021 Airbnb party.
When Banerjee first joined Airbnb’s trust team in summer 2020, she recalls people around her asking, “How do you solve this problem?” The stream of questions, from people above and below her on the corporate ladder, contributed to her anxiety. Airbnb’s party problem was complex, and in some ways, she didn’t know where to start.
As a mother of five, Banerjee knows how to sniff out a secretive shindig.
Last summer, Banerjee’s 17-year-old daughter had a friend who wanted to throw an 18th birthday party – and the friend was thinking about booking an Airbnb to do it. Banerjee recalls her daughter telling her about the plan and asking whether she should warn the friend off booking an Airbnb because of the AI safeguards. The friend ended up throwing the party at her own home.
“Being a mother of teenagers and seeing teenage friends of my kids, your antenna is especially sharp and you have a radar for, ‘Oh my God, okay, this is a party about to happen,’” Banerjee said. “Between our data scientists and our machine learning engineers and us, we started looking at these signals.”
For Banerjee, it was about translating that antenna into a usable algorithm.
In an April 2020 meeting with Nate Blecharczyk, the company’s co-founder and chief strategy officer, Banerjee recalls strategizing about ways to fix Airbnb’s party problem on three different time scales: “right now,” within a year, and in the general future.
For the “right now” scale, they talked about looking at platform data, studying the patterns and signals for current party reports, and seeing how those puzzle pieces align.
The first step, in July 2020, was rolling out a ban on high-risk reservations by users under the age of 25, especially those who either didn’t have much history on the platform or who didn’t have good reviews from hosts. Although Airbnb says that ban blocked or redirected “thousands” of guests globally, Banerjee still saw users trying to get around it by having an older friend or relative book the reservation for them. Two months later, Airbnb announced a “global party ban,” but that was mostly lip service – at least, until the company had the technology to back it up.
Around the same time, Banerjee sent out a series of invitations. Rather than to a party, they were invites to attend party risk reduction workshops, sent to Airbnb designers, data scientists, machine learning engineers and members of the operations and communications teams. In Zoom meetings, they looked at results from the booking ban for guests under age 25 and started putting further plans in motion: Banerjee’s team created a 24/7 safety line for hosts, rolled out a neighborhood support line, and decided to staff up the customer support call center.
One of the biggest takeaways, though, was to remove the option for hosts to list their home as available for gatherings of more than 16 people.
Now that they had a significant amount of data on how potential partiers might act, Banerjee had a new goal: Build the AI equivalent of a neighbor checking on the house when the high-schooler’s parents leave them home alone for the weekend.
Around January 2021, Banerjee recalled hearing from Airbnb’s Australia offices that disruptive parties at Airbnbs were on the rise there, just as they were in North America, with travel at a relative standstill and Covid in full swing. Banerjee considered rolling out the under-25 ban in Australia, but after chatting with Blecharczyk, she decided to experiment with a party-banning machine learning model instead.
But Banerjee was nervous. Soon after, she phoned her father in Kolkata, India – it was between 10 p.m. and 11 p.m. for her, which was mid-morning for him. Banerjee is the first female engineer in her family, and her father is one of her biggest supporters, she said, and typically the person she calls during the most difficult moments of her life.
Banerjee said, “I remember talking to him saying, ‘I’m just very scared – I feel like I’m on the verge of doing one of the most important things of my career, but I still don’t know if we are going to succeed, like we have the pandemic going on, the business is hurting… We have something that we think is going to be great, but we don’t know yet. I’m just on this verge of uncertainty, and it just makes me really nervous.'”
Banerjee recalled her father telling her that this had happened to her before and that she’d succeed again. He’d be more worried, he told her, if she were overconfident.
In October 2021, Banerjee’s team rolled out the pilot program for their reservation screening AI in Australia. The company saw a 35% drop in parties in regions of the country that had the program compared with those that did not. The team spent months analyzing the results and upgraded the system with more data, including safety and property damage incidents and records of user collusion.
How the AI system works to stop parties
Listings on Airbnb (Source: Airbnb)
Imagine you’re a 21-year-old planning a Halloween party in your hometown. Your plan: Book an Airbnb house for one night, send out the “BYOB” texts and try to avoid posting cliched Instagram captions.
There’s just one problem: Airbnb’s AI system is working against you from the second you sign on.
The party-banning algorithm looks at hundreds of factors: the reservation’s closeness to the user’s birthday, the user’s age, length of stay, the listing’s proximity to where the user is based, how far in advance the reservation is being made, weekend vs. weekday, the type of listing and whether the listing is located in a heavily crowded location rather than a rural one.
Airbnb’s screening models rely on deep learning, a subset of machine learning that uses neural networks – that is, systems that process information in a way loosely inspired by the human brain. They are certainly not functionally comparable to the human brain, but they do follow the pattern of learning by example. In Airbnb’s case, one model focuses specifically on the risk of parties, while another focuses on property damage, for instance.
“When we started looking at the data, we found that in most cases, we were noticing that these were bookings that were made extremely last-minute, potentially by a guest account that was created at the last minute, and then a booking was made for a potential party weekend such as New Year’s Eve or Halloween, and they would book an entire home for maybe one night,” Banerjee told CNBC. “And if you looked at where the guest actually lived, that was really in close proximity to where the listing was getting booked.”
After the models do their analysis, the system assigns every reservation a party risk. Depending on the risk tolerance that Airbnb has assigned for that country or area, the reservation will either be banned or greenlit. The team also introduced “heightened party defenses” for holiday weekends such as the Fourth of July, Halloween and New Year’s Eve.
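Airbnb hasn’t published the model internals, but the mechanics described here – a learned risk score for each reservation, compared against a region-specific threshold that tightens on party-prone holiday weekends – can be sketched roughly as follows. Everything in this snippet (the feature names, weights, thresholds and the scoring function itself) is a hypothetical illustration, not Airbnb’s actual code.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Reservation:
    # Hypothetical features, loosely based on the signals described above.
    guest_age: int
    account_age_days: int
    distance_km: float      # from the guest's home to the listing
    lead_time_days: int     # how far in advance the booking is made
    nights: int
    entire_home: bool
    checkin: date

# Illustrative check-in dates that would trigger "heightened party defenses."
HOLIDAY_WEEKENDS = {date(2023, 10, 28), date(2023, 12, 30), date(2023, 12, 31)}

def party_risk_score(r: Reservation) -> float:
    """Toy stand-in for the learned model: returns a party risk score in [0, 1]."""
    score = 0.0
    if r.guest_age < 25:
        score += 0.25
    if r.account_age_days < 30:          # freshly created account
        score += 0.20
    if r.distance_km < 40:               # booking close to home
        score += 0.20
    if r.lead_time_days <= 2:            # last-minute booking
        score += 0.15
    if r.nights == 1 and r.entire_home:  # entire home for a single night
        score += 0.20
    return min(score, 1.0)

def screen(r: Reservation, region_threshold: float = 0.6) -> str:
    """Block, flag for human review, or allow, per a region-specific risk tolerance."""
    threshold = region_threshold
    if r.checkin in HOLIDAY_WEEKENDS:
        threshold -= 0.15                # tighter screening on holiday weekends
    score = party_risk_score(r)
    if score >= threshold:
        return "block_or_redirect"
    if score >= threshold - 0.1:
        return "human_review"
    return "allow"
```

Under these made-up weights, the 21-year-old in the example above – booking an entire home near their hometown for one night on Halloween weekend – scores well above the lowered holiday threshold and is blocked, while a long, far-in-advance booking by an established account falls through to “allow.”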
In some cases, like when the right decision isn’t quite clear, reservation requests are flagged for human review, and those human agents can look at the message thread to gauge party risk. But the company is also “starting to invest in a huge way” in large language models for content understanding, to help understand party incidents and fraud, Banerjee said.
“The LLM trend is something that if you are not on that train, it’s like missing out on the internet,” Banerjee told CNBC.
Banerjee said her team has seen a higher risk of parties in the U.S. and Canada, and the next-riskiest would probably be Australia and certain European countries. In Asia, reservations seem to be considerably less risky.
The algorithms are trained partly on tickets labeled as parties or property damage, as well as hypothetical incidents and past ones that occurred before the system went live to see if it would have flagged them. They’re also trained on what “good” guest behavior looks like, such as someone who checks in and out on time, leaves a review on time, and has no incidents on the platform.
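In outline, that setup amounts to turning historical support tickets and incident-free stays into labeled examples, then replaying pre-launch incidents through the model to see whether it would have caught them. A minimal sketch, with invented field names and no claim to match Airbnb’s real pipeline:

```python
from typing import Callable

INCIDENT_TYPES = {"party", "property_damage"}   # hypothetical ticket labels

def label_stay(record: dict) -> int:
    """1 = stay that generated a party/damage ticket, 0 = incident-free 'good' stay."""
    return 1 if record.get("ticket_type") in INCIDENT_TYPES else 0

def build_training_set(history: list[dict]) -> list[tuple[dict, int]]:
    """Pair each historical stay's features with its label."""
    keys = ("guest_age", "lead_time_days", "distance_km", "nights")
    return [({k: rec[k] for k in keys}, label_stay(rec)) for rec in history]

def backtest(model: Callable[[dict], int], past_incidents: list[dict]) -> float:
    """Share of incidents that predate the system that the model would have flagged."""
    if not past_incidents:
        return 0.0
    return sum(model(rec) for rec in past_incidents) / len(past_incidents)
```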
But, as with much AI training data, the notion of a “good” guest is ripe for bias. Airbnb has introduced anti-discrimination experiments in the past, such as hiding guests’ photos, preventing hosts from viewing a guest’s full name before the booking is confirmed, and introducing a Smart Pricing tool to help address earnings disparities, although the latter unwittingly ended up widening the gap.
Airbnb said its reservation-screening AI has been evaluated by the company’s anti-discrimination team and that the company regularly tests the system on measures like precision and recall.
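Precision and recall are the standard checks for a screener like this: precision asks what share of the bookings it blocks were genuine party risks, while recall asks what share of the actual party bookings it caught. A quick worked example with made-up counts:

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Made-up counts: 800 true party bookings blocked, 200 legitimate bookings
# blocked by mistake, 100 party bookings that slipped through.
print(precision_recall(tp=800, fp=200, fn=100))  # (0.8, 0.888...)
```

The tension between the two is the trade-off behind the per-region “risk tolerance” mentioned above: a lower threshold catches more parties (higher recall) at the cost of blocking more legitimate guests (lower precision).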
Going global
Almost exactly one year ago, Banerjee was at a plant nursery with her husband and mother-in-law when she received a call from Airbnb CEO Brian Chesky.
She thought he’d be calling about the results of the Australia pilot program, but instead he asked her about trust in the platform. For all her talk about machine learning models and features, she recalled him asking, would she feel safe sending one of her college-bound kids to stay at an Airbnb – and if not, what would make her feel safe?
That phone call ultimately resulted in the decision to expand Banerjee’s team’s reservation screening AI worldwide the following spring.
Things kicked into high gear, with TV spots for Banerjee, some of which she spotted in between pull-ups on the gym television. She asked her daughter for advice on what to wear. The next thing she knew, the team was getting ready for a live demo of the reservation screening AI with Chesky. Banerjee was nervous.
Last fall, the team sat down with Chesky after working with front-end engineers to stage a fake party risk – someone booking an entire mansion for a holiday weekend at the last minute – to see whether the model would flag it in real time. It worked.
Chesky’s only feedback, Banerjee recalled, was to change the existing message – “Your reservation cannot be completed at this point in time because we detect a party risk” – to be more customer-friendly, potentially offering an option to appeal or book a different weekend. They followed his advice. Now, the message reads, “The details of this reservation indicate it could lead to an unauthorized party in the home. You still have the option to book a hotel or private room, or you can contact us with any questions.”
Over the next few months, Banerjee remembers a frenzy of activity but also feeling calm and confident. She went to visit her family in India in April 2023 for the first time in about a year. She told her father about the excitement around the rollout, which happened in batches the following month.
This past Labor Day, Banerjee was visiting her son in Texas as the algorithm blocked or redirected 5,000 potential party bookings.
But no matter how quickly the AI models learn, Banerjee and her team will need to continue to monitor and change the systems as party-inclined users figure out ways around the barriers.
“The interesting part about the world of trust and safety is that it never stays static,” Banerjee said. “As soon as you build a defense, some of these bad actors out there who are potentially trying to buck the system and throw a party, they will get smarter and they’ll try to do something different.”
An employee walks past a quilt displaying Etsy Inc. signage at the company’s headquarters in Brooklyn.
Victor J. Blue/Bloomberg via Getty Images
Etsy is trying to make it easier for shoppers to purchase products from local merchants and avoid the extra cost of imports as President Donald Trump’s sweeping tariffs raise concerns about soaring prices.
In a post to Etsy’s website on Thursday, CEO Josh Silverman said the company is “surfacing new ways for buyers to discover businesses in their countries” via shopping pages and by featuring local sellers on its website and app.
“While we continue to nurture and enable cross-border trade on Etsy, we understand that people are increasingly interested in shopping domestically,” Silverman said.
Etsy operates an online marketplace that connects buyers and sellers with mostly artisanal and handcrafted goods. The site, which had 5.6 million active sellers as of the end of December, competes with e-commerce juggernaut Amazon, as well as newer entrants that have ties to China like Temu, Shein and TikTok Shop.
By highlighting local sellers, Etsy could relieve some shoppers from having to pay higher prices induced by President Trump’s widespread tariffs on trade partners. Trump has imposed tariffs on most foreign countries, with China facing a rate of 145%, and other nations facing 10% rates after he instituted a 90-day pause to allow for negotiations. Trump also signed an executive order that will end the de minimis provision, a loophole for low-value shipments often used by online businesses, on May 2.
Temu and Shein have already announced they plan to raise prices late next week in response to the tariffs. Sellers on Amazon’s third-party marketplace, many of whom source their products from China, have said they’re considering raising prices.
Silverman said Etsy has provided guidance for its sellers to help them “run their businesses with as little disruption as possible” in the wake of tariffs and changes to the de minimis exemption.
Before Trump’s “Liberation Day” tariffs took effect, Silverman said on the company’s fourth-quarter earnings call in late February that he expects Etsy to benefit from the tariffs and de minimis restrictions because it “has much less dependence on products coming in from China.”
“We’re doing whatever work we can do to anticipate and prepare for come what may,” Silverman said at the time. “In general, though, I think Etsy will be more resilient than many of our competitors in these situations.”
Still, American shoppers may face higher prices on Etsy as U.S. businesses that source their products or components from China pass some of those costs on to consumers.
Etsy shares are down 17% this year, slightly more than the Nasdaq.
Google CEO Sundar Pichai testifies before the House Judiciary Committee at the Rayburn House Office Building on December 11, 2018 in Washington, DC.
Alex Wong | Getty Images
Google’s antitrust woes are continuing to mount, just as the company tries to brace for a future dominated by artificial intelligence.
On Thursday, a federal judge ruled that Google held illegal monopolies in online advertising markets due to its position between ad buyers and sellers.
The ruling, which followed a September trial in Alexandria, Virginia, represents a second major antitrust blow for Google in under a year. In August, a judge determined the company has held a monopoly in its core market of internet search, the most significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago.
Google is in a particularly precarious spot as it tries to simultaneously defend its primary business in court while fending off an onslaught of new competition due to the emergence of generative AI, most notably OpenAI’s ChatGPT, which offers users alternative ways to search for information. Revenue growth has cooled in recent years, and Google also now faces the added potential of a slowdown in ad spending due to economic concerns from President Donald Trump’s sweeping new tariffs.
Parent company Alphabet reports first-quarter results next week. Alphabet’s stock price dipped more than 1% on Thursday and is now down 20% this year.
In Thursday’s ruling, U.S. District Judge Leonie Brinkema said Google’s anticompetitive practices “substantially harmed” publishers and users on the web. The trial featured 39 live witnesses, depositions from an additional 20 witnesses and hundreds of exhibits.
Judge Brinkema ruled that Google unlawfully controls two of the three parts of the advertising technology market: the publisher ad server market and ad exchange market. Brinkema dismissed the third part of the case, determining that tools used for general display advertising can’t clearly be defined as Google’s own market. In particular, the judge cited the purchases of DoubleClick and Admeld and said the government failed to show those “acquisitions were anticompetitive.”
“We won half of this case and we will appeal the other half,” Lee-Anne Mulholland, Google’s vice president of regulatory affairs, said in an emailed statement. “We disagree with the Court’s decision regarding our publisher tools. Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective.”
Attorney General Pam Bondi said in a press release from the DOJ that the ruling represents a “landmark victory in the ongoing fight to stop Google from monopolizing the digital public square.”
Potential ad disruption
If regulators force the company to divest parts of the ad-tech business, as the Justice Department has requested, it could open up opportunities for smaller players and other competitors to fill the void and snap up valuable market share. Amazon has been growing its ad business in recent years.
Meanwhile, Google is still defending itself against claims that its search has acted as a monopoly by creating strong barriers to entry and a feedback loop that sustained its dominance. Google said in August, immediately after the search case ruling, that it would appeal, meaning the matter can play out in court for years even after the remedies are determined.
The remedies trial in the search case, which will lay out the consequences of that ruling, begins next week. The Justice Department is pushing for Google to divest its Chrome browser and to eliminate exclusive agreements, like its deal with Apple for search on iPhones. The judge is expected to rule by August.
Google CEO Sundar Pichai (L) and Apple CEO Tim Cook (R) listen as U.S. President Joe Biden speaks during a roundtable with American and Indian business leaders in the East Room of the White House on June 23, 2023 in Washington, DC.
Anna Moneymaker | Getty Images
After the ad market ruling on Thursday, Gartner’s Andrew Frank said Google’s “conflicts of interest” are apparent in how the market runs.
“The structure has been decades in the making,” Frank said, adding that “untangling that would be a significant challenge, particularly since lawyers don’t tend to be system architects.”
However, the uncertainty that comes with a potentially years-long appeals process means many publishers and advertisers will be waiting to see how things shake out before making any big decisions given how much they rely on Google’s technology.
“Google will have incentives to encourage more competition possibly by loosening certain restrictions on certain media it controls, YouTube being one of them,” Frank said. “Those kind of incentives may create opportunities for other publishers or ad tech players.”
A date for the remedies trial in the ad tech case hasn’t been set.
Damian Rollison, senior director of market insights for marketing platform Soci, said the revenue hit from the ad market case could be more dramatic than the impact from the search case.
“The company stands to lose a lot more in material terms if its ad business, long its main source of revenue, is broken up,” Rollison said in an email. “Whereas divisions like Chrome are more strategically important.”
Jason Citron, CEO of Discord, in Washington, DC, on January 31, 2024.
Andrew Caballero-Reynolds | AFP | Getty Images
The New Jersey attorney general sued Discord on Thursday, alleging that the company misled consumers about child safety features on the gaming-centric social messaging app.
The lawsuit, filed in the New Jersey Superior Court by Attorney General Matthew Platkin and the state’s division of consumer affairs, alleges that Discord violated the state’s consumer fraud laws.
Discord did so, the complaint said, by allegedly “misleading children and parents from New Jersey” about safety features, “obscuring” the risks children face on the platform and failing to enforce its minimum age requirement.
“Discord’s strategy of employing difficult to navigate and ambiguous safety settings to lull parents and children into a false sense of safety, when Discord knew well that children on the Application were being targeted and exploited, are unconscionable and/or abusive commercial acts or practices,” lawyers wrote in the legal filing.
They alleged that Discord’s acts and practices were “offensive to public policy.”
A Discord spokesperson said in a statement that the company disputes the allegations and that it is “proud of our continuous efforts and investments in features and tools that help make Discord safer.”
“Given our engagement with the Attorney General’s office, we are surprised by the announcement that New Jersey has filed an action against Discord today,” the spokesperson said.
One of the lawsuit’s allegations centers on Discord’s age-verification process, which the plaintiffs believe is flawed, writing that children under 13 can easily lie about their age to bypass the app’s minimum age requirement.
The lawsuit also alleges that Discord misled parents to believe that its so-called Safe Direct Messaging feature “was designed to automatically scan and delete all private messages containing explicit media content.” The lawyers claim that Discord misrepresented the efficacy of that safety tool.
“By default, direct messages between ‘friends’ were not scanned at all,” the complaint stated. “But even when Safe Direct Messaging filters were enabled, children were still exposed to child sexual abuse material, videos depicting violence or terror, and other harmful content.”
The New Jersey attorney general is seeking unspecified civil penalties against Discord, according to the complaint.
The filing marks the latest lawsuit brought by various state attorneys general around the country against social media companies.
In 2023, a bipartisan coalition of over 40 state attorneys general sued Meta over allegations that the company knowingly implemented addictive features across apps like Facebook and Instagram that harm the mental well-being of children and young adults.
The New Mexico attorney general sued Snap in September 2024 over allegations that Snapchat’s design features have made it easy for predators to target children through sextortion schemes.
The following month, a bipartisan group of over a dozen state attorneys general filed lawsuits against TikTok over allegations that the app misleads consumers into believing it’s safe for children. In one particular lawsuit filed by the District of Columbia’s attorney general, lawyers allege that the ByteDance-owned app maintains a virtual currency that “substantially harms children” and a livestreaming feature that “exploits them financially.”
In January 2024, executives from Meta, TikTok, Snap, Discord and X were grilled by lawmakers during a Senate hearing over allegations that the companies failed to protect children on their respective social media platforms.