
Cue the George Orwell reference.

Depending on where you work, there’s a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.

Huge U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands including Nestle and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor chatter among their rank and file, according to the company.

Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time, rather than depending on an annual or twice-per-year survey.

Using the anonymized data in Aware’s analytics product, clients can see how employees of a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign, according to Schumann. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, he said.

Aware’s analytics tool — the one that monitors employee sentiment and toxicity — doesn’t have the ability to flag individual employee names, according to Schumann. But its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.

CNBC didn’t receive a response from Walmart, T-Mobile, Chevron, Starbucks or Nestle regarding their use of Aware. A representative from AstraZeneca said the company uses the eDiscovery product but it doesn’t use analytics to monitor sentiment or toxicity. Delta told CNBC that it uses Aware’s analytics and eDiscovery for monitoring trends and sentiment as a way to gather feedback from employees and other stakeholders, and for legal records retention in its social media platform.

It doesn’t take a dystopian novel enthusiast to see where it could all go very wrong.

Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.

Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen.”

Employee surveillance AI is a rapidly expanding but niche piece of a larger AI market that’s exploded in the past year, following the launch of OpenAI’s ChatGPT chatbot in late 2022. Generative AI quickly became the buzzy phrase for corporate earnings calls, and some form of the technology is automating tasks in just about every industry, from financial services and biomedical research to logistics, online travel and utilities.

Aware’s revenue has jumped 150% per year on average over the past five years, Schumann told CNBC, and its typical customer has about 30,000 employees. Top competitors include Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.

By industry standards, Aware is staying quite lean. The company last raised money in 2021, when it pulled in $60 million in a round led by Goldman Sachs Asset Management. Compare that with large language model, or LLM, companies such as OpenAI and Anthropic, which have raised billions of dollars each, largely from strategic partners.

‘Tracking real-time toxicity’

Schumann started the company in 2017 after spending almost eight years working on enterprise collaboration at insurance company Nationwide.

Before that, he was an entrepreneur. And Aware isn’t the first company he’s started that’s elicited thoughts of Orwell.

In 2005, Schumann founded a company called BigBrotherLite.com. According to his LinkedIn profile, the business developed software that “enhanced the digital and mobile viewing experience” of the CBS reality series “Big Brother.” In Orwell’s classic novel “1984,” Big Brother was the leader of a totalitarian state in which citizens were under perpetual surveillance.

“I built a simple player focused on a cleaner and easier consumer experience for people to watch the TV show on their computer,” Schumann said in an email.

At Aware, he’s doing something very different.

Every year, the company puts out a report aggregating insights from the billions — in 2023, the number was 6.5 billion — of messages sent across large companies, tabulating perceived risk factors and workplace sentiment scores. Schumann refers to the trillions of messages sent across workplace communication platforms every year as “the fastest-growing unstructured data set in the world.” 

When including other types of content being shared, such as images and videos, Aware’s analytics AI analyzes more than 100 million pieces of content every day. In so doing, the technology creates a company social graph, looking at which teams internally talk to each other more than others.

“It’s always tracking real-time employee sentiment, and it’s always tracking real-time toxicity,” Schumann said of the analytics tool. “If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it’s because they’re talking about something positively, collectively. The technology would be able to tell them whatever it was.”

Aware confirmed to CNBC that it uses data from its enterprise clients to train its machine-learning models. The company’s data repository contains about 6.5 billion messages, representing about 20 billion individual interactions across more than 3 million unique employees, the company said. 

When a new client signs up for the analytics tool, it takes Aware’s AI models about two weeks to train on employee messages and get to know the patterns of emotion and sentiment within the company so it can see what’s normal versus abnormal, Schumann said.

“It won’t have names of people, to protect the privacy,” Schumann said. Rather, he said, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”

But Aware’s eDiscovery tool operates differently. A company can set up role-based access to employee names depending on the “extreme risk” category of the company’s choice, which instructs Aware’s technology to pull an individual’s name, in certain cases, for human resources or another company representative.

“Some of the common ones are extreme violence, extreme bullying, harassment, but it does vary by industry,” Schumann said, adding that in financial services, suspected insider trading would be tracked.

For instance, a client can specify a “violent threats” policy, or any other category, using Aware’s technology, Schumann said, and have the AI models monitor for violations in Slack, Microsoft Teams and Workplace by Meta. The client could also couple that with rule-based flags for certain phrases, statements and more. If the AI found something that violated a company’s specified policies, it could provide the employee’s name to the client’s designated representative.

This type of practice has been used for years within email communications. What’s new is the use of AI and its application across workplace messaging platforms such as Slack and Teams.

Amba Kak, executive director of the AI Now Institute at New York University, worries about using AI to help determine what’s considered risky behavior.

“It results in a chilling effect on what people are saying in the workplace,” said Kak, adding that the Federal Trade Commission, Justice Department and Equal Employment Opportunity Commission have all expressed concerns on the matter, though she wasn’t speaking specifically about Aware’s technology. “These are as much worker rights issues as they are privacy issues.” 

Schumann said that though Aware’s eDiscovery tool allows security or HR investigations teams to use AI to search through massive amounts of data, a “similar but basic capability already exists today” in Slack, Teams and other platforms.

“A key distinction here is that Aware and its AI models are not making decisions,” Schumann said. “Our AI simply makes it easier to comb through this new data set to identify potential risks or policy violations.”

Privacy concerns

Even if data is aggregated or anonymized, research suggests, anonymization is a flawed safeguard. A landmark study on data privacy using 1990 U.S. Census data showed that 87% of Americans could be identified using only their ZIP code, birth date and gender. Aware clients using its analytics tool have the power to add metadata to message tracking, such as employee age, location, division, tenure or job function.

“What they’re saying is relying on a very outdated and, I would say, entirely debunked notion at this point that anonymization or aggregation is like a magic bullet through the privacy concern,” Kak said.

Additionally, the type of AI model Aware uses can be effective at generating inferences from aggregate data, making accurate guesses, for instance, about personal identifiers based on language, context, slang terms and more, according to recent research.

“No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” Kak said. “There is no one who can tell you with a straight face that these challenges are solved.”

And what about employee recourse? If an interaction is flagged and a worker is disciplined or fired, it’s difficult for them to offer a defense if they’re not privy to all of the data involved, Williams said.

“How do you face your accuser when we know that AI explainability is still immature?” Williams said.

Schumann said in response: “None of our AI models make decisions or recommendations regarding employee discipline.”

“When the model flags an interaction,” Schumann said, “it provides full context around what happened and what policy it triggered, giving investigation teams the information they need to decide next steps consistent with company policies and the law.”

WATCH: AI is ‘really at play here’ with the recent tech layoffs, says Jason Greer

Google hit with second antitrust blow, adding to concerns about future of ads business

Google CEO Sundar Pichai testifies before the House Judiciary Committee at the Rayburn House Office Building on December 11, 2018 in Washington, DC.

Alex Wong | Getty Images

Google’s antitrust woes are continuing to mount, just as the company tries to brace for a future dominated by artificial intelligence.

On Thursday, a federal judge ruled that Google held illegal monopolies in online advertising markets due to its position between ad buyers and sellers.

The ruling, which followed a September trial in Alexandria, Virginia, represents a second major antitrust blow for Google in under a year. In August, a judge determined that the company holds a monopoly in its core market of internet search, the most significant antitrust ruling in the tech industry since the case against Microsoft more than 20 years ago.

Google is in a particularly precarious spot as it tries to simultaneously defend its primary business in court while fending off an onslaught of new competition due to the emergence of generative AI, most notably OpenAI’s ChatGPT, which offers users alternative ways to search for information. Revenue growth has cooled in recent years, and Google also now faces the added potential of a slowdown in ad spending due to economic concerns from President Donald Trump’s sweeping new tariffs.

Parent company Alphabet reports first-quarter results next week. Alphabet’s stock price dipped more than 1% on Thursday and is now down 20% this year.

In Thursday’s ruling, U.S. District Judge Leonie Brinkema said Google’s anticompetitive practices “substantially harmed” publishers and users on the web. The trial featured 39 live witnesses, depositions from an additional 20 witnesses and hundreds of exhibits.

Judge Brinkema ruled that Google unlawfully controls two of the three parts of the advertising technology market: the publisher ad server market and ad exchange market. Brinkema dismissed the third part of the case, determining that tools used for general display advertising can’t clearly be defined as Google’s own market. In particular, the judge cited the purchases of DoubleClick and Admeld and said the government failed to show those “acquisitions were anticompetitive.”

“We won half of this case and we will appeal the other half,” Lee-Anne Mulholland, Google’s vice president of regulatory affairs, said in an emailed statement. “We disagree with the Court’s decision regarding our publisher tools. Publishers have many options and they choose Google because our ad tech tools are simple, affordable and effective.”

Attorney General Pam Bondi said in a press release from the DOJ that the ruling represents a “landmark victory in the ongoing fight to stop Google from monopolizing the digital public square.”

Potential ad disruption

If regulators force the company to divest parts of the ad-tech business, as the Justice Department has requested, it could open up opportunities for smaller players and other competitors to fill the void and snap up valuable market share. Amazon has been growing its ad business in recent years.

Meanwhile, Google is still defending itself against claims that its search has acted as a monopoly by creating strong barriers to entry and a feedback loop that sustained its dominance. Google said in August, immediately after the search case ruling, that it would appeal, meaning the matter can play out in court for years even after the remedies are determined.

The remedies trial in the search case, which will lay out the consequences, begins next week. The Justice Department is pushing for Google to divest its Chrome browser and to end exclusive agreements, such as its deal with Apple for search on iPhones. The judge is expected to rule by August.

Google CEO Sundar Pichai (L) and Apple CEO Tim Cook (R) listen as U.S. President Joe Biden speaks during a roundtable with American and Indian business leaders in the East Room of the White House on June 23, 2023 in Washington, DC.

Anna Moneymaker | Getty Images

After the ad market ruling on Thursday, Gartner’s Andrew Frank said Google’s “conflicts of interest” are apparent in how the market runs.

“The structure has been decades in the making,” Frank said, adding that “untangling that would be a significant challenge, particularly since lawyers don’t tend to be system architects.”

However, the uncertainty that comes with a potentially years-long appeals process means many publishers and advertisers will be waiting to see how things shake out before making any big decisions, given how much they rely on Google’s technology.

“Google will have incentives to encourage more competition possibly by loosening certain restrictions on certain media it controls, YouTube being one of them,” Frank said. “Those kind of incentives may create opportunities for other publishers or ad tech players.”

A date for the remedies trial in the ad tech case hasn’t been set.

Damian Rollison, senior director of market insights for marketing platform Soci, said the revenue hit from the ad market case could be more dramatic than the impact from the search case.

“The company stands to lose a lot more in material terms if its ad business, long its main source of revenue, is broken up,” Rollison said in an email. “Whereas divisions like Chrome are more strategically important.”

WATCH: U.S. judge finds Google holds illegal online ad-tech monopolies

Discord sued by New Jersey over child safety features

Jason Citron, CEO of Discord, in Washington, DC, on January 31, 2024.

Andrew Caballero-Reynolds | AFP | Getty Images

The New Jersey attorney general sued Discord on Thursday, alleging that the company misled consumers about child safety features on the gaming-centric social messaging app.

The lawsuit, filed in the New Jersey Superior Court by Attorney General Matthew Platkin and the state’s division of consumer affairs, alleges that Discord violated the state’s consumer fraud laws.

Discord did so, the complaint said, by allegedly “misleading children and parents from New Jersey” about safety features, “obscuring” the risks children face on the platform and failing to enforce its minimum age requirement.

“Discord’s strategy of employing difficult to navigate and ambiguous safety settings to lull parents and children into a false sense of safety, when Discord knew well that children on the Application were being targeted and exploited, are unconscionable and/or abusive commercial acts or practices,” lawyers wrote in the legal filing.

They alleged that Discord’s acts and practices were “offensive to public policy.”

A Discord spokesperson said in a statement that the company disputes the allegations and that it is “proud of our continuous efforts and investments in features and tools that help make Discord safer.”

“Given our engagement with the Attorney General’s office, we are surprised by the announcement that New Jersey has filed an action against Discord today,” the spokesperson said.

One of the lawsuit’s allegations centers on Discord’s age-verification process, which the plaintiffs say is flawed, writing that children under 13 can easily lie about their age to bypass the app’s minimum age requirement.

The lawsuit also alleges that Discord misled parents to believe that its so-called Safe Direct Messaging feature “was designed to automatically scan and delete all private messages containing explicit media content.” The lawyers claim that Discord misrepresented the efficacy of that safety tool.

“By default, direct messages between ‘friends’ were not scanned at all,” the complaint stated. “But even when Safe Direct Messaging filters were enabled, children were still exposed to child sexual abuse material, videos depicting violence or terror, and other harmful content.”

The New Jersey attorney general is seeking unspecified civil penalties against Discord, according to the complaint.

The filing marks the latest lawsuit brought by various state attorneys general around the country against social media companies.

In 2023, a bipartisan coalition of over 40 state attorneys general sued Meta over allegations that the company knowingly implemented addictive features across apps like Facebook and Instagram that harm the mental well-being of children and young adults.

The New Mexico attorney general sued Snap in September 2024 over allegations that Snapchat’s design features have made it easy for predators to target children through sextortion schemes.

The following month, a bipartisan group of over a dozen state attorneys general filed lawsuits against TikTok over allegations that the app misleads consumers into believing it’s safe for children. In one lawsuit, filed by the District of Columbia’s attorney general, lawyers allege that the ByteDance-owned app maintains a virtual currency that “substantially harms children” and a livestreaming feature that “exploits them financially.”

In January 2024, executives from Meta, TikTok, Snap, Discord and X were grilled by lawmakers during a Senate hearing over allegations that the companies failed to protect children on their respective social media platforms.

WATCH: The FTC has an uphill battle in its antitrust case against Meta, says former Facebook general counsel

23andMe bankruptcy under congressional investigation for customer data

Signage at 23andMe headquarters in Sunnyvale, California, U.S., on Wednesday, Jan. 27, 2021.

David Paul Morris | Bloomberg | Getty Images

The House Committee on Energy and Commerce is investigating 23andMe’s decision to file for Chapter 11 bankruptcy protection and has expressed concern that customers’ sensitive genetic data is “at risk of being compromised,” CNBC has learned.

Rep. Brett Guthrie, R-Ky., Rep. Gus Bilirakis, R-Fla., and Rep. Gary Palmer, R-Ala., sent a letter to 23andMe’s interim CEO Joe Selsavage on Thursday requesting answers to a series of questions about its data and privacy practices by May 1.

The congressmen are the latest government officials to raise concerns about 23andMe’s commitment to data security, as the House Committee on Oversight and Government Reform and the Federal Trade Commission have sent the company similar letters in recent weeks.

23andMe exploded into the mainstream with its at-home DNA testing kits, which gave customers insight into their family histories and genetic profiles. The company was once valued at a peak of $6 billion, but it has since struggled to generate recurring revenue and establish a lucrative research and therapeutics business.

Since 23andMe filed for bankruptcy in Missouri federal court in March, its assets, including its vast genetic database, have been up for sale.

“With the lack of a federal comprehensive data privacy and security law, we write to express our great concern about the safety of Americans’ most sensitive personal information,” Guthrie, Bilirakis and Palmer wrote in the letter.

23andMe did not immediately respond to CNBC’s request for comment.

23andMe has been dogged by privacy concerns in recent years, particularly after hackers accessed the information of nearly 7 million customers in 2023.

DNA data is particularly sensitive because each person’s sequence is unique, meaning it can never be fully anonymized, according to the National Human Genome Research Institute. If genetic data falls into the hands of bad actors, it could be used to facilitate identity theft, insurance fraud and other crimes.

The House Committee on Energy and Commerce has jurisdiction over issues involving data privacy. Guthrie serves as the chairman of the committee, Palmer serves as the chairman of the Subcommittee on Oversight and Investigations and Bilirakis serves as the chairman of the Subcommittee on Commerce, Manufacturing and Trade.

The congressmen said that while Americans’ health information is protected under legislation like the Health Insurance Portability and Accountability Act, or HIPAA, direct-to-consumer companies like 23andMe are typically not covered under that law. They said they feel “great concern” about the safety of the company’s customer data, especially given the uncertainty around the sale process.

23andMe has repeatedly said it will not change how it manages or protects consumer data throughout the transaction. Similarly, in a March release, the company said all potential buyers must agree to comply with its privacy policy and applicable law. 

“To constitute a qualified bid, potential buyers must, among other requirements, agree to comply with 23andMe’s consumer privacy policy and all applicable laws with respect to the treatment of customer data,” 23andMe said in the release.

23andMe customers can still delete their account and accompanying data through the company’s website. But Guthrie, Bilirakis and Palmer said there are reports that some users have had trouble doing so.

“Regardless of whether the company changes ownership, we want to ensure that customer access and deletion requests are being honored by 23andMe,” the congressmen wrote.

WATCH: The rise and fall of 23andMe
