
Cue the George Orwell reference.

Depending on where you work, there’s a significant chance that artificial intelligence is analyzing your messages on Slack, Microsoft Teams, Zoom and other popular apps.

Huge U.S. employers such as Walmart, Delta Air Lines, T-Mobile, Chevron and Starbucks, as well as European brands including Nestle and AstraZeneca, have turned to a seven-year-old startup, Aware, to monitor chatter among their rank and file, according to the company.

Jeff Schumann, co-founder and CEO of the Columbus, Ohio-based startup, says the AI helps companies “understand the risk within their communications,” getting a read on employee sentiment in real time, rather than depending on an annual or twice-per-year survey.

Using the anonymized data in Aware’s analytics product, clients can see how employees of a certain age group or in a particular geography are responding to a new corporate policy or marketing campaign, according to Schumann. Aware’s dozens of AI models, built to read text and process images, can also identify bullying, harassment, discrimination, noncompliance, pornography, nudity and other behaviors, he said.

Aware’s analytics tool — the one that monitors employee sentiment and toxicity — doesn’t have the ability to flag individual employee names, according to Schumann. But its separate eDiscovery tool can, in the event of extreme threats or other risk behaviors that are predetermined by the client, he added.

CNBC didn’t receive a response from Walmart, T-Mobile, Chevron, Starbucks or Nestle regarding their use of Aware. A representative from AstraZeneca said the company uses the eDiscovery product but it doesn’t use analytics to monitor sentiment or toxicity. Delta told CNBC that it uses Aware’s analytics and eDiscovery for monitoring trends and sentiment as a way to gather feedback from employees and other stakeholders, and for legal records retention in its social media platform.

It doesn’t take a dystopian novel enthusiast to see where it could all go very wrong.

Jutta Williams, co-founder of AI accountability nonprofit Humane Intelligence, said AI adds a new and potentially problematic wrinkle to so-called insider risk programs, which have existed for years to evaluate things like corporate espionage, especially within email communications.

Speaking broadly about employee surveillance AI rather than Aware’s technology specifically, Williams told CNBC: “A lot of this becomes thought crime.” She added, “This is treating people like inventory in a way I’ve not seen.”

Employee surveillance AI is a rapidly expanding but niche piece of a larger AI market that’s exploded in the past year, following the launch of OpenAI’s ChatGPT chatbot in late 2022. Generative AI quickly became the buzzy phrase for corporate earnings calls, and some form of the technology is automating tasks in just about every industry, from financial services and biomedical research to logistics, online travel and utilities.

Aware’s revenue has jumped 150% per year on average over the past five years, Schumann told CNBC, and its typical customer has about 30,000 employees. Top competitors include Qualtrics, Relativity, Proofpoint, Smarsh and Netskope.

By industry standards, Aware is staying quite lean. The company last raised money in 2021, when it pulled in $60 million in a round led by Goldman Sachs Asset Management. Compare that with large language model, or LLM, companies such as OpenAI and Anthropic, which have raised billions of dollars each, largely from strategic partners.

‘Tracking real-time toxicity’

Schumann started the company in 2017 after spending almost eight years working on enterprise collaboration at insurance company Nationwide.

Before that, he was an entrepreneur. And Aware isn’t the first company he’s started that’s elicited thoughts of Orwell.

In 2005, Schumann founded a company called BigBrotherLite.com. According to his LinkedIn profile, the business developed software that “enhanced the digital and mobile viewing experience” of the CBS reality series “Big Brother.” In Orwell’s classic novel “1984,” Big Brother was the leader of a totalitarian state in which citizens were under perpetual surveillance.

“I built a simple player focused on a cleaner and easier consumer experience for people to watch the TV show on their computer,” Schumann said in an email.

At Aware, he’s doing something very different.

Every year, the company puts out a report aggregating insights from the billions — in 2023, the number was 6.5 billion — of messages sent across large companies, tabulating perceived risk factors and workplace sentiment scores. Schumann refers to the trillions of messages sent across workplace communication platforms every year as “the fastest-growing unstructured data set in the world.” 

When including other types of content being shared, such as images and videos, Aware’s analytics AI analyzes more than 100 million pieces of content every day. In so doing, the technology creates a company social graph, looking at which teams internally talk to each other more than others.

“It’s always tracking real-time employee sentiment, and it’s always tracking real-time toxicity,” Schumann said of the analytics tool. “If you were a bank using Aware and the sentiment of the workforce spiked in the last 20 minutes, it’s because they’re talking about something positively, collectively. The technology would be able to tell them whatever it was.”

Aware confirmed to CNBC that it uses data from its enterprise clients to train its machine-learning models. The company’s data repository contains about 6.5 billion messages, representing about 20 billion individual interactions across more than 3 million unique employees, the company said. 

When a new client signs up for the analytics tool, it takes Aware’s AI models about two weeks to train on employee messages and get to know the patterns of emotion and sentiment within the company so it can see what’s normal versus abnormal, Schumann said.

“It won’t have names of people, to protect the privacy,” Schumann said. Rather, he said, clients will see that “maybe the workforce over the age of 40 in this part of the United States is seeing the changes to [a] policy very negatively because of the cost, but everybody else outside of that age group and location sees it positively because it impacts them in a different way.”

But Aware’s eDiscovery tool operates differently. A company can set up role-based access to employee names depending on the “extreme risk” category of the company’s choice, which instructs Aware’s technology to pull an individual’s name, in certain cases, for human resources or another company representative.

“Some of the common ones are extreme violence, extreme bullying, harassment, but it does vary by industry,” Schumann said, adding that in financial services, suspected insider trading would be tracked.

For instance, a client can specify a “violent threats” policy, or any other category, using Aware’s technology, Schumann said, and have the AI models monitor for violations in Slack, Microsoft Teams and Workplace by Meta. The client could also couple that with rule-based flags for certain phrases, statements and more. If the AI found something that violated a company’s specified policies, it could provide the employee’s name to the client’s designated representative.

This type of practice has been used for years within email communications. What’s new is the use of AI and its application across workplace messaging platforms such as Slack and Teams.

Amba Kak, executive director of the AI Now Institute at New York University, worries about using AI to help determine what’s considered risky behavior.

“It results in a chilling effect on what people are saying in the workplace,” said Kak, adding that the Federal Trade Commission, Justice Department and Equal Employment Opportunity Commission have all expressed concerns on the matter, though she wasn’t speaking specifically about Aware’s technology. “These are as much worker rights issues as they are privacy issues.” 

Schumann said that though Aware’s eDiscovery tool allows security or HR investigations teams to use AI to search through massive amounts of data, a “similar but basic capability already exists today” in Slack, Teams and other platforms.

“A key distinction here is that Aware and its AI models are not making decisions,” Schumann said. “Our AI simply makes it easier to comb through this new data set to identify potential risks or policy violations.”

Privacy concerns

Even when data is aggregated or anonymized, research suggests the protection is flawed. A landmark study on data privacy using 1990 U.S. Census data showed that 87% of Americans could be identified solely by using ZIP code, birth date and gender. Aware clients using its analytics tool have the power to add metadata to message tracking, such as employee age, location, division, tenure or job function.

“What they’re saying is relying on a very outdated and, I would say, entirely debunked notion at this point that anonymization or aggregation is like a magic bullet through the privacy concern,” Kak said.

Additionally, the type of AI model Aware uses can be effective at generating inferences from aggregate data, making accurate guesses, for instance, about personal identifiers based on language, context, slang terms and more, according to recent research.

“No company is essentially in a position to make any sweeping assurances about the privacy and security of LLMs and these kinds of systems,” Kak said. “There is no one who can tell you with a straight face that these challenges are solved.”

And what about employee recourse? If an interaction is flagged and a worker is disciplined or fired, it’s difficult for them to offer a defense if they’re not privy to all of the data involved, Williams said.

“How do you face your accuser when we know that AI explainability is still immature?” Williams said.

Schumann said in response: “None of our AI models make decisions or recommendations regarding employee discipline.”

“When the model flags an interaction,” Schumann said, “it provides full context around what happened and what policy it triggered, giving investigation teams the information they need to decide next steps consistent with company policies and the law.”

China’s Honor launches new challenge to Samsung with thin foldable smartphone and a big battery


Honor launched the Honor Magic V5 on Wednesday, July 2, as it looks to challenge Samsung in the foldable space.

Honor on Wednesday touted the slimness and battery capacity of its newly launched thin foldable phone, as it lays down a fresh challenge to market leader Samsung.

The Honor Magic V5 will initially go on sale in China, but the Chinese tech firm will likely bring the device to international markets later this year.

The company, which spun off from Chinese tech giant Huawei in 2020, is looking to stand out from rivals with key features of the Magic V5, like artificial intelligence, battery and size.

Honor said the Magic V5 measures 8.8 mm to 9 mm when folded, depending on the color choice. The phone’s predecessor, the Magic V3 (Honor skipped the Magic V4 name), was 9.2 mm when folded. Honor said the Magic V5 weighs 217 grams to 222 grams, again depending on the color model. The previous version was 226 grams.

In China, Honor will launch a special 1-terabyte storage version of the Magic V5, which it says will have a battery capacity of more than 6,000 milliampere-hours, among the highest for foldable phones.

Honor has touted these features heavily as competition in foldables ramps up, even though such devices hold only a small share of the overall smartphone market.

Honor vs. Samsung

Foldables represented less than 2% of the overall smartphone market in 2024, according to International Data Corporation. Samsung was the biggest player with 34% market share followed by Huawei with just under 24%, IDC added. Honor took the fourth spot with a nearly 11% share.

Honor is looking to get a head start on Samsung, which has its own foldable launch next week on July 9.

Francisco Jeronimo, a vice president at IDC, said the Magic V5 is a strong offering from Honor.

“This is the dream foldable smartphone that any user who is interested in this category will think of,” Jeronimo told CNBC, pointing to features such as the battery.

“This phone continues to push the bar forward, and it will challenge Samsung as they are about to launch their seventh generation of foldable phones,” he added.

The thinness of a foldable phone has become a battleground for smartphone makers to appeal to consumers who want the large screen size the device has to offer without extra weight.

At its event next week, Samsung is expected to release a foldable that is thinner than its predecessor and could come close to challenging Honor’s offering by way of size, analysts said. If that happens, then Honor will be facing more competition, especially against Samsung, which has a bigger global footprint.

“The biggest challenge for Honor is the brand equity and distribution reach vs Samsung, where the Korean vendor has the edge,” Neil Shah, co-founder of Counterpoint Research, told CNBC.

Honor’s push into international markets beyond China is still fairly young, with the company looking to build up its brand.

“Further, if Samsung catches up with a thinner form-factor in upcoming iterations, as it has been the real pioneer in foldables with its vertical integration expertise from displays to batteries, the differentiating factor might narrow for Honor,” Shah added.

Vertical integration refers to when a company owns several parts of a product’s supply chain. Samsung has a display and battery business which provides the components for its foldables.

Honor talks up AI

Smartphone players, including Honor, have also looked to stand out via the AI features available on their device.

In March, Honor pledged a $10 billion investment in AI over the next five years, with part of that going toward the development of next-generation agents that are seen as more advanced personal assistants.

Honor said its AI assistant Yoyo can interact with other AI models, such as those created by DeepSeek and Alibaba in China, to create presentation decks.

The company also flagged that its AI agent can hail a taxi ride across multiple apps in China, automatically accepting the quickest ride to arrive and canceling the rest.


AI virtual personality YouTubers, or ‘VTubers,’ are earning millions


One of the most popular gaming YouTubers is named Bloo, and has bright blue wavy hair and dark blue eyes. But he isn’t a human — he’s a fully virtual personality powered by artificial intelligence.

“I’m here to keep my millions of viewers worldwide entertained and coming back for more,” said Bloo in an interview with CNBC. “I’m all about good vibes and engaging content. I’m built by humans, but boosted by AI.”

Bloo is a virtual YouTuber, or VTuber, who has built a massive following of 2.5 million subscribers and more than 700 million views through videos of him playing popular games like Grand Theft Auto, Roblox and Minecraft. VTubers first gained traction in Japan in the 2010s. Now, advances in AI are making it easier than ever to create VTubers, fueling a new wave of virtual creators on YouTube.

The virtual character – whose bright colors and 3D physique look like something out of a Pixar film or the video game Fortnite – was created by Jordi van den Bussche, a longtime YouTuber also known as kwebbelkop. Van den Bussche created Bloo after finding himself unable to keep up with the demands of content creation. The effort required no longer matched the output.

“Turns out, the flaw in this equation is the human, so we need to somehow remove the human,” said van den Bussche, a 29-year-old from Amsterdam, in an interview. “The only logical way was to replace the human with either a photorealistic person or a cartoon. The VTuber was the only option, and that’s where Bloo came from.”

Bloo has already generated more than seven figures in revenue, according to van den Bussche. Many VTubers like Bloo are “puppeteered,” meaning a human controls the character’s voice and movements in real time using motion capture or face-tracking technology. Everything else, from video thumbnails to voice dubbing in other languages, is handled by AI technology from ElevenLabs, OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude. Van den Bussche’s long-term goal is for Bloo’s entire personality and content creation process to be run by AI.

Van den Bussche has already tested fully AI-generated videos on Bloo’s channel, but says the results have not yet been promising. The content doesn’t perform as well because the AI still lacks the intuition and creative instincts of a human, he said. 

“When AI can do it better, faster or cheaper than humans, that’s when we’ll start using it permanently,” van den Bussche said.

The technology might not be far away.

Startup Hedra offers a product that uses AI technology to generate videos that are up to five minutes long. It raised $32 million in a funding round in May led by Andreessen Horowitz’s Infrastructure fund.

Hedra’s product, Character-3, allows users to create AI-generated characters for videos and can add dialogue and other characteristics. CEO Michael Lingelbach told CNBC Hedra is working on a product that will allow users to create self-sustaining, fully-automated characters.

Hedra’s product Character-3 allows users to make figures powered by AI that can be animated in real-time.

Hedra

“We’re doing a lot of research accelerating models like Character-3 to real time, and that’s going to be a really good fit for VTubers,” Lingelbach said. 

Character-3’s technology is already being used by a growing number of creators who are experimenting with new formats, and many of their projects are going viral. One of those is comedian Jon Lajoie’s Talking Baby Podcast, which features a hyper-realistic animated baby talking into a microphone. Another is Milla Sofia, a virtual singer and artist whose AI-generated music videos attract thousands of views. 

These creators are using Character-3 to produce content that stands out on social media, helping them reach wide audiences without the cost and complexity of traditional production.

AI-generated video is a rapidly evolving technology that is reshaping how content is made and shared online, making it easier than ever to produce high-quality video without cameras, actors or editing software. In May, Google announced Veo 3, a tool that creates AI-generated videos with audio.

Google said it uses a subset of YouTube content to train Veo 3, CNBC reported in June. While many creators said they were unaware of the training, experts said it has the potential to create an intellectual property crisis on the platform.

Faceless AI YouTubers

Creators are increasingly finding profitable ways to capitalize on the generative AI technology ushered in by the launch of OpenAI’s ChatGPT in late 2022.

One growing trend is the rise of faceless AI channels, run by creators who use these tools to produce videos with artificially generated images and voiceovers. The channels can earn thousands of dollars a month without the creators ever appearing on camera.

“My goal is to scale up to 50 channels, though it’s getting harder because of how YouTube handles new channels and trust scores,” said GoldenHand, a Spain-based creator who declined to share his real name.

Working with a small team, GoldenHand said he publishes up to 80 videos per day across his network of channels. Some maintain a steady few thousand views per video while others might suddenly go viral and rack up millions of views, mostly to an audience of those over the age of 65.

GoldenHand said his content is audio-driven storytelling. He describes his YouTube videos as audiobooks that are paired with AI-generated images and subtitles. Everything after the initial idea is created entirely by AI.

He recently launched a new platform, TubeChef, which gives creators access to his system to automatically generate faceless AI videos starting at $18 a month.

“People think using AI means you’re less creative, but I feel more creative than ever,” he said. “Coming up with 60 to 80 viral video ideas a day is no joke. The ideation is where all the effort goes now.”

AI Slop

As AI-generated content becomes more common online, concerns about its impact are growing. Some users worry about the spread of misinformation, especially as it becomes easier to generate convincing but entirely AI-fabricated videos.

“Even if the content is informative and someone might find it entertaining or useful, I feel we are moving into a time where … you do not have a way to understand what is human made and what is not,” said Henry Ajder, founder of Latent Space Advisory, which helps businesses navigate the AI landscape.

Others are frustrated by the sheer volume of low-effort AI content flooding their feeds. This kind of material is often referred to as “AI slop”: low-quality, randomly generated content made using artificial intelligence.

“The age of slop is inevitable,” said Ajder, who is also an AI policy advisor at Meta, which owns Facebook and Instagram. “I’m not sure what we do about it.”

While it’s not new, the surge in this type of content has led to growing criticism from users who say it’s harder to find meaningful or original material, particularly on apps like TikTok, YouTube and Instagram.

“I am actually so tired of AI slop,” said one user on X. “AI images are everywhere now. There is no creativity and no effort in anything relating to art, video, or writing when using AI. It’s disappointing.”

However, the creators of this AI content tell CNBC that it comes down to supply and demand. As the AI-generated content continues to get clicks, there’s no reason to stop creating more of it, said Noah Morris, a creator with 18 faceless YouTube channels.

Some argue that AI videos still have inherent artistic value, and though it’s become much easier to create, slop-like content has always existed on the internet, Lingelbach said.

“There’s never been a barrier to people making uninteresting content,” he said. “Now there’s just more opportunity to create different kinds of uninteresting content, but also more kinds of really interesting content too.”


Elon Musk’s X is down for some users


Elon Musk‘s social media platform X was hit with an outage on Wednesday, leaving some users unable to load the site.

More than 15,000 users reported issues with the platform at around 9:53 a.m. ET, according to analytics firm Downdetector, which gathers data from users who spot glitches and report them to the service.

The issues appeared to be largely resolved by 10:30 a.m., though some users continued to report disruptions with the platform.

The site has suffered from multiple disruptions in recent months.

Representatives from X didn’t immediately respond to a request for comment on the outage.
