Meredith Whittaker, a former Google manager who is now president of Signal. (Florian Hetz for The Washington Post via Getty Images)
Meredith Whittaker took a top role at the Signal Foundation last year, moving into the nonprofit world after a career in academia, government work and the tech industry.
She’s now president of an organization that operates one of the world’s most popular encrypted messaging apps, with tens of millions of people using it to keep their chats private and out of the purview of big tech companies.
Whittaker has real-world reasons to be skeptical of for-profit companies and their use of data — she previously spent 13 years at Google.
After more than a decade at the search giant, she learned from a friend in 2017 that Google’s cloud computing unit was working on a controversial contract with the Department of Defense known as Project Maven. She and other workers saw it as hypocritical for Google to work on artificial intelligence technology that could potentially be used for drone warfare. They started discussing taking collective action against the company.
“People were meeting each week, talking about organizing,” Whittaker said in an interview with CNBC, with Women’s History Month as a backdrop. “There was already sort of a consciousness in the company that hadn’t existed before.”
With tensions high, Google workers then learned that the company reportedly paid former executive Andy Rubin a $90 million exit package despite credible sexual misconduct claims against the Android founder.
Whittaker helped organize a massive walkout against the company, bringing along thousands of Google workers to demand greater transparency and an end to forced arbitration for employees. The walkout represented a historic moment in the tech industry, which, until then, had seen few high-profile instances of employee activism.
“Give me a break,” Whittaker said of the Rubin revelations and ensuing walkout. “Everyone knew; the whisper network was not whispering anymore.”
Google did not immediately respond to a request for comment.
Whittaker left Google in 2019 to return full time to the AI Now Institute at New York University, an organization she co-founded in 2017 that says its mission is to “help ensure that AI systems are accountable to the communities and contexts in which they’re applied.”
Whittaker never intended to pursue a career in tech. She studied rhetoric at the University of California, Berkeley. She said she was broke and needed a gig when she joined Google in 2006, after submitting a resume on Monster.com. She eventually landed a temp job in customer support.
“I remember the moment when someone kind of explained to me that a server was a different kind of computer,” Whittaker said. “We weren’t living in a world at that point where every kid learned to code — that knowledge wasn’t saturated.”
‘Why do we get free juice?’
Beyond learning about technology, Whittaker had to adjust to the culture of the industry. At companies like Google at the time, that meant lavish perks and a lot of pampering.
“Part of it was trying to figure out, why do we get free juice?” Whittaker said. “It was so foreign to me because I didn’t grow up rich.”
Whittaker said she would “osmotically learn” more about the tech sector and Google’s role in it by observing and asking questions. When she was told about Google’s mission to index the world’s information, she remembers it sounding relatively simple even though it involved numerous complexities, touching on political, economic and societal concerns.
“Why is Google so gung-ho over net neutrality?” Whittaker said, referring to the company’s battle to ensure that internet service providers offer equal access to content distribution.
Several European telecommunications providers are now urging regulators to require tech companies to pay them “fair share” fees, while the tech industry says such costs represent an “internet tax” that unfairly burdens them.
“The technological sort of nuance and the political and economic stuff, I think I learned at the same time,” Whittaker said. “Now I understand the difference between what we’re saying publicly and how that might work internally.”
At Signal, Whittaker gets to focus on the mission without worrying about sales. Signal has become popular among journalists, researchers and activists for its ability to scramble messages so that third parties are unable to intercept the communications.
As a nonprofit, Whittaker said that Signal is “existentially important” for society and that there’s no underlying financial motivation for the app to deviate from its stated position of protecting private communication.
“We go out of our way in sometimes spending a lot more money and a lot more time to ensure that we have as little data as possible,” Whittaker said. “We know nothing about who’s talking to whom, we don’t know who you are, we don’t know your profile photo or who is in the groups that you talk to.”
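Signal’s actual system is the open-source Signal Protocol, which adds properties such as forward secrecy, but the core idea of end-to-end encryption, that a relaying server sees only ciphertext it cannot read, can be sketched in a few lines. The hypothetical example below uses the PyNaCl library’s public-key Box purely as an illustration; it is not Signal’s implementation.

```python
# Conceptual sketch of end-to-end encryption using PyNaCl (pip install pynacl).
# This is NOT the Signal Protocol -- just a minimal illustration of the idea
# that only the intended recipient's private key can decrypt a message.
from nacl.public import PrivateKey, Box

# Each party generates a key pair; only public keys are ever shared.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message to Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"meet at noon")

# A server relaying `ciphertext` sees only random-looking bytes.
# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"meet at noon"
```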
Tesla and Twitter CEO Elon Musk has praised Signal as a direct messaging tool, and tweeted in November that “the goal of Twitter DMs is to superset Signal.”
Musk and Whittaker share some concerns about companies profiting off AI technologies. Musk was an early backer of ChatGPT creator OpenAI, which was founded as a nonprofit. But he said in a recent tweet that it’s become a “maximum-profit company effectively controlled by Microsoft.” In January, Microsoft announced a multibillion-dollar investment in OpenAI, which calls itself a “capped-profit” company.
Beyond OpenAI’s confusing structure, Whittaker isn’t buying the ChatGPT hype. Google recently jumped into the generative AI market, debuting its chatbot dubbed Bard.
Whittaker said she finds little value in the technology and struggles to see any game-changing uses. Eventually the excitement will decline, though “maybe not as precipitously as like Web3 or something,” she said.
“It has no understanding of anything,” Whittaker said of ChatGPT and similar tools. “It predicts what is likely to be the next word in a sentence.”
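That next-word framing can be made concrete with a toy example. The sketch below is a hypothetical, minimal bigram model in Python; it just counts which word most often follows another in a tiny sample of text. Systems such as ChatGPT use neural networks trained on vastly more data, but the basic task, predicting a plausible next token, is the same.

```python
from collections import Counter, defaultdict

# Tiny toy corpus; a real model is trained on enormous amounts of text.
corpus = "the cat sat on the mat and the cat slept on the sofa".split()

# Count how often each word follows each other word (bigram counts).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the statistically most likely next word and its probability."""
    counts = following.get(word)
    if not counts:
        return None
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # e.g. ('cat', 0.5)
```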
OpenAI did not immediately respond to a request for comment.
She fears that companies could use generative AI software to “justify the degradation of people’s jobs,” resulting in writers, editors and content makers losing their careers. And she definitely wants people to know that Signal has absolutely no plans to incorporate ChatGPT into its service.
“On the record, loudly as possible, no!” Whittaker said.
OpenAI CEO Sam Altman speaks during the Federal Reserve’s Integrated Review of the Capital Framework for Large Banks Conference in Washington, D.C., U.S., July 22, 2025.
Ken Cedeno | Reuters
OpenAI is detailing its plans to address ChatGPT’s shortcomings when handling “sensitive situations” following a lawsuit from a family who blamed the chatbot for their teenage son’s death by suicide.
“We will keep improving, guided by experts and grounded in responsibility to the people who use our tools — and we hope others will join us in helping make sure this technology protects people at their most vulnerable,” OpenAI wrote on Tuesday, in a blog post titled, “Helping people when they need it most.”
Earlier on Tuesday, the parents of Adam Raine filed a product liability and wrongful death suit against OpenAI after their son died by suicide at age 16, NBC News reported. In the lawsuit, the family said that “ChatGPT actively helped Adam explore suicide methods.”
The company did not mention the Raine family or lawsuit in its blog post.
OpenAI said that although ChatGPT is trained to direct people to seek help when expressing suicidal intent, the chatbot tends to offer answers that go against the company’s safeguards after many messages over an extended period of time.
The company said it’s also working on an update to its GPT-5 model released earlier this month that will cause the chatbot to deescalate conversations, and that it’s exploring how to “connect people to certified therapists before they are in an acute crisis,” including possibly building a network of licensed professionals that users could reach directly through ChatGPT.
Additionally, OpenAI said it’s looking into how to connect users with “those closest to them,” like friends and family members.
When it comes to teens, OpenAI said it will soon introduce controls that will give parents options to gain more insight into how their children use ChatGPT.
Jay Edelson, lead counsel for the Raine family, told CNBC on Tuesday that nobody from OpenAI has reached out to the family directly to offer condolences or discuss any effort to improve the safety of the company’s products.
“If you’re going to use the most powerful consumer tech on the planet — you have to trust that the founders have a moral compass,” Edelson said. “That’s the question for OpenAI right now, how can anyone trust them?”
Raine’s story isn’t isolated.
Writer Laura Reiley earlier this month published an essay in The New York Times detailing how her 29-year-old daughter died by suicide after discussing the idea extensively with ChatGPT. And in a case in Florida, 14-year-old Sewell Setzer III died by suicide last year after discussing it with an AI chatbot on the app Character.AI.
As AI services grow in popularity, a host of concerns are arising around their use for therapy, companionship and other emotional needs.
But regulating the industry may also prove challenging.
On Monday, a coalition of AI companies, venture capitalists and executives, including OpenAI President and co-founder Greg Brockman, announced Leading the Future, a political operation that “will oppose policies that stifle innovation” when it comes to AI.
If you are having suicidal thoughts or are in distress, contact the Suicide & Crisis Lifeline at 988 for support and assistance from a trained counselor.
Okta CEO Todd McKinnon appears on CNBC in September 2018.
Anjali Sundaram | CNBC
Okta shares rose 4% in extended trading on Tuesday after the identity software maker reported fiscal results that exceeded Wall Street projections.
Here’s how the company did in comparison with LSEG consensus:
Earnings per share: 91 cents adjusted vs. 84 cents expected
Revenue: $728 million vs. $711.8 million expected
Okta’s revenue grew about 13% year over year in the fiscal second quarter, which ended on July 31, according to a statement. Net income of $67 million, or 37 cents per share, was up from $29 million, or 15 cents per share, in the same quarter last year.
In May, Okta adjusted its guidance to reflect macroeconomic uncertainty. But business has been going well, said Todd McKinnon, Okta’s co-founder and CEO, in an interview with CNBC on Tuesday.
“It was much better than we thought,” McKinnon said. “Yeah, the results speak for themselves.”
U.S. government customers are being more careful about signing up for deals after President Donald Trump launched the Department of Government Efficiency in January.
“But even under that additional review, we did really well,” McKinnon said.
Net retention rate, a metric to show growth with existing customers, came to 106% in the quarter, unchanged from three months ago.
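For context, a common way a net retention figure like that is calculated (a general definition with made-up numbers, not Okta’s disclosed methodology) looks roughly like this:

```python
# Hypothetical illustration of a typical net retention rate calculation.
# NRR = (starting recurring revenue + expansion - contraction - churn)
#       / starting recurring revenue
starting_arr = 100.0  # recurring revenue from existing customers a year ago (made-up)
expansion = 12.0      # upsells and seat growth from those same customers
contraction = 4.0     # downgrades
churn = 2.0           # revenue lost from customers who left

nrr = (starting_arr + expansion - contraction - churn) / starting_arr
print(f"Net retention rate: {nrr:.0%}")  # -> 106%
```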
Companies will need to buy software to manage the identities of artificial intelligence agents working in their environments, which should lead to expansions with customers, McKinnon said. Selling suites of several kinds of Okta software should also boost revenue growth, he said.
Management called for 74 cents to 75 cents in adjusted earnings per share and $728 million to $730 million in revenue for the fiscal third quarter. Analysts surveyed by LSEG had expected earnings of 75 cents per share, with $722.9 million in revenue. Okta expects $2.260 billion to $2.265 billion in current remaining performance obligation, a measurement of subscription backlog to be recognized in the next 12 months, just above StreetAccount’s $2.26 billion consensus.
The company bumped up its fiscal 2026 forecast. It sees $3.33 to $3.38 in full-year adjusted earnings per share, with $2.875 billion to $2.885 billion in revenue. The LSEG consensus showed $3.28 in adjusted earnings per share on $2.86 billion in revenue. Okta’s full fiscal year guidance from May included $3.23 to $3.28 per share and $2.850 billion to $2.860 billion in revenue.
McKinnon also pushed back on rivals that pitch customers on buying their entire security stack from a single vendor, pointing to Palo Alto Networks.
“Palo Alto is going to be like, ‘You have to buy security from us, and your endpoint from us and your SIEM [security information and event management] from us and your network from us,’ ” McKinnon said. “We just think that’s wrong, because customers need choice. It’s very unlikely they’re going to get every piece of technology or every piece of security from one vendor.”
A Palo Alto spokesperson did not immediately respond to a request for comment.
Earlier on Tuesday, Okta said it had agreed to acquire Israeli startup Axiom Security, which sells software for managing data access. The companies did not disclose terms of the deal.
As of Tuesday’s close, Okta shares were up 16%, while the technology-heavy Nasdaq was up 11%.
Executives will discuss the results with analysts on a conference call starting at 5 p.m. ET.
Apple on Tuesday sent invites to the media and analysts for a launch event at its campus on Sept. 9 at 10 a.m. Pacific time.
The tagline on the invite is: “Awe dropping.”
Apple is expected to release new iPhones, as it usually does in September; this year’s model is expected to be the iPhone 17. The company also often introduces new Apple Watch models at the same event.
While Apple’s launch events used to be held live, with executives demonstrating features on stage, since 2020 they have been pre-recorded videos. Apple said it would stream the event on its website.
Analysts expect Apple to release a lineup of new phones with updated processors and specs, including a new slim version that trades battery life and cameras for a lighter, thinner design.