Several hundred Google employees have signed and circulated a manifesto opposing the company’s vaccine mandate, posing the latest challenge for leadership as it approaches key deadlines for returning workers to offices in person.
The Biden administration has ordered U.S. companies with 100 or more workers to ensure their employees are fully vaccinated or regularly tested for Covid-19 by Jan. 4. In response, Google has asked its more than 150,000 employees to upload their vaccination status to its internal systems by Dec. 3, whether they plan on coming into the office or not, according to internal documents viewed by CNBC. The company has also said that all employees who work directly or indirectly with government contracts must be vaccinated — even if they are working from home.
“Vaccines are key to our ability to enable a safe return to office for everyone and minimize the spread of Covid-19 in our communities,” wrote Chris Rackow, Google VP of security, in an email sent near the end of October.
Rackow stated the company was already implementing vaccine requirements, so the changes stemming from Biden’s executive order were “minimal.” His email gave employees a Nov. 12 deadline to request exemptions for reasons such as religious beliefs or medical conditions, and said requests would be decided on a case-by-case basis.
The manifesto within Google, which has been signed by at least 600 Google employees, asks company leaders to retract the vaccine mandate and create a new one that is “inclusive of all Googlers,” arguing leadership’s decision will have outsized influence in corporate America. It also calls on employees to “oppose the mandate as a matter of principle” and urges them not to let the policy alter their decision if they’ve already chosen not to receive the Covid-19 shot.
The manifesto comes as most of the Google workforce approaches a deadline to return to physical offices three days a week starting Jan. 10. The company’s notably outspoken employees have previously debated everything from government contracts to cafeteria food changes.
A spokesperson for Google said the company stands behind its policy. “As we’ve stated to all our employees and the author of this document, our vaccination requirements are one of the most important ways we can keep our workforce safe and keep our services running. We firmly stand behind our vaccination policy.”
The mandate dilemma
Vaccination is a dilemma not only for Google, but for corporate America in general. The Covid-19 virus has contributed to 772,570 deaths in the U.S., according to Johns Hopkins data. Despite the vaccines’ proven effectiveness in providing a high level of protection against hospitalization and death, the country is struggling to persuade millions of people to get their first dose, as more than 60 million Americans remain unvaccinated.
In July, CEO Sundar Pichai announced the company would require vaccinations for those returning to offices. In October, Pichai said that the San Francisco Bay Area offices, near the company’s headquarters, were up to 30% occupied, while New York was seeing nearly half of its employees back. He added at the time that employees who don’t want to get vaccinated would be able to continue working remotely.
The company has taken other steps to convince employees to get vaccinated as well. For instance, Joe Kava, vice president of data centers at Google, announced a $5,000 vaccination incentive spot bonus for U.S. data center employees, according to the manifesto.
In an email cited in the manifesto and viewed by CNBC, Google VP of global security Chris Rackow said that because of the company’s work with the federal government, which “today encompasses products and services spanning Ads, Cloud, Maps, Workspace and more,” all employees working directly or indirectly with government contracts will require vaccinations — even if they are working from home. Frequent testing is “not a valid alternative,” he added.
The authors of the manifesto strongly disagree.
“I believe that Sundar’s Vaccine Mandate is deeply flawed,” the manifesto states, calling company leadership “coercive,” and “the antithesis of inclusion.”
Under a subhead titled “Respect the User,” the authors write that barring unvaccinated Googlers from the office “publicly and possibly embarrassingly exposes a private choice as it would be difficult for the Googler not to reveal why they cannot return.”
The author also argues the mandate violates the company’s principles of inclusiveness.
“Such Googlers may never feel comfortable expressing their true sentiments about a company health policy and other, unrelated sensitive topics. This results in silenced perspective and exacerbates the internal ideological ‘echo chamber’ which folks both inside and outside of Google have observed for years.”
The manifesto also opposes Google having a record of employees’ vaccination status.
“I do not believe Google should be privy to the health and medical history of Googlers and the vaccination status is no exception.” Google has asked employees to upload their vaccination proof to Google’s “environmental health and safety” team even if they already uploaded it to One Medical, one of Google’s benefits providers, according to internal documentation.
The author then tries to argue the vaccine mandate may be the start of a slippery slope, paving the way for other intrusive measures — a common line of argument among people opposed to the mandates.
“It normalizes medical intervention compulsion not only for Covid-19 vaccination but for future vaccines and possibly even non-vaccine interventions by extension. It justifies the principle of division and unequal treatment of Googlers based on their personal beliefs and decisions. The implications are chilling. Due to its presence as an industry leader, Google’s mandate will influence companies around the world to consider these as acceptable tradeoffs.”
The group has sent these concerns in an open letter to Google’s chief health officer Karen DeSalvo, the document states.
In Google’s most recent all-hands meeting, called TGIF, some employees attempted to bring more attention to the vaccine question by getting fellow employees to “downvote” other questions in an internal system called Dory, according to an internal email chain viewed by CNBC. The goal was to ensure their questions would gain enough votes to qualify for executives to address them.
Google’s health ambitions
The pushback against vaccine mandates poses a new challenge for Google’s leadership at a time when it is trying to target the healthcare industry among its growing business ambitions — particularly for its cloud unit.
In August, Google disbanded its health unit as a formalized business unit for the health-care sector, and Dr. David Feinberg, who had spent the past two years leading the search giant’s health-care efforts, left the company. Nonetheless, Google Cloud CEO Thomas Kurian has routinely cited the health-care sector as a key focus area, and DeSalvo, a former Obama administration official whom Google hired as its first health chief in 2019, told CNBC’s “Squawk Box” last month that the tech giant is “still all in on health.”
The company has tried to capitalize on the broader fight against Covid in several ways. In the first half of 2021, it spent nearly $30 million on at-home Covid tests for employees from Cue Health, which went public in September at a $3 billion valuation. Shortly after, Cue Health announced a separate partnership with Google’s cloud unit to collect and analyze Covid-19 data in hopes of predicting future variants. Google also teamed up with Apple on opt-in contact tracing software designed to track Covid-19 exposure.
A YouTube tool that uses creators’ biometrics to help them remove AI-generated videos that exploit their likeness also allows Google to train its artificial intelligence models on that sensitive data, experts told CNBC.
In response to concern from intellectual property experts, YouTube told CNBC that Google has never used creators’ biometric data to train AI models and it is reviewing the language used in the tool’s sign-up form to avoid confusion. But YouTube told CNBC it will not be changing its underlying policy.
The discrepancy highlights a broader divide inside Alphabet, where Google is aggressively expanding its AI efforts while YouTube works to maintain trust with creators and rights holders who depend on the platform for their businesses.
YouTube is expanding its “likeness detection,” a tool the company introduced in October that flags when a creator’s face is used without their permission in deepfakes, the term used to describe fake videos created using AI. The feature is being expanded to millions of creators in the YouTube Partner Program as AI-manipulated content becomes more prevalent throughout social media.
The tool scans videos uploaded across YouTube to identify where a creator’s face may have been altered or generated by artificial intelligence. Creators can then decide whether to request the video’s removal, but to use the tool, YouTube requires that creators upload a government ID and a biometric video of their face. Biometrics are the measurement of physical characteristics to verify a person’s identity.
Experts say that by tying the tool to Google’s privacy policy, YouTube has left the door open for future misuse of creators’ biometrics. The policy states that public content, including biometric information, can be used “to help train Google’s AI models and build products and features.”
“Likeness detection is a completely optional feature, but does require a visual reference to work,” YouTube spokesperson Jack Malon said in a statement to CNBC. “Our approach to that data is not changing. As our Help Center has stated since the launch, the data provided for the likeness detection tool is only used for identity verification purposes and to power this specific safety feature.”
YouTube told CNBC it is “considering ways to make the in-product language clearer.” The company has not said what specific changes to the wording will be made or when they will take effect.
Experts remain cautious, saying they raised concerns about the policy to YouTube months ago.
“As Google races to compete in AI and training data becomes strategic gold, creators need to think carefully about whether they want their face controlled by a platform rather than owned by themselves,” said Dan Neely, CEO of Vermillio, which helps individuals protect their likeness from being misused and also facilitates secure licensing of authorized content. “Your likeness will be one of the most valuable assets in the AI era, and once you give that control away, you may never get it back.”
Vermillio and Loti are third-party companies that work with creators, celebrities and media companies to monitor and enforce likeness rights across the internet. As AI video generation has advanced, demand for their services among IP rights holders has grown.
Loti CEO Luke Arrigoni said the risks of YouTube’s current biometric policy “are enormous.”
“Because the release currently allows someone to be able to attach that name to the actual biometrics of the face, they could create something more synthetic that looks like that person,” Arrigoni said.
Neely and Arrigoni both said they would not currently recommend that any of their clients sign up for likeness detection on YouTube.
YouTube’s head of creator product, Amjad Hanif, said YouTube built its likeness detection tool to operate “at the scale of YouTube,” where hundreds of hours of new footage are posted every minute. The tool is set to be made available to the more than 3 million creators in the YouTube Partner Program by the end of January, Hanif said.
“We do well when creators do well,” Hanif told CNBC. “We’re here as stewards and supporters of the creator ecosystem, and so we are investing in tools to support them on that journey.”
The rollout comes as AI-generated video tools rapidly improve in quality and accessibility, raising new concerns for creators whose likeness and voice are central to their business.
YouTube creator Mikhail Varshavski, a physician who goes by Doctor Mike on the video platform, said he uses the service’s likeness detection tool to review dozens of AI-manipulated videos a week.
Varshavski has been on YouTube for nearly a decade and has amassed more than 14 million subscribers on the platform. He makes videos reacting to TV medical dramas, answering questions on health fads and debunking myths. He relies on his credibility as a board-certified physician to inform his viewers.
Rapid advances in AI have made it easier for bad actors to copy his face and voice in deepfake videos that could give his viewers misleading medical advice, Varshavski said.
He first encountered a deepfake of himself on TikTok, where an AI-generated doppelgänger promoted a “miracle” supplement.
“It obviously freaked me out, because I’ve spent over a decade investing in garnering the audience’s trust and telling them the truth and helping them make good health-care decisions,” he said. “To see someone use my likeness in order to trick someone into buying something they don’t need or that can potentially hurt them, scared everything about me in that situation.”
AI video generation tools like Google’s Veo 3 and OpenAI’s Sora have made it significantly easier to create deepfakes of celebrities and creators like Varshavski. That’s because their likeness is frequently featured in the datasets used by tech companies to train their AI models.
Veo 3 is trained on a subset of the more than 20 billion videos uploaded to YouTube, CNBC reported in July. That could include several hundred hours of video from Varshavski.
Deepfakes have “become more widespread and proliferative,” Varshavski said. “I’ve seen full-on channels created weaponizing these types of AI deep fakes, whether it was for tricking people to buy a product or strictly to bully someone.”
At the moment, creators have no way to monetize unauthorized use of their likeness. That stands in contrast to YouTube’s Content ID system for copyrighted material, which offers revenue sharing and is typically used by companies that hold large copyright catalogs. YouTube’s Hanif said the company is exploring how a similar model could work for AI-generated likeness use in the future.
Earlier this year, YouTube gave creators the option to permit third-party AI companies to train on their videos. Hanif said that millions of creators have opted into that program, with no promise of compensation.
Hanif said his team is still working to improve the product’s accuracy, and that early testing has been successful, though he did not provide accuracy metrics.
As for takedown activity across the platform, Hanif said that remains low largely because many creators choose not to delete flagged videos.
“They’ll be happy to know that it’s there, but not really feel like it merits taking down,” Hanif said. “By far the most common action is to say, ‘I’ve looked at it, but I’m OK with it.'”
Agents and rights advocates told CNBC that low takedown numbers are more likely due to confusion and lack of awareness rather than comfort with AI content.
MongoDB shares ripped more than 25% higher on Tuesday after the company blew past Wall Street’s third-quarter expectations and lifted its forecast as its cloud database platform gained traction with customers.
The database software provider posted adjusted earnings of $1.32 per share on $628 million in revenue. That topped the adjusted 80 cents per share and $592 million in revenue expected by analysts polled by LSEG. Revenue grew 19% from a year earlier.
MongoDB said its Atlas platform grew 30% from a year ago and accounted for 75% of total revenue for the quarter. The company ended the period with more than 60,800 Atlas customers and expects Atlas revenue to grow 27% in the current period.
“Q3 was an exceptional quarter that was driven by our continued go-to-market execution and the broad-based demand we are seeing across the business,” said CEO Chirantan “CJ” Desai on his first earnings call at the helm of the company.
Dev Ittycheria, who ran the company for 11 years and took it public, stepped down in November.
Desai believes the company is approaching a “once in a lifetime” opportunity as artificial intelligence, cloud and data trends reach a “true inflection point.” He told investors he plans to focus on building customer relationships and innovation in the coming months.
Citing those tailwinds, MongoDB raised its full-year guidance on Atlas growth and ongoing artificial intelligence demand. The company now anticipates revenue between $2.434 billion and $2.439 billion, up from its prior guidance of $2.34 billion to $2.36 billion.
Analysts at Bernstein lifted their price target on shares to $452, expecting the stock to continue benefiting from accelerating growth as other software companies struggle.
“We expect strong consumption demand, potential upside from AI, and benefits from an easing interest rate environment to continue driving re-rating upside in the near term,” they wrote.
Zafran Security, a cybersecurity startup created by an Iranian-born spy whose story helped inspire the hit Apple TV series “Tehran,” has raised $60 million, the company said Tuesday.
Sanaz Yashar, the former spy and CEO of Zafran, told CNBC that the funding round comes as a result of the accelerating pace of cyberattacks amid the ongoing AI boom. Zafran uses artificial intelligence and automation technology to manage threat exposure.
The threat landscape is “becoming much more severe than it was even a year ago,” she said in an exclusive interview.
The round brings Zafran’s total funding to $130 million since its founding in 2022. Zafran did not disclose the valuation at which it raised, but the startup said it has more than tripled annual recurring revenue since its last round of $70 million in September 2024. Annual recurring revenue measures the income a product is expected to generate over a 12-month period.
The company plans to use the money to hire more people, Yashar said.
Menlo Ventures led the funding round, with participation from Sequoia Capital and Cyberstarts, which was an early investor in the startup Wiz that sold to Google for $32 billion in March.
Companies are looking for ways to reinvigorate their cybersecurity defenses as AI increases the sophistication and capabilities of cybercriminals.
Yashar and co-founders Ben Seri and Snir Havdala created Zafran following an investigation into a ransomware attack on a hospital in Israel.
“The data was there,” Yashar told CNBC, adding that cohesive security tools might have prevented the attack. “If the security tools were talking to each other, they could block it.”
Yashar, who moved to Israel from Tehran at 17, served for 15 years in an elite cybersecurity intelligence unit within the Israel Defense Forces known as Unit 8200. She also led major investigations at threat detection firms FireEye and Mandiant, which Google bought in 2022.