Google terminated 28 employees Wednesday, according to an internal memo viewed by CNBC, after a series of protests against labor conditions and the company’s contract to provide the Israeli government and military with cloud computing and artificial intelligence services.

The news comes one day after nine Google workers were arrested on trespassing charges Tuesday night after staging a sit-in at the company’s offices in New York and Sunnyvale, California, including a protest in Google Cloud CEO Thomas Kurian’s office.

Some of the arrested workers in New York and Sunnyvale, who spoke with CNBC earlier on Wednesday, said that during the protest they were locked out of their work accounts and offices, placed on administrative leave and told to wait to return to work until being contacted by HR.

On Wednesday evening, a memo sent by Chris Rackow, Google’s vice president of global security, told Googlers that “following investigation, today we terminated the employment of twenty-eight employees found to be involved. We will continue to investigate and take action as needed.”

The arrests, which were livestreamed on Twitch by participants, follow rallies outside Google offices in New York, Sunnyvale and Seattle, which attracted hundreds of attendees, according to workers involved. The protests were led by the “No Tech for Apartheid” organization, focused on Project Nimbus — Google and Amazon’s joint $1.2 billion contract to provide the Israeli government and military with cloud computing services, including AI tools, data centers and other cloud infrastructure.

“This evening, Google indiscriminately fired over two dozen workers, including those among us who did not directly participate in yesterday’s historic, bicoastal 10-hour sit-in protests,” No Tech for Apartheid said in a statement, adding, “In the three years that we have been organizing against Project Nimbus, we have yet to hear from a single executive about our concerns. Google workers have the right to peacefully protest about terms and conditions of our labor. These firings were clearly retaliatory.”

Protesters in Sunnyvale sat in Kurian’s office for more than nine hours until their arrests, writing demands on Kurian’s whiteboard and wearing shirts that read “Googler against genocide.” In New York, protesters sat in a three-floor common space. Five workers from Sunnyvale and four from New York were arrested.

“On a personal level, I am opposed to Google taking any military contracts — no matter which government they’re with or what exactly the contract is about,” Cheyne Anderson, a Google Cloud software engineer based in Washington, told CNBC earlier on Wednesday. “And I hold that opinion because Google is an international company and no matter which military it’s with, there are always going to be people on the receiving end… represented in Google’s employee base and also our user base.” Anderson had flown to Sunnyvale for the protest in Kurian’s office and was one of the workers arrested Tuesday.

“Google Cloud supports numerous governments around the world in countries where we operate, including the Israeli government, with our generally available cloud computing services,” a Google spokesperson told CNBC on Wednesday evening, adding, “This work is not directed at highly sensitive, classified, or military workloads relevant to weapons or intelligence services.”

The demonstrations reflect mounting pressure on Google from workers who oppose military use of its AI and cloud technology. Last month, Google Cloud engineer Eddie Hatfield interrupted a keynote speech by the managing director of Google’s Israel business, saying, “I refuse to build technology that powers genocide.” Hatfield was subsequently fired. That same week, an internal Google employee message board was shut down after staffers posted comments about the company’s Israeli military contracts. A spokesperson at the time described the posts as “divisive content that is disruptive to our workplace.”

On Oct. 7, Hamas carried out deadly attacks on Israel, killing 1,200 people and taking more than 240 hostages. The following day, Israel declared war and began a siege of Gaza, cutting off access to power, food, water and fuel. At least 33,899 people have been killed in the Gaza Strip since that date, the enclave’s Health Ministry said Wednesday in a statement on Telegram. In January at the U.N.’s top court, Israel rejected genocide charges brought by South Africa.

The Israeli Ministry of Defense reportedly sought consulting services from Google to expand its access to Google Cloud services. Google Photos is one platform used by the Israeli government to conduct surveillance in Gaza, according to The New York Times.

“I think what happened yesterday is evidence that Google’s attempts to suppress all of the voices of opposition to this contract are not only not working but actually having the opposite effect,” Ariel Koren, a former Google employee who resigned in 2022 after leading efforts to oppose the Project Nimbus contract, told CNBC earlier on Wednesday. “It’s really just creating more agitation, more anger and more commitment.”

The New York sit-in started at noon ET and ended around 9:30 p.m. ET. Security asked workers to remove their banner, which spanned two floors, about an hour into the protest, according to Hasan Ibraheem, a Google software engineer based in New York City and one of the arrested workers.

“I realized, ‘Oh, the place that I work at is very complicit and aiding in this genocide — I have a responsibility to act against it,'” Ibraheem told CNBC earlier on Wednesday. Ibraheem added, “The fact that I am receiving money from Google and Israel is paying Google — I am receiving part of that money, and that weighed very heavily on me.”

The New York workers were released from the police station after about four hours.

The workers were also protesting their labor conditions — namely “that the company stop the harassment, intimidation, bullying, silencing, and censorship of Palestinian, Arab, Muslim Googlers — and that the company address the health and safety crisis workers, especially those in Google Cloud, are facing due to the potential impacts of their work,” according to a release by the campaign.

“A small number of employee protesters entered and disrupted a few of our locations,” a Google spokesperson told CNBC Wednesday evening. “Physically impeding other employees’ work and preventing them from accessing our facilities is a clear violation of our policies, and completely unacceptable behavior. After refusing multiple requests to leave the premises, law enforcement was engaged to remove them to ensure office safety. We have so far concluded individual investigations that resulted in the termination of employment for 28 employees, and will continue to investigate and take action as needed.”

Read the full memo below.

Googlers,

You may have seen reports of protests at some of our offices yesterday. Unfortunately, a number of employees brought the event into our buildings in New York and Sunnyvale. They took over office spaces, defaced our property, and physically impeded the work of other Googlers. Their behavior was unacceptable, extremely disruptive, and made co-workers feel threatened. We placed employees involved under investigation and cut their access to our systems. Those who refused to leave were arrested by law enforcement and removed from our offices.

Following investigation, today we terminated the employment of twenty-eight employees found to be involved. We will continue to investigate and take action as needed.

Behavior like this has no place in our workplace and we will not tolerate it. It clearly violates multiple policies that all employees must adhere to — including our Code of Conduct and Policy on Harassment, Discrimination, Retaliation, Standards of Conduct, and Workplace Concerns.

We are a place of business and every Googler is expected to read our policies and apply them to how they conduct themselves and communicate in our workplace. The overwhelming majority of our employees do the right thing. If you’re one of the few who are tempted to think we’re going to overlook conduct that violates our policies, think again. The company takes this extremely seriously, and we will continue to apply our longstanding policies to take action against disruptive behavior — up to and including termination.

You should expect to hear more from leaders about standards of behavior and discourse in the workplace.

Chris

World’s first major law for artificial intelligence gets final EU green light
European Union member states on Tuesday approved the world’s first major law regulating artificial intelligence, as institutions around the world race to introduce curbs for the technology.

The EU Council said it gave final approval to the AI Act — a groundbreaking piece of regulation that aims to introduce the first comprehensive set of rules for artificial intelligence.

“The adoption of the AI act is a significant milestone for the European Union,” Mathieu Michel, Belgium’s secretary of state for digitization, said in a Tuesday statement.

“With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel added.

The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the threats they pose to society.

The law prohibits applications of AI whose risk level is deemed “unacceptable.” These include so-called “social scoring” systems that rank citizens based on aggregation and analysis of their data, predictive policing, and emotion recognition in the workplace and schools.

High-risk AI systems include autonomous vehicles and medical devices, which are evaluated on the risks they pose to the health, safety and fundamental rights of citizens. They also include applications of AI in financial services and education, where there is a risk of bias embedded in AI algorithms.

Tech giants pledge AI safety commitments — including a ‘kill switch’ if they can’t mitigate risks
A slew of major tech companies, including Microsoft, Amazon and OpenAI, on Tuesday reached a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit.

Under the agreement, companies from countries including the U.S., China, Canada, the U.K., France, South Korea and the United Arab Emirates will make voluntary commitments to ensure the safe development of their most advanced AI models.

Where they have not done so already, AI model makers will each publish safety frameworks laying out how they’ll measure risks of their frontier models, such as examining the risk of misuse of the technology by bad actors.

These frameworks will include “red lines” defining the kinds of risks associated with frontier AI systems that would be considered “intolerable” — these include, but aren’t limited to, automated cyberattacks and the threat of bioweapons.

In those sorts of extreme circumstances, companies say they will implement a “kill switch” that would see them cease development of their AI models if they can’t guarantee mitigation of these risks.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Rishi Sunak, the U.K.’s prime minister, said in a statement Tuesday.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” he added.

The pact agreed Tuesday expands on a previous set of commitments made by companies involved in the development of generative AI software at the U.K.’s AI Safety Summit in Bletchley Park, England, last November.

The companies have agreed to take input on these thresholds from “trusted actors,” including their home governments as appropriate, before releasing them ahead of the next planned AI summit — the AI Action Summit in France — in early 2025.

The commitments agreed Tuesday only apply to so-called “frontier” models. This term refers to the technology behind generative AI systems like OpenAI’s GPT family of large language models, which powers the popular ChatGPT AI chatbot.

Ever since ChatGPT was first introduced to the world in November 2022, regulators and tech leaders have become increasingly worried about the risks surrounding advanced AI systems capable of generating text and visual content on par with, or better than, humans.


The European Union has sought to clamp down on unfettered AI development with the creation of its AI Act, which was approved by the EU Council on Tuesday.

The U.K., however, hasn’t proposed formal laws for AI, instead opting for a “light-touch” approach in which regulators apply existing laws to the technology.

The government recently said it will consider legislating for frontier models at some point in the future, but has not committed to a timeline for introducing formal laws.

Amazon, Meta back Scale AI in $1 billion funding deal that values firm at $14 billion
Artificial intelligence startup Scale AI said Tuesday that it has raised $1 billion in a Series F funding round that values the enterprise tech company at $13.8 billion — almost double its last reported valuation. The San Francisco-based company, ranked No. 12 on this year’s CNBC Disruptor 50 list, has now raised $1.6 billion to date.

Its latest funding round is being led by Accel, and includes Cisco Investments, DFJ Growth, Intel Capital, ServiceNow Ventures, AMD Ventures, WCM, Amazon, Elad Gil (co-founder of Color Genomics and serial tech investor), and Meta, all of which are new investors in the company.

Existing investors including Y Combinator, Nat Friedman, Index Ventures, Founders Fund, Coatue, Thrive Capital, Spark Capital, Nvidia, Tiger Global Management, Greenoaks, and Wellington Management also participated in the round.

Scale AI plays a key role in the rise of generative artificial intelligence and large language models: data — whether text, images, video or voice recordings — must be labeled correctly before it can be used effectively by AI technology. Scale AI has evolved from labeling data used to train models that powered autonomous driving to helping improve and fine-tune the underlying data for nearly any organization looking to implement AI, powering some of the most advanced models in use.

“Our calling is to build the data foundry for AI, and with today’s funding, we’re moving into the next phase of that journey – accelerating the abundance of frontier data that will pave our road to AGI,” founder and CEO Alexandr Wang said in a statement announcing the news.


Scale AI is also increasingly working with the public sector.

In August, the company was awarded a contract with the Department of Defense Chief Digital and Artificial Intelligence Office, which the company said will help boost the DoD’s efforts to advance AI capabilities for the entire military, spanning projects across the Army, Marine Corps, Navy, Air Force, Space Force and Coast Guard.

In May, Scale AI launched Donovan, an AI-powered decision-making platform that is the first LLM deployed to a U.S. government classified network.

Wang spoke at December’s AI Insight Forum in Washington, D.C., about the role Scale AI is playing in helping support the U.S. and its allies.

“The race for AI global leadership is well underway, and our nation’s ability to efficiently adopt and implement AI will define the future of warfare,” he said. “I firmly believe that the United States has the ability to lead the world in AI adoption to support U.S. national security. The world is not slowing down, and we must rise to the occasion.”

The company is also looking to play a role in AI development globally. It announced in May that it will open a London office as its European headquarters and will look to support and partner with the U.K. government on its AI initiatives.

