Google CEO Sundar Pichai (L), and Epic Games CEO Tim Sweeney.

Reuters

A federal court jury decided late on Monday that Google's Android app store, Google Play, uses anticompetitive practices that hurt consumers and software developers.

The verdict is a significant win for Epic Games and its CEO, Tim Sweeney, who have been fighting mobile app stores and their fees since 2020 — including an unsuccessful challenge to Apple's App Store rules that is currently being appealed to the Supreme Court.

Sweeney attributed the win to revelations during the trial that Google had allegedly deleted or failed to keep records such as chats about its secretive deals with app makers. He also noted that it had been a jury trial, while the Apple case was decided by a judge.

“The brazenness of Google executives violating the law, and then deleting all of the records of violating the law,” Sweeney said. “That was really astonishing. This is very much not a normal court case, you don’t expect a trillion-dollar corporation to operate the way Google operated.”

Epic Games originally sued Google in 2020, alleging that it uses its dominant position as the developer of Android to strike deals with handset makers and collect excess fees from consumers. Google collects between 15% and 30% for all digital purchases made through its storefront. Epic tried to bypass those fees by charging users directly for purchases in the popular game Fortnite; Google then booted the game out of its store, spurring the lawsuit.

The decision could give app makers a bigger revenue share of the digital app market, which is currently dominated by Google and Apple, and is worth about $200 billion per year. The loss for Google could also empower other antitrust-based challenges to the search giant’s business, including a similar case brought by the Department of Justice.

Monday's verdict came after a four-week trial in federal court in California. The jury unanimously found that Google acquired and maintained monopoly power in the Android app distribution market, as well as in the market for in-app billing of digital goods and services.

The result is markedly different from Epic Games' similar effort to change Apple's App Store, in which it lost nine of 10 counts in 2021. Its only win was a judgment suspending a rule restricting developers' ability to email app customers. That ruling is currently being appealed to the Supreme Court.

One major difference was that Epic had a harder time finding documentation from inside Apple. Another is that Google’s Android allows software to be installed from the internet, a process called sideloading, while Apple bars it.

“The big difference between Apple and Google is Apple didn’t write anything down. And because they’re a big vertically integrated monopoly, they don’t do deals with developers and carriers to shut down competition, they just simply block at the technical level,” Sweeney said.

During the Google trial, Epic Games instead focused on whether Google locked up the app store market through deals with handset makers, and whether it scared users away from using Android’s sideloading functionality through security warnings.

It specifically called out secretive revenue-sharing deals with Samsung and Chinese handset makers, which those partners allegedly signed in exchange for supporting the Google Play store on new devices. It also revealed that Google had entered into talks with Epic Games over an investment in the Fortnite maker.

What could come next

U.S. District Judge James Donato will hold hearings in January to determine what changes Google will have to make.

Google might have to alter its Google Play Store rules, including opening up an option for billing and distribution outside of the store. Epic will push for lower fees, alternatives to Google Play, and less scary warnings about installing software from the internet, Sweeney said. He added that Epic Games is not seeking monetary damages.

Sweeney is not optimistic that change will be immediate.

“If Google is obstructing a vertical remedy through appeals and isn’t offering an awesome deal,” Sweeney said, then Epic's games will not return to Google's services.

Google said it will appeal the decision. The company previously reached settlements with consumers, state attorneys general, and Match Group over its app store policies.

“We plan to challenge the verdict,” Wilson White, Google VP for Government Affairs & Public Policy, said in a statement. “Android and Google Play provide more choice and openness than any other major mobile platform. The trial made clear that we compete fiercely with Apple and its App Store, as well as app stores on Android devices and gaming consoles. We will continue to defend the Android business model and remain deeply committed to our users, partners, and the broader Android ecosystem.”

Sweeney does hope that some of Google’s deals revealed during the trial could give its partners leverage in negotiations. On Tuesday, Wells Fargo analysts cited the risk of partners striking harder bargains in exchange for carrying Google’s app store or using its billing system.

However, investors don't seem to be particularly worried that the result of this trial will threaten Google's app business, which could total about $38.5 billion in annual revenue this year, according to an estimate from Wells Fargo. Google stock fell less than 1% in Tuesday trading.

World's first major law for artificial intelligence gets final EU green light


European Union member states on Tuesday approved the world's first major law regulating artificial intelligence, as institutions around the world race to introduce curbs on the technology.

The EU Council said it had given final approval to the AI Act — a groundbreaking piece of regulation that aims to introduce the first comprehensive set of rules for artificial intelligence.

“The adoption of the AI act is a significant milestone for the European Union,” Mathieu Michel, Belgium's secretary of state for digitization, said in a Tuesday statement.

“With the AI act, Europe emphasizes the importance of trust, transparency and accountability when dealing with new technologies while at the same time ensuring this fast-changing technology can flourish and boost European innovation,” Michel added.

The AI Act applies a risk-based approach to artificial intelligence, meaning that different applications of the technology are treated differently, depending on the threats they pose to society.

The law prohibits applications of AI that are considered “unacceptable” in terms of their risk level. These include so-called “social scoring” systems that rank citizens based on aggregation and analysis of their data, predictive policing, and emotion recognition in the workplace and schools.

High-risk AI systems include autonomous vehicles and medical devices, which are evaluated on the risks they pose to the health, safety, and fundamental rights of citizens. The category also covers applications of AI in financial services and education, where there is a risk of bias embedded in AI algorithms.

Tech giants pledge AI safety commitments — including a 'kill switch' if they can't mitigate risks


A slew of major tech companies, including Microsoft, Amazon, and OpenAI, on Tuesday reached a landmark international agreement on artificial intelligence safety at the Seoul AI Safety Summit.

Under the agreement, companies from countries including the U.S., China, Canada, the U.K., France, South Korea, and the United Arab Emirates will make voluntary commitments to ensure the safe development of their most advanced AI models.

Where they have not done so already, AI model makers will each publish safety frameworks laying out how they’ll measure risks of their frontier models, such as examining the risk of misuse of the technology by bad actors.

These frameworks will include “red lines” for the tech firms that define the kinds of risks associated with frontier AI systems which would be considered “intolerable” — these risks include but aren’t limited to automated cyberattacks and the threat of bioweapons.

In those sorts of extreme circumstances, companies say they will implement a “kill switch” that would see them cease development of their AI models if they can’t guarantee mitigation of these risks.

“It’s a world first to have so many leading AI companies from so many different parts of the globe all agreeing to the same commitments on AI safety,” Rishi Sunak, the U.K.’s prime minister, said in a statement Tuesday.

“These commitments ensure the world’s leading AI companies will provide transparency and accountability on their plans to develop safe AI,” he added.

The pact agreed Tuesday expands on a previous set of commitments made by companies involved in the development of generative AI software at the U.K.'s AI Safety Summit in Bletchley Park, England, last November.

The companies have agreed to take input on these thresholds from “trusted actors,” including their home governments as appropriate, before releasing them ahead of the next planned AI summit — the AI Action Summit in France — in early 2025.

The commitments agreed Tuesday only apply to so-called “frontier” models. This term refers to the technology behind generative AI systems like OpenAI’s GPT family of large language models, which powers the popular ChatGPT AI chatbot.

Ever since ChatGPT was first introduced to the world in November 2022, regulators and tech leaders have become increasingly worried about the risks surrounding advanced AI systems capable of generating text and visual content on par with, or better than, humans.


The European Union has sought to clamp down on unfettered AI development with the creation of its AI Act, which was approved by the EU Council on Tuesday.

The U.K., however, hasn't proposed formal laws for AI, instead opting for a “light-touch” approach to AI regulation in which regulators apply existing laws to the technology.

The government recently said it will consider legislating for frontier models at some point in the future, but it has not committed to a timeline for introducing formal laws.

Amazon, Meta back Scale AI in $1 billion funding deal that values firm at $14 billion

Scale AI CEO Alex Wang, left.

Scale AI

Artificial intelligence startup Scale AI said Tuesday that it has raised $1 billion in a Series F funding round that values the enterprise tech company at $13.8 billion — almost double its last reported valuation. The San Francisco-based company, ranked No. 12 on this year’s CNBC Disruptor 50 list, has now raised $1.6 billion to date.

Its latest funding round is being led by Accel, and includes Cisco Investments, DFJ Growth, Intel Capital, ServiceNow Ventures, AMD Ventures, WCM, Amazon, Elad Gil (co-founder of Color Genomics and serial tech investor), and Meta, all of which are new investors in the company.

Existing investors including Y Combinator, Nat Friedman, Index Ventures, Founders Fund, Coatue, Thrive Capital, Spark Capital, Nvidia, Tiger Global Management, Greenoaks, and Wellington Management also participated in the round.

Scale AI plays a key role in the rise of generative artificial intelligence and large language models: data — whether text, images, video, or voice recordings — must be labeled correctly before AI systems can digest and use it effectively. The company has evolved from labeling data used to train models that powered autonomous driving to helping improve and fine-tune the underlying data for nearly any organization looking to implement AI, powering some of the most advanced models in use.

“Our calling is to build the data foundry for AI, and with today’s funding, we’re moving into the next phase of that journey – accelerating the abundance of frontier data that will pave our road to AGI,” founder and CEO Alexandr Wang said in a statement announcing the news.


Scale AI is also increasingly working with the public sector.

In August, the company was awarded a contract with the Department of Defense Chief Digital and Artificial Intelligence Office, which the company said will help boost the DoD’s efforts to advance AI capabilities for the entire military, spanning projects across the Army, Marine Corps, Navy, Air Force, Space Force and Coast Guard.

In May, Scale AI launched Donovan, an AI-powered decision-making platform that is the first LLM deployed to a U.S. government classified network.

Wang spoke at December’s AI Insight Forum in Washington, D.C., about the role Scale AI is playing in helping support the U.S. and its allies.

“The race for AI global leadership is well underway, and our nation’s ability to efficiently adopt and implement AI will define the future of warfare,” he said. “I firmly believe that the United States has the ability to lead the world in AI adoption to support U.S. national security. The world is not slowing down, and we must rise to the occasion.”

The company is also looking to play a role in AI development globally. It announced in May that it will open a London office as its European headquarters and will look to support and partner with the U.K. government on its AI initiatives.

