Elon Musk, the multi-billionaire owner of Tesla and X, is suing artificial intelligence company OpenAI, accusing the firm of prioritising profit over developing AI for the public good.
Mr Musk is bringing the suit against OpenAI, which he co-founded, and its chief executive, Sam Altman, for breaching a contract by reneging on its pledge to develop AI carefully and make the tech widely available.
The company behind the ground-breaking generative AI chatbot, ChatGPT, has “been transformed into a closed-source de facto subsidiary of the largest technology company, Microsoft”, a court filing said.
The court action is the latest in a series of challenges to Mr Altman, who was ousted from his position at OpenAI by the company board and briefly went to work at Microsoft, OpenAI's biggest shareholder, before being reinstated.
The AI giant was originally founded as a not-for-profit company but has grown to have commercial interests, which has caused tension between board members and founders.
By embracing a close relationship with Microsoft, OpenAI and its top executives have set that pact “aflame” and are “perverting” the company’s mission, Mr Musk alleges in the lawsuit.
“Under its new board, it is not just developing but is actually refining an AGI [artificial general intelligence] to maximize profits for Microsoft, rather than for the benefit of humanity”, the filing said.
A key part of OpenAI’s mission to benefit humanity, the court filing said, was to make the company software open source and share it, but this has not happened.
Instead, the company operates on a for-profit model.
Mr Musk has his own AI company, called xAI, and has said OpenAI is not focused enough on the potential harms of AI.
As well as alleging breach of contract, Mr Musk's claim said OpenAI is violating fiduciary duty and is engaged in unfair business practices. Mr Musk is seeking a jury trial.
OpenAI and Microsoft have been contacted for comment.
The UK government is being urged to take even stronger action to tackle the ongoing crisis of families unable to afford baby formula milk.
The prime minister backed limited reforms to the market to help parents save money but will not yet support more radical changes.
Sir Keir Starmer confirmed support for better public health messaging to inform parents that cheaper brands are nutritionally equivalent to the most expensive ones.
A ban on spending store loyalty points on baby formula will also be lifted.
They were among recommendations made by the Competition and Markets Authority which investigated the baby formula industry and described the price rises in recent years as unjustifiable.
In the House of Commons the prime minister said: “For too long parents have been pushed into spending more on infant formula.
“They were told they’re paying for better quality and left hundreds of pounds out of pocket.
“I can announce today that we’re changing that. We will take action to give parents and carers the confidence to access infant formula at more affordable prices, with clearer guidance for retailers on helping new parents use loyalty points and vouchers together.”
It comes two-and-a-half years after a Sky News investigation revealed the extreme measures families were taking to feed their babies.
Parents described how they had resorted to stealing to feed their infants; some were watering down formula milk or replacing it with condensed milk.
The British Pregnancy Advisory Service described the situation in 2023 as a “national scandal”.
Campaigners told Sky News the UK government needed to go further to address the crisis.
Co-founder of Feed UK Erin Williams told Sky News: "It is progress; they promised to look at this enormous nationwide problem and they have.
“At the moment women are still not routinely getting important information before giving birth – this should be given proactively to everybody and that will be a big win.
“The prime minister though needs to be tougher on the baby formula companies.
“Their marketing claims, their unjustified pricing – it’s stacked against families who just need to feed their babies safely.”
The UK government stopped short of accepting all of the recommendations made by the CMA.
More radical ideas such as a price cap on baby formula are not being considered.
Charities have also told Sky News the situations some families find themselves in have not eased.
Founder of the Hartlepool Baby Bank, Emilie De Bruijn, told Sky News the demand they see from desperate families is “constant and unmanageable”.
She said: “Parents are really feeling the pinch right now, and demands on baby banks are rising and it can feel quite relentless.
“We are pleased to see the extension of the National Breastfeeding Helpline alongside measures such as allowing parents to use points and vouchers.
“It is important that parents are supported to feed their children in whatever way they want and we hope that steps will continue to be taken to reduce the cost of formula and increase understanding that all brands are nutritionally the same.”
An engineer who took aerospace giant Leonardo UK to an employment tribunal for having to share women’s toilets with transgender colleagues has lost a discrimination claim.
Maria Kelly alleged harassment related to sex, direct sex discrimination and indirect sex discrimination.
Ms Kelly took action after lodging a formal grievance with the company.
The tribunal was heard in Edinburgh in October, but all of her claims have now been dismissed by employment judge Michelle Sutherland.
Ms Kelly said she believes the outcome “fundamentally misunderstands both the law and my case”, as she announced plans to appeal.
In a written judgment published on Wednesday, Ms Sutherland said Leonardo UK’s position was that “one out of 9,500 employees raised a concern about the impact of the policy despite multiple means to do so”.
She found there was no “disadvantage” due to the policy.
Ms Sutherland added: “Any fear or privacy impact could be addressed by affected female staff making recourse to the single occupancy facilities.
“Any effect on risk of assault arising from 0.5% of men using the women’s toilets instead of the men’s toilets would not have changed the overall risk profile across toilet facilities generally.
“In the circumstances of this case, the toilet access policy was in the alternative a proportionate means of achieving a legitimate aim.”
The case followed the UK Supreme Court judgment in April which ruled the terms “woman” and “sex” in the 2010 Equality Act refer to a biological woman and biological sex.
Ms Kelly, people and capability lead for the firm, had told the tribunal she began using a “secret” toilet at her workplace after encountering a transgender colleague in a female bathroom in March 2023.
She said she had first become aware of a transgender person using the female toilets in 2019 but did not raise the issue with the company at the time as she feared being labelled “transphobic” or being put on the “naughty list”.
Ms Kelly said: “I am of course disappointed by the judgment, which I believe fundamentally misunderstands both the law and my case.
“I intend to appeal, and I will ask the EAT (Employment Appeal Tribunal) to consider expediting my appeal as the decision risks further confounding the already widespread misunderstanding and defiance of the Supreme Court’s judgment in For Women Scotland.”
Maya Forstater, chief executive of charity Sex Matters, said: “This judgment interprets the law as transactivists would wish it to be, and is incompatible with the Supreme Court ruling in For Women Scotland in several places.
“It is incredible that even after the highest court in the land has ruled that the law recognises men and women in terms of biological sex, there are lower courts still trying to see the world in terms of gender identity.”
Leonardo UK acknowledged the tribunal’s judgment.
A spokesperson for the firm added: “We recognise that the process has been demanding for everyone involved and we appreciate the professionalism shown by colleagues who supported the proceedings.
“Our focus now is to ensure that workplace conduct remains respectful and that our facilities’ policies continue to meet legal standards.
“We will review the forthcoming Equality and Human Rights Commission guidance when it is published and will make any adjustments that are required.
“Leonardo remains a supportive and inclusive environment for all employees.”
Hundreds of UK online safety workers at TikTok have already signed agreements to leave the company, whistleblowers have told Sky News, despite the firm stressing to MPs that the cuts were “still proposals only”.
More than 400 online safety workers have agreed to leave the social media company, with only five left in consultation, Sky News understands.
“[The workers have] signed a mutual termination agreement, a legally binding contract,” said John Chadfield, national officer for the Communication Workers’ Union.
“They’ve handed laptops in, they’ve handed passes in, they’ve been told not to come to the office. That’s no longer a proposal, that’s a foregone conclusion. That’s a plan that’s been executed.”
Image: Moderators gathered to protest the redundancies
“Everyone in Trust and Safety” was emailed, said Lucy, a moderator speaking on condition of anonymity for legal reasons.
After a mandatory 45-day consultation period, the teams were then sent “mutual termination agreements” to sign by 31 October.
Sky News has seen correspondence from TikTok to the employees telling them to sign by that date.
“We had to sign it before the 31st if we wanted the better deal,” said Lucy, who had worked for TikTok for years.
“If we signed it afterwards, that diminished the benefits that we get.”
Image: Three former moderators at TikTok have spoken to Sky News on camera
Despite hundreds of moderators signing the termination contracts by 31 October, Ali Law, TikTok’s director of public policy and government affairs for northern Europe, said to MPs in a letter on 7 November: “It is important to stress the cuts remain proposals only.”
“We continue to engage directly with potentially affected team members,” he said in a letter to Dame Chi Onwurah, chair of the science, innovation and technology committee.
After signing the termination contracts, the employees say they were asked to hand in their laptops and had access to their work systems revoked. They were put on gardening leave until 30 December.
“We really felt like we were doing something good,” said Saskia, a moderator also speaking under anonymity.
“You felt like you had a purpose, and now, you’re the first one to get let go.”
Image: TikTok moderators and union workers protested outside the company’s London headquarters in September
A TikTok worker not affected by the job cuts confirmed to Sky News that all of the affected Trust and Safety employees “are now logged out of the system”.
“Workers and the wider public are rightly concerned about these job cuts that impact safety online,” said the TUC’s general secretary, Paul Nowak.
“But TikTok seem to be obscuring the reality of job cuts to MPs. TikTok need to come clean and clarify how many vital content moderators’ roles have gone.
“The select committee must do everything to get to the bottom of the social media giant’s claims, the wider issues of AI moderation, and ensure that other workers in the UK don’t lose their jobs to untested, unsafe and unregulated AI systems.”
Image: Moderators and union representatives outside TikTok’s offices
When asked if the cuts were in fact a plan that had already been executed, Mr Law said there was “limited amounts” he could directly comment on.
TikTok told us: “It is entirely right that we follow UK employment law, including when consultations remained ongoing for some employees and roles were still under proposal for removal.
“We have been open and transparent about the changes that were proposed, including in detailed public letters to the committee, and it is disingenuous to suggest otherwise.”
The three whistleblowers Sky News spoke to said they were concerned TikTok users would be put at risk by the cuts.
The company said it will increase the role of AI in its moderation, while maintaining some human safety workers, but one whistleblower said she didn’t think the AI was “ready”.
“People are getting new ideas and new trends are coming. AI cannot get this,” said Anna, a former moderator.
“Even now, with the things that it’s supposed to be ready to do, I don’t think it’s ready.”
Lucy also said she thought the cuts would put users at risk.
“There are a lot of nuances in the language. AI cannot understand all the nuances,” she said.
"AI cannot differentiate an ironic comment from a real threat or bullying, or a lot of things that have to do with user safety, mainly of children and teenagers."
TikTok has been asked by MPs for evidence that its safety rates – which are currently some of the best in the industry – will not worsen after these cuts.
The select committee says it has not produced that evidence, although TikTok insists safety will improve.
“[In its letter to MPs] TikTok refers to evidence showing that their proposed staffing cuts and changes will improve content moderation and fact-checking – but at no point do they present any credible data on this to us,” said Dame Chi earlier this month.
“It’s alarming that they aren’t offering us transparency over this information. Without it, how can we have any confidence whether these changes will safeguard users?”
Image: Dame Chi Onwurah speaks at the House of Commons. File pic: Reuters
TikTok’s use of AI in moderation
In an exclusive interview with Sky News earlier this month, Mr Law said the new moderation model would mean TikTok can “approach moderation with a higher level of speed and consistency”.
He said: “Because, when you’re doing this from a human moderation perspective, there are trade-offs.
“If you want something to be as accurate as possible, you need to give the human moderator as much time as possible to make the right decision, and so you’re trading off speed and accuracy in a way that might prove harmful to people in terms of being able to see that content.
“You don’t have that with the deployment of AI.”
As well as increasing the role of AI in moderation, TikTok is reportedly offshoring jobs to agencies in other countries.
Sky News has spoken to multiple workers who confirmed they’d seen their jobs being advertised in other countries through third-party agencies, and has independently seen moderator job adverts in places like Lisbon.
Image: John Chadfield, national officer for technology at the Communication Workers Union
“AI is a fantastic fig leaf. It’s a fig leaf for greed,” said Mr Chadfield. “In TikTok’s case, there’s a fundamental wish to not be an employer of a significant amount of staff.
“As the platform has grown, as it has grown to hundreds of millions of users, they have realised that the overhead to maintain a professional trust and safety division means hundreds of thousands of staff employed by TikTok.
“But they don’t want that. They see themselves as, you know, ‘We want specialists in the roles employed directly by TikTok and we’ll offshore and outsource the rest’.”
Mr Law told Sky News that TikTok is always focused “on outcomes”.
He said: “Our focus is on making sure the platform is as safe as possible.
“And we will make deployments of the most advanced technology in order to achieve that, working with the many thousands of trust and safety professionals that we will have at TikTok around the world on an ongoing basis.”
Asked specifically about the safety concerns raised by the whistleblowers, TikTok said: “As we have laid out in detail, this reorganisation of our global operating model for Trust and Safety will ensure we maximize effectiveness and speed in our moderation processes.
“We will continue to use a combination of technology and human teams to keep our users safe, and today over 85% of the content removed for violating our rules is identified and taken down by automated technologies.”
*All moderator names have been changed for legal reasons.