The logo of generative AI chatbot ChatGPT, which is owned by Microsoft-backed company OpenAI.
CFOTO | Future Publishing via Getty Images
Artificial intelligence might be driving concerns over people’s job security — but it is also creating a new wave of jobs focused solely on reviewing the inputs and outputs of next-generation AI models.
Since Nov. 2022, global business leaders, workers and academics alike have been gripped by fears that the emergence of generative AI will disrupt vast numbers of professional jobs.
Generative AI, which enables AI algorithms to generate humanlike, realistic text and images in response to textual prompts, is trained on vast quantities of data.
It can produce sophisticated prose and even company presentations that approach the quality of work from academically trained professionals.
That has, understandably, generated fears that jobs may be displaced by AI.
Goldman Sachs estimates that as many as 300 million jobs could be taken over by AI, including roles in office and administrative support, legal work, architecture and engineering, life, physical and social sciences, and financial and business operations.
But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans — and this is creating some new paid careers and side hustles.
Getting paid to review AI
Prolific, a company that helps connect AI developers with research participants, has been directly involved in paying people to review AI-generated material.
The company pays participants to assess the quality of AI-generated outputs. Prolific recommends that developers pay participants at least $12 an hour, while minimum pay is set at $8 an hour.
The human reviewers are guided by Prolific’s customers, which include Meta, Google, the University of Oxford and University College London. These customers walk reviewers through the process, teaching them about the potentially inaccurate or otherwise harmful material they may come across.
Reviewers must provide consent before taking part in the research.
One research participant CNBC spoke to said he has used Prolific on a number of occasions to give his verdict on the quality of AI models.
The research participant, who preferred to remain anonymous due to privacy concerns, said that he often had to step in to provide feedback on where the AI model went wrong and needed correcting or amending to ensure it didn’t produce unsavory responses.
He came across a number of instances where certain AI models produced problematic content; on one occasion, he was even confronted with an AI model trying to convince him to buy drugs.
He was shocked by the suggestion, though the purpose of the study was to test the boundaries of that particular AI and provide it with feedback to ensure it doesn’t cause harm in the future.
The new ‘AI workers’
Phelim Bradley, CEO of Prolific, said that there are plenty of new kinds of “AI workers” who are playing a key role in informing the data that goes into AI models like ChatGPT — and what comes out.
As governments assess how to regulate AI, Bradley said that it’s “important that enough focus is given to topics including the fair and ethical treatment of AI workers such as data annotators, the sourcing and transparency of data used to build AI models, as well as the dangers of bias creeping into these systems due to the way in which they are being trained.”
“If we can get the approach right in these areas, it will go a long way to ensuring the best and most ethical foundations for the AI-enabled applications of the future.”
In July, Prolific raised $32 million in funding from investors including Partech and Oxford Science Enterprises.
The likes of Google, Microsoft and Meta have been battling to dominate generative AI, an emerging field that has attracted commercial interest primarily thanks to its frequently touted productivity gains.
However, this has opened a can of worms for regulators and AI ethicists, who are concerned there is a lack of transparency surrounding how these models reach decisions on the content they produce, and that more needs to be done to ensure that AI is serving human interests — not the other way around.
Hume, a company that uses AI to read human emotions from verbal, facial and vocal expressions, uses Prolific to test the quality of its AI models. The company recruits people via Prolific to take part in surveys rating whether an AI-generated response was good or bad.
“Increasingly, the emphasis of researchers in these large companies and labs is shifting towards alignment with human preferences and safety,” Alan Cowen, Hume’s co-founder and CEO, told CNBC.
“There’s more of an emphasis on being able to monitor things in these applications. I think we’re just seeing the very beginning of this technology being released,” he added.
“It makes sense to expect that some of the things that have long been pursued in AI — having personalised tutors and digital assistants; models that can read legal documents and revise them — these are actually coming to fruition.”
Another role placing humans at the core of AI development is that of the prompt engineer: a worker who figures out which text-based prompts work best to feed into a generative AI model to elicit the best possible responses.
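As a rough, purely illustrative sketch of that trial-and-error process, the snippet below compares a handful of candidate prompts and keeps the one a reviewer rates highest. The `call_model` and `human_rating` functions are hypothetical placeholders, not any particular vendor’s API.

```python
# Purely illustrative: compare candidate prompts and keep the one a human
# reviewer rates highest. call_model() and human_rating() are hypothetical
# placeholders, not any particular vendor's API.

PROMPT_VARIANTS = [
    "Summarize the attached contract in plain English.",
    "You are a paralegal. List the three biggest risks in the attached contract.",
    "Explain this contract to a non-lawyer in five bullet points.",
]

def call_model(prompt: str, document: str) -> str:
    """Stand-in for a real generative AI API call."""
    return f"(model output for: {prompt!r})"

def human_rating(response: str) -> int:
    """Stand-in for a reviewer's 1-5 quality score; a person would assign this."""
    return len(response) % 5 + 1

def best_prompt(document: str) -> str:
    # Generate one response per prompt, score each, and return the top-rated prompt.
    scored = [(human_rating(call_model(p, document)), p) for p in PROMPT_VARIANTS]
    return max(scored)[1]

if __name__ == "__main__":
    print(best_prompt("Example contract text..."))
```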
According to LinkedIn data released last week, there’s been a rush specifically toward jobs mentioning AI.
Job postings on LinkedIn that mention either AI or generative AI more than doubled globally between July 2021 and July 2023, according to the jobs and networking platform.
Reinforcement learning
Meanwhile, companies are also using AI to automate reviews of regulatory documentation and legal paperwork — but with human oversight.
Firms often have to scan through huge amounts of paperwork to vet potential partners and assess whether or not they can expand into certain territories.
Going through all of this paperwork can be a tedious process which workers don’t necessarily want to take on — so the ability to pass it on to an AI model becomes attractive. But, according to researchers, it still requires a human touch.
Mesh AI, a digital transformation-focused consulting firm, says that human feedback can help AI models learn from the mistakes they make through trial and error.
“With this approach organizations can automate analysis and tracking of their regulatory commitments,” Michael Chalmers, CEO at Mesh AI, told CNBC via email.
Small and medium-sized enterprises “can shift their focus from mundane document analysis to approving the outputs generated from said AI models and further improving them by applying reinforcement learning from human feedback.”
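As a rough sketch of what “reinforcement learning from human feedback” means in this context, the toy example below collects reviewer ratings on candidate model outputs and keeps only the highly rated ones as material for the next training round. All names and logic are illustrative assumptions, not Mesh AI’s actual pipeline.

```python
# Toy illustration of a human-feedback loop: reviewers score candidate model
# outputs, and only highly rated ones feed the next training round.
# Names and logic are illustrative assumptions, not Mesh AI's actual pipeline.

from dataclasses import dataclass

@dataclass
class Feedback:
    document: str
    output: str
    rating: int  # 1 (bad) to 5 (good), assigned by a human reviewer

def generate_candidates(document: str) -> list:
    """Stand-in for the model producing several draft analyses of a document."""
    return [f"draft {i} analysis of: {document[:40]}" for i in range(3)]

def collect_feedback(document: str, rate) -> list:
    # 'rate' represents whatever survey or UI the human reviewer uses to score outputs.
    return [Feedback(document, out, rate(out)) for out in generate_candidates(document)]

def preferred_examples(feedback: list, threshold: int = 4) -> list:
    # Keep only highly rated outputs as examples for further fine-tuning.
    return [f for f in feedback if f.rating >= threshold]

if __name__ == "__main__":
    fb = collect_feedback("Supplier agreement, section 4: liability...", rate=lambda out: 4)
    print(len(preferred_examples(fb)), "examples kept for the next training round")
```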
CEO of Supermicro Charles Liang speaks during the Reuters NEXT conference in New York City, U.S., December 10, 2024.
Mike Segar | Reuters
PARIS — Super Micro plans to increase its investment in Europe, including ramping up manufacturing of its AI servers in the region, CEO Charles Liang told CNBC in an interview that aired on Wednesday.
The company sells servers packed with Nvidia chips, which are key for training and deploying huge AI models. It has manufacturing facilities in the Netherlands but could expand to other locations.
“But because the demand in Europe is growing very fast, so I already decided, indeed, [there’s] already a plan to invest more in Europe, including manufacturing,” Liang told CNBC at the Raise Summit in Paris, France.
“The demand is global, and the demand will continue to improve in [the] next many years,” Liang added.
Liang’s comments come less than a month after Nvidia CEO Jensen Huang visited various parts of Europe, signing infrastructure deals and urging the region to ramp up its computing capacity.
Growth to be ‘strong’
Super Micro rode the growth wave after OpenAI’s ChatGPT boom boosted demand for Nvidia’s chips, which underpin big AI models. The server maker’s stock hit a record high in March 2024, but it has since fallen around 60% from that all-time high amid concerns about the company’s accounting and financial reporting. The company in February filed its delayed financial report for its 2024 fiscal year, assuaging those fears.
In May, the company reported weaker-than-expected guidance for the current quarter, raising concerns about demand for its product.
However, Liang dismissed those fears. “Our growth rate continues to be strong, because we continue to grow our fundamental technology, and we [are] also expanding our business scope,” Liang said.
“So the room … to grow will be still very tremendous, very big.”
Jeff Williams, chief operating officer of Apple Inc., during the Apple Worldwide Developers Conference (WWDC) at Apple Park campus in Cupertino, California, US, on Monday, June 9, 2025.
David Paul Morris | Bloomberg | Getty Images
Apple said on Tuesday that Chief Operating Officer Jeff Williams, a 27-year company veteran, will be retiring later this year.
Current operations leader Sabih Khan will take over much of the COO role later this month, Apple said in a press release. For his remaining time with the company, Williams will continue to head up Apple’s design team, Apple Watch, and health initiatives, reporting to CEO Tim Cook.
Williams becomes the latest longtime Apple executive to step down as key employees who were active in the company’s hyper-growth years reach retirement age. Williams, 62, previously headed Apple’s formidable operations division, which is in charge of manufacturing millions of complicated devices like iPhones while keeping costs down.
He also led important teams inside Apple, including the company’s fabled industrial design team, after longtime leader Jony Ive retired in 2019. When Williams retires, Apple’s design team will report to CEO Tim Cook, Apple said.
“He’s helped to create one of the most respected global supply chains in the world; launched Apple Watch and overseen its development; architected Apple’s health strategy; and led our world class team of designers with great wisdom, heart, and dedication,” Cook said in the statement.
Williams said he plans to spend more time with friends and family.
“June marked my 27th anniversary with Apple, and my 40th in the industry,” Williams said in the release.
Williams is leaving Apple at a time when its famous supply chain is under significant pressure, as the U.S. imposes tariffs on many of the countries where Apple sources its devices, and White House officials publicly pressure Apple to move more production to the U.S.
Khan was added to Apple’s executive team in 2019, taking an executive vice president title. Apple said on Tuesday that he will lead supply chain, product quality, planning, procurement, and fulfillment at Apple.
The operations leader joined Apple’s procurement group in 1995, and before that worked as an engineer and technical leader at GE Plastics. He has a bachelor’s degree from Tufts University and a master’s degree in mechanical engineering from Rensselaer Polytechnic Institute in upstate New York.
Khan has worked closely with Cook. Once, during a meeting when Cook said that a manufacturing problem was “really bad,” Khan stood up and drove to the airport, and immediately booked a flight to China to fix it, according to an anecdote published in Fortune.
Elon Musk, chief executive officer of SpaceX and Tesla, attends the Viva Technology conference at the Porte de Versailles exhibition center in Paris, June 16, 2023.
Gonzalo Fuentes | Reuters
Tesla CEO Elon Musk told Wedbush Securities’ Dan Ives to “Shut up” on Tuesday after the analyst offered three recommendations to the electric vehicle company’s board in a post on X.
Ives has been one of the most bullish Tesla observers on Wall Street. With a $500 price target on the stock, he has the highest projection of any analyst tracked by FactSet.
But on Tuesday, Ives took to X with critical remarks about Musk’s political activity after the world’s richest person said over the weekend that he was creating a new political party called the America Party to challenge Republican candidates who voted for the spending bill that was backed by President Donald Trump.
Ives’ post followed a nearly 7% slide in Tesla’s stock Monday, which wiped out $68 billion in market cap. Ives called for Tesla’s board to create a new pay package for Musk that would get him 25% voting control and clear a path to merge with xAI, establish “guardrails” for how much time Musk has to spend at Tesla, and provide “oversight on political endeavors.”
Ives published a lengthier note with other analysts from his firm headlined, “The Tesla board MUST Act and Create Ground Rules For Musk; Soap Opera Must End.” The analysts said that Musk’s launching of a new political party created a “tipping point in the Tesla story,” necessitating action by the company’s board to rein in the CEO.
Still, Wedbush maintained its price target and its buy recommendation on the stock.
“Shut up, Dan,” Musk wrote in response on X, even though the first suggestion would hand the CEO the voting control he has long sought at Tesla.
In an email to CNBC, Ives wrote, “Elon has his opinion and I get it, but we stand by what the right course of action is for the Board.”
Musk’s historic 2018 CEO pay package, which had been worth around $56 billion and has since gone up in value, was voided last year by the Delaware Court of Chancery. Judge Kathaleen McCormick ruled that Tesla’s board members had lacked independence from Musk and failed to properly negotiate at arm’s length with the CEO.
Tesla has appealed that case to the Delaware state Supreme Court and is trying to determine what Musk’s next pay package should entail.
Ives isn’t the only Tesla bull to criticize Musk’s continued political activism.
Analysts at William Blair downgraded the stock to the equivalent of a hold from a buy on Monday, because of Musk’s political plans and rhetoric as well as the negative impacts that the spending bill passed by Congress could have on Tesla’s margins and EV sales.
“We expect that investors are growing tired of the distraction at a point when the business needs Musk’s attention the most and only see downside from his dip back into politics,” the analysts wrote. “We would prefer this effort to be channeled towards the robotaxi rollout at this critical juncture.”
Trump supporter James Fishback, CEO of hedge fund Azoria Partners, said Saturday that his firm postponed the listing of an exchange-traded fund, the Azoria Tesla Convexity ETF, that would invest in the EV company’s shares and options. He began his post on X saying, “Elon has gone too far.”
“I encourage the Board to meet immediately and ask Elon to clarify his political ambitions and evaluate whether they are compatible with his full-time obligations to Tesla as CEO,” Fishback wrote.
Musk said Saturday that he has formed the America Party, which he claimed will give Americans “back your freedom.” He hasn’t shared formal details, including where the party may be registered, how much funding he will provide for it and which candidates he will back.
Tesla’s stock is now down about 25% this year, badly underperforming U.S. indexes and ranking as by far the worst performer among tech’s megacaps.
Musk spent much of the first half of the year working with the Trump administration and leading an effort to massively downsize the federal government. His official work with the administration wrapped up at the end of May, and his exit preceded a public spat between Musk and Trump over the spending bill and other matters.
Musk, Tesla’s board chair Robyn Denholm and investor relations representative Travis Axelrod didn’t immediately respond to requests for comment.