The logo of generative AI chatbot ChatGPT, which is owned by Microsoft-backed company OpenAI.
Artificial intelligence might be driving concerns over people’s job security — but a new wave of jobs is being created that focuses solely on reviewing the inputs and outputs of next-generation AI models.
Since November 2022, global business leaders, workers and academics alike have been gripped by fears that the emergence of generative AI will disrupt vast numbers of professional jobs.
Generative AI, which enables AI algorithms to generate humanlike, realistic text and images in response to textual prompts, is trained on vast quantities of data.
It can produce sophisticated prose and even company presentations approaching the quality of work by academically trained professionals.
That has, understandably, generated fears that jobs may be displaced by AI.
Goldman Sachs estimates that as many as 300 million jobs could be taken over by AI, including office and administrative support, legal work, architecture and engineering, life, physical and social sciences, and financial and business operations.
But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans — and this is creating some new paid careers and side hustles.
Getting paid to review AI
Prolific, a company that helps connect AI developers with research participants, has had direct involvement in providing people with compensation for reviewing AI-generated material.
The company pays research participants to assess the quality of AI-generated outputs. Prolific recommends that developers pay participants at least $12 an hour, while minimum pay is set at $8 an hour.
The human reviewers are guided by Prolific’s customers, which include Meta, Google, the University of Oxford and University College London. Customers walk reviewers through the process, briefing them on the potentially inaccurate or otherwise harmful material they may come across. Reviewers must provide consent before engaging in the research.
One research participant CNBC spoke to said he has used Prolific on a number of occasions to give his verdict on the quality of AI models.
The research participant, who preferred to remain anonymous due to privacy concerns, said he often had to step in to provide feedback on where an AI model went wrong and needed correcting or amending, so that it didn’t produce unsavory responses.
He came across a number of instances where certain AI models produced problematic material — on one occasion, he was even confronted with an AI model trying to convince him to buy drugs. He was shocked when the AI made the suggestion, though the purpose of the study was to test the boundaries of that particular model and give it feedback to ensure it doesn’t cause harm in the future.
The new ‘AI workers’
Phelim Bradley, CEO of Prolific, said that there are plenty of new kinds of “AI workers” who are playing a key role in informing the data that goes into AI models like ChatGPT — and what comes out.
As governments assess how to regulate AI, Bradley said that it’s “important that enough focus is given to topics including the fair and ethical treatment of AI workers such as data annotators, the sourcing and transparency of data used to build AI models, as well as the dangers of bias creeping into these systems due to the way in which they are being trained.”
“If we can get the approach right in these areas, it will go a long way to ensuring the best and most ethical foundations for the AI-enabled applications of the future.”
In July, Prolific raised $32 million in funding from investors including Partech and Oxford Science Enterprises.
The likes of Google, Microsoft and Meta have been battling to dominate generative AI, an emerging field that has attracted commercial interest primarily thanks to its frequently floated productivity gains.
However, this has opened a can of worms for regulators and AI ethicists, who are concerned there is a lack of transparency surrounding how these models reach decisions on the content they produce, and that more needs to be done to ensure that AI is serving human interests — not the other way around.
Hume, a company that uses AI to read human emotions from verbal, facial and vocal expressions, uses Prolific to test the quality of its AI models. The company recruits people via Prolific to participate in surveys to tell it whether an AI-generated response was a good response or a bad response.
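In practice, that kind of good-or-bad survey verdict reduces to storing human labels alongside each model output and aggregating them. Below is a minimal, hypothetical sketch of such a pipeline in Python; the Rating structure and approval_rate helper are illustrative assumptions, not Hume’s or Prolific’s actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    """One human verdict on a single AI-generated response."""
    response_id: str
    rater_id: str
    is_good: bool  # the survey's "good response / bad response" judgment

def approval_rate(ratings: list[Rating]) -> dict[str, float]:
    """Fraction of raters who judged each response 'good'."""
    votes: dict[str, list[bool]] = {}
    for r in ratings:
        votes.setdefault(r.response_id, []).append(r.is_good)
    return {rid: sum(v) / len(v) for rid, v in votes.items()}

# Example: three raters review two responses from a survey batch.
batch = [
    Rating("resp-1", "rater-a", True),
    Rating("resp-1", "rater-b", True),
    Rating("resp-2", "rater-a", False),
]
print(approval_rate(batch))  # {'resp-1': 1.0, 'resp-2': 0.0}
```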
“Increasingly, the emphasis of researchers in these large companies and labs is shifting towards alignment with human preferences and safety,” Alan Cowen, Hume’s co-founder and CEO, told CNBC.
“There’s more of an emphasis on being able to monitor things in these applications. I think we’re just seeing the very beginning of this technology being released,” he added.
“It makes sense to expect that some of the things that have long been pursued in AI, such as personalised tutors and digital assistants, and models that can read legal documents and revise them, are actually coming to fruition.”
Another role placing humans at the core of AI development is that of the prompt engineer: a worker who figures out which text-based prompts work best to feed into a generative AI model to elicit the best possible responses.
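As a rough illustration of that workflow, a prompt engineer might systematically try candidate prompts against a scoring function and keep the winner. The sketch below is hypothetical: call_model stands in for whatever generative AI API is under test, and the toy score heuristic replaces the human ratings a real team would rely on.

```python
def call_model(prompt: str) -> str:
    """Placeholder for a real generative AI API call (an assumption, not a real library)."""
    return f"(model output for: {prompt})"

def score(response: str) -> float:
    """Toy quality metric; real teams typically substitute human ratings here."""
    return float(len(response))  # naive heuristic: longer output scores higher

candidate_prompts = [
    "Summarize the quarterly report.",
    "Summarize the quarterly report in three bullet points for an executive.",
    "You are a financial analyst. Summarize the key risks in this report.",
]

# Try each candidate and keep the prompt whose output scores highest.
best = max(candidate_prompts, key=lambda p: score(call_model(p)))
print("Best prompt:", best)
```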
According to LinkedIn data released last week, there’s been a rush specifically toward jobs mentioning AI.
Job postings on LinkedIn that mention either AI or generative AI more than doubled globally between July 2021 and July 2023, according to the jobs and networking platform.
Reinforcement learning
Meanwhile, companies are also using AI to automate reviews of regulatory documentation and legal paperwork — but with human oversight.
Firms often have to scan through huge amounts of paperwork to vet potential partners and assess whether they can expand into certain territories.
Going through all of this paperwork can be a tedious process that workers don’t necessarily want to take on — so the ability to pass it on to an AI model becomes attractive. But, according to researchers, it still requires a human touch.
Mesh AI, a digital transformation-focused consulting firm, says human feedback can help AI models learn from the mistakes they make through trial and error.
“With this approach organizations can automate analysis and tracking of their regulatory commitments,” Michael Chalmers, CEO at Mesh AI, told CNBC via email.
Small and medium-sized enterprises “can shift their focus from mundane document analysis to approving the outputs generated from said AI models and further improving them by applying reinforcement learning from human feedback.”
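At its simplest, reinforcement learning from human feedback turns those approvals and rejections into a numeric training signal. The following self-contained sketch illustrates the idea as a bandit-style learner; the document-review actions, the epsilon-greedy loop and the human_feedback stub are illustrative assumptions, not Mesh AI’s actual method.

```python
import random

# Each "action" is one way a model might classify a document clause.
actions = ["compliant", "needs-review", "non-compliant"]
value = {a: 0.0 for a in actions}  # learned preference score per action
counts = {a: 0 for a in actions}

def human_feedback(action: str) -> float:
    """Stand-in for a reviewer approving (+1.0) or rejecting (-1.0) an output."""
    return 1.0 if action == "needs-review" else -1.0  # toy ground truth

for _ in range(1000):
    # Epsilon-greedy: usually exploit the best-rated action, sometimes explore.
    if random.random() < 0.1:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)
    reward = human_feedback(action)
    counts[action] += 1
    # Incremental average: the estimate drifts toward the observed feedback.
    value[action] += (reward - value[action]) / counts[action]

print(value)  # "needs-review" should end with the highest learned score
```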
White House trade advisor Peter Navarro chastised Apple CEO Tim Cook on Monday over the company’s response to pressure from the Trump administration to make more of its products outside of China.
“Going back to the first Trump term, Tim Cook has continually asked for more time in order to move his factories out of China,” Navarro said in an interview on CNBC’s “Squawk on the Street.” “I mean it’s the longest-running soap opera in Silicon Valley.”
CNBC has reached out to Apple for comment on Navarro’s criticism.
President Donald Trump has in recent months ramped up demands for Apple to move production of its iconic iPhone to the U.S. from overseas. Apple’s flagship phone is produced primarily in China, but the company has increasingly boosted production in India, partly to avoid the higher cost of Trump’s tariffs.
Trump in May warned Apple would have to pay a tariff of 25% or more for iPhones made outside the U.S. In separate remarks, Trump said he told Cook, “I don’t want you building in India.”
Analysts and supply chain experts have argued it would be impossible for Apple to completely move iPhone production to the U.S. By some estimates, a U.S.-made iPhone could cost as much as $3,500.
Navarro said Cook isn’t shifting production out of China quickly enough.
“With all these new advanced manufacturing techniques and the way things are moving with AI and things like that, it’s inconceivable to me that Tim Cook could not produce his iPhones elsewhere around the world and in this country,” Navarro said.
Apple currently makes very few products in the U.S. During Trump’s first term, Apple extended its commitment to assemble the $3,000 Mac Pro in Texas.
In February, Apple said it would spend $500 billion within the U.S., including on assembling some AI servers.
CoreWeave founders Brian Venturo, at left in sweatshirt, and Mike Intrator slap five after ringing the opening bell at Nasdaq headquarters in New York on March 28, 2025.
Artificial intelligence hyperscaler CoreWeave said Monday it will acquire Core Scientific, a leading data center infrastructure provider, in an all-stock deal valued at approximately $9 billion.
CoreWeave stock fell about 4% on Monday, while Core Scientific stock plummeted about 20%. Shares of both companies rallied at the end of June after the Wall Street Journal reported that talks were underway for an acquisition.
The deal strengthens CoreWeave’s position in the AI arms race by bringing critical infrastructure in-house.
CoreWeave CEO Michael Intrator said the move will eliminate $10 billion in future lease obligations and significantly enhance operating efficiency.
The transaction is expected to close in the fourth quarter of 2025, pending regulatory and shareholder approval.
The deal expands CoreWeave’s access to power and real estate, giving it ownership of 1.3 gigawatts of gross capacity across Core Scientific’s U.S. data center footprint, with another gigawatt available for future growth.
Core Scientific has increasingly focused on high-performance compute workloads since emerging from bankruptcy and relisting on the Nasdaq in 2024.
Core Scientific shareholders will receive 0.1235 CoreWeave shares for each share they hold — implying a $20.40 per-share valuation and a 66% premium to Core Scientific’s closing stock price before deal talks were reported.
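Those implied figures follow directly from the exchange ratio, as a quick back-of-the-envelope check shows; the CoreWeave share price below is back-derived from the stated terms rather than a quoted market price.

```python
exchange_ratio = 0.1235     # CoreWeave shares per Core Scientific share
coreweave_price = 165.18    # assumed price consistent with the stated terms

implied_value = exchange_ratio * coreweave_price
print(f"Implied per-share value: ${implied_value:.2f}")      # ~$20.40

# A 66% premium implies the pre-report price was implied_value / 1.66.
pre_report_price = implied_value / 1.66
print(f"Implied pre-report price: ${pre_report_price:.2f}")  # ~$12.29
```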
After closing, Core Scientific shareholders will own less than 10% of the combined company.
Two young men stand inside a shopping mall in front of a large illuminated Apple logo seen through a window in Chongqing, China, on June 4, 2025.
Apple on Monday appealed what it called an “unprecedented” 500 million euro ($586 million) fine issued by the European Union for violating the bloc’s Digital Markets Act.
“As our appeal will show, the EC [European Commission] is mandating how we run our store and forcing business terms which are confusing for developers and bad for users,” the company said in a statement. “We implemented this to avoid punitive daily fines and will share the facts with the Court.”
Apple recently made changes to its App Store’s European policies that the company said would be in compliance with the DMA and would avoid the fines.
The Commission, which is the executive body of the EU, announced its fine in April, saying that Apple “breached its anti-steering obligation” under the DMA with restrictions on the App Store.
“Due to a number of restrictions imposed by Apple, app developers cannot fully benefit from the advantages of alternative distribution channels outside the App Store,” the commission wrote. “Similarly, consumers cannot fully benefit from alternative and cheaper offers as Apple prevents app developers from directly informing consumers of such offers.”
Under the DMA, tech giants like Apple and Google are required to allow businesses to inform end-users of offers outside their platform — including those at different prices or with different conditions.
Companies like Epic Games and Spotify have complained about restrictions within the App Store that make it harder for them to communicate alternative payment methods to iOS users.
Apple typically takes a 15%-30% cut on in-app purchases.