The logo of generative AI chatbot ChatGPT, which is owned by Microsoft-backed company OpenAI.
CFOTO | Future Publishing via Getty Images
Artificial intelligence might be driving concerns over people’s job security — but a new wave of jobs is being created that focuses solely on reviewing the inputs and outputs of next-generation AI models.
Since Nov. 2022, global business leaders, workers and academics alike have been gripped by fears that the emergence of generative AI will disrupt vast numbers of professional jobs.
Generative AI, which enables AI algorithms to generate humanlike, realistic text and images in response to textual prompts, is trained on vast quantities of data.
It can produce sophisticated prose and even company presentations close to the quality of academically trained individuals.
That has, understandably, generated fears that jobs may be displaced by AI.
Goldman Sachs estimates that as many as 300 million jobs could be taken over by AI, including roles in office and administrative support, legal work, architecture and engineering, the life, physical and social sciences, and financial and business operations.
But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans — and this is creating some new paid careers and side hustles.
Getting paid to review AI
Prolific, a company that helps connect AI developers with research participants, has had direct involvement in providing people with compensation for reviewing AI-generated material.
The company pays research participants to assess the quality of AI-generated outputs. Prolific recommends that developers pay participants at least $12 an hour, while minimum pay is set at $8 an hour.
The human reviewers are guided by Prolific’s customers, which include Meta, Google, the University of Oxford and University College London. The customers walk reviewers through the process and brief them on the potentially inaccurate or otherwise harmful material they may come across.
Reviewers must consent to take part in the research.
One research participant CNBC spoke to said he has used Prolific on a number of occasions to give his verdict on the quality of AI models.
The research participant, who preferred to remain anonymous due to privacy concerns, said he often had to provide feedback on where an AI model went wrong so that it could be corrected or amended to ensure it didn’t produce unsavory responses.
He came across a number of instances where AI models produced problematic material. On one occasion, he was even confronted with a model trying to convince him to buy drugs.
He was shocked by the exchange, though the purpose of the study was to test the boundaries of that particular AI and provide it with feedback to ensure it doesn’t cause harm in the future.
The new ‘AI workers’
Phelim Bradley, CEO of Prolific, said that there are plenty of new kinds of “AI workers” who are playing a key role in informing the data that goes into AI models like ChatGPT — and what comes out.
As governments assess how to regulate AI, Bradley said that it’s “important that enough focus is given to topics including the fair and ethical treatment of AI workers such as data annotators, the sourcing and transparency of data used to build AI models, as well as the dangers of bias creeping into these systems due to the way in which they are being trained.”
“If we can get the approach right in these areas, it will go a long way to ensuring the best and most ethical foundations for the AI-enabled applications of the future.”
In July, Prolific raised $32 million in funding from investors including Partech and Oxford Science Enterprises.
The likes of Google, Microsoft and Meta have been battling to dominate generative AI, an emerging field that has attracted commercial interest primarily thanks to its frequently touted productivity gains.
However, this has opened a can of worms for regulators and AI ethicists, who are concerned there is a lack of transparency surrounding how these models reach decisions on the content they produce, and that more needs to be done to ensure that AI is serving human interests — not the other way around.
Hume, a company that uses AI to read human emotions from verbal, facial and vocal expressions, uses Prolific to test the quality of its AI models. The company recruits people via Prolific to participate in surveys to tell it whether an AI-generated response was a good response or a bad response.
“Increasingly, the emphasis of researchers in these large companies and labs is shifting towards alignment with human preferences and safety,” Alan Cowen, Hume’s co-founder and CEO, told CNBC.
“There’s more of an emphasis on being able to monitor things in these applications. I think we’re just seeing the very beginning of this technology being released,” he added.
“It makes sense to expect that some of the things that have long been pursued in AI, such as personalised tutors and digital assistants, and models that can read legal documents and revise them, are actually coming to fruition.”
Another role placing humans at the core of AI development is that of the prompt engineer. These are workers who figure out which text-based prompts work best to feed into a generative AI model to elicit the best responses.
According to LinkedIn data released last week, there’s been a rush specifically toward jobs mentioning AI.
Job postings on LinkedIn that mention either AI or generative AI more than doubled globally between July 2021 and July 2023, according to the jobs and networking platform.
Reinforcement learning
Meanwhile, companies are also using AI to automate reviews of regulatory documentation and legal paperwork — but with human oversight.
Firms often have to scan through huge amounts of paperwork to vet potential partners and assess whether they can expand into certain territories.
Going through all of this paperwork can be a tedious process that workers don’t necessarily want to take on, so the ability to hand it off to an AI model becomes attractive. But, according to researchers, it still requires a human touch.
Mesh AI, a digital transformation-focused consulting firm, says human feedback can help AI models learn from the mistakes they make through trial and error.
“With this approach organizations can automate analysis and tracking of their regulatory commitments,” Michael Chalmers, CEO at Mesh AI, told CNBC via email.
Small and medium-sized enterprises “can shift their focus from mundane document analysis to approving the outputs generated from said AI models and further improving them by applying reinforcement learning from human feedback.”
French satellite group Eutelsat, often seen as Europe’s answer to Elon Musk’s Starlink, saw its share price plummet Wednesday following a report that Japanese investor SoftBank cut its stake in the company.
Shares in Eutelsat were last trading 7.8% lower as of 6:00 a.m. ET.
The move follows a Reuters report that SoftBank sold 36 million rights, corresponding to around 26 million shares and roughly half of its stake in the satellite operator.
Eutelsat owns satellite internet provider OneWeb, with which it merged in 2023 in a bid to challenge Starlink’s dominance in the market.
But the French group has struggled to chip away at the U.S. company’s market share. Eutelsat currently has more than 600 satellites in orbit, compared with Starlink’s more than 6,750, according to the companies’ websites.
After soaring more than 600% in early March this year, as Europe scrambled to bolster its tech sovereignty in the wake of the U.S. cutting military support to Ukraine, Eutelsat shares have since dropped more than 70%.
The company is seen as crucial to Europe’s tech sovereignty ambitions. In June the French state led a 1.35 billion euro ($1.57 billion) investment in Eutelsat, becoming its biggest shareholder with a roughly 30% stake.
Tech sovereignty
In November SoftBank said it had sold its entire stake in U.S. chipmaker Nvidia as it looked to free up funds for its investment in OpenAI and other projects.
SoftBank wouldn’t have made the move if it didn’t need to bankroll its next artificial intelligence investments, founder Masayoshi Son said on Monday at an event.
The Japanese giant’s Eutelsat move mirrors its “aggressive monetisation” across its portfolio, Luke Kehoe, analyst at Ookla, told CNBC.
“With governments and strategic European investors, not SoftBank, now funding the recapitalisation, Eutelsat is becoming less a growth story and more a pillar of Europe’s digital sovereignty infrastructure.”
While Starlink is holding on to its scale advantage and is dominant in retail broadband, Eutelsat is carving out a niche in government, aviation, backhaul and emergency connectivity, said Kehoe.
“The open question is whether that higher-value, B2B-centric positioning can deliver attractive returns once the current wave of capex and recapitalisations is behind it, and whether Europe is willing to keep writing cheques at the scale required to narrow the gap with Starlink.”
Eutelsat and SoftBank have been approached for comment.
Apple’s latest iPhone models are shown on display at its Regent Street, London store on the launch day of the iPhone 17.
Arjun Kharpal | CNBC
Apple will hit a record level of iPhone shipments this year driven by its latest models and a resurgence in its key market of China, research firm IDC has forecast.
The company will ship 247.4 million iPhones in 2025, up just over 6% year-on-year, IDC forecast in a report on Tuesday. That’s more than the 236 million it shipped in 2021, when the iPhone 13 was released.
Apple’s predicted surge is “thanks to the phenomenal success of its latest iPhone 17 series,” Nabila Popal, senior research director at IDC, said in a statement, adding that in China, “massive demand for iPhone 17 has significantly accelerated Apple’s performance.”
“Shipments” is a term analysts use to refer to the number of devices a vendor sends to its sales channels, such as e-commerce partners or stores. Shipments do not directly equate to sales, but they indicate the demand a company expects for its products.
When it launched in September, investors saw the iPhone 17 series as a key set of devices for Apple, which was facing increased competition in China and questions about its artificial intelligence strategy as Android rivals powered ahead.
Apple’s shipments are expected to jump 17% year-on-year in China in the fourth quarter, IDC said, leading the research firm to forecast 3% growth in the market this year versus a previous projection of a 1% decline.
IDC’s report follows a forecast last week from Counterpoint Research that Apple will ship more smartphones than Samsung in 2025 for the first time in 14 years.
Bloomberg reported last month that Apple could delay the release of the base model of its next device, the iPhone 18, until 2027, which would break its regular cycle of releasing all of its phones in fall each year. IDC said this could mean Apple’s shipments may drop by 4.2% next year.
Anthropic, the AI startup behind the popular Claude chatbot, is in early talks to launch one of the largest initial public offerings as early as next year, the Financial Times reported Wednesday.
For the potential IPO, Anthropic has engaged law firm Wilson Sonsini Goodrich & Rosati, which has previously worked on high-profile tech IPOs such as Google, LinkedIn and Lyft, the FT said, citing two sources familiar with the matter.
The startup, led by chief executive Dario Amodei, was also pursuing a private funding round that could value it above $300 billion, including a $15 billion combined commitment from Microsoft and Nvidia, per the report.
It added that Anthropic has also discussed a potential IPO with major investment banks, but that sources characterized the discussions as preliminary and informal.
If true, the news could position Anthropic in a race to market with rival ChatGPT-maker OpenAI, which is also reportedly laying the groundwork for a public offering. The potential listings would also test investors’ appetite for loss-making AI startups amid growing fears of a so-called AI bubble.
However, an Anthropic spokesperson told the FT: “It’s fairly standard practice for companies operating at our scale and revenue level to effectively operate as if they are publicly traded companies,” adding that no decisions have been made on timing or whether to go public.
CNBC was unable to reach Anthropic and Wilson Sonsini, which has advised Anthropic for a few years, for comment.
According to one of the FT’s sources, Anthropic has been working through internal preparations for a potential listing, though details were not provided.
CNBC also reported last month that Anthropic was recently valued at around $350 billion after receiving investments of up to $5 billion from Microsoft and $10 billion from Nvidia.
According to the FT report, investors in the company are enthusiastic about Anthropic’s potential IPO, which could see it “seize the initiative” from OpenAI.
While OpenAI has been rumored to be considering an IPO, its chief financial officer recently said the company is not pursuing a near-term listing, even as it closed a $6.6 billion share sale at a $500 billion valuation in October.