
The logo of generative AI chatbot ChatGPT, which is owned by Microsoft-backed company OpenAI.

CFOTO | Future Publishing via Getty Images

Artificial intelligence might be driving concerns over people’s job security, but a new wave of jobs is being created that focuses solely on reviewing the inputs and outputs of next-generation AI models.

Since Nov. 2022, global business leaders, workers and academics alike have been gripped by fears that the emergence of generative AI will disrupt vast numbers of professional jobs.

Generative AI, which enables AI algorithms to generate humanlike, realistic text and images in response to textual prompts, is trained on vast quantities of data.

It can produce sophisticated prose, and even company presentations, approaching the quality of work produced by academically trained professionals.

That has, understandably, generated fears that jobs may be displaced by AI.

Morgan Stanley estimates that as many as 300 million jobs could be taken over by AI, including roles in office and administrative support, legal work, architecture and engineering, the life, physical and social sciences, and financial and business operations.

But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans — and this is creating some new paid careers and side hustles.

Getting paid to review AI

Prolific, a company that helps connect AI developers with research participants, has had direct involvement in providing people with compensation for reviewing AI-generated material.


The company pays participants to assess the quality of AI-generated outputs. Prolific recommends that developers pay participants at least $12 an hour, while minimum pay is set at $8 an hour.

The human reviewers are guided by Prolific’s customers, which include Meta, Google, the University of Oxford and University College London. Customers walk reviewers through the process, briefing them on the potentially inaccurate or otherwise harmful material they may come across. Reviewers must provide consent to take part in the research.

One research participant CNBC spoke to said he has used Prolific on a number of occasions to give his verdict on the quality of AI models.

The research participant, who preferred to remain anonymous due to privacy concerns, said he often had to step in with feedback on where the AI model went wrong, so it could be corrected or amended to ensure it didn’t produce unsavory responses.

He came across a number of instances where AI models produced problematic content; on one occasion, he was even confronted with a model trying to convince him to buy drugs.

He was shocked when the AI approached him with this suggestion, though the purpose of the study was to test the boundaries of that particular model and provide it with feedback to ensure it doesn’t cause harm in the future.

The new ‘AI workers’

Phelim Bradley, CEO of Prolific, said that there are plenty of new kinds of “AI workers” who are playing a key role in informing the data that goes into AI models like ChatGPT — and what comes out.

As governments assess how to regulate AI, Bradley said that it’s “important that enough focus is given to topics including the fair and ethical treatment of AI workers such as data annotators, the sourcing and transparency of data used to build AI models, as well as the dangers of bias creeping into these systems due to the way in which they are being trained.”

“If we can get the approach right in these areas, it will go a long way to ensuring the best and most ethical foundations for the AI-enabled applications of the future.”

In July, Prolific raised $32 million in funding from investors including Partech and Oxford Science Enterprises.

The likes of Google, Microsoft and Meta have been battling to dominate generative AI, an emerging field that has attracted commercial interest primarily thanks to its frequently touted productivity gains.

However, this has opened a can of worms for regulators and AI ethicists, who are concerned there is a lack of transparency surrounding how these models reach decisions on the content they produce, and that more needs to be done to ensure that AI is serving human interests — not the other way around.

Hume, a company that uses AI to read human emotions from verbal, facial and vocal expressions, uses Prolific to test the quality of its AI models. The company recruits people via Prolific to participate in surveys to tell it whether an AI-generated response was a good response or a bad response.
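The survey workflow described above amounts to collecting good/bad verdicts on model outputs and tallying them per output. The sketch below is purely illustrative, with hypothetical output IDs and verdict labels; it is not Hume's or Prolific's actual pipeline.

```python
# Hypothetical sketch: aggregating reviewer survey verdicts per model output.
# Each survey response is an (output_id, verdict) pair, where verdict is
# "good" or "bad"; tally_ratings groups and counts them per output.
from collections import Counter

def tally_ratings(ratings: list[tuple[str, str]]) -> dict[str, Counter]:
    """Group (output_id, verdict) pairs into per-output verdict counts."""
    tallies: dict[str, Counter] = {}
    for output_id, verdict in ratings:
        tallies.setdefault(output_id, Counter())[verdict] += 1
    return tallies

# Hypothetical survey responses for two model outputs:
ratings = [
    ("resp-1", "good"),
    ("resp-1", "good"),
    ("resp-1", "bad"),
    ("resp-2", "bad"),
]
print(tally_ratings(ratings))
```

In a real study, outputs whose "bad" count dominates would be flagged for retraining or filtering.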

“Increasingly, the emphasis of researchers in these large companies and labs is shifting towards alignment with human preferences and safety,” Alan Cowen, Hume’s co-founder and CEO, told CNBC.

“There’s more of an emphasis on being able to monitor things in these applications. I think we’re just seeing the very beginning of this technology being released,” he added.

“It makes sense to expect that some of the things that have long been pursued in AI, such as personalised tutors and digital assistants, and models that can read legal documents and revise them, are actually coming to fruition.”


Another role placing humans at the core of AI development is the prompt engineer: a worker who figures out which text prompts work best to elicit the strongest responses from a generative AI model.
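At its simplest, prompt engineering means trying several phrasings of the same request and keeping whichever produces the best-scoring output. The sketch below is hypothetical: `generate` stands in for a call to a model API, and `score_response` stands in for a human rating or automated metric, neither of which is described in the article.

```python
# Hypothetical sketch of a prompt engineer's loop: generate a response
# for each candidate prompt, score it, and keep the best candidate.

def generate(prompt: str) -> str:
    # Placeholder: a real implementation would call a model API here.
    return f"(model output for: {prompt})"

def score_response(response: str) -> float:
    # Placeholder metric: a real workflow would use human review or an
    # automated evaluation. Here, longer responses simply score higher.
    return float(len(response))

def best_prompt(candidates: list[str]) -> str:
    """Return the candidate prompt whose generated output scores highest."""
    return max(candidates, key=lambda p: score_response(generate(p)))

candidates = [
    "Summarize this report.",
    "Summarize this report in three bullet points for an executive.",
    "You are a financial analyst. Summarize the key risks in this report.",
]
print(best_prompt(candidates))
```

In practice, the scoring step is exactly where the human reviewers described earlier come in: their verdicts tell the engineer which prompt phrasings actually work.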

According to LinkedIn data released last week, there’s been a rush specifically toward jobs mentioning AI.

Job postings on LinkedIn that mention either AI or generative AI more than doubled globally between July 2021 and July 2023, according to the jobs and networking platform.




How black boxes became key to solving airplane crashes


After the search for survivors and recovery of victims in tragic aviation accidents — like that of a UPS cargo plane shortly after takeoff from Louisville Muhammad Ali International Airport in Kentucky last month — comes the search for flight data and a cockpit voice recorder often called the “black box.”

Every commercial plane has them. Aerospace giants GE Aerospace and Honeywell are among a few companies that design them to be nearly indestructible so they can help investigators understand the cause of a crash.

“They’re very crucial because it’s one of the few sources of information that tells us what happened leading up to the accident,” said Chris Babcock, branch chief of the vehicle recorder division at the National Transportation Safety Board. “We can get a lot of information from parts and from the airplane.”

Commercial aircraft have become very complex. A Boeing 787 Dreamliner records thousands of different pieces of information. In the case of the Air India crash in June, data revealed both engine fuel switches were put into a cutoff position within one second of each other. A voice recording from inside the cockpit captured the pilots discussing the cutoffs.

“All of those parameters today can have a very huge impact on the investigation,” said former NTSB member John Goglia. “It’s our goal to provide information back to our investigators who are on scene as quick as we can to help move the investigation forward.”

This crucial data can also help prevent future accidents. A crash can cost airlines or plane manufacturers hundreds of millions of dollars and leave victims’ families with a lifetime of grief.

But in some cases, black boxes have been destroyed or never found. Experts say further developments, such as cockpit video recorders and real-time data streaming, are needed.

“The technology is there. Crashworthy cockpit video recorders are already being installed in a lot of helicopters and other types of airplanes, but they’re not required,” said Jeff Guzzetti, aviation analyst and former accident investigator for the Federal Aviation Administration and NTSB. “There’s privacy and cost issues involving cockpit video recorders, but the NTSB has been recommending that the FAA require them for years now.”

Watch the video to learn more.

CNBC’s Leslie Josephs contributed to this report.



Stocks end November with mixed results despite a strong Thanksgiving week rally



Palantir has worst month in two years as AI stocks sell off


CEO of Palantir Technologies Alex Karp attends the Pennsylvania Energy and Innovation Summit, at Carnegie Mellon University in Pittsburgh, Pennsylvania, U.S., July 15, 2025.

Nathan Howard | Reuters

It’s been a tough November for Palantir.

Shares of the software analytics provider dropped 16% for their worst month since August 2023 as investors dumped AI stocks amid valuation fears. Meanwhile, famed investor Michael Burry doubled down on his bets against the artificial intelligence trade, including a wager against the company.

Palantir started November off on a high note.

The Denver-based company topped Wall Street’s third-quarter earnings and revenue expectations. Palantir also posted its second-straight $1 billion revenue quarter, but high valuation concerns contributed to a post-print selloff.

In a note to clients, Jefferies analysts called Palantir’s valuation “extreme” and argued investors would find better risk-reward in AI names such as Microsoft and Snowflake. Analysts at RBC Capital Markets raised concerns about the company’s “increasingly concentrated growth profile,” while Deutsche Bank called the valuation “very difficult to wrap our heads around.”

Adding fuel to the post-earnings selloff was the revelation that Burry is betting against Palantir and AI chipmaker Nvidia. Burry, widely known for predicting the 2008 housing crisis and for being portrayed in the film “The Big Short,” later accused hyperscalers of artificially boosting earnings.

Palantir CEO Alex Karp vocally hit the front lines, appearing twice in one week on CNBC, where he accused Burry of “market manipulation” and called the investor’s actions “egregious.”

“The idea that chips and ontology is what you want to short is bats— crazy,” Karp told CNBC’s “Squawk Box.”

Despite the vicious selloff, Palantir has notched some deal wins this month. That included a multiyear contract with consulting firm PwC to speed up AI adoption in the U.K. and a deal with aircraft engine maintenance company FTAI.

But those announcements did little to shake off valuation worries that have haunted all AI-tied companies in November.

Across the board, investors have viciously ditched the high-priced group, citing fears of stretched valuations and a bubble.

In November, Nvidia pulled back more than 12%, while Microsoft and Amazon dropped about 5% each. Quantum computing names such as Rigetti Computing and D-Wave Quantum have shed more than a third of their value.

Apple and Alphabet were the only Magnificent 7 stocks to end the month with gains.

Still, questions linger over Palantir’s valuation, and those worries aren’t new.

Even after its steep price drop, the company’s stock trades at 233 times forward earnings. By comparison, Nvidia and Alphabet traded at about 38 times and 30 times, respectively, at Friday’s close.
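The forward multiples cited above are simply share price divided by expected earnings per share over the next twelve months. The numbers in this sketch are hypothetical round figures chosen to land near a 232x multiple; they are not Palantir's actual price or EPS.

```python
# Illustration of the forward price-to-earnings multiple cited above.
# The share price and EPS values below are hypothetical, not real financials.

def forward_pe(share_price: float, expected_eps: float) -> float:
    """Forward P/E = current share price / expected next-12-month EPS."""
    return share_price / expected_eps

# A stock at $160 with $0.69 in expected next-twelve-month earnings per share:
print(round(forward_pe(160.0, 0.69)))
```

The same formula with Nvidia's or Alphabet's figures yields the far lower multiples the article compares against.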

Karp, who has long defended the company, didn’t miss an opportunity to clap back at his critics, arguing in a letter to shareholders that the company is making it feasible for everyday investors to attain rates of return once “limited to the most successful venture capitalists in Palo Alto.”

“Please turn on the conventional television and see how unhappy those that didn’t invest in us are,” Karp said during an earnings call. “Enjoy, get some popcorn. They’re crying. We are every day making this company better, and we’re doing it for this nation, for allied countries.”

Palantir declined to comment for this story.

WATCH: Palantir CEO Alex Karp: We’ve printed venture results for the average American


