The logo of generative AI chatbot ChatGPT, which is owned by the Microsoft-backed company OpenAI.

CFOTO | Future Publishing via Getty Images

Artificial intelligence might be driving concerns over people’s job security, but a new wave of jobs is being created that focuses solely on reviewing the inputs and outputs of next-generation AI models.

Since the release of OpenAI’s ChatGPT in November 2022, global business leaders, workers and academics alike have been gripped by fears that the emergence of generative AI will disrupt vast numbers of professional jobs.

Generative AI, which can produce humanlike, realistic text and images in response to textual prompts, is trained on vast quantities of data.

It can produce sophisticated prose and even company presentations close to the quality of academically trained individuals.

That has, understandably, generated fears that jobs may be displaced by AI.

Goldman Sachs estimates that as many as 300 million jobs could be affected by AI, including office and administrative support roles, legal work, architecture and engineering, the life, physical and social sciences, and financial and business operations.

But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans — and this is creating some new paid careers and side hustles.

Getting paid to review AI

Prolific, a company that connects AI developers with research participants, pays people to review AI-generated material.

Participants are paid to assess the quality of AI-generated outputs. Prolific recommends that developers pay them at least $12 an hour, while minimum pay is set at $8 an hour.

The human reviewers are guided by Prolific’s customers, which include Meta, Google, the University of Oxford and University College London. The customers walk reviewers through the process and brief them on the potentially inaccurate or otherwise harmful material they may come across.

Reviewers must consent to take part in the research.

One research participant CNBC spoke to said he has used Prolific on a number of occasions to give his verdict on the quality of AI models.

The participant, who asked to remain anonymous due to privacy concerns, said he often had to provide feedback on where an AI model had gone wrong and needed correcting or amending, to ensure it didn’t produce unsavory responses.

He came across a number of instances where AI models produced problematic material; on one occasion, he was even confronted with a model trying to convince him to buy drugs.

He was shocked when the AI made the suggestion, though the purpose of the study was to test the boundaries of that particular model and give it feedback to ensure it wouldn’t cause harm in the future.

The new ‘AI workers’

Phelim Bradley, CEO of Prolific, said that there are plenty of new kinds of “AI workers” who are playing a key role in informing the data that goes into AI models like ChatGPT — and what comes out.

As governments assess how to regulate AI, Bradley said that it’s “important that enough focus is given to topics including the fair and ethical treatment of AI workers such as data annotators, the sourcing and transparency of data used to build AI models, as well as the dangers of bias creeping into these systems due to the way in which they are being trained.”

“If we can get the approach right in these areas, it will go a long way to ensuring the best and most ethical foundations for the AI-enabled applications of the future.”

In July, Prolific raised $32 million in funding from investors including Partech and Oxford Science Enterprises.

The likes of Google, Microsoft and Meta have been battling to dominate generative AI, an emerging field that has attracted commercial interest primarily thanks to its frequently floated productivity gains.

However, this has opened a can of worms for regulators and AI ethicists, who are concerned there is a lack of transparency surrounding how these models reach decisions on the content they produce, and that more needs to be done to ensure that AI is serving human interests — not the other way around.

Hume, a company that uses AI to read human emotions from verbal, facial and vocal expressions, uses Prolific to test the quality of its AI models. The company recruits people via Prolific to take surveys indicating whether an AI-generated response was good or bad.

“Increasingly, the emphasis of researchers in these large companies and labs is shifting towards alignment with human preferences and safety,” Alan Cowen, Hume’s co-founder and CEO, told CNBC.

“There’s more of an emphasis on being able to monitor things in these applications. I think we’re just seeing the very beginning of this technology being released,” he added.

“It makes sense to expect that some of the things that have long been pursued in AI, such as personalised tutors and digital assistants, and models that can read legal documents and revise them, are actually coming to fruition.”

Another role placing humans at the core of AI development is that of the prompt engineer: a worker who figures out which text-based prompts work best to feed into a generative AI model to achieve the best responses. A simple version of that workflow is sketched below.
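
As a rough, hypothetical illustration of that loop, and not any company’s actual tooling: a prompt engineer might try several phrasings of the same task and keep whichever one reviewers score highest. In the Python sketch below, `query_model` and `rate_response` are placeholder stand-ins for a real model API and a real human-review step.

```python
# Hypothetical sketch of prompt A/B testing, not any real pipeline.
# A prompt engineer compares phrasings of the same task and keeps
# the template whose response reviewers rate highest.

PROMPT_VARIANTS = [
    "Summarize this contract in plain English:\n{text}",
    "You are a paralegal. Explain this contract to a client:\n{text}",
    "List the key obligations in this contract:\n{text}",
]

def query_model(prompt: str) -> str:
    # Stand-in for a call to a generative AI model's API.
    return f"[model response to: {prompt[:40]}...]"

def rate_response(response: str) -> float:
    # Stand-in for a human reviewer's 0-to-1 quality rating
    # (a deterministic dummy value here).
    return round(len(response) % 10 / 10, 1)

def best_prompt(document: str) -> str:
    # Score every variant and return the highest-rated template.
    scored = {
        template: rate_response(query_model(template.format(text=document)))
        for template in PROMPT_VARIANTS
    }
    return max(scored, key=scored.get)

print(best_prompt("Party A shall deliver the goods by June 1."))
```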

According to LinkedIn data released last week, there’s been a rush specifically toward jobs mentioning AI.

Job postings on LinkedIn that mention either AI or generative AI more than doubled globally between July 2021 and July 2023, according to the jobs and networking platform.

Reinforcement learning

Much of this human review work feeds into a training technique known as reinforcement learning from human feedback, or RLHF, in which reviewers’ ratings of a model’s responses are used as a reward signal to steer the model toward the answers people prefer.
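
As a rough illustration of the first step of that process, and not Prolific’s or any lab’s actual pipeline: reviewers are typically shown two candidate responses to the same prompt and asked to pick the better one, and those picks become the training labels for a reward model. In the hypothetical Python sketch below, `generate` and `ask_reviewer` are placeholder stand-ins for a real model and a real paid reviewer.

```python
import random

# Hypothetical sketch of RLHF preference-data collection.
# `generate` and `ask_reviewer` are placeholders, not real APIs.

def generate(prompt: str) -> str:
    # Stand-in for sampling one response from a generative model.
    return f"[response {random.randint(0, 999)} to: {prompt}]"

def ask_reviewer(prompt: str, a: str, b: str) -> int:
    # Stand-in for a paid human reviewer choosing the better response.
    # Returns 0 if `a` is preferred, 1 if `b` is.
    return random.choice([0, 1])

def collect_preferences(prompts, pairs_per_prompt=3):
    # Build (prompt, chosen, rejected) triples, the raw material
    # used to train a reward model for RLHF.
    dataset = []
    for prompt in prompts:
        for _ in range(pairs_per_prompt):
            a, b = generate(prompt), generate(prompt)
            winner = ask_reviewer(prompt, a, b)
            chosen, rejected = (a, b) if winner == 0 else (b, a)
            dataset.append((prompt, chosen, rejected))
    return dataset

data = collect_preferences(["Explain photosynthesis simply."])
print(len(data), "preference pairs collected")
```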

SpaceX aims for $800 billion valuation in secondary share sale, WSJ reports

Dado Ruvic | Reuters

Elon Musk’s SpaceX is initiating a secondary share sale that would give the company a valuation of up to $800 billion, The Wall Street Journal reported Friday.

SpaceX is also telling some investors it will consider going public possibly around the end of next year, the report said.

At the elevated price, Musk’s aerospace and defense contractor would be valued above ChatGPT maker OpenAI, which wrapped up a share sale at a $500 billion valuation in October.

SpaceX has been investing heavily in reusable rockets, launch facilities and satellites while competing for government contracts with newer space players, including Jeff Bezos’ Blue Origin. SpaceX is far ahead and operates the world’s largest network of satellites in low Earth orbit through Starlink, which powers satellite internet services under the same brand name.

A SpaceX IPO would include its Starlink business, which the company previously considered spinning out.

At Tesla’s annual shareholder meeting last month, Musk, who is the CEO of both companies, discussed whether SpaceX would go public. He said he doesn’t love running publicly traded businesses, in part because they draw “spurious lawsuits” and can “make it very difficult to operate effectively.”

However, Musk said during the meeting that he wanted to “try to figure out some way for Tesla shareholders to participate in SpaceX,” adding, “maybe at some point, SpaceX should become a public company despite all the downsides.”

Judge finalizes remedies in Google antitrust case

The logo for Google LLC is seen at the Google Store Chelsea in Manhattan, New York City, U.S., November 17, 2021.

Andrew Kelly | Reuters

A U.S. judge on Friday finalized his decision on the consequences Google will face for its illegal search monopoly, adding new details to the remedies he previously ordered.

Last year, Google was found to hold an illegal monopoly in its core market of internet search, and in September, U.S. District Judge Amit Mehta ruled against the most severe consequences that were proposed by the Department of Justice.

Those included a proposed forced sale of Google’s Chrome browser, which provides data that helps the company’s advertising business deliver targeted ads. Alphabet shares popped 8% in extended trading as investors celebrated what they viewed as minimal consequences from a historic defeat last year in the landmark antitrust case.

Investors largely shrugged off the ruling as non-impactful to Google. However, some told CNBC it’s still a bite that could “sting.”

Mehta on Friday issued additional details for his ruling in new filings.

“The age-old saying ‘the devil is in the details’ may not have been devised with the drafting of an antitrust remedies judgment in mind, but it sure does fit,” Mehta wrote in one of the Friday filings.

Google did not immediately respond to a request for comment. The company has previously said it will appeal the remedies.

In August 2024, Mehta ruled that Google violated Section 2 of the Sherman Act and held a monopoly in search and related advertising. The antitrust trial started in September 2023.

In his September decision, Mehta said the company would be able to make payments to preload products, but it could not have exclusive contracts that condition payments or licensing. Google was also ordered to loosen its hold on search data. Mehta in September also ruled that Google would have to make available certain search index data and user interaction data, though “not ads data.”

The DOJ had asked Google to stop the practice of “compelled syndication,” which refers to the practice of making certain deals with companies to ensure its search engine remains the default choice in browsers and smartphones.

The judge’s September ruling didn’t end the practice entirely: Mehta ruled that Google couldn’t enter into exclusive deals but could keep paying for default placement, which was a win for the company. Google pays Apple billions of dollars per year to be the default search engine on iPhones. It’s lucrative for Apple and a valuable way for Google to get more search volume and users.

Mehta’s new details

In the Friday filings, Mehta wrote that Google cannot enter into any deal like the one it’s had with Apple “unless the agreement terminates no more than one year after the date it is entered.”

This includes deals involving generative artificial intelligence products, including any “application, software, service, feature, tool, functionality, or product” that involve or use genAI or large-language models, Mehta wrote.

GenAI “plays a significant role in these remedies,” Mehta wrote.

The judge also reiterated which web index data he will require Google to share with certain competitors.

Google has to share some of the raw search interaction data it uses to train its ranking and AI systems, but it does not have to share the actual algorithms, just the data that feeds them. In September, Mehta said those data sets represent a “small fraction” of Google’s overall traffic, but argued that the company’s models are trained on data that contributed to Google’s edge over competitors.

The company must make this data available to qualified competitors at least twice, one of the Friday filings states. Google must share that data under a “syndication license” model whose term will be five years from the date the license is signed, the filing states.

Mehta on Friday also included requirements on the makeup of a technical committee that will determine the firms Google must share its data with.

Committee “members shall be experts in some combination of software engineering, information retrieval, artificial intelligence, economics, behavioral science, and data privacy and data security,” the filing states.

The judge went on to say that no committee member can have a conflict of interest, such as having worked for Google or any of its competitors in the six months prior to or one year after serving in the role.

Google is also required to appoint an internal compliance officer who will be responsible “for administering Google’s antitrust compliance program and helping to ensure compliance with this Final Judgment,” per one of the filings. The company must also appoint a senior business executive “whom Google shall make available to update the Court on Google’s compliance at regular status conferences or as otherwise ordered.”
