
The logo of generative AI chatbot ChatGPT, which is owned by Microsoft-backed company OpenAI.

CFOTO | Future Publishing via Getty Images

Artificial intelligence might be driving concerns over people’s job security, but a new wave of jobs is being created that focuses solely on reviewing the inputs and outputs of next-generation AI models.

Since November 2022, global business leaders, workers and academics alike have been gripped by fears that the emergence of generative AI will disrupt vast numbers of professional jobs.

Generative AI, which enables AI algorithms to generate humanlike, realistic text and images in response to textual prompts, is trained on vast quantities of data.

It can produce sophisticated prose and even company presentations that approach the quality of work by academically trained professionals.

That has, understandably, generated fears that jobs may be displaced by AI.

Goldman Sachs estimates that as many as 300 million jobs could be taken over by AI, including office and administrative support jobs, legal work, architecture and engineering, the life, physical and social sciences, and financial and business operations.

But the inputs that AI models receive, and the outputs they create, often need to be guided and reviewed by humans — and this is creating some new paid careers and side hustles.

Getting paid to review AI

Prolific, a company that helps connect AI developers with research participants, is directly involved in paying people to review AI-generated material.


The company pays research participants to assess the quality of AI-generated outputs. Prolific recommends that developers pay participants at least $12 an hour, while minimum pay is set at $8 an hour.

The human reviewers are guided by Prolific’s customers, which include Meta, Google, the University of Oxford and University College London. These customers walk reviewers through the process, informing them about the potentially inaccurate or otherwise harmful material they may come across. Reviewers must consent to take part in the research.

One research participant CNBC spoke to said he has used Prolific on a number of occasions to give his verdict on the quality of AI models.

The participant, who preferred to remain anonymous due to privacy concerns, said he often had to provide feedback on where an AI model went wrong so that it could be corrected or amended and wouldn’t produce unsavory responses.

He came across a number of instances where AI models produced problematic material; on one occasion, he was even confronted with a model trying to convince him to buy drugs.

He was shocked by the suggestion, though the purpose of the study was to test the boundaries of that particular AI and provide it with feedback to ensure it doesn’t cause harm in the future.

The new ‘AI workers’

Phelim Bradley, CEO of Prolific, said that there are plenty of new kinds of “AI workers” who are playing a key role in informing the data that goes into AI models like ChatGPT — and what comes out.

As governments assess how to regulate AI, Bradley said that it’s “important that enough focus is given to topics including the fair and ethical treatment of AI workers such as data annotators, the sourcing and transparency of data used to build AI models, as well as the dangers of bias creeping into these systems due to the way in which they are being trained.”

“If we can get the approach right in these areas, it will go a long way to ensuring the best and most ethical foundations for the AI-enabled applications of the future.”

In July, Prolific raised $32 million in funding from investors including Partech and Oxford Science Enterprises.

The likes of Google, Microsoft and Meta have been battling to dominate generative AI, an emerging field that has attracted commercial interest primarily thanks to its frequently touted productivity gains.

However, this has opened a can of worms for regulators and AI ethicists, who are concerned there is a lack of transparency surrounding how these models reach decisions on the content they produce, and that more needs to be done to ensure that AI is serving human interests — not the other way around.

Hume, a company that uses AI to read human emotions from verbal, facial and vocal expressions, uses Prolific to test the quality of its AI models. The company recruits people via Prolific to take surveys telling it whether an AI-generated response was good or bad.

“Increasingly, the emphasis of researchers in these large companies and labs is shifting towards alignment with human preferences and safety,” Alan Cowen, Hume’s co-founder and CEO, told CNBC.

“There’s more of an emphasis on being able to monitor things in these applications. I think we’re just seeing the very beginning of this technology being released,” he added.

“It makes sense to expect that some of the things that have long been pursued in AI (having personalised tutors and digital assistants; models that can read legal documents and revise them) are actually coming to fruition.”


Another role placing humans at the core of AI development is that of the prompt engineer. These workers figure out which text-based prompts work best to feed into a generative AI model to elicit optimal responses.

According to LinkedIn data released last week, there’s been a rush specifically toward jobs mentioning AI.

Job postings on LinkedIn that mention either AI or generative AI more than doubled globally between July 2021 and July 2023, according to the jobs and networking platform.


Inside one of the first all-female hacker houses in San Francisco


For Molly Cantillon, living in a hacker house wasn’t just a dream, but a necessity.

“I had lived in a few hacker houses before and wanted to replicate that energy,” said Cantillon, 20, co-founder of HackHer House and founder of the startup NOX. “A place where really energetic, hardcore people came together to solve problems. But every house I lived in was mostly male. It was obvious to me that I wanted to do the inverse and build an all-female hacker house that created the same dynamic but with women.”

That experience led her to co-found HackHer House, the first all-female hacker house in the San Francisco Bay Area.

“A hacker house is a shared living space where builders and innovators come together to work on their own projects while collaborating with others,” said Jennifer Li, General Partner at Andreessen Horowitz and sponsor of the HackHer House. “It’s a community that thrives on creativity and resource sharing, making it a cost-effective solution for those in high-rent areas like Silicon Valley, where talented founders and engineers can easily connect and support each other.”

Founded by Cantillon, Zoya Garg, Anna Monaco and Anne Brandes, this house was designed to empower women in a tech world traditionally dominated by men. 

“We’re trying to break stereotypes here,” said Garg, 21, a rising senior at Stanford University. “This house isn’t just about living together; it’s about creating a community where women can thrive in tech.”

Located in North Beach, HackHer House was home this summer to seven women, all of whom share the goal of launching successful ventures in tech. 

Venture capital played a key role in making HackHer House possible. With financial backing, the house offered subsidized rent, allowing the women to focus on their projects instead of struggling with the Bay Area’s notoriously high living costs.

“New grad students face daunting living expenses, with campus costs reaching the high hundreds to over a thousand dollars a month,” said Li. “In the Bay Area, finding a comfortable room typically starts at $2,000, and while prices may have eased slightly, they remain significantly higher than the rest of the U.S. This reality forces many, including founders, to share rooms or crash on friends’ couches just to make ends meet.” 

Hacker houses aren’t new to the Bay Area or cities like New York and London. These live-in incubators serve as homes and workspaces, offering a collaborative environment where tech founders and innovators can share ideas and resources. In a city renowned for tech advancements, hacker houses are viewed as critical for driving the next wave of innovation. By providing affordable housing and a vibrant community, these spaces enable entrepreneurs to thrive in an otherwise cutthroat and expensive market.



Elon Musk’s X will be allowed back online in Brazil after paying one more fine


Brazil’s Federal Supreme Court (STF) suspended Elon Musk’s social network after it failed to comply with orders from Justice Alexandre de Moraes to block the accounts of users under investigation by the Brazilian justice system.

Cris Faga | Nurphoto | Getty Images

X has to pay one last fine before the social network owned by Elon Musk is allowed back online in Brazil, according to a decision issued Friday by Supreme Court Justice Alexandre de Moraes.

The platform was suspended nationwide at the end of August, a decision upheld by a panel of judges on Sept. 2. Earlier this month, X filed paperwork informing Brazil’s supreme court that it is now in compliance with the orders it had previously defied.

As Brazil’s G1 Globo reported, X must now pay a new fine of 10 million reais (about $2 million) for two additional days of non-compliance with the court’s orders. X’s legal representative in Brazil, Rachel de Oliveira, is also required to pay a fine of 300,000 reais.

The case dates back to April, when de Moraes, the minister of Brazil’s supreme court, known as Supremo Tribunal Federal (STF), initiated a probe into Musk and X over alleged obstruction of justice.

Musk had vowed to defy the court’s orders to take down certain accounts in Brazil. He called the court’s actions “censorship,” and railed online against de Moraes, describing the judge as a “criminal” and encouraging the U.S. to end foreign aid to Brazil.

In mid-August, Musk closed down X offices in Brazil. That left his company without a legal representative in the country, a federal requirement for all tech platforms to do business there.

By Aug. 28, de Moraes’ court threatened a ban and fines if X didn’t appoint a legal representative within 24 hours, and if it didn’t comply with takedown requests for accounts the court said had engaged in plots to dox or harm federal agents, among other things.

Earlier this month, the STF froze the business assets of Musk companies, including both X and satellite internet business Starlink, operating in Brazil. The STF said in court filings that it viewed Starlink parent SpaceX and X as companies that worked together as related parties.

Musk wrote in a post on X at the time: “Unless the Brazilian government returns the illegally seized property of X and SpaceX, we will seek reciprocal seizure of government assets too.”

On Aug. 29, 2024, Supreme Court (STF) Minister Alexandre de Moraes ordered the blocking of the accounts of Starlink, another Elon Musk company, to guarantee payment of the fines imposed by the STF over X’s lack of legal representation in Brazil.

Ton Molina | Nurphoto | Getty Images

As head of the STF, de Moraes has long supported federal regulations to rein in hate speech and misinformation online. His views have garnered pushback from tech companies and far-right officials in the country, along with former President Jair Bolsonaro and his supporters.

Bolsonaro is under investigation, suspected of orchestrating an attempted coup in Brazil after losing the 2022 presidential election to current President Luiz Inacio Lula da Silva.

While Musk has called for retribution against de Moraes and Lula, he has worked with and praised Bolsonaro for years. The former president of Brazil authorized SpaceX to deliver satellite internet services commercially in Brazil in 2022.

Musk bills himself as a free speech defender, but his track record suggests otherwise. Under his management, X removed content critical of ruling parties in Turkey and India at those governments’ insistence. X agreed to more than 80% of government takedown requests in 2023, up from a comparable period the prior year, according to an analysis by the tech news site Rest of World.

X faces increased competition in Brazil from social apps such as Meta-owned Threads and Bluesky, which have attracted users during its suspension.

Starlink also faces competition in Brazil from eSpace, a French-American firm that gained permission this year from the National Telecommunications Agency (Anatel) to deliver satellite internet services in the country.

Lukas Darien, an attorney and law professor at Brazil’s Facex University Center, told CNBC that the STF’s enforcement actions against X are likely to change the way large technology companies will view the court.

“There is no change to the law here,” Darien wrote in a message. “But specifically, big tech companies are now aware that the laws will be applied regardless of the size of a business and the magnitude of its reach in the country.”

Musk and representatives for X didn’t immediately respond to a request for comment on Friday.

Late Thursday, X Global Government Affairs posted the following statement:

“X is committed to protecting free speech within the boundaries of the law and we recognize and respect the sovereignty of the countries in which we operate. We believe that the people of Brazil having access to X is essential for a thriving democracy, and we will continue to defend freedom of expression and due process of law through legal processes.”



OpenAI sees roughly $5 billion loss this year on $3.7 billion in revenue


Sam Altman, CEO of OpenAI, at the Hope Global Forums annual meeting in Atlanta on Dec. 11, 2023.

Dustin Chambers | Bloomberg | Getty Images

OpenAI, the creator of ChatGPT, expects about $5 billion in losses on $3.7 billion in revenue this year, CNBC has confirmed.

The company generated $300 million in revenue last month, up 1,700% since the beginning of last year, and expects to bring in $11.6 billion in sales next year, according to a person close to OpenAI who asked not to be named because the numbers are confidential.

The New York Times was first to report on OpenAI’s financials earlier on Friday after viewing company documents. CNBC hasn’t seen the financials.

OpenAI, which is backed by Microsoft, is currently pursuing a funding round that would value the company at more than $150 billion, people familiar with the matter have told CNBC. Thrive Capital is leading the round and plans to invest $1 billion, with Tiger Global planning to join as well.

OpenAI CFO Sarah Friar told investors in an email Thursday that the funding round is oversubscribed and will close by next week. Her note followed a number of key departures, most notably that of technology chief Mira Murati, who announced the previous day that she was leaving OpenAI after six and a half years.

Also this week, news surfaced that OpenAI’s board is considering plans to restructure the firm into a for-profit business. The company will retain its nonprofit segment as a separate entity, a person familiar with the matter told CNBC. The structure would be more straightforward for investors and make it easier for OpenAI employees to realize liquidity, the source said.

OpenAI’s services have exploded in popularity since the company launched ChatGPT in late 2022. The company sells subscriptions to various tools and licenses its GPT family of large language models, which are powering much of the generative AI boom. Running those models requires a massive investment in Nvidia’s graphics processing units.

The Times, citing an analysis by a financial professional who reviewed OpenAI’s documents, reported that the roughly $5 billion in losses this year is tied to costs for running its services, as well as employee salaries and office rent. The costs don’t include equity-based compensation, “among several large expenses not fully explained in the documents,” the paper said.

