
Sam Altman, chief executive officer and co-founder of OpenAI, speaks during a Senate Judiciary Subcommittee hearing in Washington, DC, US, on Tuesday, May 16, 2023. Congress is debating the potential and pitfalls of artificial intelligence as products like ChatGPT raise questions about the future of creative industries and the ability to tell fact from fiction. 

Eric Lee | Bloomberg | Getty Images

This past week, OpenAI CEO Sam Altman charmed a room full of politicians in Washington, D.C., over dinner, then testified for nearly three hours about the potential risks of artificial intelligence at a Senate hearing.

After the hearing, he summed up his stance on AI regulation, using terms that are not widely known among the general public.

“AGI safety is really important, and frontier models should be regulated,” Altman tweeted. “Regulatory capture is bad, and we shouldn’t mess with models below the threshold.”

In this case, “AGI” refers to “artificial general intelligence.” As a concept, it’s used to mean a significantly more advanced AI than is currently possible, one that can do most things as well or better than most humans, including improving itself.

“Frontier models” is a way to talk about the AI systems that are the most expensive to produce and which analyze the most data. Large language models, like OpenAI’s GPT-4, are frontier models, as compared to smaller AI models that perform specific tasks like identifying cats in photos.

Most people agree that there need to be laws governing AI as the pace of development accelerates.

“Machine learning, deep learning, for the past 10 years or so, it developed very rapidly. When ChatGPT came out, it developed in a way we never imagined, that it could go this fast,” said My Thai, a computer science professor at the University of Florida. “We’re afraid that we’re racing into a more powerful system that we don’t fully comprehend and anticipate what it can do.”

But the language around this debate reveals two major camps among academics, politicians, and the technology industry. Some are more concerned about what they call “AI safety.” The other camp is worried about what they call “AI ethics.”

When Altman spoke to Congress, he mostly avoided jargon, but his tweet suggested he’s mostly concerned about AI safety — a stance shared by many industry leaders at companies like Altman-run OpenAI, Google DeepMind and well-capitalized startups. They worry about the possibility of building an unfriendly AGI with unimaginable powers. This camp believes we need urgent attention from governments to regulate development and prevent an untimely end to humanity — an effort similar to nuclear nonproliferation.

“It’s good to hear so many people starting to get serious about AGI safety,” DeepMind co-founder and current Inflection AI CEO Mustafa Suleyman tweeted on Friday. “We need to be very ambitious. The Manhattan Project cost 0.4% of U.S. GDP. Imagine what an equivalent programme for safety could achieve today.”

But much of the discussion in Congress and at the White House about regulation is through an AI ethics lens, which focuses on current harms.

From this perspective, governments should enforce transparency around how AI systems collect and use data, restrict its use in areas that are subject to anti-discrimination law like housing or employment, and explain how current AI technology falls short. The White House’s AI Bill of Rights proposal from late last year included many of these concerns.

This camp was represented at the congressional hearing by IBM Chief Privacy Officer Christina Montgomery, who told lawmakers she believes each company working on these technologies should have an “AI ethics” point of contact.

“There must be clear guidance on AI end uses or categories of AI-supported activity that are inherently high-risk,” Montgomery told Congress.

How to understand AI lingo like an insider


It’s not surprising that the debate around AI has developed its own lingo. The field started as a technical academic discipline.

Much of the software being discussed today is based on so-called large language models (LLMs), which use graphics processing units (GPUs) to predict statistically likely sentences, images, or music, a process called “inference.” Of course, AI models need to be built first, in a data analysis process called “training.”
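
For readers curious what “training” versus “inference” looks like in code, here is a minimal sketch using the open-source Hugging Face transformers library, with a small pretrained model standing in for a frontier-scale system; the model choice and prompt are illustrative only.

```python
# Minimal sketch: "training" already happened when GPT-2's weights were fit
# to a large text corpus; what runs here is "inference," predicting
# statistically likely continuations of a prompt.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial general intelligence is",  # illustrative prompt
    max_new_tokens=20,
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```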

But other terms, especially from AI safety proponents, are more cultural in nature, and often refer to shared references and in-jokes.

For example, AI safety people might say that they’re worried about turning into a paper clip. That refers to a thought experiment popularized by philosopher Nick Bostrom, which posits that a super-powerful AI — a “superintelligence” — could be given a mission to make as many paper clips as possible, and logically decide to kill humans and make paper clips out of their remains.

OpenAI’s logo is inspired by this tale, and the company has even made paper clips in the shape of its logo.

Another concept in AI safety is the “hard takeoff” or “fast takeoff,” a phrase suggesting that by the time someone succeeds at building an AGI, it will already be too late to save humanity.

Sometimes, this idea is described in terms of an onomatopoeia — “foom” — especially among critics of the concept.

“It’s like you believe in the ridiculous hard take-off ‘foom’ scenario, which makes it sound like you have zero understanding of how everything works,” tweeted Meta AI chief Yann LeCun, who is skeptical of AGI claims, in a recent debate on social media.

AI ethics has its own lingo, too.

When describing the limitations of current LLM systems, which cannot understand meaning but merely produce human-seeming language, AI ethics people often compare them to “stochastic parrots.”

The analogy, coined by Emily Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell in a paper written while some of the authors were at Google, emphasizes that while sophisticated AI models can produce realistic seeming text, the software doesn’t understand the concepts behind the language — like a parrot.
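
To make the analogy concrete, the toy sketch below builds a bigram “parrot”: it strings together statistically likely next words with no grasp of what any of them mean. The training sentence is made up for illustration; real LLMs are vastly larger but rest on the same statistical idea.

```python
# Toy "stochastic parrot": sample whichever word tends to follow the current
# one. The output can sound fluent, but nothing is understood.
import random
from collections import defaultdict

corpus = "the parrot repeats the words it hears and the parrot sounds fluent".split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

word, output = "the", ["the"]
for _ in range(8):
    word = random.choice(next_words.get(word, corpus))
    output.append(word)

print(" ".join(output))
```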

When these LLMs invent incorrect facts in responses, they’re “hallucinating.”

One topic IBM’s Montgomery pressed during the hearing was “explainability” in AI results. When researchers and practitioners cannot point to the exact numbers and path of operations that a large AI model uses to derive its output, biases hidden inside the model can go undetected.

“You have to have explainability around the algorithm,” said Adnan Masood, AI architect at UST-Global. “Previously, if you look at the classical algorithms, it tells you, ‘Why am I making that decision?’ Now with a larger model, they’re becoming this huge model, they’re a black box.”
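
As a rough illustration of the contrast Masood draws, the sketch below uses scikit-learn and a standard demo dataset to show what a classical, interpretable model offers: per-feature weights a person can read off and question. It is a generic example, not IBM’s or UST’s tooling, and large neural networks provide no comparably direct readout.

```python
# Classical-model "explainability": each feature's learned weight shows how
# it pushes the prediction up or down -- the kind of "why am I making that
# decision?" readout a black-box LLM does not expose.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

weights = sorted(zip(X.columns, model.coef_[0]), key=lambda t: -abs(t[1]))
for name, weight in weights[:5]:
    print(f"{name:25s} {weight:+.3f}")
```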

Another important term is “guardrails,” which encompasses the software and policies that Big Tech companies are currently building around AI models to ensure that they don’t leak data or produce disturbing content, which is often called “going off the rails.”

It can also refer to specific applications that protect AI software from going off topic, like Nvidia’s “NeMo Guardrails” product.
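
Conceptually, a guardrail is a layer of checks wrapped around a model’s inputs and outputs. The sketch below is a deliberately simplified, hypothetical example of output screening (the blocklist and function names are invented for illustration); production systems such as Nvidia’s NeMo Guardrails use far richer policy languages.

```python
# Hypothetical, simplified guardrail: screen a model's reply before it
# reaches the user, refusing instead of letting it go "off the rails."
BLOCKED_PHRASES = ("credit card number", "social security number")

def apply_guardrail(model_output: str) -> str:
    lowered = model_output.lower()
    if any(phrase in lowered for phrase in BLOCKED_PHRASES):
        return "Sorry, I can't help with that."
    return model_output

print(apply_guardrail("Here is a recipe for banana bread."))
print(apply_guardrail("Sure, here is a stolen credit card number ..."))
```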

“Our AI ethics board plays a critical role in overseeing internal AI governance processes, creating reasonable guardrails to ensure we introduce technology into the world in a responsible and safe manner,” Montgomery said this week.

Sometimes these terms can have multiple meanings, as in the case of “emergent behavior.”

A recent paper from Microsoft Research called “Sparks of Artificial General Intelligence” claimed to identify several “emergent behaviors” in OpenAI’s GPT-4, such as the ability to draw animals using a programming language for graphics.

But it can also describe what happens when simple changes are made at a very big scale — like the patterns birds make when flying in flocks or, in AI’s case, what happens when ChatGPT and similar products are used by millions of people, such as widespread spam or disinformation.



Bitcoin falls over 5% as volatility continues after Trump’s bitcoin reserve plan


Jonathan Raa | Nurphoto | Getty Images

Bitcoin fell on Monday as volatility in the price of the world’s largest cryptocurrency continues following an executive order signed by President Donald Trump to create a strategic bitcoin reserve for the United States.

Bitcoin was trading at $81,712, down over 5% but off earlier lows, at 9:42 a.m. Singapore time, according to Coin Metrics.

The reserve will be funded by coins seized in criminal and civil forfeiture cases, and there are no plans for the U.S. government to buy more bitcoin. After the strategic reserve announcement last Thursday, crypto prices declined as investors were disappointed it wasn’t a more aggressive program.

Other cryptocurrency prices also dropped on Monday. Both ether and XRP were down about 7.5% at around 9:43 a.m. Singapore time.

Some investors, however, said the move to establish a reserve was bullish in the long term.

“I absolutely think the market has this wrong,” Matt Hougan, chief investment officer at Bitwise Asset Management, told CNBC’s “Squawk Box Asia” on Monday. “The market is short-term disappointed” that the government didn’t say it was immediately going to start acquiring 100,000 or 200,000 bitcoin, he added.

Hougan pointed towards comments on X from White House Crypto and AI Czar David Sacks, who said the U.S. would look for “budget-neutral strategies for acquiring additional bitcoin, provided that those strategies have no incremental costs on American taxpayers.”

“I think the right question to ask is: did this executive order make it more likely that in the future, bitcoin will be a geopolitically important currency or asset? Will other governments look to follow the U.S.’s lead and build their own strategic reserve? And to me, the answer to that is emphatically yes,” Hougan said.

“The reason that question matters is that’s the question that determines if bitcoin is $80,000 a coin or $1 million a coin.”

Hougan called the decline in crypto prices a “short-term setback.”

“I think the market will soon find its footing and realize that actually this is incredibly bullish long term for this asset and for crypto as a whole,” he said.


Meet the 21-year-old helping coders use AI to cheat in Google and other tech job interviews


A person walks past the entrance to a Google building in Dublin, Feb. 15, 2023.

Artur Widak | Anadolu | Getty Images

After landing internship offers from Amazon, Meta and TikTok, computer science student Chungin “Roy” Lee has decided to move to San Francisco.

But he won’t be joining any of those companies.

Instead, Lee will be building his own startup that offers a peculiar service: helping software engineers use artificial intelligence to cheat in their technical job interviews. 

“Everyone programs nowadays with the help of AI,” said Lee, a 21-year-old student at Columbia University, which has opened disciplinary proceedings against him, according to documents viewed by CNBC. A Columbia spokesperson said the university doesn’t comment on individual students.

“It doesn’t make sense to have an interview format that assumes you don’t have the use of AI,” Lee said.

Lee is at the forefront of a movement among professional coders who are exploiting the limitations of remote job interviews, popularized during the Covid pandemic, by using AI tools off camera to ensure they give hiring managers the best possible answers. 

The hiring process that took hold in the work-from-home era involved candidates interviewing from behind a Zoom screen rather than traveling, sometimes across the country, for on-location interviews, where they could show their coding skills on dry-erase boards.

In late 2022 came the boom in generative AI, with the release of OpenAI’s ChatGPT. Since then, tech companies have laid off tens of thousands of programmers while touting the use of AI to write code. At Google, for example, more than 25% of new code is written by AI, CEO Sundar Pichai told investors in October.

The combination of rapid advancements in AI, mass layoffs of software developers, and a continuing world of remote and hybrid work has created a novel conundrum for recruiters.

The problem has become so prevalent that Pichai suggested during a Google town hall in February that his hiring managers consider returning to in-person job interviews.

Google isn’t the only tech company weighing that idea.

But engineers aren’t slowing down.  

Lee has turned his cheating into a business. His company, Interview Coder, markets itself as a service that helps software developers cheat during job interviews. The internship offers that he landed are the proof he uses to show that his technology works.

AI assistants for virtual interviews can provide written code, make code improvements, and generate detailed explanations of results that candidates can read. The AI tools all work quickly, which is helpful for timed interviews.

Hiring managers are venting their frustrations on social media over the rise of AI cheaters, saying that those who get caught are eliminated from contention. Interviewers say they’re exhausted from having to discern whether candidates are using their own skills or relying on AI.


‘Invisible’ help

The cheating tools rely on generative AI models to provide software engineers with real-time answers to coding problems as they’re presented during interviews. The AI analyzes both written and oral questions and instantaneously generates code. The widgets can also provide the cheaters with explanations for the solutions that they can use in the interview. 

The tools’ most valuable feature, however, might be their secrecy. Interview Coder is invisible to the interviewer.

While candidates are using technology to cheat, employers are observing their behavior during interviews to try to catch them. Interviewers have learned to look for eyes wandering to the side, the reflection of other apps visible on candidates’ glasses, and answers that sound rehearsed or don’t match questions, among other clues.

Perhaps the biggest tell is a simple “Hmm.”

Hiring managers said they’ve noticed that many candidates use the ubiquitous sound to buy themselves time while waiting for their AI tools to finish their work. 

“I’ll hear a pause, then ‘Hmm,’ and all of a sudden, it’s the perfect answer,” said Anna Spearman, founder of Techie Staffing, an agency that helps companies fill technical roles. “There have also been instances where the code looked OK, but they couldn’t describe how they came to the conclusion.”

Henry Kirk, a software developer and co-founder of Studio.init in New York, said this type of cheating used to be easy to catch.

“But now it’s harder to detect,” said Kirk. He said the technology has gotten smart enough to present the answers in a place that doesn’t require users to move their eyes.

“The eye movement used to be the biggest giveaway,” Kirk said. 

Interview Coder’s website says its virtual interview tool is immune to screen detection features that are available to companies on services such as Zoom and Google Meet. Lee markets his product as being webcam-proof.

When Kirk hosted a virtual coding challenge for an engineering job he was looking to fill in June, 700 people applied, he said. Kirk recorded the process of the first interview round. He was looking to see if any candidates were cheating in ways that included using results from large language models.

“More than 50% of them cheated,” he said.

AI cheating tools have improved so much over the last year that they’ve become nearly undetectable, experts said. Other than Lee’s Interview Coder, software engineers can also use programs such as Leetcode Wizard or ChatGPT. 

Kirk said his startup is considering moving to in-person interviews, though he knows that potentially limits the talent pool.

“The problem is now I don’t trust the results as much,” Kirk said. “I don’t know what else to do other than on-site.”

Google CEO Sundar Pichai during an event at the Google for Startups Campus in Warsaw, Poland, Feb. 13, 2025.

Omar Marques | Anadolu | Getty Images

Back to the Googleplex

It’s become a big topic at Google, and one Pichai addressed in February at an internal town hall meeting, where executives read questions and comments that were submitted by employees and summarized by AI, according to an audio recording that was reviewed by CNBC.

One question asked of management was, “Can we get onsite job interviews back?”

“There are many email threads about this topic,” the question said. “If budget is constraint, can we get the candidates to an office or environment we can control?”

Pichai turned to Brian Ong, Google’s vice president of recruiting, who was joining through a virtual livestream.

“Brian, do we do hybrid?” Pichai asked.

Ong said candidates and Google employees have said they prefer virtual job interviews because scheduling a video call is easier than finding a time to meet in available conference rooms. The virtual interview process is about two weeks faster, he added.

He said interviewers are instructed to probe candidates on their answers as a way to decipher whether they actually know what they’re talking about.

“We definitely have more work to do to integrate how AI is now more prevalent in the interview process,” said Ong. He said his recruiting organization is working with Google’s software engineer steering committee to figure out how the company can refine its interviewing process. 

“Given we all work hybrid, I think it’s worth thinking about some fraction of the interviews being in person,” Pichai responded. “I think it’ll help both the candidates understand Google’s culture and I think it’s good for both sides.”

Ong said it’s also an issue “all of our other competitor companies are looking at.”

A Google spokesperson declined to comment beyond what was said at the meeting.

Other companies have already shifted their hiring practices to account for AI cheating. 

Deloitte reinstated in-person interviews for its U.K. graduate program, according to a September report.

Anthropic, the maker of AI chatbot Claude, issued new guidance in its job applications in February, asking candidates not to use AI assistants during the hiring process. 

“While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process,” the new policy says. “We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate ‘Yes’ if you have read and agree.”

Amazon is also taking steps to combat AI cheating. 

The company asks that candidates acknowledge that they won’t use unauthorized tools during the interview or assessment process, spokesperson Margaret Callahan told CNBC.

Chungin “Roy” Lee, a 21-year-old student at Columbia University, is the founder of Interview Coder, a startup that makes software to help computer programmers cheat in job interviews with the help of AI.

Courtesy of Chungin Lee

‘F*ck Leetcode’

If you visit InterviewCoder.co, the first thing that greets you is large gray type that reads “F*ck Leetcode.”

Leetcode is the program used by many tech companies to evaluate software engineers for technical roles. Tech companies such as Meta, Google and Amazon use it to keep tabs on the thousands of job applicants they evaluate.

“Every time I mention interviews, I get frustrated comments about Leetcode,” wrote Ryan Peterman, a software engineer at Meta, in a newsletter posted on Substack in December. Peterman said Leetcode problems are purposely designed to be much harder than what software engineers would do on the job. Leetcode is the best tool companies have to filter hundreds of applicants, Peterman wrote.

Coders said they hate Leetcode because it emphasizes algorithmic problem-solving and asks applicants to solve riddles and puzzles that seem irrelevant to the job, according to those CNBC spoke with as well as comments CNBC found from engineers across various social media platforms. Another downside is that it sometimes requires hours of work that may not result in a job offer or advancement, they said.

Leetcode served as Lee’s inspiration for building Interview Coder, he said. With the help of AI, he said, he created the service in less than a week.

“I thought I wanted to work at a big tech company and spent 600 hours practicing for Leetcode,” Lee said. “It made me miserable, and I almost stopped programming because of how much I didn’t like it.”

Lee’s social media posts are filled with comments from other programmers expressing similar frustrations. 

“Legend,” several comments said in response to some of his X posts. Others said they enjoyed him “f—ing with big tech.” 

Rival software Leetcode Wizard was also inspired by distaste for Leetcode. 

Isabel De Vries, Leetcode Wizard’s head of marketing, told CNBC in a statement that Leetcode-style interviews fail to accurately measure engineering skills and fail to reflect actual daily engineering work. 

“Our product originates from the same frustrations many of our users are having,” De Vries said.

Leetcode did not respond to CNBC’s request for comment.

Henry Kirk, a software developer and co-founder of Studio.init in New York, is considering moving job interviews to be on site in response to software engineers using AI to cheat in virtual interviews.

Photo by Krista Schlueter for Inc. Magazine

When Kirk, of Studio.init, posted on LinkedIn in February to vent about his frustrations with AI cheating, he received nearly 200 comments. But most argued that employers should allow candidates to use AI in the hiring process.

“Even the SAT lets you use a calculator,” said one comment. “I think you just make it harder to succeed on purpose when in the real world Google and gpt will always be at my fingertips.”

Lee promotes Interview Coder as being “invisible to all screen-recording softwares.” To prove its effectiveness, he recorded himself passing an Amazon interview and posted the video on YouTube. Amazon and the other companies that had made offers to Lee then rescinded them.

Lee got hundreds of comments praising the video, which YouTube removed after CNBC reached out to Amazon and Google for this story. YouTube cited a “copyright claim” by Amazon as the reason for removing the video.

“I as an interviewer am so annoyed by him but as a candidate also adore him,” former Meta staff engineer Yangshun Tay, co-founder of startup GreatFrontEnd, wrote in a LinkedIn post about Lee and his video. “Cheating isn’t right, but oh god I am so tired of these stupid algorithm interviews.”

After YouTube removed the video, Lee uploaded it once again.

Cheating as a service

Lee said he never planned to work at Amazon, Meta or TikTok. He said he wanted to show others just how easy it is to game Leetcode and force companies to find a better alternative.

And, he said, he’s making money in the process. 

Interview Coder is available as a subscription for $60 a month. Lee said the company is on track to hit $1 million in annual recurring revenue by mid-May.

He recently hired the internet influencers who go by the name “Costco Guys” to make a video marketing his software. 

“If you’re struggling to pass your Leetcode interviews and want to get a job at a big tech company, you’ve got to take a look at Interviewcoder.co to pass your interview,” the Costco Guys say in their video. “Because Interview Coder gets five big booms! Boom! Boom! Boom! Boom! Boooooom!”

Leetcode Wizard bills itself on its website as “The #1 AI-powered coding interview cheating app” and “The perfect tool for achieving a ‘Strong Hire’ result in any coding interview and landing your dream job at any FAANG company.” Leetcode Wizard charges 49 euros ($53) a month for a “Pro” subscription. 

More than 16,000 people have used the app, and “several hundred” people have told Leetcode Wizard that they received offers thanks to the software, the company told CNBC. 

“Our product will have succeeded once we can shut it down, when leetcode interviews are a thing of the past,” De Vries said. 

Lee said he’s moving from New York to San Francisco in March to continue building Interview Coder and start working on his next company.

Kirk said he understands software engineers’ frustration with Leetcode and the tech industry. He’s had to use Leetcode numerous times throughout his career, and he was laid off by Google in 2023. He now wants to help out-of-work engineers get jobs.

But he remains worried that AI cheating will persist.

“We need to make sure they know their stuff because these tools still make mistakes,” Kirk said. 

Half of companies currently use AI in the hiring process, and 68% will by the end of 2025, according to an October survey commissioned by ResumeBuilder.com.

Lee said that if companies want to bill themselves as AI-first, they should encourage its use by candidates.

Asked if he worries about software engineers losing the trust of the tech industry, Lee paused. 

“Hmm,” he mumbled.  

“My reaction to that is any company that is slow to respond to market changes will get hurt and that’s the fault of the company,” Lee said. “If there are better tools, then it’s their fault for not resorting to the better alternative to exist. I don’t feel guilty at all for not catering to a company’s inability to adapt.”

WATCH: How DeepSeek supercharged AI’s distillation problem



How Facebook Marketplace is keeping young people on the platform


Meta’s Facebook remains influential globally, but younger users are logging in less. Only 32% of U.S. teens use Facebook today, down from 71% in 2014, according to a 2024 Pew Research study. However, Facebook’s resale platform, Marketplace, is one reason young people stay on the platform.

“I only use Facebook for Marketplace,” said Mirka Arevalo, a student at the University at Buffalo. “I go in knowing what I want, not just casually browsing.”

Launched in 2016, Facebook Marketplace has grown into one of Meta’s biggest success stories. With 1.1 billion users across 70 countries, it competes with eBay and Craigslist, according to BusinessDasher.

“Marketplace is the flea market of the internet,” said Charles Lindsay, an associate professor of marketing at the University at Buffalo. “There’s a massive amount of consumer-to-consumer business.”

Unlike eBay or Etsy, Marketplace doesn’t charge listing fees, and local pickups help avoid shipping costs, according to Facebook’s Help Center.

“Sellers love that Marketplace has no fees,” said Jasmine Enberg, vice president and principal analyst at eMarketer. “Introducing fees could push users elsewhere.”

Marketplace also taps into the booming resale market, projected to hit $350 billion by 2027, according to ThredUp.

“Younger buyers are drawn to affordability and sustainability,” said Yoo-Kyoung Seock, a professor at the College of Family and Consumer Sciences at the University of Georgia. “Marketplace offers both.”

A key advantage is trust; users’ Facebook profiles make transactions feel safer than on anonymous platforms like Craigslist, according to Seock.

In January 2025, eBay partnered with Facebook Marketplace, allowing select eBay listings to appear on Marketplace in the U.S., Germany, and France. Analysts project this will drive an additional $1.6 billion in sales for eBay by the end of 2025, according to Wells Fargo.

“This partnership boosts the number of buyers and sellers,” said Enberg. “It could also solve some of Marketplace’s trust issues.”

While Facebook doesn’t charge listing fees, it does take a 10% cut of sales made through its shipping service, according to Facebook’s Help Center.

Marketplace isn’t a major direct revenue source, but it keeps users engaged.

“It’s one of the least monetized parts of Facebook,” said Enberg. “But it brings in engagement, which advertisers value.”

Meta relied on ads for over 97% of its $164.5 billion in revenue in 2024.

“Marketplace helps Meta prove younger users still log in,” said Enberg. “Even if they’re buying and selling instead of scrolling.”

By keeping users engaged, Marketplace plays a key role in Facebook’s long-term strategy, ensuring the platform remains relevant in a changing digital landscape.

Watch the video to learn more.
