A person walks past the entrance to a Google building in Dublin, Feb. 15, 2023.

Artur Widak | Anadolu | Getty Images

After landing internship offers from Amazon, Meta and TikTok, computer science student Chungin “Roy” Lee has decided to move to San Francisco.

But he won’t be joining any of those companies.

Instead, Lee will be building his own startup that offers a peculiar service: helping software engineers use artificial intelligence to cheat in their technical job interviews. 

“Everyone programs nowadays with the help of AI,” said Lee, a 21-year-old student at Columbia University, which has opened disciplinary proceedings against him, according to documents viewed by CNBC. A Columbia spokesperson said the university doesn’t comment on individual students.

“It doesn’t make sense to have an interview format that assumes you don’t have the use of AI,” Lee said.

Lee is at the forefront of a movement among professional coders who are exploiting the limitations of remote job interviews, popularized during the Covid pandemic, by using AI tools off camera to ensure they give hiring managers the best possible answers. 

The hiring process that took hold in the work-from-home era involved candidates interviewing from behind a Zoom screen rather than traveling, sometimes across the country, for on-location interviews, where they could show their coding skills on dry-erase boards.

In late 2022 came the boom in generative AI, with the release of OpenAI’s ChatGPT. Since then, tech companies have laid off tens of thousands of programmers while touting the use of AI to write code. At Google, for example, more than 25% of new code is written by AI, CEO Sundar Pichai told investors in October.

The combination of rapid advancements in AI, mass layoffs of software developers, and a continuing world of remote and hybrid work has created a novel conundrum for recruiters.

The problem has become so prevalent that Pichai suggested during a Google town hall in February that his hiring managers consider returning to in-person job interviews.

Google isn’t the only tech company weighing that idea.

But engineers aren’t slowing down.  

Lee has turned his cheating into a business. His company, Interview Coder, markets itself as a service that helps software developers cheat during job interviews. The internship offers that he landed are the proof he uses to show that his technology works.

AI assistants for virtual interviews can provide written code, make code improvements, and generate detailed explanations of results that candidates can read. The AI tools all work quickly, which is helpful for timed interviews.

Hiring managers are venting their frustrations on social media over the rise of AI cheaters, saying that those who get caught are eliminated from contention. Interviewers say they’re exhausted from having to discern whether candidates are using their own skills or relying on AI.

‘Invisible’ help

The cheating tools rely on generative AI models to provide software engineers with real-time answers to coding problems as they’re presented during interviews. The AI analyzes both written and oral questions and instantaneously generates code. The widgets can also provide the cheaters with explanations for the solutions that they can use in the interview. 

The tools’ most valuable feature, however, might be their secrecy. Interview Coder is invisible to the interviewer.

While candidates are using technology to cheat, employers are observing their behavior during interviews to try to catch them. Interviewers have learned to look for eyes wandering to the side, the reflection of other apps visible on candidates’ glasses, and answers that sound rehearsed or don’t match questions, among other clues.

Perhaps the biggest tell is a simple “Hmm.”

Hiring managers said they’ve noticed that many candidates use the ubiquitous sound to buy themselves time while waiting for their AI tools to finish their work. 

“I’ll hear a pause, then ‘Hmm,’ and all of a sudden, it’s the perfect answer,” said Anna Spearman, founder of Techie Staffing, an agency that helps companies fill technical roles. “There have also been instances where the code looked OK, but they couldn’t describe how they came to the conclusion.”

Henry Kirk, a software developer and co-founder of Studio.init in New York, said this type of cheating used to be easy to catch.

“But now it’s harder to detect,” said Kirk. He said the technology has gotten smart enough to present the answers in a place that doesn’t require users to move their eyes.

“The eye movement used to be the biggest giveaway,” Kirk said. 

Interview Coder’s website says its virtual interview tool is immune to screen detection features that are available to companies on services such as Zoom and Google Meet. Lee markets his product as being webcam-proof.

When Kirk hosted a virtual coding challenge in June for an engineering job he was looking to fill, 700 people applied, he said. He recorded the first interview round to see whether any candidates were cheating, including by using output from large language models.

“More than 50% of them cheated,” he said.

AI cheating tools have improved so much over the last year that they’ve become nearly undetectable, experts said. Other than Lee’s Interview Coder, software engineers can also use programs such as Leetcode Wizard or ChatGPT. 

Kirk said his startup is considering moving to in-person interviews, though he knows that potentially limits the talent pool.

“The problem is now I don’t trust the results as much,” Kirk said. “I don’t know what else to do other than on-site.”

Google CEO Sundar Pichai during an event at the Google for Startups Campus in Warsaw, Poland, Feb. 13, 2025.

Omar Marques | Anadolu | Getty Images

Back to the Googleplex

It’s become a big topic at Google, and one Pichai addressed in February at an internal town hall meeting, where executives read questions and comments that were submitted by employees and summarized by AI, according to an audio recording that was reviewed by CNBC.

One question asked of management was, “Can we get onsite job interviews back?”

“There are many email threads about this topic,” the question said. “If budget is constraint, can we get the candidates to an office or environment we can control?”

Pichai turned to Brian Ong, Google’s vice president of recruiting, who was joining through a virtual livestream.

“Brian, do we do hybrid?” Pichai asked.

Ong said candidates and Google employees have said they prefer virtual job interviews because scheduling a video call is easier than finding a time to meet in available conference rooms. The virtual interview process is about two weeks faster, he added.

He said interviewers are instructed to probe candidates on their answers as a way to decipher whether they actually know what they’re talking about.

“We definitely have more work to do to integrate how AI is now more prevalent in the interview process,” said Ong. He said his recruiting organization is working with Google’s software engineer steering committee to figure out how the company can refine its interviewing process. 

“Given we all work hybrid, I think it’s worth thinking about some fraction of the interviews being in person,” Pichai responded. “I think it’ll help both the candidates understand Google’s culture and I think it’s good for both sides.”

Ong said it’s also an issue “all of our other competitor companies are looking at.”

A Google spokesperson declined to comment beyond what was said at the meeting.

Other companies have already shifted their hiring practices to account for AI cheating. 

Deloitte reinstated in-person interviews for its U.K. graduate program, according to a September report.

Anthropic, the maker of AI chatbot Claude, issued new guidance in its job applications in February, asking candidates not to use AI assistants during the hiring process. 

“While we encourage people to use AI systems during their role to help them work faster and more effectively, please do not use AI assistants during the application process,” the new policy says. “We want to understand your personal interest in Anthropic without mediation through an AI system, and we also want to evaluate your non-AI-assisted communication skills. Please indicate ‘Yes’ if you have read and agree.”

Amazon is also taking steps to combat AI cheating. 

The company asks that candidates acknowledge that they won’t use unauthorized tools during the interview or assessment process, spokesperson Margaret Callahan told CNBC.

Chungin “Roy” Lee, a 21-year-old student at Columbia University, is the founder of Interview Coder, a startup that makes software to help computer programmers cheat in job interviews with the help of AI.

Courtesy of Chungin Lee

‘F*ck Leetcode’

If you visit InterviewCoder.co, the first thing that greets you is large gray type that reads “F*ck Leetcode.”

Leetcode is the program used by many tech companies to evaluate software engineers for technical roles. Tech companies such as Meta, Google and Amazon use it to keep tabs on the thousands of job applicants they evaluate.

“Every time I mention interviews, I get frustrated comments about Leetcode,” wrote Ryan Peterman, a software engineer at Meta, in a newsletter posted on Substack in December. Peterman said Leetcode problems are purposely designed to be much harder than what software engineers would do on the job. Leetcode is the best tool companies have to filter hundreds of applicants, Peterman wrote.

Coders hate Leetcode because it emphasizes algorithmic problem-solving and asks applicants to solve riddles and puzzles that seem irrelevant to the job, according to engineers CNBC spoke with as well as comments from engineers across various social media platforms. Another downside, they said, is that preparation can require hours of work that may not result in a job offer or advancement.

Leetcode served as Lee’s inspiration for building Interview Coder, he said. With the help of AI, he said, he created the service in less than a week.

“I thought I wanted to work at a big tech company and spent 600 hours practicing for Leetcode,” Lee said. “It made me miserable, and I almost stopped programming because of how much I didn’t like it.”

Lee’s social media posts are filled with comments from other programmers expressing similar frustrations. 

“Legend,” several comments said in response to some of his X posts. Others said they enjoyed him “f—ing with big tech.” 

Rival software Leetcode Wizard was also inspired by distaste for Leetcode. 

Isabel De Vries, Leetcode Wizard’s head of marketing, told CNBC in a statement that Leetcode-style interviews fail to accurately measure engineering skills and fail to reflect actual daily engineering work. 

“Our product originates from the same frustrations many of our users are having,” De Vries said.

Leetcode did not respond to CNBC’s request for comment.

Henry Kirk, a software developer and co-founder of Studio.init in New York, is considering moving job interviews to be on site in response to software engineers using AI to cheat in virtual interviews.

Photo by Krista Schlueter for Inc. Magazine

When Kirk, of Studio.init, posted on LinkedIn in February to vent about his frustrations with AI cheating, he received nearly 200 comments. But most argued that employers should allow candidates to use AI in the hiring process.

“Even the SAT lets you use a calculator,” said one comment. “I think you just make it harder to succeed on purpose when in the real world Google and gpt will always be at my fingertips.”

Lee promotes Interview Coder as being “invisible to all screen-recording softwares.” To prove its effectiveness, he recorded himself passing an Amazon interview and posted the video on YouTube. Amazon and the other companies that had made offers to Lee then rescinded them.

Lee got hundreds of comments praising the video, which YouTube removed after CNBC reached out to Amazon and Google for this story. YouTube cited a “copyright claim” by Amazon as the reason for removing the video.

“I as an interviewer am so annoyed by him but as a candidate also adore him,” former Meta staff engineer Yangshun Tay, co-founder of startup GreatFrontEnd, wrote in a LinkedIn post about Lee and his video. “Cheating isn’t right, but oh god I am so tired of these stupid algorithm interviews.”

After YouTube removed the video, Lee uploaded it once again.

Cheating as a service

Lee said he never planned to work at Amazon, Meta or TikTok. He said he wanted to show others just how easy it is to game Leetcode and force companies to find a better alternative.

And, he said, he’s making money in the process. 

Interview Coder is available as a subscription for $60 a month. Lee said the company is on track to hit $1 million in annual recurring revenue by mid-May.

He recently hired the internet influencers who go by the name “Costco Guys” to make a video marketing his software. 

“If you’re struggling to pass your Leetcode interviews and want to get a job at a big tech company, you’ve got to take a look at Interviewcoder.co to pass your interview,” the Costco Guys say in their video. “Because Interview Coder gets five big booms! Boom! Boom! Boom! Boom! Boooooom!”

Leetcode Wizard bills itself on its website as “The #1 AI-powered coding interview cheating app” and “The perfect tool for achieving a ‘Strong Hire’ result in any coding interview and landing your dream job at any FAANG company.” Leetcode Wizard charges 49 euros ($53) a month for a “Pro” subscription. 

More than 16,000 people have used the app, and “several hundred” people have told Leetcode Wizard that they received offers thanks to the software, the company told CNBC. 

“Our product will have succeeded once we can shut it down, when leetcode interviews are a thing of the past,” De Vries said. 

Lee said he’s moving from New York to San Francisco in March to continue building Interview Coder and start working on his next company.

Kirk said he understands software engineers’ frustration with Leetcode and the tech industry. He’s had to use Leetcode numerous times throughout his career, and he was laid off by Google in 2023. He now wants to help out-of-work engineers get jobs.

But he remains worried that AI cheating will persist.

“We need to make sure they know their stuff because these tools still make mistakes,” Kirk said. 

Half of companies currently use AI in the hiring process, and 68% will by the end of 2025, according to an October survey commissioned by ResumeBuilder.com.

Lee said that if companies want to bill themselves as AI-first, they should encourage its use by candidates.

Asked if he worries about software engineers losing the trust of the tech industry, Lee paused. 

“Hmm,” he mumbled.  

“My reaction to that is any company that is slow to respond to market changes will get hurt and that’s the fault of the company,” Lee said. “If there are better tools, then it’s their fault for not resorting to the better alternative to exist. I don’t feel guilty at all for not catering to a company’s inability to adapt.”


Tesla must pay portion of $329 million in damages after fatal Autopilot crash, jury says

A jury in Miami has determined that Tesla should be held partly liable for a fatal 2019 Autopilot crash, and must compensate the family of the deceased and an injured survivor a portion of $329 million in damages.

Tesla’s payout is based on $129 million in compensatory damages, and $200 million in punitive damages against the company.

The jury determined Tesla should be held 33% responsible for the fatal crash. That means the automaker would be responsible for about $42.5 million in compensatory damages. In cases like these, punitive damages are typically capped at three times compensatory damages.

The plaintiffs’ attorneys told CNBC on Friday that because punitive damages were only assessed against Tesla, they expect the automaker to pay the full $200 million, bringing total payments to around $242.5 million.

Tesla said it plans to appeal the decision.

Attorneys for the plaintiffs had asked the jury to award damages based on $345 million in total damages. The trial in the Southern District of Florida started on July 14.

The suit centered around who shouldered the blame for the deadly crash in Key Largo, Florida. A Tesla owner named George McGee was driving his Model S electric sedan while using the company’s Enhanced Autopilot, a partially automated driving system.

While driving, McGee dropped his mobile phone that he was using and scrambled to pick it up. He said during the trial that he believed Enhanced Autopilot would brake if an obstacle was in the way. His Model S accelerated through an intersection at just over 60 miles per hour, hitting a nearby empty parked car and its owners, who were standing on the other side of their vehicle.

Naibel Benavides, who was 22, died on the scene from injuries sustained in the crash. Her body was discovered about 75 feet away from the point of impact. Her boyfriend, Dillon Angulo, survived but suffered multiple broken bones, a traumatic brain injury and psychological effects.

“Tesla designed Autopilot only for controlled access highways yet deliberately chose not to restrict drivers from using it elsewhere, alongside Elon Musk telling the world Autopilot drove better than humans,” Brett Schreiber, counsel for the plaintiffs, said in an e-mailed statement on Friday. “Tesla’s lies turned our roads into test tracks for their fundamentally flawed technology, putting everyday Americans like Naibel Benavides and Dillon Angulo in harm’s way.”

Following the verdict, the plaintiffs’ families hugged each other and their lawyers, and Angulo was “visibly emotional” as he embraced his mother, according to NBC.

Here is Tesla’s response to CNBC:

“Today’s verdict is wrong and only works to set back automotive safety and jeopardize Tesla’s and the entire industry’s efforts to develop and implement life-saving technology. We plan to appeal given the substantial errors of law and irregularities at trial.

Even though this jury found that the driver was overwhelmingly responsible for this tragic accident in 2019, the evidence has always shown that this driver was solely at fault because he was speeding, with his foot on the accelerator – which overrode Autopilot – as he rummaged for his dropped phone without his eyes on the road. To be clear, no car in 2019, and none today, would have prevented this crash.

This was never about Autopilot; it was a fiction concocted by plaintiffs’ lawyers blaming the car when the driver – from day one – admitted and accepted responsibility.”

The verdict comes as Musk, Tesla’s CEO, is trying to persuade investors that his company can pivot into a leader in autonomous vehicles, and that its self-driving systems are safe enough to operate fleets of robotaxis on public roads in the U.S.

Tesla shares dipped 1.8% on Friday and are now down 25% for the year, the biggest drop among tech’s megacap companies.

The verdict could set a precedent for Autopilot-related suits against Tesla. About a dozen active cases are underway focused on similar claims involving incidents where Autopilot or Tesla’s FSD (Full Self-Driving, Supervised) had been in use just before a fatal or injurious crash.

The National Highway Traffic Safety Administration initiated a probe in 2021 into possible safety defects in Tesla’s Autopilot systems. During the course of that investigation, Tesla made changes, including a number of over-the-air software updates.

The agency then opened a second probe, which is ongoing, evaluating whether Tesla’s “recall remedy” to resolve issues with the behavior of its Autopilot, especially around stationary first responder vehicles, had been effective.

The NHTSA has also warned Tesla that its social media posts may mislead drivers into thinking its cars are capable of functioning as robotaxis, even though owners manuals say the cars require hands-on steering and a driver attentive to steering and braking at all times.

A site that tracks Tesla-involved collisions, TeslaDeaths.com, has reported at least 58 deaths resulting from incidents where Tesla drivers had Autopilot engaged just before impact.


Crypto wobbles into August as Trump’s new tariffs trigger risk-off sentiment

A screen showing the price of various cryptocurrencies against the US dollar displayed at a Crypto Panda cryptocurrency store in Hong Kong, China, on Monday, Feb. 3, 2025. 

Lam Yik | Bloomberg | Getty Images

The crypto market slid Friday after President Donald Trump unveiled his modified “reciprocal” tariffs on dozens of countries.

The price of bitcoin showed relative strength, hovering at the flat line while ether, XRP and Binance Coin fell 2% each. Overnight, bitcoin dropped to a low of $114,110.73.

The descent triggered a wave of long liquidations, which forces traders to sell their assets at market price to settle their debts, pushing prices lower. Bitcoin saw $172 million in liquidations across centralized exchanges in the past 24 hours, according to CoinGlass, and ether saw $210 million.

Crypto-linked stocks suffered deeper losses. Coinbase led the way, down 15% following its disappointing second-quarter earnings report. Circle fell 4%, Galaxy Digital lost 2%, and ether treasury company Bitmine Immersion was down 8%. Bitcoin proxy MicroStrategy was down by 5%.

Chart: Bitcoin falls below $115,000

The stock moves came amid a new wave of risk-off sentiment after President Trump issued new tariffs ranging between 10% and 41%, triggering worries about rising inflation and the Federal Reserve’s ability to cut interest rates. In periods of broad-based derisking, crypto tends to get hit as investors pull out of the most speculative and volatile assets. Technical resilience and institutional demand for bitcoin and ether are helping support their prices.

“After running red hot in July, this is a healthy strategic cooldown. Markets aren’t reacting to a crisis, they’re responding to the lack of one,” said Ben Kurland, CEO at crypto research platform DYOR. “With no new macro catalyst on the horizon, capital is rotating out of speculative assets and into safer ground … it’s a calculated pause.”

Crypto is coming off a winning month but could soon hit the brakes amid the new macro uncertainty, and in a month usually characterized by lower trading volumes and increased volatility. Bitcoin gained 8% in July, according to Coin Metrics, while ether surged more than 49%.

Ether ETFs saw more than $5 billion in inflows in July alone (with just a single day of outflows, $1.8 million on July 2), bringing their total cumulative inflows to $9.64 billion to date. Bitcoin ETFs saw $114 million in outflows in the final trading session of July, bringing their monthly inflows to about $6 billion out of a cumulative $55 billion.


Google has dropped more than 50 DEI-related organizations from its funding list

Google CEO Sundar Pichai gestures to the crowd during Google’s annual I/O developers conference in Mountain View, California, on May 20, 2025.

David Paul Morris | Bloomberg | Getty Images

Google has purged more than 50 organizations related to diversity, equity and inclusion, or DEI, from a list of organizations that the tech company provides funding to, according to a new report.

The company has removed a total of 214 groups from its funding list while adding 101, according to a new report from tech watchdog organization The Tech Transparency Project. The watchdog group cites the most recent public list of organizations that receive the most substantial contributions from Google’s U.S. Government Affairs and Public Policy team.

The largest category of purged groups was DEI-related, with a total of 58 groups removed from Google’s funding list, TTP found. The dropped groups had mission statements that included words such as “diversity,” “equity,” “inclusion,” “race,” “activism,” and “women.” Those are also terms that Trump administration officials have reportedly told federal agencies to limit or avoid.

In response to the report, Google spokesperson José Castañeda told CNBC that the list reflects contributions made in 2024 and that it does not reflect all contributions made by other teams within the company.

“We contribute to hundreds of groups from across the political spectrum that advocate for pro-innovation policies, and those groups change from year to year based on where our contributions will have the most impact,” Castañeda said in an email.

Organizations that were removed from Google’s list include the African American Community Service Agency, which seeks to “empower all Black and historically excluded communities”; the Latino Leadership Alliance, which is dedicated to “race equity affecting the Latino community”; and Enroot, which creates out-of-school experiences for immigrant kids. 

The funding purge is the latest step in Google’s retreat from some of its DEI commitments over the last couple of years. That pullback came as the company cut costs to prioritize investments in artificial intelligence, and as the political and legal landscape shifted amid increasing national anti-DEI policies.

Over the past decade, Silicon Valley and other industries used DEI programs to root out bias in hiring, promote fairness in the workplace and advance the careers of women and people of color — demographics that have historically been overlooked in the workplace.

However, the U.S. Supreme Court’s 2023 decision to end affirmative action at colleges led to additional backlash against DEI programs in conservative circles.

President Donald Trump signed an executive order upon taking office in January to end the government’s DEI programs and directed federal agencies to combat what the administration considers “illegal” private-sector DEI mandates, policies and programs. Shortly after, Google’s Chief People Officer Fiona Cicconi told employees that the company would end DEI-related hiring “aspirational goals” due to new federal requirements and Google’s categorization as a federal contractor.

Despite DEI becoming such a divisive term, many companies are continuing the work but using different language or rolling the efforts under less-charged terminology, like “learning” or “hiring.”

Even Google CEO Sundar Pichai affirmed the importance of diversity in the company’s workforce at an all-hands meeting in March.

“We’re a global company, we have users around the world, and we think the best way to serve them well is by having a workforce that represents that diversity,” Pichai said at the time.

One of the groups dropped from Google’s contributions list is the National Network to End Domestic Violence, which provides training, assistance, and public awareness campaigns on the issue of violence against women, the TTP report found. The group had been on Google’s list of funded organizations for at least nine years and continues to name the company as one of its corporate partners.

Google said it still gave $75,000 to the National Network to End Domestic Violence in 2024 but did not say why the group was removed from the public contributions list.

