Instagram’s Reels video feed reportedly recommends risqué footage of children, as well as overtly sexual adult videos, to adult users who follow children, with some of the disturbing content placed next to ads from major companies.

In one instance, an ad promoting the dating app Bumble was sandwiched between a video of a person caressing a life-size latex doll and another clip of an underage girl exposing her midriff, according to the Wall Street Journal, which set up test accounts to probe Instagram’s algorithm.

In other cases, Mark Zuckerberg’s Meta-owned app showed a Pizza Hut commercial next to a video of a man lying in bed with a purported 10-year-old girl, while a Walmart ad was displayed next to a video of a woman exposing her crotch.

The shocking results were revealed as Meta faces a sweeping legal challenge from dozens of states alleging the company has failed to prevent underage users from joining Instagram or to shield them from harmful content.

It also comes on the heels of dozens of blue-chip firms pulling their advertising from Elon Musk’s X platform after their promos appeared next to posts touting Adolf Hitler and the Nazi party. The exodus could reportedly cost the site formerly known as Twitter as much as $75 million in revenue this year.

Meta now faces its own advertiser revolt after some of the companies whose ads appeared in the tests suspended advertising across all of its platforms, which include Facebook, following Monday’s report by the Journal.

The Journal’s test accounts followed only young gymnasts, cheerleaders and other teen and preteen influencers active on the platform.

Thousands of followers of such young people’s accounts often include large numbers of adult men, and many of the accounts that followed those children also demonstrated interest in sex content related to both children and adults, the outlet found.

The Reels feed presented to the test accounts became even more disturbing after the Journal’s reporters followed adult users who were already following child-related content.

The algorithm purportedly displayed a mix of adult pornography and child-sexualizing material, such as a video of a clothed girl caressing her torso and another of a child pantomiming a sex act.

When reached for comment, a Meta spokesperson argued the tests were a “manufactured experience” that does not reflect the experience of most users.

“We don’t want this kind of content on our platforms and brands don’t want their ads to appear next to it,” a Meta spokesperson said in a statement. “We continue to invest aggressively to stop it – and report every quarter on the prevalence of such content, which remains very low.”

“Our systems are effective at reducing harmful content and we’ve invested billions in safety, security and brand suitability solutions,” the spokesperson added. “We tested Reels for nearly a year before releasing it widely – with a robust set of safety controls and measures.”

Meta noted that it has approximately 40,000 employees globally dedicated to ensuring the safety and integrity of its platforms.

The company asserted that the spread of such content is relatively small, with just three to four views of policy-violating posts for every 10,000 views on Instagram.

However, current and former Meta employees reportedly told the Journal that the tendency of the company’s algorithms to present child sex content to users was known internally to be a problem even before Reels was released in 2020 to compete with the popular video app TikTok.

The Journal’s findings followed a June report by the publication that revealed Instagram’s recommendation algorithms fueled what it described as a vast pedophile network that advertised the sale of child-sex material on the platform.

That report prompted Meta to block access to thousands of additional search terms on Instagram and to set up an internal task force to crack down on the illegal content.

Nonetheless, several major companies expressed outrage or disappointment over the company’s handling of their ads, including Match Group, the parent company of Tinder, which has reportedly pulled all ads for its major brands from Meta-owned apps.

Most companies sign deals stipulating that their ads should not appear next to sexually charged or explicit content.

“We have no desire to pay Meta to market our brands to predators or place our ads anywhere near this content,” Match spokeswoman Justine Sacco said in a statement.

Bumble spokesman Robbie McKay said the dating app would never intentionally advertise adjacent to inappropriate content and has since suspended advertising on Meta platforms.

A Disney representative said the company had brought the problem to the highest levels at Meta to be addressed, while Hinge said it would push Meta to take more action.

The Canadian Center for Child Protection, a nonprofit dedicated to child safety, reportedly obtained similar results after conducting its own tests. The Post has reached out to the group for comment.

“Time and time again, we’ve seen recommendation algorithms drive users to discover and then spiral inside of these online child exploitation communities,” the center’s executive director, Lianna McDonald, told the Journal.

Australia is trying to enforce the first teen social media ban. Governments worldwide are watching.


In this photo illustration, iPhone screens display various social media apps on February 9, 2025 in Bath, England.

Australia on Wednesday became the first country to formally bar users under the age of 16 from accessing major social media platforms, a move expected to be closely monitored by global tech companies and policymakers around the world.

Canberra’s ban, which came into effect from midnight local time, targets 10 major services, including Alphabet‘s YouTube, Meta’s Instagram, ByteDance’s TikTok, Reddit, Snapchat and Elon Musk’s X.

The controversial rule requires these platforms to take “reasonable steps” to prevent underage access, using age-verification methods such as inference from online activity, facial estimation via selfies, uploaded IDs, or linked bank details.

All targeted platforms had agreed to comply with the policy to some extent. Elon Musk’s X had been one of the last holdouts, but signaled on Wednesday that it would comply. 

The policy means millions of Australian children are expected to have lost access to their social media accounts.

However, the impact of the policy could be even wider, as it will set a benchmark for other governments considering teen social media bans, including Denmark, Norway, France, Spain, Malaysia and New Zealand. 

Controversial rollout

Ahead of the legislation’s passage last year, a YouGov survey found that 77% of Australians backed the under-16 social media ban. Still, the rollout has faced some resistance since becoming law.

Supporters of the bill have argued it safeguards children from social media-linked harms, including cyberbullying, mental health issues, and exposure to predators and pornography. 

Among those welcoming the official ban on Wednesday was Jonathan Haidt, a social psychologist and author of “The Anxious Generation,” a 2024 best-selling book that linked a growing mental health crisis to smartphone and social media usage, especially among the young.


In a post on social media platform X, Haidt commended policymakers in Australia for “freeing kids under 16 from the social media trap.”

“There will surely be difficulties in the early months, but the world is rooting for your success, and many other nations will follow,” he added. 

On the other hand, opponents contend that the ban infringes on freedoms of expression and access to information, raises privacy concerns through invasive age verification, and represents excessive government intervention that undermines parental responsibility.

Those critics include groups like Amnesty Tech, which said in a statement Tuesday that the ban was an ineffective fix that ignored the rights and realities of younger generations.

“The most effective way to protect children and young people online is by protecting all social media users through better regulation, stronger data protection laws and better platform design,” said Amnesty Tech Programme Director Damini Satija.


Meanwhile, David Inserra, a fellow for free expression and technology at the Cato Institute, warned in a blog post that children would evade the new policy by shifting to new platforms, private apps like Telegram, or VPNs, driving them to “more isolated communities and platforms with fewer protections” where monitoring is harder.

Tech companies like Google have also warned that the policy could be extremely difficult to enforce, while government-commissioned reports have pointed to inaccuracies in age-verification technology, such as selfie-based age-guessing software.

Indeed, on Wednesday, local reports in Australia indicated that many children had already bypassed the ban, with age-assurance tools misclassifying users, and workarounds such as VPNs proving effective.

However, Australian Prime Minister Anthony Albanese had attempted to preempt these issues, acknowledging in an opinion piece on Sunday that the system would not work flawlessly from the start, likening it to liquor laws.

“The fact that teenagers occasionally find a way to have a drink doesn’t diminish the value of having a clear national standard,” he added.

Experts told CNBC that the rollout is expected to continue to face challenges and that regulators would need to take a trial-and-error approach. 

“There’s a fair amount of teething problems around it. Many young people have been posting on TikTok that they successfully evaded the age limitations and that’s to be expected,” said Terry Flew, a professor of digital communication and culture at the University of Sydney. 

“You were never going to get 100% disappearance of every person under the age of 16 from every one of the designated platforms on day one,” he added.

Global implications

Experts told CNBC that the policy rollout in Australia will be closely watched by tech firms and lawmakers worldwide, as other countries consider their own moves to ban or restrict teen social media usage. 

“Governments are responding to how public expectations have changed about the internet and social media, and the companies have not been particularly responsive to moral suasion,” said Flew. 

“We see similar pressures are emerging, particularly, but not exclusively in Europe,” he added.  

The European Parliament passed a non-binding resolution in November advocating a minimum age of 16 for social media access, while allowing parental consent for 13- to 15-year-olds.

The bloc has also proposed banning addictive features such as infinite scrolling and auto-play for minors, which could lead to EU-wide enforcement against non-compliant platforms.


Outside Europe, Malaysia and New Zealand have also been advancing proposals to ban social media for children under 16.

However, laws elsewhere are expected to differ from Australia’s, whether that be regarding age restrictions or age verification processes. 

“My hope is that countries that are looking at implementing similar policies will monitor for what doesn’t work in Australia and learn from our mistakes,” said Tama Leaver, professor at the Department of Internet Studies at Curtin University and a Chief Investigator in the ARC Centre of Excellence for the Digital Child.

“I think platforms and tech companies are also starting to realize that if they don’t want age-gating policies everywhere, they’re going to have to do much better at providing safer, appropriate experiences for young users.”

CNBC Daily Open: A Fed rate cut might not be festive enough


An eagle sculpture stands on the facade of the Marriner S. Eccles Federal Reserve building in Washington, D.C., on Friday, Nov. 18, 2016.

On Wednesday stateside, the U.S. Federal Reserve is widely expected to lower its benchmark interest rate by a quarter percentage point to a range of 3.5%-3.75%.

However, given that traders are all but certain that the cut will happen — an 87.6% chance, to be exact, according to the CME FedWatch tool — the move is likely already priced into stocks.

That means any whiff of restraint could weigh on equities. In fact, the talk in the markets is that the Fed might deliver a “hawkish cut”: lower rates while suggesting it could be a while before it cuts again.

The “dot plot,” or a projection of where Fed officials think interest rates will end up over the next few years, will be the clearest signal of any hawkishness. Investors will also parse Chair Jerome Powell’s press conference and central bankers’ estimates for U.S. economic growth and inflation to gauge the Fed’s future rate path.

In other words, the Fed could rein in market sentiment even if it cuts rates. End-of-year festivities may be muted this year.


And finally…

Researchers inside a lab at the Shenzhen Synthetic Biology Infrastructure facility in Shenzhen, China, on Wednesday, Nov. 26, 2025.

U.S.-China AI talent race heats up

When it comes to brain power, “America’s edge is deteriorating dangerously,” Chris Miller, author of the book “Chip War: The Fight for the World’s Most Critical Technology,” told a U.S. Senate Foreign Relations subcommittee last week. It’s a lead that’s “fragile and much smaller” than its advantage in AI chips, he said.

Part of the difference comes down to sheer scale, especially as education levels rise in China. Its population is four times that of the U.S., and the same goes for its volume of science, technology, engineering and mathematics graduates. In 2020, China produced 3.57 million STEM graduates, the most of any country, far outpacing the 820,000 in the U.S.

— Evelyn Cheng

CEO of South Korean online retail giant Coupang resigns over data breach



The CEO of South Korean online retail giant Coupang Corp. resigned Wednesday, three weeks after the company became aware of a massive data breach that affected nearly 34 million customers.

Coupang said CEO Park Dae-jun resigned due to the data breach incident — which was revealed on Nov. 18 — according to a Google translation of the statement in Korean.

“I am deeply sorry for disappointing the public with the recent personal information incident,” Park said, adding, “I feel a deep sense of responsibility for the outbreak and the subsequent recovery process, and I have decided to step down from all positions.”

Following his resignation, parent company Coupang Inc. appointed Harold Rogers, its chief administrative officer and general counsel, as interim CEO.

Coupang said that Rogers plans to “focus on alleviating customer anxiety caused by the personal information leak” and to stabilize the organization.

Park, who joined the company in 2012, became Coupang’s sole CEO in May, after the company transitioned away from a dual-CEO system.

According to Coupang, he was responsible for the company’s innovative new business and regional infrastructure development, and led projects to expand sales channels for small and medium enterprises, among others.

South Korean companies are known for being “very, very cost-efficient,” which may have led to neglecting areas like cybersecurity, Peter Kim, managing director at KB Securities, told CNBC’s “Squawk Box Asia” Wednesday.

“I think the core issue here is that we’ve had a number of other breaches, not just Coupang, but previously, telecom companies in Korea,” Kim added. “I understand some data companies consider Korea to be [the] top three or four most breached on a data, on an IT security basis in the world.”


South Korean companies have been hit by cybersecurity breaches before, including an April incident at mobile carrier SK Telecom that affected 23.24 million people. The country previously saw one of its largest cybersecurity incidents in 2011, when attackers stole over 35 million user details from internet platforms Nate and Cyworld.

Nate is one of the most popular search engines in South Korea, while Cyworld was one of the country’s largest social networking sites in the early 2000s.

Prime Minister Kim Min-seok reportedly said Wednesday that strict action would be taken against the company if violations of the law were found, according to South Korean media outlet Yonhap.

Police also raided the Coupang headquarters for a second day on Wednesday, continuing their investigation into the data breach.

Yonhap also reported, citing sources, that the police search warrant “specifies a Chinese national who formerly worked for Coupang as a suspect on charges of breaching the information and communications network and leaking confidential data.”

Last week, South Korean President Lee Jae Myung called for increased penalties on data breaches, saying that the Coupang data breach had served as a wake-up call.

— CNBC’s Chery Kang contributed to this report.

