
Kids and teens under 18 years old in Louisiana may soon need their parents’ permission to sign up for online accounts, including for social media, gaming and more, under a newly passed bill in the state.

The measure, which still needs to be signed by the state’s governor to take effect, follows a trend of laws in conservative states such as Utah and Arkansas that seek to limit adolescents’ unrestricted access to social media. Liberal states such as California as well as some Democratic lawmakers in Congress have also been working on new regulations to protect kids from some of the harmful effects of social media.


While protecting kids on the internet is a value shared across the board, tech companies and many civil society groups that oppose the industry on other matters have warned that such legislation ignores the positive effects social media can have, particularly for marginalized youth. They also warn new restrictions could have unintended harmful effects on kids, such as limiting the resources available to those seeking help to escape a negative home life, and forcing tech platforms to collect more information on both kids and adults in order to verify ages and ensure compliance.

Still, the unanimous vote in both chambers of the Louisiana state legislature underscores the popularity of legislation aimed at protecting kids from online harms.

The bill would also clarify that agreements minors made when signing up for existing accounts can be rendered null. The state code already allows parents or legal guardians to rescind contracts their kids enter into.

NetChoice, a group that represents internet platforms including Amazon, Google, Meta and TikTok, said it opposes the Louisiana bill and hopes the governor will veto it. NetChoice is currently suing the state of California over its Age-Appropriate Design Code, a law with similar aims of protecting kids from online harms, alleging it violates the First Amendment. NetChoice Vice President and General Counsel Carl Szabo said in a statement that the Louisiana bill would also violate the First Amendment.

“It will decimate anonymous browsing and gaming — requiring citizens to hand over data to prove their identity and age just to use an online service. Anonymity can be important for individuals using social media services for things like whistleblowers, victims, and those identifying crime in the neighborhood who fear backlash,” Szabo said. “What’s worse is that it fails to really address the underlying issues. Instead, Louisiana policymakers could actually help teens and parents by following the educational approaches of Virginia and Florida.”

The office of Democratic Gov. John Bel Edwards did not immediately respond to CNBC’s request for comment on the bill. If he chooses to sign it, it will take effect in August 2024.


WATCH: Sen. Blackburn says safety should come first on social media ‘children have lost their lives’



Cognition to buy AI startup Windsurf days after Google poached CEO in $2.4 billion licensing deal


In this photo illustration, a man is seen holding a smartphone with the logo of US artificial intelligence company Cognition AI Inc. in front of a website.

Timon Schneider | SOPA Images | Sipa USA | AP

Artificial intelligence startup Cognition announced it’s acquiring Windsurf, the AI coding company that lost its CEO and several other senior employees to Google just days earlier.

Cognition said on Monday that it will purchase Windsurf’s intellectual property, product, trademark, brand and talent, but didn’t disclose terms of the deal. It’s the latest development in an AI talent war, as companies like Meta, Google and OpenAI fiercely compete for top engineers and researchers.

OpenAI had been in talks to acquire Windsurf for about $3 billion in April, but the deal fell apart, and Google said on Friday that it hired Windsurf’s co-founder and CEO Varun Mohan. Google is paying $2.4 billion in licensing fees and compensation, as CNBC previously reported.

“Every new employee of Cognition will be treated the same way as existing employees: with transparency, fairness, and deep respect for their abilities and value,” Cognition CEO Scott Wu wrote in a memo to employees on Monday. “After today, our efforts will be as a united and aligned team. There’s only one boat and we’re all in it together.”

Cognition didn’t immediately respond to CNBC’s request for comment. Windsurf directed CNBC to Cognition.

Cognition is best known for its AI coding agent named Devin, which is designed to help engineers build software faster. As of March, the startup had raised hundreds of millions of dollars at a valuation of close to $4 billion, according to a report from Bloomberg.

Both companies are backed by Peter Thiel’s Founders Fund. Other investors in Windsurf include Greenoaks, Kleiner Perkins and General Catalyst.

“I’m overwhelmed with excitement and optimism, but most of all, gratitude,” Jeff Wang, the interim CEO of Windsurf, wrote in a post on X on Monday. “Trying times reveal character, and I couldn’t be prouder of how every single person at Windsurf showed up these last three days for each other and for our users.”

Wu said that the acquisition ensures all Windsurf employees are “treated with respect and well taken care of in this transaction.” All employees will participate financially in the deal, have vesting cliffs waived for their work to date and receive fully accelerated vesting for their work, according to the memo.

“There’s never been a more exciting time to build,” Wu wrote.

WATCH: Google snatches Windsurf CEO after OpenAI deal dissolves



Musk’s xAI faces European scrutiny over Grok’s ‘horrific’ antisemitic posts


The Grok logo is displayed on a smartphone with xAI visible in the background in this photo illustration on April 1, 2024.

Jonathan Raa | Nurphoto | Getty Images

The European Union on Monday called in representatives from Elon Musk’s xAI after the company’s social network X and its Grok chatbot generated and spread antisemitic hate speech, including praise for Adolf Hitler, last week.

A spokesperson for the European Commission told CNBC via e-mail that a technical meeting will take place on Tuesday.

xAI did not immediately respond to a request for comment.

Sandro Gozi, a member of Italy’s parliament and member of the Renew Europe group, last week urged the Commission to hold a formal inquiry.

“The case raises serious concerns about compliance with the Digital Services Act (DSA) as well as the governance of generative AI in the Union’s digital space,” Gozi wrote.

X was already under a Commission probe for possible violations of the DSA.


Grok also generated and spread offensive posts about political leaders in Poland and Turkey, including Polish Prime Minister Donald Tusk and Turkish President Recep Erdogan.

Over the weekend, xAI posted a statement apologizing for the hateful content.

“First off, we deeply apologize for the horrific behavior that many experienced. … After careful investigation, we discovered the root cause was an update to a code path upstream of the @grok bot,” the company said in the statement.

Musk and his xAI team launched a new version of Grok Wednesday night amid the backlash. Musk called it “the smartest AI in the world.”

xAI works with other businesses run and largely owned by Musk, including Tesla, the publicly traded automaker, and SpaceX, the U.S. aerospace and defense contractor.

Despite Grok’s recent outburst of hate speech, the U.S. Department of Defense awarded xAI a $200 million contract to develop AI. Anthropic, Google and OpenAI also received AI contracts.

CNBC’s April Roach contributed to this article.



Meta removes 10 million Facebook profiles in effort to combat spam


Meta CEO Mark Zuckerberg looks on before the luncheon on the inauguration day of U.S. President Donald Trump’s second presidential term in Washington on Jan. 20, 2025.

Evelyn Hockstein | Reuters

Meta on Monday said it has removed about 10 million profiles that impersonated large content producers during the first half of 2025, part of the company’s effort to combat “spammy content.”

The crackdown is part of Meta’s broader effort to make the Facebook feed more relevant and authentic by taking action against and removing accounts that engage in “spammy” behavior, such as content created using artificial intelligence tools.

As part of that initiative, Meta is also rolling out stricter measures to promote original posts from creators, the company said in a blog post.

Facebook also took action against approximately 500,000 accounts it identified as engaging in inauthentic behavior and spam. Those actions included demoting their comments and reducing the distribution of their content, measures intended to make it harder for the accounts to monetize their posts.

Meta describes unoriginal content as images or videos that are reused without crediting the original creator. The company said it now has technology that will detect duplicate videos and reduce the distribution of that content.

The action against spam and inauthentic content comes as Meta increases its investment in AI, with CEO Mark Zuckerberg on Monday announcing plans to spend “hundreds of billions of dollars” on AI compute infrastructure to bring the company’s first supercluster online next year.

This mandate comes at a time when AI is making it easier to mass-produce content across social media platforms. Other platforms are also taking action to combat the increase of spammy, low-quality content on social media, also known as “AI slop.”

Google’s YouTube announced a policy change this month that makes mass-produced or repetitive content ineligible for monetization.

The announcement sparked confusion on social media, with many users believing it was a reversal of YouTube’s stance on AI content. However, YouTube clarified that the policy change is aimed at curbing unoriginal, spammy and repetitive videos.

“We welcome creators using AI tools to enhance their storytelling, and channels that use AI in their content remain eligible to monetize,” said a spokesperson for YouTube in a blog post to clarify the new policy.

YouTube’s new policy change will take effect on Tuesday.


WATCH: Meta announces massive 'Prometheus' & 'Hyperion' data center plans
