“If you randomly follow the algorithm, you probably would consume less radical content [than] using YouTube as you typically do!”

So says Manoel Ribeiro, co-author of a new paper on YouTube’s recommendation algorithm and radicalization, in an X (formerly Twitter) thread about his research.

The study, published in February in the Proceedings of the National Academy of Sciences (PNAS), is the latest in a growing collection of research that challenges conventional wisdom about social media algorithms and political extremism or polarization.


Introducing the Counterfactual Bots

For this study, a team of researchers spanning four universities (the University of Pennsylvania, Yale, Carnegie Mellon, and Switzerland’s École Polytechnique Fédérale de Lausanne) aimed to examine whether YouTube’s algorithms guide viewers toward more and more extreme content.

This supposed “radicalizing” effect has been touted extensively by people in politics, advocacy, academia, and media, often offered as justification for giving the government more control over how tech platforms are run. But the research cited to “prove” such an effect is often flawed in a number of ways, including not taking into account what a viewer would have watched in the absence of algorithmic advice.

“Attempts to evaluate the effect of recommenders have suffered from a lack of appropriate counterfactuals (what a user would have viewed in the absence of algorithmic recommendations) and hence cannot disentangle the effects of the algorithm from a user’s intentions,” note the researchers in the abstract to this study.

To overcome this limitation, they relied on “counterfactual bots.” Basically, they had some bots watch a video and then replicate what a real user (based on actual user histories) watched from there, and other bots watch that same first video and then follow YouTube recommendations, in effect going down the algorithmic “rabbit hole” that so many have warned against.
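To make that design concrete, here is a minimal sketch in Python of the comparison. Everything in it is an invented placeholder: the partisanship_score and fetch_recommendations helpers and the toy watch history are illustrative stand-ins, not the study's actual code or YouTube's real API. The structure is the point: two bots share a starting video, one replays a real user's subsequent viewing, the other always follows the top recommendation, and the gap between their paths is what can be attributed to the user rather than the algorithm.

```python
import random

# Toy sketch of the counterfactual-bot comparison (all helpers are hypothetical).

def partisanship_score(video_id: str) -> float:
    """Placeholder: pretend to score a video's partisan slant on a -1..1 scale."""
    random.seed(video_id)  # deterministic per video, just for the toy example
    return random.uniform(-1, 1)

def fetch_recommendations(video_id: str) -> list[str]:
    """Placeholder: pretend to return the sidebar ("Up next") recommendations."""
    return [f"{video_id}-rec{i}" for i in range(5)]

def user_bot(seed: str, user_history: list[str]) -> list[float]:
    """Bot A: starts at the seed video, then replays what a real user actually watched."""
    return [partisanship_score(v) for v in [seed, *user_history]]

def recommendation_bot(seed: str, n_steps: int) -> list[float]:
    """Bot B: starts at the same seed video, then always follows the top recommendation."""
    current, scores = seed, [partisanship_score(seed)]
    for _ in range(n_steps):
        current = fetch_recommendations(current)[0]  # go down the "rabbit hole"
        scores.append(partisanship_score(current))
    return scores

history = ["vid-a", "vid-b", "vid-c"]  # stand-in for a real user's watch history
real_path = user_bot("seed-video", history)
algo_path = recommendation_bot("seed-video", n_steps=len(history))

# The key quantity: how much more partisan the user-driven path is than the
# purely algorithm-driven path, holding the starting point fixed.
gap = sum(real_path) / len(real_path) - sum(algo_path) / len(algo_path)
print(f"user-vs-algorithm partisanship gap: {gap:+.3f}")
```

In this framing, a positive gap means real users sought out more partisan content than the recommender alone would have served them.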

The counterfactual bots following an algorithm-led path wound up consuming less partisan content.

The researchers also found “that real users who consume ‘bursts’ of highly partisan videos subsequently consume more partisan content than identical bots who subsequently follow algorithmic viewing rules.”

“This gap corresponds to an intrinsic preference of users for such content relative to what the algorithm recommends,” notes study co-author Amir Ghasemian on X.

Social Media Users Have Agency

“Why should you trust this paper rather than other papers or reports saying otherwise?” comments Ribeiro on X. “Because we came up with a way to disentangle the causal effect of the algorithm.”

As Ghasemian explained on X: “It has been shown that exposure to partisan videos is followed by an increase in future consumption of these videos.”

People often assume that this is because algorithms start pushing more of that content.

“We show this is not due to more recommendations of such content. Instead, it is due to a change in user preferences toward more partisan videos,” writes Ghasemian.

Or, as the paper puts it: “a user’s preferences are the primary determinant of their experience.”

That’s an important difference, suggesting that social media users aren’t passive vessels simply consuming whatever some algorithm tells them to but, rather, people with existing and shifting preferences, interests, and habits.

Ghasemian also notes that “recommendation algorithms have been criticized for continuing to recommend problematic content to previously interested users long after they have lost interest in it themselves.” So the researchers set out to see what happens when a user switches from watching more far-right to more moderate content.

They found that “YouTube’s sidebar recommender ‘forgets’ their partisan preference within roughly 30 videos regardless of their prior history, while homepage recommendations shift more gradually toward moderate content,” per the paper abstract.
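As a rough illustration of that measurement, here is a toy sketch of the idea, with invented helper functions rather than anything from the paper: a bot builds up a partisan watch history, switches to moderate videos, and we count how many moderate videos pass before the sidebar recommendations stop echoing the old history (the study reports roughly 30 for YouTube's actual sidebar).

```python
# Toy sketch of the "forgetting time" question (all helpers are hypothetical).

def sidebar_recommendations(history: list[str]) -> list[str]:
    """Placeholder recommender: echoes the flavor of the most recently watched videos."""
    return [f"rec-like-{v}" for v in history[-5:]]

def is_partisan(video_id: str) -> bool:
    """Placeholder classifier for whether a video or recommendation is partisan."""
    return "partisan" in video_id

def forgetting_time(partisan_history: list[str], moderate_stream: list[str]) -> int:
    """Count moderate videos watched before no partisan items remain in the sidebar."""
    history = list(partisan_history)
    for step, video in enumerate(moderate_stream, start=1):
        history.append(video)                       # bot watches a moderate video
        recs = sidebar_recommendations(history)
        if not any(is_partisan(r) for r in recs):   # the sidebar has "forgotten"
            return step
    return len(moderate_stream)

partisan = [f"partisan-{i}" for i in range(50)]
moderate = [f"moderate-{i}" for i in range(100)]
print("moderate videos until the sidebar forgets:", forgetting_time(partisan, moderate))
```

With this toy recommender, which only looks at the last five videos, the answer is trivially five; the empirical finding is that the real sidebar's memory also turned out to be short, on the order of 30 videos.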

Their conclusion: “Individual consumption patterns mostly reflect individual preferences, where algorithmic recommendations play, if anything, a moderating role.”

It’s Not Just This Study

While “empirical studies using different methodological approaches have reached somewhat different conclusions regarding the relative importance” of algorithms in what a user watches, “no studies find support for the alarming claims of radicalization that characterized early, anecdotal accounts,” note the researchers in their paper.

Theirs is part of a burgeoning body of research suggesting that the supposed radicalization effects of algorithmic recommendations aren’t real, and that algorithms (on YouTube and otherwise) may in fact steer people toward more moderate content.

(See my defense of algorithms from Reason’s January 2023 print issue for a whole host of information to this effect.)

A 2021 study from some of the same researchers behind the new study found “little evidence that the YouTube recommendation algorithm is driving attention to” what the researchers call “far right” and “anti-woke” content. The growing popularity of anti-woke content could instead be attributed to “individual preferences that extend across the web as a whole.”

In a 2022 working paper titled “Subscriptions and external links help drive resentful users to alternative and extremist YouTube videos,” researchers found that “exposure to alternative and extremist channel videos on YouTube is heavily concentrated among a small group of people with high prior levels of gender and racial resentment” who typically subscribe to channels from which they’re recommended videos or get to these videos from off-site links. “Non-subscribers are rarely recommended videos from alternative and extremist channels and seldom follow such recommendations when offered.”

And a 2019 paper from researchers Mark Ledwich and Anna Zaitsev found that YouTube algorithms disadvantaged “channels that fall outside mainstream media,” especially “White Identitarian and Conspiracy channels.” Even when someone viewed these types of videos, “their recommendations will be populated with a mixture of extreme and more mainstream content” going forward, leading Ledwich and Zaitsev to conclude that YouTube is “more likely to steer people away from extremist content rather than vice versa.”

Some argue that changes to YouTube’s recommendation algorithm in 2019 shifted things, and that these studies don’t capture the old reality. Perhaps. But whether or not that’s the case, the new reality, shown in one recent study after another, is that YouTube’s algorithms today aren’t driving people to more extreme content.

And it’s not just YouTube’s algorithm that has been getting its reputation rehabbed by research. A series of studies on the influence of Facebook and Instagram algorithms in the lead-up to the 2020 election cut against the idea that algorithmic feeds are making people more polarized or less informed.

Researchers tweaked user feeds so that people saw either algorithmically selected content or a chronological feed, or so that they didn’t see re-shares of the sort that algorithms prize. Getting rid of the algorithmic feed or of re-shares didn’t reduce polarization or increase accurate political knowledge. But it did increase “the amount of political and untrustworthy content” that a user saw.

Today’s Image: Esme side-eyes your algorithm panic (ENB/Reason)

Hagel suspended for Game 3 due to hit on Barkov


Tampa Bay Lightning winger Brandon Hagel was suspended one game by the NHL Department of Player Safety on Friday night for what it labeled “an extremely forceful body check to an unsuspecting opponent” that injured Florida Panthers captain Aleksander Barkov.

Hagel will miss Saturday’s Game 3 in Sunrise, Florida. The Panthers lead the series 2-0.

Around midway through the third period of Thursday’s Game 2, Tampa Bay was on the power play while trailing 1-0. Barkov pressured defenseman Ryan McDonagh deep in the Lightning zone. With the puck clearly past Barkov, Hagel lined him up for a huge hit that sent the Panthers captain to the ice, thumping off the end boards.

A penalty was whistled, and after the officials conferred and reviewed the play, Hagel was given a five-minute major for interference. Barkov left the game with 10:09 remaining in regulation and did not return to the Panthers’ 2-0 win.

Lightning coach Jon Cooper said after the game that he didn’t expect Hagel to receive a major penalty for the hit.

“Refs make the call. I was a little surprised it was a five, but it was,” he said.

The NHL ruled that Hagel’s hit made “some head contact” on Barkov.

“It’s important to note that Barkov is never in possession of the puck on this play and is therefore not eligible to be checked in any manner,” the league said.

In the Friday hearing, held remotely, Hagel argued that he approached the play anticipating that Barkov would play the puck. But the Department of Player Safety said the onus was on Hagel to ensure that Barkov was eligible to be checked. It also determined that the hit had “sufficient force” for supplemental discipline.

It’s Hagel’s first suspension in 375 regular-season and 36 playoff games. He was fined for boarding Florida’s Eetu Luostarinen in May 2022.

The Panthers held an optional skate Friday. Coach Paul Maurice said Barkov “hasn’t been ruled out yet” but “hasn’t been cleared” for Game 3.

“He’s an irreplaceable player,” Panthers defenseman Seth Jones said of Barkov. “One of the best centermen in the league. He’s super important to our team.”

The Lightning lose Hagel while they struggle to score in the series; they scored two goals in Game 1 and were shut out in Game 2. Tampa Bay was the highest-scoring team in the regular season (3.56 goals per game), with Hagel contributing 35 goals and 55 assists in 82 games.

Goalies Montembeault, Thompson leave Caps-Habs


The Washington Capitals and Montreal Canadiens lost their starting goalies because of injuries in Game 3 of their first-round series Friday night.

Canadiens starter Sam Montembeault was replaced in the second period by rookie Jakub Dobes, who made his playoff debut. Capitals starter Logan Thompson left late in the third period after a collision with teammate Dylan Strome.

The Canadiens won 6-3 to cut their series deficit to 2-1.

Montembeault left the crease with 8:21 remaining in the second period and the score tied 2-2. Replays showed him reaching for the back of his left leg after making a save on Capitals defenseman Alex Alexeyev. Montembeault had stopped 11 of 13 shots. For the series, he stopped 58 of 63 shots (.921 save percentage) with a 2.49 goals-against average.

Dobes, 23, went 7-4-3 in 16 games for the Canadiens in the regular season with a .909 save percentage. He beat the Capitals on Jan. 10, stopping 15 shots in a 3-2 overtime win.

Thompson was helped off the ice by a trainer and teammates after Strome collided with him with 6:37 left in regulation, right after Canadiens forward Juraj Slafkovsky gave Montreal a 5-3 lead. Thompson attempted to skate off on his own but couldn’t put weight on his left leg.

Backup goalie Charlie Lindgren replaced Thompson, who had been outstanding for the Capitals in the first two games of the series, winning both with a .951 save percentage and a 1.47 goals-against average. He made 30 saves on 35 shots in Game 3.
