
Once again, we’re debating “platforming Nazis,” following the publication of an article in The Atlantic titled “Substack Has a Nazi Problem” and a campaign by some Substack writers to see offensive accounts given the boot. And once again, the side calling for more content suppression is short-sighted and wrong.

This is far from the first time we’ve been here. It seems every big social media platform has been pressured to ban bigoted or otherwise offensive accounts. And Substack (everyone’s favorite platform for pretending like it’s 2005 and we’re all bloggers again) has already come under fire multiple times for its moderation policies (or lack thereof).

Substack vs. Social Media

Substack differs from blogging systems of yore in some key ways: It’s set up primarily for emailed content (largely newsletters but also podcasts and videos), it has paid some writers directly at times, and it provides an easy way for any creator to monetize content by soliciting fees directly from their audience rather than running ads. But it’s also similar to predecessors like WordPress and Blogger in some key ways, and more similar to such platforms than to social media sites such as Instagram or X (formerly Twitter). For instance, unlike on algorithm-driven social media platforms, Substack readers opt into receiving posts from specific creators, are guaranteed to get emailed those posts, and will not receive random content to which they didn’t subscribe.

Substack is also similar to old-school blogging platforms in that it’s less heavy-handed with moderation. On the likes of Facebook, X, and other social media platforms, there are tons of rules about what kinds of things you are and aren’t allowed to post and elaborate systems for reporting and moderating possibly verboten content.

Substack has some rules, but they’re pretty broad: nothing illegal, no inciting violence, no plagiarism, no spam, and no porn (nonpornographic nudity is OK, however).

Substack’s somewhat more laissez-faire attitude toward moderation irks people who think every tech company should be in the business of deciding which viewpoints are worth hearing, which businesses should exist, and which groups should be allowed to speak online. To this censorial crew, tech companies shouldn’t be neutral providers of services like web hosting, newsletter management, or payment processing. Rather, they must evaluate the moral worth of every single customer or user and deny services to those found lacking.

Nazis, Nazis, Everywhere

Uh, pretty easy just not to do business with Nazis, some might say. Which is actually… not true. At least not in 2023. Because while the term “Nazi” might have a fixed historical meaning, it’s bandied about pretty broadly these days. It gets used to describe people who (thankfully) aren’t actually antisemitic or advocating for any sort of ethnic cleansing. Donald Trump and his supporters get called Nazis. The folks at Planned Parenthood get called Nazis. People who don’t support Israel get called Nazis. All sorts of people get called Nazis for all sorts of reasons. Are tech companies supposed to bar all these people? And how much time should they put into investigating whether people are actual Nazis or just, like, Nazis by hyperbole? In the end, “not doing business with Nazis” would require a significant time investment and a lot of subjective judgment calls.

Uh, pretty easy just not to do business with people who might be mistaken for Nazis, some might counter. Perhaps. In theory. But in practice, we again run into the fact that the term is ridiculously overused. It would end up being more like “not doing business with anyone who anyone describes as a Nazi” (a much wider group) or devoting a lot of the business to content moderation.

OK, but you can have toxic views even if you’re not literally a Nazi. Of course. But you have to admit that what we’re talking about now is no longer “doing business with Nazis.” It’s about doing business with anyone who holds bigoted views, offensive views, views that aren’t progressive, etc. That’s a much, much wider pool of people, requiring many more borderline judgment calls.

This doesn’t stop at Nazis, the Nazi-adjacent, and those with genuinely horrific ideas. Again, we’re going to run into the fact that sometimes people stating relatively commonplace viewpoints (that we need to deport more immigrants, for example, or that Israel shouldn’t exist, or that sex-selective abortions should be allowed, or whatever) are going to get looped in. Even if you abhor these viewpoints, they hardly seem like the kind of thing that shouldn’t be allowed to exist on popular platforms.

Slippery Slopes and Streisand Effects

Maybe you disagree with me here. Maybe you think anyone with even remotely bad opinions (as judged by you) should be banned. That’s an all too common position, frankly.

In Substack’s case, some of the “Nazis” in question really may be (or at least revere) actual Nazis. “At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics,” Jonathan M. Katz wrote in The Atlantic last month.

But you needn’t have sympathy for Nazis and other bigots to find restricting speech bad policy.

Here’s the thing: Once you start saying tech companies must make judgment calls based not just on countering illegal content but also on countering Bad Content, it opens the door to wanna-be censors of all sorts. Just look at how every time a social media platform expands its content moderation purview, a lot of the same folks who pushed for it (or at least those on the same side as those who pushed for it) wind up caught in its dragnet. Anything related to sex work will be one of the first targets, followed quickly by LGBT issues. Probably also anyone with not-so-nice opinions of cops. Those advocating ways around abortion bans. And so on. It’s been all too easy for the enemies of equality, social justice, and criminal justice reform to frame all of these things as harmful or dangerous. And once a tech company has caved to being the general safety and morality arbiter, it’s a lot easier to get it involved again and again for lighter and lighter reasons.

Here’s the other thing: Nazis don’t magically become not-Nazis just because their content gets restricted or they get kicked off a particular platform. They simply congregate in private messaging groups or more remote corners of the internet instead. This makes it more difficult to keep tabs on them and to counter them. Getting kicked off platform after platform can also embolden those espousing these ideologies and their supporters, lending credence to their mythologies about being brave and persecuted truth-tellers and perhaps strengthening affinity among those otherwise loosely engaged.

There’s also the “Streisand effect” (so named because Barbra Streisand’s attempt to suppress a picture of the cliffside outside her house only drew enormous attention to a picture that would otherwise have been little seen). The fact that Nazi accounts may exist on Substack doesn’t mean many people are reading them, nor does it mean that non-Nazis are being exposed to them. You know what is exposing us (and, alas, perhaps some sympathetic types, too) to these newsletters? The Atlantic article and the Substackers Against Nazis group continuing to draw attention to these accounts.

Substack’s Ethos

In their open letter, Substackers Against Nazis don’t explicitly call for any particular accounts to be banned. They’re just “asking a very simple question…: Why are you platforming and monetizing Nazis?” But the implication of the letter is that Substack should change its policy or the writers in question will walk. “This issue has already led to the announced departures of several prominent Substackers,” the letter reads. “Is platforming Nazis part of your vision of success? Let us know – from there we can each decide if this is still where we want to be.”

Substack executives haven’t publicly responded to critics this time. But they have laid out their moderation vision before, and it’s commendable.

“In most cases, we don’t think that censoring content is helpful, and in fact it often backfires,” Substack co-founders Chris Best, Hamish McKenzie, and Jairaj Sethi wrote in 2020, in response to calls for them to exclude relatively mainstream but nonprogressive voices. “Heavy-handed censorship can draw more attention to content than it otherwise would have enjoyed, and at the same time it can give the content creators a martyr complex that they can trade off for future gain.” They go on to reject those who would have Substack moderators serve as “moral police” and suggest that those who want “Substack but with more controls on speech” migrate to such a platform.

“There will always be many writers on Substack with whom we strongly disagree, and we will err on the side of respecting their right to express themselves, and readers’ right to decide for themselves what to read,” they wrote.

If the accounts Katz identified are making “credible threats of physical harm,” then they are in violation of Substack’s terms of service. If they’re merely spouting racist nonsense, then folks are free to ignore them, condemn them, or counter their words with their own. And they’re certainly free to stop writing on or reading Substack.

But if Substack’s past comments are any indication, the company won’t ban people for racist nonsense alone.

Keep Substack Decentralized

Plenty of (non-Nazi) Substack writers support this stance. “Substack shouldn’t decide what we read,” asserts Elle Griffin. “We should.” Griffin opposes the coalition aiming to make Substack “act more like other social media platforms.” Her post was co-signed by dozens of Substackers (and a whole lot more signed on after publication), including Edward Snowden, Richard Dawkins, Bari Weiss, Greg Lukianoff, Bridget Phetasy, Freddie deBoer, Meghan Daum, and Michael Moynihan.

“I, and the writers who have signed this post, are among those who hope Substack will not change its stance on freedom of expression, even against pressure to do so,” writes Griffin.

Their letter brings up another reason to oppose this pressure: It doesn’t work to accomplish its ostensible goal. It just ends up as an endless game of Whac-A-Mole that fails to rid a platform of noxious voices while leading to the deplatforming of other content based on private and political agendas.

They also note that it’s extremely difficult to encounter extremist content on Substack if you don’t go looking for it:

The author of the recent Atlantic piece gave one way: actively go searching for it. He admits to finding “white-supremacist, neo-Confederate, and explicitly Nazi newsletters” by conducting a “search of the Substack website and of extremist Telegram channels.” But this only proves my point: If you want to find hate content on Substack, you have to go hunting for it on extremist third-party chat channels, because unlike other social media platforms, on Substack it won’t just show up in your feed.

And they point out that (as on blogs of yore) individual creators can moderate content as they see fit on their own accounts. So a newsletter writer can choose whether to allow comments, can set their own commenting policies, and can delete comments at their own discretion. Some can opt to be safe spaces, some can opt to be free-for-alls, and some can opt for a stance in between.

I’m with Griffin and company here. Substack has nothing to gain from going the way of Facebook, X, et al.; the colossal drama those platforms have spawned and the mess they’ve become prove it. Substack is right to keep ignoring both the Nazis and those calling to kick them out.

Tens of thousands killed in two days in Sudan city, analysts believe

Tens of thousands of people have been killed in the Sudanese city of Al Fashir by the Rapid Support Forces (RSF) in a two-day window after the paramilitary group captured the regional capital, analysts believe.

Sky News is not able to independently verify the claim by Yale Humanitarian Labs, as the city remains under a telecommunications blackout.

Stains and shapes resembling blood and corpses can be seen from space in satellite images analysed by the research lab.

Image: Al Fashir University. Pic: Airbus DS/2025

Nathaniel Raymond, executive director of Yale Humanitarian Labs, said: “In the past 48 hours since we’ve had [satellite] imagery over Al Fashir, we see a proliferation of objects that weren’t there before RSF took control of Al Fashir – they are approximately 1.3m to 2m long which is critical because in satellite imagery at very high resolution, that’s the average length of a human body lying vertical.”

Mini Minawi, the governor of North Darfur, said on X that 460 civilians have been killed in the last functioning hospital in the city.

The Sudan Doctors Network has also shared that the RSF “cold-bloodedly killed everyone they found inside Al Saudi Hospital, including patients, their companions, and anyone else present in the wards”.

World Health Organisation (WHO) chief Dr Tedros Adhanom Ghebreyesus said the organisation was “appalled and deeply shocked” by the reports.

Satellite images support the claims of a massacre at Al Saudi Hospital, according to Mr Raymond, who said YHL’s report detailed “a large pile of them [objects believed to be bodies] against a wall at one building at Saudi hospital. And we believe that’s consistent with reports that patients and staff were executed en masse”.

In a video message released on Wednesday, RSF commander Mohamed Hamdan Dagalo acknowledged “violations in Al Fashir” and claimed “an investigation committee should start to hold any soldier or officer accountable”.


Image: The Saudi Maternity Hospital in Al Fashir. Pic: Airbus DS/2025 via AP

The commander is known for committing atrocities in Darfur in the early 2000s as a Janjaweed militia leader, and the RSF has been accused of carrying out genocide in Darfur 20 years on.

Sources have told Sky News the RSF is holding doctors, journalists and politicians captive, demanding ransoms from some families to release their loved ones.

One video shows a man from Al Fashir kneeling on the ground beside an armed man, telling his family to pay 15,000. The currency was not made clear.

In some cases, ransoms have been paid, but then more messages come demanding that more money be transferred to secure release.

Muammer Ibrahim, a journalist based in the city, is currently being held by the RSF, who initially shared videos of him crouched on the ground, surrounded by fighters, announcing under duress that his hometown had been captured.


He is being held incommunicado as his family scrambles to negotiate his release. Muammer courageously covered the siege of Al Fashir for months, enduring starvation and shelling.

The Committee to Protect Journalists regional director Sara Qudah said the abduction of Muammar Ibrahim “is a grave and alarming reminder that journalists in Al Fashir are being targeted simply for telling the truth”.

Meta CEO Mark Zuckerberg defends AI spending: ‘We’re seeing the returns’

Mark Zuckerberg, chief executive officer of Meta Platforms Inc., during the Meta Connect event in Menlo Park, California, US, on Wednesday, Sept. 17, 2025.

David Paul Morris | Bloomberg | Getty Images

Meta CEO Mark Zuckerberg is sounding a familiar tune when it comes to artificial intelligence: better to invest too much than too little.

On his company’s third-quarter earnings call on Wednesday, Zuckerberg addressed Meta’s hefty spending this year, most notably its $14.3 billion investment in Scale AI as part of a plan to overhaul the AI unit, now known as Superintelligence Labs.

Some skeptics worry that the spending from Meta and its competitors in AI, namely OpenAI, is fueling a bubble.

For Meta’s newly formed group to have enough computing power to pursue cutting-edge AI models, the company has been building out massive data centers and signing cloud-computing deals with companies like Oracle, Google and CoreWeave.

Zuckerberg said the company is seeing a “pattern” and that it looks like Meta will need even more power than what was originally estimated. Over time, he said, those growing AI investments will eventually pay off in a big way.

“Being able to make a significantly larger investment here is very likely to be a profitable thing over, over some period,” Zuckerberg said on the call.

If Meta overspends on AI-related computing resources, Zuckerberg said, the company can repurpose the capacity and improve its core recommendation systems “in our family of apps and ads in a profitable way.”

Along with its rivals, Meta boosted its expectations for capital expenditures.

Capex this year will now be between $70 billion and $72 billion, compared to prior guidance of $66 billion to $72 billion, the company said.

Meanwhile, Alphabet on Wednesday increased its range for capital expenditures to $91 billion to $93 billion, up from a previous target of $75 billion to $85 billion. And on Microsoft’s earnings call after the bell, the software company said it now expects capex growth to accelerate in 2026 after previously projecting slowing expansion.

Alphabet was the only one of the three to see its stock pop, as the shares jumped 6% in extended trading. Meta shares fell about 8%, and Microsoft dipped more than 3%.

Zuckerberg floated the idea that if Meta ends up with excess computing power, it could offer some to third parties. But he said that isn’t yet an issue.

“Obviously, if you got to a point where you overbuilt, you could have that as an option,” Zuckerberg said.

In the “very worst case,” Zuckerberg said, Meta ends up with several years worth of excess data center capacity. That would result in a “loss and depreciation” of certain assets, but the company would “grow into that and use it over time,” he said.

As it stands today, Meta’s advertising business continues to grow at a healthy pace thanks in part to its AI investments.

“We’re seeing the returns in the core business that’s giving us a lot of confidence that we should be investing a lot more, and we want to make sure that we’re not under investing,” Zuckerberg said.

Revenue in the third quarter rose 26% from a year earlier to $51.24 billion, topping analyst estimates of $49.41 billion and representing the company’s fastest growth rate since the first quarter of 2024.


Google expects ‘significant increase’ in capital expenditure in 2026, execs say

Sundar Pichai, chief executive officer of Alphabet Inc., during the Bloomberg Tech conference in San Francisco, California, US, on Wednesday, June 4, 2025.

David Paul Morris | Bloomberg | Getty Images

Google parent Alphabet is planning a “significant increase” in spend next year as it continues to invest in AI infrastructure to meet the demand of its customer backlog, executives said Wednesday.

The company reported its first $100 billion revenue quarter on Wednesday, beating Wall Street’s expectations for Alphabet’s third quarter. Executives then said that the company plans to grow its capital spend for this year.

“With the growth across our business and demand from Cloud customers, we now expect 2025 capital expenditures to be in a range of $91 billion to $93 billion,” the company said in its earnings report.

It marks the second time the company increased its capital expenditure this year. In July, the company increased its expectation from $75 billion to $85 billion, most of which goes toward investments in projects like new data centers.

There’ll be even more spend in 2026, executives said Wednesday.

“Looking out to 2026, we expect a significant increase in CapEx and will provide more detail on our fourth quarter earnings call,” said Anat Ashkenazi, Alphabet’s finance chief.

The latest increases come as companies across the industry race to build more infrastructure to keep up with billions in customer demand for the compute necessary to power AI services. Also on Wednesday, Meta raised the low end of its guidance for 2025 capital expenditures by $4 billion, saying it expects that figure to come in between $70 billion and $72 billion. That figure was previously $66 billion to $72 billion.

Google executives explained that they’re racing to meet demand for cloud services, which saw 46% quarter-over-quarter backlog growth in the third quarter.

“We continue to drive strong growth in new businesses,” CEO Sundar Pichai said. “Google Cloud accelerated, ending the quarter with $155 billion in backlog.”

The company reported 32% cloud revenue growth from the year prior and is keeping pace with its megacap competitors. Pichai and Ashkenazi said the company has signed more $1 billion deals in the last nine months than it had in the prior two years combined.

In August, Google won a $10 billion cloud contract from Meta spanning six years. Anthropic last week announced a deal that gives the artificial intelligence company access to up to 1 million of Google’s custom-designed Tensor Processing Units, or TPUs. The deal is worth tens of billions of dollars.

The spend on infrastructure is also helping the company improve its own AI products, executives said on the call.

Google’s flagship AI app Gemini now has more than 650 million monthly active users. That’s up from the 450 million active users Pichai reported the previous quarter. 

Search also improved thanks to AI advancements, executives said. Google’s search business generated $56.56 billion in revenue — up 15% from the prior year, tempering fears that the competitive AI landscape may be cannibalizing the company’s core search and ads business.

AI Mode, Google’s AI product that sits within its search engine, has 75 million daily active users in the U.S., and those search queries doubled over the third quarter, executives said. They also reiterated that the company is testing ads in that AI Mode product.
