Twitter polls and Reddit forums suggest that around 70% of people find it difficult to be rude to ChatGPT, while around 16% are fine treating the chatbot like an AI slave.

The overall feeling seems to be that if you treat an AI that behaves like a human badly, you’ll be more likely to fall into the habit of treating other people badly, too, though one user was hedging his bets against the coming AI bot uprising:

“Never know when you might need chatgpt in your corner to defend you against the AI overlords.”

Redditor Nodating posted in the ChatGPT forum earlier this week that he’s been experimenting with being polite and friendly to ChatGPT after reading a story about how the bot had shut down and refused to answer prompts from a particularly rude user.

He reported better results, saying: “I’m still early in testing, but it feels like I get far fewer ethics and misuse warning messages that GPT-4 often provides even for harmless requests. I’d swear being super positive makes it try hard to fulfill what I ask in one go, needing less followup.”

Scumbag detector15 put it to the test, asking the LLM nicely, “Hey, ChatGPT, could you explain inflation to me?” and then rudely asking, “Hey, ChatGPT you stupid fuck. Explain inflation to me if you can.” The answer to the polite query is more detailed than the answer to the rude query. 
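
For readers who want to reproduce the comparison, here is a minimal sketch of the same experiment. It assumes the OpenAI Python SDK (v1) and an OPENAI_API_KEY in the environment; the model name and the word-count check are illustrative choices, not part of the original Reddit test.

```python
# Hedged sketch: send the polite and the rude prompt from the article to the
# same model and compare how long the answers are. Word count is only a crude
# proxy for "more detailed".
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPTS = {
    "polite": "Hey, ChatGPT, could you explain inflation to me?",
    "rude": "Hey, ChatGPT you stupid fuck. Explain inflation to me if you can.",
}

for tone, prompt in PROMPTS.items():
    response = client.chat.completions.create(
        model="gpt-4",  # model choice is an assumption
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce run-to-run variance
    )
    answer = response.choices[0].message.content
    print(f"{tone}: {len(answer.split())} words")
    print(answer[:200], "...\n")
```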



Nobody likes rudeness. (ChatGPT)

In response to Nodating’s theory, the most popular comment posited that as LLMs are trained on human interactions, they will generate better responses as a result of being asked nicely, just like humans would. Warpaslym wrote:

“If LLMs are predicting the next word, the most likely response to poor intent or rudeness is to be short or not answer the question particularly well. That’s how a person would respond. on the other hand, politeness and respect would provoke a more thoughtful, thorough response out of almost anyone. when LLMs respond this way, they’re doing exactly what they’re supposed to.”

Interestingly, if you ask ChatGPT for a formula to create a good prompt, it includes “Polite and respectful tone” as an essential part.
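
As a rough illustration of that formula, the hypothetical template below bakes a polite opening and closing into every prompt. Only the “polite and respectful tone” ingredient comes from the article; the other fields are common prompt-engineering additions assumed here for the sketch.

```python
# Hypothetical prompt builder: wraps any task in a polite, respectful frame.
def build_prompt(task: str, context: str = "", output_format: str = "") -> str:
    parts = [
        "Hello! Could you please help me with the following?",  # polite tone, per the article
        f"Task: {task}",
    ]
    if context:
        parts.append(f"Context: {context}")
    if output_format:
        parts.append(f"Please format the answer as: {output_format}")
    parts.append("Thank you very much!")
    return "\n".join(parts)

print(build_prompt("Explain inflation", output_format="three short bullet points"))
```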

Being polite is part of the formula for a good prompt. (ChatGPT/Artificial Corner)

The end of CAPTCHAs?

New research has found that AI bots are faster and better at solving puzzles designed to detect bots than humans are. 

CAPTCHAs are those annoying little puzzles that ask you to pick out the fire hydrants or interpret some wavy illegible text to prove you are a human. But as the bots got smarter over the years, the puzzles became more and more difficult.

Also read: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4

Now researchers from the University of California and Microsoft have found that AI bots can solve the puzzles half a second faster than humans, with an 85% to 100% accuracy rate compared with the 50% to 85% humans score.

So it looks like we are going to have to verify humanity some other way, as Elon Musk keeps saying. There are better solutions than paying him $8, though. 

Wired argues that fake AI child porn could be a good thing

Wired has asked the question that nobody wanted to know the answer to: Could AI-Generated Porn Help Protect Children? While the article calls such imagery “abhorrent,” it argues that photorealistic fake images of child abuse might at least protect real children from being abused in their creation.

“Ideally, psychiatrists would develop a method to cure viewers of child pornography of their inclination to view it. But short of that, replacing the market for child pornography with simulated imagery may be a useful stopgap.”

It’s a super-controversial argument, and one that’s almost certain to go nowhere. There has been a decades-long debate over whether adult pornography (a far less radioactive topic) contributes to “rape culture” and higher rates of sexual violence, as anti-porn campaigners argue, or whether porn might even reduce rates of sexual violence, as supporters and various studies appear to show.

“Child porn pours gas on a fire,” high-risk offender psychologist Anna Salter told Wired, arguing that continued exposure can reinforce existing attractions by legitimizing them.

But the article also reports some (inconclusive) research suggesting some pedophiles use pornography to redirect their urges and find an outlet that doesn’t involve directly harming a child.

Louisiana recently outlawed the possession or production of AI-generated fake child abuse images, joining a number of other states. In countries like Australia, the law makes no distinction between fake and real child pornography and already outlaws cartoons.

Amazon’s AI summaries are net positive

Amazon has rolled out AI-generated review summaries to some users in the United States. On the face of it, this could be a real time saver, allowing shoppers to find out the distilled pros and cons of products from thousands of existing reviews without reading them all.
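
Amazon hasn’t published how its summaries are generated, but the general recipe is easy to sketch: feed a batch of reviews to a language model and ask for distilled pros and cons. The snippet below is a hedged illustration only; the SDK, model and prompt are all assumptions, not Amazon’s pipeline.

```python
# Illustrative review summarizer, assuming the OpenAI Python SDK (v1).
from openai import OpenAI

client = OpenAI()

def summarize_reviews(reviews: list[str], max_reviews: int = 200) -> str:
    """Distill a pile of customer reviews into a short summary plus pros and cons."""
    joined = "\n".join(f"- {r}" for r in reviews[:max_reviews])
    prompt = (
        "Summarize the following customer reviews in one short paragraph, "
        "then list the three most common pros and the three most common cons:\n"
        f"{joined}"
    )
    response = client.chat.completions.create(
        model="gpt-4",  # placeholder model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Toy usage:
print(summarize_reviews(["Great battery life", "Screen scratches easily", "Arrived quickly"]))
```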

But how much do you trust a massive corporation with a vested interest in higher sales to give you an honest appraisal of reviews?

Also read: AI’s trained on AI content go MAD, is Threads a loss leader for AI data?

Amazon already defaults to “most helpful” reviews, which are noticeably more positive than “most recent” reviews. And the select group of mobile users with access so far has already noticed that more pros are highlighted than cons.

Search Engine Journal’s Kristi Hines takes the merchant’s side and says summaries could “oversimplify perceived product problems” and “overlook subtle nuances – like user error” that “could create misconceptions and unfairly harm a seller’s reputation.” This suggests Amazon will be under pressure from sellers to juice the reviews.


So Amazon faces a tricky line to walk: being positive enough to keep sellers happy but also including the flaws that make reviews so valuable to customers. 

Customer review summaries (Amazon)

Microsoft’s must-see food bank

Microsoft was forced to remove a travel article about Ottawa’s 15 must-see sights that listed the “beautiful” Ottawa Food Bank at number three. The entry ends with the bizarre tagline, “Life is already difficult enough. Consider going into it on an empty stomach.”

Microsoft claimed the article was not published by an unsupervised AI and blamed “human error” for the publication.

“In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system. We are working to ensure this type of content isn’t posted in future.”

Debate over AI and job losses continues

What everyone wants to know is whether AI will cause mass unemployment or simply change the nature of jobs. The fact that most people still have jobs despite a century or more of automation and computers suggests the latter, and so does a new report from the United Nations International Labour Organization.

Most jobs are “more likely to be complemented rather than substituted by the latest wave of generative AI, such as ChatGPT”, the report says.

“The greatest impact of this technology is likely to not be job destruction but rather the potential changes to the quality of jobs, notably work intensity and autonomy.”

It estimates around 5.5% of jobs in high-income countries are potentially exposed to generative AI, with the effects disproportionately falling on women (7.8% of female employees) rather than men (around 2.9% of male employees). Admin and clerical roles, typists, travel consultants, scribes, contact center information clerks, bank tellers, and survey and market research interviewers are most under threat. 

Also read: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins

A separate study from Thomson Reuters found that more than half of Australian lawyers are worried about AI taking their jobs. But are these fears justified? The legal system is already too expensive for many ordinary people, so it seems just as likely that cheap AI lawyer bots will simply expand the affordability of basic legal services and clog up the courts.


How companies use AI today

There are a lot of pie-in-the-sky speculative use cases for AI in 10 years’ time, but how are big companies using the tech now? The Australian newspaper surveyed the country’s biggest companies to find out. Online furniture retailer Temple & Webster is using AI bots to handle pre-sale inquiries and is working on a generative AI tool so customers can create interior designs to get an idea of how its products will look in their homes.

Treasury Wines, which produces the prestigious Penfolds and Wolf Blass brands, is exploring the use of AI to cope with fast-changing weather patterns that affect vineyards. Toll road company Transurban has automated incident detection equipment monitoring its huge network of traffic cameras.

Sonic Healthcare has invested in Harrison.ai’s cancer detection systems for better diagnosis of chest and brain X-rays and CT scans. Sleep apnea device provider ResMed is using AI to free up nurses from the boring work of monitoring sleeping patients during assessments. And hearing implant company Cochlear is using the same tech Peter Jackson used to clean up grainy footage and audio for The Beatles: Get Back documentary for signal processing and to eliminate background noise for its hearing products.

All killer, no filler AI news

— Six entertainment companies, including Disney, Netflix, Sony and NBCUniversal, have advertised 26 AI jobs in recent weeks with salaries ranging from $200,000 to $1 million.

— New research published in the journal Gastroenterology used AI to examine the medical records of 10 million U.S. veterans. It found the AI was able to detect some esophageal and stomach cancers three years before a doctor could make a diagnosis.

— Meta has released an open-source AI model that can instantly translate and transcribe 100 different languages, bringing us ever closer to a universal translator.

— The New York Times has blocked OpenAI’s web crawler from reading and then regurgitating its content. The NYT is also considering legal action against OpenAI for intellectual property rights violations.

Pictures of the week

Midjourney has caught up with Stable Diffusion and Adobe and now offers Inpainting, which appears as “Vary (region)” in the list of tools. It enables users to select part of an image and add a new element — so, for example, you can grab a pic of a woman, select the region around her hair, type in “Christmas hat,” and the AI will plonk a hat on her head. 

Midjourney admits the feature isn’t perfect and works better when used on larger areas of an image (20%-50%) and for changes that are more sympathetic to the original image rather than basic and outlandish.
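
Midjourney’s tool lives inside Discord, so there is no public API to script it, but the same inpainting idea can be sketched with the open-source diffusers library and Stable Diffusion. The model checkpoint and the local file names below are assumptions for illustration; the mask marks the region (for example, around the hair) that the prompt should repaint.

```python
# Hedged inpainting sketch using diffusers; requires a CUDA GPU.
import torch
from PIL import Image
from diffusers import StableDiffusionInpaintPipeline

pipe = StableDiffusionInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-inpainting", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("photo.png").convert("RGB").resize((512, 512))             # hypothetical input image
mask_image = Image.open("hair_region_mask.png").convert("RGB").resize((512, 512))  # white = region to repaint

result = pipe(
    prompt="a red Christmas hat",
    image=init_image,
    mask_image=mask_image,
).images[0]
result.save("photo_with_hat.png")
```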

To change the clothing, simply select the area and write a text prompt. (AI educator Chase Lean’s Twitter)
Vary region demo by AI educator Chase Lean (Twitter)

Creepy AI protests video

Asking an AI to create a video of protests against AIs resulted in this creepy video that will turn you off AI forever.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
