Twitter polls and Reddit forums suggest that around 70% of people find it difficult to be rude to ChatGPT, while around 16% are fine treating the chatbot like an AI slave.

The overall feeling seems to be that if you treat an AI that behaves like a human badly, you’ll be more likely to fall into the habit of treating other people badly, too, though one user was hedging his bets against the coming AI bot uprising:

“Never know when you might need chatgpt in your corner to defend you against the AI overlords.”

Redditor Nodating posted in the ChatGPT forum earlier this week that he’s been experimenting with being polite and friendly to ChatGPT after reading a story about how the bot had shut down and refused to answer prompts from a particularly rude user.

He reported better results, saying: “I’m still early in testing, but it feels like I get far fewer ethics and misuse warning messages that GPT-4 often provides even for harmless requests. I’d swear being super positive makes it try hard to fulfill what I ask in one go, needing less followup.”

Scumbag detector15 put it to the test, asking the LLM nicely, “Hey, ChatGPT, could you explain inflation to me?” and then rudely asking, “Hey, ChatGPT you stupid fuck. Explain inflation to me if you can.” The answer to the polite query is more detailed than the answer to the rude query. 



RudeGPT
Nobody likes rudeness. (ChatGPT)

In response to Nodating’s theory, the most popular comment posited that because LLMs are trained on human interactions, they generate better responses when asked nicely, just as humans would. Warpaslym wrote:

“If LLMs are predicting the next word, the most likely response to poor intent or rudeness is to be short or not answer the question particularly well. That’s how a person would respond. on the other hand, politeness and respect would provoke a more thoughtful, thorough response out of almost anyone. when LLMs respond this way, they’re doing exactly what they’re supposed to.”
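Warpaslym’s next-word framing is easy to poke at yourself. Below is a minimal sketch of such an A/B test, assuming the official OpenAI Python SDK; the model name and the word-count metric are illustrative choices, not anything from the thread:

```python
# Toy A/B test of the polite-vs-rude prompt theory.
# Assumes `pip install openai` and an OPENAI_API_KEY in the environment;
# without a key, the script only builds the requests and exits.
import os

POLITE = "Hey ChatGPT, could you please explain inflation to me? Thanks!"
RUDE = "Explain inflation to me if you can."

def build_request(prompt: str) -> dict:
    # Package a single user message in chat-completions format.
    return {"model": "gpt-4", "messages": [{"role": "user", "content": prompt}]}

def answer_detail(text: str) -> int:
    # Crude proxy for how thorough a reply is: its word count.
    return len(text.split())

if os.environ.get("OPENAI_API_KEY"):
    from openai import OpenAI  # official SDK

    client = OpenAI()
    for prompt in (POLITE, RUDE):
        reply = client.chat.completions.create(**build_request(prompt))
        text = reply.choices[0].message.content
        print(f"{answer_detail(text):4d} words <- {prompt!r}")
```

Running both prompts a handful of times and comparing average reply lengths is about as rigorous as the Reddit experiments were, but it does make the comparison repeatable.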

Interestingly, if you ask ChatGPT for a formula to create a good prompt, it includes “Polite and respectful tone” as an essential part.

Polite
Being polite is part of the formula for a good prompt. (ChatGPT/Artificial Corner)

The end of CAPTCHAs?

New research has found that AI bots are faster and better at solving puzzles designed to detect bots than humans are. 

CAPTCHAs are those annoying little puzzles that ask you to pick out the fire hydrants or interpret some wavy illegible text to prove you are a human. But as the bots got smarter over the years, the puzzles became more and more difficult.

Also read: Apple developing pocket AI, deep fake music deal, hypnotizing GPT-4

Now researchers from the University of California and Microsoft have found that AI bots can solve the puzzles half a second faster than humans, with accuracy rates of 85% to 100%, compared with 50% to 85% for humans.

So it looks like we are going to have to verify humanity some other way, as Elon Musk keeps saying. There are better solutions than paying him $8, though. 

Wired argues that fake AI child porn could be a good thing

Wired has asked the question that nobody wanted to know the answer to: Could AI-Generated Porn Help Protect Children? While the article calls such imagery “abhorrent,” it argues that photorealistic fake images of child abuse might at least protect real children from being abused in its creation.

“Ideally, psychiatrists would develop a method to cure viewers of child pornography of their inclination to view it. But short of that, replacing the market for child pornography with simulated imagery may be a useful stopgap.”

It’s a super-controversial argument and one that’s almost certain to go nowhere, given there’s been an ongoing debate spanning decades over whether adult pornography (which is a much less radioactive topic) in general contributes to “rape culture” and greater rates of sexual violence — which anti-porn campaigners argue — or if porn might even reduce rates of sexual violence, as supporters and various studies appear to show. 

“Child porn pours gas on a fire,” high-risk offender psychologist Anna Salter told Wired, arguing that continued exposure can reinforce existing attractions by legitimizing them.

But the article also reports some (inconclusive) research suggesting some pedophiles use pornography to redirect their urges and find an outlet that doesn’t involve directly harming a child.

Louisiana recently outlawed the possession or production of AI-generated fake child abuse images, joining a number of other states. In countries like Australia, the law makes no distinction between fake and real child pornography and already outlaws cartoons.

Amazon’s AI summaries are net positive

Amazon has rolled out AI-generated review summaries to some users in the United States. On the face of it, this could be a real time saver, allowing shoppers to find out the distilled pros and cons of products from thousands of existing reviews without reading them all.

But how much do you trust a massive corporation with a vested interest in higher sales to give you an honest appraisal of reviews?

Also read: AI’s trained on AI content go MAD, is Threads a loss leader for AI data?

Amazon already defaults to “most helpful” reviews, which are noticeably more positive than “most recent” reviews. And the select group of mobile users with access so far have already noticed more pros are highlighted than cons.

Search Engine Journal’s Kristi Hines takes the merchant’s side and says summaries could “oversimplify perceived product problems” and “overlook subtle nuances – like user error” that “could create misconceptions and unfairly harm a seller’s reputation.” This suggests Amazon will be under pressure from sellers to juice the reviews.

So Amazon faces a tricky line to walk: being positive enough to keep sellers happy but also including the flaws that make reviews so valuable to customers. 

Reviews
Customer review summaries (Amazon)

Microsoft’s must-see food bank

Microsoft was forced to remove a travel article about Ottawa’s 15 must-see sights that listed the “beautiful” Ottawa Food Bank at number three. The entry ends with the bizarre tagline, “Life is already difficult enough. Consider going into it on an empty stomach.”

Microsoft claimed the article was not published by an unsupervised AI and blamed “human error” for the publication.

“In this case, the content was generated through a combination of algorithmic techniques with human review, not a large language model or AI system. We are working to ensure this type of content isn’t posted in future.”

Debate over AI and job losses continues

What everyone wants to know is whether AI will cause mass unemployment or simply change the nature of jobs. The fact that most people still have jobs despite a century or more of automation and computers suggests the latter, and so does a new report from the United Nations International Labour Organization.

Most jobs are “more likely to be complemented rather than substituted by the latest wave of generative AI, such as ChatGPT”, the report says.

“The greatest impact of this technology is likely to not be job destruction but rather the potential changes to the quality of jobs, notably work intensity and autonomy.”

It estimates around 5.5% of jobs in high-income countries are potentially exposed to generative AI, with the effects disproportionately falling on women (7.8% of female employees) rather than men (around 2.9% of male employees). Admin and clerical roles, typists, travel consultants, scribes, contact center information clerks, bank tellers, and survey and market research interviewers are most under threat. 

Also read: AI travel booking hilariously bad, 3 weird uses for ChatGPT, crypto plugins

A separate study from Thomson Reuters found that more than half of Australian lawyers are worried about AI taking their jobs. But are these fears justified? The legal system is incredibly expensive for ordinary people to afford, so it seems just as likely that cheap AI lawyer bots will simply expand the affordability of basic legal services and clog up the courts.

How companies use AI today

There are a lot of pie-in-the-sky speculative use cases for AI in 10 years’ time, but how are big companies using the tech now? The Australian newspaper surveyed the country’s biggest companies to find out. Online furniture retailer Temple & Webster is using AI bots to handle pre-sale inquiries and is working on a generative AI tool so customers can create interior designs to get an idea of how its products will look in their homes.

Treasury Wines, which produces the prestigious Penfolds and Wolf Blass brands, is exploring the use of AI to cope with fast-changing weather patterns that affect vineyards. Toll road company Transurban has automated incident detection equipment monitoring its huge network of traffic cameras.

Sonic Healthcare has invested in Harrison.ai’s cancer detection systems for better diagnosis of chest and brain X-rays and CT scans. Sleep apnea device provider ResMed is using AI to free up nurses from the boring work of monitoring sleeping patients during assessments. And hearing implant company Cochlear is using the same tech Peter Jackson used to clean up grainy footage and audio for The Beatles: Get Back documentary for signal processing and to eliminate background noise for its hearing products.

All killer, no filler AI news

— Six entertainment companies, including Disney, Netflix, Sony and NBCUniversal, have advertised 26 AI jobs in recent weeks with salaries ranging from $200,000 to $1 million.

— New research published in the journal Gastroenterology used AI to examine the medical records of 10 million U.S. veterans. It found that AI can detect some esophageal and stomach cancers up to three years before a doctor is able to make a diagnosis.

— Meta has released an open-source AI model that can instantly translate and transcribe 100 different languages, bringing us ever closer to a universal translator.

— The New York Times has blocked OpenAI’s web crawler from reading and then regurgitating its content. The NYT is also considering legal action against OpenAI for intellectual property rights violations.

Pictures of the week

Midjourney has caught up with Stable Diffusion and Adobe and now offers Inpainting, which appears as “Vary (region)” in the list of tools. It enables users to select part of an image and add a new element — so, for example, you can grab a pic of a woman, select the region around her hair, type in “Christmas hat,” and the AI will plonk a hat on her head. 

Midjourney admits the feature isn’t perfect and works better when used on larger areas of an image (20%-50%) and for changes that are more sympathetic to the original image rather than basic and outlandish.

Vary region
To change the clothing, simply select the area and write a text prompt (AI educator Chase Lean’s Twitter)
Vary region
Vary region demo by AI educator Chase Lean (Twitter)

Creepy AI protests video

Asking an AI to create a video of protests against AIs resulted in this creepy video that will turn you off AI forever.

Andrew Fenton


Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
