AI Arena

AI Eye chatted with Framework Ventures’ Vance Spencer recently, and he raved about the possibilities offered by an upcoming game his fund invested in called AI Arena, in which players train AI models to battle each other in an arena.

Framework Ventures was an early investor in Chainlink and Synthetix and was three years ahead of NBA Top Shot with a similar NFL platform, so when the fund gets excited about a project’s prospects, it’s worth looking into.

Also backed by Paradigm, AI Arena is like a cross between Super Smash Brothers and Axie Infinity. The AI models are tokenized as NFTs, meaning players can train them up and flip them for profit or rent them to noobs. While this is a gamified version, there are endless possibilities involved with crowdsourcing user-trained models for specific purposes and then selling them as tokens in a blockchain-based marketplace.

Screenshot from AI Arena

“Probably some of the most valuable assets on-chain will be tokenized AI models; that’s my theory at least,” Spencer predicts.

AI Arena chief operating officer Wei Xie explains that his co-founders, Brandon Da Silva and Dylan Pereira, had been toying with creating games for years, and when NFTs and, later, AI emerged, Da Silva had the brainwave to put all three elements together.

“Part of the idea was, well, if we can tokenize an AI model, we can actually build a game around AI,” says Xie, who worked alongside Da Silva in TradFi. “The core loop of the game actually helps to reveal the process of AI research.”


There are three elements to training a model in AI Arena. The first is demonstrating what needs to be done — like a parent showing a kid how to kick a ball. The second element is calibrating and providing context for the model — telling it when to pass and when to shoot for goal. The final element is seeing how the AI plays and diagnosing where the model needs improvement.   

“So the overall game loop is like iterating, iterating through those three steps, and you’re kind of progressively refining your AI to become this more and more well balanced and well rounded fighter.”

The game uses a custom-built feedforward neural network, and the AIs are constrained and lightweight, meaning the winner won’t just be whoever is able to throw the most computing resources at the model.
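
For the curious, here is a rough idea of what a “constrained and lightweight” feedforward fighter brain could look like in code. This is purely an illustrative sketch in Python with made-up state and action encodings and layer sizes; AI Arena hasn’t published its actual architecture.

```python
import numpy as np

# Illustrative only: a tiny feedforward policy network of the kind a
# lightweight fighting-game AI could use. The state/action encodings and
# layer sizes are invented for this sketch, not AI Arena's real design.

rng = np.random.default_rng(0)

STATE_DIM = 12   # hypothetical: positions, velocities, health, distance to opponent
HIDDEN = 16      # deliberately small, so raw compute isn't the deciding factor
N_ACTIONS = 6    # hypothetical: move left/right, jump, block, light/heavy attack

# The whole "brain" is just a few hundred numbers.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))
b2 = np.zeros(N_ACTIONS)

def choose_action(state: np.ndarray) -> int:
    """Forward pass: game state -> hidden layer -> action scores -> chosen action."""
    hidden = np.tanh(state @ W1 + b1)   # single hidden layer with tanh activation
    scores = hidden @ W2 + b2           # one score per possible action
    return int(np.argmax(scores))       # play the highest-scoring action

# Example: feed in a random game state and get back an action index.
print(choose_action(rng.normal(size=STATE_DIM)))
```

Training in the game amounts to nudging those weights — through demonstration, calibration and diagnosis — rather than scaling the network up.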

“We want to see ingenuity, creativity to be the discerning factor,” Xie says. 

Currently in closed beta testing, AI Arena is targeting the first quarter of next year for mainnet launch on Ethereum scaling solution Arbitrum. There are two versions of the game: One is a browser-based game that anyone can log into with a Google or Twitter account and start playing for fun, while the other is blockchain-based for competitive players, the “esports version of the game.”


This being crypto, there is a token of course, which will be distributed to players who compete in the launch tournament and later be used to pay entry fees for subsequent competitions. Xie envisages a big future for the tech, saying it can be used “in a first-person shooter game and a soccer game,” and expanded into a crowdsourced marketplace for AI models that are trained for specific business tasks.

“What somebody has to do is frame it into a problem and then we allow the best minds in the AI space to compete on it. It’s just a better model.”

Chatbots can’t be trusted

A new analysis from AI startup Vectara shows that the output from large language models like ChatGPT or Claude simply can’t be relied upon for accuracy.

Everyone knew that already, but until now there was no way to quantify the precise amount of bullshit each model is generating. It turns out that GPT-4 is the most accurate, inventing fake information around just 3% of the time. Meta’s Llama models make up nonsense 5% of the time, while Anthropic’s Claude 2 system produces bullshit 8% of the time.

Google’s PaLM hallucinated an astonishing 27% of its answers.

PaLM 2 is one of the components incorporated into Google’s Search Generative Experience, which highlights useful snippets of information in response to common search queries. It’s also unreliable.

For months now, if you ask Google for an African country beginning with the letter K, it shows the following snippet of totally wrong information: 

“While there are 54 recognized countries in Africa, none of them begin with the letter ‘K’. The closest is Kenya, which starts with a ‘K’ sound, but is actually spelled with a ‘K’ sound.”

It turns out Google’s AI got this from a ChatGPT answer, which in turn traces back to a Reddit post, which was just a gag set up for this response:

“Kenya suck on deez nuts lmaooo.”

Screenshot from r/teenagers subreddit (Spreekaway Twitter)

Google rolled out the experimental AI feature earlier this year, and recently users started reporting that it was shrinking and even disappearing from many searches.

Google may have just been refining it though, as this week the feature rolled out to 120 new countries and four new languages, with the ability to ask follow-up questions right on the page. 

AI images in the Israel-Gaza war

While journalists have done their best to hype up the issue, AI-generated images haven’t played a huge role in the war, as the real footage of Hamas atrocities and dead kids in Gaza is affecting enough.

There are examples, though: 67,000 people saw an AI-generated image of a toddler staring at a missile attack with the caption “This is what children in Gaza wake up to.”  Another pic of three dust-covered but grimly determined kids in the rubble of Gaza holding a Palestinian flag was shared by Tunisian journalist Muhammad al-Hachimi al-Hamidi.

And for some reason, a clearly AI-generated pic of an “Israeli refugee camp” with an enormous Star of David on the side of each tent was shared multiple times on Arabic news outlets in Yemen and Dubai.

AI-generated pic picked up by news sites (Twitter)

Aussie politics blog Crikey.com reported that Adobe is selling AI-generated images of the war through its stock image service, and an AI pic of a missile strike was run as if it were real by media outlets including Sky and the Daily Star.

But the real impact of AI-generated fakes is providing partisans with a convenient way to discredit real pics. There was a major controversy over a bunch of pics of Hamas’s leadership living it up in luxury, which users claimed were AI fakes. 

But the images date back to 2014 and were just poorly upscaled using AI. AI company Accrete also reports that social media accounts associated with Hamas have regularly claimed that genuine footage and pictures of atrocities are AI-generated in order to cast doubt on them.

In a piece of good timing, Google has just announced it’s rolling out tools that can help users spot fakes. Click on the three dots at the top right of an image and select “About This Image” to see how old the image is and where it’s been used. An upcoming feature will add fields showing whether an image is AI-generated, with Google AI, Facebook, Microsoft, Nikon and Leica all adding symbols or watermarks to AI imagery.

OpenAI dev conference

OpenAI this week unveiled GPT-4 Turbo, which is much faster and can accept long text inputs like books of up to 300 pages. The model has been trained on data up to April this year and can generate captions or descriptions of visual input. For devs, the new model will be one-third the cost to access.
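
For developers, the headline change is that much bigger input window. Here’s a minimal sketch of what calling it looks like with the openai Python package (v1+), using the preview model name announced at the conference; the file name and prompts are placeholders for illustration, not anything from OpenAI’s demos.

```python
# Minimal sketch: summarizing a long document with GPT-4 Turbo's larger
# context window via the openai Python package (v1+). Requires the
# OPENAI_API_KEY environment variable; the file name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with open("book_manuscript.txt", "r", encoding="utf-8") as f:
    long_text = f.read()  # hundreds of pages that wouldn't fit older models

response = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview name from the dev day
    messages=[
        {"role": "system", "content": "You are a concise editorial assistant."},
        {"role": "user", "content": f"Summarize the key points of this text:\n\n{long_text}"},
    ],
)

print(response.choices[0].message.content)
```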

OpenAI is also releasing its version of the App Store, called the GPT Store. Anyone can now dream up a custom GPT, define the parameters and upload some bespoke information to GPT-4, which can then build it for you and pop it on the store, with revenue split between creators and OpenAI.

CEO Sam Altman demonstrated this onstage by whipping up a program called Startup Mentor that gives advice to budding entrepreneurs. Users soon followed, dreaming up everything from an AI that does the commentary for sporting events to a “roast my website” GPT. ChatGPT went down for 90 minutes this week, possibly as a result of too many users trying out the new features. 

Not everyone was impressed, however. Abacus.ai CEO Bindu Reddy said it was disappointing that GPT-5 had not been announced, suggesting that OpenAI tried to train a new model earlier this year but found it “didn’t run as efficiently and therefore had to scrap it.” There are rumors that OpenAI is training a new candidate for GPT-5 called Gobi, Reddy said, but she suspects it won’t be unveiled until next year. 


X unveils Grok

Elon Musk brought freedom back to Twitter — mainly by freeing lots of people from spending any time there — and he’s on a mission to do the same with AI.

The beta version of Grok AI was thrown together in just two months, and while it’s not nearly as good as GPT-4, it is up to date due to being trained on tweets, which means it can tell you what Joe Rogan was wearing on his last podcast. That’s the sort of information GPT-4 simply won’t tell you.

There are fewer guardrails on the answers than on ChatGPT, although if you ask it how to make cocaine, it will snarkily tell you to “Obtain a chemistry degree and a DEA license.”

“The threshold for what it will tell you, if pushed, is what is available on the internet via reasonable browser search, which is a lot …” says Musk.

Within a few days, more than 400 cryptocurrencies linked to GROK had been launched. One amassed a $10 million market cap, and at least ten others rugpulled. 

All Killer No Filler AI News

— Samsung has introduced a new generative artificial intelligence model called Gauss that it suggests will be added to its phones and devices soon.

— YouTube has rolled out some new AI features to premium subscribers, including a chatbot that summarizes videos and answers questions about them, and another that categorizes the comments to help creators understand the feedback.

— Google DeepMind has released an AGI tier list that starts at the “No AI” level of Amazon’s Mechanical Turk and moves on to “Emerging AGI,” where ChatGPT, Bard and Llama 2 are listed. The other tiers are Competent, Expert, Virtuoso and Artificial Superintelligence, none of which have been achieved yet.

— Amazon is investing millions in a new GPT-4 rival called Olympus that is twice the size at 2 trillion parameters. It has also been testing out its new humanoid robot, called Digit, at trade shows. This one fell over.

Pics of the week

An oldie but a goodie: Alvaro Cintas has spent his weekend coming up with AI pun pictures under the heading “Wonders of the World, Misspelled by AI.”

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.


Specialist teams and online investigators deployed across England and Wales to tackle ‘national emergency’ of violence against women and girls


Specialist investigation teams for rape and sexual offences are to be created across England and Wales as the home secretary declares violence against women and girls a “national emergency”.

Shabana Mahmood said the dedicated units will be in place across every force by 2029 as part of Labour’s violence against women and girls (VAWG) strategy due to be launched later this week.

The use of Domestic Abuse Protection Orders (DAPOs), which had been trialled in several areas, will also be rolled out across England and Wales. They are designed to target abusers by imposing curfews, electronic tags and exclusion zones.

The orders cover all forms of domestic abuse, including economic abuse, coercive and controlling behaviour, stalking and ‘honour’-based abuse. Breaching the terms can carry a prison term of up to five years.

Video: Govt ‘thinking again’ on abuse strategy

Nearly £2m will also be spent funding a network of officers to target offenders operating online.

Teams will use covert and intelligence techniques to tackle violence against women and girls via apps and websites.

A similar undercover network funded by the Home Office to examine child sexual abuse has led to the arrest of more than 1,700 perpetrators.


Abuse is ‘national emergency’

Ms Mahmood said in a statement: “This government has declared violence against women and girls a national emergency.

“For too long, these crimes have been considered a fact of life. That’s not good enough. We will halve it in a decade.

“Today, we announce a range of measures to bear down on abusers, stopping them in their tracks. Rapists, sex offenders and abusers will have nowhere to hide.”

Video: Angiolini Inquiry: Recommendations are ‘not difficult’

The target to halve violence against women and girls in a decade is a Labour manifesto pledge.

The government said the measures build on existing policy, including facial recognition technology to identify offenders, improving protections for stalking victims, making strangulation a criminal offence and establishing domestic abuse specialists in 999 control rooms.


Labour has ‘failed women’

But the Conservatives said Labour had “failed women” and “broken its promises” by delaying the publication of the violence against women and girls strategy.

Shadow home secretary Chris Philp said that Labour “shrinks from uncomfortable truths, voting against tougher sentences and presiding over falling sex-offender convictions. At every turn, Labour has failed women”.

Home Secretary Shabana Mahmood will be on Sunday Morning with Trevor Phillips on Sky News this morning from 8.30am.


The Securities and Exchange Commission publishes crypto custody guide


The United States Securities and Exchange Commission (SEC) published an investor bulletin on crypto wallets and custody on Friday, outlining best practices and the common risks of different forms of crypto storage for the investing public.

The SEC’s bulletin lists the benefits and risks of different methods of crypto custody, including self-custody versus allowing a third party to hold digital assets on behalf of the investor.

If investors choose third-party custody, they should understand the custodian’s policies, including whether it “rehypothecates” the assets held in custody by lending them out or if the service provider is commingling client assets in a single pool instead of holding the crypto in segregated customer accounts.

The Bitcoin supply broken down by the type of custodial arrangement. Source: River

The SEC guide also outlined crypto wallet types, breaking down the pros and cons of hot wallets, which are connected to the internet, and cold wallets, which keep assets in offline storage.

Hot wallets carry the risk of hacking and other cybersecurity threats, according to the SEC, while cold wallets carry the risk of permanent loss if the offline storage fails, a storage device is stolen, or the private keys are compromised. 

The SEC’s crypto custody guide highlights the sweeping regulatory change at the agency, which was hostile to digital assets and the crypto industry under former SEC Chairman Gary Gensler’s leadership.