AI Eye has been out and about at Korean Blockchain Week and Token2049 in Singapore over the past fortnight, trying to find out how crypto project leaders plan to use AI.
Probably the most well-known is Maker founder Rune Christensen, who essentially plans to relaunch his decade-old project as a bunch of sub-DAOs employing AI governance.
“People misunderstand what we mean with AI governance, right? We’re not talking about AI running a DAO,” he says, adding the AI won’t be enforcing any rules. “The AI cannot do that because it’s unreliable.” Instead the project is working on using AI for coordination and communication — as an “Atlas” to the entire project, as they’re calling it.
“Having that sort of central repository of data just makes it actually realistic to have hundreds of thousands of people from different backgrounds and different levels of understanding meaningfully collaborate and interact because they’ve got this shared language.”
Near founder Illia Polosukhin may be better known in AI circles, as his project began life as an AI startup before pivoting to blockchain. Polosukhin was one of the authors of the seminal 2017 Transformer paper (“Attention Is All You Need”) that laid the groundwork for the explosion of generative AI like ChatGPT over the past year.
Polosukhin has too many ideas about legitimate AI use cases in crypto to detail here, but one he’s very keen on is using blockchain to prove the provenance of content so that users can distinguish between genuine content and AI-generated bullshit. Such a system would encompass provenance and reputation using cryptography.
“So cryptography becomes like an instrument to ensure consistency and traceability. And then you need reputation around this cryptography, which is on-chain accounts and record keeping to actually ensure that [X] posted this and [X] is working for Cointelegraph right now.”
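Polosukhin’s idea can be sketched in miniature. The snippet below is purely illustrative — a hashed-content registry stands in for on-chain accounts, and the `publish`/`verify` helpers and the reputation table are hypothetical names invented here; a real system would use digital signatures (e.g. Ed25519) and an actual blockchain rather than bare hashes in a dict:

```python
import hashlib

# Simulated on-chain registry: content hash -> author identity.
# Real systems would use digital signatures and a blockchain;
# this sketch only illustrates the record-keeping idea.
registry = {}
reputation = {"alice": "verified journalist"}  # hypothetical reputation layer


def publish(author, content):
    """Register content provenance under the author's identity."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    registry[digest] = author
    return digest


def verify(content):
    """Check who registered this exact content, and their reputation."""
    digest = hashlib.sha256(content.encode()).hexdigest()
    author = registry.get(digest)
    if author is None:
        return None  # unregistered: could be AI-generated or tampered with
    return author, reputation.get(author, "unknown")


publish("alice", "Exclusive: on-chain provenance explained")
print(verify("Exclusive: on-chain provenance explained"))  # ('alice', 'verified journalist')
print(verify("Tampered version of the article"))           # None
```

Any edit to the content changes the hash, so a tampered copy no longer matches the registered record — which is the consistency-and-traceability property Polosukhin describes.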
Sebastien Borget from The Sandbox says the platform has been using AI for content moderation over the past year. “In-game conversation in any language is actually being filtered, so there is no more toxicity,” he explains. The project is also examining its use for music and avatar generation, as well as for more general user-generated content for world-building.
Meanwhile, Framework Ventures founder Vance Spencer outlined four main use cases for AI, the most interesting by far being training AI models and then selling them as tokens on-chain. As luck would have it, Framework has invested in a game called AI Arena, in which players train AI models to compete in the game.
Keep an eye out for in-depth Magazine features outlining their thoughts in more detail.
AI is for communists?
Speaking of AI and crypto, are they pulling in opposite directions? Dynamo DAO’s Patrick Scott dug up PayPal founder Peter Thiel’s thoughts on AI and crypto in his foreword to the re-release of the 1997 non-fiction book The Sovereign Individual, which predicted cryptocurrency, among other things. In it, Thiel argues AI is a technology of control, while crypto is one of liberation.
“AI could theoretically make it possible to centrally control an entire economy. It is no coincidence that AI is the favorite technology of the Communist Party of China. Strong cryptography, at the other pole, holds out the prospect of a decentralized and individualized world. If AI is communist, crypto is libertarian.”
Roblox unveils AI Assistant
Roblox has unveiled a new feature called Assistant, which will let users build virtual assets and write code using generative AI. In the demo, users write something like “make a game set in ancient ruins” and “add some trees,” and the AI does the rest. It’s still being developed and will be released at the end of this year or early next year. The plan is for Assistant to one day generate sophisticated gameplay or make 3D models from scratch.
Terrible workers benefit most from AI
The worst workers at your place of employment are likely to benefit the most from using AI tools, according to a new study by Boston Consulting Group. The output of below-average workers improved by 43% when using AI, while the output of above-average workers improved by just 17%.
Interestingly, workers who used AI for things beyond its current abilities performed 20% worse because the AI would present them with plausible but wrong responses.
Google Gemini gears up for release
Google’s GPT-4 competitor is nearing release, with The Information reporting that a small group of companies has been given early access to Gemini. For those who came in late, Google was seen leading the AI race right up until OpenAI dumped ChatGPT on the market in November last year (arguably before it was ready) and leaped ahead.
Google hopes Gemini can best GPT-4 by offering not just text generation but also image generation, enabling the creation of contextual images (rumors suggest it’s being trained on YouTube content, among other data). Future plans include features like controlling software with your voice and analyzing charts. Highlighting how important Gemini is, Google co-founder Sergey Brin is said to be playing an instrumental role in the evaluation and training of the models.
AI expert Brian Roemmele says he’s been testing a version of Gemini and finds it “equivalent to ChatGPT-4 but with [a] newly up-to-the-second knowledge base. This saves it from some hallucinations.”
Google CEO Sundar Pichai told Wired this week he has no regrets about not launching its chatbot early to beat ChatGPT because the tech “needed to mature a bit more before we put it in our products.”
“It’s not fully clear to me that it might have worked out as well,” Pichai said. “The fact is, we could do more after people had seen how it works. It really won’t matter in the next five to 10 years.”
AI meets 15-minute cities
Researchers at Tsinghua University in China have built an AI system that plans out cities in line with current thinking about walkable “15-minute cities” that have lots of green space (please direct conspiracy theories about the topic to X).
The researchers found the AI was better at tedious computation and repetitive tasks and was able to complete in seconds what human planners required 50 to 100 minutes to work through. Overall, they determined it was able to improve on human designs by 50% when assessed on access to services, green spaces and traffic levels.
The headline figure is a bit misleading, though, as the finished plans only increased access to basic services by 12% and to parks by 5%. In a blind judging process, 100 urban planners preferred some of the AI designs by a clear margin but expressed no preference for other designs. The researchers envisage their AI working as an assistant doing the boring stuff while humans focus on more challenging and creative aspects.
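As a toy illustration of how a composite assessment like this works, here is a hypothetical weighted score over the three criteria the study reports. The weights and plan values below are invented for demonstration and are not taken from the paper:

```python
# Illustrative only: a toy scoring function over the three criteria the
# Tsinghua study reports (service access, green space, traffic flow).
# Weights and inputs are made up for demonstration.
def plan_score(service_access, green_space, traffic_flow, weights=(0.4, 0.3, 0.3)):
    """Weighted score of a candidate plan; each input normalized to [0, 1]."""
    return sum(w * v for w, v in zip(weights, (service_access, green_space, traffic_flow)))


human_plan = plan_score(0.60, 0.50, 0.55)
ai_plan = plan_score(0.672, 0.525, 0.55)  # e.g. +12% services, +5% parks, same traffic
print(ai_plan > human_plan)  # True
```

This also shows why a headline “50% better” figure can coexist with modest component gains: the composite depends on how the individual criteria are weighted and normalized.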
Stephen Fry is cloned
Blackadder and QI star and much-loved British comedy institution Stephen Fry says he has become a victim of AI voice cloning.
At the CogX Festival in London on September 14, Fry played a clip from a historical documentary he apparently narrated — but revealed the voice wasn’t him at all. “I said not one word of that — it was a machine,” he said. “They used my reading of the seven volumes of the Harry Potter books, and from that dataset an AI of my voice was created, and it made that new narration.”
Training AI to rip off the work of actors and repurpose it elsewhere without payment is one of the key issues in the current actors and writers strike in Hollywood. Fry said the incident was just the tip of the iceberg, and AI will “advance at a faster rate than any technology we have ever seen. One thing we can all agree on: it’s a fucking weird time to be alive.”
How not to cheat using ChatGPT
The sort of academics drawn to cheating with ChatGPT also appear to be the sort who make dumb mistakes that give it away. A paper published in the journal Physica Scripta was retracted after computer scientist Guillaume Cabanac noticed the phrase “Regenerate response” in the text, indicating it had been copied directly from ChatGPT’s interface.
Cabanac has helped uncover hundreds of AI-generated academic manuscripts since 2015, including a paper in the August edition of Resources Policy, which contained the tell-tale line: “Please note that as an AI language model, I am unable to …”
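Tell-tale lines like these are easy to screen for mechanically. A minimal sketch of that kind of check — the phrase list here is illustrative, not Cabanac’s actual tooling:

```python
# Scan a manuscript for boilerplate phrases that ChatGPT's interface
# or refusal messages leave behind. Illustrative phrase list only.
TELL_TALE_PHRASES = [
    "regenerate response",
    "as an ai language model",
    "i am unable to",
]


def flag_suspect_text(manuscript):
    """Return the tell-tale phrases found in the manuscript, if any."""
    lowered = manuscript.lower()
    return [p for p in TELL_TALE_PHRASES if p in lowered]


sample = "Please note that as an AI language model, I am unable to ..."
print(flag_suspect_text(sample))  # ['as an ai language model', 'i am unable to']
```

Simple string matching only catches the sloppiest copy-paste jobs, of course — which is exactly the population this column is describing.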
All Killer No Filler AI News
— Meta is also working on a new model to compete with GPT-4 that it aims to launch in 2024, according to The Wall Street Journal. It is intended to be many times more powerful than its existing Llama 2.
— Microsoft has open-sourced a novel protein-generating AI called EvoDiff. It works like Stable Diffusion and Dall-E2, but instead of generating images, it designs proteins that can be used for specific medical purposes. This is expected to lead to new classes of drugs and therapies.
— Defense contractor Palantir, along with Cohere, IBM, Nvidia, Salesforce, Scale AI and Stability, has signed up to the White House’s somewhat vague plans for responsible AI development. The administration is also developing an executive order on AI and plans to introduce bipartisan legislation.
— Sixty U.S. senators attended a private briefing recently about the risks of AI from 20 Silicon Valley CEOs and wonks, including Sam Altman, Mark Zuckerberg and Bill Gates. Elon Musk told reporters afterward that the meeting “may go down in history as very important to the future of civilization.”
— ChatGPT traffic has fallen for three months in a row: by roughly 10% in both June and July, and a further 3.2% in August. The average time users spend on the site fell from 8.7 minutes in March to seven minutes last month.
— Finnish prisoners are being paid $1.67 to help train AI models for a startup called Metroc. The AI is learning how to determine when construction projects are hiring.
— The U.S. is way out in front of the AI race, with 4,643 startups and $249 billion of investment since 2013, which is 1.9 times more startups than China and Europe combined.
Writer and storyteller Jon Finger tried out the HeyGen video app, which is able to not only translate his words but also clone his voice AND sync up his lip movements to the translated text.
Testing out @HeyGen_Official translation on French and German. I don’t speak either language so let me know if it sounds natural if you do. I hope if you pay you can turn off the color correction. It didn’t work on my phone so I had to upload on my pc. https://t.co/FMJp9sJEBI pic.twitter.com/iF5eONAQ3c
Andrew Fenton
Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.