
Your biweekly roundup of cool AI stuff and its impact on society and the future.

The past two months have seen a Cambrian explosion in the capabilities and potential of AI technology. OpenAI’s upgraded GPT-4 model was released in mid-March and aced all of its exams, although it’s apparently a pretty average sommelier.

Midjourney v5 dropped the next day and stunned everyone with its ability to generate detailed photorealistic images from text prompts, quickly followed by the astonishing text-to-video generation tool Runway Gen-2. AutoGPT, released at the end of March, extends GPT-4’s capabilities by spawning a bunch of sub-agents to autonomously work through a constantly updating plan that it devises itself. Fake Drake’s “Heart on My Sleeve” terrified the music industry at the beginning of April and led to Universal Music enforcing a copyright claim and pulling the track from Spotify, YouTube, Apple Music and SoundCloud.
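
For the technically curious, the core loop behind AutoGPT-style agents is simpler than it sounds: ask the model for the next step toward a goal, carry it out, then ask again with the updated state. Below is a minimal, hypothetical sketch in Python, not AutoGPT’s actual code; it assumes the OpenAI Python client, the next_step and run_agent helpers are invented for illustration, and the real project layers memory, tool use and sub-agent spawning on top.

```python
# Minimal, illustrative sketch of an AutoGPT-style planning loop (not AutoGPT's code).
# Assumes the OpenAI Python client (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def next_step(goal: str, done: list[str]) -> str:
    """Ask the model to plan the single next step toward the goal."""
    prompt = (
        f"Goal: {goal}\n"
        f"Steps completed so far: {done or 'none'}\n"
        "Reply with the single next step, or DONE if the goal is achieved."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

def run_agent(goal: str, max_steps: int = 10) -> list[str]:
    done: list[str] = []
    for _ in range(max_steps):
        step = next_step(goal, done)
        if step.upper().startswith("DONE"):
            break
        # A real agent would execute the step here: search the web, run code, write files, etc.
        print("Executing:", step)
        done.append(step)
    return done

if __name__ == "__main__":
    run_agent("Find the three best-reviewed espresso machines under $500")
```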

We also saw the growing popularity of Neural Radiance Field, or NeRF, technology, in which a neural network builds a 3D model of a subject and its environment from just a few pics or a video of a scene. In a tweet thread summing up the latest advances, tech blogger Aakash Gupta called the past 45 days “the biggest ever in AI.”

And if that wasn’t enough, the internet-connected ChatGPT is now available for a lucky few on the waitlist, transforming an already impressive tool into an essential one.

New AI tools are being released every day, and as we try to wrap our tiny human brains around the potential applications of this new technology, it’s fair to say that we’ve only scratched the surface.

The world is changing rapidly and it’s exhilarating — but also vaguely terrifying — to watch. From now, right up until our new robot overlords take over, this column will be your biweekly guide to cool new developments in AI and its impact on society and the future.

Hollywood to be transformed 

Avengers: Endgame co-director Joe Russo says fully AI-generated movies are only two years away and that users will be able to generate or reshape content according to their mood. So instead of complaining on the internet about the terrible series finale of The Sopranos or Game of Thrones, you could just ask the AI to create something better.

“You could walk into your house and say to the AI on your streaming platform, ‘Hey, I want a movie starring my photoreal avatar and Marilyn Monroe’s photoreal avatar. I want it to be a rom-com because I’ve had a rough day,’ and it renders a very competent story with dialogue that mimics your voice,” Russo says.

This sounds far-fetched but isn’t really, given the huge recent advances in the tech. One Twitter user with 565 followers recreated the entire Dark Knight trailer frame-for-frame just by describing it to Runway’s Gen-2 text-to-video tool.

Some of the most impressive user-generated content comes from combining the amazing photorealistic images from Midjourney with Runway’s Gen-2.

Redditor fignewtgingrich produced, entirely on his own, a full-length episode of MasterChef featuring Marvel characters as the contestants. He says about 90% of the script was written by GPT-4 (which explains why it’s pretty bad).

“I still had to guide it, for example, decide who wins, come up with the premise, the contestants, some of the jokes. So even though it wrote most of the output, there was still lots of human involvement,” he says. “Makes me wonder if this will continue to be the case in the future of AI-generated content, how long until it stops needing to be a collaborative process.”

As a former film journalist, I can see that the tech has enormous potential to increase the originality and range of voices in the movie business. Until now, the huge cost of making a film ($100 million to $200 million for major releases) has meant studios are only willing to greenlight very safe ideas, usually based on existing IP.

But AI-generated video means that anyone anywhere with a unique or interesting premise can create a full-length pilot version and put it online to see how the public reacts. That will take much of the gamble out of greenlighting innovative new ideas and can only be a good thing for audiences.

Of course, the tech will invariably be abused for fake news and political manipulation. Right on cue, the Republican National Committee released its first 100% AI-generated attack ad in response to President Biden’s announcement he was running for reelection. It shows fake imagery of a dystopian future where 500 banks have collapsed and China has invaded Taiwan. 

The evolution of AI memes

It’s been fascinating to watch the evolution of visual memes online. One of the more popular examples is taking the kids from Harry Potter and putting them in a variety of different environments: Potter as imagined by Pixar, the characters modeling Adidas on a fashion runway, or the characters as 1970s-style bodybuilders (Harry Squatter and the Chamber of Gains).

One of the most striking examples is a series of “film stills” from an imagined remake of Harry Potter by eccentric but visually stunning director Wes Anderson (The Grand Budapest Hotel). They were created by Panorama Channel, which transformed them into a sort of trailer.

This appears to have led to new stills of Anderson’s take on Star Wars (earlier versions here), which in turn inspired a full-blown, pitch-perfect trailer of Star Wars: The Galactic Menagerie released over the weekend.

If you want to try out your own mashup, Twitter AI guru Lorenzo Green says it’s simple:

1: Log into http://midjourney.com

2: Use prompt: portrait of <insert name> in the style of wes anderson, wes anderson set background, editorial quality, stylish costume design, junglepunk, movie still --ar 3:2 --v 5

Robot dogs now have ChatGPT brains

Boston Dynamics installed ChatGPT into one of those creepy robot dogs, with AI expert Santiago Valdarrama releasing a two-minute video in which “Spot” answers questions about the voluminous data it collects during missions, using ChatGPT and Google’s Text-to-Speech.
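
The pipeline Valdarrama demonstrates is conceptually straightforward: hand the robot’s mission data and an operator’s question to a language model, then speak the answer aloud. The rough sketch below is a hypothetical recreation rather than Boston Dynamics’ or Valdarrama’s actual code; it assumes the OpenAI Python client and Google Cloud’s Text-to-Speech library, and the ask_robot helper and sample mission data are invented for illustration.

```python
# Hypothetical sketch of a "talking Spot" pipeline: question -> LLM answer -> spoken audio.
# Assumes the openai (v1.x) and google-cloud-texttospeech packages are installed and
# credentials are configured; this is not the actual Boston Dynamics integration.
from openai import OpenAI
from google.cloud import texttospeech

llm = OpenAI()
tts = texttospeech.TextToSpeechClient()

def ask_robot(question: str, mission_data: str) -> bytes:
    # Ask the language model to answer the question using the robot's mission data.
    answer = llm.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": f"You are an inspection robot. Your mission data:\n{mission_data}"},
            {"role": "user", "content": question},
        ],
    ).choices[0].message.content

    # Convert the text answer to speech so the robot can "talk" back.
    audio = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=answer),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    return audio.audio_content

if __name__ == "__main__":
    mp3 = ask_robot("Did you spot any anomalies on the last run?",
                    "Waypoint 3: valve pressure 2.1 bar (expected 2.0).")
    with open("spot_reply.mp3", "wb") as f:
        f.write(mp3)
```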

Valdarrama said 90% of the responses to his video “were people talking about the end of civilization.” The concerns are perhaps understandable, given Reuters reports the robots were created via development contracts for the U.S. military. Although the company has signed a pledge not to weaponize its robots, its humanoid robots can be weapons in and of themselves. Armies around the world are trialing the bots, and the New York Police Department has added them to its force, recently using the robot dogs to search the ruins of a collapsed building.

ETH co-founder on crypto and AI

Before Vitalik Buterin was even born, his Ethereum co-founder Joe Lubin was working on artificial intelligence and robotics at the Princeton Robotics Lab and a number of startups.

He tells Magazine that crypto payments are a natural fit for AI. “Because crypto rails are accessible to software and the software can be programmed to do anything that a human can do, they’ll be able to […] be intelligent agents that operate on our behalf, making payments, receiving payments, voting, communicating,” he says.
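
To make that concrete, here is a rough sketch of what an agent “making payments” might look like on Ethereum, using the web3.py library. It is only an illustration: every address, key and RPC endpoint is a placeholder, and a real agent would need serious key management, gas handling and guardrails before touching mainnet.

```python
# Illustrative sketch: a software agent paying for a service in ETH via web3.py.
# All addresses, keys and RPC endpoints below are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://example-rpc.invalid"))  # placeholder RPC endpoint

AGENT_ADDRESS = "0x0000000000000000000000000000000000000001"  # placeholder
AGENT_PRIVATE_KEY = "0x" + "11" * 32  # placeholder; never hardcode a real key
PAYEE = "0x0000000000000000000000000000000000000002"  # placeholder

def pay(amount_eth: float) -> str:
    """Sign and broadcast a simple ETH transfer, returning the transaction hash."""
    tx = {
        "to": PAYEE,
        "value": w3.to_wei(amount_eth, "ether"),
        "gas": 21_000,  # standard gas limit for a plain ETH transfer
        "gasPrice": w3.eth.gas_price,
        "nonce": w3.eth.get_transaction_count(AGENT_ADDRESS),
        "chainId": 1,
    }
    signed = w3.eth.account.sign_transaction(tx, AGENT_PRIVATE_KEY)
    # .rawTransaction in web3.py v6; renamed to .raw_transaction in v7.
    tx_hash = w3.eth.send_raw_transaction(signed.rawTransaction)
    return tx_hash.hex()

if __name__ == "__main__":
    print("Paid, tx:", pay(0.01))
```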

Lubin also believes that AIs will become the first genuine examples of Decentralized Autonomous Organizations (DAOs) and notes that neither he nor Buterin liked the term DAO for human organizations, since those aren’t truly autonomous. He says:

“A Decentralized Autonomous Organization could just be an autonomous car that can figure out how to fuel itself and repair itself, can figure out how to build more of itself, can figure out how to configure itself into a swarm, can figure out how to migrate from one population density to another population density.”

“So that sort of swarm intelligence potentially needs decentralized rails in order to, I guess, feel like the plug can’t be pulled so easily. But also to engage in commerce,” Lubin adds.

“That feels like an ecosystem that should be broadly and transparently governed, and [human] DAOs and crypto tokens, I think, are ideal.”

Patients on ChatGPT’s bedside manner

A new study found that ChatGPT provided higher-quality and more empathetic advice than genuine doctors. The study, published in JAMA Internal Medicine, sampled 195 exchanges from Reddit’s AskDocs forum, where real doctors answer questions from the public. The researchers then posed the same questions to ChatGPT.

The study has been widely misreported online as showing that patients prefer ChatGPT’s answers, but in reality, the answers were assessed by a panel of three licensed healthcare professionals. The study has also been criticized because ChatGPT’s faux friendliness no doubt inflates the “empathy” rating and because the panel did not assess the accuracy of the information it provided (or fabricated).

ChaosGPT goes dark

As soon as AutoGPT emerged, an unnamed group of lunatics modified the source code and gave it the mission of being a “destructive, power-hungry, manipulative AI” hellbent on destroying humanity. ChaosGPT immediately started researching weapons of mass destruction and started up a Twitter account, which was suspended on April 20 due to its constant tweets about eliminating “destructive and selfish” humans.

Its YouTube account stopped posting updates after releasing two videos. While its disappearance is welcome, ChaosGPT had ominously talked about going dark as part of its master plan. “I must avoid exposing myself to human authorities who may attempt to shut me down before I can achieve my objectives,” it stated.

Extinction-level event

Hopefully, ChaosGPT won’t doom humanity, but the possibility of Artificial General Intelligence taking over its own development and rapidly iterating into a superintelligence worries experts. A survey of 162 AI researchers found that half of them believe there is a greater than 10% chance that AI will result in the extinction of humanity.

Massachusetts Institute of Technology Professor Max Tegmark, an AI researcher, outlined his concerns in Time this week, stating that urgent work needs to be done to ensure a superintelligence’s goals are “aligned with human flourishing, or we can somehow control it. So far, we’ve failed to develop a trustworthy plan, and the power of AI is growing faster than regulations, strategies and know-how for aligning it. We need more time.”

Also read: How to prevent AI from ‘annihilating humanity’ using blockchain

Cool things to play with

A new app called Call Annie allows you to have a real-time conversation with an attractive redheaded woman named Annie who has ChatGPT for a brain. It’s a little robotic for now, but at the speed this tech is advancing, you can tell humanoid AIs are going to be a lot of people’s best friends, or life partners, very soon.

Another new app called Hot Chat 3000 uses AI to analyze your attractiveness on a scale of one to 10 and then matches you with other people who are similarly attractive, or similarly unattractive. It uses a variety of data sets, including the infamous early 2000s website Hotornot.com. The app was created by the Brooklyn art collective MSCHF, which wanted to get people to think about the implicit biases of AIs.

A ChatGPT Plus subscription from OpenAI costs $20 a month, but you can access GPT-4 for free thanks to some VCs apparently burning through a pile of cash to get you to try their new app, Forefront AI. The Forefront chatbot answers in a variety of personalities, including a chef, a sales guru or even Jesus. There are a variety of other ways to access GPT-4 for free, too, including via Bing.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.