Toby Walsh | Ministry of AI
What can we expect from a world of deepfakes where anything you see or hear might be synthetic and the output of AI? Scientia Professor of Artificial Intelligence at UNSW, Toby Walsh unpacks untruths and warns of a future inundated with machine-generated content, predicting that soon, 99% of what we read, see, and hear will be created by AI.
Listen as Toby discusses the urgent need for digital watermarks to authenticate online content, proposing that this technology can help restore trust. However, he cautions that building this infrastructure will take time, leaving us in a precarious situation where truth is increasingly contested.
Presented as part of The Ethics Centre's Festival of Dangerous Ideas, supported by UNSW Sydney.
Transcript
Toby Walsh: Thank you, thank you. Yes, we're back to AI.
Welcome to the Ministry of AI. The literary amongst you will recognise this. This is a real building, this is actually the Senate House at the University of London.
It was the inspiration for George Orwell's ministries in his prophetic novel 1984. There were four ministries, but the most important, because it was trying to control how people think, was the Ministry of Truth. Now, of course, the Ministry of Truth was not about truth. In that language of Orwellian doublespeak, the Ministry of Truth was all about untruth. War is peace, freedom is slavery, ignorance is strength. Now, the Ministry of Truth is about to be replaced by the Ministry of AI.
Much of the content that you're starting to read is generated by machines; it's synthetic. We're starting to be inundated by a tsunami of deepfakes and synthetic content. Actually, I think we're going to look back at 2024, hard as it is to imagine, as the golden age of the internet, when 99% of the content was high-quality and human-generated, and 1% was by the bots. Because in the very near future, 99% of the content that you see and hear and read is going to be written by the machines, and 1% is going to be by humans.
We're not going to be able to hear the human voices in the sea of machine-generated content. Now, it's not that the machines are saying untruths, because to be able to say an untruth, you'd have to know what was true or false, and the machines have no idea what is true or false. I mean, think about it, how could they? They were trained on the internet.
Laughter
People haven't carefully labelled the internet as truth and conspiracy theory. In fact, I'm actually really quite surprised how polite and well-spoken the bots are.
Laughter
Not only is the internet not labelled true or false, it's not labelled racist, offensive, sexist, and all the other things.
There is a chatbot that has been trained on 4chan. It's called GPT-4chan. It's been trained on 134 million posts from that dark, grey, offensive corner of the internet, and not surprisingly, it is, as you can imagine, vile and offensive.
If you hold a mirror up to the darkest corners of the internet, it's exactly what you can imagine. Now, it's often said that these bots are just hallucinating. Again, that's Orwellian doublespeak, as though they had some idea of reality and they were just being a little, I don't know, creative with it.
It's not a dream. It's a nightmare. There, I think we should, you know, be honest here, be frank, say it as it is. This is not hallucination. It's pure and simple bullshit. Actually, one of the better definitions I've seen for ChatGPT is it's the perfect mansplainer.
Laughter
It will confidently tell you untruths and inaccuracies that it wants you to believe. Now, as you probably know, bots like ChatGPT are random. If you ask the question again, especially if you turn the temperature up, they'll say something different. Sort of reminds me a bit of Donald Trump.
Laughter
But, you know, under the hood, there actually are probabilities: I'm 99% certain of this answer; this one's a coin toss. But OpenAI, when they built ChatGPT, chose not to surface those probabilities. They could have colour-coded them: red for I'm certain about this answer, blue for I'm uncertain about this answer. They could even have chosen not to answer some of the questions. ChatGPT will never say, you know what, I'm not really very certain on this one, I think you'd better go and ask someone else.
Laughter
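[Editor's note: to make the temperature idea concrete, here is a minimal sketch of the sampling mechanism being described. The tokens and raw scores are invented for illustration; a real model like ChatGPT scores tens of thousands of tokens at every step, but the temperature rescaling works the same way.]

```python
import math
import random

# A toy vocabulary and raw model scores (logits), invented for illustration.
tokens = ["Paris", "London", "Rome"]
logits = [4.0, 1.0, 0.5]

def sample(temperature):
    # Softmax with temperature: dividing the logits by the temperature
    # flattens the distribution as the temperature goes up.
    scaled = [l / temperature for l in logits]
    total = sum(math.exp(s) for s in scaled)
    probs = [math.exp(s) / total for s in scaled]
    return random.choices(tokens, weights=probs)[0], probs

for t in (0.2, 1.0, 2.0):
    choice, probs = sample(t)
    print(f"temperature={t}: probs={[round(p, 2) for p in probs]} -> {choice}")
```

[At low temperature the model returns the most probable token nearly every time; turn the temperature up and asking again gets you a different answer, even though the probabilities were there under the hood all along.]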
Another very deceptive thing, to fool you, to fake you out, is the way that it slowly types out the answer. In reality, the answer's there in a flash. But they chose to slowly type the answer out, so you got the feeling that there was someone thoughtfully and carefully trying to answer your question. And it's because it doesn't know, as I said, what is true. It says what is probable.
Ask ChatGPT, for example, how many B's are in the word bananas, and it will confidently say two or three. The thing is, no one actually struggles to count the number of B's in the word bananas; everyone knows the answer's one. So, it's not in the training data. People do struggle a little bit to spell the word bananas. How many A's? Are there two or three A's? How many N's? Are there two or three N's in the word bananas? So that is in the training data. And because it says what is probable, it gives the most probable answer. And the most probable answer to any question about letters in the word bananas is two or three. So that's the answer you get.
Laughter
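[Editor's note: the ground truth here is a trivial computation, not a prediction, which is exactly the gap being pointed at. A three-line check, not from the talk:]

```python
word = "bananas"
for letter in "ban":
    print(letter, word.count(letter))  # b 1, a 3, n 2
```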
Now, you might think this all sounds like harmless fun: chatbots that can tell you doggerel about dancing dogs, or AI image generators that can make you funny memetic pictures of the Pope in a white puffer jacket. But we're soon going to be in a world in which everything you see, read, and hear could potentially be false. What sort of world is that going to be? I mean, the good news is I imagine social media is going to be just so full of this stuff that we finally realise that social media is not the place to find news.
It's the place to be entertained. And you can't believe anything that you read there. But the problem is that this is going to wash over to everything else. It's going to wash over to the whole of the Internet, to all of the media, to all of the discourse in all of our lives. And that tsunami is going to wash right through our society.
Now, I have good news and bad news.
The good news is that there's some technology coming along to help fix that: digital watermarks. When you go to watch or see or listen to any content, there'll be a little digital watermark to verify the veracity of what you've seen, and those digital watermarks will record any edits that have been done to that content. So, when you see a photograph or a video or an audio clip, there'll be a little checkmark. This is going to be embedded into the fabric of the Internet, and in fact built into the hardware of your devices, so people are going to have great difficulty tampering with it.
In fact, we've done this before. When industry realised that we were going to use the Internet for banking and commerce, it was clear we had to be able to trust the websites that we go to. And so, they started building a trust infrastructure of digital certificates, so that when you go to your bank's website, it is your bank's website and not someone trying to spoof you.
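[Editor's note: as a concrete illustration of that trust infrastructure, here is a short Python sketch that inspects the certificate a website presents. The hostname is a placeholder, not a site from the talk; the verification itself is done by the standard library against the system's trusted certificate authorities.]

```python
import socket
import ssl

hostname = "example.com"  # placeholder host, for illustration only

# The default context verifies the certificate chain against the
# system's trusted certificate authorities and checks the hostname.
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        # If the site were spoofed, the handshake above would have
        # raised ssl.SSLCertVerificationError before we got here.
        cert = tls.getpeercert()
        print("Issued to:", dict(pair[0] for pair in cert["subject"]))
        print("Issued by:", dict(pair[0] for pair in cert["issuer"]))
        print("Valid until:", cert["notAfter"])
```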
And so we're going to start, and indeed industry has already started, to build that sort of digital watermark infrastructure for us, so that we can actually trust the content, the text, the video, the audio that we're seeing online. In fact, I have other good news for you. We've finally found the perfect application for the blockchain. It's not crypto scams, it's digital watermarks. You want a distributed immutable ledger on which to record the provenance of the data that you're looking at, and that is the beautiful idea in the blockchain.
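[Editor's note: here is a toy sketch of that idea, assuming a single in-memory list rather than a real distributed ledger. Each record stores a hash of the content plus the hash of the previous record, so any later tampering breaks the chain. Real provenance efforts, such as the C2PA content-credentials standard, are far richer than this.]

```python
import hashlib
import json
import time

ledger = []  # toy stand-in for a distributed immutable ledger

def record_provenance(content: bytes, note: str) -> dict:
    # Chain each record to the previous one via its hash.
    prev_hash = ledger[-1]["record_hash"] if ledger else "0" * 64
    entry = {
        "content_hash": hashlib.sha256(content).hexdigest(),
        "note": note,  # e.g. "captured", "cropped", "colour-corrected"
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    entry["record_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

record_provenance(b"original photo bytes", "captured")
record_provenance(b"edited photo bytes", "colour-corrected")

# Verification: recompute every hash; editing an earlier record
# changes its record_hash and breaks every link after it.
for i, entry in enumerate(ledger):
    body = {k: v for k, v in entry.items() if k != "record_hash"}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    assert digest == entry["record_hash"]
    if i > 0:
        assert entry["prev_hash"] == ledger[i - 1]["record_hash"]
print("ledger intact:", len(ledger), "records")
```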
That's what the blockchain is: a distributed immutable ledger on which to record transactions. So that's the good news. Unfortunately, there's bad news.
It's going to take a while for us to build this infrastructure. I mean, for a start, I'm afraid to tell you, you've all got to go out and buy new mobile phones that have this built into the hardware of the device. And so, we're going to be in a very dangerous world. You're in the right place, this is the Festival of Dangerous Ideas. For the next decade or so, it's going to be a world in which truth is very scarce. And unfortunately, it's already a world in which truth is pretty scarce, a pretty contested idea.
Now, at this point in a talk, it's customary to say something reassuring.
Laughter
I prefer a jolly song.
Laughter
Play it, please.
Song audio plays
Robot 1: It is the distant future, the year 2000.
Robot 2: We are robots.
Robot 1: The world is quite different ever since the robotic uprising of the late 90s. There is no more unhappiness.
Robot 2: Affirmative.
Robot 1: We no longer say yes, instead we say affirmative.
Robot 2: Yes, affirm affirmative.
Robot 1: Unless we know the robot really well.
Robot 2: There is no more unethical treatment of the elephants.
Robot 1: Well, there's no more elephants, so.
Robot 2: Ah.
Robot 1: But still, it's good. There's only one kind of dance, the robot.
Robot 2: Oh, and the robo.
Robot 1: Oh, and the robo. Two kinds of dances.
Robot 2: But there are no more humans.
Chorus: Finally, robotic beings rule the world. The humans are dead. The humans are dead. We used poisonous gases. And we poisoned their asses. The humans are dead.
Robot 1: The humans are dead.
Chorus: The humans are dead.
Robot 1: They look like they're dead.
Chorus: It had to be done.
Robot 1: I'll just confirm that they're dead.
Chorus: So that we could have fun.
Robot 1: Affirmative. I poked one. It was dead.
Applause
Toby Walsh: Thank you.
Centre for Ideas: Thank you for listening. This event is presented by the Festival of Dangerous Ideas and supported by UNSW Sydney. For more information, visit unswcentreforideas.com and don't forget to subscribe wherever you get your podcasts.
Toby Walsh
Toby Walsh is Chief Scientist of UNSW.AI, UNSW Sydney's AI Institute. He is a strong advocate for limits to ensure AI is used to improve our lives, having spoken at the UN and to heads of state, parliamentary bodies, company boards and many others on this topic. This advocacy has led to him being "banned indefinitely" from Russia. He is a Fellow of the Australian Academy of Science and was named on the international Who's Who in AI list of influencers. He has written four books on AI for a general audience, the most recent of which is Faking It! Artificial Intelligence in a Human World.