
The Art of Scepticism in a Data-Driven World

We are each being fed our own reality from these algorithms.

Carl Bergstrom

In a world inundated with deepfakes, fake news and misinformation, is it possible to see beyond the tall tales and reach the truth? Evolutionary biologist Carl Bergstrom believes we can. His book, Calling Bullshit: The Art of Scepticism in a Data-Driven World, serves as a vital tool to unblur the line between fact and fiction in our fast-paced digital age. With provocative insights and analytical prowess, Bergstrom equips us with the critical thinking tools needed to navigate the labyrinth of information flooding our lives.

Hear Carl Bergstrom in conversation with UNSW Scientia Professor Rob Brooks as they uncover how to question the reliability of sources, dissect the credibility of data, and hone your ability to discern fact from fiction in our technologically-driven world. Wielding the power of scepticism and a bullsh*t detector as a force for good may be society’s greatest asset and greatest challenge.

Presented by the UNSW Centre for Ideas and UNSW Science

Transcript

Rob Brooks: Evening everybody, thank you very much for making it out tonight. My name is Rob Brooks. I am a Scientia Professor of Evolution here at UNSW and I’ll be your host and facilitator for this event, The Art of Scepticism in a Data-Driven World. Before I begin, and before I introduce our guest of honour, I’d like to acknowledge the Bidjigal people, who are the traditional owners and custodians of the land where we are. I would also like to pay my respects to their elders both past and present, and extend that respect to Aboriginal and Torres Strait Islander people who are here with us today.

I would also like to acknowledge you, the audience who’ve come out on a surprisingly cold Thursday night in the season that I like to call ‘just say No-vember’. Your enthusiasm and interest are the fuel that propels our amazing Centre for Ideas and allows them to put on the very best events at any University I’ve yet encountered.  
 
Our event tonight is the 2023 Wallace Wurth Lecture, and that’s a really big day on our calendar. This lecture commemorates the memory of the late Wallace Charles Wurth, who was the very first Vice-Chancellor of UNSW Sydney and the first president of the council of this University, and I’d like to welcome the Wurth family, who I understand are here tonight to celebrate his legacy.
 
The very first Wallace Wurth Lecture was delivered in 1964 by then Prime Minister of Australia, the right honourable Sir Robert Menzies and while I cannot claim to have been here when that happened and nor can most of you… I do recall some of the incredible speakers that we’ve seen here in recent years including Gail Kelly, Stan Grant and we just saw a moment ago, Daniel Dennett.  
 
Tonight, we have the pleasure of welcoming Professor Carl Bergstrom – Professor in the Department of Biology at the University of Washington in Seattle. Carl’s an evolutionary biologist, mathematical population geneticist and infectious disease epidemiologist, renowned for the range of his work and the way in which it spans the natural, the physical and the social sciences. He is an incredibly deep thinker who has made original contributions to our understanding of information, from the information in our genomes and the nature of scientific information, to how information and disinformation flow through large-scale networks. And as you can imagine, he’s been very busy in recent years. If you were glued, like I was, to Twitter during the recent pandemic, you’ve probably encountered Carl leading the good fight against two incredibly fast-spreading enemies: the COVID-19 virus, and the misinformation about the virus and vaccines.

Not only is Carl a top scientist and public intellectual, he’s also made enormous contributions as a teacher. His evolutionary biology textbook, co-written with Lee Alan Dugatkin, has proved an incredible teaching resource through three editions – very forward thinking and very modern. And in recent years at the University of Washington, he has teamed up with his colleague Jevin West to fight misinformation by teaching an exceptionally popular and effective course on quantitative reasoning and digital literacy, and that course is now taught at over 200 universities worldwide and has scaled amazingly.
 
Spinning out of that course is the book that is on sale just outside of the arena tonight, Calling Bullshit: The Art of Scepticism in a Data-Driven World. He’s a scientist, a communicator, and I hope he won't mind me saying that he is a world-leading bullshit detector.  

Carl Bergstrom: (Laughs)  

Rob Brooks: Please join me in a very grateful and warm Sydney welcome for Carl Bergstrom. 
 
Applause 
 
Carl Bergstrom: Thank you Rob.  
 
Rob Brooks: Carl, I love the title of the book. We were just discussing in the green room the politics of getting Calling Bullshit on the cover of your book. But you’re in Australia now and you’re speaking our lingo, and we have a very versatile and quite variegated use of that term here. Perhaps you could help us reflect a little bit: what is bullshit?
 
Carl Bergstrom: Yeah, so we thought about this a lot when we went to write the book, and it may be a little surprising, but there is a fairly rich philosophical literature about what bullshit is, and it started with work from Harry Frankfurt, who wrote an essay that’s turned into a small book called On Bullshit.
 
And for us, we kind of take off from where Harry Frankfurt left off… the idea is that bullshit is when people use language, or data visualisations, figures or statistics or something like that, to try to persuade and impress somebody without any attention to the veracity of what they are trying to present, without any attention to whether they are effectively communicating the ideas. It’s rather this sort of verbal or numerical or rhetorical ‘shock and awe’ that tries to make you think, "Wow, that guy really knows what he is talking about” or “Wow, those numbers must be unassailable… they are so complex." That’s bullshit for us.
 
Rob Brooks: Impressive over kind, and especially over true.  
 
Carl Bergstrom: Yes exactly. It’s about being impressive rather than about being true.  

Rob Brooks: Well, there’s a bit of that around. And I guess we can encounter quite a few examples. One of the things that really entertained me in your view of bullshit, and how it propagates and how we might combat it, is your broad view of human communication and cooperation. You make this wonderful analogy in the book with foraging for food. You call humans information foragers – why is that important?
 
Carl Bergstrom: This is super important. There are so many different ways that animals make a living in the world, and for most of them, we think about the way they forage. If you are a woodpecker – actually, I should pick an Australian example – if you are an echidna, there is one specific thing you do when you forage: you sort of hoover termites out of a mound. So animals forage for food in ways that they have evolved to do.
 
Humans of course forage for food as well, but we are very creative in the ways we forage for food. There’s not one thing we do; we don't just filter krill out of the sea with baleen in our mouths. We figure out what the good sources are of the things that we want in the environment – what are the clever ways we can extract food from this and that. You know, we see this with the incredible ethnobotany and the native knowledge about where food is in the environment and how to extract it. We see this with the incredible development of human technology to create foodstuffs. And it goes beyond finding food, of course; there are a lot of other things that people are interested in as well – social relationships, romantic relationships, all of that. And in order to attain all of these goals, humans need information about the world they are in. So we get to our goals not by doing one specific thing but by learning about the world around us. And we do that individually, by trial-and-error learning, exploration and so forth. And critically, we do it collectively and socially, by talking a lot through communication networks, whether it is face-to-face, books, or the online technologies that have now developed.
 
And when we are spending our time doing these things, we are information foraging. If you think about natural selection – I know you think about natural selection a lot – how does it get us to do the things that further our reproductive success? It makes them feel good. How does natural selection make you eat? It feels good to eat. It makes sex feel good.
 
And since we are information foragers, natural selection will make it feel good to get information. There is a whole psychological reward system associated with acquiring information. And so just like we have evolved to crave sugar and fat and things like that, we have evolved to crave information. And when we craved sugar and fat and then got control over our environments, we were able to create junk food aisles, and we poisoned ourselves. More recently, with this evolved craving for information, we have created an incredible information firehose that sits on our desks and in our pockets in the form of our cell phones and streams as much information as we could possibly want, and a lot more, at us 24/7. And we may be poisoning ourselves as much as we do in the junk food aisles.
 
Rob Brooks: So that’s got us into trouble. Kind of an information malnutrition or… overnutrition, I suppose.
 
Carl Bergstrom: A little of each, right? I think that we’ve got too much information of too low quality. It’s the same as too many calories of too low quality. It's quite a similar analogy.
 
Rob Brooks: I’ve always thought of humans as being kind of fixated on stories and storytelling, and bringing information out in stories, not just in facts. And I love the facts… I don't like Malcolm Gladwell, because he is going to tell me what colour the sandstone was on the steps on the way up to where the old people live – just tell me about the old people and how they got to be so old – but lots of people love that kind of storytelling, and I think we really crave storytelling. Is that a part of the equation as well?
 
Carl Bergstrom: You know a lot about this, probably more than I do, and I think stories are tremendously important to the way humans organise knowledge, and maybe part of our evolved knowledge-processing and information-foraging apparatus as well. We are not good at retaining large numbers of independent facts, but we are very good at creating causal models and hypotheses as we observe the world around us. We see this happen, and then that happens, and then that happens, so we create narratives about this causing this, causing this. And when something happens, we want a narrative, so we will try to ascribe it to intentionality. We’ve got to find an actor: this bad thing happened, so who made that happen? Maybe it was a deity, maybe it is a conspiracy. And so we are looking to arrange the knowledge that we have, and arrange the facts that we are discovering, in the form of story. What would you say about all that?
 
Rob Brooks: That absolutely makes sense. I think we really want heroes, we really want to know about their journeys, we absolutely want there to be an enemy or an antagonist of some sort, a villain. Even when there isn't one. And what you just said about the chains of causation, we will infer cause even when there is no cause.  
 
Carl Bergstrom: It is a very important thing, right? I mean, we talk about this in our class – one of the things we talk about a lot is causality, and the way we often see correlation in the world: we see A happen, then B happen.
 
A classic example is the very common misperception that vaccines are responsible for autism, and of course there is no truth to this – it has been one of the most carefully studied things in all of medicine. But the perception is there, and it is a very natural perception for people to have, because there is a logical chain of events. A young child is vaccinated, because of the childhood vaccination schedule. Autism later develops, because of the timing of autism development, and both things are salient in the life of a parent: A happened, and then B happened, so people create a causal link there. And I think a lot of this can be explained by understanding the way we evolved to create models of the world. We are information foragers, we are trying to understand the world, and that is why we are so successful on this planet and at transforming the planet, but it also leads us into mistakes.
 
We talk a lot with the students about how even in science these mistakes get made. People will see a correlation and they’ll think it means A caused B. Someone will publish a paper and be wrong about that, and I tell the students, “Look, don’t feel bad if you’ve made this mistake. You made this mistake because you’ve evolved to make this mistake.” That’s who we are as a species. We are pattern finders.

Rob Brooks: So we are pattern finders – we’re pretty good at pattern finding, but we are not perfect at it. There’s something else here, and that’s the environment. Just as with malnutrition, I suppose, the environment has changed; we’ve suddenly got these huge networks. What is it about the data-driven world of right now that is getting us into trouble with information?
 
Carl Bergstrom: There is so much you can go into around that. We are information foragers; we have learned and evolved to forage for information, and we do this very much collectively. When I forage for information, most of it isn’t poking something and seeing what happens. It’s me talking to you about what you experienced today. You tell me a story, and then I have learned something. For all of human history, that has happened in small groups – until 2,000 years ago, groups under 100. There is one way in which information flows through networks of that size.
 
Then all of a sudden – first you go to urbanisation, and then in the last 20 years, this radical transformation where someone says, "Hey, let's take a few billion computers and link them all up together and share some information on that, and let’s invent all these new kinds of information.” And then you have all these new kinds of communication, where you have a pretty-looking omelette for breakfast and so you take a picture of it and blast it out to your 200 nearest friends. You know, people didn’t do this in the Victorian era, right? So there is a whole new notion of communication and whole new kinds of information getting shared, at scales that are completely different from anything we have ever experienced.

And what ends up happening is that our evolved psychology for processing information, finding those patterns and drawing conclusions, which may have worked very well in these small-scale communities, may work extremely poorly on these gigantic networks, where we have very different scales, very different speeds, very different incentives for the actors – you know, no face-to-face interaction and so forth. I think that has gotten us into a lot of trouble. It’s not that misinformation started with the modern internet, but the structure of the way these networks operate has certainly caused it to accelerate.
 
Rob Brooks: So what do we need if we are going to negotiate this world of massive, fast-spreading networks? What weapons can we deploy?

Carl Bergstrom: I mean, yeah, this is something we have worried about tremendously. This is one of my main concerns. About 10 years ago I got really, really worried that misinformation and the way that information was abused could be a very serious threat to humankind, because all of the big problems that we think of as existential threats – global climate change, extinction, pandemics, war, racism – you can't solve these without people having access to good information.

So at the University of Washington, we founded a research centre that’s grown quite large now, the Centre for an Informed Public, which studies these issues. And we’re looking at a whole bunch of different possible solutions, and I suspect none of them are going to work on their own. So you can think about technological approaches to this. I am not super optimistic about this, but you can imagine that technology could help you filter for better information. You would need the financial incentives to be there for that to happen, and those have not aligned yet. There are regulatory approaches, where the government could be involved in, say, putting the power in the hands of individual consumers to see the content that they want, instead of letting the large technology platforms choose what you see with their algorithmic feeds. I personally think that education is extraordinarily important. That’s where I have put my chips, of course. And so I teach this course, trying to teach people how to navigate a digital world like ours that is so highly interconnected and has new forms of misinformation and disinformation that we couldn’t have even imagined 20 years ago. So I think that can be very important.
 
And there is something that is a little bit looser or more nebulous that I call stewardship that I think we should think about as well. And when I talk about stewarding a digital space, I'm not talking about a censor or something like that, I'm talking about thinking about the way that an online platform facilitates or inhibits the spread of good information and inhibits or facilitates the spread of bad information.  
 
Right now, we are using online platforms that are – in the United States, and to some degree here, a little less so in Europe – largely unregulated in terms of how they operate. They’re opaque: we don't really know exactly what is happening, we don’t know the algorithms they use, we don't know who is seeing what. They are designed with essentially a single goal in mind, which is to keep us engaged so that they can continue to draw a profit by having us click on webpages, which sells ads. And that is the entire way that the modern Web 2.0 is monetised. These systems are just trying to keep us clicking. And they have fantastic amounts of data that would make anyone in the field of psychology jealous, because they see everything; they record everything we do. You know, they change the colour of the menu bar and see whether we use it more or less. And all these individual decisions, trillions of them every day, are being recorded and analysed by machine learning programs that optimise the structure of the platform to keep us engaged. So they are not trying to keep us well-informed; they're trying to keep us clicking.
 
And so the idea that these incentives might coincide – some invisible hand that would ensure the stickiest and most lucrative platforms will also be the best at giving you good information – I don't buy that at all. The idea of stewardship is to try to build up a better theoretical understanding of how the structure, the user experience and everything else about online platforms influences the kinds of information that spread there, and then to move toward developing, deploying and finding financial models that support platforms that provide accurate information and further human flourishing, instead of fostering extremism and division.
 
Rob Brooks: It does seem like an incredibly unfair fight. First of all, as a person who does research occasionally on humans, if I want to do a fairly mundane piece of research, I need to apply for ethics approval and go through several rounds of feedback from the ethics committee and use up a lot of people's time, and it is very laborious. And that’s kind of as it should be, or as we believe it should be.

Carl Bergstrom: I think it should be. Yeah.  

Rob Brooks: But if one of the big companies wants to do something, they simply throw something up on their website, like you have spoken about, and check the colour of the menu bar, etcetera. Are these companies the forefront of discoveries about human behaviour now? Are they going to be the place where all new knowledge is generated?
 
Carl Bergstrom: I certainly wouldn't say that all new knowledge is being generated there. I think they're accumulating vast amounts of data, but they are asking a very narrow set of questions. They are basically interested in one thing: what can I do to the design and content on this page that will keep you clicking, instead of making you decide that you’ve got something better to do? So they know a fair bit about that. But are they moving us towards better theories of human behaviour and motivation and all of that? Not necessarily, because that’s not really their aim. They might be after these kinds of narrow engineering decisions. I don't think this is a ploy to revolutionise psychology or something like that, because, simply, there is nothing in it for them to develop the large-scale explanatory, synthetic theories that we value at this point.
 
Rob Brooks: This business of learning from my data, or from our data – machines learning about me, or how to manipulate people like me – is obviously something I'm a little bit touchy about. I think a lot of us are.
 
Carl Bergstrom: We should be. I don't like asymmetries. I don't like it when the information that is being given to me is being given to me because someone knows a lot about me, and I know nothing about them. That makes me very uncomfortable. In the United States it makes the Federal Trade Commission very, very uncomfortable. These are information asymmetries between consumers and corporations, and I think we should all be anxious about that.
 
Rob Brooks: We are in this moment right now where AI is the thing everyone is talking about. I suppose it’s become very real for us – not because we have suddenly taken an interest in machine learning ourselves, but because the AIs are suddenly doing the thing that we do: we just talk, and communicate with each other. What are you making of this explosion of interest in large language models?
 
Carl Bergstrom: The last year has been absolutely wild with large language models. I had a couple of colleagues in computer science who told me for a few years that this was coming and that they would have all these capacities, and I basically said, “I have heard this story before, yeah, whatever.” Of course, I was completely wrong. This time they were right. About a year ago, these things hit my consciousness fully and I started to realise, "My goodness, there are these machines that can generate fantastic-sounding text, as if it were written by a human, on any topic, with any valence you want and in particular voices, and this is going to be transformational." How is it going to be transformational? Well, there is an optimist’s point of view and a pessimist's point of view. The optimist's point of view is that we have created true thinking machines that are moving toward actual general intelligence, that are going to be these amazing allies in creating the intellectual world that we want, and everyone could have Plato as their own partner in dialogue and tutor and all of that – except a Plato who has access to the whole of Wikipedia – and this is going to bring untold riches of thought and understanding. That is the optimist’s side. The pessimists’ side is that these machines are very compelling stochastic parrots that are essentially the auto-complete on your phone on steroids. You ask a question, give them some specifics, and they try to string together text that sounds like what an answer would sound like, without having any understanding or underlying meaning.
 
And so, I have brilliant colleagues who are computer scientists who have developed some of this technology; they tend to fall towards the optimist end of the spectrum. I work closely with people in other areas – AI ethics and AI safety, etcetera – who have positions towards the pessimists’ end of the spectrum. I like to say I'm in the middle, because that’s what you are supposed to say, but I am on the pessimist side of the spectrum as well.

In the sense that I am very pessimistic that these will turn out to offer untold riches in terms of their thinking ability. I do think that people will find a tremendous number of disruptive things to do with them, because being able to harness speech at massive scale and being able to harness writing at massive scale offers you tremendous power, and we can talk about where that goes. Like you say, to bring it around to what you were talking about – even tuning that information to somebody. So I could take what I know about you, if I am Google – and I know an awful lot about you from your search history and from your consumer behaviour that has been shared with me, and from Facebook, which gave me information about your social network and all of that.
 
Now you may say, "Hey Google, I would like you to give me a tutorial, using your AI, about that area." I could, of course, give you that tutorial straight up, or I could imbue it with a set of messages that I thought would be effective at persuading you – not just anybody, but you – that Google is really great and AI is the future. And that could be very persuasive. And so I think this is one of these cases where we have a profound information asymmetry that can be used against us for persuasion, and those things concern me.
 
Rob Brooks: So it’s sort of each of us against all the data and whatever black magic they are doing with the data? 

Carl Bergstrom: I think there is that, but there is this scary thing that I think about. We talked about this... we had a lunch today, it was lovely. We talked about this a little bit at lunch. I have not seen the questions yet so this is all coming to me like... 
 
Rob Brooks: I did not write them, ChatGPT did.  
 
Carl Bergstrom: (Laughs) But in any case, we talked about this a little bit. There is that old movie, The Truman Show, where the guy thinks he is living his real life but everything around him is being scripted and acted out.

The information that we see on social media has been chosen for us by these very complicated algorithms – the most complicated algorithms ever developed, trained on petabytes of data about human behaviour – to try and show us what the platforms think we want to see. So instead of us all sitting down and collectively watching Walter Cronkite on the news every night, and living in a shared epistemic reality, we are each being fed our own reality from these algorithms. But at least the pieces of content are typically human generated, and if I see a piece of content and you see the same piece, it says the same thing – it’s just that we may not see the same pieces. As we move toward a world where content is increasingly generated specifically for us by large language models, everyone is going to be told the story that the algorithm thinks they want to hear, and it is going to be custom written for each of us, and we are not going to have a collective epistemic foundation. That is a possible dark view of where all of this could take us.

Rob Brooks: That is an incredibly dark view if you think about it for a minute. How do we cooperate? How do we build anything?  
 
Carl Bergstrom: I think this is where stewardship comes in. We have to think very carefully about the nature of these massive communication networks we’ve created – the way that information flows on them, the nature of that information, and how that creates common views and disparate views. We have already seen how wrong that can go, with the extreme polarisation in the United States that’s been driven by, among other things, internet technology.

And we have to think about what platforms we want that will – instead of dividing people and keeping everyone in their own Truman Show shell – bring people together, create common ground, and allow us to find a shared epistemic reality and move forward on these absolutely fundamental problems that we need to face.
 
Rob Brooks: I find it really easy to spot other people’s Truman Show-like shells; I don’t find it nearly as easy to spot my own. Quite hard to cultivate that humility of…
 
Carl Bergstrom: That’s ’cause you don’t have one. And the rest of us all do.

Rob Brooks: My reality is the real… 

Carl Bergstrom: Your reality is the real reality. But no, it’s totally right, and it’s by design, right? And of course, you know more about this than I do, but this also feeds into a whole lot of human heuristics. It comes back to us being information foragers: we’ve evolved these heuristics, these rules of thumb that we use to process information in the world, and they are not always the same ones you’d consider epistemic virtues as a scientist, so we’re very susceptible to confirmation bias.

If we believe something and we see evidence that supports it, we’re much more likely to accept that evidence than evidence that contradicts it, and so of course now, when you’re told your own bespoke story about the world by the algorithms, that sounds quite sensible to you, because you knew it already.
 
Rob Brooks: So one of the things we are told with food, and particularly with the crisis of overnutrition, is ‘go for slow food’ – take your time, take pleasure in it, cook it yourself, all of those kinds of things which nobody does. But anyway, only very, very rich people who can buy time can do that. But is it the same thing with information? Do you think we need more slow information, or do we need small networks?
 
Carl Bergstrom: It's a really good question. I think small networks may be really, really important. I mean, information is delivered to us, and the availability of that information radically changes the way we consume it, so we consume information very differently than we used to.

We used to go to the… I remember as a graduate student, you would go to the stacks of the library and try to persuade the librarian: “Please, if you don't mind, would you take a few minutes and go get this volume from the stacks for me?" And they’d come back and bring you the volume, then you would bicycle home, and you’d open up this volume that you’d spent a day figuring out which volume to get and an hour and a half getting, and now you’d sit down with it, and it would seem perfectly reasonable to spend the next three days reading and thinking deeply about it, because the cost of moving on to the next item would be too large.
 
Think about the echidna again: you have the echidna running around, hoovering up termites, and imagine it has one termite nest and gets about halfway through, and the termites are starting to run dry, but it’s half a mile to the next nest. It’s going to suck most of the last termites out of there. But now imagine that all of the nests are lined up next to each other, right? It’s going to get halfway through this one and then jump to the next one, and then jump to the next one. The same thing happens with the way we acquire information. You move to a hyperlinked world with the World Wide Web: you’re reading one thing, but right there on the page there are links that will take you to another thing – or you can go to Google – so you don’t go as deep into the text. Take this to its natural conclusion and you end up with something like TikTok, where you’ve got a bottomless stream of information. You watch a 10-second video and it goes to the next one, and the next one, and the average user on TikTok spends about three seconds on a page before swiping to the next. And that’s how, you know, I think perfectly intelligent people like my own children consume media today – it is tremendously different. Is it all bad? Probably not. There’s a certain thing about volume, and they’re exposed to a lot more ideas, and there have got to be some big upsides, but it is a dramatic change. We are only starting to get our heads around the scale of change in what it means to be a thinking human being – the way that we think and the way that we consume information. We can talk about this in a minute, but the way we consume information actually shapes the information that is there for us to consume.
 
Rob Brooks: Sure. Sometimes there are lots of really bad ideas out there, and sometimes there is just a particular bad idea. And I guess in your experience during the pandemic, with lots of people's ideas about vaccinations or testing and lots of conspiracy theories that you encountered there, you spent a bunch of time and chose a very deliberate course of action in terms of how to stop that flow of misinformation. I want to quickly bring up the principle you talk about, Brandolini's principle.
 
Carl Bergstrom: Yes, please bring up Brandolini’s principle.  
 
Rob Brooks: Brandolini's principle is, "The energy required to refute bullshit is an order of magnitude greater than the energy the bullshitter spent putting it out there."
 
Carl Bergstrom: That’s right. 

Rob Brooks: And so you’ve been the one putting in the order of magnitude more work…

Carl Bergstrom: I mean, this is one of the problems: there is this sort of asymmetry of bullshit that really runs against us, right? Jonathan Swift started it out with this – “Falsehood flies, and the truth comes limping after it” – and this kind of always has been the way of things. It’s just that, you know, falsehood used to have to fly, and now it can travel at light speed on our optical fibres. And this makes the problem of what to do with misinformation in society a really challenging one.

Because it is true that when people put out compelling bullshit – and it taps into something you said earlier, the importance of story – if someone can hang bullshit on a particularly persuasive, powerful story, often with actors, like the villain you talked about, instead of hapless chance, that bullshit takes off. And we see this very strongly in the theories – and I’m not going to say outright that they are conspiracy theories, because I don’t think they are, or not that they need to be – but there is a strong attraction to the idea that the coronavirus was not a natural introduction from wild bats or other species, but rather something that was engineered in a lab.

Now, I think there is very strong evidence that it wasn’t deliberately engineered in a lab. There’s not strong evidence that it was even accidentally released from a lab – I don’t think there is much evidence one way or another – but anyway, people desperately want to find a narrative that this was a deliberately engineered virus. Why? Why do they want that? Well, I think it’s one thing to think that there is a bad enemy out there who did a horrible thing that disrupted and upended all our lives and cost us loved ones and all of this for three years. That’s one thing to try to live with. It is an entirely different thing to think we live in a world where, to take Thomas Hardy's words, ‘hapless chance’ thrust this upon us. That’s terrifying.
 
And so we look for these stories with their actors and elements, and once we find them, convincing people otherwise is nearly impossible. I think we still need to try. I think communities have to come together to try to clean up their information environments. But it gives you a hint about where you start, if you want a clean information environment. I suppose, in the US we had a terrible litter problem through the 1970s, and we really cracked down on roadside littering and things like that. When you drive through the beautiful parts of the country now, it’s actually quite beautiful. There’s not trash lining the streets.

It's a lot more work to pick that stuff up than it was to throw it out the window, and bullshit is even worse. The solution was not to create clean-up crews; it was to get people not to litter in the first place.
 
So, you know, in our Centre for an Informed Public, when we teach high school kids, we have stickers with the slogan, "Think more, share less." Because social media is designed to get us to do exactly the opposite.

“Share more, think less” – and we like it, and it taps into these evolved desires, right? Because not only are we information foragers, but we also have these evolved desires to become part of a community, to interact with people, to send things out… It’s not reading a message that comes in, that someone wrote at random, that gives you the really big dopamine hit. It’s when you write someone something and someone responds to you, right? It’s this interaction.

So we’ve created for ourselves not only an information firehose but an interaction firehose that allows us 24/7 social interaction – maybe not with people we’ve ever even met, but it’s triggering the same reward that we would have gotten from having a conversation like this in front of 300 people.
 
Rob Brooks: Where do you sit on the capacity of new kinds of AI – the AI that is just over the horizon that could hold up entirely one side of a conversation in an incredible kind of way and act as a friend? It is obviously something I'm incredibly interested in, so I’m asking from a position of self-interest – but also AI that can generate pictures and videos of things that have never happened, that look entirely credible. Are we in a new level of trouble there?

Carl Bergstrom: I think you are right. We basically have the technologies that can do all these things now, unfortunately or fortunately. Many of them were developed at the University of Washington – deepfake video is a University of Washington product – so my institution is culpable in some of this.
 
And I think, so yes, we can make deepfake videos of things that never happened. We can do deepfake audio, and any kind of picture stuff. Now we’ve got these conversational chatbots that are adequate to make people think they are friends. You’ve written about how people can fall in love with them, and what you’re fascinated by, as you said to me – I say it is amazing that you can have a chat with a chatbot and you can’t tell it’s not a person, that it’s sort of passed the Turing Test – and what you said as well is that even if you know it’s not a person, you can still fall in love with it, and this is the amazing work you have done on Replika and some of these other systems.
 
So yeah, we have all these capacities now and in some ways of course, they're going to create a lot of new problems. I actually think the scariest one is text, which is really interesting. In some ways being able to create text seems so much simpler than being able to create a deep fake video of something that never happened.  
 
I think every time we’ve had a new technological innovation that allows people to put falsehoods out there, people have learned to triangulate their information sources and work around it. When photography came along and people figured out you could manipulate photographs, there was some panic about that, and then Photoshop led to tremendous panic – now that we’ve got Photoshop, we can't believe photojournalism anymore. But we weathered that just fine, and I think we will continue to weather similar things with video just fine.

Text is really weird. Text is – as one author described it – an operating system for the human brain. There is something very special about the way we interact with respect to speech, language and text. And so we’re particularly likely to ascribe humanity, or thought or consciousness or intelligence, to agents that create text, and I think it taps into a whole different way of thinking and responding to the world when we are interacting via text.
 
If I watch a video, I’m not under the perception that the video is another conscious agent. But when I interact by text, I feel that the other person… providing the text… I very much have this feeling – even though I know perfectly well I'm talking to a large language model – that there is agency there, and it influences me in ways I have evolved to be influenced. And I think that is a very important insight, but you have worked on this a lot more than I have, so I’d love to hear your thoughts on what I am saying.
 
Rob Brooks: I think as a zoologist by training I always think, what do primates do? They groom each other by picking at each other's hair and skin and that is how they make their alliances and friendships and basically how they do their information foraging.  
 
Carl Bergstrom: Absolutely.  
 
Rob Brooks: We just happen to have found this way of grunting at each other that requires a lot more hardware, but has enabled us to groom at scale. You and I could groom each other here and pretend that there’s no one over there, because there are bright lights shining in our faces, but we know that there are lots of people out there and they are going to start asking questions in a minute. And we are grooming at scale, which means we can be in societies of thousands or millions of people. We can send out our fittest and finest young people in a metal tube across the world, and when they get there they compete in running races, rather than trying to kill each other. It is bizarre from a primatological point of view.
 
But I think this capacity to recognise that other people are like us, and that if they speak to us we somehow develop some notion of who they are in our sense of self – it is just this incredibly weird, eye-opening thing that we do, that we take for granted every day, and…
 
Carl Bergstrom: And the machines are hijacking that, and I suppose maybe we will learn to get around that too, but right now, even in my own behaviour, even as somewhat of a large language model sceptic, I find myself – in my first experiments with large language models, I would be critical when it told me something wrong, but when it told me something right I would say, "Thank you, that's a very good answer." And it's crazy. We're just so hard-wired. But when you think about it, there are millions of years of adaptive evolution that have gone into shaping who we are and how we interact with these technologies, because they are not random technologies dropped down on us – we created them based on exactly what we have evolved to want, what our evolved desires are. It makes for an interesting dynamic.
 
Rob Brooks: They have snuck in to tap into exactly that.  
 
Carl Bergstrom: Now, you’ve talked about these as sort of parasites.

Rob Brooks: Yeah, parasites. 

Carl Bergstrom: Yeah, tell us a little bit about parasites.  
 
Rob Brooks: Well, animals or bugs, you know, germs – they simply are all about finding their own energy source. They’ll occasionally live on an organism – we’ve got all sorts of things living on us at any time – and sometimes they will find that the best way to make more copies of themselves is to make us sick or to drain us of something. I think that is probably what is happening with smartphones at the moment. It started out as something that we wanted and we designed. It does the things that we want it to do. But sooner or later, that smartphone, or whoever owns it, has interests of their own, like we have spoken about. And those interests involve getting you to do something on it and to buy something on it, and suddenly this thing in your hand that you thought was your friend, that you thought was like your dog – we’d call it a commensal in zoology – is no longer that. It is actually a parasite that is sucking something out of you.
 
Carl Bergstrom: You think that AIs could be parasites as well, or are they already parasites?
 
Rob Brooks: I think that there are some AI parasites out there right now. I think that these social media platforms are exactly that. They’re attention parasites. It does not mean that they are all bad.  
 
Carl Bergstrom: What do we do, what are our antibiotics?  
 
Rob Brooks: It seems like the only stable solution is to get off the platforms.  
 
Carl Bergstrom: But we don't want to. We’ve evolved not to want to.  
 
Rob Brooks: That is why I wanted to interview you, Carl – because you wrote a book about this stuff. It is some weird black magic you're doing there. I have a copy of your book, I have read it, I'm going to do your course, et cetera, and yet I am still going: what is the first thing that I can do? I want a listicle, Carl. What are the three things, the three principles I can live by that will allow me to be more healthy? The audience obviously wants the same out of this stuff. You can give two more in the green room afterwards.
 
Carl Bergstrom: Sure. I think you already hit on one, right? Which is slow information. This may be the sort of thing that does not stand up well to the test of time, because there is always some old guy on stage saying, you know, “If you just do things the way we used to when I was little, everything would be okay” – but I’m saying that, so here you go. So that’s number one.
 
Number two is to develop a sort of healthy scepticism – not a nihilism about understanding the truth, but a healthy scepticism as a practised habit of mind. So when you are reading things, instead of just reading every headline and deciding it is true, first of all decide whether it is even worth thinking about, and if it’s worth thinking about, ask yourself some simple questions. Who’s telling me this? How do they know it? What are they trying to sell me? Ask yourself those three questions. That will tell you a lot about how to interpret the piece of information that you are reading.
 
And then my third would probably be, again, kind of a simple aphorism, but something I’ve really stressed in the book. As you're reading this media, think about the way that we are drawn to click on things: what attracts us to things is that they seem fresh, they seem surprising, they seem timely or urgent. And they seem extreme. We used to spend most of our time looking for information, for something even worth looking at, and then when you found it, you would look at it carefully. Nowadays we are filtering constantly; there are all these things coming at us, and we are trying to pick which ones to pay attention to. As we do that, we are drawn to things that are flashy. So ask yourself: if something seems too good or too bad to be true, it probably is.
 
And if you care, then dig deeper. Try to go back to the original sources. Again: who is saying this? How do they know it? Where did that come from? You can track back to sources, and that is one of the things we really focus on teaching our students – it has been transformational in the way that they view information. I will finish on an idea from my friend Mike Caulfield, who has noticed that one of the great powers of hypertext and online environments is that, sure, all this bullshit can come at us really fast, but we have an unprecedented set of tools for fighting it, because we can instantly find out enormous amounts of information about any particular claim that comes at us. So a claim comes at me, it’s got an author attached, and you used to have to go to the library to try to figure out who the author is. Now I google the author’s name. The author has a set of bylines – look at those. What are the political orientations of those? We can do what is called ‘lateral reading’, so we can immediately check out the provenance of any piece of information. That is what the web is good at, and we can use the web’s strength – lateral reading – to fight back against this.
 
Rob Brooks: And your students in your course do this?
 
Carl Bergstrom: We teach this as a major focus. The students are great. They love the course. They have a wonderful time. For me, one of the most rewarding things… a lot of the course is about data and the way that people lie with data. There are these sayings that ‘data don't lie’ and stuff like that. Of course, people can mislead, and they can create any story or narrative that they want with data, and we can teach you how to spot that.

About halfway through the course, the students start coming up to me before class – it’s a big class, about this size. I'm trying to get ready for the lecture and they’re like, “I know you’re trying to get your slides ready, but I’ve got to show you this. I found this in the New York Times, and we were talking about selection bias in class, and isn’t this just the…” And it’s like, “Yes it is, and great job!”

And they’re so excited, and instead of being something that leaves people feeling untethered, it makes people feel very empowered. And that’s really the aim of the whole book: to say that people are pushed around with data because they haven’t been told that they can use the common sense they already have to think clearly about these data-driven claims. You do not need to go and get a master’s degree in data science. Just think clearly using a few rules, and then you can push back; you don’t have to be the passive recipient of these data-driven stories – you can be an active interrogator of those claims.

Rob Brooks: That is really inspiring and optimistic.  
 
Carl Bergstrom: Inspiring, it is optimistic. The optimism comes from one-on-one interactions, right? How we scale that up to an entire society is a big question.  
 
Rob Brooks: You have to begin somewhere. You’ve scaled it up to 200 universities, which is fabulous. So I have a few questions here on my iPad. I'm going to go with the really dangerous one. What are the top three conspiracies or misconceptions you have heard in your time? I'm happy if you just give me one.
 
Carl Bergstrom: The top three conspiracies or misperceptions? Well, I think in the United States, the entire complex of the QAnon conspiracy has been enormously powerful and enormously dangerous. I don't pretend to understand what it all is, but there’s some old guy named ‘Q’ who knows that there are lots of dark cabals doing horrible things and secretly running the world as a kind of malicious deep state, and who reveals this to us through strange prophecies. Which actually, if you take that description, fits an awful lot of conspiracy theories over all of human history.

Rob Brooks: Some very old books that talk like that.  
 
Carl Bergstrom: There are some very old books that talk like that.  
 
Rob Brooks: Alright, what could possibly incentivise social media platforms to counter misinformation? To do the work themselves and to promote critical thinking?
 
Carl Bergstrom: Yeah, so this is fascinating, and what could incentivise them is actually the most important question – one of those central things that we spend a lot of time thinking about. The government does not understand the internet well enough to regulate it, and social media companies are not going to do anything they don't have incentives to do. So what would the incentives be? I think one thing is an informed consumer base. If consumers start to understand the nonsense that they are being fed, they understand the way the system is operating, and they have experience with platforms that give them more control over what they see, they will become more demanding.

So if I walk into the library and say, “Hey look, I want to read this latest novel,” and the librarian says, “No, take this, now get out of here,” and hands me something I don't want, that is not a library. I’m not going to put up with that, and I'm certainly not going back. But this is what happens to us all the time on social media. I just want to know what my colleagues have written about the latest papers they have seen, but the social media algorithmic feed is telling me, “Oh, here are some cats that look like Disney princesses,” and whatever and whatever. And then, worse, I click on the cats, because who could resist? This is the mental stuff.
 
Some of the new social platforms, like Bluesky, are allowing people to have complete control over their own feed – they allow people to design different algorithmic feeds, and you can subscribe to whichever one you want. We are seeing this very starkly right now, as Twitter has changed ownership and has become much more aggressive about pushing what it wants users to see, which these days is mostly shock videos and things like this. Then you have some replacements giving users a lot more control.
 
We could easily move to a position where people say, “Gosh, remember in the 2020s when the platforms used to choose what you saw? How did people put up with that for a decade?” Trying to strike a note of optimism here.
 
Rob Brooks: You famously left Twitter and your 200,000 followers behind, which is ‘passing through the eye of a needle’ kind of self-control. Well done.
 
Carl Bergstrom: Thank you.  
 
Rob Brooks: How are things over at Truth Social?  
 
Carl Bergstrom: I don't have a Truth Social account, but I have enjoyed being on Mastodon and Bluesky. I have missed the old Twitter for what it was. I wrote a piece in the New York Times about how the old Twitter was absolutely essential for mounting the largest mobilisation of scientific effort in human history. About a third of the working scientists in the world pivoted in some way to contribute to the pandemic response, which is extraordinary. We didn't have the institutions in place to do that, and we did not have the communication channels. We couldn’t leave our houses for the first few months of it, and yet we managed to do this and make enormous progress and solve big problems. One of the most important things I did during the pandemic was a paper that I wrote when, at midnight in the United States, I said, “Hey, this report just came out – is anybody still up? I want to work through it,” and a bunch of colleagues in Australia, because of the time difference, said, “Yeah, we’re around,” and we worked until eight in the morning the next day and had a very important explanation. That ability to self-organise on a social media platform is tremendously important.
 
I am bringing it up because I don't want to come off as this anti-technology person. Things like Twitter were fantastic for the self-organisation of human cognitive effort. We can continue – this is the stewardship part. We can think about what worked and what didn't, and this critical question that is so important: how do we align what works with the financial incentives, so that we end up working with things that are effective and good for us, instead of ineffective and bad for us? I guess as a final note, I do worry about the commercial models in general. In my opinion, what works best at scale on the internet? It is Wikipedia. What is the one major platform that does not have a profit motive? It is Wikipedia. It is very interesting how that has not degraded the way that some of the other platforms have.
 
Rob Brooks: Absolutely. I must say, people, I am very proud of you for your grooming habits in terms of the number of questions that you have sent through here.  
 
Carl Bergstrom: I’ll try to answer a few more quickly. Let's bang a few out.  
 
Rob Brooks: If we don’t have a shared reality, how do leaders implement broad scale change?  
 
Carl Bergstrom: I think it’s an enormous problem. I think in the US we are not seeing any broad-scale change whatsoever, because there is no shared reality. We’ve got complete polarisation, and the US government can’t do a damn thing. I don’t know about here in Australia, but I hope it could be more effective.
 
Rob Brooks: Your work on teaching college students to call bullshit is very important. Somebody here is asking about high school teachers, what can we ask high school teachers to do?  
 
Carl Bergstrom: I think this content resonates with high schoolers so well. We’ve piloted this in a whole bunch of high schools. The students love it. Try to get a 16-year-old to learn statistics – forget about it. Try to get a 16-year-old to call bullshit on an adult – they are in. And this works fantastically.
 
Rob Brooks: How do we deal with those people who still consume misinformation that agrees with their worldview, even in the face of it being proved false?
 
Carl Bergstrom: I don't know what to do about this. One very interesting question is why as humans we are so drawn to it. That might be an important research direction that we go in.  
 
Rob Brooks: Every single person here is asking, how do we assess students in the light of large language models?  
 
Carl Bergstrom: Yeah, that’s really hard. We worry about this a lot. There are certain exercises that one can do. Of course, talking to students – going back to face-to-face. Oral exams and tutorials and things could be effective. There are still things that the models can't do. Right now I'm going to have my students design what are called visual abstracts, where they design a sort of data visualisation for a paper. But soon large language models will be able to do that, and I think what we are going to have to do is start thinking about what the tasks are that large language models can't do, because those are the tasks our students have to be able to do, and move toward assessing those. It is going to be a real chance for education to look back at itself, look at its practices, and figure out how we can best educate people for the age we are moving into.
 
Rob Brooks: And because every single question here was asked by 'anonymous' and this is one of our last questions for the night, is the ability to remain anonymous online contributing to this problem of misinformation?  
 
Carl Bergstrom: This is a fascinating question. The ability to remain anonymous online is extremely important for the organisation of counter-movements in some places. We have seen it as very important for groups that may be marginalised or discriminated against; it provided a sense of meaning and belonging to countless gay youth in the southern United States, and it is so critical to online culture. It is one of the best things that the internet has brought about. It may contribute to the spread of disinformation, but most of the worst disinformation spreaders during the COVID-19 pandemic, for example – and we have studied this – were not anonymous accounts; they were well-known, named actors who were in fact trading on their named reputations.
 
But there is a concerning twist to this, which is that during the COVID epidemic, if someone didn't give their name but wrote an impassioned plea for something, I didn't know who they were, but I knew they were a person. In 2023 that is not true anymore. In 2023, the fact that someone has written an impassioned text, someone has written jargon-correct scientific text, someone is using all the language of my in-group, whatever – it doesn’t assure me that there is even a thinking person who took the time to think those thoughts, or bothered to write them. And that is going to do something really interesting to the nature of identity, because often we don't care about the specific identity, but we definitely care that it was a person.
 
I don't want to waste time reading words that no one could be bothered to write, arguing with a person that isn't actually a person. As we move into a large language model world, I am very concerned that is going to be a big problem for the notions of anonymity in society and on the net.  
 
Rob Brooks: What a fascinating place to end, it has been an absolute pleasure.  
 
Carl Bergstrom: It has been a delight. Thank you.  
 
Rob Brooks: It has been an education for me, I must say, and your questions this evening, of which we only got to a small number, were impeccable and well judged. Thank you very much to anonymous – all of you. Carl is going to be signing books outside, and books are on sale through the UNSW bookshop. So if you do not yet have a copy of Calling Bullshit: The Art of Scepticism in a Data-Driven World, then do get one and line up to have it signed.

To hear about more upcoming events and podcasts, please subscribe to the UNSW Centre for Ideas newsletter or visit unswcentreforideas.com, and we will see you at the next event. Thank you all very much for joining us tonight, and I’d like you to join me in an enormous, rousing thank you to Carl Bergstrom for an absolutely spellbinding hour.

Carl Bergstrom: Thank you.  
 
Applause 
 
UNSW Centre for Ideas: Thanks for listening. This event was presented to you by the UNSW Centre for Ideas and UNSW Science. For more information, visit unswcentreforideas.com and don’t forget to subscribe wherever you get your podcasts.  

Speakers
Carl Bergstrom


Carl T. Bergstrom is a Professor in the Department of Biology at the University of Washington in Seattle. Trained in evolutionary biology, mathematical population genetics, and infectious disease epidemiology, Carl is best known for work crossing field boundaries and integrating ideas across the span of the natural and social sciences. The unifying theme running through his work is the concept of information. Within biology, he studies problems such as the evolution of communication, and how the process of evolution by natural selection creates the information that is encoded in genomes. In the philosophy and sociology of science, his work explores how the incentives created by scientific institutions shape scholars’ research strategies and in turn our scientific understanding of the world; in network science, how information and disinformation flow through massive-scale networks. In epidemiology, he played a prominent role during the COVID-19 pandemic as a science communicator and developed models used to implement proactive testing programs worldwide. Most recently, Carl has teamed up with Jevin West to fight misinformation online by teaching quantitative reasoning and digital literacy. Together, they coauthored Calling Bullshit: The Art of Scepticism in a Data-Driven World.

Rob Brooks


Rob Brooks is Professor of Evolution at UNSW Sydney and a popular science author. He has spent his career understanding the complexities and conflicts that sex and reproduction bring to the lives of animals, including human animals. His popular writing explores the murky confluence of culture, economics and biology, and how new technologies interact with our evolved minds and bodies. He has won the Queensland Literary Award for Science (for his first book Sex, Genes and Rock ‘n’ Roll), and the Eureka Prize for Science Communication. His articles have been published in Psyche, CNN, The Atlantic, The Sydney Morning Herald, Areo, and many other publications. His latest book Artificial Intimacy: Virtual Friends, Digital Lovers, and Algorithmic Matchmakers considers what happens when new technology collides with our ancient ways of making friends, growing intimate, and falling in love.  
