
Guardians of the future: Māori knowledge in the age of AI

Karaitiana Taiuru

Dr Karaitiana Taiuru is a leading global voice at the forefront of Māori data sovereignty and AI ethics. His practice draws on tikanga Māori and mātauranga Māori to facilitate conversations about the ways emerging technologies must uplift, protect and empower Indigenous knowledge, culture and communities.

Hear him in conversation with Dr Sue Keay, where he explores why protecting Māori knowledge as a living treasure is one of the most exciting and urgent challenges of our tech future.

Transcript

Sue Keay: Good evening, everyone, and welcome to tonight’s event, AI: Breakthrough or Breaking Point?, and our talk, Guardians of the Future: Māori Knowledge in the Age of AI. My name is Sue Keay and I’m the Director of the UNSW AI Institute.

First, I would like to acknowledge the Gadigal people of the Eora Nation, the traditional owners of the land and waters on which we gather tonight. I would also like to pay my respect to their elders, both past and present, and extend that respect to other Aboriginal and Torres Strait Islander people who are with us here today.

Tonight, we have the pleasure of welcoming Karaitiana Taiuru to join us to share his experience working to ensure that Indigenous communities maintain control over their cultural, biological, and digital data in an age where AI steals everything.

Dr Karaitiana Taiuru is a Māori AI, data and emerging technology ethicist, working at the forefront of Māori data sovereignty and AI ethics. With more than 30 years' experience in the technology sector, Dr Taiuru began his career in IT in the 1990s, quickly becoming an advocate for Māori rights and protections online. Dr Taiuru is the Chair of New Zealand's AI Forum and holds a PhD from Te Whare Wānanga o Awanuiārangi, where his doctoral research focused on Māori data sovereignty and the ethics of genomic and DNA data relating to humans, Taonga species, and Indigenous knowledge.

His research continues to inform national policy development and international Indigenous collaborations on data governance and AI. His work consistently reframes data as a living Taonga, advocating for collective stewardship models grounded in, you might have to help me with this, Te Tiriti o Waitangi and Indigenous self-determination in the digital age.

So, please join me in welcoming Karaitiana.

So, I was listening to a podcast interview with you, and you mentioned that you came from a community of Indigenous rights people. So, how did you end up in the world of tech and AI?

Karaitiana Taiuru: Okay, good point. So, as a child, I was brought up in two different worlds. I was brought up in a traditional Indigenous world, in what you'd call the bush, where I learned Indigenous values and became well skilled in that area. And I was also brought up by, you know, my European grandparents: your typical World War II returned serviceman, and a typical grandmother who was a lady. And they taught me values.

And then I was, my education ended up being at one of New Zealand’s most prestigious private schools. So, I had a really good balance of traditional knowledge and of the new world. So, I’ve blended that together now.

Sue Keay: Okay. I’m still not quite sure how you got into tech and AI.

Karaitiana Taiuru: Sure. So, basically, I just got offered a job in tech soon after I left school, when I thought I was going to be a lawyer, but that wasn't my world. It just happened that a chief executive wanted someone to work for them who could speak the Māori language, and who was prepared to be upskilled over the next two or three years in technologies, which back in those days meant networking and databases, and then eventually the World Wide Web came to New Zealand.

And within, yeah, probably the first few months, I realised, oh my gosh, this digital world is extracting our Indigenous knowledge and not recognising us. So, I, yeah, I stayed there so I could advocate and, yeah, ensure that we could have a voice.

Sue Keay: Do you think you could explain to everyone how you went about creating an Indigenous large language model, what that is, and what inspired you to do this?

Karaitiana Taiuru: So, first of all, I mean, it’s about sovereignty and about data governance and having something by Māori for Māori.

So, we always advocate that it’s better to do things for ourselves. That way we can ensure we can put in protection mechanisms, we can have a model or a system that talks about us and respects our traditional knowledge. If we don’t do that, then we’re stuck with having to give our traditional knowledge over to big tech companies and we’ll lose that knowledge.

That knowledge will be interpreted in a way that American tech wants it to be interpreted. So, yeah, so that’s some of the key reasons.

Sue Keay: And what can you do now you’ve developed a large language model? And also, maybe you could take us through the process of developing it. Where did you get the material to be able to build it?

Karaitiana Taiuru: So, for the languages, there are a lot of recordings that have been made over the years, despite the fact that in New Zealand the Māori language was illegal in the early 1900s, and by the 1980s it was spoken so little that it was almost extinct.

But some of the curators of the past had the foresight to record, through television, audio and video, all of our native speakers. There was also a lot of interest from missionaries in recording the language so that the Bible could be translated. So there are a lot of resources around.

And then we had tribal radio stations. So the tribal radio stations around the North Island had thousands and thousands of hours of recordings of language speakers. And so that’s been digitised over the years.

And a small community in the North Island have applied for government funding. They’ve brought international Indigenous experts, AI engineers over to New Zealand and created what we call a large language model or an AI that speaks the Māori language.

Sue Keay: And what can you use that for?

Karaitiana Taiuru: So there's an app that you can download on your smartphone, which will transcribe the Māori language. It will help you translate; it's still at an early stage, but it can translate into the Māori language.

So one of the issues that we do have, if you go to Google Translate, for example, and ask it to translate English into Māori, it’s quite horrifying what the Māori language actually comes out saying. And I’ve heard of a number of instances when people have been really embarrassed and just freaked out about what they said compared to what they thought they were going to say.

Sue Keay: Someone suggested to me that it’s only people who have faced that situation where they’ve been forbidden from using their own language that really fully appreciate the importance of the preservation of language and having large language models and artificial intelligence that is reflective of their culture. In Australia, we often are told that we’re too small a nation and that we’re too late to be able to look at developing our own language models. So I guess, you know, I’m interested in your response to that.

Karaitiana Taiuru: Yeah. So, I mean, I see in New Zealand there are a number of attempts and projects to develop our own LLMs, our own large language models. There are a number of different companies creating their own sovereign AI systems.

They’re not 100% sovereign because we still rely on big tech hardware and the internet, etc, etc. But I think that, yeah, it’s never too late and you’re never too small. I mean, little old New Zealand can do it. Australia definitely can. And you’ve got a lot of experts in AI. I saw a few of them here earlier on.

So, yeah, I believe there's no reason for you not to. In New Zealand, I see the big tech companies try and convince us that we shouldn't be creating our own AI. They lobby our ministers. They do deals with the government to get big tech AI into the schools. And so we have schoolchildren who don't know anything different except for using Microsoft this and Microsoft that, Google this, Google that, or Apple. So, yeah, I think there's a lot of politicking, but I think it's definitely worth looking into.

And in particular, if you want to retain your own country’s knowledge and your own country’s history, then looking at your own sovereign AI, I think it has to be considered.

Sue Keay: Thank you. So, maybe you could tell us a bit about the similarities in Māori storytelling traditions and data preservation.

Karaitiana Taiuru: So, often people say to me, oh, why do Indigenous peoples worry about their data? I mean, this is a new thing. But the reality is Māori and all other Indigenous peoples, we’ve had data for thousands of years. But we didn’t have computers to store that data in. We had stories, songs, things like tattoos, dance, carvings, our environment, the animals in the environment, the way they act. This is all our data. And we’ve always had very sophisticated ways of protecting that data.

So, in the Māori world, different families would be in charge of different types of knowledge, and that knowledge would be handed down intergenerationally. So, for example, at one of the communities I belong to, there's one family who has always had the traditional knowledge of the mussels: how to cultivate them, how to read the way the flax is growing, how to look at the different flowers that bloom. So, it's always been their role to do that.

And so now, in a modern-day world, they look after the legal compliance of that area. We have people who have always looked after the rituals, as we saw the smoke ceremony this evening. Traditionally, that’s another set of data that’s always been preserved.

But then, of course, with colonisation, our physical objects have been shipped all around the world, as with all Indigenous peoples and with other cultures. So, I guess there’s a discussion to be had about how do we preserve those physical objects that we can’t physically see in our own country. And so, I think there’s an argument to be had and a conversation to be had, how do we use AI to give access to physical objects, which are probably in a museum’s basement.

There’s an argument to be had, it might break traditional protocols, but having our sacred items locked up, where no one can see, is also breaking our cultural protocols. So, there’s an opportunity here for us to have that conversation.

Sue Keay: So, it sounds like we really should broaden our understanding of what data actually means. In Australia, we have a concept of Indigenous cultural and intellectual property, and one of the challenges is that, unlike traditional copyright, it's not protected. And so, if you were to ask an AI tool like ChatGPT to produce an image of typically Australian artwork, it's likely to show you a dot painting, and you have no idea how it's come up with that knowledge, but presumably it has ingested a huge number of images of Indigenous artwork without permission. So, what do you see as some of the threats to Indigenous culture through AI?

Karaitiana Taiuru: So, I think talking about artwork is a hot topic at the moment. I’ve seen artwork that claims to be Māori-generated artwork through AI, which has absolutely nothing to do with Māori at all. It probably looks more like it’s from Papua New Guinea or from elsewhere.

We have an issue where AI carries a bias. So, just recently, I asked some different AI to create me an image of the early Māori explorers. So, history tells us, and archaeology and the other sciences prove that the early explorers were physically athletic. They were very healthy people. AI generated a very overweight man with strange pictures on his body. It was kind of really grotesque. At the moment, AI imagery doesn’t understand what a traditional wedding is, and this is for many cultures, not just Māori, but for many cultures.

So, this highlights the issue of bias in the tech industry, where the tech leaders in America create the AI and create tech in their worldview. So, in their worldview, a wedding is probably a man and a woman, and the woman is probably going to wear a white dress. The man will probably have a tuxedo. So, that becomes everyone’s cultural idea. So, there’s a risk there.

We have a risk, as we touched on before, with language revitalisation. If Indigenous peoples aren't working in partnership with tech to revitalise and preserve languages, then we have all sorts of things going wrong. Already, we know some of the big tech companies, some of the big AI names, have created their own dialect of the Māori language. Māori language speakers can already pick whether it's AI-generated Māori or human-generated Māori.

Sue Keay: So, what are some of the possibilities that open up when Māori culture and traditions sit at the heart of AI, as opposed to being either an afterthought or based on stereotypes?

Karaitiana Taiuru: So, there's a number of possibilities here. You'll have AI that won't create biased, incorrect or racist outputs. You'll have AI that's trained on data that tech companies had permission to use.

So, in New Zealand, we advocate for three different categories of data that AI can use and should be able to use. So, the first one is public. It’s just public data. It doesn’t matter what the AI does with it. It doesn’t matter if you share it. It’s just in the public domain. It’s not sacred.

The second realm of data is sacred data. This is when you need to talk to the communities and decide whether or not it should be put into AI, what sort of protocols go into place, what are the risks.

Then the third type of data is data that should never be digitised and never be put into an AI. And this is sacred data: for example, images of deceased people, sacred ritual ceremonies, prayers and chants. Things that should not be in the public domain should never be digitised and put in there.

So, by working with Indigenous communities or with Māori communities, following cultural protocols, you do a number of things. You create safe tech. You create tech that protects everybody's privacy, tech that respects minorities. It removes a lot of the technical and digital harm that we often see. And at the same time, it creates a safe space for Māori people to want to work in tech. Research over the past 10 years has shown that in New Zealand, Māori people don't want to work in tech because they find it culturally unsafe, they find it racist and they find that the tech industry doesn't understand them.

And so, if we want to get our Indigenous peoples into these high-level positions and jobs and creative positions, we need to create a safe space. And likewise, using Māori culture as a foundation, our culture is open to other cultures, so we respect other cultures, other minorities. Yeah, we have traditional knowledge that lets us understand all the differences. We are very open to religious groups.

So, basically, it creates a more inclusive tech for everybody by using and considering Māori values.

Sue Keay: But it sounds like it might be hard to steer some of the existing tech companies to change into environments that are suitable. So, is there an opportunity to encourage young Indigenous people to start their own businesses? What is the most effective way to create that cultural safety?

Karaitiana Taiuru: So, in the last couple of years, I’ve seen a huge increase in Māori-owned startup tech companies using AI. And they embed Māori cultural values into the everyday tech development as part of the workplace culture, as part of the technical stack. And they’re creating some amazing commercial products. So, they employ Māori and they employ other people as well.

So, in New Zealand, we have a saying, what’s good for Māori is good for everybody. So, I just have to reiterate that. It’s really important.

I’m just using some examples. So, Microsoft and AWS have created Māori data governance frameworks. I don’t know to what degree they actually implement them, but they’ve made commitments. We’ve also got at least 38 private companies in New Zealand that aren’t Māori-owned, who’ve also created their own Māori data and Māori cultural frameworks that they implement. So, I think, yeah, I mean, at this stage, we’re not seeing AI being regulated in New Zealand.

New Zealand is following the American big tech philosophy: design it, implement it, and if it breaks, if it hurts people, we'll worry about the consequences later. And unfortunately, we know that the people who traditionally get hurt are Indigenous peoples, Indigenous women, disabled people, and rural and lower socioeconomic people.

So, again, if we carry on with this attitude, we’re going to have a whole lot of harm in the communities, which doesn’t serve our countries, right? So, I think, yeah, it’s important to understand that because even though it’s tech, tech has a human implication to it.

Sue Keay: So, I guess these frameworks address what I think is where the tension lies, as you've described it: being more culturally sensitive and actually considering how to include some of this data takes time, because you are engaging with people and finding out what data is sensitive. And of course, that seems to be the opposite of the model big tech would like to operate under, which is, as you say, move fast and break things. So, how can you balance that tension? It sounds like a lot of companies are prepared to actually invest some time to develop AI according to these frameworks.

Karaitiana Taiuru: So, I am a big advocate of regulation. So, I mean, I think there’s two ways here.

The first thing: governments should regulate AI from a human rights perspective. We know the harms that the internet, social media and other tech have done, and the algorithmic bias. Regulating AI and forcing big tech companies to change the way they roll out AI is, in my opinion, the only way we can get change.

Of course, the argument is, well, governments have already procured a lot of big AI. But I say, well, how about changing the regulations and the procurement rules? Put the emphasis back on human rights, back on country sovereignty, and push the big tech companies to make changes. We're still relatively new, but things are moving really fast, as we heard in the opening presentation before. We've got agentic AI. We've got some massive changes coming, but I think we need to lobby our politicians for regulation.

In New Zealand, as I said, our government doesn’t want to regulate. The main opposition party are talking about regulation from a human rights perspective. So, I think, yeah, we just need to keep on raising the issues and, yeah, pushing for regulation.

And then I think it's also important that, as community groups, we create our own frameworks, whether it's an Indigenous framework or an LGBTQ+ framework. All minorities, all people without a voice, need to start creating community frameworks and pushing those developments.

We have lots and lots of startup tech companies, and so often those companies need a professional and a community reputation to help them. So, that’s when you can get in with your community frameworks and push these small startup companies to listen. And while I know some of my commercial colleagues tell me that’s impossible, I’ve seen the opposite.

I've seen a number of AI companies in health start up in New Zealand, and we've just gone in there early and put the pressure on, talking about health inequities and how the health system has traditionally failed different groups, and we've got different ethical frameworks into the AI designs.

Sue Keay: So, it sounds like to get true diversity into AI, on the one hand, you want regulations, but on the other hand, it’s also very handy to have that grassroots movement and investment in developing, as you say, community frameworks.

Karaitiana Taiuru: Yes, I think it’s absolutely essential. And if the communities are creating their own frameworks and pushing them and implementing them, then the regulators, they can’t ignore it and say it’s too late, because the communities have the voice and the people vote.

Sue Keay: And what if people are feeling that they don’t know enough about AI, how would you suggest they could best contribute?

Karaitiana Taiuru: Yeah, I think that at this stage there are lots of free resources online about AI. All the big tech companies are offering free training. There are a number of human rights AI courses, and various universities are offering free ethical AI courses online. I think, depending on your generation, you might want to talk to your grandkids or your kids. Personally, in New Zealand, I advocate that my parents' generation should create interest groups, get people like myself or others in, and have community nights to talk about AI: the benefits, the cons, the risks.

And I think it’s important that we let our younger generation experiment in a controlled environment, making sure that they’re not going to be too damaged by seeing things they shouldn’t see. But definitely don’t be scared of AI. It’s here, it’s not going away, it’s just going to get bigger and bigger and more advanced.

And ask questions. I think if you watch YouTube, there’s thousands of free videos on there as well. There’s a number of low-level introductory books about artificial intelligence from libraries, from bookshops.

But yeah, there’s definitely a wealth of information out there.

Sue Keay: And there's Toby's hologram up on level three, if you get the chance to have a look. But finally, talking about future generations, maybe you'd like to talk about how AI can be a tool for language revitalisation.

Karaitiana Taiuru: I often say that, as Māori, we're at the crossroads of AI, and I know this is reflected in the other communities that I associate with: disabled communities, rural communities, other communities. If we ignore AI and turn left, it won't be a tool or a resource, it'll be a weapon. If we turn right and embrace it, and create opportunities and create the frameworks, then it's going to be a resource; it will empower us.

Again, I say we've got to just embrace it, don't be afraid of it, even if we break the rules a little bit. Which brings us, slowly, to the challenges for language revitalisation.

In New Zealand, we’ve got two different perspectives.

There's what I call the younger generation, who were able to speak Māori because they could go to school and learn it. Then there's my generation, who learnt it in secret, because my parents' and grandparents' generations were physically beaten at school for speaking Māori. So my generation say, put it out there, let's just do what we can.

The younger generation say, no, we have to keep it for ourselves. But in New Zealand, the Māori language is now an official language, so it's there for everybody. My generation, sorry, I should say the generation before me, protested on the streets and at Parliament so that everyone could speak the language.

So I think for Indigenous communities, use AI for language preservation and revitalisation, but categorise, don’t put your sacred language in there. Just put the day-to-day language that everyone can use. Don’t be afraid if big tech uses your language sets, because that means that you’ve achieved normalisation of language.

Sue Keay: Thank you for joining us tonight at AI: Breakthrough or Breaking Point?, co-presented by the UNSW Centre for Ideas and the Museum of Contemporary Art Australia. I'd like to thank Karaitiana for taking part in this important conversation and for sharing so much. And I do take some heart in the fact that Karaitiana consults around AI ethics and is so busy that he has to rush back to New Zealand tomorrow.

So it’s good to see that there are so many companies who are interested in making sure that the AI that’s being used is done in an ethical way. Thank you very much and good night.

Speakers

Karaitiana Taiuru

Dr Karaitiana Taiuru is a Māori AI, data and emerging-technology ethicist at the forefront of Māori data sovereignty and AI ethics. As a leading global voice, he delivers practical guidance for boards, agencies and industry leaders on how emerging technologies must uplift, protect and empower Indigenous knowledge, culture and communities.

Karaitiana asserts that protecting Māori knowledge as a living treasure is one of the most exciting and urgent challenges of our tech future. His practice draws on tikanga Māori and mātauranga Māori, covering AI governance, Māori data sovereignty, intellectual property, and preventing bias and discrimination across the AI lifecycle.

Sue Keay

Sue Keay is the Director of the UNSW AI Institute and founder of Robotics Australia Group, the peak body for the robotics industry. An expert in robotics, AI and automation, she led the development of Australia's robotics roadmap, which led to Australia's first National Robotics Strategy. A strong advocate for the Australian AI ecosystem, Sue is a Fellow of the Australian Academy of Technology and Engineering (ATSE), a Chaikin medallist, a member of the Kingston AI Group and Chief Executive Women, and is on the board of computer vision start-up Visionary Machines. Sue has an MBA from UQ Business School and a PhD in Earth Sciences from ANU, and is a Graduate of the Australian Institute of Company Directors.

 
