Automated Decision-Making and Society

An interview with Brooke Myler and Professor Julian Thomas, Director of the ARC Centre of Excellence for Automated Decision-Making and Society.

Brooke: Welcome to the Automated Decision-Making and Society podcast. My name is Brooke Myler, and today we are discussing automated decision-making systems, how they are evolving and what this means for society.

Joining me in this episode is Professor Julian Thomas. Professor Thomas is the Director of the ARC Centre of Excellence for Automated Decision-Making and Society and a researcher in digital media and the internet. Julian is a member of the Council of the Australian Academy of the Humanities, a Board member of the Australian Communications Consumers Action Network (ACCAN), and an Advisory Board member of Humanitech, an initiative of the Australian Red Cross.

Brooke: Thank you for joining me today Julian!

Julian: Hi Brooke! Great to be with you.

Intro: When we talk about automated decision-making technologies, we can think of simple things like text message appointment reminders and automatic loan approvals, through to more complex things like the technology needed for driverless cars. Automated decision-making is used in our news and media, transport and mobility, health and social services. It is used pretty much anywhere businesses can streamline processes with automation and algorithms.

The rapid expansion of automated decision-making enabled by technologies from machine learning to the blockchain has great potential benefits, but it also creates potential harms from data discrimination against disadvantaged communities to the spread of disinformation for political and commercial gains. The ARC Centre of Excellence for Automated Decision-Making and Society, known as the ADM+S, brings together universities, industries, Government and the community to help mitigate these harms and inform the development of automated decision-making that is responsible, ethical and inclusive.

Brooke: Firstly Julian, can you tell us a bit about yourself, your research interests and why you were drawn to these interests?

Julian: Sure! Happy to do that. Well, I’m a media studies scholar, really; I’ve been working on digital media and the internet for some time. My training is in history, and what I’m really interested in is what I describe as the histories of new communications technologies: what kind of social impact they have, how we try to manage them, and the sorts of transformations they make to the way we live, the way we communicate, everyday culture, life, work, all of those things. So, the theme of our Centre, which is really about this new form of automation, and we’ll talk a bit more about that as we go along, brings together a whole lot of things I’ve been interested in for a long time: how we regulate these new technologies, what their social disruption is, and especially what sorts of impacts they have on disadvantaged communities or vulnerable populations. And, also an area I’m really interested in, what happens with those areas of technology that are not regulated, the emergence of markets which are not governed or controlled legally or in other ways. So really this whole problem of the emergence of new kinds of automated technologies, AI and all these other things, brings all my interests together, so it’s a really great convergence of things for me.

Brooke: Can you describe for us what automated decision-making is?

Julian: Yeah, so what’s the scope of the Centre, and what are we really interested in? I think it is this transformation which we are living through right now and have seen happening for some time. And, in order to understand it, we just have to go back a little bit and think about that key word in the title of our Centre: “automated”.

What is automation, what are we thinking about there? Remember that for a long time there has been debate about automation, about the way machines take the jobs of people, for example, and how machines, since the industrial age of the late 18th and early 19th centuries, have taken over the work of humans and the work of animals in making things.

So, we now have modern factories where the manufacture of goods is largely undertaken through automated processes, often by heavy machinery; that’s our image of an industrial factory. What interested us about what we can see going on around us now is what we sometimes call a second wave of automation, where the machines aren’t just making things anymore, they’re also making decisions.

So, we find that there is a whole range of new and emerging technologies, different flavours of artificial intelligence, robotics, distributed ledger technologies (which we sometimes refer to as blockchain), all kinds of mechanisms whereby computers are able to gather and process data, apply rules to particular situations, choose actions and determine outcomes for people in all sorts of everyday contexts. So, this isn’t the artificial intelligence of science fiction; we’re not talking about the remote future, we’re talking about the present, our contemporary situation, where we find that algorithmic systems are, in certain cases which have been in the news recently, making decisions about what grades people get at school, making decisions about whether people are eligible for a social security benefit, or a loan, or insurance, or any of those kinds of things. So, this is very much about automation in the everyday. But it’s about this process whereby machines, as I say, are making decisions which have outcomes for us. And what we think is going on here, which makes this particularly interesting from our point of view, is that beyond that first wave of automation I mentioned, where machines are making things and replacing the labour of people, in this wave the critical functions of some of our institutions are also being displaced and substituted. The sorts of rules and organisations that we’ve established, sometimes to mitigate risks or manage human welfare, are having their work taken over by machines as well. This is what we are concerned about with the Centre; those are the risks that are presented. But that’s really the kind of automated decision-making that we are interested in.

Brooke: So automated decision-making otherwise known as ADM, is currently being used in society, but how is it evolving?

Julian: Yeah, so we’ve talked a little bit about how automated systems are appearing in news and media, and we’ve talked a little bit about how they are appearing in critical areas of government services like social security, but we can see them operating in other parts of our lives and the economy as well. So, our Centre has quite a broad range of areas that we’re interested in, those areas certainly, but we’re also particularly interested in how automation plays out in transport and mobilities. There we’re looking at how automated systems are being built into the design of future transport and mobility services, for example, services that may or may not meet the needs of people and social institutions. We are seeing how automated systems are used to decide how people travel, what sorts of modes they take, and what routes they take across the city. All of these are decision-making systems which have considerable implications for individuals, but also for society as a whole.

We’re also interested in the health sector, where of course we’re seeing systems used increasingly to make decisions about what kind of healthcare people receive, how treatment protocols are managed in particular kinds of institutions or situations, how diseases are diagnosed, how illness may or may not be predicted, and how care can be tailored more effectively to the personal and individual needs of particular people. So, we’ve chosen a range of these areas: health, transport and mobility, social services, news and media. They are all quite different, and they are interesting to us because they are all areas where automation has proceeded a long way, and proceeded quite rapidly. We can learn a bit from what’s going on in those areas to understand better what might happen in other fields where it’s not so advanced, and we can also compare and contrast what’s going on across those fields. What we find is that similar systems are developed in all those different domains, but the people involved in developing them, in understanding how they work, and in administering them don’t always have a strong sense of the debates, lessons and experiences of how they have worked in other areas. So, we are trying to break down some of those silos a little bit by working and looking across all of those areas.

Brooke: It sounds like ADMs will make things more efficient, but what are some of the risks?

Julian: We think that there is a range of risks in almost all of the areas that we are interested in, and they take different forms. We are coming to understand a little better how artificial intelligence, or machine learning systems trained on large historical data sets, can reproduce and amplify the biases or patterns of discrimination which may be built into the data that trains those systems. We’re beginning to understand what might be involved if we have to understand how an automated system has made a decision, and what sorts of problems might be encountered there. But the main risks that we are concerned with arise from the shift in responsibility that I referred to, where automated systems are taking over the functions of institutions in rule setting and in reviewing and overseeing decision-making processes. So, at one level the risks are quite simple: an automated system can make a mistake, and we’ve seen lots of examples of that. And when we are giving automated systems responsibility for decisions that affect people’s lives, a mistake can have traumatic and far-reaching consequences. It may be just one mistake in a million, but that’s a serious mistake for the people who are affected by it, and it’s something that we really need to avoid, even at that low probability.

So, we’ve got to build these systems to a very high standard, and we’ve got to make sure that there are systems of review built into them, that there is explainability, in terms of us being able to understand how decisions are being made and to go back. Because we’ve seen that it’s not difficult to build automated systems; it’s difficult to build automated systems that don’t make mistakes. Sometimes we only discover those mistakes once the systems have been operating for a while, so one of the things we’ve got to think about is how we test, evaluate and understand the performance of these systems. So, there are a lot of elements to all of this, but the fundamental area of concern is this: where we are giving machines responsibility, how do we make sure we can review the decisions that are made, and make sure that they are in line with the law, with best practice, with what is ethical, with what is responsible in terms of social impact, and with what is transparent in terms of accountability?

Brooke: The Australian Research Council funded the ARC Centre of Excellence for Automated Decision-Making and Society as it recognised research in the field as a national priority. Can you tell us more about the Centre?

Julian: The Centre, as you say, brings together people from across a whole range of disciplines to undertake a program of research over seven years to address these significant national questions around how we manage automation, and how we make sure that this enormous transformation in our economy and in our ways of living happens in a way which is ethical, responsible, and inclusive.

We are very lucky to have the ARC, and the ARC Centres of Excellence have been an extraordinarily successful program. What they do, and what is so exciting about this, is the way in which they bring together researchers from different disciplines and from a whole range of different universities with industry partners from government, from the community sector and from the corporate sector to address these really difficult problems. They also connect us up with an international network, so it’s an extraordinary opportunity to really prosecute an ambitious program of research and research training, and to understand how the kind of work we do might play out in practice and what we can learn from experience on the ground in a whole range of different industry contexts. The idea of the Centre is that we bring together a critical mass of some of the country’s best researchers in these fields, in a whole range of disciplines, and we attract outstanding postdoctoral research fellows, early career researchers and PhD students to create a new generation of researchers. So, the ARC Centres are a fantastic initiative, and they really do give us an extraordinary opportunity to create a lasting legacy in this area.

Brooke: What is the Centre doing to contribute to the mitigation of the social and economic risks associated with ADM?

Julian: Well, the Centre, as I said, has an ambitious research program, and there is a whole range of things we need to do in order to mitigate the risks and to make sure that there are better outcomes from this transformation as we go ahead. In general, what we have to do first of all is understand the distribution of automated systems across the economy and across everyday life: what are the dynamics of these things, where are they appearing, how rapidly, and what are they actually doing? This is something we do not yet know in this country, and that work has not been done internationally, so our first job is really to understand the environment in which we’re operating, which is increasingly an environment in which humans are working alongside machines. So, we really need to understand that a lot better. We are then bringing together expert researchers in the areas of the technologies themselves; in law, governance and ethics; in the social impacts and social uses of technologies, including anthropologists, sociologists and others; and in how data moves around.

So, we are looking at this problem from all four of those dimensions, because we do not see it as simply a technological problem, but as something that involves all of those critical elements: the institutions we’ve talked a little bit about, the technology we’ve talked a little bit about, the kinds of data that these systems use, how it’s generated and what you do with it, and of course the people who are at the centre of it all, either designing these systems or subject to the outcomes of these decision-making processes. So, we need to bring all of those perspectives together, and they all require different disciplines. But then what we are doing, of course, is turning to those key domains that we are concerned with, and this is where we think it is possible to make a difference if we understand better how automated systems are working.

Brooke: How can we make these technologies responsible, ethical and inclusive?

Julian: So, for example, in that news and media area we’ve mentioned: if we look at how search engines are working, how people are discovering news, how news is getting to people, how automated news feeds are working, content moderation systems, and programmatic advertising (which is really about the application of different kinds of AI to the delivery of targeted advertising), how are they all working in our contemporary digital media platforms, and what sorts of risks do they pose to our democratic process and to the social cohesion that follows from it? How do we make sure that public interest journalism, for example, is prominent, accessible, discoverable and in front of people when they need to know about it? These are the kinds of questions that we have to ask.

A lot of what we’re doing is looking closely at how automated systems are working, and then trying to understand the degree to which they may fall short in accounting for their social impact, in operating transparently and ethically, and in including everybody’s interests, not just those of the consumer and the provider, both sides of the marketplace as it were, but how these things work in what we might describe as a public interest context. That involves going back into how these systems work, understanding better what sorts of factors they take into account when they make decisions, and whether we can bring public interest considerations into those decision-making processes. It also means making sure that when they are making those decisions, they’re doing so in line with existing public policy and legal and regulatory requirements. So, it’s a big job, but that’s the approach we’re taking.

Brooke: So, Julian, what have you found so far?

Julian: We’ve been going six months, and we’ve been getting a lot of projects under way, because in the era of COVID we haven’t been able to do some of the sorts of things we were planning to do, but we have been able to bring some work forward. We’ve been able to recruit a very exciting group of early career researchers and PhD students, and we’re starting to get those people working together; this is really our first job as a Centre, to build cohesion and communication across the Centre as a whole. It’s very exciting to do, and we’re already seeing new collaborations emerge between people in different disciplines and different institutions who otherwise would never be working with each other. But I suppose what we’re starting to understand, and the work that we’re starting to do substantively, does relate to this question of how we understand what we call the dynamics of automated decision-making in Australia and its distribution. And not just in Australia, because we’re interested in other places as well: we’ve got researchers who are working on the South Pacific and Southeast Asia, and of course in Europe and North America as well.

But we’re starting to understand what the distribution of automated decision-making looks like, in those key areas we talked about, but also more widely, because I think what we’re finding is that automated decision-making systems are being deployed very widely. We’re starting to see what we might call an industrialisation of artificial intelligence and related computational processes in all kinds of areas: in public administration, in industry, and to a significant extent in the community sector as well. So, we’re starting to get a sense of what we’re encountering. The speed, the velocity, of automation is striking; the appearance of automation right across our economy is striking. So what we’re starting to do is understand that and map it, so we can provide a benchmark account of where we are now and what we can see happening in the next little while, and so we can monitor it. We’re also looking at the policy debates that are going on in Australia and elsewhere, and contributing where we can to the development of better regulation in this space; these debates are of course happening all the time. The last thing I think we’re doing is getting involved in the design of better systems, of more responsible and ethical automated systems, working with partners like the Red Cross through, for example, Humanitech, a spin-off from the Red Cross that works on technology solutions in the humanitarian sector, to understand better how, working with them, we can develop systems that assist people in those quite specific contexts. So, that’s the kind of work we’ve been doing. I think we’ll have a lot more to say about this in the next year or so.

Brooke: So, Julian, the Centre is funded until 2027, what do you hope to achieve in this time?

Julian: We have some ambitious targets for 2027; we’re trying to do a lot. As we’ve been saying, the Centre is an extraordinary opportunity, and we naturally want to make the very most of it. So, really, our critical first goal has got to be to help shape world-leading policy and practice in responsible, ethical and inclusive automated decision-making systems. That’s a big step up from where we are now; it means reducing risks in the key areas, the key fields, that we’ve been talking about, and producing better outcomes in those areas. We would really like to be able to say that we have made a significant difference in those fields.

I’ve also talked through our discussion about early career researchers and our students. The Centre has a big responsibility in research training, so we want to produce, effectively, a new generation of researchers and practitioners who are trained in the technologies and systems we’ve been describing, who can bring a range of disciplinary perspectives to bear on those problems as they develop, and who have had experience of working across our national and international networks. We feel that if we are able to do that, we will be able to provide capability not just to our universities but to Australia’s non-government sector, to the corporate sector and, of course, to policy making as well, so that we will be securing better debate and better outcomes in those fields for the long term. More broadly, though, the last thing I suppose I’d say is that we want to inform and encourage public debate in this area. As I was saying, there are rapid transformations going on right now, and I’m not sure whether any of us fully appreciate how fast systems are changing and the significance of the changes that we’re living through. So, it’s very important that we have an informed and knowledgeable public debate about automation: about how far it should go and in which circumstances, and about what ethical, responsible and inclusive automation should look like, so that we can make informed decisions as a community about our future, rather than just letting it happen. So, those are the key things that we want to do.


Brooke: How can listeners get involved in the Centre and find out more about the work that you are doing?

Julian: Oh sure! Very happy to encourage people to get involved, and there are lots of ways to connect through our website. We’ve got a monthly newsletter which lists all our events, many of them open to the public, and we’re very keen for people to register for those. Almost all our events, I think, are free of charge for the public: lots of talks and webinars, all of those kinds of things, and as we go along we plan to have a wider range of activities, including exhibitions and more creative sorts of activities. There is a whole range of talks which we’ve recorded on our YouTube channel, so I encourage people to take a look at that, and of course we encourage people to follow us on Twitter and Instagram and Facebook and elsewhere. We’ve got a great series of podcasts here, and this is a really great way to hear a little more about what people are doing and get a little bit more detail about some of the issues that we’re working with. So, all of those are really good ways for people to keep in touch with the Centre, and you can also get in touch through the website. For anybody who wants to know more, please do just contact us and we’re very happy to follow up.

Brooke: I think we often just accept the automation of systems as something that will streamline transactions and online experiences; however, as you mentioned, these systems are not just technical, and there are also social, cultural, and institutional considerations.

Thank you for joining me, Julian, and for talking about some of the potential issues that the Centre is researching to ensure that the future of automated decision-making is responsible, ethical and inclusive.

Julian: Thank you, Brooke, for a great discussion.

End: You’ve been listening to a podcast from the ARC Centre of Excellence for Automated Decision-Making and Society. For more information on the Centre go to