Transparent Machines: From Unpacking Bias to Actionable Explainability

PROJECT SUMMARY

Focus Area(s): News and Media, Health, Social Services, Transport and Mobilities
Research Program: Machines

ADMs, their software, algorithms, and models are often designed as “black boxes”, with little effort devoted to understanding how they work. This lack of understanding affects not only the end users of ADMs, but also the stakeholders and developers, who need to be accountable for the systems they are creating. The problem is often exacerbated by the inherent bias in the data on which the models are trained.

Further, the widespread use of deep learning has led to a growing number of minimally interpretable models being deployed, as opposed to traditional models such as decision trees, or even Bayesian and statistical machine learning models.

Explanations of models are also needed to reveal potential biases in the models themselves and assist with their debiasing.

This project aims to unpack the biases in models that may come from the underlying data, as well as biases in software (e.g. a simulation) that could be designed with a specific purpose and angle from the developers’ point of view. The project also aims to investigate techniques for generating actionable explanations across a range of problems, data types, and modalities, from large-scale unstructured data to highly varied sensor data and multimodal data.
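One family of model-agnostic techniques relevant to this kind of work is perturbation-based feature attribution. As an illustrative sketch only (the toy model and features below are hypothetical, not part of the project), permutation importance scores each input feature by how much shuffling its values changes a black-box model's output:

```python
import random

def black_box(x):
    # Toy stand-in for an opaque ADM model: depends strongly on feature 0,
    # weakly on feature 1, and ignores feature 2 entirely.
    return 3.0 * x[0] + 0.5 * x[1]

def permutation_importance(model, X, n_repeats=10, seed=0):
    """Mean absolute change in model output when one feature column is shuffled."""
    rng = random.Random(seed)
    baseline = [model(x) for x in X]
    importances = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            permuted = [x[:j] + [v] + x[j + 1:] for x, v in zip(X, col)]
            total += sum(abs(b - model(p)) for b, p in zip(baseline, permuted)) / len(X)
        importances.append(total / n_repeats)
    return importances

rng = random.Random(42)
X = [[rng.random() for _ in range(3)] for _ in range(50)]
importance = permutation_importance(black_box, X)
# Feature 0 dominates feature 1, and feature 2 scores exactly 0,
# because the model ignores it.
```

A ranking like this can surface which inputs actually drive a model's decisions, which is one starting point for detecting whether protected or proxy attributes carry undue weight.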

RESEARCHERS

Prof Flora Salim, Lead Investigator
Prof Paul Henman, Chief Investigator
Prof Mark Sanderson, Chief Investigator
Prof Dan Angus, Associate Investigator
Dr Jeffrey Chan, Associate Investigator
Prof Falk Scholer, Associate Investigator
Dr Damiano Spina, Associate Investigator
Prof Maarten de Rijke, Partner Investigator

PARTNERS

University of Amsterdam

Quantifying and Measuring Bias and Engagement

PROJECT SUMMARY

Focus Area(s): News and Media, Health
Research Program: Machines, Data

Automated decision-making systems and machines – including search engines, intelligent assistants, and recommender systems – are designed, evaluated, and optimised using frameworks that model the users who will interact with them. These models are typically a simplified representation of users (e.g., using the relevance of items delivered to the user as a surrogate for system quality) that operationalises the development of such systems. A grand open challenge is to make these frameworks more complete by including new aspects, such as fairness, that are as important as the traditional definitions of quality, to inform the design, evaluation, and optimisation of such systems.

Recent developments in the machine learning and information access communities attempt to define fairness-aware metrics to incorporate into these frameworks. However, a number of research questions related to quantifying and measuring bias and engagement remain unexplored:

  • Is it possible to measure bias by observing users interacting with search engines, recommender systems, or intelligent assistants?
  • How do users perceive fairness, bias and trust? How can these perceptions be measured effectively?
  • To what extent can sensors in wearable devices and interaction logging (e.g., click-through rates, app swipes, notification dismissals) inform the measurement of bias and engagement?
  • Are the implicit signals captured from sensors and interaction logs correlated with explicit human ratings w.r.t. bias and engagement?

The research aims to address the research questions above by focusing on information access systems that involve automated decision-making components. This is the case for search engines, intelligent assistants, and recommender systems. The methodologies considered to address these questions include lab user studies (e.g., Wizard of Oz experiments with intelligent assistants), and the use of crowdsourcing platforms (e.g., Amazon Mechanical Turk). The data collection processes include: logging human-system interactions; sensor data collected using wearable devices; and questionnaires.
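As an illustrative sketch of the kind of fairness-aware measure mentioned above (the logarithmic position-bias model and the group labels are assumptions for the example, not metrics adopted by the project), one can compare the exposure that items from different groups receive in a ranked result list:

```python
import math

def exposure_by_group(ranking):
    """ranking: list of group labels, ordered from rank 1 downwards.
    Exposure at each rank decays as 1/log2(rank + 1), as in DCG."""
    totals = {}
    for rank, group in enumerate(ranking, start=1):
        totals[group] = totals.get(group, 0.0) + 1.0 / math.log2(rank + 1)
    return totals

def exposure_disparity(ranking, group_a, group_b):
    """Absolute difference in mean per-item exposure between two groups."""
    totals = exposure_by_group(ranking)
    counts = {g: ranking.count(g) for g in (group_a, group_b)}
    mean_a = totals.get(group_a, 0.0) / max(counts[group_a], 1)
    mean_b = totals.get(group_b, 0.0) / max(counts[group_b], 1)
    return abs(mean_a - mean_b)

# Group "a" monopolises the top ranks in the first list, so it receives
# more exposure per item there than in the interleaved list.
biased = ["a", "a", "a", "b", "b", "b"]
mixed = ["a", "b", "a", "b", "a", "b"]
d_biased = exposure_disparity(biased, "a", "b")
d_mixed = exposure_disparity(mixed, "a", "b")
```

Measures of this shape can then be correlated against the explicit human ratings and implicit behavioural signals described above.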

RESEARCHERS

Dr Damiano Spina, Lead Investigator
Assoc Prof Anthony McCosker, Chief Investigator
Prof Sarah Pink, Chief Investigator
Prof Mark Sanderson, Chief Investigator
Dr Jenny Kennedy, Associate Investigator
Prof Falk Scholer, Associate Investigator
Prof Flora Salim, Associate Investigator
Dr Danula Hettiachchi, Research Fellow

PARTNERS

Australian Broadcasting Corporation
AlgorithmWatch (Germany)
Bendigo Hospital
Google Australia
RMIT ABC Fact Check

Mapping ADM Machines in Australia and Asia-Pacific

PROJECT SUMMARY

Focus Area(s): Social Services
Research Program: Machines

This project adopts the (draft) taxonomy for automated decision-making (ADM) to undertake a mapping exercise of ADM machines in social services in Australia. A key purpose is to test and refine the taxonomy and to provide foundational empirical and conceptual knowledge of ADM in social services beyond Europe and North America, extending into the Asia-Pacific region. This mapping exercise will provide the necessary baseline empirical understanding of where ADM is deployed and how it is being used.

The approach will use a critical data studies theoretical framework to develop a counter-mapping of ADM systems in social services. This approach views ADM as an assemblage of data systems and decision-making in a socio-political context, and aims to build knowledge about which ADMs are being used in government, how they are used, and who is affected by them.

RESEARCHERS

Prof Paul Henman, Lead Investigator
Dr Lyndal Sleep, Research Fellow

PARTNERS

AlgorithmWatch (Germany)

Adaptive, Multi-Factor Balanced, Regulatory Compliant Routing ADM Systems

PROJECT SUMMARY

Focus Area(s): Transport and Mobilities
Research Program: Machines

This project aims to study and develop new approaches that combine fairness, privacy, and legal guarantees for ADM systems, such as recommender systems and machine learning-based systems. It takes a multidisciplinary approach and, although focused on the transport and mobilities focus area, its results could potentially apply to other areas.

The project is divided into three work packages, roughly one year in length each.

RESEARCHERS

Prof Christopher Leckie, Lead Investigator
Prof Megan Richardson, Chief Investigator
Prof Mark Sanderson, Chief Investigator
Prof Kimberlee Weatherall, Chief Investigator
Dr Jeffrey Chan, Associate Investigator
Dr Sarah Erfani, Associate Investigator
Prof Flora Salim, Associate Investigator

Considerate and Accurate Multi-party Recommender Systems for Constrained Resources

PROJECT SUMMARY

Focus Area(s): News and Media, Health, Social Services, Transport and Mobilities
Research Program: Machines

This project will create a next-generation recommender system that enables equitable allocation of constrained resources. The project will produce novel hybrid socio-technical methods and resources to create a Considerate and Accurate REcommender System (CARES), evaluated through social science and behavioural economics lenses.

CARES will transform the sharing economy by delivering systems and methods that improve user and non-user experiences, business efficiency, and corporate social responsibility.

RESEARCHERS

Prof Mark Sanderson, Lead Investigator
Prof Christopher Leckie, Chief Investigator
Prof Julian Thomas, Chief Investigator
Dr Jeffrey Chan, Associate Investigator
Dr Danula Hettiachchi, Research Fellow
Dr Indigo Holcombe-James, Research Fellow
Prof Flora Salim, Associate Investigator

PARTNERS

University of Amsterdam

Building Ethical Machines in Social Services: Examining, Evaluating, Building Fairness and Explainability in ADM

PROJECT SUMMARY

Focus Area(s): Social Services
Research Program: Machines

A significant area of automated decision-making (ADM) in social services relates to the use of predictive measures – such as predictions of the risk of abuse or neglect to children in child protection, predictions of recidivism or crime in policing and criminal justice, predictions of welfare or tax fraud in compliance systems, and predictions of long-term unemployment in employment services. While earlier and current versions of these systems are based on standard statistical analyses, machine learning versions are increasingly being developed and deployed.

Despite these changes in machine and algorithm design, the issues of bias, fairness, and explainability are not substantially altered, and they have not been adequately addressed in the past. Working with computer scientists, lawyers, social scientists, and users of social services, this project will engage with substantive empirical examples of ADM in disability services, child protection, criminal justice, and social security to develop an understanding of what social service users and professionals regard as fairness and explanation.

RESEARCHERS

Prof Paul Henman, Lead Investigator
Prof Dan Hunter, Chief Investigator
Prof Terry Carney AO, Associate Investigator
Dr Philip Gillingham, Associate Investigator
Dr Amelia Radke, Associate Investigator
Assoc Prof Paul Harpur, Associate Investigator

PARTNERS

Australian Council of Social Service
Australian Human Rights Commission
Australian Law Reform Commission
Australian Red Cross

A taxonomy of decision-making machines

PROJECT SUMMARY

Focus Area(s): News and Media, Health, Social Services, Transport and Mobilities
Research Program: Machines

To date, no research exists that classifies the growing diversity of automated decision-making (ADM) machines or describes the relations between them. Instead, ADM systems are typically examined as distinct technologies in isolation from each other.

The project draws on the expertise within the Centre, together with published material, to develop an innovative three-dimensional taxonomy. It provides a categorisation of ADM that will support work across the Centre.

RESEARCHERS

Prof Paul Henman, Lead Investigator
Prof Dan Hunter, Chief Investigator
Prof Christopher Leckie, Chief Investigator
Prof Mark Sanderson, Chief Investigator
Prof Julian Thomas, Chief Investigator
Dr Jeffrey Chan, Associate Investigator
Dr Philip Gillingham, Associate Investigator
Dr Jake Goldenfein, Associate Investigator
Prof Flora Salim, Associate Investigator

PARTNERS

AlgorithmWatch
Data & Society