Ethics, transparency and accountability of AI

We have been actively involved in fairness-aware machine learning research since the field emerged around 2009.

Nowadays, many decisions are made by predictive models built on historical data: automated CV screening of job applicants, credit scoring for loans, or police profiling of potential suspects. Algorithms influence the prices we pay, the people we meet, and the medicine we take.

Growing evidence suggests that algorithmic decision making can discriminate against people, even when the computational process itself is fair and well-intentioned. This is because most data mining methods assume that the historical data is correct and represents the population well, which is often not true in reality. Moreover, predictive models are usually optimised to perform well on the majority of cases, without taking into account who is affected worst by the remaining inaccuracies.

Fairness-aware machine learning studies under which circumstances algorithms may become discriminatory, and how to build predictive models that are free from discrimination when the data they are trained on may be biased, incomplete, or may even record past discriminatory decisions.
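As a concrete illustration, the sketch below computes one of the simplest discrimination measures discussed in this line of research: the demographic parity (mean) difference, i.e. the gap in positive-decision rates between a protected group and the rest. The data and function below are synthetic and for illustration only; they are not taken from our publications.

```python
# Minimal sketch of one common discrimination measure:
# demographic parity difference -- the gap in positive-decision
# rates between the non-protected and the protected group.
# All data here is synthetic, purely for illustration.

def demographic_parity_difference(decisions, protected):
    """decisions: 0/1 outcomes; protected: 0/1 group membership flags."""
    prot = [d for d, p in zip(decisions, protected) if p == 1]
    rest = [d for d, p in zip(decisions, protected) if p == 0]
    rate_prot = sum(prot) / len(prot)   # positive rate, protected group
    rate_rest = sum(rest) / len(rest)   # positive rate, everyone else
    return rate_rest - rate_prot

# Synthetic example: 10 loan decisions, 5 per group.
decisions = [1, 1, 1, 0, 1,  1, 0, 0, 0, 0]
protected = [0, 0, 0, 0, 0,  1, 1, 1, 1, 1]
gap = demographic_parity_difference(decisions, protected)
print(round(gap, 2))  # positive rates 0.8 vs 0.2, so the gap is 0.6
```

A gap of zero would mean both groups receive positive decisions at the same rate; in practice this measure is only a starting point, since conditional and explainable forms of discrimination require more refined measures.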

Selected publications

  • Fairness-aware machine learning: a perspective by Žliobaitė 2017 on arXiv. DOI
  • Measuring discrimination in algorithmic decision making by Žliobaitė 2017 in Data Mining and Knowledge Discovery. PDF DOI
  • Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models by Žliobaitė and Custers 2016 in Artificial Intelligence and Law. DOI PDF
  • On the relation between accuracy and fairness in binary classification by Žliobaitė 2015 in FATML workshop PDF arXiv
  • Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures by Calders and Žliobaitė 2013, a book chapter. DOI PDF
  • Explainable and Non-explainable Discrimination in Classification by Kamiran and Žliobaitė 2013, a book chapter. DOI PDF
  • Handling Conditional Discrimination by Žliobaitė et al. 2011 in IEEE ICDM. DOI PDF

Selected talks

  • May 2016 » Ethical machines: data mining and fairness AID-Forum, Helsinki summary
  • Jul 2015 » Can algorithms discriminate? at EU Fundamental Rights Agency, Vienna, Austria slides
  • May 2015 » Can machines discriminate? and how to avoid that, seminar talk at HIIT slides

Teaching

  • Fall 2015 » Non-discriminatory machine learning (T-61.6010) at Aalto University and University of Helsinki. (To the best of our knowledge, this was the first full course in the world on this topic.)

Workshop