Growing evidence in the media and in research suggests that algorithmic decision making can discriminate against people. This may happen even if the computing process itself is fair and well-intentioned, because most data mining methods assume that the historical data is correct and representative of the population, which often does not hold in reality. Moreover, predictive models are usually optimized to perform well on the majority of cases, without taking into account who is affected worst by the remaining inaccuracies.
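A minimal, purely illustrative sketch of this point (the data and group names are invented, not taken from any of the studies below): a model that simply reproduces historical decisions inherits any bias encoded in them, even though the rule is applied identically to everyone.

```python
def positive_rate(decisions, group, value):
    """Share of favourable decisions within one group."""
    subset = [d for d, g in zip(decisions, group) if g == value]
    return sum(subset) / len(subset)

# Toy historical data: 1 = favourable decision, 0 = unfavourable.
# Group "a" was historically favoured over group "b".
group     = ["a", "a", "a", "a", "b", "b", "b", "b"]
decisions = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

# A model trained to fit this data perfectly would show the same gap,
# even though it applies one and the same rule to both groups.
gap = positive_rate(decisions, group, "a") - positive_rate(decisions, group, "b")
print(gap)  # demographic parity difference: 0.75 - 0.25 = 0.5
```

A nonzero gap alone does not prove illegal discrimination (part of it may be explainable by legitimate factors, a distinction studied in the publications below), but it flags where a closer look is needed.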
Non-discriminatory machine learning (T-61.6010) at Aalto University and the University of Helsinki.
Fairness-aware machine learning and data mining study under which circumstances algorithms may become discriminatory, and how to make predictive models free from discrimination when the data on which they are built may be biased, incomplete, or even contain past discriminatory decisions.
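One common family of such techniques works by preprocessing the training data. The sketch below shows the well-known reweighing idea: assign each training example a weight so that, under the weighted distribution, the sensitive attribute and the label become statistically independent. This is an illustrative sketch only; the toy data and variable names are invented and the code is not taken from the publications listed below.

```python
from collections import Counter

def reweigh(groups, labels):
    """Weight each example by w(g, y) = P(g) * P(y) / P(g, y).

    Under these weights, the sensitive attribute g and the label y
    are independent, so a learner trained on the weighted data has
    no incentive to use g as a proxy for y.
    """
    n = len(labels)
    count_g = Counter(groups)
    count_y = Counter(labels)
    count_gy = Counter(zip(groups, labels))
    return [
        (count_g[g] / n) * (count_y[y] / n) / (count_gy[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy biased data: group "a" was historically favoured over group "b".
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
labels = [ 1,   1,   1,   0,   1,   0,   0,   0 ]

weights = reweigh(groups, labels)
# Over-represented combinations (e.g. favoured "a" with label 1) get
# weights below 1; under-represented ones get weights above 1, so the
# weighted positive rates of the two groups are equalized.
```

Reweighing leaves feature values untouched, which makes it easy to combine with any learner that accepts instance weights; other preprocessing approaches instead relabel or resample examples.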
Žliobaitė, I. and Custers, B. (2016).
Using sensitive personal data may be necessary for avoiding discrimination in data-driven decision models. Artificial Intelligence and Law 24(2), p. 183-201. PDF
Žliobaitė, I. (2015).
Calders, T. and Žliobaitė, I. (2013).
Kamiran, F., Žliobaitė, I. and Calders, T. (2013).
Quantifying explainable discrimination and removing illegal discrimination in automated decision making. Knowledge and Information Systems 35(3), p. 613-644. DOI PDF
Kamiran, F. and Žliobaitė, I. (2013).
Explainable and Non-explainable Discrimination in Classification. Discrimination and Privacy in the Information Society, series: Studies in Applied Philosophy, Epistemology and Rational Ethics, Vol. 3., p. 155-170. DOI PDF
Žliobaitė, I., Kamiran, F., Calders, T. (2011).
Handling Conditional Discrimination. Proc. of the 11th IEEE Int. Conf. on Data Mining (ICDM'11), p. 992-1001. DOI PDF code
We organized a workshop on Discrimination and Privacy-Aware Data Mining at IEEE ICDM 2012 in Brussels.