Philosophy and ethics of AI

We have been actively involved in fairness-aware machine learning research since the beginning of the field, around 2009.

Nowadays, many decisions are made using predictive models built on historical data, for example automated CV screening of job applicants, credit scoring for loans, or profiling of potential suspects by the police. Algorithms shape the prices we pay, the people we meet, and the medicine we take.

Growing evidence suggests that decision making by algorithms can discriminate against people, even when the computing process itself is fair and well-intentioned. This is because most data mining methods rest on the assumption that the historical data is correct and represents the population well, which is often not true in reality. Moreover, predictive models are usually optimised to perform well on the majority of cases, without taking into account who is affected worst by the remaining inaccuracies.
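
As a toy illustration of that last point, here is a minimal Python sketch (using scikit-learn on synthetic data, so all numbers and group labels are hypothetical) in which a classifier looks accurate overall while misclassifying almost every member of a small group whose labels follow a different pattern:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Majority group (900 samples): the label follows the feature directly.
    x_maj = rng.normal(size=900)
    y_maj = (x_maj > 0).astype(int)

    # Minority group (100 samples): the opposite relationship holds.
    x_min = rng.normal(size=100)
    y_min = (x_min < 0).astype(int)

    X = np.concatenate([x_maj, x_min]).reshape(-1, 1)
    y = np.concatenate([y_maj, y_min])
    group = np.array([0] * 900 + [1] * 100)  # 0 = majority, 1 = minority

    # A model optimised for overall accuracy learns the majority pattern only.
    pred = LogisticRegression().fit(X, y).predict(X)

    print("overall accuracy :", (pred == y).mean())                          # high
    print("majority accuracy:", (pred[group == 0] == y[group == 0]).mean())  # near 1
    print("minority accuracy:", (pred[group == 1] == y[group == 1]).mean())  # near 0

The overall accuracy figure hides the fact that nearly all of the error falls on the smaller group.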

Fairness-aware machine learning studies the circumstances under which algorithms become discriminatory, and how to build predictive models that remain free from discrimination when the data they are trained on may be biased, incomplete, or may even record past discriminatory decisions.
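
One simple check studied in this field is demographic parity: whether a model hands out positive decisions at similar rates across groups. Below is a minimal sketch with hypothetical predictions and group labels:

    import numpy as np

    pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 0])   # model decisions
    group = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # protected attribute

    # Rate of positive decisions per group; a large gap between the two
    # rates is one common signal of potential discrimination.
    rate_a = pred[group == 0].mean()  # 0.6
    rate_b = pred[group == 1].mean()  # 0.2
    print("demographic parity difference:", abs(rate_a - rate_b))  # ~0.4

Demographic parity is only one of several fairness criteria discussed in the literature, and satisfying it does not by itself make a model non-discriminatory.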

Selected publications

Selected talks

Teaching

Workshop