In the age of big data, companies and governments increasingly use algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many other aspects of our lives. These systems can help us make evidence-driven, efficient decisions, but they can also confront us with unjustified, discriminatory decisions that are wrongly assumed to be accurate because they are made automatically and quantitatively. This can inflict serious harm on people's human rights. In this project, we address these challenges through a trans-disciplinary design process grounded in the values embodied by human rights, empirical study of context, and direct involvement of societal stakeholders.

For an in-depth look, check out our paper:
Aizenberg E, Van den Hoven J (2020) 'Designing for Human Rights in AI.' Big Data & Society 7(2).

Designing in practice:
Empowering job seekers with effective self-representation