In the age of big data, companies and governments increasingly use algorithms to inform hiring decisions, employee management, policing, credit scoring, insurance pricing, and many other aspects of our lives. These systems can help us make evidence-driven, efficient decisions, but they can also confront us with unjustified, discriminatory decisions that are wrongly assumed to be accurate simply because they are made automatically and quantitatively. In this project, we apply the framework of Design for Values, drawing on the methodologies of Value Sensitive Design and Participatory Design, to put fundamental human rights at the forefront of the design process. By involving relevant stakeholders early on, we aim for a socially aware, structured, and transparent integration of AI into our society.

For an in-depth look, check out our paper:
Aizenberg E, van den Hoven J (2020) 'Designing for Human Rights in AI'. Big Data & Society 7(2).

Designing in practice:
Empowering job seekers with effective self-representation