Work with us
We are currently recruiting postdoctoral researchers for two positions:
Intelligent systems adhering to human responsibility and normative constraints
As a result of rapid and impressive technological developments, autonomous intelligent systems and their applications have an increasingly visible impact on our society. Several ethical codes and normative frameworks have been proposed to support the development of ‘responsible AI’, usually based on principles such as beneficence, non-maleficence, justice, autonomy, and explainability. It is implicitly assumed that these principles can be translated into operational constraints for software and systems. However, such translation into concrete technical requirements is not a straightforward process: it can lead to conflicting requirements and demands deliberation among a number of stakeholders (developers, users, individuals affected by the system’s actions, legal experts, etc.). This postdoctoral research position will focus on narrowing the gap between the principles and practice of moral responsibility and adherence to (social and legal) norms for intelligent autonomous systems. Focusing on the question of how we can achieve meaningful human control over autonomous systems, different ethical frameworks and principles will be studied to understand the challenges and processes necessary to incorporate such concepts into computational models. This will lead to case studies of different design approaches that engage relevant stakeholders to capture adherence to social, legal, and ethical norms and, ultimately, human responsibility. Candidates with a PhD degree in the technical and/or social sciences are strongly encouraged to apply.
Understanding and predicting the performance of machine learning systems
For an autonomous intelligent system to be under meaningful human control, it has to be demonstrably and verifiably responsive to relevant human interventions on social, moral, or legal grounds. Machine learning algorithms may seem a natural candidate for achieving this responsiveness, given their efficiency and adaptability. However, due to training-data scarcity, biases in data sets, noisy human behaviour, and the difficulty of modelling social, moral, and legal norms, most of today’s machine learning algorithms are ill-suited to such complex situations. An important reason is that these algorithms are often unaware of their own limitations and reliability, and they lack the ability to properly capture human behaviour and values. This postdoctoral research position will focus on the research and development of machine learning algorithms that combine model-based (e.g., cognitive models) and data-driven approaches. The resulting AI solutions will have a higher degree of explainability and self-awareness than existing machine learning systems, and will fulfill the responsiveness requirement needed for autonomous systems to be under meaningful human control by reporting their own decision performance, reliability, dilemmas, and failures.
Please submit your research proposal, resume, and letter of motivation by email to Anne Jonker.
We accept open applications for postdoctoral and PhD positions. Potential applicants should have a demonstrable affinity for working in an engineering context and a keen interest in the societal impact of autonomous intelligent systems. Applicants for postdoctoral positions can send their research proposal, resume, and letter of motivation to Anne Jonker. For PhD positions, please send your application to the member of the AiTech Core Team best matching your research interests.
AiTech aims to build a community of pioneers committed to profoundly interdisciplinary research and societal impact in an engineering context. We seek collaborators with a ‘can-do’ attitude in the context of academic, industrial, governmental, or cross-border European projects.
We solicit concrete proposals for master’s thesis projects across all disciplines relevant to AiTech, and in particular proposals that align with current AiTech projects.