Design at Scale Lab
Human-AI Collaboration in Design for Social Good
We can harness unprecedented amounts of data using AI, creating opportunities to tackle major societal problems in areas such as health, well-being, and mobility. To make AI useful, we need to find new ways to combine the creative power of humans with the analytical capabilities of computers. That’s why the Design at Scale Lab (D@S Lab) is developing new methods for ‘Hybrid Intelligence’ (HI) – the combination of artificial and human intelligence.
A key challenge lies in finding out how to help designers, experts, and societal stakeholders work together with AI to prepare, realise, and evaluate design interventions. Our aim is to reduce design complexity for large-scale social interventions. D@S Lab research will focus on orchestrating large-scale design activities involving people, data, and machines. We will collect data on effective design interventions and investigate how to predict their impact.
Our research will establish new methods for integrating Participatory Design, Crowd Computing and AI, ultimately enabling designers to better address complex social problems.
The D@S Lab is part of the TU Delft AI Labs programme.
The team
Education
Courses
- CS4145: Crowd Computing
- CS3500: Human-Computer Interaction
- CSE1500: Web and Database Technology
- CS4305TU: Applied Machine Learning
- IN4325: Information Retrieval
- IN4252: Web Science & Engineering
- ID5417: Artificial Intelligence and Society
- ID5416: Machine Learning for Intelligent Products
- IOB6-E8: Design Analytics
- IOB4-T3: Machine Learning for Design
Master Projects
Openings
- Designing a game with a purpose (GWAP) for requirements elicitation and to harness contextual knowledge in a given domain
- Human-AI collaboration for scalable ethnography
- Understanding the role of social influence in Human-AI Interaction
- In-context scene understanding through entity saliency modeling
- Supporting personalized conversation at scale with NLP
- Application: Large-scale well-being assessment through hybrid intelligence
- Conversational Style Alignment to Foster Trust in Human-AI Interactions
- Building Health Interventions for Workers in Crowdsourcing Marketplaces
- Crowdsourced Explanations for Intelligibility in Human-AI Interaction
- Building Expressive Conversational Interfaces Using Phonetic Spellings, Creative Punctuation and Emoticons
- Benchmarking Interpretability and Diagnosis Methods for Machine Learning
- Bias-aware Active Learning with Humans in the Loop
- Bridging the Gap Between BERT and Knowledge Bases
- Interpreting BERT Using the Right Terms
- Debugging Deep Learning Models on Embedded Devices
Ongoing
- Multimodal Explanations in Credibility Assessment Systems [Vincent Robbemond - supervised by Ujwal Gadiraju]
- Understanding Factors that Shape First Impressions in Human-AI Interaction [Ana Semrov - supervised by Ujwal Gadiraju]
- Building Interactive Text-to-SQL Systems [Reinier Koops - supervised by Ujwal Gadiraju]
- (What) Did the machine learn? Evaluating the accuracy and precision of computational text analysis in classification and clustering of COVID-19 policy responses [Ye Yuan - supervised by Ujwal Gadiraju & Jie Yang]
- Understanding Factors that Influence Trust Formation in Conversational Human-Agent Interaction [Ji-Youn Jung - supervised by Ujwal Gadiraju & Dave Murray-Rust]
- One Step Ahead: A weakly-supervised, adversarial approach to training robust, privacy-preserving machine learning models for transaction monitoring [Daan van der Werf - supervised by Jie Yang, conducted in Bunq]
- Exploring the Role of Domain Experts in Characterizing and Mitigating Machine Learning Errors [Pavel Hoogland - supervised by Jie Yang and Oana Inel, conducted in ILT, Ministry of Infrastructure and Water Management]
- Characterising AI Weakness in Detecting Personal Data from Images By Crowds [Ashay Somai - supervised by Jie Yang and Agathe Balayn]
- An Agent-based Opinion Dynamics Model with a Language Model-based Belief System [Django Beek - supervised by Jie Yang and Sergio Grammatico]
- Declarative Image Generation from Natural Language [Anitej Palakodeti - supervised by Jie Yang and Asterios Katsifodimos]
- Philosophy-grounded Machine Learning Interpretability [Shreyan Biswas - supervised by Jie Yang and Stefan Buijsman]
Finished
- Effects of Time Constraints and Search Results Presentation on Web Search [Mike Beijen - supervised by Ujwal Gadiraju]
- Impact of Biased Search Results on User Engagement in Web Search [Wessel Turk - supervised by Ujwal Gadiraju]
- Clustering Small and Medium Sized Dutch Enterprises Using Hybrid Intelligence [Shipra Sharma - supervised by Jie Yang]
- RACE:GP – a Generic Approach to Automatically Creating and Evaluating Hybrid Recommender Systems [Arjo van Ramshorst - supervised by Jie Yang and Neil Yorke-Smith]