Design at Scale Lab

Human-AI Collaboration in Design for Social Good

We can harness unprecedented amounts of data using AI, creating opportunities to tackle major societal problems in areas such as health, well-being, and mobility. To make AI useful, we need to find new ways to combine the creative power of humans with the analytical capabilities of computers. That’s why the Design at Scale Lab (D@S Lab) is developing new methods for ‘Hybrid Intelligence’ (HI) – the combination of artificial and human intelligence.

A key challenge lies in finding out how to help designers, experts and societal stakeholders work together with AI to prepare, realise and evaluate design interventions. Our aim is to reduce design complexity for large-scale social interventions. D@S Lab research will focus on orchestrating large-scale design activities involving people, data, and machines. We will collect data on effective design interventions and investigate how to predict their impact.

Our research will establish new methods for integrating Participatory Design, Crowd Computing and AI, ultimately enabling designers to better address complex social problems.

The D@S Lab is part of the TU Delft AI Labs programme.


The combination of AI and AR holds significant potential for creating intelligent, personalised experiences, enhancing the design process, and enabling new possibilities in entertainment, education, retail, architecture, and more.


The team

Directors

PhD candidates

Associated faculty

Education

Courses

Master Projects

Openings

  1. Designing a game with a purpose (GWAP) for requirements elicitation and to harness contextual knowledge in a given domain
  2. Human-AI collaboration for scalable ethnography
  3. Understanding the role of social influence in Human-AI Interaction
  4. In-context scene understanding through entity saliency modeling
  5. Supporting personalized conversation at scale with NLP
  6. Application: Large-scale well-being assessment through hybrid intelligence
  7. Conversational Style Alignment to Foster Trust in Human-AI Interactions
  8. Building Health Interventions for Workers in Crowdsourcing Marketplaces
  9. Crowdsourced Explanations for Intelligibility in Human-AI Interaction
  10. Building Expressive Conversational Interfaces Using Phonetic Spellings, Creative Punctuation and Emoticons
  11. Benchmarking Interpretability and Diagnosis Methods for Machine Learning
  12. Bias-aware Active Learning with Humans in the Loop
  13. Bridging the Gap Between BERT and Knowledge Bases
  14. Interpreting BERT Using the Right Terms
  15. Debugging Deep Learning Models on Embedded Devices


Ongoing

  1. Multimodal Explanations in Credibility Assessment Systems [Vincent Robbemond - supervised by Ujwal Gadiraju]
  2. Understanding Factors that Shape First Impressions in Human-AI Interaction [Ana Semrov - supervised by Ujwal Gadiraju]
  3. Building Interactive Text-to-SQL Systems [Reinier Koops - supervised by Ujwal Gadiraju]
  4. (What) Did the machine learn? Evaluating the accuracy and precision of computational text analysis in classification and clustering of COVID-19 policy responses [Ye Yuan - supervised by Ujwal Gadiraju & Jie Yang]
  5. Understanding Factors that Influence Trust Formation in Conversational Human-Agent Interaction [Ji-Youn Jung - supervised by Ujwal Gadiraju & Dave Murray-Rust]
  6. One Step Ahead: A weakly­-supervised, adversarial approach to training robust, privacy­-preserving machine learning models for transaction monitoring [Daan van der Werf - supervised by Jie Yang, conducted in Bunq]
  7. Exploring the Role of Domain Experts in Characterizing and Mitigating Machine Learning Errors [Pavel Hoogland - supervised by Jie Yang and Oana Inel, conducted in ILT, Ministry of Infrastructure and Water Management]
  8. Characterising AI Weakness in Detecting Personal Data from Images By Crowds [Ashay Somai - supervised by Jie Yang and Agathe Balayn]
  9. An Agent-based Opinion Dynamics Model with a Language Model-based Belief System [Django Beek - supervised by Jie Yang and Sergio Grammatico]
  10. Declarative Image Generation from Natural Language [Anitej Palakodeti - supervised by Jie Yang and Asterios Katsifodimos]
  11. Philosophy-grounded Machine Learning Interpretability [Shreyan Biswas - supervised by Jie Yang and Stefan Buijsman]


Finished

  1. Effects of Time Constraints and Search Results Presentation on Web Search [Mike Beijen - supervised by Ujwal Gadiraju]
  2. Impact of Biased Search Results on User Engagement in Web Search [Wessel Turk - supervised by Ujwal Gadiraju]
  3. Clustering Small and Medium Sized Dutch Enterprises Using Hybrid Intelligence [Shipra Sharma - supervised by Jie Yang]
  4. RACE:GP – a Generic Approach to Automatically Creating and Evaluating Hybrid Recommender Systems [Arjo van Ramshorst - supervised by Jie Yang and Neil Yorke-Smith]