ELLIS Delft Research Theme - Verification & Safe and Responsible AI

Topics: verification, monitoring, anomaly detection, interpretable models

Machine learning (ML) has achieved superhuman performance in numerous applications. However, most existing ML techniques are domain-specific and their results are often not interpretable. For domains where correctness is critical, ensuring that ML provides worst-case guarantees remains an open problem. Formal methods, such as verification and monitoring, offer rich languages and algorithms for specifying and ensuring correctness, yet their application to systems with ML components has so far been explored only for a few specific problems. Verified and Responsible AI (VRAI) unites research efforts to make machine learning and formal methods speak a common language and to provide guarantees for ML-enabled systems during both their design and their deployment.

Theme Coordinator

Related AI Labs