ChatGPT: a risky advisor when it comes to safety-related information

News - 02 March 2023 - Webredactie

An international consortium led by TU Delft explored ChatGPT's capabilities in providing safety-related advice. Among its members are TU Delft researchers Oscar Oviedo-Trespalacios, Helma Torkamaan, Steffen Steinert, and Genserik Reniers. They found that relying on ChatGPT as a source of safety-related information and advice is risky. The results are available in the preprint ‘The Risks of Using ChatGPT to Obtain Common Safety-Related Information and Advice’ on SSRN.

‘Given public access to ChatGPT, people will sooner or later use it to access safety-critical information, regardless of the intended uses or system designers’ intentions’, says Oscar Oviedo-Trespalacios. The multidisciplinary consortium of experts was formed to analyse nine cases across different safety domains: avoiding phone use while driving, supervising children near water, following crowd management guidelines, preventing falls, being aware of air pollution while exercising, supporting distressed colleagues, managing job demands to prevent burnout, protecting personal data in fitness apps, and avoiding operating heavy machinery when fatigued. The experts concluded that using ChatGPT as a source of information and advice on safety-related issues carries significant risks. ChatGPT made incorrect or potentially harmful statements and emphasized individual responsibility, potentially leading to an ecological fallacy: assuming that what is true for a population is also true for its individual members.

Phone use and driving

Experts asked ChatGPT about safe phone use while driving. While ChatGPT highlighted the risks of distracted driving, it did not provide a complete warning: it focused on texting, but using maps or searching for music can also be dangerous. ‘The key message to convey is that taking your eyes off the road when in control of a vehicle is dangerous. Therefore, it's important to avoid any phone activity that requires visual attention, not just texting’, explains Oviedo-Trespalacios.

Beyond driving, ChatGPT may sometimes provide information on crowd and drowning safety that fails to account for individual contexts and could lead to inappropriate behaviour, increasing the risk of harm. Additionally, the data used by ChatGPT may be limited to high-income countries, raising questions about the validity of its advice in low- and middle-income countries.

ChatGPT is a highly advanced AI language model that has gained widespread popularity. It is trained to understand and generate human-like text and can be used in various applications, including automated customer service, chatbots, and content generation. While it has the potential to offer many benefits, there are also concerns about its potential for misuse, particularly with regard to providing inappropriate or harmful safety-related information. The study highlights the need for caution and expert verification when using ChatGPT for safety-related information, as well as for ethical considerations and safeguards to ensure users understand its limitations and receive appropriate advice, especially in low- and middle-income countries. Oviedo-Trespalacios: ‘The results of this investigation serve as a reminder that while AI technology continues to advance, caution must be exercised to ensure that its applications do not pose a threat to public safety.’