AI tools in projects and assignments
The TU Delft Assessment Taskforce has created practical guidelines on how lecturers can deal with the influence of AI chatbots on unsupervised assessments and what mitigating measures they can take.
The possibilities of AI chatbots in assessment require further investigation. A group of AI experts is conducting a risk analysis of the impact of AI chatbots on assessment (Special Interest Group AI & Assessment, SIG AI&A). We will update this page if their findings require additions to the advice below. This page was last updated on 21 April 2023, based on input from the SIG AI&A.
About AI tools
AI tools like the chatbot ‘ChatGPT’ can produce convincing answers to certain questions, depending on the nature of the question. However, their output is not always reliable: outputs can contain convincingly presented factual errors (so-called hallucinations). Furthermore, their training data can be outdated; for example, ChatGPT currently uses training data up to September 2021. It is also important to note that most chatbots do not list their sources (Bing Chat does, though).
On the positive side, chatbots can help with:
1. Checking grammar, spelling, and references in a text
2. Generating ideas by listing information from different sources in an accessible way
3. Giving feedback
4. Summarising or expanding texts and concepts
5. Coding in a wide variety of computer languages
Use by lecturers for assessments: AI chatbots can help lecturers create assessments (including different versions of an assignment), answer models, and rubrics.
The following assumptions have been used as the basis of the practical guidelines in the how-to section:
1. Students’ use
We are assuming that AI tools are used by our students and graduates, especially because these services are currently free of charge.
2. Quality requirements for assessment
This is how the quality requirements for assessment might be affected by AI tools:
a. Validity:
i. Allowing the use of AI tools while not changing the learning objectives and assessment criteria will diminish validity: the ability to use AI tools will influence the grade, probably even more than the level of mastery of the learning objectives without the use of AI tools.
ii. Allowing AI tools and changing the assessment criteria (or their weight), but not the learning objectives, will also diminish validity.
b. Reliability: If students use AI tools while the teaching staff does not anticipate this, AI tools will increase students' grades.
c. Feasibility for students: Mitigating measures could increase the number of assessments and therefore increase the study load.
d. Feasibility for teaching staff: Extra assessments (see point c) will also increase the workload for teaching staff.
e. Transparency: Teaching staff may forget to communicate some of their changed expectations to students.
3. AI chatbot detection
The reliability of AI chatbot detectors is currently unknown.
4. Definition of fraud
The definition of fraud is (source: model R&G, article 7.1):
“Fraud is taken to mean any act or omission by a student that makes it fully or partially impossible to properly assess the knowledge, insight and skill of that student or another student.”
The use of AI tools can lead to this.
5. Attribution:
The use of AI chatbots (and of tools in general) should be acknowledged and properly referenced: this preserves the distinction between the students' original ideas and those provided by AI, and makes it possible to check whether the student critically checked the output of the (chat)bot (or other tools). This can be done by indicating the purpose for which the AI tool was used and listing the (successful) prompts that the students used.
6. Accessibility
At the moment, most chatbots are still free of charge, which makes the threshold for students to use them relatively low. In the (near) future, it is likely that users will need to pay a fee. This could mean that higher education institutions need to provide access to these chatbots when they are actively used in our education, so that all students have equal access to these types of tooling.
7. Security & privacy
AI tools depend on user input to train future versions. This can have consequences for the privacy and intellectual property of information that is fed to the (chat)bot during its use. Almost every online tool requires the use of personal data.
ChatGPT-specific information:
OpenAI is the company that offers the ChatGPT service. Users need to sign up for an account in order to use chat.openai.com. Besides your user account information, OpenAI also processes the following personal information:
a. User Content: when you use ChatGPT, OpenAI will collect personal information that is included in the input, file uploads, or feedback that you provide to ChatGPT during interactions.
b. IP-Addresses
c. Browser user agents
d. Operating System and device information
e. Cookies
f. Tracking Identifiers
It is evident that OpenAI uses technology that requires sharing personal data with third parties. That data is stored on servers in the USA. At this moment it is unclear with whom personal data is shared and which parties are responsible for the use and protection of your personal data. The security and privacy team will update this page when new information becomes available.
8. Ethical issues
In addition, ethical questions are arising regarding the current and future influence of AI chatbots on truth finding and society as a whole, as well as regarding the power of their owners (big tech companies).
9. Rapid evolution of AI tools
Many of the current shortcomings will be (partially) solved in the next versions of AI tools. Therefore, it is important to focus on the possibilities and not so much on the current shortcomings, since the latter keep changing. However, we should also consider the more static risks of these technologies, which are unlikely to change. In other words, we should distinguish between shortcomings and risks.
How to assess in assignments and projects
Invigilated exams versus assignments and projects
During classical written exams and digital exams in the TUD secure exam environment, students do not have access to the internet, and therefore chatbots cannot be accessed by your students. The same holds for oral exams that are held on campus.
On the other hand, if students work on assignments (exam-like or other) outside an exam hall and without invigilators (Dutch: surveillanten), the use of AI tools cannot be prevented.
Advice for fraud prevention in (non-invigilated) assignments and projects
1. Feed your assignments to chatbots and study their output. How would you assess the output using your answer model or rubric? How does it differ from student answers? You can use this information to get a feeling for whether or not students used AI chatbots in their work.
Discuss with your students the possibilities and limitations of using AI bots in unsupervised assignments. Train students not to trust the answers of AI chatbots, even for questions that are not too difficult and require mostly factual knowledge. Students need to internalize that they must double-check all output of AI chatbots, to prevent them from learning incorrect facts and reasoning.
2. TU Delft recognises the value of AI tools, but sharing data is never without risk. If you choose to use AI tools like ChatGPT or AI plugins, we recommend you take the following precautions to heart:
- Reveal nothing: Do not share any personal data, internal information or (highly) confidential information during your interactions with the AI tool.
- Private/incognito window: Use ChatGPT while browsing in a private or incognito window.
- VPN connection: Use a VPN connection while interacting with the AI tool.
- Password: Currently, AI tools do not offer Single Sign-On, so account and password management is up to the individual user. Do not reuse passwords; preferably use a password manager or another safe way to create passwords, and change your password regularly.
- AI plugin awareness: The new plugin functionality of ChatGPT makes it possible to include external sources, which makes it easier to share data. Keep in mind that the above recommendations also apply to these plugins.
3. If you consider the use of AI bots in your course detrimental to achieving the learning objectives, clearly state your reasons to your students. Make sure they understand that, if they rely on these chatbots now, they will fall short in the summative assessment, where they will not have access to them. Refer students back to the definition of fraud and our TU Delft code of conduct.
4. Inform your students on how you expect them to correctly attribute the use of AI tools, and have them reflect on that use.
Examples:
a. Attach prompts and output: Attach the chatbot conversations to the documents, or at least the (successful) prompts that they used.
b. Reflection: Have them write a short section on how they used chatbots, in what ways it was and was not helpful, and what they learned.
c. Coding: Give instructions on using AI tools for developing software code, and on how to acknowledge their use (see the sketch below).
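As an illustration, a minimal sketch of what such an acknowledgement could look like in Python. The header format and the function shown are our own suggestion, not a prescribed TU Delft convention:

# stats_helpers.py
# AI-use acknowledgement (hypothetical course convention):
#   Tool:    ChatGPT (GPT-3.5), accessed April 2023
#   Purpose: generated a first draft of mean_and_variance below
#   Prompt:  "Write a Python function that returns the mean and sample
#             variance of a list of floats without using numpy."
#   Checked: I reviewed the output and corrected the variance to divide
#            by n - 1 (sample variance) instead of n.

def mean_and_variance(values: list[float]) -> tuple[float, float]:
    """Return the sample mean and sample variance of values."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((x - mean) ** 2 for x in values) / (n - 1)
    return mean, variance

Whatever format you choose, asking for the tool, the purpose, the prompt, and the student's own check keeps the attribution verifiable.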
5. Monitor progress and feedback during the course:
- Have sufficient feedback moments during the course and ask students to reflect on how they processed the feedback. If possible, do this in a discussion.
- Regularly check the progress of individual students during their projects/assignments (if feasible). This is also good for learning and confidence building if you turn it into a supervision/feedback moment. Check during the creation of the deliverable (of a project/assignment) whether all students are contributing and learning, for example through a brief oral discussion of a product they are working on (e.g. after finishing a specific (small) step in a project/code assignment).
6. Add elements that make it hard or impossible to copy information into an AI chatbot and have it rewrite the text (i.e. ‘disguise’ it as original information). Currently, ChatGPT 3.5 only uses sources up to September 2021, which means that students would need to feed the chatbot any newer information themselves; this makes the use of chatbots more cumbersome.
Examples:
a. Ask students to use recent articles as references and check these references. The latter is quite time-consuming, though.
b. Cross-check references across students: if they all use the same references, they may have rewritten each other's homework (with the help of AI chatbots) and disguised it as their own work. A sketch of such a cross-check follows after this list.
c. Ask for a personal touch to an essay.
d. Provide and ask for multimodal input as part of the assignment (drawings, graphs, and schematic representations). However, AI tools are increasingly good at extracting information from drawings, graphs, and schematic representations; ChatGPT-4, for example, accepts images as well as text. If you ask your students for multimodal information in their work, be aware that students can produce it using AI tools: text-to-image generators like DALL-E can create pictures, and ChatGPT can do statistical analyses and visualize data in plots.
Warning: adding multimodal information might decrease the accessibility of your assignment for students with special needs.
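A minimal sketch of what an automated cross-check (example b above) could look like. The script, the similarity threshold, and the normalised reference strings are our own assumptions, not an existing TU Delft tool:

# reference_overlap.py
# Hypothetical sketch: flag pairs of submissions whose reference lists
# overlap suspiciously. Assumes each submission's references were already
# extracted into a list of normalised strings (e.g. DOIs).
from itertools import combinations

def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity of two reference sets (1.0 = identical)."""
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def flag_similar(refs: dict[str, list[str]], threshold: float = 0.8):
    """Yield pairs of students whose reference lists overlap above the threshold."""
    for (s1, r1), (s2, r2) in combinations(refs.items(), 2):
        score = jaccard(set(r1), set(r2))
        if score >= threshold:
            yield s1, s2, score

if __name__ == "__main__":
    submissions = {
        "student_a": ["10.1000/x1", "10.1000/x2", "10.1000/x3"],
        "student_b": ["10.1000/x1", "10.1000/x2", "10.1000/x3"],
        "student_c": ["10.1000/y9"],
    }
    for s1, s2, score in flag_similar(submissions):
        print(f"{s1} and {s2}: reference overlap {score:.2f}")

An overlap score near 1.0 does not prove fraud; treat it only as a signal to look at the submissions more closely.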
7. Shift assessment criteria towards the process instead of the deliverable. Make sure that the assessment is valid and transparent for students. Version control can support this: track the progress of students through version control in e.g. Word or Gitlab, and check whether they are processing feedback proactively (a small sketch follows below).
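For Gitlab-based project work, a small Python sketch of how a lecturer might get a quick overview of commit activity in a local clone of a student repository. The script and its use of a local clone are our own illustration:

# inspect_progress.py
# Hypothetical helper: count commits per author in a local clone of a
# student's Gitlab repository, as a rough indicator of steady progress.
import subprocess

def commits_per_author(repo_path: str) -> dict[str, int]:
    """Return a mapping from author name to number of commits."""
    log = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    )
    counts: dict[str, int] = {}
    for author in log.stdout.splitlines():
        counts[author] = counts.get(author, 0) + 1
    return counts

if __name__ == "__main__":
    for author, n in commits_per_author(".").items():
        print(f"{author}: {n} commits")

Commit counts say nothing about quality; combine them with the supervision and feedback moments described above.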
8. Take fraud detection measures and report suspicions of fraud to your Board of Examiners:
- Oral authenticity check: Do an oral authenticity check (see 4a above) to check whether it is likely that the student produced the text themselves. This should either be a random check or be based on justifiable parameters, to prevent bias.
- Use an AI detector, like GPTZero, to investigate the probability that an AI chatbot was used. Significantly high(er) scores should be reported to the Board of Examiners as possible fraud (if the use of AI chatbots was not properly attributed). Although the detectors do not detect all AI use, the number of false positives is quite low, according to this article.
- Check the transfer of skills & knowledge: Consider adding a written exam to a project in which students have to answer questions on a case that is similar to their project. That way, you can test the student's ability to transfer their knowledge to another situation. Additionally, this aids the retention of knowledge & skills, especially if you discuss the exam in class. Carefully consider the timing and the weight of the exam (consider making the exam pass-fail) to prevent students from focussing on the exam instead of on their project. Adding assessments adds to the study load and is not permitted during the academic year without the permission of the Board of Examiners (and the programme director).
9. Rethink your course's assessment plan. If necessary, adjust the learning objectives. This doesn't necessarily mean that the taxonomy level should be increased, since that could increase the difficulty and study load of the course. Consider the relation to other courses in your decision.
Keep in mind that these changes require a study-guide adjustment and will therefore have to be approved by your programme director, the Board of Studies and the Faculty Student Council before the start of the academic year. Changes during the academic year can only occur in very special circumstances, after approval by the Board of Examiners (see here).
Need support?
Get in touch with us! We are happy to help.
+31 (0)15 27 84 333