Teaching students to use AI responsibly
AI tools are becoming part of everyday life, so it’s vital that we as academics guide our students to use AI both ethically and effectively.
We need to:
- foster a culture of trust in the classroom by being transparent about our use of AI, and provide clarity about our expectations for students’ use
- model AI practices in our teaching, showing students how AI can be used responsibly.
By demonstrating responsible AI use and setting clear guidelines, we can foster ethical habits that will benefit students now and in the future.
Ethical considerations
UQ students who participated in the Student perspectives on AI in higher education project raised valuable ethical questions.
Students identified challenges around:
- trust, ethics and integrity
- human-centred and equitable learning
- responsible use and future preparedness
- Indigenous data and AI
A checklist for ethical AI use (PDF, 134.68 KB) was inspired by UQ students’ reflections during focus groups and highlights the importance of engaging ethically and responsibly with AI in learning. You can use this checklist with students to help them engage responsibly with AI.
Further resources to explore
- AI legal, ethical and social issues in the Artificial Intelligence Digital Essentials module.
- Understand how Indigenous writers from around the world, including Australia, are thinking about AI by reading the Indigenous AI Position Paper.
- The UNESCO Recommendation on the Ethics of Artificial Intelligence offers a more global understanding of AI’s impact.
- Specifically for students, the Student Guide to AI Literacy was developed by writing experts and embeds ethical considerations into an AI literacy “how to” guide format.
Resources: Discussing AI ethics in your classes
It's critical that we help our students understand how to use AI ethically and effectively. Our students have identified a need for us to do more to prepare them.
There are many ways we can incorporate discussions and consideration of the ethical use of AI in learning activities and assessment, including:
- reviewing articles, websites and media about AI ethics
- analysing and discussing cases of AI use and the ethical implications involved
- engaging in debates and dialogues about the ethics around AI
- asking students to identify and prioritise the ethical considerations around their AI use.
Example activities
Socratic dialogue: Ethical and effective use of AI in research essays
A Socratic dialogue involves the facilitation of open-ended inquiries and active listening, encouraging participants to analyse their own beliefs and arrive at deeper insights through reasoned discourse.
Statement for discussion
"A good way to use AI for my research essay is to get it to do the online searching and give me the most relevant results."
- Select a topic. Focus on the principles of transparency and accuracy.
- Initial question. Participants create open-ended questions in a shared space (e.g., Padlet, whiteboards). Examples:
- "What are the potential biases in using AI for research?"
- "How can we ensure the results provided by AI tools are accurate and reliable?"
- "What are the ethical concerns of relying on AI for academic tasks?"
- Begin dialogue. Choose one question to discuss, such as:
- "How can students critically assess the relevance and accuracy of AI-generated research results?" Encourage responses with further questions and counter-arguments:
- "What criteria should students use to judge AI-generated content?"
- "Could relying on AI discourage the development of critical thinking skills?"
- Socratic dialogue. Engage deeply by:
- Questioning for clarity: "What do we mean by 'most relevant results'?"
- Counterarguments: "If AI is biased, would its 'relevant results' align with academic standards?"
- Challenging assumptions: "Are we assuming AI can effectively replace human judgment in research?"
- Reflective summary. Participants reflect on how their understanding has evolved:
- "Has your perspective on using AI for research changed? Why or why not?"
- "What new insights did you gain about AI’s capabilities and limitations in research?"
This structure fosters critical thinking and explores the ethical and practical implications of using AI in academic work.
Think-pair-share activity: Evaluating AI in academic tasks
Facilitators guide participants in collaborative thinking and active engagement, as participants individually reflect on a topic or question, pair up with others to discuss their thoughts, and then share their ideas with the larger group. This activity fosters deeper understanding and meaningful participation.
Instructions
1. Think (individual work)
Reflect on the following prompt individually:
- “There are a number of distinct sub-tasks needed to create this essay. Which parts can be aided by AI? Which should be aided by AI? Which should not? Why?”
- Write down your thoughts, considering specific examples of sub-tasks like research, drafting, editing, or citation formatting.
2. Pair (group discussion)
- Pair up with a partner.
- Review and discuss the sub-tasks involved in essay creation that are displayed for the group (e.g. research, brainstorming, writing, editing, citations).
- Together, go through each sub-task systematically, identifying:
- tasks where AI is helpful
- tasks where AI use might hinder learning or skill development
- tasks that should be completed without AI for ethical or academic reasons.
3. Share (partner discussion)
Share your reflections with your partner. Discuss:
- what you thought were effective uses of AI
- what you believed were not conducive to good learning
- questions that prompt your partner to clarify their perspective or provide additional examples.
4. Reporting (group sharing)
- Each pair shares a summary of their discussion with the class.
- Use a tally or voting tool to identify consensus points across all pairs:
- Which tasks are most suitable for AI assistance?
- Which tasks should remain free of AI involvement?
5. Community discussion (whole-class reflection)
Facilitate a group discussion around the following questions:
- Where were the points of agreement and disagreement?
- What ambiguities or uncertainties arose?
- Which scenarios seemed likely or unlikely?
- What skills were already known by some students but not by others?
- How did personal experiences shape differing perspectives on AI use?
This structured activity encourages thoughtful evaluation of AI’s role in academic tasks, fostering critical thinking, collaboration and inclusivity.
Six Hats
De Bono’s 6 Hats activity asks participants to wear different metaphorical hats representing different perspectives to systematically explore a topic or problem from multiple angles, fostering critical thinking, balanced decision-making and creative problem-solving.
Instruction activity example
This 6 Hats activity explores the ethical and practical implications of non-native Japanese-speaking students using AI language tools for writing.
Participants will analyse the scenario through 6 perspectives (hats) to foster critical thinking and collaborative problem-solving, culminating in group presentations and reflective discussions.
Scenario
Sam is going on exchange to Japan. Japanese is their second language and Sam sometimes uses an AI tool for writing. Should Sam use AI to help with writing, particularly in assessment?
Six Hats perspectives
- White Hat (facts and information): AI language models may reflect biases from their training data, prioritising correctness over individuality or cultural nuances.
- Red Hat (feelings and emotions): The student might feel frustrated, disheartened, or insecure about their language skills due to altered or erased authenticity.
- Black Hat (caution and risks): Over-reliance on AI could hinder natural language growth, cause miscommunication, and lead to unfair assessments.
- Yellow Hat (benefits and optimism): AI can support grammar and clarity, bridge resource gaps, and teach editing skills if used critically.
- Green Hat (new ideas): Developers could include tone customisation, cultural preferences and explanatory features to accommodate diverse linguistic styles.
- Blue Hat (process and management): Students and teachers can collaborate to establish ethical AI use guidelines, workshops and open discussions to preserve authenticity.
Community sharing
- Each team shares their reflections on their assigned hat, using specific examples to support their points.
- Identify commonalities and differences across perspectives, emphasising shared roles of students, teachers and developers in ethical AI use.
Reflective discussion
- Evaluate which perspectives were most important or surprising and why.
- Discuss benefits and challenges of integrating ethical AI practices, balancing AI use with learning development and linguistic diversity.
Resources: Helping students to use AI as a learning tool
"When using AI, you can let AI do all the thinking for you, which can suppress your ability to problem solve on your own."
- UQ student voice forum, July 2024
Learning requires time, effort, challenge and reflection. The Higher Education Learning Framework highlights:
- learning as becoming
- contextual learning
- emotions and learning
- interactive learning
- learning to learn and higher-order thinking
- learning challenges and difficulties
- deep and meaningful learning.
AI can affect all of these themes and principles for learning. Moreover, relying on AI for specific activities can undermine skills in that area; for example, writing skills can deteriorate if students rely on AI for this work rather than practising those skills.
These concerns about AI and learning are well founded. Research demonstrates that AI can be used to either help or hinder learning. For example, recent advances in AI have been valuable in feedback simulations for teacher education, as personal tutors for school students in Nigeria and Turkey, and for enhancing writing productivity. Alongside this, a range of projects demonstrate ways AI can be used to avoid or minimise learning.
For AI to work as a learning tool, our students need to do the work of learning. AI can provide feedback, offer encouragement and guide practice. The AI student hub has a range of examples of AI prompts for learning versus cheating, which can be useful for sharing and discussing with your students to highlight the boundaries of AI for learning.
Prompt collections
- The Artificial Intelligence Digital Essentials module has example prompts as part of UQ Library advice and support for students around the use of AI
- Prompts to support students following the steps for writing an assignment (as adapted by David Rowland)
- Prompts for students developed by Ethan Mollick
- How to use generative AI in education developed by students and staff at The University of Sydney
Examples of teaching students about AI limitations
AI tools have a range of limitations. These tools:
- may generate incorrect information
- may produce harmful instructions and/or biased content
- may keep, use or share the personal data you enter.
To develop ethical and effective uses of these tools, we need to teach our students about these limitations and the critical skills to use these tools well.
Example activities
- Model and practise the effective use of AI with prompts that produce correct outputs, incorrect outputs or biased content.
- Discuss cases where AI errors or bias have caused issues (e.g. Google's error when first promoting the Bard AI chatbot, and bias in AI use in recruitment).
- Ask students to consider the implications of sharing their personal data with AI tools.
Explore seminars, workshops and info sessions related to UQ's Lead through Learning strategy (2025-2027).