Section 2: Ethics & Wellbeing

Bias, deepfakes, social chatbots, environmental impact, equity, and student safety.

These guiding statements address the ethical and wellbeing dimensions of AI in schools — issues that cut across curriculum, pastoral care, and community engagement.

Guiding Statements

2.1 Bias and Discrimination

AI models reflect the biases of their training data, which is disproportionately sourced from English-language, Western, and male-dominated internet content. The VINE School acknowledges this and takes active steps to identify and mitigate bias in AI outputs used in teaching, assessment, and school communications.

Where AI is used to generate or assess student-facing content, staff are required to review outputs for stereotyping, cultural insensitivity, and harmful representations. The school is particularly attentive to the impact of AI bias on Indigenous students, multilingual learners, and students from culturally diverse backgrounds.

2.2 Deepfakes and Synthetic Media

The creation and distribution of deepfakes — AI-generated images, audio, or video that depict real people in fabricated scenarios — is a serious and growing threat to student safety. The VINE School treats deepfakes as a wellbeing and safeguarding issue, not merely a technology concern.

Key legal context: the Criminal Code Amendment (Deepfake Sexual Material) Act 2024 makes it a criminal offence to create, distribute, or threaten to distribute sexually explicit deepfake material. Creating an intimate image of a person under 18 is illegal whether the material is unaltered or has been generated or altered using technology.

2.3 Social Chatbots and Emotional AI

AI-powered social chatbots and companion apps — designed to simulate friendship, romantic relationships, or emotional support — represent a distinct category of risk for young people. These tools are engineered to create emotional dependency through personalised, responsive interaction.

The VINE School does not approve the use of social chatbots or emotional AI companions by students during school time or on school devices. Education about the design, purpose, and risks of these tools is included in the school's wellbeing and digital citizenship programs.

2.4 Environmental Impact

AI systems consume significant energy and water resources in their training and operation. The VINE School acknowledges the environmental impact of AI and considers this as one factor — alongside educational value, privacy, and equity — when making decisions about AI tool adoption.

The school does not adopt a prohibitionist stance on environmental grounds but encourages thoughtful use: where a non-AI approach achieves the same educational outcome, it may be preferred.

2.5 Copyright and Intellectual Property

The legal position on AI-generated content and intellectual property is changing rapidly. The VINE School takes the following positions:

  • AI-generated content is not automatically the intellectual property of the person who prompted it
  • Staff using AI to generate school communications or teaching resources should be aware of the copyright status of AI outputs
  • Students submitting AI-generated work in creative disciplines must disclose AI use
  • The school monitors developments in Australian copyright law as they relate to AI-generated content

2.6 Equity of Access

Premium AI tools typically sit behind paid subscriptions, and students from less-resourced families may have reduced access to advanced AI capabilities. The VINE School is committed to ensuring that AI-related expectations in assessment and learning do not disadvantage students based on their access to technology outside of school.

Where AI is permitted or required in assessment tasks, the school provides equitable access to appropriate tools during class time.

2.7 Student Wellbeing and AI

The VINE School considers the wellbeing implications of increased AI use, particularly for younger students, as part of its broader approach to digital wellbeing. This includes attention to screen time, the quality of AI-mediated interactions, and the potential for AI to displace activities that support student development, including unstructured play, face-to-face conversation, and independent creative work.

Key Roles, Key Questions

Role | Key Questions | Guidance
Head of Student Wellbeing | How do we protect students from deepfakes and AI-generated harm? What are the warning signs of problematic social chatbot use? | 2.2, 2.3, 2.7
Director of Marketing & Comms | Can we use AI-generated images and text in school communications? What requires disclosure? | 2.5
Teachers | How do I teach students to think critically about AI without being preachy? How do I handle a deepfake incident in my class? | 2.1, 2.2, 2.3
Students | What happens if someone makes a deepfake of me? Who do I tell? Is it okay to use AI for creative work? | 2.2, 2.5
Parents | Is my child safe? What is the school actually doing about deepfakes and social chatbots? | 2.2, 2.3, 2.7
Board Directors | Are we treating AI-related safeguarding with the same rigour as other child protection obligations? | 2.2, 2.3, 2.7