AI Policy
Public Summary | Version 1 | May 2025
Purpose
Skill Path is a human-centred, AI-native organisation. This policy outlines how Skill Path uses artificial intelligence (AI), including generative and agentic systems, to support its mission: enabling refugees to access education, training, and professional licensing in Australia. It ensures all AI use is ethical, transparent, empowering, and consistent with Skill Path’s legal obligations, operating principles, and commitment to user trust. It also reflects Skill Path’s broader commitment to ethical innovation and social impact.
Why we use AI
Skill Path adopts AI to:
Empower refugees by removing information bottlenecks and helping them solve their own educational, professional and licensing challenges.
Improve responsiveness by automating routine enquiries and freeing staff to focus on strategic and complex support.
Maximise impact by using limited nonprofit resources efficiently to deliver high-quality services at scale.
Scope
This policy applies to all AI systems developed, used, or integrated by Skill Path. This includes internal and public-facing tools approved for organisational use, as well as AI tools accessed via personal accounts and used occasionally for work purposes.
International alignment
Skill Path’s approach to AI is informed by global ethical guidance, including:
Australia’s AI Ethics Principles: promoting fairness, transparency, privacy protection, and accountability in AI systems.
UNESCO Recommendation on the Ethics of AI: centring human rights, data sovereignty, and democratic accountability.
These frameworks reinforce our commitment to deploying AI responsibly, particularly in contexts affecting displaced and marginalised populations.
AI use principles
Human-centred and empowering
AI is used to support refugee agency, access to information, and achievement of education and professional goals. AI supports but does not replace human decision-making. Users stay in control of what they share and can request human help at any time.
Transparent and accountable
Users are informed when AI is used. Disclaimers and opt-in consent are standard. Skill Path applies a human-in-the-loop (HITL) approach for high-risk tools, ensuring that human oversight is built into systems that affect eligibility, user data, or service access. All complaints are investigated and addressed through our complaints policy.
Ethical and inclusive
AI tools are tested with refugee users. We actively monitor and reduce bias.
Secure and private
We only use trusted platforms with enterprise-grade security for handling personal or sensitive data. No personal data is used for AI model training without consent. All tools must comply with our Privacy and Data Protection Policy.
How we ensure responsible AI use
AI tools are classified by risk level, as illustrated in the sketch after this list:
Low: Admin and brainstorming tools (no sensitive data)
Medium: Internal tools that support decision-making
High: Tools interacting with users or using personal data
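The following is a minimal, hypothetical sketch (in Python) of how this triage could be expressed; the field names and rules are illustrative only and do not describe Skill Path’s actual tooling or approval workflow.

from dataclasses import dataclass

@dataclass
class AITool:
    name: str
    handles_personal_data: bool   # stores or processes user or personal data
    user_facing: bool             # interacts directly with users
    supports_decisions: bool      # informs internal decision-making

def classify_risk(tool: AITool) -> str:
    # Map a tool's characteristics to the Low/Medium/High tiers above.
    # Illustrative only: real classification also involves human review.
    if tool.handles_personal_data or tool.user_facing:
        return "High"    # also requires TAG approval and human-in-the-loop oversight
    if tool.supports_decisions:
        return "Medium"  # internal decision support, reviewed by the CEO
    return "Low"         # admin and brainstorming tools with no sensitive data

# Example: a public-facing enquiry chatbot handling personal data is High risk.
print(classify_risk(AITool("enquiry-chatbot",
                           handles_personal_data=True,
                           user_facing=True,
                           supports_decisions=False)))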
Key measures for ensuring responsible AI use are:
Oversight: All tools are reviewed by Skill Path’s CEO. High-risk tools are also approved by the Technology Advisory Group (TAG).
Tool approval: Team members must seek approval before using any new AI tool.
Monitoring: All tools are monitored continuously for fairness, accuracy, and potential harm.
Redress: Users can contest AI outputs and request review or deletion of their data.
Acceptable use of AI
Approved AI uses include:
Discrete tasking: Summarising documents, taking notes in meetings, distilling knowledge, and conducting basic research.
Co-piloting: Co-creating documents, communication materials, program strategies, and plans; conducting multi-step deep research; analysing structured data for operational insights; and producing content or artefacts, including images and video.
Delegation: Handling operational or service-related tasks such as responding to user enquiries (with supervision and disclosure), generating reports, handling travel plans and bookings, or drafting administrative communications.
Translation: Using AI tools to translate written content for internal or external use, with human review to ensure accuracy and cultural relevance.
Prototyping and product development: Designing and building new tools or products using AI functions, with appropriate safeguards in place.
AI must not be used to:
Make final decisions affecting individuals without human oversight
Input confidential information or user data into personal or unapproved AI tools
Generate content for external use without human review and fact-checking
Feedback and redress
If you have concerns about an AI tool or interaction, you can lodge a complaint via:
👉 https://www.skillpath.org.au/complaints