Skill Path and Ethical AI
Building a startup nonprofit in the middle of a technological revolution has been one of the most exciting parts of founding Skill Path. It also brings a deep responsibility to use this technology ethically and in a human-centred way.
We’re in a moment where AI is radically accelerating efficiency, and that’s a game-changer for organisations like ours. Nonprofits doing social impact work are chronically (and increasingly) underfunded. We need to maximise the impact we can achieve with limited resources, and AI now makes that possible in ways that felt impossible just 12 months ago.
Balancing AI’s promise and risk
That promise comes with an enormous responsibility. Deploying AI in an ethical and human-centred way is a constant preoccupation of mine, and one that Skill Path has now formalised through our new AI Policy. The policy is overseen by our Technology Advisory Group, chaired by technology and product leader Joydip Das. This group ensures that we assess risk carefully, build with transparency, and stay grounded in the values that matter most, especially when working with displaced communities.
How AI powers our mission
Skill Path uses AI in almost everything we do. We are a human-centred, AI-native organisation, because I believe this is the best way we can deliver transformative outcomes for refugees at this time of technological revolution.
We use AI to empower refugees with the information they need, free up our team for deeper support, and stretch limited resources to deliver high-impact services at scale.
AI supports us with discrete tasks like summarising documents, taking meeting notes, and handling translation and transcription. We’ve built internal automated workflows using AI tools, and we co-pilot with AI for strategic planning, systems integration and workflow design, content creation, and deep research. AI also helps us analyse data to make smarter, faster decisions.
When you reach out to Skill Path, our chatbot provides the first line of support, although we monitor chat logs and you can always ask to speak to a member of our team. We’re also prototyping refugee-focused AI tools, which we’ll announce when they’re ready.
There’s virtually no part of our operations that isn’t enhanced by AI. But crucially, our commitment to being human-centred and ethically grounded remains at the core of everything.
Our AI principles
Our use of AI is guided by four core principles that ensure our tools work for people, not the other way around.
1. Human-centred and empowering
AI is used to support refugee agency, access to information, and the achievement of educational and professional goals. It supports, but does not replace, human decision-making. Users stay in control of what they share and can request human help at any time.
2. Transparent and accountable
Users are informed when AI is used. Disclaimers and opt-in consent are standard. Skill Path applies a human-in-the-loop (HITL) approach for high-risk tools, ensuring that human oversight is built into systems that affect eligibility, user data, or service access. All complaints received through our complaints process are investigated and addressed.
3. Ethical and inclusive
AI tools are tested with refugee users to ensure relevance and accessibility. We actively monitor for bias and take steps to reduce it.
4. Secure and private
We only use trusted platforms with enterprise-grade security for handling personal or sensitive information. We do not allow user data to be used for model training without the user’s express consent, and all tools comply with our Privacy and Data Protection Policy.
AI out in the open
We’ve published our AI Policy Public Summary to share our approach openly and invite others to learn alongside us. Whether you’re a refugee, a partner, a policymaker, or someone building AI systems, we hope this gives you insight into what we’re building, and how we are working to build it responsibly.