The AI revolution in children's services - what to consider
Senior Researcher (Data Specialist)
AI is all around us. Whether you're avoiding a WhatsApp mistake with text prediction, sprucing up your Instagram with filters, or generating new wonders with ChatGPT and similar tools, it's certainly not going away, and it is playing a significant role not just in our lives but in the lives of children and families. So, what does this mean for AI in the youth sector? And what could it do for your work and organisation?
Dartington Service Design Lab has been at the forefront of progressive applications of research for more than fifty years, and we're intrigued by the advances in artificial intelligence (AI), which are already changing the way we develop, deliver and evaluate services for children, young people and families, bringing a wealth of opportunities and challenges to navigate.
AI can be applied in two distinct ways to our research and evaluation work:
In a supportive role to generate content and facilitate data collection and analysis
For project-specific analysis, developing AI models trained to evaluate data
We’re still learning how AI could benefit us and hope to integrate it more into our work. In this blog, Data Specialist Sean Manzi introduces ways AI could support the delivery and design of services across the sector.
The benefits of AI for children and young people’s services
One of AI's strengths is its speed and its ability to predict. This could be incredibly useful in the CYP sector: it has the potential to predict the service needs and outcomes of children, young people and families, informing the redesign and personalisation of services and systems of support. It could also improve the speed of analysis and the insight gained from available data. This is an exciting prospect across the evaluation process, from piloting and service development all the way through to evaluation and service provision. Using AI in evaluation could enable prediction for early intervention and the provision of personalised support to improve outcomes across the sector.
Similarly, AI represents a tool for improvement. We know that children, young people, and families often experience systemic challenges and difficulties in their lives, but AI could provide us with powerful ways to determine changes to systems of support. This might be through identifying geographic areas with the greatest need, or scheduling appointments and resources on behalf of organisations and their users. It can also be used to identify common areas of need and differences in the support needs of different groups of service users, which is incredibly helpful for practitioners and those delivering much-needed services to children and families.
What to consider when evolving your AI practice
These advances in AI are full of potential, but they also come with challenges. We need an AI-informed and capable workforce that is confident in using AI, understanding its opportunities, and interpreting its outputs. The impact of AI in your organisation will be shaped by the values of those building and those using it; these values are what ensure accountability and responsibility when AI is applied in practice.
Underpinning this, we also need strong guiding ethical principles to ensure that our use of AI promotes equity, wellbeing and social justice. There are eight ethical principles we believe you should include when developing your own policy or applying AI in your work, and they are:
Fairness – AI is not biased
Improvement-focused – AI is used to produce positive change
Sustainability – AI can be continuously developed and re-used
Supportiveness – AI supports human decision-making rather than replacing it
Empowerment – AI is informed by, and understood by, those it seeks to help
Transparency – how the AI model was constructed, and how its outputs are derived, can be understood by all
Responsibility – AI is used for the public good
Accountability – AI and its use are replicable and can be interrogated
To ensure inclusivity and engagement across the sector, we need to make sure that AI is understandable and accessible to all. By using open and collaborative practices, we can help ensure that AI is used to enhance our society in an ethical and just way.
Getting AI ready
There are three areas where you can take action to make sure AI is used responsibly in your organisation:
Develop an AI policy. This policy will include a statement of values.
Improve your data. The better the data quality, the better the AI and the information supporting your decision-making. You can develop your data collection practices by: reducing missing data, making sure data are consistent and well structured, collecting data at appropriate regular time points, and aligning your data collection with a robust theory of change.
Create an AI use framework. This will describe where and how you will use AI in your organisation. A framework will also include standards for its use and information on how you will ensure data quality and ethical standards.
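As a small illustration of the "improve your data" step above, here is a minimal Python sketch of a data-quality check that counts missing values in a set of service records. The records and field names are entirely hypothetical, invented for this example; a real audit would run against your own data structures.

```python
from collections import Counter

# Hypothetical service records; field names are illustrative only.
records = [
    {"child_id": "A1", "age": 9, "referral_date": "2024-01-10", "outcome_score": 3},
    {"child_id": "A2", "age": None, "referral_date": "2024-01-12", "outcome_score": None},
    {"child_id": "A3", "age": 11, "referral_date": "", "outcome_score": 4},
]

def audit_missing(rows):
    """Count missing values (None or empty string) per field."""
    missing = Counter()
    for row in rows:
        for field, value in row.items():
            if value is None or value == "":
                missing[field] += 1
    return dict(missing)

print(audit_missing(records))
```

A simple report like this, run regularly, makes gaps in data collection visible early, which is the practical starting point for the consistency and completeness improvements described above.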
Learn more
We are starting to develop our own AI practices and policies, and we will continue to share our learning as our understanding of using AI grows. You can follow our progress here.
For more information and practical suggestions about applying AI in your work, check out Sean Manzi’s recent keynote at the 2024 Children and Young People Now Evaluation Evidence and Impact Conference. Here, we shared more on the benefits and limitations of AI in the children and young people’s sector. Slides and accompanying handout are available here.
Feel free to get in touch with questions about how you can use AI in your practice or to share your learning so far. You can also connect with us on Twitter and LinkedIn.