The responsible use of Generative AI in adult social care: A value-led approach – White Paper

This White Paper describes the Oxford collaboration on the responsible use of generative AI in adult social care.

It includes: background to the project, use cases of generative AI in social care, policy and regulation issues, the collaborative approach taken by the project, the definition of the responsible use of generative AI in adult social care, and plans for next steps.

Read the executive summary below, or download the full report.


Executive Summary

The ‘responsible use of (generative) AI in social care’ means that the use of AI systems in, or related to, the care of people supports, and does not undermine, harm or unfairly breach, fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing.

 (The Oxford collaboration’s value-led definition of the responsible use of (generative) AI in adult social care)

Adult social care supports individuals with disabilities, illnesses, or other needs, enabling them to live independently with dignity, choice, and respect for human rights. It includes practical assistance, such as personal care and food preparation, as well as emotional and social support. Care services are delivered by formal care providers in care homes, day centres, and individuals’ homes, and by informal carers, including family and friends. Central and local governments play crucial roles in shaping policy, providing financial support, and regulating care quality.

Artificial Intelligence (AI) encompasses technologies that replicate human cognitive functions such as learning and decision-making. Generative AI creates new content, including text, images, and videos, in response to prompts. Popular general-purpose AI tools such as ChatGPT and Microsoft Copilot, as well as social care-specific AI solutions, are transforming how care is planned and delivered.

Current use cases of generative AI in social care include:

  • Assisting in generating care plans, meeting notes, and activity schedules.
  • Supporting non-native English speakers in written communication.
  • Checking health symptoms for preliminary insights.
  • Managing administrative tasks like emails and letters.
  • Providing AI-powered chatbot support for mental health and well-being.

While generative AI offers potential benefits to people in social care, it also raises many ethical considerations and risks, stemming both from the technology’s limitations and from how people may use generative AI in social care.

The Oxford collaboration on the responsible use of generative AI in adult social care launched in February 2024 at the University of Oxford, with the aim of defining what the responsible use of generative AI in adult social care means and addressing the gaps in official guidance and support. The collaboration included over 70 individuals and organisations from across the care community, including people who draw on care and support, care workers, care providers, tech developers, academics, advocacy groups and policy makers.

We co-produced a value-led approach to the responsible use of (generative) AI in social care, with a focus on what care can do for people and on fundamental values of care. We published guidance and a call for action to define future steps.

All resources are available on the Digital Care Hub website.


Acknowledgements

This collaboration has included many individuals and organisations across the care community. We would like to thank all of them. A special thank you to the leads of our working groups: Dr Donald Macaskill, Dr Jane Townson OBE, Tyler Reinmund, John Boyle and Karolina Gerlich.

This white paper was authored by Dr Caroline Green, Katie Thorn, John Boyle, Kate Jopling and Daniel Casson. To cite, please use: Green, C., Thorn, K., Boyle, J., Jopling, K. and Casson, D. (2025) White paper of the Oxford Collaboration on the responsible use of generative AI in social care: A value-led approach.