The ‘Oxford collaboration on the responsible use of (generative) AI’ (‘the Oxford collaboration’) refers to a cross-care community collaboration that began in February 2024, following a roundtable event held at the University of Oxford to discuss the implications of generative AI for care and the publication of a shared statement. The statement was endorsed by over 30 organisations.
The aim of the collaboration has been:
- to co-produce an understanding of what the ‘responsible use of (generative) AI’ in adult social care means; and
- to address a lack of official guidance and support for people with lived experience, care providers and frontline care workers in this area.
We recognise that this work was only the beginning. At our AI in Social Care Summit on 27 March 2025, we announced plans for a new alliance on AI in social care. Further details will be published shortly; in the meantime, you can sign up to join the alliance when you sign the Call to Action or Tech Suppliers Pledge. You can also sign up to the Digital Care Hub newsletter for updates, and follow #AIinSocialCare on social media.
Background
Further information on the background and value-led approach taken by the collaboration is available in this White Paper.
The collaboration grew beyond the initial signatories of the statement to include more than 70 organisations and individuals: people who draw on care and support, family caregivers, care providers (domiciliary care, residential care and supported living), care workers, social workers, tech suppliers, academics, policymakers in local and national government, and wider civil society, mainly from England but also from Scotland and Wales.
We co-produced a value-led approach to the responsible use of (generative) AI in social care, with a focus on what care can do for people and on fundamental values of care. We published guidance and a call for action to define future steps. All resources are available on the Digital Care Hub website.
Governance and outputs
The collaboration was governed by a steering committee with representatives from across the care community, together with working group leads who volunteered to take forward five working groups. The working groups enabled individuals with shared professional backgrounds to convene, explore what generative AI means for their work and lives, and identify the outputs they would like to produce.
The outputs of all working groups were then reviewed by an overarching co-production working group, which scrutinised and deliberated on the overall outcomes of the collaboration. We also convened a deliberation for the entire collaboration at the University of Oxford to decide on our definition of ‘the responsible use of (generative) AI in social care’, our guidance and our call for action.
Steering Committee
The purpose of the Steering Group was to guide the co-production and consultation process towards creating an understanding of, and resources for, the ‘responsible use of generative AI in adult social care’, as set out in the shared Oxford statement on generative AI in adult social care. The Steering Group worked in parallel with the ‘Co-production group’, which consisted of a diverse membership of people drawing on and providing care services. Find out more, including the Steering Committee terms of reference.
Outputs:
- Oxford Statement on generative AI in adult social care
- Call to Action: Actions we want social care stakeholders – including national policymakers – to take to support ethical use of AI in social care.
Co-production
Our co-production group consisted of a diverse range of people with lived experience of social care, either drawing on care or providing care. The co-production had two workstreams: one led by Think Local Act Personal (TLAP), and the other led by Caroline Green of the Institute for Ethics in AI, who set up a local community working group in Catford, South London.
Find out more about our co-production work.
Output: Principles and priorities for the responsible use of Generative AI in care and support (TLAP website)
Working groups
Ethical Principles Working Group
This group was chaired by Dr Donald Macaskill, Scottish Care. Members of the group included representatives from: Scottish Care; National Care Forum; Skills for Care; Rowcroft Hospice; Digital Care Hub; ADASS; and independent contributors.
The purpose of this group was to develop a framework of ethical principles for anyone in adult social care to consider before implementing AI.
Output: Ethical principles for the use of AI in social care contexts
Care Provider Working Group
This group was chaired by the National Care Forum. Its aim was to work with adult social care provider organisations to develop guidance and support for care organisations planning to implement AI. The group held a series of virtual meetings to draft the guidance, which takes the form of ‘I’ and ‘We’ statements for care providers and other stakeholders to consider in relation to the use of AI.
Output: Guidance on AI: ‘I’ and ‘We’ statements
Technologists Working Group
The technologists working group was chaired by Daniel Casson, Casson Consulting. This group was for software suppliers working in the adult social care space. The group met virtually to discuss the risks associated with introducing AI in social care, and published the tech suppliers’ pledge, on which signatories will report annually to ensure they are upholding the promises made in the pledge.
To find out more about this group and their work, please contact [email protected].
Output: Tech suppliers’ pledge on AI in social care
Care Workers Working Group
This group was chaired by the Care Workers Charity. Following a roundtable in May 2024, the Care Workers Working Group published their own statement on the responsible use of AI in social care. The statement covers the wide range of staff working in adult social care.
Output: Care Workers Statement
Evaluation
Evaluation of AI governance in formal care (not yet published).