Ensuring the responsible use of Generative AI in social care: A collaborative call to action

We are a group of people who draw on care and support, people who provide unpaid care for friends or family members, care workers, care providers, strategic leaders in social care, providers and developers of technology, and others who have spent time, individually and collectively, considering how Generative AI can be used responsibly and ethically in social care.

The ‘responsible use of (generative) AI in social care’ means that the use of AI systems in, or related to, the care of people supports, and does not undermine, harm or unfairly breach, fundamental values of care, including human rights, independence, choice and control, dignity, equality and wellbeing.

We are concerned that Generative AI is being adopted at pace, without clear guidelines or guardrails to ensure that its use is responsible, safe and effective. This rollout is often a response to pressures on the current system of care and support, and it may detract from the focus on the human rights, equality and legal frameworks that ought to inform its use.

We have developed our own guidance to start to address these gaps, but we also need the following actions to be taken:

1. Everyone – to use the “I” and “we” statements set out in our guidance to guide the use of Generative AI in social care.

We’re asking everyone with a stake in the use of Generative AI in social care to consider how you can apply the guidance (the “I” and “we” statements) we have collectively produced in your work. If you’re a person who draws on care and support, or someone who works in social care, you can use the “I” statements to check whether technology is being used responsibly. If you’re a provider of care (in a local authority, a care organisation or somewhere else), or a technology provider or developer, you should consider whether the “we” statements reflect how you’d describe your work.

2. Everyone working on Generative AI in social care – to continue to work collaboratively on key issues.

We’ve learned a lot by bringing together different perspectives across social care, and we know we’re not the only ones thinking about this: work is ongoing across adult social care, in relation to children and young people, and in other related fields. We’re keen to continue the collaboration, making sure that we’re bringing different perspectives together and, crucially, ensuring that there is active participation by people who draw on care and support, unpaid carers, people who work in social care and others as we move forward. We know there is still significant work to do, including to address issues around cybersecurity and the environmental impact of Generative AI.

3. UK governments to work with the UK’s regulators to develop appropriate regulatory and accountability structures to govern the use of Generative AI (and other AI technologies) in social care.

Our discussions have laid bare the need for clear and enforceable guidelines around the use of Generative AI in social care. We also need clearer mechanisms for accountability. We envisage a role for UK care regulators, local authorities in relation to their safeguarding responsibilities, and professional bodies linked to the social care sector. UK governments need to recognise the current regulatory gaps and designate a responsible body to address these. We believe our guidance can provide a helpful starting point for this work.

4. UK governments to take a lead in developing and maturing the infrastructure for innovation and entrepreneurship in social care, in partnership with other key stakeholders including local authorities.

We need to create a supportive ecosystem and infrastructure that can enable high-quality, inclusive innovation and entrepreneurship in social care technology. This means building infrastructure that supports inclusive, human-centred design and enables coproduction and involvement across the innovation life cycle.

5. UK governments to take a lead in developing and nurturing new business models in the social care technology field, in partnership with other key bodies including local authorities.

The fragmentation of the social care sector makes it hard to drive economies of scale in care technology development. Power is not held equally between large, often national or global, technology providers; small, often local, care providers; and care workers and people who draw on care and support. Individuals, communities and local economies do not always feel able to share in the rewards from technologies built using their data and around their lives. We believe governments can play a role in supporting the development of new business models in the care technology field that are more efficient and better support growth across local communities and economies.

6. The Department of Health and Social Care to ensure that its promised National Standards around the use of technology in social care are ethically informed and aligned with existing legal frameworks, including human rights and equality law, and the wellbeing principle established in the Care Act 2014.

The DHSC has promised new National Standards for the use of care technology in England. We hope that the work we’ve already done can offer a starting point for these standards in England, and for the development of similar standards across the UK.

Endorsed by

Dr Caroline Green, Director of Research at the Institute for Ethics in AI

Casson Consulting

Digital Care Hub