Oxford Statement on the responsible use of generative AI in Adult Social Care

This statement was initially published on the Institute for Ethics in AI website.

On 1 February 2024, representatives of thirty organisations and individuals working in Adult Social Care met at Reuben College, University of Oxford, to discuss the benefits and risks of using ‘generative AI’ in social care.

This was the first in a series of ‘AI in adult social care’ roundtable events taking place as part of a co-production initiative for people in adult social care, organised by Dr Caroline Green at the University of Oxford Institute for Ethics in AI, Reuben College, the Digital Care Hub and Casson Consulting.

Generative AI is developing rapidly, especially ‘AI chatbots’ that can create new text mimicking human conversation. AI chatbots are already being used in caring contexts, for example to generate care plans or write meeting notes. This reflects the many possibilities for generative AI in social care.

Whilst there are many possible use cases and potential benefits, there are also risks in using this type of AI in care: to people drawing on care, people providing it, care tech providers, and other people and organisations operating in social care. Some of the risks are inherent in the limitations of the technology, such as outputs reproducing bias or not always providing truthful or safe information. Other risks relate to inappropriate or irresponsible use, for example inputting personal data, not obtaining informed consent from people affected by outputs, or not checking outputs for safety and reliability.

Without careful oversight and transparency when this technology is used, these risks could have a direct impact on people’s human rights and on core issues such as safeguarding, data privacy, data security, equality, choice and control, and the quality of care. Social care and tech providers integrating AI chatbots into their services need to develop specialist skills and an understanding of current applications now, in order to capture the positive approaches and minimise risk.

For all the possible benefits generative AI can bring to people in social care, these risks need to be highlighted and addressed now. The group therefore agreed that we urgently need to develop a shared, co-produced framework to underpin the ‘responsible use of generative AI’ in adult social care, building on this statement over the next six months (by autumn 2024).

At the heart of the shared understanding of the ‘responsible use of generative AI’ should be a definition of ‘care’ that recognises the central role of human rights and of trusting human relationships between people drawing on care and people providing care, as well as relationships between other groups in social care, including family carers, social workers, commissioners, regulators and inspectors of services. The use of generative AI should centre on values underlying high-quality care, such as autonomy, person-centredness and wellbeing.

It should furthermore be based on a co-produced vision of care. The primary purpose of generative AI, and of other types of AI, should be to support care provision so that it becomes better (‘augmentation’), to support people drawing on care to lead more autonomous and better lives, and to support the well-being of those providing care.

We also need to co-produce actionable guidelines for the appropriate use and deployment of generative AI in social care, as well as a plan to upskill the whole sector in the use of this technology, a roadmap of existing use cases, clarity on the position of government and regulators, and a compendium of learning from across the globe.

We will therefore be engaging in a co-production and consultation process drawing in more people and organisations in social care. This will include the following groups, among others: people drawing on care, care workers, family caregivers, tech companies, advocacy and representative groups, government and regulators, and academics, working together to develop, share and disseminate these resources.

Of course, generative AI is only one type of AI already affecting the lives and work of people in social care. Processes similar to the one we will follow over the next six months for generative AI will therefore be needed for the other types of AI being used in social care services. We call on people involved in social care to instigate research, dialogue and good-practice exchanges on the use of various types of AI in adult social care, and to define the potential risks and benefits, as well as the possible need for resources, training and/or other outputs and documents.

Endorsed by:
  • The Caring View
  • Scottish Care
  • Social Care Institute for Excellence
  • The Care Workers’ Charity
  • The Access Group
  • National Care Association
  • System C
  • Essex Cares
  • Partners in Care and Health (a partnership between the LGA & ADASS)
  • Skills for Care
  • Digital Care Hub
  • Homecare Association
  • PredicAire
  • Dementia Support
  • InvictIQ
  • IMproving Adult Care Together (IMPACT) and Centre for Care, University of Sheffield
  • ADASS
  • BelleVie Care Ltd
  • TEC Services Association (TSA)
  • National Care Forum
  • Casson Consulting
  • Orchard Care Homes
  • Care England
  • ARC England