Oxford Project: The Responsible Use of Generative AI in Social Care

Generative AI, such as GPT-powered chatbots, has many potential and actual use cases in adult social care.

Whilst this rapidly developing type of AI may benefit people in adult social care, we need to understand the ethical risks and implications of using this new technology in social care provision. The process of defining, sharing knowledge about and implementing the ‘responsible use of generative AI’ in adult social care must involve everyone connected with social care: people drawing on care, family and professional carers, care provider organisations, policy makers and regulators, local authorities, and representative and advocacy groups, amongst others.

In response, the Institute for Ethics in AI at the University of Oxford, in particular Dr Caroline Green, together with Reuben College, Katie Thorn from Digital Care Hub and Daniel Casson at Casson Consulting, organised the first roundtable on the responsible use of generative AI in adult social care on 1 February 2024, and released the ‘Oxford Statement’ following it.

These web pages have more information about our work, how to get involved and what we hope to achieve.

You can also view a recording of our webinar on generative AI in social care, held in June 2024.