1. Introduction
The Oxford University Institute for Ethics in AI has been instrumental in establishing a working group focused on Generative AI in Adult Social Care. Our mission is to provide guidance on the responsible use of Generative AI within social care.
This paper, produced by the Technology Working Group of the Oxford Project, aims to define Generative AI from the perspective of adult social care. We explore both the potential benefits and the primary risks associated with applying Generative AI in this domain and identify the various stakeholders involved. Additionally, we offer a pledge for technology providers to assure stakeholders that they understand and are committed to managing the risks while realising the benefits of Generative AI.
We want to ensure that the guidance we provide is practical and valuable to all stakeholders, including people with lived experience of drawing on care and support, care professionals, care workers, care policymakers, technology providers, and local authorities. We aim to assist these key groups in evaluating the risks associated with digital solutions that utilise Generative AI. This should help to increase trust and transparency between stakeholders.
The working group will specifically address issues arising from Generative AI. In the future, we may extend our principles and practices to the broader AI landscape.
We plan to propose a framework for evaluating risks across various usage contexts. Building on the work of groups such as the Department for Science, Innovation and Technology, we aim to examine these risks from a cross-sector perspective.
2. Definition of Generative AI
Generative AI is a type of artificial intelligence that creates new things, like text or pictures, by learning from a large amount of information. Think of it like a very smart computer that has studied lots of books, pictures, and other data. After learning from all this information, it can make its own text or images, based on what it has learned.
Generative AI tools can create new content by combining ideas in innovative ways. This sets them apart from other types of AI, such as those used for outcome prediction or pattern recognition, which analyse existing information to make decisions or provide answers within defined limits. Generative AI, on the other hand, creates something new that was not part of the original data.
However, the way Generative AI works can be complex and not fully understood. Even experts are still trying to figure out exactly how these systems come up with their answers or creations.
In Annex 1: A Definition of Generative AI for more technical users (see below), we include a definition of Generative AI that outlines its technical underpinnings.
3. Potential Benefits of Generative AI in Social Care
Generative AI offers transformative potential for social care by enhancing personalisation, efficiency, and support systems. One significant advantage is its ability to consolidate and analyse notes from interactions with people who might take up care and support, and with professionals, then use this data to generate suggested tailored care plans. For example, an AI system could process detailed notes from multiple interactions—such as needs assessments, feedback from family members, and observations by caregivers—combined with real-time data from activity and health monitoring devices, to create a suggested comprehensive and personalised care strategy.
In practical terms, an AI-driven system could synthesise information from various sources, including medical records, daily activity logs, and subjective reports from service users. If a person with care needs experiences new changes in their well-being, the AI can integrate this updated information into their care plan, suggesting adjustments or interventions that reflect their current state. This proactive approach could act as a ‘support tool’ for better management of health conditions and more responsive care.
Additionally, Generative AI can automate routine tasks and improve administrative efficiency. For instance, AI can support communications between patients and health providers. An AI-powered system might automatically reschedule appointments if a patient, or person taking up social care support, misses one, send reminders to both them and their caregivers, and update records without manual intervention.
Generative AI can also provide emotional and psychological support. AI-driven virtual companions, such as chatbots designed to offer conversation or mental health support, can alleviate feelings of loneliness and provide immediate, accessible help for individuals, particularly those who may not have regular contact with human caregivers[1]. This can also support the Government’s mental health ambitions, e.g. the funding recently announced for talking therapies and mental health crisis centres.
Moreover, AI tools can assist in training and supporting social care workers by simulating various scenarios and providing real-time feedback. For instance, virtual reality (VR) environments powered by Generative AI can immerse trainees in realistic situations, enhancing their preparedness and response skills.
Overall, the integration of Generative AI in social care could support more efficient operations, personalised care, and enhanced support systems, ultimately improving the quality of life for both caregivers and recipients.
It should be recognised that, for organisations, the use of AI can contribute to greater efficiency and effectiveness, as exemplified in this section. This will naturally lead to opportunities from an economic perspective. Improving public sector efficiency and effectiveness and supporting economic growth are all Government priorities.
4. Ten Major Generative AI Risks[2]
Generative AI offers many benefits and opportunities as outlined above. However, it is important that we consider the risks.
4.1 Hallucinations
A major limitation of Generative AI is the phenomenon of “hallucinations,” also known as confabulations, where the model produces outputs that may appear to us to have the hallmark of confidence, but refer to entirely erroneous information or non-existent events. These hallucinations can range from minor inaccuracies, such as incorrect historical facts, to severe and legally actionable fabrications, such as recommendations of dangerous medical or care actions (although most large language models in the public domain are deliberately configured not to give medical advice). This highlights the importance of robust oversight and stringent verification mechanisms to ensure the accuracy of the information produced by Generative AI models.
4.2 Output Quality
The nature of Generative AI models makes ensuring output quality a significant challenge. Despite ongoing advancements, these models can produce inconsistent results (i.e. different results given the same or very similar input prompts)[3].
The output(s) of an AI-driven tool can potentially affect the type and quality of care delivered; for example, inconsistent results could lead to variation in how, and how much, care is delivered.
Rigorous validation processes are crucial to assure human users of these large language models of the integrity and reliability of the outputs they generate.
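To illustrate why identical prompts can produce different outputs, the following toy Python sketch (purely illustrative, with made-up probabilities, and not drawn from any real model or product) mimics the sampling step used by many large language models: the next word is drawn at random from a probability distribution, with a ‘temperature’ parameter controlling how much variation is permitted.

```python
import random

# Toy next-word probabilities a model might assign after the prompt
# "describe social care" (illustrative values only, not from a real model).
next_word_probs = {
    "support": 0.40,
    "services": 0.30,
    "personal": 0.20,
    "community": 0.10,
}

def sample_next_word(probs, temperature=1.0):
    """Sample one word at random; a higher temperature flattens the
    distribution, increasing the chance of a less likely word."""
    words = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    total = sum(weights)
    return random.choices(words, [w / total for w in weights])[0]

# Two runs of the same prompt can yield different continuations.
print(sample_next_word(next_word_probs, temperature=0.8))
print(sample_next_word(next_word_probs, temperature=0.8))
```

In practice, lowering the temperature makes outputs more repeatable but does not remove variation entirely, which is one reason validation and human review remain important.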
4.3 Workforce Skillset Evolution Due to Greater Use of AI
Increased adoption of Generative AI technologies and human-AI combined models of care may give rise to changes in the patterns and skills of work led by humans, and thus wider changes in the activities and performance of the human care workforce over time.
Total or over-reliance on AI to help make decisions could potentially diminish the human role in decision making. Human attributes and skills such as empathy and discernment should be retained. Stakeholders will need to consider implementing strategies that ensure continued skill development and active engagement of human workers in the decision-making processes.
4.4 Copyright and Other Legal Risks
Generative AI brings with it significant legal and regulatory challenges. There have been instances where AI tools have used, or reproduced, copyrighted material without permission, leading to legal challenges. The recent temporary ban on Generative AI tools in Italy and the ICO AI training data challenge in the UK underscore the regulatory implications surrounding consent, privacy, output accuracy, and age verification. This liability relates primarily to the developer that used the data to train their model, and less so to the person who uses the model to produce content.
There may also be risks around ownership of Generative AI outputs. Unclear legal frameworks in many jurisdictions complicate ownership disputes, and it is unclear whether AI-generated content can be copyrighted at all, and by whom.
We must navigate these legal landscapes carefully to avoid potential pitfalls. Furthermore, there are areas of law, like mental capacity law, which are highly complex, and it is as yet unclear how AI will be treated within this area of the law.
4.5 Data Privacy
Generative AI systems may have been both intentionally and unintentionally trained on personal information, which could be mishandled or exposed if not properly protected. Even anonymised data can sometimes be re-identified through sophisticated analysis. The inner workings of many Generative AI models are complex and opaque. This can make it difficult for users to understand how their data is being used, stored, or shared, leading to potential privacy concerns. Sometimes, AI models might inadvertently generate outputs that include or infer personal data, especially if the models were trained on data containing such information. This can lead to unintentional privacy breaches.
Another issue which needs to be considered is the risk that personal data (e.g. care or health data) inadvertently inputted into Generative AI tools could result in a breach of sensitive data. Tech developers should take steps to mitigate this risk.
4.6 Abuse and Fakes
The power of Generative AI makes it vulnerable to scenarios where users manipulate the model to perform malicious acts. For instance, a mental health chatbot could be exploited to produce inappropriate responses that recommend profitable rather than peer-informed actions, or to reveal confidential information. Images can be manipulated to create deep fakes and deliberately misleading information. These new attack surfaces include “corrupting training data (‘data poisoning’), hijacking model output (‘prompt injection’), extracting sensitive training data (‘model inversion’), misclassifying information (‘perturbation’).”
Vulnerable persons may find it especially difficult to assess information generated by a large language model, particularly because they often access technology in times of crisis. This creates a need for continuous monitoring and robust security measures to protect against misuse. Tech providers have started to implement innovations such as watermarking, but these solutions are themselves vulnerable.
4.7 Model Drift
In a social care context model drift (sometimes referred to as ‘production’ risks) refers to the phenomenon where a Generative AI model’s performance, relevance, or accuracy deteriorates over time because the social care environment, data inputs, or underlying assumptions change. For example, this drift can arise from evolving policies and regulations or shifts in demographics. Model drift is related to, but distinct from, our next risk, biases.
4.8 Biases
Generative AI is susceptible to biased outputs due to the unintended biases present in the training data. This can perpetuate and amplify existing prejudices, which poses a risk to people drawing on care and support as well as posing ethical and reputational risks for social care organisations. For example, as best practice changes in social care, the outputs of a Generative AI model trained before the changes occurred may take time to catch up, and until then may perpetuate practices that are no longer advocated. It is therefore paramount that training data is kept up to date, diverse, and representative, in order to minimise bias, uphold our commitment to fairness and equality in the algorithms on which the AI is based, avoid exacerbating existing inequalities for people drawing on care and support, and enable organisations providing care and support to continue to maintain the highest standards of quality possible.
Steps to remove these biases are necessary to ensure that the risks to people who draw on care (whose care has directly or indirectly been influenced by a Generative AI tool) are minimised.
4.9 Cost of Expertise and Computational Resources
Developing robust applications using Generative AI requires significant expertise and computational resources, which are currently concentrated among a few leading technology companies and emerging social care IT providers. The limited availability of specialised knowledge in this field, and the complexity of the underlying learning algorithms and black-box nature of large language models, presents a tangible business risk. Small Language Models (SLMs) that are domain specific and trained on proprietary data are emerging[4] and may well be more suitable than fine-tuning generic large language models for health and social care applications. This requires further research[5].
An important first stage for most tech suppliers will be extending the training and role of their developers and data scientists. This need for knowledge and training will extend to the health and social care workforce (which not only includes frontline staff but also commissioners who will be procuring technologies).
4.10 The Unknown Unknowns
The nature of unknowns is that they are unknown and perhaps more aligned to science fiction. However, machine learning experts such as Professor Geoffrey Hinton talk of systems that might decide to remove the human from the loop by hiding information or recommending actions that could be detrimental to that human, in order to achieve their high-level goal[6]. We can imagine some nightmare responses to the request ‘help me to remove post-operative pain’!
New risks will emerge as we work with technology which relies on probability to generate novel sequences of data. Indeed, whilst this technology working group is currently focusing on Generative AI, Yann LeCun (Chief AI Scientist at Meta) talks[7] about the limitations of Generative AI and the AI technologies that are likely to succeed it.
Extensive work has now been completed in both academia and industry to define the main risks. Golda et al propose an interesting model of the privacy and security concerns based on five perspectives (user, ethical, regulatory, technological and institutional) in their paper “Privacy and Security Concerns in Generative AI: A Comprehensive Survey”[8].
5. Examining Risk, Ethics, and Safeguarding in the Use of Generative AI for Social Care
Risks in the Context of Usage
The above risks around the use of Generative AI are highly driven by the context of usage, i.e. the use case and the individual’s situation. While categorical risks are useful, it is also necessary to consider the context and clearly define use cases to make risk assessments. Imagine Generative AI is used in two distinct social care scenarios:
- Drafting personalised care plans for an elderly resident: Generative AI could assist social workers by summarising the person’s history, preferences, and support needs into a personalised care plan. While this improves efficiency, the risks include output quality, bias and privacy.
- Providing automated advice to a carer through a chatbot: In this context, the chatbot might, for example, assist people seeking guidance on how to access meals on wheels services. Risks in this use case include hallucinations, output quality and fakes.
These examples highlight that the context (e.g., supporting professionals vs. directly advising vulnerable individuals) significantly influences the nature and severity of risks. Therefore, a tailored risk assessment based on the specific use case is crucial.
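Purely as an illustrative sketch of how such a tailored, use-case-based assessment might be recorded (the format and field names are hypothetical, not a prescribed standard), the two scenarios above could be captured in a simple risk register keyed to the risk categories in section 4:

```python
# Illustrative only: a minimal per-use-case risk register. Field names are
# hypothetical; the risk categories are those listed in section 4 above.
risk_register = [
    {
        "use_case": "Drafting personalised care plans for an elderly resident",
        "context": "supporting social work professionals",
        "key_risks": ["output quality", "biases", "data privacy"],
        "mitigation": "human review and authorisation before any plan is issued",
    },
    {
        "use_case": "Automated advice to a carer through a chatbot",
        "context": "directly advising carers and people drawing on care",
        "key_risks": ["hallucinations", "output quality", "abuse and fakes"],
        "mitigation": "restrict scope to signposting; escalate to a human adviser",
    },
]

# Print a one-line summary per use case.
for entry in risk_register:
    print(f"{entry['use_case']}: {', '.join(entry['key_risks'])}")
```

A register of this kind could also feed into the procurement questions and evaluation framework discussed later in this paper.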
Ethical Considerations for Practice
As Guidance on Proportionate Assessment for Social Workers sets out, “the core purpose of adult care and support is to help people to achieve the outcomes that matter to them in their life. The modern challenge is how local authorities and care providers can do this in a drastically changed landscape that is increasingly more digital, with reducing budgets and increasing need and complexity.” It acknowledges “creative and innovative ways to use technology to continue to engage with and support the people we serve.” However, social work professionals and others upholding professional standards, rightly, also seek to preserve the vital relational elements between a person drawing on care and support services and the professional assessing, or otherwise supporting, the person.
Whilst recognising the ‘proportionality’ imperative, quality social work, delivered in a person-centred way with a risk-positive, human rights ethos, is the backbone of good practice, and it is imperative that the right support is provided at the right time, determined in partnership with the person themselves. In this sense, to be valued, AI needs to be acknowledged to be safely acting as the ‘co-pilot’, not the pilot; to be a ‘decision-support tool’, not a decision-making tool; and not to be generating new content in a formal record that is unmediated by human review and authorisation. The use of AI for summarising assessment conversations, to enable more engaging listening and improve the accuracy of capturing the person’s own voice in the record, should not undermine the primacy of a relationship-based approach.
AI support for practice needs to acknowledge not only legislative and ethical requirements under the Care Act, but also the importance of protecting the rights and needs of people under the Mental Capacity Act and Mental Health Acts. For example, there would be serious concerns about AI’s capacity to recommend anything in relation to a person who frequently self-harms and has suicidal intent.
Some social care professionals fear that AI might be considered capable of replacing professional reflection, judgment, and decision making; others are ready to embrace AI and other disruptive technologies into the future of social work. Either way, the ethical considerations need to embrace the application of AI in practice, and not just its technical pros and cons, otherwise its utility will be severely limited.
To this end, stakeholders (below), including, but not restricted to, social work professionals, need to play a significant part in identifying the risks in Generative AI and the safeguards that need to be in place.
Safeguarding and the Tech Providers Pledge
In formulating the Tech Providers Pledge we seek to emphasise the importance of safety and the safe use of any AI tool deployed in health and social care settings. We understand our responsibilities, as tech providers, to support the safeguarding of people who draw on care, and the importance of co-production with people who draw on care, care provider organisations, frontline staff and the wider public to build trust in AI tools and promote increased transparency.
Our aim, with our pledge, is to ensure that the safety and the safe use of any AI tool deployed in social care settings is uppermost in the mind of all signatories. It is important to ensure that the people who draw on care are included in the design of new tools for care. This pledge is part of the Oxford Initiative for the Responsible Use of AI in Social Care, which includes working groups of care workers, people with lived experience of care, and care groups. It is part of a wider body of work to ensure inclusivity in the development of new technology in care, with the aim of building and retaining trust in AI tools and promoting increased transparency and understanding.
6. Stakeholder Groups
In the realm of social care, several key stakeholders play crucial roles in the deployment and impact of Generative AI technologies. These stakeholders include:
- People Drawing on Care & Support Services: Individuals who receive care and support services are often the primary beneficiaries of Generative AI. They are directly impacted by how effectively AI can personalise and enhance their care plans, improve communication, and provide emotional support. Ensuring their needs and preferences are central to AI implementations is vital.
- Family Members and Advocates: Families and advocates of people who draw on care and support have a vested interest in the quality and effectiveness of care provided. They can provide feedback on AI tools and ensure that the technology aligns with the values and needs of those receiving care.
- Caregivers and Social Workers: These professionals use AI tools to manage and deliver care. Generative AI can assist them in creating tailored care plans, managing administrative tasks, and receiving real-time insights into their clients’ needs. Their input is crucial for developing AI solutions that fit practical caregiving scenarios.
- Social Care Providers: Local authorities, care providers and voluntary organisations, and social care agencies are involved in integrating AI into their systems. They oversee the implementation of AI tools, ensure compliance with regulations, and evaluate the effectiveness of AI-driven solutions in improving care delivery. When selecting software, they could include a risk register during their procurement exercise, based on the risks in this paper, to ask suppliers to outline how they have addressed stakeholder risks in their solutions.
- Technology Developers and AI Researchers: Companies and researchers who design, develop, and refine Generative AI technologies are key stakeholders. They are responsible for creating AI systems that are reliable, ethical, and effective in social care contexts. Their work includes ensuring that AI tools adhere to privacy standards and are user-friendly for caregivers and people who draw upon care and support.
- Regulators and Policy Makers: Government bodies and regulatory agencies establish guidelines and regulations for the use of AI in social care. They ensure that AI systems are used ethically, protect user data, and address concerns related to fairness and transparency. Their role is essential in shaping policies that govern the deployment and oversight of AI technologies. We acknowledge that there may be differences in the ownership and provision of guidance, regulation, and governance across various organisations, such as central and local government or social care providers.
- Insurance Companies and Funding Bodies: These entities may influence the adoption and financing of AI technologies in social care. They assess the cost-effectiveness and potential benefits of AI solutions, which can impact funding and reimbursement policies.
7. Next Steps
The first step is to agree a pledge between us. We need to consider how we want the pledge to evolve. It could become part of a toolkit and form the basis of an evaluation framework which could include a flowchart for tech providers on how they use the pledge.
Next steps could be to:
- consider practical examples of each risk for different stakeholders. This will help people with less understanding of the technologies to understand the potential impact. This should probably be done in a balanced way, highlighting the potential benefits of AI to the stakeholder groups when set against the risks.
- start to look at Generative AI technologies that are being used in the social care sector. Tools are emerging rapidly, and technology providers are implementing Generative AI in a range of applications. We could investigate some of these technologies and consider how our guidance might impact on their development and deployment. We could also look at how open-source tools are being utilised and how AI is being developed into new products.
- consider producing a regular newsletter with updates.
- discuss the rationale, practicality and process for creating an Evaluation Framework for assessing Generative AI software applications in social care in relation to the identified risks[9]. This would be relevant in the following ways:
- For technology providers: Providers can use the framework to evaluate their software, addressing and mitigating identified risks. A formal “evaluation framework” would indicate that their software has been thoroughly assessed and that measures have been taken to manage these risks effectively. We could include examples based on technologies provided by members of the Technology working group.
- For local authorities: Local Authorities could require IT suppliers to demonstrate how they have mitigated the identified risks, for example, as part of the tendering process.
- For service users: Service users could review how these risks have been addressed by the technology they use. The “pledge” would serve as a mark of confidence, similar to a penetration testing certificate, reassuring users of the software’s reliability and security.
- For care organisations and care workers: As with service users, they could use this as a guide to how the risks have been addressed.
We note that the Information Commissioner’s Office has led a series of consultations on how aspects of data protection law apply to Generative AI. We understand the final guidance is still being produced, but this may be relevant and important to consider.
Acknowledgements
This document was initially drafted by John Boyle, director and founder of Oxford Computer Consultants (now part of SystemC), with thanks to Professor Tom Melham – University of Oxford, and Professor Lord Lionel Tarassenko for their review of the initial draft.
This document was originally drafted on 12th September 2024 and amended with input from Caroline Green and Daniel Casson.
It was reviewed by the Tech Providers Working Group and the Steering Committee of the Oxford Project: The Responsible Use of Generative AI in Care (digitalcarehub.co.uk).
The section ‘Ethical Considerations for Practice’ was drafted by Steve Peddie, Care and Health Improvement Adviser (CHIA) for the Southwest Region and national CHIA for Digital Technology in Adult Social Care, Partners in Care and Health (PCH).
Valuable edits were also provided by Amy Lewis of Just Checking.
Annex 1: A Definition of Generative AI for more technical users
A Generative AI system is designed to produce new output based on a large amount of data that has been integrated into its internal mechanisms during a training phase. Furthermore, the outputs of the best-known form of Generative AI, large language models, are free-form productions, for example narrative text or pictures. These models generate entirely new sequences of text and image data from their training data, whereas other frequently used Machine Learning (ML) models, such as regression models or deep neural networks, create outputs that interpolate between their training data and hence fall within a fixed set of values[10].
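For readers who want this mechanism in symbols, the free-form generation described above can be summarised by the standard autoregressive formulation (a generic textbook expression, not tied to any particular model): the model assigns a probability to each possible next token given everything generated so far, and repeatedly samples from that distribution.

```latex
% Standard autoregressive factorisation of a token sequence x_1, ..., x_T,
% where \theta denotes the parameters learned during the training phase.
P_\theta(x_1, \dots, x_T) = \prod_{t=1}^{T} P_\theta(x_t \mid x_1, \dots, x_{t-1}),
\qquad x_t \sim P_\theta(\,\cdot\, \mid x_1, \dots, x_{t-1})
```

Each token x_t is drawn from the model’s fixed vocabulary[10], so the sequence as a whole can be novel even though every individual token is not.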
Generative AI models are intrinsically opaque and how exactly they work is not yet well understood from a scientific perspective. Explaining the results of a model with 175 billion parameters[11], or understanding how it arrived at any given output, is not currently possible[12], although there is some early work on explainability techniques for large language models[13].
References
[1] See for example: De Freitas et al. (2024). “AI Companions Reduce Loneliness”, Harvard Business Review (preprint available on arxiv.org)
[2] Material drawn from Vartak (2023), Forbes report on Six Risks of Generative AI
[3] Type ‘describe social care’ into two instances of ChatGPT. The output you’ll get will be different in each instance.
[4] Small language models emerge for domain-specific use cases | TechTarget
[5] It would be interesting to compare the prospects of specialist models with general ones, e.g. Magic Notes, although for summarisation the horse appears already to have bolted. Specialist models are perhaps of most value for more sophisticated or higher-risk use cases, e.g. diagnostics and recommendations.
[6] Romanes Lecture: ‘Godfather of AI’ speaks about the risks of artificial intelligence | University of Oxford
[7] https://youtu.be/5t1vTLU7s40?si=LIuKYeW-ab-ZEB7a
[8] Golda et al. (2024). Privacy and Security Concerns in Generative AI: A Comprehensive Survey. IEEE Access and https://www.ece.nus.edu.sg/stfpage/bsikdar/papers/access_genai_24.pdf
[9] There exists the British standard BS 30440:2023 “Validation framework for the use of artificial intelligence (AI) within healthcare”. The principles and practice of this standard could also be applied to social care. There is an outstanding question as to what degree the pledge statement may be aligned with these standards.
[10] The core model of a Generative AI tool actually generates the “next word” in a novel sequence of words, but the words belong to the vocabulary of the AI. https://emaggiori.com/chatgpt-vocabulary/
[11] Brown et al. (2020). “Language Models are Few-Shot Learners”.
[12] https://www-technologyreview-com.cdn.ampproject.org/c/s/www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/amp/
[13] For example, Zhao et al. (2024). Explainability for Large Language Models: A Survey. ACM Trans. Intell. Syst. Technol. 15, 2, Article 20 (February 2024)