What is generative AI? Our AI Mythbuster!

Definitions

  • ‘Artificial Intelligence’ (AI) refers to technological systems created to learn and solve problems in a human-like way. To do so, AI systems are trained on large amounts of data and information. They learn to identify patterns in that data, which enables them to carry out many different tasks, such as holding conversations that sound human.
  • ‘Generative AI’ is a type of AI that has been trained on vast amounts of text, images and other material to create new text, photos, videos and so on that read, look or sound as though they have been produced by a human.
  • ‘AI chatbots’ are a type of generative AI that usually generates text mimicking human conversation. They can be prompted to generate text on almost any topic, such as writing a blog post on a specific subject or producing suggestions and advice.
  • ‘Adult Social Care’ refers to “Care and support for adults who need extra help to manage their lives and be independent – including older people, people with a disability or long-term illness, people with mental health problems, and carers. Adult social care includes assessment of your needs, provision of services or allocation of funds to enable you to purchase your own care and support. It includes residential care, home care, personal assistants, day services, the provision of aids and adaptations and personal budgets.” (Think Local, Act Personal)

Background

  • Artificial Intelligence (‘AI’) is developing rapidly, with new systems and products emerging constantly. Many different types of AI system are relevant and already being used by people drawing on care, by professional and family carers, and by adult social care services and organisations, for example voice assistants, AI chatbots and fall predictors. AI has the potential to benefit people drawing on care and people providing it, especially when these systems are used as ‘co-pilots’ to care.
  • Generative AI, particularly ‘AI chatbots’ such as OpenAI’s ChatGPT or Google’s Bard, has already disrupted how millions of people around the world go about tasks in their working and private lives. AI chatbots are already being used by some social care providers, including care homes and home care services, as well as by carers and people drawing on care. Many care technology developers and care support organisations have been integrating AI chatbots into their products and services.
  • AI chatbots have technical limitations and inherent risks, which means their use in social care contexts is not unproblematic. AI chatbots work through text prediction: having been trained on vast amounts of data, they generate plausible-sounding text rather than reasoning about the world. They do not understand the world and cannot contextualise, even if the generated text sounds human-like. Generated text may contain potentially harmful suggestions if acted on without appropriate human judgement. In caring contexts, this means, for example, that an AI chatbot’s suggestion about someone’s care may sound reasonable but could harm the person if the user follows the output without the necessary human judgement and caring expertise. Generated text will also reproduce biases inherent in the training data; using AI chatbot outputs without awareness of this problem, and of how to spot biases, could result in harm. Finally, any new data entered into AI chatbots, including personal data protected under the ‘General Data Protection Regulation’ (GDPR), may become part of the larger training data.
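The ‘text prediction’ described above can be illustrated with a deliberately simplified sketch. This is a toy word-frequency model, not how production chatbots are actually built (they use large neural networks trained on vastly more data), and the example training sentences are invented for illustration. The point it shows is that the system only learns which words tend to follow which, without any understanding of what the words mean:

```python
from collections import Counter, defaultdict

# Toy illustration only: real chatbots are far more sophisticated, but
# the core idea is the same -- predict a likely next word from patterns
# in training text, with no understanding of meaning.
training_text = (
    "the carer visits daily . the carer helps with meals . "
    "the carer helps with medication ."
)

# Count which word tends to follow each word in the training text.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the training text."""
    if word not in follows:
        return None
    return follows[word].most_common(1)[0][0]

print(predict_next("carer"))  # prints "helps" -- the most frequent follower
```

Because the model has only seen “helps” after “carer” more often than “visits”, it will always suggest “helps”, however inappropriate that might be in a given person’s situation. This is the sense in which chatbot output can sound fluent while carrying no judgement about whether a suggestion is safe or suitable.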