Responsiveness: User Feedback and the Evolving Care Landscape

Reviewing and evaluating the use of AI is crucial in a fast-evolving landscape. Such a review process sits at the heart of the FAIR model: it gathers the facts, analyses the rights that are engaged, and assesses the response and action required, which may result in a change in practice. Evaluation and responsiveness are critical to ensuring that the tool serves the person rather than becoming an end in itself.

Case Example:

A social care organisation introduced an AI system to assist with managing care plans. The AI recommended adjustments to care routines based on patterns in data, such as medication management and mobility monitoring. Initially, the system was praised for its efficiency, but over time, care workers and families reported that the AI’s recommendations were not adapting to changes in the residents’ health conditions as quickly as expected.

The AI system failed to account for rapid changes in individual health needs or sudden shifts in care regulations, delaying updates to care plans. This lack of responsiveness frustrated care workers and led to poorer support for some residents.

After undertaking a full FAIR review, the organisation updated the AI with a more flexible, adaptive model designed to respond dynamically to new data and changing regulations. Feedback loops were formalised so that care workers could enter real-time observations directly into the system for immediate adjustment. As a result, the system became markedly more responsive and person-led.
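
The feedback loop described above can be illustrated in outline. The sketch below is a minimal, hypothetical example only: the names (`CarePlan`, `Observation`, `record_observation`) and the flag-for-review logic are assumptions made for illustration, not features of any specific care-planning system. It shows the core idea that a care worker's real-time observation can immediately mark a plan for review rather than waiting for a scheduled update cycle.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Observation:
    """A real-time note entered by a care worker (hypothetical structure)."""
    resident_id: str
    note: str
    flags_change: bool  # the worker indicates whether the resident's needs have changed
    recorded_at: datetime = field(default_factory=datetime.now)

@dataclass
class CarePlan:
    """A simplified care plan holding routines and a review marker."""
    resident_id: str
    routines: list[str]
    needs_review: bool = False

def record_observation(plan: CarePlan, obs: Observation, pending: list[Observation]) -> None:
    """Feedback loop: store the observation and, if it signals a change,
    flag the plan for immediate human review instead of the next batch update."""
    pending.append(obs)
    if obs.flags_change:
        plan.needs_review = True

# Example usage: a worker notes reduced mobility, and the plan is queued for review at once.
plan = CarePlan(resident_id="R-001", routines=["morning medication", "daily walk"])
pending_observations: list[Observation] = []
record_observation(
    plan,
    Observation(resident_id="R-001", note="Resident struggling with stairs today", flags_change=True),
    pending_observations,
)
print(plan.needs_review)  # True: a person, not the system, decides what changes
```

The design point, under these assumptions, is that the observation triggers a review by a person rather than an automatic change to the plan, keeping the tool in service of the resident.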