Truth in AI means the technology must be grounded in accurate, reliable data and outcomes. AI systems should be evidence-based, drawing on information that reflects the realities of the people we care for and support. This principle requires us to ask tough questions about the data being used and to ensure it captures the diverse experiences of those in our care, so that it is both valid and transparent. At the heart of the FAIR model is the principle of truth, or as the model would say, getting the Facts right. If the facts are wrong, the AI will make decisions that could harm those in our care.

Case Example:

In one care setting, AI was used to predict which residents in a care home were at risk of developing pressure sores. The algorithm was initially trained on data from a population that did not reflect the diversity of the home's residents, particularly those from minority ethnic backgrounds. As a result, it missed key indicators in people of colour, leading to unequal care. By revisiting the Facts and ensuring the data was representative, the AI was improved to provide truthful, equitable predictions for all residents, regardless of their ethnicity.
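One practical way to surface the problem this case describes is to check a model's sensitivity (recall) separately for each demographic group: a model that looks accurate overall can still miss most at-risk residents in an under-represented group. The sketch below is purely illustrative, using a hypothetical helper and made-up toy labels rather than any real care-home data or model.

```python
# Illustrative bias audit on hypothetical data: compare a risk model's
# recall (true-positive rate) across demographic groups.
# label 1 = resident developed a pressure sore; pred 1 = model flagged them.

def recall_by_group(labels, preds, groups):
    """Return recall per group: of residents who developed a sore,
    what fraction did the model flag as at risk?"""
    stats = {}
    for label, pred, group in zip(labels, preds, groups):
        tp, pos = stats.get(group, (0, 0))
        if label == 1:
            pos += 1          # an actual at-risk resident
            if pred == 1:
                tp += 1       # ...whom the model correctly flagged
        stats[group] = (tp, pos)
    return {g: tp / pos for g, (tp, pos) in stats.items() if pos}

# Toy example: the model catches every at-risk resident in group A
# but misses all of them in group B, exactly the kind of gap that
# signals unrepresentative training data.
labels = [1, 1, 1, 1, 1, 1, 0, 0]
preds  = [1, 1, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "B", "B", "B", "A", "B"]

print(recall_by_group(labels, preds, groups))  # {'A': 1.0, 'B': 0.0}
```

A gap like this between groups is the quantitative face of "getting the Facts right": the audit does not fix the model, but it makes the unequal care visible so the training data can be revisited.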