News

March 10, 2024

Unregulated AI chatbots affect social care plans in Britain

A University of Oxford study highlights both the benefits and risks of AI in social care, but ethical issues remain, and researchers say the adoption of new AI technologies requires clearer guidelines from regulatory bodies to ensure its ethical deployment.

James Tapper reports today for The Guardian that Britain's carers are under significant pressure and need support, but incorporating unregulated AI bots into their workflow may not be advisable. A study conducted by University of Oxford academics found that some care providers have used generative AI chatbots such as ChatGPT and Bard to develop care plans, raising concerns about patient confidentiality, and the researchers emphasize the need for ethical safeguards when integrating AI into social care. Dr. Caroline Green, an early career research fellow at the Institute for Ethics in AI at Oxford, pointed out the risks of entering personal data into such chatbots: it could be used to train the models and potentially expose sensitive information.

Dr. Green highlights the double-edged nature of AI in social care, noting that while it could streamline administrative tasks and facilitate more frequent review of care plans, the technology's potential to disseminate biased information or produce inferior care plans cannot be overlooked. However, she acknowledges ongoing efforts to create tools for safer application of AI in this field.

Existing applications of technology in health and care include PainChek, a phone app that uses AI-trained facial recognition to detect tiny muscle twitches and identify whether someone unable to speak is in pain; Oxevision, which monitors patients in NHS mental health trusts for various health indicators; Sentai, an Alexa-based system that reminds people without round-the-clock care to take medication and lets relatives check on them remotely; and a Bristol Robotics Lab device designed to prevent household accidents for individuals with memory issues.

Despite fears in creative fields that AI will supplant jobs, the social care sector, with its significant workforce gaps and reliance on 5.7 million unpaid carers looking after relatives, views AI differently. Lionel Tarassenko, professor of engineering science and president of Reuben College, Oxford, argues that AI could augment the skills of less experienced carers, giving them expertise closer to that of professionals. However, concerns persist among care managers about the potential legal implications of using AI, including fears of violating Care Quality Commission regulations. Mark Topps, a social care professional and co-host of The Caring View podcast, highlighted the industry's hesitancy to adopt new technologies without clear guidelines from regulatory bodies.

A recent gathering of 30 social care organizations—including the National Care Association, Skills for Care, Adass, and Scottish Care—aimed to address these concerns, seeking to establish a set of best practices for responsible AI use in the sector. Dr. Green, who convened the meeting, expressed hope for developing guidelines that could be enforced to ensure the ethical deployment of AI in social care.

Credits

James Tapper initially wrote and reported this story for The Guardian on March 10, 2024, under the title "Warning over use in UK of unregulated AI chatbots to create social care plans."

Photo: An artist’s illustration of artificial intelligence (AI). This image was inspired by how AI tools can amplify bias and the importance of research to mitigate these risks. Photo © Google DeepMind.