AI has the potential to transform many aspects of our healthcare system, from administration and analysis through to patient adherence and physical robots. However, this collection of technologies can present as many challenges as solutions, and Health New Zealand | Te Whatu Ora has a simple request: talk to us first.
Dr Robyn Whittaker co-leads Health NZ’s National Artificial Intelligence (AI) and Algorithm Expert Advisory Group (NAIA), which advises on issues including privacy, bias and data sovereignty in health products that use these platforms. Previously Clinical Director of Innovation at the Institute for Innovation and Improvement, Te Whatu Ora Waitematā, Whittaker led the Leapfrog Programme of strategic innovation projects. She is an international expert in research and development in the use of mobile health technology for population health interventions, and is now Director of Evidence, Research and Clinical Trials.
Whittaker says Health NZ is keen to review proposals and provide advice early in the development pathway, and has formulated a checklist of perspectives to help assess potential AI products and services.
“Our goal is to understand the problem you’re trying to solve with AI – its scale, inequities, how many people it impacts, the impact for consumers and Māori in particular, and current solutions/management. That gives rise to a raft of questions we need to explore before you try to promote your product or service to us.
“We want to know about your team, their qualifications and expertise, who designed the AI, and why you think it’s appropriate in addressing this problem. Were consumers involved in its development? What potential risks and benefits are there for consumers? Did you work with Māori and embed their perspectives from design through to data sovereignty? What about equity issues and their mitigation?
“We want to know how it will be tested and validated. How will it be implemented, monitored and audited? Are ethics approvals required?”
Whittaker says clinicians must be involved in developing AI tools. “Developers need to understand how their tool will be delivered in the clinical environment, how clinicians might use it, how new tools can be integrated into existing systems, how it might impact patients.”
Whittaker emphasises that while there is a lot of interest in AI, there are still many issues to be resolved, particularly around security and confidence. “We know, for example, that patients and their families are willing to help out if data collection is done well, data is used for the public good, and it is protected and de-identified.
“For Māori, there are issues around data sovereignty and we need to consider that Generative AI tools have probably been trained on data that doesn’t give a full picture. This also applies to women, older people, the disabled and those who are gender diverse. This can perpetuate biases.”
The AI environment is moving fast, but Whittaker says it has only recently started to be useful in the health sector. “We’re seeing benefits in three key areas – algorithms for risk scores and predictions; image analysis in areas such as CT scans, retinal scans and MRIs; and digital scribes, where AI ‘listens’ to a conversation and provides a structured note to the GP.
“We know AI could make a huge difference but that’s still a long way off. Some tools are still ‘not there’, and we need to consider both upstream and downstream implications.”
Whittaker says that while Health NZ does not currently endorse the use of public Large Language Models (LLMs) or Generative AI where non-public information is used to train the model or is used within the context of the model, the organisation is very happy to consider new ideas and products.