What are the risks?
The Therapeutic Goods Administration (TGA) requires approval and registration as a medical device for any tool used in the process of making diagnostic or treatment recommendations; any AI tool used in this way falls under the same requirement.
If you decide to use AI tools in a medical setting, selecting the right tool is a key decision. Tools such as Google’s Med-PaLM and Microsoft’s BioGPT, which are designed specifically for use by medical practitioners, will be more accurate and provide more reliable information than general-purpose tools such as ChatGPT.
The other key risks in using AI in healthcare include:
- Unnecessary disclosure or use of patient data, particularly when using AI models or large language models (LLMs) developed by third-party providers. Patient data may be used to train these systems, sometimes without full transparency or consent.
- Overreliance on AI-generated outputs without proper clinical review. This is a common risk across all industries, but it carries far greater consequences in healthcare, where patient safety and wellbeing are at stake.
These risks involve not only ethical concerns but also legal responsibilities. In Hong Kong, the use and handling of personal data are governed by the Personal Data (Privacy) Ordinance (Cap. 486).
Healthcare providers must also be aware of international standards and regulations that may apply, especially when treating patients from other regions. Key examples include:
- The European Union's General Data Protection Regulation (GDPR), which protects the personal data of individuals in the EU regardless of where that data is processed. If your organisation treats patients from the EU, you may be required to comply.
- The EU AI Act, which came into force in August 2024. This legislation sets out requirements for the use of AI technologies, with phased compliance deadlines extending to 2030.