Artificial intelligence (AI) is rapidly transforming the way healthcare is delivered. From drafting patient notes to supporting diagnostic decisions, AI and large language models (LLMs) are being adopted across hospitals, clinics, and general practices to streamline workflows and reduce administrative burdens.

But with these benefits come important questions, especially around how patient data is used, stored, and protected. For healthcare professionals, understanding the privacy, ethical, and regulatory implications of using AI tools is now essential. In this article, we explore the risks, responsibilities, and opportunities of AI in healthcare, along with practical guidance for clinicians and patients alike.


What are the risks?

Any tool used in the process of making diagnostic or treatment recommendations requires TGA approval and registration as a medical device, and AI tools used in this way fall under the same requirement.

If you decide to use AI tools in a medical setting, selecting the right tool is a key decision. Tools such as Google's Med-PaLM and Microsoft's BioGPT, which are designed specifically for use by medical practitioners, will generally be more accurate and provide more reliable information than general-purpose tools such as ChatGPT.

The other key risks in using AI in healthcare include:

  • Unnecessary disclosure or use of patient data, particularly when using AI models or large language models (LLMs) developed by third-party providers. Patient data may be used to train these systems, sometimes without full transparency or consent.
  • Overreliance on AI-generated outputs without proper clinical review. This is a common risk across all industries, but it carries far greater consequences in healthcare, where patient safety and wellbeing are at stake.

These risks involve not only ethical concerns but also legal responsibilities. In Hong Kong, the use and handling of personal data are governed by the Personal Data (Privacy) Ordinance (Cap. 486).

Healthcare providers must also be aware of international standards and regulations that may apply, especially when treating patients from other regions. Key examples include:

  • The European Union's General Data Protection Regulation (GDPR), which protects the personal data of individuals in the EU regardless of where that data is processed. If your organisation treats European patients, you may be required to comply.
  • The EU AI Act, which came into force in August 2024. It outlines requirements for the use of AI technologies, with phased compliance deadlines extending to 2030.

And the opportunities?

Like other industries, healthcare stands to benefit significantly from AI and LLM technologies. These tools can streamline administrative tasks, such as writing clinical notes or processing routine paperwork, allowing healthcare professionals to spend more time focusing on patient care.

This frees medical professionals to devote more attention and time to the patient, improving outcomes and their overall experience. It can also reduce burnout and allow more thorough notes to be written, giving better insights in future appointments.

As computational power increases, particularly with the advent of quantum computing, the potential of AI systems continues to grow. In the future, AI may be able to process and learn from vast and complex datasets to offer diagnostic suggestions or insights currently beyond human capability.

What can you do as a patient?

Patients have a right to understand how their information is used. Consider asking your doctor:

  • Will this appointment be recorded or transcribed?
  • Where will the data from this appointment be stored and processed?
  • Will my data be used to train AI systems or shared with third parties?
  • Will I receive the same level of care if I choose not to allow my data to be used?

What can you do as a clinician?

Ultimately, clinicians are responsible for the accuracy and integrity of any diagnoses, notes, or decisions, regardless of whether AI tools were used to assist.

Here are key considerations:

  • Always review outputs from any AI or software tool. These tools are meant to assist, not replace, professional judgement. AI systems are not yet fully reliable. They are still prone to hallucinations (i.e., generating incorrect or misleading outputs which can be convincing) which is a significant concern in clinical environments.
  • Understand where data is stored. If you use a third-party AI assistant or software provider, confirm where their servers or cloud infrastructure are located and whether data is stored within Hong Kong or overseas.
  • Be clear on whether the AI service uses input data for training. Some services retain user prompts and data for further development of their models, which may not align with your privacy obligations.
  • Respect patient choice. Ensure patients who opt out of AI use receive the same standard of care and have access to alternative processes if needed.
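For teams that do route clinical text through third-party AI services, one practical safeguard is to strip obvious patient identifiers before any text leaves your systems. The sketch below shows the idea in Python; the patterns and the example note are illustrative assumptions only, not a complete de-identification solution — production use should rely on vetted de-identification tooling and a proper privacy review.

```python
import re

# Illustrative patterns only; a real de-identification pipeline needs
# vetted tooling and far broader coverage than this.
REDACTION_PATTERNS = {
    "HKID": re.compile(r"\b[A-Z]{1,2}\d{6}\([0-9A]\)"),   # Hong Kong ID card format
    "PHONE": re.compile(r"\b\d{4}[ -]?\d{4}\b"),          # 8-digit local phone number
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),  # email address
    "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),   # numeric date
}

def redact(text: str) -> str:
    """Replace each matched identifier with a labelled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

# Hypothetical clinical note fragment
note = "Patient A123456(7), contact 9123 4567, seen 03/06/2024."
print(redact(note))  # Patient [HKID], contact [PHONE], seen [DATE].
```

The point of the sketch is the workflow, not the patterns: redaction happens locally, before any prompt is sent to an external service, so the provider never receives the raw identifiers.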

Final thoughts

AI has immense potential to enhance healthcare delivery, but its use must be accompanied by careful consideration of data privacy, ethical obligations, and professional responsibility. Staying informed and transparent with patients will help build trust while ensuring compliance with regulatory requirements in Hong Kong and globally.

Author

Brad Fabiny

DQS Product Manager - Cyber Security, and auditor for the ISO 9001 and ISO 27001 standards and information security management systems (ISMS), with extensive experience in software development.
