In Australia, intended purpose still decides whether GenAI is a medical device
The most important governance rule in Australian healthcare GenAI is that model type does not decide regulation on its own. Intended purpose does. If a GenAI system is intended to diagnose, monitor, predict, prognose, or treat a condition, the TGA may regulate it as medical-device software regardless of whether the underlying model is generative, classical, or rule based.
This matters because the same technical stack can sit on opposite sides of the line. A summarization assistant for administrative handover has a different regulatory posture from a chatbot that gives diagnostic or treatment recommendations to clinicians or consumers.
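As a rough illustration, an intake triage step can flag use cases whose stated intended purpose looks device-like and route them to regulatory review. The sketch below is a minimal, hypothetical heuristic (the keyword list and field names are illustrative, not TGA criteria), assuming Python-based intake tooling.

```python
from dataclasses import dataclass

# Intended purposes that commonly point toward medical-device software regulation.
# Illustrative keyword stems only; the authoritative test is the TGA's definitions.
DEVICE_LIKE_TERMS = {"diagnos", "monitor", "predict", "prognos", "treat"}


@dataclass
class UseCase:
    name: str
    intended_purpose: str  # free-text statement from the product owner
    audience: str          # e.g. "clinician", "consumer", "admin staff"


def needs_device_assessment(use_case: UseCase) -> bool:
    """Flag use cases whose stated purpose looks device-like.

    Intake triage only, not a regulatory determination: every flagged
    use case still goes to a qualified regulatory reviewer.
    """
    text = use_case.intended_purpose.lower()
    return any(term in text for term in DEVICE_LIKE_TERMS)


if __name__ == "__main__":
    scribe = UseCase("ambient scribe", "draft administrative handover notes", "clinician")
    chatbot = UseCase("symptom chatbot", "suggest likely diagnoses to consumers", "consumer")
    for uc in (scribe, chatbot):
        print(uc.name, "->", "device assessment" if needs_device_assessment(uc) else "standard review")
```

The point is the routing discipline, not the keyword match: a flagged use case is handed to someone qualified to apply the actual regulatory definitions.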
Diagram: Intended-purpose boundary for healthcare GenAI
TGA AI and medical device software regulation
Current TGA guidance explaining how intended purpose determines whether AI-enabled software is regulated as a medical device.
Review the TGA regulation guidance
TGA digital scribes guidance
TGA page highlighting how intended purpose changes the regulatory status of ambient documentation products.
Review digital scribe guidance
Privacy, transparency, and accountable ownership must be designed into the operating model
Healthcare GenAI programs must still satisfy mainstream privacy and accountability obligations. In Australia that means handling health information under applicable privacy law and guidance, while public-sector programs also need to align with the Commonwealth policy for the responsible use of AI in government.
Current government baseline
Australia’s Policy for the responsible use of AI in government v2.0 took effect on 15 December 2025 and expects accountable officials, transparency statements, internal use-case registers, and impact-assessment discipline for in-scope government AI use cases.
Governance expectations that should appear in the operating model
| Control | Why it matters |
|---|---|
| Named accountable owner | Someone must own safety, data use, and release decisions for the use case |
| Transparency statement | Users and stakeholders need to know where and why AI is being used |
| Impact assessment | Risk should be reviewed before a high-stakes workflow is launched |
| Health-privacy review | Sensitive health data needs collection, use, disclosure, and retention controls |
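One way to make these controls auditable is to hold them as structured fields in the internal use-case register and block launch while any control is missing. The sketch below is illustrative only; the field names are hypothetical and are not taken from the policy, the impact assessment tool, or OAIC guidance.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class UseCaseRegisterEntry:
    """Hypothetical internal register record for one GenAI use case."""
    use_case: str
    accountable_owner: str | None = None          # named official who owns safety, data use, and release
    transparency_statement_url: str | None = None
    impact_assessment_date: date | None = None
    health_privacy_review_date: date | None = None


def launch_blockers(entry: UseCaseRegisterEntry) -> list[str]:
    """List the governance controls still missing before this use case can launch."""
    missing = []
    if not entry.accountable_owner:
        missing.append("named accountable owner")
    if not entry.transparency_statement_url:
        missing.append("published transparency statement")
    if entry.impact_assessment_date is None:
        missing.append("completed AI impact assessment")
    if entry.health_privacy_review_date is None:
        missing.append("health-privacy review")
    return missing


if __name__ == "__main__":
    entry = UseCaseRegisterEntry(
        use_case="discharge-summary drafting assistant",
        accountable_owner="Director, Clinical Informatics",
        impact_assessment_date=date(2025, 11, 3),
    )
    print(launch_blockers(entry))  # ['published transparency statement', 'health-privacy review']
```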
Policy for the responsible use of AI in government v2.0
Digital.gov.au root policy page summarizing the v2.0 mandatory requirements, effective date, and linked standards for accountable officials, transparency statements, internal use-case registers, and AI impact assessment.
Review the v2.0 policy
AI impact assessment tool
Digital.gov.au guidance and downloads for the Australian Government AI impact assessment tool that supports the policy operating model.
Review the impact assessment tool
OAIC Guide to Health Privacy
Australian privacy guidance for handling health information in service delivery and operations.
Review the privacy guidance
Assurance needs explicit change control, monitoring, and review loops
Healthcare GenAI systems change over time even when the base model is static. Prompt templates evolve, retrieval corpora refresh, policies update, and workflow integration changes. For higher-risk products, teams should define in advance which modifications are expected, how they will be validated, and when additional regulatory review or governance approval is required.
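A lightweight way to pre-commit to that discipline is to encode the change plan itself. The sketch below assumes hypothetical change categories, validation steps, and approval rules; a real plan would come from the product's own risk assessment and, where relevant, regulatory guidance such as the FDA PCCP approach referenced below.

```python
from enum import Enum, auto


class ChangeType(Enum):
    PROMPT_TEMPLATE_EDIT = auto()
    RETRIEVAL_CORPUS_REFRESH = auto()
    BASE_MODEL_SWAP = auto()
    WORKFLOW_AUTOMATION = auto()   # output acted on without human review


# Hypothetical pre-agreed change plan: which modifications are expected,
# how each must be validated, and which always trigger a new approval cycle.
CHANGE_PLAN = {
    ChangeType.PROMPT_TEMPLATE_EDIT: {
        "validation": "re-run grounding and safety evaluation suite",
        "needs_new_approval": False,
    },
    ChangeType.RETRIEVAL_CORPUS_REFRESH: {
        "validation": "spot-check citations against the refreshed corpus",
        "needs_new_approval": False,
    },
    ChangeType.BASE_MODEL_SWAP: {
        "validation": "full re-validation against the original acceptance criteria",
        "needs_new_approval": True,
    },
    ChangeType.WORKFLOW_AUTOMATION: {
        "validation": "clinical risk assessment and regulatory review",
        "needs_new_approval": True,
    },
}


def requires_new_approval(change: ChangeType) -> bool:
    """Changes outside the pre-agreed plan default to a new approval cycle."""
    entry = CHANGE_PLAN.get(change)
    return True if entry is None else entry["needs_new_approval"]
```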
Diagram: Healthcare GenAI assurance lifecycle
- Define which model, prompt, retrieval, or workflow changes are allowed without a new approval cycle
- Re-test grounding, safety, and user-behavior impacts after material changes
- Log the exact model, prompt, and evidence version used for important outputs (a minimal provenance-record sketch follows this list)
- Monitor post-deployment failure reports, overrides, and drift indicators
- Escalate when a product moves from assistive drafting toward diagnosis, treatment, or automated action
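For the logging point above, a provenance record can pin each important output to the exact model, prompt template, and evidence versions that produced it. The record shape below is a sketch with hypothetical field names; hashing the output keeps the audit line free of clinical text.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone


@dataclass(frozen=True)
class OutputProvenance:
    """Versions and inputs behind one generated output (field names are illustrative)."""
    model_id: str                      # deployed model name and version tag
    prompt_template_version: str
    retrieval_corpus_version: str
    source_document_ids: tuple[str, ...]
    output_sha256: str                 # hash of the generated text, not the text itself
    created_at: str


def record_output(output_text: str, model_id: str, prompt_version: str,
                  corpus_version: str, source_ids: list[str]) -> str:
    """Return a JSON audit line for an important output; the caller ships it to the audit store."""
    record = OutputProvenance(
        model_id=model_id,
        prompt_template_version=prompt_version,
        retrieval_corpus_version=corpus_version,
        source_document_ids=tuple(source_ids),
        output_sha256=hashlib.sha256(output_text.encode("utf-8")).hexdigest(),
        created_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(record))


if __name__ == "__main__":
    line = record_output(
        output_text="Draft discharge summary ...",
        model_id="clinical-drafter-2025-10",
        prompt_version="discharge-v14",
        corpus_version="guidelines-2025Q3",
        source_ids=["note:123", "labs:456"],
    )
    print(line)
```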
FDA PCCP guidance for AI-enabled device software functions
Final FDA guidance on predetermined change control plans for AI-enabled device software functions, useful as a change-discipline reference point.
Review the FDA PCCP guidance
NIST AI RMF Playbook
NIST playbook guidance for operationalizing AI risk management activities across the system lifecycle.
Review the NIST playbook
WHO ethics and governance of AI for health
WHO guidance covering human oversight, transparency, accountability, and safety in health AI programs.
Review the WHO guidance