Think twice before using ChatGPT to help with aged care documentation
With last year’s aged care reforms seeing the Aged Care Quality and Safety Commission demand a higher evidentiary standard from providers, many operators are turning to generative AI tools such as ChatGPT.
Isabelle Kimler, Aged Care Management Consultant with Aged Care Provider Assistance, says providers are “understandably” exploring AI tools, particularly to help write policies and procedures and to prepare applications to become approved providers.
“Under the strengthened framework, assessors are no longer asking whether a policy exists, they are asking whether documentation demonstrates compliance with each relevant Quality Standard outcome and whether it is supported by evidence,” Isabelle writes.
AI-generated drafting has several significant weaknesses, including an inability to apply the strengthened Quality Standards correctly or to evidence how obligations are met in practice.

Aged Care Provider Assistance is “increasingly” seeing the Commission fail policies, assessments and applications that were clearly drafted by AI tools.
Isabelle told The Weekly SOURCE that the Commission is asking providers to re-draft documents, both in applications to become approved providers and following compliance audits, where the provider, or applying provider, could not answer the Commission’s follow-up questions about policy documents that appeared to be AI-generated.
Isabelle provided another example. AI-generated policies commonly state: “Residents will be informed of changes to fees or services and consulted where appropriate.” But under the strengthened Standards, this wording is insufficient.
Where AI fails
When it comes to resident policies, the Commission’s assessors now test:
- How the resident is informed (method, accessibility, supports);
- When this occurs (specific legislative timeframes);
- Who is responsible (role clarity and delegation);
- How consent or agreement is obtained or reviewed; and
- What records evidence this occurred consistently.
Isabelle notes that AI-generated policies typically fail because they:
- Do not specify mandatory notice periods, triggers, or escalation steps required by the Act and Rules;
- Are inconsistent with the resident agreement, fee schedules, or termination provisions;
- Do not demonstrate alignment with supported decision-making requirements under Standard 1;
- Blur accountability, undermining governance expectations under Standard 2; and
- Create vague commitments that cannot be evidenced in practice.
As well as being asked to rewrite policies, operators might receive an improvement notice or non-compliance finding “not because care is inadequate, but because documentation does not meet the strengthened Standards’ evidentiary expectations”, Isabelle states.
Privacy protections

Another factor to keep in mind when considering generative AI tools is Federal and State laws that prevent personal or sensitive health information from being entered into public AI platforms, says Karina Peace, Consultant with Advisors to Aged Care Australia.
Laws also prevent personal data from being stored overseas, as many AI tools do, and require informed consent from residents before their personal data can be used.
As AI tools become more prevalent, aged care operators should develop clear AI usage policies and stay up to date with Federal and State regulations. Operators need to consider organisation-specific AI tools and maintain human oversight, Karina added.