This article discusses AI liability risks for strata committees and managers, and explains how to use artificial intelligence responsibly without creating legal, privacy or governance problems.
Artificial intelligence is no longer on the horizon for strata. It is already embedded in everyday operations, from communication tools and document drafting to workflow automation and data analysis.
In our recent national webinar, The AI liability trap – navigating the risks of artificial intelligence in strata, Chris Irons from Strata Solve, Brendan Pitman from Grace Lawyers and Alex McCormack from SOCM explored what this shift means for committees, managers and schemes.
The central message was clear: the real question is not whether to use AI, but how to use it without creating legal, ethical and governance problems.
AI is already part of strata management
One of the first themes discussed was how quietly AI has entered the sector.
Many strata professionals already rely on systems that automate repetitive tasks, assist with drafting correspondence, summarise documents or streamline communication. These tools improve efficiency and reduce administrative burden. Used carefully, they allow managers and committees to focus on more complex issues that require judgment and experience.
The panel agreed that technology is not inherently problematic. In fact, refusing to engage with it may create risks of its own, particularly as owners increasingly expect digital convenience and responsiveness.
The issue arises when AI moves from a support tool to a decision-maker.
The danger of replacing judgment with automation
A recurring theme throughout the session was the risk of over-reliance.
AI can generate responses, draft advice, and analyse data at speed. What it cannot do is understand context in the way a human can. It does not appreciate nuance, competing interests, long-term relationship dynamics or the emotional reality of community living.
Strata decisions are rarely black-and-white. They often involve competing interpretations of legislation, delicate disputes between neighbours, or financial choices that affect people’s homes and investments.
When AI-generated material is treated as authoritative without review, problems can arise. If incorrect advice is relied upon, communications are poorly framed, or personal information is mishandled, liability does not rest with the software. It remains with the committee, the owner, or the strata manager who acted on it.
As Brendan Pitman emphasised during the discussion, existing legal principles already apply. Duties regarding reasonableness, confidentiality, privacy, and proper decision-making do not disappear simply because a tool was used.
Legal gaps and grey areas
The panel also noted that legislation has not kept pace with emerging technology.
There is no specific “AI Act” governing strata. Instead, we must apply existing laws to new circumstances. That creates uncertainty. For example:
- If AI-generated advice contributes to a poor decision, who is responsible?
- How should committees assess the reliability of automated outputs?
- What happens if bias is embedded in an algorithm used to prioritise complaints or maintenance?
- Are schemes exposing themselves to privacy risks when uploading documents to external platforms?
These questions do not yet have neat answers. That uncertainty is precisely why governance and caution are so important.
Bias, privacy and data security
Another major theme was data.
AI systems rely on information. In a strata context, this may include meeting minutes, financial records, by-laws, correspondence, personal details and dispute history. Uploading this information to external platforms can create privacy and confidentiality risks, particularly if the terms of use are unclear or the data storage locations are unknown.
The panel discussed the importance of understanding where data goes, how it is stored and who has access to it. Committees and managers must consider whether they are inadvertently breaching privacy obligations by using certain tools.
The panel also raised algorithmic bias. If automated systems are used to filter complaints, prioritise maintenance, or generate responses, there is a risk that, without human oversight, certain patterns will be reinforced. In community living, fairness and transparency are essential. Blind reliance on automated systems can undermine trust.
Governance first, technology second
Arguably the most practical takeaway from the session was this: AI should sit within a clear governance framework.
Before adopting new tools, committees and managers should ask:
- What problem are we trying to solve?
- Is this task appropriate for automation?
- Who reviews the output before action is taken?
- Are we protecting personal and sensitive information?
- Do our contracts and policies address the use of AI?
Technology should support existing decision-making structures, not bypass them.
Human oversight remains critical. Even if AI drafts a response or prepares a summary, someone must check it. Even if a system flags a potential breach of by-laws, a person must assess the context.
Strata governance is built on accountability. That does not change simply because processes become more efficient.
A balanced and realistic approach
Importantly, the webinar did not suggest that we should avoid AI altogether. Used responsibly, it can improve efficiency, reduce workload and enhance service delivery.
The panel encouraged a balanced approach:
- Use AI to assist with repetitive, administrative or drafting tasks.
- Avoid using AI as a substitute for legal advice or complex judgment.
- Maintain clear oversight and approval processes.
- Communicate transparently about how technology is being used.
Strata is fundamentally about people managing shared property. Technology can support that task, but it cannot replace accountability, experience and professional judgment.
A practical takeaway: the AI traffic light test
One of the most useful suggestions from the session came from Brendan Pitman, who proposed a simple “traffic light” framework to help strata committees and managers assess AI risk.
Rather than banning AI or embracing it without limits, the traffic light test encourages schemes to categorise AI use based on risk.
- Green: Low-risk administrative assistance. This includes tasks such as drafting general correspondence, summarising meeting notes, or generating template documents, provided a human reviews the output before it is used.
- Amber: Medium-risk tasks that may influence decision-making. Examples include analysing maintenance data, identifying potential by-law breaches, or generating recommendations. These uses require clear oversight, documented review processes and active human judgment.
- Red: High-risk decisions that affect legal rights, enforcement action, financial commitments or reputational outcomes. These decisions should never be delegated to AI. A human decision-maker must remain fully accountable.
The message was clear. AI can support strata governance, but it cannot replace it. A structured framework like this allows committees and managers to innovate cautiously while maintaining legal and ethical responsibility.
Download the presentation
As artificial intelligence becomes more integrated into daily operations, the risks will only grow more complex.
This national webinar offers practical insight into how AI is already influencing strata and what committees and managers should consider now to avoid future liability.
If your scheme is using, or is considering using, AI-driven tools, this session is essential viewing.
Download the presentation from the webinar here: The AI liability trap – navigating the risks of artificial intelligence in strata.
Presenters
Chris Irons, Strata Solve, E: chris@stratasolve.com.au, P: 0419 805 898
Brendan Pitman, Grace Lawyers, E: brendan.pitman@gracelawyers.com.au, P: 07 5554 8560
Alex McCormack, SOCM, E: alex@socm.com.au, P: 03 9495 0005
This post appears in Strata News #779.
Have a question or something to add to the article? Leave a comment below.
Read next:
- NAT: Strata management in a changing environment: what’s ahead in 2026
- NAT: Professional Standards for Strata Managers
- VIC: When AGM minutes are wrong, what are the strata manager’s obligations?
Visit our Strata Managers, Strata Committee Concerns, or Strata By-Laws and Legislation pages, or take a look at our state-specific strata information here.
