Artificial intelligence (AI) has been making huge waves across many professions, and the legal world is no exception. But as recent events in Victoria show, that doesn’t mean it’s ready to replace lawyers, or even to be trusted without human oversight. Here’s what’s really going on in the courts, why it matters, and what it means going forward.
What Happened?
In Victoria, the use of generative AI in legal practice has gone from theoretical to real—and problematic. According to the Legal Services Board of Victoria, two serious incidents in the past year involved AI-generated errors in court documents. One lawyer had his practising certificate restricted, while a law firm was ordered to pay costs after submitting documents containing false case citations.
In one high-profile case, a senior counsel in a Supreme Court of Victoria murder trial filed submissions containing non-existent case law and fabricated quotes, both generated by the AI tool used. The judge described the situation as “unsatisfactory” and emphasised the court’s reliance on accurate submissions, according to reporting by ABC News.
More broadly, research from the University of New South Wales (UNSW) has identified more than 80 reported cases in which generative AI was used in Australian courts by self-represented litigants and, increasingly, by lawyers too.
Why This Matters
Judgment, Accuracy and Trust Are on the Line
Courts operate on the basis that submissions, affidavits, case citations and legal arguments are accurate and truthful. When AI fabricates references or quotes (so-called “hallucinations”), it undermines that foundational trust. Under the Supreme Court of Victoria’s guidelines, any AI-generated text must be checked for factual or legal errors.
Efficiency vs. Risk Paradox
AI promises efficiency in tasks like legal research and drafting. But if its outputs require extensive human verification because of possible errors, the promised efficiency gains may evaporate. As the Victorian Law Reform Commission notes, AI-generated inaccuracies can appear entirely credible, a dangerous combination in a legal setting.
Access to Justice and Equality Concerns
On the positive side, AI tools could support self-represented litigants or smaller law firms. However, UNSW research shows most AI-related mistakes come from users without legal training—potentially worsening outcomes for those most in need of support.
Professional and Ethical Obligations Remain
Lawyers using AI remain bound by their professional duties: confidentiality, competence, accuracy, and candour. Under the Supreme Court’s guidance, legal practitioners must understand how their AI tools work, verify the outputs, and ensure courts are not misled.
What the Law and Courts in Victoria Are Doing About It
The Supreme Court of Victoria has released “Guidelines for Responsible Use of AI in Litigation,” setting clear expectations:
- Practitioners must understand their AI tools and their limitations.
- AI-generated content must be verified for accuracy.
- Disclosure of AI use may be required where relevant.
- Generative AI should not replace human legal reasoning.
The approach is not to ban AI, but to regulate and supervise its use responsibly.
What Went Wrong in Victoria: A Breakdown
A typical AI-related legal blunder looks like this:
- A legal practitioner or litigant uses a generative AI tool to draft submissions.
- The AI produces citations or quotes that look valid but are fabricated or incorrect—especially risky in jurisdiction-specific contexts.
- The user assumes the AI output is accurate and submits it unchecked.
- The court discovers errors, leading to delays, cost penalties, or professional consequences.
According to the Victorian Law Reform Commission, these issues often stem from unverified reliance on AI models trained on non-Australian legal data.
Why It’s Not Just a Tech Problem, But a Cultural One
The real issue goes beyond technology. It’s about how people perceive and use it:
- Over-reliance on “magic”: the belief that AI can independently produce reliable legal work. As one lawyer told the ABC, “we checked the initial citations and assumed the rest were also correct.”
- Lack of oversight: Junior staff or paralegals may use AI tools without supervision or adequate understanding.
- Jurisdictional mismatch: AI trained on global legal content may cite laws irrelevant to Australia; as the Supreme Court of Victoria notes, this creates obvious risks.
- Cultural lag: The legal industry is adopting tech faster than it is building the regulatory and educational frameworks needed to manage it.
What This Means Going Forward
For Lawyers and Law Firms:
AI can be a powerful support tool—but not a substitute for human judgment. Always verify quotes, legal references, and arguments. Supervise junior staff, and document all AI use and outputs.
For Self-Represented Litigants:
Use caution with AI tools. Public-facing AI can generate content that looks professional but is legally flawed. Mistakes can result in costly setbacks.
For the Justice System:
Continue enforcing clear AI use guidelines. Provide training and resources to ensure judges, lawyers, and court staff use AI tools responsibly.
For the Public:
AI in courtrooms must not erode public trust. As the Victorian Law Reform Commission warns, flawed AI use could undermine fairness and transparency in justice.
Final Takeaway
“Robot lawyers” may sound like the future, but Victoria’s experience shows we’re not there yet. In law, accuracy and accountability are non-negotiable. AI should be seen as an assistant—never a replacement.
Before letting AI draft your legal argument, ask: Who’s verifying it? Who’s responsible? The answer must always be: you, the legal professional. That’s what the court expects—and what justice demands.