You need to be very careful: the respondents could apply for costs against you, and you may be ordered to pay them. AI can generate fake cases, and citing them will harm you, not help you. (You may also want to check whether any lawyers involved are citing fake cases. It has happened, and lawyers have gotten into serious trouble over it.)
This decision was posted just last week, on Thursday, January 15th, 2026. If you are interested in the case itself, you can click the link below and give it a read. I have also pulled out some key paragraphs on AI use.
RR v. Fraser Health Authority and others (No.3), 2025 BCHRT 287
[223] The use of AI tools by parties to assist in the presentation of their cases has increased dramatically over the past several years. This has yielded both positive and negative consequences. On the positive side, people who are self-represented before the Tribunal may have better access to information about legal tests and precedents, and how their specific situation may have been handled by the Tribunal in the past. On the negative side, it has become widely recognised that AI tools frequently generate false information, including fake cases, which can appear to be legitimate.
[224] Recently, the Tribunal has cautioned parties about the responsible and appropriate use of AI tools in the Tribunal’s process. In Duarte v. City of Richmond, 2024 BCHRT 347, the Tribunal stated that parties appearing before the Tribunal must carefully assess the information that AI tools produce before using such information in the Tribunal process, and that deliberate attempts to mislead the Tribunal, or even careless submission of fabricated information, can form the basis for an award of costs under s. 37(4) of the Code. The Tribunal emphasised that the integrity of the Tribunal’s process, and the justice system more broadly, requires parties to exercise diligence in ensuring that their engagement with artificial intelligence does not supersede their own judgement and credibility: at para 53.
[225] Similarly, the BC Court of Appeal has recently held that although parties may use AI tools to assist them in the Court’s process, “like any litigation aid, the human behind the tool remains responsible for what comes before the Court”: Wu v. Murray, 2025 BCCA 365, at para. 14.
[226] In the present case, I do not believe RR purposely attempted to mislead the Tribunal or the Respondents. Further, the Respondents have not alleged that RR has breached any Tribunal rule, order, or policy. The Tribunal does not yet have a published policy regarding the use of AI tools in its process, or information cautioning parties about its use. Although the Tribunal has published one decision that talks about the improper use of AI tools in closing submissions, I do not expect that RR, as a self-represented person without legal training, would have known about that decision.
[227] Further, although RR included numerous fake cases in her submissions, and although the Tribunal and the Respondents were required to expend resources to establish that the cases were not valid, it cannot be said that either the Respondents, or the complaint resolution process more generally, were significantly prejudiced. In the present situation, it was RR who was most prejudiced by her use of the fake cases. This is because the Tribunal could not rely on the majority of the legal propositions she cited, or the factual contexts from the fake cases that she said resembled the context in her own complaint.
[228] Ultimately, in these circumstances, I do not find that RR’s inclusion of the fake cases amounts to improper conduct warranting an order of costs. As such, I decline to exercise my discretion to award costs against RR for improper conduct. These reasons should not be taken to condone the inclusion of fake cases with a party’s submissions or suggest that in other cases an order for costs would not be appropriate.
*******
For a blog post on the financial risks of navigating the BC Human Rights Tribunal, read: Is there a financial risk to filing a human rights complaint?