Industrial Tribunal launches AI pilot as Winder calls for safeguards against misuse

Chief Justice Ian Winder has issued a stark warning about the potential risks of artificial intelligence (AI) in judicial processes, emphasizing that technology must never compromise the integrity of justice. Speaking at the opening of the Industrial Tribunal’s legal year, Justice Winder acknowledged the benefits of digital tools in enhancing efficiency and transparency but stressed the need for robust safeguards to prevent misuse.

He highlighted the importance of maintaining judicial fairness, particularly in cases involving self-represented litigants, for whom AI can serve as a valuable aid when used responsibly. Justice Winder urged the tribunal to adopt guidelines recently issued by the Supreme Court on the ethical use of AI, ensuring that technological advancements do not undermine thoroughness or public trust.

Industrial Tribunal President Indira Demeritte-Francis announced the launch of an AI pilot project aimed at assisting with legal research, judgment formatting, and case management. The initiative is part of a broader modernization effort, with its effectiveness set to be evaluated in 2026.

The move follows a recent directive from the Supreme Court, prompted by an incident in which an attorney submitted AI-generated “fake cases” in support of a legal argument. The directive underscores the need for accountability in AI usage, requiring court users to disclose AI involvement in document preparation and to ensure the accuracy of their submissions.

Justice Winder also cautioned against inputting sensitive or privileged information into unsecured AI platforms, as such data could be inadvertently shared with other users. He emphasized that while AI can aid judicial processes, its deployment must never erode confidence in the impartiality and fairness of the courts.