Immigration Solicitor’s ‘Soul Power’ Blog Strikes Chord on Professional Ethics

Immigration solicitor Chris Dias has published a blog post that’s gaining attention across legal circles for its stark framing of professional responsibility in the age of AI. The piece, titled ‘Soul Power’ and published on his firm Lawyery’s website on 4 March, argues that legal AI needs what he calls “a human soul behind it” - professional accountability that cannot be automated.

The timing is pointed. Dias published just weeks after the SRA issued updated compliance guidance on AI use in February, and his argument draws heavily on MS (Professional conduct; AI generated documents) Bangladesh [2025] UKUT 305 (IAC), a recent case in which a barrister was sanctioned for using ChatGPT and citing a non-existent authority, ‘Y (China)’.

“AI has never felt the sun on its face,” Dias writes. “It has no moral compass, no professional accountability, no skin in the game. It’s a tool - an extraordinary tool with remarkable capacity for processing information - but it’s not a lawyer.”

The MS case provides a sobering backdrop. The Upper Tribunal found the barrister had relied on ChatGPT to generate legal submissions that included citations to fabricated authorities. The judgment noted the barrister’s failure to verify the AI-generated content before submitting it to court, resulting in professional sanctions and costs orders.

Dias uses this as a springboard for a broader argument about where AI fits in legal practice. He doesn’t reject AI - his post acknowledges its “extraordinary capacity” - but positions it firmly as a tool requiring human oversight and professional judgment. The phrase “soul power” refers to the human elements he argues are irreplaceable: ethical responsibility, professional accountability, and the consequences that come with putting your name to legal work.

The blog post comes as the profession grapples with practical questions about AI governance. The SRA’s February guidance emphasised that solicitors remain personally responsible for all work produced with AI assistance, including obligations to verify accuracy and maintain client confidentiality. Several recent disciplinary cases have involved lawyers failing to properly supervise AI-generated content.

Dias, who co-founded Lawyery and created what the firm describes as an “ethical immigration advice code”, appears to be positioning himself within a growing conversation about professional values in an AI-augmented legal system. His argument isn’t that AI is inherently problematic, but that the profession needs to maintain clear boundaries about where human judgment remains essential.

The ‘soul power’ framing suggests this debate extends beyond technical competence to fundamental questions about professional identity. As AI capabilities expand, Dias seems to argue, the profession’s response shouldn’t be to compete with machines but to emphasise the distinctly human elements of legal practice - accountability, ethical judgment, and professional responsibility.

The post reflects broader tensions within the legal profession over how to adopt AI tools while maintaining professional standards. Recent court sanctions have highlighted the risks of inadequate oversight, while the SRA’s guidance attempts to provide a framework for responsible use. Dias’s contribution stands out as a clear articulation of why human accountability cannot be outsourced, even as the tools lawyers use become increasingly sophisticated.