The growing use of artificial intelligence (AI) technology and a lack of regulations on its use in business may lead legal professionals to inadvertently breach ethical obligations.
AI has found its way into various aspects of legal practice, including contract analysis, legal research, due diligence, predictive analytics and document automation.
However, despite this growing prevalence, no formal legislation yet governs its use, which raises concerns about the law's ability to keep pace with technological advancements.
The UK Government is taking a different approach to AI regulation from that of the EU. The UK has championed a sector-based framework, but as yet no specific legislation exists, only guidance.
The emergence of ChatGPT, for instance, has exposed the legal community to the many potential benefits of AI, including its various efficiencies.
However, the legal profession tends to follow traditional ways of working, and often exhibits resistance to new ideas.
I anticipate that all lawyers will find some way to benefit from AI in the future, but we as a profession must address the potential for misuse and the ethical concerns that come with it.
For example, if AI is used to assess mortgage applications, it could learn discriminatory patterns from the data it is given.
Properties in areas of social deprivation that are frequently damaged, vandalised or undersold might automatically trigger a decision - without any human intervention - to increase premiums or charges in those areas, severely affecting the most vulnerable and at-risk members of our society.
The AI system itself has no inherent understanding that certain actions may be unfair or discriminatory, as its primary objective is simply to optimise for the most favourable outcomes within its capabilities.
It has no morals or values, and trying to program AI networks with human emotions presents a distinct challenge.
A recent incident involved a long-standing New York lawyer who used an AI tool for case research and cited non-existent judicial decisions produced by ChatGPT in a personal injury claim he was filing on behalf of a client.
As a result, the lawyer now faces a sizeable lawsuit and the prospect of being struck off.
The language model chatbot creates original text on request, but comes with warnings that it can 'produce inaccurate information'.
Data protection will also prove to be an issue when it comes to using AI in the legal profession: hacking incidents are not a matter of if but when, potentially exposing sensitive information that users have entrusted to the AI network.
There are some obvious legal ramifications in this area.
How do you sue an AI system? And who is really to blame - the users who input the data or the creators of the AI software being used?
Answering these important questions is proving increasingly complex.
We have witnessed similar legal battles involving social media platforms, which face daily lawsuits over their users' actions. This issue is likely to resurface in the context of AI.
There is no formal legislation surrounding the use of AI yet, and the problem with the law is that it is so slow to catch up with technology.
The counter-argument, however, could be: can we not use AI to catch up more quickly?
Stephen Grant is a solicitor who specialises in corporate, commercial and IT work at Wright, Johnston & Mackenzie