Summary of risks and ethical challenges for legal professionals, and how to navigate the use of AI
When litigators use generative AI to help answer a specific legal question or draft a document specific to a matter by typing in case-specific facts or information, they may share confidential information with third parties, such as the platform's developers or other users of the platform, without even knowing it.
Many legal software tools are incorporating large language models (“LLMs”), such as GPT-4, to enhance their performance. But litigators should know that using these models carries a substantial degree of risk, and they must be careful about which technologies to use, when to use them, and how to use them.
Generative artificial intelligence (GenAI) is a still-unfolding story in the legal industry. There remains a lack of clarity in many areas as to how to employ this technology while still fulfilling a litigator’s legal and ethical responsibilities.
There’s no guidebook to tell an attorney how best to do this because the technological environment is changing too rapidly. It’s up to the individual legal professional to keep abreast of developments and responsibly experiment with generative AI. Having a solid grasp of both its capabilities and key issues will be necessary to know when — or when not — to use AI.
Generative AI and its LLMs have a number of limitations and weaknesses, and additional flaws in AI models may well come to light over the next year or two. Using this technology is far from a risk-free endeavor for legal professionals.
There are two primary categories of risk in AI and LLM usage: output risk, in which the information generated by the AI system proves too risky to use, and input risk, in which the information fed into an AI model may itself be put at risk.
Output risks
LLMs can hallucinate. That means, as Thomson Reuters' Rawia Ashraf defines it, “that they provide incorrect answers with a high degree of confidence.” The popular image of AI as an artificial brain that thinks by itself is vastly inaccurate. A GPT model cannot reason as human beings do, and its knowledge about a topic derives entirely from the data it has been given. Thus, inadequately prepared models may return murky, nonsensical, or flat-out wrong answers to user queries.
The possibility of AI hallucinations, combined with the relative scarcity of accurate legal domain knowledge currently found in most LLMs, makes it particularly risky for litigators to rely on information produced by an LLM at present.
Certainly, the more legal-specific data that is used to train LLMs (a process called fine-tuning), the lower the chance that a user will encounter a hallucination, and the more accurate the LLM's information will become. But we are far from that point yet.
LLMs, like any type of AI, are not purely objective engines of reason. The people who program them can be biased, and thus AI programs can be biased.
As Thomson Reuters' Ashraf notes, “if biases exist in the data used for training the AI, biases will inform the content that AI generates as well. Models trained with data that are biased toward one outcome or group will reflect that in their performance.”
Here’s an example. One prominent use of AI to date is by companies seeking to automate employee screening and recruitment. AI promises to streamline these processes via sorting, ranking, and eliminating candidates with minimal human oversight. But turning over these tasks to AI systems carries potential risk. Generative AI usage will not insulate an employer from discrimination claims, and AI systems may inadvertently discriminate.
How? AI tools may conduct analyses of internet, social media, and public databases, many of which contain personal information about prospective applicants that an employer cannot legally ask about on an employment application or during an interview. These include an applicant’s age, religion, race, sexual orientation, or genetic information.
Further, AI recruiting tools may duplicate and proliferate past discriminatory practices. Systems may favor applicants based on educational backgrounds or geographic locations, which in turn may skew results based on race. Incomplete data, data anomalies, and errors in algorithms may also create biased outcomes — an algorithm using data from one part of the world may not function effectively in other places.
Input risks
The greatest input risk that using LLMs presents today is a potential breach of confidentiality. If precautions aren't taken, using LLMs could wreak havoc on attorney-client privilege and client data security obligations.
Any attorney using an LLM must ensure that the platform does not retain any inputted data or allow any third parties to access it. While platform developers have begun to introduce new functionalities to address privacy issues, such as allowing users to turn off chat histories and to prevent information they enter from being used to train the platform, some LLMs have not had such upgrades.
A law firm may want to sign a licensing agreement — with the AI provider or the platform that incorporates the AI — with strict confidentiality provisions that explicitly prevent uploaded information from being retained or accessed by unauthorized persons.
But even with such an agreement in place, legal professionals should regard an LLM as a still-insecure venue. And they certainly shouldn't put any confidential information into a public model, such as ChatGPT.
Generative AI can’t replace human expertise, nor take the blame for what’s ultimately a human error. The buck will always stop with litigators when they use generative AI in their legal work.
Just as litigators can incur penalties for the conduct of non-attorneys they supervise or employ, those who use generative AI to assist them in legal work without proper oversight could be charged with several ethical violations.