Generative AI in the Law: Issues of and Solutions to Attorney AI Use in the State of New York
By Morgan Ratcliff ‘26
Generative AI and the Law
Generative artificial intelligence is quickly seeping into nearly every field of work, and the law is no exception. Lawyers today have the opportunity to rethink how the law will be practiced in tandem with these technological developments. The advancement of generative artificial intelligence and its integration into the legal field continues to be a complex process. In the United States, there is currently no comprehensive federal regulation of AI use. In keeping with the decentralized structure of the American judicial system, this absence of federal regulation has left decisions about the legality of AI use largely to state courts.
The measured process of making changes in the legal system is at odds with the rapid pace of technological advancements. This has led to issues involving a lawyer’s duties of both confidentiality and competence. Today, lawyers and judges are setting precedent for how AI can and will be used in court. States across the U.S. are handling the growth of AI and its implications for the field of law differently. The state of New York, for example, held a “program discussing AI's potential and its pitfalls for the legal profession.” This program brought legal professionals together for a speech and discussion about AI.
The program was led by legal technology expert Paul J. Unger, who cautioned lawyers about the risks of using AI. Unger encouraged lawyers to always review material produced by AI. He stated, “It gets us a first draft, it is never a final draft.” He went on to compare AI’s abilities to those of a “law student or a young lawyer” and further cautioned lawyers to always double-check the work of AI.
Unger’s warning highlights the need for lawyers to be aware of AI’s pitfalls. Without proper AI education, lawyers risk falling short of their duty of competence. Lawyers are required to provide competent representation to their clients, which means possessing “the legal knowledge, skill, thoroughness and preparation” necessary for representation. The use of AI in the law carries new implications for this duty: if a lawyer wishes to use AI in their work, they must be well-versed in its risks and must take the necessary steps to remedy any errors that occur. Additionally, lawyers will need to know how to communicate their use of AI to their clients.
While AI’s pitfalls can be daunting, Unger emphasizes that, when used correctly, AI can be a helpful aid. One area in which AI can be helpful is legal research. Unger explained that when conducting research, it is important to use purpose-built legal AI tools rather than general-purpose chatbots such as ChatGPT. According to Unger, “A legal tool will hallucinate 10% to 20% of the time, while a tool like ChatGPT will hallucinate 50% to 80%.” When made aware of the different kinds of AI tools and the risks of using them, lawyers can implement AI in a way that is both ethical and efficient.
An additional warning from Unger concerned client confidentiality. Unger cautioned against entering any confidential information into AI tools such as ChatGPT. Information entered into these open-system AI tools can become training data for the underlying large language model. In the past year, New York has implemented a new policy that prohibits judges and other court staff from entering confidential information or documents into AI tools unless they are a “private model.” Under this policy, a “private model” is defined as operating “under the control of the court system” and not sharing data with “public tools.” While this policy does not appear to apply directly to lawyers, Unger’s warning points the same way: to avoid violating their duty of client confidentiality, lawyers should not input confidential information into AI tools without proper protections in place.
AI Use in Court
Generative artificial intelligence has come a long way in its ability to assist with legal work. In fact, GPT-4, the model underlying ChatGPT, was even able to pass the bar exam. Well-known companies that specialize in legal research, such as LexisNexis and Westlaw, have created their own AI research tools. Today, AI is more accessible than ever and is being fine-tuned for use in specific fields. However, issues arise when individuals are uneducated about the risks of using AI. This is especially true in the high-stakes field of law.
On June 22, 2023, two lawyers in the state of New York were sanctioned for submitting six case citations that were completely fabricated. Who made up these cases? The large language model ChatGPT. The lawyers, Steven Schwartz and Peter LoDuca, along with their law firm, Levidow, Levidow & Oberman, were fined $5,000 for the mishap. This penalty came after Manhattan Judge P. Kevin Castel found that the lawyers had “acted in bad faith.” This is just one example of how the courts are responding to the wrongful use of AI in the law. Instances such as this are shaping how AI is used throughout the legal field.
The good-faith versus bad-faith distinction appears to be a consistent way in which judges decide whether wrongful AI use by a lawyer warrants punishment. In a similar case in the state of New York in 2024, a lawyer included fake citations in a filing made on behalf of his client, Michael Cohen. Cohen, known as “the former fixer for Donald Trump,” had previously pleaded guilty to campaign finance violations in 2018. The filing produced by his lawyer, David Schwartz, sought to “end his supervised release before November 2024.”
Schwartz claimed that he was under the impression that Cohen had obtained the case citations from a fellow lawyer. This was not the case: Cohen had used Google Bard, which fabricated the citations. Cohen claimed that he did not know Google Bard was a generative AI tool and believed it was a search engine akin to Google Search. After reviewing this instance of wrongful AI use, U.S. District Judge Jesse Furman concluded that there was not “evidence of bad faith to justify sanctions.” Despite finding that sanctions were unjustified, Furman did state that Schwartz’s actions were “certainly negligent, perhaps even grossly negligent.” This highlights the importance of attorneys thoughtfully reviewing citations, especially when AI is involved.
The conclusion of this case stands in stark contrast to that of the two lawyers sanctioned and fined in 2023. The contrast shows how issues involving lawyers’ use of AI are still working their way through the judicial system: lawyers are at the mercy of the judge to decide whether their inclusion of fake case citations reflected a bad-faith decision. These instances reinforce the advice of legal technology expert Paul J. Unger. AI tools can be helpful, but “lawyers must be cautious when using this new technology, lest they break confidentiality or violate ethical standards.”
The Future of AI Use in the Law
In his year-end report on the judiciary, U.S. Supreme Court Chief Justice John Roberts wrote that using AI requires “caution and humility,” and called citing nonexistent court cases in court papers “always a bad idea.” Chief Justice Roberts’s attention to AI underscores the importance and relevance of this issue. The warnings from legal technology experts such as Unger demonstrate the importance of training lawyers to enter a workforce in which AI is heavily involved.
As New York Chief Administrative Judge Joseph Zayas explains it, “While AI can advance productivity, it must be utilized with great care.” Zayas additionally states that AI “is not designed to replace human judgment, discretion, or decision-making.” Zayas echoes the opinions of Roberts and Unger. The rise of AI and its use in the legal field has led legal organizations such as the American Bar Association to take action and spread awareness about the risks of using AI.
In 2024, the ABA issued its first guidance on the ethics of AI use in the law. The guidance clarifies that attorneys and law firms who use generative AI must “fully consider their applicable ethical obligations,” including a lawyer’s duty of competence, the protection of client information, and the need to communicate thoroughly with clients. Educating lawyers about AI is imperative to ensuring that members of the legal profession can implement AI into their practice in a way that respects these obligations.
Morgan Ratcliff is a junior majoring in political science.
Sources
American Bar Association. (2024). ABA Issues First Ethics Guidance on a Lawyer’s Use of AI Tools. https://www.americanbar.org/news/abanews/aba-news-archives/2024/07/aba-issues-first-ethics-guidance-ai-tools/
American Bar Association. (n.d.). Rule 1.1 Competence. In Model Rules of Professional Conduct. https://www.americanbar.org/groups/professional_responsibility/publications/model_rules_of_professional_conduct/rule_1_1_competence/
Bliss, J. (2024). Teaching Law in the Age of Generative AI. Jurimetrics (Chicago, Ill.), 64(2), 111–161.
Melnitsky, R. (2025). AI Can do Many Tasks for Lawyers – But be Careful. NYSBA. https://nysba.org/ai-can-do-many-tasks-for-lawyers-but-be-careful/?srsltid=AfmBOorY0sh2LVv26f78cqEdTiMrWWrKlK-1PFlU5LAzqnqghzlWvcHr
Merken, S. (2023). New York lawyers sanctioned for using fake ChatGPT cases in legal brief. Reuters. https://www.reuters.com/legal/new-york-lawyers-sanctioned-using-fake-chatgpt-cases-legal-brief-2023-06-22/
Merken, S. (2025). New York Court System Sets Rules for AI use by Judges, Staff. Reuters. https://www.reuters.com/legal/government/new-york-court-system-sets-rules-ai-use-by-judges-staff-2025-10-10/
O’Keefe, A. (2025, July 28). AI Laws and Regulations Across Practice Areas. https://legal.thomsonreuters.com/blog/navigating-ai-laws-and-regulations-across-practice-areas/
Stempel, J. (2024, March 20). Michael Cohen will not face sanctions after generating fake cases with AI. Reuters. https://www.reuters.com/legal/michael-cohen-wont-face-sanctions-after-generating-fake-cases-with-ai-2024-03-20/