A recent study found that OpenAI’s GPT-4 AI chatbot can not only pass the ethics exam required by nearly every state in order to practice law, but also outperform most of the people who take the test.
GPT-4 correctly answered 74 percent of the questions on a simulated Multistate Professional Responsibility Exam (MPRE), while human test takers nationwide are estimated to answer 68 percent of the questions correctly on average, a study by LegalOn Technologies found, according to a report by Reuters. The MPRE is an exam required by most states; its purpose is to “measure candidates’ knowledge and understanding of established standards related to the professional conduct of lawyers.”
“Our study indicates that in the future it may be possible to develop AI to assist lawyers with ethical compliance and operate, where relevant, in alignment with lawyers’ professional responsibilities,” read the study by LegalOn Technologies, which sells AI software that reviews contracts.
Sophie Martin, a spokesperson for the National Conference of Bar Examiners, which develops the MPRE, said “The legal profession is always evolving in its use of technology, and will continue to do so,” and that “attorneys have a unique set of skills that AI cannot currently match.”
The MPRE subjects on which GPT-4 performed best included “conflicts of interest,” for which the AI chatbot answered 91 percent of questions correctly, and “client-lawyer relationships,” for which it answered 88 percent of questions correctly.
But GPT-4 was less accurate on exam questions about legal services and the safekeeping of funds and property, which it answered correctly 71 percent and 72 percent of the time, respectively.
“This research demonstrates for the first time that top-performing generative AI models can apply black-letter ethical rules as effectively as aspiring lawyers,” the study said.