How many lawyers need to screw themselves over by using LLMs to write legal briefs before the others realize that doing so just might be a bad idea?
I mean, come on, people. There is no such thing as actual artificial “intelligence.” There are programs, like LLMs, that try to mimic intelligence, but they are not actually intelligent. These models are trained on data from all over the internet with no vetting for accuracy. When the thing comes up with legal cases to cite, it is just as likely to produce a fictional case from some story as an actual one.
It’s not like it’s looking up anything, either. It’s just putting words together that sound right to us. It could hallucinate a citation that never existed even as a fictional case, let alone a real one.
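For the non-technical crowd, here’s roughly what’s going on under the hood. This is a deliberately toy sketch (the token table below is made up; a real model has billions of learned weights instead), but the shape is honest: generation is a loop that samples the next plausible token, and nothing in that loop ever consults a database of real cases.

```python
import random

# Made-up "learned" next-token statistics, standing in for a real model's
# billions of weights. (Entirely hypothetical, for illustration only.)
NEXT_TOKEN_PROBS = {
    ("See",): {"Smith": 0.6, "Jones": 0.4},
    ("See", "Smith"): {"v.": 1.0},
    ("See", "Smith", "v."): {"Jones,": 0.7, "Acme,": 0.3},
    ("See", "Smith", "v.", "Jones,"): {"512": 0.5, "410": 0.5},
    ("See", "Smith", "v.", "Jones,", "512"): {"U.S.": 1.0},
    ("See", "Smith", "v.", "Jones,", "512", "U.S."): {"218": 1.0},
}

def generate(prompt, max_tokens=6):
    tokens = list(prompt)
    for _ in range(max_tokens):
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:
            break  # No statistics for this context in our toy table.
        # Sample the next token in proportion to how "plausible" it looks.
        words, weights = zip(*dist.items())
        tokens.append(random.choices(words, weights=weights)[0])
    return " ".join(tokens)

# Prints e.g. "See Smith v. Jones, 512 U.S. 218" -- formatted exactly like
# a real citation, and nothing ever checked whether the case exists.
print(generate(("See",)))
```

That’s the whole trick: plausibility, not truth.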
At this point, everyone should understand that every single thing a public AI “writes” needs to be vetted by a human, particularly in the legal field. Lawyers who don’t understand this shouldn’t be lawyers.
(On the other hand, I bet all the good law firms are maintaining their own private AI: they feed it the relevant case histories directly, specifically instruct it to cite published works and not make shit up on its own, and then validate it all anyway, because their professional reputation depends on it.)
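To make that concrete, a setup like that would look something like the sketch below: retrieval-augmented generation over a vetted case database, plus a hard check that every citation in the draft resolves to a known case. Everything here is hypothetical (the two-entry database, the `call_llm` stub, the naive citation regex); it’s the shape that matters.

```python
import re

# Hypothetical vetted database: citation string -> full text of the opinion.
VETTED_CASES = {
    "Smith v. Jones, 512 U.S. 218": "...full text of the opinion...",
    "Doe v. Acme Corp., 90 F.3d 101": "...full text of the opinion...",
}

def retrieve(query, k=2):
    """Crude keyword scoring; a real system would use proper legal search."""
    words = query.lower().split()
    scored = sorted(VETTED_CASES.items(),
                    key=lambda item: -sum(w in item[1].lower() for w in words))
    return [cite for cite, _ in scored[:k]]

def call_llm(prompt):
    """Stand-in for the actual model call (hypothetical)."""
    raise NotImplementedError

def extract_citations(text):
    """Naive regex for 'X v. Y, 512 U.S. 218'-style citations."""
    return re.findall(r"[A-Z][\w.]* v\. [A-Z][\w. ]*?, \d+ [\w.]+ \d+", text)

def draft_brief(question):
    sources = retrieve(question)
    prompt = ("Answer using ONLY the cases below, and cite them exactly "
              "as given.\n" + "\n".join(sources) + "\n\nQuestion: " + question)
    draft = call_llm(prompt)
    # The part that actually matters: reject any citation that does not
    # resolve to a case in the vetted database.
    for cite in extract_citations(draft):
        if cite not in VETTED_CASES:
            raise ValueError("Hallucinated citation, rejecting draft: " + cite)
    return draft
```

Even then, the final validation is human review; the code just guarantees the one obvious failure mode (citing a case that doesn’t exist) can’t slip through silently.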
It’s one thing to use it as a fancy spell check; it’s another to have it generate AI slop and then present that as a legal argument without reading it.
The fact that so many lawyers are pulling this shit should have people terrified about how many AI-generated documents are making it into the record without being noticed.
It’s probably only a matter of time before one of these non-existent cases results in a decision that causes serious harm.