The citations and quotations the New York attorney submitted, which were generated by ChatGPT, were described by the judge as “bogus.”
Steven Schwartz, a New York lawyer, is facing criticism for using ChatGPT, an AI language model, to conduct legal research for a case against Avianca Airlines. Robert Mata hired Schwartz, of the firm Levidow, Levidow & Oberman, to pursue an injury claim stemming from a collision with a service cart on a flight in 2019.
The case took a turn when the judge discovered contradictions and factual inaccuracies in the submitted materials. On May 24, Schwartz confirmed in a written filing that he had used ChatGPT for his legal research. The revelation has stirred debate and raised concerns about the accuracy and reliability of using AI in legal proceedings.
The incident highlights the potential dangers of relying solely on AI tools for complex legal research. It also underscores the value of careful fact-checking and human expertise in legal proceedings, particularly when presenting the court with accurate and trustworthy information.
The judge further noted that some of the cases cited in the submissions did not exist, and that in one instance a docket number on a filing had been confused with that of another court filing.
![Lawyer uses ChatGPT in court and now ‘greatly regrets’ it f2394669 96f2 4010 8a03 53cbf0e6bbe3](https://i0.wp.com/s3.cointelegraph.com/uploads/2023-05/f2394669-96f2-4010-8a03-53cbf0e6bbe3.png?ssl=1)
Schwartz said he regrets having trusted the AI chatbot without conducting his own due diligence.
The extent to which ChatGPT can be integrated into professional work has recently been the subject of heated debate.