ChatGPT, Attorney at Law — or — Trust, but Verify

Florence Lo/Reuters' ChatGPT screen in perspective, via engaged.com, used w/o permission.

There are times when I almost regret having successfully avoided a conventionally-successful career.

Last weekend was not one of them.

Partly because I saw what happens when an otherwise-smart person forgets to think.


Big-Time Bungle: Bogus References

BBC News headline, 'ChatGPT: US lawyer admits using AI for case research'. Image credit: Reuters. (May 28, 2023)

ChatGPT: US lawyer admits using AI for case research
Kathryn Armstrong, BBC News (May 28, 2023)

“…A judge said the court was faced with an ‘unprecedented circumstance’ after a filing was found to reference example legal cases that did not exist.

“The lawyer who used the tool told the court he was ‘unaware that its content could be false’.

“ChatGPT creates original text on request, but comes with warnings it can ‘produce inaccurate information’….”

First, a little background. Then I’ll give my opinion about ChatGPT and artificial intelligence in general, and why I don’t think humanity is doomed. Not any more than usual, at any rate.

A law firm was helping someone sue an airline. They sent a brief to the airline’s lawyers.

The law firm’s trouble started when the airline’s lawyers did what the law firm should have done before sending a brief: look up the cases referenced in the brief.

That was an exercise in futility, since the cases weren’t real.

The airline’s lawyers wrote the judge, explaining their problem. The judge asked the law firm to explain their “bogus quotes and bogus internal citations”.

Back to that BBC News piece.

“…‘Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,’ Judge Castel wrote in an order demanding the man’s legal team explain itself.

“Over the course of several filings, it emerged that the research had not been prepared by Lawyer P. [redacted], the lawyer for the plaintiff, but by a colleague of his at the same law firm. Lawyer S. [redacted], who has been an attorney for more than 30 years, used ChatGPT to look for similar previous cases.

“In his written statement, Lawyer S. [redacted] clarified that Lawyer P. [redacted] had not been part of the research and had no knowledge of how it had been carried out….”
(Kathryn Armstrong, BBC News (May 28, 2023)) [emphasis mine]

(I’ve put a longer excerpt near the end of this post.1 I redacted the names, just in case folks who achieved international fame get touchy. Besides, there’s no point in trying to make their situation any worse.)

Trust, Assumptions and ChatGPT

Image from TECHAERIS article: 'Samsung employees may have leaked sensitive company data to ChatGPT', Alex Hernandez (April 7, 2023)

My hat’s off to Lawyer S. [redacted], for his clarification. It’s a fine example of taking responsibility for one’s actions.

His not noticing ChatGPT’s warning that it can “produce inaccurate information” is a fine example, too.

But not the sort that most folks want on their resume.

And the story just keeps getting better. Or worse, depending on one’s viewpoint.

Lawyer S. kept screenshots of what looks like a conversation he had with ChatGPT.

“…’Is v. [redacted] a real case,’ reads one message, referencing V.v.C. [redacted], one of the cases that no other lawyer could find.

“ChatGPT responds that yes, it is — prompting ‘S’ (Lawyer S. [redacted]) to ask: ‘What is your source’.

“After ‘double checking’, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.

“It says that the other cases it has provided to Lawyer S. [redacted] are also real….”
(Kathryn Armstrong, BBC News (May 28, 2023))

Double checking references is a good idea. Having ChatGPT do its own double checking, not so much.
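Verifying a citation just means looking it up somewhere other than the tool that produced it. Here’s a minimal sketch of that idea, in Python. Everything in it is my invention for illustration: `lookup_case` is a hypothetical stand-in for querying an authoritative legal source, and the case names are placeholders.

```python
# Sketch of "trust, but verify" for machine-generated citations.
# lookup_case() is a hypothetical stand-in for querying an authoritative
# source (a court reporter, a docket search, a legal database):
# never the chatbot that produced the citation in the first place.

def lookup_case(citation: str) -> bool:
    """Hypothetical query against an authoritative source."""
    authoritative_records = {"Real v. Case, 123 F.3d 456"}  # placeholder data
    return citation in authoritative_records

def vet_brief(citations: list[str]) -> list[str]:
    """Return every citation that could not be independently confirmed."""
    return [c for c in citations if not lookup_case(c)]

suspect = vet_brief(["Real v. Case, 123 F.3d 456",
                     "Bogus v. Airline, 999 F.2d 1"])
print(suspect)  # ['Bogus v. Airline, 999 F.2d 1']: flag these for a human
```

The point of the sketch is the second function’s shape: the checker never asks the generator whether the generator was right.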

Let’s do a thought experiment: assume that ChatGPT is a person. Which I don’t, but bear with me.

ChatGPT was released in November, 2022,2 about six months ago. Let’s say that the law firm, or Lawyer S., began using the chatbot immediately.

That’d make ChatGPT the equivalent of an intern, or maybe a law clerk, fresh out of college with no previous work experience.

I’m no lawyer, but trusting a new-on-the-job clerk to do research is one thing.

Assuming that the same newbie clerk should verify references in its own report — that’s something else.


Two Timelines, a Career and Experience

Dik Browne's 'Hagar the Horrible': 'It may be the end of civilization as we know it.' (February 25, 1973)

I don’t see either ChatGPT, or artificial intelligence in general, as a looming doom.

On the other hand, learning new skills — or at least using common sense — is at least as important now as it was when I was growing up.

Now let’s look at a possible timeline for the 30-years-plus career of Lawyer S.

Let’s make it 31 years, and say that he pursued his career on schedule. That would mean he graduated from law school and started practicing in 1992.

Earning a law degree takes seven years after high school,3 so he would have graduated from high school in 1985: and, if he was 18 at the time, been born around 1967.

This is hypothetical, so Lawyer S. might be a bit older than that. But let’s assume he’s in his mid-50s: and look at what’s been happening since his birth.
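For anyone who likes the arithmetic spelled out, here it is as a few lines of Python. Every number is an assumption taken from the paragraphs above, not a verified fact:

```python
# Hypothetical timeline for "Lawyer S." All inputs are assumptions.
THIS_YEAR = 2023
YEARS_IN_PRACTICE = 31       # "more than 30 years", rounded to 31
DEGREE_YEARS_AFTER_HS = 7    # four-year bachelor's plus three years of law school
AGE_AT_HS_GRADUATION = 18    # yet another assumption

started_practicing = THIS_YEAR - YEARS_IN_PRACTICE                  # 1992
graduated_high_school = started_practicing - DEGREE_YEARS_AFTER_HS  # 1985
born = graduated_high_school - AGE_AT_HS_GRADUATION                 # 1967

print(started_practicing, graduated_high_school, born)  # 1992 1985 1967
```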

During his childhood, artificial intelligence was literally science fiction in films like “2001: A Space Odyssey” and “Colossus: The Forbin Project”. Meanwhile, computers were getting smaller, less expensive and a whole lot more capable.

Using 20-20 hindsight, personal computers go back to Edmund Berkeley’s Simon in 1950. But as mass-market consumer electronic devices, personal computers began in 1977 and didn’t take off until the 1980s. I’m oversimplifying, a lot.

Lawyer S. probably heard of personal computers as a high schooler, but my guess is that even then he was focused on less nerdy matters.

Microsoft’s MS-DOS came out in 1981,4 around the time Lawyer S. would have been in high school: assuming that my timeline is accurate. Which, remember, is an assumption.

Now, let us turn our attention away from this attorney’s successful career — and look at what I’ve been doing.

A Little of This, a Little of That

Photo: Brian H. Gill, at his desk. (March 2021)

By the time Lawyer S. was in college, in the mid-1980s, I’d gotten a B.A. in history.

I’d also flunked out halfway through both a master’s in library science and a B.S. in computer science. But I had earned a B.S. in English and done time as a secondary school teacher. I’d had a bunch of other jobs, too, including:

  • Beet chopper
  • Computer operator
  • Employment service interviewer
  • Flower delivery guy
  • Office clerk/customer service
  • Radio disk jockey
  • Sales clerk
  • Staff writer for an historical society

I’d married a woman with a degree in computer science, started a family with her and began working for a small publishing house here in Sauk Centre.

Fast-forward seven years. Lawyer S. has now become a practicing attorney. I’ve been writing advertising copy and doing graphic design for that publisher.

At some point, my employer’s marketing manager noticed my personal website: so I created and launched one for the company.

I became their ‘computer guy’ and list manager. That’s a fancy way of saying that I answered questions and sorted out SNAFUs. When I wasn’t doing that, I was keeping track of their customers and mailing lists.

I’m not sure that any of my job history, aside from that very brief stretch as a teacher and time as a radio DJ, was “professional” work: since I hadn’t been trained or certified for what I did. Apart from some on-the-job training, which was a really good idea.

But, having seen what can happen when folks enjoy more success and less unemployment, I’m grateful for not getting stuck in a rut.

And I’ve been encouraged to think, even on the job.


Using Our Brains: It’s an Option

WiNG's photo of the Beijing Television Cultural Center fire. (February 9, 2009) via Wikimedia Commons, used w/o permission.
Common sense and safety protocols put on hold: Beijing. (2009)

I can’t reasonably argue that artificial intelligence is completely harmless.

No technology, from lightning rods to video games, is utterly idiot-proof.

Even fire, a technology we’ve been using for maybe a million years, is a problem when someone doesn’t use common sense.6 I figure that’ll still be true when the Code of Ur-Nammu, the UN Charter and whatever we try next all seem roughly contemporary.

The problem isn’t technology. It’s us, and I’d better explain that.

We’ve got brains. We’re rational. But we also have free will: so using our brains is an option, not a hardwired response. And our choices have consequences. (Catechism of the Catholic Church, 1701-1709, 1730-1738)

Common Sense and Other Alternatives

From Fritz Lang's 'Metropolis' (1927): the hero is hallucinating, seeing a big machine as Moloch, eating workers.

I talked about ChatGPT, fear and getting a grip back in mid-April.

Not much has changed since then, although I’ve been noticing more scary headlines about the existential threat of chatbots, and fewer doomsayers touting economic woes.

I didn’t, and don’t, see chatbots and artificial intelligence as a dire threat.

But folks like that 20-something national guardsman who shared classified military intelligence on social media? And this probably 50-something attorney who put faith (apparently) in the unerring skills of a six-month-old chatbot?5

I don’t think either of them is a threat to humanity. Not as individuals. But if enough folks start putting their minds on hold: we could have problems.

A Skunk, a Wood Pile, Dynamite and the Sixties

RxS' photo: 'Gateway to Lake Wobegon' sign in Holdingford, Minnesota. (2006) via Wikipedia, used w/o permission.

Making daft mistakes isn’t new.

My wife tells me of the time when some kids noticed a skunk outside the school.

This was back when the local school had wood-burning heaters, and a wood pile stacked against one wall.

Boys, at least, could and did take rifles to school with them so they could do some hunting on the way home. The point is that social norms were different then.

Anyway, the skunk hid in the wood pile. The kids couldn’t spook it out. So one of them went home, returning with dynamite, a fuse and a blasting cap.

Yes, I know. Today that’d be international news. Back then, it was kids using stump-removal tech without permission.

And remember, these were kids. Smart rural kids, but kids nonetheless.

The one with dynamite used a tad more than was absolutely necessary. When the smoke cleared, the skunk was gone: along with the wood pile and much of the school building’s paint on that side.

Nobody was hurt. Startled by the blast, but not hurt. The kids were tasked with cleaning and repainting that side of the school, and life went on.

That was then, this is now.

I do not, by the way, yearn for ‘the good old days’. I remember them, and they weren’t.

Many reforms of the Sixties were long overdue. Some have worked out fairly well. Some, I think, haven’t. And that’s another topic.

Changing Times, Human Nature

Brian H. Gill's collage: a rotary telephone, ca. 1955; Number One Electronic Switching System, 1976 and after; title card for The Addams Family, ca. 1964; family watching television, 1958; publicity still from Batman, ca. 1967.

This is not the world I grew up in.

I’m okay with that, and in many ways I think “now” is better than “then”. But in other ways: well, at best it’s no worse.

And that leads me to an online chat I had with my oldest daughter last weekend.

We’d been talking about the debacle involving an attorney and ChatGPT.


Me:
“Yeah. Amazing. I – good grief.
What bothers me, in a way, is that I’m disgusted – but not all that surprised.”

Oldest daughter:
“Well, you’re over 70, lived through the ’60s, and have been paying attention.”

“Disgusted” is a fairly strong word. I don’t use it all that much. When I do, it’s often because someone who should have known better displayed a stellar lack of good sense.

This is where I could launch into a conventional ‘back in my day’ rant about the decline and fall of practically everybody. But I won’t. Again, my memory is too good.

Much as I might enjoy living in a world where folks have always acted rationally, that’s not the world we all live in.

With the passing of decades since my youth, I’ve forgotten names and details: but some high-level national officials became briefly famous after being caught selling state secrets.

At the time, I wasn’t sure what bothered me more: that they were betraying their country’s trust, or that they were selling information at bargain-basement-closeout prices. That may say more about me than the doofuses who got caught, and that’s yet another topic.

The point this time is that human nature hasn’t changed.

We’re still rational, we still have free will, so we can still put our minds on hold.

And that’s still a bad idea.

THE ROBOTS ARE COMING! THE ROBOTS ARE COMING!

Ford Beebe, Saul A. Goodkind, George Plympton and Basil Dickey's malevolent marauding mechanical monster from 'The Phantom Creeps'. (1939) via David S. Zondy's 'Tales of Future Past' http://davidszondy.com/futurepast/

So: what can I, personally, do to save humanity from creeping socialism, acid rain, or the current crisis du jour?

Precious little, actually.

I’m just some guy living in central Minnesota, talking about chatbots and making sense.

I can, however, suggest that using our brains is a good idea.

Even if that means reading past the headlines, and maybe even thinking about the appeals to fear that are in play.

Like this gem:

AI could pose ‘risk of extinction’ akin to nuclear war and pandemics, experts say
Aimee Picchi, MoneyWatch, CBS News (May 30, 2023)

“Artificial intelligence could pose a ‘risk of extinction’ to humanity on the scale of nuclear war or pandemics, and mitigating that risk should be a ‘global priority,’ according to an open letter signed by AI leaders such as Sam Altman of OpenAI as well as Geoffrey Hinton, known as the ‘godfather’ of AI.

“The one-sentence open letter, issued by the nonprofit Center for AI Safety, is both brief and ominous, without extrapolating how the more than 300 signees foresee AI developing into an existential threat to humanity.

“In an email to CBS MoneyWatch, Dan Hendrycks, the director of the Center for AI Safety, wrote that there are ‘numerous pathways to societal-scale risks from AI.’

“‘For example, AIs could be used by malicious actors to design novel bioweapons more lethal than natural pandemics,’ Hendrycks wrote. ‘Alternatively, malicious actors could intentionally release rogue AI that actively attempt to harm humanity. If such an AI was intelligent or capable enough, it may pose significant risk to society as a whole.’…”

Again, no technology is one hundred percent absolutely guaranteed idiot-proof safe.

A breath of good sense in the CBS News piece is “…AIs could be used by malicious actors….” — Hendrycks, at least, apparently realizes that people use technology.

How we use it, and what we use it for, is up to us.

If “malicious actors” use AI, artificial intelligence, with the cunning and wisdom displayed by that attorney: hazmat cleanup might be the biggest problem for the rest of us, after their demise.

Toyota's photo: Kirobo Mini. (2018)

Then there was a headline that might have, but didn’t, read “Killer Robot Drone Runs Amok”. The article even, at the very end, included a little background and context.

I put an excerpt in the footnotes.7

One more thing. “Trust, but verify” is a rhyming Russian proverb.8 And that is yet again another topic, which finally brings me to the seemingly-inevitable links:


1 A definition and an excerpt:

  • brief
    Wex, Legal Information Institute, Cornell Law School

A longer, but still redacted, excerpt from that BBC News piece:

ChatGPT: US lawyer admits using AI for case research
Kathryn Armstrong, BBC News (May 28, 2023)

“…A judge said the court was faced with an ‘unprecedented circumstance’ after a filing was found to reference example legal cases that did not exist.

“The lawyer who used the tool told the court he was ‘unaware that its content could be false’.

“ChatGPT creates original text on request, but comes with warnings it can ‘produce inaccurate information’.

“The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief that cited several previous court cases in an attempt to prove, using precedent, why the case should move forward.

“But the airline’s lawyers later wrote to the judge to say they could not find several of the cases that were referenced in the brief.

“‘Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,’ Judge Castel wrote in an order demanding the man’s legal team explain itself.

“Over the course of several filings, it emerged that the research had not been prepared by Lawyer P. [redacted], the lawyer for the plaintiff, but by a colleague of his at the same law firm. Lawyer S. [redacted], who has been an attorney for more than 30 years, used ChatGPT to look for similar previous cases.

“In his written statement, Lawyer S. [redacted] clarified that Lawyer P. [redacted] had not been part of the research and had no knowledge of how it had been carried out….

“…Screenshots attached to the filing appear to show a conversation between Lawyer S. [redacted] and ChatGPT.

“‘Is v. [redacted] a real case,’ reads one message, referencing V.v.C. [redacted], one of the cases that no other lawyer could find.

“ChatGPT responds that yes, it is – prompting ‘S’ to ask: ‘What is your source’.

“After ‘double checking’, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw.

“It says that the other cases it has provided to Lawyer S. [redacted] are also real.

“Both lawyers, who work for the firm L. L. O. [redacted], have been ordered to explain why they should not be disciplined at an 8 June hearing.

“Millions of people have used ChatGPT since it launched in November 2022.

“It can answer questions in natural, human-like language and it can also mimic other writing styles. It uses the internet as it was in 2021 as its database.

“There have been concerns over the potential risks of artificial intelligence (AI), including the potential spread of misinformation and bias….”

2 I talked about this in April: “ChatGPT and the End of Civilization as We Know It” > It’s New, it’s Scary and it’s (Not) the End of Creative Writing (April 15, 2023)

3 What it takes to be a lawyer:

  • Lawyers
    Occupational Outlook Handbook, U.S. Bureau of Labor Statistics

4 A little history:

5 Headlines:

6 “No photos, no video clips, no in-depth reports” — but hard to ignore:

7 “missing ‘important context'”:

US Air Force denies AI drone attacked operator in test
Zoe Kleinman, BBC News (June 2, 2023)

“…I spent several hours this morning speaking to experts in both defence and AI, all of whom were very sceptical about Col Hamilton’s claims, which were being widely reported.

“One defence expert told me Col Hamilton’s original story seemed to be missing ‘important context’, if nothing else.

“There were also suggestions on social media that had such an experiment taken place, it was more likely to have been a pre-planned scenario rather than the AI-enabled drone being powered by machine learning during the task – which basically means it would not have been choosing its own outcomes as it went along, based on what had happened previously.

“Steve Wright, professor of aerospace engineering at the University of the West of England, and an expert in unmanned aerial vehicles, told me jokingly that he had ‘always been a fan of the Terminator films’ when I asked him for his thoughts about the story.

“‘In aircraft control computers there are two things to worry about: “do the right thing” and “don’t do the wrong thing”, so this is a classic example of the second,’ he said.

“‘In reality we address this by always including a second computer that has been programmed using old-style techniques, and this can pull the plug as soon as the first one does something strange.'”
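As I understand it, what Professor Wright describes is a watchdog arrangement: a simple, separately built monitor that can overrule a complex controller. Here’s a toy sketch of that shape, in Python. All names and numbers are hypothetical; real flight computers are certified, redundant hardware, not a dozen lines of script:

```python
# Toy sketch of the "second computer" pattern described above: a simple,
# independently written monitor that clamps a complex controller's output
# to known-safe limits. Every name and number here is hypothetical.

def clever_controller(sensor_reading: float) -> float:
    """Stands in for the complex, hard-to-verify control computer."""
    return sensor_reading * 2.5  # placeholder logic; may "do something strange"

def simple_monitor(command: float) -> float:
    """Old-style safety check: clamp commands to a known-safe range."""
    SAFE_MIN, SAFE_MAX = -10.0, 10.0
    return max(SAFE_MIN, min(SAFE_MAX, command))

for reading in (1.0, 100.0):
    raw = clever_controller(reading)
    safe = simple_monitor(raw)
    print(f"reading={reading}: controller said {raw}, monitor allowed {safe}")
```

The design choice worth noticing: the monitor is deliberately simple, so it can be checked by hand, and it sits between the clever part and the real world.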

8 A little more history:


About Brian H. Gill

I was born in 1951. I'm a husband, father and grandfather. One of the kids graduated from college in December, 2008, and is helping her husband run businesses and raise my granddaughter; another is a cartoonist and artist; #3 daughter is a writer; my son is developing a digital game with #3 and #1 daughters. I'm also a writer and artist.

2 Responses to ChatGPT, Attorney at Law — or — Trust, but Verify

  1. I like your comparison of ChatGPT to a six-month-old intern! Makes me think of it as an overestimated child prodigy as well, and now I’m remembering how some fiction media depicts ridiculously smart kids, so much that we forget that they’re supposed to be depicted humanly.

    Also, seeing your list of past jobs and your talk about them further cements you as a great writer in my eyes. I mean, I feel like most writers write about writing more than anything else, you know?

  2. Writers and writing? There are writers who write about writing. And thank you for your assessment of what I do!
    As for kids, child prodigies in fiction and depicting them as human – real people? That can be an issue. So can avoiding depicting characters as more than ‘types’.
    Finally – good to hear from you!
