"Nearly nine out of 10 legal professionals say they use AI in some capacity ..."

Eh?

"... according to a survey for a legal tech company"

Oh. Right.

If it is important that your lawyer uses "AI", I'm afraid that I am not the lawyer for you.

https://www.lawgazette.co.uk/news/ai-is-mainstream-in-law-but-clients-are-not-told/5126414.article

AI is mainstream in law - but clients are not told

Despite the pace of adoption, only 27% of firms have fully embedded the technology, survey shows.

If your lawyer was using "AI", would you want to be told explicitly?
Yes — 30.4%
No — 0.9%
At that point I'd try to hire someone else — 66.5%
Something else — 2.3%

Poll ended.
@neil Depends what for
@dajb If you feel like expanding, I'd be interested to hear what you have in mind!

@neil I mean, I work with organisations and there are things that we explicitly tell them we're using AI for, and things that we don't.

The things that we tell them: we used it as part of our research process.

The things we don't: we used it to make this sound a bit better.

It's all contextual, I guess. If you're adding value to what you offer to clients, and wouldn't feel bad/ashamed if they found out, then there's no issue.

@dajb

> If you're adding value to what you offer to clients, and wouldn't feel bad/ashamed if they found out, then there's no issue.

My own view is that there are myriad issues with gen AI *irrespective* of whether it is disclosed or not - not disclosing it fully just makes matters worse!

@neil Well, indeed. Have a look at what @epilepticrabbit and I did with this (see Appendix 2) https://policy.friendsoftheearth.uk/reports/harnessing-ai-environmental-justice
Harnessing AI for environmental justice | Policy and insight

Principles and practices to guide climate justice and digital rights campaigners in the responsible use of AI.

@dajb @neil I'm still proud of that piece of work and use the principles if I use AI, which I do sometimes, but not all the time, because believe it or not my brain is still wicked functional :)
@neil If nothing else I'd have privacy concerns, same as if my doctor was using it.
@neil To be fair, I would probably also start looking for a different lawyer.
@neil In the same way I'd want the surgeon to tell me that if they don't wash their hands.
@neil given the number of cases where folk have got into trouble for AI making shit up, you'd have thought that notoriously risk-averse profession would be less likely to use it. But I guess not.
@neil If I were in a situation where I needed a lawyer, and I found out my lawyer was using "AI", I would run screaming into the sea and hope to drown. I'm certain that would be less painful than dealing with the problems the lawyer was about to create.
@neil I think for actually writing any legal text it would terrify me, but as an additional assistant reviewing text it is maybe acceptable.
Then again, I work in professional IT, where AI is in everything and I'm expected to use it heavily in near everything (and I've found some value in it, especially sometimes during reviews)

@neil If the results are good I wouldn't have any opinions on how the work gets done.

I had that same philosophy as a manager. If people wanted to work out of a coffee shop I didn't consider that to be my problem as long as they delivered.

Mandating tools - esp. for people I hire - seems a bit strange.

@troed

So is your answer that you would not want to be told about it (because, to you, "AI" is just a tool)?

@neil I did select "something else" because I feel like there's an assumption in the question as posed that the result of the work will be affected.
@troed But, in practice, you wouldn't feel the need to be told?

@neil No. And this is not even theoretical - depending on which region you get public healthcare from in Sweden they might already be using LLMs for note taking.

Doctors != Lawyers but I feel that for the discussion it would be similarly viewed.

@troed Interesting - thank you!

(FWIW, I would absolutely want to know if a doctor was using AI, or even automatic transcription, as part of treating/assessing me!)

@neil Sorry for the Swedish but here's a clickable map detailing all the different things in healthcare where AI is used at the moment. I think some of the medical terms might be readily understood in English as well due to latin roots :)

(And yes, there is pushback since the transcription has been dangerously wrong at times)

https://vardkartan.ai.se/

Vårdkartan — Explore AI initiatives in the healthcare sector

Vårdkartan shows AI initiatives in Swedish health and medical care. Explore how the regions are developing the healthcare of the future with AI. Created by Datastory and AI Sweden.
@troed Fascinating - thank you!
@troed @neil It's a sound argument, though I do think there should be _some_ restrictions in place. If someone gets the work done by hiring someone else to do it for them, I wouldn't be happy with that.
@neil I'd never hire a lawyer who's using "AI", because how could I trust their expertise and the correctness of their work? No thanks...
@neil There is only one viable business case for generative AI: fraud.
@neil More than most professions, yes. They’re still on the hook and responsible for their advice. Like doctors and engineers and CPAs, giving bad advice is potentially career ending and bankrupting for them. They have some skin in the game.

@neil

there are certain professions where LLM based AI and fuzzy answers just won't cut it. most are actual professions, with licensing requirements, like law, medicine, accounting, civil engineering/architecture, where incorrect or fuzzy answers kill people or incarcerate them.

i don't ever see a good use of LLM AI in such professions.

@neil I expect anyone I interact with in a professional capacity to tell me up front what data they collect and how it's processed, including all AI use. I expect them to tell me if their notes on our meeting will be stored in the cloud, will be uploaded to another jurisdiction, will be processed by a third party... My expectations are unlikely to be met but in my opinion they ought to be normal.
@neil To me this is a modernization of nothing more or less than being able to see who is in the room.
@neil I think that is quite a complicated question and depends how you define AI - using an LLM to write a contract? I'd want to know (and avoid).
Using a Co-pilot suggested reply email of "sure, that time works for me" or using a spell check that was using an LLM (I think Grammarly's spell check is AI now) ? Not so much. I wouldn't necessarily expect a lawyer who specialised in conveyancing to know all the situations where they might be using AI.
@WilliamLeech Nuanced responses are welcome!
@neil I'm afraid that's as nuanced as I get here!
@WilliamLeech @neil This is interesting to me, because I'd be much less concerned with (potentially local) AI drafting assistance on a document that may be largely boilerplate, and much more concerned with AI reading (with no presumed confidentiality) emails.
@neil I'd ask whether they used AI, and if yes, I'd ask what they used, what they used it for, and how. The answer to that would determine whether I find a different lawyer.
@neil messed up my poll choice, but for real, if I found out my lawyer was using AI, I would absolutely ditch their ass
@neil Depends on what kind of AI and what it was being used for.

@neil I'd want to know, and if they do, I'd quiz them on how much they actually know about how LLMs work and how they use them.

The outcome of that discussion would reveal whether I'd be comfortable with trusting their opinions.

@neil i mean it's fine if you are researching some stuff, but if you consider every response a source of truth then it's not good
@neil
I missed the poll, but option C.
@neil If my lawyer uses AI, they are not the lawyer for me.
@neil d'you mean not the 'client' for you..?

@noodlemaz

No?

Am I misunderstanding?

@neil it's me, I no understand this bit!
"If it is important that your lawyer uses "AI", I'm afraid that I am not the lawyer for you."
Probably... I didn't realise you were a lawyer..? 😅

@noodlemaz

> Probably... I didn't realise you were a lawyer..?

This is a very common reaction.

Most people assume that I am a model, or some kind of international playboy.

Or someone in tech.

@neil @noodlemaz I thought that you were an international man of mystery

@neil Our firm, like most others of its kind, is gaga for AI. As most lawyers are stupid as a bag of hair about even relatively simple tech like word processors, their enthusiasm about AI tells me all I need to know.

I won’t touch the stuff.

@neil I'd want to know exactly what they use, like someone's DPA policy

I think there's valid use of ML which works, but anything generative is a hard no, and wouldn't be something I'd like a lawyer to use, and would be cause to look for alternative counsel

@neil
Are they trying to get enough uptake rapidly enough that by the time a professional negligence case gets to court they'll have outrun the Hunter v Hanley (Sc) / Bolam (Eng) test?
@neil
The survey was taken by asking 9 different LLMs what they thought lawyers would say!
@neil If you’ve ever used Google Translate for work, wouldn’t you have to answer Yes to that question?

@cyberleagle I haven't seen how the survey defined "AI". I use Bayesian spam filtering, but wouldn't consider that "AI" in the sense likely intended here.

(c.f. https://decoded.legal/ai/)


@neil @cyberleagle they probably didn't, so even spell check might count. Expect non-geeks to answer as if they'd specified LLMs

@falken

tbf many translation services have been using LLMs for a year or so now, without making the switch clear to the user.

@neil @cyberleagle