
Artificial Intelligence – Help! My Manager Is A Machine (Video) – Discrimination, Disability & Sexual Harassment
Artificial Intelligence (AI) has been touted as the Next Big
Thing in HR. Indeed, it’s already here. Or is it, really? And
what does it mean for employment law and the employment
relationship in any event? When you look beyond the hype, it is
clear there are still profound questions that need answering and
pitfalls to avoid. After all, to err is human: but to really foul
up requires a computer…
Transcript
Jane Fielding: Good morning everybody. I am
Jane Fielding and I am head of the Employment, Labour and
Equalities team here at Gowling WLG in the UK and I am delighted to
welcome you to the first in our series of annual update webinars,
where we cover topics which we think you need to know about heading
into 2023. We did contemplate going back to doing these sessions
live, in person but with the triple whammy of rail strikes,
freezing temperatures and various lurgies, we decided to stick to
the tech format of webinars. But we will be going back to in person
in the summer when we do our mid-year reviews and we hope that we
will be able to welcome all of you on our balconies in our offices
and enjoy some summer sunshine.
But for now we are doing our first webinar and the first session
as you can see is about artificial intelligence “AI”. And
it is really a myth busting session about AI in the workplace. How
it is used and how it fits with our current employment laws in the
UK.
And I cannot think of anybody better placed to talk to you about
this topic today than my friend and partner in the team, Jonathan
Chamberlain. And that is for three reasons. Firstly in the twenty
years I have known him, he has always been an early adopter of
tech, dragging me with him with varying degrees of reluctance
depending on what the tech is. Secondly and most importantly, he
has recently been on the Employment Lawyers Association Working
Party, which was responding to a call for evidence from the House
of Commons Science and Technology Committee about the use of AI. So
he is absolutely bang up to date with the debate in this area and how
it fits with our laws. And lastly, and most frivolously perhaps, I
would defy any AI programming team out there dialling in to produce
AI that could match Jonathan’s energy and enthusiasm for this
topic as you will see.
So he is going to present enthusiastically for 35 minutes. I
will keep an eye on the questions, as we go along and I am probably
going to interrupt Jonathan with yours and some of my own and any
that are left over we will sweep up at the end. If you want to ask
a question please can you use the Q&A function in the middle of
the bottom of your screen rather than the “chat” function
which has been disabled. And please do put your name on the
question. That is not so we are going to name you in the webinar,
but if we do not get to your question we want to be able to come
back to you afterwards and that is obviously slightly tricky if you
have not put your name on there.
We hope the tech all works, particularly on this of all
subjects, but if it does not, please can you also use the Q&A
function and Lucy Strong, who is helping behind the scenes to
support us on this, will come back to you and hopefully sort out
whatever issue it is for you.
I will make sure that we finish at 11:45 which is what we have
promised you and we will be sending a short email questionnaire
after the webinar for those of you who have attended for the full
webinar so that you can give us your feedback and we can use that
to help shape future sessions. I will remind you about that at the
end as well.
So that is all from me for now and I will hand over to
Jonathan.
Jonathan Chamberlain: Jane, thank you very much
indeed.
Well, good morning. I am talking to you about artificial
intelligence in the field of employment this morning because I was,
and I still am, rather wary of it. Now do not get me wrong I know,
or I believe, that we are not going to need Arnold Schwarzenegger
to come back from the future as a robot, to protect the young John
Connor so that he can in the future defeat the machines which have
taken over as a result of artificial intelligence, OK. I am not
that scared, yet. But I did understand that here was a technology
which I did not understand; that was clearly very powerful, that
everyone said is going to impact the world of work, impact what I
do, impact the law on which I advise and, as I say, I knew I did
not understand it.
So part of the work that I did in chairing the Committee that
Jane has just referred to, the Employment Lawyers Association
Working Party responding to the House of Commons Science and
Technology Committee’s call for evidence on governance of AI is
– here is our report by the way, I will be referring to that
later – was so that I could get to grips with it and then
explain it to you.
And I think the key thing that I have learnt is this: It helps
me to understand AI, to accept that I do not understand AI. Now,
that may seem a bit Delphic, it may seem like your therapist
talking, it may seem like a complete load of rubbish. Can I ask you
please to bear with me. All will become clear as we go through this
but, the key to understanding AI, is to accept that we do not
understand AI.
So here is our agenda. What is artificial intelligence (a good
place to start), what can it do in the workplace, what are the
issues with it and what should you be doing about it now?
So we do not need the agenda slide any more, that is the last
slide that we are going to have. If we get a comment in the Q&A
saying I cannot see the slides, it is because there are not any
slides. That was it. That was the only one. But I shall signpost as
we go through.
And my first bit of signposting is: We are going to start with
my first bullet point which is “what is AI?”. And here is
the thing. There is no obvious or easy answer. There is actually no
generally accepted definition of AI. There is not one in law, there
is not even one amongst the scientists and the technologists.
Perhaps the earliest was the famous Turing test: Can a man have a
conversation with a computer and not know that it is a computer?
But there have been plenty of others. We referred to some of them
in the report because obviously, doing a report on the governance
of AI, it sort of helps to know what AI is. So we have this from
the AI pioneer and scientist John McCarthy, this is back in 2004,
he suggests: “It is the science and engineering of making
intelligent machines especially intelligent computer programmes. It
is related to the similar task of using computers to understand
human intelligence but AI does not have to confine itself to
methods that are biologically observable.” Well I hope that is
clear.
IBM, we have the IBM definition: “Through the use of
statistical methods, algorithms are trained to make classifications
or predictions uncovering key insights within data, gradually
improving its accuracy.”
I quite like that one for reasons I will come back to.
The TUC as you might imagine are typically blunt:
“Unfortunately there is no single agreed definition of AI,
algorithm or machine learning.” And they then adopt a
different definition of artificial intelligence: “when
computers carry out tasks that we would normally expect to be
carried out by humans, for example making decisions or recognising
objects, features or sounds”. “Machine learning” (I will
come back to machine learning) means when computer programmes are
trained on data so that the programme can learn to carry out
certain tasks. “Trained on data”, “programme can
learn”. And an algorithm used in technology is often a set of
rules that a computer applies to make a decision.
Now, does it really matter that we do not have an agreed
definition of AI? I do not think it does actually at the moment. I
do not think it does because as we will see when we come onto this,
our regulation, our law, is not actually focussed on the
technology, it is focussed on the impacts of the technology. And I
for one think that has absolutely to be right. I do not really care
what is happening in the black box, I care what it does to people.
So we do not really need to concern ourselves about that.
I will illustrate the difference between artificial intelligence
and bog-standard computing, if you like, in the next section, when
we come to talk about what AI can do. And I would suggest in our
arena, that of a
workplace, perhaps there are six things which it can do. How well
it can do them, how it does them, are different things that we will
come onto. But these are the sort of things it is suggested that AI
can do.
There are six. Firstly, it can read CVs, pick out key
information. Obviously very helpful if you have thousands to look
through, even hundreds in respect of a job application. Perhaps,
and here is the second thing, it can go further, it can interview
candidates. It can analyse a video recording or it can interview
using a ‘chat bot’ where people put in responses and it can
analyse the results of that and help you as a recruitment tool.
Analysis, help, different concepts, we will come back to those.
Now we have used it as a chat bot in recruitment, perhaps we can
use it, and this is the third point, as a chat bot in HR support. I
think this is probably quite common already actually. You write in
“how much holiday have I got left?” and the chat bot will
tell you. It will help with basic information about policies and
processes and it will be able to make sense of what people type in,
that is the AI bit.
Fourth thing. It can look at people’s performance, look at
where they are in their career, and suggest appropriate training to
help them develop. That puts it in quite a positive light. What
about putting it in a bit more threatening way? And this is the
fifth point, from the point of view of the employee: the AI
can give instructions. It can tell you what to do. I am aware that
there are technologies for use in call centres which not only
monitor employees’ voices – that has happened for a long
time – but now monitor eye movement, so they can tell when you are
looking at the screen or not. And they can “snap” you
back into the room.
And let us take this a step further for perhaps the sixth use.
What about work allocation, telling you what you have to do when
and, if you do not do what you have to do when, imposing sanctions
of some sort. Now that already happens outside of the workplace, and
I will give you an example of that in just a second; perhaps it
happens within the workplace too.
So that is what it can do, we are told, the six things that I
talked about, reading CVs, doing an interview with a chat bot, chat
bot support, monitoring output with a view to improving training,
giving instructions, allocating work and imposing sanctions for
failure to do it.
There are two things though, that I would really like you to
take away in relation to that list. And my first one, I will
illustrate by way of example. Because artificial intelligence as we
saw, it has no definition and as I explained to you at the
beginning, it is something I am quite wary of because I do not
understand it. I keep coming back to this point, to understand AI
accept that you do not understand it. And one of the things about it
is that it is quite frightening, and it is quite frightening because
we do not understand it, and people slap the label on because it is
something they do not like.
So let me give you a really good example of this. And this
example is actually captured on video. You can go on YouTube and
watch this. If you search for “Politics Joe UK” you will
see this. It is a House of Commons Select Committee interviewing
the European Head of Public Relations for a very large shopping
website. OK. And this hapless individual is pulled apart in a
really good piece of cross examination by the MP. First rate. And
what it was about was the MP said one of his constituents worked in
one of their warehouses and he had a wristband on. And the
wristband monitored his movements around the facility and if the
computer thought he was not doing things quickly enough, he got a
warning and if he got three warnings he was out. And the cross
examination was about whether this was in fact what the company did
and the manager did not handle it well at all, I think all of you
would have handled it a lot better actually because you are not in
PR you are in HR or legal and you would have got to the answer
straight away.
But anyway, he handles it very badly, and people talk about this
as an example of artificial intelligence in the workplace. Actually
from the clip, I am not sure it was at all. I am not sure it was
any different from the old time and motion studies. From the chap,
and it usually was a chap, in a white coat with a clipboard and a
pen and a stopwatch, monitoring what you do in order to improve
processes. And in this particular case this is a time and motion
study carried out by a computer, using a wristband. But that is
actually all it is and that the parameters have been set as you
have to do this within a certain amount of time and if you do not,
then this process follows. But that is not AI as such. It is use of
technology, it is intrusive, it brings up some of the same issues
around AI but fundamentally it is an old fashioned time and motion
study using new technology.
Contrast that with say a global ride sharing app, whereby you
can input on your phone and a car will come and whisk you away to
wherever you want to go. Now, as far as you are concerned you are
placing an order through the phone, but what about the driver? When
the computer has a choice of two drivers, three drivers, four, half
a dozen, 27 if you are in central London, who could fulfil your
order on the phone, which driver does it choose? Well, we do not
know. We certainly do not know as the customer. But the driver does
not know either. The computer will analyse all the data it holds
about the drivers in the area. Where they are, where the
destination is, what the weather is, what the traffic is like, what
that driver’s record is like, what their customer ratings are
like, and other bits of data that are monitored and picked up that I
do not even know about. And it will have analysed the
performance of the thousands of drivers on the hundreds of
thousands of rides that have taken place. And it will calculate,
“I am not going to give this ride to Joan, I am going to give
it to Fred”. Neither Joan nor Fred know why it has taken this
decision. Heck, the manager, the guy who turns the computer on in
the morning does not know why the computer has taken that decision
because the computer has learnt itself, from all the data that is
in there, what it thinks is going to be the best outcome that will
generate more profit for the ride sharing app.
Now that is artificial intelligence. That is machine learning.
That is the computer making up its own mind. And that, I would
say, is my working definition of the two.
And that brings me to my second point which I really want to
stress around what can AI do which is this: Beware snake oil. OK.
There is a huge amount of snake oil around AI.
So let me take you back to one of the examples that I gave: that
AI can conduct an interview in the workplace, because it can
look at videos. Well, just think about that for a moment. Can a
computer really analyse a video and decide that this person is the
best candidate for your job, because of what it sees on a camera?
Have these people ever heard of acting? The snake oil sales people,
and there are a lot of these, will say, “ooh we can look at
body language, we can see how people present, we can understand the
key words that they use and we will be able to work out who is the
best candidate, who is the best pool of candidates for you”.
If you think about that for a moment, stand back from that for a
moment, that is nonsense. There is no body of research which says
that that can be done, let alone that a computer can do it. I mean I
do not think it is any coincidence that a lot of these things
come from the US where for example they still believe, despite all
the scientific evidence to the contrary and it is all the
scientific evidence, that lie detectors are a thing. Lie detectors
are not a thing. You can train how you respond to a lie detector
and it has a huge margin of error in any event. And similarly, with
this so called interviewing software, it can be ‘gamed’
relatively simply.
Jane: Jonathan, I was just going to ask a
question about the ride sharing app but it also I think is relevant
to the recruitment decisions. How would, how does, to the extent
these things exist already, how do they factor in the obligation
under the Equality Act to make reasonable adjustments? So somebody
may not be hitting the criteria that the artificial intelligence is
looking for in an interview for example, or in the ride sharing.
The driver might have a poor record of attendance due to
disability. How does, is it sophisticated enough to take that into
account or, how does that work?
Jonathan: It could be sophisticated enough to
take that into account to a certain extent. Whether it is or not,
is going to vary from programme to programme. And one of the things
that we are going to come on to in terms of practical things which
we should be doing now is due diligence. And asking all those sorts
of questions. And if I could just give you a rule of thumb which I
was going to do later but it is relevant to bring it up now. Sales
people for American products will tell you that these have been
vetted for all equality laws. They may well have been vetted for
all American equality laws. It is highly unlikely they will have
been vetted for the UK ones. Especially as they do not originate in
the UK.
So the PR guy who I referred to earlier, judging from his
accent on the film and that is all I know about him, is from the
US. So you and I are looking at that video and actually screaming
“reasonable adjustments” at him as the MP is talking about this
63-year-old, who is not necessarily doing things on time, and
screaming “age discrimination”. And if he did have those concepts
in his head, he could not access them as quickly as you or I would
have accessed them. Put it that way.
And I think that runs through in the software. Sales people are
very optimistic about the capabilities of this stuff. And AI can be
gamed. My favourite example is actually not a human resources one,
it is a military one. You have got a computer
with a machine gun which is guarding a facility. This has happened,
OK, this has happened, and it has been programmed to distinguish
between friend and foe, and it will learn from who approaches who
is friend and who is foe – not great if it makes a mistake, but
anyway, that is what it does. All it took was a squad of US Marines
who worked out very quickly that, if a couple of them approached it
inside a big cardboard box, then the AI would not be able to
recognise that there was in fact a human underneath the cardboard
box and so would let them through.
So there is a lot of snake oil out there. Be careful of it for
exactly the reasons that Jane has just highlighted.
And this brings me on neatly to my third point of the four that
I highlighted. We discussed what is AI, we have discussed what can
it do and I told you what it could do to beware of snake oil, a
primary point. What are the issues around it?
And I have got three points that I want to talk about. Of course
as Jane has already highlighted, discrimination, potential
discrimination is one of those issues. Actually, before I make the
three points, I want to highlight what I see as the fundamental
problem with AI in the workplace, as it relates to employment law,
which is this: We say that a relationship of trust and confidence
is necessary between employer and employee. It is a fundamental
tenet of the employment relationship, the duty of good faith as it
is sometimes called but actually expressed as a necessary
relationship of trust and confidence which neither party must do
anything which tends to undermine. OK. So let me ask you this: How
can you trust, how can you have confidence in a machine, where you
do not understand at all why that machine is taking
decisions about you? How can you trust it? You can say, just do not
worry, trust the company, but that is not “trust” that is
“faith” OK. That is not the same as a very human concept
of trust and confidence.
So conceptually, if you leave too much to the machine, you
undermine the whole nature of the employment relationship. Now that
is not an insurmountable obstacle to using AI at all, as we will
come back to. But I go back to something that I said at the
beginning, for me, the key to understanding AI is to accept that I
do not understand AI. I do not know what this stuff does and I have
to mitigate the effects of me not knowing what this stuff does.
So for example, the three things that I want to talk about are
data protection, the information imbalance as far as employees are
concerned and then the discrimination point that Jane has already
highlighted.
So the first point is data protection. Well of course, as you
would expect, this is about processing data, so the data processing
rules apply and you will already be aware that within GDPR there is
a right in respect of automated data processing, which cannot be
done without the employee’s consent except where it is
“necessary” for the performance of the employment
contract. And there are two things which have already been
highlighted as a result of that and there are not any cases on this
at the moment but cases on these will surely come.
The first is “necessity”. We use necessity to process
employee data all the time, we do not seek to rely on consent
because of course consent in the employment relationship cannot
necessarily be said to be freely given. So we use necessity. But is
it really necessary to use an automated data processing facility?
Is it necessary to use AI, or is it merely convenient for the
employer? So we are going to expect some cases around necessity.
But then there is also one around “automated”. Because one
way of getting round the obligation to provide information if it is
an automated process, is to leave the final decision to a human and
then it is not a fully automated process. Well, yes. But if the
human nine times out of ten is going to take the same decision, to
what extent is that process not really automated? To what extent is
putting a human in this just for window dressing? And again, there
is going to be case law around this in the near future of that I am
as certain as I can be of anything.
So those are the individual issues on data protection which come
up straight away. The second point I wanted to make was on the
information imbalance. I go back to my not understanding point. If
you are an employee and decisions are being taken about you and you
do not understand them, it is making worse an already bad
situation. None of us… I am a partner in a professional services
firm so, I’m an owner of the business, OK. I am one of over a
hundred owners of the business but I’m an owner of the
business. So if anyone should know what is going on, it is me. And let
me tell you something, I haven’t a Scooby. And I suspect that
is true about most of my partners, most of the time because
management does stuff, you know. And we know that. And because of
where we are we sort of accept that and we let them get on with it.
But it is not like that for everyone. It is, generally speaking,
much worse, and there is that sense of a lack of agency, and that
can have very important consequences. One of those consequences we
will see
in a moment when we come on to talk about discrimination. In fact,
let us talk about discrimination now and we will talk about how we
deal with the information imbalance and the effect of that upon the
employment relationship when I come on to tell you about things
that I think you should be doing now.
The discrimination point I think will be screaming out at you by
now, just in the way that I am screaming at the YouTube clip of the
hapless manager on PoliticsJOE. But let me give you an example. If
you are using AI as a recruitment tool, the machine is going to
have to learn what successful recruitment looks like, so the way
that the machine might look at what successful recruitment looks
like is who are the people that you have had in post for two years,
three years, five years, whatever, from their initial recruitment,
because if they are still there then they would seem to me
to be successful recruits. But what data is the machine going to
learn about successful recruits, and how is it going to apply it?
Go back to that ridesharing app. We have not a clue as to why the
computer is allocating the ride to one particular driver or not
and, for the most part, we as the consumer do not care but if you
are an employee then you would care quite a lot.
So, again, let us go back to professional services which I used
as an example a moment ago. You might conclude if you were a
computer looking at a population of partners in professional
services firms across the UK that successful candidates were people
who were white and male, because that is the information that you
have got. Now, the issue that you have as a manager, and as an
employee, is you do not actually know on what criteria the machine
is making its decisions, and finding that out is really hard. You
have to be able to have the skills to interrogate the machine. We
will come back to the consequences… no actually, let us deal with
the consequences of that now.
When we were preparing for this, this is one of the first
questions that Jane asked me, which is OK, if I am an employee and
I want to sue my employer, or would-be employer, over a recruitment
decision or a promotion decision, whose fault is it? Is it the
responsibility of the software? Is it the responsibility of the
software provider? Is it the responsibility of the employer? Now
the answer in law at the moment is actually obvious, it is the
employer. But if you are the employer, at that point, you might
want some help from your software provider and we will come on to
that in just a moment in terms of the things you should be doing
now.
So what are the things you should be doing now? This was my
fourth point if you recall. And the first one is due diligence, OK.
I said to you there is a whole load of snake oil around. How do I
know I am not being sold snake oil? Has this programme been audited
against UK equality law? If I am faced with a claim, will you
indemnify me against the costs of that, if it results from a
decision taken by your software? And then the other questions you
would be expecting to ask that follow from that… Are you
able to explain to me how this software works in terms that I can
usefully understand? Are you going to be around – there are a
lot of start-ups – so that I can enforce the indemnity that I
have just got from you?
If somebody comes to you with an AI solution for your particular
problem, you need to be pressing them really hard on this stuff,
OK. And do not assume that because lots of other people are using
this, that they are on to something that you are not, that they
have done the due diligence and you can rely on that. Some
snake oil is much more widely accepted, particularly in the US, than
it really should be.
Jane: I think that begs the question, John
– sorry, two questions. One is when you do your due
diligence, who should be involved in that because there are
different factors for it are there not? It might sit with IT but it
really needs HR and Legal input I suspect. And the other point, as
you talk about indemnification. Well that is fine for the financial
award but it is not going to help – you cannot indemnify for
your reputation, can you, if you are found to have
discriminated?
Jonathan: You cannot. You cannot. I think the
decision to adopt AI should not be an easy one because it does
carry quite a lot of risk and if on a cost-benefit analysis the
savings are marginal, query whether you should be doing it yet. Do
you want to be an early adopter of this stuff or do we need to wait
until it has settled down a bit? I do know colleagues, in-house
employment lawyers who work for very large employers, are under
huge pressure to sign off on AI contracts because in terms of just
recruitment, let alone ongoing monitoring, huge savings have been
touted and, OK, understand that you do not understand this and
that, therefore, you need to get to grips as much as possible with
what you do not know and bring in the right people to do that.
So that is due diligence. What about the other things? Well when
we talked about governance of AI, what the law should be doing, we
said there were two points and, yes, we think there should be some
law on this but, actually, the working party is composed of lawyers
like us who mainly act for you – employers. Also we had
lawyers who worked for individuals and we have representatives of
trade unions there and there was a consensus amongst all of us,
which is the workforce as a whole need to understand this stuff and
individuals need to understand this stuff.
So, my next two points are about the provision of information
and consultation. The unions are arguing that you have to consult
about AI because of existing law, and some of us are pretty
sceptical about that, but they do say that, you know, you can squeeze
it into the health and safety consultations, because people will be
dead stressed if they are managed by AI and stress is a health and
safety issue; therefore, you should be talking to your health and
safety reps about this. I can kind of see that but I mean it is a
bit of a stretch. That’s where they are coming from.
But in any event, good practice – remember what I said
earlier, it is an existential threat to the employment
relationship. Issues of trust and confidence. Win the trust, win
the confidence of your workforce by explaining what this stuff is
going to do, and then if it is going to affect individual people,
consult with those individual people. And that goes back to the
point you remember I talked about, the information imbalance. How
do we address the information imbalance? This is how. Use your
existing consultation forums if you have them. If not, think about
setting them up. Again, remember, you should only be doing this if
there are big wins involved for the organisation and if there are
big wins, those big wins need to be supported by appropriate
employee information and consultation processes which may or may
not be provided for in the law at the moment but which I think you
should be adopting as good practice.
So, those are the four points I wanted to cover. What is AI? No
clear definition but do we really need one at the moment? Probably
not. What can it do? Well it is advertised that it can do lots of things
but beware of snake oil and understand what AI is and what it is
not. What are the issues? Potentially existential as far as the
employment relationship is concerned but there are issues around
data protection, around the imbalance of knowledge and crucially
around discrimination. What should you be doing in practice? Due
diligence and providing information to and consulting with your
employees on a collective and an individual basis as you introduce
this stuff, if you are to win the necessary confidence of the
workforce and individuals within it.
So we had a couple of questions whilst we were going on. We have
still got I think just about six, if there are any more that have
come up, Jane?
Jane: Yep. So there’s one that goes back to
the point we were talking about: if you get a claim in a tribunal,
you have done your due diligence, you hope, on the way in, somebody
has signed off on buying it, obviously, and hopefully you have
brought your workforce with you in general terms, but then somebody
claims, so who is actually the witness? Even now, without AI, we
sometimes get managers reluctant to stand behind the decision they
have taken, and obviously there are whistleblowing cases where
people have felt pressured to take a particular decision and the
controlling mind has fixed the organisation with liability.
So, in granular terms, if managers are saying, OK, well, you, the
company, have bought this AI, is it the individual manager
overseeing that recruitment process who would be the witness? Is it
the person who signed off on buying the thing? How does it all work
out on the ground? Because for the in-house employment lawyers
under pressure that you referred to, that is a very relevant
consideration, is it not?
Jonathan: Hm, it is, and, of course, at this
point the in-house employment lawyer is going to be watching their
colleagues running for cover and pointing their fingers in opposite
directions, and this is a key point about accountability. The
accountability will sit with the line manager of the employee, and
if a line manager says, well, it was the software that did it, the
line manager is going to have to explain how and why it was they
relied on that software. They may need support from a technical
colleague, they may need support from another colleague in HR but,
as ever, it will be – if not the decision maker, because that
was the machine – the person in the org chart who is notionally
identified as the decision maker.
In practice, I suspect it will not be that easy, for precisely
the reasons we have just identified: if this goes wrong, you know,
success has many fathers, failure is an orphan, and no-one is going
to want to take ownership of this. So I go back to my due diligence
point. If you are an adviser involved in the purchasing of this
software internally, then one way to concentrate people’s minds
will be: are you prepared to defend the decisions this software
takes in an employment tribunal? Because if you are not, query
whether we should be buying this.
Jane: Yep. Yep. There is another question, or, I
guess, a comment framed as a question, but I think it is
interesting to get your view on it, Jonathan, in light of
everything you have said: should AI therefore be used just for
quantitative matters, not qualitative? Maybe if it is just
quantitative it is not necessarily AI, I do not know. But what is
your view on it?
Jonathan: It is AI, because it depends what
quantitative data you take into account. So, for example, in the
ridesharing app, you would not necessarily ask the AI to take into
account the weather, but the AI may have noticed a correlation
between the weather and the performance of drivers in a particular
location at a particular time, which is not a correlation a human
would have spotted, but the AI does because it is in the data, and
that is quantitative.
But there is a fundamental point about AI, which is that it is
actually not much use at measuring qualitative data. So, lawyers
will charge you, literally, for staring out of the window. While we
are staring out of the window we are thinking about your problem,
the best way to solve it practically and how to deliver the advice,
but what it looks like we are doing is staring out of the window.
So how is a computer going to measure that? At the moment, no-one
knows.
So I think the answer is that we are some way off a computer
being able to do a qualitative analysis of human performance. Now,
we are not very far off computers being able to do a qualitative
analysis of written data, and if I can give you an example of that:
Gowling uses artificial intelligence to read leases, to extract
from a lease, however it is written, important information about
break terms, rent reviews et cetera. That is worth applying when
you are buying a bulk property portfolio, an investment portfolio
of hundreds if not thousands of properties, and the AI does a
better job than lawyers of reading through the leases, and it does
it in a fraction of the time. Now you can see, one day, in our line
of work in employment tribunals, AI being applied to all the
written evidence in front of the tribunal and, in simple cases,
taking the decision itself. So I can see lawyers’ jobs going that
way, but I cannot see AI taking over the management of lawyers,
given all the variables of performance involved.
Jane: Yeah. OK, thank you. I think we are
nearly at 11:45, so we will call a halt there on the questions. If
we did not get to your question, we will follow up with you; as I
say, thank you for putting your names in there as requested. We
will be circulating the questionnaire by email shortly after the
webinar, so do please fill that in if you can. I think Jonathan has
given us lots of food for thought, and I really do thank him for
covering that topic for us so well.
I hope you can all join us for our next webinar this Thursday,
where, as a natural evolution of this, or perhaps a deeper dive, we
will be looking at data protection and the things you need to be
aware of: an update on that area which fits quite neatly with this
one.
So thank you again to Jonathan, thank you to Lucy and thank you
to all of you for joining. Enjoy the rest of your day.
Jonathan: Thank you.
Read the original article on GowlingWLG.com
The content of this article is intended to provide a general
guide to the subject matter. Specialist advice should be sought
about your specific circumstances.