Legal Tech – Algorithms in the Legal System
This text comes from a talk we gave at the Legal Technology North conference in November 2018.
In our investigations into the manipulation of social media algorithms, we’ve found countless examples of it: from the overt and dangerous, like Twitter bots artificially pushing political messages, to the subtle and benign, like YouTubers tailoring their output to please their audience and generate views.
In fact, however broadly or narrowly you define it, the “gaming” of algorithms probably occurs a lot more often than you expect.
That’s not to say all the hype and fearmongering is justified; most of the “gaming” of algorithms is less effective and less powerful than you might expect.
But it’s not just people manipulating the algorithms; the algorithms also manipulate the people.
I gave the example of YouTubers changing the content they produce in accordance with the rules of the algorithm that determines who sees their videos, and the algorithm that determines how much they get paid for their work. In any system, the rules shape the participants’ behaviour.
The medium affects the message.
Any system with clear rules and variable outcomes is going to have some of the participants in that system trying to optimise their outcomes – that is, trying to game the system.
Perhaps part of why people are so shocked to hear about the manipulation of online algorithms is that they were sold an idea of the internet as an organic space, free from rules to manipulate, or at least without /explicit/ rules.
I see an analogy with the dichotomy between common law and a civil code.
For the sake of the analogy: under a common law system, like the UK’s, there is more space for interpretation and judgement, whereas under a civil code there is more focus on the explicit rules as written.
Imagine a justice system where claims are submitted online, where the data about every claim is stored digitally, and where big data is used to craft the perfect claim.
Services like DoNotPay show we’ve already taken the first steps in this direction.
Is the introduction of technology to the legal process moving us towards the more codified, explicit rules of a civil code style system? It can introduce more fixed rules, with absolutely no room for human interpretation, by removing the humans completely.
This can lead to more consistency between decisions. I’d argue, however, that it gives us the worst of both worlds: elements of the system become rigidly codified, but the rules are hidden from most of the participants.
People have different expectations of a system with clear, explicit rules than of one based on judgement.
This creates a power imbalance between those who naively assume the system works somewhat “organically”, and those who have the resources, the time and the wherewithal to discover the rules and optimise their strategy accordingly.
Whether we’re helping to build this new world of legal tech or merely participants within it, we should be very conscious of the ethics and incentives it creates. Our research has shown time and time again, from debt management markets to the evaluation of education, that removing human judgement from a process leads to injustices, perverse incentives and Kafkaesque processes.
That’s not to say there’s no place for tech in the legal sphere. There are plenty of applications that would benefit hugely from some automation or digitisation. It’s a scandal that in these times of “open government” and when Britain claims to be leading the world in “open data”, the full Land Registry still isn’t freely available online.
Where technology is used to assist, augment or aid human decisions, it can be a good thing. Wherever it is used to replace human judgement, however, we should tread very carefully.