One of the largest barriers a business must overcome when introducing artificial intelligence (AI) into its workflows is ensuring co-operation between machines and humans; without it, the technology is destined to fail.
Whether it's a doctor using an AI-powered diagnostics tool or a lawyer using machine learning to speed up the review of contracts, the user must trust the system they are using and must properly understand their role compared to that of the machine.
This can be achieved by designing the tools and processes in one of three ways:
1. Human back-up
AI is not going to replace humans just yet, and some systems are incapable of completing a task without human intervention. Systems can be designed so that a human acts as back-up when those limits are reached.
For example, in customer call centres, chatbots often deal with the simplest queries but pass a conversation on to a human when they are unable to resolve the issue.
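The human back-up pattern can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the topics, answers and escalation mechanics are all assumptions, not a real chatbot framework): the bot answers what it recognises and hands anything else to a person.

```python
# Hypothetical sketch of the human back-up pattern: the bot answers
# simple queries it recognises and escalates everything else.
# The topics and canned answers below are illustrative assumptions.

KNOWN_ANSWERS = {
    "opening hours": "We are open 9am-5pm, Monday to Friday.",
    "reset password": "Use the 'Forgot password' link on the login page.",
}

def handle_query(query: str) -> str:
    """Answer from the bot's known topics, otherwise hand off to a person."""
    for topic, answer in KNOWN_ANSWERS.items():
        if topic in query.lower():
            return answer
    return escalate_to_human(query)

def escalate_to_human(query: str) -> str:
    # In a real system this would open a ticket or transfer the live chat.
    return f"Transferring you to a human agent for: {query!r}"
```

The key design choice is that escalation is built in from the start: the bot never guesses when it falls outside its known limits.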
2. Human sensitivity
Some areas, such as justice, medicine and warfare, are sensitive because of their link with ethics. In these scenarios, systems can be designed so that the decision always rests with a person, even if the machine learning is capable of completing the task itself.
For example, machines may be capable of deciding that a life support machine should be turned off as there is no hope of recovery, but many would not be comfortable with a machine making that decision.
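The human sensitivity pattern can also be sketched in code. In this minimal, hypothetical illustration (the class names and decision flow are assumptions), the model's output is only ever a recommendation: nothing proceeds without an explicit human sign-off, no matter how confident the machine is.

```python
# Sketch of the human sensitivity pattern: in ethically sensitive
# domains the system's output is only a recommendation, and the
# final decision always requires explicit human approval.
# The names and flow here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    model_confidence: float  # between 0.0 and 1.0

def final_decision(rec: Recommendation, human_approved: bool) -> str:
    # Even at 100% model confidence, the action is withheld
    # until a person has reviewed and approved it.
    if human_approved:
        return f"APPROVED by human: {rec.action}"
    return f"WITHHELD pending human review: {rec.action}"
```

Notice that `model_confidence` never appears in the decision logic: that is the point of the pattern.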
3. Human judgment
Perhaps the form of AI most people are familiar with is the algorithm that identifies patterns in vast amounts of data to make recommendations (think Amazon and Netflix!). But importantly, a human is required to take the next step and apply their judgment.
For example, with an AI-powered due diligence platform such as Luminance, the machine learning surfaces potential risks and issues by drawing certain clause types to the lawyer's attention. But this is where the machine learning stops. A lawyer is required to take the next step: review the level of risk and decide how it should be mitigated as part of the transaction.
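The human judgment pattern, loosely modelled on the due diligence example above, can be sketched as two clearly separated steps. This is a hypothetical illustration only (the clause types, keywords and review step are assumptions, not how Luminance actually works): the machine flags clauses and then stops; the lawyer assigns the risk level.

```python
# Sketch of the human judgment pattern: the machine surfaces
# clauses that may need attention, then stops. A lawyer, not the
# machine, decides the level of risk and how to mitigate it.
# The keywords below are illustrative assumptions.

RISKY_KEYWORDS = ("change of control", "indemnity", "exclusivity")

def surface_issues(clauses: list[str]) -> list[str]:
    """Machine step: flag clauses that may need a lawyer's attention."""
    return [c for c in clauses if any(k in c.lower() for k in RISKY_KEYWORDS)]

def lawyer_review(flagged: list[str], assessments: dict[str, str]) -> dict[str, str]:
    """Human step: attach the lawyer's own risk assessment to each flag."""
    return {clause: assessments.get(clause, "pending review") for clause in flagged}
```

Splitting the code this way mirrors the division of labour: `surface_issues` is all the machine does, and `lawyer_review` cannot be completed without human input.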
A common understanding?
In each of these scenarios it's essential that both humans and machines have a common understanding of the language and processes being used.
This is where digital literacy skills become even more important. It's essential that our lawyers understand how the underlying technology works so that they can trust that the results the machine learning produces are accurate.
Without that shared understanding, when humans and semi-intelligent systems try to work together, things do not always turn out well.