News emerged last week that the EU is set to trial an artificial intelligence-powered lie detection tool that uses facial-expression 'biomarkers' to detect whether someone is being dishonest about their immigration status.
Essentially, the tool is designed to read people's poker faces.
But the context means the tool is not just judging whether a person is lying; it is, in effect, making a legal assessment.
If the tool finds that someone is lying about their immigration status, they could be charged with a criminal offence. As a result, there are potential human rights and justice implications. What is not yet clear is how the tool will interact with investigators, or what the process will be once someone has been flagged by the tool as deceitful.
There are also ethical issues surrounding transparency and the potential for bias in the underlying data used to train the tool's algorithms.
If the trial is successful, the next logical question is where else this technology might be deployed.
In short, the EU is experimenting with a machine learning-based system that reads facial-change indicators in the hope of making what amounts to a legal assessment of whether someone is lying, in this case with regard to immigration.