By: Juuk van Andel and Evelien Brouwer
Introduction
Imagine that you arrive at a border, tired and uncertain about what lies ahead, seeking access to Europe as a short-term visitor or perhaps even as an asylum seeker. Instead of being interviewed in the regular way, you are asked to sit down for a lie detector test, also known as a ‘polygraph’. This means that your responses, your body language, and even your micro-expressions are scanned and interpreted by algorithms designed to detect lies. While this scenario is not yet a reality in EU border procedures, current EU policy and research projects signal a growing interest in deploying high-tech tools for migration management, including lie detectors. The adoption of the EU Artificial Intelligence Act (AI Act) has led to important legal and ethical debates about the future of AI use in Europe, particularly regarding technologies classified as “high-risk.” One example of an AI tool classified as a “high-risk” AI system in Annex III of the AI Act is ‘polygraphs or similar tools’. Their use remains permissible under certain conditions, not only in sensitive fields such as law enforcement, but also within the context of ‘migration, asylum, and border control management’.
This blog post explores the human rights implications of such a development, particularly with respect to the right to private life, data protection, and the right to effective judicial protection, as protected in the Charter of Fundamental Rights of the European Union (CFR) and the European Convention on Human Rights (ECHR). Referring to landmark cases of the Court of Justice of the European Union (CJEU), we argue that the use of polygraphs is difficult to reconcile with these fundamental rights and should therefore be prohibited altogether.
EU Borders and Polygraphs
Polygraphs are designed to determine a person’s emotional state, intentions or mental state on the basis of facial expressions, but also of other physiological and behavioural features, such as gaze direction, gestures, voice, heart rate, and body temperature. Their use at European borders has been tested in EU-funded research projects, including iBorderCtrl and TRESSPASS. iBorderCtrl involved a pilot between 2016 and 2019, in which so-called ‘pre-travel’ registration interviews were conducted by ‘avatar’ border guards using an automated deception detection system. This system produced a risk score indicating whether the applicant was lying, based on an analysis of nonverbal behaviour and fine-grained micro-movements. The TRESSPASS project explored the feasibility and usefulness of behaviour analysis (including emotions) during interviews by border guards and customs officers. While there is no information on the follow-up of these projects, nor on the current use of lie detectors, research by Derya Ozkul, amongst others, shows the rising use of new technologies, including facial recognition and artificial intelligence, in the field of asylum, border, and immigration control across Europe.
Human rights at stake
The use of polygraph technologies at the EU’s borders would fall within the framework of EU asylum or migration law, which includes the CFR, particularly the right to private life and data protection, as protected in Articles 7 and 8 CFR, but also the right to effective judicial protection in Article 47 CFR. Additionally, as polygraphs involve the collection and processing of personal data, including biometric and psychological data, the General Data Protection Regulation (GDPR) applies as well.
The right to private life, proportionality and free consent
Under Article 7 CFR, everyone has the right to respect for their private and family life. The use of polygraphs entails more than simple data collection: it involves the analysis of involuntary physiological responses and affects a person’s psychological and emotional integrity. The CJEU’s judgment in F. v. Bevándorlási és Állampolgársági Hivatal is an important landmark in this context. The Hungarian asylum authorities had used psychological tests to assess the credibility of an applicant’s claim of being homosexual. The CJEU firmly rejected this practice, emphasising that the use of such intrusive methods in migration procedures must be scientifically reliable and proportionate to the legitimate aim pursued (paras. 56-59). The CJEU held that consent given under pressure cannot be considered valid consent in the context of the rights to private life and data protection as protected in Articles 7 and 8 CFR (para. 53). According to the CJEU, consent can hardly be ‘free’ when refusal could jeopardise one’s chance of asylum (para. 52). Courts must therefore exercise thorough scrutiny of expert assessments in asylum procedures (para. 46). Applied to the use of polygraphs, the parallel is obvious. Migrants and asylum seekers may feel pressured or forced to undergo such tests, fearing that refusal might work against them. The CJEU’s demand for scientifically reliable methods (para. 58) constitutes an important barrier to the use of polygraphs at borders, as their accuracy remains highly controversial. Perhaps most crucially, the CJEU stated that any intrusion into private life must be proportionate and necessary (paras. 59-61). Given the existence of less invasive alternatives, including the information and documents submitted by the asylum seeker and country reports, it is hard to argue that polygraphs meet the bar of proportionality and necessity.
Therefore, given that the accuracy of polygraphs is so much in question, it is highly doubtful whether their use would even meet the legal requirements under the CFR.
The right to an effective judicial remedy
In the context of using polygraphs at borders, the right to an effective judicial remedy is essential, because polygraphs can lead to decisions that seriously affect the outcome of a person’s asylum procedure, such as denial of entry or refusal of asylum, and thus affect a person’s rights. Without access to an effective and meaningful way to challenge such decisions before an independent authority, persons cannot defend their rights under EU law, particularly the right to an effective judicial remedy under Article 47 CFR. This provision protects the right of access to an effective remedy before a court. If an individual is subjected to a polygraph test and the result influences the outcome of their asylum application, they must therefore have an opportunity to challenge this. However, algorithm-driven decisions often operate as ‘black boxes’, offering no transparency about how decisions were made. This undermines judicial protection and makes the effective exercise of appeal rights unlikely. In Ligue des droits humains, the CJEU, dealing with the implementation of the PNR Directive on the use of passengers’ data, held that this law precludes the use of AI or ‘machine-learning’ systems capable of modifying the assessment process without human intervention or review. According to the CJEU, the use of such technology would make it impossible for individuals to understand the reason why a given program arrives at a positive match and to challenge the non-discriminatory nature of the results. The ‘opacity which characterises the way in which artificial intelligence works’ (para. 195) would deprive data subjects of their right to effective judicial protection as protected in Article 47 CFR.
These principles are directly relevant to the polygraph context, where there is a power imbalance and a lack of procedural transparency, which could lead to a risk of violating applicants’ rights to a fair and transparent process.
The AI Act
The EU AI Act, which entered into force in August 2024, aims to regulate and foster trustworthy AI while protecting fundamental rights. Within the AI Act, AI systems are classified into different risk levels, including unacceptable-risk and high-risk systems. Systems posing an unacceptable risk are prohibited and listed in Article 5 of the AI Act. High-risk AI systems are defined in Article 6, and Annex III of the AI Act provides the current list of high-risk systems. This list includes ‘polygraphs or similar tools’ as used by or on behalf of national administrations or Union institutions, bodies or agencies in the fields of law enforcement (Annex III, section 6) and migration, asylum and border control management (section 7). Thus, the AI Act falls short of banning tools like polygraphs, despite serious doubts expressed by scientists concerning their reliability and ethical implications, and despite concerns raised by organisations, including the European Data Protection Supervisor and the European Data Protection Board (EDPS and EDPB).
The AI Act requires a Fundamental Rights Impact Assessment (FRIA) for high-risk systems (Article 27). This is a meaningful step towards ensuring that AI systems comply with the Charter of Fundamental Rights. However, this requirement does not necessarily guarantee effective protection of fundamental rights in practice. The FRIA may fall short because it is conducted by the deployer, for example the national migration authority in the context of border controls, and there is no requirement for external review or validation by fundamental rights bodies. This could lead to a conflict of interest, particularly when efficiency pressures outweigh fundamental rights concerns. Even if the FRIA identifies serious risks, there is no legal consequence, meaning that a FRIA with a negative outcome might go unnoticed and a system that risks violating fundamental rights could continue to operate. As Melanie Fink notes in her critique, the safeguards in the AI Act may fail to operate in a meaningful way when applied in real-world scenarios, particularly in the setting of EU border control. Furthermore, fundamental rights protection in the field of border and migration control is further jeopardised by various exceptions in the AI Act to other safeguards, including human oversight and transparency mechanisms.
Conclusion
The use of polygraphs at EU borders reveals a broader tension between technological advancement and human rights. While governments seek more effective border and migration control, the instruments employed for this purpose must not only be reliable but also respect dignity, privacy, and due process. As demonstrated in this blog, polygraphs raise serious legal concerns, particularly regarding the right to private life and data protection and the right to an effective judicial remedy. Relying on case law of the CJEU, we conclude that the classification of polygraphs in the AI Act as “high-risk” is insufficient. The FRIA under the AI Act is flawed, as this assessment would be conducted by the deploying authority itself, lacks independent oversight, and carries no binding legal consequences.
Given the invasive nature of lie detectors, their unreliability, and the risks these systems pose to fundamental rights, polygraphs should not merely be regulated as “high-risk” under the provisions of the AI Act but should be prohibited altogether.
Bio:

Juuk van Andel holds an LL.M. in Law & Technology from Utrecht University. Juuk specialises in personal data protection, smart borders and human rights, as well as ePrivacy and tracking technologies. Her research examines the intersection of technological innovation, governance, and fundamental rights, with particular attention to the challenges that technology poses to fundamental rights.

Evelien Brouwer is a senior lecturer/researcher at the Institute of Jurisprudence, Constitutional and Administrative Law, Utrecht University. Her areas of expertise include migration law, border technologies, and fundamental rights, with a particular focus on the right to effective judicial protection, non-discrimination, and privacy. Evelien Brouwer is affiliated with the Montaigne Centre for Rule of Law and Administration of Justice. Publications - Mr. dr. E.R. (Evelien) Brouwer - Utrecht University (uu.nl)