AI and vulnerabilities: the case of asylum seekers
- Edwina Taylor
- Jun 9, 2023
- 3 min read

Artificial Intelligence (AI) has found its way into several sectors, and the field of asylum procedures is no exception. Incorporating these technologies has the potential to revolutionise the way asylum procedures are carried out. Although the ongoing projects are experimental and not yet widely implemented, they could establish a precedent that requires careful examination.
AI has various applications in asylum procedures: remote applications, online interviews, decision-making assistance, fully automated decision-making, chatbots for communicating with applicants, risk assessment, and credibility analysis of asylum claims. These procedures involve collecting large amounts of data, which is then analysed by algorithms. The emphasis shifts towards evidence-based approaches, statistics, and material proof rather than narratives and interviews.
AI can make the asylum process faster and cheaper, but processing large amounts of personal data carries significant risks. The quality of the data used to train AI algorithms directly shapes the outcome of asylum procedures.
There is also a risk of bias and errors in the data, which can produce false positives and feedback loops: a system trained on skewed historical decisions reproduces that skew, and its own outputs then feed back into future training data. Furthermore, automated decision-making can be opaque, making decisions difficult to challenge and reducing transparency in the process. These tools also raise concerns about privacy, data protection, and discrimination.
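To make the feedback-loop risk concrete, here is a minimal sketch. It does not model any real asylum system; it is a toy simulation in which a risk-scoring tool flags new cases in proportion to historical flag rates, and flagged cases receive extra scrutiny that generates additional flagged records. Even a modest initial disparity between two groups then widens round after round.

```python
# Hypothetical toy model of a biased feedback loop -- not any real system.

def flag_rate(history, group):
    """Fraction of recorded cases from `group` that were flagged."""
    flags = [flagged for g, flagged in history if g == group]
    return sum(flags) / len(flags)

# Biased starting data: group "B" was historically flagged more often
# (10% of group A cases vs 30% of group B cases).
history = [("A", True)] * 10 + [("A", False)] * 90 \
        + [("B", True)] * 30 + [("B", False)] * 70

for _ in range(5):  # five rounds of "deployment and retraining"
    for group in ("A", "B"):
        rate = flag_rate(history, group)
        n_flagged = int(rate * 100)  # flag new cases at the historical rate
        # Flagged cases get extra scrutiny, so each one enters the record
        # twice -- this over-representation is what amplifies the bias.
        history += [(group, True)] * (2 * n_flagged)
        history += [(group, False)] * (100 - n_flagged)

print(f"A: {flag_rate(history, 'A'):.2f}  B: {flag_rate(history, 'B'):.2f}")
```

After five rounds the gap between the groups grows substantially, even though the rule applied to both groups is identical. The disparity comes entirely from the biased starting data and the self-reinforcing record-keeping, which is precisely why audits of training data and of deployment pipelines matter.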
These risks are not just theoretical; they have been observed in practice. Many of these technologies have been found to cause direct or indirect racial discrimination. For instance, facial recognition technologies have struggled to recognise people with darker skin tones, particularly women, owing to a lack of diversity in the training data. Voice recognition software has similar difficulty with the distinctive dialects of minority groups. The EU's iBorderCtrl project, which employs facial recognition and lie-detection technologies to aid border officials in making decisions, has been criticised for violating personal data protection laws and human rights.
Errors and biases in the data used by AI algorithms can result in the wrongful denial of asylum claims, with serious consequences for individuals and their families. This is particularly harmful for asylum seekers, who are already vulnerable. It is essential to prioritise transparency and accountability in the design of AI algorithms used in asylum procedures in order to safeguard the rights of asylum seekers and human rights in general. Given the potential drawbacks, policymakers also need to consider whether to use such technologies at all.
Migration management is a complicated task, often coloured by human prejudice and misperceptions. The UN Special Rapporteur on contemporary forms of racism underlines that “Executive and other branches of government retain expansive discretionary, unreviewable powers in the realm of border and immigration enforcement that are not subject to the substantive and procedural constraints typically guaranteed to citizens.” The situation becomes even more delicate when AI systems promise easy solutions to intricate, large-scale problems.
Human rights organisations have successfully pushed for bans on certain types of AI that are harmful to the rights of migrants, refugees, and asylum seekers. The Draft AI Act now prohibits automated crime prediction, social scoring systems that block access to public services, and emotion recognition technologies. However, some uses, such as profiling systems that assess individuals based on risk, remain permitted despite being equally problematic.
Moreover, what is rarely questioned is the privileging of such quantitative, ostensibly rational evidence over the individual's narrative. The narrative is crucial for those seeking asylum: it allows them to recount their experiences and the violence they have suffered, sometimes for the first time. It is their moment to be heard and understood by governmental authorities, and an essential part of current asylum procedures. A change in the type of information collected implies a profound paradigm shift in the system, and this opportunity to tell one's story must not be overshadowed. The narrative serves evidentiary purposes and helps ensure the regularity of the application, but it also contributes to the reconstruction of the person and the validation of their lived experience.
When it comes to deciding the fate of people waiting for a new life at the border, it is crucial to examine all relevant questions and risks carefully. Many experts are raising concerns about the use of "intelligent tools" in the asylum sector and in other areas where human rights are at stake. Political responses to these concerns, however, have been slow and shaped by conservative viewpoints; an answer is overdue. Hopefully, the issue will receive the attention it deserves without needing another scandal to bring it to light.