Artificial Intelligence & Human Rights: Friend or Foe?
By Alberto Quintavalla and Jeroen Temperman
Artificial intelligence (‘AI’) applications can have a significant impact on human rights, and this impact can cut both ways. On the one hand, AI may contribute to the advancement of human rights. A striking example is the use of machine learning in healthcare to improve precision medicine so that patients receive better care. On the other hand, it can pose a serious risk to human rights. Unfortunately, examples abound; perhaps the most obvious is the use of algorithms that discriminate against ethnic minorities and women.
Open University of the Netherlands
Alongside the current discussion on the relocation of unaccompanied minor migrants from Greece to other EU Member States, another debate is ongoing: whether to restart ‘Dublin transfers’ from the other Member States to Greece.
In principle, the Dublin Regulation (No. 604/2013) makes the first Member State in which a person lodges an application for international protection responsible for examining that application. The system rests on the principle of mutual trust, which requires EU Member States to trust one another to comply with EU law and to recognize one another's decisions in civil and criminal justice, asylum law and family law. However, Dublin transfers to Greece have been suspended since 2011 because of possible violations of Article 4 of the EU Charter of Fundamental Rights, i.e. the prohibition of torture and inhuman or degrading treatment or punishment. In practice, this meant that the Member State in which someone submitted an asylum application was responsible for that application instead of Greece, even if the person had previously applied for asylum in Greece.