Hello and welcome to another episode of the Migration Oxford Podcast. I'm Rob McNeil.

And I'm Jacqui Broadhead.

So today Jacqui has been talking to two experts on the use of artificial intelligence and other modern forms of technology in the management of arrivals in Europe. So, Jacqui, what made you want to talk about this?

Thanks, Rob. I feel like AI is kind of everywhere at the moment, from ChatGPT through to, you know, even within the university, how it might impact teaching and learning here. And so it felt really important to hear a little bit about what's been happening within migration governance, both processes that have been going on for a long time (as ever with these things, you think it's the big new thing and it turns out that some of it has actually been part of much longer processes) and some of the new technologies and the way that they are genuinely impacting on decision making in migration governance.

One of the questions for me is, I mean, is there a problem? Is it wrong to use modern technology to try and understand who's arriving, what kind of risk they pose to people, and for governments to try to ensure that they've got the most efficient mechanisms possible to assess people's claims and to deal with things quickly?

Yeah, I mean, one of my favourite programmes is Mad Men, and there's a line that Don Draper says in that, where he says, "What if change is neither good nor bad? It just is." And that's something that I thought about quite a lot in this discussion, which is: is it the tools themselves that are the issue, or the kind of systems that they're going into? I think one of the really interesting points in the discussion is that if they are going into a system that contains inequalities, biases and discrimination, they have the possibility to amplify and replicate that discrimination, particularly because they allow decisions to be made much quicker than humans might make them, with fewer moments for reflection. You know, are we walking into a situation where actually we're going to see these kinds of biases replicated on a much larger scale?

But I also think there's something about the moment that we're at. It's true that these technologies have been around for a while, but this does feel like quite a transformative moment. And that means that policymakers and governments are kind of writing the rules of the game.
And at the moment that we're writing those rules, it's probably a good idea for us to be having conversations about what we want these types of decisions to be, and how much we want human judgement as part of them as opposed to something predictive: if we were making an application for a visa or for asylum, how much would we want that to be done by a person actually looking at us, versus it being done by a kind of predictive algorithm? All of these sorts of huge questions that I think we're all grappling with also come into play within migration governance, and they are happening both for us as citizens, in deciding what's acceptable and what might not be acceptable, and also within governments.

So there's a question here, in that case, about the allocation of resources and the scale of situations. In the UK at the moment, one of the things that we talk about all the time is this massive backlog in the asylum system, which is largely a facet of the slow processing of claims by decision makers. Now, surely removing a degree of the human biases from decision making is arguably a good thing sometimes, isn't it? It's not to say that automating processes is a perfect solution, but if you've got a situation which is consistently increasing in scale, don't we need technologies like this sometimes?

There are some really interesting examples within the discussion of the kinds of tools that have been used. One of the things that really struck me, particularly about asylum, is that the tools being discussed are actually tools that will then aid human decision makers. There's a really interesting example about dialects that we chat about, and it raises the question of what skills those decision makers need in order to deal with this new information that they're getting through. And it also really reminded me of some fantastic research that Lena Rose, who was formerly of this parish and is now elsewhere, did, looking at the way that we make credibility decisions and the fact that these are such incredibly complex and sophisticated decisions on a human level, about somebody's well-founded fear of persecution, for example, and the extent to which we are able to understand the new information that's coming in: whether it is supportive of better decision making or whether it's just adding to the pile of information and making those decision-making processes actually much more difficult.
I feel like we haven't necessarily yet got to grips with the complexity of making asylum decisions, and then we're adding this whole extra layer on to it. I think your point about how this could actually remove some of the biases is really interesting, and it's something that could be explored a lot further. I know a lot of the worry around AI is that, far from removing biases, it amplifies them because of the way that it's trained. AI, to my layperson's understanding, is trained on the stuff that is already out there, and if the stuff that's already out there contains these pre-existing biases, then it is also going to have them. So that's one of the ways in which it becomes almost a little bit more human. I just feel like these are incredibly interesting and important discussions, and this is not something that we're projecting forward into the future: these are technologies that are being used right now. And so it's a discussion that should be had.

In the same way that AI is trained and learns, so do people. AI is not the only thing that learns. And so we're constantly going to be in a situation, surely, where there has to be a balance struck between the way that people learn how to use a system, or to manipulate a system, or to play a system, and the idea of the perfect decision-making process, whether that's humans making solid, sound choices based on a complete understanding of a situation, which is never there, or machines doing something based on a perfect automated system. There's always got to be some kind of balance between these things.

I'm joined by Derya Ozkul, senior research fellow at the Refugee Studies Centre at the University of Oxford, and Caterina Rodelli, EU Policy Analyst at Access Now. Derya, we hear so much about how AI might transform our lives. Can you tell us about some of the ways in which it might transform, and is already transforming, migration governance?

Yes. So we started hearing more about AI when it started impacting our lives. For example, AI is transforming the workforce, with many types of jobs now being automated. But it's also used in many other areas, including migration, and we see various practices that state authorities have started using on migrants. Right now, in fact, from the moment that a possible migrant even thinks about migrating somewhere else,
their data, the Google searches and the news that they're looking for, are all being recorded and monitored. So, for example, predictive analytics systems help states to forecast the number of migrants arriving at their borders, and these systems can then lead to increased border controls and pushbacks of people who may need protection.

We also see a variety of other types of new technologies being used, for example drones or various types of sensors in border areas for surveillance purposes. Apart from those, in recent years several states have started using new technologies to automate some of their casework processing. These are often in the form of automated systems that take inputs from other databases. For example, in Norway, the immigration authority has automated the processing of citizenship applications. The way they can do that is by taking the data that is already available on that individual from all the databases that they have, and then checking whether the person fulfils all the requirements to become a citizen. So the process can be fully automated if they have enough data about a person, basically.

But the more dangerous form of automation in this field is when immigration authorities use what are called risk assessment systems. We have seen these types of systems, for example, in the UK and in the Netherlands as well. In the UK case, thanks to huge efforts from civil society, it was found that applications from certain nationalities were automatically categorised as high risk and that those applications were receiving a higher level of scrutiny from officers. The civil society organisations, particularly the Joint Council for the Welfare of Immigrants and Foxglove, also found that the system had a feedback loop problem, which means that applications from a certain nationality being rejected at a higher rate will also influence future applications from that nationality. So obviously these types of systems can create discrimination on the basis of nationality, and it's quite risky and important to monitor them.
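To make the feedback-loop problem described here concrete, the following is a minimal, purely illustrative sketch in Python. It is not the UK streaming tool or any real system: the threshold, refusal rates and the "extra scrutiny" effect are invented assumptions, and the only point is to show how a risk stream that feeds on past refusal rates can lock a nationality into the high-risk category.

```python
# Purely illustrative sketch of a feedback loop in nationality-based risk
# streaming. Not the UK streaming tool or any real system; every number,
# threshold and name below is an invented assumption.

BASE_REFUSAL = 0.15            # assumed refusal rate under ordinary scrutiny
EXTRA_SCRUTINY_EFFECT = 0.15   # assumed additional refusals once flagged "high risk"
HIGH_RISK_THRESHOLD = 0.25

def stream(past_refusal_rate: float) -> str:
    """Assign a risk stream purely from a nationality's past refusal rate."""
    return "high" if past_refusal_rate > HIGH_RISK_THRESHOLD else "low"

def next_refusal_rate(past_refusal_rate: float) -> float:
    """Expected refusal rate for the next cohort, given how it was streamed."""
    if stream(past_refusal_rate) == "high":
        return BASE_REFUSAL + EXTRA_SCRUTINY_EFFECT  # flagged cases get refused more
    return BASE_REFUSAL

# Two invented nationalities that differ only in their historical refusal rate.
rate_a, rate_b = 0.30, 0.10
for year in range(1, 6):
    rate_a, rate_b = next_refusal_rate(rate_a), next_refusal_rate(rate_b)
    print(f"year {year}: A -> {stream(rate_a)} ({rate_a:.2f}), "
          f"B -> {stream(rate_b)} ({rate_b:.2f})")
# Nationality A stays locked in the "high" stream at a 0.30 refusal rate,
# while B settles at 0.15, even though BASE_REFUSAL is identical for both.
```

The design point being illustrated is simply that yesterday's outcomes become today's risk input; any real system is far more complex, but that compounding logic is the problem the civil society groups identified.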
Absolutely. It's amazing how much is already happening. Caterina, there are some big changes happening at EU level, but seemingly not with very much regulation. Can you tell us how some of the technologies that Derya has talked about are changing things in the Schengen zone, and why we might want to regulate things to reduce some of the discrimination that we've heard about?

Yes, thank you. I think an important starting point for orienting ourselves in this discussion is to reflect on the fact that new technologies in the area of EU migration policies are not that new. In fact, they were rolled out and developed together with the new design of migration policies in the nineties. The Schengen area started in 1995 with the abolition of internal frontiers between European member states, and a new European common external frontier was developed: what civil society nowadays calls Fortress Europe. The digitalisation of borders and technology was, from the get-go, crucial for the functioning of Fortress Europe. The first large-scale database, the Schengen Information System, was in fact deployed in 1995, with the purpose of facilitating police cooperation among member states as well as border management, and it held information on different categories of people, such as missing persons or people wanted for arrest. In the same way, for the implementation of the Dublin Regulation, the Eurodac database was introduced at the beginning of the 2000s. That database holds and stores biometric information on asylum seekers, and it is fundamental for the implementation of the Dublin Regulation, which should support member states in distributing asylum seekers who arrive in the first countries at the European southern border.

It is important to say that technology was key from the beginning, and it is important for implementing punitive migration policies that are very much focused on enforcing deportation or on impeding people from entering the European Union, even outside of the European borders, for example in embassies when applying for a visa, as Derya explained before.

And if we look at the European policy landscape, there is a variety of different legislation that has allowed for the introduction of different technological solutions. We have migration policies, such as the regulations that underpin migration databases like the VIS, which is a database for information sharing on visas, the Schengen Information System and others. We also have a regulation on the interoperability of all of these migration databases. We have the new Pact on Migration and Asylum, which allows for the introduction of new technology, but we also have regulation in other areas, such as digital policies like the Artificial Intelligence Act, which also regulates the use of technology in the migration context.
We also have legislation in the context of law enforcement, such as the Europol Regulation, which gives police forces more power to use AI-based systems and surveillance technology, and also legislation and funding initiatives that allow money to be invested in these types of technologies.

Thanks, Caterina. Derya, I am really interested in this difference between some of the processes that Caterina has spoken about, which I guess are around surveillance and collecting information, and some of them that are kind of predictive and about making decisions. One of the areas where we know policymakers have to make huge decisions is the asylum process, deciding whether somebody has a well-founded fear of persecution. Could you tell us a little bit about how these technologies are affecting the asylum system, both for asylum seekers themselves and also for the people making the decisions?

In asylum decision making, we can see the introduction of some of the new technologies not to automate the whole decision-making process itself, but to automate some of the evidence that is used in decision making. So, for example, one of them is used to determine the applicant's identity. The immigration authority in Germany has been using what is called a dialect recognition tool to identify the applicant's language. This technology, very much like other types of biometric technologies, can identify a person's voice data in terms of percentages. It generates a report basically saying that this person speaks, say, 64% Levantine Arabic and 16% Gulf Arabic, and so on. This particular technology is fundamentally changing the way language is assessed in asylum decision making. Previously it was always human linguists doing this kind of analysis, and they were recommended to make a qualitative analysis of a person's language. So they would say, okay, this person is very likely speaking Lebanese Arabic, and then leave it there, whereas this new technology gives percentages, and that changes how the decision maker perceives the applicant, because it says 64% Levantine Arabic but not so much the rest, and so on.
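To illustrate the shift Derya describes, here is a minimal sketch of the form of output such a tool produces: a percentage per dialect rather than a linguist's qualitative judgement. This is not the German authority's dialect-recognition software; the dialect labels and scores are invented, and the softmax step is simply an assumed way of turning model scores into percentages.

```python
# Illustrative only: the *shape* of a dialect-recognition report (a percentage
# per dialect), not any real system's model. Labels and scores are invented.
import math

def to_percentages(raw_scores: dict[str, float]) -> dict[str, float]:
    """Turn raw classifier scores into percentages that sum to roughly 100."""
    exps = {dialect: math.exp(score) for dialect, score in raw_scores.items()}
    total = sum(exps.values())
    return {dialect: round(100 * value / total, 1) for dialect, value in exps.items()}

# Pretend these scores came from an acoustic model run on an interview recording.
raw_scores = {"Levantine Arabic": 2.1, "Gulf Arabic": 0.7,
              "Egyptian Arabic": 0.3, "Other": -0.5}

print(to_percentages(raw_scores))
# {'Levantine Arabic': 67.3, 'Gulf Arabic': 16.6, 'Egyptian Arabic': 11.1, 'Other': 5.0}
# A caseworker now sees precise-looking numbers where a linguist would have
# written "very likely Levantine Arabic" and left it there.
```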
Another technology that is used more widely is mobile phone data extraction. A number of countries in Europe and elsewhere, such as Germany and the Netherlands, have introduced this technology, again to determine the applicant's identity, but also to identify their travel route and then use that as evidence to determine whether or not the person is telling the truth. This technology is of course very invasive of the person's private life, extracting all the data available on their mobile phone. But research in this area also shows us that mobile phones are often exchanged among migrants; they can be sold, or they can simply be used by friends and family members. And all the contradictions that come out of that can potentially be used against the applicant's credibility during the decision-making process. So overall, we see that these technologies really create additional grey areas for applicants to address during the interview process.

Thanks so much, Derya. And Caterina, the Protect Not Surveil campaign is looking to make a human rights based argument for regulating some of these forms of new technology, but also for banning some others, including some of the predictive and profiling systems that we've talked about. Can you explain why the campaign is so worried about these forms of profiling, and where, in your view, AI actually shouldn't be used in migration governance at all?

Yes. The Protect Not Surveil campaign fits into a progressive civil society effort to draw some red lines on the use of artificial-intelligence-based systems where they irreversibly violate fundamental rights. And we see that in the migration context, people on the move and third-country nationals are usually used as the testing ground for some types of technology, so it's crucial that those red lines are drawn there. The types of systems that should be banned relate not only to profiling-type systems, but also to systems that amount to biometric mass surveillance. So there is a call for a ban on remote biometric identification systems, which could identify people remotely and could be used outside detention centres or at the borders to identify people through their biometric data; on systems such as emotion recognition systems, which claim to infer a person's emotional state from their facial micro-gestures and which reinforce suspicion against non-European citizens; and then on profiling systems, as Derya explained before. These systems that claim to assess the risk that a person poses are inherently biased.
Take the example that Derya gave before, from the UK but also from the Netherlands, of automated risk assessment used in the context of triaging visa applicants. These types of systems assess how much a person might pose a risk, either to public security or of overstaying their visa, based on categories that are predetermined by people who by nature cannot be objective, because they build a system of risk that is embedded in specific assumptions. So when it comes to risk assessments, you have some risk indicators that are pre-decided: categories that have been decided to be a factor of risk, for example country of origin, level of education or type of employment. And then a screening rule is associated with each risk indicator, so for each indicator a different risk rate will be calculated.

In the case of the Netherlands, for example, in the triaging system for visa applications, applicants from Suriname systematically received a higher score on the risk indicator based on country of origin. And the same went for the case from the UK: applicants coming from certain African countries were systematically receiving a higher score when it came to the risk indicator of country of origin. But the same could also apply to level of education. When people have gone to universities that have a religious background, they might receive a higher score; a Muslim university could give the idea that the applicant has a Muslim affiliation. And there are many other such examples. So profiling systems, as well as the other types of systems that Protect Not Surveil is calling to ban, are all systems that would reinforce systemic oppression and forms of discrimination under the guise of technical neutrality.
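To make the mechanics Caterina describes concrete, here is a minimal invented sketch of indicator-based risk scoring: each pre-decided indicator has a screening rule attached, the rule assigns a risk rate, and the rates are combined into one score. None of the categories, rates or the threshold come from the Dutch or UK systems; they are assumptions chosen only to show where the designers' choices get baked in.

```python
# Invented sketch of pre-decided risk indicators and screening rules.
# Not the Dutch or UK system; every category, rate and threshold is made up.

# Each screening rule maps an indicator value to a risk rate chosen in advance
# by whoever designed the system: this is where assumptions are baked in.
SCREENING_RULES = {
    "country_of_origin": {"Country A": 0.8, "Country B": 0.2},
    "education":         {"religious university": 0.6, "state university": 0.2},
    "employment":        {"self-employed": 0.5, "salaried": 0.1},
}
DEFAULT_RATE = 0.3          # even unknown values are still given a number
HIGH_RISK_THRESHOLD = 0.5

def risk_score(application: dict[str, str]) -> float:
    """Average the per-indicator risk rates into a single overall score."""
    rates = [SCREENING_RULES[indicator].get(application.get(indicator), DEFAULT_RATE)
             for indicator in SCREENING_RULES]
    return sum(rates) / len(rates)

applicant = {"country_of_origin": "Country A",
             "education": "religious university",
             "employment": "self-employed"}

score = risk_score(applicant)
print(round(score, 2), "->", "high risk" if score > HIGH_RISK_THRESHOLD else "low risk")
# (0.8 + 0.6 + 0.5) / 3 = 0.63 -> "high risk", driven entirely by the
# pre-decided rates attached to country of origin, education and employment.
```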
A final question for you both. I guess a lot of this discussion is quite scary, and it is difficult to hear about some of the risks, in particular in relation to discrimination. But I wonder, are there any causes for optimism in terms of the use of these new technologies in the right hands? Do you think there could be any ways in which they could solve some of the long-standing issues within migration governance? Is there any positive note on which we can end? Derya, I'll start with you.

Some of the technologies that we have mentioned, such as automated processing of visa applications, can of course bring benefits around speed, but those benefits are not equal for all. Some applicants can, of course, receive a speedier response to their application, within a day, because it's automated, it's low risk and they receive a positive response; but others can come under more scrutiny and perhaps receive a rejection. Overall, we can see that most of these technologies have been designed in a way that benefits state authorities themselves, to ease and lessen their workloads. The only way that they can be beneficial for migrants is if they are first thought about and designed around migrants' needs. We haven't mentioned it in this conversation, but there are some technologies that are designed, for example, to match refugees with municipalities and areas that are best for their future employment or specific needs. There is, for example, the Match'In project in Germany, which is basically trying to match refugees' needs with municipalities' capacities across the country. So that's a really good example of what technology can achieve if it is centred around migrants' and refugees' own needs.

But apart from this, and we haven't talked about this at all, it's worth remembering that even with the best intentions, every technology also brings new demands on natural resources and has consequences for the environment. So we really need to keep asking ourselves whether we really need these technologies and whether this really cannot be done any other way.

Caterina, any grounds for optimism?

I would like to be the one to bring some optimism into this conversation, but I don't think the framework in which this debate is happening is one that will bring safer solutions. I always take from the abolitionist movements the idea that we don't need to reform a system that was created to oppress certain people. So I don't think the question is about the positive use of technology, but about the policies themselves. Of course, the technology is not a problem per se. We see the same type of technology used in different contexts, for example drones to detect people in distress. If you put them in the hands of Frontex, they can lead to the facilitation of pushbacks via the Libyan coastguard, whereas the NGO Sea-Watch also uses drones to detect the presence of people escaping from Libya towards Italy and, based on the drone's detection, will then locate their vessels to start a search and rescue mission.
But the thing is that technology is always used by someone who is advancing their own priorities. So when it comes to migration management, what needs to be done is to rethink the whole infrastructure, the whole set of objectives behind the policies. And if we want to address the problem of discrimination, and the problem of violence that is happening also through the use of technology, we should be bolder and more courageous in calling for systems that prioritise justice, that prioritise the ability and the freedom of people to move, and that do not hide behind technologies or opaque systems which in a way shield the responsibility of the authorities that are in fact legalising very violent policies at and within our borders. So not exactly optimism, but I certainly believe there is a way to improve this. And I would like to close by referring to a very interesting report that actually does bring some optimism, written by Equinox. It's about Fortress Europe, and it details three ways in which we can challenge and change discriminatory EU migration policies.

That seems like as good a place as any for us to leave it. Thanks so much to you both. You've been listening to the Migration Oxford Podcast. I'm Rob McNeil.

And I'm Jacqui Broadhead.