Artificial intelligence has come into prominence in the public imagination with the emergence of generative AI tools such as ChatGPT. It's highly likely that some areas of AI development will disappoint and underwhelm, so it won't live up to the hype. But developments in other areas are likely to positively surprise, and the direction of travel for this technology is one of remarkably rapid, even exponential, innovation.

So we can see already that AI has huge potential. But we can also see that AI has real risks and downsides. As a general purpose technology, AI has been likened by some to the harnessing of electricity in the past: it can be used to enable many different things. The uses and influence of AI will likely impact on every aspect of human life: how we communicate, how we learn, our livelihoods, the economy and the environment. It will also have implications, good and bad, for peace and security, and for all of us seeking a more peaceful and just world.

So I have just under 20 minutes to introduce the key issues, and I hope we can then discuss them in more detail in the Q&A and in this afternoon's breakout sessions. I'm going to touch briefly on the risks of AI, the way the UN and other organisations are seeking to govern these risks, and then specific ways that AI might support those working in international peace and security.

My starting point is UNESCO, which was the first international organisation to agree ethical principles for AI; those principles are almost a useful cheat sheet for thinking through the issue areas for the challenges AI presents. They cover respect for human rights, transparency, accountability, justice and fairness, non-discrimination and equality, privacy and data protection, participation and inclusivity, safety and security, sustainability and the promotion of peace. So very much the themes of this conference today. And AI, in a way, is also a new actor coming onto the scene that's not a human or institutional actor. So I'll go into some of these UNESCO-ish areas.

First is bias from AI algorithms. The design of algorithms, the data they're trained on and the uses to which they're put can perpetuate, or as we've seen intensify, existing prejudices and discrimination, and can introduce new ones: against disabled people, older people, young people, around ethnicity, against women and LGBTQ persons. This can increase inequalities and exacerbate social tensions and conflict.
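To make that bias risk concrete, here is a minimal sketch of one of the simplest checks an auditor might run on a model's decisions: comparing selection rates across demographic groups, a demographic-parity check. The data and the loan-approval framing are entirely invented for illustration; real audits use far richer metrics.

```python
# Minimal sketch of a demographic-parity check on a model's decisions.
# The decisions below are invented toy data, not from any real system.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical loan decisions produced by some model.
decisions = ([("group_a", True)] * 80 + [("group_a", False)] * 20
             + [("group_b", True)] * 55 + [("group_b", False)] * 45)

rates = selection_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                                 # {'group_a': 0.8, 'group_b': 0.55}
print(f"demographic parity gap: {gap:.2f}")  # 0.25
```

A large gap doesn't prove discrimination on its own, but it's the kind of signal that should trigger a closer look at the training data and the model's design.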
A second area is climate change. Training and testing large language models requires a huge amount of electricity, and there is a further climate effect from mining the minerals and rare-earth elements that go into making the semiconductor chips that AI is trained on.

And in the use of AI, we're likely to see over the next year or two a huge increase in the automation of processes, linking AI software to other bits of software and computers through what are called APIs. Quite soon, I think, we'll have tens of millions of agents, that's AI bots working on our behalf while we sleep or while we do other tasks: negotiating with people to buy things for us, arranging our travel and holiday itineraries, and so on. So this could be a huge amount of increased electricity use in the background, above what we've got already.
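To give a flavour of what linking AI software to other software through APIs means in practice, here is a minimal, hypothetical sketch of an agent loop. Everything in it, the `ask_model` stand-in, the flight-search function, the tool names, is invented for illustration; it mimics the shape of the function-calling pattern rather than any real service.

```python
# Hypothetical sketch of an AI "agent" loop: a language model chooses which
# external API to call, the program executes the call, and the result comes
# back. All names here (ask_model, search_flights) are invented stand-ins.
import json

def ask_model(prompt: str) -> str:
    """Stand-in for a call to a language model that returns JSON naming a
    tool and its arguments, as function-calling APIs typically do."""
    return json.dumps({"tool": "search_flights",
                       "args": {"origin": "LHR", "dest": "GVA"}})

def search_flights(origin: str, dest: str) -> list:
    """Stand-in for a real travel API reached over HTTP."""
    return [{"flight": "XX123", "from": origin, "to": dest, "price_gbp": 120}]

TOOLS = {"search_flights": search_flights}

def run_agent(goal: str) -> list:
    plan = json.loads(ask_model(goal))          # 1. model decides what to do
    return TOOLS[plan["tool"]](**plan["args"])  # 2. program calls the chosen API

print(run_agent("Find me a flight from London to Geneva"))
```

Each of those background calls runs on real hardware somewhere, which is why tens of millions of such agents add up to the electricity point above.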
But AI models can also hugely help in climate change forecasting; the better use of land for biodiversity, climate or food security; the discovery of new materials, nanotechnology, plastics alternatives; and new renewable technologies to help in the green transition to a circular economy. So it'll be a trade-off between hugely increased electricity use on one side and smart solutions on the other.

Next is the risk of misuse: bad actors, be they organised crime, terror groups or state actors, weaponising AI for misinformation and disinformation. We've seen how it can exacerbate ethnic divisions in society, influence elections through deepfake videos, discourage voting through personalised messaging, and target human rights defenders or political opponents.

Then there's the area of privacy and mass surveillance risks. Then the area of military applications: that's AI in nuclear guidance and other weapon systems; the development and deployment of lethal autonomous weapon systems without any human input into the decision to kill; and AI in combination with biological or chemical weapons, allowing someone to develop these in their own kitchen.

Seventh is the geopolitics of AI and the risk of an AI arms race, including trade wars over semiconductor chips and the use of AI in conflict, such as through drone technology.

Then there's the risk of what's called AGI, artificial general intelligence. I'm not going to spend too much time on this, but we hear about it from time to time in the media: the risk of AI becoming smarter than us and no longer aligned with human values and interests. And I think that is a really worthwhile area to look at. But there are some risks with it, and one of the key risks is distracting away from the immediate problems that AI produces, such as discrimination and the climate change risks. There's also a risk, not that AI will become too smart, but that we spend billions on branches of AI that remain too stupid or inaccurate to be of value, with the wasted opportunity cost of that money and talent.

Ninth is AI safety risk. This is what the Bletchley summit last autumn was about: AI safety. But risks are hard to predict; they may arise from the technology itself. Somebody likened trying to develop standards to keep AI safe, when AI is innovating so rapidly, to trying to set design standards for a car while the car is still in motion. It's really hard when you develop standards and the next day the model has innovated to such a degree that the standard is no longer relevant.

And lastly, the risks of widening the digital divide, both between and within countries. Compute, the computational hardware on which we can train and test these large language models, is largely clustered in particular countries. And the talent in designing algorithms, though this may well change, is also clustered in particular countries. The two real AI superpowers are the US and China, but there are also emerging clusters in other Western countries, in Japan, Korea and Singapore, and also in Saudi Arabia and the UAE. And AI has been developed largely in the private sector, with all the profit incentives that entails.

The implication of this clustering of compute power in particular countries is that the design of the models, models which are going to influence every aspect of our lives, reflects a world view of what a good life looks like from the perspective of a narrow set of countries, and possibly even of the narrow set of young, male, often white programmers who are designing a lot of these things. And the use cases which are prioritised are not necessarily those of the poorest billion.

I believe that to be robust and legitimate, global AI must benefit from inputs from a wide range of cultural and epistemological perspectives, including the equal contribution of women and learning from indigenous peoples' knowledge, which often has a very small or non-existent digital footprint. But creating global AI from different cultures also presents challenges over values, especially regarding women and LGBTQI persons,
with approaches within the Global South often being more conservative.

Another way AI exacerbates the digital divide, especially for the least developed African countries, is that AI rests on a pyramid. At the base is electricity access, which is sometimes non-existent in rural parts of countries. On top of electricity you need ICT: computers and smartphones. You then need connectivity through Wi-Fi, and only at the top of the pyramid can you then run AI algorithms. This can change over time, but it means there is a real risk of exacerbating the existing digital divide.

Women in the developing world often have less access to computers and less access to smartphones, so their data is less likely to be used to train and refine AI models, because those models pick up data from social media and elsewhere. So that's another way the technology skews away from a kind of universal AI. In a way, I think there's an analogy with medicine, where drugs were first designed primarily around the male body, and it was found later that women's bodies were different and the medicines didn't have the same effects. I think there's a real risk of AI being designed in a similar way.

And the governance of AI risks not being informed by the perspectives of those countries in international fora. I worked a bit with the Simon Institute last year and the permanent missions of Kenya and Costa Rica at the UN in Geneva, to try and train a cadre of diplomats from the Global South in understanding AI governance issues. The Diplo Foundation in Geneva is doing a lot in African capitals, and the International Science Council is curating regional networks of scientists and social scientists in Latin America, the Caribbean, Africa and the Asia-Pacific.

So, solutions. I want to talk about the governance of AI, and then lastly about peace and security.

Governance of AI is likely to happen in a patchwork way, partly because its dual use has military implications, so countries are cautious about what they share with other countries, and also because, as we heard earlier today, we're increasingly living in a multipolar world. Most regulation, therefore, is likely to happen at national level or at regional level, like the recent EU AI Act, but guided by international principles, as well as by a patchwork of bilateral trust-building dialogues, for example between the US and China, very much as in the nuclear weapons domain.

So what's been happening at the UN?
Well, UN Secretary-General António Guterres is the first Secretary-General with a science and engineering background, so this has been a real passion for him. He set up a high-level advisory body on AI, whose interim report talked about the areas of need rather than the solutions; it will report fully this July. Where I think it's probably going to land is recommendations on creating new international AI institutions modelled on institutions in other domains. In particular, three have been talked about a lot. First, the role of the IAEA, both in capacity building, in its case for civil nuclear power, and in monitoring and enforcement; in this case it would be of AI risks. Secondly, the IPCC on climate: how do we create a mechanism whereby researchers, scientists and government officials can collaboratively track these fast-moving, innovative developments? And the International Civil Aviation Organisation is perhaps a good example of how standards agreed at an international level, on aviation safety and interoperability, can then be implemented at a national level in conjunction with the private sector.

Then the UN General Assembly is building towards the Summit of the Future in September of this year, which will include agreement on a Global Digital Compact with AI at its heart; those negotiations are being led by Sweden and Zambia.

UNESCO and the OECD have designed ethical principles, and we need to integrate those principles at every point in the AI lifecycle: in the design, the training, the testing, the deployment and the use of AI. Then we have standard-setting in the International Telecommunication Union; next week I and some other colleagues from Oxford will be going to help organise an AI governance day at the ITU in Geneva. The international standards organisations have come up with standards on AI, as has the IEEE, a technical body.

We've had the first meeting of the UN Security Council on AI and peace and security, held under UK auspices in July last year; our foreign minister spoke at it. At that meeting, the Secretary-General pointed to the preamble of the UN Charter, which says: 'We the peoples of the United Nations, determined to save succeeding generations from the scourge of war...'. He used those words, 'succeeding generations', as almost a mandate for the UN to work on this issue, because unless we hold the interests of future generations in mind, and recognise the impact that technology is going to have on the path of humanity, we're not going to serve them well or prevent war in the future.
Then there's lethal autonomous weapons: the Argentinean ambassador has been presiding over those negotiations in Geneva. There are the US-China dialogues I mentioned. Two months ago the US piloted, in actually a very inclusive way, the omnibus UN General Assembly resolution on AI, which was co-sponsored by over 130 countries including, crucially, China. And at this moment China is consulting on a follow-up resolution on AI capacity building for developing countries.

We've had the UK's Bletchley meeting on AI safety, with follow-ups next week in the Republic of Korea and in France in six months. Six-month intervals, rather than every two or three years, because the technology is moving so fast. It's a good example of a minilateral that included China. China is very concerned about being excluded from bodies like the OECD and others that draw up AI principles, so it's pioneering its own global AI governance initiative and looking at the digital Belt and Road, the Shanghai Cooperation Organisation and BRICS Plus working groups on AI.

So, quickly, a couple of regional initiatives, and then the use of AI for conflict prevention, mediation and peacebuilding. I mentioned the EU AI Act, focused on privacy, intellectual property and trade. There's an African Union continent-wide strategy being negotiated in Addis Ababa, but in fact the real pioneers have been at national level, particularly Kenya, Mauritius, Egypt, Nigeria and South Africa. In Latin America there was a declaration agreed in Chile among those countries, but very little work so far; individual countries like Brazil are preparing national legislation. We've had working groups with the Arab League and the Gulf Cooperation Council, and I believe Saudi Arabia and the UAE are really big actors in this space.

And then we've had ASEAN's ethical principles. I know from talking over lunch that a number of you work in Southeast Asia; I think ASEAN is going to be a very powerful actor in the AI space. Singapore has floated its own international principles on generative AI that it's seeking global buy-in for, and Malaysia, which has the chair of ASEAN in 2025, and then the Philippines a year after, are both looking to advance ASEAN institutions while they're there. We should also not forget Central Asia: the Kazakhstan president is passionate about AI, and I think will probably use CICA to take it forward.
That's his conference on confidence building in Central Asia. And then there are the Pacific Forum countries, which are too small to really develop their own large language model capability, but which face existential extinction from climate change, and so are looking at AI as some way to help with that.

So, to end: the use of AI for conflict prevention, mediation and peacebuilding. Some of these uses are happening now; some are actually very hard to implement because of mistrust or technological barriers. We've already seen a huge increase in the use of AI and data sets by the UN's Department of Political and Peacebuilding Affairs and the Department of Peace Operations: use in conflict prediction, real-time monitoring of conflict and human rights abuses, and monitoring of ceasefires. There's potential for, though I haven't seen evidence of, use in monitoring disarmament and non-proliferation commitments.

Then there's the peacebuilding area. AI could potentially help us better understand the causes of conflict from multiple angles, including an area the UN has had a blind spot on for many years, the economic drivers of conflict, but also other causes that have been neglected by our existing institutions. AI tools can help detect and mitigate cyber threats. And AI can magnify the voices of marginalised groups: the Department of Peace Operations has trialled large digital dialogues through African peacekeeping missions, particularly targeting and hearing the voices of women and minorities in the conversation. To make that happen, we're also seeing attempts to develop privacy tools through AI to allow large-scale canvassing of views in sensitive settings while preserving anonymity.

Then we've got programmes on using AI in mediation with the Centre for Humanitarian Dialogue in Geneva, and work done by the new NGO AI for Peace. AI as a mediation pathway and solution provider could possibly be very helpful for a mediator, but there will also be great concerns about what happens within the black box, where we don't really know who's pulling the strings inside or whether it's been hacked in some way.

I've been in some discussions with different foreign ministries in the South and North on the use of AI tools to reduce asymmetry in negotiations. A country like Palau, which has a tiny foreign ministry, could use AI to synthesise the whole negotiating position of the US or China on climate change or on other issues, write speeches for ministers, and so on.
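As a sketch of how a small foreign ministry might attempt that kind of synthesis, here is a minimal example using an open summarisation model via the Hugging Face transformers library. The model choice and the sample text are illustrative assumptions on my part, not a recommendation, and a real workflow would need to chunk long documents; the information-security and bias concerns come next.

```python
# Illustrative sketch: condensing a counterpart's public position statements
# into a short brief. The model choice is an assumption; any comparable
# summarisation model would do. Long documents need chunking to fit the
# model's input limit.
from transformers import pipeline

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Invented placeholder text standing in for a real published position paper.
position_paper = (
    "Our delegation supports strengthened mitigation targets, provided that "
    "climate finance commitments to developing countries are met in full. "
    "We favour a phased timetable, provisions for technology transfer, and "
    "regular review cycles tied to measurable emissions outcomes."
)

brief = summarizer(position_paper, max_length=60, min_length=15,
                   do_sample=False)[0]["summary_text"]
print(brief)
```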
So there's real potential to reduce asymmetry, but still there's real concern among those foreign ministries around information security and bias.

The last two points are AI language tools. Simultaneous translation in real time is now here: you no longer need to take a dictionary with you when you go on holiday. You can literally speak into your phone, it will speak in the other language, and it will translate back with a high degree of accuracy. So I think encouraging intercultural understanding among younger people is one valuable thing. And the potential to tap into the demographic dividend of the African continent, India and other areas which are growing and have a huge pool of talent for the design of AI is important for the future.

The biggest boon for peace, I think, is perhaps the economic prosperity boost of AI. There's potential in both the Global North and the Global South to lift hundreds of millions out of poverty, and, through efficiencies in the West, to release more funds to meet climate change finance promises and transform financing. But I think that won't happen unless all of us, working in every field, speak out about the absolute need for AI to be infused with universal values of equity, fairness and transparency, and for always having a human in the loop for AI decisions.

So, I've left out so many things I'd like to talk about, but I've stuck to this to keep within my time, and I look forward to the Q&A and discussion later. Thank you very much.