
How the UN is Driving Global AI Governance with Ian Bremmer and Jimena Viveros, Members of the UN AI Advisory Board

Richie, Ian and Jimena explore what the UN's AI Advisory Body was set up for, the opportunities and risks of AI, how AI impacts global inequality, key principles of AI governance, the future of AI in politics and global society, and much more. 
Updated Mar 2024

Guest
Ian Bremmer

Ian Bremmer is a political scientist who helps business leaders, policy makers, and the general public make sense of the world around them. He is president and founder of Eurasia Group, the world's leading political risk research and consulting firm, and GZERO Media, a company dedicated to providing intelligent and engaging coverage of international affairs.


Guest
Jimena Viveros

Jimena Viveros currently serves as the Chief of Staff and Head Legal Advisor to Justice Loretta Ortiz at the Mexican Supreme Court. Her prior roles include national leadership positions at the Federal Judicial Council, the Ministry of Security, and the Ministry of Finance, where she held the position of Director General. Jimena is a lawyer and AI expert, and possesses a broad and diverse international background. She is in the final stages of completing her Doctoral thesis, which focuses on the impact of AI and autonomous weapons on international peace and security law and policy, providing concrete propositions to achieve global governance from diverse legal perspectives. Her extensive work in AI and other legal domains has been widely published and recognized.


Host
Richie Cotton

Richie helps individuals and organizations get better at using data and AI. He's been a data scientist since before it was called data science, and has written two books and created many DataCamp courses on the subject. He is a host of the DataFramed podcast, and runs DataCamp's webinar program.

Key Quotes

The reason we need a global governance framework is because historically, when things are driven, when new technologies are driven by massive market forces, we are very good at privatizing the gains that come from those. And we frequently socialize the losses. That's even true in a great capitalist country like the United States. We socialize the losses. And that means that you get all sorts of negative externalities that no one wants to pay attention to. We've had 50 years of globalization. It's been unprecedentedly good for humanity as a whole. We've driven a global middle class. We've taken billions out of poverty. That's a wonderful thing. But there's also been massive negative externalities that no one wants to pay for. And in the absence of governance, what happens is the kids pay for them, the poorest people pay for them, the disenfranchised people pay for them, the global south pays for them. And so that's why the United Nations is so interested in this issue, is because AI is gonna be driving, driving, driving for the coming years.

AI industry regulation is really not sufficient for regulation, for governance. That's just a fact. Because it will not cover all of these different areas, which are fundamental, you know, within the rule of law in a law-based society. And in terms of the international legal order, if you're trying to get more scale, which we should, it's fundamental that we address all of these limitations that we have nowadays, because the trend is to just try to find analogies within existing laws, existing legislation, regulations that would cover any type of situation. But that's just not sufficient. And it's proving more and more so, especially with everything that has to do with criminal intent and so on. So that's why we need to set international governance, kind of like the guidelines, the guardrails, where everything else can be contained. That sets a starting point for all states to implement that domestically in terms of national legislations to delimit the scope of the way that both industries and users and governments can act and within those boundaries. As long as we can agree on these global boundaries, I think we're going to be in a much better place.

Key Takeaways

1

AI development and governance should involve collaboration across governments, private sector, NGOs, and citizens to ensure broad-based input and adherence to ethical standards.

2

AI has the ability either to mitigate global inequalities by promoting sustainable development or to exacerbate them if its benefits remain limited to wealthier nations, which underscores the importance of equitable AI access.

3

Contribute to AI and data commons by participating in initiatives that promote the sharing of AI resources and datasets, facilitating innovation and progress towards common humanitarian goals.


Transcript

Richie Cotton: Welcome to DataFramed. This is Richie. The rise of AI has many opportunities for humanity, with enormous scope for increasing productivity and making better decisions. On the flip side, AI also carries many risks, not least increasing the digital divide. The effect will be profound and felt across the world.

Even the most optimistic of AI advocates believe that AI should somehow be aligned with the goals of humanity. And the UN has decided that a good way to achieve this alignment is through implementing a global AI governance framework. Consequently, it's set up an advisory body on artificial intelligence.

This consists of world experts on AI who've been tasked with forming a plan to create this governance framework. The interim report from this group was recently released, so today we have two experts from the group to discuss the findings. Ian Bremmer is the president of Eurasia Group, a leading political risk research and consulting firm.

He also runs GZERO Media, which provides coverage of international affairs. Ian hosts the weekly GZERO World TV show and regularly appears on national news stations. He also has written 11 books, including two New York Times bestsellers. On top of this, he teaches at Columbia University's School of International and Public Affairs and was previously a professor at New York University.

Jimena Viveros is the Chief of Staff and Head Legal Advisor of Supreme Court Justice Loretta Ortiz in the Mexican Supreme Court. She's also a co-founder and the Vice Chair of the Mexican Commission of International Criminal Law and Transnational Crime, and a research fellow at Case Matrix Network.

Additionally, she's completing a PhD at the University of Cologne. Which is to say, both Ian and Jimena are the best of the best when it comes to understanding the global implications of AI. Let's hear their views.

Hi, Ian and Jimena, great to have you both on the show.

Jimena Viveros: Great to be here. Thank you very much.

Richie Cotton: Excellent. So I'd love to get a bit of background context on your work with the UN. First of all, can you tell me what the advisory body was set up for? Jimena, do you want to lead with this?

Jimena Viveros: Yes. So the Secretary General of the UN had the great initiative to create this body. Last year he issued, basically, a call for nominations, and there were almost 2,000 applications from 128 countries. The idea was to summon a great group of experts from different nationalities, different regions, different expertises, obviously gender balanced, but also balanced in terms of age, just a very diverse body that could bring in a variety of views. So we were announced on the 26th of October last year, and we were already working on the 27th in our first plenary meeting. So our purpose, or our mandate, is twofold. First, we're supposed to come up with an interim report, which we already did.

Ian is a brilliant rapporteur, so thank you for that. Then we're going to come up with the final report before the Summit of the Future next year. And the second part is also to come up with guidelines or preliminary functions for an international organization that would be in charge of implementing the governance of AI.

So that's the idea of the Secretary General, so that states could be more informed at the Summit of the Future with these expert recommendations. But there are many other bodies also working on this, and there's going to be a consultation phase throughout this year.

So it's not just this body. The body is just a piece of the whole puzzle, all in preparation for the Summit of the Future.

Richie Cotton: Okay, so it sounds like it's a pretty large-scale program with people from all over the world, and you're aiming for some sort of broad, high-level governance framework. And before we get into some of the details of this program, maybe we can talk a little bit about the opportunities from AI before we get to risks.

Ian, do you want to talk to me about like, what are some examples of opportunities from AI?

Ian Bremmer: Sure. I'm a political scientist and see AI first and foremost as an opportunity, not a risk. But I want to be clear: the reason we need a global governance framework is because historically, when things are driven, when new technologies are driven by massive market forces, we are very good at privatizing the gains that come from those.

And we frequently socialize the losses. That's even true in a great capitalist country like the United States. We socialize the losses. And that means that you get all sorts of negative externalities that no one wants to pay attention to. We've had 50 years of globalization. It's been unprecedentedly good for humanity as a whole.

We've driven a global middle class. We've taken billions out of poverty. That's a wonderful thing, but there's also been massive negative externalities that no one wants to pay for. In the absence of governance, what happens is the kids pay for them, the poorest people pay for them, the disenfranchised people pay for them, the global South pays for them.

And so that's why the United Nations is so interested in this issue. It's because AI is going to be driving, driving, driving for the coming years. It's going to drive much faster, by the way, than globalization. Because usually, when you have new technologies, you have a whole bunch of powerful sectors that think they're going to be completely disrupted and displaced if this new thing comes out. So, for example, if you want to get away from carbon-based energy and move to post-carbon-based industry, well, that explosion of investment is going to suddenly undermine the power of the coal companies, the oil companies, the gas companies, and everyone that relies on them.

The supply chain, the infrastructure, the workers, you name it, the shareholders. With AI, every company in every sector in the global economy already understands that they can use this tech inside their firms in order to create more efficiency. The governments feel the same way. Reduce waste, improve productivity, better measure things.

That's incredible. And so that means that the speed of adoption of AI, and the money that will be applied to AI, and the entrepreneurship, and the explosion of the companies that will take advantage of this, are like nothing else we've seen, because the internal political opposition is not there. So I'm enormously excited about the upside.

I just also recognize that the negative externalities are going to come at the same time. And everyone's going to be focused on making the money. Everyone's focused on rolling it out. Those people don't need the United Nations to help them. It's the people that are impacted negatively that need the UN, that need global governance. That's why we're here.

Richie Cotton: Absolutely. So that does seem like there are some very well understood problems that you think might occur, and it is going to be the poor people and those who don't have their own representation who are going to suffer. So beyond these sort of known things, do you suspect there could be unknown problems that we can't quantify yet?

Like, what else might be coming beyond these problems that Ian just talked about?

Jimena Viveros: Well, I mean, with that question, obviously the quote that comes to mind is Rumsfeld's, right? I mean, the known unknowns. That is precisely one of the challenges with AI, because we do need to prepare not only for the risks that exist today, but the risks that can develop in the future, especially when you have such a dual-purpose technology that by nature is not confined to one use or domain.

So, in that sense, we just need to be very aware and listen to every informed opinion so that we can be in a better position to allocate the corresponding governance. However, one approach that could be useful for this purpose is using tech-neutral language, for example, trying not to focus the governance too much on the technology per se, so that we can keep it broad enough for future use cases, or even for the development of intended and unintended uses and so on. So obviously that's one of the biggest challenges, the unforeseen. Without overthinking it, or going into these doomsday scenarios, but just being aware that it is in fact very possible.

Richie Cotton: A lot of the AI companies and the talent are concentrated in pretty rich countries, particularly the USA. So what do you think are the consequences of this? Ian, do you want to take this?

Ian Bremmer: The fact that we have a very, very small number of countries and companies, and those are different things and they aren't necessarily well aligned, that are controlling these technologies and are going to be controlling these technologies, means that they are, number one, first and foremost, responding to business forces that are going to impact wealthy people, wealthy markets. And secondly, they're being developed with internal biases, and mostly on data sets that are more aligned with those privileged people and privileged countries. And so there are all sorts of things that need to be addressed in terms of the Sustainable Development Goals, which are the best way that we, collectively, as eight-plus billion people on the planet, have been able to assess how we are doing as a species on this little ball.

Well, if you want to actually have AI affect them in a positive way across the board, you have to ensure that there are mechanisms that will develop artificial intelligence in ways that address people who don't necessarily move the needle from a business model perspective, from a profitability perspective.

You also need to address it in terms of people who won't even have access to AI because they don't have the infrastructure. We can't really talk about artificial intelligence affecting Africa to the degree it needs to if almost half of the African continent doesn't have access to electricity.

Right? So all of these things are really, really important, and they are things that would not be addressed, other than at the margins, with the occasional charity from these corporations that are driving these outcomes for themselves and their shareholders, and have a fiduciary responsibility to do so, by the way; or the United States government, which is thinking first and foremost about its own citizens, as opposed to foreign aid, where a pittance is allocated; or the Chinese government, which takes a more transactional position, devoting those commercial investments to things that are going to be strategically useful for the advancement of the Chinese market.

So again, clearly, these people will be left behind, not because the people driving AI are evil or nefarious, but because their priorities are just not going to take care of the majority of people on the planet.

Richie Cotton: Yeah, it's very difficult for OpenAI to say, okay, let's give it to Africans, if they don't have any electricity to begin with. So I can certainly see how this is gonna leave the poorest people behind. Jimena, do you have any opinion on how we could prevent this sort of developmental divide?

Jimena Viveros: Well, that's the biggest question that we have before us, and it's been mentioned everywhere and noticed even more broadly: the fact that there's no meaningful participation of the Global South at this point. And that's not only a "blame the Global North for excluding the Global South" type of thing.

It is indeed very much about capabilities, like real infrastructural issues, as Ian mentioned. Even in Mexico, at the Supreme Court, we had the opportunity to establish access to electric power as a human right, where the state is under the obligation to guarantee it.

But we lost by one vote, and this was two years ago. So that's where we are. How are we going to talk about even the right to AI, or 5G, or all of these things that are subsequent conversations, when we cannot even guarantee access to electricity and all of the power that compute needs? And it's not only about talent; we need investments, large investments, but also access: access to the conversations, access to participation, and in a meaningful way.

So I think this body that the UN Secretary General created is a great step in this direction, because a lot of effort is really being directed towards bridging this divide. So we're on the right track, but with the speed and the exponential development of this technology, whatever divide is bridged one day can be very easily reverted the next day.

So it's important to keep this inclusion and participation at a steady pace so that there are no backsteps that are even more adverse in the future.

Richie Cotton: Okay, it sounds like it's like Maslow's hierarchy of needs: for humans, you need food and water and shelter to begin with, and then you move on to more advanced things. It sounds like there's a similar situation here with AI, where you need all that electricity, that compute power, and probably some skills before you get to, okay, we're going to do really well with AI.

Okay, so, Ian, I know you talked a lot about the world order and I'm just wondering how AI could change that. Do we think it's gonna have a meaningful impact here?

Ian Bremmer: Absolutely. In fact, it's probably more transformative in terms of how we think about societies and politics on the global stage than any other technology I've ever seen. I think there are a couple of ways to think about that. One is at the individual level. Recently at the World Economic Forum, Sam Altman came out and talked about the fact that with GPT-5, which is going to be released in relatively short order this year, they're going to start training on individual data. In other words, your bot will not be the same bot as my bot. It's actually going to be individualized on the basis of everything that we know and do.

That is a game changer. That means that instead of having an additional, very valuable tool that we all use for productivity, we're going to have something that is a personal helpmate, assistant, friend, and productivity-enhancing device beyond anything else we would engage with. And a lot of people will never turn that off.

You'll become almost a hybrid human being, right? It's almost like an additional arm, but cognitively. Now, there are all sorts of great things and really negative things that can come from that. The thing that's most important is you're creating something that is different from just Homo sapiens.

And there are going to be a lot of people that have that, and there are going to be a lot of people that don't. And I'm deeply concerned that the people that don't are not treated as wholly human, because they won't be able to engage. They'll be cut off from this network. It'll be much worse than just the digital divide.

And that's a deeply, deeply dehumanizing and destabilizing thing for global society. That's the first thing. That's the thing that I'm most personally worried about in terms of AI rolling out, let's say in the next three years, not 10, not 20, not the robots taking over. That's the individual level, the bottom up.

Then there's the top-down level, which is when AI becomes this global game changer that everyone and every company and every government is using. I can't yet tell if it's a centralizing or decentralizing technology: who has the data, how powerful are they, what are they doing with it? And we've seen over the past couple of decades that capitalism, especially comparatively well-regulated capitalism, is the most efficient economic system we have; state capitalism is less so.

Centrally planned economies even less so. Well, it may be that centrally planned organizations, whether they are governments or corporations, that dominate AI and data and measurement and deployment capabilities turn out to be the most efficient 21st-century economic system, and that market systems are not, competitive systems are not, and that would completely upend the way that power is distributed in societies around the world.

That, by the way, is also true for authoritarianism versus democracy. It may be that AI really empowers centralized political systems that can nudge the behavior of their citizens, or of their consumers if it's a corporation, with carrots and sticks, in ways that democracies are very vulnerable to and won't be able to do.

I don't have a strong view on which of these models is going to prove to have the upper hand, or even become inextricably dominant, with AI. But I think that there is an enormous amount to play for on the basis of that answer. And I'm not sure governments will have much ability to do anything about it. In other words, I think the answer is going to be determined more by the technology and the business models than by what governments do about AI going forward.

Richie Cotton: Okay. That's a very interesting thought, that this sort of sci-fi staple of some AI controlling the world could play out in real terms, with authoritarian governments using AI to have more control over how the economy works and over the population.

I wonder if there needs to be some kind of accountability for these tech companies, these AI companies, in order to make sure that sort of thing doesn't happen. Or, since you said governments won't have much control over whether AI benefits one governmental system or another, does it need to be the tech companies that worry about this?

Ian Bremmer: I, of course, want to see that, and I'm interested in Jimena's view as well. My concern is that it's not going to happen fast enough. Governments are better at governing than corporations are, because it is what they are trained for. It's where they devote their resources. They're not all equally capable, and some of them fail mightily.

But nonetheless, corporations, these guys, and they're mostly men, are spending enormous amounts of resources and time just to go as fast as possible in developing and rolling out this tech, because their competitors are breathing down their necks. So I'd much rather have governments driving this, but the tech is moving so much faster than government capabilities.

If we're talking about three to five years out, I suspect that the corporations are going to be more powerful in determining these outcomes than the governments.

Richie Cotton: Okay Jimena, do you have anything to add to that?

Jimena Viveros: I mean, the whole question about accountability is a huge thing. Not only to add on, but first of all, I think if you're talking about only industry leaders, that would be incomplete, because we need to think of the whole life cycle of this technology, right? From development, and everyone that's involved at that stage, up until deployment, up until the authorization of use, and the specific user at the end of the day.

So each one of these individuals does need to be examined in terms of accountability and responsibility. That's why the term "accountability gap" is so prominent in terms of AI. Because what happens, in effect, is an atomization of responsibility: what would traditionally be understood as the legal responsibility for an act and its consequence by one individual is now fragmented.

And that's going to bring us so many difficulties when trying to enforce any type of liability, whether it's administrative or civil or criminal. Especially when it comes to criminal liability, you have to prove the intent, and how are you going to prove the intent of a system, right?

And the whole life cycle of the technology is so dispersed, involving maybe not even one person but entire companies, or different companies working together. Then you go to the person that authorized it, the person that used it, all of these things.

So it's really problematic, and that's why industry regulation is really, really not sufficient for regulation, period, for governance. That's just a fact, because it will not cover all of these different areas, which are fundamental within the rule of law in a law-based society.

And in terms of the international legal order, if you're trying to get more scale, which we should, it's fundamental that we address all of these limitations that we have nowadays. Because the trend is to just try to find analogies within existing laws, existing legislation, regulations that would cover any type of situation, but that's just not sufficient, and it's proving more and more so, especially with everything that has to do with criminal intent and so on, but in many other aspects as well.

So that's why we need to set international governance, kind of like the guidelines, the guardrails, where everything else can be contained. That sets a starting point for all states to implement that domestically in terms of national legislation, to delimit the scope of the way that both industries and users and governments can act within those boundaries, right?

So as long as we can agree on these global boundaries, I think we're going to be in a much better place. Because, as Ian said, governments are way better at governing in most cases than the industry itself. So I'm very, very cautious about that industry regulation, because it sets us up for a very unintended scenario.

Richie Cotton: It seems like some kind of global governance is in order. Why do you think global governance has an advantage over country-based rules? You said you don't think industry-based rules are going to work very well, because it needs to be governments doing the governing. But what makes global guidelines better than individual country-based guidelines?

Jimena Viveros: Because otherwise it's just fragmented, it's dispersed, as we see it already now. First of all, it's only Global North regions and countries that are setting norms, and also, in terms of jurisdiction and applicability, they are limited territorially, whilst the technology is not bound territorially. It's very transboundary. And again, all of this fragmentation of actors can happen across countries, across continents. So it's very inefficient to have one country regulating something. They can very well just move, not even physically, just with a VPN, and that's it. So that's the problem, and why we need this international governance, so that it can permeate everywhere, so that everyone starts from the same boundary rules, the red lines. As long as we have that, all the details can be specified. Obviously, states have their own sovereignty and so on, but we do need to adapt to a global governance regime, because otherwise it's just not going to be effective at all. It's going to be absolutely useless.

Ian Bremmer: Yeah, I'll add to that. Look, I think that there are some areas where global governance is absolutely indispensable. I mean, for example, this technology is going to be massively proliferated in a short period of time. It's not like nuclear weapons, where it's really expensive and really hard for any government actor to get a hold or handle on them.

Never mind non-government actors, which have proven incapable so far. No, with this, first of all, a lot of things are being published open source that are very high tech, and you've got a lot of people who will use these technologies and tweak them in ways that are either tinkering or nefarious. And so there are literally millions of potential actors engaged in behaviors to support malware or develop new viruses or develop new weapons.

I mean, that of course requires global governance to respond to. Otherwise you'll have rogue elements outside the reach of the law. That will be deeply problematic for everyone, and the Americans, the Chinese, the Europeans, everyone has an interest in ensuring there's governance for that.

Also, there are challenges that come from AI, like the disruption of labor forces that we will see all over the world. The IMF just wrote a report that said 40 percent of all jobs will be significantly disrupted by AI, 60 percent in advanced industrial economies. Well, we want to make sure that when that happens, the resources that are deployed to respond to it are deployed efficiently.

Well, the best way to ensure they're efficient is if you have a single global observatory, a single global institution that acts analytically to assess how those jobs in those sectors in those geographies are actually being disrupted, like the United Nations does around climate change, a global challenge.

Now, that doesn't mean that all AI governance needs to be global. For example, the Brussels effect is really important. The EU was the first out of the gate with meaningful AI regulation, the EU AI Act, and it's not going to be implemented until 2026. But just the fact that they're already floating it, given how important the EU is as the world's largest common market, means companies that dominate AI are already planning to orient their AI practices towards it. They don't want to wait until it's too late, until the last minute; they've got to get ready for it. And individual governments that are big, that drive significant AI governance themselves, can have knock-on implications for corporations and other governments around the world.

So just because it isn't global or isn't global when it starts doesn't mean we throw it out.

Richie Cotton: Okay, so if the EU says one thing and then the US has another set of rules, it's gonna be conflicting, it's gonna be harder for corporations to comply. So maybe having that global synchronization is gonna help. It does seem that there are frameworks for doing AI responsibly already.

I'm just wondering what's different about the UN's efforts here?

Jimena Viveros: That it is global, that it is universal, that we have the legitimacy of the United Nations and everything it stands for, the membership of 193 countries, and, again, just the authority of the Secretary General, the moral authority and leadership in this matter. So the fact that he created this body directly to advise him is quite an indication of the level of interest he has in this advancement of global governance, because it is that important, as Ian and I just mentioned.

Ian Bremmer: Again, there are a lot of regulations on climate, too, but it's utterly critical that the world comes together and accepts that we've already put 442 parts per million of carbon in the atmosphere. It's essential that we understand methane, deforestation, plastics in the ocean. It's essential that we all agree that we've already warmed the atmosphere by 1.2 degrees centigrade, and we're heading towards 2 or 2.5. You cannot have global responses to global challenges unless you have global agreements on what the problem is. So the first and foremost thing the United Nations has to do is get people together to understand and agree on the nature of the challenge.

Even if the regulations are very different, understanding the nature of the challenge together allows the deployment of resources to be more efficient. There are never enough resources to take care of the poorest people. You really don't want them wasted, especially when something's moving this fast.

I can't tell you how important it is that you have the United Nations trying to get everyone together to agree on the nature of the challenges.

Richie Cotton: Yeah, it just seems like we're not going to get any kind of solution until everyone agrees on what the problem is.

Ian Bremmer: can.

Richie Cotton: We've established that you need to have a common understanding of what the problem is in order to get to a solution. I'm curious as to how we get to that solution based on the advice from your body.

So what are the principles of AI governance that your advisory body is suggesting?

Ian Bremmer: So actually, there are five guiding principles in our interim report, and of all the bits of our interim report, this is the one I think will be easiest to translate into the final report, because it was wholly uncontroversial among the 39 members of this high-level panel what these principles should be, precisely because the United Nations Charter helps to guide it.

So, five principles. Number one, AI should be governed inclusively, by and for the benefit of all. Number two, AI should be governed in the public interest. Number three, AI governance should be built in step with data governance and the promotion of data commons. Data commons meaning that people will have access, companies will have access, to all sorts of data sets that can help drive the Sustainable Development Goals, help drive progress for humanity. Guiding principle four, AI must be universal, networked, and rooted in adaptive multi-stakeholder collaboration: governments, private sector, NGOs, citizens. And then finally, guiding principle five, AI governance should be anchored in the UN Charter, international human rights law, and other agreed international commitments.

You don't need to improve on something that all of the world's governments have already signed up to since 1945. They don't all adhere to it. They don't all live up to it. But we all understand that those principles mean something.

And we want to keep those values. As we are moving beyond what we define as humanity today, we need to keep the values that we enshrine in humanity applying, fundamentally, to our next phase. There is no higher goal for the UN's effort on AI than that.

Richie Cotton: Absolutely agreed that this is incredibly important. And it does seem like most of those are fairly uncontroversial things, like using AI inclusively and for sustainable development. The one about multi-stakeholder participation, I think, needs a little more explaining. Can you talk me through what that involves?

Ian Bremmer: What that means is that the corporations don't get a veto over outcomes, but there is a recognition that you are not going to have effective AI governance unless you bring the private sector in together with the governments and the NGOs and the policy community. And the reason for that is because when you talk about the digital world, and you talk about AI specifically, the foundational models, the rollout, the nature of your interaction with it as an individual or as a group is being determined almost wholly, with sovereignty, by corporations. So the idea that governments are going to have the expertise, the wherewithal, and the power to do governance by themselves in this environment doesn't hold up, especially given the fact that the technology is moving a lot faster than the governance. Even with the prioritization of this issue in the last year, the technology is moving a lot faster than the governments.

So it has to be multi-stakeholder. It has to be inclusive. That has worked on climate change, though very slowly, not as much as we want, but we are moving the ship in the right direction. That's what we need for AI.

Richie Cotton: Excellent. So we've got some principles. I'm curious as to how they will actually be implemented. So, Jimena, can you talk me through what's the UN's role going to be in terms of making sure that these principles are actually adhered to?

Jimena Viveros: First of all, these principles are guiding principles of this report. And as I said from the beginning, the report is meant to inform states, obviously through the Secretary General of the United Nations, so that they are in a better position to discuss and potentially, hopefully, adopt binding commitments at the Summit of the Future.

That said, it is entirely up to states how, and hopefully whether, they adopt any of these principles. So the UN is at this point just facilitating all of this, as Ian said, convening and summoning all of the member nations of the UN in order to promote this dialogue and put all of the information on the table so that commitments can be made. But at this stage, part of the mandate, as I said, is also coming up with the functions and the structure of an international organization that could in turn implement whatever agreements are reached.

However, there needs to be an agreement to form this organization first. So it all comes down to what states can agree on. I think this is a great moment. We're really building momentum in terms of creating consensus for these agreements, because of not only the inclusive participation in this body but all the other actors involved.

But I think just the mere realization of where we are at present, realistically, in terms of this technology, means it cannot be postponed any longer. I think most countries realize that and would rather be part of the conversation and of the agreements than not. So, in an ideal world, states can come up with binding commitments, because it's very important that these agreements are binding.

And then the UN can also have this regime of implementation and monitoring, verification that all of these agreements are actually being respected, and potentially even some enforcement mechanisms through cooperation, through different methods that could be available. But all of this needs to be decided by states.

So at this point, we can only speculate what the end of all of this will be, but hopefully it's as solid and robust as possible so that it doesn't end up being a dead letter, because that would be very tragic.

Richie Cotton: All right. So, if I've understood this correctly, the UN comes up with the principles and guidelines, and then politicians from different countries meet, they agree to binding commitments, and then each country gets to decide on their own implementation, whether that's laws or some other kind of incentives or enforcement structure for their own country.

Is that sort of about right?

Jimena Viveros: Hopefully with the UN, or this body that would be created, keeping everything standardized, so that everything is done at the same time, because obviously if you have some countries implementing at a different speed than others, that's gonna affect the overall process.

So it's very important to come up with an overall strategy so that everyone can start together, so that what happened with, for example, climate change, as Ian said, doesn't happen here in many respects. We all need to start from the same point so that we can fulfill whatever agreements are made at the same pace, globally, in a standardized and universal way.

So that's where the UN would come in handy, with a mandate that's specific enough and broad enough to verify that this is happening and to come up with specific measures to try to boost those that cannot keep a similar pace at the outset, and even to have some other measures in case of...

Ian Bremmer: And of course, there is going to be architecture, global architecture, created on the back of this final report, and we've already made some preliminary recommendations on what directions this architecture might be created in. I mean, think about it: the UN already, in a sense, works with a lot of global architecture that it had a part in founding, whether it's the IAEA in terms of energy inspections and development, or the IPCC for assessing the state of the world in terms of climate change.

I mean, there is going to be the creation of global AI architecture. And the intention is for this panel to make recommendations that, by virtue of how broadly representative the panel is, if we can agree on them as members of the private sector and public sector and NGO community, experts and scientists all over the world, the hope is that that's going to actually drive implementation and buy-in for those institutions to be stood up, respected, participated in, adhered to. The United Nations has no money. The United Nations has no power. But it has unique global legitimacy. And this panel reflects that.

Richie Cotton: It does seem like it's such a broad scope. So I like the idea that if, between you, you can agree on things, then maybe the world can as well. Excellent. All right. So before we wrap up, can you tell me, for people who want to learn more about AI governance, where should they look?

Jimena Viveros: Well, first of all, I think they should look at the interim report that we put out and keep abreast of the communications that will be put out by the office of the UN's tech envoy and all the other agencies in the network that are working on this issue. There's a lot of information out there, but try to participate in the processes that will come out this year and just get involved, because the AI conversation is a very broad one and everyone should participate.

Ian Bremmer: And the link for people to check that out directly is www.un.org/en/ai-advisory-body. That has all the work that we've been doing and all of the supplementary materials, so they can follow along.

Richie Cotton: Wonderful. So, yeah, I hope the audience just look into this more because it's a very interesting topic just to learn about. It's going to affect the whole world. All right. Brilliant. Thank you, Ian. Thank you, Jimena. It's been great having you on the show.
