Interview with Dr Guangyu Qiao-Franco – Methods and Modes of Governance from AI to ASEAN  

By Deiniol Brown

Dr Guangyu Qiao-Franco is an Assistant Professor of International Relations at Radboud University in the Netherlands. She has written extensively on AI and cyberspace governance and security, having been a researcher for the AutoNorms project at the University of Southern Denmark's Center for War Studies. Dr Qiao-Franco is also an expert on ASEAN (the Association of Southeast Asian Nations) and on diplomacy and international cooperation in Southeast Asia, having published the book 'UN-ASEAN Coordination: Policy Transfer and Regional Cooperation Against Human Trafficking in Southeast Asia' in 2023. In this interview, we discussed Dr Qiao-Franco's insights on issues of global governance, beginning with ASEAN and its role in regional and international political dynamics, before turning to AI and cyberspace governance in the context of rapidly shifting technologies and government approaches in this space.

You have published extensively on the regional framework of ASEAN. There has been a lot of talk about hedging as a practice by Southeast Asian states. Does ASEAN itself as an organisation conduct hedging?

I actually explore hedging and balancing in an upcoming publication, which discusses the 'ASEAN way' of institution building. ASEAN itself definitely utilises balancing and hedging, as well as bandwagoning, to manage competing powers in the region whilst still encouraging them to engage. ASEAN countries are collectively practising hedging as a bloc, with Indonesia, Singapore and Malaysia leading the process.

One of your articles discusses the potential future for ASEAN centrality – the practice of ASEAN holding political power in Southeast Asia. What would be the implications of the fall of ASEAN centrality? 

I recently visited Singapore and discussed this issue with diplomats there, many of whom are losing confidence in upholding ASEAN centrality in their regional architecture. Most major powers pay lip service to ASEAN centrality but are building regional organisations that marginalise ASEAN. The deciding factor for ASEAN maintaining its centrality will be how proactive it can be in defusing political tensions and acting as a mediator. States are currently open to maintaining ASEAN centrality, but this may change if ASEAN fails to stay relevant. There is currently no alternative to ASEAN that is sufficiently inclusive; the Quad, for example, or the Belt and Road Initiative, are limited in their inclusivity. However, the US and China will inevitably continue to invest more in their initiatives and exacerbate competition in Southeast Asia. ASEAN currently has more scope to practise peaceful conflict resolution, and it needs to capitalise on its position.

Do you envisage ASEAN taking a greater role in global norm governance in the future? And do you think the ASEAN model of regional organisation is exportable to other regions, or is it only achievable in Southeast Asia?

In my research I generally look at one-way policy diffusion from the UN to ASEAN, but we are definitely now seeing more policy diffusion from ASEAN outwards. For example, Indonesian President Joko Widodo ('Jokowi') visited the EU in 2022 and gave a speech to EU leaders on the ASEAN way, in the context of the EU's turmoil following Brexit. In his speech he discussed maintaining solidarity rather than diversity in ASEAN, promoting dialogue and flexibility, and building a non-confrontational atmosphere with limited institutionalism in regional organisations. In this example we saw norm and policy diffusion from ASEAN to the EU, but this has happened in many cases and regions. The ASEAN-style cooperation model is not unique to the region and could definitely be applied to other regions where there are tensions and ongoing diplomatic issues between states.

Now turning to your more recent work on AI governance, what are the implications of the divide in approaches to AI regulation for the proliferation of weaponised AI? 

There are definitely identifiable divisions emerging between states on the weaponisation of AI and the role of human control within it. Ideas differ particularly between the global North and global South. Both agree on keeping AI under control, but disagree on how it should be developed in relation to warfare. In the global North, states generally feel that humans shouldn't have to 'push the button' on lethal force if removing them reduces the risk to human life and the technology can be kept in check. In the global South there is a movement insisting that AI should never make the decision over life and death, and that only humans should make that decision. Proponents argue that delegating it to AI would violate the international law of warfare, as AI is incapable of judging proportionate force.

These issues have been debated under the UN Convention on Certain Conventional Weapons (CCW), and the most recent UN General Assembly voted to develop a legally binding instrument to regulate AI development. However, states in the global North argue that current international law is sufficient for governing AI. It is unlikely that an agreement on further regulation will be reached, because the politicisation of this issue has spilled into wider political discourse. For example, the US decision to put controls on exports of semiconductors is related to military security to an extent, because of the importance of semiconductors for the development of AI in a defence capacity.

Will this lead to an AI arms race in the coming years? 

I’m not sure if we’ll be able to call it an arms race, but states will definitely try to push the development of capabilities to maintain an edge in the area of AI for defence. I have personally been involved in some of the Track II dialogues between the US and China regarding this issue, and the worst scenario would be nuclear weapons being combined with AI capabilities. The US and China are trying to negotiate with each other and build confidence on the responsible use of AI, and the US is pushing for an agreement that neither side will use AI in the control or operation of nuclear weapons.

What will the impact of the election of Donald Trump be on AI governance? Will it become more polarised?

Well, to start with, I think the current lack of AI governance has itself intensified the rise of populism and echo chambers online. In China, there is a 'communities of practice' approach to the governance of AI, led by the private sector and backed by the state, which is the norm in many countries. However, there are technical difficulties and ethical concerns with this approach. There are principles and rules in AI governance that are shared across countries, and there is significant scope for countries to set up governance through a multi-stakeholder approach.

I recently wrote a paper for the Dutch Ministry of Foreign Affairs which argued for encouraging dialogue and cooperation with other states, like China, on AI in non-security contexts. However, the Dutch government saw no clear separation between non-security and security AI development, which demonstrates the challenges for global dialogue and cooperation on AI governance. We need to build trust and confidence between states to make them open to cooperation and negotiation.

In the case of the US, we can already see the polarisation of cyberspace. The influx of Americans from TikTok to the Chinese app Xiaohongshu, or 'RedNote', is one example, as cyberspace becomes more restrictive. Western countries have, like China, begun to censor in the name of national security. In the UK I recently went to a conference on 'Digital Sovereignty', and asked what the difference was between digital sovereignty and cyber sovereignty. The answer was that the UK's digital sovereignty is different because it protects individual freedoms and rights; however, I see much the same in China's cyberspace in many respects, as China has many laws protecting personal information and data security, just like the GDPR in the UK. The main difference is the ability to criticise the government and party publicly. Therefore, I think we are seeing the development of a new period in cyberspace and AI governance which is much more conscious of security issues and orientated towards national interests.
