On September 22, 2023, the United Kingdom’s Deputy Prime Minister, Oliver Dowden, addressed the General Debate of the United Nations General Assembly in New York City. Here are the key points he touched on during his speech:
- The United Kingdom has been providing humanitarian assistance to Morocco and Libya, which were struck by a devastating earthquake and catastrophic floods, respectively.
- The “issues of the moment” are climate change, getting the Sustainable Development Goals back on track after Covid-19, migration and human trafficking, and the invasion of Ukraine.
- The consequences of Russia’s “brutal invasion” of Ukraine are felt all over the world, particularly in developing countries hit by food shortages. Russia could end the war tomorrow, but until that happens “the United Kingdom will stand alongside Ukraine” for as long as necessary.
- Another major challenge is artificial intelligence, because it will change all aspects of life. AI could be a force for the common good, advancing science, democratizing technology, and making it possible to solve the most pressing problems of our time.
- Yet AI also poses dangers, such as facilitating hacking and the manipulation of voters with deep-fakes and bots, as well as the potential of “losing control of the machines themselves”.
- Accordingly, the UK is taking part in initiatives such as the AI Safety Summit, the Hiroshima G7 process, and the Global Partnership on AI. All governments should work together to regulate this new technology, because its risks and opportunities are still mostly unknown. And because AI technology is evolving fast, governments should meet to discuss it regularly.
- The United Kingdom is “uniquely placed” to seize AI opportunities, thanks to its “frontier technology companies” and “world-leading universities”.
- The UK government also has a Frontier AI Taskforce, made up of experts who adversarially probe AI models (red-teaming) so as to make them safer. It is the government’s intention that this body become an international one.
- Many “world-beating technologies” have been developed in nations where freedom of expression is respected, and a “culture of rules and transparency” is essential to making AI both innovative and safe.
Analysis of the Speech
Oliver Dowden, the UK’s Deputy Prime Minister, represented Prime Minister Rishi Sunak at the General Assembly. As is customary, he began his speech with very recent events, namely the earthquake in Morocco and the floods in Libya. He then briefly listed what he sees as the “issues of the moment”. Climate change, post-Covid development and the invasion of Ukraine are probably on everyone’s list of international priorities. Yet the mention of migration and human trafficking may seem a bit odd at first: after all, isn’t Great Britain an island?
The reason is that, since 2022, a record and still-rising number of irregular migrants have been trying to cross the English Channel. These are people fleeing conflicts such as the one in Ukraine, moving from one European country to the next in search of a better life. In some cases, these migrants pay hefty sums of money to human traffickers to travel to England by boat.
The situation in Ukraine also got attention from Oliver Dowden, who strongly condemned Russia’s “brutal invasion” as the “most heinous assault imaginable on everything” the United Nations stands for. Most importantly, he promised that the UK will continue to bankroll Ukrainian defense efforts for as long as necessary.
Dowden’s speech, for the most part, focused on the question of artificial intelligence: its opportunities, its dangers, and its proper regulation. The Deputy Prime Minister said virtually nothing that people do not already know, such as the dual nature of AI, good or evil depending on the circumstances. He seemed keen on promoting the UK as a major player in AI, glossing over the fact that most advances in the field are made in the United States. He also seemed intent on promoting the AI Safety Summit, which will take place on UK soil in November 2023, perhaps as a way to rally people to attend it.
At the end of the speech, Dowden underscored the relevance of a “culture of rules and transparency” to make AI both innovative and safe. This might have been meant as a reference to China, an AI superpower that certainly lacks such a culture. In any case, it was a weak argument, because transparent AI models can be more easily weaponized by those with ill intentions.
Unfortunately, the UK’s speech dwelt so much on AI, while saying so little of substance about it, that it came across as superficial. Although the relevance of AI in recent times cannot be overstated, it was a surprise to see a world power dedicate the vast majority of a speech to it. In a world fraught with challenges, one would hope that the UK would share its views on other matters, too.
Full Text of the Speech
Mr President,
As we meet here this evening millions of people in Morocco and Libya continue to struggle with the aftermath of a devastating earthquake and catastrophic flood.
Let me extend the sympathy of the British people to all those who have lost loved ones.
Our search and rescue teams have been deployed in Morocco and we have increased our humanitarian support for Libya.
We will continue our support — alongside many other nations represented here in the weeks and months to come.
This week, nations have gathered here to recommit to addressing the biggest challenges we face.
Climate change, with catastrophic weather events telling us to act, now.
The Sustainable Development Goals… and how to get them back on track after Covid.
Migration, with millions crossing borders and dangerous seas, at the mercy of human traffickers.
And Russia’s brutal invasion of Ukraine… an attack on a sovereign member of the United Nations by a Permanent Member of its Security Council.
The most heinous assault imaginable on everything this organisation stands for, and was founded to prevent.
With consequences felt not just by the brave people of Ukraine, but by millions more across the globe.
Those hit by food shortages — particularly in developing countries — are Putin’s victims too.
Russia could end this war tomorrow. Putin could end this war tomorrow. That is what the world demands.
But until that happens, the United Kingdom will stand alongside Ukraine.
Whatever it takes.
For weeks, for months — if necessary, for years.
Because if these United Nations — in which the United Kingdom believes, and helped to found — are to count for anything, it is surely for the cardinal principle that aggression cannot, and must not pay.
These are the issues of the moment.
But I want to focus on another challenge.
A challenge that is already with us today, and which is changing — right now — all of our tomorrows.
It is going to change everything we do – education, business, healthcare, defence — the way we live.
And it is going to change government – and relations between nations – fundamentally.
It is going to change this United Nations, fundamentally.
Artificial Intelligence – the biggest transformation the world has known.
Our task as governments is to understand it, grasp it, and seek to govern it.
And we must do so at speed.
Think how much has changed in a few short months.
And then think how different this world will look in five years or ten years’ time.
We are fast becoming familiar with the AI of today, but we need to prepare for the AI of tomorrow.
At this frontier, we need to accept that we simply do not know the bounds of possibilities.
We are as Edison before the light came on, or as Tim Berners-Lee before the first email was sent.
They could not — surely — have respectively envisaged the illumination of the New York skyline at night, or the wonders of the modern internet.
But they suspected the transformative power of their inventions.
Frontier AI, with the capacity to process the entirety of human knowledge in seconds, has the potential not just to transform our lives, but to reimagine our understanding of science.
If — like me — you believe that humans are on the path to decoding the mysteries of the smallest particles, or the farthest reaches of our universe, if you think that the Millennium Prize Problems are ultimately solvable, or that we will eventually fully understand viruses, then you will surely agree that, by adding to the sum total of our intelligence at potentially dizzying scales, Frontier AI will unlock at least some of those answers on an expedited timetable, in our lifetimes.
Because in AI time, years are days, even hours.
The “frontier” is not as far as we might assume.
That brings with it great opportunities.
The AI models being developed today could deliver the energy efficiency needed to beat climate change, stimulate the crop yields required to feed the world, detect signs of chronic diseases or pandemics, better manage supply chains so everyone has access to the materials and goods they need, and enhance productivity in both business and governments.
In fact, every single challenge discussed at this year’s General Assembly – and more – could be improved or even solved by AI.
Perhaps the most exciting thing is that AI can be a democratising tool, open to everyone.
Just as we have seen digital adoption sweep across the developing world, AI has the potential to empower millions of people in every part of our planet, giving everyone, wherever they are, the ability to be part of this revolution.
AI can and should be a tool for all.
Yet any technology that can be used by all can also be used for ill.
We have already seen the dangers AI can pose: teens hacking individuals’ bank details; terrorists targeting government systems; cyber criminals duping voters with deep-fakes and bots; even states suppressing their peoples.
But our focus on the risks has to include the potential of agentic frontier AI, which at once surpasses our collective intelligence, and defies our understanding.
Indeed, many argue that this technology is like no other, in the sense that its creators themselves don’t even know how it works.
They can’t explain why it does what it does, they cannot predict what it will — or will not — do.
The principal risks of frontier AI will therefore come from misuse, misadventure, or misalignment with human objectives.
Our efforts need to preempt all of these possibilities — and to come together to agree a shared understanding of those risks.
This is what the AI Safety Summit that the United Kingdom is hosting in November will seek to achieve.
Despite the entreaties we saw from some experts earlier in the year, I do not believe we can hold back the tide.
There is no future in which this technology does not develop at an extraordinary pace.
And although I applaud leading companies’ efforts to put safety at the heart of their development, and for their voluntary commitments that provide guardrails against unsafe deployment, the starting gun has been fired on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible.
Indeed, the stated aim of these companies is to build superintelligence.
AI that strives to surpass human intelligence in every possible way.
Some of the people working on this think it is just a few years away.
The question for governments is how we respond to that.
The speed and scale demands leaders are clear-eyed about the implications and potential.
We cannot afford to become trapped in debates about whether AI is a tool for good or a tool for ill; it will be a tool for both.
We must prepare for both and insure against the latter.
The international community must devote its response equally to the opportunities and the risks — and do so with both vigour and enthusiasm.
In the past, leaders have responded to scientific and technological developments with retrospective regulation.
But in this instance the necessary guardrails, regulation and governance must be developed in a parallel process with the technological progress.
Yet, at the moment, global regulation is falling behind current advances.
Lawmakers must draw in everyone — developers, experts, academics — to understand in advance the sort of opportunities and risks that might be presented.
We must be frontier governments alongside the frontier innovators.
The United Kingdom is determined to be in the vanguard, working with like-minded allies in the United Nations and through the Hiroshima G7 process, the Global Partnership on AI, and the OECD.
Ours is a country which is uniquely placed.
We have the frontier technology companies.
We have world-leading universities.
And we have some of the highest investment in generative AI.
And, of course, we have the heritage of the Industrial Revolution and the computing revolution.
This hinterland gives us the grounding to make AI a success, and make it safe.
They are two sides of the same coin, and our Prime Minister has put AI safety at the forefront of his ambitions.
We recognise that while, of course, every nation will want to protect its own interests and strategic advantage, the most important actions we will take will be international.
In fact, because tech companies and non-state actors often have country-sized influence and prominence in AI, this challenge requires a new form of multilateralism.
Because it is only by working together that we will make AI safe for everyone.
Our first ever AI Safety Summit in November will kick-start this process with a focus on frontier technology.
In particular, we want to look at the most serious possible risks such as the potential to undermine biosecurity, or increase the ability of people to carry out cyber attacks, as well as the danger of losing control of the machines themselves.
For those that would say that these warnings are sensationalist, or belong in the realm of science-fiction, I simply point to the words of hundreds of AI developers, experts and academics, who have said — and I quote:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
I do not stand here claiming to be an expert on AI, but I do believe that policy-makers and Governments ignore this expert consensus at the peril of all of our citizens.
Our Summit will aim to reach a common understanding of these most extreme risks, and how the world should confront them. And at the same time, focus on how safe AI can be used for public good.
The speed of this progress demands this is not a one-off, or even an annual gathering.
New breakthroughs are happening daily, and we need to convene more regularly.
Moreover, it is essential that we bring governments together with the best academics and researchers to be able to evaluate the technologies.
Tech companies must not mark their own homework, just as governments and citizens must have confidence that risks are properly mitigated.
Indeed, a large part of this work should be about ensuring faith in the system, and it is only nation states that can provide reassurance that the most significant national security concerns have been allayed.
That is why I am so proud that the United Kingdom’s world-leading Frontier AI Taskforce has brought together pioneering experts like Yoshua Bengio and Paul Christiano, with the head of GCHQ and our National Security Advisers.
It is the first body of its kind in the world that is developing the capacity to conduct the safe external red-teaming that will be critical to building confidence in frontier models.
And our ambition is for the Taskforce to evolve to become a permanent institutional structure, with an international offer.
Building this capacity in liberal, democratic countries is important.
Many world-beating technologies were developed in nations where expression flows openly and ideas are exchanged freely.
A culture of rules and transparency is essential to creativity and innovation, and it is just as essential to making AI safe.
So that, ladies and gentlemen, is the task that confronts us.
It is — in its speed, and its scale, and its potential — unlike anything we — or our predecessors — have known before.
Exciting.
Daunting.
Inexorable.
So now we must work – alongside its pioneers – to understand it, to govern it, to harness its potential, and to contain its risks.
We will have to be pioneers too.
We may not know where the risks lie, how we might contain them, or even the fora in which we must determine them.
What we do know, however, is that the most powerful action will come when nations work together.
The AI revolution will be a bracing test for the multilateral system, to show that it can work together on a question that will help to define the fate of humanity.
Our future — humanity’s future — our entire planet’s future, depends on our ability to do so.
That is our challenge, and this is our opportunity.
To be – truly – the United Nations.