“I don’t know what necessarily the fair rules are, but you’ve got to start with insight before you do oversight,” ~ Elon Musk
At Britain’s AI safety summit, 30 countries signed the ‘Bletchley Declaration’, which recognises the enormous global opportunities generated by AI as well as the privacy, security, and accountability threats it poses. The summit aimed to discuss the challenges and potential risks associated with AI.
On Wednesday, China, the United States, and the European Union agreed to collaborate with the other attending countries to collectively mitigate the risks posed by artificial intelligence.
Governments around the globe have raced to develop safeguards against the potential threats of artificial intelligence. AI is advancing at a rapid pace: the future we imagined is here, and with it the possibility of AI outsmarting humans and wreaking havoc. Even at this early stage, it could be used for serious crimes. Prominent figures in AI have warned about these threats, which set this race in motion.
The declaration’s two primary long-term agendas are identifying AI safety risks and building corresponding risk-based policies across countries. This includes developing an evidence-based understanding of the associated risks and collaborating on different approaches, which may vary according to national circumstances and applicable legal frameworks.
Wu Zhaohui, China’s Vice Minister of Science and Technology, announced at the summit’s opening that China is open to enhancing international cooperation on AI safety to contribute to a global governance framework. He emphasized that all nations have equal rights to develop and use AI technology.
The urgency gained momentum after OpenAI’s Microsoft-backed ChatGPT was released publicly in November last year. ChatGPT’s human-like interactions have sparked concerns, including from some AI experts, that machines might surpass human intelligence, with unforeseeable and potentially limitless consequences.
The European Union’s approach to AI regulation has concentrated on data privacy and surveillance and their effects on human rights. In contrast, the British summit is examining the existential threats posed by advanced, versatile AI systems known as “frontier AI.”
Mustafa Suleyman, co-founder of Google’s DeepMind, believes that current AI models do not pose “significant catastrophic harms,” but advocates proactive measures as increasingly powerful models are developed.