AI Technologies vs. Human Rights
Prevent AI technologies from being used to assist authoritarian governments and their human rights abuse
This campaign calls upon democracies to denounce, sanction, and prevent AI technologies from being exploited by authoritarian governments to advance their human rights violations at home and abroad. Such technologies include facial recognition and surveillance systems, in both their hardware and software capabilities.
Democracies should seek to promote global standards for these technologies that are grounded in human dignity and universal values, particularly for those prone to misuse by oppressive governments, including, but not limited to, China.
"China is the leading user of technology as a means of oppression," says economic analyst Christopher Balding. About 200 million surveillance cameras are deployed around the country, and nearly every one of its 1.4 billion citizens is in China's facial recognition database. That does not include Xinjiang, where over 1 million Uighur Muslims have been detained in re-education camps designed to rid the region of its Uighur identity and culture for generations to come. There, technologies like facial recognition and invasive personal data collection are used at the very least as an instrument of fear, if not to precisely track the Uighur population at all times.
Such technologies are not confined to major economic powers; they have become increasingly attractive wholesale items and are being exported from China to countries like Venezuela, Kyrgyzstan, the Philippines, and Zimbabwe, for the same oppressive ends.
This campaign addresses the topic in three ways:
- Democratic governments and tech communities have a chance to shape the design of these sensitive technologies, and to promote global standards, in ways that can isolate opaque authoritarian practices.
- Increase targeted sanctions against individuals, companies, and government-sponsored entities complicit in such abuses.
- In AI education and research, universities in democracies should adopt a greater, more serious moral discourse in their curricula, and set codes of ethics for dealing with entities (explicitly or implicitly) sponsored by oppressive states.
mcw
As for ethical guidelines, I fully agree, but I see a similar problem as in internet governance: which authority should have the right to supervise and channel this deliberation process, and, even trickier, how should such standards finally be decided? I feel that dealing with both questions is not like lawmaking but needs to be handled on a meta level, i.e. like setting the framework for lawmaking. So maybe looking into comparable processes in history may help.
As for shaping the AI design process, I think there will be similar issues. The main challenge, it seems to me, is the high complexity of AI development, as it touches on questions of consciousness. So, in my opinion, there should be no limitation on the development process itself, but rather governance when it comes to applying the results.