I was first introduced to this field of study while listening to “The Daily Zeitgeist – Google wants to watch you 3.1.18”. The Centre for the Study of Existential Risk was only mentioned in passing during their conversation about the future. However, its name, and its very premise for existing as a field of study, piqued my interest and led me to explore it further.
So, as any millennial would, I googled their name and came across their website.
I then stumbled across their news section and started reading some of the papers that had come out of the centre.
The first one I read was about the malicious use of AI. Its key points were:
- Policy-makers and technical researchers need to work together now to understand and prepare for the malicious use of AI.
- AI has many positive applications, but is a dual-use technology and AI researchers and engineers should be mindful of and proactive about the potential for its misuse.
- Best practices can and should be learned from disciplines with a longer history of handling dual use risks, such as computer security.
- The range of stakeholders engaging with preventing and mitigating the risks of malicious use of AI should be actively expanded.
In the future, there will be AI capable of manipulating, editing and transforming live footage, twisting it into a lie – for example, making politicians appear to say something completely different from what they actually said. At this rate, there is a high likelihood that little trust or truth will be left, since it will be so easy to manipulate the truth through AI and augmented/virtual reality.
Some people would claim that this is just fear-mongering. However, since the authors are so complimentary about the positive applications of AI and how beneficial it can be to people worldwide, I feel that claim is discredited. This group believes that if we put legislation in place now, we can prevent many of these issues before they arise. This is what should be talked about in the news and what our global society should be focusing on.
What we need to do as a global society is open up a conversation about what we want our future to look like. Do we want to be able to trust our reality or not? What is the price of truth? The moral of the story is that we have to promote a culture of responsibility.
As the report itself puts it: “We need to take ownership of the problems – because the risks are real. There are choices that we need to make now, and our report is a call-to-action for governments, institutions and individuals across the globe.”
On the one hand, I would like to get in contact with these people. On the other hand, the more I read, the more paranoid I become. There is nothing comforting about these studies; they expose how vulnerable our existence is. They expose how reliant I am on a system that is blindly blundering about – a system that is progressing without a clue what the ends to its means are. This makes me nervous, but weirdly excited for the future.