Alix Rübsaam is a researcher in posthumanism and the philosophy of technology. She is a PhD candidate at the University of Amsterdam, currently based at the Amsterdam School for Cultural Analysis.
Her presentations cover a broad range of topics, including disruption, HR, and Artificial Intelligence, with the emphasis always on the intersection of computers and humans.
Her current research focuses on humans and technology, and specifically on technology's role in complementing human abilities. Current technology can be placed in a long line of tools that extend what people can do. Contemporary technology, however, is far more intrusive and may even take over some of our cognitive functions. If computers can think, what does it still mean to be human? In this context, Alix researches the social and cultural impact of AI and autonomous weapons (drones). If drones are programmed to act autonomously, where do we draw the line between machine and human responsibility?
In the summer of 2015, Alix graduated from the Global Solutions Program of Singularity University, where she studied exponential technologies.
Her most recent publication is a chapter in a book on Augmented Intelligence: The Future of Work and Learning (mid-2017). Work and education are seen as defining human activities. Automating those activities, and the associated loss of jobs, will increase pressure on how we see ourselves and on the role of education and work in our lives. How do we define work and education in the future? How will the definition of human beings, and the boundary between people and their tools, shape the future of our existence? To adapt to ever faster technological developments, it is important to find out how we can adjust our self-image to changing circumstances.
Some presentations Alix has given previously:
Technology is neutral, but doesn’t neutralize
Our technological tools are often described as agents that guarantee objectivity in the data they generate. But the cultural context in which our technologies are developed influences those technologies. The often-repeated phrase "tech is neutral" is true in a narrow sense. But the person inventing a technology, and the programmer who develops it, have a particular cultural background and particular ideas about how the world works. Machine learning algorithms are programmed by someone, and the programs are trained on examples of good and bad outcomes that someone has defined. Technology is always created in a certain cultural context, and it is important to be aware of that when these technologies are used for business and government purposes.
Why artificial intelligence doesn't mean that the end of humankind is near
Several thinkers and innovators (Stephen Hawking, Elon Musk, and Nick Bostrom, among others) have recently predicted that the rise of Artificial Intelligence means the end of humankind is near. They see our brain as a computer: for a long time, the computer has served as a metaphor to explain the way we think. Alix explains why that view is too limited.
Watch the interviews (video clips) in this article.