Business Review Interviews Max Borders.
This interview was originally published in Business Review in advance of their MindChain event.
Business Review talked to Max about some of the issues he frequently writes and speaks about, such as technology, politics, and the challenges of the future.
Tell us a bit about your manifesto, The Social Singularity, and the concept of collective intelligence — how do you see the future of humanity?
In The Social Singularity I make an important distinction between artificial intelligence (which gets all the headlines) and collective intelligence (which gets hardly any). Everyone is so worried about how the robots will take our jobs that they're not thinking about how humans are creating the means to collaborate at scale, and to add exponentially to those collaborative efforts. Distributed ledgers are just the beginning of how we can improve our collective intelligence (CI).
What roles are blockchain, AI or other emerging technologies going to play in human progress?
Blockchain is still clunky and inaccessible to the masses, but the rudiments and protocols of massive social complexity are currently being written into code. Pay no attention to the crypto markets in this regard. UX will improve. And the layers of adjacent possibility are going to form. As such, not only will humanity find these tools more useful, but the tools will start to shape our behaviors in profound ways. One of the mantras of the book is "We shape our tools (and rules) and then our tools (and rules) shape us." The ability to program incentives at scale should not be underestimated.
Now, of course, artificial intelligence is enormously useful. But the use cases for AI make the use cases for CI much starker. As we move forward, we’ll find these two somewhat discrete domains less discrete. More and more AI and CI will start to weave together until we see something closer to these phenomena merging. Humans will become something akin to sci-fi cyborgs, plugged into a network noosphere. That might sound strange. But it is a less frightening, less dystopian future than one in which AI wakes up, takes over, and doesn’t need us at all.
Do you think the fears that the human workforce will be massively displaced by robots in the future are legitimate? If so, what can we do to prevent that from happening on a large scale, and what should be done for those who lose their jobs to automation?
These are legitimate concerns, yes, but probably overblown. Those concerned tend to hold human capabilities and human collaboration technologies static, all while applying the logic of Moore's Law to machines. We should be applying Moore's Law improvements to everything innovation touches (not to mention Metcalfe's Law and Reed's Law).[…]
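To make the comparison concrete, here is a minimal sketch of the three growth laws mentioned above, using their standard textbook formulations (these specific formulas are my assumptions, not part of the interview): Moore's Law as a doubling of capability roughly every two years, Metcalfe's Law as network value proportional to the number of pairwise connections, and Reed's Law as value proportional to the number of possible subgroups.

```python
def moores_law(capability_now: float, years: float) -> float:
    """Moore's Law: capability doubles roughly every two years."""
    return capability_now * 2 ** (years / 2)

def metcalfes_law(n: int) -> int:
    """Metcalfe's Law: network value ~ number of pairwise links, n(n-1)/2."""
    return n * (n - 1) // 2

def reeds_law(n: int) -> int:
    """Reed's Law: value ~ number of nontrivial subgroups, 2^n - n - 1."""
    return 2 ** n - n - 1

# Subgroup value (Reed) quickly dwarfs pairwise value (Metcalfe):
for n in (10, 20, 30):
    print(n, metcalfes_law(n), reeds_law(n))
```

The point of the comparison: if machine capability compounds exponentially but human networks gain value combinatorially as they grow, then projections that hold human collaboration fixed understate the human side of the ledger.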