Until now, invention has been a solely human endeavour — here’s why this could be about to change.
Created by Stephen Thaler, DABUS is a connectionist artificial intelligence system. It functions as a single system but is composed of two smaller neural networks working in conjunction with one another: the first generates novel ideas, while the second evaluates those ideas against its pre-existing knowledge base.
As such, academics argue, the system is not designed to solve any specific problem, and it demonstrates the qualities of inventorship that we would normally attribute to humans, leading some to laud it as a "Creativity Machine".
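The two-network design described above can be caricatured as a generate-and-evaluate loop. The sketch below is purely illustrative: the concept set, the function names, and the acceptance rule are all invented for this example, and DABUS's real architecture is far richer than anything shown here.

```python
import random

# Illustrative generate-and-evaluate loop, loosely inspired by the
# two-network design described above. Not DABUS's actual implementation.

# Hypothetical "knowledge base" of known concepts.
KNOWLEDGE_BASE = {"cup", "handle", "lid", "wheel", "axle", "cart"}

def generate_idea(concepts):
    """First component: propose a novel combination of known concepts."""
    return frozenset(random.sample(sorted(concepts), 2))

def evaluate_idea(idea, known_ideas):
    """Second component: accept an idea only if it is not already known."""
    return idea not in known_ideas

def invent(n_attempts=100, seed=0):
    """Run the loop: generate candidates, keep only the novel ones."""
    random.seed(seed)
    known_ideas = set()
    accepted = []
    for _ in range(n_attempts):
        idea = generate_idea(KNOWLEDGE_BASE)
        if evaluate_idea(idea, known_ideas):
            known_ideas.add(idea)
            accepted.append(idea)
    return accepted

print(f"distinct 'inventions' found: {len(invent())}")
```

The point of the sketch is the division of labour: one component proposes, the other filters against prior knowledge, and only ideas that survive the filter count as "inventions".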
Although the group behind these patent applications is using them to highlight the growing incompatibility of intellectual property rights and AI inventors, the case also raises questions about the future of human inventors.
The Future of Invention
According to some estimates, we can expect AI to surpass human effectiveness in nearly all tasks within the next 50 years. With this in mind, it's not hard to picture a future where AI inventors are the norm. But what position does this leave us in?
In the short term, human invention is likely to carry on unchanged. While systems such as DABUS currently pose interesting case studies, the effect of their inventions is unlikely to be felt immediately. Moreover, humans are still necessary to create an AI capable of invention.
However, the promise offered by systems like DABUS will likely make this a hotbed of commercial investment. As the creator or owner of an AI, you are credited with its inventions and, by extension, with the patents you apply for.
As with other areas of AI, the potential of this as an income source is huge, and if mastered, would give owners the ability to deliver novel ideas continuously.
This is the capitalist's version of the goose that lays the golden egg, but one that lays on schedule.
Many suggest that the future of AI is one in which it iterates on itself. In reality, this is already happening.
At the end of 2017, Google announced to the world that its AutoML system had successfully created its own "child" AI, and that this "child" outperformed comparable human-designed models.
While that software specialises in image recognition rather than the ability to independently invent new products, the current rate of development makes the latter a tangible long-term possibility.
The possibilities here are vast, and could trigger a huge leap in the effectiveness of everyday technologies and products. However, in a world where all technological developments are governed by AI, it would be more crucial than ever to create a system that monitors the morality of its creations.
Morality vs Technological Improvements
As exciting as the promise offered by these systems may be, it's critical that we use this opportunity to properly evaluate the morality of the inventions we create. What is progress for some will come at a cost to others.
But this isn't solely an issue for AI; human history is testament to our own shortcomings in perspective and judgement. For many, irrespective of the further technological advancements it enabled, the creation of the first nuclear bomb is a prime example of our inability to strike this balance.
The creation of an AI that is capable of original creation is a landmark moment. On the one hand, it opens up a world of opportunities for future technological advancements, such as feeding the developing world and tackling climate change. These advancements, without the power of AI, may not be humanly possible.
However, it also opens up a whole new world of potential issues, none more important than ownership: an AI capable of consistently generating original and valuable intellectual property would inevitably concentrate wealth, power and influence in the hands of a very small proportion of people.
All Rights Reserved for Oliver Morrison