Kaggle is a website (and a company) dedicated to data scientists, where data science projects are published to exchange experience. It can become addictive if you are in the field, all the more so because of the competitions it hosts, challenging those passionate about leveraging data to find new, more effective ways to extract meaning from it. As one can imagine, plenty of artificial intelligence is being used, and actually participating in these challenges is a way to train a smart application to become even smarter.
Recently, one of these competitions saw the participation of three Google data (and artificial intelligence) experts, who used it to test their AutoML program. As the name suggests, the program has been designed to autonomously create Machine Learning programs. Notice that Machine Learning is about getting smarter (learning) as you gain experience, i.e. as you get access to more and more data. This, however, is not how AutoML gets smarter: starting from a (big enough) data set, it works to create new programs that can be smarter at analysing those data. Basically, it is an artificial intelligence that creates more advanced artificial intelligence.
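To make the idea concrete, here is a deliberately minimal sketch of that "programs creating programs" loop, not Google's actual AutoML: candidate models (here, trivial one-feature threshold rules) are generated at random, scored on training data, and only the best survivor is kept, then judged on held-out data. All names and the synthetic data set are illustrative assumptions.

```python
import random

random.seed(0)

# Synthetic data set: two features, label 1 when their sum exceeds 1.0.
points = [(random.random(), random.random()) for _ in range(300)]
labeled = [(x, 1 if x[0] + x[1] > 1.0 else 0) for x in points]
train, valid = labeled[:200], labeled[200:]

def make_rule(feature, threshold):
    """A candidate 'program': predict 1 when one feature exceeds a threshold."""
    return lambda x: 1 if x[feature] > threshold else 0

def accuracy(rule, dataset):
    """Fraction of examples the candidate program classifies correctly."""
    return sum(rule(x) == y for x, y in dataset) / len(dataset)

# Search loop: spawn random candidate programs and keep the one that
# scores best on the training data -- the machine, not a human,
# decides which analysis program survives.
best_rule, best_score = make_rule(0, 0.5), 0.0
for _ in range(500):
    candidate = make_rule(random.randrange(2), random.random())
    score = accuracy(candidate, train)
    if score > best_score:
        best_rule, best_score = candidate, score

print(f"train accuracy: {best_score:.2f}")
print(f"validation accuracy: {accuracy(best_rule, valid):.2f}")
```

A real AutoML system searches a far richer space (network architectures, hyperparameters) and mutates promising candidates rather than sampling blindly, but the principle is the same: the search over analysis programs is itself automated.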
The goal of these Google researchers was to prove that it is possible to create an artificial intelligence program that could replace human experts in data analysis. A human expert does not just analyse data; she looks for new, more effective ways to analyse them. In a way, she is creative in her approach to the problem. The Kaggle competitions demand exactly that: the data available are the same for every participant, and the challenge is to find the most effective way to analyse them.
This is what was foreseen in the second White Paper on Symbiotic Autonomous Systems: the birth of an autonomous AI that becomes better and better by spawning new programs, each representing a mutation of its "parent", a mutation delivering more effective results.
As the White Paper points out, this creates issues because of the loss of control over the result. It is already happening in systems like AlphaGo, which showed behaviour that was not created by its programmers and that actually surprised them. The decisions it took proved effective (AlphaGo won), but those who programmed it were not able to work out why it took certain decisions (watch the clip).