Machine Learning Technologies
Learning technologies have long focused on human beings and on how to improve human learning. Significant advances have been made in recent decades by leveraging computing and internet power, compounded by the availability of increasingly flexible and ubiquitous devices. This evolution will continue as more understanding of the learning processes in the brain becomes available and as more effective technologies for gathering, communicating, rendering and personalising information become affordable. Research is also exploring the possibility of augmenting the brain's learning capability by intervening in the brain itself, for example through electrical stimulation of the hippocampus or by elevating magnesium levels in the brain. (Research results in 2016 pointed out the fragile nature of memories in our brain and the possibility that electrical stimulation of the hippocampus may actually destroy memories rather than improve memory processes. A lot of caution is needed in this area.)
At the same time, machine learning is progressing rapidly, thanks to greater processing power and storage availability in machines, plus the possibility of leveraging the experience of thousands of machines in the cloud.
Autonomous systems can benefit greatly from embedded learning capabilities, and from learning from one another and as a community. This machine learning tends to merge with human learning, given the overlap of several aspects, although clear differences exist (today learning comes more easily to humans, but the balance is rapidly shifting towards machines).
Learning has, for eons, implied access to something, or somebody, that owns the knowledge and is willing to share it in a way that can be “learned”. One way of sharing, of course, is to write the knowledge down in a book and have others read it. This works for explicit knowledge, and this kind of knowledge can be (easily) passed on to an autonomous system by “uploading” it to its “brain” (extending its database and its programming capabilities).
There is another kind of knowledge, implicit knowledge, like riding a bike, that cannot be coded into a book. You will never learn to ride a bike by reading a book, no matter how precisely it is written or how many times you read it. You have to experience, fail, and learn from failure.
This kind of learning is possible for autonomous systems, which can be programmed to experience and improve (learn). Walking robots can learn to walk better, and to walk on rough terrain, through experience. A Roomba learns about its environment by exploring it as it goes about its vacuum-cleaning chores.
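This learning-by-experience can be sketched with a minimal trial-and-error loop. The example below is an illustrative assumption, not any specific robot's algorithm: an agent picks among a few candidate behaviours (say, gaits for a walking robot), observes only a noisy reward for each attempt, and gradually discovers which behaviour works best, with no rule ever being written down for it.

```python
import random

def learn_by_trial_and_error(true_rewards, episodes=5000, epsilon=0.1, seed=42):
    """Epsilon-greedy trial-and-error: estimate each action's value from experience."""
    rng = random.Random(seed)
    estimates = [0.0] * len(true_rewards)   # learned value of each action
    counts = [0] * len(true_rewards)        # how often each action was tried
    for _ in range(episodes):
        # explore occasionally, otherwise exploit the best estimate so far
        if rng.random() < epsilon:
            action = rng.randrange(len(true_rewards))
        else:
            action = max(range(len(true_rewards)), key=lambda a: estimates[a])
        # noisy reward: the system experiences the outcome, it is not told the rule
        reward = true_rewards[action] + rng.gauss(0, 0.1)
        counts[action] += 1
        estimates[action] += (reward - estimates[action]) / counts[action]  # running mean
    return estimates

# e.g. three candidate gaits with hidden "quality" 0.2, 0.5 and 0.8 (made-up numbers)
values = learn_by_trial_and_error([0.2, 0.5, 0.8])
best = values.index(max(values))
```

After a few thousand attempts the agent's estimates track the hidden qualities and it settles on the best behaviour, which is exactly the failure-driven learning a book cannot transmit.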
There is also a kind of learning that requires the “building” of knowledge: you learn something that, basically, did not exist before you thought about it. Research is one example; finding the proof of a new theorem is another.
It is a time-consuming process, as we know very well. Autonomous systems equipped with deep learning technology are able to explore new paths and create knowledge, and they can do so faster than humans. We have software that can prove theorems that had never been proved before, and software that can play a game (like Go) creating new strategies that it has not “learned” from any book (or by observing any other entity).
An autonomous system's “brain” can learn by “arguing” with itself, as AlphaGo did to get better at Go. It started with the “normal” learning process, looking at what good players do: AlphaGo's neural networks were trained on over 30 million moves actually made by Go players, becoming able to predict with 57% accuracy the move a player would execute. This in itself is an interesting capability for an autonomous system: predicting what may happen next. AlphaGo then played thousands of games against itself, trying new strategies, learning from the outcomes and reinforcing the ones that proved successful, getting smarter and smarter through a process of “deep reinforcement learning”.
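The self-play principle can be illustrated on a game far simpler than Go. The sketch below (my own toy example, not AlphaGo's actual architecture, which uses deep neural networks rather than a lookup table) trains a single tabular Q-learning agent by having it play both sides of a small Nim game: take 1 or 2 stones from a pile, and whoever takes the last stone wins. Every finished game reinforces the moves that led to the win and penalises those that led to the loss.

```python
import random

def self_play_nim(pile=10, episodes=30000, alpha=0.2, epsilon=0.2, seed=0):
    """Self-play on Nim: both 'players' share one Q-table mapping
    (stones_left, move) -> value for the player to move, so every
    game improves the same strategy -- like AlphaGo playing itself."""
    rng = random.Random(seed)
    Q = {}

    def q(s, a):
        return Q.get((s, a), 0.0)

    for _ in range(episodes):
        s = pile
        history = []                       # (state, move) for each ply
        while s > 0:
            moves = [m for m in (1, 2) if m <= s]
            if rng.random() < epsilon:     # explore a new strategy
                a = rng.choice(moves)
            else:                          # exploit the best known move
                a = max(moves, key=lambda m: q(s, m))
            history.append((s, a))
            s -= a
        # the player who made the last move won; propagate +1/-1 backwards,
        # flipping the sign at each ply (the opponent's perspective)
        reward = 1.0
        for (s, a) in reversed(history):
            Q[(s, a)] = q(s, a) + alpha * (reward - q(s, a))
            reward = -reward
    return Q

Q = self_play_nim()
# optimal play from a pile of 10 is to take 1, leaving 9 (a multiple of 3)
best_move = max((1, 2), key=lambda m: Q.get((10, m), 0.0))
```

Nobody tells the agent the “leave a multiple of 3” rule; it emerges from thousands of games against itself, just as AlphaGo's strategies emerged from self-play rather than from any book.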
The possibility for an autonomous system to learn “autonomously” opens up the issue of losing control of the system itself: after a while the system may learn, and therefore act, in ways that have not been “designed”, nor, potentially, even envisaged.
Collective learning, also called “ensemble learning”, will become more and more common. It is already a reality with Tesla cars. The Autopilot system on a Tesla car has been programmed to learn as it gains more and more experience. In addition, since 2016 each Tesla car has reported its “experience” on a daily basis, creating a collective experience that greatly increases the learning speed of each car. The collective experience is processed centrally, and the emerging “lessons” are then distributed to all cars. It is as if each car drove over a million miles every day (the Tesla “fleet” drives over 1.6 million miles every day; clearly several cars drive along the same road, but they drive it at different times, so they acquire different experiences), harvesting a huge amount of experience.
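The report-centrally-then-redistribute loop can be sketched as follows. The model here is an illustrative assumption, not Tesla's actual pipeline: each “car” fits a single road parameter from its own day of noisy observations, a central server averages the fleet's results, and the merged model is pushed back to every car for the next day.

```python
import random

def local_update(weight, observations, lr=0.1):
    """One day of driving: a car nudges its model towards its own experience."""
    for x, y in observations:
        pred = weight * x
        weight -= lr * (pred - y) * x   # gradient step on squared error
    return weight

def fleet_learning(n_cars=20, days=30, true_slope=2.5, seed=1):
    rng = random.Random(seed)
    shared = 0.0                        # centrally distributed model, same in every car
    for _ in range(days):
        daily = []
        for _ in range(n_cars):
            # each car collects its own (different) experience of the same world
            obs = []
            for _ in range(50):
                x = rng.uniform(0.0, 1.0)
                obs.append((x, true_slope * x + rng.gauss(0, 0.05)))
            daily.append(local_update(shared, obs))
        # central processing: merge the lessons and push the result back to all cars
        shared = sum(daily) / n_cars
    return shared

model = fleet_learning()
```

Each car only ever drives its own miles, yet the shared model converges on the hidden parameter (2.5 in this toy) far faster than any single car could, which is the point of pooling the fleet's experience.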
There is a host of technologies being used and experimented with in autonomous systems' learning, and contributing to this area, including:
Hitting the market
- Ensemble learning
- Convolutional networks
- Video image analytics (learning from image analysis)
Peak of expectation
- Deep learning
- Cognitive computing
- Prescriptive analytics
- Augmented data discovery
- Graph analytics
- Predictive analytics
- Data lakes
On the rise
- Human in the loop crowdsourcing
- Artificial general intelligence
- Conversational analytics
- Embedded analytics
- IoT Edge analytics
- Advanced anomaly detection
- Citizen data science
Notice, among the technologies on the rise, “human in the loop crowdsourcing”, which directly connects to the learning of symbiotic autonomous systems.