Amazon Echo and Google Home have paved the way for interacting with home appliances (at least a few of them) using natural language. Apple is following with its recently announced HomePod (I found the use of an array of six microphones interesting: it both improves voice detection and builds a virtual representation of the room, so the loudspeaker can adapt its sound to best fit the room's acoustics).
In a way the path started a few years ago (Apple acquired Siri on April 28th, 2010 and included it in iOS), but it is only recently that people have really started to use voice interaction as their preferred way, and it is getting more and more seamless, that is, you do it without actually noticing.
In this area I am a late adopter, basically forced by my kids, who started sending voice messages rather than text ones last year, which over time prompted me to switch.
According to a study at Stanford University, voice input can be three times faster than texting, which is not surprising at all, and can also be better in terms of error rate (2.98% vs 3.68%). The study compares voice and text input for both English and Mandarin Chinese and looks at the uncorrected and corrected error rates. Notice, however, that in the case of text an error is a typing error and normally involves the mistyping of a single letter, whilst in the case of voice input an error involves the whole word, since it is the word that is misunderstood by the voice analyser.
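This character-versus-word distinction is easy to make concrete with edit distance, the standard way error rates are measured in both typing and speech recognition research. The sketch below is purely illustrative (the function names and example sentences are mine, not from the Stanford study): the same edit-distance computation, applied per character for typing and per word for voice, shows why a single slip costs much more in the per-word case.

```python
# Illustrative sketch: one mistyped letter barely moves the character error
# rate, while one misrecognized word dominates the word error rate.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (strings or lists)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,               # deletion
                            curr[j - 1] + 1,           # insertion
                            prev[j - 1] + (r != h)))   # substitution
        prev = curr
    return prev[-1]

def char_error_rate(ref, hyp):
    # Typing errors: compare character by character.
    return edit_distance(ref, hyp) / len(ref)

def word_error_rate(ref, hyp):
    # Recognition errors: compare word by word.
    ref_words, hyp_words = ref.split(), hyp.split()
    return edit_distance(ref_words, hyp_words) / len(ref_words)

reference = "the quick brown fox"
typed     = "the quick brown fpx"    # one mistyped letter
dictated  = "the quick brown socks"  # one misrecognized word

print(char_error_rate(reference, typed))     # 1 wrong character out of 19
print(word_error_rate(reference, dictated))  # 1 wrong word out of 4
```

The asymmetry is visible in the denominators: the mistyped letter is one error among nineteen characters, whilst the misheard word is one error among only four words, which is why voice-input errors feel heavier even when the measured rates are close.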
Voice recognition has made significant progress in these last few years thanks to deep learning technology and much bigger data sets. It is bound to get even better, whilst our ability to type, and mistype, is bound to remain stable. Hence the forecast by the Stanford team that we are likely to see a widespread shift from keyboards to mikes in the next decade. The forecast that keyboards will have disappeared by 2030 is becoming more and more likely.