Do you still remember the Waterfall Model?

The Waterfall Model for product development, widely used in the 1980s for software development.

A long, long time ago, back in the 1970s, I worked on the first software standardisation initiatives at the ITU, the International Telecommunication Union, and one of the topics addressed was the specification of software requirements. That gave rise to the ITU standard SDL – Specification and Description Language – and I wrote, along with R. Reed and J.R.W. Smith, a book on it.

The point was that software development was a complex endeavour that needed to be structured according to what was known as the Waterfall Model, of which the requirement/specification phase was the starting point. In practice, the specification was most of the time missing, incomplete, or ambiguous, hence the need for a formal specification language: a way of writing a specification that could be tested before moving into the design and implementation phases.
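To make the idea of a "testable specification" concrete, here is a minimal sketch in Python (not SDL itself, which describes systems as communicating state machines, and not any real protocol): a hypothetical, simplified call-setup behaviour written as an executable finite state machine. All names and transitions are invented for illustration; the point is only that such a specification can be exercised and checked before any design work starts.

```python
# A minimal sketch (not SDL): a specification written as an executable
# finite state machine, so it can be tested before implementation.

# Hypothetical example: a simplified call-setup protocol.
SPEC = {
    ("idle",       "off_hook"): "dialing",
    ("dialing",    "digits"):   "connecting",
    ("connecting", "answer"):   "talking",
    ("connecting", "busy"):     "idle",
    ("talking",    "on_hook"):  "idle",
}

def run(events, state="idle"):
    """Replay a sequence of events against the specification."""
    for event in events:
        key = (state, event)
        if key not in SPEC:
            raise ValueError(f"spec violation: {event!r} in state {state!r}")
        state = SPEC[key]
    return state

# The spec itself can now be tested: legal scenarios must replay cleanly,
# while missing or ambiguous behaviour shows up as a violation.
assert run(["off_hook", "digits", "answer", "on_hook"]) == "idle"
assert run(["off_hook", "digits", "busy"]) == "idle"
```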

The driver for such an effort was the telecommunications world, both telecom operators and manufacturers, which were shifting from electromechanical and semi-electronic systems to modern Stored Program Control switches and equipment. The digital transformation had begun.

The effort required to develop such systems was huge: development took several years, and the products were expected to last for decades. It made sense to dedicate significant time to the specification and to follow a well-structured process.

The Waterfall Model had its day, and by the turn of the century it was fading away, replaced by the idea of a continuous re-development process in which the concept of a “release” was central. As one release was being developed, work was already under way on its improvement in the next one. The idea of continuous improvement won out over the Waterfall Model, driven by the shortening life cycle of software products.

The continuous development model takes the shape of a spiral, where product functionalities are continuously extended and refined.

This approach was further stretched into the acceptance of beta releases, opened up to end users who would be the ones testing them. In some cases the joke was that the idea of the “product” was being replaced by that of the “beta release”, shifting the blame for malfunctions onto the users… We still enjoy this today…

The point I am making is that the evolution from the Waterfall Model to continuous improvement was not a software engineering decision: it was the consequence of growing volumes of users, whose experience and demand created feedback pressure on developers to keep extending functionalities. In turn, this shrank the lifetime of software products, making a sequential approach to development impossible. Notice that this pressure is stronger than ever, resulting in ever shorter software product lifetimes and in continuous releases (and patches to fix bugs) that in a way can be considered “beta”.

Already today, and even more so as we shift towards Industry 4.0, we are seeing that the specification phase, which was the starting point in the Waterfall Model, has implicitly become an integral part of the usage (maintenance) phase. We are turning the model upside down. Modularity in software (mostly hidden from those using it, but an essential, crucial feature for developers) is what makes bottom-up development possible: software engineers reuse existing modules, glueing them together in different ways and adding something extra (which in turn becomes a module usable by others), as in the sketch below.
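A minimal sketch, in Python, of what this glueing looks like in practice. The function names and the example URL are hypothetical; the only real pieces are two existing standard-library modules being reused and combined, with a small addition on top that is itself reusable.

```python
# Bottom-up development in miniature: reuse existing modules, glue them
# together, and let the "something extra" become a new reusable module.

import json            # existing module: serialisation
import urllib.request  # existing module: transport

def fetch_json(url):
    """Glue two existing modules together: fetch a URL and parse it."""
    with urllib.request.urlopen(url) as response:
        return json.load(response)

def fetch_field(url, field):
    """The 'something extra': a small addition on top of the glue,
    itself now a module that others can reuse."""
    return fetch_json(url).get(field)

# Hypothetical usage:
#   fetch_field("https://api.example.com/status", "version")
```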

The Value Chain Dynamics model proposed by Charles Fine to capture the pressure on development generated by the market. Credit: Charles Fine, MIT

This market pressure was captured by Charles Fine in the double helix model of product development he proposed in his studies on Value Chain Dynamics.

Basically, as a product is released to the market, users put pressure on the industry either to improve it (if they keep buying it) or to dismiss it (if market uptake falls short of the revenue required for break-even and margin generation). At the same time (and this is where the double helix kicks in), market uptake stimulates competitors to invest in the development of similar products, leveraging the interest created by the original product to propose slightly refined or better ones, most of the time addressing niches that they hope will grow quickly enough to win the market.

Clearly, this generates a response from the company that went first to market, and it is also the basis for the strategic discussion on what is better: being first to market, taking the risk of rejection, or being a follower, taking the risk of finding the market locked up by those who went first?

This is by no means the end of the story; in the next post I will address what lies ahead in conjunction with the creation of autonomous systems.

About Roberto Saracco

Roberto Saracco fell in love with technology and its implications a long time ago. His background is in math and computer science. Until April 2017 he led the EIT Digital Italian Node, and he then headed the Industrial Doctoral School of EIT Digital up to September 2018. Previously, up to December 2011, he was the Director of the Telecom Italia Future Centre in Venice, looking at the interplay of technology evolution, economics and society. At the turn of the century he led a World Bank-Infodev project to stimulate entrepreneurship in Latin America. He is a senior member of IEEE, where he leads the New Initiative Committee and co-chairs the Digital Reality Initiative. He is a member of the IEEE in 2050 Ad Hoc Committee. He teaches a master's course on Technology Forecasting and Market Impact at the University of Trento. He has published over 100 papers in journals and magazines and 14 books.