Boost your digitalization: modularization

Jan Bosch
3 min read · May 1, 2022

One of the concepts I think is poorly understood by many is the notion of modularization. Most people consider a more modular system preferable to a less modular one. As a consequence, modularity is viewed as inherently positive and often used as a synonym for “good.”

My concern with this view is that it ignores a few realities of systems engineering. The first is that modularity typically has a downside in terms of reduced efficiency and increased resource usage. In a more integrated system, different parts can access each other more directly and with less overhead. This is why, in traditional systems engineering, modularity is often sacrificed for efficiency so that components with lower performance, and hence lower cost, can be used. So, modularity has a price that many fail to recognize.
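To make that price concrete, here is a toy Python sketch, assuming a hypothetical sensor-reading loop (none of the names or numbers come from the article): the integrated variant reads shared state directly, while the modular variant goes through a message-style interface that copies and serializes the data on every call.

```python
import json
import time

class IntegratedSensor:
    """Tightly integrated: the control loop reads shared state directly."""
    def __init__(self):
        self.temperature = 21.5

class ModularSensor:
    """Modular: state is only reachable through a message-style interface
    that copies and serializes the data on every call."""
    def __init__(self):
        self._state = {"temperature": 21.5}

    def read(self):
        # The JSON round-trip stands in for crossing a component boundary.
        return json.loads(json.dumps(self._state))["temperature"]

def measure(read_fn, n=100_000):
    """Time n repeated reads through the given access path."""
    start = time.perf_counter()
    for _ in range(n):
        read_fn()
    return time.perf_counter() - start

integrated, modular = IntegratedSensor(), ModularSensor()
t_direct = measure(lambda: integrated.temperature)
t_modular = measure(modular.read)
print(f"direct access: {t_direct:.3f}s, via interface: {t_modular:.3f}s")
```

The interface boundary buys replaceability, but it pays for that on every single call, which is exactly the kind of overhead that pushes traditional embedded designs toward tighter integration.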

Second, for any architecture, the dimension along which you break the system into its primary components removes modularity along other dimensions. During development and maintenance, a rule of thumb in software architecture is that one change request should, preferably, lead to a change in only one component. As an example, many know that the primary components of a compiler include a lexer, a parser and a code generator. This is a logical decomposition as it allows each component to focus on its particular responsibility. However, for an extensible programming language where new syntax elements can be added over time, this architecture is very non-modular: every new syntax element requires changes to the lexer, the parser and the code generator, so every change request touches every component of the system. In short, a system is modular only with respect to a specific set of expected changes; by choosing an architecture that is modular for those expected changes, you make it, as a consequence, non-modular for other types of changes.
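To illustrate, here is a minimal sketch in Python, assuming a hypothetical micro-language with a single print statement (the language and function names are mine, not from the article). Supporting one new construct would require edits at all three marked places.

```python
# A minimal sketch showing why the classic lexer/parser/code-generator split is
# non-modular with respect to adding new syntax: one new construct touches all
# three components.

KEYWORDS = {"print"}          # <- change 1: the lexer must learn the new keyword

def lex(source):
    """Split source text into (kind, value) tokens."""
    tokens = []
    for word in source.split():
        if word in KEYWORDS:
            tokens.append(("KEYWORD", word))
        elif word.isdigit():
            tokens.append(("NUMBER", int(word)))
        else:
            tokens.append(("IDENT", word))
    return tokens

def parse(tokens):
    """Turn tokens into a tiny AST: a list of (op, arg) statements."""
    ast, i = [], 0
    while i < len(tokens):
        kind, value = tokens[i]
        if kind == "KEYWORD" and value == "print":   # <- change 2: the parser needs a new rule
            ast.append(("print", tokens[i + 1][1]))
            i += 2
        else:
            i += 1
    return ast

def generate(ast):
    """Emit pseudo-instructions for each AST node."""
    code = []
    for op, arg in ast:
        if op == "print":                            # <- change 3: the code generator needs a new case
            code.append(f"PUSH {arg}")
            code.append("CALL print")
    return code

# Adding, say, a 'repeat' construct would require edits at all three marked
# places: the system is modular by compiler phase, not by language feature.
print(generate(parse(lex("print 42"))))
```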

This brings me to the third misconception around modularity: the assumption that modularization means the same before and after a digital transformation. Traditional systems often are modular during development time and integrated during execution time. In embedded systems, in the cases where new software is deployed to systems in the field, there typically is an “image” that replaces all software in the system. As a consequence, an upgrade often is quite disruptive in that the system has to go offline, enter some special state required for updating, go through a sometimes lengthy update process, restart and run an extensive self-test before becoming available for operations again.

In a world where we want to continuously deliver new value to customers, the traditional upgrade process is too cumbersome and disruptive to conduct frequently. Instead, we need to support the independent deployment of components without disrupting system operations. Typically, this requires the old and new versions of a component to co-exist in the system for a while as traffic is routed to the new version and the old version concludes its ongoing processing. Once the old version is dormant, it can be removed.
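As a sketch of what such co-existence could look like, here is a hypothetical VersionRouter in Python; in practice this would be handled by a deployment platform or service mesh rather than hand-rolled code. It shifts traffic toward the new version and only retires the old one once it has no ongoing work.

```python
import random

class VersionRouter:
    """Routes requests between two live versions of a component while traffic
    is gradually shifted; the old version is removed once it is dormant."""

    def __init__(self, old, new, new_share=0.0):
        self.old, self.new = old, new
        self.new_share = new_share      # fraction of traffic sent to the new version
        self.in_flight_old = 0          # ongoing requests on the old version
                                        # (meaningful when handle() runs concurrently)

    def handle(self, request):
        if random.random() < self.new_share:
            return self.new(request)
        self.in_flight_old += 1
        try:
            return self.old(request)
        finally:
            self.in_flight_old -= 1

    def shift(self, new_share):
        """Move a larger (or smaller) share of traffic to the new version."""
        self.new_share = new_share

    def retire_old_if_dormant(self):
        # The old version can only be dropped once it has no ongoing work.
        if self.new_share >= 1.0 and self.in_flight_old == 0:
            self.old = None
            return True
        return False

# Usage: start with all traffic on the old version, then shift and retire.
router = VersionRouter(old=lambda r: f"v1:{r}", new=lambda r: f"v2:{r}")
router.shift(1.0)
print(router.handle("order-17"))        # served by the new version
print(router.retire_old_if_dormant())   # True: the old version is dormant and removed
```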

This form of modularity requires capabilities from the architecture that often aren't present in traditional systems. It requires careful modularization of the architecture, the introduction of infrastructure to manage the co-existence of multiple versions of components, run-time testing of functionality without affecting operations, as well as instrumentation to detect anomalous behavior and perform automated responses, such as roll-back.
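Building on the hypothetical router sketched above, here is what the instrumentation and automated roll-back could look like in their simplest form; the error-rate threshold and sample count are illustrative assumptions, not a recipe.

```python
class CanaryMonitor:
    """Watches the new version's error rate and triggers an automated
    roll-back when its behavior looks anomalous."""

    def __init__(self, router, error_threshold=0.05, min_samples=100):
        self.router = router                  # anything with a .shift(fraction) method
        self.error_threshold = error_threshold
        self.min_samples = min_samples
        self.requests = 0
        self.errors = 0
        self.rolled_back = False

    def record(self, success):
        """Called by the instrumentation for every request served by the new version."""
        self.requests += 1
        if not success:
            self.errors += 1
        self._evaluate()

    def _evaluate(self):
        if self.rolled_back or self.requests < self.min_samples:
            return                            # not enough evidence yet, or already handled
        if self.errors / self.requests > self.error_threshold:
            self.router.shift(0.0)            # roll back: all traffic to the previous version
            self.rolled_back = True
            print("Anomalous error rate detected: rolled back to the previous version")

# Wiring it up with the VersionRouter from the previous sketch (old version not yet retired):
router = VersionRouter(old=lambda r: f"v1:{r}", new=lambda r: f"v2:{r}", new_share=0.5)
monitor = CanaryMonitor(router)
for outcome in [True] * 95 + [False] * 10:    # a burst of failures from the new version
    monitor.record(outcome)
```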

Modularity and modularization are concepts that tend to be poorly understood in general. As we’re going through a digital transformation, the meaning of these concepts changes and expands to include the post-deployment stage of systems. This has several architectural implications, including changes to the principles driving architectural modularization, infrastructure to support seamless run-time updates and mechanisms to detect and address anomalous behavior. This is a lot of work and comes with a fair share of risks and challenges, but what’s the alternative? As George Westerman from MIT said: “When digital transformation is done right, it’s like a caterpillar turning into a butterfly, but when done wrong, all you have is a really fast caterpillar.”

Like what you read? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch), Medium or Twitter (@JanBosch).

