Systems of all natures affect the outcomes of all engineering projects, but software engineers working with business systems must consider and anticipate the interactions of non-technical systems to a greater degree. Studies of software project failure point not to widespread technical deficiencies among practitioners, but to issues with understanding requirements, trust in solutions, and how those solutions are rolled out. A software system typically exists within an environment of other systems, each affecting the operation of the others in seemingly unpredictable ways. These systems can include other information systems, institutional or political systems, and social systems. Consequently, software engineers often require an appreciation for systems that goes beyond their technical scope to ensure the success of their projects. The socio-technical nature of software systems has not been lost on the field of software engineering; however, software engineers can always benefit from greater sensitivity to the nature of systems themselves.
Enter Systemantics, the 1975 book by John Gall that earned its author a law of his own: Gall’s Law. A very entertaining read, it also provides important insight for software engineers. The tongue-in-cheek style of Systemantics conveys concepts that are otherwise difficult to communicate except through personal experience, and it certainly offers the practitioner an important viewpoint if taken with a grain of salt. To this end, this article takes three major concepts from Systemantics and applies them to software engineering, with the hope of enlightening practitioners on the travails of working with systems in a systems-dependent field.
Systems produce unexpected outcomes
One of the first axioms discussed by Gall is that, “New Systems Mean New Problems” (p. 29). This is hardly controversial to anyone familiar with development: the addition of a new feature to one part of a code base can cause failures in other systems for reasons that are not initially apparent. Within development, this idea is as uncontroversial as its corollary, that “Complex Systems Exhibit Unexpected Behaviour” (p. 40).
However, this wisdom does not pass as easily through the society-engineer barrier. Delivering a technical masterpiece accounts for only one part of what a software engineer can call a success. The software must be delivered within the context of a multitude of other systems that are largely non-technical. As Gall states, “Everything is a part of a larger system” (p. 133). For example, a CRM enables customer encounters to be tracked quantitatively; however, this has the potential to transform customer support managers into mere metric worshipers (how can you argue with quantitative results?), thereby negatively impacting employee morale and deteriorating customer satisfaction. Indeed, the “‘success’ or ‘function’ of any system may be failure in the larger or smaller systems to which the system is connected” (p. 132). New software systems, or modifications to existing ones, need to be approached with a healthy amount of caution and humility. Regardless of the proficiency of those involved, “The mode of failure of a complex system cannot ordinarily be predicted from its structure” (p. 93).
W. Edwards Deming proposed a similar concept: an organization is itself a system, and the aim should be to optimize the system as a whole, not its individual components. With a software system being but one component, it is important to continually relate its outcomes to the context of the whole. Simply fulfilling a specification is not enough, especially when another component of the system is being negatively affected.
Systems run best when designed to run downhill
Software is often deployed in environments where its success is immediately challenged by forces from non-technical systems. Not only do systems themselves, information or otherwise, pick up inertial forces that create an inherent resistance to change, but software systems are also often used as a primary means to effect change upon neighbouring systems. The former is a passive response, the latter an opposing response to an active force; both result in external forces exerted upon the software system. Gall’s advice for anticipating opposing forces is termed the Systems Law of Gravity, which states that “Systems Run Best When Designed to Run Downhill” (p. 102).
When a software system is used as a primary force to impose change on neighbouring systems, an opposing force of some degree is almost a certainty. This is not to say that software cannot be a tool in a broader program to effect change, but expecting to make institutional change solely by using one system to affect another is unpredictable. For example, software systems are often used to enforce policies and procedures: a CRM to strong-arm a sales process, an expense reporting system to enforce a high level of rigidity, and checklists in Electronic Health Records (EHR) to require doctors to ask a regimen of questions. And yet, each of these is naturally resisted by the social systems in which they interact. Denton writes of doctors needing to meaninglessly check through lists containing questions of “exercise-induced chest pain and feelings of anxiety” for a two-month-old. The burden of a poorly configured CRM is also well documented. Peter Drucker commented that, “our civilization suffers from a superstitious belief in the magical effect of printed forms.” If the sales process is not already enforced, the expense report procedures not taken seriously, and the medical checklists not already agreed upon, then software will fail to enforce any of these. Even a system that happens to work today at tightly controlling a process might become too rigid in the future. As Gall also states, “Loose Systems Last Longer and Function Better” (p. 103).
Inertial forces are troublesome as they lie dormant out of the control of the engineer. As Gall states in the Law of Systems-Inertia, “A system that performs a certain function or operates in a certain way will continue to operate in that way regardless of the need or of changed conditions” (p. 86). For example, staff may be loath to retrain, key users may be too busy to use the system to keep necessary data up-to-date, or staff may resent having lost control of their spreadsheet to an application from the IT department. Running downhill in this case often means simply being aware of the non-technical workings of an organization, and then designing the technical system to work with these tendencies (and human nature), not against them. Integrating a new system within an existing one, or providing interim measures to keep control in the hands of the staff, are both examples of running downhill with non-technical inertial forces.
Do it without a system if you can
When presented with a business problem, the easiest answer for a software engineer is more software. Take the hypothetical problem of a business whose employees arrive late and leave early. Even knowing the point I am about to make, I am compelled to think, “They need an electronic check-in/out system so that the employees know that their lateness will be tracked; this will incentivize them to arrive on time.” However, software should not be the immediate solution to a non-software problem. In the example, the software solution is multiple layers removed from the real problem itself. Software in this case is the actualizing force of a new procedure: the need for employees to sign in and out. This procedure could also be implemented by other information management systems such as a sign-in/out book or stamped cards. Beyond the disregard for evaluating alternative methods, the jump from business problem to software solution has effectively bypassed the evaluation of whether a new procedure is an effective solution to the problem at all. The tardiness may be due to inadequate supervisors; if supervisors are known not to check the records, then a record-keeping system will be of limited utility. Therefore, a software solution to this non-software problem is at an increased risk of failure even before it begins.
The idea that the jump from a non-software problem to a software solution is unwise follows from Gall’s first bit of advice to those wishing to successfully manage a system: “Do it without a system if you can” (p. 100). The siren song sung by systems is unmistakable: “Systems are seductive. They promise to do a hard job faster, better, and more easily. […] But if you set up a system, you are more likely to find your time and effort now being consumed in the care and feeding of the system itself” (p. 100). It follows that if an adequate solution can be found without all of software’s complexity, then it should be preferred. If you are searching for an agile approach to solving a problem, then pen and paper (or a simple spreadsheet) coupled with disciplined management may provide a solution (if only a prototype) that would have far greater agility than any complex system, software included.
Gall’s namesake law states, “A complex system that works is invariably found to have evolved from a system that worked” (p. 80). Consequently, in the tardy employee problem, where the solution to evaluate is the procedure itself, it would be much easier to first implement a manual solution. In this way adjustments can be made much more easily; moreover, the changes can be phased in slowly and in an understandable manner, thereby minimizing opposing forces to the changes. Then, once the manual system has become too cumbersome, a true software problem has arisen. But if a manual solution does not work in the context of its surrounding systems, then there is little hope that a more complex software system has any chance of working either. Therefore, as a corollary to Gall’s law we can state, “A working software system is invariably found to have evolved from a working non-software system.”
As Gall states, “A Complex System Designed From Scratch Never Works And Cannot Be Made To” (p. 80).
Many of Gall’s axioms can be seen in a number of the best practices that have evolved in software engineering. Working in smaller iterations with continuous user feedback mitigates the possible unexpected outcomes of introducing a new software system. Indeed, this approach enables software professionals to contribute new solutions while continually monitoring any unintended consequences in a controlled manner. Moreover, by working alongside the sources of potential opposing forces, inertial forces are further mitigated allowing the software to effectively “run downhill.”
The third point (“Do it without a system if you can”) offers a further conclusion that is often understated: make sure that the problem being solved is actually a software problem. The purpose of this article is not to set a hard-and-fast rule for determining what is a software problem; however, from the example above, creating a digital employee punch-in/punch-out system because pen-and-paper has become cumbersome is a much stronger case for software than creating the same system because supervisors are uninterested in staff behaviour. If there are steps missing between the problem statement and the recommendation of implementing a software system, then other options may need to be evaluated.
Finally, a common theme running through all of this is the importance of having management involved in the development and deployment of software systems. In W. Edwards Deming’s The New Economics, management’s job is clearly stated as “to direct the efforts of all components toward the aim of the system,” where the aim is ultimately to “achieve the best results for everybody” (p. 50). Software systems are components of the greater business; management must therefore ensure that advancement in software components translates to advancement for the business as a whole. This is especially true where it is necessary to make larger scale changes to business systems, or where a business problem must be solved immediately with a software solution. “Growth in size and complexity of a system […] require overall management of efforts or components” (p. 53). As discussed, in a software system these efforts and components are never only technical in nature.
Kevin is a software engineer and partner with Dattivo Software in London, Ontario. With a formal background in software engineering, he designs, develops and implements software solutions. His interests include how software supports business operations, model-driven architectures and design patterns. He can be reached at firstname.lastname@example.org.