When computing systems began to enter the operating room in the 1990s, particularly in anesthesia, there was a great deal of research and discourse on technology in medicine. The most commonly discussed example was a difference in the direction knobs turned: one machine turned right to increase a dose and another turned left, and in the confusion a patient was killed. This and other events prompted a systematic review of medical equipment and the development of standards. The emphasis in the medical community, however, was directed as much at training its staff as at the hardware. This has not been the case in IT, largely because the costs of risk are more easily assumed and the costs of failure are perceived as more tolerable. That tolerance, however, is due in large part to a lack of visibility by executive management into the breadth and impact of those risks: partly because IT management lacks an understanding of business risk measurement, and partly because, in many businesses, IT and Accounting and Finance fail to share enough information for IT to develop one.
The medical and engineering fields attempt to solve the problem of risk and recovery differently. They do so because of biases that evolved from the methodology and traditions of each profession's culture. There is a tendency to think that IT has fully commoditized and can therefore be regulated the way plumbing and electricity are, but in its complexity IT is far closer to medicine than to the more mechanical trades. This confusion, or error in philosophy, is common across many specializations and social groups, from technical specialties to the philosophical biases of entire civilizations.
The medical field, especially in surgery and hospital care, involves effectively unbounded risk (people die, and liability costs are high) and consists of actions taken by people using tools. This set of properties has made the industry focus on the human element: on the assumption of failure, and therefore on improving people.
In medical devices there is an extraordinary emphasis, grounded in research, on producing tools with very simple and consistent user interfaces (such as dials that turn the same direction to produce similar results), an emphasis on protocol (scripts that are followed), and lastly an emphasis on training people to use these tools, all in order to reduce failure.
But every process is seen as a human problem of discipline and training: not as a matter of engineering at lower cost, or of productivity, but as risk reduction. Production costs are far lower than the costs of failure.
This is true for the military as well, where vast numbers of people must work in extraordinarily deadly conditions, under extreme duress and exhaustion, using complex and dangerous tools. Soldiers are taught very simple behaviors, one of which is to speak entirely in facts rather than interpretations; teaching soldiers to separate opinion from the recitation of observation is one of the primary purposes of western basic training.
Similarly, when it was found that hierarchical social structures in different parts of the world prevented airline crews from communicating effectively, and that this was causing deadly crashes, crews were taught English and declarative mannerisms specifically to overcome those cultural biases and that lack of clarity in communication. This is why English is the language of transportation: English contains a spoken protocol of clarity that English speakers do not consciously understand but simply assume, and that clarity originated in the western military tradition of enfranchising all citizens in a militia.
Epistemology is a word meaning, in practice, 'the study of how we know what we know'. Every field has an assumed epistemology. Teachers, soldiers, politicians, engineers, plumbers, and even psychologists have a means of understanding causality and a means of testing themselves. Because each field is limited and includes different kinds of risk and failure, people in different fields use different testing criteria for planning and choosing their actions.
Teachers, for example, over-rely on written tests rather than question and answer, and therefore most often test for short-term memory rather than understanding. This has consequences for all societies, but especially for our political system, which relies on rhetorical ability.
Protestant churches in the colonial period were effectively debating forums for local social solutions — something that is required of a democratic system.
Furthermore, another consequence of teaching methods that attempt to reduce costs is that of literally damaging boys' minds (physical harm to brain development) by making them sit for hours a day, or by the use of drugs that cause similar damage. This damages society as well: while girls learn to cooperate through compromise, boys learn to cooperate through displays of competition and experimentation with dominance, and if prevented from doing so they will not develop an interest in the real world, will fail to take responsibility, and will have little interest in society. All because of the epistemology of teachers, in an effort to perform 'efficiently'. (And as fathers they will play World of Warcraft, not because they want to, but because during their development they were forcibly harmed by these teachers.)
Doctors do not make these kinds of errors, because the cause and effect of their actions are visible. The cause and effect of political policy, in particular monetary policy, is opaque, and politicians seek to keep it so.
Fire regulations, and building codes in particular, are fascinating because of how few office-building fires we have. The cost of construction is heavily influenced by these codes and has risen dramatically, and both regulations and costs continue to expand even though they appear no longer to reduce risk. Conversely, firemen still drill and practice on a regular basis, which is good; yet we still allow tall buildings to be constructed despite the fact that it is dangerous to put many people in a building of more than six stories, that such buildings create congestion, that research conclusively shows people don't like working in them, and that they are unhealthy environments, heat dissipators, and energy consumers.
Effective military organizations run drills, lots of them. The US in particular runs them constantly. Some NATO countries (Hungary, for example) by contrast allow their soldiers to fire only one to three bullets in all of basic training in order to reduce costs. In practice, such organizations are symbolic in nature and incapable of fighting, partly because fighting in adverse conditions depends largely on the relationships between soldiers built through shared experience.
People are not that smart IN time, but fairly smart OVER time: we can solve problems given time. The only way to reduce the time, and therefore the cost, of recovery from failure is to pre-compute, or pre-train, people to recover from failure, and in particular to train them in the process of discovering how to recover from failure.
If IT management applied the same discipline, it would, once a quarter, create a scenario in which three or more elements of its systems failed within a short period and the staff had to recover from it. This is the approach most military tacticians take to educating their people.
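A minimal sketch of what such a quarterly drill could look like as a harness, assuming a hypothetical component inventory (the names below are illustrative, not a real system), where the point is to pick several components to "fail" at once and time the team's recovery of each:

```python
import random
import time

# Hypothetical inventory of system components; names are illustrative only.
COMPONENTS = [
    "primary-db", "replica-db", "web-frontend", "cache",
    "message-queue", "auth-service", "backup-storage",
]

def plan_drill(components, failures=3, seed=None):
    """Choose `failures` components to 'fail' simultaneously for the drill.

    A fixed seed makes the scenario reproducible for a post-drill review.
    """
    rng = random.Random(seed)
    return rng.sample(components, failures)

def run_drill(failed, recover):
    """Time the recovery of each failed component.

    In a real drill, `recover` is the staff restoring service; here it is
    a callback so the harness itself stays testable.
    """
    results = {}
    for component in failed:
        start = time.monotonic()
        recover(component)
        results[component] = time.monotonic() - start
    return results

if __name__ == "__main__":
    failed = plan_drill(COMPONENTS, failures=3, seed=42)
    timings = run_drill(failed, recover=lambda c: time.sleep(0.01))
    for component, secondsds in ():
        pass  # placeholder removed below
```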
There is too often an emphasis on the efficient achievement of goals, rather than on giving people goals and inserting 'lessons', hurdles, and obstacles for them to overcome.
In IT engineering, risk is rarely stated, because it is rarely visible, despite its catastrophic cost to business. Errors are considered functions of the machinery rather than of the people using and maintaining it. People are considered a cost to be minimized so that more work can be put through them.
If a system cannot be assembled, disassembled, and tested at every point in the process, then its people cannot understand how to recover it under duress. This mastery through intentional reconstruction is how Formula One racing teams think of engineering: they constantly drill, because of the value of time in racing.
It is this value of time, and its lost-productivity cost, that IT hides. Furthermore, IT does not report on the problems it has solved, and the cost of those problems, sufficiently to keep management informed and educated about risks.
The converse happens as well: IT resists change because it does not understand the impact of that change, having spent too little time in drills.
Some companies fight this battle constantly. Citicorp, for example, was a cluster of different banks under one management system and brand name, but not under one infrastructure (I hope I have the bank right here; I am pulling from memory). This meant that in the financial crisis it was less able to react: it had kept costs down by keeping risk high, by not developing a common infrastructure, both technological and organizational.
Doctors have extraordinary peer reviews after both success and failure. They spread knowledge through discourse and question and answer. (Part of this reflects medical students' skill in analytical thinking and rhetoric versus that of the IT population.) Regardless, the concept of improving people through discourse is consistent in their approach.
Each patient is a new experiment, having the potential for failure or success and the consequential new learning that comes from either.
Retail shops use secret shoppers to test for shoplifting and customer service. The military runs maneuvers, and even uses its own members to test its own security. IT rarely conducts planned failures to see how the staff reacts and to educate them. IT does, however, perform upgrades, and for this reason upgrades and system maintenance are among the most important means of keeping the staff trained: they fulfill a function similar to drills and teach the value of redundancy.
These assumptions, this epistemology, differ for every little field of specialization. What happens in each field is that its methods, practices, tools, means of testing, and general operating philosophy become assumptions about the nature of the real world, about human nature, and even about human capability: in particular human plasticity and adaptability, and human learning and understanding. In fact, we must first understand the human animal as the maker and maintainer of complex systems, and recognize that the human animal has very specific properties, none of which are terribly impressive without extraordinary role-playing, testing, and training in real-world (versus written or spoken) conditions in which people must cooperate toward complex ends, in real time, under duress.
For example, human civilizations differ largely because their social orders were initially established by their warriors and their battle tactics. It may seem odd that the eastern, western, steppe, desert, and mystical civilizations were all shaped this way (Armstrong, Keegan). It is uncommon even for westerners to understand that western battle tactics in Europe were heavily based on maneuver (chariots), which required cooperation. Cooperation required political enfranchisement; political enfranchisement led to equality; equality led to debate; debate led to logic; logic led to science and rationalism. This is different from the tribal raiders, the mystical Zoroastrians, and the Chinese familial and hierarchical traditions. An interesting problem for intellectual historians has been why Confucius could not solve the problem of politics and directed his civilization toward familial structures instead. Or consider that the primary difference between east and west is the western assumption that our job is to leave the world better than we entered it, that the purpose of man is to transform the world for his utility, that man is the ultimate work of nature, versus the eastern view that our job is to work in harmony with the world (non-disruption), that humans are somewhat vile by nature, that man necessarily lives in class structures, and that truth is less important than the avoidance of conflict (except when it involves barbarians). These differences led to our different concepts of life itself.
In IT there is a cultural assumption that the engineer's job is to prevent failure, to work with systems without adding complexity that increases the probability of failure, or to recover from failure. However, few organizations are structured with drills and processes for recovering from failure whose express purpose is to educate the human element in the system.
This cultural legacy is largely due to the perceived (though not factual) high cost of IT implementations, largely a remnant of the fact that during IT's development a great deal of research and development, in pursuit of competitive advantage, was conducted in-house, with the resulting failure of those research and development programs. In fact, IT infrastructure costs were significantly lower than those of many previous innovative technologies adopted by business (in particular, electricity as a replacement for steam or water power). And by comparison, the calculative burden and uncompetitiveness placed upon companies by antiquated accountancy methods, government taxation programs, or building codes are often higher than IT costs. In Europe, for example (as well as in California), businesses form small networks, rather than more efficiently combining into larger organizations with lower administrative costs, just to avoid these external expenses.
So this is not only an IT problem but an executive management problem: the CEO cannot authorize a budget for risk mitigation (nor cover himself by doing so) if IT management does not understand and quantify the risk, or its probability.
(If executive management does not promote better methods once presented with the information, then revolt is the only real solution: go work somewhere more worthy of your talents, somewhere that doesn't reduce its cost of doing business by counting on the fact that you'll live under greater unnecessary stress, possibly lose sleep and health, or even risk your job, because you were not allowed to engage in preventative activities. Conversely, if you don't provide executives with that knowledge, in a form and of a quality at least equal to what the sales and accounting organizations provide, then they are not to blame for your inability to do so. They have an epistemology too: they are told many things by many people and must be able to test these bits of gossip and opinion somehow, and only numbers provide that ability.)
IT management has long been criticized for wanting a seat at the table but not warranting one (Nick Carr). In general, these people may understand the craft but often fail to understand the metrics and management of capital in a business; executives, in other words, are included for their ability to postulate theories and deliver results. Customer service (internal and external), risk (failure management), productivity contribution through improved competitiveness, and cost of services are all criteria by which IT organizations should be measured. From the "ultimate question" for customer service to the cost of service, all of these are measurable. But executives cannot judge that service if IT management does not adequately measure it and report on it, so that the executive management of the organization is capable of understanding and making decisions that support IT's mission.
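Two of the measures above are simple enough to sketch directly. The "ultimate question" is the Net Promoter Score (percentage of promoters minus percentage of detractors on a 0-10 survey), and cost of service can be reduced, crudely, to cost per incident handled. A minimal sketch, with illustrative numbers only:

```python
def net_promoter_score(ratings):
    """The 'ultimate question': percent promoters (9-10) minus
    percent detractors (0-6) on a 0-10 'would you recommend' survey."""
    if not ratings:
        return 0.0
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100.0 * (promoters - detractors) / len(ratings)

def cost_per_incident(total_cost, incidents):
    """Crude cost-of-service measure: total support spend per incident."""
    return total_cost / incidents if incidents else 0.0

# Illustrative figures, not real data:
print(net_promoter_score([10, 9, 8, 3, 10]))   # 3 promoters, 1 detractor of 5
print(cost_per_incident(50000, 250))           # dollars per incident
```

The point is not the arithmetic but that these numbers exist and can be put in front of a CEO quarter after quarter, in the same register as sales and accounting figures.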
Think of how much information the Accounting (history) and Finance (future) organizations give to the CEO. Think about how much the Sales organization gives to the CEO. Think of how little marketing organizations tend to give by comparison, and of how much less than marketing the IT organization gives.
The respect and influence that a function of the company has over the distribution of resources has largely to do with the metrics that it provides the management team, and with how much exposure to risk the IT organization inserts into the business by treating the management of complex systems as a problem of engineering rather than one of human development: of testing humans for failure and measuring their ability to recover from it.
Public intellectuals try to change public opinion, and thereby policy, through narrative and argument as well as data and its interpretation, because they must help people think differently despite prior intellectual assumptions and biases formed by the methods and tools those people use in daily life and then apply outside that domain of experience. In the same way, IT management, and to some degree the staff, must examine the underlying assumptions both in IT and in general business management, and must develop the internal discipline to experiment with failure in order to teach the human component of complex systems how to react in short time periods. At the same time, they must use metrics and measures to inform the policy makers in executive management, so that those executives can intelligently and rationally allocate resources toward creating profit (a measure of our use of the world's resources) and reducing risk. Then all members of the organization, who are choosing to invest in this stream of income, friendships, and knowledge rather than an alternative stream at another organization, can reduce the risk and cost to themselves in the event that those estimates of risk fail.
It’s all economics after all.