


Canadian Forces Combat Camera photo IS2003-1183a by Master Corporal Paul MacGregor

Battlespace Visualization: a Signals officer at the console of the Athene Tactical System, a new battlefield digital information system being tested by 2 Canadian Mechanized Brigade Group and 1 RCR.


by Colonel Christian Rousseau


The search for certainty in military decision-making has long been an elusive if persistent goal, one which has led to the development of ever more sophisticated methods of acquiring and assessing information that might affect the quality of those decisions. Every advance in the sophistication of command and control systems in fact reflects a race between the demand for information and the ability to deliver it. Until very recently, taming uncertainty proved to be a chimera, but the field of information technology is growing rapidly and, if we are to believe the present-day enthusiasts, it would seem that near-perfect Battlespace Visibility (BV)2 is closer at hand than ever. The powerful analogy of putting the commander back on his horse to survey the battle is sometimes used to describe the phenomenon.3

Achieving near-perfect Battlespace Visibility would be an enormous accomplishment, but there is significant doubt as to its feasibility.4 However, if we suspend disbelief momentarily and assume that perfect BV is not only achievable, but that its interface could be designed in such a way as to eliminate the risk of information overload, would this considerable expenditure of resources represent a significant gain in a commander’s ability to make the right decision at the right time?

This article will argue that even perfect BV would be of only marginal value to the commander, and that war at the operational level will remain a complex endeavour requiring exacting decision-making skills and coping strategies to make sense of the complexity.

To present the argument, this paper will first examine complexity theory and show its relevance to the military commander and his environment. The second part will focus on some of the latest research in the field of decision-making in complex environments, and contrast these findings with the situation of a commander. The last part of the essay will distil the ‘enablers’ to operating in war’s complex environment from the insights gained by the recognition of its chaotic nature.


Everything in war is simple, but the simplest thing is difficult.

Carl von Clausewitz5

Terms like complexity, chaos and non-linearity6 have become common in our day-to-day vocabulary, signalling broad use of the theories behind them to explain our world. But what do chaos and complexity theories have to do with commanders or the theory of warfare?

While chaos and complexity are different concepts, there is an important link between the two underlying theories. Put succinctly, chaos theory is the study of how simple systems can generate complicated behaviour, while complexity theory is the study of how complicated systems can generate simple behaviour. Familiarity with both concepts is important to an understanding of warfare. This essay will first look at chaos, to set the scene for the introduction of complexity theory.


Canadian Forces Combat Camera photo IS2003-2296a by Master Corporal Frank Hudec

The Human Element of Command: the commanding officer of HMCS Iroquois conferring with his bridge officer during a night boarding of a suspect vessel in the Gulf of Oman, April 2003.

Chaos theory has its origins in mathematics. The great French mathematician Henri Poincaré first noticed that many simple non-linear deterministic systems can behave in an apparently unpredictable and chaotic manner. This realization has had broad implications for many scientific disciplines, with the ideas of chaos proving very fruitful in disciplines as diverse as biology, economics, engineering and physics.7

At the outset, it is important to recognize that chaos is not randomness. Rather, the phenomenon of chaos is a very sensitive dependence of the outcome of a process, in a deterministic system, on the tiny details of what happened earlier – the initial conditions. When chaos is present, it amplifies indeterminacy. But if all non-linear systems were completely indeterminate, not much would come out of their study. Complexity theory, for its part, deals with the study of systems that, within bounds, exhibit unpredictable, self-organizing behaviour. In systems, interactions are the norm, so action in one area will invariably have more than one effect. The result is that systems often display non-linear relationships: the effects of action are always multiple, outcomes cannot be understood by adding together the units or their relations, and many of the results of an action are unintended. Doctors call the undesired impact of medications “side effects”. Although the language is misleading – there are no criteria other than expectations that determine which are “main effects” and which are “side effects” – the point reminds us that disturbing a system will produce several changes.8
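
Poincaré's observation is easy to demonstrate. The short sketch below, an illustration of the general principle rather than anything drawn from this article, iterates the logistic map, a textbook example of a simple, fully deterministic rule: two starting points that differ by one part in 400,000 soon produce entirely different trajectories.

```python
# Illustrative sketch (the editor's, not the author's): the logistic map
# with r = 4.0, which puts the map in its chaotic regime.

def logistic_map(x0, r=4.0, steps=30):
    """Iterate x -> r * x * (1 - x) and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_map(0.400000)   # the initial condition as "measured"
b = logistic_map(0.400001)   # the same condition, off by 1 part in 400,000

for step in (0, 10, 20, 30):
    print(f"step {step:2d}: {a[step]:.6f} vs {b[step]:.6f}")

# The trajectories track each other at first, then diverge completely:
# the same deterministic rule and indistinguishable starting points yield
# radically different outcomes. This is sensitivity to initial conditions,
# not randomness.
```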


With this basic understanding of chaos and complexity theories, the paper can now look at the commander and his operating environment to show that they indeed constitute a chaotic and complex system. For a system to be considered complex,9 it must be deterministic, its interactions must induce non-linearity and, within bounds, it must exhibit pattern-forming self-organization. Where all three conditions are evident, the system can be considered complex.

There can be little doubt that the elements which constitute war are deterministic. When a carrier battle group sails, it does not travel randomly around the world’s oceans. When a fighter squadron flies on a mission it does not drop ordnance arbitrarily. And, when an armoured division attacks an enemy position, advancing erratically does not serve its purpose.10 The mere fact that such groupings exist and have been shown to be potent systems to inflict and control violence signifies that there is a link between cause and effect. This is not a stochastic or random environment.

That interactions in war induce non-linearity is well documented. Well-known examples include the nursery rhyme about the kingdom lost for the want of a nail, Clausewitz's observations on the incidence of friction11 or von Moltke's remark that “no operation plan extends with any certainty beyond the first encounter with the main body of the enemy.”12 Practitioners, theorists and even popular culture testify to the futility of predicting results based on initial conditions because of the sensitivity of outcomes to seemingly benign perturbations. Non-linear outcomes are the hallmark of war; its nature cannot be captured in one place but emerges from the collective behaviour of all the individual agents in the system interacting locally in response to local conditions and partial information. In this respect, decentralization is not merely one choice of command and control; it is the basic nature of war.13

The pattern-forming, self-organizing aspect of warfare can be glimpsed from studying its history, or more precisely from the fact that it is possible and worthwhile to use history to enhance our understanding of warfare. If warfare were not pattern-forming, the introduction of new technology that changes the balance of interactions, on one side or both, would bring unrecognizable new dynamics into the system. The functions of war – the requirements to Sense, Shield, Act, Sustain and Command – have indeed been impervious to technological change.14 The formulation of Principles of War is also an indicator of pattern-forming self-organization: if there were no pattern, only non-linearity, we could not affirm that Concentration of Force is worth pursuing or that Selection and Maintenance of the Aim is a key to success, and we might indeed conclude that the wisdom of keeping a Reserve is an anachronism from the 19th century.

It is therefore clear that war – the environment in which a commander operates – is a complex system where knowing the physical component of the situation is only part of the solution. Non-linear dynamics suggests that war is uncertain in a fundamental way. Uncertainty is not merely an initial environmental condition that can be reduced by gathering information and displaying it on a computer screen. Nor is it that we currently lack the technology to gather enough information but will someday acquire the capability. Rather, uncertainty is a natural and unavoidable product of the dynamic of war: action in war generates uncertainty.15 How can we help a commander deal with these complex systems? How much help would a perfect Battlespace Visibility system bring? To answer these questions, we must investigate how the human mind deals with complexity.


This difficulty of accurate recognition constitutes one of the most serious sources of friction in war, by making things appear entirely different from what one had expected.

Carl von Clausewitz16

Dealing with complex systems does not come naturally. Despite our seemingly advanced cognitive skills, it appears that evolution allowed human beings to develop a tendency to approach issues on an ad hoc basis.17 Using the results of recent research in the field of decision-making in complex environments, the apparent limitations of the human mind can be examined, as well as the consequent type of recurring decision errors in complex environments. This will set the stage for an exploration of successful decision-making strategies and their applicability to the commander.

At the root of our difficulty in dealing with complex systems is our poor ability to deal with variable patterns in time.18 The fact that spatial configurations can be perceived in their entirety, while temporal ones cannot, may well explain why we are far more able to recognize and deal with arrangements in space than in time. We are constantly presented with whole spatial configurations, and readily think in such terms. We know, for example, that to determine whether a parking lot is crowded we need to look at more than one or two spaces. By contrast, we often overlook time configurations and treat successive steps in a temporal development as individual events. For example, as student enrolment rises each year, members of a school board may add first one room then another onto an existing schoolhouse because they fail to see the development in time that will make an additional schoolhouse necessary. Even when we think in terms of time configurations, our intuition is very limited, so that when we have to cope with systems that do not operate in accordance with very simple temporal patterns, like the one given here, we run into major difficulties.19
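
To put rough numbers on the school-board example (the figures below are the editor's, purely illustrative), compare the year-by-year view, in which each increase looks like a matter of one or two more rooms, with the view of the development in time, in which extrapolating the same growth reveals the need for a new schoolhouse:

```python
# Illustrative arithmetic only: steady ~10% annual growth, seen one year
# at a time versus seen as a configuration in time.

CAPACITY_PER_ROOM = 30
enrolment = [300, 330, 363, 399, 439]   # pupils, five successive years

# Year-by-year view: each jump looks like "add a room or two".
for year, (prev, curr) in enumerate(zip(enrolment, enrolment[1:]), start=1):
    rooms = -(-(curr - prev) // CAPACITY_PER_ROOM)   # ceiling division
    print(f"year {year}: +{curr - prev} pupils -> add {rooms} room(s)")

# Configuration-in-time view: extrapolate the same growth five more years.
growth = enrolment[-1] / enrolment[-2]
projected = enrolment[-1] * growth ** 5
print(f"five-year projection: ~{projected:.0f} pupils "
      f"(~{projected / CAPACITY_PER_ROOM:.0f} rooms in all: "
      f"a second schoolhouse, not another room)")
```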

This limited temporal intuition is evident in our propensity to ‘oversteer’ when action and reaction are not linked by instantaneous feedback. At the helm of the proverbial oil tanker, the uninitiated will keep turning the wheel because the ship appears unresponsive. Once it starts to turn, they realize they have overdone it and must compensate the other way.

This tendency to “oversteer” is characteristic of human interaction with dynamic systems. We let ourselves be guided not by development within the system, that is, by time differentials between sequential stages, but by the situation at each stage. We regulate the situation and not the process, with the result that the inherent behaviour of the system and our attempts at steering it combine to carry it beyond the desired mark.20
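
The dynamic is easy to reproduce. The toy simulation below is the editor's own illustration, not drawn from the article: a 'helmsman' who corrects on the current heading error, while the rudder takes effect only after a delay, is regulating the situation rather than the process, and duly overshoots.

```python
# Minimal sketch of oversteering under delayed feedback (all parameter
# values are arbitrary choices for illustration).

DELAY = 5        # steps before a wheel input is felt in the heading
GAIN = 1.0       # how aggressively the helmsman corrects the current error
RESPONSE = 0.2   # heading change per unit of (delayed) rudder input

target = 10.0
heading = 0.0
pipeline = [0.0] * DELAY          # rudder inputs issued but not yet felt

for step in range(40):
    error = target - heading                 # steer on the situation now...
    pipeline.append(GAIN * error)
    heading += RESPONSE * pipeline.pop(0)    # ...but feel a five-step-old input
    if step % 5 == 4:
        print(f"step {step:2d}: heading = {heading:6.2f}")

# The heading climbs past the target, swings back below it, and oscillates
# before settling: the ship's lag plus stage-by-stage corrections produce
# exactly the oversteering described above.
```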

Unfortunately, limited temporal intuition and a tendency to oversteer do not appear to be our only flaws. Dietrich Dörner, an authority on cognitive behaviour, found that decision makers who are uncomfortable with complexity and unfamiliar with a situation are often plagued by uncertainty.21 They also tend to miss the big picture and get swamped dealing with the problem of the moment.22


Experts, for their part, tend to take complexity within their field in stride but remain vulnerable to uncertainty. Gary Klein, an authority on Naturalistic Decision Making, found that experts familiar with the complexity of a particular situation make three types of errors: first, errors due to lack of experience; second, errors due to lack of information; and third, what he calls the de minimus error, an error of mental simulation where the decision maker notices the signs of a problem but explains them away, i.e., he finds a reason not to take seriously each piece of evidence that warns of an anomaly.23

It would thus seem clear that decision-making in a complex environment does not come naturally. Cognitive psychologists have documented strategies for effective decision-making in such environments, strategies that differ according to whether the decision maker is an expert in the field where the decisions are required. But before going into these strategies, we need to look at an emerging truth that seems to hold regardless of level of expertise. In dealing with complex environments, cognitive ability appears not to be the main indicator of success, and the usual battery of psychological tests is of little use in predicting participant behaviour. A better predictor of success is the participant's capacity to tolerate uncertainty.24

To operate successfully within a complex and dynamic system, we have to know not only what its current status is, but also what its status will be or could be in the future, and we have to know how certain actions we take are likely to influence the situation. For this we need “structural knowledge”, knowledge of how the variables in the system are related, and how they influence one another.25 As we will see when discussing Recognition-Primed Decision-making (RPD), experts will have developed an intuition26 for this, but laymen must hypothesize the links, test their hypotheses, and bear in mind that their model may well be wrong.

The decision-making strategy proposed by Dörner is very similar to what we call the Estimate of the Situation, or in collaborative planning terms, the Operational Planning Process (OPP). He sees the first step in dealing with a complex problem as “defining goals”, the second as “developing a model and gathering information”, and the third as “prognosis and extrapolation”. These steps are followed by “consider measures to achieve our goals” so as to be in a position to make a “decision”, and this planning is followed by “action” and revision of the plan based on feedback.27
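
Because Dörner's steps form a loop closed by feedback rather than a one-way checklist, they can be rendered as a short, runnable toy. The sketch below is the editor's own schematic, not Dörner's and not the OPP: a decision maker pursues a goal against a deliberately trivial 'system', starting with a wrong model of cause and effect and revising it from feedback on every cycle.

```python
# Schematic toy of the goal / model / prognosis / decision / action /
# revision loop. All quantities are invented for illustration.
import random

random.seed(1)

goal = 100.0                  # step 1: define the goal
state = 40.0                  # the current situation
believed_effect = 5.0         # step 2: initial model - "one unit of effort
                              # yields five units of progress" (wrong)
TRUE_EFFECT = 3.0             # the hidden reality the model must converge to

for cycle in range(1, 8):
    projected_gap = goal - state                 # step 3: prognosis and
                                                 # extrapolation
    if abs(projected_gap) < 0.5:                 # goal effectively met
        print(f"cycle {cycle}: goal reached, state = {state:.1f}")
        break
    effort = projected_gap / believed_effect     # steps 4 and 5: consider
                                                 # measures, then decide
    actual_gain = effort * TRUE_EFFECT * random.uniform(0.9, 1.1)
    state += actual_gain                         # step 6: action...
    believed_effect = actual_gain / effort       # ...and revision of the
                                                 # model based on feedback
    print(f"cycle {cycle}: state = {state:6.1f}, "
          f"believed effect = {believed_effect:.2f}")
```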


DND Photo AE2003-0117-005D by Corporal Chris Bentley

Precision Munitions Delivery for Joint Operations: one of the first of the modernized CF-18s sporting special jungle camouflage.

While this kind of planning process serves us well in unfamiliar complex situations, it is slow and cumbersome and not always practical under extreme time pressure. Fortunately, it appears that shortcuts are possible for situations where the decision maker has a sound structural knowledge (implicit or explicit) of the system he is dealing with. As Dörner explains:

[For the human mind] complexity28 is not an objective factor but a subjective one. Take, for example, the everyday activity of driving a car. For a beginner, this is a complex business. He must attend to many variables at once, and that makes driving in a city a hair-raising experience for him. For an experienced driver, on the other hand, this situation poses no problem at all. The main difference between these two individuals is that the experienced driver reacts to many “supersignals.” For her, a traffic situation is not made up of a multitude of elements that must be interpreted individually. It is a gestalt, just as the face of an acquaintance, instead of being a multitude of contours, surfaces, and color variations, is a “face.” Supersignals reduce complexity, collapsing a number of features into one. Consequently, complexity must be understood in terms of a specific individual and his supply of supersignals. We learn supersignals from experience.29


Photo by Sergeant David Snashall, taken at the G8 Summit

Information Fusion: an Intelligence officer aboard HMCS Vancouver.


Gary Klein studied experts in their natural settings with ample structural knowledge and a good grasp of supersignals. He found that commanders, using Recognition-Primed Decision-making, could come up with a good course of action from the start, even when faced with a complex situation.30 The RPD model fuses the two processes of pattern recognition and mental simulation to optimize decision-making. Through pattern recognition, decision makers recognize the situation as typical and familiar and proceed to take action. They understand what types of goals make sense (so the priorities are set), which cues are important (so there is not an overload of information), what to expect next (so they can prepare themselves and notice surprises), and the typical ways of responding in a given situation. By recognizing a situation as typical, they also recognize a course of action that is likely to succeed. They do not compare possible courses. By means of mental simulation, they ‘wargame’ the first plausible course of action and use it as is, adjust it if need be, or reject it if it will not do the job. They do not attempt to find the best plan; they are after the first plan they know will work, thereby achieving great economies of time and mental resources. After the decision is made, experts monitor developments and rely on their expectations as one safeguard. If they have read a situation correctly, their expectations should match events. If they are wrong, they can quickly use their experience to notice anomalies and change their plan dynamically.31
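
Klein describes RPD in prose; the sketch below is the editor's own schematic rendering of its control flow, with all names hypothetical. What it captures is the sequencing: candidate courses of action are considered in order of typicality and each is mentally simulated in turn; the first one that works (as is, or after adjustment) is adopted, and options are never compared side by side.

```python
# Hedged sketch of the RPD control flow, not Klein's own formalism.
from dataclasses import dataclass
from typing import Callable, Iterable, Optional


@dataclass
class Course:
    """A typical course of action retrieved by pattern recognition."""
    name: str
    simulate: Callable[[str], bool]   # stand-in for mental simulation
    adjustable: bool = False          # can flaws found in simulation be fixed?


def rpd(situation: str, typical_courses: Iterable[Course]) -> Optional[str]:
    """Return the first course of action that survives mental simulation."""
    for course in typical_courses:        # in order of typicality; there is
        if course.simulate(situation):    # no side-by-side comparison
            return course.name            # the first workable plan is adopted
        if course.adjustable:
            return course.name + " (adjusted)"   # repairable flaw: adapt it
        # otherwise reject this course and consider the next one
    return None   # no pattern fits: fall back to deliberate planning


plan = rpd("structure fire, people reported inside",
           [Course("interior attack", lambda s: "people" in s)])
print(plan)   # -> interior attack
```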

This strategy of decision-making has its limitations, however, and cannot serve in all situations. Significant structural knowledge (mainly implicit) is required, and there is a relatively low limit to how complex a situation can be before it overwhelms our mental capacity to simulate it.32 Moreover, given our difficulty in dealing intuitively with all but the simplest temporal patterns, mental simulation and RPD will not help in circumstances where complex temporal configurations are at play.

We have reviewed what researchers have found to be our human limitations when it comes to successful decision-making in complex environments and the strategies suggested to overcome them. The applicability of their theories to the commander's situation will now be examined.

We recognize in Klein’s decision-making model a pattern applicable to the commander possessing what Clausewitz has called “coup d’oeil.”33 Similarly, we can see the close parallel between the Operational Planning Process and Dörner’s guidelines for decision-making in unfamiliar complex situations.

In the same way we can readily find examples of the three types of errors reported by Klein,34 and an informed reading of Cohen and Gooch’s work on Military Misfortunes reveals that lack of structural knowledge is at the root of the three kinds of failure they report.35 This deficiency is also at play when Timothy Thomas remarks in Parameters that “information superiority allowed NATO to know almost everything about the battlefield [in the Kosovo conflict], but NATO analysts didn’t always understand everything they thought they knew.”36

Finally, the concept that tolerance of uncertainty is a better predictor of success than sheer intellect can be corroborated by the words of General William Tecumseh Sherman, telling a subordinate what made Ulysses Grant his better in the art of war:

Wilson, I’m a damned sight smarter than Grant; I know more about organization, supply and administration and about everything else than he does; but I’ll tell you where he beats me and where he beats the world. He don’t care a damn for what the enemy does out of his sight but it scares me like hell. I’m more nervous than he is. I am much more likely to change my orders or to countermarch my command than he is. He uses such information as he has according to his best judgment; he issues his orders and does his level best to carry them out without much reference to what is going on about him...37

It seems evident that the theories described here are applicable to the modern commander, and that making decisions in the complex, dynamic system that characterizes warfare is no simple matter for him or her. Our cognitive abilities do not seem robust enough to deal directly with high-level complexity on their own, and require coping strategies in the form of pattern matching and decision-making schemes. This makes evolutionary sense, since complexity often self-organizes in patterns. Armed with this understanding, we can now investigate what ‘enablers’ to decision making are available and how to integrate them into the commander’s world.


Even amidst the tumult and the clamour of battle, in all its confusion, he [the expert at battle] cannot be confused.

Sun Tzu38

‘Enablers’ – deriving from the insights gained by realizing that the military commander deals in an unpredictable, self-organizing complex environment – are presented in two broad categories: those related to the commander, and those that affect command within the organization. At the root of these insights are two principles. The first is that time is the scarce commodity: an organization has to be able to match the rate of change in its environment. The second is that people are the key asset, the adaptive element of any organization. Learning and innovation come only from human cognition.39

The first insight related to commanders has to do with their comfort level in a chaotic environment. We have already noted that capacity to tolerate uncertainty is a better predictor of success than straight cognitive ability. Being comfortable with chaos permits the commander to profit from it rather than waste energy fighting it. Selecting people who are at ease with chaos and nurturing that talent would therefore be advantageous to armed forces. Since this ability does not appear to come naturally, we must learn to cultivate it.40 Perhaps there is more to the German tongue-in-cheek adage about the use of intelligent but lazy officers as commanders than we have thought.41

The second insight related to successful decision-making in individuals, and therefore applicable to the military commander, is what is referred to as operative intelligence or “metacognition.”42 In dealing with complex problems we handle different situations in different ways. By understanding their own cognitive limitations, experts can choose problem-solving strategies that maximize their strengths and minimize their weaknesses. For example, our poor ability to deal with variable patterns in time was noted earlier. It appears from experimentation that using graphs to convert ‘time’ into ‘space’ helps people comprehend temporal configurations.43 An understanding of our limitations thus allows us to devise strategies to deal with them. Dörner tells us that experience is the main supplier of this operative intelligence:


Geniuses are geniuses by birth, whereas the wise gain their wisdom through experience. And it seems to me that the ability to deal with problems in the most appropriate way is the hallmark of wisdom rather than genius.44

Clausewitz describes the remedy to his “friction of war” in a similar fashion: “Is there any lubricant that will reduce this abrasion? Only one, and a commander and his army will not always have it readily: combat experience.”45 Direct experience is the most productive way of generating improvement in decision-making performance in complex environments.46 Considering the dearth of combat experience at the operational level in our armed forces, training becomes the vehicle of choice to acquire those habits that will make us better decision-makers.

The other insights related to the commander deal with how we prepare him or her for the job through the professional development triad of education, training and experience.

Concentrating solely on education for improving the decision-maker’s ability to deal with complex situations is likely to be of limited value. Dörner explains the results of such an approach:

[Education] gave them what I would call “verbal intelligence” in the field of solving complex problems. Equipped with lots of shiny new concepts, they were able to talk about their thinking, their actions, and the problems they were facing. This gain of eloquence left no mark at all on their performance, however. ... The ability to talk about something does not necessarily reflect an ability to deal with it in reality.47

Applying training to teach formal methods of analysis also proves to be a hindrance to rapid decision-making. Klein explains:

We do not make someone an expert through training in formal methods of analysis. Quite the contrary is true ... If the purpose is to train people in time- pressured decision making, we might require that the trainee make rapid responses rather than ponder all the implications.48

Because expertise depends on perceptual skills, and perceptual learning develops only with repetitive practice, you rarely get someone to jump a skill level by teaching more facts and rules. Perceptual learning grows with the accumulation of experience, and frequent practice is the sole reliable contributor to improved decision-making in complex environments.

Short of actual combat, the best approach to replicate the required experience is a robust simulation programme replete with exercises and realistic scenarios where ‘teaching’ takes a back seat to ‘practice’, allowing a person the opportunity to size up a number of situations very quickly. A good simulation can sometimes provide more training value than direct experience since it lets the action be stopped and even backed up to see what went on, and many trials can be conducted together so a person can develop a sense of typicality.49

The next insight – maximizing the value of each experiential event – reinforces the importance of the After Action Review process as we know it. While teaching is of little value in developing the ability to make decisions in a complex environment, it appears that conscious self-reflection does make a difference. This foray into metacognition makes for better problem solvers. Furthermore, self-reflection is enhanced by the presence of an expert observer who, having witnessed how the participant planned and acted, and having noted his errors and their determinants, assists the participant in his reflection through carefully prepared follow-up sessions.50

Two related words of caution must be given on these last two insights. First, the customary warning on the fidelity of the simulation used to replicate the reality of the experiential event: if the patterns in the simulation do not match the ones in reality, the subject develops the wrong intuition. The second caution, more sombre, indicates that, despite our best efforts at simulation, we may never truly develop expertise in our subject area. According to Klein:

We will not build up real expertise when: The domain is dynamic, we have to predict human behavior, we have less chance for feedback, the task does not have enough repetition to build a sense of typicality, [or] we have fewer trials. Under these conditions, we should be cautious about assuming that experience translates into expertise. In these sorts of domains, experience would give us smooth routines, showing that we had been doing the job for a while. Yet our expertise might not go much beyond these surface routines; we would not have a chance to develop reliable expertise.51

This might explain why peacetime generals often get sacked at the beginning of a war: they have acquired experience, but have had no chance to develop expertise.

If we are condemned to collecting experience in our field rather than expertise, the best advice for preparing for war may be that offered by Mandeles: “In war a commander needs a set of organizations that will learn while they execute their missions. What those organizations can practice in peacetime is not so much precisely what to do in war, but how to learn quickly what to do.”52 Thus our attention can now turn to the insights that deal with command within the organization. Here we will focus on the ‘thinking apparatus’ of the commander’s Force, or, more precisely, on the way in which planning is carried out, decisions are made or delegated and intent is communicated.

Haut de la page


Canadian Forces Combat Camera photo ISD02-3014 by Sergeant Garry Pilote

The Human Element of Command: a 3 PPCLI company commander briefing his troops on a patrol mission in Afghanistan.

Our first realization is that the concept of metacognition can be applied to the apparatus of the Joint Force itself to create the right groupings, structure and information flow to maximize its strengths and minimize its weaknesses. When it comes to planning, we remarked earlier on the similarity between the decision-making strategy proposed by Dörner and the military Estimate of the Situation, and noted that, although this serves us well in unfamiliar complex situations, it is slow and cumbersome and difficult to apply under extreme time pressure. Klein's RPD model, on the other hand, exploits the experience of the decision maker to produce rapid reaction, but is limited to relatively familiar situations. In the Operational Planning Process, the symbiosis between the two is meant to be the commander's planning guidance, where, having sized up the situation, he decides on the goal and directs the possible courses of action to be investigated. The staff then attempts to produce the plan that would make the preferred course work. This takes advantage of the pattern-recognition skills of the most experienced member of the force, the commander, and channels the energy of the staff's brainpower to dealing with the complexity that each course represents.

Unfortunately, the reality often differs from the theory. Commanders tend to let planners come up with possible courses of action and then, realizing that these do not match their understanding of the situation, incrementally adjust them to their liking during the information and decision briefings.53 This may be a reflection of the teaching method in our Staff Colleges, where Directing Staff, acting as commanders, ask their student planners to come up with the commander's planning guidance, ostensibly to give them a better opportunity to read into the problem. Delegating mission analysis and identification of possible courses of action to the staff is the wrong approach: not only does it waste the time and cognitive energy of the staff, it marginalizes the expertise of the commander who ultimately makes the decision.

Turning our attention to decision-making and delegating, i.e., command philosophy, we have been slow in the West to recognize that complexity demands a ‘mission-command’ approach. Even now, whenever technology floats the mirage of complete visibility of the battlespace, we let ourselves be tempted by the allure of more control. Unless the complete visibility promised also includes complete structural information (and it cannot),54 mission-command remains the only viable alternative. Using Dörner's words again:

In very complex and quickly changing situations the most reasonable strategy is to plan only in rough outline and to delegate as many decisions as possible to subordinates. These subordinates will need considerable independence and a thorough understanding of the overall plan.55

Having sized up the situation, having planned a response, and having made or delegated the decision, the next task for the ‘thinking apparatus’ of the Force is to communicate that intent to those who will implement it. The first ‘enabler’ here, familiar to all military organizations, is team building. Working with people who understand the culture, the task and what the commander is trying to accomplish allows them to ‘read his mind’ and fill in the unspecified details.56

With implicit intent thus established, we need to deal with explicit intent. The notion of telling subordinates not only what to do, but also why they must do it, is again a relatively new concept in the Anglo-Saxon military heritage. The primary function of communicating intent is to allow better improvisation. Once we accept that in a complex system we cannot think of all the contingencies in advance, and that we have to resort to mission-command, giving the reasoning behind a task will allow subordinates to be creative. They will adjust to the battlefield conditions that the higher-level headquarters cannot know about. They will recognize unforeseen opportunities and find ways of adapting appropriately when the plan runs into trouble. Explicit intent should be clear enough for subordinates to set and revise priorities, to decide when to grasp an opportunity and when to let it go.57

Some of the insights presented above, gained from the recognition that the military commander deals, within bounds, in an unpredictable, self-organizing complex environment, have already been adopted by Western armed forces. The philosophy of mission-command, the construct of the Operational Planning Process, the idea of communicating intent and the widespread use of After Action Reports – all are indicators that we have come to realize that we live with complexity. On the other hand, commanders comfortable with chaos and the concept of metacognition have yet to become ingrained in our culture. Frequent exposure, through simulation, to the realities of decision-making in complex environments, rather than training in formal methods of analysis, would go a long way toward fostering that trend. The application of these insights will save precious time in the decision-action cycle, and focussing on people rather than technology to tackle the challenges of chaos will further efforts to become a true learning organization.


War – the environment in which a commander operates – is a complex system where knowing the physical component of the situation, i.e., Battlespace Visibility, is only part of the solution. Interactions, even deterministic ones, make the process of fighting a war uncertain in a fundamental way. Considering our cognitive limitations, making decisions in the complex dynamic systems that characterize warfare is no simple matter. Coping strategies in the form of pattern matching and decision-making schemes are required to make sense of the complexity. Some of these strategies, and ‘enablers’ such as the philosophy of mission-command, have already been adopted by Western military forces. Others, like the wholesale acceptance of Naturalistic Decision Making methods, have yet to make inroads. Frequent exposure to the realities of decision-making in complex environments, through simulation or otherwise, needs to figure more prominently in our professional development of commanders.


Selection and development are two complementary approaches we can take to ensure that our commanders thrive on chaos rather than fight it. When it comes to selection, we already expend significant resources testing the cognitive ability and motor skill potential of candidates for specific branches of the Service. Testing for comfort level with chaos, or the aptitude to develop it, would improve our selection of candidates for operational branches that are likely to produce commanders. Then, having ascertained an individual’s capacity to tolerate chaos, adapting our professional development triad to actually develop it would be the next step. In that regard, experience that engenders expertise needs to figure prominently. Deployed operational experience, instrumented field exercises and computer-simulated exercises, all supported by comprehensive After Action Reviews, are the activities of significant value here. In the same vein, the focus on deliberate planning that characterizes our Staff Colleges has to be counterbalanced by the incorporation of Naturalistic Decision Making methods in the curriculum. It is not enough to learn how to make plans. Learning to make time-sensitive decisions during the execution of a plan must also be taught and practised if our military education institutions are to be worthy of the title “Command” and Staff Colleges.

Battlespace Visibility, even when it is perfect, deals with the current point in time and is therefore only a small part of the picture the commander needs to consider to make decisions within the complex system of war. Furthermore, its near-perfect quality – which gives the impression of clarity and finality – may lead a commander to concentrate on the space configuration of the situation rather than the more difficult temporal one. So, even with perfect Battlespace Visibility, commanders need to step back and reflect, exploit their intuition, mental simulation and other sources of power to truly appreciate the situation and arrive at decisions based on variables far more subtle than what can be captured on a computer screen.

Near-perfect Battlespace Visibility is a significant step forward for the information gatherers and managers in the headquarters, i.e., the staff. For the commander, however, it will not reduce the chaotic nature of the environment. It has very limited utility in reducing the difficulty of decision making.


Colonel Christian Rousseau is Commander 5 Area Support Group in Montreal.1


  1. This article is an abridged version of a paper submitted as part of the course requirement for the Advanced Military Studies Course 5, Canadian Forces College, Toronto.
  2. Defined as the ability for the commander to see, on demand, everything in the Battlespace.
  3. Robert K. Ackerman, “Operation Enduring Freedom Redefines Warfare” in Signal Tribute: The Fight for Freedom. Signal Magazine, Vol. 57, No. 1 (September 2002), p. 3.
  4. Martin van Creveld, Command in War (Cambridge, MA: Harvard University Press, 1985), pp. 265-266.
  5. Carl von Clausewitz, On War. Michael Howard and Peter Paret eds. and trans. (Toronto: Random House of Canada Limited, 1993), p. 138.
  6. The terms Complexity, Chaos and Non-linearity are used in this article in their mathematical sense rather than their day-to-day meaning of complicated, unorganized and non-contiguous. See also the following glossary for specific definitions:

    Chaos: effectively unpredictable long time behaviour arising in a deterministic dynamical system because of sensitivity to initial conditions.

    Complexity: Complex systems are non-linear systems characterized by collective properties associated with the system as a whole that are different from the characteristic behaviours of the constituent parts.

    Deterministic: Dynamical systems are “deterministic” if there is a unique consequent to every state.

    Non-linear: In algebra, we define linearity in terms of functions that have the property ƒ(x+y) = ƒ(x)+ƒ(y) and ƒ(ax) = aƒ(x). In other words, linearity implies that changes in system output are proportional to changes in input, and that the output corresponding to the sum of two inputs is equal to the sum of the outputs arising from the individual inputs. Non-linear is defined as the negation of linear: the output ƒ(x) may be out of proportion to the input x.

    Pattern-forming self-organization: Systems where structure appears without explicit pressure or involvement from outside the system.

    Stochastic or random: Systems where there is more than one consequent to every state chosen from some probability distribution (the “perfect” coin toss has two consequents with equal probability for each initial state).
  7. University of Maryland web site <http://www-chaos.umd.edu/>
  8. Robert Jervis, “Complex Systems: The Role of Interactions”, in Complexity, Global Politics, and National Security, David S. Alberts and Thomas J. Czerwinski, eds. (Washington: National Defense University, 1997), pp. 46-48.
  9. While recognizing that this is not a universal quality of complexity, in the instances we deal with in this paper, chaotic behaviour is a precursor to complexity. Therefore from this point on, to lighten the text, the term “complex” is used to mean “chaotic and complex”.
  10. For a discussion on the deterministic aspects of the technological elements of war see Martin Van Creveld, Technology and War: From 2000 B.C. to the Present (New York: The Free Press, 1991), pp. 314-315.
  11. Clausewitz, pp. 138-139.
  12. Count Helmuth Karl Bernard von Moltke, cited in Peter G. Tsouras, Warrior’s Words a Quotation Book: From Sesostris III to Schwarzkopf 1871 BC to AD 1991 (London: Arms and Armour Press, 1992), p. 61.
  13. John F. Schmitt, “Command and (out of) Control: The Military Implication of Complexity Theory” in Complexity, Global Politics, and National Security, David S. Alberts and Thomas J. Czerwinski, eds. (Washington: National Defense University, 1997), p. 232.
  14. Van Creveld (1991), p. 314.
  15. Schmitt, pp. 236-237.
  16. Clausewitz, p. 137.
  17. Dietrich Dörner, The Logic of Failure: Recognizing and Avoiding Error in Complex Situations (New York: Metropolitan Books, 1996), pp. 5-6.
  18. Ibid., p. 190.
  19. Ibid., pp. 107-152.
  20. Ibid., p. 30.
  21. Ibid., p. 18. Uncertainty is defined as “doubt that threatens to block action.” Gary Klein, Sources of Power: How People Make Decisions (Cambridge, MA: The MIT Press, 1999), pp. 276-277.
  22. Dörner, pp. 87-88.
  23. Klein, p. 66.
  24. Dörner, p. 27.
  25. Ibid., p. 41.
  26. Dörner calls “intuition” the totality of implicit assumptions in an individual’s mind – assumptions about the simple or complex links and the one-way or reciprocal influences between variables. Ibid., p. 41. See also Klein, p. 31.
  27. Ibid., pp. 43-46.
  28. Because this is a quotation, in this case the term “complexity” is not used strictly in its mathematical sense. Dörner uses it to mean “the difficulty for the human mind to deal with a given level of complexity”.
  29. Dörner, p. 39.
  30. Klein, p. 17.
  31. Ibid., pp. 24-26, 35.
  32. Ibid., pp. 52-53.
  33. Clausewitz, p. 118.
  34. Examples that easily come to mind are the Iraqis in the Gulf War for lack of applicable experience, the Canadians at Dieppe for lack of information, and the failure to foresee the attack on the World Trade Center as a de minimus error.
  35. “Failure to learn, failure to anticipate, and failure to adapt.” See Eliot A. Cohen and John Gooch, Military Misfortunes: The Anatomy of Failure in War (New York, NY: The Free Press, 1990), p. 26.
  36. Timothy L. Thomas, “Kosovo and the Current Myth of Information Superiority,” Parameters 30, No. 1 (Spring 2000), p. 14.
  37. Lloyd Lewis, Sherman: Fighting Prophet (New York: Harcourt Brace, 1932), p. 424. Originally quoted in Cohen, p. 244.
  38. Sun Tzu, The Art of Warfare. Roger Ames trans. (Toronto: Random House of Canada Limited, 1993) p. 120.
  39. Robert R. Maxfield, “Complexity and Organization Management” in Complexity, Global Politics, and National Security, David S. Alberts and Thomas J. Czerwinski, eds. (Washington: National Defense University, 1997), pp. 183-184.
  40. Dörner, p. 42.
  41. The classification of officers based on their cleverness and industriousness is attributed to General Kurt von Hammerstein Equord, circa 1933. Cited in Tsouras, p. 297.
  42. Term coined by Klein, meaning “thinking about thinking” or seeing inside your own thought process.
  43. Dörner, p. 143.
  44. Dörner, p. 193.
  45. Clausewitz, p. 141.
  46. Klein, p. 42.
  47. Dörner, p. 196.
  48. Klein, p. 30.
  49. Ibid., pp. 42-43.
  50. Dörner, pp. 195-199.
  51. Klein, p. 282.
  52. Mark D. Mandeles, et al., Managing Command and Control in the Persian Gulf War (Westport, CT: Praeger, 1996), p. 6.
  53. General Schwarzkopf’s first version of the plan for a ground attack in the Gulf is the typical example. Norman H. Schwarzkopf, It Doesn’t Take a Hero (New York: Bantam Books, 1992), pp. 354 and 362.
  54. No matter how precisely one measures the initial condition in complex systems, prediction of its subsequent state goes radically wrong after a short time. Typically, the predictability horizon grows only logarithmically with the precision of measurement. Thus for each increase in precision by a factor of 10, say, you may only be able to predict two more time units. See the Frequently Asked Question section of the University of Colorado Web site at <http://amath.colorado.edu/faculty/jdm/faq-[2].html>.
  55. Dörner, p. 161.
  56. Klein, p. 219. Pigeau and McCann refer to this as implicit intent. See Ross Pigeau and Carol McCann, Re-Defining Command and Control (Toronto: Defence and Civil Institute of Environmental Medicine, 1998), pp. 5-6.
  57. Klein, pp. 223-225.
