A Cognitive Load Theory Approach to Defining and Measuring Task Complexity Through Element Interactivity

  • Review Article
  • Open access
  • Published: 02 June 2023
  • Volume 35, article number 63 (2023)


  • Ouhao Chen 1,
  • Fred Paas 2,3 &
  • John Sweller 4


Educational researchers have been confronted with a multitude of definitions of task complexity and a lack of consensus on how to measure it. Using a cognitive load theory-based perspective, we argue that the task complexity that learners experience is based on element interactivity. Element interactivity can be determined by simultaneously considering the structure of the information being processed and the knowledge held in long-term memory of the person processing the information. Although the structure of information in a learning task can easily be quantified by counting the number of interacting information elements, knowledge held in long-term memory can only be estimated using teacher judgment or knowledge tests. In this paper, we describe the different perspectives on task complexity and present some concrete examples from cognitive load research on how to estimate the levels of element interactivity determining intrinsic and extraneous cognitive load. The theoretical and practical implications of the cognitive load perspective of task complexity for instructional design are discussed.


Introduction

Task complexity is a major factor influencing human performance and behaviour. In cognitive load theory (Sweller et al., 1998, 2019), task complexity is measured by counting the number of interactive elements in the learning materials. An element is defined as anything that needs to be processed and learned. Because of the nature of interactivity, elements that interact cannot be processed and learned in isolation but must be processed simultaneously if they are to be understood. The number of elements that must be simultaneously processed in this manner determines cognitive load. Task complexity, based on element interactivity, is fundamental to cognitive load theory, determining all cognitive load effects (Sweller, 2010). Apart from cognitive load theory, there have been many other models used to measure task complexity. In this paper, we will indicate those other models and compare them with the use of levels of element interactivity to determine complexity.

Defining Task Complexity

The first point to note is that task complexity and task difficulty are different but sometimes are treated as being interchangeable (e.g., Campbell & Ilgen, 1976 ; Earley, 1985 ; Huber, 1985 ; Taylor, 1981 ). As detailed in the subsequent discussion on element interactivity, we will distinguish these concepts as separate entities (Locke et al., 1981 ). Some tasks may be difficult due to the sheer volume of individual elements involved, yet they may not be complex if these elements do not interact with each other. Conversely, tasks involving fewer but interactive elements can be both difficult and complex.

Despite many attempts, there are no widely accepted formulae for determining complexity. Liu and Li ( 2012 ) identified 24 distinct definitions (see Table 1 ), drawn from a broad array of publications. However, these definitions were employed to measure tasks outside of an educational context. While this number suggests an area in chaos, broadly the measures of complexity can be divided into objective measures that only take the characteristics of the information into account (i.e., structuralist, resource requirement) and subjective complexity that also considers the characteristics of the person dealing with that information (i.e., interaction).

An example of an objective measure was provided by Wood ( 1986 ) who considered the number of distinct acts used to perform a task, the number of distinct information cues that must be processed in performing those acts, the relations among acts, information cues and products, and the external factors influencing the relations between acts, information cues and products. Campbell’s ( 1988 ) model included the number of potential ways of reaching a desired outcome, the number of desired outcomes, the number of conflicting outcomes, and the clarity of the connections.

Subjective measures include objective measures of complexity along with the subjective consequences for the person processing the information. Some tasks are resource intensive, where the resources may be visual (McCracken & Aldrich, 1984), knowledge-based (Gill, 1996; Kieras & Polson, 1985), time-based (Nembhard & Osothsilp, 2002), or effort-based (Bedny et al., 2012; Chu & Spires, 2000). Other subjective measures emphasise particular interactions between the task and the person. Gonzalez et al. (2005) defined task complexity as the interaction between the task and learners’ characteristics such as experience and prior knowledge, while Funke (2010) explained complexity only in terms of the number of task components that the person sees as relevant to a solution. Gill and Murphy (2011) suggested that task complexity is a construct showing how task characteristics influence the cognitive demands imposed on learners. Halford et al. (1998) focused on the relational complexity of information. For example, processing “restaurant” is a single factor, but choosing a restaurant may depend on the “money” you have, so the single factor becomes a binary relation (money and restaurant).

While purely objective measures are far easier to precisely define, indeed to the point where they can be defined by formulae, their very objectivity constitutes a flaw. We know that information that is highly complex for one person may be very simple for another, leading directly to the subjective view of complexity. Unfortunately, the subjective view seems to be no better, with a multitude of definitions and no consensus. Of the 24 definitions, only two reference learners’ prior knowledge, and it remains unclear how variations in this prior knowledge influence task complexity. At least part of the reason for that lack of consensus is that there is an abundance of personal characteristics that could be relevant to measuring complexity.

If complexity is determined by an interaction between task characteristics, which can be objectively determined, and person characteristics, it is essential to determine the relevant person characteristics. Our knowledge of human cognitive architecture can be used for this purpose. Because of the critical importance of the person when determining task complexity, cognitive load theory has attempted to connect task complexity to the constructs and functions of human cognitive architecture.

Cognitive Load Theory and Human Cognitive Architecture

Human cognitive architecture provides a base for cognitive load theory and in turn, that base may be central to any attempt to determine complexity. The basic outline of human cognitive architecture as used by cognitive load theory is as follows.

Information may be categorised as either biologically primary or secondary (Geary, 2005 , 2008 , 2012 ; Geary & Berch, 2016 ). We have evolved over countless generations to acquire primary information such as learning how to listen and speak a native language. While primary information may be exceptionally complex in terms of information content, we tend not to see it as complex because we have evolved to acquire it easily, automatically, and unconsciously.

Secondary information is acquired for cultural reasons. Despite frequently being less complex than primary information, it is usually much more difficult to acquire, requiring conscious effort. Education and training institutions were developed to assist in the acquisition of biologically secondary information and cognitive load theory is similarly concerned.

Novel, biologically secondary information can be acquired either through problem solving or from other people. We have evolved to acquire information via either route, i.e., both routes are biologically primary, although it is far easier to obtain information from others than to generate it oneself via problem solving. In that sense, the same information obtained from others is likely to be seen as less complex than generating it ourselves during problem solving.

Irrespective of how it is acquired, information must be processed by a working memory that has very limited capacity (Cowan, 2001; Miller, 1956) and duration (Peterson & Peterson, 1959). Once processed in that limited capacity and duration working memory, information is transferred to long-term memory for storage and later use. On receipt of a suitable external signal, relevant stored information can be retrieved from long-term memory back to working memory to generate suitable action. The limits of working memory that apply to novel information do not apply to information that has been organised and stored in long-term memory before being transferred back to working memory to govern action (Ericsson & Kintsch, 1995).

Consideration of this cognitive architecture is essential when determining levels of complexity. The same information can be complex for novices but simple for experts. For the expert readers of this paper, the written word “interactivity” is simple. When seen, it can be almost instantly retrieved from long-term memory as a single, simple element which does not overwhelm working memory. For anyone learning the written English alphabet and language for the first time, the squiggles on the page that make up the word “interactivity” may be complex and impossible to reproduce accurately from memory. Accordingly, complexity must be an amalgam of both the nature of the material and the knowledge of the person processing the material. Both must be considered simultaneously.

Element Interactivity

Element interactivity is a cornerstone of cognitive load theory (Sweller, 2010 ) and is typically applied to biologically secondary information. An element is defined as a concept or procedure that needs to be learned, which can be decomposed based on a learner’s level of expertise. For example, a novice English learner might perceive the English word “Dog” as consisting of three distinct elements (D, O, G), while a more experienced learner might comprehend it as a single element due to their knowledge held in long-term memory. Interactivity refers to the degree of intrinsic connection between multiple elements, necessitating their simultaneous processing in working memory for comprehension. Conversely, non-interacting elements can be processed individually and separately without reference to each other. The degree of interaction between elements determines element interactivity. High element interactivity, or complexity, occurs when more elements than can be handled by working memory must be processed concurrently. This concept of element interactivity, deeply rooted in human cognitive architecture, underpins both intrinsic and extraneous cognitive loads. These two types of cognitive load correspond to different categories of element interactivity. Intrinsic cognitive load is related to the inherent complexity of the information, while extraneous cognitive load is associated with the manner in which information is presented or taught.

Element Interactivity and Intrinsic Cognitive Load

Intrinsic cognitive load reflects the natural complexity of the learning materials (Sweller, 1994 , 2010 ; Sweller & Chandler, 1994 ). For a given learner and given learning materials, the level of intrinsic cognitive load is constant. Intrinsic cognitive load can be altered by changing the learning materials or knowledge held in long-term memory (Sweller, 2010 ).

Memorising a list of names, such as the names of elements in the chemical Periodic Table or the translation of nouns from one natural language to another, is low in element interactivity, as learners can memorise those entities separately and individually without referring to each other. Therefore, when memorising that Na is the symbol for sodium, learners do not need to refer to the fact that Cl is the symbol for chlorine, which results in only one entity being processed in working memory at a given time, imposing a low level of intrinsic cognitive load.

Compared to memorising a list, learning to solve a linear equation, such as 3x = 9 (solve for x), is higher in element interactivity. Learners cannot process 3x = 9, 3x/3 = 9/3, and x = 3 separately and individually with understanding. These 15 symbols (i.e., elements) must be processed simultaneously in working memory, which imposes a high level of intrinsic cognitive load.
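For readers who want to see how such a count is reached, the tally below reproduces the figure of 15 symbols under the simple assumption that every written symbol in the three solution lines is treated as a separate element for a novice. The tokenisation is ours and is intended only as an illustrative sketch, not a counting procedure prescribed by cognitive load theory.

```python
# Illustrative tally only: reproduces the count of 15 interacting elements
# (symbols) in the three solution lines of 3x = 9, treating every written
# symbol as a separate element for a novice. The tokenisation is our own
# assumption, made purely for illustration.
solution_lines = [
    ["3", "x", "=", "9"],                      # 3x = 9      -> 4 elements
    ["3", "x", "/", "3", "=", "9", "/", "3"],  # 3x/3 = 9/3  -> 8 elements
    ["x", "=", "3"],                           # x = 3       -> 3 elements
]
total = sum(len(line) for line in solution_lines)
print(total)  # 15
```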

Element Interactivity and Extraneous Cognitive Load

Extraneous cognitive load is imposed because of suboptimal instructional designs, which do not adequately facilitate learning (Sweller, 2010 ). Cognitive load theory has been used to devise many instructional techniques to reduce extraneous cognitive load. Those instructional procedures are effective because they reduce unnecessary levels of element interactivity.

The worked example effect will be used to demonstrate how different levels of element interactivity can be used to explain extraneous cognitive load. The effect suggests that asking novices to study examples improves learning more than having them solve the equivalent problems (Sweller & Cooper, 1985 ). Novices solve problems by generating steps using means-ends analysis (Newell & Simon, 1972 ), which involves considering the current problem state and the goal state, and randomly searching for problem solving operators that will reduce differences between the two states. As indicated below, many moves may need to be considered before a suitable sequence of moves is found, a process that involves a considerable number of interacting elements thus imposing a high working memory load. However, when learning with examples, learners only need to focus on a limited number of moves as the correct solution is provided. The reduction in element interactivity reduces extraneous cognitive load compared to problem solving. Similar explanations are available for other cognitive load effects (Sweller, 2010 ). Appropriate instructional procedures reduce element interactivity and so reduce complexity.

Task Complexity and Knowledge Complexity

The distinction between task and knowledge complexity is made by distinguishing between intrinsic and extraneous cognitive load. Intrinsic cognitive load covers what needs to be known (knowledge complexity) while extraneous cognitive load covers how it will be presented (task complexity). Thus, knowledge complexity, or what needs to be known (intrinsic cognitive load), does not change when problems are presented in different ways, such as presenting them as goal-free problems, but task complexity (extraneous cognitive load) does change.

Expertise, Strategy Use, and Element Interactivity

The levels of the different types of cognitive load are influenced by element interactivity, that is, the number of interacting elements present in learning materials and instructional procedures. Importantly, element interactivity is directly linked to a learner’s level of expertise (Chen et al., 2017 ). As such, by varying the learners’ expertise, one can indirectly modify the levels of cognitive load, through the alteration of element interactivity levels. Consider, for instance, the case of intrinsic cognitive load. Solving an equation such as 3 x = 9 (solve for x ) would involve processing 15 interactive elements simultaneously for a novice learner, placing a high demand on their working memory. However, for a knowledgeable learner, relevant knowledge can be retrieved from long-term memory, thereby reducing the number of interactive elements (i.e., lowering the level of element interactivity) and, consequently, the intrinsic cognitive load associated with the task.

With respect to extraneous cognitive load, consider the worked example effect applied to the same problem. Novices learning through problem solving must simultaneously consider the initial problem state, the goal state, and operators to convert the initial state into the goal state, generating moves through trial-and-error. This process of trial and error involves a large number of interacting elements to be processed in working memory. In contrast, by studying a worked example, all of these elements are provided in a single package that demonstrates exactly how the various elements interact thus eliminating the element interactivity associated with trial and error. However, for more knowledgeable learners, by retrieving relevant knowledge from long-term memory when problem solving, the level of element interactivity associated with trial and error is reduced or eliminated, reducing or eliminating the advantage of worked examples and so decreasing the extraneous cognitive load associated with the task. The result is the elimination or even reversal of the worked example effect (Chen et al., 2017 ).

The extraneous cognitive load associated with solving a problem will also be determined by the problem-solving strategies used. Different strategies for solving the same problem can result in different levels of element interactivity. For example, Ngu et al. (2018) found that students taught to solve algebra transformation problems using the commonly taught “balance” strategy used in this paper (e.g., applying − x to both sides of an equation) had more difficulty solving the problems than students taught to use the “inverse” strategy (move x from one side of an equation to the other and reverse its sign). The balance strategy requires the manipulation of more elements than the inverse strategy and so imposes a higher cognitive load. Another strategy that can be used to reduce cognitive load is to transfer some of the interacting elements into written form, reducing the number of elements that must be held in working memory (Cary & Carlson, 1999).

The effect of expertise levels on element interactivity and extraneous cognitive load can be seen when considering how learners semantically code problems, an example of strategy use. As indicated by Gros et al. ( 2020 , 2021 ), learners with varying expertise levels may semantically code the same problem differently. For instance, novices might encode irrelevant features embedded in the problem statement, which can obstruct learning and problem-solving by imposing an extraneous cognitive load. This occurs because novices process these extraneous elements in their working memory, thereby increasing the levels of element interactivity. In contrast, knowledgeable learners, with more domain-specific knowledge, can more efficiently code problems for learning and solving. They can concentrate solely on elements beneficial for problem-solving and their intrinsic relationships, which reduces both the levels of element interactivity and extraneous cognitive load. Therefore, when measuring element interactivity of learning materials and instructional procedures for cognitive load, it is essential to consider the learners’ expertise levels. This issue is discussed in more detail when considering the measurement of element interactivity below.

The Distinction Between Element Interactivity and Task Difficulty

Element interactivity and task difficulty are different concepts. There are two reasons information might be difficult. First, many elements may need to be learned even though they do not interact. For instance, learning the symbols of the periodic table in chemistry is a difficult, but not a complex task. Although it is difficult, the element interactivity is low because learners can study the symbols individually and separately. Consequently, this task does not impose a heavy working memory load.

Second, fewer elements may need to be learned but the elements interact imposing a heavy working memory load. Such tasks also are difficult, but they are difficult for a different reason. Their difficulty does not stem from the large number of elements with which the learner must deal, but rather the fact that the elements need to be dealt with simultaneously by our limited working memory. The total number of elements may be relatively small, but the interactivity of the elements renders the tasks difficult.

Of course, some tasks may include many elements that need to be learned and in addition, the elements interact. Such tasks are exceptionally difficult and are likely to only be learnable by initially engaging in rote learning before subsequently combining the rote learned elements. In effect, the interacting elements of the task are initially treated as though they do not interact. It may only be possible to learn such material by first learning the individual elements by rote before learning how they interact in combination. Understanding occurs once the interacting elements can be combined into a single entity.

When interacting elements have been combined into a single element and stored in long-term memory, information that previously was high in element interactivity is transformed into low element interactivity information. We determine element interactivity by determining what constitutes an element for our students given their prior knowledge. That element may constitute many interacting elements for less knowledgeable students. To measure the element interactivity of information being presented to students, we must determine the number of new interacting elements that our students need to process.

Measuring Element Interactivity

It follows from the above that what constitutes an element is an amalgam of the structure of the information and the knowledge of the learner. Accordingly, to measure element interactivity as defined above and applied to biologically secondary information, we need to simultaneously consider the structure of the information being processed and the knowledge held in long-term memory of the person processing the information. The only metric currently available to accomplish this aim is to count the number of assumed interacting elements. The accuracy of determining the number of elements depends heavily on the precise measurement of knowledge stored in a learner’s long-term memory. As previously discussed, variations in a learner’s level of expertise influence the levels of element interactivity. While precise measures are unavailable, there are usable estimates. Those estimates depend on instructors being aware of the knowledge levels of the students for which the instruction is intended and knowing the characteristics of the information being processed under different instructional procedures.
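The logic of counting only those interacting elements that are new to a particular learner can be sketched informally. The function below is our own illustration (the names and the chunk representation are hypothetical, not part of cognitive load theory); it assumes that any group of interacting elements already stored as a single chunk in long-term memory is processed as one element.

```python
# Minimal sketch (ours, not from the article): estimate how many elements a
# particular learner must process simultaneously, assuming that any group of
# interacting elements already held as a single chunk in long-term memory is
# processed as one element. Names and data structures are hypothetical.
def effective_element_count(interacting_elements, chunks_in_long_term_memory):
    remaining = set(interacting_elements)
    count = 0
    for chunk in chunks_in_long_term_memory:
        if set(chunk) <= remaining:   # the learner holds this group as one unit
            remaining -= set(chunk)
            count += 1                # the whole chunk counts as one element
    return count + len(remaining)     # plus every element that is still novel

# A novice with no relevant chunks processes all 28 elements of the a/b = c
# solution separately; an expert holding the full solution as one chunk
# effectively processes a single element.
elements = [f"e{i}" for i in range(1, 29)]
print(effective_element_count(elements, []))          # 28 (novice)
print(effective_element_count(elements, [elements]))  # 1  (expert)
```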

It is important to note that the chosen method for measuring prior knowledge should align with the research goals and the nature of the subject matter. Various methods have been developed to measure prior knowledge, each with its own strengths and limitations. Traditional assessments often involve pre-tests or diagnostic tests, which provide a quantitative measure of what the learner already knows about a specific topic (Tobias & Everson,  2002 ). In addition to these, concept mapping, as proposed by Novak ( 1990 ), is another tool used to visually represent a learner’s understanding and knowledge structure of a specific topic. This method can capture more nuanced aspects of prior knowledge, such as the relationships between concepts. Furthermore, self-assessment methods, where learners rate their own understanding of a topic, can offer valuable insights, though they may be subject to biases (Boud & Falchikov, 1989 ). Lastly, interviews and discussions provide a qualitative approach to understanding a learner’s prior knowledge, offering depth and context that other methods might miss (Mason, 2002 ). However, these methods can be time-consuming and require careful interpretation.

In this section, we will provide some concrete examples from cognitive load theory–based experiments on how to estimate the levels of element interactivity determining intrinsic and extraneous cognitive load.

Intrinsic Cognitive Load

Estimating the level of element interactivity for intrinsic cognitive load has been demonstrated in many cognitive load theory experiments and this a priori analysis technique reflects a relatively mature process. The first attempt to measure element interactivity contributing to intrinsic cognitive load was provided by Sweller and Chandler (1994). In four experiments, they studied the effects of learning to use a variety of computer programs. They found that the two extraneous cognitive load effects that they were investigating, the split-attention and redundancy effects, could be readily obtained with large effect sizes for information that had a high level of element interactivity associated with intrinsic cognitive load but disappeared entirely for information for which the element interactivity associated with intrinsic cognitive load was low. They estimated element interactivity simply by counting the number of interacting elements faced by novice students. The procedure can be demonstrated using the example of students learning to solve problems such as a/b = c (solve for a). The numbers in the following example refer to the increasing count of interacting elements.

The denominator b (7) can be removed (8) by multiplying the left side of the equation by b (9) resulting in ab/b (10). Since the left side has been multiplied by b, in order to make the equation equal (11), the right side (12) also must be multiplied by b (13), resulting in ab/b = cb (20). On the left side (21), the b (22) in the numerator (23) can cancel out (24) the b in the denominator (25) leaving “a” (26) isolated (27) on the left side. The equation a = cb solves the problem (28).

These 28 elements all interact. To understand this solution, at some point they all must be processed simultaneously in working memory. For a novice who is learning this procedure, the working memory load can be overwhelming. For an expert who holds the entire solution in long-term memory ready to be transferred to working memory, the equation and its solution are likely to impose an element interactivity count of only 1 (i.e., retrieving the knowledge stored in long-term memory as a single entity). Without taking human cognitive architecture into account, the complexity of the task cannot be calculated.

Over the years, there have been many demonstrations of the estimation of levels of element interactivity associated with intrinsic cognitive load. For example, in Chen et al.’s (2015) experiments in the domain of geometry, learners were asked to either memorise some mathematics formulae or calculate the area of a composite shape. The materials for memorisation were estimated as low in element interactivity. As an example, memorising the formula for the area of a parallelogram, the base × the height, involves 4 interacting elements for a novice: the area, the base, the multiplication relation, and the height. However, when novices need to calculate the area of a composite shape, element interactivity is higher than for memorising formulae.

Figure 1 provides an example. First, learners need to identify the rhombus, with its four equal-length sides, including the missing line FC (5). They then need to identify the trapezium, again with four lines including the missing line FC (10). Next, another two elements concern calculating the areas of both shapes (12). To calculate the two areas, the mathematical symbols involved in the different formulae must be processed, namely the meaning of a, b and the multiplication function in the rhombus formula, as well as the meaning of a, b, and h with addition, multiplication, and division by 2 for the trapezium formula (22). Finally, adding the two separate area values together involves another 3 elements, giving, in total, about 25 interacting elements.

Figure 1: Example of material used in Chen et al.’s (2015) study
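As an illustration only, the running tally below (ours) reproduces the cumulative count just described for the composite shape, using the increments reported in the text; the step labels are our paraphrase.

```python
# Running tally reproducing the cumulative element count for the composite
# shape in Figure 1 (step labels paraphrased; increments as reported above).
steps = [
    ("identify the rhombus and its four equal sides, incl. missing line FC", 5),
    ("identify the trapezium and its four sides, again incl. FC",            5),
    ("recognise that two separate areas must be calculated",                 2),
    ("process the symbols of the rhombus and trapezium area formulae",      10),
    ("add the two area values together",                                     3),
]
total = 0
for label, increment in steps:
    total += increment
    print(f"{label}: running count = {total}")
# final running count: about 25 interacting elements
```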

In contrast, Chen et al. (2016) suggested that memorising trigonometry formulae is low in element interactivity. For example, to memorise sin(A+B) = sin(A)cos(B) + cos(A)sin(B), there are many elements, but each element can be memorised independently of the others because they do not interact. Remembering or forgetting an element should not affect the memorisation of any of the other elements. Therefore, the element interactivity count is 1 despite the many elements. However, simplifying a trigonometry formula is high in element interactivity compared to memorising formulae.

Leahy and Sweller (2019) attempted to calculate the number of interacting elements for a non-STEM domain: teaching learners how to write puzzle poems following several rules (see Figure 2). The 1st element was reading the sentence “To write one you need to follow all these rules”; the 2nd element was “There must be 6 lines in the poem”; the 3rd and 4th elements were the arrow linking the 3rd rule with the underlined word; the 5th, 6th and 7th elements were the two arrows linking the 4th rule (1 element) with lines 1 and 3 (2 elements); similarly, the 8th, 9th and 10th elements were the two arrows linking the 5th rule with lines 4 and 6; the 11th and 12th elements were the next arrow linking the word “SELMAN” with the 6th rule; and the 13th and 14th elements were the last arrow linking the last rule and “the word is a theorist”. Based on this count, there were 14 interacting elements, suggesting that this material was high in element interactivity.

Figure 2: Example of material used in Leahy and Sweller’s (2019) study

Measuring element interactivity for intrinsic cognitive load can also be applied to expository text. For example, the material used by Chen et al. (under review) was a passage about Kleefstra syndrome:

“Kleefstra syndrome is a recently recognised rare genetic syndrome (1) caused by a deletion of the chromosomal region 9q34.3 (~50% of affected individuals) (2) or heterozygous pathogenic variant of the EHMT1 gene (~50% of affected individuals) (3). The clinical phenotype of Kleefstra syndrome (the group of clinical characteristics that more frequently occur in individuals with Kleefstra syndrome than those without Kleefstra syndrome) is well documented (3). Prominent characteristics include severe developmental delay/intellectual disability (4), lack of expressive speech (5), distinctive facial features (6) and motor difficulties (7). Regression of functioning is another hallmark trait of Kleefstra syndrome (8) causing individuals to lose previously obtained daily functioning as they age (9). Unlike the clinical phenotype, the behavioural phenotype of Kleefstra syndrome is not well established (10) although frequently reported behaviours include emotional outbursts (11), self-injurious behaviour (especially hand biting) (12) and ‘autistic-like’ behaviour (including avoidance of eye contact and stereotypies) (13). However, the presence and severity of these behaviours are inconsistently reported throughout past research (14) and contradictory behavioural observations are commonplace (15), for example, ‘autistic-like’ behaviour is reported in Kleefstra syndrome alongside partially intact social communication skills (16).” Element interactivity for this material was estimated by counting unit ideas and thought groups. There were at least 16 interacting elements for novices to learn, which renders the material high in element interactivity.
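Because the idea units in the quoted passage are already marked with cumulative numbers, the figure of at least 16 interacting elements can be checked mechanically. The snippet below is merely such a convenience check of the markers; segmenting a text into unit ideas and thought groups in the first place still requires human judgement.

```python
import re

# Convenience check only: read the highest numbered idea-unit marker "(n)"
# embedded in the Kleefstra passage quoted above. The passage string here is
# truncated for brevity; the full quoted text should be pasted in.
passage = """Kleefstra syndrome is a recently recognised rare genetic syndrome (1) ...
... alongside partially intact social communication skills (16)."""

markers = {int(m) for m in re.findall(r"\((\d+)\)", passage)}
print(max(markers))  # 16 -> at least 16 interacting idea units for a novice
```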

Extraneous Cognitive Load

While Sweller and Chandler (1994) occasionally mentioned that differences in extraneous cognitive load also are due to differences in element interactivity, they made no attempt to provide details. Sweller (2010) did provide such details but did not provide examples of how to calculate levels of element interactivity for extraneous cognitive load.

The worked example effect facilitates learning by reducing the number of interacting elements compared to problem solving. For example, a worked example indicating how to solve a/b = c (solve for a) requires students to study the fully worked solution described above (a/b = c; ab/b = cb; a = cb).

For a novice to fully understand this worked example requires the person to process the 28 elements indicated above, a procedure that far exceeds available working memory. For this reason, many novices studying algebra struggle even when presented with worked examples. Fortunately, there is an alternative that most students are likely to use. When studying this example, at any given time, novices only need to process in working memory as few elements as they wish. For example, they do not have to decide what the first move should be, by what process they should generate that move, or how that move may relate to the goal of the problem. The worked example provides all that information so that at any given time, they can focus just on a limited number of elements and ignore all the others.

In contrast, when novices engage in problem solving, there are limits to the extent to which they can concentrate on part of the problem while ignoring other parts. Not only do they need to consider all 28 elements; until they have fully solved the problem, they do not know whether the elements they are processing are the ones needed for a solution, and so they are likely to process some elements that lead to a dead end.

Novices are likely to use a means-ends strategy that requires them to consider the entire problem including where they are now, where they need to go and how to generate moves that might allow them to accomplish this end (Sweller et al., 1983 ). A means-ends strategy never guarantees that the shortest route to a solution is being followed.

Problem solvers will, of course, write down many of their steps in order to reduce their working memory load, but until they have reached the goal, they cannot know whether the steps they have written down are on the solution route. Whatever route they follow, including dead ends, they will need to process vastly more elements than students studying a worked example.

The split-attention effect suggests that physically integrating text with diagrams should enhance learning compared to mentally integrating them (Chandler & Sweller, 1992 ). Consider teaching the solution of the geometry problem of Figure 3 . The physically integrated format of Figure 3 b allows learners to focus on the text and the diagram without randomly searching and mapping each step of the split-attention format seen in Figure 3 a. For the integrated format, depending on knowledge levels, there may be only two elements, namely the two steps integrated with the diagram.

Figure 3: a Example of material used for the split-attention effect study (split-attention format). b Example of material used for the split-attention effect study (integrated format)

For the split-attention format, learners need to read the problem statement and search for the goal angle on the diagram (2 or more elements depending on how many angles are searched before the correct one is found); similarly, search for angles ABC, BAC, and ACB before combining them into the equation (4 elements); then reconfirm that angle DBE is the goal angle before noting it is equal to angle ABC (2 elements). There are likely to be 8 elements that need to be processed using the split-attention format, with most of the elements associated with searching for angles. These 8 elements can be compared with the 2 elements required for the integrated format.
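The contrast between the two formats can be summarised as two simple tallies; the groupings below are our paraphrase of the counts just described for Figure 3, not a general counting rule.

```python
# Simple comparison of the estimated element counts described above for the
# two formats of Figure 3 (our paraphrase of the counts, not a general rule).
split_attention_format = {
    "read the problem statement and search for the goal angle": 2,  # or more
    "search for angles ABC, BAC and ACB and combine them in the equation": 4,
    "reconfirm that DBE is the goal angle and equate it to ABC": 2,
}
integrated_format = {
    "process the two solution steps physically integrated with the diagram": 2,
}
print(sum(split_attention_format.values()), "vs",
      sum(integrated_format.values()))  # 8 vs 2
```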

Theoretical and Practical Contributions

Theoretical Implications

This review offers several key theoretical contributions. First, by surveying various conceptual frameworks of task complexity, it is clear that the majority rely on objective measures, neglecting the influence of a learner’s expertise, a crucial factor in determining task complexity. Cognitive load theory, by considering the interaction between element interactivity and a learner’s expertise, may address this limitation found in other frameworks. Second, while the relationships among element interactivity, intrinsic cognitive load, and extraneous cognitive load have been established and thoroughly explained (Sweller, 2010), it remains unclear how to estimate element interactivity for different tasks. Moreover, previous studies have predominantly used element interactivity to measure and quantify intrinsic cognitive load, often overlooking the quantification of extraneous cognitive load from the perspective of element interactivity. This review provides examples of how to estimate element interactivity for both intrinsic and extraneous cognitive load across different tasks, thereby extending prior research on element interactivity.

Element interactivity can be used to provide an estimate of informational complexity experienced when people process information, especially when learning. Applying the concept of element interactivity to estimate informational complexity is not restricted to cognitive load theory but also offers a tool for other frameworks to quantify the complexity of information perceived by learners. Element interactivity is not a precise measure of complexity because human cognitive architecture mandates that the complexity that humans experience is a simultaneous mixture of both the nature of the information and the contents of long-term memory of the person processing the information. Notwithstanding, provided we can estimate the knowledge held in long-term memory of the individual processing the information, we can use that knowledge to estimate the working memory load imposed due to both intrinsic and extraneous cognitive load. Such estimates have practical implications for both the design of experiments and interpretations of their results. Importantly, those estimates also have implications for instructional design.

Practical Implications

With respect to experimentation, because element interactivity as a measure of complexity is always an estimate rather than an exact measure, the effects of relatively small differences in element interactivity on experimental results are not likely to be visible. In contrast, the effects of very large differences in element interactivity can be readily demonstrated. The fact that measures of element interactivity are only estimates rather than precise measures becomes irrelevant if we are dealing with very large differences. We may not know the exact level of element interactivity, but we know there are large differences. Accordingly, most experiments ensure that only very large differences in element interactivity are studied.

Differences in element interactivity can be due to the use of different materials, different levels of expertise, or both. An experiment testing students learning lists of unrelated information such as the symbols of the chemical periodic table, a low element interactivity task, may yield different results to an experiment on teaching students how to balance chemical equations, a high element interactivity task. Similarly, an experiment that tests students who can easily balance chemical equations is likely to yield different results from the same experiment testing students who are just beginning to learn how to balance such equations.

These different experimental effects feed directly into instructional recommendations. Because of the very large effects that large differences in element interactivity have on learning, we cannot afford to ignore those differences when designing instruction. Most cognitive load theory effects disappear or even reverse when students study low element interactivity information, leading to the expertise reversal effect (Chen et al., 2017). As element interactivity decreases, the size of effects decreases and may reverse. Accordingly, recommended instructional procedures depend heavily on element interactivity. Furthermore, the suitability of an instructional procedure will change not only with the nature of the information; the same information may be suitable for students with one level of knowledge and unsuitable for students with a different level of knowledge, because levels of element interactivity can change with either the nature of the information or the knowledge levels of the learners. Either element interactivity or another measure of complexity that similarly takes into account the characteristics of human cognitive architecture is an essential ingredient of a viable instructional design science. At present, with the exception of element interactivity, the multitude of measures of complexity ignore the intricate consequences of the flow of information between working and long-term memory.

Limitations

Despite the fact that the use of element interactivity to measure task complexity incorporates both objective (i.e., tasks) and subjective (i.e., learners’ expertise) factors, the estimation of the number of interacting elements in a task may not always be accurate due to the inherent difficulties in precisely measuring learners’ knowledge. Furthermore, the emphasis on estimating element interactivity has primarily been on well-structured tasks, such as mathematics, with fewer examples provided for expository materials.

Conclusions

In conclusion, while measuring complexity using element interactivity is not exact, it is usable. In the absence of other measures of complexity that also take human cognitive architecture into account, we believe the use of element interactivity or a very similar measure is unavoidable.

Bedny, G. Z., Karwowski, W., & Bedny, I. S. (2012). Complexity evaluation of computer-based tasks. International Journal of Human-Computer Interaction, 28 (4), 236–257.


Boud, D., & Falchikov, N. (1989). Quantitative studies of student self-assessment in higher education: A critical analysis of findings. Higher Education, 18 (5), 529–549.

Campbell, D. J. (1988). Task complexity: A review and analysis. Academy of Management Review, 13 (1), 40–52.

Campbell, D. J., & Ilgen, D. R. (1976). Additive effects of task difficulty and goal setting on subsequent task performance. Journal of Applied Psychology, 61 (3), 319–324.

Cary, M., & Carlson, R. A. (1999). External support and the development of problem-solving routines. Journal of Experimental Psychology: Learning, Memory, & Cognition, 25 , 1053–1070.


Chandler, P., & Sweller, J. (1992). The split-attention effect as a factor in the design of instruction. British Journal of Educational Psychology, 62 (2), 233–246.

Chen, O., Kalyuga, S., & Sweller, J. (2015). The worked example effect, the generation effect, and element interactivity. Journal of Educational Psychology, 107 (3), 689–704.

Chen, O., Kalyuga, S., & Sweller, J. (2016). Relations between the worked example and generation effects on immediate and delayed tests. Learning and Instruction, 45 , 20–30.

Chen, O., Kalyuga, S., & Sweller, J. (2017). The expertise reversal effect is a variant of the more general element interactivity effect. Educational Psychology Review, 29 , 393–405.

Chu, P. C., & Spires, E. E. (2000). The joint effects of effort and quality on decision strategy choice with computerized decision aids. Decision Sciences, 31 (2), 259–292.

Cowan, N. (2001). The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24 (1), 87–114.

Earley, P. C. (1985). Influence of information, choice and task complexity upon goal acceptance, performance, and personal goals. Journal of Applied Psychology, 70 (3), 481–491.

Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102 (2), 211–245.

Funke, J. (2010). Complex problem solving: A case for complex cognition. Cognitive Processing, 11 , 133–142.

Geary, D. (2005). The origin of mind: Evolution of brain, cognition, and general intelligence . American Psychological Association.


Geary, D. (2008). An evolutionarily informed education science. Educational Psychologist, 43 , 179–195.

Geary, D. (2012). Evolutionary educational psychology. In K. Harris, S. Graham, & T. Urdan (Eds.), APA educational psychology handbook (Vol. 1, pp. 597–621). American Psychological Association.

Geary, D., & Berch, D. (2016). Evolution and children’s cognitive and academic development. In D. Geary & D. Berch (Eds.), Evolutionary Perspectives on Child Development and Education (pp. 217–249). Springer.


Gill, T. G. (1996). Expert systems usage: Task change and intrinsic motivation. MIS Quarterly, 20 (3), 301–329.

Gill, T. G., & Murphy, W. (2011). Task complexity and design science. In 9th International Conference on Education and Information Systems, Technologies and Applications (EISTA 2011) .

Gonzalez, C., Vanyukov, P., & Martin, M. K. (2005). The use of microworlds to study dynamic decision making. Computers in Human Behavior, 21(2), 273–286.

Gros, H., Thibaut, J. P., & Sander, E. (2020). Semantic congruence in arithmetic: A new conceptual model for word problem solving. Educational Psychologist, 55(2), 69–87.

Gros, H., Thibaut, J. P., & Sander, E. (2021). What we count dictates how we count: A tale of two encodings. Cognition, 212, 104665.

Halford, G. S., Wilson, W. H., & Phillips, S. (1998). Processing capacity defined by relational complexity: Implications for comparative, developmental, and cognitive psychology. Behavioral and Brain Sciences, 21 (6), 803–831.

Huber, V. L. (1985). Effects of task difficulty, goal setting, and strategy on performance of a heuristic task. Journal of Applied Psychology, 70 (3), 492–504.

Kieras, D., & Polson, P. G. (1985). An approach to the formal analysis of user complexity. International Journal of Man-machine Studies, 22 (4), 365–394.

Leahy, W., & Sweller, J. (2019). Cognitive load theory, resource depletion and the delayed testing effect. Educational Psychology Review, 31 , 457–478.

Liu, P., & Li, Z. (2012). Task complexity: A review and conceptualization framework. International Journal of Industrial Ergonomics, 42 (6), 553–568.

Locke, E. A., Shaw, K. N., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance: 1969–1980. Psychological Bulletin, 90 (1), 125–152.

Mason, J. (2002). Linking qualitative and quantitative data analysis. In Analyzing qualitative data (pp. 103–124). Routledge.

McCracken, J. H., & Aldrich, T. B. (1984). Analyses of selected LHX mission functions: Implications for operator workload and system automation goals (Vol. ASI479-024-84). Anacapa Sciences, Inc.

Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63 (2), 81–97.

Nembhard, D. A., & Osothsilp, N. (2002). Task complexity effects on between-individual learning/forgetting variability. International Journal of Industrial Ergonomics, 29 (5), 297–306.

Newell, A., & Simon, H. A. (1972). Human problem solving . Prentice Hall.

Ngu, B. H., Phan, H. P., Yeung, A. S., & Chung, S. F. (2018). Managing element interactivity in equation solving. Educational Psychology Review, 30 , 255–272.

Novak, J. D. (1990). Concept mapping: A useful tool for science education. Journal of Research in Science Teaching, 27 , 937–949.

Peterson, L., & Peterson, M. J. (1959). Short-term retention of individual verbal items. Journal of Experimental Psychology, 58 (3), 193–198.

Sweller, J. (1994). Cognitive load theory, learning difficulty, and instructional design. Learning and Instruction, 4 (4), 295–312.

Sweller, J. (2010). Element interactivity and intrinsic, extraneous, and germane cognitive load. Educational Psychology Review, 22 , 123–138.

Sweller, J., & Chandler, P. (1994). Why some material is difficult to learn. Cognition and Instruction, 12 (3), 185–233.

Sweller, J., & Cooper, G. A. (1985). The use of worked examples as a substitute for problem solving in learning algebra. Cognition and Instruction, 2 (1), 59–89.

Sweller, J., Mawer, R. F., & Ward, M. R. (1983). Development of expertise in mathematical problem solving. Journal of Experimental Psychology: General, 112 (4), 639–661.

Sweller, J., Van Merrienboer, J. J. G., & Paas, F. (1998). Cognitive architecture and instructional design. Educational Psychology Review, 10 , 251–296.

Sweller, J., van Merriënboer, J. J. G., & Paas, F. (2019). Cognitive architecture and instructional design: 20 years later. Educational Psychology Review, 31 , 261–292.

Taylor, M. S. (1981). The motivational effects of task challenge: A laboratory investigation. Organizational Behavior and Human Performance, 27 (2), 255–278.

Tobias, S., & Everson, H. T. (2002). Knowing what you know and what you don’t: Further research on metacognitive knowledge monitoring (College Board Research Report No. 2002-3). New York, NY: College Entrance Examination Board.

Wood, R. E. (1986). Task complexity: Definition of the construct. Organizational Behavior and Human Decision Processes, 37 (1), 60–82.


Author information

Authors and Affiliations

1. Department of Mathematics Education, Loughborough University, Loughborough, UK
2. Department of Psychology, Education and Child Studies, Erasmus University Rotterdam, Rotterdam, Netherlands
3. School of Education/Early Start, University of Wollongong, Wollongong, Australia
4. School of Education, University of New South Wales, Sydney, Australia

Corresponding author

Correspondence to Ouhao Chen .


Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Chen, O., Paas, F. & Sweller, J. A Cognitive Load Theory Approach to Defining and Measuring Task Complexity Through Element Interactivity. Educ Psychol Rev 35 , 63 (2023). https://doi.org/10.1007/s10648-023-09782-w


Accepted : 28 May 2023

Published : 02 June 2023

DOI : https://doi.org/10.1007/s10648-023-09782-w


Keywords

  • Cognitive load theory
  • Element interactivity
  • Learner expertise
  • Task complexity
  • Cognitive load

What It Takes to Think Deeply About Complex Problems

by Tony Schwartz


Summary

The problems we’re facing often seem as intractable as they do complex. But as Albert Einstein famously observed, “We cannot solve our problems with the same level of thinking that created them.” So what does it take to increase the complexity of our thinking? To cultivate a more nuanced, spacious perspective, start by challenging your convictions. Ask yourself, “What am I not seeing here?” and “What else might be true?” Second, do your most challenging task first every day, when your mind is fresh and before distractions arise. And third, pay attention to how you’re feeling. Embracing complexity means learning to better manage tough emotions like fear and anger.



Computers and Education: Artificial Intelligence

Opportunities of artificial intelligence for supporting complex problem-solving: Findings from a scoping review



Complex Problem Solving: What It Is and What It Is Not

Dietrich Dörner

1 Department of Psychology, University of Bamberg, Bamberg, Germany

Joachim Funke

2 Department of Psychology, Heidelberg University, Heidelberg, Germany

Computer-simulated scenarios have been part of psychological research on problem solving for more than 40 years. The shift in emphasis from simple toy problems to complex, more real-life oriented problems has been accompanied by discussions about the best ways to assess the process of solving complex problems. Psychometric issues such as reliable assessments and addressing correlations with other instruments have been in the foreground of these discussions and have left the content validity of complex problem solving in the background. In this paper, we return the focus to content issues and address the important features that define complex problems.

Succeeding in the 21st century requires many competencies, including creativity, life-long learning, and collaboration skills (e.g., National Research Council, 2011 ; Griffin and Care, 2015 ), to name only a few. One competence that seems to be of central importance is the ability to solve complex problems ( Mainzer, 2009 ). Mainzer quotes the Nobel prize winner Simon (1957) who wrote as early as 1957:

The capacity of the human mind for formulating and solving complex problems is very small compared with the size of the problem whose solution is required for objectively rational behavior in the real world or even for a reasonable approximation to such objective rationality. (p. 198)

The shift from well-defined to ill-defined problems came about as a result of a disillusion with the “general problem solver” ( Newell et al., 1959 ): The general problem solver was a computer software intended to solve all kind of problems that can be expressed through well-formed formulas. However, it soon became clear that this procedure was in fact a “special problem solver” that could only solve well-defined problems in a closed space. But real-world problems feature open boundaries and have no well-determined solution. In fact, the world is full of wicked problems and clumsy solutions ( Verweij and Thompson, 2006 ). As a result, solving well-defined problems and solving ill-defined problems requires different cognitive processes ( Schraw et al., 1995 ; but see Funke, 2010 ).

Well-defined problems have a clear set of means for reaching a precisely described goal state. For example: in a matchstick arithmetic problem, a person receives a false arithmetic expression constructed out of matchsticks (e.g., IV = III + III). According to the instructions, moving one of the matchsticks will make the equation true. Here, both the problem (find the appropriate stick to move) and the goal state (a true arithmetic expression; the solution is VI = III + III) are defined clearly.

Ill-defined problems have no clear problem definition, their goal state is not defined clearly, and the means of moving towards the (diffusely described) goal state are not clear. For example: the goal state for solving the Near East conflict between Israel and Palestine is not clearly defined (living in peaceful harmony with each other?), and even if the conflicting parties agreed on a two-state solution, this goal would still leave many issues unresolved. This type of problem is called a “complex problem” and is of central importance to this paper. All psychological processes that occur within individual persons and deal with the handling of such ill-defined complex problems will be subsumed under the umbrella term “complex problem solving” (CPS).

Systematic research on CPS started in the 1970s with observations of the behavior of participants who were confronted with computer simulated microworlds. For example, in one of those microworlds participants assumed the role of executives who were tasked to manage a company over a certain period of time (see Brehmer and Dörner, 1993 , for a discussion of this methodology). Today, CPS is an established concept and has even influenced large-scale assessments such as PISA (“Programme for International Student Assessment”), organized by the Organization for Economic Cooperation and Development ( OECD, 2014 ). According to the World Economic Forum, CPS is one of the most important competencies required in the future ( World Economic Forum, 2015 ). Numerous articles on the subject have been published in recent years, documenting the increasing research activity relating to this field. In the following collection of papers we list only those published in 2010 and later: theoretical papers ( Blech and Funke, 2010 ; Funke, 2010 ; Knauff and Wolf, 2010 ; Leutner et al., 2012 ; Selten et al., 2012 ; Wüstenberg et al., 2012 ; Greiff et al., 2013b ; Fischer and Neubert, 2015 ; Schoppek and Fischer, 2015 ), papers about measurement issues ( Danner et al., 2011a ; Greiff et al., 2012 , 2015a ; Alison et al., 2013 ; Gobert et al., 2015 ; Greiff and Fischer, 2013 ; Herde et al., 2016 ; Stadler et al., 2016 ), papers about applications ( Fischer and Neubert, 2015 ; Ederer et al., 2016 ; Tremblay et al., 2017 ), papers about differential effects ( Barth and Funke, 2010 ; Danner et al., 2011b ; Beckmann and Goode, 2014 ; Greiff and Neubert, 2014 ; Scherer et al., 2015 ; Meißner et al., 2016 ; Wüstenberg et al., 2016 ), one paper about developmental effects ( Frischkorn et al., 2014 ), one paper with a neuroscience background ( Osman, 2012 ) 1 , papers about cultural differences ( Güss and Dörner, 2011 ; Sonnleitner et al., 2014 ; Güss et al., 2015 ), papers about validity issues ( Goode and Beckmann, 2010 ; Greiff et al., 2013c ; Schweizer et al., 2013 ; Mainert et al., 2015 ; Funke et al., 2017 ; Greiff et al., 2017 , 2015b ; Kretzschmar et al., 2016 ; Kretzschmar, 2017 ), review papers and meta-analyses ( Osman, 2010 ; Stadler et al., 2015 ), and finally books ( Qudrat-Ullah, 2015 ; Csapó and Funke, 2017b ) and book chapters ( Funke, 2012 ; Hotaling et al., 2015 ; Funke and Greiff, 2017 ; Greiff and Funke, 2017 ; Csapó and Funke, 2017a ; Fischer et al., 2017 ; Molnàr et al., 2017 ; Tobinski and Fritz, 2017 ; Viehrig et al., 2017 ). In addition, a new “Journal of Dynamic Decision Making” (JDDM) has been launched ( Fischer et al., 2015 , 2016 ) to give the field an open-access outlet for research and discussion.

This paper aims to clarify aspects of validity: what should be meant by the term CPS and what not? This clarification seems necessary because misunderstandings in recent publications provide – from our point of view – a potentially misleading picture of the construct. We start this article with a historical review before attempting to systematize different positions. We conclude with a working definition.

Historical Review

The concept behind CPS goes back to the German phrase “komplexes Problemlösen” (CPS; the term “komplexes Problemlösen” was used as a book title by Funke, 1986 ). The concept was first introduced in Germany by Dörner and colleagues in the mid-1970s (see Dörner et al., 1975 ; Dörner, 1975 ). The German phrase was later translated to CPS in the titles of two edited volumes by Sternberg and Frensch (1991) and Frensch and Funke (1995a) that collected papers from different research traditions. Even though it looks as though the term was coined in the 1970s, Edwards (1962) had already used the term “dynamic decision making” to describe decisions that come in a sequence. He compared static with dynamic decision making, writing:

  • In dynamic situations, a new complication not found in the static situations arises. The environment in which the decision is set may be changing, either as a function of the sequence of decisions, or independently of them, or both. It is this possibility of an environment which changes while you collect information about it which makes the task of dynamic decision theory so difficult and so much fun. (p. 60)

The ability to solve complex problems is typically measured via dynamic systems that contain several interrelated variables that participants need to alter. Early work (see, e.g., Dörner, 1980 ) used a simulation scenario called “Lohhausen” that contained more than 2000 variables that represented the activities of a small town: Participants had to take over the role of a mayor for a simulated period of 10 years. The simulation condensed these ten years to ten hours in real time. Later, researchers used smaller dynamic systems as scenarios either based on linear equations (see, e.g., Funke, 1993 ) or on finite state automata (see, e.g., Buchner and Funke, 1993 ). In these contexts, CPS consisted of the identification and control of dynamic task environments that were previously unknown to the participants. Different task environments came along with different degrees of fidelity ( Gray, 2002 ).
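To make this structure concrete, the following sketch implements a hypothetical linear-equation microworld in the spirit of the scenarios just described. The update rule (outputs depend linearly on the participant's controls and on the previous outputs) follows the general logic of such systems; the system size and all coefficients are invented for illustration and do not reproduce any published task.

```python
import numpy as np

# Minimal sketch of a linear-equation microworld (hypothetical coefficients).
# Each round, the outputs y are updated from the participant's controls x
# and the previous outputs: y[t+1] = A @ x[t] + B @ y[t].

A = np.array([[1.0, 0.0, 0.5],    # how each control affects output 1
              [0.0, 2.0, 0.0],    # how each control affects output 2
              [0.0, 0.0, 1.5]])   # how each control affects output 3
B = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.1, 0.0],    # output 2 grows on its own (eigendynamic)
              [0.0, 0.0, 1.0]])

def step(y, x):
    """Advance the microworld by one round given the control settings x."""
    return A @ x + B @ y

# A short episode in which the participant leaves all controls at zero
# and simply observes how the system develops by itself.
y = np.array([10.0, 10.0, 10.0])
for round_no in range(1, 4):
    y = step(y, x=np.zeros(3))
    print(round_no, y.round(2))
```

Identifying which entries of the two matrices are nonzero, and then choosing controls that steer the outputs towards target values, is exactly the identification-and-control task mentioned above; finite-state-automata scenarios follow the same logic with discrete states and transitions instead of equations.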

According to Funke (2012) , the typical attributes of complex systems are (a) complexity of the problem situation, which is usually represented by the sheer number of involved variables; (b) connectivity and mutual dependencies between the involved variables; (c) dynamics of the situation, which reflects the role of time and developments within a system; (d) intransparency (in part or in full) about the involved variables and their current values; and (e) polytely (from the Greek for “many goals”), representing goal conflicts on different levels of analysis. This mixture of features is similar to what is called VUCA (volatility, uncertainty, complexity, ambiguity) in modern approaches to management (e.g., Mack et al., 2016 ).

In his evaluation of the CPS movement, Sternberg (1995) compared (young) European approaches to CPS with (older) American research on expertise. His analysis of the differences between the European and American traditions shows advantages but also potential drawbacks for each side. He states (p. 301): “I believe that although there are problems with the European approach, it deals with some fundamental questions that American research scarcely addresses.” So, even though the European approach did not resonate strongly in the US at that time, it was valued by scholars like Sternberg and others. Before attending to validity issues, we will first present a short review of the different research streams.

Different Approaches to CPS

In the short history of CPS research, different approaches can be identified ( Buchner, 1995 ; Fischer et al., 2017 ). To systematize, we differentiate between the following five lines of research:

  • (a) The search for individual differences comprises studies identifying interindividual differences that affect the ability to solve complex problems. This line of research is reflected, for example, in the early work by Dörner et al. (1983) and their “Lohhausen” study. Here, naïve student participants took over the role of the mayor of a small simulated town named Lohhausen for a simulated period of ten years. According to the authors’ results, it is not intelligence (as measured by conventional IQ tests) that predicts performance, but the ability to stay calm in the face of a challenging situation and the ability to switch easily between an analytic mode of processing and a more holistic one.
  • (b) The search for cognitive processes deals with the processes behind understanding complex dynamic systems. Representative of this line of research is, for example, Berry and Broadbent’s (1984) work on implicit and explicit learning processes when people interact with a dynamic system called “Sugar Production”. They found that those who perform best in controlling a dynamic system can do so implicitly, without explicit knowledge of details regarding the system’s relations.
  • (c) The search for system factors seeks to identify the aspects of dynamic systems that determine the difficulty of complex problems and make some problems harder than others. Representative of this line of research is, for example, work by Funke (1985) , who systematically varied the number of causal effects within a dynamic system or the presence/absence of eigendynamics. He found, for example, that solution quality decreases as the number of system relations increases.
  • (d) The psychometric approach develops measurement instruments that can be used as an alternative to classical IQ tests, as something that goes “beyond IQ”. The MicroDYN approach ( Wüstenberg et al., 2012 ) is representative of this line of research, presenting an alternative to reasoning tests (such as Raven matrices). These authors demonstrated that a small improvement in predicting school grade point average beyond reasoning is possible with MicroDYN tests.
  • (e) The experimental approach explores CPS under different experimental conditions. This approach uses CPS assessment instruments to test hypotheses derived from psychological theories and is sometimes used in research about cognitive processes (see above). Exemplary for this line of research is the work by Rohe et al. (2016) , who tested the usefulness of “motto goals” in the context of complex problems compared to more traditional learning and performance goals. Motto goals differ from pure performance goals by activating positive affect and should lead to better goal attainment, especially in complex situations (the cited study found no such effect).

To be clear: these five approaches are not mutually exclusive and do overlap. But the differentiation helps to identify different research communities and different traditions. These communities had different opinions about scaling complexity.

The Race for Complexity: Use of More and More Complex Systems

In the early years of CPS research, microworlds started with systems containing about 20 variables (“Tailorshop”), soon reached 60 variables (“Moro”), and culminated in systems with about 2000 variables (“Lohhausen”). This race for complexity ended with the introduction of the concept of “minimal complex systems” (MCS; Greiff and Funke, 2009 ; Funke and Greiff, 2017 ), which ushered in a search for the lower bound of complexity instead of the upper bound, which could not be defined as easily. The idea behind this concept was that whereas the upper limits of complexity are unbounded, the lower limits might be identifiable. Imagine starting with a simple system containing two variables with a simple linear connection between them; then, step by step, increase the number of variables and/or the type of connections. One soon reaches a point where the system can no longer be considered simple and has become a “complex system”. This point represents a minimal complex system. Despite some research having been conducted in this direction, the point of transition from simple to complex has not yet been clearly identified.

Some years later, the original “minimal complex systems” approach ( Greiff and Funke, 2009 ) shifted to the “multiple complex systems” approach ( Greiff et al., 2013a ). This shift is more than a slight change in wording: it is important because it taps into the issue of validity directly. Minimal complex systems have been introduced in the context of challenges from large-scale assessments like PISA 2012 that measure new aspects of problem solving, namely interactive problems besides static problem solving ( Greiff and Funke, 2017 ). PISA 2012 required test developers to remain within testing time constraints (given by the school class schedule). Also, test developers needed a large item pool for the construction of a broad class of problem solving items. It was clear from the beginning that MCS deal with simple dynamic situations that require controlled interaction: the exploration and control of simple ticket machines, simple mobile phones, or simple MP3 players (all of these example domains were developed within PISA 2012) – rather than really complex situations like managerial or political decision making.

As a consequence of this subtle but important shift in interpreting the letters MCS, the definition of CPS became a subject of debate recently ( Funke, 2014a ; Greiff and Martin, 2014 ; Funke et al., 2017 ). In the words of Funke (2014b , p. 495):

  • It is funny that problems that nowadays come under the term ‘CPS’, are less complex (in terms of the previously described attributes of complex situations) than at the beginning of this new research tradition. The emphasis on psychometric qualities has led to a loss of variety. Systems thinking requires more than analyzing models with two or three linear equations – nonlinearity, cyclicity, rebound effects, etc. are inherent features of complex problems and should show up at least in some of the problems used for research and assessment purposes. Minimal complex systems run the danger of becoming minimal valid systems.

Searching for minimal complex systems is not the same as gaining insight into the way humans deal with complexity and uncertainty. For psychometric purposes, it is appropriate to reduce complexity to a minimum; for understanding problem solving under conditions of overload, intransparency, and dynamics, it is necessary to realize those attributes with reasonable strength. This aspect is illustrated in the next section.

Importance of the Validity Issue

The most important reason for discussing the question of what complex problem solving is and what it is not stems from its phenomenology: if we lose sight of our phenomena, we are no longer doing good psychology. The relevant phenomena in the context of complex problems encompass many important aspects. In this section, we discuss four phenomena that are specific to complex problems. We consider these phenomena as critical for theory development and for the construction of assessment instruments (i.e., microworlds). These phenomena require theories to explain them, and they require assessment instruments that elicit them in a reliable way.

The first phenomenon is the emergency reaction of the intellectual system ( Dörner, 1980 ): When dealing with complex systems, actors tend to (a) reduce their intellectual level by decreasing self-reflection, decreasing their intentions, stereotyping, and reducing the realization of their intentions; (b) show a tendency for fast action, with increased readiness for risk, increased violations of rules, and an increased tendency to escape the situation; and (c) degrade their hypothesis formation by constructing more global hypotheses and testing them less, by increasing entrenchment, and by decontextualizing their goals. This phenomenon illustrates the strong connection between cognition, emotion, and motivation that has been emphasized by Dörner (see, e.g., Dörner and Güss, 2013 ) from the beginning of his research tradition; the emergency reaction reveals a shift in the mode of information processing under the pressure of complexity.

The second phenomenon comprises cross-cultural differences with respect to strategy use ( Strohschneider and Güss, 1999 ; Güss and Wiley, 2007 ; Güss et al., 2015 ). Results from complex task environments illustrate the strong influence of context and background knowledge to an extent that cannot be found for knowledge-poor problems. For example, in a comparison between Brazilian and German participants, it turned out that Brazilians accept the given problem descriptions and are more optimistic about the results of their efforts, whereas Germans tend to inquire more about the background of the problems and take a more active approach but are less optimistic (according to Strohschneider and Güss, 1998 , p. 695).

The third phenomenon relates to failures that occur during the planning and acting stages ( Jansson, 1994 ; Ramnarayan et al., 1997 ), illustrating that rational procedures seem unlikely to be used in complex situations. The potential for failures ( Dörner, 1996 ) rises with the complexity of the problem. Jansson (1994) presents seven major areas of failure in complex situations: acting directly on current feedback; insufficient systematization; insufficient control of hypotheses and strategies; lack of self-reflection; selective information gathering; selective decision making; and thematic vagabonding.

The fourth phenomenon describes (a lack of) training and transfer effects ( Kretzschmar and Süß, 2015 ), which again illustrates the context dependency of strategies and knowledge (i.e., there is no strategy so universal that it can be used in many different problem situations). In their own experiment, the authors could show training effects only for knowledge acquisition, not for knowledge application. Only with specific feedback can performance in complex environments be increased ( Engelhart et al., 2017 ).

These four phenomena illustrate why the type of complexity (or degree of simplicity) used in research really matters. Furthermore, they demonstrate effects that are specific to complex problems but not to toy problems. These phenomena direct attention to the important question: does the stimulus material used (i.e., the computer-simulated microworld) tap and elicit the manifold of phenomena described above?

Dealing with partly unknown complex systems requires courage, wisdom, knowledge, grit, and creativity. In creativity research, “little c” and “BIG C” are used to differentiate between everyday creativity and eminent creativity ( Beghetto and Kaufman, 2007 ; Kaufman and Beghetto, 2009 ). Everyday creativity is important for solving everyday problems (e.g., finding a clever fix for a broken spoke on my bicycle), eminent creativity changes the world (e.g., inventing solar cells for energy production). Maybe problem solving research should use a similar differentiation between “little p” and “BIG P” to mark toy problems on the one side and big societal challenges on the other. The question then remains: what can we learn about BIG P by studying little p? What phenomena are present in both types, and what phenomena are unique to each of the two extremes?

Discussing research on CPS requires reflecting on the field’s research methods. Even if the experimental approach has been successful for testing hypotheses (for an overview of older work, see Funke, 1995 ), other methods might provide additional and novel insights. Complex phenomena require complex approaches to understand them. The complex nature of complex systems imposes limitations on psychological experiments: the more complex the environments, the more difficult it is to keep conditions under experimental control. And if experiments have to be run in labs, one should bring enough complexity into the lab to establish the phenomena mentioned, at least in part.

There are interesting options to be explored (again): think-aloud protocols , which have been discredited for many years ( Nisbett and Wilson, 1977 ) and yet are a valuable source for theory testing ( Ericsson and Simon, 1983 ); introspection ( Jäkel and Schreiber, 2013 ), which seems to be banned from psychological methods but nevertheless offers insights into thought processes; the use of life-streaming ( Wendt, 2017 ), a medium in which streamers generate a video stream of think-aloud data in computer-gaming; political decision-making ( Dhami et al., 2015 ) that demonstrates error-proneness in groups; historical case studies ( Dörner and Güss, 2011 ) that give insights into the thinking styles of political leaders; the use of the critical incident technique ( Reuschenbach, 2008 ) to construct complex scenarios; and simulations with different degrees of fidelity ( Gray, 2002 ).

The methods toolbox is full of instruments that have to be explored more carefully before any individual instrument receives a ban or research narrows its focus to only one paradigm for data collection. Brehmer and Dörner (1993) discussed the tensions between “research in the laboratory and research in the field”, optimistically concluding “that the new methodology of computer-simulated microworlds will provide us with the means to bridge the gap between the laboratory and the field” (p. 183). The idea behind this optimism was that computer-simulated scenarios would bring more complexity from the outside world into the controlled lab environment. But this is not true for all simulated scenarios. In his paper on simulated environments, Gray (2002) differentiated computer-simulated environments with respect to three dimensions: (1) tractability (“the more training subjects require before they can use a simulated task environment, the less tractable it is”, p. 211), (2) correspondence (“High correspondence simulated task environments simulate many aspects of one task environment. Low correspondence simulated task environments simulate one aspect of many task environments”, p. 214), and (3) engagement (“A simulated task environment is engaging to the degree to which it involves and occupies the participants; that is, the degree to which they agree to take it seriously”, p. 217). But the mere fact that a task is called a “computer-simulated task environment” does not mean anything specific in terms of these three dimensions. This is one of several reasons why we should differentiate between those studies that do not address the core features of CPS and those that do.

What is not CPS?

Even though a growing number of references claiming to deal with complex problems exist (e.g., Greiff and Wüstenberg, 2015 ; Greiff et al., 2016 ), it would be better to label the requirements within these tasks “dynamic problem solving,” as has been done adequately in earlier work ( Greiff et al., 2012 ). The dynamics behind on-off switches ( Thimbleby, 2007 ) are remarkable but not really complex. Small nonlinear systems that exhibit stunningly complex and unstable behavior do exist – but they are not used in psychometric assessments of so-called CPS. There are other small systems (like MicroDYN scenarios: Greiff and Wüstenberg, 2014 ) that exhibit simple forms of system behavior that are completely predictable and stable. This type of simple system is used frequently. It is even offered commercially as a complex problem-solving test called COMPRO ( Greiff and Wüstenberg, 2015 ) for business applications. But a closer look reveals that the label is not used correctly; within COMPRO, the linear equations used are far from complex, and the system can be handled properly by using only one strategy (see Funke et al., 2017 for more details).

Why do simple linear systems not fall within CPS? On the surface, nonlinear and linear systems might appear similar because both include only 3–5 variables. But the difference lies in the system’s behavior as well as in the strategies and learning involved. If the behavior is simple (as in linear systems where more input is related to more output and vice versa), the system can be easily understood (participants in the MicroDYN world have 3 minutes to explore a complex system). If the behavior is complex (as in systems that contain strange attractors or negative feedback loops), things become more complicated, and much more observation is needed to identify the hidden structure of the unknown system ( Berry and Broadbent, 1984 ; Hundertmark et al., 2015 ).
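The difference can be made tangible with a toy contrast between a linear and a nonlinear update rule. Both rules and their parameters below are our own illustrative inventions (the nonlinear one is a logistic-style rule with a small input term), not reconstructions of any cited system.

```python
# Toy contrast between a linear and a nonlinear update rule
# (arbitrary, invented parameters).

def linear_step(y, x):
    # More input always means proportionally more output; the trajectory
    # approaches a fixed point monotonically.
    return 0.9 * y + 2.0 * x

def nonlinear_step(y, x):
    # Logistic-style feedback: the effect of the current state on the next
    # state changes sign depending on where the system is, so a handful of
    # observations does not reveal the underlying rule.
    return 3.9 * y * (1.0 - y) + 0.01 * x

y_lin, y_non = 0.5, 0.5
for t in range(1, 6):
    y_lin = linear_step(y_lin, x=1.0)
    y_non = nonlinear_step(y_non, x=1.0)
    print(t, round(y_lin, 3), round(y_non, 3))
```

The linear trajectory climbs smoothly towards its fixed point, whereas the logistic-style trajectory keeps jumping around; the latter kind of behavior is what requires the extended observation mentioned above.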

Another issue is learning. If tasks can be solved using a single (and not so complicated) strategy, steep learning curves are to be expected. The shift from problem solving to learned routine behavior occurs rapidly, as was demonstrated by Luchins (1942) . In his water jar experiments, participants quickly acquired a specific strategy (a mental set) for solving certain measurement problems that they later continued applying to problems that would have allowed for easier approaches. In the case of complex systems, learning can occur only on very general, abstract levels because it is difficult for human observers to make specific predictions. Routines dealing with complex systems are quite different from routines relating to linear systems.

What should not be studied under the label of CPS are pure learning effects, multiple-cue probability learning, or tasks that can be solved using a single strategy. This last issue is a problem for MicroDYN tasks that rely strongly on the VOTAT strategy (“vary one thing at a time”; see Tschirgi, 1980 ). In real life, it is hard to imagine a business manager trying to solve her or his problems by means of VOTAT.

What is CPS?

In the early days of CPS research, planet Earth’s dynamics and complexities gained attention through such books as “The limits to growth” ( Meadows et al., 1972 ) and “Beyond the limits” ( Meadows et al., 1992 ). In the current decade, for example, the World Economic Forum (2016) attempts to identify the complexities and risks of our modern world. In order to understand the meaning of complexity and uncertainty, taking a look at the worlds’ most pressing issues is helpful. Searching for strategies to cope with these problems is a difficult task: surely there is no place for the simple principle of “vary-one-thing-at-a-time” (VOTAT) when it comes to global problems. The VOTAT strategy is helpful in the context of simple problems ( Wüstenberg et al., 2014 ); therefore, whether or not VOTAT is helpful in a given problem situation helps us distinguish simple from complex problems.
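To spell out what the strategy amounts to, here is a minimal sketch of VOTAT on a hypothetical system with two controls and one output, reduced to a single static round for clarity; the weights and names are invented for illustration.

```python
# Sketch of the VOTAT ("vary one thing at a time") strategy on a
# hypothetical additive system (invented coefficients).

TRUE_WEIGHTS = (3.0, -1.5)   # unknown to the problem solver

def system_output(x1, x2):
    # A purely additive, linear system with no interactions between controls.
    return TRUE_WEIGHTS[0] * x1 + TRUE_WEIGHTS[1] * x2

def votat_explore():
    """Estimate each control's effect by changing only that control."""
    baseline = system_output(0.0, 0.0)
    effect_of_x1 = system_output(1.0, 0.0) - baseline
    effect_of_x2 = system_output(0.0, 1.0) - baseline
    return effect_of_x1, effect_of_x2

print(votat_explore())   # -> (3.0, -1.5): two probes recover the full structure
```

Because the system is additive and static, two isolated probes recover its entire structure; as soon as variables interact, develop on their own, or respond nonlinearly, such isolated probes no longer identify the system, which is why the usefulness of VOTAT helps distinguish simple from complex problems.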

Because there exist no clear-cut strategies for complex problems, typical failures occur when dealing with uncertainty ( Dörner, 1996 ; Güss et al., 2015 ). Ramnarayan et al. (1997) put together a list of generic errors (e.g., not developing adequate action plans; lack of background control; learning from experience blocked by stereotype knowledge; reactive instead of proactive action) that are typical of knowledge-rich complex systems but cannot be found in simple problems.

Complex problem solving is not a one-dimensional, low-level construct. On the contrary, CPS is a multi-dimensional bundle of competencies existing at a high level of abstraction, similar to intelligence (but going beyond IQ). As Funke et al. (2018) state: “Assessment of transversal (in educational contexts: cross-curricular) competencies cannot be done with one or two types of assessment. The plurality of skills and competencies requires a plurality of assessment instruments.”

There are at least three different aspects of complex systems that are part of our understanding of a complex system: (1) a complex system can be described at different levels of abstraction; (2) a complex system develops over time, has a history, a current state, and a (potentially unpredictable) future; (3) a complex system is knowledge-rich and activates a large semantic network, together with a broad list of potential strategies (domain-specific as well as domain-general).

Complex problem solving is not only a cognitive process but is also an emotional one ( Spering et al., 2005 ; Barth and Funke, 2010 ) and strongly dependent on motivation (low-stakes versus high-stakes testing; see Hermes and Stelling, 2016 ).

Furthermore, CPS is a dynamic process unfolding over time, with different phases and with more differentiation than simply knowledge acquisition and knowledge application. Ideally, the process should entail identifying problems (see Dillon, 1982 ; Lee and Cho, 2007 ), even if, in experimental settings, problems are provided to participants a priori. The more complex and open a given situation is, the more options can be generated (T. S. Schweizer et al., 2016 ). In closed problems, these processes do not occur in the same way.

In analogy to the difference between formative (process-oriented) and summative (result-oriented) assessment ( Wiliam and Black, 1996 ; Bennett, 2011 ), CPS should not be reduced to the mere outcome of a solution process. The process leading up to the solution, including detours and errors made along the way, might provide a more differentiated impression of a person’s problem-solving abilities and competencies than the final result of such a process. This is one of the reasons why CPS environments are not, in fact, complex intelligence tests: research on CPS is not only about the outcome of the decision process, but it is also about the problem-solving process itself.

Complex problem solving is part of our daily life: finding the right person to share one’s life with, choosing a career that not only makes money, but that also makes us happy. Of course, CPS is not restricted to personal problems – life on Earth gives us many hard nuts to crack: climate change, population growth, the threat of war, the use and distribution of natural resources. In sum, many societal challenges can be seen as complex problems. To reduce that complexity to a one-hour lab activity on a random Friday afternoon puts it out of context and does not address CPS issues.

Theories about CPS should specify which populations they apply to. Across populations, one thing to consider is prior knowledge. CPS research with experts (e.g., Dew et al., 2009 ) is quite different from problem solving research using tasks that intentionally do not require any specific prior knowledge (see, e.g., Beckmann and Goode, 2014 ).

More than 20 years ago, Frensch and Funke (1995b) defined CPS as follows:

  • CPS occurs to overcome barriers between a given state and a desired goal state by means of behavioral and/or cognitive, multi-step activities. The given state, goal state, and barriers between given state and goal state are complex, change dynamically during problem solving, and are intransparent. The exact properties of the given state, goal state, and barriers are unknown to the solver at the outset. CPS implies the efficient interaction between a solver and the situational requirements of the task, and involves a solver’s cognitive, emotional, personal, and social abilities and knowledge. (p. 18)

The above definition is rather formal and does not account for content or relations between the simulation and the real world. In a sense, we need a new definition of CPS that addresses these issues. Based on our previous arguments, we propose the following working definition:

  • Complex problem solving is a collection of self-regulated psychological processes and activities necessary in dynamic environments to achieve ill-defined goals that cannot be reached by routine actions. Creative combinations of knowledge and a broad set of strategies are needed. Solutions are often more bricolage than perfect or optimal. The problem-solving process combines cognitive, emotional, and motivational aspects, particularly in high-stakes situations. Complex problems usually involve knowledge-rich requirements and collaboration among different persons.

The main differences from the older definition lie in the emphasis on (a) the self-regulation of processes, (b) creativity (as opposed to routine behavior), (c) the bricolage type of solution, and (d) the role of high-stakes challenges. Our new definition incorporates some aspects that have been discussed in this review but were not reflected in the 1995 definition, which focused on attributes of complex problems such as dynamics or intransparency.

This leads us to the final reflection about the role of CPS for dealing with uncertainty and complexity in real life. We will distinguish thinking from reasoning and introduce the sense of possibility as an important aspect of validity.

CPS as Combining Reasoning and Thinking in an Uncertain Reality

Leading up to the Battle of Borodino in Leo Tolstoy’s novel “War and Peace”, Prince Andrei Bolkonsky explains the concept of war to his friend Pierre. Pierre expects war to resemble a game of chess: You position the troops and attempt to defeat your opponent by moving them in different directions.

“Far from it!”, Andrei responds. “In chess, you know the knight and his moves, you know the pawn and his combat strength. While in war, a battalion is sometimes stronger than a division and sometimes weaker than a company; it all depends on circumstances that can never be known. In war, you do not know the position of your enemy; some things you might be able to observe, some things you have to divine (but that depends on your ability to do so!) and many things cannot even be guessed at. In chess, you can see all of your opponent’s possible moves. In war, that is impossible. If you decide to attack, you cannot know whether the necessary conditions are met for you to succeed. Many a time, you cannot even know whether your troops will follow your orders…”

In essence, war is characterized by a high degree of uncertainty. A good commander (or politician) can add to that what he or she sees, tentatively fill in the blanks – and not just by means of logical deduction but also by intelligently bridging missing links. A bad commander extrapolates from what he sees and thus arrives at improper conclusions.

Many languages differentiate between two modes of mentalizing; for instance, the English language distinguishes between ‘thinking’ and ‘reasoning’. Reasoning denotes acute and exact mentalizing involving logical deductions. Such deductions are usually based on evidence and counterevidence. Thinking, however, is what is required to write novels. It is the construction of an initially unknown reality. But it is not a pipe dream, an unfounded process of fabrication. Rather, thinking asks us to imagine reality (“Wirklichkeitsfantasie”). In other words, a novelist has to possess a “sense of possibility” (“Möglichkeitssinn”, Robert Musil; in German, sense of possibility is often used synonymously with imagination even though imagination is not the same as sense of possibility, for imagination also encapsulates the impossible). This sense of possibility entails knowing the whole (or several wholes) or being able to construe an unknown whole that could accommodate a known part. The whole has to align with sociological and geographical givens, with the mentality of certain peoples or groups, and with the laws of physics and chemistry. Otherwise, the entire venture is ill-founded. A sense of possibility does not aim for the moon but imagines something that might be possible but has not been considered possible or even potentially possible so far.

Thinking is a means to eliminate uncertainty. This process requires both of the modes of mentalizing we have discussed thus far. Economic, political, or ecological decisions require us to first consider the situation at hand. Certain situational aspects can be known, but many cannot. In fact, von Clausewitz (1832) posits that only about 25% of the necessary information is available when a military decision needs to be made. Even then, there is no way to guarantee that whatever information is available is also correct: even if a piece of information was completely accurate yesterday, it might no longer apply today.

Once our sense of possibility has helped us grasp a situation, problem solvers need to call on their reasoning skills. Not every situation requires the same action, and we may want to act one way or another to reach this or that goal. This appears logical, but it is a logic based on constantly shifting grounds: we cannot know whether the necessary conditions are met, sometimes the assumptions we have made later turn out to be incorrect, and sometimes we have to revise our assumptions or make completely new ones. It is necessary to constantly switch between our sense of possibility and our sense of reality, that is, to switch between thinking and reasoning. It is an arduous process, and some people handle it well, while others do not.

If we are to believe Tuchman’s (1984) book, “The March of Folly”, most politicians and commanders are fools. According to Tuchman, not much has changed in the 3300 years that have elapsed since the misguided Trojans decided to welcome the left-behind wooden horse into their city that would end up dismantling Troy’s defensive walls. The Trojans, too, had been warned, but decided not to heed the warning. Although Laocoön had revealed the horse’s true nature to them by attacking it with a spear, making the weapons inside the horse ring, the Trojans refused to see the forest for the trees. They did not want to listen, they wanted the war to be over, and this desire ended up shaping their perception.

The objective of psychology is to predict and explain human actions and behavior as accurately as possible. However, thinking cannot be investigated by limiting its study to neatly confined fractions of reality such as the realms of propositional logic, chess, Go tasks, the Tower of Hanoi, and so forth. Within these systems, there is little need for a sense of possibility. But a sense of possibility – the ability to divine and construe an unknown reality – is at least as important as logical reasoning skills. Not researching the sense of possibility limits the validity of psychological research. All economic and political decision making draws upon this sense of possibility. By not exploring it, psychological research dedicated to the study of thinking cannot further the understanding of politicians’ competence and the reasons that underlie political mistakes. Christopher Clark identifies European diplomats’, politicians’, and commanders’ inability to form an accurate representation of reality as a reason for the outbreak of World War I. According to Clark’s (2012) book, “The Sleepwalkers”, the politicians of the time lived in their own make-believe world, wrongfully assuming that it was the same world everyone else inhabited. If CPS research wants to make significant contributions to the world, it has to acknowledge complexity and uncertainty as important aspects of it.

CPS has now been a subject of psychological research for more than 40 years. During this time period, the initial emphasis on analyzing how humans deal with complex, dynamic, and uncertain situations has been lost. What is subsumed under the heading of CPS in modern research has lost the original complexities of real-life problems. From our point of view, the challenges of the 21st century require a return to the origins of this research tradition. We would encourage researchers in the field of problem solving to come back to the original ideas. There is enough complexity and uncertainty in the world to be studied. Improving our understanding of how humans deal with these global and pressing problems would be a worthwhile enterprise.

Author Contributions

JF drafted a first version of the manuscript; DD added further text and commented on the draft. JF finalized the manuscript.

Authors’ Note

After more than 40 years of controversial discussions between both authors, this is the first joint paper. We are happy to have done this now! We have found common ground!

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Acknowledgments

The authors thank the Deutsche Forschungsgemeinschaft (DFG) for the continuous support of their research over many years. Thanks to Daniel Holt for his comments on validity issues, thanks to Julia Nolte who helped us by translating German text excerpts into readable English and helped us, together with Keri Hartman, to improve our style and grammar – thanks for that! We also thank the two reviewers for their helpful critical comments on earlier versions of this manuscript. Finally, we acknowledge financial support by Deutsche Forschungsgemeinschaft and Ruprecht-Karls-Universität Heidelberg within their funding programme Open Access Publishing .

1 The fMRI paper by Anderson (2012) uses the term “complex problem solving” for tasks that do not fall within our understanding of CPS and is therefore excluded from this list.

  • Alison L., van den Heuvel C., Waring S., Power N., Long A., O’Hara T., et al. (2013). Immersive simulated learning environments for researching critical incidents: a knowledge synthesis of the literature and experiences of studying high-risk strategic decision making. J. Cogn. Eng. Deci. Mak. 7 255–272. 10.1177/1555343412468113 [ CrossRef ] [ Google Scholar ]
  • Anderson J. R. (2012). Tracking problem solving by multivariate pattern analysis and hidden markov model algorithms. Neuropsychologia 50 487–498. 10.1016/j.neuropsychologia.2011.07.025 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Barth C. M., Funke J. (2010). Negative affective environments improve complex solving performance. Cogn. Emot. 24 1259–1268. 10.1080/02699930903223766 [ CrossRef ] [ Google Scholar ]
  • Beckmann J. F., Goode N. (2014). The benefit of being naïve and knowing it: the unfavourable impact of perceived context familiarity on learning in complex problem solving tasks. Instruct. Sci. 42 271–290. 10.1007/s11251-013-9280-7 [ CrossRef ] [ Google Scholar ]
  • Beghetto R. A., Kaufman J. C. (2007). Toward a broader conception of creativity: a case for “mini-c” creativity. Psychol. Aesthetics Creat. Arts 1 73–79. 10.1037/1931-3896.1.2.73 [ CrossRef ] [ Google Scholar ]
  • Bennett R. E. (2011). Formative assessment: a critical review. Assess. Educ. Princ. Policy Pract. 18 5–25. 10.1080/0969594X.2010.513678 [ CrossRef ] [ Google Scholar ]
  • Berry D. C., Broadbent D. E. (1984). On the relationship between task performance and associated verbalizable knowledge. Q. J. Exp. Psychol. 36 209–231. 10.1080/14640748408402156 [ CrossRef ] [ Google Scholar ]
  • Blech C., Funke J. (2010). You cannot have your cake and eat it, too: how induced goal conflicts affect complex problem solving. Open Psychol. J. 3 42–53. 10.2174/1874350101003010042 [ CrossRef ] [ Google Scholar ]
  • Brehmer B., Dörner D. (1993). Experiments with computer-simulated microworlds: escaping both the narrow straits of the laboratory and the deep blue sea of the field study. Comput. Hum. Behav. 9 171–184. 10.1016/0747-5632(93)90005-D [ CrossRef ] [ Google Scholar ]
  • Buchner A. (1995). “Basic topics and approaches to the study of complex problem solving,” in Complex Problem Solving: The European Perspective , eds Frensch P. A., Funke J. (Hillsdale, NJ: Erlbaum; ), 27–63. [ Google Scholar ]
  • Buchner A., Funke J. (1993). Finite state automata: dynamic task environments in problem solving research. Q. J. Exp. Psychol. 46A , 83–118. 10.1080/14640749308401068 [ CrossRef ] [ Google Scholar ]
  • Clark C. (2012). The Sleepwalkers: How Europe Went to War in 1914 . London: Allen Lane. [ Google Scholar ]
  • Csapó B., Funke J. (2017a). “The development and assessment of problem solving in 21st-century schools,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds Csapó B., Funke J. (Paris: OECD Publishing; ), 19–31. [ Google Scholar ]
  • Csapó B., Funke J. (eds) (2017b). The Nature of Problem Solving. Using Research to Inspire 21st Century Learning. Paris: OECD Publishing. [ Google Scholar ]
  • Danner D., Hagemann D., Holt D. V., Hager M., Schankin A., Wüstenberg S., et al. (2011a). Measuring performance in dynamic decision making. Reliability and validity of the Tailorshop simulation. J. Ind. Differ. 32 225–233. 10.1027/1614-0001/a000055 [ CrossRef ] [ Google Scholar ]
  • Danner D., Hagemann D., Schankin A., Hager M., Funke J. (2011b). Beyond IQ: a latent state-trait analysis of general intelligence, dynamic decision making, and implicit learning. Intelligence 39 323–334. 10.1016/j.intell.2011.06.004 [ CrossRef ] [ Google Scholar ]
  • Dew N., Read S., Sarasvathy S. D., Wiltbank R. (2009). Effectual versus predictive logics in entrepreneurial decision-making: differences between experts and novices. J. Bus. Ventur. 24 287–309. 10.1016/j.jbusvent.2008.02.002 [ CrossRef ] [ Google Scholar ]
  • Dhami M. K., Mandel D. R., Mellers B. A., Tetlock P. E. (2015). Improving intelligence analysis with decision science. Perspect. Psychol. Sci. 10 753–757. 10.1177/1745691615598511 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Dillon J. T. (1982). Problem finding and solving. J. Creat. Behav. 16 97–111. 10.1002/j.2162-6057.1982.tb00326.x [ CrossRef ] [ Google Scholar ]
  • Dörner D. (1975). Wie Menschen eine Welt verbessern wollten [How people wanted to improve a world]. Bild Der Wissenschaft 12 48–53. [ Google Scholar ]
  • Dörner D. (1980). On the difficulties people have in dealing with complexity. Simulat. Gam. 11 87–106. 10.1177/104687818001100108 [ CrossRef ] [ Google Scholar ]
  • Dörner D. (1996). The Logic of Failure: Recognizing and Avoiding Error in Complex Situations. New York, NY: Basic Books. [ Google Scholar ]
  • Dörner D., Drewes U., Reither F. (1975). “Über das Problemlösen in sehr komplexen Realitätsbereichen,” in Bericht über den 29. Kongreß der DGfPs in Salzburg 1974 Band 1 , ed. Tack W. H. (Göttingen: Hogrefe; ), 339–340. [ Google Scholar ]
  • Dörner D., Güss C. D. (2011). A psychological analysis of Adolf Hitler’s decision making as commander in chief: summa confidentia et nimius metus. Rev. Gen. Psychol. 15 37–49. 10.1037/a0022375 [ CrossRef ] [ Google Scholar ]
  • Dörner D., Güss C. D. (2013). PSI: a computational architecture of cognition, motivation, and emotion. Rev. Gen. Psychol. 17 297–317. 10.1037/a0032947 [ CrossRef ] [ Google Scholar ]
  • Dörner D., Kreuzig H. W., Reither F., Stäudel T. (1983). Lohhausen. Vom Umgang mit Unbestimmtheit und Komplexität. Bern: Huber. [ Google Scholar ]
  • Ederer P., Patt A., Greiff S. (2016). Complex problem-solving skills and innovativeness – evidence from occupational testing and regional data. Eur. J. Educ. 51 244–256. 10.1111/ejed.12176 [ CrossRef ] [ Google Scholar ]
  • Edwards W. (1962). Dynamic decision theory and probabiIistic information processing. Hum. Factors 4 59–73. 10.1177/001872086200400201 [ CrossRef ] [ Google Scholar ]
  • Engelhart M., Funke J., Sager S. (2017). A web-based feedback study on optimization-based training and analysis of human decision making. J. Dynamic Dec. Mak. 3 1–23. [ Google Scholar ]
  • Ericsson K. A., Simon H. A. (1983). Protocol Analysis: Verbal Reports As Data. Cambridge, MA: Bradford. [ Google Scholar ]
  • Fischer A., Greiff S., Funke J. (2017). “The history of complex problem solving,” in The Nature of Problem Solving: Using Research to Inspire 21st Century Learning , eds Csapó B., Funke J. (Paris: OECD Publishing; ), 107–121. [ Google Scholar ]
  • Fischer A., Holt D. V., Funke J. (2015). Promoting the growing field of dynamic decision making. J. Dynamic Decis. Mak. 1 1–3. 10.11588/jddm.2015.1.23807 [ CrossRef ] [ Google Scholar ]
  • Fischer A., Holt D. V., Funke J. (2016). The first year of the “journal of dynamic decision making.” J. Dynamic Decis. Mak. 2 1–2. 10.11588/jddm.2016.1.28995 [ CrossRef ] [ Google Scholar ]
  • Fischer A., Neubert J. C. (2015). The multiple faces of complex problems: a model of problem solving competency and its implications for training and assessment. J. Dynamic Decis. Mak. 1 1–14. 10.11588/jddm.2015.1.23945 [ CrossRef ] [ Google Scholar ]
  • Frensch P. A., Funke J. (eds) (1995a). Complex Problem Solving: The European Perspective. Hillsdale, NJ: Erlbaum. [ Google Scholar ]
  • Frensch P. A., Funke J. (1995b). “Definitions, traditions, and a general framework for understanding complex problem solving,” in Complex Problem Solving: The European Perspective , eds Frensch P. A., Funke J. (Hillsdale, NJ: Lawrence Erlbaum; ), 3–25. [ Google Scholar ]
  • Frischkorn G. T., Greiff S., Wüstenberg S. (2014). The development of complex problem solving in adolescence: a latent growth curve analysis. J. Educ. Psychol. 106 1004–1020. 10.1037/a0037114 [ CrossRef ] [ Google Scholar ]
  • Funke J. (1985). Steuerung dynamischer Systeme durch Aufbau und Anwendung subjektiver Kausalmodelle. Z. Psychol. 193 435–457. [ Google Scholar ]
  • Funke J. (1986). Komplexes Problemlösen - Bestandsaufnahme und Perspektiven [Complex Problem Solving: Survey and Perspectives]. Heidelberg: Springer. [ Google Scholar ]
  • Funke J. (1993). “Microworlds based on linear equation systems: a new approach to complex problem solving and experimental results,” in The Cognitive Psychology of Knowledge , eds Strube G., Wender K.-F. (Amsterdam: Elsevier Science Publishers; ), 313–330. [ Google Scholar ]
  • Funke J. (1995). “Experimental research on complex problem solving,” in Complex Problem Solving: The European Perspective , eds Frensch P. A., Funke J. (Hillsdale, NJ: Erlbaum; ), 243–268. [ Google Scholar ]
  • Funke J. (2010). Complex problem solving: a case for complex cognition? Cogn. Process. 11 133–142. 10.1007/s10339-009-0345-0 [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Funke J. (2012). “Complex problem solving,” in Encyclopedia of the Sciences of Learning Vol. 38 ed. Seel N. M. (Heidelberg: Springer; ), 682–685. [ Google Scholar ]
  • Funke J. (2014a). Analysis of minimal complex systems and complex problem solving require different forms of causal cognition. Front. Psychol. 5 : 739 10.3389/fpsyg.2014.00739 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Funke J. (2014b). “Problem solving: what are the important questions?,” in Proceedings of the 36th Annual Conference of the Cognitive Science Society , eds Bello P., Guarini M., McShane M., Scassellati B. (Austin, TX: Cognitive Science Society; ), 493–498. [ Google Scholar ]
  • Funke J., Fischer A., Holt D. V. (2017). When less is less: solving multiple simple problems is not complex problem solving—A comment on Greiff et al. (2015). J. Intell. 5 : 5 10.3390/jintelligence5010005 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Funke J., Fischer A., Holt D. V. (2018). “Competencies for complexity: problem solving in the 21st century,” in Assessment and Teaching of 21st Century Skills , eds Care E., Griffin P., Wilson M. (Dordrecht: Springer; ), 3. [ Google Scholar ]
  • Funke J., Greiff S. (2017). “Dynamic problem solving: multiple-item testing based on minimally complex systems,” in Competence Assessment in Education. Research, Models and Instruments , eds Leutner D., Fleischer J., Grünkorn J., Klieme E. (Heidelberg: Springer; ), 427–443. [ Google Scholar ]
  • Gobert J. D., Kim Y. J., Pedro M. A. S., Kennedy M., Betts C. G. (2015). Using educational data mining to assess students’ skills at designing and conducting experiments within a complex systems microworld. Think. Skills Creat. 18 81–90. 10.1016/j.tsc.2015.04.008 [ CrossRef ] [ Google Scholar ]
  • Goode N., Beckmann J. F. (2010). You need to know: there is a causal relationship between structural knowledge and control performance in complex problem solving tasks. Intelligence 38 345–352. 10.1016/j.intell.2010.01.001 [ CrossRef ] [ Google Scholar ]
  • Gray W. D. (2002). Simulated task environments: the role of high-fidelity simulations, scaled worlds, synthetic environments, and laboratory tasks in basic and applied cognitive research. Cogn. Sci. Q. 2 205–227. [ Google Scholar ]
  • Greiff S., Fischer A. (2013). Measuring complex problem solving: an educational application of psychological theories. J. Educ. Res. 5 38–58. [ Google Scholar ]
  • Greiff S., Fischer A., Stadler M., Wüstenberg S. (2015a). Assessing complex problem-solving skills with multiple complex systems. Think. Reason. 21 356–382. 10.1080/13546783.2014.989263 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Stadler M., Sonnleitner P., Wolff C., Martin R. (2015b). Sometimes less is more: comparing the validity of complex problem solving measures. Intelligence 50 100–113. 10.1016/j.intell.2015.02.007 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Fischer A., Wüstenberg S., Sonnleitner P., Brunner M., Martin R. (2013a). A multitrait–multimethod study of assessment instruments for complex problem solving. Intelligence 41 579–596. 10.1016/j.intell.2013.07.012 [ CrossRef ] [ Google Scholar ]
  • Greiff S., Holt D. V., Funke J. (2013b). Perspectives on problem solving in educational assessment: analytical, interactive, and collaborative problem solving. J. Problem Solv. 5 71–91. 10.7771/1932-6246.1153 [ CrossRef ] [ Google Scholar ]