Emergence of Task Formation in Organizations: Balancing Units’ Competence and Capacity

This paper studies the emergence of task formation under conditions of limited knowledge about the complexity of the problem to be solved by an organization. Task formation is a key issue in organizational theory, and its emergence is of particular interest when the complexity of the overall problem to be solved is not known in advance, for example, because an organization is newly founded or has gone through an external shock. The paper employs an agent-based simulation based on the framework of NK fitness landscapes and controls for different levels of task complexity and for different coordination modes. In the simulations, artificial organizations are observed while searching for higher levels of organizational performance via two intertwined adaptive processes: in the short term, search for superior solutions to the organizations' task and, in the mid term, learning-based adaptation of task formation. The results suggest that the emerging task formations vary with the complexity of the underlying problem and that, thereby, the balance between units' scope of competence and the organizational capacity for problem-solving is affected. For decomposable problems, task formations emerge which reflect the nature of the underlying problem; for non-decomposable structures, task formations with a broader scope of units' competence emerge. Furthermore, results indicate that, particularly for non-decomposable problems, the coordination mode employed in an organization subtly interferes with the emergence of task formation.


Introduction
A key concept in managing complex systems is the segmentation of a system into distinct subsystems. This concept is employed, for example, in the context of product design in the form of modularization (e.g., Baldwin & Clark; Ethiraj & Levinthal) as well as in the structuring of organizations by differentiation (Lawrence & Lorsch), addressing issues like division of labor, task formation, delegation and specialization. However, the segmentation into subsystems, be it into sub-modules of a product or into units (departments) within a firm, has to be accompanied by integration of the subsystems to make sure that the system as a whole accomplishes its task (Baldwin & Clark). In the context of organizational structuring, integration is, for example, associated with coordination via hierarchies, with incentives provided to managers for pursuing the overall objective of an organization, or with the sharing of norms (Lawrence & Lorsch). Across scientific domains as different as (computational) organization theory, control theory and complex systems design, to name but a few (e.g., Carley & Zhiang; Galbraith; Gross & Blasius; Lawrence & Lorsch; Yongcan et al.), it is well recognized that the performance of an organization may be subtly affected by how it balances differentiation and integration (e.g., Lawrence & Lorsch; Horling & Lesser; Lesser).
Notwithstanding the various "schools" of organizational thinking, limitations in knowledge, communication and information-processing capabilities can be regarded as essential reasons for differentiation, be it to make use of better information due to the more specialized knowledge of subordinate managers, to parallelize and thereby speed up problem-solving, or to relieve the headquarter of certain tasks and thus avoid its information overload (see, for example, Galbraith). However, differentiation also has its downside with respect to problem-solving: with a higher number of subsystems, each subsystem has a decreasing scope of competence and, hence, an even more limited perspective on the overall task to be accomplished, which raises the need for integration. Generally speaking, the number of subsystems, and the number of agents employed accordingly, affects the trade-off between the capacity for problem-solving and the need for coordination in the system (e.g., Lawrence & Lorsch; Van De Ven et al.; Yongcan et al.; Lesser & Corkill).
Against this background, finding the appropriate segmentation into subsystems is a key issue, and with this the concept of the environment comes into play. A central idea is that the structural complexity of a system is to be aligned with its environmental complexity (e.g., Galbraith; Carroll & Burton; Siggelkow & Rivkin): environmental complexity addresses the number and interrelatedness of "elements" an organization has to deal with simultaneously and, in particular, characterizes an organization's task environment (Lawrence & Lorsch; Anderson; Sorenson); structural complexity is affected by the level of an organization's differentiation into subsystems and by the mechanisms for integrating (coordinating) the subsystems in order to achieve the organization's overall objectives (Lawrence & Lorsch). In other words, according to this idea, two properties are relevant for achieving an appropriate differentiation: (1) inherent in the overall task to be accomplished is a certain task complexity in terms of interactions among the task's elements, which captures the "real" nature of the task (task environment); (2) the structural complexity, as deliberately shaped by differentiation and integration, is to be aligned to this task environment.

For this alignment, an obvious idea is to structure an organization into distinct subsystems such that it mirrors the underlying "real" structure of the task, i.e., the interactions among its elements (Baldwin & Clark; Henderson & Clark). However, the underlying "real" task does not have to be fully decomposable in the sense that there are merely subtasks with dense intra-subtask, but no cross-subtask, interactions (Simon; Baldwin & Clark). Hence, without full decomposability, the straightforward idea of mapping the task's structure to a corresponding, i.e., "isomorphic", structure of subtasks, each of which is distinctly assigned to one unit within the organization, becomes more intricate: choices made by one subsystem related to its subtask may then (detrimentally) affect the performance of other subsystems related to their particular subtasks (Wood; Baldwin & Clark; Ethiraj & Levinthal).
It has been argued that task complexity and its relation to structural complexity require further research (e.g., Haerem et al.; Lesser & Corkill). Among the issues raised is that task complexity does not necessarily need to be stable over time and that task complexity may not in all cases shape the communication patterns required for integration accordingly. In this context, an interesting question is whether, for shaping the appropriate organizational structure, the task complexity in terms of the "real" interaction structure among the task's elements is known in advance to the designer of an organization. For example, after a firm has gone through some external shock which has affected the firm's task environment, there might be rather limited knowledge of the "real" task complexity. Hence, an interesting question is which differentiation emerges when an organization, in the search for superior performance for a given, though not perfectly known, task, learns and adjusts its structure and thus self-adaptively balances the trade-off between its subsystems' capacity and competencies for problem-solving.

This paper seeks to contribute to this particular field of research and studies situations where the complexity of the underlying problem to be solved is not known to the designer of the organization in advance. In particular, the paper deals with the question of which task formations in the organizational structure emerge for different levels of task complexity. For this, the study employs an agent-based simulation with agents having imperfect foresight and limited information-processing capabilities. The aforementioned prominent idea of isomorphism, i.e., structural complexity mirroring task complexity, leads to the conjecture that isomorphism emerges over time. Hence, in case of a decomposable "real" structure of a task, one would expect the emerging formation into subsystems to reflect this "real" structure.
Since the "real" structure of tasks may not be perfectly decomposable, it is interesting to investigate how sensitive the emerging formation into subsystems is to increasing levels of task complexity.
Given the core issues of organizational structuring, i.e., differentiation and integration, an interesting question is whether the emergence of the formation into subsystems is sensitive to the mechanisms employed for integration.
With the latter aspect, the study takes into account that the mechanisms of coordination may allow for different levels of decomposition into subsystems (e.g., with more intense coordination, a higher level of segmentation might be feasible). However, it is worth mentioning that this study focuses on task formation in terms of the decomposition of an organization's task; the study does not address the allocation of tasks to potentially heterogeneous agents, which would require dealing with the question of which tasks are assigned to which particular agents.
For the purpose of this study, an agent-based simulation model is employed which is based on the framework of NK fitness landscapes as suggested in evolutionary biology (Kauffman & Levin; Kauffman) and, since then, employed in rather different contexts in managerial science (for an overview see, for example, Wall). A major advantage of the NK model is that it allows one to systematically vary the complexity of a search problem in terms of the interdependencies among its sub-problems (Li et al.), and this feature makes it a functional basis for the research questions of this study. In the simulations, learning organizations search on NK fitness landscapes for higher levels of performance and, from time to time, may modify the task formation and, with this, the number of units established for accomplishing the respective subtasks.

The paper is organized as follows: subsequently, the simulation model is introduced; then, a short overview of the simulation experiments conducted and the respective scenarios simulated is given; finally, the results are presented and discussed.

Simulation Model
The simulation model is introduced according to the ODD protocol proposed by Grimm et al. (see also Grimm et al.; Polhill et al.). With that, after giving an "Overview", the model description reports on "Design concepts" and presents in the "Details", in particular, the modes of coordination and learning captured in the model.

The very core of this simulation study boils down to the question of which task formation, and, in consequence, which number of problem-solving agents, emerges for which interaction structure among the elements of the overall task that an organization seeks to accomplish. The simulation model depicts artificial organizations facing decision problems of different levels of complexity and makes it possible to study the task formations which emerge based on reinforcement learning. The study controls for different modes of coordination among the agents involved in decision-making.

State variables and scales
The model comprises three main building blocks: (1) the N-dimensional decision problem with the interaction structure among the N single choices d_i; (2) the decision-making agents, their tasks, and the mode of coordination; (3) learning by reinforcement about task formation.

(1) The decision problem. In the model presented here, parameter N captures binary decisions d_it with i = 1, ..., N to be made within a certain time step t. Each of the two states d_it ∈ {0, 1} contributes with C_it to the overall objective of the organization. Subsequently, the term "performance" is used to capture how far the overall objective of an organization is achieved, without fixing what this objective may be. Hence, the term "performance" could refer to "classical" financial objectives (e.g., profit) or, for example, to ecological objectives. Contributions C_it are randomly drawn from a uniform distribution with 0 ≤ C_it ≤ 1, though controlled by the complexity of the underlying search problem as captured by parameter K, which reflects the interdependencies among the N single choices. In particular, K denotes the number of choices d_jt, j ≠ i, which also affect contribution C_it of choice d_it. In case of no interactions, K is 0; at the maximum level of interactions, K equals N − 1, meaning that every single option d_it affects the contributions of all other options and vice versa. Hence, subject to K, contribution C_it might depend not only on the single choice d_it, but also on K other choices d_jt with j ∈ {1, ..., N} and j ≠ i. The simulation experiments are conducted for an N = 12-dimensional decision problem and, thereby, according to the NK framework, in sum 5.44452 · 10^39 different interaction structures are possible. Apart from the level K of interactions among decisions, this simulation study also considers the decomposability of the overall decision problem. For this, the number of sub-groups G of decisions d_it with intense intra-group interactions and the level K_ex of cross-group interactions are helpful parameters. The particular interaction structures (characterized by parameters K, G and K_ex) which are reflected in the simulation experiments are introduced in the section on scenarios. For now, to give an example of an overall problem which could be decomposed into distinct problems, think of an organization like a holding company with very different investments (e.g., a power plant and baby food) without notable interactions among them. In contrast, a car manufacturer faces rather complex decisional problems in terms of interactions among decisions: for example, when developing a new engine, it is relevant that the engine can be used in different models, while other design options as well as the positioning in the market are affected by the engines used and, conversely, affect the features of an appropriate engine.
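As a concrete illustration, the contribution logic of the NK framework described above can be sketched as follows. This is a minimal sketch: the assignment of which K other decisions affect each contribution (here, the K cyclically following ones) is a simplifying assumption, since the paper controls this structure via G and K_ex, and all function names are illustrative.

```python
import itertools
import random

def make_nk_landscape(n, k, seed=42):
    # For each decision i, pick the k other decisions that co-determine
    # its contribution C_i (here: the k cyclically following ones, a
    # simplifying assumption for illustration).
    deps = [[(i + j) % n for j in range(k + 1)] for i in range(n)]
    # Each contribution is a lookup table over the 2^(k+1) relevant
    # states, drawn uniformly from [0, 1] as in the NK framework.
    rng = random.Random(seed)
    tables = [
        {bits: rng.random() for bits in itertools.product((0, 1), repeat=k + 1)}
        for _ in range(n)
    ]
    return deps, tables

def performance(d, deps, tables):
    # Overall performance: normalized sum of the N contributions.
    return sum(tables[i][tuple(d[j] for j in deps[i])] for i in range(len(d))) / len(d)

deps, tables = make_nk_landscape(n=12, k=3)
v = performance([0] * 12, deps, tables)
```

With k = 0 each contribution depends only on the decision itself, so single choices can be optimized independently; with k = n − 1 every flip re-draws all contributions, which produces the rugged landscapes the paper exploits.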
(2) Tasks, agents and the mode of coordination. The model comprises organizations with two types of agents: the headquarter and the units in terms of departments. All units reside at the same hierarchical level directly below the headquarter, and each unit has a head (manager) who is in primary control of decision-making. However, the model does not distinguish between a unit and its head and, hence, subsequently, only the units are addressed. Whether the headquarter intervenes in the decision-making on the N-dimensional decision problem (or is confined to, for example, just observing the level of performance achieved) depends on the mode of coordination employed (see the section on "Submodels"); in any case, the headquarter is in charge of deciding on the task formation. The N-dimensional search problem is (endogenously) partitioned into M_t disjoint partial problems d_rt of equal size N*_t = N/M_t binary choices, and each subtask is exclusively assigned to one unit r ∈ {1, ..., M_t}. Hence, at time step t, each unit r has primary control over a subset of N*_t single choices of the N choices.
Apart from the choices d_it assigned to them, the units are also characterized by a certain level of competence, captured by σ_r, which reflects how precisely a unit is capable of assessing the consequences of its decisions ex ante. In particular, distortions from the true consequences of an option are depicted by adding a relative error (for other functions see Levitan & Kauffman). The respective error terms follow a Gaussian distribution N(0; σ) with expected value 0, and the standard deviations σ_r are assumed to be the same for each unit r; errors are assumed to be independent from each other. Hence, each unit r has a distinct view of the "true" performance landscape and, thus, units are heterogeneous in this respect, although for simplicity's sake they operate with the same error levels (Wall). In the simulation experiments, the error level is set to a fixed value for which there is some empirical evidence that it is a realistic estimation of the error level of decision-making in organizations (Tee et al.; Redman).
The simulation experiments are carried out for different modes of coordination (coord) among the units, which are introduced within the section on "Submodels". In one of these modes, the headquarter intervenes in decision-making. In this case, the headquarter's precision of assessment, σ_head, is relevant too. In order to reflect the difference between the more specialized competence on the departmental side (as captured by σ_r) and the broader, though less specialized, competence of the headquarter, σ_head is set to a higher error level.

(3) Reinforcement learning. The artificial organizations are endowed with learning capabilities captured by a simple form of reinforcement learning. Within the entire observation period T, in every T*-th period, the headquarter learns about the success of the current configuration M_t of subtasks. Details on the form of reinforcement learning implemented in this study are introduced in the section on "Submodels".

Process overview and scheduling
The key structure of the simulation is captured by two loops which differ in their temporal horizon: in the short term, in each time step t, the artificial organizations search for superior solutions to their N-dimensional decision problem, where the overall problem is segmented into M_t sub-problems and delegated to M_t problem-solving units accordingly. In the mid term, i.e., in each T*-th time step, the organizations evaluate the current task formation captured by M_t, learn from this evaluation via reinforcement and, eventually, choose another task formation for the next T* periods.
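The two intertwined loops can be summarized in a schematic main loop. This is only a skeleton: the function bodies are placeholders, and the value of T* is an assumed one, since the text does not fix it at this point.

```python
T, T_STAR = 500, 25  # T = 500 as in the paper; T_STAR is an assumed interval

def short_term_search(org):
    # Placeholder: each of the M_t units performs one step of its
    # hill-climbing search on its partial decision problem.
    return org

def adapt_task_formation(org):
    # Placeholder: evaluate the performance enhancement under the
    # current formation M_t, update choice probabilities by
    # reinforcement, and draw the formation for the next T* periods.
    return org

org = {"M": 3}  # illustrative initial task formation
for t in range(1, T + 1):
    org = short_term_search(org)      # every period: short-term search
    if t % T_STAR == 0:               # every T*-th period: mid-term learning
        org = adapt_task_formation(org)
```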

Design Concepts
Emergence: The model is designed to study the emergence of task formation in organizations. The task formation is captured by the number M_t of tasks and, correspondingly, of units to which the tasks are assigned; in a nutshell, the emergence of the number M_t of tasks (units) is the focus of this simulation study. The emergence of the task formation results from the organizations' learning about which task formation leads to sufficient performance enhancements. Performance enhancements, in concrete terms, are achieved by the units' adaptive search for superior solutions to their particular subtasks as shaped by the task formation.

Fitness/Objectives
The overall performance level achieved by the organization as a whole with a certain configuration d_t = (d_1t, ..., d_Nt) of the overall decision problem at time step t is, in line with the NK framework (Kauffman & Levin; Kauffman), defined as the normalized sum of contributions, i.e., V_t = (1/N) · Σ_{i=1}^{N} C_it. However, each unit r = 1, ..., M_t seeks to find a superior solution for its "own" partial decision problem; in other words, the objective function of unit r is not the overall performance but the performance resulting from the particular subtask assigned to unit r, where N_rt = N/M_t for all r. Hence, unit r is focused on the performance of the partial vector d_rt which is in its own primary control. In each time step, unit r decides in favor of the option which promises the highest performance P_rt.
Interactions: The model captures two types of interactions among units. (1) Indirect interactions through interdependencies captured by levels K and K_ex: whether choices made by unit r affect the performance resulting from choices of unit q ≠ r depends on the interaction structure among the single choices d_i as reflected in the functions f_i, and on the task formation. In case "cross-task" interdependencies exist (i.e., K_ex > 0), the individual objective functions do not necessarily complement each other with respect to the overall performance, which is, in line with the NK framework, shaped by the randomly chosen C_i. (2) Direct interactions depending on the mode of coordination: the model captures different modes of how the decision-making units collaborate with each other and, in particular, of how the final choices on the M_t partitions of the overall decision problem are aligned and how the headquarter comes into play. These modes of coordination are introduced in more detail in the section on "Submodels".
Adaptation: As aforesaid, the simulation model comprises two intertwined adaptive processes with different time horizons (Figure ): (1) In the short term, the M_t units seek to find superior levels of performance with respect to their individual objectives. In every time step t, each of the M_t units seeks to find a superior solution for its "own" partial decision problem d_rt of the overall problem while assuming that the other units do not alter their prior choices. For this, unit r randomly discovers two alternatives to the status quo d_r*: an alternative configuration that differs in one of the binary choices (a1) and another (a2) where two bits are flipped compared to the status quo. With this, in time step t, unit r has three options to choose from, i.e., keeping the status quo or switching to d_r,a1 or d_r,a2. Each unit r forms its preferences over these options and prefers the option which promises the highest performance P_rt. (2) In the mid term, the organization may adapt the task formation via learning by reinforcement, which is introduced in more detail in the section on "Submodels".
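The discovery of the two alternatives to the status quo can be sketched as follows; `discover_alternatives` is a hypothetical helper name, not from the paper.

```python
import random

def discover_alternatives(status_quo, rng):
    # a1: flip one randomly chosen bit of the unit's partial vector;
    # a2: flip two distinct randomly chosen bits.
    n = len(status_quo)
    a1 = list(status_quo)
    a1[rng.randrange(n)] ^= 1
    a2 = list(status_quo)
    for idx in rng.sample(range(n), 2):
        a2[idx] ^= 1
    return a1, a2

rng = random.Random(7)
a1, a2 = discover_alternatives([0, 1, 0, 0], rng)
```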

Prediction:
The three options, the status quo d_r* and the newly discovered d_r,a1 and d_r,a2, from which each unit has to choose in time step t, are evaluated regarding the consequences for that particular unit's individual objective function. The units know the true performance P_rt(d_r*) that has been achieved with the status quo (for example, because they have been compensated accordingly). However, units are not necessarily capable of perfectly evaluating the performance P_rt of the newly discovered options a1 and a2, i.e., the units may misjudge these options' contributions to objective P_rt. These misjudgments may, for example, result from incomplete knowledge (expertise) or from limited information-processing capabilities for assessing the performance effects of the d_rt. In the simulation model, distortions are depicted by adding an error which is relative to the true performance, i.e., the perceived performance is P_rt + e · P_rt with e ~ N(0; σ_r).
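A minimal sketch of such a distorted ex ante evaluation, assuming the relative error enters multiplicatively (one plausible reading of "adding a relative error" in the text; the function name is illustrative):

```python
import random

def perceived_performance(true_perf, sigma, rng):
    # The unit perceives P + e*P with e ~ N(0, sigma), so the
    # distortion scales with the true performance level.
    return true_perf + rng.gauss(0.0, sigma) * true_perf

rng = random.Random(0)
p_hat = perceived_performance(0.5, 0.05, rng)
```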
Sensing: The agents captured in the model, i.e., the units as well as the headquarter, have incomplete knowledge in several respects:
• As is familiar in agent-based modelling, the agents are not able to overlook the entire solution space and the fitness landscape at once and, thus, are not able to identify the best solution "instantaneously"; rather, they have to search stepwise for superior solutions (Simon). The units' capabilities for search are limited to finding two alternatives (i.e., a1 and a2) to the status quo per time step.
• The headquarter's as well as the units' memory is limited: they know which configuration d*_{t−1} has been implemented in the previous period, but they do not remember which configurations were realized in periods prior to t − 1.
• When first forming their preferences in each time step t, units assume that the other units stay with the choices made in the previous period; only in the course of coordination and the, eventually, mutual adjustment of plans may they take the recent plans of other units into account. This, however, is subject to the particular mode of coordination.

Stochasticity:
The model comprises some stochastic elements:
• In line with the NK framework, the contributions C_i, though controlled by the interaction structure, are drawn from a uniform distribution over [0; 1].
• The initial task formation as well as the initial "position" of an organization in the performance landscape is chosen randomly.
• The units, not able to perform an optimization but confined to a satisficing approach employing stepwise search, discover the two alternative partial solutions (a1 and a2) compared to the status quo at random in every time step t.
• The prediction of these alternatives' consequences is imperfect, captured by a non-systematic error as described above. The same holds for the errors the headquarter is afflicted with when, depending on the mode of coordination, intervening in decision-making.
• As is familiar in reinforcement learning, the options of task formation at hand are chosen according to their particular probabilities (rather capturing propensities), and these probabilities are updated according to the success or failure of the options in the past (reinforcement).
Observation: For model analysis, in each T*-th period the task formation chosen is recorded. Additionally, the task formation in the final period (i.e., M_t=500) and the final performance (V_t=500) are observed. From this, the final performance averaged per task formation and the relative frequency of the final task formations are calculated. Moreover, for each simulation run further metrics are recorded, such as the number of changes in the task formation, the number of alterations in the configuration of the organization's decision problem, and the ratio of false positive alterations in the observation period.

Details
Initialization

After a performance landscape is generated, the initial task formation and, hence, the number of units M_t=0 is determined at random out of the set of options A with uniform probabilities, i.e., p(M_a, t = 0) = 1/|A|. Then the organizations are "thrown" randomly into the performance landscape.

Input

After initialization, the environmental conditions, i.e., the complexity of the task and the "pattern" of interactions among the single choices of the decision problem d_t as well as the contributions C_i to the overall objective, remain constant over time. Hence, there are no exogenously imposed dynamics after initialization; dynamics result merely endogenously from the adaptation and learning processes.

Submodels

With respect to the research question of the paper, in particular those submodels which are related to the coordination of the decision-making agents' preferences and to the reinforcement learning on task formation are described in more detail.

Coordination Mechanisms
The first forming of preferences by the units as described above, in principle, follows the idea of a steepest-ascent hill-climbing algorithm (Chang & Harrington), though in a decentralized manner, i.e., performed by each of the units for its respective partial decision problem. Afterwards, these once-formed preferences may be coordinated among the M_t units, and the model comprises different modes of coordination (for these and other modes see Siggelkow & Rivkin; Malone & Crowston; Martial). The "decentralized" mode, in fact, may be regarded as refraining from any coordination and, at large, corresponds to what Horling & Lesser categorize as "congregations" in terms of rather loosely collaborating agents: the units make their partial choices d_rt autonomously without "asking" other agents. Hence, each unit head is allowed to choose its most preferred option, denoted by d_r,p1, whichever it is out of the three options possible (status quo, a1 and a2). The overall configuration d_t simply results as an "assembly" of the units' first preferences on their partial decision problems.

The "sequential" mode captures what is named "sequential planning". The units make their choices sequentially where, for the sake of simplicity, the sequence is given by the index r. In particular, unit r with 2 ≤ r ≤ M_t is informed by unit r − 1 about the choices made so far, i.e., those made by the "preceding" units < r. Unit r takes these choices into account and (potentially imperfectly) re-evaluates its own options d_r*, d_r,a1 and d_r,a2, potentially resulting in adjusted preferences. Hence, the eventually "new" first preference d_r,p1 of unit r ≥ 2 is a function of the choices of the "preceding" units; obviously, only the first unit does not have to take a previous choice into account. The overall configuration d_t results from assembling the sequentially adjusted first preferences of the units on their partial decision problems.
In the "proposal" mode , each unit transfers its ordered list of preferences to the headquarter.The headquarter compiles the first preferences to a composite vector d C and assesses the overall performance it promises.However, like the units, the headquarter is not able to perfectly ex ante evaluate options (see Section .).The headquarter's ex ante evaluation of option ).The headquarter decides in favor of the composite vector d C if it promises the same or a higher performance as the status quo If the composite vector assembled from the units' first preferences does not meet the condition in Equation , the headquarter evaluates the vector composed from the units' second preferences according to Equation .
If this also does not, at least, promise the performance of the status quo, then the organizations stay with the status quo, i.e., then d t = d t−1 .
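The headquarter's decision logic in this mode can be sketched as follows; the function names and the toy evaluation are illustrative, not the paper's implementation.

```python
def proposal_mode(preference_lists, status_quo, v_status_quo, hq_estimate):
    # preference_lists[r]: unit r's ordered partial vectors (first,
    # then second preference); v_status_quo is the status quo's true
    # performance, known without error; hq_estimate stands in for the
    # headquarter's (noisy) ex ante evaluation of a composite vector.
    for rank in (0, 1):  # first preferences, then second preferences
        composite = [bit for prefs in preference_lists for bit in prefs[rank]]
        if hq_estimate(composite) >= v_status_quo:
            return composite
    return list(status_quo)  # neither composite promised enough

# illustrative estimate: mean of the bits (stands in for the noisy evaluation)
prefs = [[(1, 1), (0, 0)], [(0, 0), (1, 1)]]
chosen = proposal_mode(prefs, (0, 0, 0, 0), 0.4, lambda d: sum(d) / len(d))
```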
Reinforcement learning. To induce emergence of task formation, the model employs a simple mode of reinforcement learning based on statistical learning (for overviews see Sutton & Barto; Kaelbling et al.): a generalized form of the Bush & Mosteller model (Brenner). The probabilities of alternatives M_a from a set A = {M_1, ..., M_a, ..., M_|A|} (with 1 ≤ a ≤ |A|) of feasible task formations are updated according to the positive or negative stimuli resulting from the performance enhancement ∆V_t achieved with the current configuration M_t ∈ A. ∆V_t is the relative enhancement of overall performance obtained in the last T* periods, i.e., ∆V_t = (V_t − V_{t−T*}) / V_{t−T*}. Whether ∆V_t of task formation M_t is regarded as positive or negative depends on whether or not it at least equals an aspiration level v_t. Hence, the stimulus τ(t) is given by τ(t) = 1 if ∆V_t ≥ v_t, and τ(t) = −1 otherwise.

Let p(M_a, t) denote the probability of an alternative number of subtasks/units M_a to be chosen at time t for the next T* periods. The probabilities of options M_a ∈ A are updated according to the following rule, where λ (with 0 ≤ λ ≤ 1) reflects the reinforcement strength (Brenner):

p(M_a, t + 1) = p(M_a, t) + λ τ(t) (1 − p(M_a, t))   if M_a = M_t and τ(t) ≥ 0
p(M_a, t + 1) = p(M_a, t) + λ τ(t) p(M_a, t)   if M_a = M_t and τ(t) < 0
p(M_a, t + 1) = p(M_a, t) − λ τ(t) p(M_a, t)   if M_a ≠ M_t and τ(t) ≥ 0
p(M_a, t + 1) = p(M_a, t) − λ τ(t) p(M_t, t) p(M_a, t) / (1 − p(M_t, t))   if M_a ≠ M_t and τ(t) < 0

After the probabilities are updated, the "next" number of subtasks/units to be implemented from t + 1 to t + T* is determined randomly according to the updated probabilities.
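One standard formulation of the generalized Bush-Mosteller update, following Brenner's variant, can be sketched as follows; this is an illustration, not necessarily the paper's exact implementation.

```python
def update_probabilities(p, chosen, tau, lam):
    # p: dict mapping each feasible task formation M_a to its probability;
    # chosen: the formation used in the last T* periods; tau: stimulus
    # (positive if the aspiration level was met, negative otherwise);
    # lam: reinforcement strength with 0 <= lam <= 1.
    # Assumes p[chosen] < 1 so the redistribution below is well defined.
    q = {}
    for a, pa in p.items():
        if tau >= 0:
            # positive stimulus: move the chosen option toward 1,
            # shrink the others proportionally
            q[a] = pa + lam * tau * (1 - pa) if a == chosen else pa - lam * tau * pa
        else:
            # negative stimulus: shrink the chosen option, redistribute
            # the freed mass over the other options proportionally
            if a == chosen:
                q[a] = pa + lam * tau * pa
            else:
                q[a] = pa - lam * tau * p[chosen] * pa / (1 - p[chosen])
    return q

probs = {2: 0.25, 3: 0.25, 4: 0.25, 6: 0.25}
probs = update_probabilities(probs, chosen=4, tau=1.0, lam=0.5)
```

Both branches preserve the normalization of the distribution, so the updated values can be used directly as choice probabilities for the next T* periods.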

Scenarios and Simulation Experiments
The very core of this study is to investigate which task formation emerges for different levels of task complexity. Therefore, the study differentiates between two basic types of interaction structures (for these and other structures see Rivkin & Siggelkow). First, structures with purely "local" interactions are simulated, i.e., structures where choices within a "group" affect each other's contributions to overall performance while no interactions across the groups exist. Hence, these structures are perfectly decomposable (e.g., Rivkin & Siggelkow; Simon; for studies analyzing decomposable structures in the NK framework see, for example, Rivkin & Siggelkow; Siggelkow & Levinthal). In the field of organizational design, these structures have also been called "self-contained" (Galbraith), and they capture situations where, for example, the task of an organization is perfectly decomposable along geographical regions or along products without any interdependencies across regions or products, respectively. Obviously, these structures show a limited coordination need across the blocks (Galbraith; Malone & Crowston). Examples of symmetric decomposable interaction matrices with two and six blocks (i.e., G = 2 and G = 6) are depicted in the corresponding figures.
The second type of interaction structures simulated shows near decomposability, i.e., interaction matrices with K ex > 0 are studied, with K ex denoting the level of cross-group (or external) interactions of choices i. In an organizational context, interactions among the modules G may be caused, for example, by certain constraints of resources (e.g., budget or capacities), by market-driven interactions (e.g., the price of one product may affect the price achievable for another product) or by functional interrelations (e.g., the product design affecting the production processes) (Galbraith). The more cross-group interactions exist (i.e., the higher K ex), the more the influence matrix approaches the idea of reciprocal interdependencies as introduced by Thompson and the higher the coordination need among the choices (Malone & Crowston).

Decomposable interaction structures and coordination mode "decentralized": These experiments capture the baseline scenarios where the nature of the overall decision problem would allow task formations without coordination need among the tasks to be found. The interesting question is whether these task formations emerge.
Nearly decomposable interaction structures and coordination mode "decentralized": These experiments allow the coordination need resulting from the "real" nature of the decision problem to be raised stepwise. It is of particular interest to study to what extent the task formations that emerged for decomposable structures (see above) also emerge with increasing cross-group interactions.
Decomposable and nearly decomposable interaction structures and coordination modes "sequential" and "proposal": These experiments are conducted to study, in particular, to what extent the emergence of task formation is affected by the mode of coordination employed by an organization.
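The two types of interaction structures underlying these scenarios can be sketched as follows. The sketch assumes groups of equal size and places the K ex cross-group links randomly per choice; how the paper actually positions the external interactions is not reproduced here:

```python
import random

def influence_matrix(n=12, g=3, k_ex=0, seed=42):
    """Influence matrix sketch: full interaction within each of the g equally
    sized groups, plus k_ex random cross-group interactions per choice.
    k_ex = 0 yields a perfectly decomposable (block-diagonal) structure;
    k_ex > 0 yields a nearly decomposable one."""
    rng = random.Random(seed)
    size = n // g
    matrix = [[False] * n for _ in range(n)]
    for i in range(n):
        group = i // size
        for j in range(group * size, (group + 1) * size):
            matrix[i][j] = True          # local (intra-group) interactions
        outside = [j for j in range(n) if j // size != group]
        for j in rng.sample(outside, k_ex):
            matrix[i][j] = True          # cross-group (external) interactions
    return matrix
```

For n = 12 and g = 3, each choice then interacts with the 3 other choices of its own group plus k_ex external ones, matching the way K and K ex combine in the structures discussed above.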
The simulation experiments presented subsequently report on the results obtained for an observation time of T = 500 periods (see also Table ). Obviously, the observation time could be of critical relevance when learning-based emergence is studied: in particular, choosing too short an observation time clearly bears the risk that the learning processes do not have "enough" time to lead to some predominance of certain task formations. This is of particular relevance given that the simulations depict different levels of problem complexity: it is well known that more time is required to find superior solutions for complex problems (Rivkin & Siggelkow). Hence, in order to find an appropriate observation time, pre-tests for different levels of complexity were conducted. These tests indicated that even increasing the observation time remarkably does not change the key results. As shown exemplarily in Table (Appendix B), doubling the observation time from T = 500 to T = 1000 not only leads to the same task formations (given by M t) emerging for the respective levels of complexity, but the final performances achieved also do not differ significantly (even when the requirements for significant differences are set to a rather low level).

Results and Discussion
Emergence of task formation in decomposable interaction structures and coordination mode "decentralized". The results obtained for the different decomposable interaction structures under investigation and for situations where, in fact, no coordination is provided ("decentralized") are presented in Figures .a to .e: The simulation runs were grouped according to the number M t=500 of units out of set A (with |A| = 4, see Table ) that were employed in the final observation period. For these four setups the average of the final performance is computed and displayed in the plots, i.e., that level of V t=500 which is on average achieved with a particular number M t=500 of units in the final period. (For each decomposable interaction structure, Table in Appendix B reports on the significance of mean differences of final performance V t=500 between the four different setups of M t=500 according to Welch's method (Welch; Law).) Moreover, the plots show the relative frequency of how often, in the final period, a certain task formation, and, thus, number of units M t=500, has been selected by the organization.
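The comparison of mean final performances can be sketched with Welch's t statistic and its approximate degrees of freedom; this is a minimal standard-library sketch of the unequal-variances test, not the full multiple-comparison procedure the paper applies:

```python
from math import sqrt
from statistics import mean, variance

def welch_t(x, y):
    """Welch's t statistic and approximate (Welch-Satterthwaite) degrees of
    freedom for two samples with possibly unequal variances, as used to
    compare final performances V_t=500 between setups."""
    vx, vy = variance(x) / len(x), variance(y) / len(y)
    t = (mean(x) - mean(y)) / sqrt(vx + vy)
    df = (vx + vy) ** 2 / (vx ** 2 / (len(x) - 1) + vy ** 2 / (len(y) - 1))
    return t, df
```

The statistic would then be compared against the t distribution with df degrees of freedom to decide whether the mean difference between two setups of M t=500 is significant.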

Figure :
Frequency of setups of units and final performances obtained for different symmetric decomposable interaction structures. For each interaction structure, adaptive walks, i.e., distinct fitness landscapes with adaptive walks on each, are simulated. For parameter settings see Table .
According to these results, that particular task formation M t=500 emerges most often which corresponds best to the number of groups G with local interactions incorporated in the interaction matrix: For example, for an interaction structure with G = 3, a segmentation into M t=500 = 3 sub-tasks emerges most often (see Figure .d) and the final performance is significantly the highest for that particular configuration (see Table in Appendix B).
The corresponding findings arise for the other decomposable interaction structures studied.
An explanation may, of course, lie in the fact that the task formation (and the related number of units) affects the level of interactions across tasks/units. For example, for an interaction structure of G = 2 and K ex = 0 as in Figure .a, segmenting the N = 12-dimensional task into more sub-tasks (and employing units accordingly) would mean that some intra-group interactions are not appropriately taken into consideration by the decision-making units. Hence, the self-adaptive task formation apparently leads to setups where intra-group interactions "reside" within the sub-tasks or, in other words, are in the scope of one unit.
However, results suggest that not only task formations with zero cross-task interactions emerge; rather, it appears that the highest feasible number of sub-tasks/units without causing cross-task interactions emerges: for example, for an interaction structure with G = 6 and K ex = 0 (see Figure .b), smaller numbers of sub-tasks/units would also avoid cross-task interactions, yet the highest feasible number emerges most often. A "typical" adaptive walk for an interaction structure with G = 3 and K ex = 0 (Figure .a) makes this obvious: whenever the number of tasks (units) deviates from M t = 3, here reflecting isomorphism, performance losses occur. Performance losses are particularly high when the number of tasks (units) is higher than G. In contrast, when the isomorphic task formation with M t = 3 is implemented, performance rapidly increases to a rather high level.
Hence, according to these results, setups which maximize the number of tasks/units without causing cross-task interactions emerge most often and result in the highest final performance. This finding is obviously in line with the idea of isomorphism in terms of mirroring the "real" structure of the task in the task formation of organizations (e.g., Wood; Baldwin & Clark; Ethiraj & Levinthal) as well as with the "functionally accurate/cooperative (FA/C)" model for distributed problem solving (Lesser).
The capacity for search, which is affected by the number of decision-making units, reasonably provides the explanation for these results: In the model, it is assumed that the capacity for search per decision-making unit is limited regardless of its scope of competence, in order to capture the limited human information-processing capabilities (Simon). In particular, in each time step each unit discovers two alternatives a1 and a2 where one or two bits, respectively, are flipped compared with the status quo. Hence, for N = 12 and with six units, in principle, the entire configuration d t could be altered in one time step, whereas with two units at maximum four of the d i can be modified. Thus, the capacity for search increases with the number of units and, thereby, performance enhancements can potentially be achieved faster with more units (see also, for example, Baldwin & Clark; Ethiraj & Levinthal).
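The search step described above can be sketched as follows, assuming each unit samples its candidate flips uniformly at random within its own scope of competence (the sampling scheme is an illustrative assumption):

```python
import random

def discover_alternatives(d, scope, rng):
    """The two candidate solutions a unit evaluates per period: alternative a1
    with one bit flipped and alternative a2 with two bits flipped, both only
    within the unit's own scope of competence (a sketch of the search step)."""
    a1 = list(d)
    a1[rng.choice(scope)] ^= 1            # a1: flip a single bit
    a2 = list(d)
    for j in rng.sample(scope, 2):        # a2: flip two distinct bits
        a2[j] ^= 1
    return a1, a2
```

Since each unit can change at most two bits per period, M units can jointly alter at most 2M of the N choices in one time step, which is exactly the capacity argument made above.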
Hence, a trade-off between the capacity for search and the scope of competence reasonably occurs which, for decomposable interaction structures, apparently results in a task formation with as many sub-tasks/units as possible without causing interactions between tasks/units.

Emergence of task formation in nearly decomposable interaction structures with coordination mode "decentralized"
The next step of this simulation study is to investigate the emerging task formations for nearly decomposable tasks, i.e., for situations where no segmentation into sub-tasks is feasible without causing cross-task interactions (i.e., K ex > 0). In particular, this section introduces the results obtained from simulation experiments for the structures indicated in Figure for levels of K ex ∈ {1, 2, 3} and for "decentralized" coordination.
For these experiments the same analyses as introduced in the preceding section are carried out. However, a more condensed presentation is now appropriate. In particular, Figure picks up the presentation of Figure and in each cell, apart from the characterization of the interaction structure by G and K ex, gives the number M t=500 of sub-tasks/units which emerges most often (large bold letters in red) together with the highest final performance V t=500 (numbers in blue). When significance at a . level of the best performing task formation according to the method of Welch; Law is not reached, the task formation with the second-highest final performance is displayed in brackets. Figure exemplarily plots the relative frequencies of the different feasible numbers of sub-tasks/units M t ∈ A over the observation time for an interaction structure with G = 3 and different levels of cross-group interactions K ex ∈ {0, 1, 2, 3}. The results indicate that increasing the level of cross-group interactions from K ex = 0 to K ex = 1 apparently does not affect the predominantly emerging task formation as given by the number M t=500 of sub-tasks or units employed in the organization. However, as can be seen from comparing Figure .a to .b, the predominance of M t=500 = 3 over M t=500 = 2 decreases when K ex is raised to 1.
With K ex ≥ 2, things change: the results indicate that the number of sub-tasks and units M t=500, respectively, employed in the organization is then reduced to a considerably lower level. Obviously, by segmenting the overall task into two sub-tasks only, the scope of competence of the decision-making units is raised to a maximum and cross-group interactions are, at best, comprised within the scope of the units' competences. However, this comes at the cost of search capacity (recall that with two units at maximum four of the N = 12 choices i of the search problem can be altered in one time step).
To illustrate these effects in more detail, Figure .b displays a "typical" adaptive walk for G = 3 and K ex = 3: Compared to the case of no cross-group interactions (K ex = 0) in Figure .a, the oscillations of performance are obviously much stronger. This is caused by the fact that the choices of one unit may now considerably affect the performance obtained by the choices of other units and, thus, induce the other units to alter the solutions to their respective partial problems. In this interaction structure, a formation with only two tasks (units) apparently provides the best results and, whenever this task formation is implemented, performance increases and fluctuations decrease, though remaining at a notably high level compared to the decomposable structure.
Hence, the results indicate that with increasing complexity of the search problem the balance between search capacity and scope of competence shifts towards broader competence and lower search capacity (and vice versa).
Reducing cross-task interactions by shaping rather broad, but few, sub-tasks obviously has the advantage that the related decision-making units consider more spillover effects among the N single decisions when making their choices on their partial decision problem. However, it is worth mentioning that, at a given level of N, broader, though fewer, sub-tasks may have another positive effect which is related to the imperfect sensing of decision makers incorporated in the model: Among the informational imperfections captured in the model is that, when first forming their preferences, the decision-making units assume that the other units' managers will stay with their prior choice (see Section ). Hence, the broader the "own" scope of competence (i.e., the higher N * = N/M t), the lower is the potential "error" that each of the M t decision-making units makes regarding the anticipation of the other units' choices: From each unit manager's perspective, the number of those particular choices which are assumed to remain unchanged is given by N − N *. In other words, when forming preferences each decision-maker assumes a ratio s t = 1 − 1/M t of the N-dimensional decision problem to be kept stable. Hence, the fewer sub-tasks M t are formed, the lower is the potential "error" made due to assuming that the fellow units will stay with their prior choices. However, the peril of misjudgments is subject to cross-task interactions: In case the other units' choices affect the performance achieved by unit r, it becomes more likely that a particular unit runs into the danger of choosing an option that turns out considerably less beneficial than expected. Hence, with increasing K and K ex, imperfect expectations regarding the fellow units' behavior become more relevant. From this, the obvious question arises whether the emerging task formations may differ with more intense coordination mechanisms, which also incorporate more communication among units; this is analyzed in the next section.
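The stable-share ratio s t discussed above can be computed with a small helper, assuming N is divisible by M t as in the model:

```python
def assumed_stable_share(n, m):
    """Share s_t = 1 - 1/M_t of the n-dimensional decision problem that each
    unit assumes to stay unchanged when forming preferences (from the text)."""
    n_star = n // m                # one unit's scope of competence, N* = N/M_t
    return (n - n_star) / n        # equals 1 - 1/m when n is divisible by m
```

For N = 12, two units each assume half of the problem to be fixed (s = 0.5), while four units each assume three quarters fixed (s = 0.75), so finer segmentation enlarges the part of the problem a unit may mis-anticipate.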

Emergence of task formation with coordination modes "sequential" and "proposal"
The third part of the simulation experiments studies the task formation emerging under the regime of the "sequential" and the "proposal" coordination mode, respectively. Both mechanisms comprise the communication of units' preferences to another decision-making agent (another unit or the headquarter, respectively) and a second evaluation of options (based on more information than available in the formation of preferences). Figures and report on the condensed results regarding the emerging task formations in the format introduced in the preceding section.
In a rather broad sense, for both coordination mechanisms the results are in line with those obtained for the "decentralized" mode, i.e., in fact, without coordination: For decomposable interaction structures (i.e., for K ex = 0), those task formations emerge most often, and show the highest final performance, which reflect the "real" nature of the overall task of the organization. As in the "decentralized" mode, these task formations also emerge when the level of cross-group interactions is increased from K ex = 0 to K ex = 1, while, with K ex ≥ 2, the emerging task formations differ, i.e., they show lower levels of M t.
Figure : Sequential Mode: Setups indicated by number M t=500 of tasks/units (red) as predominantly emerged in the final observation period and final performance (blue) obtained for decomposable and nearly decomposable interaction structures. Numbers in brackets indicate second-best setups when significance at a . level for the setup with the highest final performance, reported in the blue numbers, is not given. For each interaction structure, adaptive walks, i.e., distinct fitness landscapes with adaptive walks on each, are simulated. For parameter settings see Table .

Figure : Proposal Mode: Setups indicated by number M t=500 of tasks/units (red) as predominantly emerged in the final observation period and final performance (blue) obtained for decomposable and nearly decomposable interaction structures. Numbers in brackets indicate second-best setups when significance at a . level for the setup with the highest final performance, reported in the blue numbers, is not given. For each interaction structure, adaptive walks, i.e., distinct fitness landscapes with adaptive walks on each, are simulated. For parameter settings see Table .

However, some aspects deserve closer attention: First of all, comparing the three coordination modes with respect to the emerging number M t=500 of sub-tasks/units, it is worth mentioning that, with the "sequential" as well as the "proposal" mode, for K ex ≥ 2 the emerging task formations are less predominant in terms of significance than in the decentralized mode (indicated in Figures and by formations in brackets which have no significantly lower final performances).

A second aspect to be mentioned is related to the number M t=500 of tasks/units: In the "proposal" mode, where hierarchy comes into play, the task formations emerging for K ex ≥ 2 show a remarkably higher segmentation than obtained in the other coordination modes. Apparently, the headquarter's evaluation of options with respect to the overall organization's objective serves as a kind of substitute for incorporating cross-group interactions within sub-tasks. This finding is obviously in line with organizational theory emphasizing the tension between differentiation and integration (Lawrence & Lorsch; for further references Wall).
Third, comparing the three coordination mechanisms against each other with respect to the final performance V t=500 suggests that for the case of perfectly decomposable problem structures (i.e., K ex = 0) the "decentralized" mode leads to the highest performance while the "proposal" mode results in the lowest performance (the performance differences are significant at a . level, see Table in Appendix B). In contrast, with K ex ≥ 2 the "order" among coordination modes is the opposite: apparently, for a given level of K, the emerging task formations result in the highest performances under the "proposal" mode, the "sequential" mode yields the second-highest performances and the "decentralized" mode performs worst (in the vast majority of cases the performance differences are significant, see Table in Appendix B). With K ex = 1, the three coordination modes provide final performances which do not differ significantly from each other.
Hence, an interesting question is what may cause these results. I argue that the coordination mode subtly interferes with the balance between the capacity of problem-solving and the scope of competencies, both shaped by the number of units M t: As already mentioned, the capacity of problem-solving increases with the number of units M t and, in the model, each decision-making unit may change, at maximum, two of the single decisions d i delegated to that unit. With this, the ratio of decisions (bits) d it in the N-dimensional decision problem that might be changed in each time step increases in M and is given by 2M/N. When forming preferences in the "decentralized" mode, unit r assumes that all the fellow units stay with the status quo and, with this, unit r may, at maximum, be wrong by a ratio 2(M − 1)/N of the overall decision problem. In the "sequential" mode, the error due to imperfect anticipation of the fellow units' intentions is on average only half as high and is given by (M − 1)/N (recall that here, when making its decision, unit r knows which choices units 1 to r − 1 have opted for).
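The two anticipation-error ratios can be summarized in a small sketch (the function name and interface are illustrative, the ratios themselves are taken from the text):

```python
def max_anticipation_error(n, m, mode):
    """Maximum share of the n-bit decision problem that a unit may
    mis-anticipate per period, given that each of the m units can alter
    at most two bits (ratios as derived in the text)."""
    if mode == "decentralized":
        return 2 * (m - 1) / n   # all m - 1 fellow units assumed static
    if mode == "sequential":
        return (m - 1) / n       # on average, the prior units' choices are known
    raise ValueError(f"unknown mode: {mode}")
```

For N = 12 and M = 4, the maximum error ratio is 0.5 in the "decentralized" mode but only 0.25 on average in the "sequential" mode, which is why sequential coordination sustains a finer segmentation at the same anticipation risk.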

Hence, the "sequential" mode allows the problem-solving capacity as given by M to be increased while the error made by the imperfect anticipation of the other units' intentions on average increases only half as much as in the "decentralized" mode. This explanation is broadly confirmed by the number of periods with false positive choices: For example, for K ex = 2, this ratio is some percentage points lower with the "sequential" mode than with the "decentralized" mode of coordination. However, the (imperfect) anticipation of the other units' intentions is obviously the more relevant, the higher the level of interactions K and cross-group interactions K ex. This may, in particular, explain the differences in the number of units and the final performances between the decentralized and the sequential mode for higher levels of K ex.

In the "proposal" mode, the balance between sub-systems' capacity of problem-solving and sub-systems' scope of competencies is shifted towards the former even more than with sequential planning, and I argue that this results from a switch in the objective function relevant for the final choices combined with the headquarter's knowledge of all units' preferences: In particular, in the proposal mode the headquarter performs a second evaluation of the options favored by the units, and this, though based on the headquarter's rather coarse knowledge (captured in the model by a fairly high level of noise), with respect to the overall objective. In contrast, in the other coordination modes the evaluations and final choices are based on the sub-units' objective functions, and, obviously, this kind of "ignorance" is the more relevant, the higher the levels of K and K ex.
These arguments are broadly supported by the ratio of periods with false positive choices: for example, for K ex = 2, in the "proposal" mode false positive alterations of d t occur in approximately percent of the periods while occurring in about percent of the periods in the "sequential" mode. Moreover, this explanation is in line with the argumentation put forward in the seminal paper of Sah & Stiglitz, which shows that false positive choices are considerably reduced by employing hierarchical structures in decision-making. Hence, even for decision problems with higher complexity, integration via hierarchy, as captured in the proposal mode, apparently allows a higher problem-solving capacity, due to high levels of differentiation, to be exploited.

Conclusion
This study analyses the emergence of task formation under conditions of limited knowledge about the "real" structure of the problem to be solved by an organization and when further limitations regarding the information-processing capabilities of decision-making agents apply.
The results indicate that, under these conditions and for decomposable problems, task formations emerge which follow the idea of isomorphism, i.e., task formations mirroring the fragments of the overall problem, as is, for example, recommended in the field of the design of complex systems as well as proposed in organizational theory (Baldwin & Clark; Ethiraj & Levinthal; Galbraith). The results suggest that the emergence of "isomorphic" task formations is rather robust against the mode of coordination, may this be of a polyarchic or a hierarchical nature (Sah & Stiglitz). Moreover, this emergence of task formations appears rather robust against a low level of interactions across the fragments of the "real" problem of an organization: With only a few interactions, the same task formations emerge as for perfectly decomposable structures, regardless of the mode of coordination employed.
With higher levels of interactions among segments, a clear shift results towards task formations with a lower number of agents, or broader scopes of agents' competencies, and, with that, a lower capacity for problem-solving. Obviously, interactions among segments of the "real" problem are thereby incorporated into the tasks. This tendency shows up for all coordination modes under investigation. However, with hierarchical coordination, which provides the most intense integration among tasks under investigation, higher levels of segmentation (i.e., smaller, but more tasks) emerge and, with this, more parallelized problem-solving capacity is affordable. Hence, more intense coordination serves as a substitute for low segmentation, which is in line with the well-known tension between differentiation and integration in organizational theory (Lawrence & Lorsch).
In sum, the results suggest that with increasing task complexity the balance between problem-solving capacity and competence shifts in favor of broader competence and lower capacity (and vice versa). However, the coordination mode appears to affect this balance too: with more intense coordination, more segmentation providing more parallel problem-solving is affordable.
It is worth mentioning that these findings are derived from a model with agents (i.e., only locally optimizing units as well as the headquarter) which follow very simple, not to say mechanistic, decision and learning rules and which have limited information on the entire solution space and a rather limited memory. Hence, on the one hand, one may argue that these assumptions could be too restrictive with respect to, for example, the spectrum of capabilities of organizations. On the other hand, the fact that the shown emergence results from rather simple rules, local optimization and limited information may put top-down approaches in management into some perspective. At the same time, the findings of this study indicate that the nature of an organization's overall task as well as the regime in terms of the intensity of coordination shape the emerging task formation.
In this sense, this study may contribute to the field of organizational design in several aspects.
From a more practical point of view, the results are particularly relevant for situations when the organizational structure is designed without being perfectly informed about the nature of the overall problem to be solved. For example, the organization may be newly established or the organization's task environment may have gone through an external shock, e.g., due to disruptive technological innovations. In this respect, the results may be regarded as an affirmation to allow for learning-based adaptation of the organizational structure.
From a more theoretical perspective, the findings, for example, let us conjecture that firms in complex task environments may show a relatively lower number of business units or may make use of more intense coordination modes; firms which are diversified according to products or regions are expected to reflect this in their formation of units. In this sense, the study may be of interest for further empirical research on the relation of organizational complexity and governance structure as, for example, in ( ). While the research questions of the study introduced here aim at findings related to this more organizational level, as an agent-based simulation it builds on behaviors and interactions related to the individual level of decision-makers, like, for example, imperfect ex-ante assessments of options or coordinative behavior. However, apart from the simple decision rules captured in this paper's model, the agents' behavior in the course of coordination is, for example, not affected by the task complexity (Liu & Li). It could be of particular interest to connect the modelling of this study with laboratory experimental research on task complexity and coordination (e.g., Ho & Weigelt) in order to test and refine the model.
Further extensions of this study appear promising. In particular, it is to be mentioned that the results are obtained for binary decision problems as captured in the NK framework. This has obvious advantages with respect to controlling the level of task complexity; however, apart from the structure of interactions, it provides rather little structure with respect to the (randomized) contributions to performance; hence, in further research, learning-based adaptation of task formation may turn out even more beneficial in the case of more structured problems. Moreover, the simulation experiments introduced here do not consider costs of coordination: more intense coordination usually does not come without costs (e.g., for communication). Hence, an obvious extension is to analyze self-adaptive task formations for different modes of coordination including their costs too. With respect to the form of learning employed to induce emergence, it would be interesting to investigate whether other models of learning, like, for example, belief learning or melioration learning (Brenner; Zschache), lead to similar results.
Moreover, in the research effort presented here, the decision-making agents, though having distinct tasks and different information, are homogeneous in several aspects, for example, with respect to their information-processing capabilities. Hence, a natural extension would be to investigate systems with more heterogeneous agents (for example, with respect to learning and problem-solving capabilities). This would also imply considering aspects of task allocation in terms of answering the question of which task should be assigned to which particular agent. This question of task allocation, which was excluded from the study presented here, could lead to further insights bridging to the domain of human resource management.
Appendix A: Flow chart

Appendix B: Tables

Table : Mean differences and confidence intervals at the (relatively low) . level for all pairwise comparisons of final performances V t=500 against V t=1000 obtained for search processes with self-adaptively determined numbers M t=1000 and M t=500 of tasks/units for interaction structures with G = 3 in the decentralized mode; numbers in brackets indicate second-best setups when significance for the best performing task formation is not given; see also Figure ; n indicates a non-significant difference.

Table : Mean differences and confidence intervals at a . level for all pairwise comparisons of final performance V t=500 obtained with the best performing task formation as given by number M t=500 of tasks/units across coordination modes; * indicates a significant difference.

Notes
With N = 12, an interaction matrix has 144 entries, of which the 12 on the main diagonal are always set to "X" (see Figure ). Hence, 132 other elements remain, each of which could be filled or not, and, since the level of K does not necessarily need to be the same for every single decision i, with N = 12 in sum 2^132 = 5.44452 · 10^39 different interaction structures are possible. For a further discussion of different interaction structures see Rivkin & Siggelkow.
Obviously, with N ∈ N this requires that N is divisible by M t without remainder.With N = 12 and if, for example, a number of M t = 3 subtasks has emerged at time step t, then each subtask comprises N * = 4 single choices.
With this, the N-dimensional decision problem d t = (d 1t, ..., d N t) can also be expressed by the combination of partial decision problems as d t = (d 1 t, ..., d r t, ..., d Mt t), with each unit's choices related only to its own partial decision problem d r t. The final performance V t=500 is given in relation to the global maximum of the respective performance landscape: otherwise the results could not be compared across different performance landscapes.
More "technical" documentations on the framework of NK fitness landscapes including aspects of implementation can be found in Altenberg ( ) and Li et al. ( ).
The aspiration level v t is modified according to the performance experience and, in particular, is captured as an exponentially weighted moving average of past performance where b denotes the speed of adjustment. For the probabilities p(M a, t) it holds that 0 ≤ p(M a, t) ≤ 1 and that the sum of p(M a, t) over all M a ∈ A equals 1.
In the simulations, for the sake of simplicity, the level of K ex is assumed to be the same for each contribution C i , i = (1, ..., N ).
Recall that, according to the NK framework, K captures the number of choices j ≠ i which affect the contribution C i of choice i to overall performance V.
With each unit's scope of competence at time step t being N * = N/M t and the "residual" part of the decision problem in the competence of other unit managers being N − N *, the relative size s t of the residual part from each unit's perspective results from s t = (N − N/M t)/N = 1 − 1/M t.

For example, in their seminal paper Lawrence & Lorsch state that "differentiation and integration are essentially antagonistic, and that one can be obtained only at the expense of the other" (p. ).
In particular, unit r knows the choices of the "prior" units in the sequence but assumes that the "subsequent" units will stay with the status quo. With this, unit r is, at maximum, wrong by a ratio of 2(M − r)/N and the average unit's maximum ratio of error is (M − 1)/N.

In particular, without further institutional arrangements like appropriate incentive schemes related to firm performance (Wall), each sub-unit focuses on the "own" performance (see Equations and ) and not on the overall performance (see Equation ).
Figure depicts the principal processual structure of the simulation model, and Figure in Appendix A illustrates the processes of the model in more detail in a flow chart.

Figure : Structure of the simulation model

The simulation experiments are conducted for five different symmetric decomposable structures with a number of , , , and (as a borderline case of K = 0) groups G of equal size where only local interactions occur.
Figure .c depicts a situation with G = 2 and K_ex = 1; with N = 12, this situation results in K = 6 according to the NK framework; a structure with G = 6 and K_ex = 2 (and, with this, K = 3) is displayed in Figure .d.

Figure summarizes the influence matrices for which simulation experiments are presented in this study. For every interaction structure in combination with each mode of coordination analyzed, simulations are run, i.e., for each combination, runs on distinct landscapes.

Figure : Examples of decomposable and nearly decomposable interaction structures

b) even with M = 2 and M = 3 sub-tasks/units no cross-task interactions would show up; however, according to Figure .b, a setup with tasks/units emerges most often and shows the highest final performance. The same applies to G = 4 and K_ex = 0 where, according to Figure .c, not setups with , but with sub-tasks/units emerge most often and (significantly) show the highest final performance. For an illustration in more detail, Figure .a plots a "typical" single adaptive walk for an interaction structure

Figure : Exemplary adaptive walks for a decomposable and a nearly decomposable interaction structure.

Figure : Decentralized mode: setups indicated by the number M_t=500 of tasks/units (red) as predominantly emerged in the final observation period, and final performance (blue) obtained for decomposable and nearly decomposable interaction structures. Numbers in brackets indicate second-best setups when significance at a . level for the setup with the highest final performance, reported in the blue numbers, is not given. For each interaction structure, adaptive walks, i.e., distinct fitness landscapes with adaptive walks on each, are simulated. For parameter settings see Table .

Figure : Example of emergence of task formation for decomposable and nearly decomposable interaction structures with G = 3. For each interaction structure, adaptive walks, i.e., distinct fitness landscapes with adaptive walks on each, are simulated. For parameter settings see Table .
Table summarizes the related variables and parameters.
p(M_a, t)   Probability of option M_a of the number of sub-tasks, or units respectively, to be chosen at time step t; 0 ≤ p(M_a, t) ≤ 1
v_0         Aspiration level in time step t = 0
b           Speed of adjustment of v_t
λ           Learning strength
Mean differences and confidence intervals at a . level for all pairwise comparisons of final performance V_t=500 obtained for search processes with self-adaptively determined numbers M_t=500 of tasks/units in decomposable decision problems; * indicates a significant difference.